Social bots are automated accounts that use artificial intelligence to steer discussions and promote specific ideas or products on social media platforms such as Twitter and Facebook.1 To typical users browsing their feeds, social bots may go unnoticed because they are designed to look like human accounts (e.g., displaying a profile photo and listing a name or location) and to behave like them online (e.g., “retweeting” or quoting others’ posts and “liking” or endorsing others’ tweets).
Social bots have been studied by computer scientists for years, but they have only recently drawn broader public attention as policymakers scrutinize social media practices more generally. In that context, researchers discovered that a significant fraction of the political tweets posted before the 2016 US presidential election came from social bots and that those tweets were retweeted at a rate similar to that of human-generated ones.2 Although it is now known that social bots have been used to automate online political campaigns, their prevalence and influence in the health domain are largely unknown.
FEW STUDIES CONDUCTED
At present, only a handful of studies on social bots appear in PubMed. One example is a 2017 study that compiled a corpus of 2.2 million Twitter posts and characterized how social bots promote electronic cigarettes.3 To identify tweets posted by bots, the authors applied state-of-the-art bot detection techniques and found that social bots were twice as likely as human users to suggest that electronic cigarettes aid smoking cessation, a claim not definitively supported by empirical evidence.3 The study also showed that social bots were twice as likely as humans to promote newly introduced electronic cigarette devices and accessories.3
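As a rough illustration of how such an analysis can be structured (not the actual pipeline used in the cited study), the sketch below assumes each tweet already carries a bot-likelihood score from a detector such as Botometer; the field names, the 0.5 score threshold, and the cessation keyword list are illustrative assumptions.

```python
# Illustrative sketch only: "bot_score", the 0.5 threshold, and the keyword
# list are assumptions for demonstration, not the cited study's method.
from dataclasses import dataclass

CESSATION_KEYWORDS = ("quit smoking", "stop smoking", "smoking cessation")
BOT_SCORE_THRESHOLD = 0.5  # assumed cutoff; detectors typically output a 0-1 likelihood


@dataclass
class Tweet:
    text: str
    bot_score: float  # e.g., from a bot detector such as Botometer


def mentions_cessation(text: str) -> bool:
    lowered = text.lower()
    return any(keyword in lowered for keyword in CESSATION_KEYWORDS)


def cessation_rates(tweets: list[Tweet]) -> dict[str, float]:
    """Share of bot- vs. human-attributed tweets framing e-cigarettes as cessation aids."""
    counts = {"bot": [0, 0], "human": [0, 0]}  # [cessation mentions, total tweets]
    for tweet in tweets:
        group = "bot" if tweet.bot_score >= BOT_SCORE_THRESHOLD else "human"
        counts[group][1] += 1
        if mentions_cessation(tweet.text):
            counts[group][0] += 1
    return {g: (hits / total if total else 0.0) for g, (hits, total) in counts.items()}


if __name__ == "__main__":
    sample = [
        Tweet("Vaping helped me quit smoking for good!", bot_score=0.9),
        Tweet("New mod arrived today, love the flavor", bot_score=0.1),
        Tweet("E-cigs are the easiest way to stop smoking", bot_score=0.8),
    ]
    print(cessation_rates(sample))
```

In practice, simple keyword matching would be replaced by more robust content classification, and bot attribution would rely on a dedicated detection service rather than a single fixed cutoff.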
These findings suggest that social bots have been designed to purposely push a particular narrative depicting electronic cigarettes in positive terms. The larger implications for public health, however, remain to be examined. Do users who are exposed to messages from social bots experience changes in their attitudes or offline behaviors? If so, can we design effective countermeasures or intervention strategies to mitigate the influence of bots on public health? Also, given that research has shown that people perceive information from social bots and human accounts similarly,4 what are the ethical implications of using social bots to disseminate information on public health? Several pressing questions regarding the role and effects of social bots in the promotion of other products or behaviors with health consequences are still unanswered.
POSSIBLE THREATS TO HEALTH
The prominence of social media in health-related decision-making continues to grow as people turn to online platforms for advice from peers and experts.5 In this context, social bots are poised to steer online discussions with inaccurate health claims. They could be deployed by parties with clear financial stakes to promote products such as tobacco, supplements, diet plans, and medications, as well as by those with ideological positions for or against specific health decisions.
For example, anti-vaccination activists have by and large exploited social media as a megaphone for scientific disinformation and unverified claims.6 Evidence suggests that people turn to the Internet for information about the potential side effects and consequences of vaccines, and that social media can affect parents’ decisions about vaccinating their children.6 The Defense Advanced Research Projects Agency recently conceptualized how anti-vaccination activists could magnify their message’s reach by deploying social bots to inundate Twitter with tailored narratives intended to drive health-related decision-making.7 More work is needed to determine whether anti-vaccination campaigns contributed to the recent measles outbreaks in California and Texas.
As research has shown, the social bots deployed to discuss a particular topic can number in the hundreds of thousands,1 with the potential to drown out medically sound messages from health experts or public health campaigns. The sheer number of bots and the frequency with which they produce and disseminate content can make a behavior appear more prevalent online than it is offline, normalizing unhealthy or medically unsound decisions. Social bots may also inflate the perceived severity of a health issue and stoke public panic, contributing to the spread of rumors and unverified information, as occurred during the recent Ebola and Zika virus outbreaks.
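To make the volume effect concrete, a back-of-the-envelope calculation (the account counts and posting rates below are purely illustrative assumptions, not figures from the cited research) shows how a relatively small pool of high-frequency bot accounts can generate a disproportionate share of the messages on a topic.

```python
# Illustrative arithmetic only: all counts and posting rates are assumed
# for demonstration and do not come from the cited studies.
def bot_share_of_posts(n_bots: int, bot_posts_per_day: float,
                       n_humans: int, human_posts_per_day: float) -> float:
    """Fraction of a topic's daily posts produced by bot accounts."""
    bot_volume = n_bots * bot_posts_per_day
    human_volume = n_humans * human_posts_per_day
    return bot_volume / (bot_volume + human_volume)


# 5,000 bots posting 40 times per day vs. 200,000 humans posting once per day:
# bots make up about 2.4% of accounts yet produce 50% of the content.
print(f"{bot_share_of_posts(5_000, 40, 200_000, 1):.0%}")
```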
ACTION REQUIRED
Creating social bots does not require the skills of a highly trained software engineer: online forums and open-source code repositories now provide user-friendly instructions and free software for deploying them. In the wake of growing concern over the number of social bots populating social media, California state legislators are demanding policies that would require such accounts to be readily identifiable and linked to a human user (a feat not easily accomplished). Although social bots are now on the legislative agenda in California, implementing an effective regulatory solution will probably take time. In addition, any legislation aimed at regulating social media companies in California will likely be contested by well-financed corporations reluctant to have their practices policed by the state in which they are headquartered.
Until policies regulating social bots are implemented, the public health community has an important role to play in countering inaccurate or unverified health claims and the promotion of unhealthy products online. This effort will require a deeper understanding of the problem (e.g., which products are being promoted and which claims are being made), interdisciplinary teams that can detect social bots and characterize their content (e.g., engineers, computer scientists, and health experts), and funding that fosters these pursuits and collaborations (e.g., from the National Institutes of Health).
ACKNOWLEDGMENTS
This research was supported by grant P50CA180905 from the National Cancer Institute and the Center for Tobacco Products of the Food and Drug Administration (FDA).
Note. The National Institutes of Health (NIH) and the FDA had no role in the design of this research; the collection, analysis, or interpretation of the data; the writing of the editorial; or the decision to submit the editorial for publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or FDA.
REFERENCES
1. Ferrara E, Varol O, Davis C, Menczer F, Flammini A. The rise of social bots. Commun ACM. 2016;59(7):96–104.
2. Bessi A, Ferrara E. Social bots distort the 2016 US presidential election online discussion. Available at: http://firstmonday.org/article/view/7090/5653. Accessed May 12, 2018.
3. Allem JP, Ferrara E, Uppu SP, Cruz TB, Unger JB. E-cigarette surveillance with social media data: social bots, emerging topics, and trends. JMIR Public Health Surveill. 2017;3(4):e98. doi:10.2196/publichealth.8641
4. Edwards C, Beattie AJ, Edwards A, Spence PR. Differences in perceptions of communication quality between a Twitterbot and human agent for information seeking and learning. Comput Human Behav. 2016;65:666–671.
5. Fox S, Purcell K. Chronic Disease and the Internet. Washington, DC: Pew Internet & American Life Project; 2010.
6. Betsch C, Brewer NT, Brocard P, et al. Opportunities and challenges of Web 2.0 for vaccination decisions. Vaccine. 2012;30(25):3727–3733. doi:10.1016/j.vaccine.2012.02.025
7. Subrahmanian VS, Azaria A, Durst S, et al. The DARPA Twitter bot challenge. Computer. 2016;49(6):38–46.