On social media platforms such as Twitter and Facebook, we are increasingly witnessing the division of people into filter bubbles.1 These bubbles are driven by personal biases, because we selectively orient ourselves toward people, groups, organizations, and institutions that share our attitudes and beliefs. Although the idea of connecting to like-minded others is not new, social media amplify this process. In the past five years, increasingly polarizing topics have come to the forefront of society, facilitated by these networked, online platforms. Even though these platforms have the potential to facilitate open dialogue and inclusion, they also allow anonymous participants to inject commentary and vitriol into what some have described as a Wild West–like atmosphere.
In this issue of AJPH, in their article on Twitter bots, Russian trolls, and the public health vaccine debate, Broniatowski et al. (p. 1378) describe just how sophisticated the exploitation of the social media environment can be. Their study serves as a warning for all who study public health communication on social media and for those who seek to use these platforms either to inform communication strategies or to communicate with the public.
ONLY 13 YEARS AGO
In 2005, when scholars began to investigate informal online communication in response to crises and emergencies, the use of social media among the public was still in its infancy. Facebook was extending its reach from its modest beginnings on college campuses to include members of the general public; Twitter had not yet launched, and when it did the following year, its value was questioned even as technologically savvy users embraced it. By 2008, scholars were recording and analyzing the prosocial and altruistic use of social media by the public. Online crowds coordinated efforts and produced collective intelligence that challenged prevailing notions that the public were not to be trusted, should be managed through command-and-control procedures, and were more of a problem than an asset.2
Key events, such as the 2009 H1N1 influenza outbreak, became a testing ground for public health risk communication. The increased volume of informal online communication presented an opportunity for local, state, and federal public health agencies to establish themselves as trusted sources of information and to contribute a guiding voice in a cacophonous environment. The value of social media was further recognized in 2010: first in the collective response to the devastating earthquake in Haiti, where volunteers mobilized to map the disaster space and develop mobile applications for emergency responders, and then during the Arab Spring protests in the Middle East, when members of the public used social media to self-organize and to broadcast messages to the world.
PUBLIC HEALTH RISK COMMUNICATION
With the rise of social media, risk communication researchers and practitioners have identified and emphasized social media best practices,3 which take into account the necessity of monitoring social media chatter across all phases of risk communication. This includes actively engaging with the online public through routine, daily communication, as well as through planned information campaigns that target specific topics, including mitigation and preparedness for seasonal events such as influenza and for unplanned ones such as a measles outbreak.
However, empirical evidence that bots and trolls are influencing social media conversations about public health threats will require researchers and practitioners to develop new communication strategies that account for that influence. This is especially needed at times of heightened public awareness or when a new threat emerges. Often, during these moments, much is unknown, and fears run high.
#VACCINATEUS
The study by Broniatowski et al. (p. 1378) reported a sophisticated communication strategy designed to disrupt civic conversations. Drawing from a list of known Russian troll accounts identified by NBC News, Broniatowski et al. found clear evidence that bots and Russian trolls were actively involved in producing both neutral and polarizing vaccine-related content. The accounts frequently amplified particular points of the debate by “flooding the discourse,” a strategy in which accounts synchronize their tweeting activity. Furthermore, analyses of #VaccinateUS, a hashtag used by Russian trolls, found that provaccine and antivaccine messages were tied explicitly to American politics, used value-based emotional appeals, and linked arguments to divisive topics such as class divisions, religion, and animal welfare. These strategies are designed to inflame social and political divides. Identifying bots is not an easy task, the authors noted, even for researchers who specialize in this area of study, and lay users often inadvertently include these accounts in their social media feeds.
The Pew Research Center has found that US online news consumers still get more information from news organizations than from their friends, but these consumers believe that the friends they choose to stay in touch with on social media provide information that is just as relevant as the information provided by news organizations.4 Many of us rely on our social media connections to curate content by identifying and editorializing newsworthy information that conforms to our personal biases. In doing so, we eliminate dissenting voices and insulate ourselves from recognizing when a filter bubble might have become infected.
RUSSIAN INTERFERENCE
Since the 2016 election, researchers have documented Russian interference in the US political system, suggesting that its goal is to divide the American public by spreading “computational propaganda,” which leverages social media to rapidly spread information supportive of a particular ideology.4 Importantly, mixing neutral and divisive content helps these accounts attract followers, because the messages are perceived as fair and confirm followers’ existing beliefs.
The potential danger of malicious bots is not hard to imagine. Consider, for example, the Soviet Union’s active measures campaign in the mid-1980s to disseminate propaganda that the AIDS virus was unleashed on the world by the United States as a biological weapon.5 As an individual campaign, it successfully spread thousands of false stories; its reach was limited only by a lack of human resources to conduct active measures.4 Today, new communication channels, tested strategies, and established preferences among the online public allow false information to propagate in viral fashion.
At public health agencies, ongoing monitoring of the online information space by risk communicators may alert them to emerging and contentious topics or to rumors that misrepresent the current state of a new threat. However, topic monitoring is not likely to capture or measure the influence of trolls or bots, or even to recognize active measures to infiltrate debates about public health issues.
PUBLIC HEALTH AGENCIES AS TRUSTED VOICES
Social media sites such as Twitter provide an opportunity to disseminate public health information directly to the public6 while also allowing for longitudinal digital dialogue and online engagement. To combat the influence of weaponized health communication, organizations must grow their online networks by establishing themselves as consistently credible and trustworthy sources of information and by showing their usefulness over time. Research has found that the single most important factor predicting message retransmission, a measure of public attention that also increases the visibility of a message, is the size of the sender’s network.7 When focusing events, such as emergent diseases, lead to increased conversations about vaccines, established and credible online sources will compete with trolls and bots that aim to divide the American public. The presence of public health agencies as trusted voices will become all the more important for increasing knowledge and reducing fears.
Footnotes
See also Broniatowski et al., p. 1378.
REFERENCES
1. Pariser E. The Filter Bubble: What the Internet Is Hiding From You. London, UK: Penguin; 2011.
2. Palen L, Anderson KM. Crisis informatics—new data for extraordinary times. Science. 2016;353(6296):224–225. doi: 10.1126/science.aag2579.
3. Veil SR, Buehner T, Palenchar MJ. A work-in-process literature review: incorporating social media in risk and crisis communication. J Contingencies Crisis Manage. 2011;19(2):110–122.
4. Watts C. Messing With the Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians, and Fake News. New York, NY: Harper/HarperCollins; 2018.
5. Boghardt T. Soviet Bloc intelligence and its AIDS disinformation campaign. Stud Intell. 2009;53(4):1–24.
6. Breland JY, Quintilani LM, Schneider KL. Social media as a tool to increase the impact of public health research. Am J Public Health. 2017;107(12):1890–1891. doi: 10.2105/AJPH.2017.304098.
7. Sutton J, Gibson CB, Phillips NE, et al. A cross-hazard analysis of terse message retransmission on Twitter. Proc Natl Acad Sci U S A. 2015;112(48):14793–14798. doi: 10.1073/pnas.1508916112.