American Journal of Public Health
Editorial. 2021 Jun;111(6):1055–1057. doi: 10.2105/AJPH.2021.306288

“First Do No Harm”: Effective Communication About COVID-19 Vaccines

David A. Broniatowski, Mark Dredze, John W. Ayers
PMCID: PMC8101564. PMID: 33950727

With effective COVID-19 vaccines in hand, we must now address the spread of information on social media that might encourage vaccine hesitancy. Although misinformation comes in many forms,1 including false claims, disinformation (i.e., deliberately false information), and rumors (i.e., unverified information), social media companies now seek, for the first time in their history, to interdict this objectionable content by removing posts that explicitly contain conspiracy theories and false or debunked claims about vaccines. Concurrently, social media users routinely disparage “anti-vaxxers” online, conflating a large group of vaccine-hesitant individuals, who may be using social media to seek information about vaccination, with a potentially much smaller group of “vaccine refusers.”2 Both strategies could cause more harm than good, necessitating a change in strategy, informed by a large body of scientific evidence, to make online communications about COVID-19 vaccines more effective.

WHEN CONTENT REMOVAL BACKFIRES

On December 3, 2020, Facebook stated that it would start removing false claims about COVID-19 vaccines. On December 16, 2020, Twitter followed suit. Facebook has since expanded the scope of its content removal, stating on February 8, 2021, that it would remove false claims about vaccines more broadly. Other social media companies have taken an even broader stance, removing still more content.

Although these policies are well intentioned, some evidence suggests that such censorship can be ineffective and even counterproductive, raising questions about whether the risks outweigh the benefits: Can platforms effectively define and remove offending content? Does removal actually reduce exposure to information that might encourage vaccine hesitancy? Does censorship change how the public appraises the removed information? How will it be interpreted? We can begin to answer these questions by examining other areas where content removal has been applied.

Historically, content violating governing laws (e.g., child pornography) or platforms’ terms of service has been removed haphazardly, and purveyors of this content have been suspended only inconsistently. For instance, one study found that roughly 10% of accounts on Twitter are bots that violate the platform’s terms of service.3 It follows that removing content about vaccines would similarly miss some objectionable content. On the other hand, acceptable content (e.g., a sincere question from a vaccine-hesitant person or a response to vaccine misinformation) might also be deleted as collateral damage. For example, Facebook was forced to apologize to an African American activist after her account was erroneously deleted as she tried to address racism on her Facebook page.4

Even if the algorithms were perfect, social media companies lack the formal training and clear accountability mechanisms needed to differentiate between blatantly false content and legitimate scientific uncertainty. More generally, platforms’ policies for content removal are perceived to be unfair: a 2017 ProPublica report, for example, found that Facebook’s hate speech censorship rules “. . . tend to favor elites and governments over grassroots activists and racial minorities.”5 When censorship cannot be carried out with precision, it can discourage the vaccine hesitant from genuinely seeking quality information, with the damage possibly outweighing the benefits.

Censorship often yields unintended consequences, even if implemented as intended. First, censored content may be more sought out and more persuasive, thus undermining the credibility of evidence-based information. This is because censorship can lead to outrage, encouraging some people to desire the censored information more (the so-called “Streisand Effect”).6 Second, censorship may be ineffective in a world where multiple social media platforms exist. Platforms such as Gab,7 Rumble, and Telegram welcome content banned by larger platforms, so vaccine-related content, including antivaccine content, can still be found when it is inevitably sought out. Third, the act of censorship may make the public more likely to believe the censored information. In multiple experiments,8 participants were more likely to change their opinions in favor of content that they had been told was banned, even if they had been exposed only to the title of the censored content rather than the content in full! Thus, public awareness of censored content that promotes vaccine refusal may increase vaccine hesitancy, even if the public never sees that content. Fourth, experiments show that efforts to debunk are less persuasive if the material being debunked is a target of censorship.8 Thus, censoring potentially harmful information about vaccines may reduce the efficacy of high-quality, evidence-based communications. Finally, censorship promotes a narrative, increasingly embraced by vaccine opponents,9 that portrays social media platforms and the public health establishment as authoritarian and paternalistic, thus eroding confidence in these critical institutions.

USING SOCIAL MEDIA TO ENGAGE WITH THE VACCINE HESITANT

To date, public health communicators have not frequently engaged with the vaccine hesitant online. This has left a void on social media, which has been filled by vigilantes who are not trained in effective communication and who mischaracterize the vaccine hesitant as stupid, science deniers, or conspiracy theorists. Thus, some of the most popular provaccine Facebook fan pages promote discord by mocking those with whom they disagree and stigmatizing those who have real questions about vaccines. Examples include pages with names such as “Refutations to Anti Vaccine Memes” (323,340 followers as of March 13, 2021), “Things Anti Vaxers [sic] Say” (156,070 followers), and “Detox, Antivax, and Woo Insanity” (114,653 followers). Although perhaps well intentioned, these pages violate a basic principle of persuasion by relying on ad hominem attacks. A well-liked messenger is significantly more likely to be persuasive, irrespective of the message content.8 By contrast, demeaning provaccination messages may be ineffective and possibly harmful, making everyone more vulnerable. For example, Russian Twitter “troll” accounts weaponized demeaning provaccine messages as frequently as vaccine refusal narratives when conducting a broad campaign to promote discord in American society.10

To fill this void, resources spent on censorship could instead be directed to collaborations with public health partners to craft evidence-based, positive interventions with demonstrated efficacy. Current interventions focus on broadcasting promotional messages or on correcting and debunking falsehoods, an approach that places the debunker in a position of authority relative to the audience and can thereby engender resistance. Although these strategies are important components of any public health campaign, communications must also address rationales for vaccine hesitancy, which vary among communities. Messages must therefore be targeted and tailored, communicating the gist of why people should vaccinate in a manner that is comprehensible but not simple-minded, and connecting rationales for vaccination to culturally contingent values.11

The messenger matters: although trust in government institutions may be at an all-time low, trust in physicians remains high.12 Platforms could therefore draw on the expertise of trained medical and public health professionals, and of trusted community leaders, to develop targeted and tailored provaccination messages, interjecting these into contested online spaces and disseminating messages that meet the needs of specific communities. Such messages, if appropriately crafted, can go viral on social media.13 However, not all physicians will be equally appealing to all audiences. Here, we may take advantage of social media platforms’ main strength: they excel at microtargeting and could leverage this technological prowess to empower trusted community advocates to connect with public health agencies. Rather than eroding the public’s access to critical health communities, platforms could thus promote a two-way dialogue between community advocates, public health agencies, and physicians on one hand, and the vaccine hesitant on the other, helping to build trust where it may be lacking.

In general, social media platforms facilitate the exchange of medical information among peers, even information that is private or sensitive.14 Action that shuts down these spaces could both deprive participants of a community upon which they have come to rely and make members of these communities harder to reach. If communities on popular, and relatively well-regulated, social media platforms are shut down, members of the public may fill their need for community in less reachable venues, or in private settings where there is no opportunity for public health advocates to encourage evidence-based behaviors.

Cynics might question whether even the most effective communicator can change the minds of “antivaccination crusaders.” We certainly will not if we do not try. Even controversial opinions that seem firmly entrenched can change when contact opens one’s mind to different perspectives. In our opinion, a large social media cluster hosting objectionable conversations about vaccines represents an untapped opportunity to encourage both participants and bystanders to engage in healthy behaviors, build trusting relationships, and potentially change minds. Social media platforms, working in close collaboration with health experts, could implement targeted and tailored campaigns that reach clusters of vaccine-hesitant users who would otherwise go unreached.

It is reasonable for the public to have questions in the midst of a pandemic that has left many anxious, or even panicked.15 We must meet the public where they are: reaching out to the vaccine hesitant on social media rather than imposing a blanket ban—enforced by imprecise algorithms—that could backfire and undermine the persuasive power of public health communications. A strategy modeled on evidence-based practices that combines the resources of social media companies, health agencies, and public health advocates could significantly accelerate the uptake of COVID-19 vaccines, potentially ending the pandemic. A necessary first step is systematically appraising our current social media strategies and changing course to align them with evidence-based practices.

ACKNOWLEDGMENTS

This work was supported by a grant from the John S. and James L. Knight Foundation to the George Washington University Institute for Data, Democracy, and Politics; the John and Mary Tu Foundation; and the Burroughs Wellcome Fund.

We thank Davey Smith, MD, MAS, and Eric C. Leas, PhD, MPH (both with the University of California San Diego), for comments on earlier versions of our editorial.

Note. The funders played no role in the decision to publish or in the conceptualization, preparation, or revision of this work.

CONFLICTS OF INTEREST

D. A. Broniatowski received a speaking honorarium from the United Nations Shot@Life Foundation—a nonprofit organization that promotes childhood vaccination. M. Dredze holds equity in Good Analytics and has received consulting fees from Directing Medicine and Bloomberg LP. J. W. Ayers owns equity positions in Directing Medicine, Health Watcher, and Good Analytics. J. W. Ayers and M. Dredze previously advised GlaxoSmithKline on social media communication strategies for vaccines.

Footnotes

See also Morabia, p. 982, and the Vaccines: Building Long-Term Confidence section, pp. 1049–1080.

REFERENCES

1. Southwell BG, Niederdeppe J, Cappella JN, et al. Misinformation as a misunderstood challenge to public health. Am J Prev Med. 2019;57(2):282–285. doi: 10.1016/j.amepre.2019.03.009.
2. MacDonald NE. Vaccine hesitancy: definition, scope and determinants. Vaccine. 2015;33(34):4161–4164. doi: 10.1016/j.vaccine.2015.04.036.
3. Varol O, Ferrara E, Davis C, Menczer F, Flammini A. Online human–bot interactions: detection, estimation, and characterization. In: Proceedings of the International AAAI Conference on Web and Social Media. 2017;11(1). Available at: https://ojs.aaai.org/index.php/ICWSM/article/view/14871. Accessed April 7, 2021.
4. Guynn J. Facebook apologizes to Black activist who was censored for calling out racism. USA Today. August 3, 2017. Available at: https://www.usatoday.com/story/tech/2017/08/03/facebook-ijeoma-oluo-hate-speech/537682001. Accessed February 12, 2021.
5. Angwin J, Grassegger H. Facebook’s secret censorship rules protect White men from hate speech but not Black children. ProPublica. 2017. Available at: https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms. Accessed February 12, 2021.
6. Jansen S, Martin B. The Streisand effect and censorship backfire. Int J Commun. 2015;9(16):656–671.
7. Zhou Y, Dredze M, Broniatowski DA, Adler WD. Elites and foreign actors among the Alt-Right: the Gab social media platform. First Monday. 2019;24(9). doi: 10.5210/fm.v24i9.10062.
8. Cialdini RB. Influence: The Psychology of Persuasion. Rev ed. New York, NY: William Morrow; 2006.
9. Broniatowski DA, Jamison AM, Johnson NF, et al. Facebook pages, the “Disneyland” measles outbreak, and promotion of vaccine refusal as a civil right, 2009–2019. Am J Public Health. 2020;110(suppl 3):S312–S318. doi: 10.2105/AJPH.2020.305869.
10. Broniatowski DA, Jamison AM, Qi S, et al. Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. Am J Public Health. 2018;108(10):1378–1384. doi: 10.2105/AJPH.2018.304567.
11. Reyna VF. A scientific theory of gist communication and misinformation resistance, with implications for health, education, and policy. Proc Natl Acad Sci USA. 2020:201912441. doi: 10.1073/pnas.1912441117. Epub ahead of print April 20, 2020.
12. Funk C, Gramlich J. Amid coronavirus threat, Americans generally have a high level of trust in medical doctors. Pew Research Center. March 13, 2020. Available at: https://www.pewresearch.org/fact-tank/2020/03/13/amid-coronavirus-threat-americans-generally-have-a-high-level-of-trust-in-medical-doctors. Accessed December 23, 2020.
13. Broniatowski DA, Hilyard KM, Dredze M. Effective vaccine communication during the Disneyland measles outbreak. Vaccine. 2016;34(28):3225–3228. doi: 10.1016/j.vaccine.2016.04.044.
14. Nobles AL, Leas EC, Althouse BM, et al. Requests for diagnoses of sexually transmitted diseases on a social media platform. JAMA. 2019;322(17):1712–1713. doi: 10.1001/jama.2019.14390.
15. Ayers JW, Leas EC, Johnson DC, et al. Internet searches for acute anxiety during the early stages of the COVID-19 pandemic. JAMA Intern Med. 2020;180(12):1706–1707. doi: 10.1001/jamainternmed.2020.3305.
