Vaccine. 2023 Jan 20;41(4):922–929. doi: 10.1016/j.vaccine.2022.12.045

Collateral damage from debunking mRNA vaccine misinformation

Nicole M Krause a, Becca Beets a, Emily L Howell a, Helen Tosteson a, Dietram A Scheufele a,b
PMCID: PMC9858741  PMID: 36682880

Abstract

Amid the COVID-19 pandemic, the scientific community has been understandably eager to combat misinformation about issues such as vaccine safety. In highly polarized information environments, however, even well-intentioned messages have the potential to produce adverse effects. In this study, we connect different disciplinary strands of social science to derive and experimentally test the novel hypothesis that although particular efforts to debunk misinformation about mRNA vaccines will reduce relevant misperceptions about that technology, these correctives will harm attitudes toward other types of vaccines. We refer to this as the “collateral damage hypothesis.” Our study specifically examines a corrective message stating that “mRNA vaccines do not contain live virus,” and our results offer some support for our hypothesis, with the corrective triggering increased societal risk perceptions of live vaccines. We also find that the effect is, predictably, most evident among those whose vaccine acceptance is low. Building on the theoretical grounding we outline, we test a “damage control” adjustment to the corrective message and present evidence that it mitigates the collateral damage.

Keywords: COVID-19, prebunking, misinformation, vaccines, health communication

1. Introduction

The urgency of the COVID-19 pandemic has spurred communication and policy actions over the past years that are focused on mitigating public health risks and slowing the spread of the virus. In the case of risk communication about COVID-19, many of these efforts have privileged a need to act quickly over the need to act effectively and inclusively [1]. For example, certain campaigns have seemed to dismiss Americans’ lived realities, in which mask-wearing can pose a safety risk (e.g., racial profiling of Black men) [2], or where living and working conditions can preclude low-risk social distancing (e.g., among Americans with lower income) [3]. In a complex crisis where rapidly-changing and emergent science informs decision-making [4], some amount of “collateral damage” is perhaps unavoidable. Still, there is a difference between damage arising from blind spots and damage that has been anticipated and deemed acceptable by diverse stakeholders. Unfortunately, in the realm of misinformation, the World Health Organization’s declaration of an “infodemic” has increased the scientific community’s fervor in deploying a variety of corrective interventions, with little attention to the risk–benefit calculus associated with these actions [5], [6].

Given the likelihood of future pandemics [7], research that examines the unintended, negative effects of present-day response efforts will be necessary and valuable even after the current crisis ends. Addressing this need, we draw on interdisciplinary evidence to derive and test hypotheses about the unintended consequences of correcting misinformation. Specifically, we examine corrective messages currently used by the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) to counter the false claim that “mRNA vaccines contain live viruses.” Using the language of the original correctives, we find that while these attempts to debunk the false claim about COVID-19 mRNA vaccines do indeed reduce misperceptions, they can also inadvertently increase negative attitudes toward other kinds of vaccines (in this case, live virus vaccines). We also find that this “collateral damage effect”—i.e., the inadvertent increase in negative views of other vaccines resulting from misinformation correction—is greatest among those who already report low acceptance of vaccines in general. Finally, we develop and test a corrective that does not appear to create such collateral damage.

These findings highlight the need to thoroughly assess misinformation correctives across a wide range of intended and unintended outcomes for diverse audiences before they are deployed, and we describe one potential approach for doing so. Our study also exposes the urgent need to adopt a more careful definition of intervention “efficacy” that reflects not only improved belief accuracy but also an avoidance of adverse effects.

The need to examine possible adverse effects arises, in part, from the reality that misinformation is embedded in broader societal and information contexts that shape how and why it emerges and spreads among different people [5]. Consequently, interventions which lack nuance and attention to those contextual factors can have unintended, harmful consequences. For example, in information climates where a false claim is repeated often, some correctives can induce the “continued influence effect,” or the persistence of a false claim in a person’s memory even after intervention [8]. In rare cases, correctives can also backfire—i.e., strengthen the false beliefs they are trying to debunk—especially among extremists [9]. Further, attempts to warn people about the presence of false claims in the media ecosystem can generate cynicism about the utility of correctives [10]. Finally, recent research examining how to debunk the claim that “the flu vaccine causes the flu” has shown that, although a factual corrective did reduce misperceptions, it also had the unintended effect of decreasing self-reported intention to vaccinate among those with high concern about side effects [11].

A key commonality of these studies is that they look at adverse effects across a narrow range of outcome variables, including sustained or strengthened belief in the false claim targeted by the corrective, or beliefs about misinformation more broadly. However, even when debunking messages seem to correct a given false belief in a given issue domain, we expect that they can also inadvertently “harm” individuals’ cognitions about seemingly unrelated issues.

This expectation—which we refer to as the “collateral damage hypothesis”—arises from research on priming effects and on the rhetoric of argumentation. Successful argumentation requires communicators to establish premises that are logically connected within a larger argument [12]. Sometimes, premises are left implicit, or not stated outright, and implicit premises can arise from contextual cues [13]. Implicit premises have the potential to “prime” certain ideas for communication audiences, which means that certain cognitions will be rendered more salient or top-of-mind [14]. When implicit premises are primed, they will be likely to influence people’s subsequent opinion formation processes [14]. Putting these broad insights in context, we expect that some efforts by the WHO and the CDC to correct inaccurate beliefs about COVID-19 vaccines will do collateral damage to other vaccine attitudes by implicitly priming the belief that other vaccines are unsafe. To test this hypothesis, we focus on a particular correction format that both the WHO and CDC have used to communicate and fact-check vaccine safety information.

As of this writing in June 2022, an excerpt from one of the CDC’s myth-busting pages says, “The mRNA vaccines do not contain any live virus. Instead, they work by teaching our cells to make a harmless piece of a ‘spike protein,’ which is found on the surface of the virus…” ([15], emphasis original). This message blends factual information about the mRNA vaccine (“mRNA vaccines do not contain any live virus”) with commentary on risk (“a harmless piece of a ‘spike protein’”), and it contextualizes the whole message within a broader argument about vaccine safety. Given the safety context, audiences are likely primed with the implicit argumentative premise that if a vaccine does contain live virus, then it is not safe. If readers are subsequently asked how risky they think live virus vaccines are, then the primed premise should influence their evaluations.

We focus on this particular corrective not only because it is a real and prominent example of messaging about COVID-19 mRNA vaccines that we expect will produce collateral damage, but also because the specific collateral damage we anticipate—i.e., increased negative views of live virus vaccines—could be deleterious for ongoing public health challenges and concerns. Live virus vaccines, such as the MMR vaccine, are important global public health tools that are also sometimes targets of misinformation campaigns in the US and other countries where anti-vaccination movements are prominent, and where there are related concerns about vaccine uptake [16], [17], [18]. Recent years have exposed some of the costs of negative attitudes toward live virus vaccines and decreased vaccination rates, particularly among children whose parents hold those perceptions [19]. These costs include measles outbreaks and the reemergence of other diseases that can be mitigated with live virus vaccines, such as poliovirus [20] and monkeypox [21].

Further, even within the COVID-19 context, live virus vaccines remain a key resource in some countries where stocks of mRNA vaccines are low [22]. An intervention that corrects false beliefs about mRNA COVID-19 vaccines while unwittingly increasing concern about live virus vaccines could therefore hinder vaccination efforts for other disease outbreaks, or hinder COVID-19 vaccination in parts of the world that lack equitable access to mRNA technology. Clearly, although the severity and scope of the pandemic spark an understandable sense of urgency to increase mRNA vaccine uptake, our attempts to intervene in relevant misperceptions with hastily-generated communications might endanger public health in other areas.

In light of these arguments, we examine the possibility of collateral damage effects from attempts to correct the false claim that “mRNA vaccines contain live virus,” and we assess the utility of one possible messaging solution that we expect will mitigate adverse effects. We examine the following hypotheses:

H1: Collateral damage hypothesis. Compared to offering no corrective, a message that asserts only that COVID-19 mRNA vaccines do not contain live virus will:

H1a: reduce belief in the false claim,

H1b: but it will also increase negative views of live virus vaccines, defined as heightened risk perceptions and opposition to their use.

In other words, we hypothesize that although the corrective will successfully reduce belief in a specific false claim about mRNA vaccines (H1a), it will do so at the cost of increasing negative beliefs about live virus vaccines (H1b). We refer to these combined outcomes as the “collateral damage effect.”

The literatures we examine also suggest that this adverse effect can be avoided. Because implicit premises leave information unstated, they create space for misinterpretation, especially when messages are intended for diverse audiences. Therefore, communicators can mitigate unintended consequences if they anticipate and explicitly address the implicit premises that are likely to arise from the argumentation context, and that will intersect with audiences’ assumptions and associations [13], [23]. Indeed, we know from decades of research that the most effective risk communications are those which account for contextual factors, including people’s prior attitudes, a polarized sociopolitical climate, and shifting scientific evidence [24].

In the context of efforts to debunk misinformation about COVID-19 mRNA vaccines, a clear goal of the larger risk communication effort is to address the implications of circulating falsehoods—i.e., that these vaccines are unsafe. Considering this, we expect that public health officials’ efforts to debunk false claims about COVID-19 mRNA vaccines will be less likely to do “collateral damage” if they attend to both the explicit content of the misinformation they are trying to debunk (“mRNA vaccines contain live virus”) as well as the implicit content arising from contextual cues (“live virus vaccines are unsafe”):

H2: Damage control hypothesis. A message that asserts both that COVID-19 mRNA vaccines do not contain live virus and that live virus vaccines can be used safely (i.e., a corrective that incorporates “damage control”) will:

H2a: reduce belief in the false claim, compared to seeing no corrective,

H2b: and, compared to the original corrective, it will result in less negative views of live virus vaccines (as defined in H1).

In other words, although we expect that both styles of corrective will reduce false beliefs relative to seeing no debunking message (H1a and H2a), we expect that the original corrective will have greater negative impacts on attitudes toward live virus vaccines, compared to the damage control corrective (H2b).

Finally, if we are correct about our collateral damage hypothesis, then we also expect collateral damage to be more pronounced among people whose prior attitudes about vaccines are already negative. Theories of motivated reasoning suggest that it should be easier to implicitly prime negative attitudes—including false beliefs—among people who are already motivated to form new negative beliefs about the topic in question [25], [26].

To test this expectation, we also examine the effects of the different correctives on respondents with different levels of vaccine acceptance. We hypothesize:

H3: The original corrective will interact with individuals’ overall vaccine acceptance, such that the collateral damage effect will be more pronounced among individuals who exhibit lower levels of vaccine acceptance.

It is less clear how the damage control message will work for individuals who have pre-existing negative attitudes toward vaccines. While it is possible that the damage control message will work among this group in a similar manner to the full sample (i.e., mitigating the implicitly primed associations and reducing the possibility of collateral damage), it is also possible that the presence of additional “pro-vaccine” information in the damage control statement could trigger additional counterarguing, or amplify motivated reasoning [27]. We therefore pose an open-ended research question:

RQ1: Does the damage control message seem to backfire among individuals with low vaccine acceptance?

2. Methods

2.1. Study design

To test our hypotheses, we designed and fielded an online experiment with a sample of 430 U.S. adults randomly assigned to one of three experimental conditions, as depicted in Fig. 1. The Institutional Review Board at the University of Wisconsin–Madison categorized the study as exempt (protocol 2021-1569), and respondents provided informed consent. To increase ecological validity and the practical utility of our findings, the stimulus was created by excerpting text and imagery from an existing WHO myth-busting website and only slightly editing the contents. The condition we call the “original corrective” (N = 144) was designed to resemble extant debunking messages, and it is the condition that we expected would trigger collateral damage (see H1). Comparatively, the “damage control” condition (N = 154) appends a damage control statement to the original corrective (see H2), and the last condition, “no corrective,” functions as the control (N = 132). After viewing the stimulus, participants provided responses to our four dependent variables: (A) evaluation of the truth or falsity of the statement that “mRNA vaccines contain live virus,” (B) degree of belief that live virus vaccines pose (i) societal and (ii) personal risks, and (C) opposition to live virus vaccines.

Fig. 1. Stimuli, as compared to the original WHO website imagery and text.

It is important to briefly discuss the claim we chose to examine. Although some truth-claims about COVID-19 are difficult or even impossible to cleanly label “true” or “false” [5]—consider, e.g., the politically-explosive claim that COVID-19 originated in a research lab in China—the claim we utilize here is more clearly untrue. We chose a “clear-cut” falsehood because it is precisely the type of misinformation that can seem straightforward to debunk, and which likely garners less attention when communicators weigh the risks of intervening on misinformation about COVID-19.

2.2. Sample

The experiment described in this paper was embedded in an online questionnaire, fielded by Qualtrics from January 28th, 2022, to February 2nd, 2022. The final sample for the full survey was N = 1,168. Qualtrics recruited from their participant panels, targeting a nationally representative sample of US adults in terms of age, sex, and political ideology. The quota requirements were: age = 30 % aged 18–34, 35 % aged 35–54, 35 % aged 55+; sex = 49 % male, 51 % female; political ideology = 33.3 % Republican, 33.3 % Democrat, 33.3 % Independent. The study used an oversample of Black Americans, at 33 % Black, 67 % non-Black. Respondents who completed the study were financially compensated for their participation, based on their agreement with Qualtrics. Of the full survey sample, 430 participants were randomly assigned to this experiment. We examined the distribution of respondents across the experimental groups to confirm that randomization evenly distributed respondents by race, gender, and political partisanship, and we found no significant differences (see Supplemental Materials, Appendix B, Tables 1–3). Consequently, the use of the oversample in our dataset should not affect our results.
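The balance check described above can be sketched with a standard Pearson chi-square test of independence between condition assignment and a demographic covariate. The sketch below is an illustrative reconstruction with simulated data and hypothetical variable names—the paper's actual checks were run in SPSS and appear in Supplemental Appendix B—not the authors' code.

```python
import numpy as np

# Simulated assignment data (hypothetical; the real data are not public).
rng = np.random.default_rng(1)
n = 430
condition = rng.integers(0, 3, size=n)        # 3 experimental arms
party = rng.choice(["Rep", "Dem", "Ind"], n)  # covariate whose balance we check

def chi_square_stat(a, b):
    """Pearson chi-square statistic for the two-way contingency table of a x b."""
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    obs = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(obs, (ai, bi), 1)               # observed cell counts
    expected = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return ((obs - expected) ** 2 / expected).sum()

stat = chi_square_stat(condition, party)
# With a 3x3 table, df = (3-1)*(3-1) = 4; the chi-square critical value at
# alpha = 0.05 is 9.488. A statistic below it means no detectable imbalance.
balanced = stat < 9.488
```

Under true random assignment, this check should fail (by chance) for roughly 5% of covariates, which is why the paper examined several covariates rather than one.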

Table 1. Effects of condition assignment on all DVs, showing unstandardized coefficients from OLS regressions, alongside standard errors.

Dependent variables (columns, left to right), each reporting unstandardized B with SE in parentheses:
(1) false belief that mRNA vaccines use live virus; (2) societal risk of live virus vaccines; (3) personal risk of live virus vaccines; (4) support for use of live virus vaccines.
A. Main effects N = 430 N = 382 N = 384 N = 407

constant 2.75 (0.11) 3.05 (0.12) 3.10 (0.12) 4.51 (0.17)
original corrective −0.37* (0.15) 0.38* (0.16) 0.22 (0.17) −0.21 (0.24)
damage control corrective −0.49*** (0.15) −0.03 (0.16) −0.18 (0.17) −0.03 (0.24)
comparing the correctives N = 298 N = 261 N = 268 N = 282
 constant 2.38 (0.11) 3.43 (0.11) 3.32 (0.12) 4.30 (0.17)
 damage control (vs. original) −0.12 (0.15) −0.42* (0.16) −0.40* (0.16) 0.18 (0.23)

B. Heterogeneous effects N = 425 N = 382 N = 384 N = 404

constant 3.54 (0.20) 3.47 (0.20) 3.64 (0.21) 3.77 (0.32)
original corrective −0.40 (0.27) 0.90*** (0.27) 0.52 (0.28) −0.03 (0.43)
damage control corrective −0.67* (0.27) 0.40 (0.29) 0.31 (0.28) 0.08 (0.44)
medium vax acceptance −0.79** (0.26) −0.23 (0.26) −0.26 (0.28) 1.00* (0.43)
high vax acceptance −1.41*** (0.26) −0.98*** (0.27) −1.35*** (0.29) 1.09* (0.43)
original*med.vax.accept −0.06 (0.36) −0.92* (0.37) −0.75 (0.38) −0.65 (0.59)
original*high.vax.accept 0.14 (0.36) −0.55 (0.36) −0.15 (0.39) 0.11 (0.58)
dmg.control*med.vax.accept 0.37 (0.36) −0.50 (0.37) −0.86* (0.38) −0.53 (0.58)
dmg.control*high.vax.accept 0.18 (0.36) −0.52 (0.37) −0.23 (0.39) 0.16 (0.58)

C. Low vax acceptance N = 118 N = 105 N = 106 N = 116

constant 3.54 (0.19) 3.47 (0.19) 3.64 (0.20) 3.77 (0.33)
original corrective −0.40 (0.26) 0.90*** (0.25) 0.52 (0.27) −0.03 (0.45)
damage control corrective −0.67* (0.26) 0.40 (0.27) 0.31 (0.28) 0.08 (0.46)
…comparing the correctives N = 83 N = 71 N = 73 N = 81
 constant 3.14 (0.18) 4.38 (0.16) 4.15 (0.18) 3.74 (0.31)
 damage control (vs. original) −0.27 (0.26) −0.50* (0.25) −0.21 (0.27) 0.11 (0.45)

D. Medium vax acceptance N = 149 N = 135 N = 137 N = 139

constant 2.75 (0.17) 3.24 (0.17) 3.38 (0.17) 4.77 (0.23)
original corrective −0.47 (0.24) −0.01 (0.25) −0.24 (0.25) −0.68* (0.33)
damage control corrective −0.30 (0.23) −0.10 (0.24) −0.56* (0.24) −0.46 (0.32)
…comparing the correctives N = 101 N = 89 N = 92 N = 95
 constant 2.28 (0.17) 3.23 (0.19) 3.14 (0.17) 4.09 (0.24)
 damage control (vs. original) 0.17 (0.23) −0.08 (0.26) −0.32 (0.23) 0.22 (0.33)

E. High vax acceptance N = 158 N = 142 N = 141 N = 149

constant 2.13 (0.18) 2.49 (0.19) 2.29 (0.21) 4.86 (0.32)
original corrective −0.26 (0.25) 0.35 (0.25) 0.50 (0.28) 0.08 (0.43)
damage control corrective −0.49* (0.24) −0.12 (0.25) 0.07 (0.27) 0.23 (0.42)
…comparing the correctives N = 112 N = 101 N = 103 N = 106
 constant 1.87 (0.17) 2.84 (0.17) 2.79 (0.19) 4.94 (0.29)
 damage control (vs. original) −0.23 (0.24) −0.47 (0.24) −0.43 (0.26) 0.15 (0.41)

*p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001 (also in bold), †p ≤ 0.10. Significance is not denoted for constant terms. All estimates are from OLS regressions that were conducted in SPSS version 28.0.0.0. For the main analyses in A-E, the reference group is the “no corrective” condition (see Fig. 1). For the sub-analyses within A-E, shown under the heading “comparing correctives,” the two correctives were compared to each other (using the original corrective as the reference). All analyses excluded participants who were undecided about their levels of societal risk perception, personal risk perception, or support for the use of live virus vaccines. Therefore, the N differs across the analyses.

2.3. Measures and analysis

To test hypotheses H1-H3, we ran ordinary least squares (OLS) regression models for each of the dependent variables described above. (Results using ordered logistic regression are substantively similar – see the Supplemental Information.) For H3, which focused on the impact of the correctives on respondents exhibiting different levels of vaccine acceptance, we also ran an interaction model for each dependent variable to test the null hypothesis of no heterogeneous effects, as well as subgroup analyses for ease of interpretation for each of the four dependent variables, across people with low, medium, and high levels of vaccine acceptance. Details regarding the measurement of each outcome variable, as well as the measure for vaccine acceptance, can be found in the supplement.

All analyses were conducted in SPSS 28.0.0. The independent variables were dummy-coded condition assignments, with the “no corrective” condition as the reference group for all analyses, except for analyses which test H2b. To test H2b’s expectation that the damage control corrective would perform better than the original corrective (in terms of mitigating collateral damage effects), the two correctives were compared to each other—i.e., the dummy-coded damage control condition was used as the independent variable, with the original corrective as the reference.
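The dummy-coding and reference-group logic described here can be illustrated with a minimal least-squares fit. This is a sketch with simulated data and an assumed effect size, not the authors' SPSS analysis: with an intercept plus one dummy per non-reference condition, the constant estimates the reference group's mean and each dummy coefficient estimates that condition's difference from the reference.

```python
import numpy as np

# Hypothetical data: 0 = no corrective (control), 1 = original, 2 = damage control.
rng = np.random.default_rng(0)
n = 430
condition = rng.integers(0, 3, size=n)
# Simulated outcome (e.g., societal risk perception), with an assumed
# collateral-damage-style bump of 0.4 for the original corrective only.
y = 3.0 + 0.4 * (condition == 1) + rng.normal(0, 1, n)

def ols_dummies(y, condition, reference):
    """Fit y ~ intercept + dummies for each non-reference condition."""
    levels = [c for c in np.unique(condition) if c != reference]
    X = np.column_stack(
        [np.ones_like(y)] + [(condition == c).astype(float) for c in levels]
    )
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["constant"] + [f"cond_{c}" for c in levels], beta))

# Main analysis: "no corrective" (0) as the reference group (Table 1A style).
main = ols_dummies(y, condition, reference=0)
# H2b-style comparison: original corrective (1) as the reference, so the
# cond_2 coefficient estimates damage-control minus original.
versus = ols_dummies(y, condition, reference=1)
```

Because the model is saturated in condition, switching the reference group only re-expresses the same group means: the damage-control-vs-original coefficient equals the difference between the two dummy coefficients from the control-referenced model.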

3. Results

The full results of our analyses appear collectively in Table 1 and Fig. 2. We recognize that our subgroup samples are sometimes small and may raise concerns about power, and we address this possible limitation in more detail in our discussion. To be as transparent as possible, and so readers may more effectively interpret our results, we have clearly indicated the sample sizes for all of our analyses in Table 1.

Fig. 2. Effects of exposure to each corrective, compared to no corrective. Coefficients are unstandardized OLS estimates (see Table 1), with 95% confidence intervals.

*p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001. Fig. 2A shows the main effects across all the participants, while Fig. 2B–2D show the results for subsamples of respondents who exhibited different levels of vaccine acceptance. For more details about all the OLS regression analyses and the measures used, see the methods section of the main manuscript.

aIn additional OLS analyses which compared the two correctives to each other (see the “comparing correctives” rows of Table 1), the estimated difference in corrective effects on this outcome variable was significant at p ≤ 0.05.

3.1. Main effects

The main effects findings are described in more detail below, but, to summarize, our results lend some support to our collateral damage hypothesis (H1) and our damage control hypothesis (H2). Although both correctives did successfully reduce belief in the specific false claim under study (“mRNA vaccines contain live viruses”), the original corrective also increased societal risk perceptions of live virus vaccines. Put simply, the original corrective did collateral damage. Meanwhile, appending a damage control message to the original corrective appeared to bypass this adverse effect, while still achieving the overall goal of reducing belief in the false claim.

Although we observed collateral damage to societal risk perceptions, in partial support of our hypothesis, we did not see a significant increase in personal risk perceptions of live virus vaccines, nor increased opposition to their use. Despite the lack of statistical significance for these effects, however, the observed (nonsignificant) increase in personal risk perceptions under the original corrective (and not under the damage control corrective) does align with our expectations. Given that the personal risks of vaccination are theoretically more easily mitigated than societal risks simply by making an individual choice to opt out of the vaccine (except under conditions of a vaccine mandate), it is not surprising that the original corrective’s collateral damage effects are more muted for personal risk perceptions. More details for each dependent variable appear below:

Misperception that mRNA vaccines contain live virus. The original corrective and the damage control corrective both significantly reduced belief in the false claim, compared to seeing no corrective (respectively, B = -0.37, p = 0.017 and B = -0.49, p = 0.001). This is consistent with H1a and H2a. Although the coefficients suggest that the damage control condition was more effective at reducing false beliefs than the original corrective, the difference between these coefficients is not itself statistically significant (see Table 1A, “comparing the correctives”).

Societal risk of live virus vaccines. Consistent with H1b, exposure to the original corrective significantly increased perceived societal risk of live virus vaccines, compared to seeing no corrective (B = 0.38, p = 0.018). Comparatively, the damage control corrective did not have this effect (B = -0.03, p = 0.831). Consistent with H2b, when comparing the damage control corrective to the original corrective, we find that the damage control corrective yields significantly lower societal risk perceptions (B = -0.42, p = 0.011)—i.e., it does not do collateral damage to this outcome measure, but the original corrective does.
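The reported coefficient and standard error translate directly into the confidence intervals plotted in Fig. 2. As a worked check on the societal-risk result above (B = 0.38, SE = 0.16 from Table 1A), using the normal approximation—the paper's exact critical values may differ slightly:

```python
# Reported effect of the original corrective on societal risk perceptions.
B, SE = 0.38, 0.16
z = 1.96                             # approximate two-tailed 5% critical value
ci_low, ci_high = B - z * SE, B + z * SE   # (0.0664, 0.6936)
excludes_zero = ci_low > 0           # consistent with the reported p = 0.018
```

The interval lies entirely above zero, matching the significant increase in societal risk perceptions reported for the original corrective.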

Personal risk of live virus vaccines. In terms of statistical significance, neither the original corrective nor the damage control corrective had a significant effect on perceived personal risk of live virus vaccines. Compared to the original corrective, however, the damage control corrective significantly decreased personal risk perceptions (B = -0.40, p = 0.014). As shown in Fig. 2, the overall pattern of effects for personal risk perceptions is consistent with both H1 and H2—i.e., personal risk perceptions are slightly higher among participants who saw the original corrective, relative to the control, but the damage control corrective does not induce this increase.

Support for the use of live virus vaccines. We observed no statistically significant effects on levels of support for the use of live virus vaccines.

3.2. Heterogeneous effects

We expected that the collateral damage effect (H1) would be more pronounced among people whose vaccine attitudes are already negative—that is, among people reporting low vaccine acceptance (H3). We also posed a research question about possible heterogeneous effects of the damage control corrective (RQ1). To examine these effects, we first ran interaction models for each of the dependent variables, and then ran the same main effects models as described above across subsets of the sample who exhibited low, medium, and high vaccine acceptance. The results of the interaction analyses appear in Table 1B, and the results of the subgroup analyses appear in Table 1C-1E. The subgroup results are also visualized in Fig. 2B-2D. We report the detailed results below.

Interaction models. The results of the interaction analyses appear in Table 1B. Although there were no statistically significant heterogeneous effects by vaccine acceptance level on participants’ belief in the false claim, we did observe significant interaction effects for societal risk perceptions and personal risk perceptions. The original corrective interacted with vaccine acceptance to produce heterogeneous effects on societal risk perceptions of live virus vaccines, when comparing participants with medium acceptance to those with low acceptance (B = -0.92, p = 0.012). As visualized in Fig. 2B and 2C (and described more in the subgroup analyses below), the original corrective significantly increased societal risk perceptions of live virus vaccines among those with low vaccine acceptance, but it did not have this effect on people with medium vaccine acceptance. This lends some support for H3, which expected that the collateral damage effect would be worse among people whose vaccine attitudes are already negative.

Further, the damage control corrective interacted with vaccine acceptance to produce heterogeneous effects on personal risk perceptions of live virus vaccines across participants with low versus medium vaccine acceptance (B = -0.86, p = 0.025). Exposure to the damage control corrective significantly lowered personal risk perceptions of live virus vaccines among participants with medium vaccine acceptance compared to those with low vaccine acceptance, who had no significant change in their personal risk perceptions. This result offers some information to address RQ1, which asked if the damage control message backfires among those with low vaccine acceptance. The finding suggests that the answer to RQ1 may be “no.”

To examine and visualize the pattern of heterogeneous effects more clearly, we ran the main effects models (see above) separately across subgroups of the sample reflecting the different levels of vaccine acceptance. These results are described below.

Subgroup: Low vaccine acceptance. The pattern of results for individuals with low vaccine acceptance is depicted in Table 1C and Fig. 2B, and it provides some support for H3. Among participants with low vaccine acceptance, both the original corrective and the damage control corrective reduced false beliefs (original: B = -0.40, p = 0.116; damage control: B = -0.67, p = 0.011), but this desired effect was statistically significant only for the damage control message. However, the better performance of the damage control message relative to the original corrective in terms of reducing false beliefs was not itself statistically significant (B = -0.27, p = 0.316).

In terms of possible collateral damage to attitudes toward live virus vaccines, the original corrective significantly increased societal risk perceptions among individuals with low vaccine acceptance (B = 0.90, p < 0.001), while the damage control corrective did not (B = 0.40, p = 0.139), and the better performance of the damage control message was itself significant (B = -0.50, p = 0.044; see Table 1C, “comparing the correctives”). As for personal risk perceptions and support for the use of live virus vaccines, neither the original corrective nor the damage control corrective had significant effects among this subgroup.

Subgroups: Medium and high vaccine acceptance. The pattern of results for individuals with medium vaccine acceptance is depicted in Table 1D and Fig. 2C, and the pattern for high vaccine acceptance is shown in Table 1E and Fig. 2D. Exposure to the original corrective had no significant effects on false beliefs among individuals with medium or high vaccine acceptance, as compared to seeing no corrective. The damage control corrective, too, had no significant effects among people with medium vaccine acceptance. Finally, although the damage control message did significantly reduce false beliefs among participants with high vaccine acceptance (B = -0.49, p = 0.043), this better performance relative to the original corrective was not itself significant (B = -0.23, p = 0.324).

In terms of possible collateral damage to attitudes toward live virus vaccines, neither the original corrective nor the damage control corrective had significant effects on individuals with high vaccine acceptance. Among individuals with medium vaccine acceptance, however, the original corrective did decrease support for use of live virus vaccines (B = -0.68, p = 0.041), while the damage control message did not. The comparatively better performance of the damage control corrective here was not itself significant, however. Finally, we also observed that, among individuals with medium vaccine acceptance, the damage control message significantly reduced personal risk perceptions (B = -0.56, p = 0.020), while the original corrective had no effect, but the difference in estimated effects across the two correctives was not significant.
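For readers unfamiliar with how contrasts like these are estimated, the comparisons above can be sketched with dummy-coded OLS. The following is an illustrative simulation with invented per-condition sample sizes and effect sizes borrowed from the point estimates reported above; it is not the authors' actual analysis code, which is available in their GitHub repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (illustrative only): three experimental conditions with
# 120 simulated respondents each, and assumed true shifts in a false-belief
# index patterned on the point estimates reported in the text.
n = 120
conditions = np.repeat([0, 1, 2], n)  # 0 = no corrective, 1 = original, 2 = damage control
true_effects = np.array([0.0, -0.40, -0.67])

y = true_effects[conditions] + rng.normal(0.0, 1.0, size=conditions.size)

# Dummy-coded OLS: the intercept estimates the control-group mean, and each
# dummy coefficient estimates a corrective's effect relative to control.
X = np.column_stack([
    np.ones(conditions.size),
    (conditions == 1).astype(float),
    (conditions == 2).astype(float),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The "original vs. damage control" comparison is the difference between the
# two dummy coefficients, estimated with its own (larger) standard error.
diff = beta[2] - beta[1]
print(beta, diff)
```

Note that the corrective-versus-corrective comparison is a separate contrast with its own uncertainty, which is why one corrective can have a significant effect relative to control while the difference between the two correctives remains non-significant, as in the results above.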

4. Discussion

Our results illustrate that collateral damage can occur when misinformation interventions fail to account for the implicit premises that can arise from the context surrounding both the original falsehood and the corrective. In our study, the context involved public health officials attempting to debunk a false claim about COVID-19 mRNA vaccines within a broader risk communication effort intended to establish COVID-19 vaccine safety. Although the original corrective reduced belief in the targeted falsehood (“mRNA vaccines contain live virus”), it did so at the cost of increased societal risk perceptions of live virus vaccines.

We did not see significant changes in personal risk perceptions of live virus vaccines or in support for such vaccines. Differences in how societal and personal risk behave—and how each relates to support—are common and can vary across issues. Information intake can have distinct effects on personal versus societal risk perceptions (which is why we measured both), with research often finding that information from media is more likely to influence societal risk perceptions [28], [29], perhaps because people feel less able to rely on their own experience when extrapolating to wider occurrences of risk. Similarly, support for use of a technology like vaccines is rarely determined by risk perceptions alone. People often weigh a variety of factors, including particular risks alongside particular benefits (something we did not measure here), in making their judgments about a particular technology, treatment, or issue [30], [31], [32].

Given the effects we do find, however, and given that our hypotheses were motivated by literatures describing universal psychological tendencies, it is arguably likely that the collateral damage effect we observed in the urgent COVID-19 context is also occurring (albeit unmeasured) in other misinformation intervention contexts, so long as corrective messages are designed primarily to address explicit falsehoods while neglecting implicit content. We therefore urge science communication researchers and practitioners to anticipate and mitigate collateral damage from debunking campaigns not only for COVID-19, but also in other domains.

To avoid collateral damage, we recommend that (a) misinformation intervention initiatives make it common practice to anticipate and measure possible adverse effects of corrective messages, and (b) the adverse effects under consideration go beyond attitudes and beliefs about the same issue that forms the focus of the study. For example, in studies of science-related misperceptions, researchers should consider capturing effects on audiences’ attitudes about similar types of science and technology, or about science and scientists in general, or other outcomes for which there is theoretical reason to believe audiences might have context-driven associations. Importantly, if future work does identify collateral damage in other contexts, then recent efforts to synthesize what we know about “effective” solutions to misinformation (see, e.g., [33]) should be taken with a grain of salt, given the narrow definition of “efficacy” that is adopted in misinformation research. How effective should we consider a misinformation intervention to be if it corrects one science-related misperception at the expense of other science attitudes?

Collateral damage effects of the kind we observed clearly contradict public health officials’ big-picture goals to encourage broader uptake of vaccines, both in disease contexts beyond COVID-19 and in countries around the world. As mentioned above, the MMR vaccine remains important for both children and adults and has been a long-time target of anti-vaccination and misinformation efforts [34]. In recent years, including through the pandemic, there has been a reemergence of measles, mumps, poliovirus [20], and monkeypox [21], all of which can be effectively managed through vaccines that contain live viruses. Live virus vaccines are also important tools against COVID-19 in many countries where stocks of mRNA vaccines are low [22]. Collateral damage to perceptions of live virus vaccines, therefore, can be a serious setback in the larger battle against infectious diseases, including global efforts to address COVID-19.

The effects we observed are also striking given the short time that participants were exposed to our stimuli. Although the effect sizes appear small on the scales we used, we fielded this study during the second year of the pandemic in the U.S., and amidst a massive wave of the COVID-19 Omicron variant. Participants presumably had prior exposure to information similar to our stimuli. “Pretreatment” of this kind can make it difficult to move people’s attitudes in research [35], yet we were able to detect effects. Further, we know that cumulative, small effects can strengthen (false) associations, especially when messages are repeated often [8], and the correctives we tested have long been circulating in public health contexts.

Finally, the collateral damage effect on societal risk perceptions of live virus vaccines was more pronounced among people with low vaccine acceptance, who are often precisely the target population for efforts to improve vaccine attitudes. A possible limitation in our study, however, is that we did not sample based on this criterion. Therefore, we have a smaller overall number of participants with low vaccine acceptance (N = 118), compared to medium (N = 149) and high (N = 158) (with at least 30 people per condition for all analyses). Given possible concerns about our sample size for the subgroup analyses—in combination with the consequences of the collateral damage effect and the apparent ease of creating “damage control” correctives—future studies should attempt to replicate our effect and to examine how (if at all) collateral damage may be occurring in other issue contexts. When such studies look for collateral damage effects among population subgroups expected to experience stronger effects, researchers should deliberately sample those respondents.
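The sample-size caveat can be made concrete with a quick simulation. This is our own back-of-the-envelope assumption, not an analysis from the paper: with roughly 30 respondents per condition, a simple two-group comparison has only around a coin-flip's chance of detecting a shift of half a scale point (assuming a standard deviation of 1).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative power check (hypothetical parameters, not taken from the paper):
# how often does a two-group comparison with ~30 respondents per condition
# detect an assumed true shift of 0.5 scale points (SD = 1) at alpha = .05?
n_per_group, effect, reps = 30, 0.5, 5000
hits = 0
for _ in range(reps):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(effect, 1.0, n_per_group)
    se = np.sqrt(control.var(ddof=1) / n_per_group
                 + treated.var(ddof=1) / n_per_group)
    t = (treated.mean() - control.mean()) / se
    hits += abs(t) > 2.0  # approximate two-sided critical value for df ~ 58
print(hits / reps)  # well below the conventional 0.80 power target
```

A simulation along these lines, run before data collection, is one way researchers can decide how aggressively to oversample low-acceptance respondents when planning subgroup analyses.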

Notably, in the time during which our study was under peer review, we have been encouraged to learn that other researchers have theorized and found evidence of adverse effects for corrective messages that are very similar to the kinds we have tested here [36]. Specifically, in testing for unintended consequences of a myth-busting message stating that COVID-19 mRNA vaccines do not edit human DNA, the researchers found that these explicit correctives increased agreement with the false belief that “COVID-19 vaccinations never cause any side effects,” possibly by implicitly priming message recipients. Further, like us, these researchers found that they could avoid the undesired effect by including “damage control” content (our words) in the corrective—i.e., the researchers found that “mentioning potential side effects” within the debunking message “prevented [the] unintended effect” [36].

Despite some limitations of our study, therefore, we are confident that there is a useful signal here about possible collateral damage effects, especially given that people with low vaccine acceptance are often the primary target of corrective campaigns, and that researchers have only recently been reassured by evidence that “backfire effects” among populations with negative prior attitudes appear to be rare [9]. Clearly, as similar research is also showing, other kinds of adverse effects can occur, sometimes through processes of implicature.

Even when researchers succeed in correcting false beliefs, our evidence suggests that stealthier effects are possibly occurring for other, often-unmeasured outcomes. This leaves science and risk communicators with an incomplete set of risks and benefits to weigh when deciding whether (and how) to intervene. Clearly, we cannot anticipate and empirically test every possible adverse effect of correcting misinformation. However, we also cannot afford to debunk false claims about one scientific topic while increasing audiences’ concerns about other scientific topics.

Medical scientists and public health officials are familiar with the bittersweet revelation that, although a health intervention successfully treats its targeted disease, its accompanying side effects are a strong detractor from its use. Analogously, no matter how well-intended our myth-busting campaigns might be, science communication practitioners and researchers should tread cautiously with the use of misinformation interventions. We need to examine the possible adverse effects of correctives more intentionally and rigorously—partly by paying careful attention to a much broader set of interconnected “effects” than the individual-level cognitive outcomes easily measured in lab experiments—and then seriously weigh the risks and benefits of taking action. If we fail to do this, then we will remain willfully ignorant of how our misinformation interventions could be doing collateral damage.

Funding

This work was supported by the National Science Foundation [award number 1827864], the John Templeton Foundation [award number 62194], and the University of Wisconsin Foundation [award number PRJ36FF]. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funders.

CRediT authorship contribution statement

Nicole M. Krause: Conceptualization, Data curation, Formal analysis, Funding acquisition, Methodology, Project administration, Visualization, Writing - original draft, Writing - review & editing. Becca Beets: Conceptualization, Data curation, Formal analysis, Methodology, Visualization, Writing - original draft, Writing - review & editing. Emily L. Howell: Conceptualization, Data curation, Formal analysis, Methodology, Visualization, Writing - original draft, Writing - review & editing. Helen Tosteson: Writing - original draft. Dietram A. Scheufele: Conceptualization, Funding acquisition, Resources, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Supplementary material

Supplementary data to this article can be found online at https://doi.org/10.1016/j.vaccine.2022.12.045.

Supplementary data 1
mmc1.docx (45.3KB, docx)

Data availability

The data and code can be freely accessed at the GitHub repository for the authors' research group: https://github.com/scimep

References

1. Newman T.P., Brossard D., Howell E.L. COVID-19 public health messages have been all over the place – but researchers know how to do better. The Conversation; 2021.
2. Taylor D.B. For Black men, fear that masks will invite racial profiling. The New York Times; 2020.
3. DeLuca S., Papageorge N., Kalish E. The unequal cost of social distancing. Coronavirus Resource, Johns Hopkins University & Medicine; 2020.
4. Scheufele D.A., Krause N.M., Freiling I., Brossard D. How not to lose the COVID-19 communication war. Issues in Science and Technology; April 17, 2020.
5. Krause N.M., Freiling I., Scheufele D.A. The infodemic ‘infodemic:’ Toward a more nuanced understanding of truth-claims and the need for (not) combatting misinformation. Ann Am Acad Pol Soc Sci. 2022;700:112–123.
6. Scheufele D.A., Krause N.M., Freiling I. Misinformed about the “infodemic?” Science’s ongoing struggle with misinformation. J Appl Res Mem Cogn. 2021;10:522–526.
7. Vora N.M., et al. Want to prevent pandemics? Stop spillovers. Nature. 2022. doi:10.1038/d41586-022-01312-y.
8. Lewandowsky S., Ecker U.K.H., Seifert C.M., Schwarz N., Cook J. Misinformation and its correction: Continued influence and successful debiasing. Psychol Sci Public Interest. 2012;13:106–131. doi:10.1177/1529100612451018.
9. Nyhan B. Why “backfire effects” do not explain the durability of political misperceptions. Proc Natl Acad Sci. 2021;118.
10. Vraga E.K., Tully M., Bode L. Assessing the relative merits of news literacy and corrections in responding to misinformation on Twitter. New Media Soc. 2021. doi:10.1177/1461444821998691.
11. Nyhan B., Reifler J. Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine. 2015;33:459–464. doi:10.1016/j.vaccine.2014.11.017.
12. Dowden B.H. Logical Reasoning. Sacramento, CA: California State University; 2019.
13. Macagno F., Damele G. The dialogical force of implicit premises: Presumptions in enthymemes. Informal Logic. 2013;33:365–393.
14. Scheufele B.T., Scheufele D.A. Framing and priming effects. In: The International Encyclopedia of Media Studies. Blackwell Publishing Ltd; 2012. doi:10.1002/9781444361506.wbiems109.
15. Centers for Disease Control and Prevention. Myths and Facts about COVID-19 Vaccines. www.cdc.gov; 2021.
16. Joslyn M.R., Sylvester S.M. The determinants and consequences of accurate beliefs about childhood vaccinations. Am Politics Res. 2019;47:628–649.
17. Larson H.J., et al. Measuring vaccine hesitancy: The development of a survey tool. Vaccine. 2015;33:4165–4175. doi:10.1016/j.vaccine.2015.04.037.
18. Motta M., Callaghan T., Sylvester S. Knowing less but presuming more: Dunning-Kruger effects and the endorsement of anti-vaccine policy attitudes. Soc Sci Med. 2018;211:274–281. doi:10.1016/j.socscimed.2018.06.032.
19. Callaghan T., Motta M., Sylvester S., Lunz Trujillo K., Blackburn C.C. Parent psychology and the decision to delay childhood vaccination. Soc Sci Med. 2019;238. doi:10.1016/j.socscimed.2019.112407.
20. Doucleff M. Polio is found in the U.K. for the first time in nearly 40 years. Here's what it means. NPR; 2022.
21. World Health Organization. Multi-country monkeypox outbreak: situation update. World Health Organization; 2022.
22. Nolen S., Robbins R. In Africa, a mix of shots drives an uncertain COVID vaccination push. The New York Times; 2022.
23. Bigi S., Greco Morasso S. Keywords, frames and the reconstruction of material starting points in argumentation. J Pragmat. 2012;44:1135–1149.
24. Krause N.M., Freiling I., Beets B., Brossard D. Fact-checking as risk communication: the multi-layered risk of misinformation in times of COVID-19. J Risk Res. 2020;23:1052–1059.
25. Kunda Z. The case for motivated reasoning. Psychol Bull. 1990;108:480–498. doi:10.1037/0033-2909.108.3.480.
26. Scheufele D.A., Krause N.M. Science audiences, misinformation, and fake news. Proc Natl Acad Sci. 2019;116:7662–7669. doi:10.1073/pnas.1805871115.
27. Taber C.S., Lodge M. Motivated skepticism in the evaluation of political beliefs. Am J Polit Sci. 2006;50:755–769.
28. Coleman C.L. The influence of mass media and interpersonal communication on societal and personal risk judgments. Commun Res. 1993;20.
29. Tyler T.R., Cook F.L. The mass media and judgments of risk: Distinguishing impact on personal and societal level judgments. J Pers Soc Psychol. 1984;47:693–708.
30. Alhakami A.S., Slovic P. A psychological study of the inverse relationship between perceived risk and perceived benefit. Risk Anal. 1994;14:1085–1096. doi:10.1111/j.1539-6924.1994.tb00080.x.
31. Frewer L.J., Howard C., Shepherd R. Understanding public attitudes to technology. J Risk Res. 1998;1:221–235.
32. Slovic P. Perception of risk. Science. 1987;236:280–285. doi:10.1126/science.3563507.
33. Walter N., Tukachinsky R. A meta-analytic examination of the continued influence of misinformation in the face of correction: How powerful is it, why does it happen, and how to stop it? Commun Res. 2019;47:155–177.
34. Benecke O., DeYoung S.E. Anti-vaccine decision-making and measles resurgence in the United States. Glob Pediatr Health. 2019;6. doi:10.1177/2333794X19862949.
35. Druckman J.N., Leeper T.J. Learning more from political communication experiments: Pretreatment and its effects. Am J Polit Sci. 2012;56:875–896.
36. Schmid P., Betsch C. Benefits and pitfalls of debunking interventions to counter mRNA vaccination misinformation during the COVID-19 pandemic. Sci Commun. 2022;44:531–558. doi:10.1177/10755470221129608.



Articles from Vaccine are provided here courtesy of Elsevier
