Psychological Science. 2017 Sep 12;28(11):1531–1546. doi: 10.1177/0956797617714579

Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation

Man-pui Sally Chan1, Christopher R. Jones2, Kathleen Hall Jamieson2, Dolores Albarracín1
PMCID: PMC5673564  NIHMSID: NIHMS885931  PMID: 28895452

Abstract

This meta-analysis investigated the factors underlying effective messages to counter attitudes and beliefs based on misinformation. Because misinformation can lead to poor decisions about consequential matters and is persistent and difficult to correct, debunking it is an important scientific and public-policy goal. This meta-analysis (k = 52, N = 6,878) revealed large effects for presenting misinformation (ds = 2.41–3.08), debunking (ds = 1.14–1.33), and the persistence of misinformation in the face of debunking (ds = 0.75–1.06). Persistence was stronger and the debunking effect was weaker when audiences generated reasons in support of the initial misinformation. A detailed debunking message correlated positively with the debunking effect. Surprisingly, however, a detailed debunking message also correlated positively with the misinformation-persistence effect.

Keywords: misinformation, correction, continued influence, science communication, belief persistence/perseverance, open data


The effects of misinformation are of interest to many areas of psychology, from cognitive science, to social approaches, to the emerging discipline that prescribes the best reporting and publication practices for all psychologists. Misinformation on consequential subjects is of special concern and includes claims that could affect health behaviors and voting decisions. For example, the rumor that genetically modified mosquitoes caused the Zika virus outbreak in Brazil is misinformation, a claim unsupported by scientific evidence (Schipani, 2016). Despite retraction of the scholarly article making the causal link between autism and the measles, mumps, and rubella vaccine, some people are still convinced of this unfounded claim (Newport, 2015). Others continue to hold that there were weapons of mass destruction in Iraq prior to the U.S. invasion in 2003, a belief undercut by the fact that none were found there after the invasion (Newport, 2013). Similarly, other individuals believe that the Affordable Care Act mandated death panels even though independent fact checkers have shown that such consultations about end-of-life care preferences are voluntary and not a precondition of enrolling in the ACA (Henig, 2009; Nyhan, 2010). The false beliefs on which we focus here occur when the audience initially believes misinformation and that misinformation persists or continues to exert psychological influence after it has been rebutted. In this context, two important questions are (a) how strong is the persistence of the misinformation across contexts, and (b) what audience and message factors moderate this effect?

Mounting evidence suggests that the process of correcting misinformation is complex and remains incompletely understood (Lewandowsky et al., 2015; Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012; Schwarz, Sanna, Skurnik, & Yoon, 2007). Lewandowsky and colleagues (2012) qualitatively reviewed the characteristics of effective debunking, a term we define as presenting a corrective message that establishes that the prior message was misinformation. Corrections may be partial, such as those that update details of the information, or complete, such as retractions of scientific articles based on inappropriate or fabricated evidence that the authors or the journal no longer endorse. This meta-analysis complements the Lewandowsky et al. review by quantitatively assessing the size and moderators of the debunking and misinformation-persistence effects.

Audience Factors That Reduce Credulity

As the literature confirms, “human memory is not a recording device, but rather a process of (re)construction that is vulnerable to both internal and external influences” (Van Damme & Smets, 2014, p. 310). Scholars agree that systematically reasoning in line with the arguments contained in a message should increase the message’s impact (Arceneaux, Johnson, & Cryderman, 2013; Chaiken & Trope, 1999; Johnson-Laird, 1994; Kahneman, 2003; Petty & Briñol, 2010; Slothuus & de Vreese, 2010). Accordingly, when the elaboration process organizes, updates, and integrates elements of information, generating explanations in line with the initial misinformation, this process may create a network of confirming causal accounts about the misinformation in memory. Conditions that yield confirming explanations may be associated with increased misinformation persistence and a weakened debunking effect (Arceneaux, 2012; Johnson-Laird, 2013). In contrast, considering the error in the initial information may lead to a weak explanatory model (Kowalski & Taylor, 2009). As a result, conditions that yield explanations that counter the misinformation should be associated with weakened misinformation persistence and an increased debunking effect. In short, the direction of the cognitive activity of the audience is likely to predict misinformation persistence and ineffective correction.

The Debunking Message

Corrections that merely encourage people to consider the opposite of initial information often inadvertently strengthen the misinformation (Schwarz et al., 2007). Therefore, offering a well-argued, detailed debunking message appears to be necessary to reduce misinformation persistence (Jerit, 2008). Research on mental models (Johnson-Laird, 1994; Johnson-Laird & Byrne, 1991) suggests that an effective debunking message should be sufficiently detailed to allow recipients to abandon initial information for a new model (Johnson & Seifert, 1994; Wilkes & Leatherbarrow, 1988). Messages that simply label the initial information as incorrect may therefore leave recipients unable to remember what was wrong and offer them no new model to understand the information (Johnson & Seifert, 1994; Wilkes & Leatherbarrow, 1988). Hence, we hypothesized that the level of detail of the debunking message (i.e., simply labeling the misinformation as incorrect vs. providing new and credible information) would be a vital factor in effective debunking and in curbing the persistence of misinformation.

Method

To conduct the present meta-analysis, we used pairs of keywords to obtain relevant scholarship from multiple databases in relevant areas (e.g., political science, communication, and public health; see the Supplemental Material available online for detailed information). Only reports1 from studies that were clearly or possibly experimental were considered. One of the most popular experimental paradigms is a series of reports of a warehouse fire (see Ecker, Lewandowsky, Swire, & Chang, 2011; Johnson & Seifert, 1994; Wilkes & Leatherbarrow, 1988). This paradigm involves three phases. In the first (manipulation) phase, experimental participants read a booklet containing a misinformation message attributing the fire to the presence of volatile materials in the warehouse; for half of the experimental participants, this misinformation message is accompanied by a debunking message, whereas for the other half it is not. Control participants receive neither the misinformation nor the debunking message. The second phase is a delay during which participants work on an unrelated task for 10 min. In the third phase, participants receive open-ended questionnaires assessing their understanding of the reports. The questionnaires contain 10 causal-inference questions (e.g., “What could have caused the explosions?”), 10 factual questions (e.g., “What time was the fire eventually put out?”), and manipulation-check items. These questions measure the tendency to make more detailed inferences about either the misinformation or the debunking message, with the possibility of greater misinformation persistence when the detailed inferences are about the misinformation.

We also specified three eligibility criteria to identify relevant studies: (a) the presence of open-ended questions or closed-ended scale measures of participants’ beliefs in (e.g., probability judgments about an event or person) or attitudes supporting (e.g., liking for a policy) the earlier misinformation and the debunking information, (b) the presence of a control group as well as one of the experimental groups (i.e., with or without the debunking message), and (c) the inclusion of a news message initially asserted to be true (the misinformation message) as well as a debunking message (see the Supplemental Material for details). Even though many topics involved real-world matters (e.g., see Berinsky, 2012, for the 2010 Affordable Care Act materials and Materials and Methods in the current Supplemental Material), the message positions were unfamiliar to the participants before the experiment.

The selection of studies

To obtain a complete set of studies, we used specific terms and keywords (including wildcards; see Materials and Methods in the Supplemental Material) and searched multiple online databases: (a) PsycINFO, (b) Google Scholar, (c) Medline, (d) PubMed, (e) ProQuest Dissertations and Theses Abstracts and Indexes: Social Sciences, (f) the Communication Source, and (g) the Social Sciences Citation Index. We also checked reviews and bibliographies and culled the references of articles selected for inclusion (see Materials and Methods in the Supplemental Material). By February 15, 2015, this meta-analysis included eight research reports (N = 6,878), 20 experiments, and 52 statistically independent samples (see Fig. 1).

Fig. 1. Flowchart of the search protocol and workflow used for study selection, as suggested by Moher, Liberati, Tetzlaff, and Altman (2009).

Estimation of effect sizes for misinformation, debunking, and misinformation persistence

We used Hedges’s d as our effect size. This approach includes a correction factor j, [1 − 3/(4 × n − 1)], which reduces the positive bias introduced by the use of small samples in experimental studies. All of the experiments we synthesized had between-subjects designs. Thus, we compared means between experimental conditions to obtain the effect sizes of interest (see Materials and Methods in the Supplemental Material). The difference between the misinformation group and the control group constitutes the misinformation effect, the difference between the misinformation group and the debunking group constitutes the debunking effect, and the difference between the debunking group and the control group constitutes the misinformation-persistence effect. Two trained raters used means and standard deviations from the different groups to compute Hedges’s d, following the formulas outlined by Borenstein, Hedges, Higgins, and Rothstein (2009).
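To make the effect-size computation concrete, here is a minimal Python sketch of Hedges's d for two independent groups, applying the correction factor j described above. The function name and example numbers are illustrative, not taken from the article's data.

```python
import math

def hedges_d(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges's d for two independent groups (illustrative sketch).

    Applies the small-sample correction j = 1 - 3 / (4 * n - 1),
    where n is the total sample size, following the formula in the text.
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    cohen_d = (mean1 - mean2) / sp      # uncorrected standardized mean difference
    j = 1 - 3 / (4 * (n1 + n2) - 1)     # small-sample correction factor
    return j * cohen_d

# Example: a misinformation group vs. a control group (made-up numbers)
print(hedges_d(mean1=7.2, sd1=1.5, n1=30, mean2=4.1, sd2=1.6, n2=30))
```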

Coding of moderators

Two of the authors calculated effect sizes and coded the moderators, including audience and message factors. Specifically, we coded for (a) the generation of explanations in line with the misinformation, (b) the generation of counterarguments to the misinformation, and (c) the level of detail of the debunking message. Raters resolved disagreements by discussion. Adequate agreement was reached for all coded variables (κs = .87–1.00, intraclass correlation coefficients = .90–1.00). Table 1 summarizes the coded characteristics and results in the literature we synthesized.

Table 1.

Characteristics and Effect Sizes of Reports Included in Meta-Analysis

Experiment | N | Percentage of females | Mean age (years) | Misinformation effect size (d) | Debunking effect size (d) | Misinformation-persistence effect size (d) | Publication statusa | Online (O) vs. lab (L) data collection | Generation of explanations in line with the misinformationb | Generation of counterarguments to the misinformationb | Detail of debunking messagesc
Report that the 2010 Affordable Care Act (ACA) contains descriptions of death panels (Berinsky, 2012)
1 618 0.07 WP O −0.93 −0.91 ND
1 618 −0.10 WP O −0.93 −0.91 ND
1 618 −0.28 WP O −0.93 −0.91 ND
1 618 −0.32 WP O −0.93 −0.91 ND
1 618 −0.42 WP O −0.93 −0.91 ND
2 278 0.19 WP O −0.93 −0.91 ND
2 278 −0.15 WP O −0.93 −0.91 ND

Positions of political candidates on policy arguments about Medicaid (Bullock, 2007)
1 204 48 0.93 T O −0.04 −0.37 ND
1 209 48 0.32 T O −0.04 −0.37 ND
2 58 74 0.25 T O −0.04 −0.37 ND
2 173 74 −0.63 T O −0.04 −0.37 ND

Positions of political candidates on spending on social services, protecting the environment versus jobs, and government aid to Blacks (Bullock, 2007)
3 100 64 0.36 T O −0.04 −0.37
3 165 64 0.20 T O −0.04 −0.37

Report about the causes of a minibus accident (Ecker, Lewandowsky, & Tang, 2010)
1 50 76 19.1 0.42 3.26 JA −0.04 −0.37 D
1 50 76 19.1 0.74 2.71 JA L −0.04 −0.37 D
1 50 76 19.1 1.19 1.77 JA L −0.04 0.18 D
1 50 76 19.1 1.23 1.69 JA L −0.04 −0.37 D
2 92 72 19.9 3.31 2.34 0.83 JA L −0.04 0.18 D

Report about the causes of a plane crash (Ecker, Lewandowsky, & Apai, 2011)
1 20 9.89 JA L −0.93 −0.91
1 20 8.91 JA L −0.93 −0.91
1 30 −1.31 10.38 JA L −0.04 −0.91 D
1 30 −0.73 6.45 JA L −0.04 1.21 D
1 30 4.74 1.61 JA L −0.04 −0.91 D
1 30 4.55 1.04 JA L −0.04 1.21 D
2 32 85 21.4 2.49 JA L −0.93 −0.91
2 32 85 21.4 1.34 JA L −0.93 −0.91
2 48 85 21.4 0.87 0.80 JA L −0.04 −0.91 D
2 48 85 21.4 0.98 0.64 JA L −0.04 −0.91 D
2 64 85 21.4 1.03 0.50 JA L −0.04 1.21 D
2 64 85 21.4 0.88 0.81 JA L −0.04 1.21 D

Report about the causes of an accidental warehouse fire (Ecker, Lewandowsky, Swire, & Chang, 2011)
1 69 67 1.55 0.72 0.83 JA L 2.22 1.21
1 69 67 1.55 1.17 0.38 JA L 2.22 1.21
1 69 67 1.2 0.76 0.44 JA L 2.22 1.21
1 69 67 1.2 0.79 0.41 JA L 2.22 1.21
2 46 69 0.33 JA 1.33 1.21
2 46 69 0.29 JA L 1.33 1.21
2 46 69 0.46 JA L 1.33 1.21
2 46 69 0.35 JA L 1.33 1.21

Report about the person responsible for a liquor-store robbery (Ecker, Lewandowsky, Fenton, & Martin, 2014)
1 72 67 19 2.28 0.65 1.28 JA L −0.04 −0.91 D
1 72 67 19 1.68 0.93 1.12 JA L −0.04 −0.91 D

Report about the person responsible for an attempted bank robbery (Ecker, Lewandowsky, et al., 2014)
2 50 69 19 0.69 JA L −0.04 −0.91 D
2 50 69 19 0.73 JA L −0.04 −0.91 D

Report about the causes of an accidental warehouse fire (Johnson & Seifert, 1994)
1 20 1.95 JA L −0.04 1.21 D
1 20 1.52 JA L −0.04 1.21 D
1.5 20 1.12 JA L −0.04 1.21 D
1.5 20 0.98 JA L −0.04 1.21 D
2 60 2.74 0.54 1.91 JA L −0.04 1.21 ND

Report about the causes of a police investigation into a theft at a private home (Johnson & Seifert, 1994)
3 81 0.96 −0.11 0.96 JA L −0.04 1.21 ND
3.5 27 0.57 JA L −0.04 1.21 D

Report of a political candidate who accepted campaign donations from a convicted felon (Thorson, 2013)
1 157 5.71 2.88 1.22 T O −0.93 −0.91 ND
2 240 4.62 3.40 0.91 T O −0.93 −0.91 D
2 234 3.98 3.97 −0.32 T O −0.93 −0.91 D

Note: Each row in the table presents results for a single record; for example, the five rows for Berinsky (2012) correspond to the five conditions in that experiment (descriptions of each record are available on the Open Science Framework; https://osf.io/9d6t4/).

aPublication status was working paper (WP), thesis (T), or journal article (JA). bValues in this column are the averages of the standardized scores of the direct and indirect codes. cIn this column, ND indicates that there was no detailed debunking information, and D indicates that detailed debunking information was available.

Audience factors

Two trained raters coded the generation of explanations in line with the misinformation as directly induced by experimental procedures (1 = no explicit procedure, 2 = explicit procedure). They also judged whether there were instructions or experimental settings likely to spontaneously activate explanations in line with the misinformation (1 = low likelihood, 2 = moderate likelihood, 3 = high likelihood). For example, Ecker, Lewandowsky, Swire, and Chang’s (2011) Experiment 1 was assigned a 2 for explicit experimental procedure because the misinformation was repeated 1 to 3 times across conditions. The same experiment was assigned a 3 for spontaneous generation of explanations because participants were instructed to complete an open-ended questionnaire with causal-inference questions (e.g., “What could have caused the explosions?”). In contrast, Berinsky’s (2012) report included neither an explicit procedure to strengthen the reception of the misinformation nor questionnaires to induce inferences about the misinformation. Therefore, this report was assigned a 1 for both variables. The standardized scores of these two variables were averaged into a composite index to represent the overall likelihood of explanations in line with the misinformation (see Table 1 for sample indexes).

The raters followed a similar scheme to code the generation of counterarguments to the misinformation (including the generation of causal alternatives) after participants received the debunking message. First, they coded whether counterarguments were directly induced by the experimental procedures (1 = no explicit procedure, 2 = explicit procedure). Second, they coded whether counterarguments were indirectly induced by the experimental setting (1 = low likelihood, 2 = moderate likelihood, 3 = high likelihood). For example, in Ecker, Lewandowsky, and Tang’s (2010) study, the debunking message was presented one time and did not elaborate on the multiple explanations supporting the information. Thus, the experimental procedure was coded as 1. However, participants were instructed to complete open-ended questions to make inferences about the misinformation after receiving the debunking message. Therefore, this study was coded as 2 for spontaneous generation of counterarguments. We then averaged the standardized scores of the direct and indirect codes as an overall index of generation of counterarguments (see Table 1).
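As an illustration of how the composite indexes in Table 1 can be formed, the following sketch standardizes hypothetical direct and indirect codes across samples and averages them. The code values are invented for illustration.

```python
import numpy as np

# Hypothetical direct (1-2) and indirect (1-3) codes across five samples
direct = np.array([1, 2, 2, 1, 2], dtype=float)
indirect = np.array([1, 3, 2, 1, 3], dtype=float)

def zscore(x):
    """Standardize a vector of codes (sample standard deviation)."""
    return (x - x.mean()) / x.std(ddof=1)

# Composite index: average of the two standardized codes, as in the text
composite = (zscore(direct) + zscore(indirect)) / 2
print(composite.round(2))
```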

Level of detail of the debunking message

The two raters also coded whether the debunking message simply labeled the initial information as incorrect (1 = not detailed) or provided detailed information (2 = detailed). For example, the debunking message presented in Ecker, Lewandowsky, and Apai’s (2011) experiments was assigned a 2 because new information was provided (i.e., the actual cause was determined to be a faulty fuel tank, p. 287).

Analytic procedures

To compare the effects of misinformation, debunking, and misinformation persistence, we performed three separate meta-analyses (see Chapter 25 in Borenstein et al., 2009). We first assessed publication and inclusion bias and analyzed the weighted mean magnitudes (d) of the effect sizes using fixed-effects and random-effects models estimated with maximum-likelihood methods. Then, we conducted Cochran’s Q tests and generated I 2 statistics to determine whether the population of effect sizes was heterogeneous across samples (Hedges & Olkin, 1985). We performed three-level meta-analysis (i.e., nested by reports) to estimate the heterogeneity level and control for dependency among studies from a single report. Finally, we conducted moderator analyses to explain the nonsampling variance in the effects. For descriptive purposes, we followed Cohen’s (1988) definitions of effect sizes (i.e., small effect: ds = 0.10–0.20, medium effect: ds = 0.21–0.50, and large effect: ds = 0.51–0.80).
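To show the quantities involved, here is a minimal sketch of a random-effects meta-analysis with Cochran's Q and I². Note that the article estimates its random-effects and three-level models with maximum likelihood; this sketch uses the simpler DerSimonian-Laird moment estimator instead, but Q and I² are computed the same way. Inputs are invented.

```python
import numpy as np

def random_effects_meta(d, var):
    """Cochran's Q, I^2, and a DerSimonian-Laird random-effects mean (sketch)."""
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1 / var                                # fixed-effects weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)         # Cochran's Q
    df = len(d) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-samples variance
    i2 = max(0.0, (q - df) / q) * 100          # % nonsampling variability
    w_star = 1 / (var + tau2)                  # random-effects weights
    d_random = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    ci = (d_random - 1.96 * se, d_random + 1.96 * se)
    return d_random, ci, q, tau2, i2

# Made-up effect sizes and sampling variances
print(random_effects_meta([2.3, 1.8, 3.1, 2.6], [0.20, 0.15, 0.30, 0.25]))
```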

Results

Descriptions of reports and studies

All reports were published between 1994 and 2015 and yielded 52 experimental conditions and 26 control conditions. The experiments concerned a variety of news topics. The eight reports used false social and political news, including reports of robberies (Ecker, Lewandowsky, Fenton, & Martin, 2014), investigations of warehouse fires (Ecker, Lewandowsky, Swire, & Chang, 2011; Johnson & Seifert, 1994), traffic accidents (Ecker, Lewandowsky, & Apai, 2011; Ecker et al., 2010), descriptions of death panels in the 2010 Affordable Care Act (Berinsky, 2012), positions of political candidates on arguments about Medicaid (Bullock, 2007), and whether a political candidate had received donations from a convicted felon (Thorson, 2013). Table 1 presents a summary of characteristics for each meta-analyzed condition. The average number of participants was 132 (SD = 174). Most samples were collected in laboratory settings (69.2%); the rest were collected on third-party online platforms (30.8%). The average percentage of females was 72 (SD = 9.57), and the mean age of participants was 20 years (SD = 1.16).

Mean effect sizes and heterogeneity

Analyses of weighted means were used to estimate the misinformation effect, the debunking effect, and the misinformation-persistence effect (k = 52; total N = 6,878), using fixed-effects models, random-effects models, and random-effects models nested by reports. We followed the detection procedure proposed by Viechtbauer and Cheung (2010) to examine the influence of outliers with exceptionally large effect sizes (d > 5.50) on the misinformation and misinformation-persistence effects. We estimated all mean effects with and without the removal of outliers, and the estimates were significant in both cases (see Table 2). Furthermore, the I² statistics revealed approximately 99% nonsampling variability in all cases (see Table 2).

Table 2.

Results of Effect Size Estimates With and Without Outliers

Effect | k | Fixed-effects model: d, 95% CI, Cochran’s Q | Random-effects model: d, 95% CI, τ² (SE), I² | Three-level random-effects model: cluster size, df, d, 95% Wald CI, I²(2), I²(3) | Random-effects model with weights: d, 95% CI, τ² (SE), I²
Analyses with outliers included
Misinformation 16 2.04 [1.93, 2.14] 590.44*** 3.08 [2.02, 4.15] 4.48 (1.66) 98.99 0 14 3.08 [2.00, 4.15] 98.99 2.94 [1.80, 4.08] 4.48 (1.66) 98.99
Debunking 30 0.88 [0.81, 0.93] 1,031.21*** 1.14 [0.68, 1.61] 1.65 (0.44) 98.45 30 28 1.14 [0.68, 1.61] 98.45 1.33 [0.62, 2.04] 2.28 (0.72) 98.76
Misinformation-persistencea 42 0.09 [0.07, 0.12] 1,701.81*** 0.97 [0.60, 1.35] 1.46 (0.33) 99.47 8 39 0.92 [0.40, 1.44] 63.35 35.93 1.06 [0.68, 1.44] 1.38 (0.32) 99.45

Analyses without outliers included
Misinformationb 14 2.01 [1.91, 2.12] 540.03*** 2.46 [1.73, 3.19] 1.87 (0.77) 97.91 6 11 2.49 [1.53, 3.45] 24.82 72.78 2.41 [1.63, 3.20] 1.87 (0.72) 97.91
Misinformation-persistencec 40 0.09 [0.06, 0.12] 1,584.65*** 0.75 [0.50, 1.00] 0.60 (0.14) 99.47 8 37 0.79 [0.36, 1.23] 39.18 59.49 0.77 [0.48, 1.05] 0.62 (0.16) 98.91

Note: Standard errors are given in parentheses. τ² indicates the estimated amount of total heterogeneity. Random-effects models with weights used standardized N residuals. Blank cells indicate that effect sizes were not estimated because no missing study was identified on the left-hand side of the funnel plot in Figure 2. I² refers to the level of between-samples heterogeneity, I²(2) indicates the amount of variance explained at Level 2 (records), and I²(3) indicates the amount of variance explained at Level 3 (reports). k = number of samples; d = mean Hedges’s d; CI = confidence interval.

aThe likelihood-ratio test between the Level 2 and Level 3 models was significant, χ²(1) = 7.98, p = .005. bThe likelihood-ratio test between the Level 2 and Level 3 models was significant, χ²(1) = 8.34, p = .004. cThe likelihood-ratio test between the Level 2 and Level 3 models was significant, χ²(1) = 20.92, p < .001.

***p < .001.

Assessment of bias

Given the substantial degree of heterogeneity, we performed multiple sensitivity analyses to assess bias. First, we used contour-enhanced funnel plots (Peters, Sutton, Jones, Abrams, & Rushton, 2008), which are scatterplots of the effects estimated from individual records against a measure of study size. Asymmetrical funnel plots suggest publication bias (Sterne & Harbord, 2004), and contour lines indicate levels of statistical significance. Fixed-effects modeling was used. We next used the trim-and-fill method (Duval, 2005), which is a nonparametric method to correct funnel-plot asymmetry by removing the smaller records that caused the asymmetry, re-estimating the center of the effect sizes, and filling the omitted records to ensure that the funnel plot is more symmetrical (Borenstein, Hedges, Higgins, & Rothstein, 2009). Fixed-effects modeling was used.
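As a complement to the funnel-plot methods just described, a simple regression-based asymmetry check (Egger's test, which is related to but not among the specific procedures the article reports) can be sketched as follows; the inputs are invented.

```python
import numpy as np
from scipy import stats

def egger_test(d, se):
    """Egger-style regression test for funnel-plot asymmetry (sketch).

    Regresses the standardized effect (d / se) on precision (1 / se);
    an intercept reliably different from zero suggests small-study
    asymmetry of the kind a funnel plot displays.
    """
    d, se = np.asarray(d, float), np.asarray(se, float)
    y, x = d / se, 1 / se
    X = np.column_stack([np.ones_like(x), x])   # intercept + precision
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - 2
    cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
    t_stat = beta[0] / np.sqrt(cov[0, 0])       # test the intercept
    p = 2 * stats.t.sf(abs(t_stat), dof)
    return beta[0], p

# Made-up effects and standard errors for illustration
print(egger_test([0.9, 1.1, 0.4, 1.6, 0.7], [0.20, 0.25, 0.15, 0.40, 0.22]))
```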

Next, we performed the bias tests using selection models (Vevea & Woods, 2005), which are weight-function models accounting for the fact that not all effect sizes have the same probability of being published. Using different probabilities, a selection model adjusts estimates of the mean effect size and can be compared with the unadjusted one to assess publication bias. Random-effects modeling was used. Using meta-regression, we were able to formally examine publication status as in a moderator analysis. When data are selectively reported in a way that is related to the magnitude of the effect size (e.g., when results are reported only when they are statistically significant), such a variable can have biasing effects (Borenstein et al., 2009). Random-effects modeling was used.

Finally, we used p-curve analysis (Simonsohn, Simmons, & Nelson, 2015) and p-uniform tests (van Assen, van Aert, & Wicherts, 2015). In p-curve analysis, the distribution of p values reported in a set of records is plotted. The analysis combines the half (p < .025) and full p curves to make inferences about evidential value. The p-uniform analysis rests on the same underlying assumption as p-curve analysis: that the distribution of p values under the hypothesis that the effect size equals the true effect size is uniform. Table 3 summarizes the results of these analyses (see also Fig. 2). Some of the methods suggested bias, whereas others did not. To be conservative, we explored the sources of this potential bias and corrected for it in later analyses.
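To illustrate the logic of p-curve in code, here is a deliberately simplified binomial version of the right-skew test. The article's analysis combines the half and full continuous p curves, which this sketch does not reproduce, and the p values shown are invented.

```python
from scipy import stats

def pcurve_binomial(p_values):
    """Simplified p-curve right-skew check (binomial version).

    Counts how many significant p values fall below .025. Under a true
    null, significant p values are uniform on (0, .05), so about half
    should land below .025; a reliable excess suggests evidential value.
    """
    sig = [p for p in p_values if p < .05]
    below = sum(p < .025 for p in sig)
    result = stats.binomtest(below, n=len(sig), p=0.5, alternative="greater")
    return below, len(sig), result.pvalue

# Invented p values; the nonsignificant .62 is excluded automatically
print(pcurve_binomial([0.001, 0.003, 0.012, 0.030, 0.041, 0.62]))
```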

Table 3.

Results of Sensitivity Analyses for the Effects of Misinformation, Debunking, and Misinformation Persistence

Analysis type and effect Analyses with outliers included Analyses without outliers included Indication of bias
Contour-enhanced funnel plot
 Misinformation Asymmetric funnel plot with records falling outside the funnel Asymmetric funnel plot with records falling outside the funnel Yes, see Figure 2
 Debunking Asymmetric funnel plot with records falling outside the funnel Yes, see Figure 2
 Misinformation persistence Asymmetric funnel plot with records falling outside the funnel Asymmetric funnel plot with records falling outside the funnel Yes, see Figure 2
Trim-and-fill method
 Misinformation Six estimated records filled on the left Five estimated records filled on the left Yes, see Figure 2
 Debunking Zero estimated records filled on the left No, see Figure 2
 Misinformation persistence Nineteen estimated records filled on the left Nineteen estimated records filled on the left Yes, see Figure 2
Selection models
 Misinformation Small differences between unadjusted and adjusted estimates Small differences between unadjusted and adjusted estimates No, see the Supplemental Material
 Debunking Large differences between unadjusted and adjusted estimates Yes, see the Supplemental Material
 Misinformation persistence Small differences between unadjusted and adjusted estimates Small differences between unadjusted and adjusted estimates No, see the Supplemental Material
Meta-regression model
 Misinformation Publication status was a significant moderator Publication status was a significant moderator Yes, see the Supplemental Material
 Debunking Publication status was a significant moderator Yes, see the Supplemental Material
 Misinformation persistence Publication status was a significant moderator Publication status was a significant moderator Yes, see the Supplemental Material
p-curve analysis
 Misinformation P-curve was right-skewed P-curve was right-skewed No, see the Supplemental Material
 Debunking P-curve was right-skewed No, see the Supplemental Material
 Misinformation persistence P-curve was right-skewed P-curve was right-skewed No, see the Supplemental Material
p-uniform analysis
 Misinformation P-uniform publication-bias test was nonsignificant P-uniform publication-bias test was nonsignificant No, see the Supplemental Material
 Debunking P-uniform publication-bias test was nonsignificant No, see the Supplemental Material
 Misinformation persistence P-uniform publication-bias test was nonsignificant P-uniform publication-bias test was nonsignificant No, see the Supplemental Material

Fig. 2. Contour-enhanced funnel plots showing standard error as a function of effect size, separately for the misinformation, debunking, and misinformation-persistence effects. The top row shows results when all records were included, whereas the bottom row shows results when the smaller records were removed and the trim-and-fill method was used (triangles indicate filled records). The vertical dashed lines indicate the mean estimates of the fixed-effects model. Outliers were removed in the calculations of the misinformation and misinformation-persistence effects.

We first conducted correlation analyses between sample size and methodological factors that had relatively complete data (missing values in less than 5% of the selected records). Table 4 shows that sample size correlated with several methodological factors, including explanations in line with the misinformation and counterarguments to the misinformation. Therefore, we used the results shown in Table 4 to reduce bias related to sample size and its potential influence in the moderator analyses. Using the results of the multiple regression analyses shown in Table 4, we calculated standardized residuals to remove the influence of the covariates on sample size. Those residuals were then used to represent sample size in a way that was independent of the methodological and publication factors. Specifically, we estimated a weight for each sample by referencing the smallest standardized residual: standardized residual − minimum (standardized residuals) + 0.0001. The weighted model was likely to mitigate the influence of sample size as a potential source of bias, which led us to also repeat all the analyses of misinformation and persistence with these weights included.2 Specifically, we calculated mean effect sizes for the misinformation, debunking, and misinformation-persistence effects using random-effects models with the standardized residuals of sample size introduced as weights. Table 2 presents these results, which were similar to the earlier ones. Moderator analyses were also replicated with these weights and are reported in turn (see Table 5).
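A minimal sketch of the weight construction described above, with invented sample sizes and covariate codes (the actual covariates and data are in Table 4 and the OSF records):

```python
import numpy as np

# Invented data: sample sizes and two numerically coded covariates
# (e.g., publication status; explanations in line with the misinformation)
n = np.array([618, 204, 50, 92, 157], dtype=float)
covariates = np.column_stack([
    [1, 2, 3, 3, 2],
    [-0.93, -0.04, -0.04, -0.04, -0.93],
])

# Regress sample size on the covariates and take standardized residuals
X = np.column_stack([np.ones(len(n)), covariates])
beta, *_ = np.linalg.lstsq(X, n, rcond=None)
resid = n - X @ beta
std_resid = (resid - resid.mean()) / resid.std(ddof=1)

# Shift so all weights are positive, as in the text:
# standardized residual - min(standardized residuals) + 0.0001
weights = std_resid - std_resid.min() + 0.0001
print(weights.round(4))
```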

Table 4.

Results of Correlational and Multiple Regression Analyses Predicting Total Sample Size

Variable | Misinformation effect: r(14), b(14) | Misinformation-persistence effect: r(40), b(36)
Publication status .91*** 169.78***a .63*** 118.48**a 462.22***b
Online vs. lab data collection .91*** .63***
Publication year .23 −0.82 .36* 2.15
Explanations in line with the misinformation −.21 10.62 −.50*** −2.63
Counterarguments to the misinformation −.52*** 5.49

Note: Results were about the same when explanations in line with the misinformation and counterargument generation were excluded.

aThese coefficients represent dissertations compared with journal articles. bThis coefficient represents working papers compared with journal articles.

*p < .05. **p < .01. ***p < .001.

Table 5.

Results of Moderator Analyses of All Effects, Effects Nested by Reports, and Effects Without Outliers

Variable | Misinformation effect: MEM, WMEM | Debunking effect: MEM | Misinformation-persistence effect: MEM, WMEM
All effects
Intercept 3.17*** (0.42) 3.07*** (0.45) −0.74 (0.61) 0.88* (0.39) 1.08** (0.40)
Explanations in line with the misinformation −0.97** (0.33) −0.98** (0.34) −4.08*** (0.72) 1.40* (0.58) 2.09*** (0.62)
Counterarguments to the misinformation 0.93*** (0.26) −0.36 (0.24) −0.68** (0.24)
Level of detail of debunking messagea 1.82** (0.64) 0.86* (0.42) 1.06* (0.43)
 Q M 8.87** (14) 8.30** (14) 33.64*** (17) 17.30*** (31) 26.36*** (31)
 τ 2 2.54 (0.96) 2.54 (0.96) 0.79 (0.26) 1.03 (0.26) 1.03 (0.26)
 I  2 98.18 98.18 96.36 99.30 99.30
 R 2 .43 .43 .65 .44 .44
All effects nested by reports
Intercept 3.17*** (0.42) −0.74 (0.61) 0.88* (0.39)
Explanations in line with the misinformation −0.97** (0.33) −4.08*** (0.72) 1.40* (0.58)
Counterarguments to the misinformation 0.93*** (0.26) −0.36 (0.24)
Level of detail of debunking messagea 1.82** (0.64) 0.86* (0.42)
 Q M 8.87** (14) 33.64*** (17) 17.30*** (31)
 τ 2 2.54 (0.96) 0.79 (0.26) 1.03 (0.26)
 I  2 98.18 96.36 99.30
 R 2 .43 .65 .44
 Likelihood-ratio test, χ2(1) −0.00 −0.00 −0.00
Effects without outliers
Intercept 2.65*** (0.30) 2.60*** (0.33) 0.68** (0.23) 0.72** (0.24)
Explanations in line with the misinformation −0.65** (0.23) −0.64** (0.24) 0.67 (0.35) 0.82* (0.36)
Counterarguments to the misinformation 0.09 (0.15) 0.08 (0.15)
Level of detail of debunking messagea 0.51* (0.25) 0.52* (0.25)
 Q M 8.12** (12) 6.84** (13) 23.38*** (29) 23.47*** (29)
 τ 2 1.16 (0.46) 1.16 (0.46) 0.34 (0.09) 0.34 (0.09)
 I 2 96.61 6.61 98.07 98.07
 R 2 .38 .38 .45 .45

Note: Unless otherwise indicated, values shown are unstandardized coefficients, and standard errors are given in parentheses. QM = test of moderators (degrees of freedom are given in parentheses); τ² = estimated amount of total heterogeneity; I² = level of between-samples heterogeneity; MEM = mixed-effects model; WMEM = mixed-effects model weighted by standardized N residuals.

aLevel of detail of debunking message was coded 1 (labeling initial information as incorrect) or 2 (providing new and credible information).

†p < .10. *p < .05. **p < .01. ***p < .001.

Moderator analyses

We used meta-regressions to analyze the effects of the moderators on the misinformation, debunking, and misinformation-persistence effects (see Table 5). As Table 5 shows, the likelihood-ratio tests were nonsignificant for all effects, which suggests that the nonnested models with moderators better represent the data than the nested ones. Table 6 presents effect sizes for debunking and misinformation persistence across moderator levels.

Table 6.

Effect-Size Estimates for Debunking and Misinformation Persistence Across Levels of Moderator Variables Using Weighted Mixed-Effects Models

Moderator and level Debunking effecta Misinformation-persistence effectb
Explanations in line with the misinformation
 High 0.62 (k = 18) 1.72 (k = 25)
 Low 3.77 (k = 3) −0.14 (k = 10)
Counterarguments to the misinformation
 High 1.48 (k = 8) 0.46 (k = 13)
 Low 0.73 (k = 11) 1.58 (k = 22)
Level of detail of the debunking message
 High (detailed debunking information is provided) 1.25 (k = 18) 1.58 (k = 21)
 Low (information source is labeled as incorrect) 0.16 (k = 3) 0.55 (k = 14)

Note: The table shows weighted ds. Means above and below .25 standard deviations were used to group high and low conditions, respectively.

aFor the debunking effect, inverse variances of the effect sizes were included as weights. bFor the misinformation-persistence effect, the standardized residuals of sample size were included as weights.

Elaboration in line with the misinformation

We first examined whether generating explanations in line with the misinformation would moderate our misinformation, debunking, and misinformation-persistence effects. A meta-regression analysis with the misinformation effect as the outcome variable revealed an inverse association with the generation of explanations in line with the misinformation (weighted mixed-effects model: b = −0.98, 95% confidence interval (CI) = [−1.61, −0.33]; nonweighted mixed-effects model: b = −0.97, 95% CI = [−1.64, −0.31]). Specifically, the more likely recipients were to generate explanations supporting the misinformation, the weaker the misinformation effect was. This effect was unexpected because elaborating on information generally increases its impact when the message is strong (Cacioppo, Petty, & Crites, 1994; Petty & Briñol, 2010). Still, this effect does not change the interpretation of the more important results concerning the debunking and misinformation-persistence effects. Our meta-regression analysis of the debunking effect revealed a negative association with the generation of explanations supporting the misinformation (mixed-effects model: b = −4.08, 95% CI = [−5.50, −2.66]). As expected, the greater the elaboration in line with the misinformation, the weaker the later debunking effect. Furthermore, we found the anticipated moderation of the misinformation-persistence effect. The greater the likelihood of generating explanations in line with the misinformation, the greater the persistence of the misinformation (weighted mixed-effects model: b = 2.09, 95% CI = [0.88, 3.30]; nonweighted mixed-effects model: b = 1.40, 95% CI = [0.26, 2.54]).
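For readers interested in the mechanics, the following is a minimal sketch of the kind of weighted (mixed-effects) meta-regression used in these moderator analyses. It is illustrative, not the authors' model-fitting code, which used maximum-likelihood estimation; the example data are invented.

```python
import numpy as np

def meta_regression(d, var, tau2, moderators):
    """Mixed-effects meta-regression sketch.

    Regresses effect sizes on moderator codes with weights
    1 / (sampling variance + tau^2), returning unstandardized
    coefficients and their standard errors.
    """
    d = np.asarray(d, float)
    w = 1 / (np.asarray(var, float) + tau2)
    X = np.column_stack([np.ones(len(d)), np.asarray(moderators, float)])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
    se = np.sqrt(np.diag(np.linalg.inv(X.T @ W @ X)))
    return beta, se

# Invented effects, variances, and one moderator (detail of debunking, 1 or 2)
d = [1.2, 0.4, 0.9, 1.6]
var = [0.2, 0.1, 0.3, 0.2]
detail = np.array([2, 1, 2, 2], dtype=float)[:, None]
print(meta_regression(d, var, tau2=1.0, moderators=detail))
```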

Elaborating counterarguments to the misinformation

As Table 5 indicates, results were consistent with our expectations that the likelihood of counterarguing the misinformation when the debunking message is presented would moderate the initial misinformation effect (mixed-effects model: b = 0.93, 95% CI = [0.42, 1.44]) as well as misinformation persistence (weighted mixed-effects model: b = −0.68, 95% CI = [−1.16, −0.20]; nonweighted mixed-effects model: b = −0.36, 95% CI = [−0.82, 0.11]). In summary, the debunking effect was stronger and the misinformation persistence was weaker when recipients of the misinformation were more likely to counterargue the misinformation.

Detail of the debunking message

We then assessed whether the level of detail of the debunking message influenced the debunking and misinformation-persistence effects. In line with our expectations, a detailed debunking message was associated with a stronger debunking effect than a nondetailed one (mixed-effects model: b = 1.82, 95% CI = [0.57, 3.07]). Contrary to expectations, however, a more detailed debunking message was associated with a stronger misinformation-persistence effect (weighted mixed-effects model: b = 1.06, 95% CI = [0.23, 1.90]; nonweighted mixed-effects model: b = 0.86, 95% CI = [0.04, 1.67]). This result suggests that a more detailed debunking message was effective in discrediting the misinformation but was also associated with greater misinformation persistence. A post hoc analysis of the relation between generating explanations in line with the initial misinformation and the level of detail of the debunking message revealed a large positive correlation, r(33) = .52, p = .0015. It seems plausible that the misinformation messages were more detailed in studies with more detailed debunking, a possibility that future meta-analyses should investigate.

Discussion

The primary objective of this meta-analysis was to understand the factors underlying effective messages to counter attitudes and beliefs based on misinformation. Examining moderators provided empirical evidence with which to evaluate recommendations and suggestions for discrediting false information. Employing Cohen’s (1988) effect-size guidelines, we found large effects for misinformation, debunking, and misinformation persistence across estimation methods (see Table 2), except in the fixed-effects model of misinformation persistence. Table 6 also presents effect sizes for debunking and misinformation persistence across moderator levels.

The results of generating explanations in line with the misinformation were consistent with the hypothesis that people who generate arguments supporting misinformation struggle to later question and change their initial attitudes and beliefs. As shown in Table 6, the debunking message was less effective when people were initially more likely to generate explanations supporting the misinformation than when they were not. The results of counterarguing the misinformation also supported predictions. The debunking message was more effective when people were more likely to counterargue the misinformation than when they were not. Further, the results of the detail of debunking messages were consistent with our hypothesis that debunking is more successful when it provides information that enables recipients to update the mental model justifying the misinformation (see Table 6). As expected, the debunking effect was weaker when the debunking message simply labeled misinformation as incorrect than when it introduced corrective information. Contrary to expectations, however, the debunking effects of more detailed debunking messages did not translate into reduced misinformation persistence: studies with detailed debunking also tended to show stronger misinformation persistence. In the following paragraphs, we discuss the detection of inclusion bias in our samples and then present recommendations for uprooting discredited information.

Assessments of inclusion bias

Our analyses of publication and methodological correlates suggest that different research practices have been adopted across published and unpublished reports. Contrary to the usual bias (Hopewell, McDonald, Clarke, & Egger, 2007), unpublished reports in our meta-analysis (i.e., working papers and dissertations) had larger sample sizes than did published articles (see Table 4), a relation observed for both the misinformation and the misinformation-persistence effects. Furthermore, we found moderate to strong associations between sample size and methodological factors, which suggests that part of the bias is due to differences in study characteristics. Such results could also stem from more refined experimental methods, such as pilot testing a particular procedure to establish the required sample size a priori. In other words, research practices such as power analyses may contribute a greater number of studies with larger sample sizes and smaller effect sizes, as found in our study. The inconsistent results of the various sensitivity analyses speak to the need for future research to investigate the robustness of various bias-detection methods and to develop new assessment tools to further understand publication and inclusion bias (Inzlicht, Gervais, & Berkman, 2015; Kepes, Banks, & Oh, 2014; McShane, Böckenholt, & Hansen, 2016; Peters et al., 2010).

Recommendations for debunking misinformation

Our results have practical implications for editorial practices and public opinion.

Recommendation 1: reduce the generation of arguments in line with the misinformation

Our findings suggested that elaboration in line with the misinformation reduces the acceptance of the debunking message, which makes it difficult to eliminate false beliefs. Elaborating on the reasons for a particular event allows recipients to form a mental model that can later bias processing of new information and make undercutting the initial belief difficult (Hart et al., 2009). Therefore, the media and policymakers should report about an incident of misinformation (e.g., a retraction report) in ways that reduce detailed thoughts in support of the misinformation.

Recommendation 2: create conditions that facilitate scrutiny and counterarguing of misinformation

Our findings highlight the conclusion that counterarguing the misinformation enhances the power of corrective efforts. Therefore, public mechanisms and educational initiatives should induce a state of healthy skepticism. Furthermore, when retractions or corrections are issued, facilitating understanding and generation of detailed counterarguments should yield optimal acceptance of the debunking message.

Recommendation 3: correct misinformation with new detailed information but keep expectations low

The moderator analyses indicated that recipients of misinformation are less likely to accept the debunking messages when the countermessages simply label the misinformation as wrong rather than when they debunk the misinformation with new details (e.g., Thorson, 2013). A caveat is that the ultimate persistence of the misinformation depends on how it is initially perceived, and detailed debunking may not always function as expected.

Continuing to develop alerting systems

Policymakers should be aware of the likely persistence of misinformation in different areas. Alerting systems, such as Factcheck.org, exist in the political domain. Notably, when a Facebook user’s search turns up a story identified as inaccurate by one of the five major fact-checking groups, a newly implemented feature provides links to fact-checking information generated by one of these debunking sites. Debunking journalism exists in the social and health domains as well. For example, Snopes.com has recently published corrections of fake news claiming that a billionaire had purchased the tiny town of Buford, Wyoming. At the same time, science-communication scholarship and practice offer some innovative initiatives, such as retractionwatch.com, founded in 2010 by Ivan Oransky and Adam Marcus, which provides readers with updated information about scientific retractions. In line with Recommendation 3, Retraction Watch frequently updates readers on the details of retraction investigations online. Such an ongoing monitoring system creates desirable conditions of scrutiny and counterarguing of misinformation.

This meta-analysis began with a review of relevant literature on the perseverance of attitudes and beliefs and then assessed the impact of moderators on the misinformation, debunking, and misinformation-persistence effects. Compared with results from single experiments, a meta-analysis provides a useful catalogue of the experimental paradigms, dependent variables, moderators, and other method factors used in studies in related domains. In light of our findings, we offer three recommendations: (a) reduce arguments that support misinformation, (b) engage audiences in scrutiny and counterarguing of misinformation, and (c) introduce new information as part of the debunking message. Of course, these recommendations do not take the audience’s dispositional characteristics into account and may be less effective or ineffective for people with certain ideologies (Lewandowsky et al., 2015) and cultural backgrounds (Sperber, 2009).


Acknowledgments

Research reported in this article was supported by the National Cancer Institute of the National Institutes of Health (NIH) and Food and Drug Administration (FDA) Center for Tobacco Products (Award No. P50CA179546). The content is solely the responsibility of the authors and does not necessarily reflect the official views of the NIH or the FDA.

1. Throughout this article, we use the term “report” to refer to a publication, which could include one or more studies. The term “record” refers to conditions within each report (see https://osf.io/9d6t4/ for a complete list of records).

2. Results of the moderator analyses for the misinformation and misinformation-persistence effects were about the same in strength and direction when explanations in line with the misinformation and counterargument generation were excluded from the multiple regression analyses and the estimations of the standardized residual N weights.

Footnotes

Action Editor: Eddie Harmon-Jones served as action editor for this article.

Declaration of Conflicting Interests: The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article.

Supplemental Material: Additional supporting information can be found at http://journals.sagepub.com/doi/suppl/10.1177/0956797617714579

Open Practices: All data have been made publicly available via the Open Science Framework and can be accessed at https://osf.io/9d6t4/. The complete Open Practices Disclosure for this article can be found at http://journals.sagepub.com/doi/suppl/10.1177/0956797617714579. This article has received the badge for Open Data. More information about the Open Practices badges can be found at http://www.psychologicalscience.org/publications/badges.

References

Asterisks indicate reports included in the meta-analysis.

1. Arceneaux K. (2012). Cognitive biases and the strength of political arguments. American Journal of Political Science, 56, 271–285. doi: 10.1111/j.1540-5907.2011.00573.x
2. Arceneaux K., Johnson M., Cryderman J. (2013). Communication, persuasion, and the conditioning value of selective exposure: Like minds may unite and divide but they mostly tune out. Journal of Political Communication, 30, 213–231. doi: 10.1080/10584609.2012.737424
3. *Berinsky A. J. (2012). Rumors, truths, and reality: A study of political misinformation. Retrieved from http://web.mit.edu/berinsky/www/files/rumor.pdf
4. Borenstein M., Hedges L., Higgins J., Rothstein H. (2009). Introduction to meta-analysis. New York, NY: Wiley.
5. *Bullock J. G. (2007). Experiments on partisanship and public opinion: Party cues, false beliefs, and Bayesian updating. Department of Political Science, Stanford University. Retrieved from https://books.google.com/books/about/Experiments_on_Partisanship_and_Public_O.html?id=OIhEAQAAIAAJ&pgis=1
6. Cacioppo J. T., Petty R. E., Crites S. L. J. (1994). Attitude change. In Ramachandran V. S. (Ed.), Encyclopedia of human behavior (pp. 261–270). San Diego, CA: Academic Press.
7. Chaiken S., Trope Y. (1999). Dual-process theories in social psychology. New York, NY: Guilford Press.
8. Cohen J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
9. Duval S. (2005). The “trim and fill” method. In Rothstein H. R., Sutton A. J., Borenstein M. (Eds.), Publication bias in meta-analysis: Prevention, assessment, and adjustments (pp. 127–144). West Sussex, England: John Wiley & Sons.
10. *Ecker U. K. H., Lewandowsky S., Apai J. (2011). Terrorists brought down the plane!—No, actually it was a technical fault: Processing corrections of emotive information. The Quarterly Journal of Experimental Psychology, 64, 283–310. doi: 10.1080/17470218.2010.497927
11. *Ecker U. K. H., Lewandowsky S., Fenton O., Martin K. (2014). Do people keep believing because they want to? Preexisting attitudes and the continued influence of misinformation. Memory & Cognition, 42, 292–304. doi: 10.3758/s13421-013-0358-x
12. *Ecker U. K. H., Lewandowsky S., Swire B., Chang D. (2011). Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction. Psychonomic Bulletin & Review, 18, 570–578. doi: 10.3758/s13423-011-0065-1
13. *Ecker U. K. H., Lewandowsky S., Tang D. T. W. (2010). Explicit warnings reduce but do not eliminate the continued influence of misinformation. Memory & Cognition, 38, 1087–1100. doi: 10.3758/mc.38.8.1087
14. Hart W., Albarracín D., Eagly A. H., Brechan I., Lindberg M. J., Merrill L. (2009). Feeling validated versus being correct: A meta-analysis of selective exposure to information. Psychological Bulletin, 135, 555–588. doi: 10.1037/a0015701
15. Hedges L. V., Olkin I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.
16. Henig J. (2009). False euthanasia claims. Retrieved from http://www.factcheck.org/2009/07/false-euthanasia-claims/
17. Hopewell S., McDonald S., Clarke M. J., Egger M. (2007). Grey literature in meta-analyses of randomized trials of health care interventions. The Cochrane Database of Systematic Reviews, 2, MR000010. doi: 10.1002/14651858.MR000010.pub3
18. Inzlicht M., Gervais W., Berkman E. (2015). Bias-correction techniques alone cannot determine whether ego depletion is different from zero: Commentary on Carter, Kofler, Forster, & McCullough, 2015. SSRN. doi: 10.2139/ssrn.2659409
19. Jerit J. (2008). Issue framing and engagement: Rhetorical strategy in public policy debates. Political Behavior, 30, 1–24. doi: 10.1007/s11109-007-9041-x
20. *Johnson H. M., Seifert C. M. (1994). Sources of the continued influence effect: When misinformation in memory affects later inferences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1420–1436. doi: 10.1037/0278-7393.20.6.1420
21. Johnson-Laird P. N. (1994). Mental models and probabilistic thinking. Cognition, 50, 189–209.
22. Johnson-Laird P. N. (2013). Mental models and consistency. In Gawronski B., Fritz S. (Eds.), Cognitive consistency: A unifying concept in social psychology (pp. 225–244). New York, NY: Guilford Press.
23. Johnson-Laird P. N., Byrne R. M. J. (1991). Deduction. Hillsdale, NJ: Erlbaum.
24. Kahneman D. (2003). A perspective on judgment and choice: Mapping bounded rationality. The American Psychologist, 58, 697–720. doi: 10.1037/0003-066X.58.9.697
25. Kepes S., Banks G. C., Oh I.-S. (2014). Avoiding bias in publication bias research: The value of “null” findings. Journal of Business and Psychology, 29, 183–203. doi: 10.1007/s10869-012-9279-0
26. Kowalski P., Taylor A. K. (2009). The effect of refuting misconceptions in the introductory psychology class. Teaching of Psychology, 36, 153–159. doi: 10.1080/00986280902959986
27. Lewandowsky S., Cook J., Oberauer K., Brophy S., Lloyd E. A., Marriott M. (2015). Recurrent fury: Conspiratorial discourse in the blogosphere triggered by research on the role of conspiracist ideation in climate denial. Journal of Social and Political Psychology, 3, 161–197. doi: 10.5964/jspp.v3i1.443
28. Lewandowsky S., Ecker U. K. H., Seifert C. M., Schwarz N., Cook J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13, 106–131. doi: 10.1177/1529100612451018
29. McShane B. B., Böckenholt U., Hansen K. T. (2016). Adjusting for publication bias in meta-analysis: An evaluation of selection methods and some cautionary notes. Perspectives on Psychological Science, 11, 730–749. doi: 10.1177/1745691616662243
30. Moher D., Liberati A., Tetzlaff J., Altman D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), Article e1000097. doi: 10.1371/journal.pmed.1000097
31. Newport F. (2013). Americans still think Iraq had weapons of mass destruction before war. Retrieved from http://www.gallup.com/poll/8623/americans-still-think-iraq-had-weapons-mass-destruction-before-war.aspx
32. Newport F. (2015). In U.S., percentage saying vaccines are vital dips slightly. Retrieved from http://www.gallup.com/poll/181844/percentage-saying-vaccines-vital-dips-slightly.aspx
33. Nyhan B. (2010). Why the “death panel” myth wouldn’t die: Misinformation in the health care reform debate. The Forum, 8(1), Article 5. doi: 10.2202/1540-8884.1354
34. Peters J. L., Sutton A. J., Jones D. R., Abrams K. R., Rushton L. (2008). Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry. Journal of Clinical Epidemiology, 61, 991–996. doi: 10.1016/j.jclinepi.2007.11.010
35. Peters J. L., Sutton A. J., Jones D. R., Abrams K. R., Rushton L., Moreno S. G. (2010). Assessing publication bias in meta-analyses in the presence of between-study heterogeneity. Journal of the Royal Statistical Society A: Statistics in Society, 173, 575–591. doi: 10.1111/j.1467-985X.2009.00629.x
36. Petty R. E., Briñol P. (2010). Attitude change. In Baumeister R. F., Finkel E. J. (Eds.), Advanced social psychology: The state of science (pp. 217–259). Oxford, England: Oxford University Press.
37. Schipani V. (2016). GMOs didn’t cause Zika outbreak. Retrieved from http://www.factcheck.org/2016/02/gmos-didnt-cause-zika-outbreak/
38. Schwarz N., Sanna L. J., Skurnik I., Yoon C. (2007). Metacognitive experiences and the intricacies of setting people straight: Implications for debiasing and public information campaigns. In Zanna M. P. (Ed.), Advances in experimental social psychology (Vol. 39, pp. 127–191). San Diego, CA: Academic Press. doi: 10.1016/S0065-2601(06)39003-X
39. Simonsohn U., Simmons J. P., Nelson L. D. (2015). Better P-curves: Making P-curve analysis more robust to errors, fraud, and ambitious P-hacking, a reply to Ulrich and Miller (2015). Journal of Experimental Psychology: General, 144, 1146–1152. doi: 10.1037/xge0000104
40. Slothuus R., de Vreese C. H. (2010). Political parties, motivated reasoning, and issue framing effects. The Journal of Politics, 72, 630–645. doi: 10.1017/S002238161000006X
41. Sperber D. (2009). Culturally transmitted misbeliefs. Behavioral & Brain Sciences, 32, 534–535. doi: 10.1017/S0140525X09991348
42. Sterne J. A. C., Harbord R. M. (2004). Funnel plots in meta-analysis. The Stata Journal, 4, 127–141.
43. *Thorson E. A. (2013). Belief echoes: The persistent effects of corrected misinformation (Doctoral dissertation, University of Pennsylvania). Retrieved from http://repository.upenn.edu/dissertations/AAI3564225
44. van Assen M. A. L. M., van Aert R. C. M., Wicherts J. M. (2015). Meta-analysis using effect size distributions of only statistically significant studies. Psychological Methods, 20, 293–309. doi: 10.1037/met0000025
45. Van Damme I., Smets K. (2014). The power of emotion versus the power of suggestion: Memory for emotional events in the misinformation paradigm. Emotion, 14, 310–320. doi: 10.1037/a0034629
46. Vevea J. L., Woods C. M. (2005). Publication bias in research synthesis: Sensitivity analysis using a priori weight functions. Psychological Methods, 10, 428–443. doi: 10.1037/1082-989X.10.4.428
47. Viechtbauer W., Cheung M. W.-L. (2010). Outlier and influence diagnostics for meta-analysis. Research Synthesis Methods, 1, 112–125. doi: 10.1002/jrsm.11
48. Wilkes A. L., Leatherbarrow M. (1988). Editing episodic memory following the identification of error. The Quarterly Journal of Experimental Psychology A, 40, 361–387. doi: 10.1080/02724988843000168
