PLOS One. 2023 Apr 5;18(4):e0280902. doi: 10.1371/journal.pone.0280902

The efficacy of interventions in reducing belief in conspiracy theories: A systematic review

Cian O’Mahony 1,*, Maryanne Brassil 2, Gillian Murphy 1, Conor Linehan 1
Editor: Pierluigi Vellucci
PMCID: PMC10075392  PMID: 37018172

Abstract

Conspiracy beliefs have become a topic of increasing interest among behavioural researchers. While holding conspiracy beliefs has been associated with several detrimental social, personal, and health consequences, little research has been dedicated to systematically reviewing the methods that could reduce conspiracy beliefs. We conducted a systematic review to identify and assess interventions that have sought to counter conspiracy beliefs. Out of 25 studies (total N = 7179), we found that while the majority of interventions were ineffective in terms of changing conspiracy beliefs, several interventions were particularly effective. Interventions that fostered an analytical mindset or taught critical thinking skills were found to be the most effective in terms of changing conspiracy beliefs. Our findings can inform the development of future research to combat conspiracy beliefs.

Introduction

Conspiracy beliefs are defined as a set of beliefs that “explain important events as secret plots by powerful and malevolent groups” [1, p. 538]. For example, various groups of individuals hold the beliefs that the 1969 moon landing was a hoax staged by the United States government, and that the cure to cancer has been discovered but is kept secret from the public [2,3]. Holding conspiracy beliefs is often associated with negative personal, social, and health-related consequences. For example, conspiracy beliefs are associated with reluctance to receive a COVID-19 vaccine and reduced adherence to public health regulations [4–9], and with extremist and violent behavior [10,11]. As much of this research shows only associations between conspiracy beliefs and negative outcomes, further research is needed to draw any conclusions regarding causality. Nevertheless, as there are possible negative implications associated with conspiracy beliefs at both personal and societal levels, there is a need for interventions to lower the likelihood that people will engage uncritically with such theories.

One of the primary characteristics of conspiracy theories is that they are unfalsifiable. Specifically, any attempt to demonstrate that the claims made by conspiracy theories are false could be perceived as evidence of a cover-up of the “truth”, and institutions that attempt to debunk conspiracies are perceived as accomplices attempting to conceal such conspiracies [12,13]. The effectiveness of counterarguments on conspiracy beliefs is mixed. Several studies have found that counterarguments could change the conspiracy beliefs of participants [14]. However, other studies found that counterarguments had no effect on conspiracy attitudes [15,16]. Additionally, some research has found that counterarguments can, in particular circumstances, paradoxically strengthen conspiracy beliefs [17–19]. As such, simply arguing against conspiracy beliefs may not be a sufficient way of changing them.

Conspiracy interventions

The following section introduces and defines the various types of interventions that have been designed to influence conspiracy belief. It should be noted that much of the following research surrounding conspiracy interventions was designed based on evidence accumulated by studies that focus on misinformation and fake news in addition to conspiracy belief. Such papers frequently refer to misinformation, fake news, and conspiracy beliefs interchangeably when citing evidence to support the utility of a particular intervention. While the authors of this paper believe that misinformation, fake news, and conspiracy beliefs are distinct phenomena, the common characteristic of the studies in this area is that they attempt to change beliefs that persist even when sufficient counterevidence exists.

The first category of conspiracy interventions is the informational inoculation. The theory behind this intervention suggests that, much like a vaccine protects against disease, a pre-emptive debunking protects against subsequent exposure to conspiracy claims [20]. The inoculation consists of giving participants a weakened form of the conspiracy argument along with points that refute the claims made in the argument. So, whereas a direct counterargument or debunking attempts to confront conspiracy beliefs after people have been exposed to conspiracy theories, an informational inoculation confronts conspiracy beliefs before exposure. Previous experiments have shown these inoculations are effective in decreasing belief in fake news and political gossip, as well as conspiracy beliefs [20–23]. A meta-analysis has also shown that inoculations overall represent an effective means of challenging belief persistence, in situations where participants would otherwise be unwilling to change their beliefs despite contradictory evidence [24].

Another common type of intervention for changing conspiracy beliefs is priming. Priming interventions experimentally manipulate the psychological states of participants before they are asked about their conspiracy beliefs. For example, Swami et al. [25] found that priming participants to engage in analytical thinking significantly changed the likelihood that they would agree with conspiracy beliefs when tested. While priming interventions have shared features (i.e. they expose participants to a task or stimulus with the intention of subsequently influencing how they score on conspiracy measures), they are a heterogeneous group. Some priming interventions seek to alter one’s cognition by administering tasks that increase focus and analytical thinking [25]. Other priming interventions seek to influence one’s affective state, for example by altering feelings of ostracism [26]. Some researchers consider priming to be another form of informational inoculation, as it also consists of treatments that are administered before participants are exposed to conspiracy theories [27].

Other approaches attempt to use the persuasive power of stories to confront conspiracy beliefs, an approach often referred to as narrative persuasion [28]. The hypothesis is that the narrative and anecdotal nature of conspiracy theories is more appealing than the detached nature of scientific evidence. This hypothesis is supported by findings that personal anecdotes, both for and against vaccines, were far more persuasive than scientific evidence [29].

Conspiracy beliefs have been associated with a number of cognitive biases and logical fallacies, such as the conjunction fallacy [30], jumping-to-conclusions bias [31], and illusory pattern perception [32]. The unfounded conclusions drawn by those who hold conspiracy beliefs have been attributed to these biases. A number of interventions similar to those listed above, such as fostering effortful thought, have been successful in reducing susceptibility to some of the cognitive biases associated with conspiracy belief (e.g. the conjunction fallacy) [33]. However, it is difficult to draw conclusions regarding how these interventions change the underlying cognitive processes behind conspiracy belief, as many of these studies exclusively measure the extent to which people endorse conspiracy theories.

Current study

Systematic reviews within the area of conspiracy research have typically focused on identifying and understanding the individual differences and predictors of conspiracy belief [1,34]. There is yet to be a comprehensive review of the efficacy of various conspiracy interventions. While one systematic review focused on examining conspiracy interventions, it was solely focused on the use of narrative persuasion in the specific case of anti-vaccine conspiracy theories [28]. As there are growing calls to effectively combat widespread conspiracy beliefs, it is pertinent that we review the evidence for existing interventions. The COVID-19 pandemic and associated conspiracy theories highlighted the potentially harmful public health threats posed by unsubstantiated beliefs. By reviewing the evidence, we may gain a greater understanding of how to challenge conspiracy beliefs.

Methods

The protocol was written in accordance with the PRISMA-P 2015 Statement guidelines (Moher et al., 2015). PRISMA is a 27-item checklist of the preferred reporting items for a systematic review (see S1 File). The authors pre-registered this review at PROSPERO (https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021256263).

Search strategy

Three electronic databases were searched (PubMed, PsycINFO, Scopus). The search strategy of this review used two search categories: conspiracy theories, and reduction/intervention. The search strings were any combination of (conspirac* OR ’unsubstantiated beliefs’ OR ’implausible beliefs’ OR ’unsubstantiated claims’ OR ’unfounded beliefs’ OR truther) AND (intervention OR reduc* OR chang* OR alter* OR experiment OR persua*). The list of keywords was determined after repeated pre-checking.

To avoid missing potentially eligible papers and unduly narrowing the scope of the review, no filters (e.g., subject area, publication date) were applied in the database searches. Searches were conducted on 3rd June 2021.

Eligibility criteria

The PICO model [35] was used to determine the eligibility criteria for the current review. During a preliminary review of the literature, we found that very few papers had been published on the topic of conspiracy interventions. Furthermore, literature concerning conspiracy beliefs was found to use various terms to refer to such beliefs (e.g., ‘empirically unfounded beliefs’, ‘conspiratorial ideation’ etc.). Lastly, the research designs of studies researching conspiracy interventions vary considerably. As such, the authors concluded that using strict inclusion/exclusion criteria would exclude potentially useful papers from this review, and opted to use broad inclusion criteria to mitigate this possibility.

The inclusion criteria for this review consisted of the following:

Population:

  • Adults above the age of 18

  • Participants without clinical/psychiatric afflictions

Intervention:

  • Includes a measure of conspiracy belief (both general and specific measures were accepted).

  • Experimental condition/stimulus used with the intention of influencing the measurement of conspiracy belief.

Comparator:

  • A control condition either between or within-groups

Outcomes:

  • Difference in conspiracy belief between experimental and control conditions

The exclusion criteria for this review consisted of the following:

Population:

  • Participants under 18 years of age

  • Clinical populations

Intervention:

  • Correlation studies/observational measures.

Comparator:

  • Studies with no control/baseline condition for the experimental intervention condition

Outcomes:

  • No outcomes were excluded from this study.

Data management

References were collected from PsycINFO, PubMed, and Scopus, and then imported into EndNote, where they were screened for possible duplicates. Once the duplicates were removed, the remaining references were exported to Covidence.

Data review and screening

The screening process was conducted using the Covidence systematic review management online software. Screening consisted of two waves, title and abstract screening and full-text screening, in accordance with PRISMA standards [36]. Two reviewers (COM & MB) screened the papers at both the title/abstract and full-text stages. Conflicting votes were resolved by a third reviewer (GM), who made the final decision regarding the inclusion of such papers. Interrater reliability was assessed using Cohen’s kappa (κ). The level of agreement was found to be fair for title and abstract screening (κ = 0.40), and substantial for full-text screening (κ = 0.62). Initially, 16 papers passed the full-text screening. However, due to the broad nature of the inclusion criteria, a number of studies were deemed eligible but did not have sufficient data for analysis. As such, a second examination of the remaining papers was conducted to determine whether each paper had sufficient data to analyse. Studies that reported neither sufficient data to calculate Cohen’s d nor statistical significance were excluded from the analysis. One paper [37] was excluded because it reported its numerical findings solely in the form of graphs, making it difficult to extract exact values. Two papers were excluded as no relevant information could be obtained for our analysis. After the papers were screened for sufficient data, 13 articles in total were included in the final analysis. The PRISMA diagram (Fig 1) shows each stage of this process.
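The interrater agreement statistic used above can be illustrated with a short script. This is a minimal sketch of Cohen's kappa, which compares the observed agreement between two raters against the agreement expected by chance from each rater's marginal frequencies; the vote lists below are hypothetical, not the actual screening data.

```python
from collections import Counter

def cohens_kappa(rater1: list, rater2: list) -> float:
    """Cohen's kappa: (observed - expected agreement) / (1 - expected)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in c1) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude screening votes from two reviewers
r1 = ["include", "exclude", "include", "include", "exclude", "exclude"]
r2 = ["include", "exclude", "exclude", "include", "exclude", "include"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.33
```

On common benchmarks (Landis & Koch), values around 0.21–0.40 are "fair" and 0.61–0.80 "substantial", matching the labels the authors apply to κ = 0.40 and κ = 0.62.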

Fig 1. PRISMA flow diagram.

Fig 1

Data extraction and analysis

A number of papers included in the review consisted of more than a single study, leading to a total of 25 studies being included in the final review. The following data were extracted from the studies: (1) publication characteristics, such as author names, journal, and publication dates; (2) demographic characteristics, such as mean age, country, and target population; and (3) intervention characteristics, such as the type of intervention used, the conspiracy measure used, and whether the conspiracy measure was specific or generic.

From the 25 studies included in our review, we extracted 48 intervention comparisons (k) in total (see Table 1). The outcome measure of interest was the Cohen’s d effect size. For all papers that used paired group designs, the Cohen’s d had already been reported. Where Cohen’s d was not reported in a study, we calculated the d value for independent group designs using the mean difference and the pooled standard deviation [38]. Cohen’s d was calculated for each comparison using Microsoft Excel. Where there was insufficient data to calculate the d value, statistical significance was used to indicate whether the intervention successfully induced any change in belief. Quality assessment was conducted using the Risk of Bias 2 tool (Sterne et al., 2019; see S1 Fig) [39]. To avoid repetition, the Risk of Bias assessment was applied to the 25 studies rather than to each comparator, as each comparator was subject to the same randomisation process and measurement within its respective study.
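The authors computed these values in Excel; the same calculation for independent groups (mean difference divided by the pooled standard deviation [38]) can be sketched as follows. The group statistics here are illustrative, not values from any included study.

```python
import math

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    """Cohen's d for two independent groups: mean difference / pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# e.g. control vs. intervention group means on a conspiracy belief scale
d = cohens_d(mean1=4.2, sd1=1.1, n1=50, mean2=3.8, sd2=1.0, n2=50)
print(round(d, 2))  # → 0.38
```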

Table 1. Effectiveness of interventions in included studies.

Author Intervention Design Grouping Aware of Intervention Conspiracy measure Type of Measure p-value Cohen’s d
[40] Prevention Regulatory Focus Experimental Between-groups Unclear Cooperative Congressional Election Study (Ansolabehere, 2013) General 0.132 0.132
[40] Promotion Regulatory Focus Experimental Between-groups Unclear Cooperative Congressional Election Study (Ansolabehere, 2013) General 0.014* 0.37
[40] Increasing Perceived Control (Promotion) Experimental Between-groups Unclear Cooperative Congressional Election Study (Ansolabehere, 2013) Specific 0.017* 0.49
[40] Increasing Perceived Control (Prevention) Experimental Between-groups Unclear Cooperative Congressional Election Study (Ansolabehere, 2013) Specific 0.534 0.11
[41] Rationality Priming Experimental Between-groups Yes Conspiracist Mentality Questionnaire (Lantian, Muller, Nurra, & Douglas, 2016). General NR -0.016
[42] Ridiculing Beliefs Experimental Within-groups Yes Conspiracy Assessment Tool (CAT) General 0.001** 0.11
[42] Rational counterarguments Experimental Within-groups Yes Conspiracy Assessment Tool (CAT) General 0.001** 0.13
[42] Empathetic counterarguments Experimental Within-groups Yes Conspiracy Assessment Tool (CAT) General 0.09 0.05
[42] Ridiculing Beliefs Experimental Between-groups Yes Conspiracy Assessment Tool (CAT) General 0.059 0.2
[42] Rational counterarguments Experimental Between-groups Yes Conspiracy Assessment Tool (CAT) General 0.011* 0.27
[42] Empathetic counterarguments Experimental Between-groups Yes Conspiracy Assessment Tool (CAT) General 0.333 0.1
[21] Fact-based Inoculation Experimental Between-groups No Generic attitudes (Burgoon, Cohen, Miller, & Montgomery, 1978) Specific <0.001 1.311
[21] Logic-based Inoculation Experimental Between-groups No Generic attitudes (Burgoon, Cohen, Miller, & Montgomery, 1978) Specific <0.001 0.909
[21] Metainoculation (fact based)2 Experimental Between-groups No Generic attitudes (Burgoon, Cohen, Miller, & Montgomery, 1978) Specific NR 0.58
[21] Metainoculation (logic based) 2 Experimental Between-groups No Generic attitudes (Burgoon, Cohen, Miller, & Montgomery, 1978) Specific NR 0.58
[25] Analytical priming Experimental Between-groups Unclear Belief in Conspiracy Theories Inventory (BCTI Swami et al., 2010, 2011). General 0.018* 0.46
[25] Analytical priming Experimental Between-groups Unclear Belief in Conspiracy Theories Inventory (BCTI Swami et al., 2010, 2011). General 0.001** 0.49
[25] Analytical priming (Font) Experimental Between-groups Unclear 7/7 bombings questionnaire Specific 0.046* 0.34
[25] Analytical priming (Font) Experimental Between-groups No Generic Conspiracist Beliefs scale (GCBS; Brotherton, French, & Pickering, 2013) General 0.014* 0.42
[15] Pro-conspiracy arguments1 Quasi-Experimental Between-groups Yes Anti-vaccine Conspiracy Beliefs Measure Specific 0.001** -0.66
[15] Anti-conspiracy arguments Quasi-Experimental Between-groups Yes Anti-vaccine Conspiracy Beliefs Measure Specific 0.017* 0.42
[15] Anti-conspiracy/conspiracy arguments Quasi-Experimental Between-groups Yes Anti-vaccine Conspiracy Beliefs Measure Specific 0.047* -0.1
[15] Conspiracy/anti-conspiracy arguments1 Quasi-Experimental Between-groups Yes Anti-vaccine Conspiracy Beliefs Measure Specific 0.263 -0.39
[15] Anti-conspiracy/conspiracy arguments Quasi-Experimental Between-groups Yes Anti-vaccine Conspiracy Beliefs Measure Specific 0.1 0.62
[15] Conspiracy/anti-conspiracy arguments1 Quasi-Experimental Between-groups Yes Anti-vaccine Conspiracy Beliefs Measure Specific 0.001** 0.31
[43] Conspiracy labelling Experimental Between-groups No Generic Conspiracist Beliefs scale (GCBS; Brotherton, French, & Pickering, 2013) General >0.05 NR
[43] Conspiracy labelling Experimental Between-groups No Historical items General >0.05 -0.1
[43] Conspiracy labelling Experimental Between-groups No Adaptation of Wood et al, 2012 Specific >0.05 0.07
[27] Priming Resistance to Persuasion Experimental Between-groups No Generic Conspiracist Beliefs scale (GCBS; Brotherton, French, & Pickering, 2013) General 0.012* 0.56
[27] Priming Resistance to Persuasion Experimental Between-groups No Generic Conspiracist Beliefs scale (GCBS; Brotherton, French, & Pickering, 2013) General 0.029* 0.3
[27] Priming Resistance to Persuasion Experimental Between-groups No Generic Conspiracist Beliefs scale (GCBS; Brotherton, French, & Pickering, 2013) General 0.004* 0.37
[16] Debunking Experimental Between-groups Yes Generic Conspiracist Beliefs scale (GCBS; Brotherton, French, & Pickering, 2013) General 0.07 0.29
[16] Debunking, motives, fallacy Experimental Between-groups Yes Generic Conspiracist Beliefs scale (GCBS; Brotherton, French, & Pickering, 2013) General <0.05* 0.37
[44] High control priming Experimental Between-groups No Conspiracy Theory Ideation subscale of Conspiracy Mentality Scale (Stojanov & Halberstadt) General NR 0.11
[44] Low control priming1 Experimental Between-groups No Conspiracy Theory Ideation subscale of Conspiracy Mentality Scale (Stojanov & Halberstadt) General 0.07 0.27
[44] High control priming Experimental Between-groups Unclear Conspiracy Theory Ideation subscale of Conspiracy Mentality Scale (Stojanov & Halberstadt) General >0.05 -0.13
[44] Low control priming1 Experimental Between-groups Unclear Conspiracy Theory Ideation subscale of Conspiracy Mentality Scale (Stojanov & Halberstadt) General >0.05 -0.15
[44] Low control priming1 Experimental Between-groups No Conspiracy Theory Ideation subscale of Conspiracy Mentality Scale (Stojanov & Halberstadt) General 0.35 -0.13
[44] Low control priming1 Experimental Between-groups No Belief in Conspiracy Theories Inventory (BCTI Swami et al., 2010, 2011). Specific 0.88 0.02
[26] Ostracism priming1 Experimental Between-groups No Conspiracy beliefs of 14 political events Specific 0.02* NR
[26] Pain priming1 Experimental Between-groups No Conspiracy beliefs of 14 political events Specific 0.78 NR
[26] Ostracism priming1 Experimental Between-groups No Conspiracy beliefs of 6 political events Specific 0.03* NR
[26] Ostracism priming/self-affirmation3 Experimental Between-groups No 10-item version of the Generic Conspiracist Beliefs scale (GCBS; Brotherton, French, & Pickering, 2013), General 0.001** NR
[26] Ostracism priming/no affirmation1 Experimental Between-groups No 10-item version of the Generic Conspiracist Beliefs scale (GCBS; Brotherton, French, & Pickering, 2013), General 0.89 NR
[45] Narrative persuasion Experimental Between-groups Unclear Self-made 2-item measure General 0.086 NR
[45] Narrative persuasion Experimental Between-groups Unclear Conspiracist Mentality Questionnaire (Lantian, Muller, Nurra, & Douglas, 2016). General 0.47 NR
[46] Pseudoscience class Experimental Within-groups Unclear Conspiracy belief subscale of the Paranormal Beliefs Scale (Tobacyk, 2004). Specific < 0.0001** 1.07
[46] Research Methods class Experimental Within-groups Unclear Conspiracy belief subscale of the Paranormal Beliefs Scale (Tobacyk, 2004). Specific 0.14 NR

NR = Not Reported; Design = the experimental design used in the intervention; Experimental = the study used two independent groups; Quasi-experimental = the study used the same participants for both conditions; Aware of Intervention = whether participants were aware of the true nature of the experiment; Yes = participants were aware of the intervention’s true intentions; No = the true nature of the intervention was not revealed to participants; Unclear = the authors do not establish whether the participants were aware or unaware of the intervention; Type of Measure = whether the conspiracy measure assessed broad conspiracy ideation or examined a specific set of beliefs; General = the scale measured general conspiracy ideation; Specific = the scale measured belief in a specific set of conspiracy theories; * indicates p < 0.05; ** indicates p < 0.01. 1 = these interventions were designed to increase conspiracy beliefs, as a comparison for the other interventions. 2 = these metainoculations were designed to nullify the effects of the inoculation interventions used in prior studies, not to reduce conspiracy beliefs. 3 = these self-affirmations were designed to nullify the effects of ostracism on increasing conspiracy beliefs.

Results

Characteristics of studies

Out of the 13 articles included, the majority were conducted in English-speaking countries: US (3), UK (3). The other studies were conducted in France (2), Hungary (1), Macedonia (1), Hong Kong (1), and Belgium (1). It should be noted that, since the majority of studies were conducted online, a number of the studies did not use samples from their countries of origin. Of the three studies conducted in the UK, two used samples from the United States. Furthermore, the lone study conducted in Hong Kong also used participants from the United States. All studies were published between 2013 and 2020. As highlighted by the Risk of Bias assessment (see S1 Fig), most studies were not preregistered. All papers were peer-reviewed and had English language versions available.

This review aimed to identify the various interventions that have been studied as a means of challenging conspiracy beliefs. We identified a number of different intervention types within the review. The first type of intervention was the informational inoculation. These inoculation interventions had two variations: one drew attention to the factual inaccuracies of conspiracy theories, and the other described the logical fallacies in the thought processes underpinning conspiracy beliefs [15,21]. The second type of intervention identified was priming. As with inoculation interventions, priming interventions also had several variations. One variation used priming as a means of manipulating participants’ sense of control [44]. These control priming interventions were typically coupled with a second priming task to examine how feelings of control interacted with other variables. Another variation consisted of inducing an analytical mindset in participants by having them complete tasks that required mental acuity (e.g., reading difficult-to-read fonts) [25]. The third type of intervention identified was the counterargument intervention, which took the approach of providing counterarguments to conspiracy beliefs. One variation included arguments that appealed to participants’ sense of empathy by describing the negative consequences conspiracy beliefs can have [42]. Another attempted to argue against conspiracy beliefs by ridiculing those who believe in them [42]. Finally, a number of standalone interventions were found in our sample which did not fit into any of the intervention types described above, such as an educational intervention and a labelling intervention. An overview of the characteristics of each study in the review is provided in S1 Table.

Efficacy of interventions

Overview of included interventions.

A complete overview of the 48 comparisons analysed within this review, along with their effect sizes and p-values, is provided in Table 1. All of the comparisons analysed in this review used self-reported measures of conspiracy belief. Conspiracy belief measures fall into one of two categories: general measures and specific measures. General measures assess an individual’s general disposition toward conspiracy ideation based on the extent to which they endorse conspiratorial themes (e.g., that certain events are the result of a small malicious group who control the world) [47]. Specific measures assess the extent to which people endorse specific conspiracy theories, such as the theory that the September 11 attacks were orchestrated by the US government [48]. As these two types of measures differ, it should be noted that the implications of decreasing scores on a specific measure may differ from those of decreasing scores on a general measure [49,50]. Furthermore, the correlates of the two types of measures differ. An intervention that changes how one endorses anti-vaccination conspiracy theories may not necessarily affect how one scores on a measure of general conspiracy ideation. As noted previously [49], general measures of conspiracy beliefs are more stable and less malleable than their specific counterparts.

Approximately 60% of comparisons used some form of measure that examined generalised conspiracy belief and ideation (k = 28), while the rest used measures that assessed belief in specific conspiracy theories (k = 20). Only a small number of comparisons utilised a within-group design (k = 5), meaning the majority used between-groups designs. Overall, 21 (~40%) interventions successfully changed conspiracy beliefs when compared to control, baseline, or neutral groups. However, only a small number of these interventions demonstrated large effect sizes (k = 3), and medium effects were found for only a handful of interventions (k = 2). The remaining interventions were found to have only small or very small effects.

Interventions aimed at reducing conspiracy beliefs

The effect sizes of interventions aimed at reducing conspiracy beliefs are reported in Fig 2. Four interventions aimed at reducing conspiracy beliefs were omitted from the graph as they only reported statistical significance, and the Cohen’s d could not be calculated. These interventions were a single conspiracy labelling intervention (k = 1), narrative persuasion interventions (k = 2), and a research methods module (k = 1). This section also omits any interventions that aimed to increase conspiracy beliefs.

Fig 2. Cohen’s d effect sizes of interventions aimed at reducing conspiracy beliefs.

Fig 2

Note: This figure has been adapted from [51].

Approximately half of the examined interventions consisted of priming-based tasks (k = 15). The majority of these interventions demonstrated a significant change in conspiracy beliefs (k = 10), though these effects were all either small or very small. Across three experimental comparisons, participants who were primed to be less susceptible to persuasion tactics showed significantly lower conspiracy beliefs than controls, with effects ranging from small to medium (d = 0.3–0.56). Interventions that primed participants to engage in analytical thinking resulted in primed participants having lower conspiracy belief than controls (k = 4), although the effects of these differences were small. Other priming interventions focused on manipulating participants’ sense of control. Priming participants to feel like they had a higher sense of control had mixed results, either increasing or decreasing conspiracy beliefs with very small effects (d = -0.13–0.11) [44].

About a sixth of the interventions in the analysis used inoculation methods (k = 5). All were successful at reducing conspiracy beliefs relative to controls, with either medium or large effects. Inoculations that identified the factual inaccuracies of conspiracy beliefs were found to be the most effective of all the interventions in the review (d = 1.3) [21]. Jolley and Douglas [15] reported only medium effects for inoculation methods in terms of reducing scores on conspiracy belief measures. Inoculations that demonstrated the logical fallacies of conspiracy beliefs were found to be the second most effective intervention (d = 0.9) [21].

In contrast to inoculations, traditional counterarguments produced only small and very small effects. Rational counterarguments that described the factual inaccuracies of conspiracy theories were found to have only very small to small effects (d = 0.13–0.27) [42]. Stojanov [16] reported similar results, with counterarguments producing only small effects in terms of changing conspiracy beliefs (d = 0.29). These counterarguments were found to be slightly more effective when they included an extra component that described to participants both the motives behind conspiracy theories and the logical fallacies in conspiratorial thinking (d = 0.37). Counterarguments that appealed to participants’ sense of empathy by outlining the damage that can result from conspiracy beliefs were found to have very small effects (d = 0.05–0.1), making them among the least effective interventions in the sample [42]. Finally, counterarguments that attempted to ridicule those who held conspiracy beliefs also produced only very small effects in terms of reducing conspiracy beliefs (d = 0.11–0.2) [42].

We also found a number of standalone studies that utilised novel interventions to confront conspiracy belief. One study used an in-person, 3-month educational course aimed at explicitly teaching students how to differentiate between proper scientific and pseudoscientific practices. At the end of the term, the pseudoscience class reported significantly lower conspiracy belief in comparison to a typical university research methods class [46], and the effect was large (d = 1.07). Furthermore, this was one of the few within-subjects studies, and the sole study to use a longitudinal design (3-month interval between pre- and post-measures). Another study found that labelling conspiracy statements as “conspiracy theories”, in comparison to “ideas” or “political scandals”, had no effect on belief in conspiracy theories (p > 0.05, d = -0.1–0.07) [43].

A handful of studies neither reported Cohen’s d effect sizes nor provided sufficient information to calculate them. As per the protocol, in these cases we report the statistical significance of the interventions instead. One study investigating the effects of narrative persuasion on conspiracy belief found no significant effects [45].
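As an illustration of the effect-size convention used throughout this review, Cohen’s d for a two-group comparison is the difference in group means divided by the pooled standard deviation. The following is a minimal sketch using hypothetical values, not data drawn from any study in the review:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical example: a control group scores higher on a conspiracy-belief
# scale than an inoculation group, yielding a medium effect by convention
# (d ~ 0.2 small, ~ 0.5 medium, ~ 0.8 large).
d = cohens_d(mean1=4.0, sd1=1.0, n1=50, mean2=3.5, sd2=1.0, n2=50)
print(round(d, 2))  # 0.5
```

This also shows why d cannot be recovered from a p-value alone: the group means, standard deviations, and sample sizes are all required.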

Finally, our analysis revealed a number of “backfiring” interventions: those that aimed to reduce conspiracy belief but paradoxically increased it. While labelling conspiracy theories did not produce any significant effects overall, one study found that it had a very small effect in increasing conspiracy beliefs (d = -0.1). Likewise, priming participants to feel a higher sense of control increased conspiracy beliefs in one study (d = -0.13). Anti-conspiracy inoculations followed by pro-conspiracy arguments, while among the most effective interventions in this review at reducing conspiracy beliefs, also increased conspiracy beliefs in one study, albeit with a very small effect (d = -0.1).

Interventions aimed at increasing conspiracy beliefs.

In addition to the interventions shown in Fig 2, which were all designed to reduce conspiracy belief, the review also identified a number of interventions that were not intended to reduce conspiracy belief (and are therefore not shown in Fig 2). Most of these served as comparisons for belief-reducing interventions within the same study. One study used informational inoculations in which participants were presented with arguments supporting conspiracy beliefs before being shown conspiratorial content [15]. This intervention had a medium effect in increasing conspiracy beliefs (d = -0.66). Within the same article, two interventions similarly used pro-conspiracy inoculations followed by anti-conspiracy arguments, to test whether the order in which these inoculations were presented would influence their effects. The results were mixed, ranging from a small increase to a small decrease in conspiracy beliefs (d = -0.39–0.31). Another study primed participants to feel lower levels of perceived control with the intention of increasing conspiracy beliefs (k = 4). Two of these low-control priming interventions had very small effects in increasing conspiracy beliefs (d = -0.13 –-0.15), while the remaining two had very small to small effects in decreasing conspiracy beliefs (d = 0.02–0.27). Finally, one study tested meta-inoculations, in which researchers told participants what inoculations were, to investigate whether the initial effects of informational inoculations could be negated. Meta-inoculations proved reasonably effective in changing conspiracy beliefs, moderately negating the effects of anti-conspiracy inoculations with a medium effect (d = 0.58).
Another study found that participants primed to feel ostracised were more likely to hold conspiracy beliefs than control groups (p = 0.02) [26]. However, when participants in the ostracism condition were instructed to think of values that were important to them, the effects of the ostracism priming were significantly negated [26]. The study reported no effect sizes for these interventions.

Discussion

The aim of this review was to assess how researchers are currently developing interventions to reduce susceptibility to conspiracy beliefs. We found that only half of the interventions examined induced any change in the degree to which participants agreed with conspiracy statements, and only a handful produced medium to large effects. These findings suggest that most existing conspiracy interventions are ineffective in terms of changing conspiracy beliefs. This is in line with previous research suggesting that conspiratorial thinking may be underpinned by inherent susceptibilities [52] and may not be easily refuted with standard counterarguments [13,17,19].

The few successful interventions in our sample shared a number of common characteristics. The majority consisted of treatments given before participants were exposed to conspiracy statements. The first category consisted of informational inoculations, to which the two interventions with the largest observed effect sizes belong. These findings suggest that the most effective conspiracy interventions are those that take place before participants have been exposed to conspiracy beliefs. Pre-emptively refuting the inaccuracies of conspiracy beliefs was much more effective than presenting counterarguments after participants had encountered conspiracy theories: traditional counterarguments and debunking strategies produced only very small effects, whereas inoculation strategies produced medium to large effects. However, one pitfall of inoculation-based interventions is that their ability to reduce conspiracy belief was easily counteracted. When participants were warned beforehand that an inoculation would attempt to change their conspiracy beliefs, the effects of the inoculation treatment were rendered negligible [21]. These warnings, referred to as meta-inoculations, themselves produced effects in the medium range; in essence, the effectiveness of inoculation strategies can be turned against itself simply by warning participants in advance.

The second category of successful interventions consisted of priming interventions, particularly those that attempted to instill critical thinking skills in participants. Swami et al [25] found that having participants evaluate pieces of text printed in difficult-to-read fonts made them more likely to engage in analytical thinking. Other research has shown that participants primed to feel more in control reported lower conspiracy beliefs than a baseline group. Recent research has also found that participants primed to have a higher resistance to persuasion were less susceptible to conspiracy beliefs than those in a control group [27]. These findings are noteworthy in light of recent research on how people engage with conspiracy theories. Previous work indicates that conspiracy beliefs are the product of intuitive, emotional thinking, and that deliberation and focused analysis are associated with lower susceptibility to conspiracy beliefs [53,54]. As such, fostering critical and analytical thinking through priming might motivate participants to push past superficial evaluations and examine the content of conspiracy beliefs. While the possibility of priming members of the public to engage with conspiracy beliefs more critically may be promising, the feasibility of such an intervention outside a controlled lab environment has yet to be ascertained, and it has not been demonstrated that a single priming session is sufficient to foster critical evaluation of conspiracy theories. It should also be noted that all the analytical priming interventions included in this review originate from a single article reporting four studies [25]; independent replication of these results is needed. Furthermore, there are drawbacks associated with the manipulation checks used in these interventions.
Participants’ susceptibility to the Moses Illusion was used as a manipulation check to assess whether difficult-to-read fonts successfully elicited analytical processing. However, previous research has failed to replicate the effect of font readability on susceptibility to the Moses Illusion [55].

Among the interventions that reported medium-to-large effects, one was characteristically different from the rest. This intervention used a university module to teach students to differentiate between good scientific practice and pseudoscience [46]. After taking the module, students endorsed far fewer conspiracy theories. This was the sole study in the review with a large effect size that did not administer the intervention before participants were exposed to conspiracy statements.

It should be noted that the three intervention types that were most effective in this review are difficult to administer in real-world settings. Informational inoculations need to reach people before they encounter a conspiracy theory, which would be difficult to guarantee in practice. Priming effects are difficult to maintain, as people encounter a variety of new information sources every day. Finally, educational interventions require a substantial amount of effort and commitment from both educators and participants. Practically speaking, none of these interventions provides an easy solution to conspiracy beliefs. As such, future research should examine whether the mechanisms of long-form interventions, such as the three-month university pseudoscience module [46], can be condensed into short-form counterparts. Research in this area may help bridge the gap between interventions that are effective but unscalable and interventions that can realistically be implemented in everyday life. To efficiently counter the misinformation spread by conspiracy theories, future interventions will need to be far easier to implement.

Limitations of studies examined in the review

There were a number of limitations in the studies examined in this review. The main limitations identified were: reliance on a single form of research design, a lack of cross-cultural sampling, and issues with the measurement of conspiracy beliefs. Firstly, the majority of the studies in this sample used cross-sectional designs, and where within-groups comparisons were used, they covered relatively short intervals. As such, it cannot be ascertained whether the beneficial effects of the interventions persist over time; the studies examined only short-term effects. It is therefore difficult to conclude whether even the most effective intervention found in this review would be of much use in tackling conspiracy beliefs outside the lab. This limitation also applies to interventions intended to temporarily alter participants’ psychological states (e.g., sense of control): as these studies aim only to temporarily change psychological states, any observed effects would be expected to last only until the experimental manipulation wore off. Furthermore, as identified by Swire-Thompson et al [56], one of the main issues with cross-sectional studies that rely exclusively on post-intervention measures of belief is that there is no way of establishing whether the groups were matched at baseline; the conspiracy beliefs of either the control group or one of the experimental conditions may already have been stronger than those of the other groups before the intervention.

The second limitation we identified is the lack of cultural diversity in the samples tested. Ideally, studies that aim to identify widely applicable conspiracy interventions should use cross-cultural designs to verify that their effects are not limited to a specific cultural context. The studies in this review were predominantly conducted with Western samples, the majority coming from either North America or the United Kingdom. While some studies, such as Poon et al [26], were conducted outside the West, many of these nonetheless recruited American samples using Amazon’s Mechanical Turk. It should also be noted that the 24 studies included in this review represent only 10 groups of authors, and half of these studies originate from three research groups, which may suggest a possible source of bias.

Finally, many of the studies used novel, custom measures of conspiracy ideation to measure conspiracy belief. This lack of consistency in measurement makes it difficult to generalize the positive findings of these studies. Many studies examined belief only in specific conspiracy theories; such findings can be difficult to generalize, as an intervention that successfully changes anti-vaccine conspiracy beliefs may not be effective against 9/11 conspiracy beliefs. Furthermore, a number of the conspiracy belief measures used in these studies were self-designed, even when general conspiracy belief was being measured. Finally, relying exclusively on measures of conspiracy belief makes it difficult to draw conclusions about how these interventions affect the underlying cognitive processes behind conspiracy beliefs. More consistent use of valid and reliable conspiracy measures would be desirable in future studies.

Limitations of the current systematic review

There were also some possible limitations in the systematic review itself. The outcome of interest was the Cohen’s d effect size, meaning that this review focused mainly on pairwise comparisons. While this allowed for consistency when comparing interventions, it also came with drawbacks. For example, a number of studies examined additional variables that may have mediated the changes induced by the interventions; as this review focused on pairwise comparisons, some of these details may not have been examined.

Implications for research / practice / design

This review identified a number of directions for future research. Firstly, we recommend that future studies on conspiracy belief interventions use longitudinal designs: understanding the effects of these interventions beyond their immediate changes is important for identifying how effective they are at challenging conspiracy beliefs. Secondly, we recommend that future studies be conducted with non-Western populations, with a greater focus on cross-cultural sampling. Conspiracy beliefs may often be specific to a cultural context; as such, we recommend that research be expanded beyond British and American samples.

In terms of practical implications for challenging conspiracy beliefs, we recommend that those with an interest in reducing the misinformation that conspiracy theories spread should do the following:

  1. Avoid appealing to emotions and affect: Interventions that manipulated participants’ emotional state, or appealed to feelings of empathy, had only very small effects in terms of changing conspiracy beliefs.

  2. Counterarguments are not effective: Counterarguments against specific conspiracy beliefs that are given after participants have been exposed to a conspiracy theory tend not to be particularly effective.

  3. Prevention is the best cure: Interventions that provided counterarguments for conspiracy theories were most effective when the counterargument came before participants were exposed to the particular conspiracy theories that the study focused on. The findings suggest it is more difficult to challenge conspiracy beliefs once participants have started to believe in them. If participants have been taught why certain conspiracy theories are implausible before they are exposed to conspiratorial media, they are much more resistant to conspiracy beliefs.

  4. An analytical mindset and critical thinking skills are the most effective means of challenging conspiracy beliefs: Participants who were primed to have an analytical mindset were less likely to have conspiracy beliefs than controls. Furthermore, when interventions moved beyond putting participants in an analytical mindset, and actually explicitly taught them how to evaluate conspiracy beliefs using specific critical thinking skills, they were much less likely to have conspiracy beliefs.

Conclusion

In conclusion, this review sought to investigate how effective current interventions are at changing conspiracy beliefs. We found that overall, the majority of current conspiracy interventions are ineffective in terms of changing conspiracy beliefs. Despite this, we have identified several promising interventions that may be fruitful to pursue in future studies. We propose that a focus on inoculation-based and critical thinking interventions will bear more promising results for future research, though further efforts are needed to reduce participant burden and more easily implement these interventions in the real world.

Supporting information

S1 Table. Demographic and publication details of studies included in the review.

(DOCX)

S1 Fig. Risk of bias assessment.

(TIF)

S1 File. PRISMA checklist.

(DOCX)

Acknowledgments

We thank Dr. Rebecca Umbach (Google) for comments on the manuscript. We thank Dr. Joe Campbell (University of Nottingham) for providing his LaTeX script to help develop Fig 1 in this manuscript.

Data Availability

The data collected and analysed in this study can be found on the Open-Science Framework (https://osf.io/6yjt3/).

Funding Statement

Awardee: COM Grant Number: EPSPG/2021/212 Funder: Irish Research Council URL: https://research.ie/ The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Douglas KM, Sutton RM, Cichocka A. The Psychology of Conspiracy Theories. Curr Dir Psychol Sci. 2017;26: 538–542. doi: 10.1177/0963721417718261 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Goldacre B. Bad pharma: how drug companies mislead doctors and harm patients. London: Fourth Estate; 2012. [Google Scholar]
  • 3.Plait PC. Bad astronomy: misconceptions and misuses revealed, from astrology to the moon landing “hoax.” New York: Wiley; 2002. [Google Scholar]
  • 4.Bierwiaczonek K, Kunst JR, Pich O. Belief in covid‐19 conspiracy theories reduces social distancing over time. Appl Psychol Health Well-Being. 2020. doi: 10.1111/aphw.12223 [DOI] [PubMed] [Google Scholar]
  • 5.Imhoff R, Dieterle L, Lamberty P. Resolving the puzzle of conspiracy worldview and political activism: Belief in secret plots decreases normative but increases nonnormative political engagement. Soc Psychol Personal Sci. 2021;12: 71–79. doi: 10.1177/1948550619896491 [DOI] [Google Scholar]
  • 6.Knobel P, Zhao X, White KM. Do conspiracy theory and mistrust undermine people’s intention to receive the COVID‐19 vaccine in Austria? J Community Psychol. 2022;50: 1269–1281. doi: 10.1002/jcop.22714 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Allington D, McAndrew S, Moxham-Hall V, Duffy B. Coronavirus conspiracy suspicions, general vaccine attitudes, trust and coronavirus information source as predictors of vaccine hesitancy among uk residents during the covid-19 pandemic. Psychol Med. 2021. doi: 10.1017/S0033291721001434 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Bierwiaczonek K, Gundersen AB, Kunst JR. The role of conspiracy beliefs for COVID-19 health responses: A meta-analysis. Curr Opin Psychol. 2022;46: 101346. doi: 10.1016/j.copsyc.2022.101346 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Klimiuk K, Czoska A, Biernacka K, Balwicki Ł. Vaccine misinformation on social media—topic-based content and sentiment analysis of Polish vaccine-deniers’ comments on Facebook. Hum Vaccines Immunother. 2021; 1–10. doi: 10.1080/21645515.2020.1850072 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Jolley D, Paterson JL. Pylons ablaze: Examining the role of 5G COVID-19 conspiracy beliefs and support for violence. Br J Soc Psychol. 2020;59: 628–640. doi: 10.1111/bjso.12394 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Bartlett J, Miller C. The power of unreason: Conspiracy theories, extremism and counter-terrorism. Demos London; 2010. [Google Scholar]
  • 12.Grimes DR. On the viability of conspiratorial beliefs. PLoS ONE. 2016;11. doi: 10.1371/journal.pone.0147905 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Lewandowsky S, Oberauer K, Gignac GE. NASA faked the moon landing—therefore, (climate) science is a hoax: An anatomy of the motivated rejection of science. Psychol Sci. 2013;24: 622–633. doi: 10.1177/0956797612457686 [DOI] [PubMed] [Google Scholar]
  • 14.Swami V, Pietschnig J, Tran US, Nader IW, Stieger S, Voracek M. Lunar Lies: The Impact of Informational Framing and Individual Differences in Shaping Conspiracist Beliefs About the Moon Landings: Conspiracy theories. Appl Cogn Psychol. 2013;27: 71–80. doi: 10.1002/acp.2873 [DOI] [Google Scholar]
  • 15.Jolley D, Douglas KM. Prevention is better than cure: Addressing anti‐vaccine conspiracy theories. J Appl Soc Psychol. 2017;47: 459–469. doi: 10.1111/jasp.12453 [DOI] [Google Scholar]
  • 16.Stojanov A. Reducing conspiracy theory beliefs. Psihologija. 2015;48: 251–266. doi: 10.2298/PSI1503251S [DOI] [Google Scholar]
  • 17.Nyhan B, Reifler J, Ubel PA. The Hazards of Correcting Myths About Health Care Reform. Med Care. 2013;51: 127–132. doi: 10.1097/MLR.0b013e318279486b [DOI] [PubMed] [Google Scholar]
  • 18.Nyhan B, Reifler J, Richey S, Freed GL. Effective Messages in Vaccine Promotion: A Randomized Trial. Pediatrics. 2014;133: e835–e842. doi: 10.1542/peds.2013-2365 [DOI] [PubMed] [Google Scholar]
  • 19.Nyhan B, Reifler J. Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine. 2015;33: 459–464. doi: 10.1016/j.vaccine.2014.11.017 [DOI] [PubMed] [Google Scholar]
  • 20.Lewandowsky S, van der Linden S. Countering Misinformation and Fake News Through Inoculation and Prebunking. Eur Rev Soc Psychol. 2021. doi: 10.1080/10463283.2021.1876983 [DOI] [Google Scholar]
  • 21.Banas JA, Miller G. Inducing Resistance to Conspiracy Theory Propaganda: Testing Inoculation and Metainoculation Strategies. Hum Commun Res. 2013;39: 184–207. doi: 10.1111/hcre.12000 [DOI] [Google Scholar]
  • 22.Roozenbeek J, van der Linden S. Fake news game confers psychological resistance against online misinformation. Palgrave Commun. 2019;5: 65. doi: 10.1057/s41599-019-0279-9 [DOI] [Google Scholar]
  • 23.van der Linden S, Roozenbeek J, Compton J. Inoculating Against Fake News About COVID-19. Front Psychol. 2020;11. doi: 10.3389/fpsyg.2020.566790 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Banas JA, Rains SA. A Meta-Analysis of Research on Inoculation Theory. Commun Monogr. 2010;77: 281–311. doi: 10.1080/03637751003758193 [DOI] [Google Scholar]
  • 25.Swami V, Voracek M, Stieger S, Tran US, Furnham A. Analytic thinking reduces belief in conspiracy theories. Cognition. 2014;133: 572–585. doi: 10.1016/j.cognition.2014.08.006 [DOI] [PubMed] [Google Scholar]
  • 26.Poon K-T, Chen Z, Wong W-Y. Beliefs in conspiracy theories following ostracism. Pers Soc Psychol Bull. 2020;46: 1234–1246. doi: 10.1177/0146167219898944 [DOI] [PubMed] [Google Scholar]
  • 27.Bonetto E, Troïan J, Varet F, Lo Monaco G, Girandola F. Priming Resistance to Persuasion decreases adherence to Conspiracy Theories*. Soc Influ. 2018;13: 125–136. doi: 10.1080/15534510.2018.1471415 [DOI] [Google Scholar]
  • 28.Lazić A, Žeželj I. A systematic review of narrative interventions: Lessons for countering anti-vaccination conspiracy theories and misinformation. Public Underst Sci. 2021. doi: 10.1177/09636625211011881 [DOI] [PubMed] [Google Scholar]
  • 29.Xu Z. Personal stories matter: topic evolution and popularity among pro- and anti-vaccine online articles. J Comput Soc Sci. 2019;2: 207–220. doi: 10.1007/s42001-019-00044-w [DOI] [Google Scholar]
  • 30.Brotherton R, French CC. Belief in Conspiracy Theories and Susceptibility to the Conjunction Fallacy. Appl Cogn Psychol. 2014;28: 238–248. doi: 10.1002/acp.2995 [DOI] [Google Scholar]
  • 31.Pytlik N, Soll D, Mehl S. Thinking Preferences and Conspiracy Belief: Intuitive Thinking and the Jumping to Conclusions-Bias as a Basis for the Belief in Conspiracy Theories. Front Psychiatry. 2020;11. doi: 10.3389/fpsyt.2020.568942 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.van Prooijen J-W, Douglas KM, De Inocencio C. Connecting the dots: Illusory pattern perception predicts belief in conspiracies and the supernatural: Illusory pattern perception. Eur J Soc Psychol. 2018;48: 320–335. doi: 10.1002/ejsp.2331 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Scherer LD, Yates JF, Baker SG, Valentine KD. The Influence of Effortful Thought and Cognitive Proficiencies on the Conjunction Fallacy: Implications for Dual-Process Theories of Reasoning and Judgment. Pers Soc Psychol Bull. 2017;43: 874–887. doi: 10.1177/0146167217700607 [DOI] [PubMed] [Google Scholar]
  • 34.Mulukom V van, Pummerer L, Alper S, Bai H, Čavojová V, Farias J, et al. Antecedents and consequences of COVID-19 conspiracy beliefs: a systematic review. 2020. doi: 10.31234/osf.io/u8yah. [DOI] [PMC free article] [PubMed]
  • 35.Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP J Club. 1995;123: A12–13. [PubMed] [Google Scholar]
  • 36.Moher D, Liberati A, Tetzlaff J, Altman DG, for the PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339: b2535–b2535. doi: 10.1136/bmj.b2535 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Einstein KL, Glick DM. Do I Think BLS Data are BS? The Consequences of Conspiracy Theories. Polit Behav. 2015;37: 679–701. doi: 10.1007/s11109-014-9287-z [DOI] [Google Scholar]
  • 38.Murphy KR, Myors B, Wolach A. Statistical Power Analysis. 0 ed. Routledge; 2014. doi: 10.4324/9781315773155 [DOI] [Google Scholar]
  • 39.Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019; l4898. doi: 10.1136/bmj.l4898 [DOI] [PubMed] [Google Scholar]
  • 40.Whitson JA, Kim J, Wang CS, Menon T, Webster BD. Regulatory focus and conspiratorial perceptions: The importance of personal control. Pers Soc Psychol Bull. 2019;45: 3–15. doi: 10.1177/0146167218775070 [DOI] [PubMed] [Google Scholar]
  • 41.Adam-Troian J, Caroti D, Arciszewski T, Ståhl T. Unfounded beliefs among teachers: The interactive role of rationality priming and cognitive ability. Appl Cogn Psychol. 2019;33: 720–727. doi: 10.1002/acp.3547 [DOI] [Google Scholar]
  • 42.Orosz G, Krekó P, Paskuj B, Tóth-Király I, Bőthe B, Roland-Lévy C. Changing conspiracy beliefs through rationality and ridiculing. Front Psychol. 2016;7. doi: 10.3389/fpsyg.2016.01525 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Wood MJ. Some dare call it conspiracy: Labeling something a conspiracy theory does not reduce belief in it. Polit Psychol. 2016;37: 695–705. doi: 10.1111/pops.12285 [DOI] [Google Scholar]
  • 44.Stojanov A, Bering JM, Halberstadt J. Does perceived lack of control lead to conspiracy theory beliefs? Findings from an online mturk sample. PLoS ONE. 2020;15. doi: 10.1371/journal.pone.0237771 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Nera K, Pantazi M, Klein O. “These are just stories, Mulder”: Exposure to conspiracist fiction does not produce narrative persuasion. Front Psychol. 2018;9. doi: 10.3389/fpsyg.2018.00684 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Dyer KD, Hall RE. Effect of Critical Thinking Education on Epistemically Unwarranted Beliefs in College Students. Res High Educ. 2019;60: 293–314. doi: 10.1007/s11162-018-9513-3 [DOI] [Google Scholar]
  • 47.Brotherton R, French CC, Pickering AD. Measuring Belief in Conspiracy Theories: The Generic Conspiracist Beliefs Scale. Front Psychol. 2013;4. doi: 10.3389/fpsyg.2013.00279 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Swami V, Chamorro-Premuzic T, Furnham A. Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs. Appl Cogn Psychol. 2010;24: 749–761. doi: 10.1002/acp.1583 [DOI] [Google Scholar]
  • 49.Imhoff R, Bertlich T, Frenken M. Tearing apart the “evil” twins: A general conspiracy mentality is not the same as specific conspiracy beliefs. Curr Opin Psychol. 2022;46: 101349. doi: 10.1016/j.copsyc.2022.101349 [DOI] [PubMed] [Google Scholar]
  • 50.Sutton RM, Douglas KM. Conspiracy theories and the conspiracy mindset: implications for political ideology. Curr Opin Behav Sci. 2020;34: 118–122. doi: 10.1016/j.cobeha.2020.02.015 [DOI] [Google Scholar]
  • 51.Marshall J, Linehan C. Are Exergames Exercise? A Scoping Review of the Short-Term Effects of Exertion Games. IEEE Trans Games. 2021;13: 160–169. doi: 10.1109/TG.2020.2995370 [DOI] [Google Scholar]
  • 52.Enders AM, Uscinski JE, Klofstad CA, Seelig MI, Wuchty S, Murthi MN, et al. Do conspiracy beliefs form a belief system? Examining the structure and organization of conspiracy beliefs. J Soc Polit Psychol. 2021;9: 255–271. doi: 10.5964/jspp.5649 [DOI] [Google Scholar]
  • 53.Bago B, Rand DG, Pennycook G. Does deliberation decrease belief in conspiracies? J Exp Soc Psychol. 2022;103: 104395. doi: 10.1016/j.jesp.2022.104395 [DOI] [Google Scholar]
  • 54.Forgas JP, Baumeister RF, editors. The Social Psychology of Gullibility: Fake News, Conspiracy Theories, and Irrational Beliefs. 1st ed. Routledge; 2019. doi: 10.4324/9780429203787 [DOI] [Google Scholar]
  • 55.Janouskova A, Kocyan J, Simova M, Uvirova K, Zahradnickova K, Vaculik M, et al. The effect of font readability on the Moses illusion: A replication study. Conscious Cogn. 2022;99: 103284. doi: 10.1016/j.concog.2022.103284 [DOI] [PubMed] [Google Scholar]
  • 56.Swire-Thompson B, DeGutis J, Lazer D. Searching for the Backfire Effect: Measurement and Design Considerations. J Appl Res Mem Cogn. 2020;9: 286–299. doi: 10.1016/j.jarmac.2020.06.006 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Pierluigi Vellucci

2 Nov 2022

PONE-D-22-25039The efficacy of interventions in reducing belief in conspiracy theories: a systematic reviewPLOS ONE

Dear Dr. O'Mahony,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Your paper has been revised by four expert reviewers. They highlighted significant changes to be made to improve your paper. You are therefore invited to make the best use of those suggestions to provide a revised version of your paper.

==============================

Please submit your revised manuscript by Dec 17 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Pierluigi Vellucci

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. Please amend either the abstract on the online submission form (via Edit Submission) or the abstract in the manuscript so that they are identical.

3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. 

4. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 2 in your text; if accepted, production will need this reference to link the reader to the Table.

5. Please include your tables as part of your main manuscript and remove the individual files. Please note that supplementary tables should remain/be uploaded as separate "supporting information" files.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is an interesting paper on an increasingly popular topic of study: conspiratorial thought (CT). While this area of research has been studied extensively, especially in the time of COVID and Trump, the current study is a novel meta-analysis of CT interventions. While most papers look at the causes and consequences of CT, as shown in the paper far fewer test methods to combat this style of thinking. I have comments below I'd like to see addressed, but overall this seems like a useful contribution to this ever-growing area of research.

-Abstract/Discussion: The premise that CT is on the rise is not accurate. Please see Uscinski et al. (2022a) for a test of this hypothesis. This is not a fatal flaw because, as the authors articulate well, there are many negative externalities associated with CT (regardless of whether it is on the increase). That said, I would not frame the paper around the assumption that CT is having a renaissance.

-Pages 6, 9: For readers who do not conduct meta analyses themselves, please define PRISMA more clearly. Please apply this comment to any other meta-analysis terminology that might not be familiar to a lay reader.

-Page 6: Why was Google Scholar not included in the search?

-Table 1: Given the amount of information here, can one perform a multivariate analysis? For example, with Cohen's d as the DV, and the other columns in the table as IVs?

-Page 20/Figure 2: Is it possible to add CIs to the Cohen's d plots? I would also sort this figure ascending, rather than by intervention type, to group the most/least effective interventions together. This will tie more directly into the discussion of patterns in the results on pp. 13-22.

-Pages 22ff: A key takeaway of the paper is that CT is a deeply-embedded psychological characteristic that is hard to change. I would discuss this a bit more. Please see work by Uscinski et al. on this (2022a,b; 2021; 2016).

-Pages 25-26: In the discussion of limitations I would note that the 24 studies represent only 10 groups of authors. Moreover, half of the studies come from just three groups. I am mindful that the analysis is limited by what research has been published, but were I writing this paper I would point this out as a possible source of bias.

REFERENCES

Uscinski, JE et al. 2022a Have beliefs in conspiracy theories increased over time? PLoS ONE 17(7): e0270429.

Uscinski, JE, et al. 2022b Cause and Effect: On the Antecedents and Consequences of Conspiracy Theory Beliefs. Current Opinion in Psychology 47: 101364.

Uscinski, JE, et al. 2021 Do Conspiracy Beliefs form a Belief System?: Examining the Structure and Organization of Conspiracy Beliefs. Journal of Social and Political Psychology 9: 255-271.

Uscinski, JE, et al. 2016 What Drives Conspiratorial Beliefs? The Role of Informational Cues and Predispositions. Political Research Quarterly 69: 57-71.

Reviewer #2: There is a lot to like in this paper. The review of this kind is timely and much needed. The authors do a good job motivating this study - they build a persuasive case.

I also liked the categorization of the interventions they propose and the way it was argued.

The fact that they pre registered the study and shared materials and data made it much easier to evaluate and use by future researchers (the osf documentation is clearly labeled and easy to follow) - much appreciated.

The authors closely followed PRISMA protocol.

They were careful not to overclaim in the Discussion section and to rely exclusively on their results.

That being said, I have a few major suggestions:

First, I would like the authors to acknowledge the fact that priming interventions are in fact a very heterogeneous group - explicitly state what their shared features are, but also how they differ amongst themselves.

Second, the effect sizes of interventions in both directions (aimed to decrease but also to increase CT beliefs, e.g. Jolley & Douglas, 2017a) are lumped together. I would suggest redoing the analysis omitting the interventions increasing CT beliefs and also commenting on the effect sizes for the two types of interventions.

Next, the authors should make sure to address the fact that the analytical thinking priming interventions stem from one 4-study paper (Swami et al., 2014). Before implementing these types of interventions widely, it would be good to test the robustness of the effect in an independent replication study. I would also like the authors to discuss the upsides and drawbacks of techniques for experimental priming of analytical thinking in this research (e.g. reading in a difficult font) and its manipulation checks (e.g. susceptibility to the Moses illusion). The duration of the priming effects is also unknown.

I also have a few minor things to suggest:

First, it would be better to avoid over simplified explanations, for example:

"The tendency to believe in contradicting conspiracy theories suggest that those who believe in conspiracies tend to make superficial judgements about whether they are true."

Please consult these papers about contradictory conspiracy theories (they discuss whether they necessarily reflect superficial judgments about the veracity of the CTs):

Lukić, P., Žeželj, I., & Stanković, B. (2019). How (ir) rational is it to believe in contradictory conspiracy theories?. Europe’s journal of psychology, 15(1), 94-107.

Wood M. J. (2017). Conspiracy suspicions as a proxy for beliefs in conspiracy theories: Implications for theory and measurement. British Journal of Psychology, 108(3), 507–527.

Second, I would also suggest rewording emotionally charged claims such as:

By synthesizing the evidence, we may better prepare for future real-world threats from conspiracists.

Reviewer #3: This is a good paper. I think some rewriting would make it publishable.

1. There is no "striking increase in conspiracy theory beliefs." You don't need to make this throwaway claim.

See: J. Uscinski et al., Have beliefs in conspiracy theories increased over time? Plos One 17, e0270429 (2022).

D. Romer, K. H. Jamieson, Conspiracy theories as barriers to controlling the spread of COVID-19 in the US. Social Science & Medicine 263, 1-8 (2020).

M. Mancosu, S. Vassallo, The life cycle of conspiracy theories: evidence from a long-term panel survey on conspiracy beliefs in Italy. Italian Political Science Review/Rivista Italiana di Scienza Politica 52, 1-17 (2022).

2. I am not sure what the abstract means when it says that little research has reviewed methods for reducing beliefs. I have seen tons of studies on this topic. And lots of critiques of those studies.

3. Use newer citations for the intro. Many of the cites are more than five years old. No need for that.

4. Grimes is not a good citation for conspiracy beliefs being resistant to refutation. He doesn't test that.

5. Claiming that conspiracy theory beliefs have harmful outcomes assumes causality. I think such assumptions are unwarranted at this time.

6. The intro should almost be entirely removed - the paper can stand on its own without a broad review on conspiracy theories - just focus on the interventions, that's it. Don't cite so much stuff either, just cite the Douglas et al 2019 lit review, for example.

7. pg.6. Not sure that conspiracy theory beliefs have "very real public health threats." I think the jury is still out. See:

J. Uscinski, A. M. Enders, C. Klofstad, J. Stoler, Cause and Effect: On the Antecedents and Consequences of Conspiracy Theory Beliefs. Current Opinion in Psychology https://doi.org/10.1016/j.copsyc.2022.101364, 101364 (2022).

8. Did the search miss "conspiratorial" because it has a t instead of a c at the end? How about misperceptions and rumors?

9. There needs to be a better demarcation between general conspiracy mentality measures and specific measures of conspiracy theory beliefs. These are different things, and the implications of decreasing one versus the other are very different.

10. Figure 2 is hard to read. The dots are too far from the labels.

11. page 22: Conspiracy theories are not "becoming" widespread, and the literature cited, Oliver and Wood and van der linden show no such thing. Also those cites are both really old for making claims about recent over time change.

12. Your paper is about conspiracy theories, not conspiracies. Be sure to use the term conspiracy theories, for example, on bottom of page 23 but elsewhere too. Conspiracies are real.

13. Page 27. What is a "misinformation effect of conspiracy theories"? This is clunky language.

14. The paper, in general, needs to focus on the meta analysis, and the findings of that. Trim the front substantially and stay laser focused on the basic findings. This will highlight the paper's value. I think the abstract could highlight the main findings more: what works and what doesn't? In other words, only focus on interventions and nothing else.

Reviewer #4: This paper is a systematic review of the efficacy of interventions in countering belief in conspiracy theories. It is a very meaningful study as the field is growing fast and a review is warranted. However, the paper could benefit from some substantial changes.

Major comments:

- Because the search was conducted in June 2021, the results feel a little outdated, as with COVID-19 there is a burgeoning number of studies that look at the issue in an empirical setting, which are not included. It may be too much to ask the authors to conduct another updated search, considering the amount of anticipated work associated with it. That said, if the authors intend to keep the included papers as they are, maybe a more in-depth discussion is necessary regarding the implications for future research given all the new emerging studies.

- Relatedly, the included studies are not a lot, even though each paper investigates several interventions. The authors mentioned that many of the studies use misinformation, fake news, and conspiracy theories interchangeably. Then why not include all of them in the keyword search to expand the sample? Or at least, justify why only conspirac* is used in the search strategy.

- In the background section, besides explaining the different types of interventions, it's also helpful to include a discussion on the different cognitive processes of conspiracy beliefs - proportionality bias, intentionality bias, pattern perception, jumping to conclusions, confirmation bias, and the conjunction fallacy. And if possible, relate the interventions to these categories.

- Is there any systematic difference between the more general and more specific conspiracy beliefs?

- As discussed in the limitations section, the paper lacks external validity. What kind of potential interventions in real life can we devise, for example, on social media platforms?

Minor comments:

- The PRISMA flowchart can be more detailed - breaking down the number of papers excluded according to the reason/criteria

- Period missing at the end of page 30.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

Reviewer #4: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Apr 5;18(4):e0280902. doi: 10.1371/journal.pone.0280902.r002

Author response to Decision Letter 0


21 Dec 2022

Authors: We wish to sincerely thank the reviewers for taking the time to carefully consider our paper and for the suggestions you have all contributed to this manuscript. Below we have included our responses and we hope we have adequately addressed the issues you have raised. In the revised manuscript, the changes are visible in blue font.

Reviewer 1

-Abstract/Discussion: The premise that CT is on the rise is not accurate. Please see Uscinski et al. (2022a) for a test of this hypothesis. This is not a fatal flaw because, as the authors articulate well, there are many negative externalities associated with CT (regardless of whether it is on the increase). That said, I would not frame the paper around the assumption that CT is having a renaissance.

Authors: Thank you for raising this point. We became aware of the cited study shortly after our submission and agree that the statement is not accurate, so we have removed it from the abstract.

-Pages 6, 9: For readers who do not conduct meta analyses themselves, please define PRISMA more clearly. Please apply this comment to any other meta-analysis terminology that might not be familiar to a lay reader.

Authors: Thank you for the suggestion. We have provided a brief explanation of the PRISMA process on page 6 of the manuscript. We also refer readers to our PRISMA checklist in the supplementary materials for a more complete summary of what the 27 items on the checklist entail.

-Page 6: Why was Google Scholar not included in the search?

Authors: Based on our preparatory work, we agreed that the three databases we searched were sufficient for identifying these papers. Google Scholar primarily crawls other databases and would identify the same articles that our existing databases already find. As such, we did not see the need to include it in our searches; this is typical for systematic reviews, where Google Scholar is not usually included.

-Table 1: Given the amount of information here, can one perform a multivariate analysis? For example, with Cohen's d as the DV, and the other columns in the table as IVs?

Authors: This is an interesting suggestion. Unfortunately, as identified by another reviewer, the interventions and their measures are quite heterogeneous, meaning any multivariate analysis would not provide useful results. As such, we concluded that we would confine the analysis of this paper to the narrative synthesis.

-Page 20/Figure 2: Is it possible to add CIs to the Cohen's d plots? I would also sort this figure ascending, rather than by intervention type, to group the most/least effective interventions together. This will tie more directly into the discussion of patterns in the results on pp. 13-22.

Authors: Thank you for the suggestion. With regards to adding CIs to the plots, there are two main problems. First, because we include multiple effect sizes in each row, the overlapping CI stems would be difficult to see. Second, the manner in which Cohen's d was reported differed greatly between studies, so some papers did not provide sufficient data to calculate the CIs.

Regarding your suggestion to sort the data from most effective to least, we have rearranged the figure accordingly. This alteration also addresses a comment made by another reviewer that the dots were too far from the intervention/author labels; the data points are now placed closer to those labels.

-Pages 22ff: A key takeaway of the paper is that CT is a deeply-embedded psychological characteristic that is hard to change. I would discuss this a bit more. Please see work by Uscinski et al. on this (2022a,b; 2021; 2016).

Authors: Thank you for recommending these citations. We have included a brief discussion regarding this topic on page 24.

-Pages 25-26: In the discussion of limitations I would note that the 24 studies represent only 10 groups of authors. Moreover, half of the studies come from just three groups. I am mindful that the analysis is limited by what research has been published, but were I writing this paper I would point this out as a possible source of bias.

Authors: Thank you for highlighting this – we have mentioned this within our limitations section as a potential source of bias on page 29.

Thank you for taking the time to review our work, we hope you find the revised manuscript suitable for publication.

Reviewer 2

First, I would like the authors to acknowledge the fact that priming interventions are in fact a very heterogeneous group - explicitly state what their shared features are, but also how they differ amongst themselves.

Authors: Thank you for pointing this out – while it was useful to group interventions into broader categories for the sake of our analysis and communicating our findings, we agree that priming interventions in particular are quite a heterogeneous group. We have made note of this in our manuscript on page 5.

Second, the effect sizes of interventions in both directions (aimed to decrease but also to increase CT beliefs, e.g. Jolley & Douglas, 2017a) are lumped together. I would suggest redoing the analysis omitting the interventions increasing CT beliefs and also commenting on the effect sizes for the two types of interventions.

Authors: Thank you for this suggestion. We agree that the interventions that aimed to reduce conspiracy beliefs are the primary target of this review and it is important to clarify the effects of these interventions, separate from those used as comparators that were not expected to reduce conspiracy beliefs. We have made two amendments to make this clearer to readers. First, we provide a section on pages 22-24 within the results section that discusses the effects of interventions that both aimed to increase conspiracy belief, and those that aimed to reduce belief but paradoxically increased it. Secondly, we have removed any interventions that aimed to increase conspiracy beliefs from Figure 2 to make the results clearer.

Next, the authors should make sure to address the fact that the analytical thinking priming interventions stem from one 4-study paper (Swami et al., 2014). Before implementing these types of interventions widely, it would be good to test the robustness of the effect in an independent replication study. I would also like the authors to discuss the upsides and drawbacks of techniques for experimental priming of analytical thinking in this research (e.g. reading in a difficult font) and its manipulation checks (e.g. susceptibility to the Moses illusion). The duration of the priming effects is also unknown.

Authors: Thank you for identifying this – we have noted in the discussion section (page 26) that the analytical priming interventions originate from one paper and we draw attention to the need for independent replication. We have also critiqued some of the techniques used in this study (i.e. difficult to read font) as other studies indicate that the manipulation checks in question (such as susceptibility to the Moses illusion) fail to replicate.

I also have a few minor things to suggest:

First, it would be better to avoid over simplified explanations, for example:

"The tendency to believe in contradicting conspiracy theories suggest that those who believe in conspiracies tend to make superficial judgements about whether they are true."

Please consult these papers about contradictory conspiracy theories (they discuss whether they necessarily reflect superficial judgments about the veracity of the CTs):

Authors: Thank you for referring us to these studies – in light of these findings we have removed the quoted statement from our manuscript on page 25. We have chosen to retain the message of that paragraph, as other research has suggested that deliberation and System 2 thinking have been associated with fewer conspiracy beliefs/reduced conspiratorial thinking (Bago et al., 2022; Forgas & Baumeister, 2019). As such, we still draw the conclusion that analytical thinking may help individuals consider information more clearly (these changes can be seen on page 25). However, we have removed contradicting conspiracy beliefs as supporting evidence for this suggestion.

Second, I would also suggest rewording emotionally charged claims such as:

By synthesizing the evidence, we may better prepare for future real-world threats from conspiracists.

Authors: Agreed – we have removed this sentence from page 6 of the manuscript.

Thank you for your constructive comments, we hope you now find the manuscript suitable for publication.

Reviewer #3:

This is a good paper. I think some rewriting would make it publishable.

1. There is no "striking increase in conspiracy theory beliefs." You don't need to make this throwaway claim.

Authors: Thank you for highlighting this – we have removed this statement and similar statements from the abstract and page 25 at the beginning of the discussion section.

2. I am not sure what the abstract means when it says that little research has reviewed methods for reducing beliefs. I have seen tons of studies on this topic. And lots of critiques of those studies.

Authors: Thank you for highlighting this – we have reworded the abstract. Our initial intention was to indicate that a systematic review of such interventions in a broad context had yet to be conducted. We agree that the initial wording makes it read as if we are suggesting little research has been conducted on conspiracy belief interventions themselves. We have revised this statement to better communicate our position.

3. Use newer citations for the intro. Many of the cites are more than five years old. No need for that.

Authors: Thank you for the suggestion – we have included a number of more recent citations in our introduction section, for example: Bierwiaczonek et al., 2022; Imhoff et al., 2022; Knobel et al., 2022; Mulukom et al., 2020; Pytlik et al., 2020; Sutton & Douglas, 2020.

4. Grimes is not a good citation for conspiracy beliefs being resistant to refutation. He doesn't test that.

Authors: Thank you for highlighting this – we have removed this section from page 3 as part of the streamlining of the introduction.

5. Claiming that conspiracy theory beliefs have harmful outcomes assumes causality. I think such assumptions are unwarranted at this time.

Authors: Thank you for highlighting this – we have indicated in the manuscript on page 3 that many of these findings demonstrate associations between these phenomena and that causal conclusions cannot be made at the current time.

6. The intro should almost be entirely removed - the paper can stand on its own without a broad review on conspiracy theories - just focus on the interventions, that's it. Don't cite so much stuff either, just cite the Douglas et al 2019 lit review, for example.

Authors: Thank you for the suggestion – as mentioned above, we have streamlined the introduction to focus more on the interventions.

7. pg.6. Not sure that conspiracy theory beliefs have "very real public health threats." I think the jury is still out. See:

J. Uscinski, A. M. Enders, C. Klofstad, J. Stoler, Cause and Effect: On the Antecedents and Consequences of Conspiracy Theory Beliefs. Current Opinion in Psychology https://doi.org/10.1016/j.copsyc.2022.101364, 101364 (2022).

Authors: Thank you for the comment – we have removed the absolutist statement regarding the public health threats related to conspiracy beliefs and have instead highlighted that there exists an association between these two phenomena. We also highlight that, as you noted, causal claims cannot be drawn at the moment.

8. Did the search miss "conspiratorial" because it has a t instead of a c at the end? How about misperceptions and rumors?

Authors: Our stance was that misperceptions and rumours do not fit our definition of conspiracy beliefs. Conspiracy theories are stories actively created by someone or some group to explain ambiguous events. In contrast, misperceptions and rumours are often the result of people getting things wrong or mishearing them on the grapevine. We believe the phenomena share common traits; however, conspiracy theories are more about imposing ideological narratives to explain events.

9. There needs to be a better demarcation between general conspiracy mentality measures and specific measures of conspiracy theory beliefs. These are different things, and the implications of decreasing one versus the other are very different.

Authors: Thank you for the suggestion – we have described the differences between general and specific measures as well as referring to surrounding research that indicates that different requirements are needed to change scores on one versus the other.

10. Figure 2 is hard to read. The dots are too far from the labels.

Authors: Thank you for highlighting this – we have removed the Intervention Type label to bring the dots closer to the relevant labels.

11. page 22: Conspiracy theories are not "becoming" widespread, and the literature cited, Oliver and Wood and van der linden show no such thing. Also those cites are both really old for making claims about recent over time change.

Authors: Thank you for highlighting this – We have removed this line from page 22, and we have removed any similar phrases.

12. Your paper is about conspiracy theories, not conspiracies. Be sure to use the term conspiracy theories, for example, on bottom of page 23 but elsewhere too. Conspiracies are real.

Authors: Thank you for highlighting this – this was primarily an error as we chose to focus on “conspiracy beliefs”. As such, we have removed any instances of the word “conspiracy/ies” where not appropriate.

13. Page 27. What is a "misinformation effect of conspiracy theories"? This is clunky language.

Authors: Thank you for highlighting this – we agree in retrospect that this does not read well and have instead indicated that this is "misinformation spread by conspiracy beliefs".

14. The paper, in general, needs to focus on the meta analysis, and the findings of that. Trim the front substantially and stay laser focused on the basic findings. This will highlight the paper's value. I think the abstract could highlight the main findings more: what works and what doesn't? In other words, only focus on interventions and nothing else.

Authors: Thank you for the suggestion – as mentioned above, we have streamlined the introduction to focus more on the interventions.

Thank you for your time in reviewing our work. We hope you find the paper much improved.

Reviewer #4:

This paper is a systematic review of the efficacy of interventions in countering belief in conspiracy theories. It is a very meaningful study as the field is growing fast and a review is warranted. However, the paper could benefit from some substantial changes.

Major comments:

- Because the search was conducted in June 2021, the results feel a little outdated, as with COVID-19 there is a burgeoning number of studies that look at the issue in an empirical setting, which are not included. It may be too much to ask the authors to conduct another updated search, considering the amount of anticipated work associated with it. That said, if the authors intend to keep the included papers as they are, maybe a more in-depth discussion is necessary regarding the implications for future research given all the new emerging studies.

Authors: Thank you for highlighting this – we have discussed the findings of this review in the context of emerging research in this field, per your recommendation.

- Relatedly, the number of included studies is small, even though each paper investigates several interventions. The authors mentioned that many of the studies use misinformation, fake news, and conspiracy theories interchangeably. Then why not include all of them in the keyword search to expand the sample? Or at least, justify why only conspirac* is used in the search strategy.

Authors: We perhaps should have been more explicit in this section. Our initial statement sought to indicate that previous authors used fake news, misinformation, and conspiracy theories interchangeably. Many of the conspiracy interventions were developed on the basis that they were previously successful in reducing misinformation or fake news. As such, the background sections of these papers show how priming was effective by citing how it reduces fake news. However, the papers cited in the introduction themselves focus on conspiracy beliefs, not fake news or misinformation. We take the stance that while conspiracy beliefs are similar to fake news and misinformation, they are distinct. We now specify that the conspiracy interventions listed on pages 4-6 were designed on the basis of previous findings showing their success in combating misinformation and fake news.

- In the background section, besides explaining the different types of interventions, it's also helpful to include a discussion on the different cognitive processes of conspiracy beliefs - proportionality bias, intentionality bias, pattern perception, jumping to conclusions, confirmation bias, and the conjunction fallacy. And if possible, relate the interventions to these categories.

Authors: Thank you for the suggestion, this is an interesting idea. We have included examples of a number of cognitive biases associated with conspiracy belief, and have provided examples of how similar interventions have reduced these biases.

However, the underlying processes of these interventions are still somewhat of a black box, and it is difficult to draw any conclusions regarding how these interventions relate to these cognitive biases. Based on your recommendation, we have included a section on pages 5-6 of the introduction that describes how several of the cognitive biases you mentioned have been linked to conspiracy belief. We also note how similar interventions have been successful in reducing these biases, such as the conjunction fallacy.

As a side note – for this reason, we are currently conducting a study to create a conspiracy measure that assesses how individuals critically evaluate conspiracy theories. The measure aims to identify common logical biases and fallacies associated with conspiracy beliefs related to core critical thinking domains. We aim to explore the points you mentioned above more in this coming paper.

- Is there any systematic difference between the more general and more specific conspiracy beliefs?

Authors: An interesting suggestion. We ran an independent-samples t-test and found no significant difference between general conspiracy measures (M = 0.194, SD = 0.21) and specific conspiracy measures (M = 0.35, SD = 0.51); t(19) = -1.179, p = .25. However, as noted by Reviewer 3, previous research has indicated that specific and generic measures capture fundamentally different traits, with different implications for reducing one versus the other. As such, we have included a brief description of the differences between these measures.
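[Editorial note: the pooled-variance (Student's) independent-samples t-test described above can be sketched from the reported summary statistics alone. The per-group sample sizes below are illustrative assumptions — the response reports only the means, SDs, and df = 19 (so n1 + n2 = 21), not the split between measure types — so the resulting t value will not exactly reproduce the reported t(19) = -1.179.]

```python
import math

# Pooled (Student's) independent-samples t-statistic from summary statistics.
# Group sizes n1 = 10 and n2 = 11 are illustrative assumptions; the response
# reports only means, SDs, and df = 19 (i.e. n1 + n2 = 21).
def pooled_t(m1, sd1, n1, m2, sd2, n2):
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, df

t, df = pooled_t(0.194, 0.21, 10, 0.35, 0.51, 11)
print(f"t({df}) = {t:.3f}")  # prints t(19) = -0.899 under these assumed n's
```

With the true group sizes from the review's 21 effect estimates, the same formula yields the reported t(19) = -1.179; the conclusion (no significant difference at p = .25) is unchanged under any plausible split.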

- As discussed in the limitations section, the paper lacks external validity. What kind of potential interventions in real life can we devise, for example, on social media platforms?

Authors: While we would be eager to suggest a number of possibilities for real-life interventions, we believe it is too soon to draw any conclusions or make suggestions to social media platforms. For now, the jury is out regarding how these interventions can realistically be implemented in real-life settings. We have included on page 26 suggestions that further research should examine the viability of condensing some of the long-form interventions and of extending the effects of the short-form interventions.

Minor comments:

- The PRISMA flowchart can be more detailed - breaking down the number of papers excluded according to the reason/criteria

Authors: Thank you for highlighting this – we have updated the PRISMA flowchart to meet these requirements.

- Period missing at the end of page 30.

Authors: Thank you for spotting this error – we have amended the manuscript.

Authors: Once again, we would like to thank you for taking the time to carefully consider our manuscript and to helpfully review our work. We hope you find that the revised manuscript is in better condition and has adequately addressed the concerns you have raised.

References

Bago, B., Rand, D. G., & Pennycook, G. (2022). Does deliberation decrease belief in conspiracies? Journal of Experimental Social Psychology, 103, 104395. https://doi.org/10.1016/j.jesp.2022.104395

Bierwiaczonek, K., Gundersen, A. B., & Kunst, J. R. (2022). The role of conspiracy beliefs for COVID-19 health responses: A meta-analysis. Current Opinion in Psychology, 46, 101346. https://doi.org/10.1016/j.copsyc.2022.101346

Forgas, J. P., & Baumeister, R. F. (Eds.). (2019). The Social Psychology of Gullibility: Fake News, Conspiracy Theories, and Irrational Beliefs (1st ed.). Routledge. https://doi.org/10.4324/9780429203787

Imhoff, R., Bertlich, T., & Frenken, M. (2022). Tearing apart the “evil” twins: A general conspiracy mentality is not the same as specific conspiracy beliefs. Current Opinion in Psychology, 46, 101349. https://doi.org/10.1016/j.copsyc.2022.101349

Knobel, P., Zhao, X., & White, K. M. (2022). Do conspiracy theory and mistrust undermine people’s intention to receive the COVID‐19 vaccine in Austria? Journal of Community Psychology, 50(3), 1269–1281. https://doi.org/10.1002/jcop.22714

van Mulukom, V., Pummerer, L., Alper, S., Bai, H., Čavojová, V., Farias, J., Kay, C. S., Lazarević, L. B., Lobato, E. J. C., Marinthe, G., Banai, I. P., Šrol, J., & Žeželj, I. (2020). Antecedents and consequences of COVID-19 conspiracy beliefs: A systematic review. https://doi.org/10.31234/osf.io/u8yah

Pytlik, N., Soll, D., & Mehl, S. (2020). Thinking Preferences and Conspiracy Belief: Intuitive Thinking and the Jumping to Conclusions-Bias as a Basis for the Belief in Conspiracy Theories. Frontiers in Psychiatry, 11. Scopus. https://doi.org/10.3389/fpsyt.2020.568942

Sutton, R. M., & Douglas, K. M. (2020). Conspiracy theories and the conspiracy mindset: Implications for political ideology. Current Opinion in Behavioral Sciences, 34, 118–122. Scopus. https://doi.org/10.1016/j.cobeha.2020.02.015

Attachment

Submitted filename: Response to Reviewers.pdf

Decision Letter 1

Pierluigi Vellucci

11 Jan 2023

The efficacy of interventions in reducing belief in conspiracy theories: a systematic review

PONE-D-22-25039R1

Dear Dr. O'Mahony,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Pierluigi Vellucci

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: I am happy with the changes the authors introduced in the revised version of the paper and I think it is now suitable for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Acceptance letter

Pierluigi Vellucci

14 Mar 2023

PONE-D-22-25039R1

The efficacy of interventions in reducing belief in conspiracy theories: a systematic review

Dear Dr. O'Mahony:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Pierluigi Vellucci

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Demographic and publication details of studies included in the review.

    (DOCX)

    S1 Fig. Risk of bias assessment.

    (TIF)

    S1 File. PRISMA checklist.

    (DOCX)

    Data Availability Statement

    The data collected and analysed in this study can be found on the Open-Science Framework (https://osf.io/6yjt3/).
