PLOS ONE
. 2022 Mar 10;17(3):e0265211. doi: 10.1371/journal.pone.0265211

Do conspiracy theories efficiently signal coalition membership? An experimental test using the “Who Said What?” design

Mathilde Mus 1,2,*, Alexander Bor 1, Michael Bang Petersen 1
Editor: Peter Karl Jonason
PMCID: PMC8912250  PMID: 35271659

Abstract

Theoretical work in evolutionary psychology has proposed that conspiracy theories may serve a coalitional function. Specifically, fringe and offensive statements such as conspiracy theories are expected to send a highly credible signal of coalition membership by clearly distinguishing the speaker’s group from other groups. A key implication of this theory is that cognitive systems designed for alliance detection should intuitively interpret the endorsement of conspiracy theories as coalitional cues. To our knowledge, no previous studies have empirically investigated this claim. Taking the domain of environmental policy as our case, we examine the hypothesis that beliefs framed in a conspiratorial manner act as more efficient coalitional markers of environmental position than similar but non-conspiratorial beliefs. To test this prediction, quota-sampled American participants (total N = 2462) completed two pre-registered Who-Said-What experiments where we measured whether participants spontaneously categorize targets based on their environmental position, and whether this categorization process is enhanced by the use of a conspiratorial frame. We find firm evidence that participants categorize by environmental position, but no evidence that the use of conspiratorial statements increases categorization strength and thus serves a coalitional function.

Introduction

Why do people believe and share conspiracy theories? Three psychological motives have been put forward by previous research [1, 2]: a) epistemic motives, referring to people’s need to understand and navigate their environment [3]; b) existential motives, relating to people’s need to feel secure and in control of their environment [4]; c) social motives, by which people can manage their reputation and signal their membership in a coalition [5, 6]. For example, the conspiratorial belief that global warming is a hoax can at the same time provide an explanation for temperatures that may be perceived as incongruent with global warming (e.g., colder winters), prevent existential anguish over the impending climate catastrophe, and signal an engagement with environmental-skeptic groups. In this paper, we focus on the proposed social motives associated with the endorsement of conspiratorial beliefs. Specifically, we explore the claim that endorsing conspiracy theories can send a credible signal of coalition membership, a claim which to our knowledge has not yet been empirically evaluated.

From an evolutionary perspective, the coalitional function of beliefs arises from the fact that beliefs can serve as a cue to distinguish ingroup members from outgroup members as they, for example, signal familiarity with cultural norms and customs. For most of their evolutionary history, humans lived in small hunter-gatherer groups where both coordination with ingroup members and group-based defense against outgroups acted as strong selection pressures. As a consequence, the human mind has evolved a series of specialized mechanisms for coalitional management to respond to these adaptive challenges [7, 8]. The first crucial step in coalitional management is the detection of alliances, namely being able to detect who is likely to belong to one’s ingroup or to one’s outgroup prior to an interaction. This requires specialized cognitive adaptations capable of making predictive forecasts about coalitional membership on the basis of the available cues [9, 10]. Empirical evidence in favor of such an “alliance detection system” in the human mind, keeping track of relevant coalitional cues, has been established [10–12]. Such cues could be physical in nature, taking the form of clothing and ornaments for example, but could also be contained in shared attitudes. Indeed, as people who share beliefs, values and opinions tend to cooperate and form alliances, it is likely that the mind evolved to perceive cues of shared attitudes as coalitional markers. In line with this, prior research has shown that political attitudes are encoded as coalitional markers by the alliance detection system [12].

However, there should be variability in the degree to which various shared beliefs act as coalitional cues. The best coalitional signals should be the ones that clearly indicate loyalty to one group and differentiation from other groups [13]. Beliefs that undermine the person’s ability to join other coalitions, by triggering irreversible reputational costs, can thus acquire a strategic advantage [14, 15]. Indeed, the more likely beliefs are to lead to a rejection by other social groups, the more the belief-holder should appear as a loyal member of the group in line with these beliefs. This phenomenon of earning credibility by reducing one’s options–here by reducing one’s chances to join other coalitions–has been called a “burning bridges” strategy [16]. Burning bridges is a classical strategy in game theory where removing or limiting a player’s options can paradoxically improve payoffs [17]. This commitment device signals loyalty towards a targeted group by greatly limiting cooperation opportunities with other groups.

In the context of burning bridges with other coalitions by endorsing certain beliefs, how can this strategy be best achieved? One possibility is to use fringe beliefs, that is, statements that contradict common sense or established facts and that are held by a minority of people. Indeed, they should act as an honest signal of coalition membership both because of the specialized knowledge they convey [5] and the resulting rejection expected from most other groups. Another efficient way to make belief statements burn bridges is to be offensive towards other coalitions, attacking either their intentions or their competence [16], which is also very likely to result in rejection by the targeted groups.

Conspiracies and environmental policy

In this manuscript, we focus on a current and specific example of the burning bridges strategy: the endorsement of conspiracy theories. We seek to empirically test the hypothesis that endorsements of conspiratorial beliefs efficiently act as coalitional markers through bridge-burning. A conspiracy theory is commonly defined as the belief that a group of agents secretly acts together with malevolent intent [18, 19]. Most conspiracy theories are thus inherently offensive: they accuse some actors of harming innocent people, either directly (as in the chemtrail conspiracy) or indirectly by concealing relevant information and “covering up tracks”. Another common case is that conspiracy theories deny grievances or important achievements of certain actors (e.g. Holocaust deniers or the 9/11 Truth Movement; moon-landing hoax), thereby also fostering inter-group conflict. Moreover, many conspiracy beliefs oppose mainstream narratives and are often held by small minorities (e.g. Reptilian conspiracies), thereby also possessing a fringe element. Endorsing fringe beliefs accusing other groups of malevolent intent is therefore a costly behavior because of the expected ostracization the belief-holder faces. For instance, Redditors active in conspiracy communities get moderated and receive negative replies more often than users who are never active in conspiracy communities [20]. Conspiracy believers themselves appear to be aware of these costs: those who share conspiracy theories believe that others evaluate them negatively and expect to face social exclusion [21]. These findings indicate that if one is seeking to signal their loyalty by alienating other groups, endorsing conspiracies may be a potentially successful strategy.

As our case, we have chosen to focus on conspiratorial beliefs related to the environment. Environmental conspiracy theories have been rising over the last decades, especially those regarding climate change denial [22, 23]. These beliefs may hinder the implementation of effective policies urgently required to mitigate global warming [23]. Indeed, conspiracy theories can negatively affect policy making both directly by fostering opposition to evidence-based measures, and indirectly by diverting useful time and resources in order to address them [24].

To test if endorsements of environmental conspiracy theories act as more efficient coalitional markers than non-conspiratorial environmental beliefs, we study activation patterns of the alliance detection system. The alliance detection system must be able to pick up on which alliance categories are currently shaping people’s behavior and inhibit non-relevant alliance categories. Indeed, alliances may change, and people always belong to more than one coalition [11]. Therefore, presenting new alliance categories relevant to a current situation should both increase categorization along the new dimensions and decrease categorization by other alliance categories that do not act as good predictors of alliance relationships at the moment. Experimental evidence has shown that, although race is a strong alliance cue in contemporary American society, the alliance detection system readily downregulates categorization by race when more relevant alliance categories–such as basketball team membership, charity membership or political support–are presented [10–12]. Thus, categorization by race is an ideal indicator to determine if a presented cue acts as a coalitional marker.

We extend previous research by positing that the framing of beliefs has an impact on the strength of categorization processes. In line with the burning bridges account, we hypothesize that beliefs with a conspiratorial dimension send a more credible signal of coalition membership than beliefs without a conspiratorial dimension. Consequently, we expect conspiratorial statements to increase categorization by the relevant alliance category and to decrease categorization by the non-relevant alliance categories. In the context of the present research, environmental position acts as the relevant alliance category and race as the non-relevant one. We therefore hypothesize that endorsements of environmental beliefs framed in a conspiratorial manner should, compared to similar but non-conspiratorial beliefs, lead to an increase in categorization by environmental position and a decrease in race categorization.

Materials and methods

The three present studies bear on how people categorize speakers on the basis of race and environmental position in the presence or absence of conspiratorial arguments. In all studies, implicit social categorization was measured using the “Who Said What?” memory confusion paradigm, following standards in the literature [10–12]. Data for this project were collected in full compliance with the law of the Danish National Committee on Health Research Ethics (§14.2), which specifies that survey and interview studies that do not include human biological materials are exempted from an ethical approval by the committee. All surveys started with a written informed consent form.

Procedure

The Who-Said-What experimental paradigm proceeds in three stages. Following the procedure of Petersen [25, 26], we used a shortened version of the task adapted for web surveys of representative samples.

a. Presentation phase

When entering the study, participants were told that they would be viewing a discussion about the environment among pro-environmental individuals and environmental skeptics. After providing written informed consent to take part in the experiment, participants watched a sequence of eight pictures of young men in their 20s, each paired with a statement about the environment displayed for 20 seconds. Participants were simply asked to form an impression of the target individuals by looking at the pictures and reading the statements.

b. Distractor task

A distractor task was then used to reduce rehearsal and recency effects. In this task, participants were asked to list as many countries as they could in one minute.

c. Surprise recall phase

In the surprise recall phase, each statement was presented in a random order and participants were asked to choose which of the eight simultaneously displayed targets had uttered the given statement. The errors made in the recall phase reveal whether the mind spontaneously categorizes targets along a dimension. If so, targets belonging to the same category along this dimension are more likely to be confused with each other than targets from different categories.

Finally, participants answered a few demographic questions and were thanked.

Materials and general design

Four statements expressed the view that more should be done to protect the environment (“pro-environmental”) and four that less should be done (“environmental-skeptic”). To ensure ecological validity, all statements were modelled after real views expressed on social media sites in debates about the environment. The presentation order of the statements was randomized within the constraint that they should alternate between a pro-environmental position and an environmental-skeptic position to create a discussion frame. Each statement first expressed an environmental position identical between the control condition and the treatment condition, and then provided a justification whose framing–conspiratorial or non-conspiratorial–differed across conditions. The control condition can be considered a placebo rather than a “pure control” in which the treatment is absent [27]. Indeed, to the extent possible, we sought to design similar justifications between the two conditions, which varied solely by the presence or absence of a conspiratorial dimension in order to maximize experimental control. Fig 1 presents a set of sample statements in both conditions. The full list of statements can be found in the S1 File.

Fig 1. Illustration of experimental stimuli.

Fig 1

Each statement is composed of two sentences. The first one, here written in ordinary type, corresponds to an environmental position and is identical across conditions. The second sentence, here written in italics, corresponds to a justification of the environmental position that varies across conditions, being either framed as conspiratorial (treatment condition) or not (control condition). Statements were paired with target photos taken from the Center for Vital Longevity Face Database [28].

Statements in the control condition were somewhat shorter than statements in the treatment condition. However, even if the additional length leads to more errors, there is no reason to expect a bias towards either more within-category or between-category errors.

Following standard practice for Internet-based experiments in psychology [29, 30], materials were pre-tested. 100 American participants (32 women; mean age = 36.9 years) were recruited on Amazon Mechanical Turk and paid for their participation. The pre-test confirmed that the statements met the study’s criteria: (a) statements designed to be “pro-environmental” were rated as significantly more pro-environmental (M = 5.43, SD = 1.49) than “environmental-skeptic” statements (M = 3.48, SD = 2.16); F(1, 788) = 218.38, p < .001; (b) statements designed to be “conspiratorial” were rated as significantly more conspiratorial (M = 5.35, SD = 1.53) than “non-conspiratorial” statements (M = 4.36, SD = 1.96); F(1, 783) = 61.98, p < .001. Complete information about the pre-test can be found in the S2 File.

Four speakers were white and four were black in order to induce race categorization. Male targets were used because previous studies showed that race categorization for male targets is more resistant to change than race categorization for female targets [10–12], creating a more stringent test for our hypothesis. Target photos were taken from the Center for Vital Longevity Face Database [28]. Speakers’ race was balanced across the environmental dimension, such that both environmental positions were defended by two white and two black targets, removing the correlation between race and present alliances. Within this constraint, the pairing between targets and statements was randomized.

Measures

Each answer in the surprise recall task is categorized as either a correct answer, a within-category error or a between-category error. A within-category error is made when the chosen target belongs to the same category as the correct response. For example, a within-category error for race is made when a statement uttered by a black target is wrongly attributed to one of the other black targets. In a between-category error, the two confused responses belong to different categories. Because a target cannot be confused with itself (as that would be a correct answer), within-category errors are less frequent than between-category errors. To correct for this asymmetry in base-rates, the number of between-category errors for both race and environmental position is multiplied by 0.75 for each participant [31]. Finally, a categorization score is calculated as the difference between these two types of errors. A mean categorization score significantly above zero signals that participants spontaneously categorize targets along the given dimension, namely race or environmental position in the present study. One-sample t-tests are run in order to determine if categorization scores, both for race and environmental position, are positive. Two-sample t-tests are run to examine whether there is an increase in environmental position categorization and a decrease in race categorization when statements are framed in a conspiratorial manner rather than in a non-conspiratorial manner. Following standards in the literature [10–12], categorization scores are translated into a measure of effect size, Pearson’s r, with higher values corresponding to stronger categorization along a given dimension. All pre-registered directional hypotheses are tested with one-tailed tests.
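The scoring rule described above can be sketched in a few lines. This is our own illustration of the calculation, not the authors' analysis code; the data layout and function name are assumptions made for clarity.

```python
def categorization_score(errors, category_of):
    """Base-rate-corrected categorization score for one dimension.

    errors: list of (chosen_target, correct_target) pairs (wrong answers only)
    category_of: dict mapping each target to its category on the dimension
                 of interest (e.g. race or environmental position)
    """
    within = sum(1 for chosen, correct in errors
                 if category_of[chosen] == category_of[correct])
    between = len(errors) - within
    # With 8 targets (4 per category), a random confusion can land on 3
    # same-category targets but 4 other-category targets, so between-category
    # errors are weighted by 0.75 (= 3/4) to equalize chance rates.
    return within - 0.75 * between

# Example: two within-category confusions and one between-category confusion.
category_of = {t: "A" if t < 4 else "B" for t in range(8)}
errors = [(1, 2), (5, 6), (0, 7)]
print(categorization_score(errors, category_of))  # → 1.25
```

A participant-level mean of these scores significantly above zero then indicates spontaneous categorization along that dimension, as described in the text.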

Pilot study

To our knowledge, environmental position has never been tested as a coalitional cue in the Who-Said-What paradigm. Before studying differences in activation patterns of the alliance detection system in relation to the conspiracy variable, a pilot study was run in order to test if the mind spontaneously categorizes people according to their views on environmental policy.

Participants

Based on effect sizes found by Pietraszewski et al. (2015) [12], a power analysis indicated that a sample of 100 persons would allow us to detect a small-sized effect with a probability of 90% using a two-tailed t-test. 120 American participants were recruited from the online platform Amazon Mechanical Turk (46 women; mean age = 35.9 years) and were paid $1 to complete the experiment.

Design

This study included only one experimental condition in which all statements were presented in their non-conspiratorial form (i.e. the control condition in Fig 1). Indeed, this pilot study solely aimed at establishing the existence of categorization by environmental position using the Who-Said-What experimental paradigm.

Results and discussion

Categorization scores were significantly above zero for both race (r = .35, p < .001, 95% CI [0.20, 0.48]) and environmental position (r = .24, p < .001, 95% CI [0.05, 0.39]). We thus first replicate the finding that the mind spontaneously encodes race as an alliance category [10–12]. This result can be related to the central place of race in American politics, where persisting racial divisions, resentments, and group loyalties have been evidenced [32]. The results also demonstrate that, in parallel to race categorization, the mind spontaneously categorizes people according to their views on environmental policy. This result is in line with the findings of Pietraszewski and colleagues (2015) [12] who found evidence in favor of categorization by political attitudes.

This result also implies that environmental policy positions offer a good case to study the effects of conspiratorial framing on categorization strength. Indeed, non-conspiratorial environmental beliefs constitute a good baseline for our hypothesis which predicts a decline of race categorization and an increase in environmental categorization, as initial high levels of categorization by race and moderate levels of categorization by environmental position were obtained.

Study 1

Study 1 explores the specified hypothesis that environmental beliefs framed in a conspiratorial manner should act as efficient coalitional cues and thus lead to stronger categorization by environmental position and weaker categorization by race than similar but non-conspiratorial beliefs. The study design and analysis plan were pre-registered at OSF https://osf.io/6aumy. In our pre-registered studies, we planned to exclude participants who failed attention checks. However, because attention checks were implemented post-treatment, these exclusions could bias our causal estimates [33]. Accordingly, we deviate from our pre-registrations and include all respondents in the analyses reported below. In the S3 File, we report pre-registered analyses on attentive respondents yielding identical substantive conclusions.

Participants

A power analysis using a one-tailed test and 5% alpha level indicated that to detect a small effect size (d = .2) in a two-sample t-test with 90% power, 858 participants are required. 1200 U.S. citizens were recruited from the online platform Lucid. Lucid uses quota sampling to ensure that the sample margins resemble population distributions in terms of age, gender, race, education, and region. Lucid provides samples consisting of more diverse and less experienced participants than those recruited on Amazon Mechanical Turk. This platform has been validated as a good alternative online panel marketplace [34]. Only participants who finished the survey were included in the analysis, leaving 1147 participants (554 women; mean age = 43.1 years).
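The stated requirement of 858 participants can be reproduced with the standard normal-approximation sample-size formula for a two-sample comparison. The snippet below is our own sketch of that calculation, not the authors' code.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for a one-tailed,
    two-sample test detecting standardized effect size d."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # one-tailed critical value
    z_beta = NormalDist().inv_cdf(power)       # quantile for target power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n = n_per_group(0.2)   # small effect, as in the power analysis above
print(n, 2 * n)        # → 429 858
```

The same numbers fall out of dedicated tools such as G*Power or statsmodels' `TTestIndPower`, up to a negligible t-distribution correction at this sample size.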

Design

There were two between-subjects conditions in this study: in the control condition, all statements were presented in their non-conspiratorial form whereas in the treatment condition, all statements were conspiratorial (cf. Fig 1). Participants were randomly assigned to one of the two conditions.

Results and discussion

In both the control and the treatment condition, participants categorized target speakers on the basis of environmental position (control: r = .15, p < .001, 95% CI [0.07, 0.23]; treatment: r = .10, p = .01, 95% CI [0.01, 0.19]) and race (control: r = .43, p < .001, 95% CI [0.36, 0.49]; treatment: r = .34, p < .001, 95% CI [0.26, 0.41]). These results replicate the findings of the pilot study, highlighting their robustness. As predicted, categorization by race was significantly lower in the treatment condition compared to the control condition (t = 1.83, df = 1005, p = .03). Categorization by environmental position, however, was not significantly larger in the treatment condition (t = 0.60, df = 964, p = .72; see Fig 2). If anything, categorization by environmental position was weaker when conspiratorial justifications were offered.

Fig 2. Categorization by race and environmental position when statements are framed either in a non-conspiratorial (control) or conspiratorial (treatment) form (N = 1147).

Fig 2

Only race categorization was significantly lowered by the use of a conspiratorial frame. The reported numbers are effect sizes (r). Error bars correspond to bootstrapped 95% confidence intervals.

The findings therefore support only one of the specified predictions. When statements were framed in a conspiratorial manner rather than in a non-conspiratorial manner, there was a significant decrease in race categorization but not a significant increase in categorization by environmental position.

To further explore our results, we performed additional analyses to investigate whether the predicted results may be conditioned by the direction of the statements (pro-environmental or environmental-skeptic) or by participants’ political worldviews. Indeed, it has been shown that people selectively apply their conspiracy thinking in line with their political identity [35]. Regarding climate-related conspiracies, Uscinski and Olivella (2017) [36] review evidence that “Republicans are more likely to believe that climate change is a hoax while Democrats are more likely to believe that oil companies are hiding solutions to climate change” (p.2). However, we do not find that the direction of the statements in our study moderates the effect of conspiratorial framing on environmental categorization scores (p = 0.75). We also do not find evidence that participants’ political ideology or level of environmental concern moderates the studied relationship (p = 0.11 and p = 0.48 respectively). Hence, participants do not appear to categorize targets differently according to either the direction of the statements or their political worldviews.

A possible confound influencing the results of this study is that conspiratorial justifications could serve as an indicator of affiliation with an independent coalition composed of all conspiracy theorists. In the psychological literature, it has indeed been argued that conspiracy theorists possess a specific “conspiratorial mindset” displaying for example a low level of interpersonal trust. People believing in one conspiracy theory tend to believe in other conspiracies [37] even if they contradict each other or are entirely fictitious [38]. Conspiratorial statements may also signal low competence, a dimension that the mind automatically encodes [39]. In both cases, conspiracy theorists might therefore be categorized as belonging to the same coalition. If this were true, we would still expect a reduction in categorization by race because a novel coalition cue was introduced (conspiratorial arguments), but unlike our original prediction, categorization by environmental position would either be reduced or remain unaffected because the novel cue blurs differences between the two positions. Indeed, whereas in the control group the two opposing environmental views are contrasted, in the treatment group both views could be seen as branches of the same coalition (i.e., of conspiracy theorists).

Study 2

Study 2 was designed to investigate further the unexpected results of Study 1, by eliminating the potential confound that conspiratorial justifications may serve as an indicator of affiliation with an independent coalition composed of all conspiracy theorists. To do so, we modify the treatment condition by eliminating half of the conspiratorial frames compared to Study 1. As our focus remains on the potential use of conspiratorial sentences to boost categorization across another coalitional dimension, we do not create a new conspiracy dimension orthogonal to race and environmental position. Instead, we align the conspiratorial dimension with environmental position such that either all four pro-environmental statements are conspiratorial and no environmental-skeptic statements are, or vice versa. We then test whether conspiratorial arguments strengthen categorization by environmental position if only one side uses them.

If this is true, we expect categorization by environmental position to increase in the treatment group compared to the control group, as all conspiracy theorists now share the same environmental stance. Furthermore, if indeed conspiratorial asymmetries boost environmental position as a coalitional cue, we expect categorization by race to decrease in the treatment group. Similarly to Study 1, categorization along a dimension is measured as the propensity to make more errors between targets who share this dimension (e.g. race or environmental position) than between targets who differ regarding this dimension. The study design and analysis plan were pre-registered at OSF https://osf.io/43trw.

Participants

1200 American participants were recruited from the online platform Lucid. Only participants who finished the survey were included in the analysis, leaving 1195 participants (610 women; mean age = 45.1 years).

Design

As in Study 1, there were two between-subjects conditions to which participants were randomly assigned. The only difference in design between Study 1 and Study 2 lies in the treatment condition. In the treatment condition of Study 2, only half of the statements were conspiratorial and the conspiracy dimension was aligned with environmental position: either all four pro-environmental statements were conspiratorial and no environmental-skeptic statements were, or vice versa.

Results and discussion

In both the control and the treatment conditions, participants categorized target speakers on the basis of environmental position (control: r = .15, p < .001, 95% CI [0.07, 0.23]; treatment: r = .21, p < .001, 95% CI [0.13, 0.29]) and race (control: r = .46, p < .001, 95% CI [0.40, 0.51]; treatment: r = .45, p < .001, 95% CI [0.38, 0.51]). However, neither of our hypotheses was supported by the data. Despite a slight increase in categorization by environmental position, the change does not reach significance at conventional levels (t = 1.22, df = 1193, p = .11, see Fig 3). Nor was categorization by race significantly decreased in the treatment condition compared to the control condition (t = 0.04, df = 1193, p = .51).

Fig 3. Categorization by race and environmental position when statements are either framed in a non-conspiratorial form (control) or in a conspiratorial form aligned with environmental position (treatment) (N = 1195).

Fig 3

Neither race nor environmental position categorization scores differ significantly across conditions. The reported numbers are effect sizes (r). Error bars correspond to bootstrapped 95% confidence intervals.

Hence, the findings of Study 2 do not support the prediction that conspiratorial frames boost categorization by environmental position when only one side uses them, as only a weak effect in the expected direction was found.

Discussion

Several lines of theory within evolutionary psychology have emphasized the social function that false and extreme beliefs could serve, in line with the burning bridges account [5, 16]. Taking as our case environmental conspiracy beliefs, we empirically investigated the hypothesis that conspiracy theories act as efficient coalitional markers. However, the reported experiments do not provide significant evidence in favor of this hypothesis.

As a first step, we demonstrated that environmental position elicits categorization in the Who-Said-What design. This result is consistent with the findings of Pietraszewski and colleagues (2015) [12] who established that political positions act as coalitional markers. Our main studies then tested whether, when the environmental position was justified with a conspiracy theory, categorization by environmental position increased and categorization by race–another potential but here irrelevant alliance dimension–decreased.

Study 1 found evidence only for one of the specified predictions: race categorization significantly decreased when environmental statements were framed in a conspiratorial manner instead of a non-conspiratorial manner, but categorization by environmental position did not significantly increase. Study 2 was designed to eliminate a confound that could influence Study 1’s results, namely that conspiratorial justifications may serve as an indicator of affiliation with an independent coalition composed of all conspiracy theorists. However, Study 2 only found a weak effect in favor of the coalitional cue conveyed by conspiracy theories when removing this confound.

Therefore, the reported experiments do not provide strong evidence in favor of the hypothesis that conspiratorial beliefs act as coalitional markers, beyond the political position they indicate. These results may first suggest that the “burning-bridges” component of beliefs sends a weaker coalitional signal than what has been theoretically suggested in the literature [5, 6]. They may also suggest that the coalitional function of conspiratorial beliefs more generally plays a smaller explanatory role than the other motivations identified as drivers of conspiracy theories such as epistemic motives and existential concerns [1, 2]. For instance, when conspiracy theories are endorsed in online contexts where anonymity is the rule, it is likely that the belief-holder will be less affected by reputational costs than in offline contexts. In this case, the coalitional motivation of sharing such content may become weaker.

However, our findings might also reflect false negatives due to chance, as well as methodological artefacts. Indeed, although we conducted two rigorously designed and highly powered experiments on diverse online samples, our studies suffer from some limitations. A first possible explanation for the null results may lie in our choice of alliance category. As no previous empirical research on the alliance detection system has used environmental position as an alliance category, this domain may behave differently from other alliance categories. For instance, environmental position may not yet be perceived as sufficiently divisive in the population, and may therefore activate the alliance detection system less strongly than more classic political coalitions. In our sample, around four times as many participants believed that federal spending to protect the environment should be increased rather than decreased, whereas the proportions of participants identifying as Democrats and as Republicans were similar. Despite the acknowledged correlation between political orientation and environmental concern [40], the latter proved less divisive than political orientation in our sample. Future research may seek to replicate our experiments with more divisive and commonly used alliance categories, such as partisanship or broader political ideology.

A second methodological point that future work could address concerns the difference in perceived conspiratorial content between treatment and control statements. When designing the statements, there was a trade-off between maximizing this conspiracy gap across conditions and preserving experimental control. Our ambition in designing the sentences was twofold: 1) maximizing ecological validity by rooting sentences in environmental statements found online, and 2) maximizing internal validity by ensuring high similarity between treatment and control sentences. Although a pre-test confirmed that conspiracy ratings differed significantly between treatment and control statements (mean difference = 0.99, SD = 0.12, d = .56), the gap between the two conditions may not have been large enough to elicit the predicted results. Future research could therefore investigate the generalizability of our findings by modifying our sentence stimuli. A first approach would be to design statements on more theoretical grounds (e.g., varying their burning-bridges components), which could both strengthen the manipulations and help dissect the various mechanisms involved in alliance signaling. Another, more ecological, approach would be to use extreme examples of conspiratorial beliefs (e.g., Pizzagate, Reptilian conspiracies) for proof-of-concept purposes, despite the loss of experimental control incurred when comparing their effects with non-conspiratorial statements.

In conclusion, we tested the hypothesis that endorsements of conspiracy theories are processed as coalitional cues, using environmental conspiracy theories as our case. Across a series of Who-Said-What experiments, we did not find clear empirical support for this hypothesis. As this study is, to our knowledge, the first to empirically test the coalitional function of conspiracy theories, future research could attempt to replicate the reported experiments while addressing the methodological limitations outlined above, in order to further probe the validity of the evolutionary framework under scrutiny.

Supporting information

S1 File. Full list of statements.

(PDF)

S2 File. Pre-test of statements.

(PDF)

S3 File. Attention checks and analyses without inattentive respondents.

(PDF)

Acknowledgments

We are grateful to the members of the Research on Online Political Hostility (ROPH) group at Aarhus University for their insightful comments. We also thank the anonymous referees for their helpful suggestions to improve the manuscript.

Data Availability

All informational and reproducibility materials are available at https://osf.io/2ufhy/.

Funding Statement

This research was funded by grant no. CF18-1108 (ROPH: Center for Research on Online Political Hostility) from the Carlsberg Foundation to MBP. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. https://www.carlsbergfondet.dk/en.

References

  • 1.Douglas KM, Sutton RM, Cichocka A. The psychology of conspiracy theories. Current Directions in Psychological Science. 2017;26(6):538–542. doi: 10.1177/0963721417718261
  • 2.Douglas KM, Uscinski JE, Sutton RM, Cichocka A, Nefes T, Ang CS, et al. Understanding conspiracy theories. Political Psychology. 2019;40:3–35.
  • 3.Garrett R, Weeks B. Epistemic beliefs’ role in promoting misperceptions and conspiracist ideation. PLOS ONE. 2017;12(9):e0184733. doi: 10.1371/journal.pone.0184733
  • 4.van Prooijen JW, Acker M. The influence of control on belief in conspiracy theories: Conceptual and applied extensions. Applied Cognitive Psychology. 2015;29(5):753–761.
  • 5.Petersen M. The evolutionary psychology of mass mobilization: how disinformation and demagogues coordinate rather than manipulate. Current Opinion in Psychology. 2020;35:71–75. doi: 10.1016/j.copsyc.2020.02.003
  • 6.Wolff A. On the Function of Beliefs in Strategic Social Interactions. Working Papers of BETA 2019–41, Bureau d’Economie Théorique et Appliquée, UDS, Strasbourg; 2019.
  • 7.Tooby J, Cosmides L, Price M. Cognitive adaptations for n-person exchange: the evolutionary roots of organizational behavior. Managerial and Decision Economics. 2006;27(2–3):103–129. doi: 10.1002/mde.1287
  • 8.Tooby J, Cosmides L. Human morality and sociality: evolutionary and comparative perspectives. Choice Reviews Online. 2010;48(04):48-2376.
  • 9.Pietraszewski D. Intergroup processes: Principles from an evolutionary perspective. In: Van Lange P, Higgins ET, Kruglanski AW, editors. Social Psychology: Handbook of Basic Principles. 3rd ed. New York: Guilford; 2021. pp. 373–391.
  • 10.Kurzban R, Tooby J, Cosmides L. Can race be erased? Coalitional computation and social categorization. Proceedings of the National Academy of Sciences. 2001;98(26):15387–15392. doi: 10.1073/pnas.251541498
  • 11.Pietraszewski D, Cosmides L, Tooby J. The Content of Our Cooperation, Not the Color of Our Skin: An Alliance Detection System Regulates Categorization by Coalition and Race, but Not Sex. PLoS ONE. 2014;9(2):e88534. doi: 10.1371/journal.pone.0088534
  • 12.Pietraszewski D, Curry O, Petersen M, Cosmides L, Tooby J. Constituents of political cognition: Race, party politics, and the alliance detection system. Cognition. 2015;140:24–39. doi: 10.1016/j.cognition.2015.03.007
  • 13.Tooby J. Coalitional instincts. Edge; 2017. Available from: https://www.edge.org/response-detail/27168 (accessed 2020-03-05).
  • 14.Boyer P. Minds Make Societies: How Cognition Explains the World Humans Create. Yale University Press; 2018.
  • 15.Kurzban R, Christner J. Are supernatural beliefs commitment devices for intergroup conflict? 2011. Available from: http://www.sydneysymposium.unsw.edu.au/2010/chapters/KurzbanSSSP2010.pdf
  • 16.Mercier H. Not Born Yesterday: The Science of Who We Trust and What We Believe. Social Forces. 2020;99(1):191.
  • 17.Pénard T. Game theory and institutions. New Institutional Economics: A Guidebook. 2008;158–179.
  • 18.Bale JM. Political paranoia v. political realism: On distinguishing between bogus conspiracy theories and genuine conspiratorial politics. Patterns of Prejudice. 2007;41(1):45–60.
  • 19.Coady D. Conspiracy Theories: The Philosophical Debate. Farnham, UK: Ashgate; 2006.
  • 20.Phadke S, Samory M, Mitra T. What Makes People Join Conspiracy Communities? Role of Social Factors in Conspiracy Engagement. Proceedings of the ACM on Human-Computer Interaction. 2021;4(CSCW3):1–30.
  • 21.Lantian A, Muller D, Nurra C, Klein O, Berjot S, Pantazi M. Stigmatized beliefs: conspiracy theories, anticipated negative evaluation of the self, and fear of social exclusion. European Journal of Social Psychology. 2018;48:939–954.
  • 22.Brulle RJ, Carmichael J, Jenkins JC. Shifting public opinion on climate change: an empirical assessment of factors influencing concern over climate change in the US, 2002–2010. Climatic Change. 2012;114(2):169–188.
  • 23.Goertzel T. Conspiracy theories in science: Conspiracy theories that target specific research can have serious consequences for public health and environmental policies. EMBO Reports. 2010;11(7):493–499. doi: 10.1038/embor.2010.84
  • 24.Uscinski JE, Parent JM. American Conspiracy Theories. Oxford University Press; 2014. p. 5.
  • 25.Petersen MB. Social welfare as small-scale help: evolutionary psychology and the deservingness heuristic. American Journal of Political Science. 2012;56(1):1–16. doi: 10.1111/j.1540-5907.2011.00545.x
  • 26.Petersen MB. Healthy out-group members are represented psychologically as infected in-group members. Psychological Science. 2017;28(12):1857–1863. doi: 10.1177/0956797617728270
  • 27.Porter E, Velez YR. Placebo Selection in Survey Experiments: An Agnostic Approach. Political Analysis. 2021;1–14.
  • 28.Minear M, Park DC. A lifespan database of adult facial stimuli. Behavior Research Methods, Instruments, & Computers. 2004;36:630–633. doi: 10.3758/bf03206543 Access to the database can be requested at: https://agingmind.utdallas.edu/download-stimuli/face-database/
  • 29.Reips UD. Standards for Internet-based experimenting. Experimental Psychology. 2002;49(4):243. doi: 10.1026/1618-3169.49.4.243
  • 30.Reips UD. The methodology of Internet-based experiments. The Oxford Handbook of Internet Psychology. 2007;373–390.
  • 31.Bor A. Correcting for base rates in multidimensional “Who said what?” experiments. Evolution and Human Behavior. 2018;39(5):473–478.
  • 32.Hutchings VL, Valentino NA. The centrality of race in American politics. Annual Review of Political Science. 2004;7:383–408.
  • 33.Montgomery JM, Nyhan B, Torres M. How conditioning on posttreatment variables can ruin your experiment and what to do about it. American Journal of Political Science. 2018;62(3):760–775.
  • 34.Coppock A, McClellan OA. Validating the demographic, political, psychological, and experimental results obtained from a new source of online survey respondents. Research & Politics. 2019;6(1):2053168018822174.
  • 35.Miller JM, Saunders KL, Farhart CE. Conspiracy endorsement as motivated reasoning: The moderating roles of political knowledge and trust. American Journal of Political Science. 2016;60(4):824–844.
  • 36.Uscinski JE, Olivella S. The conditional effect of conspiracy thinking on attitudes toward climate change. Research & Politics. 2017;4(4):2053168017743105.
  • 37.Goertzel T. Belief in conspiracy theories. Political Psychology. 1994;731–742.
  • 38.Swami V, Coles R, Stieger S, Pietschnig J, Furnham A, Rehim S, et al. Conspiracist ideation in Britain and Austria: Evidence of a monological belief system and associations between individual psychological differences and real-world and fictitious conspiracy theories. British Journal of Psychology. 2011;102(3):443–463. doi: 10.1111/j.2044-8295.2010.02004.x
  • 39.Bor A. Spontaneous categorization along competence in partner and leader evaluations. Evolution and Human Behavior. 2017;38(4):468–473.
  • 40.Cruz SM. The relationships of political ideology and party affiliation with environmental concern: A meta-analysis. Journal of Environmental Psychology. 2017;53:81–91.

Decision Letter 0

Shang E Ha

26 Oct 2021

PONE-D-21-29587: Do conspiracy theories efficiently signal coalition membership? An experimental test using the “Who Said What?” design. PLOS ONE

Dear Dr. Mus,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Dec 10 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Shang E. Ha, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please change "female” or "male" to "woman” or "man" as appropriate, when used as a noun (see for instance https://apastyle.apa.org/style-grammar-guidelines/bias-free-language/gender).

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

4. Please ensure that you refer to Figure 2 and 3 in your text as, if accepted, production will need this reference to link the reader to the figure.

5. We note that Figure 1 includes an image of a participant in the study.

As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. Please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: “The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details”.

If you are unable to obtain consent from the subject of the photograph, you will need to remove the figure and any other textual identifying information or case descriptions for this individual.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: General opinion

- Overall, I think the manuscript does add to our understanding of the evolution of human coalitional psychology, and how conspiratorial beliefs might contribute to it. The hypothesis is plausible, and that the lack of significant results should not be seen as a negative in terms of publication. However there does seem to be a lack of engagement with the conspiracy theory literature, and this harms the integration of the evolution-based hypothesis into that field in the subsequent in-text discussion.

Introduction

- There is some important literature missing regarding conspiratorial beliefs. Specifically, existential concerns are missing (see Douglas et al., 2017, 2019). This means 1/3 of the established explanations for conspiratorial belief are missing, meaning the background to conspiratorial belief is not covered; it at least requires a mention. Given existential concerns cover fear and security, it seems relevant to coalitional psychology. Specifically, group-level identity (and out-group threat) is categorized under existential concerns as a predictor of belief.

- The example given to illustrate the psychology of conspiracies does not really capture epistemic motives. Most research on that category focuses on aspects such as uncertainty or pattern-recognition, or to explain dramatic or shocking events. Equally, the conspiracy theory regarding Obama’s nationality served initially to delegitimize his presidency in principle and identify him as an outgroup, not to explain policy decisions as “anti-American” or “pro-African” per se. I would find a better example or rephrase the explanation of this conspiracy.

- While I am convinced by the logic that accepting fringe or counter-factual beliefs serves as group-membership cues, the suggestion of “burning bridges” requires more explanation. I would also be keen to see specific empirical examples (rather than book references) of where fringe beliefs lead to ostracization by others, rather than self-exclusion by the believer to signal their commitment to their new community. There is a paper by Van Prooijen and colleagues that is currently under review that does touch on this.

- Overall, I find the premise and the resulting predictions logical, though more engagement with the conspiracy theory literature is needed to truly tie it into an evolutionary framework.

Methods/results

- Regarding the statements/stimuli. I understand the inclusion criteria as per the SI, but judging by the means alone the differences are sometimes minimal even if statistically significant. For example, if we take a mid-point of 3.5 as neither agree nor disagree whether X conspiracy is widely held, 4.23 and 4.5 are not extreme. I would also guess they both are actually significantly above this mid-point. This seems the case for most of the statements, so how likely are they to produce a response in this paradigm? I am not familiar with it, so has this piloting approach - and selection criteria - been shown to be useful in creating stimuli in the past? This issue is touched on briefly in the discussion, but an explanation is warranted because it might invalidate the findings as a whole.

- Equally, and this is subjective of course, but none seem especially extreme – we live in a world of Qanon and some very identity-charged conspiracies, or ones that are counter-factual bordering on psychopathology (the UK royal family are lizards, for example). So, were not more extreme statements about the environment considered, even if for proof-of-concept purposes?

- Perhaps it is my unfamiliarity with the who-said-what paradigm, but I did have trouble keeping track of exactly what was being measured. I would suggest reminding the reader exactly what categorization means when introducing study 2.

- I would be interested to see whether there was any effect of the direction of the statement as well as the accompanying conspiratorial statement. While there is contention on political ideology in the conspiracy literature, there do seem to be differences in how individuals on the left and right respond to conspiratorial beliefs that correspond to their general perspective. Has this analysis been performed: i.e., Pro/Anti*control/conspiracy?

Discussion

- The arguments given here for the lack of support for the hypotheses are quite weak. I would like to see the null-results put in the broader context of both evolved coalition psychology and the conspiracy literature before limitations are discussed. Neither literature is given appropriate consideration here. It may suggest that bridge burning is a more nuanced, or a weaker, part of commitment signaling than suggested for example(?).

- I get the impression the study has used conspiracy beliefs as a simple convenient way to probe the coalitional psychology theories. This is fine. If this is the case though, as above, more needs to be said beyond “null hypothesis supported, maybe there were method issues”. If my impression is correct, this would also necessitate a restructuring of the introduction section, with conspiracies simply being a specific and current example of bridge-burning.

- The discussion does mention environmental concerns as perhaps a less divisive issue, but there is a literature (as mentioned) on left-wing vs right-wing conspiracies that might add to this discussion. It is certainly different when compared to anti-vax beliefs, where there is an intersection of left and right. I would recommend the exploratory analysis suggested previously.

Reviewer #2: This is an interesting and well-done paper that should be published in PLOS One. I especially admire the authors' forthrightness in confronting their hypothesis for which they failed to find evidence.

That said, I do have some comments and observations meant to improve the manuscript.

-I was not entirely clear about the relationship between conspiracy belief and bridge burning. It seems to me that I can endorse conspiracy theories without burning bridges; especially in online contexts, the costs of promoting, and then walking away from, various conspiracies seem low. For those unfamiliar with this literature, the authors need to clarify the relationship.

-The authors describe a control condition that would be better described as a placebo. For challenges associated with placebos in survey experiments, consult Velez and Porter (2021). 

-Were the removed participants removed because they failed an attention check pre- or post-treatment? The authors should clarify. If the attention check occurred post treatment, the authors should re-insert those participants to avoid post-treatment bias.

-The very first paragraph seems to overstate the prevalence of conspiracy beliefs; the claim that "the magnitude and prominence of conspiratorial beliefs is soaring" should either be toned down or tied to a reference that persuasively makes that point.

-There's not nearly enough discussion of the role that racial perceptions may be playing in these studies. Especially as this was administered on U.S. samples, it seems likely to me that participants were judging the stimuli for the race of the person *and only the race* and nothing else. The authors need to elaborate on the relationship between race and the effects observed.

But again, this is well-done and interesting and deserves to be published.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No


While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Mar 10;17(3):e0265211. doi: 10.1371/journal.pone.0265211.r002

Author response to Decision Letter 0


7 Jan 2022

EDITOR

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

We have made several corrections to meet PLOS ONE’s style requirements. We have renamed our Figures as Fig 1, Fig 2 and Fig 3 (in the main text, figure captions and file names). We also renamed the supporting information files, which are now submitted as five separate files along the manuscript.

2. Please change "female” or "male" to "woman” or "man" as appropriate, when used as a noun

We implemented the changes in the revised manuscript.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

The full ethics statement now appears in the Methods section.

4. Please ensure that you refer to Figure 2 and 3 in your text as, if accepted, production will need this reference to link the reader to the figure.

We have now added the reference to Figure 2 and 3 in the main text (p.11 and p.13 respectively).

5. We note that Figure 1 includes an image of a participant in the study.

We admit that the source of the images included in Figure 1 was unclear. The images in Figure 1 do not belong to participants of our studies but are taken from the Center for Vital Longevity Face Database, from which eight target pictures were chosen as stimuli. To clarify the source of images in the figure caption, we added “Target photos were taken from the Center for Vital Longevity Face Database [28].” to Fig 1’s caption.

[Update: After receiving information by the editor that these images must be removed, we removed all pictures from this database and deleted our S3 File which reproduced the target photos used in our experiments. We also added after the relevant reference (28) in the reference list that “Access to the database can be requested at: https://agingmind.utdallas.edu/download-stimuli/face-database/”]

REVIEWER 1

General opinion

- Overall, I think the manuscript does add to our understanding of the evolution of human coalitional psychology, and how conspiratorial beliefs might contribute to it. The hypothesis is plausible, and the lack of significant results should not be seen as a negative in terms of publication. However, there does seem to be a lack of engagement with the conspiracy theory literature, and this harms the integration of the evolution-based hypothesis into that field in the subsequent in-text discussion.

We are grateful for the insightful comments. We agree with the reviewer about the need to incorporate a broader set of references from the conspiracy theory literature, both in the introduction and the discussion. We have made several additions to our manuscript in this regard. These are detailed in the points below.

Introduction

- There is some important literature missing regarding conspiratorial beliefs. Specifically, existential concerns are missing (see Douglas et al., 2017, 2019). This means 1/3 of the established explanations for conspiratorial belief are missing, meaning the background to conspiratorial belief is not covered; it at least requires a mention. Given existential concerns cover fear and security, it seems relevant to coalitional psychology. Specifically, group-level identity (and out-group threat) is categorized under existential concerns as a predictor of belief.

The introduction now opens with the classification by Douglas et al. (2017; 2019) listing all three explanations for conspiratorial beliefs:

“Three categories of psychological motives influencing conspiratorial endorsement have been put forward by previous research [1,2]: a) epistemic motives, referring to people’s need to understand their environment, which helps them navigate it [3], b) existential motives, relating to people’s need to feel secure and in control of their environment [4], c) social motives, by which people can manage their reputation and signal their membership to a coalition [5,6].” (p.2)

- The example given to illustrate the psychology of conspiracies does not really capture epistemic motives. Most research on that category focuses on aspects such as uncertainty or pattern-recognition, or to explain dramatic or shocking events. Equally, the conspiracy theory regarding Obama’s nationality served initially to delegitimize his presidency in principle and identify him as an outgroup, not to explain policy decisions as “anti-American” or “pro-African” per se. I would find a better example or rephrase the explanation of this conspiracy.

We acknowledge the limits of the previous example and introduce a new one that captures all three mentioned motives of conspiracy beliefs (epistemic, existential, social). We now use the conspiracy theory that climate change is a hoax, which has the additional advantage of being thematically related to our case study. More specifically, we have added the following sentence in the introduction, after the list of the three psychological motives:

“For example, the conspiratorial belief that global warming is a hoax can at the same time provide an explanation for temperatures that may be perceived as incongruent with global warming (e.g. colder winters), prevent an existential anguish over the impending climate catastrophe, and signal an engagement with environmental-skeptic groups.” (p.2).

- While I am convinced by the logic that accepting fringe or counter-factual beliefs serves as group-membership cues, the suggestion of “burning bridges” requires more explanation. I would also be keen to see specific empirical examples (rather than book references) of where fringe beliefs lead to ostracization by others, rather than self-exclusion by the believer to signal their commitment to their new community. There is a paper by Van Prooijen and colleagues that is currently under review that does touch on this.

We now explain bridge-burning in more detail in the introduction (p.3-4), as this notion was indeed not clear enough.

Regarding the second part of the comment: When introducing conspiracy theories as a specific case of bridge-burning that we wished to investigate (p.4), we have now added empirical references in which holders of conspiratorial beliefs have been ostracized by others or expect social exclusion for expressing such views (Phadke et al., 2021; Lantian et al., 2018). We found these very relevant empirical studies in the review paper by Van Prooijen and colleagues (to be published) mentioned by the reviewer, whom we thank for this reference.

- Overall, I find the premise and the resulting predictions logical, though more engagement with the conspiracy theory literature is needed to truly tie it into an evolutionary framework.

Thank you for this encouraging perspective; we hope that the additions made to the introduction have addressed this gap.

Methods/results

- Regarding the statements/stimuli. I understand the inclusion criteria as per the SI, but judging by the means alone the differences are sometimes minimal even if statistically significant. For example, if we take a mid-point of 3.5 as neither agree nor disagree whether X conspiracy is widely held, 4.23 and 4.5 are not extreme. I would also guess they both are actually significantly above this mid-point. This seems the case for most of the statements, so how likely are they to produce a response in this paradigm? I am not familiar with it, so has this piloting approach - and selection criteria - been shown to be useful in creating stimuli in the past? – this issue is touched on in the discussion, briefly, but an explanation is warranted because it might invalidate the findings as a whole.

The reviewer is right in noticing that differences in characteristics between treatment and control statements are indeed sometimes minimal. This results from a methodological choice of favoring experimental control over a more pronounced difference between the stimuli in the two conditions. We wanted treatment statements to differ from the control statements only in their conspiratorial dimension, to capture the sole effect of “conspiraciness” and not other variables that are likely to vary if the content of the statements was manipulated further. We thus wished the content of the statements to be as similar as possible between the two conditions, except for this conspiratorial dimension. We have tried to make this argument clearer in the Materials and design section (p.7). This methodological choice led us to use conspiratorial statements which are indeed less extreme than they could have been if we had not made this choice. We encourage future research to implement the opposite methodological trade-off: favoring the difference between conditions over experimental control, at least for proof-of-concept purposes as suggested by the reviewer in the next comment. We have made this point more salient in the discussion (p.17).

Regarding the point raised about the mid-point threshold and the fact that some control statements rate above it, this is also a very relevant comment. In this study, we were mainly interested in a relative phenomenon, namely that beliefs possessing more “burning-bridges” components are more efficient in triggering categorization by environmental position than beliefs that are less prone to burning bridges. This is why we validated our pre-test using differences in ratings between statements as the relevant statistical test, rather than absolute comparisons with the midpoint. We have added this point to the supplementary file describing the pre-test (S2).

Regarding the piloting approach, while most previous Who-Said-What studies sought to establish whether participants categorize along certain dimensions, here our first ambition was to assess whether conspiratorial statements increase the level of categorization. This necessitated more careful pre-testing of materials: our goal was not only to offer clear cues on a given category of information, but also to manipulate the conspiracy dimension between the control and treatment conditions. Accordingly, we followed standard practice for Internet-based experiments in psychology and pre-tested our materials on a number of dimensions (Reips, 2002; 2007). The goal of the pre-test was to validate that the environmental position conveyed by statements was picked up by participants, and that treatment statements were perceived as more conspiratorial and more likely to burn bridges (i.e. less widely held and more offensive) than control statements. We updated both the manuscript (p.8) and supplementary materials (S2) to better reflect these considerations.

- Equally, and this is subjective of course, but none seem especially extreme – we live in a world of Qanon and some very identity-charged conspiracies, or ones that are counter-factual bordering on psychopathology (the UK royal family are lizards, for example). So, were not more extreme statements about the environment considered, even if for proof-of-concept purposes?

We completely agree with the reviewer on the point raised. We have tried to explain our methodological trade-off in the answer to the previous comment. Future research should indeed use more extreme statements to test whether the predicted results are then elicited, and this is the next step we ourselves will take. We have underlined this direction for future research in the discussion (p.17).

- Perhaps it is my unfamiliarity with the who-said-what paradigm, but I did have trouble keeping track of exactly what was being measured. I would suggest reminding the reader exactly what categorization means when introducing study 2.

We have now reminded the reader of what is being measured when introducing Study 2 and clarified the alternative hypothesis under scrutiny:

“Study 2 was designed to investigate further the unexpected results of Study 1. It empirically explores the alternative hypothesis that the mind encodes the conspiracy dimension as an alliance category independent from environmental position. This alliance category would be composed of all conspiracy theorists, from both sides of the environmental spectrum. To test this prediction, we align the conspiratorial dimension of statements with environmental position in the treatment condition, such that all conspiratorial statements are either pro-environmental or environmental-skeptic. Similarly to previous studies, categorization along a dimension is measured as the propensity to make more errors between targets that share this dimension (e.g. race, environmental position) than between targets who differ regarding this dimension. If the specified hypothesis is true, categorization by environmental position is expected to increase in the treatment group compared to the control group as all conspiracy theorists share the same environmental stance in this new design, while race categorization should still decrease because a new relevant alliance category is being introduced as in previous studies. ” (p.13-14)

- I would be interested to see whether there was any effect of the direction of the statement as well as the accompanying conspiratorial statement. While there is contention on political ideology in the conspiracy literature, there do seem to be differences in how individuals on the left and right respond to conspiratorial beliefs that correspond to their general perspective. Has this analysis been performed: i.e., Pro/Anti*control/conspiracy?

This is a very interesting point. We have now performed the analysis pro/anti*control/conspiracy and have found no interaction effect in Study 1 (we have not done the analysis for Study 2 as the environmental dimension of statements is aligned with the conspiracy dimension). We now report the results of this analysis at the end of the Results section of Study 1, as an additional analysis performed to test whether the predicted effect could be conditional on the direction of the statement (pro or anti), citing references that are in line with the phenomenon mentioned by the reviewer.

Ideally, to test the hypothesis of the reviewer, a three-way interaction political_worldviews*pro/anti*control/conspiracy would be required. However, this statistical test would be underpowered. Thus, we decided to run two-way interactions between political worldviews and our manipulation, to bring additional evidence to bear on the point raised while maintaining acceptable statistical power. In our experiment, we measured participants’ opinion on whether federal spending on environmental protection should be kept the same, increased/decreased a little, increased/decreased moderately, or increased/decreased a great deal (7-point scale). We also have access to the political ideology endorsed by participants (Republican or Democrat, 10-point scale) measured by the platform Lucid. We therefore conducted additional analyses in line with the reviewer’s comment, studying the interaction between political worldviews (environmental concern and political ideology) and our experimental manipulation on categorization scores in Study 1. No significant interaction was found for either environmental concern or political ideology. We have reported these results at the end of the Results section of Study 1 as well:

“To further explore our results, we performed additional analyses to investigate whether the predicted results may be conditional on the direction of the statements (pro-environmental or environmental-skeptic) and on participants’ political worldviews. Indeed, it has been shown that people selectively apply their conspiracy thinking in line with their political identity (Miller et al., 2016). Regarding climate-related conspiracies, Uscinski and Olivella (2017) review evidence that “Republicans are more likely to believe that climate change is a hoax while Democrats are more likely to believe that oil companies are hiding solutions to climate change” (p.2). However, we do not find that the direction of the statements in our study moderates the effect of conspiratorial framing on categorization scores (p = 0.93). We also do not find evidence that participants’ political ideology or level of environmental concern moderates the studied relationship (p = 0.11 and p = 0.81 respectively).” (p.13)

We are grateful to the reviewer for these very interesting leads to further explore our results.

Discussion

- The arguments given here for the lack of support for the hypotheses are quite weak. I would like to see the null-results put in the broader context of both evolved coalition psychology and the conspiracy literature before limitations are discussed. Neither literature is given appropriate consideration here. It may suggest that bridge burning is a more nuanced, or a weaker, part of commitment signaling than suggested for example(?).

We agree with the reviewer that the theoretical implications of our results should be discussed before the limitations. We have now added a paragraph in the discussion that puts the results into a broader theoretical context (p.16). We now suggest that either bridge-burning is indeed a weaker component of coalitional signalling than previously thought, or that the social function of conspiratorial beliefs more generally may be weaker than the other identified motivations driving these beliefs (epistemic, existential), especially in online contexts, as pointed out by the second reviewer (comment #1). Limitations are now discussed after this theoretical account.

- I get the impression the study has used conspiracy beliefs as a simple convenient way to probe the coalitional psychology theories. This is fine. If this is the case though, as above, more needs to be said beyond “null hypothesis supported, maybe there were method issues”. If my impression is correct, this would also necessitate a restructuring of the introduction section, with conspiracies simply being a specific and current example of bridge-burning.

We agree about the need to clarify the relationship between conspiracies and bridge-burning, a point also raised by the second reviewer (comment #1). In the introduction, we have made additions to the section “Conspiracies and environmental policy” to clarify the fact that conspiracies are indeed considered optimal candidates for bridge-burning and thus should act as efficient coalitional markers:

“In this paper, we have chosen to focus on a current and specific example of the burning bridges strategy: conspiracy theories. The present manuscript seeks to empirically test the hypothesis that endorsements of conspiratorial beliefs efficiently act as coalitional markers through bridge-burning. A conspiracy theory is commonly defined as the belief that a group of agents secretly acts together with malevolent intent [references] – thus these beliefs are offensive by definition. Moreover, conspiracy beliefs oppose mainstream narratives and are often held by small minorities, thereby also possessing a fringe element. Endorsing fringe beliefs accusing other groups of malevolent actions is therefore a costly behavior because of the expected ostracization the belief-holder faces.” (p.4)

In line with what the reviewer suggests in this comment and the previous comment, we have also modified our discussion section by adding theoretical implications of our null results on bridge-burning and the social function of conspiracy theories (explained in the previous answer), to tie into the evolutionary framework outlined in the introduction.

- The discussion does mention environmental concerns as perhaps a less divisive issue, but there is a literature (as mentioned) on left-wing vs. right-wing conspiracies that might add to this discussion. It is certainly different when compared to anti-vax beliefs where there is an intersection of left and right. I would recommend the exploratory analysis suggested previously.

As we have found no interaction between the direction of the statement and our experimental manipulation, we have not added this point to the discussion but we detail this relevant hypothesis of a moderation by the direction of the statement when we report this analysis in the manuscript (Results section of Study 1). However, we agree with the reviewer about the need to enrich this point of environmental concern being a potentially less divisive issue in the discussion. We believe it is important to mention the correlation between political orientation and environmental concern because it may go against the argument of environmental concern being a less divisive issue than political orientation (if the correlation is very strong). In our data however, we find evidence of political orientation being more divisive than environmental concern and have added the following sentences in the discussion:

“For instance, environmental position might still not be perceived as a category divisive enough in the population and hence would activate the alliance detection system less strongly than other, more classic political coalitions. In our sample, there were around four times more participants who believed that federal spending should be increased to protect the environment rather than decreased, whereas about the same proportion of participants favored the Democratic party or the Republican party (S6). Despite the acknowledged correlation between political orientation and environmental concern [Cruz, 2017], the latter proved to be less divisive than political orientation in our sample. Future research may seek to replicate the reported experiments with more divisive and commonly used alliance categories, such as partisanship or broader political ideology.” (p.17)

REVIEWER 2

This is an interesting and well-done paper that should be published in PLOS One. I especially admire the authors' forthrightness in confronting their hypothesis for which they failed to find evidence. That said, I do have some comments and observations meant to improve the manuscript.

We are grateful to the reviewer for this very encouraging perspective and for the time taken to review our manuscript.

1. I was not entirely clear about the relationship between conspiracy belief and bridge burning. It seems to me that I can endorse conspiracy theories without burning bridges; especially in online contexts, the costs of promoting, and then walking away from, various conspiracies seem low. For those unfamiliar with this literature, the authors need to clarify the relationship.

We agree that the relationship between the endorsement of conspiratorial beliefs and bridge-burning was not sufficiently clear, as also pointed out by Reviewer 1. We study conspiracy theories as a current and specific example of the burning-bridges strategy. Conspiratorial beliefs are both fringe and offensive beliefs, two characteristics that are likely to burn bridges with other groups. We have tried to clarify this relationship in the introduction, in the “Conspiracies and environmental policy” section:

“In this paper, we have chosen to focus on a current and specific example of the burning bridges strategy: conspiracy theories. The present manuscript seeks to empirically test the hypothesis that endorsements of conspiratorial beliefs efficiently act as coalitional markers through bridge-burning. A conspiracy theory is commonly defined as the belief that a group of agents secretly acts together with malevolent intent [references] – thus these beliefs are offensive by definition. Moreover, conspiracy beliefs oppose mainstream narratives and are often held by small minorities, thereby also possessing a fringe element. Endorsing fringe beliefs accusing other groups of malevolent actions is therefore a costly behavior because of the expected ostracization the belief-holder faces.” (p.4)

We have also added two empirical references showing that people expressing conspiracy theories are more likely to have their posts or comments on Reddit moderated or to receive negative feedback, and that people expressing conspiracy theories expect to be negatively evaluated and socially excluded by others (p.5). These examples empirically demonstrate that endorsing conspiracy theories tends to burn bridges with outgroups that do not share these beliefs.

Finally, we use a common definition of conspiracy theories (“A conspiracy theory is commonly defined as the belief that a group of agents secretly acts together with malevolent intent”), which suggests that these beliefs always burn bridges because of the “malevolent intent” they involve. However, we agree with the reviewer that the extent of bridge-burning and the resulting reputational costs very much depend on the context in which these beliefs are endorsed. In an online context where there is anonymity, a conspiracy theorist does not have to face reputational costs, and the coalitional signalling function of endorsing a conspiracy theory may in that case be much weaker. We have added this very interesting point in the discussion (p.16).

2. The authors describe a control condition that would be better described as a placebo. For challenges associated with placebos in survey experiments, consult Velez and Porter (2021).

When introducing the control condition, we have added the following sentences:

“The control condition can be considered as a placebo rather than a “pure control” in which the treatment is absent (Porter & Velez, 2021). Indeed, to the extent possible, we sought to design similar justifications between the two conditions, which varied solely by the presence or absence of a conspiratorial dimension”. (p.8)

We have chosen to keep the term “control condition” in the rest of the paper in order not to confuse readers who may not be familiar with the term “placebo” in survey experiments, but we hope that this addition addresses the point rightfully made.

We thank the reviewer for mentioning Porter & Velez (2021), who point to the challenge of minimizing researchers’ impact on the creation of placebos. We thought this issue was important to raise, so we have also added in the Materials and design section that “to also maximize ecological validity and minimize bias in the design of stimuli, we designed beliefs inspired by environmental statements available on the internet.” (p.8)

3. Were the removed participants removed because they failed an attention check pre or post treatment? The authors should clarify. If the attention check occurred post treatment, the authors should re-insert those participants to avoid post-treatment bias.

This information was indeed missing. The attention check happened post-treatment, which can lead to post-treatment bias (Montgomery et al., 2018). Following the recommendation of the reviewer, we have reinserted those participants and rerun the analyses for our three studies (pilot, study 1, study 2). The results are unchanged and we have reported them in a supplementary file as robustness checks (S5). We have chosen to keep the current results with excluded participants in the main manuscript because we had committed to exclude these participants in the pre-registration of the studies. However, if the reviewer or the editor deem it preferable that the results without excluded participants be presented in the main manuscript rather than in the supplementary information, we will be happy to implement the changes. We have added in the discussion section this caveat of post-treatment bias and the new analyses conducted:

“However, our findings might also reflect false negative results due to chance, as well as methodological artefacts. Indeed, although we conducted two rigorously designed and high-powered experiments on diverse online samples, our studies suffer from some limitations. As attention checks happened post-treatment, a bias can occur in the exclusion of inattentive respondents (Montgomery et al., 2018). However, when re-running the analyses of all studies without excluding participants, results remain unchanged (S5).” (p.17)

4. The very first paragraph seems to overstate the prevalence of conspiracy beliefs; the claim that "the magnitude and prominence of conspiratorial beliefs is soaring" should either be toned down or tied to a reference that persuasively makes that point.

We have removed this claim from the introduction as we found no reference that persuasively validates this point. We thank the reviewer for raising this issue.

5. There's not nearly enough discussion of the role that racial perceptions may be playing in these studies. Especially as this was administered on U.S. samples, it seems likely to me that participants were judging the stimuli for the race of the person *and only the race* and nothing else. The authors need to elaborate on the relationship between race and the effects observed.

This is an interesting point. Race categorization is indeed the strongest form of categorization taking place in our studies (effect sizes are about twice as large as those for categorization by environmental position). However, all our studies find evidence that categorization by environmental position also takes place (categorization r-scores are significantly above 0). Moreover, multiple studies using the Who-Said-What paradigm show that the mind is able to encode several categories in parallel (Kurzban et al., 2001; Pietraszewski et al., 2014; 2015). But we agree that more emphasis is required on the large effect of race categorization (stronger than categorization by environmental position), and that this makes sense when working with an American sample. We thus made additions to the part describing the results from the pilot, which focused on establishing that both race categorization and categorization by environmental position were taking place:

“Categorization scores were significantly above zero for both race (r = .40, p < .001, 95% CI [0.24, 0.53]) and environmental position (r = .25, p = .008, 95% CI [0.06, 0.41]). We thus first replicate the finding that the mind spontaneously encodes race as an alliance category [Kurzban et al., 2001; Pietraszewski et al., 2014; 2015]. This result can be related to the central place of race in American politics, where persisting racial divisions, resentments, and group loyalties have been evidenced [Hutchings & Valentino, 2004]. The results also demonstrate that, in parallel to race categorization, the mind spontaneously categorizes people according to their views on environmental policy.” (p.11)

But again, this is well-done and interesting and deserves to be published.

Thank you!

Attachment

Submitted filename: Response to Reviewers.pdf

Decision Letter 1

Peter Karl Jonason

24 Jan 2022

PONE-D-21-29587R1

Do conspiracy theories efficiently signal coalition membership? An experimental test using the “Who Said What?” design

PLOS ONE

Dear Dr. Mus,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. I was able to get the opinion of the original two reviewers. One felt the paper was ready. The other asked for only minor changes now. Please submit your revised manuscript by Mar 10 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Peter Karl Jonason

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Having read the manuscript, I feel the authors have addressed the comments raised in the initial review in the new manuscript draft itself and in their response.

Reviewer #2: I applaud the authors for a well-executed revision. The study and its contributions are much more clear. A few remaining points:

1. If I were the authors, I would indeed report results for all subjects, including those who failed the post treatment attention check. The authors acknowledge that these results are what *should* be reported; the results don't change (they say) if those subjects are included; and, perhaps most importantly, this paper is going to be published, and it would be unfortunate if readers focused on this error, rather than the substantive contribution of the paper. In short, I think it's in their interest, and the long-term interests of this paper, to make this change.

2. I would appreciate more details on the modifications made in Study 2. Right now, I don't think I fully grasp how the alignment of "conspiratorial dimension of statements with environmental position in the treatment condition" resulted in "all conspiratorial statements [being] either pro-environmental or environmental-skeptic." I *think* what the authors are trying to say is that they wanted to evaluate categorization by conspiracism in general, not categorization by conspiracism by environmental position. They should clarify this point (and offer examples).

3. Finally, I admit I don't fully understand why conspiracy theories are inherently "offensive." Consider those who believe in JFK assassination theories. Given how widely held such beliefs are among U.S. citizens, it's hard to understand how the belief itself is "offensive" in any meaningful way. The authors should either explain this term or use a more precise one.

But again, this is a strong revision. I look forward to reading the published version.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Mar 10;17(3):e0265211. doi: 10.1371/journal.pone.0265211.r004

Author response to Decision Letter 1


16 Feb 2022

EDITOR

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

We have reviewed our reference list and ensured it is complete and correct. We have not cited any papers that have been retracted. The only modification to the reference list concerns its ordering (reference 39 became reference 33, so that references 33 to 38 became references 34 to 39).

REVIEWER 2

1. If I were the authors, I would indeed report results for all subjects, including those who failed the post treatment attention check. The authors acknowledge that these results are what *should* be reported; the results don't change (they say) if those subjects are included; and, perhaps most importantly, this paper is going to be published, and it would be unfortunate if readers focused on this error, rather than the substantive contribution of the paper. In short, I think it's in their interest, and the long-term interests of this paper, to make this change.

We have made the changes suggested by the reviewer, moving the results that exclude inattentive respondents to the supporting information (in S3, after the description of the attention checks) and replacing them in the main text with the results without exclusions (which were reported in S4 in the previous revision). We agree with the reviewer that it would be unfortunate if readers focused on the possible post-treatment bias when reading the article. Moreover, our conclusions remain unchanged, with only a slight reduction in effect sizes when inattentive respondents are included. We also modified the figures accordingly.

In the Participants section of all studies, we removed all references to attention checks and modified the number of participants included in the analyses. Also, when introducing Study 1, as we deviate from the pre-registration by not excluding inattentive respondents, we added the following paragraph:

“In our pre-registered studies, we planned to exclude participants who failed attention checks. However, because attention checks were implemented post-treatment, these exclusions could bias our causal estimates [33]. Accordingly, we deviate from our pre-registrations and include all respondents in the analyses reported below. In the supporting information (S3), we report pre-registered analyses on attentive respondents yielding identical substantive conclusions.” (p.11)

2. I would appreciate more details on the modifications made in Study 2. Right now, I don't think I fully grasp how the alignment of "conspiratorial dimension of statements with environmental position in the treatment condition" resulted in "all conspiratorial statements [being] either pro-environmental or environmental-skeptic." I *think* what the authors are trying to say is that they wanted to evaluate categorization by conspiracism in general, not categorization by conspiracism by environmental position. They should clarify this point (and offer examples).

We agree with the reviewer on the need to clarify the design of Study 2, its aim, and its differences from Study 1. We realized that it may be clearer to describe Study 2 as an experiment eliminating a confound rather than one testing an alternative hypothesis. Indeed, we suspected that the unexpected results of Study 1 might be due to participants categorizing targets by conspiracism in general, so that having only conspiratorial statements in our treatment condition could be a confound blurring categorization by environmental position. We therefore wanted conspiracism to vary in the new design, which is why only half of the statements in the treatment condition are conspiratorial in Study 2. Because our main focus remained on the potential use of conspiratorial sentences to strengthen categorization along another coalitional dimension, we did not create a new conspiracy dimension orthogonal to race and environment. Instead, we aligned the conspiratorial dimension with environmental position such that either all four pro-environmental statements are conspiratorial and no environmental-skeptic statements are, or vice versa. We then tested whether conspiratorial arguments strengthen categorization by environmental position when only one side uses them. We therefore reframed the discussion of Study 1, the introduction of Study 2, and its conclusion to clarify these points:

“(...) A possible confound influencing the results of this study is that conspiratorial justifications could serve as an indicator of affiliation with an independent coalition composed of all conspiracy theorists (...) ” (p. 13, discussion of Study 1)

“Study 2 was designed to investigate further the unexpected results of Study 1, by eliminating the potential confound that conspiratorial justifications may serve as an indicator of affiliation with an independent coalition composed of all conspiracy theorists. To do so, we modify the treatment condition by eliminating half of the conspiratorial frames compared to Study 1. As our focus remains on the potential use of conspiratorial sentences to boost categorization across another coalitional dimension, we do not create a new conspiracy dimension orthogonal to race and environment. Instead, we align the conspiratorial dimension with environmental position such that either all four pro-environmental statements are conspiratorial and no environmental-skeptic statements are, or vice versa. We then test whether conspiratorial arguments strengthen categorization by environmental position if only one side uses them. If this is true, we expect categorization by environmental position to increase in the treatment group compared to the control group, as all conspiracy theorists now share the same environmental stance. Furthermore, if indeed conspiratorial asymmetries boost environmental position as a coalitional cue, we expect categorization by race to decrease in the treatment group.“ (p.14-15)

“(...) Hence, the findings of Study 2 do not support the prediction that conspiratorial frames boost categorization by environmental position when only one side uses them, as only a weak effect in the expected direction was found.” (p.16)

Finally, we modified the general discussion to clarify the findings of Study 2:

“Study 2 was designed to eliminate a confound that could influence Study 1’s results, namely that conspiratorial justifications may serve as an indicator of affiliation with an independent coalition composed of all conspiracy theorists. However, Study 2 only found a weak effect in favor of the coalitional cue conveyed by conspiracy theories when removing this confound.” (p. 16-17)

3. Finally, I admit I don't fully understand why conspiracy theories are inherently "offensive." Consider those who believe in JFK assassination theories. Given how widely held such beliefs are among U.S. citizens, it's hard to understand how the belief itself is "offensive" in any meaningful way. The authors should either explain this term or use a more precise one.

We agree with the reviewer that conspiracy theories can take multiple forms and thus sometimes do not appear explicitly offensive. In the case of JFK assassination conspiracy theories, various groups have been accused of the assassination, such as the CIA, the Mafia, Lyndon Johnson, Fidel Castro, and the KGB. But it is true that the theory is sometimes stated without naming a culprit, so that the reference to malevolent groups remains rather implicit. We have reframed and nuanced the part of the introduction where the offensive dimension of conspiracy theories is discussed to reflect more diverse forms of conspiracy theories (including examples):

“A conspiracy theory is commonly defined as the belief that a group of agents secretly acts together with malevolent intent [18,19]. Most conspiracy theories are thus inherently offensive: they accuse some actors of harming innocent people, either actively (as in the chemtrail conspiracy) or passively by concealing relevant information and “covering up tracks”. Another common case is that conspiracy theories deny grievances or important achievements of certain actors (e.g. Holocaust deniers or the 9/11 Truth Movement; moon-landing hoax), thereby also fostering inter-group conflict. Moreover, many conspiracy beliefs oppose mainstream narratives and are often held by small minorities (e.g. Reptilian conspiracies), thereby also possessing a fringe element.” (p.4)

But again, this is a strong revision. I look forward to reading the published version.

Thank you!

Attachment

Submitted filename: Response to Reviewers (2).pdf

Decision Letter 2

Peter Karl Jonason

28 Feb 2022

Do conspiracy theories efficiently signal coalition membership? An experimental test using the “Who Said What?” design

PONE-D-21-29587R2

Dear Dr. Mus,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Peter Karl Jonason

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Peter Karl Jonason

2 Mar 2022

PONE-D-21-29587R2

Do conspiracy theories efficiently signal coalition membership? An experimental test using the “Who Said What?” design

Dear Dr. Mus:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Peter Karl Jonason

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Full list of statements.

    (PDF)

    S2 File. Pre-test of statements.

    (PDF)

    S3 File. Attention checks and analyses without inattentive respondents.

    (PDF)

    Attachment

    Submitted filename: Response to Reviewers.pdf

    Attachment

    Submitted filename: Response to Reviewers (2).pdf

    Data Availability Statement

    All informational and reproducibility materials are available at https://osf.io/2ufhy/.


