Abstract
Despite overwhelming scientific consensus on the existence of human-caused climate change, public opinion among Americans remains split. Directly informing people of the scientific consensus is among the most prominent strategies for climate communication, yet the reasons for its effectiveness and its limitations are not fully understood. Here, we propose that consensus messaging provides information not only about the existence of climate change but also about the traits of climate scientists themselves. In a large (N = 2,545) nationally representative survey experiment, we examine how consensus information affects belief in human-caused climate change by shaping perceptions of climate scientist credibility. In the control group, we first show that people learn both from and about climate scientists when presented with consensus and that perceived scientist credibility (especially skill) mediates up to about 40% of the total effect of consensus information on climate belief. We demonstrate that perceptions of climate scientists are malleable with two novel interventions that increase belief in climate change above and beyond consensus information.
Keywords: climate communications, science communications, persuasion, consensus messaging, climate change
Significance Statement
American public opinion remains divided on the occurrence and importance of human-caused climate change. In this paper, we investigate how, when, and why one prominent strategy for climate communications—consensus messaging—works. We demonstrate that information about scientific consensus provides evidence not only about climate change itself but also about the climate scientists who create that consensus. The inferences people make about climate scientists help explain why consensus messaging is effective and when it has heterogeneous effects, and highlight an important avenue for further climate communications research.
Introduction
Despite the growing focus on climate change in science, politics, and culture, much of the American public remains skeptical about its human-caused nature (1). This skepticism has been linked to decreased support for climate-related actions and policies (2). What interventions are effective at bridging the divide in views on climate change?
Among the most prominent strategies for climate communication is consensus messaging (2–4). Consensus messaging involves highlighting to people that 97% of climate scientists agree on the reality of human-caused climate change. This strategy of enhancing perceived scientific consensus has been shown to influence people’s beliefs about climate change (4). However, despite the attention it has received, the reasons behind the effectiveness of consensus messaging are not fully understood (5, 6), and estimates of its effectiveness have varied substantially (2, 7). Prior research has also highlighted a broad disagreement about whether there are segments of the population for which consensus information is ineffective (8–10), such as along the political divide (11). This knowledge gap poses a significant challenge in our ability to communicate scientific opinion to Americans broadly and, particularly, to specific subgroups, like climate skeptics.
Here, we propose that consensus information shapes perceptions of scientist credibility and that these perceptions of climate scientists inform how consensus is interpreted. Put differently, the perceived credibility of scientists serves as an important mediator of the impact of consensus information on climate belief. Specifically, if people are unsure of how credible climate scientists are, the consensus level can provide information about their traits consistent with principles of rational belief updating (12, 13). That is, the fact that 97% of scientists concur (as opposed to 50% or 75%) is itself revealing about the scientists, and could indicate they are likely very skilled or very biased (or both), with corresponding implications for climate belief. When people are presented with high levels of consensus, do they infer that skilled experts have converged on the truth or that a conspiracy is underway to trick them?
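This updating logic can be sketched as a toy Bayesian model. All probabilities below are illustrative assumptions, not values estimated from our data: a field of scientists may be skilled, unskilled, or biased toward saying "yes," and an observed consensus level updates beliefs about the climate and about the field jointly.

```python
from math import comb

# Toy model: the field of climate scientists is "skilled", "unskilled",
# or "biased" toward agreement. All parameter values are illustrative.
def p_agree(truth, field):
    """P(a single scientist endorses human-caused climate change)."""
    if field == "biased":
        return 0.95                 # says yes regardless of the evidence
    if field == "unskilled":
        return 0.50                 # uninformative coin flip
    return 0.9 if truth else 0.1    # skilled: usually detects the truth

def posterior(k, n=100):
    """Joint posterior over (truth, field) after k of n scientists agree."""
    prior_field = {"skilled": 0.45, "unskilled": 0.45, "biased": 0.10}
    joint = {}
    for truth in (True, False):
        for field, pf in prior_field.items():
            p = p_agree(truth, field)
            likelihood = comb(n, k) * p**k * (1 - p) ** (n - k)
            joint[(truth, field)] = 0.5 * pf * likelihood  # uniform prior on truth
    total = sum(joint.values())
    return {h: v / total for h, v in joint.items()}

def marginals(post):
    p_climate = sum(v for (t, _), v in post.items() if t)
    p_skilled = sum(v for (_, f), v in post.items() if f == "skilled")
    p_biased = sum(v for (_, f), v in post.items() if f == "biased")
    return p_climate, p_skilled, p_biased

low, high = marginals(posterior(50)), marginals(posterior(97))
print("50% consensus:", low)
print("97% consensus:", high)
```

Under these assumptions, moving from 50% to 97% consensus raises the inferred probability of human-caused climate change, of the field being skilled, and of the field being biased, all at once—the qualitative pattern of simultaneous updating that motivates our design.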
In a large (N = 2,545) US nationally representative survey experiment, we test how scientific consensus simultaneously affects beliefs about climate change and scientist credibility. We randomize participants to one of three conditions: a control group or two interventions aimed at influencing the perceived credibility of climate scientists. After receiving the intervention, we vary the hypothetical level of climate consensus presented to participants and measure the perceived skill and bias (14, 15) of scientists who either agree or disagree with the consensus (16). Participants were presented with five hypothetical levels of scientific consensus from a survey on human-caused climate change [50%, 75%, 90%, 97%, 99%] and asked, at each level, to report their belief in human-caused climate change. Additionally, at each consensus level, they were asked to rate both the bias (tendency to express a position regardless of the evidence) and the skill (ability to detect the truth about climate change) of climate scientists. We asked these questions about skill and bias regarding scientists who agree (proconsensus) as well as disagree (anticonsensus) that climate change is occurring (see Methods for details).
This experimental design enables us to probe the effects of consensus directly and allows for the possibility that scientists are perceived differently depending on their stance on climate change. We begin by restricting to the control group, which received no additional information about scientists’ credibility, and show that consensus level has both a direct effect on belief in human-caused climate change and a sizable indirect effect (about two-thirds the size of the direct effect) via changes in perceptions of scientists, particularly the skill of proconsensus scientists. In two additional large studies conducted on different platforms, Lucid and CloudResearch, which—like the control group in the main study—provide no information beyond consensus, we show that the patterns of inference are similar across samples and with different experimental designs.
We explore the prevalence of heterogeneous effects of consensus information across the measured range of consensus levels. Our results suggest that there is substantial heterogeneity in climate belief updating, especially at high levels of consensus—with an increase in consensus from 97% to 99% causing a reduction in climate belief for 54% of participants. We demonstrate that the inferences people make about the attributes of climate scientists explain a substantial portion of this heterogeneity when compared to the explanatory power of observable demographics.
Given the importance of the perceived skill and bias of scientists in consensus messaging, we next include the two interventions in our analyses and test their impact on these perceptions. We show that providing information about the methods and history behind climate science can affect perceived scientist skill and increase climate belief above and beyond consensus information. This suggests a pathway to improving upon consensus messaging. Taken together, our results suggest that consensus messaging, and its effectiveness, involves a process of learning not only from but about climate scientists.
The climate belief system
To understand the belief system surrounding consensus messaging, we first restrict the analysis to the control group, which was not provided with any information beyond consensus. Figure 1 shows reported beliefs about climate change and the skill and bias of pro- and anticonsensus scientists. While there are clear baseline differences in belief by political party, we find that, on average, within-person belief in human-caused climate change increases by 0.24pp (95% Cr. Int.: [0.20, 0.28]) with each 1pp increase in consensus. Over a wide range of possible consensus levels and across parties and levels of political conservatism (see Figure S2), scientific consensus increases belief in human-caused climate change.
Fig. 1.
Reported belief in climate change and the skill and bias of pro and anticonsensus scientists at each consensus level. The x-axis shows the hypothetical level of consensus among climate scientists while the y-axis shows reported belief expressed on 0–100 scales split by political party identification. The first column shows belief in human-caused climate change, the second and third columns show beliefs about the skill of proconsensus and anticonsensus scientists. The last two columns show beliefs about the bias of the two groups of scientists. 95% confidence intervals around the mean are shown.
Crucially, the inferences drawn from consensus are not restricted to belief in climate change but extend to the attributes of climate scientists. A 1pp increase in consensus is associated with increases in the perceived skill (95% Cr. Int.: [0.12, 0.19]) and bias (Cr. Int.: [0.07, 0.15]) of proconsensus scientists and in the skill (Cr. Int.: [0.08, 0.15]) and bias (Cr. Int.: [0.07, 0.15]) of anticonsensus scientists. Furthermore, while there are substantial baseline differences in each belief across political parties—with Republicans viewing anticonsensus scientists as credible while Democrats believe the opposite—the demonstrated patterns of belief updating are broadly similar across parties (see Table S2), with the exceptions of Democrats showing a slightly larger effect of consensus on climate belief (Cr. Int.: [0.03, 0.26]) and Republicans showing a slightly larger effect of consensus on the perceived skill of anticonsensus scientists (Cr. Int.: [0.05, 0.23]), both relative to Independents. For all belief variables, we compare the posterior distributions of the effect of consensus for Democrats and Republicans and fail to detect statistical differences between the parties. In an additional experiment conducted on CloudResearch (Table S4) with 521 participants, we similarly find that the perceived skill (Cr. Int.: [0.20, 0.30]) of proconsensus scientists and the skill (Cr. Int.: [0.02, 0.13]) and bias (Cr. Int.: [0.10, 0.21]) of anticonsensus scientists increase with consensus, along with belief in climate change (Cr. Int.: [0.40, 0.53]; see Section S2 in the Supplementary Material). An additional experiment on Lucid confirms the strong relationships between consensus and climate belief and between consensus and the perceived skill of proconsensus scientists (Table S6).
While it may appear counterintuitive that beliefs about the skill and bias of both types of scientists increase with consensus, it may be that consensus increases both the probability that the field of climate science has the expertise to provide a meaningful answer and the probability that the field itself is biased. If a field consisted largely of skilled and unbiased scientists, it would reach a substantial consensus. But apparent consensus could also occur if the field were largely biased. Thus, a field in broad agreement could stem from—and hence signal—high skill, collusion, or both, compared to a field in disagreement (13). While participants, on average, demonstrate this pattern of beliefs, there is substantial heterogeneity in the patterns of belief updating at the individual level. Moving from 50% to 99% consensus, approximately one-quarter of respondents decrease their belief in climate change and in the perceived skill of proconsensus scientists, while one-third of participants show negative belief updates for the other attributes.
Consensus level thus appears to shape perceptions of scientist attributes. To quantify the importance of inferences about scientist attributes in consensus messaging, we fit a Bayesian multilevel structural equation model, which allows us to compare the direct effect of consensus on climate belief with the effect mediated by scientist attributes (see Figure S1). Bayesian analyses capture uncertainty in a nuanced way without arbitrary statistical cutoffs, helping to ensure that the information contained in the data is neither overstated nor understated (17). While consensus has a strong direct effect (Cr. Int.: [0.13, 0.19]), the indirect pathways provide meaningful routes to impact climate belief, with their total effect being 82% that of the direct effect (45% of the total effect). Inferences about the skill of scientists appear to be especially impactful—comparable to 48% of the size of the direct effect. In two additional experiments, one on CloudResearch (Figure S3) and one preregistered experiment on Lucid (Figure S4), which vary in sample, stimulus presentation, and outcome elicitation, we find that the indirect effect accounts for varying proportions of the total effect (40% and 9%, respectively), with skill being most important, corresponding to 77% and 98% of the indirect effects. In the Lucid sample, consensus level notably did not appear to have much effect on perceptions of other scientist attributes. Differences could occur due to variation in the subject pool, as prevailing beliefs change across groups and over time. Such variation may affect quantitative properties of the beliefs held and inferences made about climate change and climate scientists.
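The direct/indirect decomposition can be illustrated with a product-of-coefficients mediation sketch. Our actual models are Bayesian multilevel SEMs fit in R; the Python simulation below instead uses ordinary least squares on fabricated data, with generating coefficients that are assumptions chosen to echo the reported pattern (a consensus effect on perceived skill near 0.15pp and a direct effect near 0.16pp).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
# Consensus levels normalized so that 50% = 0 (i.e., 50, 75, 90, 97, 99)
consensus = rng.choice([0, 25, 40, 47, 49], size=n).astype(float)

# Hypothetical data-generating process: consensus raises perceived skill of
# proconsensus scientists (mediator), which in turn raises climate belief.
skill = 0.15 * consensus + rng.normal(0, 5, n)
belief = 0.16 * consensus + 0.90 * skill + rng.normal(0, 5, n)

def ols(y, cols):
    """Least-squares slopes (intercept dropped) for y on the given columns."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(skill, [consensus])[0]               # consensus -> mediator path
direct, b = ols(belief, [consensus, skill])  # direct path and mediator -> belief path
indirect = a * b                             # product-of-coefficients indirect effect
print(f"direct={direct:.3f}, indirect={indirect:.3f}, "
      f"indirect share of total={indirect / (direct + indirect):.2f}")
```

With these generating values the indirect path (consensus → skill → belief) contributes roughly 45% of the total effect, the same order of magnitude as the mediated share reported above.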
While consensus information appears to meaningfully increase belief in human-caused climate change, on average, there may be substantial heterogeneity in its effectiveness (5, 12). Prior work has debated whether climate consensus messaging may be less effective for conservatives (10, 18, 19). Our results so far indicate a possible pathway through which heterogeneity in effects may occur—the inferences made about climate scientists. In particular, the effect of consensus messaging may be most effective for people who infer that proconsensus scientists are skilled.
First, we observe substantial heterogeneity in the average effect over both the entire range of consensus and from one consensus level to the next—which we call a “one-step” belief update. We find that 26% of participants demonstrate a negative average effect of consensus, as estimated with a Bayesian random-effects regression. There is even greater heterogeneity in effects for one-step updates, where 42%, 42%, 49%, and 54% of participants have negative effects for the 50–75, 75–90, 90–97, and 97–99 consensus changes, respectively.
Next, we explore the predictors of heterogeneity in one-step updates. In a Bayesian random-effects model, we find that a person’s one-step climate belief update is meaningfully related to their one-step update about the skill and bias of proconsensus scientists (Figure 2, right panel). A competing model using demographic variables and prior trust in university scientists generally (which may be distinct from trust in climate scientists specifically (20)) produces no predictors whose credible intervals exclude zero. In fact, the model accounting only for inferences about climate scientists explains more of the variance in belief updating (higher Bayesian R²) than the demographics model.
Fig. 2.
Relationship between one-step updates and backfires in climate belief and one-step updates in perceptions of scientist attributes. The left panel predicts whether a participant demonstrates a backfire—a reduction in their climate belief from one consensus level to the next—using their change in perceptions of the skill and bias of pro- and anticonsensus scientists. The right panel predicts the size of the one-step change in climate belief—from one consensus level to the next—using the same predictors. The left panel uses a Bayesian logistic regression while the right panel uses a Bayesian Gaussian model. The predictors are z-scored in both models. The mean predicted effect is shown with a diamond while 80% (thick lines) and 95% (thin lines) credible intervals are shown around the mean.
Furthermore, we predict backfire effects—instances where the one-step climate update is negative—using a similar Bayesian random-effects model and find that, again, one-step updates about the skill of proconsensus scientists provide a meaningful predictor (Figure 2, left panel). Neither the demographic predictors nor prior trust in university scientists have credible intervals that exclude 0.
Intervening on perceptions of scientists
Our analysis so far suggests that consensus messaging shapes beliefs about climate scientists, which in turn mediate its effects on belief in climate change, particularly through perceptions of the skill of proconsensus scientists. We next test whether it is possible to intervene on these indirect pathways to affect belief beyond the effect of consensus. A total of 1,698 participants were assigned to receive one of two interventions designed to shift beliefs about the credibility of scientists, in conjunction with consensus information. Participants in the “History” condition received a paragraph of information about the longstanding history and foundations of climate science, while those in the “Institutions” condition received information about the institutional constraints on bias in climate science (see Methods/Supplementary Material). Both interventions might affect beliefs about scientists and climate change. Importantly, neither intervention was explicitly about the likelihood that human-caused climate change is occurring.
Figure 3 plots the Bayesian posterior average treatment effect estimates, controlling for the direct effect of consensus, with 80% and 95% credible intervals shown for both interventions on every outcome. We find that, across consensus levels, the “History” intervention increased belief in the skill of proconsensus scientists, decreased belief in the skill of anticonsensus scientists, and ultimately increased belief in human-caused climate change. Meanwhile, the “Institutions” intervention increased belief in the skill of proconsensus scientists. Both interventions appeared to have little effect on perceived bias. The Bayesian analyses reveal that the posterior probability of a positive effect on climate belief was 99% for the “History” intervention and 73% for the “Institutions” intervention. These results suggest that information which improves perceptions of the credibility of proconsensus scientists—especially by emphasizing the skill gap between pro- and anticonsensus scientists—can raise climate belief over and above the effect of consensus information.
Fig. 3.
Treatment effect of the history and institutions interventions on beliefs about climate and scientist attributes. Treatment effects are plotted from a hierarchical Bayesian regression of the focal belief on a dummy variable for each experimental condition with random intercepts for each individual and random intercepts and slopes for each consensus level. 80% credible intervals are shown with thick lines around the mean while 95% credible intervals are shown with thin lines.
Discussion
Overall, our results hold theoretical and practical implications for the use of climate consensus messaging. By exploring the belief system surrounding climate change, we provide novel insights into how, when, and why climate consensus messaging works. In particular, the data suggest that consensus is understood in relation to beliefs about climate scientists. If proconsensus scientists are viewed as skilled, especially relative to anticonsensus scientists, and the receiver of consensus information infers that consensus arose from this skill, the message is likely to be effective. Relatedly, if the receiver infers that the holdouts from consensus are likely to be biased, then the message is also likely to be effective. For example, anticonsensus climate scientists may be anticonsensus because of bias due to funding from the fossil fuel industry. Given that the average belief in scientific consensus in the sample is 78% and only 12% of respondents believe consensus is at least 95%, there is substantial room to correct misperceptions about the scientific consensus and shape resulting climate beliefs. However, inferences about the causes of consensus and what this means about climate scientists and climate change are central to the effectiveness of consensus messaging. Consensus itself shapes these inferences.
These inferences appear important in explaining heterogeneous effects in consensus messaging. Inferences about the attributes of scientists explain a larger proportion of the heterogeneity in effects than demographics, suggesting that heterogeneity may cut across political parties and be better explained by belief systems (12, 21–24). This insight also suggests that “persuasion in parallel” (the finding that persuasive messaging acts in the expected direction for almost all subgroups (25)) largely holds across demographic subgroups but may not hold when considering similarities in how certain groups make inferences. Studying the belief-based predictors of heterogeneity may prove fruitful in understanding why consensus messaging, and other persuasive messages, work in some populations but not others. For example, opposition to climate science has been linked to endorsement of conspiracy theories, at least in the United States (26–28), which could potentially capture variation in the interpretation and impact of consensus information.
Given the centrality of perceptions of the attributes of climate scientists, we also demonstrate that it is possible to intervene directly on these beliefs to improve the effectiveness of consensus messaging. We show that an intervention which increases the perceived difference in skill between pro- and anticonsensus scientists increases climate beliefs in conjunction with the consensus message. This effect occurs across political parties.
Further research may explore how closely these beliefs translate into action, such as engaging in more sustainable behaviors, donating to climate charities, or lobbying for policy change (7). The mechanism presented here suggests that informing people of other types of consensus may have important effects if that agreement is seen as coming from a credible process; this could be tested in domains like the high level of support for addressing climate change among the American public (29). In addition, while our experimental design varied consensus level in order to evaluate its impact most directly, traditional consensus messaging experiments assess the impact of a single message relative to one’s initial beliefs. We also measure belief in human-caused climate change directly rather than with separate measurements for belief in climate change and in its human causation, as is common in the literature, which prevents us from observing how the two propositions are differentially impacted by consensus. The extent to which our findings generalize to such settings remains to be seen and depends on the salience of the various interpretations of consensus. Another key limitation of our study is the use of an exclusively US-based sample. Recent work has demonstrated that consensus messaging is effective among liberals but not other groups across 63 countries (30), which suggests an important area for future research on the inferences made about consensus in non-US contexts.
These results are important for the effective use of scientific communication, as expert credibility is one of the most hotly contested targets in the battle for public opinion over climate science (31, 32). Perceived credibility may affect not only the interpretation of a message, but also receptiveness to subsequent messaging, willingness to seek out information from scientific sources (33), and support for funding of science (34). Inspired by normative principles (13, 35), we systematically link key variables like consensus information, source credibility, and belief change. This work reveals important properties of public response to climate communication (12, 13, 36) while providing insights into strategies that can bolster the effectiveness of science communication.
Materials and methods
Our primary sample consists of 2,545 participants from a nationally representative American sample recruited on Bovitz—a high-quality recruitment platform that specializes in representative samples (for a paper demonstrating the benefits of Bovitz in data collection, see (37)). A summary of this sample can be seen in Table S1 (with Tables S3 and S5 showing equivalent demographic information for the additional CloudResearch and Lucid samples). The sample closely matches the US population more broadly, especially on partisanship. Participants provided informed consent, and the studies were approved by the MIT Committee on the Use of Humans as Experimental Subjects (COUHES).
In the primary study design with the Bovitz sample, participants completed a survey composed primarily of questions concerning demographics, their beliefs about climate change, and their beliefs about climate scientists. In the demographics section, we collected data on traditional demographics (age, race, gender, income, and education) plus political demographics (party affiliation, political extremity, and affective polarization). The composition of the sample is shown in Table S1.
Next, we asked participants their belief in human-caused climate change, their belief in the consensus level among climate scientists, and their confidence in each of these estimates. Additionally, we asked about the likelihood that a climate scientist is extremely biased, meaning they will ignore the evidence in their evaluation of whether climate change is happening; and the likelihood that an extremely biased scientist will be biased to say climate change is or is not occurring. The product of these two beliefs gives the prior probability that any given climate scientist is biased in a certain direction. Lastly, we asked participants the likelihood that, conditional on climate change (not) occurring, an unbiased climate scientist will correctly identify this using their skill.
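As a concrete illustration of how these two elicited probabilities combine (the response values below are hypothetical, chosen only to show the arithmetic):

```python
# Hypothetical elicited responses, for illustration only
p_extremely_biased = 0.30    # P(a random climate scientist is extremely biased)
p_yes_given_biased = 0.60    # P(a biased scientist leans "climate change is happening")

# Prior probability that a random scientist is biased in each direction
p_biased_toward_yes = p_extremely_biased * p_yes_given_biased        # ~0.18
p_biased_toward_no = p_extremely_biased * (1 - p_yes_given_biased)   # ~0.12
print(p_biased_toward_yes, p_biased_toward_no)
```

The two directional bias probabilities sum back to the overall probability of bias, so the elicitation decomposes a single prior into its directional components.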
Finally, participants were randomized to one of three conditions: (i) the “Institutions” intervention, (ii) the “History” intervention, or (iii) the control group. Participants in one of the intervention conditions were presented with the intervention text (shown in Supplementary Material S1.2) while control participants proceeded to the next screen. We then presented participants with the main questions of interest relating to their beliefs about human-caused climate change and about proconsensus scientists (those who express that climate change is occurring) and anticonsensus scientists (those who do not). For each belief—human-caused climate change, the skill of proconsensus scientists, the skill of anticonsensus scientists, the bias of proconsensus scientists, and the bias of anticonsensus scientists—we asked participants to hypothetically consider five different levels of consensus among climate scientists [50%, 75%, 90%, 97%, 99%] and report their belief at that consensus level using a 0–100 scale. For the bias of scientists, participants rated the likelihood that a randomly selected climate scientist would express an opinion irrespective of the evidence. For skill, they rated the likelihood that an unbiased scientist would be able to discern the truth about climate change. In both cases, participants provided these ratings for two kinds of scientists—those who support the consensus (proconsensus) and those who oppose it (anticonsensus).
To quantify the effect of consensus on belief in climate change and about the attributes of scientists, we implement a hierarchical Bayesian model in R restricting to only participants in the control group. We regress belief in the attribute on a continuous variable for consensus level (normalized such that 50% consensus is 0) and a random intercept and consensus-slope for each participant. We use the default priors in the “brms” package (38) and run the Markov chain Monte Carlo (MCMC) sampling procedure for 4,000 warm-up and 4,000 sampling iterations across 4 chains.
To measure heterogeneity in the average effect of consensus on belief, we take the individual-level “consensus” random slopes from the Bayesian random-effects model used above and estimate the percent of these random slopes that are negative. We additionally compute the empirical one-step update from one consensus level to the next by taking the difference between expressed belief in climate change at a consensus level and belief at the previous consensus level.
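Computationally, the empirical one-step updates and backfire flags reduce to first differences across the five consensus levels. A minimal sketch (our analyses were run in R; the belief reports below are hypothetical):

```python
import numpy as np

levels = [50, 75, 90, 97, 99]  # consensus levels shown to each participant

# Hypothetical 0-100 belief reports for three participants (one row each)
beliefs = np.array([
    [40, 55, 62, 70, 72],  # monotone positive updater
    [45, 50, 48, 60, 57],  # backfires on the 75->90 and 97->99 steps
    [80, 78, 75, 70, 66],  # negative average effect of consensus
])

one_step = np.diff(beliefs, axis=1)  # update from each consensus level to the next
backfire = one_step < 0              # a negative one-step update is a backfire

print("one-step updates:\n", one_step)
print("backfire rate per transition:", backfire.mean(axis=0))
```

In the paper, the per-person average effects come from random slopes in a hierarchical model rather than raw differences, but the one-step quantities are exactly these first differences.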
To estimate the predictors of heterogeneity in one-step updates, we use a Bayesian random-effects model. The primary model predicts the size of a participant’s one-step climate belief update from their z-scored one-step updates in each of the four scientist attributes. We include a random intercept for each participant and a random intercept and random scientist-attribute slopes for each consensus level. We estimate an equivalent model for whether there is a backfire (a negative one-step update) in climate belief using identical predictors and a Bernoulli family with a logit link. We estimate equivalent models using demographic predictors—age, a dummy for whether the participant is female, a dummy for whether they attended college, a continuous measure of income, a continuous measure of political conservatism, their trust in academic scientists, and their political identification. These predictors are not z-scored. As in the belief-based model, random slopes are included for consensus. We use default “brms” priors with 4,000 warm-up and 4,000 sampling iterations, an “adapt_delta” of 0.999, and a max tree depth of 15. To compute a Bayesian R², we use the procedure in (39) and take the median.
To implement the structural equation model, we use a multilevel Bayesian structural equation model implemented using the “bsem” package in R. The between-subject level predicts each belief (climate and the 4 scientist attributes) from the participant’s age, and separate dummy variables indicating whether the participant is male, white, a Democrat, and completed college. The within-subject level predicts each scientist attribute from the consensus level while climate belief is predicted from the consensus level and each scientist attribute. The model is sampled for 1,000 warm-up and 1,000 sampling iterations on each of 4 chains. The “bsem” default priors are used.
To estimate the treatment effects presented in Figure 3, we include participants from all three experimental arms. For each belief, we implement a hierarchical Bayesian regression in R. We regress each belief on a dummy for the condition a participant was assigned to and the consensus level (normalized such that 50% consensus is 0) as well as a random intercept and random consensus-slope for each participant. We use the default priors in the “brms” package and implement 4,000 warm-up and 4,000 sampling iterations across 4 chains.
Acknowledgments
The authors thank three anonymous reviewers for their helpful feedback. The authors would also like to thank members of the Human Cooperation Lab at MIT, the Computational Cognitive Neuroscience Lab at Harvard, and the MIT Climate and Sustainability Consortium, and attendees of the First Annual Polarization Research Lab Conference, the Triennial Choice Symposium, the Annual Meeting of the Society for Mathematical Psychology, and the Annual Meeting of the Society for Judgment and Decision Making for their comments.
Contributor Information
Reed Orchinik, Sloan School of Management, MIT, 100 Main St, Cambridge, MA 02142, USA.
Rachit Dubey, Sloan School of Management, MIT, 100 Main St, Cambridge, MA 02142, USA.
Samuel J Gershman, Department of Psychology, Harvard University, 33 Kirkland St, Cambridge, MA 02138, USA.
Derek M Powell, School of Social and Behavioral Sciences, Arizona State University, 4701 W. Thunderbird Rd, Glendale, AZ 85306, USA.
Rahul Bhui, Sloan School of Management, MIT, 100 Main St, Cambridge, MA 02142, USA.
Supplementary Material
Supplementary material is available at PNAS Nexus online.
Funding
The authors declare no funding sources.
Author Contributions
R.O. conceived and conducted the experiments, analyzed the results, and wrote the manuscript. R.D. conceived the experiments and revised the manuscript. S.J.G. revised the manuscript. D.M.P. conceived the experiments and revised the manuscript. R.B. conceived and conducted the experiments and wrote the manuscript.
Previous Presentation
These results were previously presented at the first annual meeting of the Polarization Research Lab. Preliminary results, using a different sample than the current submission, were presented at the 45th Annual Conference of the Cognitive Science Society, and a report appeared in the conference’s nonarchival proceedings.
Preprints
A preprint of this article is available at https://osf.io/preprints/psyarxiv/ezua5.
Data Availability
Data, surveys, and code are available at: https://osf.io/jynqh/ (40).
References
- 1. Leiserowitz A, et al. 2021. Climate change in the American mind - September 2021. Technical report. Yale Program on Climate Change Communication.
- 2. van der Linden S. 2021. The gateway belief model (GBM): a review and research agenda for communicating the scientific consensus on climate change. Curr Opin Psychol. 42:7–12.
- 3. Rode JB, et al. 2021. Influencing climate change attitudes in the United States: a systematic review and meta-analysis. J Environ Psychol. 76:101623.
- 4. van der Linden SL, Leiserowitz AA, Feinberg GD, Maibach EW. 2015. The scientific consensus on climate change as a gateway belief: experimental evidence. PLoS One. 10(2):e0118489.
- 5. Bayes R, Bolsen T, Druckman JN. 2020. A research agenda for climate change communication and public opinion: the role of scientific consensus messaging and beyond. Environ Commun. 17(1):1–19.
- 6. Landrum AR, Slater MH. 2020. Open questions in scientific consensus messaging research. Environ Commun. 14(8):1033–1046.
- 7. Vlasceanu M, et al. 2024. Addressing climate change with behavioral science: a global intervention tournament in 63 countries. Sci Adv. 10(6):eadj5778.
- 8. Chinn S, Lane DS, Hart PS. 2018. In consensus we trust? Persuasive effects of scientific consensus communication. Public Underst Sci. 27(7):807–823.
- 9. Goldberg MH, Gustafson A, van der Linden S, Rosenthal SA, Leiserowitz A. 2022. Communicating the scientific consensus on climate change: diverse audiences and effects over time. Environ Behav. 54(7-8):1133–1165.
- 10. Rode JB, Dent AL, Ditto PH. 2023. Climate change consensus messages may cause reactance in conservatives, but there is no meta-analytic evidence that they backfire. Environ Commun. 17(1):60–66. 10.1080/17524032.2022.2101501
- 11. Većkalov B, et al. 2024. A 27-country test of communicating the scientific consensus on climate change. Nat Hum Behav. 8(10):1892–1905.
- 12. Cook J, Lewandowsky S. 2016. Rational irrationality: modeling climate change belief polarization using Bayesian networks. Top Cogn Sci. 8(1):160–179.
- 13. Hahn U, Harris AJL, Corner A. 2016. Public reception of climate science: coherence, reliability, and independence. Top Cogn Sci. 8(1):180–195.
- 14. Dieckmann NF, et al. 2017. Public perceptions of expert disagreement: bias and incompetence or a complex and random world? Public Underst Sci. 26(3):325–338.
- 15. Lupia A. 2013. Communicating science in politicized environments. Proc Natl Acad Sci U S A. 110(Suppl 3):14048–14054.
- 16. Anderegg WRL, Prall JW, Harold J, Schneider SH. 2010. Expert credibility in climate change. Proc Natl Acad Sci U S A. 107(27):12107–12109.
- 17. Wasserstein RL, Schirm AL, Lazar NA. 2019. Moving to a world beyond “p < 0.05”. Am Stat. 73(sup1). 10.1080/00031305.2019.1583913
- 18. Chinn S, Hart PS. 2021. Effects of consensus messages and political ideology on climate change attitudes: inconsistent findings and the effect of a pretest. Clim Change. 167(3-4):47.
- 19. Kim JW, Liu R. 2024. Persuading climate skeptics with facts: effects of causal evidence vs. consensus messaging. Res Politics. 11(1):20531680241237311.
- 20. Myers TA, et al. 2017. Predictors of trust in the general science and climate science research of US federal agencies. Public Underst Sci. 26(7):843–860.
- 21. Alvarez RM, Debnath R, Ebanks D. 2023. Why don’t Americans trust university researchers and why it matters for climate change. PLOS Clim. 2(9):e0000147.
- 22. Gustafson A, Rice RE. 2020. A review of the effects of uncertainty in public science communication. Public Underst Sci. 29(6):614–633.
- 23. Hornsey MJ, Harris EA, Bain PG, Fielding KS. 2016. Meta-analyses of the determinants and outcomes of belief in climate change. Nat Clim Chang. 6(6):622–626.
- 24. Rabinovich A, Morton TA. 2012. Unquestioned answers or unanswered questions: beliefs about science guide responses to uncertainty in climate change risk communication. Risk Anal. 32(6):992–1002.
- 25. Coppock A. 2022. Persuasion in parallel. Chicago (IL): University of Chicago Press.
- 26. Hornsey MJ, Harris EA, Fielding KS. 2018. Relationships among conspiratorial beliefs, conservatism and climate scepticism across nations. Nat Clim Chang. 8(7):614–620.
- 27. Lewandowsky S, Gignac GE, Oberauer K. 2013. The role of conspiracist ideation and worldviews in predicting rejection of science. PLoS One. 8(10):e75637.
- 28. Tam K-P, Chan H-W. 2023. Conspiracy theories and climate change: a systematic review. J Environ Psychol. 91:1–15.
- 29. Sparkman G, Geiger N, Weber EU. 2022. Americans experience a false social reality by underestimating popular climate policy support by nearly half. Nat Commun. 13(1):4779.
- 30. Berkebile-Weinberg M, Goldwert D, Doell KC, Van Bavel JJ, Vlasceanu M. 2024. The differential impact of climate interventions along the political divide in 60 countries. Nat Commun. 15(1):3885.
- 31. Cook J, van der Linden S, Maibach E, Lewandowsky S. 2018. The consensus handbook. Center for Climate Change Communication.
- 32. Hmielowski JD, Feldman L, Myers TA, Leiserowitz A, Maibach E. 2014. An attack on science? Media use, trust in scientists, and perceptions of global warming. Public Underst Sci. 23(7):866–883.
- 33. Zhang FJ. 2023. Political endorsement by Nature and trust in scientific expertise during COVID-19. Nat Hum Behav. 7(5):696–706.
- 34. Ophir Y, Walter D, Jamieson PE, Jamieson KH. 2023. Factors assessing science’s self-presentation model and their effect on conservatives’ and liberals’ support for funding science. Proc Natl Acad Sci U S A. 120(38):e2213838120.
- 35. Merdes C, von Sydow M, Hahn U. 2021. Formal models of source reliability. Synthese. 198(S23):5773–5801.
- 36. Druckman JN, McGrath MC. 2019. The evidence for motivated reasoning in climate change preference formation. Nat Clim Chang. 9(2):111–119.
- 37. Stagnaro MN, Druckman J, Arechar AA, Willer R, Rand D. 2024. Representativeness versus attentiveness: a comparison across nine online survey samples. OSF.
- 38. Bürkner P-C. 2017. brms: an R package for Bayesian multilevel models using Stan. J Stat Softw. 80(1):1–28.
- 39. Gelman A, Goodrich B, Gabry J, Vehtari A. 2019. R-squared for Bayesian regression models. Am Stat. 73(3):307–309. 10.1080/00031305.2018.1549100
- 40. Orchinik R, Dubey R, Gershman SJ, Powell D, Bhui R. 2024. Learning from and about scientists. OSF. 10.17605/OSF.IO/JYNQH