. 2023 Nov 8;33(3):308–324. doi: 10.1177/09636625231203538

Belief updating when confronted with scientific evidence: Examining the role of trust in science

Tom Rosman, Sianna Grösser
PMCID: PMC10958746  PMID: 37937866

Abstract

In one exploratory study (N = 985) and one preregistered study (N = 1100), we investigated whether trust in science influences belief change on a medico-scientific issue when laypersons are confronted with scientific evidence. Moreover, we tested whether individuals with high trust in science trust science “blindly,” meaning that their trust in a scientific claim’s source prevents them from adequately evaluating the claim itself. Participants read eight fictitious studies on the efficacy of acupuncture, which were experimentally manipulated regarding direction (evidence favoring acupuncture vs diverging evidence) and quality (high vs low; only Study 2). Acupuncture-related beliefs were measured before and after reading. Moderator and mediator analyses showed that the magnitude of belief change indeed depends on trust in science. Furthermore, we found that people with high trust in science are better able to evaluate the quality of scientific studies, which, in turn, protects them from being influenced by low-quality evidence.

Keywords: belief updating, firsthand evaluation, scientific evidence, secondhand evaluation, trust in science


Members of information societies often rely on secondhand evaluations to assess the veracity of knowledge claims. This means that they do not directly evaluate the claim itself, but instead evaluate the trustworthiness of a claim’s source (e.g. a scientific organization). For this reason, trust in science and scientists plays a crucial role in shaping individual beliefs on scientific issues, such as the safety of vaccines or appropriate reactions to climate change. While the number of studies investigating relationships between trust in science and beliefs on such issues is vast and ever growing, only a few studies have investigated how trust in science affects changes in beliefs on such issues (i.e. belief updating; Anglin, 2019; Druckman and McGrath, 2019). In addition, little is known about whether individuals with high trust in science trust science “blindly,” meaning that their trust in a claim’s source prevents them from adequately evaluating the claim itself.

For this reason, the present article first strives to test whether trust in science indeed affects belief updating on a medico-scientific issue (beliefs on the efficacy of acupuncture). We investigated this research question in two separate experimental studies, which are described below. A second aim of this article focuses on a potential consequence of belief updating that is based on secondhand evaluations: such evaluations may lead to bias in information evaluation if they prevent individuals from evaluating the quality of the information itself (i.e. in terms of a firsthand evaluation). We chose to investigate this interplay between both types of evaluations more closely because two outcomes are conceivable. On one hand, the potential harm in secondhand evaluations overriding firsthand evaluations is considerable, given that it would make individuals with high trust in science more vulnerable to low-quality scientific evidence or even pseudoscience. On the other hand, individuals with higher trust in science may generally possess higher scientific literacy and favor a nuanced (instead of dogmatic) approach to scientific evidence, which would lead to better firsthand evaluations in this population. To our knowledge, no studies have yet analyzed such an interplay between firsthand and secondhand evaluations; accordingly, Anglin (2019) calls for additional research on “people’s receptiveness to evidence that is inconclusive, misrepresented, or false” (p. 197). It therefore remains an open question whether individuals with high trust in science are more vulnerable to pseudoscience. We investigated this in our second experimental study.

1. Background and hypotheses

Today’s societies are characterized by a strong division of cognitive labor (Bromme et al., 2010). This means that “most knowledge claims are based on specialized knowledge provided by specialized experts” (Bromme et al., 2010: 163), leading to an uneven distribution of knowledge across society. As a consequence, members of the general public often lack the skills to directly evaluate science-based knowledge claims. For example, someone who thinks about getting vaccinated against COVID-19 will likely seek information about the safety of the mRNA technology. However, unless they have a scientific background, they will not be able to directly evaluate the corresponding studies. Instead, they have to gauge the scientists’ trustworthiness, which requires inferring, according to Fiske and Dupree (2014), both their capabilities (“competence”) and their intentions (“warmth”) to make valid claims on the issue at hand. For these reasons, trust in science and scientists plays a crucial role in how public opinion on scientific issues is shaped.

In the present article, we follow the distinction between firsthand and secondhand evaluations brought forward by Bromme et al. (2010). A firsthand evaluation of a knowledge claim implies a direct assessment of the veracity of a knowledge claim, for example by analyzing its logical consistency or by comparing it with other knowledge (Bromme et al., 2010). In our example from above, the person could, for example, try to evaluate the sampling or the design of the vaccine studies. This, however, is a hard task for individuals with no background knowledge on the topic at hand. Therefore, such persons will likely refer to secondhand evaluations: instead of evaluating the knowledge claim directly, they will evaluate the credibility of the claim’s source and its relevance for the topic at hand (Bromme et al., 2010). For example, when looking at information found on Twitter, they may evaluate whether the person who tweeted about the safety of the vaccine is qualified to do so (i.e. by evaluating her or his professional background), and by estimating whether the person has good intentions (Fiske and Dupree, 2014).

Just how crucial trust in science is for belief updating is underlined in the above example. Given that they often lack the knowledge and skills to conduct firsthand evaluations and instead refer to secondhand evaluations (Bromme et al., 2010), individuals who see scientists as trustworthy are much more likely to update their beliefs on science-related issues when they are confronted with scientific claims. A number of empirical studies support this assumption. First and foremost, a study by Pilditch et al. (2020) found that individuals with low trust in a source potentially reject its claims, thus providing more or less direct evidence for the existence of secondhand evaluations. Similarly, with reference to the scientific context, Bleich et al. (2007) found trust in science to be a strong predictor of public attention to scientific experts on obesity, which, in turn, significantly predicted weight-related behaviors such as monitoring fruit and vegetable intake. Based on these findings, we posit, in a first set of hypotheses, that belief changes on a medico-scientific issue (acupuncture treatment of back pain) are stronger in individuals with high trust in science compared with those with lower trust in science. In addition, given that trust in science is generally high in Western countries (e.g. Pew Research Center, 2019; Sturgis et al., 2021) and since people seem to be rather receptive to clear and consistent scientific evidence (Anglin, 2019), we expect that a confrontation with scientific evidence will lead to a certain amount of belief updating, even when not factoring in the effects of trust in science.

  • Hypothesis 1. When confronted with scientific evidence on a certain topic, individuals’ beliefs on this topic will shift in the direction that the evidence is suggesting.

  • Hypothesis 2. The effect suggested in Hypothesis 1 is moderated by trust in science: the higher individual trust in science, the stronger the shift in topic-related beliefs.

While such effects of trust in science on belief changes would generally be good news for science and society alike, they bear a worrisome implication: if individuals fully rely on secondhand evaluations to evaluate science-related claims, they might be more prone to being influenced (or even manipulated) by low-quality scientific evidence or by pseudoscience, especially if they are unable or unwilling to additionally evaluate the claims through firsthand evaluations. Empirical evidence for this proposition comes from O’Brien et al. (2021), who found that individuals with high trust in science were “more likely to believe and disseminate false claims that contain scientific references than false claims that do not” (p. 1). In addition, Graso et al. (2022) found that individuals with high trust in the science of COVID-19 believed that even unsupported claims about COVID-19 would be supported by scientific evidence. Such findings are also in line with recent studies by Xiao et al. (2021) and Doss et al. (2023), who found that individuals with high trust in specific information sources (e.g. social media) are more vulnerable to being manipulated by these sources (e.g. by deepfake videos; Doss et al., 2023).

Notwithstanding these rather worrisome findings, a number of studies also found that individuals with high trust in science exhibit a higher amount of scientific literacy (e.g. Nan et al., 2022). Given that the very definition of scientific literacy includes skills to evaluate scientific information and arguments (Gormally et al., 2012), we argue that individuals with higher trust in science conduct more and better firsthand evaluations compared with individuals with lower trust in science. For example, a person who knows how to identify valid scientific arguments and how to interpret certain elements of a research design—important aspects of the scientific literacy construct (Gormally et al., 2012)—will more likely evaluate scientific claims on the level of firsthand evaluations compared with a person with lower scientific literacy. In addition, research from the field of epistemic beliefs has shown that individuals who trust in scientific authorities (justification by authority; Ferguson and Bråten, 2013) adopt a more nuanced approach to scientific evidence, as is evidenced by positive relationships between justification by authority and the understanding of conflicting information (Strømsø et al., 2008), for example. In sum, we thus argue that individuals with high trust in science are better at directly evaluating the veracity of scientific knowledge claims through firsthand evaluations, which is why such individuals should have stronger capabilities to differentiate between high- and low-quality evidence. We posit the following hypotheses:

  • Hypothesis 3. When individuals are confronted with high-quality evidence, the shift in topic-related beliefs will be stronger depending on trust in science, whereas when confronted with low-quality evidence, trust in science will play a less strong moderating role. In other words, the moderator effect suggested in Hypothesis 2 will be stronger when individuals are confronted with high-quality evidence compared with low-quality evidence.

  • Hypothesis 4. The (positive) relationship between trust in science and individual assessments of the quality of evidence is stronger when participants are confronted with high-quality evidence compared with low-quality evidence.

  • Hypothesis 5. Trust in science and scientific literacy are positively related.

2. Study 1 (exploratory)

Hypotheses 1 and 2 were tested in two separate studies. The first study (Study 1) consists of a reanalysis of existing data, 1 and its results were used as a basis for deriving confirmatory hypotheses to be tested in the second study (Study 2). It should be noted that Study 1 data were collected and analyzed before preregistering the hypotheses specified above, and therefore, all Study 1 findings are to be treated as exploratory. Study 1 data as well as the analysis code for the present paper and all corresponding results are available in the online supplemental material for this article.

Participants, study design, and procedure

Study 1 was designed as a randomized online experiment using the software EFS survey. Participants were recruited and incentivized via an online panel provider. The preregistration (Rosman et al., 2020) outlined several inclusion and exclusion criteria, which were upheld through targeted recruitment efforts and a screening page at the start of the survey. Regarding targeted recruitment, participants were only invited to the survey if they were between 18 and 65 years old and had an educational level of at least middle maturity (10 years of schooling). N = 6294 persons responded to the screening questions. Participants who had received acupuncture treatments in the previous 10 years, and thus might have more entrenched beliefs on the topic in question, were excluded after the screening page (n = 2275; 36.15%). In addition, the screening excluded participants believing in the superiority of massage over acupuncture in the treatment of back pain (n = 1109; 17.62%). Furthermore, participants believing either in the superiority of acupuncture or having no clear opinion on this topic were specified to be distributed equally across the dataset, and gender was also specified to be equal across these groups (which resulted in screening out additional n = 1736 participants). The three latter criteria were defined to meet the requirements of Rosman et al.’s (2021) research questions. An additional n = 234 participants did not complete the survey and were also excluded, resulting in a final sample of N = 985 participants (German-speaking general population aged 18–65 years; mean age: M = 42.42; standard deviation (SD) = 13.47; 49.8% women).

The study employed a 2 × 3 experimental design with one between-person and one within-person factor. The between-person manipulation (three conditions) was implemented into a reading task in which participants read 12 brief summaries (around 150 words each) of fictitious scientific studies. All studies were portrayed as pre–post studies with two experimental groups; for example, one study compared 159 back pain patients receiving either acupuncture treatment or massaging with regard to changes in their subjective pain intensity. In the “pro-acupuncture” condition, all eight studies on acupuncture suggested the superiority of acupuncture over massage in the treatment of back pain, which was indicated by considerable treatment effects in the acupuncture but not in the massage group and/or by significant differences between groups (see supplemental material). In the “contra-acupuncture” condition, all corresponding eight studies suggested the inferiority of acupuncture. In both these conditions, we opted to present participants with a clear and unambiguous set of findings. This was to maximize belief change and to minimize the effects of biased assimilation (Lord et al., 1979), given that “when presented with clear, disconfirming evidence, people may not be able to justify maintaining their beliefs” (Anglin, 2019: 195). Finally, in the “diverging evidence” condition, four studies indicated superiority and four studies indicated inferiority. The diverging evidence condition can thus be considered a control group because we expect little change in acupuncture-related beliefs when the number of pro- and contra-acupuncture studies is balanced. Notably, the fictitious studies were of rather high scientific quality, as demonstrated by their experimental designs with sufficiently large sample sizes and standardized measurement of the dependent variables.
To make the reading task more realistic, four thematically irrelevant studies were included in each condition. For example, one control study dealt with the effects of stretching on back pain, making no reference to acupuncture at all. As within factor, acupuncture-related beliefs were measured twice—once before and once after the experimental manipulation. The distribution of participants across experimental conditions and descriptive statistics on the study variables can be found in Table 1; English translations of the study materials can be found in the supplemental material.

Table 1.

Distribution of participants and descriptive statistics across experimental groups (Study 1).

Experimental factor “direction of evidence”

                   Pro-acupuncture    Contra-acupuncture   Controversial evidence
N                  327                326                  332
acupre             3.89 (SD = 0.71)   3.84 (SD = 0.74)     3.91 (SD = 0.71)
acupost            4.19 (SD = 0.81)   3.20 (SD = 0.79)     3.80 (SD = 0.73)
trust              4.16 (SD = 0.84)   4.14 (SD = 0.87)     4.27 (SD = 0.82)
mcheckdirection    6.36 (SD = 1.30)   1.57 (SD = 1.08)     4.02 (SD = 1.45)

acupre: mean acupuncture-related beliefs before reading; acupost: mean acupuncture-related beliefs after reading; trust: mean trust in science before reading; mcheckdirection: mean score on the direction manipulation check (i.e. perceived direction of evidence; higher scores = evidence favors acupuncture); SD: standard deviation.

Ntotal= 985.

The study was in full accordance with the declaration of Helsinki and the American Psychological Association (APA) ethics code (2002). Before participation, participants were required to explicitly agree to an informed consent form, which included information on the study procedure, participants’ rights, and data protection. At the end of the study, participants were comprehensively debriefed, with this debriefing varying over experimental conditions (see Rosman et al., 2020 for details).

Measures

Acupuncture-related beliefs were measured using a specifically constructed eight-item scale (sample item: “Acupuncture can significantly reduce back pain”; 6-point response format from “do not agree at all” to “fully agree”). Four of the eight items were inversely coded. Scale reliability was good, with α = .783 at pretest and α = .847 at posttest.
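The scoring logic described above (reverse-coding the four inversely keyed items before checking internal consistency) can be sketched as follows. This is an illustrative computation on simulated 6-point responses, not the study's analysis code; the latent-belief and noise parameters are invented.

```python
import numpy as np

def reverse_code(items, cols, scale_min=1, scale_max=6):
    """Reverse-code the given columns of a Likert response matrix."""
    out = items.astype(float).copy()
    out[:, cols] = scale_max + scale_min - out[:, cols]
    return out

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated 6-point responses to 8 items driven by one latent belief;
# items 4-7 are flipped to mimic the four inversely coded items.
rng = np.random.default_rng(0)
latent = rng.normal(3.5, 1.0, size=(200, 1))
raw = np.clip(np.round(latent + rng.normal(0, 0.8, size=(200, 8))), 1, 6)
raw[:, 4:] = 7 - raw[:, 4:]

recoded = reverse_code(raw, cols=[4, 5, 6, 7])
print(round(cronbach_alpha(recoded), 3))
```

Computing alpha on the raw (unrecoded) matrix would give a much lower value, which is a quick sanity check that the reverse-keyed items were identified correctly.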

Trust in science was assessed using a translation of the five-item scale by Nisbet et al. (2015; sample item: “Findings of scientists are trustworthy”; same response format as above). Again, scale reliability was satisfactory to good with α = .793. According to Fage-Butler et al. (2022), this constitutes an attitudinal conceptualization of trust, meaning that trust is perceived as an “attitude that can be measured or probed” (p. 837).

To measure the perceived evidence direction (our manipulation check), we used a semantic differential which asked participants, after reading, to indicate the general message that was conveyed in the 12 texts they had read (“Overall, the presented findings suggest that ‘massaging’ . . . ‘acupuncture’ is better suited to treat back pain”; 7-point semantic differential).

Results

Manipulation check

We expected participants in the pro-acupuncture condition to score higher on the evidence direction item compared with the contra-acupuncture and diverging evidence conditions. In addition, we expected participants in the diverging evidence condition to score somewhere between the two other conditions. These expectations were fully supported (pro-acupuncture: M = 6.36, SD = 1.30; contra-acupuncture: M = 1.57, SD = 1.08; and diverging evidence: M = 4.02, SD = 1.45, F (2, 982) = 1132.179, p < .001, ηp2 = .698; all Tukey honest significant difference (HSD) post hoc comparisons p < .001).
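The analysis above (one-way ANOVA with Tukey HSD follow-ups) can be reproduced in outline with simulated ratings drawn from the reported cell means, SDs, and ns. The individual responses are invented, so the simulated statistics will only approximate the reported ones, but with group differences this large the F will be of the same order.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Simulated manipulation-check ratings using the reported means, SDs, and ns
pro    = np.clip(rng.normal(6.36, 1.30, 327), 1, 7)
contra = np.clip(rng.normal(1.57, 1.08, 326), 1, 7)
diverg = np.clip(rng.normal(4.02, 1.45, 332), 1, 7)

f, p = stats.f_oneway(pro, contra, diverg)
print(f"F(2, 982) = {f:.1f}, p = {p:.3g}")

# Tukey HSD post hoc comparisons across the three conditions
scores = np.concatenate([pro, contra, diverg])
groups = ["pro"] * 327 + ["contra"] * 326 + ["diverging"] * 332
print(pairwise_tukeyhsd(scores, groups))
```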

Hypothesis 1

To test Hypothesis 1, a two-factor repeated measures analysis of variance (ANOVA) was conducted (between-factor: experimental condition and within-factor: acupuncture-related beliefs). The analysis revealed a large and significant within–between interaction, F (2, 981) = 178.763, p < .001, ηp2 = .267, which means that pre–post changes in acupuncture-related beliefs indeed varied widely across experimental groups. As expected, there was an increase in acupuncture-related beliefs (i.e. a more positive evaluation of acupuncture) in the pro-acupuncture condition, a decrease in the contra-acupuncture condition, and little change in the diverging evidence condition. Tukey HSD post hoc tests indicated significant differences between the respective groups (all p < .01). Hypothesis 1 is fully supported.

Hypothesis 2

Hypothesis 2 was tested using Model 1 of the PROCESS 4.1 macro for SPSS (Hayes, 2018). Since PROCESS cannot deal with within-person dependent variables out of the box, we first calculated a difference score by subtracting pretest from posttest acupuncture-related beliefs. Positive values on this score indicate an increase in acupuncture-related beliefs (i.e. participants becoming more favorable toward acupuncture), and negative values a corresponding decrease (i.e. increasing skepticism toward acupuncture). This score was used as dependent variable in all calculations involving change in acupuncture-related beliefs. Trust was entered as independent variable and experimental condition as multicategorical moderator (i.e. dummy coded with the diverging evidence condition as reference category).

Results revealed a significant increase in R2 when adding the interactions between trust and the two dummy variables (Dpro and Dcontra) into the regression equation, ΔR2 = .023, F (2, 978) = 15.626, p < .001, with both interaction coefficients becoming significant as well, Dpro*Trust: B = 0.152, t (978) = 2.628, p = .009; Dcontra*Trust: B = −0.164, t (978) = −2.867, p = .004. Testing of the conditional effects of trust on belief changes revealed, as expected, a nonsignificant effect in the diverging evidence condition (p = .327), a significant positive effect in the pro-acupuncture condition, B = 0.112, t (978) = 2.752, p = .006, and a significant negative effect in the contra-acupuncture condition, B = −0.204, t (978) = −5.190, p < .001 (see Figure 1). This means that the shift in acupuncture-related beliefs specified in Hypothesis 1 becomes stronger with increasing trust in science—Hypothesis 2 is fully supported. As is often the case in interaction testing using multiple regression, the effect size (in terms of ΔR2) was rather low.

Figure 1.

Effects of trust in science on belief change across experimental conditions (Study 1).

Interim discussion of Study 1

The main finding of Study 1 is that trust in science does indeed seem to moderate how individuals react to confrontations with scientific evidence: the more they trust science, the more individuals’ beliefs on a certain topic align with the scientific evidence they are presented with. A number of explanations for this finding come to mind. First, participants with higher trust in science might simply have been better at assessing the methodological quality of the fictitious studies included in our experiment (which was rather high, as discussed above); hence, they might have conducted better firsthand evaluations compared with participants with lower trust in science (Bromme et al., 2010). However, it might also be that trust in science led to more favorable secondhand evaluations of the evidence (Bromme et al., 2010). In other words, participants with higher trust in science might have evaluated the source of the information (i.e. the scientists responsible for the studies) as more trustworthy compared with participants with lower trust in science, simply because the information stemmed from the science domain. While a mix of both types of evaluations is likely in practice, a dominance of secondhand evaluations would imply that individuals with high trust in science follow science blindly, thus becoming more prone to being influenced by low-quality scientific evidence or even pseudoscience. Unfortunately, we are unable to test this proposition with Study 1 data because we neither experimentally manipulated the quality of the fictitious studies nor measured individual perceptions of evidence quality. In addition, Study 1 excluded participants who had received acupuncture treatments in the past 10 years, or who believed in the superiority of massaging over acupuncture, which possibly inflated the corresponding effect sizes given that participants with more moderate beliefs (i.e. participants with no clear opinion on the topic at hand) were over-represented in our sample. For these reasons, we conducted a second study that aimed at (1) replicating Study 1 findings in a more diverse sample and (2) investigating firsthand evaluations through explicit measurements and experimental manipulations of the quality of the fictitious studies.

3. Study 2 (confirmatory)

All five study hypotheses as well as Study 2 design, procedure, sampling, and analysis were preregistered at PsychArchives (Rosman and Grösser, 2022). The study materials (in German language) are included in the appendix of the preregistration. Study 2 data as well as the analysis code and all results are available in the supplemental material.

Participants, study design, and procedure

Study 2 was also realized as a randomized experiment, this time with a total of N = 1100 participants (German-speaking general population aged 18–65 years; mean age: M = 41.39; SD = 13.89; 47.1% women). The preregistered target sample size was N = 1056 based on a sample size calculation in MorePower 6.0 (Campbell and Thompson, 2012; see preregistration). The slight oversampling occurred because participants who were filling out the questionnaire at the time the target sample size had been reached were allowed to complete the study. EFS survey was used to conduct the study, and participants were recruited by an online panel provider. As in Study 1, participants who had received acupuncture treatment in the past 10 years were excluded. However, we no longer excluded participants who believed in the superiority of massage over acupuncture, thus increasing the representativeness of our sample. Finally, we had, as in Study 1, intended to exclude participants with an educational level lower than middle maturity (see preregistration). However, due to an error in the panel provider’s filter programming, the educational status of n = 101 participants remained unclear. Manual matching of our data with the provider’s database showed that around 75% of these participants did have an educational level below middle maturity, thus constituting a deviation from our preregistered inclusion criterion. Because our results did not differ depending on whether these n = 101 participants were included (see below), and because their inclusion offers a better representation of the German general population, we decided not to exclude them post hoc.

Study 2 used a 2 × 2 × 2 within–between design, with the experimental procedure being largely similar to Study 1. Participants read 12 summaries of fictitious studies, of which eight focused on the efficacy of acupuncture. These eight texts were varied depending on the experimental condition, and participants’ acupuncture-related beliefs were measured before and after reading. The first experimental factor was the direction of evidence: participants read either eight studies suggesting that acupuncture is better than massage (pro-acupuncture condition) or four studies suggesting superiority and four studies suggesting inferiority of acupuncture (diverging evidence condition). This experimental manipulation was thus identical to the one used in Study 1, except that we discarded the contra-acupuncture condition. This kept the study design as parsimonious as possible; moreover, given that Study 1 effect sizes were strongest in the contra-acupuncture condition, discarding it can be considered a rather conservative choice that should not much impact the generalizability of our findings. The second experimental factor was the quality of the presented evidence, with a high-quality and a low-quality condition. While the texts from the high-quality condition were identical to the Study 1 texts, we manipulated the quality of the studies in the low-quality condition, by, for example, (1) dramatically reducing their sample sizes (e.g. N = 149 vs N = 5), (2) reducing their degree of standardization (e.g. standardized scales vs suggestive questioning), and (3) altering treatment durations across groups (e.g. equal duration in both groups vs considerably longer massage compared with acupuncture sessions). Despite these modifications, the experimental designs of the fictitious studies and their results were identical across conditions, and the text properties (e.g. text structure, sentence length, and writing style) were kept as similar as possible (Rosman and Grösser, 2022).
The distribution of participants across experimental conditions and descriptive statistics on the study variables can be found in Table 2, and English translations of the study materials can be found in the supplemental material.

Table 2.

Distribution of participants and descriptive statistics across experimental groups (Study 2).

Experimental factors “quality of evidence” (rows) × “direction of evidence” (columns)

High quality        Pro-acupuncture    Controversial evidence
N                   269                275
acupre              3.77 (SD = 0.72)   3.72 (SD = 0.66)
acupost             4.18 (SD = 0.88)   3.71 (SD = 0.67)
trust               4.28 (SD = 0.89)   4.23 (SD = 0.90)
mcheckdirection     6.22 (SD = 1.31)   4.00 (SD = 1.29)
mcheckquality       4.51 (SD = 1.11)   4.28 (SD = 0.95)

Low quality         Pro-acupuncture    Controversial evidence
N                   276                280
acupre              3.68 (SD = 0.73)   3.73 (SD = 0.65)
acupost             3.99 (SD = 0.82)   3.60 (SD = 0.71)
trust               4.20 (SD = 0.90)   4.26 (SD = 0.89)
mcheckdirection     6.07 (SD = 1.21)   3.99 (SD = 1.11)
mcheckquality       3.47 (SD = 1.55)   3.26 (SD = 1.44)

acupre: mean acupuncture-related beliefs before reading; acupost: mean acupuncture-related beliefs after reading; trust: mean trust in science before reading; mcheckdirection: mean score on the direction manipulation check (i.e. perceived direction of evidence; higher scores = evidence favors acupuncture); mcheckquality: mean score on the quality manipulation check (i.e. perceived evidence quality; higher scores = higher quality); SD: standard deviation.

Ntotal= 1100.

Like in Study 1, all procedures were in accordance with the declaration of Helsinki and the APA ethics code, participation required agreement to an informed consent form, and a debriefing was included at the end of the questionnaire.

Measures

Acupuncture-related beliefs, trust in science, and perceived evidence direction 2 (manipulation check) were measured using exactly the same instruments as in Study 1. Again, scale reliabilities were satisfactory to good (pretest acupuncture-related beliefs: α = .752; posttest acupuncture-related beliefs: α = .812; and trust in science: α = .832).

Scientific literacy was measured using 10 items of the German version of the Test of Scientific Literacy Skills (TOSLS; Gormally et al., 2012), which was translated to German by Schreiner et al. (2023). The questions included in the questionnaire (see preregistration for details) cover four skills: (1) identifying a scientific argument; (2) understanding elements of research design; (3) solving problems using quantitative skills; and (4) justifying inferences, predictions, and conclusions based on quantitative data (Gormally et al., 2012). All 10 items consist of a question and four forced-choice response options (i.e. only one response option can be selected), and participants are instructed to select the correct one. As is often the case with performance tests, reliability of this scale was somewhat lower (α = .594).

Perceived evidence quality, which served both as a manipulation check and as a dependent variable for Hypothesis 4, was measured using a semantic differential item administered after each text. Participants were asked how they perceived the quality of the respective study on a 7-point semantic differential (“In my opinion, the scientific quality of this study is ‘very low’ . . . ‘very high’”; see preregistration and supplemental material). Scores on this item were averaged across the eight texts.

Results

Descriptive statistics and sample sizes across experimental groups can be found in Table 2. The inference criterion for all analyses was p < .05, and, as specified in the preregistration, one-tailed tests were conducted where appropriate. For space reasons, only key findings are reported here; the full results can be found in the supplemental material.

Manipulation checks

To test whether the experimental manipulation had worked as intended, we contrasted participants’ perceived direction and perceived quality ratings across experimental conditions using t-tests. Both manipulation checks were successful (direction of evidence: t (1092.635) = −28.938, p < .001, Cohen’s d = 1.747; quality of evidence: t (989.545) = −13.267, p < .001, Cohen’s d = 0.798; corrected for variance inhomogeneity; see supplemental material for details).
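The variance-inhomogeneity correction mentioned above corresponds to Welch's t-test, which does not assume equal group variances and therefore yields fractional degrees of freedom such as t (1092.635). As an illustrative reconstruction (simulated data with hypothetical means and spreads, not the authors' SPSS output):

```python
# Welch t-test plus Cohen's d for a manipulation check: perceived evidence
# direction compared across the two experimental conditions. Simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
diverging = rng.normal(loc=3.0, scale=1.2, size=550)  # perceived direction, diverging condition
pro_acu = rng.normal(loc=5.0, scale=0.9, size=550)    # perceived direction, pro-acupuncture condition

# equal_var=False requests Welch's correction for unequal variances
t, p = stats.ttest_ind(diverging, pro_acu, equal_var=False)

# Cohen's d with the pooled standard deviation of both groups
pooled_sd = np.sqrt((diverging.var(ddof=1) + pro_acu.var(ddof=1)) / 2)
d = (pro_acu.mean() - diverging.mean()) / pooled_sd
```

A large d (here well above 1, as in the reported check) indicates that the two conditions were perceived very differently, i.e. the manipulation worked.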

Hypothesis 1

As specified in our preregistration, Hypothesis 1 was tested using the regression-based MEMORE 2.1 macro for SPSS (Model 3; Montoya, 2019), with pretest and posttest acupuncture-related beliefs as dependent variables and the evidence direction factor as independent variable (coding: 0 = diverging information; 1 = pro-acupuncture). The overall regression model was significant (R2 = .118, F (1, 1098) = 147.182, p < .001). In addition, the model intercept was significant and positive (though rather small; B = 0.071, t (1098) = 2.850, p = .002 (one-tailed)) which indicates a slight decrease 3 in acupuncture-related beliefs in the diverging information condition (i.e. participants becoming more skeptical toward acupuncture). Furthermore, the coefficient for the experimental group variable was significant and negative (B = −0.427, t (1098) = −12.132, p < .001), which indicates an increase in acupuncture-related beliefs in participants reading evidence favoring acupuncture. Hypothesis 1 is supported.
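At its core, MEMORE Model 3 with one between-person factor reduces to a regression of the within-person difference score on the experimental group variable; per Montoya (2019), the difference is pretest minus posttest, so positive coefficients indicate decreasing beliefs. A minimal sketch of this logic (simulated data; effect sizes are hypothetical, chosen only to mirror the reported pattern):

```python
# Difference-score regression analogous to the MEMORE analysis: the pre-post
# difference (pre minus post) is regressed on the evidence direction dummy
# (0 = diverging information, 1 = pro-acupuncture). Simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 1100
group = rng.integers(0, 2, size=n)                   # 0 = diverging, 1 = pro-acupuncture
pre = rng.normal(4.0, 1.0, size=n)
# simulate a slight decrease under diverging evidence and an increase
# under pro-acupuncture evidence (hypothetical magnitudes)
post = pre - 0.07 + 0.43 * group + rng.normal(0, 0.3, size=n)
diff = pre - post                                    # MEMORE's within-person variable

X = np.column_stack([np.ones(n), group])             # intercept + group dummy
(b0, b1), *_ = np.linalg.lstsq(X, diff, rcond=None)
# b0 > 0: slight belief decrease in the diverging condition;
# b1 < 0: belief increase when reading pro-acupuncture evidence
```

The signs of b0 and b1 reproduce the interpretation given above: a positive intercept under the pre-minus-post coding means beliefs dropped slightly in the reference (diverging) condition.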

Hypothesis 2

Hypothesis 2 was tested by additionally including trust in science in the model described above. To ensure optimal alignment with Hypothesis 2, we did not include the evidence quality factor in this analysis just yet. While the overall model remained significant (R2 = .119, F (3, 1096) = 49.275, p < .001), no significant interaction between trust and the evidence direction factor was found (p = .242 (one-tailed)). Hence, the shifts in acupuncture-related beliefs specified in Hypothesis 1 do not seem to become stronger with increasing trust in science, which is why, in this strict confirmatory analysis, Hypothesis 2 is not supported.

Hypothesis 3

The specification of Hypothesis 3 equals a moderated moderation (Hayes, 2018): we expect that the moderating effect of the evidence direction factor on the relationship between trust and belief change is, in turn, moderated by the quality factor. In other words, we expect that trust in science has stronger effects on belief change when individuals are confronted with high-quality as compared with low-quality pro-acupuncture evidence. As a minor deviation from our preregistration, we used PROCESS (Model 3) for testing Hypothesis 3 instead of MEMORE because it allows for more advanced statistical techniques. We thereby set up PROCESS using the same workaround as described in Study 1 (i.e. subtracting pretest from posttest belief scores to obtain the dependent variable). The model set up to test Hypothesis 3 was identical to the Hypothesis 2 model, except that we additionally included the evidence quality factor (coding: 0 = low quality; 1 = high quality) and all its possible interactions. This resulted in a significant three-way interaction between trust, quality, and direction (B = 0.226, t (1092) = 2.889, p = .002 (one-tailed), R2 = .136, ΔR2 = .007), suggesting that the moderator effect proposed in Hypothesis 2 indeed varies depending on evidence quality (though it should be noted that, with a ΔR2 of just .007, the corresponding effect is rather small in magnitude). This moderated moderation is depicted in Figure 2.
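A PROCESS Model 3 analogue can be sketched as an ordinary regression containing the three predictors, all two-way products, and the three-way product; the last coefficient is the moderated moderation term. The sketch below uses simulated data with hypothetical effect sizes chosen only to mirror the reported pattern (trust dampening change under low-quality and amplifying it under high-quality pro-acupuncture evidence):

```python
# Three-way interaction (moderated moderation) via OLS: belief change on
# trust, direction, quality, and all their products. Simulated data.
import numpy as np

rng = np.random.default_rng(2)
n = 1100
trust = rng.normal(0.0, 1.0, size=n)                 # mean-centered trust in science
direction = rng.integers(0, 2, size=n)               # 0 = diverging, 1 = pro-acupuncture
quality = rng.integers(0, 2, size=n)                 # 0 = low, 1 = high

# posttest-minus-pretest belief change (hypothetical generating model)
change = (0.4 * direction
          - 0.12 * trust * direction * (1 - quality)  # trust dampens low-quality effects
          + 0.07 * trust * direction * quality        # trust amplifies high-quality effects
          + rng.normal(0, 0.4, size=n))

X = np.column_stack([np.ones(n), trust, direction, quality,
                     trust * direction, trust * quality, direction * quality,
                     trust * direction * quality])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)
three_way = beta[7]                                  # trust x direction x quality term
```

A positive three-way coefficient, as reported above, means the trust-by-direction interaction grows when moving from the low- to the high-quality condition.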

Figure 2. Effects of trust in science on belief change across experimental conditions (Study 2).

To investigate this moderated moderation more closely, we used PROCESS to test the conditional effects of trust in science at different levels of the moderators. In line with our expectations, these analyses revealed that with increasing trust in science, belief change became weaker when participants were confronted with low-quality pro-acupuncture evidence (B = −0.118, t (1092) = −3.049, p = .001 (one-tailed)) and stronger when they were confronted with high-quality pro-acupuncture evidence (B = 0.069, t (1092) = 1.726, p = .043 (one-tailed)), whereas there were no corresponding effects in participants confronted with diverging information (high-quality diverging evidence: p = .685; low-quality diverging evidence: p = .550). The moderator effect of trust in science was thus stronger in the high-quality than in the low-quality pro-acupuncture condition, supporting Hypothesis 3. In addition, the pattern of conditional effects described above, as well as a significant interaction between trust and the evidence direction factor (B = −0.141, t (1092) = −2.572, p = .005 (one-tailed)) in this model, provides empirical support for Hypothesis 2, which was not supported in our confirmatory analysis reported above. Hence, the effects of trust on belief change specified in Hypothesis 2 do show in the data, but only when participants are confronted with high-quality evidence.

Hypothesis 4

Hypothesis 4 was tested using PROCESS (Model 1). Perceived evidence quality was used as dependent variable, trust in science was the independent variable, and the evidence quality factor was the moderator. The evidence direction factor was ignored in this analysis since the distribution of high- and low-quality evidence was (largely) identical across the pro-acupuncture and the diverging evidence conditions (see Table 2). Results showed a significant interaction between trust and the evidence quality factor (B = 0.648, t (1096) = 7.685, p < .001, R2 = .187, ΔR2 = .044), and the conditional effects of trust on perceived evidence quality were significant in both evidence quality conditions (high evidence quality: B = 0.206, t (1096) = 3.435, p = .001; low evidence quality: B = −0.442, t (1096) = −7.459, p < .001). This means that with increasing trust in science, individuals viewed high-quality evidence more favorably, but became less favorable toward low-quality evidence (see Figure 3). Hypothesis 4 is supported.
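In a simple moderation model of this kind, the conditional effect of trust is the trust coefficient in the low-quality condition (moderator = 0) and the trust coefficient plus the interaction coefficient in the high-quality condition (moderator = 1). A hedged sketch of this computation (simulated data; the crossover magnitudes are hypothetical, chosen to mirror the reported pattern):

```python
# PROCESS Model 1 analogue: perceived evidence quality regressed on trust,
# the quality factor, and their product; conditional (simple) slopes derived
# from the coefficients. Simulated data.
import numpy as np

rng = np.random.default_rng(3)
n = 1100
trust = rng.normal(0.0, 1.0, size=n)                 # mean-centered trust in science
quality = rng.integers(0, 2, size=n)                 # 0 = low, 1 = high

# simulate the reported crossover: trust lowers ratings of low-quality studies
# and raises ratings of high-quality studies (hypothetical magnitudes)
perceived = (4.0 + 0.8 * quality
             - 0.44 * trust * (1 - quality)
             + 0.21 * trust * quality
             + rng.normal(0, 0.8, size=n))

X = np.column_stack([np.ones(n), trust, quality, trust * quality])
b, *_ = np.linalg.lstsq(X, perceived, rcond=None)
slope_low = b[1]            # conditional effect of trust, low-quality condition
slope_high = b[1] + b[3]    # conditional effect of trust, high-quality condition
```

The opposite signs of the two conditional slopes capture the interpretation above: higher trust sharpens, rather than flattens, the distinction between high- and low-quality evidence.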

Figure 3. Effects of trust in science on quality perceptions across high- and low-quality conditions (Study 2).

Hypothesis 5

The Pearson correlation between trust in science and scientific literacy was significant (r = .307, p < .001), which means that individuals with high trust in science also tend to have higher scientific literacy. This supports Hypothesis 5.

Exploratory analyses: Covariates

To test whether the significant results we obtained for Hypotheses 3–5 might be confounded by demographic variables, we re-ran the corresponding analyses while specifying age, gender, and education as covariates. Given that we had no data on the education status of n = 101 participants, the sample size for these analyses decreased to n = 999. All significant effects remained significant in these analyses (see supplemental material), suggesting that our results for Hypotheses 3–5 hold when controlling for demographic variables (Hayes, 2018).

Exploratory analysis: Moderated mediation

In a final exploratory model, we aimed to integrate our findings from confirmatory hypothesis testing. The general expectation when building this model was that individual perceptions of evidence quality would be causally responsible for the effects of trust on changes in acupuncture-related beliefs, depending on whether participants are confronted with high- or low-quality evidence. In participants confronted with high-quality evidence, we expected that trust would positively influence perceived evidence quality, which would, in turn, cause more belief change. In contrast, in participants confronted with low-quality evidence, we assumed that trust would negatively affect quality perceptions, which would, in turn, cause less belief change. In other words, we expected that individuals with higher trust in science would be better at evaluating evidence quality (as is evidenced by our results for Hypotheses 4 and 5), which would lead to more accurate assessments of our study materials, which, in turn, would cause belief changes that are contingent on the quality of the presented evidence (as we found when testing Hypothesis 3).

To reduce model complexity, we discarded the diverging evidence condition for these calculations, given that no belief change is to be expected in this group (cf. Hypotheses 1 and 3). The calculations were thus based only on n = 545 participants. A moderated mediation model was specified using PROCESS Model 8 (Hayes, 2018), which included trust in science as independent variable, the evidence quality factor as moderator, perceived evidence quality as mediator, and changes in acupuncture-related beliefs as dependent variable. With the mediator included in the model (R2 = .070), the direct effects of trust in science on belief change were not significant in either of the two experimental conditions (low evidence quality: B = −0.065, t (540) = −1.518, p = .219; high evidence quality: B = 0.053, t (540) = 1.230, p = .130), which means that when holding perceived evidence quality constant, there is very little change in acupuncture-related beliefs in either experimental condition (Hayes, 2018). In contrast, the indirect effects of trust on belief change via quality perceptions were significant in the low-quality condition (B = −0.053, 95% bootstrapped confidence interval (CI) = [−0.086, −0.025]) but not in the high-quality condition (B = 0.016, 95% bootstrapped CI = [−0.002, 0.035]), and Hayes' (2018) index of moderated mediation was also significant (index = 0.069, 95% bootstrapped CI = [0.033, 0.111]). This indicates full mediation in the low-quality condition. Hence, reduced quality perceptions seem to have caused the negative effects of trust on belief change when individuals were confronted with low-quality evidence (cf. Hypothesis 3). Or, in less technical terms: the higher the trust in science, the more likely individuals are to detect inferior study quality, which, in turn, makes them less likely to be manipulated by low-quality evidence.
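The bootstrapped indirect effect at the heart of this analysis is the product of the a-path (trust → perceived quality) and the b-path (perceived quality → belief change, controlling for trust), with a percentile confidence interval taken over resamples. A minimal sketch for one condition (simulated data with hypothetical path coefficients; not the authors' PROCESS code):

```python
# Percentile-bootstrap indirect effect (a x b) for a single mediation path,
# here mimicking the low-quality condition where the effect was negative.
import numpy as np

rng = np.random.default_rng(4)
n = 545
trust = rng.normal(0.0, 1.0, size=n)
# a-path: trust lowers perceived quality of low-quality studies (hypothetical)
perceived = -0.45 * trust + rng.normal(0, 0.9, size=n)
# b-path: perceived quality drives belief change (hypothetical)
change = 0.12 * perceived + rng.normal(0, 0.6, size=n)

def indirect(idx):
    """a*b for one bootstrap resample given by index array idx."""
    t, m, y = trust[idx], perceived[idx], change[idx]
    a = np.polyfit(t, m, 1)[0]                       # slope of mediator on trust
    X = np.column_stack([np.ones(len(idx)), t, m])   # y on trust and mediator
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a * coef[2]                               # a-path times partial b-path

boots = np.array([indirect(rng.integers(0, n, size=n)) for _ in range(1000)])
ci_low, ci_high = np.percentile(boots, [2.5, 97.5])  # 95% percentile CI
```

A CI that lies entirely below zero, as here, corresponds to the significant negative indirect effect reported for the low-quality condition.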

Interim discussion of Study 2

It is noteworthy that four out of five preregistered hypotheses were supported in strict confirmatory testing. Only the initial findings for Hypothesis 2 were nonsignificant. However, when additionally including the quality factor in the regression equation while testing Hypothesis 3, the corresponding interaction between evidence direction and trust in science became significant as well. A closer look at the three-way interaction tested in regard to Hypothesis 3 reveals an intriguing explanation for this: a confrontation with low-quality pro-acupuncture evidence resulted in significant negative effects of trust on belief change, whereas the corresponding relationship was positive in the high-quality pro-acupuncture evidence group (see also Figure 2). Given that discarding the evidence quality factor in our preregistered test of Hypothesis 2 implies averaging belief changes over the low- and high-quality conditions, the corresponding effects thus likely canceled each other out, leading to the nonsignificant interaction. The reason for ignoring the quality factor in our test of Hypothesis 2 was that we had not mentioned it in the preregistered hypothesis. However, at least in hindsight, taking into account the quality factor seems purposeful, especially because this implies a better alignment with the corresponding methodology in Study 1 (which did not include low-quality evidence at all). While a conservative confirmatory test of Hypothesis 2 thus yielded a nonsignificant result, a possibly more purposeful exploratory analysis, which takes into account both experimental factors, nonetheless provides convincing empirical evidence for Hypothesis 2.

Another aspect worth mentioning in the Study 2 results is that, somewhat to our surprise, a confrontation with low-quality evidence led to a negative (and not just nonsignificant) effect of trust in science on quality ratings and belief change. While in line with our theorizing and with the abovementioned findings on possible negative effects of trust in science (Graso et al., 2022; O'Brien et al., 2021), we had not explicitly hypothesized such effects. What is reassuring in these findings is that firsthand evaluations may have overridden possible secondhand evaluations in participants with high trust in science. In other words, these individuals seem to be particularly suspicious of low-quality evidence, whereas people with lower trust in science differentiate less between high- and low-quality studies, thus not extensively engaging in successful firsthand evaluations.

A third point of interest is that we found evidence for a full mediation of evidence evaluations (i.e. quality perceptions) on the relationship between trust in science and belief updating. This stands in contrast to the findings by Anglin (2019), which suggested that "factors other than evidence evaluation influence the extent to which participants maintained vs. changed their beliefs in response to the evidence" (p. 195). This is striking given that our study evaluation measure was rather similar to that of Anglin (2019), though it should also be noted that in our study, too, the Pearson correlations between quality perceptions and belief change were only low to moderate (low-quality pro-acupuncture group: r = .284, p < .001; high-quality pro-acupuncture group: r = .207, p < .001).

Discussion

When Xiao et al. (2021) found that individuals who blindly trusted social media were more likely to fall for conspiracy theories, they stated that their results were "illuminating but somewhat dispiriting" (p. 985). Quite in contrast, our results on the moderating role of trust in science paint a reassuring picture. People with high trust in science do not seem to trust science blindly; on the contrary, they seem to have a more nuanced view of scientific findings, which manifests itself in their ability to evaluate the quality of scientific studies better (i.e. conduct better firsthand evaluations) than individuals who trust science less. This, in turn, protects them from being manipulated by low-quality scientific evidence. In contrast, individuals with low trust in science do not seem to differentiate much between high- and low-quality studies, which arguably makes them more susceptible to pseudoscience and fake news.

Limitations and future directions

As a first limitation, it should be noted that our experimental manipulations were rather strong and therefore lack a certain amount of ecological validity. In a real-world information environment, one is rarely confronted with one-sided evidence alone, nor is one confronted with only high- or low-quality studies. To address this, future research should, for example, vary the proportion of high- and low-quality studies.

A second limitation concerns the connection between our theorizing and our method. While we made a strong point that the effects of trust in science on belief changes are due to individuals engaging in secondhand evaluations, we neither explicitly measured nor manipulated these evaluations in any of our studies. We, therefore, cannot present empirical evidence for our participants engaging in such evaluations. What our Study 2 data does show is that trust in science leads to better firsthand evaluations, but, to pit firsthand and secondhand evaluations against each other, additional experiments are needed, which explicitly measure (or manipulate) secondhand evaluations.

Another limitation concerns our sample, given that we had specified several exclusion criteria prior to data collection. Most notably, we had excluded participants who had received acupuncture treatments in the previous 10 years since these might have stronger and thus less malleable beliefs. The reason for doing so is that our research questions focus on how belief updating occurs under normal circumstances, and not on investigating belief perseverance. Nevertheless, it should be pointed out that when including these participants (or when investigating more heated topics such as stem-cell therapy), the corresponding effect sizes might be smaller. However, it is also of note that Anglin (2019) found no significant changes in belief updating (regarding the death penalty) depending on whether participants with more moderate beliefs were included or excluded. Nevertheless, additional research on belief updating across participants with varying prior beliefs is needed.

Finally, even though the paths suggested in our moderated mediation model are intuitively plausible, our design does not allow an explicit test of causality. To test whether trust in science is causally responsible for the effects on belief change, one would have to experimentally manipulate trust, which is a hard task given that it is mostly conceptualized as an individual difference variable. Nevertheless, some interventions on trust in science do exist (e.g. Agley et al., 2021), or, alternatively, one might consider the literature on epistemic belief interventions (e.g. Rosman et al., 2019) to develop a corresponding intervention. For the time being, however, it is important to note that the effects of trust might have been caused by unmeasured third variables (e.g. political orientation or social status), although it should also be pointed out that all hypothesis tests remained significant when controlling for background factors such as age, gender, and education.

Practical implications

In addition to providing evidence for the positive effects of trust in science on handling scientific evidence, our findings bear a number of practical implications. First, they suggest that especially members of the public who trust science have the ability to conduct adequate firsthand evaluations. Therefore, science communication formats should be designed to allow for such evaluations, which requires a careful balance between simplifying science-based information and providing sufficient methodological detail. With reference to our study materials, science communicators should at least include sample sizes, some information on the measurement of the dependent variables, and an easy-to-understand description of the experimental procedures. Furthermore, when engaging in public communication of their findings, scientists should follow established guidelines (e.g. for writing plain language summaries; see Stoll et al., 2022, for an overview). In addition, educational efforts should be undertaken to foster firsthand evaluations in individuals with lower trust in science, for example, through systematic information literacy instruction (e.g. Rosman et al., 2018) or inoculating and "prebunking" against misinformation (e.g. Lewandowsky and van der Linden, 2021). Trust in science and scientific literacy are consistently lower among individuals with lower education levels, which is why integrating such elements into community college courses might be particularly fruitful. After all, in an information society, empowerment depends on the possibility to readily access scientific information and to appraise the veracity and truthfulness of this information through self-determined critical thinking.

Supplemental Material

sj-zip-1-pus-10.1177_09636625231203538 – Supplemental material for Belief updating when confronted with scientific evidence: Examining the role of trust in science

Supplemental material, sj-zip-1-pus-10.1177_09636625231203538 for Belief updating when confronted with scientific evidence: Examining the role of trust in science by Tom Rosman and Sianna Grösser in Public Understanding of Science

Acknowledgments

The authors thank Lisa Trierweiler for proofreading and language editing.

Author biographies

Tom Rosman is a senior researcher and head of the Research Literacy department at ZPID. His current research focuses on epistemic beliefs, epistemic trust, and open science.

Sianna Grösser was a student assistant in the Research Literacy department at ZPID during the time of the study.

1. Additional analyses of the Study 1 dataset have already been published in Rosman et al. (2021). However, no findings on Hypotheses 1 and 2 have been published yet, and no data on trust in science or acupuncture-related beliefs are included in Rosman et al. (2021). The overlap between both articles is thus minimal.

2. This is labeled "type of evidence" in the preregistration.

3. The within-person variable in MEMORE is calculated by subtracting posttest from pretest scores. Thus, positive coefficients imply a decrease and negative coefficients imply an increase in scores on the pre-post measurement (Montoya, 2019).

Footnotes

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

Supplemental material: Supplemental material for this article is available online.

References

1. Agley J, Xiao Y, Thompson EE, Chen X, Golzarri-Arroyo L. (2021) Intervening on trust in science to reduce belief in COVID-19 misinformation and increase COVID-19 preventive behavioral intentions: Randomized controlled trial. Journal of Medical Internet Research 23(10): e32425.
2. American Psychological Association (2002) Ethical principles of psychologists and code of conduct. American Psychologist 57(12): 1060–1073.
3. Anglin SM. (2019) Do beliefs yield to evidence? Examining belief perseverance vs. change in response to congruent empirical findings. Journal of Experimental Social Psychology 82: 176–199.
4. Bleich S, Blendon R, Adams A. (2007) Trust in scientific experts on obesity: Implications for awareness and behavior change. Obesity 15(8): 2145–2156.
5. Bromme R, Kienhues D, Porsch T. (2010) Who knows what and who can we believe? Epistemological beliefs are beliefs about knowledge (mostly) to be attained from others. In: Bendixen LD, Feucht FC. (eds) Personal Epistemology in the Classroom: Theory, Research, and Implications for Practice. Cambridge: Cambridge University Press, pp. 163–193.
6. Campbell JID, Thompson VA. (2012) MorePower 6.0 for ANOVA with relational confidence intervals and Bayesian analysis. Behavior Research Methods 44(4): 1255–1265.
7. Doss C, Monschein J, Shu D, Wolfson T, Kopecky D, Fitton-Kane VA, et al. (2023) Deepfakes and scientific knowledge dissemination. Scientific Reports 13: 13429.
8. Druckman JN, McGrath MC. (2019) The evidence for motivated reasoning in climate change preference formation. Nature Climate Change 9(2): 111–119.
9. Fage-Butler A, Ledderer L, Nielsen KH. (2022) Public trust and mistrust of climate science: A meta-narrative review. Public Understanding of Science 31(7): 832–846.
10. Ferguson LE, Bråten I. (2013) Student profiles of knowledge and epistemic beliefs: Changes and relations to multiple-text comprehension. Learning and Instruction 25: 49–61.
11. Fiske ST, Dupree C. (2014) Gaining trust as well as respect in communicating to motivated audiences about science topics. Proceedings of the National Academy of Sciences of the United States of America 111(Suppl. 4): 13593–13597.
12. Gormally C, Brickman P, Lutz M. (2012) Developing a test of scientific literacy skills (TOSLS): Measuring undergraduates' evaluation of scientific information and arguments. CBE Life Sciences Education 11(4): 364–377.
13. Graso M, Henwood A, Aquino K, Dolan P, Chen FX. (2022) The dark side of belief in Covid-19 scientists and scientific evidence. Personality and Individual Differences 193: 111594.
14. Hayes AF. (2018) Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York, NY: Guilford Press.
15. Lewandowsky S, van der Linden S. (2021) Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology 32(2): 348–384.
16. Lord CG, Ross L, Lepper MR. (1979) Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology 37(11): 2098–2109.
17. Montoya AK. (2019) Moderation analysis in two-instance repeated measures designs: Probing methods and multiple moderator models. Behavior Research Methods 51(1): 61–82.
18. Nan X, Wang Y, Thier K. (2022) Why do people believe health misinformation and who is at risk? A systematic review of individual differences in susceptibility to health misinformation. Social Science & Medicine 314: 115398.
19. Nisbet EC, Cooper KE, Garrett RK. (2015) The partisan brain. The ANNALS of the American Academy of Political and Social Science 658(1): 36–66.
20. O'Brien TC, Palmer R, Albarracin D. (2021) Misplaced trust: When trust in science fosters belief in pseudoscience and the benefits of critical evaluation. Journal of Experimental Social Psychology 96: 104184.
21. Pew Research Center (2019) Trust and mistrust in Americans' views of scientific experts. Available at: https://www.pewresearch.org/science/wp-content/uploads/sites/16/2019/08/PS_08.02.19_trust.in_.scientists_FULLREPORT_8.5.19.pdf
22. Pilditch TD, Madsen JK, Custers R. (2020) False prophets and Cassandra's curse: The role of credibility in belief updating. Acta Psychologica 202: 102956.
23. Rosman T, Grösser S. (2022) Preregistration: Belief updating when individuals are confronted with scientific evidence. PsychArchives. https://doi.org/10.23668/psycharchives.8238
24. Rosman T, Kerwer M, Chasiotis A, Wedderhoff O. (2020) Preregistration: Person- and situation-specific factors in discounting science via scientific impotence excuses. PsychArchives. https://doi.org/10.23668/psycharchives.3163
25. Rosman T, Kerwer M, Chasiotis A, Wedderhoff O. (2021) Person- and situation-specific factors in discounting science via scientific impotence excuses. Europe's Journal of Psychology 17(4): 288–305.
26. Rosman T, Mayer A-K, Merk S, Kerwer M. (2019) On the benefits of 'doing science': Does integrative writing about scientific controversies foster epistemic beliefs? Contemporary Educational Psychology 58: 85–101.
27. Rosman T, Peter J, Mayer A-K, Krampen G. (2018) Conceptions of scientific knowledge influence learning of academic skills: Epistemic beliefs and the efficacy of information literacy instruction. Studies in Higher Education 43(1): 96–113.
28. Schreiner MR, Quevedo Pütter J, Rebholz TR. (2023) Supplemental materials for: Time for an update: Belief updating based on ambiguous scientific evidence. PsychArchives. Available at: https://www.psycharchives.org/jspui/handle/20.500.12034/8399
29. Stoll M, Kerwer M, Lieb K, Chasiotis A. (2022) Plain language summaries: A systematic review of theory, guidelines and empirical research. PLoS ONE 17(6): e0268789.
30. Strømsø HI, Bråten I, Samuelstuen MS. (2008) Dimensions of topic-specific epistemological beliefs as predictors of multiple text understanding. Learning and Instruction 18(6): 513–527.
31. Sturgis P, Brunton-Smith I, Jackson J. (2021) Trust in science, social consensus and vaccine confidence. Nature Human Behaviour 5(11): 1528–1534.
32. Xiao X, Borah P, Su Y. (2021) The dangers of blind trust: Examining the interplay among social media news use, misinformation identification, and news trust on conspiracy beliefs. Public Understanding of Science 30(8): 977–992.



Articles from Public Understanding of Science (Bristol, England) are provided here courtesy of SAGE Publications
