Science Advances. 2025 Aug 29;11(35):eadv3758. doi: 10.1126/sciadv.adv3758

Prebunking and credible source corrections increase election credibility: Evidence from the US and Brazil

John M. Carey, Brian Fogarty, Marília Gehrke, Brendan Nyhan, Jason Reifler
PMCID: PMC12396325  PMID: 40880473

Abstract

We investigate how to counter misinformation about voter and election fraud using data from the US and Brazil. Our study first compares two types of messages countering claims of widespread fraud: (i) retrospective corrections from credible sources speaking against interest and (ii) prebunking messages that prospectively warn of false claims about future elections and provide information about election security practices. In the US, each approach immediately increased election confidence and reduced fraud beliefs, with prebunking showing somewhat more durable effects. In Brazil, prebunking had positive immediate effects across measured outcomes, whereas those of the credible source corrections were less consistent. We then conducted an experiment in the US randomizing exposure to a persuasion forewarning before election security information was provided. Prebunking again increased confidence and decreased fraud beliefs, but only when the forewarning was omitted, suggesting that novel factual information is responsible for the observed effects of the prebunking treatment.


Election fraud misperceptions can be corrected by credible public figures or with information about protections against fraud.

INTRODUCTION

During and after losing bids for reelection, US President Donald Trump and Brazilian President Jair Bolsonaro promoted claims of widespread voter and election fraud, undermining trust in elections and inspiring supporters to storm their nations’ capitols in efforts to overturn the results. Similar attacks on election integrity threaten confidence in democratic institutions around the world.

To safeguard democracy, it is essential to determine how to protect confidence in elections against false accusations. Misinformation about voter fraud disproportionately reduces confidence in election integrity among supporters of the losing side (1–3). We test whether providing accurate election-related information can safeguard democracy against misinformation that damages confidence in elections. While corrective information generally reduces misperceptions just after exposure (4, 5), these effects vary. Treatment effects also attenuate over time, raising questions about how to durably reduce the prevalence of false beliefs (6, 7). We also test whether corrective information works similarly across different contexts. Although one study finds similar levels of effectiveness between Western democracies and the Global South (8), others have found mixed results (9–11), including in Brazil (12–14).

We test two distinct approaches to countering misinformation about election legitimacy. Both provide accurate information—an effective way of countering false claims (15, 16). However, they rely on different psychological mechanisms. The first approach uses “situationally credible sources”—individuals speaking against their partisan interests. Corrections from sources who speak against interest may be seen as especially credible to people who are skeptical of such information or the sources from which it typically originates (17–20).

The second approach, “prebunking,” places less emphasis on source and is typically forward-looking rather than retrospective. Prebunking interventions like the ones we test typically seek to provide novel factual information that will increase belief accuracy and counter the effects of misinformation exposure (21). Providing people corrective information before exposure is thought to help the accurate information be encoded into memory and shape how subsequent information is processed (15, 16), although exposure to a correction after encountering misinformation has also been found to be effective (22–28).

A specific version of prebunking known as inoculation has been especially influential (21, 29). Inoculation interventions include a forewarning to watch out for false claims along with examples for either a particular issue or rhetorical technique (29, 30). Studies have found that inoculation helps reduce misperceptions (21, 29, 31), but others suggest that its effects may be limited (32–34). Although inoculations are typically conceptualized as being most effective before misinformation exposure (35), inoculation after exposure can still increase the resistance to both previously encountered false information and similar types of future misinformation (35, 36).

To examine how these approaches affect perceptions of past and future elections, we fielded three between-subjects survey experiments in the US and Brazil, two contexts where fraud misinformation has undermined trust in elections (12, 14, 37, 38). The timing of these studies and the interventions and outcome measures used are summarized in Table 1.

Table 1. Study design and content summary.

Note: Study 1 was a three-wave panel survey with the experiment administered in wave 1; the persistence of effects was measured in waves 2 and 3. Study 3 was conducted as part of wave 3 of the Study 1 panel. CISA, Cybersecurity and Infrastructure Security Agency (US); TSE, Superior Electoral Court in Brazil (Tribunal Superior Eleitoral).

Study 1
  Timing: Wave 1: Oct./Nov. 2022; wave 2: Dec. 2022; wave 3: Jan. 2023
  Interventions: Credible sources (Republicans); prebunking + forewarning (CISA); placebo
  Outcome measures: Biden rightful winner; confidence in vote; fraud prevalence; seats won by fraud
  Key features: US context; pre-2022 midterms; panel design; measures durability

Study 2
  Timing: Feb. 24–28, 2023
  Interventions: Credible sources (Bolsonaro allies); prebunking + forewarning (TSE); placebo
  Outcome measures: Confidence in vote; fraud prevalence; seats won by fraud; belief accuracy
  Key features: Brazil context; post-2022 election; tests belief effects; includes neutral source

Study 3
  Timing: Jan. 21–30, 2023
  Interventions: Prebunking + forewarning (CISA); prebunking, no forewarning (CISA); placebo
  Outcome measures: Confidence in vote; fraud prevalence; seats won by fraud; belief accuracy
  Key features: US context; post-2022 midterms; tests forewarning; tests belief effects

Study 1 was conducted in the US before the 2022 midterm elections. Study 2 was a parallel experiment fielded in Brazil after its 2022 presidential election. In both studies, participants were randomized into one of three conditions: a situationally credible source treatment in which participants read how allies of the incumbent (Trump in the US and Bolsonaro in Brazil) affirmed the legitimacy of the election (testing the mechanism of increased messenger credibility); a prebunking condition explaining election security measures coupled with an inoculation forewarning (testing the joint effect of novel factual information about current policy and a warning that people may seek to mislead them with false claims on this issue); or a placebo condition. Study 3, which was solely conducted in the US, isolated the effect of the inoculation forewarning by randomizing participants into one of three conditions: prebunking with an inoculation forewarning (mirroring Studies 1 and 2), prebunking without an inoculation forewarning, or a placebo.
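For concreteness, the between-subjects assignment mechanism common to all three studies can be sketched in a few lines of Python (a toy illustration with invented labels and a made-up frame; the actual assignment was handled by the survey platform):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)  # fixed seed so the toy assignment is reproducible

n = 3772  # wave 1 sample size reported for Study 1
conditions = ["credible_source", "prebunking", "placebo"]
respondents = pd.DataFrame({"respondent_id": np.arange(n)})
respondents["condition"] = rng.choice(conditions, size=n)  # equal-probability assignment

print(respondents["condition"].value_counts())  # the three arms should be roughly equal in size
```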

These studies test multiple theoretical mechanisms for correcting misinformation across two different countries and estimate the effects of those treatments on outcomes that reflect the legitimacy of both past and future elections. We measure a number of different outcomes for these elections in both countries (election confidence, legitimacy of past winners, fraud frequency, seats won by fraud, and belief accuracy), allowing us to (i) assess efficacy broadly, (ii) ensure that our results are not specific to any one context, and (iii) determine whether corrections of specific fraud claims affect broader beliefs and attitudes about elections (39).

In Study 1 in the US, the credible source correction and the prebunking correction had similar immediate effects, although only prebunking showed (mixed) evidence of durable impact. In Study 2 in Brazil, prebunking was more effective than credible sources at increasing election confidence, reducing belief in fraud, increasing the accuracy of factual beliefs, and improving discernment between true and false statements about electoral fraud. Last, in Study 3 in the US, both versions of the prebunking correction improved accuracy and discernment between true and false statements about election fraud, but effects on election confidence and on perceptions of the prevalence and effects of fraud were statistically significant only when the inoculation forewarning was omitted (although the differences between the two versions were rarely statistically significant). In each study, the effects were often larger among people who were previously misinformed or who were more predisposed to believe misinformation.

These results suggest that both prebunking and situational credibility can effectively counter false information about voter and election fraud but provide some evidence that prebunking can be more effective—a potentially important finding for safeguarding democracy. Given the lack of forewarning effects in Study 3, we interpret these differences as resulting from the prebunking correction providing novel information about election security. These results are consistent with recent findings showing that corrections targeting misperceptions about existing policy (i.e., how elections are secured) are more effective than those targeting misperceptions about outcomes (i.e., who won the election) (40, 41).

RESULTS

Study 1: Credible source and prebunking corrections in the US

Election fraud claims have become a central threat to American democracy. Claims of fraud by conservative groups, pundits, and politicians had become prominent as early as 2008 and 2012 (42), leading a plurality of Americans to falsely believe fraud was a major problem (43, 44). Trump then stoked fears about fraud as president before finally embracing the “Big Lie,” his false claim that the presidency was stolen from him in 2020 through fraud. These claims increased polarization in fraud beliefs, threatening the perceived legitimacy of US elections (45). For example, confidence in the national vote count increased from 61 to 88% after the election among Trump’s opponents but declined from 56 to 28% among his supporters (3). Republicans continued to express high levels of belief in fraud and to question Joe Biden’s victory after 2020 (46, 47). In late 2022, for instance, 37% of Americans said Biden only won because of voter fraud or indicated that they were uncertain if he won fairly, including a majority (55%) of Republicans (48, 49).

In Study 1, we test the two approaches described above for addressing misperceptions about voter fraud and election integrity in the US. The first is the credible source correction, which seeks to reassure voters that American elections are safe and secure by presenting evidence of Republican judges and officials rejecting claims of voter fraud from Trump, a copartisan. The second approach is the prebunking correction, which instead warns people about election misinformation they may encounter (a forewarning) and seeks to counter those myths by describing procedural details about measures that are in place to ensure the integrity of the election process (providing information about existing policy). We refer to this treatment and those like it in Studies 2 and 3 as prebunking because their content is forward-looking (describing specific safeguards and practices in place to enhance the integrity of upcoming elections) and their design differs slightly from standard inoculation interventions (which often provide relevant misinformation directly after treatment).

These approaches also differ in their temporal orientation. The credible source correction is inherently retrospective—in Study 1, it offers information about the 2020 US presidential election, which had already taken place. By contrast, the prebunking correction provided information prospectively about the 2022 US midterm election, which had not yet occurred. (We test the effects of both interventions on the perceived integrity of both elections, however.)

We specifically test the following preregistered hypotheses, which focus on the election addressed in the content of the correction:

1) H1: Exposure to a credible source correction will increase confidence in the 2020 election and reduce beliefs about the prevalence and effects of fraud in the 2020 election.

2) H2: Exposure to a prebunking correction will increase confidence in the 2022 election and reduce beliefs in the prevalence and effects of fraud in the 2022 election.

We also report results for preregistered research questions about whether these corrections would affect perceptions of the other election (i.e., 2022 for credible sources and 2020 for prebunking), whether the effects of the two treatments would be statistically distinct from each other, and whether effects would still be observable in future waves. (Further details on Study 1’s experimental design and sample are provided in Materials and Methods; the study preregistration is available at https://osf.io/h89wa/.)

Figure 1 shows the estimated effects of the credible source and prebunking correction treatments (see table S2 for results in tabular form). Consistent with our first hypothesis, exposure to the credible source correction about the 2020 election (triangular markers) increased belief that Biden was the rightful 2020 winner, increased confidence in the 2020 vote count, and diminished belief in the prevalence of fraud in 2020. The estimated effect of the credible source correction on beliefs about the number of House seats won by fraud in 2020 is in the expected (negative) direction, but the effect is not statistically significant. By contrast, although the correction prebunking false claims about 2022 (square markers) increased confidence in the 2022 vote count, it did not measurably diminish the number of House seats people thought would be won by fraud in 2022, providing mixed support for the second hypothesis.

Fig. 1. Immediate effects of Study 1 treatment.

Estimated covariate-adjusted treatment effects and 95% CIs for the listed outcome variables; full model estimates reported in table S2.
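The covariate-adjusted estimates reported in the figures come from OLS models with robust standard errors (see the note to Table 2). A minimal sketch of that estimation strategy on synthetic data, with hypothetical column names (not the authors' actual code or variables):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the real analyses use the survey data described in
# Materials and Methods, and all column names here are hypothetical.
rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "condition": rng.choice(["placebo", "credible_source", "prebunking"], size=n),
    "conf_2020_pre": rng.integers(1, 5, size=n),  # pretreatment outcome, 1-4 scale
    "pid7": rng.integers(1, 8, size=n),           # 7-point party identification
})
# Build the outcome as the pretreatment level plus a small boost under either treatment.
df["conf_2020"] = (df["conf_2020_pre"]
                   + 0.15 * (df["condition"] != "placebo")
                   + rng.normal(0, 0.5, size=n))

model = smf.ols(
    "conf_2020 ~ C(condition, Treatment(reference='placebo')) + conf_2020_pre + pid7",
    data=df,
).fit(cov_type="HC2")  # heteroskedasticity-robust (HC2) standard errors
print(model.summary().tables[1])  # coefficients with 95% CIs, as plotted in the figures
```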

Both treatments also had significant effects on outcomes related to the election other than the one they targeted (which were preregistered research questions). The credible source correction, which retrospectively focused on the 2020 election, reduced expectations of House seats won by fraud in 2022 but did not measurably increase confidence in the 2022 vote count. The prebunking correction, which prospectively focused on the 2022 election, increased belief that Biden was the rightful winner in 2020, increased confidence in the 2020 vote count, and diminished estimates of fraud prevalence in 2020.

When we compare treatment effects directly (a preregistered research question), the credible source and prebunking corrections operated relatively similarly across the range of the outcome measures (see Fig. 1 and table S2, which reports the difference in treatment effects). Across the six outcomes, the credible source correction had a measurably larger effect on belief that Biden won the 2020 election, the prebunking correction had a measurably larger effect on beliefs about the prevalence of fraud in 2020, and the two treatments did not measurably differ on the remaining four outcomes. Last, we considered whether the effects of the treatments on specific outcomes varied between elections (2020 versus 2022; another preregistered research question). We found that the effects for both treatments on confidence in election vote counts were measurably weaker for beliefs about the 2022 election relative to those about 2020 but found no measurable difference across elections for seats won by fraud (see table S3).

We also tested for heterogeneous treatment effects among participants on the basis of their political predispositions (partisanship and support for Donald Trump) and pretreatment measures of the outcome variable in question. As we show in tables S4 to S6, the most pronounced pattern of heterogeneous effects applies to outcome variables associated with the 2020 election—belief that Biden was the rightful winner, election confidence, and prevalence of fraud (not House seats won by fraud in 2020, although we sometimes see heterogeneous effects for seats won by fraud in 2022). Across these measures, we find stronger rather than weaker effects of both the credible source and prebunking corrections among groups with a greater affinity toward fraud narratives (Republicans, Trump supporters, and participants who were not among the tercile with the lowest beliefs in fraud or greatest confidence in elections).
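Moderation analyses of this kind are conventionally estimated with treatment-by-moderator interactions. Continuing the synthetic data frame from the sketch above (the tercile construction and names are ours, not the authors'):

```python
# Split respondents into terciles of the pretreatment outcome and interact
# treatment with tercile membership; the interaction coefficients test whether
# treatment effects differ between the most and least confident respondents.
df["pre_tercile"] = pd.qcut(df["conf_2020_pre"], q=3, labels=["low", "mid", "high"])

hte = smf.ols(
    "conf_2020 ~ C(condition, Treatment(reference='placebo')) * C(pre_tercile)",
    data=df,
).fit(cov_type="HC2")
print(hte.summary().tables[1])
```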

The heterogeneous effects that we observe by pretreatment outcomes may reflect floor and ceiling effects. The proportion of participants whose pretreatment outcomes were at the relevant floor or ceiling and could not move further down or up (respectively) because of treatment is as follows: Biden was the rightful winner: 61%; confidence in the 2020 vote count: 46%; confidence in the 2022 vote count: 40%; seats won because of fraud in 2020: 60%; seats won because of fraud in 2022: 60%; prevalence of voter and election fraud in 2020: 13%. These findings suggest that messages correcting fraud misperceptions could be especially effective among the audiences that are most susceptible to such narratives.
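These boundary shares can be computed directly from pretreatment responses; a self-contained sketch with invented data:

```python
import pandas as pd

# Illustrative pretreatment responses on the four-point confidence scale
# (1 = "not at all confident" ... 4 = "very confident"); values are invented.
pre = pd.Series([4, 3, 4, 2, 4, 1, 4, 3, 4, 4])

# The treatments can only push confidence upward, so respondents already at the
# scale maximum have no room to move: they are at the ceiling.
ceiling_share = (pre == pre.max()).mean()
print(f"At ceiling pretreatment: {ceiling_share:.0%}")  # 60% in this toy series
```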

Substantively, the treatment had meaningful effects immediately after exposure, increasing the percentage of respondents who said Biden was “definitely” or “probably” the rightful winner from 71.8% under the control condition to 76.1% under the credible source correction condition and 75.4% under the prebunking correction condition. These effects were often larger for groups that we expected to be particularly susceptible to misperceptions. Among Republicans, for example, belief that Biden was the rightful winner increased from 32.5% in control to 43.8 and 38.5%, respectively, for credible sources and prebunking—increasing the proportion of Republicans who accept Biden’s victory by 34.8 and 18.5%, respectively, in relative terms.
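The relative changes quoted here follow the standard formula (treated − control)/control, as a quick computation confirms:

```python
# Relative changes reported in the text follow (treated - control) / control.
control = 32.5  # % of Republicans calling Biden the rightful winner, placebo arm
for label, treated in [("credible sources", 43.8), ("prebunking", 38.5)]:
    print(f"{label}: {(treated - control) / control:.1%} relative increase")
# credible sources: 34.8% relative increase
# prebunking: 18.5% relative increase
```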

Last, we test whether the effects of the treatments in Study 1 are detectable in later waves of our panel survey. We retain 77% of participants in wave 2 (n = 2896) and 54% in wave 3 (n = 2030). We find no evidence of differential attrition by condition in either case [wave 2: Pearson χ²(2) = 0.24, P = 0.89; wave 3: Pearson χ²(2) = 2.04, P = 0.36]. Figure 2 shows treatment effect estimates for outcomes measured in more than one survey wave: whether Biden was the rightful winner in 2020, confidence in the 2020 and 2022 vote counts, beliefs about House seats won by fraud in 2020 and 2022, and beliefs about the prevalence of fraud in 2020. For the credible source treatment, effects remain in the anticipated direction but are no longer detectable in later waves (the sole exception is confidence in the 2022 election, which unexpectedly reaches significance in wave 3 of the panel survey after not doing so in wave 1 or 2). For the prebunking treatment, we instead see reductions in belief in the prevalence of fraud in the 2020 election in wave 2 (persisting from wave 1) and reduced belief in the number of seats that would be won by fraud in 2020 and 2022 (not significant in wave 1 in either case). An exploratory analysis finds that none of the estimated treatment effects in wave 2 or 3 are measurably different from the wave 1 estimates (see table S8).
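Differential-attrition tests of this kind are Pearson chi-square tests on a condition-by-retention cross-tabulation; a sketch with illustrative counts (not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: experimental condition; columns: retained vs. attrited at wave 2.
# The counts are illustrative, not the study's data.
table = np.array([
    [965, 290],  # credible sources
    [970, 285],  # prebunking
    [961, 301],  # placebo
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"Pearson chi2({dof}) = {chi2:.2f}, P = {p:.2f}")
# A large P value provides no evidence of differential attrition by condition.
```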

Fig. 2. Over-time impact of Study 1 treatment effects.

Estimated covariate-adjusted treatment effects and 95% CIs for the listed outcome variables; full model estimates reported in table S7.

We offer three general observations on the results from Study 1. First, both treatments improve the credibility of election results immediately after exposure across a range of outcome measures. Second, these results are generally similar—despite focusing on different elections, the retrospective credible source treatment and prospective prebunking treatment affected perceptions of both the 2020 and 2022 elections. Last, the prebunking treatment had downstream effects on fraud beliefs about the 2020 election.

Study 2: Prebunking and credible source corrections in Brazil

Do the findings from Study 1 apply outside the US? We use the same design in Study 2 to examine whether corrective messages about election and voter fraud are effective in Brazil, a prominent democracy in the Global South.

Brazil’s presidential election in 2022 had notable parallels to the US presidential election in 2020 and its aftermath. After the first round of the contest on 2 October 2022, the top two candidates—incumbent Jair Bolsonaro and former president Luiz Inácio (Lula) da Silva—advanced to a runoff on 30 October. Bolsonaro consistently trailed Lula in public opinion polls and repeatedly made unsubstantiated claims that Brazil’s electronic voting machines, which do not produce verifiable paper records, were insecure (50). Observers widely understood Bolsonaro’s claims as an effort to build support for a challenge to an anticipated win by Lula, particularly within Brazil’s military, in which Bolsonaro had previously served as an officer (51, 52).

In the election’s aftermath, a Defense Ministry report found no support for the president’s fraud claims (53). Nonetheless, the president’s supporters set up protest camps outside military facilities, asking military forces to intervene directly (54, 55). On 8 January 2023, a week after Lula was inaugurated, protesters who were still camped at the Armed Forces Headquarters in Brasilia marched to the Congress building, breached security, and sacked the facility (56). Brazilian security forces soon reestablished order, arresting many of the protesters.

Study 2 leverages the parallels between the US and Brazil cases to estimate the effects of credible source and prebunking corrections in a novel context. Following Study 1, we evaluate the effect of a credible source correction quoting Bolsonaro allies affirming the legitimacy of the election as well as a prebunking correction using messages drawn from Brazil’s top election security agency that delivers factual content countering specific false claims about election fraud. The presentation of both types of corrective content in Study 2, as well as of nonpolitical placebo content, mirrors that in Study 1. (There is one difference of note between Study 1 and Study 2 under the credible source condition. To mirror Study 1, we again chose to present an introduction page as well as four specific articles. Because of limited time available for study design before fielding the study, only three of the four articles presented endorsements of the election result by Bolsonaro allies. The fourth article instead presented neutral election observers affirming the legitimacy of the outcomes.)

In addition to the relevant set of outcomes from Study 1 (confidence in the vote count, perceived frequency of fraud, and number of seats won by fraud for both the prior and upcoming election), Study 2 also introduces a set of outcomes measuring participants’ abilities to accurately identify true and false statements about elections and to distinguish between them. As a result, we can directly assess the impact of exposure to corrections on the factual beliefs targeted by the corrections as well as broader effects on outcomes such as election credibility and beliefs about the prevalence of fraud.
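A sketch of how belief-accuracy and discernment outcomes of this general kind are typically scored (item names and coding here are illustrative, not the study's instrument):

```python
import pandas as pd

# Toy perceived-accuracy ratings (1 = "not at all accurate" ... 4 = "very
# accurate") for statements about election procedures; item names are ours.
ratings = pd.DataFrame({
    "true_1": [4, 3, 2], "true_2": [3, 4, 2],
    "false_1": [1, 2, 4], "false_2": [2, 1, 3],
})
ratings["true_acc"] = ratings[["true_1", "true_2"]].mean(axis=1)     # belief in true claims
ratings["false_acc"] = ratings[["false_1", "false_2"]].mean(axis=1)  # belief in false claims
ratings["discernment"] = ratings["true_acc"] - ratings["false_acc"]  # the T-F gap
print(ratings[["true_acc", "false_acc", "discernment"]])
```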

Mirroring the first hypothesis from Study 1, we therefore expect the credible source correction in Study 2 to increase the credibility of the 2022 election:

3) H3: Exposure to a credible source correction treatment about the prevalence of voter fraud in the 2022 election (compared to a placebo condition) will increase confidence in the 2022 election and reduce beliefs about the prevalence and effects of fraud (frequency of voter fraud and the number of seats changed by fraud) in the 2022 election. (For expositional reasons, our hypotheses are numbered consecutively in the manuscript, which differs from the numbering in our preregistrations after Study 1.)

Because the data from Study 1 (which we analyzed before fielding Study 2) showed that the prebunking treatment improved perceptions of the prior election (2020, not hypothesized) as well as the next election (2022, hypothesized), we updated our preregistered expectations to explicitly include both the most recent election (2022 in the Brazilian case) and the next election (now 2026 in the Brazilian case):

4) H4: Exposure to a prebunking correction treatment will increase confidence in the 2022 and 2026 elections and reduce beliefs about the prevalence and effects of fraud (frequency of voter fraud and the number of seats changed by fraud) in the 2022 and 2026 elections compared to the placebo condition.

Last, we expected the prebunking treatment to increase the accuracy of respondent beliefs about the election administration and security practices that it describes as being used in Brazilian elections:

5) H5: Exposure to a prebunking correction treatment will increase the perceived accuracy of the true claims it supports, reduce the perceived accuracy of the misperceptions it targets, and improve respondents’ ability to distinguish between them compared to the placebo condition.

We also report results for preregistered research questions about the effects of the credible source correction on perceptions of the 2026 election and respondent factual beliefs, whether the effects of the treatments differ between the 2022 and 2026 elections, and whether the effects of the treatments differ from each other for the outcomes listed above. (Further details on Study 2’s experimental design and sample are provided in Materials and Methods; the study preregistration is available at https://osf.io/h89wa/.)

Figure 3 shows the estimated effects of the credible source and prebunking corrections on voter confidence, fraud perceptions, and the accuracy of factual beliefs about elections in Brazil. Our hypothesis that the credible source correction would increase confidence in the 2022 election and reduce beliefs about the prevalence and effects of fraud is partly supported. As Fig. 3 shows, the credible source correction increased confidence in past (2022) and future (2026) elections and decreased the number of Chamber of Deputies seats believed to have been won by fraud in 2022 but did not change beliefs in the prevalence of election fraud in either election or the expected number of seats that would be won by fraud in 2026.

Fig. 3. Effects of Study 2 treatment.

Estimated covariate-adjusted treatment effects and 95% CIs for the listed outcome variables; full model estimates reported in tables S10 and S11.

A preregistered research question asked whether the backward-looking credible source correction would affect perceptions of the 2026 election. As noted above, we only find such an effect on voter confidence, not on beliefs about the prevalence or effects of fraud. More generally, we examined whether the estimated effects on beliefs about the 2022 election were statistically distinguishable from those on beliefs about the 2026 election—another preregistered research question. Table S14 shows that they are not measurably different for either treatment on any outcome.

Because the prebunking correction delivered factual information about ongoing election security practices rather than endorsements of the results from one specific election, we predicted that this treatment should affect both retrospective beliefs about the 2022 election and prospective beliefs about 2026. Consistent with this expectation, Fig. 3 shows that the prebunking treatment increased confidence in elections and decreased beliefs in the perceived prevalence and effects of fraud for both elections. Moreover, we can reject the null of no difference with the credible source correction for four of the six outcomes (see table S10 for point estimates). In each case, prebunking is more effective.

Figure 3 also shows that the prebunking treatment increased respondents’ ability to accurately identify true and false statements about Brazilian election procedures and to discern the difference between them as we predicted (these questions closely relate to the content of the treatment). The credible source treatment had a similar effect on respondents’ ability to accurately identify true statements and to discern between true and false statements but not on accurately identifying false statements. Per table S11, the estimated effect of prebunking was statistically larger for each measure of factual belief accuracy, echoing the results above for voter confidence and fraud perceptions. These results align with previous research indicating that prebunking corrections enhance people’s ability to distinguish between true and false claims they encounter in the future (21, 57).

As in Study 1, we also tested whether treatment effects varied by participants’ political predispositions (support for Jair Bolsonaro and partisanship) or pretreatment measures of the outcome variable in question. Table S17 shows that the corrective effect of the prebunking treatment was measurably greater among respondents in the top tercile of Bolsonaro sentiment compared with those in the bottom tercile for four of six measures of voter confidence and fraud perceptions (confidence in the 2022 and 2026 elections and seats won by fraud in both elections). For the credible source treatment, by contrast, the difference in effects between the top and bottom terciles reaches significance only for confidence in the 2022 election. We find little evidence of treatment effect heterogeneity by feelings toward Bolsonaro for factual belief measures (table S18).

We also test for heterogeneous effects by party identification, although Brazil’s multiparty system, and particularly the small size of Bolsonaro’s Partido Liberal (PL; only 5.7% of survey respondents—see table S9), limits our leverage. Table S15 shows that the effects of the prebunking treatment were measurably weaker for members of Lula’s Partido dos Trabalhadores (PT; 15.5% of respondents)—the group whose baseline levels of voter confidence were the highest to begin with—for three of six outcomes compared with those who identified with neither the PL nor PT. We observe no such evidence of heterogeneity for the credible source treatment. In general, we also find little evidence of heterogeneity in treatment effects on factual beliefs by party identification (see table S16).

Last, we find that the prebunking treatment in particular often had larger effects on participants who were previously most misinformed about election security, which we evaluate using tests for treatment effect heterogeneity by pretreatment outcomes. On four of the six measures of election confidence and fraud beliefs reported in table S19, for instance, the effect of the prebunking correction was discernibly greater among the most misinformed tercile (i.e., the people with the least confidence in the 2022 election before the treatment) than the least (i.e., those with the most confidence in the 2022 election before the treatment). By contrast, the effect of the credible source correction was measurably greater for the most misinformed tercile for only two of six outcomes (election confidence in 2022 and 2026). [The percentages of participants whose pretreatment outcomes were at the relevant floor or ceiling and could not move further down or up (respectively) because of treatment were as follows: seats won because of fraud in the 2022 election: 60%; seats won because of fraud in the 2026 election: 60%; confidence in the 2022 election: 35%; confidence in the 2026 election: 36%; prevalence of fraud in the 2022 election: 46%; prevalence of fraud in the 2026 election: 55%.] Similarly, prebunking was more effective for the evaluation of true and false statements and discernment between them among participants with the highest pretreatment levels of misinformation, whereas credible sources had a discernibly greater effect in this group only for the correct identification of true claims (table S20).

The effects we report above are also substantively meaningful. Confidence in the 2022 and 2026 elections exceeded the scale midpoint (indicating that respondents were “very” or “somewhat” confident in the results on average) for 64.2 and 63.7% of respondents in credible sources and 63.4 and 62.9% in prebunking, respectively, versus 56.6 and 58.1% of respondents, respectively, under the control condition. For the top tercile of respondents by feelings toward Bolsonaro, these effects were even larger. For example, confidence in the 2022 election among these Bolsonaro supporters was 28.5 and 28.2% under the credible source and prebunking conditions, respectively, compared to 20.0% under the control condition. Belief in false statements about the 2022 election (indicating that false statements were “very” or “somewhat” accurate on average) was also lower under the credible source and prebunking conditions compared to the control condition overall (27.3 and 23.6% versus 29.4%, respectively) and among Bolsonaro supporters (54.1 and 55.0% compared to 39.7%, respectively).

Study 2 suggests three general observations. First, both treatments increased confidence in elections (as in Study 1) and improved the accuracy of factual beliefs about elections (outcome measures not included in Study 1). Second, the prebunking treatment was more effective; it had a significant effect on every outcome variable versus the control condition, and its effect was measurably larger than the credible source treatment effect for seven of the nine outcomes (tables S10 and S11). Third, as in the US, the effects of our experimental interventions, and particularly of prebunking, were strongest precisely among people who were most predisposed to believe fraud claims or most misinformed.

Study 3: Prebunking with and without forewarning in the US

Together, Studies 1 and 2 demonstrate the effectiveness of the prebunking correction. In the format we tested, the prebunking consisted of inoculation-style forewarning messages followed by procedural details about election security, with the forewarning messages designed to elicit perceptions of threat that would increase receptivity to corrective information (21, 58, 59).

Some recent scholarship suggests that forewarnings of this sort are more important than novel corrective information (60, 61), a finding that evokes previous calls for research to test the role of forewarning in producing inoculation effects (59). Empirical evidence on this point is limited, however.

Study 3 thus compares two versions of a prebunking correction treatment. Both include procedural details about election security and content analogous to a weakened dose of misinformation, but the inclusion of the forewarning is randomized to isolate its effect on the outcomes of interest. Study 3 also lets us consider potentially important boundary conditions, such as whether prebunking corrections are effective for different past and future US elections (2022 and 2024, respectively, versus 2020 and 2022 in Study 1). As in Study 2, we also examine whether the effects of these interventions affect participants’ ability to identify true and false statements about elections (a relevant question given that the forewarning treatment specifically warns people about misinformation).

Our preregistered hypotheses therefore address the effects of each version of the treatment (with and without a forewarning message) compared to the placebo condition on voter confidence and fraud perceptions in the 2022 and 2024 elections and discernment between true and false statements:

6) H6: Exposure to a prebunking correction will increase confidence in the 2022 and 2024 elections and reduce beliefs about the prevalence and effects of fraud (frequency of voter fraud and the number of seats changed by fraud) in the 2022 and 2024 elections compared to the placebo condition regardless of whether the prebunking correction is preceded by a warning alerting participants that they might be exposed to misinformation in the future.

7) H7: Exposure to a prebunking correction will reduce the perceived accuracy of the misperceptions it targets, increase the perceived accuracy of the true claims it supports, and improve respondents’ ability to distinguish between them compared to the placebo condition regardless of whether the prebunking correction is preceded by a warning alerting participants that they might be exposed to misinformation in the future.

We also report results for a preregistered research question asking whether there are differences between the two versions of the prebunking correction (with and without a forewarning message) for our key outcome measures—confidence in elections, beliefs in the prevalence and effects of fraud, and belief in and discernment between true and false statements. (Further details on Study 3’s experimental design and sample are provided in Materials and Methods; the study preregistration is available at https://osf.io/h89wa/.)

We first evaluate the effects of the prebunking correction with and without forewarning, which are presented in Fig. 4 and table S22. For five of the six outcomes measured, the prebunking treatment condition without a forewarning had a significant effect relative to the placebo condition. By contrast, the effect of the prebunking treatment with a forewarning was not significant for any outcomes, although we can only directly reject the null of no difference in one case (see table S22). We further note that the pooled estimates of the treatment effects are only measurably different from zero for one outcome measure because of weaker effects in the forewarning condition (House seats won by fraud in 2024; see table S24). These results thus only partly confirm our expectations that the correction would improve participants’ election confidence and diminish their beliefs in the prevalence and effects of fraud. These more limited results in Study 3 may reflect higher overall confidence in elections because of the decrease in elite messaging questioning their legitimacy after the 2022 elections (compared to the post-2020 period in which Study 1 was conducted). In Study 1, mean confidence levels among controls in the relevant previous and future elections were 3.30 [95% confidence interval (CI): 3.25 to 3.35] and 3.35 (95% CI: 3.30 to 3.39) for the 2020 and 2022 elections, respectively. By contrast, mean confidence levels in Study 3 among controls were 3.47 (95% CI: 3.41 to 3.53) and 3.44 (95% CI: 3.38 to 3.50) for the 2022 and 2024 elections, respectively.
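Those control-group means and intervals are conventional normal-approximation CIs (mean ± 1.96 × SE); a minimal, self-contained sketch with invented responses:

```python
import numpy as np

def mean_ci(responses, z=1.96):
    """Mean and normal-approximation 95% CI for a list of scale responses."""
    x = np.asarray(responses, dtype=float)
    se = x.std(ddof=1) / np.sqrt(len(x))
    return x.mean(), (x.mean() - z * se, x.mean() + z * se)

# Toy responses on the four-point confidence scale.
m, (lo, hi) = mean_ci([4, 3, 4, 3, 4, 2, 4, 3, 3, 4])
print(f"mean = {m:.2f}, 95% CI: {lo:.2f} to {hi:.2f}")
```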

Fig. 4. Effects of Study 3 prebunking treatment with and without forewarning.

Estimated covariate-adjusted treatment effects and 95% CIs for the listed outcome variables; full model estimates reported in tables S22 and S23.

Consistent with our expectations, both versions of the prebunking treatment were effective in improving factual knowledge about elections. Figure 4 and table S23 show that the treatments increased participants’ ability to recognize true and false statements and to distinguish between them. These findings are consistent when we estimate pooled treatment effects—see table S25.

As with previous studies, we test for heterogeneous treatment effects. For our measures of voter confidence and fraud perceptions, we find no patterns of consistent differential effects by party (table S26), by Trump support (table S28), or by exposure to corrective treatments in previous waves of the panel survey (tables S32 to S34). However, both treatments often diminished belief in fraud prevalence in 2022 and in expected seats won by fraud in both 2022 and 2024 to a greater degree among participants with higher pretreatment fraud beliefs (table S30). Similarly, the effects of the treatments are often stronger for the identification of false statements and for discernment between true and false statements among Trump supporters (table S29) and among people who were more misinformed pretreatment (table S31).

We note, again, that floor/ceiling effects may mute treatment effects. The percentages of participants whose pretreatment outcomes were at the relevant floor or ceiling and could not move further down or up (respectively) because of treatment were as follows: seats won because of fraud in the 2022 election: 63%; seats won because of fraud in the 2024 election: 63%; confidence in the 2022 election: 54%; confidence in the 2024 election: 49%; prevalence of fraud in the 2022 election: 18%; prevalence of fraud in the 2024 election: 19%. Exploratory analyses also reveal that for two key belief-related outcomes—belief in false claims and discernment between true and false claims—the treatment effects were weaker among Republicans who received the forewarning than for those who did not (P < 0.005 and P < 0.05, respectively; see table S27). A similar pattern was observed for respondents with the warmest feelings toward Trump for belief in false claims, which was reduced significantly more for treated respondents who did not receive the forewarning compared to those who did (P < 0.01; see table S29). We found no measurable difference in effects on discernment in this group, however.

Substantively, effects on our binary measure of election confidence were modest. The percentages of respondents who expressed confidence in the 2022 and 2024 elections (scoring above the scale midpoint for each) were 86.8 and 84.5%, respectively, among controls; 87.1 and 86.4%, respectively, for prebunking with a forewarning; and 86.8 and 85.5%, respectively, for prebunking with no forewarning. However, belief in false statements (perceptions that the claims are “very” or “somewhat” accurate on average) diminished substantially, declining from 19.5% among controls to 12.3 and 10.6% with and without a forewarning, respectively. These differences were especially large for subgroups that are likely to be most susceptible to misinformation. Among Republicans, for example, false claim beliefs decreased from 41.3% among controls to 24.4 and 19.7%, respectively, for the prebunking treatment with and without a forewarning (relative reductions of 40.9 and 52.3%, respectively).

We infer from these results that the prebunking treatment’s effectiveness was driven by its factual content, not the inoculation-style forewarning message. We only observe a statistically discernible effect on fraud beliefs and election confidence when the forewarning is omitted. As in Studies 1 and 2, these treatments were often differentially effective among participants who were misinformed or more susceptible to misinformation. Last, these effects were unexpectedly attenuated among Republicans (although still significant) when a forewarning was included, suggesting that inoculation-style language may in some cases be counterproductive. The forewarning message may have triggered skepticism among Republicans, who predominantly regard fact checkers as unfair and partisan (in contrast to Democrats who typically regard fact checkers as fair) (62).

DISCUSSION

The studies described here suggest four central conclusions. First, Studies 1 and 2 showed that both credible source and prebunking corrections increased electoral confidence and corrected misperceptions about fraud. Second, Study 2 revealed that prebunking outperforms the credible sources approach in the Brazilian context, which is consistent with research suggesting that explanations of current policy are particularly effective at addressing misperceptions (40, 41). Third, Study 3 demonstrates that the effectiveness of prebunking was driven by the factual content delivered rather than by forewarning respondents about potential exposure to untruths. Last, the effects of both corrections—and of prebunking in particular—were often larger among people who were previously misinformed or who are especially vulnerable to misinformation.

To summarize the findings from all three studies, Table 2 presents key results from each experimental treatment on the outcome measures, which we order left to right from more general attitudes to more specific beliefs. Studies 1 and 2 in the US and Brazil, respectively, tested the effectiveness of credible sources affirming election integrity and prebunking messages delivering factual content about election safeguards. In these two studies, both approaches almost always increased confidence in election results both retrospectively and prospectively. These effects were often greatest among those who were most misinformed (see the Supplementary Materials). The effects of the treatments on perceptions of fraud and its effects on election outcomes were similar in the US, whereas the effects of the prebunking message were stronger in Brazil. Study 1 provided evidence that only the prebunking message had downstream effects on fraud perceptions. The cases from Study 1 where we see significant treatment effects in one or more future waves are marked with daggers (†) in Table 2 (see table S7 for details). Last, both treatments were effective at improving the accuracy of respondents’ factual beliefs (Studies 2 and 3).

Table 2. Summary of results immediately after treatment across outcomes and studies.

Cell entries are significance levels for immediate treatment effects: *P < 0.05; **P < 0.01; ***P < 0.001. n.s. indicates not significant; – indicates that the outcome was not measured. † indicates a significant effect (P < 0.05) in at least one later wave (Study 1 only; see table S7). All models were estimated using OLS regression with robust standard errors (Study 1: tables S2 and S7; Study 2: tables S10 and S11; Study 3: tables S22 and S23).

←More general attitudes … More specific factual beliefs→

                               Election conf.    Biden   Fraud freq.       Seats won         Factual beliefs
                               Past     Future   Past    Past     Future   Past     Future   T       F       T-F
Study 1 (US)
  Credible sources             ***      n.s.†    ***     ***      –        n.s.     *        –       –       –
  Prebunking                   ***      ***      *       ***†     –        n.s.†    n.s.†    –       –       –
Study 2 (Brazil)
  Credible sources             **       ***      –       n.s.     n.s.     *        n.s.     *       n.s.    *
  Prebunking                   ***      ***      –       ***      ***      ***      ***      ***     ***     ***
Study 3 (US; prebunking only)
  No forewarning               n.s.     *        –       *        *        *        *        ***     ***     ***
  Forewarning                  n.s.     n.s.     –       n.s.     n.s.     n.s.     n.s.     ***     ***     ***

Why did prebunking outperform a credible source correction in Study 2 in Brazil but not in Study 1 in the US? This difference may reflect the timing of the study and the recency of the events referenced in the credible source correction in our Brazilian experiment. In Study 1, which was fielded almost 2 years after the 2020 election, the US credible source treatment referred to court decisions from cases about the prior election that had been resolved and to a detailed report produced by a committee of high-ranking Republican officials. By contrast, Study 2 was fielded right after Brazil’s 2022 election. As a result, the credible source treatment featured statements in the immediate aftermath of the election that may not have been as compelling (albeit from high-ranking officials, like the president of the Chamber of Deputies and Bolsonaro’s own son).

Last, Study 3 showed that in the context of the US 2022 midterm elections, the prebunking correction without a forewarning message increased the overall confidence in a future election (but not the most recent prior election), diminished beliefs in the prevalence of fraud practices and estimates of House seats determined by fraud (both retrospectively and prospectively), and improved the accuracy of factual beliefs and discernment. By contrast, the same prebunking correction with a forewarning message succeeded only in improving the accuracy of factual beliefs and discernment (unlike Study 1, which focused on the 2020 election). We thus find no evidence that forewarning (which has been presented as an important part of the broader inoculation approach) increases the efficacy of corrective information. The difference between the treatments’ estimated effects is almost never significant, and the forewarning actually reduced the effects of prebunking on the belief accuracy of Republicans. These findings raise important questions about the mechanism that is responsible for inoculation effects—something that future research should consider.

The advantages we find for prebunking are reinforced by practical considerations for actors in real-world contexts (e.g., journalists and social media platforms trying to correct misinformation about elections). First, prebunking does not require finding credible sources who will speak against their partisan interest or require amplifying messages from partisans. The facts required for effective prebunking are readily available from neutral sources, although the Cybersecurity and Infrastructure Security Agency (CISA; the source for the prebunking treatments in Studies 1 and 3; see Materials and Methods) is no longer helping states respond to election misinformation (63). Second, prebunking does not require the audience to have context about a particular election, political event, or politician to understand the corrective content. By contrast, credible source corrections rely on people understanding why a statement is against the interest of a particular political actor. Last, the factual content provided in prebunking corrections maintains its validity over time. By contrast, the informational value of sources speaking against interest may diminish precisely because their willingness to contradict their partisan allies on a controversial factual question undermines their standing with their copartisans. We note, however, that nothing in our results suggests forgoing credible sources where they are available. In particular, credible sources that provide novel factual information seem especially likely to be effective.

We identify several other questions for future research. First, as noted above, the differences we observed between studies could be partly attributable to idiosyncratic features of the context or messages—future studies should test other messages in other contexts. Second, it would be valuable to replicate the Brazil study with a fully representative sample. Third, further research should aim to determine why some message effects persist longer than others (i.e., in Study 1) and whether or how exposure to counter-frames moderates these effects (64). For example, recent research suggests that durability can be enhanced by immediate exposure to a related evaluation task or repeated exposure to fact checks over time (65, 66). Future research could explore how best to design interventions that can be repeated over time and practiced immediately after exposure. Fourth, we show that controlling for pretreatment outcomes yields very similar results to our preregistered approach of including all lasso-selected pretreatment covariates (67, 68), suggesting the need for future research on best practices in experimental data analysis and preregistration. Last, we tested the effects of corrective messages conditional on exposure within the controlled setting of an experiment. Future research should examine what content people are actually exposed to about voter and election fraud, building on recent work that uses large language models to measure the frequency and slant of such exposure in digital behavior data (69).
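For readers interested in the covariate-selection step just mentioned, a hedged sketch of cross-validated lasso selection of the general kind the preregistration describes (we use scikit-learn on synthetic data; the pipeline details are ours, not the authors'):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n, k = 1000, 20
X = rng.normal(size=(n, k))                             # candidate pretreatment covariates
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)  # synthetic outcome

# Standardize, then let cross-validated lasso shrink irrelevant covariates to
# zero; the survivors are carried into the treatment-effect regression.
Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_)
print("selected covariate indices:", selected)  # columns 0 and 1 should be among them
```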

In the end, however, these findings give grounds for optimism. Democracy-defending messages can be effective, especially prebunking approaches that provide novel factual information about election security. Moreover, these effects are often stronger among the groups we expected to be most resistant, suggesting the potential for substantial belief and attitude change if these messages were deployed more widely.

MATERIALS AND METHODS

Study 1: Survey and experimental design

We conducted a three-wave YouGov online panel survey bracketing the 2022 midterm election. A substantial portion of survey participants (n = 2643) was invited to this panel because they had participated in a two-wave panel examining election confidence and voter fraud perceptions after the 2020 US presidential election (December 2020 to January 2021). We assume no spillover effects from the 2020 study to the 2022 study (637 days elapsed between studies), which we did not describe as being related. Results from the 2020–2021 panel will be presented in a separate paper. The sample for the 2022 midterm election panel was constructed to maximize the retention of participants from the 2020 panel, with YouGov using its standard matching and weighting approach to maximize the representativeness of the resulting sample. The principal results in Study 1 are from an experiment embedded in the first survey wave, which was fielded from 18 October to 7 November 2022 (n = 3772). We also measure treatment effects from Study 1 in the second and third waves, which were fielded 7 to 20 December 2022 (n = 2986) and 21 to 30 January 2023 (n = 2030), respectively. Each subsequent survey wave also included experiments; results from the second wave are reported elsewhere (39), and results from the third wave are reported as Study 3. The outcomes used to test for over-time effects of Study 1 were measured in pretreatment batteries in the second and third waves (i.e., before any new experimental manipulation). Unweighted sample demographics are summarized in table S1—the sample leans female, older, and Democratic (55% female; median age group of 55 to 64; 36% have a college degree; 72% white; 52% identify as Democrats or lean toward the party). Last, the sample was highly attentive (89% passed both pretreatment attention checks we conducted). As a result, we do not condition on attention or estimate heterogeneous treatment effects by attention.

Participants were randomized in a between-subjects experiment into one of three conditions (see table S1): credible sources, prebunking, or a placebo condition. For ethical reasons, they were not shown uncorrected misinformation. We assume, however, that the vast majority had previously encountered claims questioning the integrity of the 2020 election by the time the study was conducted (October/November 2022). Our design thus allows us to estimate the treatment effect of corrective messages when specific misinformation is prevalent and salient in the information environment (rather than the effect of corrections on belief in misinformation that is not otherwise salient just after exposure, which is more common in prior research).

Under both treatment conditions, respondents were exposed to an introductory article summarizing the treatment content followed by four articles of corrective information (see https://osf.io/h89wa/ for all instruments and stimuli). The credible source correction in this study highlighted statements from Republicans who spoke against their partisan interest in affirming the legitimacy of Joe Biden’s election. The introductory article, titled “Legitimacy of 2020 Election Affirmed by Leading Republicans,” was followed by four articles highlighting key Republican figures debunking voter fraud claims about the 2020 election. These articles were adapted from news articles (70, 71), reports (72), and quotes documenting Republican judges and officials affirming the legitimacy of the 2020 election (assembled by the authors). Under the prebunking condition, the introductory article, which was titled “Beware of False Rumors You May Hear about the 2022 Election,” was instead followed by four articles debunking specific myths circulating in 2022 about the security and integrity of the voting process. The prebunking introductory article presents examples of false claims about US election integrity and highlights the Department of Homeland Security’s confirmation that procedures are in place to safeguard elections. These articles were adapted from the Rumor vs. Reality section of the website of the CISA (https://web.archive.org/web/20230224150824/https://www.cisa.gov/rumor-vs-reality), which is part of the Department of Homeland Security (the source to which the prebunking articles are attributed). The headlines of the four articles shown to participants after the introductory article are presented in Table 3. Following the presentation on the CISA website, each article had a heading titled “Reality” that was designated with a green check mark and a “Rumor” designated with a red X. In this way, our study exposes participants to a weakened dose of false information (a key component of inoculation interventions).

Table 3. Treatment article headlines.

See https://osf.io/h89wa/ for all instruments and stimuli.

Credible sources
- “Legitimacy of 2020 Election Affirmed by Leading Republicans” (introductory article)
- “Article: Republican Leaders Say Biden Won”
- “Article: Republican Judges Reject Trump’s Election Lawsuits”
- “Article: Trump’s Attorney General Says No Evidence of Widespread Fraud”
- “Article: Republican Governors Certify Biden Wins in Swing States”

Prebunking
- “Beware of False Rumors You May Hear about the 2022 Election” (introductory article)
- “Reality: Safeguards protect the integrity of the mail-in/absentee ballot process”
- “Reality: Robust safeguards protect against tampering with ballots returned via drop box”
- “Reality: Voting systems must be certified by state and/or federal voting system testing programs”
- “Reality: Voter registration list maintenance and other election integrity measures protect against illegal voting”

Placebo
- “Keep Up-To-Date with World Events” (introductory article)
- “Article: Sauces in cooking”
- “Article: Why hiking is good for your health”
- “Article: Airlines serve hearing-impaired passengers”
- “Article: Sleep aids are now high-tech”

For respondents in the prebunking condition, each of the four articles after the introductory article began with the following forewarning message: “Some politically-motivated groups are using misleading tactics to confuse voters and sow distrust in the electoral process. Here is the truth about some claims you might hear concerning the 2022 midterm elections that will be held this November” (73, 74). The prebunking correction thus includes the two key elements of inoculation interventions: a warning about potential future exposure to false claims and a message that debunks those claims before misinformation exposure. (We test the specific contribution of the inoculation forewarning in Study 3.)

To ensure that participants received the treatment content, they were told in advance that they would be asked a question about each article after exposure and were unable to advance the article page for 10 s. Participants who answered a comprehension check correctly advanced to the next article or question in the survey. Those who failed were asked to reread the article and answer the comprehension question up to two more times before advancing (i.e., respondents would advance after answering correctly or after getting the question wrong a third time). Because these comprehension questions were administered posttreatment, the analyses that follow do not subset to participants who answered them correctly (75). Our results therefore estimate the effects of message reception. We discuss potential differences in real-world exposure to these messages further in the conclusion.
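The comprehension-check flow can be summarized in code. The sketch below is purely illustrative (function names and structure are our assumptions; the actual logic was implemented in the survey software):

```python
# Illustrative sketch of the comprehension-check flow described above:
# read the article (page locked for 10 s), answer a question, and reread
# up to two more times after a wrong answer.
import time

MAX_ATTEMPTS = 3       # advance after a correct answer or a third miss
MIN_READ_SECONDS = 10  # page cannot be advanced for 10 s

def administer_article(show_article, ask_question, is_correct):
    """show_article, ask_question, and is_correct are assumed callables."""
    for attempt in range(MAX_ATTEMPTS):
        show_article()
        time.sleep(MIN_READ_SECONDS)  # stand-in for the 10-s page lock
        answer = ask_question()
        if is_correct(answer):
            return True               # correct answer: advance immediately
    return False                      # three misses: advance anyway
```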

Across all three studies, we use a variety of outcome measures to assess attitudes about the credibility and legitimacy of elections. These tap general beliefs about election integrity (confidence in the vote count), the prevalence and effects of fraud (how often specific types of fraud or malfeasance occur and how many legislative elections were decided by fraud), and whether specific election outcomes were legitimate.

The outcome variables in Study 1 include both retrospective (i.e., regarding the 2020 election) and prospective (i.e., regarding the 2022 election) assessments. The measures assess beliefs about the prevalence of fraud and its potential impact on presidential and congressional race outcomes. Retrospective assessments include whether Joe Biden was the rightful winner of the 2020 presidential election (measured on a four-point scale ranging from “definitely not the rightful winner” to “definitely the rightful winner”), the perceived prevalence of various types of fraud in 2020 (a six-item battery measured on a seven-point scale from “a million or more” to “less than 10”), confidence in the accuracy of the 2020 vote count (an index of four items measuring confidence in votes being counted as voters intended—your vote, your local area, your state, and nationally—on a four-point scale from “very confident” to “not at all confident”), and the perceived number of US House seats won by fraud in 2020 (measured on a four-point scale from “none” to “10 or more”). Prospective assessments include confidence in the 2022 vote count (using the same measure as for 2020) and the number of House races won by fraud in 2022 (using the same measure as for 2020). (Summaries of all the measures used are provided in the Supplementary Materials; see https://osf.io/h89wa/ for all instruments and stimuli.)
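As one concrete illustration, a vote-count confidence index of the kind described above could be scored as follows. This is a hedged sketch: the column names and the use of an unweighted item mean are our assumptions, not necessarily the authors' exact coding (see the Supplementary Materials for the measures).

```python
# Sketch of scoring a four-item vote-count confidence index: items for one's
# own vote, local area, state, and the nation, each on a 1-4 scale with
# 4 = "very confident" (column names are assumptions).
import pandas as pd

CONFIDENCE_ITEMS = ["conf_own_vote", "conf_local", "conf_state", "conf_national"]

def confidence_index(df: pd.DataFrame) -> pd.Series:
    # Unweighted mean of the four items; higher values = greater confidence.
    return df[CONFIDENCE_ITEMS].mean(axis=1)
```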

All analyses were preregistered unless otherwise indicated (see https://osf.io/gpy3s/ and https://osf.io/ynbxp/). Statistical models were estimated using ordinary least squares (OLS) regression with robust standard errors; all models include pretreatment control variables selected via lasso from a preregistered list to increase precision (68). In all cases, the set of candidate control variables includes pretreatment measurements of the outcome variables, which we collect to improve precision in line with current guidance on best practice (67). (We inadvertently omitted pretreatment factual belief measures in Studies 2 and 3 from the preregistered lasso covariate list. We deviate from our preregistration to include them, maintaining fidelity to our intention to include all pretreatment outcome measures in the covariate list.) We use the lasso for covariate selection to limit researcher degrees of freedom in choosing control variables. However, results are typically unchanged when we control only for the pretreatment measurements of the outcome variables and omit the other lasso-selected covariates. (Because the models omitting lasso-selected covariates are slightly less precise, some results narrowly fail to reach significance at the P < 0.05 level.) Last, we note that no a priori power analyses were conducted for Study 1 or either of the other studies reported in this manuscript.
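A simplified sketch of this estimation strategy is below. It is not the authors' replication code (which is posted on Dataverse): all variable names are illustrative, and the lasso step here is a generic cross-validated implementation rather than the exact procedure of (68).

```python
# Sketch of the two-step estimation strategy: lasso selects pretreatment
# covariates from a preregistered candidate list, then treatment effects are
# estimated by OLS with heteroskedasticity-robust standard errors.
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def estimate_effects(df: pd.DataFrame, outcome: str,
                     treat_cols: list, candidate_covs: list):
    # Step 1: cross-validated lasso of the outcome on standardized candidate
    # covariates (lasso is scale-sensitive, hence the standardization).
    X = StandardScaler().fit_transform(df[candidate_covs])
    lasso = LassoCV(cv=10).fit(X, df[outcome])
    selected = [c for c, b in zip(candidate_covs, lasso.coef_) if b != 0]

    # Step 2: OLS of the outcome on treatment indicators (e.g., dummies for
    # the two treatment arms, with placebo as the reference category) plus
    # the lasso-selected covariates, with robust (HC2) standard errors.
    rhs = sm.add_constant(df[treat_cols + selected])
    return sm.OLS(df[outcome], rhs).fit(cov_type="HC2")
```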

Study 2: Survey and experimental design

Participants in Study 2 were recruited by Netquest from its opt-in internet panel (fielding dates: 24 to 28 February 2023; n = 2949). The survey, which was administered using Qualtrics, was programmed and translated into Portuguese by the authors and was designed to match the look and feel of Study 1 as closely as possible. Our sample size was determined by maximizing the number of respondents who could be recruited given our design and the deadlines imposed by our grant funding. We began fielding with quotas for age, region, and sex, but these were deactivated after 2 days to maximize sample size and, thus, statistical power for our experiment. As would be expected of an online survey without strictly imposed quotas, our Brazil sample is somewhat more highly educated, affluent, white, and female than the Brazilian population as a whole. Unweighted sample demographics are summarized in table S9 (57% female; median age group of 35 to 44; 55% have at least some college; 52% white; 49% right of center). The sample was highly attentive (86% passed a pretreatment attention check), so we do not condition on attention or estimate heterogeneous treatment effects by attention. (A second pretreatment attention check asked respondents to agree or disagree with the statement that Brazil has an emperor, but this question picked up anti-Lula sentiment: right-of-center respondents were significantly more likely to agree with it, so we do not use it as an attention check.)

Participants in Study 2 were randomized in a between-subjects experiment into three conditions mirroring those used in Study 1 (see table S9): a credible source correction, a prebunking correction, or a placebo condition with nonpolitical content. In both treatment conditions, respondents were shown an introductory article summarizing the treatment content followed by four articles of corrective information (see https://osf.io/h89wa/ for all instruments and stimuli). For the credible source correction, the introductory article, titled “Legitimacy of 2022 Election Affirmed by Bolsonaro Supporters and Independent Observers,” was followed by four articles affirming the legitimacy and integrity of Brazil’s election. Matching the approach in Study 1, three of the four sources in this treatment spoke against partisan interest in affirming Bolsonaro’s defeat: Bolsonaro’s son, his coalition partners in the legislature (including the President of the Chamber of Deputies), and a former Bolsonaro cabinet minister. One of the articles focused on how Senator Flavio Bolsonaro, the president’s eldest son, posted a statement online the day after the election that was widely seen as an early acknowledgment of defeat from Bolsonaro’s inner circle (76–78). The credibility of the fourth source, a team of international election observers, rests on their partisan neutrality rather than on political opposition to Lula (a small departure from the treatment in Study 1).

At the time we fielded, the Brazilian court system had not yet ruled on Bolsonaro’s eligibility to run for office in the future (he was later barred from running for public office until 2030 for having intentionally spread unfounded claims of election fraud during the 2022 campaign). As a result, the credible source correction drew more on statements and announcements made after the election and during the inauguration of the next Congress than on the outcomes of judicial or other official proceedings, as in Study 1.

The prebunking treatment also mirrors Study 1 in presenting participants with an introductory article, titled “Beware of False Rumors You May Hear about Brazilian Elections,” and a forewarning message, followed by four short articles rebutting specific unsupported allegations of election fraud or mismanagement circulating in Brazilian politics at the time of the 2022 election. The articles, which were adapted from the website of Brazil’s Superior Electoral Court (Tribunal Superior Eleitoral, TSE), addressed practices for the review of voting machine software, safeguards against hackers, security measures taken by poll workers, and the conduct of vote count audits. Paralleling Study 1, each article included a “Reality” (“Realidade”) heading marked with a green check mark and a “Rumor” (“Boato”) heading marked with a red X, with the rumor content corresponding to a weakened dose of false information, a key component of inoculation interventions.

As in Study 1, we asked Brazilian respondents about their confidence in the accuracy of vote counts in elections (using the same measure as in Study 1), the perceived frequency of various types of fraud (slightly adapted from the measure used in Study 1), and the number of seats in the Brazilian Chamber of Deputies won by fraud (measured on a four-point scale from “none” to “10 or more”). These questions were asked both retrospectively (about the 2022 election) and prospectively (about the next national election in 2026). The belief accuracy measures added in Study 2 consisted of two true and two false statements about Brazilian elections, each rated on a four-point scale from “very accurate” to “not at all accurate.” Summaries of all measures used are provided in the Supplementary Materials.
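For illustration, one natural way to score such a battery (an assumption on our part, not necessarily the authors' coding) is to reverse-code the false items so that higher values always indicate more accurate beliefs and then average across the four items:

```python
# Hedged sketch of scoring the belief accuracy battery: two true and two
# false statements rated on a 1-4 accuracy scale (column names are assumed).
import pandas as pd

TRUE_ITEMS = ["acc_true_1", "acc_true_2"]
FALSE_ITEMS = ["acc_false_1", "acc_false_2"]

def belief_accuracy(df: pd.DataFrame) -> pd.Series:
    true_part = df[TRUE_ITEMS]
    false_part = 5 - df[FALSE_ITEMS]  # reverse-code the 1-4 scale
    # Mean of the four recoded items; higher = more accurate beliefs.
    return pd.concat([true_part, false_part], axis=1).mean(axis=1)
```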

All analyses were preregistered unless otherwise indicated (https://osf.io/ynbxp/; see the discussion of the preregistration process for this study in the Supplementary Materials for more details). As in Study 1, statistical models were estimated using OLS regression with robust standard errors; all models include pretreatment control variables selected via lasso from a preregistered list to increase precision (68). We deviate from our preregistration by omitting ideological self-placement and feeling thermometer measures from the set of candidate control variables for the lasso because of excess missingness (21.2% of respondents have a missing value for at least one of these variables). We observe no evidence of differential attrition (see table S21).

Study 3: Survey and experimental design

The design of Study 3 allows for important comparisons with Studies 1 and 2. Like Study 1, it was conducted around the 2022 midterm elections in the US, and the treatment source material comes from the CISA. Like Study 2, it was conducted about 10 weeks after an election, from 21 to 30 January 2023 (n = 2030), rather than before one. As a result, we shifted the retrospective and prospective elections about which respondents reported their beliefs about fraud from 2020 and 2022 to 2022 and 2024. Asking about 2024 also required US participants to project estimates of fraud further into the future than in Study 1 (much as our Brazilian participants in Study 2 were asked about the 2026 election). Unweighted sample demographics are summarized in table S21; the participants, who were retained from Study 1 as part of a panel survey, again lean female, older, and Democratic (55% female; median age group of 55 to 64; 35% have a college degree; 73% white; 53% identify as Democrats or lean toward the party). The sample was again highly attentive (91% passed both pretreatment attention checks we conducted), so we do not condition on attention or estimate heterogeneous treatment effects by attention.

Participants were randomized in a between-subjects experiment into one of three conditions (see table S21): prebunking with an introductory article and forewarning message as used in Study 1, prebunking without the introductory article and forewarning message, or a placebo condition (see https://osf.io/h89wa/ for all instruments and stimuli).

To ensure that results were not affected by exposure in Study 1, the treatments in Study 3 targeted a different set of myths, and the messages themselves were changed. However, these messages were again adapted from genuine CISA “rumor versus reality” messages and matched the form of the stimuli used in Study 1 (summaries of all the measures used are provided in the Supplementary Materials; see https://osf.io/h89wa/ for all instruments and stimuli).

Given the timing of Study 3, we shifted the focus of our outcome questions to the 2022 (retrospective) and 2024 (prospective) elections, asking respondents (as in the prior studies) to estimate the frequency of an array of election and voter fraud activities and the number of US House seats won by fraud and to report their overall confidence in the accuracy of vote counts in elections. Also, as in Study 2, we included questions designed to test participants’ ability to discern accurate from inaccurate information about fraud. (See https://osf.io/h89wa/ for all instruments and stimuli.)

All analyses were preregistered unless otherwise indicated (https://osf.io/gpy3s/). Statistical models were estimated using OLS regression with robust standard errors; all models are estimated with pretreatment control variables selected via lasso from a preregistered list to increase precision (68).

Acknowledgments

We thank V. Arceneaux and audiences at Princeton University, New York University, and the University of North Carolina at Chapel Hill for helpful comments. All errors are our own.

Funding: We acknowledge funding support from an Evolving Election Administration Landscape Grant awarded by MIT Election Data + Science Lab (funds provided by Election Performance Project LLC, a subsidiary of the Pew Charitable Trusts). This project (to J.R.) received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 682758).

Author contributions: J.C., B.F., M.G., B.N., and J.R. designed the study. J.C., B.F., B.N., and J.R. analyzed the data. All the authors wrote the original manuscript. J.C., B.F., B.N., and J.R. revised the manuscript.

Competing interests: The authors declare that they have no competing interests.

Data and materials availability: Data and code necessary to replicate the results in this study have been posted on Dataverse (https://doi.org/10.7910/DVN/W16868). All other data needed to evaluate the conclusions in this paper are present in the paper and/or the Supplementary Materials.

Supplementary Materials

This PDF file includes:

Supplementary Text

Tables S1 to S91

sciadv.adv3758_sm.pdf (604KB, pdf)

REFERENCES AND NOTES

1. Berlinski N., Doyle M., Guess A. M., Levy G., Lyons B., Montgomery J. M., Nyhan B., Reifler J., The effects of unsubstantiated claims of voter fraud on confidence in elections. J. Exp. Pol. Sci. 10, 34–49 (2023).
2. Clayton K., Blair S., Busam J. A., Forstner S., Glance J., Green G., Kawata A., Kovvuri A., Martin J., Morgan E., Sandhu M., Sang R., Scholz-Bright R., Welch A. T., Wolff A. G., Zhou A., Nyhan B., Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Political Behav. 42, 1073–1095 (2020).
3. Bright Line Watch, “A democratic stress test — The 2020 election and its aftermath” (2020). Downloaded 11 April 2023 from http://brightlinewatch.org/a-democratic-stress-test-the-2020-election-and-its-aftermathbright-line-watch-november-2020-survey/.
4. Walter N., Murphy S. T., How to unring the bell: A meta-analytic approach to correction of misinformation. Commun. Monogr. 85, 423–441 (2018).
5. Walter N., Cohen J., Holbert R. L., Morag Y., Fact-checking: A meta-analysis of what works and for whom. Polit. Commun. 37, 350–375 (2020).
6. Nyhan B., Facts and myths about misperceptions. J. Econ. Perspect. 34, 220–236 (2020).
7. Nyhan B., Why the backfire effect does not explain the durability of political misperceptions. Proc. Natl. Acad. Sci. U.S.A. 118, e1912440117 (2021).
8. Carey J. M., Guess A. M., Loewen P. J., Merkley E., Nyhan B., Phillips J. B., Reifler J., The ephemeral effects of fact-checks on COVID-19 misperceptions in the United States, Great Britain and Canada. Nat. Hum. Behav. 6, 236–243 (2022).
9. Blair R. A., Gottlieb J., Nyhan B., Paler L., Argote P., Stainfield C. J., Interventions to counter misinformation: Lessons from the Global North and applications to the Global South. Curr. Opin. Psychol. 55, 101732 (2024).
10. R. A. Blair, J. Gottlieb, B. Nyhan, L. Paler, C. J. Stainfield, J. A. Weaver, “How effective are media literacy interventions at countering misinformation in the Global South?” (Democratic Erosion Evidence Brief, Democracy Erosion Consortium, 2024); www.democratic-erosion.com/wp-content/uploads/2024/02/HOW-EFFECTIVE-ARE-MEDIA-LITERACY-INTERVENTIONS-AT-COUNTERING-MISINFORMATION-IN-THE-GLOBAL-SOUTH.pdf.
11. Guess A. M., Lerner M., Lyons B., Montgomery J. M., Nyhan B., Reifler J., Sircar N., A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proc. Natl. Acad. Sci. U.S.A. 117, 15536–15545 (2020).
12. Pereira F. B., Bueno N. S., Nunes F., Pavão N., Fake news, fact checking, and partisanship: The resilience of rumors in the 2018 Brazilian elections. J. Polit. 84, 2188–2201 (2022).
13. Pereira F. B., Bueno N. S., Nunes F., Pavão N., Inoculation reduces misinformation: Experimental evidence from multidimensional interventions in Brazil. J. Exp. Pol. Sci. 11, 239–250 (2023).
14. P. Rossini, C. Mont’Alverne, A. Kalogeropoulos, “Explaining beliefs in electoral misinformation in the 2022 Brazilian election: The role of ideology, political trust, social media, and messaging apps” in Harvard Kennedy School Misinformation Review (Harvard Kennedy School, 2023); https://misinforeview.hks.harvard.edu/article/explaining-beliefs-in-electoral-misinformation-in-the-2022-brazilian-election-the-role-of-ideology-political-trust-social-media-and-messaging-apps/.
15. J. Cook, S. Lewandowsky, “The debunking handbook” (2011). Downloaded 17 November 2023; https://skepticalscience.com/docs/Debunking_Handbook.pdf.
16. Prike T., Ecker U. K. H., Effective correction of misinformation. Curr. Opin. Psychol. 54, 101712 (2023).
17. A. Lupia, M. D. McCubbins, The Democratic Dilemma: Can Citizens Learn What They Need to Know? (Cambridge Univ. Press, 1998).
18. Nyhan B., The limited effects of testimony on political persuasion. Public Choice 148, 283–312 (2011).
19. Berinsky A. J., Rumors and health care reform: Experiments in political misinformation. Br. J. Polit. Sci. 47, 241–262 (2017).
20. Clayton K., Willer R., Endorsements from Republican politicians can increase confidence in US elections. Res. Polit. 10, 20531680221148967 (2023).
21. Lewandowsky S., van der Linden S., Countering misinformation and fake news through inoculation and prebunking. Eur. Rev. Soc. Psychol. 32, 348–384 (2021).
22. Vraga E. K., Kim S. C., Cook J., Bode L., Testing the effectiveness of correction placement and type on Instagram. Int. J. Press/Politics 25, 632–652 (2020).
23. Brashier N. M., Pennycook G., Berinsky A. J., Rand D. G., Timing matters when correcting fake news. Proc. Natl. Acad. Sci. U.S.A. 118, e2020043118 (2021).
24. Swire-Thompson B., Cook J., Butler L. H., Sanderson J. A., Lewandowsky S., Ecker U. K. H., Correction format has a limited role when debunking misinformation. Cogn. Res. 6, 83 (2021).
25. Kotz J., Giese H., König L. M., How to debunk misinformation? An experimental online study investigating text structures and headline formats. Br. J. Health Psychol. 28, 1097–1112 (2023).
26. Tay L. Q., Hurlstone M. J., Kurz T., Ecker U. K. H., A comparison of prebunking and debunking interventions for implied versus explicit misinformation. Br. J. Psychol. 113, 591–607 (2022).
27. Pillai R. M., Brown-Schmidt S., Fazio L. K., Does wording matter? Examining the effect of phrasing on memory for negated political fact checks. J. Appl. Res. Mem. Cogn. 12, 48–58 (2023).
28. M. Linegar, B. Sinclair, S. van der Linden, R. M. Alvarez, Prebunking elections rumors: Artificial intelligence assisted interventions increase confidence in American elections. arXiv:2410.19202 [econ.GN] (2024).
29. Traberg C. S., Roozenbeek J., van der Linden S., Psychological inoculation against misinformation: Current evidence and future directions. Ann. Am. Acad. Pol. Soc. Sci. 700, 136–151 (2022).
30. Roozenbeek J., Traberg C. S., van der Linden S., Technique-based inoculation against real-world misinformation. R. Soc. Open Sci. 9, 211719 (2022).
31. Cook J., Lewandowsky S., Ecker U. K. H., Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLOS ONE 12, e0175799 (2017).
32. E. Durbin, L. Everett, A. French, N. Mancini, M. Pease, F. A. Rincon, H. Tanenbaum, B. Nyhan, “Inoculation discourages consumption of news from unreliable sources, but fails to neutralize misinformation” (2024). Downloaded 17 January 2024; https://sites.dartmouth.edu/nyhan/current-research/.
33. Schmid-Petri H., Bürger M., The effect of misinformation and inoculation: Replication of an experiment on the effect of false experts in the context of climate change communication. Public Underst. Sci. 31, 152–167 (2022).
34. Spampatti T., Hahnel U. J. J., Trutnevyte E., Brosch T., Psychological inoculation strategies to fight climate disinformation across 12 countries. Nat. Hum. Behav. 8, 380–398 (2024).
35. Compton J., Prophylactic versus therapeutic inoculation treatments for resistance to influence. Commun. Theory 30, 330–343 (2020).
36. Ivanov B., Rains S. A., Geegan S. A., Vos S. C., Haarstad N. D., Parker K. A., Beyond simple inoculation: Examining the persuasive value of inoculation for audiences with initially neutral or opposing attitudes. West. J. Commun. 81, 105–126 (2017).
37. G. Pennycook, D. G. Rand, “Examining false beliefs about voter fraud in the wake of the 2020 presidential election” in The Harvard Kennedy School Misinformation Review (Harvard Kennedy School, 2021), vol. 2; https://misinforeview.hks.harvard.edu/wp-content/uploads/2021/01/pennycook_voter_fraud_elections_20210111.pdf.
38. Dourado T., Almeida S., Piaia V., Fraude nas urnas e contestação eleitoral no Brasil: Análise multiplataforma de atores políticos, viés conspiratório e moderação de conteúdo [Voting fraud and electoral contestation in Brazil: Multiplatform analysis of political actors, conspiratorial bias and content moderation]. Opinião Pública 30, e3017 (2024).
39. Carey J., Chun E., Cook A., Fogarty B., Jacoby L., Nyhan B., Reifler J., Sweeney L., The narrow reach of targeted corrections: No impact on broader beliefs about election integrity. Polit. Behav. 47, 737–750 (2025).
40. E. Thorson, The Invented State: Policy Misperceptions in the American Public (Oxford Univ. Press, 2024).
41. Thorson E., Abdelaaty L., Misperceptions about refugee policy. Am. Polit. Sci. Rev. 117, 1123–1129 (2023).
42. Fogarty B. J., Curtis J., Gouzien P. F., Kimball D. C., Vorst E. C., News attention to voter fraud in the 2008 and 2012 US elections. Res. Polit. 2, 2053168015587156 (2015).
43. Ahlquist J. S., Mayer K. R., Jackman S., Alien abduction and voter impersonation in the 2012 US general election: Evidence from a survey list experiment. Elect. Law J. 13, 460–475 (2014).
44. J. Levitt, “A comprehensive investigation of voter impersonation finds 31 credible incidents out of one billion ballots cast,” Washington Post, 6 August 2014.
45. Jacobson G. C., Comparing the impact of Joe Biden and Donald Trump on popular attitudes toward their parties. Pres. Stud. Q. 53, 440–459 (2023).
46. Bright Line Watch, “Rebound in confidence: American democracy and the 2022 midterm elections” (2022). November 2022; http://brightlinewatch.org/american-democracy-and-the-2022-midterm-elections/.
47. Bright Line Watch, “Uncharted territory: The aftermath of presidential indictments” (2023). July 2023; http://brightlinewatch.org/uncharted-territory-the-aftermath-of-presidential-indictments/.
48. Monmouth University Poll, “Faith in American system recovers after summer Jan. 6 hearings” (2022). Downloaded 26 January 2024 from www.monmouth.edu/polling-institute/reports/monmouthpoll_us_092722/.
49. Monmouth University Poll, “What makes a good Republican?” (2022). Downloaded 26 January 2024 from www.monmouth.edu/polling-institute/reports/monmouthpoll_us_121622/.
50. A. Attie, J. M. Carey, M. Langevin, J. Korn, R. Rhett, “Featured Q&A: How secured is Brazil’s voting system?” (Latin American Advisor, 2022).
51. M. Savarese, D. Jeantet, “Brazil’s Jair Bolsonaro is barred from running for office until 2030,” Associated Press, 30 June 2023; https://apnews.com/article/brazil-bolsonaro-ineligible-court-ruling-vote-99dee0fe4b529019ccbb65c9636a9045#.
52. Deutsche Welle, “Brazil: Bolsonaro on trial over electoral fraud claims,” DW, 22 June 2023; www.dw.com/en/brazil-bolsonaro-on-trial-over-electoral-fraud-claims/a-66006609.
53. D. Jeantet, C. Bridi, “Report by Brazil’s military on election count cites no fraud,” Associated Press, 11 November 2022; https://apnews.com/article/jair-bolsonaro-caribbean-brazil-rio-de-janeiro-ffc6206a16e26e192c87995430c4d17c.
54. A. Downie, “Brazil military finds no evidence of election fraud, dashing hopes of Bolsonaro supporters,” The Guardian, 10 November 2022; www.theguardian.com/world/2022/nov/10/brazil-military-finds-no-evidence-of-election-dashing-hopes-of-bolsonaro-supporters.
55. “Protestos nos quartéis e tiros de guerra ganham caráter de vigília pró-Bolsonaro” [Protests at the Armed Forces headquarters and military divisions resemble a pro-Bolsonaro vigil], Uol Notícias, 21 November 2022; https://noticias.uol.com.br/ultimas-noticias/agencia-estado/2022/11/21/protestos-nos-quarteis-e-tiros-de-guerra-ganham-carater-de-vigilia-pro-bolsonaro.htm.
56. J. Nicas, A. Spigariol, F. Milhorance, A. Ionova, “The moment the Brazil rioters broke through: Exclusive video,” New York Times, 11 January 2023; www.nytimes.com/2023/01/11/world/americas/brazil-riots-congress-security.html.
57. Pennycook G., Berinsky A. J., Bhargava P., Lin H., Cole R., Goldberg B., Lewandowsky S., Rand D. G., Inoculation and accuracy prompting increase accuracy discernment in combination but not alone. Nat. Hum. Behav. 8, 2330–2341 (2024).
58. Amazeen M. A., Krishna A., Eschmann R., Cutting the bunk: Comparing the solo and aggregate effects of prebunking and debunking Covid-19 vaccine misinformation. Sci. Commun. 44, 387–417 (2022).
59. J. Compton, “Inoculation theory” in The SAGE Handbook of Persuasion, Second Edition: Developments in Theory and Practice, J. P. Dillard, L. Shen, Eds. (SAGE Publications, 2012), pp. 220–236.
60. Kuru O., Literacy training vs. psychological inoculation? Explicating and comparing the effects of predominantly informational and predominantly motivational interventions on the processing of health statistics. J. Commun. 75, 64–78 (2025).
61. Spampatti T., Brosch T., Trutnevyte E., Hahnel U. J. J., A trust inoculation to protect public support of governmentally mandated actions to mitigate climate change. J. Exp. Soc. Psychol. 115, 104656 (2024).
62. M. Walker, J. Gottfried, “Republicans far more likely than Democrats to say fact-checkers tend to favor one side,” Pew Research Center, 27 June 2019; www.pewresearch.org/short-reads/2019/06/27/republicans-far-more-likely-than-democrats-to-say-fact-checkers-tend-to-favor-one-side/.
63. J. Fifield, “U.S. agency has stopped supporting states on election security, official confirms,” Votebeat, 12 March 2025; www.votebeat.org/2025/03/11/cisa-ends-support-election-security-nass-nased/.
64. Nyhan B., Porter E., Wood T. J., Time and skeptical opinion content erode the effects of science coverage on climate beliefs and attitudes. Proc. Natl. Acad. Sci. U.S.A. 119, e2122069119 (2022).
65. Capewell G., Maertens R., Remshard M., van der Linden S., Compton J., Lewandowsky S., Roozenbeek J., Misinformation interventions decay rapidly without an immediate posttest. J. Appl. Soc. Psychol. 54, 441–454 (2024).
66. Bowles J., Croke K., Larreguy H., Liu S., Marshall J., Sustaining exposure to fact-checks: Misinformation discernment, media consumption, and its political implications. Am. Polit. Sci. Rev., 1–24 (2025).
67. Clifford S., Sheagley G., Piston S., Increasing precision without altering treatment effects: Repeated measures designs in survey experiments. Am. Polit. Sci. Rev. 115, 1048–1065 (2021).
68. Bloniarz A., Liu H., Zhang C.-H., Sekhon J. S., Yu B., Lasso adjustments of treatment effect estimates in randomized experiments. Proc. Natl. Acad. Sci. U.S.A. 113, 7383–7390 (2016).
69. M. Lavigne, B. Fogarty, J. Carey, B. Nyhan, J. Reifler, “Inattention and differential exposure: How media questioning of election fraud misinformation often fails to reach the public” (2025); https://sites.dartmouth.edu/nyhan/files/2025/03/voter_fraud_news_exposure.pdf.
70. R. S. Helderman, E. Viebeck, “‘The last wall’: How dozens of judges across the political spectrum rejected Trump’s efforts to overturn the election,” Washington Post, 12 December 2020.
71. M. Balsamo, “Disputing Trump, Barr says no widespread election fraud,” Associated Press, 28 June 2022.
72. J. Danforth, B. Ginsberg, T. B. Griffith, D. Hoppe, J. M. Luttig, M. W. McConnell, T. B. Olson, G. H. Smith, “Lost, not stolen: The conservative case that Trump lost and Biden won the 2020 presidential election.” Downloaded 12 April 2023; https://lostnotstolen.org.
73. van der Linden S., Countering science denial. Nat. Hum. Behav. 3, 889–890 (2019).
74. S. van der Linden, A. Leiserowitz, S. Rosenthal, E. Maibach, Inoculating the public against misinformation about climate change. Glob. Chall. 1, 1600008 (2017).
75. Montgomery J. M., Nyhan B., Torres M., How conditioning on posttreatment variables can ruin your experiment and what to do about it. Am. J. Polit. Sci. 62, 760–775 (2018).
76. J. B. Silva, “Flávio Bolsonaro fala pela primeira vez após a derrota do pai” [Flávio Bolsonaro speaks for the first time after his father’s defeat], Veja, 31 October 2022; https://veja.abril.com.br/politica/flavio-bolsonaro-fala-pela-primeira-vez-apos-a-derrota-do-pai/.
77. D. Gullino, “Em primeira manifestação após derrota do pai, Flávio Bolsonaro fala em ‘erguer a cabeça’ e ‘não desistir’” [In his first statement after his father’s defeat, Flávio Bolsonaro speaks of ‘keeping your head up’ and ‘not giving up’], O Globo, 31 October 2022 [accessed 7 February 2024]; https://oglobo.globo.com/politica/eleicoes-2022/noticia/2022/10/em-primeira-manifestacao-apos-derrota-do-pai-flavio-bolsonaro-fala-em-erguer-a-cabeca-e-nao-desistir.ghtml.
78. J. Scheller, “‘Pai, estou contigo pro que der e vier!’, diz Flávio após derrota de Bolsonaro para Lula no 2° turno” [‘Father, I am with you no matter what!’ says Flávio after Bolsonaro’s defeat to Lula in the second round], Estadão, 31 October 2022; https://oglobo.globo.com/politica/eleicoes-2022/noticia/2022/10/em-primeira-manifestacao-apos-derrota-do-pai-flavio-bolsonaro-fala-em-erguer-a-cabeca-e-nao-desistir.ghtml.
