Philosophical Transactions of the Royal Society B: Biological Sciences
2022 Oct 31;377(1866):20210342. doi: 10.1098/rstb.2021.0342

Polarized imagination: partisanship influences the direction and consequences of counterfactual thinking

Kai Epstude 1,†, Daniel A Effron 2, Neal J Roese 3
PMCID: PMC9619232  PMID: 36314153

Abstract

Four studies examine how political partisanship qualifies previously documented regularities in people's counterfactual thinking (n = 1186 Democrats and Republicans). First, whereas prior work finds that people generally prefer to think about how things could have been better instead of worse (i.e. entertain counterfactuals in an upward versus downward direction), studies 1a–2 find that partisans are more likely to generate and endorse counterfactuals in whichever direction best aligns with their political views. Second, previous research finds that the closer someone comes to causing a negative event, the more blame that person receives; study 3 finds that this effect is more pronounced among partisans who oppose (versus support) a leader who ‘almost' caused a negative event. Thus, partisan reasoning may influence which alternatives to reality people will find most plausible, will be most likely to imagine spontaneously, and will view as sufficient grounds for blame.

This article is part of the theme issue ‘Thinking about possibilities: mechanisms, ontogeny, functions and phylogeny’.

Keywords: counterfactual thinking, mental simulation, political partisanship, motivated reasoning, moral judgement

1. Introduction

Today's politically polarized climate provides ample examples of a social-psychological truism: partisans can witness the same situation but disagree on what actually happened [1,2]. The present research explores a subtler manifestation of political polarization: partisans considering the same situation may disagree on what could have happened. In other words, partisanship may not only influence the facts people believe, but also the counterfactuals they imagine, and the conclusions they draw from those counterfactuals.

Counterfactuals are mental simulations of ‘what might have been'—imagined alternatives to past outcomes that might have occurred if circumstances had been different [3,4]. Counterfactuals often support causal inferences, particularly when the counterfactual takes the form of an ‘if-then' conditional [5]. Psychologically, counterfactual thoughts are consequential in that they help individuals to learn from mistakes and to plan for the future [6,7]. Politically, they allow societies to judge the effectiveness of past actions, assign praise or blame to leaders, and determine how much to support various policies (e.g. [8]).

Yet counterfactuals also provide fertile ground for partisan reasoning. Counterfactual events, by definition, did not occur and hence cannot be verified [9]. People may imagine counterfactual events that fit with their beliefs and motivations, asserting it ‘almost happened' without the risk of being proved wrong. We propose that, when people reflect counterfactually about political events, political partisanship predicts what they imagine as well as what they infer—in other words, both the content and the conclusions of their counterfactual thinking.

With respect to content, we propose that partisanship predicts preferences for imagining a better or worse alternative to reality. For example, when considering what the US economy would be like if President Trump had not cut taxes in 2017, Democrats might be more likely to imagine that the economy would have been better, whereas Republicans might be more likely to imagine it would have been worse. With respect to conclusions, we propose that people not only endorse different counterfactuals, but also draw different inferences from the same counterfactual. In general, people may regard a negative event that nearly happened as sufficient evidence that the relevant leader deserves blame. However, more specifically, partisanship may moderate this relationship. For example, even if Democrats and Republicans agreed that Trump ‘almost’ provoked a war with North Korea in 2017, Democrats might be more likely than Republicans to see this close counterfactual as sufficient grounds for blaming Trump. By examining the content and conclusions of counterfactual thinking, we reveal an underappreciated source of political polarization. The next sections develop our theorizing about counterfactual content and conclusions.

(a) Partisanship and counterfactual content

Prior research identifies several empirical regularities in the counterfactual thoughts that people generate and endorse. One regularity concerns a counterfactual's direction of comparison [4,5]. Most counterfactual thoughts focus on how the past could have been better instead of worse—i.e. counterfactuals tend to be upward rather than downward [7]. This preference for upward counterfactuals connects to what people want and strive for [10]. Upward counterfactuals suggest ways of achieving a better state of affairs and hence reflect a functional orientation towards personal improvement [7]. Counterfactual thinking is typically activated by unexpected negative events (e.g. failing a test), and upward counterfactual thinking helps people determine how to avoid such events in the future (e.g. ‘I would have passed if only I had studied'). Downward counterfactuals can help people feel better about negative events (e.g. [11]), but in practice are rarely generated.

A second empirical regularity is that personal knowledge and preferences shape counterfactual thinking (e.g. [3]). An important source of such knowledge and preference is political partisanship. Accordingly, partisanship predicts which counterfactuals people find most plausible [12]. For example, partisans are more likely to believe that a lie ‘could have been true' if it aligns with their political views [13].

However, previous work neither predicts nor tests how these two empirical regularities may relate to each other. When partisans consider how a political event or policy could have been different, do they gravitate towards upward counterfactuals [7] or towards counterfactuals in whichever direction happens to fit with their political views [12]? We predict the latter. In other words, we propose that political partisanship creates an important boundary condition on people's general preference for upward counterfactual thinking.

There are at least two reasons to predict partisan effects on the direction of counterfactual thinking. First, different partisans are motivated to reach different conclusions. Whereas laypeople's counterfactual thinking may be motivated by a desire to improve following a negative event, resulting in upward counterfactual thoughts that point the way to betterment, partisans' counterfactual thinking may instead be more motivated by a desire to justify and defend their political views, resulting in more flexibility about the direction of counterfactual thinking. Much like people selectively search their memories for evidence consistent with preferred beliefs (e.g. [14]) or test the impact of selected economic choices in line with their political preference [15], partisans may thus selectively imagine counterfactuals in whichever direction is consistent with preferred political conclusions. Second, different partisans have different knowledge, beliefs and assumptions. As a result, they may accept and generate different counterfactual thoughts—not through motivated reasoning, but through rational (Bayesian) thinking. In practice, these processes are difficult to disentangle and may operate in concert [16,17]. Based on these theorized processes, we hypothesized:

hypothesis 1a (H1a): partisans flexibly accept and generate either upward or downward counterfactuals, whichever direction is consistent with their preferred ideological stance or inconsistent with the stance they oppose.

The alternative hypothesis we tested was:

hypothesis 1b (H1b): regardless of their preferred ideological stance, partisans will accept and generate upward counterfactuals more than downward counterfactuals.

(b) Partisanship and counterfactual conclusions

Partisanship may be related not only to the direction of counterfactual thinking, but also to the conclusions people draw from counterfactuals in a particular direction. Our focus is on blame, a politically consequential moral judgement linked to counterfactual thinking [3]. Another empirical regularity about counterfactual thinking is that the closer someone comes to causing a negative event, the more blame that person receives [18,19]. Citizens might blame their leader more for bringing their nation within minutes of an avoidable nuclear war than for bringing them within months of such a war, even though in both cases no war occurred. At the same time, it is ambiguous how much counterfactual closeness should figure into blame judgements. How much more blame does a leader deserve for bringing a nation within minutes versus months of a nuclear war? We propose that this ambiguity offers a degree of flexibility that partisan reasoning can exploit. Specifically, the closer people think a negative event came to occurring, the more they may blame the relevant leader—but especially if they oppose (versus support) that leader. In this way, partisanship may moderate the impact of counterfactual closeness on moral judgement.

This conceptualization extends the idea that when people are motivated to reach a conclusion, they set lower evidentiary standards for reaching it [20]. For example, participants examined the qualifications of a disliked person less thoroughly than those of a person they liked [21]. Facts provide stronger evidence than counterfactuals. However, when a particular conclusion aligns with a partisan's views, he or she may be less likely to require factual evidence of the conclusion; counterfactual evidence may suffice. For example, counterfactual thinking played a bigger role in partisans' judgements of media hypocrisy when the partisans were motivated to dismiss the media as hypocritical than when they were not [22].

Extending these ideas, we propose that the closeness of a downward counterfactual seems like a more compelling reason to blame someone when the target of blame is someone a partisan opposes (versus supports). Specifically, when partisans think poorly of a leader, they may be more inclined to blame that leader for a negative outcome that did not occur, but nearly did. In this sense, partisanship sets the evidentiary standards people use for drawing conclusions about blame. Based on these theorized processes, we hypothesized:

hypothesis 2 (H2): the positive relationship between the closeness of an undesirable counterfactual event and blame is stronger when partisans oppose (versus support) the target of blame.

(c) Prior research on how beliefs and motivations shape counterfactual thinking

Inside and outside the political domain, people's beliefs and motivations predict their counterfactual thoughts (e.g. [8,13,23,24]). For example, when motivated to prove their moral character, people will invent ‘counterfactual transgressions'—bad deeds they imagine they could have done, but did not actually do [25,26]. Policy experts found historical counterfactuals (e.g. how the Cold War could have turned out differently) to be more plausible if those counterfactuals aligned with their views [12,27], and fundamentalist Christians were averse to even considering counterfactuals that challenged their religious beliefs [28].

Unlike the present research, however, these prior studies systematically examined neither counterfactual direction nor the relationship between counterfactual closeness and blame judgements. We further advance this prior research by demonstrating how partisanship qualifies previously documented regularities in counterfactual thinking. First, we find that the effect of partisanship on counterfactual thinking is so strong that it overrides people's general preference for upward (versus downward) counterfactual thoughts [7]. Second, we demonstrate that partisanship moderates people's general tendency to use counterfactual closeness as a cue for whom to blame. Finally, by testing our hypotheses with a range of contemporary political issues, we reveal that partisan divisions extend beyond what people believe has actually happened; they also encompass what people think could have happened, and who is to blame for it.

(d) The present research

Four studies tested our hypotheses among Democrats and Republicans. Testing H1a, studies 1a and 1b examined whether partisans would rate both upward and downward counterfactuals as more plausible when these counterfactuals aligned with their politics, and study 2 examined whether partisans would be more likely to generate counterfactuals in whichever direction (upward versus downward) was more aligned with their views on a given political issue. These studies also tested alternative hypothesis H1b, that partisans would show a general preference for upward counterfactual thinking. Finally, testing H2, study 3 examined whether partisans are more likely to blame a president for a negative event when they think it ‘almost' occurred—especially when they oppose that president.

(e) Open practices

We pre-registered all studies, determined stopping rules for data collection before running each study, and report all measures, conditions and data exclusions. Verbatim study materials, data, analysis code and links to pre-registration documents are posted at: https://osf.io/3m6p7/ [29].

2. Studies 1a and 1b

Studies 1a and 1b examined how partisans judge the plausibility of political counterfactuals. Democrats and Republicans rated the plausibility of six counterfactuals: half upward and half downward, half aligned and half misaligned with participants' political views. In study 1a, all upward counterfactuals aligned with Democrats' views and all downward counterfactuals aligned with Republicans' views. In study 1b, we reversed the linkage, such that all upward counterfactuals aligned with Republicans' ideology and all downward counterfactuals aligned with Democrats' ideology. Flipping the connection between partisanship and counterfactual direction pits H1a (that the partisan thinker will accept either an upward or downward counterfactual depending on its alignment with their politics) against H1b (that partisans will instead prefer upward over downward counterfactuals).

(a) Method

(i) Participants

We recruited American partisans from Prolific Academic (see the electronic supplementary material for screening criteria and exclusions). Study 1a's final sample size was n = 201 (101 Democrats, 100 Republicans; 97 men, 103 women and one non-binary person; M age = 40 years, s.d. = 13; 83% White, 7% Black, 5% American Indian or Alaska Native, 6% Latino/Latina) and study 1b's was n = 192 (100 Democrats and 92 Republicans; 93 men, 93 women, five non-binary people, and one person who declined to indicate gender; M age = 35 years, s.d. = 14; 66% White, 12% Asian, 11% Black, 6% Latino/Latina, and the remainder other races and ethnicities).

(ii) Procedure

Both studies presented six political topics with corresponding counterfactuals (see the electronic supplementary material, table S1). Three counterfactuals were upward and three downward, with direction counterbalanced across topics. In study 1a, all upward counterfactuals aligned with Democrats' views (e.g. ‘If Trump had not passed the tax cuts, then the economy would currently be much better') and all downward counterfactuals aligned with Republicans' views (e.g. ‘If Trump had not passed the tax cuts, then the economy would currently be much worse'). Study 1b flipped this pairing, such that all downward counterfactuals aligned with Democrats' views (e.g. ‘If Trump had been able to pass even bigger tax cuts, then the economy would currently be much worse') and all upward counterfactuals aligned with Republicans' views (e.g. ‘If Trump had been able to pass even bigger tax cuts, then the economy would currently be much better').

(iii) Measures

Participants rated the plausibility of each counterfactual via three items: agreement, appropriateness and plausibility (α = 0.91 in both studies 1a and 1b). As an ancillary measure, participants also rated how angry the counterfactual made them (see the electronic supplementary material for results). As a manipulation check, participants rated the counterfactuals' compatibility with their beliefs. Response options ranged from extremely appropriate, plausible, etc. (5) to extremely inappropriate, implausible, etc. (1). For exploratory purposes, participants rated the importance of each topic and reported strength of party identification (using a 4-item scale adapted from [30]); relevant findings appear in the electronic supplementary material.
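The reported α = 0.91 refers to Cronbach's alpha, the standard internal-consistency index for a multi-item scale. As a minimal sketch (using simulated ratings, not the study's data; all variable names here are illustrative), alpha can be computed directly from the item variances and the variance of the scale total:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) rating matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Toy data: three 1-5 ratings driven by one latent judgement plus noise
rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=200)
ratings = np.clip(latent[:, None] + rng.normal(0, 0.5, size=(200, 3)), 1, 5)
print(round(cronbach_alpha(ratings), 2))
```

Because the three toy items share most of their variance through the latent judgement, the resulting alpha is high, in the same range as the value the authors report.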

(b) Results

(i) Analytic strategy

We submitted each measure to a mixed regression model with fixed effects for condition (1 = aligned, 0 = misaligned), fixed effects for the six political issues, and random intercepts for participants to account for the repeated-measures design.1 The mixed models in this and all subsequent studies were computed in Stata 16 using the mixed command, which assumes an independent variance-covariance structure, employs maximum-likelihood estimation, and tests coefficients against the z-distribution. We pre-registered one-tailed significance tests of directional predictions (conclusions were identical with two-tailed tests).
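The authors fit these models with Stata's mixed command. A rough Python equivalent, for readers without Stata, uses statsmodels' MixedLM; the data below are simulated and the variable names (subj, item, aligned, plausibility) are illustrative, not the study's. statsmodels defaults to REML, so reml=False is set to approximate the maximum-likelihood estimation the paper reports:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_items = 60, 6

# One row per participant x item; 'aligned' = counterfactual fits their politics
df = pd.DataFrame({
    "subj": np.repeat(np.arange(n_subj), n_items),
    "item": np.tile(np.arange(n_items), n_subj),
    "aligned": np.tile([1, 0, 1, 0, 1, 0], n_subj),
})
subj_intercept = rng.normal(0, 0.5, n_subj)          # random intercepts
df["plausibility"] = (3.0 + 1.0 * df["aligned"]      # true condition effect = 1.0
                      + subj_intercept[df["subj"]]
                      + rng.normal(0, 0.5, len(df)))

# Fixed effects for condition and item, random intercepts for participants
fit = smf.mixedlm("plausibility ~ aligned + C(item)", df,
                  groups=df["subj"]).fit(reml=False)
print(round(fit.params["aligned"], 2))  # recovers the condition effect (~1.0)
```

The coefficient on aligned plays the role of the b values reported below: the estimated difference in plausibility between aligned and misaligned counterfactuals, holding item and participant effects constant.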

(ii) Manipulation check

Confirming the success of our manipulation, participants rated the counterfactuals that were meant to be aligned with their politics as more compatible with their beliefs than the counterfactuals that were meant to be misaligned with their politics. This result emerged in both study 1a (aligned: M = 3.78, s.d. = 0.86; misaligned: M = 2.40, s.d. = 0.99): b = 1.38, s.e. = 0.07, z = 19.23, p < 0.001, dz = 0.97, and study 1b (aligned: M = 3.33, s.d. = 0.91; misaligned: M = 2.26, s.d. = 0.86): b = 1.08, s.e. = 0.07, z = 14.54, p < 0.001, dz = 0.81.

(iii) Counterfactuals seemed more plausible when aligned with one's politics

In study 1a, participants thought a counterfactual was more plausible when it was aligned with their politics (M = 3.86, s.d. = 0.77) than when it was misaligned with their politics (M = 2.70, s.d. = 0.92): b = 1.16, s.e. = 0.06, z = 18.78, p < 0.001, dz = 0.98. This finding confirms our main prediction for this study. However, it is unclear whether the results reflect a preference for upward counterfactuals among Democrats or a more general tendency for partisans to prefer whichever direction of counterfactual happens to appeal to their partisan view.

Study 1b disentangles these interpretations by reversing which counterfactual direction aligned with which party's views. The results showed a flexible acceptance of either upward or downward counterfactuals by partisans. That is, participants rated a counterfactual as more plausible when it was aligned with their politics (M = 3.46, s.d. = 0.83) than when it was misaligned with their politics (M = 2.47, s.d. = 0.77): b = 0.99, s.e. = 0.07, z = 14.87, p < 0.001, dz = 0.80. Note that this relationship is the same as in study 1a, despite the fact that in study 1b, upward counterfactuals were aligned with Republicans' views and downward counterfactuals were aligned with Democrats' views. Taken together, studies 1a and 1b suggest that partisans prefer whichever counterfactual direction aligns with their views.

(iv) Preference for upward versus downward counterfactuals depended on partisanship

Recall that previous research documents a general preference for upward over downward counterfactual thinking (e.g. [7]). Were the partisan effects in the present studies strong enough to swamp this general preference? Figure 1's results suggest that the answer is yes. We analysed these results by submitting plausibility ratings to a mixed model with fixed effects for the counterfactual's direction (1 = up, 0 = down), participants' political party (1 = Republican, 0 = Democrat), and their interaction, fixed effects for item, and random intercepts for participant. We then computed the simple slope of counterfactual direction for each political party (these analyses were not pre-registered).

Figure 1. Mean plausibility judgements, ±95% confidence interval (CI), by political party in studies 1a and 1b. Note: Ms are predictive margins from the mixed regression models described in the main text.

The results showed that in study 1a, when upward counterfactuals were aligned with Democrats' views, upward counterfactuals were rated as more plausible than downward counterfactuals among Democrats (figure 1's top panel; Mup = 4.03 versus Mdown = 2.38, s.d.s = 1.05 and 1.12, respectively): b = 1.65, s.e. = 0.08, z = 19.42, p < 0.001, consistent with the general pattern found in previous research. However, Republicans showed the reverse pattern, rating downward counterfactuals as more plausible than upward counterfactuals (Mup = 3.02, Mdown = 3.70, s.d.s = 1.26 and 1.15, respectively): b = –0.67, s.e. = 0.09, z = 7.90, p < 0.001. The interaction between party and direction was significant: b = –2.32, s.e. = 0.12, z = 19.33, p < 0.001.
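Under this dummy coding, the simple slope for the group coded 1 equals the slope for the group coded 0 plus the interaction coefficient. As a quick consistency check (using only the coefficients reported above, not a reanalysis of the data), the Republican slope in study 1a follows from the Democrat slope and the interaction:

```python
# Reported fixed effects from study 1a (direction: 1 = upward, 0 = downward;
# party: 1 = Republican, 0 = Democrat)
slope_democrat = 1.65   # simple slope of direction when party = 0
interaction = -2.32     # party x direction coefficient

# Simple slope of direction for Republicans (party = 1)
slope_republican = slope_democrat + interaction
print(round(slope_republican, 2))  # -0.67, the reported Republican coefficient
```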

The results were the mirror image in study 1b, in which downward counterfactuals were aligned with Democrats' views (figure 1's bottom panel). Now Republicans showed the usual pattern, rating upward counterfactuals as more plausible than downward counterfactuals (Mup = 3.30, Mdown = 2.77, s.d.s = 1.18 and 1.24, respectively): b = 0.26, s.e. = 0.09, z = 2.84, p = 0.005, whereas Democrats showed the reverse pattern, rating downward counterfactuals as more plausible than upward counterfactuals (Mup = 2.20, Mdown = 3.86, s.d.s = 0.98 and 1.17, respectively): b = –1.67, s.e. = 0.09, z = 19.21, p < 0.001. The interaction was significant: b = 1.93, s.e. = 0.13, z = 15.38, p < 0.001.

In short, the results of studies 1a and 1b indicate that the plausibility of an upward or downward counterfactual depends on how well it fits with partisan political views, supporting H1a. This effect was apparently strong enough to override people's general tendency to prefer upward over downward counterfactuals, thus offering no support for H1b.

3. Study 2

Study 2 provided a further test of H1a and H1b. Participants wrote counterfactuals about eight political events, and we counterbalanced whether the most salient upward or downward counterfactuals aligned with Democrats' or Republicans' views. Whereas studies 1a and 1b showed that partisans flexibly accept pre-written counterfactuals in whichever direction aligns with their politics, study 2 tests whether partisans will also generate their own counterfactuals in the direction aligned with their politics (H1a)—or whether they will consistently generate upward instead of downward counterfactuals (H1b).

(a) Method

(i) Participants

We recruited American partisans from Prolific Academic (see the electronic supplementary material for screening criteria and exclusions). The final sample was n = 190 (100 Democrats, 90 Republicans; 152 women, 33 men and five non-binary people; M age = 26 years, s.d. = 8; 76% White, 9% Black, 6% Asian, 6% Latina/Latino, remainder other races).

(ii) Procedure

Participants read eight descriptions of contentious political issues. After each issue, we asked participants to complete a partially written counterfactual statement (free response). Specifically, each statement provided participants with a counterfactual's antecedent (i.e. ‘if [something had been different] …') and participants needed to write the consequent (i.e. ‘then …'; see the electronic supplementary material, table S2). The statements varied such that in half, upward (versus downward) counterfactuals aligned with Democratic views and in the other half they aligned with Republican views. For example, one item was, ‘If Senate Republicans hadn't blocked Obama's appointee for the Supreme Court …' Here, an upward consequent aligns more with Democrats' views (e.g. ‘then things would have been better') whereas a downward consequent aligns more with Republicans' views (e.g. ‘then things would have been worse'). Another item was, ‘If Republicans had been able to pass the tax cut earlier than 2017…'. Here, an upward consequent aligns more with Republicans' views, whereas a downward consequent aligns more with Democrats' views. In addition to this counterbalancing of ideological-direction alignment, we also counterbalanced whether the antecedent was an action or an inaction.

After participants responded to the eight statements, they responded to the dependent measure. Specifically, participants viewed their prior responses to each statement, and indicated whether their responses focused on how the situation could have been better (indicating an upward counterfactual), worse (indicating a downward counterfactual) or neither. Finally, participants rated the importance of each topic as well as their party-identification strength using the same scales as in the previous studies (see the electronic supplementary material for relevant results).

(b) Results and discussion

Supporting H1a, participants were more likely to generate counterfactuals in whichever direction was more aligned with their partisan views. As figure 2 shows, participants generated more upward counterfactuals (66.45%) than downward counterfactuals (21.84%) when it was upward counterfactuals that aligned with their views. When downward counterfactuals aligned with their views, they instead generated more downward (67.50%) than upward (21.84%) counterfactuals. The proportion of counterfactuals classified as neither downward nor upward was similar across conditions (11.97% versus 11.71%). Note that these results suggest that partisan thinking in this context overrides the general preference for upward over downward counterfactual thinking suggested by prior research, thus offering no support for H1b.

Figure 2. Study 2: percentage of upward and downward counterfactuals generated in each condition, ±95% CI.

To test the significance of this pattern, we submitted the dependent measure (counterfactual direction; upward = 1, downward = 0; ‘neither up nor down' responses omitted) to a mixed logistic regression model with fixed effects for condition (1 = upward aligned, 0 = downward aligned), fixed effects for the eight political topics, and random intercepts for participants. The results showed a significant effect of condition with an odds ratio (OR) greater than 1, indicating that people are more likely to generate upward (versus downward) counterfactuals when upward (versus downward) counterfactuals are aligned with their politics: OR = 19.17, s.e. = 3.45, z = 16.40, p < 0.001.
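As a rough sanity check (not part of the pre-registered analysis), an unadjusted odds ratio can be computed directly from the generation percentages reported above. It comes out smaller than the model-based OR = 19.17, which is expected: the mixed logistic model conditions on topic fixed effects and participant random intercepts, and conditional odds ratios typically exceed their marginal counterparts.

```python
# Percentages of generated counterfactuals, 'neither' responses omitted
up_when_up_aligned, down_when_up_aligned = 66.45, 21.84
up_when_down_aligned, down_when_down_aligned = 21.84, 67.50

odds_up_aligned = up_when_up_aligned / down_when_up_aligned        # ~3.04
odds_down_aligned = up_when_down_aligned / down_when_down_aligned  # ~0.32
print(round(odds_up_aligned / odds_down_aligned, 1))  # unadjusted OR ~ 9.4
```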

Disaggregating the results by political party in an exploratory analysis showed no evidence of a general preference for upward counterfactuals among members of either party (figure 3). When upward counterfactuals aligned with Democrats' views, they were far more likely to generate upward counterfactuals (82.75%) than downward counterfactuals (7.00%), but when downward counterfactuals aligned with Democrats' views, they were far more likely to generate downward counterfactuals (82.25%) than upward counterfactuals (9.75%): OR = 119.52, s.e. = 81.11, z = 7.05, p < 0.001 for the effect of condition when analysing Democrats' data with the mixed model described above. Republicans showed the same result: when upward counterfactuals aligned with their views, they were more likely to generate upward (48.33%) than downward (38.33%) counterfactuals, but when downward counterfactuals aligned with their views, it was downward counterfactuals that they generated more frequently than upward (51.11% versus 32.50%): OR = 12.80, s.e. = 5.22, z = 6.25, p < 0.001 for the effect of condition when analysing Republicans' data. (Interestingly, similar to studies 1a and 1b, the effect of condition was stronger among Democrats than Republicans. That is, when we analysed all the data with the mixed model described above, adding a fixed effect for party (1 = Republican, 0 = Democrat) and its interaction with condition, the interaction was significant: OR = 0.02, s.e. = 0.01, z = 12.55, p < 0.001.) In summary, study 2 showed that participants were more likely to generate counterfactuals in whichever direction was consistent with their partisan views on a particular issue, thus supporting H1a over H1b.

Figure 3. Study 2: percentage of upward and downward counterfactuals generated in each condition, by Democrats and Republicans, ±95% CI.

4. Study 3

Whereas the previous studies showed that partisanship predicted the direction of comparison of counterfactual thinking (H1a and H1b), study 3 tested whether partisanship would moderate the relationship between judgements of counterfactual closeness and blame (H2). How much will partisans blame a leader for a negative event that ‘almost occurred' on that leader's watch? Study 3 asked American partisans to consider downward counterfactual events, some of which could have occurred during the Trump presidency (e.g. war with North Korea), and some of which could have occurred during the Biden presidency (e.g. renewed war with the Taliban). We then assessed the relationship between how close partisans thought these events came to occurring and how much they blamed the leader who was president when the events could have occurred. We expected that the closer people thought the event came to occurring, the more they would blame the president—but especially if they opposed (versus supported) that president.

(a) Method

(i) Participants

After applying our pre-registered exclusion criteria (see the electronic supplementary material), the final sample was n = 603 American participants recruited from Prolific Academic (595 of whom provided demographics; 354 women, 234 men and seven non-binary; M age = 30 years, s.d. = 11; 305 Trump voters and 304 Biden voters; 74% White, 12% Black, 6% Latino/Latina, 4% Asian, remainder other races and ethnicities).

(ii) Procedure

Participants read eight brief descriptions of negative political events that did not happen, such as the US and North Korea going to war in the summer of 2017 (see the electronic supplementary material, table S3). We chose these counterfactual events because we expected variance in how ‘close' participants would think the events came to occurring. Half of the events plausibly could have occurred during the term of a president participants supported (i.e. Biden or Trump, depending on participants' politics), whereas the other half could have occurred during the term of a president participants opposed (i.e. Trump or Biden, depending on participants' politics).

Participants evaluated how close the event came to occurring (1 = not close at all to 7 = extremely close) and how much the president at the time (i.e. Trump or Biden) should be blamed or praised for ‘nearly' allowing or causing the negative event. Then we administered some exploratory measures (emotional reactions, issue importance, how good or bad the counterfactual outcome would have been, and political party identification), which we discuss in the electronic supplementary material. Participants also reported demographics.

(b) Results

(i) Analytic approach

We submitted each dependent measure to a mixed regression model with fixed effects for the president being judged (1 = supported; 0 = opposed), counterfactual closeness (1–7 scale), their interaction, fixed effects for the eight items, and random intercepts for participants.2

(ii) . Blame

Our main hypothesis was that when people considered a president they opposed, the closer they believed a negative event came to occurring under his watch, the more they would blame him, and that this effect would be attenuated (or even reversed) when people considered a president they supported. In other words, we predicted a stronger positive relationship between counterfactual closeness and blame when participants had opposed (versus supported) the relevant president (H2).

As predicted, we observed a significant interaction between judgements of counterfactual closeness and whether participants supported or opposed the relevant president: b = –0.12, s.e. = 0.03, z = 4.70, p < 0.001. Decomposing this interaction with simple slopes revealed the predicted pattern (shown in figure 4; for violin plot, see the electronic supplementary material, figure S5). When people considered the president they opposed, there was a strong positive relationship between counterfactual closeness and blame: b = 0.54, s.e. = 0.02, z = 28.02, p < 0.001. When people considered the president they supported, there was also a positive relationship between closeness and blame (b = 0.42, s.e. = 0.02, z = 22.43, p < 0.001); however, as shown by the interaction term reported above, it was significantly attenuated.
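The two reported simple slopes are internally consistent with the interaction term, as a quick arithmetic check shows (a sketch using the reported coefficients; variable names are our own):

```python
# Reported coefficients from the blame model:
b_closeness = 0.54     # simple slope when the president was opposed (supported = 0)
b_interaction = -0.12  # supported x closeness interaction

# Simple slope when the president was supported (supported = 1) is the
# opposed slope plus the interaction coefficient:
slope_supported = round(b_closeness + b_interaction, 2)
print(slope_supported)  # 0.42, matching the reported supported-president slope
```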

Figure 4.

Study 3: stronger relationship between counterfactual closeness and blame when participants judged a president they opposed (versus supported). Note: the values are predictive margins, with 95% CIs, from the mixed regression model.

(iii) . Praise

The results for praise judgements complemented the results for blame judgements. As figure 5 shows, the closer participants thought a negative event had come to occurring, the less they praised the respective president, especially if they opposed that president (see the electronic supplementary material, figure S6 for violin plot). This pattern was significant, as shown by the interaction term in the mixed model described above: b = 0.07, s.e. = 0.02, z = 3.29, p < 0.001. There was a negative relationship between closeness and praise judgements regardless of whether participants supported or opposed the president, but this relationship was significantly stronger when participants opposed the president (b = –0.12, s.e. = 0.02, z = 7.29, p < 0.001) than when they supported the president (b = –0.05, s.e. = 0.02, z = 3.01, p = 0.003). We pre-registered a tentative prediction that greater closeness could mean more praise (for avoiding the negative event) from partisans who supported the relevant president, but the results did not support this prediction.

Figure 5.

Study 3: stronger relationship between counterfactual closeness and praise when participants judged a president they opposed (versus supported). Note: the values are predictive margins, with 95% CIs, from the mixed regression model.

(iv) . Moderation by political orientation

Exploratory analyses found that political orientation moderated the results in different ways for our two dependent measures.

Recall that the closer participants thought a negative event had come to occurring, the more they blamed the relevant president, but only if they opposed that president. This effect was entirely driven by Trump voters (figure 6). When we added a dummy code for the specific president participants supported to the mixed model described above (0 = Trump supporters, 1 = Biden supporters), we found a significant three-way interaction between counterfactual closeness, whether participants were judging a president they supported (coded 1) versus opposed (coded 0), and whether participants were Trump voters (coded 0) or Biden voters (coded 1): b = 0.23, s.e. = 0.05, z = 4.38, p < 0.001.

Figure 6.

Study 3: Trump voters drive the predicted pattern for the blame measure. Note: the values are predictive margins, with 95% CIs, from the mixed regression model.

Recall also that the closer participants thought a negative event had come to occurring, the less they praised the relevant president—but only if they opposed that president. This effect was entirely driven by Biden voters (figure 7), as shown by a significant three-way interaction when submitting the praise measure to the mixed model just described: b = 0.19, s.e. = 0.05, z = 4.15, p < 0.001.

Figure 7.

Study 3: Biden voters drive the predicted pattern for the praise measure. Note: the values are predictive margins, with 95% CIs, from the mixed regression model.

It is unclear whether this pattern of results reflects something about Biden versus Trump supporters, or about the specific political issues or negative counterfactual events to which the stimuli referred.

(c) . Discussion

Study 3 suggests that partisanship moderates the relationship between counterfactual thinking and moral judgements of blame and praise. The closer a negative event came to occurring on a president's watch, the more harshly partisans blamed that president, particularly when they had opposed him. When partisans already think poorly of a leader, they are more likely to blame that leader for a negative outcome that did not occur, but nearly did.

5. General discussion

Our four studies shed new light on how partisan beliefs relate to counterfactual thinking. Partisans find a given counterfactual more plausible when it aligns with their views (studies 1a and 1b), selectively generate counterfactuals that align with their views (study 2), and deploy counterfactuals that support preferred moral judgements about leaders (study 3). In summary, partisanship predicts both the content and the conclusions of counterfactual thoughts.

Our research makes several theoretical contributions. Our main contribution is to demonstrate how partisanship qualifies two empirical regularities in the counterfactual thinking literature. First, whereas previous research demonstrated an overwhelming preference for upward over downward counterfactuals (e.g. [31]), studies 1a–2 found a complete reversal of this preference when downward counterfactuals aligned with participants' views. That is, partisans in our studies flexibly generated and endorsed counterfactuals in whichever direction best aligned with their political views on a particular issue (supporting H1a over H1b). One explanation is that prior research tended to examine situations in which people were motivated by a desire to discover ‘how things could be better,' whereas partisans tend to be more motivated by a desire to justify and defend their political views.

The second empirical regularity we qualify is that the closer someone comes to causing a negative event, the more blame that person receives (e.g. [19]). Study 3 replicated this effect, but also showed that it is more pronounced among partisans who oppose (versus support) a leader who ‘almost' caused a negative event (supporting H2). One explanation is that when people dislike a leader, they lower their standards for what constitutes evidence of that leader's blameworthiness, giving more weight to imagined events—what could have happened under the leader's watch.

In short, partisan reasoning may influence which alternatives to reality people will find most plausible, will be most likely to spontaneously imagine, and will view as sufficient grounds for blame—thus creating important boundary conditions on previously documented effects.

Another theoretical contribution is that study 3 advances understanding of counterfactual thinking's role in moral judgement [32]. In some cases, downward counterfactual thinking connects to more-lenient moral judgements [33]—a contrast effect. For example, participants felt licensed to act in a less-than-virtuous manner after they reflected on the sinful actions they could have performed (but did not) [25,26]. In other cases, downward counterfactual thinking results in harsher moral judgements [34]—an assimilation effect. Study 3 suggests that the extent to which downward counterfactual thinking produces harsher moral judgements depends on partisanship. When partisans disliked a president, downward counterfactual thinking was more tightly associated with blaming that president. That is, the closer people thought a negative event came to occurring, the more likely they were to blame the president, especially if they opposed that president. Our findings thus raise the possibility that motivation influences how much of an assimilation effect results from downward counterfactual thinking. Future research should further examine this possibility.

Third, our results contribute to a debate about whether conservatives are more prone to cognitive biases than are liberals (cf. [1,35,36]). Our results suggest that partisanship connects to counterfactual thinking among people at both ends of the political spectrum (i.e. Democrats and Republicans). That said, our results contain nuance. In studies 1a–2, Democrats and Republicans alike were more inclined to endorse and generate counterfactuals that were aligned (versus misaligned) with their views—but this effect was larger among Democrats. In study 3, people's tendency to blame a president they opposed for negative events that nearly happened was larger among Trump supporters than Biden supporters—but participants' tendency to praise a president they supported for having averted negative events was larger among Biden than Trump supporters. Future research should assess the generality of these patterns and pinpoint why they emerge. However, our results do not support the possibility that, when it comes to counterfactual thinking, conservatives show more partisan bias than do liberals.

As noted, our results are consistent with the idea that partisans engage in motivated counterfactual thinking. That is, the content and conclusions of their counterfactual thinking may reflect their desire to justify their political beliefs and to blame leaders they oppose. However, like most partisan effects in political psychology [16], ours could also be explained by non-motivated processes. For example, Republicans could be more likely than Democrats to think that ‘things would have been better’ without a particular Democratic policy because Republicans have been exposed to more information about that policy's shortcomings. Meanwhile, Democrats could be more likely than Republicans to blame Trump for ‘almost' causing war with North Korea because Democrats are more likely to have the prior that Trump makes bad decisions. Of course, the priors and indeed the information to which partisans have been exposed may themselves have motivated origins, which illustrates the challenge of distinguishing motivated from purely cognitive processes [17]. In practice, both types of processes may work together [37].

6. Conclusion

As Tetlock & Visser [12, p. 174] observed, ‘counterfactual thinking is often heavily theory-driven'. Our results add new nuance to the recognition that partisanship constitutes an important aspect of theory-driven counterfactual thinking. People's political views predict which alternatives to reality they will find most plausible, will be most likely to spontaneously imagine, and will view as sufficient evidence of a conclusion. Partisans do not only disagree about facts—they disagree about counterfactuals and their implications for moral judgement. In today's political climate, it is not just our attitudes that are polarized—it is also our imaginations.

Acknowledgement

We thank Stephanie Rodriguez and Andrei Viziteu for research assistance.

Endnotes

1. In this and all subsequent studies, we pre-registered item as a fixed effect owing to the small number of items (i.e. k ≤ 8), but the conclusions were always identical when we instead treated item as a random effect.

2. We coded whether participants supported or opposed each president based on prescreen data indicating whom they had voted for (see the electronic supplementary material). However, the conclusions were the same when we instead coded based on participants’ responses to a question about presidential support administered after the dependent measures.

Ethics

All studies were approved under protocol REC734 by London Business School.

Data accessibility

All data and materials are available at: https://osf.io/3m6p7/ [29].

Data are also provided in the electronic supplementary material [38].

Authors' contributions

K.E.: conceptualization, data curation, formal analysis, investigation, methodology, writing—original draft, writing—review and editing; D.A.E.: conceptualization, data curation, formal analysis, investigation, methodology, writing—original draft, writing—review and editing; N.J.R.: conceptualization, investigation, methodology, writing—original draft, writing—review and editing.

All authors gave final approval for publication and agreed to be held accountable for the work performed therein.

Conflict of interest declaration

We declare we have no competing interests.

Funding

This research was partially supported by a grant to D.A.E. from the Center for the Science of Moral Understanding at UNC Chapel Hill.

References

1. Ditto PH, Liu BS, Clark CJ, Wojcik SP, Chen EE, Grady RH, Celniker JB, Zinger JF. 2019. At least bias is bipartisan: a meta-analytic comparison of partisan bias in liberals and conservatives. Perspect. Psychol. Sci. 14, 273-291. (10.1177/1745691617746796)
2. Leeper TJ, Slothuus R. 2014. Political parties, motivated reasoning, and public opinion formation. Adv. Polit. Psychol. 35, 129-156. (10.1111/pops.12164)
3. Byrne RMJ. 2016. Counterfactual thought. Annu. Rev. Psychol. 67, 7.1-7.23. (10.1146/annurev-psych-122414-033249)
4. Roese NJ. 1997. Counterfactual thinking. Psychol. Bull. 121, 133-148. (10.1037/0033-2909.121.1.133)
5. Epstude K, Roese NJ. 2008. The functional theory of counterfactual thinking. Pers. Soc. Psychol. Rev. 12, 168-192. (10.1177/1088868308316091)
6. Epstude K, Roese NJ. 2011. When goal pursuit fails: the functions of counterfactual thought in intention formation. Soc. Psychol. 42, 19-27. (10.1027/1864-9335/a000039)
7. Roese NJ, Epstude K. 2017. The functional theory of counterfactual thinking: new evidence, new challenges, new insights. Adv. Exp. Soc. Psychol. 56, 1-79. (10.1016/bs.aesp.2017.02.001)
8. Catellani P, Covelli V. 2013. The strategic use of counterfactual communication in politics. J. Lang. Soc. Psychol. 32, 480-489. (10.1177/0261927X13495548)
9. Tetlock PE, Lebow RN. 2001. Poking counterfactual holes in covering laws: cognitive styles and historical reasoning. Amer. Polit. Sci. Rev. 95, 829-843. (10.1017/S0003055400400043)
10. Gamlin J, Smallman R, Epstude K, Roese NJ. 2020. Dispositional optimism weakly predicts upward, rather than downward, counterfactual thinking: a prospective correlational study using episodic recall. PLoS ONE 15, e0237644. (10.1371/journal.pone.0237644)
11. White K, Lehman DR. 2005. Looking on the bright side: downward counterfactual thinking in response to negative life events. Pers. Soc. Psychol. Bull. 31, 1413-1424. (10.1177/0146167205276064)
12. Tetlock PE, Visser PS. 2000. Thinking about Russia: plausible pasts and probable futures. Brit. J. Soc. Psychol. 39, 173-196. (10.1348/014466600164417)
13. Effron DA. 2018. It could have been true: how counterfactual thoughts reduce condemnation of falsehoods and increase political polarization. Pers. Soc. Psychol. Bull. 44, 729-745. (10.1177/0146167217746152)
14. Sanitioso R, Kunda Z, Fong GT. 1990. Motivated recruitment of autobiographical memories. J. Pers. Soc. Psychol. 59, 229-241. (10.1037/0022-3514.59.2.229)
15. Caddick ZA, Rottman BM. 2021. Motivated reasoning in an explore-exploit task. Cogn. Sci. 45, e13018. (10.1111/cogs.13018)
16. Tappin BM, Pennycook G, Rand DG. 2020. Thinking clearly about causal inferences of politically motivated reasoning: why paradigmatic study designs often undermine causal inference. Curr. Opin. Behav. Sci. 34, 81-87. (10.1016/j.cobeha.2020.01.003)
17. Tetlock PE, Levi A. 1982. Attribution bias: on the inconclusiveness of the cognition-motivation debate. J. Exp. Soc. Psychol. 18, 68-88. (10.1016/0022-1031(82)90082-8)
18. Johnson JT. 1986. The knowledge of what might have been: affective and attributional consequences of near outcomes. Pers. Soc. Psychol. Bull. 12, 51-62. (10.1177/0146167286121006)
19. Miller DT, McFarland C. 1986. Counterfactual thinking and victim compensation: a test of norm theory. Pers. Soc. Psychol. Bull. 12, 513-519. (10.1177/0146167286124014)
20. Dawson E, Gilovich T, Regan DT. 2002. Motivated reasoning and performance on the Wason selection task. Pers. Soc. Psychol. Bull. 28, 1379-1387. (10.1177/014616702236869)
21. Ditto PH, Lopez DF. 1992. Motivated skepticism: use of differential decision criteria for preferred and nonpreferred conclusions. J. Pers. Soc. Psychol. 63, 568-584. (10.1037/0022-3514.63.4.568)
22. Helgason B, Effron DA. 2022. From critical to hypocritical: counterfactual thinking increases partisan disagreement about media hypocrisy. J. Exp. Soc. Psychol. 101, 104308. (10.1016/j.jesp.2022.104308)
23. Milesi P, Catellani P. 2011. The day after an electoral defeat: counterfactuals and collective action. Brit. J. Soc. Psychol. 50, 690-706. (10.1111/j.2044-8309.2011.02068.x)
24. Spellman BA, Mandel DR. 1999. When possibility informs reality: counterfactual thinking as a cue to causality. Curr. Dir. Psychol. Sci. 8, 120-123. (10.1111/1467-8721.00028)
25. Effron DA, Miller DT, Monin B. 2012. Inventing racist roads not taken: the licensing effect of immoral counterfactual behaviors. J. Pers. Soc. Psychol. 103, 916-932. (10.1037/a0030008)
26. Effron DA, Monin B, Miller D. 2013. The unhealthy road not taken: licensing indulgence by exaggerating counterfactual sins. J. Exp. Soc. Psychol. 49, 573-578. (10.1016/j.jesp.2012.08.012)
27. Tetlock PE. 1998. Close-call counterfactuals and belief-system defenses: I was not almost wrong but I was almost right. J. Pers. Soc. Psychol. 75, 639-652. (10.1037/0022-3514.75.3.639)
28. Tetlock PE, Kristel OV, Elson SB, Green MC, Lerner JS. 2000. The psychology of the unthinkable: taboo trade-offs, forbidden base rates, and heretical counterfactuals. J. Pers. Soc. Psychol. 78, 853-970. (10.1037/0022-3514.78.5.853)
29. Epstude K, Effron DA, Roese NJ. 2022. Polarized imagination: partisanship influences the direction and consequences of counterfactual thinking. OSF. (https://osf.io/3m6p7/)
30. Leach CW, van Zomeren M, Zebel S, Vliek MLW, Pennekamp SF, Doosje B, Ouwerkerk JW, Spears R. 2008. Group-level self-definition and self-investment: a hierarchical (multicomponent) model of in-group identification. J. Pers. Soc. Psychol. 95, 144-165. (10.1037/0022-3514.95.1.144)
31. Roese NJ, Olson JM. 1993. The structure of counterfactual thought. Pers. Soc. Psychol. Bull. 19, 312-319. (10.1177/0146167293193008)
32. Byrne RMJ. 2017. Counterfactual thinking: from logic to morality. Curr. Dir. Psychol. Sci. 26, 314-322. (10.1177/0963721417695617)
33. Markman KD, Mizoguchi N, McMullen MN. 2008. ‘It would have been worse under Saddam:' implications of counterfactual thinking for beliefs regarding the ethical treatment of prisoners of war. J. Exp. Soc. Psychol. 44, 650-654. (10.1016/j.jesp.2007.03.005)
34. Miller DT, Visser PS, Staub BD. 2005. How surveillance begets perceptions of dishonesty: the case of the counterfactual sinner. J. Pers. Soc. Psychol. 89, 117-128. (10.1037/0022-3514.89.2.117)
35. Baron J, Jost JT. 2019. False equivalence: are liberals and conservatives in the United States equally biased? Perspect. Psychol. Sci. 14, 292-303. (10.1177/1745691618788876)
36. Brandt MJ, Crawford JT. 2020. Worldview conflict and prejudice. Adv. Exp. Soc. Psychol. 61, 1-66. (10.1016/bs.aesp.2019.09.002)
37. Kunda Z. 1990. The case for motivated reasoning. Psychol. Bull. 108, 480-498. (10.1037/0033-2909.108.3.480)
38. Epstude K, Effron DA, Roese NJ. 2022. Polarized imagination: partisanship influences the direction and consequences of counterfactual thinking. FigShare. (10.6084/m9.figshare.c.6189589)



Articles from Philosophical Transactions of the Royal Society B: Biological Sciences are provided here courtesy of The Royal Society
