2022 Jun 3. Online ahead of print. doi: 10.1111/gove.12701

When blame avoidance backfires: Responses to performance framing and outgroup scapegoating during the COVID‐19 pandemic

Gregory Porumbescu 1,2, Donald Moynihan 3, Jason Anastasopoulos 4, Asmus Leth Olsen 5
PMCID: PMC9348279  PMID: 35942431

Abstract

Public officials use blame avoidance strategies when communicating performance information. While such strategies typically involve shifting blame to political opponents or other governments, we examine how they might direct blame to ethnic groups. We focus on the COVID‐19 pandemic, where the Trump administration sought to shift blame by scapegoating (using the term “Chinese virus”) and mitigate blame by positively framing performance information on COVID‐19 testing. Using a novel experimental design that leverages machine learning techniques, we find scapegoating outgroups backfired, leading to greater blame of political leadership for the poor administrative response, especially among conservatives. Backlash was strongest for negatively framed performance data, demonstrating that performance framing shapes blame avoidance outcomes. We discuss how divisive blame avoidance strategies may alienate even supporters.

1. INTRODUCTION

A close‐up photograph of President Trump at the podium made clear that he had used a Sharpie pen to replace the term “Coronavirus” with “Chinese virus” (Karni, 2020). Across subsequent press conferences, rallies and tweets, Trump and White House officials repeatedly invoked the term “Chinese virus,” and less frequently, variants such as “Wuhan virus,” or “Kung Flu.”

The Trump Administration's use of the term “Chinese virus” illustrates the strong incentives public officials have to scapegoat outgroups when confronting critiques of their performance (Weaver, 1986). Outgroups can be understood as social groups (e.g., professions, political parties, nationalities, or ethnicities) that an individual does not identify with (Tajfel, 1970). Examples of public officials scapegoating outgroups are found around the world—opposition parties in Brazil (Samuels & Zucco, 2014), bureaucrats in South Korea (Hong & Kim, 2019), and private service providers in England (James et al., 2016). At the subnational level, a number of US governors blamed undocumented immigrants for the surge in COVID in their states (Rose, 2021). The risk of scapegoating ethnic groups is not new. European Jews faced violent attacks when they were falsely blamed for the Black Death. The Great Flu was variously called the French flu, the Russian flu, the Chinese flu, and the Spanish flu by different factions, partly reflecting the tensions of World War I (Davis, 2018). To avoid stigmatizing ethnic groups, the World Health Organization adopted a policy of avoiding names that invoke specific places. The Trump Administration eschewed these guidelines, claiming its goal was to highlight the Chinese Government's role in contributing to the pandemic. But a reported increase in racial hostility and hate crimes against Asian‐Americans raised questions of who, precisely, was being scapegoated (McCormick, 2021). Our focus in this article is not to determine toward whom President Trump intended to direct blame, but rather the empirical question of how the public responded when the term “Chinese virus” was employed.

This article extends research on blame avoidance in two ways. First, it examines how the use of an outgroup trigger to convey performance information shapes who the public blames. The COVID‐19 pandemic tested government capacity at a time when populism was on the rise (Pevehouse, 2020), increasing the potential for outgroup blaming, often through the use of xenophobic dog whistles (Bartos et al., 2021). President Trump is one frequently discussed example, but illustrations of this pattern can be found around the world, from Italy (Tondo, 2020) to Brazil (Fleck, 2021), and India (Haokip, 2021). In a relatively diverse society, attacks on foreign governments could easily be redirected toward ethnic groups viewed as representative of that government. Thus, we ask whether the use of outgroup triggers results in the scapegoating of certain social groups.

Second, we assess whether the impact of outgroup scapegoating on who the public blames varies according to performance information framing and perceptions of the performance context. Government officials spin performance information to manage public perception and recast poor performance in a more positive light. While this helps them avoid a negativity bias in performance evaluation and mitigate blame (Olsen, 2015), how performance information framing interacts with outgroup scapegoating is unclear. This gap in knowledge is important because blame avoidance strategies are seldom used in isolation. By positively framing testing data and improving perceptions of the government's response to the pandemic, did Trump inadvertently weaken the effect of scapegoating? Conversely, does scapegoating result in greater outgroup blame when performance information is negatively framed and evaluations of performance are more critical?

To address these questions, we use a novel research design that leverages a 2 × 2 between‐subjects survey experiment of US residents (n = 1439), where we vary the presence of an outgroup trigger (“Chinese virus” vs. the more neutral “COVID‐19”) and the framing of performance information (positive vs. negative). We employ machine learning techniques to analyze how open‐ended responses vary in relation to the performance information participants were randomly assigned to. A key advantage of this approach is that it limits bias (e.g., social desirability or demand effects) that would otherwise present itself in the selection of response options for close‐ended items (Roberts et al., 2014). To contextualize our open‐ended findings, we also report analyses of close‐ended responses, primarily in the Supplementary Material.

Using a control group to establish a blame attribution baseline, we show that absent blame avoidance strategies, participants emphasize the president as a target of blame. Consistent with research demonstrating motivated reasoning in blame attribution (Malhotra & Kuo, 2008), conservatives are less inclined to blame a conservative president for his handling of the COVID‐19 pandemic when compared to those who do not identify as conservative.

We find scapegoating evokes blame attribution patterns distinct from those we would expect from partisan identity. Specifically, the outgroup trigger does not deflect blame, but rather attracts it—participants exposed to the term “Chinese virus” were significantly more inclined to blame President Trump, pointing to a “backfire effect.” Surprisingly, this effect is driven by conservatives. Close‐ended responses suggest the term led to a very small increase in blame directed toward Chinese residents, but not the Chinese government, although increased discussion of either group was not present in the open‐ended responses. As a tool of blame avoidance by political elites, outgroup triggers appeared to fail in our case. Rather than scapegoating outgroups, the Trump administration's use of the “Chinese virus” cue may have been interpreted as a signal of incompetence.

Negatively framed performance data about testing increased the salience of the topic of performance, but did not redirect blame away from the federal government. Performance information framing was more effective at mitigating blame among conservatives—participants exposed to positively framed performance information were significantly less likely to blame a range of political and administrative actors. Further, we find interesting nuance when examining how the effect of the outgroup trigger varies according to performance information framing. Our findings suggest negatively framing performance information blunts the effect of the “Chinese virus” cue on blame attributed to actors frequently criticized by President Trump and instead focuses blame on both parties and the broader political environment.

These findings advance public management research on performance information by demonstrating how members of the public use performance information to attribute blame (van den Bekerom et al., 2021). Further, they contribute to the public management literature on blame avoidance by exploring blame attribution in a context of public health outcomes that are highly salient, rapidly changing, and related to intense concerns about personal safety across society. While past research has explored blame attribution in the context of crises (Bisgaard, 2015) and government contracting (Leland et al., 2021), blame avoidance research more typically considers how public opinion might be influenced by blame shifting to other political opponents or governments (Hong et al., 2020), rather than how the public responds when other members of the public may be included among the targets of blame. Further research has explored how the use of racial epithets impacts attitudes toward ethnic groups (Dhanani & Franz, 2021). We bridge these research streams by examining how the relationship between outgroup triggers and blame attribution is impacted by public service provision context and perceptions of government performance. As elites throughout the world increasingly use populist appeals to garner public support, understanding how government performance shapes public responses to such divisive rhetoric is critical.

In the following sections, we establish expectations about how the key variables we examine shape blame attribution before explaining the data and analysis.

2. PERFORMANCE INFORMATION AND BLAME ATTRIBUTION

Exposure to performance information not only triggers performance evaluations, but also attributions of blame, especially for poor performance (Olsen, 2015; van den Bekerom et al., 2021). How the performance of public officials is evaluated, and how their blame avoidance strategies are received, is influenced by their perceived ideological alignment with their audience. Partisans engage in motivated reasoning to find and interpret information that limits blame to co‐partisans and shifts blame to actors, typically other politicians or governments, they disagree with (Lodge & Taber, 2000). In other words, rather than using performance information to arrive at more accurate conclusions, individuals are motivated to reason in a direction that confirms existing beliefs (James et al., 2020).

Cognitive biases are mechanisms that can influence how performance information is reconciled with existing attitudes toward political or ethnic outgroups, and can be activated through the use of blame avoidance strategies. For instance, negativity bias can be triggered by framing numbers in negative, rather than positive, terms (Olsen, 2015). Positively framing performance information can help to mitigate blame. Biases toward ethnic outgroups (i.e., an ethnic group with which one does not identify) can be activated by cues that trigger animus toward outgroups (Whitehead et al., 1982).

We examine each of these processes in turn.

2.1. Partisan motivated reasoning and blame attribution

Motivated reasoning suggests citizens evaluate members of the political party they identify with more positively and are more critical of the performance of parties they oppose (Jilke & Bækgaard, 2020). In other words, partisan ideology establishes a basis for blame attribution. While crises can sometimes engender a tendency to “rally around the flag,” political ideology offers a heuristic by which individuals make sense of crises where the situation is dynamic and facts are contested (Bisgaard, 2015). For example, Democratic voters blamed a Republican President after the poor response to Hurricane Katrina, while Republicans blamed a Democratic governor (Malhotra & Kuo, 2008).

In the COVID‐19 case, we expect that, all else equal, conservatives will be less inclined to attribute blame to a government led by a conservative President when discussing poor performance in responding to the COVID‐19 pandemic.

Hypothesis 1. Conservatives will be less inclined to allocate blame for poor performance in responding to the COVID‐19 pandemic to the Trump administration and the federal government.

2.2. Scapegoating as a blame avoidance strategy

Public management research is most attentive to motivated reasoning that is driven by political ideology. However, other sources of motivated reasoning, such as outgroup attitudes, may also influence responses to blame avoidance strategies.

Pre‐existing negative attitudes lead people to craft narratives that attribute blame for negative events to groups they do not identify with (Joslyn & Haider‐Markel, 2017, p. 361). Outgroups can be constructed in different ways. One obvious source of difference is ethnic or racial identity. For example, Ben‐Porath and Shaker (2010) show that including photographs of black Hurricane Katrina victims from the City of New Orleans resulted in whites blaming (predominantly black) residents of the City more for the consequences of Hurricane Katrina.

In the context of COVID‐19, use of the term “Chinese virus” provided a clear example of an attempt to shift blame, although to whom is unclear. As criticisms of the US response to COVID‐19 mounted in early March, some Republicans began to use the terms Chinese virus, Wuhan virus, or Kung Flu. Trump included “Chinese virus” in a tweet in mid‐March of 2020, before making it a staple of his re‐election campaign and public briefings about COVID‐19. While President Trump claimed that this blame avoidance strategy was intended to direct attention to the culpability of China, the term raised concerns about stigmatizing Asian‐Americans (Rogers et al., 2020). The concerns were well founded—the week following President Trump's use of “Chinese virus,” there was a surge of anti‐Asian hashtags on social media (Hswen et al., 2021). Further evidence suggests that stressing COVID‐19 originated in China increased anti‐Asian and xenophobic attitudes (Dhanani & Franz, 2021). Our goal is not to understand who Trump and supporters were attempting to scapegoat when using the term “Chinese virus,” but rather whether the public interpreted this outgroup trigger in a way that directed blame toward Chinese residents of the US. Thus, we propose the following hypothesis:

Hypothesis 2. Participants exposed to the term “Chinese virus” will be more likely to blame Chinese residents when compared to participants exposed to the term “COVID‐19.”

Shortly after President Trump began using the term “Chinese virus” to deflect criticism, there was an 800% increase in conservative media outlets' use of the term (Darling‐Hammond et al., 2020). Conservative political elites also began coupling the term “Chinese virus” with Chinese stereotypes of bat and snake consumption (Shepherd, 2020). The rapid increase in such rhetoric corresponded with a growth in the belief Asian‐Americans are less American, especially among conservative Americans (Darling‐Hammond et al., 2020). This ‘othering’ of Asian‐Americans exacerbates their marginalization and increases the tendency to scapegoat this group (Yi Dionne & Turkmen, 2020).

Such responses can best be explained by intergroup emotions theory, which argues individuals angrily respond to threats to the group they identify with (Mackie et al., 2000). Intergroup emotions theory also has explanatory power in political contexts, where partisans are shown to respond angrily and, perhaps even violently, to perceived threats to the political party they are ideologically aligned with (Kalmoe & Mason, 2019). It is plausible that concerns about threats to Trump in an election year may have heightened conservatives' motivated reasoning and baseline willingness to respond to “Chinese virus” as an outgroup trigger.

Hypothesis 3. Conservatives exposed to the term “Chinese virus” will be more likely to blame Chinese residents when compared to liberals.

2.3. Performance information framing as a blame avoidance strategy

Performance measurement systems are premised on the hope they can make blame attribution easier, by rendering governmental outcomes more legible to the public. Nevertheless, like outgroup triggers, performance information is commonly manipulated by elected officials who seek to obfuscate blame attribution (Bevan & Hood, 2006), and bias interpretation (James et al., 2020).

Studies show citizens consistently use performance information to form judgments and make decisions, especially in visible and salient service areas, such as education and health (James and Van Ryzin, 2017). This work also suggests that a negativity bias makes the use of performance information when attributing blame and credit asymmetric: negative performance scores gain attention and activate attribution in a way that positive performance does not (van den Bekerom et al., 2021). For example, members of the public are more inclined to engage in attributional reasoning when they are exposed to negatively framed as opposed to positively framed, yet equivalent, performance information (Olsen, 2015).

To examine a negativity bias in performance attribution, we use equivalence framing. Here, respondents are shown the same public health performance information, but in some instances it is negatively framed (the percent of the public seeking a test who were unable to get one), and in others positively framed (the percent of the public seeking a test who received one). Equivalence framing closely mirrors blame avoidance strategies aimed at “spinning” performance information: when politicians cannot change the actual level of performance, they try to change its meaning (Bevan & Hood, 2006).
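As a minimal illustration of the logic of equivalence framing, the sketch below generates a positively and a negatively framed statement of the same testing statistic. The wording is ours, not the authors' vignette text:

```python
def frame_equivalently(pct_tested):
    """Produce two equivalence frames of the same fact: identical
    statistic, opposite valence (illustrative wording only)."""
    pct_denied = 100 - pct_tested
    positive = f"{pct_tested}% of people seeking a COVID-19 test received one."
    negative = f"{pct_denied}% of people seeking a COVID-19 test were unable to get one."
    return positive, negative

pos, neg = frame_equivalently(65)
# The two sentences describe the same outcome; only the frame differs.
```

A negativity bias implies the second sentence triggers more attributional reasoning than the first, even though the underlying number is the same.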

Taking into account literature on performance information framing, our previous hypotheses, and the Trump Administration's efforts to manage blame arising from the US pandemic response, we expect standard patterns of blame attribution, where bad news leads to blame of the executive branch, and motivated reasoning, where bad news leads co‐partisans to look for other actors to blame.

Hypothesis 4. Exposure to negatively framed performance information will trigger greater blame attributed to the Trump administration and the federal government.

Hypothesis 5. Among conservatives, exposure to negatively framed performance information will lead to blaming actors outside of the Trump administration and the federal government.

Finally, we propose that would‐be blame‐shifters face an inherent tension between the use of outgroup triggers and performance information framing: scapegoating others to deflect blame is premised on an assumption of bad outcomes, an assumption then contradicted by attempts to positively frame performance information. In other words, one blame avoidance scheme is based on bad news, the other on good news. This expectation of a tension between the two blame avoidance strategies is informed by two strands of research. First, as noted above, public management research points to a negativity bias in attributional reasoning for government performance (James et al., 2020; Olsen, 2015; van den Bekerom et al., 2021). Thus, the public will be more sensitive to efforts to redirect blame to outgroups when performance is framed negatively. Similarly, political psychology research shows that the success of efforts to scapegoat outgroups for poor performance is conditioned by portrayals of the outgroup as a status threat (Mutz, 2018). More negative perceptions of performance, which result from negatively framing performance information, heighten the perceived status threat of an outgroup, and are therefore more effective at directing blame toward the outgroup when compared to positively framed performance information (Bukowski et al., 2017). Thus, we predict that efforts to mitigate blame by positively framing performance information will render efforts to deflect blame by scapegoating outgroups less effective.

Hypothesis 6. Positively framed performance information will weaken the effect of the term “Chinese virus” on blame attribution to Chinese residents.

3. SETTING AND RESEARCH DESIGN

To test our hypotheses, we use a survey experiment that ran between June 25, 2020 and June 27, 2020. At this point in the pandemic, the US reported more than 120,000 COVID‐related deaths. Respondents were randomly assigned to one of four different treatment groups, representing a 2 × 2 between‐subjects design, plus one additional baseline group (Appendix A in Supplementary Material provides the exact wording of the prompts). The baseline group provides estimates for patterns of motivated reasoning absent experimental exposure to our performance framing or outgroup trigger. However, given the timing of the experiment, the crisis context in which it was run, and the effort to ensure the treatments mirror real‐life presentational strategies, it is important to acknowledge that participants assigned to a control will have been exposed to versions of such treatments from media coverage and political messaging around the pandemic. Our experimental results should therefore be read as relatively conservative tests of the blame avoidance strategies, since the non‐treated subjects will, to some degree, have been exposed to those same strategies. While a limitation of our research, this is an inevitable tradeoff of trying to model highly visible and salient blame avoidance efforts in an applied, real‐life setting.

In the baseline group, respondents are told the Trump administration is dealing with the challenge of testing residents for a new and potentially dangerous virus—neither performance information nor outgroup cue is provided. Treatments vary according to: (a) whether performance information is framed positively or negatively and; (b) whether the term “Chinese virus” or the more neutral “COVID‐19” is mentioned. Performance information is presented as COVID‐19 testing capacity, which is framed in terms of the percent of people seeking tests who can be tested (e.g., 65%) versus percent of people seeking tests who cannot be tested (e.g., 35%). Since it is possible that respondents might be more influenced by round‐number integers (James et al., 2020), we randomly varied the integers that subjects were exposed to (between a range of 51%–99% for those who can be tested and 1%–49% for those who cannot be tested).
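The assignment scheme described above can be sketched as follows. This is our own illustration, not the authors' survey code; the condition labels and function name are ours:

```python
import random

# Labels for the 2 x 2 design (framing x virus label) plus the baseline.
CONDITIONS = ["baseline",
              "positive/COVID-19", "positive/Chinese virus",
              "negative/COVID-19", "negative/Chinese virus"]

def assign_condition(rng=random):
    """Randomly assign one respondent to a condition, randomizing the
    integer shown to mirror the 51-99% / 1-49% ranges in the study."""
    group = rng.choice(CONDITIONS)
    if group == "baseline":
        # No performance figure and no virus label are shown.
        return {"group": group}
    if group.startswith("positive"):
        pct = rng.randint(51, 99)  # percent seeking a test who received one
    else:
        pct = rng.randint(1, 49)   # percent seeking a test who could not get one
    return {"group": group, "percent_shown": pct}
```

Randomizing the displayed integer, rather than fixing one value per condition, guards against effects driven by a particular round number.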

A relevant ethical concern is that the experiment would expose respondents to misinformation or negative stereotypes. The article was subject to IRB review at the lead author's home institution. A number of steps were taken to minimize harm. First, the use of the term “Chinese virus” was widely present at the time of the experiment, and thus did not subject respondents to any risks not present in everyday life. Second, subjects were debriefed about the purpose of the article on its completion, informed that the testing numbers some were exposed to were made‐up, and that they could seek to be excluded from the analysis.

Given the highly politicized setting in which we conducted our survey experiment, an important concern relates to whether the composition of participants assigned to our experimental groups was biased in ways that could plausibly influence responses to our treatments. To address this concern, we conducted chi‐squared tests to examine whether randomization was successful and all groups to which participants were assigned were balanced on key covariates. Results reveal no significant differences across treatment groups on covariates that could bias responses, such as participant race, education, age, party affiliation, political ideology, political trust, and gender. However, we do find an imbalance across treatment groups in terms of income (p = 0.047). Controlling for income generates no substantive variation when compared to models that did not control for income (see Supplementary Material). These results offer greater confidence that the treatment effects we uncover are indeed the result of our treatments and not some other confounding factor(s).
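A balance check of this kind can be sketched with a Pearson chi‐squared statistic on a treatment‐by‐covariate contingency table. The counts below are invented for illustration; in practice one would use a statistics library (e.g., scipy.stats.chi2_contingency) to obtain exact p‐values:

```python
def chi2_stat(table):
    """Pearson chi-squared statistic for an r x c contingency table
    (rows = treatment groups, columns = covariate categories)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Toy check: gender counts across two hypothetical treatment groups.
table = [[180, 170],   # group A: e.g., female, male
         [175, 176]]   # group B
stat = chi2_stat(table)
# df = (2-1)*(2-1) = 1; critical value at p = .05 is about 3.84.
balanced = stat < 3.84
```

A statistic below the critical value, as here, is consistent with successful randomization on that covariate.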

The vignettes, items, and flow of the experiment were all preregistered (https://osf.io/k28sz/?view_only=8240e043be3a4849829cece7afcf9db4). While the preregistration also includes hypotheses that cover closed‐ended and open‐ended response items, the hypotheses in the paper are somewhat different for two reasons. First, we consolidated and re‐worded a larger number of hypotheses on motivated reasoning, framing effects and blame into the six included in the analysis. Second, we focus on hypotheses that are tied to the open‐ended items in this analysis (results for the close‐ended items are provided in the supplemental material, and where relevant referred to in the result section).

4. SAMPLE

We recruited 1945 American participants using CloudResearch, a research platform that integrates with Amazon's Mechanical Turk (MTurk). 1 The CloudResearch platform was created for the purpose of more targeted recruitment of MTurk participants by, for example, allowing researchers to screen out multiple responses from the same worker and to recruit a more diverse pool of participants (Litman et al., 2017). Responses from online convenience samples are consistently found to be highly comparable to those obtained using more representative samples (Coppock et al., 2018).

Sampling criteria used in recruitment included approval ratings (greater than 94%) and number of HITs completed (greater than 1000). 2 Data were cleaned following procedures outlined by Dennis et al. (2018). Incomplete responses (n = 66), responses completed in an unusually short amount of time (less than 180 s) (n = 394), and responses from suspicious (n = 4) or duplicate (n = 42) IP addresses were dropped, leaving 1439 usable responses. Roughly 26% of participants recruited were not included in the final sample. This aligns with benchmarks from Ahler et al. (2019).
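The cleaning steps above can be sketched as a simple filter. Field names here are illustrative, not the study's actual variable names:

```python
def clean_responses(rows):
    """Drop incomplete responses, responses under 180 seconds, and
    responses from flagged or duplicate IP addresses (sketch only)."""
    seen_ips = set()
    kept = []
    for r in rows:
        if not r.get("complete"):
            continue                      # incomplete response
        if r["duration_s"] < 180:
            continue                      # unusually fast completion
        if r.get("suspicious_ip"):
            continue                      # flagged IP address
        if r["ip"] in seen_ips:
            continue                      # duplicate IP address
        seen_ips.add(r["ip"])
        kept.append(r)
    return kept
```

Applying each exclusion in a fixed order, as here, makes the reported per‐criterion counts reproducible.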

5. MEASURED VARIABLES

The survey experiment included close‐ended and open‐ended items. Information on close‐ended items can be found in the preregistration and the Supplementary Material. In this article, we focus on open‐ended responses given their important advantages when compared to close‐ended items. Open‐ended responses “provide a direct view into the respondent's own thinking” (Roberts et al., 2014, p. 1065) and avoid forcing participants to interpret events through a lens of pre‐constructed categories developed by the researcher. In this way, using open‐ended response options allows us to pick up on nuance that would not otherwise be possible when using close‐ended measures. We therefore include an open‐ended item that asks participants to share their opinions in 20 characters or more on the following question: ‘In the United States, who has done a bad job in responding to the pandemic? Why?’ The wording of this item is based on previous work by Hamleers et al. (2017). While we ask participants to focus on the US, in asking participants to justify their response, we give them space to attribute the performance they perceive to a wider range of American and non‐American actors.

6. DATA ANALYSIS STRATEGY

Our data analysis strategy consists of two parts. First, we use structural topic modeling (STM), an unsupervised method of text analysis, to infer sources of blame from the open‐ended response items (Roberts et al., 2014). Following this, we use ordinary least squares to regress the top five most prevalent sources of blame identified in the open‐ended responses on an explanatory variable (partisan ideology, outgroup trigger, or performance information framing). This second step allows us to observe how topic proportions vary in relation to political ideology or treatment. In running our regressions, estimation uncertainty of topic proportions is incorporated using the “Global” uncertainty option in the stm package for R (Roberts et al., 2019). More information on STM and the topic model estimation approach used in this article can be found in Sections 4 and 5 of the Supplementary Material. Section 5 of the Supplementary Material provides an example response for each topic, identified using the findThoughts function in the stm package for R.
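The second step, regressing a topic's proportion on a treatment indicator, can be illustrated in isolation. The authors' analysis uses the stm package in R; the pure‐Python sketch below uses invented topic proportions and shows only why, with a single binary regressor, the OLS slope equals the difference in mean topic prevalence between conditions:

```python
def ols_binary(y, x):
    """OLS of topic proportion y on a binary treatment indicator x.
    With one dummy regressor, the intercept is the control-group mean
    and the slope is the treated-vs-control difference in means."""
    y1 = [yi for yi, xi in zip(y, x) if xi == 1]
    y0 = [yi for yi, xi in zip(y, x) if xi == 0]
    b0 = sum(y0) / len(y0)        # intercept: mean proportion under control
    b1 = sum(y1) / len(y1) - b0   # slope: change in prevalence under treatment
    return b0, b1

# Toy data: proportion of each response devoted to a "blame the president"
# topic (numbers invented), by exposure to the outgroup trigger (x = 1).
y = [0.30, 0.25, 0.35, 0.50, 0.55, 0.45]
x = [0, 0, 0, 1, 1, 1]
b0, b1 = ols_binary(y, x)  # b1 > 0: topic more prevalent under treatment
```

This is why the figure notes describe coefficients as the percentage change in topic prevalence across conditions.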

7. RESULTS

7.1. Hypothesis 1: Effects of motivated reasoning

Our first hypothesis predicted that conservatives would be less critical of a conservative President and his government in assigning blame. We assess the evidence on motivated reasoning using participants in our control group, who were not exposed to the performance framing or outgroup scapegoating treatments. Figure 1 presents the expected topic proportions for our five‐topic topic model. The five topics were generated from participant responses to the question of “who has done a bad job in responding to the pandemic.” The most prevalent topic is highly critical of the president, whereas the second topic focuses on state governors. The third most prevalent topic again deals with the president but consists of more neutral language (see Section 5 of the Supplementary Material). The final two topics include the federal government and states' responses.

FIGURE 1. Five most frequent topic proportions among all participants in the control group for the question “Who do you think did a bad job responding to the pandemic?” Topic prevalence is estimated using the conservative variable.

Figure 2 shows the relationship between identifying as a conservative and the chances of a participant blaming each of the five entities identified in Figure 1. Patterns of blame attribution are illustrative of partisan motivated reasoning—conservatives are less likely to blame the President and the federal government, and more likely to blame states and governors, which have frequently been targeted by prominent Republicans, such as President Trump. Close‐ended responses also reflect this finding, and further show conservatives more likely to downplay the threat of COVID (see Supplementary Material).

FIGURE 2. Graphical display of topical prevalence contrast between conservatives and non‐conservatives in the control group for the question “Who do you think did a bad job responding to the pandemic?” Horizontal lines represent 95% confidence intervals. The x axis illustrates the difference in expected topic proportions based upon political ideology. All coefficients can be interpreted as the percentage change in topic prevalence across conditions.

In summary, we find participants assigned to the control group primarily attributed blame to a diverse set of government institutions (federal government, States) and groups (Governors), with the only individual blamed being the President. No social or ethnic groups were blamed. Further, motivated reasoning shapes the assignment of blame. Conservatives differ significantly from the rest of the population by not blaming President Trump and his government, and instead blaming actors frequently criticized by the president. This control group offers us a baseline pattern of blame attribution, which in the following sections we compare to responses where subjects were exposed to performance information framing and outgroup scapegoating.

7.2. Hypothesis 2: Effects of the outgroup trigger

Our second hypothesis proposed that use of the term “Chinese virus” would direct blame toward Chinese residents. Figure 3 presents the expected topic proportions for the five‐topic topic model when using the “Chinese virus” cue. The top topic in Figure 3 indicates the scapegoating strategy directs blame toward the person who pushed the term: President Trump. The remaining four most frequent topics, ordered by prevalence, include: the federal government's response, partisan response, states' response, and the public. As with the control group, we see a diverse set of government institutions (federal government, states) and groups (the public) blamed, with the president being the only individual blamed. One interpretation is that use of the term “Chinese virus,” and consequent resentment, is more closely linked to President Trump than his administration. Perhaps most tellingly, the term ‘Chinese’ was not included in any of the five most prevalent topics. In responses to close‐ended questions where subjects could select Chinese‐Americans, the cue did have a small impact on blame toward this group, but when this option is not offered in the open‐ended text, subjects did not direct much attention to Chinese‐Americans. Thus, we find only very limited support for the hypothesis.

FIGURE 3. Five most frequent topic proportions among participants assigned to the Chinese virus and COVID‐19 treatment for the question "Who do you think did a bad job responding to the pandemic?" Topic prevalence is estimated using the scapegoat treatment variable.
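The reported analyses use structural topic models (the stm package in R; Roberts et al., 2019), in which each response receives a vector of topic proportions and prevalence is modeled as a function of treatment covariates. As a rough sketch of the quantity these figures summarize (not the authors' code; the function name and toy numbers below are our own), expected topic proportions per condition can be computed from a document‐topic matrix:

```python
import numpy as np

def prevalence_by_treatment(theta, treat):
    """Expected topic proportions under each condition.

    theta : (n_docs, n_topics) document-topic proportions, as a
            topic model such as STM would produce.
    treat : (n_docs,) 0/1 indicator for the scapegoat cue.

    Returns (control_means, treated_means); because each row of
    theta sums to 1, each mean vector sums to 1 as well.
    """
    theta = np.asarray(theta, dtype=float)
    treat = np.asarray(treat)
    return theta[treat == 0].mean(axis=0), theta[treat == 1].mean(axis=0)

# Toy example: 4 documents, 3 topics (illustrative numbers only).
theta = np.array([
    [0.6, 0.3, 0.1],   # control
    [0.5, 0.3, 0.2],   # control
    [0.2, 0.7, 0.1],   # treated
    [0.3, 0.6, 0.1],   # treated
])
treat = np.array([0, 0, 1, 1])
control, treated = prevalence_by_treatment(theta, treat)
```

In the actual STM, prevalence is estimated jointly with the topics via covariate‐dependent priors rather than by post hoc averaging; this sketch only illustrates the summary being plotted.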

As illustrated in Figure 4, participants exposed to the term "Chinese virus" were significantly more likely to blame President Trump and less likely to blame state governments and the public. The overarching pattern of attribution is suggestive of a backfire effect: rather than deflecting blame, the outgroup trigger directs attention back toward President Trump.

FIGURE 4. Graphical display of topical prevalence contrast between participants exposed to the term Chinese virus and COVID‐19 for the question "Who do you think did a bad job responding to the pandemic?" Horizontal lines represent 95% confidence intervals. The x axis illustrates the difference in expected topic proportions based upon assignment to the term Chinese virus or COVID‐19. All coefficients can be interpreted as the percentage change in topic prevalence across conditions.
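The contrast plotted in Figure 4 is, in essence, the difference in a topic's expected proportion across the two cue conditions with a 95% interval. A simplified stand‐in for the model‐based contrast the stm package reports (illustrative only; the function name, bootstrap approach, and simulated numbers are ours) could look like:

```python
import numpy as np

def prevalence_contrast(theta, treat, topic, n_boot=2000, seed=0):
    """Treated-minus-control difference in one topic's mean
    proportion, with a percentile-bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    t = np.asarray(theta, dtype=float)[:, topic]
    g = np.asarray(treat)
    diff = t[g == 1].mean() - t[g == 0].mean()
    idx1, idx0 = np.where(g == 1)[0], np.where(g == 0)[0]
    boots = []
    for _ in range(n_boot):
        b1 = t[rng.choice(idx1, size=idx1.size, replace=True)].mean()
        b0 = t[rng.choice(idx0, size=idx0.size, replace=True)].mean()
        boots.append(b1 - b0)
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return diff, (lo, hi)

# Simulated document-topic proportions: treated documents lean more
# heavily on topic 0 (all numbers are made up for illustration).
rng = np.random.default_rng(1)
n = 200
treat = np.repeat([0, 1], n)
theta = rng.dirichlet([2, 2, 2], size=2 * n)
theta[treat == 1, 0] += 0.2
theta /= theta.sum(axis=1, keepdims=True)  # renormalize rows to proportions
diff, (lo, hi) = prevalence_contrast(theta, treat, topic=0)
```

In stm itself, contrasts of this kind come from the fitted prevalence regression (with uncertainty propagated from the model) rather than from a raw bootstrap of group means.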

7.3. Hypothesis 3: Effects of the outgroup trigger among conservatives

Our third hypothesis proposed that conservatives would be more responsive to the "Chinese virus" treatment. Figure 5 presents a five‐topic structural topic model identifying the entities most commonly blamed by conservatives exposed to the outgroup trigger. As in the broader subject pool, the "Chinese virus" term led conservatives to most frequently blame President Trump's handling of the pandemic specifically, rather than conservative actors more generally. Beyond Trump, conservatives blamed a general cast of liberal actors rather than specific actors on the left. Once again, the terms "China" and "Chinese" do not appear in any topic. This is consistent with the close‐ended findings, which show that conservatives in this treatment group are no more likely to assign blame to Chinese residents. Thus, the findings do not support the hypothesis.

FIGURE 5. Five most frequent topic proportions among conservative participants assigned to the Chinese virus and COVID‐19 treatment for the question "Who do you think did a bad job responding to the pandemic?" Topic prevalence is estimated using the scapegoat treatment variable.

Figure 6 further illustrates the effect of the "Chinese virus" trigger on blame attribution among conservatives: relative to those exposed to the COVID‐19 cue, conservatives were more inclined to blame President Trump for the poor response to the pandemic and significantly less inclined to blame the media and Democratic states. As with the full sample, participants exposed to the outgroup trigger were more inclined than those exposed to the COVID‐19 cue to blame Democratic governors. This pattern suggests President Trump's use of the term "Chinese virus" as a blame avoidance strategy failed not just among the general public, but also among fellow conservatives. It also suggests that scapegoating to shift blame for poor performance may actually strengthen the association, in the minds of the public, between the blamer and the negative event they are trying to distance themselves from.

FIGURE 6. Graphical display of topical prevalence contrast between conservatives exposed to the term Chinese virus and COVID‐19 for the question "Who do you think did a bad job responding to the pandemic?" Horizontal lines represent 95% confidence intervals. The x axis illustrates the difference in expected topic proportions based upon assignment to the term Chinese virus or COVID‐19. All coefficients can be interpreted as the percentage change in topic prevalence across conditions.

7.4. Hypothesis 4: Effects of performance information framing

Our fourth hypothesis proposed that negatively framed performance data (testing availability presented in negative rather than positive terms) would increase blame toward the Trump administration and the federal government. Figure 7 presents the expected topic proportions for our five‐topic model identifying the most commonly blamed entities. The three most frequently blamed entities center on the President and the federal government. To a lesser extent, participants also blamed state governments and members of the public who shrugged off recommendations aimed at stemming the transmission of COVID‐19. For the full sample, then, we continue to see a heavy emphasis on President Trump and his administration's response to the pandemic, along with an emphasis on the President's testing response not seen in prior topic models. Thus, while negative performance framing did not change who was blamed, it increased the salience of the specific topic of performance.

FIGURE 7. Five most frequent topic proportions among all participants for the question "Who do you think did a bad job responding to the pandemic?" Topic prevalence is estimated using the performance information framing variable.

Figure 8 shows the impact of exposure to negatively versus positively framed performance information on the frequency with which a particular entity is discussed. For the full sample, the effect of performance information framing on blame attribution is subtle and often falls short of conventional thresholds of statistical significance. This generally aligns with responses to close‐ended items (see Appendix in Supplementary Material). Those exposed to positively framed performance information were less inclined to blame President Trump's testing response and the federal government, though this effect is small. However, those exposed to positively framed performance information were more inclined to blame the President more generally. Put differently, while positively framed performance information mitigated blame for the Trump Administration's struggle to build COVID‐19 testing infrastructure, it did not absolve the administration of blame outright. One takeaway is that performance information framing, as a blame avoidance strategy, has a much more nuanced impact on patterns of blame attribution than the outgroup trigger. Here, it affected blame about the specific performance issue but, perhaps because that function was viewed as a federal responsibility, it did not shift blame away from President Trump or the federal government.

FIGURE 8. Graphical display of topical prevalence contrast between exposure to positively and negatively framed performance information among all participants for responses to the question "Who do you think did a bad job responding to the pandemic?" Horizontal lines represent 95% confidence intervals. The x axis illustrates the difference in expected topic proportions based upon assignment to positively versus negatively framed performance information. All coefficients can be interpreted as the percentage change in topic prevalence across conditions.

7.5. Hypothesis 5: Effects of performance information framing among conservatives

Hypothesis 5 considered the combination of performance framing and motivated reasoning, proposing that it would lead conservatives to seek non‐ideologically aligned actors to blame. The five most frequently blamed entities for conservatives, estimated using performance information framing, are shown in Figure 9. Some of the entities mentioned echo President Trump's talking points: conservative participants cite states and the mass media when discussing factors contributing to the poor response to the pandemic. However, President Trump remains the most prominent target of blame; three of the top five topics focus on him. Also noteworthy is the absence of other conservative entities (e.g., Republicans) or organizations under the direct influence of the Trump Administration (e.g., the federal government). These findings suggest that, while conservative participants do evaluate the pandemic through a partisan lens, they still fault President Trump for his handling of it.

FIGURE 9. Most frequent topic proportions among conservatives for the question "Who do you think did a bad job responding to the pandemic?" Topic prevalence is estimated using the performance information framing variable.

Figure 10 illustrates how topic prevalence varies with performance information framing among conservatives. We observe statistically significant variation across performance frames, and a more substantive impact than in the full sample. Exposure to positively framed performance information made President Trump a less frequent target of blame. At the same time, positively framed performance information also appears to have directed blame toward frequent conservative targets such as the mass media, with subjects sometimes claiming that media coverage of the pandemic was sensationalized. Further, positive performance framing also increased blame assigned to states. This may be because positively framed performance information led conservative participants to interpret challenges in testing as a general problem originating in state governments, irrespective of their political leadership.

FIGURE 10. Graphical display of topical prevalence contrast between exposure to positively and negatively framed performance information among conservatives for responses to the question "Who do you think did a bad job responding to the pandemic?" Horizontal lines represent 95% confidence intervals. The x axis illustrates the difference in expected topic proportions based upon assignment to positively versus negatively framed performance information. All coefficients can be interpreted as the percentage change in topic prevalence across conditions.

These findings offer limited support for Hypothesis 5, with the qualification that framing operates in more nuanced ways than established by previous research. Frames did increase blame‐shifting by conservatives, even as positive and negative frames were processed differently, assigning blame to different actors.

7.6. Hypothesis 6: Effects of outgroup scapegoating depend on performance framing

Our last hypothesis examined whether positive performance framing would reduce the effect of the outgroup trigger on blame directed at Chinese residents. As shown in the Supplementary Material, semantic coherence and exclusivity analyses indicate a 10‐topic model is appropriate here. Figure 11 presents the results of this model, with topic prevalence estimated using a multiplicative term that combines performance information framing and the outgroup trigger. The most common source of blame was the lack of a bipartisan response to the pandemic. Relatedly, the entities blamed are diverse, ranging from President Trump to COVID‐19 testing to the public. Of note, however, is the absence of any mention of China from the list.

FIGURE 11. Most frequent topic proportions for the question "Who do you think did a bad job responding to the pandemic?" Topic prevalence is estimated using a multiplicative term combining the performance information framing and outgroup scapegoat variables.
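The "multiplicative term" is an interaction between the two treatment indicators in the prevalence equation. As a minimal sketch of what such a specification estimates (ours, not the authors' stm code; variable names and toy numbers are assumptions), one can regress a topic's proportion on framing, the trigger, and their product:

```python
import numpy as np

def interaction_ols(y, framing, trigger):
    """OLS of one topic's proportion on framing, trigger, and their
    product (the multiplicative term). Returns coefficients for
    [intercept, framing, trigger, framing * trigger]."""
    framing = np.asarray(framing, dtype=float)
    trigger = np.asarray(trigger, dtype=float)
    X = np.column_stack([np.ones_like(framing), framing, trigger,
                         framing * trigger])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

# Toy data covering all four cells twice; the outcome is built from
# known coefficients so OLS recovers them exactly.
f = np.array([0, 0, 1, 1, 0, 0, 1, 1])  # 1 = positive framing
t = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = outgroup trigger
y = 0.2 + 0.1 * f + 0.05 * t - 0.15 * f * t
beta = interaction_ols(y, f, t)
```

With real topic proportions the interaction coefficient captures how the trigger's effect on a topic differs across performance frames, which is exactly the conditional pattern the following figures display.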

Next, we turn to whether the effect of the outgroup trigger on who participants blamed varies with the framing of performance information. As a reminder, the outgroup trigger itself did little to direct blame toward Chinese residents, so there is little for performance framing to mitigate. Not surprisingly, then, we find no evidence to support the hypothesis. Our analysis reveals a single significant conditional effect of the outgroup trigger, on attribution of blame to the least common entity: leadership in general (see Figure 12). As the figure illustrates, exposure to the "Chinese virus" cue increases blame attributed to leadership when performance information is positively framed, but decreases it relative to the COVID‐19 cue when performance information is negatively framed, contrary to our hypothesis.
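A conditional effect of this kind amounts to computing the trigger contrast separately within each framing stratum. A small illustration (the function name and numbers are hypothetical, chosen only to mirror the sign reversal reported here, not the study's estimates):

```python
import numpy as np

def conditional_trigger_effect(theta, trigger, framing, topic):
    """Trigger effect on one topic's mean proportion, computed
    separately within each framing stratum (illustrative sketch,
    not the authors' estimation code)."""
    t = np.asarray(theta, dtype=float)[:, topic]
    trigger = np.asarray(trigger)
    framing = np.asarray(framing)
    effects = {}
    for frame in (0, 1):  # 0 = negative framing, 1 = positive framing
        m = framing == frame
        effects[frame] = (t[m & (trigger == 1)].mean()
                          - t[m & (trigger == 0)].mean())
    return effects

# Hypothetical "leadership" topic proportions for 8 documents.
vals = np.array([0.10, 0.10, 0.20, 0.20,   # positive framing
                 0.30, 0.30, 0.20, 0.20])  # negative framing
theta = np.column_stack([vals, 1 - vals])
trigger = np.array([0, 0, 1, 1, 0, 0, 1, 1])
framing = np.array([1, 1, 1, 1, 0, 0, 0, 0])
effects = conditional_trigger_effect(theta, trigger, framing, topic=0)
```

In this toy data the trigger effect is positive under positive framing and negative under negative framing, the same reversal pattern described in the text.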

FIGURE 12. Conditional effect of the Chinese virus cue on blaming leadership for the poor pandemic response. The left panel includes only those assigned to the positively framed performance condition, and the right panel only those assigned to the negatively framed condition. Horizontal lines represent 95% confidence intervals. The x axis illustrates the difference in expected topic proportions based upon assignment to the term Chinese virus or COVID‐19.

Prior findings suggest a potential backfire effect among conservatives associated with the outgroup trigger, so we examine this group more closely. We return to a five‐topic model in which topic prevalence is estimated using the multiplicative term combining performance information framing and the outgroup trigger. Figure 13 reveals that the most prevalent entity in conservative blame attribution was the lack of a bipartisan response to the pandemic. Following this, conservative participants blamed states, the media, and Democratic governors. Notably, the President continues to feature among the top five most blamed entities among conservatives.

FIGURE 13. Most frequent topic proportions among conservatives for the question "Who do you think did a bad job responding to the pandemic?" Topic prevalence is estimated using a multiplicative term combining the performance information framing and outgroup trigger variables.

For conservative participants, the effect of the "Chinese virus" cue on blaming bipartisan failure increases significantly when performance information is negatively framed, relative to when it is positively framed. We find the opposite for blaming Democratic governors and states: the importance of the "Chinese virus" cue as a determinant of blame decreases significantly when performance information is negatively framed, relative to when it is positively framed. We discuss the significance of these findings in greater detail below (Figure 14).

FIGURE 14. Conditional effect of the Chinese virus cue on conservatives' proclivity to blame bipartisan failure, Democratic governors, and states for the poor pandemic response. The left panel includes only those assigned to the positively framed performance condition, and the right panel only those assigned to the negatively framed condition. Horizontal lines represent 95% confidence intervals. The x axis illustrates the difference in expected topic proportions based upon assignment to the term Chinese virus or COVID‐19.

8. DISCUSSION

This article examined how the public responds to public officials' efforts to avoid blame for poor performance outcomes, focusing on two specific blame avoidance strategies: performance information framing and outgroup scapegoating. Our findings build on blame attribution theory and research in two important ways.

First, we offer nuanced insights into how the public responds to outgroup triggers. Research on blame attribution in public management has largely focused on public officials as the target of blame and offers little insight into how the public might respond to other outgroup distinctions. The rise in populist rhetoric among governing elites around the world is accompanied by a standard populist trait: criticizing outgroups for societal problems and government failings (Hameleers et al., 2017). In some cases, scapegoating may be subtle or ambiguous; Trump's intended target may have been the Chinese government when he used the term "Chinese virus." Regardless of the motivation, it is important to understand empirically how the public responds to such language. While use of the term "Chinese virus" may have directed anger toward Chinese‐Americans, our evidence indicates it was not an effective blame avoidance tool and was instead interpreted as a signal of incompetence. We thereby extend existing theory by demonstrating how the use of outgroup triggers and divisive language to escape blame can breed contempt and heighten perceptions of incompetence. An important question for future research is under what conditions blame avoidance strategies generate backfire effects.

Second, our findings shed light on how performance information framing shapes blame attribution. Contrary to much prior work (van den Bekerom et al., 2021), we find that performance framing is largely ineffective for the full sample. Negative performance framing increased the specificity of blame for that performance topic but did not redirect it. These nuanced findings may reflect that performance data matters less for polarized and widely covered policy topics like COVID‐19, where there is little room to shape opinions, than for the subject areas featured in previous research, such as school, municipal, or contractor performance. However, heterogeneous effects are evident: performance information framing is somewhat effective in shielding President Trump from blame among conservative participants.

We also extend previous research demonstrating an outgroup bias in responses to performance information (Porumbescu et al., 2021) by illustrating how performance information framing shapes the impact of the outgroup trigger, once again mainly among conservatives. Negatively framed performance information reduces the impact of the "Chinese virus" cue on blame attributed to actors frequently criticized by President Trump, instead focusing blame on both parties and the broader political environment. One interpretation is that policymakers who simultaneously proclaim good performance and scapegoat others undermine the coherence of their own arguments. This corresponds to the argument that participants were less inclined to view "Chinese virus" as an outgroup trigger and more inclined to interpret it as a signal of policymaker incompetence.

Some limitations of this article point to directions for future research. First, we examine blame attribution in one country during a global pandemic. The conditions of the US setting (perceptions of a botched response, intense polarization, and a President engaged in blame avoidance strategies) make it ideal for analyzing the variables we study, but also distinct from many other countries. An obvious extension would be to examine how the public evaluates government performance in more ordinary times and in other political contexts. A further limitation relates to our methods. While STM and unsupervised machine learning techniques present new opportunities to understand responses to blame avoidance, they require researchers to infer sources of blame, which creates potential for imprecision. One way of addressing this is to replicate the study using alternative forms of text analysis, such as human coders or supervised methods, as well as close‐ended response items. A final limitation concerns the wording of the outcome variable, which asks participants who they believe did a bad job responding to the pandemic. Given the inductive nature of our analysis, it is possible that blame attribution patterns, in particular the scapegoating of Chinese residents, would be clearer with a differently worded question.

9. CONCLUSION

Major crises significantly impact public judgments of the quality of government. While public actors can try to shape those judgments through skillful management of the event itself, they also devote significant attention to managing perceptions by employing different blame avoidance strategies (Weaver, 1986). Our findings illustrate the limited power political leaders have, and the significant risks they face, when engaging in blame avoidance during a crisis. Conditions of polarization may partially shield public actors, but also ensure that some blame is inevitable, as members of the public interpret events via motivated reasoning. Conservatives were more likely to excuse a conservative President from blame for the pandemic response and to look for liberal targets, while liberals did the opposite.

While partisan motivated reasoning provides the environment within which blame is allocated, we provide some evidence that blame avoidance strategies may work unpredictably or even backfire. In general, framing performance data in negative or positive terms had limited effects on how people assigned blame for COVID‐19, suggesting that most subjects had made up their minds. We also offer evidence that divisive blame avoidance efforts seeking to shift blame to outgroups can backfire, may instead signal incompetence, and risk generating a negative response even from co‐partisans.

Supporting information

Supplementary Material

Acknowledgment

Gregory Porumbescu's work was supported by NSF SCC Grant 1952096.

Porumbescu, G. , Moynihan, D. , Anastasopoulos, J. , & Olsen, A. L. (2022). When blame avoidance backfires: Responses to performance framing and outgroup scapegoating during the COVID‐19 pandemic. Governance, 1–25. 10.1111/gove.12701

[Correction added on 13 June 2022, after first Online publication: Grant acknowledgment has been included; link mentioned in the Data Availability statement has been updated]

ENDNOTES

1. An explanation of how the sample size was calculated is included in the preregistration.

2. Approval rating is a continuous scale running from 0% to 100% that indicates the share of a participant's completed MTurk tasks that requesters (i.e., researchers) evaluated positively. A rating of 0% means every task the participant performed was negatively evaluated; a rating of 94% means the participant's work was positively evaluated on 94% of tasks. The number of HITs indicates how many tasks an MTurk participant has completed.

DATA AVAILABILITY STATEMENT

Data and code used in this article are available at: https://doi.org/10.7910/DVN/Z9XXOW.

REFERENCES

1. Ahler, D. J., Roush, C. E., & Sood, G. (2019). The micro‐task market for lemons: Data quality on Amazon's Mechanical Turk. Political Science Research and Methods, 1–20. 10.1017/psrm.2021.57
2. Bartoš, V., Bauer, M., Cahlíková, J., & Chytilová, J. (2021). Covid‐19 crisis and hostility against foreigners. European Economic Review, 137, 103818. 10.1016/j.euroecorev.2021.103818
3. Ben‐Porath, E. N., & Shaker, L. K. (2010). News images, race, and attribution in the wake of Hurricane Katrina. Journal of Communication, 60(3), 466–490. 10.1111/j.1460-2466.2010.01493.x
4. Bevan, G., & Hood, C. (2006). What's measured is what matters: Targets and gaming in the English public health care system. Public Administration, 84(3), 517–538. 10.1111/j.1467-9299.2006.00600.x
5. Bisgaard, M. (2015). Bias will find a way: Economic perceptions, attributions of blame, and partisan‐motivated reasoning during crisis. The Journal of Politics, 77(3), 849–860. 10.1086/681591
6. Bukowski, M., de Lemus, S., Rodriguez‐Bailón, R., & Willis, G. B. (2017). Who's to blame? Causal attributions of the economic crisis and personal control. Group Processes & Intergroup Relations, 20(6), 909–923. 10.1177/1368430216638529
7. Coppock, A., Leeper, T. J., & Mullinix, K. J. (2018). Generalizability of heterogeneous treatment effect estimates across samples. Proceedings of the National Academy of Sciences, 115(49), 12441–12446. 10.1073/pnas.1808083115
8. Darling‐Hammond, S., Michaels, E. K., Allen, A. M., Chae, D. H., Thomas, M. D., Nguyen, T. T., Mujahid, M. M., & Johnson, R. C. (2020). After "The China Virus" went viral: Racially charged coronavirus coverage and trends in bias against Asian Americans. Health Education & Behavior, 47(6), 870–879. 10.1177/1090198120957949
9. Davis, K. (2018). More deadly than war: The hidden history of the Spanish flu and the First World War. Henry Holt Books for Young Readers.
10. Dennis, S. A., Goodson, B. M., & Pearson, C. (2018). MTurk workers' use of low‐cost "virtual private servers" to circumvent screening methods: A research note. Working paper. Available at SSRN: https://ssrn.com/abstract=3233954. 10.2139/ssrn.3233954
11. Dhanani, L. Y., & Franz, B. (2021). Why public health framing matters: An experimental study of the effects of COVID‐19 framing on prejudice and xenophobia in the United States. Social Science & Medicine, 269, 113572. 10.1016/j.socscimed.2020.113572
12. Dionne, K. Y., & Turkmen, F. F. (2020). The politics of pandemic othering: Putting COVID‐19 in global and historical context. International Organization, 74(S1), E213–E230. 10.1017/s0020818320000405
13. Fleck, G. (2021). Is Bolsonaro's anti‐China rhetoric fueling anti‐Asian hate in Brazil? Sino‐Brazilians report increased intolerance. Global Voices. https://globalvoices.org/2021/03/26/is‐bolsonaros‐anti‐china‐rhetoric‐fueling‐anti‐asian‐hate‐in‐brazil/
14. Hameleers, M., Bos, L., & De Vreese, C. H. (2017). "They did it": The effects of emotionalized blame attribution in populist communication. Communication Research, 44(6), 870–900. 10.1177/0093650216644026
15. Haokip, T. (2021). From 'Chinky' to 'Coronavirus': Racism against northeast Indians during the COVID‐19 pandemic. Asian Ethnicity, 22(2), 353–373. 10.1080/14631369.2020.1763161
16. Hong, S., Kim, S. H., & Son, J. (2020). Bounded rationality, blame avoidance, and political accountability: How performance information influences management quality. Public Management Review, 22(8), 1240–1263. 10.1080/14719037.2019.1630138
17. Hong, S., & Kim, Y. (2019). Loyalty or competence: Political use of performance information and negativity bias. Public Administration Review, 79(6), 829–840. 10.1111/puar.13108
18. Hswen, Y., Xu, X., Hing, A., Hawkins, J. B., Brownstein, J. S., & Gee, G. C. (2021). Association of "#Covid19" versus "#Chinesevirus" with anti‐Asian sentiments on Twitter: March 9–23, 2020. American Journal of Public Health, 111(5), 956–964. 10.2105/ajph.2021.306154
19. James, O., Jilke, S., Petersen, C., & Van de Walle, S. (2016). Citizens' blame of politicians for public service failure: Experimental evidence about blame reduction through delegation and contracting. Public Administration Review, 76(1), 83–93. 10.1111/puar.12471
20. James, O., Olsen, A. L., Moynihan, D. P., & Van Ryzin, G. G. (2020). Behavioral public performance: How people make sense of government metrics. Cambridge University Press.
21. James, O., & Van Ryzin, G. G. (2017). Motivated reasoning about public performance: An experimental study of how citizens judge the Affordable Care Act. Journal of Public Administration Research and Theory, 27(1), 197–209. 10.1093/jopart/muw049
22. Jilke, S., & Bækgaard, M. (2020). The political psychology of citizen satisfaction: Does responsibility attribution matter? Journal of Public Administration Research and Theory, in press.
23. Joslyn, M. R., & Haider‐Markel, D. P. (2017). Gun ownership and self‐serving attributions for mass shooting tragedies. Social Science Quarterly, 98(2), 429–442. 10.1111/ssqu.12420
24. Kalmoe, N., & Mason, L. (2019). Lethal mass partisanship: Prevalence, correlates, and electoral contingencies. National Capital Area Political Science Association American Politics Meeting.
25. Karni, A. (2020). In daily coronavirus brief, Trump tries to redefine himself. New York Times. https://www.nytimes.com/2020/03/23/us/politics/coronavirus‐trump‐briefing.html
26. Leland, S., Mohr, Z., & Piatak, J. (2021). Accountability in government contracting arrangements: Experimental analysis of blame attribution across levels of government. The American Review of Public Administration, 51(4), 251–262. 10.1177/0275074021990458
27. Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433–442. 10.3758/s13428-016-0727-z
28. Lodge, M., & Taber, C. (2000). Three steps toward a theory of motivated political reasoning. In Lupia, A., McCubbins, M., & Popkin, S. (Eds.), Elements of reason: Cognition, choice, and the bounds of rationality (pp. 183–213). Cambridge University Press.
29. Mackie, D. M., Devos, T., & Smith, E. R. (2000). Intergroup emotions: Explaining offensive action tendencies in an intergroup context. Journal of Personality and Social Psychology, 79(4), 602–616. 10.1037/0022-3514.79.4.602
30. Malhotra, N., & Kuo, A. G. (2008). Attributing blame: The public's response to Hurricane Katrina. Journal of Politics, 70(1), 120–135. 10.1017/s0022381607080097
31. McCormick, E. (2021). California saw staggering rise in hate crimes against Asians in 2020. The Guardian. https://www.theguardian.com/us‐news/2021/jul/01/california‐hate‐crimes‐reports‐anti‐asian
32. Mutz, D. (2018). Status threat, not economic hardship, explains the 2016 presidential vote. Proceedings of the National Academy of Sciences, 115(19), E4330–E4339. 10.1073/pnas.1718155115
33. Olsen, A. (2015). Citizen (dis)satisfaction: An experimental equivalence framing study. Public Administration Review, 75(3), 469–478. 10.1111/puar.12337
34. Pevehouse, J. (2020). The COVID‐19 pandemic, international cooperation, and populism. International Organization, 74(S1), E191–E212. 10.1017/s0020818320000399
35. Porumbescu, G., Piotrowski, S., & Mabillard, V. (2021). Performance information, racial bias, and citizen evaluations of government: Evidence from two studies. Journal of Public Administration Research and Theory, 31(3), 523–541. 10.1093/jopart/muaa049
36. Roberts, M. E., Stewart, B. M., Tingley, D., Lucas, C., Leder‐Luis, J., Gadarian, S. K., Albertson, B., & Rand, D. G. (2014). Structural topic models for open‐ended survey responses. American Journal of Political Science, 58(4), 1064–1082.
37. Roberts, M., Stewart, B. M., & Tingley, D. (2019). stm: An R package for structural topic models. Journal of Statistical Software, 91(2). 10.18637/jss.v091.i02
38. Rogers, K., Jakes, L., & Swanson, A. (2020). Trump defends using 'Chinese Virus' label, ignoring growing criticism. New York Times. https://www.nytimes.com/2020/03/18/us/politics/china‐virus.html
39. Rose, J. (2021). Some Republicans blame migrants for COVID‐19 surges. Doctors say they're scapegoating. NPR. https://www.npr.org/2021/08/10/1026178171/republicans‐migrants‐covid‐19‐surges
40. Samuels, D., & Zucco, C., Jr. (2014). The power of partisanship in Brazil: Evidence from survey experiments. American Journal of Political Science, 58(1), 212–225. 10.1111/ajps.12050
41. Shepherd, K. (2020). John Cornyn criticized Chinese for eating snakes. He forgot about the rattlesnake roundups back in Texas. Washington Post. https://www.washingtonpost.com/nation/2020/03/19/coronavirus‐china‐cornyn‐blame/
42. Tajfel, H. (1970). Experiments in intergroup discrimination. Scientific American, 223(5), 96–103. 10.1038/scientificamerican1170-96
43. Tondo, L. (2020). Salvini attacks Italy PM over coronavirus and links to rescue ship. The Guardian. https://www.theguardian.com/world/2020/feb/24/salvini‐attacks‐italy‐pm‐over‐coronavirus‐and‐links‐to‐rescue‐ship
44. van den Bekerom, P., van der Voet, J., & Christensen, J. (2021). Are citizens more negative about failing service delivery by public than private organizations? Evidence from a large‐scale survey experiment. Journal of Public Administration Research and Theory, 31(1), 128–149. 10.1093/jopart/muaa027
45. Weaver, R. K. (1986). The politics of blame avoidance. Journal of Public Policy, 6(4), 371–398. 10.1017/s0143814x00004219
46. Whitehead, G. I., III, Smith, S. H., & Eichhorn, J. A. (1982). The effect of subject's race and other's race on judgments of causality for success and failure. Journal of Personality, 50(2), 193–202. 10.1111/j.1467-6494.1982.tb01023.x


