PLOS ONE. 2021 Oct 20;16(10):e0256740. doi: 10.1371/journal.pone.0256740

Politicization of COVID-19 health-protective behaviors in the United States: Longitudinal and cross-national evidence

Wolfgang Stroebe 1, Michelle R vanDellen 2,*, Georgios Abakoumkin 3, Edward P Lemay Jr 4, William M Schiavone 2, Maximilian Agostini 1,#, Jocelyn J Bélanger 5,#, Ben Gützkow 1,#, Jannis Kreienkamp 1,#, Anne Margit Reitsema 1,#, Jamilah Hanum Abdul Khaiyom 6,, Vjolica Ahmedi 7,, Handan Akkas 8,, Carlos A Almenara 9,, Mohsin Atta 10,, Sabahat Cigdem Bagci 11,, Sima Basel 5,, Edona Berisha Kida 7,, Allan B I Bernardo 12,, Nicholas R Buttrick 13,, Phatthanakit Chobthamkit 14,, Hoon-Seok Choi 15,, Mioara Cristea 16,, Sára Csaba 17,, Kaja Damnjanović 18,, Ivan Danyliuk 19,, Arobindu Dash 20,, Daniela Di Santo 21,, Karen M Douglas 22,, Violeta Enea 23,, Daiane Gracieli Faller 5,, Gavan Fitzsimons 24,, Alexandra Gheorghiu 23, Ángel Gómez 25,, Ali Hamaidia 26,, Qing Han 27,, Mai Helmy 28,, Joevarian Hudiyana 29,, Bertus F Jeronimus 1,, Ding-Yu Jiang 30,, Veljko Jovanović 31,, Željka Kamenov 32,, Anna Kende 17,, Shian-Ling Keng 33,, Tra Thi Thanh Kieu 34,, Yasin Koc 1,, Kamila Kovyazina 35,, Inna Kozytska 10,, Joshua Krause 1,, Arie W Kruglanksi 4,, Anton Kurapov 19,, Maja Kutlaca 36,, Nóra Anna Lantos 17,, Cokorda Bagus Jaya Lemsmana 37,, Winnifred R Louis 38,, Adrian Lueders 39,, Najma Iqbal Malik 10,, Anton Martinez 40,, Kira O McCabe 41,, Jasmina Mehulić 32,, Mirra Noor Milla 29,, Idris Mohammed 42,, Erica Molinario 43,, Manuel Moyano 44,, Hayat Muhammad 45,, Silvana Mula 21,, Hamdi Muluk 29,, Solomiia Myroniuk 1,, Reza Najafi 46,, Claudia F Nisa 5,, Boglárka Nyúl 17,, Paul A O’Keefe 33,, Jose Javier Olivas Osuna 47,, Evgeny N Osin 48,, Joonha Park 49,, Gennaro Pica 50,, Antonio Pierro 21,, Jonas Rees 51,, Elena Resta 21,, Marika Rullo 52,, Michelle K Ryan 53,1,, Adil Samekin 54,, Pekka Santtila 55,, Edyta Sasin 5,, Birga M Schumpe 56,, Heyla A Selim 57,, Michael Vicente Stanton 58,, Samiah Sultana 1,, Robbie M Sutton 22,, Eleftheria Tseliou 3,, Akira Utsugi 59,, Jolien Anne van Breen 60, Caspar J Van Lissa 61,, Kees Van Veen 1,, Alexandra Vázquez 25,, Robin Wollast 39,, Victoria Wai-Lan Yeung 62,, Somayeh Zand 46,, Iris Lav Žeželj 18,, Bang Zheng 63,, Andreas Zick 51,, Claudia Zúñiga 64,, N Pontus Leander 1,#
Editor: Amitava Mukherjee
PMCID: PMC8528320  PMID: 34669724

Abstract

During the initial phase of the COVID-19 pandemic, U.S. conservative politicians and the media downplayed both the risk of contracting COVID-19 and the effectiveness of recommended health behaviors. Health behavior theories suggest that perceived vulnerability to a health threat and perceived effectiveness of recommended health-protective behaviors determine motivation to follow recommendations. Accordingly, we predicted that—as a result of the politicization of the pandemic—politically conservative Americans would be less likely to enact recommended health-protective behaviors. In two longitudinal studies of U.S. residents, political conservatism was inversely associated with perceived health risk and adoption of health-protective behaviors over time. The effects of political orientation on health-protective behaviors were mediated by perceived risk of infection, perceived severity of infection, and perceived effectiveness of the health-protective behaviors. In a global cross-national analysis, effects were stronger in the U.S. (N = 10,923) than in an international sample (total N = 51,986), highlighting the increased and overt politicization of health behaviors in the U.S.

Introduction

Prior to the development of vaccines, behavioral measures were the primary means of preventing the spread of COVID-19. The World Health Organization (WHO) and the U.S. Centers for Disease Control (CDC) recommended a number of health-protective behaviors to lower a person’s risk of contracting and spreading the virus. The initial list of such recommendations included hand washing, social distancing, and self-quarantining, followed by an additional recommendation to wear face masks and face coverings. The effectiveness of lockdowns imposed in multiple countries demonstrated the potential of extreme social distancing to prevent infection [1, 2]. However, considering the severe economic consequences of countrywide lockdowns, many countries relied on individual decision-making to contain the spread of COVID-19. With the availability of vaccines in 2021, these countries are relying on the willingness of individuals (including essential care workers in healthcare, education and other high contact fields) to be inoculated. A central question, therefore, is whether individuals’ willingness to adopt health-protective behaviors and to be vaccinated varies with their subjective perceptions about COVID-19, perceptions that may be shaped by political concerns and politicized social influence.

According to theories of health behavior, individuals’ compliance with recommendations depends on their perceptions of infection risk, the anticipated severity of such an infection, and the perceived effectiveness of recommended health-protective behaviors [3–9]. For example, the Health Belief Model, a widely tested theory of health behavior, asserts that the likelihood of individuals engaging in a given health-protective behavior is determined by the perceived severity of the health threat and the perceived effectiveness of the recommended health-protective behavior [5, 9]. The perceived severity of a health threat is determined by the extent to which individuals believe they are likely to contract an illness and how severe they anticipate the personal consequences of that illness to be. Irrespective of this perception, however, the likelihood of individuals engaging in a recommended health-protective behavior will depend on whether they perceive the recommended measure to be effective in preventing the health threat and whether the perceived benefits of that behavior outweigh the perceived costs [5, 9, 10]. Other health behavior theories—such as the Protection Motivation Theory—confirm the importance of these perceptions for the adoption of health-protective behavior [7].

In accordance with these theories, individuals would be expected to adopt health-protective behaviors to prevent a COVID-19 infection to the extent they believe they could become infected and consider such an infection to be a serious threat to their health. Whether or not people adopt these recommended health-protective behaviors would also be influenced by the perceived effectiveness of that behavior in preventing an infection. Two large meta-analyses on the effectiveness of fear-arousing communications have provided empirical support for the role of perceived threat and perceived behavioral effectiveness in predicting health behaviors from both experimental and observational studies [4, 9].

Political beliefs are associated with differing perceptions of health risks

Several studies provide evidence for a relationship between individual-level political orientation and perception of health risk associated with COVID-19, as well as compliance with recommended health-protective behaviors in the U.S. [11–20]. In the U.S., early public polls indicated partisan differences in perceptions of health threat posed by COVID-19 [21, 22]. Geotracking data of 15 million smartphones suggested that people who lived in counties that voted for Trump in the 2016 U.S. election were 14% less likely to engage in recommended social distancing behaviors [16]. Another study based on the daily reported activities data of more than a million Americans indicated that political partisanship predicted reduced physical and social mobility much more strongly than did the local incidence of COVID-19 [13].

For people living in the U.S., perceptions of both the threat of being infected with COVID-19 and the effectiveness of the recommended health-protective behaviors are possible explanations for these political differences in compliance with recommended health-protective behaviors. What remains unclear, however, is whether such political differences in behavior merely reflect consistent differences in the impact of political ideologies on behavior, or whether they reflect dynamic, politicized forms of social influence. Although conservative-leaning Americans generally perceive their environments as more threatening [23, 24], they deemphasized the public health threat, instead focusing on perceived threats to the economy and personal liberty that would result from pandemic-related preventive measures. Such a shift is reminiscent of studies on solution aversion, which showed that people deny the existence of a problem when presented with a solution they perceive as politically unpalatable (such as cap-and-trade or gun control; Campbell & Kay, 2014 [25]).

Politicized social influence may be exercised and maintained through partisan messaging and information consumption. Public communication from the right-leaning Trump White House consistently downplayed the seriousness of COVID-19 and the risk of getting infected. For instance, on February 26th of 2020, the President publicly called the coronavirus “a regular flu”, stated that there were few cases in the U.S., and claimed that the pandemic was under control [26]. Similar statements were made by right-leaning politicians with regard to the efficacy of mask-wearing and social distancing recommendations [27, 28], with some of them hosting indoor and maskless election rallies that defied state regulations and CDC recommendations [29]. These behaviors conveyed the message that the recommended health-protective behaviors were neither necessary nor effective.

Liberal (economically left-leaning) Americans and conservative (economically right-leaning) Americans tend to rely on different sources of information, which prioritize different values. Perceptions of the credibility of these sources also vary as a function of political orientation [30]. The credibility of a source can be an important determinant of the impact of a communication [31, 32], particularly if respondents’ motivation and ability to scrutinize the communication are low [33]. If conservatives believed their vulnerability to a health threat to be low, they would be less motivated to carefully scrutinize health communication [3, 4] and would therefore be more likely to accept information from a source they consider credible [33].

According to a survey by the PEW Research Center [30], 76% of liberals said that the CDC and other public health experts “get the facts right almost all of the time” with regard to the COVID-19 outbreak, whereas only 51% of conservatives agreed with this statement. In contrast, 54% of conservatives believed that the Trump White House got its facts right, compared to 9% of liberals. Differences in the information sources relied on and trusted by conservatives and liberals may have exacerbated differences in perceptions of the seriousness of the COVID-19 pandemic. An academic study based on a representative sample of Americans taken in March 2020 similarly found that liberals place less trust in politicians to handle the pandemic and are more trusting of medical experts such as the WHO [19]. Similarly, results from a representative survey of American adults administered in September 2020 indicated that trust that the World Health Organization is capable of effectively managing the pandemic and providing reliable information about COVID-19 was predicted by Democratic Party identity, liberal ideology, and a strong internationalist foreign policy orientation. Trust in the competence of the WHO was also a strong predictor of both social distancing and compliance with COVID-19 guidelines, although this effect was reduced when trust in the CDC was also taken into account. Finally, a study found that Americans who identified as Republicans or Independents perceived a COVID-19 infection as less severe, were less fearful of getting infected, had less knowledge about COVID-19, had less trust in science, and were less prepared to comply with health behavior recommendations [14]. In summary, compared to liberals, conservatives are less likely to trust science and the information provided by scientific organizations such as the CDC and the WHO, and instead rely on information provided by politicians of their own political persuasion. As a result, they are less informed about the pandemic, are less fearful of getting infected, and are also less prepared to comply with the health recommendations.

The present research

In the context of COVID-19, threat perceptions and associated health-protective behaviors are disproportionately adopted by liberals compared to their conservative counterparts. To the extent that this effect is localized to the U.S., it would further suggest an effect of politicized social influence, as opposed to differences inherent to conservative and liberal ideologies. The two studies reported in this paper apply a health psychological model (the Health Belief Model) to a social psychological problem, namely the association between political orientation and people’s acceptance of and compliance with recommendations for health-protective behaviors. The starting point of our studies is the well-documented observation that the Trump White House and conservative-leaning information sources systematically deemphasized the seriousness of COVID-19 and the effectiveness of the WHO and CDC recommendations regarding health-protective behaviors. To the extent that political orientation reflects differences in COVID-19 information consumption patterns, we expected that conservatives would perceive both the risk of becoming personally infected and the protective effects of health behaviors as lower. As a consequence, they would be less motivated to engage in recommended health-protective behaviors. Most importantly, we further predicted that any politicized adoption of health-protective behaviors would be mediated by political differences in the perceived risk of contracting the virus, the perceived severity of the consequences of such an infection, and the perceived effectiveness of the recommended health-protective behaviors.

We tested these hypotheses in two studies with samples of participants living in the U.S. The second study also included an international sample for comparison. In both studies, we assessed political orientation (conservative vs. liberal), perceived risk of getting infected, and willingness to engage in recommended health-protective behaviors. In Study 2, we additionally assessed perceived severity of getting infected and the perceived effectiveness of wearing a face covering. In both studies, participants were surveyed repeatedly over several weeks, allowing for examination of the effects across time (5 time points in Study 1 and 13 time points in Study 2). Finally, Study 2 also allowed for a comparison of the relationship between politics and health-protective behavior in the U.S. relative to other countries. This comparison would enable us to rule out the possibility that the association between political orientation and virus perception in the U.S. could merely be the result of different worldviews, or beliefs inherent to conservative and liberal ideologies. We hypothesized these effects would be stronger in the U.S. compared to other countries. Support for this prediction would suggest the effects of political conservatism on lower risk perceptions and health-protective behaviors are due to sources of influence that are localized to the U.S.

Study 1

Method

Participants and procedure

This study involved longitudinally tracking participants’ attitudes and self-reported behaviors across five time points. Wave 1 (Baseline) was launched on March 10th, 2020, one day before the WHO declared the COVID-19 outbreak a pandemic. To capture potentially acute and relatively long-term changes, we followed up with participants at three time points in close succession (March 20th, March 28th, and April 11th, 2020) as well as a longer-term follow-up on June 16th, 2020.

Participants were Amazon MTurk respondents. They were recruited to “fill out five surveys across the next months asking questions about recent events in society.” Current residence in the U.S. was an eligibility criterion, and we used an IP address filter to ensure fulfillment of this requirement. At Baseline, 1,056 MTurk respondents participated in the study. Seventeen individuals were excluded from analyses due to suspicion of data invalidity (e.g., double MTurk ID; survey completion in less than five minutes); thus, the final sample size was N = 1,039. Table 1 reports characteristics of these participants. At Wave 2, 649 participants yielded valid data (data from seven individuals were excluded); at Wave 3, there were 642 participants with valid data (seven individuals were excluded); there were 547 participants at Wave 4 (nine were excluded); and 462 participated in Wave 5 (one was excluded). Expected effect sizes were not estimated prior to data collection.

Table 1. Demographic information at baseline for participants in Studies 1 and 2.
Study 1 (U.S.) Study 2 (U.S.) Study 2 (Non-U.S.)
N N N
Gender
Male 463 4043 19732
Female 529 6773 31704
Other 6 81 223
Did not report 41 26 327
Age
    18–24 62 1670 12746
    25–34 367 3244 11991
    35–44 256 2446 9554
    45–54 153 1534 7518
    55–64 111 1211 5739
    65+ 49 784 4086
Did not report 41 34 352
Education
Some High School or less 7 360 547
High School graduate/GED 85 1637 12601
Some College 211 2146 12549
College Graduate 415 4229 14834
Graduate Degree 261 2512 11044
Did not report 60 39 411

Measures

A critical aim of this study was to capture attitudes and behaviors as early in the pandemic as possible. Our approach was to select and use brief, face-valid measures. This decision afforded high response rates to surveys, allowed available funds to be used to expand the sample size, and ultimately afforded the translation of items into 30 languages in Study 2. Moreover, short measures are not faulty per se; they can be psychometrically appropriate [34–36].

Perceived risk of infection. Perceived risk of infection was assessed at all five time points with a threat likelihood item adapted from prior work [37]: “How likely is it that the following will happen to you in the next few months? … You will get infected with the Coronavirus.” (1 = Not at all likely; 5 = Extremely likely).

Health-protective behaviors. In this study, we assessed health-protective behaviors based on the three recommendations made by the WHO. The health-protective behaviors were assessed at all five time points using the stem “To minimize my chances of getting Coronavirus, I …”, followed by the items “…wash my hands more often.”, “…avoid crowded spaces.”, and “…put myself in quarantine.” (-3 = Strongly disagree; +3 = Strongly agree). The items were specifically phrased to contextualize the behaviors as relevant to COVID-19 and were chosen because they covered the primary health-protective behaviors recommended by the WHO and the CDC at that time. Items were averaged to form a health-protective behaviors scale, which had satisfactory internal consistency (αs from .69 to .84 across time points). Descriptive statistics at each wave are presented in Table 2.
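The scale score is the simple mean of the three items. As a rough illustration of how such a score and its internal consistency (Cronbach's alpha) can be computed, the following Python sketch uses hypothetical item column names and toy values; it is not the study's analysis code.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (listwise-complete rows)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical item columns for one wave (-3 = strongly disagree, +3 = strongly agree)
wave = pd.DataFrame({
    "wash_hands":   [3, 2, 1, 3, 0, 2],
    "avoid_crowds": [3, 3, 0, 2, 1, 2],
    "quarantine":   [2, 3, -1, 2, 0, 1],
})

# Scale score: mean of the three items per respondent
wave["protective_behaviors"] = wave[["wash_hands", "avoid_crowds", "quarantine"]].mean(axis=1)
print(f"alpha = {cronbach_alpha(wave[['wash_hands', 'avoid_crowds', 'quarantine']]):.2f}")
```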

Table 2. Relationship of baseline political orientation with perceived health risk and health-protective behaviors: Study 1.
Date Perceived health risk WHO Health-protective behaviors
M (SD), N r (N) M (SD), N r (N)
March 10th 2.55 (1.13), 1029 .138 (1001) 1.84 (1.04), 1021 .093 (1001)
March 20th 2.73 (1.08), 646 .157 (640) 2.26 (0.89), 642 .085 (636)
March 28th 2.75 (1.05), 634 .195 (627) 2.34 (0.91), 634 .089 (627)
April 11th 2.56 (1.03), 547 .118 (540) 2.34 (0.95), 547 .141 (540)
June 16th 2.47 (0.97), 456 .158 (452) 2.17 (1.10), 456 .183 (452)

Note. Higher scores on this measure of political orientation correspond to more liberal attitudes.

Political orientation. Prior research on COVID-19 suggests that single-item indicators of political orientation suffice to predict virus threat perceptions [12]. Political orientation was measured at Baseline with the item: “What is your political orientation?” (1 = Extremely conservative; 9 = Extremely liberal; M = 5.72, SD = 2.39).

Results

Political orientation and perceived health risk

To examine whether political orientation was associated with perceived health risk, we calculated correlations between political orientation at Baseline and perceived health risk at all five time points. These correlations (see Table 2) show, consistently across all time points, that the more conservative participants described their political orientation at Baseline, the lower they perceived their risk of infection. We also calculated partial correlations between the focal variables, controlling for gender, age, and education separately, as well as controlling for all three variables concurrently. The pattern of correlations between political orientation and perceived health risk was not altered after controlling for these variables.
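The partial correlations were computed with standard statistical software. As a minimal sketch of the underlying logic (correlating the residuals of each focal variable after regressing out the covariates), the following Python code uses hypothetical column names and is not the authors' analysis script.

```python
import numpy as np
import pandas as pd

def partial_corr(df: pd.DataFrame, x: str, y: str, covars: list) -> float:
    """Partial correlation of x and y, controlling for covars via the residual method."""
    d = df[[x, y] + covars].dropna()
    # Design matrix of covariates plus intercept
    Z = np.column_stack([np.ones(len(d))] + [d[c].to_numpy(dtype=float) for c in covars])
    resid = {}
    for v in (x, y):
        beta, *_ = np.linalg.lstsq(Z, d[v].to_numpy(dtype=float), rcond=None)
        resid[v] = d[v].to_numpy(dtype=float) - Z @ beta
    return float(np.corrcoef(resid[x], resid[y])[0, 1])

# Hypothetical usage, one row per participant for a given wave (covariates must be numeric):
# r = partial_corr(wave_df, "political_orientation", "perceived_risk",
#                  covars=["age", "gender", "education"])
```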

Political orientation and health-protective behaviors

To examine whether political orientation was associated with health-protective behaviors, we calculated correlations between political orientation at Baseline and WHO-recommended health-protective behaviors at all five time points. The correlations depicted in Table 2 show a small but consistent pattern over time: the more participants described themselves as conservative, the less they enacted health-protective behaviors. In addition, partial correlations controlling for gender, age, and education separately, as well as for all three variables simultaneously, produced the same pattern of results.

Mediation analyses

To examine whether perceived infection risk mediated the relationship between political orientation and health-protective behaviors, we conducted five bootstrapping analyses (PROCESS macro, Model 4, 5,000 bootstrap samples [38]), one for each assessment wave. Note that political orientation was measured at Baseline, whereas perceived health risk and health-protective behaviors were measured at each time point. In support of our hypothesis, we found indirect effects of political orientation on health-protective behaviors via perceived health risk for four out of five assessment waves (see Table 3; additional path coefficients are presented in S1 Table).
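PROCESS Model 4 estimates the indirect effect as the product of the a path (political orientation to perceived risk) and the b path (perceived risk to behavior, controlling for political orientation), with a percentile bootstrap for the confidence interval. The sketch below re-implements that logic in Python for illustration only; column names are hypothetical and it is not the macro itself.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2020)

def ols_coefs(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """OLS coefficients, with an intercept prepended as the first column."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def indirect_effect(d: pd.DataFrame, x: str, m: str, y: str) -> float:
    a = ols_coefs(d[[x]].to_numpy(), d[m].to_numpy())[1]      # a path: x -> m
    b = ols_coefs(d[[x, m]].to_numpy(), d[y].to_numpy())[2]   # b path: m -> y, controlling for x
    return a * b

def bootstrap_indirect(d: pd.DataFrame, x: str, m: str, y: str, n_boot: int = 5000):
    """Point estimate and percentile bootstrap 95% CI for the a*b indirect effect."""
    d = d[[x, m, y]].dropna()
    estimate = indirect_effect(d, x, m, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(d), len(d))
        boots[i] = indirect_effect(d.iloc[idx], x, m, y)
    return estimate, np.percentile(boots, [2.5, 97.5])

# Hypothetical usage for one wave:
# ab, (lo, hi) = bootstrap_indirect(wave_df, "political_orientation",
#                                   "perceived_risk", "protective_behaviors")
```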

Table 3. Tests of the mediational model in five time points: Study 1.
Date Direct Effect: Baseline Political Orientation to WHO Virus Mitigation Behaviors Indirect Effect: Baseline Political Orientation to WHO Virus Mitigation Behaviors through Perceived Risk
B SE CI ab SE CI
March 10th .037 .014 .009, .064 .004 .002 .001, .009
March 20th .028 .015 -.001, .057 .003 .003 -.001, .010
March 28th .024 .015 -.006, .054 .009 .004 .003, .018
April 11th .047 .017 .016, .081 .006 .003 .001, .015
June 16th .072 .020 .031, .112 .009 .005 .002, .020

Note. CI = 95% bootstrap confidence interval. The a and b pathways are presented in S1 Table.

Results suggest that in the U.S. context, political orientation at Baseline predicted health risk perceptions as well as health-protective behaviors across time. The finding that the association between these variables did not weaken during this three-month period is consistent with evidence that political orientation is stable over time [39–41]. We also found evidence that health risk perceptions mediated the effects of political orientation on health-protective behaviors across time. However, a limitation of this study is that it focused only on behaviors initially recommended by the WHO, whereas other behaviors, such as mask wearing and vaccination intentions, may have become more politicized during the course of the pandemic.

Moreover, American MTurk samples are not representative of Americans in general. MTurk workers tend to have lower average income, lower average age, and higher levels of education than the general population. MTurk samples are also more liberal than nationally representative samples [42, 43]. However, these factors would not be expected to change the associations between political orientation and compliance with health-protective behavior recommendations, or the associations with the mediator variables that we examined.

Additionally, the present results were exclusive to the U.S. context, whereas a pandemic is a global phenomenon. Without comparing these patterns across countries, it is difficult to discern whether they are due to differences in worldviews inherent to political ideologies, or to politicized influences that are unique to the United States. Finally, this study did not examine the perceived severity of COVID-19 as a mediator, nor did it assess the use of face coverings or willingness to be vaccinated against COVID-19. To address these limitations, we report analyses from a second study in which participant recruitment extended beyond MTurk and beyond individuals currently living in the U.S.

Study 2

The data we collected for Study 1 only allowed us to test the mediating role of perceived risk. However, health belief models also specify that the perceived consequences of an infection and the perceived effectiveness of health-protective behaviors predict outcomes, and both of these factors may have been politicized in the U.S. Additionally, in Study 1, we did not test whether the association between political orientation and health-protective behaviors was specific to the situation in the United States. We addressed both of these questions in Study 2. We also extended the health-protective behaviors we assessed to include wearing a face covering in public as it became more clearly recommended by the WHO (and the CDC). Although vaccines had not yet been approved for use at the time of conducting this study, we anticipated (correctly) that vaccination would become a politically polarized topic, and thus also investigated vaccination intentions [44].

Method

Participants and procedure

Participants from the U.S., as well as from 114 other countries, were recruited for a longitudinal survey; S2 Table reports the most frequently represented countries at Baseline. Assessments began on March 19th, 2020, and the current results are based on data collected up to July 13th, 2020 from 62,909 individuals. The survey was distributed online through a combination of convenience sampling and snowball sampling. Members of the research team distributed the survey using social media campaigns, academic networks, and press releases. This convenience sample was supplemented with paid samples from 25 countries that were representative with respect to age and gender (collected only at Baseline). On completion of the survey and debriefing, a final screen invited respondents (both paid and unpaid) to distribute the survey to their networks and to participate in weekly (unpaid) follow-ups. To maximize data collection while minimizing participant strain, follow-up surveys with rotating questions were administered from March 19th to July 13th, 2020. As new themes emerged in the discourse surrounding COVID-19, additional items were included in the survey. For instance, attitudes and behaviors pertaining to the wearing of masks/face coverings were added as the WHO amplified its support for their use.

Participants were eligible to enroll in the study by completing the Baseline survey at any point. Demographic characteristics of participants at Baseline are reported in Table 1. Most participants (75.43%) enrolled in the study between March 19th and April 18th; see S1 Fig for a histogram of the dates on which participants completed the Baseline survey. Following completion of the Baseline survey, participants received invitations to complete follow-up surveys at fixed time points (no follow-up surveys included participant payment). Some participants completed later follow-up waves but were not assessed in earlier follow-up waves because they entered the study only after those earlier waves had been administered. As a result of these design features, each wave contains both a different subset of the total sample of participants and a differing time lag between baseline completion and follow-up survey completion, largely as a function of when participants enrolled in the study. In the U.S. sample, the timing of participants’ completion of the Baseline survey was not associated with political orientation (r = .01); within the non-U.S. sample, there was a small association between political orientation and Baseline enrollment date (r = .11).

Because this was a large-scale project covering a broad range of psychological factors (for a full codebook of all questions included in the manuscript, see https://osf.io/qhyue/), effect sizes for the research questions examined in this paper were not estimated a priori. All participants provided electronic consent in lieu of documenting signatures for consent. The study was approved by the Ethics Committees of the University of Groningen (PSY-1920-S-0390) and New York University Abu Dhabi (HRPP-2020-42).

Measures

Political orientation. We assessed political orientation using the image from the political compass (https://www.politicalcompass.org/analysis2). The official measure uses a lengthy text description to explain the graphic. For the purposes of the present study, we used the left-to-right continuum to capture conservatism without lengthy explanation. This measure was chosen for its adaptability across diverse political frameworks. Participants were prompted to click on a position on the graphic that represented their political orientation, from economically left (-200) to economically right (+200; M(U.S.) = -16.04, SD = 80.68; M(non-U.S.) = -4.83, SD = 67.03). To maintain consistency, we use the labels “conservative” and “liberal” to refer to economically right and left orientations, respectively. As political orientation is recognized to be stable over time, we collected it only during the Baseline survey [39–41].

Perceived risk of infection. As in Study 1, we assessed the perceived risk of infection using a single item about participants’ perceived likelihood of becoming infected with coronavirus in the next few months (1 = exceptionally unlikely; 7 = all but certain). An additional response choice allowed participants to indicate if they had already become infected with the coronavirus. As the analyses focused on perceptions of risk, participants who selected this latter response were excluded from analyses. This measure of risk perception taps into the deliberative aspects of risk perceptions [45]. Although not purely objective, it assesses a threat-specific perception of likelihood. We assessed perceived risk in the baseline survey and in nine follow-up surveys.

Perceived severity of infection. To capture subjective perceptions of severity, we asked participants to indicate how disturbing it would be for them if they were infected with Coronavirus (1 = not disturbing at all; 5 = extremely disturbing). This measure represents an experiential health risk perception that combines broad affective responses to the trigger (e.g., stigma about becoming infected, fear of the side effects of the disease) and deliberative aspects of risk (e.g., awareness of increased risk with age or employment status [45]). Perceived severity was assessed only in the Baseline survey.

Perceived effectiveness of health-protective behaviors. Beliefs about the effectiveness of health-protective behaviors in protecting against infection were assessed using two separate measures about social distancing (at three time points) and wearing a mask (at four time points). Participants reported their beliefs about the effectiveness of social distancing by indicating their agreement with the statement ‘In the absence of effective medical treatment or vaccines, social distancing measures are the most effective means of controlling the pandemic’. Participants reported beliefs about the effectiveness of wearing a mask or face covering for preventing infection with COVID-19 by indicating their agreement with the statement ‘I believe that wearing a mask protects myself.’ Both efficacy belief items used the same scale (-2 = strongly disagree; +2 = strongly agree). Perceptions of the effectiveness of social distancing were measured in three follow-up surveys; perceptions of the effectiveness of wearing a face covering were measured in four follow-up surveys.

Health-protective behaviors. We assessed three health-protective behaviors. Correlations among the health behaviors in the follow-up surveys were small to medium (see S3 Table).

WHO virus mitigation behaviors. As in Study 1, we assessed engagement in the three health-protective behaviors recommended by the WHO (i.e., hand washing, avoiding crowds, and self-isolating). We reused the items and scaling from Study 1. Our primary concern was utilizing a set of items to capture adherence to behaviors that were uniformly recommended for virus mitigation. The items demonstrated acceptable internal consistency (αs = .62-.74). WHO Virus mitigation behaviors were measured at baseline and in three follow-up waves.

Willingness to be vaccinated. Participants reported their willingness to be vaccinated in three follow-up waves by responding to the question ‘How likely are you to get vaccinated against coronavirus once a vaccine becomes available?’ on a five point scale (-2 = extremely unlikely; +2 = extremely likely). The item was adapted from prior flu vaccine research using a single item measure to capture vaccine intentions [46]. Note that the final assessment of vaccine intentions was conducted in July 2020, well before any vaccine had been approved for use.

Wearing a mask. Although wearing a face mask is now considered a health-protective behavior, it was not initially recommended by the WHO and was, therefore, neither included at Baseline, nor at any time point in Study 1. As it became evident that mask wearing would be a critical health-protective behavior in response to the COVID-19 pandemic, we added a measure of it to our longitudinal survey. At four time points (Waves 6, 8, 10, and 12), participants were asked about their frequency of wearing a mask/face covering in public. Participants responded to the statement ‘In the past week, I have covered my face in public places,’ using a five point scale (1 = [almost] never; 5 = [almost] always).

Results

Data analytic plan

Given the large differences in sample size between the Baseline and follow-up surveys, we analyzed the baseline and follow-up data separately. To simplify the presentation of results and account for measuring different variables at different times, we averaged responses to the same variable across the follow-up waves. These averages estimate participants’ relatively enduring standing on each variable, and the primary analyses for the follow-up waves were conducted using them. Means and sample sizes for each variable across waves of data collection are presented in Table 4. As shown in Table 4, each participant’s average score reflects between one and four responses, depending on the number of waves in which the measure was assessed (e.g., WHO Virus Mitigation behaviors were assessed in three follow-up waves) and the number of waves the participant completed. We report results for each follow-up wave separately in the supplementary materials.
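As a small illustration of this averaging step, the pandas sketch below collapses hypothetical long-format follow-up data (one row per participant per wave, with missing values where a measure was not administered or answered) into one average per participant. It is a sketch with made-up column names and values, not the study's analysis code.

```python
import pandas as pd

# Hypothetical long-format follow-up data
follow_up = pd.DataFrame({
    "participant":    [1, 1, 1, 2, 2],
    "wave":           [4, 8, 12, 4, 8],
    "perceived_risk": [4.0, 3.0, 3.5, 2.0, None],
    "who_behaviors":  [2.3, None, 1.7, 1.0, 0.7],
})

# Each participant's average across however many follow-up waves they completed;
# mean() skips missing values, so the average reflects one to four responses per person.
averages = (follow_up
            .groupby("participant")[["perceived_risk", "who_behaviors"]]
            .mean())
print(averages)
```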

Table 4. Dates, participants, and descriptive statistics of variables used in analyses, Study 2.
BL W1 W2 W3 W4 W5 W6 W7 W8 W9 W10 W11 W12 Ave
Political Orientation
U.S. 16.06 (80.68) -36.91 (84.42) -24.44 (84.29) -25.93 (84.93) -24.79 (83.20) -31.35 (83.34) -30.15 (82.24) -25.54 (83.05) -31.79 (81.89) -32.10 (82.70) -35.50 (80.21) -36.84 (80.84) -36.23 (80.15)
10923 540 2672 1856 1356 1031 883 601 803 743 527 769 689
Non-U.S. -4.83 (67.03) -42.42 (72.07) -35.69 (69.89) -35.37 (71.00) -17.17 (73.29) -16.99 (72.91) -18.33 (72.77) -18.44 (73.05) -20.07 (73.03) -18.48 (73.21) -21.00 (71.99) -19.57 (72.01) -20.33 (72.38)
51986 981 3514 3621 6588 6251 5014 4651 4282 4052 3391 4128 3596
Perceived Risk of COVID-19
U.S. 3.78 (1.38) 3.98 (1.29) 3.76 (1.32) 3.67 (1.32) 3.66 (1.40) 3.67 (.32) 3.64 (1.32) 3.69 (1.34) 3.84 (1.25) 3.73 (1.27)
10912 540 2672 1856 1031 601 743 769 689 4166
Non-U.S. 3.48 (1.40) 4.12 (1.41) 3.90 (1.30) 3.85 (1.35) 3.61 (1.36) 3.59 (1.35) 3.48 (1.34) 3.51 (1.34) 3.67 (1.29) 3.61 (1.27)
51750 981 3514 3621 6251 4651 4052 4128 3596 12901
Perceived Severity of COVID-19
U.S. 4.04 (1.14)
10914
Non-U.S. 3.87 (1.28)
51684
WHO Virus Mitigation Behaviors
U.S. 2.22 (0.95) 2.09 (1.13) 1.69 (1.31) 1.79 (1.29) 1.94 (1.15)
10917 1357 769 689 1811
Non-U.S. 2.20 (0.99) 2.10 (1.07) 1.28 (1.38) 1.13 (1.40) 1.71 (1.20)
51805 6590 4127 3600 8621
BL W1 W2 W3 W4 W5 W6 W7 W8 W9 W10 W11 W12 Ave
Perceived Efficacy of Social Distancing
U.S. 1.50 (0.78) 1.53 (0.77) 1.50 (0.83) 1.50 (0.75)
2672 1856 1357 3576
Non-U.S. 1.36 (0.84) 1.34 (0.83) 1.39 (0.81) 1.36 (0.80)
3513 3620 6588 8987
Vaccine Intentions
U.S. 1.16 (1.19) 1.13 (1.23) 1.14 (1.17)
1357 769 1811
Non-U.S. 1.39 (0.81) 0.82 (1.24) 0.86 (1.18)
6588 4098 8519
Efficacy of Wearing a Face Covering
U.S. 0.67 (1.28) 0.58 (1.31) 0.64 (1.27) 1.21 (1.15) 0.75 (1.19)
960 834 549 689 1489
Non-U.S. 0.51 (1.31) 0.33 (1.34) 0.52 (1.31) 0.83 (1.23) 0.58 (1.23)
5553 4484 3572 3576 7659
Wearing a Face Covering
U.S. 4.22 (1.31) 4.32 (1.23) 4.38 (1.18) 4.70 (0.83) 4.38 (1.11)
883 803 527 646 1441
Non-U.S. 3.47 (1.66) 3.59 (1.60) 3.61 (1.60) 3.72 (1.49) 3.64 (1.50)
5014 4484 3391 3310 7349

Notes. Political orientation was assessed only at baseline. Presented numbers reflect the political orientation (reported at baseline) of participants who completed each wave. BL = baseline, W = wave, Ave = average of construct across wave.

We first evaluated the zero-order correlations between political orientation and each outcome (i.e., perceived risk, health-protective behaviors) within and across locations (U.S. vs. non-U.S.). To compare correlations across locations, we used a general linear model with location (U.S. = 0; non-U.S. = 1) as a categorical between-subjects factor and political orientation as a continuous between-subjects factor. A test of the interaction between location and political orientation evaluated whether associations between political orientation and outcomes differed for participants living inside versus outside of the United States. These parsimonious models allow for easy interpretation of effects and effect sizes.
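The sketch below illustrates this kind of location by political orientation general linear model in Python (statsmodels) with synthetic stand-in data; the variable names, the simulated data, and the eta-squared variant shown are illustrative assumptions, not the authors' exact computation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: 'location' 0 = U.S., 1 = non-U.S.; 'polor' = baseline
# political orientation; 'outcome' = e.g., perceived risk averaged over follow-ups.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "polor": rng.normal(0, 80, n),
    "location": rng.integers(0, 2, n),
})
df["outcome"] = 3.5 - 0.002 * df["polor"] * (1 - df["location"]) + rng.normal(0, 1, n)

# GLM with the categorical location factor, continuous political orientation, and their
# interaction; the interaction tests whether the slope differs inside vs. outside the U.S.
model = smf.ols("outcome ~ C(location) * polor", data=df).fit()
anova = sm.stats.anova_lm(model, typ=3)

# One common effect-size variant for the interaction (partial eta-squared)
ss_int = anova.loc["C(location):polor", "sum_sq"]
ss_res = anova.loc["Residual", "sum_sq"]
print(anova)
print(f"interaction partial eta^2 = {ss_int / (ss_int + ss_res):.4f}")
```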

We also conducted robustness checks to confirm that the interaction between location and political orientation persisted after accounting for several factors. First, political orientation was weakly associated with age (r = .04), education (r = -.09), and gender (r = -.04) at Baseline, and these factors might be expected to account for some of the shared variance between political orientation and health beliefs and behaviors. Additionally, we observed that the date of Baseline survey completion was related to most outcomes (see S4 Table). Finally, participants in our study were not entirely independent of each other—people residing within different countries were exposed to different messaging, norms, and support factors related to COVID-19. Thus, we conducted robustness checks using multilevel modeling in which participants were nested in countries (with intercepts modeled as randomly varying across countries), while controlling for age, education, gender, and baseline survey date. Cumulatively, these robustness checks allowed us to account for interdependence of the data and for the alternative explanation that perceptions of risk might be due to demographic or methodological (i.e., differential enrollment across time) factors. All observed interactions between location (U.S. vs. non-U.S.) and political orientation remained significant after these robustness checks (see S4 Table).
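A sketch of the multilevel robustness check, again with synthetic data: participants nested in countries via a random intercept, the focal interaction as a fixed effect, and demographic and enrollment-timing covariates. Column names and the simulated data are assumptions for illustration; the authors' exact model specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: participants nested in 30 countries, with covariates.
rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "country":   rng.choice([f"c{i:02d}" for i in range(30)], n),
    "polor":     rng.normal(0, 70, n),
    "age":       rng.integers(18, 80, n),
    "gender":    rng.choice(["m", "f"], n),
    "education": rng.integers(1, 6, n),
    "days_since_launch": rng.integers(0, 115, n),
})
df["location"] = (df["country"] != "c00").astype(int)   # c00 stands in for the U.S.
df["outcome"] = 3.5 - 0.002 * df["polor"] * (1 - df["location"]) + rng.normal(0, 1, n)

# Random intercept for country; fixed effects include the focal interaction
# plus age, gender, education, and date of baseline completion as covariates.
model = smf.mixedlm(
    "outcome ~ C(location) * polor + age + C(gender) + education + days_since_launch",
    data=df,
    groups=df["country"],
).fit()
print(model.summary())
```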

Political orientation and perceived health risk

As Table 5 shows, in the U.S., political orientation was associated with perceived risk of infection such that more conservative individuals reported a lower likelihood of becoming infected. We tested whether these correlations differed between the U.S. and non-U.S. samples by evaluating the interaction term between political orientation and location; these tests are reported in Table 5 with both an F value representing the interaction and an effect size representing the proportion of variance in the outcome explained by the interaction between the variables. The correlations between political orientation and perceived risk were stronger in the U.S. than in the non-U.S. sample. The observed interactions between location and political orientation remained similar in our multilevel model robustness checks that nested participants within countries and controlled for date of survey completion, age, education, and gender at all time points.

Table 5. Correlations between baseline political orientation and perceived risk, perceived effectiveness, and health-protective behaviors, Study 2.
U.S. r Non-U.S. r U.S. vs. Non-U.S. Correlation Comparison
F
N N η2 (90% CI)
Baseline
Perceived Risk -.13*** -.08*** F = 5.87*
10912 51570 η2 < .001 (< .001, < .001)
Severity of Contracting the Virus -.08*** .03*** F = 87.65***
10914 51684 η2 = .001 (.001, .002)
WHO Virus Mitigation Behaviors -.13*** -.03*** F = 76.38***
11030 52072 η2 = .001 (.001, .002)
Follow-up Waves
Perceived Risk -.19*** -.08*** F = 26.64***
4166 12901 η2 = .002 (.001, .003)
Effectiveness of Social Distancing -.22*** -.02 F = 82.49***
3576 8987 η2 = .006 (.004, .009)
Effectiveness of Wearing a Mask -.17*** .08*** F = 73.38***
1489 7659 η2 = .008 (.005, .011)
WHO Virus Mitigation Behaviors -.23*** .02 F = 82.64***
1811 8621 η2 = .008 (.005, .011)
Wearing a Mask -.28*** .04** F = 76.28***
1441 7349 η2 = .008 (.005, .012)
Willingness to be Vaccinated -.32*** -.07*** F = 84.45***
1811 8519 η2 = .008 (.005, .011)

*p < .05

**p < .01

***p < .001.

Note. This table reports the results of averaged responses across the Follow-Up waves. Results across each time point were consistent and can be seen in S5 Table.

In our cross-sectional Baseline questionnaire, political orientation was also negatively associated with expected severity of infection such that more conservative individuals expected a COVID-19 infection to be less severe if they were to contract it. Here, the association of political orientation and perceived severity reversed direction for participants living outside the U.S.

Political orientation and perceived effectiveness of health-protective behaviors

Political orientation was also associated with the perceived effectiveness of health-protective behaviors (i.e., social distancing, wearing a face covering; see Table 5) such that more conservative individuals perceived these behaviors as less useful. The interactions with location indicate that these effects were stronger for participants in the U.S. relative to non-U.S. participants, and these interactions persisted after our robustness checks.

Political orientation and health-protective behaviors

As Table 5 shows, political orientation was associated with the WHO-recommended health-protective behaviors, willingness to be vaccinated, and wearing a face covering, such that more conservative individuals engaged less in health-protective behaviors. These effects were larger among U.S. participants than non-U.S. participants at every time point, and the interactions between political orientation and location held during our robustness checks.

Mediation analyses

As in Study 1, we used the PROCESS macro (seed = 31216 [38]) to examine whether the relationship between political orientation and health-protective behaviors was mediated by perceived health risk, perceived severity of contracting the virus, or perceived efficacy of health-protective behaviors (Model 4), as well as whether the indirect effects were moderated by location (Model 7). We conducted two sets of analyses—one for measures assessed only at baseline and one for measures assessed at follow-up. Note that due to its identified stability over time [39–41], political orientation was only measured at Baseline. For all mediational analyses, we standardized the political orientation variable. Because all correlations and interactions between location and political orientation remained after our robustness checks, tests of indirect effects neither included covariates nor nested participants within countries.
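PROCESS Model 7 lets the moderator (here, location) act on the a path and summarizes the moderation of the indirect effect with an index of moderated mediation: the coefficient of the x-by-w interaction in the mediator model multiplied by the b path. The Python sketch below bootstraps that index for illustration only; it is not the macro, and the column names x, w, m, and y are hypothetical placeholders.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(31216)

def path_coefficients(d: pd.DataFrame):
    """Return (a3, b): the x*w interaction slope in the mediator model and the m slope in the outcome model."""
    ones = np.ones(len(d))
    x, w, m, y = (d[c].to_numpy(dtype=float) for c in ("x", "w", "m", "y"))
    a = np.linalg.lstsq(np.column_stack([ones, x, w, x * w]), m, rcond=None)[0]  # m ~ x + w + x:w
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0]         # y ~ x + m
    return a[3], b[2]

def index_of_moderated_mediation(d: pd.DataFrame, n_boot: int = 5000):
    """Point estimate and percentile bootstrap 95% CI for a3*b."""
    d = d[["x", "w", "m", "y"]].dropna()
    a3, b = path_coefficients(d)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(d), len(d))
        a3_b, b_b = path_coefficients(d.iloc[idx])
        boots[i] = a3_b * b_b
    return a3 * b, np.percentile(boots, [2.5, 97.5])

# Hypothetical usage: x = standardized political orientation, w = location (0 = U.S.,
# 1 = non-U.S.), m = averaged perceived risk, y = averaged WHO virus mitigation behaviors.
# index, (lo, hi) = index_of_moderated_mediation(
#     df.rename(columns={"polor_z": "x", "location": "w",
#                        "risk_avg": "m", "who_avg": "y"}))
```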

Across all health-protective behaviors and mediators, we observed consistent patterns of a) mediation of health-protective behaviors by perceived risk, perceived consequences, and perceived effectiveness of the relevant health-protective behaviors, and b) stronger indirect effects for U.S. participants relative to non-U.S. participants (see Table 6). Notably, the association between political orientation and perceived effectiveness of wearing a face covering was negative for participants living in the United States but positive for participants living outside the U.S., consistent with our hypothesis regarding the unique effects of political orientation in the U.S. and reflecting the potentially unique discourse surrounding masks in the U.S.

Table 6. Perceived risk and severity mediates the relationship between baseline political orientation and health-protective behaviors, Study 2.
IV to Mediator Mediator to DV Direct Effect Indirect Effect
b (CI) b (CI) b (CI) b (CI)
Baseline
Baseline Political Orientation–Perceived Risk—WHO Virus Mitigation Behaviors
Index of Moderated Mediation: .0014 (.0002, .0025)
US -.153 (-.175, -.131) .023 (.010, .036) -.110 (-.122, -.010) -.0065 (-.0079, -.0052)
Non-US -.121 (-.134, -.109) .046 (.040, .052) -.023 (-.032, -.014) -.0052 (-.0061, -.0043)
Baseline Political Orientation–Perceived Severity—WHO Virus Mitigation Behaviors
Index of Moderated Mediation: .0201 (.0162, .0241)
US -.077 (-.096, -.059) .224 (.209, .239) -.093 (-.107, -.078) -.0140 (-.0174,-.0106)
Non-US .034 (.022, .045) .173 (.167, .179) -.034 (-.043, -.025) .0061 (.0041, .0082)
Follow-Up Waves
Baseline Political Orientation–Perceived Risk—WHO Virus Mitigation Behaviors
Index of Moderated Mediation: .0140 (.0076, .0206)
US -.223 (-.269,-.176) .098 (.055, .141) -.199 (-.244, -.155) -.0245 (-.0319, -.0176)
Non-US -.096 (-.120, -.081) .106 (.085, .126) .019 (-.005, .044) -.0105 (-.0141, -.0073)
Baseline Political Orientation–Perceived Risk—Willingness to be Vaccinated
Index of Moderated Mediation: .0204 (.0115, .0301)
US -.223 (-.269,-.176) .196 (.154, .238) -.275 (-.319, -.232) -.0367 (-.0466, -.0280)
Non-US -.099 (-.123, -.074) .151 (.131, .172) -.064 (-.088, -.040) -.0162 (-.0210, -.0118)
Baseline Political Orientation–Perceived Risk—Wearing a Mask
Index of Moderated Mediation: .0226 (.0124, .0341)
US -.233 (-.288, -.178) .163 (.115, .210) -.235 (-.285, -.185) -.0350 (-.0465, -.0248)
Non-US -.083 (-.111, -.055) .131 (.101, .160) .051 (.016, .085) -.0124 (-.0177, -.0080)
Baseline Political Orientation–Perceived Efficacy of Wearing a Mask—Wearing a Mask
Index of Moderated Mediation: .1514 (.1148, .1894)
US -.172 (-.224, -.121) .372 (.329, .415) -.199 (-.242, -.155) -.0986 (-.1315, -.0662)
Non-US .092 (.065, .119) .590 (.566, .615) -.003 (-.031, .026) .0527 (.0378, .0686)
Baseline Political Orientation–Perceived Efficacy of Social Distancing—WHO Virus Mitigation Behaviors
Index of Moderated Mediation: .0691 (.0499, .0894)
US -.158 (-.188, -.128) .690 (.622, .758) -.098 (-.140, -.055) -.0797 (-.0981, -.0623)
Non-US -.021 (-.038, -.004) .455 (.422, .488) .041 (.016, .065) -.0106 (-.0197, -.0012)

Notes. Results across each time point were consistent; for analyses within each time point, see S6 and S7 Tables. Political orientation was standardized prior to analysis. We analyzed the indirect pathway between political orientation and WHO virus mitigation behaviors through perceived efficacy of social distancing because two of the three items included in that scale relate to keeping distance from others.

Ancillary analyses

In the analyses described above, we compared associations between U.S. participants and non-U.S. participants living in 114 other countries. To examine the possibility that our findings could be an artifact of aggregating the data across these 114 countries, we also explored these effects in a subsample of individual countries. We focused our attention on the comparison countries from which we recruited the most participants into the Baseline survey (Spain [n = 3156], Romania [n = 2696], Netherlands [n = 2992], Indonesia [n = 2407], Greece [n = 2870], and Republic of Serbia [n = 2118]). Additionally, we evaluated responses in Canada (n = 1531) because residents of Canada might be expected to be exposed to political messaging from the U.S. to a greater degree than other individuals, given the proximity and shared border of the two countries; Canada thus represents a conservative test of the unique effects of political orientation in the U.S. Decisions about which countries to include were made prior to examining the direction or size of associations within the comparison countries. Both total and partial (i.e., controlling for age, gender, education, and date of baseline survey completion) associations between political orientation and health beliefs and behaviors were larger in the U.S. than in each of these comparison countries, with the exception that several associations were similar across the U.S., the Republic of Serbia, and Canada. S8 Table reports the associations within the U.S. and these seven comparison countries.
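As a small illustration of this per-country breakdown, the pandas sketch below computes the zero-order correlation between political orientation and one outcome separately within each comparison country, using synthetic stand-in data and hypothetical column names (partial correlations would additionally residualize out the covariates, as in the Study 1 sketch).

```python
import numpy as np
import pandas as pd

# Synthetic stand-in data: per-participant baseline rows with country labels.
rng = np.random.default_rng(2)
countries = ["United States", "Spain", "Romania", "Netherlands",
             "Indonesia", "Greece", "Serbia", "Canada"]
n = 4000
df = pd.DataFrame({
    "country": rng.choice(countries, n),
    "polor": rng.normal(0, 70, n),
})
df["perceived_risk"] = (3.5 - 0.002 * df["polor"] * (df["country"] == "United States")
                        + rng.normal(0, 1.3, n))

# Correlation of political orientation with perceived risk, separately by country
per_country_r = (df.groupby("country")[["polor", "perceived_risk"]]
                   .corr()
                   .xs("polor", level=1)["perceived_risk"]
                   .rename("r(polor, perceived_risk)"))
print(per_country_r)
```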

General discussion

Countrywide lockdowns and social distancing measures have severe economic and social consequences. Because compliance with behavioral recommendations involves personal costs, people need to be persuaded that it is in their own best interest to adopt these behaviors. According to theories of health behavior, this can be achieved by convincing people that there is a high risk of getting infected, that this has serious, often fatal consequences, and that recommended health-protective behaviors will in fact be effective in reducing the infection risk [3–6, 9, 10].

The U.S. is one of few countries where leading conservative government figures as well as an influential conservative-leaning news network questioned both the seriousness of the pandemic and the effectiveness of some of the recommended health-protective behaviors. As empirical studies have already demonstrated, U.S. conservatives engaged in fewer health-protective behaviors related to COVID-19 than did U.S. liberals. Recent work suggests these patterns have extended over time, even during periods of increased disease threat [47]. Our work replicates these patterns and suggests that perceptions of risk and effectiveness of health behaviors partially explain the effects of political orientation on enactment of these behaviors. Effects of political orientation were stronger (and sometimes in the opposite direction) in the United States than they were globally, which provides novel evidence suggesting these political differences are explained by politicized forces within the United States rather than differences in beliefs fundamental to political ideologies.

We were fortunate to be able to capture perceptions of health risk and health-protective behaviors as the pandemic was beginning to unfold internationally. By following up with participants over time, we were able to assess associations between political ideology, risk perceptions, and health-protective behaviors, even as the context of the pandemic changed. In Study 2, we could observe beliefs about risk and the emerging health-protective behaviors of wearing a face covering and intentions to get vaccinated. Across both studies, we observed strong consistency in the size of effects over time, suggesting that differences due to political affiliation emerged due to early politicization of health-protective behaviors. Indeed, during March and April 2020, these differences were already prominent between conservative and liberal political leaders.

These patterns are consistent with our prediction that the deleterious effects of political orientation on health-protective behaviors are specific to the U.S. and to the conservative leadership during the early stages of the pandemic. Indeed, outside of the U.S., conservatives were more likely than liberals to believe masks would provide personal protection (and were consequently more likely to report wearing a face covering). Moreover, these patterns are different than what might be expected based on evidence that conservatives are more sensitive to threats (especially physical threats) than liberals [48, 49] and that conservatives in the U.S. (relative to liberals in the U.S.) expressed more concern about a pandemic happening under other (Democratic) political leadership [24]. Thus, although we did not empirically assess attention to or agreement with conservative leadership and news sources, the patterns we observe differ from what might be expected based on past research on conservative responses to virus threats, suggesting a U.S.-specific and COVID-specific influence. Although our studies did not directly examine political communication, their findings highlight mechanisms by which political communication may become life-threatening—when it alters the perceptions of risk of health-threatening circumstances and the efficacy of mitigation behavior.

Another strength of our studies is the size of our samples and the repeated measurement over time. Because we captured health behaviors and perceptions at many points in a changing pandemic, it is unlikely that associations between baseline orientation and outcomes were driven by one specific contextual factor. Admittedly, the effect sizes representing the association between political orientation and compliance with health-protective behavior recommendations observed in our samples are small. Our large samples allowed us to identify these small effect sizes precisely, as indicated by the narrow confidence intervals. Moreover, weak effects on an individual level can still have a powerful impact at the population level. For example, even though smokers run a much greater risk of lung cancer than non-smokers, the 10-year absolute risk of lung cancer for a 35-year-old man who is a heavy smoker is only about 0.9% [50]. And yet, these small effects have a great impact at the population level. In a group of 1 million heavy smokers aged 35, for instance, nearly 10,000 will die prematurely before the age of 45 due to smoking [50].

As with all research, ours has some limitations. Because our data are correlational, we cannot draw causal conclusions. What we can show, however, is that the pattern of our data is consistent with such a causal interpretation, as evidenced by the consistent mediation of health-protective behaviors by perceived risk of getting infected, perceived severity of the consequences of such an infection, and perceived efficacy of the relevant health-protective behaviors in preventing such negative outcomes. Moreover, the difference in the magnitude of the indirect effects between the U.S. and the non-U.S. data suggests that these effects are specific to the situation in the U.S. Table 5, which compares effects across U.S. and non-U.S. participants, illustrates the uniqueness of the U.S. effects. We also found parallel patterns for the perceived effectiveness of social distancing: political orientation predicted willingness to observe social distancing, and this association was mediated by perceived effectiveness, with the mediation effect again being moderated by location.

Another weakness is that our measures of political orientation, particularly in Study 2, are not optimal. Whereas Republican-leaning Americans are likely to rate themselves as more conservative than Democrat-leaning Americans, this association is less clear for the economic dimension of the political compass used in Study 2. The political compass measure was chosen to make the political orientation data comparable across countries. However, the fact that the correlations are of a similar magnitude in both studies suggests that the right-to-left dimension functioned similarly to the conservative-to-liberal continuum. Most likely, Republican-leaning conservatives would identify as right-leaning relative to Democratic-leaning liberals. However, not all conservatives are Republicans, Trump supporters, or viewers of Fox News, and these characteristics would more directly index exposure to messages that downplayed COVID-19, as well as susceptibility to influence by such messages. In a future study, we would also assess participants on a conservatism-to-liberalism dimension to ascertain its correlation with the political compass. A further potential weakness is that we did not measure all variables concurrently during all waves. These decisions were made to conserve space and reduce participant burden, but they limit causal interpretations. Finally, the fact that each wave contained a different subset of the total sample and a differing time lag between baseline completion and follow-up survey completion (as a function of when participants enrolled in the study) complicated our analysis. In a future study, we would avoid this problem, even at the cost of enrolling fewer participants.

Our studies illustrate both the applicability of social- and health-psychological theories to a real-world issue and the use of a real-world problem to test psychological theories. The starting point of our analyses is the political situation in the U.S., where the former president as well as leading conservative politicians consistently downplayed the severity of the COVID-19 pandemic and belittled the effectiveness of scientific recommendations regarding health-protective behaviors. Like other researchers and opinion surveys before us, we showed that political orientation is associated with compliance with recommended health-protective behaviors. We expanded on this research by testing and supporting the theory-based prediction that this association is mediated by perceived risk of infection, perceived severity of the infection, and perceived effectiveness of the recommended health-protective behaviors.
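To make the structure of such a mediation test concrete, the sketch below illustrates the generic simple-mediation logic (X = political orientation, M = perceived risk, Y = health-protective behavior) with a percentile-bootstrap test of the indirect effect. It uses simulated data and hypothetical variable names; it is an illustration of the approach, not the authors' analysis script (see Hayes [38] for the regression-based mediation framework cited in the paper).

```python
# Illustrative simple mediation: conservatism -> perceived_risk -> behavior.
# Simulated data and hypothetical variable names; a sketch of the generic
# a*b indirect-effect test, not the authors' actual analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
conservatism = rng.normal(size=n)
perceived_risk = -0.2 * conservatism + rng.normal(size=n)                   # a path
behavior = 0.4 * perceived_risk - 0.1 * conservatism + rng.normal(size=n)   # b and c-prime paths
df = pd.DataFrame({"conservatism": conservatism,
                   "perceived_risk": perceived_risk,
                   "behavior": behavior})

def indirect_effect(data):
    # a path: X -> M
    a = smf.ols("perceived_risk ~ conservatism", data=data).fit().params["conservatism"]
    # b path: M -> Y, controlling for X
    b = smf.ols("behavior ~ perceived_risk + conservatism", data=data).fit().params["perceived_risk"]
    return a * b

# Percentile bootstrap confidence interval for the a*b indirect effect
boot = [indirect_effect(df.sample(frac=1.0, replace=True)) for _ in range(1000)]
print("indirect effect:", round(indirect_effect(df), 3),
      "95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```

Extending this sketch to several mediators, or to moderated mediation (e.g., U.S. versus non-U.S. participants), would add parallel mediator equations or an interaction term, respectively.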

Individuals who fail to comply with health-protective behavior recommendations increase their chances of contracting COVID-19, of dying or suffering long-term effects from the disease, and of spreading it to others. Our studies suggest that politicized messages from leaders and media outlets that downplay risks might be linked to increased spread of COVID-19. Indeed, U.S. counties that voted for Donald Trump over Hillary Clinton in 2016 not only showed less social distancing; this reduced social distancing was also associated with subsequently higher COVID-19 infections and fatalities [16].

Supporting information

S1 Table. Unstandardized coefficients for the paths from baseline political orientation to perceived health risk (a) and from perceived risk to health-protective behaviors (b) at five time points, Study 1.

(DOCX)

S2 Table. Number of participants from the most frequently represented countries, Study 2.

(DOCX)

S3 Table. Correlations between health-protective behaviors at follow-up, Study 2.

(DOCX)

S4 Table. Results of robustness checks in which participants were nested within country, with date of baseline survey completion, age, gender, and education included as covariates.

(DOCX)

S5 Table. Correlations between political orientation and perceived risk, perceived efficacy, and health-protective behaviors.

(DOCX)

S6 Table. Perceived risk and severity mediate the relationship between political orientation and health-protective behaviors, separated by wave, Study 2.

(DOCX)

S7 Table. Perceived risk and efficacy mediate the effects of political orientation on wearing a mask, by wave, Study 2.

(DOCX)

S8 Table. Correlations between political orientation and other variables across specific comparison countries.

(DOCX)

S1 Fig. Distribution of days after March 19 (start of the survey) on which participants completed the baseline survey.

(TIF)

Data Availability

Data cannot be shared publicly because the institution governing the data collection and management has deemed political orientation a sensitive category of personal data. Data are available from the data managers of the PsyCorona project to researchers who meet the criteria for access to confidential data; requests should be sent to psycorona@rug.nl. The authors had no special access privileges to the data that others requesting the data would not have.

Funding Statement

This research received support from New York University Abu Dhabi (VCDSF/75-71015) to J.N., the University of Groningen (Sustainable Society & Ubbo Emmius Fund) to N.P.L., and the Instituto de Salud Carlos III (COV20/00086), co-funded by the European Regional Development Fund (ERDF, ‘A way to make Europe’), to M.M. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Alfano V, Ercolano S. The efficacy of lockdown against COVID-19: a cross-country panel analysis. Applied health economics and health policy. 2020. Aug;18:509–17. doi: 10.1007/s40258-020-00596-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Vinceti M, Filippini T, Rothman KJ, Ferrari F, Goffi A, Maffeis G, et al. Lockdown timing and efficacy in controlling COVID-19 using mobile phone tracking. EClinicalMedicine. 2020. Aug 1;25:100457. doi: 10.1016/j.eclinm.2020.100457 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.De Hoog N, Stroebe W, De Wit JB. The impact of fear appeals on processing and acceptance of action recommendations. Personality and social psychology bulletin. 2005. Jan;31(1):24–33. doi: 10.1177/0146167204271321 [DOI] [PubMed] [Google Scholar]
  • 4.De Hoog N, Stroebe W, De Wit JB. The impact of vulnerability to and severity of a health risk on processing and acceptance of fear-arousing communications: A meta-analysis. Review of General Psychology. 2007. Sep;11(3):258–85. doi: 10.1037/1089-2680.11.3.258 [DOI] [Google Scholar]
  • 5.Janz NK, Becker MH. The health belief model: A decade later. Health education quarterly. 1984. Mar;11(1):1–47. doi: 10.1177/109019818401100101 [DOI] [PubMed] [Google Scholar]
  • 6.Rogers RW. Cognitive and psychological processes in fear appeals and attitude change: A revised theory of protection motivation. Social psychophysiology: A sourcebook. 1983:153–76. [Google Scholar]
  • 7.Rogers RW, Mewborn CR. Fear appeals and attitude change: effects of a threat’s noxiousness, probability of occurrence, and the efficacy of coping responses. Journal of personality and social psychology. 1976. Jul;34(1):54. doi: 10.1037//0022-3514.34.1.54 [DOI] [PubMed] [Google Scholar]
  • 8.Abraham C, Sheeran P. The health belief model. Predicting health behaviour: Research and practice with social cognition models. 2015. May 16;2:30–55. [Google Scholar]
  • 9.Tannenbaum MB, Hepler J, Zimmerman RS, Saul L, Jacobs S, Wilson K, et al. Appealing to fear: A meta-analysis of fear appeal effectiveness and theories. Psychological bulletin. 2015. Nov;141(6):1178. doi: 10.1037/a0039729 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Stroebe W. Social psychology and health. McGraw-Hill Education (UK); 2011. May 1. [Google Scholar]
  • 11.Andersen M. Early evidence on social distancing in response to COVID-19 in the United States. Available at SSRN 3569368. 2020. Apr 6. [Google Scholar]
  • 12.Calvillo DP, Ross BJ, Garcia RJ, Smelter TJ, Rutchick AM. Political ideology predicts perceptions of the threat of covid-19 (and susceptibility to fake news about it). Social Psychological and Personality Science. 2020. Nov;11(8):1119–28. doi: 10.1177/1948550620940539 [DOI] [Google Scholar]
  • 13.Clinton J, Cohen J, Lapinski J, Trussler M. Partisan pandemic: How partisanship and public health concerns affect individuals’ social mobility during COVID-19. Science advances. 2021. Jan 1;7(2):eabd7204. doi: 10.1126/sciadv.abd7204 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Franz B, Dhanani LY. Beyond political affiliation: an examination of the relationships between social factors and perceptions of and responses to COVID-19. Journal of Behavioral Medicine. 2021. Apr 20:1–2. doi: 10.1007/s10865-021-00226-w [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Gadarian SK, Goodman SW, Pepinsky TB. Partisanship, health behavior, and policy attitudes in the early stages of the COVID-19 pandemic. Plos one. 2021. Apr 7;16(4):e0249596. doi: 10.1371/journal.pone.0249596 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Gollwitzer A, Martel C, Brady WJ, Pärnamets P, Freedman IG, Knowles ED, et al. Partisan differences in physical distancing are linked to health outcomes during the COVID-19 pandemic. Nature human behaviour. 2020. Nov;4(11):1186–97. doi: 10.1038/s41562-020-00977-7 [DOI] [PubMed] [Google Scholar]
  • 17.Gratz KL, Richmond JR, Woods SE, Dixon-Gordon KL, Scamaldo KM, Rose JP, et al. Adherence to Social Distancing Guidelines Throughout the COVID-19 Pandemic: The Roles of Pseudoscientific Beliefs, Trust, Political Party Affiliation, and Risk Perceptions. Annals of Behavioral Medicine. 2021. May;55(5):399–412. doi: 10.1093/abm/kaab024 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Huynh HP, Senger AR. A little shot of humility: Intellectual humility predicts vaccination attitudes and intention to vaccinate against COVID‐19. Journal of Applied Social Psychology. 2021. Apr;51(4):449–60. doi: 10.1111/jasp.12747 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Kerr J, Panagopoulos C, van der Linden S. Political polarization on COVID-19 pandemic response in the United States. Personality and Individual Differences. 2021. Sep 1;179:110892. doi: 10.1016/j.paid.2021.110892 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Rothgerber H, Wilson T, Whaley D, Rosenfeld DL, Humphrey M, Moore A, et al. Politicizing the COVID-19 pandemic: ideological differences in adherence to social distancing. Retrieved from: https://psyarxiv.com/k23cv/ [Google Scholar]
  • 21.Pew Research Center (April 2, 2020). 5 facts about partisan reactions to COVID-19 in the U.S. Retrieved from https://www.pewresearch.org/fact-tank/2020/04/02/5-facts-about-partisan-reactions-to-covid-19-in-the-u-s/
  • 22.Pew Research Center (March 5, 2021). A year of U.S. public opinion on the coronavirus pandemic.
  • 23.Pew Research Center (October 21, 2014). Ebola worries rise, but most are ‘fairly’ confident in government, hospitals to deal with disease. Retrieved from: https://www.pewresearch.org/politics/2014/10/21/ebola-worries-rise-but-most-are-fairly-confident-in-government-hospitals-to-deal-with-disease/
  • 24.Stupi EK, Chiricos T, Gertz M. Perceived criminal threat from undocumented immigrants: Antecedents and consequences for policy preferences. Justice Quarterly. 2016. Feb 23;33(2):239–66. doi: 10.1080/07418825.2014.902093 [DOI] [Google Scholar]
  • 25.Campbell TH, Kay AC. Solution aversion: On the relation between ideology and motivated disbelief. Journal of personality and social psychology. 2014. Nov;107(5):809. doi: 10.1037/a0037963 [DOI] [PubMed] [Google Scholar]
  • 26.Forbes (September 10th, 2020). All the times Trump compared COVID-19 to the Flu, even after he knew COVID-19 was far more deadly. Retrieved from: https://www.forbes.com/sites/tommybeer/2020/09/10/all-the-times-trump-compared-covid-19-to-the-flu-even-after-he-knew-covid-19-was-far-more-deadly/#68e832b8f9d2
  • 27.CBS News (August 10th, 2020). Unmasked: How Trump’s mixed messaging on face-covering hurt U.S. coronavirus response. Retrieved from: https://www.nbcnews.com/politics/donald-trump/calendar-confusion-february-august-trump-s-mixed-messages-masks-n1236088.
  • 28.CNN (29th April, 2020). Pence flouts Mayo Clinic policy on masks- which is to wear one. Retrieved from: https://edition.cnn.com/2020/04/28/politics/mike-pence-mayo-clinic-mask/index.html
  • 29.Guardian (September 14th, 2020). Trump’s first indoor rally since June defies Covid laws, attacks Biden. Retrieved from https://www.theguardian.com/us-news/2020/sep/14/trumps-first-indoor-rally-since-june-defies-covid-laws-attacks-biden?CMP=Share_iOSApp_Other
  • 30.Pew Research Center (June 29, 2020). Americans rate CDC highly, Trump and his administration poorly, on getting the facts right about COVID-19. Retrieved from: https://www.journalism.org/2020/06/29/americans-rate-cdc-highly-trump-and-his-administration-poorly-on-getting-the-facts-right-about-covid-19/
  • 31.Hovland CI, Weiss W. The influence of source credibility on communication effectiveness. Public opinion quarterly. 1951. Jan 1;15(4):635–50. doi: 10.1086/266350 [DOI] [Google Scholar]
  • 32.Pornpitakpan C. The persuasiveness of source credibility: A critical review of five decades’ evidence. Journal of applied social psychology. 2004. Feb;34(2):243–81. doi: 10.1111/j.1559-1816.2004.tb02547.x [DOI] [Google Scholar]
  • 33.Petty RE, Cacioppo JT, Goldman R. Personal involvement as a determinant of argument-based persuasion. Journal of personality and social psychology. 1981. Nov;41(5):847. doi: 10.1037/0022-3514.41.5.847 [DOI] [Google Scholar]
  • 34.Bowling A. Just one question: If one question works, why ask several?. Journal of Epidemiology & Community Health [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Burisch M. Test length and validity revisited. European Journal of Personality. 1997. Nov;11(4):303–15. doi: [DOI] [Google Scholar]
  • 36.Gosling SD, Rentfrow PJ, Swann WB Jr. A very brief measure of the Big-Five personality domains. Journal of Research in personality. 2003. Dec 1;37(6):504–28. doi: 10.1016/S0092-6566(03)00046-1 [DOI] [Google Scholar]
  • 37.Stroebe W, Leander NP, Kruglanski AW. Is it a dangerous world out there? The motivational bases of American gun ownership. Personality and social psychology bulletin. 2017. Aug;43(8):1071–85. doi: 10.1177/0146167217703952 [DOI] [PubMed] [Google Scholar]
  • 38.Hayes AF. Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford publications; 2017. Dec 13. [Google Scholar]
  • 39.Alwin DF, Cohen RL, Newcomb TM. Political attitudes over the life span: The Bennington women after fifty years. Univ of Wisconsin Press; 1991. doi: 10.1038/sj.bdj.4807661 [DOI] [Google Scholar]
  • 40.Marwell G, Aiken MT, DEMERATH NJ III. The persistence of political attitudes among 1960s civil rights activists. Public Opinion Quarterly. 1987. Jan 1;51(3):359–75. doi: 10.1086/269041 [DOI] [Google Scholar]
  • 41.Sears DO, Funk CL. Evidence of the long-term persistence of adults’ political predispositions. The Journal of Politics. 1999. Feb 1;61(1):1–28. [Google Scholar]
  • 42.Levay KE, Freese J, Druckman JN. The demographic and political composition of Mechanical Turk samples. Sage Open. 2016. Mar 4;6(1):2158244016636433. doi: 10.1177/2158244016636433 [DOI] [Google Scholar]
  • 43.Paolacci G, Chandler J. Inside the Turk: Understanding Mechanical Turk as a participant pool. Current directions in psychological science. 2014. Jun;23(3):184–8. [Google Scholar]
  • 44.Baumgaertner B, Carlisle JE, Justwan F. The influence of political ideology and trust on willingness to vaccinate. PloS one. 2018. Jan 25;13(1):e0191728. doi: 10.1371/journal.pone.0191728 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Ferrer RA, Klein WM. Risk perceptions and health behavior. Current opinion in psychology. 2015. Oct 1;5:85–9. doi: 10.1016/j.copsyc.2015.03.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Logan J, Nederhoff D, Koch B, Griffith B, Wolfson J, Awan FA, et al. ‘What have you HEARD about the HERD?’ Does education about local influenza vaccination coverage and herd immunity affect willingness to vaccinate?. Vaccine. 2018. Jun 27;36(28):4118–25. doi: 10.1016/j.vaccine.2018.05.037 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Fridman A, Gershon R, Gneezy A. COVID-19 and vaccine hesitancy: A longitudinal study. PloS one. 2021. Apr 16;16(4):e0250123. doi: 10.1371/journal.pone.0250123 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Jost JT, Glaser J, Kruglanski AW, Sulloway FJ. Conservatism as motivated social cognition. Psychological bulletin. 2003;129(3):339–75. doi: 10.1037/0033-2909.129.3.339 [DOI] [PubMed] [Google Scholar]
  • 49.Pedersen WS, Muftuler LT, Larson CL. Conservatism and the neural circuitry of threat: economic conservatism predicts greater amygdala–BNST connectivity during periods of threat vs safety. Social cognitive and affective neuroscience. 2018. Jan;13(1):43–51. doi: 10.1093/scan/nsx133 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Jeffery RW. Risk behaviors and health: Contrasting individual and population perspectives. American Psychologist. 1989. Sep;44(9):1194. doi: 10.1037/0003-066X.44.9.1194 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Amitava Mukherjee

5 Jun 2021

PONE-D-21-14147

Politicization of COVID-19 Health-Protective Behaviors in the United States: Longitudinal and Cross-National Evidence

PLOS ONE

Dear Dr. vanDellen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 20 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Amitava Mukherjee, ME, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

  1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

2a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

2b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

3. Please amend your authorship list in your manuscript file to include author Anton Kurapov.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Review of : Politicization of COVID-19 Health-Protective Behaviors in the United States: Longitudinal and Cross-National Evidence

This manuscript presents two studies: the first examines the associations between political ideology, perceived COVID-19 risk, and engagement in three protective behaviors in a series of US surveys. The second study expands to include mask wearing and vaccine intentions as further outcomes and compares the US against an international non-US sample.

Although the samples in these studies are large, the conclusions that can be drawn from the data are limited due to the nature of the survey roll out and structure of the data. The authors are keenly aware of this and, for the most part, clearly acknowledge these limitations.

The sample sizes are impressive and the overall statistical approach (examining correlations and mediation) is reasonable.

The basic premise of the manuscript is clear and worthwhile, essentially examining how political ideology in the US plays into the Health Belief Model (HBM) in the context of COVID-19 protective behaviors. The HBM posits that engagement in a given health behavior is the product of perceived susceptibility and severity of a disease and perceived efficacy of the behavior in preventing it (among other factors). The authors sensibly investigate the extent to which several HBM predictors mediate the established relationship between ideology and protective behavior in the US.

Below I outline my major and minor concerns with specific revisions indicated as numbered points.

Major concerns/revisions:

The biggest challenge for the studies appears to arise from the fact that political orientation was only measured at one point, and often at a different time point than the other variables in analyses. If I am incorrect about this, then a wholly different set of analyses would be more appropriate and far more informative (e.g. Random Intercepts Cross-Lagged models).

Thus the conclusions drawn are based on the assumption that political orientation is a stable individual factor that did not change throughout the survey period. I am not sure how well this assumption holds, and would like to see some more discussion and justification of this. Ultimately, the authors are limited by the data they have and have made choices about how to best analyze it given those limitations.

In Study 1 the authors examine the correlations between (March baseline) political orientation and health behaviors, and identify perceived health risk of COVID-19 as a significant mediator explaining the association between political orientation and intended protective behaviour across the five waves of the survey.

1) Please make clear in tables 3 and 4 that it is associations with ‘*baseline* political orientation’ that are presented.

In Study 2 The authors analyze the results of a large international convenience/snowball sample. After several readings I’m still not sure of the nature of surveys. A key question I have is when were people recruited? My immediate assumption was that large a sample was recruited in March and then administered follow up surveys. However, the results suggest that new participants were recruited throughout the survey period. At any given time point in Table 4, to what extent were participants returning (who had provided their political orientation in a previous wave) vs new participants (who provided their political orientation concurrently)?

2) Supplementary tables outlining the distribution of participants recruited in each wave, and their subsequent participation in following waves should be included (i.e. cross tabulating wave participation x wave recruited). The authors should also ensure that this is adequately captured in the raw data to eventually be made available with the article.

This becomes even more complicated in Table 6 where perceived risk and mask-wearing behavior were measured at different time points. This leads to a ‘cross-sectional’ mediation analysis where the effect of political orientation (as I understand it) measured at either W1,W2,W3,W4, or W5 on mask wearing at W6 is mediated by perceived risk at W5. Furthermore it is unclear how many participants are captured in such an analysis as we don’t know how many people completed both W5 and W6. It is entirely possible that I am misunderstanding this but, if so, the authors need to be clearer about *when* each construct, including political orientation was measured.

The ‘baseline’ analyses are also troublesome, in that (I assume) they cover all people who completed the baseline survey at some point between March and July. This covers a period where peoples’ perceptions and understanding of the virus would have been changing dramatically.

3) The authors should either break up the baseline analyses into separate time points, or clearly outline, both in their results and limitations, the possible problems with covering such a long time period.

Lastly the comparison between the US and non-US is a little foolhardy. Based on such an analysis, you cannot draw that conclusion that the US is somehow different to the rest of the world. It is possible that many countries are like the US in terms of politicization, and in other countries the reverse pattern plays out (i.e. liberals perceive less risk/engage in less behavior). As there is no information on the composition of the sample in terms of country (and across waves also), the reader is unable to judge.

4) Provide some indication of the extent to which other countries were represented in the data. This would not have to be a frequency table of all countries but perhaps the top 20 or so.

5) I feel it would be useful to offer some specific country comparisons (perhaps in a supplement, perhaps those where the authors have the largest sample sizes) this would offer some weight to their claim that US is different to *other countries*, rather than only comparing it to the lumped together ‘rest of the world’

6) Refrain from referring to comparing “across countries” or “between countries” or “country of residence” – non-US is not a country.

Overall, my main concerns regarding Study 2 stem from a lack of clarity about who was asked about WHAT, WHEN.

The analyses are not ideal for answering the questions that the authors pose (for example HBM predictors are only considered as single mediators rather than a more comprehensive application of the full model). But I believe that they are trying to make the most of the large dataset available, covering multiple constructs in different waves. No study is perfect, and I can personally appreciate the difficulty in getting a large-scale survey off the ground right in the middle of the first wave of an international pandemic. Given the specific time frame examined, I believe this study can make a useful contribution to the literature if appropriately revised.

Minor revisions:

Abstract

1) Remove the term ‘cross-cultural’. This was definitely not a cross-cultural analysis.

Introduction

2) P7 Capitalize protection motivation theory (for consistency with HBM)

3) P8 “…within the context of COVID-19, the group deemphasized the public health threat…” who is the group here? Conservatives? I feel like this might be referring more to conservative elites (e.g. Trump).

4) P9 there are number of citations of news articles here, which is fine. Are there any more systematic, peer-reviewed analyses of media/elite statements that could be cited to as evidence of the claim?

Study 1 Methods

5) I would be clear here, and throughout the rest of the manuscript, that what was measured was *intended* behaviors (‘I would…’) not reported behaviors (‘I have…’).

6) P15 Please report the results of your analyses controlling for demographics in a supplement.

Study 2 Methods

7) P19 “March 27th to July 13th, 2020.” Inconsistent superscripting

8) P19 “The study was approved by the Ethics…” unnecessary quote marks

9) Measuring political orientation with a 400pt scale is odd - rescaling (e.g. to -1+1) would not change the results but might make the mediation results a little more interpretable and save a few zeros.

10) P20 “Perceived Severity of Infection.” I’m pretty sure this paragraph repeats itself.

Study 2 Results

11) P23 it is great that the authors conducted multilevel analyses as a robustness checks, and I’m fine with them including the simpler analyses in the main text. But they should provide at least a summary of the results of these additional analyses in the supplementary material.

12) Again I would reiterate in this section just what is referred to when discussing ‘political orientation’ – i.e. at which point(s) it was measured.

13) Table 4, do the month rows (e.g. Late April/Early May) correspond to waves?

General Discussion

14) P31 In discussing mask use, the authors should acknowledge that the primary reason for wearing masks is to prevent the spread of the virus *to others* rather than self-protection.

15) P31 “…happening under other political leadership” – I would be specific here and note that it was democratic leadership.

16) P31 “Our studies, however, go beyond merely demonstrating that political communication has consequences that may be life-threatening.” This sentence is overstating the results, study did not investigate political communication.

17) “…allowing us to examine the stability of associations over…Ten waves in study 2” This should be rephrased – no associations were examined over all ten waves, and given political orientation was only measured once, I don’t think you can make strong claims about stability.

18) P34 “Our studies show that messages from leaders and media outlets…” again this is overstates the results. The studies did not examine media messages. ‘Indicate’ or ‘suggests’ would be more appropriate and tentative verb to use.

19) In light of the many limitations of this study, it would be good to outline a more perfect version that could be undertaken in future (e.g. examining all HBM predictors as parallel mediators of the association between politics and behavior; conducting a truly longitudinal panel study with all measures repeated at all waves, allowing statistical tests of stability and stronger causal inferences…).

Reviewer #2: In this manuscript, the [impressively large] collaboration of coauthors use two longitudinal studies of U.S. residents to show that political conservatism was inversely associated with perceived health risk and adoption of health-protective COVID-19 behaviors over time. They also found the effects of political orientation on health-protective behaviors were mediated by perceived risk of infection, perceived severity of infection, and perceived effectiveness of the health-protective behaviors. The manuscript also includes crossnational analyses to show effects were stronger in the U.S. (N=10,923) than in an international sample (total N=51,986), highlighting the increased and overt politicization of health behaviors in the U.S.

This is an interesting study that examines aspects and implications of the relationship between political orientation and health behaviors in the case of COVID-19. It adds to a growing number of studies that have made similar observations. Although the study does not advance this literature much theoretically, it does include some additional mediating variables that contribute to our understanding of these relationships. Overall, I find the study to be generally well-written and analyzed, although I have some concerns. If these can be addressed, the study may be publishable.

First and foremost, as the authors acknowledge, the nature of the MTurk sample is problematic. The authors recognize this but dismiss the implications too readily. Why should we believe these differences did not affect results? Also, the authors should show differences between their samples (every wave) and the general US population and probe attrition in the sample further to assure readers there were no imbalances that affect results. More needs to be done here.

The authors could also make a more compelling case by 1) presenting key patterns and findings visually in figures and 2) reporting uncertainty measures and other methodological details more clearly.

The authors also mention differences between partisanship and ideology but could do more here to distinguish and consider implications.

Finally, lots of recent work (including studies by Sander van der Linden and colleagues) on this topic is overlooked and should be integrated.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: John Kerr

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Oct 20;16(10):e0256740. doi: 10.1371/journal.pone.0256740.r002

Author response to Decision Letter 0


3 Jul 2021

Reviewer 1

“The biggest challenge for the studies appears to arise from the fact that political orientation was only measured at one point, and often at a different time point than the other variables in analyses. If I am incorrect about this, then a wholly different set of analyses would be more appropriate and far more informative (e.g. Random Intercepts Cross-Lagged models).”

We considered Cross-Lagged models with random intercepts but as we only measured political orientation at Baseline, these alternative models are not useful for our data. Although we could apply them to our data, the straightforward approach of correlations and mediations allows for much easier interpretation of effect sizes and accessibility to a larger audience. Cross-lagged models would be unlikely to reveal anything different than our current analysis given the consistency of effects across time.

“Thus the conclusions drawn are based on the assumption that political orientation is a stable individual factor that did not change throughout the survey period. I am not sure how well this assumption holds, and would like to see some more discussion and justification of this. Ultimately, the authors are limited by the data they have and have made choices about how to best analyze it given those limitations.”

The reviewer is correct that our results hinge on the assumption that political orientation is a stable individual factor. Although our data cannot directly speak to this assumption, we have now included an extensive list of references that suggest political orientation in adulthood is a stable individual factor.

“In Study 1 the authors examine the correlations between (March baseline) political orientation and health behaviors, and identify perceived health risk of COVID-19 as a significant mediator explaining the association between political orientation and intended protective behaviour across the five waves of the survey. Please make clear in tables 3 and 4 that it is associations with ‘*baseline* political orientation’ that are presented.”

We have updated the titles in all tables to be clear that political orientation refers to a baseline measurement.

“In Study 2 The authors analyze the results of a large international convenience/snowball sample. After several readings I’m still not sure of the nature of surveys. A key question I have is when were people recruited? My immediate assumption was that large a sample was recruited in March and then administered follow up surveys. However, the results suggest that new participants were recruited throughout the survey period. At any given time point in Table 4, to what extent were participants returning (who had provided their political orientation in a previous wave) vs new participants (who provided their political orientation concurrently)?”

We have clarified the details of how the survey was conducted. Our goal with the study was to collect as large a cross-national sample as possible while also collecting data as the pandemic was unfolding. Most of the participants in our study (>75%) completed the Baseline survey within the first thirty days of its availability to the public. Rather than delete the participants who completed the study later, and because political orientation is a stable individual difference, we opted to retain them in the analyses. Date of Baseline survey completion was not associated with political orientation in the U.S. (r = .01), but it was associated with political orientation for participants outside of the U.S. (r = .11). For this reason, we have added date of completion of the Baseline survey as a covariate in our robustness checks.

“Supplementary tables outlining the distribution of participants recruited in each wave, and their subsequent participation in following waves should be included (i.e. cross tabulating wave participation x wave recruited). The authors should also ensure that this is adequately captured in the raw data to eventually be made available with the article.”

We have added this information to the supplemental materials. Country of residence will be available in the data.

“This becomes even more complicated in Table 6 where perceived risk and mask-wearing behavior were measured at different time points. This leads to a ‘cross-sectional’ mediation analysis where the effect of political orientation (as I understand it) measured at either W1,W2,W3,W4, or W5 on mask wearing at W6 is mediated by perceived risk at W5. Furthermore it is unclear how many participants are captured in such an analysis as we don’t know how many people completed both W5 and W6. It is entirely possible that I am misunderstanding this but, if so, the authors need to be clearer about *when* each construct, including political orientation was measured.”

We decided to simplify the analyses by averaging participants’ responses across waves for each variable. In doing so, we represent participants’ data in a more cross-sectional way; in essence, this approach represents each person’s behavior at a trait level over the available time points. This approach also resolves the problem of behaviors and beliefs being measured at different time points, as all time points are now incorporated into the analysis. Given the consistency in the direction and size of our associations over time, this approach is warranted. We continue to report the original analyses separated by wave in the supplemental materials.
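For readers who want to see this collapsing step concretely, the following is a minimal sketch of the idea. The column names (participant_id, wave, mask_wearing, perceived_risk) and the toy data are hypothetical placeholders, not the actual PsyCorona variables.

```python
# Collapse each participant's repeated wave responses into one trait-level
# score per variable by averaging across whatever waves are available.
# Hypothetical column names and toy data; a sketch, not the project's code.
import pandas as pd

long_df = pd.DataFrame({
    "participant_id": [1, 1, 1, 2, 2],
    "wave":           [1, 2, 3, 2, 3],
    "mask_wearing":   [4, 5, 5, 2, 3],
    "perceived_risk": [3, 4, 4, 2, 2],
})

# One row per participant: the mean of every available wave. Participants
# who responded in only a single wave are retained rather than dropped.
trait_level = (long_df
               .groupby("participant_id")[["mask_wearing", "perceived_risk"]]
               .mean()
               .reset_index())
print(trait_level)
```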

“The ‘baseline’ analyses are also troublesome, in that (I assume) they cover all people who completed the baseline survey at some point between March and July. This covers a period where peoples’ perceptions and understanding of the virus would have been changing dramatically. The authors should either break up the baseline analyses into separate time points, or clearly outline, both in their results and limitations, the possible problems with covering such a long time period.”

Although understanding of the virus was changing dramatically, the associations between political orientation and beliefs and behaviors related to COVID-19 did not change in our analysis over time. We have now included date of baseline survey completion as a covariate in our robustness checks; all observed interactions remain strong even after including baseline survey completion date.

“Lastly the comparison between the US and non-US is a little foolhardy. Based on such an analysis, you cannot draw that conclusion that the US is somehow different to the rest of the world. It is possible that many countries are like the US in terms of politicization, and in other countries the reverse pattern plays out (i.e. liberals perceive less risk/engage in less behavior). As there is no information on the composition of the sample in terms of country (and across waves also), the reader is unable to judge. Provide some indication of the extent to which other countries were represented in the data. This would not have to be a frequency table of all countries but perhaps the top 20 or so. I feel it would be useful to offer some specific country comparisons (perhaps in a supplement, perhaps those where the authors have the largest sample sizes) this would offer some weight to their claim that US is different to *other countries*, rather than only comparing it to the lumped together ‘rest of the world’”

This is a fair point from the reviewer. We decided to compare the U.S. to (and report the associations within) the six countries with the largest sample at Baseline (Spain, Romania, Netherlands, Serbia, Indonesia, and Greece). Additionally, although it was not one of the top 6 countries, we thought that Canada—because it shares a border with the U.S. and might have exposure to some of the messages from U.S. politicians—would be a conservative comparison country and we included it in analyses. We compared effects across these countries in the follow-up data for consistency. We made these decisions about comparisons prior to examining any specific patterns of association in the data. We now reference these analyses in the main text and report supplemental analyses within these countries. The patterns in the U.S. differed markedly from those in Spain, Romania, Netherlands, Indonesia, and Greece. The U.S. also differed in some ways from Canada and Serbia, although the directions of the effects were consistent and the magnitude of the effects was often similar. We leave discussions of these specific patterns to political scientists. We have also added a distribution of the participants represented in the top 20 countries at Baseline and in the Follow-up analyses (see Table S2).

Refrain from referring to comparing “across countries” or “between countries” or “country of residence” – non-US is not a country.

We have made this change throughout the manuscript.

Overall, my main concerns regarding Study 2 stem from a lack of clarity about who was asked about WHAT, WHEN.

The analyses are not ideal for answering the questions that the authors pose (for example HBM predictors are only considered as single mediators rather than a more comprehensive application of the full model). But I believe that they are trying to make the most of the large dataset available, covering multiple constructs in different waves. No study is perfect, and I can personally appreciate the difficulty in getting a large-scale survey off the ground right in the middle of the first wave of an international pandemic. Given the specific time frame examined, I believe this study can make a useful contribution to the literature if appropriately revised.

We thank this reviewer for their careful consideration of our work and the context in which it was conducted. As they note, our aims were to provide the most comprehensive coverage of reactions to the pandemic in a manner which would be easily interpretable by a large readership.

“1) Remove the term ‘cross-cultural’. This was definitely not a cross-cultural analysis.

Introduction

2) P7 Capitalize protection motivation theory (for consistency with HBM)

3) P8 “…within the context of COVID-19, the group deemphasized the public health threat…” who is the group here? Conservatives? I feel like this might be referring more to conservative elites (e.g. Trump).

4) P9 there are number of citations of news articles here, which is fine. Are there any more systematic, peer-reviewed analyses of media/elite statements that could be cited to as evidence of the claim?

Study 1 Methods”

We have made these changes and clarifications.

“5) I would be clear here, and throughout the rest of the manuscript, that what was measured was *intended* behaviors (‘I would…’) not reported behaviors (‘I have…’).”

We apologize for this confusion. The virus mitigation behaviors were measured in present tense (e.g., To minimize my chances of getting Coronavirus, I…. wash my hands more often.) We had added the word ‘would’ by mistake. Wearing a face covering was also captured with present tense (i.e., wearing a face covering in the last week). Only vaccine intentions were captured as intentions, as the vaccine was not available at the time of the study. We have reviewed the description of our measures to be clear about how items were measured.

“6) P15 Please report the results of your analyses controlling for demographics in a supplement.”

We now provide these analyses in the supplement.

“Study 2 Methods

7) P19 “March 27th to July 13th, 2020.” Inconsistent superscripting

8) P19 “The study was approved by the Ethics…” unnecessary quote marks.

We have made these changes.

“9) Measuring political orientation with a 400pt scale is odd - rescaling (e.g. to -1+1) would not change the results but might make the mediation results a little more interpretable and save a few zeros.”

We have rescaled this measure for the mediational analyses. The reviewer was correct that doing so improved the readability of the results.
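As an illustration of this kind of rescaling, the sketch below linearly maps a wide-range score onto -1 to +1. The 0 to 400 input range is assumed purely for illustration (substitute the scale's actual endpoints); any linear rescaling of this sort leaves the substantive results unchanged.

```python
# Linearly rescale a political compass score to the -1..+1 range so that
# mediation coefficients are easier to read. The 0-400 input range is an
# assumption for illustration, not the measure's documented endpoints.
def rescale(x, lo=0.0, hi=400.0):
    """Map x from [lo, hi] onto [-1, +1]."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

print(rescale(0), rescale(200), rescale(400))  # -1.0 0.0 1.0
```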

“10) P20 “Perceived Severity of Infection.” I’m pretty sure this paragraph repeats itself.”

We have addressed this issue.

“Study 2 Results

11) P23 it is great that the authors conducted multilevel analyses as a robustness checks, and I’m fine with them including the simpler analyses in the main text. But they should provide at least a summary of the results of these additional analyses in the supplementary material.”

The supplemental materials now report these analyses.

“12) Again I would reiterate in this section just what is referred to when discussing ‘political orientation’ – i.e. at which point(s) it was measured.”

We have added reminders that political orientation was assessed only at Baseline.

“13) Table 4, do the month rows (e.g. Late April/Early May) correspond to waves?”

Yes, in the original manuscript submission, these time frames referred to waves. We had the intuition that attaching the results to specific time points might be interesting to readers. In the current version of the manuscript, we collapse across the waves to create variables that reflect trait-like variables for each behavior for each person. As a result of this change, this particular issue is no longer relevant. We do still reference waves in the supplemental materials where we break down the results separately across time point.

“General Discussion

14) P31 In discussing mask use, the authors should acknowledge that the primary reason for wearing masks is to prevent the spread of the virus *to others* rather than self-protection.”

We acknowledge that mask-wearing has a benefit of protecting others, and that this message was communicated by the WHO and the CDC. However, mask wearing was also communicated as a way to protect oneself. To the extent that people were more concerned about spreading COVID-19 to others than about getting it themselves, they may also have intended to become vaccinated (to reduce their chances of spreading the virus), washed their hands (to reduce spreading the virus by touching surfaces with dirty hands), avoided large crowds (so as to avoid spreading the virus), and self-quarantined if they were sick. Thus, all the health measures we assessed could measure both self- and other-protection motivations, and in this case, we were focused on the plausibility of variables related to primarily self-protection motivations. To the extent that factors such as age may have driven self- (vs. other-) motivations for engaging in the health behaviors, our robustness checks would have captured this variance.

“15) P31 “…happening under other political leadership” – I would be specific here and note that it was democratic leadership.”

We have made this change.

“16) P31 “Our studies, however, go beyond merely demonstrating that political communication has consequences that may be life-threatening.” This sentence is overstating the results, study did not investigate political communication.”

We have acknowledged that we do not directly test political communication and altered this paragraph.

“17) “…allowing us to examine the stability of associations over…Ten waves in study 2” This should be rephrased – no associations were examined over all ten waves, and given political orientation was only measured once, I don’t think you can make strong claims about stability.”

We have changed the language here to avoid talking about stability of findings.

“18) P34 “Our studies show that messages from leaders and media outlets…” again this is overstates the results. The studies did not examine media messages. ‘Indicate’ or ‘suggests’ would be more appropriate and tentative verb to use.”

We have re-written the last paragraph to be more speculative and integrate our findings with other research.

“19) In light of the many limitations of this study, it would be good to outline a more perfect version that could be undertaken in future (e.g. examining all HBM predictors as parallel mediators of the association between politics and behavior; conducting a truly longitudinal panel study with all measures repeated at all waves, allowing statistical tests of stability and stronger causal inferences…).”

We have added these points to the discussion.

“Reviewer #2: In this manuscript, the [impressively large] collaboration of coauthors use two longitudinal studies of U.S. residents to show that political conservatism was inversely associated with perceived health risk and adoption of health-protective COVID-19 behaviors over time. They also found the effects of political orientation on health-protective behaviors were mediated by perceived risk of infection, perceived severity of infection, and perceived effectiveness of the health-protective behaviors. The manuscript also includes crossnational analyses to show effects were stronger in the U.S. (N=10,923) than in an international sample (total N=51,986), highlighting the increased and overt politicization of health behaviors in the U.S.

This is an interesting study that examines aspects and implications of the relationship between political orientation and health behaviors in the case of COVID-19. It adds to a growing number of studies that have made similar observations. Although the study does not advance this literature much theoretically, it does include some additional mediating variables that contribute to our understanding of these relationships. Overall, I find the study to be generally well-written and analyzed, although I have some concerns. If these can be addressed, the study may be publishable.

First and foremost, as the authors acknowledge, the nature of the MTurk sample is problematic. The authors recognize this but dismiss the implications too readily. Why should we believe these differences did not affect results? Also, the authors should show differences between their samples (every wave) and the general US population and probe attrition in the sample further to assure readers there were no imbalances that affect results. More needs to be done here.”

We have done extensive additional reporting of participant information by wave (see the supplemental materials). We also now more clearly describe how participants in Study 2 were collected. Although we had not highlighted this feature in the initial submission, a large subset (~25,000) of the participants in Study 2 were age- and gender-representative samples. All participants, both paid (representative samples) and unpaid (convenience samples), were invited to participate in unpaid follow-up surveys. Thus, some of the responses in Follow-up are likely to come from participants recruited specifically to be representative.

One of our analytic decisions in the revision also addresses the concern about attrition. That is, by focusing on the follow-up data using averages across wave for each participant, we reduce concerns about attrition—participants who responded in only one wave are included in the analysis. As Table S3 shows, although there was a very slight leftward shift among participants who participated in follow-up surveys compared to those who participated in only the Baseline survey, this shift was small and unlikely to influence the overall pattern of results.

“The authors could also make a more compelling case by 1) presenting key patterns and findings visually in figures and 2) reporting uncertainty measures and other methodological details more clearly.”

We have reviewed our measures and expanded detail. Although we did not opt to present the findings using figures, we hope the Editor and Reviewer will see that presenting our results for the Follow-up analyses using averages does make it easier for the reader to see the consistency of patterns across variables. Note that our supplemental materials continue to present the analysis separated by wave for the reader interested in those details.

“The authors also mention differences between partisanship and ideology but could do more here to distinguish and consider implications.”

The primary goal of this work was to connect available data on political orientation to variables in the Health Belief Model expected to relate to COVID-19 health-protective behaviors. Because this is not a political science paper, our data do not allow us to disentangle partisanship from ideology, nor to speak more to this issue than the paper currently does.

“Finally, lots of recent work (including studies by Sander van der Linden and colleagues) on this topic is overlooked and should be integrated.”

We have extended our discussion of recent work on this topic.

General Editorial Comments

Finally, as requested, we have made style adjustments to the documents, added Anton Kurapov to the author list, included captions, and updated file names (Tables, Figures, Supplemental Materials).

The one lingering issue is that we are not currently in a position to place the data in a repository. We are not currently permitted to share the data because the ethics board governing the study has deemed political orientation a special category of personal data. As a project, we have been committed to making the full dataset for the larger PsyCorona study (Study 2 in the current paper) available for public use. This will require working with the data protection, data privacy, legal, data management, and data security advisors at the governing institution (University of Groningen). Although we are committed to making the data available, we are not yet in a position to do so. We have enhanced transparency by linking to a codebook with the exact wording (and transcriptions) of every item included in the survey. As the broader issues related to data security are resolved, we hope the data for this project will become available. At this time, however, we are under a legal and ethical obligation not to place the data in a public repository.

Attachment

Submitted filename: Response To reviewers.docx

Decision Letter 1

Amitava Mukherjee

23 Jul 2021

PONE-D-21-14147R1

Politicization of COVID-19 Health-Protective Behaviors in the United States: Longitudinal and Cross-National Evidence

PLOS ONE

Dear Dr. vanDellen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 06 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Amitava Mukherjee, ME, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I would like to first acknowledge the authors' efforts to address my many points and their attempt to simplify the results.

Their revised analyses are now much more streamlined, but they have sacrificed some nuance in the process. I am relatively sceptical of their approach of averaging responses across waves. I’m willing to accept the proposed stability of political orientation over time. But there are inherent assumptions in averaging outcomes and mediators across waves. Fortunately the authors mitigate this to some extent by retaining the original analyses in the supplementary material, so the motivated reader can examine the effects by wave more closely and see they are relatively consistent over time. This is not the approach I would have taken, but the authors are clear about the choices they have made, and I believe their conclusions can be drawn from the results they report, so I’m willing to (begrudgingly) accept this approach.

I believe the manuscript to be publishable but note the following minor points for further revision.

I made a previous recommendation that the authors outline how future research might circumvent some of the limitations of the current study. Although their response states that they have added this to the discussion I was not able to spot it (apologies if I have missed it).

I think there may be an error in the column labels of Table S4. It currently appears that the waves were alternatingly conducted in US and non-US samples, with the table values being separated by location by row as well. I would guess that the US/Non-US labelling in the columns is an error?

Other than that point of confusion, I found this table to be very informative for understanding the structure of the dataset. I would recommend that this be included in the main text – especially as the average column represents the final variables used in the main text analyses. It would help the reader to understand what is going on a bit better. If the authors are hesitant about the size of the table then perhaps a very simple table or diagram outlining which variables were included in which waves could suffice?

I would also clarify in the main text that averages were calculated for any participant that provided a response for a given variable in at least one relevant wave. That is, a given participant's average score on, say, perceived efficacy of face coverings could be the average of between 1 and 4 responses, depending on how many of the relevant waves they participated in. At least that is how I interpret the revised approach – if I'm wrong then the nature of how averages were calculated should still be clarified.

A reference on page 10 (Franz and Dhanani (2021)) is incorrectly formatted.

Table 4 – clarify the nature of the comparison column; I assume this is the F result and effect size for the PO*location interaction term in the GLM, but it is not clear in the table or text.

In Table 5 I assume the indirect effect CI is based on bootstrapping; please note in the table note or text the number of samples. I would also recommend reiterating that the PO variable was standardised in the table note.

Reviewer #2: The revisions have strengthened the manuscript and addressed my major concerns and those of the other reviewer. I leave it to the editors to render a decision about the availability of the data and any lingering concerns this may pose with respect to journal policy. In any case, the revised manuscript is theoretically and technically sound and makes a valuable contribution to the burgeoning literature on this topic. I support publication in its current form.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: John R Kerr

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Oct 20;16(10):e0256740. doi: 10.1371/journal.pone.0256740.r004

Author response to Decision Letter 1


11 Aug 2021

Dr. Mukherjee,

We continue to appreciate you and your team of reviewers, and we are glad that all parties have acknowledged the improvements in the manuscript, many of which were made possible by the reviewers' initial thoughtful reviews. We have made the requested minor changes to the manuscript. We detail our responses to the reviewer comments below, providing reviewer comments in bold italics and our responses below each comment.

Sincerely,

Michelle vanDellen

Reviewer 1

“I would like to first acknowledge the authors' efforts to address my many points and their attempt to simplify the results.

Their revised analyses are now much more streamlined, but they have sacrificed some nuance in the process. I am relatively sceptical of their approach of averaging responses across waves. I’m willing to accept the proposed stability of political orientation over time. But there are inherent assumptions in averaging outcomes and mediators across waves. Fortunately the authors mitigate this to some extent by retaining the original analyses in the supplementary material, so the motivated reader can examine the effects by wave more closely and see they are relatively consistent over time. This is not the approach I would have taken, but the authors are clear about the choices they have made, and I believe their conclusions can be drawn from the results they report, so I’m willing to (begrudgingly) accept this approach.”

We appreciate the reviewer’s candor. We carefully considered many possibilities for presenting our results, especially given their longitudinal consistency. The choice we ultimately made would only have been possible given this consistency and, as the reviewer notes, we were able to use supplemental materials to report the full details of each analysis to allow readers access to this information.

“I believe the manuscript to be publishable but note the following minor points for further revision.

I made a previous recommendation that the authors outline how future research might circumvent some of the limitations of the current study. Although their response states that they have added this to the discussion I was not able to spot it (apologies if I have missed it).”

Although we had added statements to the discussion section that addressed our limitations, we did not explicitly state what we would do differently in future studies. We have now added these explicit statements.

“I think there may be an error in the column labels of Table S4. It currently appears that the waves were alternatingly conducted in US and non-US samples, with the table values being separated by location by row as well. I would guess that the US/Non-US labelling in the columns is an error?”

Thank you for catching this mistake. That row was relevant to the table from which we adapted Table S4 but not to Table S4 itself. We have removed it and hope the information is now more helpful.

“Other than that point of confusion, I found this table to be very informative for understanding the structure of the dataset. I would recommend that this be included in the main text – especially as the average column represents the final variables used in the main text analyses. It would help the reader to understand what is going on a bit better. If the authors are hesitant about the size of the table then perhaps a very simple table or diagram outlining which variables were included in which waves could suffice?”

We are glad this table was helpful and we moved it to the main text as the reviewer suggested.

“I would also clarify in the main text that averages were calculated for any participant that provided a response for a given variable in at least one relevant wave. That is, a given participant's average score on, say, perceived efficacy of face coverings could be the average of between 1 and 4 responses, depending on how many of the relevant waves they participated in. At least that is how I interpret the revised approach – if I'm wrong then the nature of how averages were calculated should still be clarified.”

The reviewer’s interpretation of how averages were calculated is correct; we have added text to the document to increase this clarity.

“A reference on page 10 (Franz and Dhanani (2021)) is incorrectly formatted.”

Because this reference was not an endnote (i.e., we referred to the study as the subject of the sentence), reference formatting was not relevant. We have modified the sentence to make the reference consistent with others in the text.

“Table 4 – clarify the nature of the comparison column; I assume this is the F result and effect size for the PO*location interaction term in the GLM, but it is not clear in the table or text.”

Table 4 is now included as Table 5. We have added text to clarify what the F and eta squared reports represent.
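As added context for readers, the sketch below shows one way an F test and a partial eta squared for a political orientation by location interaction could be obtained from a general linear model. This is an assumed illustration using simulated data and hypothetical variable names, not the analysis code used in the paper.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(1)
    n = 400
    df = pd.DataFrame({
        "po": rng.standard_normal(n),    # standardized political orientation
        "usa": rng.integers(0, 2, n),    # 1 = U.S. participant, 0 = non-U.S.
    })
    # Simulated outcome in which conservatism relates more strongly to behavior in the U.S.
    df["behavior"] = -0.2 * df["po"] - 0.3 * df["po"] * df["usa"] + rng.standard_normal(n)

    model = smf.ols("behavior ~ po * usa", data=df).fit()
    table = anova_lm(model, typ=2)       # ANOVA table with an F test per model term

    # Partial eta squared for the interaction: SS_effect / (SS_effect + SS_residual)
    ss_int = table.loc["po:usa", "sum_sq"]
    ss_res = table.loc["Residual", "sum_sq"]
    print("F =", round(table.loc["po:usa", "F"], 2),
          "partial eta^2 =", round(ss_int / (ss_int + ss_res), 3))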

“In Table 5 I assume the indirect effect CI is based on bootstrapping; please note in the table note or text the number of samples. I would also recommend reiterating that the PO variable was standardised in the table note.”

Table 5 is now Table 6. The bootstrapping approach and number of samples are already reported (see Page 29). We have added a reminder that the political orientation variable was standardized.
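For readers unfamiliar with the procedure, the following sketch illustrates a percentile bootstrap confidence interval for an indirect effect of (standardized) political orientation on a protective behavior through perceived risk. It uses simulated data, hypothetical variable names, and an arbitrary number of resamples, and is not the authors' code.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    po = rng.standard_normal(n)                  # standardized political orientation
    risk = 0.3 * po + rng.standard_normal(n)     # mediator: perceived risk
    behavior = 0.4 * risk + 0.1 * po + rng.standard_normal(n)

    def indirect_effect(po, risk, behavior):
        a = np.polyfit(po, risk, 1)[0]           # a path: mediator regressed on predictor
        X = np.column_stack([np.ones(len(po)), risk, po])
        b = np.linalg.lstsq(X, behavior, rcond=None)[0][1]  # b path, controlling for predictor
        return a * b

    boot = np.empty(5000)
    for i in range(boot.size):
        idx = rng.integers(0, n, n)              # resample participants with replacement
        boot[i] = indirect_effect(po[idx], risk[idx], behavior[idx])

    lower, upper = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect 95% CI: [{lower:.3f}, {upper:.3f}]")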

“Reviewer #2: The revisions have strengthened the manuscript and addressed my major concerns and those of the other reviewer. I leave it to the editors to render a decision about the availability of the data and any lingering concerns this may pose with respect to journal policy. In any case, the revised manuscript is theoretically and technically sound and makes a valuable contribution to the burgeoning literature on this topic. I support publication in its current form.”

We thank this reviewer for their time in reviewing the manuscript.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Amitava Mukherjee

16 Aug 2021

Politicization of COVID-19 Health-Protective Behaviors in the United States: Longitudinal and Cross-National Evidence

PONE-D-21-14147R2

Dear Dr. vanDellen,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Amitava Mukherjee, ME, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Amitava Mukherjee

27 Sep 2021

PONE-D-21-14147R2

Politicization of COVID-19 Health-Protective Behaviors in the United States: Longitudinal and Cross-National Evidence

Dear Dr. vanDellen:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Dr. Amitava Mukherjee

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Unstandardized coefficients for the paths from baseline political orientation to perceived health risk (a) and from perceived risk to health-protective behaviors (b) at five time points, Study 1.

    (DOCX)

    S2 Table. Number of participants from the most frequently represented countries, Study 2.

    (DOCX)

    S3 Table. Correlations between health-protective behaviors at follow-up, Study 2.

    (DOCX)

    S4 Table. Results of robustness checks in which participants were nested within country and date of baseline survey completion, age, gender, and education were included as covariates.

    (DOCX)

    S5 Table. Correlations between political orientation and perceived risk, perceived efficacy, and health-protective behaviors.

    (DOCX)

    S6 Table. Perceived risk and severity mediate the relationship between political orientation and health-protective behaviors, separated by wave, Study 2.

    (DOCX)

    S7 Table. Perceived risk and efficacy mediate effects of political orientation on wearing a mask, by wave, Study 2.

    (DOCX)

    S8 Table. Correlations between political orientation and other variables across specific comparison countries.

    (DOCX)

    S1 Fig. Distribution of days after March 19 (beginning of survey) in which participants completed the baseline.

    (TIF)

    Attachment

    Submitted filename: Response To reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    Data cannot be shared publicly because the institution governing the data collection and management has deemed political orientation a sensitive category of personal data. Data are available from the data managers of the PsyCorona project to researchers who meet the criteria for access to confidential data. Requests should be sent to the psycorona@rug.nl email address. The authors had no special access privileges to the data that others requesting the data will not have.

