. 2021 Sep 24;16(9):e0257945. doi: 10.1371/journal.pone.0257945

Social distancing in America: Understanding long-term adherence to COVID-19 mitigation recommendations

Christopher P Reinders Folmer 1,*, Megan A Brownlee 1, Adam D Fine 2, Emmeke B Kooistra 1, Malouke E Kuiper 1, Elke H Olthuis 1, Anne Leonore de Bruijn 1, Benjamin van Rooij 1,3
Editor: Wen-Jun Tu
PMCID: PMC8462713  PMID: 34559863

Abstract

A crucial question in the governance of infectious disease outbreaks is how to ensure that people continue to adhere to mitigation measures over the longer term. The present paper examines this question by means of a set of cross-sectional studies conducted in the United States during the COVID-19 pandemic, in May, June, and July of 2020. Using stratified samples that mimic the demographic characteristics of the U.S. population, it seeks to understand to what extent Americans continued to adhere to social distancing measures in the period after the first lockdown ended. Moreover, it seeks to uncover which variables sustained (or undermined) adherence across this period. For this purpose, we examined a broad range of factors, relating to people’s (1) knowledge and understanding of the mitigation measures, (2) perceptions of their costs and benefits, (3) perceptions of legitimacy and procedural justice, (4) personal factors, (5) social environment, and (6) practical circumstances. Our findings reveal that adherence was chiefly shaped by three major factors: respondents adhered more when they (a) had greater practical capacity to adhere, (b) morally agreed more with the measures, and (c) perceived the virus as a more severe health threat. Adherence was shaped to a lesser extent by impulsivity, knowledge of social distancing measures, opportunities for violating, personal costs, and descriptive social norms. The results also reveal, however, that adherence declined across this period, which was partly explained by changes in people’s moral alignment, threat perceptions, knowledge, and perceived social norms. These findings show that adherence originates from a broad range of factors that develop dynamically across time. Practically, these insights help to improve pandemic governance; theoretically, they contribute to the study of compliance and the way that rules come to shape behavior.

Introduction

The global COVID-19 outbreak in 2020 has made clear that the initial defense against a new deadly infectious disease requires large scale behavioral modification. Until there is a vaccine or a cure that can halt a pandemic outbreak, the only protection that people have is to ensure that the spread of the disease is minimized. This entails a range of changes in basic human conduct, from things that have limited economic and social consequences, such as better hand hygiene and the adoption of face masks, to profound, costly changes such as social distancing, forced isolation, quarantine, and broader lockdowns. Such measures only work, however, if people effectively follow them. In this way, the 2020 pandemic has shown the importance of understanding compliance and adherence to outbreak mitigation measures.

There is now a quite well-developed body of research about what made people across the globe follow mitigation measures when they were first adopted. When many governments adopted lockdown rules and social distancing measures as compulsory mandates during this initial “first wave” period, compliance levels were high. This is demonstrated not only by drastic reductions in mobility [1], but also by consequences associated with this, such as the unprecedented event that the price of oil turned negative [2]. A recent review identifies a range of variables that predicted compliance with social distancing measures during the first pandemic wave, including psychosocial, institutional, and situational variables, as well as incentives [3]. Furthermore, this review showed that some highly important policy variables were not associated with compliance during this period. These included, for instance, deterrence: neither the threat of more severe punishment nor more certain punishment predicted compliance.

After the first wave, many countries lifted the most invasive restrictions, such as lockdowns, and even some of the social distancing measures. Yet as the outbreak was neither controlled nor overcome through a vaccine or medicine, mitigation measures have remained essential for keeping the virus at bay. During the fall, however, many countries found themselves faced with a second pandemic wave. This raises the question of how adherence to mitigation measures developed during the summer months after the initial strict behavioral measures of the first wave were repealed. Has social distancing degraded back toward pre-pandemic normality, thus providing fertile ground for a resurgence of infections? And if so, which factors shaped such changes and caused people to abandon (or sustain) social distancing?

To understand these questions, the present research collected three cross-sectional surveys in the United States in May, June, and July of 2020. Using stratified samples that mimic the demographic characteristics of the U.S. population, we examined how Americans’ adherence to social distancing measures developed across this period, and which factors sustained or undermined it. To answer this question, we consider a broad range of influences, which can be arranged into six categories. First, factors related to people’s practical understanding and knowledge of the measures. Second, factors related to their perception of the costs and benefits of the measures. Third, factors related to their perceptions of the legitimacy and procedural justice of the measures and the responsible authorities. Fourth, personal factors relevant to adherence. Fifth, influences from people’s social environment. And sixth, practical circumstances that may constrain or facilitate their adherence. The paper allows us to understand how these variables shape adherence to social distancing measures in the critical period when a country starts to reopen after a first wave in a pandemic outbreak. By doing so, we contribute to the overall understanding of pandemic governance, as well as to insight into the interaction between rules and human conduct more generally. We also contribute to compliance theory by illuminating how influences at each of these levels may shape adherence over a longer time period. And finally, we identify important avenues for policy, on how adherence to mitigation measures can be promoted when strict measures are lifted.

The present study

Following the initial lockdown period in the spring of 2020, the United States underwent dramatic changes, both in terms of the spread of the virus and the measures to counter it. At the beginning of April, approximately 70% of Americans were subject to stay-at-home and social distancing measures [4, 5]. However, by the end of April, infections began to decline [6], and some states began re-opening or reopened altogether, starting with the Southern and Midwestern regions [7]. During the same period, federal social distancing guidelines were repealed [8], although such requirements remained in place in nearly every state [9]. Infection rates strongly accelerated from mid-June to late July, however, reaching a peak of almost 75,000 new cases per day [6].

The period between May and July was also characterized by increasing controversy over mitigation measures. There was a continuation of protests against mitigation measures, where people deliberately violated social distancing and other mitigation measures [10, 11]. Furthermore, mitigation measures became increasingly politicized. Compared to Democrats, Republicans voiced greater concern over the economic costs of mitigation measures, and less concern over the threat of the virus [12, 13]. This was illustrated during the 2020 presidential election campaign, where Republican mass rallies were held and some organizers actively countered social distancing measures (e.g., by removing “do not sit here” stickers) [14].

Throughout this period, mitigation measures have remained essential for keeping the virus at bay. But to what extent have Americans followed these measures, and what factors influenced them to do so (or not)? To answer these questions, we leveraged three surveys, collected in May, June, and July of 2020, among stratified samples that mimic the demographic characteristics of the U.S. population.

Our surveys focus on adherence to social distancing recommendations. Although they became less visible in federal public health recommendations after this period, social distancing recommendations continued to exist nearly everywhere at the state level [9]. Our surveys assessed self-reported adherence to social distancing recommendations across various situations, and examined how this adherence developed in the period after the first wave lockdown. Furthermore, we explored a range of factors that may explain why people did, or did not, adhere to these measures, derived from insights on compliance from psychology, criminology, sociology, and economics [5, 15–18]. In operationalizing the present study we broadly distinguish six categories of variables:

  1. People’s practical understanding and knowledge of mitigation measures. In order to be able to adhere to mitigation measures, it is necessary that people have sufficient knowledge of what is expected from them [19–21], and that the measures are clear to them [22]. Accordingly, our surveys firstly test people’s knowledge of social distancing measures, and the perceived clarity of the mitigation measures to them. Logically, a lack of knowledge about mitigation measures would be expected to reduce adherence, as would lower perceived clarity.

  2. The perceived costs and benefits of the mitigation measures. According to the rational choice theory of compliance, people’s tendency to adhere should decrease as the costs of doing so become larger, and increase as the benefits improve [23, 24]. Our surveys assess different aspects of this. A first aspect is people’s perception of the threat of the virus. Mitigation measures become more beneficial if people regard the virus as a severe threat to their own health and/or that of others. Yet the health risk of COVID-19 varies between individuals [25–27], as do subjective perceptions of this risk [28]. For this reason, we expected that adherence to social distancing measures would be higher among people who perceive the virus as a greater health threat. The second aspect is the cost people personally face due to the mitigation measures. Due to the pandemic and the measures to mitigate it, many Americans have suffered decreases to their income or employment opportunities [29]. We expected that adherence would be lower among people for whom the personal costs of the mitigation measures are greater. The final aspect is fear of punishment (deterrence). Although social distancing measures were not widely enforced in the U.S., sanctions did occur during the first wave lockdown [30]; furthermore, severe sanctions were communicated for other COVID-related violations [31]. Research on perceptual deterrence suggests that subjective perceptions of punishment may also influence compliance [32]. For this reason, we also examined subjective perceptions of punishment for not following social distancing measures, distinguishing punishment certainty and severity–the key dimensions identified by general deterrence theory [33–35]. We expected that adherence would be higher among people who regarded punishment as more certain, and more severe.

  3. The perceived legitimacy and procedural justice of the mitigation measures and the responsible authorities. As Max Weber explained: “So far as it is not derived merely from fear or from motives of expediency, a willingness to submit to an order imposed by one man or a small group, always implies a belief in the legitimate authority (Herrschaftsgewalt) of the source imposing it” (see [36] p. 37). Accordingly, we also aimed to capture such legitimacy perceptions in our study. Jackson and Gau [37] describe legitimacy as the property or quality of possessing rightful power and the acceptance of authority. To the extent that the law and legal authority are perceived to be legitimate, people will feel more obligated to obey the law. Individuals judge legal authority to be legitimate to the extent that it embodies the values of being appropriate and proper [38, 39]. Our study assesses six core areas of this. First, we assess people’s moral alignment with social distancing measures; i.e., the extent to which they agree with the substance of these measures [40, 41]. During the period that preceded our study, there were clear indications that support for mitigation measures differed among Americans [42, 43]. We expected that adherence would be higher among people with greater substantive support for social distancing measures.

A second core area is people’s evaluation of the authorities’ responses. To study this, we examined whether people found the overall approach taken by authorities to be consistent and adequate. We expected that adherence would be higher among people who evaluated the authorities’ approach more favorably. Relatedly, we assessed procedural justice, or people’s perceptions of the procedural fairness through which the rules were made and enforced. The more that people see that rules are made and enforced in a procedurally fair manner, the more likely it is that they will see them as legitimate–and the more likely it becomes that they will feel bound to obey such measures and come to comply with them [40, 41, 44].

In the final area, we assessed people’s sense of duty to obey the law. Such a sense of duty is a core expression, or a downstream consequence, of felt legitimacy: people who view authorities as legitimate should, in theory, develop a stronger sense of duty to obey the rules those authorities develop and enforce [45]. We used three measures to capture this. First is the normative obligation to obey the law, which captures people’s sense of duty to obey the law out of moral obligation [46]. Second is the non-normative duty to obey the law, which originates in a sense of coercion or fear, where people feel obligated to obey the law out of fear of the authorities [46]. And last is people’s obligation to obey the law in general, which captures the extent to which they feel that they should obey the law regardless of circumstances [47–49].

  4. Personal factors relevant to adherence. As the fourth facet of adherence to social distancing measures, we look at personal factors that are relevant for people’s stance toward mitigation measures, or for compliance more generally. A first factor is people’s trust in science. Scientific evidence (and indeed, scientists) have played an important (and very visible) role in the public health response to COVID-19, and the measures to mitigate it. Yet trust in science varies between individuals, which may strongly affect their willingness to follow these measures [50, 51]. We expected that adherence to social distancing measures would be higher among people who have greater trust in science. A second, related factor is trust in traditional media. Research suggests that distrust in traditional media is associated with greater belief in misinformation about COVID-19 [52]. This, in turn, predicts lower adherence to measures to mitigate it [53]. Accordingly, we expected adherence to social distancing measures to be higher among people with more trust in traditional media. The third personal factor is impulsivity. To effectively distance oneself from others, it is necessary to inhibit one’s usual tendency to get close to them. However, people differ in their capacity to control their impulses, and high levels of impulsivity predict deviant and rule-breaking behavior [54–58]. We therefore expected adherence to be lower among more impulsive individuals. Last, we examined people’s emotional state. According to strain theory, people may cope with negative emotions through rule-violating behavior [59–65]. Indeed, in the context of the COVID-19 pandemic as well, studies show that negative emotions may lead to lower compliance with quarantine measures [66]. Thus, we expected adherence to be lower among people who experienced more negative emotions.

  5. People’s social environment. As the fifth facet of adherence, we look at influences from people’s social environment–specifically descriptive social norms for adhering. In many situations, it is highly visible whether others do (or do not) adhere to social distancing measures. Research shows that perceptions of the norms for complying with particular rules or requests can have an important effect on people’s own tendency to do so: the more that they see others comply, the more likely they are to do so themselves; the more that they see others violate or disobey, the more likely they are to offend [67–70]. In light of this, our surveys assessed people’s perceptions of the norms for social distancing within their community. We expected adherence to be higher among people who perceived more adherence within their community.

  6. People’s practical circumstances. As the final facet, we looked at the practical circumstances that may shape people’s adherence. Whether people can adhere to social distancing measures (or conversely, can violate these) may also be contingent on the extent to which their practical circumstances allow them to do so. Our surveys looked at different aspects of this. First, people’s practical capacity to adhere. In order for people to effectively do as social distancing measures demand, it is necessary that their practical circumstances effectively allow them to do so. However, in practice, their capacity to adhere may often vary. For example, keeping a safe distance from others may be more difficult in crowded or constrained environments, or in occupations that cannot be conducted from home or at a distance. Capacity thus may strongly shape adherence, but it should be understood that these concepts are not identical. Simply having the capacity to commit a crime does not mean that one also will do so. The same applies to social distancing: being practically able to keep a distance from others does not mean that someone wishes to do so. We expected adherence to social distancing measures to be higher among people who had greater practical capacity to adhere to these measures. The second aspect is people’s opportunities for violating the measures. In order to violate social distancing recommendations, it first is necessary that there are practical opportunities to do so. However, practical circumstances may make this impossible, for example, when physical environments have been rearranged to separate people from each other. Insights from routine activities theory [71–73] and situational crime prevention [74, 75] show that there is less rule breaking when there are fewer practical opportunities to do so. We expected greater adherence to social distancing measures among people who saw fewer opportunities for violating such measures by getting close to others.

Method

We obtained ethical approval for this project from the Institutional Review Board of the University of California, Irvine and by the Ethics Review Board of the University of Amsterdam. All participants provided consent before participating in the study. Participation was voluntary, and all participants could stop the survey at any time.

Participants

Participants were residents (18 years or older) of the U.S. who were recruited via the online survey platform SurveyMonkey (https://surveymonkey.com). They were recruited using a stratified sampling approach, in which the final intended sample size was divided into subgroups with the same demographic proportions (age, gender, and race/ethnicity) as the national population, based on estimates from the U.S. Census Bureau (https://www.census.gov/). This stratified sampling approach mimics the demographic characteristics of the United States, though it retains the biases and characteristics of a non-probability convenience sample. Three cross-sectional surveys were administered in May, June, and July 2020, using different samples of participants. Participants were paid $3.00 for participating.
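The quota logic behind such a stratified non-probability design can be sketched as follows. This is an illustrative sketch only: the age strata, percentage shares, and rounding rule are assumptions, not the actual SurveyMonkey configuration or census figures.

```python
# Illustrative sketch of stratified quota allocation: split a target sample
# size into per-stratum quotas proportional to population shares.
# The age strata and percentages below are hypothetical.

def allocate_quotas(target_n, strata_pct):
    """strata_pct maps stratum -> integer percentage share (summing to 100)."""
    quotas = {k: target_n * pct // 100 for k, pct in strata_pct.items()}
    # Integer division can under-allocate; give the remainder to the largest strata.
    remainder = target_n - sum(quotas.values())
    for k in sorted(strata_pct, key=strata_pct.get, reverse=True)[:remainder]:
        quotas[k] += 1
    return quotas

shares = {"18-29": 21, "30-44": 25, "45-64": 33, "65+": 21}  # hypothetical
quotas = allocate_quotas(1000, shares)
```

In a full design, one quota would be defined per cell of the crossed age, gender, and race/ethnicity strata rather than per single dimension.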

1,452 participants took part in Survey 1 (May 8–18). Here, 436 participants were excluded from the sample because they failed to complete the survey, provided incomplete responses, or failed to pass two attention checks. Six participants indicated a nonbinary gender identity; as this number was insufficient for analysis, they were also omitted. The final sample for Survey 1 consisted of 1,012 cases (56.5% women, 43.5% men; Mage = 40.32 years).

1,711 participants took part in Survey 2 (June 8–16). Here, 723 participants failed to complete the survey, provided incomplete responses, or failed to pass two attention checks; these participants were excluded. Additionally, five nonbinary participants were omitted from the sample. The final sample for Survey 2 consisted of 986 cases (54.3% women, 45.7% men; Mage = 40.17 years).

1,758 participants took part in Survey 3 (July 11–17). Here, 835 participants failed to complete the survey, provided partial responses, or failed to pass two attention checks; again, these participants were excluded. Four nonbinary participants were also omitted. As such, the final sample for Survey 3 consisted of 921 cases (52.7% women, 47.3% men; Mage = 40.17 years).

In total, the sample consisted of 2,919 cases across three cross-sectional survey waves (54.5% women, 45.5% men; Mage = 40.22 years). The sample was thus slightly more female and older than the general population (2019 census: 50.9% women, 49.0% men; Mage = 38.3 years) [76]. There was some variability between waves on specific variables (i.e., education, COVID care, inclusion in an ethnic minority group, insurance status, socio-economic status change, and health risk to self and others). These variables were either unrelated to adherence or controlled for in the analyses. Demographic information for all three survey waves and for the full sample is displayed in Table 1.

Table 1. Sample characteristics and control variables, Surveys 1 (May), 2 (June), and 3 (July), and full sample.

Survey 1 (May 8–18) Survey 2 (June 8–16) Survey 3 (July 11–17) Full sample
Age 40.29 (12.88) 40.22 (13.41) 40.17 (12.87) 40.22 (13.05)
Gender
Female 56.5% 54.3% 52.7% 54.5%
Male 43.5% 45.7% 47.3% 45.5%
Region
Northeast 20.2% 20.6% 20.5% 20.4%
Midwest 21.3% 19.7% 21.3% 20.8%
South 44.3% 42.5% 41.5% 42.8%
West 14.2% 17.2% 16.7% 16.0%
Minority 31.0% 38.5% 33.3% 34.3%
Education
No diploma 2.5% 2.9% 3.3% 2.9%
High school degree 41.2% 43.2% 46.1% 43.4%
Associate degree 12.7% 13.2% 13.0% 13.0%
College degree and higher 43.6% 40.7% 37.6% 40.7%
Employed 65.7% 64.0% 61.8% 63.9%
Insurance
Uninsured 12.9% 14.9% 13.6% 13.8%
Public insurance 27.4% 27.3% 33.9% 29.4%
Private insurance 59.7% 57.8% 52.6% 56.8%
Socio-econ status, pre-COVID 6.05 (1.95) 6.00 (2.10) 5.86 (2.10) 5.97 (2.05)
Socio-econ status, post-COVID 5.61 (2.11) 5.80 (2.20) 5.63 (2.28) 5.68 (2.20)
Socio-econ status, change -.44 (1.66) -.20 (1.59) -.23 (1.70) -.29 (1.65)
Political orientation
Very progressive 16.0% 20.6% 17.5% 18.0%
Slightly progressive 25.2% 24.9% 24.1% 24.8%
Slightly conservative 29.6% 28.9% 27.9% 28.8%
Very conservative 16.7% 14.8% 17.7% 16.4%
Prefer not to say 12.4% 10.7% 12.8% 12.0%
Care professionally for COVID 6.8% 10.1% 9.4% 8.8%
Health risk self 31.9% 32.4% 37.9% 33.9%
Health risk others 57.9% 55.3% 62.2% 58.4%
N 1012 986 921 2919

Note. Standard deviations between parentheses.

Materials

Survey

Our survey (see Supporting Information) was based on our prior surveys conducted in April 2020 in the United States [5], the United Kingdom [77], the Netherlands [78], and Israel [79]. It assessed the same variables and relied on the same measures. Measures that displayed poor internal consistency in the previous surveys were revised to improve their internal consistency (e.g., adherence, social norms, capacity to adhere, and opportunity to violate); reliability of the revised measures was high (α ≥ .85, more details below). Throughout the survey, we referred to COVID-19 as “the coronavirus,” which reflects the greater usage of this name in everyday speech, especially during the early stages of the pandemic.

Control variables

The following demographic variables were recorded: age, gender, nationality, information on residency (state), inclusion in an ethnic minority group, education, employment status, insurance status, socio-economic status before and after COVID-19 (MacArthur Scale of Subjective Social Status [80]), and political orientation (adapted from [81–83]). For political orientation, a considerable number of participants preferred not to disclose their preference (Survey 1: 12.4%; Survey 2: 10.7%; Survey 3: 12.8%). To enable such cases to be retained in the analysis, this variable was recoded into two dummy variables: one comparing conservative to progressive orientation (1 = very conservative or conservative, 0 = progressive, very progressive, or prefer not to say) and one comparing undisclosed to progressive orientation (1 = prefer not to say, 0 = very conservative, conservative, progressive, very progressive). This approach yielded the same results for adherence as the scale measure, but allowed all cases to be utilized.
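The dummy-coding scheme described above can be sketched as follows; this is a minimal illustration, and the column name and category labels are assumptions rather than the study's actual variable coding.

```python
import pandas as pd

# Hypothetical responses to the political orientation item.
df = pd.DataFrame({"political": [
    "very conservative", "slightly conservative", "slightly progressive",
    "very progressive", "prefer not to say",
]})

# Dummy 1: conservative (1) vs. everyone else, including undisclosed (0).
conservative = {"very conservative", "slightly conservative"}
df["conservative_dummy"] = df["political"].isin(conservative).astype(int)

# Dummy 2: undisclosed (1) vs. all disclosed orientations (0).
df["undisclosed_dummy"] = (df["political"] == "prefer not to say").astype(int)
```

Because "prefer not to say" scores 0 on the first dummy and 1 on the second, such cases are retained in the model rather than dropped listwise.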

Additionally, we asked several questions that probed exposure to and risk from COVID-19. Specifically, we asked participants to indicate whether they provided professional care for coronavirus patients, and whether they or anyone they knew had underlying health issues that would put them at greater risk of suffering complications from the coronavirus.

Correlations between the control variables for all three surveys are displayed in S1–S3 Tables.

Adherence to social distancing measures

To assess adherence to social distancing measures, we measured participants’ self-reported tendency to keep a safe distance from others in various situations [18]. Specifically, we included seven questions that measured their tendency to keep a safe distance (six feet or more) from: (1) “others outside of my direct household,” (2) “my neighbors,” (3) “colleagues at work,” (4) “friends and family from outside of my direct household,” (5) “others when grocery shopping,” (6) “others when taking a walk or exercising,” and (7) “others when commuting or traveling” (1 = “never,” 7 = “always”). Responses were mean-scored into a single measure for each wave (Survey 1: α = .92; Survey 2: α = .92; Survey 3: α = .93), with higher scores indicating greater adherence to COVID-19 social distancing measures (see Table 2).
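The scale construction described above (mean-scoring seven items and checking internal consistency) can be sketched as follows. The response data here are fabricated for illustration; the alpha function implements the standard Cronbach's alpha formula, not the authors' specific analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Fabricated data: 200 respondents x 7 distancing items on a 1-7 scale,
# built from a shared per-respondent tendency plus item-level noise.
rng = np.random.default_rng(0)
tendency = rng.integers(4, 8, size=(200, 1))
noise = rng.integers(-1, 2, size=(200, 7))
ratings = np.clip(tendency + noise, 1, 7)

adherence = ratings.mean(axis=1)   # one scale score per respondent
alpha = cronbach_alpha(ratings)    # internal consistency of the 7 items
```

Because the seven items share a common per-respondent component, alpha comes out high, mirroring the α ≥ .92 reported for the adherence scale.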

Table 2. Descriptive statistics of dependent variables, Surveys 1 (May), 2 (June), and 3 (July), and full sample.

Survey 1 Survey 2 Survey 3 Full sample
(May 8–18) (June 8–16) (July 11–17)
I keep a safe distance (six feet or more) from…
Others outside of household 6.02 (1.41) 5.85 (1.51) 5.83 (1.55) 5.90 (1.49)
Neighbors 6.13 (1.36) 5.85 (1.52) 5.84 (1.64) 5.94 (1.51)
Colleagues at work 5.88 (1.70) 5.59 (1.84) 5.57 (1.91) 5.68 (1.82)
Friends and family outside household 5.67 (1.60) 5.38 (1.74) 5.27 (1.84) 5.45 (1.73)
Others when grocery shopping 6.08 (1.26) 5.93 (1.37) 5.94 (1.44) 5.99 (1.36)
Others when walking or exercising 6.13 (1.36) 5.96 (1.46) 5.94 (1.55) 6.01 (1.46)
Others when commuting or traveling 6.16 (1.39) 5.95 (1.53) 5.94 (1.60) 6.02 (1.51)
Adherence scale measure 6.01 (1.20) 5.79 (1.29) 5.76 (1.39) 5.86 (1.30)
N 1012 986 921 2919

Note. Standard deviations between parentheses.

Practical knowledge and understanding

To assess participants’ knowledge and understanding of the mitigation measures, two variables were measured: (1) knowledge of these measures, and (2) perceived clarity of these measures.

To measure participants’ knowledge of mitigation measures, we asked them to indicate whether current COVID-19 mitigation measures required them to keep a safe distance (six feet or more) from others (1 = yes, 2 = no, 3 = don’t know). The key comparison is whether people who know that they are under social distancing measures adhere more to these recommendations than people who do not, or are unsure of this. To capture this, these responses were recoded (1 = yes, 0 = no or don’t know).

To measure the perceived clarity of mitigation measures, participants answered a single item, which asked them to evaluate how clear the measures taken by the authorities to reduce the spread of the coronavirus were (1 = “extremely unclear;” 7 = “extremely clear”).

Costs and benefits

To assess the costs and benefits of the mitigation measures, four variables were measured: (1) the perceived health threat of COVID-19, (2) personal costs of the mitigation measures, (3) perceptions of the certainty of punishment for not following social distancing measures, and (4) perceptions of the severity of punishment for failure to do so.

The perceived health threat of COVID-19 was measured by mean-scoring three items. These asked participants to indicate to what extent they believed the coronavirus to be a major threat to (1) their own health, (2) the health of friends and relatives, and (3) public health in general (1 = “strongly disagree,” 7 = “strongly agree”). Their answers were combined into a scale measure (Survey 1: α = .91; Survey 2: α = .92; Survey 3: α = .92), with higher scores indicating greater perceived health threat.

Personal costs of COVID-19 mitigation measures were assessed by means of five items. Specifically, we asked participants to indicate how likely it was that they would (1) “lose income,” (2) “lose their job,” (3) “not be able to work,” (4) “not be able to work as effectively as normal,” and (5) “experience a negative impact on their social life” as a result of the measures (1 = “extremely unlikely,” 7 = “extremely likely”). These were combined into a scale measure of personal costs (Survey 1: α = .86; Survey 2: α = .86; Survey 3: α = .86), with higher scores indicating greater personal costs of the mitigation measures.

Perceptions of punishment certainty for violating social distancing measures were measured with two questions. These assessed the perceived likelihood that the authorities would (1) “find out,” and (2) “punish you” if participants did not keep a safe distance (six feet or more) from others (1 = “extremely improbable,” 7 = “extremely probable”). Both items were highly correlated (Survey 1: r = .75; Survey 2: r = .75; Survey 3: r = .74), and hence were aggregated into a scale measure, with higher scores indicating greater perceived punishment certainty.

Perceptions of punishment severity were assessed using one item. Participants indicated how much they would “suffer” if the authorities punished them for not keeping a safe distance (six feet or more) from others (1 = “extreme suffering;” 6 = “no suffering at all”). The item was reverse-coded so that higher scores indicate greater perceived punishment severity.
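Reverse-coding a Likert item of this kind is a simple linear flip; a minimal sketch (the function name is ours, not the authors'):

```python
def reverse_code(score, scale_min=1, scale_max=6):
    """Flip a Likert response so that higher values mean greater severity."""
    return scale_max + scale_min - score

# On the 6-point item above, "extreme suffering" (1) becomes the top score (6),
# and "no suffering at all" (6) becomes the bottom score (1).
```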

Legitimacy, procedural justice, and obligation to obey

Six variables were measured to capture participants’ perceptions of the legitimacy of the mitigation measures and the responsible authorities, and their felt obligation to follow them: (1) their moral alignment with social distancing measures, (2) their evaluation of the authority response to the pandemic, (3) their normative obligation to obey the authorities handling the pandemic, (4) their non-normative obligation to obey these authorities, (5) their general obligation to obey the law, and (6) their perception of the procedural fairness of these authorities when enforcing the measures.

Moral alignment with social distancing measures was measured by asking participants to what extent they “morally believe that people should keep a safe distance from others (six feet or more) in order to contain the coronavirus” (1 = “strongly disagree,” 7 = “strongly agree”).

Evaluation of the authority response was measured using two items. These asked to what extent participants believed the authorities to have been (1) “consistent,” and (2) “adequate” in their response to contain the coronavirus (1 = “strongly disagree,” 7 = “strongly agree”). Both items were strongly correlated (Survey 1: r = .81; Survey 2: r = .79; Survey 3: r = .80); accordingly, a scale measure was constructed from their responses, with higher scores indicating more favorable evaluations.

Participants’ normative obligation to obey the authorities handling COVID-19 was measured by mean-scoring three items (adapted for this study following [46, 84]): (1) “I feel a moral obligation to obey the authorities handling the coronavirus,” (2) “I feel a moral duty to support the decisions of the authorities handling the coronavirus, even if I disagree with them,” and (3) “I feel a moral duty to obey the instructions of the authorities handling the coronavirus, even when I don’t understand the reasons behind them” (1 = “strongly disagree,” 5 = “strongly agree”). Answers were aggregated into a scale measure (Survey 1: α = .87; Survey 2: α = .89; Survey 3: α = .90). Higher scores indicated greater normative obligation to obey.

Participants’ non-normative obligation to obey the authorities handling COVID-19 was assessed with three items (again adapted for this study following [46, 84]): (1) “people like me have no choice but to obey the authorities handling the coronavirus,” (2) “if you don’t do what the authorities handling the coronavirus tell you they will treat you badly,” and (3) “I only obey the authorities handling the coronavirus because I am afraid of them” (1 = “strongly disagree,” 5 = “strongly agree”). Responses were combined into a scale measure (Survey 1: α = .72; Survey 2: α = .73; Survey 3: α = .70), with higher scores indicating greater non-normative obligation to obey.

Participants’ general obligation to obey the law was measured using the 12-item Rule Orientation scale [47]. This instrument assesses the perceived acceptability of breaking legal rules across a range of situations (e.g., when the rule is against one’s moral principles; when the rule is not enforced; when others think that breaking the rule is justified, etc.; 1 = “strongly disagree,” 7 = “strongly agree”). Responses were mean-scored into a scale measure (Survey 1: α = .94; Survey 2: α = .94; Survey 3: α = .94), with higher scores indicating greater felt obligation to obey the law in general.

Perceptions of the authorities’ procedural fairness in enforcing the mitigation measures were measured by means of four items (adapted from [40, 85–87]). These asked to what extent participants expected that the authorities would: (1) “treat people with respect,” (2) “give a person the chance to tell their side of the story if the person is accused of violating measures to contain the coronavirus,” (3) “treat people fairly, despite gender, race, religion, or socioeconomic background,” and (4) “be honest in enforcing measures to contain the coronavirus” (1 = “strongly disagree,” 7 = “strongly agree”). Responses were aggregated into a scale measure (Survey 1: α = .92; Survey 2: α = .93; Survey 3: α = .92), with higher scores indicating greater perceived procedural fairness.

Personal factors

Four variables were measured to assess personal factors relevant to adherence: participants’ (1) trust in science, (2) their trust in traditional media, (3) their impulsivity, and (4) the negative emotions that they experience as a result of the pandemic.

Trust in science was measured by means of four items [88]. Participants indicated to what extent they trusted scientists to (1) “create knowledge that is unbiased and accurate,” (2) “create knowledge that is useful,” (3) “advise government officials on policy,” and (4) “inform the public on important issues” (1 = completely distrust, 5 = completely trust). Their answers were mean-scored into a scale measure (Survey 1: α = .92; Survey 2: α = .92; Survey 3: α = .92), with higher scores indicating greater trust in science.

Trust in media was assessed by means of a single item [5]: “Please indicate how much you trust traditional media (e.g., newspapers, TV news, news apps) to be unbiased and accurate” (1 = completely distrust, 5 = completely trust).

Impulsivity was measured by means of a subset of five items taken from the 8-item impulse control subscale from the Weinberger Adjustment Inventory (WAI; [89]): (1) “I should try harder to control myself when I’m having fun,” (2) “I do things without giving them enough thought,” (3) “When I’m doing something fun (like partying or acting silly), I tend to get carried away and go too far,” (4) “I say the first thing that comes to my mind without thinking enough about it,” and (5) “I stop and think things through before I act” (1 = “false,” 5 = “true;” last item reverse coded). The last item correlated poorly with the other items, and hence was eliminated. The remaining four items were combined into a scale measure (Survey 1: α = .82; Survey 2: α = .81; Survey 3: α = .82), with higher scores indicating greater impulsivity.

Negative emotional state due to COVID-19 was assessed by means of six items. Participants indicated to what extent the coronavirus made them feel (1) “angry,” (2) “scared,” (3) “powerless,” (4) “depressed,” (5) “stressed,” and (6) “lonely” (1 = “strongly disagree,” 7 = “strongly agree”). Responses were aggregated into a scale measure (Survey 1: α = .89; Survey 2: α = .91; Survey 3: α = .90), with higher scores indicating more negative emotions.

Social environment

To capture influences from the social environment, one variable was measured: perceived (descriptive) social norms for adhering to social distancing measures.

Perceived descriptive social norms regarding safe-distancing measures were measured by means of seven items, based on our measure of reported adherence. Participants were asked whether most people they knew were keeping a safe distance (six feet or more) from: (1) “others outside of their direct household,” (2) “their neighbors,” (3) “colleagues at work,” (4) “friends and family from outside of their direct household,” (5) “others when grocery shopping,” (6) “others when taking a walk or exercising,” and (7) “others in traffic or public transport” (1 = “strongly disagree,” 7 = “strongly agree”). Participants’ answers were combined into a scale measure (Survey 1: α = .94; Survey 2: α = .95; Survey 3: α = .95), with higher scores indicating greater perceived descriptive social norms for adhering.

Practical circumstances

To assess practical circumstances, two variables were measured: (1) participants’ practical capacity to adhere to social distancing measures, and (2) their perceived opportunity to violate those measures.

Participants’ practical capacity to adhere to social distancing mitigation measures was measured by means of seven items, again based on our measures of reported adherence. Participants were asked whether they were capable of keeping a safe distance (six feet or more) from: (1) “others outside of my direct household,” (2) “my neighbors,” (3) “colleagues at work,” (4) “friends and family from outside of my direct household,” (5) “others when grocery shopping,” (6) “others when taking a walk or exercising,” and (7) “others in traffic or public transport” (1 = “strongly disagree,” 7 = “strongly agree”). Responses were mean-scored into a single scale measure (Survey 1: α = .87; Survey 2: α = .85; Survey 3: α = .89), with higher scores indicating greater practical capacity to adhere.

Opportunity to violate social distancing measures was measured by means of seven items (again based on our measures of adherence). Participants were asked whether, at the present time, it was still possible for them to come within an unsafe distance (closer than six feet) from: (1) “others outside of my direct household,” (2) “my neighbors,” (3) “colleagues at work,” (4) “friends and family from outside of my direct household,” (5) “others when grocery shopping,” (6) “others when taking a walk or exercising,” and (7) “others in traffic or public transport” (1 = “strongly disagree,” 7 = “strongly agree”). Responses were aggregated into a single scale measure (Survey 1: α = .94; Survey 2: α = .94; Survey 3: α = .94), with higher scores indicating greater practical opportunity to violate.

Descriptive statistics of all independent variables are displayed for all three samples in Table 3, and correlations are shown in S4–S6 Tables.

Table 3. Descriptive statistics of independent variables, Surveys 1 (May), 2 (June), and 3 (July), and full sample.

Survey 1 (May 8–18)   Survey 2 (June 8–16)   Survey 3 (July 11–17)   Full sample
Practical knowledge and understanding
Knowledge of measures 90.4% 82.9% 86.3% 86.6%
Clarity of measures 5.36 (1.62) 5.15 (1.74) 5.03 (1.81) 5.19 (1.73)
Costs and benefits
Perceived health threat 5.60 (1.47) 5.53 (1.56) 5.74 (1.49) 5.62 (1.51)
Personal costs 4.31 (1.62) 4.09 (1.66) 4.15 (1.64) 4.18 (1.64)
Punishment certainty 3.34 (1.76) 3.19 (1.78) 3.24 (1.74) 3.26 (1.76)
Punishment severity 3.80 (1.70) 3.80 (1.73) 3.89 (1.73) 3.83 (1.72)
Legitimacy
Moral alignment 6.21 (1.18) 6.10 (1.34) 6.15 (1.36) 6.15 (1.30)
Authority response 4.29 (1.85) 4.36 (1.84) 3.81 (1.94) 4.16 (1.89)
Normative obligation to obey 3.97 (0.85) 3.84 (0.91) 3.90 (0.93) 3.90 (0.90)
Non-normative obligation to obey 2.95 (0.99) 2.97 (1.02) 2.94 (0.98) 2.95 (1.00)
Obligation to obey the law (general) 4.40 (1.46) 4.29 (1.50) 4.38 (1.49) 4.36 (1.48)
Procedural justice of enforcement 5.24 (1.51) 5.06 (1.68) 5.08 (1.65) 5.13 (1.61)
Personal factors
Trust in science 3.89 (0.96) 3.83 (0.99) 3.83 (1.00) 3.85 (0.99)
Trust in media 2.92 (1.30) 2.94 (1.30) 2.83 (1.34) 2.90 (1.31)
Impulsivity 2.40 (1.10) 2.52 (1.14) 2.46 (1.13) 2.46 (1.12)
Negative emotions 4.60 (1.53) 4.53 (1.61) 4.63 (1.57) 4.58 (1.57)
Social environment
Descriptive social norms 5.46 (1.30) 5.21 (1.40) 5.08 (1.68) 5.25 (1.40)
Practical circumstances
Practical capacity to adhere 6.06 (0.94) 5.97 (0.94) 5.91 (1.08) 5.98 (0.99)
Opportunity to violate 4.46 (1.78) 4.70 (1.75) 4.61 (1.71) 4.59 (1.75)
N 1012 986 921 2919

Note. Standard deviations in parentheses.

Analysis plan

Our research focused on five major questions: (1) To what extent did Americans adhere to social distancing measures in the period after the first-wave lockdown, between May and July 2020; (2) how did the various predictors that were hypothesized to influence adherence develop during this period; (3) which of these predictors in fact influenced adherence during this period; (4) how did the influence of these predictors on adherence change across this period; and (5) how do the increases and decreases in the level of these predictors that occurred during this period explain the observed changes in adherence? Accordingly, our analysis consisted of three steps.

To examine the first two questions, we explored how adherence to social distancing measures, as well as the situational and motivational variables that were hypothesized to sustain it, evolved from May to July. To do so, we compared these variables between the three survey waves by means of analyses of covariance (ANCOVA), using parameter estimates with robust standard errors (HC3) to conduct pairwise comparisons between months. To illuminate the strictness with which individuals adhered to social distancing recommendations, we also compared frequencies of full adherence. This approach exploits the notion that anyone who reports anything less than full adherence (7 = “always”) in fact admits to not having followed the measures (either occasionally or more frequently); it therefore represents a stricter measure of adherence than the average. We compared the frequency of full adherence (across all seven situations) between survey waves using negative binomial regression; to compare the probability of full adherence within specific situations, logistic regression was used. All analyses controlled for all demographic and control variables.
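The pairwise month comparisons with HC3 robust standard errors can be sketched as follows. This is a minimal illustration on synthetic data with hypothetical effect sizes, not the authors’ analysis code, and it omits the study’s covariates:

```python
import numpy as np

def ols_hc3(X, y):
    """OLS coefficients with HC3 heteroscedasticity-robust standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverages (hat-matrix diagonal)
    u = (resid / (1.0 - h)) ** 2                  # HC3-weighted squared residuals
    cov = XtX_inv @ (X.T * u) @ X @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(0)
n = 900
wave = np.repeat([0, 1, 2], n // 3)                # 0 = May, 1 = June, 2 = July
june = (wave == 1).astype(float)                    # dummy coding, May as reference
july = (wave == 2).astype(float)
# Hypothetical adherence scores with a small decline after May
adherence = 6.0 - 0.23 * june - 0.24 * july + rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), june, july])
beta, se = ols_hc3(X, adherence)   # beta[1], beta[2]: June and July vs May
```

The dummy coefficients correspond directly to the pairwise month contrasts reported in the Results (e.g., b for June vs May).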

To answer the third question, we examined how adherence to social distancing measures was predicted by the various factors that were hypothesized to sustain it. To do so, we relied on linear (OLS) regression analyses, in which self-reported adherence to social distancing measures was regressed upon these variables (for a similar approach, see [5]). We estimated a hierarchical model in which the different categories of predictors were added to the model in successive steps. To examine the fourth question, we reran the final iteration of the model expanded with an interaction term between survey wave and one of the predictors. Separate models were estimated to test the interaction with survey wave for each of the predictors. All analyses were adjusted for heteroscedasticity using Huber/White robust standard error estimation.
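The logic of the hierarchical model — blocks of predictors entered cumulatively, with the increase in explained variance assessed at each step — can be sketched as follows. The predictors here are synthetic stand-ins, not the study variables:

```python
import numpy as np

def r2(X, y):
    """R-squared of an OLS fit (X should include an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 1000
demo = rng.normal(size=n)      # stand-in for the demographic/control block
threat = rng.normal(size=n)    # stand-in for perceived health threat
moral = rng.normal(size=n)     # stand-in for moral alignment
y = 0.2 * demo + 0.5 * threat + 0.8 * moral + rng.normal(size=n)

ones = np.ones(n)
# Each step keeps the earlier blocks and adds a new one, as in Table 4
r2_step1 = r2(np.column_stack([ones, demo]), y)
r2_step2 = r2(np.column_stack([ones, demo, threat]), y)
r2_step3 = r2(np.column_stack([ones, demo, threat, moral]), y)
```

In a nested sequence like this, R² can only increase from step to step; what matters is whether each increase is statistically significant, which the paper tests with chi-square change statistics.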

Finally, to examine the fifth question, mediation analyses were conducted. These tested how the effect of survey wave on adherence was explained by its indirect effect on the key predictors that were identified in the final step of the hierarchical regression model.
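The paper does not spell out the mediation procedure in detail; one standard implementation is the product-of-coefficients approach, in which the indirect effect of survey wave on adherence through a predictor is estimated as a·b. A sketch under that assumption, with synthetic data and hypothetical effect sizes:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients (a*b) estimate of the indirect effect x -> m -> y."""
    ones = np.ones_like(x)
    # a: effect of x on the mediator
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    # b: effect of the mediator on y, controlling for x
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(2)
n = 2000
wave = rng.integers(0, 2, n).astype(float)          # e.g., May (0) vs June (1)
norms = -0.4 * wave + rng.normal(0, 0.5, n)         # later wave lowers perceived norms
adherence = 0.6 * norms + rng.normal(0, 0.5, n)     # norms sustain adherence
est = indirect_effect(wave, norms, adherence)       # true indirect: -0.4 * 0.6 = -0.24
```

In practice the significance of such an indirect effect is usually assessed with a bootstrap confidence interval rather than a normal-theory test.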

Results

Development of adherence levels, May to July

First, we examined how Americans’ relative levels of adherence to social distancing measures developed from May to July by comparing average adherence levels between the surveys.

Average adherence

Adherence levels on average as well as by situation are displayed in Fig 1. ANCOVA using parameter estimates with robust standard errors indicated that average levels of adherence among Americans declined from May to June (b = -.23, robust SE = .05, p < .001, Cohen’s d = .15), but did not change further from June to July (b = -.01, robust SE = .06, p = .797, Cohen’s d = .00). When separating the seven situations, adherence declined from May to June in all situations (outside household: b = -.19, robust SE = .07, p = .004, Cohen’s d = .11; neighbors: b = -.29, robust SE = .07, p < .001, Cohen’s d = .15; colleagues: b = -.29, robust SE = .08, p < .001, Cohen’s d = .13; friends and family: b = -.41, robust SE = .08, p < .001, Cohen’s d = .20; grocery shopping: b = -.13, robust SE = .06, p = .029, Cohen’s d = .09; walk or exercise: b = -.18, robust SE = .07, p = .005, Cohen’s d = .11; commute or travel: b = -.20, robust SE = .07, p = .003, Cohen’s d = .11). From June to July, however, no further significant changes in adherence were observed in any of the situations (all ps ≥ .269). In sum, the findings suggest a pattern where adherence to social distancing measures declined from May to June (although differences were relatively modest in terms of effect size), but not further in July.

Fig 1. Adherence to social distancing measures, Survey 1 (May) to Survey 3 (July).


Full adherence

Levels of full adherence are displayed in Fig 2. It displays the percentage of participants who reported adhering fully (7 = “always”) in each situation (grey and black lines), as well as the average percentage of full adherence across all situations (red dashed line). Moreover, it displays the percentage of participants who reported full adherence in all seven situations (red solid line). When comparing levels of full adherence averaged across all seven situations (red dashed line), negative binomial regression revealed a significant difference between the three survey waves, Wald χ2 (2) = 13.45, p = .001. Average levels of full adherence declined from May to June (b = -.15, SE = .04, Wald χ2 (1) = 12.16, p < .001; a reduction of 14.4% relative to May), but did not change further from June to July (b = -.03, SE = .04, Wald χ2 (1) = 0.57, p = .450). When comparing the number of participants who reported full adherence in every situation (red solid line), there also was a significant decrease from May to June (b = -.27, SE = .10, Wald χ2 (1) = 6.63, p = .01; a reduction of 15.0% relative to May). Here also, no further changes were observed from June to July (b = -.02, SE = .11, Wald χ2 (1) = 0.05, p = .825).

Fig 2. Full adherence by situation, across all situations, and in every situation, Survey 1 (May) to Survey 3 (July).


When separating the seven situations (grey and black lines), logistic regression indicated that the probability that participants fully adhered to social distancing recommendations declined significantly from May to June in all situations (outside household: b = -.23, SE = .09, Wald χ2 (1) = 6.21, p = .013; neighbors: b = -.39, SE = .09, Wald χ2 (1) = 17.26, p < .001; colleagues: b = -.39, SE = .09, Wald χ2 (1) = 17.26, p < .001; friends and family: b = -.32, SE = .09, Wald χ2 (1) = 11.22, p = .001; grocery shopping: b = -.22, SE = .09, Wald χ2 (1) = 5.78, p = .016; walk or exercise: b = -.27, SE = .09, Wald χ2 (1) = 8.20, p = .004; commute or travel: b = -.37, SE = .09, Wald χ2 (1) = 15.49, p < .001). From June to July, however, probabilities of full adherence did not change any further (all ps ≥ .116).

Development of predictor variables, May to July

Practical knowledge and understanding

Fig 3 displays the development of participants’ knowledge of social distancing measures across the three surveys, as well as that of their perceptions of the clarity of those measures. Logistic regression indicated that levels of knowledge of social distancing measures (Table 3) declined significantly in June (b = -.71, SE = .14, Wald χ2 (1) = 26.03, p < .001), but partially recovered in July (b = .28, SE = .13, Wald χ2 (1) = 4.48, p = .034). Furthermore, ANCOVA using parameter estimates with robust standard errors indicated that relative to May, the perceived clarity of mitigation measures was significantly lower in July (b = -.34, robust SE = .08, p < .001, Cohen’s d = .17).

Fig 3. Practical knowledge and understanding, Survey 1 (May) to Survey 3 (July).


Costs and benefits

Fig 4 displays the development of the variables reflecting costs and benefits of mitigation measures across the three surveys. Threat perceptions did not change significantly between May and June (p = .073), but increased significantly from June to July (b = .20, robust SE = .07, p = .002, Cohen’s d = .11). Conversely, reported personal costs of mitigation measures decreased from May to June (b = -.22, robust SE = .07, p = .002, Cohen’s d = .11), as did perceptions of the certainty of punishment (b = -.24, robust SE = .07, p = .002, Cohen’s d = .11); neither changed significantly thereafter (both ps ≥ .392). Perceptions of the severity of punishment did not change significantly between May and July (all ps ≥ .110).

Fig 4. Costs and benefits of mitigation measures, Survey 1 (May) to Survey 3 (July).


Legitimacy, procedural justice, and obligation to obey

Fig 5 displays the development of the variables reflecting the core constructs in this area. The analyses revealed that moral alignment with social distancing measures declined significantly from May to June (b = -.14, robust SE = .05, p = .011, Cohen’s d = .09), while evaluations of the authority response declined significantly from June to July (b = -.54, robust SE = .08, p < .001, Cohen’s d = .25). Furthermore, there was a significant decline from May to June in participants’ normative obligation to obey the authorities handling COVID-19 (b = -.15, robust SE = .04, p < .001, Cohen’s d = .14), and in perceptions of their procedural fairness (b = -.21, robust SE = .07, p = .003, Cohen’s d = .11). No significant changes were observed in non-normative obligation to obey these authorities, however, or in their general obligation to obey the law (all ps ≥ .214).

Fig 5. Legitimacy variables, Survey 1 (May) to Survey 3 (July).


Personal factors

Fig 6 shows the development of personal factors relevant to adherence. The results revealed small but significant changes in trust in science and media: trust in science decreased significantly from May to June (b = -.09, robust SE = .04, p = .032, Cohen’s d = .09), whereas trust in mainstream media showed a significant decrease from May to July (b = -.12, robust SE = .06, p = .030, Cohen’s d = .09). No significant changes were observed in impulsivity (all ps ≥ .092) or negative emotions (all ps ≥ .554).

Fig 6. Personal factors, Survey 1 (May) to Survey 3 (July).


Social environment

Fig 7 shows the development of perceived (descriptive) social norms for adhering to social distancing measures. From May to July, perceived social norms for keeping a safe distance were significantly reduced (b = -.38, robust SE = .06, p < .001, Cohen’s d = .23).

Fig 7. Social environment, Survey 1 (May) to Survey 3 (July).


Practical circumstances

Finally, Fig 8 displays the development of practical circumstances for adhering. From May to July, there was a significant decrease in respondents’ reported capacity to adhere to social distancing measures (b = -.14, robust SE = .05, p = .002, Cohen’s d = .11). Conversely, perceived opportunities to violate social distancing measures became significantly greater from May to June (b = .20, robust SE = .08, p = .013, Cohen’s d = .09).

Fig 8. Practical circumstances, Survey 1 (May) to Survey 3 (July).


Understanding adherence to social distancing measures from May to July

Hierarchical regression model

As the previous section demonstrates, adherence to social distancing measures declined significantly in the period after the initial first-wave lockdown. At the same time, significant changes were observed in many of the variables that were hypothesized to shape adherence. Our next major question is how these processes shaped adherence to social distancing measures during this period. To answer it, we estimated a linear regression model, in which adherence was regressed upon the various predictors in a series of hierarchical steps. This model was estimated using the combined data from all three survey waves (N = 2,919), with survey wave included as an additional predictor (1 = May, 2 = June, 3 = July). Collinearity statistics indicated no issues with multicollinearity (all VIFs ≤ 2.55; all tolerances ≥ .39). Table 4 displays the results.
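The multicollinearity screen reported above (VIF and tolerance) can be computed in principle as follows; the data here are synthetic, and tolerance is simply 1/VIF:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of a predictor matrix.

    VIF_j = 1 / (1 - R2_j), where R2_j comes from regressing column j
    on all other columns (plus an intercept).
    """
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2_j = 1.0 - resid.var() / X[:, j].var()
        out[j] = 1.0 / (1.0 - r2_j)
    return out

rng = np.random.default_rng(3)
n = 1000
a = rng.normal(size=n)
b = rng.normal(size=n)               # independent of a: VIF near 1
c = a + rng.normal(0, 0.1, n)        # nearly collinear with a: VIF large
vifs = vif(np.column_stack([a, b, c]))
tolerances = 1.0 / vifs
```

The paper’s reported maxima (VIF ≤ 2.55, tolerance ≥ .39) fall well below the conventional cutoffs that would signal problematic collinearity.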

Table 4. Hierarchical linear regression (with robust standard errors), adherence to mitigation measures by predictor and control variables.
Step 1 Step 2 Step 3 Step 4 Step 5 Step 6 Step 7 Effect size (Cohen’s d)
Survey wave
Month: June (vs May) -.23*** (.05) -.14** (.05) -.13** (.05) -.10* (.05) -.09* (.04) -.06 (.04) -.09* (.04) .08
Month: July (vs May) -.24*** (.06) -.17** (.06) -.23*** (.05) -.20*** (.05) -.19*** (.05) -.15** (.05) -.14** (.04) .12
Control variables
Age .01*** (.00) .01*** (.00) .01*** (.00) .01*** (.00) .00** (.00) .00** (.00) .00* (.00) .09
Gender, female (vs male) .26*** (.05) .23*** (.05) .20*** (.04) .15*** (.04) .13** (.04) .13** (.04) .12** (.03) .13
Minority .18*** (.05) .12* (.05) -.01 (.04) -.01 (.04) -.00 (.04) -.01 (.04) -.00 (.04) .00
Education .05** (.02) .06*** (.02) .06*** (.01) .05** (.01) .04** (.01) .04** (.01) .03** (.01) .11
Employed -.06 (.05) -.06 (.05) -.05 (.05) -.02 (.04) -.02 (.04) -.02 (.04) .01 (.04) .01
COVID Care -.22* (.09) -.26** (.09) -.33*** (.08) -.23** (.08) -.18* (.08) -.16* (.07) -.05 (.07) .03
Insurance, public (vs no) .09 (.08) .05 (.07) .08 (.07) .07 (.07) .07 (.06) .07 (.06) .09 (.06) .06
Insurance, private (vs no) .12 (.08) .07 (.08) .06 (.07) .06 (.07) .07 (.07) .08 (.07) .13* (.06) .09
Socio-economic status, pre-COVID .04** (.01) .03* (.01) .03* (.01) .02 (.01) .02 (.01) .01 (.01) .00 (.01) .02
Socio-economic status change (post-pre) -.00 (.02) -.01 (.01) .02 (.01) .01 (.01) .01 (.01) .00 (.01) .00 (.01) .00
Health risk self .16** (.05) .17** (.05) -.05 (.04) .02 (.04) .03 (.04) .03 (.04) .03 (.04) .03
Health risk others .13* (.05) .08 (.05) -.02 (.04) -.03 (.04) -.04 (.04) -.03 (.04) -.02 (.04) .02
Political orientation, conservative (vs liberal) -.26*** (.05) -.23*** (.05) -.01 (.04) .05 (.04) .06 (.05) .05 (.04) .05 (.04) .05
Political orientation, not disclosed (vs liberal) -.12 (.08) -.06 (.08) .10 (.07) .13 (.07) .15* (.07) .15* (.07) .11 (.06) .07
Region: Midwest (vs Northeast) -.30*** (.07) -.24** (.07) -.13* (.06) -.10 (.06) -.08 (.06) -.07 (.06) -.10 (.05) .07
Region: South (vs Northeast) -.23*** (.06) -.17** (.06) -.14* (.05) -.13** (.05) -.12* (.05) -.09 (.05) -.12** (.04) .10
Region: West (vs Northeast) -.04 (.07) .00 (.07) .01 (.06) .01 (.06) .02 (.06) .02 (.06) .00 (.05) .00
Practical knowledge and understanding
Knowledge of measures .66*** (.09) .45*** (.07) .33*** (.07) .31*** (.07) .28*** (.07) .18** (.06) .13
Clarity of measures .13*** (.01) .06*** (.01) .01 (.01) .00 (.01) -.00 (.01) -.02 (.01) .06
Costs and benefits
Perceived health threat .39*** (.02) .14*** (.02) .14*** (.02) .14*** (.02) .13*** (.02) .27
Personal costs .03* (.01) .04** (.01) .03* (.01) .03* (.01) .03* (.01) .08
Punishment certainty -.01 (.01) .01 (.01) .02 (.01) .01 (.01) .01 (.01) .02
Punishment severity .01 (.01) -.00 (.01) -.00 (.01) -.00 (.01) -.00 (.01) .01
Legitimacy
Moral alignment .37*** (.03) .36*** (.03) .34*** (.03) .25*** (.03) .47
Authority response -.01 (.01) -.00 (.01) -.01 (.01) -.01 (.01) .03
Normative obligation to obey .11*** (.03) .10*** (.03) .09** (.03) .02 (.03) .03
Non-normative obligation to obey .01 (.02) .02 (.02) -.00 (.02) .02 (.02) .03
Obligation to obey the law (general) .05*** (.01) .02 (.02) .02 (.02) .01 (.01) .04
Procedural justice of enforcement .00 (.01) .00 (.01) -.01 (.01) -.01 (.01) .03
Personal factors
Trust in science .08** (.03) .06* (.03) .04 (.02) .06
Trust in media -.02 (.02) -.03 (.02) -.01 (.01) .03
Impulsivity -.12*** (.02) -.13*** (.02) -.08*** (.02) .17
Negative emotions .03 (.01) .02 (.01) .02 (.01) .05
Social environment
Descriptive social norms .15*** (.02) .03* (.01) .07
Practical circumstances
Practical capacity to adhere .52*** (.03) .89
Opportunity to violate -.03** (.01) .10
Constant 5.05*** (.16) 3.98*** (.17) 2.29*** (.19) 1.09*** (.20) 1.25*** (.22) 1.08*** (.22) -.10 (.21)
Rsq .08 .14 .31 .39 .40 .42 .52

Note. Robust standard errors in parentheses.

* p < .05

** p < .01

*** p < .001.

In Step 1, the model included only the survey wave dummies and the control variables. As shown in Table 4 (column 1), relative to May, adherence decreased significantly in June and in July. Adherence was significantly higher among older participants, female participants, minority group members, people with higher levels of education, and people with higher socio-economic status. Furthermore, adherence was greater among people who suffered from a health condition that placed them at increased risk, or who knew others who suffered from such health conditions. Conversely, adherence was significantly lower among people who professionally cared for COVID patients, and among participants with more conservative political orientations. Lastly, relative to participants from the Northeast region, adherence was significantly lower among participants from the Midwest and the South regions. However, this model explained only 8% of the variance in adherence.

In Step 2, predictors reflecting participants’ practical knowledge and understanding of the mitigation measures were added to the model. Adherence was significantly greater among participants who had greater knowledge of social distancing measures, and among participants who regarded these measures as clearer. Inclusion of these predictors rendered the effect of health risk to others nonsignificant. The model now explained 14% of the variance in adherence, a significant increase relative to Step 1 (χ2 (2) = 223.87, p < .001).

In Step 3, predictors related to the costs and benefits of adherence were entered into the model. Adherence was significantly greater among participants who regarded the COVID-19 pandemic as more threatening. Adherence was also greater among participants for whom the personal costs of adhering were higher. Conversely, punishment perceptions did not predict adherence. Inclusion of these variables rendered the effect of political orientation nonsignificant. The model explained 31% of the variance in adherence, a significant increase over Step 2 (χ2 (4) = 622.75, p < .001).

In Step 4, predictors reflecting participants’ legitimacy perceptions were added to the model. Adherence was significantly higher among participants who morally agreed more with the measures (i.e., moral alignment), who felt greater normative obligation to obey the COVID-19 authorities, and who felt a higher general obligation to obey the law. Inclusion of these variables rendered nonsignificant the effect of perceived clarity of mitigation measures, socio-economic status, and Midwest region. The model explained 39% of the variance in adherence, a significant increase over Step 3 (χ2 (5) = 383.19, p < .001).

In Step 5, personal factors were entered into the model. Adherence was significantly higher among participants who had greater trust in science. Conversely, adherence was significantly lower among more impulsive participants. Controlling for these variables rendered nonsignificant the effect of general obligation to obey the law, and revealed a significant effect of undisclosed political orientation, which predicted greater adherence (relative to liberals). The model explained 40% of the variance in adherence, a significant increase over Step 4 (χ2 (5) = 46.47, p < .001).

In Step 6, predictors reflecting the social environment were added to the model. Adherence was significantly higher among people who perceived stronger (descriptive) social norms for keeping a safe distance. Inclusion of this variable rendered nonsignificant the effect of South region. Additionally, the decline in adherence from May to June was reduced to nonsignificance when this variable was included. The model explained 42% of the variance in adherence, a significant increase over Step 5 (χ2 (1) = 103.62, p < .001).

Finally, in Step 7, predictors reflecting the practical circumstances were entered into the model. Adherence was significantly higher among people who had greater practical ability to keep at a safe distance from others. In contrast, adherence was significantly lower among people who saw more opportunities for violating social distancing measures. By including these variables in the model, the effects of trust in science, normative obligation to obey, and care for COVID patients were reduced to nonsignificance. Conversely, inclusion of these variables restored to significance the previously observed effect of South region, and the decline in adherence from May to June. Last, inclusion of these variables revealed a significant effect of private insurance, such that adherence was greater among participants who had private (relative to no) insurance. The final model explained 52% of the variance in adherence, a significant increase over Step 6 (χ2 (2) = 532.61, p < .001).

Change in predictors across waves

In addition, we sought to understand how the effect of these predictors on adherence changed across survey waves. To do so, we estimated additional models based on the final step of the hierarchical regression model (step 7, Table 4). Each of these models included all the predictors and control variables from the hierarchical model (as in step 7), and one single interaction term, between survey wave and one of the 19 predictors (respectively knowledge of measures, clarity of measures, perceived health threat, personal costs, punishment certainty, punishment severity, moral alignment, authority response, normative obligation to obey, non-normative obligation to obey, general obligation to obey the law, procedural justice of enforcement, trust in science, trust in media, impulsivity, negative emotions, descriptive social norms, practical capacity to adhere, or opportunity to violate). In total, 19 interaction models therefore were estimated, each including a single interaction term. Because our interest with these models was exclusively in the interactive effects with survey waves, only the interaction terms are displayed in Table 5, for each of the 19 models.

Table 5. Interaction models: Interaction effects on adherence to mitigation measures of predictors by survey waves.
Each predictor's interaction terms were estimated in a separate model (Models 8–26); cells report the interaction coefficient b (SE) and its effect size (Cohen's d).
Practical knowledge and understanding
Knowledge of measures
x Month: June (vs May) .03 (.15) .01
x Month: July (vs May) -.07 (.16) .02
Clarity of measures
x Month: June (vs May) .04 (.02) .06
x Month: July (vs May) .03 (.02) .04
Costs and benefits
Perceived health threat
x Month: June (vs May) .04 (.03) .06
x Month: July (vs May) .04 (.03) .05
Personal costs
x Month: June (vs May) .00 (.02) .01
x Month: July (vs May) .03 (.03) .05
Punishment certainty
x Month: June (vs May) .01 (.02) .02
x Month: July (vs May) .01 (.03) .02
Punishment severity
x Month: June (vs May) -.01 (.02) .02
x Month: July (vs May) -.01 (.02) .01
Legitimacy
Moral alignment
x Month: June (vs May) .03 (.04) .03
x Month: July (vs May) .02 (.04) .03
Authority response
x Month: June (vs May) .02 (.02) .04
x Month: July (vs May) .02 (.02) .04
Normative obligation to obey
x Month: June (vs May) .09 (.05) .07
x Month: July (vs May) .13** (.05) .11
Non-normative obligation to obey
x Month: June (vs May) .09* (.04) .09
x Month: July (vs May) .17*** (.04) .15
Obligation to obey the law (general)
x Month: June (vs May) -.02 (.03) .03
x Month: July (vs May) -.03 (.03) .04
Procedural justice of enforcement
x Month: June (vs May) .05 (.03) .07
x Month: July (vs May) .04 (.03) .06
Personal factors
Trust in science
x Month: June (vs May) .10* (.04) .09
x Month: July (vs May) .14** (.04) .13
Trust in media
x Month: June (vs May) .06* (.03) .07
x Month: July (vs May) .07* (.03) .09
Impulsivity
x Month: June (vs May) .07 (.04) .07
x Month: July (vs May) .07* (.04) .07
Negative emotions
x Month: June (vs May) .04 (.03) .06
x Month: July (vs May) .02 (.03) .03
Social environment
Descriptive social norms
x Month: June (vs May) .02 (.03) .02
x Month: July (vs May) .05 (.03) .07
Practical circumstances
Practical capacity to adhere
x Month: June (vs May) .00 (.05) .00
x Month: July (vs May) .04 (.05) .04
Opportunity to violate
x Month: June (vs May) .00 (.02) .00
x Month: July (vs May) .01 (.02) .02

Note. Robust standard errors between parentheses.

* p < .05

** p < .01

*** p < .001.

The results of these analyses indicated that, for the most part, the effects of the predictors on adherence did not vary across waves. Of the key predictors of adherence in the final hierarchical model (i.e., knowledge, perceived threat, personal costs, moral alignment, impulsivity, descriptive social norms, practical capacity to adhere, and opportunity to violate; see Table 4, step 7), only the effect of impulsivity varied across survey waves (in May: b = -.13, SE = .03, p < .001; in July: b = -.06, SE = .03, p = .049; contrast = .07, SE = .04, p = .049). The results did indicate changes across waves in the effects of normative obligation to obey (in May: b = -.06, SE = .04, p = .098; in July: b = .08, SE = .04, p = .066; contrast = .13, SE = .05, p = .006), non-normative obligation to obey (in May: b = -.07, SE = .03, p = .043; in July: b = .10, SE = .03, p = .001; contrast = .17, SE = .04, p < .001), trust in science (in May: b = -.04, SE = .03, p = .181; in July: b = .10, SE = .04, p = .008; contrast = .14, SE = .04, p = .001), and trust in media (in May: b = -.06, SE = .02, p = .011; in July: b = .02, SE = .02, p = .511; contrast = .07, SE = .03, p = .021). Thus, impulsivity gradually became less important as a predictor of adherence across waves, while non-normative obligation to obey and trust in science, in particular, became more influential. These changes were modest in terms of effect size, however.

Although the effects of the key predictors of adherence generally did not vary across waves (i.e., they did not become more or less predictive of adherence), the absolute levels of these variables did change significantly between May and July. Indeed, as was previously shown in the descriptive analyses of changes across survey waves, there were significant changes during this period in participants' reported knowledge of mitigation measures, their perceptions of the threat of the virus, the personal costs of the mitigation measures, their moral alignment with those measures, their perceived social norms, their practical capacity to comply, and their perceived opportunities for violating social distancing measures. To examine how these changes contributed to the observed decline in compliance across this period, we concluded by conducting mediation analyses, which tested whether the effect of survey wave on compliance was mediated by the effect of survey wave on each of the key predictors of adherence.

To do so, we relied on the PARAMED module in Stata [90], which can handle both linear and categorical mediators. In these models, survey wave was the independent variable, adherence the dependent variable, and the mediator was either reported knowledge of mitigation measures, perceptions of the threat of the virus, personal costs of the mitigation measures, moral alignment with mitigation measures, perceived social norms, practical capacity to comply, or perceived opportunities for violating. The models controlled for all other predictors and control variables, and featured bias-corrected bootstrap confidence intervals (1,000 replications). In total, 14 mediation models were estimated (for 7 mediators, with two models each: one comparing wave 1 to wave 2, and one comparing wave 1 to wave 3). Results are presented in Table 6.
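As a rough illustration of the logic behind these models, the sketch below estimates a single indirect effect with the product-of-coefficients approach and a percentile bootstrap on simulated data. PARAMED's counterfactual estimands and bias-corrected intervals are more involved than this; all names, seeds, and effect sizes here are invented:

```python
import numpy as np

def indirect_effect(wave, mediator, y):
    """Product-of-coefficients indirect effect: wave -> mediator -> adherence."""
    Xa = np.column_stack([np.ones(len(wave)), wave])
    a = np.linalg.lstsq(Xa, mediator, rcond=None)[0][1]       # wave -> mediator
    Xb = np.column_stack([np.ones(len(wave)), wave, mediator])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]              # mediator -> y, wave held constant
    return a * b

rng = np.random.default_rng(2)
n = 600
wave = rng.integers(0, 2, size=n).astype(float)     # 0 = May, 1 = July
norms = -0.4 * wave + rng.normal(size=n)            # hypothetical mediator (e.g. social norms)
y = 0.5 * norms - 0.05 * wave + rng.normal(size=n)  # adherence

point = indirect_effect(wave, norms, y)

# Percentile bootstrap CI (a simplification of the bias-corrected intervals used in the paper).
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effect(wave[idx], norms[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {point:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

As in Table 6, mediation is inferred when the bootstrap confidence interval for the indirect effect excludes zero.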

Table 6. Mediation models: Total, direct, and indirect effects per mediator by survey waves.
Estimate Bootstrapped SE Lower 95% CI Upper 95% CI
Knowledge of measures
x Month: June (vs May) Total effect -.11 .04 -.20 -.04
Direct effect -.10 .04 -.18 -.03
Indirect effect -.01 .00 -.02 -.01
x Month: July (vs May) Total effect -.11 .04 -.18 -.04
Direct effect -.11 .04 -.17 -.03
Indirect effect -.01 .00 -.02 -.00
Perceived health threat
x Month: June (vs May) Total effect -.09 .04 -.18 -.02
Direct effect -.10 .04 -.18 -.03
Indirect effect .01 .01 -.00 .02
x Month: July (vs May) Total effect -.07 .04 -.15 .00
Direct effect -.11 .04 -.17 -.03
Indirect effect .03 .01 .02 .05
Personal costs
x Month: June (vs May) Total effect -.10 .04 -.18 -.03
Direct effect -.10 .04 -.18 -.03
Indirect effect -.00 .00 -.01 .00
x Month: July (vs May) Total effect -.11 .04 -.18 -.03
Direct effect -.11 .04 -.17 -.03
Indirect effect -.00 .00 -.01 .00
Moral alignment
x Month: June (vs May) Total effect -.11 .04 -.20 -.04
Direct effect -.10 .04 -.18 -.03
Indirect effect -.01 .01 -.02 .01
x Month: July (vs May) Total effect -.12 .04 -.19 -.04
Direct effect -.10 .04 -.17 -.03
Indirect effect -.02 .01 -.03 -.00
Descriptive social norms
x Month: June (vs May) Total effect -.11 .04 -.19 -.04
Direct effect -.10 .04 -.18 -.03
Indirect effect -.01 .00 -.02 -.00
x Month: July (vs May) Total effect -.12 .04 -.18 -.03
Direct effect -.11 .04 -.17 -.03
Indirect effect -.01 .00 -.02 -.00
Practical capacity to adhere
x Month: June (vs May) Total effect -.08 .04 -.16 -.00
Direct effect -.10 .04 -.18 -.03
Indirect effect .02 .01 -.00 .05
x Month: July (vs May) Total effect -.10 .04 -.17 -.01
Direct effect -.11 .04 -.17 -.03
Indirect effect .01 .01 -.01 .04
Opportunity to violate
x Month: June (vs May) Total effect -.11 .04 -.19 -.03
Direct effect -.10 .04 -.18 -.03
Indirect effect -.01 .00 -.01 -.00
x Month: July (vs May) Total effect -.11 .04 -.18 -.03
Direct effect -.11 .04 -.17 -.03
Indirect effect -.00 .00 -.01 -.00

The indirect effects reported in Table 6 indicate that the effect of survey wave on adherence (the total effect) was significantly reduced, and thus partially mediated (i.e., the confidence interval of the indirect effect does not include zero), by the following variables: knowledge of mitigation measures, perceived health threat, moral alignment, and social norms. Conversely, personal costs and practical capacity to adhere did not mediate this effect. Accordingly, these findings suggest that the observed decrease in adherence from May to July was partially explained by people's lower knowledge of mitigation measures, their lower moral alignment with social distancing measures, and weaker (descriptive) social norms for keeping distance. The increase in the perceived health threat of COVID-19 toward the end of this period, by contrast, partially offset this decline (a positive indirect effect), whereas the reduction in the personal costs of the mitigation measures during this period, and people's practical capacity to adhere, did not counter these trends.

Discussion

The results of our study show that a broad range of behavioral mechanisms has been at play in shaping adherence to pandemic mitigation measures in the period that followed the first-wave lockdown against COVID-19. In the period after stricter mitigation measures were repealed, during the summer months of 2020, a significant decline in adherence was observed. Across this period, adherence to social distancing measures was shaped by a range of factors relating to people's practical knowledge and understanding of mitigation measures, their perceptions of the measures' costs and benefits, their perceptions of legitimacy and procedural justice, personal factors, their social environment, and their practical circumstances. Moreover, changes in the levels of these factors during this period explained (in part) the observed decline in adherence. These findings demonstrate that large-scale behavioral change can be accomplished through a combination of factors situated at different levels. Yet, the study also shows that some variables that have received much attention in general psychological, economic, and criminological compliance scholarship did not play a clear and consistent role in shaping adherence.

Across the different steps of our analysis, eight variables emerged as consistent predictors of adherence. Respondents adhered more when (1) they had greater knowledge of social distancing measures, (2) they perceived the virus as a more severe health threat, (3) adherence was more costly for them (possibly reflecting the reverse: that costs were higher for those who adhered more), (4) they morally agreed more with the measures, (5) they were lower in impulsivity, (6) they perceived stronger (descriptive) social norms for keeping a safe distance, (7) they had greater practical ability to adhere, and (8) they perceived fewer opportunities for violating the measures. When examining their effect sizes in the final step of the regression model, however, it becomes clear that practical capacity in particular had a critical impact on respondents' adherence (by Cohen's standards, a large effect). Moral alignment and perceived threat also had a substantial, but smaller, impact on adherence (by Cohen's standards, a small to medium effect). The impact of impulsivity, knowledge, opportunity for violating, personal costs, and social norms was only limited, however (by Cohen's standards, a small effect).

The impact of the predictors on adherence was largely consistent throughout this period, although the influence of impulsivity became gradually weaker (and the influence of non-normative obligation to obey and trust in science gradually stronger) as the distance from the lockdown period increased. The decline that occurred across this period in levels of knowledge, moral alignment, and perceived social norms for adhering partially explained the observed decrease in adherence. Conversely, the increase in perceived threat that was observed toward the end of this period positively affected the development of adherence. Other variables, however, failed to predict adherence, or no longer did so when other variables were taken into account. These most notably included procedural justice [40, 41], obligation to obey the law or the responsible authorities [46, 47], deterrence [33–35], and trust in science [50, 51].

Theoretically, the present comparison of adherence over the summer months demonstrates that behavioral change, and the influences on behavior, are not static. Rather, our findings show that across similar samples of people, with similar measures staying in place, key factors that sustain compliance can grow or decline even in a matter of months. Our data allow us to trace these processes more deeply by examining how the key predictors have changed over the summer months. Although the influence of these variables on adherence was largely consistent throughout this period, the data revealed significant changes in their absolute levels. People reported, for instance, having more opportunity to violate the social distancing measures (which makes sense given that stay-at-home orders were mostly lifted in this period), lower capacity to adhere, and lower perceived social norms for adherence (consistent with the notion that there were larger crowds, and that more people were expected to resume normal work and social activities). Our mediation analyses revealed that the decline in adherence to social distancing measures observed during this period was partially explained by the decreases in people's knowledge, moral alignment, and perceived social norms for adhering. Conversely, the increase in perceived threat that was observed toward the end of this period positively affected the development of adherence. When viewed together, these changes provide important indications of why adherence has changed over time. These processes do not seem to indicate that a so-called general behavioral fatigue [91, 92] was at play at this time, but rather that lower adherence may have resulted from very particular and factual changes in people's circumstances, their environment, and their motivations.
By providing insight into which variables do (and do not) shape adherence, the present research offers a more practical way of assessing whether people are able to sustain behavioral change for as long as needed, compared to broad and vague concepts such as behavioral fatigue (which rely more on common-sense understanding than mechanisms from behavioral science). An important question for future research, however, is to understand more deeply how the changes that we observed across this period may be connected to local developments in policy, society, and the pandemic (e.g., see [93]). For this, a more fine-grained analysis is needed, which takes into account how these processes developed locally at the level of regions, states, counties, or even cities.

Our findings on deterrence deserve extra discussion. Given that stricter mitigation measures have been repealed, and thus are no longer widely enforced [94], it is noteworthy that Americans nevertheless reported moderately high levels (i.e., close to the scale midpoint) of perceived punishment certainty and severity. One explanation for such continuing perceptions of deterrence in the absence of enforcement is that there are spill-over effects: prior enforcement may continue to drive deterrence perceptions even after it has ended, or enforcement of other measures (e.g., facemasks; quarantine) may also shape deterrence perceptions for social distancing [95]. A second, related explanation is that people generally do not have very accurate perceptions of deterrence, and can underestimate or overestimate both the certainty and severity of punishment [96]. Importantly, however, even though many Americans considered it quite likely that they would be punished for not keeping a safe distance, and regarded such punishment as quite severe, these beliefs did not predict greater adherence. This finding is in line with studies in other countries where social distancing measures were actively enforced, in which likewise no effects of deterrence on compliance were observed [18]. These conclusions clearly challenge the belief in the effectiveness of strong punishment for COVID-19 violations [97, 98].

Clearly, the data allow for the exploration of many other relationships beyond those that we study in the present manuscript. For example, the data can speak to the relationship between adherence and political orientation or trust in science (both singled out as important predictors of adherence in prior research [50, 51, 99, 100], yet neither a significant predictor in our final regression model), or demographic factors like ethnicity or socio-economic status. From the results of the hierarchical regression analysis, it seems plausible that these and other factors may have indirect relationships with adherence, through their effects on more proximal predictors. The data could further illuminate how specific subsets of predictors may interact with each other, or could be used to study other outcome variables (e.g., how these predictors may explain felt negative emotions, support for authorities, etc.). The present research was primarily aimed at understanding the proximal predictors of adherence. For this reason, we feel that other relationships, such as those outlined above, are best reserved for dedicated manuscripts that are specifically devoted to these questions. We welcome further analyses of these questions, and have made our data publicly available for this purpose. Future research could also expand on these findings by zooming in further on specific variables that may directly or indirectly shape compliance (e.g., by distinguishing essential and nonessential work, or by separating individuals from different generations [101]), or by identifying further variables with which our model could be expanded.

Our findings have several policy implications, which may aid authorities in the U.S. and elsewhere to sustain adherence with mitigation measures, both for the current outbreak and for future pandemics. The results of our surveys identify seven factors that influence adherence. We formulate recommendations based on the most influential of these.

First, and most critically, authorities can increase adherence by making it practically easier for citizens to adhere, and by removing opportunities to offend. Indeed, in terms of effect size, people's practical capacity to adhere was the strongest predictor of adherence, by some margin. This suggests that authorities can have an important impact on adherence by increasing citizens' practical capacity to do so. In the context of social distancing, this has included arrangements that guide crowds through public venues in ways that keep them apart as much as possible, facilitating telework where possible, instituting caps on the number of people able to enter a public space, and so forth. Conversely, authorities can also shape adherence by removing practical opportunities for not following mitigation measures. Such restrictions are best reserved for especially harmful and widely condemned offenses, however, because overly coercive measures that lack broad support may strongly undermine citizens' motivation [102].

Second, our results show that individuals adhere more when they morally agree with mitigation measures. This finding suggests that authorities can increase adherence if they can effectively convince citizens of the importance and legitimacy of such measures. In the case of social distancing, this has included presenting evidence of how social distancing measures can prevent the spread of the virus, or emphasizing citizens’ shared moral duty to protect vulnerable individuals. Cultivating citizens’ support–or conversely, attuning mitigation measures to what is widely supported–will increase the chance that citizens will effectively adhere to such measures.

Third, perceptions of threat to oneself and others are an important predictor of adherence to mitigation measures. However, the present findings also demonstrate that threat perceptions are dynamic. Here, threat perceptions increased from June to July, reflecting the increase in infections that occurred during this period [6]. Findings from our studies in the Netherlands [18], however, demonstrate that threat perceptions can quickly recede as infections decline, with deleterious effects on support for, and adherence to, mitigation measures. Accordingly, to sustain adherence to mitigation measures, it is important that authorities do not give the impression that the threat is waning once infections recede [103]. Rather, authorities can sustain adherence if they successfully convince citizens of the continuing threat of the pandemic, for example to themselves or vulnerable others.

Fourth, our findings show that knowledge of mitigation measures is important for adherence. Due to the fragmented authority response in the U.S., mitigation measures may differ substantially between states, counties, and municipalities. Consequently, it can be unclear to citizens what mitigation measures require of them. Accordingly, authorities can promote adherence by clearly communicating what the measures are and what they require of citizens.

Finally, our findings demonstrate that people's adherence to mitigation measures is influenced by the behavior of others in their community (i.e., descriptive social norms). Although this effect was modest in terms of effect size in the final regression model, effects of social norms on social distancing have also been demonstrated in other research [18, 104, 105]. Authorities thus can enhance adherence to mitigation measures by demonstrating that adherence is common and widely approved of. This also means, however, that authorities should take care not to convey the impression that violations are ubiquitous and normal. It is plausible that in the period after the initial lockdown, highly publicized instances in which people widely disregarded social distancing measures may have undermined adherence, by normalizing lack of distancing. To promote adherence to mitigation measures, authorities therefore should express that adhering is the norm (or ought to be), highlight examples where many others are seen to adhere, avoid drawing undue attention to examples where people do not, and ensure that they are always seen to adhere to the measures themselves.

Overall, the study of adherence to social distancing measures has important implications for the study of compliance generally, and for the way rules shape human behavior. These questions have been studied across different academic domains, with a focus on different mechanisms and interventions [106]. This has resulted in a patchwork of theories and approaches that are seldom brought together, and which exist in compartmentalized silos that draw on their own literatures, methods, and findings. The present study brings together a broad range of variables from across these approaches, situated at different levels (i.e., the individual, the social environment, the practical circumstances), and reveals how these together shape adherence in the context of social distancing measures. Although the associations that were observed here may not extend beyond this setting, the insight that adherence derived from such a diverse range of influences is nevertheless important for the study of compliance. It underlines that to better understand why people comply, research can benefit from a multi-theoretical approach, in which the extant, siloed literatures are brought together and integrated.

Our study has several limitations. First, although our samples were large and stratified by age, gender, and ethnicity to mimic the demographic characteristics of the United States population based on U.S. Census Bureau data, they remain non-probability convenience samples. Furthermore, there was some variability between the samples in terms of demographics, possibly due to the considerable subset of participants who failed to complete the survey or pass the attention checks. As a consequence, our samples cannot be regarded as truly nationally representative. Nevertheless, there is evidence that such convenience samples can be as accurate as random digit dial telephone surveys [107, 108], and they may reduce social desirability biases [109]. Further research therefore is needed to understand the robustness of the observed findings, although they align with evidence from other research [3]. Second, our surveys rely on self-reported measures that may be subject to imperfect recall or social desirability bias [110, 111]. We do note, however, that a recent study demonstrated that social desirability bias did not inflate estimates of compliance with COVID-19 measures in online surveys [112], and that the finding of high self-reported adherence is in line with objective data from the Google COVID-19 Community Mobility reports [113]. Furthermore, prior research shows that there can be strong concordance between self-reported and objective compliance measures when surveys are used (see [78], p. 29). Even so, future research into these questions would benefit from methods that supplement self-reported measures with behavioral data, such as video observation [114].

Conclusion

In the summer of 2020, the Federal lockdown and stay-at-home measures against COVID-19 that were in force in spring were lifted, and in large parts of the country, society began to reopen. The present findings, based on three stratified samples collected in May, June, and July, show that Americans’ adherence to social distancing measures declined, as did several of the factors that sustained it–including people’s practical capacity to adhere, their knowledge of the measures, and social norms for adherence. Our research identifies key variables that predicted greater adherence as society reopened, and which contributed to the changes in adherence that were observed thereafter. By doing so, this research contributes to the understanding of pandemic governance and the interaction between rules and human conduct more generally. Moreover, in the current stage of the pandemic, these findings provide important directions for the public health response, by highlighting processes through which adherence to mitigation measures can be promoted, as we strive to return to normality.

Supporting information

S1 Survey. Survey materials.

(PDF)

S1 Dataset. Dataset and syntax files.

(DOCX)

S1 Table. Kendall’s tau correlations between demographic variables and adherence.

May 8–18 (Survey 1. N = 1012). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level. Gender–Female as reference category. Political orientation–N = 866.

(DOCX)

S2 Table. Kendall’s tau correlations between demographic variables and adherence.

June 8–16 (Survey 2. N = 986). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level. Gender–Female as reference category. Political orientation–N = 880.  

(DOCX)

S3 Table. Kendall’s tau correlations between demographic variables and adherence.

July 11–17 (Survey 3. N = 921). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level. Gender–Female as reference category. Political orientation–N = 803.

(DOCX)

S4 Table. Kendall’s tau correlations between independent variables and adherence.

May 8–18 (Survey 1. N = 1012). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level.

(DOCX)

S5 Table. Kendall’s tau correlations between independent variables and adherence.

June 8–16 (Survey 2. N = 986). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level.

(DOCX)

S6 Table. Kendall’s tau correlations between independent variables and adherence.

July 11–17 (Survey 3. N = 921). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level.

(DOCX)

S1 Output. Complete regression output Survey 1 (May).

(PDF)

S2 Output. Complete regression output Survey 2 (June).

(PDF)

S3 Output. Complete regression output Survey 3 (July).

(PDF)

S4 Output. Complete regression output Surveys 1–3 combined (May-July).

(PDF)

Data Availability

All data files and analysis syntax are available from the Figshare database (accession number 13125206) https://uvaauas.figshare.com/articles/dataset/Social_Distancing_in_America_Compliance_with_COVID-19_mitigation_measures_in_the_United_States/13125206.

Funding Statement

This research was funded by the Dutch Research Council (NWO, https://www.nwo.nl/en) by means of a Corona: Fast-track data grant, awarded to Benjamin van Rooij (grant number 440.20.033).

References

1. Ritchie H. Google Mobility Trends: How has the pandemic changed the movement of people around the world? [Internet]. Our World in Data; 2020 Jun 2 [cited 2020 Dec 8]. Available from: https://ourworldindata.org/covid-mobility-trends.
2. Walker A. US oil prices turn negative as demand dries up [Internet]. BBC News; 2020 Apr 20 [cited 2020 Dec 8]. Available from: https://www.bbc.com/news/business-52350082.
3. Kooistra EB, Van Rooij B. Pandemic compliance: A systematic review of influences on social distancing behavior during the first wave of the COVID-19 outbreak. PsyArXiv c5x2k [Preprint]. 2020 Nov 25 [cited 2020 Dec 8]. Available from: https://psyarxiv.com/c5x2k.
4. Haffajee RL, Mello MM. Thinking globally, acting locally—The US response to COVID-19. The New England Journal of Medicine. 2020;382(22):e75. doi: 10.1056/NEJMp2006740
5. Van Rooij B, de Bruijn AL, Reinders Folmer CP, Kooistra E, Kuiper ME, Brownlee M, et al. Compliance with COVID-19 mitigation measures in the United States. SSRN 3582626 [Preprint]. 2020 May 1 [cited 2020 Sep 11]. Available from: https://doi.org/10.2139/ssrn.3582626.
6. CDC COVID Data Tracker [Internet]. The Centers for Disease Control and Prevention; 2020 [cited 2020 Sep 10]. Available from: https://covid.cdc.gov/covid-data-tracker/#trends.
7. Rojas R, Fausset R. Businesses Tiptoe Into a World of Masks, Gloves and Wary Customers [Internet]. The New York Times; 2020 Apr 24 [cited 2020 Sep 10]. Available from: https://www.nytimes.com/2020/04/24/us/coronavirus-georgia-oklahoma-alaska-reopen.html.
8. Freking K, Colvin J. Trump says he’s not extending social distancing guidelines [Internet]. Associated Press; 2020 Apr 30 [cited 2020 Oct 28]. Available from: https://abcnews.go.com/Politics/wireStory/trump-extending-social-distancing-guidelines-70419986.
9. See How All 50 States Are Reopening (and Closing Again) [Internet]. The New York Times; 2020 [cited 2020 Sep 10]. Available from: https://www.nytimes.com/interactive/2020/us/states-reopen-map-coronavirus.html.
10. Pitofsky M. Stylists ticketed for cutting hair on Michigan Capitol lawn to protest lockdown [Internet]. The Hill; 2020 May 21 [cited 2020 Dec 8]. Available from: https://thehill.com/blogs/blog-briefing-room/news/499031-stylists-ticketed-for-cutting-hair-on-michigan-capitol-lawn-to.
11. DeSantis M. Protesters In Commack Demand Economy Reopens [Internet]. Patch; 2020 May 1 [cited 2020 Dec 8]. Available from: https://patch.com/new-york/commack/protesters-commack-demand-economy-opens.
12. Hart PS, Chinn S, Soroka S. Politicization and polarization in COVID-19 news coverage. Science Communication. 2020;42(5):679–97. doi: 10.1177/1075547020950735
13. Tyson A. Republicans remain far less likely than Democrats to view COVID-19 as a major threat to public health [Internet]. Pew Research Center; 2020 Jul 22 [cited 2020 Dec 8]. Available from: https://www.pewresearch.org/fact-tank/2020/07/22/republicans-remain-far-less-likely-than-democrats-to-view-covid-19-as-a-major-threat-to-public-health/.
14. Partlow J, Dawsey J. Workers removed thousands of social distancing stickers before Trump’s Tulsa rally, according to video and a person familiar with the set-up [Internet]. The Washington Post; 2020 Jun 27 [cited 2020 Sep 10]. Available from: https://www.washingtonpost.com/politics/workers-removed-thousands-of-social-distancing-stickers-before-trumps-tulsa-rally-according-to-video-and-a-person-familiar-with-the-set-up/2020/06/27/f429c3be-b801-11ea-9b0f-c797548c1154_story.html.
15. Feldman Y. The law of good people: Challenging states’ ability to regulate human behavior. Cambridge, UK: Cambridge University Press; 2018.
16. Friedman LM. Impact. Cambridge: Harvard University Press; 2016.
17. Van Rooij B, Sokol DD. Cambridge Handbook of Compliance. Cambridge, UK: Cambridge University Press; 2021.
18. Reinders Folmer CP, Kuiper ME, Olthuis E, Kooistra EB, De Bruijn AL, Brownlee M, et al. Maintaining compliance when the virus returns: Understanding adherence to COVID-19 social distancing measures in the Netherlands in July 2020. PsyArXiv vx3mn [Preprint]. 2020 Aug 28 [cited 2020 Sep 10]. Available from: https://psyarxiv.com/vx3mn/.
19. Darley JM, Robinson PH, Carlsmith KM. The ex ante function of the criminal law. Law and Society Review. 2001;35(1):165–90. doi: 10.2307/3185389
20. Kim PT. Norms, learning and law: Exploring the influences of workers’ legal knowledge. University of Illinois Legal Review. 1999;(2):447–516.
21. Van Rooij B. Do people know the law? Empirical evidence about legal knowledge and its implications for compliance. In: van Rooij B, Sokol DD, editors. Cambridge Handbook of Compliance. Cambridge, UK: Cambridge University Press; 2021. p. 467–88.
22. Feldman Y, Teichman D. Are all legal probabilities created equal. NYU Law Review. 2009;84:980–1022.
23. Paternoster R, Simpson S. A rational choice theory of corporate crime. In: Clarke RV, Felson M, editors. Routine Activity and Rational Choice: Advances in Criminological Theory. New York: Routledge; 1993. p. 37–58.
24. Donovan JL, Blake DR. Patient non-compliance: deviance or reasoned decision-making? Social Science & Medicine. 1992;34(5):507–13. doi: 10.1016/0277-9536(92)90206-6
25. Zhou F, Yu T, Du R, Fan G, Liu Y, Liu Z, et al. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. The Lancet. 2020;395(10229):1054–62. doi: 10.1016/S0140-6736(20)30566-3
  • 26.Jung S-M, Akhmetzhanov AR, Hayashi K, Linton NM, Yang Y, Yuan B, et al. Real-time estimation of the risk of death from novel coronavirus (COVID-19) infection: Inference using exported cases. Journal of Clinical Medicine. 20209(2):523. doi: 10.3390/jcm9020523 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Weiss P, Murdoch DR. Clinical course and mortality risk of severe COVID-19. The Lancet. 2020395(10229):1014–5. doi: 10.1016/S0140-6736(20)30633-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Yıldırım M, Geçer E, Akgül Ö. The impacts of vulnerability, perceived risk, and fear on preventive behaviours against COVID-19. Psychology, Health & Medicine. 202126(1):35–43. doi: 10.1080/13548506.2020.1776891 [DOI] [PubMed] [Google Scholar]
  • 29.Economic Attitudes as the Country Starts to Reopen [Internet]. AP NORC; 2020. [cited 2020 Dec 8]. Available from: https://apnorc.org/projects/economic-attitudes-as-the-country-starts-to-reopen/. [Google Scholar]
  • 30.Hunter L, Pearl B, Lo K. Tracking enforcement measures for violation of stay-at-home orders [Internet]. Center for American Progress; 2020Apr2 [cited 2021 Jul 7]. Available from: https://www.americanprogress.org/issues/criminal-justice/news/2020/04/02/482568/tracking-enforcement-measures-violation-stay-home-orders/. [Google Scholar]
  • 31.Janes C. Coughing ’attacks’ may be prosecuted as terrorism in a war on coronavirus [Internet]. The Washington Post; 2020Apr9 [cited 2021 Jul 7]. Available from: https://www.washingtonpost.com/national/health-science/coughing-attacks-may-be-prosecuted-as-terrorism-in-war-on-coronavirus/2020/04/08/b97d7f9a-790d-11ea-9bee-c5bf9d2e3288_story.html. [Google Scholar]
  • 32.Grasmick HG, Bryjak GJ. The deterrent effect of perceived severity of punishment. Social Forces. 198059(2):471–91. doi: 10.1093/sf/59.2.471 [DOI] [Google Scholar]
  • 33.Becker GS. Crime and punishment: an economic approach. In: Fielding NG, Clarke A, Witt R, editors. The Economic Dimensions of Crime. 76. London: Palgrave Macmillan; 1968. p. 169–217. [Google Scholar]
  • 34.Polinsky AM, Shavell S. Public Enforcement of Law. In: Bouckaert B, De Geest G, editors. Encyclopedia of Law and Economics, Volume V: The Economics of Crime and Litigation. Cheltenham: Edward Elgar; 2000. p. 307–44. [Google Scholar]
  • 35.Shavell S. Specific versus general enforcement of law. Journal of Political Economy. 199199:1099–108. doi: 10.1086/261790 [DOI] [Google Scholar]
  • 36.Weber M. Economy and society: An outline of interpretive sociology. Berkeley, CA: University of California Press; 1978. [Google Scholar]
  • 37.Jackson J, Gau JM. Carving Up Concepts? Differentiating Between Trust and Legitimacy in Public Attitudes Towards Legal Authority. In: Shockley E, Neal TMS, PytlikZillig LM, Bornstein BH, editors. Interdisciplinary Perspectives on Trust: Towards Theoretical and Methodological Integration. Cham: Springer International Publishing; 2016. p. 49–69. [Google Scholar]
  • 38.Jackson J, Bradford B, Hough M, Myhill A, Quinton P, Tyler TR. Why do people comply with the law? Legitimacy and the influence of legal institutions. British Journal of Criminology. 201252(6):1051–71. doi: 10.1093/bjc/azs032 [DOI] [Google Scholar]
  • 39.Jackson J, Huq AZ, Bradford B, Tyler TR. Monopolizing force? Police legitimacy and public attitudes toward the acceptability of violence. Psychology, Public Policy, and Law. 201319(4):479–97. doi: 10.1037/a0033852 [DOI] [Google Scholar]
  • 40.Tyler TR. Procedural fairness and compliance with the law. Swiss Journal of Economics and Statistics. 1997133(2):219–40. [Google Scholar]
  • 41.Tyler TR. Why People Obey the Law. Princeton: Princeton University Press; 2006. [Google Scholar]
  • 42.Beaumont T, Fingerhut H. AP-NORC poll: Few Americans support easing virus protections [Internet]. AP News; 2020Apr22 [cited 2020 Dec 8]. Available from: https://apnews.com/article/9ed271ca13012d3b77a2b631c1979ce1. [Google Scholar]
  • 43.Freking K, Fingerhut H. AP-NORC poll: Support for restrictions, virus worries wane [Internet]. AP news: AP news; 2020Jun25 [cited 2020 Dec 8]. Available from: https://apnews.com/article/915fdbccb3434fee125efaaaaefba0af. [Google Scholar]
  • 44.Walters GD, Bolger PC. Procedural justice perceptions, legitimacy beliefs, and compliance with the law: A meta-analysis. Journal of Experimental Criminology. 201915(3):341–72. doi: 10.1007/s11292-018-9338-2 [DOI] [Google Scholar]
  • 45.Gau JM. Procedural justice, police legitimacy, and legal cynicism: A test for mediation effects. Police Practice and Research. 201516(5):402–15. doi: 10.1080/15614263.2014.927766 [DOI] [Google Scholar]
  • 46.Posch K, Jackson J, Bradford B, Macqueen S. "Truly Free Consent”? Clarifying the nature of police legitimacy using causal mediation analysis. Journal of Experimental Criminology. 2020. doi: 10.1007/s11292-020-09426-x [DOI] [Google Scholar]
  • 47.Fine A, Van Rooij B, Feldman Y, Shalvi S, Leib M, Scheper E, et al. Rule Orientation and behavior: Development and validation of a scale measuring individual acceptance of rule violation. Psychology, Public Policy, and Law 201622(3):314–29. doi: 10.1037/law0000096 [DOI] [Google Scholar]
  • 48.Fine A, Thomas A, van Rooij B, Cauffman E. Age-graded differences and parental influences on adolescents’ obligation to obey the law. Journal of Developmental and Life-Course Criminology. 2020:1–18. doi: 10.1007/s40865-020-00134-8 [DOI] [Google Scholar]
  • 49.Fine AD, van Rooij B. Legal socialization: Understanding the obligation to obey the law. Journal of Social Issues. 202177(2):367–91. doi: 10.1111/josi.12440 [DOI] [Google Scholar]
  • 50.Kreps S, Kriner D. Model uncertainty, political contestation, and public trust in science: Evidence from the COVID-19 pandemic. Science Advances. 20206(43):eabd4563. doi: 10.1126/sciadv.abd4563 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Plohl N, Musil B. Modeling compliance with COVID-19 prevention guidelines: The critical role of trust in science. Psychology, Health & Medicine. 202126(1):1–12. doi: 10.1080/13548506.2020.1772988 [DOI] [PubMed] [Google Scholar]
  • 52.De Coninck D, Frissen T, Matthijs K, d’Haenens L, Lits G, Champagne-Poirier O, et al. Beliefs in conspiracy theories and misinformation about COVID-19: Comparative perspectives on the role of anxiety, depression and exposure to and trust in information sources. Frontiers in Psychology. 202112:646394. doi: 10.3389/fpsyg.2021.646394 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman AL, Recchia G, et al. Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science. 20207(10):201199. doi: 10.1098/rsos.201199 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Gottfredson MR, Hirschi T. A general theory of crime: Stanford University Press; 1990. [Google Scholar]
  • 55.Pratt TC, Cullen FT. The empirical status of Gottfredson and Hirschi’s general theory of crime: A meta‐analysis. Criminology. 200038(3):931–64. doi: 10.1111/j.1745-9125.2000.tb00911.x [DOI] [Google Scholar]
  • 56.Pratt TC, Cullen FT. Assessing macro-level predictors and theories of crime: A meta-analysis. Crime and Justice. 200532:373–450. doi: 10.1086/655357 [DOI] [Google Scholar]
  • 57.Pratt TC, Lloyd K. Self-control and offending. In: Van Rooij B, Sokol DD, editors. The Cambridge Handbook on Compliance. Cambridge, UK: Cambridge University Press; 2021. p. 489–98. [Google Scholar]
  • 58.Vazsonyi AT, Mikuška J, Kelley EL. It’s time: A meta-analysis on the self-control-deviance link. Journal of Criminal Justice. 201748:48–63. doi: 10.1016/j.jcrimjus.2016.10.001 [DOI] [Google Scholar]
  • 59.Agnew R. Foundation for a general strain theory of crime and delinquency. Criminology. 199230(1):47–88. doi: 10.1111/j.1745-9125.1992.tb01093.x [DOI] [Google Scholar]
  • 60.Agnew R. Pressured into crime: An overview of general strain theory. Los Angeles: Roxbury; 2006. [Google Scholar]
  • 61.Agnew R, White HR. An empirical test of general strain theory. Criminology. 199230(4):475–500. doi: 10.1111/j.1745-9125.1992.tb01113.x [DOI] [Google Scholar]
  • 62.Agnew R, Brezina T, Wright JP, Cullen FT. Strain, personality traits, and delinquency: Extending general strain theory. Criminology. 200240(1):43–72. doi: 10.1111/j.1745-9125.2002.tb00949.x [DOI] [Google Scholar]
  • 63.Baron SW. General strain, street youth and crime: A test of Agnew’s revised theory. Criminology. 200442(2):457–84. doi: 10.1111/j.1745-9125.2004.tb00526.x [DOI] [Google Scholar]
  • 64.Piquero NL, Sealock MD. Gender and general strain theory: A preliminary test of Broidy and Agnew’s gender/GST hypotheses. Justice Quarterly. 200421(1):125–58. doi: 10.1080/07418820400095761 [DOI] [Google Scholar]
  • 65.Botchkovar EV, Tittle CR, Antonaccio O. General strain theory: Additional evidence using cross‐cultural data. Criminology. 200947(1):131–76. doi: 10.1111/j.1745-9125.2009.00141.x [DOI] [Google Scholar]
  • 66.Brooks SK, Webster RK, Smith LE, Woodland L, Wessely S, Greenberg N, et al. The psychological impact of quarantine and how to reduce it: rapid review of the evidence. The Lancet. 2020395(10229):912–20. doi: 10.1016/S0140-6736(20)30460-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Schultz PW, Nolan JM, Cialdini RB, Goldstein NJ, Griskevicius V. The constructive, destructive, and reconstructive power of social norms. Psychological Science. 200718(5):429–34. doi: 10.1111/j.1467-9280.2007.01917.x [DOI] [PubMed] [Google Scholar]
  • 68.Cialdini RB, Demaine LJ, Sagarin BJ, Barrett DW, Rhoads K, Winter PL. Managing social norms for persuasive impact. Social Influence. 20061(1):3–15. doi: 10.1080/15534510500181459 [DOI] [Google Scholar]
  • 69.Goldstein NJ, Cialdini RB, Griskevicius V. A room with a viewpoint: using social norms to motivate environmental conservation in hotels. Journal of Consumer Research. 200835:472–82. doi: 10.1086/586910 [DOI] [Google Scholar]
  • 70.Cialdini RB, Goldstein NJ. Social influence: compliance and conformity. Annual Review of Psychology. 200455:591–621. doi: 10.1146/annurev.psych.55.090902.142015 [DOI] [PubMed] [Google Scholar]
  • 71.Cohen LE, Felson M. Social change and crime rate trends: A routine activity approach. American Sociological Review. 197944(4):588–608. doi: 10.2307/2094589 [DOI] [Google Scholar]
  • 72.Osgood DW, Wilson JK, O’malley PM, Bachman JG, Johnston LD. Routine activities and individual deviant behavior. American Sociological Review. 199661(4):635–55. doi: 10.2307/2096397 [DOI] [Google Scholar]
  • 73.Spano R, Freilich JD. An assessment of the empirical validity and conceptualization of individual level multivariate studies of lifestyle/routine activities theory published from 1995 to 2005. Journal of Criminal Justice. 200937(3):305–14. doi: 10.1016/j.jcrimjus.2009.04.011 [DOI] [Google Scholar]
  • 74.Clarke RV. Seven misconceptions of situational crime prevention. In: Tilley N, editor. Handbook of crime prevention and community safety. New York, NY: Routledge; 2005. p. 39–70. [Google Scholar]
  • 75.Clarke RV. "Situational" Crime Prevention: Theory and practice. The British Journal of Criminology. 198020(2):136–47. doi: 10.1093/oxfordjournals.bjc.a047153 [DOI] [Google Scholar]
  • 76.Age and Sex Composition in the United States: 2019 [Internet]. United States Census Bureau; 2019 [cited 2020 Dec 8]. Available from: https://www.census.gov/data/tables/2019/demo/age-and-sex/2019-age-sex-composition.html.
  • 77.Kooistra EB, Reinders Folmer CP, Kuiper ME, Olthuis E, Brownlee M, Fine A, et al. Mitigating COVID-19 in a nationally representative uk sample: Personal abilities and obligation to obey the law shape compliance with mitigation measures. SSRN 3598221 [Preprint]. 2020May11 [cited 2021 Jul 7]. Available from: 10.2139/ssrn.3598221. [DOI] [Google Scholar]
  • 78.Kuiper ME, De Bruijn AL, Reinders Folmer CP, Olthuis E, Brownlee M, Kooistra EB, et al. The intelligent lockdown: Compliance with COVID-19 mitigation measures in the Netherlands. SSRN 3598215 [Preprint]. 2020May13 [cited 2020 Sep 10]. Available from: 10.2139/ssrn.3598215. [DOI] [Google Scholar]
  • 79.De Bruijn AL, Feldman Y, Kuiper ME, Brownlee M, Reinders Folmer CP, Kooistra EB, et al. Why did Israelis comply with COVID-19 Mitigation Measures during the initial first wave lockdown? SSRN 3681964 [Preprint]. 2020Aug27 [cited 2021 Jul 7]. Available from: 10.2139/ssrn.3681964. [DOI] [Google Scholar]
  • 80.Adler NE, Epel ES, Castellazzo G, Ickovics JR. Relationship of subjective and objective social status with psychological and physiological functioning: Preliminary data in healthy white women. Health Psychology. 200019(6):586–92. doi: 10.1037//0278-6133.19.6.586 [DOI] [PubMed] [Google Scholar]
  • 81.Fine AD, Rowan Z, Simmons C. Do politics trump race in determining America’s youths’ perceptions of law enforcement? Journal of Criminal Justice. 201961:48–57. doi: 10.1016/j.jcrimjus.2019.01.003 [DOI] [Google Scholar]
  • 82.Hasson Y, Tamir M, Brahms KS, Cohrs JC, Halperin E. Are liberals and conservatives equally motivated to feel empathy toward others? Personality and Social Psychology Bulletin. 201844(10):1449–59. doi: 10.1177/0146167218769867 [DOI] [PubMed] [Google Scholar]
  • 83.Wojcik SP, Hovasapian A, Graham J, Motyl M, Ditto PH. Conservatives report, but liberals display, greater happiness. Science. 2015347(6227):1243–6. doi: 10.1126/science.1260817 [DOI] [PubMed] [Google Scholar]
  • 84.Tankebe J, Reisig MD, Wang X. A multidimensional model of police legitimacy: A cross-cultural assessment. Law and Human Behavior. 201640(1):11–22. doi: 10.1037/lhb0000153 [DOI] [PubMed] [Google Scholar]
  • 85.Baker T, Gau JM. Female offenders’ perceptions of police procedural justice and their obligation to obey the law. Crime & Delinquency. 201864(6):758–81. doi: 10.1177/0011128717719418 [DOI] [Google Scholar]
  • 86.Gau JM. Procedural justice and police legitimacy: A test of measurement and structure. American Journal of Criminal Justice. 201439(2):187–205. doi: 10.1007/s12103-013-9220-8 [DOI] [Google Scholar]
  • 87.Wolfe SE, Nix J, Kaminski R, Rojek J. Is the effect of procedural justice on police legitimacy invariant? Testing the generality of procedural justice and competing antecedents of legitimacy. Journal of Quantitative Criminology. 201632(2):253–82. doi: 10.1007/s10940-015-9263-8 [DOI] [Google Scholar]
  • 88.McCright AM, Dentzman K, Charters M, Dietz T. The influence of political ideology on trust in science. Environmental Research Letters. 20138(4):044029. doi: 10.1088/1748-9326/8/4/044029/meta [DOI] [Google Scholar]
  • 89.Weinberger DA, Schwartz GE. Distress and restraint as superordinate dimensions of self‐reported adjustment: A typological perspective. Journal of Personality. 199058(2):381–417. doi: 10.1111/j.1467-6494.1990.tb00235.x [DOI] [PubMed] [Google Scholar]
  • 90.Emsley R, Liu H. PARAMED: Stata module to perform causal mediation analysis using parametric regression models. 2013Apr26. [Google Scholar]
  • 91.Harvey N. Behavioral fatigue: Real phenomenon, naïve construct, or policy contrivance? Frontiers in Psychology. 202011:589892. doi: 10.3389/fpsyg.2020.589892 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92.Mahase E. Covid-19: Was the decision to delay the UK’s lockdown over fears of “behavioural fatigue” based on evidence? BMJ. 2020370:m3166. doi: 10.1136/bmj.m3166 [DOI] [PubMed] [Google Scholar]
  • 93.Feyman Y, Bor J, Raifman J, Griffith KN. Effectiveness of COVID-19 shelter-in-place orders varied by state. Plos One. 2020Dec31;15(12):e0245008. doi: 10.1371/journal.pone.0245008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 94.Kaste M. Police Back Off From Social Distancing Enforcement [Internet]. NPR; 2020May15 [cited 2020 Sep 10]. Available from: https://www.npr.org/2020/05/15/857144397/police-back-off-from-social-distancing-enforcement. [Google Scholar]
  • 95.Van Rooij B. Weak enforcement, strong deterrence: Dialogues with Chinese lawyers about tax evasion and compliance. Law & Social Inquiry. 201641(2):288–310. doi: 10.1111/lsi.12136 [DOI] [Google Scholar]
  • 96.Apel R. Sanctions, perceptions, and crime: Implications for criminal deterrence. Journal of Quantitative Criminology. 201329(1):67–101. doi: 10.1007/s10940-012-9170-1 [DOI] [Google Scholar]
  • 97.Flanders C, Federico C, Harmon E, Klein L. "Terroristic threats" and COVID-19: A guide for the perplexed. University of Pennsylvania Law Review Online. 2020169:63–89. [Google Scholar]
  • 98.General OotDA. Memorandum For All Heads of Law Enforcement Components, Heads of Litigating Division, And United States Attorneys [Internet]. U.S. Department of Justice; 2020 March 24 [cited 2020 Dec 8]. Available from: https://www.politico.com/f/?id=00000171-128a-d911-aff1-becb9b530000.
  • 99.Kushner Gadarian S, Wallace Goodman S, Pepinsky TB. Partisanship, health behavior, and policy attitudes in the early stages of the COVID-19 pandemic. Plos One. 202116(4):e0249596. doi: 10.1371/journal.pone.0249596 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 100.Rothberger H, Wilson T, Rosenfeld D, Humphrey M, Moore A, Bihl A. Politicizing the covid-19 pandemic: Ideological differences in adherence to social distancing. PsyArXiv k23cv [Preprint]. 2020Apr22 [cited 2020 Sep 10]. Available from: https://psyarxiv.com/k23cv. [Google Scholar]
  • 101.Mahmoud AB, Fuxman L, Mohr I, Reisel WD, Grigoriou N. “We aren’t your reincarnation!” workplace motivation across X, Y and Z generations. International Journal of Manpower. 202042(1):193–209. doi: 10.1108/IJM-09-2019-0448 [DOI] [Google Scholar]
  • 102.Schmelz K. Enforcement may crowd out voluntary support for COVID-19 policies, especially where trust in government is weak and in a liberal society. Proceedings of the National Academy of Sciences. 2021118(1):e2019378118. doi: 10.1073/pnas.2016385118 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103.Lutz T, Pengelly M. Trump claims ’victory’ as US sees Covid-19 case records in multiple states [Internet]. The Guardian; 2020Jul4 [cited 2020 Sep 10]. Available from: https://www.theguardian.com/us-news/2020/jul/04/us-coronavirus-cases-fourth-of-july-holiday. [Google Scholar]
  • 104.Reinders Folmer CP, Kuiper ME, Olthuis E, Kooistra EB, De Bruijn AL, Brownlee M, et al. Sustaining compliance with Covid-19 mitigation measures? Understanding distancing behavior in the Netherlands during June 2020. PsyArXiv xafwp [Preprint]. 2020Sep26 [cited 2020 Dec 8]. Available from: https://psyarxiv.com/xafwp/. [Google Scholar]
  • 105.Reinders Folmer CP, Kuiper ME, Olthuis E, Kooistra EB, de Bruijn AL, Brownlee M, et al. Compliance in the 1.5 meter society: Longitudinal analysis of citizen’s adherence to COVID-19 mitigation measures in a representative sample in the Netherlands. SSRN 3624959 [Preprint]. 2020Sep3 [cited 2020 Dec 8]. Available from: 10.2139/ssrn.3624959. [DOI] [Google Scholar]
  • 106.van Rooij B, Sokol DD. Introduction: Compliance as the interaction between rules and behavior. In: van Rooij B, Sokol DD, editors. Cambridge Handbook of Compliance. Cambridge, UK: Cambridge University Press; 2021. p. 1–10. [Google Scholar]
  • 107.Ansolabehere S, Schaffner BF. Does survey mode still matter? Findings from a 2010 multi-mode comparison. Political Analysis. 201422(3):285–303. doi: 10.1093/pan/mpt025 [DOI] [Google Scholar]
  • 108.Hines DA, Douglas EM, Mahmood S. The effects of survey administration on disclosure rates to sensitive items among men: A comparison of an internet panel sample with a RDD telephone sample. Computers in Human Behavior. 201026(6):1327–35. doi: 10.1016/j.chb.2010.04.006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 109.Zhang X, Kuchinke L, Woud ML, Velten J, Margraf J. Survey method matters: Online/offline questionnaires and face-to-face or telephone interviews differ. Computers in Human Behavior. 201771:172–80. doi: 10.1016/j.chb.2017.02.006 [DOI] [Google Scholar]
  • 110.Bauhoff S. Systematic self-report bias in health data: Impact on estimating cross-sectional and treatment effects. Health Services and Outcomes Research Methodology. 201111:44–53. doi: 10.1007/s10742-011-0069-3 [DOI] [Google Scholar]
  • 111.Van de Mortel TF. Faking it: Social desirability response bias in self-report research. The Australian Journal of Advanced Nursing. 200825(4):40–8. doi: 10.3316/informit.210155003844269 [DOI] [Google Scholar]
  • 112.Larsen MV, Nyrup J, Petersen MB. Do survey estimates of the public’s compliance with COVID-19 regulation suffer from social desirability bias? Journal of Behavioral Public Administration. 20203(2). doi: 10.30636/jbpa.32.164 [DOI] [Google Scholar]
  • 113.Google COVID19 Community Mobility Report [Internet]. 2020 [cited 2020 Sep 10]. Available from: https://www.gstatic.com/covid19/mobility/Global_Mobility_Report.csv?cachebust=5e35f7008c7c1554.
  • 114.Hoeben EM, Bernasco W, Liebst LS, Van Baak C, Rosenkrantz Lindegaard M. Social distancing compliance: A video observational analysis. Plos One. 202116(3):e0248221. doi: 10.1371/journal.pone.0248221 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Ali B Mahmoud

5 Jun 2021

PONE-D-20-39307

Social Distancing in America: Understanding Long-term Adherence to COVID-19 Mitigation Recommendations

PLOS ONE

Dear Dr. Reinders Folmer,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 20 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Ali B. Mahmoud, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

  1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

  2. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information. Moreover, please include more details on how the questionnaire was pre-tested, and whether it was validated.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper presents findings from three cross-sectional samples of US residents in May, June and July of 2020 to determine factors that are related to social distancing behavior during and after the lockdown in many states in the US. The surveys covered a great deal of potential influences on that behavior, and seeing how those factors changed in their relations with social distancing behavior is interesting and informative. I do have some suggestions to make the paper easier to digest: with so many analyses, one is likely to get lost in all of the approaches taken.

In regard to the main analysis, I don’t see why this is divided into two separate models as shown in Tables 4 and 5. The model in Table 5 seems quite different from the one in Table 4 and I didn’t understand why the variables in Table 4 were treated differently from the ones in Table 5. Why can’t all of this be done in one model?

In addition, it would be valuable to conduct the analyses in a more hierarchical manner, so that demographic and other personal characteristics were entered first and the various other types of variables were entered in blocks as you have done in Table 5. One can then see how various beliefs might be associated with the variables entered first, such as political ideology. As you note, this characteristic was not significant in the model when all variables were entered simultaneously, but it was when it was entered before a lot of the other beliefs. Another way to handle this is to present a table with all of the variables tested for their univariate relationship with social distancing. But this would not allow one to see how relations change as new variables are added to the model. I also am puzzled as to why a variable like trust in science or media is treated like a control variable. These are very important considerations in whether someone will adhere to government recommendations. I would place them together with other beliefs such as those regarding respect for the law.

I am also a bit puzzled by the measure of ability to practice social distancing. It seems to be very similar to the actual dependent variable, and so controlling for it seems to be redundant with the outcome. If you want to use it, I would enter it later to see how it changes the earlier associations.

Given that this is a study of the US, it would also be helpful to provide a breakdown of the geographic distribution of the sample. One could use the four census divisions as a way to do that. These divisions are also of interest because there was considerable variation in compliance with social distancing recommendations in different parts of the US. This could go into the first set of predictors in a hierarchical model.

I think the national representativeness of the sample should be downplayed in the description of it in the Method. This is basically a convenience sample that was recruited with demographic targets aimed to be representative of the US. But that is not necessarily probability-based, and so the conclusions that one can draw must be tempered to a degree.

I am also puzzled by the definition of situational variables. I don’t see how impulsivity is situational. This is a personality disposition that is relatively stable. Negative emotions are not necessarily situational, especially during a crisis like a pandemic. Knowledge and understanding of measures are important but they are no more situational than the perceived health threat, punishment severity, or many of the other variables in your model. Can you find a better way to organize these predictors?

I think the inclusion of criminology predictors is interesting. But in all honesty, I don’t think people regarded the social distancing mandates apart from the lockdowns as all that subject to sanctions. People were encouraged to maintain distance, but very few were arrested.

I think the abstract needs some work. No one will understand what you mean by motivational versus situational influences. The sentence that says: “as the core variables that sustain can change…” needs attention.

Reviewer #2: The research questions are clearly defined in the abstract and introduction, and are, of course, both timely and relevant. The questions and topic are interesting, especially given the protracted nature of the current pandemic, and the possibility of future pandemics. The paper is well written and easy to read. Variables are defined and explained in terms of examples nicely. In addition, the paper is well laid out and tables/figures are clear and easy to read; they aid in understanding the data and information being presented.

I found it interesting that the introduction to the paper specifically highlighted that pandemic measures such as lockdowns were eased in Southern and Midwestern parts of the US beginning in April. Given that some states, especially in the east, had much longer lockdown periods or stricter pandemic rules, would the location of participants not perhaps have played a role in some of the survey responses and attitudes? Additionally, there is some demographic information, such as age, ethnicity, or field of employment, that may have large impacts on survey responses. For example, the study took into account those who worked directly with COVID patients/care, but other sectors of employment (i.e., essential workers) may have also held different attitudes which may have impacted responses as well. Also, the authors mention that the study sample is nationally representative, but then highlight that this is only in terms of sex (Male, Female, Non-binary) or age. Again, is the study representative in terms of location (e.g., higher-populated states, areas of the country) or ethnicity? Of course, it is impossible to take every variable into account, but mention of some of these things (e.g., why they were included or excluded) might be beneficial.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Dan Romer

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Sep 24;16(9):e0257945. doi: 10.1371/journal.pone.0257945.r002

Author response to Decision Letter 0


9 Jul 2021

Editor’s comments

1. PLOS ONE style requirements

You request that we ensure that the manuscript meets PLOS ONE’s style requirements, including those for file naming.

We thank you for reminding us of this. We have carefully re-checked this to ensure that the manuscript meets all style requirements, including those for file naming.

2. Information about the survey and the analyses

You ask that additional information be provided about the survey used in the study, and that sufficient details be presented to allow others to replicate the analyses.

We agree that it is important to ensure that sufficient information is provided about the survey and the analyses. In response, we have included all survey materials in the revised submission as Supplementary Information. Furthermore, we discuss in more detail the steps that were taken in developing the survey (p. 13):

“Our survey (see Supporting Information) was based on our prior surveys conducted in April 2020 in the United States [5], the United Kingdom [77], the Netherlands [78], and Israel [79]. It assessed the same variables and relied on the same measures. Measures that displayed poor internal consistency in the previous surveys were revised to improve their internal consistency (e.g., adherence, social norms, capacity to adhere, and opportunity to violate); reliability of the revised measures was high (α ≥ .85, more details below).”

Furthermore, the Supplementary Information included with the manuscript contains all syntax required to replicate the analyses. The syntax is presented in a structured fashion, so that all analyses can be replicated directly by running it.

Reviewer 1’s comments

1. Regression model

The Reviewer wonders why the main analysis was divided into two regression models rather than a single model, and suggests presenting the analysis as a single model in which different types of variables are added in blocks.

We thank the Reviewer for this suggestion. We agree that conducting the analyses as a hierarchical regression model provides greater clarity, for example into how the relationships shown in previous blocks are affected by the inclusion of further predictors in subsequent blocks. In the revised manuscript, we have adopted this approach (see p. 28-35). We conduct the regression analysis as a hierarchical model, starting with the control variables, and adding six categories of variables in sequential steps: (1) variables relating to people’s practical knowledge and understanding of the mitigation measures, (2) variables relating to their perceptions of their costs and benefits, (3) variables relating to their evaluation and felt obligation toward the measures and the responsible authorities, (4) variables relating to personal factors, (5) variables relating to their social environment, and (6) variables relating to their practical circumstances.

To further increase the clarity and parsimony of these analyses, the separate analyses conducted per wave in the previous version of the manuscript have been replaced by a single regression model based on the aggregated data. This model captures developments in adherence across survey waves by including survey wave as a predictor. To understand how the impact of the predictors changed across this period, it also explores interactions between the predictors and survey wave. Finally, the analyses explore how the observed developments in the predictors (i.e., the changes in their absolute levels across this period) contributed to the decline in adherence. For this purpose, we have added mediation analyses.

In sum, based on the Reviewer’s recommendations, we have thoroughly revised the analyses. The revised analyses improve upon the original version by illuminating (A) which variables predict greater adherence, (B) how their impact is affected by the inclusion of other variables in subsequent steps, (C) how their impact has changed across survey waves, and (D) how developments in the predictors explain the observed decline in adherence.
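For readers who wish to see the mechanics of such a block-wise model, a minimal sketch follows. It is purely illustrative: the variable names, block groupings, and simulated data are hypothetical, and the authors’ actual analyses use the syntax provided in their Supporting Information.

```python
# Illustrative sketch of block-wise (hierarchical) OLS: predictors are added
# in cumulative blocks and the change in R^2 is inspected at each step.
# All variables and data below are hypothetical, not the authors' dataset.
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictor blocks (cf. the six categories described above).
controls = rng.normal(size=(n, 2))        # e.g., demographics, survey wave
costs_benefits = rng.normal(size=(n, 2))  # e.g., perceived threat, personal costs
practical = rng.normal(size=(n, 1))       # e.g., capacity to adhere
y = (0.3 * costs_benefits[:, 0] + 0.5 * practical[:, 0]
     + rng.normal(scale=1.0, size=n))     # simulated adherence outcome

def r_squared(X, y):
    """R^2 of an OLS fit with intercept, computed via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

blocks = [("controls", controls),
          ("costs/benefits", costs_benefits),
          ("practical circumstances", practical)]
X_so_far, prev_r2 = np.empty((n, 0)), 0.0
for name, block in blocks:
    X_so_far = np.column_stack([X_so_far, block])
    r2 = r_squared(X_so_far, y)
    print(f"after adding {name}: R^2 = {r2:.3f} (delta = {r2 - prev_r2:.3f})")
    prev_r2 = r2
```

The key diagnostic at each step is the change in R², i.e., the additional variance in adherence explained when a new block of predictors is entered.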

2. Trust in science and media

The Reviewer wonders why trust in science and trust in media were treated as control variables, in light of the important impact that such considerations may have on adherence to government recommendations.

The Reviewer is right that trust in science and media can be highly relevant for adherence. In response to the recommendations, we include these factors among the predictors in the revised classification (see Reviewer 1’s points 1 and 6). We have arranged them under personal factors, which are added in Step 5 of the hierarchical regression model.

3. Capacity to adhere

The Reviewer observes that capacity to adhere appears to be very similar to the actual dependent variable, and wonders whether it may be redundant. If not, the Reviewer advises to enter it later in the model, to demonstrate how it changes the earlier associations.

Indeed, people’s capacity to adhere can obviously have an important impact on their actual adherence. Yet it is important to note that, conceptually, these are very different variables. Capacity captures the notion that circumstances may make it more or less difficult for individuals to adhere, such that people must exert more (or less) effort to effectively do so. This does not imply, however, that high or low capacity automatically results in (non)adherence. Simply having the capacity to commit a crime does not mean that one also will do so. Indeed, for many types of offenses, people’s capacity to offend may generally be high (e.g., theft or murder), yet few individuals actually commit such transgressions. The same applies to social distancing: being practically able to keep a distance from others does not mean that someone always wishes to do so. For this reason, capacity to adhere remains an important, conceptually distinct variable. To assuage the Reviewer’s concerns, we now include it in the final step of the regression model, so that its impact on the preceding steps can readily be observed (see p. 30-33; relative to the preceding steps, only the effects of normative obligation to obey and trust in science were rendered nonsignificant). Furthermore, we discuss the relationship between capacity and adherence in greater detail in the introduction (see p. 8-9):

“In order for people to effectively do as social distancing measures demand, it is necessary that their practical circumstances effectively allow them to do so. However, in practice, their capacity to adhere may often vary. For example, keeping a safe distance from others may be more difficult in crowded or constrained environments, or in occupations that cannot be conducted from home or at a distance. Capacity thus may strongly shape adherence, but it should be understood that these concepts are not identical. Simply having the capacity to commit a crime does not mean that one also will do so. The same applies to social distancing: being practically able to keep a distance from others does not mean that someone wishes to do so. We expected adherence with social distancing measures to be higher among people who had greater practical capacity to adhere to these measures.”

4. Geographic distribution

The Reviewer observes that it would be helpful to provide a breakdown of the geographic distribution of the sample, for example based on the four census divisions. The Reviewer notes that this could be included among the control variables in the first step of the regression model.

We thank the Reviewer for this suggestion, and have followed their recommendation. We now present the geographic distribution of our samples by census region (see Table 1, p. 11-12), and include region (dummy-coded) as a control variable in the first step of the regression model (Table 4, p. 30-33). Relative to respondents from the Northeast region, adherence was significantly lower among respondents from the Midwest and the South regions. However, it is important to note that these differences were mostly rendered nonsignificant by the addition of further predictors in subsequent steps of our model, such as moral alignment and normative obligation to obey (Step 4) and descriptive social norms (Step 6).
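As a brief illustration of the dummy coding described above, with Northeast as the reference category, the following sketch uses made-up data rather than the authors’ dataset:

```python
# Minimal sketch of dummy-coding a categorical region variable for use as a
# control block in a regression. Region labels are the four U.S. census
# regions; Northeast is dropped as the reference category, so each remaining
# coefficient is a contrast against Northeast. Data are invented.
import pandas as pd

df = pd.DataFrame({"region": ["Northeast", "South", "Midwest", "West", "South"]})
dummies = pd.get_dummies(df["region"], prefix="region")
dummies = dummies.drop(columns="region_Northeast")  # reference category
print(dummies.columns.tolist())  # ['region_Midwest', 'region_South', 'region_West']
```

A respondent from the reference region (Northeast) receives zeros on all three dummies, so the region coefficients in the model are interpreted relative to that group.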

5. National representativeness of the sample

The Reviewer remarks that the national representativeness of the sample should be downplayed in the Method section, given that it basically is a convenience sample that was recruited with demographic targets aimed to be representative of the US.

The Reviewer is right that our sample was stratified according to demographic targets based on US Census stratifications. In the revised manuscript, we have specified this more clearly, and have toned down all claims that the sample is nationally representative (e.g., in the abstract, introduction, method, discussion). We also explicate our sampling strategy more clearly in the method section (p. 10):

“Participants were residents (18 years or older) of the U.S. that were recruited via the online survey platform SurveyMonkey (https://surveymonkey.com). They were recruited using a stratified sampling approach, in which the final intended sample size was divided into subgroups with the same demographic proportions (age, gender, and race/ethnicity) as the national population based on estimates from the U.S. Census Bureau (https://www.census.gov/). This stratified sampling approach mimics the demographic characteristics of the United States, though it retains the biases and characteristics of a non-probability convenience sample.”
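Schematically, quota targets under such a stratified approach are derived by apportioning the intended sample size according to population shares. The shares below are invented for illustration and are not the actual census figures:

```python
# Hypothetical sketch of computing stratified sampling quotas: the intended
# sample size is divided into subgroups in proportion to population shares.
# Shares are made up; the study's real targets came from U.S. Census estimates.
target_n = 1000
age_shares = {"18-34": 0.30, "35-54": 0.33, "55+": 0.37}
quotas = {group: round(target_n * share) for group, share in age_shares.items()}
print(quotas)  # {'18-34': 300, '35-54': 330, '55+': 370}
```

Recruitment then continues within each subgroup until its quota is filled, which mimics the population's demographic proportions without making the sample probability-based.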

Moreover, we mention this as a limitation of our study in the Discussion (p. 52):

“First, although our samples were large and stratified sampled by age, gender, and ethnicity to mimic the demographic characteristics of the United States population based on U.S. Census Bureau data, they remain non-probability convenience samples. Furthermore, there was some variability between the samples in terms of demographics, possibly due to the considerable subset of participants who failed to complete the survey or pass the attention checks. As a consequence, our samples cannot be regarded as truly nationally representative. Nevertheless, there is evidence that such convenience samples can be as accurate as random digit dial telephone surveys [105, 106], and they may reduce social desirability biases [107].”

6. Organization of predictors

The Reviewer wonders about the classification of the predictors, especially the ‘situational’ variables, and asks whether we can find a better way to organize them.

We thank the Reviewer for this suggestion. In the revised manuscript, we present a revised classification of the predictors. In this typology, the predictors are classified into six categories: (1) variables relating to people’s practical knowledge and understanding of the mitigation measures, (2) variables relating to their perceptions of their costs and benefits, (3) variables relating to their perceptions of legitimacy, procedural justice, and their obligation to obey the measures and the responsible authorities, (4) variables relating to personal factors, (5) variables relating to their social environment, and (6) variables relating to their practical circumstances. We have also augmented our substantive explanation of these categories on p. 5-9 of the revised manuscript. To show their respective contribution to adherence, these categories are entered in sequential steps in the hierarchical regression analysis (see Reviewer 1’s point 1). We believe that this approach better illuminates how different types of variables contribute to adherence, and hope that the Reviewer agrees with us.

7. Sanctions

The Reviewer remarks that although the inclusion of criminological predictors was interesting, people probably did not regard social distancing measures as particularly subject to sanctions, as very few people were actually arrested.

Our decision to include deterrence was motivated by the fact that, in general, punishment is a major intervention used to promote compliance, and also a major theoretical approach grounded in rational choice. Furthermore, punishment did occur during the lockdown phase, and strong sanctions were threatened or imposed for other violations of mitigation measures (e.g., threats of prosecuting coughing attacks as terrorism). Research on deterrence shows that it is particularly perceptions of punishment certainty and severity, rather than the actual chance or severity of punishment, that are important for compliance (e.g., [32]; also see Apel, 2013; Decker et al., 1993). In light of prior enforcement, and continuing enforcement of other measures related to COVID-19, respondents’ perceptions of punishment certainty and severity may still be meaningful, even when enforcement is trivial in practice. Indeed, it is noteworthy that respondents nevertheless reported moderately high levels (i.e., close to the scale midpoint) of perceived punishment certainty and severity, and that considerable variability existed in these perceptions – such that some people regarded punishment as more certain and severe than others (see Table 3, p. 21-22). Accordingly, we were interested in how such perceptions may shape adherence. In the revised manuscript, we outline this reasoning in the introduction (see p. 6):

“Although social distancing measures were not widely enforced in the U.S., sanctions did occur during the first wave lockdown [30]; furthermore, severe sanctions were communicated for other COVID-related violations [31]. Research on perceptual deterrence suggests that subjective perceptions of punishment may also influence compliance [32]. For this reason, we also examined subjective perceptions of punishment for not following social distancing measures, separating punishment certainty and severity – the key dimensions separated by general deterrence theory [33-35].”

8. Abstract

The Reviewer points out that the abstract needs work as readers will not understand ‘situational’ and ‘motivational’ variables without further specification.

We thank the Reviewer for mentioning this. In response, we have thoroughly revised the abstract, to correspond with the revised classification of variables. In the revised abstract, we now mention the six categories of variables, as well as the specific variables that were found to predict adherence. Furthermore, in the revised abstract, we also mention the results of the mediation analysis, in terms of the variables that were found to explain the observed decline in compliance across survey waves (p. 2):

“(…) For this purpose, we examined a broad range of factors, relating to people’s (1) knowledge and understanding of the mitigation measures, (2) perceptions of their costs and benefits, (3) perceptions of legitimacy and procedural justice, (4) personal factors, (5) social environment, and (6) practical circumstances. Our findings reveal that adherence was chiefly shaped by three major factors: respondents adhered more when they (a) had greater practical capacity to adhere, (b) morally agreed more with the measures, and (c) perceived the virus as a more severe health threat. Adherence was shaped to a lesser extent by impulsivity, knowledge of social distancing measures, opportunities for violating, personal costs, and descriptive social norms. The results also reveal, however, that adherence declined across this period, which was partly explained by changes in people’s moral alignment, threat perceptions, knowledge, and perceived social norms. These findings show that adherence originates from a broad range of factors, which develop dynamically across time. (…).”

Reviewer 2’s comments

1. Effect of location

The Reviewer wonders whether participants’ location may have played a role in their responses, given that some states repealed lockdown measures later than others.

We thank the Reviewer for this question, which is similar to a point raised by Reviewer 1 (point 4). Indeed, geographic region likely influenced participants’ responses, given that regions differed in the length of lockdown measures as well as in levels of infection. In the revised manuscript, we now include region (based on the U.S. census regions) as an additional control variable in our regression model. As stated in our response to Reviewer 1, adherence was lower in the Midwest and South regions than in the Northeast. However, such regional differences were mostly reduced to nonsignificance when other, more proximal predictors were added to the model.

2. Influence of demographic and other variables

The Reviewer observes that there is some demographic information that may have had an important impact on survey responses. The Reviewer specifically mentions age, gender and ethnicity, and field of employment (particularly essential work).

The Reviewer is correct that many demographic variables may have shaped responses to the survey. For this reason, we controlled for these variables in all analyses. We do recognize, however, that many variables might have important indirect effects on adherence, for example by shaping key predictors (e.g., perceived threat, moral alignment, capacity to comply). These may include the demographic variables mentioned by the Reviewer, but also region (see Reviewer 1 point 4 and Reviewer 2 point 1), political orientation, or trust in science and media (see Reviewer 1’s point 2). While we regard such indirect relationships as highly interesting, we do feel that they are too many to analyze in sufficient detail in the present manuscript, which is oriented on understanding more proximal, direct relationships with adherence. However, we do wholeheartedly encourage other researchers to explore further indirect and interactive relationships using these data, and have made them publicly available for this purpose. We outline this reasoning in the discussion (see p. 49-50):

“Clearly, the data allow for the exploration of many other relationships beyond those that we study in the present manuscript. For example, the data can inform about the relationship between adherence and political orientation or trust in science (both singled out as important predictors of adherence in prior research [50, 51, 98, 99], yet neither a significant predictor in our final regression model), or demographic factors like ethnicity or socio-economic status. From the results of the hierarchical regression analysis, it seems plausible that these and other factors may have indirect relationships with adherence, through their effects on more proximal predictors. The data further could illuminate how specific subsets of predictors may interact with each other, or could be used to study other outcome variables (e.g., how these predictors may explain felt negative emotions, or support for authorities, etc.). The present research was primarily oriented on understanding the proximal predictors of adherence. For this reason, we feel that other relationships, such as those outlined above, are best reserved for dedicated manuscripts that are specifically oriented on these questions. We welcome further analyses of these questions, and have made our data publicly available for this purpose.”

We agree with the Reviewer that the distinction between essential and nonessential work is important. Regrettably, we did not measure this in the present research, and thus could not explore this further.

3. National representativeness of the sample

The Reviewer wonders to what extent our sample was nationally representative on other features than sex and age, such as location and ethnicity.

This point is related to that raised by Reviewer 1 (point 5), and we have responded in detail there. In brief, our participants were recruited via stratified sampling into pre-determined subgroups with the same demographic proportions (age, gender, and race/ethnicity) as the national population based on U.S. Census Bureau statistics. As such, the sample comes very close to being nationally representative on demographics. However, the result remains a non-probability convenience sample, rather than a truly nationally representative sample (if one truly exists). We have explicated this more clearly in the method section (p. 10), have acknowledged this as a limitation in the discussion (p. 52), and have toned down all claims of national representativeness in the manuscript. We did not directly measure ethnic group membership (only whether respondents regarded themselves as part of a minority group), and therefore cannot directly compare these to national statistics from the census (though the sample was stratified to have the same proportions as the national population).

Our sample was not designed to be stratified according to location. A comparison with the estimates from the U.S. Census Bureau indicates that our sample somewhat overrepresented residents from the Northeast (20.4%, vs. 17.2% in the census) and South (42.8%, vs. 38.1%), underrepresented residents of the West (16.0%, vs. 23.8%), and was nearly equivalent for residents from the Midwest (20.8%, vs. 20.9%). To fully examine any regional effects, we would need a substantially larger sample, though it is important to note that the regional differences we uncovered were both small in magnitude and generally rendered non-significant when other variables were included. We mention in the Discussion that further research is needed to understand how these processes occurred at the regional or subordinate levels (p. 48):

“An important question for future research, however, is to understand more deeply how the changes that we observed across this period may be connected to local developments in policy, society, and the pandemic. For this, a more fine-grained analysis is needed, which takes into account how these processes developed locally at the level of regions, states, counties, or even cities.”

References

Apel, R. (2013). Sanctions, perceptions, and crime: Implications for criminal deterrence. Journal of Quantitative Criminology, 29(1), 67-101. https://doi.org/10.1007/s10940-012-9170-1

Decker, S., Wright, R., & Logie, R. (1993). Perceptual deterrence among active residential burglars: A research note. Criminology, 31(1), 135-147. https://doi.org/10.1111/j.1745-9125.1993.tb01125.x

Decision Letter 1

Ali B Mahmoud

13 Aug 2021

PONE-D-20-39307R1

Social Distancing in America: Understanding Long-term Adherence to COVID-19 Mitigation Recommendations

PLOS ONE

Dear Dr. Reinders Folmer,

Thank you for submitting your manuscript to PLOS ONE. The reviewers have recommended publication, but also suggested a couple of minor additions that would help improve the quality of your research. Mainly, present the findings for age by the typical categories used for this variable, rather than as a linear predictor; this variable may also contain nonlinearities that would be interesting to investigate. Further, you note that the distinction between essential and non-essential work is important; however, your study did not measure this distinction, so it would be a good idea to include this point in your Discussion as a research implication. Therefore, I invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 27 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Ali B. Mahmoud, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the careful revisions to the suggestions posed after the first submission. If I had one more suggestion, it would be to display the results for age by the typical categories used for this variable rather than as a linear predictor. There may be nonlinearities in this variable that would be interesting to see. But that is only a suggestion.

Reviewer #2: Thank you for addressing the comments made by both reviewers. The abstract is much more clear and the revised classification of the predictors, as well as the explanations for them, was well done.

One small suggestion - you mention that the distinction between essential and nonessential work is important, but was not measured in your study. Perhaps it would be prudent to add this to your Discussion, as a direction or suggestion for future research.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Dan Romer

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Sep 24;16(9):e0257945. doi: 10.1371/journal.pone.0257945.r004

Author response to Decision Letter 1


23 Aug 2021

Editor’s comments

1. Age as a linear predictor

In line with Reviewer 1’s comment, you note that age may have nonlinearities that would be interesting to investigate, and suggest presenting the findings for age by category, using the typical categories for this variable.

We thank you and Reviewer 1 for this suggestion. We agree that age could have interesting nonlinearities that are obscured by treating this covariate as a linear variable. In response, we have carefully considered this suggestion, and conducted additional analyses to assess it. However, this has led us to conclude that this change would not yield additional insight beyond the original analyses.

A first important consideration is that reducing age to a categorical variable will result in a loss of information by aggregating individuals with different ages into the same category. We also note that in similar articles in your journal, both categorical and continuous measures of age have been used.

More importantly, when dividing participants into age categories (15-17 | 18-20 | 21-44 | 45-64 | 65+, in line with the US Census), we found no differences between age groups – neither when using the lowest age group as the reference category, nor when using the oldest age group. Moreover, using age categories did not change the results of the predictor variables (please see S5 Output). In sum: we found no indications of nonlinearities in the effect of age (probably because such differences are better explained by the predictor variables, e.g., threat perceptions, than by the unique effect of age). Furthermore, to implement age categories in the manuscript, all analyses presented in the Results section (p. 23-46) would need to be revised (because all coefficients are slightly altered by the loss in degrees of freedom). Given that age is only a control variable in our model, and that the Reviewer presents this as “only a suggestion”, we were hesitant to make substantial changes that do not yield additional insight. As such, we have retained the original analyses (with age as a linear covariate) in the final manuscript. However, in case you or the Reviewers believe that separating age by category can nonetheless be informative, we will of course be happy to revise the results accordingly.

We mention generational differences as a possible avenue for future research in the Discussion (p. 50), in line with your suggestion:

“Future research could also expand on these findings by zooming in further on specific variables that may directly or indirectly shape compliance (e.g., (…) by separating individuals from different generations [101])”

2. Distinction between essential and nonessential work

Following Reviewer 2’s suggestion, you suggest that it would be a good idea to include the distinction between essential and nonessential work in the Discussion as a research implication.

We agree that this distinction would be interesting to explore, and have mentioned this as a possible avenue for future research in the Discussion (p. 50):

“Future research could also expand on these findings by zooming in further on specific variables that may directly or indirectly shape compliance (e.g., by distinguishing essential and nonessential work; by separating individuals from different generations [101]), or by identifying further variables with which our model could be expanded.”

3. Reference list

As per journal policy, you ask that we review our reference list to ensure that it is complete and correct, and to include the rationale in the case that retracted papers are cited.

We have reviewed all references and made changes wherever appropriate (please see p. 53-64). This chiefly involved adding DOIs where these were not yet provided [19, 24-26, 32, 35, 37, 44, 55, 56, 58, 59, 61-69, 71-73, 75, 77, 79, 84, 96, 11]. Page numbers were added for [25] and [66]. A missing issue number was added for [72].

In addition, the previous version of the manuscript mentioned an incorrect reference for [33]; this has been replaced with the correct reference. An additional relevant reference was added to the discussion [93]. A typo was corrected for [97], and for [99] and [112] the citations were updated (from preprints to published papers). No retracted papers are cited.

Reviewer 1’s comments

1. Age as a linear predictor

As a final suggestion, the Reviewer suggests presenting the findings for age by category, using the typical categories for this variable.

We thank the Reviewer for this suggestion. We have responded to it in detail in our response to the Editor’s point 1 (please see above).

Reviewer 2’s comments

1. Distinction between essential and nonessential work

The Reviewer notes that because we mentioned that the distinction between essential and nonessential work is important (in our rebuttal letter for the previous revision round), it might be prudent to add this to the Discussion, as a direction or suggestion for future research.

We thank the Reviewer for this suggestion. As detailed in our response to the Editor’s point 2, we now mention this in the Discussion (please see p. 50).

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Wen-Jun Tu

15 Sep 2021

Social Distancing in America: Understanding Long-term Adherence to COVID-19 Mitigation Recommendations

PONE-D-20-39307R2

Dear Dr. Reinders Folmer,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Wen-Jun Tu

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Wen-Jun Tu

17 Sep 2021

PONE-D-20-39307R2

Social Distancing in America: Understanding Long-term Adherence to COVID-19 Mitigation Recommendations

Dear Dr. Reinders Folmer:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Wen-Jun Tu

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Survey. Survey materials.

    (PDF)

    S1 Dataset. Dataset and syntax files.

    (DOCX)

    S1 Table. Kendall’s tau correlations between demographic variables and adherence.

    May 8–18 (Survey 1. N = 1012). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level. Gender–Female as reference category. Political orientation–N = 866.

    (DOCX)

    S2 Table. Kendall’s tau correlations between demographic variables and adherence.

June 8–16 (Survey 2. N = 986). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level. Gender–Female as reference category. Political orientation–N = 880.

    (DOCX)

    S3 Table. Kendall’s tau correlations between demographic variables and adherence.

    July 11–17 (Survey 3. N = 921). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level. Gender–Female as reference category. Political orientation–N = 803.

    (DOCX)

    S4 Table. Kendall’s tau correlations between independent variables and adherence.

    May 8–18 (Survey 1. N = 1012). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level.

    (DOCX)

    S5 Table. Kendall’s tau correlations between independent variables and adherence.

    June 8–16 (Survey 2. N = 986). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level.

    (DOCX)

    S6 Table. Kendall’s tau correlations between independent variables and adherence.

    July 11–17 (Survey 3. N = 921). Note. *–Correlation is significant at the .05 level. **–Correlation is significant at the .01 level.

    (DOCX)

    S1 Output. Complete regression output Survey 1 (May).

    (PDF)

    S2 Output. Complete regression output Survey 2 (June).

    (PDF)

    S3 Output. Complete regression output Survey 3 (July).

    (PDF)

    S4 Output. Complete regression output Surveys 1–3 combined (May–July).

    (PDF)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    All data files and analysis syntax are available from the Figshare database (accession number 13125206) https://uvaauas.figshare.com/articles/dataset/Social_Distancing_in_America_Compliance_with_COVID-19_mitigation_measures_in_the_United_States/13125206.
