PLOS One. 2021 Feb 24;16(2):e0247520. doi: 10.1371/journal.pone.0247520

Who is (not) complying with the U.S. social distancing directive and why? Testing a general framework of compliance with virtual measures of social distancing

Russell H Fazio 1,*, Benjamin C Ruisch 2, Courtney A Moore 1, Javier A Granados Samayoa 1, Shelby T Boggs 1, Jesse T Ladanyi 1
Editor: Lambros Lazuras
PMCID: PMC7904183  PMID: 33626066

Abstract

A study involving over 2,000 online participants (US residents) tested a general framework regarding compliance with a directive in the context of the COVID-19 pandemic. The study featured not only a self-report measure of social distancing but also virtual behavior measures—simulations that presented participants with graphical depictions mirroring multiple real-world scenarios and asked them to position themselves in relation to others in the scene. The conceptual framework highlights three essential components of a directive: (1) the source: the entity advocating for the behavioral change; (2) the surrounding context: the challenge to which the directive responds; and (3) the target: the persons to whom the directive is addressed. Belief systems relevant to each of these three components were predicted, and found, to relate to compliance with the social distancing directive. The implications of the findings for public service campaigns encouraging people to engage in social distancing are discussed.

Introduction

Until a vaccine has been disseminated widely, minimizing the spread of COVID-19 requires that people change their behavior. People are urged to wash their hands frequently, use hand sanitizer, disinfect surfaces, and wear masks. Above all, since mid-March 2020 when the pandemic reached a critical level in the United States, government leaders and health experts have pleaded with citizens to engage in social distancing–that is, to deliberately increase the physical space between themselves and other people. The mantra “stay six feet away from others” has been repeated regularly. Most states took even stronger action, imposing “shelter-in-place” orders for weeks, sometimes months, in the interest of minimizing contact among people. Even after dire economic concerns prompted some states to gradually begin the process of re-opening, the plea to engage in social distancing has, if anything, been emphasized all the more. Tape on the floors of stores designates intervals of six feet; restaurants and bars are required to situate tables six feet apart; public parks are outlined with circles demarcating six feet of separation.

Rarely has the entire population been called upon to exhibit immediate behavior change in compliance with an urgent directive. That raises an important question: who is, and who is not, complying? Understanding who chooses to practice social distancing (or not)–and why–is crucial for the design of effective public service campaigns, both now and during future pandemics. Whom should such campaigns target? What specific beliefs should be addressed?

Theory and research concerning compliance, i.e., behavioral change in response to an explicit or implicit request, is central to social psychology. As delineated in Kelman’s classic treatise regarding the nature of the changes that may be fostered by a communication, any such behavioral change may range from passive conformity with a source’s message in the interest of avoiding disapproval to a more internalized, private acceptance of the inherent value of the message [1]. Although internalization promotes more generalized behavior change that is not dependent upon normative approval or disapproval, the key for the present purposes, and how we define compliance here, is whether a given individual responds to the request by engaging in the desired behavior, whatever the reason may be. The field has acquired substantial knowledge regarding social influence tactics that promote compliance [2], including such classic approaches as the foot-in-the-door [3], door-in-the-face [4], and low-balling [5]. In addition, the impact of both descriptive and injunctive norms, and the interplay between them, has been examined extensively [6, 7].

However, as some scholars have noted [8], the field lacks a general theoretical framework about who is likely to comply with a directive, and why they might or might not. Such a framework is particularly important when considering a directive calling for compliance and behavior change on a large scale, as is currently true of the social distancing directive. The major aim of the current research is to test such a theoretical framework regarding the who and why of compliance.

Compliance with a directive: What’s involved?

Any directive is open to interpretation and ultimately will be assessed as warranting or not warranting compliance. Only if deemed justified on the basis of one’s reasoning regarding the merits of the source’s arguments, or on the basis of one’s mere respect for the source, is the directive likely to promote behavioral change. Yet, one of the core principles of social psychology is that individuals construct their own reality [9–11]. Such constructions influence and are influenced by the information to which individuals choose to expose themselves [12], their exploratory behavior, and ultimately the accuracy of their understanding of reality [13, 14]. Thus, any given directive will be viewed through the lens of the individual’s knowledge, beliefs, and attitudes. Decades of research demonstrate the pervasive influence of such factors on judgments and decisions [15–17]. An excellent example of such processes, and their potential significance within the domain of health, stems from issues that are now prominent regarding parental vaccination of children. Endorsement of such vaccine conspiracy beliefs as “pharmaceutical companies cover-up the dangers of vaccines” is strongly associated with parents not complying with routine vaccination recommendations for their children [18, 19].

Such considerations raise the question of what might be regarded as the essential components of a directive. Our theoretical framework highlights three: (1) the source: the entity advocating for behavioral change; (2) the surrounding context: the challenge the directive addresses; and (3) the target: the persons to whom the directive is addressed. Critically, the framework guiding the present research and our selection of predictor variables contends that belief systems relevant to each of these three components will influence the likelihood of compliance. Is the source to be trusted? What does the surrounding context imply about the seriousness of the challenge? Are there individual propensities that affect responsivity to the directive? Thus, a complex network of beliefs will affect who chooses to comply or not, and for what reasons. Some individuals’ belief systems will lead them to assess the directive favorably, thus increasing the likelihood of behavior change. Others will reach less positive conclusions and, hence, are likely to fail to respond appropriately.

This conceptual framework bears some similarity to the classic “who says what to whom” question pursued by Carl Hovland and his associates in the Yale Communication Group regarding persuasive communication [20]. Although the emphasis on source variables (“who”) is indeed parallel with our framework, our focus does not include a consideration of any specific message variables (“what”), largely because any directive concerning a large-scale challenge, like the social distancing directive in response to the COVID-19 pandemic, is likely to involve diverse messages delivered across multiple media. Our interest is in how individuals respond to the general plea, not any specific persuasive message in service of that plea. Finally, whereas the Yale Communication Group was largely concerned with personality variables that might relate to general persuasibility (“whom”), our framework’s focus on target characteristics is more specific. The concern is with characteristics that are likely to relate to receptivity to a given directive in light of a given challenge.

Measuring social distancing

In examining compliance, the challenge rests in how to assess social distancing. Observation of individuals’ behavior in the field, arguably the “gold standard” in research on behavioral compliance, followed by extensive interviewing of those who are and are not maintaining social distance, is simply impractical. For this reason, the field’s dominant approach is to ask people to report the frequency with which they socially distance. However, the problems associated with self-reports of behavior have been discussed for decades. Individuals may over-report their social distancing to convey a socially desirable impression to others and themselves [21–24]. Moreover, self-reports may be all the more problematic to the extent that they rely on retrospective memory regarding past behavior [25, 26]. Even more troubling, some of the very characteristics and beliefs we predict will affect responsiveness to the directive may also influence how a person (mis)represents their social distancing on self-report measures. For example, strongly valuing one’s identity as a liberal or as a conservative might promote the reconstruction of memory regarding one’s social distancing behavior in the very direction implied by that valued identity. Likewise, believing oneself to be a compassionate person who is concerned about others’ vulnerability to COVID-19 may promote an exaggerated reconstruction of the extent to which one is practicing social distancing.

We thus supplemented self-reports of social distancing with a more innovative, behaviorally-oriented approach. We simulated social distancing behavior with graphical depictions mirroring real-world scenarios. These involved a variety of situations in which individuals commonly encounter other people (e.g., sidewalks, a crosswalk, park pathways, a plaza, a grocery store, a beach, a library, and a coffeeshop) and, hence, experience an opportunity to engage in social distancing. In each case, we asked participants to position themselves in relation to others in the scene. Hence, the virtual social distancing scenarios required a concrete, “in-the-moment” behavioral decision, which could vary in the degree to which participants did or did not distance themselves from others. For example, in one scenario participants chose whether to cross a park via a circuitous but isolated path versus a more-direct but crowded route. In another, they were presented with an aerial image of a crowded beach and asked to click on the spot where they personally would lay down their towel. Yet another presented an interactive image of two people approaching each other in a crosswalk for which participants were asked to move a slider that shifted the walkers from the center of the crosswalk to the distance that they personally would leave between themselves and the other individual.

Our argument regarding the value of these virtual behavior scenarios parallels a relevant empirically-supported proposition regarding attitudes as predictors of behavior. Attitude measures are more likely to predict behavior when they match the behavior in terms of specificity regarding the action in question and the context in which the action is performed [27, 28]. Similarly, the simulated scenarios closely match real-life situations in terms of their concreteness. They offer a means, in addition to a self-report, of indexing the extent to which individuals make decisions that accord with the principle of social distancing.

The validity of the self-report measure of social distancing and, more importantly, the novel virtual behavior measure has been established by recent longitudinal data [29]. Four months after participating in studies involving these measures, over 2000 participants indicated whether they had contracted COVID-19 during the interim period. Both measures proved predictive. However, illustrating both the problems associated with self-reports and the value of the novel measurement approach, the virtual behavior measure accounted for unique variance when the two predictors were considered simultaneously, whereas the self-report measure did not.

Predictor variables

This study examines the relation between social distancing and various predictor variables. Surely, innumerable variables may merit consideration for inclusion in such research. In our case, the selection of predictors was guided by our framework regarding the essential components of a directive. How trustworthy are those espousing the plea? How serious is the challenge that the requested behavioral change is intended to address? Are there particular individual characteristics that are likely to influence receptivity? Hence, to test our compliance framework, the predictors involve our three classes of beliefs–those regarding the source of the directive, the surrounding context posed by the challenge, and additional characteristics of the targets themselves.

Beliefs about the source

The primary source of the social-distancing directive is government and health officials. The latter are medical scientists or liaisons representing the scientific community. Given the distinction in the literature between valuing science as a means of acquiring knowledge and trusting scientists and their work [30], we hypothesized that both (a) greater belief in science and (b) greater trust in scientists would relate positively to compliance.

Given the highly polarized sociopolitical context and the politicization of the pandemic within the United States, assessing faith in government officials is more complex. The messages the public received from various officials were not consistent, with President Trump often downplaying the seriousness of the pandemic relative to what state Governors and health experts were communicating early in the pandemic, while most states were under “shelter-in-place” orders [31–33]. Accordingly, we separately assessed trust in the President’s and Governors’ leadership regarding the pandemic, predicting that these measures might relate differently to compliance.

Beliefs about the context

In accord with our guiding framework, we generated a series of items related to the challenge that the social distancing directive aimed to address. These involved assessments of the seriousness of the pandemic and support for social distancing. They were included to examine the hypothesis that greater concern about the virus and positive attitudes toward the directive would be associated with more social distancing. We tested a similar hypothesis regarding accurate knowledge about COVID-19 by administering a brief quiz about the virus. More knowledgeable individuals were expected to display more distancing.

Target characteristics

Two sets of target characteristics were expected to relate to individuals’ receptivity to the plea to engage in social distancing: (a) beliefs relevant to disease or views of the government and (b) more general characteristics of the individual relevant to the plea to socially distance. Perceived vulnerability to disease [34] and its concomitant disgust sensitivity [35] were expected to relate positively to social distancing. Similarly, general compassion [36] and concern for others’ vulnerability to the coronavirus were expected to predict distancing. Our conceptual reasoning led us to identify two additional receptivity-related beliefs that would affect individuals’ acceptance of the social distancing directive as a result of the influence that they would exert on both beliefs about the source and beliefs about the challenge. First, we expected political conservatism to relate to less social distancing. Our reasoning was that more conservative individuals traditionally place greater emphasis on economic matters, and social distancing directives may be viewed as a threat to the economy. Moreover, President Trump, a key conservative leader, expressed both an equivocal stance regarding the severity of the pandemic and urgency regarding reopening the economy [32]. The second receptivity-related belief on which we focused was the general tendency to endorse conspiracy theories. A considerable literature points to the significance of conspiratorial ideation as a factor associated with the rejection of scientific findings and recommendations [37–39]. We predicted that such beliefs would promote minimization of COVID-19’s severity, and hence relate to less compliance.

Turning to the second set of target characteristics, we also predicted that individual differences in scientific literacy [40] would likely relate to both trust in health experts and the development of accurate knowledge regarding the coronavirus. Hence, scientific literacy was expected to be associated with more distancing. Additionally, a considerable literature highlights the importance of the particular news sources that individuals follow [41–43]. We expected that reliance on more conservative news sources would relate to minimizing the threat posed by the pandemic and less distancing behavior.

Materials and methods

We recruited a sample from Mechanical Turk. Although not representative of the U.S. population, MTurk samples are considerably more diverse than the student samples used in most psychological research [44, 45], and they perform similarly to non-MTurk samples across many tasks and measures [46, 47], including surveys on political attitudes [48]. Further, our aim is not to make claims regarding the absolute frequency of beliefs and behaviors in the population, but rather to understand how the psychological variables of interest relate to social distancing behavior. Given these aims, we judged the MTurk sample as appropriate for testing our hypotheses. As will become evident, both the very systematic nature of the data and their replication of some relations previously established in the literature attest further to the appropriateness of the MTurk sample.

Past experience with MTurk participants led us to believe that they prefer relatively short studies and respond to them most conscientiously. Hence, we included only our social distancing measures, survey items assessing beliefs and knowledge about the pandemic, and various demographics as the elements of a common survey that was completed by all the participants. Subsets of our other predictors were included in four distinct surveys to which participants were randomly assigned. The four subsets involved: (a) source beliefs and science literacy, (b) news sources and belief in conspiracy theories, (c) compassion and concern for others’ vulnerability to COVID-19, and (d) perceived vulnerability to disease and disgust sensitivity. Demographic data regarding the participants in each of the four sub-studies are presented in S2 Material; these attest to the comparability of the four randomly-assigned samples.

Participants

We aimed for sample sizes that would clearly be large enough to obtain stable estimates of the relations with social distancing within each of our four sub-studies [49]. A total of 2,001 MTurk workers (US residents) participated in the common survey (903 women, 1,084 men, 14 no response; Mage = 38.66, SDage = 12.33), with about 500 being randomly assigned to each sub-study. They completed the study on May 7–8, 2020, at which time some states had begun to re-open their economies.

Measures

Ohio State University’s Institutional Review Board approved all study procedures (IRB: 2020B0129). After providing informed consent, participants completed the behavioral measures of social distancing, followed by questions regarding the pandemic, the test of COVID-19 knowledge, the unique set of predictor variables for the study to which the participant had been randomly assigned, and finally a series of demographic questions. All of the measures and the datafile are available at https://osf.io/359et/.

Virtual social distancing behaviors

Ten graphical scenarios comprised the virtual measure of social distancing behavior. Examples include: (a) An image of two people approaching each other in a crosswalk. Participants moved a slider that shifted the walkers from the center of the crosswalk to the distance that they would prefer. (b) An aerial image of a crowded plaza that participants were asked to traverse by drawing a path from a start point located at the southwest end of the plaza to an end point at the northeast end. The length of the paths that participants drew (in pixels) was measured as the data of interest. (c) A graphic depicting a park for which participants used a 4-point scale to indicate whether they would definitely or probably walk via one of two paths. One path was less direct, but also more isolated relative to the many people situated on either side of the alternative path. Still images of these three graphical scenarios are presented in Fig 1. All ten of the behavioral scenarios are described in S1 Material and can be viewed at our demonstration website, http://psychvault.org/social-distancing/. After standardizing scores from each measure, we computed the average as our index of social distancing behaviors (α = .82).
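The scoring just described (standardize each scenario's score so that metrics as different as pixels, slider positions, and rating scales become comparable, then average) can be sketched as follows. This is a minimal illustration, not the authors' analysis code; the `scores` array and both function names are hypothetical, and Cronbach's alpha uses the standard variance-based formula.

```python
import numpy as np

def standardized_composite(scores):
    """Average of z-scored columns.

    `scores` is a hypothetical (n_participants, n_scenarios) array in which
    each scenario has already been reduced to one number per participant.
    """
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
    return z.mean(axis=1)

def cronbach_alpha(scores):
    """Cronbach's alpha: (k / (k-1)) * (1 - sum of item variances / total variance)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Z-scoring before averaging prevents scenarios with larger raw units (e.g., path length in pixels) from dominating the composite.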

Fig 1. Example virtual behavior items.

Fig 1

Predictor variables

Questions regarding the pandemic. The behavioral scenarios were followed by the common portion of the survey, including the self-report measure of social distancing: “Generally speaking, how strictly have you personally been following the "social distancing" recommendations?” to which participants responded on a 7-point scale ranging from “not at all” to “very strictly.” They also responded to a number of questions regarding perceptions of the pandemic. Participants were asked how worried they were about contracting the virus, how likely they were to do so, and how concerned they were about the spread of the virus. They also indicated whether they felt the threat of COVID-19 had been “greatly exaggerated,” “somewhat exaggerated,” “adequately conveyed,” or “not conveyed strongly enough.” Yet another item inquired about the tradeoff between economic considerations and safety by asking participants to endorse one of six statements ranging from “Authorities should ONLY focus on protecting people from COVID-19 / the coronavirus, regardless of how much the economy will suffer” to “Authorities should ONLY focus on protecting the economy, regardless of how many people will suffer from COVID-19 / the coronavirus.” A more general attitudinal question asked participants to use a 7-point scale to indicate the extent to which they supported or opposed the guideline to engage in social distancing.

COVID-19 knowledge. Participants indicated whether each of 13 statements regarding the coronavirus (facts and myths addressed by the Centers for Disease Control and Prevention and the World Health Organization) were true or false. Included were false items such as “Antibiotics are an effective treatment for COVID-19 / the coronavirus” and true items such as “Some individuals who have COVID-19 / the coronavirus do not show any symptoms.” The number answered correctly served as our index of COVID-19 knowledge (α = .83).

Faith in government. To assess people’s trust in different elements of the government, we used four single-item measures all of which involved responding on a 7-point scale ranging from “Not at all” to “Very Much.” Specifically, participants were asked to rate the extent to which they “trust President Trump to lead us effectively through the current COVID-19 crisis” and separately whether they trust state governors to do so. They also indicated the extent of their general confidence in President Trump and general confidence that the federal government will address the nation’s problems effectively.

Belief in the value of science. To assess the extent to which individuals believe in the value of science as the best way to accumulate knowledge about the world, participants responded to a shortened version (the six items with the highest factor loadings) of a scale developed by Farias, Newheiser, Kahane, & de Toledo [50]. Participants rated the degree to which they endorsed statements such as “Science is the most efficient means of attaining truth” on a six-point scale ranging from 1 (Strongly disagree) to 6 (Strongly agree). The average score across the six items was computed as the relevant index (α = .92).

Trust in scientists. This variable was assessed using a shortened version (the 11 items, out of 21, with the highest corrected item-total correlations) of the scale developed by Nadelson et al. [30]. Participants rated on a scale ranging from 1 (Strongly disagree) to 5 (Strongly agree) their level of agreement with statements such as “We should trust the work of scientists” and “Scientists ignore evidence that contradicts their work” (reverse coded). The average rating across the eleven items was computed (α = .80).

Science literacy. Participants’ understanding of basic scientific ideas was assessed using the Civic Scientific Literacy Scale [40]. This scale consists of 11 claims such as “Light travels faster than sound” and “Electrons are smaller than atoms” for which participants indicate agreement or disagreement. The number of correct responses was computed (α = .59).

Conspiracy theories. Participants completed the Generic Conspiracist Beliefs scale [37]. The scale consists of 15 items that address a variety of generic conspiracy theories, including “Evidence of alien contact is being concealed from the public,” “A small, secret group of people is responsible for making all major world decisions, such as going to war,” and “Experiments involving new drugs or technologies are routinely carried out on the public without their knowledge or consent.” Participants responded to each statement on a scale of 1 (definitely not true) to 5 (definitely true), with the average rating serving as the measure of general belief in conspiracy theories (α = .96).

News sources. Participants were presented with a list of potential news sources: CNN, Fox News, MSNBC, NPR, national newspapers and magazines, social media, and ABC, CBS, or NBC News, as well as the option “do not follow the news.” They were asked to select all the sources from which they had gotten their news in the past week. Any who selected an option other than not following the news were then asked to select which one of the outlets they considered to be their primary source of news. Our interest was especially in exposure to Fox News, an outlet whose political leaning is known to be conservative [43].

Compassion. To assess general compassion for others, we employed a subset of items of the Interpersonal Reactivity Index [36]. Specifically, we included the 14 items of the scale related to empathic concern (e.g., “When I see someone being taken advantage of, I feel kind of protective towards them”) and perspective taking (e.g., “Before criticizing somebody, I try to imagine how I would feel if I were in their place”). Participants responded on a 5-point scale ranging from “Does not describe me well” to “Describes me very well” and their average rating served as the measure of interest (α = .87).

Concern for others’ vulnerability to COVID-19. Four items assessed the extent to which participants experienced empathic concern for people who had contracted COVID-19 or were vulnerable to contracting it. Participants rated their agreement on a 6-point scale with such statements as “I feel it is my personal responsibility to keep others safe from COVID-19 coronavirus.” The average rating was computed (α = .67).

Perceived vulnerability to disease. Individuals’ perceptions of their likelihood of contracting a disease or illness was assessed with the 15-item scale developed by Duncan, Schaller, & Park [34]. Participants rated the degree to which they agreed with statements such as “If an illness is ‘going around’ I will catch it” on a 5-point scale. After the required reverse-coding of some items, the average response to the 15 scale items was computed (α = .73).
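The reverse-coding mentioned here (and for the trust-in-scientists scale above) maps a response to its mirror image on the scale before averaging. A minimal sketch of this standard procedure; the function names and the `items` array are illustrative, not taken from the study materials.

```python
import numpy as np

def reverse_code(responses, scale_min=1, scale_max=5):
    """Reverse-key an item: on a 1-5 scale, 1 <-> 5, 2 <-> 4, and so on."""
    return (scale_min + scale_max) - responses

def scale_score(items, reversed_idx, scale_min=1, scale_max=5):
    """Average across items after reverse-coding the reverse-keyed columns.

    `items`: (n_participants, n_items) response array (hypothetical);
    `reversed_idx`: list of column indices that are reverse-keyed.
    """
    items = np.array(items, dtype=float)  # copy, so the input is not mutated
    items[:, reversed_idx] = reverse_code(items[:, reversed_idx], scale_min, scale_max)
    return items.mean(axis=1)
```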

Disgust sensitivity. The contamination subscale (five items) from the Disgust Scale Revised [35] was used to assess individuals’ sensitivity regarding situations that have the potential for the transmission of pathogens. Participants rated on 5-point scales how disgusted they would be by various scenarios such as “A friend offers you a piece of chocolate shaped like dog doo,” as well as their agreement with statements such as “I probably would not go to my favorite restaurant if I found out the cook had a cold.” The average response to the five scale items served as the measure of disgust sensitivity (α = .70).

Demographics. In addition to a number of demographic questions (e.g., age, gender, and employment status), participants were asked to identify their political orientation on a scale ranging from 1 (Extremely liberal) to 7 (Extremely conservative).

Attention check. The survey concluded with a brief attention check in which participants were informed that a man had seen a beautiful butterfly, and were then asked to select what he had seen: a girl, a day, a fruit, or an insect. Ninety-one percent correctly chose insect. To provide a more conservative test of our hypotheses, we did not exclude participants who failed this attention check. However, none of our conclusions or statistical results are altered to any meaningful degree if these participants are excluded from analyses.

Results

Social distancing

To test each of the hypotheses, we examined the multiple correlation between a given variable and our two indices of social distancing–the behavioral and the self-report measures. Table 1 presents the regression data for each of our hypothesized predictor variables. Table 2 displays the correlations among the variables. Turning first to the source, both belief in the value of science and trust in scientists correlated positively with social distancing. As expected, these variables also correlated with assessments of the pandemic itself, including more support for the social distancing guidelines, greater concern about the spread of COVID-19, stronger beliefs that the threat posed by the virus had not been exaggerated, and a view that public safety should be prioritized over economic recovery.
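The multiple correlations reported in Table 1 can be sketched as follows: each predictor is regressed simultaneously on the two standardized distancing indices, yielding R plus a standardized beta for each index. This is an illustrative reconstruction assuming ordinary least squares on z-scored variables; the function name and inputs are hypothetical, not the authors' code.

```python
import numpy as np

def multiple_correlation(predictor, behavioral, self_report):
    """Regress a z-scored predictor on the two z-scored distancing indices.

    Returns (R, beta_behavioral, beta_self_report); because all variables
    are standardized, the OLS coefficients are standardized betas.
    """
    def z(x):
        return (x - x.mean()) / x.std(ddof=1)

    y = z(np.asarray(predictor, dtype=float))
    X = np.column_stack([np.ones_like(y),
                         z(np.asarray(behavioral, dtype=float)),
                         z(np.asarray(self_report, dtype=float))])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ coefs
    ss_res = ((y - y_hat) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    r_squared = 1 - ss_res / ss_tot
    return np.sqrt(r_squared), coefs[1], coefs[2]
```

Each beta indexes the unique relation of one distancing measure with the predictor, holding the other measure constant, which is what allows the behavioral and self-report measures to diverge in Table 1.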

Table 1. Multiple correlations with virtual behavioral and self-reported social distancing.

Standardized Betaa
R F df Behavioral Self-Report
Beliefs about the Source
Belief in Value of Science .247 15.993*** 494 .117* .172***
Trust in Scientists .372 39.754*** 494 .299*** .127**
Trust President Trump re COVID-19 crisis .294 23.337*** 493 -.309*** .038
Trust State Governors re COVID-19 crisis .268 19.021*** 493 .027 .255***
Confidence in Federal Gov’t Effectiveness .205 10.832*** 493 -.223*** .139**
General Confidence in President Trump .281 21.118*** 493 -.294*** .033
Beliefs about the Context
Support Social Distancing Guideline .650 728.207*** 1995 .248*** .498***
Worry about Contracting Virus .303 101.249*** 1998 .145*** .208***
Likely to Contract Virus .096 9.327*** 1997 .026 .081**
Concerned about the Spread of the Virus .409 200.380*** 1997 .221*** .257***
Threat (not) exaggerated .458 265.539*** 1998 .343*** .185***
Economy More Important than Safety .377 165.181*** 1997 -.162*** -.274***
COVID Knowledge .286 89.128 1998 .269*** .034
Other Receptivity-Related Beliefs
General interpersonal compassion .359 36.753*** 496 .134** .275***
Concern for others’ COVID vulnerability .501 83.228*** 496 .122** .432***
Disgust Sensitivity .151 5.888** 502 -.020 .159**
Perceived vulnerability to disease .227 13.632*** 502 .116* .150**
Political ideology (higher, more conservative) .228 54.526*** 1996 -.183*** -.075**
Belief in conspiracy theories .257 17.522*** 496 -.249*** -.016
Other Target Characteristics
Age .166 28.393*** 1996 .137*** .051*
Gender (1 = male/0 = female)b .126 16.018*** 1984 -.107*** -.034
Science Literacy .198 10.026*** 494 .178*** .038
Fox Newsc .177 7.650** 475 -.190*** .033
NPRc .118 3.325* 475 .079 .058
Papers, Magazinesc .141 4.790** 475 .093 .071

a Higher numbers reflect more social distancing behavior.

b Participants who responded “other” or “prefer not to answer” were excluded from the analysis.

c Coded 0 = neither watched last week nor primary news source; 1 = watched last week or primary source; 2 = both

*p < .05

**p < .01

***p < .001

Table 2. Correlation matrix.

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z AA
A. Belief in Value of Science
B. Trust in Scientists .289**
C. Trust Trump re COVID-19 -.105* -.528**
D. Trust Governors re COVID-19 .185** .007 .311**
E. Confidence in Federal Gov’t .040 -.395** .767** .423**
F. General Confidence in Trump -.101* -.536** .927** .289** .760**
G. Support Social Distancing .342** .366** -.195** .364** -.016 -.164**
H. Worry Contracting Virus .256** -.113* .040 .180** .124** .071 .330**
I. Likely to Contract Virus .169** -.232** .074 .138** .099* .093* .148** .673**
J. Concern about Spread of Virus .198** .212** -.098* .226** -.066 -.107* .396** .387** .205**
K. Threat (not) exaggerated .113* .587** -.495** .024 -.374** -.528** .456** .082** -.054* .290**
L. Economy versus Safety -.209** -.217** .286** -.134** .122** .251** -.479** -.320** -.191** -.267** -.338**
M. COVID Knowledge -.007 .644** -.464** -.141** -.426** -.487** .154** -.300** -.368** .083** .547** -.019
N. Interpersonal compassion .308** -.029 -.087 .170** .327** -.171** .370**
O. Concern for others .481** .307** .191** .274** .221** -.259** .086 .483**
P. Disgust Sensitivity .159** .347** .244** .127** -.175** -.069 -.390**
Q. Vulnerability to disease .213** .335** .225** .190** .119** -.108* .010 .356**
R. Politically conservative -.300** -.375** .522** .045 .376** .522** -.233** -.053* -.011 -.144** -.388** .284** -.276** -.184** -.171** .170** -.043
S. Belief in conspiracy theories -.136** .186** .260** -.002 -.376** .035 -.571** .128**
T. Age -.082 .107* -.007 .039 -.074 .017 .055* -.075** -.131** .096** .069** .046* .164** .127** -.028 -.036 -.110* .097** -.218**
U. Gender (1 = male/0 = female) .141** -.119** .091* .102* .134** .088 -.056* -.018 .041 -.086** -.140** -.007 -.182** -.221** -.167** -.009 -.087 .036 .084 -.117**
V. Area Re-opened (1 = Yes/0 = No) .054 -.124** .120** -.048 .141** .120** -.044 .049* .116** -.015 -.131** .047* -.204** -.050 -.026 .115** .041 .093** .228** -.007 .022
W. Science Literacy .007 .514** -.396** -.125** -.436** -.409** .021 -.260** -.267** .030 .348** .089* .621** -.211** .212** -.079 -.170**
X. Fox Newsb -.168** -.032 -.015 -.053 -.275** .191** -.149** .412** .151** .126** -.007 .112*
Y. NPRb .157** .000 .090* .071 .162** -.122** .114* -.221** -.032 -.025 .057 -.087 -.196**
Z. Papers, Magazinesb .124** .006 .047 .106* .167** -.053 .210** -.210** -.220** .035 -.007 -.092* -.191** .124**
AA. Self-Report Distancing .223** .257** -.096* .267** .042 -.095* .611** .275** .093** .359** .342** -.348** .158** .339** .490** .150** .202** -.159** -.137** .114** -.083** -.052* .115* -.057 .095* .114*
BB. Behavioral Distancing .192** .354** -.292** .138** -.162** -.279** .476** .241** .064** .339** .428** -.287** .284** .266** .328** .051 .183** -.218** -.256** .160** -.122** -.095** .195** -.174** .106* .126** .459**

** p < .01

* p < .05; Cells without an entry involve variables that were included in different sub-studies.

Faith in government officials was more complex, just as we anticipated. Whereas greater trust that the State Governors could lead us effectively through the COVID-19 crisis was positively associated with behavioral compliance, the relations were negative when participants considered either President Trump specifically or the federal government more generally: more confidence in those sources was associated with less social distancing. Interestingly, these relations do not appear to be a simple reflection of political orientation. Although participants who more strongly identified as conservative engaged in less social distancing and expressed more trust/confidence in President Trump, in each case the measures of social distancing accounted for unique variance over and above that explained by political orientation (all p's < .001).

All the belief measures concerning the pandemic itself related as expected with social distancing. This was especially true of support for the social distancing guideline, worry about contracting the virus, concern about the spread of the virus, and the assessment that the threat posed by the virus had not been exaggerated. Believing that relatively more emphasis should be placed on economic recovery than public safety also was associated with less social distancing.

Scores on our test of COVID-19 knowledge also related positively to behavioral compliance. Importantly, the recognition of true statements and the rejection of misinformation each correlated with social distancing (multiple R's of .234 and .260, respectively). Knowledge also was associated with support for the social distancing guideline and especially with the belief that the threat posed by the coronavirus had not been exaggerated. In addition, more knowledgeable individuals expressed greater trust in scientists and less confidence in President Trump.

Self-beliefs highlighting interpersonal compassion and concern for others’ vulnerability to the virus were associated with more social distancing. These variables correlated as expected with beliefs about the pandemic. For example, more compassionate individuals were more supportive of the social distancing guideline and believed that the threat of the virus had not been exaggerated. The same was true of individuals who had expressed concern for others’ vulnerability; they also were more worried that they themselves would contract the virus.

The data offered a number of interesting observations regarding the extent to which respondents viewed themselves as generally vulnerable to disease. This variable related to more social distancing and, just as one would expect, to greater worry about contracting COVID-19 and greater perceived likelihood of contracting it. Disgust sensitivity correlated with perceived vulnerability to disease, replicating past findings, and also related to social distancing. Stronger disgust sensitivity also was associated with greater worry about contracting COVID-19 and greater perceived likelihood of doing so.

As already noted, political orientation also was relevant; more conservative individuals engaged in less physical distancing. Just as expected, political ideology correlated strongly with general confidence in President Trump and trust in his leadership regarding the COVID-19 crisis, but not with trust in the state Governors. More conservative individuals also reported less belief in the value of science and less trust in scientists. They also believed the threat of the coronavirus to have been exaggerated and that economic considerations needed to take priority over public safety.

Generally believing in conspiracy theories also was predictive of less social distancing, possibly because it promotes a less accurate view of the pandemic. Indeed, such beliefs correlated strongly, and negatively, with scores on the test of COVID-19 knowledge. Conspiracy theorists also were more likely to believe that the threat posed by the coronavirus had been exaggerated.

In addition to beliefs, we examined a number of other personal characteristics that seemed potentially relevant to receptivity to the directive (see the fourth section of Table 1). Female participants displayed more evidence of social distancing, as did older participants. Our hypothesis regarding science literacy also received support. Those who exhibited a greater understanding of a small set of basic scientific facts engaged in more social distancing. Scientific literacy also related strongly to expressed trust in scientists and scores on the test of COVID-19 knowledge. It also was associated with the belief that the threat posed by the virus had not been exaggerated.

Finally, although none of the multiple correlations was very substantial (each accounting for less than 3% of the variance), a number of the news-source variables related to social distancing. Whereas engagement with NPR or with newspapers and magazines was associated positively with social distancing, greater involvement with Fox News related negatively to distancing. The latter was more common among participants who endorsed a more conservative political orientation, whereas the former was associated more strongly with a more liberal perspective. Parallel relations were observed with support for the social distancing guideline, valuing economic considerations more than public safety, believing that the COVID-19 threat had been exaggerated, and accurate knowledge regarding the virus.

Comparing the virtual behavior and self-report measures of social distancing

Our primary interest was to employ the virtual behavior and self-report measures of social distancing as supplemental to one another and, hence, as jointly related to each of the hypothesized predictor variables. Although the two measures were related (r = .459, p < .001), the magnitude of the correlation was not so overwhelming as to suggest that they were equivalent. Given this observation, it is interesting to consider how the two variables differ with respect to the unique variance for which they each accounted in the various multiple regressions reported in Table 1. For COVID-19 knowledge and scientific literacy, the multiple correlation was driven almost entirely by the virtual behavior measure. Indeed, the self-report failed to account for any significant unique variance. The same was true with respect to engagement with Fox News, belief in conspiracy theories, and the variables reflecting trust in President Trump’s leadership regarding the pandemic and general confidence in him. On the other hand, for the measure of support for the social distancing guideline, the self-report measure of social distancing accounted for twice the unique variance that was associated with the virtual behavior measure. Similar patterns were evident for general compassion, concern for others’ vulnerability to COVID-19, and disgust sensitivity.

In order to statistically compare the relations with the virtual behavior measure to those with the self-report measure, we tested the difference between each pair of simple correlations with a t-test for dependent correlations [51]. Table 3 summarizes this comparison. Any predictor variable for which the comparison yielded a p-value less than .05 is listed. Although drawing any strong inferences from these patterns is difficult, the two statistically largest differences appear especially striking and accord well with the above observations concerning unique variance. COVID-19 knowledge, which is a much more objective measure than any of the other predictor variables, correlated more strongly with the virtual behavior measure. On the other hand, the measure that is arguably most subjective–support for or opposition to the social distancing guideline–correlated much more strongly with the self-report measure of social distancing.
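One standard test for comparing two dependent, overlapping correlations is Williams' procedure (whether this is the exact variant prescribed by [51] is our assumption). A minimal sketch, with illustrative rather than observed values:

```python
import math

def williams_t(r12, r13, r23, n):
    """Williams' t-test for H0: rho12 = rho13, where variables 2 and 3
    (here, the two distancing measures) are themselves correlated r23.
    Returns t with n - 3 degrees of freedom."""
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    rbar = (r12 + r13) / 2
    denom = 2 * det * (n - 1) / (n - 3) + rbar**2 * (1 - r23) ** 3
    return (r12 - r13) * math.sqrt((n - 1) * (1 + r23) / denom)

# Equal correlations yield t = 0; a larger r12 yields a positive t.
t_equal = williams_t(0.30, 0.30, 0.46, 500)
t_diff = williams_t(0.48, 0.34, 0.46, 500)
```

Note that the test accounts for the correlation between the two distancing measures (r23): the more strongly the two measures correlate, the smaller the difference in their predictor correlations that is needed to reach significance.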

Table 3. Comparing virtual behavior and self-report measures of social distancing.

Behavioral r Self-Report r t-test of Difference p
Beliefs about the Source
Trust President Trump re COVID-19 crisis -.292 -.096 3.206 .002
Trust State Governors re COVID-19 crisis .138 .267 -2.179 .030
Confidence in Federal Gov’t General Effectiveness -.162 .042 3.242 .001
General Confidence in President Trump -.279 -.095 3.019 .003
Beliefs about the Context
Support Social Distancing Guideline .476 .611 -5.870 .000
Threat (not) exaggerated .428 .342 3.587 .000
Economy More Important than Safety -.287 -.348 -2.412 .016
COVID Knowledge .285 .158 5.007 .000
Other Receptivity-Related Beliefs
Concern for others’ COVID vulnerability .386 .490 -2.066 .039
Political ideology (higher #, more conservative) -.218 -.159 2.090 .037
Belief in conspiracy theories -.256 -.137 2.081 .038

It appears that a sizeable number of participants may have offered self-reports that were overestimates of their actual social distancing behavior. Whereas the virtual behavior data displayed a normal distribution, the distribution of scores on the self-report measure was skewed, with a substantial majority responding at or near the positive endpoint of the scale (M = 5.98 on a 7-point scale, SD = 1.18). Such overestimation is to be expected to the extent that participants wished to believe that they had acted in ways that avoided placing their own health, or that of others, at risk, or were simply concerned with responding in a socially desirable fashion. As noted earlier, such self-beliefs and concerns have been shown to influence retrospective memory processes [21, 25, 26].

To examine such overestimation more systematically, we focused on the residuals from a simple regression predicting scores on the self-report measure of social distancing from the virtual behavior measure. More positive residuals reflect a self-report score that is higher than expected on the basis of the social distancing exhibited on the virtual behavior items. We then correlated these residuals with each of our predictor variables, in order to assess the extent to which each related to such statistical overestimation. Table 4 lists any predictor variable for which the correlation was statistically significant. In general, the more participants held beliefs associated with a serious view of the pandemic (e.g., greater belief in science, worry about the coronavirus, concerns about their own and others’ vulnerability), the more their self-reports of social distancing were greater than expected on the basis of their virtual distancing behavior. Especially noteworthy, once again, is the extent to which participants supported or opposed the social distancing guideline, for which the correlation with the residual was the highest of any variable. The more support participants expressed, the more their retrospective reports of compliance with the social distancing directive appeared to be overestimates. These more supportive individuals reported having followed the social distancing recommendations to a much greater extent than would be expected on the basis of their “in-the-moment” decisions on the graphical scenario items that comprised the virtual behavior measure of social distancing.
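The residual-based overestimation index can be sketched as follows (Python on synthetic data; the variable names and coefficients are ours, chosen purely for illustration):

```python
import numpy as np

def overestimation_residuals(self_report, behavioral):
    """Residuals from regressing self-reported distancing on the
    virtual behavior measure; positive values indicate a self-report
    higher than the behavioral measure predicts."""
    slope, intercept = np.polyfit(behavioral, self_report, 1)
    return self_report - (intercept + slope * behavioral)

# Synthetic data in which self-reports are inflated in proportion to
# a belief variable (e.g., support for the distancing guideline).
rng = np.random.default_rng(1)
n = 2000
behavioral = rng.normal(size=n)
belief = rng.normal(size=n)
self_report = 0.6 * behavioral + 0.8 * belief + rng.normal(scale=0.3, size=n)

resid = overestimation_residuals(self_report, behavioral)
r = np.corrcoef(resid, belief)[0, 1]  # residual-belief correlation
```

In this synthetic example the residuals correlate strongly with the inflating belief variable, mirroring the kind of pattern reported in Table 4.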

Table 4. Correlations with the residual predicting self-report measure of social distancing from the virtual behavior measurea.

r p n
Beliefs about the Source
Belief in Value of Science .151 .001 497
Trust in Scientists .107 .017 497
Trust State Governors re COVID-19 crisis .226 .000 496
Confidence in Federal Gov’t Effectiveness .129 .004 496
Beliefs about the Context
Support Social Distancing Guideline .442 .000 1998
Worry about Contracting Virus .185 .000 2001
Likely to Contract Virus .072 .001 2000
Concerned about the Spread of the Virus .233 .000 2000
Threat (not) exaggerated .164 .000 2001
Economy More Important than Safety -.244 .000 2000
Other Receptivity-Related Beliefs
General interpersonal compassion .237 .000 499
Concern for others’ COVID vulnerability .374 .000 499
Disgust Sensitivity .143 .001 505
Perceived vulnerability to disease .136 .002 505
Political ideology (higher, more conservative) -.066 .003 1999
Other Target Characteristics
Age .045 .043 1999

a Higher numbers reflect greater reports of social distancing than expected on the basis of the virtual behavior measure.


Discussion

The findings highlight the importance of individuals' beliefs as factors associated with social distancing behavior. They also support the theoretical framework of compliance that guided our selection of variables for inclusion in the study. Compliance with any directive regarding behavior change will be shaped by beliefs about the directive's source, beliefs about the context surrounding the challenge to which the directive is responding, and relevant self-views and characteristics of the target. As such, the framework is applicable to any call for behavior change aimed at the general public. When applied to the specific challenge posed by the spread of the COVID-19 virus and the directive to engage in social distancing, the conceptual framework led to our focus on (a) source variables related to the government and public health officials, (b) beliefs regarding COVID-19 and the severity of the threat it posed, and (c) various self-related beliefs and target characteristics influencing receptivity to the social distancing directive.

Importantly, these relations were evident not only on a self-report social-distancing measure but also on a measure that relied on vivid, graphical simulations of real-life behavior. Participants made concrete, “in-the-moment” decisions about actions involving different degrees of social distancing. They interactively distanced themselves from oncoming passersby, from people standing in line, and from fellow grocery shoppers, coffeeshop customers, and library patrons. They selected a position on a crowded beach and traversed a crowded plaza. As such, the behavioral decisions, albeit virtual, closely matched the features of real-life situations.

The current findings did indeed reveal some striking differences between behavioral and self-report measures of social distancing. Although the two were related, the correlation did not reach a level that suggested these were equivalent measures of the same construct. Moreover, both the simple correlations and the unique variance accounted for by each measure differed markedly for a number of predictor variables. Especially telling was that scores on our tests of COVID-19 knowledge–the most objective of our predictor variables–related more strongly to the behavioral measures, with self-reports accounting for little or no additional variance. Thus, self-reports do not cohere with behavioral decisions sufficiently to suggest they are mutually interchangeable. The findings suggest that retrospective reports of social distancing behavior may be unduly influenced by attitudes toward social distancing guidelines and self-beliefs that imply the desired behavior. Nevertheless, the virtual behavior measure of social distancing and the self-report measure did complement one another well, as is evident by their accounting for unique variance for many predictor variables.

Before concluding, we wish to acknowledge a few important limitations of the present research. First, the participants were U.S. residents, and many of the survey items referenced that particular context. Hence, whether and to what degree these findings generalize to other nations, especially ones that managed to avoid politicizing the pandemic to the extent that has been true in the United States, remains an open question. Second, we made a strategic methodological decision to use a "planned missing" design, segmenting the various predictor variables into four subsets to which participants were randomly assigned. This allowed us to keep the survey for each individual participant relatively brief and, hence, lessen the possibility of participant fatigue and associated confounds. However, it did come at a cost. This design means that we are unable to assess the relations between some of the predictors and unable to confirm empirically (e.g., via factor analysis) the categorical distinctions that comprise our guiding theoretical framework: beliefs about the source of the directive, beliefs about the challenge the directive addresses, and relevant target characteristics. Future research may address this limitation through the use of multi-wave surveys or other means of countering attrition and inattentive responding in longer-format surveys. With responses to each measure from every participant, a factor analysis could examine the extent to which our conceptual categorizations are supported by the data. Finally, although we assessed participants' knowledge regarding COVID-19 with a series of true and false statements regarding its spread and treatment, we did not specifically address understanding of the social distancing recommendations. Any participants who were either unaware of the guidelines or misunderstood them are unlikely to have behaved accordingly. However, we suspect that any such misunderstandings would correlate strongly with scores on our test of COVID-19 knowledge.

We conclude with a brief consideration of the implications of the present findings for public service campaigns encouraging social distancing. How might compliance be promoted? The literature regarding scientific communication (e.g., concerning climate change, vaccinations, or stem-cell research) highlights that persuading individuals to adopt scientifically sound beliefs and modify their behavior accordingly is fraught with difficulty, especially as an issue becomes politicized [52]. Message recipients often fail to process information accurately. Various motivated reasoning processes, including source derogation, counterarguing, and sheer denial, allow individuals who are exposed to a counterattitudinal message to reach a desired conclusion, thus failing to disconfirm, and sometimes even bolstering, their preexisting beliefs and ideology [53–55]. Given this critical barrier to effective science communication, many researchers have emphasized the importance of attending to the motivations that underlie science-skeptical attitudes [56, 57] and the value of tailoring messages to the audience such that functionally equivalent information is framed in a manner consistent with ideological values [58–60].

Unfortunately, the pandemic has become extraordinarily politicized within the U.S., much more so than in such countries as Canada, Germany, and South Korea, whose leaders pursued a more consistent and pragmatic approach to the initial wave of the pandemic [61–64]. That politicization is very evident in the present data. Very different relations were observed with respect to trust in President Trump versus the State Governors as providing effective leadership during the early months of the COVID-19 crisis. Participants' political orientation, and even their exposure to more partisan news sources, related to beliefs about the severity of the COVID-19 threat, support for the social distancing guideline, and social distancing behavior.

It is precisely this politicization that poses such a barrier to effective science communication regarding the pandemic. Message tailoring surely will be critical with respect to promoting acceptance of and compliance with social distancing recommendations. Campaigns are more likely to be effective when they address the motivational roots underlying minimization of the severity of the pandemic and accord with individuals' social identity needs [57, 65–67]. Multiple strategies are likely necessary for widespread acceptance. Although the present findings are silent with respect to the strategies themselves, they do offer some insights regarding the content that should be emphasized, albeit framed optimally. The findings highlight the importance of communicating accurate knowledge, as well as dispelling misinformation, about COVID-19 (how it spreads and how the risks of contraction can be mitigated). There is also likely value in appealing to, and heightening, concern about others' vulnerability to the coronavirus and the suffering of those infected. Similarly, the results regarding perceived vulnerability to disease and compassion suggest the need to emphasize the vulnerability of people of all ages to the virus and the role that everyone, whether symptomatic or not, plays in spreading it. Indeed, the data seem to call for frequent repetition of the portrayal of social distancing guidelines that White House coronavirus response coordinator Dr. Deborah Birx offered: "This is a road map to prevent your grandmother from getting sick" [68].

The virtual behavior measures of social distancing that we employed in this research also may prove helpful in the context of public health campaigns. They certainly could be used as educational tools to illustrate appropriate social distancing behavior. Moreover, following some educational intervention, they could serve as exercises that encourage individuals to rehearse behaviors that abide by social distancing recommendations. The virtual behavior items also could be employed, sometime after exposure to an intervention, as outcome measures to test the effectiveness of a persuasive campaign.

Supporting information

S1 Material. Description of the simulated behavioral measures of social distancing.

(PDF)

S2 Material. Demographic details for each of the four sub-studies.

(PDF)

Data Availability

The datafile is available on the Open Science Framework at https://osf.io/359et/.

Funding Statement

This work was supported by a RAPID grant from the National Science Foundation under Award ID BCS-2031097 (RHF). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Kelman HC. Compliance, identification, and internalization three processes of attitude change. Journal of Conflict Resolution. 1958;2(1):51–60. [Google Scholar]
  • 2.Cialdini RB. Influence: Science and Practice. 5th ed. Boston: Pearson; 2009. [Google Scholar]
  • 3.Freedman JL, Fraser SC. Compliance without pressure: the foot-in-the-door technique. J Pers Soc Psychol. 1966;4(2):195–202. 10.1037/h0023552 [DOI] [PubMed] [Google Scholar]
  • 4.Cialdini RB, Vincent JE, Lewis SK, Catalan J, Wheeler D, Darby BL. A reciprocal concessions procedure for inducing compliance: The door-in-the-face technique. Journal of Personality and Social Psychology. 1975;31:206–215. [Google Scholar]
  • 5.Cialdini RB, Cacioppo JT, Bassett R, Miller JA. Low-ball procedure for producing compliance: Commitment then cost. J Pers Soc Psychol. 1978;36(5):463–76. [Google Scholar]
  • 6.Cialdini RB, Reno RR, Kallgren CA. A focus theory of normative conduct: Recycling the concept of norms to reduce littering in public places. J Pers Soc Psychol. 1990;58(6):1015–26. [Google Scholar]
  • 7.Cialdini RB. Crafting normative messages to protect the environment. Curr Dir Psychol Sci. 2003;12(4):105–9. [Google Scholar]
  • 8.Nezlek JB, Smith V. Social influence and personality In: Harkins SG, Williams KD, Burger JM, editors. Oxford University Press; 2017. p. 53–68. [Google Scholar]
  • 9.Balcetis E, Dunning D. See what you want to see: motivational influences on visual perception. J Pers Soc Psychol. 2006;91(4):612–25. 10.1037/0022-3514.91.4.612 [DOI] [PubMed] [Google Scholar]
  • 10.Bruner JS. On perceptual readiness. Psychol Rev. 1957;64(2):123–52. 10.1037/h0043805 [DOI] [PubMed] [Google Scholar]
  • 11.Fazio RH. Accessible attitudes as tools for object appraisal: Their costs and benefits In: Maio G, Olson J, editors. Mahwah, NJ: Erlbaum; 2000. p. 1–36. [Google Scholar]
  • 12.Sweeney PD, Gruber KL. Selective exposure: Voter information preferences and the Watergate affair. J Pers Soc Psychol. 1984;46(6):1208–21. [Google Scholar]
  • 13.Fazio RH, Eiser JR, Shook NJ. Attitude formation through exploration: valence asymmetries. J Pers Soc Psychol. 2004;87(3):293–311. 10.1037/0022-3514.87.3.293 [DOI] [PubMed] [Google Scholar]
  • 14.Kelley HH, Stahelski AJ. Social interaction basis of cooperators’ and competitors’ beliefs about others. J Pers Soc Psychol. 1970;16(1):66–91. [Google Scholar]
  • 15.Ajzen I, Fishbein M. Understanding attitudes and predicting social behavior Englewood Cliffs, NJ: Prentice Hall; 1980. [Google Scholar]
  • 16.Davidson AR, Jaccard JJ. Variables that moderate the attitude-behavior relation: Results of a longitudinal survey. J Pers Soc Psychol. 1979;37(8):1364–76. [Google Scholar]
  • 17.Fazio RH, Olson MA. The MODE model: Attitude-behavior processes as a function of motivation and opportunity In: Sherman JW, Gawronski B, Trope Y, editors. New York: Guilford Press; 2014. p. 155–171. [Google Scholar]
  • 18.Tomljenovic H, Bubic A, Erceg N. It just doesn’t feel right—the relevance of emotions and intuition for parental vaccine conspiracy beliefs and vaccination uptake. Psychol Health. 2020;35(5):538–54. 10.1080/08870446.2019.1673894 [DOI] [PubMed] [Google Scholar]
  • 19.Hornsey MJ, Harris EA, Fielding KS. The psychological roots of anti-vaccination attitudes: A 24-nation investigation. Health Psychol. 2018;37(4):307–15. 10.1037/hea0000586 [DOI] [PubMed] [Google Scholar]
  • 20.Hovland CI, Janis IL, Kelley HH. Communication and persuasion; psychological studies of opinion change. New Haven: Yale University Press; 1953. [Google Scholar]
  • 21.Balcetis E. Where the motivation resides and self-deception hides: How motivated cognition accomplishes self-deception. Soc Personal Psychol Compass. 2008;2(1):361–81. [Google Scholar]
  • 22.Fisher RJ. Social desirability bias and the validity of indirect questioning. J Consum Res. 1993;20(2):303. [Google Scholar]
  • 23.Gur RC, Sackeim HA. Self-deception: A concept in search of a phenomenon. J Pers Soc Psychol. 1979;37(2):147–69. [Google Scholar]
  • 24.Leary MR, Kowalski RM. Impression management: A literature review and two component model. Psychol Bull. 1990;107:34–47. [Google Scholar]
  • 25.Kouchaki M, Gino F. Memories of unethical actions become obfuscated over time. Proc Natl Acad Sci U S A. 2016;113(22):6166–71. 10.1073/pnas.1523586113 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Ross M. Relation of implicit theories to the construction of personal histories. Psychol Rev. 1989;96(2):341–57. [Google Scholar]
  • 27.Ajzen I, Fishbein M. Attitude-behavior relations: A theoretical analysis and review of empirical research. Psychol Bull. 1977;84(5):888–918. [Google Scholar]
  • 28.Weigel RH, Vernon DTA, Tognacci LN. Specificity of the attitude as a determinant of attitude-behavior congruence. J Pers Soc Psychol. 1974;30(6):724–8. [Google Scholar]
  • 29.Fazio RH, Ruisch BC, Moore CA, Granados Samayoa JA, Boggs JT, Ladanyi JT. Social distancing decreases an individual’s likelihood of contracting COVID-19. Proc Natl Acad Sci U S A. [Internet]. 2021. February;118(8). Available from: https://www.pnas.org/content/118/8/e2023131118. 10.1073/pnas.2023131118 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Nadelson L, Jorcyk C, Yang D, Jarratt Smith M, Matson S, Cornell K, et al. I just don’t trust them: The development and validation of an assessment instrument to measure trust in science and scientists: Trust in science and scientists. Sch Sci Math. 2014;114(2):76–86. [Google Scholar]
  • 31.All the President’s Lies About the Coronavirus [Internet]. 2020. Available from: https://www.theatlantic.com/politics/archive/2020/11/trumps-lies-about-coronavirus/608647/
  • 32.Trump’s broadsides against science put GOP governors in a bind [Internet]. Available from: https://www.politico.com/news/2020/10/21/trump-science-republican-governors-430535
  • 33.Ohio’s G.O.P. Governor Splits From Trump, and Rises in Popularity [Internet]. Available from: https://www.nytimes.com/2020/04/28/us/politics/mike-dewine-ohio-coronavirus.html
  • 34.Duncan LA, Schaller M, Park JH. Perceived vulnerability to disease: Development and validation of a 15-item self-report instrument. Pers Individ Dif. 2009;47(6):541–6. [Google Scholar]
  • 35.Olatunji BO, Williams NL, Tolin DF, Abramowitz JS, Sawchuk CN, Lohr JM, et al. The Disgust Scale: item analysis, factor structure, and suggestions for refinement. Psychol Assess. 2007;19(3):281–97. 10.1037/1040-3590.19.3.281 [DOI] [PubMed] [Google Scholar]
  • 36.Davis MH. Measuring individual differences in empathy: Evidence for a multidimensional approach. J Pers Soc Psychol. 1983;44(1):113–26. [Google Scholar]
  • 37.Brotherton R, French CC, Pickering AD. Measuring belief in conspiracy theories: the generic conspiracist beliefs scale. Front Psychol. 2013;4:279. 10.3389/fpsyg.2013.00279 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Douglas KM, Sutton RM, Cichocka A. The psychology of conspiracy theories. Curr Dir Psychol Sci. 2017;26(6):538–42. 10.1177/0963721417718261 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Lewandowsky S, Gignac GE, Oberauer K. The role of conspiracist ideation and worldviews in predicting rejection of science. PLoS One. 2013;8(10):e75637. 10.1371/journal.pone.0075637 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Miller JD. The measurement of civic scientific literacy. Public Underst Sci. 1998;7(3):203–23. [Google Scholar]
  • 41.Feldman L, Hart PS, Milosevic T. Polarizing news? Representations of threat and efficacy in leading US newspapers’ coverage of climate change. Public Underst Sci. 2017;26(4):481–97. 10.1177/0963662515595348 [DOI] [PubMed] [Google Scholar]
  • 42.Garrett RK, Weeks BE, Neo RL. Driving a Wedge Between Evidence and Beliefs: How Online Ideological News Exposure Promotes Political Misperceptions: Driving a wedge between evidence and beliefs. J Comput Mediat Commun. 2016;21(5):331–48. [Google Scholar]
  • 43.Pew Research Center. 5 facts about Fox News [Internet]. 2020. Available from: https://www.pewresearch.org/fact-tank/2020/04/08/five-facts-about-fox-news/
  • 44.Keith MG, Tay L, Harms PD. Systems perspective of Amazon Mechanical Turk for organizational research: Review and recommendations. Front Psychol. 2017;8:1359. 10.3389/fpsyg.2017.01359 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Paolacci G, Chandler J. Inside the Turk: Understanding Mechanical Turk as a participant pool. Curr Dir Psychol Sci. 2014;23:184–8. [Google Scholar]
  • 46.Berinsky AJ, Huber GA, Lenz GS. Evaluating online labor markets for experimental research: Amazon.com’s Mechanical Turk. Polit Anal. 2012;20:351–68. [Google Scholar]
  • 47.Hauser D, Paolacci G, Chandler JJ. Common concerns with MTurk as a participant pool: Evidence and solutions. In: Kardes FR, Herr PM, Schwarz N, editors. New York: Routledge; 2019. p. 319–336. [Google Scholar]
  • 48.Clifford S, Jewell RM, Waggoner PD. Are samples drawn from Mechanical Turk valid for research on political ideology? Research & Politics [Internet]. 2015;2. Available from: 10.1177/2053168015622072. [DOI] [Google Scholar]
  • 49.Schönbrodt FD, Perugini M. At what sample size do correlations stabilize? J Res Pers. 2013;47(5):609–12. [Google Scholar]
  • 50.Farias M, Newheiser A-K, Kahane G, de Toledo Z. Scientific faith: Belief in science increases in the face of stress and existential anxiety. J Exp Soc Psychol. 2013;49(6):1210–3. 10.1016/j.jesp.2013.05.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Bruning JL, Kintz BL. Computational handbook of statistics. 4th ed. Upper Saddle River, NJ: Pearson; 1997. [Google Scholar]
  • 52.Bolsen T, Druckman JN. Counteracting the politicization of science: Counteracting the politicization of science. J Commun. 2015;65(5):745–69. [Google Scholar]
  • 53.Bolsen T, Druckman JN, Cook FL. The influence of partisan motivated reasoning on public opinion. Polit Behav. 2014;36(2):235–62. [Google Scholar]
  • 54.Kunda Z. Motivated inference: Self-serving generation and evaluation of causal theories. J Pers Soc Psychol. 1987;53(4):636–47. [Google Scholar]
  • 55.Taber CS, Lodge M. Motivated skepticism in the evaluation of political beliefs. Am J Pol Sci. 2006;50(3):755–69. [Google Scholar]
  • 56.Hornsey MJ. Why facts are not enough: Understanding and managing the motivated rejection of science. Curr Dir Psychol Sci. 2020;096372142096936. [Google Scholar]
  • 57.Hornsey MJ, Fielding KS. Attitude roots and Jiu Jitsu persuasion: Understanding and overcoming the motivated rejection of science. Am Psychol. 2017;72(5):459–73. 10.1037/a0040437 [DOI] [PubMed] [Google Scholar]
  • 58.Dixon G, Hmielowski J, Ma Y. Improving climate change acceptance among U.S. conservatives through value-based message targeting. Sci Commun. 2017;39(4):520–34. [Google Scholar]
  • 59.Luong KT, Garrett RK, Slater MD. Promoting persuasion with ideologically tailored science messages: A novel approach to research on emphasis framing. Sci Commun. 2019;41(4):488–515. [Google Scholar]
  • 60.Wolsko C, Ariceaga H, Seiden J. Red, white, and blue enough to be green: Effects of moral framing on climate change attitudes and conservation behaviors. J Exp Soc Psychol. 2016;65:7–19. [Google Scholar]
  • 61.Crayne MP, Medeiros KE. Making sense of crisis: Charismatic, ideological, and pragmatic leadership in response to COVID-19. Am Psychol [Internet]. 2020. Available from: 10.1037/amp0000715 [DOI] [PubMed] [Google Scholar]
  • 62.McCurry J. Test, trace, contain: How South Korea flattened its coronavirus curve. The Guardian [Internet]. 2020, April 22. Available from: www.theguardian.com/world/2020/apr/23/test-trace-contain-how-southkorea-flattened-its-coronavirus-curve
  • 63.Merkley E, Bridgman A, Loewen PJ, Owen T, Ruths D, Zhilin O. A rare moment of cross-partisan consensus: Elite and public response to the COVID-19 pandemic in Canada. Can J Polit Sci. 2020;53(2):311–8. [Google Scholar]
  • 64.Rising D. Germany praised for handling of COVID-19 [Internet]. Yahoo! News. 2020, April 23. Available from: https://news.yahoo.com/germany-praised-handling-covid-19-044913509.html [Google Scholar]
  • 65.Bain PG, Hornsey MJ, Bongiorno R, Jeffries C. Promoting pro-environmental action in climate change deniers. Nat Clim Chang. 2012;2(8):600–3. [Google Scholar]
  • 66.McKenzie-Mohr D. Fostering sustainable behavior through community-based social marketing. Am Psychol. 2000;55(5):531–7. [PubMed] [Google Scholar]
  • 67.Schultz T, Fielding K. The common in-group identity model enhances communication about recycled water. J Environ Psychol. 2014;40:296–305. [Google Scholar]
  • 68.White House coronavirus response coordinator on new guidelines: “We can conquer this.” [Internet]. CBS News. 2020. Available from: https://www.cbsnews.com/news/white-house-coronavirus-response-coordinator-deborah-birx-first-interview-2020-03-16/

Decision Letter 0

Lambros Lazuras

11 Nov 2020

PONE-D-20-30793

Who Is (Not) Complying with the Social Distancing Directive and Why?  Testing a General Framework of Compliance with Multiple Measures of Social Distancing

PLOS ONE

Dear Prof. Fazio,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Your manuscript was carefully reviewed by two expert social psychologists with a track record of research in attitude-behaviour relationships and applied social and health psychology. One of the reviewers suggested minor revision, while the second suggested major revision. I concur with the second reviewer (major revision required) and kindly ask you to consider the points raised and provide a point-by-point response letter, should you decide to revise and resubmit your work.

Please submit your revised manuscript by Dec 26 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Lambros Lazuras

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please ensure that you include a title page within your main document. We do appreciate that you have a title page document uploaded as a separate file; however, as per our author guidelines (http://journals.plos.org/plosone/s/submission-guidelines#loc-title-page) we do require this to be part of the manuscript file itself and not uploaded separately.

Could you therefore please include the title page at the beginning of your manuscript file itself, listing all authors and affiliations.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper tests a theoretical model of compliance with social directives in the context of social distancing behaviors relevant to COVID-19 infection. The research is novel in testing a new model and comprehensive in testing multiple predictors of both self-reported and objectively measured behavior. The model is assessed using a cross-sectional survey of 2,000 MTurk workers. The data, analysis, and write-up are all of a high standard. Overall, I am very favorably disposed towards publication of this research in PLoS One.

There are a small number of issues that the authors might wish to consider in revising the manuscript.

1. I found the theoretical framework compelling and liked the division of predictors into beliefs about the context, beliefs about the source, and target characteristics. I can see the conceptual basis of the distinctions and wondered if this categorization can also be supported empirically (e.g., via factor or cluster analysis)?

2. The behavioral measure of social distancing is very clever and adds an interesting dimension to the research. At the same time, is it accurate to suggest that it is an “objective” measure of “behavior”? The scenarios are hypothetical, and participants are asked about their comfort levels within each one. Is this more a measure of willingness (or something else) than behavior? If so, would it be worthwhile to predict discrepancies between willingness and self-reported behavior?

3. The differences between the factors that predict “objective” and self-report measures of behavior are intriguing. Could the analyses go further and determine whether these seeming differences are significant? For instance, could MLM be used here with the two outcomes as a within-participants factor?

4. Table 1 offers a long list of significant predictors of social distancing (attesting to the model’s value) and I appreciated the multivariable analyses of predictors within each set of beliefs. Would the research benefit from an additional analysis that included all 19 beliefs in the model to determine which beliefs or characteristics are most important, and how much variance in the outcomes they collectively explain?

Reviewer #2: The authors present an article that reports upon the findings of an online, survey-based study designed to identify more about the antecedents of people’s willingness to comply with social distancing measures in the wake of the COVID-19 pandemic. Two distinct DVs are used; self-report measures and a novel ‘behavioral’ measure.

The research is timely, given the unfolding nature of the global pandemic, and there are also potential theoretical and applied implications for the work. However, there are a number of significant issues with the study – as reported – that I feel preclude publication in PLOS ONE at this time.

A key issue is the lack of detail provided in each of the sections of the study (particularly the introduction, methods and discussion). There needed to be more methodological detail to allow for the replication of this study, if necessary. In terms of the results, I think that the basic analysis is okay, but I wonder whether the authors could be more ambitious with their modelling. The discussion is also notably light on detail – more effort needed to be made to couch the findings of the study among those from the rich, extant literature on science communication.

It is possible that with major corrections/additions that this article can be shaped into a publishable offering. What follows is a specific appraisal of the key issues with the article, from my perspective. I wish the authors all the best in making the suggested amendments and will look forward to appraising the article in due course.

Title

It should be stated within the title that the focus is on compliance within the US. The study is clearly tied to social distancing directives in the US and the study is conducted on a US sample. The current title implies that the scope of the study transcends the US context, which is not the case.

Introduction.

The introduction would benefit from a short paragraph outlining the COVID-19 crisis and stating more about the situation and response in the US. Again, this is important to clarify the context and reach of the study, which is US-centric. This US-centricity is exemplified by the claim that “stay six feet away from others” is a regularly repeated mantra. This might be the case for the US, but this is not the case in Europe, for example, where the mantra has been more ‘metric’ (2 meters or 1 meter with extra precautions).

Line 35 – I feel that the claim that people will only respond to directives that are justified needs to be more nuanced. There are examples where people comply with directives that are not deemed to be well-justified, because they do not wish to suffer social/group disapproval.

Line 41 – Please outline a relevant (e.g. health-related) example of a study that evidences the point you are making here.

Line 44 – You have evidenced the source and audience as being important, but what about message factors? If you are treating message factors as part of the ‘surrounding context’ then I feel that this needs to be explicitly stated. If you are not classing message factors as important, then this needs to be strongly justified; dismissing the ‘what’ from the ‘who is saying what to whom?’ mantra would otherwise appear to be a significant oversight.

Line 49-50 – I think that the phrasing here needs to be more nuanced. It is not always going to be the case that people who process information in a favorable way will engage in the correct/sustained/sufficient behaviors (we see this a lot with health and environmental actions, where people often engage in compensatory behavioral trade-offs). Equally, sometimes people who are less positive about the directions of a message might still comply with them.

Line 58 – Please provide a reasoned example of the characteristics/beliefs that might affect mis-representation of social distancing on self-report measures.

Line 66 – I need more convincing that the novel behavioral measures you have designed have the ecological validity that you are claiming. Granted, they do present people with in-the-moment behavioral decisions, but these are virtual and vicarious. Please provide a fuller justification of the validity of your ‘graphical scenarios’ in emulating real-life decision making.

Line 75-112 – There needs to be a fuller, theoretical justification for the selection of the various predictors used in this study (source, context and target). While I am not disputing the relevance of the predictors you have selected, the selection process is opaque, and the selection of items appears subjective and unsystematic. For example, your decision to look at ‘objective knowledge of COVID-19’ as opposed to ‘subjective knowledge’ is interesting, given that subjective knowledge of science/technological issues often shares a stronger relationship with acceptance (see, e.g., some of the work by Sturgis and Allum).

Tying the selection of variables more explicitly to an established over-arching theoretical framework(s) would have made for a more compelling narrative. It would have also allowed you to be more adventurous in modeling moderation/mediation pathways.

Line 88 – You need to provide evidence for the claim that Trump downplayed the severity relative to state Governors (I believe you, but you need evidence).

Methods

Line 123 – Strictly speaking ‘MTurkers’ is not a word, and so you should probably put this in ‘inverted commas’.

Line 127 – I understand the arguments for creating four versions of the survey, but you should probably raise the fact that participants did not all complete the same survey as a limitation to the study within the discussion.

Line 131 – Bearing in mind you have a between-subjects design, you need to show evidence that there has been successful randomization to condition in terms of core participant demographics. Please provide some evidence of this – means/frequencies, plus basic statistical comparative analysis will be sufficient.
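The balance check the reviewer requests here is typically a Pearson chi-square test of each demographic's counts across the four survey versions. The sketch below computes the statistic in plain Python with hypothetical counts (the real figures would come from the study data); in practice, `scipy.stats.chi2_contingency` returns the statistic, degrees of freedom, and p-value directly.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table
    (rows = survey version, columns = demographic category)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Hypothetical gender counts (female, male) in each of four survey versions;
# a small statistic relative to the chi-square(df) critical value would
# indicate successful randomization on this demographic.
counts = [[260, 240], [255, 245], [248, 252], [262, 238]]
stat, df = chi_square_stat(counts)
```

The same check would be repeated for each core demographic (age bands, political orientation, etc.), one table per variable.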

Line 136 – Related to my point above, you need to have specified that there was a national lockdown in the US (and the nature of this lockdown), rather than assuming that the readership of the article will know/be aware of this.

Line 140 – Please specifically refer to COVID-19 as there could be other COVIDs in the future (although we hope not of course!)

Line 144 – What kind of selection, piloting and pre-testing did your social distancing behavioral measures go through? Now, they might have a degree of face-validity as measures of social-distancing compliance, but what formal evidence is there of their content-validity beyond this? While I do applaud the innovation in measurement here, I am concerned that drawing strong inferences based on an un-proven, un-tested measure is ill-advised. As such you need to provide a fuller account of the selection and development of these measures, while also acknowledging the limitations around their uses within the discussion.

Line 156-233 – You need to include the response options for all your measures. You also need to provide access (probably in the form of an appendix or supplementary information) to your questionnaire measures, so people can see all the questions that were asked and how they were asked. Within the text you might wish to add two example questions/statements per scale outlined.

Line 160 – Did you assess people’s awareness and understanding of the social distancing recommendations? If not, this is a limitation that needs consideration within the discussion.

Line 168 – How many statements were used to assess COVID-19 knowledge?

Line 199 – For those outside the US it would help to know what political leaning each of these news/media sources have.

Line 202 – Revisit this sentence and see if you can rephrase it to make it clearer.

Line 226 – What demographics were included and how were they assessed?

Results

The results appear to be okay but very basic given the scope of the study and the data you have to work with. I wonder if you could have been more ambitious with your modeling/analysis, to say something a little more about who is responding to COVID-19 guidance or not?

This point of critique maps back to a point I made earlier re: the selection of the predictors, which – although I am not disputing their relevance – appears quite haphazard. Your narrative would be substantially more compelling if you were able to chart, model and report some moderation/mediation pathways. You have a considerable number of predictors here, which should make it possible, provided that theoretically justified pathways can be created.

Line 308 – Please state ‘COVID-19’.

Discussion

The discussion needs a lot of work. Lines 328-351 essentially re-report detail that has already been outlined within the results section. As such, this detail can be pared back somewhat.

The discussion from Line 352 onwards is basic and fails to map clearly and robustly to the rich literature that exists around science communication. The claims that are being made need to be couched more within the extant literature and explicitly linked back to the study findings. There also needs to be a more upfront consideration of the limitations of the research.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 1

Lambros Lazuras

18 Jan 2021

PONE-D-20-30793R1

Who Is (Not) Complying with the U. S. Social Distancing Directive and Why?  Testing a General Framework of Compliance with Multiple Measures of Social Distancing

PLOS ONE

Dear Dr. Fazio,

Thank you for submitting your revised manuscript to PLOS ONE. Both reviewers expressed their satisfaction with the revisions made, and almost all of their comments and suggestions have been accommodated. Your manuscript can be accepted for publication subject to addressing a few minor issues identified by the second reviewer. Addressing those issues will further strengthen your manuscript, and I am hoping that you are willing to consider them.

Please submit your revised manuscript by Mar 04 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Lambros Lazuras

Academic Editor

PLOS ONE

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I had no substantive concerns about the original version of this manuscript and merely made some suggestions that I hoped might serve to increase the impact of an already compelling and important piece of research. The authors were very responsive to my suggestions and the changes to the manuscript made a very good paper even better. I am very happy to recommend publication of this research in PLoS ONE.

Reviewer #2: I thank the authors for taking the time to respond to my previous comments so thoroughly. I feel that the article is now much improved. I do have a few additional minor points that I feel should be considered before this article can be accepted for publication. I outline these points below. The page numbers that are identified relate to the revised version of the manuscript that includes the highlighted additions, changes and omissions.

Title, Abstract and Introduction

The title, abstract and introduction are now much improved, however, I have a couple of final suggestions for these sections:

1. I feel that the title would benefit from mentioning COVID-19 – as the work focuses on compliance with social distancing requests in response to this specific pandemic.

2. On page 2-3, it might be beneficial to incorporate a couple of photographs to depict what the social distancing measures (e.g. tape on the floor) look like. This should hopefully be simple to get hold of, as university campuses tend to have such measures (if they are accessible during the pandemic).

Methods

The methods section is now fuller. Thank you for including more of the detail, rather than placing this all in the supplementary material. It might add to the article if you were to include a couple of ‘stills’ depicting the virtual behaviour measures. I understand these are in the ‘Supplementary Material’ but having a couple of images in the main text would be good to illustrate the innovative method you have used.

Results

Looking at Table 1, I am wondering how you coded the people who did not enter their gender? It could be that these individuals wished to withhold information about their gender, but equally it could be that these individuals identify as non-binary. In the spirit of inclusivity, you should probably add a footnote to Table 1 to note that people withholding gender were not included.

Overall, while I find the results section relatively simple bearing in mind the data-set that has been accrued, I sense that my requirement for more in-depth inferential analysis should be traded-off against the value of getting this article into the public realm (also, I appreciate the issues caused by the methods used). I would encourage you to perhaps outline in the discussion section what the obvious ‘next steps’ might be for the work, which might allow for more in-depth inferential statistics to be performed – one or two suggestions would suffice.

I am wondering whether using a basic t-test to compare the strengths of the correlation coefficients is appropriate (see Table 3). This is something that you should check, as I feel that there might be more statistically robust ways of drawing such comparisons. For example, this recent article in PLOS offers up suggestions and techniques for drawing such comparisons: cocor: A Comprehensive Solution for the Statistical Comparison of Correlations (plos.org). If the current analysis is appropriate, then including a couple of references to studies that have used this method of analysis should provide adequate justification for the approach.

P22 - you suggest that “Any predictor variable for which the comparison yielded a significance level greater than .05 is listed”; do you mean ‘lower than .05’?

Also, on P22 you state “…it is interesting to consider how the two variables differ with respect to the unique variance for which they each accounted in the multiple regressions”. The end of this sentence suggests you have done a multiple (linear) regression analysis, when in reality you have done a multitude of regressions (unless I am mistaken). As such, you should probably look to reword this sentence.

P23 - you suggest that the skew in the self-report measure was due to people wishing to believe that they had acted in a way to lessen their and others’ risk. It could also be that people wished to present themselves in such a way. This should probably be accounted for in your statement.

P25 - I find the paragraph beneath the table quite difficult to parse. Is there a way to rephrase it slightly to make it easier to understand?

Discussion

P27 – do you have countries that you can reference where there was less politicisation of the pandemic? You might wish to mention South Korea, which seems to have done this effectively. Also, there are interesting contemporary articles showing how politicisation has affected the COVID response that you might wish to mention (e.g. Crayne and Medeiros, 2020). These articles might also be applicable to some of your commentary on politicisation on P28.

Crayne, M. P., & Medeiros, K. E. (2020). Making sense of crisis: Charismatic, ideological, and pragmatic leadership in response to COVID-19. American Psychologist. Advance online publication. http://dx.doi.org/10.1037/amp0000715

I generally like the additional commentary regarding the implications of the work; however, I wonder if something more could be said. I do not dispute that we should be looking to communicate accurate information and dispel myths, but this conclusion is fairly standard. Is it possible to say something more about how you could (perhaps) adapt your virtual behavior measures as an educational tool (bearing in mind the correlation with COVID-19 risk)? Also, maybe you could comment a bit more concretely about strategies that could be used to overcome the issues you outline (e.g. Hornsey and Fielding, 2017). I don’t think that there needs to be much added here, but a little more specificity around the guidance and suggestions would give the article more ‘bite’. Relatedly, adding in a few more references to back up your claims about how emotional appeals around vulnerability might be effective would be good.

Hornsey, M. J., & Fielding, K. S. (2017). Attitude roots and Jiu Jitsu persuasion: Understanding and overcoming the motivated rejection of science. American Psychologist, 72(5), 459-473. http://dx.doi.org/10.1037/a0040437

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Paschal Sheeran

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 2

Lambros Lazuras

9 Feb 2021

Who Is (Not) Complying with the U. S. Social Distancing Directive and Why? Testing a General Framework of Compliance with Virtual Measures of Social Distancing

PONE-D-20-30793R2

Dear Dr. Fazio,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Lambros Lazuras

Academic Editor

PLOS ONE

Acceptance letter

Lambros Lazuras

17 Feb 2021

PONE-D-20-30793R2

Who is (Not) Complying with the U. S. Social Distancing Directive and Why? Testing a General Framework of Compliance with Virtual Measures of Social Distancing

Dear Dr. Fazio:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Lambros Lazuras

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Material. Description of the simulated behavioral measures of social distancing.

    (PDF)

    S2 Material. Demographic details for each of the four sub-studies.

    (PDF)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    The datafile is available on the Open Science Framework at https://osf.io/359et/.

