PLOS ONE. 2021 May 13;16(5):e0251073. doi: 10.1371/journal.pone.0251073

Risk-taking unmasked: Using risky choice and temporal discounting to explain COVID-19 preventative behaviors

Kaileigh A Byrne 1,*, Stephanie G Six 1, Reza Ghaiumy Anaraky 2, Maggie W Harris 1, Emma L Winterlind 1
Editor: Pablo Brañas-Garza
PMCID: PMC8118306  PMID: 33983970

Abstract

To reduce the transmission of COVID-19, government agencies in the United States (US) recommended precautionary guidelines, including mask-wearing and social distancing, to slow the spread of the disease. However, compliance with these guidelines has been inconsistent. This correlational study examined whether individual differences in risky decision-making and motivational propensities predicted compliance with COVID-19 preventative behaviors in a sample of US adults (N = 404). Participants completed an online study from September through December 2020 that included a risky choice decision-making task, temporal discounting task, and measures of appropriate mask-wearing, social distancing, and perceived risk of engaging in public activities. Linear regression results indicated that greater temporal discounting and risky decision-making were associated with less appropriate mask-wearing behavior and social distancing. Additionally, demographic factors, including personal experience with COVID-19 and financial difficulties due to COVID-19, were also associated with differences in COVID-19 preventative behaviors. Path analysis results showed that risky decision-making behavior, temporal discounting, and risk perception collectively predicted 55% of the variance in appropriate mask-wearing behavior. Individual differences in general decision-making patterns are therefore highly predictive of who complies with COVID-19 prevention guidelines.

Introduction

In the era of the COVID-19 pandemic, normal social activities like going to the mall or meeting with friends engender a certain level of risk. COVID-19, or coronavirus, is a contagious respiratory virus that spreads through close-contact airborne and droplet transmission [1]. To mitigate the spread of COVID-19, the Centers for Disease Control and Prevention (CDC) recommended that people wear face masks, avoid nonessential indoor activities, engage in social distancing by staying at least six feet apart from other people when in public places, and avoid in-person gatherings [2]. However, Americans have exhibited mixed responses to these guidelines, with some individuals strictly adhering to these recommendations while others choose not to heed them.

Indeed, a simple trip to the grocery store can become a highly polarizing event. For example, if a store requires face masks to enter, it is not uncommon to see people wearing their masks incorrectly on their chin or removing them upon entry. Some people may silently feel outraged toward people who are not complying with CDC guidelines, while others may feel a sense of solidarity in defying this ‘new norm’. The factors that predict whether or not individuals will engage in COVID-19 preventative behavior are not well understood. Some have proposed that compliance with mask-wearing and social distancing may be based less on scientific findings and more on political affiliation [3, 4]. Others have suggested that younger adults may feel invincible, thinking that they will not get sick from COVID-19 [5]; thus, another view is that age may be a factor that affects compliance. However, empirical research examining decision-making factors that influence compliance with mask-wearing and social distancing guidelines is lacking. Therefore, in addition to demographic factors, this study seeks to examine whether certain decision-making constructs, such as risky decision-making and temporal discounting, are predictive of compliance with appropriate mask-wearing and social distancing behaviors.

Judgment and decision-making perspectives

This study examines the relationship between COVID-19 preventative behaviors and individual differences in four classic judgment and decision-making constructs: decision-making under risk, risk perception, the optimism bias, and temporal discounting. This pandemic is an emotion-fueled, unprecedented situation, and simple decisions to go to the store or socialize could have life and death consequences. Below we define each of the four constructs examined in this study and describe how they may be related to COVID-19 preventative behavior.

Decision-making under risk

A risk involves options in which the probabilities of each possible outcome are known, but the exact outcome itself remains unknown [6]. For example, a decision-maker might choose between an alternative that has a 100% probability of providing $5 or an alternative that has a 50% probability of providing $20 but also a 50% probability of yielding $0. The decision to comply with social distancing guidelines can be framed as a decision under risk. A decision-maker can choose the safe option of maintaining one’s health by choosing to wear masks and social distance. Alternatively, a person could choose the risky option of interacting with others, in which he or she benefits from the enjoyment of social interactions at the possible cost of contracting COVID-19 and/or infecting others. While there is an element of uncertainty in knowing exactly who is infected in a given setting, each state’s public health department makes its daily infection rates publicly available.

Decisions made under risk can be conceptualized based on expected values, or the probability of an outcome occurring multiplied by the potential value of that outcome. Sensitivity to expected value can provide objective information about decision-making behavior that maximizes the likelihood of reward [7]. Choosing options with higher expected values reflects increased sensitivity to differences in expected value between choice options [7, 8]. As evidenced by performance on decision-making paradigms such as the Iowa Gambling Task [9, 10] and the Cups Task [7, 8], this increased sensitivity can lead to reward maximization [7, 8]. People do not always base decisions solely on reward maximization, however, and instead frequently choose options that maximize certainty over larger, uncertain gains (risk aversion) [11]. Psychological and economic studies have shown that risk preferences are often not monotonic, and risky decisions can change under different conditions [12–14]. Individual differences in risk aversion have been observed due to such factors as age, gender, and personality [15–17]. Given that reward sensitivity and risk level can differentially influence decision-making, it is important to consider how both factors may affect COVID-19 preventative behavior. This study seeks to determine whether increased risky decision-making may be associated with decreased mask-wearing and social distancing behavior.
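As a concrete illustration, the expected values for the example choice described above (a sure $5 versus a 50% chance of $20) can be computed directly:

```latex
EV_{\text{sure}} = 1.00 \times \$5 = \$5, \qquad
EV_{\text{risky}} = 0.50 \times \$20 + 0.50 \times \$0 = \$10
```

In this example the risky option carries the higher expected value, so consistently choosing it would count as advantageous risk-taking in the terminology of the risky choice task described in the Methods.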

Risk perception

In addition to actual risk decision-making behavior, perceived risk may also shape the way a person responds to COVID-19. People tend to perceive the risk of a phenomenon as high when it is outside of one’s control, may have catastrophic potential or fatal outcomes, or when it is a new risk [18]. Risk perception is also influenced by the affect heuristic, in which people’s positive or negative feelings towards an activity or phenomenon guide their evaluation of its risk [19]. When people believe that they can feel pleasure from engaging in an activity, such as engaging with friends despite being amid a pandemic, the perceived risk of that activity may be low. On the other hand, when individuals have a first-hand negative experience with a potentially risky event, such as watching someone pass away from COVID-19 complications, then they may feel more negatively towards that event and perceive the risk as high. When risk perception is high, people should want to see that risk minimized [20, 21]. However, it is unclear whether that also means that they are willing to take action to mitigate that risk themselves. In evaluating the perceived risk of COVID-19, there are clear opposing forces at play—the high benefit of social interactions may decrease the perceived risk of COVID-19, but the risk of a new, potentially fatal virus may increase the perceived risk. It is expected that individuals with first-hand COVID-19 experience will have higher risk perception and that higher risk perception will enhance COVID-19 preventative behavior.

Optimism bias

The optimism bias reflects the belief that negative events have a lower likelihood of affecting oneself compared to other people, while positive events are more likely to affect oneself [22, 23]. This bias in risk perception occurs when people attempt to predict the likelihood of future events occurring and results in a disconnect between perceived and actual risk. The optimism bias was observed during risk evaluations for the H1N1 flu pandemic in 2009 [24, 25]. Similarly, it is reasonable to predict that, on average, individuals may perceive that their risk of contracting COVID-19 may be lower than the risk of one’s peers contracting the virus.

Temporal discounting

Temporal discounting, also known as delay discounting, refers to the tendency to prefer small immediate rewards over larger delayed rewards [26]. The subjective value placed on reward options tends to decline as the time delay for those rewards increases. For example, given the choice between receiving a small amount of money now (e.g., $5 now) or a larger amount of money after a time delay, such as getting $10 after one month, many people would prefer the former option [27]. From an economic perspective, such decisions are ‘irrational’ because the objective value of the delayed option is larger than the immediate option. However, from a psychological perspective, immediate rewards elicit more tangible positive emotions in the present compared to imagining how one might feel in the future [28, 29]. Representations of future rewards also tend to be more abstract, while immediate rewards are more concrete and vivid [30, 31]. Individual differences in temporal discounting have been shown to predict maladaptive health behaviors, including drug and alcohol use, unhealthy eating, general prophylactic health behaviors, and lack of exercise [32–35].

Decision-making research for the H1N1 pandemic

The most recent pandemic recorded prior to COVID-19 was the H1N1 influenza outbreak in 2009. Previous research on responses to H1N1 showed that people who perceived the risk of contracting H1N1 as high exhibited low risk-taking behaviors and high avoidance behaviors, like avoiding heavily populated areas [36, 37]. Some studies showed that people who exhibited more signs of worry about contracting the virus tended to engage in more preventative measures [36, 38]. Another study indicated that people who resided in areas with a high concentration of the virus reported a higher perceived likelihood of catching the virus but did not show greater engagement in preventative behaviors [39]. In terms of demographics, risk-aversive behavior was associated with older age [37] and larger household size [39]. Additionally, previous research indicates that preventative behaviors decreased over time [37, 39], suggesting a decrease in risk perception and a subsequent increase in risky behaviors. While some research on preventative behaviors during the H1N1 pandemic exists, measures of risk-taking behaviors and delay discounting and their relationship to H1N1 responses are far scarcer. This fundamental gap in research with prior pandemics motivated the design of the present study on COVID-19 preventative behaviors.

Decision-making research for the COVID-19 pandemic

While research on the effects and perceptions of COVID-19 is in its very early stages, early work has begun to characterize COVID-19 risk perception, transmission-mitigation compliance behavior, and optimism biases. Recent research has demonstrated that some factors that can increase COVID-19 related risk perception include first-hand experience, prosocial values, trust in medical recommendations, individual knowledge about the virus, and political ideology [40]. Age has also been shown to influence COVID-19 risk perception such that older adults perceive the risk of contracting COVID-19 as lower than younger adults but exhibit heightened risk perception of dying from COVID-19 [41].

In addition to risk perception, emotional states and personality traits influence compliance with COVID-19 safety recommendations. Fear and anxiety surrounding COVID-19 are associated with increased compliance with hand-washing and social distancing recommendations [42]. In terms of personality, trait conscientiousness has been shown to increase the likelihood of compliance with COVID-19 prevention guidelines by over 30% [43], while antisocial traits are associated with diminished compliance [44]. Early work has also found evidence to support the presence of the optimism bias for contracting COVID-19 [45–47]. Collectively, early COVID-19 decision-making research suggests that pro-social behavior, personal experience, demographics, personality, and emotional factors shape individuals’ perception of COVID-19 risk and transmission-mitigation compliance behavior. However, it remains unclear how individual differences in risk perception, risky decision-making, and temporal discounting influence compliance with COVID-19 preventative behaviors.

Current study and hypotheses

As of December 2020, nearly 350,000 Americans had died from COVID-19, and over 20 million Americans had contracted the virus [1]. At the time this research was conducted, COVID-19 vaccines were not available to the general population, and one of the only ways for people to protect themselves from contracting COVID-19 was by engaging in mask-wearing and social distancing behaviors. Consequently, it is crucial to understand how specific decision-making tendencies can predict adherence to COVID-19 prevention guidelines.

This research seeks to elucidate how individual differences in risky decision-making, risk perception, the optimism bias, and temporal discounting can forecast compliance with COVID-19 prevention guidelines. These specific decision-making constructs may reflect the way that individuals evaluate the pandemic information they are exposed to and subsequently influence their decision to engage in COVID-19 preventative behavior or not. We predict that increased risky decision-making, decreased risk perception of COVID-19, greater temporal discounting, and increased magnitude of the optimism bias will be associated with reduced compliance with COVID-19 preventative behavior.

Materials and methods

An a priori power analysis was performed that included five demographic covariates (age, political affiliation, SES, negative financial experiences due to coronavirus, and experience with coronavirus) and four primary predictors (temporal discounting, advantageous gambles, disadvantageous gambles, and ambiguous gambles). The power analysis conducted in G*Power 3.1 indicated that to have 80% power to detect an effect at the p = .05 level with an effect size of f2 = .10, a minimum of 172 participants would be needed. We anticipated an exclusion rate of 15% due to missed attention checks, incomplete responses, or duplicate responses. The goal was therefore to recruit at least 200 participants. This study was pre-registered through the Open Science Framework (OSF): https://osf.io/kavxr. The data are also available through OSF: https://osf.io/xy6aj
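For readers who wish to reproduce this calculation without G*Power, the sketch below approximates the same power analysis using a noncentral F distribution. It is a minimal illustration under stated assumptions (an omnibus test of all nine predictors and the conventional noncentrality parameter λ = f²·N), not the authors' exact G*Power settings.

```python
from scipy.stats import f as f_dist, ncf

# Approximate a priori power analysis for multiple regression
# (fixed model, R^2 deviation from zero), mirroring the values in the text.
f2 = 0.10            # anticipated effect size (Cohen's f^2)
alpha = 0.05         # significance level
predictors = 9       # 5 demographic covariates + 4 primary predictors
target_power = 0.80

def power_for_n(n, f2=f2, alpha=alpha, u=predictors):
    """Power of the omnibus F test with u numerator df and n - u - 1 denominator df."""
    v = n - u - 1                              # denominator degrees of freedom
    lam = f2 * n                               # noncentrality parameter (lambda = f^2 * N)
    f_crit = f_dist.ppf(1 - alpha, u, v)       # critical F under the null
    return 1 - ncf.cdf(f_crit, u, v, lam)      # probability of exceeding it under H1

n = predictors + 2
while power_for_n(n) < target_power:
    n += 1
print(f"Smallest N reaching {target_power:.0%} power: {n}")
```

Depending on the exact noncentrality convention used, this search lands within a few participants of the 172 reported above.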

Participants

This study was approved by the Institutional Review Board at Clemson University (IRB Approval Number 2020–220) before procedures were implemented. Participants were recruited using Amazon Mechanical Turk (MTurk), and data collection occurred in two waves: the first wave (N = 220) occurred from 9/7/2020–9/11/2020, and the second wave (N = 200) occurred from 12/29/2020–12/30/2020. Study participation was voluntary. A total of 420 participants were recruited to complete the study online. Previous work has shown that several experimental cognitive psychology paradigms conducted on MTurk have a high degree of reliability with laboratory-based results [48]. The typical risky decision-making trends in which individuals tend to be more risk-seeking in loss contexts and risk-averse in gain contexts have also been observed in MTurk samples with minimal differences in effect sizes compared to studies performed in a laboratory [49]. Therefore, there is evidence that MTurk is a reliable way to collect data using standard decision-making paradigms.

Participants were compensated $3.50 for completing the study. To be eligible for study participation, participants needed to be between the ages of 18–90 and live in the United States. Participants were excluded from data analysis if they failed to pass one or more attention check questions (n = 12) or completed the study more than once (n = 4). Although participants were prevented from taking the study with the same MTurk Worker ID more than once, we identified several cases of duplicate IP addresses. Three attention check questions (e.g., “If you are reading this question, please choose Option C”) were included throughout the study. These questions were modeled after instructional manipulation checks from previous research with unsupervised participants [49, 50]. Previous psychology research using MTurk and other online platforms has recommended using such attention check questions to improve the reliability of the data [49–51].

Thus, there were 404 participants (195 females; age range = 18–81, Mage = 40.91, SDage = 13.57) in the final sample. Table 1 shows the participant characteristics of the sample.

Table 1.

Demographic Characteristics
Variable Mean Standard Deviation
Age 40.91 13.57
Years of Education 14.70 2.21
Gender Number Percentage
 Male 208 51.19%
 Female 195 48.27%
 Non-binary 1 0.25%
Race/Ethnicity
 Caucasian/White 331 81.93%
 African American/Black 29 7.18%
 Asian/Asian Indian 22 5.44%
 Hispanic/Latino 17 4.21%
 Other 5 1.24%
Political Affiliation
 Democrat 186 46.04%
 Republican 125 30.94%
 Independent/Other 93 23.02%
Region
 Southeast (SE) 127 31.44%
 Midwest (MW) 84 20.79%
 West 81 20.05%
 Northeast (NE) 73 18.07%
 Southwest (SW) 39 9.65%
Income Level
 <$30,000 175 43.32%
 $30,000–$49,999 88 21.78%
 $50,000–$99,999 118 29.21%
 >$100,000 23 5.69%
Personal Knowledge of Someone Who Contracted Coronavirus
 Yes 214 52.97%
 No 190 47.03%
Negative Financial Experiences due to Coronavirus
 Yes 136 33.66%
 No 268 66.34%

Design

This study entailed a correlational design in which we examined the relationship between temporal discounting and decisions on a risky choice task with several coronavirus-related behaviors and beliefs. Temporal discounting scores, the proportion of risky choices, and the difference in perceived risk when social distancing compared to not social distancing were used as predictors. Mask-wearing behavior, interpersonal social interactions, social distancing activities, optimism bias towards coronavirus, and perceived risk of engaging in public activities were included as outcome measures. Covariates in the study included age, political affiliation, geographic region, income level, negative financial experiences due to COVID-19, and personal health experience with COVID-19 (e.g., knowing someone who became ill or died from COVID-19, contracting COVID-19 themselves). Geographic region was a non-significant predictor in all analyses and was trimmed from all models for simplicity. Some deviations from the OSF pre-registration to the final study were made, such as the addition of the social distancing variables. Documentation of these differences is provided in the S1 File for full transparency.

Measures

Demographics

Participants provided information about their age, gender, political affiliation, income, and state of residence (Table 1). Income was coded into income level brackets: <$30,000 (low), $30,000–$49,999 (lower middle), $50,000–$99,999 (middle), $100,000+ (upper middle/high) [52]. State of residence was coded into geographic regions: West, Southwest, Midwest, Southeast, Northeast. Participants were also asked questions regarding the negative effects of the pandemic on their work situation. Participants could use a checklist to indicate whether they had received a pay cut, been furloughed or lost their job, or were unable to find work because of COVID-19. Participants were queried about their personal experience with COVID-19 using Yes/No responses. Specifically, participants were asked whether they had tested positive for COVID-19, personally knew someone who had become symptomatic and/or ill because of COVID-19, or personally knew someone who had passed away because of COVID-19.

Mask-wearing behavior

To assess appropriate mask-wearing behavior, participants were first asked to indicate the percentage of time across the past 4–8 weeks that they wore a mask in public settings (e.g., grocery stores, malls, restaurants) using a slider bar. Next, participants were asked to report the percentage of time they wore a mask above their mouth but below their nose. This question was used to index incorrect mask-wearing behavior. The incorrect mask-wearing value was then subtracted from 100% to provide an index of the percentage of time that participants correctly wore a mask. Next, we multiplied the percentage of time that participants wore a mask in public settings by the percentage of time participants wore a mask correctly to compute the percentage of time that participants engaged in appropriate mask-wearing behavior in public.
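A minimal sketch of this scoring step is shown below; the column names are hypothetical stand-ins for the two slider responses, not the variable names in the study data set.

```python
import pandas as pd

# Hypothetical column names for the two slider responses.
df = pd.DataFrame({
    "pct_mask_worn_in_public": [90, 100, 50],   # % of time a mask was worn in public
    "pct_worn_below_nose": [10, 0, 40],         # % of time the mask was worn incorrectly
})

# Percentage of time the mask was worn correctly (i.e., not below the nose).
df["pct_worn_correctly"] = 100 - df["pct_worn_below_nose"]

# Appropriate mask-wearing: time wearing a mask in public, scaled by the
# proportion of that time the mask was worn correctly.
df["appropriate_mask_wearing"] = (
    df["pct_mask_worn_in_public"] * df["pct_worn_correctly"] / 100
)
print(df)
```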

Social distancing behavior

Social distancing behavior was measured in two ways. First, interpersonal social interactions were quantified as the total number of people outside one’s household with whom participants had physical, face-to-face interactions without wearing masks or social distancing in the past 14 days. Secondly, social distancing activities were quantified as the total number of times that participants engaged in the following activities in the past 30 days: (1) spent time in a group of more than 20 people, including activities such as church or social gatherings, (2) attended a small group hangout of 3 or more people, (3) ate at a dine-in restaurant, (4) went to a mall or shopping center, (5) went to a hair salon, nail salon, or barbershop, and (6) worked out at a gym outside one’s home. A composite sum score of these six activities was computed to form the social distancing activities measure. This metric is similar to the COVID States Project’s Relative Social Distancing Index [53]. Higher scores indicate less social distancing. S1 Table in S1 File provides further information about participants’ social distancing behavior.

Perceived risk of activities in public settings

Participants were asked to indicate how risky they believed each of a list of activities to be at the present time, assuming that people were not social distancing. The list included seven activities: returning to in-person work, returning to in-person school, going for a walk in the park, going to a restaurant, traveling by plane, attending an indoor concert with 500+ people, and attending a college football game. Participants indicated their perceived risk of engaging in these activities on a 1 (Not at All Risky) to 5 (Extremely Risky) scale. Participants were also asked the same questions, but assuming that people ARE social distancing. However, results concerning these variables were largely similar for both questions. Thus, we report the results for responses assuming no social distancing as the dependent variable below for simplicity. The perceived risk of these activities was computed as the average reported risk score across the seven activities.

Secondly, we computed the difference in perceived risk when people are and are not social distancing. This measure indexes how effective participants perceive social distancing to be and allows for examining risk compensation behavior. Higher values reflect greater perceived risk while not social distancing compared to when engaging in social distancing.

Optimism bias

The optimism bias refers to the cognitive bias that aversive events are less likely to affect oneself relative to one’s peers [22, 23]. To examine optimism bias in relationship to COVID-19, we asked participants “What do you think is the likelihood that an average person your same age and gender will contract coronavirus in the next six months?”. We then asked participants to indicate the likelihood that they themselves would contract coronavirus in the next six months. Participants responded using multiple-choice percentage options (Less than 10%, 10–20%, 20–30%, etc.). The optimism bias was operationalized as the likelihood of others contracting COVID-19 minus the likelihood that the participant would contract COVID-19. Higher values indicate that participants believed that other people would be more likely to contract COVID-19 than them.

Temporal discounting

A delay discounting task was utilized to investigate participants’ preferences for immediate or delayed reward gratification. A significant body of research comparing the effects of real and hypothetical rewards has demonstrated that temporal discounting rates are highly similar under both conditions [54–57], which suggests that hypothetical rewards are a valid proxy for incentivized rewards in temporal discounting experiments. As such, participants were asked questions about non-incentivized, hypothetical situations regarding whether they would prefer a certain amount of money now or a larger sum after a certain number of days. Four time delays (7 days, 30 days, 180 days, and 365 days) were presented in random order, and the starting amount was $5. Each question increased the previous monetary amount by $5 until reaching $30 for the instant reward choice. An example question was “Would you rather have $5 now or $30 after 7 days?”. Participants’ indifference points, or the smallest sum of money for which they first indicated their preference for the instant gratification reward over the delayed reward, were then recorded. To evaluate the overall preference for instant gratification against delayed gratification, an area under the curve (AUC) approach was employed. Smaller AUC values indicate a stronger preference for immediate gratification over larger delayed rewards.
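A minimal sketch of the AUC computation is given below. It assumes the common convention of normalizing delays and subjective values and anchoring the curve at zero delay with the full delayed amount; the authors' exact implementation may differ.

```python
import numpy as np

def discounting_auc(indifference_points, delays=(7, 30, 180, 365), delayed_amount=30.0):
    """Normalized area under the discounting curve.

    indifference_points: the smallest immediate amount preferred over the
    delayed $30 reward at each delay, in the same order as `delays`.
    Returns a value between 0 and 1; smaller values indicate a stronger
    preference for immediate rewards over larger delayed rewards.
    """
    # Normalize delays by the maximum delay and subjective values by the
    # delayed amount, anchoring the curve at (0, 1).
    x = np.array([0, *delays]) / max(delays)
    y = np.array([delayed_amount, *indifference_points]) / delayed_amount
    # Area of the trapezoids between consecutive delay points.
    return float(np.trapz(y, x))

# Example: a participant who switches to the immediate option at $20, $15, $10, and $10.
print(discounting_auc([20, 15, 10, 10]))
```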

Risky choice task

The risky choice task involved 36 non-incentivized, hypothetical gain-framed decisions and was similar to a descriptive gains-only version of the Cups Task [8]. Most recent studies have shown that decisions on risky choice tasks are not significantly altered under hypothetical compared to real reward conditions [58–61], though some exceptions have been observed [62]. In the task, participants were explicitly made aware of the probability of reward and the reward magnitude. On each trial, two choices were presented: a sure option and a risky option. The ‘sure’ option involved a guaranteed amount of money. The ‘risky’ option involved some probability of receiving a larger amount of money and a complementary probability of receiving no money.

In line with previous research, the expected value—the product of the reward probability and magnitude—of the risky option was manipulated, which allowed for comparing interactions between expected value and risk on decision-making [8]. Therefore, this task distinguishes the advantageousness of choices, defined by selecting options with higher expected values, from general risk preference. Specifically, the task involved disadvantageous risky gambles (n = 12 trials) in which the expected value for the risky gamble was lower than the sure option. An example of this type of gamble would be choosing between getting $100 guaranteed or an option offering a 50% chance of getting $150, but also a 50% chance of getting $0. Both advantageous risky gambles (n = 12 trials) in which the expected value for the risky gamble was higher than the sure option and equal gambles (n = 12 trials) in which the expected value for the risky and sure options was identical or nearly identical were also presented. Trials were pseudo-randomized once at the study outset. S2 Table in S1 File shows the full list of questions.

While the Cups Task involves 54 gain and loss trials of varying expected value levels (disadvantageous, advantageous, and equal), the present task used a modified gains-only task because the study predictions were localized to risk behavior and not loss aversion. The average proportion of risky gambles across all gambling types was computed for regression analyses and used as the primary analysis variable for this task. Additionally, following previous research using the Cups Task and similar risky choice paradigms [8, 63–70], the proportion of risky choices for each gamble type (risky advantageous, risky equal, and risky disadvantageous) was computed and used in follow-up analyses; this provides further information about whether sensitivity to expected values, as reflected by differential risk-taking in advantageous compared to disadvantageous decision contexts, influences the outcome variables.
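The sketch below illustrates how trials of this kind can be classified by comparing expected values and how the proportion of risky choices can be computed overall and by gamble type. The trial values and column names are hypothetical examples, not the study's actual stimuli.

```python
import pandas as pd

# Hypothetical trials: a sure amount versus a risky option paying `risky_amount`
# with probability `p_win` (and $0 otherwise), plus the participant's choices.
trials = pd.DataFrame({
    "sure_amount":  [100, 100, 100],
    "risky_amount": [150, 300, 200],
    "p_win":        [0.5, 0.5, 0.5],
    "chose_risky":  [0, 1, 1],
})

# Expected value of the risky option; the sure option's EV is simply its amount.
trials["risky_ev"] = trials["risky_amount"] * trials["p_win"]

def gamble_type(row):
    """Label each trial by comparing the risky option's EV to the sure amount."""
    if row["risky_ev"] > row["sure_amount"]:
        return "advantageous"
    if row["risky_ev"] < row["sure_amount"]:
        return "disadvantageous"
    return "equal"

trials["gamble_type"] = trials.apply(gamble_type, axis=1)

print("Overall proportion of risky choices:", trials["chose_risky"].mean())
print(trials.groupby("gamble_type")["chose_risky"].mean())
```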

Supplemental COVID-19 questions

Participants were asked about their beliefs regarding the effectiveness of mask-wearing in reducing the spread of coronavirus on a scale from 1 (Not at all effective) to 4 (Very Effective). Additional questions included ‘Uncertainty brought on by COVID-19 coronavirus has caused me stress’ and ‘I am worried about getting COVID-19 coronavirus’, rated on a 1 (Strongly Disagree) to 7 (Strongly Agree) scale. Table 2 shows descriptive information for these supplemental questions. Participants were also asked to indicate on a scale of 1 (Not at all) to 100 (Extreme) the extent to which they thought they were a risk-taker.

Table 2.
Variable Descriptive Information
Dependent Variables Mean Standard Deviation Range
Appropriate Mask Wearing 78.72 34.44 0–100
Number of Interpersonal Social Interactions 4.02 6.49 0–31
Number of Non-Essential Activities 16.54 36.12 0–186
Optimism Bias 7.90 15.73 -60–90
Perceived Risk (not social distancing) 3.76 0.97 1–5
Perceived Risk (are social distancing) 3.05 0.89 1–5
Independent Variables
Temporal Discounting 0.60 0.29 0.15–1.00
Overall Proportion of Risky Choices 0.20 0.19 0–0.92
Proportion of Advantageous-EV Risky Choices 0.32 0.26 0–1.00
Proportion of Equal-EV Risky Choices 0.16 0.21 0–1.00
Proportion of Disadvantageous-EV Risky Choices 0.12 0.19 0–0.83

Note. Perceived Risk refers to the perceived risk of engaging in public activities. EV refers to expected value.

Supplemental prosociality measures

Research emerging between the two data collection waves suggested that prosociality is a predictor of compliance with physical distancing guidelines and mask-wearing [40, 71–73]. In line with this research, two measures of prosociality were added in the second wave of data collection: (1) the Prosocial Behavioral Intentions Scale [74] and (2) a version of the Dictator Game. Further methodological description of these tasks and the results for these measures are provided in the S1 File.

Procedure

Participants were first presented with the online consent statement and were asked to indicate whether they did or did not voluntarily agree to participate in the study. Participants who specified that they did not consent were prevented from continuing with the study. Participants who indicated their online voluntary consent completed the questions in the following order: demographic items, several supplemental COVID-19 questions, optimism bias items, mask-wearing behavior questions, social distancing questions, and questions pertaining to the perceived risk of activities in public settings. Several unrelated filler questions were intermixed to avoid demand characteristics. Next, participants completed the temporal discounting task followed by the risky choice task. Participants ended the study by indicating the extent to which they believed they were a risk-taker using self-report.

Data analysis

To characterize associations between the independent and dependent variables, bivariate correlations were first performed. Next, multiple linear regressions using ordinary least squares (OLS) were performed for each of the dependent variables (Appropriate Mask-Wearing Behavior, Social Distancing Behavior, Optimism Bias, and Perceived Risk). The between-subjects fixed-effect independent variables were Delay Discounting, Proportion of Risky Choices, and the Difference in Perceived Risk. The data collection wave (September vs. December) was included as a fixed factor in the analyses. The covariates age, political affiliation, income level, negative financial experiences due to COVID-19, and personal health experience with COVID-19 were also included in the model. The following regression equation was used:

Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + β5X5 + β6X6 + β7X7 + β8X8 + β9X9 + β10X10 + ε

In the equation, β1X1 through β3X3 represent the predictors (Average Proportion of Risky Choices, Temporal Discounting, and Difference in Perceived Risk), and β4X4 through β10X10 are the covariates (Data Collection Wave, Age, Education, Income Level, Political Affiliation, personal COVID-19 experience, and financial complications from COVID-19).

For the regression models, tests to determine whether the data met the assumption of collinearity indicated that multicollinearity was not a concern (VIFs range = 1.04–1.20). All descriptive, correlational, and regression analyses were performed using RStudio Version 1.2.5042 and SPSS version 26, and the exploratory path model was performed using MPlus. Standardized beta coefficients are reported for regressions; unstandardized betas are reported in the S1 File.
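As an illustration of the regression setup, the sketch below fits one such model with standardized variables so that the fitted coefficients correspond to standardized betas. It uses synthetic stand-in data and hypothetical column names, and it omits the categorical covariates (which would be dummy-coded in the actual analysis); it is not the authors' analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in data with hypothetical column names; in practice these
# values would come from the study data set (https://osf.io/xy6aj).
n = 404
df = pd.DataFrame({
    "appropriate_mask_wearing":  rng.uniform(0, 100, n),
    "prop_risky_choices":        rng.uniform(0, 1, n),
    "temporal_discounting_auc":  rng.uniform(0.15, 1.0, n),
    "perceived_risk_difference": rng.normal(0.7, 0.5, n),
    "age":                       rng.integers(18, 82, n).astype(float),
})

# z-score all variables so that the fitted coefficients are standardized betas.
z = (df - df.mean()) / df.std()

outcome = "appropriate_mask_wearing"
X = sm.add_constant(z.drop(columns=outcome))
fit = sm.OLS(z[outcome], X).fit()
print(fit.params)      # standardized betas
print(fit.rsquared)    # proportion of variance explained
```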

Results

Descriptives

Participants’ appropriate mask-wearing behavior ranged from 0%–100% (M = 78.72, SD = 34.44). The average number of people that participants engaged with in the past 14 days without wearing a mask or social distancing was 4.02 (SD = 6.49), and the average number of times participants engaged in non-essential activities in the past 30 days was 16.54 (SD = 36.12). Table 2 shows additional descriptive information for the independent and dependent variables.

Additionally, independent samples t-tests showed that participants in the December data collection wave (M = 83.03, SD = 31.61) reported greater mask-wearing than those in the September data wave (M = 76.61, SD = 36.53), t(402) = -2.48, p = .014. December participants also reported engaging in fewer social interactions (t(402) = 2.32, p = .021) and non-essential public activities (t(402) = 3.88, p < .001) than the September participants, suggesting that compliance with COVID-19 preventative guidelines increased from September to December in this sample.

The perceived risk of engaging in public activities assuming no social distancing (M = 3.76, SD = 0.97; Table 2) was significantly higher than the perceived risk assuming that people were social distancing (M = 3.05, SD = 0.89), t(403) = 20.63, p < .001. In terms of beliefs about COVID-19, 60.89% believed that masks reduced transmission risk for both oneself and others (the remaining participants believed masks reduced the risk for either oneself only, others only, or no one), and 49.50% believed that masks were very effective in reducing the spread of COVID-19 (Table 3). The majority of participants expressed positive beliefs toward mask-wearing in preventing or slowing COVID-19 transmission.

Table 3.

COVID-19 Belief Characteristics
Demographics Number Percentage
Participant tested positive for COVID-19 33 8.17%
Know someone who tested positive for COVID-19 224 55.45%
Know someone who was symptomatic due to COVID-19 203 50.25%
Know someone who died due to COVID-19 90 22.28%
Beliefs Number Percentage
Masks only reduce one’s own risk of contracting COVID-19 from others 41 10.15%
Masks only reduce other people’s risk of contracting COVID-19 80 19.80%
Masks do not reduce the risk for anyone from contracting COVID-19 37 9.16%
Masks reduce the risk for others and oneself of contracting COVID-19 246 60.89%
Masks are very effective in reducing spread of COVID-19 200 49.50%
Masks are moderately effective in reducing spread of COVID-19 140 34.65%
Masks are a little effective in reducing spread of COVID-19 34 8.42%
Masks are not at all effective in reducing spread of COVID-19 30 7.43%

In terms of risky decision-making, 17.08% of participants chose the safe option on all trials. By gambling context, 19.55% of participants chose the safe option in all advantageous expected value contexts, 46.78% chose the safe option in all equal expected value contexts, and 57.43% chose the safe option in all disadvantageous contexts. In contrast, no participants chose the risky option on all trials.

To examine the optimism bias, a paired samples t-test was conducted to compare participants’ estimation of the likelihood that their peers would contract COVID-19 with their estimation of the likelihood that they themselves would. Results showed evidence of an optimism bias for contracting COVID-19, t(403) = 10.09, p < .001, d = .31. Participants believed that the likelihood of others contracting COVID-19 (M = 40.69, SD = 25.04) was 7.9 percentage points higher than the likelihood of contracting the virus themselves (M = 32.80, SD = 25.34).

Correlations

Results revealed significant correlations between the proportion of all risky gambles and appropriate mask-wearing behavior, interpersonal social activities, non-essential social activities, perceived risk of public activities, and the optimism bias (ps < .05; Table 4).

Table 4.

Correlational Analyses
Appropriate Mask Wearing Interpersonal Social Interactions Social Activities Optimism Bias Perceived Risk
Delay Discounting Score 0.35** -0.31** -0.41** 0.03 0.14*
Perceived Risk Difference 0.40** -0.30** -0.36** 0.13* 0.46**
Proportion of All Risky Gambles -0.30** 0.28** 0.34** -0.19** -0.18**
Proportion of Risky Advantageous Gambles -0.04 0.07 0.06 -0.13* -0.11*
Proportion of Risky Disadvantageous Gambles -0.47** 0.39** 0.51** -0.18** -0.19**
Proportion of Equal Gambles -0.35** 0.32** 0.39** -0.21** -0.18**

**indicates significance at the p < .001 level

*indicates significance at the p < .05 level

Additionally, significant correlations between temporal discounting scores (ps < .05) and difference in perceived risk (ps < .05) with appropriate mask-wearing, interpersonal social interaction, non-essential social activities, and perceived risk were also observed. These relationships suggest that the lack of appropriate mask-wearing and social distancing is associated with increased risk-taking behavior and preference for immediate small rewards over larger, delayed rewards. Greater perceived risk under non-social distancing conditions was associated with more mask-wearing and social distancing behavior. Moreover, greater perceived risk of engaging in public activities and greater optimism bias are associated with decreased risk-taking behavior.

Additional correlations were performed between perceived risk and actual COVID-19 preventative behaviors. Results indicated that the higher perceived risk of engaging in public activities, assuming no social distancing, was significantly correlated with appropriate mask-wearing behavior (r = .414, p < .001), number of interpersonal social interactions (r = -.262, p < .001), and weakly with engaging in non-essential social activities (r = -.131, p < .01). Perceived risk assuming that people are social distancing was also positively correlated with mask-wearing (r = .142, p = .004), but the magnitude of the relationship was much weaker than for perceived risk assuming no social distancing.

Further correlations were conducted between the optimism bias and COVID-19 compliance behaviors. Results showed that a greater optimism bias was associated with fewer interpersonal social interactions (r = -.205, p < .001), less participation in non-essential social activities (r = -.201, p < .001), and more appropriate mask-wearing (r = .210, p < .001).

Regression analyses

Appropriate mask-wearing behavior

The linear regression for Appropriate Mask Wearing Behavior revealed a significant relationship between Proportion of Risky Choices and Appropriate Mask Wearing (β = -.199, p < .0001) such that those who made more risky choices reported wearing masks less (Fig 1). Separate follow-up analyses within gambling context showed that those who made more risky choices in the disadvantageous (p < .0001) and equal (p < .0001) gambling contexts tended to disregard appropriate mask-wearing behavior. There was no significant relationship between advantageous risky choice and mask-wearing behavior (p = .575). Furthermore, temporal discounting predicted mask-wearing such that those who prefer delayed over immediate rewards tended to engage in more appropriate mask-wearing behavior (β = .215, p < .0001). Additionally, the perceived risk difference was also associated with greater mask-wearing behavior (β = .280, p < .0001).

Fig 1. Effect of risky choice on appropriate mask wearing behavior by gambling type.


Advantageous gambles indicate that the expected value for the risky choice was higher than the safe gamble. Disadvantageous gambles indicate that the expected value for the risky option was lower than the safe option, and the expected values for both safe and risky choices were the same for Equal gambles. Results show that choosing a higher proportion of Disadvantageous and Equal Gambles predicted less mask-wearing behavior.

In terms of demographic covariates, the results showed that individuals who had not experienced financial problems due to COVID-19 (M = 87.61, SD = 26.95) reported practicing appropriate mask-wearing behavior more than those who had faced financial problems (M = 61.19, SD = 40.47, β = -0.231, p < .0001). In addition, those without previous COVID-19 experience (M = 83.93, SD = 29.44) were more likely to practice appropriate mask-wearing than those with personal COVID-19 experience (M = 74.08, SD = 37.80, β = -.100, p = .018). Age (p = .191), Education (p = .112), Political Affiliation (p = .623), and Income Level (p = .418) were not significant predictors of mask-wearing behavior. The model results are reported in S3 Table in S1 File. Overall, 35.6% of the variance in appropriate mask-wearing behavior was explained by this model.

Social distancing behavior

Two linear regressions were conducted to examine the effect of temporal discounting and risky choice on social distancing behavior: one for Interpersonal Social Interactions and one for Non-Essential Social Activities. Participants who reported higher perceived risk when not social distancing engaged in fewer social interactions (β = -.193, p < .0001). In contrast, those who showed a higher preference for immediate rather than delayed rewards engaged in more interpersonal social interactions (β = -.221, p < .0001). In terms of risky decision-making, those who made more risky choices (β = .185, p < .001) tended to engage in more face-to-face social interactions without a mask. Separate follow-up analyses by gambling context showed that the effects of Risky Choice on Interpersonal Social Interactions were significant in the disadvantageous (p < .001) and equal (p < .001) contexts but not the advantageous (p = .216) context. Fig 2 shows these effects.

Fig 2. Effect of risky choice on number of interpersonal social interactions by gambling type.


Advantageous gambles indicate that the expected value for the risky choice was higher than the safe gamble. Disadvantageous gambles indicate that the expected value for the risky option was lower than the safe option, and the expected values for both safe and risky choices were the same for Equal gambles. The results demonstrated that choosing a higher proportion of Disadvantageous and Equal Gambles predicted a greater number of interpersonal social interactions, meaning that participants engaged in less rigorous social distancing.

Additionally, interpersonal social interactions were also significantly associated with age, education level, COVID-19 experience, and whether or not COVID-19 had financially impacted the participants. Surprisingly, level of education (β = .160, p = .0001), age (β = .096, p = .035), personal experience with COVID-19 (β = .170, p < .0001), and negative financial complications due to COVID-19 (β = .135, p = .003) were all associated with more mask-less social interactions and, thus, less social distancing. Regression results are reported in S4 Table in S1 File. The R2 value for the overall model was .284.

The second linear regression examined predictors of Non-Essential Social Activities, like going to the mall or to dine-in restaurants. The results for this social distancing variable were largely consistent with the results for Interpersonal Social Interactions. While a greater difference in perceived risk was associated with fewer non-essential activities (β = -.218, p < .0001), both greater temporal discounting (β = -0.282, p < .0001) and a higher proportion of risky choices (β = .191, p < .0001) on the gambling task predicted greater engagement in non-essential social activities, indicating less social distancing (Fig 3).

Fig 3. Effect of risky choice on number of non-essential social activities by gambling type.


Advantageous gambles indicate that the expected value for the risky choice was higher than the safe gamble. Disadvantageous gambles indicate that the expected value for the risky option was lower than the safe option, and the expected values for both safe and risky choices were the same for Equal gambles. Higher proportion of Disadvantageous and Equal risky choices predicted greater engagement in non-essential social activities, an indicator of reduced social distancing behavior.

In terms of demographics, non-essential social activities were significantly associated with level of education, experience with COVID-19, and the financial impact of COVID-19. As level of education (β = 0.189, p < .0001), personal experiences with COVID-19 (β = 0.120, p = .002), and negative financial impact from COVID-19 (β = 0.235, p < .0001) increased, so did the number of non-essential social activities in which participants partook. Unlike Interpersonal Interactions, however, significant differences in Data Collection Wave were observed for non-essential social activities (β = -0.106, p = .009). S5 Table in S1 File shows the regression results. This model explained 43.8% of the variance in participation in non-essential social activities.

Optimism bias

The regression results (S6 Table in S1 File) showed that the proportion of risky choices predicted a reduced optimism bias (β = -.158, p = .002), but neither temporal discounting nor the difference in perceived risk was associated with the optimism bias. For the covariates included in the model, years of education (β = -.188, p < .001) predicted a diminished optimism bias, or a decreased perception that others would be more likely to contract COVID-19 than oneself. No other predictors (ps > .10) or covariates (ps > .05) were significant. The R2 value for the overall model was .099.

Perceived risk

The regression indicated that the proportion of risky choices (β = -0.200, p < .001) and temporal discounting (β = 0.104, p = .035) were predictive of the Perceived Risk of engaging in activities in public settings. A difference also emerged for Political Affiliation (β = -0.270, p < .001) in which Democrats indicated a higher perceived risk of these activities compared to Republicans and Independents. No other covariates were significant (ps > .40). S7 Table in S1 File shows the regression results. The difference in perceived risk variable was not included as a predictor in this model because it was derived from this measure. The R2 value for the overall model was .126.

Path model

As an exploratory analysis, a path model was performed to examine the relationships between risky choice, temporal discounting, and perceived risk with both actual social distancing and mask-wearing behavior together. We found an overall effect of risky choice on risk perceptions where mask-wearing is not being practiced: participants who make more risky choices attribute less risk to going to places where people do not wear masks (p = .003). Furthermore, we found that the effect of risky choice on mask-wearing is mediated by perceived risk of public activities and engagement in non-essential social activities: respondents who make more risky choices are more likely to attend non-essential social activities (p < .0001) and perceive lower risk of engaging in public activities (p < .05). On the other hand, those who have higher levels of temporal discounting tend to avoid non-essential social activities (p < .0001). In terms of appropriate mask-wearing behaviors, those who attend non-essential social activities are less likely to wear masks (p < .0001), and those who attribute higher risks to public activities are more likely to wear masks (p < .0001). These effects are shown in Fig 4.
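The path model itself was estimated in MPlus. As a rough sketch of the mediational structure it describes (risky choice and delay discounting predicting the mediators, which in turn predict mask-wearing), the chained OLS regressions below illustrate the idea on synthetic data with hypothetical variable names; this is not equivalent to the authors' simultaneous path estimation and does not reproduce their estimates or fit statistics.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic stand-in data with hypothetical variable names.
n = 404
d = pd.DataFrame({
    "risky_choice":      rng.uniform(0, 1, n),     # proportion of risky choices
    "delay_discounting": rng.uniform(0.15, 1, n),  # discounting AUC
})
d["perceived_risk"] = -0.5 * d["risky_choice"] + rng.normal(0, 1, n)
d["non_essential_activities"] = (
    0.6 * d["risky_choice"] - 0.4 * d["delay_discounting"] + rng.normal(0, 1, n)
)
d["mask_wearing"] = (
    0.5 * d["perceived_risk"] - 0.5 * d["non_essential_activities"] + rng.normal(0, 1, n)
)

# Mediator equations: decision-making measures predict the two mediators.
m1 = smf.ols("perceived_risk ~ risky_choice", data=d).fit()
m2 = smf.ols("non_essential_activities ~ risky_choice + delay_discounting", data=d).fit()
# Outcome equation: the mediators predict appropriate mask-wearing.
m3 = smf.ols("mask_wearing ~ perceived_risk + non_essential_activities", data=d).fit()

for label, fit in [("perceived risk", m1), ("non-essential activities", m2), ("mask wearing", m3)]:
    print(label, fit.params.round(3).to_dict())
```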

Fig 4. Path model showing the relationship between risky choice, delay discounting, and appropriate mask wearing behavior.


Risky Choice indicates the overall proportion of risky decisions in the Risky Choice Task. The relationship between Delay Discounting scores and appropriate mask-wearing behavior was mediated by the frequency of engaging in non-essential social activities. The relationship between Risky Choice and appropriate mask-wearing behavior was mediated by both the frequency of engaging in non-essential social activities and one’s perceived risk of engaging in public activities. Values in parentheses indicate standard error. ** indicates p < .05. *** indicates p < .001.

Supplemental analyses

Self-reported risk-taking (M = 36.53, SD = 24.15) ranged from 0–100. Self-reported risk-taking was strongly correlated with the overall proportion of risky choices (r = .53, p < .001), appropriate mask-wearing (r = -.37, p < .001), interpersonal social interactions (r = .35, p < .001), and non-essential social activities (r = .47, p < .001). Thus, self-reported risk-taking, like risky choices, also appears to have a strong relationship with COVID-19 preventative behavior. Moreover, one’s level of worry about contracting COVID-19 was significantly related to the perceived risk of engaging in activities in public settings (r = .52, p < .001) and mask-wearing (r = .18, p < .001) but did not correlate with social distancing behavior. There was no association between stress-related uncertainty due to COVID-19 and risky choice, mask-wearing, or social distancing measures (ps > .50). The full correlational results are shown in S8 Table in S1 File. In terms of prosocial behavior, correlations between the Prosocial Behavioral Intentions Scale and all outcome variables were nonsignificant, but there was an association between prosocial behavior on the Dictator Game and interpersonal social interactions (r = .160, p = .025) and non-essential social activities (r = .196, p = .006). Surprisingly, this association suggests that increased prosocial behavior on the Dictator Game was associated with less social distancing (S9 Table in S1 File). Supplemental exploratory analyses for political affiliation and a multivariate regression analysis with all dependent variables are shown in S10 Table in S1 File.

Discussion

To mitigate the transmission of COVID-19 in the United States, the CDC and other government policy institutes have recommended that individuals wear masks in public places and practice social distancing [1]. Compliance with these recommendations has been inconsistent, and the pandemic remains a major public health crisis. The results of this study provide insight into how certain decision-making and motivational propensities are associated with compliance behaviors. The primary findings demonstrate that increased temporal discounting and risky decision-making are associated with less appropriate mask-wearing behavior and social distancing. These findings support our hypothesis.

The effect of risk-taking on COVID-19 compliance behavior was dependent on sensitivity to expected values. When a risky option has a higher expected value than a safe option, it is advantageous to choose the risky option. The results indicated that risky choices in these advantageous contexts were not predictive of compliance behavior. Instead, greater risk-taking in disadvantageous contexts in which the expected value for the risky option was lower than the safe option predicted diminished compliance behavior. This finding suggests that individuals who are less sensitive to changes in expected value and exhibit less adaptive risky decision-making behavior are less likely to engage in mask-wearing and social distancing during the pandemic. Individuals who chose the risky option more frequently in equal expected value contexts (when the expected values for the risky and safe options matched) behaved similarly to those who chose more disadvantageous risky choices. However, the magnitude of the relationship between risk-taking and noncompliant behavior was greater in the disadvantageous contexts than in the equal contexts. It should be noted that the average percentage of risky choices across all participants in equal and disadvantageous contexts was low—less than 20%. Moreover, over half (57%) of participants chose the safe option on all disadvantageous trials, and 47% chose the safe option on all equal expected value trials. The relationship between risk-taking and compliance behavior appears to be driven by a minority of participants, which may limit the generalizability of the findings. Nevertheless, the behavior of even a minority of individuals can have substantial impacts during a pandemic. Recent estimates indicate that approximately 10% of infected individuals account for 80% of COVID-19 transmission [75]. Though a minority of individuals engage in equal and disadvantageous risky decision-making, the corresponding decrease in compliance with mask-wearing and social distancing observed in this study could have wide-ranging consequences for spreading COVID-19.

Furthermore, previous work has shown that temporal discounting is associated with maladaptive health behaviors, including unhealthy eating and engaging in risky sexual activities [33–35]. Our findings build on this prior work by showing that temporal discounting is also predictive of engagement in COVID-19 preventative behaviors. Thus, this work provides empirical evidence that individual differences in risk-taking and motivation influence compliance with COVID-19 prevention guidelines.

Recent research on COVID-19 risk perception has observed that individuals tend to believe that others are more likely to contract COVID-19 than they are, which supports the optimism bias [45–47]. Our findings are consistent with this research; on average, individuals underestimated their likelihood of contracting COVID-19 compared to their peers by 7.9 percentage points. Neither risk-taking behavior nor temporal discounting was associated with the magnitude of the optimism bias in this study. While it was anticipated that a stronger optimism bias would be associated with reduced compliance with COVID-19 preventative behavior, correlational results suggest an opposing view. The optimism bias was associated with increased social distancing and mask-wearing behavior, which was unexpected. However, concern for others has been associated with increased generosity towards strangers [76]. It is, therefore, possible that increased risk perception of spreading COVID-19 to others may increase concern for others, leading to increased mask-wearing and social distancing to protect others. This explanation is speculative, and future research is needed to replicate this relationship and further examine the role of the optimism bias in COVID-19 preventative behaviors.

In addition to differences in risk perception of others versus self in contracting COVID-19, the relationship between perceived risk of engaging in public activities and COVID-19 preventative behaviors was examined. Risky decision-making and temporal discounting were not predictive of risk perception. However, the hypothesis that increased risk perception would be associated with greater social distancing and mask-wearing was supported through correlational results, which is consistent with other recent findings [40]. This observed relationship is also in line with prior work showing that when people perceive an event as high risk, they want the risk reduced and support the establishment of risk-reduction regulations [20, 21]. The desire for a risk to be reduced is not the same as taking action to reduce the risk oneself, though, and it was unclear whether heightened risk perception would be associated with actual increases in COVID-19 preventative behavior. The study results shed light on this matter: heightened risk perception of engaging in public activities during the COVID-19 pandemic is associated with increased mask-wearing and social distancing.

This study further demonstrated that higher risk perception of public activities under non-social-distancing compared to social-distancing conditions was predictive of greater mask-wearing and social distancing behavior. This finding suggests that individuals who feel that social distancing is effective are more likely to engage in such behaviors. From another perspective, however, this result may have implications for risk compensation, which proposes that individuals adapt their behavior based on the perceived risk of that behavior, typically behaving in a more risk-taking way when perceived risk is low [77]. This theory has been observed in safety contexts in which having more protective measures in place, such as safety equipment [78, 79], increases people’s risk-taking behavior because these protective measures decrease one’s perceived level of risk. Applied to COVID-19 preventative behavior, it is possible that if people perceive the risk of social activities as lower when others are wearing masks or social distancing, then they may be more willing to engage in those social activities. Future research is needed to examine risk compensation in the context of COVID-19 preventative behavior.

The relationship between several demographic factors and COVID-19 preventative behaviors was also examined in this study. Although prior H1N1 and COVID-19 research has observed age differences in virus risk perception [37, 41, 80], the present study found no significant relationship between age and preventative behaviors or risk perception of engaging in public activities during the COVID-19 pandemic. The results indicated that higher income levels predicted greater compliance with mask-wearing guidelines but did not affect social distancing behavior. It was also expected that first-hand personal experience with COVID-19, including knowing someone who became sick or died, and negative financial consequences from the pandemic would lead to greater compliance behaviors. The results did not support these predictions; instead, both factors were associated with reduced social distancing. Further research is needed to examine why negative personal experience with COVID-19 may not necessarily lead to greater COVID-19 preventative behaviors.

Response to the pandemic has become highly politicized in the United States [3, 4]. This study showed that differences in political affiliation were predictive of perceived risk of engaging in public activities (assuming no social distancing), which is consistent with other recent findings [40]. Democrats reported higher risk perception of engaging in public activities than Republicans. Exploratory analyses examining the effect of political affiliation alone on mask-wearing and social distancing showed that Democrats and Independents engaged in more social distancing and appropriate mask-wearing behavior than Republicans (see S1 File). However, political affiliation was not significant in the regression model, which suggests that political affiliation does not predict mask-wearing and social distancing over and above the behavioral decision-making factors of risk-taking, temporal discounting, and perceived risk. In other words, risk-taking and temporal discounting are stronger predictors of mask-wearing and social distancing than political affiliation. Outside of political affiliation, the results indicated that those who participated in the study in December reported more mask-wearing and social distancing than those who participated in September. This result is consistent with research from the COVID States Project [53] showing that mask-wearing behavior has increased since the pandemic began. Therefore, while political affiliation may be a divisive factor in terms of COVID-19 risk perception, the data suggest that, on the whole, individuals’ mask-wearing and social distancing behavior increased between September and December 2020.

As supplementary analyses, the relationship between worry, risk perception, and COVID-19 prevention behaviors was also explored. The results showed that although high levels of worry about contracting COVID-19 were associated with increased risk perception, worry was not associated with heightened compliance with COVID-19 prevention guidelines. This finding diverges from previous research showing a positive relationship between worry and prosocial behavioral changes [36, 38, 42]. Our supplementary results also showed that prosocial behavior was not associated with increased compliance with mask-wearing and social distancing in our sample. The worry result may suggest an interesting dissociation in which worry exerts differential effects on risk perception and actual risk behavior: people worried about contracting COVID-19 may perceive public activities as riskier but still choose to engage in them anyway. In a self-destructive cycle, engaging in such perceived high-risk activities may then heighten worry about getting COVID-19 from those activities.

To explore the direction of the relationship between decision-making, risk perception, and COVID-19 preventative behaviors, a path analysis was performed. The results showed that risk perception and engagement in non-essential social activities mediated the relationship between decision-making and mask-wearing behavior. Specifically, individual differences in risky decision-making predicted both perceived risk and actual risk-reduction behavior: risky decision-making predicted reduced risk perception, which, in turn, predicted diminished mask-wearing behavior. One possible explanation for this relationship is that individuals who regularly make risky choices may not view the COVID-19 pandemic as perilous relative to some of the other risks they have taken. The path model also demonstrated that the relationship between temporal discounting and mask-wearing behavior was mediated by engagement in non-essential social activities, but not by risk perception. People who are more motivated by immediate gratification may engage in more non-essential social interactions to experience the pleasure of those activities and interactions at the expense of long-term public health consequences. Engaging in more of these immediately rewarding social activities may mean that these individuals encounter more situations in which masks and social distancing should be employed, yet they choose to engage in less appropriate mask-wearing behavior. Risky decision-making behavior, temporal discounting, and risk perception collectively predicted 55% of the variance in appropriate mask-wearing behavior. Individual differences in general decision-making patterns are therefore highly predictive of who complies with COVID-19 prevention guidelines.
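As a rough illustration of how one mediation path in such a model can be estimated, the sketch below fits ordinary least squares regressions to simulated data and computes the indirect effect as the product of the two path coefficients. The variable names and coefficient values are hypothetical, and this is not the path-analysis software or exact specification used in the study.

```python
# Illustrative mediation sketch: risky decision-making -> risk perception -> mask-wearing.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 404
risk_taking = rng.normal(size=n)                                    # predictor (X)
risk_perception = -0.5 * risk_taking + rng.normal(size=n)           # mediator (M)
mask_wearing = 0.6 * risk_perception - 0.2 * risk_taking + rng.normal(size=n)  # outcome (Y)

# a path: X -> M
a = sm.OLS(risk_perception, sm.add_constant(risk_taking)).fit().params[1]

# b and c' paths: M and X -> Y
fit_y = sm.OLS(mask_wearing, sm.add_constant(np.column_stack([risk_perception, risk_taking]))).fit()
b, c_prime = fit_y.params[1], fit_y.params[2]

print(f"indirect effect (a*b) = {a * b:.2f}; direct effect (c') = {c_prime:.2f}")
```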

Implications

Despite widespread feelings of anxiety and fear surrounding COVID-19, many individuals still behave in ways that are inconsistent with social distancing guidelines. Some individuals who report lower adherence to preventive measures may have been disregarding such guidelines throughout the pandemic. Others, however, may have rigidly followed guidelines early in the pandemic and then begun to disregard the warnings due to a type of “quarantine burnout,” in which people simply run out of willpower to enact safe practices every time they leave the house. This idea fits with our finding that instant gratification may influence COVID-19 compliance behaviors. While people may still believe that COVID-19 is a serious risk, the value placed on long-term health and on engaging in COVID-19 preventative behaviors may decline as the duration of the pandemic increases.

Limitations and future directions

This study is not without limitations. First, all study participants were living in the United States, an individualistic culture in which the needs of the individual are often valued over the needs of the community. Individualistic cultures may be more susceptible to allowing preferences for short-term rewards, like socializing, over the long-term benefits of community health to guide compliance behavior. As previous work has shown that responses to the coronavirus vary across countries [40], the present results may not extend to other countries or cultures with more collectivist values.

A limitation of our design is that overall knowledge of coronavirus information was not assessed, which may affect COVID-19 preventative behavior. Moreover, as with many studies that utilize self-report measures, social desirability biases may have influenced participants’ risky choices or disclosure of mask-wearing and social distancing behavior. This study was also conducted online, and as such our findings may not generalize to individuals with limited internet or computer access. The online nature of the study also means that experimental control over the study environment was limited. We further note that other factors outside of this study, including personality [43], line of work, mental health, and emotional states [42], may also play a role in COVID-19 preventative behavior. These unexamined factors may covary with risk-taking or temporal discounting, which could inflate the effect sizes observed in this study. We therefore caution that the study effect sizes may represent an upper bound of the overall effect of risky decision-making and temporal discounting on COVID-19 preventative behavior. Moreover, it should be emphasized that the observed effects in this study reflect associations rather than causal effects.

Furthermore, according to the risk-as-feelings hypothesis, anticipatory ‘in-the-moment’ emotions have a substantial impact on one’s decisions [81]. The risk-as-feelings hypothesis can explain decisions that diverge from ones that may objectively seem to be the best course of action. However, our study design did not capture feelings about COVID-19 or anticipatory emotions, and future research should consider examining the effect of decision-making patterns on COVID-19 preventative behavior using a risk-as-feelings framework.

Conclusions

The effects of the COVID-19 pandemic have been widespread and deleterious. This work sought to characterize the role of risky decision-making and motivational factors underlying COVID-19 preventative behavior. This study provides empirical evidence that increased risk-taking during decision-making, diminished risk perception, and motivation for immediate over delayed gratification predict reduced adherence to COVID-19 prevention guidelines. This information may provide insight into ways to increase compliance with these guidelines. While public health guidelines and messaging need to emphasize the risks of COVID-19, it is also important to highlight activities and opportunities that can be immediately rewarding. This approach may provide an appropriate outlet for those with more risk-taking tendencies and those who seek out immediate rewards.

Supporting information

S1 File

(DOCX)

Data Availability

All data files are available from the Open Science Framework database (https://osf.io/xy6aj).

Funding Statement

Clemson University Creative Inquiry Program #1267. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. No authors received salary funding from this grant.

References

  • 1.Center for Disease Control and Prevention (CDC). Scientific Brief: SARS-CoV-2 and Potential Airborne Transmission. October 5, 2020; https://www.cdc.gov/coronavirus/2019-ncov/more/scientific-brief-sars-cov-2.html, 2020.
  • 2.Honein MA, Christie A, Rose DA, Brooks JT, Meaney-Delman D, Cohn A, et al. Summary of guidance for public health strategies to address high levels of community transmission of SARS-CoV-2 and related deaths. MMWR Morbidity and Mortality Weekly Report 2020;69:1860–1867. 10.15585/mmwr.mm6949e2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Allcott H, Boxell L, Conway J, Gentzkow M, Thaler M, Yang DY. Polarization and public health: Partisan differences in social distancing during the Coronavirus pandemic. NBER Working Paper 2020. [DOI] [PMC free article] [PubMed]
  • 4.Rasmussen SA, Jamieson DJ. Public health decision making during Covid-19—Fulfilling the CDC pledge to the American people. New England Journal of Medicine 2020;383(10):901–903. 10.1056/NEJMp2026045 [DOI] [PubMed] [Google Scholar]
  • 5.Newey S. Young people ’letting their guard down’ are driving COVID resurgence, WHO warns. The Telegraph 2020 July 30.
  • 6.Morgado P, Sousa N, Cerqueira JJ. The impact of stress in decision making in the context of uncertainty. Journal of Neuroscience Research 2015;93(6):839–847. 10.1002/jnr.23521 [DOI] [PubMed] [Google Scholar]
  • 7.Weller JA, Levin IP, Bechara A. Do individual differences in Iowa Gambling Task performance predict adaptive decision making for risky gains and losses? Journal of Clinical and Experimental Neuropsychology 2010;32(2):141–150. 10.1080/13803390902881926 [DOI] [PubMed] [Google Scholar]
  • 8.Levin I, Weller J, Pederson A, Harshman L. Age-related differences in adaptive decision making: Sensitivity to expected value in risky choice. Judgment Decision-Making 2007;2:225–233. [Google Scholar]
  • 9.Busemeyer JR, Stout JC. A contribution of cognitive decision models to clinical assessment: decomposing performance on the Bechara gambling task. Psychological Assessment 2002; 14, 253–262. 10.1037//1040-3590.14.3.253 [DOI] [PubMed] [Google Scholar]
  • 10.Worthy DA, Pang B, Byrne KA. Decomposing the roles of perseveration and expected value representation in models of the Iowa gambling task. Frontiers in Psychology 2013; 4, 640. 10.3389/fpsyg.2013.00640 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Kahneman D, Tversky A. Prospect Theory: An Analysis of Decision under Risk. Econometrica 1979;47(2):263–292. [Google Scholar]
  • 12.Anderson LR, Mellor JM. Are risk preferences stable? Comparing an experimental measure with a validated survey-based measure. Journal of Risk and Uncertainty 2009;39(2):137–160. [Google Scholar]
  • 13.Levy DJ, Thavikulwat AC, Glimcher PW. State dependent valuation: The effect of deprivation on risk preferences. PloS one 2013;8(1). 10.1371/journal.pone.0053978 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Weber EU, Blais AR, & Betz NE. A domain‐specific risk‐attitude scale: Measuring risk perceptions and risk behaviors. Journal of Behavioral Decision Making 2002, 15(4), 263–290. [Google Scholar]
  • 15.Albert SM, Duffy J. Differences in risk aversion between young and older adults. Neuroscience and neuroeconomics 2012;2012(1). 10.2147/NAN.S27184 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Charness G, Gneezy U. Strong evidence for gender differences in risk taking. Journal of Economic Behavior & Organization 2012;83:50–58. [Google Scholar]
  • 17.Mishra S, Lalumière ML. Individual differences in risk-propensity: Associations between personality and behavioral measures of risk. Personality and Individual Differences 2011;50(6):869–873. [Google Scholar]
  • 18.Slovic P. Perception of risk. Science 1987;236(4799):280–285. 10.1126/science.3563507 [DOI] [PubMed] [Google Scholar]
  • 19.Finucane ML, Alhakami A, Slovic P, Johnson SM. The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making 2000;13(1):1–17. [Google Scholar]
  • 20.Slovic P, Fischhoff B, Lichtenstein S. Perceived risk: psychological factors and social implications. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 1981;376(1764):17–34. [Google Scholar]
  • 21.Slovic P, Fischhoff B, Lichtenstein S. Characterizing perceived risk. Perilous progress: Managing the hazards of technology 1985:91–125. [Google Scholar]
  • 22.Weinstein ND. Unrealistic optimism about future life events. Journal of Personality and Social Psychology 1980;39(5):806–820. [Google Scholar]
  • 23.Weinstein ND. Unrealistic optimism about susceptibility to health problems: Conclusions from a community-wide sample. Journal of Behavioral Medicine 1987;10(5):481–500. 10.1007/BF00846146 [DOI] [PubMed] [Google Scholar]
  • 24.Cho H, Lee JS, Lee S. Optimistic bias about H1N1 flu: Testing the links between risk communication, optimistic bias, and self-protection behavior. Health Communication 2013;28(2):146–158. 10.1080/10410236.2012.664805 [DOI] [PubMed] [Google Scholar]
  • 25.Kim HK, Niederdeppe J. Exploring optimistic bias and the integrative model of behavioral prediction in the context of a campus influenza outbreak. Journal of Health Communication 2013;8(2):206–222. 10.1080/10810730.2012.688247 [DOI] [PubMed] [Google Scholar]
  • 26.Loewenstein G, Thaler RH. Anomalies: intertemporal choice. Journal of Economic Perspectives 1989;3(4):181–193. [Google Scholar]
  • 27.Richards JB, Zhang L, Mitchell SH, De Wit H. Delay or probability discounting in a model of impulsive behavior: effect of alcohol. Journal of Experimental Analysis of Behavior 1999;71(2):121–143. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Kassam KS, Gilbert DT, Boston A, Wilson TD. Future anhedonia and time discounting. Journal of Experimental Social Psychology 2008;44(6):1533–1537. [Google Scholar]
  • 29.McClure SM, Laibson DI, Loewenstein G, Cohen JD. Separate neural systems value immediate and delayed monetary rewards. Science 2004;306(5695):503–507. 10.1126/science.1100907 [DOI] [PubMed] [Google Scholar]
  • 30.Trope Y, Liberman N. Temporal Construal. Psychological Review 2003;110(3):403. 10.1037/0033-295x.110.3.403 [DOI] [PubMed] [Google Scholar]
  • 31.Vaidya AR, Fellows LK. The neuropsychology of decision-making: A view from the frontal lobes. Decision Neuroscience 2017:277–289. [Google Scholar]
  • 32.Bickel WK, Odum AL, Madden GJ. Impulsivity and cigarette smoking: Delay discounting in current, never, and ex-smokers. Psychopharmacology 1999;146(4):447–454. 10.1007/pl00005490 [DOI] [PubMed] [Google Scholar]
  • 33.Daugherty JR, Brase GL. Taking time to be healthy: Predicting health behaviors with delay discounting and time perspective. Personality and Individual differences 2010;48(2):202–207. [Google Scholar]
  • 34.Garza KB, Ding M, Owensby JK, Zizza CA. Impulsivity and fast-food consumption: a cross-sectional study among working adults. Journal of the Academy of Nutrition and Dietetics 2016;116(1):61–68. 10.1016/j.jand.2015.05.003 [DOI] [PubMed] [Google Scholar]
  • 35.Snider SE, DeHart WB, Epstein LH, Bickel WK. Does delay discounting predict maladaptive health and financial behaviors in smokers? Health Psychology 2019;38(1):21–28. 10.1037/hea0000695 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Rudisill C. How do we handle new health risks? Risk perception, optimism, and behaviors regarding the H1N1 virus. Journal of Risk Research 2013;16(8):959–980. [Google Scholar]
  • 37.Bults M, Beaujean DJ, de Zwart O, Kok G, van Empelen P, van Steenbergen JE, et al. Perceived risk, anxiety, and behavioural responses of the general public during the early phase of the Influenza A (H1N1) pandemic in the Netherlands: Results of three consecutive online surveys. BMC Public Health 2011;11(2). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Kim Y, Zhong W, Jehn M, Walsh L. Public risk perceptions and preventive behaviors during the 2009 H1N1 influenza pandemic. Disaster Medicine and Public Health Preparedness 2015;9(2):145–154. 10.1017/dmp.2014.87 [DOI] [PubMed] [Google Scholar]
  • 39.Ibuka Y, Chapman GB, Meyers LA, Li M, Galvani AP. The dynamics of risk perceptions and precautionary behavior in response to 2009 (H1N1) pandemic influenza. BMC Infectious Diseases 2010;10(1):296. 10.1186/1471-2334-10-296 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Dryhurst S, Schneider CR, Kerr J, Freeman AL, Recchia G, Van Der Bles A. M., et al. Risk perceptions of COVID-19 around the world. Journal of Risk Research 2020:1–13. [Google Scholar]
  • 41.Bruine de Bruin W. Age differences in COVID-19 risk perceptions and mental health: Evidence from a national US survey conducted in March 2020. The Journals of Gerontology: Series B 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Harper CA, Satchell LP, Fido D, Latzman RD. Functional fear predicts public health compliance in the COVID-19 pandemic. International Journal of Mental Health and Addiction 2020. 10.1007/s11469-020-00281-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Nofal AM, Cacciotti G, Lee N. Who complies with COVID-19 transmission mitigation behavioral guidelines? PloS one 2020;15(10). 10.1371/journal.pone.0240396 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.O’Connell K, Berluti K, Rhoads SA, Marsh A. Reduced social distancing during the COVID-19 pandemic is associated with antisocial behaviors in an online United States sample. PsyArXiv 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Asimakopoulou K, Hoorens V, Speed E, Coulson N, Antoniszczak D, Collyer F, et al. Comparative optimism about infection and recovery from COVID-19: Implications for adherence with lockdown advice. Health Expectations 2020;23(6):1502–1511. 10.1111/hex.13134 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Bottemanne H, Morlaàs O, Fossati P, Schmidt L. Does the Coronavirus epidemic take advantage of human Optimism Bias? Frontiers in Psychology 2020;11. 10.3389/fpsyg.2020.02001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Druică E, Musso F, Ianole-Călin R. Optimism bias during the COVID-19 pandemic: Empirical evidence from Romania and Italy. Games 2020;11(3):39. [Google Scholar]
  • 48.Crump MJ, McDonnell JV, Gureckis TM. Evaluating Amazon’s Mechanical Turk as a tool for experimental behavioral research. PloS one 2013;8(3). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Goodman JK, Cryder CE, Cheema A. Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision Making, 2013;26(3):213–224. [Google Scholar]
  • 50.Oppenheimer DM, Meyvis T, Davidenko N. Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology 2009;45(4):867–872. [Google Scholar]
  • 51.Keith MG, Tay L, Harms PD. Systems perspective of Amazon Mechanical Turk for organizational research: Review and recommendations. Frontiers in Psychology 2017;8:1359. 10.3389/fpsyg.2017.01359 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Rose S. The growing size and incomes of the upper middle class. Urban Institute; 2016. [Google Scholar]
  • 53.Lazer D, Santillana M, Perlis R, Quintana A, Ognyanova K, Green J, et al. The COVID States Project: A 50-state COVID-19 survey report #26: Trajectory of COVID-19-related behaviors. November 2020.
  • 54.Brañas-Garza P, Jorrat D, Espín A, Sanchez A. Paid and hypothetical time preferences are the same: Lab, field and online evidence. arXiv preprint arXiv 2020c.
  • 55.Johnson MW, Bickel WK. Within‐subject comparison of real and hypothetical money rewards in delay discounting. Journal of the Experimental Analysis of Behavior 2002;77(2):129–146. 10.1901/jeab.2002.77-129 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Locey ML, Jones BA, Rachlin H. Real and hypothetical rewards. Judgment and Decision Making 2011;6(6):552–564. [PMC free article] [PubMed] [Google Scholar]
  • 57.Madden GJ, Raiff BR, Lagorio CH, Begotka AM, Mueller AM, Hehli DJ, et al. Delay discounting of potentially real and hypothetical rewards: II. Between-and within-subject comparisons. Experimental and clinical psychopharmacology 2004;12(4):251. 10.1037/1064-1297.12.4.251 [DOI] [PubMed] [Google Scholar]
  • 58.Brañas-Garza P, Estepa Mohedano L, Jorrat D, Orozco V, Rascon-Ramirez E. To pay or not to pay: Measuring risk preferences in lab and field. Munich Personal RePEc Archive 2020a. [Google Scholar]
  • 59.Carlsson F, Martinsson P. Do hypothetical and actual marginal willingness to pay differ in choice experiments? Application to the valuation of the environment. Journal of Environmental Economics and Management 2001;41(2):179–192. [Google Scholar]
  • 60.Hinvest NS, Anderson IM. The effects of real versus hypothetical reward on delay and probability discounting. Quarterly Journal of Experimental Psychology 2010;63(6):1072–1084. 10.1080/17470210903276350 [DOI] [PubMed] [Google Scholar]
  • 61.Wiseman DB, Levin IP. Comparing risky decision making under conditions of real and hypothetical consequences. Organizational behavior and human decision processes 1996;66(3):241–250. [Google Scholar]
  • 62.Slovic P. Differential effects of real versus hypothetical payoffs on choices among gambles. Journal of Experimental Psychology 1969;80:434. [Google Scholar]
  • 63.Brevers D, Cleeremans A, Goudriaan AE, Bechara A, Kornreich C, Verbanck P, et al. Decision making under ambiguity but not under risk is related to problem gambling severity. Psychiatry Research 2012;200(2–3):568–574. 10.1016/j.psychres.2012.03.053 [DOI] [PubMed] [Google Scholar]
  • 64.Byrne KA, Ghaiumy Anaraky R. Strive to win or not to lose? Age-related differences in framing effects on effort-based decision-making. The Journals of Gerontology: Series B 2020;75(10):2095–2105. 10.1093/geronb/gbz136 [DOI] [PubMed] [Google Scholar]
  • 65.Galván A, McGlennen KM. Daily stress increases risky decision‐making in adolescents: A preliminary study. Developmental Psychobiology 2012;54(4):433–440. 10.1002/dev.20602 [DOI] [PubMed] [Google Scholar]
  • 66.Jasper JD, Bhattacharya C, Levin IP, Jones L, Bossard E. Numeracy as a predictor of adaptive risky decision making. Journal of Behavioral Decision Making 2013;26(2):164–173. [Google Scholar]
  • 67.Weller JA, Levin IP, Shiv B, Bechara A. Neural correlates of adaptive decision making for risky gains and losses. Psychological Science 2007;18(11):958–964. 10.1111/j.1467-9280.2007.02009.x [DOI] [PubMed] [Google Scholar]
  • 68.Weller JA, Levin IP, Shiv B, Bechara A. The effects of insula damage on decision-making for risky gains and losses. Social Neuroscience 2009;4(4):347–358. 10.1080/17470910902934400 [DOI] [PubMed] [Google Scholar]
  • 69.Weller JA, Levin IP, Denburg NL. Trajectory of risky decision making for potential gains and losses from ages 5 to 85. Journal of Behavioral Decision Making 2011;24(4):331–344. [Google Scholar]
  • 70.Yao YW, Chen PR, Li S, Wang LJ, Zhang JT, Yip SW, et al. Decision-making for risky gains and losses among college students with Internet gaming disorder. PloS one 2015;10(1). 10.1371/journal.pone.0116471 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Jordan J, Yoeli E, Rand DG. Don’t get it or don’t spread it? Comparing self-interested versus prosocial motivations for COVID-19 prevention behaviors. PsyArXiv 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Betsch C, Korn L, Sprengholz P, Felgendreff L, Eitze S, Schmid P, et al. Social and behavioral consequences of mask policies during the COVID-19 pandemic. Proceedings of the National Academy of Sciences 2020; 117(36), 21851–21853. 10.1073/pnas.2011674117 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Brañas-Garza P, Jorrat DA, Alfonso A, Espin AM, García T, Kovarik J. Exposure to the Covid-19 pandemic and generosity in southern Spain. PsyArxiv 2020b. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Baumsteiger R, & Siegel JT. Measuring prosociality: The development of a prosocial behavioral intentions scale. Journal of Personality Assessment 2019; 101(3), 305–314. 10.1080/00223891.2017.1411918 [DOI] [PubMed] [Google Scholar]
  • 75.Endo A, Center for the Mathematical Modelling of Infectious Diseases COVID-19 Working Group, Abbott S, Kucharski AJ, Funk S. Estimating the overdispersion in COVID-19 transmission using outbreak sizes outside China. Wellcome Open Research 2020; 5:67. 10.12688/wellcomeopenres.15842.3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Kappes A, Faber NS, Kahane G, Savulescu J, Crockett MJ. Concern for others leads to vicarious optimism. Psychological Science 2018;29(3):379–389. 10.1177/0956797617737129 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Wilde GJ. The theory of risk homeostasis: implications for safety and health. Risk Analysis 1982;2(4):209–225. [Google Scholar]
  • 78.Hasanzadeh S, de la Garza J. M., Geller ES. Latent effect of safety interventions. Journal of Construction Engineering and Management 2020;146(5). [Google Scholar]
  • 79.Hedlund J. Risky business: safety regulations, risk compensation, and individual behavior. Injury Prevention 2000;6(2):82–89. 10.1136/ip.6.2.82 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Masters NB, Shih SF, Bukoff A, Akel KB, Kobayashi LC, Miller AL, et al. Social distancing in response to the novel coronavirus (COVID-19) in the United States. PloS one 2020;15(9). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81.Loewenstein GF, Weber EU, Hsee CK, Welch N. Risk as feelings. Psychological bulletin 2001;127(2):267. 10.1037/0033-2909.127.2.267 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Pablo Brañas-Garza

29 Dec 2020

PONE-D-20-32678

Risk-Taking Unmasked: Using Risky Choice and Temporal Discounting to Explain COVID-19 Preventative Behaviors

PLOS ONE

Dear Dr. Byrne,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Both reports are attached. As you will see, the referees are asking for clarifications (for instance, the measure of risk aversion). Both referees are concerned about the sample size (and power), in particular with the n=20 sample from Clemson. Perhaps you may consider enlarging the sample size. This is a personal recommendation only; I am not asking for new experiments.

Both referees mention that the entire experiment is self-reported (no incentives). I am in favor of hypothetical experiments, and in fact I do experiments comparing hypothetical and incentivised measures (as both referees cite)... but you need to explain this. It's important to clarify and show the potential limitations.

There are many other comments in the report you need to handle. Please keep in mind that I will send back the paper to the very same referees.

Please submit your revised manuscript by Feb 12 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Pablo Brañas-Garza, PhD Economics

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

3. Thank you for stating the following financial disclosure:

"The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

At this time, please address the following queries:

  1. Please clarify the sources of funding (financial or material support) for your study. List the grants or organizations that supported your study, including funding received from your institution.

  2. State what role the funders took in the study. If the funders had no role in your study, please state: “The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

  3. If any authors received a salary from any of your funders, please state which authors and which funders.

  4. If you did not receive any funding for this study, please state: “The authors received no specific funding for this work.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The general idea of the paper is to analyze the correlation of risk preferences, temporal discounting, risk perception and measures of appropriate mask wearing, social distancing.

In order to do the analysis, the authors run an online experiment (n=225). Participants were recruited using MTurk (N=220) and the undergraduate subject pool at Clemson University (N=20).

The work is very well written and yields interesting results on the relations between different behavioral measures and the proper use of masks and social distancing. Despite this, it is a correlational study and some results should be considered with caution.

General comments

The paper analyzes the relationship between COVID-19 preventative behaviors and individual differences in four classic judgment and decision-making constructs. However, the correct use of masks and compliance with social distancing can also be seen as a collective action problem, where there are other hypotheses that can explain how and why people cooperate. Also, COVID-19 could have a direct impact on risk, delay discounting, and selfishness (see Brañas et al., 2020a; Adena and Harke, 2020). In the end, what the authors may be seeing is that people became more selfish, impatient, or risk averse in response to this situation, and became even more so as the days passed within the survey’s time window. The authors probably need to add a paragraph with this discussion and might control for day fixed effects in the regression analysis.

Also, the independent variables used do not directly reflect compliance with COVID-19 prevention guidelines. Preventive measures are public knowledge, so many people may answer in a socially desirable way and not necessarily reveal their true intentions. The authors probably need to discuss social desirability bias in their hypotheses.

Specific comments

a) In discussing the study’s limitations, the authors need to analyze how representative the sample is of the standard US population. With 225 observations, it is probably not representative, and the external validity of the results is very restricted.

b) Despite the power calculations made, the number of observations is low for an MTurk sample. However, the design is very good and the results are very interesting, so the authors should think about redoing the experiment with a larger sample. It is not necessary to do it on MTurk. Jorrat (2020) suggests a procedure for doing online experiments in a short time and achieving a high number of observations.

c) Another interesting independent variable to analyze could be the difference between the perceived risk of the different activities with and without social distancing. This could be a measure of how effective people think social distancing is.

d) The authors need to discuss why hypothetical time and risk experimental measures are a good proxy for incentivized ones. These papers study this experimental question:

Brañas-Garza, P., Jorrat, D., Espín, A. M., & Sanchez, A. (2020). Paid and hypothetical time preferences are the same: Lab, field and online evidence. arXiv preprint arXiv:2010.09262.

Brañas-Garza, P., Estepa Mohedano, L., Jorrat, D., Orozco, V., & Rascon-Ramirez, E. (2020). To pay or not to pay: Measuring risk preferences in lab and field.

Falk, A., Becker, A., Dohmen, T. J., Huffman, D., and Sunde, U. (2015). The preference survey module: A validated instrument for measuring risk, time, and social preferences. IZA Discussion Paper.

e) A regression analysis with all the dependent variables is needed. The authors can run different specifications, adding each of the four variables separately, as well as specifications with all the variables. The authors also need to put the regression tables in the supplementary materials.

References:

Adena, M. & Harke, J, (2020). COVID-19 and pro-sociality: the effect of pandemic severity and increased pandemic awareness on charitable giving. Mimeo.

Branas-Garza, P., Jorrat, D. A., Alfonso, A., Espin, A. M., García, T., & Kovarik, J. (2020). Exposure to the Covid-19 pandemic and generosity. https://doi.org/10.31234/osf.io/6ktuz

Jorrat, D. A. (2020). Recruiting experimental subjects using WhatsApp. https://doi.org/10.31234/osf.io/6vgec

Reviewer #2: The article aims to find evidence supporting the idea that individual behavior to prevent the diffusion of COVID-19, or attitudes toward the riskiness of the pandemic, are linked to underlying individual preferences, such as risk aversion, risk perception, and time discounting. The preventative individual behaviors considered are wearing facemasks and avoiding large gatherings. The attitudes considered are perception of risk and optimism bias. The authors find evidence of the relevance of risk preferences for these dependent variables.

Major points:

1) Two categories of individual motivations are relevant to preventative behavior. One category refers to risk aversion, and this is considered by the authors. Another concerns pro-sociality. For instance, wearing masks is at the same time something that protects the individual from infection, but also protects others from catching the infection (e.g. Cheng et al., 2020). The participants themselves are aware of this aspect, as apparent from responses to an item of the questionnaire. For this reason, I find the design of the study incomplete, because it does not include a measure of pro-sociality. If pro-sociality were positively correlated with risk aversion, then risk aversion would arguably pick up some of the effects of pro-sociality, thus inflating the effect size. Since there do not seem to be items measuring pro-sociality in the questionnaire, this appears to be an irreparable flaw of the design. The authors should at least discuss the extent to which their estimation of the effect of risk aversion is an upper bound of the real effect.

2) I am puzzled by the measurement of risk preferences. The authors divide lotteries into three types – risk advantageous, risk disadvantageous, and equal risk – but only find significant effects for the latter two. It is not theoretically clear why this should be the case, and why we should consider these three types of lotteries separately from each other. Individuals who would prefer lotteries to the safe option when the lotteries are disadvantageous (or “equal”, in the authors’ wording) are normally referred to as “risk lovers” or “risk-neutral” individuals. To my knowledge, risk lovers and risk-neutral individuals are a minority of the population, while most people are risk averse. Since the authors only find significant effects for “disadvantageous” or “equal” lotteries, I wonder whether this effect is driven only by a relative minority of the sample. This would not be an uninteresting result per se, but there may be issues of generalizability. I would have liked to see simple descriptive statistics for this variable, but they were not reported.

3) Another related point concerns the construction of the risk aversion variable. Dividing lotteries into these three levels and measuring the percentage of risky choices within each level seems a rather coarse approach. First, within each level, different lotteries will have different expected payoff values and thus different degrees of “advantageousness” and “disadvantageousness”. The level of risk is fixed in the equal risky choices, but here (presumably) the size of the pie was manipulated. Hence, considerable information seems to have been ignored when constructing the indexes. The approach that I would advise is instead different. Drawing on contributions in experimental economics, a single parameter for individual risk aversion may be estimated on the basis of choices across the three levels of risk. One approach models utility as “Constant Relative Risk Aversion”, and the curvature of the utility function (which is given by one parameter) is a synthetic indicator of an individual’s risk aversion (see Harrison & Rutström, 2008; Wakker, 2008). More sophisticated calibrations are also possible, including in particular the estimation of a loss aversion parameter (Abdellaoui et al., 2008). Incidentally, it is not clear from the text whether lotteries including losses were administered, but this seems to be the case from the examples reported in the questionnaire registered at OSF. Other approaches would be possible, but the current approach is unsatisfactory as it stands, in my view.

4) The authors use linear regressions, but given the discrete nature of the dependent variable, an ordered logit model or interval regression would have been more appropriate. It is also not clear to me why the authors use repeated measures for the risky hypothetical choices, taken over the three levels (advantageous, equal, disadvantageous) – which, incidentally, was not part of the pre-analysis plan. This seems to arbitrarily inflate the power of the risky decision variable and does not appear to be grounded in theory. Tables with the regression results should be reported, either in the main text or the Appendix.

5) At page 16 the authors state: “equal gambles (N=12) in which the expected value for the risky and sure options were identical or nearly identical […]”. It appears arbitrary to classify lotteries whose expected value is the same or “nearly the same” as the certain option as belonging to the same category. In this sense, having 12 different lotteries that are “equal” seems rather excessive. I understand this may be customary in the strand of literature the authors are following. If so, this aspect should be clarified.

6) Page 16-17: “The Appendix shows the full list of questions”. I could only find three questions in the OSF website.

7) An obvious concern is that all the variables are self-reported. In particular, it was not clear to me that the choice of risky lotteries had not been monetarily incentivized. Only in the pre-registration of hypothesis I could eventually find this information. To the very least, the authors should discuss the implications of hypothetical Vs. monetarily incentivized questions (e.g. Beattie & Loomes, 1997; Brañas-Garza et al., 2020; Carlsson & Martinsson, 2001; Donkers et al. 2001).

8) I appreciate that the authors followed the good practice of pre-registering their hypotheses. It is clear that the analysis reported in the paper generally followed the pre-analysis plan. Nevertheless, there appear to be some deviations, and these should be flagged in the paper. In particular, (a) the second hypothesis (Greater stress-related uncertainty due to COVID-19 will be associated with decreased risk-taking) has not been analyzed in the paper. (b) Dependent variable (3), “Willingness to return to work (continuous scale)”, has not been analyzed, while “Perceived Risk” has been. I am very much against the pre-registration acting as a straitjacket on what authors should report in the paper. But transparency requires informing the reader as to why some modifications of the pre-analysis plan were undertaken, what is post-hoc rationalization of results, and what is post-diction rather than pre-diction.

9) The authors may benefit from including in their analysis the report issued by “The Covid States Project” (https://covidstates.org/), led by Northeastern University. In particular, the report on COVID19-associated behavior signals that wearing face masks is the only behavior that, among those considered, has been on the rise, while others, like social distancing, have been declining since measuring began (at the end of April). See: https://covidstates.org/ https://kateto.net/covid19/COVID19%20CONSORTIUM%20REPORT%2026%20BEHAVIOR%20NOV%202020.

Minor points:

10) The authors give the impression of associating “rationality” with decisions taken according to Expected Utility Theory (page 5). This is incorrect, and it does not seem to be necessary, also in light of the authors’ subsequent discussion. I do not see the need to include the discussion of prospect theory in the introduction, as this theory is not used later on.

11) It is really odd that a small portion of the sample (N=20) is made up of Clemson university students.

12) I find the paper too wordy in the introduction and discussion. There is no need to motivate the scope of the paper multiple times, or to expand the theoretical review much beyond what is actually used in the paper. The policy implication raised in the discussion, namely to send out “positive messages” about what people may do rather than negative messages about what people can’t do, does not seem to be supported by evidence produced in this paper.

13) There are several inaccuracies. Page 5: “Thus, the probabilities of COVID-19 infection rates are known”. Strictly speaking, this is not true, because the actual number of cases is arguably much higher than the number of reported cases, and unknown. The modalities of infection are still not fully known.

14) The authors state “It is unclear how perceived risk will influence actual COVID-19 preventative behavior” (Page 6) and “It is unclear how individual differences in temporal discounting relate to COVID-19 preventative behaviors” (Page 7). But on page 10 they say they hypothesize that higher perceived risk and higher temporal discounting are linked with lower preventative behavior (as should reasonably be expected).

15) Page 9: “Because there is currently no vaccine available”: This should obviously now be updated.

16) In the questionnaire text reported in the OSF website, the last question reads: “There’s a 75% chance that you will lose $100, but a $25 chance that you will not lose any money.” I hope the typo was amended when presented to participants.

17) Page 19: Truncated sentence: “To examine whether the optimism bias, a paired samples t-test was conducted”.

18) Page 11: The discussion of the quality of M-Turk samples is presented before saying that the sample was from M-Turk.

19) Page 2: the authors claim that the sample is “representative”, but with N=225 this can never be the case.

References

Abdellaoui, M., Bleichrodt, H., & l’Haridon, O. (2008). A tractable method to measure utility and loss aversion under prospect theory. Journal of Risk and uncertainty, 36(3), 245.

Beattie, J., & Loomes, G. (1997). The impact of incentives upon risky choice experiments. Journal of Risk and Uncertainty, 14(2), 155-168.

Brañas-Garza, P., Jorrat, D., Espín, A. M., & Sanchez, A. (2020). Paid and hypothetical time preferences are the same: Lab, field and online evidence. arXiv preprint arXiv:2010.09262.

Carlsson, F., & Martinsson, P. (2001). Do hypothetical and actual marginal willingness to pay differ in choice experiments?: Application to the valuation of the environment. Journal of Environmental Economics and Management, 41(2), 179-192.

Cheng, Vincent CC, Shuk-Ching Wong, Vivien WM Chuang, Simon YC So, Jonathan HK Chen, Siddharth Sridhar, Kelvin KW To et al. "The role of community-wide wearing of face mask for control of coronavirus disease 2019 (COVID-19) epidemic due to SARS-CoV-2." Journal of Infection (2020).

Donkers, B., Melenberg, B., & Van Soest, A. (2001). Estimating risk attitudes using lotteries: A large sample approach. Journal of Risk and uncertainty, 22(2), 165-195.

Harrison, G. W., & Rutström, E. E. (2008). Risk aversion in the laboratory. Research in experimental economics, 12(8), 41-196.

Wakker, P. P. (2008). Explaining the characteristics of the power (CRRA) utility family. Health economics, 17(12), 1329-1344.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: Review - PONE-D-20-32678.pdf

PLoS One. 2021 May 13;16(5):e0251073. doi: 10.1371/journal.pone.0251073.r002

Author response to Decision Letter 0


13 Jan 2021

Response to Reviewer Comments

Reviewer #1: The general idea of the paper is to analyze the correlation of risk preferences, temporal discounting, risk perception and measures of appropriate mask wearing, social distancing. In order to do the analysis, the authors run an online experiment (n=225). Participants were recruited using MTurk (N=220) and the undergraduate subject pool at Clemson University (N=20). The work is very well written and yields interesting results on the relations between different behavioral measures and the proper use of masks and social distancing. Despite this, it is a correlational study and some results should be considered with caution.

General comments

The paper analyzes the relationship between COVID-19 preventative behaviors and individual differences in four classic judgment and decision-making constructs. However, the correct use of masks and compliance with social distancing can also be seen as a collective action problem, where there are other hypotheses that can explain how and why people cooperate. Also, COVID-19 could have a direct impact on risk, delay discounting, and selfishness (see Brañas et al., 2020a; Adena and Harke, 2020). In the end, what the authors may be seeing is that people became more selfish, impatient, or risk averse in response to this situation, and became even more so as the days passed within the survey’s time window. The authors probably need to add a paragraph with this discussion and might control for day fixed effects in the regression analysis.

We certainly agree that there are likely other factors and hypotheses that could affect how and why people cooperate with COVID-19 prevention guidelines. It was not our intention to provide a comprehensive examination of how all plausible decision-making factors may influence COVID-19 prevention guidelines. We have made efforts to clarify this and temper our research objectives throughout the Introduction. For example, we have now replaced the prior reference to the study being a “systematic empirical research” endeavor with a statement that “empirical research examining decision-making factors that influence compliance with mask-wearing and social distancing guidelines is lacking. Therefore, in addition to demographic factors, this study seeks to examine whether certain decision-making constructs, such as general risk-taking propensity and temporal discounting, are predictive of compliance with appropriate mask-wearing and social distancing behaviors”.

Additionally, in the Limitations section of the Discussion, we state that “we did not include an exhaustive list of possible influences on compliance with COVID-19 prevention guidelines in this study. Other factors including pro-sociality (Brañas-Garza et al., 2020), personality (Nofal et al., 2020), line of work, mental health, and emotional states (Harper et al., 2020) may also play a role in COVID-19 preventative behavior”.

With regard to controlling for days, we note that Wave 1 (Sept 7 -11) and Wave 2 (Dec 29 – 30) were collected over a very narrow range of days. However, it is certainly possible that participants’ social distancing and mask-wearing behavior in September may differ from behavior in December. Consequently, we now include Wave (September vs. December) as a fixed effect in the regression analyses.

Also, the independent variables used do not directly reflect compliance with COVID-19 prevention guidelines. Preventive measures are public knowledge, so many people may answer in a socially desirable way and not necessarily reveal their true intentions. The authors probably need to discuss social desirability bias in relation to their hypotheses.

The dependent variables (appropriate mask-wearing, avoiding nonessential indoor spaces, and avoiding in-person gatherings) are direct prevention recommendations from the CDC.

The point that people may think that compliance with COVID-19 preventative measures is socially desirable is debatable; because the pandemic is highly politicized in the U.S., many people proudly refuse to wear masks or social distance. In general, social desirability biases are an artifact of the majority of psychological research studies involving self-report. To address this issue, we added the following sentence to the Limitations section of the Discussion: “Moreover, as with many studies that utilize self-report measures, social desirability biases may have influenced participants’ risky choices or disclosure of mask-wearing and social distancing behavior”.

Specific comments

a) They talked about the study’s limitations; they need to analyze how representative the sample is of the general US population. With 225 observations, it is probably not representative, and the external validity of the results is very restricted.

The updated sample population has fewer African American and Hispanic individuals than the US population, so we have now removed the reference to the sample being representative.

b) Despite the power calculations made, the number of observations is low for an MTurk sample. However, the design is very good and the results are very interesting, so the authors should think about redoing the experiment with a larger sample. It is not necessary to do it on MTurk. Jorrat (2020) suggests a procedure for running online experiments in a short time and achieving a high number of observations.

We have now recruited an additional 200 participants through MTurk. Additionally, we removed the 20 participants from the Clemson sample per Reviewer 2’s suggestion. The current sample size (N=404) is now nearly double the original sample size. Given the power analysis results, exceeding this sample size by more than double may artificially inflate the observed study effects, since it is easier to reach statistical significance with larger samples. Therefore, the present sample size should now be sufficient to appropriately address the study research questions while minimizing problems associated with both over- and under-powered studies.

c) Another interesting independent variable to analyze could be the difference between the perceived risk of the different activities with and without social distancing. This could be a measure of how effective people think social distancing is.

Although not originally in our hypotheses or design, we agree that examining the difference in the perceived risk with and without social distancing is a valuable contribution to the study. Not only could this measure provide insight on the perceived effectiveness of social distancing, but it also has implications for risk compensation behavior in the context of the pandemic. To incorporate this addition, we have made several modifications to the manuscript.

First, we have added this measure (Perceived Risk Difference) to the Method section, to each of the regression results, and to the path model analysis. The results show that greater perceived risk of activities when not socially distancing, compared to when socially distancing, was associated with greater mask-wearing behavior and social distancing.

Finally, we added a paragraph describing the implications of this result in the Discussion. In particular, we state, “This study further demonstrated that higher risk perception of public activities under non-social distancing compared to social distancing conditions was predictive of greater mask-wearing and social distancing behavior. This finding suggests that individuals who feel that social distancing is effective are more likely to engage in such behaviors. From another perspective, however, this result may have implications for risk compensation, which proposes that individuals adapt their behavior based on their level of perceived risk of that behavior, typically behaving in a more risk-taking way when perceived risk is low (Wilde, 1982). This theory has been observed in safety contexts in which having more protective measures in place, such as safety equipment (e.g., Hasanzadeh et al., 2020; Hedlund, 2000), increases people’s risk-taking behavior because these protective measures decrease one’s perceived level of risk. Applied to COVID-19 preventative behavior, it is possible that if people perceive the risk of social activities as lower when others are engaging in prophylactic behaviors (wearing masks or social distancing), then they may be more willing to engage in those social activities. Future research is needed to empirically examine risk compensation in the context of COVID-19 preventative behavior”.

d) The authors need to discuss why hypothetical time and risk experimental measures are a good proxy for incentivized ones. These papers study this experimental question:

Brañas-Garza, P., Jorrat, D., Espín, A. M., & Sanchez, A. (2020). Paid and hypothetical time preferences are the same: Lab, field and online evidence. arXiv preprint arXiv:2010.09262.

Brañas-Garza, P., Estepa Mohedano, L., Jorrat, D., Orozco, V., & Rascon-Ramirez, E. (2020). To pay or not to pay: Measuring risk preferences in lab and field.

Falk, A., Becker, A., Dohmen, T. J., Huffman, D., & Sunde, U. (2015). The preference survey module: A validated instrument for measuring risk, time, and social preferences. IZA Discussion Paper.

We agree that this distinction should have been clarified in the original manuscript. We now address research on real vs. hypothetical rewards in temporal discounting in the Temporal Discounting sub-section of the Methods by stating, “A significant body of research comparing the effects of real to hypothetical rewards has demonstrated that temporal discounting rates are highly similar under both conditions (e.g., Brañas-Garza et al., 2020c; Johnson & Bickel, 2002; Locey, Jones, & Rachlin, 2011; Madden et al., 2004), which suggests that hypothetical rewards are a valid proxy for incentivized rewards in temporal discounting experiments.”

Additionally, we discuss this distinction for risk-taking in the Risky Choice Task sub-section of the Methods by stating, “Most recent studies have shown that decisions on risky choice tasks are not significantly altered under hypothetical compared to real reward conditions (e.g., Brañas-Garza et al., 2020a; Carlsson & Martinsson, 2001; Hinvest & Anderson, 2010; Wiseman & Levin, 1996), though some exceptions have been observed (Slovic, 1969)”.

These additions serve to clarify why hypothetical temporal and risky choice experimental measures are valid proxies of incentivized ones.

e) A regression analysis with all the dependent variables is needed. The authors can make different specifications, adding each of the four variables separately, and other specifications with all the variables. The authors also need to put the regression tables in the supplementary materials.

The path analysis has all the dependent variables, and path analysis is an extension of regression. Although the path analysis has redundancies with a multivariate regression analysis, we added the results of the multivariate regression to the Supplementary Material (Table S9). The tables with the regression results are also now reported in the Supplementary Material.

References:

Adena, M., & Harke, J. (2020). COVID-19 and pro-sociality: The effect of pandemic severity and increased pandemic awareness on charitable giving. Mimeo.

Branas-Garza, P., Jorrat, D. A., Alfonso, A., Espin, A. M., García, T., & Kovarik, J. (2020). Exposure to the Covid-19 pandemic and generosity. https://doi.org/10.31234/osf.io/6ktuz

We have added the Branas-Garza et al., 2020 citation to the manuscript. Mimeo appears to be a private repository; we requested access to the Adena & Harke, 2020 preprint, but Mimeo did not provide it. However, we do have other citations (e.g., Dryhurst et al., 2020) that address the relationship between prosocial behavior and COVID-19.

Reviewer #2: The article aims to find evidence supporting the idea that individual behaviors to prevent the diffusion of COVID-19, or attitudes toward the riskiness of the pandemic, are linked to underlying individual preferences, in particular risk aversion, risk perception, and time discounting. The preventative individual behaviors considered are wearing face masks and avoiding large gatherings. The attitudes are perception of risk and optimism bias. The authors find evidence of the relevance of risk preferences for these dependent variables.

Major points:

1) Two categories of individual motivations are relevant in preventative behavior. One category refers to risk aversion, and this is considered by the authors. Another concerns pro-sociality. For instance, wearing masks is at the same time something that protects the individual from the infection, but also protects others from catching the infection (e.g., Cheng et al., 2020). The participants themselves are aware of this aspect, as apparent from responses to an item of the questionnaire. For this reason, I find the design of the study incomplete, because it does not include a measure of pro-sociality. If pro-sociality were positively correlated with risk aversion, then risk aversion would arguably pick up some of the effects of pro-sociality, thus inflating the effect size. Since there do not seem to be items measuring pro-sociality in the questionnaire, this appears to be an irreparable flaw of the design. The authors should at least discuss the extent to which their estimation of the effect of risk aversion is an upper bound of the real effect.

In the new second wave of data collection, we added two measures of pro-sociality: (1) the Prosocial Behavioral Intentions Scale (Baumsteiger & Siegel, 2019) and (2) a version of the Dictator Game. Neither the Prosocial Behavioral Intentions score (r = .047, p = .516) nor the Dictator Game measure (r = .058, p = .421) was correlated with overall risk-taking in the sample. These null correlations should address the reviewer’s concern that multicollinearity between risk-taking/risk aversion and pro-sociality could have inflated the effect size for risk-taking/risk aversion.

We note that the correlations between the Prosocial Behavioral Intentions Questionnaire and all outcome variables were nonsignificant, but there was an association between Dictator Game pro-social behavior and interpersonal social interactions (r = .160, p = .025) and non-essential social activities (r = .196, p = .006). In other words, those who were more prosocial in the Dictator Game (i.e., gave more to the receiver than they kept for themselves) reported more social interactions and engaged in non-essential activities more (i.e., less social distancing). We considered adding these results regarding pro-sociality to the manuscript, but we feel that doing so would muddy the waters and decrease the clarity of the manuscript and findings.

Correlations between the Prosocial Behavioral Intentions Questionnaire, the Dictator Game, and the study outcome measures:

PBIS: Appropriate Mask Wearing = 0.053; Interpersonal Social Interactions = 0.046; Social Activities = -0.005; Perceived Risk = -0.036; Optimism Bias = -0.133

Dictator Game: Appropriate Mask Wearing = -0.087; Interpersonal Social Interactions = 0.160*; Social Activities = 0.196*; Perceived Risk = 0.042; Optimism Bias = -0.094

Note. PBIS = Prosocial Behavioral Intentions Scale. The Dictator Game score is defined as the amount participants opted to give to the receiver minus the amount kept for oneself. Higher scores reflect greater pro-sociality.

* indicates p < .05
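For readers who wish to see how such bivariate correlations are typically computed, a minimal R sketch is shown below; the data frame and column names are hypothetical placeholders rather than the study's actual variables.

# Hypothetical data frame 'dat' with one row per participant:
# pbis      = Prosocial Behavioral Intentions Scale score
# dictator  = Dictator Game score (amount given minus amount kept)
# risk_prop = overall proportion of risky choices
cor.test(dat$pbis, dat$risk_prop)      # PBIS x risk-taking correlation
cor.test(dat$dictator, dat$risk_prop)  # Dictator Game x risk-taking correlation

# Correlations between the Dictator Game and each outcome variable
outcomes <- c("mask_wearing", "social_interactions", "social_activities",
              "perceived_risk", "optimism_bias")
sapply(outcomes, function(v) cor(dat[[v]], dat$dictator, use = "complete.obs"))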

We would also like to note that our objective in this study was not to comprehensively consider all possible decision-making constructs that may relate to COVID-19 preventative behavior. There are likely other constructs beyond pro-sociality that may contribute to COVID-19 preventative behavior as well, but these are outside the scope of this investigation. The original manuscript may have implied that the study was trying to examine all decision-making or motivational factors that may affect COVID-19 preventative behavior. To address this issue, we made several edits throughout the Introduction to ensure that the objectives of the study are not overstated.

Furthermore, in the Limitations section of the Discussion in which we state that “we did not include an exhaustive list of possible influences on compliance with COVID-19 prevention guidelines in this study,” we have now added pro-sociality to the list of other factors that may play a role in COVID-19 preventative behavior. We further elaborate that “these other unexamined factors may covary with risk-taking or temporal discounting, which has the potential to inflate the observed estimation of effect sizes observed in this study. We therefore caution that the study effect sizes may represent an upper bound of the overall effect of risky decision-making and temporal discounting on COVID-19 preventative behavior”.

Reference:

Baumsteiger, R., & Siegel, J. T. (2019). Measuring prosociality: The development of a prosocial behavioral intentions scale. Journal of Personality Assessment, 101(3), 305-314.

Dictator Game Instructions:

Imagine that a store is having a grand opening and is giving different amounts of money between $5 and $200 cash to the first 100 shoppers. You are shopper #99, and you get $100 cash.

Unfortunately, the store employee miscounted, and they do not have any money to give to shopper #100. The employee asks you if you would like to give some of your $100 to shopper #100. You do not know shopper #100, and it is very unlikely you will ever see them again in the future. How much money (if any) would you leave for shopper #100?

The outcome measure was defined as the amount participants opted to give to shopper #100 minus the amount they opted to keep for themselves. Higher scores reflect greater pro-sociality.
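As a hypothetical worked example of this scoring (an illustration, not code from the study), a participant who gives $40 and keeps $60 would receive a score of 40 - 60 = -20, an even split scores 0, and giving everything scores +100:

give <- 40                     # hypothetical amount given to shopper #100
keep <- 100 - give             # amount kept for oneself
dictator_score <- give - keep  # here -20; higher values reflect greater pro-sociality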

2) I am puzzled by the measurement of risk preferences. The authors divide lotteries into three types – risk advantageous, risk disadvantageous, and equal risk – but only find significant effects for the latter two. It is not theoretically clear why this should be the case, and why we should consider these three types of lotteries separately from each other. Individuals who would prefer lotteries to the safe option, when lotteries are disadvantageous (or “equal”, in the authors’ wording), are normally referred to as “risk lovers” or “risk-neutral” individuals. To my knowledge, risk lovers and risk neutrals are a minority of the population, while most people are “risk averse”. Since the authors only find significant effects for “disadvantageous” or “equal” lotteries, I wonder whether this effect is driven only by a relative minority of the sample. This would not be an uninteresting result per se, but there may be issues of generalizability. I would have liked to see simple descriptive statistics on this variable, but they were not reported.

To clarify the measurement of risk preferences, we have added further information to the Methods section (p. 15 – 16). Importantly, we now state that the risky choice task is similar to the cups task and that this analytical approach is in line with previous research with this task and other related tasks.

We understand that the theoretical information motivating this approach to risky choices was lacking from the original manuscript. We have added a paragraph to the Introduction (p. 4 bottom – p. 5) to provide background information on why it is important to distinguish between risk-taking and expected values. For example, we state that sensitivity to expected utility can provide objective information about adaptive decision-making performance (Weller, Levin, & Bechara, 2010). Choosing risky options with higher expected utility is economically advantageous, while choosing risky options that have a lower expected utility than a safe option is maladaptive and can often lead to sub-optimal decision-making (Weller, Levin, & Bechara, 2010; von Neumann & Morgenstern, 1944).

In the Discussion, we have added a paragraph (p. 28, bottom – p. 29) to explain why examining both risk level and expected values may have important implications. For example, we state that “This finding suggests that individuals who are less sensitive to changes in expected value and exhibit less adaptive risky decision-making behavior are less likely to engage in mask-wearing and social distancing during the pandemic. Individuals who chose the risky option more frequently in equal expected value contexts (when the expected values for the risky and safe options matched) behaved similarly to those who made more disadvantageous risky choices. However, the magnitude of the relationship between risk-taking and noncompliant behavior was greater in the disadvantageous contexts than the equal contexts”. We have also made updates to the Abstract.

Regarding the use of the terms ‘risk-neutral’ and ‘risk-lover’, we understand that these terms are commonly used in the field of economics; however, in the field of psychology these terms are used less frequently. Psychological decision-making research instead favors focusing on behavior (e.g., greater/less risk-taking; greater/less risk-aversion behavior) and avoiding the use of person-labels. The reason is simply that risk-taking behavior can be altered by many factors (emotions, stress, valuation differences, etc.), and a ‘risk-neutral’ person may be more risk-averse or more risk-loving in different states or situations. We agree that those who make a high proportion of disadvantageous risky choices are a minority of the population. We have now added a table (Table 2) to display the descriptive statistics for all the independent (including the proportion of risky choices) and dependent variables. Finally, in the Limitations section of the Discussion, we describe the limitations in generalizability by stating: “Given that the risk-taking findings were specific to equal and disadvantageous expected value contexts, it should be noted that some participants chose the safe option on all trials. Therefore, the relationship between risk-taking and compliance behavior may be driven by a subsample of the sample, which may limit the generalizability of the findings”.

These revisions serve to explain the importance of examining both sensitivity to expected values and risk-taking as well as to better explain the risky choice task in the context of the broader psychology literature.

3) Another related point concerns the construction of the risk aversion variable. Dividing lotteries into these three levels and measuring the percentage of risky choices within each level seems a rather coarse approach. First, within each level, different lotteries will have different expected payoff values and thus different degrees of “advantageousness” and “disadvantageousness”. The level of risk is fixed in the equal risky choices, but here (presumably) the size of the pie was manipulated. Hence, considerable information seems to have been ignored when constructing the indexes. The approach that I would advise is instead different. Drawing on contributions in experimental economics, a single parameter for individual risk aversion may be estimated on the basis of choices across the three levels of risk. One approach models utility as “Constant Relative Risk Aversion”, and the curvature of the utility function (which is given by one parameter) is a synthetic indicator of an individual’s risk aversion (see Harrison & Rutström, 2008; Wakker, 2008). More sophisticated calibrations are also possible, including in particular the estimation of a loss aversion parameter (Abdellaoui et al., 2008). Incidentally, it is not clear from the text whether lotteries including losses were administered, but this seems to be the case from the examples reported in the questionnaire registered at OSF. Other approaches would be possible, but the current approach is unsatisfactory as it stands, in my view.

We realize that the task information was not clear in the original manuscript, and we thank the reviewer for noting this. We have now clarified in the Methods that the Risky Choice Task utilized in our study was similar to the Cups Task, which divides risky gambles into advantageous, disadvantageous, and equal expected value scenarios, and the study’s analytic approach mirrors the approach used with the Cups Task. We have now clarified this analytical approach in the manuscript by stating: “Following previous research using the Cups Task and similar risky choice paradigms (Brevers et al., 2012; Byrne & Ghaiumy Anaraky, 2020; Galván & McGlennen, 2012; Jasper et al., 2013; Levin et al., 2007; Weller et al., 2007; 2009; 2010; 2011; Yao et al., 2015), the proportion of risky gambles for each gamble type (risky advantageous, risky equal, and risky disadvantageous) was computed”.
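A minimal sketch of this computation in R, assuming trial-level data in long format with hypothetical column names (one row per participant and trial), might look as follows:

# trials: hypothetical data frame with columns
#   id          = participant identifier
#   ev_type     = "advantageous", "equal", or "disadvantageous"
#   chose_risky = 1 if the risky option was chosen, 0 if the safe option was chosen
risk_props <- aggregate(chose_risky ~ id + ev_type, data = trials, FUN = mean)

# Reshape to one row per participant, with one proportion per gamble type
risk_wide <- reshape(risk_props, idvar = "id", timevar = "ev_type", direction = "wide")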

Additionally, we state that “the risky choice task involved 36 non-incentivized, hypothetical gain-framed decisions” in the manuscript to emphasize that no lotteries involving losses were administered. The full list of all 36 risky choices is now shown in Table S2 in the supplementary material.

We emphasize that the purpose of this task and study is not to construct a risk aversion variable but rather to characterize the relationship between risky choices and mask-wearing/social distancing. We understand that the strength of constructing a risk aversion variable is to have a streamlined, valuable metric, and we have used computational modeling to create such parameters to characterize reinforcement learning behavior in other studies in our lab. However, these modeling parameters are typically supplemental to choices and/or decision-making performance, which is the focus of this study. Showing the differences in risky choice in disadvantageous vs. advantageous scenarios provides important information—it demonstrates the context in which risky decision-making is associated with mask-wearing and social distancing. By replacing the current risky choice variables with a single risk aversion metric, we would lose this information.

4) The authors use linear regressions, but given the discrete nature of the dependent variable, an ordered logit model, or interval regression, would have been more appropriate. I am also not clear why authors use repeated measures for the risky hypothetical choices, done over the three levels (advantageous, equal, disadvantageous) – which, incidentally, was not part of the pre-analysis plan. This seems to arbitrarily inflate the power of the risky decision variable, and does not appear to be grounded in theory. Tables with the regression results should be reported, either in the main text or the Appendix.

All of the dependent variables are continuous rather than discrete, which is why linear regressions were conducted. The addition of Table 2, which shows the range, mean, and standard deviation of the dependent variables, should clarify this for readers.

While the OSF does not list repeated measures regression under the Analysis Plan, we do state in the pre-registration that Risk-taking score A (advantageous gambles), Risk-taking score B (disadvantageous gambles), and Risk-taking score C (ambiguous gambles) will be used as predictors in the Sampling Plan and Measured Variables sections.

The repeated measures approach was used because risky-choice type (advantageous vs. disadvantageous vs. equal) is essentially a within-subjects moderator of risky choice. This approach is often used with the Cups Task and other risky decision-making tasks. To explain this approach, we state that “consistent with approaches used in similar risky choice tasks (Byrne & Ghaiumy Anaraky, 2020; Byrne et al., 2020; Levin et al., 2007; Madan et al., 2015; Weller et al., 2011), the within-subjects variable was defined as Risky Choice Level depending on whether the expected value for the risky option was higher, lower, or the same as the expected value of the safe option. The levels were operationalized as advantageous risky choice, disadvantageous risky choice, and equal risky choice” (p. 21).

The tables with the regression results are now reported in the Supplementary Material.

5) At page 16 the authors state: “equal gambles (N=12) in which the expected value for the risky and sure options were identical or nearly identical […]”. It appears arbitrary to classify lotteries whose expected value is the same or “nearly the same” as the certain option as belonging to the same category. In this sense, having 12 different lotteries that are “equal” seems rather excessive. I understand this may be customary in the strand of literature the authors are following. If so, this aspect should be clarified.

We have added the following information to the Method section to clarify the risky choice scenarios: “While the Cups Task involves 54 gain and loss trials of varying expected value levels (disadvantageous, advantageous, and equal), the present task used a modified gains-only task because the study predictions were localized to risk behavior and not loss aversion”.

6) Page 16-17: “The Appendix shows the full list of questions”. I could only find three questions in the OSF website.

The Appendix on the OSF website originally included some examples of the type of questions that risky choice tasks often involve, rather than the comprehensive list. We have added the full list of questions to the Supplementary Material (Table S2), and we have updated the OSF website with the full list of Risky Choice Task questions.

7) An obvious concern is that all the variables are self-reported. In particular, it was not clear to me that the choice of risky lotteries had not been monetarily incentivized. Only in the pre-registration of hypotheses could I eventually find this information. At the very least, the authors should discuss the implications of hypothetical vs. monetarily incentivized questions (e.g., Beattie & Loomes, 1997; Brañas-Garza et al., 2020; Carlsson & Martinsson, 2001; Donkers et al., 2001).

We appreciate the reviewer noting this and providing helpful citations. We now state in the description of the Risky Choice Task that the task “involved 36 non-incentivized, hypothetical gain-framed decisions”. Additionally, we have added information on prior research that has compared hypothetical vs. real reward incentives to the Temporal Discounting and Risky Choice Task sub-sections of the Methods. These additions serve to clarify why hypothetical temporal and risky choice experimental measures are valid proxies of incentivized ones. Furthermore, we now acknowledge the Limitations of self-report variables in the Discussion by stating, “as with many studies that utilize self-report measures, social desirability biases may have influenced participants’ risky choices or disclosure of mask-wearing and social distancing behavior.”

8) I appreciate that the authors followed the good practice of pre-registering their hypotheses. It is clear that the analysis reported in the paper generally followed the pre-analysis plan. Nevertheless, there appear to be some deviations, and these should be flagged in the paper. In particular, (a) the second hypothesis (Greater stress-related uncertainty due to COVID-19 will be associated with decreased risk-taking) has not been analyzed in the paper. (b) Dependent variable (3), “Willingness to return to work (continuous scale)”, has not been analyzed, while “Perceived Risk” has been. I am very much against the pre-registration acting as a straitjacket on what authors should report in the paper. But transparency requires informing the reader as to why some modifications of the pre-analysis plan were undertaken, what is post-hoc rationalization of results, and what is post-diction rather than pre-diction.

We acknowledge that we did not have a direct 1:1 correspondence between the OSF pre-registration, which occurred pre-IRB approval, and our final study design, and we neglected to document those differences in the manuscript. We have added the results showing the nonsignificant association between stress-related uncertainty due to COVID-19, risky choice, mask-wearing, and social distancing to the manuscript. Thus, the data do not support this hypothesis. The Perceived Risk variable is part of the original pre-registration (under Sampling Plan and Measured Variables).

We collected the data for the willingness-to-return-to-work variable but realized post hoc that this variable was very different from the others (in terms of topic and analysis) and simply did not fit cohesively with the study focus. We have not analyzed any data for this variable, but the data will be made available on the OSF so that interested readers can pursue it if they wish.

Finally, and most importantly, we have added documentation of changes from the OSF pre-registration in the Supplementary Materials. We refer to this in the Method section (p. 12) by stating that “some deviations from the OSF pre-registration to the final study were made, such as the addition of the social distancing variables. Documentation of these differences is provided in the Supplementary Material for full transparency”.

9) The authors may benefit from including in their analysis the report issued by “The Covid States Project” (https://covidstates.org/), led by Northeastern University. In particular, the report on COVID19-associated behavior signals that wearing face masks is the only behavior that, among those considered, has been on the rise, while others, like social distancing, have been declining since measuring began (at the end of April). See: https://covidstates.org/ https://kateto.net/covid19/COVID19%20CONSORTIUM%20REPORT%2026%20BEHAVIOR%20NOV%202020.

We appreciate the reviewer’s suggestion to incorporate the findings of this report into the manuscript. To do so, we first added a sentence to the Method section stating that the social distancing metric used in this study “is similar to the COVID States Project’s Relative Social Distancing Index (Lazer et al., 2020)” (p. 14).

Since we increased the sample size and collected the second wave of data 2.5 months after the initial data were collected, we were able to examine whether mask-wearing and social distancing differed between the two time points (early September and end of December). We have added these results to the Descriptives section, and we include time point (September vs. December) as a covariate in the analyses. The results showed that participants in the December data collection wave reported greater mask-wearing and social distancing than those in the September data wave. We discuss these findings in the Discussion section (p. 32) by stating, “Outside of political affiliation, the results indicated that those that participated in the study in December reported more mask-wearing and social distancing than those that participated in September. This result is consistent with research from the COVID States Project (Lazer et al., 2020)”.

Minor points:

10) The authors give the impression of associating “rationality” with decisions taken according to Expected Utility Theory (page 5). This is incorrect, and it does not seem necessary, also in light of the authors’ subsequent discussion. I do not see the need to incorporate the discussion of prospect theory in the introduction, as this theory is not used later on.

We have removed all references to Expected Utility Theory, Prospect Theory, and rationality/irrationality (in reference to decision-making under risk) from the Introduction.

11) It is really odd that a small portion of the sample (N=20) is made up of Clemson University students.

We have removed the Clemson University student sample from the analyses and recruited an additional 200 participants upon Reviewer 1’s suggestion. The final sample is now comprised of 404 MTurk-only participants.

12) I find the paper too wordy in the introduction and discussion. There is no need to motivate the scope of the paper multiple times, or to expand theoretical review much beyond what is actually used in the paper. The policy implication discussed in the discussion to send out “positive messages” of what people may do, rather than negative messages of what people can’t do, does not seem to be supported by evidence produced in this paper.

In the Introduction, we have removed duplicate sentences regarding the scope and purpose of the study, substantially shortened the section on Decision-Making under Risk by removing the comparisons of expected utility and prospect theories, and shortened the Current Study and Hypotheses sub-section.

In the Discussion, we have removed the implications for messaging and shortened the overall implications.

13) There are several inaccuracies. Page 5: “Thus, the probabilities of COVID-19 infection rates are known”. Strictly speaking, this is not true, because the actual number of cases is arguably much higher than the number of reported cases, and unknown. The modalities of infection are still not fully known.

We have removed this sentence from the manuscript.

14) The authors state “It is unclear how perceived risk will influence actual COVID-19 preventative behavior” (page 6) and “It is unclear how individual differences in temporal discounting relate to COVID-19 preventative behaviors” (page 7). But on page 10 they say they hypothesize that higher perceived risk and higher temporal discounting are linked with lower preventative behavior (as should reasonably be expected).

In these sentences, we were conveying that the effect of perceived risk and temporal discounting on COVID-19 preventative behavior is unclear because they had not been empirically tested before this study. We predict that lower perceived risk and higher temporal discounting may be linked with lower preventative behavior, but this effect was not known before we obtained the results.

However, to clarify this, we have changed “It is unclear how perceived risk will influence actual COVID-19 preventative behavior” to “It is expected that individuals with first-hand COVID-19 experience will have higher risk perception and that higher risk perception will enhance COVID-19 preventative behavior.” We have also removed the sentence stating “It is unclear how individual differences in temporal discounting relate to COVID-19 preventative behaviors” to make the paper less wordy as we state the hypotheses in the Current Study and Hypotheses sub-section.

15) Page 9: “Because there is currently no vaccine available”: This should obviously now be updated.

We have replaced this phrase with the following statement: “At the time this research was conducted, COVID-19 vaccines were not available to the general population…”.

16) In the questionnaire text reported in the OSF website, the last question reads: “There’s a 75% chance that you will lose $100, but a $25 chance that you will not lose any money.” I hope the typo was amended when presented to participants.

This question was an example of the type of questions included in typical Risky Choice Tasks. We only included gain-framed, rather than loss-framed questions in the study procedure. We have updated this information on the OSF website, and the Supplementary Material now includes the full list of Risky Choice Task questions used in the study.

17) Page 19: Truncated sentence: “To examine whether the optimism bias, a paired samples t-test was conducted”.

We have corrected this sentence by removing the word ‘whether’.

18) Page 11: The discussion of the quality of M-Turk samples is presented before saying that the sample was from M-Turk.

We appreciate the reviewer noting this, and we have now corrected this in the manuscript.

19) Page 2: the authors claim that the sample is “representative”, but with N=225 this can never be the case.

We have removed the reference to the sample being representative.

References

Abdellaoui, M., Bleichrodt, H., & l’Haridon, O. (2008). A tractable method to measure utility and loss aversion under prospect theory. Journal of Risk and Uncertainty, 36(3), 245.

Beattie, J., & Loomes, G. (1997). The impact of incentives upon risky choice experiments. Journal of Risk and Uncertainty, 14(2), 155-168.

Brañas-Garza, P., Jorrat, D., Espín, A. M., & Sanchez, A. (2020). Paid and hypothetical time preferences are the same: Lab, field and online evidence. arXiv preprint arXiv:2010.09262.

Carlsson, F., & Martinsson, P. (2001). Do hypothetical and actual marginal willingness to pay differ in choice experiments?: Application to the valuation of the environment. Journal of Environmental Economics and Management, 41(2), 179-192.

Cheng, V. C. C., Wong, S.-C., Chuang, V. W. M., So, S. Y. C., Chen, J. H. K., Sridhar, S., To, K. K. W., et al. (2020). The role of community-wide wearing of face mask for control of coronavirus disease 2019 (COVID-19) epidemic due to SARS-CoV-2. Journal of Infection.

Donkers, B., Melenberg, B., & Van Soest, A. (2001). Estimating risk attitudes using lotteries: A large sample approach. Journal of Risk and Uncertainty, 22(2), 165-195.

Harrison, G. W., & Rutström, E. E. (2008). Risk aversion in the laboratory. Research in Experimental Economics, 12(8), 41-196.

Wakker, P. P. (2008). Explaining the characteristics of the power (CRRA) utility family. Health Economics, 17(12), 1329-1344.

Attachment

Submitted filename: Revision.docx

Decision Letter 1

Pablo Brañas-Garza

23 Feb 2021

PONE-D-20-32678R1

Risk-Taking Unmasked: Using Risky Choice and Temporal Discounting to Explain COVID-19 Preventative Behaviors

PLOS ONE

Dear Dr. Byrne,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

As you will see, referee #2 is still asking for a number of serious modifications to the statistical analysis. Please address them carefully, since I will send the manuscript back to him.

Please submit your revised manuscript by Apr 09 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Pablo Brañas-Garza, PhD Economics

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thanks to the authors for addressing all the comments. The paper improved substantially. Congratulations.

Reviewer #2: See attached report.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Diego Andrés Jorrat

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: Report 2.pdf

PLoS One. 2021 May 13;16(5):e0251073. doi: 10.1371/journal.pone.0251073.r004

Author response to Decision Letter 1


29 Mar 2021

The present version of the paper is improved and addresses most of the issues that the other reviewer and I had raised. I also accept that there may be differences in methods and approaches between psychology and economics, which should be toned down in an interdisciplinary journal like PLOS ONE. Nonetheless, the authors do choose economic theory to justify their methods, and should then make sure that their approach is correct. Most importantly, I am still unclear on the nature of the econometric model the authors use. I would therefore recommend that the authors address the following issues:

1) The authors inserted the following statement to explain their approach concerning the measurement of risk aversion (pp. 4-5):

Choosing risky options with higher expected utility is economically advantageous, while choosing risky options that have a lower expected utility than a safe option is maladaptive and can often lead to sub-optimal decision-making (Weller, Levin, & Bechara, 2010; von Neumann & Morgenstern, 1944).

I find this statement confusing and factually wrong. I suspect that the authors confuse expected utility with the expected value of a gamble (i.e., a risky choice). Otherwise, taking this sentence literally would entail that individuals would choose “risky options that have a lower expected utility than a safe option”. Since utility is unobservable, the principle of revealed preferences by von Neumann and Morgenstern (cited), after the rationalization by Savage (1954), states that, precisely for this reason, we can deduce the expected utility function from the actual choices that people make. Observing individuals choosing lotteries having lower expected utility than a safe option is then, by construction, impossible within the theory. This is why I guess that the authors meant “expected value” rather than expected utility, as also suggested by the language they use in other parts of the paper (e.g., figures’ captions). Even so, their statement is factually incorrect. While saying that lotteries with higher expected value are economically advantageous is a platitude, saying that “choosing risky options having a lower expected *value* than a safe option is maladaptive and can often lead to sub-optimal decision-making” goes against cross-country experimental evidence finding that economically richer countries are characterized, on average, by lower, rather than higher, risk tolerance (Falk et al., 2018; Bouchouicha & Vieider, 2019). Admittedly, this finding only holds after controlling for other psychological traits (see Table 9 in Falk et al., 2018), while an opposite relation between risk tolerance and income holds within country (l'Haridon & Vieider, 2019). Nevertheless, this evidence is enough to make the authors’ statement basically unfounded. Moreover, it is perfectly acceptable, I would dare say from a psychological point of view, that individuals prefer a safe option to a gamble, be it advantageous or disadvantageous, because it is perfectly acceptable that for most individuals well-being (or utility) is decreased by risk. This should not be seen as either maladaptive or sub-optimal, as it is in fact a rather widespread characteristic of individual preferences.

In sum, I find the authors’ justification of the multi-level approach used in their econometric analysis to be flawed theoretically and empirically. I would suggest that the authors drop any reference to expected utility theory. I would instead ask the authors to provide a different theoretical justification of why we should consider choices in the disadvantageous, equal, or advantageous domain as distinct from one another, possibly relying on other studies doing so.

To correct this issue in the Introduction, we have reframed this paragraph by removing all references to expected utility. We conceptualize adaptive or optimal decision-making in terms of reward maximization, which depends on sensitivity to expected values (not utility, in accordance with what the comment above describes), rather than utility or individual preference. This conceptualization is in line with the 7,000+ studies that have relied on tasks like the Cups Task and Iowa Gambling Task (Bechara et al., 1994) and the Expectancy Valence model of behavior on these tasks (Yechiam et al., 2005; Worthy, Pang, & Byrne, 2013), which consistently show that greater sensitivity to expected values is associated with greater reward maximization. Indeed, the Cups Task was developed in collaboration with Bechara, who developed the IGT.

We are a bit unclear about the comparison between the cited references on the relationship between income and risk tolerance and the assumption that choosing higher expected values in classic risky choice decision-making paradigms leads to adaptive decision-making (i.e., decision-making that maximizes reward). We have statistically controlled for income by including income level as a covariate in the regressions. Moreover, behavior that leads to reward maximization is quite different from the demographic variable of income, which is largely a product of one’s life circumstances, privilege (i.e., white privilege), and inequality in access to resources.

We have updated the paragraph to instead rely on the same framework as the one used in the seminal Cups Task paper: sensitivity to expected values. We replace the sentence described above with the following (p. 5): “Choosing options with higher expected values reflects increased sensitivity to differences in expected value between choice options [7, 8]. As evidenced by performance on decision-making paradigms such as the Iowa Gambling Task [9, 10] and the Cups Task [7, 8], this increased sensitivity can lead to reward maximization”. In other words, choosing the risky choice in advantageous expected value contexts but not disadvantageous expected value contexts reflects increased reward sensitivity and is likely to lead to reward maximization. Using this task allows us to assess both risk-taking behavior (overall performance on the Cups Task) and sensitivity to expected values (varying behavior in different expected value contexts). This explanation was already expressed in the Discussion (p. 30, top), but we failed to make this information clear in the Introduction originally. However, given the comments below, we have toned down the emphasis on risky choice in these varying contexts.

2) Even if the reader can now have a clear view of how the measurement of risk-taking behaviour was carried out, thanks to Table S2 in the Appendix, I still find their econometric model unclear. Have the authors taken the three following measures for each individual: [proportion of risky choices taken in the disadvantageous domain; proportion of risky choices in the “equal” domain; proportion of risky choices in the advantageous domain]? This is what appears from the descriptive statistics. If this is the case, it should be stated clearly. But then, what is the purpose of including what appear to be interaction terms between the “level” of the variables, as defined above, and the proportion of risky choices? This is what appears in Table S3 in the Appendix, while the caption seems to indicate the lack of inclusion of interaction terms:

Equal EV Gamble X Equal Risky Choices indicates proportion of equal EV risky choices and Disadv. EV Gamble X Disadv. Risky Choices refers to proportion of disadvantageous EV risky choices.

But then, why do the authors include these five variables in the model?

Risky Choice EV Type

Equal EV Gamble

Disadvantageous EV Gamble

Equal EV Gamble X Equal Risky Choices

Disadv. EV Gamble X Disadv. Risky Choices

By reading that the authors used a repeated measures model, my understanding was that the authors organized their data in panel format, with the individual as the cross-sectional unit and the three decisions at the three levels as the “longitudinal” variable. But having these interaction terms in the model (assuming they are interaction terms) makes me think that the authors used instead a pooled model? And in any case, what is the point of having an interaction term between the different levels? Are we at all interested in knowing that participants made riskier decisions in the advantageous domain? This seems obvious, and I have not seen this result being reported. In conclusion, I think it is necessary that the authors provide more details of their econometric approach, written down with equations to avoid any misunderstanding. I would also appreciate it if the authors could make available the data and code they used in their analysis. This is in any case required for publication in PLOS ONE. The authors should also report in Table S3 the total number of observations, and the observations by cluster (three, if my interpretation is correct). Since observations are clustered at the individual level, the best practice is to use heteroscedasticity-robust standard errors clustered at this level.

We acknowledge that the presentation of the regressions was unclear, and we have now added a “Data Analysis” subsection to the paper to clarify the statistical approach for the regressions. Originally, the proportion of risky choices was included as a main effect, Expected Value Level (dummy coded) was also a main effect, and the Proportion of Risky Choices X Expected Value Level interaction term was included, which allowed for assessing whether risky choices in the advantageous, disadvantageous, or equal gamble domains were predictive of each dependent variable. There were five terms included in the tables because MPlus and R (the statistical analysis tools we use) have options to combine the main effects and the simple contrasts that identify the locus of the interaction into one table, which can be more informative. The rationale for including the different levels was to show the contexts in which participants made risky choices: essentially, does all risky choice, or only risky choices that lead to lower expected values (i.e., disadvantageous choices), affect mask-wearing and social distancing? To answer this question, it is important methodologically to include advantageous risky choices as a comparison group.

However, based on these comments, we can see where this approach may ‘muddy the waters’ and make the analyses unnecessarily complex without adding substantial knowledge depth. As such, we have now removed the analysis by EV level from the regression models. Now, average proportion of risky choices (averaged across advantageous, disadvantageous, and equal contexts) alone is included in the model, and multiple regressions are used. We have also added the regression equation. This information is now explained in the new Data Analysis sub-section (p. 18).

Furthermore, in the description of the risky choice task, we now state “the average proportion of risky gambles across all gambling types was computed for regression analyses and used as the primary analysis variable for this task. Additionally, following previous research using the Cups Task and similar risky choice paradigms [8, 63—70], the proportion of risky choices for each gamble type (risky advantageous, risky equal, and risky disadvantageous) was computed and used in follow-up analyses; this provides further information about whether sensitivity to expected values, as reflected by differential risk-taking in advantageous compared to disadvantageous decision contexts, influences the outcome variables” (p. 16, bottom & p. 17, top).
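To make the revised specification concrete, a rough R sketch is given below; the outcome and predictor names are hypothetical placeholders rather than the study's actual column names, and the true covariate set follows the equation reported in the Data Analysis sub-section.

# Multiple regression predicting one outcome (e.g., appropriate mask-wearing)
# from the average proportion of risky choices plus covariates (all names hypothetical)
fit <- lm(mask_wearing ~ risky_prop_avg + discount_rate + perceived_risk_diff +
            wave + age + income, data = dat)
summary(fit)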

The data are available; we included the link in the PLoS One submission questions but did not realize this information was not sent to reviewers; we apologize for the misunderstanding. The link to the data is here:

https://osf.io/xy6aj/?view_only=1360495284da453baac8df96c7a732d1

Upon manuscript acceptance, we will remove the ‘view-only’ link and the link will be https://osf.io/xy6aj/. This link has now been added to the manuscript. We are hesitant to make the data available outside of a view-only link before manuscript acceptance because we have had our data stolen by an outside research group in the past.

We have now added the number of observations/participants and heteroscedasticity-robust (Huber-White) standard errors to Tables S3–S7, in addition to the R²-value and p-value for the omnibus test.
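For reference, Huber-White standard errors of this kind can be obtained in R with the sandwich and lmtest packages; the snippet below is a generic illustration (fit stands for any model estimated with lm()), not our exact code.

    # Heteroscedasticity-robust (Huber-White) standard errors for an lm() fit.
    library(sandwich)
    library(lmtest)
    coeftest(fit, vcov = vcovHC(fit, type = "HC1"))   # robust SEs and t-tests
    # For clustered data (e.g., repeated observations per participant), one could instead use:
    # coeftest(fit, vcov = vcovCL(fit, cluster = ~ participant_id))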

In terms of addressing the specific statistical concerns, we note that our analytical expertise is rooted in training from experimental psychology and computer science. Therefore, we are unfamiliar with some of the terminology referred to (e.g., econometric model, cross-sectional units, pooled models, etc.), as this lies outside of our field. We mention this simply to acknowledge that we may use different terms to refer to similar concepts/approaches that exist in other fields, such as economics.

3) As could have been expected, only a minority of people chose risky options in the disadvantageous and equal domains, while more chose the risky option in the advantageous domain. The authors discuss this aspect in the discussion section:

“Given that the risk-taking findings were specific to equal and disadvantageous expected value contexts, it should be noted that some participants chose the safe option on all trials. Therefore, the relationship between risk-taking and compliance behavior may be driven by a subsample of the sample, which may limit the generalizability of the findings”.

This statement sounds unnecessarily reticent. Rather than saying that the results are driven by “some participants”, the authors should quantify how many participants chose the safe option in all cases. I would also expect a more extensive discussion of the fact that the result is driven by only a minority of participants. This is not at all uncommon in statistical analysis. Moreover, for social phenomena like mask-wearing, the behavior of a minority of individuals, perhaps as low as 10% of the population, may be enough to tip the social equilibrium from one of general compliance to one of general non-compliance. Therefore, even if the result were driven by a minority of participants, I do not think this would detract from the general interest of the paper.

We have now added the percentage of participants who chose the safe option in all cases to the Results section (p. 20, bottom). In particular, we state: “In terms of risky decision-making, 17.08% of participants chose the safe option on all trials. By gambling context, 19.55% of participants chose the safe option in all advantageous expected value contexts, 46.78% chose the safe option in all equal expected value contexts, and 57.43% chose the safe option in all disadvantageous contexts.”
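For transparency, a percentage like the 17.08% figure can be computed from trial-level data along the following lines; the data frame and column names here (choices, participant_id, risky, gamble_type) are hypothetical stand-ins rather than the names in the posted dataset.

    # Percentage of participants who chose the safe option on every trial (illustrative).
    library(dplyr)
    choices %>%
      group_by(participant_id) %>%
      summarise(all_safe = all(risky == 0), .groups = "drop") %>%
      summarise(pct_all_safe = 100 * mean(all_safe))

    # The same idea, split by gamble type (advantageous / equal / disadvantageous):
    choices %>%
      group_by(gamble_type, participant_id) %>%
      summarise(all_safe = all(risky == 0), .groups = "drop") %>%
      group_by(gamble_type) %>%
      summarise(pct_all_safe = 100 * mean(all_safe))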

We have updated the Discussion to include a more extensive discussion of this finding (p. 30, middle). Specifically, we have added the following text: “It should be noted that the average percentage of risky choices across all participants in equal and disadvantageous contexts was low—less than 20%. Moreover, over half (57%) of participants chose the safe option on all disadvantageous trials, and 47% chose the safe option on all equal expected value trials. The relationship between risk-taking and compliance behavior appears to be driven by a minority of participants, which may limit the generalizability of the findings. Nevertheless, the behavior of even a minority of individuals can have substantial impacts during a pandemic. Recent estimates indicate that approximately 10% of infected individuals account for 80% of COVID-19 transmission [75]. Though a minority of individuals engage in equal and disadvantageous risky decision-making, the corresponding decrease in compliance with mask-wearing and social distancing observed in this study could have wide-ranging consequences for spreading COVID-19.”

4) Even if the above clarifications are necessary in my opinion, I still struggle to see the need for breaking down the observations by domain (advantageous, equal, disadvantageous gambles). What is this regression telling us? Is it really telling us that people who are more risk tolerant, as measured in the “cups task”, wear the mask less frequently? That would seem the type of analysis needed to address the research question of the paper. For example, suppose that Participant P1 chose all risky options in the advantageous domain and all safe options in the equal and disadvantageous domains. Suppose that Participant P2 chose all safe options in the advantageous and equal domains, while choosing the risky option in a quarter of the disadvantageous trials. The econometric model would predict that P1 should wear a mask in public, while P2 should not. But can we infer in any meaningful sense that this kind of choice captures that P2 is more risk tolerant than P1? Or is this finding relevant purely on methodological grounds, rather than for associating individual psychological traits with individual real-life behavior?

To me it would seem more natural to consider one measure of risk tolerance for each individual, defined over the whole set of lotteries. Even if more sophisticated measures would be possible, an obvious candidate would be the proportion of risky choices over the whole set of 36 choices. This measure should be used as the main predictor in a regression analogous to the one reported in Table S3. Looking at the correlations in Table 4, it would appear that this variable would turn out to be a significant predictor of mask-wearing. The authors could subsequently break down the analyses by domain, as they presently do.

We appreciate the feedback on this point. To briefly explain our original rationale, the original regression was intended to gauge whether individuals’ risky decisions on the Cups Task (in addition to other factors) relate to their decisions regarding mask-wearing and social distancing. The risk level (advantageous, disadvantageous, and equal expected value) for this task is typically included as a modifier, simply to provide more specific information about the contexts in which people do or do not make risky choices (disadvantageous vs. equal, etc.). Those who make risky choices more often in advantageous than in disadvantageous contexts are considered more sensitive to changes in expected value [7, 8]. The task and analytical approach were intended to capture this context variability because risk-taking is not a static ‘trait’ but varies within a single individual based on domain, context, and time (Weber et al., 2002), whereas psychological traits are defined as consistent over time and context. We realize that this information was not conveyed clearly in the previous revision.

Based on the feedback, we have now revised the regressions for each outcome measure to include a single predictor of risky decision-making behavior, defined as the proportion of risky choices over the whole set of choices, as suggested. After reporting the results of the primary regressions, we report the results of separate follow-up tests that break down the analyses by gambling domain. The results showed that both risky decision-making behavior and temporal discounting predicted decreased mask-wearing and social distancing behavior. There were changes in the significance level for some of the covariates, and this information has been updated throughout the Results and Discussion. We have also included some brief additional information to better explain why the different contexts (advantageous, disadvantageous, and equal) were examined as part of the study (p. 5, top). The corresponding tables have also been updated. Overall, this updated approach led to more straightforward results that we think will be clearer and more useful for the broad PLOS ONE readership to interpret.
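A schematic version of the revised approach is sketched below in R. The variable names and the abbreviated covariate list are illustrative assumptions, not the full set of predictors reported in the manuscript.

    # Primary model: overall proportion of risky choices as the single risky-choice predictor.
    primary <- lm(mask_wearing ~ prop_risky_overall + temporal_discounting + age + covid_experience,
                  data = participants)

    # Follow-up models: substitute the domain-specific proportions to probe context effects.
    followup <- lm(mask_wearing ~ prop_risky_adv + prop_risky_equal + prop_risky_disadv +
                     temporal_discounting + age + covid_experience,
                   data = participants)

    summary(primary)
    summary(followup)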

Weber, E. U., Blais, A. R., & Betz, N. E. (2002). A domain-specific risk-attitude scale: Measuring risk perceptions and risk behaviors. Journal of Behavioral Decision Making, 15(4), 263–290.

5) I note that the authors went to great lengths to address the comment that pro-sociality was a possible confound in the theoretical model. I agree with the authors that giving too much attention to this aspect would detract from the main focus of the paper. Nonetheless, since both the other reviewer and I had the same concern, it is quite likely that other readers would as well. I would therefore recommend that the authors report their analysis concerning pro-sociality in the Appendix and mention the result in the main paper. I would also stress that it is obvious that the list of possible explanatory variables is potentially infinite, and nobody in their right mind could ask to incorporate all of them in the analysis. Nonetheless, pro-sociality is such an obvious possible motivator in the current context that many readers would feel that not including it could jeopardize the interpretation of the results.

We have now added the description of the prosociality measures to the Methods section (p. 17). We elaborate on the methodological details and report the results in the Supplementary Material, and we mention the correlational results in the Results section of the main paper (p. 29) and in the Discussion (p. 34). The sentence concerning an exhaustive list of possible influences on COVID-19 compliance has also been removed.

Attachment

Submitted filename: ReviewerComments2.docx

Decision Letter 2

Pablo Brañas-Garza

20 Apr 2021

Risk-Taking Unmasked: Using Risky Choice and Temporal Discounting to Explain COVID-19 Preventative Behaviors

PONE-D-20-32678R2

Dear Dr. Byrne,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Pablo Brañas-Garza, PhD Economics

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Acceptance letter

Pablo Brañas-Garza

3 May 2021

PONE-D-20-32678R2

Risk-Taking Unmasked: Using Risky Choice and Temporal Discounting to Explain COVID-19 Preventative Behaviors

Dear Dr. Byrne:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr Pablo Brañas-Garza

Academic Editor

PLOS ONE

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

S1 File

(DOCX)

Attachment

Submitted filename: Review - PONE-D-20-32678.pdf

Attachment

Submitted filename: Revision.docx

Attachment

Submitted filename: Report 2.pdf

Attachment

Submitted filename: ReviewerComments2.docx

Data Availability Statement

All data files are available from the Open Science Framework database (https://osf.io/xy6aj).

