PLOS One
. 2021 Nov 18;16(11):e0260061. doi: 10.1371/journal.pone.0260061

The impact of pandemic-related worry on cognitive functioning and risk-taking

Kevin da Silva Castanheira 1,*, Madeleine Sharp 2, A Ross Otto 1
Editor: David V Smith
PMCID: PMC8601558  PMID: 34793534

Abstract

Here, we sought to quantify the effects of experienced fear and worry, engendered by the COVID-19 pandemic, on both cognitive abilities—speed of information processing, task-set shifting, and proactive control—as well as economic risk-taking. Leveraging a repeated-measures cross-sectional design, we examined the performance of 1517 participants, collected during the early phase of the pandemic in the US (April–June 2020), finding that self-reported pandemic-related worry predicted deficits in information processing speed and maintenance of goal-related contextual information. In a classic economic risk-taking task, we observed that worried individuals’ choices were more sensitive to the described outcome probabilities of risky actions. Overall, these results elucidate the cognitive consequences of a large-scale, unpredictable, and uncontrollable stressor, which may in turn play an important role in individuals’ understanding of, and adherence to, safety directives both in the current crisis and in future public health emergencies.

Introduction

The COVID-19 pandemic represents a significant threat to the physical, mental, and economic well-being of people globally. While a spate of recent work has documented the direct effects of the pandemic on mental health [1–4], less is known about the consequences it might hold for cognitive functioning. Identifying and understanding these cognitive and behavioural consequences is especially critical as governments continue to face the challenge of controlling the spread of the virus and mitigating its social, economic, and psychological consequences. More than ever, people are being asked to attend to a continuous stream of public health messages, to adhere to government directives and to control their impulses for the sake of the collective good [5]. Seemingly simple decisions—like refraining from sharing a coffee with friends, or registering for the vaccine—are the key elements of the global fight against COVID-19, yet concerning trends suggest declining adherence to these regulations and far-from-universal vaccine acceptance [6, 7]. Successful navigation of these situations is thought to rely on a key set of related cognitive processes often referred to as executive functions [8, 9]—broadly defined as monitoring and selection of behaviours in accordance with internal goals [10]. Here, we elucidate the impact of the early stages of the COVID-19 pandemic on both executive function and risky decision-making.

At the same time, a body of work suggests that executive functioning is impaired under conditions of fear and anxiety [11–15], like those reported during the pandemic [16, 17], possibly owing to the processing resources (e.g., working memory) displaced by excessive worry [18]. For instance, anxiety has been shown to impair goal-directed cognitive processing, thereby increasing the influence of more reflexive responses [19]. However, past work has also shown that some cognitive processes are preserved—or even facilitated—by anxiety [20]. Given the variability in the reported effects of emotional distress, pinpointing the locus of cognitive impairments brought about by pandemic-related worry may be particularly important.

Here, we provide an initial examination of the effects of pandemic-related worry upon cognitive functioning in a large, representative US-based sample, by measuring pandemic-related worry and assessing three distinct facets of cognitive functioning: 1) processing speed, measured with the digit-symbol coding task [21], 2) the ability to shift between multiple task sets, measured using a task-switching paradigm [22], and 3) proactive cognitive control, the ability to utilize contextual, task-related information in accordance with internally maintained goals, measured by the Dot Pattern Expectancy task (DPX) [23]. We chose these three specific cognitive tasks because they index related yet disparate facets of cognitive ability [24], all rely on anxiety-sensitive executive functions (e.g., working memory) [21, 25, 26], have been previously validated [27], and can be administered online, allowing for comparisons with pre-pandemic samples.

We administered this online task battery during the early phase of the COVID-19 pandemic in North America, across 3 waves, between April and June of 2020. We measured self-reported pandemic-related worry using the validated “Fear of Coronavirus” questionnaire [28], and subsequently probed the relationship between pandemic worry and cognitive performance across these tasks, controlling for both financial and overall perceived stress levels. We also explored 1) the impact of the pandemic on cognition by comparing performance of participants recruited during the pandemic to that of participants who had been recruited to participate in separate studies before the pandemic but who completed the same online tasks, and 2) the impact of pandemic progression by directly comparing task performance between waves.

Finally, given the importance of risk assessments in a pandemic [29], we examined the possibility that the effects of pandemic-related worry might also extend to individuals’ risk-taking. Previous work has found that anxiety engenders an increased perceived likelihood of negative outcomes [30–32], which can manifest as heightened risk aversion [20, 33, 34]. To evaluate this, we probed the effect of pandemic-related worry on decision-making by measuring individuals’ risk attitudes in a traditional economic choice task [35, 36]—specifically, whether pandemic-related worry is associated with an overall tendency towards risk aversion, or whether this association is selective to choices pertaining to gains or losses [37–39].

Materials and methods

Participants

We recruited three samples of 509, 501 and 507 adult participants residing in the US for waves 1 (April 2nd, 3rd & 6th), 2 (April 17th, 20th–23rd), and 3 (June 19th & 22nd), respectively (1517 total), via Amazon Mechanical Turk (MTurk; see Fig 1) [40]. Given the unprecedented nature of the relationship being examined, we planned the samples in advance anticipating data exclusions and ensuring sufficient statistical power (i.e., 90%) to detect modest effect sizes (r ≥ 0.20) for the relationships between fear of coronavirus (FCQ) scores and the cognitive measures of interest [41]. Participants provided informed written consent and were paid $5 USD for completing the task battery. Our experimental protocol was approved by the McGill University Research Ethics Board-2 and carried out in accordance with institutional guidelines and regulations. We employed strict exclusion criteria on a task-by-task basis to ensure the quality of the data collected—particularly that possible declines were not simply due to noisier responding (see Supplemental Material in S1 File). Crucially, exclusion from our sample was not found to covary with the key variable of interest—pandemic worry (see Supplemental Results in S1 File). Given the cross-sectional design of the study, it is essential to establish that the recruited samples are comparable in terms of demographic variables. To determine the similarity of the samples, we compared the three recruited samples both to each other and to the collapsed pre-pandemic samples on several demographic variables (see S15, S16 Tables in S1 File). Overall, participants had comparable ages across waves 1 (Mage = 36.2, SDage = 9.87), 2 (Mage = 36.5, SDage = 10.4), and 3 (Mage = 36.6, SDage = 10.4), and in comparison to the collapsed pre-pandemic sample (Mage = 37.8, SDage = 14.6, F(3, 1753) = 1.655, p = 0.175). In terms of reported gender, the third wave (females = 138, males = 327, other = 0) contained proportionally more male participants than the first wave (females = 172, males = 275, other = 4), second wave (females = 166, males = 287, other = 2), and collapsed pre-pandemic sample (females = 164, males = 224, other = 4). To control for demographic differences between samples, we included age, gender, income, years on MTurk, and education level as covariates. We further controlled for pandemic-specific variables which may covary with performance: number of adults and children living at home given the stay-at-home orders, perceived risk for contracting COVID-19, and self-reported COVID symptoms [42].
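
Although the exact power-analysis procedure is not reported beyond the citation of Cohen [41], a calculation of this kind could be run in R; the sketch below assumes the pwr package (which implements Cohen's formulas) and uses the target effect size and power stated above.

```r
# Illustrative power calculation (assumes the pwr package; not necessarily the
# authors' procedure). Smallest sample needed to detect r = 0.20 with 90% power.
library(pwr)

pwr.r.test(r = 0.20, power = 0.90, sig.level = 0.05, alternative = "two.sided")
# Returns n of roughly 260 per analysis, before the anticipated exclusions
```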

Fig 1. Summary of participant responses across waves of data collection.


A. Map of the U.S. depicting the distribution of participants collected across the three samples. B. Correlation between the number of responses per state and the state population. C. Participants reported moderate levels of overall perceived stress, which did not vary as a function of wave (β = -0.321, CI = [-0.841, 0.198], p = .255); plotted here with 95% bootstrapped confidence intervals. D. Mean self-reported financial strain decreased with pandemic progression (wave β = -0.232, CI = [-0.457, -0.007], p = .042); again, plotted with 95% bootstrapped confidence intervals. E. Mean self-reported coronavirus fear/worry (FCQ) decreased as a function of wave (β = -1.188, CI = [-1.64, -0.733], p < .001); again plotted with 95% bootstrapped confidence intervals.

Procedure and materials

To assess the extent of the COVID-19 pandemic’s impact on individuals’ cognitive functioning and behaviour, we asked participants to complete a battery of 4 different tasks in a counterbalanced order. These tasks were identical to those used in our pre-pandemic samples in terms of stimuli, trial number and response-deadlines.

Digit-symbol coding task

To measure processing speed, participants were asked to complete a computerized version of the Digit-Symbol Coding task [43, 44]. During this task, participants are shown a static list of 9 digit-symbol pairs, which remain visible at the top of the screen for the entirety of the task (see Fig 2A). On each trial, participants are asked to indicate whether the digit-symbol pair presented in the centre of the screen matches one of the 9 digit-symbol pairs depicted at the top of the screen. Yes/No responses were made using the left and right arrow keys, with the response-key mappings counterbalanced between participants. Following prior work [43], participants were asked to respond correctly to as many trials as they could within 90 seconds. In keeping with previous pre-pandemic samples collected in the lab, we implemented an attention check designed for online data collection, whereby participants who did not achieve 70% accuracy on the task were asked to complete the task a second time (28% of participants in wave 1; 33% in wave 2; and 42% in wave 3). In such cases, only data from the second run were analyzed [45]. We compared the current samples to the previously collected data [45].
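
As an illustration only (not the authors' analysis code), per-participant digit-symbol performance could be summarised from trial-level data as sketched below; the data frame and column names are hypothetical.

```r
# Hypothetical trial-level data: one row per response made within the 90 s window
library(dplyr)

dsc <- data.frame(
  subject = rep(c("s01", "s02"), each = 4),
  correct = c(TRUE, TRUE, FALSE, TRUE,  TRUE, TRUE, TRUE, TRUE),
  rt      = c(1.4, 1.6, 2.1, 1.5,  1.2, 1.3, 1.1, 1.4)   # seconds
)

dsc %>%
  group_by(subject) %>%
  summarise(
    digit_symbol_score  = sum(correct),            # correct responses within 90 s
    mean_log_correct_rt = mean(log(rt[correct]))   # log RT on correct trials
  )
```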

Fig 2. Participants’ performance on the digit-symbol coding task.


A. Screenshot of the digit-symbol coding task. B. We observed a relationship between FCQ scores and digit-symbol score (β = -0.940, CI = [-1.591, -0.289], p = .005); here plotted with the marginal distribution of the ordinate and abscissa. C. We also observed a relationship between FCQ scores and log correct RTs (β = 0.019, CI = [0.005, 0.033], p = .009); again, plotted with the marginal distribution of the ordinate and abscissa. D. Processing speed decreased in comparison to the pre-pandemic sample and by wave (p’s < .05); here plotted by wave with 95% bootstrapped confidence intervals. E. Participants’ response times increased in comparison to the pre-pandemic sample and by wave (p’s < .05); again, plotted here with 95% bootstrapped confidence intervals.

Task-switching paradigm

To measure task-set shifting ability, we asked participants to complete a task-switching paradigm [22, 46] in which they were shown a square on either the top or bottom half of the screen (see Fig 3A). Depending on the position of the square, participants were asked to indicate either the colour (i.e., orange or blue) or the pattern of the box (i.e., solid or striped) using the ‘E’ or ‘I’ keys on the keyboard (e.g., blue = “E”, orange = “I”; solid = “E”, striped = “I”) within a 1500 ms time limit. Critically, both the position-task mappings and the key-response mappings were counterbalanced between participants. Participants completed 80 trials, on half of which the subtask switched (e.g., from colour to pattern) and on the other half the previous subtask repeated (e.g., from colour to colour). These data were compared to the “preliminary phase” data of a previously collected sample [47].
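
For concreteness, the RT switch costs reported in the Results could be computed per participant roughly as follows; this is an illustrative sketch with hypothetical variable names, not the authors' code.

```r
# Hypothetical trial-level task-switching data; `switch` marks trials where the
# subtask changed relative to the previous trial.
library(dplyr)

ts <- data.frame(
  subject = rep("s01", 6),
  switch  = c(FALSE, TRUE, FALSE, TRUE, TRUE, FALSE),
  correct = c(TRUE, TRUE, TRUE, FALSE, TRUE, TRUE),
  rt      = c(0.62, 0.81, 0.60, 0.95, 0.84, 0.58)   # seconds
)

ts %>%
  filter(correct) %>%               # switch costs computed on correct trials here
  group_by(subject) %>%
  summarise(
    switch_cost_rt = mean(rt[switch]) - mean(rt[!switch])   # slowing after a switch
  )
```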

Fig 3. Participants’ performance on the Task-Switching paradigm.


A. Schematic of the task-switching paradigm. B. Accuracy (top; p’s < .05), but not response times (bottom; p’s ≥ .05), decreased by wave; variables plotted with 95% bootstrapped confidence intervals. C. There was no observed relationship between FCQ scores and switch costs expressed in RT (β = -0.0005, CI = [-0.0142, 0.0132], p = .946); here, plotted with marginal distributions for the ordinate and abscissa. D. There was no observed relationship between FCQ scores and switch trial accuracy (β = -0.0385, CI = [-0.0865, 0.0096], p = .117); again, plotted with marginal distributions for the ordinate and abscissa.

Dot Pattern Expectancy task (DPX)

We assessed proactive cognitive control by using the dot-pattern expectancy task (DPX) [48]. During this task, participants were first shown a dot pattern which served as the cue for 500ms (blue dot pattern; A or B), followed by a fixation cross for 2000ms (the delay period), and finally the probe stimulus for 500ms (white dot pattern; X or Y). Participants were instructed that a specific combination of cue and probe dot patterns would serve as the “target” stimulus and were told to respond using the “1” key when presented with the “target” cue-probe pair—which we refer to as the AX trial—and to respond with the “2” key for all other non-target combinations—AY, BX, BY (see Fig 4). Participants completed a total of 128 trials, where the majority were target trials (AX, 68.75%), to establish the prepotency of the “target” response. The remaining trials were equally distributed among the three other trial types (BX, BY, AY). Using participants’ RTs, we calculated the proactive behavioural index (PBI) [49], which reflects the relative interference on AY and BX trials, calculated as (RTAY − RTBX)/(RTAY + RTBX), where a larger PBI reflects greater utilization of cue-based contextual information (and less probe-driven behaviour) [49, 50]. Note that our pre-pandemic DPX sample [51] lacked self-reported age and gender, and accordingly these variables were not included in the linear model comparing the pre-pandemic and pandemic samples.
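
A minimal sketch of the PBI computation described above, assuming a per-participant table of mean correct RTs by trial type (hypothetical column names; not the authors' code):

```r
# PBI = (RT_AY - RT_BX) / (RT_AY + RT_BX); larger values indicate greater
# reliance on cue-based (proactive) control.
library(dplyr)

dpx <- data.frame(
  subject    = rep("s01", 4),
  trial_type = c("AX", "AY", "BX", "BY"),
  mean_rt    = c(0.52, 0.71, 0.58, 0.49)   # mean correct RT per trial type (s)
)

dpx %>%
  group_by(subject) %>%
  summarise(
    pbi = (mean_rt[trial_type == "AY"] - mean_rt[trial_type == "BX"]) /
          (mean_rt[trial_type == "AY"] + mean_rt[trial_type == "BX"])
  )
```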

Fig 4. Participants’ performance on the dot pattern expectancy task.


A. Schematic of the dot pattern expectancy task, in which a specific combination of cue (shown in blue) and probe (shown in white) dots was deemed the “target” stimulus and associated with target responses; all other combinations required non-target responses. B. We observed a negative relationship between FCQ scores and PBI (β = -0.0020, CI = [-0.0034, -0.0005], p = .010); here plotted with marginal distributions for the ordinate and abscissa. C. We observed a negative relationship between perceived stress scores and PBI (β = -0.0108, CI = [-0.0200, -0.0015], p = .023); again, with marginal distributions plotted for the ordinate and abscissa. D. Proactive behavioural index (PBI) decreased in comparison to the pre-pandemic sample (β = -0.0353, CI = [-0.0674, -0.0032], p = .031) but not with pandemic progression (p’s > .20); error bars represent the 95% bootstrap confidence interval. E. Overall accuracy was not found to decrease as a function of pandemic progression (p > .05).

Risky decision-making task

Participants’ risk preferences were assessed using a simple risky decision-making task in which they were presented with six practice trials and 120 binary choices between a risky and a certain option. For half of the choices, options were framed as losses (i.e., either -$200 or -$100) and the remaining half were framed as gains (i.e., either $200 or $100). All risky options were associated with both a chance of winning (or losing) a non-zero amount of hypothetical money and a chance of winning (or losing) nothing at all (see Fig 5). For the risky options, the probability of the non-zero outcome varied between likely (0.90, 0.95, or 0.99) and unlikely (0.10, 0.05, or 0.01) outcomes [36]. All choices were between options of equal expected value (EV), except for twelve “catch” trials in which the expected value greatly favored one option (expected value = ±90), ensuring that participants understood the task (see S11 Table in S1 File for the full list of stimuli used). Participants were asked to select an option within a 2500 ms limit by responding with either the left or right arrow key. Data were compared to a previously collected pre-pandemic sample in which participants completed the same risky decision-making task under two deadlines (2500 ms and 3500 ms) [45].

Fig 5. Participants’ performance on the risky decision-making task.


A. Schematic of the economic decision-making task used to assess risk preferences. B. Overall, participants were loss averse, as demonstrated by their sensitivity to the framing of problems as either losses or gains (β = -1.3200, CI = [-1.4430, -1.1969], p < .001), which also varied as a function of pandemic wave (β = 0.1949, CI = [0.0440, 0.3458], p = .011). C. Average proportion of risky choices by frame, outcome probability, and sample. Participants in the third wave, compared to the first wave and the pre-pandemic sample, were more likely to distort outcome probability (p ≤ .001). D. Average proportion of risky choices by frame, outcome probability, and a median split on fear of coronavirus questionnaire (FCQ) scores. E. We observed a positive relationship between fear of coronavirus and one’s tendency to distort outcome probability (β = 0.1720, CI = [0.0929, 0.2511], p < .001)—indexed as the magnitude of the fourfold pattern, (P(risky)Likely Losses − P(risky)Unlikely Losses) + (P(risky)Unlikely Gains − P(risky)Likely Gains); plotted here with marginal distributions for the ordinate and abscissa.
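
As a rough illustration, the fourfold-pattern index described in panel E could be computed from per-participant proportions of risky choice in each frame-by-probability cell as sketched below; the data frame and column names are hypothetical, not the authors' code.

```r
# Fourfold index: (risky choice for likely losses - unlikely losses) +
# (risky choice for unlikely gains - likely gains); larger values indicate
# stronger distortion of described outcome probabilities.
library(dplyr)

risky <- data.frame(
  subject = rep("s01", 4),
  frame   = c("loss", "loss", "gain", "gain"),
  prob    = c("likely", "unlikely", "likely", "unlikely"),
  p_risky = c(0.75, 0.35, 0.30, 0.65)   # proportion of risky choices per cell
)

risky %>%
  group_by(subject) %>%
  summarise(
    fourfold = (p_risky[frame == "loss" & prob == "likely"] -
                p_risky[frame == "loss" & prob == "unlikely"]) +
               (p_risky[frame == "gain" & prob == "unlikely"] -
                p_risky[frame == "gain" & prob == "likely"])
  )
```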

Questionnaires

Participants first completed all four behavioural tasks in a counterbalanced order, followed by the PSS, FSS, and FCQ questionnaires in a randomized order. Finally, participants completed a questionnaire assessing demographic variables and rated their ability to focus.

Fear of Coronavirus Questionnaire (FCQ). To assess anxiety related to the pandemic, we administered an 8-item questionnaire assessing beliefs and behaviours relating to the COVID-19 pandemic [28]. Participants were asked to rate, on a 5-point scale, the extent to which they endorsed statements relating to fear of the virus (e.g., “I am very worried about the coronavirus outbreak.”) or to engaging in certain behaviours (e.g., “I am constantly following all news updates regarding the virus.”). Scores on this questionnaire range from 8 to 40, where larger numbers reflect greater fear.

Perceived Stress Scale (PSS). The Perceived Stress Scale is a 10-item instrument which measures an individual’s perceived level of stress over the last month [52]. Participants were asked to rate how often they felt overwhelmed (e.g., “In the last month, how often have you felt nervous and ‘stressed’?”) or in control (e.g., “In the last month, how often have you felt that you were on top of things?”) on a scale from 1 (“very often”) to 5 (“never”). Responses to the reverse-coded items were reverse-scored such that larger total scores reflect more stress; total scores range from 10 to 50.
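
For illustration, reverse-scoring an item on a 1-to-5 scale amounts to subtracting the response from 6; a minimal sketch (not the authors' scoring script) is shown below.

```r
# Generic reverse-scoring for 1-5 Likert items; which PSS items are
# reverse-scored follows the published scale [52].
reverse_item <- function(response, scale_max = 5) (scale_max + 1) - response

reverse_item(c(1, 3, 5))   # returns 5 3 1
```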

Financial Strain Scale (FSS). We used a 3-item measure of financial strain which asks participants to rate, on a 5-point scale, how much they endorse statements conveying financial hardships (e.g., “In the next two months, how much do you anticipate that you or your family will experience actual hardships such as inadequate housing, food, or medical attention?”) [53]. Scores range from 3 to 15, where larger numbers reflect more strain.

Self-reported ability to focus. At the end of the cognitive tasks, participants were asked to rate their ability to focus during the experiment using a 9-item scale which assessed the degree of understanding (e.g., “all instructions were clear”), motivation (e.g., “I did my best on the task at hand”), and distraction (e.g., “I was distracted during the experiment”) [27]. Ratings were taken on an 11-point scale, resulting in a range of possible scores from 0 to 90, where lower scores reflect less focus.

Demographic variables. In addition to the above measures, participants were asked to provide several demographic variables to be included as covariates in the regressions. We asked participants to indicate their age, gender, highest level of education achieved, their annual household income, and the number of adults and children currently living with them. Additionally, we asked participants to specify whether English was their first language and, if not, at what age they first started learning English. To account for the possibility that the pandemic attracted new workers to MTurk, we asked participants to indicate how long they had been workers on MTurk. Finally, participants were asked to indicate whether they felt sick with any COVID-19 symptoms and whether they believed themselves to be at an elevated risk of a more severe case of COVID-19 (i.e., “Do you think you are at increased risk of experiencing a more severe case of COVID-19/coronavirus (i.e., due to an underlying medical condition)?”) [54].

Data analysis

To test the effect of worry on cognitive functioning, we ran separate hierarchical linear or logistic mixed-effects models for each task to assess the extent to which response times or response accuracy related to pandemic worry, controlling for several covariates (age, gender, income, highest education level, perceived risk for contracting COVID-19, total years on MTurk, and number of adults and children living in the same household). For the multi-level models, we included the within-participant trial-by-trial predictor variables as both random and fixed effects (i.e., task switch versus repetition, response congruency, decision frame: losses versus gains, and outcome probability: likely versus unlikely). This allowed us to obtain both group- and participant-level estimates for the effects in question (e.g., switch costs). We further estimated fixed-effects linear models to examine the relationship between singular, per-participant summary measures of task performance (i.e., total number of correct trials on the digit-symbol task, PBI) and both the predictor variables (wave, worry, perceived stress, and financial stress) and subject-level covariates.

Beyond the main effects of interest (i.e., pandemic worry), we also explored the overall effect of the pandemic on cognitive functioning. To this end, we ran separate linear or logistic mixed-effects models for each task where pandemic participants (all three waves collapsed) were compared to the pre-pandemic sample with a dummy-coded variable (i.e., pre-pandemic: 0 vs. pandemic: 1). We also evaluated whether the individual measures of stress and anxiety changed as a function of pandemic progression (i.e., between waves) using simple linear regressions. As with the worry analyses, we controlled for sample differences (i.e., gender and age) where possible; however, as these pre-pandemic and wave comparisons are cross-sectional, they should be taken as exploratory. All analyses were conducted using the lme4 package (version 1.1-21) for the R programming language.
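
To make the modelling approach concrete, a minimal lme4 sketch of one such trial-level model (here, a logistic model for risky choice) is shown below; the variable names, covariate set, and random-effects structure are illustrative assumptions rather than the authors' exact specification.

```r
# Illustrative lme4 specification (not the authors' exact model). Assumes a
# hypothetical trial-level data frame `choice_trials` with one row per choice.
library(lme4)

risk_model <- glmer(
  chose_risky ~ frame * probability * fcq +          # effect-coded frame/probability and FCQ
    pss + fss + age + gender + income + education +  # subject-level covariates
    (1 + frame * probability | subject),             # within-subject terms as random slopes
  data   = choice_trials,
  family = binomial
)
summary(risk_model)
```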

Results

We collected US-based online samples across three waves corresponding to the early phase of the COVID-19 pandemic in North America: April 2nd, 3rd, and 6th, April 17th and 20th–23rd, and June 19th and 22nd, 2020 (see Fig 1A). Within this period, many U.S. states enacted strict regulations to mitigate the spread of the virus, including travel bans, limitations on gatherings, and stay-at-home directives, indicating that the pandemic was widespread at the time of data collection. Importantly, we also observed a strong relationship between the number of responses in each U.S. state and that state’s population in 2019 (r = 0.976, p < .0001; collapsed over waves), suggesting that the geographical distribution of participants was consistent with the state-level population distribution (Fig 1B). We measured participants’ performance on three cognitive tasks and an economic choice task, as well as pandemic-related worry (e.g., subjective worry, looking up pandemic information) using the Fear of Coronavirus Questionnaire (FCQ) [28], financial stress (FSS) [53], and overall perceived stress (PSS) [52].

Experienced fear, worry and stress

As depicted in Fig 1C and 1D, participants reported experiencing both moderate financial (MFSS = 8.35; SDFSS = 3.22) and overall perceived stress (MPSS = 18.63; SDPSS = 7.42) as measured by the FSS [53] and the PSS [52], respectively. Additionally, participants’ self-reported pandemic worry—measured using the FCQ [28] (MApril = 29.81; SDApril = 6.40; MJune = 28.42; SDJune = 6.73)—matched the levels reported by other work assessing worry in an international sample during the same time period [55]. Relatedly, perceiving oneself as being at an elevated risk for contracting COVID-19 was associated with higher levels of pandemic-related worry (β = 2.3799, 95% CI = [1.5289, 3.2309], p < 0.001) [54]. Next, we compared pandemic worry, perceived overall stress, and financial stress across waves (Fig 1E), finding that pandemic-related worry decreased linearly with wave (β = -1.188, CI = [-1.642, -0.733], p < .001), mirroring previous observations taken during the North American pandemic onset [55, 56]. Financial stress (β = -0.232, CI = [-0.457, -0.007], p = .042), but not perceived stress (linear effect β = -0.321, CI = [-0.841, 0.198], p = .225), decreased as a function of pandemic progression.

Processing speed

We measured participants’ ability to quickly process information [21], using the digit-symbol coding task, where participants were required to indicate, for as many stimuli as possible within 90 seconds, whether the digit-symbol pair displayed matched one of the pairs displayed in the digit-symbol key at the top of the screen [21] (Fig 2A). Interestingly, we observed that higher levels of pandemic-related worry—indexed by the FCQ—predicted lower performance (Fig 2B). Statistically, this predictive effect was significant, controlling for age, gender, overall stress, financial strain, perceived risk of contracting COVID-19, and several other demographic variables (FCQ β = -0.940, CI = [-1.591, -0.289], p = .005; see S1 Table in S1 File). Additionally, we found that even while controlling for pandemic-related worry, perceived risk of contracting a severe case of COVID-19 was also predictive of slower overall processing speed (β = 0.0362, 95% CI = [0.0003, 0.0722], p = 0.048; see S1 Table in S1 File). We also examined whether the predictive effect of worry on digit-symbol performance could be explained by a decrease in response caution, as indexed by faster correct RTs [43]. Instead, we found that increased FCQ scores were associated with slower correct response times (RTs; β = 0.019, CI = [0.005, 0.033], p = .009; see Fig 2C and S2 Table in S1 File) after controlling for the covariates listed above, indicating that the locus of worry’s predictive effect was a general slowing of information processing.

Next, in an exploratory analysis, we compared performance across waves (Fig 2D) and with a pre-pandemic sample taken in the US in 2018–2019 (see S3 Table in S1 File for sample information). We found that pandemic samples exhibited lower digit-symbol scores (β = -5.470, CI = [-7.640, -3.300], p <0.001) and responded more slowly (β = 0.105, CI = [0.059, 0.151], p <0.001) compared to the pre-pandemic sample, controlling for participants’ age and gender. Across pandemic waves, we also observed a linear decline in processing speed as a function of pandemic progression (β = -1.120, CI = [-1.980, -0.259], p = .011; see Fig 2D): we found digit-symbol scores were lower in the second (β = -2.295, CI = [-3.863, -0.728], p = .004) and third waves (β = -2.124, CI = [-3.8492, -0.4000], p = .0168; see S1 Table in S1 File), relative to the first wave, controlling for FCQ scores. This pattern was echoed in RTs, where participants were slower in the latter two waves (i.e., waves 2 and 3) in comparison to the first wave (p’s < .05; see Fig 2E and S2 Table in S1 File). In short, the observed impairments in processing speed, both in association with pandemic-related worry and pandemic progression, hint that a worry-induced reduction in basic information processing ability may be a cognitive consequence of the pandemic.

Task-switching

We examined individuals’ ability to reconfigure mental processes in response to a change in goals, indexed by switch costs—the slowing of responses following switches between subtasks [22, 46]. In the task-switching paradigm, participants are required to respond to a stimulus based on a subtask that varies from trial to trial (see Fig 3A). On half of the trials, the required subtask (COLOUR versus PATTERN) repeated, while the other half of trials entailed a switch to the other subtask, yielding “repeat” and “switch” trials, respectively.

As is typical of task-switching paradigms, we observed significantly slower RTs on switch trials compared to repeat trials (β = 0.1037, CI = [0.0930, 0.1144], p < 0.001; see Fig 3B and S4 Table in S1 File), reflecting switch RT costs [22]. We examined whether pandemic-related worry predicted task performance, but failed to observe a statistically meaningful relationship between FCQ scores and either RTs (β = -0.0005, CI = [-0.0142, 0.0132], p = 0.946; see S4 Table in S1 File) or switch costs (β = -0.0015, CI = [-0.0080, 0.0050], p = 0.647; see Fig 3C and S4 Table in S1 File), suggesting that worry may not have affected set-shifting ability.

Examining task-switching accuracy, we observed a typical decrease in response accuracy on task switches compared to task repetitions (β = -0.5719, CI = [-0.6531, -0.4907], p <0.001; see S5 Table in S1 File) [22]. As depicted in Fig 3D, we failed to find a relationship between pandemic-related worry (FCQ) and either overall accuracy (β = -0.0385, CI = [-0.0865, 0.0096], p = .117; see S5 Table in S1 File) or switch costs expressed in terms of accuracy (β = 0.0066, CI = [-0.0415, 0.0547], p = .788; see S5 Table in S1 File). Together, these results suggest that, unlike processing speed, general task-set shifting abilities—indexed by task switch costs—were unrelated to worry.

In terms of our exploratory analysis, we found that the pandemic samples were generally less accurate compared to the pre-pandemic sample (β = -0.2443, CI = [-0.3835, -0.1051], p = 0.001; see Fig 3B and S6 Table in S1 File), controlling for age and gender. Overall, accuracy was lower in the second and third waves compared to the first (p’s < .05; see S5 Table in S1 File), after controlling for age, gender, perceived risk for contracting COVID-19, and worry (see S7 Table in S1 File for full list of covariates). However, we did not find any differences in either RTs (β = -0.0066, CI = [-0.0437, 0.0304], p = .726; see S7 Table in S1 File) or switch RT costs (β = -0.0045, CI = [-0.0045, 0.0110], p = .572) between the pandemic samples and the pre-pandemic sample collected in 2019. Similarly, we did not observe an effect of pandemic wave upon RTs when comparing between waves (all p’s > .05; see S4 Table in S1 File).

Proactive cognitive control

We probed whether pandemic-related worry had any predictive bearing on proactive cognitive control, assessed with the Dot Pattern Expectancy task (DPX) [48]. In the DPX task (Fig 4A), a cue stimulus (blue dot pattern; A or B) is presented briefly, followed by a delay and a probe stimulus (white dot pattern; X or Y). Participants were instructed to make a ‘target’ response only to a valid cue-probe pair (denoted AX), and a ‘non-target’ response for all other invalid cue-probe pairs: BX, AY, and BY. Importantly, target (AX) trials are far more frequent than non-target trials (approximately 2/3 of trials), engendering a preparatory context triggered by the A cue, and a bias for target responses triggered by the X probe.

The relative performance on BX versus AY trials is used as a measure of proactive control, because the cue-driven expectancy set by the B context can be utilized to inhibit the incorrect, but prepotent target response to the X probe [57]. Accordingly, engaging in the use of proactive control supports performance on BX trials, but at the same time, can impair performance on non-target AY trials. Conversely, reactive control relies more on the probe, which is thought to impair performance on non-target BX trials but improve AY performance. Overall, DPX RTs and accuracies in the pandemic sample mirrored typically observed patterns [51, 58]: subjects made faster and more accurate responses to target AX trials and non-target BY trials compared to AY and BX (RT cue × probe interaction F = 254.7, p < .001; Accuracy interaction F = 905.57, p < .001; see S8 Table in S1 File).

We next examined whether pandemic-related worry predicted deficits in the proactive behavioural index (PBI) [49]—an RT-based measure of proactive control, calculated as (RTAY − RTBX)/(RTAY + RTBX)—finding that individuals with higher FCQ scores had lower PBIs (Fig 4B; β = -0.0020, CI = [-0.0034, -0.0005], p = 0.010; see S9 Table in S1 File), controlling for gender, age, perceived risk of contracting COVID-19, self-reported perceived stress, and financial stress (see S8 Table in S1 File for full list of covariates). That is, individuals reporting more pandemic-related worry had greater difficulty inhibiting stimulus-driven responding. Additionally, we observed a negative relationship between reported chronic stress (measured with the PSS) and PBIs (β = -0.0108, CI = [-0.0200, -0.0015], p = 0.023; see Fig 4C). In summary, these results suggest that pandemic-related worry is associated with deficits in proactive cognitive control.

Relative to a pre-pandemic sample collected in 2013, the pandemic sample exhibited significantly—albeit slightly—reduced PBIs (β = -0.0353, CI = [-0.0674, -0.0032], p = 0.031; Fig 4D), but no pandemic progression effect (linear effect of wave: β = -0.0075, CI = [-0.0204, 0.0054], p = .254; see Fig 4D), nor any significant PBI differences between the first wave and the latter two waves (p’s > 0.20; see S8 Table in S1 File). Importantly, overall response accuracy did not vary as a function of wave or FCQ (p’s > .05; see Fig 4E and S10 Table in S1 File).

Risky decision-making

Finally, we examined how pandemic-related worry relates to risk preferences as a function of effect-coded variables of gain-loss decision framing [59], outcome probability, and their possible interactions [36]. To do this, we measured risk preferences using a classic economic choice task (Fig 5A), wherein participants made a series of hypothetical choices between a ‘certain’ option (e.g., a sure win of $75) and a ‘risky’ option (e.g., a 25% chance of winning $0 and a 75% chance of winning $100) (see S11 Table in S1 File for the full list of stimuli used). These choices varied both in terms of the decision frame (i.e., loss versus gain outcomes) and the probability of the non-zero risky outcome, which ranged from unlikely (1–10%) to likely (90–99%).

Overall, we observed participants were sensitive to the decision frame in accordance with the framing effect [35, 36]: participants’ choices were risk-averse for gains and risk-seeking for losses (gain/loss frame β = -1.3200, CI = [-1.4430, -1.1969], p < 0.001; see S12 Table in S1 File and Fig 5B). We next examined whether pandemic-related worry predicted risk preferences, finding that FCQ did not relate to individuals’ overall risk-taking (FCQ main effect: β = -0.0450, CI = [-0.1250, 0.0350], p = .270; see S12 Table in S1 File), nor to susceptibility to the framing effect (FCQ x Frame interaction: β = 0.0226, CI = [-0.0977, 0.1429], p = 0.713; see S12 Table in S1 File). Participants were also sensitive to described risk level (Frame x Probability interaction β = 0.2560, CI = [0.1744, 0.3375], p < .001; see Fig 5C), in accordance with the ‘Fourfold pattern’ of risk preferences: choices were relatively more risk-seeking in situations of unlikely gains and likely losses compared to situations of likely gains and unlikely losses [36]. This pattern of preference is thought to arise from the underweighting of likely probabilities and overweighting of unlikely probabilities. Individuals reporting greater pandemic-related worry appeared more sensitive to described risk level (Fig 5D): FCQ scores predicted participants’ tendency to exhibit this fourfold pattern in choice (visualized continuously in Fig 5E), as indicated by a significant three-way interaction between frame, outcome probability and worry (β = 0.1720, CI = [0.0929, 0.2511], p < 0.001; see S12 Table in S1 File). As with the analysis of cognitive task performance, this relationship between sensitivity to outcome probabilities and individual worry remained after controlling for demographic variables and perceived risk of contracting COVID-19 (see S12 Table in S1 File for full list of covariates).

In an exploratory analysis, we compared the 2019 pre-pandemic sample to the pandemic samples and observed a decrease in overall risk-taking (Sample main effect β = -0.3490, CI = [-0.6417, -0.0563], p = .019; see S13 Table in S1 File) and a decrease in sensitivity to gain/loss framing (Frame x Sample interaction β = 0.7465, CI = [0.3242, 1.1688], p = .001; see S13 Table in S1 File) after controlling for age and gender. Participants in the pandemic sample, compared to the pre-pandemic sample, also exhibited increased sensitivity to described risk level, i.e., greater expression of the fourfold pattern described above (Fig 5C), which was supported statistically by an interaction between frame, outcome probability, and sample (pre-pandemic versus pandemic; β = 0.4723, CI = [0.1925, 0.7522], p = .001; see S13 Table in S1 File). We did not observe an effect of pandemic progression on overall risk preferences (linear effect of wave β = -0.0884, CI = [-0.1819, 0.0123], p = .085), but we did find that overall sensitivity to the gain/loss decision frame decreased as a function of wave (linear effect of wave x Frame interaction β = 0.1949, CI = [0.0440, 0.3458], p = .011)—participants in the third wave were found to be less sensitive to decision frame when compared to the first wave (Wave x Frame interaction β = 0.4281, CI = [0.1256, 0.7306], p = .006; see S14 Table in S1 File). We also found participants in the third wave showed greater expression of the fourfold pattern in comparison to the first wave (Wave x Frame x Probability interaction β = 0.6322, CI = [0.4278, 0.8366], p < .001; see S14 Table in S1 File). In summary, we found that both pandemic-related worry and pandemic progression were associated with a greater sensitivity to described risk level, but only pandemic progression impacted framing effects.

Discussion

Here, we sought to delineate the cognitive and behavioural consequences of the COVID-19 global pandemic in a representative US sample. Consistent with our predictions, we observed a marked decrease in individuals’ executive control as a function of individual differences in experienced fear/worry. Together, the results outlined above buttress previously observed anxiety-induced impairments in executive functioning [13, 19, 60], and extend this literature by demonstrating a decline in executive control in response to the threats posed by the COVID-19 pandemic [16, 17]—a naturalistic stressor.

Previous accounts of anxiety posit that excessive worry impairs executive functioning by displacing cognitive resources (i.e., working memory) necessary for successful goal-directed control [18]. Supporting this view, we found that higher levels of pandemic-related worry predicted impairments in proactive cognitive control and in information processing speed but not in task-switching ability. We believe that this uneven effect of worry across cognitive tasks reflects the extent to which each task relies on working memory resources. On this view, we found that self-reported coronavirus anxiety measured by FCQ was associated with reduced use of goal-driven (i.e. proactive) control [19, 61, 62]—which is thought to rely more strongly on working-memory capacity than reactive control [25, 26]. Similarly, declines in processing speed, like those observed here, are thought to reflect both the slowing of basic mental operations, and limitations to the amount of simultaneously available information [21]. However, we failed to find a relationship between pandemic worry and task-switching ability. This is perhaps not surprising considering that studies of the effects of negative affective states on task-switching ability have revealed mixed findings [63, 64]. Critically, working-memory demands are thought to play a key role in anxiety-induced impairments in set shifting ability, as both task complexity [65], and response preparation [66] have been found to mediate the predictive effects of anxiety on task-switching. Thus, it is possible that the chosen task switching paradigm did not sufficiently tax working memory to observe a worry-induced decline in performance, as task complexity and response preparation were not manipulated. Dovetailing with these findings, we found that processing speed was related to performance on both the task-switching and proactive control tasks, but proactive control and task-switching performance did not appear to relate to one another (see S17 Table in S1 File). Together, these results highlight how executive functioning reflects a related, yet distinct set of cognitive abilities [24], and suggest that the effects of worry are not of equal magnitude across domains.

Whether poorer cognitive control is a cause of, or a consequence of, anxiety remains unclear, as previous work suggests that worrying interferes with cognitive ability [67], and that lower cognitive ability contributes to worrying through an inability to inhibit negative thoughts [68]. Furthermore, other work has proposed that engaging in demanding cognitive tasks can reduce anxiety [69, 70], highlighting the hypothesized shared resources between worrying and goal-directed behaviour. Critically, as anxiety-related deficits in task performance did not generalize to all tasks, we believe it is unlikely that our results could be exclusively explained by exertion-induced reductions in anxiety. However, future work is needed to disentangle these possibilities.

In the context of a pandemic, individuals’ risk assessments are equally crucial as they predict compliance with social-distancing measures [29]. Extant accounts suggest that anxiety shapes decision-making via an increased tendency to interpret uncertain information more negatively, though there are some inconsistencies [37]. Anxious individuals are thought to overestimate the likelihood of negative outcomes [30–32], leading to greater risk aversion [20, 33, 34]. However, we failed to find any evidence that greater worry was associated with either risk aversion or loss aversion. Instead, we found that pandemic worry predicted individuals’ tendency to distort described risk levels: underweighting likely probabilities and overweighting unlikely probabilities regardless of valence [36]. Perhaps this disparity across studies in the observed effect of anxiety on choice reflects the use of different strategies to manage risk and/or worry [71].

One possible explanation for anxious individuals’ greater sensitivity to described risk is an increase in information seeking, resulting in greater exposure to discussions of risk, which are the current focus of media coverage. On this view, intolerance of uncertainty has been associated both with greater pandemic worry [55] and greater information seeking—particularly when the uncertainty is related to one’s health [72, 73]. Indeed, among individuals experiencing elevated levels of anxiety, there have been increases in searches for pandemic-related information [74]. While the risk attitudes measured here were unrelated to one’s health, we believe this increased contemplation of risks, particularly for those experiencing anxiety, reflects an overall greater salience of described risks in guiding behaviour.

We also explored whether the pandemic had a general effect on task performance. We found that, in comparison to our pre-pandemic samples, those tested during the pandemic seemed to exhibit slower processing speed [21, 43, 44], decreased task-switching accuracy [22, 46], less proactive control [49, 50], and greater sensitivity to described risk [36]. We further probed the effect of longer exposure to the stressor (i.e., pandemic progression), controlling for worry, perceived risk of contracting COVID-19, and financial and overall stress. While proactive control remained stable, we found that participants in the third wave, compared to the first, had slower processing speed, lower task-switching accuracy, and were more sensitive to risk. Given that we are comparing different groups of participants who were captured at different points in time, we can only speculate as to the mechanisms underlying the decline in cognitive performance across this 2-month period in the early stages of the pandemic. In keeping with other research [55, 56], we observed a decrease in the level of worry over time whereas task performance did not significantly change—suggesting there may be other factors contributing to impaired executive functioning. For example, the persistently impaired task performance may reflect the impact of prolonged (i.e., chronic) exposure to stress or a shift in the sources of worry. Supporting this view, we found that self-reported perceived stress remained stable across the waves of data collection. However, despite controlling for possible demographic differences between samples (e.g., age, gender, years on MTurk; see Supplemental Materials in S1 File), we cannot rule out the possibility that our groups (collected both in years prior to the pandemic and throughout) sampled different populations, as the pandemic was associated with a marked increase in financial hardships and remote work. Thus, the sample differences discussed here should be interpreted as exploratory, and more work is needed to understand whether these wave effects reflect meaningful differences.

We believe the results outlined here have broader implications for human behaviour, as intact executive control is a cornerstone of healthy daily living (e.g., economic productivity), workplace performance [75], and the delay of gratification [76]. For instance, though we did not measure current compliance with COVID-19-related measures, it is possible that the behavioural repertoires examined here—executive functioning and risky decision-making—and the affective states (i.e., worry) of an individual might play a role in that individual’s understanding of, and compliance with, public health measures. Indeed, recent work has observed a positive relationship between one’s decision to comply with government-sanctioned social-distancing directives and working memory [77] as well as perceived (but not actual) pandemic-related risk [54]. Mirroring these findings, we found individuals who perceived themselves at higher risk for contracting COVID-19 also experienced greater pandemic worry and slower information processing (see S1 Table in S1 File). Given the putative role of coronavirus anxiety in successful goal-directed behaviour, it is possible that an individual’s worry level plays a key role in the tendency to comply with public health measures: anxiety is associated with both worse working memory and higher perceived risk for contracting COVID-19. One possibility is that pandemic-related worry could decrease compliance via putative changes in cognitive performance; another possibility is that worry could lead to increased compliance merely as a response to higher perceived risk of becoming sick, i.e., independently of the effects of worry on cognition. Given our cross-sectional design, our results cannot shed light on the possible directionality of these effects. Future research should examine the underlying factors contributing to cognitive function and decision-making under high stress to better understand the relationship between affective and cognitive states, and the impact of these states on real-world behaviours. This is of particular interest considering the declining adherence to government directives and the far-from-universal acceptance of vaccination [6, 7]. Our results also highlight the importance of considering the individual (i.e., worry) and the circumstantial (i.e., the pandemic) factors that lie outside of the experimenter’s control but may contribute to measured behaviour—hereafter it may be unreasonable to assume that human participants enter experiments in a neutral state [78]. Going forward, we believe that future research should consider these individual differences when interpreting any behaviour relying on executive functioning—particularly for research conducted during, and immediately following, the pandemic.

Supporting information

S1 File

(DOCX)

Data Availability

The data and analysis scripts are available on the Open Science Framework, accessible with the following link https://osf.io/87wz5/.

Funding Statement

This work was supported by the G. W. Stairs Fund, a Natural Sciences and Engineering Research Council of Canada Discovery Grant [RGPIN-2017-03918] https://www.nserc-crsng.gc.ca/nserc-crsng/, a Social Sciences and Humanities Research Council of Canada grant [430-2020-00518] https://www.sshrc-crsh.gc.ca/home-accueil-eng.aspx, and a Canada Foundation for Innovation Grant [36557] https://www.innovation.ca/ awarded to ARO, as well as a Fonds de Recherche du Québec - Santé https://frq.gouv.qc.ca/en/health/ grant awarded to MS. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Brooks SK, Webster RK, Smith LE, Woodland L, Wessely S, Greenberg N, et al. The psychological impact of quarantine and how to reduce it: rapid review of the evidence. The Lancet. 2020;395: 912–920. doi: 10.1016/S0140-6736(20)30460-8
2. Di Blasi M, Gullo S, Mancinelli E, Freda MF, Esposito G, Gelo OCG, et al. Psychological distress associated with the COVID-19 lockdown: a two-wave network analysis. Journal of affective disorders. 2021. doi: 10.1016/j.jad.2021.02.016
3. Kachanoff F, Bigman Y, Kapsaskis K, Gray K. Measuring Realistic and Symbolic Threats of COVID-19 and their Unique Impacts on Wellbeing and Adherence to Public Health Behaviors. 2020. doi: 10.1037/pspi0000217
4. Taylor S, Landry CA, Paluszek MM, Fergus TA, McKay D, Asmundson GJ. COVID stress syndrome: Concept, structure, and correlates. Depression and anxiety. 2020;37: 706–714. doi: 10.1002/da.23071
5. van Bavel JJV, Baicker K, Boggio PS, Capraro V, Cichocka A, Cikara M, et al. Using social and behavioural science to support COVID-19 pandemic response. Nature Human Behaviour. 2020;4: 460–471. doi: 10.1038/s41562-020-0884-z
6. Crane MA, Shermock KM, Omer SB, Romley JA. Change in reported adherence to nonpharmaceutical interventions during the COVID-19 pandemic, April-November 2020. JAMA. 2021. doi: 10.1001/jama.2021.0286
7. Lazarus JV, Ratzan SC, Palayew A, Gostin LO, Larson HJ, Rabin K, et al. A global survey of potential acceptance of a COVID-19 vaccine. Nature medicine. 2021;27: 225–228. doi: 10.1038/s41591-020-1124-9
8. Hinson JM, Jameson TL, Whitney P. Impulsive decision making and working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2003;29: 298–306. doi: 10.1037/0278-7393.29.2.298
9. Peters J, Büchel C. The neural mechanisms of inter-temporal decision-making: understanding variability. Trends in Cognitive Sciences. 2011;15: 227–239. doi: 10.1016/j.tics.2011.03.002
10. Diamond A. Executive Functions. Annu Rev Psychol. 2013;64: 135–168. doi: 10.1146/annurev-psych-113011-143750
11. Eysenck MW, Calvo MG. Anxiety and Performance: The Processing Efficiency Theory. Cognition and emotion. 1992;6: 409–434. doi: 10.1080/02699939208409696
12. Hayes S, Hirsch C, Mathews A. Restriction of working memory capacity during worry. Journal of abnormal psychology. 2008;117: 712. doi: 10.1037/a0012908
13. Moran TP. Anxiety and working memory capacity: A meta-analysis and narrative review. Psychological Bulletin. 2016;142: 831. doi: 10.1037/bul0000051
14. Eysenck MW, Calvo MG. Anxiety and Performance: The Processing Efficiency Theory. Cognition and emotion. 1992;6: 409–434. doi: 10.1080/02699939208409696
15. Wetherell JL, Reynolds CA, Gatz M, Pedersen NL. Anxiety, Cognitive Performance, and Cognitive Decline in Normal Aging. The Journals of Gerontology: Series B. 2002;57: P246–P255. doi: 10.1093/geronb/57.3.P246
16. Fitzpatrick KM, Drawve G, Harris C. Facing new fears during the COVID-19 pandemic: The State of America’s mental health. Journal of Anxiety Disorders. 2020;75: 102291. doi: 10.1016/j.janxdis.2020.102291
17. Heffner J, Vives M-L, FeldmanHall O. Psychological determinants of emotional distress during the COVID-19 pandemic. 2020.
18. Eysenck MW, Derakshan N, Santos R, Calvo MG. Anxiety and cognitive performance: attentional control theory. Emotion. 2007;7: 336. doi: 10.1037/1528-3542.7.2.336
19. Yang Y, Miskovich TA, Larson CL. State anxiety impairs proactive but enhances reactive control. Frontiers in psychology. 2018;9: 2570. doi: 10.3389/fpsyg.2018.02570
20. Robinson O, Vytal K, Cornwell B, Grillon C. The impact of anxiety upon cognition: perspectives from human threat of shock studies. Frontiers in Human Neuroscience. 2013;7: 203. doi: 10.3389/fnhum.2013.00203
21. Kail R, Salthouse TA. Processing speed as a mental capacity. Acta psychologica. 1994;86: 199–225. doi: 10.1016/0001-6918(94)90003-5
22. Monsell S. Task switching. Trends in cognitive sciences. 2003;7: 134–140. doi: 10.1016/s1364-6613(03)00028-7
23. Cohen JD, Barch DM, Carter C, Servan-Schreiber D. Context-processing deficits in schizophrenia: converging evidence from three theoretically motivated cognitive tasks. Journal of abnormal psychology. 1999;108: 120. doi: 10.1037//0021-843x.108.1.120
24. Miyake A, Friedman NP, Emerson MJ, Witzki AH, Howerter A, Wager TD. The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive psychology. 2000;41: 49–100. doi: 10.1006/cogp.1999.0734
25. Braver TS. The variable nature of cognitive control: a dual mechanisms framework. Trends in cognitive sciences. 2012;16: 106–113. doi: 10.1016/j.tics.2011.12.010
26. Burgess GC, Braver TS. Neural mechanisms of interference control in working memory: effects of interference expectancy and fluid intelligence. PloS one. 2010;5: e12861. doi: 10.1371/journal.pone.0012861
27. Zwaan RA, Pecher D, Paolacci G, Bouwmeester S, Verkoeijen P, Dijkstra K, et al. Participant Nonnaiveté and the reproducibility of cognitive psychology. Psychonomic Bulletin & Review. 2018;25: 1968–1972. doi: 10.3758/s13423-017-1348-y
28. Mertens G, Gerritsen L, Duijndam S, Salemink E, Engelhard IM. Fear of the coronavirus (COVID-19): Predictors in an online study conducted in March 2020. Journal of Anxiety Disorders. 2020; 102258. doi: 10.1016/j.janxdis.2020.102258
29. Wise T, Zbozinek TD, Michelini G, Hagan CC, Mobbs D. Changes in risk perception and self-reported protective behaviour during the first week of the COVID-19 pandemic in the United States. Royal Society Open Science. 2020;7: 200742. doi: 10.1098/rsos.200742
30. Butler G, Mathews A. Anticipatory anxiety and risk perception. Cognitive therapy and research. 1987;11: 551–565.
31. Eysenck MW, Derakshan N. Cognitive biases for future negative events as a function of trait anxiety and social desirability. Personality and Individual differences. 1997;22: 597–605.
32. MacLeod AK, Williams JM, Bekerian DA. Worry is reasonable: The role of explanations in pessimism about future personal events. Journal of Abnormal psychology. 1991;100: 478. doi: 10.1037//0021-843x.100.4.478
33. Clark L, Li R, Wright CM, Rome F, Fairchild G, Dunn BD, et al. Risk-avoidant decision making increased by threat of electric shock. Psychophysiology. 2012;49: 1436–1443. doi: 10.1111/j.1469-8986.2012.01454.x
34. Maner JK, Richey JA, Cromer K, Mallott M, Lejuez CW, Joiner TE, et al. Dispositional anxiety and risk-avoidant decision-making. Personality and Individual Differences. 2007;42: 665–675. doi: 10.1016/j.paid.2006.08.016
35. Ruggeri K, Alí S, Berge ML, Bertoldo G, Bjørndal LD, Cortijos-Bernabeu A, et al. Replicating patterns of prospect theory for decision under risk. Nature Human Behaviour. 2020; 1–12. doi: 10.1038/s41562-020-0818-9
36. Tversky A, Kahneman D. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and uncertainty. 1992;5: 297–323.
37. Hartley CA, Phelps EA. Anxiety and Decision-Making. Biological Psychiatry. 2012;72: 113–118. doi: 10.1016/j.biopsych.2011.12.027
38. Porcelli AJ, Delgado MR. Acute stress modulates risk taking in financial decision making. Psychological Science. 2009;20: 278–283. doi: 10.1111/j.1467-9280.2009.02288.x
39. Xu P, Gu R, Broster LS, Wu R, Van Dam NT, Jiang Y, et al. Neural Basis of Emotional Decision Making in Trait Anxiety. J Neurosci. 2013;33: 18641. doi: 10.1523/JNEUROSCI.1253-13.2013
40. Crump MJ, McDonnell JV, Gureckis TM. Evaluating Amazon’s Mechanical Turk as a tool for experimental behavioral research. PloS one. 2013;8: e57410. doi: 10.1371/journal.pone.0057410
41. Cohen J. Statistical power analysis for the behavioral sciences. 2nd edn. Erlbaum Associates, Hillsdale; 1988.
42. Kowal M, Coll-Martín T, Ikizer G, Rasmussen J, Eichel K, Studzińska A, et al. Who is the Most Stressed During the COVID-19 Pandemic? Data From 26 Countries and Areas. Applied Psychology: Health and Well-Being. 2020;12: 946–966. doi: 10.1111/aphw.12234
43. Mathias SR, Knowles EE, Barrett J, Leach O, Buccheri S, Beetham T, et al. The processing-speed impairment in psychosis is more than just accelerated aging. Schizophrenia bulletin. 2017;43: 814–823. doi: 10.1093/schbul/sbw168
  • 44.Salthouse TA. Speed of behavior and its implications for cognition. 1985. [Google Scholar]
  • 45.da Silva Castanheira K, Otto AR. Time Pressure and Processing Speed Effects on Risk Preferences. submitted. [Google Scholar]
  • 46.Otto AR, Daw ND. The opportunity cost of time modulates cognitive effort. Neuropsychologia. 2019;123: 92–105. doi: 10.1016/j.neuropsychologia.2018.05.006 [DOI] [PubMed] [Google Scholar]
  • 47.Otto AR, Vassena E. It’s all relative: Reward-induced cognitive control modulation depends on context. Journal of Experimental Psychology: General. 2020. doi: 10.1037/xge0000842 [DOI] [PubMed] [Google Scholar]
  • 48.MacDonald AW III. Building a Clinically Relevant Cognitive Task: Case Study of the AX Paradigm. Schizophrenia Bulletin. 2008;34: 619–628. doi: 10.1093/schbul/sbn038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Braver TS, Paxton JL, Locke HS, Barch DM. Flexible neural mechanisms of cognitive control within human prefrontal cortex. Proceedings of the National Academy of Sciences. 2009;106: 7351–7356. doi: 10.1073/pnas.0808187106 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Gonthier C, Macnamara BN, Chow M, Conway ARA, Braver TS. Inducing Proactive Control Shifts in the AX-CPT. Frontiers in Psychology. 2016;7: 1822. doi: 10.3389/fpsyg.2016.01822 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Otto AR, Skatova A, Madlon-Kay S, Daw ND. Cognitive control predicts use of model-based reinforcement learning. Journal of cognitive neuroscience. 2015;27: 319–333. doi: 10.1162/jocn_a_00709 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Cohen S, Kamarck T, Mermelstein R. Perceived stress scale. Measuring stress: A guide for health and social scientists. 1994;10: 1–2. [Google Scholar]
  • 53.Vinokur AD, Price RH, Caplan RD. Hard times and hurtful partners: How financial strain affects depression and relationship satisfaction of unemployed persons and their spouses. Journal of personality and social psychology. 1996;71: 166. doi: 10.1037//0022-3514.71.1.166 [DOI] [PubMed] [Google Scholar]
  • 54.Sinclair AH, Hakimi S, Stanley M, Adcock RA, Samanez-Larkin G. Pairing Facts with Imagined Consequences Improves Pandemic-Related Risk Perception. 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Mertens G, Duijndam S, Lodder P, Smeets T. Pandemic panic? Results of a 6-month longitudinal study on fear of COVID-19. 2020. 10.31234/osf.io/xtu3f [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Cunningham TJ, Fields EC, Garcia SM, Kensinger EA. The relation between age and experienced stress, worry, affect, and depression during the spring 2020 phase of the COVID-19 pandemic in the United States. Emotion. 2021. [DOI] [PubMed] [Google Scholar]
  • 57.Braver TS, Cohen JD. On the control of control: The role of dopamine in regulating prefrontal function and working memory. Control of cognitive processes: Attention and performance XVIII. 2000; 713–737. [Google Scholar]
  • 58.Henderson D, Poppe AB, Barch DM, Carter CS, Gold JM, Ragland JD, et al. Optimization of a Goal Maintenance Task for Use in Clinical Applications. Schizophrenia Bulletin. 2012;38: 104–113. doi: 10.1093/schbul/sbr172 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211: 453. doi: 10.1126/science.7455683 [DOI] [PubMed] [Google Scholar]
  • 60.Fales C, Barch D, Burgess G, Schaefer A, Mennin D, Gray J, et al. Anxiety and cognitive efficiency: differential modulation of transient and sustained neural activity during a working memory task. Cognitive, affective, & behavioral neuroscience. 2008;8: 239–253. doi: 10.3758/cabn.8.3.239 [DOI] [PubMed] [Google Scholar]
  • 61.Krug MK, Carter CS. Proactive and reactive control during emotional interference and its relationship to trait anxiety. Brain Research. 2012;1481: 13–36. doi: 10.1016/j.brainres.2012.08.045 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Lamm C, Pine DS, Fox NA. Impact of negative affectively charged stimuli and response style on cognitive-control-related neural activation: An ERP study. Brain and Cognition. 2013;83: 234–243. doi: 10.1016/j.bandc.2013.07.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Plessow F, Kiesel A, Kirschbaum C. The stressed prefrontal cortex and goal-directed behaviour: acute psychosocial stress impairs the flexible implementation of task goals. Experimental brain research. 2012;216: 397–408. doi: 10.1007/s00221-011-2943-1 [DOI] [PubMed] [Google Scholar]
  • 64.Plessow F, Fischer R, Kirschbaum C, Goschke T. Inflexibly focused under stress: acute psychosocial stress increases shielding of action goals at the expense of reduced cognitive flexibility with increasing time lag to the stressor. Journal of cognitive neuroscience. 2011;23: 3218–3227. doi: 10.1162/jocn_a_00024 [DOI] [PubMed] [Google Scholar]
  • 65.Derakshan N, Smyth S, Eysenck MW. Effects of state anxiety on performance using a task-switching paradigm: An investigation of attentional control theory. Psychonomic Bulletin & Review. 2009;16: 1112–1117. doi: 10.3758/PBR.16.6.1112 [DOI] [PubMed] [Google Scholar]
  • 66.Steinhauser M, Maier M, Hübner R. Cognitive control under stress: how stress affects strategies of task-set reconfiguration. Psychological science. 2007;18: 540–545. doi: 10.1111/j.1467-9280.2007.01935.x [DOI] [PubMed] [Google Scholar]
  • 67.Beckwé M, Deroost N, Koster EHW, De Lissnyder E, De Raedt R. Worrying and rumination are both associated with reduced cognitive control. Psychological Research. 2014;78: 651–660. doi: 10.1007/s00426-013-0517-5 [DOI] [PubMed] [Google Scholar]
  • 68.De Lissnyder E, Koster EHW, Goubert L, Onraedt T, Vanderhasselt M-A, De Raedt R. Cognitive control moderates the association between stress and rumination. Journal of Behavior Therapy and Experimental Psychiatry. 2012;43: 519–525. doi: 10.1016/j.jbtep.2011.07.004 [DOI] [PubMed] [Google Scholar]
  • 69.Balderston NL, Quispe-Escudero D, Hale E, Davis A, O’Connell K, Ernst M, et al. Working memory maintenance is sufficient to reduce state anxiety. Psychophysiology. 2016;53: 1660–1668. doi: 10.1111/psyp.12726 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Vytal K, Cornwell B, Arkin N, Grillon C. Describing the interplay between anxiety and cognition: from impaired performance under low cognitive load to reduced anxiety under high load. Psychophysiology. 2012;49: 842–852. doi: 10.1111/j.1469-8986.2012.01358.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Starcke K, Brand M. Decision making under stress: A selective review. Neuroscience & Biobehavioral Reviews. 2012;36: 1228–1248. doi: 10.1016/j.neubiorev.2012.02.003 [DOI] [PubMed] [Google Scholar]
  • 72.Norr AM, Albanese BJ, Oglesby ME, Allan NP, Schmidt NB. Anxiety sensitivity and intolerance of uncertainty as potential risk factors for cyberchondria. Journal of Affective Disorders. 2015;174: 64–69. doi: 10.1016/j.jad.2014.11.023 [DOI] [PubMed] [Google Scholar]
  • 73.Rosen NO, Knäuper B. A little uncertainty goes a long way: State and trait differences in uncertainty interact to increase information seeking but also increase worry. Health communication. 2009;24: 228–238. doi: 10.1080/10410230902804125 [DOI] [PubMed] [Google Scholar]
  • 74.Chao M, Xue D, Liu T, Yang H, Hall BJ. Media use and acute psychological outcomes during COVID-19 outbreak in China. Journal of Anxiety Disorders. 2020;74: 102248. doi: 10.1016/j.janxdis.2020.102248 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Schilbach F, Schofield H, Mullainathan S. The Psychological Lives of the Poor. The American Economic Review. 2016;106: 435–440. doi: 10.1257/aer.p20161101 [DOI] [PubMed] [Google Scholar]
  • 76.Shamosh NA, DeYoung CG, Green AE, Reis DL, Johnson MR, Conway AR, et al. Individual differences in delay discounting: relation to intelligence, working memory, and anterior prefrontal cortex. Psychological science. 2008;19: 904–911. doi: 10.1111/j.1467-9280.2008.02175.x [DOI] [PubMed] [Google Scholar]
  • 77.Xie W, Campbell S, Zhang W. Working memory capacity predicts individual differences in social-distancing compliance during the COVID-19 pandemic in the United States. Proc Natl Acad Sci USA. 2020;117: 17667. doi: 10.1073/pnas.2008868117 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78.Goldfarb EV. Participant stress in the COVID-19 era and beyond. Nature Reviews Neuroscience. 2020;21: 663–664. doi: 10.1038/s41583-020-00388-7 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

David V Smith

30 Sep 2021

PONE-D-21-23700

The impact of pandemic-related worry on cognitive functioning and risk-taking

PLOS ONE

Dear Dr. da Silva Castanheira,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Your manuscript has been reviewed by two expert reviewers. Although both reviewers noted several strengths of the study, they also raised many important concerns that should be addressed in a revision. Both reviewers recommended adding additional details regarding the methods. In addition, Reviewer 1 raised several important points regarding the interpretation of the results, and Reviewer 2 identified a few key issues with the methods and analyses. Addressing these issues will strengthen the manuscript. 

Please submit your revised manuscript by Nov 14 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

David V. Smith, Ph.D.

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections does not match.

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

Additional Editor Comments (if provided):


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this manuscript, the authors present results of a cognitive battery (processing speed, task switching, proactive vs. reactive cognitive control, and economic decision-making tasks) collected online in 3 waves between April-June 2020 (shortly after the COVID-19 pandemic began). They report that with increasing self-reported COVID worry, processing speed slows and proactive cognitive control (in terms of RT) decreases, and sensitivity to risky decision outcome probabilities increases. Task switching and decision risk/loss aversion did not significantly vary with COVID worry. Exploratory comparisons with pre-pandemic cognitive performance on these tasks suggests that cognitive performance was poorer and risk taking was reduced in the pandemic sample relative to pre-pandemic.

This is a valuable investigation on a timely issue, given the pervasiveness of stress and anxiety related to COVID-19 and the implications of associated changes in cognitive and decision performance. Overall I thought the paper was relatively sound; however, some additional information could have been provided and implications of the data explored further.

I was curious about how to interpret differences in the data between the waves collected. Specifically, it appears that pandemic-related worry decreased from Wave 1 to Wave 3, and some aspects of cognitive performance (i.e., processing speed) also declined from Wave 1 to 3, despite there also being a negative relationship between pandemic worry and processing speed. What additional sources of variance might have contributed to the performance decline? Should this be interpreted as noise?

I thought more information could be provided regarding the cognitive tasks as well as relationships in performance between them. For example, when describing the cognitive tasks, I was curious how long the task blocks of task switching, DPX, and decision performance were, and how this compared to the pre-pandemic data that they were compared against. I also was curious about correlations between performance and relationship with pandemic worry on different tasks (i.e., slowing on both the processing speed and DPX task) and whether examining for such correlations might help clarify why some metrics of executive function appeared to be modulated by pandemic worry and others were not.

I also was curious about the extent to which perceived risk for contracting COVID-19 (treated as a covariate) was related to COVID worry and whether this metric independently accounted for variance in performance. Sinclair et al. 2021, in PNAS, might also be relevant here.

Given that the paper framed COVID-related changes in cognitive performance in terms of understanding and complying with public health measures, I thought the authors could have discussed current public failures to comply with such measures in a bit more depth and nuance. Are the authors thinking of cognitive performance as a primary driver of such compliance, or other factors as well? Perhaps the observed decline in COVID worry that they observed over time continued past June 2020, and can be thought of as an affective factor contributing to failures in public compliance to COVID safety measures. I thought the paper could do a better job discussing potential contributions of such factors to public behavior as well as making clear the limitations of the present data and what kinds of contextual factors future studies might want to consider when examining cognitive performance under duress.

Reviewer #2: This research examined the relationship between Covid-19 related worry, working memory, and risk-taking behavior in an online sample. In general, I found the paper quite enjoyable — the abstract is exceptionally well-written. I think this paper would merit publication with minor revisions.

1. The primary thing I would like to see in a revised manuscript would be the correlation of the cognitive tasks with one another. This seems important particularly in light of the claims in the discussion that they perhaps differentially rely on working memory.

2. There is some use of acronyms without specifying what they are to the reader (FCQ; AX-CPT). I inferred, but it would be helpful to make it more explicit.

3. I’m confused about the analysis outlined on p. 10. I thought that covid related worry was predicting cognitive function, but it sounds like those were used as predictor variables, given that they are described as fixed and random effects?

4. Why was the Digit-Symbol Coding task run twice? I understand it’s due to accuracy, but aren’t they worried about practice effects? How many participants fell into this category? Were there similar accuracy constraints on the other tasks (it seems like no, why not?)

5. The claim that worry serves as a mediator to working memory disruption and correspondingly, obeying government sanctions is a big one. I would back off of that language, especially given that the link between worry, working memory, and Covid-19 related behavior were not tested in the scope of this paper.

6. There are a number of typos/omissions in the paper, and I would suggest a careful read before the next revision. I caught a few, listed below:

- “Dependant” is a typo on p. 10

- Figure 1, the caption is missing another variable name when it says “overall perceived stress did not vary as a function of”

- Figure 2E is missing the bars

- Figure 5e, the x axis is “Pandemic-related worry”

- Pg 19 is missing an of in “the effects negative stress on task-switching ability”

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Nov 18;16(11):e0260061. doi: 10.1371/journal.pone.0260061.r002

Author response to Decision Letter 0


18 Oct 2021

Reviewer #1:

In this manuscript, the authors present results of a cognitive battery (processing speed, task switching, proactive vs. reactive cognitive control, and economic decision-making tasks) collected online in 3 waves between April-June 2020 (shortly after the COVID-19 pandemic began). They report that with increasing self-reported COVID worry, processing speed slows and proactive cognitive control (in terms of RT) decreases, and sensitivity to risky decision outcome probabilities increases. Task switching and decision risk/loss aversion did not significantly vary with COVID worry. Exploratory comparisons with pre-pandemic cognitive performance on these tasks suggests that cognitive performance was poorer and risk taking was reduced in the pandemic sample relative to pre-pandemic.

This is a valuable investigation on a timely issue, given the pervasiveness of stress and anxiety related to COVID-19 and the implications of associated changes in cognitive and decision performance. Overall I thought the paper was relatively sound; however, some additional information could have been provided and implications of the data explored further.

We appreciate the reviewer’s overall positive evaluation of our manuscript and thank the reviewer for their constructive comments on the clarity of the submission.

I was curious about how to interpret differences in the data between the waves collected. Specifically, it appears that pandemic-related worry decreased from Wave 1 to Wave 3, and some aspects of cognitive performance (i.e., processing speed) also declined from Wave 1 to 3, despite there also being a negative relationship between pandemic worry and processing speed. What additional sources of variance might have contributed to the performance decline? Should this be interpreted as noise?

We share the reviewer’s curiosity over the decreases in task performance over waves (e.g., processing speed) without a corresponding increase in self-reported subjective worry indexed by FCQ scores. In the discussion, we now provide more speculation as to the source of this variation. While it is possible that these differences reflect a meaningful change between waves (e.g., prolonged exposure to stress), it is also possible—as the reviewer points out—that these differences may reflect noise. Critically, given the cross-sectional nature of our study, we discuss the possibility that these wave differences in both performance and self-report anxiety reflect demographic differences in the samples associated with changes to the pool of MTurk participants—particularly as the pandemic was associated with a marked increase in financial hardship and work from home.

Page 29:

Given that we are comparing different groups of participants who were captured at different points in time, we can only speculate as to the mechanisms underlying the decline in cognitive performance across this 2-month period in the early stages of the pandemic. In keeping with other research [54,55], we observed a decrease in the level of worry over time whereas task performance did not significantly change—suggesting there may be other factors contributing to impaired executive functioning. For example, the persistently impaired task performance may reflect the impact of prolonged (i.e., chronic) exposure to stress or a shift in the sources of worry. Supporting this view, we found that self-reported perceived stress remained stable across the waves of data collection. However, despite controlling for possible demographic differences between samples (e.g., age, gender, years on AMT; see Supplemental Materials), we cannot rule out the possibility that our groups (collected both in years prior to the pandemic and throughout) sampled different populations, as the pandemic was associated with a marked increase in financial hardships and remote work. Thus, the sample differences discussed here should be interpreted as exploratory, as more work is needed to understand whether these wave effects reflect meaningful differences.

I thought more information could be provided regarding the cognitive tasks as well as relationships in performance between them. For example, when describing the cognitive tasks, I was curious how long the task blocks of task switching, DPX, and decision performance were, and how this compared to the pre-pandemic data that they were compared against. I also was curious about correlations between performance and relationship with pandemic worry on different tasks (i.e., slowing on both the processing speed and DPX task) and whether examining for such correlations might help clarify why some metrics of executive function appeared to be modulated by pandemic worry and others were not.

The reviewer’s point on the clarity of our methods is well taken. We have made edits throughout the methods section to improve the clarity and transparency of the procedures used.

With respect to block length, all the tasks used in the current experiments were identical to those used in the pre-pandemic samples, both in terms of stimuli and duration (trial number, response deadlines, etc.). We thank the reviewer for pointing out this ambiguity and have clarified this point in our revision.

Page 8:

These tasks were identical to those used in our pre-pandemic samples in terms of stimuli, trial number and response-deadlines.

Finally, with respect to correlations, we share the reviewer’s interest in the relationship among the various executive functioning tasks used and the observed pattern—FCQ scores predicted slower processing speed and reduced proactive control but not task switching. As suggested, we computed simple Pearson correlations among the different metrics of task performance: digit symbol scores, computed as the total number of correct responses within 90 seconds; the PBI, computed as the relative degree of interference on trials recruiting proactive vs. reactive control, (RT_AY – RT_BX)/(RT_AY + RT_BX); and the empirical Bayes estimates of switch costs from the hierarchical linear regression predicting log RTs. As seen in the correlation matrix below, we observed a statistically significant relationship between digit symbol scores and both proactive control and task-switching costs. However, reliance on proactive control was not related to performance on the task-switching paradigm. As the reviewer intuited, this corroborates our results: while processing speed seems to be a more fundamental aspect of cognition—being related to performance on the other two tasks—proactive cognitive control and task-switching performance seem to index disparate aspects of executive functioning (Miyake et al., 2000). While these correlations do not provide direct evidence, they are aligned with the purported effects of worry on working memory capacity outlined in the discussion.

                        Switch Costs   Digit Symbol Score   PBI
Switch Costs
Digit Symbol Score         0.112*
PBI                        0.002            0.124*

Correlations were computed using the Pearson method with listwise deletion.
* p < .05
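To make these computations concrete, the sketch below shows one way the three per-participant metrics and their Pearson correlations with listwise deletion could be derived. It is an illustrative sketch only, not the authors' analysis code: the column names and simulated data are assumptions, and a simple per-subject contrast stands in for the empirical Bayes switch-cost estimates described above.

```python
# Illustrative sketch only (not the authors' analysis code): computing the three task
# metrics and their Pearson correlations with listwise deletion.
# Column names and the simulated data are assumptions for the example.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_subj = 200

# Digit-symbol score: number of correct responses within the 90-second window.
digit_symbol = pd.DataFrame({
    "subject": np.arange(n_subj),
    "digit_symbol_score": rng.poisson(45, n_subj),
})

# DPX trials: PBI = (RT_AY - RT_BX) / (RT_AY + RT_BX), from mean RTs per subject.
dpx = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), 40),
    "trial_type": np.tile(["AY", "BX"] * 20, n_subj),
    "rt": rng.normal(0.6, 0.1, n_subj * 40),
})
mean_rt = dpx.groupby(["subject", "trial_type"])["rt"].mean().unstack()
pbi = ((mean_rt["AY"] - mean_rt["BX"]) / (mean_rt["AY"] + mean_rt["BX"])).rename("pbi")

# Task switching: per-subject switch cost on log RTs. A simple subject-level contrast
# stands in here for the empirical Bayes estimates from the hierarchical regression.
switching = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), 80),
    "is_switch": np.tile([0, 1] * 40, n_subj),
    "rt": rng.lognormal(-0.4, 0.3, n_subj * 80),
})
switching["log_rt"] = np.log(switching["rt"])
switch_cost = (switching.groupby(["subject", "is_switch"])["log_rt"].mean()
               .unstack()
               .pipe(lambda m: m[1] - m[0])
               .rename("switch_cost"))

# Pearson correlations with listwise deletion (drop participants missing any metric).
metrics = digit_symbol.set_index("subject").join(pbi).join(switch_cost).dropna()
print(metrics.corr(method="pearson"))
```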

Accordingly, we have updated the discussion section of the paper to reflect this new analysis. We thank the reviewer for their insightful comment, which has strengthened the overall argument of the submission.

Page 27:

However, we failed to find a relationship between pandemic worry and task-switching ability. This is perhaps not surprising considering that studies of the effects of negative affective states on task-switching ability have revealed mixed findings [62,63]. Critically, working-memory demands are thought to play a key role in anxiety-induced impairments in set shifting ability, as both task complexity [64], and response preparation [65] have been found to mediate the predictive effects of anxiety on task-switching. Thus, it is possible that the chosen task switching paradigm did not sufficiently tax working memory to observe a worry-induced decline in performance, as task complexity and response preparation were not manipulated. Dovetailing with these findings, here we found that processing speed was related to performance on both the task-switching and proactive control tasks, but proactive control and task-switching performance did not appear to relate to one another (see S17 Table). Together, these results highlight how executive functioning reflects a related, yet distinct set of cognitive abilities [23], and suggest that the effects of worry are not of equal magnitude across domains.

I also was curious about the extent to which perceived risk for contracting COVID-19 (treated as a covariate) was related to COVID worry and whether this metric independently accounted for variance in performance. Sinclair et al. 2021, in PNAS, might also be relevant here.

We share the reviewer’s intuition that perceived risk of contracting COVID-19 would contribute to task performance. While our previous submission had already controlled for perceived risk of contracting severe COVID-19, we recognize that these results were not presented clearly and with due emphasis. We have updated the manuscript accordingly.

In our study, participants rated whether they perceived themselves to be at elevated risk for contracting a severe case of COVID-19 (i.e., “Do you think you are at increased risk of experiencing a more severe case of COVID-19/coronavirus (i.e., due to an underlying medical condition)?”). This variable was then effects-coded (high vs. low risk) based on whether participants did or did not endorse the statement, and entered into all appropriate regressions. Perceiving oneself as being at an elevated risk for contracting COVID-19 was associated with higher levels of pandemic-related worry (i.e., FCQ scores, B = 2.3799, 95% CI = [1.5289 – 3.2309], p < 0.001). In terms of task performance, beyond self-reported worry, those who perceived themselves to be at an elevated risk were slower (B = 0.0362, 95% CI = [0.0003 – 0.0722], p = 0.048) and made fewer correct responses on the digit-symbol coding task (B = -0.8549, 95% CI = [-1.4616 – -0.2482], p = 0.006). However, no effect of perceived risk was observed for our measure of proactive control (PBI; B = 0.0003, 95% CI = [-0.0258 – 0.0264], p = 0.982) or overall risky choice (B = -0.1459, 95% CI = [-0.3497 – 0.0579], p = 0.160). Thus, while overall perceived risk for contracting a severe case of COVID-19 predicts some variance in processing speed independently of pandemic-related worry (i.e., FCQ scores), this does not appear to be the case for either proactive control or overall risk preferences. Considering these results, we have updated the manuscript to include this important distinction between worry and perceived risk.
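As a concrete illustration of this analysis approach, the hedged sketch below effects-codes a binary perceived-risk item and enters it, alongside FCQ worry and covariates, into an ordinary least-squares regression. The variable names and simulated data are assumptions for the example, not the authors' code or dataset.

```python
# Illustrative sketch only: effects-code a binary perceived-risk item (+1 = endorsed
# elevated risk, -1 = not endorsed) and enter it with FCQ worry and covariates in a
# regression on digit-symbol performance. Names and data are assumed for the example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "fcq": rng.normal(30, 8, n),                 # pandemic-related worry score
    "high_risk": rng.integers(0, 2, n),          # 1 = endorsed elevated risk
    "age": rng.integers(20, 70, n),
    "perceived_stress": rng.normal(20, 5, n),
    "financial_strain": rng.normal(10, 3, n),
})
df["risk_ec"] = np.where(df["high_risk"] == 1, 1, -1)   # effects coding
df["digit_symbol_score"] = (50 - 0.1 * df["fcq"] - 1.0 * df["risk_ec"]
                            + rng.normal(0, 5, n))      # toy outcome

fit = smf.ols(
    "digit_symbol_score ~ fcq + risk_ec + age + perceived_stress + financial_strain",
    data=df,
).fit()
print(fit.params)
print(fit.conf_int())   # 95% confidence intervals, analogous to those reported above
```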

We have updated the Methods, Results, and Discussion sections to explicitly state the question used and the observed relationships:

Page 17:

Finally, participants were asked to indicate if they felt sick with any COVID-19 symptoms and to determine if they believed themselves to be at an elevated risk of contracting a more severe case of COVID-19 (i.e., “Do you think you are at increased risk of experiencing a more severe case of COVID-19/coronavirus (i.e., due to an underlying medical condition)?”) [53].

Page 19:

Additionally, perceiving oneself as being at an elevated risk for contracting COVID-19 was associated with higher levels of pandemic-related worry (B =2.3799, 95% CI =[1.5289 – 3.2309], p <0.001) [53].

Page 20:

Interestingly, we observed that higher levels of pandemic-related worry—indexed by the FCQ—predicted lower performance (Fig 2B). Statistically, this predictive effect was significant, controlling for age, gender, overall stress, financial strain, perceived risk of contracting COVID-19, and several other demographic variables (β_FCQ = -0.940, CI = [-1.591, -0.289], p = .005; see S1 Table). Additionally, we found that even while controlling for pandemic-related worry, perceived risk of contracting a severe case of COVID-19 was also predictive of slower overall processing speed (β = 0.0362, 95% CI = [0.0003 – 0.0722], p = 0.048; see S1 Table). We also examined whether the predictive effect of worry on digit-symbol performance could be explained by a decrease in response caution, as indexed by faster correct RTs [42]. Instead, we found that increased FCQ scores were associated with slower correct response times (RTs; β = 0.019, CI = [0.005, 0.033], p = .009; see Fig 2C and S2 Table) after controlling for the covariates listed above, indicating that the locus of worry’s predictive effect was a general slowing of information processing.

Page 22:

We next examined whether pandemic-related worry predicted deficits in the proactive behavioural index (PBI) [48]—an RT-based measure of proactive control, calculated as (RT_AY - RT_BX) / (RT_AY + RT_BX)—finding that individuals with higher FCQ scores had lower PBIs (Fig 4B; β = -0.0020, CI = [-0.0034, -0.0005], p = 0.010; see S9 Table), controlling for gender, age, perceived risk of contracting COVID-19, self-reported stress, and financial strain (see S8 Table for the full list of covariates).

Page 23:

Importantly, this relationship between sensitivity to outcome probabilities and individual worry was robust after controlling for perceived risk of contracting COVID-19 (see S12 Table for full list of covariates).

Page 30:

For instance, though we did not measure current compliance with COVID-19-related measures, it is possible that the behavioral repertoires examined here—executive functioning and risky decision-making—and the affective states (i.e., worry) of an individual might play a role in that individual’s understanding of, and compliance with, public health measures. Indeed, recent work has observed a positive relationship between one’s decision to comply with government-sanctioned social-distancing directives and working memory [76] as well as perceived (but not actual) pandemic-related risk [53]. Mirroring these findings, we found that individuals who perceived themselves at higher risk for contracting COVID-19 also experienced greater pandemic-related worry and slower information processing (see S1 Table).

Given that the paper framed COVID-related changes in cognitive performance in terms of understanding and complying with public health measures, I thought the authors could have discussed current public failures to comply with such measures in a bit more depth and nuance. Are the authors thinking of cognitive performance as a primary driver of such compliance, or other factors as well? Perhaps the observed decline in COVID worry that they observed over time continued past June 2020, and can be thought of as an affective factor contributing to failures in public compliance to COVID safety measures. I thought the paper could do a better job discussing potential contributions of such factors to public behavior as well as making clear the limitations of the present data and what kinds of contextual factors future studies might want to consider when examining cognitive performance under duress.

We agree with the reviewer that our discussion of possible future directions regarding the factors contributing to adaptive behaviour (e.g., following public health directives) under duress was unclear. As the reviewer points out, it is likely that both cognitive ability and affective states contribute to one’s ability to respond to stressful situations (e.g., a global pandemic). However, the directionality of these effects remains unclear. We have accordingly updated the Discussion to reflect our speculations on the underlying factors contributing to adaptive behaviour and to further outline the limitations of the present study.

Page 28:

We believe the results outlined here have broader implications for human behaviour, as intact executive control is a cornerstone of healthy daily living (e.g., economic productivity), workplace performance [74], and the delay of gratification [75]. For instance, though we did not measure current compliance with COVID-19-related measures, it is possible that the behavioral repertoires examined here—executive functioning and risky decision-making—and the affective states (i.e., worry) of an individual might play a role in that individual’s understanding of, and compliance with, public health measures. Indeed, recent work has observed a positive relationship between one’s decision to comply with government-sanctioned social-distancing directives and working memory [76] as well as perceived (but not actual) pandemic-related risk [53]. Mirroring these findings, we found that individuals who perceived themselves at higher risk for contracting COVID-19 also experienced greater pandemic-related worry and slower information processing (see S1 Table). Given the putative role of coronavirus anxiety in successful goal-directed behaviour, it is possible that an individual’s worry level plays a key role in the tendency to comply with public health measures: anxiety is associated both with worse working memory and with higher perceived risk for contracting COVID-19. One possibility is that pandemic-related worry could decrease compliance via putative changes in cognitive performance; another possibility is that worry could lead to increased compliance merely as a response to higher perceived risk of becoming sick, i.e., independently of the effects of worry on cognition. Given our cross-sectional design, our results cannot shed light on the possible directionality of these effects. Future research should examine the underlying factors contributing to cognitive function and decision-making under high stress to better understand the relationship between affective and cognitive states, and the impact of these states on real-world behaviours. This is of particular interest considering the declining adherence to government directives and the far-from-universal acceptance of vaccination [6,7].

Reviewer #2:

This research examined the relationship between Covid-19 related worry, working memory, and risk-taking behavior in an online sample. In general, I found the paper quite enjoyable — the abstract is exceptionally well-written. I think this paper would merit publication with minor revisions.

We are excited to hear the reviewer’s positive evaluation of our paper and are appreciative for their helpful and insightful comments.

1. The primary thing I would like to see in a revised manuscript would be the correlation of the cognitive tasks with one another. This seems important particularly in light of the claims in the discussion that they perhaps differentially rely on working memory.

We share the reviewer’s curiosity about whether our cognitive measures index shared versus disparate aspects of executive functions. While previous work has more extensively examined the interrelationships among different cognitive tasks (see Miyake et al., 2000, Cognitive Psychology), we report here the correlations between the different measures in question. As suggested, we ran Pearson correlations between the different metrics of task performance: processing speed was computed as the total number of correct responses within 90 seconds on the digit symbol task; proactive control was indexed by the PBI, the relative degree of interference on trials recruiting proactive vs. reactive control, (RT_AY – RT_BX)/(RT_AY + RT_BX); and task-switching ability was computed from the empirical Bayes estimates of switch costs from the hierarchical linear regression predicting log RTs. As seen below, we observed a statistically significant relationship between our measure of processing speed and both proactive control and task-switching ability. However, reliance on proactive control was not related to performance on the task-switching paradigm. In keeping with the different relationships between the tasks and worry, this pattern of correlations further supports the possibility that these different measures of cognitive function index related, but distinct, facets of executive functioning (Miyake et al., 2000). While processing speed seems to be a more fundamental aspect of cognition—being related to performance on the other two tasks—proactive cognitive control and task-switching performance seem to be unrelated. Perhaps this lack of correlation, despite the large sample size, reflects a differential reliance on working-memory capacity between the tasks. While these correlations do not provide evidence for a specific underlying factor structure, we believe they support the hypothesized effects of worry on working memory capacity outlined in the discussion. We have thus updated the discussion to reflect these analyses.

                        Switch Costs   Digit Symbol Score   PBI
Switch Costs
Digit Symbol Score         0.112*
PBI                        0.002            0.124*

Correlations were computed using the Pearson method with listwise deletion.
* p < .05

Page 27:

Critically, working-memory demands are thought to play a key role in anxiety-induced impairments in set shifting ability, as both task complexity [64], and response preparation [65] have been found to mediate the predictive effects of anxiety on task-switching. Thus, it is possible that the chosen task switching paradigm did not sufficiently tax working memory to observe a worry-induced decline in performance, as task complexity and response preparation were not manipulated. Dovetailing with these findings, here we found that processing speed was related to performance on both the task-switching and proactive control tasks, but proactive control and task-switching performance did not appear to relate to one another (see S17 Table).

2. There is some use of acronyms without specifying what they are to the reader (FCQ; AX-CPT). I inferred, but it would be helpful to make it more explicit.

We thank the reviewer for their astute observation. We have updated the manuscript to include explicit definitions of the acronyms used.

3. I’m confused about the analysis outlined on p. 10. I thought that covid related worry was predicting cognitive function, but it sounds like those were used as predictor variables, given that they are described as fixed and random effects?

The reviewer is correct in pointing out that pandemic-related worry served as a predictor variable in the hierarchical models, with task performance (i.e., correct responses or response times) as the outcome variable. The variables that served as both fixed and random effects were the experimental trial-by-trial manipulations. In other words, the outcomes were not the trial types (i.e., switch or repeat) but the response times, which were modeled as a function of trial type. We apologize for the confusion and have updated the manuscript to clarify these ambiguities in our analysis.

Page 17:

For the multi-level models, we included the within-participant trial-by-trial predictor variables (i.e., task switch versus repetition, response congruency, decision frame [losses versus gains], and outcome probability [likely versus unlikely]) as both random and fixed effects. This allowed us to obtain group- and participant-level estimates of the effects in question (e.g., switch costs).
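To make the model structure concrete, here is a minimal, hedged sketch of such a multi-level model for the task-switching data: the trial-level switch/repeat predictor enters as a fixed effect (interacting with FCQ worry) and as a by-participant random slope. The variable names and simulated data are illustrative assumptions rather than the authors' specification, and the statsmodels MixedLM fit is a simplified stand-in for the reported hierarchical regressions.

```python
# Illustrative sketch only: a hierarchical regression on log RTs with the trial-level
# switch/repeat manipulation as both a fixed effect (interacting with worry) and a
# by-participant random effect. Names and simulated data are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_trials = 60, 80
trials = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "is_switch": np.tile([0, 1] * (n_trials // 2), n_subj),
})
fcq = rng.normal(30, 8, n_subj)              # one worry score per participant
subj_cost = rng.normal(0.10, 0.05, n_subj)   # per-participant switch cost
trials["fcq"] = fcq[trials["subject"]]
trials["log_rt"] = (-0.4 + subj_cost[trials["subject"]] * trials["is_switch"]
                    + rng.normal(0, 0.3, len(trials)))  # toy log response times

model = smf.mixedlm("log_rt ~ is_switch * fcq", data=trials,
                    groups=trials["subject"], re_formula="~is_switch")
result = model.fit()
print(result.summary())
# result.random_effects holds the per-participant (empirical Bayes) estimates,
# e.g. participant-level switch costs.
```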

4. Why was the Digit-Symbol Coding task run twice? I understand it’s due to accuracy, but aren’t they worried about practice effects? How many participants fell into this category? Were there similar accuracy constraints on the other tasks (it seems like no, why not?)

We share the reviewer’s concern over the decision to run the digit-symbol task twice. This procedure was chosen to mirror the procedure used in a previous study in our lab (da Silva Castanheira & Otto, submitted). Originally, this choice was used as an attention check to ensure participants were taking the task seriously, as the task was administered online. So as not to introduce new sources of variance, we decided to keep this procedure when collecting data during the three waves.

The digit symbol coding task itself is relatively simple and does not require memorizing or learning any rules; the digit-symbol mappings are presented to the participant and do not change throughout the task (Mathias et al. 2017). This is because the digit-symbol coding task is specifically aimed at indexing individuals’ ability to process information quickly, and not their ability to learn, or their working memory capacity (Salthouse 1985). Thus, given that participants are given ample opportunity to practice, it is unlikely their performance reflects learning.

Approximately 35% of participants were asked to repeat the task (28% in wave 1, 33% in wave 2, and 42% in wave 3). Since we anticipated worse performance on the tasks due to pandemic-induced worry and overall stress, we wanted to include as many opportunities as possible to ensure optimal task performance. Despite this, we found that participants were overall worse at performing the task, both as a function of pandemic worry and across waves (both pre-pandemic and during). Overall, we found that participants who repeated the task were generally worse—making fewer correct responses within the allotted 90 seconds—than those who did not (β = -2.3263, CI = [-4.0692, -0.5833], p = 0.009), suggesting that task repetitions did not improve performance through practice effects. Critically, controlling for whether or not the participant repeated the task did not change the conclusion of our analysis: pandemic-related anxiety is associated with slowed processing speed (β = -0.8950, CI = [-1.5451, -0.2448], p = 0.007). Thus, we are confident that this design choice did not fundamentally influence our results.

Accordingly, we have updated the description and logic behind this choice in our revised Methods.

Page 8:

In keeping with previous pre-pandemic samples collected in the lab, we implemented an attention check designed for online data collection where participants who did not achieve 70% accuracy on the task were asked to complete the task a second time (28% of participants in wave 1, 33% in wave 2, and 42% in wave 3). In this case, only data from the second run were analyzed [44]. We compared the current samples to the previously collected data [44].

5. The claim that worry serves as a mediator to working memory disruption and correspondingly, obeying government sanctions is a big one. I would back off of that language, especially given that the link between worry, working memory, and Covid-19 related behavior were not tested in the scope of this paper.

We agree with the reviewer that the discussion of the results needs to be more carefully framed. Given Reviewer 1’s interest in this possible mediation (see comment 4), we have tempered language used in our discussion of the possible linkage between these constructs. In response to this comment, we have also elaborated on the limitations of this sort of interpretation and highlight the need for future research to directly test this question.

Page 28:

We believe the results outlined here have broader implications for human behaviour, as intact executive control is a cornerstone of healthy daily living (e.g., economic productivity), workplace performance [74], and the delay of gratification [75]. For instance, though we did not measure current compliance with COVID-19-related measures, it is possible that the behavioral repertoires examined here—executive functioning and risky decision-making—and the affective states (i.e., worry) of an individual might play a role in that individual’s understanding of, and compliance with, public health measures. Indeed, recent work has observed a positive relationship between one’s decision to comply with government-sanctioned social-distancing directives and working memory [76] as well as perceived (but not actual) pandemic-related risk [53]. Mirroring these findings, we found that individuals who perceived themselves at higher risk for contracting COVID-19 also experienced greater pandemic-related worry and slower information processing (see S1 Table). Given the putative role of coronavirus anxiety in successful goal-directed behaviour, it is possible that an individual’s worry level plays a key role in the tendency to comply with public health measures: anxiety is associated both with worse working memory and with higher perceived risk for contracting COVID-19. One possibility is that pandemic-related worry could decrease compliance via putative changes in cognitive performance; another possibility is that worry could lead to increased compliance merely as a response to higher perceived risk of becoming sick, i.e., independently of the effects of worry on cognition. Given our cross-sectional design, our results cannot shed light on the possible directionality of these effects. Future research should examine the underlying factors contributing to cognitive function and decision-making under high stress to better understand the relationship between affective and cognitive states, and the impact of these states on real-world behaviours. This is of particular interest considering the declining adherence to government directives and the far-from-universal acceptance of vaccination [6,7].

6. There are a number of typos/omissions in the paper, and I would suggest a careful read before the next revision. I caught a few, listed below:

- “Dependant” is a typo on p. 10

- Figure 1, the caption is missing another variable name when it says “overall perceived stress did not vary as a function of”

- Figure 2E is missing the bars

- Figure 5e, the x axis is “Pandemic-related worry”

- Pg 19 is missing an of in “the effects negative stress on task-switching ability”

We have updated the manuscript according to the comments above and thank the reviewer for their astuteness.

Attachment

Submitted filename: Response to Reviewers_ms.docx

Decision Letter 1

David V Smith

2 Nov 2021

The impact of pandemic-related worry on cognitive functioning and risk-taking

PONE-D-21-23700R1

Dear Dr. da Silva Castanheira,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Although I was not able to secure a re-review from one of the original reviewers, I believe the comments were addressed and do not require an additional reviewer. 

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

David V. Smith, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you. The authors have addressed my concerns.

Very minor revision - I note that in the revised text the authors use the acronym AMT (page 29) which does not appear to be used anywhere else in the text. With some sleuthing I was able to determine this means Amazon Mechanical Turk, but I think this term could be corrected to “MTurk” as that shortened name is used elsewhere in the manuscript, or the acronym should be defined appropriately.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Acceptance letter

David V Smith

8 Nov 2021

PONE-D-21-23700R1

The impact of pandemic-related worry on cognitive functioning and risk-taking

Dear Dr. da Silva Castanheira:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. David V. Smith

Academic Editor

PLOS ONE

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

S1 File

(DOCX)

Attachment

Submitted filename: Response to Reviewers_ms.docx

Data Availability Statement

The data and analysis scripts are available on the Open Science Framework, accessible with the following link https://osf.io/87wz5/.

