PLOS One. 2022 Sep 2;17(9):e0273945. doi: 10.1371/journal.pone.0273945

Mood symptoms predict COVID-19 pandemic distress but not vice versa: An 18-month longitudinal study

Benjamin A Katz 1,*, Iftah Yovel 1
Editor: Sergio A Useche
PMCID: PMC9439223  PMID: 36054108

Abstract

The COVID-19 pandemic has had medical, economic and behavioral implications on a global scale, with research emerging to indicate that it negatively impacted the population’s mental health as well. The current study utilizes longitudinal data to assess whether the pandemic led to an increase in depression and anxiety across participants or whether a diathesis-stress model would be more appropriate. An international group of 218 participants completed measures of depression, anxiety, rumination and distress intolerance at two baselines six months apart as well as during the onset of the COVID-19 pandemic exactly 12 months later. Contrary to expectations, depression, rumination, and distress intolerance were at equivalent levels during the pandemic as they were at baseline. Anxiety was reduced by a trivial degree (d = .10). Furthermore, a comparison of quantitative explanatory models indicated that symptom severity and pandemic-related environmental stressors predicted pandemic-related distress. Pandemic-related distress did not predict symptom severity. These findings underscore the necessity of longitudinal designs and diathesis-stress models in the study of mental health during the COVID-19 pandemic. They also emphasize that individuals with higher rates of baseline psychopathology are particularly at risk for higher levels of distress in response to disaster-related stressors.

Introduction

COVID-19 (SARS-CoV-2) is a novel, disproportionately infectious and lethal strain of coronavirus that became a global pandemic over the course of early 2020 [1]. Theoretical articles have highlighted how the COVID-19 pandemic includes stressors uniquely fit to negatively impact mental health on a population level as well [26]. The high levels of health and financial uncertainty salient to the pandemic are strongly linked to stress [7], internalizing pathology [8] and externalizing pathology [9]. Furthermore, early restrictions on travel and gathering, self-isolation, and quarantine have led to periods of loss of social support and loneliness, both of which predict depression in particular [10, 11].

Researchers and public health specialists raised concerns that this time of acute stress would bring about a global spike in mental illness [12]. In doing so, they implicitly argue in favor of a general-stressor model, pointing out that the many disruptions and challenges presented by the COVID-19 pandemic have led to greater levels of cumulative stress across the population (Fig 1a). This population-level stress would transdiagnostically increase rates of mental illness [13]. Indeed, infection with COVID-19 has been found to negatively impact mental health [14, 15], and pandemic-related stress has been found to be associated with elevated symptom severity [16, 17]. However, recent large-scale studies have found less support for the general stressor model on a population level, even at the beginning of the pandemic, a period of great stress and uncertainty [6, 18, 19]. Indeed, populations tend to be quite resilient in the face of disaster-related stress (for review, see [20]). For example, in 2012, Hurricane Sandy-related stress only predicted elevated symptoms among children predisposed to symptom-relevant affect; those high in temperamental sadness in a prior assessment showed elevated levels of depressive symptoms, while those high in temperamental fearfulness showed elevated levels of anxiety symptoms [21]. Similarly, the COVID-19 pandemic may serve as a trigger for those with vulnerabilities to specific disorders, rather than as a population-level stressor [22].

Fig 1. Alternate explanations for the association between COVID-19 pandemic-related stress and mood disorders.


Fig 1a presents a general stressor model, where pandemic-related environmental stressors lead to stress, which in turn leads to a change in symptom severity. Fig 1b presents a diathesis-stress model, where pandemic-related stress is predicted by both current levels of symptom severity and pandemic-related environmental stressors.

Furthermore, the theorized relationship between pandemic-related distress and psychopathology may follow the opposite causal direction to that presumed in a general-stressor model. Loneliness caused by self-isolation may lead to greater levels of depression as hypothesized [10, 23]. However, in a diathesis-stress model (see Fig 1b), the opposite causal direction is also possible [13]. In such a case, individuals with more severe depression are themselves more sensitive to distress [24]. Those with more severe baseline symptoms may be more sensitive to the loneliness experienced during self-isolation, particularly at the beginning [18, 25]. Thus, in such a scenario, the observed association between depression and pandemic-related loneliness would still exist [16]. However, it would not be because pandemic-related loneliness led to greater levels of depression. Rather, in such a model, those who entered the pandemic with higher levels of depression would feel greater levels of loneliness during lockdown.

Testing these alternative hypotheses demands certain research design prerequisites [26]. Stress-oriented models of psychopathology are inherently longitudinal [13] and should ideally include comparable baseline measures taken before the stressor was introduced [27]. Most disaster research, however, is either cross-sectional or longitudinal only following the disaster [28]. This is true for much research on the COVID-19 pandemic as well [14]. Furthermore, an assessment of psychiatric symptom severity must also statistically control for the confounding roles of environmental stressors and the distressed reactions to these stressors [13]. Thus, in order to assess the COVID-19 pandemic’s negative impact on mental health, it is necessary to identify participants with pre-pandemic baseline data available, and to separately assess clinical symptoms, environmental stressors, and subjective distress. In doing so, the COVID-19 pandemic’s role in mental health may be quantified, setting the groundwork for subsequent empirically-based interventions.

Current study

The current study compared two opposing models of the COVID-19 pandemic’s effect on depression and anxiety during a peak time of pandemic-related fatalities. To do so, we contacted participants who had participated in a previous six-month longitudinal study on emotion regulation, depression and anxiety and invited them one year following the final assessment to complete these measures again along with measures of stressors related to COVID-19 and the corollary public health interventions (e.g., self-isolation). The pandemic assessment period occurred between April 15, 2020 and April 20, 2020, during the 5-day period with the highest number of deaths per day in the United States in the first half of 2020 and the second-highest period in the United Kingdom (which immediately followed the highest one; see Fig 2 [29]). Thus, we were able to examine participants’ levels of depression, anxiety, rumination and distress intolerance during the height of the COVID-19 pandemic, while also having two baseline measures from exactly one year and 18 months prior.

Fig 2. Daily confirmed deaths of COVID-19 between March 20, 2020 and May 20, 2020.


Figure retrieved on September 26, 2021 from OurWorldInData.org/coronavirus, which visualized data reported by the European Centre for Disease Prevention and Control. The red rectangle indicates the dates during which the third assessment took place, between April 15 and April 20.

We had originally hypothesized a general stressor model, that (a) there would be no differences in the measures between the two baseline time-points, (b) there would be a group-level increase in levels of symptom severity and clinically relevant measures, and (c) increases in symptom severity would be predicted by participants’ subjective distress related to COVID-19. However, after unexpectedly finding virtually no change in the measures between baseline and during the pandemic, we conducted a series of post-hoc analyses in order to confirm their equivalence across assessments.

Finally, we directly evaluated the general stressor model against the alternate, diathesis-stress model using two alternate explanatory models for how the COVID-19 pandemic relates to symptom severity beyond baseline. According to the first approach [10], pandemic-related environmental stressors would cause subjective distress, which in turn would predict symptom severity beyond baseline levels (Fig 3a and 3b). According to the second approach [20], symptom severity and environmental stressors would independently predict pandemic-related distress (Fig 4a and 4b). We quantified the likelihood of these approaches using two sets of path analyses that operationalized each approach and offered fit statistics for each model.

Fig 3.


a and b. Structural equation model where COVID-19 loneliness/stress predicts depression/anxiety beyond baseline. *** p < .001, * p ≤ .05, † p < .10. T2 –Data collection at Time 2, April 15–22, 2019. T3 –Data collection at Time 3, April 15–20, 2020. The above models were poor fits for the data (e.g., CFI = .000) and were therefore rejected.

Fig 4.


a and b. Structural equation model where COVID-19 depression/anxiety predicts COVID-19 loneliness/stress. *** p < .001, * p ≤ .05, † p < .10. T2 –Data collection at Time 2, April 15–22, 2019. T3 –Data collection at Time 3, April 15–20, 2020. The above models were excellent fits for the data (e.g., CFI = 1.00) and were therefore retained.

Method

Participants

The current sample consists of 218 participants (women = 118, men = 97, other/not applicable = 3) involved in an ongoing longitudinal study (see Procedure below). They represented a wide range of ages (M = 42.87, SD = 13.09, range = 19–75), with 102 participants in the United States, 100 in the United Kingdom, 15 in Canada, and one in Ireland. In order to ensure that results were not biased by the Canadian and Irish participants, analyses were replicated using only participants from the United States and United Kingdom. Average scores on the measures and effect sizes remained the same as in the main analysis. See S1 File for the full output. Further demographic data are available in S1 Table.

Materials

Depression anxiety and stress scale-21, depression and anxiety subscales (DASS-21-D & DASS-21-A) [30, 31]

The DASS-21 is a widely used measure of affective symptoms experienced during the previous week and has been found to successfully track symptom change following natural disasters [32]. It consists of three seven-item subscales, measuring depression, anxiety, and stress. The first two subscales were included in the current study. The Depression subscale assesses sadness and anhedonia (e.g., “I felt down-hearted and blue”) and the Anxiety subscale assesses somatic experiences of anxiety and fear (e.g., “I felt I was close to panic”). Each item was rated on a scale of 0 (= does not apply to me) to 3 (= applies to me very much, or most of the time). Subscale scores were calculated by summing the items, with a possible range of 0–21, and higher scores indicating greater levels of depression or anxiety, respectively. In the current study, both subscales showed very good-to-excellent reliability at all time points (Cronbach’s alphas: Anxiety = .86–.88; Depression = .94–.95).
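The subscale scoring just described can be sketched as a small helper; `score_dass_subscale` is a hypothetical name for illustration only and not part of any published scoring package.

```python
def score_dass_subscale(item_responses):
    """Sum seven 0-3 item ratings into a single subscale score (range 0-21).

    Illustrative helper only; the study's actual item handling may differ.
    """
    if len(item_responses) != 7:
        raise ValueError("each DASS-21 subscale has exactly seven items")
    if any(not 0 <= r <= 3 for r in item_responses):
        raise ValueError("items are rated 0 (does not apply) to 3 (applies very much)")
    return sum(item_responses)

# A participant endorsing a mix of item levels:
score_dass_subscale([0, 1, 2, 3, 0, 1, 2])  # sums to 9 of a possible 21
```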

Reflection and Rumination Questionnaire (RRQ) [33]

The 12-item scale measures individual differences in the use of repetitive, self-critical rumination (e.g., “I spend a great deal of time thinking back over my embarrassing or disappointing moments”). Participants rated their agreement with items on a scale of 1 (= strongly disagree) to 5 (= strongly agree). Subscale scores were calculated by summing the items, with a possible range of 12–60, and higher scores indicating greater levels of rumination. The subscale showed excellent reliability (Cronbach’s α = .96 at all time points).

Distress Tolerance Scale [34]

The Distress Tolerance Scale utilizes 15 items to assess participants’ preference for avoiding emotional distress (e.g., “I’ll do anything to stop feeling distressed or upset”). Participants rated their agreement with items on a scale of 1 (= strongly disagree) to 5 (= strongly agree). Total scores were calculated by summing the items, with a possible range of 15–75, and higher scores indicating greater levels of distress intolerance. The scale showed good reliability at all time points (Cronbach’s alphas = .83–.84).

Coronavirus stressor items

An ad-hoc questionnaire assessed participants’ exposure to common stressors (e.g., “Job requires possible exposure to coronavirus”; see Table 1 for the list of items). Participants indicated whether such stressors had happened to them, to somebody close to them, or did not apply to either. Additionally, they used a slider to rate on a scale of 1 (= no stress/loneliness at all) to 100 (= a lot of anxiety/loneliness) the extent to which they experienced anxiety or stress due to the COVID-19 pandemic (i.e., COVID stress) and the extent to which they felt increased loneliness as a result of the pandemic (i.e., COVID loneliness).

Table 1. Stressors related to COVID-19 and cross-sectional correlation with depression and anxiety.
Variable Mean (SD) / Number in sample (Percent) Depression T3 (r) Anxiety T3 (r)
Anxiety/stress as a result of COVID-19 pandemic 57.40 (28.00) .38** .32**
Loneliness as a result of COVID-19 pandemic 42.33 (32.74) .48** .32**
Became ill from possible exposure to COVID-19
 Me 7 (3.2%) .16* .15*
 Close to me 38 (17.4%) .11 .03
 n/a 179 (82.1%) -.10 .03
Knows someone who died from COVID-19
 Me 9 (4.1%) -.02 .02
 Close to me 18 (8.3%) -.02 -.00
 n/a 194 (89.0%) .02 -.02
Job requires possible exposure to COVID-19
 Me 35 (16.1%) .02 .03
 Close to me 58 (26.7%) .02 -.00
 n/a 147 (67.4%) -.06 -.04
Lost job or reduced income due to COVID-19 pandemic
 Me 62 (28.2%) .00 -.02
 Close to me 69 (31.7%) .07 -.01
 n/a 121 (55.5%) -.04 -.02
Increased responsibilities at home due to COVID-19 pandemic
 Me 71 (32.3%) .03 .03
 Close to me 41 (18.6%) -.03 .02
 n/a 135 (61.8%) -.02 -.08
Self-isolating due to government regulation or recommendation
 Me 138 (63.3%) .05 .02
 Close to me 95 (43.6%) .11 -.00
 n/a 65 (29.4%) -.12 -.07
Currently living alone 45 (20.6%) .07 -.09

* indicates p < .05.

** indicates p < .01.

T3 –Data collection at Time 3, taking place from April 15-April 20, 2020.

Procedure

Participants were recruited via the Prolific Academic Platform as part of an ongoing study on reinforcement sensitivity, emotion regulation, and affective psychopathology [35]. Five hundred and seventeen participants were initially recruited on October 17, 2018 (i.e., T1). They completed a series of self-report questionnaires related to reinforcement sensitivity, emotion regulation and affective pathology followed by an unrelated behavioral task (e.g., for a similar procedure, see [36]). Questionnaires only assessed recent levels of psychopathology. Histories of psychopathology or childhood risk factors (e.g., adverse childhood events) were not assessed. Participants included in the study had an approval rate of 95% or above following at least 50 completed tasks. Six months later, all participants who successfully completed the first study were invited to complete the same measures again, between the dates of April 15 and April 22, 2019 (i.e., T2). Three hundred and forty-eight participants (67.3% of T1) completed the study. This was generally consistent with attrition rates in other longitudinal Internet-based studies (e.g., 70% after three months [37]). Exactly one year later, participants who completed the T2 measures were contacted again, and 218 (62.6% of T2) completed the same study for a third time, between the dates of April 15, 2020 and April 20, 2020 (i.e., T3). Importantly, this assessment took place during a peak in COVID-19-related fatalities (Fig 2; [29]). Participants who returned for the final assessment showed small differences in symptom severity and moderate differences in age from those who did not (S1 File). Specifically, participants who returned were older than those who did not (M = 41.90, SD = 13.07 vs M = 34.07, SD = 10.28, d = .65, p < .001), less depressed (M = 5.85, SD = 5.64 vs M = 7.76, SD = 5.97, d = .33, p = .004), and less anxious (M = 3.25, SD = 3.91 vs M = 4.33, SD = 4.25, d = .27, p = .019).
In order to minimize their effects on difference scores between timepoints, only participants who completed all three time-points were included in analyses.

Analysis plan

Analytic strategies were selected based on the psychometric properties of the variables. Environmental stressors were measured with binary data and as such were summarized using frequency statistics (i.e., rate and percentage). The relationships between the environmental stressors and symptom severity were assessed using the recommended polychoric correlations [38]. Subjective measures of COVID-19 stress, rumination, distress intolerance, depression, and anxiety were measured using dimensional variables. As such, they were summarized using descriptive statistics and their interrelationships were calculated using Pearson’s correlations. Polychoric correlations and Pearson’s correlations were evaluated using the same significance cutoff of p < .05 and their effect sizes were considered comparable to each other.
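As a minimal sketch of this bivariate screening step, the snippet below correlates a simulated binary stressor with simulated continuous symptom scores; all values are made up for illustration. Note that a Pearson r computed on a binary variable is the point-biserial correlation, whereas the study uses polychoric correlations, which instead model a latent continuous variable behind each binary item.

```python
import numpy as np

rng = np.random.default_rng(2)
symptom = rng.normal(6, 5.5, 218)                 # simulated continuous symptom scores
stressor = (rng.random(218) < 0.3).astype(float)  # simulated binary stressor (0/1)

# Pearson r between a binary and a continuous variable (point-biserial).
# The study's polychoric estimator would instead assume a thresholded
# latent continuous variable underlying the binary stressor.
r = np.corrcoef(stressor, symptom)[0, 1]
```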

Assessing the extent to which depression and anxiety changed or remained the same between time points was done in three steps. First, we performed conventional null hypothesis significance tests (NHST) to examine differences between time-points. Specifically, within-group t tests were used, with symptom and trait measures (e.g., DASS-Depression) entered as the repeated measures across consecutive time-points (i.e., T1-T2 or T2-T3). In this case, a significant finding would indicate that the repeated measures changed between time-points.
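The within-group comparisons above can be sketched with SciPy's paired t test; the scores here are simulated stand-ins, not the study's data, and the Cohen's d formula shown (mean change over the SD of the difference scores) is one common convention for repeated measures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t1 = rng.normal(5.9, 5.5, 218)        # e.g., simulated DASS-Depression at Time 1
t2 = t1 + rng.normal(0.0, 2.0, 218)   # little systematic change at Time 2

# Repeated-measures (paired) t test across consecutive time-points:
t_stat, p_value = stats.ttest_rel(t1, t2)

# Cohen's d of change, standardized by the SD of the difference scores:
diffs = t2 - t1
d = diffs.mean() / diffs.std(ddof=1)
```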

After observing the unexpectedly small effect sizes of change between time-points, we next estimated whether these effects were statistically equivalent to zero. This procedure is performed using a combination of the previous NHSTs and an additional two one-sided tests (TOST) procedure for equivalence testing [39]. In this procedure, an a priori smallest effect size of interest (SESOI) is calculated. Two one-sided t-tests then assess whether the full confidence interval of the change score estimate falls within the positive and negative SESOI (e.g., Cohen’s d = -.20 < X < .20). If so, the change score is judged as statistically equivalent to zero and scores are considered equivalent to each other. Finally, in the case of significant difference, we examined Cohen’s d of difference in order to assess the size of the difference between timepoints.
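The TOST procedure can be sketched as two one-sided paired t tests against SESOI bounds of d = ±.20, converted to raw units via the SD of the difference scores. `tost_paired` is an illustrative implementation, not the authors' code, and the simulated change scores are centered to exactly zero so the example is deterministic.

```python
import numpy as np
from scipy import stats

def tost_paired(x1, x2, sesoi_d=0.20, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of paired measurements.

    sesoi_d is the smallest effect size of interest in Cohen's d units,
    converted to raw units via the SD of the difference scores.
    """
    diffs = np.asarray(x2) - np.asarray(x1)
    n = diffs.size
    sd = diffs.std(ddof=1)
    se = sd / np.sqrt(n)
    delta = sesoi_d * sd                  # SESOI bound in raw-score units
    df = n - 1
    # H0a: true mean change <= -delta (rejecting places change above the lower bound)
    p_lower = stats.t.sf((diffs.mean() + delta) / se, df)
    # H0b: true mean change >= +delta (rejecting places change below the upper bound)
    p_upper = stats.t.cdf((diffs.mean() - delta) / se, df)
    p = max(p_lower, p_upper)             # both one-sided tests must reject
    return p, p < alpha

rng = np.random.default_rng(0)
baseline = rng.normal(42, 10, 218)
noise = rng.normal(0.0, 3.0, 218)
noise -= noise.mean()                     # center so the mean change is exactly zero
followup = baseline + noise

p, equivalent = tost_paired(baseline, followup)  # equivalent is True here
```

With a mean change of exactly zero, both one-sided t statistics reduce to 0.2·√218 ≈ 2.95, so the confidence interval of the change falls comfortably inside the equivalence bounds.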

Finally, and most importantly, we compared two possible explanatory models for the relationships between COVID-19 stress and loneliness and symptom severity. The models included binary data (i.e., environmental stressors) and were therefore calculated using polychoric correlations and a diagonally weighted least squares (DWLS) estimator [40]. Models were assessed using robust fit statistics with the recommended cutoffs [41, 42]: a non-significant chi-square test, CFI > .95, RMSEA < .06, and SRMR < .08.
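These cutoffs amount to a simple decision rule, sketched below; the function name is ours, and the two sets of fit statistics are transcribed from Table 3 as a check (with illustrative stand-in p values for the "< .001" and non-significant chi-square entries).

```python
def acceptable_fit(chi2_p, cfi, rmsea, srmr):
    """Apply the recommended cutoffs: non-significant chi-square,
    CFI > .95, RMSEA < .06, SRMR < .08."""
    return chi2_p > .05 and cfi > .95 and rmsea < .06 and srmr < .08

# Rejected model (loneliness -> depression, Fig 3a): chi2 p < .001, CFI = .000
rejected = acceptable_fit(chi2_p=.0005, cfi=.000, rmsea=.248, srmr=.002)
# Retained model (depression -> loneliness, Fig 4a): chi2 p = .92, CFI = 1.00
retained = acceptable_fit(chi2_p=.92, cfi=1.00, rmsea=.00, srmr=.010)
```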

Ethics

The study was conducted with the approval of the Ethics Committee of the Faculty of Social Sciences at the Hebrew University of Jerusalem (Approval #124120). All participants provided written consent prior to each wave of assessments.

Results

Data and analysis syntax are available at https://osf.io/sjp4a/.

Presence of stressors related to COVID-19

We first examined the pandemic’s impact on our sample (see Table 1). Overall, we found that it introduced unique stressors to most participants’ environments. Most participants (63.3%) were self-isolating as a result of government regulation. Similarly, 44.5% of the participants reported that either they or a person close to them had experienced a reduction of income as a result of the pandemic. Importantly, participants reported subjective feelings of distress as a result of the pandemic. They reported that, on average, it caused them moderate levels of loneliness (M = 42.33, SD = 32.74) and stress (M = 57.40, SD = 28.00; both on a 0–100 scale).

Equivalence of measures across baselines

Next, we examined whether the measures of symptom severity and clinically relevant traits changed between the two baseline time-points (see Table 2; T1-T2), taken prior to the pandemic. We hypothesized that all measures would remain statistically equivalent on a group level. NHST t-tests revealed that depression, anxiety, and rumination indeed did not differ between time-points (Cohen’s ds = -.06 to .02, ps > .177). However, distress intolerance significantly decreased by a trivial degree, t(217) = 1.98, p = .049, d = -.10, 95% CI [-.19; .00], between T1 (M = 42.94, SD = 10.26) and T2 (M = 41.95, SD = 9.95).

Table 2. Equivalence testing of clinical measures and clinically relevant traits.

Comparison M (SD) Cohen’s d [90% CI] NHST test for differences TOST test for equivalence Conclusion
Depression
T1 5.94 (5.52) -0.02 [-0.10; 0.06] t(217) = 0.4, p = .691 t(217) = 2.89, p = .002 Equivalent and not different
T2 5.85 (5.64)
T2 5.85 (5.64) 0.08 [-0.03; 0.18] t(217) = 1.45, p = .147 t(217) = 4.74, p < .001 Equivalent and not different
T3 6.28 (5.50)
Anxiety
T1 3.17 (3.92) 0.02 [-0.07; 0.12] t(217) = 0.43, p = .665 t(217) = 3.72, p < .001 Equivalent and not different
T2 3.25 (3.91)
T2 3.25 (3.91) -0.11 [-0.22; 0.00] t(217) = 1.98, p = .048 t(217) = 1.30, p = .097 Not equivalent and different
T3 2.83 (3.61)
Rumination
T1 43.11 (11.02) -0.06 [-0.14; 0.03] t(217) = 1.35, p = .177 t(217) = 1.94, p = .027 Equivalent and not different
T2 42.46 (11.74)
T2 42.46 (11.74) -0.07 [-0.15; 0.02] t(217) = 1.59, p = .112 t(217) = 1.69, p = .046 Equivalent and not different
T3 41.66 (12.15)
Distress Tolerance
T1 42.94 (10.26) -0.10 [-0.19; 0.00] t(217) = 1.98, p = .049 t(217) = 1.31, p = .096 Not equivalent and different
T2 41.95 (9.95)
T2 41.95 (9.95) -0.09 [-0.19; 0.01] t(217) = 1.78, p = .076 t(217) = 2.86, p = .002 Equivalent and not different
T3 41.03 (10.08)

Note. T1 –Data collection at Time 1, October 17, 2018. T2 –Data collection at Time 2, April 15–22, 2019. T3 –Data collection at Time 3, April 15–20, 2020.

In order to ascertain whether these small differences were statistically equal to zero, we performed a series of equivalence tests for each measure using the recommended TOST procedure [39]. Indeed, depression, anxiety, and rumination were all found to be significantly equivalent (ps ≤ .027). Thus, these three measures were judged to be equivalent and non-different across the two baseline timepoints. However, distress intolerance – which was found to be significantly different from T1 to T2 using NHST (d = -.10, p = .049) – was not found to be significantly equivalent using TOST (p = .096). As such, distress intolerance was judged to be different between T1 and T2, to a significant albeit trivial degree.

Equivalence between baseline and COVID-19 pandemic

Next, to assess the validity of a general-stressor model, we examined whether any group-level change occurred between the second baseline assessment and the assessment that occurred during the height of the COVID-19 pandemic one year later (see Table 2; T2-T3). We expected to find increases in all these distress-related measures and tested this hypothesis with a series of within-group t-tests to detect differences between timepoints. Contrary to our hypothesis, no change was detected for depression, rumination, or distress intolerance (ds = -.09 to .08, ps > .076). Anxiety did change significantly, but the effect was trivially small and comparable in magnitude to the measures that did not change, t(217) = 1.98, p = .048, d = -.11, 95% CI [-.22; .00]. More importantly, this trivial change occurred in the direction opposite from what was hypothesized, with anxiety during the pandemic (T3 M = 2.83, SD = 3.61) being slightly lower than at baseline (T2 M = 3.25, SD = 3.91).

Due to the non-significant findings for depression, rumination, and distress intolerance, and the small effect size found for anxiety, we performed a series of post-hoc equivalence tests to assess whether these effect sizes were statistically equivalent to zero. Indeed, depression, rumination, and distress intolerance were equivalent between baseline and during the COVID-19 pandemic (ds = -.09 to .08, ps ≤ .046). Anxiety, on the other hand, was found to be non-equivalent between baseline and during the pandemic, t(217) = 1.3, p = .097, d = -.11, 95% CI [-.22; .00]. Thus, depression, rumination, and distress intolerance remained at baseline levels, while anxiety showed a trivial albeit significant decrease.

Relationship between COVID-19 stress and symptom severity

After we did not find a group-level increase in depression or anxiety, we next examined the relationship between pandemic-related distress and psychopathology at the onset of the pandemic. First, we examined the bivariate correlations between stressors related to COVID-19, depression, and anxiety (see Table 1). Previous illness from possible exposure to COVID-19 was the only environmental stressor that was significantly associated with depression (r = .16) and anxiety (r = .15). No other environmental stressor significantly correlated with depression or anxiety (ps = .094 to .986). On the other hand, pandemic-related distress (i.e., loneliness and stress as a result of COVID-19) was associated with both depression (rs = .48 and .38, ps < .001, respectively) and anxiety (rs = .32, p < .001).

We then performed our primary analysis, comparing two explanatory models for how pandemic-related distress related to clinical symptoms. In the general stressor model (Fig 3a and 3b), pandemic-related environmental stressors predicted change in psychopathology. This model included the hypotheses that (a) environmental stressors should predict emotional distress, and (b) that emotional distress should predict symptom severity beyond baseline. Due to the relationship between social disconnection and depression [23], the first model specified self-isolation to predict COVID-related loneliness, which in turn predicted depression after controlling for baseline (Fig 3a). Due to the relationship between economic uncertainty and anxiety [8], the second model specified negative effects on participants’ incomes to predict COVID-related anxiety/stress, which in turn predicted anxiety after controlling for baseline (Fig 3b). These models fit the data very poorly (see Table 3).

Table 3. Summary of model fit statistics for the alternative path models.

Model χ2 (df) p value Robust CFI Robust RMSEA RMSEA 90% CI Robust SRMR
Depression models
 Loneliness -> depression model (Fig 3a) 28.60 (2) < .001 .000 .248 .172-.332 .002
 Depression -> loneliness model (Fig 4a) .177 (2) .92 1.00 0.00 0.00–0.50 .010
Anxiety models
 Stress -> anxiety model (Fig 3b) 16.25 (2) < .001 0.00 .181 .107-.267 .002
 Anxiety -> stress model (Fig 4b) 1.08 (2) .583 1.00 .000 .000-.011 .034

CFI = comparative fit index; RMSEA = root-mean-square error of approximation; CI = confidence interval; SRMR = Standardized root mean square residual.

In the diathesis-stress model (Fig 4a and 4b), both psychopathology and environmental stressors independently predicted current levels of COVID-related distress. In these models, (a) only baseline measures predicted symptom severity at T3 and emotional distress was independently predicted by (b) symptom severity at T3 (c) as well as environmental stressors. Thus, for depression (Fig 4a), we specified depression at T2 to predict depression at T3. COVID-related feelings of loneliness were predicted by depression at T3 and whether the participant was self-isolating. For anxiety (Fig 4b), anxiety at T2 predicted anxiety at T3, which then predicted subjective COVID-related stress along with loss of income. These models were excellent fits for the data (see Table 3).

Taken together, we found that the COVID-19 pandemic did not lead to a general increase in depression or anxiety. Furthermore, distress caused by environmental stressors did not lead to a rise in symptom severity. Rather, participants’ ratings of COVID-19-related distress were independently predicted by symptom severity and environmental stressors. Thus, instead of a general-stressor model wherein the pandemic caused a group-level rise in symptom severity, a diathesis-stress model was supported, wherein prior symptom severity predicted pandemic-related distress.

Discussion

The current study models the relationship between the COVID-19 pandemic, distress, depression, and anxiety during the pandemic’s early stages. It did so by longitudinally assessing participants’ levels of depression, anxiety, rumination and distress intolerance from two baseline assessments 12 and 18 months prior to the pandemic’s high points in Spring 2020. Prior to analysis, we expected to observe a rise in depression and anxiety, resulting from distress associated with environmental, pandemic-related stressors [10, 12]. However, no rise occurred. Instead, symptoms and clinically relevant measures were either equivalent or reduced by a trivial degree in the case of anxiety. The only non-trivial predictor of symptom severity during the pandemic was symptom severity at baseline.

While unexpected, these findings were consistent with patterns observed following other disasters. Populations appear to be resilient to large-scale environmental stressors [20]. As such, a broad, simplistic general-stressor model of the COVID-19 pandemic has not been supported in the current study or elsewhere [18]. Rather, the current study argues that diathesis-stress models [22, 43] are necessary to identify how and for whom pandemic-related stressors may lead to psychopathology.

The current study also identifies individuals who may be more at risk for distress during the COVID-19 pandemic. After comparing alternative explanatory models for the relationship between environmental stressors, subjective distress, and symptom severity, we found that the model with an excellent fit for the data specified distress to be independently predicted by environmental stressors and psychopathology. The alternate model, which specified psychopathology to be predicted by distress [10], fit the data very poorly. Indeed, the only environmental stressor to predict symptom severity was possible previous infection with COVID-19 [14]. Thus, those most at risk for distress during the pandemic were those experiencing both environmental stressors relevant to the distress (e.g., economic stress) as well as a history of mental illness (e.g., anxiety). These findings may complement others that use demographic correlates to identify those at higher risk for elevated mental illness during the pandemic. Factors such as age, gender, race, ethnicity, location, and education are associated with internalizing pathology at the onset of the pandemic, alongside pandemic-related stressors [44]. These different risk factors interact syndemically, wherein pandemic-related stressors have greater impacts upon mental health for those with minoritized racial identities [45] or those who live in lower-income communities [46]. While the current study’s sample size is too small to undertake such interaction analyses, other large-scale studies will be better equipped to do so.

The current study highlights the importance of including baseline measures when studying the effects of the COVID-19 pandemic [13, 27]. One of its key strengths is the inclusion of two baseline assessments, conducted exactly 12 and 18 months before the pandemic-era assessment. This context underscores the lack of meaningful change in the sample's clinical measures, even once the pandemic was underway. Future research on the COVID-19 pandemic's impact will ideally include baseline assessments of clinical measures collected before the pandemic's stressors were introduced [11]. Such baseline measures are often missing from disaster research [20], and their absence should be acknowledged as a key limitation of any cross-sectional study's ability to directly assess the role of COVID-19 as a stressor. More generally, this highlights the importance of longitudinal data in identifying those most at risk for elevated levels of mental illness during periods of great distress. For example, individuals with adverse childhood experiences may also be at risk for increased levels of mental illness during times of pandemic-related stress [47]. In this case, ongoing, lifetime longitudinal studies are especially well equipped to assess the relationship between diatheses rooted in earlier experiences and later distress [16].

Future research may qualify or replicate the current study's findings by comparing varied methods of sampling and assessment when measuring symptom severity. For example, the current study utilized an Internet-based sample. While such samples show rates of depression and anxiety similar to those of the general population [48], they may also differ from it in terms of traits relevant to self-isolation, such as lower extraversion and higher Internet use [49]. It is also possible that the assessment that took place during the COVID-19 pandemic (i.e., T3) did not occur at the most distressing time for participants. Although it occurred during peak rates of COVID-19 fatality (see Fig 2), other events during the pandemic, such as the start of participants' self-isolation, may have produced additional peaks of distress [50].

Additionally, the differences between those who returned for the final assessment and those who did not are worthy of acknowledgement. Although the differences in symptom severity before the pandemic were small and of low clinical significance, it is possible that the participants who did not return for the final assessment were more at risk for symptom increase than those who did. In that case, had they been included, equivalence across timepoints might not have been observed. This hypothetical circumstance, however, is also inconsistent with the general-stressor model, which predicts symptom increase across the population. Furthermore, the current study is consistent with other large-scale studies that likewise did not find that symptom severity increased precipitously during the pandemic [6, 18, 19]. Further research may benefit from focusing on how the pandemic impacted well-being among participants at different levels of symptom severity.
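The equivalence conclusions discussed above rest on the two one-sided tests (TOST) procedure described by Lakens and colleagues [39]. As a rough illustration only, and not the study's actual analysis code, a paired-samples TOST can be sketched in Python; the variable names, simulated data, and the ±0.3 SD equivalence bound here are arbitrary assumptions for demonstration:

```python
import numpy as np
from scipy import stats

def tost_paired(x_pre, x_post, bound_d=0.3):
    """Two one-sided tests (TOST) for equivalence of paired scores.

    bound_d is the smallest effect size of interest, expressed in
    Cohen's d units of the difference-score SD. Equivalence is
    supported when the returned p-value falls below alpha.
    """
    diff = np.asarray(x_post, float) - np.asarray(x_pre, float)
    n = diff.size
    mean, sd = diff.mean(), diff.std(ddof=1)
    se = sd / np.sqrt(n)
    bound = bound_d * sd              # bound on the raw-score scale
    df = n - 1
    t_lower = (mean + bound) / se     # H1: mean difference > -bound
    t_upper = (mean - bound) / se     # H1: mean difference < +bound
    p_lower = stats.t.sf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper)      # TOST p = larger of the two

# Simulated pre/post scores that are essentially unchanged
rng = np.random.default_rng(0)
pre = rng.normal(10, 3, size=200)
post = pre + rng.normal(0, 1, size=200)
p = tost_paired(pre, post, bound_d=0.3)
```

Because the simulated change is negligible relative to the bound, the TOST p-value here is small, supporting statistical equivalence rather than merely a nonsignificant difference.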

Finally, the current study approaches risk factors for mental illness from the perspectives of contemporary intrapersonal processing, acute stress, and internalizing psychopathology. However, many risk factors for elevated mental illness occurred prior to the original baseline measures included in the study. Elevated levels of psychopathology prior to assessment [16] and adverse childhood events [47] are both useful for identifying those at risk for greater levels of distress when facing pandemic-related stressors. Future studies using similar designs may use retrospective assessments to capture some of these risk factors as well. Similarly, many risk factors both preceding and during the COVID-19 pandemic are interpersonal in nature. Social stressors, both from the participant's past and during their experiences in quarantine (e.g., domestic violence) [51], are themselves likely risk factors for loneliness and mood disorder [52]. Future studies may further examine the role of interpersonal factors in well-being at the onset of the pandemic [47]. Furthermore, the current research design focuses on the acute period of stress that accompanied the beginning of the pandemic. However, as the pandemic has continued, stress has accumulated. Future studies may assess the association between cumulative stress and mental illness [47]. Additionally, the current study operationalized mental health in terms of metrics of distress, loneliness, depression, and anxiety. However, other metrics of behavioral health, such as substance use and behavioral addictions, have also been associated with social isolation during the pandemic's onset [9]. The current study may be considered alongside others on externalizing disorders in order to assess trends across different types of psychopathology.

Collaborative work between multiple laboratories will also offer opportunities to compare findings among diverse populations, assessment methods, and clinical histories in order to more efficiently identify at-risk populations [53]. Identifying which participants are most vulnerable to mental illness in the face of pandemic-related stress is the first step in developing targeted interventions [54]. In doing so, the clinical science community may rise to the formidable challenge that COVID-19 has set before it.

Supporting information

S1 Table. Additional demographic information of sample.

(DOCX)

S2 Table. Comparison of participants that did and did not return for later assessments.

T1 returners are those who returned for T2. T1 dropouts are those who did not return for T2. T2 returners are those who completed T1, T2, and T3. T2 dropouts are those who completed T1 and T2 but not T3.

(DOCX)

S1 File. Main analyses including only participants from the United Kingdom and United States.

(DOCX)

Data Availability

All data and analysis syntax are available from https://osf.io/sjp4a/.

Funding Statement

This research was supported by Israel Science Foundation Grant 886/18 to IY (https://www.isf.org.il). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Uddin M, Mustafa F, Rizvi TA, Loney T, Al Suwaidi H, Al-Marzouqi AHH, et al. SARS-CoV-2/COVID-19: Viral genomics, epidemiology, vaccines, and therapeutic interventions. Viruses. 2020;12: 526. doi: 10.3390/v12050526
  • 2. Gruber J, Prinstein MJ, Clark LA, Rottenberg J, Abramowitz JS, Albano AM, et al. Mental health and clinical psychological science in the time of COVID-19: Challenges, opportunities, and a call to action. American Psychologist. 2021;76: 409–426. doi: 10.1037/amp0000707
  • 3. Viner R, Russell S, Saulle R, Croker H, Stansfield C, Packer J, et al. School closures during social lockdown and mental health, health behaviors, and well-being among children and adolescents during the first COVID-19 wave. JAMA Pediatrics. 2022;176: 400. doi: 10.1001/jamapediatrics.2021.5840
  • 4. Alimoradi Z, Gozal D, Tsang HWH, Lin C, Broström A, Ohayon MM, et al. Gender-specific estimates of sleep problems during the COVID-19 pandemic: Systematic review and meta-analysis. Journal of Sleep Research. 2022;31. doi: 10.1111/jsr.13432
  • 5. Alimoradi Z, Ohayon MM, Griffiths MD, Lin C-Y, Pakpour AH. Fear of COVID-19 and its association with mental health-related factors: Systematic review and meta-analysis. BJPsych Open. 2022;8: e73. doi: 10.1192/bjo.2022.26
  • 6. Robinson E, Sutin AR, Daly M, Jones A. A systematic review and meta-analysis of longitudinal cohort studies comparing mental health before versus during the COVID-19 pandemic in 2020. Journal of Affective Disorders. 2022;296: 567–576. doi: 10.1016/j.jad.2021.09.098
  • 7. Taylor S, Landry CA, Paluszek MM, Fergus TA, McKay D, Asmundson GJG. COVID stress syndrome: Concept, structure, and correlates. Depression and Anxiety. 2020;37: 706–714. doi: 10.1002/da.23071
  • 8. De Witte H, Pienaar J, De Cuyper N. Review of 30 years of longitudinal studies on the association between job insecurity and health and well-being: Is there causal evidence? Australian Psychologist. 2016;51: 18–31. doi: 10.1111/ap.12176
  • 9. Avena NM, Simkus J, Lewandowski A, Gold MS, Potenza MN. Substance use disorders and behavioral addictions during the COVID-19 pandemic and COVID-19-related restrictions. Frontiers in Psychiatry. 2021;12. doi: 10.3389/fpsyt.2021.653674
  • 10. Brooks SK, Webster RK, Smith LE, Woodland L, Wessely S, Greenberg N, et al. The psychological impact of quarantine and how to reduce it: Rapid review of the evidence. The Lancet. 2020;395: 912–920. doi: 10.1016/S0140-6736(20)30460-8
  • 11. Luchetti M, Lee JH, Aschwanden D, Sesker A, Strickhouser JE, Terracciano A, et al. The trajectory of loneliness in response to COVID-19. American Psychologist. 2020;75: 897–908. doi: 10.1037/amp0000690
  • 12. Ghebreyesus TA. Addressing mental health needs: An integral part of COVID-19 response. World Psychiatry. 2020;19: 129–130. doi: 10.1002/wps.20768
  • 13. Hammen C. Stress and depression. Annual Review of Clinical Psychology. 2005;1: 293–319. doi: 10.1146/annurev.clinpsy.1.102803.143938
  • 14. Rogers JP, Chesney E, Oliver D, Pollak TA, McGuire P, Fusar-Poli P, et al. Psychiatric and neuropsychiatric presentations associated with severe coronavirus infections: A systematic review and meta-analysis with comparison to the COVID-19 pandemic. The Lancet Psychiatry. 2020. doi: 10.1016/S2215-0366(20)30203-0
  • 15. Bourmistrova NW, Solomon T, Braude P, Strawbridge R, Carter B. Long-term effects of COVID-19 on mental health: A systematic review. Journal of Affective Disorders. 2022;299: 118–125. doi: 10.1016/j.jad.2021.11.031
  • 16. Hawes MT, Szenczy AK, Klein DN, Hajcak G, Nelson BD. Increases in depression and anxiety symptoms in adolescents and young adults during the COVID-19 pandemic. Psychological Medicine. 2021; 1–9. doi: 10.1017/S0033291720005358
  • 17. Berman NC, Fang A, Hoeppner SS, Reese H, Siev J, Timpano KR, et al. COVID-19 and obsessive-compulsive symptoms in a large multi-site college sample. Journal of Obsessive-Compulsive and Related Disorders. 2022; 100727. doi: 10.1016/j.jocrd.2022.100727
  • 18. Yarrington JS, Lasser J, Garcia D, Vargas JH, Couto DD, Marafon T, et al. Impact of the COVID-19 pandemic on mental health among 157,213 Americans. Journal of Affective Disorders. 2021;286: 64–70. doi: 10.1016/j.jad.2021.02.056
  • 19. Shanahan L, Steinhoff A, Bechtiger L, Murray AL, Nivette A, Hepp U, et al. Emotional distress in young adults during the COVID-19 pandemic: Evidence of risk and resilience from a longitudinal cohort study. Psychological Medicine. 2022;52: 824–833. doi: 10.1017/S003329172000241X
  • 20. Goldmann E, Galea S. Mental health consequences of disasters. Annual Review of Public Health. 2014;35: 169–183. doi: 10.1146/annurev-publhealth-032013-182435
  • 21. Kopala-Sibley DC, Danzig AP, Kotov R, Bromet EJ, Carlson GA, Olino TM, et al. Negative emotionality and its facets moderate the effects of exposure to Hurricane Sandy on children’s postdisaster depression and anxiety symptoms. Journal of Abnormal Psychology. 2016;125: 471–481. doi: 10.1037/abn0000152
  • 22. Ingram RE, Luxton DD. Vulnerability-stress models. In: Development of psychopathology: A vulnerability-stress perspective. New York: Sage; 2005. pp. 37–46.
  • 23. Hawkley LC, Cacioppo JT. Loneliness matters: A theoretical and empirical review of consequences and mechanisms. Annals of Behavioral Medicine. 2010;40: 218–227. doi: 10.1007/s12160-010-9210-8
  • 24. Katz BA, Matanky K, Aviram G, Yovel I. Reinforcement sensitivity, depression and anxiety: A meta-analysis and meta-analytic structural equation model. Clinical Psychology Review. 2020;77: 101842. doi: 10.1016/j.cpr.2020.101842
  • 25. Taquet M, Quoidbach J, Fried EI, Goodwin GM. Mood homeostasis before and during the coronavirus disease 2019 (COVID-19) lockdown among students in the Netherlands. JAMA Psychiatry. 2020. doi: 10.1001/jamapsychiatry.2020.2389
  • 26. Borsboom D, van der Maas HLJ, Dalege J, Kievit RA, Haig BD. Theory construction methodology: A practical framework for building theories in psychology. Perspectives on Psychological Science. 2021; 1–11. doi: 10.1177/1745691620969647
  • 27. Kazdin AE. Mediators and mechanisms of change in psychotherapy research. Annual Review of Clinical Psychology. 2007;3: 1–27. doi: 10.1146/annurev.clinpsy.3.022806.091432
  • 28. Kessler RC, Galea S, Gruber MJ, Sampson NA, Ursano RJ, Wessely S. Trends in mental illness and suicidality after Hurricane Katrina. Molecular Psychiatry. 2008;13: 374–384. doi: 10.1038/sj.mp.4002119
  • 29. European Centre for Disease Prevention and Control. Today’s data on the geographic distribution of COVID-19 cases worldwide. 2021 [cited 26 Sep 2021]. https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide
  • 30. Henry JD, Crawford JR. The short-form version of the Depression Anxiety Stress Scales (DASS-21): Construct validity and normative data in a large non-clinical sample. British Journal of Clinical Psychology. 2005;44: 227–239. doi: 10.1348/014466505X29657
  • 31. Lovibond PF, Lovibond SH. The structure of negative emotional states: Comparison of the Depression Anxiety Stress Scales (DASS) with the Beck Depression and Anxiety Inventories. Behaviour Research and Therapy. 1995;33: 335–343. doi: 10.1016/0005-7967(94)00075-u
  • 32. Chan RCK, Xu T, Huang J, Wang Y, Zhao Q, Shum DHK, et al. Extending the utility of the Depression Anxiety Stress scale by examining its psychometric properties in Chinese settings. Psychiatry Research. 2012;200: 879–883. doi: 10.1016/j.psychres.2012.06.041
  • 33. Trapnell PD, Campbell JD. Private self-consciousness and the five-factor model of personality: Distinguishing rumination from reflection. Journal of Personality and Social Psychology. 1999;76: 284–304. doi: 10.1037//0022-3514.76.2.284
  • 34. Simons JS, Gaher RM. The Distress Tolerance Scale: Development and validation of a self-report measure. Motivation and Emotion. 2005;29: 83–102. doi: 10.1007/s11031-005-7955-3
  • 35. Katz BA, Yovel I. Reinforcement sensitivity predicts affective psychopathology via emotion regulation: Cross-sectional, longitudinal and quasi-experimental evidence. Journal of Affective Disorders. 2022;301: 117–129. doi: 10.1016/j.jad.2022.01.017
  • 36. Friedman A, Katz BA, Cohen IH, Yovel I. Expanding the scope of implicit personality assessment: An examination of the Questionnaire-Based Implicit Association Test (qIAT). Journal of Personality Assessment. 2021;103: 380–391. doi: 10.1080/00223891.2020.1754230
  • 37. Sheeran P, Conner M. Degree of reasoned action predicts increased intentional control and reduced habitual control over health behaviors. Social Science & Medicine. 2019;228: 68–74. doi: 10.1016/j.socscimed.2019.03.015
  • 38. Holgado-Tello FP, Chacón-Moscoso S, Barbero-García I, Vila-Abad E. Polychoric versus Pearson correlations in exploratory and confirmatory factor analysis of ordinal variables. Quality & Quantity. 2010;44: 153–166. doi: 10.1007/s11135-008-9190-y
  • 39. Lakens D, Scheel AM, Isager PM. Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science. 2018;1: 259–269. doi: 10.1177/2515245918770963
  • 40. Savalei V, Rhemtulla M. The performance of robust test statistics with categorical data. British Journal of Mathematical and Statistical Psychology. 2013;66: 201–223. doi: 10.1111/j.2044-8317.2012.02049.x
  • 41. Kline RB. Principles and practices of structural equation modelling. 4th ed. Methodology in the social sciences. New York, NY: The Guilford Press; 2015.
  • 42. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal. 1999;6: 1–55. doi: 10.1080/10705519909540118
  • 43. LeMoult J. From stress to depression: Bringing together cognitive and biological science. Current Directions in Psychological Science. 2020;29: 592–598. doi: 10.1177/0963721420964039
  • 44. Milman E, Lee SA, Neimeyer RA, Mathis AA, Jobe MC. Modeling pandemic depression and anxiety: The mediational role of core beliefs and meaning making. Journal of Affective Disorders Reports. 2020;2: 100023. doi: 10.1016/j.jadr.2020.100023
  • 45. Cokley K, Krueger N, Cunningham SR, Burlew K, Hall S, Harris K, et al. The COVID-19/racial injustice syndemic and mental health among Black Americans: The roles of general and race-related COVID worry, cultural mistrust, and perceived discrimination. Journal of Community Psychology. 2021.
  • 46. Yadav UN, Rayamajhee B, Mistry SK, Parsekar SS, Mishra SK. A syndemic perspective on the management of non-communicable diseases amid the COVID-19 pandemic in low- and middle-income countries. Frontiers in Public Health. 2020;8: 508. doi: 10.3389/fpubh.2020.00508
  • 47. McLaughlin KA, Rosen ML, Kasparek SW, Rodman AM. Stress-related psychopathology during the COVID-19 pandemic. Behaviour Research and Therapy. 2022;154: 104121. doi: 10.1016/j.brat.2022.104121
  • 48. Shapiro DN, Chandler J, Mueller PA. Using Mechanical Turk to study clinical populations. Clinical Psychological Science. 2013;1: 213–220. doi: 10.1177/2167702612469015
  • 49. Paolacci G, Chandler J. Inside the Turk: Understanding Mechanical Turk as a participant pool. Current Directions in Psychological Science. 2014;23: 184–188. doi: 10.1177/0963721414531598
  • 50. Fried EI, Papanikolaou F, Epskamp S. Mental health and social contact during the COVID-19 pandemic: An ecological momentary assessment study. Clinical Psychological Science. 2021. doi: 10.1177/21677026211017839
  • 51. Drotning KJ, Doan L, Sayer LC, Fish JN, Rinderknecht RG. Not all homes are safe: Family violence following the onset of the Covid-19 pandemic. Journal of Family Violence. 2022. doi: 10.1007/s10896-022-00372-y
  • 52. Seon J, Cho H, Choi G-Y, Son E, Allen J, Nelson A, et al. Adverse childhood experiences, intimate partner violence victimization, and self-perceived health and depression among college students. Journal of Family Violence. 2022;37: 691–706. doi: 10.1007/s10896-021-00286-1
  • 53. Perrino T, Howe G, Sperling A, Beardslee W, Sandler I, Shern D, et al. Advancing science through collaborative data sharing and synthesis. Perspectives on Psychological Science. 2013;8: 433–444. doi: 10.1177/1745691613491579
  • 54. Shoham V, Insel TR. Rebooting for whom?: Portfolios, technology, and personalized intervention. Perspectives on Psychological Science. 2011;6: 478–482. doi: 10.1177/1745691611418526

Decision Letter 0

Vedat Sar

17 Sep 2021

PONE-D-21-18866: Mood Symptoms Predict COVID-19 Pandemic Distress but not Vice Versa: An 18-Month Longitudinal Study (PLOS ONE)

Dear Dr. Katz,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Nov 01, 2021, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Vedat Sar, M.D.

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at http://journals.plos.org/plosone/s/latex.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):


Reviewers' comments:


While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.


Author response to Decision Letter 0


29 Sep 2021

Editor’s comments

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming

We have implemented the following changes consistent with PLOS ONE’s style book:

• Headings, table titles and figure titles were adjusted in terms of location and font size.

• Citations were changed to PLOS ONE style instead of APA style.

• Tables and figure titles were integrated into the text of the manuscript and Supplementary Table 1 was noted at the end.

2. Please update your submission to use the PLOS LaTeX template

In keeping with PLOS ONE policy, we uploaded a .docx version of the manuscript with and without changes tracked.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

The following ethics statement is now included at the end of the Method section:

The study was conducted with the approval of the Ethics Committee of the Faculty of Social Sciences at the Hebrew University of Jerusalem under the study, "A path model for the connection between reinforcement sensitivity theory, emotion regulation, and psychopathology". All participants provided written consent prior to each wave of assessments.

Additionally, all figures were converted to .tif format and passed through the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool prior to upload. We have also carefully followed all file naming conventions as requested.

Attachment

Submitted filename: PLoS Response to Reviewers.docx

Decision Letter 1

Mohammad Farris Iman Leong Bin Abdullah

18 May 2022

PONE-D-21-18866R1: Mood Symptoms Predict COVID-19 Pandemic Distress but not Vice Versa: An 18-Month Longitudinal Study (PLOS ONE)

Dear Dr. Katz,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by July 2, 2022 at 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.


Kind regards,

Mohammad Farris Iman Leong Bin Abdullah, Dr Psych

Academic Editor

PLOS ONE

Journal Requirements:


Additional Editor Comments (if provided):



Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: p. 3

“The high levels of health and financial uncertainly salient to the pandemic are strongly linked to internalizing pathology [3].”

What about “externalizing pathology”? It seems important to reference as well.

p. 4

“Thus, in order to assess the COVID-19 pandemic’s negative impact on mental health, it is necessary to identify participants with pre-pandemic baseline data available, and to separately assess clinical symptoms, environmental stressors, and subjective distress.”

Thoughtful, complex design.

“The pandemic assessment period occurred between April 15, 2020 and April 20, 2020,..”

This research is limited by the lack of measurements 6, 12, or 18 months into the pandemic, which would reflect the effects of long-term, cumulative stress.

p. 5-6

Where are the demographic data? It appears pre-disaster trauma history (e.g., the ACES study) was not included. Given that interpersonal trauma could be a predictor of post-disaster/pandemic reactivity, this should at minimum be mentioned as a study limitation.

p. 6

“They represented a wide range of ages (M = 42.87, SD = 13.09, range = 19 – 75)”

It would be helpful to see the data on how many people fall into each age grouping (e.g., 19-29, 30-39, etc.) to see if age cohorts mattered or not.

p. 9-10

“The relationships between the environmental stressors and symptom severity were assessed using polychoric correlations. All other relationships (e.g., between subjective COVID-19 stress and symptom severity) were calculated using Pearson’s correlations.”

I am not well-versed in statistics, which perhaps is why it would be helpful for the reader to understand why two different measurements of correlation were chosen, and what they each differentially show and don’t show.

“After observing the unexpectedly small effect sizes of change between time-points, we next estimated whether these effects were statistically equivalent to zero. This was done using the two one-sided tests (TOST) procedure for equivalence testing [26].”

It is my understanding the more tests you run, the more you’re likely to “find something” of statistical significance. Is this taken into account? If so, specify how.

p. 13

“Distress intolerance, on the other hand, was not found to be significantly equivalent (p = .096) across timepoints. As such, distress intolerance was judged to be different between T1 and T2, to a significant albeit trivial degree.”

Is p= .096 considered significant?? Shouldn’t it be lower than .05? Perhaps this should be explained as to why this is statistically significant, however “trivial”.

p. 17

“While unexpected, these findings were consistent with patterns observed following other disasters. Populations appear to be resilient to large-scale environmental stressors [9].”

How much is this a function of this being very early in the pandemic? Would it change after 1 year? After getting the vaccine, and then new variants causing a resurgence?

Also, the subject demographics do not include an assessment of interpersonal trauma history, e.g., along the lines of the ACES studies. In my clinical experience, trauma history pre-“disaster” has a significant effect on post-disaster coping. At the minimum, the fact that interpersonal trauma history was NOT assessed should be mentioned.

“…those experiencing both environmental stressors relevant to the distress (e.g., economic stress) as well as a history of mental illness (e.g., anxiety).”

Again, trauma history is not included. Some of the anxiety, depression, isolation, etc. could be a consequence of post-trauma sequelae. Again, this limitation of the study needs mentioning.

p. 18

“Although it occurred during peak rates of COVID-19 fatality (see Figure 1), other events during the pandemic may have led to other distress peaks as well, such as when participants began to self-isolate [33].”

This is a good point.

..

“Identifying which participants are most vulnerable to mental illness in the face of pandemic-related stress is the first step in developing targeted interventions [35]. In doing so, the clinical science community may rise to the formidable challenge that COVID-19 has set before it.”

Agreed, but I don’t think the researchers take into account what I believe (and I assume research has shown) the importance of interpersonal, particularly severe childhood trauma history (a la the ACES research) and neglect. If you don’t take into account childhood trauma and neglect, your interventions will not be as patient-specific and effective as desired.

Reviewer #2: I appreciate the authors' hard efforts in this investigation. The submitted work has the following strengths: (i) a longitudinal design that can provide scientific evidence of temporal association; (ii) robust statistical analyses that can examine the theories proposed in the present study; (iii) the use of a theoretical model for investigation. However, there are some concerns in the present work, and the authors are encouraged to revise their work according to the following comments.

1. The Introduction should include some systematic reviews reporting the evidence of mental health issues during the COVID-19 pandemic to emphasize the importance of investigating mental health during the COVID-19 pandemic. Please see the following suggestions.

Rajabimajd, N., Alimoradi, Z., & Griffiths, M. D. (2021). Impact of COVID-19-related fear and anxiety on job attributes: A systematic review. Asian Journal of Social Health and Behavior, 4, 51-55.

Olashore, A. A., Akanni, O. O., Fela-Thomas, A. L., & Khutsafalo, K. (2021). The psychological impact of COVID-19 on health-care workers in African Countries: A systematic review. Asian Journal of Social Health and Behavior, 4, 85-97.

Alimoradi, Z., Ohayon, M. M., Griffiths, M. D., Lin, C.-Y., & Pakpour, A. H. (2022). Fear of COVID-19 and its association with mental health related factors: A systematic review and meta-analysis. BJPsych Open, 8, e73.

Alimoradi, Z., Lin, C.-Y., Ullah, I., Griffiths, M. D., & Pakpour, A. H. (2022). Item response theory analysis of Fear of COVID-19 Scale (FCV-19S): A systematic review. Psychology Research and Behavior Management, 15, 581-596.

Alimoradi, Z., Gozal, D., Tsang, H. W. H., Lin, C.-Y., Broström, A., Ohayon, M. M., & Pakpour, A. H. (2022). Gender-specific estimates of sleep problems during the COVID-19 pandemic: Systematic review and meta-analysis. Journal of Sleep Research, 31(1), e13432.

Alimoradi, Z., Broström, A., Tsang, H. W. H., Griffiths, M. D., Haghayegh, S., Ohayon, M. M., Lin, C.-Y., Pakpour, A. H. (2021). Sleep problems during COVID-19 pandemic and its’ association to psychological distress: A systematic review and meta-analysis. EClinicalMedicine, 36, 100916.

2. The authors tested two theoretical models (i.e., the general-stressor model and the diathesis-stress model). However, they did not introduce the two models in the Introduction; they only briefly mention the models' general concepts. Given that the study’s main focus is to examine and compare the two models, the authors should elaborate on the information and descriptions of the two models. It would be much better if the authors also used figures to explain the two models in the Introduction.

3. The majority of the participants were recruited from the US and the UK. Also, the authors described more COVID-19 information for the two countries. Therefore, I think that it is necessary to do a sensitivity analysis on the US and UK samples (i.e., removing the Canada and Ireland participants to examine the tested models).

4. The description of the DASS is unclear. From the description of “seven-item subscales”, I know that the authors used the DASS-21 instead of the DASS-42. However, the authors did not make it clear that they used the DASS-21. Moreover, citation credit should be given to Lovibond and Lovibond (1995), as they are the original developers. It is fine to cite Henry and Crawford; however, Lovibond and Lovibond cannot be excluded from the citations.

5. The authors should provide scoring information for all the instruments. Also, the meaning of the score directions for each instrument should be provided. Without such information, one cannot interpret the scores for the instruments.

6. From the Procedure section, one can understand that the attrition rate of the longitudinal study was about 40%. This is fine. However, the authors should provide information regarding whether the retained participants and the lost-to-follow-up participants share similar demographics. This can check whether the data are missing at random.

7. From the statement, “Participants were recruited via the Prolific Academic Platform as part of an ongoing study on reinforcement sensitivity, emotion regulation, and affective psychopathology”, I wonder whether the authors already have published work from this study that could be cited here.

8. Tables 2 and 3 exceed the page size and cannot be read.

9. For all the tables, the authors should use footnotes to provide definitions of T1, T2, and T3.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: PLOS One Review Mood Symptoms Predict COVID (Benau, 1-16-22).docx

PLoS One. 2022 Sep 2;17(9):e0273945. doi: 10.1371/journal.pone.0273945.r004

Author response to Decision Letter 1


20 Jun 2022

Reviewer #1:

p. 3

“The high levels of health and financial uncertainly salient to the pandemic are strongly linked to internalizing pathology [3].”

What about “externalizing pathology”. Seems important to reference as well.

We have now included a citation for externalizing pathology in the sentence (p. 3):

The high levels of health and financial uncertainly salient to the pandemic are strongly linked to stress, [7] internalizing pathology [8] and externalizing pathology [9].

p. 4

“The pandemic assessment period occurred between April 15, 2020 and April 20, 2020,..”

This research is limited by the lack of measurements 6 or 12 or 18 months into the pandemic, that reflects the effects of long-term, cumulative stress.

We have now included this point in the limitation section (pp. 23):

Furthermore, the current research design focuses on the acute periods of stress that accompanied the beginning of the pandemic. However, as the pandemic continued, stress has accumulated. Future studies may assess the association between cumulative stress and mental illness [47].

p. 5-6

Where are the demographic data. It appears pre-disaster trauma history (e.g. ACES study) was not included. Given that interpersonal trauma could be a predictor of post-disaster/pandemic reactivity, this should at minimum be mentioned as a study limitation.

Demographic data, including gender, location, race/ethnicity, age, education, and employment status, are available in S1 Table (p. 29). We have checked prior to uploading that this table is available for review.

Additionally, we now include the lack of assessment of adverse childhood experiences and other interpersonal factors as a limitation on p. 23:

Finally, the current study approaches risk factors for mental illness from the perspectives of intrapersonal processing and of acute stress. However, many risk factors both preceding and during the COVID-19 pandemic are interpersonal in nature. Social stressors both from the participant’s past (e.g., adverse childhood events) [47] and during their experiences in quarantine (e.g., domestic violence) [48] are themselves likely risk factors for loneliness, and mood disorder [49]. Future studies may further examine the role of interpersonal factors in well-being at the onset of the pandemic [47].

p. 6

“They represented a wide range of ages (M = 42.87, SD = 13.09, range = 19 – 75)”

It would be helpful to see the data on how many people in each age grouping, example 19-29, 30-39, etc. to see if age cohorts mattered or not.

This breakdown of ages is now included in the demographics table, S1 Table.

p. 9-10

“The relationships between the environmental stressors and symptom severity were assessed using polychoric correlations. All other relationships (e.g., between subjective COVID-19 stress and symptom severity) were calculated using Pearson’s correlations.”

I am not well-versed in statistics, which perhaps is why it would be helpful for the reader to understand why two different measurements of correlation were chosen, and what they each differentially show and don’t show.

Response:

To clarify this issue, we have now rewritten the beginning of the analysis plan to explain the use of polychoric vs Pearson correlations as well as how to interpret them (p. 11):

Analytic strategies were selected based on the psychometric properties of the variables. Environmental stressors were measured with binary data and as such were summarized using frequency statistics (i.e., rate and percentage). The relationships between the environmental stressors and symptom severity were assessed using the recommended polychoric correlations [38]. Subjective measures of COVID-19 stress, rumination, distress intolerance, depression, and anxiety were measured using dimensional variables. As such, they were summarized using descriptive statistics and their interrelationships were calculated using Pearson’s correlations. Polychoric correlations and Pearson’s correlations were evaluated using the same significance cutoff of p < .05 and their effect sizes were considered comparable to each other.
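For readers who want the intuition behind the two coefficients: a Pearson correlation is computed directly from observed continuous scores, while a polychoric (or, for two binary variables, tetrachoric) correlation estimates the association between normally distributed latent variables assumed to underlie the categorical responses. The sketch below is purely illustrative and is not the authors' analysis code; it contrasts scipy's Pearson correlation with a classical cosine-pi approximation to the tetrachoric correlation (a full maximum-likelihood polychoric estimate requires specialized software).

```python
import math
from scipy import stats

def tetrachoric_approx(a, b, c, d):
    """Cosine-pi approximation to the tetrachoric correlation for a
    2x2 table [[a, b], [c, d]] of counts: an estimate of the
    correlation between the latent continuous traits behind two
    binary variables. Illustrative only; maximum-likelihood
    polychoric estimation is more accurate."""
    return math.cos(math.pi / (1.0 + math.sqrt((a * d) / (b * c))))

# Pearson correlation for two dimensional (continuous) measures.
x = [10.0, 12.5, 9.0, 15.0, 11.0, 14.0]
y = [21.0, 24.0, 18.5, 30.0, 23.0, 27.5]
r, p = stats.pearsonr(x, y)

# Tetrachoric approximation for two binary stressor indicators:
# a strong diagonal (40/40 agreement vs 10/10 disagreement) implies
# a strong positive latent correlation; a flat table implies ~0.
r_strong = tetrachoric_approx(40, 10, 10, 40)
r_null = tetrachoric_approx(25, 25, 25, 25)
```

In practice, polychoric correlations are typically estimated by maximum likelihood (e.g., R's `polycor` package); the approximation above only conveys the idea that categorical counts are mapped back to a latent continuous correlation, which is why the two coefficient types can be treated as comparable in effect size.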

“After observing the unexpectedly small effect sizes of change between time-points, we next estimated whether these effects were statistically equivalent to zero. This was done using the two one-sided tests (TOST) procedure for equivalence testing [26].”

It is my understanding the more tests you run, the more you’re likely to “find something” of statistical significance. Is this taken into account? If so, specify how.

Response:

We certainly agree with the reviewer’s concern regarding researcher degrees of freedom. In keeping with this concern, we follow the Open Science movement’s recommendation to report transparently which analyses were chosen a priori and which were secondary (e.g., Benning et al., 2019). In this case, we found the data to be most appropriate for an approach that combines null hypothesis significance testing (NHST) with the TOST equivalence procedure. Importantly, we did not perform other tests on these data germane to the current study, and we have now included a citation when describing the study that may refer interested readers to the other tests that have been performed on this dataset in a separate study (p. 10):

Participants were recruited via the Prolific Academic Platform as part of an ongoing study on reinforcement sensitivity, emotion regulation, and affective psychopathology [35]

Source: Benning, S. D., Bachrach, R. L., Smith, E. A., Freeman, A. J., & Wright, A. G. (2019). The registration continuum in clinical science: A guide toward transparent practices. Journal of Abnormal Psychology, 128(6), 528.

p. 13

“Distress intolerance, on the other hand, was not found to be significantly equivalent (p = .096) across timepoints. As such, distress intolerance was judged to be different between T1 and T2, to a significant albeit trivial degree.”

Is p= .096 considered significant?? Shouldn’t it be lower than .05? Perhaps this should be explained as to why this is statistically significant, however “trivial”.

Response:

All differences between time-points were subjected to two rounds of testing. First, we used null hypothesis significance testing (NHST) to evaluate difference between the two time points. Then, we used the two one-sided tests (TOST) approach to examine equivalence. A significant NHST indicates difference, while a significant TOST indicates equivalence. In this case, NHST of distress intolerance from T1 to T2 found a significant difference (d = -.10, p = .049). Then, at the point in the analysis the reviewer references, we observed a failed TOST (p = .096), indicating non-equivalence. We thus concluded that the values were different, based on NHST, and non-equivalent, based on TOST.

To clarify this point, we have now updated the text of that section of analysis to clarify this difference (p. 13):

However, distress intolerance – which was found to be significantly different from T1 to T2 using NHST (d = .10, p = .049) – was not found to be significantly equivalent using TOST (p = .096).
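The two-step logic described in this response (NHST for difference, then TOST for equivalence) can be sketched in a few lines of Python. This is an illustrative reconstruction only, not the authors' analysis code, and the equivalence bounds and scores used here are arbitrary placeholders:

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, low, high):
    """Two one-sided tests (TOST) for paired samples.

    H0: the mean difference lies outside the equivalence bounds
    [low, high]. A small p value supports equivalence. Returns the
    larger of the two one-sided p values, per the TOST convention."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    mean, se = d.mean(), d.std(ddof=1) / np.sqrt(n)
    df = n - 1
    p_lower = stats.t.sf((mean - low) / se, df)    # mean above the lower bound?
    p_upper = stats.t.cdf((mean - high) / se, df)  # mean below the upper bound?
    return max(p_lower, p_upper)

# Step 1 (NHST): is there a significant difference between time points?
t1_scores = np.tile([0.01, -0.01, 0.02, -0.02, 0.0], 20) + 5.0
t2_scores = np.full(100, 5.0)
t_stat, p_nhst = stats.ttest_rel(t1_scores, t2_scores)

# Step 2 (TOST): is the difference equivalent to zero within the bounds?
p_tost = tost_paired(t1_scores, t2_scores, -0.5, 0.5)
```

A significant NHST paired with a non-significant TOST, as with distress intolerance here, means the scores are statistically different yet cannot be declared equivalent within the chosen bounds; the two tests answer different questions and can disagree.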

p. 17

“While unexpected, these findings were consistent with patterns observed following other disasters. Populations appear to be resilient to large-scale environmental stressors [9].”

How much is this a function of this being very early in the pandemic? Would it change after 1 year? After getting the vaccine, and then new variants causing a resurgence?

Also, the subject demographics do not include an assessment of interpersonal trauma history, e.g., along the lines of the ACES studies. In my clinical experience, trauma history pre-“disaster” has a significant effect on post-disaster coping. At the minimum, the fact that interpersonal trauma history was NOT assessed should be mentioned.

“…those experiencing both environmental stressors relevant to the distress (e.g., economic stress) as well as a history of mental illness (e.g., anxiety).”

Again, trauma history is not included. Some of the anxiety, depression, isolation, etc. could be a consequence of post-trauma sequelae. Again, this limitation of the study needs mentioning.

Response:

We now include these as limitations to the study (p. 23):

Finally, the current study approaches risk factors for mental illness from the perspectives of intrapersonal processing and of acute stress. However, many risk factors both preceding and during the COVID-19 pandemic are interpersonal in nature. Social stressors both from the participant’s past (e.g., adverse childhood events) [47] and during their experiences in quarantine (e.g., domestic violence) [48] are themselves likely risk factors for loneliness, and mood disorder [49]. Future studies may further examine the role of interpersonal factors in well-being at the onset of the pandemic [47]. Furthermore, the current research design focuses on the acute periods of stress that accompanied the beginning of the pandemic. However, as the pandemic continued, stress has accumulated. Future studies may assess the association between cumulative stress and mental illness [47].

p. 18

“Identifying which participants are most vulnerable to mental illness in the face of pandemic-related stress is the first step in developing targeted interventions [35]. In doing so, the clinical science community may rise to the formidable challenge that COVID-19 has set before it.”

Agreed, but I don’t think the researchers take into account what I believe (and I assume research has shown) the importance of interpersonal, particularly severe childhood trauma history (a la the ACES research) and neglect. If you don’t take into account childhood trauma and neglect, your interventions will not be as patient-specific and effective as desired.

We now mention adverse childhood events and other interpersonal risk factors in the new limitation paragraph (see above response). 

Reviewer #2

1. The Introduction should include some systematic reviews reporting the evidence of mental health issues during the COVID-19 pandemic to emphasize the importance of investigating mental health during the COVID-19 pandemic. Please see the following suggestions.

Rajabimajd, N., Alimoradi, Z., & Griffiths, M. D. (2021). Impact of COVID-19-related fear and anxiety on job attributes: A systematic review. Asian Journal of Social Health and Behavior, 4, 51-55

Olashore, A. A., Akanni, O. O., Fela-Thomas, A. L., & Khutsafalo, K. (2021). The psychological impact of COVID-19 on health-care workers in African Countries: A systematic review. Asian Journal of Social Health and Behavior, 4, 85-97

Alimoradi, Z., Ohayon, M. M., Griffiths, M. D., Lin, C.-Y., & Pakpour, A. H. (2022). Fear of COVID-19 and its association with mental health related factors: A systematic review and meta-analysis. BJPsych Open, 8, e73.

Alimoradi, Z., Lin, C.-Y., Ullah, I., Griffiths, M. D., & Pakpour, A. H. (2022). Item response theory analysis of Fear of COVID-19 Scale (FCV-19S): A systematic review. Psychology Research and Behavior Management, 15, 581-596.

Alimoradi, Z., Gozal, D., Tsang, H. W. H., Lin, C.-Y., Broström, A., Ohayon, M. M., & Pakpour, A. H. (2022). Gender-specific estimates of sleep problems during the COVID-19 pandemic: Systematic review and meta-analysis. Journal of Sleep Research, 31(1), e13432.

Alimoradi, Z., Broström, A., Tsang, H. W. H., Griffiths, M. D., Haghayegh, S., Ohayon, M. M., Lin, C.-Y., Pakpour, A. H. (2021). Sleep problems during COVID-19 pandemic and its’ association to psychological distress: A systematic review and meta-analysis. EClinicalMedicine, 36, 100916.

We have now included updated citations from this list and others, including:

Alimoradi Z, Gozal D, Tsang HWH, Lin C, Broström A, Ohayon MM, et al. Gender‐specific estimates of sleep problems during the COVID‐19 pandemic: Systematic review and meta‐analysis. Journal of Sleep Research. 2022;31. doi:10.1111/jsr.13432

Alimoradi Z, Ohayon MM, Griffiths MD, Lin C-Y, Pakpour AH. Fear of COVID-19 and its association with mental health-related factors: systematic review and meta-analysis. BJPsych Open. 2022;8: e73. doi:10.1192/bjo.2022.26

Avena NM, Simkus J, Lewandowski A, Gold MS, Potenza MN. Substance use disorders and behavioral addictions during the COVID-19 pandemic and COVID-19-related restrictions. Frontiers in Psychiatry. 2021;12. doi:10.3389/fpsyt.2021.653674

Berman NC, Fang A, Hoeppner SS, Reese H, Siev J, Timpano KR, et al. COVID-19 and obsessive-compulsive symptoms in a large multi-site college sample. Journal of Obsessive-Compulsive and Related Disorders. 2022; 100727. doi:10.1016/j.jocrd.2022.100727

Bourmistrova NW, Solomon T, Braude P, Strawbridge R, Carter B. Long-term effects of COVID-19 on mental health: A systematic review. Journal of Affective Disorders. 2022;299: 118–125. doi:10.1016/j.jad.2021.11.031

Hawes MT, Szenczy AK, Klein DN, Hajcak G, Nelson BD. Increases in depression and anxiety symptoms in adolescents and young adults during the COVID-19 pandemic. Psychological Medicine. 2021; 1–9. doi:10.1017/S0033291720005358

Robinson E, Sutin AR, Daly M, Jones A. A systematic review and meta-analysis of longitudinal cohort studies comparing mental health before versus during the COVID-19 pandemic in 2020. Journal of Affective Disorders. 2022;296: 567–576. doi:10.1016/j.jad.2021.09.098

Shanahan L, Steinhoff A, Bechtiger L, Murray AL, Nivette A, Hepp U, et al. Emotional distress in young adults during the COVID-19 pandemic: evidence of risk and resilience from a longitudinal cohort study. Psychological Medicine. 2022;52: 824–833. doi:10.1017/S003329172000241X

Viner R, Russell S, Saulle R, Croker H, Stansfield C, Packer J, et al. School closures during social lockdown and mental health, health behaviors, and well-being among children and Adolescents during the first COVID-19 wave. JAMA Pediatrics. 2022;176: 400. doi:10.1001/jamapediatrics.2021.5840

2. The authors tested two theoretical models (i.e., the general-stressor model and the diathesis-stress model). However, they did not introduce the two models in the Introduction; they only briefly mention the models' general concepts. Given that the study’s main focus is to examine and compare the two models, the authors should elaborate on the information and descriptions of the two models. It would be much better if the authors also used figures to explain the two models in the Introduction.

We have now expanded upon the two models in the introduction to give each theoretical model more space to be explained (pp. 3-4).

Researchers and public health specialists raised concerns that this time of acute stress would bring about a global spike in mental illness [12]. In doing so, they implicitly argue in favor of a general-stressor model, pointing out that the many disruptions and challenges presented by the COVID-19 pandemic have led to greater levels of cumulative stress across the population (Fig 1a). This population-level stress would transdiagnostically increase levels of mental illness on a population level [13]. Indeed, infection with COVID-19 has been found to negatively impact mental health [14,15], and pandemic-related stress has been found to be associated with elevated symptom severity [16,17]. However, recent large-scale studies have found less support for the general-stressor model on a population level, even at the beginning of the pandemic, a period of great stress and uncertainty [6,18,19]. Indeed, populations tend to be quite resilient in the face of disaster-related stress (for review, see [20]). For example, in 2012, Hurricane Sandy-related stress only predicted elevated symptoms among children predisposed to symptom-relevant affect: those high in temperamental sadness in a prior assessment showed elevated levels of depressive symptoms, while those high in temperamental fearfulness showed elevated levels of anxiety symptoms [21]. Similarly, the COVID-19 pandemic may serve as a trigger for those with vulnerabilities to specific disorders rather than as a population-level stressor [22].

Furthermore, the theorized relationship between pandemic-related distress and psychopathology may follow the opposite causal direction to that presumed in a general-stressor model. Loneliness caused by self-isolation may lead to greater levels of depression, as hypothesized [10,23]. However, in a diathesis-stress model (see Fig 1b), the opposite causal direction is also possible [13]. In such a case, individuals with more severe depression are themselves more sensitive to distress [24]. Those with more severe baseline symptom severity may be more sensitive to the loneliness experienced during self-isolation, particularly at the beginning [18,25]. Thus, in such a scenario, the observed association between depression and pandemic-related loneliness would still exist [16]. However, it would not be because pandemic-related loneliness led to greater levels of depression. Rather, in such a model, those who entered the pandemic with higher levels of depression would feel greater levels of loneliness during lockdown.

We have also included additional citations to support each approach and explicitly traced how each part of the model would relate to each other. Additionally, we have taken the reviewer’s suggestion to include a figure that displays these relationships, as preparation for the analyses that will be performed below (Figures 1a-1b).

3. The majority of the participants were recruited from the US and the UK. Also, the authors described more COVID-19 information for the two countries. Therefore, I think that it is necessary to do a sensitivity analysis on the US and UK samples (i.e., removing the Canada and Ireland participants to examine the tested models).

We have repeated the analyses using only the samples from the US and UK, and the resultant findings were remarkably similar. Average scores on the metrics of interest closely overlapped. For example, in the full sample, participants’ levels of pandemic-related anxiety were M = 57.40, SD = 28.00, while in the UK-US-only sample, they were M = 57.54, SD = 27.51. Effect sizes from the main analyses were extremely similar as well. In the full sample, for example, depression from T1 to T2 changed with a Cohen’s d of -.02 [-0.10; 0.06]. In the UK-US sample, Cohen’s d for this effect was -.02 [-0.11; 0.06]. We thus feel that the main findings were not driven by the inclusion of participants from Ireland and Canada. We now include the full re-analysis as supplemental material (S1 File) for others who may have a similar concern.

4. The description of the DASS is unclear. From the description of “seven-item subscales”, I know that the authors used the DASS-21 instead of the DASS-42. However, the authors did not make it clear that they used the DASS-21. Moreover, citation credit should be given to Lovibond and Lovibond (1995), as they are the original developers. It is fine to cite Henry and Crawford; however, Lovibond and Lovibond cannot be excluded from the citations.

We have updated the description to clarify the use of the DASS-21 and included the Lovibond & Lovibond citation as well.

5. The authors should provide scoring information for all the instruments. Also, the meaning of the score directions for each instrument should be provided. Without such information, one cannot interpret the scores for the instruments.

We have provided information for each scale’s item scoring system, potential range, and meaning of higher scores.

6. From the Procedure section, one can understand that the attrition rate of the longitudinal study was about 40%. This is fine. However, the authors should provide information regarding whether the retained participants and the lost-to-follow-up participants share similar demographics. This can check whether the data are missing at random.

We have reviewed attrition rates and demographic information about the participants and found that those who dropped out were higher in depression and anxiety to a small degree. These differences in depression (M (SD) = 5.85 (5.64) vs 7.76 (5.97)) and anxiety (M (SD) = 3.25 (3.91) vs 4.33 (4.25)), however, were not judged to be substantial enough to establish a clinically significant difference between the groups. Furthermore, because participants were only included if they completed all three assessments, we were not concerned about this difference impacting sample scores across timepoints.

We do feel, however, that readers would want to be informed of this difference in order to better evaluate the results. As such, we have updated the manuscript in a number of ways. First, we now report these differences in the text of the Procedure section (p. 11):

Participants who returned for the final assessment showed small differences in symptom severity and moderate differences in age from those who did not (S1 File). Specifically, participants who returned were older than those who did not (M = 41.90, SD = 13.07 vs M = 34.07, SD = 10.28, d = .65, p < .001), slightly less depressed (M = 5.85, SD = 5.64 vs M = 7.76, SD = 5.97, d = .33, p = .004), and slightly less anxious (M = 3.25, SD = 3.91 vs M = 4.33, SD = 4.25, d = .27, p = .019). In order to minimize their effects on difference scores between timepoints, only participants who completed all three time-points were included in analyses.

Second, we include a table of differences in scores as a function of attrition in supplementary table S2 Table (p. 32).

Finally, we note this issue at the end of the Discussion (pp. 22-23):

Additionally, the differences between those who returned for the final assessment and those who did not are worthy of acknowledgement. Although the differences in symptom severity were small and of low clinical significance before the pandemic, it is possible that the participants who did not return for the final assessment were more at risk for symptom increase than those who did. In such a case, it is possible that had they been included, equivalence across timepoints may not have occurred. This hypothetical circumstance, however, is also inconsistent with the general stressor model that predicts symptom increase across the population. Furthermore, the current study is consistent with other large-scale studies that have also not found that symptom severity increased precipitously during the pandemic [6,18,19]. Further research may benefit from focusing on how the pandemic impacted well-being among participants at different levels of symptom severity.

7. From the statement, “Participants were recruited via the Prolific Academic Platform as part of an ongoing study on reinforcement sensitivity, emotion regulation, and affective psychopathology”, I wonder whether the authors have some publications already published to make a citation here.

We have now amended the text to read (p. 10) “Participants were recruited via the Prolific Academic Platform as part of an ongoing study on reinforcement sensitivity, emotion regulation, and affective psychopathology [35].”.

8. Tables 2 and 3 are out of the size and cannot be read.

We have adjusted the page orientation for these tables in order to include the full tables within the margins.

9. For all the tables, the authors should use footnotes to provide definition of T1, T2, and T3.

Tables and figures were updated to include definitions for each time-point (e.g., “T1 – Data collection at Time 1, October 17, 2018. T2 – Data collection at Time 2, April 15-22, 2019. T3 – Data collection at Time 3, April 15-20, 2020.”)

Attachment

Submitted filename: PLOS One Reviews.docx

Decision Letter 2

Sergio A Useche

28 Jul 2022

PONE-D-21-18866R2

Mood Symptoms Predict COVID-19 Pandemic Distress but not Vice Versa: An 18-Month Longitudinal Study

PLOS ONE

Dear Dr. Katz,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Your paper has been re-assessed by our referees. Overall, they remark on the good progress made in your previous round of revisions. However, Reviewer 1 raised a set of minor comments that must be addressed by you (see attachments) before the manuscript can be considered acceptable for publication in PLOS ONE. As these changes are not substantial (but still need to be addressed with all the rigor possible), I will be pleased to evaluate them myself, instead of starting a new round of reviews. This might help to expedite an editorial decision on the manuscript.

Please submit your revised manuscript by Sep 11 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Sergio A. Useche, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Mood Symptoms Predict COVID-19 Pandemic Distress but not Vice Versa:

An 18-Month Longitudinal Study

Benjamin A. Katz,1* Iftah Yovel1

KSB Review (7-27-22)

1st review comments (1-16-22): Yellow

2nd review comments (7-27-22): Blue

p. 3

“The high levels of health and financial uncertainly salient to the pandemic are strongly linked to internalizing pathology [3].”

What about “externalizing pathology”? It seems important to reference as well.

My question re: externalizing pathology was not addressed by the authors. I would assume there are a lot of data demonstrating that the pandemic was correlated with increased marital/family conflict, increased drug use, increased aggression, etc. Even if the authors chose not to explore this in their article, I think it would be important to mention, briefly, as it reflects an oft discussed problem associated with social isolation.

p. 4

“Thus, in order to assess the COVID-19 pandemic’s negative impact on mental health, it is necessary to identify participants with pre-pandemic baseline data available, and to separately assess clinical symptoms, environmental stressors, and subjective distress.”

Thoughtful, complex design.

“The pandemic assessment period occurred between April 15, 2020 and April 20, 2020,..”

This research is limited by the lack of measurements 6, 12, or 18 months into the pandemic that would reflect the effects of long-term, cumulative stress.

This was addressed by the authors. Perhaps I missed it the first time, or perhaps they added it, but I appreciate their recognizing the importance of longer-term follow-up.

p. 5-6

Where are the demographic data? It appears pre-disaster trauma history (e.g., the ACES study) was not included. Given that interpersonal trauma could be a predictor of post-disaster/pandemic reactivity, this should at minimum be mentioned as a study limitation.

This has not been mentioned/addressed by the authors, and I think should at minimum be included in their discussion of the limitations of this study. On page 17, they wrote: “Thus, those most at risk for distress during the pandemic were those experiencing both environmental stressors relevant to the distress (e.g., economic stress) as well as a history of mental illness (e.g., anxiety).”

That is good as far as it goes, using baseline data, but it makes no reference to how trauma history might predict greater distress and psychopathology as response to the pandemic. I would still want this referenced, however briefly, around page 17.

p. 6

“They represented a wide range of ages (M = 42.87, SD = 13.09, range = 19 – 75)”

It would be helpful to see the data on how many people are in each age grouping (e.g., 19-29, 30-39, etc.) to see whether age cohorts mattered or not.

The authors wrote: “Further demographic data is available in S1.”

The authors did not explore whether there are any cohort differences with respect to outcome. They also didn’t explore whether there were significant outcome differences between white vs non-white participants, or North America vs UK/Ireland. Again, I would like this at minimum referenced as a limitation worthy of further investigation.

p. 9-10

“The relationships between the environmental stressors and symptom severity were assessed using polychoric correlations. All other relationships (e.g., between subjective COVID-19 stress and symptom severity) were calculated using Pearson’s correlations.”

I am not well-versed in statistics, which perhaps is why it would be helpful for the reader to understand why two different measurements of correlation were chosen, and what they each differentially show and don’t show.

The authors now wrote: “The models included binary data (i.e., environmental stressors) and were therefore calculated using polychoric correlations and a diagonal weighted least squares (DWLS) estimator [27].”

As far as I can tell, this addressed my concern as stated above.

“After observing the unexpectedly small effect sizes of change between time-points, we next estimated whether these effects were statistically equivalent to zero. This was done using the two one-sided tests (TOST) procedure for equivalence testing [26].”

It is my understanding the more tests you run, the more you’re likely to “find something” of statistical significance. Is this taken into account? If so, specify how.

I’m not sure this was addressed by the authors, but I am not a statistician so perhaps it was considered when the authors wrote: “Models were assessed using robust fit statistics using the recommended cutoffs [28,29] of: non- significant chi-square test, CFI > .95, RMSEA < .06, SRMR < .08.”

p. 13

“Distress intolerance, on the other hand, was not found to be significantly equivalent (p = .096) across timepoints. As such, distress intolerance was judged to be different between T1 and T2, to a significant albeit trivial degree.”

Is p = .096 considered significant? Shouldn’t it be lower than .05? Perhaps it should be explained why this is statistically significant, however “trivial”.

The authors have not addressed my concern at all. How can something be “not significantly equivalent” and “different... to a significant albeit trivial degree.” I understand p= .096 fits with “not significantly equivalent”, but then how do the authors conclude T1 and T2 are “different to a significant albeit trivial degree”? “Not equivalent” is not the same as “different to a significant degree”. Where is the data to support that second statement? Am I missing something?

p. 17

“While unexpected, these findings were consistent with patterns observed following other disasters. Populations appear to be resilient to large-scale environmental stressors [9].”

How much is this a function of this being very early in the pandemic? Would it change after 1 year? After getting the vaccine, and then new variants causing a resurgence?

Since the authors did followup at 12 and 18 months, this concern was addressed.

Also, the subject demographics do not include assessment of interpersonal trauma history, e.g., along the lines of the ACES studies. In my clinical experience, trauma history pre-“disaster” has a significant effect on post-disaster coping. At the minimum, the fact that interpersonal trauma history was NOT assessed should be mentioned.

As I highlighted above (p. 5-6), no mention was made of pre-pandemic trauma history. In my clinical experience and that of my colleagues, there was an intersection between complex childhood trauma and response to the pandemic. Again, I would want that stated as a limitation of this study.

“…those experiencing both environmental stressors relevant to the distress (e.g., economic stress) as well as a history of mental illness (e.g., anxiety).”

Again, trauma history is not included. Some of the anxiety, depression, isolation, etc. could be a consequence of post-trauma sequelae. Again, this limitation of this study needs mentioning.

Again, history of mental illness is an important variable, but trauma history may be as or more important. Mention this is worthy of further study.

p. 18

“Although it occurred during peak rates of COVID-19 fatality (see Figure 1), other events during the pandemic may have led to other distress peaks as well, such as when participants began to self-isolate [33].”

This is a good point.

..

“Identifying which participants are most vulnerable to mental illness in the face of pandemic-related stress is the first step in developing targeted interventions [35]. In doing so, the clinical science community may rise to the formidable challenge that COVID-19 has set before it.”

Agreed, but I don’t think the researchers take into account what I believe (and I assume research has shown) the importance of interpersonal, particularly severe childhood trauma history (a la the ACES research) and neglect. If you don’t take into account childhood trauma and neglect, your interventions will not be as patient-specific and effective as desired.

On page 18, the authors wrote: “Collaborative work between multiple laboratories will also offer opportunities to compare findings among diverse populations and assessment methods in order to more efficiently identify at-risk populations [34].”

Not to beat a dead horse, but what I highlighted speaks directly to my concern that pre-pandemic, and especially early childhood, relational trauma history was not assessed. Since the issue of relational trauma addresses both matters of “population” and “assessment”, this would be the place to mention trauma history as important in “further/future study.”

Reviewer #2: I am happy with the revised manuscript. Apparently, the authors have well addressed all the prior comments. I have no more comments and I would like to thank the authors again for addressing all the prior issues.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: PLOS One Review Mood Symptoms Predict COVID (KSB, 7-27-22).pdf

PLoS One. 2022 Sep 2;17(9):e0273945. doi: 10.1371/journal.pone.0273945.r006

Author response to Decision Letter 2


8 Aug 2022

Reviewer #1:

p. 3

“The high levels of health and financial uncertainly salient to the pandemic are strongly linked to internalizing pathology [3].”

My question re: externalizing pathology was not addressed by the authors. I would assume there are a lot of data demonstrating that the pandemic was correlated with increased marital/family conflict, increased drug use, increased aggression, etc. Even if the authors chose not to explore this in their article, I think it would be important to mention, briefly, as it reflects an oft discussed problem associated with social isolation.

We have now expanded upon the role of externalizing pathology and recommended that the current findings should be considered against similar studies on externalizing disorders (p. 25):

Additionally, the current study operationalized mental health in terms of metrics of distress, loneliness, depression, and anxiety. However, other metrics of behavioral health, such as substance use and behavioral addictions, have also been associated with social isolation during the pandemic’s onset [9]. The current study may be considered against others related to externalizing disorders in order to assess trends across different types of psychopathology.

p. 5-6

Where are the demographic data? It appears pre-disaster trauma history (e.g., the ACES study) was not included. Given that interpersonal trauma could be a predictor of post-disaster/pandemic reactivity, this should at minimum be mentioned as a study limitation.

This has not been mentioned/addressed by the authors, and I think should at minimum be included in their discussion of the limitations of this study. On page 17, they wrote: “Thus, those most at risk for distress during the pandemic were those experiencing both environmental stressors relevant to the distress (e.g., economic stress) as well as a history of mental illness (e.g., anxiety).”

That is good as far as it goes, using baseline data, but it makes no reference to how trauma history might predict greater distress and psychopathology as response to the pandemic. I would still want this referenced, however briefly, around page 17.

Following the paragraph mentioned by the reviewer now on pp. 21-22, we now include the importance of adverse childhood experiences and other interpersonal factors in the discussion and explicitly point to where other longitudinal studies may be used to assess such questions (p. 23):

Indeed, it also more generally highlights the importance of longitudinal data in identifying those most at risk for elevated levels of mental illness during periods of great distress. For example, individuals with adverse childhood experiences may also be at risk for increased levels of mental illness during times of pandemic-related stress [47]. In this case, ongoing, lifetime longitudinal studies are specially equipped to assess the relationship between diatheses related to earlier experiences and later distress [16].

p. 6

“They represented a wide range of ages (M = 42.87, SD = 13.09, range = 19 – 75)”

The authors wrote: “Further demographic data is available in S1.”

The authors did not explore whether there are any cohort differences wrt outcome. They also didn’t explore whether there were significant outcome differences between white vs non-white, North America vs UK/Ireland. Again, I would like this at minimum referenced as a limitation worthy of further investigation.

Owing to the limited sample size and the distribution of participants across varied locations, we are concerned that running the number of comparisons this suggestion entails may yield false-positive group differences in effect size. However, consistent with the reviewer’s request, we now reference the role of demographics in COVID-related depression and anxiety in the Discussion, highlighting the role of syndemic interactions between minoritized identity and the effects of the pandemic. We agree with the reviewer that interactions such as these are worthy of future inquiry, and we now note this as a limitation of the current study’s analysis and highlight this point as worthy of further investigation on p. 23:

These findings may complement others that use demographic correlates to identify those at higher risk for elevated mental illness during the pandemic. Factors such as age, gender, race, ethnicity, location, and education are associated with internalizing pathology at the onset of the pandemic, alongside pandemic-related stressors [44]. These different risk factors interact syndemically, wherein pandemic-related stressors have greater impacts upon mental health for those with minoritized racial identities [45] or who live in lower-income communities [46]. While the current study’s sample size is too small to undertake such interaction analyses, other large-scale studies will be better equipped to do so.

p. 13

“Distress intolerance, on the other hand, was not found to be significantly equivalent (p = .096) across timepoints. As such, distress intolerance was judged to be different between T1 and T2, to a significant albeit trivial degree.”

Is p= .096 considered significant?? Shouldn’t it be lower than .05? Perhaps this should be explained as to why this is statistically significant, however “trivial”.

The authors have not addressed my concern at all. How can something be “not significantly equivalent” and “different... to a significant albeit trivial degree”? I understand p = .096 fits with “not significantly equivalent”, but then how do the authors conclude T1 and T2 are “different to a significant albeit trivial degree”? “Not equivalent” is not the same as “different to a significant degree”. Where is the data to support that second statement? Am I missing something?

Response:

The TOST approach entails three steps when assessing group difference versus equivalence. First, we assess difference using null-hypothesis significance testing (NHST). For distress intolerance, this was the first test, which was significant (i.e., “significantly different from T1 to T2 using NHST (d = .10, p = .049)”). Next, we assessed equivalence using the two one-sided tests (TOST) approach, which yielded a non-significant outcome (i.e., “was not found to be significantly equivalent using TOST (p = .096)”). Finally, we assessed the size of this difference using Cohen’s d. Consistent with convention, effect sizes up to d = .10 were considered trivially small. We understand that the reviewer was also previously concerned that this would be an excess of tests for a single effect size. However, we would like to emphasize that this three-step process is the gold-standard approach for equivalence testing. We have added clarifications to the Analysis Plan section of the manuscript so that the systematic nature of these steps would be clearer (p. 12):

Assessing the extent to which depression and anxiety changed or remained the same between time points was done in three steps. First, we performed conventional null hypothesis significance tests (NHST) to examine differences between time-points. Specifically, within-group t tests were used, with symptom and trait measures (e.g., DASS-Depression) entered as the repeated measures across consecutive time-points (i.e., T1-T2 or T2-T3). In this case, a significant finding would indicate that the repeated measures changed between time-points.

After observing the unexpectedly small effect sizes of change between time-points, we next estimated whether these effects were statistically equivalent to zero. This procedure is performed using a combination of the previous NHSTs and an additional two one-sided tests (TOST) procedure for equivalence testing [39]. In this procedure, an a priori smallest effect size of interest (SESOI) is calculated. Two one-sided t-tests then assess whether the full confidence interval of the change score estimate falls within the positive and negative SESOI (e.g., Cohen’s d = -.20 < X < .20). If so, the change score is judged as statistically equivalent to zero and scores are considered equivalent to each other. Finally, in the case of significant difference, we examined Cohen’s d of difference in order to assess the size of the difference between timepoints.
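Purely as an illustration of the three-step procedure described above (this sketch is not part of the manuscript), the logic can be expressed in Python with SciPy. The function name `tost_paired`, its arguments, and the default SESOI of d = .20 are hypothetical choices for the example:

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, sesoi_d=0.20, alpha=0.05):
    """Three-step difference-vs-equivalence check for paired scores.

    Step 1: conventional paired t-test (NHST) for a difference.
    Step 2: two one-sided tests (TOST) against +/- SESOI (in Cohen's d units).
    Step 3: Cohen's d of the paired change, as the effect-size estimate.
    """
    diff = np.asarray(x, float) - np.asarray(y, float)
    n, df = diff.size, diff.size - 1
    sd = diff.std(ddof=1)
    se = sd / np.sqrt(n)

    # Step 1: NHST -- is the mean change different from zero?
    t_nhst = diff.mean() / se
    p_diff = 2 * stats.t.sf(abs(t_nhst), df)

    # Step 2: TOST -- does the change fall inside the equivalence bounds?
    delta = sesoi_d * sd                                    # SESOI in raw units
    p_lower = stats.t.sf((diff.mean() + delta) / se, df)    # H0: mean <= -delta
    p_upper = stats.t.cdf((diff.mean() - delta) / se, df)   # H0: mean >= +delta
    p_equiv = max(p_lower, p_upper)

    # Step 3: effect size of the change
    d = diff.mean() / sd
    return {"p_diff": p_diff, "p_equiv": p_equiv, "d": d,
            "different": p_diff < alpha, "equivalent": p_equiv < alpha}
```

Under this sketch, the pattern reported for distress intolerance corresponds to `different` being true (p = .049) while `equivalent` is false (p = .096), together with a trivially small d, which is exactly the combination the three-step procedure is designed to distinguish.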

p. 17

“…those experiencing both environmental stressors relevant to the distress (e.g., economic stress) as well as a history of mental illness (e.g., anxiety).”

Again, trauma history is not included. Some of the anxiety, depression, isolation, etc. could be a consequence of post-trauma sequelae. Again, this limitation of the study needs mentioning.

Again, history of mental illness is an important variable, but trauma history may be as or more important. Mention that this is worthy of further study.


Response:

We have taken four steps to remedy this concern. First, we now mention in the Method section that childhood experiences were not assessed (p. 10):

They completed a series of self-report questionnaires related to reinforcement sensitivity, emotion regulation and affective pathology followed by an unrelated behavioral task (e.g., for similar procedure, see [36]). Questionnaires only assessed recent levels of psychopathology. Histories of psychopathology or childhood risk factors (e.g., adverse childhood events) were not assessed.

Second, in the discussion on p. 23, we now consider the role of lifetime longitudinal studies in identifying those who experienced ACEs (“Indeed, it also more…and later distress [16].” See response to p. 5-6 above for full quote).

Third, in the limitations section, we now expand upon the extant discussion of ACEs to highlight the importance of including measures of participants’ past experiences that occurred prior to the first baseline assessment (p. 24-25):

Finally, the current study approaches risk factors for mental illness from the perspectives of contemporary intrapersonal processing, acute stress, and internalizing psychopathology. However, many risk factors for elevated mental illness occurred prior to the original baseline measures included in the study. Elevated levels of psychopathology prior to assessment [24] or adverse childhood events [47] are both useful for identifying those at risk for greater levels of distress while facing pandemic-related stressors. Future studies using similar designs may use retrospective assessments to capture some of these risk factors as well.

Finally, we now highlight the importance of assessing clinical history as a future direction of research in the closing paragraph (p. 25):

Collaborative work between multiple laboratories will also offer opportunities to compare findings among diverse populations, assessment methods, and clinical histories in order to more efficiently identify at-risk populations [50].

p. 18

“Identifying which participants are most vulnerable to mental illness in the face of pandemic-related stress is the first step in developing targeted interventions [35]. In doing so, the clinical science community may rise to the formidable challenge that COVID-19 has set before it.”

Agreed, but I don’t think the researchers take into account what I believe (and I assume research has shown) to be the importance of interpersonal trauma, particularly severe childhood trauma history (à la the ACEs research) and neglect. If you don’t take into account childhood trauma and neglect, your interventions will not be as patient-specific and effective as desired.

On page 18, the authors wrote: “Collaborative work between multiple laboratories will also offer opportunities to compare findings among diverse populations and assessment methods in order to more efficiently identify at-risk populations [34].”

Not to beat a dead horse, but what I highlighted speaks directly to my concern that prepandemic, and especially early childhood, relational trauma history was not assessed. Since the issue of relational trauma addresses both matters of “population” and “assessment”, this would be the place to mention trauma history as important in “further/future study.”

Please see the response to p. 17 for our four changes to the manuscript that were made in order to more explicitly highlight the role of childhood in symptom change during the pandemic.

Attachment

Submitted filename: PLOS One Reviews.docx

Decision Letter 3

Sergio A Useche

19 Aug 2022

Mood Symptoms Predict COVID-19 Pandemic Distress but not Vice Versa: An 18-Month Longitudinal Study

PONE-D-21-18866R3

Dear Dr. Katz,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Sergio A. Useche, Ph.D.

Academic Editor

PLOS ONE

Acceptance letter

Sergio A Useche

25 Aug 2022

PONE-D-21-18866R3

Mood symptoms predict COVID-19 pandemic distress but not vice versa: An 18-month longitudinal study

Dear Dr. Katz:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Sergio A. Useche

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Additional demographic information of sample.

    (DOCX)

    S2 Table. Comparison of participants that did and did not return for later assessments.

    T1 returners are those who returned for T2. T1 dropouts are those who did not return for T2. T2 returners are those who completed T1, T2, and T3. T2 dropouts are those who completed T1 and T2 but not T3.

    (DOCX)

    S1 File. Main analyses including only participants from the United Kingdom and United States.

    (DOCX)

    Attachment

    Submitted filename: PLoS Response to Reviewers.docx

    Attachment

    Submitted filename: PLOS One Review Mood Symptoms Predict COVID (Benau, 1-16-22).docx

    Attachment

    Submitted filename: PLOS One Reviews.docx

    Attachment

    Submitted filename: PLOS One Review Mood Symptoms Predict COVID (KSB, 7-27-22).pdf

    Attachment

    Submitted filename: PLOS One Reviews.docx

    Data Availability Statement

    All data and analysis syntax are available from https://osf.io/sjp4a/.


    Articles from PLoS ONE are provided here courtesy of PLOS