PLOS One. 2020 Aug 13;15(8):e0237631. doi: 10.1371/journal.pone.0237631

Subjective ratings of emotive stimuli predict the impact of the COVID-19 quarantine on affective states

Héctor López-Carral, Klaudia Grechuta, Paul F M J Verschure
Editor: Stephan Doering
PMCID: PMC7425917  PMID: 32790759

Abstract

The COVID-19 crisis resulted in a large proportion of the world’s population having to employ social distancing measures and self-quarantine. Given that limiting social interaction impacts mental health, we assessed the effects of quarantine on emotive perception as a proxy of affective states. To this end, we conducted an online experiment whereby 112 participants provided affective ratings for a set of normative images and reported on their well-being during COVID-19 self-isolation. We found that current valence ratings were significantly lower than the original ones from 2015. This negative shift correlated with key aspects of the personal situation during the confinement, including working and living status, and subjective well-being. These findings indicate that quarantine impacts mood negatively, resulting in a negatively biased perception of emotive stimuli. Moreover, our online assessment method shows its validity for large-scale population studies on the impact of COVID-19 related mitigation methods and well-being.

Introduction

In December 2019, Chinese health authorities reported a cluster of pneumonia cases in the city of Wuhan, in the Hubei province, caused by the novel coronavirus SARS-CoV-2, the agent of coronavirus disease 2019 (COVID-19) [1]. By mid-March 2020, a total of 200,000 confirmed cases had been reported worldwide [2], showing an exponential increase, with the current number of identified cases exceeding 14 million; Spain, Italy, and the United Kingdom are among the most-affected European nations.

To prevent the spread of COVID-19, public health authorities have employed mitigation strategies, in particular quarantine [3] and isolation, which are currently practiced across the globe. Mandatory mass quarantine restrictions, which include social distancing, stay-at-home rules, and limits on work-related travel outside the home [4], might impact both the physical and mental health of the affected individuals [5]. Indeed, during previous periods of quarantine, prolonged widespread lock-downs and limits on social contact resulted in post-traumatic stress disorder, depression, anxiety, mood dysregulations, and anxiety-induced insomnia [6–8]. These, in turn, led to cognitive distortions and maladaptive behaviors, including suicide [9, 10]. A growing body of evidence from COVID-19 demonstrates that the current mass quarantine has been producing similar adverse psychological effects, which might have long-lasting consequences for both individual subjects and society [5, 11–13]. Moreover, it is unclear for how long and how frequently confinement measures will be put in place in the medium and long term. Hence, understanding the specific impact of COVID-19 on mental health and developing monitoring and diagnostic tools to identify individuals at risk are of critical importance.

Disturbances in mental health, including disorders of mood, are commonly assessed using explicit questionnaires and interview measures [14]. Both clinician-rated and self-reported instruments have been used for decades [15]. Some studies, however, have outlined noteworthy limitations of standard assessments of depression, such as conceptual and psychometric flaws [16–20]. For instance, the Hamilton Depression Rating Scale (HDRS, [21]), which has been considered a gold standard in both clinical practice and clinical trials, has been widely criticized for its subjectivity, for its multidimensional structure, which varies across studies and thus prevents replication across samples, and for its poor factorial and content validity [16–20, 22]. Moreover, it is well established that self-reports in psychological research can suffer from response biases such as socially desirable responding or a tendency to provide positive self-descriptions [23–25]. To counteract possible response bias and suggestion effects, in the current study we employed affective ratings of calibrated emotional stimuli as an implicit measure of mental state, building on earlier validation studies of online emotional rating methods [26].

Mood-state-dependent changes in emotional reactivity are reflected in evaluations of emotional experience [27]. Indeed, there is converging evidence that ratings of affective stimuli might serve as a robust, indirect measure of mood. For example, empirical studies show reduced subjective and expressive emotional responses to neutral and positive stimuli in depression, including in major depressive disorder (MDD) [28–31]. Specifically, the results show significant negative shifts in emotional ratings of valence compared to healthy controls, such that patients judge the stimuli as substantially less pleasant. Conversely, patients with Borderline Personality Disorder (BPD) show hypersensitivity to emotional stimuli compared to healthy controls [32]. These findings support the notion that the response to emotive stimuli is altered in disorders of mood.

Given the mental health risks of medium- to long-term isolation [7, 8, 33], it is relevant to develop methods that can effectively and unobtrusively assess and monitor the impact of movement restrictions and social distancing on well-being and mental health. Hence, the goal of this study is to evaluate quarantine-induced changes in mood, as measured implicitly through subjective ratings of emotional stimuli. We predicted that individuals in quarantine due to COVID-19 might present changes in their affective ratings that reflect their subjective experience of isolation. To test this hypothesis, we conducted an online experiment in which volunteers were asked to rate the affective content of a subset of standardized visual stimuli and to report their current personal situation and experience related to the pandemic. We compared the affective ratings of valence (i.e., indicative of disturbances in mood) between groups of subjects in the pre-quarantine “normal” condition and under quarantine.

Materials and methods

Participants

After providing their consent, one hundred twelve subjects participated in the study (64.29% female) with a mean age of 32.38 (SD = 9.04). The sample size of N = 110 was determined a priori using G*Power software version 3.1 (Kiel, Germany) based on α = 0.05, a power of 80%, and a medium effect size (0.5). Volunteers accessed the online experiment using a URL (uniform resource locator) that was shared through social media and instant messaging platforms by the experimenters. 51.79% of the subjects held postgraduate degrees or higher. Subjects originated from 19 different countries (30.36% Spanish and 21.43% Italian) and lived in 17 countries (53.57% in Spain and 16.07% in Italy). This sampling approach was chosen to cover a range of countries that were similarly impacted by self-isolation measures. In particular, for the analyses, we included only those participants who were actively undergoing quarantine. Thus, all participants were comparable in their cultural traits [34] and quarantine measures, including social isolation and distancing, the banning of social events and gatherings, the closure of schools, offices, and other facilities, and travel restrictions [35, 36].
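The paper does not state which statistical test the G*Power calculation assumed. As a hedged illustration, the standard normal-approximation formula for an a priori sample size of a two-sided, two-sample t-test can be sketched as follows (the function name `n_per_group` is ours; G*Power's exact t-distribution computation typically adds about one subject per group on top of this approximation):

```python
from math import ceil

from scipy.stats import norm


def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-sided two-sample t-test
    (normal approximation): n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = norm.ppf(power)           # quantile matching the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)


print(n_per_group(0.5))  # 63 per group under the normal approximation
```

For d = 0.5, α = .05, and 80% power this yields 63 subjects per group (126 in total), so the paper's N = 110 presumably reflects different test assumptions in G*Power.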

The reported data were collected between the 9th and the 20th of April 2020. The personal data of the subjects were anonymized and kept confidential. All participants were blind to the purpose of the study: until the end of the session, subjects did not know the study’s objective, as this knowledge could have biased their responses. They were informed of the objective at the end of the session.

Materials

Affective Slider

We employed the Affective Slider tool [26] for digital assessments of the arousal and pleasure dimensions of the emotive stimuli. Its design principles follow the circumplex model of emotion proposed by James Russell [37, 38]. In this bipolar model, arousal corresponds to the intensity of an affective response (i.e., the evoked level of excitement), while valence represents the positivity or negativity of the response (i.e., happiness). Consequently, the Affective Slider consists of a pair of slider controls, flanked by emoticons, that correspond to the ratings of arousal and valence, respectively. Both sliders are oriented horizontally and placed one above the other (Fig 1). In this study, the Affective Slider allowed for continuous subjective assessment of the presented images, thus counteracting methodological limitations of classical scales such as the Self-Assessment Manikin (SAM) [39], especially when applied in online assessments [26]. During the experiment, the vertical order of the two sliders on the screen (i.e., arousal above valence or vice versa) changed randomly on every trial to prevent order effects and automaticity in the responses.

Fig 1. Example of digital assessments of the arousal and pleasure using the Affective Slider [26].

Fig 1

On the left, there is an example image from the OASIS data set [40]. On the right are the two rating sliders. The top slider corresponds to arousal and the bottom one to valence. This visual order was randomized over trials.

Experimental stimuli: Open Affective Standardized Image Set (OASIS)

OASIS is a validated open-access data set consisting of nine hundred images acquired online [40]. Each stimulus includes normative ratings of both arousal and valence, reported on a scale from 1 to 7 by 822 participants. The stimuli depict a variety of themes within four categories: people, animals, scenes, and objects. In contrast with the well-known International Affective Picture System (IAPS) [41], OASIS allows for online use of the data set and provides more recent ratings. For the purpose of this study, we chose a subset of 30 images from the categories people and scenes, which together account for 61.78% of the entire set. This choice was determined by the content of the stimuli, which is related to social and outdoor activities. The subset was selected randomly from within these categories to achieve a representative sample (see Fig 2). The same set of 30 images was presented to all participants in a randomized order (S1 File Image Selection).

Fig 2. Distribution of the valence and arousal ratings for the 30 images selected for this study (solid circles) and the OASIS data set of 900 images (semitransparent circles).

Fig 2

COVID-19 questionnaire

To evaluate the current personal and social situation of each participant and their subjective experience during the COVID-19 global health crisis, we created a custom questionnaire. The scale was composed of 14 items, including an optional field for personal comments related to the quarantine period (see S2 File COVID-19 Questionnaire). Answers to the remaining questions were provided using either a multiple-choice scale or standard sliders derived from the Affective Slider. In the latter case, subjects rated their level of agreement on a scale ranging from “not at all” to “very much”. The questionnaire was administered at the end of the experiment. For the analysis, we included only the data of those subjects who completed the questionnaire.

Procedure

The online experiment consisted of four main sections: (a) instructions, the consent form, disclaimer, as well as the collection of demographic data (gender, age, education level, country of origin, and country of residence), (b) experimental task, (c) COVID-19 questionnaire, and (d) explanation of the rationale of the study.

During the experimental task, each participant was presented with a sequence of thirty affective stimuli from the OASIS image set [40]. Participants provided their ratings using the Affective Slider located on the right side of the image (Fig 1). Each stimulus remained visible until both ratings had been submitted; there was no time limit, as in the original experimental tasks of both the tool [26] and the data set [40]. Only once both ratings had been provided could subjects advance to the next image by clicking a separate button. The next stimulus was then immediately displayed together with the corresponding Affective Slider.

Once participants completed the experimental task, they were required to complete the COVID-19 questionnaire. Finally, after having submitted the questionnaire, participants were presented with a final page that included the experimental rationale and the researchers’ contact information.

Data analysis

Tests of normality were performed on the data, and subsequently, t-tests were used to identify differences between the affective ratings. All comparative analyses used two-tailed tests and a standard level of significance (p <.05). For each comparison, effect sizes were computed using Cohen’s d [42]. A Pearson product-moment correlation coefficient was computed for the subsequent linear correlation analyses. Fourteen participants who reported not being in quarantine were excluded from the analysis.
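These comparisons could be reproduced along the following lines with SciPy. The data below are synthetic stand-ins for the per-image mean valence ratings, and `cohens_d` is a helper we define here using the pooled standard deviation (the paper cites [42] but does not spell out the formula):

```python
import numpy as np
from scipy import stats


def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)


rng = np.random.default_rng(0)
# Hypothetical per-image mean valence ratings: current study vs. OASIS norms
current = rng.normal(3.8, 0.8, size=15)
original = current + rng.normal(0.3, 0.3, size=15)  # simulate a positive offset

# Paired, two-tailed t-test across the 15 neutral images (alpha = .05)
t, p = stats.ttest_rel(current, original)

# Pearson product-moment correlation between the two sets of means
r, p_corr = stats.pearsonr(current, original)

print(t, p, cohens_d(current, original), r)
```

A paired test is assumed here because the same images are rated in both studies; the paper does not state explicitly whether paired or independent t-tests were used.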

Finally, we applied machine learning techniques to evaluate the feasibility of predicting participants’ personal situation and reported subjective state during the quarantine based on the valence ratings provided during the experiment. To this end, we trained a C-Support Vector Classification (SVC) model. Parameter tuning was performed using a grid search algorithm. The model was cross-validated, and its performance was evaluated using the F-score. The classification was performed using the Scikit-learn machine learning library [43].
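A minimal sketch of such an SVC pipeline with Scikit-learn might look as follows. The feature matrix, labels, and grid values below are assumptions for illustration only, since the paper does not specify them; here the features are hypothetical per-participant valence shifts for the 30 images:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Synthetic stand-in: 98 participants x 30 per-image valence shifts
X = rng.normal(0, 1, size=(98, 30))
# Synthetic binary label (e.g., lives alone vs. with others)
y = (X.mean(axis=1) + rng.normal(0, 0.5, size=98) > 0).astype(int)

# Grid search over SVC hyperparameters, scored with the F-score
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]},
    scoring="f1",
    cv=5,
)
grid.fit(X, y)

# Cross-validate the tuned model to estimate its performance
scores = cross_val_score(grid.best_estimator_, X, y, cv=5, scoring="f1")
print(scores.mean(), scores.std())
```

Standardizing the features before the SVC is a common default for kernel methods; whether the original analysis did so is not reported.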

Results

First, we assessed the linear relationship between the affective ratings of arousal and valence collected in the present experiment and those acquired in the original study [40]. To this end, we computed the mean rating across all subjects for each experimental stimulus and extracted the corresponding mean values from the OASIS data set. The analysis yielded high and significant positive correlations between the mean scores for both arousal (r(30) = .77, p <.001, see Fig 3A) and valence (r(30) = .88, p <.001, see Fig 3B).

Fig 3. Linear correlations between the ratings obtained in our study and those from OASIS.

Fig 3

A: Linear correlation between arousal ratings from OASIS (y-axis) and those acquired in the present study (x-axis). B: Linear correlation between valence ratings from OASIS (y-axis) and those acquired in the present study (x-axis). In both graphs, dashed lines represent the identity lines; ***p <.001.

Second, to test our hypothesis, we evaluated possible shifts in the affective ratings between the present study and OASIS for the subsets of neutral and positive stimuli. The neutral subset included all images whose mean valence ratings ranged between 3 and 5 (N = 15), while the positive subset included those whose mean valence ratings ranged between 5 and 7 (N = 11). For these analyses, we computed the mean ratings of both arousal and valence across all subjects for each subset. For the neutral stimuli, statistical analyses revealed that, while the mean arousal ratings for the chosen images did not differ (t(15) = .61, p = .546), there was a statistically significant negative shift in the ratings of valence (t(15) = −2.28, p = .030, d = .859, see Fig 4).

Fig 4. Shift in the affective ratings for neutral and positive images.

Fig 4

The graphs present the comparison between the ratings of arousal (left) and valence for neutral images (middle) and valence for positive ones (right) obtained in our study with those from the OASIS. In all graphs, the blue lines correspond to the mean, while the individual lines show differences for individual images (N = 15); *p <.05.

Similarly, for the positive stimuli, we found no differences in the mean ratings of arousal (t(11) = 1.313, p = .203). In line with the literature, however, we found a statistically significant negative shift in the ratings of valence (t(11) = −2.148, p = .044, d = .974).

Third, we conducted post hoc analyses to assess relationships between the affective ratings of valence and participants’ situation during the quarantine period, as evaluated through the COVID-19 questionnaire. Specifically, we investigated whether the mean ratings of valence were related to whether the subjects (a) enjoyed working from home, (b) missed the “normal” pre-quarantine life, and (c) lived alone. For these analyses, we computed, for each participant, the difference between the mean ratings from the present study and those from the OASIS data set. The first correlation analysis yielded a significant positive linear relationship between the strength of the enjoyment of working from home and the mean difference in valence ratings (r(98) = .24, p = .043). In particular, we found that participants who reported enjoying working from home rated the images more positively than those who did not (Fig 5A). Second, our results revealed a significant negative correlation between the degree of missing the “normal” pre-quarantine life and the differences in valence ratings (r(98) = −.22, p = .032). Hence, participants who missed their normal life more strongly rated the images more negatively than those who missed it less (Fig 5B).

Fig 5. Correlations between the differences in valence ratings per participant and self-reported situation during the quarantine period.

Fig 5

A: Linear regression between differences in valence ratings and the degree of enjoyment to work from home. B: Linear regression between differences in valence ratings and the degree of missing the “normal” pre-quarantine life. In both graphs, blue lines represent a linear regression fit; *p <.05.

We also report a difference in the ratings of valence between those subjects who lived alone and those who lived with their families, partners, or friends (t(98) = −2.42, p = .017, d = .611). Specifically, we found that participants living alone rated the images significantly more negatively (Fig 6).

Fig 6. Differences in valence ratings relative to those of OASIS between participants who, during the quarantine, lived alone and those who did not.

Fig 6

*p <.05.

Fourth, we analyzed the time that participants took to rate each image. To do this, we computed the median rating time for each participant. The D’Agostino-Pearson normality test revealed that the rating times were not normally distributed (p < 0.001). Hence, similar to other studies [44], we applied nonparametric statistics for the subsequent analyses of rating times. We found a significant positive correlation between rating times and both arousal (r(98) = .32, p = .001) and valence (r(98) = 0.25, p = .012).
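The normality check and the subsequent nonparametric correlation could be run with SciPy as sketched below. The data are synthetic (rating times are simulated as right-skewed, which is typical of response-time data), and we assume Spearman's rank correlation as the nonparametric measure, since the paper does not name the test explicitly:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical per-participant median rating times (seconds), right-skewed
rating_times = rng.lognormal(mean=1.5, sigma=0.6, size=98)
# Hypothetical per-participant mean arousal ratings
mean_arousal = rng.normal(3.5, 0.7, size=98)

# D'Agostino-Pearson omnibus test of normality (combines skew and kurtosis)
k2, p_norm = stats.normaltest(rating_times)

# If normality is rejected, fall back to a rank-based (Spearman) correlation
rho, p_corr = stats.spearmanr(rating_times, mean_arousal)

print(p_norm, rho)
```
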

Finally, we applied machine learning techniques to showcase the potential of automatically detecting users who might be at risk of developing mood disorders based on their ratings. To achieve this, we trained an SVC classifier with the valence rating information and the questionnaire’s key answers. The proposed method was able to classify between those participants who lived alone and those who lived with other people with a mean accuracy of 84% (SD = 4). Additionally, another SVC classifier could determine whether participants missed the pre-quarantine life with an accuracy of 65% (SD = 4.5).

Discussion

In this study, we aimed to assess the effects of the COVID-19 quarantine on the emotional state of the affected individuals. We predicted that the quarantine restrictions and, in particular, the lock-down might negatively impact mental health. Since mood deviations have been shown to be reflected in the perception of affective stimuli, we devised an online study whereby volunteers evaluated the arousal and valence of a set of standardized stimuli, and we compared the acquired scores with those from the original data set. We predicted that the current ratings of valence might be lower than those of OASIS, possibly due to the recruited participants’ personal and social situation during the confinement.

Our results revealed that individuals who were undergoing the COVID-19 quarantine during the experiment rated neutral stimuli as significantly less pleasant than the subjects who had evaluated the same images during a non-quarantine period. We propose that the reported shifts in the valence ratings might be indicative of a more general negative affective state caused by the quarantine. Indeed, this is consistent with evidence of negative changes in perception, as measured through self-reported valence ratings of visual stimuli, in people with depression compared to healthy controls [30].

Based on the acquired data, we further observed a significant effect of some critical aspects of our sample’s personal and working situation during the self-isolation period on the reported ratings. Our results revealed a positive relationship between how much the subjects enjoyed working from home during confinement and the affective ratings. On the one hand, this finding is consistent with the literature, which demonstrates that unemployed people tend to report higher levels of episodic sadness than employed people [45]. On the other hand, this result might indirectly reflect the effect of the decreased in-person social interaction that many jobs entail, given that social interaction positively impacts psychological well-being [46].

The experience of missing regular life before the quarantine also had a significant effect on the negativity of the emotive ratings. We found that those participants who missed it more also showed more substantial negative shifts in their affective assessments of the stimuli than those who missed it less. In line with previous findings [6–8], we speculate that this relationship might be directly indicative of lowered mood stemming from a negative perception of the current situation and the desire for the social distancing measures and self-quarantine to end. This, in turn, may be related to an increased need for both social interaction and freedom.

Furthermore, our results revealed that the ratings of valence differed depending on the participants’ social living situation. Specifically, those individuals who lived alone provided more negative ratings than those living with other people. This might suggest that the increased social isolation and reduced social interaction experienced by individuals who undergo the quarantine while living alone more negatively impact their perception and, possibly, mood. Indeed, ample scientific evidence demonstrates that social isolation can result in lowered mood and depression and induce many other adverse effects on health [47]. These effects range from mental disorders such as depression or anxiety [48–50] to cardiovascular diseases [51, 52]. Moreover, loneliness can have detrimental effects on health through several mechanisms, including health behaviors, cardiovascular activation, cortisol levels, and sleep [53]. Although social isolation and loneliness are prevalent in a large proportion of the general population, affecting both younger [54] and older [55, 56] adults, these conditions can be exacerbated under exceptional circumstances that force a decrease in social contact. In the case of the COVID-19 pandemic, several studies also point to a significant psychological impact, including symptoms that correspond to those found in social isolation [57–59].

The above-discussed findings converge to suggest that the mitigation strategies employed to prevent the spread of the COVID-19 pandemic are negatively impacting the emotional state of the affected individuals, which is reflected in negative shifts in the ratings of affective stimuli. Furthermore, this pernicious effect is exacerbated by personal circumstances related to working conditions and social isolation, which, in the long term, might result in an increased prevalence of mental health conditions such as depression or post-traumatic stress disorder [60]. Importantly, in the present paradigm, we focused primarily on the evaluation of neutral and positive stimuli. According to the literature [30], however, one could expect that quarantine-induced disorders of mood might also result in shifts in the ratings of negative stimuli, a hypothesis we are currently addressing in a follow-up study.

It is worth noting that our data presented variability in the relationships between the mean difference in valence ratings and both the enjoyment of working from home and the feeling of missing life from before the quarantine. This may be explained by the interaction of additional factors that were not captured by the present experiment but might have impacted the participants’ emotional state. For example, personality traits might play an essential role in how individual participants are affected by social isolation and how they cope with it [61–63]. Furthermore, the intensity of the enforced quarantine measures was not the same for all participants, resulting in variation in the degree of self-isolation. Future studies should address these limitations by controlling for additional, possibly confounding factors. Moreover, the participant sample used in this study comes from a variety of European countries. This sampling approach was intentionally chosen to cover a set of regions with comparable cultures as well as comparable quarantine and self-isolation measures. It is possible, however, that the underlying diversity of the sample introduced heterogeneity in the data, which could impact the generalizability of our findings. This limitation shall be addressed in future studies by focusing data collection on a smaller subset of countries to further ensure the commonality of demographic aspects and thus better represent the mental health of the sampled population.

On the one hand, the outcome of this study highlights the impact of the COVID-19-induced quarantine on affective states, thus emphasizing the need for continuous monitoring of the psychological health and well-being of the general population. Since the psychological effects of isolation might have long-term consequences, identifying individuals at risk and carrying out interventions to mitigate the reported negative impact might be necessary not only during but also after the quarantine. On the other hand, the method proposed here for diagnosing affective changes through subjective ratings of emotive stimuli may already be of use to the healthcare system. Specifically, the current findings, as well as the reported machine learning techniques, could be translated into clinical practice through in-person visits and digital technology in the form of smartphone apps. The former could provide a unique opportunity to combine multidimensional scales including, for instance, brain scanning (e.g., functional Magnetic Resonance Imaging), genomic measurements, observer-rated neurocognitive evaluations (e.g., HDRS), patient self-reports (e.g., BDI), medical record reviews, as well as implicit measures such as the affective evaluations used in our study. From the academic and medical perspectives, such a compound diagnosis could contribute to fundamental advances in understanding neuropsychological conditions. However, there is a need for easy-to-apply and low-cost solutions for diagnostics, monitoring, and treatment. Hence, the implicit assessment validated in our study allows for continuous monitoring of affective ratings as a proxy of affective states, enabling prediction of the personal situation from the obtained ratings. Such software could promote at-home remote diagnostics and continuous monitoring of at-risk patients, at a low cost, and with the further benefit of preventing possible response biases [23–25].
We have successfully deployed such an approach in the domain of stroke rehabilitation [64, 65]. To this end, in future studies, we shall more systematically investigate the specific factors that may influence the participants’ affective ratings, including personality type, as well as other symptoms that might indicate abnormal psychological states, such as insomnia. Moreover, we will further validate the statistical relationship between the proposed implicit measure of affective states and standard tools used to evaluate mood, such as the BDI [66] or the PHQ-9 [67].

The efficient diagnosis, monitoring, and treatment of neuropsychiatric illnesses are becoming increasingly important because their burden exceeds that of cardiovascular disease and cancer [68], and it is estimated that about 25% of individuals will suffer from neurological or mental disorders at some point in their lives. However, due to several factors, including the lack of trained healthcare professionals, pervasive underdiagnosis, and stigma, only 0.2% will be able to receive the necessary treatment [69]. Hence, key current challenges include improving the efficacy of the diagnosis of psychological disturbances and overcoming the known limitations of current clinical scales [16–20, 22], together with accurately capturing symptoms and patient-specific concerns [70]. To this end, we propose that an optimal evaluation strategy may comprise explicit, observer-rated and self-reported evaluation tools combined with implicit physiological and behavioral monitoring using biometric sensing, such as the proposed affective rating methods and associated tools [71].

Importantly, at the current stage, the proposed classification algorithms serve as a proof of concept of the potential to automatically classify well-being [72]. Future work will address this limitation by further improving the model, which will entail additional training of the classifier and the inclusion of supplementary variables that might affect participants’ mental state, such as personality traits and biometrics.

Additionally, the present findings support the notion that the results of online studies carried out during the quarantine period that rely on the assessment of affective ratings, or similar measures, might be significantly affected. Hence, this impact should be considered in the analysis and interpretation of the acquired results.

Taken together, this report presents a significant and timely finding that sheds light on the current quarantine’s impact beyond the immediate experience of the individuals who undergo it. In line with other studies [5, 11–13], our results confirm that individuals undergoing the current mass quarantine can experience adverse psychological effects and be at risk of anxiety, mood dysregulations, and depression, which, in the long term, may lead to post-traumatic stress disorder and affect overall wellbeing [6–8]. Indeed, according to previous studies, the measures that are commonly undertaken to mitigate pandemics, including stay-at-home rules and social distancing, may have drastic consequences. For instance, people can experience intense fear and anger leading to severe consequences at the cognitive and behavioral levels, culminating in civil conflict and legal procedures [6] as well as suicide [9, 10]. In addition, the long-term impact of this change in wellbeing is currently not understood and deserves further study. The results presented in this report highlight the need to explore the possible impacts of the COVID-19 pandemic on psychological wellbeing and mental health. To this aim, more studies need to be conducted to systematically investigate the interventions that may be deployed by both the healthcare system and individuals undergoing quarantine to mitigate the adverse psychological effects.

Supporting information

S1 File. Selection of images used in this study from the OASIS data set.

(PDF)

S2 File. The COVID-19 questionnaire used in the study.

(PDF)

Data Availability

Data are available on Kaggle under DOI 10.34740/kaggle/dsv/1396507 (https://doi.org/10.34740/kaggle/dsv/1396507). Alternative URL: https://www.kaggle.com/hectorlopezcarral/covid19-affective-ratings.

Funding Statement

This research has been supported by the European Commission under contract H2020-787061 (ANITA) and H2020-840052 (cRGS), and by EIT Health under grant ID 19277 (RGS@home) to PFMJV.

References

  • 1.World Health Organization. Novel Coronavirus—China; 2020.
  • 2.The Center for Systems Science and Engineering, Johns Hopkins. Coronavirus COVID-19 Global Cases; 2020.
  • 3.Centers for Disease Control and Prevention. Quarantine and isolation; 2017.
  • 4. Rothstein MA, Alcalde G, Elster NR, Majumder MA, Palmer LI, Stone HT, et al. Quarantine and isolation: Lessons learned from SARS. Diane Pub Co; 2003. [Google Scholar]
  • 5. Nobles J, Martin F, Dawson S, Moran P, Savovic J. The potential impact of COVID-19 on mental health outcomes and the implications for service solutions.; 2020. [Google Scholar]
  • 6. Miles SH. Kaci Hickox: public health and the politics of fear. The American Journal of Bioethics. 2015;15(4):17–19. 10.1080/15265161.2015.1010994 [DOI] [PubMed] [Google Scholar]
  • 7. Brooks SK, Webster RK, Smith LE, Woodland L, Wessely S, Greenberg N, et al. The psychological impact of quarantine and how to reduce it: rapid review of the evidence. The Lancet. 2020;395(10227):912–920. 10.1016/S0140-6736(20)30460-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Hossain MM, Sultana A, Purohit N. Mental Health Outcomes of Quarantine and Isolation for Infection Prevention: A Systematic Umbrella Review of the Global Evidence. SSRN Electronic Journal. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Rubin GJ, Wessely S. The psychological effects of quarantining a city. BMJ. 2020;368. [DOI] [PubMed] [Google Scholar]
  • 10. Barbisch D, Koenig KL, Shih FY. Is there a case for quarantine? Perspectives from SARS to Ebola. Disaster medicine and public health preparedness. 2015;9(5):547–553. 10.1017/dmp.2015.38 [DOI] [PubMed] [Google Scholar]
  • 11. Holmes EA, O’Connor RC, Perry VH, Tracey I, Wessely S, Arseneault L, et al. Multidisciplinary research priorities for the COVID-19 pandemic: a call for action for mental health science. The Lancet Psychiatry. 2020;7(6):547–560. 10.1016/S2215-0366(20)30168-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Rajkumar RP. COVID-19 and mental health: A review of the existing literature. Asian Journal of Psychiatry. 2020;52 10.1016/j.ajp.2020.102066 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Torales J, O’Higgins M, Castaldelli-Maia JM, Ventriglio A. The outbreak of COVID-19 coronavirus and its impact on global mental health. International Journal of Social Psychiatry. 2020;66(4):317–320. 10.1177/0020764020915212 [DOI] [PubMed] [Google Scholar]
  • 14. Clark LA, Watson D. Tripartite model of anxiety and depression: psychometric evidence and taxonomic implications. Journal of abnormal psychology. 1991;100(3):316 10.1037/0021-843X.100.3.316 [DOI] [PubMed] [Google Scholar]
  • 15. Smarr KL, Keefer AL. Measures of depression and depressive symptoms: beck depression inventory-II (BDI-II), Center for Epidemiologic Studies Depression Scale (CES-D), geriatric depression scale (GDS), hospital anxiety and depression scale (HADS), and patient health Questionnaire-9 (PHQ-9). Arthritis care & research. 2011;63(S11):S454–S466. [DOI] [PubMed] [Google Scholar]
  • 16. Bagby RM, Ryder AG, Schuller DR, Marshall MB. The Hamilton Depression Rating Scale: has the gold standard become a lead weight? American Journal of Psychiatry. 2004;161(12):2163–2177. 10.1176/appi.ajp.161.12.2163 [DOI] [PubMed] [Google Scholar]
  • 17. Zimmerman M, Posternak MA, Chelminski I. Is it time to replace the Hamilton Depression Rating Scale as the primary outcome measure in treatment studies of depression? Journal of Clinical Psychopharmacology. 2005;25(2):105–110. 10.1097/01.jcp.0000155824.59585.46 [DOI] [PubMed] [Google Scholar]
  • 18. Gibbons RD, Clark DC, Kupfer DJ. Exactly what does the Hamilton depression rating scale measure? Journal of psychiatric research. 1993;27(3):259–273. 10.1016/0022-3956(93)90037-3 [DOI] [PubMed] [Google Scholar]
  • 19. Bech P, Allerup P, Reisby N, Gram L. Assessment of symptom change from improvement curves on the Hamilton depression scale in trials with antidepressants. Psychopharmacology. 1984;84(2):276–281. 10.1007/BF00427459 [DOI] [PubMed] [Google Scholar]
  • 20. Maier W, Philipp M, Gerken A. Dimensions of the Hamilton depression scale. Factor analysis studies. European archives of psychiatry and neurological sciences. 1985;234(6):417 10.1007/BF00386061 [DOI] [PubMed] [Google Scholar]
  • 21. Hamilton M. A rating scale for depression. Journal of neurology, neurosurgery, and psychiatry. 1960;23(1):56 10.1136/jnnp.23.1.56 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Fried EI, Nesse RM. Depression sum-scores don’t add up: why analyzing specific depression symptoms is essential. BMC medicine. 2015;13(1):1–11. 10.1186/s12916-015-0325-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Paulhus DL. Socially desirable responding: The evolution of a construct. The role of constructs in psychological and educational measurement. 2002;49459. [Google Scholar]
  • 24. Braun HI, Jackson DN, Wiley DE. The role of constructs in psychological and educational measurement. Routledge; 2001. [Google Scholar]
  • 25. Paulhus DL. Socially desirable responding on self-reports. Encyclopedia of personality and individual differences. 2017; p. 1–5. 10.1007/978-3-319-28099-8_1349-1 [DOI] [Google Scholar]
  • 26. Betella A, Verschure PFMJ. The affective slider: A digital self-assessment scale for the measurement of human emotions. PLoS ONE. 2016;11(2):1–11. 10.1371/journal.pone.0148037 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Rottenberg J, Gross JJ, Gotlib IH. Emotion context insensitivity in major depressive disorder. Journal of abnormal psychology. 2005;114(4):627 10.1037/0021-843X.114.4.627 [DOI] [PubMed] [Google Scholar]
  • 28. Sloan DM, Strauss ME, Quirk SW, Sajatovic M. Subjective and expressive emotional responses in depression. Journal of affective disorders. 1997;46(2):135–141. 10.1016/S0165-0327(97)00097-9 [DOI] [PubMed] [Google Scholar]
  • 29. Sloan DM, Bradley MM, Dimoulas E, Lang PJ. Looking at facial expressions: Dysphoria and facial EMG. Biological psychology. 2002;60(2-3):79–90. 10.1016/S0301-0511(02)00044-3 [DOI] [PubMed] [Google Scholar]
  • 30. Dunn BD, Dalgleish T, Lawrence AD, Cusack R, Ogilvie AD. Categorical and dimensional reports of experienced affect to emotion-inducing pictures in depression. Journal of abnormal psychology. 2004;113(4):654 10.1037/0021-843X.113.4.654 [DOI] [PubMed] [Google Scholar]
  • 31. Berenbaum H, Oltmanns TF. Emotional experience and expression in schizophrenia and depression. Journal of abnormal psychology. 1992;101(1):37 10.1037/0021-843X.101.1.37 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32. Bortolla R, Cavicchioli M, Galli M, Verschure PFMJ, Maffei C. A comprehensive evaluation of emotional responsiveness in borderline personality disorder: a support for hypersensitivity hypothesis. Borderline Personality Disorder and Emotion Dysregulation. 2019;6(1). 10.1186/s40479-019-0105-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Haney C. Mental health issues in long-term solitary and “supermax” confinement. Crime & Delinquency. 2003;49(1):124–156. 10.1177/0011128702239239 [DOI] [Google Scholar]
  • 34. Gupta V, Hanges PJ, Dorfman P. Cultural clusters: Methodology and findings. Journal of World Business. 2002;37(1):11–15. 10.1016/S1090-9516(01)00070-0 [DOI] [Google Scholar]
  • 35. Conti AA. Historical and methodological highlights of quarantine measures: From ancient plague epidemics to current coronavirus disease (COVID-19) pandemic. Acta Biomedica. 2020;91(2):226–229. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Shah JN, Shah J, Shah JN. Quarantine, isolation and lockdown: in context of COVID-19. Journal of Patan Academy of Health Sciences. 2020;7(1):48–57. 10.3126/jpahs.v7i1.28863 [DOI] [Google Scholar]
  • 37. Russell JA. Affective space is bipolar. Journal of Personality and Social Psychology. 1979;37(3):345–356. 10.1037/0022-3514.37.3.345 [DOI] [Google Scholar]
  • 38. Russell JA. A circumplex model of affect. Journal of Personality and Social Psychology. 1980;39(6):1161–1178. 10.1037/h0077714 [DOI] [Google Scholar]
  • 39. Bradley MM, Lang PJ. Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry. 1994;25(1):49–59. 10.1016/0005-7916(94)90063-9 [DOI] [PubMed] [Google Scholar]
  • 40. Kurdi B, Lozano S, Banaji MR. Introducing the Open Affective Standardized Image Set (OASIS). Behavior Research Methods. 2017;49(2):457–470. 10.3758/s13428-016-0715-3 [DOI] [PubMed] [Google Scholar]
  • 41.Lang PJ, Bradley MM, Cuthbert BN. International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-8 University of Florida, Gainesville, FL. 2008;. [Google Scholar]
  • 42. Cohen J. Statistical power analysis for the behavioral sciences. Lawrence Erlbaum Associates, Hillsdale, NJ; 1988. [Google Scholar]
  • 43. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, et al. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research. 2011;12:2825–2830. [Google Scholar]
  • 44. Whelan R. Effective analysis of reaction time data. Psychological Record. 2008;58(3):475–482. 10.1007/BF03395630 [DOI] [Google Scholar]
  • 45. Krueger AB, Mueller AI. Time use, emotional well-being, and unemployment: Evidence from longitudinal data. American Economic Review. 2012;102(3):594–599. 10.1257/aer.102.3.594 [DOI] [Google Scholar]
  • 46. Umberson D, Karas Montez J. Social Relationships and Health: A Flashpoint for Health Policy. Journal of Health and Social Behavior. 2010;51(1_suppl):S54–S66. 10.1177/0022146510383501 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47. Hawkley LC, Capitanio JP. Perceived social isolation, evolutionary fitness and health outcomes: A lifespan approach. Philosophical Transactions of the Royal Society B: Biological Sciences. 2015;370 (1669). 10.1098/rstb.2014.0114 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48. Santini ZI, Jose PE, York Cornwell E, Koyanagi A, Nielsen L, Hinrichsen C, et al. Social disconnectedness, perceived isolation, and symptoms of depression and anxiety among older Americans (NSHAP): a longitudinal mediation analysis. The Lancet Public Health. 2020;5(1):e62–e70. 10.1016/S2468-2667(19)30230-0 [DOI] [PubMed] [Google Scholar]
  • 49. Cacioppo JT, Hawkley LC, Thisted RA. Perceived social isolation makes me sad: 5-year cross-lagged analyses of loneliness and depressive symptomatology in the chicago health, aging, and social relations study. Psychology and Aging. 2010;25(2):453–463. 10.1037/a0017216 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50. Cacioppo JT, Hughes ME, Waite LJ, Hawkley LC, Thisted RA. Loneliness as a specific risk factor for depressive symptoms: Cross-sectional and longitudinal analyses. Psychology and Aging. 2006;21(1):140–151. 10.1037/0882-7974.21.1.140 [DOI] [PubMed] [Google Scholar]
  • 51. Caspi A, Harrington HL, Moffitt TE, Milne BJ, Poulton R. Socially isolated children 20 years later: Risk of cardiovascular disease. Archives of Pediatrics and Adolescent Medicine. 2006;160(8):805–811. 10.1001/archpedi.160.8.805 [DOI] [PubMed] [Google Scholar]
  • 52. Valtorta NK, Kanaan M, Gilbody S, Ronzi S, Hanratty B. Loneliness and social isolation as risk factors for coronary heart disease and stroke: Systematic review and meta-analysis of longitudinal observational studies. Heart. 2016;102(13):1009–1016. 10.1136/heartjnl-2015-308790 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53. Cacioppo JT, Hawkley LC, Crawford LE, Ernst JM, Burleson MH, Kowalewski RB, et al. Loneliness and health: Potential mechanisms. Psychosomatic Medicine. 2002;64(3):407–417. 10.1097/00006842-200205000-00005 [DOI] [PubMed] [Google Scholar]
  • 54. Matthews T, Danese A, Wertz J, Odgers CL, Ambler A, Moffitt TE, et al. Social isolation, loneliness and depression in young adulthood: a behavioural genetic analysis. Social Psychiatry and Psychiatric Epidemiology. 2016;51(3):339–348. 10.1007/s00127-016-1178-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55. Cornwell EY, Waite LJ. Social disconnectedness, perceived isolation, and health among older adults. Journal of Health and Social Behavior. 2009;50(1):31–48. 10.1177/002214650905000103 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56. Shankar A, McMunn A, Banks J, Steptoe A. Loneliness, Social Isolation, and Behavioral and Biological Health Indicators in Older Adults. Health Psychology. 2011;30(4):377–385. 10.1037/a0022826 [DOI] [PubMed] [Google Scholar]
  • 57. Wang Y, Di Y, Ye J, Wei W. Study on the public psychological states and its related factors during the outbreak of coronavirus disease 2019 (COVID-19) in some regions of China. Psychology, Health and Medicine. 2020. 10.1080/13548506.2020.1746817 [DOI] [PubMed] [Google Scholar]
  • 58. Wang C, Pan R, Wan X, Tan Y, Xu L, Ho CS, et al. Immediate psychological responses and associated factors during the initial stage of the 2019 coronavirus disease (COVID-19) epidemic among the general population in China. International Journal of Environmental Research and Public Health. 2020;17(5). 10.3390/ijerph17051729 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59. Liu D, Ren Y, Yan F, Li Y, Xu X, Yu X, et al. Psychological Impact and Predisposing Factors of the Coronavirus Disease 2019 (COVID-19) Pandemic on General Public in China. SSRN Electronic Journal. 2020. [Google Scholar]
  • 60. Holt-Lunstad J. The Potential Public Health Relevance of Social Isolation and Loneliness: Prevalence, Epidemiology, and Risk Factors. Public Policy & Aging Report. 2017;27(4):127–130. 10.1093/ppar/prx030 [DOI] [Google Scholar]
  • 61. Taylor DA, Altman I, Wheeler L, Kushner EN. Personality factors related to response to social isolation and confinement. Journal of Consulting and Clinical Psychology. 1969;33(4):411–419. 10.1037/h0027805 [DOI] [PubMed] [Google Scholar]
  • 62. Kong X, Wei D, Li W, Cun L, Xue S, Zhang Q, et al. Neuroticism and extraversion mediate the association between loneliness and the dorsolateral prefrontal cortex. Experimental Brain Research. 2014;233(1):157–164. 10.1007/s00221-014-4097-4 [DOI] [PubMed] [Google Scholar]
  • 63. Zelenski JM, Sobocko K, Whelan DC. Introversion, Solitude, and Subjective Well-Being In: Coplan RJ, Bowker JC, editors. The Handbook of Solitude. John Wiley & Sons, Ltd; 2013. p. 184–201. [Google Scholar]
  • 64. Ballester BR, Lathe A, Duarte E, Duff A, Verschure PF. A Wearable Bracelet Device for Promoting Arm Use in Stroke Patients. In: NEUROTECHNIX; 2015. p. 24–31. [Google Scholar]
  • 65. Grechuta K, Ballester BR, Munné RE, Bernal TU, Hervás BM, Mohr B, et al. Multisensory cueing facilitates naming in aphasia; 2020. Preprint under consideration at Journal of NeuroEngineering and Rehabilitation. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66. Beck AT, Steer RA, Brown GK. Manual for the Beck depression inventory-II. San Antonio, TX: Psychological Corporation; 1996. [Google Scholar]
  • 67. Kroenke K, Spitzer RL, Williams JBW. The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine. 2001;16(9):606–613. 10.1046/j.1525-1497.2001.016009606.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68. Vigo D, Thornicroft G, Atun R. Estimating the true global burden of mental illness. The Lancet Psychiatry. 2016;3(2):171–178. 10.1016/S2215-0366(15)00505-2 [DOI] [PubMed] [Google Scholar]
  • 69. Sayers J. The world health report 2001-Mental health: new understanding, new hope. Bulletin of the World Health Organization. 2001;79:1085–1085. [Google Scholar]
  • 70. Demyttenaere K, Kiekens G, Bruffaerts R, Mortier P, Gorwood P, Martin L, et al. Outcome in depression (II): beyond the Hamilton Depression Rating Scale. CNS spectrums. 2020; p. 1–22. 10.1017/S1092852920001418 [DOI] [PubMed] [Google Scholar]
  • 71. Reinertsen E, Clifford GD. A review of physiological and behavioral monitoring with digital sensors for neuropsychiatric illnesses. Physiological measurement. 2018;39(5):05TR01. 10.1088/1361-6579/aabf64 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72. Lipton ZC, Elkan C, Naryanaswamy B. Optimal thresholding of classifiers to maximize F1 measure In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). vol. 8725 LNAI. Springer Verlag; 2014. p. 225–239. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Stephan Doering

16 Jul 2020

PONE-D-20-15454

Subjective ratings of emotive stimuli predict the impact of the COVID-19 quarantine on affective states

PLOS ONE

Dear Dr. Carral,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by August 15, 2020. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Stephan Doering, M.D.

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for including your ethics statement:

"The study was approved by the ethics committee of the researchers' institution. Participants consented to participate and were informed of their rights before starting the experiment. No personal data of the participants were recorded."

Please amend your current ethics statement to include the full name of the ethics committee/institutional review board(s) that approved your specific study.

Once you have amended this statement in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

4. Thank you for stating the following in the Competing Interests section:

[I have read the journal's policy and the authors of this manuscript have the following competing interests: PFMJV is the founder and interim CEO of Eodyne S L, which aims at bringing scientifically validated neurorehabilitation technology to society. The rest of the authors have nothing to disclose.].

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

5. We note that Supporting Figure 1 and Figure 1 in your submission contain copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (a) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (b) remove the figures from your submission:

a)        You may seek permission from the original copyright holder of Figure(s) [#] to publish the content specifically under the CC BY 4.0 license.

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. 

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

b)    If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: PONE-D-20-15454

Effects of COVID-19 quarantine on affective states

Thank you for the opportunity to review this novel approach to assessing affective states in people impacted by quarantine. There are two principal concerns with this manuscript as currently presented.

1. There are some grammatical suggestions to improve the readability of the manuscript

2. The r values have not been interpreted at all, and the strength of the relationships is not discussed

Specific recommendations are outlined below:

L4-7. I recommend updating data to the most recent figures at the time of publication

L8. Suggest; …public health authorities have employed…

L13. Previous outbreaks of what? Are you referring to SARS/MERS?

L16. The statement about a growing body of evidence is not substantiated with the provided reference. Please ensure relevant references are provided to support this claim.

L26. This statement is supported by 2 references, both pertaining to a single depression rating scale, one of which is far from current, published in 1993. If this statement were true why is the HDRS still widely used?

L61. Was the required sample size reached?

L69. Why was it relevant to blind participants to the study purpose and how was this done? This suggests some degree of deception, or was this addressed in part D of the online experiment?

L183. Why was median and not mean time to rate each image used?

L220. Statistically, these correlations are significant, but the scatterplots (Figure 5) show substantial variability; hence, the strength of the correlations is weak at best. Please comment on this in the discussion. This comment also applies to the strength of the correlations based on SVC, which are only weak to moderate (L184).

L265-269. Other than the response bias issues, what is the significance of reporting personal situations? I am not sure these are ‘robust’ with ratings at 65-84% accuracy. These are far lower than those reported, for example, for sentiment analysis of social media posts, but might reflect the novel application of ML to this topic. On the other hand, it might indicate the need for more training of the algorithm.

L269-275. This section focussed solely on the implementation of the technology, not the psychological health of participants. I feel more emphasis on the interpretation of the findings is needed, rather than discussing future application of the model.

Reviewer #2: This study provides valuable insights regarding emotive perception among individuals under quarantine in the COVID-19 pandemic. The findings of this study may inform future research and policymaking on mental health, especially for people who are more likely to have impaired affective states. However, this study may be subject to a methodological concern related to sampling and comparative analyses, which should be considered prior to communicating this research with a broader audience.

As a small sample was drawn from a diverse online population from 19 countries, it is likely that their mental health may not represent the populations they belong to. Furthermore, demographic and psychosocial factors in the study sample may be heterogeneous in nature, which may further affect the generalizability of the findings. Proper rationale for this sampling approach should be presented in the methods section and associated limitations should be discussed in the discussion section of the article.

Another perspective on the use of the evidence revealed in this study is how mental health practitioners and policymakers can translate the findings into clinical practice and mental health policymaking. The authors may wish to draw some inferences on how the altered emotive perceptions may result in short- and long-term mental health impacts, how some individuals are more vulnerable than others, and how potential strategies can be adopted to mitigate such mental health challenges during this and future pandemics.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2020 Aug 13;15(8):e0237631. doi: 10.1371/journal.pone.0237631.r002

Author response to Decision Letter 0


24 Jul 2020

Editors

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

R1. We thank the Editor for this advice. As suggested, we have carefully examined the journal’s style requirements, including those for file naming, and adjusted the files accordingly. We believe that the current version of the manuscript, as well as the additional materials, fully comply with all the requirements.

2. Please amend your current ethics statement to include the full name of the ethics committee/institutional review board(s) that approved your specific study.

R2. We have amended our ethics statement specifying the name of the ethics committee that approved the reported experiment. Thank you for raising this issue.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

R3. Thank you for raising this point. We would like to confirm our commitment to making all data available in a repository when the manuscript is accepted.

4. Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

R4. We confirm that the declared competing interests do not alter our adherence to PLOS ONE policies on sharing data and materials. As such, we have added a clarifying statement in our cover letter:

“The authors of this manuscript have the following competing interests: PFMJV is the founder and interim CEO of Eodyne S.L., which aims at bringing scientifically validated neurorehabilitation technology to society. This does not alter our adherence to PLOS ONE’s policies on sharing data and materials. The rest of the authors have nothing to disclose.”

5. We note that Figures supporting 1 and Figure 1 in your submission contain copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution.

R5. We thank the Editor for raising this point. We would like to clarify that all the images used both in Figure 1 and Figures supporting 1 come from the OASIS dataset (Kurdi et al., 2017), which we used as the benchmark for the emotive rating evaluation. The authors of this dataset explicitly indicate that all the images “can be downloaded, used, and modified free of charge for research purposes” (please see https://pixabay.com/service/license/). Similarly, in the case of the image used in Figure 1, the source states that its license allows “free for commercial and noncommercial use across print and digital” and that “attribution is not required” (please see https://pixabay.com/photos/man-human-person-alone-being-alone-396299/).

This confirms that the images in question are not subject to copyright restrictions; hence they can be published under the Creative Commons Attribution License (CC BY 4.0).

Reviewer #1

Thank you for the opportunity to review this novel approach to assessing affective states in people impacted by quarantine. There are 2 principal concerns with this manuscript as currently presented.

Dear Reviewer, we thank you for the encouraging feedback regarding our work. Your suggestions have contributed to the improvement of our manuscript. Below, we provide detailed responses to all your comments.

1. There are some grammatical suggestions to improve the readability of the manuscript

R1. We thank the Reviewer for this valuable comment. To improve readability, we have rephrased long, complex, or unclear sentences, corrected typographical errors, and run the manuscript through a professional grammar and spell-check platform for scientific writing. Subsequently, it was revised by a native speaker of English with a background in medicine and neuroscience. We believe that by addressing this point, we have significantly increased the legibility of the manuscript.

2. The r values have not been interpreted at all and the strength of the relationships is not discussed

R2. We thank the Reviewer for raising this point. We agree with the Reviewer that the manuscript was lacking an explicit interpretation of the r values. Since the concern is very much related to comment #11, we have addressed both in response R11. Please, see below.

Specific recommendations are outlined below:

3. L4-7. I recommend updating data to the most recent figures at the time of publication

R3. This is a fair point, thank you. We have updated all the reported data as follows (Lines: 4-7):

“By mid-March 2020, a total of 200,000 confirmed cases (Johns Hopkins, 2020) had been reported worldwide, showing an exponential increase with the current number of identified cases exceeding 14 million, whereby Spain, Italy, and the United Kingdom are the most-affected European nations.”

4. L8. Suggest; …public health authorities have employed…

R4. We have followed the suggestion of the Reviewer and changed the text accordingly.

5. L13. Previous outbreaks of what? Are you referring to SARS/MERS?

R5. Thank you for noticing this. What we actually mean are previous applications of quarantine, rather than the outbreaks. We have clarified the sentence as follows (Lines: 13-15):

“Indeed, prolonged widespread lock-down and limiting social contact has resulted in post-traumatic stress disorder, depression, anxiety, mood dysregulations, and anxiety-induced insomnia during previous periods of quarantine (Miles et al., 2015, Brooks et al., 2020, Hossain et al., 2020).”

6. L16. The statement about a growing body of evidence is not substantiated with the provided reference. Please ensure relevant references are provided to support this claim.

R6. Thank you for noticing this. To further support our statement, we have included the following relevant references:

Holmes, E. A., O’Connor, R. C., Perry, V. H., Tracey, I., Wessely, S., Arseneault, L., Bullmore, E. (2020). Multidisciplinary research priorities for the COVID-19 pandemic: a call for action for mental health science. The Lancet Psychiatry. Elsevier Ltd. https://doi.org/10.1016/S2215-0366(20)30168-1

Rajkumar, R. P. (2020). COVID-19 and mental health: A review of the existing literature. Asian Journal of Psychiatry, 52. https://doi.org/10.1016/j.ajp.2020.102066

Torales, J., O’Higgins, M., Castaldelli-Maia, J. M., & Ventriglio, A. (2020). The outbreak of COVID-19 coronavirus and its impact on global mental health. International Journal of Social Psychiatry, 66(4), 317–320. https://doi.org/10.1177/0020764020915212

7. L26. This statement is supported by 2 references, both pertaining to a single depression rating scale, one of which is far from current, published in 1993. If this statement were true why is the HDRS still widely used?

R7. The reason why we are focusing on the criticism of the HDRS (Hamilton 1960), specifically, is twofold. First, HDRS has constituted the most common observer-rated instrument to measure depression severity, its changes over time, and the efficacy of treatment for over 60 years. Second, it has been regarded as the gold standard in clinical trials (Williams 2001, Bech 2009, Bagby et al., 2004, Gibbons et al., 1993, Stefanis et al., 2002, Gullion & Rush 1998). As described in the manuscript, despite the extensive use of HDRS, the scale seems to present several limitations worth noting. Below, we provide the specific criticisms discussed in the two aforementioned references, among others, and address the Reviewer’s comment related to the extensive use of the scale independent of its reported limitations.

The first reference (Bagby et al., 2004), which we included to support our statement, systematically examined 70 articles that aimed to explicitly evaluate the psychometric properties of the HDRS, conceptual issues related to its development, continued use, and shortcomings. The studies included in that review were published between January 1980 and May 2003. The authors found that, although the internal reliability at the item level was mostly satisfactory, a significant number of scale items contributed poorly to the measurement of depression severity, and many items presented low inter-rater and test-retest reliability (Bagby et al., 2004). The authors argued that while the convergent validity and discriminant validity were adequate, content validity was quite unsatisfactory. Furthermore, the scale was designed as multidimensional, resulting in weak replication across samples. Finally, the analysis revealed that the response format was biased such that certain items contributed more to the total score than others. Indeed, according to the psychometric model (Shapiro 1951), each of the scale items should display identical clinical weight (Fava & Belaise, 2005). Similar criticisms to those of Bagby et al. were reported by Zimmerman et al. (2004). In particular, the authors emphasized that the differential item weight in HDRS, whereby some items contribute more to the total score than others, is the most critical limitation of the scale. To further support our statement, we have included this reference within the text (Line: 28). At the end of their reviews, both Bagby et al. (2004) and Zimmerman et al. (2004) concluded that, because of the lack of factorial and content validity, HDRS is a flawed measure and suggested replacing it with a novel, more sensitive gold standard for the assessment of depression.

The second reference (Gibbons et al., 1993) emphasizes the necessity to examine and reassess the robustness of clinical ratings in general, including the HDRS, to prevent an unreliable diagnosis of psychological phenomena and weakened validity of the studies that use such scales. The authors approach their analysis by investigating the flaws of HDRS in the context of two fundamental principles of psychometric assessment, including (1) defining a syndrome and scaling its severity, and (2) considering the issue of multidimensionality of the scale. As discussed above, HDRS presents flaws regarding both parameters.

In addition to the two citations included in the initial version of the manuscript, the HDRS scale was further criticized by others as a nonobjective measure of depression severity since it does not correlate with other clinical assessments and it does not permit a definition of a unidimensional depressive state (Bech & Allerup 1981, Maier, Philipp, & Gerken 1985, Fried & Nesse 2015, among others). We have included these references in the body of the text (Line: 28). Moreover, to explicitly address this point in more depth within the manuscript, we have added the following lines to the Introduction section (Lines: 28-32):

“ (...) For instance, the Hamilton Depression Rating Scale (HDRS, Gibbons et al., 1993), which has been considered a gold standard in clinical practice as well as clinical trials, was widely criticized for its subjectivity, for its multidimensional structure, which varies across studies and hence prevents replication across samples, and for its poor factorial and content validity (Bagby et al., 2004, Zimmerman et al., 2004, Bech & Allerup 1981, Maier, Philipp, & Gerken 1985)”.

To answer the final question, we propose that, despite its limitations, including subjectivity, HDRS is still used in clinical practice first because of its long tradition and second because of the lack of well-established and robust alternative measures. Indeed, other standardised, yet also criticized, self-reported depression scales such as the Beck Depression Inventory (BDI, Beck et al., 1988) or the Zung Depression Rating Scale are also widely used in the clinic and in academia. Most research related to depression is still grounded in just those few scales, including HDRS (and, specifically, its 20 variations) and BDI, none of which has been shown to be sufficiently robust (Santor et al., 2009). Since our understanding of depression depends mostly on the quality and accuracy of the diagnosis, monitoring, and treatment, we argue that there is a need for novel depression assessment tools which would allow replication across samples while accommodating the heterogeneity and diversity of depressive disorders, including major depressive disorder (MDD, Kessler et al., 2003). Such an approach will not only facilitate our understanding of the specific causes of depression but also better inform clinical decision-making. Possibly, a complete solution would require a combination of explicit and implicit assessment and monitoring tools delivered continuously to not only diagnose developing mental disorders but also prevent them (Sayers 2001). Such tools should include an assessment of symptomatology and the wellbeing of patients (Demyttenaere et al., 2020). This is even more relevant now that the burden of neuropsychiatric illness exceeds that of cardiovascular disease or cancer (Vigo et al., 2016). We have addressed this analysis and its generalization in the following way in the Discussion section (Lines: 318-331):

“The efficient diagnosis, monitoring, and treatment of neuropsychiatric illness are becoming increasingly important because its burden exceeds that of cardiovascular disease and cancer (Vigo et al., 2016) and it is estimated that about 25% of individuals will suffer neurological or mental disorders at some point in their lives. However, due to several factors, including the lack of trained healthcare professionals, pervasive underdiagnosis, and stigma, only 0.2% will be able to receive necessary treatment (Sayers 2001). Hence, key current challenges include the improvement of the efficacy of the diagnosis of psychological disturbances and overcoming known limitations of current clinical scales (Bagby et al., 2004, Zimmerman et al., 2004, Bech & Allerup 1981, Maier, Philipp, & Gerken 1985) together with accurately capturing symptoms and patient specific concerns (Demyttenaere et al., 2020). To this end, we propose that an optimal evaluation strategy may comprise explicit, observer-rated and self-reported evaluation tools combined with implicit physiological and behavioral monitoring using biometric sensing, such as the proposed affective rating methods and associated tools (Reinertsen & Clifford 2018).”

8. L61. Was the required sample size reached?

R8. We thank the reviewer for raising this point. As estimated by the G*Power software, the required sample size equaled 110 participants, while the total number of subjects who participated in our study was N= 112. We have specified that in the Participants subsection of the Methods section as follows (Lines: 65-66):

“The sample size of N= 110 was determined a priori using the G*Power software version 3.1 (Kiel, Germany) based on α= 0.05, power of 80% and medium effect size (0.5).”
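For readers who wish to verify a calculation of this kind outside of G*Power, the following sketch reproduces an a priori power analysis in Python with SciPy. Note that the exact test family configured in G*Power is not stated in the response, so a two-sided independent-samples t-test is assumed purely for illustration; the resulting number depends on that assumption and is not a reproduction of the reported N = 110.

```python
import numpy as np
from scipy import stats

def ttest_ind_power(n_per_group, d=0.5, alpha=0.05):
    """Power of a two-sided independent-samples t-test via the noncentral t."""
    df = 2 * (n_per_group - 1)
    nc = d * np.sqrt(n_per_group / 2)          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, nc)
            + stats.nct.cdf(-t_crit, df, nc))

# Smallest per-group n reaching 80% power for a medium effect (d = 0.5)
n = 2
while ttest_ind_power(n) < 0.80:
    n += 1
print(n)  # per-group sample size (about 64 under these assumptions)
```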

9. L69. Why was it relevant to blind participants to the study purpose and how was this done? This suggests some degree of deception, or was this address in part D of the online experiment?

R9. We designed the experiment such that the participants were blind to the purpose of the study. Specifically, they were not aware that the study's objective was to investigate possible adverse effects of the COVID-19 pandemic on the affective ratings until the end of the session. We wanted to prevent this information from biasing the ratings of the emotive stimuli. To this end, the experiment consisted of four main sections:

1. instructions, the consent form, disclaimer, and the collection of demographic data,

2. experimental task,

3. COVID-19 questionnaire, and

4. explanation of the rationale of the study.

With this design, the subjects provided the affective ratings without being aware that they may be informative of their mood or affective state. In section 4, however, after completing the whole experiment, we provided the study's full rationale and explained our hypothesis. To address this point, we have included the following description in the Participants subsection of the Methods section (Lines: 80-82):

“Specifically, until the end of the session, subjects did not know the study’s objective, which could bias their responses. However, they were informed about it at the end of the trial.”

10. L183. Why was median and not mean time to rate each image used?

R10. Thank you for raising this point. We used the median time instead of the mean time to accurately quantify the time that the users took to rate each image. The median is a common statistic used to analyse reaction times, favored over the mean, as long as the number of trials is the same in all cases (as it is in our experiment). This compensates for the skewed distribution due to intermittent long reaction times (Whelan, 2008). Indeed, our data reflects a similar trend. In particular, the D’Agostino-Pearson normality test revealed that the reported reaction times were not normally distributed (p < 0.001). To clarify this, we have included the following information in the Results section (Lines: 195-198):

“The D’Agostino-Pearson normality test revealed that the rating times were not normally distributed (p < 0.001). Hence, similar to other studies (Whelan, 2008), we applied nonparametric statistics for the subsequent analyses of rating times.”
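The preference for the median over the mean with skewed reaction-time data, and the normality check, can be illustrated with simulated (not experimental) data; the D’Agostino-Pearson omnibus test is available in SciPy as `scipy.stats.normaltest`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated reaction times: a lognormal distribution is a common model
# for RT data, producing the long right tail typical of rating times.
rt = rng.lognormal(mean=0.0, sigma=0.6, size=1000)

# Occasional very slow responses inflate the mean but barely move the
# median, which is why the median is preferred for skewed RT data.
assert np.median(rt) < np.mean(rt)

# D'Agostino-Pearson omnibus test: a small p-value indicates that the
# data depart from normality, motivating nonparametric statistics.
stat, p = stats.normaltest(rt)
print(p < 0.001)
```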

11. L220. Statistically these correlations are significant but the scatterplots (Figure 5) show substantial variability hence the strength of the correlations is weak at best. Please comment on this in the discussion. This comment also applies to the strength of the correlations based on SVC which are only weak to moderate (L184).

R11. This is a valuable comment, thank you. To address this point, we have included the following paragraph in the Discussion section (Lines: 271-280):

“It is worth noting that our data presented variability in the relationships between the mean difference in valence ratings and both the enjoyment of working from home and the feeling of missing life from before the quarantine. This may be explained by the interaction of additional factors that were not captured by the present experiment but might have impacted the participants' emotional state. For example, personality traits might play an essential role in the ways individual participants are affected by social isolation and how they cope with it (Taylor et al., 1969; Kong et al., 2014; Zelenski et al., 2013). Furthermore, the intensity of the enforced quarantine measures was not the same for all participants, resulting in variation in self-isolation. Future studies should address these limitations by controlling for additional, possibly confounding factors.”
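The reviewer's distinction between statistical significance and correlation strength can be made concrete with synthetic data (not the study's): given a large enough sample, even a modest correlation yields a very small p-value while remaining weak to moderate by Cohen's conventions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic data with a modest underlying correlation (about 0.3) and
# substantial scatter, similar in spirit to the Figure 5 scatterplots.
x = rng.normal(size=500)
y = 0.3 * x + rng.normal(size=500)

r, p = stats.pearsonr(x, y)
# Significance (p) and strength (r) are distinct questions: here p is
# tiny, yet by Cohen's conventions |r| near 0.3 is only moderate.
print(f"r = {r:.2f}, p = {p:.2g}")
```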

12. L265-269. Other than the response bias issues, what is the significance of reporting personal situations? I am not sure these are ‘robust’ with ratings at 65-84% accuracy. These are far lower than that reported, for example, for sentiment analysis of social media posts, but might reflect the novel application of ML to this topic. On the other hand, it might indicate the need for more training of the algorithm.

R12. This is an important point. Indeed, we report different aspects related to the personal situation of the participants since, as revealed in previous studies, they comprise significant indicators of affective states during quarantines. Specifically, as we describe in the Introduction section as well as in the Discussion, certain situations, such as physical interaction with others, might significantly affect participants’ mood (Lines: 10-12):

“Mandatory mass quarantine restrictions, which include social distancing, stay-at-home rules, and limiting work-related travel outside the home (Rothstein et al., 2003), might impact both physical and mental health of the affected individuals (Nobles et al., 2020).”

We agree that, ideally, the accuracy of the proposed classification algorithms should have lower variability. However, at the current stage, it rather provides an indication of the robustness of the model, which we are currently improving (Lines: 332-337):

“Importantly, at the current stage, the proposed classification algorithms serve rather as proof of the potential to automatically classify well-being (Lipton et al., 2014). Future work will address this limitation by further improving the model. Those improvements will imply additional training of the classifier and the inclusion of supplementary variables that might affect participants’ mental state, such as personality traits and biometrics.”

13. L269-275. This section focussed solely on the implementation of the technology, not the psychological health of participants. I feel more emphasis on the interpretation of the findings is needed, rather than discussing future application of the model.

R13. We thank the Reviewer for raising this point. We agree that the discussion related to the participants' psychological health was not thorough enough in the initial version of the manuscript. To address this point, we have included the following paragraph in the Discussion section (Lines: 342-356):

“(...) Taken together, the present report offers a significant and timely finding which sheds light on the current quarantine's impact beyond the experience of the individuals who undergo it. In line with other studies (Nobles et al., 2020, Holmes et al., 2020, Rajkumar et al., 2020, Torales et al., 2020), our results confirm that individuals undergoing current mass quarantine can experience adverse psychological effects and be at risk of anxiety, mood dysregulations, and depression, which, in the long term, may lead to post-traumatic stress disorder and affect overall wellbeing (Miles et al., 2015, Brooks et al., 2020, Hossain et al., 2020). Indeed, according to previous studies, the measures that are commonly undertaken to mitigate pandemics, including stay-at-home rules and social distancing, may have drastic consequences. For instance, people can experience intense fear and anger leading to severe consequences at cognitive and behavioral levels, culminating in civil conflict and legal procedures (Miles et al., 2015) as well as suicide (Barbisch et al., 2015, Rubin et al., 2020). In addition, the long-term impact of this change in wellbeing is currently not understood and deserves further study. The results presented in this report highlight the need to explore possible impacts of the COVID-19 pandemic and its effects on psychological wellbeing and mental health. To this aim, more studies need to be conducted to systematically investigate the interventions that may be deployed by both the healthcare system and individuals undergoing quarantine to mitigate the adverse psychological effects.”

Reviewer #2

This study provides valuable insights regarding emotive perception among individuals under quarantine in COVID-19 pandemic. The findings of this study may inform future research and policymaking on mental health, especially for people who are more likely to have impaired affective states. However, this study may be subjected to a methodological concern related to sampling and comparative analyses, which should be considered prior to communicating this research with a broader audience.

1. As a small sample was drawn from a diverse online population from 19 countries, it is likely that their mental health may not represent the populations they belong to. Furthermore, demographic and psychosocial factors in the study sample may be heterogeneous in nature, which may further affect the generalizability of the findings. Proper rationale for this sampling approach should be presented in the methods section and associated limitations should be discussed in the discussion section of the article.

R1. We thank the Reviewer for raising this point. As correctly noted by the Reviewer, our sample comes from a variety of countries. Importantly, however, the vast majority of the participants originate from a relatively small subset of European countries, including Spain (53.57%), Italy (16.07%), Poland (8.04%), and the United Kingdom (5.36%). We consider that this does not impair the generalizability of our results due to cultural similarities that reduce the possible heterogeneity of the data (Gupta et al., 2002). The rationale behind our sampling approach was to include European subjects whose countries apply similar measures to mitigate the spread of the virus [ref]. To explicitly address this issue in our manuscript, we have extended the Methods section (Participants subsection) by including the following (Lines: 71-77):

“This sampling approach was chosen to cover a range of countries that were similarly impacted by self-isolation measures. In particular, for the analyses, we included only those participants who were actively undergoing quarantine. Thus, all participants were uniform in their cultural traits (Gupta et al., 2002) and quarantine measures, including social isolation and distancing, the banning of social events and gatherings, the closure of schools, offices, and other facilities, and travel restrictions (Conti, 2020; Shah et al., 2020).”

Furthermore, we have added the following paragraph in the Discussion section (Lines: 281-289):

“(...) Moreover, the participant sample used in this study comes from a variety of European countries. This sampling approach was intentionally chosen to cover a set of regions with comparable cultures as well as quarantine and self-isolation measures. It is possible, however, that the underlying diversity of the sample could have introduced heterogeneity in the data, which could impact the generalizability of our findings. This limitation shall be addressed in future studies by focusing data collection on a smaller subset of countries to further ensure the commonality of demographic aspects that could better represent the mental health of the sampled population.”

2. Another perspective on the use of the evidence revealed in this study is how mental health practitioners and policymakers can translate the findings into clinical practice and mental health policymaking. The authors may wish to draw some inferences on how the altered emotive perceptions may result in short- and long-term mental health impacts, how some individuals are more vulnerable than others, and how potential strategies can be adopted to mitigate such mental health challenges during this and future pandemics.

R2. We thank the reviewer for this comment. To address the issues of translating the proposed technology into the clinical practice and mental health policymaking, we included two paragraphs in the Discussion section. First, we added the following discussion (Lines: 296-311):

“(...) On the other hand, the hereby proposed method for diagnosing the affective changes through subjective ratings of emotive stimuli may already be of use to the healthcare system. Specifically, the current findings, as well as the reported machine learning techniques, could be translated into clinical practice by using techniques such as in-person visits and digital technology in the form of smartphone apps. The former could provide a unique opportunity of combining multidimensional scales including, for instance, brain scanning (e.g., functional Magnetic Resonance Imaging), genomic measurements, observer-rated neurocognitive evaluations (e.g., HDRS), patient self-reports (e.g., BDI), medical record reviews, as well as implicit measures such as the affective evaluations used in our study. From the academic and medical perspectives, such a compound diagnosis could contribute to fundamental advances in understanding neuropsychological conditions. However, there is a need for easy-to-apply and low-cost solutions for diagnostics, monitoring, and treatment. Hence, the implicit assessment validated in our study can allow continuous monitoring of the affective ratings as a proxy of the affective states, allowing for a prediction of the personal situation based on the obtained ratings. Such software could promote at-home remote diagnostics and monitoring of at-risk patients continuously, at a low cost, and with a further benefit of preventing possible response biases (Braun 2001, Paulhus 2002, Paulhus 2017). We have successfully deployed such an approach in the domain of stroke rehabilitation (Ballester et al., 2015, Grechuta et al., 2020).”

Furthermore, we have included a brief discussion related to the impact of the burden of neuropsychiatric illnesses, including depression and the necessity to rethink current assessment tools provided their limitations. Here, we also comment on the short- and long-term effects of mental health alteration due to COVID-19, as well as propose possible strategies that can be adopted to mitigate such mental health challenges during this and future pandemics. (Lines: 318-331):

“The efficient diagnosis, monitoring, and treatment of neuropsychiatric illness are becoming increasingly important because its burden exceeds that of cardiovascular disease and cancer (Vigo et al., 2016) and it is estimated that about 25% of individuals will suffer neurological or mental disorders at some point in their lives. However, due to several factors, including the lack of trained healthcare professionals, pervasive underdiagnosis, and stigma, only 0.2% will be able to receive the necessary treatment (Sayers 2001). Hence, key current challenges include the improvement of the efficacy of the diagnosis of psychological disturbances and overcoming known limitations of current clinical scales (Bagby et al., 2004, Zimmerman et al., 2004, Bech & Allerup 1981, Maier, Philipp, & Gerken 1985) together with accurately capturing symptoms and patient specific concerns (Demyttenaere et al., 2020). To this end, we propose that an optimal evaluation strategy may comprise explicit, observer-rated and self-reported evaluation tools combined with implicit physiological and behavioral monitoring using biometric sensing, such as the proposed affective rating methods and associated tools (Reinertsen & Clifford 2018).”

We thank the Editor and the Reviewers again for the care they have taken in processing this manuscript. We hope that you will find that the reworked version of our manuscript complies with the concerns raised in the referee reports. Thank you for considering our work.

Kind regards,

Héctor López Carral and co-authors

References:

Kurdi, Benedek, Shayn Lozano, and Mahzarin R. Banaji. "Introducing the open affective standardized image set (OASIS)." Behavior research methods 49.2 (2017): 457-470.

The Center for Systems Science and Engineering, Johns Hopkins. Coronavirus COVID-19 Global Cases; 2020.

Holmes, E. A., O’Connor, R. C., Perry, V. H., Tracey, I., Wessely, S., Arseneault, L., Bullmore, E. (2020). Multidisciplinary research priorities for the COVID-19 pandemic: a call for action for mental health science. The Lancet Psychiatry. Elsevier Ltd.

Rajkumar, R. P. (2020). COVID-19 and mental health: A review of the existing literature. Asian Journal of Psychiatry, 52.

Torales, J., O’Higgins, M., Castaldelli-Maia, J. M., & Ventriglio, A. (2020). The outbreak of COVID-19 coronavirus and its impact on global mental health. International Journal of Social Psychiatry, 66(4), 317–320.

Hamilton, M. (1960). A rating scale for depression. Journal of Neurology, Neurosurgery, and Psychiatry, 23, 56–62.

Williams, J. B. (2001). Standardizing the Hamilton Depression Rating Scale: Past, present, and future. European Archives of Psychiatry and Clinical Neuroscience, 251(Suppl 2), II6–II12.

Bech, P. (2009). Fifty years with the Hamilton scales for anxiety and depression: A tribute to Max Hamilton. Psychotherapy and Psychosomatics, 78(4), 202–211.

Bagby, R. M., Ryder, A. G., Schuller, D. R., & Marshall, M. B. (2004). The Hamilton Depression Rating Scale: has the gold standard become a lead weight? American Journal of Psychiatry, 161(12), 2163-2177.

Gibbons, R. D., Clark, D. C., & Kupfer, D. J. (1993). Exactly what does the Hamilton depression rating scale measure? Journal of psychiatric research, 27(3), 259-273.

Stefanis, C. N., & Stefanis, N. C. (2002). Diagnosis of depressive disorders: A review. Depressive disorders, 1-87.

Gullion, C. M., & Rush, A. J. (1998). Toward a generalizable model of symptoms in major depressive disorder. Biological Psychiatry, 44(10), 959–972.

Shapiro, M. B. (1951). An experimental approach to diagnostic psychological testing. Journal of Mental Science, 97(409), 748–764.

Fava, G. A., & Belaise, C. (2005). A discussion on the role of clinimetrics and the misleading effects of psychometric theory. Journal of clinical epidemiology, 58(8), 753-756.

Zimmerman, M., Posternak, M. A., & Chelminski, I. (2005). Is it time to replace the Hamilton Depression Rating Scale as the primary outcome measure in treatment studies of depression? Journal of Clinical Psychopharmacology, 25(2), 105–110.

Bech, P., Allerup, N., Reisby, N., & Gram, L. F. (1984). Assessment of symptom change from improvement curves on the Hamilton Depression Scale in trials with antidepressants.

Maier, W., Philipp, M., & Gerken, A. (1985). Dimensions of the Hamilton Depression Scale.

Fried, E. I., & Nesse, R. M. (2015). Depression sum-scores don’t add up: why analyzing specific depression symptoms is essential. BMC medicine, 13(1), 1-11.

Beck, A. T., Steer, R. A., & Garbin, M. G. (1988). Psychometric properties of the Beck Depression Inventory: 25 years of evaluation. Clinical Psychology Review, 8, 77–100.

Santor, D. A., Gregus, M., & Welch, A. (2009). Eight decades of measurement in depression. Measurement, 4, 135–155.

Kessler, R. C., Berglund, P., Demler, O., Jin, R., Koretz, D., Merikangas, K. R., & Wang, P. S. (2003). The epidemiology of major depressive disorder: results from the National Comorbidity Survey Replication (NCS-R). Jama, 289(23), 3095-3105.

Sayers, J. (2001). The world health report 2001 - Mental health: New understanding, new hope. Bulletin of the World Health Organization, 79(11), 1085.

Demyttenaere, K., Kiekens, G., Bruffaerts, R., Mortier, P., Gorwood, P., Martin, L., & Di Giannantonio, M. (2020). Outcome in depression (II): Beyond the Hamilton Depression Rating Scale. CNS Spectrums, 1–22.

Vigo, D., Thornicroft, G., & Atun, R. (2016). Estimating the true global burden of mental illness. The Lancet Psychiatry, 3(2), 171–178.

Reinertsen, E., & Clifford, G. D. (2018). A review of physiological and behavioural monitoring with digital sensors for neuropsychiatric illnesses. Physiological measurement, 39(5), 05TR01.

Whelan, R. (2008). Effective analysis of reaction time data. The Psychological Record, 58(3), 475-482.

Taylor, D. A., Altman, I., Wheeler, L., & Kushner, E. N. (1969). Personality factors related to response to social isolation and confinement. Journal of consulting and clinical psychology, 33(4), 411.

Kong, X., Wei, D., Li, W., Cun, L., Xue, S., Zhang, Q., & Qiu, J. (2015). Neuroticism and extraversion mediate the association between loneliness and the dorsolateral prefrontal cortex. Experimental brain research, 233(1), 157-164.

Zelenski, J. M., Sobocko, K., & Whelan, D. C. (2014). Introversion, solitude, and subjective well-being. The handbook of solitude: Psychological perspectives on social isolation, social withdrawal, and being alone, 184-201.

Center for Disease Control, Rothstein, M. A., Alcalde, M. G., Elster, N. R., Majumder, M. A., Palmer, L. I., & Hoffman, R. E. (2003). Quarantine and isolation: Lessons learned from SARS. University of Louisville School of Medicine, Institute for Bioethics, Health Policy and Law.

Nobles, J., Martin, F., Dawson, S., Moran, P., & Savovic, J. (2020). The potential impact of COVID-19 on mental health outcomes and the implications for service solutions.

Lipton, Z. C., Elkan, C., & Narayanaswamy, B. (2014). Optimal thresholding of classifiers to maximize F1 measure. In Lecture Notes in Computer Science (Vol. 8725, pp. 225–239). Springer.

Miles, S. H. (2015). Kaci Hickox: public health and the politics of fear. The American Journal of Bioethics, 15(4), 17-19.

Brooks, S. K., Webster, R. K., Smith, L. E., Woodland, L., Wessely, S., Greenberg, N., & Rubin, G. J. (2020). The psychological impact of quarantine and how to reduce it: rapid review of the evidence. The Lancet.

Hossain, M. M., Sultana, A., & Purohit, N. (2020). Mental health outcomes of quarantine and isolation for infection prevention: A systematic umbrella review of the global evidence. Available at SSRN 3561265.

Barbisch, D., Koenig, K. L., & Shih, F. Y. (2015). Is there a case for quarantine? Perspectives from SARS to Ebola. Disaster medicine and public health preparedness, 9(5), 547-553.

Rubin, G. J., & Wessely, S. (2020). The psychological effects of quarantining a city. Bmj, 368.

Conti, A. A. (2020). Historical and methodological highlights of quarantine measures: From ancient plague epidemics to current coronavirus disease (COVID-19) pandemic. Acta Biomedica.

Shah, J., Shah, J., & Shah, J. (2020). Quarantine, isolation and lockdown: in context of COVID-19. Journal of Patan Academy of Health Sciences, 7(1), 48-57.

Braun, H. I., Jackson, D. N., & Wiley, D. E. (Eds.). (2001). The role of constructs in psychological and educational measurement. Routledge.

Paulhus, D. L. (2002). Socially desirable responding: The evolution of a construct. In H. I. Braun, D. N. Jackson, & D. E. Wiley (Eds.), The role of constructs in psychological and educational measurement (pp. 49–69). Routledge.

Paulhus, D. L. (2017). Socially desirable responding on self-reports. Encyclopedia of personality and individual differences, 1-5.

Attachment

Submitted filename: Response to Reviewers.pdf

Decision Letter 1

Stephan Doering

31 Jul 2020

Subjective ratings of emotive stimuli predict the impact of the COVID-19 quarantine on affective states

PONE-D-20-15454R1

Dear Dr. López-Carral,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Stephan Doering, M.D.

Academic Editor

PLOS ONE

Acceptance letter

Stephan Doering

6 Aug 2020

PONE-D-20-15454R1

Subjective ratings of emotive stimuli predict the impact of the COVID-19 quarantine on affective states

Dear Dr. López-Carral:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Stephan Doering

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Selection of images used in this study from the OASIS data set.

    (PDF)

    S2 File. The COVID-19 questionnaire used in the study.

    (PDF)


    Data Availability Statement

    Data are available on Kaggle. The corresponding DOI is 10.34740/kaggle/dsv/1396507 (https://doi.org/10.34740/kaggle/dsv/1396507). Alternative URL: https://www.kaggle.com/hectorlopezcarral/covid19-affective-ratings.

