Brain, Behavior, & Immunity - Health. 2020 Aug 2;7:100123. doi: 10.1016/j.bbih.2020.100123

The quality of research on mental health related to the COVID-19 pandemic: A note of caution after a systematic review

Inés Nieto, Juan F. Navas, Carmelo Vázquez
PMCID: PMC7395946  PMID: 32835299

Abstract

Background and aims

The SARS-CoV-2 pandemic has spurred scientific production in diverse fields of knowledge, including mental health. Yet, the quality of current research may be challenged by the urgent need to provide immediate results to understand and alleviate the consequences of the pandemic. This study aims to examine compliance with basic methodological quality criteria and open science practices in research on the mental health effects of the COVID-19 pandemic.

Method and results

Twenty-eight studies were identified through a systematic search. Most of them met the requirements related to reporting key methodological and statistical information. However, the widespread use of convenience samples and the lack of a priori power analysis, coupled with low compliance with open science recommendations, such as pre-registration of studies and availability of databases, raise concerns about the validity, generalisability, and reproducibility of the findings.

Conclusions

While the importance of offering rapid evidence-based responses to mitigate mental health problems stemming from the COVID-19 pandemic is undeniable, this should not come at the expense of scientific rigor. The results of this study may stimulate researchers and funding agencies to orchestrate efforts and resources and to follow standard codes of good scientific practice.

Keywords: COVID-19, Mental health, Research quality, Systematic review

Highlights

  • The quality of mental health research related to the COVID-19 pandemic may need to be revised.

  • Main quality problems affect the validity and generalisability of findings.

  • Open science practices are hardly followed in the literature examined.

1. Introduction

The SARS-CoV-2 pandemic has had profound consequences in many areas of daily life, including research activities. The academic world has launched, admirably fast, many initiatives to try to understand both the biological mechanisms of the virus and the associated psychological and social responses in humans (Holmes et al., 2020). However, this rapid proliferation of studies, together with fast-track publication practices, can also be a source of unexpected problems related to the quality of the ongoing studies (Ioannidis, 2020), which are ultimately leading to enormous ethical and practical problems (Mehra et al., 2020).

The SARS-CoV-2 crisis is likely challenging some of the standard codes of scientific conduct. A gigantic incentive system has been built up to offer results immediately, which might be stimulating a race for social and academic reputation as well as for potential economic advantages (Smaldino and McElreath, 2016). However, even during these difficult times, or perhaps precisely because of them, high-quality research should be inexcusably guided by a series of principles that Zarin et al. (2019; p.813) summarised as: “(1) the study hypothesis must address an important and unresolved scientific, medical, or policy question; (2) the study must be designed to provide meaningful evidence related to this question; (3) the study must be demonstrably feasible (e.g., it must have a realistic plan for recruiting sufficient participants); (4) the study must be conducted and analysed in a scientifically valid manner; and (5) the study must report methods and results accurately, completely, and promptly”. Lack of adherence to these principles may contribute to the crisis of replicability and reproducibility in science (Munafò et al., 2017) and may favor the creation of a worthless corpus of knowledge based on significant but spurious findings.

The urgency of the situation may bring problems, such as running studies for purely exploratory purposes with no clear hypotheses and with flawed designs or analyses, which may harm the credibility of science and, worst of all, mislead important decisions on prevention and intervention measures (London and Kimmelman, 2020). With the hope of contributing to improving the quality of future research, the aim of this meta-research study (Hardwicke, Serghiou, et al., 2020; Hardwicke, Wallach, et al., 2020) was to provide a descriptive perspective on research and publication practices related to the mental health aspects of the current COVID-19 crisis. To analyse the quality of quantitative studies, given that there are no clear universal standards of scientific practice in the field of mental health (Gruber and Joormann, 2020), we focused on general principles of ethical standards, methodological soundness, adequate reporting, and open science practices, as described in quality checklists (Downs and Black, 1998) and indicators of reproducibility and transparency (Munafò et al., 2017).

2. Method

2.1. Search strategy and selection criteria

This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) statement for systematic review and meta-analysis protocols (Shamseer et al., 2015). The study protocol was pre-registered at https://osf.io/bk3gw/.

The literature search was performed using the PubMed and Scopus databases on 13 May 2020. Studies were included if they were empirical studies published in English in peer-reviewed journals between February and May 2020. The dependent variable(s) had to be quantitative, self-reported behavioural, cognitive or emotional measures related to mental health. Exclusion criteria included clinical pharmacological trials, studies using psychophysiological or biological recordings, and opinion articles, letters to the Editor, reviews and the like. The terms used in the search were coronavirus OR COVID (limited to title/abstract) AND mental OR psych* OR depression OR anxiety OR stress OR trauma OR alcohol OR drugs OR substance use.
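For illustration only, the following is a minimal sketch of how the PubMed arm of this search could be reproduced programmatically. The use of Biopython's Entrez module, the placeholder e-mail address, and the exact assembly of the query string are assumptions made here for the example; they do not describe the retrieval procedure actually used for the review.

# Minimal, assumption-laden sketch of the PubMed search; not the original retrieval code.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder; NCBI requires a contact address

# Search terms as listed above; coronavirus/COVID restricted to title/abstract
query = (
    '(coronavirus[Title/Abstract] OR COVID[Title/Abstract]) AND '
    '(mental OR psych* OR depression OR anxiety OR stress OR trauma '
    'OR alcohol OR drugs OR "substance use")'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",        # filter on publication date
    mindate="2020/02/01",
    maxdate="2020/05/13",   # the search was run on 13 May 2020
    retmax=1000,
)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} records retrieved; first PubMed IDs: {record['IdList'][:5]}")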

Two researchers (IN and JFN) independently evaluated the search results for inclusion. First, duplicates were removed. Then, studies whose titles and abstracts suggested that they did not meet the inclusion criteria were excluded. Finally, the researchers examined in detail the full text of the remaining studies. Discrepancies were resolved by consensus and, when needed, by referral to a senior researcher (CV).

2.2. Quality indicators

A list of quality indicators was developed by agreement between the authors, after examining some of the most widely used existing guidelines on quality in social and health research: the Checklist for Measuring Quality (Downs and Black, 1998), the Effective Public Health Practice Project (EPHPP) Quality Assessment Tool (Jackson et al., 2005), the Newcastle-Ottawa Scale (NOS) (Wells, Shea, and O'Connel, 2009), the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement (von Elm et al., 2008), and the Critical Appraisal Skills Programme (CASP) qualitative checklist (Critical Appraisal Skills Programme, 2018). For the selection, we included indicators that were common to all types of quantitative research, regardless of their design (e.g., clinical trials vs. cross-sectional studies). Since there is great overlap between checklists, the criteria were mainly based on the Checklist for Measuring Quality (Downs and Black, 1998). Indicators that required a subjective judgment were avoided (e.g., Were the statistical tests used appropriately?). Broadly, these indicators encompassed aspects of the design (e.g., recruitment process, use of validated instruments), the analytical strategy (e.g., control of potential confounders), and the way in which key components of the study were reported (e.g., internal reliability of the instruments used, exact p-values).

In addition, some further criteria that were not considered in the above guidelines (most of them related to open science) were added: (i) approval by an ethics committee; (ii) report of effect sizes; (iii) pre-registration of the study; and (iv) open access to databases (see Table 1, next section, for the full list of quality indicators).

Table 1.

Percentage of studies meeting quality indicators.

Quality indicator: % of studies

Ethics
1. Approval of ethics committee: 71.4%

Reporting
2. Description of participants' characteristics: inclusion and exclusion criteria for all participants (including control group, if applicable): 75%
3. Report of the period of time in which the sample was recruited: 78.6%
4. Report of estimates of the random variability in the data for the main outcome(s): 85.7%
5. Report of effect size(s) of main outcome(s): 96.4%
6* Report of exact p-value(s) for the main result(s), except when the probability value is less than 0.001: 79.2%

External validity
7. Use of random sampling methods to recruit participants: 10.7%
8* Report of the proportion of participants that agreed to participate: 48.1%
9. Inclusion of a priori power analyses: 7.14%

Internal validity
10. Use of previously validated instruments to measure main outcomes: 89.3%
11. Report of the internal reliability of measurement instruments within the study: 39.3%
12* Inclusion of specific analyses to control for potential confounders (e.g., sex, age, health status, etc.): 58.3%

Open science
13. Pre-registration of the study design, primary outcomes and analysis plan: 0%
14. Open access to databases (i.e., databases available in public repositories): 3.57%

Note: Quality indicators marked with an asterisk were not applicable in all cases: the eighth indicator was applicable to 27 studies, and the sixth and twelfth indicators to 24 studies each; percentages for these indicators are computed over the applicable studies.
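To make indicator 9 in Table 1 concrete, the following is a minimal sketch of an a priori power analysis for a simple two-group comparison. The statsmodels library and the effect size, alpha, and power values used below are illustrative assumptions; they are not drawn from any of the reviewed studies.

# Illustrative a priori power analysis for an independent-samples t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.3,            # anticipated Cohen's d (assumed, small-to-medium)
    alpha=0.05,                 # significance level
    power=0.80,                 # desired statistical power
    alternative="two-sided",
)
print(f"Participants required per group: {n_per_group:.0f}")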

3. Results

Twenty-eight empirical studies on mental health problems associated with the SARS-CoV-2 pandemic were finally included in the analyses (see Fig. 1 for a detailed description of the search flow and results).

Fig. 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram.

In Table 1 we present the standard ethical, methodological, analytical and open science quality indicators that were selected and the percentage of studies that met each indicator (i.e., our main outcome measure). The studies were coded by two independent judges (IN and JFN). Cohen's kappa for inter-rater reliability was 0.96. Discrepancies were resolved by discussion between the two researchers; those on which no consensus was reached were referred to the senior researcher (CV) for a final decision. For the sake of transparency, the quality indicators met by each individual study are available as a Supplementary file.
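As an illustration of the agreement statistic reported above, the following is a minimal sketch of how Cohen's kappa can be computed for two coders' binary ratings. The toy rating vectors and the use of scikit-learn are assumptions made for the example, not the software or data behind the reported value of 0.96.

# Toy example: two judges code whether each quality indicator was met (1) or not (0).
from sklearn.metrics import cohen_kappa_score

judge_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
judge_2 = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1]

kappa = cohen_kappa_score(judge_1, judge_2)
print(f"Cohen's kappa: {kappa:.2f}")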

All but one study reported some effect size for their main outcome measures. Two further criteria were met by more than 80% of the studies: one related to reporting key information (i.e., estimates of the random variability in the data for the main outcomes) and the other to the use of validated instruments to measure main outcomes. Also, a majority of the studies had the approval of an ethics committee and reported information on their inclusion/exclusion criteria, the period of time in which the sample was recruited and data were collected, and the exact p-values for the main results. On the other hand, around sixty percent of the studies included analytic strategies to control for the main potential confounders, about half reported the proportion of participants who agreed to participate, and less than forty percent included estimates of the internal reliability of the instruments used. Finally, there was low compliance with quality standards regarding the use of random sampling methods to recruit participants, the conduct of a priori power analyses, making the databases used available in a public repository, and pre-registration.
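For context on indicator 11, the following is a minimal sketch of how the internal reliability of a multi-item scale can be estimated within a study via Cronbach's alpha. The hand-rolled implementation and the invented item responses are illustrative assumptions and do not reflect the analysis of any reviewed paper.

# Cronbach's alpha from a respondents-by-items matrix (toy data, invented).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Rows are respondents, columns are scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(42)
base = rng.integers(1, 6, size=(100, 1))                       # latent level per respondent
toy_items = np.clip(base + rng.integers(-1, 2, size=(100, 5)), 1, 5)  # 5 correlated items
print(f"Cronbach's alpha: {cronbach_alpha(toy_items):.2f}")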

4. Discussion

The results of the study reveal that some critical components of high-quality research may be lacking in current studies on the mental health consequences of the COVID-19 pandemic. Whereas most of the studies fulfilled basic requirements of standard research (e.g., reporting basic descriptive statistics and inclusion/exclusion criteria), there are some worrying deficits regarding conditions that may pose a threat to the external validity of the studies and the generalisability and replicability of the results. Our results reveal serious flaws regarding the representativeness of the samples, the robustness of the measures used to estimate mental health problems in the studied samples, and the control of selective reporting. The low compliance with open science recommendations found in our review is in line with previous findings in psychology (Vanpaemel et al., 2015) and psychiatry (Sherry et al., 2020) showing that these practices are poorly followed by researchers, and confirms that research related to COVID-19 does not follow the measures recommended to increase the replicability of findings. Additionally, warnings and recommendations have been made about the quality of internet-collected responses (Chandler et al., 2020) and about the use of pre-analysis plans (Olken, 2015) that are apparently not followed in the studies analysed.

Although some of the limitations detected in our systematic review may be due to the urgency of providing fast research-based answers during a global emergency, we wonder whether researchers, universities, research centres and funding agencies are ready to respond to the unusual pressures associated with the development of scientific knowledge without compromising quality standards. It is worth noting that the COVID-19 pandemic has had a deep impact on academic organisations and researchers worldwide, subjecting them to unprecedented stress (e.g., time pressures to launch initial studies, difficulties in releasing funds for research, or overwhelming increases in workload) that may have hindered the design and conduct, at least in the initial stages of the pandemic, of more sophisticated and controlled research (e.g., using random sampling methods instead of convenience samples). In that respect, a limitation of the present review is that we have only included a first wave of published studies performed at the beginning of the pandemic, which might have been particularly affected by restrictions in time, funding and human resources. Nonetheless, regardless of the reasons behind these shortcomings, our findings raise concerns about whether the research examined has practical utility or may instead represent an obstacle to understanding the true impact of the COVID-19 pandemic on mental health.

As London and Kimmelman (2020) have argued, the logistic and practical challenges caused by the pandemic should not be an excuse to loosen the quality criteria usually required by good scientific practice. In spite of the current limitations on conducting high-quality research, society does not just need data but, now more than ever, sound data. Poor designs, poor recruitment procedures, and unrepresentative samples may render findings worthless and waste money and energy. Furthermore, flawed science may reduce trust in science, which is key to citizens following public health prescriptions during pandemics (Balog-Way and McComas, 2020). Although, as mentioned above, the urgency to conduct studies and publish results is understandable, sound science is typically fed by reflection, hypothesis-guided research, and robust strategic plans (Hardwicke, Serghiou, et al., 2020; Hardwicke, Wallach, et al., 2020).

Behind the demand for quality in research in these difficult times, we should remind ourselves that “the moral mission of research remains the same: to reduce uncertainty and enable caregivers, health systems, and policy-makers to better address individual and public health” (London and Kimmelman, 2020, p.476). Otherwise, these efforts may be as worthless and unproductive as they are unethical (Zarin et al., 2019).

Role of the funding source

There was no direct funding source for this study. JFN was supported by a post-doctoral contract from the Spanish Ministry of Science and Innovation (Juan de la Cierva-Formación: FJC2018-036047-I) and IN was supported by a grant from the Complutense University of Madrid (CT17/17-CT18/17 UCM). These agencies had no role in the development of this study.

Declaration of competing interest

The authors declare no conflict of interest.

Acknowledgements

We thank James O'Grady for his assistance in editing the latest version of this manuscript and Prof. Julio Sanchez-Meca for his comments on the goals and design of the study.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.bbih.2020.100123. The following is the Supplementary data to this article:

Multimedia component 1
mmc1.docx (61.4KB, docx)

References

1. Balog-Way D.H.P., McComas K.A. COVID-19: reflections on trust, tradeoffs, and preparedness. J. Risk Res. 2020:1–11. doi: 10.1080/13669877.2020.1758192.
2. Chandler J., Sisso I., Shapiro D. Participant carelessness and fraud: consequences for clinical research and potential solutions. J. Abnorm. Psychol. 2020;129:49–55. doi: 10.1037/abn0000479.
3. Critical Appraisal Skills Programme. CASP Qualitative Checklist [online]. 2018. Available at: https://casp-uk.net/wp-content/uploads/2018/01/CASP-Qualitative-Checklist-2018.pdf. Accessed May 2020.
4. Downs S.H., Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J. Epidemiol. Community Health. 1998;52:377–384. doi: 10.1136/jech.52.6.377.
5. Gruber J., Joormann J. Best research practices in clinical science: reflections on the status quo and charting a path forward. J. Abnorm. Psychol. 2020;129:1–4. doi: 10.1037/abn0000497.
6. Hardwicke T.E., Serghiou S., Janiaud P., Danchev V., Crüwell S., Goodman S.N., Ioannidis J.P.A. Calibrating the scientific ecosystem through meta-research. Annu. Rev. Stat. Its Appl. 2020;7:1–51.
7. Hardwicke T.E., Wallach J.D., Kidwell M.C., Bendixen T., Crüwell S., Ioannidis J.P.A. An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017). R. Soc. Open Sci. 2020;7:190806. doi: 10.1098/rsos.190806.
8. Holmes E.A., O'Connor R.C., Perry V.H., Tracey I., Wessely S., Arseneault L., Ballard C., Christensen H., Cohen Silver R., Everall I., Ford T., John A., Kabir T., King K., Madan I., Michie S., Przybylski A.K., Shafran R., Sweeney A., Worthman C.M., Yardley L., Cowan K., Cope C., Hotopf M., Bullmore E. Multidisciplinary research priorities for the COVID-19 pandemic: a call for action for mental health science. Lancet Psychiatry. 2020;7:547–560. doi: 10.1016/S2215-0366(20)30168-1.
9. Ioannidis J.P.A. Coronavirus disease 2019: the harms of exaggerated information and non-evidence-based measures. Eur. J. Clin. Invest. 2020;50:1–5. doi: 10.1111/eci.13222.
10. Jackson N., Waters E., Kristjansson E. Criteria for the systematic review of health promotion and public health interventions. Health Promot. Int. 2005;20. doi: 10.1093/heapro/dai022.
11. London A.J., Kimmelman J. Against pandemic research exceptionalism. Science. 2020;368:476–477. doi: 10.1126/science.abc1731.
12. Mehra M.R., Desai S.S., Ruschitzka F., Patel A.N. Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. Lancet. 2020. doi: 10.1016/S0140-6736(20)31180-6. (Retracted).
13. Munafò M.R., Nosek B.A., Bishop D.V.M., Button K.S., Chambers C.D., Percie du Sert N., Simonsohn U., Wagenmakers E.J., Ware J.J., Ioannidis J.P.A. A manifesto for reproducible science. Nat. Hum. Behav. 2017;1:1–9. doi: 10.1038/s41562-016-0021.
14. Olken B.A. Promises and perils of pre-analysis plans. J. Econ. Perspect. 2015;29:61–80.
15. Shamseer L., Moher D., Clarke M., Ghersi D., Liberati A., Petticrew M., Shekelle P., Stewart L.A., Altman D.G., Booth A., Chan A.W., Chang S., Clifford T., Dickersin K., Egger M., Gøtzsche P.C., Grimshaw J.M., Groves T., Helfand M., Higgins J., Lasserson T., Lau J., Lohr K., McGowan J., Mulrow C., Norton M., Page M., Sampson M., Schünemann H., Simera I., Summerskill W., Tetzlaff J., Trikalinos T.A., Tovey D., Turner L., Whitlock E. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015. doi: 10.1136/bmj.g7647.
16. Sherry C.E., Pollard J.Z., Tritz D., Carr B.K., Pierce A., Vassar M. Assessment of transparent and reproducible research practices in the psychiatry literature. Gen. Psychiatry. 2020;33(1). doi: 10.1136/gpsych-2019-100149.
17. Smaldino P.E., McElreath R. The natural selection of bad science. R. Soc. Open Sci. 2016;3. doi: 10.1098/rsos.160384.
18. Vanpaemel W., Vermorgen M., Deriemaecker L., Storms G. Are we wasting a good crisis? The availability of psychological research data after the storm. Collabra. 2015;1. doi: 10.1525/collabra.13.
19. von Elm E., Altman D.G., Egger M., Pocock S.J., Gøtzsche P.C., Vandenbroucke J.P. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. J. Clin. Epidemiol. 2008;61:344–349. doi: 10.1016/j.jclinepi.2007.11.008.
20. Wells G.A., Shea B., O'Connel D. The Newcastle-Ottawa Scale (NOS) for Assessing the Quality of Nonrandomised Studies in Meta-Analyses. 2009. http://www.ohri.ca/programs/clinical_epidemiology/oxford.htm.
21. Zarin D.A., Goodman S.N., Kimmelman J. Harms from uninformative clinical trials. J. Am. Med. Assoc. 2019;322:813. doi: 10.1001/jama.2019.9892.
