1. Introduction
The coronavirus disease 2019 (COVID-19) pandemic has imposed a tremendous toll on individuals, communities, healthcare systems, and economies. The outbreak severely disrupted the scientific community through campus shutdowns (Su, 2020). Counterintuitively, however, an unprecedentedly high number of studies involving human participants was published during this period (Di Girolamo and Meursinge Reynders, 2020). There has also been stern criticism of the research integrity of studies conducted during the COVID-19 pandemic and of the pace at which such research was happening (Bramstedt, 2020). The surge in the publication of online surveys, which have become an important tool for COVID-19 research (Hlatshwako et al., 2021), contributed both to the pace and ease of research output and, given the method's inherent limitations, to the criticism (Teitcher et al., 2015).
Recognizing the need to study mental health vulnerabilities in the wake of the pandemic-led crisis, psychiatry journals expedited the manuscript review process and launched additional virtual special issues for the speedy publication of articles (Tandon, 2020a; 2021a). This paved the way for the publication of numerous articles on COVID-19 and mental health, with some journals reporting up to a fourfold increase in manuscript submissions (Tandon, 2021a). Among these were many online surveys (Akintunde et al., 2021). Given that mental health research has followed the global scientific trend of resorting to internet surveys, evaluating the quality of these surveys is critical (Sharma and Tikka, 2020). The Checklist for Reporting Results of Internet E-Surveys (CHERRIES) is a robust tool for improving the reporting quality of internet e-surveys (Eysenbach, 2004). The purpose of this study was to systematically review online surveys on the mental health impact of COVID-19-related issues published during the first seven months of the outbreak (January to July 2020) for compliance with standard reporting quality guidelines.
2. Methodology
This systematic review was approved by the Institutional Ethics Committee (AIIMS/IEC/20/520; ECR/736/Inst/UK/201s/RR-18).
2.1. Study selection and data extraction
The search strategy is described in Section S1 of the supplementary material. Studies were included in this review if they (i) used online surveys and questionnaires; (ii) were cross-sectional in design; (iii) assessed mental and psychological health during the COVID-19 pandemic; (iv) were published between January and July 2020; and (v) were written in English. The full texts of relevant articles were assessed for adherence to the items of the CHERRIES checklist. Any disagreements regarding data extraction were resolved through consensus. The PRISMA flowchart describing the study selection process is presented in Supplementary Figure 1.
3. Results
General characteristics of the included studies are described in Section S2 of the supplementary material.
3.1. Adherence to CHERRIES
After screening 216 articles, we selected 80 internet surveys, all of which were reviewed for adherence to the items of the CHERRIES checklist. Table 1 shows the frequency and percentage of studies adhering to each item of the CHERRIES checklist, and Supplementary Table 1 shows the reporting-adherence checks (yes/no) for each item in each of these studies, as illustrated in the sketch below.
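As a minimal illustration of how the per-item frequencies in Table 1 can be tallied from the study-level yes/no checks in Supplementary Table 1, consider the following Python sketch; the records shown are hypothetical, not the actual extracted data.

```python
# Hypothetical yes/no adherence records: one dict per reviewed study,
# mapping a few CHERRIES item names to whether the study reported that item.
studies = [
    {"IRB approval": True, "Informed consent": True, "Data protection": False},
    {"IRB approval": True, "Informed consent": False, "Data protection": False},
    {"IRB approval": False, "Informed consent": True, "Data protection": True},
]

# Tally, for each item, the number and percentage of studies reporting it,
# mirroring the n (%) column of Table 1.
for item in studies[0]:
    n = sum(study[item] for study in studies)
    print(f"{item}: {n} ({100 * n / len(studies):.1f}%)")
```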
Table 1. Frequency of reporting adherence to each item of the CHERRIES checklist in the reviewed studies (N = 80).
| Domain | CHERRIES item | Studies reporting this item, n (%) |
| --- | --- | --- |
| Design | 1. Describe survey design | 79 (98.8) |
| IRB approval and informed consent process | 2. IRB approval | 72 (90) |
| | 3. Informed consent | 67 (83.8) |
| | 4. Data protection | 11 (13.8) |
| Development and pre-testing | 5. Development and testing | 3 (3.8) |
| Recruitment process and description of the sample having access to the questionnaire | 6. Open versus closed survey | 80 (100); closed 11 (13.8), open 69 (86.2) |
| | 7. Contact mode | 72 (90) |
| | 8. Advertising the survey | 31 (38.8) |
| Survey administration | 9. Web/e-mail | 75 (93.8) |
| | 10. Context | 19 (23.8) |
| | 11. Mandatory/voluntary | 72 (90); all reported as voluntary |
| | 12. Incentives | 8 (10) |
| | 13. Time/date | 75 (93.8) |
| | 14. Randomization of items or questionnaires | 1 (1.3) |
| | 15. Adaptive questioning | 2 (2.5) |
| | 16. Number of items | 75 (93.8); unclear in 3 (3.8) |
| | 17. Number of screens | 4 (5) |
| | 18. Completeness check | 21 (26.3) |
| | 19. Review step | 4 (5) |
| Response rates | 20. Unique site visitors | 1 (1.3) |
| | 21. View rate (ratio of unique survey visitors to unique site visitors) | 1 (1.3) |
| | 22. Participation rate (ratio of visitors who agreed to participate to unique first survey page visitors) | 5 (6.3) |
| | 23. Completion rate (ratio of users who finished the survey to users who agreed to participate) | 27 (33.8) |
| Preventing multiple entries from the same individual | 24. Cookies | 0 (0) |
| | 25. IP check | 4 (5) |
| | 26. Log file analysis | 2 (2.5) |
| | 27. Registration | 2 (2.5) |
| Analysis | 28. Handling of incomplete questionnaires | 43 (53.8) |
| | 29. Questionnaires submitted with an atypical timestamp | 3 (3.8) |
| | 30. Statistical correction | 4 (5) |
We found that almost all studies (98.8%) described the survey design, and all (100%) reported the survey mode, i.e., whether the survey was closed or open; most were conducted as open surveys (86.2%). The majority of studies reported the dates of the survey (93.8%) and the number of survey items (93.8%). IRB approval and a statement regarding informed consent were reported in 90% and 83.8% of the surveys, respectively. Ninety percent of studies specified the method of contact and whether participation in the survey was voluntary. Advertising of the survey and incentives provided were reported by 38.8% and 10% of the surveys, respectively. Completeness checks, completion rates, and handling of incomplete responses were reported by 26.3%, 33.8%, and 53.8% of surveys, respectively. Only 5% of studies applied statistical correction for factors such as incomplete responses and low response rates. Only 13.8% of studies included a data protection statement. The development and testing of items, their randomization during administration, and the use of adaptive questioning were reported in 3, 1, and 2 studies, respectively. While five studies reported participation rates, only one reported the view rate. Checks to prevent multiple entries from the same individual were reported in at most 5% of studies.
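To make the response-rate items (20-23) concrete, the following is a minimal Python sketch of how the CHERRIES view, participation, and completion rates are computed; all visitor counts are hypothetical.

```python
# CHERRIES response metrics (Eysenbach, 2004).
# All counts below are hypothetical, for illustration only.
unique_site_visitors = 5000    # visited the site hosting the survey link
unique_survey_visitors = 1200  # opened the first survey page
agreed_to_participate = 900    # consented / completed the first page
finished_survey = 720          # submitted the final page

view_rate = unique_survey_visitors / unique_site_visitors            # item 21
participation_rate = agreed_to_participate / unique_survey_visitors  # item 22
completion_rate = finished_survey / agreed_to_participate            # item 23

print(f"View rate:          {view_rate:.1%}")           # 24.0%
print(f"Participation rate: {participation_rate:.1%}")  # 75.0%
print(f"Completion rate:    {completion_rate:.1%}")     # 80.0%
```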
4. Discussion
Our critical review shows that while certain CHERRIES items are reported by most studies, many items go unreported. All studies should include the mandatory statements regarding IRB approval and informed consent recommended by the International Committee of Medical Journal Editors (ICMJE) and universally endorsed by biomedical journals; yet among the studies we reviewed, 10% did not report IRB approval and 16.2% did not report informed consent. This is despite numerous national agencies issuing clear guidelines indicating that review of COVID-19-related research studies may be expedited but not exempted (Arunachalam et al., 2021). Moreover, even though 90% of studies reported IRB approval, fewer than one in six reported a data protection statement. Data privacy protection is now regarded as an indispensable element of mental healthcare, particularly with the increased use of digital technology (Lustgarten et al., 2020).
COVID-19 studies have previously been criticized for failing to disclose limitations (Di Girolamo and Meursinge Reynders, 2020). Specific parameters, namely the view, participation, and completion rates, estimate selection bias, an inherent limitation of online surveys (Eysenbach, 2004). Our review found that only a small number of studies reported these rates; view and participation rates in particular were rarely reported (1.3% and 6.3%, respectively). Reporting of checks for multiple entries was likewise scarce (≤5%). This domain of internet surveys, akin to the 'seriousness checks' described by Aust et al. (2013), is critical for keeping their validity intact.
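To illustrate what such checks might look like in practice, here is a minimal Python sketch that flags repeat submissions from the same IP address (item 25) and submissions with implausibly short completion times (item 29); the records and threshold are hypothetical, not drawn from the reviewed studies.

```python
from collections import Counter

# Hypothetical survey records: (respondent IP, completion time in seconds).
responses = [
    ("203.0.113.7", 412),
    ("203.0.113.7", 398),   # same IP twice: possible multiple entry
    ("198.51.100.2", 35),   # implausibly fast: possible non-serious response
    ("192.0.2.14", 510),
]

MIN_PLAUSIBLE_SECONDS = 60  # hypothetical threshold for a serious attempt

ip_counts = Counter(ip for ip, _ in responses)
duplicates = [ip for ip, n in ip_counts.items() if n > 1]
too_fast = [(ip, t) for ip, t in responses if t < MIN_PLAUSIBLE_SECONDS]

print("IPs with multiple entries:", duplicates)
print("Atypically fast submissions:", too_fast)
```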
During this pandemic, scientific journals have faced the challenge of balancing "twin imperatives", i.e., expedited review and scientific rigor (Tandon, 2020b, 2021b). Relevant concerns have been raised over articles that present alarming but less accurate information, given their potential for misinterpretation and sensationalization (Tandon, 2021b). We contend that authors' adherence to standard reporting guidelines, together with editorial checks for such adherence, would improve the scientific rigor of online surveys. Overall, our review reveals that the reporting quality of published online surveys on mental health issues is subpar. Its findings therefore call for caution in interpreting the results of online surveys on COVID-19 and mental health, and for an urgent effort by such surveys to adhere to established reporting standards and guidelines.
Financial disclosure
No funding received.
Declaration of Competing Interest
The authors report no declarations of interest.
Acknowledgement
None.
Appendix A. Supplementary data
Supplementary material related to this article can be found, in the online version, at doi:https://doi.org/10.1016/j.ajp.2021.102799.
References
- Akintunde T.Y., Musa T.H., Musa H.H., Musa I.H., Shaojun C., Ibrahim E., Tassanga A.E., Helmy M.S.E.D.M. Bibliometric analysis of global scientific literature on effects of COVID-19 pandemic on mental health. Asian J. Psychiatr. 2021. doi:10.1016/j.ajp.2021.102753.
- Arunachalam M.A., Halwai A., Arunachalam C. National guidelines for ethics committees reviewing biomedical & health research during COVID-19 pandemic: an analysis. Indian J. Med. Ethics. 2021;VI:1–12. doi:10.20529/IJME.2020.120.
- Aust F., Diedenhofen B., Ullrich S., Musch J. Seriousness checks are useful to improve data validity in online research. Behav. Res. Methods. 2013;45:527–535. doi:10.3758/s13428-012-0265-2.
- Bramstedt K.A. The carnage of substandard research during the COVID-19 pandemic: a call for quality. J. Med. Ethics. 2020;46:803–807. doi:10.1136/medethics-2020-106494.
- Di Girolamo N., Meursinge Reynders R. Characteristics of scientific articles on COVID-19 published during the initial 3 months of the pandemic. Scientometrics. 2020;125:795–812. doi:10.1007/s11192-020-03632-0.
- Eysenbach G. Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J. Med. Internet Res. 2004;6. doi:10.2196/jmir.6.3.e34.
- Hlatshwako T.G., Shah S.J., Kosana P., Adebayo E., Hendriks J., Larsson E.C., Hensel D.J., Erausquin J.T., Marks M., Michielsen K., Saltis H., Francis J.M., Wouters E., Tucker J.D. Online health survey research during COVID-19. Lancet Digit. Health. 2021. doi:10.1016/S2589-7500(21)00002-9.
- Lustgarten S.D., Garrison Y.L., Sinnard M.T., Flynn A.W. Digital privacy in mental healthcare: current issues and recommendations for technology use. Curr. Opin. Psychol. 2020. doi:10.1016/j.copsyc.2020.03.012.
- Sharma R., Tikka S.K. COVID-19 online surveys need to follow standards and guidelines: comment on "Does COVID-19 pandemic affect sexual behaviour? A cross-sectional, cross-national online survey" and "Binge watching behavior during COVID 19 pandemic: a cross-sectional, cross-national online survey". Psychiatry Res. 2020;290. doi:10.1016/j.psychres.2020.113173.
- Su L. My lab is closed to me because of the coronavirus. Here's how I'm planning to stay productive. Nature. 2020. doi:10.1038/d41586-020-00986-6.
- Tandon R. The COVID-19 pandemic, personal reflections on editorial responsibility. Asian J. Psychiatr. 2020;50. doi:10.1016/j.ajp.2020.102100.
- Tandon R. COVID-19 and mental health: preserving humanity, maintaining sanity, and promoting health. Asian J. Psychiatr. 2020. doi:10.1016/j.ajp.2020.102256.
- Tandon R. The bitter lessons of COVID-19: acknowledging and working through many points of tension. Asian J. Psychiatr. 2021;55. doi:10.1016/j.ajp.2021.102545.
- Tandon R. COVID-19 and suicide: just the facts. Key learnings and guidance for action. Asian J. Psychiatr. 2021;60. doi:10.1016/j.ajp.2021.102695.
- Teitcher J.E.F., Bockting W.O., Bauermeister J.A., Hoefer C.J., Miner M.H., Klitzman R.L. Detecting, preventing, and responding to "fraudsters" in internet research: ethics and tradeoffs. J. Law Med. Ethics. 2015;43:116–133. doi:10.1111/jlme.12200.