To the Editor—The COVID-19 pandemic has thrust preprints into the spotlight, attracting attention from the media and the public, as well as from scientists. Preprints are articles that have not yet been published in a peer-reviewed journal; because authors can still revise them before journal publication, preprints offer a unique opportunity to improve reporting. The Automated Screening Working Group (https://scicrunch.org/ASWG/about/COVIDPreprint) aims to provide rapid feedback that may help authors of COVID-19 preprints to improve their transparency and reproducibility.
One quarter of published COVID-19 papers have been preprints; most appear on medRxiv, and others appear on bioRxiv or other servers1. Although preprints allow results to be shared rapidly, the absence of traditional peer review has raised concerns about preprint quality. Unfortunately, it has been impossible for scientists to keep pace with the thousands of COVID-19 preprints posted since February. Preprints are vetted before posting to confirm that they describe scientific studies and to prevent posting on topics that could damage public health; however, routine assessment of manuscript quality or flagging of common reporting problems is not feasible at this scale.
Although automated screening is not a replacement for peer review, automated tools can identify common problems. Examples include failure to state whether experiments were blinded or randomized2, failure to report the sex of participants2 and misuse of bar graphs to display continuous data3. We have been using six tools4–8 to screen all new medRxiv and bioRxiv COVID-19 preprints (Table 1). New preprints are screened daily9 (see the sketch after Table 1). By this means, reports on more than 8,000 COVID-19 preprints have been shared using the web annotation tool hypothes.is (RRID:SCR_000430) and have been tweeted via @SciScoreReports (https://hypothes.is/users/sciscore). Readers can access these reports in two ways. The first option is to find the link to the report in the @SciScoreReports tweet in the preprint's Twitter feed, located in the metrics tab. The second option is to download the hypothes.is bookmarklet. In addition, readers and authors can reply to the reports, which also contain information on solutions.
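Reports can also be retrieved programmatically. The following is a minimal sketch, assuming the public hypothes.is search API and assuming the reports are posted from the 'sciscore' account linked above; the example preprint URL is hypothetical and should be replaced with a real one.

```python
# Minimal sketch: retrieve screening reports left on a preprint via the
# public hypothes.is search API. The 'sciscore' account name and the
# example URL below are assumptions for illustration.
import requests

HYPOTHESIS_SEARCH = "https://api.hypothes.is/api/search"

def fetch_reports(preprint_url: str) -> list[str]:
    """Return the text of annotations the sciscore account left on a preprint."""
    params = {
        "uri": preprint_url,                  # page the annotations target
        "user": "acct:sciscore@hypothes.is",  # the working group's account
        "limit": 10,
    }
    resp = requests.get(HYPOTHESIS_SEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return [row["text"] for row in resp.json().get("rows", [])]

if __name__ == "__main__":
    # Hypothetical medRxiv preprint URL; substitute a real one.
    url = "https://www.medrxiv.org/content/10.1101/2020.01.01.20000000v1"
    for report in fetch_reports(url):
        print(report[:200], "...")
```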
Table 1 | Automated screening tools
| Tool | Screens for | Link and RRID |
| --- | --- | --- |
| SciScore4 | Blinding, randomization, sample-size calculations, sex/gender, ethics and consent statements, resources, RRIDs | http://sciscore.com; RRID:SCR_016251 |
| ODDPub5 | Open data, open code | https://github.com/quest-bih/oddpub; RRID:SCR_018385 |
| Limitation-Recognizer6 | Author-acknowledged limitations | https://github.com/kilicogluh/limitation-recognizer; RRID:SCR_018748 |
| Barzooka | Bar graphs of continuous data | https://quest-barzooka.bihealth.org; RRID:SCR_018508 |
| JetFighter7 | Rainbow color maps | https://jetfighter.ecrlife.org; RRID:SCR_018498 |
| Seek & Blastn8 | Correct identification of nucleotide sequences^a | http://scigendetection.imag.fr/TPD52/; RRID:SCR_016625 |

^a Semiautomated tool; requires human confirmation of results. RRID, Research Resource Identifier.
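The working group's actual pipeline is available in ref. 9. The following is only a minimal sketch of a daily screening loop, assuming the public bioRxiv COVID-19 endpoint (https://api.biorxiv.org/covid19/{cursor}) and its field names (rel_doi, rel_site), which should be treated as assumptions here; the screen() placeholder stands in for the six tools in Table 1.

```python
# Sketch of a daily loop over new bioRxiv/medRxiv COVID-19 preprints.
# Endpoint and field names are assumptions based on the public bioRxiv API;
# this is not the working group's pipeline (see ref. 9 for that).
import requests

API = "https://api.biorxiv.org/covid19/{cursor}"

def new_preprints(seen: set[str]):
    """Yield preprints whose DOIs have not been screened before."""
    cursor = 0
    while True:
        page = requests.get(API.format(cursor=cursor), timeout=30).json()
        collection = page.get("collection", [])
        if not collection:
            break  # no more pages
        for item in collection:
            if item["rel_doi"] not in seen:
                yield item
        cursor += len(collection)

def screen(preprint: dict) -> None:
    # Placeholder for running SciScore, ODDPub, Barzooka, etc. and posting
    # the combined report as a hypothes.is annotation.
    print("would screen:", preprint["rel_doi"], preprint["rel_site"])

if __name__ == "__main__":
    seen: set[str] = set()  # in practice, persisted between daily runs
    for preprint in new_preprints(seen):
        screen(preprint)
        seen.add(preprint["rel_doi"])
```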
Screening of 6,570 medRxiv and bioRxiv COVID-19 preprints posted before 19 July revealed several interesting results. Open data were shared by 13.6% of preprints and open code by 14.3%, making it easier for others to reuse data or reproduce results. Approximately one third (34.4%) of COVID-19 preprints acknowledged at least one study limitation. Bar graphs of continuous data appeared in 7.3% of preprints. This is problematic because many different datasets can lead to the same bar graph, and the actual data may suggest conclusions that differ from those implied by the summary statistics alone3 (see the sketch below). Authors should therefore use dot plots, box plots or violin plots instead3. Among papers with color maps, 7.6% used rainbow color maps, which are not colorblind safe and also create visual artifacts for viewers with normal vision7. Rainbow color maps should be replaced with more informative color maps that are perceptually uniform and colorblind accessible, such as viridis7.

An ethics approval statement for human or animal research appeared in 1,775 preprints (27%). This suggests that nearly three quarters of COVID-19 preprints are secondary or tertiary analyses, modeling studies or cell-line studies that do not require approval. Although there are known sex differences in COVID-1910, only 20% of all COVID-19 preprints, and 38% of preprints with an ethics approval statement, addressed sex as a biological variable. Statements regarding sample-size calculations (1.4%), blinding (2.7%) and randomization (11.4%) were uncommon, even among studies with a human ethics statement (present in 2.4%, 5.4% and 12.6% of those studies, respectively). Many COVID-19 preprints are modeling studies, however, for which these criteria are not always relevant. Nonhuman organisms, mainly mice, were used in 6.1% of preprints. Among the 552 preprints that included cell lines, 7% described how the cell lines were authenticated (e.g., short tandem repeat profiling) or kept free of contamination (e.g., mycoplasma detection tests).
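To make the bar-graph problem concrete, the following minimal sketch (with invented numbers) constructs two samples with identical means: a bar graph of the means would be indistinguishable, even though the underlying distributions differ markedly.

```python
# Two invented samples with the same mean (4.00): a bar graph of the means
# would look identical, while dot, box, or violin plots would expose the
# bimodal structure of the second sample.
import numpy as np

symmetric = np.array([3.8, 4.0, 4.1, 3.9, 4.2, 4.0])  # tight cluster
bimodal   = np.array([2.0, 2.1, 1.9, 6.0, 5.9, 6.1])  # two subgroups

for name, sample in [("symmetric", symmetric), ("bimodal", bimodal)]:
    print(f"{name}: mean={sample.mean():.2f}, sd={sample.std(ddof=1):.2f}")
# Both means print as 4.00; only the spread (and the raw points) reveal
# that the two datasets tell very different stories.
```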
Our work shows that it is feasible to conduct large-scale automated screening of preprints and to provide rapid feedback to authors and readers. Automated tools are not perfect: they make mistakes, and they cannot always determine whether a problem is relevant to a given paper. Moreover, some problems are too complex for automated tools to detect. Despite these limitations, automated tools can quickly flag potential problems and may complement peer review. We hope that these reports will raise awareness of factors that affect transparency and reproducibility, while helping authors to improve their manuscripts. Further research is needed to determine whether automated tools improve reporting.
Acknowledgements
SciScore was funded by the US National Institutes of Health (OD024432, DA039832, DK097771, MH119094, and HHSN276201700124P). The development and application of Seek & Blastn is supported by grants from the US Office of Research Integrity, grant ID ORIIR180038-01-00 (J.A.B., C.L.), and from the National Health and Medical Research Council of Australia, Ideas grant ID APP1184263 (J.A.B., C.L., A.C.D.). Development of Limitation-Recognizer was partially supported by the intramural research program of the NIH and National Library of Medicine.
Competing interests
A.B. is a cofounder of SciCrunch Inc.
References
1. Callaway, E. Nature 582, 167–168 (2020).
2. US National Institutes of Health. https://grants.nih.gov/grants/guide/notice-files/not-od-15-103.html (2015).
3. Weissgerber, T. L. et al. Circulation 140, 1506–1518 (2019).
4. Menke, J., Roelandse, M., Ozyurt, B., Martone, M. & Bandrowski, A. iScience 23, 101698 (2020).
5. Riedel, N., Kip, M. & Bobrov, E. Data Sci. J. 19, 42 (2020).
6. Kilicoglu, H., Rosemblat, G., Malicki, M. & ter Riet, G. J. Am. Med. Inform. Assoc. 25, 855–861 (2018).
7. Saladi, S. eLife Labs https://elifesciences.org/labs/c2292989/jetfighter-towards-figure-accuracy-and-accessibility (2020).
8. Labbé, C., Grima, N., Gautier, T., Favier, B. & Byrne, J. A. PLoS ONE 14, e0213266 (2019).
9. Eckmann, P. https://github.com/PeterEckmann1/aswg-pipeline (accessed 15 September 2020).
10. Wenham, C., Smith, J. & Morgan, R., on behalf of the Gender and COVID-19 Working Group. Lancet 395, 846–848 (2020).