BMJ Evid Based Med. 2023 Apr 19;28(6):418–423. doi: 10.1136/bmjebm-2022-112185

Rapid reviews methods series: Guidance on team considerations, study selection, data extraction and risk of bias assessment

Barbara Nussbaumer-Streit 1, Isolde Sommer 1, Candyce Hamel 2,3, Declan Devane 4,5,6, Anna Noel-Storr 7, Livia Puljak 8, Marialena Trivella 1,9, Gerald Gartlehner 1,10, on behalf of the Cochrane Rapid Reviews Methods Group
PMCID: PMC10715469  PMID: 37076266

Abstract

This paper is part of a series of methodological guidance from the Cochrane Rapid Reviews Methods Group (RRMG). Rapid reviews (RRs) use modified systematic review (SR) methods to accelerate the review process while maintaining systematic, transparent and reproducible methods to ensure integrity. This paper addresses considerations around the acceleration of study selection, data extraction and risk of bias (RoB) assessment in RRs. If a RR is being undertaken, review teams should consider using one or more of the following methodological shortcuts: screen a proportion (eg, 20%) of records dually at the title/abstract level until sufficient reviewer agreement is achieved, then proceed with single-reviewer screening; use the same approach for full-text screening; conduct single-data extraction only on the most relevant data points and conduct single-RoB assessment on the most important outcomes, with a second person verifying the data extraction and RoB assessment for completeness and correctness. Where available, extract data and RoB assessments from an existing SR that meets the eligibility criteria.


WHAT IS ALREADY KNOWN ON THIS TOPIC

  • Compared with full systematic reviews, rapid reviews (RRs) often omit dual processes or use other methodological shortcuts. While this helps accelerate the review process, unreflective use of shortcuts might introduce bias and/or inaccuracies to RRs.

WHAT THIS STUDY ADDS

  • This paper presents considerations and recommendations for team composition, study selection, data extraction and risk of bias assessment in a RR.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

  • The considerations and recommendations in this paper should help review teams conduct RRs quickly while minimising potential errors or bias, so that decision-makers in research, clinical practice and health policy can make evidence-based decisions in a resource-efficient manner.

Introduction

This paper is part of a series from the Cochrane Rapid Reviews Methods Group providing methodological guidance for rapid reviews (RRs).1–3 It aims to address considerations around team composition and the acceleration of study selection, data extraction and risk of bias (RoB) assessment in RRs, and to provide templates for practical use.

According to a recent scoping review, study selection, data extraction and RoB assessment are often the most resource-intensive steps during the production of a systematic review (SR).4 These are error-prone steps that require subjective judgement. In full SRs, it is considered best practice that two people independently screen potentially relevant studies, extract data and assess RoB of included studies.5 6

In a RR, methodological shortcuts can be employed to accelerate the timeline. According to a scoping review, 23% of RRs had one person extract data, 6% applied single-reviewer screening, 7% had one person assess RoB and another 7% omitted RoB assessment entirely.7 These methodological shortcuts might lead to gains in resource efficiency,4 especially if the search yields many records for screening or if many studies are included in the review. However, if not implemented properly, accelerated methods can even increase the workload; for example, an overinclusive single screener at the title/abstract level can inflate the number of full texts that need to be screened. Teams must therefore aim to minimise potential bias in the accelerated approaches they take.

In the following sections, we first address general considerations that are not unique to RRs but are nevertheless important when performing a RR. Team size, experience and organisation play an important role in the efficiency of study selection, the first step of the review process. We provide recommendations on piloting, followed by considerations specific to study selection, data extraction and RoB assessment. Table 1 gives an overview of the recommendations, which are discussed in more detail in the following sections.

Table 1.

Overview of recommendations for RR conduct

General recommendations on team characteristics and organisation
  • Ensure that the team has sufficient SR experience and is of a manageable size. Use supportive software and plan review steps to ensure an efficient workflow.

  • Employ piloting exercises that allow team members involved in a certain task (eg, study selection) to test the tools and processes of this task on a small proportion of records, ensuring that all team members perform the task consistently and correctly.

Recommendations on study selection (title/abstract and full-text level)
  • Reduce the number of human judgements involved:

    • Conduct dual and independent screening of a proportion of records (eg, 20%) and assess reviewer agreement. If agreement is good (eg, 80%), proceed with single-reviewer screening.

    • Enhance the validity of single-reviewer screening based on the types and number of exclusion reasons.

  • Use supportive software.

  • Consider semiautomation in the form of crowdsourcing and/or machine learning.

Recommendations on data extraction
  • Have one person extract the data, with a second person verifying the data for accuracy and completeness.

  • Limit data extraction to only the most important data fields relevant to address the RR question (as agreed on in the protocol).

  • Where available, extract primary study data directly from existing SRs.

  • Use supportive software.

Recommendations on RoB assessment
  • Use validated and study design–specific tools to assess the RoB of the included studies.

  • Limit the RoB assessment to only the most important outcomes (for RoB tools with outcome-specific questions).

  • Have one person perform the RoB assessment and have a second reviewer verify the judgements.

  • Complete omission of RoB assessment is discouraged, as this information informs the interpretation of the evidence and review implications.

RoB, risk of bias; RR, rapid review; SR, systematic review.

General considerations

Team characteristics and organisation

RRs may seem ‘easier’ to conduct than SRs because they are perceived to have less methodological rigour. However, in our experience, the review team must include sufficient SR methodological experience to properly plan, conduct, analyse and report a RR and, most importantly, be aware of potential limitations due to the methodological shortcuts.

During study selection, novices to evidence synthesis tend to make more incorrect decisions about the inclusion and exclusion of records than reviewers who have already worked on several evidence syntheses.8 9 Although data abstractors’ experience may matter less than initially thought and adjudication reduces errors, skilled extractors remain key to minimising error rates in RRs.10 11 Data extraction and RoB assessment require training and experience. It is, therefore, important that team expertise and organisation are considered carefully.

In our experience, RR teams should not be too large (ideally 3–5 people), as larger teams can increase inefficiencies. However, a larger team can be beneficial during the study selection phase if the number of records is large and the time to complete the review is limited. Conversely, for tasks such as data extraction and RoB assessment, limiting the number of reviewers may increase homogeneity and efficiency. It can therefore be beneficial if not all team members participate in every review step.

The review team may also be organised to work on different stages of the review in parallel rather than working as a team on each stage. For example, while one part of the team screens titles/abstracts, other members can screen potentially relevant full texts or start with the data extraction and RoB assessment of the included studies.12 It helps to perform data extraction and RoB assessment simultaneously, using the same people, so that each study needs to be evaluated only once. We also recommend using collaborative platforms (eg, Microsoft Teams, Google Drive) and/or SR software (eg, Abstrackr, Covidence, DistillerSR, Rayyan) to share documents and manage the review (eg, protocol, screening forms, reports, meeting notes). Videoconference tools can facilitate conflict resolution and regular team meetings.

Piloting

Piloting exercises in RRs allow team members involved in a certain task (eg, study selection) to test the tools and processes of this task on a small proportion of records. This helps ensure that all team members have a common understanding of the task and perform it consistently and correctly. Piloting is particularly important in RRs when certain tasks rely on a single person’s judgement. If, for example, a researcher extracts data inconsistently (eg, sometimes the number of people analysed, sometimes the number of people randomised), this increases the workload for the person verifying the data extraction and could lead to distorted results; piloting helps avoid such problems.

For study selection, we recommend creating a standardised screening form that clearly explains the eligibility criteria10 11 (example of a screening form: see online supplemental appendix 1). The entire screening team should pilot the form using the same records to test whether all team members share a common understanding of the inclusion and exclusion criteria. For full-text screening, as with title/abstract screening, we recommend a pilot exercise in which the entire screening team assesses the same full-text articles.10 11 The number of records used in the pilot may depend on several factors, including the total search yield, the complexity of the topic and the experience of the screening team.
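To illustrate the idea, the sketch below shows a minimal standardised screening form expressed as explicit yes/no questions plus a simple decision rule; the eligibility questions are hypothetical and are not taken from the supplemental appendix.

```python
# A minimal, hypothetical sketch of a standardised title/abstract screening form:
# explicit yes/no questions derived from the eligibility criteria, so that every
# screener applies the criteria in the same way during the pilot and afterwards.
SCREENING_FORM = [
    ("population",   "Does the study include adults with condition X?"),
    ("intervention", "Does the study evaluate intervention Y (any dose or duration)?"),
    ("comparator",   "Is there a control or comparison group?"),
    ("study_design", "Is the study a randomised controlled trial?"),
]

def screen_record(answers: dict) -> str:
    """Exclude only if at least one criterion is clearly answered 'no';
    'unclear' answers keep the record for full-text screening."""
    if any(answers.get(key) == "no" for key, _ in SCREENING_FORM):
        return "exclude"
    return "include"

# Example: the design cannot be judged from the abstract, so the record is kept.
print(screen_record({"population": "yes", "intervention": "yes",
                     "comparator": "unclear", "study_design": "unclear"}))
```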


For data extraction, we recommend creating and pilot testing a data extraction template. This form should limit data fields to the essential data items as discussed with the knowledge users and defined in the protocol.10 11 A list of data items usually extracted into a data extraction form is available in online supplemental appendix 2. The template can be created as a spreadsheet or web-based form, or set up in SR management software. All people involved in data extraction should pilot the template using the same studies and then compare their results; this helps increase data extraction accuracy. Pilot testing RoB assessment tools for content is not usually necessary, since published and validated tools exist. However, assessing a few studies as a team to discuss discrepancies in judgements can also be useful.
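As an illustration, the sketch below shows how a protocol-limited extraction template might be set up as a simple spreadsheet; the field names are hypothetical and would be replaced by the data items agreed in the protocol.

```python
# A minimal, hypothetical sketch of a piloted data extraction template limited to
# the data items agreed in the protocol. In practice this would typically be a
# spreadsheet, web form or SR-software form; the field names here are illustrative.
import csv

FIELDS = [
    "study_id",            # first author and year
    "study_design",        # eg, RCT, cohort
    "population",
    "intervention",
    "comparator",
    "outcome",             # only the outcomes prioritised in the protocol
    "n_randomised",
    "n_analysed",          # extract consistently, as agreed during piloting
    "effect_estimate",     # eg, RR with 95% CI
    "source_location",     # page/table where the data were found, to speed up verification
]

with open("extraction_template.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    # One row per included study; a second reviewer verifies each entry against the paper.
    writer.writerow({"study_id": "Example 2021", "study_design": "RCT"})
```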


Study selection

Critical appraisal tools for SRs list dual-reviewer screening of titles/abstracts and full texts as a quality criterion.5 6 Dual-reviewer screening means that two reviewers independently assess all records for eligibility, first based on titles and abstracts, then based on the full texts of records included at the title/abstract level. Any conflicting judgements about the inclusion or exclusion of papers should be resolved by discussion or by consulting a third person.13 14 For RRs, teams can follow this dual approach if the volume of evidence to be reviewed and the available resources permit. Otherwise, we recommend the following accelerated approaches to study selection.

Reducing the number of human judgements involved

We recommend dual assessment of a proportion of records, for example, 20%.10 The proportion might vary depending on the complexity of the topic and/or the number of records yielded by the search. After this dual-screening phase, reviewers must discuss and resolve conflicting decisions and assess how well they agreed. We recommend continuing with single-reviewer screening (ie, each record is screened by one person) of the remaining titles/abstracts only when reviewer agreement is high (at least 80% agreement)15 16 during the dual-assessment phase.10 The team should feel confident that everyone performing single-reviewer screening is able to make correct judgements. If reviewer agreement is low, the review team should proceed with dual-reviewer screening until better agreement has been achieved.
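As a minimal illustration of how a team might quantify agreement after the dual-screening pilot, the sketch below computes raw percent agreement and Cohen’s kappa15 16 from two reviewers’ decisions; the decisions and the 80% threshold check are assumptions for demonstration only, not part of the original guidance.

```python
# A minimal sketch of quantifying reviewer agreement after the dual-screening
# pilot. The decisions below are invented; real teams would use the pilot sample
# (eg, 20% of records) screened independently by both reviewers.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
reviewer_b = ["include", "exclude", "include", "include", "exclude", "exclude"]

# Raw percent agreement: proportion of records with identical decisions.
agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)

# Cohen's kappa adjusts agreement for the level expected by chance.
kappa = cohen_kappa_score(reviewer_a, reviewer_b)

print(f"Percent agreement: {agreement:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")

# Decision rule from the guidance above: proceed to single-reviewer screening
# only if agreement in the pilot is high (eg, at least 80%).
if agreement >= 0.80:
    print("Proceed with single-reviewer screening of the remaining records.")
else:
    print("Continue dual-reviewer screening.")
```

Percent agreement is simple to interpret, whereas kappa additionally corrects for chance agreement, which matters when most records are obvious excludes.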

Although single-reviewer screening of all titles/abstracts may be a practical solution for certain RRs, we do not recommend it for RRs in general. This approach has been shown to miss 13% of relevant studies, and its accuracy depends mainly on the reviewer’s experience.17–19 One study also showed that the accuracy of single-reviewer screening was lower in a complex review including multiple study designs than in a pharmacological review including solely randomised controlled trials.18

Single-reviewer screening could, however, be a valid approach for excluding records with multiple exclusion reasons or a clear, objective exclusion reason (eg, wrong age group); records not meeting these criteria could then be screened dually.20 Another option for title/abstract screening is to perform single screening and have a second person check all excluded records. However, in our experience, this does not save much time and is often difficult to implement in SR software.

We recommend the same approach for full-text screening. After screening about 20% of the full texts dually and achieving good agreement between reviewers, the team can proceed with single-reviewer screening of the remaining full texts.10 If time and resources permit, a second person could verify the excluded full texts. Incorrectly included studies will still be identified during the data extraction phase.

Supportive software

A wide range of software tools exists to support study selection (see www.systematicreviewtools.com). According to Harrison et al, these tools vary significantly in terms of cost, scope and intended user audience.21 However, most use similar principles: all identified records can be uploaded to a web platform, distributed between screeners and screened simultaneously, and decisions are documented automatically. Most tools provide a platform where both title/abstract and full-text screening can be conducted. More details on supportive software can be found in another paper of this series.22 Several applications (eg, Abstrackr,23 DistillerSR,24 EPPI-Reviewer,25 PICO Portal,26 Rayyan,27 RobotAnalyst28 and SWIFT-Active Screener29) have incorporated artificial intelligence (eg, active machine learning) to aid study selection. While fully automating study selection is not optimal, semiautomation (eg, one human screener plus machine learning) is promising, at least in reviews of intervention studies,30–33 and could be implemented in RRs. Artificial intelligence can also rank abstracts by relevance, which is useful for prioritising records during screening so that those with a high likelihood of inclusion are reviewed first. There is some guidance on the point at which prioritised records no longer need to be screened, or need to be screened by only one reviewer, as some software displays the predicted inclusion rate during the screening process. Empirical evidence has shown that a predicted inclusion rate of 95% corresponds to finding around 98%–100% of relevant references during title/abstract screening.32 34
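To make the prioritisation idea concrete, the sketch below shows a simplified version of the underlying principle, not the algorithm of any named tool: a model is fitted to the records a human has already screened, and the remaining records are ranked so that likely includes appear first. The titles and labels are invented for demonstration.

```python
# A simplified sketch of screening prioritisation, not the algorithm of any named
# tool: fit a classifier to the records a human has already screened, then rank
# the unscreened records by predicted probability of inclusion.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

screened_titles = [
    "Randomised trial of drug A versus placebo in adults with hypertension",
    "Qualitative study of patient experiences with telehealth",
    "Randomised trial of drug A dose escalation in older adults",
    "Editorial on health policy reform",
]
screened_labels = [1, 0, 1, 0]  # human decisions: 1 = include, 0 = exclude

unscreened_titles = [
    "Cohort study of adherence to drug A",
    "Randomised controlled trial of drug A in adolescents",
    "Letter to the editor about conference attendance",
]

vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(screened_titles), screened_labels)

# Rank unscreened records so that likely includes are shown to the screener first.
probabilities = model.predict_proba(vectoriser.transform(unscreened_titles))[:, 1]
for probability, title in sorted(zip(probabilities, unscreened_titles), reverse=True):
    print(f"{probability:.2f}  {title}")
```

Dedicated tools apply this idea iteratively (active learning), retraining as new human decisions arrive and reporting recall estimates such as the predicted inclusion rate mentioned above.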

Crowdsourcing

Crowdsourcing is the outsourcing of tasks to a large community of people, usually via the internet. It can take a variety of formats,35 36 for example, microtasking, which involves breaking a task down into small, discrete classification or categorisation tasks. This crowdsourcing mode is particularly well suited to tasks that involve processing large amounts of information or data, such as the title/abstract screening stage of an SR.

Using a crowd in the review production process is challenging from a technical point of view. Tools are emerging to better support this contribution model, but they are currently in their infancy. Cochrane, for example, has been using crowdsourcing for study selection via its citizen science platform Cochrane Crowd (https://crowd.cochrane.org), which has performed well in study selection: across four RRs, title/abstract screening was completed within 48–53 hours and achieved a sensitivity of 94%–100% compared with the gold standard of dual-reviewer screening.36 Currently, only Cochrane authors can access Cochrane Crowd via Cochrane’s Screen4Me service. Alternatives to Cochrane Crowd exist, such as Amazon Mechanical Turk (AMT) (https://www.mturk.com), a microtasking platform where task proposers can create microtasks and offer micropayment (piece-rate payment, eg, £0.05 per record assessed) as a reward for task responders. However, reported experience of using AMT for this task is limited.

Data extraction

Critical appraisal tools for SRs require teams to strive to minimise errors in data extraction, ideally through dual-reviewer data extraction.5 6 Cochrane methods guidance for SR conduct requires data extraction to be done independently by two investigators (mandatory for outcome data, highly desirable for study characteristics data), seeking unpublished resources to complete the data extraction and using a piloted data extraction sheet.13 In RRs, the following accelerated approaches to data extraction can be considered.

Reducing the number of human judgements involved

One accelerated method of data extraction is to have one person extract the data, with a second person verifying the data for accuracy and completeness.10 11 Dual, independent data extraction has been reported to take longer per study than single extraction with verification, without yielding significantly different results.37 Because extraction errors are frequent, single data extraction, especially of outcome data, without verification by a second person is discouraged.38 To reduce the time spent on data verification, it is helpful for the initial extractor to highlight the extracted data in the electronic versions of the included papers. Extraction should also be limited to only the most important data fields needed to address the RR question, as determined in the review protocol.

Where available, reviewers can also extract data directly from existing SRs rather than from their included studies.10 11 In a case study on medical treatment for premature ejaculation, this approach did not alter the conclusions of the RR.39 However, it requires high-quality SRs with good reporting. Teams could also use data repositories to download data from completed SRs or to upload data for future reviewers (eg, the SR data repository from AHRQ40 or Mendeley Data41). The use of such repositories could help increase the reuse of data; however, uploading data is time-consuming. Further details on how to address finding multiple SRs, poor-quality SRs and when or whether to update an existing SR are provided in another paper of this series.42

Supporting software

A wide range of software tools to support data extraction is available (see www.systematicreviewtools.com). The most helpful tools support all steps of the review process (eg, Covidence, DistillerSR), as information and details can be shared across review steps. To the best of our knowledge, tools that automatically extract reliable data do not exist, but some tools can save time by assisting reviewers in the extraction process (eg, the ExaCT tool automatically detects and highlights data items).43 In RRs that include studies in multiple languages, translation software such as DeepL or Google Translate can also be helpful.44

RoB assessment

Current guidance for SR conduct requires RoB assessments to be done independently by two people,13 using published assessment tools such as the Cochrane Risk of Bias Tool 2.0 for randomised controlled trials.45 Showing support for judgements and incorporating these judgements into the synthesis are also required.13 This approach is encouraged in RRs if the timeline and the number of included studies permit; if not, the following accelerated approaches to RoB assessment can be considered.

Reducing the number of human judgements involved

Table 2 gives an overview of study design–specific RoB tools recommended by Cochrane; the list is not exhaustive, as non-Cochrane RRs may use other validated tools. One accelerated approach is to use tools that are less complex and, therefore, faster to complete (eg, Cochrane Risk of Bias Tool 1.0 vs 2.0) and to limit the assessment (for outcome-specific questions) to only the most important outcomes, as determined in the review protocol.10 11 Another approach is for one reviewer to perform the RoB assessment and for a second reviewer to verify the judgements.10 11 Complete omission of RoB assessment is discouraged, as this information informs the interpretation of the evidence and the review implications.

Table 2.

Risk of bias (RoB) assessment tools recommended by Cochrane

Study design | RoB tool
Randomised controlled trials | Cochrane RoB 2.0 49
Non-randomised studies of interventions | ROBINS-I 50
Non-randomised studies of exposures | ROBINS-E 51
Diagnostic studies | QUADAS-2 52
Prognostic studies | PROBAST 53
Systematic reviews | ROBIS 6

PROBAST, Prediction model Risk Of Bias Assessment Tool; QUADAS, Quality Assessment of Diagnostic Accuracy Studies; RoB, risk of bias; ROBINS-E, Risk Of Bias In Non-randomised Studies of Exposures; ROBINS-I, Risk Of Bias In Non-randomised Studies of Interventions; ROBIS, Risk of Bias in Systematic Reviews.

Supporting software

A wide range of software tools and complex spreadsheets exists to support RoB assessment (see www.systematicreviewtools.com). Machine learning tools are also available, such as RobotReviewer (www.robotreviewer.net), which automatically assesses RoB in randomised controlled trials and extracts supporting text (for some of the questions in the Cochrane Risk of Bias Tool V.1.0). Such software can assist during RoB assessment but cannot yet replace humans.46 47

Conclusion

Streamlining study selection, data extraction and RoB assessment can save time and resources during the RR process. However, shortcuts may come with increased risks (eg, missing one or more relevant studies, increasing data extraction errors). Therefore, piloting the steps of the review process with the team members who will perform them is essential in RRs. Every review team should include sufficient SR methodological experience to conduct the RR properly and should be aware of the potential limitations of methodological shortcuts. Novices to evidence synthesis should have a direct line of communication with experienced team members to resolve issues early in the process. Review teams should also consider that it is not necessary to employ methodological shortcuts at all stages of a RR and that the accelerated methods may differ from RR to RR. For example, in a RR that identifies only a small number of records, dual-reviewer screening with conflict resolution might save more time than single-reviewer screening by an overinclusive screener. Review teams should not be discouraged by an increased workload when using supportive software for the first time.48 Once the learning curve has been overcome, the use of software increases efficiency.

Acknowledgments

We would like to thank Sandra Hummel for administrative support.

Footnotes

Contributors: BN-S, GG and CH contributed to the conceptualisation of this paper. BN-S wrote the first draft of the manuscript. All authors critically reviewed and revised the manuscript. The Cochrane Rapid Reviews Methods Group provided feedback on the manuscript. BN-S wrote the final version of the manuscript and is responsible for the overall content as guarantor.

Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests: IS and LP: none declared. BN-S, CH, DD and GG are co-convenors of the RRMG. MT worked as a research associate for the RRMG in 2022. AN-S leads the crowdsourcing project ‘Cochrane Crowd’.

Patient and public involvement: Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

Provenance and peer review: Not commissioned; externally peer reviewed.

Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

Data availability statement

Data sharing not applicable as no datasets generated and/or analysed for this study.

Ethics statements

Patient consent for publication

Not applicable.

References

1. Klerings I, Robalino S, Booth A, et al. Rapid reviews methods series: guidance on literature search. BMJ Evid Based Med 2023;28:412–7. doi:10.1136/bmjebm-2022-112079
2. Garritty C, Tricco AC, Smith M, et al. Rapid reviews methods series: involving patient and public partners, healthcare providers and policymakers as knowledge users. BMJ Evid Based Med 2023. doi:10.1136/bmjebm-2022-112070 [Epub ahead of print 19 April 2023]
3. Gartlehner G, Nussbaumer-Streit B, Devane D. Rapid reviews methods series: assessing the certainty of evidence in rapid reviews – a practical guide. BMJ Evid Based Med 2023. doi:10.1136/bmjebm-2022-112111 [Epub ahead of print 19 April 2023]
4. Nussbaumer-Streit B, Ellen M, Klerings I, et al. Resource use during systematic review production varies widely: a scoping review. J Clin Epidemiol 2021;139:287–96. doi:10.1016/j.jclinepi.2021.05.019
5. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017;358:j4008. doi:10.1136/bmj.j4008
6. Whiting P, Savović J, Higgins JPT, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol 2016;69:225–34. doi:10.1016/j.jclinepi.2015.06.005
7. Tricco AC, Antony J, Zarin W, et al. A scoping review of rapid review methods. BMC Med 2015;13:224. doi:10.1186/s12916-015-0465-6
8. Robson RC, Pham B, Hwee J, et al. Few studies exist examining methods for selecting studies, abstracting data, and appraising quality in a systematic review. J Clin Epidemiol 2019;106:121–35. doi:10.1016/j.jclinepi.2018.10.003
9. Waffenschmidt S, Knelangen M, Sieben W, et al. Single screening versus conventional double screening for study selection in systematic reviews: a methodological systematic review. BMC Med Res Methodol 2019;19:132. doi:10.1186/s12874-019-0782-0
10. Garritty C, Trivella M, Hamel C, et al. Cochrane rapid review methods guidance - update (manuscript in preparation). 2023.
11. Garritty C, Gartlehner G, Nussbaumer-Streit B, et al. Cochrane rapid reviews methods group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol 2021;130:13–22. doi:10.1016/j.jclinepi.2020.10.007
12. Gartlehner G, Nussbaumer-Streit B. Learning from emergency trauma teams: an organizational approach for conducting (very) rapid reviews. Collaborating in response to COVID-19: editorial and methods initiatives across Cochrane. Cochrane Database of Systematic Reviews 2020;12 Suppl 1:41–2.
13. Higgins JP, Lasserson T, Chandler J, et al. Methodological expectations of Cochrane intervention reviews. London, 2022.
14. Institute for Quality and Efficiency in Health Care (IQWiG). General methods version 5.0. 2019. Available: www.iqwig.de/en/about-us/methods/methods-paper/
15. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas 1960;20:37–46. doi:10.1177/001316446002000104
16. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb) 2012;22:276–82.
17. Edwards P, Clarke M, DiGuiseppi C, et al. Identification of randomized controlled trials in systematic reviews: accuracy and reliability of screening records. Stat Med 2002;21:1635–40. doi:10.1002/sim.1190
18. Gartlehner G, Affengruber L, Titscher V, et al. Single-reviewer abstract screening missed 13 percent of relevant studies: a crowd-based, randomized controlled trial. J Clin Epidemiol 2020;121:20–8. doi:10.1016/j.jclinepi.2020.01.005
19. Pham MT, Waddell L, Rajić A, et al. Implications of applying methodological shortcuts to expedite systematic reviews: three case studies using systematic reviews from agri-food public health. Res Synth Methods 2016;7:433–46. doi:10.1002/jrsm.1215
20. Nama N, Hennawy M, Barrowman N, et al. Successful incorporation of single reviewer assessments during systematic review screening: development and validation of sensitivity and work-saved of an algorithm that considers exclusion criteria and count. Syst Rev 2021;10:98. doi:10.1186/s13643-021-01632-6
21. Harrison H, Griffin SJ, Kuhn I, et al. Software tools to support title and abstract screening for systematic reviews in healthcare: an evaluation. BMC Med Res Methodol 2020;20:7. doi:10.1186/s12874-020-0897-3
22. Affengruber L, Van der M, Nussbaumer-Streit B. Supportive software for rapid reviews (paper 8) - a practical guide (manuscript in preparation). 2023.
23. Centre for Evidence Synthesis in Health. Abstrackr. Providence, RI, USA.
24. DistillerSR. Evidence Partners. Ottawa, 2011.
25. Thomas J, Graziosi S, Brunton J, et al. EPPI-Reviewer: advanced software for systematic reviews, maps and evidence synthesis. EPPI-Centre software. London: UCL Social Research Institute, 2020.
26. PICO Portal. 2022. Available: https://picoportal.net/
27. Ouzzani M, Hammady H, Fedorowicz Z, et al. Rayyan: a web and mobile app for systematic reviews. Syst Rev 2016;5:210. doi:10.1186/s13643-016-0384-4
28. RobotAnalyst. The National Centre for Text Mining. n.d. Available: www.nactem.ac.uk/robotanalyst/
29. National Institute of Environmental Health Sciences (NIEHS). SWIFT-Active Screener. Research Triangle Park, NC, USA.
30. Gartlehner G, Wagner G, Lux L, et al. Assessing the accuracy of machine-assisted abstract screening with DistillerAI: a user study. Syst Rev 2019;8:277. doi:10.1186/s13643-019-1221-3
31. Gates A, Guitard S, Pillay J, et al. Performance and usability of machine learning for screening in systematic reviews: a comparative evaluation of three tools. Syst Rev 2019;8. doi:10.1186/s13643-019-1222-2
32. Hamel C, Kelly SE, Thavorn K, et al. An evaluation of DistillerSR's machine learning-based prioritization tool for title/abstract screening: impact on reviewer-relevant outcomes. BMC Med Res Methodol 2020;20:256. doi:10.1186/s12874-020-01129-1
33. Tsou AY, Treadwell JR, Erinoff E, et al. Machine learning for screening prioritization in systematic reviews: comparative performance of Abstrackr and EPPI-Reviewer. Syst Rev 2020;9. doi:10.1186/s13643-020-01324-7
34. Howard BE, Phillips J, Tandon A, et al. SWIFT-Active Screener: accelerated document screening through active learning and integrated recall estimation. Environ Int 2020;138:105623. doi:10.1016/j.envint.2020.105623
35. Noel-Storr AH, Dooley G, Wisniewski S, et al. Cochrane Centralised Search Service showed high sensitivity identifying randomized controlled trials: a retrospective analysis. J Clin Epidemiol 2020;127:142–50. doi:10.1016/j.jclinepi.2020.08.008
36. Noel-Storr A, Gartlehner G, Dooley G, et al. Crowdsourcing the identification of studies for COVID-19-related Cochrane rapid reviews. Res Synth Methods 2022;13:585–94. doi:10.1002/jrsm.1559
37. Buscemi N, Hartling L, Vandermeer B, et al. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol 2006;59:697–703. doi:10.1016/j.jclinepi.2005.11.010
38. Mathes T, Klaßen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol 2017;17:152. doi:10.1186/s12874-017-0431-4
39. James M-S, Cooper K, Kaltenthaler E. Methods for a rapid systematic review and meta-analysis in evaluating selective serotonin reuptake inhibitors for premature ejaculation. Evid Policy 2017;13:517–38. doi:10.1332/174426416X14726622176074
40. Agency for Healthcare Research and Quality (AHRQ). SRDR+: moving systematic reviews forward. n.d. Available: https://srdrplus.ahrq.gov/
41. Mendeley Ltd. Mendeley Data. n.d. Available: https://data.mendeley.com/
42. King VJ, Garritty C, Hamel C. Rapid reviews methods series: guidance on how to synthesize evidence (manuscript in preparation). 2023.
43. Gates A, Gates M, Sim S, et al. Creating efficiencies in the extraction of data from randomized trials: a prospective evaluation of a machine learning and text mining tool. BMC Med Res Methodol 2021;21:169. doi:10.1186/s12874-021-01354-2
44. Jackson JL, Kuriyama A, Anton A, et al. The accuracy of Google Translate for abstracting data from non-English-language trials for systematic reviews. Ann Intern Med 2019;171:677–9.
45. The Cochrane Collaboration. RoB 2 tool. 2019. Available: www.riskofbias.info/
46. Soboczenski F, Trikalinos TA, Kuiper J, et al. Machine learning to help researchers evaluate biases in clinical trials: a prospective, randomized user study. BMC Med Inform Decis Mak 2019;19:96. doi:10.1186/s12911-019-0814-z
47. Hirt J, Meichlinger J, Schumacher P, et al. Agreement in risk of bias assessment between RobotReviewer and human reviewers: an evaluation study on randomised controlled trials in nursing-related Cochrane reviews. J Nurs Scholarsh 2021;53:246–54. doi:10.1111/jnu.12628
48. Scott AM, Forbes C, Clark J, et al. Systematic review automation tools improve efficiency but lack of knowledge impedes their adoption: a survey. J Clin Epidemiol 2021;138:80–94. doi:10.1016/j.jclinepi.2021.06.030
49. Sterne JAC, Savović J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019;366:l4898. doi:10.1136/bmj.l4898
50. Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016;355:i4919.
51. Higgins JM, Rooney A, Taylor K, et al. Risk of bias in non-randomized studies - of exposure (ROBINS-E). 2022. Available: www.riskofbias.info/welcome/robins-e-tool
52. Whiting PF, Rutjes AWS, Westwood ME, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med 2011;155:529–36. doi:10.7326/0003-4819-155-8-201110180-00009
53. Wolff RF, Moons KGM, Riley RD, et al. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med 2019;170:51–8. doi:10.7326/M18-1376
