
A guide to using technological applications to facilitate systematic reviews

Karen DiValerio Gibbs 1, Jennifer Loveless 2, Stacey Crane 1

Abstract

Background:

A systematic review (SR) synthesizes evidence in a reproducible way and informs evidence-based decision-making. SRs are time-intensive, particularly with respect to staying organized, maintaining records, and managing different phases of the process. Although there are numerous methodological guides to lead researchers in the approach to SRs to minimize bias and enhance rigor, there is less focus on technological approaches that can make the SR process easier for researchers.

Aim:

To guide researchers through the currently available technological applications that can assist with the SR process and synthesis of scientific literature.

Methodology:

Key ways that technological applications can facilitate the SR process are examined.

Results:

Specific applications are discussed and stratified by their support of one or multiple phases of the systematic review process. Key features, strengths, and limitations are provided for technological applications that support the SR process.

Linking Evidence to Action:

This paper guides researchers through the different ways technology can support SRs. Through use of these applications, researchers can complete SRs in a timely manner and manage the process effectively.

Keywords: applications, evidence synthesis, evidence-based practice, meta-analysis, systematic review, technology, tools

INTRODUCTION

Systematic reviews (SRs) are a research methodology involving a synthesis of scientific knowledge in a specific area (Page et al., 2021). SRs are used to inform clinical decision-making, serve as the foundation for developing evidence-based guidelines and formulating policy, and are instrumental in identifying specific areas that require more research or innovation (Page et al., 2021). Although vital for evidence-based practice, SRs are time-intensive, rigorous scientific endeavors, with the average SR taking 67.3 weeks to complete (Borah et al., 2017). With ever more scientific evidence to synthesize, SRs are being generated at an astounding rate; over 29,000 citations with an SR publication type were added to PubMed in 2021.

As technology evolves, a greater variety of highly effective technological applications are now also available to support SRs and enhance the rigor with which they are conducted. Researchers not adopting these tools may expend additional effort and time conducting SRs. Technology can assist in the documentation of the SR process, increase transparency, enhance rigor, and streamline some of the more time-intensive SR steps. However, missing from the literature is a comprehensive summary of specifically how technology can support SRs.

The objective of this article is to review the currently available, commonly used technological applications that can assist the researcher in conducting SRs and synthesis of scientific literature. Technology reviewed in this article includes applications or websites tailored to the specific phases of the SR process: developing the protocol, crafting and conducting the literature database searches, screening abstracts and articles, extracting data from the articles, assessing data quality in the articles, and synthesizing findings of the articles. Technological applications can assist individual or multiple phases of SRs. Although automated tools and machine learning systems can also expedite many steps of the SR process (Marshall & Wallace, 2019), they will not be covered in this article as they are still an emerging area of SR technology.

THE SYSTEMATIC REVIEW PROCESS

SRs should be conducted by a team. Ideally, the SR team includes a medical librarian and at least two others who will contribute substantively to the conduct of the SR. Technological applications are extremely helpful in coordinating team members’ efforts on the project. With multiple phases of the process requiring accurate documentation of decisions, record-keeping and project management present a major challenge to maintaining rigor and consistency in SRs.

Although there are many types of SRs (Grant & Booth, 2009) with innumerable different foci (Munn et al., 2018), the process typically followed for all SRs is relatively standard. This process includes protocol development, literature database searches, abstract and article screening and selection, data extraction, risk of bias or quality assessment of individual articles, data analysis, data synthesis, and determination of the certainty and quality of a body of evidence. Several landmark references provide detailed guidance to lead researchers through the SR process, with a focus on ensuring scientific rigor in every phase. These references include the PRISMA 2020 statement as a reporting guideline for SRs (Page et al., 2021), the Cochrane Handbook for Systematic Reviews of Interventions (Higgins et al., 2021), and the JBI Manual for Evidence Synthesis (Aromataris & Munn, 2020), among other evidence synthesis resources and organizations.

Briefly, protocol development begins with developing a question that will guide the review. The most common format for the guiding question is Population, Intervention, Comparison, and Outcome (PICO), although other formats may be used when appropriate, such as Population, Exposure, Comparison, Outcome (PECO) or Population, Concept, Context (PCC; Booth et al., 2019). After the guiding question is developed, next steps include establishing inclusion and exclusion criteria for article selection and identifying the primary databases that will be searched. Research teams are encouraged to register their SR protocol with a systematic review registry, such as PROSPERO (National Institute for Health Research, n.d.) or the Campbell Collaboration (2022), so that other researchers are aware of efforts that are underway. Checklists for SR protocols, such as PRISMA-P (Moher et al., 2015), help researchers refine each planned step of the SR at the very beginning of the project, which can save research teams valuable time and effort.

Once the protocol is established, a researcher, working with a medical librarian, crafts a specific Boolean-based search strategy for the primary database and translates it to the other relevant databases. Searches are then conducted, the resulting lists of articles are collated, and duplicates are removed. The titles and abstracts of the articles are then screened independently by two team members for possible eligibility. Once articles are identified for possible inclusion, the full texts of these articles are obtained and evaluated in more depth against the inclusion and exclusion criteria, again independently by two team members. Conflicts in team members’ determinations of the articles’ eligibility are then discussed and reconciled.

Once the SR team has identified all articles for inclusion, a data extraction form is developed, piloted, and refined using a small subset of the included articles. Data are then independently extracted from all the included articles by two team members, and any conflicts in data extraction are resolved. Data analysis and synthesis include evaluating the quality of the individual articles as well as assessing the study designs, populations studied, differences in interventions and methodological approaches, and research outcomes. The synthesis can involve quantitative methods (e.g., meta-analysis), qualitative methods, or both.
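To make the question formulation and search strategy steps concrete, the following purely illustrative example (not drawn from any particular review) pairs a PICO-formatted question with a corresponding PubMed-style Boolean search string; in practice, a medical librarian would refine the synonyms, MeSH terms, and database-specific syntax.

    P (Population):    hospitalized adults with heart failure
    I (Intervention):  nurse-led discharge education
    C (Comparison):    usual care
    O (Outcome):       30-day hospital readmission

    ("Heart Failure"[MeSH] OR "heart failure"[Title/Abstract])
      AND ("Patient Education as Topic"[MeSH] OR "discharge education"[Title/Abstract])
      AND ("Patient Readmission"[MeSH] OR readmission*[Title/Abstract])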

TECHNOLOGICAL APPLICATIONS TO FACILITATE THE SR PROCESS

Multi-phase applications

We begin by reviewing technological applications that are useful within multiple phases of the SR process and then discuss applications that may be used in individual phases. While single-phase applications may meet researchers’ needs for SRs, multi-phase applications offer the added benefit of guiding researchers through multiple phases of the SR process while storing data in a single location. These sophisticated programs offer added functionalities that help the research team stay organized, maintain records, and manage documents. A comparison of currently available multi-phase technological applications is provided in Table 1, including key features, limitations, costs, and the format(s) in which they are offered. Only the most commonly used applications are discussed in the text, but additional applications (Evidence Partners, 2022) are included in Table 1.

TABLE 1.

Review of multi-phase tools for systematic reviews

For each program, the phase(s) of the SR supported, key features, limitations, cost, and format(s) are listed.

Microsoft Excel®
  Phases supported: Study screening (title/abstract and full-text phases); data extraction; risk of bias assessment.
  Key features: Customizability.
  Limitations: Requires extensive time to customize to user preferences; easy to overwrite information; difficult to ensure accuracy with file manipulation.
  Cost: License must be purchased for the program.
  Format: Web-based online tool (through Microsoft 365) and downloadable application.

Covidence
  Phases supported: Study screening (title/abstract and full-text phases); data extraction; risk of bias assessment.
  Key features: Integrates with other programs (compatible with reference managers, RevMan, etc.).
  Limitations: Must export data to another program to do meta-analysis.
  Cost: Must purchase either a single review, a small package of reviews, or an organizational subscription.
  Format: Web-based online tool.

DistillerSR
  Phases supported: Study screening (title/abstract and full-text phases); data extraction; risk of bias assessment.
  Key features: Integrates with other programs (compatible with reference managers).
  Limitations: Cost.
  Cost: Monthly paid subscriptions for students and faculty with additional features, as well as academic institutional subscriptions.
  Format: Web-based online tool.

EPPI Reviewer
  Phases supported: Study screening (title/abstract and full-text phases); data extraction and synthesis; risk of bias assessment; report writing.
  Key features: Helpful for meta-analyses, narrative reviews, qualitative reviews, or other reviews that will use thematic synthesis; very similar to qualitative analysis programs.
  Limitations: Cost.
  Cost: Monthly subscriptions either for an individual user or a sharable review, or site licenses for organizations.
  Format: Web-based online tool.

JBI SUMARI
  Phases supported: Protocol development; study screening (title/abstract and full-text phases); data extraction and synthesis; risk of bias assessment; report writing.
  Key features: Integrates with other programs (compatible with reference managers); helps with report writing.
  Limitations: Cost.
  Cost: Annual subscriptions at the individual or organization level.
  Format: Web-based online tool.

PICO Portal
  Phases supported: Study screening (title/abstract and full-text phases); data extraction.
  Key features: Integrates with other programs (compatible with reference managers).
  Limitations: No risk of bias assessment.
  Cost: Free individual license (one project per user) and paid team licenses.
  Format: Web-based online tool.

RevMan Web or RevMan 5
  Phases supported: Preparation of protocols and full reviews, including text, characteristics of studies, comparison tables, and study data; risk of bias visualizations; meta-analysis.
  Key features: Can import from Covidence; can create graphical presentations of forest plots.
  Limitations: Web-based tool is meant for Cochrane reviews, but the downloadable application can be used for other types of reviews.
  Cost: Free for purely academic use.
  Format: Web-based online tool and downloadable application.

GRADEPro
  Phases supported: Evidence tables and determining the overall certainty of evidence.
  Key features: Can import from reference managers and meta-analysis software such as RevMan.
  Limitations: Primarily made for evidence-based guideline development; must utilize other programs for other portions of the review.
  Cost: Free individual license for up to 3 team members with up to 25 questions; Team and Enterprise licensing options.
  Format: Web-based online tool.

Depending on one’s proficiency and comfort, many phases of the SR can be managed entirely within Microsoft® Excel, including title and abstract review, full-text review, data extraction, and study risk of bias assessment. Although Excel is an inexpensive way of managing the SR process, we recommend using other multi-phase applications with features tailored specifically for SRs whenever the researchers’ budget allows. With Excel, it is easy to inadvertently overwrite cells and lose data or cause errors during sorting.

Several applications offer key features that make the SR process as seamless as possible. Covidence is one of the preferred screening and data extraction applications for conducting Cochrane SRs; Cochrane is one of the pre-eminent international networks for evidence synthesis (Cochrane Community, n.d.). This web-based software supports importing citations, completing both the title/abstract and full-text review phases, and resolving conflicts. Users can also build customized data extraction forms and customized risk of bias assessments for each review. Covidence can also generate PRISMA flow diagrams, which are required by the PRISMA reporting guideline for SRs (Page et al., 2021). While no mobile application is available for Covidence, researchers can still easily screen citations in a mobile web browser. However, Covidence does come with a cost: subscriptions may be purchased for a single review, a small package of reviews, or organizational use. Covidence can also interface with some of the other applications listed here, including GRADEPro and RevMan.

For reviews that may require narrative synthesis or meta-ethnography, EPPI-Reviewer Web (Thomas et al., 2022) is a technological application that can facilitate screening as well as the complex coding structures, including reconciliation between coders, needed for narrative synthesis. The EPPI-Centre also offers an application, EPPI-Mapper (Digital Solution Foundry and EPPI-Centre, 2022), to help visually map gaps in the evidence and to present figures demonstrating the strength and quantity of evidence along a particular dimension of the phenomenon in question. EPPI-Reviewer is another Cochrane-preferred application and offers monthly subscriptions either for an individual user or for a shareable review.

The Joanna Briggs Institute (JBI), another international knowledge synthesis organization, offers a program called JBI SUMARI to facilitate multiple phases of the SR process. This web-based application can facilitate the protocol writing process, in addition to the importing of citations, study screening, risk of bias assessment, and data extraction. JBI SUMARI can also create forest plots and other charts, as well as support report writing. JBI SUMARI supports different types of SRs, including, but not limited to, effectiveness, scoping, qualitative, umbrella, and mixed methods reviews. Like Covidence and EPPI-Reviewer Web, JBI SUMARI is a paid subscription service, with individual- or organization-level subscriptions.

PICO Portal (2022) is another multi-phase application with capabilities similar to those described above. Title/abstract and full-text review are offered through this application, along with customizable data extraction. PICO Portal also provides one free review with unlimited reviewers and collaborators, so new users can trial the application before purchasing a paid team license. In addition, PICO Portal offers a project dashboard to view progress, PRISMA flow chart generation, and the ability to invite multiple project team members.

Many of the applications previously mentioned do not provide support for report writing or meta-analysis. RevMan offers both web-based software (RevMan Web) and a downloadable application (RevMan 5). RevMan can facilitate writing the protocol and final report and performing meta-analysis. RevMan can also help create summary of findings tables, risk of bias graphs, and additional tables. Although specifically crafted for Cochrane SRs, this application can help other teams organize reports, and it integrates with other commonly used SR software such as Covidence.

GRADEPro is another application that can help with the development of evidence tables as well as with appraising the overall certainty of evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology. While individual risk of bias tools can help quantify quality assessments for individual studies, GRADE uses a structured framework to evaluate the overall quality or certainty of a body of evidence and the overall effect estimate (Schunemann et al., 2013). This methodology considers factors that can reduce the certainty of evidence (e.g., study limitations, inconsistency of results, indirectness of evidence, imprecision of results, and publication bias) and factors that can increase the certainty of evidence (Schunemann et al., 2013). GRADEPro allows for the development of summary of findings tables and evidence profiles and can extend SR work into recommendation development for clinical practice guidelines.

Fortunately, given the rapid growth of technological applications for the management of SRs, the SR team has the opportunity to review multiple applications and decide which one meets their needs and budget.

Single-phase applications

Table 2 compares technological applications designed to facilitate a single phase of an SR, including key features, limitations, costs, and the format(s) in which they are offered. Note that many of the multi-phase applications previously mentioned can also be used to facilitate a single phase of an SR.

TABLE 2.

Review of single-phase tools for systematic reviews

For each tool, the phase(s) of the SR supported, key features, limitations, cost, and format(s) are listed.

PubMed PICO
  Phases supported: Preliminary searches.
  Key features: Can enter search terms in the PICO format and select ideal publication type.
  Limitations: SRs would require details of the search for reproducibility; can only select one publication type.
  Cost: Free.
  Format: Web-based online tool and mobile application.

Yale MeSH Analyzer
  Phases supported: Search strategy.
  Key features: Can enter PMIDs of relevant articles and compare common MeSH terms.
  Limitations: Unable to save or export work.
  Cost: Free.
  Format: Web-based online tool.

Rayyan
  Phases supported: Study screening (title/abstract and full-text phases).
  Key features: Can import search files and blind decisions to reviewers; can upload PDF files and re-export files to reference managers.
  Limitations: Must create a new “review” for each phase (e.g., one for title/abstract, then one for full text); no generation of PRISMA flow diagrams.
  Cost: Free for web-based access; offers a mobile application and additional support for student, professional, team, and enterprise memberships.
  Format: Web-based online tool and mobile application.

REDCap
  Phases supported: Data extraction.
  Key features: Allows customization of data extraction forms.
  Limitations: Time-intensive to create the structure of the form.
  Cost: Non-profit organizations can join the REDCap consortium; otherwise, third-party companies offer fee-based hosting.
  Format: Web-based online tool and mobile application.

robvis
  Phases supported: Risk of bias assessment (visualization).
  Key features: Visualizes risk of bias assessments performed as part of an SR with common risk of bias tools.
  Limitations: Requires customization for less common risk of bias visualizations.
  Cost: Free.
  Format: Web-based online tool.

Qualitative data analysis software (e.g., NVivo, MAXQDA, ATLAS.ti, Dedoose)
  Phases supported: Data extraction or analysis.
  Key features: Can create a coding scheme; helpful for narrative synthesis.
  Limitations: Challenging to capture data in figures or tables; difficult to ensure coding was correctly applied; some programs are unable to blind coding to other reviewers.
  Cost: Institutional or individual subscriptions.
  Format: Web-based online tools or downloadable programs; some have mobile applications.

OpenMeta[Analyst]
  Phases supported: Meta-analysis.
  Key features: Offers meta-analysis of binary and continuous data with several types of fixed- and random-effects methods.
  Limitations: Not offered in a cloud version; must be downloaded.
  Cost: Free.
  Format: Installed program.

Statistical software (e.g., R, STATA, SPSS)
  Phases supported: Meta-analysis.
  Key features: Commonly used statistical software can have meta-analysis packages or commands to calculate effect estimates.
  Limitations: Cost; learning the programming needed to operate.
  Cost: Institutional or individual subscriptions; R is free.
  Format: Installed programs.

Database search

There are tools available to assist with literature searches, including the freely available PubMed PICO question tool from the National Library of Medicine. Entering search terms in the PICO question format and quickly filtering by type of evidence make this tool valuable for preliminary reviews of the literature. However, the PubMed PICO question tool and other similar tools have a limited role in systematic reviews because they offer limited support for synonyms and MeSH terms and do not produce the documentation required for a reproducible search.
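When a full SR search is run, documenting the exact query and retrieval date supports reproducibility. As a minimal sketch of how this might be scripted, assuming the National Library of Medicine’s public E-utilities interface and an illustrative query string that is not taken from any particular review, a PubMed search can be run and logged in a few lines of Python:

    # A minimal sketch: run a Boolean PubMed search via NCBI E-utilities and
    # record the exact query and date for reproducibility. The query below is
    # purely illustrative.
    import datetime
    import json
    import urllib.parse
    import urllib.request

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    query = '("heart failure"[Title/Abstract] OR "Heart Failure"[MeSH]) AND "Patient Readmission"[MeSH]'

    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": query, "retmax": 200, "retmode": "json"}
    )
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as response:
        result = json.load(response)["esearchresult"]

    # Log the search so it can be reported and rerun exactly.
    print("Search date:", datetime.date.today().isoformat())
    print("Query:", query)
    print("Records found:", result["count"])
    print("First PMIDs:", result["idlist"][:10])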

With respect to the literature search, reference managers such as EndNote, RefWorks, Mendeley, and Zotero are instrumental. In particular, reference managers can be used to collate the articles obtained from multiple databases and literature search strategies. In addition, they can quickly identify duplicates in the collated articles for removal, but authors should take care in reviewing and removing software-identified duplicates for accuracy.
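Reference managers perform duplicate detection automatically; as a rough sketch of the underlying idea, assuming illustrative file and column names, records exported from several databases can be collated and flagged as likely duplicates by DOI or normalized title before a reviewer verifies each match:

    # A rough sketch of duplicate detection across exported search results.
    # Matching on DOI or a normalized title is a heuristic; flagged pairs
    # should still be verified by a reviewer before removal.
    import csv
    import re

    def normalize_title(title):
        """Lowercase a title and strip punctuation/whitespace for comparison."""
        return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

    records = []
    for filename in ["pubmed_export.csv", "cinahl_export.csv"]:  # illustrative file names
        with open(filename, newline="", encoding="utf-8") as handle:
            records.extend(csv.DictReader(handle))

    seen, unique, flagged = {}, [], []
    for record in records:
        key = (record.get("DOI") or "").lower().strip() or normalize_title(record.get("Title") or "")
        if key and key in seen:
            flagged.append((seen[key], record))  # candidate duplicate pair
        else:
            seen[key] = record
            unique.append(record)

    print(f"{len(records)} records collated, {len(flagged)} candidate duplicates flagged")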

One application that can help with crafting the search strategy is the Yale MeSH Analyzer (Grossetta Nardini & Wang, 2022), a web-based application that prompts the user to enter the PubMed identification numbers (PMIDs) of relevant articles and then generates a table of each article’s PubMed Medical Subject Headings (MeSH) and author-identified keywords. If the research team identifies in advance key articles that should be included in the SR, the Yale MeSH Analyzer can be used to identify relevant MeSH headings and keywords to include in the search strategy. The Yale MeSH Analyzer is currently restricted to articles indexed in PubMed but is useful if PubMed (or Ovid MEDLINE) is the primary database used to craft the search strategy.
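The Yale MeSH Analyzer is a web form, but the same idea can be sketched in a short script, assuming placeholder PMIDs and the E-utilities efetch endpoint: the MeSH headings assigned to a set of known key articles are retrieved and tallied, and headings shared by several key articles become candidate terms for the search strategy.

    # A sketch of the idea behind a MeSH analysis: fetch the MeSH headings
    # assigned to known key articles and count how often each appears.
    # The PMIDs below are placeholders, not articles cited in this paper.
    import collections
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
    key_pmids = ["12345678", "23456789"]  # placeholder PMIDs of key articles

    params = urllib.parse.urlencode(
        {"db": "pubmed", "id": ",".join(key_pmids), "retmode": "xml"}
    )
    with urllib.request.urlopen(f"{EFETCH}?{params}") as response:
        tree = ET.parse(response)

    counts = collections.Counter(
        descriptor.text
        for descriptor in tree.iter("DescriptorName")  # MeSH descriptors in PubMed XML
    )

    # Headings shared by several key articles are strong candidates for the search.
    for heading, count in counts.most_common(10):
        print(f"{count}x  {heading}")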

Screening

To ensure the rigor of the SR, screening should be done by at least two team members independently reviewing citations against the inclusion and exclusion criteria in both the title/abstract and full-text screening phases. Rayyan (Ouzzani et al., 2016), a mobile and web-based application, allows users to import citations from reference managers for study screening. Rayyan can also be used to identify and remove duplicate articles. Users can blind their decisions on articles from other reviewers to ensure independent review, identify conflicts that need to be resolved, identify and highlight keywords within the articles, and export selected articles back to reference managers. One drawback of this freely available tool is that, at the time of this publication, the title/abstract and full-text screening phases have to be conducted as separate projects within Rayyan, as it does not seamlessly take the user between screening phases. The multi-phase tools mentioned earlier could also be used to screen articles independently of their other functionality.
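Screening applications track conflicts automatically; as a simple sketch of what that step involves, assuming each reviewer has exported a decision file with illustrative column names, the two sets of independent decisions can be compared record by record and disagreements routed for discussion:

    # A simple sketch of identifying screening conflicts between two reviewers.
    # Assumes each reviewer exported a CSV with "record_id" and "decision"
    # columns; these names are illustrative.
    import csv

    def load_decisions(path):
        with open(path, newline="", encoding="utf-8") as handle:
            return {
                row["record_id"]: row["decision"].strip().lower()
                for row in csv.DictReader(handle)
            }

    reviewer_a = load_decisions("screening_reviewer_a.csv")
    reviewer_b = load_decisions("screening_reviewer_b.csv")

    conflicts = [
        record_id
        for record_id in sorted(set(reviewer_a) & set(reviewer_b))
        if reviewer_a[record_id] != reviewer_b[record_id]
    ]

    print(f"{len(conflicts)} records need discussion:", conflicts[:20])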

Data extraction

Data extraction from the included articles is another step of the SR process that should be completed independently by at least two team members to ensure the accuracy of the extracted data. Currently, there is a lack of sophisticated standalone data extraction tools specific to SRs. Options for data extraction include creating forms for data extraction or for quality and risk of bias assessment in Microsoft® Excel for researchers to complete independently, or using REDCap, a secure web application for managing research data (Harris et al., 2009; Vanderbilt University, 2022). REDCap can be used to develop and complete data extraction forms, serve as an SR data storage platform, and facilitate data exports to Microsoft® Excel for comparison of results and further analysis, but it requires significant effort to set up a project for data extraction. Qualitative data analysis software, such as NVivo (QSR International, 2021), ATLAS.ti (2022), Dedoose (2022), or MAXQDA (VERBI GmbH, 2022), can also be used for SR data extraction. Specifically, researchers can use qualitative data analysis software to create a coding framework that mirrors the components of the planned data extraction. After coding the articles, reports of all the coded data can be generated and downloaded for further analysis.
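As a brief sketch of piloting a structured extraction form outside a dedicated tool, assuming an illustrative field list that a team would refine on a small subset of included articles, a blank template can be generated for each reviewer to complete independently:

    # A sketch of a structured data extraction template written to a spreadsheet
    # file. The field list is illustrative; each reviewer completes a copy
    # independently, and the completed copies are later merged and compared.
    import csv

    EXTRACTION_FIELDS = [
        "record_id", "first_author", "year", "country", "study_design",
        "sample_size", "population", "intervention", "comparison",
        "outcomes_measured", "key_findings", "funding_source", "extractor_notes",
    ]

    with open("extraction_form_template.csv", "w", newline="", encoding="utf-8") as handle:
        csv.writer(handle).writerow(EXTRACTION_FIELDS)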

Risk of bias assessment

Often, risk of bias assessments for each individual article are presented visually in color-coded figures. The robvis web application (McGuinness & Higgins, 2021) allows for easy development of visualizations of risk of bias assessments. Researchers can use common risk of bias tools, including the Cochrane revised risk of bias tool (RoB 2; Sterne et al., 2019), ROBINS-I (Sterne et al., 2016), QUADAS-2 (Whiting et al., 2011), and QUIPS (Hayden et al., 2013), or customize the visualization for other risk of bias tools. As mentioned earlier, Microsoft® Excel could also be used for this purpose.
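robvis itself is delivered as an R package and web application; as a generic, hedged sketch of the same kind of traffic-light figure in Python (the studies, domains, and judgments shown are placeholders, and this is not the robvis implementation), a color-coded grid can be drawn directly with matplotlib:

    # A generic sketch of a traffic-light risk of bias figure (not robvis itself).
    # Studies, domains, and judgments below are illustrative placeholders.
    import matplotlib.pyplot as plt
    from matplotlib.colors import ListedColormap

    studies = ["Study A", "Study B", "Study C"]
    domains = ["Randomization", "Deviations", "Missing data", "Measurement", "Reporting"]
    # 0 = low risk, 1 = some concerns, 2 = high risk
    judgments = [
        [0, 0, 1, 0, 0],
        [1, 2, 1, 0, 1],
        [0, 1, 0, 2, 0],
    ]

    colors = ListedColormap(["#2e7d32", "#f9a825", "#c62828"])  # green / yellow / red
    fig, ax = plt.subplots(figsize=(6, 2.5))
    ax.imshow(judgments, cmap=colors, vmin=0, vmax=2, aspect="auto")
    ax.set_xticks(range(len(domains)))
    ax.set_xticklabels(domains, rotation=30, ha="right")
    ax.set_yticks(range(len(studies)))
    ax.set_yticklabels(studies)
    ax.set_title("Risk of bias by domain (illustrative)")
    fig.tight_layout()
    fig.savefig("risk_of_bias_grid.png", dpi=200)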

Data analysis

If a team determines that the studies included in their review have homogeneous outcomes, similar research participants, and interventions that allow for pooling of data, meta-analysis will produce a summary estimate of the effect. Reviewing guidance for meta-analysis at the protocol phase can help in judging whether quantitative synthesis is appropriate (Seidler et al., 2019). Many commonly used statistical programs, such as RevMan, STATA, SPSS, and R, offer meta-analysis functionality. For those without access to, or familiarity with, statistical software, OpenMeta[Analyst] (Wallace et al., 2012) is another option for performing meta-analysis and is free to download.
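As a worked illustration of the core calculation these packages perform, assuming invented effect estimates and standard errors, a fixed-effect inverse-variance pooled estimate can be computed directly; a real analysis would also consider random-effects models, heterogeneity statistics, and forest plots.

    # A worked sketch of a fixed-effect inverse-variance pooled estimate.
    # Effect estimates and standard errors below are invented for illustration.
    import math

    # (effect estimate, standard error) per study, e.g., log odds ratios
    studies = [(-0.35, 0.12), (-0.10, 0.20), (-0.42, 0.15)]

    weights = [1 / se**2 for _, se in studies]                     # inverse-variance weights
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    lower, upper = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"Pooled log OR: {pooled:.3f} (95% CI {lower:.3f} to {upper:.3f})")
    print(f"Pooled OR: {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(lower):.2f} to {math.exp(upper):.2f})")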

If quantitative synthesis is not possible, narrative synthesis can be reported using SWiM (Synthesis Without Meta-analysis) reporting guidelines (Campbell et al., 2020). Qualitative analysis software may be helpful for narrative synthesis to identify themes and guide the interpretation of studies.

DISCUSSION

In this article, we described many different technologies that can help manage SR logistics, reduce the potential for error, and enhance rigor in the SR process. When reviewing these applications, individual researchers and teams should consider the key features needed for their SR and the resources available to them. While multi-phase applications may seem like the obvious choice for SRs, those with limited financial resources may opt to combine several free, single-phase applications or take advantage of a free trial offered by some of the applications.

While the list of technological applications we included was not exhaustive, it provides a comprehensive overview of the more commonly used applications. Given that technology is always evolving, we suggest checking the SR Toolbox (Marshall et al., 2022) for the latest list of technological applications that may meet specific needs.

Many of the applications described herein offer free trials where researchers can test applications to identify which is most appropriate for their review and compatible with their institution’s preferred software. Most of the applications have video tutorials available on YouTube that review key features, give tips on use, and answer frequently asked questions. As an additional benefit, many of the multi-phase applications have community functions (e.g., discussion boards), methodological resources, and traditional customer support functions.

All technological applications have some weaknesses. When selecting applications, the key is to be aware of the weaknesses so that strategies can be employed to offset issues that could arise. For example, if teams opt to use Excel for any part of an SR, they could consider locking the parts of the worksheets that should not be edited to avoid accidental changes and ensure data accuracy.
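As a hedged sketch of that safeguard, assuming illustrative file, sheet, and column names, the third-party openpyxl library can lock a screening worksheet so that imported citation data cannot be overwritten while the decision columns remain editable:

    # A sketch of protecting an Excel screening sheet so that imported citation
    # data cannot be overwritten; only the decision/notes columns stay editable.
    # File, sheet, and column names are illustrative.
    from openpyxl import load_workbook
    from openpyxl.styles import Protection

    workbook = load_workbook("title_abstract_screening.xlsx")
    sheet = workbook["Screening"]

    EDITABLE_COLUMNS = {"E", "F"}  # e.g., reviewer decision and notes

    for row in sheet.iter_rows(min_row=2):        # skip the header row
        for cell in row:
            locked = cell.column_letter not in EDITABLE_COLUMNS
            cell.protection = Protection(locked=locked)

    sheet.protection.sheet = True                  # turn on worksheet protection
    workbook.save("title_abstract_screening_locked.xlsx")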

Machine learning and natural language processing are prominent emerging areas of innovation in managing SRs. They are being used to assist with the screening of abstracts in applications such as Research Screener (Chai et al., 2021) and DistillerAI (Gartlehner et al., 2019). These applications show promise in reducing the time-intensive step of reviewing thousands of titles and abstracts for eligibility. Some of the applications mentioned above, like Rayyan (Ouzzani et al., 2016), are trialing machine learning and natural language processing features as well. Currently, the time saved by machine-assisted screening comes with an increased risk of missing potentially relevant records. Researchers interested in these technologies should proceed with caution and consider semi-automation as a compromise to reduce the screening workload (Gates et al., 2019).
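As a generic, hedged sketch of the underlying idea, and not the algorithm of any tool named above, a simple text classifier trained on abstracts the team has already screened can rank the remaining abstracts so that likely relevant records are reviewed first, with humans still making every final decision:

    # A generic sketch of screening prioritization (not the algorithm of any
    # specific tool named in this article). Requires the third-party
    # scikit-learn package; the abstract strings below are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Abstracts already screened by the team (text, 1 = include, 0 = exclude).
    screened_texts = ["abstract one ...", "abstract two ...", "abstract three ...", "abstract four ..."]
    screened_labels = [1, 0, 0, 1]
    unscreened_texts = ["abstract five ...", "abstract six ..."]

    vectorizer = TfidfVectorizer(stop_words="english")
    X_train = vectorizer.fit_transform(screened_texts)
    X_new = vectorizer.transform(unscreened_texts)

    model = LogisticRegression(max_iter=1000).fit(X_train, screened_labels)
    relevance = model.predict_proba(X_new)[:, 1]   # estimated probability of inclusion

    # Present the unscreened abstracts to reviewers in descending likelihood.
    for score, text in sorted(zip(relevance, unscreened_texts), reverse=True):
        print(f"{score:.2f}  {text[:60]}")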

Other potential challenges with SRs include navigating collaborations between multiple researchers, document management, and general project management. Some of the applications mentioned provide functionality that may address challenges faced by larger SR teams operating at multiple institutions. While there are applications designed to facilitate the management of large research projects, they are not covered here as they are not targeted to the SR process.

SRs are a complex form of research that necessitate technological applications to ensure a high level of rigor is maintained while prioritizing speed. Since SRs are often used to inform policy, clinical practice guidelines, and to identify research priorities, researchers should use the technology available to enhance their work.

Linking Evidence to Action

  • The advantages of using technological applications to support the SR process include enhancing rigor through record keeping, facilitating impartial review of potential articles for inclusion, and assisting the researcher in evidence synthesis.

  • Researchers should become familiar with the advantages and disadvantages of various tools, along with the steps they support within the SR process.

  • Researchers should select a combination of single-phase tools or multi-phase tools to support their SR, depending on the SR scope, budget, desired features, and team logistics.

  • Researchers should explore and consider new tools or new functionalities of existing tools as they are developed.

REFERENCES

  1. Aromataris E, & Munn Z (Eds.). (2020). JBI manual for evidence synthesis. https://synthesismanual.jbi.global
  2. ATLAS.ti. (2022). ATLAS.ti. https://atlasti.com/
  3. Booth A, Noyes J, Flemming K, Moore G, Tunçalp Ö, & Shakibazadeh E (2019). Formulating questions to explore complex interventions within qualitative evidence synthesis supplemental material: Rapid review of existing question formulation frameworks. BMJ Global Health, 4(Suppl 1), e001107. 10.1136/bmjgh-2018-001107
  4. Borah R, Brown AW, Capers PL, & Kaiser KA (2017). Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry. BMJ Open, 7(2), e012545. 10.1136/bmjopen-2016-012545
  5. Campbell M, McKenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, Hartmann-Boyce J, Ryan R, Shepperd S, Thomas J, Welch V, & Thomson H (2020). Synthesis without meta-analysis (SWiM) in systematic reviews: Reporting guideline. BMJ (Clinical research ed.), 368, l6890. 10.1136/bmj.l6890
  6. Chai K, Lines R, Gucciardi DF, & Ng L (2021). Research screener: A machine learning tool to semi-automate abstract screening for systematic reviews. Systematic Reviews, 10(1), 93. 10.1186/s13643-021-01635-3
  7. Cochrane Community. (n.d.). Covidence. https://community.cochrane.org/help/tools-and-software/covidence
  8. Dedoose. (2022). Dedoose: Great research made easy. https://www.dedoose.com/
  9. Digital Solution Foundry and EPPI-Centre. (2022). EPPI-Mapper, version 1.2.2. https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3790
  10. Evidence Partners. (2022). DistillerSR. https://www.evidencepartners.com/products/distillersr-systematic-review-software
  11. Gartlehner G, Wagner G, Lux L, Affengruber L, Dobrescu A, Kaminski-Hartenthaler A, & Viswanathan M (2019). Assessing the accuracy of machine-assisted abstract screening with DistillerAI: A user study. Systematic Reviews, 8(1), 277. 10.1186/s13643-019-1221-3
  12. Gates A, Guitard S, Pillay J, Elliott SA, Dyson MP, Newton AS, & Hartling L (2019). Performance and usability of machine learning for screening in systematic reviews: A comparative evaluation of three tools. Systematic Reviews, 8(1), 278. 10.1186/s13643-019-1222-2
  13. Grant MJ, & Booth A (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91–108. 10.1111/j.1471-1842.2009.00848.x
  14. Grossetta Nardini HK, & Wang L (2022). The Yale MeSH Analyzer. https://mesh.med.yale.edu/
  15. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, & Conde JG (2009). Research electronic data capture (REDCap): A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42, 377–381. 10.1016/j.jbi.2008.08.010
  16. Hayden JA, van der Windt DA, Cartwright JL, Côté P, & Bombardier C (2013). Assessing bias in studies of prognostic factors. Annals of Internal Medicine, 158(4), 280–286. 10.7326/0003-4819-158-4-201302190-00009
  17. Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, & Welch V (Eds.). (2021). Cochrane handbook for systematic reviews of interventions. https://training.cochrane.org/handbook/current
  18. Marshall C, Sutton A, O’Keefe H, & Johnson E (Eds.). (2022). The systematic review toolbox. http://www.systematicreviewtools.com/
  19. Marshall IJ, & Wallace BC (2019). Toward systematic review automation: A practical guide to using machine learning tools in research synthesis. Systematic Reviews, 8(1), 163. 10.1186/s13643-019-1074-9
  20. McGuinness LA, & Higgins J (2021). Risk-of-bias VISualization (robvis). https://mcguinlu.shinyapps.io/robvis/
  21. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA, & PRISMA-P Group. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4(1), 1. 10.1186/2046-4053-4-1
  22. Munn Z, Stern C, Aromataris E, Lockwood C, & Jordan Z (2018). What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Medical Research Methodology, 18(1), 5. 10.1186/s12874-017-0468-4
  23. National Institute for Health Research. (n.d.). PROSPERO international prospective register of systematic reviews. https://www.crd.york.ac.uk/prospero/
  24. Ouzzani M, Hammady H, Fedorowicz Z, & Elmagarmid A (2016). Rayyan—A web and mobile app for systematic reviews. Systematic Reviews, 5(1), 210. 10.1186/s13643-016-0384-4
  25. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hróbjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, … Moher D (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ (Clinical research ed.), 372, n71. 10.1136/bmj.n71
  26. PICO Portal. (2022). Introducing PICO portal. https://picoportal.org/
  27. QSR International Pty Ltd. (2021). NVivo. https://www.qsrinternational.com/nvivo-qualitative-data-analysis-software/home
  28. Schunemann H, Brozek J, Guyatt G, & Oxman A (Eds.). (2013). Handbook for grading the quality of evidence and the strength of recommendations using the GRADE approach. https://gdt.gradepro.org/app/handbook/handbook.html
  29. Seidler AL, Hunter KE, Cheyne S, Ghersi D, Berlin JA, & Askie L (2019). A guide to prospective meta-analysis. BMJ (Clinical research ed.), 367, l5342. 10.1136/bmj.l5342
  30. Sterne J, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, Cates CJ, Cheng HY, Corbett MS, Eldridge SM, Emberson JR, Hernán MA, Hopewell S, Hróbjartsson A, Junqueira DR, Jüni P, Kirkham JJ, Lasserson T, Li T, … Higgins J (2019). RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ (Clinical research ed.), 366, l4898. 10.1136/bmj.l4898
  31. Sterne JA, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, Henry D, Altman DG, Ansari MT, Boutron I, Carpenter JR, Chan AW, Churchill R, Deeks JJ, Hróbjartsson A, Kirkham J, Jüni P, Loke YK, Pigott TD, … Higgins JP (2016). ROBINS-I: A tool for assessing risk of bias in non-randomised studies of interventions. BMJ (Clinical research ed.), 355, i4919. 10.1136/bmj.i4919
  32. Thomas J, Graziosi S, Brunton J, Ghouze Z, O’Driscoll P, & Bond M (2022). EPPI-Reviewer: Advanced software for systematic reviews, maps and evidence synthesis. http://eppi.ioe.ac.uk/cms/Default.aspx?alias=eppi.ioe.ac.uk/cms/er4
  33. Vanderbilt University. (2022). REDCap (Research Electronic Data Capture). https://www.project-redcap.org
  34. VERBI GmbH. (2022). MAXQDA. https://www.maxqda.com/
  35. Wallace BC, Dahabreh IJ, Trikalinos TA, Lau J, Trow P, & Schmid CH (2012). Closing the gap between methodologists and end-users: R as a computational back-end. Journal of Statistical Software, 49(5), 1–15. 10.18637/jss.v049.i05
  36. Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, Leeflang MM, Sterne JA, Bossuyt PM, & QUADAS-2 Group. (2011). QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine, 155(8), 529–536. 10.7326/0003-4819-155-8-201110180-00009
