Published in final edited form as: Res Synth Methods. 2022 Aug 26;14(2):173–179. doi: 10.1002/jrsm.1598

Limits in the search date for rapid reviews of diagnostic test accuracy studies

Luis Furuya-Kanamori 1, Lifeng Lin 2,3, Polychronis Kostoulas 4, Justin Clark 5, Chang Xu 6
PMCID: PMC9922791  NIHMSID: NIHMS1840154  PMID: 36054082

Abstract

Limiting the search date is a common approach utilised in therapeutic/interventional rapid reviews. Yet the accuracy of pooled estimates is unknown when this approach is applied to rapid reviews of diagnostic test accuracy studies. Data from all systematic reviews of diagnostic test accuracy studies published in the Cochrane Database of Systematic Reviews until February 2022 were collected. Meta-analyses with at least five studies were included, and rapid reviews were emulated by limiting the search to the recent 1, 2, 3, 4, 5, 10, 15, 20, 25, 30, 35 and 40 years. The magnitude of the pooled area under the curve (AUC), sensitivity and specificity for the full meta-analysis and the rapid reviews were compared. A total of 846 diagnostic meta-analyses were included. When the search date was limited to the recent 10 and 15 years, more than 75% and 80% of meta-analyses, respectively, presented less than 5% difference between the pooled AUC, sensitivity and specificity of the full meta-analysis and the rapid review. There was little gain in the precision of the pooled estimates when the emulated rapid reviews included more than 15 years in the search. Rapid reviews restricted by search date are a valid and reliable approach for diagnostic test accuracy studies. Robust evidence can be achieved by restricting the search date to the recent 10–15 years. Future studies need to examine the reduction in workload and time to complete rapid reviews under different search date limits.

Keywords: accuracy, bias, diagnosis, meta-analysis, rapid approach, synthesis

1 |. INTRODUCTION

Research synthesis (i.e., systematic review and meta-analysis) is the cornerstone of evidence-based medicine.1 Systematic reviews and meta-analyses synthesise all available evidence pertinent to a topic in a comprehensive and transparent manner, and thus can resolve seemingly contradictory research findings. Given the information explosion of recent decades, with over 10,000 publications indexed per day in Scopus, acquiring and synthesising all relevant evidence on a topic has become a resource-intensive and time-consuming endeavour. As a result, it is estimated that a well-conducted systematic review can take 6 months to 2 years and up to $200,000 to complete.2 Such long timelines are often not practical, as the evidence may already be outdated by the time it is published, especially during an outbreak or pandemic, when clinicians need timely evidence for decision-making.

Currently, two approaches to speed up evidence synthesis are being investigated, with varying degrees of success. The first uses advances in technology (e.g., data mining, machine learning and natural language processing), with a focus on semi-automation of systematic review tasks (e.g., the search strategy, screening process and data extraction), to accelerate systematic review production by reducing the person-hours required.3 The second is to streamline (or omit) some of the processes to produce evidence in a resource- and time-efficient manner. Rapid reviews accomplish this by narrowing the research question, limiting the literature search date, searching only one database, or employing only one reviewer for screening, data extraction and quality appraisal.4

Tricco et al. found that 88% of researchers applied search limits by date of publication when conducting rapid reviews.5 Although limiting the literature search date is commonly applied, there are no guidelines on search date limits, and in practice the decision is arbitrary – some reviewers may limit the search to the most recent 5 years, while others may choose the most recent 10 or 15 years. Xu et al. conducted a large meta-epidemiological study with over 21,000 and 7000 meta-analyses of binary and continuous outcomes, respectively, and estimated the changes in effect estimates (i.e., odds ratio, risk difference, mean difference, standardised mean difference) of emulated rapid reviews, created by limiting the search date to the recent 40, 35, 30, 25, 20, 15, 10, 7, 5 and 3 years, compared with the full meta-analyses.6 They found that when the search was restricted to the recent 20 years, 80% of meta-analyses had a percentage change in pooled estimates of less than 5%, indicating good accuracy under this search limit and the feasibility of therapeutic/interventional rapid reviews.6

Although these results are promising for generating credible evidence from rapid reviews, they cannot be extrapolated to rapid reviews of diagnostic test accuracy studies. The meta-analytical methods and outputs in diagnostic meta-analyses are distinct from those in interventional meta-analyses, given the two-dimensional nature of the data (i.e., pairs of sensitivity [Sens] and specificity [Spec]).7 At the beginning of the COVID-19 pandemic, it was clear that besides preventive interventions and therapeutic options, there was also an urgent need for accurate screening and diagnostic tests.8 We hypothesise that rapid reviews of diagnostic test accuracy studies are a valid approach to generate evidence for urgent decision-making. Currently, there is no evidence on the impact of rapid review methods in the context of diagnostic test accuracy meta-analyses. Therefore, this study was conducted to investigate possible changes in the pooled estimates if the rapid review approach – limiting the search date – were applied under different scenarios.

2 |. METHODS

2.1 |. Data source

Data from all systematic reviews of diagnostic test accuracy studies published in the Cochrane Database of Systematic Reviews (CDSR) from 2003 (issue 1) to 2022 (issue 2) were extracted using R (the code is provided in Supporting Information S1). The .rm5 (Review Manager version 5) files of the CDSR contain the information of each systematic review in a standard format, including data on the individual studies in each meta-analysis – that is, the year of publication of each study, as well as the number of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). The .rm5 files were then converted into .csv files to facilitate data identification and analysis (R code provided in Supporting Information S2). The CDSR is currently the largest database of systematic reviews, and we have previously used it to examine measures of between-study heterogeneity,9 compare tests for publication bias,10,11 and evaluate rapid review approaches.6 We accessed the Cochrane Library through Florida State University, and the data were used for research purposes only.
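The authors' extraction and conversion code is provided in Supporting Information S1 and S2; purely as a rough illustration, the R sketch below shows how one converted meta-analysis might be read and per-study accuracy measures derived, assuming (hypothetically) that each .csv file holds one meta-analysis with one row per primary study and columns named study_year, TP, FP, FN and TN. These column names are assumptions, not the actual export format.

```r
# Hypothetical sketch only; the column names are assumed, not the actual export format.
read_dta_meta <- function(path) {
  dat <- read.csv(path, stringsAsFactors = FALSE)
  dat$sens <- dat$TP / (dat$TP + dat$FN)              # per-study sensitivity
  dat$spec <- dat$TN / (dat$TN + dat$FP)              # per-study specificity
  dat$dor  <- (dat$TP * dat$TN) / (dat$FP * dat$FN)   # per-study diagnostic odds ratio
  dat
}
```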

The analysis was restricted to meta-analyses with at least five studies. This restriction on the minimum number of studies was imposed to reduce the chance of variability in point estimates due to random error. Furthermore, for a topic with limited evidence (fewer than five studies), a rapid review is not necessary and a systematic review is more appropriate. Meta-analyses containing studies with zero events in two cells (e.g., FP and TN), for which the diagnostic odds ratio (DOR) could not be estimated, were excluded.
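A minimal R sketch of these two eligibility rules, assuming the hypothetical per-study data frames from the previous snippet:

```r
# Hypothetical sketch: at least five studies, and no study with zero counts in
# two of the four cells (in which case the DOR cannot be estimated).
eligible <- function(dat, min_studies = 5) {
  two_zero_cells <- rowSums(dat[, c("TP", "FP", "FN", "TN")] == 0) >= 2
  nrow(dat) >= min_studies && !any(two_zero_cells)
}

# e.g., keep only eligible meta-analyses from a list of per-study data frames:
# meta_list <- Filter(eligible, meta_list)
```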

2.2 |. Emulating rapid reviews of diagnostic accuracy studies

The rapid review approach of limiting the search date was emulated by sorting the studies in each meta-analysis by publication year in descending order and treating the publication year of the systematic review as the reference date. Twelve scenarios, each generating one rapid review, were emulated for each meta-analysis by limiting the included studies to those published in the 1, 2, 3, 4, 5, 10, 15, 20, 25, 30, 35 and 40 years prior to the reference date. A reanalysis including all studies (‘full meta-analysis’) was conducted to estimate the pooled area under the curve (AUC), Sens and Spec using the split component synthesis method12 under the inverse variance heterogeneity model.13 The same meta-analytic methods were applied to the 12 emulated rapid review scenarios for each meta-analysis. For comparison, the same procedure was replicated using the bivariate model.14
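The restriction step itself is simple to express. The sketch below (in R, not the Stata code actually used for the analysis) builds the 12 restricted study sets for one meta-analysis, assuming a study_year column and taking the review's publication year as ref_year; the exact boundary convention shown is an assumption.

```r
# Hypothetical sketch of the 12 emulated rapid-review scenarios for one meta-analysis.
limits <- c(1, 2, 3, 4, 5, 10, 15, 20, 25, 30, 35, 40)

emulate_rapid <- function(dat, ref_year, years = limits) {
  lapply(setNames(years, paste0("last_", years, "y")), function(k) {
    # keep only studies published within the most recent k years before the reference date
    dat[dat$study_year > ref_year - k, , drop = FALSE]
  })
}
```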

The magnitude and standard error of the estimates (i.e., DOR/AUC, Sens and Spec) for the full meta-analysis and the rapid reviews were computed. The absolute difference was estimated as |θ̂ − θ̂_r|, where θ̂ was the estimate from the full meta-analysis and θ̂_r was the estimate from the emulated rapid review, and was categorised into three levels of difference (<0.05 [minimal], 0.05–0.1 [moderate] and >0.1 [large]). The proportion of meta-analyses lost was estimated as (m − m_r)/m, where m was the total number of meta-analyses included in the analysis and m_r was the number of meta-analyses that could be estimated for each rapid review scenario. The proportion of studies lost was estimated as (n − n_r)/n, where n was the number of studies in the full meta-analysis and n_r was the number of studies in the emulated rapid review. The proportion of meta-analyses within each level of difference, and the proportions of meta-analyses and studies lost, were summarised for each rapid review scenario. Data management and analyses were conducted in Stata/MP 14.0 (Stata, College Station, TX) using the diagma package for the split component synthesis method15 and the midas package for the bivariate model.16
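As stated above, the analysis itself was run in Stata (diagma and midas); purely as an illustration of the comparison metrics, the R sketch below uses a small made-up example in which theta_full and theta_rapid hold pooled estimates (e.g., pooled sensitivity) for three full meta-analyses and one rapid-review scenario, with NA marking a meta-analysis that could not be pooled in that scenario.

```r
# Hypothetical toy data: pooled estimates from three full meta-analyses and the
# corresponding rapid-review scenario (NA = too few studies to pool).
theta_full  <- c(0.90, 0.82, 0.76)
theta_rapid <- c(0.88, 0.70, NA)

abs_diff <- abs(theta_full - theta_rapid)            # |theta_hat - theta_hat_r|
level <- cut(abs_diff,                               # boundary handling here is illustrative
             breaks = c(0, 0.05, 0.1, Inf),
             labels = c("minimal", "moderate", "large"),
             include.lowest = TRUE)

prop_ma_lost <- mean(is.na(theta_rapid))             # (m - m_r) / m
prop_studies_lost <- function(n_full, n_rapid) (n_full - n_rapid) / n_full  # (n - n_r) / n
```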

3 |. RESULTS

3.1 |. Selection of meta-analyses

As of 28 February 2022, there were 156 reviews of diagnostic test accuracy studies in the CDSR, containing 3530 meta-analyses. After the restrictions were applied, 2684 meta-analyses were excluded, mainly because they contained fewer than five studies. A total of 846 diagnostic meta-analyses (containing 12,784 studies) were included in the analysis (Figure 1).

FIGURE 1. Flow diagram for the selection of meta-analyses for the analysis

3.2 |. Changes in number of studies and pooled estimates

The median number of studies in the full meta-analyses was 9 (interquartile range [IQR] 6–17). The emulated rapid reviews included a median of 4 (IQR 3–9) to 6 (IQR 3–10) studies when the search date was restricted to 5 years or less. The median number of studies increased to 9 (IQR 6–16) when the recent 25 or more years were searched (Table 1).

TABLE 1.

Limits on the search date, the number of studies included per meta-analysis, and the resulting studies and meta-analyses lost

Years searched | Median number of studies per meta-analysis (IQR) | Number of studies included (% of studies lost) | Number of meta-analyses included (% of meta-analyses lost)
1 | 4 (3–9) | 1383 (89.2) | 188 (77.8)
2 | 5 (3–9) | 2359 (81.5) | 321 (62.1)
3 | 5 (3–8) | 3295 (74.2) | 450 (46.8)
4 | 5 (3–9) | 4226 (66.9) | 537 (36.5)
5 | 6 (3–10) | 5139 (59.8) | 595 (29.7)
10 | 7 (4–12) | 8324 (34.9) | 766 (9.5)
15 | 8 (5–14) | 10,120 (20.8) | 813 (3.9)
20 | 8 (6–15) | 11,233 (12.1) | 830 (1.9)
25 | 9 (6–16) | 11,822 (7.6) | 838 (1.0)
30 | 9 (6–16) | 12,068 (5.6) | 843 (0.4)
35 | 9 (6–16) | 12,200 (4.6) | 844 (0.2)
40 | 9 (6–16) | 12,281 (3.9) | 846 (0.0)
Full meta-analysis | 9 (6–17) | 12,784 | 846

Abbreviation: IQR, interquartile range.
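As a worked check of the loss formulas defined in Section 2.2, for the 15-year limit the proportion of studies lost is (12,784 − 10,120)/12,784 ≈ 20.8% and the proportion of meta-analyses lost is (846 − 813)/846 ≈ 3.9%, matching the values reported in Table 1.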

When the search was limited to the recent 1 and 2 years, more than 60% of meta-analyses could not be conducted due to an insufficient number of studies. As the number of years searched increased, the proportion of meta-analyses with minimal changes (i.e., <0.05 change in the point estimate) increased. Around half of the meta-analyses presented minimal changes in the pooled estimates when the search was limited to the recent 5 years. When the search date was limited to the recent 10 and 15 years, although 21%–35% of studies were lost, only 4%–10% of meta-analyses could not be conducted due to insufficient studies, and more than 75% and 80% of meta-analyses, respectively, presented minimal changes in the pooled estimates. There was little gain in the number of meta-analyses that could be conducted and in the precision of the pooled estimates when the emulated rapid reviews included more years (Figures 2–4 and Supporting Information S3). The standard errors of the DOR, Sens and Spec increased when the search date was limited to less than 5 years, and were highest when fewer than 2 years were searched. The standard errors decreased and remained constant when the number of years searched was between 10 and 40 (Supporting Information S4). On average, across the different numbers of years searched, 25% of meta-analyses did not converge with the bivariate model; among those that converged, results were consistent with the split component synthesis method (Supporting Information S5 and S6).

FIGURE 2. Change in pooled sensitivity by the number of years searched in the rapid review. ‘Minimal’ <0.05; ‘Moderate’ 0.05–0.1; ‘Large’ >0.1 change in the pooled estimate; ‘Lost of MA’ when the meta-analysis was not possible due to lack of studies. MA, meta-analysis

FIGURE 3. Change in pooled specificity by the number of years searched in the rapid review. ‘Minimal’ <0.05; ‘Moderate’ 0.05–0.1; ‘Large’ >0.1 change in the pooled estimate; ‘Lost of MA’ when the meta-analysis was not possible due to lack of studies. MA, meta-analysis

FIGURE 4. Change in pooled AUC by the number of years searched in the rapid review. ‘Minimal’ <0.05; ‘Moderate’ 0.05–0.1; ‘Large’ >0.1 change in the pooled estimate; ‘Lost of MA’ when the meta-analysis was not possible due to lack of studies. AUC, area under the curve; MA, meta-analysis

4 |. DISCUSSION

Using a large real-world dataset of Cochrane reviews, our study is the first to provide empirical evidence that restricting the search to the recent 10–15 years in rapid reviews of diagnostic test accuracy studies provides credible results in most cases. Limiting the search to the recent 10 years could be an alternative if the review authors are willing to accept that ~10% of pooled estimates may differ by 5%–10%. Restricting the search to less than 5 years is not recommended, owing to the large proportion of reviews that could not be conducted and the risk of imprecise pooled estimates. Expanding the search to more than 20 years yields little benefit in terms of the precision of the pooled estimates and the number of reviews that can be conducted.

A survey by Arevalo-Rodriguez et al. found that search limits were the most common approach used in diagnostic test accuracy rapid reviews, with 96% of reviewers reporting using this method.17 However, the search date limits applied by the reviewers (e.g., 5, 10 or 15 years) were not examined in the survey. The Cochrane Rapid Reviews Methods Group (CRRMG) has recently compiled recommendations for conducting interventional rapid reviews, but did not provide guidance on the number of years to be searched.18 The working group intends to adapt these recommendations to diagnostic test accuracy or screening rapid reviews. Although the CRRMG did not give specific guidance on date ranges, it did recommend date restrictions where there is a clinical or methodological justification. We agree that search date limits should be defined by content experts and by the topic of the review (e.g., in 2022, there is no benefit in expanding the search date from 3 to 10 years for diagnostic/screening tests for COVID-19). Our study should help in these decisions by providing guidance on suitable ranges of search date limits, which can be combined with the knowledge of content experts to set effective and appropriate limits. Another factor to consider when applying search date limits to meta-analyses of diagnostic accuracy studies is that older studies may be less relevant than newer ones to the current accuracy of a test for a specific disease, since changes in the pathogen and in population-level immune responses can occur over time and may affect test performance.

The implications of different search date limits for the workload need to be assessed in future studies. Our study revealed that the decline in the proportion of studies lost is not linear in relation to the number of years searched; instead, the decline is hyperbolic, indicating that most of the studies included in diagnostic meta-analyses have been published in recent years. If we assume that the reduction in rapid review workload from applying search limits comes from the smaller number of studies to be screened, quality assessed and extracted, while the workload needed for the analysis, interpretation and write-up of results remains the same, then a rapid review restricted to the recent 15 years, compared with one restricted to the recent 30 years, would not halve the workload. Instead, the reduction in workload may be in the range of 5%–10%, or even less.
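As a rough illustration (the assumption that study-level tasks account for about half of total review effort is ours, not an estimate from this study): per Table 1, a 15-year limit retains 10,120 studies versus 12,068 for a 30-year limit, that is, (12,068 − 10,120)/12,068 ≈ 16% fewer studies to screen, appraise and extract; if those tasks make up about half of the total effort, the overall saving is roughly 0.5 × 16% ≈ 8%, consistent with the 5%–10% range above.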

Another alternative that needs to be explored for reducing the time and workload required to produce synthesised evidence in a timely manner is the use of technology. Although fully automated tools are not yet reliable,19 teams with experience across all aspects of reviewing evidence, protected time to focus on the review, and semi-automated tools already adopted into their work practices can complete a systematic review in 2 weeks.20 Therefore, future studies could compare rapid reviews with searches limited to the recent 15 years against full systematic reviews using semi-automated tools for reviews of diagnostic test accuracy studies.

There were some limitations in our study. First, the true population parameter values were unknown, and thus the bias of the rapid review estimates could not be calculated. Second, we could not examine the impact of publication bias, as there are no reliable methods to assess it in meta-analyses of diagnostic test accuracy studies.21 Third, given that Sens and Spec do not have a null value, changes in ‘significance’ could not be assessed; instead, changes in the magnitude (<0.05, 0.05–0.1 and >0.1) of the estimates were investigated. Lastly, the ‘age’ of the review topic – for example, the accuracy of screening tests for COVID-19 (new topic) versus influenza (old topic) – was not considered; thus, reviewers’ judgement is needed to choose suitable search date limits.

In conclusion, rapid reviews restricted by search date are a valid and reliable approach for diagnostic test accuracy studies. Robust evidence can be achieved by restricting the search date to the recent 10–15 years. Future studies need to examine the reduction in workload and time to complete rapid reviews under different search date limits, as well as compare this approach against the use of semi-automated tools.

Supplementary Material

Appendix S1: Supporting information
Appendix S2: Supporting information
Appendix S3-S6: Supporting information


What is already known

  • Rapid reviews are widely employed to generate timely evidence and limiting the search date is among the most commonly used approaches.

  • Restricting the search date to the recent 20 years produces accurate results while substantially decreasing the workload in therapeutic/interventional rapid reviews.

What is new

  • Restricting the search to the recent 10–15 years in rapid reviews of diagnostic test accuracy studies provides credible results in more than 75% and 80% of meta-analyses, respectively.

Potential impact for research synthesis methods readers outside the authors’ field

  • As with therapeutic/interventional studies, we demonstrated that rapid reviews with a search date limit are a valid approach to synthesise credible evidence for diagnostic test accuracy studies.

ACKNOWLEDGMENT

Open access publishing facilitated by The University of Queensland, as part of the Wiley - The University of Queensland agreement via the Council of Australian University Librarians.

FUNDING INFORMATION

LFK was supported by Australian National Health and Medical Research Council Early Career Fellowships (APP1158469). LL was supported in part by the US National Institutes of Health/National Institute of Mental Health grant R03 MH128727 and the National Institutes of Health/National Library of Medicine grant R01 LM012982.

Footnotes

SUPPORTING INFORMATION

Additional supporting information can be found online in the Supporting Information section at the end of this article.

CONFLICT OF INTEREST

The authors do not have any conflicts of interest to declare.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available at https://www.cochranelibrary.com/cdsr/reviews. The R files used to (i) download the .rm5 files (S1) and (ii) export them as .csv files for analyses (S2) are available in the supplementary material.

REFERENCES

1. OCEBM Levels of Evidence Working Group. The Oxford Levels of Evidence 2. https://www.cebm.ox.ac.uk/resources/levels-ofevidence/ocebm-levels-of-evidence. Accessed March 2022.
2. Borah R, Brown AW, Capers PL, Kaiser KA. Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry. BMJ Open. 2017;7:e012545.
3. Beller E, Clark J, Tsafnat G, et al. Making progress with the automation of systematic reviews: principles of the International Collaboration for the Automation of Systematic Reviews (ICASR). Syst Rev. 2018;7:77.
4. Hamel C, Michaud A, Thuku M, et al. Defining rapid reviews: a systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews. J Clin Epidemiol. 2021;129:74–85.
5. Tricco AC, Antony J, Zarin W, et al. A scoping review of rapid review methods. BMC Med. 2015;13:224.
6. Xu C, Ju K, Lin L, et al. Rapid evidence synthesis approach for limits on the search date: how rapid could it be? Res Synth Methods. 2022;13:68–76.
7. Honest H, Khan KS. Reporting of measures of accuracy in systematic reviews of diagnostic literature. BMC Health Serv Res. 2002;2:4.
8. Lisboa Bastos M, Tavaziva G, Abidi SK, et al. Diagnostic accuracy of serological tests for COVID-19: systematic review and meta-analysis. BMJ. 2020;370:m2516.
9. Ma X, Lin L, Qu Z, Zhu M, Chu H. Performance of between-study heterogeneity measures in the Cochrane Library. Epidemiology. 2018;29:821–824.
10. Lin L, Chu H, Murad MH, et al. Empirical comparison of publication bias tests in meta-analysis. J Gen Intern Med. 2018;33:1260–1267.
11. Furuya-Kanamori L, Xu C, Lin L, et al. P value-driven methods were underpowered to detect publication bias: analysis of Cochrane review meta-analyses. J Clin Epidemiol. 2020;118:86–92.
12. Furuya-Kanamori L, Kostoulas P, Doi SAR. A new method for synthesizing test accuracy data outperformed the bivariate method. J Clin Epidemiol. 2021;132:51–58.
13. Doi SA, Barendregt JJ, Khan S, Thalib L, Williams GM. Advances in the meta-analysis of heterogeneous clinical trials I: the inverse variance heterogeneity model. Contemp Clin Trials. 2015;45:130–138.
14. Reitsma JB, Glas AS, Rutjes AWS, Scholten RJPM, Bossuyt PM, Zwinderman AH. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. J Clin Epidemiol. 2005;58:982–990.
15. Furuya-Kanamori L, Doi SAR. DIAGMA: Stata Module for the Split Component Synthesis Method of Diagnostic Meta-Analysis; 2021.
16. Dwamena B. MIDAS: Stata Module for Meta-Analytical Integration of Diagnostic Test Accuracy Studies; 2009.
17. Arevalo-Rodriguez I, Steingart KR, Tricco AC, et al. Current methods for development of rapid reviews about diagnostic tests: an international survey. BMC Med Res Methodol. 2020;20:115.
18. Garritty C, Gartlehner G, Nussbaumer-Streit B, et al. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol. 2021;130:13–22.
19. Gates A, Vandermeer B, Hartling L. Technology-assisted risk of bias assessment in systematic reviews: a prospective cross-sectional evaluation of the RobotReviewer machine learning tool. J Clin Epidemiol. 2018;96:54–62.
20. Clark J, Glasziou P, Del Mar C, Bannach-Brown A, Stehlik P, Scott AM. A full systematic review was completed in 2 weeks using automation tools: a case study. J Clin Epidemiol. 2020;121:81–90.
21. Deeks JJ, Macaskill P, Irwig L. The performance of tests of publication bias and other sample size effects in systematic reviews of diagnostic test accuracy was assessed. J Clin Epidemiol. 2005;58:882–893.
