JAMA Oncol. 2019 Sep 5;5(11):1550–1555. doi: 10.1001/jamaoncol.2019.2564

Evaluation of Reproducible Research Practices in Oncology Systematic Reviews With Meta-analyses Referenced by National Comprehensive Cancer Network Guidelines

Cole Wayant,1 Matthew J Page,2 Matt Vassar3
PMCID: PMC6735674  PMID: 31486837

Key Points

Question

To what extent do clinically relevant oncology systematic reviews cited by National Comprehensive Cancer Network guidelines use reproducible research practices?

Findings

In this cross-sectional study of 154 oncology systematic reviews with meta-analyses comprising 3696 meta-analytic effect sizes, 2375 effect sizes (64.3%), including those from subgroup and sensitivity analyses, were reproducible in theory; the main driver of reproducibility was whether a meta-analysis was presented in a forest plot. Authors infrequently described how missing data were handled, and only 1 meta-analysis provided a link to a data set.

Meaning

An emphasis on the reporting of meta-analytic effects in forest plots and requirements for providing access to data sets would strengthen the reproducibility of oncology meta-analyses.


This cross-sectional study evaluates the use of reproducible research practices in oncology systematic reviews with meta-analyses cited by the National Comprehensive Cancer Network guidelines for the treatment of cancer by site.

Abstract

Importance

Reproducible research practices are essential to biomedical research because these practices promote trustworthy evidence. In systematic reviews and meta-analyses, reproducible research practices ensure that summary effects used to guide patient care are stable and trustworthy.

Objective

To evaluate the reproducibility in theory of meta-analyses in oncology systematic reviews cited by the 49 National Comprehensive Cancer Network (NCCN) guidelines for the treatment of cancer by site and evaluate whether Cochrane reviews or systematic reviews that report adherence to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines use more reproducible research practices.

Design, Setting, and Participants

A cross-sectional investigation of all systematic reviews with at least 1 meta-analysis and at least 1 included randomized clinical trial (RCT) that are cited by NCCN guidelines for treatment of cancer by site. We scanned the reference list of all NCCN guidelines (n = 49) for potential systematic reviews and meta-analyses. All retrieved studies were screened, and data were extracted, independently and in duplicate. The analysis was carried out between May 6, 2018, and January 28, 2019.

Main Outcomes and Measures

The frequency of reproducible research practices, defined as reporting of (1) an effect estimate and measure of precision (eg, hazard ratio with 95% confidence interval); (2) a clear list of the studies included in each analysis; and (3) for subgroup and sensitivity analyses, a clear indication of which studies were included in each group or level.

Results

We identified 1124 potential systematic reviews, of which 154 systematic reviews with meta-analyses, comprising 3696 meta-analytic effect size estimates, were included. Only 2375 of the 3696 meta-analytic estimates (64.3%), including subgroup and sensitivity analyses, were reproducible in theory. Forest plots appear to improve the reproducibility of meta-analyses. All meta-analytic estimates were reproducible in theory in 100 systematic reviews (64.9%), whereas in 15 systematic reviews (9.7%), no meta-analytic estimates could potentially be reproduced. Data were said to be imputed in 29 meta-analyses, but none specified which data. Only 1 meta-analysis included a link to an online data set.

Conclusions and Relevance

More reproducible research practices are needed in oncology meta-analyses, as suggested by this evaluation of those cited by NCCN guidelines. Reporting meta-analyses in forest plots and requiring full data sharing are recommended.

Introduction

Concerns are growing about the reproducibility of biomedical research.1,2 Many of these concerns stem from research practices that lack transparency, including poor reporting of study methodology3 and failure to make study data publicly available.4 As a result, efforts to reproduce biomedical research findings have been thwarted.5,6 Most efforts to reproduce research findings have been dedicated to primary studies, such as randomized clinical trials, and little effort has been dedicated to reproducing higher levels of evidence, such as systematic reviews. The first studies to holistically evaluate the reproducibility of systematic reviews and meta-analyses in the biomedical literature found that authors frequently fail to use reproducible research practices.4,7 However, only a small proportion of the systematic reviews with meta-analyses evaluated in previous investigations were of oncology interventions, leaving unanswered questions for researchers in this field, oncologists, and policy makers.

For this investigation of the reproducibility of oncology systematic reviews, we identified systematic reviews cited in National Comprehensive Cancer Network (NCCN) clinical practice guidelines. The NCCN set of guidelines is one of many available to oncologists; however, a survey of oncologists showed that NCCN guidelines were more likely to influence clinical practice than other popular oncology guidelines.8 Further, NCCN guidelines cover all blood and solid cancers, thus making them ideal for a broad investigation such as this. The primary objective of this investigation was to evaluate the reproducibility in theory of meta-analyses in oncology systematic reviews cited by the 49 NCCN guidelines for the treatment of cancer by site. The secondary objective was to evaluate whether Cochrane reviews or systematic reviews that report adherence to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines use more reproducible research practices.

Methods

The protocol for this investigation is publicly available via the Open Science Framework.9 We defined a systematic review according to the PRISMA for Protocols definition: articles that explicitly stated methods to identify studies (ie, a search strategy), explicitly stated methods of study selection (eg, eligibility criteria and selection process), and explicitly described methods of synthesis (or other type of summary).10 Because NCCN guidelines are updated regularly throughout each year, all guidelines were manually downloaded as PDFs on May 6, 2018, to avoid citations being added to the guidelines during the course of our investigation.11 To identify systematic reviews, we manually screened the reference lists and Discussion narratives of all NCCN clinical practice guidelines for the treatment of cancer. We extracted all references with “systematic review,” “meta-analysis,” or “metaanalysis” in the title, as well as any references without these keywords in the title that were discussed as a systematic review or meta-analysis by guideline authors. We also extracted any cited references that were published in the Cochrane Database of Systematic Reviews. All extracted references were added to a PubMed collection and exported to Rayyan12 for title and abstract screening.

We screened articles using the liberal acceleration method, whereby only 1 author (C.W.) was required to mark a record for inclusion but 2 authors (C.W. and M.J.P.) were required to mark a record for exclusion. Next, 2 authors (C.W. and M.J.P.) screened the full text of potentially relevant articles for inclusion. Key inclusion criteria were systematic reviews published in 2011 or later with at least 1 meta-analysis that included at least 1 randomized clinical trial. We included only systematic reviews published in 2011 or later to allow time for uptake of the 2009 PRISMA Statement; thus, all included systematic reviews were accountable to currently accepted reporting quality standards. Systematic reviews of individual patient data, systematic reviews of primary studies other than randomized clinical trials, network meta-analyses, and pooled analyses of randomized clinical trials were excluded.
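For illustration only, the liberal acceleration rule can be expressed as a short decision function. This is a sketch with hypothetical names and labels; the actual screening was performed in Rayyan, not in code.

```python
def screen_record(votes: list[str]) -> str:
    """Liberal acceleration: decide a record's fate from reviewer votes.

    One 'include' vote advances a record to full-text screening;
    exclusion requires both reviewers to vote 'exclude'.
    """
    if "include" in votes:
        return "advance to full-text screening"
    if len(votes) == 2 and all(v == "exclude" for v in votes):
        return "exclude"
    return "awaiting second reviewer"

# A single 'include' vote is enough to advance the record.
print(screen_record(["include"]))             # advance to full-text screening
print(screen_record(["exclude", "exclude"]))  # exclude
```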

To extract data for this study, we developed a pilot-tested Google Form based on the extraction form used in a similar, previous study.4 Extracted data items related to the number of meta-analyses reported, reporting of summary statistics for each individual study, use of fixed-effect vs random-effects models, interpretation of tests of heterogeneity and small-study effects, and types of subgroup and sensitivity analyses performed. We extracted data for all meta-analyses, but certain items were dedicated to the index meta-analysis, which we defined as the primary meta-analysis for the primary end point. If no primary end point was mentioned, we used the first reported meta-analysis as the index meta-analysis and inferred the primary end point from it. We counted meta-analyses by summing the number of summary effects in forest plots, written narrative, and supplemental appendices; duplicate meta-analytic effects were counted only once. We counted subgroup effects that were derived from an analysis of at least 2 studies, as well as the overall summary effect that synthesized all subgroup effects. We counted only sensitivity analyses that were expressly described with a summary effect in the article or the supplemental material.

To be considered reproducible in theory, an analysis must have 3 elements: (1) an effect estimate and measure of precision (eg, hazard ratio with 95% confidence interval); (2) a clear list of the studies included in the analysis; and (3) for subgroup and sensitivity analyses, a clear indication of which studies were included in each group or level.
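This criterion is mechanical enough to express as a small check. The following Python sketch assumes a hypothetical record structure (the field names are ours, not taken from the study's extraction form):

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MetaAnalyticEstimate:
    # Hypothetical record for one meta-analytic effect estimate.
    effect_estimate: Optional[float]          # eg, a hazard ratio
    precision: Optional[Tuple[float, float]]  # eg, 95% CI bounds
    included_studies: List[str] = field(default_factory=list)
    is_subgroup_or_sensitivity: bool = False
    group_membership_clear: bool = False      # clear which studies fall in each group/level

def reproducible_in_theory(ma: MetaAnalyticEstimate) -> bool:
    """Apply the article's 3-element criterion for reproducibility in theory."""
    has_effect_and_precision = ma.effect_estimate is not None and ma.precision is not None
    has_study_list = len(ma.included_studies) > 0
    groups_ok = (not ma.is_subgroup_or_sensitivity) or ma.group_membership_clear
    return has_effect_and_precision and has_study_list and groups_ok

# Example: a main analysis reporting HR 0.81 (95% CI, 0.69-0.95) over 3 named trials.
ma = MetaAnalyticEstimate(0.81, (0.69, 0.95), ["Trial A", "Trial B", "Trial C"])
print(reproducible_in_theory(ma))  # True
```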

Data from all systematic reviews were extracted by C.W. A random sample of 15% of the included systematic reviews was extracted in duplicate by M.J.P., and M.V. adjudicated discrepancies in the double-extracted 15% sample. Any item that had at least 1 discrepancy was reviewed a second time by C.W. in the other 85% of studies. A complete list of items with a discrepancy is available, along with our protocol and data, via the Open Science Framework.9

Summary statistics and measures of central tendency (eg, median with interquartile range [IQR]) were calculated using Microsoft Excel (2016, Microsoft). We planned to use Stata statistical software (version 15.1, StataCorp) to calculate unadjusted risk ratios (uRRs) and 95% confidence intervals (CIs) for the comparisons between Cochrane and non-Cochrane systematic reviews and between systematic reviews that did and did not self-report use of PRISMA; however, owing to the disparate numbers of Cochrane and non-Cochrane systematic reviews, we reported only the comparisons of systematic reviews stratified by PRISMA adherence and year of publication. We conducted sensitivity analyses for meta-analyses presented in figures and for those published as supplemental material to investigate potential factors contributing to reproducibility.
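For readers who wish to verify the stratified comparisons, the uRR calculation is the standard one. Below is a minimal Python sketch of the usual log-scale Wald interval; the article used Stata, so this is an illustration of the method, not the authors' code.

```python
import math

def unadjusted_risk_ratio(a: int, n1: int, b: int, n2: int):
    """Unadjusted risk ratio with a Wald-type 95% CI computed on the log scale.

    a/n1: events and total in one group (eg, PRISMA reviews);
    b/n2: events and total in the comparison group.
    """
    rr = (a / n1) / (b / n2)
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

# Illustrative counts (from the Table's first row): 36/60 vs 64/94 reviews.
print(unadjusted_risk_ratio(36, 60, 64, 94))
```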

Results

Characteristics

We identified 1124 potential systematic reviews from our survey of the 49 NCCN guidelines for the treatment of cancer by site. Five NCCN guidelines did not cite any systematic reviews. An additional 19 clinical practice guidelines did not have any systematic reviews that met the inclusion criteria. After removing duplicates and screening all articles, 154 systematic reviews with at least 1 meta-analysis were included (Figure 1).9 There was high agreement between reviewers (94.0%) for studies extracted in duplicate.

Figure 1. PRISMA Diagram.


IPD indicates individual patient data; NCCN, National Comprehensive Cancer Network; PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; RCT, randomized clinical trial; SR, systematic review.

Of the 154 included systematic reviews, 77 (50.0%) were either a Cochrane review or mentioned adherence to PRISMA. Eighteen systematic reviews (11.7%) were Cochrane systematic reviews, and 60 (39.0%) reported adherence to PRISMA. Of the 78 systematic reviews that received funding, public sources (eg, government) were most common (36 [46.2%]). The systematic reviews included a median of 14 (IQR, 7.25-29.75) meta-analytic effect estimates, including those from subgroup and sensitivity analyses. Additional characteristics of our sample are reported in eTable 1 in the Supplement.

Only 88 systematic reviews (57.1%) labeled their primary end point (eTable 2 in the Supplement); thus, we inferred the primary end point in the remaining 66 systematic reviews from the index (first reported) meta-analysis. Seventy-three (47.4%) primary end points were all-cause mortality. A median of 8 (IQR, 5-12) primary studies with a median of 1914 (IQR, 917-3941) patients were included in each index meta-analysis. Seventy-nine index meta-analyses (51.3%) included a subgroup analysis, and 54 (35.1%) included a sensitivity analysis.

Reproducible Research Practices: Overall

There were a total of 3696 meta-analytic effect estimates, including subgroup and sensitivity analyses, in the 154 systematic reviews, but only 2375 (64.3%) were reproducible in theory. All meta-analytic estimates were reproducible in theory in 100 systematic reviews (64.9%), and 139 systematic reviews (90.3%) had at least 1 meta-analytic estimate that could potentially be reproduced. Summary statistics (eg, event rates) for studies included in the index meta-analysis were reported in 107 systematic reviews (69.5%), but only 39 (25.3%) mentioned whether missing data were imputed and included in the index meta-analysis. Missing data were reported to have been imputed in 29 of these 39 systematic reviews, but none of the 29 made clear which exact data points were imputed. Similarly, only 29 systematic reviews mentioned whether unpublished data were retrieved from primary study authors, with 17 affirming that authors were contacted. However, only 3 of 17 (17.6%) were clear about which data were retrieved.

Eighty-seven systematic reviews (56.5%) generated funnel plots to assess for publication bias, but only 49 of 87 (56.3%) presented the funnel plot in the article or supplemental appendix. In 28 of 87 (32.2%), the number of studies included in the funnel plot was unclear. Only 62 systematic reviews (40.3%) cited the guide used to interpret the I2 statistic, the most common being that of Higgins et al.13 Sixty-one author groups (39.6%) chose between a random-effects and a fixed-effect model based on the statistical heterogeneity of the included studies, but 31 of 61 (50.8%) did not report the amount of heterogeneity necessary to use a random-effects model.
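For reference, the I2 statistic discussed here is a simple function of Cochran's Q. A minimal sketch, using the formula from Higgins et al13 (the example inputs are made up):

```python
def i_squared(q: float, k: int) -> float:
    """I2 of Higgins et al: percentage of total variation across studies
    attributable to heterogeneity rather than chance.

    q: Cochran's Q statistic; k: number of studies (Q has k - 1 df).
    """
    df = k - 1
    return max(0.0, (q - df) / q) * 100.0

# Example: Q = 20 across 10 studies gives I2 = 55%, moderate heterogeneity
# by the rough 25%/50%/75% benchmarks proposed by Higgins et al.
print(i_squared(20.0, 10))  # 55.0
```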

Random-effects models were used for 91 of 154 index meta-analyses (59.1%), but specific information about the between-trial variance estimator (eg, DerSimonian and Laird14) was not reported in 45 of 91 (49.5%). Subgroup analyses were included in 79 of 154 index meta-analyses (51.3%), but only 51 of 79 (64.6%) were fully reproducible in theory. Of the 54 sensitivity analyses that accompanied index meta-analyses, only 34 (63.0%) were fully reproducible in theory. Only 1 meta-analysis, a Cochrane meta-analysis, included a link to an online data set.
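The DerSimonian and Laird estimator mentioned above is likewise a short method-of-moments calculation. A minimal Python sketch with made-up inputs (not data from any included review):

```python
def dersimonian_laird_tau2(effects: list[float], variances: list[float]) -> float:
    """Method-of-moments between-trial variance (tau2) of DerSimonian and Laird.

    effects: per-study estimates (log scale for ratio measures);
    variances: their within-study variances.
    """
    w = [1.0 / v for v in variances]                                 # fixed-effect weights
    sw = sum(w)
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sw         # fixed-effect mean
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))   # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    df = len(effects) - 1
    return max(0.0, (q - df) / c)                                    # truncated at zero

# Example with hypothetical log hazard ratios and variances from 3 trials.
print(dersimonian_laird_tau2([-0.22, -0.05, -0.35], [0.04, 0.09, 0.06]))
```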

When considering only the 2341 of 3696 meta-analytic estimates that were presented in forest plots, we determined that 2195 of 2341 (93.7%) were reproducible in theory because they included numerical point estimates (or event rates conducive to calculating point estimates) and a list of included studies. Compared with meta-analytic estimates not published in figures (180 of 1355 [13.3%] reproducible), forest plot–based estimates were more often reproducible in theory (uRR, 8.4; 95% CI, 7.2-9.7). When considering only meta-analytic estimates published as supplemental material, we determined that 368 of 642 (57.3%) were reproducible in theory. Compared with main-text estimates (2007 of 3054 [65.7%] reproducible), supplemental estimates were less often reproducible in theory (uRR, 0.74; 95% CI, 0.65-0.86). Both sensitivity analyses were unadjusted and should be interpreted with caution, especially the supplemental vs main-text analysis, which was likely confounded by forest plot–based meta-analyses.

Stratified Analyses of Reproducible Research Practices

We limited our analysis of Cochrane and non-Cochrane reviews to summary statistics owing to large differences in group sample sizes. One of 18 Cochrane reviews (5.6%) and 59 of 136 non-Cochrane reviews (43.4%) stated that they adhered to PRISMA guidelines. In 16 of 18 Cochrane reviews (88.9%), all included meta-analyses were reproducible in theory, compared with 85 of 136 non-Cochrane reviews (62.5%). Regarding sensitivity and subgroup analyses, all were reproducible in theory in Cochrane reviews; in non-Cochrane reviews, only 29 of 48 (60.4%) with sensitivity analyses and 49 of 77 (63.6%) with subgroup analyses provided enough information to make these analyses reproducible. All data for comparisons between systematic reviews that did and did not mention PRISMA are in the Table and Figure 2. Data for our analysis by year of publication are shown in Figure 3.

Table. Reproducible Research Practices of Systematic Reviews With Meta-analyses That Underpin the National Comprehensive Cancer Network Clinical Practice Guidelines for the Treatment of Cancer by Site.

| Reproducible Research Practice | All (n = 154), No. (%) | PRISMA (n = 60), No. (%) | Non-PRISMA (n = 94), No. (%) |
|---|---|---|---|
| Reported the data needed to recreate all meta-analytic effect estimates in the SR | 100 (64.9) | 36 (60.0) | 64 (68.1) |
| Reported the data needed to recreate the index meta-analytic effect estimate | 140 (90.9) | 58 (96.7) | 82 (87.2) |
| Reported summary statistics for each individual study in the index meta-analysis | 107 (69.5) | 42 (70.0) | 65 (69.1) |
| Reported effect estimates and measures of precision for each individual study in the index meta-analysis | 140 (90.9) | 58 (96.7) | 82 (87.2) |
| Reported that some data in the index meta-analysis had been imputed | 39 (25.3) | 14 (23.3) | 25 (26.6) |
| Clear which data were imputed and how | 0 | 0 | 0 |
| Reported that some data in the index meta-analysis had been obtained from the study author/sponsor | 17 (11.0) | 7 (11.7) | 10 (10.6) |
| Clear which data were obtained | 3 (1.9) | 0 | 3 (3.2) |
| Reported (or inferred) the type of random-effects method used for the index meta-analysis, No./No. (%) | 46/91 (50.5) | 22/43 (51.2) | 24/48 (50.0) |
| Reported the data needed to recreate each subgroup analysis for the index meta-analysis, No./No. (%) | | | |
| For all subgroup analyses | 51/79 (64.6) | 26/40 (65.0) | 25/39 (64.1) |
| For some subgroup analyses | 1/79 (1.3) | 0 | 1/39 (2.6) |
| Not for any subgroup analysis | 27/79 (34.2) | 14/40 (35.0) | 13/39 (33.3) |
| Reported the data needed to recreate each sensitivity analysis for the index meta-analysis, No./No. (%) | | | |
| For all sensitivity analyses | 34/54 (63.0) | 12/23 (52.2) | 22/31 (71.0) |
| For some sensitivity analyses | 2/54 (3.7) | 2/23 (8.7) | 0 |
| Not for any sensitivity analysis | 18/54 (33.3) | 9/23 (39.1) | 9/31 (29.0) |
| Mention of access to data sets and statistical analysis code used to perform analyses | 1 (0.6) | 0 | 1 (1.1) |

Abbreviations: PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; SR, systematic review.

Figure 2. Forest Plot of Studies Stratified by Mention of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Guidelines.


NCCN indicates National Comprehensive Cancer Network; uRR, unadjusted risk ratio.

Figure 3. Forest Plot of Studies Stratified by Time Frame.


PRISMA indicates Preferred Reporting Items for Systematic Reviews and Meta-Analyses; uRR, unadjusted risk ratio.

Discussion

The results of our investigation of oncology meta-analyses demonstrate that reproducible research practices are commonly implemented for primary analyses but far less so for secondary, subgroup, and sensitivity analyses. Moreover, figure-based (eg, forest plot) meta-analyses were far more reproducible than other meta-analyses, and our sensitivity analysis shows that the main driver of reproducibility was whether a meta-analysis was published in a forest plot. Systematic reviews with meta-analyses cited by oncology practice guidelines may represent the most important cohort of oncology systematic reviews because these reviews, in some cases, inform guideline recommendations. Yet, despite recent improvements in the quality of systematic reviews after the publication of the PRISMA Statement,15 we found that key items were missing from oncology meta-analyses, which may hinder their reproducibility. The ability to reproduce all meta-analytic effects, including those for secondary end points (systematic reviews, unlike clinical trials, are not powered for a single end point), is fundamentally important because scientific progress requires trustworthy results. Although the inability to reproduce study findings does not mean those findings are false, it may affect the interpretation of results, especially because our study defined reproducibility for main effects as the reporting of a summary effect, a measure of precision, and a list of included studies.

Our findings are comparable to those from a previous, similar study that examined the reproducible research practices of a cross-section of systematic reviews and meta-analyses published in February 2014.4 That study found that 73% of meta-analytic effects were reproducible in theory, compared with 64.3% in our study. For articles in our study, adhering to PRISMA and citing a guide to interpret statistical heterogeneity both seemed to improve the reporting of effect estimates and measures of precision for the index meta-analysis; however, these effects are either small or imprecise and should be interpreted accordingly.

Strengths and Limitations

This study has several key strengths and limitations. Our sample of 154 systematic reviews is 40% larger than that of the previous study of reproducible research practices4 and is focused on a single area of medicine. Unlike previous investigations of data reporting in SRs,16,17,18,19,20,21 we extracted whether the data necessary to reproduce meta-analyses (eg, summary statistics or effect estimates) were available from published reports and whether subgroup or sensitivity analyses differed from the index meta-analyses in this regard. Concerning limitations, our sample of systematic reviews may not be generalizable to all systematic reviews of oncology interventions because we relied on the citations in NCCN guidelines. It is possible that other specialized organizations (eg, the American Society of Hematology for blood cancers) cite different systematic reviews, and other systematic reviews of oncology interventions may be more or less reproducible in theory than those in this study. We used double data extraction for only 15% of the included studies, which may increase the chance of data extraction errors. Despite the high percentage of agreement between authors, to mitigate the possibility of such errors, we extracted data a second time for all items with a discrepancy and used a third-party adjudicator; these quality checks are consistent with previous studies.4,22 Further, the absence of data needed to reproduce a meta-analytic effect does not necessarily imply that the effect was incorrectly estimated, only that availability of those data may improve readers' confidence in its accuracy.

Conclusions

We recommend that authors of systematic reviews with meta-analyses incorporate more reproducible research practices, and we expect guideline authors to evaluate whether existing systematic reviews and meta-analyses are reproducible. We further recommend that journals encourage authors to present all meta-analyses in figures, because the standard graphical output for meta-analyses in most statistical packages includes a list of included studies and numerical point estimates; in this study, these 2 items alone were sufficient to reproduce a summary effect in theory. A guideline development group may downgrade the quality of systematic review data if they feel that the findings are not trustworthy. We also recommend earnest adherence to PRISMA because many of the reproducible research practices that we investigated are addressed therein, indicating that authors may incompletely adhere to PRISMA recommendations. Finally, authors should make use of data repositories, such as the Open Science Framework, to store data, supplemental material, or other necessary items that ensure the reproducibility of findings.
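To illustrate how little is required, a forest plot carrying both elements (a study list and numerical point estimates with CIs) can be drawn in a few lines. The following is a hypothetical sketch using matplotlib with made-up data, not output from any included review; dedicated meta-analysis packages produce this automatically.

```python
import matplotlib.pyplot as plt

# Hypothetical rows: study label, point estimate (risk ratio), 95% CI bounds.
rows = [
    ("Trial A", 0.82, 0.61, 1.10),
    ("Trial B", 0.74, 0.55, 0.99),
    ("Trial C", 0.91, 0.70, 1.18),
    ("Summary", 0.81, 0.69, 0.95),
]

fig, ax = plt.subplots(figsize=(5, 3))
for y, (label, est, lo, hi) in enumerate(reversed(rows)):
    # Plot each estimate as a marker with asymmetric CI whiskers.
    ax.errorbar(est, y, xerr=[[est - lo], [hi - est]], fmt="s",
                color="black", capsize=3)
ax.set_yticks(range(len(rows)))
ax.set_yticklabels([r[0] for r in reversed(rows)])
ax.axvline(1.0, linestyle="--", color="gray")  # line of no effect
ax.set_xscale("log")  # ratio measures are conventionally shown on a log axis
ax.set_xlabel("Risk ratio (95% CI)")
fig.tight_layout()
fig.savefig("forest_plot.png")
```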

Supplement.

eTable 1. General characteristics of systematic reviews referenced by the NCCN guidelines

eTable 2. Characteristics of the index meta-analysis in each SR

References

1. Munafò MR, Nosek BA, Bishop DVM, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1:0021. doi: 10.1038/s41562-016-0021
2. Collins FS, Tabak LA. Policy: NIH plans to enhance reproducibility. Nature. 2014;505(7485):612-613. doi: 10.1038/505612a
3. Rivoirard R, Bourmaud A, Oriol M, et al. Quality of reporting in oncology studies: a systematic analysis of literature reviews and prospects. Crit Rev Oncol Hematol. 2017;112:179-189. doi: 10.1016/j.critrevonc.2017.02.012
4. Page MJ, Altman DG, Shamseer L, et al. Reproducible research practices are underused in systematic reviews of biomedical interventions. J Clin Epidemiol. 2018;94:8-18. doi: 10.1016/j.jclinepi.2017.10.017
5. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349(6251):aac4716. doi: 10.1126/science.aac4716
6. Nosek BA, Errington TM. Reproducibility in cancer biology: making sense of replications. eLife. 2017;6:e23383. doi: 10.7554/eLife.23383
7. Lakens D, Page-Gould E, van Assen MA, et al. Examining the reproducibility of meta-analyses in psychology: a preliminary report. 2017. https://osf.io/preprints/bitss/xfbjf. Accessed May 8, 2018.
8. Jagsi R, Huang G, Griffith K, et al. Attitudes toward and use of cancer management guidelines in a national sample of medical oncologists and surgeons. J Natl Compr Canc Netw. 2014;12(2):204-212. doi: 10.6004/jnccn.2014.0021
9. Wayant C, Page MJ, Vassar M. Reproducibility of oncology meta-analyses. https://osf.io/kxj9z/. Published May 23, 2018. Accessed May 23, 2018.
10. Moher D, Shamseer L, Clarke M, et al; PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1. doi: 10.1186/2046-4053-4-1
11. NCCN. NCCN Guidelines for Treatment of Cancer by Site. https://www.nccn.org/professionals/physician_gls/default.aspx#site. Accessed May 6, 2018.
12. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210. doi: 10.1186/s13643-016-0384-4
13. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557-560. doi: 10.1136/bmj.327.7414.557
14. DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986;7(3):177-188. doi: 10.1016/0197-2456(86)90046-2
15. Page MJ, Moher D. Evaluations of the uptake and impact of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement and extensions: a scoping review. Syst Rev. 2017;6(1):263. doi: 10.1186/s13643-017-0663-8
16. Vaughn K, Skinner M, Vaughn V, Wayant C, Vassar M. Methodological and reporting quality of systematic reviews referenced in the clinical practice guideline for pediatric high-blood pressure. J Hypertens. 2019;37(3):488-495. doi: 10.1097/HJH.0000000000001870
17. Nissen T, Wayant C, Wahlstrom A, et al. Methodological quality, completeness of reporting and use of systematic reviews as evidence in clinical practice guidelines for paediatric overweight and obesity. Clin Obes. 2017;7(1):34-45. doi: 10.1111/cob.12174
18. Scott J, Howard B, Sinnett P, et al. Variable methodological quality and use found in systematic reviews referenced in STEMI clinical practice guidelines. Am J Emerg Med. 2017;35(12):1828-1835. doi: 10.1016/j.ajem.2017.06.010
19. Ross A, Rankin J, Beaman J, et al. Methodological quality of systematic reviews referenced in clinical practice guidelines for the treatment of opioid use disorder. PLoS One. 2017;12(8):e0181927. doi: 10.1371/journal.pone.0181927
20. Peters JPM, Hooft L, Grolman W, Stegeman I. Reporting quality of systematic reviews and meta-analyses of otorhinolaryngologic articles based on the PRISMA statement. PLoS One. 2015;10(8):e0136540. doi: 10.1371/journal.pone.0136540
21. Liu Y, Zhang R, Huang J, et al. Reporting quality of systematic reviews/meta-analyses of acupuncture. PLoS One. 2014;9(11):e113172. doi: 10.1371/journal.pone.0113172
22. Page MJ, Shamseer L, Altman DG, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med. 2016;13(5):e1002028. doi: 10.1371/journal.pmed.1002028
