In this issue of Critical Care Medicine, there are two systematic reviews and meta-analyses (SRMAs) assessing IV vitamin C therapy in patients with sepsis. Although most readers are likely familiar with using randomized clinical trials (RCTs) to inform clinical decisions, some may be less familiar with using SRMAs. In this foreword, we review the benefits and downsides of SRMAs and address the value of SRMAs in relation to RCTs and their respective roles in clinical decision-making.
An SRMA is an original, reproducible scientific work that answers a focused research question both qualitatively (evidence evaluation and distillation through systematic review) and quantitatively (meta-analysis). SRMAs are informed by a careful literature search in multiple databases such as Medline, Embase, and the Cochrane database, and often include evaluation of unpublished sources, such as trial registries. The investigators assess the included studies for clinical heterogeneity (as there are almost always some differences between trials). In the absence of substantial heterogeneity, a meta-analysis is performed to generate summary effect estimates for the outcomes of interest. At this point, investigators evaluate statistical heterogeneity both qualitatively (visually, via a forest plot) and quantitatively (commonly with the Cochran Q test and/or the I² statistic). If there is important statistical heterogeneity, subgroup analyses can be performed to help explain the inconsistency in findings. In addition, included studies are evaluated for risk of bias, including methodological limitations, selective outcome reporting, and publication bias. A properly done SRMA will follow standardized reporting guidelines, such as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), which include a checklist for reporting results. Ideally, an SRMA should be formally registered prior to initiation to ensure important decisions are made a priori (1, 2).
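To make the quantitative step concrete, the standard fixed-effect (inverse-variance) formulas below show how trial-level results are pooled and how statistical heterogeneity is quantified. This is a generic sketch of common meta-analytic methods, not a description of the specific models used in the SRMAs in this issue.

\[
\hat{\theta}_{\mathrm{pooled}} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\mathrm{SE}(\hat{\theta}_i)^2}
\]
\[
Q = \sum_{i=1}^{k} w_i \left(\hat{\theta}_i - \hat{\theta}_{\mathrm{pooled}}\right)^2, \qquad I^2 = \max\!\left(0,\; \frac{Q - (k - 1)}{Q}\right) \times 100\%
\]

Here, k is the number of included trials and each \(\hat{\theta}_i\) is a trial-level effect estimate (for example, a log odds ratio). The Cochran Q statistic is compared against a chi-square distribution with k − 1 degrees of freedom, and I² describes the proportion of observed variability attributable to between-study heterogeneity rather than chance. When heterogeneity is important, random-effects models add a between-study variance component to the weights.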
As part of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system, clinical practice guidelines and the resultant recommendations must be informed by a systematic review (and a meta-analysis where possible). Pooled effect estimates from an SRMA are evaluated using GRADE to ascribe the certainty of evidence as high, moderate, low, or very low. Using these summarized data on benefits, harms, and certainty of evidence, guideline panels decide on recommendations: strong or conditional, for or against the intervention of interest.
Outside of guidelines, most clinicians are comfortable using results from clinical trials to inform their practice. Considered the gold standard for clinical investigation, RCTs offer the ability to directly compare two (or more) groups, as randomization balances both known and unknown confounders between groups. It can be argued that well-done RCTs offer the most trustworthy information about the target population from which the study sample was drawn. However, RCT results may not always be generalizable to patients who differ from the study sample (such as those often excluded from RCTs: the severely critically ill, the elderly, and pregnant patients). Pragmatic trials with fewer exclusion criteria are more likely to be generalizable.
By combining multiple RCTs into a single pooled effect estimate, meta-analyses offer a larger sample size and, therefore, greater precision in detecting clinically meaningful effects (3). Other advantages of meta-analyses include the ability to compare multiple interventions that have not been studied head-to-head (via a technique called network meta-analysis), improved external validity through the inclusion of trials with more diverse populations, and the ability to examine between-study subgroups for factors that may modify treatment effects (even more so for individual patient data meta-analyses) (3). Some methodologists consider findings from a well-done SRMA of RCTs to be the highest level of evidence.
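The precision gain from pooling follows directly from the inverse-variance weighting sketched above. Under a fixed-effect model (again an illustrative formula, not a result from any particular trial),

\[
\mathrm{Var}\!\left(\hat{\theta}_{\mathrm{pooled}}\right) = \frac{1}{\sum_{i=1}^{k} w_i} \;\le\; \min_i \mathrm{Var}\!\left(\hat{\theta}_i\right),
\]

so the summary estimate is at least as precise as the most precise included trial, which is why a meta-analysis can detect effects that individual trials were underpowered to find.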
Despite the advantages of SRMAs, some clinicians may find it more difficult to use SRMAs than RCTs to inform clinical practice. There is always a degree of clinical heterogeneity between the studies included in an SRMA, and detractors often describe SRMAs as “combining apples and oranges.” For practicing clinicians, the further the data are from the patient, the less they may trust the data and the more likely they are to consider their patient the exception to the rule. This is further complicated when the results of an SRMA are not concordant with the largest or most recent RCT examining the same clinical question.
If the summary estimate of the SRMA matches the largest and most trustworthy of the included RCTs, the clinician does not need to decide which of the two to follow. However, discordance between an SRMA and the largest or most trustworthy RCT is also possible (4). If the SRMA results differ from those of the largest or most reliable RCT, it may be reasonable to follow the RCT, particularly if the SRMA includes small trials at high risk of bias. Additionally, meta-analyses on the same topic may not agree; a recent example is the role of corticosteroids in sepsis and septic shock, where two meta-analyses found disparate results (5, 6). In this situation, the methodological quality of the SRMAs should be assessed, using a tool such as A Measurement Tool to Assess Systematic Reviews (AMSTAR) (7, 8). If both SRMAs are well done, it is important to examine the included studies and their details carefully to understand why the results differ.
Ultimately, SRMAs are only as good as the studies they include. The discerning clinician should review an SRMA with the same careful lens used to appraise a traditional RCT (9) and be aware that the quality and reproducibility of each included study will influence the results of the SRMA. If the SRMA meets those exacting criteria, we suggest that clinicians should be as comfortable using the results of a well-done SRMA as those of a large RCT to inform clinical care.
Footnotes
Dr. Agarwal received funding from the National Institute of General Medical Sciences (5T32GM 95442-10). Dr. Sevransky’s institution received funding from the Centers for Disease Control and Prevention Foundation, the Marcus Foundation, and the Society of Critical Care Medicine. The remaining authors have disclosed that they do not have any potential conflicts of interest.
REFERENCES
1. Page MJ, McKenzie JE, Bossuyt PM, et al.: The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021; 372:n71
2. Liberati A, Altman DG, Tetzlaff J, et al.: The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: Explanation and elaboration. BMJ 2009; 339:b2700
3. Walker E, Hernandez AV, Kattan MW: Meta-analysis: Its strengths and limitations. Cleve Clin J Med 2008; 75:431–439
4. LeLorier J, Grégoire G, Benhaddad A, et al.: Discrepancies between meta-analyses and subsequent large randomized, controlled trials. N Engl J Med 1997; 337:536–542
5. Rochwerg B, Oczkowski SJ, Siemieniuk RAC, et al.: Corticosteroids in sepsis: An updated systematic review and meta-analysis. Crit Care Med 2018; 46:1411–1420
6. Rygård SL, Butler E, Granholm A, et al.: Low-dose corticosteroids for adult patients with septic shock: A systematic review with meta-analysis and trial sequential analysis. Intensive Care Med 2018; 44:1003–1016
7. Shea BJ, Hamel C, Wells GA, et al.: AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol 2009; 62:1013–1020
8. Shea BJ, Reeves BC, Wells G, et al.: AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017; 358:j4008
9. Murad MH, Montori VM, Ioannidis JP, et al.: How to read a systematic review and meta-analysis and apply the results to patient care: Users’ guides to the medical literature. JAMA 2014; 312:171–179
