BMJ Open. 2023 Jul 26;13(7):e073232. doi: 10.1136/bmjopen-2023-073232

Evaluating the impact of including non-randomised studies of interventions in meta-analysis of randomised controlled trials: a protocol for a meta-epidemiological study

Minghong Yao 1,2,3, Yuning Wang 1,2,3, Jason W Busse 4,5,6, Matthias Briel 5,7, Fan Mei 1,2,3, Guowei Li 5,8,9, Kang Zou 1,2,3, Ling Li 1,2,3,, Xin Sun 1,2,3,
PMCID: PMC10373676  PMID: 37495391

Abstract

Introduction

Although interest in including non-randomised studies of interventions (NRSIs) in meta-analyses of randomised controlled trials (RCTs) is growing, estimates of effectiveness obtained from NRSIs are vulnerable to greater bias than those from RCTs. The objectives of this study are to: (1) explore how NRSIs can be integrated into a meta-analysis of RCTs; (2) assess concordance of the evidence from NRSIs and RCTs and explore factors associated with agreement; and (3) investigate the impact on estimates of pooled bodies of evidence when NRSIs are included.

Methods and analysis

We will conduct a systematic survey of 210 systematic reviews that include both RCTs and NRSIs, published from 2017 to 2022. We will randomly select reviews, stratified in a 1:1 ratio by Core vs non-Core clinical journals, as defined by the National Library of Medicine. Teams of paired reviewers will independently determine eligibility and abstract data using standardised, pilot-tested forms. The concordance of the evidence will be assessed by exploring agreement in the relative effects reported by NRSIs and RCTs addressing the same clinical question, defined as similarity of the population, intervention/exposure, control and outcomes. We will conduct univariable and multivariable logistic regression analyses to examine the association of prespecified study characteristics with agreement in the estimates between NRSIs and RCTs. We will calculate the ratio of the relative effect estimate from NRSIs over that from RCTs, along with the corresponding 95% CI. We will use a bias-corrected meta-analysis model to investigate the influence on pooled estimates when NRSIs are included in the evidence synthesis.

Ethics and dissemination

Ethics approval is not required. The findings of this study will be disseminated through peer-reviewed publications, conference presentations and condensed summaries for clinicians, health policymakers and guideline developers regarding the design, conduct, analysis and interpretation of meta-analyses that integrate RCTs and NRSIs.

Keywords: systematic review, statistics & research methods, epidemiology


Strengths and limitations of this study.

  • This study includes more systematic reviews and involves more journals than previous studies. Additionally, the range of types of non-randomised studies of interventions (NRSIs) included in our study is broad. Therefore, our findings will be more generalisable.

  • This study will use an advanced statistical method that attempts to correct for bias associated with NRSIs to investigate the impact of pooling bodies of evidence that integrate NRSIs and randomised controlled trials.

  • Only studies with binary outcomes (yes or no) will be included in this study. This may result in findings that are less generalisable to other review types.

  • The accuracy and reliability of the information from the original studies included in each systematic review or meta-analysis will not be assessed.

Background

Randomised controlled trials (RCTs) with low risk of bias (RoB) are considered the most robust source of evidence for evaluating effects of health interventions. This is due to random allocation of participants to competing interventions and analytic approaches that support causal inference.1 As such, meta-analysis of RCTs has become an essential methodology in the evaluation of evidence2; however, RCTs may be limited by strict inclusion criteria and short follow-up, which may limit the applicability of their findings in real-world clinical practice. In addition, costs associated with large sample sizes present a challenge for the use of RCTs in the assessment of rare events.3

There is growing interest in using evidence from non-randomised studies of interventions (NRSIs)—including quasi-randomised controlled trials (quasi-RCTs), non-randomised controlled trials (non-RCTs), cohort studies and case–control studies—to assess the effectiveness of health interventions.4 A potential advantage of NRSIs is that enrolled patients may be more representative of clinical practice than those in RCTs, due to inclusion of a more diverse distribution of patients.5 NRSIs could provide complementary, sequential or replacement evidence for RCTs, particularly when RCTs may be inappropriate (eg, rare or long-term adverse effects) or not feasible (unethical to perform).2 Data from NRSIs have been leveraged by regulatory bodies, such as the US Food and Drug Administration and European Medicines Agency, to assess the safety of marketed products and for new drug approvals.4 6 7

Both RCTs and NRSIs are potentially valuable sources of evidence for assessing the effects of health interventions, and there have been ongoing efforts to integrate NRSIs into meta-analyses of RCTs.8–10 NRSIs can provide valuable information as complementary, sequential or replacement evidence for RCTs, and integrating NRSIs with RCTs when assessing treatment effects may help increase the overall certainty of evidence.2 Tools, frameworks and guidelines to facilitate combining evidence from RCTs and NRSIs are available2 11–13; however, including NRSIs in meta-analyses of RCTs presents methodological challenges, as effect estimates obtained from NRSIs may be subject to additional sources of bias (eg, confounding). The concordance of evidence between RCTs and NRSIs, and the influence on estimates of pooled bodies of evidence (BoE) when NRSIs are included in evidence syntheses, require careful consideration to ensure credibility of results.

The integration of NRSIs into meta-analyses of RCTs and the reporting quality of such reviews have been the subject of several studies.14–17 Faber et al18 surveyed 119 meta-analyses of RCTs and non-randomised studies published in 2013, and found that description of study type, searching for grey literature, RoB assessment and whether crude or adjusted estimates were combined were inadequately reported. In a more recent study, Bun et al19 examined 102 meta-analyses including both RCTs and observational studies published from 2014 to 2018 in five leading journals and the Cochrane Database of Systematic Reviews. They found that 38% of studies quantitatively combined RCTs and observational studies, none of which clearly reported how the RoB of NRSIs was considered or how the BoE was affected by inclusion of NRSIs.

Other studies have evaluated the concordance of treatment effects from NRSIs and RCTs, and findings have been inconsistent. Among 58 meta-analyses reported in 19 reviews published in 2011, Golder et al20 found no difference, on average, in the risk estimate of adverse effects between RCTs and observational studies. Anglemyer et al21 investigated systematic reviews that were designed as methodological reviews to compare quantitative effect size estimates from RCTs with those from observational studies. They concluded that there was little evidence for significant differences in estimates between observational studies and RCTs, regardless of the specific observational study design, heterogeneity or inclusion of studies of pharmacological interventions. However, most of the studies included in that review were published much earlier, and the methodology for NRSIs has since evolved, particularly with regard to causal inference techniques. In a study of 74 pairs of effect estimates from both RCTs and observational studies, which evaluated the effectiveness and safety of drugs, Hong et al22 found that 20% of pairs showed a statistically significant difference. Similarly, in a study examining the agreement of treatment effects of three drugs used for treating COVID-19 from RCTs and observational studies, Moneer et al23 found that treatment effects agreed in 78% of paired studies. In contrast, Bröckelmann et al24 evaluated the agreement between RCTs and cohort studies and found that pooled effect estimates did not differ. However, no prior study explored factors that could impact agreement, or examined the agreement of NRSIs and RCTs when causal modelling versus traditional methods are used to control for confounding in NRSIs. Additionally, a major issue with previous studies is that they did not adequately consider the RoB of the RCTs. Therefore, a comprehensive analysis of the concordance between RCTs and NRSIs across different levels of RoB of RCTs is highly valuable.

In a study of 118 pairs of BoE based on RCTs and cohort studies reported in 13 high-impact-factor medical journals, Bröckelmann et al25 investigated whether inclusion of evidence from cohort studies modified the conclusions drawn when only evidence from RCTs was considered. However, they used random-effects and common-effects models to quantitatively synthesise RCTs and cohort studies; these methods do not attempt to adjust for potential bias from cohort studies.26 No prior review has empirically investigated the influence on estimates of pooled BoE integrating RCTs and NRSIs using advanced statistical methods, such as a bias-corrected meta-analysis model.27

These shortcomings leave important questions unanswered, which we will address with three main objectives in our meta-epidemiological study. The first is to explore how NRSIs are integrated into meta-analyses of RCTs. The second is to assess the concordance of the evidence from different types of NRSIs and RCTs, and to examine factors that may be associated with agreement in results. The third is to evaluate the impact of adding NRSIs to RCTs on the BoE using an advanced analysis model.

Methods and analysis

Study design overview

We will conduct a systematic survey of systematic reviews that include both RCTs and NRSIs conducted in humans, published from 2017 to 2022. To maximise the generalisability of study findings, we will include the following types of NRSIs: (1) quasi-RCTs, (2) non-RCTs, (3) cohort studies and (4) case–control studies.

Eligibility criteria

The inclusion criteria are:

  1. The study is a systematic review;

  2. The participants are human;

  3. The study is published between 2017 and 2022;

  4. At least one outcome included RCTs and NRSIs, and the NRSI is a quasi-randomised study, non-RCT, cohort study or case–control study;

  5. The outcome of the meta-analysis of RCTs and NRSIs is binary and informs the benefits or safety of an intervention directed at treatment or prevention;

  6. Published in English.

The exclusion criteria are:

  1. Individual participant data meta-analyses, network meta-analyses, dose–response meta-analyses, and meta-analyses in which the outcome is binary but reported only as a rate;

  2. The study is reported as a research letter, protocol, abstract or short report.

Literature search

We will search PubMed to identify eligible systematic reviews. Our search strategy includes terms for systematic review, meta-analysis, RCTs and NRSIs (online supplemental appendix 1) and was informed by previous related studies.20 22 28

Supplementary data

bmjopen-2023-073232supp001.pdf (53.2KB, pdf)

Sample size and random sampling

Our sample size estimation is based on the proportion of agreement of treatment effects between NRSIs and RCTs (defined below). The sample size is calculated by the equation n = z_α² × p(1 − p)/δ², where z_α = 1.96, p denotes the proportion of agreement of effects from NRSIs and RCTs, set at 80% based on two previous studies,22 23 and δ is the tolerated margin of error, set at 5.5%. According to this estimation, we will require at least 204 systematic reviews for our study.
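As a check on this calculation, the short sketch below (Python, using the values stated above) reproduces the required sample size; it is illustrative only.

```python
import math

# Sample-size check for the equation n = z_alpha^2 * p * (1 - p) / delta^2
z_alpha = 1.96   # standard normal quantile for a two-sided 95% CI
p = 0.80         # assumed proportion of agreement between NRSIs and RCTs
delta = 0.055    # tolerated margin of error (5.5%)

n = (z_alpha ** 2) * p * (1 - p) / (delta ** 2)
print(math.ceil(n))  # -> 204 systematic reviews
```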

We will stratify included studies into Core and non-Core Clinical Journals, as defined by the US National Library of Medicine and the National Institutes of Health. There are 118 Core Clinical Journals covering all specialties of clinical medicine and public health sciences.29 We will randomly sample journal articles, with 1:1 stratification by journal type (Core vs non-Core). We will screen sampled articles for eligibility and continue the random sampling process until we identify 210 eligible systematic reviews.
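A minimal sketch of this stratified sampling procedure is given below. The record lists, the eligibility function and the per-stratum quota of 105 reviews are assumptions of the sketch rather than details specified in the protocol.

```python
import random

def sample_stratum(records, is_eligible, target, rng):
    """Randomly draw records from one journal stratum until `target`
    eligible systematic reviews have been identified."""
    pool = list(records)
    rng.shuffle(pool)
    eligible = []
    while pool and len(eligible) < target:
        record = pool.pop()
        if is_eligible(record):        # full eligibility screening of the sampled record
            eligible.append(record)
    return eligible

def stratified_sample(core_records, non_core_records, is_eligible,
                      total=210, seed=2023):
    """1:1 stratified random sampling of Core and non-Core journal articles."""
    rng = random.Random(seed)
    per_stratum = total // 2           # assumed: 105 eligible reviews per stratum
    return (sample_stratum(core_records, is_eligible, per_stratum, rng) +
            sample_stratum(non_core_records, is_eligible, per_stratum, rng))
```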

Study process

Teams of paired reviewers, trained in systematic review methods, will independently and in duplicate screen titles, abstracts and full texts for eligibility, and abstract data from all eligible studies. We will use electronic forms, developed with Microsoft Access, for study screening and data extraction. The forms will be standardised and pilot-tested, with detailed written instructions to improve reliability. Reviewer teams will resolve disagreements through discussion or, if needed, adjudication by one of two arbitrators (MY, LL).

Study screening

Teams of paired reviewers will independently screen titles and abstracts of identified citations for potential eligibility. In the title and abstract screening phase, all systematic reviews or meta-analyses including NRSIs and RCTs with human participants will be acquired in full text. The reviewers will then screen the full texts of potentially eligible systematic reviews or meta-analyses to determine final eligibility. When more than 50% of primary studies overlap between eligible articles, we will include only the review whose relevant meta-analysis is informed by the largest number of studies.

Patient and public involvement

This study is a meta-epidemiological study that does not involve the collection of any individual data.

Selection of the primary outcome

Teams of paired reviewers will select one primary outcome from each eligible review. If a systematic review or meta-analysis specifies a single primary eligible outcome, we will select it as the primary outcome for our analyses. If a systematic review or meta-analysis specifies more than one primary eligible outcome, we will select the first one reported in their results.

Data abstraction

Reviewer teams will abstract data from eligible systematic reviews using pilot-tested, standardised data abstraction forms, together with corresponding detailed instructions.

Study characteristics

We will extract information on the following items: first author; year of publication; journal name; type of review (Cochrane or non-Cochrane); clinical condition; type of NRSIs included in the selected meta-analysis; tools used to assess the RoB of RCTs and NRSIs; whether the review followed prespecified reporting guidelines; registration information; whether an associated protocol is publicly available; number of NRSIs and RCTs included in the meta-analysis of interest; total number of patients analysed in NRSIs and RCTs; justification for inclusion of NRSIs; involvement of a methodologist (ie, statistician or epidemiologist); type of journal (Core vs non-Core); type of funding (private for profit, private not for profit, governmental, not funded, not reported); type of intervention (pharmacological vs non-pharmacological); type of objective for the selected primary outcome (safety or efficacy/effectiveness); and category of the selected primary outcome (symptoms/quality of life/functional status, mortality, morbidity, surrogate outcomes). If the original study did not report the RoB for RCTs or did not use the Cochrane RoB tool to evaluate it, we will evaluate the RoB for RCTs using the revised Cochrane RoB tool (RoB 2).30 The overall RoB judgement for each result will be 'low RoB', 'some concerns' or 'high RoB'.

How NRSIs were integrated into meta-analyses and how this was reported

We will record the following: how NRSIs and RCTs were combined19 (a separate meta-analysis conducted by type of study; combined in the same meta-analysis with results of a subgroup analysis by type of study; combined in the same meta-analysis without subgroup analysis; only one type of study combined in a meta-analysis and the other described only qualitatively; or RCTs and quasi-randomised studies combined in the same meta-analysis with other types of NRSIs combined separately or described qualitatively); statistical methods used to quantitatively synthesise RCT and NRSI outcome data; whether the inclusion of NRSIs was justified; whether the study design of each NRSI was reported; whether the authors reported the PI/ECO (Population, Intervention/Exposure, Comparison, Outcome) for NRSIs and RCTs; whether PI/ECO similarity between RCTs and NRSIs was illustrated; types of NRSIs included in the meta-analysis; whether adjusted estimates were used for NRSIs; the causal modelling and traditional methods used to control for confounding among NRSIs; heterogeneity tests for different study types; publication bias assessment for different study types; whether the certainty of evidence was appraised and which approach was used; subgroup/sensitivity analyses conducted; whether the potential implications of the decision to include NRSIs were discussed; and the agreement between the pooled estimates and the estimates from RCTs.

Concordance of the evidence from NRSIs and RCTs, and the influence of NRSIs on estimates of BoE

We will record the PI/ECO for different types of NRSIs and RCTs, and whether authors used methods such as restriction or stratification by PI/ECO to improve the comparability between NRSIs and RCTs. We will extract the following information both for RCTs and for the different types of NRSIs contributing to the selected meta-analysis from each eligible review: pooled effect estimate and associated 95% CI, number of studies, number of participants and type of effect measure (risk ratio (RR), OR or HR). If the original study pooled RCTs and NRSIs and did not report separate estimates, we will perform separate meta-analyses for RCTs and NRSIs.31 If quasi-randomised studies were pooled with RCTs in the meta-analysis of interest, we will perform a meta-analysis of RCTs excluding the quasi-randomised studies; if the pooled NRSI estimate did not include quasi-randomised studies, we will conduct a meta-analysis reallocating them to the NRSI group.
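The protocol does not name a specific estimator for these re-analyses. As one plausible approach, the sketch below implements DerSimonian-Laird random-effects pooling of log relative effects; the function name and inputs are illustrative only.

```python
import numpy as np

def random_effects_meta(log_effects, standard_errors):
    """DerSimonian-Laird random-effects pooling of log relative effects
    (e.g. log RRs) -- one possible way to re-estimate separate pooled
    effects for RCTs and NRSIs when a review reports only a combined result."""
    y = np.asarray(log_effects, dtype=float)
    se = np.asarray(standard_errors, dtype=float)
    w_fixed = 1.0 / se**2
    y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
    q = np.sum(w_fixed * (y - y_fixed) ** 2)              # Cochran's Q
    df = len(y) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w = 1.0 / (se**2 + tau2)
    pooled = np.sum(w * y) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = (np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se))
    return np.exp(pooled), ci, tau2                        # pooled RR, 95% CI, tau^2
```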

Statistical analysis

Analysis of the quality of reporting

We will conduct descriptive analyses of general study and reporting characteristics. We will use frequencies (and percentages) for dichotomous variables and mean (SD), median (range) or median (first quartile, third quartile) for continuous variables. We will compare the general study and reporting characteristics between Core and non-Core journals, using the χ² test or Fisher's exact test for categorical variables, and the t-test or Mann-Whitney U test for continuous variables.
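For illustration, the sketch below shows how such comparisons could be run with scipy; the example data, the expected-count rule for switching to Fisher's exact test, and the choice between the t-test and the Mann-Whitney U test are assumptions of the example rather than decisions stated in the protocol.

```python
import numpy as np
from scipy import stats

# Illustrative data only: comparing characteristics between Core and non-Core journals
table = np.array([[40, 65],     # e.g. protocol available: Core vs non-Core
                  [65, 40]])    # protocol not available: Core vs non-Core
chi2, p_cat, dof, expected = stats.chi2_contingency(table)
if (expected < 5).any():        # small expected counts: fall back to Fisher's exact test
    _, p_cat = stats.fisher_exact(table)

core = np.array([5, 8, 12, 3, 7])        # e.g. number of RCTs per review (hypothetical)
non_core = np.array([4, 2, 9, 6, 5])
t_stat, p_ttest = stats.ttest_ind(core, non_core)       # if approximately normal
u_stat, p_mwu = stats.mannwhitneyu(core, non_core)      # otherwise
```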

Analysis of concordance of the evidence from NRSIs and RCTs

We will evaluate the similarity of PI/ECO between the BoE from RCTs and NRSIs, with reference to two previous studies.24 28 Specifically, these studies used criteria to rate the similarity of each PI/ECO domain as 'more or less identical', 'similar but not identical', 'broadly similar' or 'dissimilar' for each outcome from RCTs and NRSIs. Two reviewers will perform this categorisation independently and in duplicate, and discrepancies will be resolved by one of two arbitrators (MY, LL).

The results will be considered to agree qualitatively if both RCTs and NRSIs identify the same direction of effect, namely a statistically significant increase, a statistically significant decrease or no statistically significant difference.20 If the effect measures (RR, OR, HR) differ between RCTs and NRSIs, we will express both estimates as RRs using an assumed control risk (ACR): RR = OR/(1 − ACR × (1 − OR)).32 We will standardise the direction of effect of the outcomes so that pooled effect estimates (HR/OR/RR) <1 refer to a beneficial effect. We will use logistic regression to examine the association of study characteristics with agreement versus disagreement, with ten prespecified study characteristics: (1) journal type (Core vs non-Core), (2) source of funding (for profit vs not for profit), (3) the PI/ECO similarity between the BoE from RCTs and NRSIs, (4) the types of NRSIs, (5) type of intervention (pharmacological vs non-pharmacological), (6) type of objective for the primary outcome (safety vs benefit), (7) statistical significance of the combined estimates (yes vs no), (8) whether only NRSI results adjusted for confounders were included (yes vs no), (9) the heterogeneity within meta-analyses and (10) the similarity between the control groups from RCTs and NRSIs.
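A small sketch of the OR-to-RR conversion and the qualitative agreement rule described above follows; the helper names are illustrative, and the conversion formula is the one from Grant (reference 32).

```python
def or_to_rr(odds_ratio, acr):
    """Convert an odds ratio to a relative risk given an assumed control risk (ACR):
    RR = OR / (1 - ACR * (1 - OR))."""
    return odds_ratio / (1.0 - acr * (1.0 - odds_ratio))

def direction(ci_lower, ci_upper):
    """Classify a pooled relative effect by its 95% CI (effects <1 are beneficial)."""
    if ci_upper < 1.0:
        return "statistically significant decrease"
    if ci_lower > 1.0:
        return "statistically significant increase"
    return "no statistically significant difference"

def qualitative_agreement(rct_ci, nrsi_ci):
    """RCT and NRSI bodies of evidence agree if they point in the same direction."""
    return direction(*rct_ci) == direction(*nrsi_ci)
```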

To quantify differences in effect estimates, we will compute a ratio of ratios (RoR) comparing the pooled effects of NRSIs and RCTs contributing to the meta-analysis of interest,20 22 with pooled evidence from RCTs serving as the reference group. The RoR will not be interpreted as indicating larger or smaller treatment effects in one type of study (eg, NRSIs), but only as the difference between the two estimates; the direction of the difference depends on the direction of effect of the underlying estimates. We will express differences in pooled effect estimates with the following measures: RoRs that are <1, >1 or =1, and RoRs indicating an 'important difference' (<0.70 or >1.43) or not (0.70≤RoR≤1.43).24 33 We will pool the RoRs across all eligible studies using a random-effects model to assess whether, overall, effect estimates from RCTs are larger or smaller than those from NRSIs.24 34
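The sketch below illustrates how an RoR and an approximate 95% CI could be derived from two pooled relative effects, assuming the RCT and NRSI estimates are independent (an assumption of this sketch); the RoRs across reviews could then be combined with a random-effects model such as the one sketched earlier.

```python
import numpy as np

def ratio_of_ratios(rr_nrsi, ci_nrsi, rr_rct, ci_rct):
    """Ratio of relative effects (NRSI over RCT, RCTs as reference) with an
    approximate 95% CI, derived on the log scale from the two pooled estimates
    and their 95% CIs (assumed independent and log-symmetric)."""
    log_ror = np.log(rr_nrsi) - np.log(rr_rct)
    se_nrsi = (np.log(ci_nrsi[1]) - np.log(ci_nrsi[0])) / (2 * 1.96)
    se_rct = (np.log(ci_rct[1]) - np.log(ci_rct[0])) / (2 * 1.96)
    se_ror = np.sqrt(se_nrsi**2 + se_rct**2)
    ror = np.exp(log_ror)
    ci = (np.exp(log_ror - 1.96 * se_ror), np.exp(log_ror + 1.96 * se_ror))
    important = ror < 0.70 or ror > 1.43    # threshold for an 'important difference'
    return ror, ci, important
```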

Analysis of the influence of combining RCTs and NRSIs on estimates of BoE

We will assess the influence of combining randomised and non-randomised data on estimates of BoE with the following metrics: statistical heterogeneity, 95% prediction intervals, the proportion of meta-analyses in which the inclusion of NRSIs modifies the qualitative direction of the estimates from RCTs, the weight of the BoE contributed by RCTs and the agreement of the direction of effect between RCTs and the pooled effect estimates. We will use the bias-corrected meta-analysis model of Verde27 to combine RCTs and NRSIs. This model is based on a mixture of two random-effects distributions, where the first component corresponds to the model of interest and the second component to the hidden bias structure. One important advantage of this approach is that the proportion of NRSIs can be used as a simple prior elicitation of the probability of bias in the meta-analysis, so that the potential bias of NRSIs is taken into account.
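Verde's model is Bayesian and is typically fitted with MCMC. As a rough illustration of the two-component mixture structure only, the sketch below fits a simplified maximum-likelihood analogue in which the proportion of NRSIs is used as a fixed weight for the biased component; all names, priors-as-weights and parameterisations here are assumptions of the sketch, not the model specified in reference 27.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_bias_corrected(y, se, prop_nrsi):
    """Simplified maximum-likelihood analogue of a two-component bias-corrected
    random-effects model: component 1 is the model of interest, component 2 a
    shifted, more dispersed 'biased' component. `y` are log relative effects,
    `se` their standard errors, and prop_nrsi is the proportion of NRSIs used
    as the fixed weight of the biased component."""
    y, se = np.asarray(y, float), np.asarray(se, float)

    def negloglik(params):
        mu, log_tau, bias, log_tau_b = params
        tau, tau_b = np.exp(log_tau), np.exp(log_tau_b)
        # marginal density of each study estimate under each mixture component
        f_model = norm.pdf(y, loc=mu, scale=np.sqrt(se**2 + tau**2))
        f_bias = norm.pdf(y, loc=mu + bias, scale=np.sqrt(se**2 + tau**2 + tau_b**2))
        mix = (1 - prop_nrsi) * f_model + prop_nrsi * f_bias
        return -np.sum(np.log(mix + 1e-300))

    start = np.array([np.mean(y), np.log(0.1), 0.0, np.log(0.1)])
    fit = minimize(negloglik, start, method="Nelder-Mead")
    return np.exp(fit.x[0]), fit   # bias-adjusted pooled relative effect, full fit
```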

Subgroup analysis

We will conduct the aforementioned analyses for the primary outcome within subgroups defined by the type of NRSIs, type of intervention, type of objective for the primary outcome, category of the primary outcome and the RoB of RCTs.

Ethics and dissemination

This study involves neither human participants nor unpublished secondary data. As such, approval from a human research ethics committee is not required. The findings of this study will be disseminated through peer-reviewed publications, conference presentations and condensed summaries for clinicians, health policymakers and guideline developers regarding the design, conduct, analysis and interpretation of meta-analyses that integrate RCTs and NRSIs.

Discussion

This protocol describes a meta-epidemiological study to assess the potential impact on evidence syntheses when including NRSIs in meta-analyses of RCTs. By publishing our detailed study protocol, we aim to make our objectives and analysis methods transparent.

Strengths and limitations

Our study has several strengths. First, this study is transparent and will follow rigorous systematic methods for searching for eligible systematic reviews or meta-analyses, selecting eligible studies and primary outcomes, and abstracting data. We will use standardised, pilot-tested forms accompanied by written instructions to ensure data integrity. Second, our study will include more systematic reviews and involve more journals than previous studies. Furthermore, the range of NRSI types included in our study is broad; thus our findings will be more generalisable. Third, to investigate the impact of pooling BoE integrating RCTs and NRSIs, we will use advanced statistical methods that attempt to correct for bias associated with NRSIs. Finally, our methodology will address a series of important questions that have not been fully answered by previous studies.

Our study also has several limitations. First, we will exclude systematic reviews or meta-analyses reporting primary outcomes that are continuous variables or expressed as rates, as well as individual participant data meta-analyses, network meta-analyses, dose–response meta-analyses and diagnostic meta-analyses; the findings from our study may not be generalisable to these review types. Second, assessing the similarity of the PI/ECO between the BoE from RCTs and NRSIs will involve reviewers' judgement, which is subjective. We will provide prespecified criteria with detailed written instructions, based on two previous studies,24 28 to improve the accuracy and reliability of these assessments. Third, we will accept information and data as reported by the authors of included systematic reviews or meta-analyses, and will not abstract information from the original studies included in each review.

Implications of this study

This protocol describes a meta-epidemiological study to assess the potential impact on evidence syntheses when including NRSIs in meta-analyses of RCTs. By publishing our detailed study protocol, we aim to make our objectives and analysis methods transparent. Although some empirical studies restricted to certain diseases (COVID-19 and major depressive disorder), types of objective for the primary outcome (adverse effects or efficacy/effectiveness) or types of NRSIs (cohort studies) have explored agreement of effects from NRSIs and RCTs, the factors that drive reporting quality and the concordance of evidence between NRSIs and RCTs in a broader set of systematic reviews and meta-analyses remain unclear. The results from this study will provide more generalisable evidence on the factors that affect reporting and the concordance of the evidence from NRSIs and RCTs.

Furthermore, the inclusion of NRSIs in a meta-analysis of RCTs appraising health interventions may provide a more representative evidence base for decision-making; however, the influence on estimates of BoE is still unclear. Compared with RCTs, NRSIs often report larger effects because of uncontrolled confounding,22 and often with much narrower CIs because the number of events and the sample size are usually much larger.35 Several meta-analyses including both NRSIs and RCTs have used conventional methods to directly combine effect estimates from RCTs and NRSIs, which are vulnerable to bias associated with NRSIs. The results of this study will reveal the extent to which the bias-corrected meta-analysis model by Verde27 can adjust pooled estimates of BoE from RCTs and NRSIs for bias in NRSIs.

The findings of this study may influence recommendations on the design, conduct, analysis and interpretation of systematic reviews or meta-analyses that include RCTs and NRSIs. Our findings will have important implications for clinicians, health policymakers and guideline developers who conduct, report and interpret meta-analyses integrating RCTs and NRSIs.


Footnotes

Twitter: @JasonWBusse

Contributors: XS, LL and MY conceptualised the study. All authors contributed to the design of this protocol and approved the manuscript.

Funding: This study is funded by the National Natural Science Foundation of China (Grant Nos. 72204173, 71904134 and 82274368), the National Science Fund for Distinguished Young Scholars (Grant No. 82225049), the Natural Sciences Fund of Hainan Province (Grant No. 821MS0818), the Sichuan Provincial Central Government Guides Local Science and Technology Development Special Project (Grant No. 2022ZYD0127) and the Fundamental Research Funds for the Central Public Welfare Research Institutes (Grant No. 2020YJSZX-3). JWB is supported, in part, by a CIHR Canada Research Chair in Prevention and Management of Chronic Pain (Grant No. NA).

Competing interests: None declared.

Patient and public involvement: Patients and/or the public were not involved in the design, or conduct, or reporting or dissemination plans of this research.

Provenance and peer review: Not commissioned; externally peer reviewed.

Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.

Ethics statements

Patient consent for publication

Not applicable.

References

  • 1. Barton S. Which clinical studies provide the best evidence? The best RCT still trumps the best observational study. BMJ 2000;321:255–6. doi:10.1136/bmj.321.7256.255
  • 2. Cuello-Garcia CA, Santesso N, Morgan RL, et al. GRADE guidance 24: optimizing the integration of randomized and non-randomized studies of interventions in evidence syntheses and health guidelines. J Clin Epidemiol 2022;142:200–8. doi:10.1016/j.jclinepi.2021.11.026
  • 3. Rothwell PM. External validity of randomised controlled trials: "to whom do the results of this trial apply?" Lancet 2005;365:82–93. doi:10.1016/S0140-6736(04)17670-8
  • 4. Sherman RE, Anderson SA, Dal Pan GJ, et al. Real-world evidence - what is it and what can it tell us? N Engl J Med 2016;375:2293–7. doi:10.1056/NEJMsb1609216
  • 5. Wu J, Wang C, Toh S, et al. Use of real-world evidence in regulatory decisions for rare diseases in the United States - current status and future directions. Pharmacoepidemiol Drug Saf 2020;29:1213–8. doi:10.1002/pds.4962
  • 6. Bolislis WR, Fay M, Kühler TC. Use of real-world data for new drug applications and line extensions. Clin Ther 2020;42:926–38. doi:10.1016/j.clinthera.2020.03.006
  • 7. Sun X, Tan J, Tang L, et al. Real world evidence: experience and lessons from China. BMJ 2018;360:j5262. doi:10.1136/bmj.j5262
  • 8. Jenkins DA, Hussein H, Martina R, et al. Methods for the inclusion of real-world evidence in network meta-analysis. BMC Med Res Methodol 2021;21:207. doi:10.1186/s12874-021-01399-3
  • 9. Norris SL, Atkins D, Bruening W, et al. Observational studies in systematic reviews of comparative effectiveness: AHRQ and the Effective Health Care Program. J Clin Epidemiol 2011;64:1178–86. doi:10.1016/j.jclinepi.2010.04.027
  • 10. Page MJ, Shamseer L, Altman DG, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med 2016;13:e1002028. doi:10.1371/journal.pmed.1002028
  • 11. Sarri G, Patorno E, Yuan H, et al. Framework for the synthesis of non-randomised studies and randomised controlled trials: a guidance on conducting a systematic review and meta-analysis for healthcare decision making. BMJ Evid Based Med 2022;27:109–19. doi:10.1136/bmjebm-2020-111493
  • 12. Shea BJ, Reeves BC, Wells G, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017;358:j4008. doi:10.1136/bmj.j4008
  • 13. Saldanha IJ, Adam GP, Bañez LL, et al. Inclusion of nonrandomized studies of interventions in systematic reviews of interventions: updated guidance from the Agency for Healthcare Research and Quality Effective Health Care Program. J Clin Epidemiol 2022;152:300–6. doi:10.1016/j.jclinepi.2022.08.015
  • 14. Brugha TS, Matthews R, Morgan Z, et al. Methodology and reporting of systematic reviews and meta-analyses of observational studies in psychiatric epidemiology: systematic review. Br J Psychiatry 2012;200:446–53. doi:10.1192/bjp.bp.111.098103
  • 15. Golder S, Loke Y, McIntosh HM. Room for improvement? A survey of the methods used in systematic reviews of adverse effects. BMC Med Res Methodol 2006;6:3. doi:10.1186/1471-2288-6-3
  • 16. Moher D, Tetzlaff J, Tricco AC, et al. Epidemiology and reporting characteristics of systematic reviews. PLoS Med 2007;4:e78. doi:10.1371/journal.pmed.0040078
  • 17. Zorzela L, Golder S, Liu Y, et al. Quality of reporting in systematic reviews of adverse events: systematic review. BMJ 2014;348:f7668. doi:10.1136/bmj.f7668
  • 18. Faber T, Ravaud P, Riveros C, et al. Meta-analyses including non-randomized studies of therapeutic interventions: a methodological review. BMC Med Res Methodol 2016;16:35. doi:10.1186/s12874-016-0136-0
  • 19. Bun R-S, Scheer J, Guillo S, et al. Meta-analyses frequently pooled different study types together: a meta-epidemiological study. J Clin Epidemiol 2020;118:18–28. doi:10.1016/j.jclinepi.2019.10.013
  • 20. Golder S, Loke YK, Bland M. Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview. PLoS Med 2011;8:e1001026. doi:10.1371/journal.pmed.1001026
  • 21. Anglemyer A, Horvath HT, Bero L. Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev 2014;2014:MR000034. doi:10.1002/14651858.MR000034.pub2
  • 22. Hong YD, Jansen JP, Guerino J, et al. Comparative effectiveness and safety of pharmaceuticals assessed in observational studies compared with randomized controlled trials. BMC Med 2021;19:307. doi:10.1186/s12916-021-02176-1
  • 23. Moneer O, Daly G, Skydel JJ, et al. Agreement of treatment effects from observational studies and randomized controlled trials evaluating hydroxychloroquine, lopinavir-ritonavir, or dexamethasone for COVID-19: meta-epidemiological study. BMJ 2022;377:e069400. doi:10.1136/bmj-2021-069400
  • 24. Bröckelmann N, Balduzzi S, Harms L, et al. Evaluating agreement between bodies of evidence from randomized controlled trials and cohort studies in medical research: a meta-epidemiological study. BMC Med 2022;20:174. doi:10.1186/s12916-022-02369-2
  • 25. Bröckelmann N, Stadelmaier J, Harms L, et al. An empirical evaluation of the impact scenario of pooling bodies of evidence from randomized controlled trials and cohort studies in medical research. BMC Med 2022;20:355. doi:10.1186/s12916-022-02559-y
  • 26. Yao M, Wang Y, Mei F, et al. Methods for the inclusion of real-world evidence in a rare events meta-analysis of randomized controlled trials. J Clin Med 2023;12:1690. doi:10.3390/jcm12041690
  • 27. Verde PE. A bias-corrected meta-analysis model for combining studies of different types and quality. Biom J 2021;63:406–22. doi:10.1002/bimj.201900376
  • 28. Schwingshackl L, Balduzzi S, Beyerbach J, et al. Evaluating agreement between bodies of evidence from randomised controlled trials and cohort studies in nutrition research: meta-epidemiological study. BMJ 2021;374:n1864. doi:10.1136/bmj.n1864
  • 29. U.S. National Library of Medicine. Abridged Index Medicus (AIM or "Core Clinical") journal titles. Available: http://www.nlm.nih.gov/bsd/aim.html [Accessed 20 Dec 2022].
  • 30. Sterne JAC, Savović J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019;366:l4898. doi:10.1136/bmj.l4898
  • 31. Reeves BC, Deeks JJ, Higgins JPT, et al. Chapter 24: including non-randomized studies on intervention effects. In: Higgins JPT, Thomas J, Chandler J, et al., eds. Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated February 2022). 2022. Available: www.training.cochrane.org/handbook [Accessed 22 Dec 2022].
  • 32. Grant RL. Converting an odds ratio to a range of plausible relative risks for better communication of research findings. BMJ 2014;348:f7450. doi:10.1136/bmj.f7450
  • 33. Dahabreh IJ, Sheldrick RC, Paulus JK, et al. Do observational studies using propensity score methods agree with randomized trials? A systematic comparison of studies on acute coronary syndromes. Eur Heart J 2012;33:1893–901. doi:10.1093/eurheartj/ehs114
  • 34. Riley RD, Higgins JPT, Deeks JJ. Interpretation of random effects meta-analyses. BMJ 2011;342:d549. doi:10.1136/bmj.d549
  • 35. Alkabbani W, Pelletier R, Gamble JM. Sodium/glucose cotransporter 2 inhibitors and the risk of diabetic ketoacidosis: an example of complementary evidence for rare adverse events. Am J Epidemiol 2021;190:1572–81. doi:10.1093/aje/kwab052
