Journal of Comparative Effectiveness Research. 2023 Jun 22;12(7):e230092. doi: 10.57264/cer-2023-0092

R WE ready for reimbursement? A round up of developments in real-world evidence relating to health technology assessment: part 12

Ben Bray 1,2, Sreeram V Ramagopalan 1,3,*
PMCID: PMC10508304  PMID: 37345541

Abstract

In this latest update we highlight the final results from the RCT-DUPLICATE initiative, the publication of guidance from the Haute Autorité de Santé (HAS), the joint viewpoint from the Institute for Quality and Efficiency in Health Care (IQWIG) and the Belgian Health Care Knowledge Centre, and a position from the European Organization for Research and Treatment of Cancer (EORTC). Finally, we discuss how the NICE RWE framework has been implemented to allow consideration of RWE external control arms.

Keywords: Belgian Health Care Knowledge Center, European Organization for Research and Treatment of Cancer, Haute Autorité de Santé, Institute for Quality and Efficiency in Health Care, NICE, randomized controlled trials, RCT-DUPLICATE, real-world evidence


Randomized controlled trials (RCTs) are considered the gold standard for demonstrating the efficacy of medicines by regulators, health technology assessment (HTA) agencies and the healthcare practitioner community [1]. In order for studies using real-world data (RWD) to be more widely accepted for HTA decision making, treatment effectiveness estimates obtained from appropriately designed studies using RWD should be consistent with those from RCTs. This is the premise on which the RCT-DUPLICATE initiative was born [2], and the final results of this landmark study have just been published [3]. By mirroring the design of 30 completed and two ongoing RCTs with analyses from three US healthcare claims data sources, the initiative aimed to ascertain the level of agreement between RCT results and a replicated version of the trial using RWD. The researchers aimed to emulate RCT designs by identifying and implementing observational analogues of RCT design parameters (population, intervention, comparator, outcome and time-frame [PICOT]) and used propensity score matching to balance confounders. Analyses were prespecified and a protocol registered before exposure–outcome relationships were investigated. The RCTs chosen covered cardiovascular diseases, diabetes, osteoporosis, chronic kidney disease, prostate cancer, asthma and COPD. The study found that 75% of the comparisons between the RCT and the RWD replication met statistical significance agreement (defined as the RCT and RWD estimates and CIs being on the same side of the null) and 66% met estimate agreement (defined as the estimate for the RWD emulation falling within the 95% CI of the RCT result). In a post hoc analysis limited to 16 RCTs with closer emulation of trial design and measurements, concordance was higher: 94% met statistical significance agreement and 88% met estimate agreement.
The authors concluded that three factors were important in determining whether RWD could reliably replicate an RCT: the ability to emulate key features of the trial in the available RWD, residual confounding and chance. Reasons for an inability to emulate a trial in RWD included poorer adherence to medications in the real world, missing outcome data (e.g., HbA1c), difficulties in replicating a placebo comparator and difficulties in defining appropriate treatment periods (e.g., maintenance therapy). Taken together, this study demonstrates that in certain circumstances real-world evidence (RWE) may be able to support causal conclusions, providing options when RCTs may not be feasible. Notably, when the trial design and measurements could be closely emulated, a remarkable degree of concordance was observed between the RCT and RWD. Given that HTA bodies are perhaps more concerned with the magnitude of effect and the implications for cost–effectiveness estimates than with simply the existence and direction of an effect, the fact that 88% of trials that could be closely emulated showed estimate agreement is reassuring in this context. Learnings from this study should help guide decision makers as to when trial emulation is possible in claims-based RWD for the indications investigated. Given the limitations of claims databases, such as the lack of detail about important clinical measures, challenges in outcome ascertainment and the vagaries of medical coding, it is perhaps surprising that the results were as good as they were. Questions remain as to whether RWD sources with more detailed clinical data (e.g., electronic medical records, patient registries) would have shown even greater consistency with RCTs, and how emulation varies by therapy area and end point. One learning for researchers from this study was that protocols were registered after feasibility analyses but before analysis of exposure–outcome relationships.
Feasibility analyses play an important but perhaps overlooked role in the design of RWE studies and this study provides a useful example of using these to address key questions about the ability of RWD to emulate an RCT, without losing the benefits that pre-registration of study protocols provide.
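The two agreement metrics used by RCT-DUPLICATE, as described above, can be expressed in a few lines of code. The sketch below is purely illustrative: the hazard ratios and confidence intervals are invented for demonstration and are not taken from the published study.

```python
def statistical_significance_agreement(rct_ci, rwd_ci):
    """Agreement if the RCT and RWD estimates' 95% CIs sit on the same
    side of the null (HR = 1), or both CIs cross the null."""
    def side(ci):
        lo, hi = ci
        if hi < 1.0:
            return -1  # entire CI below the null: significant benefit
        if lo > 1.0:
            return 1   # entire CI above the null: significant harm
        return 0       # CI crosses the null: not significant
    return side(rct_ci) == side(rwd_ci)

def estimate_agreement(rct_ci, rwd_estimate):
    """Agreement if the RWD point estimate falls within the RCT's 95% CI."""
    lo, hi = rct_ci
    return lo <= rwd_estimate <= hi

# Hypothetical emulation result (hazard ratios, invented numbers)
rct_ci = (0.70, 0.95)            # RCT 95% CI
rwd_hr, rwd_ci = 0.82, (0.72, 0.93)  # RWD estimate and 95% CI

print(statistical_significance_agreement(rct_ci, rwd_ci))  # True
print(estimate_agreement(rct_ci, rwd_hr))                  # True
```

Note that estimate agreement is the stricter criterion in practice, which is consistent with the lower proportions reported for it in the study.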

Despite positive demonstration projects like RCT-DUPLICATE, HTA stakeholders still hold differing views on the utility of RWD as a measure of effectiveness. An article published recently by the French HTA body, the Haute Autorité de Santé (HAS) [4], provides guidance on the use of external control arms for single-arm trials. HAS stresses the need to justify why an RCT cannot be performed, similar to other guidance such as that from the US FDA [5]. Where an external control is going to be used, it should be planned before the initiation of the single-arm trial, and the comparison should seek to emulate a target trial, with the external control reflecting standard of care. Systematic searches should be done to identify the most appropriate data source for the external control as well as to identify potential confounders. Comparison methods using individual patient data are preferred, with appropriate methods to adjust for confounders (e.g., propensity scores). Residual confounding should be addressed with negative controls or quantitative bias analysis. Given the limited acceptance to date of RWD external control arms by HAS [6], the guidance is to be welcomed in assisting the generation, usefulness and (where appropriate) acceptance of external controls to enable patient access to new medicines, and it will be of interest to see how submissions containing RWD external controls fare in the future. Of note is the number of RWD guidances now recommending quantitative bias analysis, the use of which in the HTA setting was first suggested in this journal [7]. A view on RWD similar to that of HAS is seen from the European Organization for Research and Treatment of Cancer (EORTC) [8]. While not directly involved in the HTA process, EORTC is a non-profit cancer research organization involved in clinical trial conduct, the results of which will ultimately be submitted for HTA review.
EORTC stated a preference for randomized RWD (e.g., pragmatic trials), but where randomization is not possible, the organization will consider setting up non-randomized RWD studies following target trial approaches. Providing a very different perspective are authors from the Institute for Quality and Efficiency in Health Care (IQWIG) and the Belgian Health Care Knowledge Centre, who suggest that RWD cannot be reliably used to measure treatment effectiveness and that researchers should always strive to perform RCTs [9]. This is in line with a generally critical view of RWD by IQWIG [10,11], and while we are yet to have a full read-out of all demonstration projects definitively stating in which indications and from which data sources RWD can appropriately emulate RCTs, it is a shame that IQWIG does not want to be pragmatic in cases where conventional RCTs are stifled by patient preferences and an absence of equipoise. Given this dismissiveness of RWE, it will be interesting to see how the German authorities view the results of the post-approval RWD studies for spinal muscular atrophy that they have mandated [2].
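Quantitative bias analysis, recommended in the HAS guidance above, covers a family of methods for asking how strong unmeasured confounding would have to be to explain away an observed effect. As one illustrative example of such a method (chosen by us, not mandated by HAS), the E-value of VanderWeele and Ding can be computed from a point estimate alone; the risk ratio below is hypothetical.

```python
import math

def e_value(rr: float) -> float:
    """E-value for a point estimate on the risk-ratio scale: the minimum
    strength of association an unmeasured confounder would need with both
    treatment and outcome to fully explain away the observed effect."""
    rr = rr if rr >= 1.0 else 1.0 / rr  # invert protective effects
    return rr + math.sqrt(rr * (rr - 1.0))

# Hypothetical observed risk ratio of 1.8 from a non-randomized comparison
print(round(e_value(1.8), 2))  # prints 3.0
```

An E-value of 3.0 would mean that only a confounder associated with both treatment and outcome by a risk ratio of at least 3 could fully account for the observed effect, which is a more interpretable statement for an HTA committee than "residual confounding may exist".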

We now have a wealth of RWD guidances from regulators and HTA agencies, representing a significant amount of work from all involved. A question may be asked as to what the impact of these guidances has been. One positive example comes from NICE during the evaluation of mobocertinib [12] and amivantamab [13] for EGFR exon 20 insertion mutation-positive advanced non-small-cell lung cancer. This indication is rare, and therefore single-arm trials were conducted, with RWD submitted as external comparators. In both cases the NICE committees believed that the RWD comparison was justified and showed a statistically significant benefit in favour of the new technologies; however, there was uncertainty around the exact level of improvement due to the potential for residual confounding in the comparison. The NICE RWE framework [14] was referenced in the decision making, and the committees asked the manufacturers to complete the DataSAT checklist to describe the RWD used. The fact that both committees called the RWE ‘the best available source of evidence’ shows how far things have come compared with previously disregarding submitted RWD external control arms entirely [15]. Perhaps the concern around residual confounding could have been addressed with quantitative bias analysis. The use of RWE in HTA is evolving rapidly, and we look forward to seeing further changes in how HTA agencies evaluate RWE in light of new guidances and methodologies.

Footnotes

Financial & competing interests disclosure

The author SV Ramagopalan has received an honorarium from Becaris Publishing for the contribution of this work. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.

No writing assistance was utilized in the production of this manuscript.

Open access

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0/

References

  • 1.Ramagopalan SV, Simpson A, Sammon C. Can real-world data really replace randomised clinical trials? BMC Med. 18, 13 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Simpson A, Ramagopalan SV. R WE ready for reimbursement? A round up of developments in real-world evidence relating to health technology assessment. J. Comp. Eff. Res. 10(10), 797–799 (2021). [DOI] [PubMed] [Google Scholar]
  • 3.Wang SV, Schneeweiss S. RCT-DUPLICATE Initiative. Emulation of randomized clinical trials with nonrandomized database analyses: results of 32 clinical trials. J. Am. Med. Assoc. 329, 1376–1385 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Vanier A, Fernandez J, Kelley S et al. Rapid access to innovative medicinal products while ensuring relevant health technology assessment. Position of the French National Authority for Health. BMJ Evid. Based Med. (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Bray B, Ramagopalan SV. R WE ready for reimbursement? A round up of developments in real-world evidence relating to health technology assessment: part 11. J. Comp. Eff. Res. 12(5), e230008 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Cox O, Sammon C, Simpson A, Wasiak R, Ramagopalan S, Thorlund K. The (harsh) reality of real-world data external comparators for health technology assessment. Value Health 25(7), 1253–1256 (2022). [DOI] [PubMed] [Google Scholar]
  • 7.Sammon CJ, Leahy TP, Gsteiger S, Ramagopalan S. Real-world evidence and nonrandomized data in health technology assessment: using existing methods to address unmeasured confounding? J. Comp. Eff. Res. 9(14), 969–972 (2020). [DOI] [PubMed] [Google Scholar]
  • 8.Saesen R, Van Hemelrijck M, Bogaerts J et al. Defining the role of real-world data in cancer clinical research: the position of the European Organisation for Research and Treatment of Cancer. Eur. J. Cancer 186, 52–61 (2023). [DOI] [PubMed] [Google Scholar]
  • 9.Wieseler B, Neyt M, Kaiser T, Hulstaert F, Windeler J. Replacing RCTs with real world data for regulatory decision making: a self-fulfilling prophecy? Brit. Med. J. 380, e073100 (2023). [DOI] [PubMed] [Google Scholar]
  • 10.Simpson A, Ramagopalan SV. R WE ready for reimbursement? A round up of developments in real-world evidence relating to health technology assessment: part 8. J. Comp. Eff. Res. 11(13), 915–917 (2022). [DOI] [PubMed] [Google Scholar]
  • 11.Simpson A, Ramagopalan SV. R WE ready for reimbursement? A round up of developments in real-world evidence relating to health technology assessment: part 10. J. Comp. Eff. Res. 12(1), e220194 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.National Institute for Health and Care Excellence (NICE). Mobocertinib for treating EGFR exon 20 insertion mutation-positive advanced non-small-cell lung cancer after platinum-based chemotherapy. Technology appraisal guidance [TA855] (2023). www.nice.org.uk/guidance/ta855
  • 13.National Institute for Health and Care Excellence (NICE). Amivantamab for treating EGFR exon 20 insertion mutation-positive advanced non-small-cell lung cancer after platinum-based chemotherapy. Technology appraisal guidance [TA850] (2022). www.nice.org.uk/guidance/ta850
  • 14.National Institute for Health and Care Excellence (NICE). NICE real-world evidence framework. Corporate document [ECD9] (2022). www.nice.org.uk/corporate/ecd9/chapter/overview
  • 15.National Institute for Health and Care Excellence (NICE). Entrectinib for treating ROS1-positive advanced non-small-cell lung cancer. Technology appraisal guidance [TA643] (2020). www.nice.org.uk/guidance/ta643

