Abstract
Real-world studies have become increasingly important in providing evidence of treatment effectiveness in clinical practice. While randomized clinical trials (RCTs) are the “gold standard” for evaluating the safety and efficacy of new therapeutic agents, necessarily strict inclusion and exclusion criteria mean that trial populations are often not representative of the patient populations encountered in clinical practice. Real-world studies may use information from electronic health and claims databases, which provide large datasets from diverse patient populations, and/or may be observational, collecting prospective or retrospective data over a long period of time. They can therefore provide information on the long-term safety, particularly pertaining to rare events, and effectiveness of drugs in large heterogeneous populations, as well as information on utilization patterns and health and economic outcomes. This review focuses on how evidence from real-world studies can be utilized to complement data from RCTs to gain a more complete picture of the advantages and disadvantages of medications as they are used in practice.
Funding: Sanofi US, Inc.
Keywords: Clinical practice, Real-world data, Real-world study
Introduction
Real-world studies seek to provide a line of complementary evidence to that provided by randomized controlled trials (RCTs). While RCTs provide evidence of efficacy, real-world studies produce evidence of therapeutic effectiveness in real-world practice settings [1]. The RCT is a well-established methodology for gathering robust evidence of the safety and efficacy of medical interventions [2]. In RCTs, the investigators are able to reduce bias and confounding by utilizing randomization and strict patient inclusion and exclusion criteria. This internal validity is often achieved at the expense of external validity (generalizability), since the populations enrolled in RCTs may differ significantly from those found in everyday practice. Real-world evidence has emerged as an important means of understanding the utility of medical interventions in a broader, more representative patient population. The strict exclusion criteria for RCTs may exclude the majority of patients seen in routine care; therefore, real-world evidence can give vital insight into treatment effects in more diverse clinical settings, where many patients have multiple comorbidities [3, 4].
Data from real-world studies can provide evidence that informs payers, clinicians, and patients on how an intervention performs outside the narrow confines of the research setting, providing essential information on the long-term safety and effectiveness of a drug in large populations, its economic performance in a naturalistic setting, and its comparative effectiveness against other treatments. With improvements in the rigor of methodology being applied to real-world studies, along with the increasing availability of higher-quality, larger datasets, the importance of findings from these studies is growing. The value of real-world data has been recognized by regulatory bodies such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) [5, 6]. These bodies acknowledge the importance of real-world data in supporting marketed products and their potential role in life cycle product development/monitoring and in decision-making for regulation and assessment [5, 6]. A survey of the pharmaceutical and medical devices industry in the European Union and the USA determined that 27% of real-world studies conducted by industry are performed “on request” by regulatory authorities [7]. Real-world data form a key component of healthcare technology assessments used by national and regional bodies, such as the National Institute for Health and Care Excellence (NICE) in the UK and Germany’s Institute for Quality and Efficiency in Health Care (IQWiG), to guide clinical decision-making [8]. The data from real-world studies are also increasingly utilized by payers. In a US survey, the majority of payers who responded reported using real-world data to guide decision-making, in particular on utilization management and formulary placement [9]. Such data usage may have profound effects; for example, the reversal of an IQWiG decision that analogue basal insulins showed no benefit over human insulin restored market access and premium pricing for insulin glargine in Germany [10]. The increase in the number of real-world studies has resulted in more clinical evidence being available to guide treatment decisions, and can allow assessment of the impacts of off-label use. In this paper, we review the impact of real-world clinical data and how their interpretation can assist clinicians in appropriately assessing clinical evidence for their own decision-making.
The Association of the British Pharmaceutical Industry defines real-world data as “data that are collected outside the controlled constraints of conventional RCTs to evaluate what is happening in normal clinical practice” [11]. Real-world studies can be either retrospective or prospective, and when they include prospective randomization, they are called “pragmatic trial design” studies (Table 1) [12]. The clearest distinction between RCTs and real-world studies is the setting in which the research is conducted (i.e., where the evidence is generated) [2]. RCTs are typically conducted with precisely defined patient populations, and patient selection is often contingent on meeting extensive eligibility (i.e., inclusion and exclusion) criteria. Participants in such trials (and the data they provide) are subject to rigorous quality standards, with intensive monitoring, the use of detailed case-report forms (to capture additional information that may not be present in ordinary medical records), and carefully managed contact with research personnel (who are responsible for ensuring protocol adherence) being commonplace. Real-world evidence, in contrast, is often derived from multiple sources that lie outside of the typical clinical research setting: these can include offices that are not generally involved in research, electronic health records (EHRs), patient registries, and administrative claims databases (sometimes obtained from integrated healthcare delivery systems). Despite these differences, real-world evidence can also be used retrospectively to form external control arms for RCTs, providing comparative efficacy data [13]. This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors.
Table 1.
| | Randomized controlled trials | Real-world studies (observational) | Real-world studies (pragmatic) |
|---|---|---|---|
| Type of study | Experimental/interventional | Observational/non-interventional | Interventional/pragmatic |
| Design | Prospective | Retrospective/prospective | Prospective |
| Primary focus | Efficacy, safety, quality, cost-effectiveness | Efficacy, safety, quality, cost-effectiveness, natural history, compliance and adherence, service models, patient preferences, comparative effectiveness | Efficacy, safety, quality, cost-effectiveness, natural history, compliance and adherence, service models, patient preferences, comparative effectiveness |
| Patient population | Narrow, restricted, motivated | Diverse, large, and unrestricted | Diverse, large, and unrestricted |
| Monitoring | Intense (ICH-GCP compliant) | Not required (?) | Reflects usual care |
| Comparators | Gold standard/placebo | None/standard clinical practice/multiple iterations | Standard practice/placebo/multiple iterations |
| Outcomes | Clear sequence | Wide range | Wide range |
| Data collection confounders | Standardized, controlled | Routine, recruitment bias (?), recall/interviewer bias | Routine, recruitment bias (?), recall/interviewer bias |
| Randomization | Yes | No | Yes |
| Blinding | Yes | No | Sometimes (participants or outcome assessment) |
| Follow-up | Generally short | Reflects usual care | Long |

ICH-GCP, International Conference on Harmonisation of Good Clinical Practice
Large “pragmatic trials” are an increasingly common real-world data source. Such trials are designed to show the real-world effectiveness of an intervention in a broad patient group [14]. They incorporate a prospective, randomized design and collect data on a wide range of health outcomes in a diverse and heterogeneous population (i.e., they are consistent with clinical practice) [15–17]. Pragmatic trials are conducted in routine practice settings [1], include a population that is relevant for the intervention and a control group treated with an acceptable standard of care (or placebo), and describe outcomes that are meaningful to the population in question [14]. Aspects of care other than the intervention being studied are intentionally not controlled, with clinicians applying clinical discretion in their choice of other medications [11]. Pragmatic trials may focus on a specific type of patient or treatment, and study coordinators may select patients, clinicians, and clinical practices and settings that will maximize external validity (i.e., the applicability of the results to usual practice) [16]. As such, pragmatic trials are able to provide data on a range of clinically relevant real-world considerations, including different treatments, patient- and clinician-friendly titration and treatment algorithms, and cost-effectiveness, which in turn may help address practice- and policy-relevant issues. These studies can focus specifically on the outcomes that are most important to patients and take into account the effects of real-world treatment adherence and compliance on the impact of a medication or treatment regimen for patients.
Understanding the Strengths and Weaknesses of Real-World Studies
Compared with RCT data, real-world evidence has the potential to more efficiently provide answers that inform outcomes research, quality improvement, pharmacovigilance, and patient care [2]. Because they are performed in settings and patient populations similar to those encountered in routine clinical practice, real-world studies have broader generalizability. Specifically, RCTs provide evidence of efficacy, while real-world studies give evidence of effectiveness in real-world practice settings [1]. Additionally, observational, retrospective real-world studies are generally more economical and time efficient than RCTs [18] as they use existing data sources such as registries, claims data, and EHRs to identify study outcomes [16].
Key to the utility of real-world studies is their ability to complement data from RCTs in order to fill current gaps in clinical knowledge. Specific trial criteria may cause RCTs to exclude a particular group of patients commonly seen in clinical practice; for example, RCTs frequently exclude older adults. In the case of diabetes, while many RCTs focus primarily on the safety and glucose-lowering efficacy of antihyperglycemia drugs [19], it is desirable to have real-world effectiveness outcomes data in patients with type 2 diabetes (T2D) that take into account issues such as adherence [20, 21] and the frequency of side effects in less controlled settings (which may affect outcomes). Such studies suggest that the difference in glycated hemoglobin reduction between RCTs and clinical practice may be related to adherence, and they point to the potential value of real-world studies in assessing effectiveness in clinical practice. In addition, real-world evidence can address important issues such as the impact of treatment on microvascular disease and cardiovascular (CV) events [22] and enable the examination of outcomes that are difficult to assess in RCTs, such as the utilization of healthcare resources by patients receiving different therapies. In the DELIVER-3 study, for example, insulin glargine 300 U/ml (Gla-300) was associated with reduced resource utilization compared with other basal insulins [23]. An example that demonstrates the utility of pragmatic trial design is the exploration of patient-driven insulin titration protocols, which reflect the practical needs that patients face in everyday life rather than the needs of a highly controlled, well-motivated RCT population [24–26].
Real-world studies have a number of limitations. Retrospective and non-randomized real-world studies are subject to bias and confounding factors, problems that are controlled for in randomized, blinded trials [27]. Electronic data may be collected inconsistently, with missing data elements that can reduce statistical validity and the ability to answer the research question [16]. The types of bias seen in real-world studies include selection bias (e.g., therapies may be prescribed differentially according to disease severity or other patient characteristics), information bias (misclassification of data), recall bias (caused by selective recall of impactful events by patients/caregivers), and detection bias (where an event is more likely to be captured in one treatment group than another) [28]. While systematic reviews have found little evidence to suggest that treatment effects or adverse events in well-designed observational studies are either overestimated or qualitatively different from those obtained in RCTs, each real-world study must be examined individually for sources of bias and confounding [29–31]. Indeed, because of confounding and bias, caution should be exercised when using data from real-world studies (particularly retrospective studies) to influence change in clinical practice [18]. Techniques such as propensity score matching (PSM) can be used to reduce selection bias by matching the characteristics of patients entering different arms of studies (see below) [32].
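To make the selection-bias mechanism concrete, the following is a minimal simulation sketch (not taken from the cited studies; the variable names and effect sizes are arbitrary assumptions) in which a treatment with no true effect appears harmful in a naive comparison simply because sicker patients are more likely to receive it:

```python
# Toy simulation of confounding by indication (hypothetical numbers):
# a treatment with no true effect looks harmful in a naive comparison
# because sicker patients are preferentially treated.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

severity = rng.normal(0, 1, n)                   # disease severity (the confounder)
p_treated = 1 / (1 + np.exp(-2 * severity))      # sicker patients more likely to be treated
treated = rng.binomial(1, p_treated)

# True model: the outcome depends only on severity, NOT on treatment
p_event = 1 / (1 + np.exp(-(-2 + 1.5 * severity)))
event = rng.binomial(1, p_event)

naive_risk_treated = event[treated == 1].mean()
naive_risk_control = event[treated == 0].mean()
print(f"Event risk, treated: {naive_risk_treated:.3f}")
print(f"Event risk, control: {naive_risk_control:.3f}")
# The treated group shows a markedly higher event rate despite a null
# treatment effect -- the selection-bias pattern described above.
```

A confounder-aware analysis (e.g., adjustment for severity, or the propensity score matching illustrated later in this article) would be needed to recover the null treatment effect in this example.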
Properly designed, prospective, interventional pragmatic trials have the potential to overcome many of the limitations of observational and retrospective real-world studies. However, a key limitation of pragmatic trials is that they place few constraints on patients and clinicians, which may result in inconsistent or missing data in source documents such as EHRs. This, together with heterogeneity in terms of clinical practice and associated documentation, may lead to a reduced capability of the study to answer the research question [16]. In addition, heterogeneity of clinical practice and patient populations reduces the translatability of pragmatic trial data to different settings and locations [33]. There are also numerous challenges inherent in pragmatic trial design. These are illustrated by the trade-off between blinding of results to reduce bias and the desire to create a fully pragmatic design where the intervention is delivered as in normal practice [14]. Pragmatic trials, in producing evidence of effectiveness in real-world practice settings, may trade aspects of internal validity for higher external validity, which ultimately means that they are more generalizable than RCTs [1].
Learning from Real-World Findings: Examples
Retrospective Observational Studies
A real-world study that had a definite effect on prescribing practice concerned a live attenuated nasal spray influenza vaccine in the USA. On the basis of results from a number of RCTs, which showed the superior efficacy of this vaccine over the inactivated influenza vaccine, the Advisory Committee on Immunization Practices (ACIP) issued guidance for its use in children [34]. However, because of data from real-world observational studies showing worse performance than in the RCTs and near-zero performance against some pandemic influenza strains, the ACIP subsequently changed its guidance and recommended against the use of the live attenuated vaccine [34]. Retrospective, observational real-world data can confirm or refute the findings of RCTs. For example, the DELIVER-2 and DELIVER-3 studies were conducted in a broad population of patients with T2D on basal insulin, including at-risk older adults, and showed that those who switched to Gla-300 experienced significantly fewer hypoglycemia events—including events associated with hospitalization or emergency room visits—than those who switched to other basal insulins, without compromising blood glucose control [23, 35, 36], corroborating the results obtained in the EDITION RCTs [37–39].
Prospective Observational Studies
The importance of prospective observational studies is clearly illustrated by the Framingham Heart Study, initiated almost 70 years ago [40]. This study has provided substantial insight into the epidemiology of cardiovascular disease (CVD) and its risk factors, and has significantly influenced clinical thinking and practice. In the case of diabetes, prospective observational studies have provided key evidence that has guided the development of treatment guidelines worldwide. Ten years of long-term follow-up after the completion of the UK Prospective Diabetes Study (UKPDS) confirmed and extended data on the importance of glycemic control in preventing the development of the microvascular and macrovascular complications of T2D in a real-world population [41]. The Epidemiology of Diabetes Interventions and Complications (EDIC) prospective observational follow-up study of the Diabetes Control and Complications Trial (DCCT) has described the long-term effects of prior intensive therapy compared with conventional insulin therapy on the development and progression of microvascular complications and CVD in type 1 diabetes [42].
The prospective observational ReFLeCT study is examining rates of hypoglycemia, glycemic control, patient-reported outcomes, and quality of life under normal clinical practice conditions in approximately 1200 European patients with type 1 or type 2 diabetes who are prescribed insulin degludec. An analysis of data from the Cardiovascular Risk Evaluation in people with type 2 Diabetes on Insulin Therapy (CREDIT) study found that improved glycemic control in patients beginning insulin resulted in significant reductions in CV events such as stroke and CV death; no differences were observed between different insulin regimens, suggesting that good glycemic control was the most important factor [43].
Pragmatic Prospective Randomized Trials
A number of pragmatic randomized trials have been completed or are underway to investigate a range of real-world diabetes patient-care issues, including the long-term effectiveness of major antihyperglycemia medications [44], glucose monitoring [45, 46], insulin initiation [47], and support strategies [48]. Since 2008, the FDA and subsequently the EMA have required sponsors of new antihyperglycemia therapies to evaluate their CV safety. This has resulted in a number of large-scale CV outcome trials including pragmatic trials such as the Trial Evaluating Cardiovascular Outcomes with Sitagliptin (TECOS) [49] and the Exenatide Study of Cardiovascular Event Lowering (EXSCEL) trial [50].
Real-World Studies: Addressing Generalizability
RCT exclusion criteria may rule out a significant proportion of real-world patients. As previously mentioned, patients excluded from RCTs are older, have more medical comorbidities, and have more challenging social and demographic issues than those included in these trials. Real-world studies have the potential to assess whether results seen in RCTs are generalizable to real-world patient populations. The EMPA-REG OUTCOME RCT selected T2D patients with established CVD and, for those treated with the sodium-glucose co-transporter-2 (SGLT2) inhibitor empagliflozin vs placebo, reported a significant reduction in the primary composite endpoint of three-point major adverse cardiovascular events (MACE) (CV death, non-fatal myocardial infarction, and non-fatal stroke), as well as the individual endpoints of CV death, all-cause death, and hospitalization for heart failure [51]. The CANVAS RCT investigating the SGLT2 inhibitor canagliflozin, which included a lower percentage of patients at high CV risk than EMPA-REG, also reported a significant reduction in the primary composite endpoint of three-point MACE and the individual endpoint of hospitalization for heart failure but did not show a significant benefit for CV mortality or all-cause mortality alone [52]. Evidence from real-world studies can support and expand upon such RCT data. The CVD-REAL study, conducted in over 300,000 patients with T2D both with (13% of the total) and without established CVD, showed a consistent reduction in hospitalization for heart failure, suggesting a real-world benefit of the SGLT2 inhibitor drug class as a whole in patients with T2D, irrespective of existing CV risk status or the SGLT2 inhibitor used [53].
Improving Quality of Evidence Generated from Real-World Studies
Criteria for the design of observational studies have been developed and, if followed, should result in higher-quality studies (Table 2) [28]. The STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) guidelines provide a reporting standard for observational studies [54]. An extension of the CONSORT statement for RCTs provides specific guidance for pragmatic trials, including a reporting checklist that covers background, participants, interventions, outcomes, sample size, blinding, participant flow, and generalizability of findings [55]. Adherence to such criteria should improve not only the quality but also the validity of real-world study data in clinical practice.
Table 2.
| Section | Quality criteria |
|---|---|
| Background | Clear underlying hypotheses and specific research question(s) |
| Methods | |
| Study design | Observational comparative effectiveness database study; independent steering committee involved in a priori definition of the study methodology (including statistical analysis plan), review of analyses, and interpretation of results; registration in a public repository with a commitment to publish results |
| Database(s) | High-quality database(s) with few missing data for measures of interest; validation studies |
| Outcomes | Clearly defined primary and secondary outcomes, chosen a priori; the use of proxy and composite measures justified and explained; the validity of proxy measures checked |
| Length of observation | Sufficient duration to reliably assess outcomes of interest and long-term treatment effects |
| Patients | Well-described inclusion and exclusion criteria, reflecting target patients’ characteristics in the real world |
| Analyses | Study groups compared at baseline using univariate analyses; avoidance of biases related to baseline differences using matching and/or adjustments; sensitivity analyses performed to check the robustness of results |
| Sample size | Sample size calculations based on clear a priori hypotheses regarding the occurrence of outcomes of interest and target effect of studied treatment versus comparator |
| Results | Flow chart explaining all exclusions; detailed description of patients’ characteristics, including demographics, characteristics of the disease of interest, comorbidities, and concomitant treatments; characteristics of patients lost to follow-up compared with those of patients remaining in the analyses; extensive presentation of results obtained in unmatched and matched populations (if matching was performed) using univariate and multivariate, as well as unadjusted and adjusted, analyses; sensitivity analyses and/or analyses of several databases go in the same direction as primary analyses |
| Discussion | Summary and interpretation of findings, focusing first on whether they confirm or contradict a priori hypotheses; discussion of differences with results of efficacy RCTs; discussion of possible biases and confounding factors, especially related to the observational nature of the study; suggestions for future research to challenge, strengthen, or extend study results |
Reprinted with permission of the American Thoracic Society. Copyright © 2018 American Thoracic Society [28]
RCT, randomized controlled trial
A number of methods have also been developed to reduce the effects of confounding in observational studies, including PSM. This method aims to make it possible to compare outcomes of two treatment or management options in similar patients [32]. It does this by reducing the effects of multiple covariates to a single score, the propensity score. Comparison of outcomes across treatment groups of pairs or pools of propensity-score-matched patients can reduce issues such as selection bias [32]. Although PSM is a powerful and widely used tool, there are limits to the degree to which propensity score adjustments can control for bias and confounding variables. An example of this can be seen in RCT versus real-world data for mortality in patients with severe heart failure treated with the aldosterone antagonist spironolactone [56]. While RCT data consistently showed a reduction in mortality, in a real-world study using PSM, spironolactone appeared to be associated with a substantially increased risk of death [57]. The authors of that study suggest that concluding that spironolactone is dangerous on the basis of the real-world study is not legitimate because of issues of unknown bias and confounding by indication (i.e., confounding due to factors not in the propensity score or not formally measured at all) [57]. This illustrates a major limitation of PSM: it can only include variables that are in the available data [58]. A further major limitation is that the need for grouping or pairing data in PSM narrows the patient population analyzed, limiting generalizability and thereby reducing one of the main values of real-world studies.
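To illustrate the matching step described above, the following is a minimal sketch of one common PSM workflow (not drawn from the studies cited here): propensity scores are estimated with logistic regression, and treated patients are greedily matched 1:1 to controls on the logit of the score within a caliper. The dataset, covariate names, and caliper value are hypothetical assumptions for illustration only.

```python
# Minimal propensity-score-matching sketch (hypothetical data and column names).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_by_propensity(df, covariates, treatment_col="treated", caliper=0.2):
    # 1. Estimate propensity scores P(treatment | covariates) via logistic regression
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treatment_col])
    df = df.assign(_logit=model.decision_function(df[covariates]))  # log-odds scale

    # Caliper expressed as a fraction of the SD of the logit (a common convention)
    width = caliper * df["_logit"].std()

    treated = df[df[treatment_col] == 1]
    control = df[df[treatment_col] == 0].copy()

    matches = []
    for idx, row in treated.iterrows():
        if control.empty:
            break
        # 2. Greedy nearest-neighbor match on the logit, without replacement
        dist = (control["_logit"] - row["_logit"]).abs()
        best = dist.idxmin()
        if dist[best] <= width:
            matches.append((idx, best))
            control = control.drop(best)
    return matches

# Example usage with a toy, entirely synthetic dataset
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "hba1c": rng.normal(8.0, 1.0, n),
})
# Treatment assignment depends on covariates -> built-in selection bias
p_treat = 1 / (1 + np.exp(-(0.05 * (df["age"] - 60) + 0.5 * (df["hba1c"] - 8))))
df["treated"] = rng.binomial(1, p_treat)

pairs = match_by_propensity(df, covariates=["age", "hba1c"])
print(f"Matched {len(pairs)} treated-control pairs out of {df['treated'].sum()} treated patients")
```

In a real analysis, covariate balance between the matched groups would then be checked (e.g., with standardized mean differences) before outcomes are compared, and, as noted above, any confounder absent from the data cannot be balanced by this procedure.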
“Big data” have emerged as a cutting-edge discipline that draws on data captured from EHRs and other high-volume data sources to efficiently generate hypotheses about the relationship between processes and outcomes. This demands an increased emphasis on the integrity of the data, with “high-quality” data defined in terms of their accuracy, availability and usability, integrity, consistency, standardization, generalizability, and timeliness [59, 60]. Missing data may represent a significant challenge in some datasets. For example, the US healthcare system (unlike many European countries) relies on a number of different laboratory companies to supply laboratory results data, which may result in inconsistencies in the recording of results in EHRs. The technical and methodological challenges presented by these new data sources are an active area of endeavor, with key stakeholders moving towards harmonization of data collected from high-volume sources, with the aim of creating a unified monitoring system and implementing methods for incorporating such data into research [2]. Artificial intelligence (AI) is the natural partner of big data, and the increased availability of these data sources is already allowing AI to improve clinical decision-making. AI techniques have used raw data gleaned from radiographical images, genetic testing, electrophysiological studies, and EHRs to improve diagnoses [6].
As a final caveat, with the increasing availability of real-world data, there may be some discrepancies in information derived from different sources. As with all data, be it from RCTs or real-world practice, consideration should be given to the limitations and generalizability of results when interpreting individual study outcomes and applying them to everyday clinical practice.
Conclusions
Real-world studies provide important information that can complement and/or even expand the information obtained in RCTs. RCTs set the standard for eliminating bias in determining the efficacy and safety of medications, but have significant limitations with regard to generalizability to the broad population of patients with diabetes receiving health care in diverse clinical practice settings. Because real-world studies are performed in actual clinical practice settings, they are better able to assess the effectiveness and safety of medications as they are used in real life by patients and clinicians. With improving study designs, methodological advances, and data sources with more comprehensive data elements, the potential for real-world evidence continues to expand. Moreover, the limitations of real-world studies are better understood and can be better addressed. Real-world evidence can both generate hypotheses requiring further investigation in RCTs and provide answers to some research questions that may be impractical to address through RCTs.
Acknowledgements
KK acknowledges support from the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care-East Midlands (CLAHRC-EM) and the NIHR Leicester Biomedical Research Centre.
Funding
Funding, including article processing charges and Open Access fee, was provided by Sanofi US, Inc. All authors had full access to all of the data in this study and take complete responsibility for the integrity of the data and accuracy of the data analysis.
Medical Writing and Editorial Assistance
The authors received writing and editorial support in the preparation of this manuscript. This support was provided by Grace Richmond, PhD, of Excerpta Medica, funded by Sanofi US, Inc.
Authorship
All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.
Disclosures
Lawrence Blonde received grant/research support and honoraria from AstraZeneca, Intarcia Therapeutics, Janssen Pharmaceuticals, Lexicon Pharmaceuticals, Merck, Novo Nordisk, and Sanofi. Kamlesh Khunti received honoraria and research support from AstraZeneca, Boehringer Ingelheim, Eli Lilly, Janssen, Merck Sharp & Dohme, Novartis, Novo Nordisk, Roche, and Sanofi. Stewart Harris received honoraria and grants/research support from Abbott, AstraZeneca, Boehringer Ingelheim, Eli Lilly, Intarcia, Janssen, Merck, Novo Nordisk, and Sanofi, honoraria and consulting fees from Abbott, AstraZeneca, Boehringer Ingelheim/Lilly, Janssen, Novo Nordisk, and Sanofi, and honoraria from Medtronic and Merck. Casey Meizinger has nothing to disclose. Neil Skolnik served on advisory boards for AstraZeneca, Boehringer Ingelheim, Janssen Pharmaceuticals, Intarcia, Lilly, Sanofi, and Teva, has been a speaker for AstraZeneca and Boehringer Ingelheim, and received research support from AstraZeneca, Boehringer Ingelheim, and Sanofi.
Compliance with Ethics Guidelines
This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors.
Footnotes
Enhanced digital features
To view enhanced digital features for this article go to 10.6084/m9.figshare.7156850.
References
- 1.Luce BR, Drummond M, Jönsson B, et al. EBM, HTA, and CER: clearing the confusion. Milbank Q. 2010;88:256–276. doi: 10.1111/j.1468-0009.2010.00598.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Sherman RE, Anderson SA, Dal Pan GJ, et al. Real-world evidence—what is it and what can it tell us? N Engl J Med. 2016;375:2293–2297. doi: 10.1056/NEJMsb1609216. [DOI] [PubMed] [Google Scholar]
- 3.Barnish MS, Turner S. The value of pragmatic and observational studies in health care and public health. Pragmat Obs Res. 2017;8:49–55. doi: 10.2147/POR.S137701. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Fortin M, Dionne J, Pinho G, Gignac J, Almirall J, Lapointe L. Randomized controlled trials: do they have external validity for patients with multiple comorbidities? Ann Fam Med. 2006;4(2):104–108. doi: 10.1370/afm.516. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.FDA. Developing a framework for regulatory use of real-world evidence; Public Workshop. https://www.gpo.gov/fdsys/pkg/FR-2017-07-31/pdf/2017-16021.pdf. Accessed 08 Sep 2017.
- 6.EMA. Update on real world evidence data collection. 10 March 2016. https://ec.europa.eu/health//sites/health/files/files/committee/stamp/2016-03_stamp4/4_real_world_evidence_ema_presentation.pdf. Accessed 08 Sep 2017.
- 7.Batrouni M, Comet D, Meunier JP. Real world studies, challenges, needs and trends from the industry. Value Health. 2014;17:A587–A588. doi: 10.1016/j.jval.2014.08.2006. [DOI] [PubMed] [Google Scholar]
- 8.Goodman CS. National Information Center on Health Services Research and Health Care Technology (NICHSR): HTA 101, 2017. https://www.nlm.nih.gov/nichsr/hta101/ta10103.html. Accessed Feb 2018.
- 9.Malone DC, Brown M, Hurwitz JT, Peters L, Graff JS. Real-world evidence: useful in the real world of US payer decision making? How? When? And what studies? Value Health. 2018;21(3):326–333. doi: 10.1016/j.jval.2017.08.3013. [DOI] [PubMed] [Google Scholar]
- 10.Cattell J, Groves P, Hughes B, Savas S. How can pharmacos take advantage of the real-world data opportunity in healthcare? McKinsey and Company, 2011. https://www.mckinsey.com/~/media/mckinsey/dotcom/client_service/Pharma%20and%20Medical%20Products/PMP%20NEW/PDFs/Pharma%20%20RWD%20opportunity%20October%202011.ashx. Accessed Feb 2018.
- 11.ABPI. The vision for real world data—harnessing the opportunities in the UK. Demonstrating value with real world data 2017. http://www.abpi.org.uk/media/1378/vision-for-real-world-data.pdf. Accessed 22 Jan 2018.
- 12.Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Clin Epidemiol. 2009;62(5):499–505. doi: 10.1016/j.jclinepi.2009.01.012. [DOI] [PubMed] [Google Scholar]
- 13.Davies J, Martinex M, Martina R, et al. Retrospective indirect comparison of alectinib phase II data vs ceritinib real-world data in ALK+ NSCLC after progression on crizotinib. Ann Oncol. 2017;28(Suppl 2):ii28–ii51.
- 14.Ford I, Norrie J. Pragmatic trials. N Engl J Med. 2016;375:454–463. doi: 10.1056/NEJMra1510059. [DOI] [PubMed] [Google Scholar]
- 15.Dang A, Vallish BN. Real world evidence: an Indian perspective. Perspect Clin Res. 2016;7:156–160. doi: 10.4103/2229-3485.192030. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Sox HC, Lewis RJ. Pragmatic trials: practical answers to “real world” questions. JAMA. 2016;316:1205–1206. doi: 10.1001/jama.2016.11409. [DOI] [PubMed] [Google Scholar]
- 17.Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. 2003;290:1624–1632. doi: 10.1001/jama.290.12.1624. [DOI] [PubMed] [Google Scholar]
- 18.Dubois RW. Is the real-world evidence or hypothesis: a tale of two retrospective studies. J Comp Eff Res. 2015;4(3):199–201. doi: 10.2217/cer.15.17. [DOI] [PubMed] [Google Scholar]
- 19.Clinicaltrials.gov. Studies for “Diabetes Mellitus, Type 2”. https://clinicaltrials.gov/ct2/results?cond=Diabetes+Mellitus%2C+Type+2&term=&cntry=&state=&city=&dist=. Accessed 21 Aug 2018.
- 20.Carls GS, Tuttle E, Tan RD, et al. Understanding the gap between efficacy in randomized controlled trials and effectiveness in real-world use of GLP-1 RA and DPP-4 therapies in patients with type 2 diabetes. Diabetes Care. 2017;40:1469–1478. doi: 10.2337/dc16-2725. [DOI] [PubMed] [Google Scholar]
- 21.Edelman SV, Polonsky WH. Type 2 diabetes in the real world: the elusive nature of glycemic control. Diabetes Care. 2017;40:1425–1432. doi: 10.2337/dc16-1974. [DOI] [PubMed] [Google Scholar]
- 22.McGovern A, Hinchliffe R, Munro N, de Lusignan S. Basing approval of drugs for type 2 diabetes on real world outcomes. BMJ. 2015;351:h5829. doi: 10.1136/bmj.h5829. [DOI] [PubMed] [Google Scholar]
- 23.Zhou FL, Ye F, Gupta V, et al. Older adults with type 2 diabetes (T2D) experience less hypoglycemia when switching to insulin glargine 300 U/mL (Gla-300) vs other basal insulins (DELIVER 3 study). Poster 986-P, American Diabetes Association (ADA) 77th Scientific Sessions, San Diego, CA, US, June 10, 2017.
- 24.Blonde L, Merilainen M, Karwe V, Raskin P. Patient-directed titration for achieving glycaemic goals using a once-daily basal insulin analogue: an assessment of two different fasting plasma glucose targets-the TITRATE™ study. Diabetes Obes Metab. 2009;11:623–631. doi: 10.1111/j.1463-1326.2009.01060.x. [DOI] [PubMed] [Google Scholar]
- 25.Gerstein HC, Yale JF, Harris SB, Issa M, Stewart JA, Dempsey E. A randomized trial of adding insulin glargine vs. avoidance of insulin in people with type 2 diabetes on either no oral glucose-lowering agents or submaximal doses of metformin and/or sulphonylureas: the Canadian INSIGHT (Implementing New Strategies with Insulin Glargine for Hyperglycaemia Treatment) Study. Diabet Med. 2006;23(7):736–742. doi: 10.1111/j.1464-5491.2006.01881.x. [DOI] [PubMed] [Google Scholar]
- 26.Meneghini L, Koenen C, Weng W, Selam JL. The usage of a simplified self-titration dosing guideline (303 Algorithm) for insulin detemir in patients with type 2 diabetes—results of the randomized, controlled PREDICTIVE™ 303 study. Diabetes Obes Metab. 2007;9:902–913. doi: 10.1111/j.1463-1326.2007.00804.x. [DOI] [PubMed] [Google Scholar]
- 27.Garrison LP, Jr, Neumann PJ, Erickson P, Marshall D, Mullins CD. Using real-world data for coverage and payment decisions: the ISPOR Real-World Data Task Force report. Value Health. 2007;10:326–335. doi: 10.1111/j.1524-4733.2007.00186.x. [DOI] [PubMed] [Google Scholar]
- 28.Roche N, Reddel H, Martin R, et al. Quality standards for real-world research. Focus on observational database studies of comparative effectiveness. Ann Am Thorac Soc. 2014;11(Suppl 2):S99–S104. doi: 10.1513/AnnalsATS.201309-300RM. [DOI] [PubMed] [Google Scholar]
- 29.Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000;342:1878–1886. doi: 10.1056/NEJM200006223422506. [DOI] [PubMed] [Google Scholar]
- 30.Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342:1887–1892. doi: 10.1056/NEJM200006223422507. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Golder S, Loke YK, Bland M. Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview. PLoS Med. 2011;8:e1001026. doi: 10.1371/journal.pmed.1001026. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.McMurry TL, Hu Y, Blackstone EH, Kozower BD. Propensity scores: methods, considerations, and applications. J Thorac Cardiovasc Surg. 2015;150:14–19. doi: 10.1016/j.jtcvs.2015.03.057. [DOI] [PubMed] [Google Scholar]
- 33.Patsopoulos NA. A pragmatic view on pragmatic trials. Dialogues Clin Neurosci. 2011;13:217–224. doi: 10.31887/DCNS.2011.13.2/npatsopoulos. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Frieden TR. Evidence for health decision making—beyond randomized, controlled trials. N Engl J Med. 2017;377:465–475. doi: 10.1056/NEJMra1614394. [DOI] [PubMed] [Google Scholar]
- 35.Ye F, Agarwal R, Kaur A, et al. Real-world assessment of patient characteristics and clinical outcomes of early users of the new insulin glargine 300U/mL. Poster 943-P, American Diabetes Association (ADA) 76th Scientific Sessions, New Orleans, LA, US. June 11, 2016.
- 36.Zhou FL, Ye F, Berhanu P, et al. Real-world evidence concerning clinical and economic outcomes of switching to insulin glargine 300 units/mL vs other basal insulins in patients with type 2 diabetes using basal insulin. Diabetes Obes Metab. 2018;20(5):1293–1297. doi: 10.1111/dom.13199. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Bolli GB, Riddle MC, Bergenstal RM, et al. New insulin glargine 300 U/ml compared with glargine 100 U/ml in insulin-naïve people with type 2 diabetes on oral glucose-lowering drugs: a randomized controlled trial (EDITION 3) Diabetes Obes Metab. 2015;17:386–394. doi: 10.1111/dom.12438. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Riddle MC, Bolli GB, Ziemen M, Muehlen-Bartmer I, Bizet F, Home PD. New insulin glargine 300 units/mL versus glargine 100 units/mL in people with type 2 diabetes using basal and mealtime insulin: glucose control and hypoglycemia in a 6-month randomized controlled trial (EDITION 1). Diabetes Care. 2014;37(10):2755–2762. doi: 10.2337/dc14-0991. [DOI] [PubMed] [Google Scholar]
- 39.Yki-Järvinen H, Bergenstal R, Ziemen M, et al. New insulin glargine 300 units/mL versus glargine 100 units/mL in people with type 2 diabetes using oral agents and basal insulin: glucose control and hypoglycemia in a 6-month randomized controlled trial (EDITION 2) Diabetes Care. 2014;37:3235–3243. doi: 10.2337/dc14-0990. [DOI] [PubMed] [Google Scholar]
- 40.Mahmood SS, Levy D, Vasan RS, Wang TJ. The Framingham Heart Study and the epidemiology of cardiovascular disease: a historical perspective. Lancet. 2014;383:999–1008. doi: 10.1016/S0140-6736(13)61752-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Stratton IM, Adler AI, Neil HA, et al. Association of glycaemia with macrovascular and microvascular complications of type 2 diabetes (UKPDS 35): prospective observational study. BMJ. 2000;321:405–412. doi: 10.1136/bmj.321.7258.405. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Research Group. Effect of intensive therapy on the microvascular complications of type 1 diabetes mellitus. JAMA. 2002;287:2563–9. [DOI] [PMC free article] [PubMed]
- 43.Freemantle N, Danchin N, Calvi-Gries F, Vincent M, Home PD. Relationship of glycaemic control and hypoglycaemic episodes to 4-year cardiovascular outcomes in people with type 2 diabetes starting insulin. Diabetes Obes Metab. 2016;18:152–158. doi: 10.1111/dom.12598. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Nathan DM, Buse JB, Kahn SE, et al. Rationale and design of the glycemia reduction approaches in diabetes: a comparative effectiveness study (GRADE) Diabetes Care. 2013;36:2254–2261. doi: 10.2337/dc13-0356. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Wermeling PR, Gorter KJ, Stellato RK, et al. Effectiveness and cost-effectiveness of 3-monthly versus 6-monthly monitoring of well-controlled type 2 diabetes patients: a pragmatic randomised controlled patient-preference equivalence trial in primary care (EFFIMODI study) Diabetes Obes Metab. 2014;16:841–849. doi: 10.1111/dom.12288. [DOI] [PubMed] [Google Scholar]
- 46.Young LA, Buse JB, Weaver MA, et al. Three approaches to glucose monitoring in non-insulin treated diabetes: a pragmatic randomized clinical trial protocol. BMC Health Serv Res. 2017;17:369. doi: 10.1186/s12913-017-2202-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Furler J, O’Neal D, Speight J, et al. Supporting insulin initiation in type 2 diabetes in primary care: results of the Stepping Up pragmatic cluster randomised controlled clinical trial. BMJ. 2017;356:j783. doi: 10.1136/bmj.j783. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Choudhry NK, Isaac T, Lauffenburger JC, et al. Rationale and design of the Study of a Tele-pharmacy Intervention for Chronic diseases to Improve Treatment adherence (STIC2IT): a cluster-randomized pragmatic trial. Am Heart J. 2016;180:90–97. doi: 10.1016/j.ahj.2016.07.017. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Green JB, Bethel MA, Armstrong PW, et al. Effect of sitagliptin on cardiovascular outcomes in type 2 diabetes. N Engl J Med. 2015;373(3):232–242. doi: 10.1056/NEJMoa1501352. [DOI] [PubMed] [Google Scholar]
- 50.Holman RR, Bethel MA, Mentz RJ, et al. Effects of once-weekly exenatide on cardiovascular outcomes in type 2 diabetes. N Engl J Med. 2017;377:1228–1239. doi: 10.1056/NEJMoa1612917. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Zinman B, Wanner C, Lachin JM, et al. Empagliflozin, cardiovascular outcomes, and mortality in type 2 diabetes. N Engl J Med. 2015;373:2117–2128. doi: 10.1056/NEJMoa1504720. [DOI] [PubMed] [Google Scholar]
- 52.Neal B, Perkovic V, Mahaffey KW, et al. Canagliflozin and cardiovascular and renal events in type 2 diabetes. N Engl J Med. 2017;377(7):644–657. doi: 10.1056/NEJMoa1611925. [DOI] [PubMed] [Google Scholar]
- 53.Kosiborod M, Cavender MA, Fu AZ, et al. Lower risk of heart failure and death in patients initiated on sodium-glucose cotransporter-2 inhibitors versus other glucose-lowering drugs: the CVD-REAL study (comparative effectiveness of cardiovascular outcomes in new users of sodium-glucose cotransporter-2 inhibitors). Circulation. 2017;136(3):249–59. [DOI] [PMC free article] [PubMed]
- 54.STROBE. STROBE Statement: Strengthening the reporting of observational studies in epidemiology. https://www.strobe-statement.org/index.php?id=available-checklists. Accessed 26 Sep 2018.
- 55.Zwarenstein M, Treweek S, Gagnier JJ, et al. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390. doi: 10.1136/bmj.a2390. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.Pitt B, Zannad F, Remme WJ, et al. The effect of spironolactone on morbidity and mortality in patients with severe heart failure. N Engl J Med. 1999;341(10):709–717. doi: 10.1056/NEJM199909023411001. [DOI] [PubMed] [Google Scholar]
- 57.Freemantle N, Marston L, Walters K, et al. Making inferences on treatment effects from real world data: propensity scores, confounding by indication, and other perils for the unwary in observational research. BMJ. 2013;347:f6409. doi: 10.1136/bmj.f6409. [DOI] [PubMed] [Google Scholar]
- 58.Penning de Vries BBL, Groenwold RHH. Cautionary note: propensity score matching does not account for bias due to censoring. Nephrol Dial Transplant. 2017;1–3. [DOI] [PubMed]
- 59.Zhang R, Wang Y, Liu B, et al. Clinical data quality problems and countermeasure for real world study. Front Med. 2014;8(3):352–357. doi: 10.1007/s11684-014-0351-1. [DOI] [PubMed] [Google Scholar]
- 60.Chen JH, Asch SM. Machine learning and prediction in medicine—beyond the peak of inflated expectations. N Engl J Med. 2017;376(26):2507. doi: 10.1056/NEJMp1702071. [DOI] [PMC free article] [PubMed] [Google Scholar]