Abstract
Purpose of review:
This review describes the learning healthcare system paradigm, recent examples, and future directions. Patients, clinicians, and health systems frequently encounter decisions between available treatments, technologies, and healthcare delivery methods with little or no evidence about the comparative effectiveness and safety of the available options. Learning healthcare systems endeavor to recognize such knowledge gaps, integrate comparative effectiveness research – including clinical trials – into clinical care to address the knowledge gaps, and seamlessly implement the results into practice to improve care and patient outcomes.
Recent findings:
Recent studies comparing the effectiveness of diagnostic tests and treatments, using information technology to identify patients likely to experience an outcome or benefit from an intervention, and evaluating models of healthcare delivery have demonstrated how a learning healthcare system approach can reduce arbitrary variation in care, decrease cost, and improve patient outcomes.
Summary:
Learning healthcare systems have the potential to answer questions of importance to patients, clinicians, and health system leaders, improve efficiency of healthcare delivery, and improve patient outcomes. Achieving this goal will require realignment of the culture around clinical care, institutional and federal investment, expanded stakeholder engagement, tailored ethical and regulatory guidance, and methodologic advances in information technology and biostatistics.
Keywords: Clinical trials, comparative effectiveness research, learning healthcare system
Introduction
Over the last century, biomedical research has provided patients, clinicians, and health systems with an astonishing array of new medications, technologies, and healthcare delivery methods, helping improve survival and quality of life for a wide range of acute and chronic conditions [1,2]. As an unexpected consequence, in clinical care today patients and clinicians frequently must choose between numerous treatments for a given condition, with little or no evidence about the comparative effectiveness and safety of the available options. As a result, the treatment that an individual patient would receive varies with the clinician, health system, formulary, and geographic region. This arbitrary variation in care (i.e., variation driven by factors other than knowledge of which treatment is best for the patient) systematically and unknowingly exposes some patients to ineffective or harmful therapies and represents a “profoundly serious moral problem” for current clinical practice [3]. Just as the biomedical research system is ideally positioned to discover new treatments, the healthcare systems that deliver clinical care are uniquely positioned to generate and apply evidence on the comparative effectiveness of available therapies. The paradigm of a ‘learning healthcare system’ – in which researchers, patients, clinicians, and health system leaders collaborate to identify knowledge gaps, integrate comparative effectiveness research into clinical practice, and systematically implement best evidence into practice – has been proposed as a method for reducing potentially harmful arbitrary variation in care, decreasing cost, and improving patient outcomes.
The Problem
More than a decade ago, the National Academy of Medicine (NAM) established the goal of having “90 percent of clinical decisions…supported by accurate, timely, and up-to-date clinical information [reflecting] the best available evidence” [4]. Yet today, the vast majority of clinical decisions faced by patients, clinicians, and health systems still lack high-quality evidence [5–8**]. In critical care, common and seemingly simple questions such as “what is the best vasopressor agent for septic shock?”, “what is the best treatment for alcohol withdrawal?”, and “what is the best oxygen saturation target during mechanical ventilation?” remain unanswered for decades, and millions of patients each year receive therapies that may be ineffective or harmful, without contributing new knowledge or improving care for future patients.
Generating evidence to inform common comparative effectiveness decisions has been limited by the cost, inefficiency, and inadequate representativeness of traditional explanatory randomized trials. Explanatory trials, typified by drug development trials, are conducted to determine whether a therapy demonstrates the hypothesized mechanism of action under idealized conditions. They are ill-suited to informing the choice between available treatments, technologies, and healthcare delivery methods for several reasons. First, the use of research personnel and data systems separate from the personnel and data systems of clinical care is inefficient, expensive, and time-consuming. Phase III trials of a new drug typically cost more than $100 million and require 3–5 years to conduct [9,10]. At this pace and cost, generating evidence for the hundreds of decisions clinicians face each day would take centuries and cost trillions of dollars (see the illustrative calculation below). Second, pharmaceutical companies’ financial investments in trials of novel drugs and devices are offset by the potential to earn billions of dollars in profit [11], whereas no similar incentive exists to compare interventions already common in clinical care. Third, while proving that a new drug is better than placebo in a homogeneous patient population might require enrolling hundreds or thousands of patients, understanding the optimal choice between two common therapies for the broad range of patients receiving them in practice might require enrolling tens of thousands of patients. Fourth, site selection, narrow eligibility criteria, and difficulty recruiting and retaining patients frequently result in explanatory trial populations that do not represent the diversity of age, sex, race, ethnicity, economic status, health literacy, and comorbidities needed to understand the effectiveness and safety of interventions in real-world clinical care [12–14]. Fifth, many knowledge gaps about the best use of a technology (e.g., electronic health record [EHR] alerts) or method of delivering care (e.g., nurse staffing) are context-dependent, delivered to clusters of patients or clinicians, and not suited to evaluation in traditional, patient-level, placebo-controlled explanatory trials.
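To make that scale concrete, a rough back-of-the-envelope calculation is shown below. The number of open questions (10,000) and the assumed capacity (100 concurrent trials) are illustrative assumptions, not estimates from the literature; only the per-trial cost and duration reflect the figures cited above [9,10].

```latex
% Illustrative calculation; the 10,000 questions and 100 concurrent trials are assumed values.
10{,}000 \ \text{questions} \times \$100 \ \text{million per trial} \approx \$1 \ \text{trillion}
\qquad
\frac{10{,}000 \ \text{questions}}{100 \ \text{concurrent trials}} \times 4 \ \text{years per trial} \approx 400 \ \text{years}
```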
A Learning Healthcare System as a Potential Solution
The concept of empowering healthcare systems to integrate clinical research and clinical care to improve outcomes for patients was first articulated by the NAM [15], which defined a learning healthcare system as one in which “science, informatics, incentives, and culture are aligned for continuous improvement and innovation, with best practices seamlessly embedded in the care process, patients and families as active participants in all elements, and new knowledge is captured as an integral by-product of the care experience”. A learning healthcare system would ideally (1) recognize knowledge gaps, (2) integrate comparative effectiveness research – including randomized clinical trials – into clinical care to address the knowledge gaps, and (3) seamlessly implement the results into practice to improve care and patient outcomes. The patients, clinicians, and leaders within a healthcare system are often better positioned than external researchers, funders, or industry representatives to identify areas of uncertainty and controversy that represent a comparative effectiveness knowledge gap. In a learning healthcare system, arbitrary variation in clinical care can be recognized as a marker of clinical equipoise, highlighting an opportunity for comparative effectiveness research (Figure 1). Transforming arbitrary variation in clinical care into structured variation in the form of effectiveness trials embedded within clinical care (sometimes referred to as pragmatic trials) is a potentially inexpensive, efficient, and inclusive method for health systems to close knowledge gaps and improve care and patient outcomes.
Figure 1. Arbitrary Variation in Clinical Care vs. Structured Variation in a Learning Healthcare System Trial.
The left panel shows a schematic of arbitrary variation in clinical care. When two or more therapies are available for a common condition without evidence to inform which is best for a given patient, clinicians are forced to make an arbitrary choice between the therapies. One clinician may choose therapy A while another may choose therapy B. The patient experiences all the benefits and risks of whichever therapy is received, but no knowledge is gained, and care is not improved for future patients. When one of the two therapies is superior, continuing to allow arbitrary variation in care systematically exposes patients to ineffective or harmful treatments. The right panel shows a schematic of how a learning healthcare system can address such knowledge gaps. Ideally, arbitrary variation in clinical care is recognized as a marker of clinical equipoise, prompting studies that structure the variation between therapy A and therapy B through a comparative effectiveness trial. Because both therapies are routinely used in clinical care and neither is known to be better for the patient, randomizing patients between therapy A and therapy B may not significantly change the overall number of patients receiving each therapy, or the risks and benefits those patients experience, compared to usual clinical care. However, by structuring variation through a clinical trial, the learning healthcare system generates evidence regarding which therapy is superior for which patients and applies that knowledge to improve care for future patients.
COVID-19: Learning Healthcare Systems vs Traditional Clinical Care and Clinical Trials
The coronavirus disease 2019 (COVID-19) pandemic has dramatically highlighted the contrast between integrating pragmatic trials into clinical care in a learning healthcare system and the traditional approach of conducting explanatory trials segregated from clinical care. As COVID-19 emerged in early 2020, the prevalence and lethality of the virus made identifying which of the available potential treatments were safe and effective an urgent priority [16].
In the United States (US), the comparative effectiveness knowledge gaps in the treatment of COVID-19 resulted in two parallel, and sometimes conflicting [17**], responses: (1) a rush to conduct explanatory randomized trials and (2) widespread arbitrary variation in the treatments provided to patients in clinical care. For example, the antimalarial medication hydroxychloroquine was proposed as a potential treatment for COVID-19 early in the pandemic [16]. In March of 2020, with unprecedented efficiency, the “Outcomes Related to COVID-19 Treated With Hydroxychloroquine Among Inpatients With Symptomatic Disease (ORCHID)” trial was designed, federally funded, and approved by the U.S. Food and Drug Administration and a central Institutional Review Board in just 15 days [18*]. Dedicated research staff for the blinded, placebo-controlled trial enrolled 479 patients from 34 academic hospitals in the Prevention and Early Treatment of Acute Lung Injury (PETAL) Network before the trial was stopped for futility. Between March, when the ORCHID trial was designed, and November, when the results from the 479 patients were published, hundreds of thousands of patients across the US were treated with hydroxychloroquine as a part of clinical care [19*], experiencing the risks and benefits of the drug without improving knowledge or care for future patients.
In the United Kingdom (UK) during the same period, the National Health Service designed the Randomised Evaluation of COVID-19 Therapy (RECOVERY) trial, which also aimed to compare the effectiveness of available therapies for COVID-19 but evaluated hydroxychloroquine using a radically different, learning healthcare system approach. At the direction of the National Health Service, health system leaders encouraged patients and clinicians to participate in the RECOVERY trial to facilitate rapid evaluation of untested therapies and discouraged the use of these therapies outside of trials. RECOVERY trial procedures were embedded within clinical care and conducted by treating clinicians after a brief training session, and data were collected from the medical record. By embedding the pragmatic trial in clinical care, RECOVERY enrolled approximately 11,000 patients between March 25 and June 5, 2020 [20*] – approximately 23 times as many patients as the ORCHID trial enrolled during the same period [21**], despite case counts approximately one-tenth those of the US [22]. While the ORCHID trial was designed to answer only one question, the RECOVERY trial was designed to concurrently evaluate the full range of comparative effectiveness questions in COVID-19 treatment, including the use of corticosteroids, azithromycin, and antivirals [21**,23*,24*]. Consistent with the paradigm of a learning healthcare system, National Health Service hospitals continuously identified new knowledge gaps to evaluate, such as the use of convalescent plasma and tocilizumab, and implemented findings from each completed trial arm into treatment guidelines for future patients to continually and iteratively improve outcomes [25].
Ideal Research Questions for a Learning Healthcare System
No single method is best for all clinical research. Even if the role of learning healthcare systems expands to address many common comparative effectiveness questions, traditional explanatory trials will likely remain the primary method for drug development and translational research. The following sections describe examples of three types of knowledge gaps that learning healthcare systems may be well suited to address (additional recent examples are provided in Table 1).
Table 1.
Recent Trials in a Learning Healthcare System Paradigm
| Study Type | Trial | Description and Comment |
|---|---|---|
| Patient-level comparative effectiveness research | Preventing Hypoxemia with Manual Ventilation during Endotracheal Intubation (PreVent) Trial [40] | Randomized trial of bag-mask ventilation to prevent hypoxemia during emergency airway management in the ICU. Enrollment, randomization, delivery of the intervention, and data collection were embedded into clinical care. |
| | Preventing Cardiovascular collaPse With Administration of Fluid Resuscitation Before Endotracheal Intubation (PrePARE) Trial [41*] | Randomized trial of a pre-intubation fluid bolus to prevent hypotension and cardiac arrest during emergency airway management in the ED and ICU. Enrollment, randomization, delivery of the intervention, and data collection were embedded into clinical care. |
| | The RECOVERY Trial [20,21,23,24] | Adaptive platform randomized trial evaluating therapies for COVID-19 in National Health Service hospitals in the UK. Screening, consent, enrollment, delivery of the intervention, and data collection were embedded into routine care. |
| | Pragmatic, Randomized Clinical Trial of Gestational Diabetes Screening [42**] | Randomized trial conducted by Kaiser Permanente Northwest and Kaiser Permanente Hawaii comparing one-step vs two-step screening for gestational diabetes. Screening, enrollment, randomization, intervention delivery, and outcome assessment all embedded within the EHR. |
| | Urgent vs. Early Endoscopy in High-Risk Patients With Upper Gastrointestinal Bleeding Trial [43*] | Randomized trial of the timing of endoscopy for acute upper gastrointestinal bleeding. Screening, enrollment, randomization, and intervention delivery all embedded within routine care. |
| Unit-level comparative effectiveness research | Isotonic Solutions and Major Adverse Renal Events Trial (SMART) [26] | Cluster-randomized, multiple-crossover trial comparing balanced crystalloids with saline for intravenous fluid administration among critically ill adults admitted to five intensive care units. Screening, enrollment, randomization, intervention delivery, and outcome assessment all embedded within the EHR. |
| | Saline against Lactated Ringer’s or Plasma-Lyte in the Emergency Department (SALT-ED) Trial [44] | Single-center, cluster-crossover trial comparing balanced crystalloids with saline for intravenous fluid administration among noncritically ill adults in the ED. Screening, enrollment, randomization, intervention delivery, and outcome assessment all embedded within the EHR. |
| | Proton Pump Inhibitors vs Histamine-2 Receptor Blockers for Ulcer Prophylaxis Treatment in the Intensive Care Unit (PEPTIC) Trial [45**] | Cluster-randomized, crossover trial comparing proton pump inhibitors to histamine-2 receptor blockers for peptic ulcer prophylaxis among critically ill adults admitted to 50 intensive care units. Screening, enrollment, randomization, and intervention delivery all embedded within routine care; outcomes were collected from a pre-existing registry. |
| | Neonatal Resuscitation With Supraglottic Airway (NeoSupra) Trial [46] | Cluster-randomized, multiple-crossover trial comparing laryngeal mask airway (LMA) to face-mask ventilation during neonatal resuscitation in low-income countries. Screening, enrollment, randomization, and intervention delivery all embedded in routine care. |
| | Chlorhexidine Bathing Trial [47] | Cluster-randomized, multiple-crossover trial of once-daily bathing of all patients with 2% chlorhexidine compared to nonantimicrobial cloths among critically ill adults admitted to 5 intensive care units. Screening, enrollment, randomization, intervention delivery, and outcome assessment all embedded in routine care. |
| Hospital-level comparative effectiveness research | High-Sensitivity Troponin in the Evaluation of Patients With Acute Coronary Syndrome (High-STEACS) Trial [48] | Stepped-wedge trial evaluating implementation of high-sensitivity cardiac troponin assays to identify patients at high risk of cardiovascular events. Screening, enrollment, randomization, and intervention delivery all embedded within the EHR. |
| | Active Bathing to Eliminate Infection (ABATE Infection) Trial [49] | Cluster-randomized trial of 53 hospitals comparing routine bathing to decolonization with universal chlorhexidine and nasal mupirocin. Screening, enrollment, randomization, intervention delivery, and outcome assessment all embedded within clinical care. |
| | Head Positioning in Acute Stroke (HeadPost) Trial [50] | Cluster-randomized, crossover trial of head positioning after acute stroke, conducted at 114 hospitals in 9 countries. Screening, enrollment, randomization, and intervention delivery all embedded within clinical care. |
| Trials using information technology to identify patients likely to experience an outcome or benefit from an intervention | Advanced Alert Monitor at Kaiser Permanente Northern California [35**] | Used a model embedded within the EHR to identify patients at risk of unplanned ICU transfer within 12 hours (afferent limb), triggering a clinical evaluation (efferent limb). Results were compared to control patients from hospitals where the model was functional but not linked to a clinical team alert (no efferent limb). Screening, enrollment, intervention delivery, and outcome assessment all embedded within the EHR. |
| | Optimizing Electronic Alerts for Acute Kidney Injury [36**] | Used a statistical model embedded within the EHR to identify patients at risk of acute kidney injury (afferent limb), triggering an order set for best management practices (efferent limb). Screening, enrollment, randomization, intervention delivery, and outcome assessment all embedded within the EHR. |
| | Evaluating Processes of Care and Outcomes of Children in Hospital (EPOCH) Trial [51] | Cluster-randomized trial at 21 hospitals comparing a bedside pediatric early warning score to usual care with regard to in-hospital mortality. Screening, enrollment, intervention delivery, and outcome assessment all embedded within clinical care. |
| Trials evaluating models of healthcare delivery | Health Care Hotspotting Trial [38**] | Randomly assigned patients with medically and socially complex conditions to receive or not receive assistance from a care-transition program composed of nurses, social workers, and community health workers. |
| | Vanderbilt ICU Recovery Program Pilot (VIP) Trial [52] | Randomly assigned critically ill patients to enrollment in an ICU recovery program or usual care. Screening, enrollment, randomization, and intervention delivery all embedded within routine care. |
| | Improving Patient and Family Centered Care in Advanced Critical Illness [53] | Stepped-wedge, cluster-randomized trial that randomized critically ill patients in five intensive care units to a multicomponent family-support intervention compared to usual care. Screening, enrollment, randomization, and intervention delivery all embedded within routine care. |
| | Strategies to Reduce Injuries and Develop Confidence in Elders (STRIDE) Trial [54*] | Cluster-randomized trial at ten clinical sites evaluating the effectiveness of a multifactorial intervention to prevent fall injuries. Screening, randomization, and intervention delivery embedded within routine care. |
| | Promoting Successful Weight Loss in Primary Care in Louisiana (PROPEL) Trial [55*] | Cluster-randomized trial at 18 primary care clinics evaluating the effectiveness of a high-intensity, lifestyle-based program for obesity treatment. Screening, randomization, intervention delivery, and outcome assessment all embedded within routine care. |
| | Randomized Order Safety Trial Evaluating Resident-Physician Schedules (ROSTERS) Trial [56] | Cluster-randomized trial of resident work schedules conducted in six pediatric intensive care units. Screening, enrollment, randomization, and intervention delivery all embedded within routine care. |
| | Trial of Method of Contraception Prescription Delivery (Bridge-It) [57*] | Cluster-randomized, crossover trial in 29 pharmacies evaluating the effectiveness of providing women with a bridging supply of the progestogen-only pill compared to usual care (referral to the usual contraceptive provider). Screening, enrollment, randomization, and intervention delivery all embedded within routine care. |
| | Cut Your Pressure Too: The Los Angeles Barbershop Blood Pressure Study [58] | Cluster-randomized trial of blood pressure control conducted in 52 black-owned barbershops evaluating the effect of embedding a pharmacist-led intervention within a nontraditional health care setting. |
Comparing the Effectiveness of Diagnostic Tests or Treatments
Several recent high-profile trials have demonstrated the power of learning healthcare systems to efficiently compare the effectiveness of diagnostic tests and treatments by embedding pragmatic trials into clinical care at the level of the patient, provider, unit, or hospital (Table 1).
Administration of intravenous fluids is the most common treatment received by hospitalized patients. For more than a century, two basic classes of isotonic fluid have existed: 0.9% sodium chloride (saline) and crystalloid solutions with electrolyte compositions closer to that of plasma (balanced crystalloids). Until recently, no large trials had compared balanced crystalloids to saline with regard to patient outcomes. Between June 1, 2015, and April 30, 2017, the Learning Healthcare System at Vanderbilt University Medical Center conducted the Isotonic Solutions and Major Adverse Renal Events Trial (SMART), a pragmatic, unblinded, cluster-randomized, multiple-crossover trial comparing balanced crystalloids with saline among critically ill adults [26]. Trial screening, enrollment, group assignment, intervention delivery, and outcome assessment were all performed by software applications within the EHR. As shown in Figure 2, both fluids were commonly used in the health system before the trial, and conduct of the trial did not significantly change the total amount of each crystalloid used in the health system. After the trial but before the results were known, treating clinicians returned to using slightly more saline than balanced crystalloids. Once the trial’s analysis demonstrated that balanced crystalloids led to improved clinical outcomes compared to saline, the learning healthcare system implemented the trial results via the same software applications used during the trial, resulting in a rapid change to balanced crystalloids for nearly all future patients.
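As a concrete illustration of how group assignment can be embedded in the EHR, the sketch below implements a month-by-month, unit-level crossover in Python. The function names, the offset scheme, and the order-set structure are hypothetical simplifications for illustration; they are not the actual SMART trial software.

```python
from datetime import date

# Illustrative sketch of EHR-embedded, cluster-level group assignment
# (hypothetical logic, not the Vanderbilt SMART trial software).
# Each ICU (cluster) alternates between study fluids month by month,
# with clusters offset so both fluids remain in use at any given time.

FLUIDS = ["balanced crystalloid", "saline"]

def assigned_fluid(unit_offset: int, encounter_date: date) -> str:
    """Return the study fluid assigned to a unit for a given calendar month.

    unit_offset: 0 or 1, fixed per ICU at randomization, so that units
    start on different fluids and cross over at the start of each month.
    """
    months_since_epoch = encounter_date.year * 12 + encounter_date.month
    return FLUIDS[(months_since_epoch + unit_offset) % 2]

def order_set_default(unit_offset: int, encounter_date: date) -> dict:
    """Build the default IV-fluid order the EHR would surface to clinicians."""
    fluid = assigned_fluid(unit_offset, encounter_date)
    return {
        "order": "isotonic IV fluid",
        "default_product": fluid,
        "note": "Trial assignment; clinicians may override for specific indications.",
    }

# Example: a medical ICU with offset 0 crosses over between March and April 2016.
print(order_set_default(0, date(2016, 3, 15))["default_product"])
print(order_set_default(0, date(2016, 4, 15))["default_product"])
```

Because the assignment is computed from data the EHR already holds (the unit and the calendar date), this kind of design requires no separate research infrastructure at the point of care.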
Figure 2. Phases of Research in a Learning Healthcare System.
This figure demonstrates the ‘observation, intervention, implementation’ phases of research in a learning healthcare system, as exemplified by the SMART trial [26]. The y-axis shows the percent of isotonic fluid administered in the Vanderbilt medical ICU that was saline from 2014 to 2019. Before the SMART trial, 60–75% of the isotonic fluid administered was saline, with some clinicians administering as much as 100% saline and others administering as little as 10% saline. During the trial, the choice of isotonic fluid alternated monthly between saline and balanced crystalloids, achieving high rates of compliance with the assigned fluid without markedly changing the total proportion of fluid that was saline, compared to clinical care before the trial. After the trial was completed but before the results were known, clinicians returned to using slightly more saline than balanced crystalloids. Once the trial’s analysis demonstrated improved clinical outcomes with balanced crystalloids compared to saline, the learning healthcare system implemented the trial results using the same software applications used to deliver the assigned intervention during the trial. This mechanism allowed rapid integration of study findings, and within one month of implementation, nearly all of the fluid administered in the study unit was balanced crystalloid.
Using Information Technology to Identify Patients Likely to Experience an Outcome or Benefit from an Intervention
Advances in technology, risk modeling, and machine learning have created an opportunity to embed prognostic and predictive models within the EHR with the aim of improving patient outcomes, health care quality, and equity. Learning healthcare systems are well-suited for the iterative process of defining the problem, developing algorithms, implementing tools, and evaluating the performance of the tools and their effect on patient care and outcomes. Recent examples of embedding prognostic or predictive models for sepsis, critical illness, and acute kidney injury into the EHR have highlighted significant opportunities and challenges.
Hospital-acquired sepsis is an attractive use-case for applying predictive tools to improve care delivery. Several EHR-based early warning systems have been developed to alert clinicians in real-time to inpatients meeting criteria for sepsis. Early efforts to deploy warning systems, however, did not improve clinical outcomes or processes of care [27–31]. Consistent with the iterative process of a learning healthcare system, early models are being refined to better define the clinical problem and incorporate temporally granular EHR data, perspectives of bedside clinicians, and tenets of implementation science [32,33].
An example is the hourly Advanced Alert Monitor at Kaiser Permanente Northern California, which identifies patients at risk of unplanned ICU transfer within 12 hours. The model was originally implemented as a proof of concept [34], was iteratively updated and refined, and was then deployed and evaluated in a prospective, quasi-experimental study of nearly 44,000 hospitalizations [35**]. In this study, the model was deployed in a staggered fashion across 19 hospitals over 3 years. Combining the model to identify patients at risk of deterioration with a triggered clinical review and response from a rapid-response team led to decreases in 30-day mortality, ICU transfer, and hospital length of stay, compared to a control group composed of patients from hospitals where the model had been deployed but not yet linked to a clinical response.
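The afferent/efferent structure of such a system can be sketched in a few lines of code. The threshold, field names, and paging workflow below are hypothetical illustrations, not Kaiser Permanente’s actual implementation.

```python
# Illustrative sketch of the afferent/efferent structure of an EHR deterioration alert
# (hypothetical thresholds and workflow; not the actual Advanced Alert Monitor system).

from dataclasses import dataclass

@dataclass
class HourlySnapshot:
    patient_id: str
    deterioration_risk: float   # model-estimated probability of ICU transfer within 12 h
    recently_alerted: bool      # suppress repeat pages within a lockout window

ALERT_THRESHOLD = 0.08  # assumed operating point balancing alert workload and sensitivity

def afferent_limb(snapshot: HourlySnapshot) -> bool:
    """Afferent limb: decide whether the model output should trigger an alert."""
    return snapshot.deterioration_risk >= ALERT_THRESHOLD and not snapshot.recently_alerted

def efferent_limb(snapshot: HourlySnapshot) -> None:
    """Efferent limb: the structured clinical response to an alert."""
    print(f"Page rapid-response nurse to evaluate patient {snapshot.patient_id}; "
          f"estimated 12-hour deterioration risk {snapshot.deterioration_risk:.0%}.")

def hourly_run(snapshots: list) -> None:
    """Run once per hour over all current inpatients."""
    for snapshot in snapshots:
        if afferent_limb(snapshot):
            efferent_limb(snapshot)

hourly_run([HourlySnapshot("A123", 0.12, False), HourlySnapshot("B456", 0.03, False)])
```

In the evaluation described above, hospitals with both limbs active were effectively compared with hospitals where only the afferent limb (the model) was running.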
Another recent study, which evaluated an EHR alert for acute kidney injury, underscores the importance of rigorously evaluating the effects of implementing predictive and prognostic models. In a randomized trial at six hospitals, Wilson and colleagues found that an alert notifying clinicians of patients meeting diagnostic criteria for acute kidney injury with a link to an order set for best management practices was effective in changing clinician behavior but unexpectedly led to increased mortality compared to usual care [36**].
Collectively, these examples demonstrate the important potential role that a learning healthcare system can play in evaluating the clinical effects of implementing new predictive models, decision support, or other technology. They highlight the importance of considering both the model (afferent limb) and the clinical actions taken in response to the model (efferent limb). They also highlight the value of rigorously evaluating the effects of implementing the model, either to help refine the intervention or to recognize and de-implement an intervention found to be ineffective or harmful.
Evaluating Models of Healthcare Delivery
Healthcare systems are also ideally positioned to evaluate models of healthcare delivery. Historically, healthcare systems have designed and implemented interventions such as appointment reminder cards, post-discharge phone calls, and programs targeting patients with high healthcare utilization without high-quality evidence of efficacy [37**]. Such interventions are rarely formally evaluated, and even ineffective programs may be newly implemented or continued for years at significant financial cost and burden to patients, clinicians, and the health system. The value of formally evaluating models of healthcare delivery was recently demonstrated by the Health Care Hotspotting trial [38**]. This trial randomly assigned patients with medically and socially complex conditions to receive or not receive assistance from a care-transition program composed of nurses, social workers, and community health workers. This team visited patients, coordinated outpatient care, and linked patients with social services. Health systems globally had implemented similar resource-intensive programs based on promising observational studies. In the 800-patient Health Care Hotspotting trial, however, there was no difference in readmission or any other outcome between patients assigned to the post-discharge program and those assigned to usual care.
Challenges to the Development of Learning Healthcare Systems
Developing robust learning healthcare systems will require solving several challenges. To succeed, learning healthcare systems must find new ways to meaningfully engage patient, clinician, and health system stakeholders in each phase of research, from prioritization to implementation and dissemination [39]. These stakeholders will need to transition from viewing arbitrary variation in treatments as an acceptable characteristic of clinical care to viewing it as an opportunity to improve outcomes. An ethical and regulatory framework will need to be tailored to the incremental risk of comparing common treatments that patients are already receiving, rather than to the development of novel drugs with unknown safety profiles. Learning healthcare systems must extend across multiple hospitals or health systems to produce generalizable knowledge and improve clinical care at scale. Doing so will require addressing challenges related to information technology, data interoperability, privacy and confidentiality, and health system priorities and governance. Finally, learning healthcare systems must develop advanced statistical modeling techniques that use data from large pragmatic trials to estimate the effects of treatments for individual patients, rather than the average effect of treatment across the population (a minimal sketch is shown below). Addressing these challenges may fulfill the promise of a learning healthcare system in which patients, clinicians, and health system leaders across multiple hospitals collaborate to identify knowledge gaps, use information technology to embed pragmatic trials into clinical care to address those knowledge gaps, and implement individual treatment effect estimates from those trials to help patients and clinicians make evidence-based, personalized treatment decisions for the myriad choices that comprise routine medical care.
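One minimal sketch of moving from average to individualized treatment effect estimates is a regression model with a treatment-by-covariate interaction, fit below to simulated data. The covariate, effect sizes, and data are illustrative assumptions; real applications would use richer models and careful validation.

```python
# Minimal sketch of estimating individualized (rather than average) treatment effects
# from a pragmatic trial using a logistic model with a treatment-by-covariate interaction.
# Simulated data and variable names are illustrative assumptions, not trial data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
treatment = rng.integers(0, 2, n)               # 1 = therapy A, 0 = therapy B
creatinine = rng.normal(1.2, 0.5, n)            # baseline covariate (mg/dL), assumed
logit = -1.0 + 0.4 * creatinine + treatment * (-0.3 + 0.25 * creatinine)
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Design matrix: intercept, treatment, covariate, treatment x covariate interaction
X = sm.add_constant(np.column_stack([treatment, creatinine, treatment * creatinine]))
fit = sm.GLM(outcome, X, family=sm.families.Binomial()).fit()

def individual_effect(cr: float) -> float:
    """Predicted absolute risk difference (therapy A vs. B) for a patient with creatinine cr."""
    x_a = np.array([[1.0, 1.0, cr, cr]])
    x_b = np.array([[1.0, 0.0, cr, 0.0]])
    return float(fit.predict(x_a)[0] - fit.predict(x_b)[0])

# The estimated effect of therapy A varies with baseline creatinine.
print(round(individual_effect(0.8), 3), round(individual_effect(2.5), 3))
```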
Conclusion
Learning healthcare systems have the potential to improve patient outcomes, answer questions of importance to patients, clinicians, and health system leaders, and improve efficiency of healthcare delivery. Achieving this goal will require expanded stakeholder engagement, institutional and federal investment, tailored ethical and regulatory guidance, methodologic advances in information technology and biostatistics, and realignment of the culture around clinical care.
Key Points:
Traditional explanatory trials in the biomedical research system frequently occur separately from clinical care and are poorly suited to comparing available treatments, contributing to the arbitrary variation in clinical care that exposes patients to ineffective or harmful therapies.
A learning healthcare system can: 1) observe arbitrary provider practice variation and other opportunities to improve care, 2) intervene to determine the best treatment for patients (often by comparing available treatments in comparative effectiveness clinical trials), and 3) rapidly implement knowledge generated through research to improve the care of future patients.
Clinical care and research during the COVID-19 pandemic exemplified the differences between a learning healthcare system (e.g. the UK model in which use of untested therapies outside of trials was discouraged and efforts were made to include all patients in efficient, pragmatic clinical trials implemented by treating clinicians) and segregated models of care (e.g. the US model in which small, independent research teams separate from treating clinicians enrolled a fraction of patients into trials while clinicians commonly administered untested therapies to hundreds of thousands of patients in clinical care outside of trials).
Five key challenges facing the development of learning healthcare systems are: 1) institutional and federal investment in infrastructure for learning healthcare systems, 2) expanded engagement of patient, clinician, and health system stakeholders, 3) ethical and regulatory guidance tailored to the incremental risk inherent in learning healthcare system research, 4) methodologic advances in information technology to enable multicenter learning healthcare systems and in biostatistics to allow trials to deliver evidence-based individual treatment effect estimates, and 5) realignment of the culture of clinical care to see comparative effectiveness research as inherent to choosing the treatments, technologies, and methods of healthcare delivery that produce the best outcomes for patients.
Acknowledgements
The authors wish to thank Henry J. Domenico, MS and Daniel W. Byrne, MS of the Department of Biostatistics at Vanderbilt University Medical Center for helping to conceptualize and design Figure 2.
Funding: JDC was supported in part by the NIH/NHLBI (K23HL153584). KRC was supported in part by the NIH/NHLBI (K23HL143181). TWR was supported in part by NHLBI U01 HL123009, NCRR U54 RR 032646, NCATS U24 TR 001608. MWS was supported in part by the NHLBI (K23HL143053).
Footnotes
Disclaimer: On behalf of all authors, the corresponding author states that there are no additional potential conflicts of interest.
REFERENCES
- [1]. Streptomycin Treatment of Pulmonary Tuberculosis. Br Med J 1948;2:769–82.
- [2]. Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA 1992;268:2420–5. 10.1001/jama.1992.03490170092032.
- [3]. Faden RR, Kass NE, Goodman SN, et al. An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics. Hastings Cent Rep 2013;Spec No:S16–27. 10.1002/hast.134.
- [4]. Institute of Medicine. Leadership Commitments to Improve Value in Health Care: Finding Common Ground: Workshop Summary. 2010. 10.17226/11982.
- [5]. Fanaroff AC, Califf RM, Windecker S, et al. Levels of Evidence Supporting American College of Cardiology/American Heart Association and European Society of Cardiology Guidelines, 2008–2018. JAMA 2019;321:1069–80. 10.1001/jama.2019.1122.
- [6]. Sims CR, Warner MA, Stelfox HT, Hyder JA. Above the GRADE: Evaluation of Guidelines in Critical Care Medicine. Crit Care Med 2019;47:109–13. 10.1097/CCM.0000000000003467.
- [7]. Zhang Z, Hong Y, Liu N. Scientific evidence underlying the recommendations of critical care clinical practice guidelines: a lack of high level evidence. Intensive Care Med 2018;44:1189–91. 10.1007/s00134-018-5142-8.
- [8]. Simon GE, Platt R, Hernandez AF. Evidence from Pragmatic Trials during Routine Care — Slouching toward a Learning Health System. New England Journal of Medicine 2020;382:1488–91. 10.1056/NEJMp1915448. **Excellent review of the ways in which our current research infrastructure fails to provide evidence on comparative effectiveness questions, which highlights the potential for embedded pragmatic trials as part of learning healthcare systems to address these knowledge gaps.
- [9]. Institute of Medicine (US) Forum on Drug Discovery, Development, and Translation. The State of Clinical Research in the United States: An Overview. National Academies Press (US); 2010.
- [10]. DiMasi JA, Hansen RW, Grabowski HG. The price of innovation: new estimates of drug development costs. J Health Econ 2003;22:151–85. 10.1016/S0167-6296(02)00126-1.
- [11]. Pollack A. Sales of Sovaldi, New Gilead Hepatitis C Drug, Soar to $10.3 Billion. The New York Times 2015.
- [12]. Colon-Otero G, Smallridge RC, Solberg LA, et al. Disparities in participation in cancer clinical trials in the United States: a symptom of a healthcare system in crisis. Cancer 2008;112:447–54. 10.1002/cncr.23201.
- [13]. Kwiatkowski K, Coe K, Bailar JC, Swanson GM. Inclusion of minorities and women in cancer clinical trials, a decade later: Have we improved? Cancer 2013;119:2956–63. 10.1002/cncr.28168.
- [14]. Wilder J, Saraswathula A, Hasselblad V, Muir A. A Systematic Review of Race and Ethnicity in Hepatitis C Clinical Trial Enrollment. J Natl Med Assoc 2016;108:24–9. 10.1016/j.jnma.2015.12.004.
- [15]. Committee on the Learning Health Care System in America, Institute of Medicine. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington (DC): National Academies Press (US); 2013.
- [16]. Sanders JM, Monogue ML, Jodlowski TZ, Cutrell JB. Pharmacologic Treatments for Coronavirus Disease 2019 (COVID-19): A Review. JAMA 2020. 10.1001/jama.2020.6019.
- [17]. Dominus S. The Covid Drug Wars That Pitted Doctor vs. Doctor. The New York Times 2020. **Investigative reporting demonstrating the tensions created during the COVID-19 pandemic by using separate infrastructures for clinical care and research.
- [18]. Casey JD, Johnson NJ, Semler MW, Collins SP, et al. Rationale and Design of ORCHID: A Randomized Placebo-controlled Clinical Trial of Hydroxychloroquine for Adults Hospitalized with COVID-19. Annals ATS 2020;17:1144–53. 10.1513/AnnalsATS.202005-478SD. *Review of the rationale and design of the primary US trial to evaluate the use of hydroxychloroquine among hospitalized patients, which describes how widespread open-label use of hydroxychloroquine presented a challenge to the conduct of a definitive randomized trial.
- [19]. Bull-Otterson L. Hydroxychloroquine and Chloroquine Prescribing Patterns by Provider Specialty Following Initial Reports of Potential Benefit for COVID-19 Treatment — United States, January–June 2020. MMWR Morb Mortal Wkly Rep 2020;69. 10.15585/mmwr.mm6935a4. *Observational study demonstrating how hundreds of thousands of patients received off-label hydroxychloroquine during the early months of the COVID-19 pandemic.
- [20]. Effect of Hydroxychloroquine in Hospitalized Patients with Covid-19. New England Journal of Medicine 2020;383:2030–40. 10.1056/NEJMoa2022926. *Results from the RECOVERY trial showing how a learning healthcare system in the UK was able to rapidly provide evidence that hydroxychloroquine is ineffective as a treatment for COVID-19.
- [21]. Dexamethasone in Hospitalized Patients with Covid-19. New England Journal of Medicine 2021;384:693–704. 10.1056/NEJMoa2021436. **Results from the RECOVERY trial showing how a learning healthcare system in the UK was able to rapidly provide evidence that dexamethasone improves mortality for patients hospitalized with COVID-19.
- [22].Coronavirus tracked: has the epidemic peaked near you? n.d. https://ig.ft.com/coronavirus-chart (accessed April 2, 2021).
- [23]. RECOVERY Collaborative Group. Lopinavir-ritonavir in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial. Lancet 2020. 10.1016/S0140-6736(20)32013-4. *Results from the RECOVERY trial showing how a learning healthcare system in the UK was able to rapidly provide evidence that lopinavir-ritonavir is ineffective as a treatment for COVID-19.
- [24]. RECOVERY Collaborative Group. Azithromycin in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial. Lancet 2021;397:605–12. 10.1016/S0140-6736(21)00149-5. *Results from the RECOVERY trial showing how a learning healthcare system in the UK was able to rapidly provide evidence that azithromycin is ineffective as a treatment for COVID-19.
- [25]. Doidge JC, Gould DW, Ferrando-Vivas P, et al. Trends in Intensive Care for Patients with COVID-19 in England, Wales, and Northern Ireland. Am J Respir Crit Care Med 2020;203:565–74. 10.1164/rccm.202008-3212OC.
- [26]. Semler MW, Self WH, Wanderer JP, et al. Balanced Crystalloids versus Saline in Critically Ill Adults. N Engl J Med 2018;378:1951. 10.1056/NEJMc1804294.
- [27]. Umscheid CA, Betesh J, VanZandbergen C, et al. Development, implementation, and impact of an automated early warning and response system for sepsis. J Hosp Med 2015;10:26–31. 10.1002/jhm.2259.
- [28]. Downing NL, Rolnick J, Poole SF, et al. Electronic health record-based clinical decision support alert for severe sepsis: a randomised evaluation. BMJ Qual Saf 2019;28:762–8. 10.1136/bmjqs-2018-008765.
- [29]. Giannini HM, Ginestra JC, Chivers C, et al. A Machine Learning Algorithm to Predict Severe Sepsis and Septic Shock: Development, Implementation, and Impact on Clinical Practice. Crit Care Med 2019;47:1485–92. 10.1097/CCM.0000000000003891.
- [30]. Bedoya AD, Clement ME, Phelan M, et al. Minimal Impact of Implemented Early Warning Score and Best Practice Alert for Patient Deterioration. Crit Care Med 2019;47:49–55. 10.1097/CCM.0000000000003439.
- [31]. Makam AN, Nguyen OK, Auerbach AD. Diagnostic accuracy and effectiveness of automated electronic sepsis alert systems: A systematic review. J Hosp Med 2015;10:396–402. 10.1002/jhm.2347.
- [32]. Ginestra JC, Giannini HM, Schweickert WD, et al. Clinician Perception of a Machine Learning-Based Early Warning System Designed to Predict Severe Sepsis and Septic Shock. Crit Care Med 2019;47:1477–84. 10.1097/CCM.0000000000003803.
- [33]. Bhattacharjee P, Churpek MM, Snyder A, et al. Detecting Sepsis: Are Two Opinions Better Than One? J Hosp Med 2017;12:256–8. 10.12788/jhm.2721.
- [34]. Escobar GJ, LaGuardia JC, Turk BJ, et al. Early detection of impending physiologic deterioration among patients who are not in intensive care: development of predictive models using data from an automated electronic medical record. J Hosp Med 2012;7:388–95. 10.1002/jhm.1929.
- [35]. Escobar GJ, Liu VX, Schuler A, Lawson B, Greene JD, Kipnis P. Automated Identification of Adults at Risk for In-Hospital Clinical Deterioration. New England Journal of Medicine 2020;383:1951–60. 10.1056/NEJMsa2001090. **Excellent example of a learning healthcare system using a prospective study to evaluate an information technology tool and prove that it improved clinical outcomes.
- [36]. Wilson FP, Martin M, Yamamoto Y, et al. Electronic health record alerts for acute kidney injury: multicenter, randomized clinical trial. BMJ 2021;372:m4786. 10.1136/bmj.m4786. **Randomized trial of a prognostic model that identified patients with impending acute kidney injury and triggered a clinical alert. Results suggested the intervention caused harm to patients, highlighting the importance of evaluating well-intentioned quality improvement interventions through learning healthcare system trials.
- [37]. Horwitz LI, Kuznetsova M, Jones SA. Creating a Learning Health System through Rapid-Cycle, Randomized Testing. New England Journal of Medicine 2019;381:1175–9. 10.1056/NEJMsb1900856. **Overview of the results from a learning healthcare system successfully implemented at NYU Langone Health.
- [38]. Finkelstein A, Zhou A, Taubman S, Doyle J. Health Care Hotspotting — A Randomized, Controlled Trial. New England Journal of Medicine 2020;382:152–62. 10.1056/NEJMsa1906848. **Excellent example of how a learning healthcare system can use randomized trials to evaluate whether widely used models of healthcare delivery improve patient outcomes.
- [39]. Califf RM, Robb MA, Bindman AB, Briggs JP, et al. Transforming Evidence Generation to Support Health and Health Care Decisions. N Engl J Med 2016;375:2395–400. 10.1056/NEJMsb1610128.
- [40]. Casey JD, Janz DR, Russell DW, et al. Bag-Mask Ventilation during Tracheal Intubation of Critically Ill Adults. New England Journal of Medicine 2019;380:811–21. 10.1056/NEJMoa1812405.
- [41]. Janz DR, Casey JD, Semler MW, et al. Effect of a fluid bolus on cardiovascular collapse among critically ill adults undergoing tracheal intubation (PrePARE): a randomised controlled trial. Lancet Respir Med 2019;7:1039–47. 10.1016/S2213-2600(19)30246-2. *Example of a trial that successfully embedded trial enrollment, intervention delivery (a pre-intubation fluid bolus), and outcome assessment into routine clinical care.
- [42]. Hillier TA, Pedula KL, Ogasawara KK, et al. A Pragmatic, Randomized Clinical Trial of Gestational Diabetes Screening. New England Journal of Medicine 2021;384:895–904. 10.1056/NEJMoa2026028. **Excellent example from Kaiser Permanente of an EHR-embedded learning healthcare system trial, which randomized every pregnant patient receiving care to one of two commonly used screening tests for gestational diabetes.
- [43]. Lau JYW, Yu Y, Tang RSY, et al. Timing of Endoscopy for Acute Upper Gastrointestinal Bleeding. New England Journal of Medicine 2020;382:1299–308. 10.1056/NEJMoa1912484. *Example of a learning healthcare system trial that embedded enrollment and intervention delivery into routine clinical care to provide important evidence on the timing of endoscopy for upper gastrointestinal bleeding.
- [44]. Self WH, Semler MW, Wanderer JP, et al. Balanced Crystalloids versus Saline in Noncritically Ill Adults. N Engl J Med 2018;378:819–28. 10.1056/NEJMoa1711586.
- [45]. PEPTIC Investigators for the Australian and New Zealand Intensive Care Society Clinical Trials Group, Alberta Health Services Critical Care Strategic Clinical Network, and the Irish Critical Care Trials Group, Young PJ, Bagshaw SM, Forbes AB, Nichol AD, Wright SE, et al. Effect of Stress Ulcer Prophylaxis With Proton Pump Inhibitors vs Histamine-2 Receptor Blockers on In-Hospital Mortality Among ICU Patients Receiving Invasive Mechanical Ventilation: The PEPTIC Randomized Clinical Trial. JAMA 2020;323:616–26. 10.1001/jama.2019.22190. **Excellent example of a highly efficient, large, randomized, multi-center, cluster-crossover trial which evaluated the approach to peptic ulcer disease prophylaxis by embedding enrollment and intervention delivery into routine care and obtaining outcomes from a pre-existing observational registry.
- [46]. Pejovic NJ, Myrnerts Höök S, Byamugisha J, et al. A Randomized Trial of Laryngeal Mask Airway in Neonatal Resuscitation. New England Journal of Medicine 2020;383:2138–47. 10.1056/NEJMoa2005333.
- [47]. Noto MJ, Domenico HJ, Byrne DW, et al. Chlorhexidine bathing and health care-associated infections: a randomized clinical trial. JAMA 2015;313:369–78. 10.1001/jama.2014.18400.
- [48]. Shah ASV, Anand A, Strachan FE, et al. High-sensitivity troponin in the evaluation of patients with suspected acute coronary syndrome: a stepped-wedge, cluster-randomised controlled trial. Lancet 2018;392:919–28. 10.1016/S0140-6736(18)31923-8.
- [49]. Huang SS, Septimus E, Kleinman K, et al. Chlorhexidine versus routine bathing to prevent multidrug-resistant organisms and all-cause bloodstream infections in general medical and surgical units (ABATE Infection trial): a cluster-randomised trial. The Lancet 2019;393:1205–15. 10.1016/S0140-6736(18)32593-5.
- [50]. Anderson CS, Arima H, Lavados P, et al. Cluster-Randomized, Crossover Trial of Head Positioning in Acute Stroke. New England Journal of Medicine 2017;376:2437–47. 10.1056/NEJMoa1615715.
- [51]. Parshuram CS, Dryden-Palmer K, Farrell C, et al. Effect of a Pediatric Early Warning System on All-Cause Mortality in Hospitalized Pediatric Patients: The EPOCH Randomized Clinical Trial. JAMA 2018;319:1002–12. 10.1001/jama.2018.0948.
- [52]. Bloom SL, Stollings JL, Kirkpatrick O, et al. Randomized Clinical Trial of an ICU Recovery Pilot Program for Survivors of Critical Illness. Crit Care Med 2019;47:1337–45. 10.1097/CCM.0000000000003909.
- [53]. White DB, Angus DC, Shields A-M, et al. A Randomized Trial of a Family-Support Intervention in Intensive Care Units. New England Journal of Medicine 2018;378:2365–75. 10.1056/NEJMoa1802637.
- [54]. Bhasin S, Gill TM, Reuben DB, et al. A Randomized Trial of a Multifactorial Strategy to Prevent Serious Fall Injuries. New England Journal of Medicine 2020;383:129–40. 10.1056/NEJMoa2002183. *Example of a recent learning healthcare system trial that embedded screening, randomization, and intervention delivery into routine care to evaluate a method of care delivery.
- [55]. Katzmarzyk PT, Martin CK, Newton RL, et al. Weight Loss in Underserved Patients — A Cluster-Randomized Trial. New England Journal of Medicine 2020;383:909–18. 10.1056/NEJMoa2007448. *Example of a recent learning healthcare system trial that embedded screening, randomization, intervention delivery, and outcome assessment into routine care to evaluate a method of care delivery.
- [56]. Landrigan CP, Rahman SA, Sullivan JP, et al. Effect on Patient Safety of a Resident Physician Schedule without 24-Hour Shifts. New England Journal of Medicine 2020;382:2514–23. 10.1056/NEJMoa1900669.
- [57]. Cameron ST, Glasier A, McDaid L, et al. Use of effective contraception following provision of the progestogen-only pill for women presenting to community pharmacies for emergency contraception (Bridge-It): a pragmatic cluster-randomised crossover trial. The Lancet 2020;396:1585–94. 10.1016/S0140-6736(20)31785-2. *Example of a recent learning healthcare system trial that embedded screening, enrollment, randomization, and intervention delivery into routine care to evaluate a method of care delivery.
- [58]. Victor RG, Lynch K, Li N, et al. A Cluster-Randomized Trial of Blood-Pressure Reduction in Black Barbershops. New England Journal of Medicine 2018;378:1291–301. 10.1056/NEJMoa1717250.



