Transactions of the American Clinical and Climatological Association. 2014;125:204–218.

The Role of Pragmatic Clinical Trials in The Evolution of Learning Health Systems

Gary E Rosenthal
PMCID: PMC4112713  PMID: 25125735

Abstract

Pragmatic clinical trials (PCTs) test clinical interventions (eg, treatments, diagnostic tests, delivery strategies) that are widely used in practice and for which there is often clinical equipoise. Similar to traditional explanatory trials of novel therapeutics, PCTs use randomization to decrease selection bias. Unlike explanatory trials, however, PCTs rely on extant data sources (eg, electronic medical records [EMRs]) and test interventions that can be implemented with minimal research infrastructure. Thus, PCTs have drawn interest as vehicles for decreasing the cost of clinical research and for creating learning health systems, which, as articulated by the Institute of Medicine, seek to generate new knowledge as an integral by-product of the delivery experience. However, realizing this vision for PCTs will require innovative approaches for engaging clinicians, improving the efficiency of subject recruitment, and improving the reliability of EMR data, as well as new paradigms for the regulatory review of low-risk trials, to decrease unnecessary hurdles to practice-based knowledge generation.

INTRODUCTION

The United States spends more money on healthcare than any other nation yet produces health outcomes that lag behind those of many other nations. For example, data from the Organization for Economic Cooperation and Development (OECD) show that per capita healthcare expenditures in the US in 2008 ($7538) were 50% higher than those of the next leading nation (Norway) and more than 2.5 times the median expenditure across the 34 OECD nations, even after adjusting for differences in the cost of living (1). However, in an analysis of healthcare delivery in 7 OECD nations, the US ranked last or next to last on five key dimensions: access, patient safety, coordination, efficiency, and equity (2).

These data indicate that within the US healthcare system there is substantial use of treatments and diagnostic modalities that are ineffective or even harmful. For example, as has been widely documented, significant variations exist in the delivery of healthcare services across different regions of the country that cannot be explained by differences in disease prevalence or patient severity (3–5). Based on this work, it has been estimated that 30% of all Medicare spending could be avoided without worsening healthcare outcomes (6). A further analysis estimated that more than one third of US healthcare expenditures were wasteful (7).

Although the causes of variations in healthcare practice and wasteful spending are complex, one important contributor is the lack of high-quality, empirical evidence of the effectiveness of different treatments and diagnostic tests. Indeed, it is estimated that less than 20% of the interventions commonly used in practice are based on evidence from randomized controlled trials (8). Moreover, because many randomized trials include populations that are relatively homogeneous with little comorbidity, much of the randomized trial data may not be applicable to a large proportion of the patients seen in routine practice settings, including those patients with multiple comorbidities who drive a substantial portion of healthcare spending. Thus, new paradigms are urgently needed to create the high-quality evidence that is needed to inform clinical decision making.

It is in this context that discussion has emerged about the creation of learning health systems, in which new knowledge about the effectiveness of healthcare interventions is generated through practice-based pragmatic trials and the generation of this knowledge becomes a routine part of the delivery process (9–11). In this article, I first review the Learning Health System, as it was envisioned by the Institute of Medicine. I then review the promise of pragmatic trials and how such trials differ from traditional clinical trials. I conclude with a discussion of the key challenges that must be overcome if knowledge generation is to become a by-product of the clinical delivery process.

Learning Health Systems

The Learning Health System was first articulated at a 2006 workshop of the Institute of Medicine's Roundtable on Evidence-Based Medicine (9). The workshop examined how re-engineering clinical research and healthcare delivery could better generate the knowledge needed to drive high-quality patient care, and how the wide-scale implementation of electronic medical records (EMRs) could facilitate this goal. The workshop also identified a number of challenges, including the traditional reliance on randomized controlled clinical trials (RCTs) for knowledge generation; such trials were believed to be too time consuming, too costly, and fraught with questions of generalizability. Lastly, the workshop identified key needs for the development of Learning Health Systems: a culture of shared responsibility among patients, providers, and investigators for the generation of evidence, with improved communication about the nature of evidence and its development; new clinical research paradigms better adapted to the constraints and realities of the practice environment; tools for linking large healthcare databases derived from EMRs, billing records, and other extant data sources and for mining these data for patterns and clinical insights; acknowledgement of healthcare data as a central resource for advancing knowledge, along with the need to address challenges to the use of healthcare data posed by the Health Insurance Portability and Accountability Act (HIPAA); and effective health system leadership to develop the strategic and tactical plans necessary to create learning health systems.

Although learning has always been a cardinal feature of academic medicine, the Learning Health System takes learning further in two important ways. First is the tight integration between research and practice such that research findings directly inform practice and key issues faced by practitioners become the focus of future research projects. Second is that health system resources, such as the EMR, are an integral part of the research infrastructure.

Since the 2006 Institute of Medicine workshop other efforts have identified key attributes of the Learning Health System. For example, Green et al (11) described six distinct phases of learning in healthcare: scanning and surveillance, which involves the proactive identification and characterization of key problems around which to focus future studies; design, which involves convening a broad group of stakeholders to develop interventions and evaluation systems that are tailored to the particular clinical setting; implementation, which involves careful piloting of the intervention to identify factors that could affect successful spread to other settings; evaluation to provide timely feedback on the effectiveness of the new interventions using predefined endpoints; adjustment, which involves ongoing fine tuning of an intervention to address contextual issues that invariably arise when an intervention is implemented in a particular setting; and dissemination of the evaluation findings using both traditional (eg, peer-reviewed publications, presentations at scientific meetings) and nontraditional approaches (eg, more timely vehicles such as trade journals or webinars).

PRAGMATIC CLINICAL TRIALS

The term pragmatic clinical trial (PCT) was coined nearly 50 years ago by Schwartz and Lellouch (12) to distinguish between clinical trials that were explanatory in orientation (ie, understanding whether a difference exists between treatments that are specified by strict definitions) and trials that were pragmatic in orientation (ie, understanding whether a difference exists in treatment as applied in practice). These differences were described in a hypothetical trial to determine if a radiation sensitizing agent administered 30 days before the initiation of radiation therapy improved cancer survival. In both an explanatory and a pragmatic trial, the intervention group would receive the sensitizing agent followed by a standard course of radiation treatment beginning 30 days later. In an explanatory trial, the control group would receive radiation therapy after an identical 30-day window to carefully isolate the effect of the new agent. However, in a pragmatic trial, the control group would receive radiation therapy immediately to mimic how the treatment would be applied in practice. Thus, the pragmatic trial would compare the agent plus delayed radiation treatment to immediate radiation treatment.

Since this initial description of the PCT, other distinctions between pragmatic and explanatory trials have been described with regard to differences in study populations, treatment and control groups, the interventions being tested, study endpoints, and the interpretation of results (13–18).

Differences in Study Populations

Explanatory trials testing new treatments for specific conditions typically enroll homogeneous patients with few comorbid conditions to reduce response variation and, thus, the sample size needed to show a particular degree of difference between two treatments. In contrast, PCTs have fewer patient selection criteria and seek to enroll more heterogeneous populations. As a result, PCTs may require larger sample sizes but have greater external validity (ie, generalizability).
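The sample-size trade-off described above follows directly from the standard two-arm formula, in which the required number of patients per arm grows with the square of the outcome's standard deviation. A minimal sketch in Python (the variance and effect-size figures are illustrative, not drawn from any trial discussed here):

```python
from math import ceil
from statistics import NormalDist

def per_arm_sample_size(sigma: float, delta: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients per arm to detect a mean difference delta.

    Uses n = 2 * (z_{alpha/2} + z_beta)^2 * sigma^2 / delta^2, the
    standard two-sample formula for comparing means.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# A homogeneous explanatory-trial population (small outcome variance) ...
n_explanatory = per_arm_sample_size(sigma=10, delta=5)   # 63 per arm
# ... versus a heterogeneous pragmatic-trial population (doubled SD):
n_pragmatic = per_arm_sample_size(sigma=20, delta=5)     # 252 per arm
```

Doubling the standard deviation quadruples the required sample, which is why PCTs' heterogeneous populations typically demand larger enrollments for the same detectable effect.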

Differences in Treatment and Control Groups

Whereas explanatory trials often compare new agents to placebo treatments, PCTs should always compare two or more currently used treatments, for which there is often clinical equipoise.

Differences in the Intervention Being Tested

Explanatory trial interventions are typically delivered by highly skilled and specialized practitioners in research settings to maximize intervention fidelity. In contrast, PCT interventions are delivered by practitioners in a routine practice setting and often incorporate flexibility to adapt treatment to individual needs of patients and to the capabilities of the delivery setting. Thus, pragmatic designs mimic routine practice with the exception that patients are randomly allocated to treatment.

Differences in Study Endpoints

Explanatory trials often examine intermediate or surrogate endpoints (eg, serum lipids or coronary artery calcification scores in cardiovascular trials) in an attempt to decrease the time required to conduct trials or to use endpoints that might be more objective. In contrast, PCTs should examine endpoints that reflect the “real life” concerns of patients (eg, myocardial infarction, death, or alleviation of symptoms in cardiovascular studies) and that may be obtainable from the EMR, other extant data sources, or simple patient surveys. As a result, PCTs may require longer follow-up periods to track outcomes and may be better suited to study chronic conditions that require treatment over many years.

Differences in Interpretation of Results

Explanatory trials determine treatment efficacy, or the effect of a new treatment under ideal experimental conditions. PCTs determine treatment effectiveness, or the effect of a treatment as applied in normal practice settings. Thus, PCT results may be more applicable to the average patient. Moreover, the analysis in PCTs is typically based on an “intention-to-treat” approach, recognizing that treatment cross-over may be more common than in explanatory trials. In addition, in PCTs, patients and practitioners are typically not blinded to treatment assignment, although the allocation of patients to different groups should be random and endpoint assessment should be blinded to group allocation.
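The intention-to-treat principle mentioned above simply means analyzing every patient in the arm to which he or she was randomized, even after crossover. A minimal sketch with hypothetical data (patient records and outcomes are invented for illustration):

```python
from statistics import mean

# Hypothetical patients: the arm assigned at randomization, the arm
# actually received (crossover is common in pragmatic trials), and a
# numeric outcome.
patients = [
    {"assigned": "A", "received": "A", "outcome": 4.0},
    {"assigned": "A", "received": "B", "outcome": 6.5},  # crossed over
    {"assigned": "B", "received": "B", "outcome": 7.0},
    {"assigned": "B", "received": "B", "outcome": 6.0},
]

def arm_means(records, key):
    """Mean outcome per arm, grouping patients by the given key."""
    arms = {}
    for p in records:
        arms.setdefault(p[key], []).append(p["outcome"])
    return {arm: mean(vals) for arm, vals in arms.items()}

itt = arm_means(patients, "assigned")         # intention-to-treat
as_treated = arm_means(patients, "received")  # ignores randomization
```

Grouping by the assigned arm preserves the comparability that randomization created; grouping by the treatment actually received reintroduces the selection effects that randomization was meant to remove.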

Although the above distinctions are helpful at a conceptual level, most studies have features that are both explanatory and pragmatic in orientation. To better characterize the explanatory or pragmatic nature of studies, several classification schemes have been proposed. One of the more widely adopted schemes is the Pragmatic-Explanatory Continuum Indicator Summary (PRECIS) tool (19). PRECIS identifies 10 different features of trials that reflect study eligibility criteria, the flexibility of study implementation, the nature of practitioner involvement, endpoint assessment, the intensity of study follow-up procedures, and the study analysis (Figure 1). PRECIS recognizes that each of the 10 features can be categorized along an explanatory-pragmatic continuum (ie, no feature is an all-or-none phenomenon) and that a given trial may fall at different levels on the continuum for different PRECIS features. By rating each of the 10 features along the explanatory-pragmatic continuum and connecting the points, PRECIS provides a graphical view of the pragmatic nature of a given trial. For example, Thorpe et al (19) used PRECIS to classify two randomized trials that examined the effects of low-dose aspirin on reducing pre-eclampsia in pregnancy (20,21). As shown in Figure 1, the study depicted in panel A (20) was largely pragmatic in orientation, whereas the study in panel B (21) was more explanatory in orientation, although for both studies the degree varied across different features.

Fig. 1. Graphical summary of two clinical trials (20,21) examining the effect of aspirin on the incidence of pre-eclampsia, using the PRECIS framework for categorizing trial features (I-X) as explanatory or pragmatic in orientation.
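The PRECIS rating exercise can be expressed very compactly in code. The sketch below is a loose illustration, not part of the published tool: feature names are paraphrased from the PRECIS framework, and the 1-to-5 scale, summary label, and scores are all assumptions made for the example:

```python
# Ten PRECIS trial features (names paraphrased), each scored here on an
# assumed scale of 1 (highly explanatory) to 5 (highly pragmatic).
PRECIS_FEATURES = [
    "eligibility", "intervention_flexibility", "intervention_expertise",
    "comparison_flexibility", "comparison_expertise", "follow_up_intensity",
    "outcome_relevance", "participant_compliance", "practitioner_adherence",
    "analysis",
]

def summarize(scores: dict) -> tuple:
    """Validate a full set of ratings; return (mean score, overall label)."""
    assert set(scores) == set(PRECIS_FEATURES), "rate all 10 features"
    assert all(1 <= s <= 5 for s in scores.values()), "scores are 1-5"
    avg = sum(scores.values()) / len(scores)
    label = "pragmatic" if avg >= 3 else "explanatory"
    return avg, label

# A mostly pragmatic trial with one tightly restricted feature, echoing
# how a single trial can sit at different points on the continuum.
ratings = {f: 4 for f in PRECIS_FEATURES}
ratings["eligibility"] = 2  # strict enrollment criteria
avg, label = summarize(ratings)
```

Plotting the 10 scores on a radar chart and connecting the points reproduces the "wheel" view shown in Figure 1.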

CURRENT CHALLENGES TO PRAGMATIC TRIALS AND LEARNING HEALTH SYSTEMS

The realization of learning health systems in which research, ongoing learning, and practice are integrally intertwined will require overcoming a number of methodological and practical challenges, including improvement of subject recruitment, development of valid and reliable clinical phenotypes from EMR data and other extant data (eg, administrative data) to decrease data collection costs, development of appropriate incentives for clinician involvement in PCTs, and decreasing regulatory hurdles for low-risk PCTs.

Improvement of Subject Recruitment

Low subject recruitment is one of the fundamental challenges facing the US clinical research enterprise and is one of the major reasons that the pharmaceutical industry has moved much of its clinical trials effort overseas (22). A majority of studies either fail to meet desired recruitment targets and/or are subject to significant delays in trial completion due to slow recruitment (23). Although most of the data on subject recruitment rates are based on traditional clinical trials, it is likely that PCTs face the same challenges. However, in recent years, a number of promising strategies to enhance recruitment that capitalize on the EMR and web-based technologies have been proposed. One such strategy is to use demographic, diagnostic, laboratory, medication, and other commonly captured data to screen large numbers of patients to identify those who may be eligible for a particular trial. The feasibility of this approach was shown for a randomized trial of a cognitive processing intervention to increase visual processing speed in older adults (24). The eligible sample for this trial included patients seen in General Internal Medicine and Family Medicine clinics at a single academic medical center who met the following criteria: 1) age 50 years or older; 2) two or more clinic visits in the past 12 months; and 3) the absence of diagnosis codes for Alzheimer's disease, Pick's disease, and other forms of dementia. Randomly selected eligible patients were sequentially identified until the enrollment target for the study (n = ∼650) was met. Patients were then sent a single mailing about the study. Of the 4747 patients sent the mailing, 996 (21%) expressed interest in participating in the study. Of these, 390 did not meet telephone-based screening criteria, leaving 681 (14%) who were enrolled over a 4-month period.
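The three screening criteria above amount to a simple filter over structured EMR fields. A minimal sketch (the field names and the dementia code list are illustrative assumptions, not the study's actual implementation):

```python
from datetime import date, timedelta

# ICD-9 code prefixes used for the dementia exclusion; illustrative only,
# not the study's actual code list.
DEMENTIA_CODE_PREFIXES = ("331.0", "331.1", "290.")

def is_eligible(patient: dict, today: date) -> bool:
    """Apply the trial's three EMR screening criteria."""
    age_ok = patient["age"] >= 50
    recent_visits = [v for v in patient["visit_dates"]
                     if v >= today - timedelta(days=365)]
    visits_ok = len(recent_visits) >= 2
    no_dementia = not any(code.startswith(DEMENTIA_CODE_PREFIXES)
                          for code in patient["diagnosis_codes"])
    return age_ok and visits_ok and no_dementia

today = date(2013, 9, 1)
candidate = {"age": 67,
             "visit_dates": [date(2013, 2, 1), date(2013, 6, 15)],
             "diagnosis_codes": ["401.9"]}  # hypertension only
```

Running a filter like this across a clinic's whole panel yields the mailing list; in the study, a single mailing to 4747 such patients produced 996 expressions of interest.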

A second EMR-based strategy involves the use of point-of-care clinical alerts that notify physicians during encounters that the patient being seen may be eligible for a particular trial. The clinician can then inform the patient and provide information on how to contact the study if interested. The value of this approach was shown for a diabetes trial (25) that identified patients on the basis of diagnosis, age (40 years and older), and glycosylated hemoglobin value (7.5% or greater). Alerts were directed at 10 endocrinologists and 104 general internists in a single academic medical center. The alert prompted physicians to consider additional criteria that could not be reliably obtained from the EMR and to send a referral order to a study coordinator if a patient was interested in participating. Analyses found that the alert increased (P < .05) enrollment from 2.9 patients per month to 6.0 patients per month, with an increase in the number of clinicians who referred patients from 5 to 42.
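The alert trigger described above reduces to a small point-of-care rule. A sketch with assumed field names (the study's actual alert logic is not specified in the article):

```python
def trial_alert(encounter: dict) -> bool:
    """Fire a recruitment alert when the patient may be trial-eligible:
    a diabetes diagnosis, age 40 or older, and a most recent
    glycosylated hemoglobin of 7.5% or greater."""
    return (encounter["has_diabetes_dx"]
            and encounter["age"] >= 40
            and encounter["latest_hba1c"] is not None
            and encounter["latest_hba1c"] >= 7.5)

# The alert only flags possible eligibility; the clinician still applies
# criteria the EMR cannot capture and, if the patient is interested,
# sends a referral order to the study coordinator.
```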

A further strategy involves the use of web-based approaches to improve the efficiency of obtaining informed consent. The potential effectiveness of this approach was shown in a pilot study funded by the University of Iowa Institute for Clinical and Translational Science that involved a mock trial of 95 potential research subjects who were randomized to one of three consent processes: a traditional paper-based process; an online version of the paper-based consent that included graphics and narration; or an interactive multimedia online process that posed questions and provided feedback to patients within the online platform. The study found that compared to the paper-based format, the interactive online format increased (P < .05) potential subjects' knowledge of the trial and their satisfaction with the consent process (Klein DW, unpublished observations, September 2013).

Development of Valid and Reliable Clinical Phenotypes from EMR and Other Extant Data

Although the EMR includes a wealth of detailed clinical information, a number of challenges exist with regard to the extraction and use of this information (26). First, a majority of the information in EMRs is contained in clinical notes, which are typically in free-text format; current natural language processing techniques for extracting this information are poorly developed and/or cumbersome to implement. Second, there is tremendous variability across institutions, and across individual clinics within institutions, in the capture of information, particularly for diagnoses, symptoms, and other important clinical descriptors (eg, functional status). Third, there are often systematic biases in the capture of information for different types of clinical encounters, with limited standardization of phenotypic definitions for common clinical conditions. The potential impact of such biases and of the lack of standardization was shown in a study (27) of the effect of comorbid psychiatric conditions on in-hospital mortality among 31,284 admissions for pneumonia and congestive heart failure (CHF) at Department of Veterans Affairs hospitals. The study compared two approaches for defining comorbid psychiatric conditions: a diagnosis of one or more of five conditions (depression, anxiety, post-traumatic stress disorder, or psychotic disorders) recorded during the hospital admission; and a diagnosis of one or more of the same conditions recorded during an outpatient visit in the 2 years before hospitalization. The study found that the prevalence of psychiatric conditions was much higher based on outpatient diagnoses than on inpatient diagnoses (32% versus 12%, respectively; P < .001). Moreover, agreement between the two methods, as measured by the kappa statistic, was only fair (κ = 0.31 for pneumonia, κ = 0.24 for CHF). In addition, the adjusted odds of death were lower for patients with a psychiatric comorbidity, relative to patients without one, when inpatient diagnoses were used (0.63 [P < .001] for pneumonia, 0.75 [P < .001] for CHF), but were similar when outpatient diagnoses were used (1.04 [P = .46] for pneumonia, 0.93 [P = .13] for CHF).
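The kappa statistic quoted above measures agreement between the two phenotype definitions after correcting for chance. A minimal sketch of Cohen's kappa for two binary classifications (the cell counts below are hypothetical, chosen only to roughly mirror the reported 12% and 32% prevalences, not taken from the study):

```python
def cohens_kappa(both_yes: int, yes_no: int,
                 no_yes: int, both_no: int) -> float:
    """Cohen's kappa for two binary raters from 2x2 cell counts.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from the marginals.
    """
    n = both_yes + yes_no + no_yes + both_no
    p_o = (both_yes + both_no) / n
    p1_yes = (both_yes + yes_no) / n   # rater 1 "yes" rate (inpatient)
    p2_yes = (both_yes + no_yes) / n   # rater 2 "yes" rate (outpatient)
    p_e = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts: inpatient flags 12% of patients, outpatient 32%,
# with most inpatient cases also flagged as outpatient cases.
k = cohens_kappa(both_yes=80, yes_no=40, no_yes=240, both_no=640)
```

Even with 72% raw agreement, the chance-corrected kappa here is only about 0.23, illustrating why the study's values of 0.24 to 0.31 count as merely "fair" agreement.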

The above results highlight the importance of developing a library of computable definitions and algorithms to enable phenotyping for common clinical conditions from EMR data, as well as the importance of testing these phenotype definitions across different data systems and against traditional manual review of patients' medical records. A further related issue is the lack of standardization in how clinical information is entered into the EMR in the first place. Addressing this deeper-rooted issue will require a synthesis of best practices for entering key clinical information into EMR data fields (eg, problem lists) and the standardization of approaches for entering clinical data across different EMR systems and institutions.
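One way to make phenotype definitions computable and portable, as argued above, is to express each as a declarative rule (a code set, a care setting, and a lookback window) that can be evaluated uniformly against any encounter history. A sketch under assumed data structures (the ICD-9 prefixes are illustrative, not a validated definition):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PhenotypeDefinition:
    """A computable phenotype: code prefixes plus a lookback window."""
    name: str
    code_prefixes: tuple   # diagnosis code prefixes that count as a match
    lookback_days: int     # how far before the index date to search
    settings: tuple        # encounter types to search, eg ("outpatient",)

    def matches(self, encounters, index_date: date) -> bool:
        start = index_date - timedelta(days=self.lookback_days)
        for enc in encounters:
            if (enc["setting"] in self.settings
                    and start <= enc["date"] <= index_date
                    and any(c.startswith(self.code_prefixes)
                            for c in enc["codes"])):
                return True
        return False

# Hypothetical depression definition; the 2-year outpatient lookback
# mirrors one of the two approaches compared in the study above.
depression = PhenotypeDefinition(
    name="depression", code_prefixes=("296.2", "296.3", "311"),
    lookback_days=730, settings=("outpatient",))

history = [{"setting": "outpatient", "date": date(2012, 5, 1),
            "codes": ["311"]}]
```

Because the rule is data rather than bespoke code, the same definition can be evaluated against different EMR systems and validated against manual chart review, which is exactly the portability the phenotype library argument calls for.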

Development of Appropriate Incentives for Clinician Involvement in PCTs

Clinician involvement and buy-in are critically important to the successful implementation of pragmatic trials and the creation of Learning Health Systems. Because most clinicians are compensated based on their clinical productivity (eg, number of visits or relative value units [RVUs]), securing buy-in requires overcoming the potential interruptions in clinical workflow that a pragmatic trial might produce. Given these economic disincentives, even minor workflow interruptions may lead clinicians to lose interest in participating in PCTs.

Thus, attention must be given to strategies for garnering clinician support. First, clinicians must be actively engaged in all phases of the research design and implementation processes, including providing input on the prioritization of research questions, on the structure of the intervention, on strategies to refine the intervention after it is implemented, and on the communication of PCT results after the trials are completed. Second, PCTs must use designs and intervention protocols that can be easily embedded into practice and cause as little disruption as possible. Third, health system leadership must be encouraged to create institutional cultures that value knowledge generation and that are committed to integrating the knowledge gained into day-to-day practice. Lastly, institutional incentives and reward systems must be properly aligned; for example, participation in pragmatic trials could be a factor in determining clinician productivity (eg, providing RVUs for subject recruitment) or in the academic promotion process.

Decreasing Regulatory Hurdles for Low-Risk PCTs

Obtaining appropriate regulatory approval for conducting PCTs can be a time-consuming process and a significant disincentive to rigorously evaluating pragmatic interventions. Moreover, randomization is an essential element of PCTs, and obtaining informed consent for randomizing patients can be a further barrier. However, it is important to recognize that pragmatic trials typically involve interventions that are low risk in nature and/or involve the comparison of interventions and treatments that are already widely used in practice and for which there may be clinical equipoise. This recognition has given rise to discussion that the regulatory process for trials studying standard-of-care treatments should differ from that for studies testing novel agents with unknown safety profiles.

These issues were captured in a recent series of articles from the Hastings Center that examined the current framework for differentiating research from practice and proposed strategies for decreasing regulatory hurdles to conducting PCTs. An article in the series by Kass et al (28) characterized five features that are currently used to distinguish research from practice, including the beliefs that research involves: production of generalizable knowledge; systematic investigation; less net clinical benefit and greater risk than practice; burdens or risks that are otherwise not part of patients' clinical management; and protocols that dictate which treatments or diagnostic interventions patients receive. The article then systematically identifies problems with each of the five distinguishing features and concludes that these features are out of date. For example, as healthcare organizations move to become integrated systems of care focused on improving value, the creation of generalizable knowledge that can be applied across different institutions in the system will be an explicit objective of these arrangements. Moreover, Learning Health Systems are by definition committed to simultaneously delivering the care patients need while capturing the experience of clinical practice in systematic ways that produce generalizable knowledge to improve care for both present and future patients. Thus, the intentions to produce generalizable knowledge and to conduct systematic investigation are unreliable ways to distinguish research from practice.

As a further example, the article argues that many treatments widely used in clinical practice are of unproven value, and highlights numerous interventions that became widely adopted but which were later shown to have no efficacy and/or be harmful (eg, gastric freezing for peptic ulcer disease, extracranial to intracranial bypass surgery to reduce the risk of ischemic stroke, use of antiarrhythmic drugs to reduce the risk of sudden death, high-dose chemotherapy followed by bone marrow transplantation for breast cancer). Thus, in many cases, research does not expose patients to greater risk than they would encounter in practice.

In a second article in the Hastings Center series, Faden et al (29) propose a new ethics framework for Learning Health Systems. Three aspects of this framework deserve particular notice. First, there is a moral priority to promote learning in healthcare; both healthcare professionals and institutions have a unique obligation to contribute to such learning. Second, a similar moral obligation extends to patients. Faden et al justified this position by appealing to the principle of the “common good”: members of a society have a common interest in ensuring an affordable, high-quality healthcare system. Third, just as the healthcare delivery system has an obligation to decrease disparities in healthcare outcomes, the Learning Health System has an obligation to address unjust inequities in the available evidence for clinical decision making. Based on this framework, Faden et al argue that in some cases patient randomization in low-risk trials testing standard-of-care treatments might not require informed consent if the consent process would negatively impact the feasibility of the study.

CONCLUSIONS

There is currently a critical need to improve the evidence base for clinical practice and to decrease the use of treatments and diagnostic modalities that are ineffective or even harmful. These efforts must be tightly woven into the fabric and ethos of practice. In this regard, PCTs can play a critical role in building knowledge about which interventions are most effective as they are applied in actual practice settings. However, realizing the full potential of PCTs will require efforts to: 1) improve the efficiency of subject recruitment; 2) improve the validity and reliability of the clinical information routinely collected in EMRs; 3) more actively engage clinicians and implement reward systems that value their participation in PCTs; and 4) decrease regulatory barriers to conducting low-risk PCTs. Creating a thriving PCT enterprise will, in turn, enable the creation of true learning health systems in which “science, informatics, incentives, and culture are aligned for continuous improvement and innovation, with best practices seamlessly embedded in the delivery process and new knowledge captured as an integral by-product of the delivery experience” (2).

Footnotes

Potential Conflicts of Interest: Clinical and Translational Science Award (2 UL1 TR000442-06) from the National Center for Advancing Translational Science.

DISCUSSION

Baum, Bronx: One of the obstacles that I see in my current position, because I read about a thousand letters of recommendation for students written by faculty members, is that increasing numbers of those letters are written by hospitalists. So, most inpatient care at many, many hospitals is being provided by hospitalists who are incentivized in an entirely different direction; that is, get the patient out fast and efficiently. So I think that's another barrier or obstacle to doing what I totally agree would be a good idea. And I think that's the group that really has to be addressed, and the RVU concept is probably very appropriate there.

Rosenthal, Iowa City: I agree with you. I think it's a challenge to sort of introduce research into very busy clinical environments in which the reward systems are aligned solely towards productivity right now.

Zeidel, Boston: Very nice talk. Two quick comments. First, I wonder about the value of the electronic medical record as being most importantly the ability to see what happened over here if you happen to be over there. I fear that we are increasing — in our efforts to standardize for research, the care of patients — we are losing the narrative component. And often the richest part of the electronic record is the narrative, which happens to be dictated by the clinician. So I'd ask for a comment on that and one other element. A lot of research into how to improve the way we care for patients is like engineering research. It involves small steps, you test it, and then you move forward, and then it goes in aggregate, and then those improvements may not lend themselves to randomized control trials. So, can you comment on the issue of narrative history in the record and on the issue of these kinds of stepwise things, as a version of pragmatic clinical trials?

Rosenthal, Iowa City: I would totally agree with you about the points you make about losing the narrative, and the patient's story. I think the challenge is to maintain that and teach people how to tell that story, and that always has to be part of the electronic medical record. But there are always going to be some standard fields, most notably diagnoses. I think there are much better ways that we can structure the electronic medical record to facilitate better input. The introduction of the electronic medical record is probably one of the most incredible experiments in healthcare. It's really an untested intervention. There is a lot of potential good in it. There are also a lot of potential unintended adverse consequences, in terms of the second part of your question.

Zeidel, Boston: It was about the engineering approach … about small improvements, each one aggregating into a complex system that works far better than where you started and the comparison of that kind of work versus randomized control trials and the ability to publish such work as you improve the way we care for patients, which at some level is a version of this.

Rosenthal, Iowa City: Right, I agree. I think the challenge in doing interventions that are integrated into practice — as compared to say when you are doing a trial of a drug where you know exactly what the intervention is — is that the practice-based interventions are murky. You often can't implement them in exactly a standard engineering way. They have to be tinkered with, with regard to the local environment. So a lot of times, the question is somewhat different. You're asking a question about the process of implementing a change as opposed to a very prescriptive set of changes.

Mushlin, New York City: I enjoyed that also and couldn't agree with you more that pragmatic clinical trials are on the horizon and things we need to see more development of. What I was a bit surprised about though in your remarks — and I'd like you perhaps to comment on — is the additional challenge that pragmatic clinical trials raise about lack of validity. I mean, really, if you take classic randomized trials, the main advantage is internal validity. The main advantage in pragmatic trials, of course, is generalizability and applicability of the findings. But what about threats to validity within pragmatic clinical trials? And, what about work that can be done at the interface between clinical trials and statistics to help us control from confounding and bias? Do you see any potential, and any things on the horizon, that are going to get pragmatic clinical trials closer to the gold standard that we mean when we talk about randomized trials?

Rosenthal, Iowa City: That's a great question, and there are a lot of complexities and parts to it. I'll try to be relatively brief. You're right about the trade-off with pragmatic trials: you're trying to gain external validity, but you're giving up a good deal of internal validity. So as you're implementing these trials, I think it's very important to do ongoing formative evaluation so that you can truly understand what the intervention is. These interventions have multiple components, and that evaluation may even help you tease out which components are actually working. I think the key thing about pragmatic trials is the issue of randomization: introducing randomization into the course of practice to try to eliminate a lot of the selection bias that is inherent in observational studies. And if you can identify up front the downstream factors that may confound results, you can plan to collect them and adjust for them on the back end. But that was a great question.

Thibault, New York: Outstanding presentation, really very forward-thinking. I would add one thing to your challenge: to think about the role of education and how we prepare health professionals for a world in which they are working in the Learning Health System. More attention needs to be paid to that. It really is the unity not just of practice and research, but of education, practice, and research, that will bring us to this world. Very outstanding.

Rosenthal, Iowa City: Thank you.

REFERENCES

1. Anderson GF, Markovich P. Multinational Comparisons of Health Systems Data, 2010. New York: Commonwealth Fund; July 2011 (available at: http://www.commonwealthfund.org/Publications/Chartbooks/2011/Jul/Multinational-Comparisons-of-Health-Systems-Data-2010.aspx).
2. Davis K, Schoen C, Stremikis K. Mirror, Mirror on the Wall: How the Performance of the U.S. Health Care System Compares Internationally, 2010 Update. New York: Commonwealth Fund; June 2010 (available at: http://www.commonwealthfund.org/~/media/Files/Publications/Fund%20Report/2010/Jun/1400_Davis_Mirror_Mirror_on_the_wall_2010.pdf).
3. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288–98. doi: 10.7326/0003-4819-138-4-200302180-00007.
4. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138(4):273–87. doi: 10.7326/0003-4819-138-4-200302180-00006.
5. Morden NE, Chang CH, Jacobson JO, et al. End-of-life care for Medicare beneficiaries with cancer is highly intensive overall and varies widely. Health Affairs. 2012;31(4):786–96. doi: 10.1377/hlthaff.2011.0650.
6. Lalleman NC. Health policy brief: reducing waste in health care. Health Affairs. December 13, 2012:1–5 (available at: http://www.healthaffairs.org/healthpolicybriefs/brief.php?brief_id=82).
7. Berwick DM, Hackbarth AD. Eliminating waste in US health care. JAMA. 2012;307(14):1513–6. doi: 10.1001/jama.2012.362.
8. Eddy DM. Evidence-based medicine: a unified approach. Health Affairs. 2005;24(1):9–17. doi: 10.1377/hlthaff.24.1.9.
9. Olsen L, Aisner D, McGinnis JM, eds. The Learning Healthcare System: Workshop Summary. Roundtable on Evidence-Based Medicine. Washington, DC: The National Academies Press; 2007.
10. Smith M, Saunders R, Stuckhardt L, McGinnis JM, eds. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Committee on the Learning Health Care System in America. Washington, DC: The National Academies Press; 2012.
11. Greene SM, Reid RJ, Larson EB. Implementing the learning health system: from concept to action. Ann Intern Med. 2012;157(3):207–10. doi: 10.7326/0003-4819-157-3-201208070-00012.
12. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Chron Dis. 1967;20:637–48. doi: 10.1016/0021-9681(67)90041-0.
13. Bratton DJ, Nunn AJ, Wojnarowska F, et al. The value of the pragmatic-explanatory continuum indicator summary wheel in an ongoing study: the bullous pemphigoid steroids and tetracyclines study. Trials. 2012;13:50. doi: 10.1186/1745-6215-13-50.
14. Chalkidou K, Tunis S, Whicher D, et al. The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research. Clin Trials. 2012;9(4):436–46. doi: 10.1177/1740774512450097.
15. Saag KG, Mohr PE, Esmail L, et al. Improving the efficiency and effectiveness of pragmatic clinical trials in older adults in the United States. Contemp Clin Trials. 2012;33(6):1211–6. doi: 10.1016/j.cct.2012.07.002.
16. Sackett DL. Explanatory and pragmatic clinical trials: a primer and application to a recent asthma trial. Pol Arch Med Wewn. 2011;121(7–8):259–63.
17. Patsopoulos NA. A pragmatic view on pragmatic trials. Dialogues Clin Neurosci. 2011;13(2):217–24. doi: 10.31887/DCNS.2011.13.2/npatsopoulos.
18. van der Windt DA, Koes BW, van Aarst M, et al. Practical aspects of conducting a pragmatic randomised trial in primary care: patient recruitment and outcome assessment. Br J Gen Pract. 2000;50(454):371–4.
19. Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. CMAJ. 2009;180(10):E47–57. doi: 10.1503/cmaj.090523.
20. Collaborative Low-dose Aspirin Study in Pregnancy (CLASP) Collaborative Group. CLASP: a randomised trial of low-dose aspirin for the prevention and treatment of pre-eclampsia among 9364 pregnant women. Lancet. 1994;343:619–29.
21. Caritis S, Sibai B, Hauth J, et al. Low-dose aspirin to prevent preeclampsia in women at high risk. National Institute of Child Health and Human Development Network of Maternal-Fetal Medicine Units. N Engl J Med. 1998;338(11):701–5. doi: 10.1056/NEJM199803123381101.
22. Eapen ZJ, Vavalle JP, Granger CB, et al. Rescuing clinical trials in the United States and beyond: a call for action. Am Heart J. 2013;165:837–47. doi: 10.1016/j.ahj.2013.02.003.
23. Watson JM, Torgerson DJ. Increasing recruitment to randomised trials: a review of randomised controlled trials. BMC Med Res Methodol. 2006;6:34. doi: 10.1186/1471-2288-6-34.
24. Wolinsky FW, Vander Weg MW, Howren MB, et al. A randomized controlled trial of cognitive training in middle aged and older adults. PLoS One. 2013;8(5):e61624. doi: 10.1371/journal.pone.0061624.
25. Embi PJ, Jain A, Clark J, et al. Effect of a clinical trial alert system on physician participation in trial recruitment. Arch Intern Med. 2005;165(19):2272–7. doi: 10.1001/archinte.165.19.2272.
26. Weng C, Appelbaum P, Hripcsak G, et al. Using EHRs to integrate research with patient care: promises and challenges. J Am Med Inform Assoc. 2012;19(5):684–7. doi: 10.1136/amiajnl-2012-000878.
27. Abrams TE, Vaughan-Sarrazin M, Rosenthal GE. Variations in the associations between psychiatric comorbidity and hospital mortality according to the method of identifying psychiatric diagnoses. J Gen Intern Med. 2008;23:317–22. doi: 10.1007/s11606-008-0518-z.
28. Kass NE, Faden RR, Goodman SN, et al. The research-treatment distinction: a problematic approach for determining which activities should have ethical oversight. Hastings Cent Rep. 2013 Jan–Feb;Spec No:S4–S15. doi: 10.1002/hast.133.
29. Faden RR, Kass NE, Goodman SN, et al. An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics. Hastings Cent Rep. 2013 Jan–Feb;Spec No:S16–27. doi: 10.1002/hast.134.
