Abstract
Pragmatic clinical trials (PCTs) are research investigations embedded in health care settings designed to increase the efficiency of research and its relevance to clinical practice. The Health Care Systems Research Collaboratory, initiated by the National Institutes of Health Common Fund in 2010, is a pioneering cooperative aimed at identifying and overcoming operational challenges to pragmatic research. Drawing from our experience, we present 4 broad categories of informatics-related challenges: (1) using clinical data for research, (2) integrating data from heterogeneous systems, (3) using electronic health records to support intervention delivery or health system change, and (4) assessing and improving data capture to define study populations and outcomes. These challenges affect the validity, reliability, and integrity of PCTs. Achieving the full potential of PCTs and a learning health system will require meaningful partnerships between researchers and health system leadership and operations, as well as federally driven standards and policies to ensure that future electronic health record systems have the flexibility to support research.
Keywords: pragmatic clinical trial, demonstration project, National Institutes of Health, clinical informatics, electronic health records
INTRODUCTION
The growing use of electronic health records (EHRs) has increased the potential of pragmatic clinical trials (PCTs), randomized controlled trials designed for generalizability, often involving multiple clinical sites and broad eligibility criteria.1,2 In contrast to traditional clinical trials, in which the goal is to evaluate new treatments under highly controlled conditions, PCTs are comparative effectiveness trials embedded within health care systems that are designed to maximize the generalizability of the results.3 PCTs provide a means to determine whether health interventions actually work in the “real world.” Hence rapid, efficient implementation of PCTs will be key to a successful learning health system.4 PCTs are also a source of “real-world evidence” that can inform therapeutic development, outcomes research, patient care, research on health care systems, quality improvement, safety surveillance, and well-controlled effectiveness studies.5
The Health Care Systems Research Collaboratory is funded by the National Institutes of Health Common Fund to advance the conduct of research in health care settings by developing methods to increase its efficiency, relevance, and generalizability. The Collaboratory supports 9 PCT demonstration projects (Table 1) in various phases of implementation, representing a variety of health conditions of interest, interventions, research designs, settings, and patient populations. Collectively, the Collaboratory has produced guidance on topics including ethical considerations and regulatory requirements,6,7 successful partnerships with health systems,8,9 design approaches and statistical methods,10,11 and assessment of data quality.12 The Collaboratory’s Coordinating Center maintains an online textbook to provide researchers, research sponsors, and health system leaders with practical guidance on ethics, consent, study design, patient-reported outcomes, EHR data, and learning health systems.13 The Collaboratory serves as an important knowledge-sharing community and resource for new research initiatives, such as the National Patient-Centered Clinical Research Network, known as PCORnet.14
Table 1.
Collaboratory PCT demonstration projects
| Trial acronym | Title | Project goal | Uses of existing clinical data |
|---|---|---|---|
| PPACT | Pain Program for Active Coping and Training | To coordinate and integrate services to help patients adopt self-management skills for chronic pain, limit use of opioid medications, and identify factors amenable to treatment in the primary care setting.15 | Uses EHR data to identify patients eligible for study participation; also embeds patient-reported outcomes (PROs) into the EHR system. |
| STOP CRC | Strategies and Opportunities to Stop Colorectal Cancer in Priority Populations | To improve the rates of colorectal cancer (CRC) screening at federally qualified health centers by having clinics mail fecal immunochemical tests to patients due for screening. Control arm clinics provide opportunistic CRC screening to patients at clinic visits. Although CRC is 90% curable if caught early, screening rates are extremely low in patients at federally qualified health centers, which serve nearly 19 million patients annually.16 | Uses EHR data to identify patients eligible for study participation and outcomes (increase in CRC screening rates) and an EHR embedded application (Reporting Workbench) to generate mailings. |
| SPOT | Suicide Prevention Outreach Trial | To compare outcomes in patients who receive 1 of 2 suicide-prevention strategies vs usual care. Strategy 1 is a care management approach, and strategy 2 is an online skills training method designed to help people manage painful emotions and stressful situations.17 | Uses routinely administered depression questionnaires and other clinical data extracted from the EHR to identify patients at risk for suicide attempt. Primary outcome “suicide attempt” is ascertained from health system EHR and claims databases.16,29 |
| TiME | Time to Reduce Mortality in End-Stage Renal Disease | To compare hemodialysis sessions of ≥4.25 h with usual care (no trial-driven approach to dialysis session length). Observational studies indicate that longer duration of hemodialysis sessions is associated with lower mortality, but this has not been evaluated in randomized trials.18 | All data elements for the trial are obtained from clinical data. |
| PROVEN | Pragmatic Trial of Video Education in Nursing Homes | To determine whether showing advanced care planning (ACP) videos in nursing homes affects the rate of hospitalization/person-years alive. Patients in nursing homes often have advanced comorbid conditions and may get aggressive care that is inconsistent with their preferences. ACP is associated with better palliative care outcomes, but its implementation is inconsistent.19 | Uses data from the national Minimum Data Set to ascertain all patient-level data. |
| LIRE | Lumbar Image Reporting with Epidemiology | To determine whether inserting epidemiological benchmarks (essentially representing the normal range) into lumbar spine imaging reports reduces subsequent tests and treatments, including cross-sectional imaging (such as magnetic resonance imaging and computed tomography), opioid prescriptions, spinal injections, and surgery. Lumbar imaging frequently reveals incidental findings of disk degeneration that are common in normal, pain-free volunteers.20,21 | Embeds contextual information (epidemiologic benchmarks) into lumbar spine imaging reports to evaluate whether this physician- and patient-directed information reduces subsequent tests and treatments for patients with back pain.19,28 Uses EHR data for patient identification and study outcomes. |
| ABATE | Active Bathing to Eliminate Infection | To evaluate whether antiseptic bathing for all patients and nasal ointments for patients harboring methicillin-resistant Staphylococcus aureus reduces antibiotic-resistant bacteria and hospital-associated infections.22 | Uses EHR data for outcome ascertainment (reduction in hospital-associated infections in noncritical care units). |
| ICD-Pieces™ | Improving Chronic Disease Management with Pieces | To evaluate the use of a novel technology platform (Pieces) to enable use of the EHR to identify patients with chronic kidney disease, diabetes, and hypertension and to improve care with practice facilitators and within primary care practices or community medical homes.23 | Uses EHR data to identify patients with coexistent chronic kidney disease, diabetes, and hypertension. |
| TSOS | Trauma Survivors Outcomes and Support | To implement an innovative trial design for improving care management of patients with posttraumatic stress disorder (PTSD) and comorbidity at US trauma centers, as well as to inform national policy changes for trauma care for which evidence-based treatments for PTSD and comorbidity have not been broadly implemented.24 | Uses EHR data to identify potential study participants with PTSD and comorbidity. |
Data from EHRs and other clinical information systems are used in Collaboratory PCTs for eligibility screening, recruitment activities, identification of patient cohorts for observation (the most common use case), interventions, and assessment of study outcomes. Our first report25 identified an informatics agenda that highlighted approaches for using data to identify clinically equivalent populations across multiple sites and for assessing whether data collected from health systems are comparable, valid, and reliable. In this update, we reflect on our continued experience conducting PCTs, characterize the informatics challenges that these trials have faced over the past 3 years, and propose policy and research actions that will facilitate the future conduct of PCTs.
APPROACH
The Collaboratory’s Phenotypes, Data Standards, and Data Quality Core group includes representatives from each PCT demonstration project and is charged with assimilating the experiences of the projects to develop generalizable knowledge for future PCTs.26 Data and informatics challenges identified by the group are documented in an ongoing inventory that is periodically reviewed with project teams to identify new issues as they emerge. Here, we distill the inventory of challenges into thematic categories and suggest actions to address them.
INFORMATICS-RELATED CHALLENGES FOR PRAGMATIC TRIALS
Using EHR data for research
Clinical trials typically control the quality and consistency of data by prospectively collecting study data, often using specialized clinical trial management systems, well-defined data-management procedures, and extensive data cleaning. Unlike the controlled conditions created by the detailed protocols and data-collection procedures of traditional clinical trials, PCTs are conducted in the “open system” of health care organizations. Organizational decisions (eg, policies, partners, and purchases) can affect patient mix, provider or researcher documentation behaviors, the types of data captured, and data quality, which in turn can compromise the integrity of the trial and the validity, interpretation, or generalizability of the study results. Data that are systematically missing, whether because of limited interoperability between EHR systems or a lack of consistent capture, can introduce bias into a study, often beyond researchers’ control (Example 1).
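As a minimal illustration of the kind of monitoring this implies, the sketch below (with hypothetical field names and an arbitrary threshold, not drawn from any Collaboratory project) compares the completeness of a study-critical field across sites and flags sites whose missingness departs from the rest, a pattern that suggests systematic capture differences rather than random gaps.

```python
# Sketch: flag sites whose missingness for a key field departs from the rest.
# Assumes a pandas DataFrame with one row per encounter and hypothetical
# columns "site_id" and "colonoscopy_result" (any study-critical field).
import pandas as pd

def flag_missingness(df: pd.DataFrame, field: str, threshold: float = 0.15) -> pd.DataFrame:
    """Return per-site missingness rates, flagging sites above `threshold`."""
    rates = (
        df.groupby("site_id")[field]
        .apply(lambda s: s.isna().mean())
        .rename("missing_rate")
        .reset_index()
    )
    rates["flagged"] = rates["missing_rate"] > threshold
    return rates.sort_values("missing_rate", ascending=False)

if __name__ == "__main__":
    encounters = pd.DataFrame({
        "site_id": ["A", "A", "B", "B", "B", "C"],
        "colonoscopy_result": ["normal", None, "normal", None, None, "polyp"],
    })
    print(flag_missingness(encounters, "colonoscopy_result"))
```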
Integrating data from disparate and heterogeneous systems
A common thread across the Collaboratory PCTs is the development of processes for obtaining data from multiple sites at multiple time points and assessing whether those data are what the researchers intended to receive. Combining data from disparate or heterogeneous systems requires either deterministic linkage on patient identifiers or probabilistic matching. In either case, linkages must be maintained continuously as data are updated. Procedures for data updates, quality checks, and linkages must be clearly documented at each site in case of staff turnover or reassignment. Each site also needs a process for managing nonstandard or local codes, so that codes can be grouped to identify particular services and diagnoses and so that new codes appearing during the study are detected.27
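As one illustration of the local-code problem, the following sketch (with hypothetical code values and groupings, not taken from any specific site) maps site-specific procedure codes to study-level categories and reports any codes that arrive unmapped, so they can be reviewed and added during the study.

```python
# Sketch: group local procedure codes into study-level categories and detect
# codes that are not yet mapped. Code values and groupings are illustrative.
from collections import defaultdict

CODE_GROUPS = {
    "45378": "colonoscopy",        # CPT colonoscopy, diagnostic
    "45380": "colonoscopy",        # CPT colonoscopy with biopsy
    "LOCAL-FIT-01": "fit_test",    # hypothetical site-local FIT code
}

def group_codes(observed_codes):
    """Return counts per study group plus any unmapped codes for review."""
    grouped = defaultdict(int)
    unmapped = set()
    for code in observed_codes:
        group = CODE_GROUPS.get(code)
        if group is None:
            unmapped.add(code)
        else:
            grouped[group] += 1
    return dict(grouped), sorted(unmapped)

if __name__ == "__main__":
    incoming = ["45378", "LOCAL-FIT-01", "LOCAL-FIT-02", "45380"]
    groups, needs_review = group_codes(incoming)
    print(groups)        # {'colonoscopy': 2, 'fit_test': 1}
    print(needs_review)  # ['LOCAL-FIT-02'] -> new local code to map mid-study
```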
Because of the heterogeneity of clinical information systems, data-collection practices, and data-representation formats, study teams from Collaboratory projects have had to invest considerable resources to export existing clinical data from each site and map them to a uniform study dataset. Discovering whether data are missing from an extracted dataset, or were incorrectly transformed, is challenging and often requires manual review.28 Other difficulties include formatting differences that require transformation of incoming datasets, loss of data during transformation, and ensuring that the study team understands the origin of important data so it can identify inconsistencies in data handling, facility workflow, or documentation. Because health system outsiders do not know how data are created and managed at any given organization, a centralized study team or coordinating center alone cannot effectively characterize the data; each research site must participate.
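A few simple reconciliation checks can catch much of this. The sketch below is an illustrative example, assuming hypothetical field names and plausibility limits, of the kind of automated comparison a coordinating center might run between a site's source extract and its mapped, uniform dataset.

```python
# Sketch: simple reconciliation checks after mapping a site extract to the
# study's uniform dataset. Field names and expected ranges are hypothetical.
import pandas as pd

def reconcile(source: pd.DataFrame, transformed: pd.DataFrame) -> list[str]:
    """Return human-readable warnings about possible data loss or corruption."""
    warnings = []
    # 1. Row counts should match unless the mapping intentionally filters rows.
    if len(source) != len(transformed):
        warnings.append(f"row count changed: {len(source)} -> {len(transformed)}")
    # 2. Patients present in the source should not vanish in the mapped data.
    lost = set(source["patient_id"]) - set(transformed["patient_id"])
    if lost:
        warnings.append(f"{len(lost)} patients missing after transformation")
    # 3. Basic plausibility check on a mapped numeric field.
    out_of_range = transformed.query("session_hours < 0 or session_hours > 12")
    if not out_of_range.empty:
        warnings.append(f"{len(out_of_range)} implausible session_hours values")
    return warnings
```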
Using EHRs to support delivery of interventions or improvements in clinical practice
One goal of pragmatic research is to evaluate and eventually promote changes in clinical practice that benefit stakeholders. EHRs can be used to support the delivery of interventions or new clinical practices using a number of decision support features (eg, best practice alerts, customized information) that can be evaluated with thoughtfully designed pragmatic trials. EHR systems also can be used to identify patients eligible for trials or at risk for certain conditions.
Implementing EHR enhancements designed to change clinical practice has proved more complicated and resource-intensive than expected, in part because each health care system has its own processes for implementing enhancements, even in common products; ie, no 2 EHR installations are the same, even when supplied by the same vendor. Further, support for new features in the EHR (ie, access to help, change control, bug fixes) is site-specific. Consequently, all of our teams had to budget for some proportion of IT time at every site.
Ideally, an evaluation of interventions delivered by EHR systems should include whether providers actually process the information or alerts. Often, we assume that if a tool is visible, it is read and acted on. Yet, given the complexity of EHR design and usability and the competing cognitive demands on clinicians, future EHRs must be able to capture whether providers actually view and comprehend EHR-embedded interventions so that intervention fidelity can be assessed with confidence.
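Where EHR audit logs do expose alert events, exposure can at least be approximated. The sketch below is hypothetical: it assumes an audit export with "alert_shown" and "alert_acknowledged" actions (neither the event names nor the export format come from any particular vendor) and computes a per-clinic acknowledgment rate as a crude fidelity signal.

```python
# Sketch: estimate how often an EHR-embedded alert was acknowledged after being
# shown, per clinic. Event names and the export format are hypothetical.
import csv
from collections import Counter

def acknowledgment_rates(audit_csv_path: str) -> dict[str, float]:
    """Return acknowledged/shown ratio per clinic from an audit-event export."""
    shown, acked = Counter(), Counter()
    with open(audit_csv_path, newline="") as f:
        for row in csv.DictReader(f):   # expected columns: clinic_id, action
            if row["action"] == "alert_shown":
                shown[row["clinic_id"]] += 1
            elif row["action"] == "alert_acknowledged":
                acked[row["clinic_id"]] += 1
    return {clinic: acked[clinic] / n for clinic, n in shown.items() if n}
```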
Assessing and improving data capture
Because clinical systems are designed for patient care and are not optimized for research, data must be assessed carefully to determine whether they are sufficient to address the needs of a given PCT. Caveats for the use of clinical data in research have been well described.29,30 For pragmatic trials in particular, we have suggested activities for assessing the validity of clinical data for research purposes, including formal validation of clinical phenotype definitions,31 public reporting of results from multiple clinical sites, and evaluation and reporting of data quality metrics.25 Our previous recommendations included rigorous data quality assessment activities to describe the capability of the data to support the research conclusions.12 These recommendations acknowledge that the level of quality and data validation needed will vary by trial and purpose of the data.
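For the phenotype-validation activity in particular, the arithmetic is straightforward. The sketch below, using hypothetical patient identifiers, computes the positive predictive value and sensitivity of an EHR-based phenotype against a chart-review gold standard.

```python
# Sketch: validate an EHR-based phenotype against a chart-review gold standard
# by computing positive predictive value (PPV) and sensitivity. Patient IDs
# are hypothetical.
def validate_phenotype(ehr_positive: set[str], chart_positive: set[str],
                       chart_reviewed: set[str]) -> dict[str, float]:
    ehr_in_reviewed = ehr_positive & chart_reviewed
    true_pos = ehr_in_reviewed & chart_positive
    ppv = len(true_pos) / len(ehr_in_reviewed) if ehr_in_reviewed else float("nan")
    sensitivity = len(true_pos) / len(chart_positive) if chart_positive else float("nan")
    return {"ppv": ppv, "sensitivity": sensitivity}

if __name__ == "__main__":
    print(validate_phenotype(
        ehr_positive={"p1", "p2", "p3"},
        chart_positive={"p1", "p3", "p4"},
        chart_reviewed={"p1", "p2", "p3", "p4", "p5"},
    ))  # ppv = 2/3, sensitivity = 2/3
```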
In some cases, the data critically needed for a trial, such as patient-reported outcomes, are not routinely collected by the health system. Researchers must develop or enhance systems in order to collect these data, and they must interact with operations staff to ensure alignment with workflows (Example 2).
DISCUSSION
We distilled our experience from the Collaboratory PCTs into 4 broad informatics challenges: using EHR data for research, integrating data from disparate and heterogeneous systems, using EHRs to support the delivery of interventions or improvements in clinical practice, and assessing and improving data capture. While the list may not be comprehensive, we believe it is generalizable to other pragmatic trials and health systems and provides a useful starting point for discussions of the important role of informatics in PCTs.
Embedding clinical trials into health care systems has many challenges, including obtaining health systems’ cooperation to support the trial, designing a trial that will work within the “business” of health care delivery, navigating federal regulations for human research and quality improvement activities, identifying organizational concerns about sharing provider or outcome data, and competing for limited IT resources.9,13 Increasingly, informatics professionals will need to engage with research teams and health system leaders to support pragmatic research, underscoring the inevitable union between clinical research and health care operations.32
Because of the shortage of qualified IT staff in many health systems, this challenge cannot be addressed simply by adding more research funds.33 Uniformly, Collaboratory demonstration projects have addressed it by engaging health systems as research partners and by facilitating access to local IT staff through research champions within the organization, both of which are essential for other trial activities (eg, recruitment) and critical to the success of PCTs in general.8 Indeed, frequent communication between health system staff and research teams is an important component of all Collaboratory projects. These projects recommend that future PCTs also plan for adequate staffing and time to run systematic data quality tests throughout the study.12
Previous research has shown that only a small portion of the data required for research is typically available in EHR systems.28,33 Because it is unlikely that clinical systems will ever capture the data elements needed to answer all research questions, standardized approaches are needed to augment EHR systems with additional data collection that is equivalent across sites. The Retrieve Form for Data Capture integration profile was developed for this specific purpose34 but was not used in any of the demonstration projects. The Retrieve Form for Data Capture approach would be strengthened by standard data elements and associated terminology that are harmonized between routine care and research. We recommend that researchers use elements from a standard library or post their phenotype definitions to a public repository accessible to researchers and research consumers.35 This would create efficiencies for researchers implementing new studies and would also benefit research consumers by enabling comparisons across multiple studies that use similar disease definitions, data dictionaries, and outcome measures.
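To make the idea of a shareable definition concrete, the sketch below expresses a simplified, hypothetical chronic kidney disease phenotype as a machine-readable structure that could be posted to a repository and applied uniformly across sites. The codes, lab threshold, and logic are illustrative only and do not represent a validated definition.

```python
# Sketch: a simplified, machine-readable phenotype definition that could be
# shared through a public repository and applied consistently across sites.
# Codes and thresholds are illustrative, not a validated definition.
PHENOTYPE_CKD = {
    "name": "chronic_kidney_disease_example",
    "version": "0.1",
    "diagnosis_codes": {"system": "ICD-10-CM", "codes": ["N18.3", "N18.4", "N18.5"]},
    "lab_criterion": {"loinc": "33914-3",      # eGFR (illustrative LOINC code)
                      "operator": "<", "value": 60,
                      "min_occurrences": 2, "min_days_apart": 90},
    "logic": "diagnosis_codes OR lab_criterion",
}

def matches(patient_dx: set[str], egfr_values_by_day: dict[int, float]) -> bool:
    """Apply the definition to one patient's diagnoses and eGFR results."""
    if set(PHENOTYPE_CKD["diagnosis_codes"]["codes"]) & patient_dx:
        return True
    crit = PHENOTYPE_CKD["lab_criterion"]
    low_days = sorted(d for d, v in egfr_values_by_day.items() if v < crit["value"])
    return (len(low_days) >= crit["min_occurrences"]
            and low_days[-1] - low_days[0] >= crit["min_days_apart"])
```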
Most commercial EHRs were not designed to facilitate research.32 Our experience in multisite pragmatic trials underscores the lack of uniform interface and workflow enhancements to support PCTs. We recommend that specific certification criteria be added by the Office of the National Coordinator for Health IT. The Certified EHR Technology and Meaningful Use stage 3 objectives related to transmitting data to registries should require that certified EHR technology be capable of normalizing data to standardized data elements or a data dictionary, and interoperability standards should also be developed and promoted.33 Further, future certified EHRs should enable health care systems to manage clinical data within their own infrastructure, yet share research datasets outside their walls.36 Finally, the HL7 Fast Healthcare Interoperability Resources standard37 can enable third-party applications to interact directly with existing commercial EHR data models as a way to quickly deliver new front-end features38 that facilitate research or practice changes in response to new evidence. Such “plug-and-play” readiness should also be a requirement for EHR certification. Because Fast Healthcare Interoperability Resources–based research applications can be centrally developed, they can reduce the resources clinical sites need to engage in multisite research.
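As a simple illustration of the kind of standards-based access that Fast Healthcare Interoperability Resources enables, the sketch below queries a FHIR server's REST search API for hemoglobin A1c observations. The base URL and authorization handling are placeholders; a production application would use the health system's authorization flow (eg, SMART on FHIR).

```python
# Sketch: retrieve hemoglobin A1c observations from a FHIR server via its REST
# search API. The base URL and authorization are placeholders; the search
# syntax (resource type, `code`, `date`, paging via Bundle links) follows the
# FHIR specification.
import requests

FHIR_BASE = "https://fhir.example.org/r4"          # placeholder server
HEADERS = {"Accept": "application/fhir+json"}       # add Authorization in practice

def fetch_a1c_observations(since: str = "2023-01-01"):
    """Yield Observation resources for LOINC 4548-4 (hemoglobin A1c)."""
    url = f"{FHIR_BASE}/Observation"
    params = {"code": "http://loinc.org|4548-4", "date": f"ge{since}", "_count": 100}
    while url:
        bundle = requests.get(url, params=params, headers=HEADERS, timeout=30).json()
        for entry in bundle.get("entry", []):
            yield entry["resource"]
        # Follow the Bundle's "next" link for subsequent pages, if any.
        url = next((link["url"] for link in bundle.get("link", [])
                    if link["relation"] == "next"), None)
        params = None  # the next link already encodes the search parameters
```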
Without these features, the readiness of clinical sites to implement new data-collection interfaces and front-end workflow changes in the EHR will continue to vary across organizations. Many studies today are hindered by the tension between the study design requirements to initiate an intervention and the readiness of sites to launch a feature or begin data collection. A truly national learning health system depends on pragmatic trials in smaller clinics and rural settings, not just in academic medical centers. Ultimately, all health systems, large and small, should support interoperable data exchange and research functions.
CONCLUSION
PCTs provide a means to assess health intervention performance in actual practice settings and are a key component in learning health systems. In order to succeed, informatics, research, and operations will need to collaborate on strategies to harmonize data and coordinate information technology across organizations that participate in multisite trials. National-level policies and standards are needed to ensure that EHR systems will support pragmatic research and learning health system goals.
Knowledge Resources for Pragmatic Clinical Trials
Rethinking Clinical Trials: A Living Textbook of Pragmatic Clinical Trials® is a public resource that provides the latest thinking in the conduct of pragmatic clinical trials, with chapters on acquiring EHR data, managing conflicts of interest and consent, implementing patient-reported outcome measures, and establishing a learning health care system.13 A number of narratives and case studies related to phenotypes, data standards, and data quality are posted on the “Tools for Research” tab and will continue to be updated.13
Example 1. Obtaining data from outside labs in the Strategies and Opportunities to Stop Colorectal Cancer in Priority Populations (STOP CRC) pragmatic trial
The STOP CRC project had difficulty with the timely and consistent identification of completed screening examinations across sites, which were critical data for the trial. Variations in the capture of colonoscopy data led to errors both in classifying individuals as eligible for screening and in ascertaining the study outcome, completion of CRC testing. STOP CRC clinics used an EHR search tool to identify the word “colonoscopy” in free text and find previously completed colonoscopies, and then entered the procedure and the date it was performed into the EHR’s Health Maintenance tool, thereby improving the completeness of the patient record and reducing future unnecessary and redundant testing. However, workflows for using this tool and entering outside data could vary and affect comparisons between intervention and control sites. To minimize this bias, the study chose completion of fecal testing as the primary outcome, because it was consistently captured across sites. Completion of any type of CRC screening test, including colonoscopy or flexible sigmoidoscopy, was a secondary outcome.
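The sketch below is not the EHR vendor's search tool; it is a hypothetical illustration of the kind of free-text scan and date extraction involved, and of why results depend heavily on how staff document and enter outside records.

```python
# Sketch: a crude free-text scan for documented colonoscopies in outside
# records, of the kind a site might run before entering results into a
# structured health-maintenance field. Patterns and note format are
# hypothetical; real notes need far more robust handling.
import re
from datetime import datetime

PATTERN = re.compile(
    r"colonoscopy.{0,80}?(\d{1,2}/\d{1,2}/\d{4})",  # procedure word near a date
    re.IGNORECASE | re.DOTALL,
)

def find_colonoscopy_dates(note_text: str) -> list[datetime]:
    """Return candidate colonoscopy dates mentioned in a free-text note."""
    dates = []
    for raw in PATTERN.findall(note_text):
        try:
            dates.append(datetime.strptime(raw, "%m/%d/%Y"))
        except ValueError:
            pass  # skip malformed dates; flag for manual review in practice
    return dates

if __name__ == "__main__":
    note = "Outside records: screening colonoscopy performed 04/12/2014, no polyps."
    print(find_colonoscopy_dates(note))  # [datetime(2014, 4, 12, 0, 0)]
```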
More details are available on the “Tools for Research” page at https://sites.duke.edu/rethinkingclinicaltrials/tools-for-research/.
Example 2. Collecting patient-reported data to evaluate interventions in the Pain Program for Active Coping and Training (PPACT) pragmatic trial
The PPACT study needed patient-reported outcome (PRO) data for its primary endpoints. The team's first step was to determine whether each region was using the same PRO instrument and whether the data were collected with adequate frequency; PRO data collected through standard clinical practice were not sufficient to meet the needs of the project. To address this, project leaders worked with the national Kaiser organization to create buy-in for use of a common instrument across the regions, which local IT staff then built within each region. In addition, a multitiered approach was developed to supplement the clinically collected PRO data at the 4 project-required time points (3, 6, 9, and 12 months). Two tiers were within the clinical system: a secure e-mail with an attached survey was sent from the EHR, followed by an automated interactive voice response phone call. Follow-up phone calls by research staff were necessary to maximize data collection at each time point; these calls were consistent with the standard clinical practice of having medical assistant staff follow up with patients by phone.
More details are on the “Tools for Research” page at https://sites.duke.edu/rethinkingclinicaltrials/tools-for-research/.
ACKNOWLEDGMENTS
We wish to thank Lesley Curtis, PhD, for review, and Liz Wing, MA, for editorial assistance. The members of the National Institutes of Health Collaboratory’s Phenotypes, Data Standards, and Data Quality Core group are Monique Anderson, Alan Bauck, Denise Cifelli, Pedro Gozalo, Beverly Green, Michael Kahn, Reesa Laws, John Lynch, Rosemary Madigan, Vincent Mor, George “Holt” Oliver, Jon Puro, Rachel Richesson, Alee Rowley, Michelle Smerek, Greg Simon, Kari Stephens, and Erik Van Eaton.
Funding
This publication was made possible by grants 1U54AT007748-01, 1UH2AT007769-01, 1UH2AT007782-01, 1UH2AT007755-01, 1UH2AT007788-01, 1UH2AT007766-01, 1UH2AT007784-01, and 1UH2AT007797-01 from the National Institutes of Health.
Competing interests
The authors have no competing interests related to the content or publication of this manuscript.
References
- 1. Patsopoulos NA. A pragmatic view on pragmatic trials. Dialogues Clin Neurosci. 2011;13:217–24.
- 2. Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. 2003;290:1624–32.
- 3. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Chronic Dis. 1967;20:637–48.
- 4. Murdoch TB, Detsky AS. The inevitable application of big data to health care. JAMA. 2013;309:1351–52.
- 5. Sherman RE, Anderson SA, Dal Pan GJ et al. Real-World Evidence — What Is It and What Can It Tell Us? N Engl J Med. 2016;375:2293–97.
- 6. Anderson ML, Califf RM, Sugarman J et al. Ethical and regulatory issues of pragmatic cluster randomized trials in contemporary health systems. Clin Trials. 2015;12:276–86.
- 7. Califf RM, Sugarman J. Exploring the ethical and regulatory issues in pragmatic clinical trials. Clin Trials. 2015;12:436–41.
- 8. Johnson KE, Tachibana C, Coronado GD et al. A guide to research partnerships for pragmatic clinical trials. BMJ. 2014;349:g6826.
- 9. NIH Health Care Systems Research Collaboratory. Lessons Learned from the NIH Health Care Systems Research Collaboratory Demonstration Projects. January 19, 2016. https://www.nihcollaboratory.org/Products/Lessons%20Learned%20from%20the%20NIH%20Collaboratory%20Demonstration%20Projects_V1.0.pdf. Accessed September 20, 2016.
- 10. Cook AJ, Delong E, Murray DM et al. Statistical lessons learned for designing cluster randomized pragmatic clinical trials from the NIH Health Care Systems Collaboratory Biostatistics and Design Core. Clin Trials. 2016;13:504–12.
- 11. Pladevall M, Simpkins J, Donner A et al. Designing multicenter cluster randomized trials: an introductory toolkit. NIH Health Care Systems Research Collaboratory. November 7, 2014. https://www.nihcollaboratory.org/Products/Designing%20CRTs-Introductory%20Toolkit.pdf. Accessed September 20, 2016.
- 12. Zozus MN, Hammond WE, Green BB et al. Assessing Data Quality for Healthcare Systems Data Used in Clinical Research (Version 1.0). An NIH Health Care Systems Research Collaboratory White Paper. 2013. http://sites.duke.edu/rethinkingclinicaltrials/assessing-data-quality/. Accessed December 13, 2016.
- 13. NIH Health Care Systems Research Collaboratory. Rethinking Clinical Trials: A Living Textbook of Pragmatic Clinical Trials. https://sites.duke.edu/rethinkingclinicaltrials/. Accessed September 20, 2016.
- 14. Fleurence RL, Curtis LH, Califf RM et al. Launching PCORnet, a national patient-centered clinical research network. J Am Med Inform Assoc. 2014;21:578–82.
- 15. National Institutes of Health. NIH Health Care Systems Research Collaboratory Demonstration Project. Pain Program for Active Coping and Training (PPACT). https://www.nihcollaboratory.org/demonstration-projects/Pages/PPACT.aspx. Accessed September 20, 2016.
- 16. National Institutes of Health. NIH Health Care Systems Research Collaboratory Demonstration Project. Strategies and Opportunities to Stop Colorectal Cancer (STOP CRC). https://www.nihcollaboratory.org/demonstration-projects/Pages/STOP%20CRC.aspx. Accessed October 9, 2015.
- 17. National Institutes of Health. NIH Health Care Systems Research Collaboratory Demonstration Project. Suicide Prevention Outreach Trial (SPOT). https://www.nihcollaboratory.org/demonstration-projects/Pages/SPOT.aspx. Accessed October 9, 2015.
- 18. National Institutes of Health. NIH Health Care Systems Research Collaboratory Demonstration Project. Time to Reduce Mortality in End-Stage Renal Disease (TiME). https://www.nihcollaboratory.org/demonstration-projects/Pages/TiME.aspx. Accessed October 9, 2015.
- 19. National Institutes of Health. NIH Health Care Systems Research Collaboratory Demonstration Project. Pragmatic Trial of Video Education in Nursing Homes (PROVEN). https://www.nihcollaboratory.org/demonstration-projects/Pages/PROVEN.aspx. Accessed October 9, 2015.
- 20. National Institutes of Health. NIH Health Care Systems Research Collaboratory Demonstration Project. Lumbar Image Reporting with Epidemiology (LIRE). https://www.nihcollaboratory.org/demonstration-projects/Pages/LIRE.aspx. Accessed October 9, 2015.
- 21. National Institutes of Health. NIH Collaboratory. An Interview with Dr. Jerry Jarvik. 2015. https://www.nihcollaboratory.org/Pages/Jarvik%20LIRE%20Interview%207-21-15.pdf. Accessed October 20, 2015.
- 22. National Institutes of Health. NIH Health Care Systems Research Collaboratory Demonstration Project. Active Bathing to Eliminate (ABATE) Infection trial. https://www.nihcollaboratory.org/demonstration-projects/Pages/ABATE.aspx. Accessed February 2, 2015.
- 23. National Institutes of Health. NIH Health Care Systems Research Collaboratory Demonstration Project. Improving Chronic Disease Management with Pieces (ICD-Pieces). https://www.nihcollaboratory.org/demonstration-projects/Pages/ICD-Pieces.aspx. Accessed February 4, 2015.
- 24. National Institutes of Health. NIH Health Care Systems Research Collaboratory Demonstration Project. Trauma Survivors Outcomes and Support (TSOS). https://www.nihcollaboratory.org/demonstration-projects/Pages/TSOS.aspx. Accessed January 19, 2016.
- 25. Richesson RL, Hammond WE, Nahm M et al. Electronic health records based phenotyping in next-generation clinical trials: a perspective from the NIH Health Care Systems Collaboratory. J Am Med Inform Assoc. 2013;20:e226–31.
- 26. NIH Health Care Systems Research Collaboratory. Phenotypes, Data Standards, and Data Quality Core Group Webpage. https://www.nihcollaboratory.org/cores/Pages/phenotypes.aspx. Accessed October 27, 2016.
- 27. Puro J, Coronado GD, Petrik A et al. Using EHR Data to Automate Colorectal Cancer Screening at Community Health Centers: Opportunities and Barriers. North American Primary Care Research Group 2014 Annual Meeting, New York; 2014. http://www.napcrg.org/Conferences/PastMeetingArchives/2014AnnualMeetingArchives.
- 28. Devine EB, Capurro D, van Eaton E et al. Preparing electronic clinical data for quality improvement and comparative effectiveness research: The SCOAP CERTAIN Automation and Validation Project. EGEMS (Wash DC). 2013;1:1025.
- 29. Hersh WR, Cimino J, Payne PR et al. Recommendations for the use of operational electronic health record data in comparative effectiveness research. EGEMS (Wash DC). 2013;1:1018.
- 30. Richesson RL, Horvath MM, Rusincovitch SA. Clinical research informatics and electronic health record data. Yearb Med Inform. 2014;9:215–23.
- 31. Petrik AF, Green BB, Vollmer WM et al. The validation of electronic health records in accurately identifying patients eligible for colorectal cancer screening in safety net clinics. Fam Pract. 2016;33(6):639–43.
- 32. Marsolo K. Informatics and operations: let’s get integrated. J Am Med Inform Assoc. 2013;20:122–24.
- 33. Van Eaton EG, Devlin AB, Devine EB et al. Achieving and sustaining automated health data linkages for learning systems: barriers and solutions. EGEMS (Wash DC). 2014;2:1069.
- 34. Retrieve Form for Data Capture. Integrating the Healthcare Enterprise Wiki page. http://wiki.ihe.net/index.php/Retrieve_Form_for_Data_Capture. Accessed October 27, 2016.
- 35. Richesson RL, Smerek MM, Blake Cameron C. A Framework to Support the Sharing and Reuse of Computable Phenotype Definitions Across Health Care Delivery and Clinical Research Applications. EGEMS (Wash DC). 2016;4:1232.
- 36. Sittig DF, Hazlehurst BL, Brown J et al. A survey of informatics platforms that enable distributed comparative effectiveness research using multi-institutional heterogeneous clinical data. Med Care. 2012;50:S49–59.
- 37. HL7. FHIR Overview. https://www.hl7.org/fhir/overview.html. Accessed September 20, 2016.
- 38. SMART HealthIT. Something New and Powerful: SMART on FHIR®. http://smarthealthit.org/smart-on-fhir/. Accessed September 20, 2016.
