AMIA Summits on Translational Science Proceedings. 2019 May 6;2019:92–101.

A Method for EHR Phenotype Management in an i2b2 Data Warehouse

Andrew Post 1, Nityananda Chappidi 1, Dileep Gunda 1, Nita Deshpande 1
PMCID: PMC6568136  PMID: 31258960

Abstract

Electronic health record (EHR) data is valuable for finding patients for clinical research and analytics but is complex to query. EHR phenotyping involves the curation and dissemination of best practices for querying commonly studied populations. Phenotyping software computes patterns in clinical and administrative data and may add the found patterns as derived variables to a database that researchers can query. This paper describes a method for managing EHR phenotypes in a data warehouse as the warehouse is incrementally updated with new and changed data. We have implemented this method in proof-of-concept form as an extension to the Eureka! Clinical Analytics phenotyping software system and evaluated the implementation’s performance. The method shows promise for realizing the efficient addition, modification, and removal of derived variables representing phenotypes in a data warehouse.

Introduction

Clinical data warehouses[1] provide a population view of data in the electronic health record (EHR) that can support characterizing the number of potential participants for research studies in order to assess study feasibility.[2] After a study has launched, clinical data warehouses can incrementally identify patients who can be approached for recruitment.[3] Academic centers may have clinical research focus areas, and as a result, their researchers frequently query similar patient populations. Health systems with data warehouses appear less likely than organizations in other industries to have formal data governance models in place that could promote standardization of frequent data requests.[4] There is likely substantial variability in the queries that researchers use.

EHR phenotypes are best practice queries for identifying populations of interest for clinical and translational research.[5] In a future state, when researchers are formulating a study, they will partner with informatics experts to identify and adapt existing phenotypes from their institution’s phenotype repository, which leverages local best practices and possibly those from other institutions.[6] These phenotypes will be encoded in computable form.[7] The phenotypes for the study will be deployed into a data warehouse’s data transformation and loading processes, after which the data warehouse will provide a regularly updated set of patients with derived variables that are computed according to the definitions of the phenotypes of interest. Identification of patients with the phenotypes of interest could trigger research alerts indicating the availability of a potential participant for a study.

This capability requires incrementally updating phenotypes in a clinical data warehouse as data flows in. The data flow may include new, corrected, updated, and deleted data. As those data changes occur, computation of new phenotypes, changes to the phenotypes that were previously computed, and deletion of phenotypes that are no longer valid should also occur. While we could recompute all phenotypes of interest across the entire data warehouse every time data is updated, based on an analysis of our institution’s research data warehouse,[8] we expect that approach to perform poorly in an environment of daily data updates to a large data warehouse with millions of patients.

This paper will describe a method for implementing phenotype maintenance that includes incremental updating, and a prototype implementation that extends Eureka! Clinical Analytics. We previously reported on the Eureka system’s phenotyping capabilities[9,10] and on its support for loading incrementally updated data into an i2b2 data warehouse.[8] This proof-of-concept implementation supports creating new phenotypes for addition to an existing dataset; retrieving computed phenotypes; changing phenotype definitions and recomputing them; and deleting phenotypes that are no longer of interest. We have analyzed the correctness of the implementation using synthetic data.

Background

Eureka! Clinical Analytics implements an Extract, Transform and Load (ETL) process[11] for creating and maintaining databases that are populated from a clinical data warehouse.[12] It extracts data from relational databases, Excel spreadsheets, and flat files. It structurally and semantically transforms extracted data from the source system into the Accrual to Clinical Trials (ACT) common data model and its supported vocabularies.[13] Eureka computes phenotypes on the ACT data model representation of the data. It then transforms and loads the data and the time intervals representing the computed phenotypes into a target data model. Supported target models include the i2b2 star schema,[14] the Observational Medical Outcomes Partnership (OMOP) common data model,[15] and a Neo4j graph database.[12] While Eureka has supported incrementally updating a target model with new, changed, and deleted data since 2016,[8] until now we have not provided for incrementally updating computed phenotypes that are loaded into the target model.

Eureka provides four basic building blocks, called abstractions, for composing phenotypes:

  • Category abstractions allow specifying clinically significant groupings of abstractions and data values. Intervals are created that correspond to the timestamps of the category’s members that are found.

  • Value threshold abstractions allow specifying thresholds on one or more data values or abstractions such as a laboratory test or vital sign. The application of a threshold may require a contextual abstraction or data value to be present. Intervals are created corresponding to the timestamps during which there are data values that satisfy the thresholds.

  • Frequency abstractions allow specifying the number of times a data value or abstraction must be present. Intervals are created that span the temporal extent of all of the data values and abstractions that satisfy the frequency threshold.

  • Sequence abstractions allow specifying two or more data values or abstractions that occur in a required temporal order. Members of a sequence may be constrained with minimum and/or maximum temporal durations. The time distance between pairs of members may be constrained to minimum and maximum time values. Intervals are created as the temporal extent of a specified member of the sequence.

Abstractions may be computed directly from an extracted dataset, or they may be computed from a combination of data values and other abstractions. This capability allows building complex summary representations of clinical data.
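To make this composition concrete, the following minimal sketch chains a value threshold abstraction over timestamped laboratory values into a frequency abstraction over the resulting intervals. It is written in Java, the language Eureka is implemented in, but the class and method names are illustrative assumptions, not Eureka’s actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not Eureka's API): a value threshold abstraction
// turns timestamped lab values into intervals, and a frequency
// abstraction summarizes how many such intervals exist.
public class AbstractionSketch {
    // A timestamped observation, e.g. an HbA1c result.
    record Obs(long time, double value) {}
    // A derived interval with start and end timestamps.
    record Interval(long start, long end) {}

    // Value threshold: one interval per observation exceeding the threshold.
    static List<Interval> valueThreshold(List<Obs> obs, double min) {
        List<Interval> out = new ArrayList<>();
        for (Obs o : obs) {
            if (o.value() > min) out.add(new Interval(o.time(), o.time()));
        }
        return out;
    }

    // Frequency: if at least n intervals exist, emit one interval
    // spanning the temporal extent of all of them.
    static List<Interval> frequency(List<Interval> in, int n) {
        if (in.size() < n) return List.of();
        long start = Long.MAX_VALUE, end = Long.MIN_VALUE;
        for (Interval i : in) {
            start = Math.min(start, i.start());
            end = Math.max(end, i.end());
        }
        return List.of(new Interval(start, end));
    }

    public static void main(String[] args) {
        List<Obs> hba1c = List.of(new Obs(1, 8.0), new Obs(5, 9.5), new Obs(9, 10.1));
        List<Interval> elevated = valueThreshold(hba1c, 9.0); // two intervals
        List<Interval> repeated = frequency(elevated, 2);     // one interval spanning times 5..9
        System.out.println(elevated.size() + " elevated; frequency interval: " + repeated);
    }
}
```

Because each abstraction consumes intervals as well as raw values, the same frequency operation could equally be applied to the output of a category or sequence abstraction, which is what enables the layered summary representations described above.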

A Eureka “query” specifies data and phenotypes of interest and an optional date range, and the query result consists of a stream of data and found phenotypes grouped by patient, including linkages between phenotypes and the data from which they were computed. Because phenotypes are specified in a common data model rather than the source system’s data model, phenotypes may be specified once and reused for phenotyping data from different source systems. The query result is streamed into a “query result handler” plugin. Plugins are available for outputting delimited files, outputting graphs of data using the Neo4j database, and loading data into i2b2.

Eureka computes phenotypes using a production rules system. Each phenotype of interest is translated at query time into one or more rules. Source data is loaded into working memory a patient at a time, rules are fired, and intervals representing the abstractions that are found are inserted into working memory. Eureka maintains “forward” links from data to computed intervals, and it maintains similar links from intervals to other computed intervals. Eureka also maintains “backward” links from computed intervals down to the data from which they were computed.
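The forward and backward links can be pictured as two maps that are updated together whenever an interval is derived. The sketch below is illustrative only; the identifiers and data structures are assumptions, not Eureka’s internal representation.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the forward/backward provenance links described above.
public class ProvenanceSketch {
    // forward: source item id -> ids of intervals computed from it
    static final Map<String, Set<String>> forward = new HashMap<>();
    // backward: derived interval id -> ids of the items it was computed from
    static final Map<String, Set<String>> backward = new HashMap<>();

    // Record that a derived interval was computed from the given sources.
    static void link(String derivedId, List<String> sourceIds) {
        for (String s : sourceIds) {
            forward.computeIfAbsent(s, k -> new HashSet<>()).add(derivedId);
            backward.computeIfAbsent(derivedId, k -> new HashSet<>()).add(s);
        }
    }

    public static void main(String[] args) {
        link("OnDiuretic#1", List.of("HCTDispense#1"));
        link("ElevBPinHypertensive#1", List.of("OnDiuretic#1", "ElevatedBP#1"));
        // Forward traversal from the dispense reaches the derived interval.
        System.out.println(forward.get("HCTDispense#1"));
    }
}
```

The backward map answers "what was this interval computed from?" while the forward map answers "what depends on this item?", which is the question incremental updating must answer when data changes.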

Taking advantage of these provenance links for incrementally updating phenotypes requires that the links be persisted so that they can be updated when new and changed data become available. The OMOP common data model has a fact relationship table that may support storing at least some of the required provenance information. Assessing the feasibility of using OMOP’s fact relationship table is outside the scope of this proof of concept. Our research data warehouse environment employs i2b2, which models only relationships between facts and patients, visits, and providers. As a result, a secondary database is needed to manage a full set of phenotype provenance links. A key-value NoSQL store[16] that is organized by patient may provide an efficient and easily implemented solution. We employ this secondary database approach in Methods below.

Eureka is implemented in Java as RESTful[17] web services and Angular (https://angular.io) web clients for specifying phenotypes and running queries. Eureka computes abstractions using the Drools (https://www.drools.org) production rules system. Eureka is available as open source from https://github.com/eurekaclinical and has been deployed as the ETL system for our institution’s i2b2 data warehouse since October 2016.

Methods

Data flows

The Eureka! Clinical Analytics ETL process with incremental updating of phenotypes has six modes of operation. Two of them, called ETL flows, involve computing phenotypes on the fly as new and updated data flows through Eureka from a source data warehouse into i2b2. The other four, called phenotype management flows, support updating phenotypes and loading the changes into i2b2 without having to go back to the source system for data.

New phenotypes are specified by the user in Eureka! Clinical Analytics’ existing web user interface, which includes screens for creating, editing and deleting the category, value threshold, frequency, and sequence abstractions that are described above in Background.

ETL flows:

The first two modes, REPLACE and INCREMENTAL, add incremental phenotyping to Eureka’s existing full reload and incremental data updating processes. Inputs into the process, shown in Figure 1, are new and changed data from a clinical data warehouse, the Accrual to Clinical Trials (ACT) project’s ontology, mappings from codes used in the source data warehouse to the standard codes that are required by ACT, a repository of phenotype definitions, and a “working memory database”. The latter is a key-value store by patient id that maintains a copy of the data mapped into the ACT model, intervals representing phenotypes that have been computed by Eureka, and the provenance information linking phenotype intervals to the data from which they were computed.
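A minimal sketch of the working memory database layout described above follows: a key-value store keyed by patient id, whose value bundles the patient’s ACT-mapped data, computed phenotype intervals, and the provenance links between them. In Eureka the store is Berkeley DB JE; here a HashMap stands in, and all names are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the per-patient "working memory database" entry layout.
public class WorkingMemoryStore {
    record PatientEntry(List<String> actData,
                        List<String> phenotypeIntervals,
                        Map<String, List<String>> provenance) {}

    // Keyed by patient id; in Eureka this is a Berkeley DB JE key-value store.
    static final Map<String, PatientEntry> store = new HashMap<>();

    public static void main(String[] args) {
        store.put("patient-42", new PatientEntry(
            List.of("ICD9:401.9@2018-01-03", "HCT dispense@2018-01-05"),
            List.of("Hypertension[2018-01-03..2018-01-03]"),
            // provenance: derived interval -> the data it was computed from
            Map.of("Hypertension[2018-01-03..2018-01-03]",
                   List.of("ICD9:401.9@2018-01-03"))));
        // Incremental updates read and rewrite one patient's entry at a time.
        System.out.println(store.get("patient-42").phenotypeIntervals());
    }
}
```

Organizing the store by patient id means an incremental update touches only the entries of patients whose data changed, rather than scanning the whole dataset.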

Figure 1.

Eureka ETL flows compute and store phenotypes on the full dataset of interest from the source clinical data warehouse (REPLACE mode) or on data changes resulting from the incremental updating process (INCREMENTAL mode). Eureka phenotype management flows allow modifying the set of phenotypes that have been computed in a dataset. Rather than re-pull data from the source database, the phenotype management flows modify intervals representing phenotypes in the working memory database and push selected data and intervals into i2b2. Semantic Layer refers to the Eureka data processing RESTful web service.

In REPLACE mode, Eureka truncates the i2b2 tables and the working memory database, and it loads data and computes all requested phenotypes from scratch. This mode is intended to populate i2b2 with an initial dataset.

In INCREMENTAL mode, the target i2b2 database and the working memory database are updated with new and changed data. They also are updated with any phenotypes that now can be computed given the data changes. Intervals representing phenotypes may also be changed or deleted if the data from which they were computed changed or was deleted. INCREMENTAL mode is intended to refresh an i2b2 database as data is added, updated and deleted in the source system. This mode assumes that the set of phenotypes of interest has not changed, or that phenotype changes apply only to new and changed data going forward from the date of implementation.
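The core of the INCREMENTAL flow, at the per-patient level, is a diff step: after re-running the rules over a patient’s updated data, the newly computed intervals are compared against those previously stored, and each difference becomes an insert into or a delete from the target database. The sketch below is illustrative, not Eureka’s actual code, and identifies intervals by simple string ids.

```java
import java.util.HashSet;
import java.util.Set;

// Hedged sketch of the INCREMENTAL flow's per-patient diff step.
public class IncrementalDiff {
    record Delta(Set<String> toInsert, Set<String> toDelete) {}

    static Delta diff(Set<String> previous, Set<String> recomputed) {
        Set<String> ins = new HashSet<>(recomputed);
        ins.removeAll(previous);          // intervals that are newly computable
        Set<String> del = new HashSet<>(previous);
        del.removeAll(recomputed);        // intervals no longer supported by the data
        return new Delta(ins, del);
    }

    public static void main(String[] args) {
        Set<String> before = Set.of("Hypertension#1", "OnDiuretic#1");
        Set<String> after = Set.of("Hypertension#1", "Hypertension#2");
        Delta d = diff(before, after);
        System.out.println("insert " + d.toInsert() + "; delete " + d.toDelete());
    }
}
```

Intervals in neither set of the delta are unchanged and require no write, which is what keeps the incremental flow cheaper than a full recomputation.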

The INCREMENTAL mode is further illustrated in Figure 2, which shows phenotypes being added to a dataset at three time points: the initial data load, and two subsequent data updates.

Figure 2.

The management of phenotypes in Eureka after an initial data load at time point 1, and subsequent incremental updates at time points 2 and 3. Time point 1 has a snapshot containing one set of blood pressure readings, a hypertension diagnosis code, and a dispense of a drug used to treat hypertension. Based on these values, the system computes “On Diuretic” and “Hypertension” intervals. By time point 2, two more blood pressure readings are available, one of which is elevated. In addition, there is another hypertension diagnosis code, from which the system computes another “Hypertension” interval and a “Second Hypertension” interval. The data added by time point 3 allows Eureka to extend the Elevated BP interval and add new On Diuretic and Hypertension intervals. The type of abstraction (frequency, etc.) is in parentheses next to each interval’s name.

Phenotype management flows:

The latter four modes, CREATE, RETRIEVE, UPDATE, and DELETE, support creating, retrieving, updating, and deleting phenotypes in an existing working memory database and loading the changes into the i2b2 database repository (Figure 1). The data flow for these modes retrieves no data from the source data warehouse.

In CREATE mode, new phenotypes are computed from the available data and previously computed phenotypes in the working memory database. The changes to working memory are written back to the key-value store. The new phenotypes are also passed to i2b2, where they are loaded into the i2b2 fact table, and concept records for the new phenotypes are loaded into the concept dimension table.
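A hedged sketch of this flow: a new phenotype rule is applied per patient to intervals already held in the working memory store, with no re-pull of source data, and the found intervals are both appended to working memory and emitted as rows bound for the i2b2 fact table. The rule representation and names are illustrative assumptions, not Eureka’s API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Sketch of the CREATE phenotype management flow.
public class CreateMode {
    // working memory: patient id -> previously computed intervals
    static final Map<String, List<String>> workingMemory = new HashMap<>();

    // Apply the new phenotype's rule to every patient, append the found
    // intervals to working memory, and return the rows destined for i2b2.
    static List<String> create(Function<List<String>, List<String>> rule) {
        List<String> i2b2Rows = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : workingMemory.entrySet()) {
            List<String> found = rule.apply(e.getValue());
            e.getValue().addAll(found);
            for (String f : found) i2b2Rows.add(e.getKey() + ":" + f);
        }
        return i2b2Rows;
    }

    public static void main(String[] args) {
        workingMemory.put("p1", new ArrayList<>(
            List.of("SecondHypertension", "ElevatedBP", "OnDiuretic")));
        workingMemory.put("p2", new ArrayList<>(List.of("ElevatedBP")));
        // Toy rule: the new phenotype holds when all three inputs are present.
        List<String> rows = create(in ->
            in.containsAll(List.of("SecondHypertension", "ElevatedBP", "OnDiuretic"))
                ? List.of("ElevBPinHypertensiveOnDiuretic") : List.of());
        System.out.println(rows); // only p1 qualifies
    }
}
```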

Figure 3 shows an interval of a new phenotype, Elev. BP in Hypertensive on Diuretic, that was added using CREATE mode to the phenotypes at time point 3 from Figure 2. This new phenotype is computed from the Second Hypertension, Elevated BP, and On Diuretic intervals, which were already present in the working memory database.

Figure 3.

An interval of the Elev. BP in Hypertensive on Diuretic phenotype that was added incrementally by Eureka’s CREATE mode. The type of abstraction (sequence, etc.) is in parentheses next to each interval’s name.

RETRIEVE mode loads specified data and phenotypes from the working memory database into the target i2b2 data warehouse. We include this mode primarily for completeness, but it could be useful for restoring an i2b2 data warehouse from the contents of a working memory database.

UPDATE mode is functionally equivalent to a DELETE followed by a CREATE. Because phenotype intervals are computed values, the source data does not provide a unique identifier for the intervals that could support a full merge operation.

In DELETE mode, specified data and phenotypes are deleted from the working memory database, and their corresponding records are also passed to i2b2 with a delete timestamp set to the current time so that the specified data and phenotypes are deleted from i2b2. Because provenance is maintained, specifying a phenotype for deletion will cause all intervals of that phenotype to be deleted. It will also cause all intervals that are reachable from those intervals via forward links to be deleted from the working memory database. This capability is illustrated in Figure 4, which shows a hydrochlorothiazide (HCT) dispense data value marked as deleted, causing Eureka to delete phenotypes that were computed from it, such as On Diuretic and Elev. BP in Hypertensive on Diuretic.
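The cascading retraction just described amounts to a transitive traversal of the forward provenance links. The following sketch (illustrative names, not Eureka’s code) computes the full set of items to retract when a single data value is deleted:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of cascading deletion via forward provenance links.
public class CascadeDelete {
    // Returns the deleted item plus everything transitively derived from it.
    static Set<String> delete(String id, Map<String, Set<String>> forward) {
        Set<String> removed = new LinkedHashSet<>();
        Deque<String> work = new ArrayDeque<>(List.of(id));
        while (!work.isEmpty()) {
            String cur = work.pop();
            if (removed.add(cur)) {
                // Follow forward links to every interval computed from cur.
                work.addAll(forward.getOrDefault(cur, Set.of()));
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> fwd = Map.of(
            "HCTDispense#1", Set.of("OnDiuretic#1"),
            "OnDiuretic#1", Set.of("ElevBPinHypertensiveOnDiuretic#1"));
        // Deleting the dispense also retracts both derived intervals.
        System.out.println(delete("HCTDispense#1", fwd));
    }
}
```

The visited-set check makes the traversal safe even if two derived intervals share an upstream source.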

Figure 4.

A HCT Dispense data value was deleted, which caused intervals of the On Diuretic and Elev. BP in Hypertensive on Diuretic phenotypes to be retracted. The type of abstraction (sequence, etc.) is in parentheses next to each interval’s name.

Results

Proof-of-concept implementation

We implemented the working memory database using version 18.1.25 of the Oracle Berkeley DB Java Edition key-value store, which is freely available from https://www.oracle.com/database/berkeley-db/java-edition.html. We implemented incremental phenotyping in Eureka’s Java backend code. Eureka’s job submission screen, which launches data processing jobs, now allows selecting from the two ETL modes and four phenotype management modes as shown in Figure 5.

Figure 5.

Screenshot showing the Eureka data processing job management screen with the mode selector containing the two ETL modes (Update data [INCREMENTAL mode], REPLACE all data) and the four phenotype management modes (RETRIEVE from working memory, Add [CREATE] phenotypes to working memory, UPDATE working memory, and DELETE from working memory).

Evaluation of system performance

To test the system’s accuracy, we processed a synthetic dataset of 512 patients, available from https://github.com/eurekaclinical/eurekaclinical-analytics-webapp/tree/master/src/main/webapp/docs/sample.xlsx. It has the data types and volumes listed in Table 1. We specified in Eureka the 25 phenotypes in Table 2, which we previously created with quality improvement stakeholders at our institution’s health system as part of a hospital readmissions reduction effort.[9]

Table 1.

Data volumes by subject area

Subject area | # of records
Patients | 512
Hospital and clinic encounters | 2,555
ICD9 diagnosis codes | 16,312
Lab tests | 11,784
Total | 31,163

Table 2.

Phenotypes

Phenotype name | Type | Phenotype definition
Encounter with subsequent 30-day readmission | Sequence | A hospital encounter that is followed within 30 days of discharge by the start of another hospital encounter
Second readmit | Frequency | The second Encounter with subsequent 30-day readmission across all encounters for a patient
Myocardial infarction (MI) | Category | ICD-9 code in 410.*
Second MI | Frequency | The second Myocardial infarction across all encounters for a patient
Diabetes | Category | ICD-9 code in 250.* or 648.0*
Uncontrolled diabetes | Category, Value threshold | ICD-9 code in 250.*2, 250.*3, or 707.1; or Hemoglobin A1c (HbA1c) test result > 9%
Heart failure | Category | ICD-9 code in 402.01, 402.11, 402.91, 404.01, 404.03, 404.11, 404.13, 404.91, 404.93, or 428.*
Encounter in last 90 days | Sequence | Encounter that ends within 90 days of the start of another encounter
Encounter in last 180 days | Sequence | Encounter that ends within 180 days of the start of another encounter
Chronic kidney disease | Category | ICD-9 code in 581.*, 582.*, or 585.*
End-stage renal disease | Category | ICD-9 code 285.21 or 585.6
Chemotherapy encounter | Category | ICD-9 code V58.1
Radiation therapy encounter | Category | ICD-9 code V58.0
Obesity | Category | ICD-9 code 278.00 or 278.01
Stroke | Category | ICD-9 code in 430.*, 431.*, 432.9*, 433.01, 433.11, 433.21, 433.31, 433.81, 433.91, 434.00, 434.01, 434.10, 434.91, 435.*, or 436.*
Pressure ulcer | Category | ICD-9 code 707.0 or 707.2
Methicillin-resistant staph aureus | Category | ICD-9 code 041.12 or 038.12
Sickle cell anemia | Category | ICD-9 code in 282.6*
Sickle cell crisis | Category | ICD-9 code 282.62, 282.64, or 282.69
Chronic obstructive pulmonary disease | Category | ICD-9 code in 491.20, 491.21, 491.22, 492.8, 493.20, 493.21, 493.22, 494.0, 494.1, 495.*, or 496.*
Cancer | Category | ICD-9 code in 140–208, 209.0, 209.1, 209.2, 209.3, 225.*, 227.3, 227.4, 227.9, 228.02, 228.1, 230.*, 231.*, 232.*, 233.*, 234.*, 236.0, 237.*, 238.4, 238.6, 238.7, 239.6, 239.7, 259.2, 259.8, 273.2, 273.3, 285.22, 288.3, 289.83, 289.89, 511.81, 789.51, 795.06, 795.16, V58.0, V58.1*, or V10.*
Metastasis | Category | ICD-9 code in 196.*, 197.*, or 198.*
Pulmonary hypertension | Category | ICD-9 code 416.0, 416.1, 416.8, or 416.9
Fourth encounter with subsequent 30-day readmission | Frequency | The fourth Encounter with subsequent 30-day readmission across all encounters for a patient
Frequent-flier encounter | Sequence | An encounter after the patient’s Fourth encounter with subsequent 30-day readmission
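As an illustration of how the first sequence phenotype in Table 2 might be evaluated, the following sketch finds hospital encounters followed within 30 days of discharge by the start of another encounter. The names are illustrative, and day numbers stand in for dates; this is not how Eureka’s rules engine represents the computation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "Encounter with subsequent 30-day readmission" phenotype.
public class ReadmissionSketch {
    record Encounter(long startDay, long endDay) {}

    // Returns the index encounters that are followed by another
    // encounter starting within the given window after discharge.
    static List<Encounter> withReadmission(List<Encounter> encs, long windowDays) {
        List<Encounter> out = new ArrayList<>();
        for (Encounter index : encs) {
            for (Encounter next : encs) {
                long gap = next.startDay() - index.endDay();
                if (gap > 0 && gap <= windowDays) {
                    out.add(index);
                    break; // one qualifying readmission is enough
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Encounter> encs = List.of(
            new Encounter(0, 5),    // discharged day 5
            new Encounter(20, 25),  // readmitted day 20, within the window
            new Encounter(100, 101));
        System.out.println(withReadmission(encs, 30).size()); // prints 1
    }
}
```

The derived frequency phenotypes in Table 2 (Second readmit, Fourth encounter with subsequent 30-day readmission) would then count these intervals per patient.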

We loaded all 31,163 synthetic data values from the subject areas in Table 1 using REPLACE mode. We then used CREATE mode to add all phenotypes from Table 2; 256 intervals corresponding to those phenotypes were computed. We added (CREATE), changed (UPDATE), and removed (DELETE) phenotypes and confirmed by manual inspection the correctness of the counts of the intervals corresponding to those phenotypes. In addition, we compared the row counts from the source data file with those in the i2b2 data tables after data loading was complete.

We did not test INCREMENTAL mode in this evaluation due to the limited dataset that is available in our development environment for prototyping new functionality. We plan to test this mode in our QA environment, which has access to live patient data refreshed daily, in future work.

Discussion

Eureka! Clinical Analytics is a software system for computing EHR phenotypes in heterogeneous clinical data warehouse environments. It can load clinical data and phenotypes into an i2b2 data warehouse. While Eureka previously supported an incremental data loading process, it only supported full reloads of phenotypic information. We addressed that limitation in the current work and implemented, in proof-of-concept form, full creation, retrieval, updating, and deletion of phenotypes in an i2b2 data warehouse.

Such maintenance of phenotypes requires tracking the provenance of how phenotypes were computed from source data. While we ideally would track provenance information directly in i2b2, the i2b2 star schema has no fact relationship table that would support storing such information. As a result, we implemented a key-value store for provenance information. In the initial implementation, the key-value store contains all data processed by Eureka in addition to the computed phenotypes and provenance, thus doubling storage requirements for an i2b2 environment.

There are at least three possible approaches to resolving the storage doubling issue. For environments that employ i2b2 as a data mart solution for subsets of a clinical data warehouse, the set of EHR phenotypes that are needed may be relatively static, thus it may be feasible to limit the data that is saved into the secondary store to those that are needed to compute the phenotypes of interest. Alternatively, we could store only pointers to data values in the secondary store and load the actual data from the i2b2 fact table when phenotypes need updating. A third possible solution would be to use a common data model that provides for storing relationships between facts such as OMOP rather than the i2b2 star schema. The i2b2 team recently implemented support for the OMOP model (https://community.i2b2.org/wiki/display/OMOP/OMOP+Home).

Another limitation of this initial proof of concept is that the key-value store solution may not scale to large datasets in its current form, because each patient’s entire dataset must be loaded into memory at runtime regardless of whether the data is required for phenotype computation. An implementation may be needed that reads from disk only the patient data that is needed for phenotyping. The proof-of-concept’s speed has not yet been evaluated.

We expect that this approach to phenotyping will be fruitful for managing best practice definitions of common cohorts of interest in i2b2. While best practice queries for common cohorts could theoretically be managed by storing i2b2 queries in the web client’s shared workspace, support for organizing a library of phenotypes within i2b2 is limited. Furthermore, in our experience, temporal queries like those supported by Eureka can take hours of database time to run as i2b2 queries. Incorporating phenotyping into the i2b2 ETL process, as we do with Eureka, may be more scalable because the queries need to be run only once rather than every time an investigator requests them.

Conclusion

The incremental phenotyping method that we implemented in proof-of-concept correctly created, computed, updated, and deleted phenotypes in an i2b2 database. Continued work on this capability will involve speed testing, optimization, hardening, and integration into our institution’s i2b2 clinical data warehouse environment. This work shows promise for supporting an EHR phenotyping capability for cohort discovery and participant recruitment.

Acknowledgments

This work was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award number UL1TR002378. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

  • 1. Chute CG, Beck SA, Fisk TB, Mohr DN. The Enterprise Data Trust at Mayo Clinic: a semantically integrated warehouse of biomedical data. J Am Med Inform Assoc. 2010;17(2):131–5. doi: 10.1136/jamia.2009.002691.
  • 2. Murphy SN, Morgan MM, Barnett GO, Chueh HC. Optimizing healthcare research data warehouse design through past COSTAR query analysis. Proc AMIA Symp. 1999:892–6.
  • 3. Obeid JS, Beskow LM, Rape M, Gouripeddi R, Black RA, Cimino JJ, et al. A survey of practices for the use of electronic health records to support research recruitment. J Clin Transl Sci. 2017;1(4):246–52. doi: 10.1017/cts.2017.301.
  • 4. Elliott TE, Holmes JH, Davidson AJ, La Chance PA, Nelson AF, Steiner JF. Data warehouse governance programs in healthcare settings: a literature review and a call to action. EGEMS (Wash DC). 2013;1(1):1010. doi: 10.13063/2327-9214.1010.
  • 5. Richesson RL, Smerek M. Electronic health records-based phenotyping. In: Rethinking Clinical Trials: A Living Textbook of Pragmatic Clinical Trials. Duke University; 2014.
  • 6. Kirby JC, Speltz P, Rasmussen LV, Basford M, Gottesman O, Peissig PL, et al. PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability. J Am Med Inform Assoc. 2016. doi: 10.1093/jamia/ocv202.
  • 7. Mo H, Thompson WK, Rasmussen LV, Pacheco JA, Jiang G, Kiefer R, et al. Desiderata for computable representations of electronic health records-driven phenotype algorithms. J Am Med Inform Assoc. 2015;22(6):1220–30. doi: 10.1093/jamia/ocv112.
  • 8. Post AR, Ai M, Kalsanka Pai A, Overcash M, Stephens DS. Architecting the data loading process for an i2b2 research data warehouse: full reload versus incremental updating. AMIA Annu Symp Proc. 2017;2017:1411–20.
  • 9. Post AR, Kurc T, Cholleti S, Gao J, Lin X, Bornstein W, et al. The Analytic Information Warehouse (AIW): a platform for analytics using electronic health record data. J Biomed Inform. 2013;46(3):410–24. doi: 10.1016/j.jbi.2013.01.005.
  • 10. Post AR, Kurc T, Willard R, Rathod H, Mansour M, Pai AK, et al. Temporal abstraction-based clinical phenotyping with Eureka! AMIA Annu Symp Proc. 2013;2013:1160–9.
  • 11. Kimball R, Ross M. The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling. 2nd ed. New York: Wiley Computer Publishing; 2002.
  • 12. Post AR, Pai AK, Willard R, May BJ, West AC, Agravat S, et al. Metadata-driven clinical data loading into i2b2 for Clinical and Translational Science Institutes. AMIA Summits Transl Sci Proc. 2016:184–93.
  • 13. The ACT Network. Data Harmonization. 2018 [updated 2018 Aug 26]. Available from: https://ncatswiki.dbmi.pitt.edu/acts/wiki/DataHarmonization
  • 14. Murphy SN, Weber G, Mendis M, Gainer V, Chueh HC, Churchill S, et al. Serving the enterprise and beyond with informatics for integrating biology and the bedside (i2b2). J Am Med Inform Assoc. 2010;17(2):124–30. doi: 10.1136/jamia.2009.000893.
  • 15. Observational Health Data Sciences and Informatics (OHDSI). OMOP Common Data Model. 2016 [cited 2016 Jan 5]. Available from: http://www.ohdsi.org/data-standardization/the-common-data-model/
  • 16. Stein L. Creating databases for biological information: an introduction. Curr Protoc Bioinformatics. 2013;Chapter 9:Unit 9.1. doi: 10.1002/0471250953.bi0901s42.
  • 17. Sundvall E, Nystrom M, Karlsson D, Eneling M, Chen R, Orman H. Applying representational state transfer (REST) architecture to archetype-based electronic health record systems. BMC Med Inform Decis Mak. 2013;13:57. doi: 10.1186/1472-6947-13-57.
