AMIA Annual Symposium Proceedings
. 2020 Mar 4;2019:258–266.

One-Way and Round-Trip Analysis Demonstrates Surprising Limitations of Standards-Based Terminology Maps

Steven H Brown 1,2, Loren Stevenson 1, Daniel J Territo 1, John Kilbourne 1, Jonathan R Nebeker 1,3, Holly Miller 1, Michael J Lincoln 1,3
PMCID: PMC7153154  PMID: 32308818

Abstract

The informatics community has a long-standing vision of freely flowing and highly re-usable patient-specific clinical data that improves care quality and safety. We sought to evaluate the extent to which a standards-based mapping approach is sufficient to support semantic interoperability. We simulated large-scale clinical data transmission and measured semantic success between VA and DoD systems via one-way testing (OWT) and round-trip testing (RTT). Simulations were accomplished via SQL queries and production standards-based maps for medications, allergens, document titles, vitals and payers. Success rates for mapping local codes to national standards varied from 62.5% for DoD document titles and medications, to 100% for VA and DoD vital signs. One-way testing success was considerably lower, ranging from 8.52% to 62.7%. Round-trip success rates were lower still, ranging from 1.7% to 76.3%. We present an error framework, lessons learned, and proposed mitigating steps to enhance standards-based semantic interoperability.

Introduction

Calls for the meaningful sharing of standards-based electronic health data are more than 25 years old.(1–3) These reports, and others more recent, share a common vision of freely flowing and highly re-usable patient-specific clinical data that improves care quality and safety. The HITECH act of 2009 mandated and achieved widespread national adoption of electronic health records (EHRs) by hospitals and providers.(4,5) HITECH-act-driven EHR adoption in tandem with associated regulations requiring the use of specific data standards has helped the US take significant steps towards improving healthcare via shared data.(6,7)

Demands for health data interoperability between the Department of Defense (DoD) and the Department of Veterans Affairs (VA) share a similar timeline with more challenging technical and organizational requirements.(8–10) For example, The National Defense Authorization Act (NDAA) for Fiscal Year 2014 required the Secretaries of the Department of Defense and the Department of Veterans Affairs to ensure that “Not later than October 1, 2014 all health care data contained in the Department of Defense AHLTA and the Department of Veterans Affairs VistA systems shall be computable in real time and comply with the existing national data standards.”(11) The 2014 NDAA further required “transition to modern, open architecture frameworks that use computable data mapped to national standards to make data available for determining medical trends and for enhanced clinician decision support.”

The Department of Veterans Affairs and the Department of Defense responded to this legislation by developing and deploying the Joint Legacy Viewer (JLV).(12,13) JLV is a product that shares and displays patient data at the point of care. JLV extracts data from legacy EHRs (e.g., VistA and AHLTA) via Corporate Data Warehouses (CDW) and applies data maps between internal data representations and designated national standards to facilitate exchange. DoD and VA also assigned the joint Interagency Clinical Informatics Board (ICIB) to define “all health care data” and necessary standards to meet the NDAA mandate. The ICIB subsequently identified 23 data “domains” (e.g., medications, immunizations) and relevant standards for their exchange. Each Department developed and maintains its data maps independently. JLV is deployed at all VA Medical Centers and Military Treatment facilities. Between March and November of 2018, users logged in to JLV approximately 300,000 times and viewed approximately 1,200,000 records per month.

Saitwal and Hussain both note that data mapping is a more complex activity than might be assumed at first blush.(14,15) Achieving the long-standing vision of meaningful sharing of standards-based electronic health data (i.e., semantic interoperability) requires the successful marshaling, mapping, transmission, re-mapping and use of transferred information. This complex pathway requires solutions for mapping and marshaling challenges (e.g., as noted by D’Amore et al). We undertook this study to assess the impact on semantic interoperability of applying standards-based clinical data maps to a large, real-world clinical data set.

Methods

Selected Domains

We selected the five domains for which JLV integrates standards-based data: document titles (LOINC), medications (RxNorm), allergens (RxNorm + SNOMED CT), vital signs (LOINC), and payers (ASC X12).

Data Preparation

We developed a “test region” in VA’s Corporate Data Warehouse for this evaluation. The CDW test region copied the data structures and de-identified clinical data from the CDW production system servicing JLV. Data was de-identified by authorized CDW staff using approved standard “test region” encryption methods prior to being made available to the evaluation team. The same approved encryption methods were applied to the DoD Clinical Data Repository (CDR), ensuring that patients could be matched across data sets but not identified. We calculated counts of all data instances and unique codes for each domain.

Mapping-Success Analysis

Mapping to standards is a necessary prerequisite for standards-based interchange. We calculated the numbers and percentages of local codes that were mapped to a standards-based mediating code. We also calculated the number of mediating codes mapped to sets of local codes (i.e., one mediating code mapped to many local codes) stratified by set size.

Data Transmission Simulation

We simulated data transmission between VA and DoD systems using two distinct patterns: one-way testing (OWT) and round-trip testing (RTT). One-way testing separately simulated the transmission of VA data to DoD and the transmission of DoD data to VA. In each case, the data was transformed through each map set one time (i.e., the sending system’s local codes were mapped to standards-based “mediating” codes and then into the receiving system’s local codes). Round-trip testing simulated bi-directional data transmission (e.g., from VA to DoD and back again). Simulations were accomplished via SQL queries performed against test region tables using JLV production maps.
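The one-way chain described above (sending local code → mediating code → receiving local code) can be sketched as a pair of map lookups. The maps and codes below are illustrative toy values, not the production JLV map tables:

```python
# Minimal sketch of the one-way testing (OWT) chain, using illustrative
# toy maps rather than the production JLV map tables.
sender_to_std = {                  # sending system's local code -> mediating code
    "VA-DOC-001": "LOINC-A",
    "VA-DOC-002": "LOINC-A",       # many-to-one: two local codes, one mediator
    "VA-DOC-003": None,            # local code with no standards mapping
}
std_to_receiver = {                # mediating code -> receiving system's local codes
    "LOINC-A": ["DOD-DOC-X"],
}

def one_way(local_code):
    """Return the receiver's candidate local codes; [] means the trip failed."""
    mediator = sender_to_std.get(local_code)
    if mediator is None:
        return []                                  # unmapped at the sender
    return std_to_receiver.get(mediator, [])       # [] if receiver lacks the mediator

print(one_way("VA-DOC-001"))   # ['DOD-DOC-X']
print(one_way("VA-DOC-003"))   # []
```

Round-trip testing simply composes the same two lookups in the reverse direction on the returned codes.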

Scoring

Algorithmically exact matches between input and output data were scored as fully successful without additional human review. One-way testing success occurred when both the sending and receiving agencies shared the same mediating code (which has mappings to local codes and surface forms), regardless of the receiving system’s local code or surface form. If the mediating code appeared more than once in the receiving system’s mapping tables (tied to different local codes), each match was counted as a success. This choice was made because sending and receiving local code systems could not be expected to share the same surface forms or internal identifiers.

Round-trip testing success occurred when the local data sent was received back in precisely its original form, i.e., with matching codes and surface forms. An unsuccessful RTT means that the precise, original meaning was not sent back to the originating partner. The full-success RTT rate is calculated with a numerator equal to the number of full successes and a denominator equal to the “intent to interoperate” (the size of the domain). We also documented the total number of RTT messages sent (“tries”) created by the many-to-one mappings.
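A minimal sketch of this round-trip scoring rule, with toy codes and surface forms rather than production data: a try counts as a full success only when the returned (code, surface form) pair exactly matches what was sent, and the rate is taken over the domain size.

```python
# Minimal sketch of round-trip (RTT) scoring with toy data (not the
# production maps): a try succeeds only when the code AND surface form
# return exactly as sent.
def rtt_success_rate(sent_items, returned_items, domain_size):
    """Full-success rate: exact-match successes over the 'intent to
    interoperate' denominator (the size of the domain)."""
    successes = sum(1 for sent, back in zip(sent_items, returned_items)
                    if back == sent)       # exact (code, surface form) match
    return successes / domain_size

sent = [("VA-17", "TEMPERATURE"), ("VA-18", "PULSE")]
returned = [("VA-17", "TEMPERATURE"), ("VA-18", "HEART RATE")]  # form drifted
print(round(rtt_success_rate(sent, returned, domain_size=17), 4))  # 0.0588
```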

Results

Data Extraction

We extracted de-identified patient-instance data from the VA Corporate Data Warehouse and the DoD Clinical Data Repository. Table 1 shows the number of clinical data elements and unique codes extracted for each domain for each Department.

Table 1.

Data Set Overview

| Domain | VA Data Instances | VA Unique Codes | DoD Data Instances | DoD Unique Codes | Total Data Instances |
|---|---|---|---|---|---|
| Payers | 235,569 | 50 | 3,126,159 | 29 | 3,361,728 |
| Vitals | 252,299,085 | 17 | 8,331,186 | 11 | 260,630,271 |
| Docs | 290,334,004 | 3,438 | 1,046,252 | 80 | 291,380,256 |
| Meds | 62,932,809 | 16,338 | 28,387,162 | 66,769 | 91,319,971 |
| Allergens | 715,073 | 3,225 | 267,237 | 4,346 | 982,310 |

Mapping to Standards

The percentage of mapping successes (i.e., mapping a local code to a standards-based mediating code) for each Department’s clinical data is detailed in Table 3 and Table 4. They varied from 62.5% for DoD document titles to 100% for VA and DoD vital signs. Seven of ten mapping-success proportions exceeded 90%.

Table 3.

VA→DoD One-Way Test Results for Unique Identifiers

| Domain | VA Unique Codes | VA-SDO Mapped | VA-SDO Mapped % | Success N | Success % of Total | Success % of Mapped |
|---|---|---|---|---|---|---|
| Documents | 3,438 | 3,416 | 99.36% | 293 | 8.52% | 8.58% |
| Payers | 50 | 50 | 100.00% | 0 | 0.00% | 0.00% |
| Vital Signs | 17 | 17 | 100.00% | 5 | 29.41% | 29.41% |
| Medications | 16,338 | 12,204 | 74.70% | 7,648 | 46.81% | 62.67% |
| Allergens | 3,225 | 2,905 | 90.08% | 856 | 26.54% | 29.47% |

Table 4.

DoD→VA One-Way Test Results for Unique Identifiers

| Domain | DoD Codes | DoD-SDO Mapped | DoD-SDO Mapped % | Success N | Success % of Total | Success % of Mapped |
|---|---|---|---|---|---|---|
| Documents | 80 | 50 | 62.50% | 20 | 25.00% | 40.00% |
| Payers | 29 | 28 | 96.55% | 0 | 0.00% | 0.00% |
| Vital Signs | 11 | 11 | 100.00% | 5 | 45.45% | 45.45% |
| Medications | 66,769 | 41,746 | 62.52% | 13,757 | 20.60% | 32.95% |
| Allergens | 4,346 | 3,946 | 90.80% | 1,825 | 41.99% | 46.25% |

We found that in many instances multiple local codes were mapped to a single standards-based mediating code. For example, two VA source elements, “CEFADROXIL 1GM TAB” and “CEFADROXIL MONOHYDRATE 1GM TAB”, were mapped to the single RxNorm semantic clinical drug “Cefadroxil 1000 MG Oral Tablet”. Table 2 shows the counts of instances where sets of local codes mapped to a single mediating code. DoD medications, DoD allergens, and VA document titles had more than 40 bins (i.e., long tails). There were no cases of multiple standards-based mediating codes mapping to a single local code in any domain.

Table 2.

Many-to-One Local to Mediating Codes Bin Size

| Many-to-One Bin Size | Medications VA | Medications DoD | Documents VA | Documents DoD | Allergens VA | Allergens DoD |
|---|---|---|---|---|---|---|
| 1 | 8,320 | 10,965 | 128 | 36 | 2,514 | 1,481 |
| 2 | 1,191 | 2,965 | 73 | 7 | 144 | 849 |
| 3 | 212 | 1,266 | 39 | 0 | 19 | 519 |
| 4 | 86 | 767 | 32 | 0 | 4 | 275 |
| 5+ | 28 | 439 | 26 | 0 | 2 | 171 |
| Total Bins | 15 | 56 | 46 | 2 | 7 | 136 |
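The many-to-one stratification behind Table 2 can be sketched by counting local codes per mediating code and then histogramming the resulting bin sizes. The map below is a toy example, and the mediating-code labels are hypothetical placeholders, not real RxNorm identifiers:

```python
from collections import Counter

# Sketch of the many-to-one stratification behind Table 2; the map below is
# a toy example and the mediating-code labels are hypothetical.
local_to_mediating = {
    "CEFADROXIL 1GM TAB": "RxNorm-A",
    "CEFADROXIL MONOHYDRATE 1GM TAB": "RxNorm-A",   # two locals -> one mediator
    "ASPIRIN 81MG TAB": "RxNorm-B",
}
locals_per_mediator = Counter(local_to_mediating.values())  # mediator -> set size
bin_sizes = Counter(locals_per_mediator.values())           # set size -> mediator count
print(dict(bin_sizes))   # {2: 1, 1: 1}: one bin of size 2, one of size 1
```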

One-Way Testing Results

Table 3 shows VA→DoD one-way testing results, and Table 4 shows DoD→VA one-way testing results. In Table 3, VA→DoD one-way testing success as a percent of total unique codes was 8.5% for document titles, 29.4% for vital signs, 46.8% for medications, and 26.5% for allergens. No payer codes succeeded.

In Table 4, DoD→VA one-way testing success as a percent of total unique codes was 25% for document titles, 45.5% for vital signs, 20.6% for medications, and 42% for allergens. Payers, again, succeeded in no instances.

Mediating Code Impact

A mediating code is an SDO code that both VA and DoD map to for a given concept, enabling a one-way trip. Table 5 represents the actual impact of the SDO mediating codes on one-way testing success, describing actual instances of mediation given the number of shared concepts (“mediating codes”). For example, VA and DoD had partially disjoint, overlapping sets of vital signs (e.g., VA audiometric “vitals” not shared by DoD). However, the Departments shared five vital signs (e.g., heart rate). Those five shared vitals mediated 65.06% of VA and 56.30% of DoD vital sign measurements. A total of 164 million VA vital sign and 8.3 million DoD vital sign instances were examined. Across all five domains, considering both directions, mediation rates (excluding payers) ranged between 26.4% and 91.8%.

Table 5.

Mediating Code Impact on Actual Instances of Clinical Data

| Domain | Mediating Codes | VA Data Instances | VA Mediated Instances | % VA Mediated | DoD Data Instances | DoD Mediated Instances | % DoD Mediated |
|---|---|---|---|---|---|---|---|
| Documents | 14 | 290,334,004 | 86,925,863 | 29.94% | 1,046,252 | 932,716 | 89.15% |
| Payers | 0 | 235,569 | 0 | 0.00% | 3,126,159 | 0 | 0.00% |
| Vital Signs | 5 | 252,299,085 | 164,155,351 | 65.06% | 8,331,186 | 4,690,531 | 56.30% |
| Medications | 5,731 | 62,932,809 | 57,778,448 | 91.81% | 28,387,162 | 23,052,269 | 81.21% |
| Allergens | 789 | 715,073 | 451,343 | 63.12% | 267,237 | 70,480 | 26.37% |

Documents: 14 LOINC codes mediated 29.9% of all VA document titles and 89.2% of all DoD document titles in the data set when adjusted for frequency.

Payers: A mapping-file mismatch resulted in zero mediations. Each Department used a different, non-interoperable version of the ASC X12 standard (VA used payer typology; DoD used ASC X12-like codes plus “additions” that were unique to its system).

Vital Signs: Five LOINC codes mediated 65.1% of all VA vital signs and 56.3% of all DoD vital signs in the data set.

Medications: 5,731 RxNorm codes mediated 91.8% of VA and 81.2% of DoD medications in the data set.

Allergens: 789 RxNorm and SNOMED CT codes mediated 63.1% of VA allergens and 26.4% of DoD allergens in the data set.
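The frequency-adjusted (“actual instance”) mediation rates in Table 5 weight each local code by how often it occurs in the data set. A minimal sketch with invented codes and counts (not the production CDW/CDR figures):

```python
# Minimal sketch of the frequency-weighted ("actual instance") mediation
# rate in Table 5; the codes and counts below are invented for illustration.
instance_counts = {"VA-HR": 900, "VA-BP": 80, "VA-AUDIO": 20}  # instances per local code
mediated_codes = {"VA-HR", "VA-BP"}   # locals whose mediating code both sides share

total_instances = sum(instance_counts.values())
mediated_instances = sum(n for code, n in instance_counts.items()
                         if code in mediated_codes)
print(f"{mediated_instances / total_instances:.2%}")  # 980/1000 -> 98.00%
```

This is why instance-weighted rates can far exceed unique-code rates: the few shared codes tend to be the most frequently recorded ones.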

Round-Trip Testing Results

In round-trip testing, results excluding payers varied between 1.7% and 76.3%. Even when the standards were the same (e.g., RxNorm for medications), differences in formularies (the value sets used by each Department) and non-exact mappings meant that round-trip testing could still fail. Table 6 presents the results for all five domains, with round trips originating from either VA (VA→DoD→VA) or DoD (DoD→VA→DoD). For example, medication round trips succeeded no more than 29% of the time (in that case, the direction was VA→DoD→VA).

Error Framework

Certain types of errors repeatedly occurred, and so we established a corresponding error framework with the following categories:

Plausible Mismap: Codes for same or equivalent concepts mapped to different yet arguably plausibly correct targets. Example: VA maps to “Body Weight”, DoD maps to “Body Weight, Measured”

Mapping Error: Mapping errors can be pernicious but perhaps are not inevitable. Errors may occur at the time of map creation or emerge as the result of terminology and formulary changes. Example: VA “HEPATOLOGY C&P EXAMINATION CONSULT” incorrectly mapped to LOINC 39867-6 instead of LOINC 38967-6. Error surveillance, including versioning and configuration management, can mitigate this risk. Importantly, each Department or exchange partner should consider using a common methodology for mapping. For example, they might develop and share a common “style guide” for mappers. Consistent use of tooling that helps prompt and even enforce consistent mapping can be useful. Finally, an Independent Verification and Validation (IV&V) of both partners’ mappings can be performed, ideally by the same group and using the same IV&V methodology.

Granularity Error: The granularity of a term is a measure of its specificity and refinement. A coarsely granular concept may encompass many finely granular terms (e.g., subclasses). Map granularity can be a design choice based on intended use or reflect patient-instance data-knowledge limits (e.g., patient x is allergic to penicillin(s) (class) vs. Benzathine Penicillin G (precise ingredient)). Example: VA maps document titles to 3,416 LOINC terms; DoD maps to 50.

Formulary Error: Sending and receiving systems may not utilize or be able to represent the same clinical “items” (e.g., medications, lab tests). Final-step expressivity deficits may limit transmission success (i.e., nothing to map to). This could be addressed by maintaining a common set of items or by having methods to determine the useful similarity of items. For example, drug-allergy interactions could be computed using an RxNorm semantic clinical drug component rather than the precise RxNorm semantic clinical drug. Example: DoD has “PEAK FLOW” as a vital sign; VA does not.
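The “useful similarity” fallback just described could look roughly like the following sketch: when the exact drug is absent from the receiving formulary, fall back to any formulary drug sharing an ingredient, so a drug-allergy check can still fire. All drug and ingredient identifiers here are hypothetical placeholders, not real RxNorm codes:

```python
# Sketch of an ingredient-level fallback for formulary mismatches.
# All drug and ingredient identifiers below are hypothetical placeholders.
drug_ingredients = {
    "SCD-Cefadroxil-1000MG-Tab": {"cefadroxil"},
    "SCD-Cefadroxil-500MG-Cap":  {"cefadroxil"},
}
receiver_formulary = {"SCD-Cefadroxil-500MG-Cap"}

def receivable(sent_drug):
    """Exact match if possible; otherwise any formulary drug sharing an ingredient."""
    if sent_drug in receiver_formulary:
        return sent_drug
    for drug in receiver_formulary:
        if drug_ingredients.get(drug, set()) & drug_ingredients.get(sent_drug, set()):
            return drug                   # ingredient-level ("useful") match
    return None                           # formulary error: nothing to map to

print(receivable("SCD-Cefadroxil-1000MG-Tab"))   # SCD-Cefadroxil-500MG-Cap
```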

Information Model Error: Differences in information model may make single term mapping impossible. Example: VA maps to BP, DoD maps to systolic BP and diastolic BP. Both systems represent blood pressure metadata such as cardiac cycle, patient position, cuff size, and artery sampled entirely differently.

Many-to-One Explosions: Many-to-one mapping explosions induced a large number of errors. The number of RTT tries observed corresponds precisely to the product of the number of codes at each step of the transmission (Table 7). The observed RTT success percentage corresponds to the reciprocal of the number of local codes in the many-to-one relationship (e.g., 3 codes sent = 33% success on return). For RTT tries this is an O(n^3) process. Semantic interoperability cannot be achieved until viable solutions to this issue are implemented.

Table 7.

VA-DoD-VA RTT Medication “Tries” Examples

| VA NDF Sent | RxNorm Sent | DoD Local Pivot | RxNorm Return | VA NDF Return | Total RTT Tries | RTT Success | Example |
|---|---|---|---|---|---|---|---|
| 2 | 1 | 1 | 1 | 2 | 4 | 2 | Cefadroxil 1000 MG Oral Tablet |
| 1 | 1 | 3 | 1 | 1 | 3 | 3 | Cyclosporine, modified 100 MG Oral Capsule [Gengraf] |
| 2 | 1 | 2 | 1 | 2 | 8 | 4 | Zolpidem tartrate 10 MG Oral Tablet |
| 1 | 1 | 3 | 1 | 1 | 3 | 1 | Pentaerythritol Tetranitrate 10 MG Oral Tablet |
| 3 | 1 | 2 | 1 | 3 | 18 | 6 | Pilocarpine Hydrochloride 40 MG/ML Ophthalmic Solution |
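The “tries” arithmetic in Table 7 is simply the product of the candidate-code counts at each step of the round trip, which this sketch reproduces for two of the rows above:

```python
# Sketch of the many-to-one "tries" arithmetic: total RTT tries is the
# product of the number of candidate codes at each mapping step.
def rtt_tries(*fanouts):
    total = 1
    for n in fanouts:   # one factor per step of the round trip
        total *= n
    return total

# Cefadroxil row of Table 7: 2 NDF sent x 1 RxNorm x 1 DoD pivot x 1 RxNorm x 2 NDF
print(rtt_tries(2, 1, 1, 1, 2))   # 4
# Pilocarpine row: 3 x 1 x 2 x 1 x 3
print(rtt_tries(3, 1, 2, 1, 3))   # 18
```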

We undertook qualitative analysis of many VA NDF codes to one RxNorm Code instances. One consistent cause of many-to-one mappings relates to “Quantity Factors” for liquid preps. A “Quantity Factor” is the amount of medicine packaged in a vial or syringe. For example, VA “DOXORUBICIN HCL 200MG/VIL (PF) INJ” and VA “DOXORUBICIN HCL 10MG/VIL (RDF) INJ” both mapped to RxNorm “Doxorubicin Hydrochloride 2 MG/ML Injectable Solution”. The version of RxNorm used in this study was not able to represent and distinguish quantity factors. New versions of RxNorm now have this capability.

We performed limited quantitative analysis of many-to-one mappings in the medication domain, which provides an extreme example: medication RTT from VA→DoD→VA yielded 54,632 failures, 8,690 from unrecognized medications and 45,942 from recognized many-to-one mappings. Table 7 demonstrates that many-to-one mappings impact round trips, O(n^3), more than one-way success rates, O(n^2).

Discussion

A high percentage of mappings between clinical data and national standards existed within each Department, yet one-way and round-trip testing success percentages based on mediating codes between Departments were much lower. This occurred in part because the Departments share partly disjoint value sets (e.g., no DoD counterpart for VA audiometric “vitals”). The problem was exacerbated because the Department mappings were done independently and not entirely consistently: there was no cross-Department review when apparently very similar terms, and likely identical concepts, were mapped to different targets.

These two factors alone meant that one-way success rates would not match the within-Department mapping rates. Finally, the combinatorial explosion of one-to-many mappings made successful trips, particularly round trips, even worse. For example, 99% of unique document titles in VA were mapped to national standards, but the 14 mediating LOINC codes mapped only 8.6% of VA’s unique identifiers in one-way trips to DoD. Likewise, for vital signs, 100% of DoD’s and VA’s local identifiers were mapped to national standards, but the 5 mediating LOINC codes matched only 29.4% of VA vital signs and 45.5% of DoD vital signs in one-way tests. Frequency-adjusted mediation proportions were higher because the most commonly occurring concepts were more likely to be shared between the value sets, and because the independent Department experts were more likely to have mapped such common items similarly.

In addition to the error framework presented in the Results section, we learned several other lessons about potential challenges in map-based interoperability, namely:

Standards “Suitability”: The similarity of the Standard’s information model to the deployed system information model affects success rate. In cases where the deployed information model includes more data than the Standards information model, one-to-many mappings can result, causing decreased full-trip success rate even when all data is known or knowable.

Style: Standards may have more than one way to reasonably represent the same concept. This can result in plausible mismappings that must be managed by sending or receiving systems.

Intent: Maps, like terminologies(16), should be created with a specific purpose in mind. Maps created for one purpose (e.g., many drugs to one category for drug allergy interactions) may not work well for other purposes (e.g., granular pharmacy inventory management).

Change Management: The quality of maps can degrade over time. Two types of change must be managed: terminology versions and change in value sets (formularies) to be mapped and exchanged. We have found, in maintaining JLV maps, that we can expect ca. 12-15% annual change in active domains such as medications. This results from standards-bodies’ changes (e.g., retired RxNorm CUIs), formulary changes and new drug entities.

As a result of lessons learned, we propose several approaches to improve semantic interoperability. First, whenever possible, standards should be used natively in EHRs. In our experience, mapping is a recurrently expensive and inherently lossy process; reducing the number of mapping translations reduces interoperability complexity and cost. Barriers to native use of standards, potentially including expressivity and update frequency, must be better understood.

Second, ongoing standards quality improvement, for example the addition of the “quantity factor” to RxNorm, can improve interoperability. Additional standards extensions and improvements should be use-case driven and could contribute to reductions in many-to-one, granularity, and formulary errors.

Third, we recommend that EHRs deploy entire terminologies and maintain separate purpose-specific subsets (e.g., for ordering medications). The whole terminology could underpin other uses (e.g., CDS and reporting) and provide context for inbound, standards-based data. Additional benefits may accrue from using aggregation hierarchies and class definitions rather than enumerated lists to drive CDS, reporting, and other uses. Chu et al. found this approach reduced complexity and knowledge-engineering time while increasing the completeness of clinical phenotype coverage.(17) We postulate that this approach may prove more resilient than enumerated lists for interoperating with inbound, standards-based data.

Finally, our fourth suggestion is to develop robust mapping tools and common mapping procedures (at least a shared “desiderata” of mapping), and to undertake strong independent verification and validation (IV&V) to best improve future results.

Conclusion

As our present results show, even when mapping is funded, prioritized and reliably completed, significant clinical information can be lost in one-way and round-trip transmission. Understanding this fact and addressing the underlying causes are important for continued progress in semantic interoperability and achieving the opportunities for improving healthcare quality and safety that it offers.

Clinicians increasingly expect computer-driven decision support, intelligent prompts, error detection and truly comparable data. They want to sum the lifetime Adriamycin dose across a continuum of care, even when the legacy systems in that continuum do not use common drug formularies, much less units of measure. Healthcare systems and practices that acquire systems should better understand the limitations that relate to data quality and interoperability and press for ongoing improvement. We suggest that system and standards developers focus on several problems: 1) the expense, inaccuracy and probable unsustainability of purely mapping-based approaches; 2) the need to support, maintain and natively use in HIT, common and high quality terminology standards; 3) migrating to common information models for complex clinical expressions and conclusions; 4) the idea that HIT software is evanescent and the clinical data are the gold nuggets that matter to patients and clinicians. Interoperability will not happen without ongoing investment and quality improvement cycles. We hope this paper has brought to light new opportunities for progress.

Figures & Tables

Figure 1.


Simulated Data Flows. VA-DoD Data One-Way Testing (solid arrows, steps 1-3): 1) VA data element retrieved from test region and matched to VA-Standards map (one-to-many). 2) Standards-based data element(s) sent to Standards-DoD Map. DoD map output compared to DoD data elements in test region (one-to-many). Round-trip testing return (dashed arrows, steps 4-6) reverses the process.

Table 6.

Round-Trip Testing Results for Unique Identifiers

| Domain | RTT Success | VA→DoD→VA Tries | VA→DoD→VA Success % | DoD→VA→DoD Tries | DoD→VA→DoD Success % |
|---|---|---|---|---|---|
| Payers | 0 | 50 | 0.00% | 29 | 0.00% |
| Vitals | 5 | 17 | 29.41% | 11 | 45.45% |
| Docs | 500 | 29,551 | 1.69% | 974 | 51.33% |
| Meds | 22,771 | 77,403 | 29.42% | 259,959 | 8.76% |
| Allergens | 2,120 | 2,780 | 76.26% | 11,072 | 19.15% |

References

1. Board of Directors of the American Medical Informatics Association. Standards for medical identifiers, codes, and messages needed to create an efficient computer-stored medical record. J Am Med Inform Assoc. 1994 Feb;1(1):1–7. doi: 10.1136/jamia.1994.95236133.
2. Dick RS, Steen EB, Detmer DE, editors; Institute of Medicine, Committee on Improving the Patient Record. The Computer-Based Patient Record: Revised Edition: An Essential Technology for Health Care. Washington (DC): National Academies Press; 1997.
3. National Committee on Vital and Health Statistics, Health and Human Services. Report to the Secretary of the U.S. Department of Health and Human Services on Uniform Data Standards for Patient Medical Record Information. 2000. p. 65.
4. Health and Human Services. Health Information Technology: Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology; Revisions to the Permanent Certification Program for Health Information Technology (2014 Edition). 2012 Oct 4. 77 FR 54163–54292; 45 CFR 170.
5. American Recovery and Reinvestment Act of 2009. Available from: https://www.congress.gov/111/plaws/publ5/PLAW-111publ5.pdf.
6. D’Amore JD, Bouhaddou O, Mitchell S, Li C, Leftwich R, Turner T, et al. Interoperability progress and remaining data quality barriers of certified health information technologies. AMIA Annu Symp Proc. 2018 Dec;5:358–67.
7. D’Amore JD, Mandel JC, Kreda DA, Swain A, Koromia GA, Sundareswaran S, et al. Are Meaningful Use Stage 2 certified EHRs ready for interoperability? Findings from the SMART C-CDA Collaborative. J Am Med Inform Assoc. 2014 Dec;21(6):1060–8. doi: 10.1136/amiajnl-2014-002883.
8. United States General Accounting Office (GAO). Federal health care: increased information system sharing could improve service, reduce costs. Briefing report to the Chairman, Committee on Veterans’ Affairs, House of Representatives. 1993.
9. United States General Accounting Office (GAO). Computer-based patient records: better planning and oversight by VA, DOD, and IHS would enhance health data sharing. Report to congressional committees. 2001.
10. Melvin VC; United States Government Accountability Office (GAO). Electronic health records: long history of management challenges raises concerns about VA’s and DOD’s new approach to sharing health information. Testimony before the Committee on Veterans’ Affairs, House of Representatives. 2013.
11. National Defense Authorization Act for Fiscal Year 2014. 2013 Dec 26. Available from: https://www.congress.gov/bill/113th-congress/house-bill/3304/text.
12. Council L. The VA’s Interoperability Mission. 2016. Available from: https://fedtechmagazine.com/article/2016/05/vas-interoperability-mission.
13. Legler A, Price M, Parikh M, Nebeker JR, Ward MC, Wedemeyer L, et al. Effect on VA patient satisfaction of providers’ use of an integrated viewer of multiple electronic health records. J Gen Intern Med. 2019 Jan;34(1):132–6. doi: 10.1007/s11606-018-4708-z.
14. Saitwal H, Qing D, Jones S, Bernstam EV, Chute CG, Johnson TR. Cross-terminology mapping challenges: a demonstration using medication terminological systems. J Biomed Inform. 2012 Aug;45(4):613–25. doi: 10.1016/j.jbi.2012.06.005.
15. Hussain S, Sun H, Sinaci A, Erturkmen GB, Mead C, Gray AJ, et al. A framework for evaluating and utilizing medical terminology mappings. Stud Health Technol Inform. 2014;205:594–8.
16. Rector AL. Thesauri and formal classifications: terminologies for people and machines. Methods Inf Med. 1998 Nov;37(4–5):501–9.
17. Chu L, Kannan V, Basit MA, Schaeflein DJ, Ortuzar AR, Glorioso JF, et al. SNOMED CT concept hierarchies for computable clinical phenotypes from electronic health record data: comparison of intensional versus extensional value sets. JMIR Med Inform. 2019 Jan 16;7(1):e11487. doi: 10.2196/11487.
