AMIA Annual Symposium Proceedings. 2017 Feb 10;2016:2082–2089.

Evaluating Terminologies to Enable Imaging-Related Decision Rule Sharing

Zihao Yan 1,6, Ronilda Lacson 1,6, Ivan Ip 1,2,6, Vladimir Valtchinov 1,6, Ali Raja 1,3,6, David Osterbur 5,6, Ramin Khorasani 1,4,6
PMCID: PMC5333322  PMID: 28269968

Abstract

Purpose: Clinical decision support tools provide recommendations based on decision rules. A fundamental challenge regarding decision rule-sharing involves inadequate expression using standard terminology. We aimed to evaluate the coverage of three standard terminologies for mapping imaging-related decision rules.

Methods: 50 decision rules, randomly selected from an existing library, were mapped to the Systematized Nomenclature of Medicine (SNOMED CT), the Radiology Lexicon (RadLex) and the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM). Decision rule attributes and values were mapped to unique concepts, obtaining the best possible coverage with the fewest concepts. Manual and automated mapping using the Clinical Text Analysis and Knowledge Extraction System (cTAKES) were performed.

Results: Using manual mapping, SNOMED CT provided the greatest concept coverage (83%), compared to RadLex (36%) and ICD-10-CM (8%) (p<0.0001). Combined mapping had 86% concept coverage. Automated mapping achieved 85% mapping coverage vs. 94% with manual mapping (p<0.001).

Conclusion: Although some gaps remain, standard terminologies provide ample coverage for mapping imaging-related evidence.

Introduction

Clinical decision support (CDS) integrated with a computerized physician order entry (CPOE) system can improve workflow efficiency, increase guideline adherence, and reduce the rate of inappropriate imaging in certain outpatient and Emergency Department (ED) settings(1–4). To improve quality of care and reduce waste, the Affordable Care Act was implemented to encourage CDS adoption(1, 5). Additionally, under the Protecting Access to Medicare Act of 2014, health care providers will be required to consult specified appropriate use criteria using a qualified CDS system when ordering advanced imaging for Medicare patients(6).

CDS decision rules are often derived from professional society guidelines, published evidence, and local best practices and are generally described in free text(7). For CDS decision rules to be shareable, they need to be machine interpretable and available in standard representation(8). Recently, a repository of diagnostic imaging decision rules has been developed, which also includes systematic grading of recommendations(7). Although these decision rules are available in semi-structured format, the terminology is not in standard format and limits shareability. Leveraging existing medical terminologies could potentially offer a solution to standardizing concepts within decision rules. In addition, automated approaches for mapping to standard terminologies could enable large scale mapping of decision rules from various CDS systems.

Three terminologies have been utilized previously to retrieve critical imaging findings in radiology - the Systematized Nomenclature of Medicine (SNOMED CT), the Radiology Lexicon (RadLex) and the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM)(9). For this study, the newer revision, ICD-10-CM, is used. A brief description of these three terminologies follows.

SNOMED CT

SNOMED CT is an extensive clinical terminology containing more than 311,000 concepts. It was formed by the merger, expansion, and restructuring of SNOMED RT® (Reference Terminology) and the United Kingdom National Health Service Clinical Terms, and it is the most comprehensive clinical vocabulary available in English(10).

RadLex

RadLex was developed by the Radiological Society of North America in recognition of the limited coverage of radiological concepts by other lexicons(11). RadLex provides a standardized method for indexing radiological concepts in a variety of settings. RadLex consists of approximately 12,000 individual concepts(12).

ICD-10-CM

ICD-10-CM is a clinical modification of the World Health Organization’s ICD-10, a diagnostic classification system. ICD-10-CM includes the level of detail needed for morbidity classification and diagnostic specificity. As with ICD-9-CM, ICD-10-CM is maintained by the National Center for Health Statistics. It has more than 68,000 codes, compared to approximately 13,000 in ICD-9-CM(13).

Therefore, the primary goal of this study was to evaluate the coverage of SNOMED CT, RadLex and ICD-10-CM for mapping imaging-related decision rules. As a secondary goal, we assessed automated mapping using the Clinical Text Analysis and Knowledge Extraction System (cTAKES), a natural language processing (NLP) tool.

Methods

Source of Decision Rules

This study was exempt from Institutional Review Board review. Imaging-related decision rules were randomly selected from among those in an existing publicly available library of evidence(7). The library currently contains 411 annotated and graded decision rules, derived from practice guidelines and studies published between 1995 and 2014. Specifically, 50 decision rules were selected for this study from five sources – two professional society guidelines (American College of Radiology [ACR] and American College of Physicians [ACP]), local best practice from two healthcare organizations (Ottawa Civic Hospital and Brigham and Women’s Hospital) and a clinical study (Wells Criteria for pulmonary embolism evaluation).

Each decision rule consists of 20 attributes, of which 6 contain values with clinical content that can be expressed using standardized medical terminology. We selected these attributes' corresponding values for mapping: "imaging modality", "contrast", "body region", "diagnosis/symptom", "clinical logic (if)" and "clinical logic (then)." An example of clinical logic – If (Chronic headache) AND (No new features) AND (Normal neurologic examination), THEN MRI of the head without contrast – is included in Figure 1. The other attributes, which were not suitable for mapping, included names of graders, guideline publishers (e.g., ACR), dates of publication, citations and evidence grades. Full decision rules are publicly available on the library website(14). The list of the 50 rules is included in the supplemental file.
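For illustration only, the six mapped attributes of the example rule could be represented in a simple structured form such as the following; the field names paraphrase the attribute labels above and are not the library's actual schema:

```python
# Hypothetical structured form of the example decision rule (Figure 1).
# Field names mirror the six mapped attributes; this is illustrative only,
# not the actual schema of the Harvard Medical School Library of Evidence.
example_rule = {
    "imaging_modality": "MRI",
    "contrast": "without contrast",
    "body_region": "head",
    "diagnosis_symptom": "chronic headache",
    "clinical_logic_if": "(Chronic headache) AND (No new features) AND (Normal neurologic examination)",
    "clinical_logic_then": "MRI of the head without contrast",
}
```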

Figure 1. Flow chart of Methods.

Generating Unique Attribute Values

A total of 300 attribute values were derived from the 50 decision rules (6 attribute values per decision rule). Repeated attribute values were analyzed only once. In addition, stop words and commonly occurring English words (e.g., "with") were removed(15). Attribute values composed solely of stop words (e.g., "without") were removed entirely, while those that partially contained stop words had only the stop words removed (Figure 1). A total of 75 unique attribute values were generated.
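A minimal sketch of this preprocessing step is shown below, assuming a toy stop word list (the study used a published stop word list(15)):

```python
# Illustrative preprocessing: deduplicate attribute values and strip stop words.
# STOP_WORDS is a toy set; the study used a published stop word list.
STOP_WORDS = {"with", "without", "and", "or", "of", "the", "no"}

def clean_attribute_values(raw_values):
    unique, seen = [], set()
    for value in raw_values:
        # Drop stop words but keep the remaining tokens in their original order.
        tokens = [t for t in value.lower().split() if t not in STOP_WORDS]
        if not tokens:
            continue  # the value consisted solely of stop words, e.g. "without"
        cleaned = " ".join(tokens)
        if cleaned not in seen:  # repeated attribute values are analyzed only once
            seen.add(cleaned)
            unique.append(cleaned)
    return unique

print(clean_attribute_values(["Chronic headache", "chronic headache", "without"]))
# ['chronic headache']
```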

Manual Mapping

The National Cancer Institute (NCI) Metathesaurus Term Browser, which includes SNOMED CT, RadLex and ICD-10-CM(16), was used to map each unique attribute value to each terminology. To maintain consistency, the "Exact Match" option in the NCI browser was used to search each terminology. We used UMLS build 2015AB, which contains SNOMED CT (version 2013_09_01), RadLex (version 3_10), and ICD-10-CM (ICD10_2010). Evaluation of match type and computation of coverage are described in the subsequent subsections.
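Mapping in this study was performed manually through the web-based browser. Purely as an illustration of how such exact-match lookups could be scripted, the sketch below queries the UMLS Terminology Services search endpoint restricted to a single source vocabulary; the endpoint, parameter names, and response shape are assumptions based on the public UMLS REST API and were not part of this study's method:

```python
import requests

# Illustrative only: scripted exact-match lookup against UMLS Terminology Services.
# The study itself performed mapping manually in the NCI Metathesaurus Term Browser.
UMLS_SEARCH_URL = "https://uts-ws.nlm.nih.gov/rest/search/current"  # assumed endpoint

def exact_match(term, source_vocabulary, api_key):
    # 'sabs' restricts the search to one source vocabulary, e.g. "SNOMEDCT_US";
    # source abbreviations for other vocabularies vary.
    params = {
        "string": term,
        "sabs": source_vocabulary,
        "searchType": "exact",
        "apiKey": api_key,
    }
    response = requests.get(UMLS_SEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    results = response.json().get("result", {}).get("results", [])
    # A non-empty list means the term has an exact match in that vocabulary.
    return [(r.get("ui"), r.get("name")) for r in results]

# Example (requires a UMLS API key):
# print(exact_match("pulmonary embolism", "SNOMEDCT_US", api_key="..."))
```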

  1. Generating unique concepts – A concept is defined as "the fundamental unit of meaning of terms … and contains all atoms from any source that express that meaning in any way."(17) If a phrase contains only one word (e.g., Head), then the concept is equivalent to the phrase. If a phrase contains more than one word (e.g., Chronic Headache), then the concept is determined by prioritizing pre-coordinated terms over post-coordinated terms. For example, if the attribute value "chronic headache" results in a simple match ("chronic headache" to "chronic headache") or a partial match ("chronic headache" to "chronic ache") in any one of the three medical terminologies during mapping, then "chronic headache" would be a single concept. Otherwise, "chronic" and "headache" would be two separate concepts that make up the attribute value "chronic headache". By this definition, any concept with more than one word has at least a partial match, while single-word concepts may or may not match. This ensures that attribute values are mapped to the fewest possible concepts (an illustrative sketch of this decomposition follows this list).

  2. Calculating concept coverage – Concept coverage is calculated for each of the three terminologies as the percentage of concepts matched using that single terminology out of all concepts derived from the attribute values. For example, concept coverage for SNOMED CT is defined as the number of unique concepts matched (simple or partial match) using SNOMED CT alone, divided by the number of concepts derived from all attribute values.

  3. Categorizing match types – Using methods described by Aronson(18), mapping an attribute value can result in one of the following: "simple match", "complex match", "partial match" or "no match". A simple match results when the concept value is identical to the attribute value (e.g., "pulmonary embolism" to "pulmonary embolism"). A complex match results when individual terms in an attribute value have a simple match to more than one concept, but the entire attribute value does not have a simple match (e.g., "acute ankle injury" to "acute" and "ankle injury"). A partial match results when at least one word of either the mapped result or the attribute value does not participate in the mapping process (e.g., "hereditary nonpolyposis colorectal cancer" to "hereditary nonpolyposis colon cancer"), and a no match results when no concept in the attribute value is successfully mapped. All three terminologies were utilized to calculate the highest possible number of matched concepts.

  4. Calculating mapping coverage – Further using methods described by Aronson, assume the attribute value has X words, of which X0 words (X0 ≤ X) participated in the mapping process, and the corresponding mapped result has Y words, of which Y0 words (Y0 ≤ Y) participated in the mapping process. The mapping coverage is then defined as (2/3)(Y0/Y) + (1/3)(X0/X). For example, for the attribute value "(Chronic headache) AND (No new features) AND (Normal neurologic examination)" (Fig. 1), there are 7 words in total ("No" is a stop word and thus removed). Of the 7 words, "features" was not matched, while all other words had simple matches. Therefore, X = 7, X0 = 6, Y = 6, Y0 = 6, leading to a mapping coverage of (2/3)(6/6) + (1/3)(6/7) = 20/21. Abbreviations were expanded (e.g., MRI would count as three words). Under this definition, simple and complex matches automatically have a mapping coverage of 1, a partial match has a mapping coverage between 0 and 1, and a no match has a mapping coverage of 0. The average mapping coverage is the mean mapping coverage over all 75 attribute values (a worked sketch of this calculation also follows this list).
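As referenced in item 1 above, concept generation amounts to a "pre-coordinated first" decomposition of each attribute value. The following is a minimal sketch of that idea, assuming a toy lookup function in place of real queries against the three terminologies; the phrase set and the greedy longest-match strategy are illustrative simplifications of the manual process:

```python
# Toy stand-in for checking whether a phrase has a simple or partial match in
# SNOMED CT, RadLex, or ICD-10-CM. Real mapping used the NCI Metathesaurus browser.
MATCHABLE_PHRASES = {"chronic headache", "headache", "ankle injury", "pulmonary embolism"}

def has_match(phrase):
    return phrase in MATCHABLE_PHRASES

def to_concepts(attribute_value):
    """Split an attribute value into the fewest concepts, preferring
    pre-coordinated (multi-word) phrases over individual words."""
    words = attribute_value.lower().split()
    concepts = []
    i = 0
    while i < len(words):
        # Try the longest remaining phrase first, shrinking until a match is
        # found; a single word is always accepted as its own concept.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if j == i + 1 or has_match(phrase):
                concepts.append(phrase)
                i = j
                break
    return concepts

print(to_concepts("Chronic headache"))     # ['chronic headache']
print(to_concepts("Acute ankle injury"))   # ['acute', 'ankle injury']
```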
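And as referenced in item 4, the mapping coverage formula and its worked example can be expressed directly; here the word participation counts are taken from the example rather than computed from an actual terminology lookup:

```python
from fractions import Fraction

def mapping_coverage(x_total, x_mapped, y_total, y_mapped):
    """Aronson-style mapping coverage: (2/3)(Y0/Y) + (1/3)(X0/X).

    x_total, x_mapped: words in the attribute value and how many participated.
    y_total, y_mapped: words in the mapped result and how many participated.
    """
    return (Fraction(2, 3) * Fraction(y_mapped, y_total)
            + Fraction(1, 3) * Fraction(x_mapped, x_total))

# Worked example from item 4: 7 content words, "features" unmatched,
# mapped result has 6 words, all of which participated in the mapping.
coverage = mapping_coverage(x_total=7, x_mapped=6, y_total=6, y_mapped=6)
print(coverage, float(coverage))   # 20/21, approximately 0.952
```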

Automated Mapping

cTAKES version 3.01(19) was used with YTEX (part of the cTAKES Apache project) to perform automated mapping. cTAKES was customized with RadLex, the latest releases of the SNOMED CT vocabulary files, and ICD-10-CM, using the NCI-supported knowledge representation language, the Resource Description Framework (RDF), and the MetamorphoSys subsetting utility, a customization tool provided by the Unified Medical Language System (UMLS) to customize and add source vocabularies to the UMLS(20). Custom components were developed to allow cTAKES to take its input from a structured data source and write its output to the YTEX-defined schema. cTAKES was applied to all 75 unique attribute values, and the output included concept unique identifiers (CUIs) from the three terminologies(21).

Categorizing match types and calculating mapping coverage were performed similarly to manual mapping. We also prioritized pre-coordinated results over post-coordinated results. For example, cTAKES maps "pulmonary embolism" to the concept "pulmonary embolism" and to two other concepts, "pulmonary" and "embolism". In this case, we would disregard "pulmonary" and "embolism" and consider only the simple match result "pulmonary embolism."
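A minimal post-processing sketch of this prioritization is given below, assuming the automated output has already been flattened into span-level annotations; the tuple structure and the CUI values are placeholders, not the actual cTAKES/YTEX output schema:

```python
# Illustrative post-processing of automated mapping output: keep the longest
# (pre-coordinated) annotation and discard shorter annotations it fully covers.
# Annotations are modeled as (begin, end, cui, text) tuples; this tuple form
# and the CUI values are placeholders, not the actual cTAKES/YTEX schema.
def prioritize_precoordinated(annotations):
    kept = []
    # Longest spans first, so pre-coordinated concepts win over their parts.
    for ann in sorted(annotations, key=lambda a: a[1] - a[0], reverse=True):
        begin, end = ann[0], ann[1]
        if not any(k[0] <= begin and end <= k[1] for k in kept):
            kept.append(ann)
    return sorted(kept)

annotations = [
    (0, 18, "CUI_PULMONARY_EMBOLISM", "pulmonary embolism"),
    (0, 9, "CUI_PULMONARY", "pulmonary"),
    (10, 18, "CUI_EMBOLISM", "embolism"),
]
print(prioritize_precoordinated(annotations))
# [(0, 18, 'CUI_PULMONARY_EMBOLISM', 'pulmonary embolism')]
```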

Data Analysis

We calculated the concept coverage of SNOMED CT, RadLex and ICD-10-CM individually for all existing concepts, and assessed concept coverage for all three terminologies combined. We used McNemar's paired test to compare concept coverage between the three terminologies. We further computed the average mapping coverage for all attribute values resulting from automated mapping, and compared this to the average mapping coverage from manual mapping using a paired t-test. A p-value of <0.05 was considered significant.
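For illustration, these comparisons could be carried out in Python roughly as follows; the study does not state which statistical software was used, and the arrays below are placeholders rather than the study data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

# Placeholder per-concept match indicators (1 = matched) for two terminologies,
# and placeholder per-attribute-value mapping coverage for manual vs. automated
# mapping. None of these values are the study's actual data.
snomed_matched = np.array([1, 1, 0, 1, 1, 0, 1, 1])
radlex_matched = np.array([1, 0, 0, 0, 1, 0, 1, 0])

# McNemar's test on the paired 2x2 table of matched / unmatched concepts.
table = np.array([
    [np.sum((snomed_matched == 1) & (radlex_matched == 1)),
     np.sum((snomed_matched == 1) & (radlex_matched == 0))],
    [np.sum((snomed_matched == 0) & (radlex_matched == 1)),
     np.sum((snomed_matched == 0) & (radlex_matched == 0))],
])
print(mcnemar(table, exact=True).pvalue)

# Paired t-test on mapping coverage, manual vs. automated, per attribute value.
manual_coverage = np.array([1.0, 1.0, 20 / 21, 0.8, 1.0])
automated_coverage = np.array([1.0, 0.7, 0.8, 0.6, 1.0])
print(stats.ttest_rel(manual_coverage, automated_coverage).pvalue)
```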

Results

A total of 75 unique attribute values and 220 unique concepts were generated from the 50 randomly selected decision rules.

Concept Coverage for SNOMED CT, RadLex and ICD-10-CM

Of the 220 concepts, SNOMED CT provided coverage of 182 concepts, an 83% concept coverage rate (181/219), significantly greater than RadLex with 36% concept coverage (79/220, p<0.0001) and ICD-10-CM with 8% concept coverage (18/220, p<0.0001). When all three terminologies were combined, the concept coverage was 86% (190/220). The 8 additional concepts beyond the 182 covered by SNOMED CT were contributed by RadLex (Table 1). The unmapped concepts are listed in Table 2.

Table 1.

Concept coverage from each standard terminology

Terminology Concepts mapped Concept coverage (Out of 219 concepts)
SNOMED CT 182 83%
RadLex 79 36%
ICD-10-CM 18 8%
All combined 190 86%

Table 2.

The 30 words that were unmatched in any of the three terminologies

BRCA obvious Ottawa hemodynamic Irrelevant
nonspecific noncontributory rule Cluster Critical
underlying suspect exclusion seen Ill
cervicocerebral labral criteria logic Deterioration
impact workup base demonstrate Continued
dangerous lifetime inability BSGI image-guided

Match Categorization and Mapping Coverage: Automated vs. Manual Mapping

For automated mapping using cTAKES, there were 19 simple matches (25%), 5 complex matches (7%), 49 partial matches (65%), and 2 no matches (3%). For manual mapping, there were 20 simple matches (27%), 34 complex matches (45%), 21 partial matches (28%) and 0 no matches (0%). The mapping coverage for manual and automated mapping was 94% and 85%, respectively (p<0.001).

Discussion

CDS aids clinicians in decision making, thus reducing medical errors and cost, and promoting more effective care(22). Despite this, most healthcare institutions have limited CDS capabilities(23, 24). One key reason for limited adoption is the predominant use of non-standard approaches to implementing CDS that are often specific to an implementation setting (23, 25, 26). As a result, CDS capabilities developed at one institution may not be easily transferred to other health care institutions, or even to other types of CDS applications within the same institution(23, 27).

At its core, CDS represents clinical knowledge in a detailed, machine-interpretable format. Machine-interpretable representation, while challenging, is necessary because narrative clinical guidelines often lack the detail and algorithmic specificity required for execution(28). One promising approach to promote CDS capabilities is the availability of machine-interpretable knowledge resources, which can be leveraged across multiple care settings(23, 29–32). However, this approach requires overcoming the heterogeneity that often exists across institutions with regard to patient data and knowledge representation(33). Thus, the adoption of a robust CDS system is critically dependent upon the development and adoption of standards that encompass these facets of CDS delivery.

Various standard terminologies that can represent CDS content are already well developed(34). The challenge therefore lies in the concurrent use of multiple terminologies and the unavailability of content represented using these standard terminologies. As a result, a CDS resource designed for use in one setting may not be readily applicable in another setting. When we assessed concept coverage of SNOMED CT, RadLex and ICD-10-CM, SNOMED CT provided the highest concept coverage at 82.6%. With well over 300,000 concepts, SNOMED CT, even though not imaging-specific, provided significantly more coverage than RadLex (35.6%), a radiology-specific lexicon with approximately 13,000 concepts, and ICD-10-CM, a terminology primarily for diagnoses and billing with about 68,000 codes. Although SNOMED CT captured most of the concepts, RadLex contributed 8 additional concepts, some of which were specific to radiology (e.g., colonography).

The list of unmapped words (Table 2) consists mostly of common English words (e.g., obvious), specific clinical concepts (e.g., hemodynamic), and proper names (e.g., Ottawa). One way to decrease the number of unmapped words is to use a dynamic suggestion drop-down menu when converting narrative clinical guidelines to a structured format. For example, if SNOMED CT is incorporated into the authoring tool used to convert guideline recommendations, suggesting synonyms of otherwise unmapped words may prompt authors to use concepts that can be mapped to SNOMED CT.
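As a sketch of such a suggestion mechanism (the term and synonym entries below are made up for illustration; a real authoring tool would back this with SNOMED CT concepts and their synonyms):

```python
# Illustrative autocomplete for a guideline-authoring tool: as the author types,
# suggest terms that can be mapped, matching on synonyms as well as the term itself.
# The tiny table below is made up; a real tool would load SNOMED CT content.
TERM_TABLE = {
    "chronic headache": ["chronic headache", "persistent headache"],
    "pulmonary embolism": ["pulmonary embolism", "pe"],
    "hemodynamic instability": ["hemodynamic instability", "unstable vital signs"],
}

def suggest(typed_text, limit=5):
    typed_text = typed_text.lower().strip()
    matches = [term for term, synonyms in TERM_TABLE.items()
               if any(s.startswith(typed_text) for s in synonyms)]
    return matches[:limit]

print(suggest("unstable"))   # ['hemodynamic instability']
```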

For the match types of attribute values, manual and automated mapping resulted in similar numbers of simple matches, while the majority of attribute values mapped as complex matches manually were mapped as partial matches under automated mapping. The average mapping coverage for manual mapping was significantly higher than that of automated mapping. This result is not surprising, as complete matches, defined by combining simple and complex matches (72% in manual mapping; 32% in automated mapping), have a coverage of 100%. Therefore, mapping manually achieved significantly higher coverage than using cTAKES. Reasons why cTAKES did not achieve complete matches include deficiencies in mapping abbreviations (e.g., MRCP), adjectives/modifiers (e.g., missing "chronic" in "chronic headache"), and occasional incorrect matches (e.g., "without risk factors or neurologic deficit" was mapped to "malnutrition").

Limitations

This study focused primarily on terminologies that can represent imaging-related guideline knowledge and may not generalize to guidelines in other disciplines. Second, manual effort was required to filter out redundant mapping results from cTAKES. This may have resulted in greater mapping coverage compared to fully automated mapping. Lastly, while our study addresses the coverage of the three terminologies, it does not take into consideration the relative importance of certain words or phrases during the mapping process.

Conclusion

Standard terminologies, such as SNOMED CT, RadLex and ICD-10-CM, provide ample coverage for mapping imaging-related guideline knowledge. Among the three terminologies, SNOMED CT provides the highest amount of coverage. Efforts are underway to further reduce gaps in coverage and increase availability of guideline knowledge, expressed using standard terminologies.

References

  • 1.Ip IK, Schneider L, Seltzer S, et al. Impact of provider-led, technology-enabled radiology management program on imaging. The American journal of medicine. 2013;126(8):687–92. doi: 10.1016/j.amjmed.2012.11.034. [DOI] [PubMed] [Google Scholar]
  • 2.Rosenthal DI, Weilburg JB, Schultz T, et al. Radiology order entry with decision support: initial clinical experience. Journal of the American College of Radiology: JACR. 2006;3(10):799–806. doi: 10.1016/j.jacr.2006.05.006. [DOI] [PubMed] [Google Scholar]
  • 3.Bowen S, Johnson K, Reed MH, Zhang L, Curry L. The effect of incorporating guidelines into a computerized order entry system for diagnostic imaging. Journal of the American College of Radiology: JACR. 2011;8(4):251–8. doi: 10.1016/j.jacr.2010.11.020. [DOI] [PubMed] [Google Scholar]
  • 4.Blackmore CC, Mecklenburg RS, Kaplan GS. Effectiveness of clinical decision support in controlling inappropriate imaging. Journal of the American College of Radiology: JACR. 2011;8(1):19–25. doi: 10.1016/j.jacr.2010.07.009. [DOI] [PubMed] [Google Scholar]
  • 5.Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. The New England journal of medicine. 2010;363(6):501–4. doi: 10.1056/NEJMp1006114. [DOI] [PubMed] [Google Scholar]
  • 6.H.R.4302 - 113th Congress (2013-2014) Protecting Access to Medicare Act of 2014. 2014 [Google Scholar]
  • 7.Lacson R, Raja AS, Osterbur D, et al. Assessing Strength of Evidence of Appropriate Use Criteria for Diagnostic Imaging Examinations. Journal of the American Medical Informatics Association: JAMIA. 2016 doi: 10.1093/jamia/ocv194. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Sim I, Gorman P, Greenes RA, et al. Clinical decision support systems for the practice of evidence-based medicine. Journal of the American Medical Informatics Association: JAMIA. 2001;8(6):527–34. doi: 10.1136/jamia.2001.0080527. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Warden GI, Lacson R, Khorasani R. Leveraging terminologies for retrieval of radiology reports with critical imaging findings. AMIA Annual Symposium Proceedings. 2011;2011:1481–8. [PMC free article] [PubMed] [Google Scholar]
  • 10.U.S. National Library of Medicine. Unified Medical Language System. 2012 [cited 2016 03.06]; Available from: https://www.nlm.nih.gov/research/umls/Snomed/snomedfaq.html.
  • 11.Langlotz CP. RadLex: a new method for indexing online educational materials. Radiographics. 2006;26(6):1595–7. doi: 10.1148/rg.266065168. [DOI] [PubMed] [Google Scholar]
  • 12.Hazen R, Van Esbroeck AP, Mongkolwat P, Channin DS. Automatic extraction of concepts to extend RadLex. J Digit Imaging. 2011;24(1):165–9. doi: 10.1007/s10278-010-9334-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Centers for Disease Control and Prevention. International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM). 2015 [cited 2016 03.06]; Available from: http://www.cdc.gov/nchs/icd/icd10cm.htm.
  • 14.Harvard Medical School. Harvard Medical School Library of Evidence. 2016 [cited 2016 07.06]; Available from: http://libraryofevidence.med.harvard.edu/app/public.
  • 15.Su K, Ries JE, Peterson GM, et al. Comparing frequency of word occurrences in abstracts and texts using two stop word lists. Proceedings / AMIA Annual Symposium; 2001. pp. 682–6. [PMC free article] [PubMed] [Google Scholar]
  • 16.National Cancer Institute. NCI Metathesaurus Browser. 2014 [cited 2016 02.28]; Available from: https://wiki.nci.nih.gov/display/EVS/NCI+Metathesaurus+Browser.
  • 17.U.S. National Library of Medicine. Unified Medical Language System: UMLS Glossary. 2014 [cited 2016 03.04]; Available from: https://www.nlm.nih.gov/research/umls/newusers/glossary.html.
  • 18.Aronson AR. Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program. Proceedings / AMIA Annual Symposium; 2001. pp. 17–21. [PMC free article] [PubMed] [Google Scholar]
  • 19.National Center for Biomedical Computing. Welcome to BioPortal, the world’s most comprehensive repository of biomedical ontologies. 2016 [cited 2016 02.28]; Available from: http://bioportal.bioontology.org/
  • 20.U.S. National Library of Medicine. MetamorphoSys Help. 2013 [cited 2016 02.28]; Available from: http://www.nlm.nih.gov/research/umls/implementationresources/metamorphosys/help.html.
  • 21.Bodenreider O. The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Res. 2004;32:D267–70. doi: 10.1093/nar/gkh061. (Database issue) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington (DC): National Academy Press; 2001. [PubMed] [Google Scholar]
  • 23.Osheroff JA, Teich JM, Middleton B, Steen EB, Wright A, Detmer DE. A roadmap for national action on clinical decision support. Journal of the American Medical Informatics Association: JAMIA. 2007;14(2):141–5. doi: 10.1197/jamia.M2334. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Simon SR, Kaushal R, Cleary PD, et al. Physicians and electronic health records: a statewide survey. Archives of internal medicine. 2007;167(5):507–12. doi: 10.1001/archinte.167.5.507. [DOI] [PubMed] [Google Scholar]
  • 25.Kawamoto K, Lobach DF. Proposal for fulfilling strategic objectives of the U.S. Roadmap for national action on clinical decision support through a service-oriented architecture leveraging HL7 services. Journal of the American Medical Informatics Association: JAMIA. 2007;14(2):146–55. doi: 10.1197/jamia.M2298. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Sittig DF, Wright A, Osheroff JA, et al. Grand challenges in clinical decision support. J Biomed Inform. 2008;41(2):387–92. doi: 10.1016/j.jbi.2007.09.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Wright A, Sittig DF. A four-phase model of the evolution of clinical decision support architectures. International journal of medical informatics. 2008;77(10):641–9. doi: 10.1016/j.ijmedinf.2008.01.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Tierney WM, Overhage JM, Takesue BY, et al. Computerizing guidelines to improve care and patient outcomes: the example of heart failure. Journal of the American Medical Informatics Association: JAMIA. 1995;2(5):316–22. doi: 10.1136/jamia.1995.96073834. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Pryor TA, Hripcsak G. The Arden syntax for medical logic modules. Int J Clin Monit Comput. 1993;10(4):215–24. doi: 10.1007/BF01133012. [DOI] [PubMed] [Google Scholar]
  • 30.Sordo M, Boxwala AA, Ogunyemi O, Greenes RA. Description and status update on GELLO: a proposed standardized object-oriented expression language for clinical decision support. Studies in health technology and informatics. 2004;107:164–8. Pt 1. [PubMed] [Google Scholar]
  • 31.Boxwala AA, Peleg M, Tu S, et al. GLIF3: a representation format for sharable computer-interpretable clinical practice guidelines. J Biomed Inform. 2004;37(3):147–61. doi: 10.1016/j.jbi.2004.04.002. [DOI] [PubMed] [Google Scholar]
  • 32.Ram P, Berg D, Tu S, et al. Executing clinical practice guidelines using the SAGE execution engine. Studies in health technology and informatics. 2004;107:251–5. Pt 1. [PubMed] [Google Scholar]
  • 33.Kawamoto K, Hongsermeier T, Wright A, Lewis J, Bell DS, Middleton B. Key principles for a national clinical decision support knowledge sharing framework: synthesis of insights from leading subject matter experts. Journal of the American Medical Informatics Association: JAMIA. 2013;20(1):199–207. doi: 10.1136/amiajnl-2012-000887. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Hammond WE. The making and adoption of health data standards. Health affairs. 2005;24(5):1205–13. doi: 10.1377/hlthaff.24.5.1205. [DOI] [PubMed] [Google Scholar]
