AMIA Annual Symposium Proceedings. 2014 Nov 14;2014:432–441.

A Semantic-based Approach for Exploring Consumer Health Questions Using UMLS

Licong Cui 1, Shiqiang Tao 1,2, Guo-Qiang Zhang 1,2
PMCID: PMC4419919  PMID: 25954347

Abstract

NetWellness is a non-profit web service providing high quality health information. It has been in operation since 1995, with over 13 million visits per year by consumers across the world in recent years. Consumer questions in NetWellness have been answered by medical and health professions faculty at three Ohio partner universities: Case Western Reserve University, the Ohio State University, and the University of Cincinnati. However, the resident search interface in NetWellness is ineffective for finding existing questions that have already been carefully answered by experts in an easy-to-understand manner. In our previous work, we presented a Conjunctive Exploratory Navigation Interface (CENI) that reuses NetWellness’ 120 pre-defined health topics to assist question retrieval. This paper presents a novel semantic-based search interface called the Semantic Conjunctive Exploratory Navigation Interface (SCENI), which uses UMLS concepts as topics. Over 60,000 questions were tagged with UMLS Concept Unique Identifiers (CUIs), with each question allowed multiple tags. Using a slightly modified 5-point Likert scale for relevance, SCENI shows improved precision and relevance (precision: 93.47%, relevance: 4.31) on a set of sample queries, in comparison to CENI using NetWellness’ pre-defined topics alone (precision: 77.85%, relevance: 3.3) and NetWellness’ resident search interface (precision: 50.62%, relevance: 1.97).

Introduction

Although a substantial amount of consumer health information is available online [1], it is not necessarily easy for general consumers to access such information. For example, a study reported in JAMA [2] by Berland et al. found that accessing health information using search engines (e.g., Google or Yahoo!) and simple search terms was often insufficient: less than a quarter of the links on the search engines’ first pages of results led to relevant content.

To improve health information retrieval, we developed a Conjunctive Exploratory Navigation Interface (CENI [3]) for exploring NetWellness [7] health questions, with health topics serving as dynamic and searchable menus that complement lookup search. The efficacy of CENI was evaluated by comparing it with a similar interface offering keyword-based search only, as well as with the existing search modes using Google search or NetWellness advanced search. The evaluation was conducted through crowdsourcing, a valuable method for gathering data when human participation is needed. Our crowdsourced evaluation of CENI, a comparative study of search interfaces with anonymous, paid participants recruited from an online labor marketplace called Amazon Mechanical Turk (AMT), showed a nearly 2:1 ratio of preference for CENI on 9 carefully designed search tasks. Participants indicated that CENI was easy to use, provided better organization of health questions by topics, allowed users to narrow down to the most relevant content quickly, and supported exploratory navigation by non-experts or those unsure how to initiate their search.

However, a limitation of CENI is that the health topics are predefined, reusing those from NetWellness in order to have a side-by-side comparative study. Now that the effectiveness of the CENI interface has been established, this paper presents our work in using UMLS concept labels as topics, resulting in a semantic-based system called Semantic Conjunctive Exploratory Navigation Interface (SCENI). This way, SCENI shares CENI’s benefits in its use of topics as dynamic and searchable menus for consumer health information retrieval and navigation, and in allowing users to quickly narrow down to the most relevant results, without the restriction of a relatively small set of predefined topics.

1 Background

1.1 Unified Medical Language System (UMLS) and MetaMap

The Unified Medical Language System (UMLS) [4], developed by the US National Library of Medicine (NLM), is perhaps the largest integrated repository of biomedical vocabularies. The 2014AA release of UMLS covers over 2.9 million concepts from more than 150 source vocabularies. Vocabularies integrated in the UMLS Metathesaurus include the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), Consumer Health Vocabulary (CHV), National Center for Biotechnology Information (NCBI) taxonomy, the Medical Subject Headings (MeSH), RxNorm, and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). UMLS also provides a set of broad subject categories, or semantic types, that allow for the semantic categorization of all concepts. Each UMLS concept has a unique concept identifier (CUI) and is assigned one or more of the 135 semantic types (e.g., Disease or Syndrome, Body Location or Region).

MetaMap [5, 6], also provided by NLM, is the state-of-the-art solution for the annotation of UMLS Metathesaurus concepts and semantic types. It is available via web access, a downloadable Java implementation (MMTx), an Application Programming Interface (API), and a downloadable version of the complete Prolog implementation of MetaMap itself. MetaMap associates biomedical terms to UMLS concepts and assigns a CUI to every term that it can identify.

1.2 NetWellness

NetWellness [7, 8] is one of the first consumer health websites and has been in existence for 19 years. It is a community service providing high quality, unbiased health information created and evaluated by medical and health professions faculty at Case Western Reserve University, the Ohio State University, and the University of Cincinnati. These health professionals include physicians, nurses, pharmacists, dietitians, and dentists, and serve as the “experts” in NetWellness’ popular “Ask an Expert” service. Health questions in NetWellness are generated by consumers and answered by experts, after editorial review to make them readable and understandable to consumers.

NetWellness has over 60,000 consumer questions categorized into 120 health topics. Table 1 shows the top 10 health topics ranked by the number of questions related to each topic. Each question in NetWellness consists of four major components: Health Topic, Subject, Question, and Answer (see Figure 1 for a sample question). Each question is assigned one health topic by the consumer when asking the question. The Subject is the title of a question and usually contains its key topical information. The Question provides more details on the subject. The Answer is given by an expert in the related health topic area.
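For illustration only, a NetWellness question record can be viewed as a simple structure with these four components; the following minimal sketch uses hypothetical field names, not the actual NetWellness schema.

```python
from dataclasses import dataclass

@dataclass
class HealthQuestion:
    """Minimal, illustrative view of one NetWellness 'Ask an Expert' entry."""
    health_topic: str  # single topic assigned by the consumer, e.g., "Ear, Nose, and Throat Disorders"
    subject: str       # short title carrying the key topical information
    question: str      # the consumer's full question text
    answer: str        # the expert's answer, published after editorial review
```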

Table 1:

Top 10 health topics ranked by the number of questions in NetWellness.

Topics # of Questions
Pharmacy and Medications 3802
Ear, Nose, and Throat Disorders 3481
Pregnancy 3209
Children’s Health 2827
Myasthenia Gravis 2470
Diet and Nutrition 2335
Women’s Health 2310
Eye and Vision Care 2249
Lung and Respiratory 1674
Kidney Diseases 1624

Figure 1: A sample question with the four major components: Health Topic, Subject, Question, and Answer.

1.3 Conjunctive Exploratory Navigation Interface (CENI)

Since each question is assigned a single topic in NetWellness, accessing health questions through multiple pathways in navigational exploration is impeded. In [9], a multi-topic assignment method using formal concept analysis was proposed to categorize a question into multiple relevant topics. It achieved a 36.5% increase in recall with virtually no sacrifice in precision compared to NetWellness’ original single-topic assignment. Based on the results of the multi-topic assignment approach, a novel Conjunctive Exploratory Navigation Interface called CENI [3] was developed to support effective retrieval of consumer health questions using NetWellness health topics. The effectiveness of the CENI interface was evaluated through crowdsourcing and received a nearly 2:1 ratio of preference compared to two other search modes.

However, CENI is limited to the health topics adopted from NetWellness’ existing ones, which may not represent the best choice of potential health topics. In this paper, we present the semantic-based SCENI, which mines a relatively large collection of UMLS concepts as health topics and tags each question with its most relevant topics.

2 Methods

Figure 2 depicts the high-level architecture of the proposed UMLS-concept-based tagging approach to facilitating consumer health question retrieval in NetWellness. First, all health questions in NetWellness are processed and tagged with UMLS concepts, allowing the handling of synonyms. Second, the most relevant concepts for each question are selected using concept-level TF-IDF and the Consumer Health Vocabulary. The resulting concepts for all questions then constitute a concept cloud, which is indexed for retrieving questions through the semantic search interface SCENI.

Figure 2: Overview of the semantic-based approach to exploring consumer health questions using UMLS concepts.

2.1 Question Annotation with UMLS Concepts

For each health question in NetWellness, the text in its topic, subject, and question is processed using MetaMap and mapped to UMLS concepts (CUIs) and semantic types. Since a large number of CUIs are involved and not all of them are relevant to consumer health, we manually identified 31 semantic types that are less meaningful for retrieving health questions, including “Qualitative Concept,” “Quantitative Concept,” “Intellectual Product,” “Regulation or Law,” and “Geographic Area.” Concepts with such semantic types were filtered out. Moreover, we only consider non-negated concepts, in order to remove the noise that negated concepts would introduce when retrieving related questions based on concepts. After this process, each question is annotated with a set of UMLS CUIs representing the question’s relevant concepts. For example, the question shown in Figure 1 is annotated with the set of 15 CUIs shown in Table 2, where each CUI takes the form “CUI: Preferred concept name,” and the number in parentheses represents the count of occurrences of the CUI in this question (the parentheses are omitted if the CUI occurs only once).

Table 2:

A set of 15 UMLS concepts annotated for the question shown in Figure 1. The number of occurrences of each CUI is indicated in parentheses.

C0013447: Ear Diseases C0028432: Nose Diseases C0544058: Throat Disorders
C0013443: Ear (2) C0005889: Body Fluids (2) C0020517: Allergy
C1881534: Make C0455270: Sharp pain C0240577: Swollen nose
C0162723: Zyrtec C0286677: Flonase (2) C2945656: Severe Allergy
C0033213: Problem C0439801: Limited C0013227: Drug (2)
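As a concrete illustration of this annotation-and-filtering step, the following sketch (Python, illustrative only; the actual pipeline consumes MetaMap's own output) filters hypothetical MetaMap mentions by the excluded semantic types and drops negated mentions. The `Mention` fields and the `EXCLUDED_TYPES` subset are assumptions for illustration, not the actual 31-type list.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Mention:
    """Hypothetical, simplified view of one MetaMap mention."""
    cui: str                         # e.g., "C0013443"
    preferred_name: str              # e.g., "Ear"
    semantic_types: tuple[str, ...]  # e.g., ("Body Part, Organ, or Organ Component",)
    negated: bool

# Illustrative subset of the 31 semantic types judged less meaningful.
EXCLUDED_TYPES = {
    "Qualitative Concept", "Quantitative Concept",
    "Intellectual Product", "Regulation or Law", "Geographic Area",
}

def annotate(mentions: list[Mention]) -> Counter:
    """Return CUI -> occurrence count for non-negated, informative mentions."""
    counts = Counter()
    for m in mentions:
        if m.negated:
            continue  # drop negated concepts (noise for retrieval)
        if any(t in EXCLUDED_TYPES for t in m.semantic_types):
            continue  # drop concepts with less meaningful semantic types
        counts[m.cui] += 1
    return counts
```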

2.2 Concept Selection

For each question, we select a smaller set of the most relevant concepts by adopting the idea of the Term Frequency-Inverse Document Frequency (TF-IDF) metric [10], one of the most common term weighting schemes in information retrieval and text mining. In this paper, we calculate TF-IDF based on CUIs instead of terms.

The term frequency (TF) for a CUI c in a question q is calculated as the number of occurrences of the CUI in the question and its topic and subject, f(c, q), normalized by the total number of CUI occurrences in that question, ∑{f(w, q) : w ∈ q}, as follows:

$$\mathrm{tf}(c, q) = \frac{f(c, q)}{\sum \{ f(w, q) : w \in q \}}.$$

The inverse document frequency (IDF) is used to measure the importance of a CUI c in a corpus of questions Q. It is calculated as the logarithm of the quotient of the total number of questions, |Q|, and the number of questions containing the CUI, |{q ∈ Q : c ∈ q}|, as follows:

$$\mathrm{idf}(c, Q) = \log \frac{|Q|}{|\{ q \in Q : c \in q \}|}.$$

To avoid division by zero, we adjust the denominator to 1 + |{q ∈ Q : c ∈ q}|. The TF-IDF weight, calculated as tf-idf(c, q, Q) = tf(c, q) × idf(c, Q), is used to determine the importance of the CUI c for the question q.

For each question, its top 5 CUIs ranked by TF-IDF weight are selected for tagging the question. We also keep the CUIs identified for the topic and subject of a question, since they contain key information of the question in very short text. Furthermore, CUIs not included in the Consumer Health Vocabulary (CHV) [11, 12] are discarded.
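A minimal sketch of this selection step is shown below, assuming per-question CUI occurrence counts (e.g., from the annotation step), a set of CUIs found in the question's topic and subject, and a set of CHV CUIs. All function and variable names are illustrative, not part of the actual implementation.

```python
import math
from collections import Counter

def tf(cui: str, counts: Counter) -> float:
    """Term frequency of a CUI within one question (occurrences / all CUI occurrences)."""
    total = sum(counts.values())
    return counts[cui] / total if total else 0.0

def idf(cui: str, corpus: list[Counter]) -> float:
    """Inverse document frequency over the question corpus, with +1 to avoid division by zero."""
    n_containing = sum(1 for c in corpus if cui in c)
    return math.log(len(corpus) / (1 + n_containing))

def select_concepts(counts: Counter, corpus: list[Counter],
                    topic_subject_cuis: set[str], chv_cuis: set[str],
                    top_k: int = 5) -> set[str]:
    """Keep the top-k CUIs by TF-IDF plus the topic/subject CUIs, then restrict to CHV."""
    ranked = sorted(counts, key=lambda c: tf(c, counts) * idf(c, corpus), reverse=True)
    selected = set(ranked[:top_k]) | (topic_subject_cuis & set(counts))
    return {c for c in selected if c in chv_cuis}
```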

After this process, each question is annotated with a smaller set of CUIs representing its most relevant concepts. For example, Table 3 displays the resulting set for the question in Figure 1. Combining the most relevant CUIs of all the questions, with duplicates removed, results in a concept cloud.

Table 3:

A set of 7 most relevant UMLS concepts representing the question shown in Figure 1.

C0013447: Ear Diseases C0028432: Nose Diseases C0544058: Throat Disorders
C0013443: Ear C0005889: Body Fluids C0240577: Swollen nose
C0286677: Flonase

2.3 Concept-based Indexing and Semantic-based Conjunctive Search

The concept cloud is indexed using Picky [13], a Ruby-based semantic text search engine for categorized data. Both CUIs and the strings they represent are indexed to support quick response to concept searches by users. For each health question, the most relevant CUIs obtained after the concept selection process are indexed to support question retrieval through the semantic search interface SCENI. SCENI is implemented in Ruby on Rails and reuses the interface design of CENI. SCENI is designed to handle conjunctive search over a large number of questions and concepts. In SCENI, given a set of concepts to search, their synonyms are also used to better retrieve related questions.
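The production system relies on Picky for indexing; purely as an illustration of the conjunctive retrieval semantics (every selected concept must tag a returned question), the following Python sketch builds an inverted index from CUI to question IDs and intersects the postings. Names are hypothetical, and synonym expansion is omitted.

```python
from collections import defaultdict

def build_index(question_tags: dict[str, set[str]]) -> dict[str, set[str]]:
    """Invert question -> CUIs into CUI -> question IDs."""
    index = defaultdict(set)
    for qid, cuis in question_tags.items():
        for cui in cuis:
            index[cui].add(qid)
    return index

def conjunctive_search(index: dict[str, set[str]], selected_cuis: list[str]) -> set[str]:
    """Return questions tagged with ALL selected concepts (empty selection -> all questions)."""
    if not selected_cuis:
        return set().union(*index.values()) if index else set()
    result = index.get(selected_cuis[0], set()).copy()
    for cui in selected_cuis[1:]:
        result &= index.get(cui, set())
    return result

# Example: questions tagged with both "left flank pain" and "kidney disease" CUIs.
# conjunctive_search(build_index(tags), ["C0238551", "C0022658"])
```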

2.4 Evaluation

We evaluated the proposed semantic-based approach in two ways. One is to evaluate whether the concept selection process obtains the most relevant concepts for tagging questions. The other is to compare the proposed concept-based conjunctive search with two existing search modes for NetWellness questions.

2.4.1 Concept Selection

Since no well-established reference standard is available, two experts in health informatics created a reference standard of the most relevant concepts for a set of 50 NetWellness health questions. In [9], a set of 300 randomly selected NetWellness health questions was used for evaluating the multi-topic assignment method, resulting in a subset of 278 good-quality questions as the reference standard. In this paper, we randomly chose 50 questions among these 278 and developed a web-based annotation interface for the annotators to review the questions and tag them with the most related concepts. Since it is hard for the annotators to come up with UMLS concepts by themselves, we provided a baseline set of UMLS concepts for each question, identified by MetaMap after removing concepts with less meaningful semantic types. The two annotators reviewed the 50 questions together, resolved any disagreements, and created the reference standard.

2.4.2 Semantic-based Conjunctive Search

To evaluate the semantic-based conjunctive search, we designed 10 search tasks (see Table 4) to compare three search modes: the keyword-based search on the NetWellness official website (NWO) [7], the topic-based conjunctive search in CENI [3], and the proposed UMLS concept-based conjunctive search in SCENI.

Table 4:

List of ten search tasks.

Search Task ID Search Task Description
1 Can anti-epileptic medications be taken during pregnancy?
2 Is colon cancer an inherited disease?
3 Is it safe to take birth control pills when breastfeeding?
4 Is it possible to contract HIV from toilets?
5 Can hypertension cause heart attack?
6 Does Keppra cause hair loss?
7 Does drinking alcohol affect emphysema?
8 What diet would help with gastroesophageal reflux disease (GERD)?
9 What are possible causes of infant sleep apnea?
10 Does toothpaste cause allergy?

For each search task in Table 4, an expert in health informatics used each search mode to retrieve a list of questions and gave a relevance score for each retrieved question using a slightly modified 5-point Likert scale [14], where 0 indicated not relevant at all and values between 1 and 5 were considered relevant (1 indicated weakly relevant and 5 indicated strongly relevant).

3 Results

3.1 Concept Statistics

Over 60,000 health questions in NetWellness were processed. MetaMap identified 32,195 distinct relevant UMLS CUIs across all questions, their topics, and subjects. After filtering out uninformative CUIs using their semantic types, 23,955 CUIs were obtained. After the CUI-based TF-IDF selection, 21,365 CUIs remained. Removing CUIs that were not in CHV resulted in 18,538 CUIs, which constituted the concept cloud. Figure 3 displays the top 15 concepts and the number of questions in which each occurs.

Figure 3: Top 15 UMLS concepts occurring in the NetWellness questions. The x-axis indicates the UMLS concept name, and the y-axis indicates the number of questions each concept occurs in.

3.2 Semantic Conjunctive Exploratory Navigation Interface (SCENI)

The general layout of the SCENI interface is illustrated in Figure 4. SCENI has four general areas:

  1. Area for searching concepts allows a user to type a search string into a search box and retrieve concepts of interest from the concept cloud;

  2. Area for displaying candidate concepts matching the search string lists the matched concepts for a user to select from. Clicking any concept in the list automatically adds a concept tag in the area for displaying selected concepts (red arrow);

  3. Area for displaying selected concepts shows user-selected concepts. Each selected concept is displayed as a tag in the form of “concept name (CUI).” The “Reset” button is used to start a new query by clearing the specified concepts;

  4. Area for displaying health questions related to all the selected concepts is automatically updated when the user clicks a candidate concept, the “Search” button, or the “Reset” button. By default, all questions are displayed if no concept is selected, consistent with convention.

Figure 4: A screenshot of the semantic conjunctive exploratory navigation interface SCENI retrieving health questions related to “left flank pain” and “kidney disease.” The selected concepts (3. Area for displaying selected concepts) are obtained by typing an unstructured query string into the search box (1. Area for searching concepts) and clicking the desired concepts (2. Area for displaying candidate concepts matching the query string).

3.3 Evaluation of Concept Selection

To evaluate the results of concept selection, example-based measures for multi-label classification problems [15, 16] were used as the evaluation metrics, to avoid a few questions dominating the values of the metrics. Let the reference standard consist of m = 50 questions {(qi, Yi) | i = 1, …, m}, where Yi is the set of most relevant concepts tagged for question qi, and let Zi be the set of predicted concepts for qi. The example-based precision (P), recall (R), and F1 measure (F1) were calculated as follows:

$$P = \frac{1}{m} \sum_{i=1}^{m} \frac{|Y_i \cap Z_i|}{|Z_i|}, \quad R = \frac{1}{m} \sum_{i=1}^{m} \frac{|Y_i \cap Z_i|}{|Y_i|}, \quad \text{and} \quad F_1 = \frac{1}{m} \sum_{i=1}^{m} \frac{2\,|Y_i \cap Z_i|}{|Z_i| + |Y_i|}.$$
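For clarity, a direct translation of these example-based measures into code might look like the following sketch, assuming every Y_i and Z_i is non-empty (as in our reference standard); the function name is illustrative.

```python
def example_based_metrics(reference: list[set[str]], predicted: list[set[str]]):
    """Example-based precision, recall, and F1, averaged over questions.

    reference[i] (Y_i): reference-standard CUIs for question i
    predicted[i] (Z_i): predicted CUIs for question i
    Assumes all sets are non-empty.
    """
    m = len(reference)
    p = sum(len(y & z) / len(z) for y, z in zip(reference, predicted)) / m
    r = sum(len(y & z) / len(y) for y, z in zip(reference, predicted)) / m
    f1 = sum(2 * len(y & z) / (len(z) + len(y)) for y, z in zip(reference, predicted)) / m
    return p, r, f1
```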

Table 5 shows the values of these metrics for the results of three methods: the baseline MetaMap, the TF-IDF method, and the TF-IDF method followed by filtering out concepts not in CHV (TF-IDF + CHV, for short). The TF-IDF + CHV method achieved the best F1 measure of 0.806, an increase of 40% compared to the baseline MetaMap.

Table 5:

The example-based precision, recall, and F1 measure for the results of concept selection using baseline MetaMap, TF-IDF, and TF-IDF + CHV.

Method Precision Recall F1
Baseline MetaMap 0.446 1.0 0.576
TF-IDF 0.743 0.932 0.802
TF-IDF + CHV 0.807 0.859 0.806

3.4 Evaluation of Semantic-based Conjunctive Search

To evaluate concept-based search using SCENI, the scores given by the evaluator for the questions retrieved for the search tasks in Table 4 were used to calculate precision and relevance. For each search task, precision was calculated by dividing the number of relevant questions (score of 1 or higher) by the number of retrieved questions; relevance was calculated by averaging the scores given to the retrieved questions:

$$\mathrm{precision} = \frac{\#\ \text{of relevant questions}}{n}, \qquad \mathrm{relevance} = \frac{\sum_{i=1}^{n} \mathrm{score}_i}{n},$$

where n is the number of retrieved questions. We did not calculate recall, since it would require evaluating every health question in NetWellness for each search task, which is too labor-intensive to perform manually.
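A minimal sketch of these two per-task measures, given the list of 0–5 scores assigned to the retrieved questions for one task (illustrative code, not part of the evaluated system):

```python
def precision_and_relevance(scores: list[int]) -> tuple[float, float]:
    """Per-task precision and average relevance from the 0-5 scores of retrieved questions.

    A retrieved question counts as relevant if its score is 1 or higher.
    """
    n = len(scores)
    precision = sum(1 for s in scores if s >= 1) / n
    relevance = sum(scores) / n
    return precision, relevance

# Example for a hypothetical task with 4 retrieved questions:
# precision_and_relevance([4, 0, 5, 3]) -> (0.75, 3.0)
```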

Table 6 shows the detailed precision and relevance results for the two existing search modes (NWO and CENI) and the proposed SCENI. The average precision was 50.62% for NWO, 77.85% for CENI, and 93.47% for SCENI. The average relevance was 1.97 for NWO, 3.3 for CENI, and 4.31 for SCENI. This demonstrates the improvement achieved by the semantic-based approach.

Table 6:

Results of precision and relevance for three search modes of retrieving health questions in NetWellness. NWO: the NetWellness official website [7], CENI: the Conjunctive Exploratory Navigation Interface in [3], and SCENI: the proposed semantic CENI. Relevance was measured using a slightly modified 5-point Likert scale, where 0 indicated not relevant at all, 1 indicated weakly relevant, and 5 indicated strongly relevant.

Query ID  Precision (%): NWO, CENI, SCENI  Relevance (scale 0–5): NWO, CENI, SCENI
1 42.86 58.33 87.5 2.14 2.82 4.38
2 22.22 100 100 0.33 5 5
3 33.33 90.91 100 1.11 3.82 3.88
4 0 100 100 0 5 5
5 70 75 88.89 2.5 1.83 2.22
6 100 100 100 4 4.5 4.75
7 33.33 42.86 66.67 1.67 2.14 3.33
8 76 85.71 91.67 2.64 3.79 4.58
9 60 40 100 3 2 5
10 68.42 85.71 100 2.32 2.14 5

Average 50.62 77.85 93.47 1.97 3.3 4.31

4 Discussions

4.1 Performance Analysis

We manually reviewed some health questions and found a number of factors affecting the performance of the proposed semantic-based approach:

  1. Spelling errors in health questions. For example, “Larynx” was misspelled as “Larnyx,” and was not identified by MetaMap.

  2. Incorrect UMLS concepts identified by MetaMap. Consumers sometimes describe health questions using abbreviations, such as “MG” for “Myasthenia Gravis” and “mg” for “Milligram.” In such cases, MetaMap sometimes incorrectly identifies the concept “C2346927: Magnesium Cation” for them.

  3. Relatively short descriptions of health questions. Some questions are described in only one or two sentences, which may weaken the advantage of concept selection using TF-IDF and result in less relevant concepts. For instance, for the following question
    “If I had a hiatal hernia, would an upper GI series be able to identify it?”
    “able” was recognized as the concept “C1299581: Able (finding)” and selected as one of the most relevant concepts due to a small number of concepts identified from such a brief question.

4.2 Limitations

Our evaluation of the semantic-based search in SCENI is limited in the number of search tasks performed and the number of evaluators involved in reviewing results. More search tasks and evaluators would reduce bias and provide more statistical power. Since each search task may return tens if not hundreds of questions, manually evaluating precision and recall becomes infeasible for larger numbers of questions. An alternative would be to make the SCENI interface available to the public, with a feedback mechanism built into the interface. Although this could potentially collect feedback at scale, it could still introduce biases in the feedback responses.

Using UMLS concepts as topics introduces additional challenges. First, concepts with the same CUI may have multiple labels, which introduces overhead on the interface. On the other hand, terms similar to each other may have distinct CUIs, providing an opportunity to apply subsumption reasoning, which we have not yet explored in this work.

5 Conclusion

We have presented a semantic-based approach for exploring consumer health questions using UMLS concepts. SCENI shares the established benefits of CENI in supporting menu-driven conjunctive navigation, without the restriction to a smaller predefined set of topics. Our preliminary evaluation shows an improvement in precision and relevance, and points to the need for alternative forms of systematic, larger-scale evaluation.

Acknowledgments

We thank Susan Wentz for providing the consumer health questions from NetWellness. This publication was made possible by the Clinical and Translational Science Collaborative of Cleveland, UL1TR000439 from the National Center for Advancing Translational Sciences (NCATS) component of the US National Institutes of Health and NIH roadmap for Medical Research. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.

References

  • [1]. Luo W, Najdawi M. Trust-building measures: a review of consumer health portals. Communications of the ACM. 2004;47(1):108–113.
  • [2]. Berland GK, Elliott MN, Morales LS, et al. Health information on the internet: accessibility, quality, and readability in English and Spanish. JAMA. 2001;285(20):2612–2621. doi: 10.1001/jama.285.20.2612.
  • [3]. Cui L, Carter R, Zhang GQ. Evaluation of a novel conjunctive exploratory navigation interface for consumer health information: a crowdsourced comparative study. Journal of Medical Internet Research. 2014;16(2):e45. doi: 10.2196/jmir.3111.
  • [4]. Bodenreider O. The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Res. 2004;32:D267–D270. doi: 10.1093/nar/gkh061.
  • [5]. Aronson AR. Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program. Proceedings of the American Medical Informatics Association (AMIA) Symposium. 2001:17–21.
  • [6]. Aronson AR, Lang FM. An overview of MetaMap: historical perspective and recent advances. Journal of the American Medical Informatics Association. 2010;17(3):229–236. doi: 10.1136/jamia.2009.002733.
  • [7]. NetWellness. http://www.netwellness.org/ [Accessed August 3rd, 2014].
  • [8]. Morris TA, Guard JR, Marine SA, et al. Approaching equity in consumer health information delivery: NetWellness. Journal of the American Medical Informatics Association. 1997;4(1):6–13. doi: 10.1136/jamia.1997.0040006.
  • [9]. Cui L, Xu R, Luo Z, Wentz S, Scarberry K, Zhang GQ. Multi-topic assignment for exploratory navigation of consumer health information in NetWellness using formal concept analysis. BMC Medical Informatics and Decision Making. 2014;14:63. doi: 10.1186/1472-6947-14-63.
  • [10]. Jones KS. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation. 1972;28(1):11–21.
  • [11]. Consumer Health Vocabulary. http://www.consumerhealthvocab.org/ [Accessed August 3rd, 2014].
  • [12]. Zeng Q, Tse T. Exploring and developing consumer health vocabularies. Journal of the American Medical Informatics Association. 2006;13(1):24–29. doi: 10.1197/jamia.M1761.
  • [13]. Picky. http://florianhanke.com/picky/ [Accessed August 3rd, 2014].
  • [14]. Kwak M, Leroy G, Martinez JD, Harwell J. Development and evaluation of a biomedical search engine using a predicate-based vector space model. Journal of Biomedical Informatics. 2013;46(5):929–939. doi: 10.1016/j.jbi.2013.07.006.
  • [15]. Tsoumakas G, Katakis I. Multi-label classification: an overview. International Journal of Data Warehousing and Mining (IJDWM). 2007;3(3):1–13.
  • [16]. Tsoumakas G, Katakis I, Vlahavas I. Mining multi-label data. In: Data Mining and Knowledge Discovery Handbook. Springer US; 2010. pp. 667–685.
