AMIA Annual Symposium Proceedings. 2015 Nov 5;2015:852–860.

Analysis of empty responses from electronic resources in infobutton managers

Jie Long 1, Nathan C Hulse 1,2, Cui Tao 3
PMCID: PMC4765673  PMID: 26958221

Abstract

Infobuttons provide context-aware educational materials to both providers and patients and are becoming an important element in modern electronic health records (EHRs) and personal health records (PHRs). However, the content that different electronic resources (e-resources) return in response to infobutton requests has not been fully analyzed and evaluated. In this paper, we propose a method for automatically analyzing responses brokered by an infobutton manager, and we implemented a tool that retrieves and analyzes these responses. To test the tool, we extracted and sampled common and uncommon concepts from EHR usage data in Intermountain Healthcare’s enterprise data warehouse. From the output of the tool, we evaluate infobutton performance along multiple dimensions: the most versus least commonly used concepts, the different modules in the patient portal, the different e-resources, and the type of access (standardized Health Level Seven (HL7) versus not). Based on the results of our evaluation, we suggest enhancements to the current infobutton implementation, including adjusting the access priorities of e-resources and encouraging the use of the HL7 standard.

Introduction

Infobuttons have become an increasingly important element in modern EHRs and PHRs 1,2 and serve as a notable solution for addressing clinical information needs at the point of care 3-6. Infobuttons are currently part of the HL7 standard 7,8 as well as the meaningful use criteria 9. They can be used to deliver educational materials to both physicians and patients. Infobuttons deliver context-aware information, which has proved to be an effective way to retrieve more relevant information for end users 10. The context of an infobutton request includes basic information about the user, the patient, the system, and the requested concept. As a result, beyond the requested concept that a user would normally type into a search box, the infobutton manager can retrieve information from different e-resources and find the best match for an input concept given the provided context. In this way, content in the infobutton response is tailored to better reflect the needs of end users.

It is very important that infobuttons perform well in satisfying users’ information needs, especially in PHRs, where end users have less professional knowledge than physicians in seeking out good clinical information. However, the content brought up by infobutton managers has not been fully analyzed, and its performance is not well documented in the medical informatics literature. The lack of a systematic analysis of this content can allow designers’ biases to shape the infobutton manager and result in less than optimal responses. This can produce a frustrating end-user experience, one that could lessen users’ likelihood of using the tool in the future.

One of the interesting limitations of most current infobutton implementations 11 is that the infobutton manager is typically not aware of the content it will receive before it generates a link that directs users to it. This occurs because the first version of the HL7 standard (the one that is most widely adopted) is a unidirectional standard that does not allow metadata to flow back to the infobutton manager. As such, there is an implicit trust between the infobutton manager (on the EHR side) and the content provider that relevant content will be returned. Clearly, content providers have a finite set of available content, and it may not be indexed in such a way that all of it is readily accessible from the infobutton manager. This creates the potential for ‘empty responses’ from e-resources linked to from the infobutton manager: scenarios in which a user clicks on a link from the infobutton and arrives at a page with zero results. This scenario is to be anticipated at least some of the time when linking from very uncommon concepts, but it becomes notably worse if the infobutton manager returns empty responses for commonly used concepts or in highly trafficked modules.

Systematically analyzing the content brought up by infobuttons, however, is a new challenge. In current implementations of infobutton managers, the ranking of e-resources is predefined based on the designer’s judgment. There is nowhere in the current design to collect user feedback on the content of e-resources returned by the infobutton manager. If the content is empty, or an error occurs for some e-resource, there is no mechanism to suppress the empty links or report errors before the content is delivered to the end users.

In this research, we propose a method to automatically detect empty responses from e-resources and to provide statistics on them. The infobutton content is retrieved through Intermountain Healthcare’s patient portal, MyHealth 12. The content coverage presented in this manuscript includes three modules in MyHealth and the seven e-resources used by these modules. We implemented a tool that fires a series of infobutton requests (from a list of input parameters), automatically detects empty responses, and records data for further analysis; each sampled concept generates one infobutton request. The concepts are sampled from the three most trafficked ‘infobutton-supported’ modules in our personal health record: health concerns (problem list), medications, and tests and procedures (lab values). The number of empty responses is then classified by module and by e-resource.
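To make the request-composition step concrete, the sketch below builds an HL7-style context-aware knowledge request URL. The parameter names follow the HL7 Infobutton standard's URL-based convention; the base URL, helper name, and the specific context values are illustrative assumptions, not the paper's actual configuration.

```python
from urllib.parse import urlencode

def build_infobutton_request(base_url, concept_code, code_system, display_name,
                             task="MLREV", gender="F", age_years=45,
                             recipient="PAT"):
    """Compose an HL7 context-aware knowledge request URL.

    Parameter names follow the HL7 Infobutton URL-based convention;
    base_url and the default context values are illustrative only.
    """
    params = {
        "taskContext.c.c": task,                      # task context, e.g. medication list review
        "mainSearchCriteria.v.c": concept_code,       # coded concept from the EHR/PHR
        "mainSearchCriteria.v.cs": code_system,       # code system OID
        "mainSearchCriteria.v.dn": display_name,      # human-readable display name
        "patientPerson.administrativeGenderCode.c": gender,
        "age.v.v": str(age_years),
        "age.v.u": "a",                               # age unit: years
        "informationRecipient": recipient,            # PAT = patient (PHR context)
        "knowledgeResponseType": "text/xml",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical request for ibuprofen (RxNorm code 5640) against a test endpoint.
url = build_infobutton_request(
    "https://example.org/infobutton", "5640",
    "2.16.840.1.113883.6.88", "Ibuprofen")
print(url)
```

In the paper's setup the patient and provider context was predefined and fixed for all requests, so a batch run would simply call such a builder once per sampled concept.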

Hypothesis

In approaching this evaluation, we anticipated that (1) websites with an HL7-standardized API would return fewer empty responses; (2) infobuttons would return fewer empty responses for more commonly used concepts; (3) the most heavily used infobutton module would return the fewest empty responses; and (4) websites would vary both in the modules they support and in how well they support common versus uncommon concepts. We expect these conclusions to help researchers better understand the rate of empty responses in content brought up by infobuttons and thereby improve current infobutton performance. The distribution of empty responses can also help designers better assign access priorities to different websites, so that end users encounter fewer of the empty responses that can make the experience frustrating.

Background

Intermountain Healthcare is a not-for-profit integrated healthcare delivery system based in Salt Lake City, Utah. It provides healthcare for the entire state of Utah and parts of southeastern Idaho. Intermountain maintains 22 inpatient hospitals (including a children’s hospital, an obstetrical facility and a dedicated orthopedic hospital), more than 185 outpatient clinics, and 18 community clinics serving uninsured and low-income patients. Intermountain provides primary and specialty care for approximately half of the residents of the state of Utah. Intermountain Healthcare has been recognized in the literature for supporting best care practice with clinical decision support interventions 13. Intermountain’s infobutton manager (a key component of this effort) is used regularly and its development and uptake have been detailed previously 14.

Infobuttons have been in use at Intermountain Healthcare for over 15 years 15,16. They have been integrated in two separate clinical systems, including usage from 4 major modules within these systems. Usage has steadily increased over the years, with over 1,700 unique monthly users, accounting for over 18,000 infobutton sessions per month.

Infobuttons have been available to patients in Intermountain’s patient portal, MyHealth, since early 2014. MyHealth is actively used by many patients, with tens of thousands of logins per month. The implementation, based on OpenInfobutton 13, has been augmented with local services supporting internal logging, integration with terminology services, and enhancements to satisfy local security requirements. This is Intermountain Healthcare’s first exposure of infobuttons to the public network. Infobuttons are available from the health concerns, medications, and lab value modules in MyHealth. Currently, infobuttons link to seven different e-resources from MyHealth: Krames StayWell, MedlinePlus, HealthFinder.gov, FamilyDoctor.org, Mayo Clinic, Drugs.com, and Lab Tests Online.

This research analyzes infobutton responses in MyHealth from three modules: Medications, Tests and procedures, and Health concerns. The analysis is based on 400 sampled concepts from each module. Specifically, we focus on automatically detecting empty responses for these concepts and report the related statistics in pursuit of enhancing the current implementation of infobuttons.

The targeted concepts are sampled from the Enterprise Data Warehouse (EDW), where patients’ health records are stored. In each module, we select two sets of concepts: the top 200 and the bottom 200 of all concepts ranked by usage. Selecting these two sets enables us to differentiate infobutton performance between the most common and the least common conditions. We expect infobuttons to support the common conditions better than the uncommon ones.
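The top/bottom sampling described above amounts to ranking concepts by usage count and taking both ends of the list. A minimal sketch, assuming the usage counts have already been exported from the EDW into a dictionary (the concept names and counts below are toy values):

```python
def sample_concepts(usage_counts, n=200):
    """Select the top-n and bottom-n concepts by usage count.

    usage_counts: mapping of concept name -> number of hits.
    The real extraction runs against the enterprise data warehouse;
    this toy version works on an in-memory dict.
    """
    ranked = sorted(usage_counts, key=usage_counts.get, reverse=True)
    return ranked[:n], ranked[-n:]

# Hypothetical usage data, sampling 2 concepts per set instead of 200.
usage = {"Hypertension": 9000, "Depression": 7000, "Anxiety": 6500,
         "Retinoblastoma": 3, "Cystic Kidney Disease": 1}
top, bottom = sample_concepts(usage, n=2)
print(top)     # the two most-used concepts
print(bottom)  # the two least-used concepts
```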

The empty responses are recorded and broken down by module and by e-resource. By module: since Medications accounted for over 75% of infobutton usage in previous research, we expect infobuttons to perform well in Medications, with fewer empty responses than in the other two modules. By e-resource: classifying empty responses per e-resource lets us evaluate each e-resource’s performance, and we expect variance across them.

In addition, this research compares empty responses between e-resources that support the HL7 standard and those that do not. We expect the HL7 standard API to yield more accurate search results as well as fewer empty responses, because an HL7-standard request is context-aware with respect to patient information and its matching criteria are based on standard coding systems rather than free text.

Methods

This section describes the complete process of retrieving infobutton links and detecting empty responses from all the infobutton-supported e-resources in three modules of the patient portal MyHealth. The process includes building a tool, sampling concepts, and collecting empty responses from each e-resource. The tool automatically sends an infobutton request for each sampled concept and records any empty responses it detects. The concepts are sampled from Intermountain’s enterprise data warehouse, which stores longitudinal patient data. The empty responses are collected by running the tool against the sampled concepts.

Automatically detect empty responses

We designed a program to analyze the content of e-resources retrieved via infobutton requests in MyHealth. Figure 1 shows the workflow of the automated process. First, a series of concepts is fed into the program for analysis. For each input concept, an infobutton request is composed using context information from a test patient and a test provider (both predefined and fixed for all requests). The infobutton manager processes the request and generates a corresponding response listing the supported e-resources. For each URL listed, the program follows the URL and retrieves and parses the remote content. By parsing the header of the response, we record the status code, which indicates whether there was an error with the website or the connection; by parsing the body, we record the number of returned items. A set of rules drives the parsing of the remote content: each rule defines the HTML/XML path used to extract the information of interest from a specific e-resource, and each e-resource has its own rule set that allows us to pinpoint key information in the page for analysis. Using these predefined rules, the program identifies empty responses and records URLs and other metadata for further analysis.
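The classification step above can be sketched as follows. This is a simplified stand-in, not the actual tool: the rule here is a single regex over an invented result-item marker, whereas the real rule sets encode HTML/XML paths specific to each e-resource.

```python
import re

# Per-resource parsing rules: a pattern that matches one result item in the
# page body. These patterns are illustrative stand-ins for the real
# HTML/XML paths used against each e-resource.
RULES = {
    "ExampleResource": re.compile(r'<li class="result-item">'),
}

def analyze_response(resource, status_code, body):
    """Classify one fetched e-resource page.

    Returns a record with the HTTP status, the number of result items
    found by the resource-specific rule, and an empty-response flag.
    """
    rule = RULES[resource]
    n_items = len(rule.findall(body)) if status_code == 200 else 0
    return {
        "resource": resource,
        "status": status_code,
        "items": n_items,
        "empty": status_code != 200 or n_items == 0,
    }

page_with_hits = '<ul><li class="result-item">A</li><li class="result-item">B</li></ul>'
page_no_hits = "<p>No results found.</p>"

print(analyze_response("ExampleResource", 200, page_with_hits))  # 2 items, not empty
print(analyze_response("ExampleResource", 200, page_no_hits))    # 0 items, empty
```

A non-200 status code is treated as empty here, matching the paper's notion that both connection/site errors and zero-result pages fail the end user.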

Figure 1. Flow chart of automatically detecting empty responses.

Dataset

We extracted a large data set for analysis from each of the three infobutton domains. We created two sets of concepts for each infobutton-supported module: medications, health concerns, and tests and procedures. In each module, the two sets represent the most common and the most uncommon conditions based on concept usage: the top 200 concepts represent the most common conditions and the bottom 200 the uncommon conditions. For each sampled concept, we compose an infobutton request. Table 1 displays the count of hits in 2014 for the concepts in the top and bottom set of each module. Table 2 shows representative concepts, the top 5 and bottom 5 in each module, where (T) refers to the top 200 concepts and (B) to the bottom 200 concepts ranked by the count of concept hits in the EHR system.

Table 1. The counts of hits for the top 200 and bottom 200 concepts in each module.

|        | Health concerns | Tests and procedures | Medications |
| Top    | 865,039         | 58,606,872           | 2,446,961   |
| Bottom | 200             | 527                  | 200         |

Table 2. Representative concepts: the top 5 and bottom 5 in each module. (T): top 200 concepts; (B): bottom 200 concepts.

| Health concerns (T) | Health concerns (B) | Medications (T) | Medications (B) | Tests and procedures (T) | Tests and procedures (B) |
| Abdominal pain | Chronic obstructive pulmonary disease exacerbation | Norco | benzoyl peroxide | Complete Blood Count | Protein Electrophoresis |
| Pregnancy | Addison’s disease due to autoimmunity | Percocet | Serenagen | Chemistry 12 Panel | Adenosine Deaminase |
| Hypertension | Retinoblastoma | Flonase | Tyrex-1 | CBC, Differential | Acid Ham’s Test |
| Depression | Cystic Kidney Disease | Ibuprofen | sumatriptan | Chemistry 7 Serum or Plasma | Cholesterol/HDL |
| Anxiety | Right to left shunt due to patent ductus arteriosus (PDA) | Augmentin | Levothyroxine | Lipid Profile | Beta Melanocyte Stimulating Hormone |

Result from auto-detection of empty responses

We ran the auto-detection tool against the top and bottom 200 concepts in the Health concerns, Medications, and Tests and procedures modules; each concept generated one infobutton request. The process produced a report of empty responses for all the e-resources. As shown in Figure 1, the last step captures the relevant output of the auto-detection: each record includes the calling module, the concept, the website name, the number of items returned by the website, the HTTP status code of the response, and the URL if the link did not return any items for the query. We processed and stored all ‘empty responses’ from the e-resources linked to from infobuttons in MyHealth: Family Doctor, MedlinePlus, Krames, Health Finder, Mayo Clinic, Drugs.com, and Lab Tests Online.

The number of empty responses was classified per e-resource and per module. Table 3 reports the number of empty responses from each module (rows) and each of the seven e-resources (columns). Each module contributes two sets of concepts, (T) for the top 200 and (B) for the bottom 200 ranked by concept hits in the EHR system, as described in the Dataset section. The symbol ‘-’ means that the e-resource in that column does not support the module in that row by design. Figure 2 visualizes the result from Table 3, displaying the number of empty responses (vertical axis) for each e-resource across the top and bottom 200 concepts in the three modules (horizontal axis). Health concerns has five e-resources, Medications has four, and Tests and procedures has three.

Table 3. Total number of empty responses from each e-resource in three modules. (T): top 200 concepts; (B): bottom 200 concepts; ‘-’: module not supported by the e-resource.

|                          | Family Doctor | MedlinePlus | Krames | Health Finder | Mayo Clinic | Drugs.com | Lab Tests Online |
| Health concerns (T)      | 36            | 22          | 0      | 70            | 15          | -         | -                |
| Health concerns (B)      | 171           | 110         | 4      | 172           | 104         | -         | -                |
| Medications (T)          | -             | 4           | 0      | -             | 3           | 0         | -                |
| Medications (B)          | -             | 57          | 3      | -             | 58          | 0         | -                |
| Tests and procedures (T) | -             | -           | 2      | -             | 48          | -         | 64               |
| Tests and procedures (B) | -             | -           | 3      | -             | 97          | -         | 135              |

Figure 2. The number of empty responses for each e-resource in the top and bottom 200 concepts in three modules. (T): top 200 concepts; (B): bottom 200 concepts.

By module, Health concerns (B) has the most empty responses, while Medications (T) has the fewest. By e-resource, Drugs.com has zero empty responses across all infobutton requests, while Health Finder has the most. We analyze these results further in the next section.

We also record the number of returned items from each e-resource. Table 4 shows the total number of returned search items from e-resources given the top and bottom 200 concepts in three modules. The result will be further analyzed in the next section.

Table 4. Total number of returned items from each e-resource in three modules. (T): top 200 concepts; (B): bottom 200 concepts; ‘-’: module not supported by the e-resource.

|                          | Family Doctor | MedlinePlus | Krames | Health Finder | Mayo Clinic | Drugs.com  | Lab Tests Online |
| Health concerns (T)      | 6386          | 8289        | 4000   | 983           | 116968      | -          | -                |
| Health concerns (B)      | 513           | 12815       | 10598  | 139           | 19813       | -          | -                |
| Medications (T)          | -             | 3202        | 7611   | -             | 26012       | 3065705    | -                |
| Medications (B)          | -             | 8722        | 11141  | -             | 9527        | 11577212   | -                |
| Tests and procedures (T) | -             | -           | 13215  | -             | 15573       | -          | 3804             |
| Tests and procedures (B) | -             | -           | 16806  | -             | 5517        | -          | 362              |

Analysis of empty responses

The empty responses from websites were categorized by module and by e-resource. From the results, we evaluated infobutton performance by module, by e-resource, and by support for the HL7 standard (or the lack thereof).

An e-resource can be configured to support one or multiple modules. For example, Krames supports all three modules, while Lab Tests Online supports only tests and procedures. In our results, per-concept calculations account only for the concepts in the modules an e-resource supports, in order to normalize comparisons between e-resources with different coverage.

Table 5 presents the rate of empty responses and the average number of returned items for each e-resource; the last column reports the average over all e-resources. For each e-resource, the rate of empty responses is computed over all infobutton requests in the process as:

rate of empty responses per e-resource = (total number of empty responses for the e-resource) / (total number of infobutton visits to the e-resource's website)

Table 5. Rate of empty responses and average returned items per e-resource.

|                         | Family Doctor | MedlinePlus | Krames | Health Finder | Mayo Clinic | Drugs.com | Lab Tests Online | Avg    |
| Rate of empty responses | 51.75%        | 24.13%      | 1.00%  | 60.50%        | 27.08%      | 0.00%     | 49.75%           | 30.60% |
| Avg returned items      | 36            | 59          | 54     | 7             | 224         | 37,450    | 21               | 5,407  |

The average number of returned items per request for an e-resource is computed as:

average returned items per request = (total number of returned items for the e-resource) / (total number of infobutton visits to the e-resource's website)
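The two per-e-resource metrics defined above can be tallied directly from the detection records. A minimal sketch, assuming each record is a (resource, items_returned) pair emitted by the detection step; the resource name and counts below are toy values:

```python
from collections import defaultdict

def summarize(records):
    """Aggregate per-e-resource statistics from detection records.

    Each record is (resource, items_returned); a visit with zero items
    counts as an empty response.
    """
    visits = defaultdict(int)   # total infobutton visits per e-resource
    empties = defaultdict(int)  # visits that returned zero items
    items = defaultdict(int)    # total items returned
    for resource, n_items in records:
        visits[resource] += 1
        items[resource] += n_items
        if n_items == 0:
            empties[resource] += 1
    return {
        r: {
            "rate_empty": empties[r] / visits[r],
            "avg_items": items[r] / visits[r],
        }
        for r in visits
    }

# Toy data: four visits to one hypothetical e-resource, one of them empty.
stats = summarize([("ResourceA", 12), ("ResourceA", 0),
                   ("ResourceA", 8), ("ResourceA", 4)])
print(stats)  # ResourceA: 25% empty, 6.0 items on average
```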

The average rate of empty responses across all e-resources is 30.60%. HealthFinder.gov had the highest rate, almost double the overall average. Drugs.com and Krames have the lowest ‘empty response’ rates. Drugs.com has zero empty responses because of its large database and possibly broader matching criteria, which can be inferred from its average of 37,450 returned items per requested concept, far larger than any other e-resource. Excluding Drugs.com, the average number of returned items across the remaining e-resources drops from 5,407 to 57 per requested concept. Krames has a low empty-response rate of 1.00% and a moderate average of 54 returned items, close to the average with Drugs.com excluded. Health Finder returned few items and has a high rate of empty responses, which suggests limited concept coverage. Mayo Clinic has a lower-than-average rate of empty responses and a higher-than-average number of returned items, which suggests good content coverage and a higher likelihood of returning results.

Table 6 reports the percentage of empty responses for each e-resource by concept set. The concepts are drawn from the top 200 and bottom 200 of every module the e-resource supports; for example, Krames supports three modules, so its top set covers the top 200 from all three. For an e-resource, each percentage is the number of empty responses divided by the total number of concepts in the top or bottom set, respectively. Noticeably, the top concepts, with an average empty-response rate of 14.69%, always have a lower percentage of empty responses than the bottom concepts, at 46.51%. That is, all the e-resources support the most commonly used concepts better than the least used ones. Drugs.com has zero empty responses, and Krames and MedlinePlus have the next lowest rates; however, MedlinePlus has substantially more empty responses for the bottom concepts than Krames does. Health Finder and Family Doctor have the highest rates of empty responses for the bottom concepts, at 86.00% and 85.50% respectively. Given such high rates, we would suggest removing Health Finder and Family Doctor from the resource list for these bottom concepts so that users focus on resources that return non-empty results.

Table 6. Empty responses from e-resources in the top and bottom 200 concepts.

|                 | Family Doctor | MedlinePlus | Krames | Health Finder | Mayo Clinic | Drugs.com | Lab Tests Online | Avg    |
| Top concepts    | 18.00%        | 6.50%       | 0.33%  | 35.00%        | 11.00%      | 0.00%     | 32.00%           | 14.69% |
| Bottom concepts | 85.50%        | 41.75%      | 1.67%  | 86.00%        | 43.17%      | 0.00%     | 67.50%           | 46.51% |

Table 7 presents the rate of empty responses for each e-resource by module. The symbol ‘-’ means the e-resource in that column is not supported in the module in that row. For an e-resource, the rate is the number of concepts with empty responses divided by the total number of concepts in the module. Medications has the lowest average rate of empty responses at 10.42%, compared to the highest, 35.20%, in Health concerns. Medications also had the highest infobutton usage in prior research, which suggests that the e-resources provide better coverage in modules with higher infobutton usage. Krames is supported in all three modules and performs well, with the lowest rate of empty responses in each. Health Finder and Family Doctor each support only the Health concerns module yet have high rates of empty responses, at 60.50% and 51.75% respectively; we would suggest lowering the access priority of these two e-resources in Health concerns so users are less likely to be frustrated by empty responses. MedlinePlus and Mayo Clinic perform similarly in Health concerns and Medications, both close to the module averages. Lab Tests Online has a high rate of empty responses, so we would suggest lowering its access priority while the vendor works to broaden the search criteria and improve content coverage on the back end.

Table 7. Empty responses from e-resources in supported modules. ‘-’: module not supported by the e-resource.

|                      | Family Doctor | MedlinePlus | Krames | Health Finder | Mayo Clinic | Drugs.com | Lab Tests Online | Avg    |
| Health concerns      | 51.75%        | 33.00%      | 1.00%  | 60.50%        | 29.75%      | -         | -                | 35.20% |
| Medications          | -             | 15.25%      | 0.75%  | -             | 15.25%      | 0.00%     | -                | 10.42% |
| Tests and procedures | -             | -           | 1.25%  | -             | 26.50%      | -         | 49.75%           | 25.83% |

The difference in the rate of empty responses between e-resources with and without an HL7-standardized API is sharp, as shown in Table 8. The rate is computed as the number of empty responses in each API group divided by the total number of infobutton requests fired for that group. Table 8 compares the two groups in each module for the top and bottom concepts. All e-resources have a fairly low rate of empty responses for the top concepts in both columns, but the e-resources supporting the HL7 standard API perform noticeably better in Health concerns and in Tests and procedures. In the Medications module, performance is generally the best for both groups, with similarly low rates of empty responses. The non-HL7-standard APIs, especially Drugs.com with its zero empty responses, are mostly based on free-text search with adjustable, broader matching criteria, which lowers their rate of empty responses. Overall, the bottom 200 concepts accessed through non-HL7 APIs in Health concerns have the highest rate of empty responses. Interestingly, the top 200 concepts accessed through non-HL7 APIs in Medications have the lowest rate, mostly because Drugs.com contributes no empty responses in this module. Even against that contribution from Drugs.com, the HL7 standard API performs similarly in Medications.

Table 8. Rate of empty responses in e-resources supporting HL7 API and non-HL7 API.

|                          | HL7 standard API | Non-HL7 standard API |
| Health concerns (T)      | 5.50%            | 20.17%               |
| Health concerns (B)      | 28.50%           | 74.50%               |
| Medications (T)          | 1.00%            | 0.75%                |
| Medications (B)          | 15.00%           | 14.50%               |
| Tests and procedures (T) | 1.00%            | 28.00%               |
| Tests and procedures (B) | 1.50%            | 58.00%               |

Discussion

This paper analyzes empty responses from the infobutton manager by module, by e-resource, and by HL7 standard support.

The common concepts always produce fewer empty responses than the uncommon concepts; all the e-resources perform better at returning content for common concepts than for rare ones. We conclude that the overall content coverage for common concepts is good, while more effort should be spent improving content coverage for the less commonly used concepts.

By module, Medications has the best performance, with fewer empty responses and more returned items per requested concept. That matches the fact that Medications has the highest infobutton usage. For this module with the highest demand, we conclude that infobuttons support the usage well.

By e-resource, Krames is notably the best of the seven e-resources analyzed in this paper, with the lowest rate of empty responses and a moderate number of returned items. HealthFinder and Family Doctor will need to reduce their empty responses significantly in order to be viable e-resources for the infobutton. Drugs.com never brings users to a blank page, but that leaves open questions about users’ perceptions of the content returned; further analysis will be needed to assess the quality of the returned items.

E-resources supporting the HL7 standard consistently perform better than non-HL7 e-resources by a significant margin. As such, we would encourage e-resource providers to adopt the HL7 standard where possible.

Conclusion and Future work

We present a method to analyze the content returned by infobutton requests. Empty responses are automatically detected from e-resources, and the results are analyzed by module and by e-resource. We suggest that more effort is needed to cover content for the less common conditions, such as less used concepts and less used modules. We also encourage e-resource providers to implement the HL7 standard so that infobutton performance can be improved.

In future work, we will analyze the quality of the content from different e-resources. We will also review the empty responses and categorize the reasons for the failures. Furthermore, we can build a tool that analyzes the content offline and dynamically builds the list of e-resources, promoting those with better content and thereby avoiding empty responses in real time.

References

1. Del Fiol G, Curtis C, Cimino JJ, Iskander A, Kalluri AS, Jing X, Hulse NC, Long J, Overby CL, Schardt C, Douglas DM. Disseminating context-specific access to online knowledge resources within electronic health record systems. MedInfo. 2013:672–676.
2. Collins SA, et al. Information needs, Infobutton Manager use, and satisfaction by clinician type: a case study. J Am Med Inform Assoc. 2009;16(1):140–142. doi: 10.1197/jamia.M2746.
3. Del Fiol G, Workman TE, Gorman PN. Clinical questions raised by clinicians at the point of care: a systematic review. JAMA Intern Med. 2014;174(5):710–8. doi: 10.1001/jamainternmed.2014.368.
4. Ely JW, Osheroff JA, Chambliss ML, Ebell MH, Rosenbaum ME. Answering physicians’ clinical questions: obstacles and potential solutions. J Am Med Inform Assoc. 2005;12(2):217–24. doi: 10.1197/jamia.M1608.
5. Covell DG, Uman GC, Manning PR. Information needs in office practice: are they being met? Ann Intern Med. 1985;103(4):596–9. doi: 10.7326/0003-4819-103-4-596.
6. Ely JW, Osheroff JA, Chambliss ML, Ebell MH, Rosenbaum ME. Answering physicians’ clinical questions: obstacles and potential solutions. J Am Med Inform Assoc. 2005;12(2):217–24. doi: 10.1197/jamia.M1608.
7. HL7 Version 3 Standard: Context Aware Knowledge Retrieval Application (“Infobutton”), Knowledge Request, Release 2. Accessed March 12, 2015. http://www.hl7.org/implement/standards/product_brief.cfm?product_id=208
8. Del Fiol G, Huser V, Strasberg HR, Maviglia SM, Curtis C, Cimino JJ. Implementations of the HL7 Context-Aware Knowledge Retrieval (“Infobutton”) Standard: challenges, strengths, limitations, and uptake. J Biomed Inform. 2012;45(4):726–35. doi: 10.1016/j.jbi.2011.12.006.
9. Cimino JJ, Jing X, Del Fiol G. Meeting the Electronic Health Record “Meaningful Use” criterion for the HL7 Infobutton Standard using OpenInfobutton and the Librarian Infobutton Tailoring Environment (LITE). AMIA Annu Symp Proc. 2012;2012:112–120.
10. Maviglia SM, Yoon CS, Bates DW, Kuperman G. KnowledgeLink: impact of context-sensitive information retrieval on clinicians’ information needs. J Am Med Inform Assoc. 2006;13:67–73. doi: 10.1197/jamia.M1861.
11. OpenInfobutton. Accessed March 12, 2015. http://www.openinfobutton.org/
12. MyHealth patient portal. Accessed September 25, 2014. https://myhealth.intermountainhealthcare.org/login/
13. Del Fiol G, Rocha RA, Clayton PD. Infobuttons at Intermountain Healthcare: utilization and infrastructure. AMIA Annu Symp Proc. 2006:180–4.
14. Del Fiol G, Haug PJ, Cimino JJ, Narus SP, Norlin C, Mitchell JA. Effectiveness of topic-specific infobuttons: a randomized controlled trial. J Am Med Inform Assoc. 2008;15(6):752–759. doi: 10.1197/jamia.M2725.
15. Del Fiol G, Haug PJ. Use of classification models based on usage data for the selection of infobutton resources. AMIA Annu Symp Proc. 2007:171–175.
16. Del Fiol G, Rocha RA, Clayton PD. Infobuttons at Intermountain Healthcare: utilization and infrastructure. AMIA Annu Symp Proc. 2006:180–4.
