Journal of the American Medical Informatics Association (JAMIA). 2010 Jan–Feb;17(1):104–107. doi: 10.1197/jamia.M3294

Unintended errors with EHR-based result management: a case series

Thomas R Yackel 1, Peter J Embi 2
PMCID: PMC2995631  PMID: 20064810

Abstract

Test result management is an integral aspect of quality clinical care and a crucial part of the ambulatory medicine workflow. Correct and timely communication of results to a provider is the necessary first step in ambulatory result management and has been identified as a weakness in many paper-based systems. While electronic health records (EHRs) hold promise for improving the reliability of result management, the complexities involved make this a challenging task. Experience with test result management is reported, four newly identified categories of result management errors are outlined, and solutions developed during a 2-year deployment of a commercial EHR are described. Recommendations for improving test result management with EHRs are then given.

Keywords: Continuity of patient care; hospitals, teaching; humans; laboratory techniques and procedures; medical records systems, computerized; office management/standards; outcome assessment (healthcare); ambulatory care

Introduction

The management of test results is an integral part of clinical practice and can be a particularly challenging aspect of ambulatory medicine. Before the introduction of electronic health records (EHRs), results management was identified as problematic and associated with errors; failure to follow up on abnormal test results is cited as a common cause of medical malpractice lawsuits,1 2 3 and is “one of the most problematic safety issues in the practice of outpatient medicine.”1 An advisory group of hospital representatives in Massachusetts found that errors in result communication were frequent and had the potential for serious harm.4 Such findings have led accrediting organizations, such as the Joint Commission, to set goals for improving communication among caregivers, especially with regard to patients' critical test result information.5

The frequency of non-follow-up of outpatient test results has been shown to vary from 2 to 50% among primary care practitioners.6 Several explanations for why appropriate action may not be taken on abnormal test results have been offered7:

  • results not correctly communicated to provider;

  • results communicated but never received or reviewed by the provider;

  • results reviewed, but appropriate action not recommended by provider;

  • appropriate recommendation made by provider, but action not carried out.

In this paper we focus on the first step, which is especially important in the ambulatory setting, where the steps of test ordering, scheduling, and resulting may be separated by long periods of time. Indeed, in a survey of ambulatory care providers, 25% of respondents reported having no reliable method of ensuring that they received the results of all the tests they had ordered.8

Reliability of ambulatory test result management systems

The precise reliability of ambulatory test result reporting systems (paper, electronic, telephonic, or otherwise) is unknown, but available reports suggest that problems are common and directly impact patient care. In one study of newly diagnosed osteoporosis patients, 33% did not receive recommended treatment. In two-thirds of these untreated cases, neither examination of the medical record nor contact with the provider yielded evidence that the results had been reviewed.7

In a study of eight family medicine ambulatory practices, 88% reported errors in tracking and return of test results.9 In another study, a survey of 262 physicians working in 15 internal medicine practices found that 18% of doctors had experienced a concerning delay in becoming aware of an EHR test result five or more times in the past 2 months.1

Unintended errors have been documented with the implementation of other electronic patient care information systems;10 11 however, to our knowledge, there is no literature describing new types of errors associated with electronic test results management in EHRs. In this case series, we describe our experience with such events, describe error types identified, and discuss the need for system improvements to address them in light of the advancing adoption of EHRs.

Case description

Setting

The setting of this case series is an academic hospital and affiliated clinics with 600 000 ambulatory visits per year, where a commercial, integrated ambulatory EHR with results-management capabilities was implemented for use by the organization's 2166 healthcare providers. At the time of this report, the EHR forwards about 54 000 test results per month to providers' electronic inboxes based upon routing logic.

Information systems

Test result messages were generated from in-house laboratory (Centricity Ultra Laboratory, GE Healthcare, Barrington, Illinois, USA), pathology (Tamtron Powerpath, IMPAC Medical Systems, Sunnyvale, California, USA), blood bank (Hemacare HCLL, Mediware, Lenexa, Kansas, USA), cardiology (Centricity Cardiology, GE Healthcare, Chalfont St Giles, UK), and radiology (IDX Rad RIS, IDX, Seattle, Washington, USA) information systems and passed to the EHR system (EpicCare, Epic Systems, Madison, Wisconsin, USA) using HL7-compliant interfaces via a commercially available interface engine (OpenLink, Siemens Medical Systems, Malvern, Pennsylvania, USA). Result routing schemes were developed for each practice site based on local practice patterns. The identified schemes were then translated into rules by local analysts and entered into the production system. Routing logic was validated in a test environment by manually creating sample results and then observing whether they filed correctly to the EHR's electronic inbox for the test provider.
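
For illustration only, a routing scheme of this kind can be thought of as an ordered set of rules that map attributes of an incoming result message to a destination inbox or pool, with undeliverable results surfaced for manual handling. The minimal Python sketch below renders that idea; the field names, rule structure, and destinations are assumptions for this example, not the vendor's or interface engine's actual configuration.

    # Hypothetical sketch of attribute-based result routing; names are illustrative.
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class ResultMessage:
        authorizing_provider: Optional[str]   # assumed fields a rule might key on
        ordering_department: Optional[str]
        patient_class: str                    # e.g. "inpatient" or "outpatient"
        test_type: str                        # e.g. "gyn cytology", "chemistry"

    @dataclass
    class RoutingRule:
        description: str
        matches: Callable[[ResultMessage], bool]
        destination: str                      # an individual inbox or a "pool"

    def route_result(msg: ResultMessage, rules: List[RoutingRule]) -> str:
        """Return the first matching destination; otherwise fall back to an error queue."""
        for rule in rules:
            if rule.matches(msg):
                return rule.destination
        return "ERROR_QUEUE"                  # undeliverable results get manual review

    # Validation as described in the text: create sample results in a test
    # environment and observe whether each files to the expected inbox or pool.
    rules = [
        RoutingRule("GYN cytology to clinic pool",
                    lambda m: m.test_type == "gyn cytology",
                    "POOL_WOMENS_HEALTH"),
        RoutingRule("Default to authorizing provider's inbox",
                    lambda m: m.authorizing_provider is not None,
                    "PROVIDER_INBOX"),
    ]
    sample = ResultMessage("Dr. A", "Family Medicine", "outpatient", "gyn cytology")
    assert route_result(sample, rules) == "POOL_WOMENS_HEALTH"

A first-match design with an explicit fallback keeps routing behavior testable with exactly the kind of sample-result validation described above.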

Method

Cases of result delivery failure were found through a combination of user reports of errant or missing results and system monitoring using customized reports to locate potential sources of result management errors. In most cases, such customized reports were developed to explore the source of error after an issue was identified and reported by an end user.

Upon identification of multiple cases describing test result routing and management errors, the authors undertook an analysis of the reports. Cases were analyzed by the lead author and grouped into categories. These cases and their descriptors were then discussed among the authors, and the descriptions and categorizations were refined using an iterative approach. The results presented are qualitative descriptions of the major findings and their implications.

Example

Over a 2-year period from 2005 to 2007, coinciding with the first 2 years of a planned 3-year deployment of the ambulatory EHR to multiple practice sites, the vast majority of laboratory result routing events functioned as intended. However, seven error types were identified that caused a substantial delay or disruption in result delivery to providers' electronic inboxes and that led to further investigation and case finding by our group. Upon analysis, these seven error types were logically grouped into four distinct error categories: (1) interface and results routing logic errors, (2) provider record issues, (3) EHR system settings, and (4) system maintenance. Each of these is described in the sections that follow.

Interface and results routing logic errors

Pathology system interface errors

After 2 months of live system use in one practice, a single user reported that gynecological cytology test results were not being delivered to the electronic inbox. Further investigation revealed that anatomical pathology results were filing appropriately in the EHR but were not generating messages in providers' EHR inboxes. The problem was traced back to a design decision made regarding inpatient test results shortly after EHR deployment. At that time, some providers complained that inpatient test results were filing to their inboxes, and they preferred to receive only ambulatory results there, since they had a different workflow for reviewing inpatient results. In response, a change was made to the EHR interface to block all test results with a status of “inpatient” from generating inbox messages. Unfortunately, testing failed to reveal that the pathology system did not differentiate between ambulatory and inpatient status and classified all pathology tests as “inpatient” by default. Thus, all pathology results were blocked from creating inbox messages.
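
As a hedged illustration of this failure mode (the actual change was made in the vendor interface configuration, not in application code, and the field names here are assumptions), the sketch below shows how a filter keyed on patient class silently drops every message from a feed that populates that field with a constant default:

    # Hypothetical rendering of the "inpatient" filter described above.
    def should_create_inbox_message(result: dict) -> bool:
        # Design decision: suppress inbox messages for inpatient results, since
        # providers reviewed those through a separate workflow.
        return result.get("patient_class") != "inpatient"

    # The pathology system did not differentiate settings and sent every result
    # with the same default, so no pathology result ever generated an inbox message.
    pathology_result = {"test_type": "gyn cytology", "patient_class": "inpatient"}
    ambulatory_lab_result = {"test_type": "chemistry", "patient_class": "outpatient"}

    assert should_create_inbox_message(pathology_result) is False   # silently dropped
    assert should_create_inbox_message(ambulatory_lab_result) is True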

Three hundred ninety-two cases were affected by the delay dating back 3 months from the time of incident discovery. A quality review team was assembled, and a clinical review was performed on all tests to assure that any significant findings were identified and appropriately acted upon. This process revealed that there were no adverse clinical outcomes as a result of the delay in results delivery. Affected providers were given paper copies of all results in addition to having results sent anew to their electronic inboxes.

Solution: Interface settings were changed to unblock pathology results and deliver them via the inbox, regardless of the setting in which the specimen was acquired (inpatient or outpatient). Furthermore, it was recommended that delivery of pathology results continue on paper in addition to electronic delivery. As an additional verification that results were received on all analyzed specimens, reports of received specimens were generated directly from the pathology information system and provided to practices so that they could track and verify that results were received for all patients.

Results misdirected to user “pool”

A medical assistant reported that results were not being received for the providers she worked with. In the case of this particular clinic, a result routing scheme that forwarded results to a group or “pool” of medical assistants was used. Due to a practice reorganization, the names of the practice's “pools” were changed; however, the result logic was not updated to use the new “pool” names, and results continued to be forwarded to the old “pools” that users were no longer checking. Eighty-five patient results remained unread in the old “pool” when the issue was discovered.

Solution: A mitigation plan was implemented involving change control practices for the EHR team when creating or renaming “pools,” to ensure that results flow would not be disrupted in the future.
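
One way to make such change control concrete is an automated consistency check, run whenever pools are created, renamed, or retired, that flags routing rules pointing at destinations no one is monitoring. The sketch below is an assumed illustration, not the EHR's actual tooling:

    # Hypothetical check that routing destinations still refer to active, monitored pools.
    from typing import Dict, List, Set

    def stale_routing_destinations(routing_rules: List[Dict], active_pools: Set[str]) -> List[Dict]:
        """Return rules whose destination pool no longer exists or is no longer monitored."""
        return [rule for rule in routing_rules
                if rule["destination_type"] == "pool"
                and rule["destination"] not in active_pools]

    rules = [
        {"rule_id": 1, "destination_type": "pool", "destination": "FM_MA_POOL_OLD"},
        {"rule_id": 2, "destination_type": "pool", "destination": "FM_MA_POOL_NEW"},
    ]
    print(stale_routing_destinations(rules, {"FM_MA_POOL_NEW"}))   # rule 1 flagged for review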

Provider record issues

Tables out of sync

A user contacted the EHR support desk to report a test result that appeared in the EHR but not in the inbox as expected. A review of the system's order record, which contains details of the order such as ordering user, date/time stamps, and other variables, indicated that the data stored in the field for the provider authorizing the test was a placeholder variable, not a real provider, indicating a system problem. Further investigation revealed that the authorizing provider's name had not been added to the laboratory information system's (LIS's) provider directory prior to the order being placed. The LIS by default inserted a placeholder variable for the provider name. Lab personnel subsequently updated a different, non-interfaced field with the correct authorizing provider's name in the LIS. (This field would have printed on the paper result and allowed for proper manual delivery.) The placeholder variable was transmitted with the result over the electronic interface and overwrote the authorizing provider name in the EHR. No inbox message could be generated because the EHR did not have a valid provider name as the authorizing provider. The system-wide impact of this problem was not measured.

Solution: A change was made to the interface logic to treat the EHR as the “authority” for the ordering provider, with no overwrites possible, thus preventing future occurrences of this problem.
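
A minimal sketch of this filing rule, assuming hypothetical field names rather than the actual interface specification, is shown below; the essential point is that the provider already recorded on the EHR order is never replaced by whatever value arrives with the result:

    # Hypothetical inbound filing rule: the EHR order record is the authority for
    # the authorizing provider and is never overwritten by the sending system.
    def file_result(ehr_order: dict, incoming_result: dict) -> dict:
        filed = dict(ehr_order)
        filed["result_value"] = incoming_result["result_value"]
        # Ignore the provider field sent by the LIS, including placeholder values
        # inserted when its provider directory is out of date.
        filed["authorizing_provider"] = ehr_order["authorizing_provider"]
        return filed

    order = {"order_id": "A1", "authorizing_provider": "Dr. B"}
    hl7_result = {"order_id": "A1", "authorizing_provider": "UNKNOWN_PROVIDER",
                  "result_value": "negative"}
    assert file_result(order, hl7_result)["authorizing_provider"] == "Dr. B"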

Departed providers

Multiple instances were discovered of providers who had left the institution, or had moved to a different unit not using the EHR, but who continued to have active EHR accounts and inboxes. The lack of communication regarding these personnel changes resulted in unchecked results and other messages accumulating in the inbox. In the paper workflow, this would have been immediately apparent, as most physical mailboxes were grouped by department, and other department members were aware of absences so reassignment could occur.

Solution: The issue was mitigated through electronic monitoring of all providers' inboxes (using system reports) to look for trends that could indicate a lack of attention to the inbox, such as a high number of unread messages, a long period of time since the last login to the system, or the lack of future appointments scheduled. Additionally, routing logic was created for providers who departed the institution so those results could be seen in the practice and forwarded to the appropriate staff for action.
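
As an illustration of the monitoring described above, the sketch below flags inboxes whose activity pattern suggests results may not be reviewed; the thresholds and field names are assumptions for this example, not the reports actually used:

    # Hypothetical report of inboxes that may no longer be attended.
    from datetime import date, timedelta
    from typing import Dict, List

    def unattended_inboxes(providers: List[Dict],
                           max_unread: int = 200,
                           max_days_since_login: int = 30) -> List[str]:
        """Flag providers whose inbox activity suggests results are not being reviewed."""
        today = date.today()
        flagged = []
        for p in providers:
            stale_login = (today - p["last_login"]).days > max_days_since_login
            if (p["unread_messages"] > max_unread
                    or stale_login
                    or not p["has_future_appointments"]):
                flagged.append(p["provider_id"])
        return flagged

    roster = [
        {"provider_id": "prov-17", "unread_messages": 412,
         "last_login": date.today() - timedelta(days=95), "has_future_appointments": False},
        {"provider_id": "prov-22", "unread_messages": 3,
         "last_login": date.today() - timedelta(days=1), "has_future_appointments": True},
    ]
    print(unattended_inboxes(roster))   # ['prov-17']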

EHR system settings

User record system configuration errors

Several issues were discovered related to incorrect system configuration that resulted in non-delivery or non-review of patient results. In one case, a user was found to be associated with an inbox that received results, but the user did not have the security access needed to open that inbox. In another case, a provider's record was erroneously matched to the wrong user, resulting in messages being directed to an unmonitored inbox.

Solution: Both of these sources of error were mitigated through the implementation of system reports to scan for system configuration errors.

Non-CPOE (unsolicited) orders

Test orders entered through computerized provider order entry (CPOE) carry details that affect the test result routing scheme but that are not available when tests are ordered directly in ancillary (interfaced) systems. For example, the department in which the order was placed is a field that drives certain result delivery logic; however, this field does not exist in the lab or radiology systems. Orders placed directly in those systems for add-on tests, during downtime, or for reflex orders could be routed improperly without this field's data. In all cases, the results were routed to a provider; however, this may not have been the provider who was expecting the result, thus creating a potential for delayed follow-up. We have not yet identified any cases where this potential problem has caused harm.

Solution: Users are asked to forward unexpected test result messages to the correct follow-up provider or contact our support desk for assistance in correctly routing these messages.

System maintenance-related errors

During a cleanup of what were believed to be unused inboxes, some users' inboxes were inadvertently deleted. Users were unable to see their results for several days while messages were restored from backup, which caused delays in result management.

Solution: New procedures (including contacting the provider prior to removing in-basket messages) were put in place to verify whether inboxes were actually in use in order to avoid such errors.

Discussion

Over a 2-year period, hundreds of thousands of test results were electronically communicated to providers' electronic inboxes. The overwhelming majority of these results were routed correctly and instantaneously once finalized. The delivery, receipt, and follow-up actions by providers were recorded and are electronically auditable.

While the advantages of instantaneous delivery and comprehensive tracking are significant, several examples of unexpected result management errors occurred while using an electronic result communication system. The reasons for the errors were varied and included problems with routing logic, provider records, system settings, and maintenance. A lack of understanding of the complex interplay between systems, lack of adequate testing, failure to follow procedures, and human error contributed to these mistakes. In the cases where errors occurred, there was inadequate redundancy built into the process to tolerate faults, and manual testing had a limited ability to find configuration and data integrity issues prior to their occurrence.

In our experience, most cases were found by end users and reported to the EHR team, often after the error had occurred multiple times. This suggests that the sensitivity of current electronic systems to monitor for errors is low and that adequate mechanisms for identifying such issues are lacking. A true failure rate thus cannot be calculated with certainty, but it is likely higher than our experience would suggest, since it is reasonable to conclude that some errors never came to our attention.

The majority of the mitigation plans to resolve the errors we identified involved careful monitoring of the system with reporting. It is our belief that careful testing or monitoring of the result management system can lead to a reduction in error rates either through avoidance or by limiting the extent to which errors occur or are allowed to persist. Indeed, it has been recommended that organizations adopt “fail-safe” programs to ensure that redundant, backup procedures are in place in case there is an initial communications breakdown.12

Recommendations for improved systems

Based upon our experience with ambulatory result routing, we propose these steps to improve the safety of the test result management process:

  1. Develop fault-tolerant systems that automatically report delivery failures. Safe patient care depends on timely and accurate delivery of results to providers. This is especially true in the ambulatory setting where test ordering and resulting are often asynchronous occurrences separated by weeks or months. A “best efforts” approach to reporting of results does not meet the current need. Result management systems should be designed to tolerate multiple faults and still correctly deliver results. In the case where the system cannot deliver, results should go to an error queue—much like an interface error queue—to be analyzed and delivered manually. In order to design fault-tolerant systems that anticipate failure points, we must have access to data on past events. Systematic reporting of actual errors by users and implementers of results management systems should be encouraged.

  2. Use robust testing to find rare errors that occur both within and between systems. Current standards for testing result management systems do not exist. Testing for fault tolerance requires specifically designed testing scripts that anticipate the failure points and seek to verify system adequacy. As types of errors are categorized and recorded, specific testing designed to exploit those problems should be built.

  3. Implement tracking mechanisms for critical tests, such as cancer screening and diagnostics. Proactive tracking for critical tests has often been a component of paper-based routing systems. For example, a paper log of Papanicolaou tests sent out by a clinical practice, with a regular check on results received, serves to verify proper result management. Electronic systems should offer similar capabilities that allow local practices to monitor those tests they deem crucial (a minimal sketch of such a reconciliation check follows this list). Automatic and manual ticklers for less crucial tests would also help identify errors more quickly and prevent patient harm.

  4. Deliver results directly to patients. No one has a greater stake in the proper management of test results than the patient. With the advent of personal health records, direct notification of patients is feasible and provides an additional layer of safety in the result delivery system. Established practice, medical culture, and some laws prevent or limit such disclosures; however, these practices are not in sync with patient-directed healthcare and reduce overall system safety.13
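
As referenced in recommendation 3 above, an electronic analogue of the paper Papanicolaou-test log is a periodic reconciliation of critical tests ordered against results received. The sketch below is a hypothetical illustration under assumed field names and intervals, not a description of any particular product:

    # Hypothetical reconciliation of critical tests ordered versus results received.
    from datetime import date, timedelta
    from typing import Dict, List, Set

    def overdue_critical_tests(orders: List[Dict], results_received: Set[str],
                               max_days_outstanding: int = 30) -> List[Dict]:
        """Return critical test orders with no result after a practice-defined interval."""
        cutoff = date.today() - timedelta(days=max_days_outstanding)
        return [o for o in orders
                if o["order_id"] not in results_received and o["ordered_on"] <= cutoff]

    orders = [
        {"order_id": "pap-1001", "test": "Pap test", "ordered_on": date.today() - timedelta(days=45)},
        {"order_id": "pap-1002", "test": "Pap test", "ordered_on": date.today() - timedelta(days=5)},
    ]
    print(overdue_critical_tests(orders, {"pap-1002"}))   # pap-1001 flagged for follow-up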

Finally, while it might be tempting to attribute the errors noted above to the use of a particular health information system or even Health IT in general, an examination of the cases reveals that most of these errors actually resulted from local configuration and implementation decisions rather than from the technologies themselves. Indeed, the authors believe that these cases further support the emerging truism that errors related to Health IT are in most cases the result of human error in the implementation of new information and communication systems into our existing complex healthcare environments.10 Therefore, we contend that the main lesson arising from these cases is that those responsible for implementing health information systems must remain aware of the kinds of errors that might occur and must monitor for the unexpected consequences that will undoubtedly take place. The lesson is not to avoid use of such systems, which likely have the capacity for far greater benefit than harm if implemented and monitored properly.

Conclusion

As institutions adopt comprehensive ambulatory health information systems, increased attention should be paid to the test result delivery and management processes. There are many benefits to using an EHR to manage results. Electronic systems can improve the speed of result delivery and make results available anywhere a provider can access the EHR. Patients can have direct access to their results through a tethered personal health record. Results can also be shared among multiple providers and routed based upon conditional logic that can improve provider efficiency. Follow-up actions can be automatically captured. Electronic reporting can be used to monitor for cases of result delivery problems.

However, use of electronic test management systems does not necessarily eliminate result delivery failures and may create a new set of errors as illustrated by the cases presented in this manuscript. There is also the potential for multiple repeated errors before such issues are detected. EHR and related health information system designers and those responsible for integrated EHR implementation and management should be aware of the types of errors described and should take them into account as they build and implement such systems.

Acknowledgments

The authors wish to acknowledge the contributions of NM Deiorio and DA Handel in the preparation of this manuscript.

Footnotes

Funding: PJE's contributions to this manuscript were supported in part by a grant from the National Institutes of Health (UL1-RR-026314).

Competing interests: None.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

  1. Poon EG, Gandhi TK, Sequist TD, et al. “I wish I had seen this test result earlier!”: dissatisfaction with test result management systems in primary care. Arch Intern Med 2004;164:2223–8.
  2. Poon EG, Haas JS, Louise Puopolo A, et al. Communication factors in the follow-up of abnormal mammograms. J Gen Intern Med 2004;19:316–23.
  3. Doebbeling BN, Chou AF, Tierney WM. Priorities and strategies for the implementation of integrated informatics and communications technology to improve evidence-based practice. J Gen Intern Med 2006;21(Suppl 2):S50–7.
  4. Hanna D, Griswold P, Leape LL, et al. Communicating critical test results: safe practice recommendations. Jt Comm J Qual Patient Saf 2005;31:68–80.
  5. Schiff GD. Introduction: communicating critical test results. Jt Comm J Qual Patient Saf 2005;31:63–5, 61.
  6. Murff HJ, Gandhi TK, Karson AK, et al. Primary care physician attitudes concerning follow-up of abnormal test results and ambulatory decision support systems. Int J Med Inform 2003;71:137–49.
  7. Cram P, Rosenthal GE, Ohsfeldt R, et al. Failure to recognize and act on abnormal test results: the case of screening bone densitometry. Jt Comm J Qual Patient Saf 2005;31:90–7.
  8. Gandhi TK. Fumbled handoffs: one dropped ball after another. Ann Intern Med 2005;142:352–8.
  9. Elder NC. The testing process in family medicine: problems, solutions, and barriers as seen by physicians and their staff. J Patient Saf 2006;2:25–32.
  10. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc 2004;11:104–12.
  11. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197–203.
  12. Bates DW, Leape LL. Doing better with critical test results. Jt Comm J Qual Patient Saf 2005;31:66–7, 61.
  13. Berlin L. Communicating results of all radiologic examinations directly to patients: has the time come? AJR Am J Roentgenol 2007;189:1275–82.
