Journal of General Internal Medicine. 2021 Apr 27;37(1):137–144. doi: 10.1007/s11606-021-06772-y

Why Test Results Are Still Getting “Lost” to Follow-up: a Qualitative Study of Implementation Gaps

Andrew J Zimolzak 1,3, Umber Shahid 1,3, Traber D Giardina 1,3, Sahar A Memon 1,3, Umair Mushtaq 1,3, Lisa Zubkoff 2, Daniel R Murphy 1,3, Andrea Bradford 1,3, Hardeep Singh 1,3
PMCID: PMC8739406  PMID: 33907982

Abstract

Background

Lack of timely follow-up of abnormal test results is common and has been implicated in missed or delayed diagnosis, resulting in potential for patient harm.

Objective

As part of a larger project to implement change strategies to improve follow-up of diagnostic test results, this study sought to identify specifically where implementation gaps exist, as well as possible solutions identified by front-line staff.

Design

We used a semi-structured interview guide to collect qualitative data from Veterans Affairs (VA) facility staff who had experience with test results management and patient safety.

Setting

Twelve VA facilities across the USA.

Participants

Facility staff members (n = 27), including clinicians, lab and imaging professionals, nursing staff, patient safety professionals, and leadership.

Approach

We conducted a content analysis of interview transcripts to identify perceived barriers and high-risk areas for effective test result management, as well as recommendations for improvement.

Results

We identified seven themes to guide further development of interventions to improve test result follow-up. Themes related to trainees, incidental findings, tracking systems for electronic health record notifications, outdated contact information, referrals, backup or covering providers, and responsibility for test results pending at discharge. Participants provided recommendations for improvement within each theme.

Conclusions

Perceived barriers and recommendations for improving test result follow-up often reflected previously known problems and their corresponding solutions, which have not been consistently implemented in practice. Better policy solutions and improvement methods, such as quality improvement collaboratives, may bridge the implementation gaps between knowledge and practice.

Supplementary Information

The online version contains supplementary material available at 10.1007/s11606-021-06772-y.

Introduction

Lack of timely follow-up of abnormal diagnostic test results is common in healthcare.1–3 These “missed test results” are important contributors to diagnostic error,4 including missed and delayed cancer diagnoses.5–7 Systematic reviews have also shown increased hospitalization and inappropriate medication adjustment as outcomes of missed test results.8 Prior work has examined barriers and facilitators to proper test result follow-up9 and improved understanding of testing processes and workflows. Test result follow-up at transition points, such as hospital and emergency room discharge, has been particularly vulnerable.10–14

A 2015 report from the National Academies advocates for “approaches to identify, learn from, and reduce diagnostic errors and near misses”.15 Test result tracking and follow-up processes are promising targets for interventions.13 For instance, certain organizations are using electronic surveillance to identify patients with delayed follow-up of test results.16,17 This approach uses structured data in electronic health records (EHRs) to find patients with missed test results. Wider application of such approaches would require a health system-based team to conduct surveillance for missed test results, analyze the data, identify which patients warrant intervention for the near-miss, and finally generate and sustain improvements. Most organizations do not yet have such teams in place. It is also unclear whether and how organizations have translated existing knowledge and interventions related to test result management processes into policies and practices to reduce risks.
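As a rough illustration of this surveillance concept only (not the actual trigger logic used by any organization; the record fields and thresholds below are hypothetical), structured EHR data can be scanned for abnormal results that lack a documented follow-up action within a defined window:

```python
from datetime import datetime, timedelta

# Hypothetical, simplified records; a real trigger would query structured EHR tables.
abnormal_results = [
    {"patient_id": 1, "test": "FIT", "resulted": datetime(2020, 1, 2)},
    {"patient_id": 2, "test": "chest CT", "resulted": datetime(2020, 1, 10)},
]
follow_up_actions = [
    # Any documented action that closes the loop (repeat test, referral, visit, etc.)
    {"patient_id": 1, "action": "GI referral", "date": datetime(2020, 1, 20)},
]

def flag_missed_results(results, actions, window_days=60, today=None):
    """Return abnormal results with no documented follow-up action within the window."""
    today = today or datetime.now()
    flagged = []
    for r in results:
        deadline = r["resulted"] + timedelta(days=window_days)
        acted = any(
            a["patient_id"] == r["patient_id"] and r["resulted"] <= a["date"] <= deadline
            for a in actions
        )
        # Flag only after the window has elapsed with no documented action.
        if not acted and today > deadline:
            flagged.append(r)
    return flagged

if __name__ == "__main__":
    for rec in flag_missed_results(abnormal_results, follow_up_actions,
                                   today=datetime(2020, 6, 1)):
        print(f"Patient {rec['patient_id']}: possible missed follow-up of {rec['test']}")
```

In this sketch, the patient whose abnormal chest CT has no documented action would be surfaced to a surveillance team for review.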

Recognizing the need for additional efforts to reduce missed test results at an organization level, our team recently launched a project to develop and implement change strategies to improve test result follow-up across a national sample of Veterans Affairs (VA) health care facilities. We proposed using a Virtual Breakthrough Series (VBTS),18,19 an established approach to assist teams with implementing evidence-based practices. An essential early step in this work is to identify current practices and stakeholders’ perceptions of high-priority risk areas, which will guide development of interventions. To this end, we conducted a qualitative study to identify common, high-priority risk areas and contributing factors related to missed test results from the perspective of multiple stakeholders in VA facilities. Our goal was to synthesize stakeholder perspectives to refine development of change strategies for our project, and more broadly to identify where implementation gaps exist and elicit strategies to improve test result follow-up.

Methods

Study Setting and Population

The VA is one of the largest health care systems in the USA and has had an operational EHR since 1985.20,21 The Computerized Patient Record System (CPRS) is the EHR interface in use since 1997. It includes a system for in-basket type EHR notifications (commonly called “View Alerts”) for communicating test results, among other patient care messages related to orders, medication refills, and referrals. Ordering clinicians are always notified of certain clinically significant abnormal tests, and clinicians can assign covering surrogates to receive their notifications.22
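The surrogate-coverage behavior described above can be pictured with a purely illustrative sketch; the data model and function below are hypothetical simplifications and do not represent CPRS code:

```python
from dataclasses import dataclass, field

@dataclass
class Clinician:
    name: str
    surrogate: "Clinician | None" = None  # covering clinician, if one is assigned
    inbox: list = field(default_factory=list)

def deliver_view_alert(ordering_clinician: Clinician, message: str) -> Clinician:
    """Deliver a test result notification, honoring one level of surrogate coverage."""
    recipient = ordering_clinician.surrogate or ordering_clinician
    recipient.inbox.append(message)
    return recipient

# Example: while Dr. A is away, Dr. B is set as surrogate and receives the alert.
dr_a = Clinician("Dr. A")
dr_b = Clinician("Dr. B")
dr_a.surrogate = dr_b
deliver_view_alert(dr_a, "Abnormal potassium for patient 12345")
print(dr_b.inbox)
```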

We recruited participants from 12 VA healthcare facilities enrolled in the larger VBTS study. We promoted the study at meetings and conference events, and directly via email to facility-level leaders. We recruited facilities purposively to represent a range of facility sizes and geographical locations. After facility representatives confirmed participation, each identified a site champion to coordinate further activities.

Site champions nominated personnel at their facilities who had experience with patient safety and test result notifications, including physicians, nurses, leadership, and professionals in laboratory, radiology, and patient safety. We individually contacted nominated staff members to invite them to participate. We obtained verbal informed consent before interviews. The study was approved by the local institutional review board.

Interview Guide Development and Content

We developed a semi-structured interview guide to facilitate data collection and to cover key topics.23 Interview content reflected the three primary “drivers” or domains of change represented in the draft set of change strategies for the larger study. These domains, generated by literature review and research team discussions in the first step of our project, were as follows: (1) enhancing patient engagement with test results, (2) improving situational awareness among providers and care teams, and (3) implementing processes to close the loop on test result reporting and follow-up. We incorporated additional concepts about each of the three domains from existing literature24–28 when drafting the interview guide.

The final interview guide included 42 questions (Appendix), although each participant answered a subset of 25 or 26 questions depending on their role. All participants were asked 9 general questions about how test results are reviewed, communicated, and followed up at their institutions. Clinicians, nursing staff, lab personnel, and patient safety professionals were asked 16 questions specific to patient engagement with test results and providers’ management of EHR notifications. Leadership personnel responded to a different set of 17 questions about situational awareness among providers and care teams, “closing the loop,” and monitoring the effectiveness of test result management. Finally, all participants were asked for recommendations about actions to reduce the rate of missed test results.

Data Collection and Analysis

One author (US, a qualitative researcher) conducted individual interviews by telephone between March 2019 and January 2020. Interviews were audio recorded and transcribed verbatim by a third party. Transcripts were used to conduct qualitative content analysis, using an inductive approach, allowing codes and categories to flow from the data and new insights to emerge. The interviewer (US) conducted the initial data analysis by reading transcripts iteratively to fully understand the data as a whole. Codes were derived by highlighting words from the text that captured key thoughts or concepts, which were used to draft a codebook. This process continued on each transcript until no more new or meaningful concepts emerged. After revising the initial coding scheme with the study team, a second coder (UM) with expertise in health informatics independently coded the full data set and added additional codes to the codebook. Both coders then met to refine the codebook by grouping codes with similar meanings into higher-order categories representing contributors to missed test results. The final codebook and categories were shared with the research team for review. Recommendations elicited from participants were tabulated and grouped by theme.

Results

We interviewed 27 participants across 12 sites, including clinicians (n = 9, 33%), facility leadership (n = 7, 26%), lab professionals (n = 4, 15%), radiology professionals (n = 2, 7%), nursing (n = 1, 4%), and patient safety professionals (n = 4, 15%). The typical duration of the interviews was 30 min (range 20 to 40 min). Qualitative content analysis revealed several risk areas that were grouped into 7 themes (Table 1), and recommendations grouped by theme (Table 2).

Table 1.

Risk Areas Identified by Participants

Risk area (number of participants mentioning, %), with representative quotes

Trainees (residents): 17 (63%)
“The alerts are not managed well by residents because they rotate so frequently, it’s close to zero. So, if they ordered a test in the outpatient and the result comes out to be abnormal, it will go to them but if they are no longer rotating through the VA it will not trigger the attending of record. Those are potential vulnerabilities of losing abnormal test results, particularly imaging.” (#106, clinician)
“Our biggest area of weakness or potential for improvement is what is our process of following up on labs ordered by residents if not by a physician. We do not have any mechanism to make sure all results are seen by physicians and residents are not consistent enough to follow it up or properly address.” (#120, clinician)
“We have a mechanism that if the resident orders any labs or imaging test and they don’t click on it as they rotate every 5 weeks, then after 2 weeks the PCP that the resident is linked to will get notification. So technically residents are supposed to alert whoever they are working with about the lab results, however they don’t usually do that.” (#105, clinician)
“It would be a nice suggestion to actually educate the providers not to disapprove alerts but actually take it seriously.” (#104, lab professional)

Incidental findings on imaging: 15 (56%)
“In fact the primary motivation for involvement in this project is because we have seen a number of cases in peer reviews that are related to delay in review of lab results particularly imaging like pulmonary nodule, that down the road turns out to be something more (cancer) but we do not have a process to figure out that delay and it falls through the crack. No one owns secondary findings.” (#107, clinician)
“Abnormal results or non-urgent findings are the ones that fall through the crack, for example a secondary mass on imaging.” (#118, leadership)
“It is the weakness for sure. We are resource constrained, so many things (for example lung nodule) is just a tip of the iceberg.” (#115, radiologist)

EHR and related tracking systems: 14 (52%)
“Critical labs are always reported with positive communication within 30 minutes to an hour, but abnormal labs go to CPRS view alert. There is no way of tracking if [a] provider ever viewed it. I don’t even know if view alerts are working because providers are overloaded and may just click past it.” (#104, lab professional)
“We do not have a standardized method to ensure if test results are reviewed and followed up and this could be the potential weak point where test results could be missed.” (#106, clinician)
“As of now we are relying on the view alerts from CPRS and that does not ensure 100% of the results are reviewed and acted upon.” (#107, clinician)

Lack of updated patient and provider contact information: 12 (44%)
“We do not have a system of updating patient information, I’m sure they are supposed to have that I don’t think it’s done.” (#105, clinician)
“I wish there were ways or one easy way for a physician or a nurse, outpatient physician, inpatient physician, any of us—to have the one single best contact number for the patient. Because outdated contact phone numbers are a pervasive problem in our system.” (#103, clinician)

Challenges in referral for consultation/testing and follow-up of results: 7 (26%)
“Referring a patient to another facility is a mess, it’s not smooth at all, whether its imaging or consultation, it requires a great deal of work on the part of the patient and my referring team. There is always a breakdown in communication system, or the other facility may contact us after weeks or months later.” (#120, clinician)
“When a patient is referred to a subspecialty there comes a break in who will follow with his test results. Whether the PCP or the ordering clinician.” (#113, clinician)

Use of surrogate or backup clinicians: 5 (19%)
“A switch of providers for a couple of days, followed by view alerts still going to the substitute attending who is no longer taking care of the patient who did not view his alerts, and results are lost to follow-up.” (#118, leadership)

Lack of clarity regarding the clinician responsible for follow-up of pending labs at discharge: 4 (15%)
“That’s another place where things are really broken, pending labs at discharge, who gets it?” (#120, clinician)
“The biggest hole in our system is the tests that are pending on discharge…Once patient is discharged, I’m no longer listed as the attending, I no longer get alerts…and the only person getting the alert is the intern who (a) doesn’t manage their alerts and (b) is likely to have rotated off service.” (#114, clinician)
“Pending inpatient [test results] for providers who only work 2-4 weeks a year and have full time jobs elsewhere” (#118, leadership)

Note: Participants used VA-specific terminology to refer to the VA’s EHR (CPRS [Computerized Patient Record System]) and to test result inbox notifications (“view alerts”)

Table 2.

Summary of Interview Participants’ Recommendations to Reduce Missed Test Results in the VA Healthcare System

Theme, with identified recommendations (number of participants mentioning)

Trainees
• Lab personnel should have access to contact information for the attending of record, current trainee, and replacement trainee at the end of a rotation (3)
• Trainees should order tests under the attending’s name and not their own (2)
• Include chief residents in the cascade of providers (chain of command) (1)
• Develop a mechanism for providers to acknowledge and “close the loop” on test result notifications (1)
• Create a mechanism whereby notifications for trainee-ordered laboratory or imaging results escalate to the primary care provider with whom the trainee is linked if not addressed within 2 weeks (1)

Incidental findings on imaging
• Create a position for a responsible person to follow up all concerning secondary findings on imaging (2)
• Develop technology in which a list of providers/chain of command automatically pops up for lab/imaging results (2)
• Create and implement technological solutions that contact the appropriate provider as soon as a critical result arrives, for both labs and imaging, and keep doing so until someone is reached (1)
• Train lab personnel on how to use the EHR to retrieve provider information, because they may use a separate system that does not display this information (1)

EHR and related tracking systems
• Educate providers not to ignore or delete alerts, and to take all alerts seriously (1)
• Ensure the team creates groups for their VA alerts so that other team members can get alerts too (4)
• Color code view alerts to help providers prioritize them on time (3)
• Weekly monitoring of alerts or any unsigned notes by the Chief (3)
• Give providers with 25 or more unsigned notes additional time to catch up (1)
• Develop a standardized process so that all providers handle their view alerts in the same way (2)
• Send an alert to department leadership if a provider falls behind on a set number of alerts (1)
• Develop a multilayer alert system instead of relying only on view alerts; if view alerts are not attended to on time, a secondary alert is automatically generated through a different mechanism (1)
• Create a patient safety report whenever there is a delay in communication, to identify areas of weakness and prevent recurrence (1)

Lack of updated patient and provider contact information
• Update patient and next-of-kin information every time a patient checks in at the clinic (7)

Referral coordination outside the initiating medical center
• Create a position to track the data (including lab testing, imaging, and consultations) of patients who are referred externally (1)
• Switch care of the patient to a new provider [ordering or external] in the case of a patient referral with secondary findings, to make care coordination and follow-up easier (the PCP should no longer be responsible for test results/follow-up) (1)

Use of surrogate/backup clinicians
• Provide administrative support to help set surrogates (1)

Lack of clarity regarding the clinician responsible for follow-up of labs pending at discharge
• Create an automated list of results pending at discharge to make providers aware of what is in process and to create a record that can be cross-referenced to confirm that all lab results are followed up (2)
• Alternatively, introduce a [manual] box at the end of the discharge summary entitled “labs pending” and make it mandatory for all PCPs to check the box at follow-up (1)
• Use admission, discharge, and transfer (ADT) orders to identify a change in providers for inpatients. For example, if a lab was collected on the medical ward and the patient was transferred to intensive care, lab personnel can use the ADT order to determine the responsible intensive care physician (1)

Trainees (Residents and Interns)

A majority of participants (n = 17, 63%) voiced serious concerns about test result management when medical trainees (residents) are involved. They described weak or absent teamwork, coordination, and accountability. At some facilities, treating clinicians did not receive notifications for tests ordered by trainees who had completed their rotations, resulting in test results being missed or found by chance.

Some participants reported the need for a mechanism to ensure that all test results are seen by the supervising clinician as a backup; others described such mechanisms already in place at their facilities. For example, at one facility, if a trainee does not acknowledge the results of labs or imaging they ordered within 2 weeks, a notification is escalated to the supervising primary care provider (PCP). Some participants expressed a need for more trainee education on how to manage EHR notifications and hand off information after a rotation. Furthermore, they suggested that treating physicians should be aware of VA’s national policy on communication of test results to providers and patients. The policy establishes that test results should be communicated to patients within 7 calendar days for results requiring action and within 14 days for results requiring no action.
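To make these timeframes concrete, the following is a minimal, hypothetical sketch (the policy specifies deadlines, not an algorithm; the field and function names below are invented for illustration) of the escalation rule and the patient-communication windows described above:

```python
from datetime import date, timedelta

# Hypothetical notification record for a trainee-ordered test result.
notification = {
    "resulted": date(2021, 3, 1),
    "acknowledged_by_trainee": False,
    "actionable": True,  # result requires action
}

def needs_pcp_escalation(n, today, escalation_days=14):
    """Escalate to the linked PCP if the trainee has not acknowledged within the window."""
    return (not n["acknowledged_by_trainee"]
            and today >= n["resulted"] + timedelta(days=escalation_days))

def patient_communication_deadline(n):
    """Per the policy cited above: 7 calendar days if the result requires action, 14 if not."""
    return n["resulted"] + timedelta(days=7 if n["actionable"] else 14)

today = date(2021, 3, 16)
print(needs_pcp_escalation(notification, today))      # True: escalate to supervising PCP
print(patient_communication_deadline(notification))   # 2021-03-08
```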

Incidental Findings on Imaging

More than half of participants stated that incidental findings, particularly on imaging, are the results most likely to fall through the cracks. Incidental findings lost to follow-up could result in missed diagnoses or delays in patient care, sometimes lasting years (e.g., in the case of cancer).

Some participants added that there is no backup system to ensure that the ordering provider acted on the results. Many test results that do not require urgent attention still require action, and participants expressed a need for a clear follow-up process that all providers, including supervising physicians and trainees, can follow. Providers also noted that addressing incidental findings requires better teamwork.

EHR and Related Tracking Systems

Participants from multiple roles noted that critical test results are always communicated verbally in a closed loop. However, they reported that non-critical abnormalities are transmitted as EHR notifications, where the loop may not necessarily close. Participants uniformly agreed that the VA lacks a backup system for tracking EHR notifications related to abnormal test results. Furthermore, providers acknowledged that they are overwhelmed by the number of EHR notifications they receive each day. Respondents added that, although a column indicates the high, medium, or low importance of each notification, the system lacks a more compelling visualization to prioritize notifications by urgency; many are given the same weight and volume, which leads to a poor signal-to-noise ratio and increases the risk of missed information. Participants recommended introducing a color-coding system to help providers more effectively triage their notifications and take action.

Lack of Updated Patient and Provider Contact Information

Nearly half of the participants were dissatisfied with the system for maintaining patient and provider contact information. A few (n = 8) also reported that their VA facility is supposed to maintain current contact information for both patients and providers but were not sure whether this was being done. A commonly mentioned recommendation was to update patient contact information at each visit. Nevertheless, many other participants were satisfied with how their facilities maintain contact information and with the ease of communicating critical test results.

Some participants, especially lab personnel, reported substantial difficulty finding accurate provider contact information. Participants also reported that tier systems or cascading lists identifying the next in command were not consistently updated, and they noted that up-to-date contact information for both providers and patients is essential when abnormal results need follow-up. Multiple participants called for better access to ordering physician contact information, and for backup contacts when the ordering physician is not available.

Challenges in Referral for Consultation/Testing and Follow-up of Results

Several participants reported challenges related to consultation visits and testing referrals, including poor care coordination, breakdowns in communication, delays in patient care, and excessive workloads. At times, ambiguity existed regarding who is responsible for test result follow-up after a referral (e.g., the PCP versus the specialist ordering the test). Participants emphasized that after a referral to a specialty clinic outside their VA facility, response times for test results are uncertain or long, and responsibility for information retrieval and follow-up may fall on the referring clinic or the patient. One clinician suggested there should be a tracking person (e.g., a care coordinator) to facilitate communication of these test results. However, the majority of participants were satisfied with referrals and coordination of care within the VA.

Use of Surrogate/Backup Clinicians

A few participants mentioned test results being missed or lost to follow-up during the transition from the primary provider to a covering secondary provider, and vice versa. Once a provider is no longer covering, notifications must be switched back to the primary provider; otherwise, EHR notifications continue to be transmitted to a covering clinician who is no longer involved in the patient’s care, creating a risk of missed test results. Multiple participants recommended using more automation to support the covering physician notification system.

Test Results Pending at Discharge

Participants voiced concerns about pending test results at discharge, who receives the results, and who is responsible for their follow-up. Participants expressed a lack of clarity among providers (including themselves) about whether the responsible clinician for pending results should be the ordering provider, the inpatient attending of record, or the PCP. Participants recommended instituting a process with clear guidance for who is responsible, and automatic generation of a list of tests pending at discharge.
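A minimal sketch of the kind of automated pending-at-discharge list participants recommended might look like the following; the order records, status values, and responsible-clinician labels are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LabOrder:
    patient_id: str
    test: str
    status: str                  # e.g., "pending", "final"
    responsible_clinician: str   # the clinician designated to follow up the result

def tests_pending_at_discharge(orders, patient_id):
    """Build the discharge-summary list of tests still pending for one patient."""
    return [o for o in orders if o.patient_id == patient_id and o.status == "pending"]

orders = [
    LabOrder("P001", "Blood culture", "pending", "Dr. Smith (hospitalist)"),
    LabOrder("P001", "Basic metabolic panel", "final", "Dr. Smith (hospitalist)"),
]

for o in tests_pending_at_discharge(orders, "P001"):
    print(f"Pending at discharge: {o.test} -> follow-up owner: {o.responsible_clinician}")
```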

Discussion

As part of a larger project to improve test result follow-up, we identified current implementation gaps and elicited strategies for improvement in a national sample of VA health care facilities. Themes commonly mentioned included concerns about trainees, incidental findings on imaging, and using the EHR effectively for follow-up. Also notable were mentions of problems for which potential solutions were available but apparently not implemented.

Some of our findings reflect the need for strategies to enhance uptake of and adherence to existing policies and best practices. For instance, we found that several participants wanted guidance about who is responsible for tests pending at hospital discharge, even though a policy for this already exists, in line with previous recommendations for such guidelines.8 The national VA policy states, “When results of tests ordered and performed while the patient is inpatient become available after discharge, they are communicated to the patient by the ordering inpatient provider, or their designee, unless responsibility is transferred to an outpatient provider, or their designee, and the transfer is documented in [the EHR].”29

All VA facilities are expected to follow national policy and to develop local processes and procedures that address the requirements set forth therein. Several participants were unaware of any strategies implemented at their facilities to improve follow-up of test results pending at discharge, despite prior research on effective strategies.10–12 This suggests a need not only for diffusion (passive spread) and dissemination (active spread) of research and policies, but also for implementation, adoption, and sustainability.30,31 Prior work has suggested that sustainable policy implementation requires not just education of the policy’s users but also consistent and specific reinforcement.32 Sustainability has been under-researched, and qualitative approaches are needed to identify ways to sustain policy implementation.33

Several recommendations from interviewees have been described in prior work.34 This illustrates an implementation gap in this area, which is not surprising because passive diffusion of practices has been challenging.35,36 Given the complexity and sociotechnical context of the problem,37 implementation methods warrant careful evaluation. Moreover, a single solution is unlikely to provide substantial benefit. For instance, because EHR usage skills vary from provider to provider,38 users need not only effective ways to achieve competency in required skills but also better EHR workflows for dealing with test result notifications. PCPs have previously reported a lack of sufficient protected time for EHR notification management, as well as a desire for better visualizations,35 similar to recommendations elicited in the present work. Recommendations frequently related to EHR features that interviewees found lacking. Prior work has described the workarounds that EHR users adopt39 and the need for the EHR to evolve to match workflow needs.40 A sociotechnical approach is needed to address the deficiencies and complexities of electronic test ordering, alerting, and follow-up.37,40–42

Lack of timely follow-up of tests ordered by trainees has also been described in several prior works.43,44 This phenomenon is common in academic medical centers (not only in VA medical centers) because residents and interns rotate frequently from one specialty to the next, may be unfamiliar with follow-up methods at new hospitals or specialties, and may not return to the same clinical setting for months. Multiple themes found in this study could be addressed by automatically escalating unacknowledged test result notifications to individuals who could take action. This was being done at only a single facility despite being considered a useful safety practice.45 Our findings suggest that organizations could identify high-risk scenarios for test result follow-up (such as incidental imaging findings) and scale up strategies such as call cascades, a successful strategy in other research.44

Strategies to enhance test result follow-up have not seen widespread adoption despite having been published many years ago, and additional reasons for this lack of implementation should be explored. Improvement models such as the Institute for Healthcare Improvement’s Breakthrough Series collaborative have shown promise in bridging the gap between knowledge and practice. This model has been applied in previous studies, including those in the VA, and is especially useful for improvement efforts across facilities. In the next steps of our work, we plan to use a Virtual Breakthrough Series model to foster collaboration and learning among staff at participating facilities as they implement change strategies informed by our findings.

Our study has several limitations. First, we interviewed employees only within the Veterans Health Administration; recommendations may not generalize to other health systems or EHRs. However, the risk area and recommendation categories (Tables 1 and 2) are not specific to the VA and are relevant to any health system. Several prior systematic reviews have outlined the problem and examined the effects of interventions to improve follow-up, such as education, manual review,12 automated tools,10,13 and other electronic systems.46 Most of the individual trials cited were performed outside the VA, but they examine the same broad categories of intervention that our participants proposed. Second, recommendations were collected without regard for feasibility, cost-effectiveness, or unintended consequences, all of which should be assessed before implementation. Several recommendations involve adding new steps (additional workload) to existing processes and changes to the EHR, which require involvement of EHR designers.47 Third, our sample, although national, may not be representative. Certain roles were not well represented (e.g., radiology and nursing), and participants were eligible only if their facility expressed an interest in the project. Finally, VA is in the early stages of an enterprise-wide EHR modernization effort.48 Nevertheless, technology is only one part of a larger sociotechnical system, and many of these lessons would be applicable to its future EHR or to any other EHR system. Future work should also explore cultural and “hidden curriculum” factors leading to missed test results, such as time pressures and training hierarchies.49

In conclusion, we used qualitative methods to identify factors contributing to the persistent problem of missed test results. Several of these factors have been elicited before and reflect the presence of an implementation gap at the organization level.50 Strategies to enhance uptake of and adherence to existing policies and best practices are needed. Our work is a first step, as part of a Virtual Breakthrough Series approach, to close implementation gaps around missed test results.51–53 Long-term goals are to develop resources and practices that can be disseminated within and outside of VA. However, in the short term, all health care facilities should address fixable issues such as responsibility for follow-up of test results, updated contact information for clinicians and patients, and escalating or backup procedures for tests ordered by trainees.

Supplementary Information

ESM 1 (20.2KB, docx)

(DOCX 20 kb)

Funding

This study was funded by the Veterans Affairs (VA) Health Services Research and Development (HSR&D) Service (IIR17-127). Dr. Singh is additionally funded in part by the Center for Innovations in Quality, Effectiveness, and Safety (CIN13-413), the VA HSR&D Service (the Presidential Early Career Award for Scientists and Engineers USA 14-274), the VA National Center for Patient Safety, the Agency for Healthcare Research and Quality (R01HS27363), the CanTest Research Collaborative funded by a Cancer Research UK Population Research Catalyst award (C8640/A23385), and the Gordon and Betty Moore Foundation. The opinions expressed are those of the authors and not necessarily those of the Department of Veterans Affairs, the US government, or Baylor College of Medicine. Some of Dr. Zubkoff’s work for this study and manuscript was performed while affiliated with the White River Junction VA Medical Center and the Department of Psychiatry at the Geisel School of Medicine.

Declarations

Conflict of Interest

The authors declare that they do not have a conflict of interest.

Footnotes

The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs or any other funding agency.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Poon E, Gandhi T, Sequist T, Murff H, Karson A, Bates D. “I wish I had seen this test result earlier!”: Dissatisfaction with test result management systems in primary care. Arch Intern Med. 2004;164(20):2223–8. doi: 10.1001/archinte.164.20.2223.
2. Schiff GD, Kim S, Abrams R, Cosby K, Lambert B, Elstein AS. Diagnosing diagnostic errors: lessons from a multi-institutional collaborative project. In: Advances in Patient Safety: From Research to Implementation (Volume 2: Concepts and Methodology). Rockville, MD: Agency for Healthcare Research and Quality; 2005. p. 255–78. AHRQ Publication Nos. 050021 (1-4).
3. Singh H, Petersen LA, Thomas EJ. Understanding diagnostic errors in medicine: a lesson from aviation. Qual Saf Health Care. 2006;15(3):159–64. doi: 10.1136/qshc.2005.016444.
4. Wahls TL, Cram PM. The frequency of missed test results and associated treatment delays in a highly computerized health system. BMC Fam Pract. 2007;8:32. doi: 10.1186/1471-2296-8-32.
5. Murphy DR, Laxmisan A, Reis BA, Thomas EJ, Esquivel A, Forjuoh SN, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf. 2014;23(1):8–16. doi: 10.1136/bmjqs-2013-001874.
6. Murphy DR, Meyer AND, Vaghani V, Russo E, Sittig DF, Wei L, et al. Development and Validation of Trigger Algorithms to Identify Delays in Diagnostic Evaluation of Gastroenterological Cancer. Clin Gastroenterol Hepatol. 2017;16(1):90–98. doi: 10.1016/j.cgh.2017.08.007.
7. Murphy DR, Meyer AN, Vaghani V, Russo E, Sittig DF, Wei L, et al. Electronic Triggers to Identify Delays in Follow-Up of Mammography: Harnessing the Power of Big Data in Health Care. J Am Coll Radiol. 2017;15(2):287–295. doi: 10.1016/j.jacr.2017.10.001.
8. Callen JL, Westbrook JI, Georgiou A, Li J. Failure to Follow-Up Test Results for Ambulatory Patients: A Systematic Review. J Gen Intern Med. 2011;27(10):1334–48. doi: 10.1007/s11606-011-1949-5.
9. Murphy DR, Satterly T, Rogith D, Sittig DF, Singh H. Barriers and facilitators impacting reliability of the electronic health record-facilitated total testing process. Int J Med Inform. 2019;127:102–8. doi: 10.1016/j.ijmedinf.2019.04.004.
10. Whitehead NS, Williams L, Meleth S, Kennedy S, Epner P, Singh H, et al. Interventions to Improve Follow-Up of Laboratory Test Results Pending at Discharge: A Systematic Review. J Hosp Med. 2018;13(9):631–636. doi: 10.12788/jhm.2944.
11. Dalal AK, Roy CL, Poon EG, Williams DH, Nolido N, Yoon C, et al. Impact of an automated email notification system for results of tests pending at discharge: a cluster-randomized controlled trial. J Am Med Inform Assoc. 2014;21(3):473–80. doi: 10.1136/amiajnl-2013-002030.
12. Darragh PJ, Bodley T, Orchanian-Cheff A, Shojania KG, Kwan JL, Cram P. A Systematic Review of Interventions to Follow-Up Test Results Pending at Discharge. J Gen Intern Med. 2018;33(5):750–8. doi: 10.1007/s11606-017-4290-9.
13. El-Kareh R, Roy C, Williams DH, Poon EG. Impact of Automated Alerts on Follow-Up of Post-Discharge Microbiology Results: A Cluster Randomized Controlled Trial. J Gen Intern Med. 2012;27(10):1243–50. doi: 10.1007/s11606-012-1986-8.
14. Were MC, Li X, Kesterson J, Cadwallader J, Asirwa C, Khan B, et al. Adequacy of hospital discharge summaries in documenting tests with pending results and outpatient follow-up providers. J Gen Intern Med. 2009;24(9):1002–6. doi: 10.1007/s11606-009-1057-y.
15. The National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press; 2015.
16. Danforth KN, Smith AE, Loo RK, Jacobsen SJ, Mittman BS, Kanter MH, et al. Electronic Clinical Surveillance to Improve Outpatient Care: Diverse Applications within an Integrated Delivery System. eGEMs (Generating Evidence & Methods to improve patient outcomes). 2014;2. Available from: http://repository.academyhealth.org/egems/vol2/iss1/9.
17. Murphy DR, Meyer AN, Sittig DF, Meeks DW, Thomas EJ, Singh H. Application of electronic trigger tools to identify targets for improving diagnostic safety. BMJ Qual Saf. 2019;28(2):151–9. doi: 10.1136/bmjqs-2018-008086.
18. Institute for Healthcare Improvement. The Breakthrough Series. 2003.
19. Boushon B, Provost L, Gagnon J, Carver P. Using a virtual breakthrough series collaborative to improve access in primary care. Jt Comm J Qual Patient Saf. 2006;32(10):573–84. doi: 10.1016/s1553-7250(06)32075-2.
20. Fihn SD, Francis J, Clancy C, Nielson C, Nelson K, Rumsfeld J, et al. Insights from advanced analytics at the Veterans Health Administration. Health Aff (Millwood). 2014;33(7):1203–11. doi: 10.1377/hlthaff.2014.0054.
21. Brown SH, Lincoln MJ, Groen PJ, Kolodner RM. VistA--U.S. Department of Veterans Affairs national-scale HIS. Int J Med Inform. 2003;69(2-3):135–56. doi: 10.1016/s1386-5056(02)00131-4.
22. Singh H, Mani S, Espadas D, Petersen N, Franklin V, Petersen LA. Prescription errors and outcomes related to inconsistent information transmitted through computerized order entry: a prospective study. Arch Intern Med. 2009;169(10):982–9. doi: 10.1001/archinternmed.2009.102.
23. Edwards R, Holland J. What is Qualitative Interviewing? London: Bloomsbury Academic; 2013.
24. Daley UE, Gandhi T, Mate K, Whittington J, Renton M, Huebner J. Framework for Effective Board Governance of Health System Quality. Boston, MA: Institute for Healthcare Improvement; 2018.
25. Improving Diagnosis in Medicine Change Package. Chicago, IL: Health Research & Educational Trust; 2018.
26. Chassin MR, Loeb JM. High-reliability health care: getting there from here. Milbank Q. 2013;91(3):459–90. doi: 10.1111/1468-0009.12023.
27. Pronovost PJ, Berenholtz SM, Goeschel CA, Needham DM, Sexton JB, Thompson DA, et al. Creating high reliability in health care organizations. Health Serv Res. 2006;41(4 Pt 2):1599–617. doi: 10.1111/j.1475-6773.2006.00567.x.
28. Graber ML, Trowbridge R, Myers JS, Umscheid CA, Strull W, Kanter MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf. 2014;40(3):102–10. doi: 10.1016/s1553-7250(14)40013-8.
29. Veterans Health Administration. Directive 1088: Communicating test results to providers and patients. Washington, DC; 2015.
30. Rabin BA, Brownson RC, Haire-Joshu D, Kreuter MW, Weaver NL. A glossary for dissemination and implementation research in health. J Public Health Manag Pract. 2008;14(2):117–23. doi: 10.1097/01.PHH.0000311888.06252.bb.
31. Lomas J. Diffusion, dissemination, and implementation: who should do what? Ann N Y Acad Sci. 1993;703:226–35. doi: 10.1111/j.1749-6632.1993.tb26351.x.
32. Singh H, Vij MS. Eight recommendations for policies for communicating abnormal test results. Jt Comm J Qual Patient Saf. 2010;36(5):226–32. doi: 10.1016/s1553-7250(10)36037-5.
33. Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implement Sci. 2012;7:17. doi: 10.1186/1748-5908-7-17.
34. Singh H, Thomas EJ, Sittig DF, Wilson L, Espadas D, Khan MM, et al. Notification of abnormal lab test results in an electronic medical record: do any safety concerns remain? Am J Med. 2010;123(3):238–44. doi: 10.1016/j.amjmed.2009.07.027.
35. Singh H, Spitzmueller C, Petersen NJ, Sawhney MK, Smith MW, Murphy DR, et al. Primary care practitioners’ views on test result management in EHR-enabled health systems: a national survey. J Am Med Inform Assoc. 2013;20(4):727–35. doi: 10.1136/amiajnl-2012-001267.
36. Powell L, Sittig DF, Chrouser K, Singh H. Assessment of Health Information Technology-Related Outpatient Diagnostic Delays in the US Veterans Affairs Health Care System: A Qualitative Study of Aggregated Root Cause Analysis Data. JAMA Netw Open. 2020;3(6):e206752. doi: 10.1001/jamanetworkopen.2020.6752.
37. Sittig DF, Singh H. Improving Test Result Follow-up through Electronic Health Records Requires More than Just an Alert. J Gen Intern Med. 2012;27(10):1235–7. doi: 10.1007/s11606-012-2161-y.
38. Hysong SJ, Sawhney MK, Wilson L, Sittig DF, Espadas D, Davis T, et al. Provider management strategies of abnormal test result alerts: a cognitive task analysis. J Am Med Inform Assoc. 2010;17(1):71–7. doi: 10.1197/jamia.M3200.
39. Menon S, Murphy DR, Singh H, Meyer AN, Sittig DF. Workarounds and Test Results Follow-up in Electronic Health Record-Based Primary Care. Appl Clin Inform. 2016;7(2):543–59. doi: 10.4338/ACI-2015-10-RA-0135.
40. Hysong SJ, Sawhney MK, Wilson L, Sittig DF, Esquivel A, Singh S, et al. Understanding the management of electronic test result notifications in the outpatient setting. BMC Med Inform Decis Mak. 2011;11:22. doi: 10.1186/1472-6947-11-22.
41. Ash J, Singh H, Sittig D. SAFER Guides: Test Results Reporting and Follow-Up. 2016. Available from: https://www.healthit.gov/sites/default/files/safer_test_results_reporting.pdf.
42. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care. 2010;19(Suppl 3):i68–i74. doi: 10.1136/qshc.2010.042085.
43. Singh H, Thomas EJ, Mani S, Sittig D, Arora H, Espadas D, et al. Timely follow-up of abnormal diagnostic imaging test results in an outpatient setting: are electronic medical records achieving their potential? Arch Intern Med. 2009;169(17):1578–86. doi: 10.1001/archinternmed.2009.263.
44. Menon S, Smith MW, Sittig DF, Petersen NJ, Hysong SJ, Espadas D, et al. How context affects electronic health record-based test result follow-up: a mixed-methods evaluation. BMJ Open. 2014;4(11):e005985. doi: 10.1136/bmjopen-2014-005985.
45. Tenner CT, Shapiro NM, Wikler A. Improving Health Care Provider Notification in an Academic Setting: A Cascading System of Alerts. Arch Intern Med. 2010;170(4):392. doi: 10.1001/archinternmed.2010.9.
46. Georgiou A, Li J, Thomas J, Dahm MR, Westbrook JI. The impact of health information technology on the management and follow-up of test results - a systematic review. J Am Med Inform Assoc. 2019;26(7):678–88. doi: 10.1093/jamia/ocz032.
47. Helou S, Abou-Khalil V, Yamamoto G, Kondoh E, Tamura H, Hiragi S, et al. Prioritizing Features to Redesign in an EMR System. Stud Health Technol Inform. 2019;264:1213–7. doi: 10.3233/SHTI190419.
48. VA EHR Modernization. 2020. Available from: https://www.ehrm.va.gov/.
49. Robertson JJ, Long B. Medicine’s Shame Problem. J Emerg Med. 2019;57(3):329–38. doi: 10.1016/j.jemermed.2019.06.034.
50. Sorensen AV, Bernard SL. Accelerating what works: using qualitative research methods in developing a change package for a learning collaborative. Jt Comm J Qual Patient Saf. 2012;38(2):89–95. doi: 10.1016/s1553-7250(12)38012-4.
51. Zubkoff L, Neily J, Mills PD. How to do a Virtual Breakthrough Series Collaborative. J Med Syst. 2019;43(2):27. doi: 10.1007/s10916-018-1126-z.
52. Zubkoff L, Neily J, King BJ, Dellefield ME, Krein S, Young-Xu Y, et al. Virtual Breakthrough Series, Part 1: Preventing Catheter-Associated Urinary Tract Infection and Hospital-Acquired Pressure Ulcers in the Veterans Health Administration. Jt Comm J Qual Patient Saf. 2016;42(11):485–AP2. doi: 10.1016/S1553-7250(16)42091-X.
53. Zubkoff L, Neily J, Quigley P, Soncrant C, Young-Xu Y, Boar S, et al. Virtual Breakthrough Series, Part 2: Improving Fall Prevention Practices in the Veterans Health Administration. Jt Comm J Qual Patient Saf. 2016;42(11):497–AP12. doi: 10.1016/S1553-7250(16)42092-1.
