Radiology. 2015 May 11;277(1):81–87. doi: 10.1148/radiol.2015142530

Development and Validation of Electronic Health Record–based Triggers to Detect Delays in Follow-up of Abnormal Lung Imaging Findings

Daniel R Murphy, Eric J Thomas, Ashley N D Meyer, Hardeep Singh
PMCID: PMC4613876  PMID: 25961634

An electronic health record–based trigger algorithm was developed to identify patients who are at risk for delayed follow-up of abnormal imaging findings. The trigger achieved a positive predictive value of 57.3% and shows promise in detecting diagnostic delays in patients with imaging findings suspicious for lung cancer.

Abstract

Purpose

To develop an electronic health record (EHR)–based trigger algorithm to identify delays in follow-up of patients with imaging results that are suggestive of lung cancer and to validate this trigger on retrospective data.

Materials and Methods

The local institutional review board approved the study. A “trigger” algorithm was developed to automate the detection of delays in diagnostic evaluation of chest computed tomographic (CT) images and conventional radiographs that were electronically flagged by reviewing radiologists as being “suspicious for malignancy.” The trigger algorithm was developed through literature review and expert input. It included patients who were alive and 40–70 years old, and it excluded instances in which appropriate timely follow-up (defined as occurring within 30 days) was detected (eg, pulmonary visit) or when follow-up was unnecessary (eg, in patients with a terminal illness). The algorithm was iteratively applied to a retrospective test cohort in an EHR data warehouse at a large Veterans Affairs facility, and manual record reviews were used to validate each individual criterion. The final algorithm aimed at detecting an absence of timely follow-up was retrospectively applied to an independent validation cohort to determine the positive predictive value (PPV). Trigger performance, time to follow-up, reasons for lack of follow-up, and cancer outcomes were analyzed and reported by using descriptive statistics.

Results

The trigger algorithm was retrospectively applied to the records of 89 168 patients seen between January 1, 2009, and December 31, 2009. Of 538 records with an imaging report that was flagged as suspicious for malignancy, 131 were identified by the trigger as being high risk for delayed diagnostic evaluation. Manual chart reviews confirmed a true absence of follow-up in 75 cases (trigger PPV of 57.3% for detecting evaluation delays), of which four received a diagnosis of primary lung cancer within the subsequent 2 years.

Conclusion

EHR-based triggers can be used to identify patients with suspicious imaging findings in whom follow-up diagnostic evaluation was delayed.

© RSNA, 2015

Introduction

Delays in the diagnosis of lung cancer from failure to follow up patients with abnormal imaging results remain a common problem that contributes to both malpractice litigation and poor patient outcomes (1–6). These delays in care, which typically involve well-intentioned clinicians, also occur when clinicians have good access to electronic diagnostic information. Multiple factors likely contribute to the delays, including time pressures, information overload, heavy workloads, and a lack of robust test-result tracking systems (7–15). Currently, identification of such delays in care requires a review of the records of all patients who undergo imaging. However, this nonselective review is too inefficient for practical use given the number of records that would require review and the vast amounts of information contained in electronic health records (EHRs). Likewise, sending providers information created with simple search queries (eg, all patients with abnormal chest radiography findings) for the purpose of identifying patients who need follow-up is not practical because most clinicians are already overloaded with information.

The increasing adoption of EHRs has led to the development of large repositories of patient data spanning the continuum of care (16). By taking advantage of these data, “trigger tools” could be developed and applied to automate the process of detecting records of patients who are experiencing delays in receiving diagnostic evaluation. Trigger tools use an algorithm based on certain clues to flag the records of patients who have a higher risk for harm so that their records can be reviewed for possible safety events (17–20). The use of trigger tools in radiology has been limited; a recent study proposed that the presence of lower extremity thrombosis at ultrasonography is a potential indicator of insufficient inpatient venous thromboembolism prophylaxis (21). Expanding on the capabilities of simple search queries, triggers can account for multiple inclusion and exclusion criteria to increase their positive predictive value and facilitate their prospective, potentially real-time, application. While triggers have traditionally been used to detect adverse drug events and nosocomial infections, we successfully developed them to detect delays in care related to follow-up of patients with positive fecal occult blood test results and abnormal prostate-specific antigen results (22–27).

Approximately 9% of patients with abnormal chest imaging findings suspicious for neoplasm fail to receive timely follow-up diagnostic evaluation (2,28). Development of a trigger to efficiently identify such delays in diagnostic evaluation could assist in the development of a back-up, or “safety net,” system to prospectively detect patients who are at risk for delays and alert designated personnel to take action (29). The EHRs at the Department of Veterans Affairs offer a unique opportunity to apply and evaluate the effectiveness of triggers on imaging results. When interpreting imaging results, radiologists at the Department of Veterans Affairs flag abnormal reports by using a structured computer code that alerts the referring provider (via their EHR inboxes) about the flagged abnormality. Most facilities use a “suspicious for malignancy” code to identify an imaging result that is suggestive of cancer. A similar system for rating mammograms (the Breast Imaging Reporting and Data System) is nearly universal throughout the United States, but the use of other structured rating systems for imaging results is uncommon. In this study, we developed an EHR-based trigger algorithm to identify delays in follow-up evaluation of patients with imaging findings suggestive of lung cancer, and we validated the trigger with retrospective data.

Materials and Methods

Setting

Data were obtained from the clinical data repository of a large tertiary care Veterans Affairs facility that provides both inpatient and outpatient care. The local institutional review board approved the study.

Trigger Development

We performed literature reviews and interviewed specialists and primary care providers to develop criteria for the trigger algorithm on the basis of existing processes for follow-up evaluation used by providers at the facility. We operationally defined each criterion to enable conversion into a set of computerized search criteria, and chart reviews were performed on a “test cohort” to determine the validity of our code for extracting data for each individual criterion. We previously evaluated the effectiveness and feasibility of the “age,” “terminal illness diagnoses,” and “hospice/palliative care enrollment” criteria during work on similar nonimaging triggers for patients with prostate and colorectal cancers (27). In these cases, we did not perform a repeat review of the identified records. The algorithm included criteria designed to identify patients with imaging findings consistent with a possible lung malignancy and exclude patients who do not need follow-up evaluations. The algorithm (Table 1) initially identified records of patients at risk for a delay in care on the basis of two sets of inclusion criteria: (a) “red flag” criteria to identify clinical clues, such as an abnormal imaging finding, and (b) demographic criteria, such as patients who were alive and within the age group in which follow-up action would be expected. The trigger subsequently removed records that did not require additional action on the basis of two sets of exclusion criteria: (a) clinical criteria to exclude patients who do not need follow-up evaluation, such as those with a terminal illness, and (b) expected follow-up criteria to exclude patients who already received appropriate follow-up care within a prespecified timeframe (Figure).

Table 1.

Criteria Used to Develop the Trigger Algorithm



Figure. Flow chart shows the trigger algorithm and chart review process.

Because there is no agreed-upon definition of a delay in chest imaging follow-up, we chose 30 days as the maximum timeframe in which to complete an initial follow-up diagnostic evaluation. This timeframe was chosen on the basis of discussions with expert providers who considered the time needed to have patients see specialists and undergo testing. Furthermore, it is likely that, should lung cancer be definitively identified, the clinical impact of a delay of 30 days would be minimal. All criteria were based on structured data fields, including International Classification of Diseases, Ninth Revision, and Current Procedural Terminology codes, as well as a structured “suspicious for malignancy” code assigned to imaging studies by radiologists at the time of their interpretation. The final criteria were reviewed by expert providers and are shown in Table 1. The criteria were coded into a structured query language computer algorithm and applied twice per month to the facility’s local data warehouse, a separate repository of clinical data extracted from the main EHR.
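The decision logic described above can be sketched as a simple filter over patient records. This is an illustrative sketch only: the field names, record structure, and helper function below are hypothetical assumptions, not the study's actual Veterans Affairs data-warehouse schema or SQL implementation.

```python
from datetime import date, timedelta

# Hypothetical field names and record layout; the study's actual schema differs.
FOLLOW_UP_WINDOW = timedelta(days=30)

def trigger_fires(patient: dict, today: date) -> bool:
    """Flag a record as high risk for a delayed diagnostic evaluation."""
    # Red-flag inclusion: imaging report coded "suspicious for malignancy"
    if not patient["suspicious_for_malignancy"]:
        return False
    # Demographic inclusion: alive and within the 40-70-year age range
    if not patient["alive"] or not (40 <= patient["age"] <= 70):
        return False
    # Clinical exclusion: follow-up unnecessary (eg, terminal illness, hospice)
    if patient["terminal_illness"] or patient["hospice"]:
        return False
    # Expected-follow-up exclusion: timely action already documented
    start = patient["imaging_date"]
    for visit in patient["follow_up_dates"]:  # eg, pulmonary visit, repeat imaging
        if start <= visit <= start + FOLLOW_UP_WINDOW:
            return False
    # Trigger fires only once the 30-day window has elapsed with no action
    return today > start + FOLLOW_UP_WINDOW

record = {
    "suspicious_for_malignancy": True, "alive": True, "age": 62,
    "terminal_illness": False, "hospice": False,
    "imaging_date": date(2009, 3, 1), "follow_up_dates": [],
}
print(trigger_fires(record, date(2009, 5, 1)))  # True: no timely follow-up
```

A production version would run as a scheduled query against the data warehouse (the study applied it twice per month) rather than per-record in application code.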

Trigger Performance

We applied the final trigger algorithm to all patient records with a chest radiography or CT result during a 1-year period (the “validation cohort”) and calculated the positive predictive value (PPV), which is defined as the percentage of patients who were flagged by the trigger and were determined to need follow-up care but did not receive it. For practical use, triggers must achieve a reasonable PPV to avoid overloading recipients with false-positive results. We assumed that no recipient would want a false-positive rate of more than 50%, and our previous work suggests that triggers can achieve this level (27). Thus, we designed the study to detect a PPV of at least 50% with confidence intervals of plus or minus 10% and a two-sided α of 0.05, and we determined that a minimum of 97 records needed to be reviewed. By using a piloted chart review form, a trained physician assistant at the facility with over 30 years of clinical primary care experience performed manual chart reviews of all identified records to determine whether they truly contained a delay in diagnostic evaluation. Prior to proceeding, the initial 20 records were independently reviewed by a second physician reviewer (D.R.M.) with 9 years of internal medicine experience, and the findings were compared to ensure an interrater reliability of at least 80%. Records that were identified by the trigger but that were found to not have a delay in care were deemed false-positive results. The reasons for the false-positive results were recorded during review. The ratio of true-positive results to total records identified by the trigger was used to determine the PPV of the trigger algorithm. Finally, 2 years after the imaging test, we performed a follow-up review to determine whether patients subsequently received a diagnosis of lung cancer.
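The sample-size target and the reported PPV follow from standard proportion arithmetic. The snippet below reproduces both, using the normal approximation for the sample-size calculation; the study does not state its exact confidence-interval method, so only the point estimates are recomputed here.

```python
import math

# Records needed to estimate a PPV of about 50% within +/- 10%
# at a two-sided alpha of 0.05 (normal approximation for a proportion)
z = 1.96                      # critical value for a 95% confidence interval
p, half_width = 0.50, 0.10
n_required = math.ceil(z**2 * p * (1 - p) / half_width**2)
print(n_required)             # 97, matching the study's minimum review count

# Observed PPV: 75 true-positive results among 131 trigger-flagged records
tp, flagged = 75, 131
ppv = tp / flagged
print(round(100 * ppv, 1))    # 57.3
```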

Data Analysis

Data were analyzed by using Excel software (Microsoft, Redmond, Wash) and SPSS version 21 (IBM, Armonk, NY). Trigger performance, time to follow-up evaluation, reasons for a lack of follow-up care, and cancer outcomes were analyzed and reported by using descriptive statistics.

Results

Trigger Development

We applied the trigger criteria to all patients at the facility who received chest imaging results between January 1, 2008, and December 31, 2008 (the test cohort). We performed manual record reviews on 36 randomly selected charts, making iterative refinements and reapplying the algorithm criteria after each review. Because Veterans Affairs facilities provide care to United States military veterans, the population was predominantly men (92%) aged 18 years and older (30).

Trigger Performance

The final algorithm was programmed as a computerized abnormal imaging trigger and was applied to the records of all patients with chest radiography or CT results between January 1, 2009, and December 31, 2009 (the validation cohort). Of 89 168 patients who were seen at the facility during the study period, 24 829 (27.8%) had at least one chest radiography or CT study performed, of which 538 records met both the red flag (an imaging result that was flagged as being suspicious for malignancy) and demographic criteria (Figure). Subsequently, 207 records were excluded on the basis of clinical exclusion criteria, and 200 records were excluded on the basis of expected follow-up criteria. The remaining 131 records (24.3% of those that met the inclusion criteria and 0.5% of all records for patients who underwent imaging) for patients at high risk for experiencing delays in diagnostic evaluation were manually reviewed. During manual review, 56 records were identified as false-positive results, while the remaining 75 were found to truly lack timely diagnostic evaluation (PPV, 57.3%; 95% confidence interval: 48.3, 65.8) (Table 2). Of the 131 records of patients at high risk for delayed diagnostic evaluation, 18 (13.7%) were ordered for screening indications (with “annual exam” or “pre-op” given as the reason for the examination), and the remaining 113 (86.3%) were ordered for diagnostic purposes.
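As a quick arithmetic check, the cohort-reduction percentages in the paragraph above can be recomputed directly from the reported counts:

```python
# Reported counts from the validation cohort
patients_seen = 89168       # patients seen at the facility during the study period
imaged = 24829              # patients with at least one chest radiograph or CT study
flagged_suspicious = 538    # records meeting red-flag and demographic criteria
trigger_positive = 131      # records remaining after all exclusion criteria

print(round(100 * imaged / patients_seen, 1))                 # 27.8
print(round(100 * trigger_positive / flagged_suspicious, 1))  # 24.3
print(round(100 * trigger_positive / imaged, 1))              # 0.5
```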

Table 2.

Reasons for Delays in Follow-up of Patients with Abnormal Imaging Findings as Ascertained by Record Review


Note.—Data in parentheses are percentages.

Among the 56 false-positive results (ie, records that were incorrectly identified by the trigger as positive), the most common reason was that the finding that was “suspicious for malignancy” was not in the lung, such as with a chest wall or mediastinal mass (in 17 [30.4%]). Follow-up actions for these findings would typically be handled differently than they would for a primary lung malignancy, and our trigger did not account for these actions. Other common reasons for false-positive results included patients who were receiving outside care (12 [21.4%]) or who declined to undergo further evaluation of the finding (eight [14.3%]). This information was only available within the free-text progress notes.

Of the 75 records without documented justification for the delay in follow-up (ie, true-positive results), 35 had follow-up studies that were ordered in response to the abnormal findings but were not performed within 30 days. Within this subsample, the median time to completion of follow-up action was 49 days (interquartile range, 35–127 days). The remaining 40 records had no evidence of follow-up action. The reasons for delays in follow-up evaluation of abnormal imaging findings, as ascertained by record review, are outlined in Table 2. Of the 75 patients with imaging results that were suspicious for lung cancer and who did not receive timely follow-up action, four (5.3%; median follow-up period, 237 days) subsequently received a diagnosis of primary lung cancer during the following 2 years, 47 (62.7%) had findings that were subsequently attributed to nonneoplastic causes, and 24 (32.0%) had no evidence of subsequent diagnostic evaluation or a documented reason for the absence of follow-up action. The latter findings were relayed to clinic leadership.

Discussion

An EHR-based trigger algorithm can help automate the process of identifying patients who did not receive timely follow-up of chest radiography and CT findings suggestive of lung malignancy. The trigger achieved a PPV of 57.3% (75 of 131) and reduced the number of records that required review to less than one-quarter (24.3%, 131 of 538) of the records that would require review had the process been nonselective. Triggers could harness the vast amounts of EHR data to help develop a safety net for detecting and mitigating delays in follow-up action for patients with abnormal imaging findings (21,26). Systems-based interventions to address this safety issue have been slow to implement, and EHR triggers could be a promising alternative (3,7,31–34).

Although our application included retrospective data, such triggers can be prospectively applied or used in near real time to identify records with potential delays in care. For example, trigger algorithms could be incorporated into panel management programs or registries, allowing the care team to identify and address delays before patient outcomes are affected. Some institutions use nurse navigator programs to track imaging findings that are suspicious for cancer until a definitive diagnosis is made (35–37). Prospective use of triggers by nurse navigators could enable the navigators to more efficiently identify and track suspicious findings. Building on prior work in tracking abnormal imaging findings, our triggers automate the process of both identifying high-risk patients and excluding those who do not require follow-up action. This would allow nurse navigators to focus their attention on patients who truly need action and direct less attention to reviewing false-positive results (38). Dedicated pulmonary nodule tracking programs that are operated by nurse navigators and use EHR-based triggers could be used to help patients navigate the health care system, improve consistency of follow-up actions, and reduce delays in care (39,40). The use of standardized codes for diagnostic criteria (eg, the International Classification of Diseases and the Current Procedural Terminology) enables portability among sites with only limited modification before implementation.

The ability of the trigger to identify imaging results that are suspicious for lung cancer depended on the radiologists’ use of structured “suspicious for malignancy” codes with which to tag reports during interpretation. Unfortunately, the use of such codes is uncommon outside the Department of Veterans Affairs, limiting widespread application of such triggers. While the use of sophisticated natural language processing and text mining algorithms may help, they are more difficult to implement and standardize across sites and have other accuracy and precision limitations. To address this problem, we recommend the development of a standardized set of radiology codes that could be more widely implemented across EHR systems. Similar to the Breast Imaging Reporting and Data System coding scheme, which greatly reduced ambiguity of mammogram interpretation and communication of results, a set of structured codes to indicate imaging findings that are suspicious for malignancy could be developed. These codes would allow EHRs to “understand” radiology report interpretations in clinical decision support tools (41). For example, regardless of the initial test result communication, an application could alert providers when action on an abnormal chest imaging finding has not been completed within a certain number of days and enable other types of institution-level measurements for an absence of follow-up (42). Future triggers could incorporate both structured data mining and natural language processing and leverage the benefits of each method while minimizing their limitations.

Several study limitations warrant discussion. We were unable to determine the false-negative rate and, thus, could not calculate the sensitivity and specificity of the trigger. This inability was because of the large number of negative record reviews that would be required to identify a single false-negative result and is a common limitation of data mining for outcomes with a low prevalence; however, the use of such triggers enables the detection of safety events in circumstances when manual reviews would otherwise be cost-prohibitive (43). This study was also not designed to evaluate the clinical impact of such triggers. Currently, there is no evidence that these triggers will improve outcomes such as morbidity, mortality, and disease stage at diagnosis. However, this study provides a basis for subsequent work on the prospective use of such triggers to test their impact on patient outcomes.

In conclusion, we developed an EHR-based trigger algorithm to identify patients with abnormal imaging findings who are at risk for delays in follow-up action. The trigger achieved a PPV of 57.3% and shows promise for detecting diagnostic delays in patients with imaging findings that are suspicious for lung cancer. Our study provides a starting point for future work on evaluating the clinical impact of the prospective use of such triggers, as well as the development of more advanced algorithms that extend beyond structured EHR data.

Advances in Knowledge

  ■ We developed and validated an electronic health record–based trigger algorithm to identify patients with imaging findings suspicious for lung cancer who lacked timely follow-up.

  ■ Over a 1-year period, the trigger identified 131 patients at a large health care facility, of whom 75 (57.3%) were confirmed through manual record review to have a delay in follow-up action.

  ■ During the study period, 89 168 patients visited the facility, 24 829 had at least one chest radiograph or CT image obtained, and 538 had findings that radiologists flagged as being “suspicious for malignancy”; thus, the trigger reduced the number of records we needed to review to confirm follow-up delays to less than one-quarter of those with suspicious findings (24.3%; 131 of 538).

Implication for Patient Care

  ■ Health care organizations could leverage triggers to proactively detect potential delays in follow-up evaluation of patients with imaging findings suspicious for lung cancer and prevent delays in diagnosis.

Acknowledgments

The authors thank Louis Wu, Brian Reis, Li Wei, and Viraj Bhise for their contributions to this study.

Received October 30, 2014; revision requested December 11; revision received January 19, 2015; accepted January 30; final version accepted February 19.

Funding: This research was supported by the National Institutes of Health (grant R01HS022087).

Supported by an R18 grant from the Agency for Healthcare Research and Quality (#1R18HS017820) and supported in part by the Houston Veterans Affairs HSR&D Center for Innovations in Quality, Effectiveness and Safety (CIN 13-413). H.S. supported by the Veterans Affairs Health Services Research and Development Service (CRE 12-033; Presidential Early Career Award for Scientists and Engineers USA 14-274), and Veterans Affairs National Center for Patient Safety.

These funding sources had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript.

The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

Disclosures of Conflicts of Interest: D.R.M. Activities related to the present article: received a grant and travel support from Agency for Healthcare Research and Quality. Activities not related to the present article: disclosed no relevant relationships. Other relationships: disclosed no relevant relationships. E.J.T. Activities related to the present article: received a grant from Agency for Healthcare Research and Quality. Activities not related to the present article: disclosed no relevant relationships. Other relationships: disclosed no relevant relationships. A.N.D.M. disclosed no relevant relationships. H.S. Activities related to the present article: received a grant from Agency for Healthcare Research and Quality. Activities not related to the present article: disclosed no relevant relationships. Other relationships: disclosed no relevant relationships.

Abbreviations:

EHR
electronic health record
PPV
positive predictive value

References

  1. Singh H, Sethi S, Raber M, Petersen LA. Errors in cancer diagnosis: current understanding and future directions. J Clin Oncol 2007;25(31):5009–5018.
  2. Singh H, Hirani K, Kadiyala H, et al. Characteristics and predictors of missed opportunities in lung cancer diagnosis: an electronic health record-based study. J Clin Oncol 2010;28(20):3307–3315.
  3. Gandhi TK, Kachalia A, Thomas EJ, et al. Missed and delayed diagnoses in the ambulatory setting: a study of closed malpractice claims. Ann Intern Med 2006;145(7):488–496.
  4. Phillips RL Jr, Bartholomew LA, Dovey SM, Fryer GE Jr, Miyoshi TJ, Green LA. Learning from malpractice claims about negligent, adverse events in primary care in the United States. Qual Saf Health Care 2004;13(2):121–126.
  5. Brenner RJ, Lucey LL, Smith JJ, Saunders R. Radiology and medical malpractice claims: a report on the practice standards claims survey of the Physician Insurers Association of America and the American College of Radiology. AJR Am J Roentgenol 1998;171(1):19–22.
  6. Berlin L, Murphy DR, Singh H. Breakdowns in communication of radiological findings: an ethical and medico-legal conundrum. Diagnosis 2014;1(4):263–268.
  7. Poon EG, Kachalia A, Puopolo AL, Gandhi TK, Studdert DM. Cognitive errors and logistical breakdowns contributing to missed and delayed diagnoses of breast and colorectal cancers: a process analysis of closed malpractice claims. J Gen Intern Med 2012;27(11):1416–1423.
  8. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med 2005;165(13):1493–1499.
  9. Singh H, Giardina TD, Petersen LA, et al. Exploring situational awareness in diagnostic errors in primary care. BMJ Qual Saf 2012;21(1):30–38.
  10. Murphy DR, Reis B, Sittig DF, Singh H. Notifications received by primary care practitioners in electronic health records: a taxonomy and time analysis. Am J Med 2012;125(2):209.e1–e7.
  11. Murphy DR, Reis B, Kadiyala H, et al. Electronic health record-based messages to primary care providers: valuable information or just noise? Arch Intern Med 2012;172(3):283–285.
  12. McDonald CJ, McDonald MH. Electronic medical records and preserving primary care physicians’ time: comment on “electronic health record-based messages to primary care providers”. Arch Intern Med 2012;172(3):285–287.
  13. Murphy DR, Singh H, Berlin L. Communication breakdowns and diagnostic errors: a radiology perspective. Diagnosis 2014;1(4):253–261.
  14. Menon S, Smith MW, Sittig DF, et al. How context affects electronic health record-based test result follow-up: a mixed-methods evaluation. BMJ Open 2014;4(11):e005985.
  15. Al-Mutairi A, Meyer AND, Chang P, Singh H. Lack of timely follow-up of abnormal imaging results and radiologists’ recommendations. J Am Coll Radiol 2015;12(4):385–389.
  16. Frankovich J, Longhurst CA, Sutherland SM. Evidence-based medicine in the EMR era. N Engl J Med 2011;365(19):1758–1759.
  17. Szekendi MK, Sullivan C, Bobb A, et al. Active surveillance using electronic triggers to detect adverse events in hospitalized patients. Qual Saf Health Care 2006;15(3):184–190.
  18. Classen DC, Resar R, Griffin F, et al. ‘Global trigger tool’ shows that adverse events in hospitals may be ten times greater than previously measured. Health Aff (Millwood) 2011;30(4):581–589.
  19. Jha AK, Classen DC. Getting moving on patient safety: harnessing electronic data for safer care. N Engl J Med 2011;365(19):1756–1758.
  20. Zalis M, Harris M. Advanced search of the electronic medical record: augmenting safety and efficiency in radiology. J Am Coll Radiol 2010;7(8):625–633.
  21. Resar RK, Rozich JD, Classen D. Methodology and rationale for the measurement of harm with trigger tools. Qual Saf Health Care 2003;12(Suppl 2):ii39–ii45.
  22. Honigman B, Light P, Pulling RM, Bates DW. A computerized method for identifying incidents associated with adverse drug events in outpatients. Int J Med Inform 2001;61(1):21–32.
  23. Classen DC, Pestotnik SL, Evans RS, Burke JP. Computerized surveillance of adverse drug events in hospital patients: 1991. Qual Saf Health Care 2005;14(3):221–225; discussion 225–226.
  24. Mull HJ, Nebeker JR. Informatics tools for the development of action-oriented triggers for outpatient adverse drug events. AMIA Annu Symp Proc 2008:505–509.
  25. Knirsch CA, Jain NL, Pablos-Mendez A, Friedman C, Hripcsak G. Respiratory isolation of tuberculosis patients using clinical guidelines and an automated clinical decision support system. Infect Control Hosp Epidemiol 1998;19(2):94–100.
  26. Muething SE, Conway PH, Kloppenborg E, et al. Identifying causes of adverse events detected by an automated trigger tool through in-depth analysis. Qual Saf Health Care 2010;19(5):435–439.
  27. Murphy DR, Laxmisan A, Reis BA, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf 2014;23(1):8–16.
  28. Singh H, Thomas EJ, Mani S, et al. Timely follow-up of abnormal diagnostic imaging test results in an outpatient setting: are electronic medical records achieving their potential? Arch Intern Med 2009;169(17):1578–1586.
  29. Singh H, Arora HS, Vij MS, Rao R, Khan MM, Petersen LA. Communication outcomes of critical imaging results in a computerized notification system. J Am Med Inform Assoc 2007;14(4):459–466.
  30. Veteran population. National Center for Veterans Analysis and Statistics. http://www.va.gov/vetdata/Veteran_Population.asp. Accessed December 11, 2014.
  31. Giardina TD, King BJ, Ignaczak AP, et al. Root cause analysis reports help identify common factors in delayed diagnosis and treatment of outpatients. Health Aff (Millwood) 2013;32(8):1368–1375.
  32. Poon EG, Haas JS, Louise Puopolo A, et al. Communication factors in the follow-up of abnormal mammograms. J Gen Intern Med 2004;19(4):316–323.
  33. Gordon JRS, Wahls T, Carlos RC, Pipinos II, Rosenthal GE, Cram P. Failure to recognize newly identified aortic dilations in a health care system with an advanced electronic medical record. Ann Intern Med 2009;151(1):21–27, W5.
  34. Schiff GD, Kim S, Krosnjar N, et al. Missed hypothyroidism diagnosis uncovered by linking laboratory and pharmacy data. Arch Intern Med 2005;165(5):574–577.
  35. Hunnibell LS, Rose MG, Connery DM, et al. Using nurse navigation to improve timeliness of lung cancer care at a veterans hospital. Clin J Oncol Nurs 2012;16(1):29–36.
  36. Hunnibell LS, Slatore CG, Ballard EA. Foundations for lung nodule management for nurse navigators. Clin J Oncol Nurs 2013;17(5):525–531.
  37. McMullen L. Oncology nurse navigators and the continuum of cancer care. Semin Oncol Nurs 2013;29(2):105–117.
  38. Choksi VR, Marn CS, Bell Y, Carlos R. Efficiency of a semiautomated coding and review process for notification of critical findings in diagnostic imaging. AJR Am J Roentgenol 2006;186(4):933–936.
  39. Powell AA, Schultz EM, Ordin DL, et al. Timeliness across the continuum of care in veterans with lung cancer. J Thorac Oncol 2008;3(9):951–957.
  40. Gould MK, Ghaus SJ, Olsson JK, Schultz EM. Timeliness of care in veterans with non-small cell lung cancer. Chest 2008;133(5):1167–1173.
  41. Smith M, Murphy D, Laxmisan A, et al. Developing software to “track and catch” missed follow-up of abnormal test results in a complex sociotechnical environment. Appl Clin Inform 2013;4(3):359–375.
  42. Singh H, Vij MS. Eight recommendations for policies for communicating abnormal test results. Jt Comm J Qual Patient Saf 2010;36(5):226–232.
  43. Nebeker J, Stoddard G, Rosen A. Considering sensitivity and positive predictive value in comparing the performance of trigger systems for iatrogenic adverse events: Triggers and Targeted Injury Detection Systems (TIDS) expert panel meeting. Rockville, Md: Agency for Healthcare Research and Quality, 2009.

Articles from Radiology are provided here courtesy of Radiological Society of North America
