Journal of the American Medical Informatics Association (JAMIA). 2023 May 31;30(9):1526–1531. doi: 10.1093/jamia/ocad089

Developing electronic clinical quality measures to assess the cancer diagnostic process

Daniel R Murphy 1,2, Andrew J Zimolzak 3,4, Divvy K Upadhyay 5, Li Wei 6,7, Preeti Jolly 8,9, Alexis Offner 10,11, Dean F Sittig 12,13, Saritha Korukonda 14, Riyaa Murugaesh Rekha 15, Hardeep Singh 16,17
PMCID: PMC10436145  PMID: 37257883

Abstract

Objective

Measures of diagnostic performance in cancer are underdeveloped. Electronic clinical quality measures (eCQMs) to assess quality of cancer diagnosis could help quantify and improve diagnostic performance.

Materials and Methods

We developed 2 eCQMs to assess diagnostic evaluation of red-flag clinical findings for colorectal cancer (CRC; based on abnormal stool-based cancer screening tests or laboratory results suggestive of iron deficiency anemia) and lung cancer (based on abnormal chest imaging). The 2 eCQMs quantified rates of red-flag follow-up for CRC and lung cancer using electronic health record data repositories at 2 large healthcare systems. Each measure used clinical data to identify abnormal results, evidence of appropriate follow-up, and exclusions that signified follow-up was unnecessary. To validate accuracy, clinicians reviewed 100 positive and 20 negative randomly selected records for each eCQM at each site and categorized missed opportunities as related to system, provider, or patient factors.

Results

We implemented the CRC eCQM at both sites, while the lung cancer eCQM was implemented only at the VA because most chest imaging results at Geisinger lacked structured data indicating the level of cancer suspicion. For the CRC eCQM, after removing clinical exclusions, the rate of appropriate follow-up was 36.0% (26 746/74 314 patients) at the VA and 41.0% (1009/2461 patients) at Geisinger (P < .001). Similarly, the rate of appropriate evaluation for lung cancer at the VA was 61.5% (25 166/40 924 patients). Reviewers most frequently attributed missed opportunities at both sites to provider factors (84 of 157).

Conclusions

We implemented 2 eCQMs to evaluate the diagnostic process in cancer at 2 large health systems. Health care organizations can use these eCQMs to monitor diagnostic performance related to cancer.

Keywords: quality measures, triggers, diagnostic errors, diagnostic delays, lung cancer, colon cancer

BACKGROUND

Diagnostic errors are common and contribute to a substantial portion of preventable patient harm.1 Among patients ultimately diagnosed with cancer, an estimated 20% undergo multiple visits and prolonged evaluation of cancer-related symptoms before a formal cancer diagnosis is made.2 Delays in cancer diagnosis can result from lack of timely follow-up of clinical red-flag findings, such as symptoms or test results that warrant evaluation for cancer.2–4 For many cancers, these delays often correlate with poorer prognosis5,6 and represent missed opportunities where earlier action by patients, providers, or health systems could have led to an appropriate diagnosis and allowed treatment to begin sooner.2 Despite advances in detection7,8 and improved understanding of the root causes of diagnostic errors,9–11 measurement and tracking of diagnostic errors have not been implemented.12 This limits the attention and organizational learning necessary to improve timeliness of cancer diagnosis.

Standardized measurement of medical errors was proposed over 2 decades ago13 and has reduced patient harm related to medication safety and hospital-acquired conditions.14 Today, electronic clinical quality measures (eCQMs) are frequently used in ambulatory practices to monitor and incentivize preventive activities (eg, mammography screening rates) and quality of care (eg, diabetes and hypertension control).15 However, to date, electronic quality measures for missed, delayed, or wrong diagnosis have received limited exploration and advancement.16 One reason is lack of accurate, automated, standardized, and easily implemented quality measures related to diagnostic errors.

Current detection of diagnostic errors often relies on incident reporting, patient feedback, sentinel events, or malpractice claims.17 These mechanisms miss the vast majority of diagnostic errors and involve manual data collection, making them poor candidates for reliable quality measures.7,16,18 However, with widespread adoption of electronic health records, the data necessary to automatically identify signals suggestive of a diagnostic error are now available. Prior work has applied algorithms to large clinical data sets to identify diagnostic process breakdowns in patients with red flags (eg, signs, symptoms, or abnormal test results) suggestive of a possible cancer.19–21 In addition to identifying cases for organizational learning and preventing additional delays from failure to follow up red flags, this methodology could be adapted for use in eCQMs focused on measuring and tracking diagnostic performance. Such eCQMs could help organizations monitor cancer-related diagnostic process breakdowns and potential missed opportunities in cancer diagnosis more reliably and uniformly than existing methods. They could also inform efforts to identify and reduce underlying contributory factors.

We aimed to develop, implement, and validate eCQMs to evaluate a health care organization’s performance related to the cancer diagnostic process. Specifically, we sought to develop and implement measures to monitor how effectively clinicians recognize and act on red-flag findings suspicious for undiagnosed cancer. Such measures could allow comparison of diagnostic quality and safety over time and across practices and health care organizations.

MATERIALS AND METHODS

Using a previously validated algorithmic approach,19,20 we developed 2 eCQMs to assess follow-up and appropriate evaluation after a clinical red-flag finding suspicious for cancer. We conducted the study at 2 organizations: the US Department of Veterans Affairs (Site 1), which hosts a national clinical data warehouse containing inpatient and outpatient records on over 20 million individuals,22 and Geisinger (Site 2), a large, mostly rural integrated health care system with a clinical data warehouse containing around 4 million patient records. The Baylor College of Medicine and Geisinger institutional review boards approved the study. Measure algorithms were developed and tested previously19,23 on a subset of regional VA data and applied nationally in this study. Both measures were designed to automatically extract electronic data from the respective clinical data repository and calculate each organization's rate of timely follow-up of tests suspicious for cancer.

We designed both measures to assess the rate of appropriate and timely follow-up based on 3 sets of criteria. First, the measures identified all patients with a red-flag abnormal finding indicating that workup is needed to evaluate for a possible cancer (A = Abnormal findings). For the CRC evaluation measure, abnormal findings were defined as a positive fecal occult blood test, fecal immunochemical test, or stool DNA test, or laboratory test results suggestive of iron deficiency anemia; for the lung cancer evaluation measure, abnormal chest imaging (plain radiograph or computed tomography) with a possible lung mass or nodule (see Supplementary Appendix for detailed criteria). Second, the measures identified the subset of red-flag abnormal findings where a subsequent evaluation for cancer would typically not be performed (C = Clinical exclusions), such as in patients with terminal illness. We calculated the denominator by subtracting clinical exclusions (C) from the total number of abnormal findings (A). From the records included in the denominator, the measures calculated the subset of red-flag abnormal findings where timely and appropriate follow-up action was detected (numerator, F = Followed up), for example, via a completed colonoscopy or repeat chest imaging. Figure 1 displays the numerator, denominator, and formula for calculating this measure.
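
To make the measure logic concrete, the following is a minimal sketch of the A/C/F computation (our illustration, not the authors' implementation); the record fields are hypothetical stand-ins for the data elements described above.

```python
from dataclasses import dataclass

@dataclass
class RedFlag:
    """One abnormal (red-flag) finding extracted from the clinical data repository."""
    patient_id: str
    has_clinical_exclusion: bool   # eg, documented terminal illness
    followed_up_in_window: bool    # eg, colonoscopy completed within the measure window

def ecqm_score(red_flags: list[RedFlag]) -> float:
    """Score = F / (A - C): follow-ups received over red flags needing follow-up."""
    a = len(red_flags)                                       # A: abnormal findings
    c = sum(r.has_clinical_exclusion for r in red_flags)     # C: clinical exclusions
    f = sum(r.followed_up_in_window for r in red_flags
            if not r.has_clinical_exclusion)                 # F: followed up
    return f / (a - c) if a > c else float("nan")

# Example with made-up records: 3 red flags, 1 excluded, 1 of the remaining 2 followed up
flags = [RedFlag("p1", False, True), RedFlag("p2", True, False), RedFlag("p3", False, False)]
print(ecqm_score(flags))  # 0.5
```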

Figure 1.

Formula for electronic clinical quality measure to assess the cancer diagnostic process. A (Abnormal findings): number of red-flag findings. C (Clinical exclusions): number of red flags where follow-up is not indicated. F (Followed up): number of red flags where follow-up was needed and appropriately received.
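
Expressed as an equation consistent with these definitions (a reconstruction of the figure's formula from the surrounding text, since the image itself is not reproduced here):

$$\text{eCQM score} = \frac{F}{A - C}$$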

Measures were implemented using commonly available clinical data elements, including Current Procedural Terminology codes for visits and procedures, International Classification of Diseases codes for diagnoses, and Logical Observation Identifiers Names and Codes for laboratory testing; the lung cancer measure additionally relied on internal codes that radiologists added to chest imaging results to signify the type of abnormality, if any (eg, "suspicious for malignancy"). The lung cancer eCQM used a 30-day timeframe when determining timeliness of appropriate follow-up action, while the CRC eCQM used a 60-day timeframe; both timeframes were determined based on literature review and expert opinion.19,20
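
As an illustration of how such value sets and timeframes might be organized as measure configuration, consider the sketch below; the code placeholders are hypothetical and are not the study's actual value sets.

```python
# Hypothetical measure configuration; the code values are placeholders,
# not the actual LOINC/CPT/ICD value sets used in the study.
MEASURE_CONFIG = {
    "crc": {
        "red_flag_lab_loinc": {"<FOBT/FIT/stool-DNA LOINC codes>", "<IDA lab LOINC codes>"},
        "follow_up_cpt": {"<colonoscopy CPT codes>"},
        "follow_up_window_days": 60,   # per the CRC eCQM described above
    },
    "lung": {
        "red_flag_imaging_codes": {"<internal radiology codes, eg 'suspicious for malignancy'>"},
        "follow_up_cpt": {"<repeat chest imaging / biopsy CPT codes>"},
        "follow_up_window_days": 30,   # per the lung cancer eCQM described above
    },
}
```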

We implemented and applied both measures at Site 1 to calculate diagnostic performance related to follow-up of suspicious lung and colorectal findings. At Site 2, only the colorectal eCQM was implemented because, unlike the VA, Site 2 did not electronically tag radiology images found to be “suspicious for cancer.” Because COVID-19 likely impacted follow-up processes nationally,24 we evaluated patients seen between January 1, 2019 and December 31, 2019 to provide a steady baseline state unaffected by pandemic-related practice changes. To validate the accuracy of the measures, clinicians at each site reviewed 100 charts meeting each measure to calculate Positive Predictive Value (PPV) and 20 charts with a red flag but not meeting the measure to calculate Negative Predictive Value (NPV) using a standardized data collection form. Sample size was calculated to achieve a confidence interval of no more than 10% above and below (20% total width) the PPV and NPV point estimates. To ensure high interrater reliability before proceeding, a second reviewer performed independent reviews of 10% of the sample. Chart reviewers collected or confirmed demographic data, whether a missed opportunity was identified, and if so, whether it was due to patient factors (eg, patient did not show up to a specialist visit), provider factors (eg, provider did not act on abnormal test results or document any justification not to), or system factors (eg, referrals canceled by specialty clinic without process to reschedule). Data were analyzed using descriptive statistics, and differences between scores (proportions) across sites were analyzed using R 4.2, 2-tailed tests, and an alpha of 0.05.
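
As a sketch of the sample-size reasoning (assuming a normal-approximation binomial confidence interval, which the text does not specify), 100 reviews per measure bounds the confidence interval half-width at roughly 10 percentage points in the worst case:

```python
import math

def wald_half_width(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a normal-approximation (Wald) confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case p = 0.5 with n = 100 reviews: half-width ~0.098,
# ie, roughly +/-10% (20% total width), matching the stated sampling target.
print(round(wald_half_width(0.5, 100), 3))  # 0.098
```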

RESULTS

The CRC eCQM was successfully implemented and tested at both sites. However, at Site 2, we discovered during development of the lung cancer eCQM that radiologists did not flag most abnormal chest imaging results using a structured field. Thus, the measure's algorithm could neither identify which imaging results were abnormal nor identify those containing a finding suspicious for possible cancer. Discerning malignancy-related results would have required additional chart reviews, limiting the effectiveness of our automated measure. Therefore, the lung cancer eCQM was not implemented at Site 2. Results of applying the measures at each site and of eCQM performance based on chart reviews are presented below.

CRC evaluation eCQM

During implementation, we reviewed 30 records at Site 1 and 40 at Site 2 to confirm that individual eCQM criteria were accurately captured. At Site 1, the CRC eCQM identified 146 614 red flags, of which 72 300 contained clinical exclusions making follow-up unnecessary, leaving 74 314 red flags needing follow up (denominator). From these, 26 783 received appropriate follow-up. Thus, Site 1 scored 36.0% for the CRC eCQM. Similarly, Site 2 identified 8174 red flags, of which 5713 had clinical exclusions, leaving 2461 in the denominator. From these, 1009 received appropriate follow-up, yielding a score of 41.0%. The difference in scores between sites was statistically significant (P < .001). Reviewers agreed on the presence or absence of a missed opportunity in 70% of overlapping records at Site 1 and 70% at Site 2.

Lung cancer evaluation eCQM

The lung cancer eCQM was successfully implemented at Site 1. Thirty-three reviews were performed during development to confirm that the eCQM accurately captured data. During validation, 48 207 red flags were identified, of which 7283 had clinical exclusions automatically detected, leaving 40 924 needing follow-up. From these, 25 166 received appropriate and timely follow-up, yielding a score of 61.5%. Reviewers agreed on all 10 (100%) records at Site 1.

Detection and analysis of missed opportunities

Demographic data of unique patients with red flags who did not receive appropriate follow-up and did not have a clinical exclusion are displayed in Table 1. The CRC eCQM algorithm had a PPV of 70% (95% CI: 61.0–79.0%) for detecting missed opportunities (70 of 100 records identified by the measure truly contained a delay in care; Table 2) and an NPV of 100% (95% CI: >90–100%). For the lung cancer eCQM, the PPV for detecting missed opportunities was 27% (95% CI: 18.3–35.7%), but an additional 55% of records required action at a future date beyond our 30-day cut-off (eg, a nodule with lower potential for cancer would require 3-month follow-up based on Fleischner criteria25). These latter cases necessitated tracking to ensure that follow-up occurred but could not be labeled as missed opportunities within the 30-day period used by the algorithm. The lung cancer eCQM NPV was 100% (95% CI: >90–100%).
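
The reported CRC PPV interval is consistent with a simple normal-approximation confidence interval for a proportion (a sketch under that assumption; the authors' exact method is not stated):

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# 70 of 100 reviewed records confirmed as missed opportunities:
low, high = wald_ci(70, 100)
print(f"{low:.3f}-{high:.3f}")  # 0.610-0.790, ie, 61.0-79.0%
```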

Table 1.

Demographics of unique patients at high risk for a missed opportunity as identified by electronic clinical quality measures

                                        Site 1                      Site 2a
                                        Colorectal     Lung         Colorectal
All red flags                           n = 29 578     n = 15 026   n = 1221
Gender
  Male                                  86.0%          94.1%        60.5%
  Female                                14.0%          5.9%         39.5%
Race
  White                                 64.7%          79.0%        85.1%
  Black or African American             26.5%          14.7%        4.8%
  Native Hawaiian or Pacific Islander   0.8%           0.7%         0.7%
  Asian                                 0.8%           0.5%         0.9%
  American Indian or Alaska Native      1.1%           0.8%         0.2%
  Unknown                               6.0%           4.3%         8.3%
Age (years)
  40–49                                 5.6%           1.8%         16.0%
  50–59                                 19.3%          11.6%        27.0%
  60–69                                 38.9%          32.1%        36.6%
  70–79                                 36.2%          41.8%        18.4%
  80–89                                 0.0%           10.8%        0.0%
  90–100                                0.0%           1.9%         0.0%

a Lung digital quality measure could not be implemented at Site 2.

Table 2.

Red flags and missed opportunities for diagnosis identified by validation chart review

                                        Site 1                      Site 2a
                                        Colorectal     Lung         Colorectal
                                        n = 100        n = 100      n = 100
Red flag
  Colorectal cancer
    Positive Cologuard                  0 (0%)         —            37 (37%)
    Positive FOBT/FIT                   52 (52%)       —            15 (15%)
    Labs consistent with IDA            48 (48%)       —            48 (48%)
  Lung cancer
    Lung nodule                         —              100 (100%)   —
Missed opportunity for diagnosis
  Missed opportunity                    70 (70%)       27 (27%)     60 (60%)
    Patient factorsb                    14 (20%)       6 (20%)      30 (50%)
    Provider factorsb                   34 (49%)       16 (59%)     34 (57%)
    System factorsb                     30 (43%)       7 (26%)      23 (38%)
  Needs tracking                        —              55 (55%)     —
  No missed opportunity                 30 (30%)       18 (18%)     40 (40%)

a Lung digital quality measure could not be implemented at Site 2.

b Individual missed opportunities may have more than one contributing factor.

FOBT/FIT: fecal occult blood test/fecal immunochemical test; IDA: iron deficiency anemia.

Among instances where a missed opportunity was confirmed by reviewers, the most common causes across both measures at both sites (59% for lung, and 49% and 57% for CRC at Site 1 and Site 2, respectively) were provider factors (providers included physicians and advanced care practitioners). These included instances where a test result was transmitted to the ordering provider, but no appropriate action was taken. In most instances, reviewers did not find any documented action, rationale not to act, or communication with the patient about the abnormal results, suggesting that providers either overlooked the results or failed to document a rationale for inaction. In other instances, providers treated iron deficiency anemia with iron supplements without pursuing guideline-based investigation of the underlying cause of the anemia.26,27 At Site 1, the second most common cause of missed opportunities was system factors (26% for lung and 43% for colorectal), which included referrals or subsequent testing orders being placed but remaining unscheduled. At Site 2, patient factors were the second most common, generally relating to patients not showing up to visits or declining testing. While such instances may represent appropriate acquiescence to patient preferences, understanding these missed opportunities can help ensure patients have the information necessary to make an informed decision before declining care.

DISCUSSION

We successfully developed and implemented 2 eCQMs to automate measurement of diagnostic performance in cancer. The quality measures assessed for any missed opportunities in diagnostic evaluation for colorectal and lung cancer and quantified follow-up after abnormal test results suspicious for cancer. While both measures provide strong signals for missed opportunities, one of the 2 measures could be implemented only at one site because of the absence of radiology codes indicating suspicion for malignancy. The high PPV of the CRC eCQM demonstrates its promise as a robust method for assessing an organization’s diagnostic performance in cancer.

The inability to implement the lung cancer measure at one site and its low PPV at the other highlight the challenges of working with unstructured EHR data. Because this measure relies on identifying abnormal imaging results categorized as suspicious for malignancy by radiologists, it cannot function in settings where this categorization does not occur, such as at Site 2. Furthermore, lung nodules flagged as suspicious for malignancy often have low-risk features or occur in patients without risk factors, thus requiring more flexible follow-up timeframes (eg, 3–12 months) than our measure included. Discerning these differences in follow-up requires radiologists to categorize levels of urgency while interpreting images. The American College of Radiology's Lung-RADS system28 offers a common lexicon that provides this capability, and while its use is growing, it is not yet widespread.29 Alternatively, methods such as natural language processing could convert radiologist-dictated findings into expected follow-up timeframes, which could be useful for future measures. While some efforts to apply such advanced algorithms have shown early success,30 implementation currently requires specialized data mining knowledge and may not be immediately feasible at many organizations.31
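
As an illustration only, a rule-based pass over report text might look like the sketch below; the patterns, size thresholds, and follow-up categories are hypothetical and are not the authors' method, a validated NLP approach, or Fleischner/Lung-RADS rules.

```python
import re

# Deliberately simplified, rule-based mapping of free-text chest imaging reports
# to a follow-up urgency category. Real systems would need validated NLP or
# structured codes such as Lung-RADS; all patterns and categories here are made up.
PATTERNS = [
    (re.compile(r"suspicious for (malignancy|cancer)", re.I), "follow_up_30_days"),
    (re.compile(r"\bmass\b", re.I), "follow_up_30_days"),
    (re.compile(r"\bnodule\b", re.I), "follow_up_3_to_12_months"),
]

def classify_report(report_text: str) -> str:
    """Return the first matching follow-up category, or a default of no flag."""
    for pattern, category in PATTERNS:
        if pattern.search(report_text):
            return category
    return "no_follow_up_flagged"

print(classify_report("RIGHT UPPER LOBE: 9 mm nodule, suspicious for malignancy."))
# -> follow_up_30_days
```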

Our work demonstrates a successful first step in developing quality measures for diagnostic performance in cancer. For instance, such measures could be used to incentivize improvements in diagnostic performance and to assist quality improvement teams in developing interventions to reduce missed opportunities in follow-up. While our quality measures were implemented at the organizational level, measures could also be focused on a particular clinic or clinician or used to recognize high-functioning practices with robust follow-up pathways.32 Rather than serving as interruptive clinical decision support or EHR notifications to clinicians, which may be ignored,33 these eCQMs can facilitate monitoring at the practice or organizational level and help create lists of patients who did not receive follow-up. In the future, these measures could be used for benchmarking clinicians within an organization, identifying patients who did not receive timely follow-up to understand barriers to optimal diagnostic performance, and evaluating the impact of quality improvement interventions targeting diagnostic performance. Future efforts are needed to expand the types of diagnostic performance measures available and to understand the impact that implementing these measures has on improving cancer outcomes.

Several limitations may impact the generalizability of this study. First, eCQMs were implemented at 2 large health care systems from which patients receive most of their care; thus, these measures may not be generalizable to smaller practices or practices where data external to the practice may not be easily available. However, the growing use of health information exchanges34 improves the likelihood that data necessary for such measures will be accessible to practices of all sizes in the future. Second, validation was performed based on chart reviews, which do not always represent care delivered. However, clinical documentation is currently the most commonly used method to assess care quality. Third, we could not implement the lung cancer measure at one site. Nevertheless, our work still outlines the first steps in measurement and highlights pathways for further improvement in this measure. Fourth, the measures, particularly the lung cancer measure, did not achieve optimal positive predictive values and may incorrectly categorize patients as having a delay when they did not yet need follow-up. However, the lung cancer measure provides a tool for tracking patients needing future follow-up until systems such as Lung-RADS become more widespread and developed for tracking purposes.

CONCLUSION

We developed and implemented eCQMs designed to evaluate diagnostic performance related to evaluation for possible colorectal and lung cancer. Our work addresses the current lack of electronic quality measures related to cancer diagnosis and can be used for benchmarking and quality improvement to reduce preventable delays in cancer diagnosis.


ACKNOWLEDGMENTS

The authors would like to acknowledge the contributions of Viralkumar Vaghani MBBS, and Umair Mushtaq MS for their efforts in data collection during this project.

Contributor Information

Daniel R Murphy, Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas, USA; Department of Medicine, Baylor College of Medicine, Houston, Texas, USA.

Andrew J Zimolzak, Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas, USA; Department of Medicine, Baylor College of Medicine, Houston, Texas, USA.

Divvy K Upadhyay, Division of Quality, Safety and Patient Experience, Geisinger, Danville, Pennsylvania, USA.

Li Wei, Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas, USA; Department of Medicine, Baylor College of Medicine, Houston, Texas, USA.

Preeti Jolly, Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas, USA; Department of Medicine, Baylor College of Medicine, Houston, Texas, USA.

Alexis Offner, Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas, USA; Department of Medicine, Baylor College of Medicine, Houston, Texas, USA.

Dean F Sittig, Department of Clinical and Health Informatics, The University of Texas Health Science Center at Houston’s School of Biomedical Informatics, Houston, Texas, USA; The UT-Memorial Hermann Center for Healthcare Quality & Safety, Houston, Texas, USA.

Saritha Korukonda, Investigator-Initiated Research Operations, Geisinger, Danville, Pennsylvania, USA.

Riyaa Murugaesh Rekha, Division of Quality, Safety and Patient Experience, Geisinger, Danville, Pennsylvania, USA.

Hardeep Singh, Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas, USA; Department of Medicine, Baylor College of Medicine, Houston, Texas, USA.

FUNDING

This project was funded by the Gordon and Betty Moore Foundation (Award 8838) and partially funded by the Houston VA HSR&D Center for Innovations in Quality, Effectiveness and Safety (CIN 13-413). Dr. Singh is additionally supported by the VA Health Services Research and Development Service (IIR17-127), the VA National Center for Patient Safety, and the Agency for Healthcare Research and Quality (R01HS022087, R18HS029347, and R01HS028595). These funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.

AUTHOR CONTRIBUTIONS

DRM, AJZ, DKU, DFS, and HS contributed to the conception of this study. All authors contributed to the design of the work. DRM, DKU, LW, PJ, AO, SK, and RMR participated in data collection. DRM, DKU, LW, and HS analyzed and interpreted the data. DRM drafted the initial article. All authors provided critical revision of the article and approved the final manuscript.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

DISCLOSURE STATEMENT

The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

CONFLICT OF INTEREST STATEMENT

The authors have no competing interests to declare.

DATA AVAILABILITY

The United States Department of Veterans Affairs (VA) places legal restrictions on access to veterans' health care data, which include both identifying data and sensitive patient information. The VA data sets used for this study are not permitted to leave the VA firewall without a Data Use Agreement. However, VA data are made freely available to researchers behind the VA firewall with an approved VA study protocol. Data from the Geisinger data repository are available with permission from Geisinger Health. All summary data obtained during analysis for this study are included in the manuscript and Supplementary Appendix.

REFERENCES

1. Singh H, Graber ML. Improving diagnosis in health care—the next imperative for patient safety. N Engl J Med 2015;373(26):2493–5.
2. Lyratzopoulos G, Vedsted P, Singh H. Understanding missed opportunities for more timely diagnosis of cancer in symptomatic patients after presentation. Br J Cancer 2015;112(s1):S84–91.
3. Singh H, Hirani K, Kadiyala H, et al. Characteristics and predictors of missed opportunities in lung cancer diagnosis: an electronic health record-based study. J Clin Oncol 2010;28(20):3307–15.
4. Bhise V, Modi V, Kalavar A, et al. Patient-reported attributions for missed colonoscopy appointments in two large healthcare systems. Dig Dis Sci 2016;61(7):1853–61.
5. Hanna TP, King WD, Thibodeau S, et al. Mortality due to cancer treatment delay: systematic review and meta-analysis. BMJ 2020;371:m4087.
6. Neal RD, Tharmanathan P, France B, et al. Is increased time to diagnosis and treatment in symptomatic cancer associated with poorer outcomes? Systematic review. Br J Cancer 2015;112(s1):S92–107.
7. Murphy DR, Meyer AN, Sittig DF, Meeks DW, Thomas EJ, Singh H. Application of electronic trigger tools to identify targets for improving diagnostic safety. BMJ Qual Saf 2018;28(2):151–9.
8. Murphy DR, Laxmisan A, Reis BA, et al. Electronic health record-based triggers to detect potential delays in cancer diagnosis. BMJ Qual Saf 2014;23(1):8–16.
9. Singh H, Thomas EJ, Petersen LA, Studdert DM. Medical errors involving trainees: a study of closed malpractice claims from 5 insurers. Arch Intern Med 2007;167(19):2030–6.
10. Rogith D, Satterly T, Singh H, et al. Application of human factors methods to understand missed follow-up of abnormal test results. Appl Clin Inform 2020;11(5):692–8.
11. Graber ML. Progress understanding diagnosis and diagnostic errors: thoughts at year 10. Diagnosis (Berl) 2020;7(3):151–9.
12. Improving Diagnostic Quality and Safety. National Quality Forum; 2017: 79.
13. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
14. Bates DW, Singh H. Two decades since To Err Is Human: an assessment of progress and emerging priorities in patient safety. Health Aff (Millwood) 2018;37(11):1736–43.
15. Centers for Medicare and Medicaid Services. CMS Quality Measure Development Plan: Supporting the Transition to the Merit-based Incentive Payment System (MIPS) and Alternative Payment Models (APMs). 2016. Accessed April 12, 2023. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf
16. Singh H, Graber ML, Hofer TP. Measures to improve diagnostic safety in clinical practice. J Patient Saf 2019;15(4):311–6.
17. Singh H, Bradford A, Goeschel C. Operational measurement of diagnostic safety: state of the science. Diagnosis (Berl) 2021;8(1):51–65.
18. Singh H, Sittig DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf 2015;24(2):103–10.
19. Murphy DR, Meyer AND, Vaghani V, et al. Development and validation of trigger algorithms to identify delays in diagnostic evaluation of gastroenterological cancer. Clin Gastroenterol Hepatol 2018;16(1):90–8.
20. Murphy DR, Meyer AND, Bhise V, et al. Computerized triggers of big data to detect delays in follow-up of chest imaging results. Chest 2016;150(3):613–20.
21. Murphy DR, Meyer AND, Vaghani V, et al. Electronic triggers to identify delays in follow-up of mammography: harnessing the power of big data in health care. J Am Coll Radiol 2018;15(2):287–95.
22. Fihn SD, Francis J, Clancy C, et al. Insights from advanced analytics at the Veterans Health Administration. Health Aff (Millwood) 2014;33(7):1203–11.
23. Murphy DR, Thomas EJ, Meyer AND, Singh H. Development and validation of electronic health record-based triggers to detect delays in follow-up of abnormal lung imaging findings. Radiology 2015;277(1):81–7.
24. Gandhi T, Singh H. Reducing the risk of diagnostic error in the COVID-19 era. J Hosp Med 2020;15(6):363–6.
25. Bueno J, Landeras L, Chung JH. Updated Fleischner Society guidelines for managing incidental pulmonary nodules: common questions and challenging scenarios. Radiographics 2018;38(5):1337–50.
26. Sawhney MS, Lipato T, Nelson DB, Lederle FA, Rector TS, Bond JH. Should patients with anemia and low normal or normal serum ferritin undergo colonoscopy? Am J Gastroenterol 2007;102(1):82–8.
27. Peytremann-Bridevaux I, Arditi C, Froehlich F, et al.; EPAGE II Study Group. Appropriateness of colonoscopy in Europe (EPAGE II)—iron-deficiency anemia and hematochezia. Endoscopy 2009;41(3):227–33.
28. Pinsky PF, Gierada DS, Black W, et al. Performance of Lung-RADS in the National Lung Screening Trial: a retrospective assessment. Ann Intern Med 2015;162(7):485–91.
29. Chelala L, Hossain R, Kazerooni EA, Christensen JD, Dyer DS, White CS. Lung-RADS version 1.1: challenges and a look ahead, from the AJR special series on radiology reporting and data systems. AJR Am J Roentgenol 2021;216(6):1411–22.
30. Hunter B, Reis S, Campbell D, et al. Development of a structured query language and natural language processing algorithm to identify lung nodules in a cancer centre. Front Med (Lausanne) 2021;8:748168.
31. Crombé A, Seux M, Bratan F, et al. What influences the way radiologists express themselves in their reports? A quantitative assessment using natural language processing. J Digit Imaging 2022;35(4):993–1007.
32. Mog AC, Liang PS, Donovan LM, et al. Timely colonoscopy after positive fecal immunochemical tests in the Veterans Health Administration: a qualitative assessment of current practice and perceived barriers. Clin Transl Gastroenterol 2022;13(2):e00438.
33. Murphy DR, Reis B, Kadiyala H, et al. Electronic health record-based messages to primary care providers: valuable information or just noise? Arch Intern Med 2012;172(3):283–5.
34. Devine EB, Totten AM, Gorman P, et al. Health information exchange use (1990–2015): a systematic review. EGEMS (Wash DC) 2017;5(1):27.
