Journal of General Internal Medicine. 2024 Oct 22;40(4):782–789. doi: 10.1007/s11606-024-09132-8

Evaluation of Measure Dx, a Resource to Accelerate Diagnostic Safety Learning and Improvement

Andrea Bradford 1, Alberta Tran 2, Kisha J Ali 2, Alexis Offner 1, Christine Goeschel 2, Umber Shahid 1, Melissa Eckroade 2, Hardeep Singh 1
PMCID: PMC11914432  PMID: 39438386

Abstract

Background

Several strategies have been developed to detect diagnostic errors for organizational learning and improvement. However, few health care organizations (HCOs) have integrated these strategies into routine operations. To address this gap, the Agency for Healthcare Research and Quality released “Measure Dx: A Resource To Identify, Analyze, and Learn From Diagnostic Safety Events” in 2022.

Objective

We conducted an evaluation of Measure Dx to assess the feasibility of its implementation and its effects on short-term and intermediate outcomes related to diagnostic safety.

Design

Prospective observational study.

Participants

Teams from 11 HCOs, primarily academic medical centers.

Interventions

Participants were asked to use Measure Dx over approximately 6 months and attend monthly virtual learning collaborative sessions to share and discuss approaches to measuring diagnostic safety.

Main Measures

Descriptive outcomes were gathered at the HCO level and included uptake of different case-finding strategies and the number of cases reviewed and confirmed to have diagnostic safety improvement opportunities. We collected information on organizational practices related to diagnostic safety at each HCO at baseline and at the conclusion of the project.

Key Results

The 11 HCOs completed all requirements for the evaluation. Each of the four diagnostic safety case-finding strategies outlined in Measure Dx was used by at least three HCOs. Across the cohort, participants reviewed 703 cases using a standardized data collection instrument. Of those cases, 224 (31.8%) were identified as diagnostic safety events with improvement opportunities. Unexpectedly, self-ratings on a checklist-based organizational self-assessment declined for several organizations.

Conclusions

Use of Measure Dx can help accelerate implementation of systematic approaches to diagnostic error measurement and learning across a variety of HCOs, while potentially enabling HCOs to identify opportunities to improve diagnostic safety practices.

Supplementary Information

The online version contains supplementary material available at 10.1007/s11606-024-09132-8.

INTRODUCTION

Diagnostic errors are estimated to affect most patients at least once during their lifetime.1 These errors are associated with substantial harm and are frequently implicated in malpractice claims.2–4 Many such errors are believed to be preventable, but they are challenging to capture and analyze using conventional patient safety mechanisms.5 Accordingly, the National Academies of Sciences, Engineering, and Medicine’s report Improving Diagnosis in Health Care1 recommends that health care organizations (HCOs) take action to “monitor the diagnostic process and identify, learn from, and reduce diagnostic errors and near misses in a timely fashion.” An important barrier to these efforts is a lack of consensus about how best to define and measure these events,6 which are more complex than many other types of safety events.

Currently, there is no standard metric for diagnostic errors and no realistic way to identify all diagnostic errors in an organization. However, it is feasible to focus on a limited range of situations or events in which diagnostic error may be prominent.7 For example, prior work describes focused reviews of pre-defined events (e.g., patient deaths,8 readmissions,9 unexpected presentations to the emergency department after a recent health care visit,10 unexpected care escalation to the intensive care unit11) to determine whether opportunities existed in these cases to make an earlier, correct diagnosis. Another strategy is to encourage clinicians to report diagnosis-related events.12,13 Such case-finding strategies have potential for translation to real-world settings.8,13–18 Data gathering activities result in quantifiable findings19 that can be used for internal learning and improvement purposes.

Few HCOs currently devote programmatic resources specific to diagnostic safety, and incentives to do so are lacking. For instance, a recent survey of hospitals participating in the Leapfrog Hospital Survey assessed their current diagnostic safety practices. Of the 4% of eligible hospitals that responded (which were more likely than non-respondents to have received a top safety grade), only 25% reported having a multidisciplinary team to promote diagnostic quality and safety.20

To help make diagnostic safety measurement strategies more standardized and widely accessible, the Agency for Healthcare Research and Quality (AHRQ) funded the development, pilot testing, and publication of Measure Dx,21 a publicly available resource to support HCOs in identifying and learning from diagnostic safety events. Measure Dx guides an HCO to implement a series of iterative steps to develop a diagnostic safety learning and improvement initiative. Various data sources can serve as a foundation for this work, and their selection should be guided by the organization’s resources and priorities for improvement. Using published literature and case examples, Measure Dx outlines four strategies to identify and analyze diagnostic safety events using one of the following types of data:

  1. Events already known to the organization through an existing quality and safety improvement activity (e.g., mortality reviews, general quality and safety reviews);

  2. Clinician-reported diagnostic safety events (e.g., those identified through a safety event reporting system);

  3. Patient-reported diagnostic safety events (e.g., those identified through complaints or other feedback mechanisms);

  4. Electronic health record (EHR)–enabled processes to identify a set of cases enriched for diagnostic safety events (e.g., events identified using structured queries of EHR data; a minimal query sketch follows this list).
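
The fourth strategy is typically operationalized through “e-trigger” queries that flag care patterns enriched for missed diagnostic opportunities. As a rough illustration only (the guide itself does not prescribe code), the following Python sketch flags one classic trigger pattern, an unplanned hospital admission shortly after an emergency department treat-and-release visit; the data frame layout and column names are hypothetical, not from Measure Dx or any specific EHR schema.

```python
import pandas as pd

# Hypothetical e-trigger sketch: flag inpatient admissions occurring within
# 14 days of an ED treat-and-release visit for the same patient. Column
# names (patient_id, visit_type, disposition, visit_date) are illustrative.
def flag_ed_return_admissions(visits: pd.DataFrame, window_days: int = 14) -> pd.DataFrame:
    ed = visits[(visits["visit_type"] == "ED") & (visits["disposition"] == "discharged")]
    admits = visits[visits["visit_type"] == "inpatient"]
    # Pair each ED discharge with later admissions for the same patient.
    pairs = ed.merge(admits, on="patient_id", suffixes=("_ed", "_admit"))
    delta = pairs["visit_date_admit"] - pairs["visit_date_ed"]
    flagged = pairs[(delta > pd.Timedelta(0)) & (delta <= pd.Timedelta(days=window_days))]
    # Flagged cases are only *enriched* for diagnostic safety events; each
    # still requires structured human review before being counted as an event.
    return flagged[["patient_id", "visit_date_ed", "visit_date_admit"]]
```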

Measure Dx also provides guidance to HCOs for implementing a structured process to review and adjudicate events and to further analyze those that are found to have preventable missed opportunities.

The content of Measure Dx was developed in close consultation with 12 national experts and further refined after field testing at 12 HCOs. Table 1 provides an outline of the final published guide.

Table 1.

Outline of Measure Dx

1. Prepare Your Organization for Discovery and Action

  • Ensure a foundation of psychological safety

  • Engage leadership and other stakeholders

  • Build a team

  • Disseminate information about your work

2. Conduct a Self-assessment

  • Take inventory of available resources to support this work

  • Select a feasible measurement strategy based on resources available to the team

3. Implement Measurement Strategies

  Strategy A. Use existing quality and safety data

  Re-examine previously identified safety events (e.g., rapid response calls, mortality reviews) for evidence of diagnostic improvement opportunities

  Strategy B. Solicit reports from clinicians

  Ask clinicians to bring attention to diagnostic events within an environment of psychological safety

  Strategy C. Leverage patient-reported data

  Examine patient surveys, incident reports, and complaints to identify missed opportunities in diagnosis

  Strategy D. EHR-enhanced chart review

  Use EHR searches or trigger algorithms to identify high-risk diagnoses or care patterns suggestive of a missed opportunity

4. Review and Analyze Cases for Improvement Opportunities

  Identify cases for review

  Ensure that pertinent clinical documentation is available to review the record

  Is there a missed opportunity?

  Perform a structured review of case details to determine whether there was a missed opportunity to make an earlier, correct diagnosis

  Review for contributing factors

  Analyze further details of the case using diagnostic error classification tools and taxonomies

We evaluated Measure Dx using quantitative and qualitative methods that assessed the feasibility of its implementation and its effects on short-term and intermediate outcomes. Here, we present our approach and quantitative outcomes, following the SQUIRE guidelines for reporting quality improvement initiatives.22 Our aims were to (1) examine the feasibility of implementing Measure Dx with limited external technical assistance; (2) identify the yield of newly detected diagnostic safety events and learning opportunities associated with use of Measure Dx; and (3) evaluate self-reported improvements in diagnostic safety processes, procedures, and improvement activities at the end of the evaluation.

METHODS

We conducted a prospective evaluation of Measure Dx between November 2022 and April 2023. Eligible participants were US HCOs that reported sufficient resources to complete the requirements of the evaluation (see “Procedures,” below). We otherwise placed few restrictions on eligibility, as we considered participants’ motivation to learn from events and improve to be more important than specific organizational resources or prior experience with diagnostic safety.

We used several national outreach methods to recruit evaluation sites in 2022. First, AHRQ hosted a 1-h webinar (approximately 200 attendees) during which we presented an overview of Measure Dx and details about the expectations and timelines for participating in the evaluation. The session was also recorded and posted to the AHRQ website. Subsequent recruitment strategies included listserv emails sent by AHRQ and other professional societies (reaching approximately 160,000 listserv subscribers), an announcement at the annual Society to Improve Diagnosis in Medicine meeting (about 200 attendees), word of mouth, and social media posts.

Procedures

We asked representatives from enrolled organizations to identify a “site champion” who would serve as the primary liaison for the project. We communicated the following expectations and activities related to participation:

  • Two or more people at the organization who would lead implementation of Measure Dx, including at least one clinician and one quality and safety professional;

  • Use of Measure Dx to identify, review, and analyze diagnostic safety events using at least one case identification strategy from the guide;

  • Participation in data collection;

  • Participation in monthly calls to share experiences with and learn from other participating organizations (see below).

To facilitate implementation and shared learning, participants attended monthly 1-h teleconferences. Teleconferences were structured using the Project ECHO® model,23 a research-supported learning framework to disseminate knowledge and create virtual learning communities. The purpose of the teleconferences was to foster shared learning and networking, to check progress and troubleshoot barriers to implementation, and to understand real-time challenges, successes, and lessons learned. Each of the 6 monthly teleconferences followed a similar structure: a didactic presentation by a subject matter expert, followed by voluntary updates from one or more organizations about their ongoing work and a facilitated group discussion. A web-based participant portal for the project provided on-demand access to session recordings, slides, and relevant readings.

Data collection activities were reviewed and approved by the institutional review boards at the authors’ organizations. While our team led the evaluation and was available for brief consultation, we did not assist with any organization-level activities, such as selecting a case-finding strategy or reviewing events. No personally identifiable patient or provider information was provided to the evaluation team. Each participating organization received a $7500 stipend.

Measures

We assessed several feasibility-related outcomes, including (1) adherence to the required evaluation activities (attrition from the evaluation; attendance at ECHO teleconferences), (2) participants’ ability to generate data for learning and improvement (i.e., to identify and review cases of diagnostic safety events), and (3) uptake of each of the four case identification strategies described in Measure Dx. We collected self-reported data from organizations with a survey administered using REDCap (Research Electronic Data Capture),24,25 a secure, web-based platform designed to support data capture for research studies. We gathered the following information.
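
As context for readers unfamiliar with REDCap, survey responses of this kind are commonly pulled for analysis through REDCap’s record-export API. The snippet below is a generic sketch of that export step, not the evaluation team’s actual pipeline; the URL and token are placeholders.

```python
import requests

# Generic REDCap record-export sketch (not this evaluation's actual pipeline).
# The API URL and token are placeholders for a real REDCap project.
REDCAP_API_URL = "https://redcap.example.org/api/"
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"

response = requests.post(
    REDCAP_API_URL,
    data={
        "token": API_TOKEN,
        "content": "record",  # export saved records/survey responses
        "format": "json",
        "type": "flat",       # one row per record
    },
    timeout=30,
)
response.raise_for_status()
records = response.json()  # list of dicts, one per survey response
```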

Participant Characteristics

At baseline, we assessed organization-level characteristics, including type of organization (academic, not-for-profit, for-profit); total number of hospital and ambulatory care facilities; and current engagement in patient safety activities, including safety culture surveys, mechanisms for routine review of patient safety events, and staff- and/or provider-facing mechanisms for event reporting.

Case Review Summary

Participants reported the number of potential safety events identified or flagged for review, the number ultimately reviewed using a structured approach as described in Measure Dx, and the number of reviewed events confirmed to have missed opportunities in diagnosis, regardless of harm. Harm ratings can amplify hindsight bias26 and therefore were not collected. Event counts were further classified according to the case-finding strategy (see Table 1, Section 3) used to identify the event. Participants reported case review and event counts at the midpoint (months 1–3) and end (months 4–6) of the evaluation. Event data were reported to our team in aggregate within standardized form fields, and a team member confirmed that reports for each 3-month period represented independent events. A voluntary open-ended item solicited a summary of what was learned from case reviews and any subsequent actions taken by the organization.

Safer Dx Checklist

The Safer Dx Checklist consists of 10 items, each assessing implementation of a high-priority diagnostic safety practice.27 The checklist helped participants understand the current state of diagnostic practices in their organizations, identify areas needing improvement, and track progress toward diagnostic excellence over time. Example items include “Health care organization actively seeks patient and family feedback to identify and understand diagnostic safety concerns and addresses concerns by codesigning solutions” and “Health care organization has in place standardized systems and processes to close the loop on communication and follow up on abnormal test results and referrals.” Each practice is reported as fully, partially, or not implemented. Based on the number of fully implemented practices, organizations are classified as “beginning” (0–3 “fully” responses), “making progress” (4–6 “fully” responses), or “exemplar” (7 or more “fully” responses). The Safer Dx Checklist was administered at baseline and at the end of the evaluation.
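
To make the classification rule concrete, here is a minimal sketch of the scoring logic exactly as described above; the function name and input encoding are our own, not part of the published checklist.

```python
# Minimal sketch of the Safer Dx Checklist classification rule described
# above; the function name and input encoding are illustrative, not official.
def classify_organization(item_ratings: list[str]) -> str:
    """item_ratings: 10 entries, each 'fully', 'partially', or 'not'."""
    assert len(item_ratings) == 10, "the checklist has exactly 10 items"
    fully = sum(1 for rating in item_ratings if rating == "fully")
    if fully <= 3:
        return "beginning"        # 0-3 fully implemented practices
    if fully <= 6:
        return "making progress"  # 4-6 fully implemented practices
    return "exemplar"             # 7 or more fully implemented practices

# Example: 4 practices fully implemented -> "making progress"
print(classify_organization(["fully"] * 4 + ["partially"] * 3 + ["not"] * 3))
```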

Data Analysis

We generated descriptive statistics to characterize the evaluation cohort and to describe the distributions of survey responses. As none of the surveys has a summary score algorithm, we examined changes in the distribution of item responses from baseline. Case review data were analyzed as counts and proportions, broken down by time period (months 1–3 versus 4–6), by event detection strategy from Measure Dx (Strategy A, B, C, or D; see Table 1), and by outcome of the review process (cases reviewed or not reviewed; reviewed events found to have or not have diagnostic safety learning and improvement opportunities). Analyses were conducted using Stata/SE 14.2 and Microsoft Excel.
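
Although the original analyses used Stata and Excel, the same descriptive breakdown can be sketched in a few lines of Python; the numbers below are made-up toy values and the column names are hypothetical, used purely to illustrate the count-and-proportion logic, not actual study data.

```python
import pandas as pd

# Toy illustration of the descriptive breakdown: counts and yield
# proportions by period and case-finding strategy. Numbers are made up.
reviews = pd.DataFrame({
    "period":           ["months 1-3", "months 1-3", "months 4-6", "months 4-6"],
    "strategy":         ["A", "D", "A", "D"],
    "reviewed":         [40, 120, 30, 90],
    "with_opportunity": [18, 30, 12, 28],
})

summary = reviews.groupby(["period", "strategy"], as_index=False).sum(numeric_only=True)
summary["yield"] = summary["with_opportunity"] / summary["reviewed"]
print(summary)
```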

Figure 1.

Safer Dx Checklist item ratings pre- and post-evaluation. Each dot represents one organization’s response at baseline, and arrows correspond to item ratings that subsequently increased or decreased post-evaluation. With one exception (see item #9), item responses either did not change or changed to an adjacent category. For the purpose of data display, item descriptions are abbreviated from their original form on the Safer Dx Checklist.

RESULTS

Recruitment and Organizational Characteristics

Representatives from 27 organizations initially indicated interest in participating. Four were deemed ineligible (1 was outside of the USA and 3 were not direct providers of health care services). Ten organizations withdrew from consideration due to insufficient resources (e.g., staffing, leadership support), and 1 declined participation due to competing interests. Two organizations did not respond to follow-up on their initial inquiries. Ultimately, 11 organizations (comprising 34 individual participants) were enrolled. Characteristics of enrolled organizations, including their patient safety-related activities, are summarized in Table 2.

Table 2.

Characteristics and Safety Activities of Participating Organizations (n = 11)

Description n (%)
Organization type
  Academic medical center 8 (72.7%)
  Other not-for-profit 3 (27.3%)
Number of hospital facilities within organization
  1 6 (54.5%)
  2–10 4 (36.4%)
  > 10 1 (9.1%)
Number of ambulatory clinic sites within organization
  0 2 (18.2%)
  1–10 2 (18.2%)
  11–50 5 (45.5%)
  > 50 2 (18.2%)
Organization routinely conducts patient safety culture survey 7 (63.6%)
Safety culture surveys that are routinely conducted
  AHRQ Safety Survey 5 (45.5%)
  Safety Attitudes Questionnaire 1 (9.1%)
  Press Ganey 2 (18.2%)
  None specified 4 (36.4%)
Activities regularly held in the organization
  Peer reviews 10 (90.9%)
  Morbidity and mortality conferences 11 (100%)
  Death reviews 8 (72.7%)
  Root cause analysis 11 (100%)
  Health care failure mode and effects analysis 7 (63.6%)
  Other 3 (27.3%)
Organization has a safety hotline or incident reporting system for providers 11 (100%)
Organization has a safety hotline or incident reporting system for patients 7 (63.6%)

Feasibility Outcomes

All 11 organizations completed all requirements for the evaluation. Each of the four diagnostic safety event detection strategies outlined in Measure Dx was used by three or more organizations (Table 3). At the 3-month midpoint, all but three organizations reported having identified cases and having conducted one or more case reviews for learning opportunities. At the 6-month point, all organizations had identified cases and conducted case reviews for learning opportunities. Participation in the ECHO teleconferences was consistently high. Two organizations missed ECHO teleconferences due to late enrollment in the evaluation; otherwise, only one organization missed a single teleconference (i.e., no representatives from that organization attended) after enrollment.

Table 3.

Numbers of Cases Reviewed and Yield of Identified Learning Opportunities by Case Detection Strategya

Case-finding strategy | No. of organizations (n) | Total cases reviewed (range per org.) | Reviewed cases with learning opportunities (range per org.) | Proportion with learning opportunities (range per org.)
A (Re-examine existing data sources/known events) | 8* | 156 (4–62) | 74 (1–41) | 47.4% (20.0–74.5%)
B (Clinician reports) | 3 | 12 (2–5) | 4 (1–2) | 33.3% (20.0–50.0%)
C (Patient-reported data) | 3* | 14 (4–10) | 0 (0) | 0%
D (EHR-enhanced review) | 6* | 521 (12–226) | 146 (7–109) | 28.0% (6.0–58.3%)

aResults include total numbers of cases for each strategy in the study period, aggregating 3- and 6-month numbers

*Organizations that reported using the strategy but reviewed 0 cases are included in this n value; however, to better describe the ranges of cases per organization, values reported in the adjacent columns do not include organizations that reviewed 0 cases

Case Yield for Diagnostic Learning Opportunities

Across the evaluation period, participants collectively reviewed 703 cases using the Revised Safer Dx Instrument (or comparable review methods, as applicable). Of those cases, 224 (31.8%) were identified as diagnostic safety events with improvement opportunities.

Total case volume and yield for cases with improvement opportunities varied considerably at the organization level. Across the four case-finding strategies, the aggregate proportion of reviewed cases with identifiable improvement opportunities ranged from 0 to 47.4% (Table 3). Re-review of quality and safety events previously collected by the organization (Strategy A) yielded the highest overall percentage of cases with identifiable learning opportunities. In contrast, organizations reviewed few cases using patient-reported data (Strategy C), and none led to findings of learning opportunities.

Safer Dx Checklist

At baseline, responses to the Safer Dx Checklist characterized more than half of the organizations (n = 7) as “beginning” their journey to diagnostic excellence; the remaining four were “making progress.” By the end of the evaluation, 10 organizations were characterized as “beginning”; of note, three organizations’ classifications dropped from “making progress” to “beginning.” Figure 1 displays individual item ratings on the Safer Dx Checklist, showing the trajectories of change for each item from pre- to post-evaluation.

Item-level analyses indicate that some practices shifted more than others in this cohort. The item that most often showed positive change was item 3 (“Health care organization creates feedback loops to increase information flow about patients’ diagnostic and treatment-related outcomes…”); 5 organizations (45%) upgraded their rating of this item, with only one site remaining at the “not implemented” level. At the end of the evaluation, 100% of participants endorsed at least partial implementation of items 2 (“Health care organization creates a just culture and creates a psychologically safe environment…”) and 4 (“Health care organization includes multidisciplinary perspectives to understand and address contributory factors…”). The item that most often showed no change was item 8 (“…has in place standardized systems and processes to encourage direct, collaborative interactions between clinical teams and diagnostic specialties”).

Self-Reported Learning and Improvement

Voluntary open-ended responses described examples of specific opportunities to improve diagnosis. Table 4 summarizes voluntarily reported events and improvement activities. Six organizations (55%) referred one or more identified events to leadership and/or other existing quality and safety committees for further review and action.

Table 4.

Summary of Voluntarily Reported Events and Actions Takena

Event type and action(s) taken

Missed sepsis/septic shock (n = 3)
  • Feedback to existing sepsis care initiatives
  • Referral to existing quality & safety committee for further review and action (n = 2)
  • Feedback to involved clinicians
  • Feedback to leadership

Delayed diagnosis of child physical abuse
  • Referral to existing quality & safety committee for further review and action

Missed musculoskeletal infections
  • Feedback to inform the revision of a clinical pathway

Missed and wrong diagnoses of urinary tract infections
  • Not specified

Missed biphasic anaphylaxis
  • Creation of anaphylaxis clinical pathway

Missed follow-up of test results pending at hospital discharge
  • Creation of a report in the EHR to identify all hospital discharges with pending pathology and cultures

Delayed communication of diagnosis to patient/family
  • Not specified

Other, not specified events (n = 4)
  • Referral to existing quality & safety committee for further review and action (n = 2)
  • Feedback to involved clinicians
  • Feedback to leadership (n = 2)
  • Establishment of a diagnostic error database
  • Creation of reports showing testing percentage and accuracy

aUnless otherwise specified, responses represent one organization. Counts are based on data reported in open-ended survey items and do not necessarily represent the full range of events or actions taken

DISCUSSION

Growing awareness of diagnostic errors has highlighted an unmet need for pragmatic approaches for HCOs to measure and improve diagnostic safety. To our knowledge, this is the first application of a standardized set of strategies across multiple organizations to systematically address diagnostic safety. With minimal external technical support, HCOs used the guidance in Measure Dx to identify, analyze, and learn from diagnostic safety events within a 6-month period. Organizations differed in the extent to which they could identify these events, and this appeared to depend in part on the case finding strategies used. While findings are preliminary, they reinforce recommendations in Measure Dx to begin with existing sources of data (e.g., routine quality and safety event reviews, electronic health record data warehouses) to look for diagnostic errors. While soliciting data directly from clinicians and patients can provide unique insights, these approaches may require further development to deliver a higher yield. Several participants voluntarily reported actions taken in response to what they learned from case reviews, including providing individual and group feedback and enhancing their organization’s capacity for future data gathering about diagnostic safety events.

Learning from safety events is essential to identifying preventable breakdowns and is consistent with the goals of a learning health system.28,29 Prior research has identified several barriers to diagnostic safety improvement, including limited infrastructure for measurement and monitoring activities and lack of a coordinated organizational response to diagnostic safety events.30 Measure Dx was developed to help organizations build capacity to fill these gaps. Our findings underscore how using Measure Dx can help create a shared mental model of diagnostic safety improvement work within an organization.

Somewhat unexpectedly, we noted that several organizations declined in their ratings of various items on the Safer Dx Checklist. Because of the short duration of the intervention, we do not believe that this reflects true declines in safety practices, although it is plausible that use of Measure Dx had no meaningful effect on diagnostic safety capacity. Alternatively, these changes in ratings may have been due to site champions’ increased accuracy in their assessments of their diagnostic safety practices once the work was underway.

Several potential limitations temper our conclusions. Despite a national outreach effort and a modest monetary incentive, recruitment efforts yielded only 11 organizations, all of which had highly motivated champions for improving diagnosis. As such, our sample is likely not representative of most HCOs. Other organizations may have difficulty implementing and sustaining diagnostic safety improvement activities without more compelling external incentives. For instance, despite initial interest, nearly half of the HCOs that approached our team ultimately declined to enroll after assessing their available resources and leadership support. Currently, no specific external motivators exist for diagnostic safety, although models to engage payers have recently been proposed.31 The resulting small sample size limited our ability to examine statistical differences between case-finding strategies, and it also limited evaluation of how contextual factors (e.g., organizational characteristics) were related to outcomes. Finally, the short duration of the evaluation limited the window for implementation and our ability to explore longer-term diagnostic safety outcomes, sustainability, and unintended consequences. The observational nature of the study also precludes strong inferences about the specific effects of using Measure Dx. Future studies should be designed with larger and more representative samples and longer implementation and follow-up periods.

Despite these limitations, we believe our findings demonstrate feasibility, utility, and potential for uptake of Measure Dx in other HCOs. We observed that participants were highly engaged during the ECHO teleconferences and valued this mode of mutual learning and support. Few learning collaborative-style methods have been used to measure and improve diagnostic safety.32 Rigorously conducted larger learning collaboratives may be a promising means of promoting adoption and sustainability of diagnostic safety improvement activities across US HCOs. Multi-site collaboratives with support from payers and policymakers, such as through the CMS Innovation Center, could be tested to help implement novel approaches to diagnostic safety improvement. While our sample could be characterized as “early adopters,” diagnostic safety is becoming a more prominent topic and is the focus of new initiatives from public (US Centers for Disease Control and Prevention) and private (Leapfrog Group20) groups.

In conclusion, using standardized strategies through Measure Dx can accelerate diagnostic safety improvement work in HCOs. Implementing methods to use existing data for systematic organizational measurement, as outlined in Measure Dx, can help overcome long-standing barriers to diagnostic safety improvement and inform a coordinated organizational response to reduce diagnostic safety events. Policy and payment incentives can further stimulate diagnostic safety improvement efforts and encourage wider adoption of Measure Dx and related improvement approaches.


Funding

This project was funded under contract number HHSP233201500022I/75P00119F37006 from the Agency for Healthcare Research and Quality (AHRQ), US Department of Health and Human Services. Dr. Singh is funded in part by the Houston Veterans Administration (VA) Health Services Research and Development (HSR&D) Center for Innovations in Quality, Effectiveness and Safety (CIN13–413), the VA National Center for Patient Safety, and AHRQ (R01HS028595 and R18HS029347). The authors are solely responsible for this document’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ, the Department of Veterans Affairs, or the US government. The funders were not involved in data collection or interpretation. Readers should not interpret any statement in this product as an official position of AHRQ, the US Department of Health and Human Services, the Department of Veterans Affairs, or the US government. None of the authors has any affiliation or financial involvement that conflicts with the material presented in this product.

Data Availability

Raw data pertaining to characteristics of participating organizations and individuals are not available, as these could inadvertently identify these participants and thereby breach their privacy and confidentiality. Other data that support the findings of this study are available from the corresponding author, AB, upon reasonable request.

Declarations

Conflict of Interest

The authors declare that they do not have a conflict of interest.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Balogh EP, Miller BT, Ball JR, eds. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press; 2015. [PubMed]
  • 2.Brown TW, McCarthy ML, Kelen GD, Levy F. An epidemiologic study of closed emergency department malpractice claims in a national database of physician malpractice insurers. Acad Emerg Med. 2010;17(5):553-60. 10.1111/j.1553-2712.2010.00729.x. [DOI] [PubMed] [Google Scholar]
  • 3.Gupta K, Szymonifka J, Rivadeneira NA, et al. Factors associated with malpractice claim payout: An analysis of Closed Emergency Department Claims. Jt Comm J Qual Patient Saf. 2022;48(9):492-495. 10.1016/j.jcjq.2022.05.006 [DOI] [PubMed] [Google Scholar]
  • 4.Schacht K, Furst W, Jimbo M, Chavey WE. A malpractice claims study of a family medicine department: A 20-year review. J Am Board Fam Med. 2022;35(2):380-386. 10.3122/jabfm.2022.02.210260. [DOI] [PubMed] [Google Scholar]
  • 5.Graber ML, Trowbridge R, Myers JS, Umscheid CA, Strull W, Kanter MH. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf. 2014;40(3):102-10. 10.1016/s1553-7250(14)40013-8. [DOI] [PubMed] [Google Scholar]
  • 6.Giardina TD, Hunte H, Hill MA, Heimlich SL, Singh H, Smith KM. Defining diagnostic error: a scoping review to assess the impact of the National Academies' report Improving Diagnosis in Health Care. J Patient Saf. 2022;18(8):770-78. 10.1097/PTS.0000000000000999. [DOI] [PMC free article] [PubMed]
  • 7.Singh H, Bradford A, Goeschel C. Operational measurement of diagnostic safety: state of the science. Diagnosis (Berl). 2021;8(1):51-65. 10.1515/dx-2020-0045. [DOI] [PubMed] [Google Scholar]
  • 8.Huddleston JM, Diedrich DA, Kinsey GC, Enzler MJ, Manning DM. Learning from every death. J Patient Saf. 2014;10(1):6-12. 10.1097/PTS.0000000000000053. [DOI] [PubMed] [Google Scholar]
  • 9.Congdon M, Rauch B, Carroll B, et al. Opportunities for diagnostic improvement among pediatric hospital readmissions. Hosp Pediatr. 2023;13(7):563-571. 10.1542/hpeds.2023-007157. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Hudspeth J, El-Kareh R, Schiff G. Use of an expedited review tool to screen for prior diagnostic error in emergency department patients. Appl Clin Inform. 2015;6(4):619-28. 10.4338/ACI-2015-04-RA-0042. [DOI] [PMC free article] [PubMed]
  • 11.Cifra CL, Custer JW, Smith CM, et al. Prevalence and characteristics of diagnostic error in pediatric critical care: a multicenter study. Crit Care Med. 2023;51(11):1492-1501. 10.1097/CCM.0000000000005942. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Marshall TL, Ipsaro AJ, Le M, et al. Increasing physician reporting of diagnostic learning opportunities. Pediatrics. 2021;147(1). 10.1542/peds.2019-2400. [DOI] [PubMed]
  • 13.Okafor N, Payne VL, Chathampally Y, Miller S, Doshi P, Singh H. Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine. Emerg Med J. 2016;33(4):245-52. 10.1136/emermed-2014-204604. [DOI] [PubMed] [Google Scholar]
  • 14.Murphy DR, Thomas EJ, Meyer AN, et al. Development and validation of electronic health record-based triggers to detect delays in follow-up of abnormal lung imaging findings. Radiology. 2015;277(1):81-87. 10.1148/radiol.2015142530. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Raffel KE, Kantor MA, Barish P, et al. Prevalence and characterisation of diagnostic error among 7-day all-cause hospital medicine readmissions: a retrospective cohort study. BMJ Qual Saf. 2020;29(12):971-979. [DOI] [PubMed] [Google Scholar]
  • 16.Bhise V, Meyer A, Singh H, et al. Errors in diagnosis of spinal epidural abscesses in the era of electronic health records. Am J Med. 2017;130(8). 10.1016/j.amjmed.2017.03.009. [DOI] [PubMed]
  • 17.Giardina TD, Korukonda S, Shahid U, et al. Use of patient complaints to identify diagnosis-related safety concerns: a mixed-method evaluation. BMJ Qual Saf. 2021;30(12):996-1001. 10.1136/bmjqs-2020-011593. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Dalal AK, Schnipper JL, Raffel K, Ranji S, Lee T, Auerbach A. Identifying and classifying diagnostic errors in acute care across hospitals: Early lessons from the Utility of Predictive Systems in Diagnostic Errors (UPSIDE) study. J Hosp Med. 2023. 10.1002/jhm.13136. [DOI] [PubMed] [Google Scholar]
  • 19.Perry MF, Melvin JE, Kasick RT, et al. The diagnostic error index: A quality improvement initiative to identify and measure diagnostic errors. J Pediatr. 2021;232:257-263. 10.1016/j.jpeds.2020.11.065. [DOI] [PubMed] [Google Scholar]
  • 20.Campione Russo A, Tilly JL, Kaufman L, et al. Hospital commitments to address diagnostic errors: An assessment of 95 US hospitals. J Hosp Med. 2024. 10.1002/jhm.13485. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Measure Dx: A Resource To Identify, Analyze, and Learn From Diagnostic Safety Events. Agency for Healthcare Research and Quality. https://www.ahrq.gov/patient-safety/settings/multiple/measure-dx.html. Accessed September 2, 2024.
  • 22.Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25(12):986-992. 10.1136/bmjqs-2015-004411. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.The University of New Mexico. Project ECHO. https://hsc.unm.edu/echo/. Accessed August 11, 2023.
  • 24.Harris PA, Taylor R, Minor BL, et al. The REDCap consortium: Building an international community of software platform partners. J Biomed Inform. 2019;95:103208. 10.1016/j.jbi.2019.103208. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-81. 10.1016/j.jbi.2008.08.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Banham-Hall E, Stevens S. Hindsight bias critically impacts on clinicians’ assessment of care quality in retrospective case note review. Clin Med (Lond). 2019;19(1):16-21. 10.7861/clinmedicine.19-1-16. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Singh H, Mushtaq U, Marinez A, et al. Developing the Safer Dx Checklist of Ten Safety Recommendations for Health Care Organizations to Address Diagnostic Errors. Jt Comm J Qual Patient Saf. 2022;48(11):581-590. 10.1016/j.jcjq.2022.08.003. [DOI] [PubMed] [Google Scholar]
  • 28.Edwards MT. An organizational learning framework for patient safety. Am J Med Qual. 2017;32(2):148-155. 10.1177/1062860616632295. [DOI] [PubMed] [Google Scholar]
  • 29.Vincent C, Burnett S, Carthey J. Safety measurement and monitoring in healthcare: a framework to guide clinical teams and healthcare organisations in maintaining safety. BMJ Qual Saf. 2014;23(8):670-7. 10.1136/bmjqs-2013-002757. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Giardina TD, Shahid U, Mushtaq U, Upadhyay DK, Marinez A, Singh H. Creating a learning health system for improving diagnostic safety: Pragmatic insights from US Health Care Organizations. J Gen Intern Med. 2022;37(15):3965-3972. 10.1007/s11606-022-07554-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Ali KJ, Goeschel CA, DeLia DM, Blackall LM, Singh H. The PRIDx framework to engage payers in reducing diagnostic errors in healthcare. Diagnosis (Berl). 2023. 10.1515/dx-2023-0042. [DOI] [PubMed] [Google Scholar]
  • 32.Schiff GD, Reyes Nieva H, Griswold P, et al. Randomized trial of reducing ambulatory malpractice and safety risk: Results of the Massachusetts PROMISES Project. Med Care. 2017;55(8):797-805. 10.1097/MLR.0000000000000759. [DOI] [PMC free article] [PubMed] [Google Scholar]
