Diagnosis (Berl). 2020 Jun 26;8(2):219–225. doi: 10.1515/dx-2020-0035

Development of a rubric for assessing delayed diagnosis of appendicitis, diabetic ketoacidosis and sepsis

Kenneth A Michelson 1, David N Williams 2, Arianna H Dart 1, Prashant Mahajan 3, Emily L Aaronson 4, Richard G Bachur 1, Jonathan A Finkelstein 5
PMCID: PMC7759568  NIHMSID: NIHMS1609508  PMID: 32589599

Abstract

Objectives

Using case review to determine whether a patient experienced a delayed diagnosis is challenging. Measurement would be more accurate if case reviewers had access to multi-expert consensus on grading the likelihood of delayed diagnosis. Our objective was to use expert consensus to create a guide for objectively grading the likelihood of delayed diagnosis of appendicitis, new-onset diabetic ketoacidosis (DKA), and sepsis.

Methods

Case vignettes were constructed for each condition. In each vignette, a patient has the condition and had a previous emergency department (ED) visit within 7 days. Condition-specific multi-specialty expert Delphi panels reviewed the case vignettes and graded the likelihood of a delayed diagnosis on a five-point scale. Delayed diagnosis was defined as the condition being present during the previous ED visit. Consensus was defined as ≥75% agreement. In each Delphi round, panelists were given the scores from the previous round and asked to rescore. A case scoring guide was created from the consensus scores.

Results

Eighteen expert panelists participated. Consensus was achieved within three Delphi rounds for all appendicitis and sepsis vignettes. We reached consensus on 23/30 (77%) DKA vignettes. A case review guide was created from the consensus scores.

Conclusions

Multi-specialty expert reviewers can agree on the likelihood of a delayed diagnosis for cases of appendicitis and sepsis, and for most cases of DKA. We created a guide that can be used by researchers and quality improvement specialists to allow for objective case review to determine when delayed diagnoses have occurred for appendicitis, DKA, and sepsis.

Keywords: diagnostic error, diagnostic safety, error measurement

Introduction

Diagnostic errors are a major cause of harm to patients in the United States [1], [2], [3], [4]. Reducing the burden of diagnostic errors is an urgent, national priority [1]. The emergency department (ED) is prone to diagnostic errors because patients are usually unknown to clinicians; the decision density is high; more than one third of emergency clinicians do not have specialized emergency care training; and characteristics of emergency care systems such as frequent handoffs, transfers between facilities, inherent unpredictability, and others may contribute to error [5], [6], [7], [8]. Children are likely at higher risk of diagnostic error compared with adults because signs and symptoms of even serious illness are frequently vague and challenging to assess, and physical findings may be subtle or difficult to interpret [9]. Delays account for the large majority of diagnostic errors in the ED [10]. Measuring delayed diagnosis of serious conditions for children visiting EDs is important, but determining when a delayed diagnosis occurred is difficult, in part because clinicians are often unaware of or uncomfortable reporting delays [11].

The standard method of determining when a delay in diagnosis occurred is a retrospective case review [12], [13]. High-quality reviews are challenging for several reasons. First, there are multiple definitions of delayed diagnosis, including diagnostic error [1], [8], undesirable diagnostic event [14], and missed opportunity to improve diagnosis [15]. The missed opportunity to improve diagnosis has emerged as a key concept in diagnostic safety [16], [17]; it has valuable properties, including its inherent connection to improvement efforts and its neutral connotation. Second, case reviews are inherently subjective, and when case reviewers disagree, resolution of the disagreement may be difficult [18].

Multi-disciplinary expert consensus case review overcomes the subjectivity of individual case review but is inherently time-intensive [18]. To overcome that barrier, we used multi-expert consensus to develop a condition-specific case review guide to inform case review by individuals. The guide would include a wide spectrum of case vignettes for three key serious pediatric emergency conditions: appendicitis, new-onset diabetic ketoacidosis (DKA), and sepsis. These conditions were chosen to represent a spectrum of serious pediatric conditions and rapidity of progression. With the aid of this guide, individuals could then review a case and more objectively grade the likelihood of a delayed diagnosis than would be possible without the guide. The overall goal of this study was to produce a case review guide for the included conditions and to understand the types of cases in which consensus is not possible.

Materials and methods

Study design

We performed a Delphi [19] study to assess case vignettes of appendicitis, DKA, and sepsis diagnosed on a second visit to an ED, to determine the likelihood of a delayed diagnosis having occurred. The Institutional Review Board deemed the study not human subjects research and thus it was exempt from review.

Participants

Condition- and discipline-specific experts were approached to join one of three expert consensus panels (Table 1). Experts were identified through involvement in diagnostic error work (e.g. being an institutional diagnostic safety leader or having federal funding to conduct diagnostic safety research) or because of publications attesting to content expertise (e.g. pediatric sepsis research leader or diabetes clinical trialist). Two study authors (KAM and ELA) participated in the case reviews for all three conditions.

Table 1:

Composition of the Delphi panels by specialty. One general emergency physician and one pediatric emergency physician participated in all three panels. One pediatric endocrinologist had additional training in pediatric critical care. Specialties were selected for expertise in diagnosis in the emergency care setting.

Appendicitis: General emergency (n=2), General pediatrics (n=2), Pediatric emergency (n=4), Pediatric surgery (n=2)
Diabetic ketoacidosis: General emergency (n=1), Pediatric emergency (n=4), Pediatric endocrinology (n=2)
Sepsis: General emergency (n=1), Pediatric emergency (n=4)

Case vignettes

The study authors developed a series of case vignettes for each condition. The vignettes were intended to represent cases a clinician might evaluate in an ED setting, and were based on clinical experience from prior similar case reviews [20]. In each vignette, a fictitious patient is known to have a diagnosis of one of the included conditions (e.g. appendicitis). The patient had a previous visit to the ED within 7 days in which the diagnosis was not made, as this time window is a boundary frequently used for measuring ED revisits [21]. Details of the previous visit were given, and the panelist determined whether a delayed diagnosis was likely; for example:

A 5-year-old female has appendicitis. She had a visit to an emergency department 3 days previously for a complaint of abdominal pain. The doctor obtained blood testing that showed a white blood cell count of 15,000 and the patient had severe pain in the right abdomen. The doctor diagnosed the patient with constipation and sent her home.

The authors developed vignettes with a goal of creating cases across a range of likelihood of delayed diagnosis. Vignettes varied in the detail provided, as would be true of actual health records.

Panelists reviewed case vignettes to determine whether a delayed diagnosis occurred. The measure of delayed diagnosis was whether the condition was present at the most recent previous ED visit, rated on an ordinal scale (Table 2) [20]. To capture all types of delay, reviewers assigning these scores were not asked whether the delay represented a missed opportunity to improve diagnosis. For appendicitis and sepsis, panelists were asked whether the condition was present on the previous visit. For DKA, panelists were asked whether diabetes (not only DKA specifically) was present on the previous visit. We did this because untreated diabetes frequently leads to DKA; identification (and subsequent treatment) of diabetes will generally prevent DKA; repeated visits for undiagnosed diabetes are a risk factor for DKA [22]; and early symptoms of DKA are often indistinguishable from those of hyperglycemia.

Table 2:

Levels of likelihood of delayed diagnosis used for rating cases with repeated emergency department visits. Delayed diagnosis was defined as the presence of a disease on a previous emergency department visit in which a diagnosis was not made. Example vignettes of appendicitis cases are shown, with likelihood ratings based on final consensus after the Delphi process.

Likelihood the diagnosis was present on the prior visit, with definitions and example appendicitis vignettes:

Near-definitely not
Definition: There is near-certainty that the condition was not present on the prior encounter. The likelihood of a delay falls into this range when (1) there were no signs of the condition on the prior encounter; (2) an alternative explanation for the prior encounter symptoms is definite or almost definite; or (3) the time course makes it virtually certain that the condition was not present.
Example: An 8-year-old male with appendicitis visited the ED 6 days previously for ear pain; he denied abdominal pain, and his abdominal examination at that time was normal. At the prior visit, the patient was found to have otitis media and was managed without antibiotics.

Probably not
Definition: It is very unlikely that the condition was present on the prior encounter, and symptoms, signs, and other data at that time mostly pointed away from the condition.
Example: A 5-year-old male with non-perforated appendicitis presented 4 days previously with diffuse abdominal pain and had an ultrasound with a non-visualized appendix and a peripheral WBC of 6,000/mm3. He was ultimately diagnosed with constipation, received an enema, and felt completely better. He awoke the following morning with diffuse abdominal pain.

Possibly
Definition: The likelihood of a delay falls into this range when (1) it is possible the condition was present, but there are factors for and against that theory; (2) the determination is confusing; (3) there is limited detail from which to decide; or (4) there are some alternative explanations for the case's features, but they have similar likelihood to the condition.
Example: A 4-year-old with non-perforated appendicitis presented 2 days previously with a day of fever to 102, cough, rhinorrhea, mild diffuse abdominal pain, and myalgias. He was diagnosed with a flu-like illness. All of the symptoms were ongoing since that visit.

Probably
Definition: More likely than not, the patient had the condition on the prior encounter. Evidence pointed toward the condition, or few alternative explanations existed.
Example: A 14-year-old female with non-perforated appendicitis visited the ED 3 days ago for right lower quadrant pain. She was moderately tender in the right lower quadrant. A peripheral white blood cell count was 14,000/mm3. On ultrasound, the appendix could not be visualized, but a 3 cm right hemorrhagic ovarian cyst was seen. She was discharged, but her pain did not improve over the subsequent days, so she returned.

Near-definitely
Definition: The patient almost definitely or definitely had the condition on the prior encounter. Generally, these cases have clear evidence that the condition was present on the prior encounter. Alternatively, there is no other plausible explanation for the symptoms/signs on the prior encounter.
Example: A 4-year-old female with perforated appendicitis visited the ED yesterday for nausea and vomiting but had a normal abdominal exam at that point. With ondansetron, her vomiting ceased and she was able to tolerate liquids. Her appetite has been decreased since ED discharge and she resumed vomiting 24 h ago.

Development of the survey tool

Six local academic pediatric emergency physicians piloted the survey tool for each condition (two per condition). The pilot testers completed the survey and assessed understandability and length of time to read and respond. The pilot testers were allowed to reject vignettes for understandability. We planned to write additional vignettes if the range of response scores using the ordinal outcome measure was not fully saturated (i.e. if vignettes did not exist for each possible response). However, pilot testing revealed that the vignettes included the full spectrum of likelihood of delayed diagnosis.

Delphi approach

Case vignettes were provided to panelists in each round of the Delphi process, and responses were recorded using the REDCap platform [23]. Panelists were not given the identities of the other participants. After each round, if ≥75% of the panel agreed on the rating, we considered consensus to be achieved [24] and the vignette was finalized and not included in the subsequent round. Each Delphi round was open for 1 week, and analysis and development of the next survey occurred in the subsequent week. Participants received daily reminders to fill out the survey during the open week until they completed it.

In round 1, each panel scored all of the vignettes for its condition and gave a rationale for each score. In round 2, for the remaining vignettes (where consensus was not reached in round 1), panelists were given the distribution of scores and verbatim rationales from round 1, and were asked to rescore and give a rationale for the new score. In round 3, panelists were asked if they agreed with the median rating from round 2. If the median fell between categories (e.g. between probable and near-definite), it was "rounded" to the more definitive choice (e.g. near-definite). In all rounds, panelists were blinded to the identity and specialty of the other respondents.

After round 3, vignettes without consensus were given a final rating as follows: for vignettes in which ≥75% of the votes were probable or near-definite, the final designation was “at least probable.” For vignettes where ≥75% of the votes were probably not or near-definitely not, the final designation was “at most improbable.” For all other vignettes, the final designation was “uncertain.”
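The consensus rules described above (the ≥75% agreement threshold, rounding a between-category median toward the more definitive choice, and the post-round-3 fallback designations) can be expressed as a short procedure. The sketch below is illustrative only: the numeric coding of the five-point scale (1 = near-definitely not through 5 = near-definitely) and the function names are our assumptions, not part of the study's materials.

```python
from statistics import median

# Assumed numeric coding of the ordinal scale in Table 2 (for illustration only).
SCALE = {
    1: "near-definitely not",
    2: "probably not",
    3: "possibly",
    4: "probably",
    5: "near-definitely",
}

def has_consensus(scores, threshold=0.75):
    """Consensus as defined in the study: >=75% of panelists give the same rating."""
    top = max(scores.count(s) for s in set(scores))
    return top / len(scores) >= threshold

def rounded_median(scores):
    """Median rating for round 3; a between-category median is rounded to the
    more definitive choice, i.e. the category farther from 'possibly' (3)."""
    m = median(scores)
    if m == int(m):
        return int(m)
    lo, hi = int(m), int(m) + 1
    return hi if abs(hi - 3) >= abs(lo - 3) else lo

def final_designation(scores):
    """Fallback designation after round 3 for vignettes still lacking consensus."""
    n = len(scores)
    if sum(s >= 4 for s in scores) / n >= 0.75:   # probable or near-definite
        return "at least probable"
    if sum(s <= 2 for s in scores) / n >= 0.75:   # probably not or near-definitely not
        return "at most improbable"
    return "uncertain"
```

For example, a panel of four with ratings of probably, probably, near-definitely, and possibly (coded 4, 4, 5, 3) lacks single-rating consensus but would receive the fallback designation "at least probable," since 3 of 4 votes (75%) fall at probable or above.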

From the completed Delphi responses, a guide for scoring case vignettes was created.

Results

Twenty-one participants were approached, of whom three did not participate: two did not respond to the invitation and one had retired. The final makeup of the panels is shown in Table 1. There was a 100% response rate for each panel through each Delphi round.

Qualitative rationales varied depending on the score, as in this example:

An 8-year-old female presented to the ED and has new-onset DKA. Three days ago, she presented to the ED for vomiting and dehydration. She received ondansetron and was able to tolerate oral fluids. No laboratory studies were performed. She felt well in the intervening days until she went to her pediatrician today and was found to have hyperglycemia.

A rater who stated that the patient “possibly” had diabetes in a previous visit noted, “Vomiting, dehydration, but well in the intervening days.” In contrast, another rater who said the patient “near-definitely” had diabetes during the previous ED visit noted, “Diabetes develops over a period of months, most likely present 6 days prior, even if no specific diagnostic clues.” This lack of early agreement for several DKA cases coalesced into consensus in subsequent rounds. The panel for DKA generally moved toward agreement that diabetes was present during the prior ED visit nearly regardless of the prior symptoms, because of the time course of new-onset diabetes.

The final scores after all Delphi rounds are shown in Table 3. There was increasing consensus through each round (Supplemental Table 1). Consensus was achieved for all cases for appendicitis and sepsis. Panelists were nearly certain that the diagnosis was present in a prior ED visit in 17/40 (43%) of appendicitis cases, 19/23 (83%) of DKA cases, and 12/32 (38%) of sepsis cases. The case vignettes and their final scores are available in the Supplementary Appendix.

Table 3:

Final consensus ratings of the case vignettes by condition.

Rating Appendicitis, n (%) New-onset diabetic ketoacidosis, n (%) Sepsis, n (%)
Near-definitely not 5 (12) 0 11 (34)
Probably not 5 (12) 0 6 (19)
Possibly 13 (32) 0 11 (34)
Probably 5 (12) 4 (13) 3 (9)
Near-definitely 12 (30) 19 (63) 1 (3)
At least probable 0 5 (17) 0
At most improbable 0 0 0
Uncertain 0 2 (7) 0

There were 7/30 (23%) cases for DKA in which consensus was not achieved by the pre-specified standard. In 5/7 of these cases, the median score was that diabetes was "probably" present during the prior ED visit, but those who disagreed thought the correct score was "near-definitely." Thus, for those cases, the final consensus score was "at least probable." In these cases, the disagreement reflected the certainty with which panelists felt that the time course alone made diabetes likely on prior visits. For the other two cases, the median score was "possibly," but the qualitative comments of those who disagreed reflected the view that the timeline alone supported the presence of diabetes at the prior visit, regardless of symptoms and findings. In one of the two cases, a patient visiting the ED for a tibia fracture later returned with DKA uncovered during a routine pediatrician visit. In the other, a patient visited the ED for dehydration and had a normal blood sugar, then visited the ED 5 days later and was found to be in DKA. These vignettes were deemed "uncertain."

Discussion

We successfully conducted a consensus process to create a rubric for determining the likelihood of delayed diagnosis of appendicitis, new-onset DKA, and sepsis. The consensus likelihood of delay was uncertain for only two of 102 cases, suggesting that multi-expert groups can reach consensus on whether delays in diagnosis occurred for most types of cases of these conditions.

The result of this work is a scoring guide that individual case reviewers may use as training and as an ongoing reference to assess for delayed diagnosis of appendicitis, new-onset DKA, and sepsis. Repeated visits to care are a useful screening tool to identify cases in which delays in diagnosis or treatment occurred [4], [25]. Among patients who “screen in” because they had an ED revisit, this guide could be used in research or as a part of ongoing quality improvement efforts to measure delayed diagnosis rates. We plan to use it as the basis for research case reviews to assess the performance of automated systems to detect delayed diagnosis.

Current approaches to the assessment of diagnostic error rates rely on clinician surveys [12], [26], trigger tools [4], [15], or medical record reviews [27]. The standard for definitive determination is the medical record review, which frequently depends on single assessors who have variable levels of expertise [14]. Rating tools such as the Revised Safer Dx [13] may be used to evaluate individual cases and have good inter-rater reliability. However, their reliability depends on the knowledge and attitudes of the particular reviewers employing them. This study employed expert consensus panels to provide a criterion standard for how case reviews for three particular serious diagnoses may best be conducted. Although using multiple reviewers with assessment of inter-rater reliability may improve assessments [28], we found that the inclusion of a cross-disciplinary panel resulted in discussions that individual reviewers might not incorporate. For instance, emergency physicians on the panel were responsive to endocrinologist comments that regardless of symptoms on the earlier visit, the time course of the development of diabetes makes it likely that it was present but undiagnosed.

The rubric created in this study can only be applied to repeated encounters, to detect whether serious diagnoses of appendicitis, DKA, or sepsis were present at an earlier visit. The ultimate goal is to improve diagnosis to reduce delay-related harms. For instance, earlier detection of diabetes may avoid the development of DKA [22]. The definition of delayed diagnosis in this study requires only that the condition was present at a prior ED visit. We believe this definition is complementary to the missed opportunity to improve diagnosis [13], which depends on whether a delay in diagnosis was preventable. Use of additional tools among cases of delayed diagnosis would allow for further designation as preventable or unpreventable delays.

The only two vignettes in the study in which consensus could not be achieved were for DKA. Additionally, all DKA vignettes were felt by the panelists to be likely delayed diagnoses, despite efforts to construct vignettes with low probabilities of delay. Based on the qualitative comments, this is likely because the time interval for progression of new-onset diabetes to DKA is longer than 7 days in nearly all cases. Although we believe 7 days is a reasonable time window to assess delayed diagnosis in an acute care setting [29] (and has been used to assess for delays in DKA diagnoses [30]), additional study would be needed to better define the time horizon for case reviews for delayed diagnosis of DKA across settings.

Institutions and researchers may use this rubric in several ways. First, it may be used as a reference guide for case reviewers while scoring cases of appendicitis, DKA, or sepsis for the likelihood of delayed diagnosis. Second, it can be used to train record reviewers before case review occurs. Finally, the guide may be used as a set of teaching cases for trainees or clinicians to understand the spectrum of the types of missed diagnoses for the three conditions.

The overall approach applied here could be broadened to assess delayed diagnoses of any acute condition. For a given condition, a range of vignettes would be developed, and then assessed for the likelihood of a delayed diagnosis by multi-specialty consensus. Reviewers could then evaluate actual cases of that condition preceded by a recent health encounter to determine the presence of a delay. While we achieved consensus for three divergent acute conditions, certain conditions might not be as amenable to agreement between reviewers. An inability to achieve consensus on delayed diagnosis assessments would suggest conditions that are less amenable to retrospective review.

This study must be interpreted in the context of its limitations. First, actual cases were not used, since we were interested in developing more generalizable, brief vignettes that could be applied to many real cases. Second, cases will occur that are not covered by a vignette from this study. Nevertheless, this study provides an ordinal scale for evaluating such cases, and a library of the sort of cases that fall under each level of the scale. Third, although the time window of 7 days is used frequently for assessment of acute care revisits, our experience assessing delayed diagnoses of diabetes resulting in DKA suggests that a longer time window could be useful for certain conditions that develop slowly but are eventually severe. Such conditions might include cancer, rheumatic diseases, child abuse, and others. Finally, the participants in the consensus study were all affiliated with academic institutions, though the panels were convened across centers throughout the U.S., and included representatives from each of the relevant specialties that care for each condition, including general emergency medicine. Because of its multisite and multispecialty nature, we believe that a broad range of expert views were represented. Although some panelists spend some time in community settings, it is possible that not emphasizing recruitment of full-time community panelists limited the range of viewpoints, given that illness presentations may vary between community and academic settings.

In conclusion, we successfully developed a rubric for case reviews to determine the likelihood of delayed diagnosis of appendicitis, new-onset DKA, and sepsis. The rubric may be used as a means of assessing the likelihood of delayed diagnosis for specified conditions, in order to inform diagnostic error research and quality improvement efforts.


Acknowledgments

We deeply appreciate the contributions of our pilot testers and Delphi panelists for volunteering their time and energy to this endeavor. Pilot: Todd Lyons, MD, MPH (Boston Children’s Hospital); Christopher Rees, MD, MPH (Boston Children’s Hospital); Catherine Perron, MD (Boston Children’s Hospital); Joel Hudgins, MD, MPH (Boston Children’s Hospital); Susan Lipsett, MD (Boston Children’s Hospital); Anna Cushing, MD (Boston Children’s Hospital); Jeffrey Neal, MD (Boston Children’s Hospital). Appendicitis panel: Anupam Kharbanda, MD, MS (Children’s Minnesota); Rakesh Mistry, MD, MS (Children’s Hospital Colorado); Andrew Olson, MD (University of Minnesota Medical School); James Moses, MD, MPH (Boston Medical Center); Peter Smulowitz, MD, MPH (Beth Israel Deaconess Medical Center); Shawn Rangel, MD, MSCE (Boston Children’s Hospital); Biren Modi, MD, MPH (Boston Children’s Hospital). DKA panel: Rachel Rempell, MD (Children’s Hospital of Philadelphia); Paul Aronson, MD, MHS (Yale School of Medicine); Nicole Nadeau, MD (Massachusetts General Hospital); Erinn Rhodes, MD, MPH (Boston Children’s Hospital); Michael Agus, MD (Boston Children’s Hospital). Sepsis panel: Fran Balamuth, MD, PhD (Children’s Hospital of Philadelphia); Halden Scott, MD (Children’s Hospital Colorado); Timothy Dribin, MD (Cincinnati Children’s Hospital Medical Center). Drs. Aaronson and Michelson participated in all three panels.

Footnotes

Research funding: Dr. Michelson was funded by award 1K08HS026503 from the Agency for Healthcare Research and Quality, with project support from the Boston Children’s Hospital Office of Faculty Development. Dr. Mahajan was funded by awards 1R01HS024953 and 1R18HS026622 from the Agency for Healthcare Research and Quality.

Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

Competing interests: Authors state no conflict of interest.

Informed consent: Informed consent was obtained from all individuals included in this study.

Ethical approval: The Institutional Review Board deemed the study not human subjects research and thus it was exempt from review.

Supplementary Material

The online version of this article offers supplementary material (DOI:https://doi.org/10.1515/dx-2020-0035).

References

1. Balogh EP, Miller BT, Ball JR, editors. Improving diagnosis in health care. Washington, D.C.: National Academies Press; 2015.
2. Berthelot S, Lang ES, Quan H, Stelfox HT. Identifying emergency-sensitive conditions for the calculation of an emergency care inhospital standardized mortality ratio. Ann Emerg Med. 2014;63:418–24. doi: 10.1016/j.annemergmed.2013.09.016.
3. Nafsi T, Russell R, Reid CM, Rizvi SMM. Audit of deaths less than a week after admission through an emergency department: how accurate was the ED diagnosis and were any deaths preventable? Emerg Med J. 2007;24:691–5. doi: 10.1136/emj.2006.044867.
4. Singh H, Meyer AND, Thomas EJ. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. BMJ Qual Saf. 2014;23:727–31. doi: 10.1136/bmjqs-2013-002627.
5. Croskerry P, Sinclair D. Emergency medicine: a practice prone to error? CJEM. 2001;3:271–6. doi: 10.1017/S1481803500005765.
6. Kovacs G, Croskerry P. Clinical decision making: an emergency medicine perspective. Acad Emerg Med. 1999;6:947–52. doi: 10.1111/j.1553-2712.1999.tb01246.x.
7. Hall MK, Burns K, Carius M, Erickson M, Hall J, Venkatesh A. State of the National Emergency Department Workforce: who provides care where? Ann Emerg Med. 2018;72:302–7. doi: 10.1016/j.annemergmed.2018.03.032.
8. Mahajan P, Mollen C, Alpern ER, Baird-Cox K, Boothman RC, Chamberlain JM, et al. An operational framework to study diagnostic errors in emergency departments: findings from a consensus panel. J Patient Saf. 2019. doi: 10.1097/PTS.0000000000000624.
9. Warrick C, Patel P, Hyer W, Neale G, Sevdalis N, Inwald D. Diagnostic error in children presenting with acute medical illness to a community hospital. Int J Qual Health Care. 2014;26:538–46. doi: 10.1093/intqhc/mzu066.
10. Hussain F, Cooper A, Carson-Stevens A, Donaldson L, Hibbert P, Hughes T, et al. Diagnostic error in the emergency department: learning from national patient safety incident report analysis. BMC Emerg Med. 2019;19:77. doi: 10.1186/s12873-019-0289-3.
11. Grubenhoff JA, Ziniel SI, Cifra CL, Singhal G, McClead RE, Singh H. Pediatric clinician comfort discussing diagnostic errors for improving patient safety. Pediatr Qual Saf. 2020;5:e259. doi: 10.1097/pq9.0000000000000259.
12. Schiff GD, Hasan O, Kim S, Abrams R, Cosby K, Lambert BL, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169:1881–7. doi: 10.1001/archinternmed.2009.333.
13. Singh H, Khanna A, Spitzmueller C, Meyer AND. Recommendations for using the Revised Safer Dx Instrument to help measure and improve diagnostic safety. Diagnosis (Berl). 2019;6:315–23. doi: 10.1515/dx-2019-0012.
14. Olson APJ, Graber ML, Singh H. Tracking progress in improving diagnosis: a framework for defining undesirable diagnostic events. J Gen Intern Med. 2018;33:1187–91. doi: 10.1007/s11606-018-4304-2.
15. Singh H, Giardina TD, Forjuoh SN, Reis MD, Kosmach S, Khan MM, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf. 2012;21:93–100. doi: 10.1136/bmjqs-2011-000304.
16. Singh H, Schiff GD, Graber ML, Onakpoya I, Thompson MJ. The global burden of diagnostic errors in primary care. BMJ Qual Saf. 2017;26:484–94. doi: 10.1136/bmjqs-2016-005401.
17. Singh H. Helping health care organizations to define diagnostic errors as missed opportunities in diagnosis. Jt Comm J Qual Patient Saf. 2014;40:99–101. doi: 10.1016/S1553-7250(14)40012-6.
18. Singh H, Sittig DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf. 2015;24:103–10. doi: 10.1136/bmjqs-2014-003675.
19. Fink A, Kosecoff J, Chassin M, Brook RH. Consensus methods: characteristics and guidelines for use. Am J Public Health. 1984;74:979–83. doi: 10.2105/AJPH.74.9.979.
20. Michelson KA, Buchhalter LC, Bachur RG, Mahajan P, Monuteaux MC, Finkelstein JA. Accuracy of automated identification of delayed diagnosis of pediatric appendicitis and sepsis in the ED. Emerg Med J. 2019;36:736–40. doi: 10.1136/emermed-2019-208841.
21. Sills MR, Macy ML, Kocher KE, Sabbatini AK. Return visit admissions may not indicate quality of emergency department care for children. Acad Emerg Med. 2018;25:283–92. doi: 10.1111/acem.13324.
22. Lokulo-Sodipe K, Moon RJ, Edge JA, Davies JH. Identifying targets to reduce the incidence of diabetic ketoacidosis at diagnosis of type 1 diabetes in the UK. Arch Dis Child. 2014;99:438–42. doi: 10.1136/archdischild-2013-304818.
23. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap): a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42:377–81. doi: 10.1016/j.jbi.2008.08.010.
24. Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, et al. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67:401–9. doi: 10.1016/j.jclinepi.2013.12.002.
25. Singh H, Giardina TD, Meyer AND, Forjuoh SN, Reis MD, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med. 2013;173:418–25. doi: 10.1001/jamainternmed.2013.2777.
26. Singh H, Thomas EJ, Wilson L, Kelly PA, Pietz K, Elkeeb D, et al. Errors of diagnosis in pediatric practice: a multisite survey. Pediatrics. 2010;126:70–9. doi: 10.1542/peds.2009-3218.
27. Cifra CL, Ten Eyck P, Dawson JD, Reisinger HS, Singh H, Herwaldt LA. Factors associated with diagnostic error on admission to a PICU: a pilot study. Pediatr Crit Care Med. 2020;21:e311–5. doi: 10.1097/PCC.0000000000002257.
28. Bhise V, Meyer AN, Singh H, Wei L, Russo E, Al-Mutairi A, et al. Errors in diagnosis of spinal epidural abscesses in the era of electronic health records. Am J Med. 2017;130:975–81. doi: 10.1016/j.amjmed.2017.03.009.
29. Michelson KA, Lyons TW, Bachur RG, Monuteaux MC, Finkelstein JA. Timing and location of emergency department revisits. Pediatrics. 2018;141:e20174087. doi: 10.1542/peds.2017-4087.
30. Bui H, To T, Stein R, Fung K, Daneman D. Is diabetic ketoacidosis at disease onset a result of missed diagnosis? J Pediatr. 2010;156:472–7. doi: 10.1016/j.jpeds.2009.10.001.



Articles from Diagnosis (Berlin, Germany) are provided here courtesy of De Gruyter
