Journal of General Internal Medicine
Editorial. 2003 Mar;18(3):228–229. doi:10.1046/j.1525-1497.2003.30115.x

Diagnosis: Highlighting the Gaps

Sharon E. Straus
PMCID: PMC1494835  PMID: 12648256

During the diagnostic decision-making process, clinicians need to apply evidence about the accuracy of a test to the pretest probability of the target disorder and to integrate the resulting posttest probability with their patients' values. Richardson et al. have explored one component of this process by estimating how frequently information about pretest probability can be found for clinical problems encountered on a general medical service; they were able to identify evidence about pretest probability for the majority of the clinical problems they encountered.1 Clinicians commonly generate pretest probabilities from their own experience, but several previous studies have shown that clinicians' estimates of pretest probability vary widely and may be inaccurate.2–4 For example, Noguchi et al. studied the diagnostic process in medical students and found that they had difficulty estimating pretest probability and integrating it with information about test accuracy to obtain a posttest probability.4 Further work is needed to determine the impact of evidence about pretest probability on clinical decision making and to identify the best methods of teaching diagnostic reasoning.
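The integration step described above is usually done with the odds form of Bayes' theorem: the pretest probability is converted to odds, multiplied by the test's likelihood ratio, and converted back to a posttest probability. A minimal sketch, with purely illustrative numbers (the sensitivity, specificity, and pretest probability below are assumptions for demonstration, not values from the editorial or the studies it cites):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from test accuracy."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def posttest_probability(pretest_p, likelihood_ratio):
    """Odds form of Bayes' theorem: pretest odds x LR = posttest odds."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Illustrative example (assumed values): a test with sensitivity 0.90
# and specificity 0.80, applied at a pretest probability of 0.30.
lr_pos, lr_neg = likelihood_ratios(0.90, 0.80)
print(round(posttest_probability(0.30, lr_pos), 2))  # positive result -> 0.66
print(round(posttest_probability(0.30, lr_neg), 2))  # negative result -> 0.05
```

The example makes the editorial's point concrete: an error in the pretest estimate propagates directly into the posttest probability, however accurate the test itself.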

Although diagnostic testing, including the clinical examination, is a critical component of clinical decision making, there is a paucity of high-quality evidence about diagnostic accuracy to inform this process.5,6 Moreover, the number of available diagnostic tests has grown rapidly in recent years, and given the speed with which many of these tests are introduced and the reliance of clinicians on their use, information about their precision and accuracy is crucial. Inadequate design and reporting are associated with biased estimates of diagnostic accuracy, and poorly designed and reported studies can mislead clinical decision making.6 Several studies have shown that although the use of methodological standards in diagnostic test research has improved over the last decade, methodological quality remains inadequate.6–9

More intensive efforts are needed from journal editors, researchers, granting agencies, and developers of diagnostic tests to encourage rigorous diagnostic research before diagnostic tests are widely implemented. The recent proposal to improve standards for reporting of diagnostic accuracy may lead to more high-quality evidence about diagnostic tests.10 Efforts to complete high-quality systematic reviews of studies of diagnostic accuracy must also continue. The development of the Bayes library (an international consortium created to conduct rigorous systematic reviews of studies of diagnostic accuracy), led by Matthias Egger and Daniel Pewsner, should help us take some steps toward bridging the gap in our knowledge about diagnostic accuracy.

Once the evidence about pretest probability and the precision and accuracy of diagnostic tests has been created, if it is to be used by busy clinicians, it must be available quickly and in a concise, intelligible form. Clinicians cannot afford more than a few seconds per patient for finding and assimilating relevant evidence; indeed, if answers to clinical questions are not found within about 90 seconds, searches for evidence are frequently abandoned.11 Richardson et al. did not report the time required for their successful literature searches, which would be useful information. Information retrieval can be difficult for busy clinicians because those who synthesize and package evidence do not always consider the barriers to efficient evidence-based decision making.12

Finally, the development of other practice tools to facilitate evidence-based diagnosis is essential. For example, given the time pressures of clinical practice, efficient methods for translating the pretest probability into a posttest probability would be useful. Paul Glasziou has responded to this challenge by developing a tool that can quickly calculate and graph a posttest probability for any pretest probability.13,14 Others have developed online versions of validated decision tools that can be used to help with this process.15
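A bedside tool of the kind described above essentially tabulates or graphs the posttest probability across the whole range of pretest probabilities for a given likelihood ratio. A minimal sketch of that mapping, using an assumed likelihood ratio of 4.5 for illustration (this is not the published tool, just the underlying arithmetic):

```python
def posttest_probability(pretest_p, lr):
    """Pretest probability -> posttest probability for a given
    likelihood ratio, via the odds form of Bayes' theorem."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Tabulate the curve a bedside tool might display (assumed LR+ = 4.5).
for pretest in (0.1, 0.3, 0.5, 0.7, 0.9):
    post = posttest_probability(pretest, 4.5)
    print(f"pretest {pretest:.1f} -> posttest {post:.2f}")
```

Precomputing this curve once per test is what lets such tools return an answer in the few seconds a clinician can spare at the bedside.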

All of these highlighted gaps in the diagnostic process must be bridged for effective evidence-based diagnostic decision making to occur. The challenge to clinicians, educators, and researchers is to work together to ensure this happens.

REFERENCES

1. Richardson WS, Polashenski WA, Robbins BW. Could our pretest probabilities become evidence based? A prospective survey of hospital practice. J Gen Intern Med. 2003;18:203–8. doi:10.1046/j.1525-1497.2003.20215.x.
2. Lyman GH, Balducci L. The effect of changing disease risk on clinical reasoning. J Gen Intern Med. 1994;9:488–95. doi:10.1007/BF02599218.
3. Bobbio M, Detrano R, Shandling AH, et al. Clinical assessment of the probability of coronary artery disease: judgmental bias from personal knowledge. Med Decis Making. 1992;12:197–203. doi:10.1177/0272989X9201200305.
4. Noguchi Y, Matsui K, Imura H, Kiyota M, Fukui T. Quantitative evaluation of the diagnostic thinking process in medical students. J Gen Intern Med. 2002;17:848–53. doi:10.1046/j.1525-1497.2002.20139.x.
5. McAlister FA, Straus SE, Sackett DL, on behalf of the CARE-COAD1 Group. Why we need large, simple studies of the clinical examination: the problem and a proposed solution. Lancet. 1999;354:1721–4. doi:10.1016/s0140-6736(99)01174-5.
6. Lijmer JG, Mol BW, Heisterkamp S, et al. Empirical evidence of design-related bias in studies of diagnostic tests. JAMA. 1999;282:1061–6. doi:10.1001/jama.282.11.1061.
7. Bossuyt P, Reitsma JB, Bruns DE, et al. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clin Chem. 2003;49:7–18. doi:10.1373/49.1.7.
8. Reid MC, Lachs MS, Feinstein A. Use of methodological standards in diagnostic test research. JAMA. 1995;274:645–51.
9. Arroll B, Schechter MT, Sheps SB. The assessment of diagnostic tests: a comparison of medical literature in 1982 and 1985. J Gen Intern Med. 1988;3:443–7. doi:10.1007/BF02595920.
10. Bossuyt P, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy. BMJ. 2003;326:41–4. doi:10.1136/bmj.326.7379.41.
11. Sackett DL, Straus SE. Finding and applying evidence during clinical rounds: the evidence cart. JAMA. 1998;280:1336–8. doi:10.1001/jama.280.15.1336.
12. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458–65. doi:10.1001/jama.282.15.1458.
13. Glasziou P. Which method for bedside Bayes? ACP J Club. 2001;135:A11–12.
14. Available at: www.cebm.utoronto.ca. Accessed February 19, 2003.
15. Available at: http://med.mssm.edu/ebm. Accessed February 19, 2003.

