Clinical decision support is the provision of “clinical knowledge and patient-related information, intelligently filtered or presented at appropriate times, to enhance patient care.”1 Medical institutions are increasingly adopting tools that offer decision support to improve patient outcomes and reduce errors. Healthcare providers and administrators with little or no training in computer science may be asked to evaluate, select, or contribute to the development of decision support systems for their practices. Is there an easy way to determine which clinical decision support systems are good?
In this issue Kawamoto and colleagues provide some evidence based guidance in a systematic analysis of the ability of decision support systems to improve practice in both statistically significant and clinically meaningful ways (p 765).2 This rigorous review includes only randomised controlled trials and excludes small studies that do not meet 50% of established criteria for methodological quality.3,4 It identifies four independent predictors of effective decision support: systems that enhance practice generate decision support automatically as part of the normal clinical workflow; they deliver that support at the time and place of decision making; they use computers to deliver it; and they offer specific recommendations rather than mere assessments. Ninety four per cent of clinical decision support systems with these characteristics improved practice, compared with only 46% of systems that lacked one of these features.
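Readers asked to judge a candidate system could treat these four predictors as a simple checklist. The sketch below is a hypothetical illustration only, not part of the review: the class, field names, and function are invented, and they merely restate the four features as Boolean flags.

```python
# Hypothetical checklist sketch: the names below are invented for illustration
# and simply paraphrase the four predictors identified by Kawamoto and colleagues.
from dataclasses import dataclass, fields


@dataclass
class CDSSystem:
    automatic_in_workflow: bool            # support generated automatically within routine workflow
    at_time_and_place_of_decision: bool    # delivered when and where the decision is made
    computer_delivered: bool               # delivered by computer rather than on paper
    gives_specific_recommendation: bool    # recommends an action rather than only an assessment


def missing_features(system: CDSSystem) -> list[str]:
    """Return the names of any of the four predictor features the system lacks."""
    return [f.name for f in fields(system) if not getattr(system, f.name)]


if __name__ == "__main__":
    # Example: a system that assesses risk but stops short of a specific recommendation.
    example = CDSSystem(True, True, True, False)
    print("Missing features:", missing_features(example) or "none")
```

Such a checklist does not, of course, predict success for any individual system; it only records whether the features associated with better outcomes in the review are present.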
Similar findings were reported in a recent systematic review of controlled trials evaluating computerised decision support programs, but worrying deficiencies in the evidence base were noted.5 Garg and colleagues found that the performance of healthcare practitioners using decision support systems improved in 64% of studies, comparable to the improvement in 68% of trials noted by Kawamoto et al,2 and they also observed that decision support generated automatically, rather than initiated by the user, resulted in better delivery of care. However, of the 100 studies analysed, few specified a primary outcome for statistical analysis, and nearly three quarters were evaluated by their own software developers. Evaluation by a system's developers was the only other factor associated with better performance. The outcomes of most studies were metrics assessing the process of healthcare delivery with and without decision support systems. Only 52 trials measured at least one patient outcome, and improvements were noted in only 13% of these studies.
Unfortunately, the implementation of effective clinical decision support is a challenging task involving interactions between technologies and organisations, and there are no easy solutions to guarantee success or to avoid failure in this complex process.6 Because many factors influence error rates and health outcomes, measuring how much decision support systems improve these endpoints is difficult. Moreover, another recent eye-opening observational study identified 22 different ways in which an established computerised order entry system (the benefits of which are thought to include reducing errors) could actually introduce medication errors.7 Although many researchers have sought to prove the advantages of clinical decision support, few have carefully studied sources of harm. Clearly defining the balance between the risks and benefits of clinical decision support is a continuing challenge.
Finally, a clinical decision support system is only as effective as its underlying knowledge base, which changes rapidly as medical science evolves. Sim and colleagues have proposed that the next generation of clinical decision support systems should be not only evidence based, but also “evidence adaptive,” with automated and continuous updating to reflect the most recent advances in clinical science and local practice knowledge.8 Flexibility in incorporating information from diverse sources and adaptability to varied practice settings are likely to be the quality criteria by which decision support systems are judged in the future.
Competing interests: None declared.
References
- 1. Osheroff JA, Pifer EA, Sittig DF, Jenders RA, Teich JM. Clinical decision support implementers' workbook. Chicago: HIMSS, 2004. www.himss.org/cdsworkbook
- 2. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using decision support systems: a systematic review of randomised controlled trials to identify system features critical to success. BMJ 2005;330:765-8.
- 3. Johnston ME, Langton KB, Haynes RB, Mathieu A. Effects of computer-based clinical decision support systems on clinician performance and patient outcome. A critical appraisal of research. Ann Intern Med 1994;120:135-42.
- 4. Randolph AG, Haynes RB, Wyatt JC, Cook DJ, Guyatt GH. Users' guides to the medical literature: XVIII. How to use an article evaluating the clinical impact of a computer-based clinical decision support system. JAMA 1999;282:67-74.
- 5. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005;293:1223-38.
- 6. Wears RL, Berg M. Computer technology and clinical work: still waiting for Godot. JAMA 2005;293:1261-3.
- 7. Koppel R, Metlay JP, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197-203.
- 8. Sim I, Gorman P, Greenes RA, Haynes RB, Kaplan B, Lehmann H, et al. Clinical decision support systems for the practice of evidence-based medicine. J Am Med Inform Assoc 2001;8:527-34.