Journal of the American Medical Informatics Association (JAMIA). 2017 Sep 23;25(5):564–567. doi: 10.1093/jamia/ocx096

Unintended adverse consequences of a clinical decision support system: two cases

Erin G Stone
PMCID: PMC7646869  PMID: 29036296

Abstract

Many institutions have implemented clinical decision support systems (CDSSs). While CDSS research papers have focused on benefits of these systems, there is a smaller body of literature showing that CDSSs may also produce unintended adverse consequences (UACs). Detailed here are 2 cases of UACs resulting from a CDSS. Both of these cases were related to external systems that fed data into the CDSS. In the first case, lack of knowledge of data categorization in an external pharmacy system produced a UAC; in the second case, the change of a clinical laboratory instrument produced the UAC. CDSSs rely on data from many external systems. These systems are dynamic and may have changes in hardware, software, vendors, or processes. Such changes can affect the accuracy of CDSSs. These cases point to the need for the CDSS team to be familiar with these external systems. This team (manager and alert builders) should include members in specific clinical specialties with deep knowledge of these external systems.

Keywords: clinical decision support systems, unintended adverse consequences, electronic health records (EHRs)

BACKGROUND AND SIGNIFICANCE

With the rapid spread of health information technology has come the rapid deployment of clinical decision support systems (CDSSs). Using current patient-level data, CDSSs can provide patient-specific recommendations to providers at the point of care. CDSSs are most typically deployed with the goal of obtaining clinical improvement, but can also be deployed to decrease inappropriate utilization, improve patient satisfaction, or improve provider efficiency.1

The medical literature is replete with individual studies and narrative and systematic reviews that summarize these studies, assessing the benefits of CDSSs. These reviews cover a wide range of conditions and topics, including diabetes,2–4 medication prescribing,5–9 cardiovascular disease,10,11 venous thromboembolism prophylaxis,12 asthma,13 depression,14 antibiotics use,15,16 diagnostic imaging ordering,17 diagnostic errors,18 cost reduction,19,20 and blood transfusions.21 They also cover a variety of patient care settings, including inpatient,16,19,20 anesthesia,22 critical care,23 and long-term care,5 as well as ambulatory practice settings.

There have been concerns voiced in the past about the unintended adverse consequences (UACs) of aspects of health IT, including computerized physician order entry (CPOE)24–29 and electronic health records (EHRs).30 Studies have also been published specifically addressing UACs in CDSSs. UACs were found to be rare in a study of an acute kidney injury medication alerting system.31 An analysis of 47 cases of unintended consequences of CDSS found 7 related to CDSS content and the remainder related to CDSS presentation.32 In an analysis of 79 CPOE-related UACs, 25% were related to CDSS.27 A study of a CDSS used for anticoagulation dosing found that patients spent more time in the therapeutic range but took longer to reach it, thus potentially increasing their risk for thrombosis.33 A recent series of 4 cases from a single institution found content-related CDSS UACs in all 4 malfunctioning alerts.34 This same paper surveyed a number of chief medical information officers and found that >90% had experienced at least 1 CDSS-related unintended adverse event.

A systematic review of CDSS effects found that CDSSs rarely worsened clinical outcomes.35 However, additional details were not given in the review. A later systematic review found a few UACs listed in underlying studies.36 These included high rates of false positive results when using CDSSs for quality measurements, underestimation of completion of quality of care processes using CDSSs to query disease registries, and negative impact on provider time. A white paper focused on medication CPOE, reviewing 10 order entry systems across 6 health care organizations, found issues related to CDSS monitoring.37 None of the sites could provide simple reports on alert appropriateness, alert frequency, override frequency, or override reasons. The investigators also found that alerts were inconsistent and varied by user role, the screen used to order medications, whether a medication was ordered by brand or generic name, and site of care (inpatient vs outpatient). Finally, most sites edited the alert rules they obtained from third-party pharmacy database vendors, thus potentially introducing errors into the CDSS.

Compared to the literature representing the effectiveness of CDSSs, the literature on the UACs of CDSSs is small. Detailed here are 2 cases describing types of CDSS adverse events related to external systems that have not been previously reported. These 2 cases of UAC relate to a CDSS that was deployed as part of an EHR implementation in a large integrated delivery system. This CDSS is a commercial knowledge-based system that provides active, patient-specific, synchronous, on-screen alerts at the point of care and is fully integrated with the commercial EHR.

CASE REPORTS

A 63-year-old patient was admitted to the hospital with an acute myocardial infarction (AMI). During his hospitalization, carvedilol (an alpha/beta blocker) was started and was continued through hospital discharge. At discharge, the discharging resident-in-training acted on a CDSS alert to start a beta blocker in patients with a diagnosis of AMI who did not have one in their discharge medication list. The beta blocker atenolol was thus prescribed. Two days later, this patient presented to the emergency department (ED) with bradycardia and hypotension. The ED physician determined that the combination of carvedilol and atenolol was causing the patient’s symptoms. A review of the alert logic and an interview with the CDSS alert builder revealed that the logic did not take into account that alpha/beta blockers have beta-blocker activity. The builder had built the alert based on medication categories and had assumed that our third-party pharmacy database vendor had categorized alpha/beta blockers into both the alpha-blocker category and the beta-blocker category. Instead, the vendor had placed alpha/beta blockers in a separate unique category that did not overlap with the alpha-blocker or beta-blocker category.
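
To make the categorization pitfall concrete, the following is a minimal Python sketch, not the actual CDSS rule language; the category names and drug list are hypothetical. It shows how a rule keyed only to a "beta-blocker" category misses carvedilol, and how also accepting the combined alpha/beta category corrects the logic.

```python
# Minimal sketch (hypothetical vendor categories and drug list) of how a
# category-based discharge rule can miss combined alpha/beta blockers.

# Assumed vendor categorization: carvedilol sits only in its own
# "alpha-beta-blocker" category, NOT in "beta-blocker".
DRUG_CATEGORIES = {
    "atenolol": {"beta-blocker"},
    "metoprolol": {"beta-blocker"},
    "carvedilol": {"alpha-beta-blocker"},   # not tagged "beta-blocker"
    "labetalol": {"alpha-beta-blocker"},
}

def alert_fires_naive(discharge_meds: list[str]) -> bool:
    """Flawed logic: fire if no discharge med is in the 'beta-blocker' category."""
    return not any("beta-blocker" in DRUG_CATEGORIES.get(m, set()) for m in discharge_meds)

# The fix: treat any category with beta-blocking activity as satisfying the rule.
BETA_ACTIVITY = {"beta-blocker", "alpha-beta-blocker"}

def alert_fires_corrected(discharge_meds: list[str]) -> bool:
    """Corrected logic: fire only if no discharge med has beta-blocking activity."""
    return not any(DRUG_CATEGORIES.get(m, set()) & BETA_ACTIVITY for m in discharge_meds)

meds = ["carvedilol", "aspirin"]
print(alert_fires_naive(meds))      # True  -> inappropriate alert; atenolol gets added
print(alert_fires_corrected(meds))  # False -> no alert; carvedilol already provides beta blockade
```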

Another cardiac CDSS alert had been built to assess whether a patient with an AMI had been given aspirin in the ED. The alert triggered from an ED diagnosis of AMI or a troponin level >0.5. The alert appeared to work well for a couple of years after its release, but we then received reports from ED physicians that it was overfiring, especially on patients with undetectable troponins, and that some patients might be receiving aspirin unnecessarily. Upon investigation, we discovered that our clinical laboratory had upgraded its chemistry instruments. While the previous instruments reported an undetectable troponin as 0.01, the new instruments reported an undetectable level as <0.01. The CDSS software had interpreted the less-than sign in the result as a very large number, thus triggering the alert inappropriately.
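
The following Python sketch is illustrative only; the actual CDSS is a commercial product whose internals are not described here, and the result strings are hypothetical. It mimics the failure mode, a censored result such as "<0.01" being coerced to a very large value, and shows one defensive parse that treats "<x" as the detection limit.

```python
# Minimal sketch (hypothetical result strings) of mishandling a censored lab
# result like "<0.01", plus one defensive way to parse it.
TROPONIN_THRESHOLD = 0.5

def troponin_elevated_naive(result: str) -> bool:
    """Mimics the failure mode: a non-numeric string is coerced to a
    sentinel 'very large' value, so '<0.01' triggers the alert."""
    try:
        value = float(result)
    except ValueError:
        value = float("inf")          # the unintended 'very large number'
    return value > TROPONIN_THRESHOLD

def troponin_elevated_defensive(result: str) -> bool:
    """Treat a '<x' (below detection limit) result as that limit, and
    refuse to guess about any other non-numeric value."""
    result = result.strip()
    if result.startswith("<"):
        return float(result[1:]) > TROPONIN_THRESHOLD   # e.g. "<0.01" -> 0.01
    try:
        return float(result) > TROPONIN_THRESHOLD
    except ValueError:
        raise ValueError(f"Unparseable troponin result: {result!r}")

print(troponin_elevated_naive("<0.01"))      # True  -> alert overfires
print(troponin_elevated_defensive("<0.01"))  # False -> correctly suppressed
```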

CONCLUSION AND RECOMMENDATIONS

As both of these cases illustrate, UACs leading to patient safety issues can arise from a CDSS. While CDSSs can lead to patient safety issues because of false negative alerts,31 these 2 cases produced UACs because of false positive alerts. And, while UACs of CDSSs can be grouped into either content issues or presentation issues,32 these 2 cases are strictly related to content.

These 2 cases add to the growing literature detailing UACs of CDSSs. While they have some similarities to cases in previous reports, the lack of knowledge of an external pharmacy system and a change in the chemistry instrumentation in the clinical laboratory leading to CDSS malfunction have not been previously reported. It should be noted that the third-party pharmacy database system and the lab chemistry instruments were working as designed and were not broken. This is in contrast to many reports of CDSS UACs where external systems were, in fact, malfunctioning. It should also be noted that the interfaces from these 2 systems to the EHR (and hence to the CDSS) were also working as designed. This is in contrast to some reports of UACs related to interface errors.38

A number of lessons can be learned from these 2 cases. In the first case, the primary lesson we learned was that an intimate knowledge of the CDSS software is necessary, but is not sufficient to produce accurate alerts. (Accuracy in this paper means the alert correctly triggers with a correct message, taking into account a specific patient’s true medical condition, including correct demographics, diagnoses, treatments, allergies/intolerances, testing results, and personal and family history, and that the logic is correct and robust.) Our CDSS accesses only our EHR for data, and the CDSS team members (manager and alert builders) are certified in both the CDSS and EHR software. However, the EHR has feeds from many different external systems, including pharmacy, laboratory, diagnostic imaging, cardiovascular systems, call center, and foundation systems, that include patient demographics and insurance information. In addition, providers input data from a variety of instruments, including urine analysis instruments, glucometers, rapid-result chemistry instruments, and other point-of-care testing machines. Knowledge of these external systems and how and when these data flow from external system to EHR to CDSS is also necessary for accurate CDSS alerts. In a large integrated delivery system such as ours, these systems and instruments are numerous and often complex and may be managed by departments that have little interaction with the CDSS team and little knowledge of CDSS projects. We have added a pharmacy review of all CDSS projects that involve medications, thus ensuring that team members with deep knowledge of the third-party pharmacy database system review the specifications and build of the alerts prior to release. These pharmacy team members also help write the testing scenarios.

The second case makes a compelling argument for active monitoring of CDSS alerts, as is done in some institutions.34 Reactively waiting until end users complain that an alert is not accurate could delay the investigation and correction of a malfunctioning alert. Had we concurrently monitored the volume of firings of this alert, we likely would have seen an acute spike in volume, thus alerting us to a problem before the end users had enough patient experience or volume to bring this to our attention. This second case also focuses on the issue of changes in external systems that may adversely affect CDSS accuracy. While lessons from the first case focus on knowledge of external systems, lessons from the second case focus on knowledge of changes to these external systems. Many of these external systems have both hardware and software components that are frequently upgraded. The managers of these systems usually do not think of the impact of their upgrade on other (dependent) systems. In this case, the lab instrument change propagated through the EHR into the CDSS, and it was very unlikely that the lab personnel would have known the consequences of this change. Ideally, the CDSS team would keep abreast of all of these changes to external systems and do regression testing prior to upgrades to ensure that the CDSS is still functioning correctly. In reality, considering the number of external systems, the number of changes to these systems, and the siloed organizational structure of most complex health care organizations, this is a daunting task. This second case also points to the importance of thorough documentation of CDSS software, whether homegrown or commercial.
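
As an illustration only, not the institution's monitoring tool, and with made-up counts and thresholds, a simple volume monitor could flag such a spike by comparing each day's firing count to a recent baseline:

```python
# Minimal sketch (hypothetical counts) of active alert monitoring: flag an
# alert whose daily firing count spikes well above its recent baseline.
from statistics import mean, stdev

def spike_detected(daily_counts: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Return True if today's firing count is more than z_threshold standard
    deviations above the baseline of recent daily counts."""
    baseline, spread = mean(daily_counts), stdev(daily_counts)
    if spread == 0:
        return today > baseline * 2   # degenerate baseline: fall back to a simple ratio
    return (today - baseline) / spread > z_threshold

# Two weeks of typical firings for the AMI/aspirin alert, then the day the
# new lab instrument went live and '<0.01' results started triggering it.
history = [41, 38, 45, 40, 39, 44, 42, 37, 43, 40, 41, 39, 44, 42]
print(spike_detected(history, today=183))  # True  -> investigate before users complain
print(spike_detected(history, today=46))   # False -> within normal variation
```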

As a final lesson, clinicians, even clinicians in training, should not blindly follow CDSS recommendations. Clinicians should realize that there might be incorrect or missing data in a patient’s medical record that can affect CDSS alert accuracy. As examples, over-the-counter or complementary/alternative medications might not be documented, or an initial diagnosis might be incorrect after further testing is performed. Lack of robust alert logic, for example, not taking into account a patient’s terminal illness, could also lead to inaccuracy. Factors external to the alert, such as patient preferences or ability to pay, could affect acting on an alert. Thus an independent assessment of a CDSS recommendation is important, to assess the components and logic of the alert as well as factors external to the alert. The CDSS message should provide enough information about why it is triggering for the provider to make this independent assessment. For example, “This patient has an acute MI because of an elevated troponin and/or a diagnosis of acute MI and has not been given aspirin” is preferred over “Aspirin is recommended in this patient.” With enough information, the clinician should be able to make an independent evaluation of the appropriateness of the CDSS alert message for his patient. However, with increasing use of health IT, clinicians may become overly reliant on CDSSs and not have the knowledge to make independent assessments.27

These cases also point to changes that should occur with future reports of CDSSs in the medical literature. Study design of future research should include systematically anticipating UACs of these systems and capturing and reporting them to the extent possible. The clinical effects of both false positive and false negative alerts should be captured and reported. In addition, narrative and systematic reviews of CDSS efficacy should always report UACs as listed in the underlying articles. If UACs were not assessed, then the reviews should so state.

If the survey results of chief medical information officers reported by Wright et al.34 are representative of UACs in other CDSSs, these issues may be larger than previously thought. It is only by assessing the benefits of CDSSs and the prevalence and severity of UACs of CDSSs that we can come to an accurate knowledge of the true effects these systems have on our patients.

Funding

None.

Competing interests

The author has no competing interests to declare.

Contributor

EGS planned, prepared, drafted, and revised the manuscript for finalization and submission for publication.

ACKNOWLEDGMENTS

Kaiser Permanente pharmacy database team, Kaiser Permanente clinical laboratory, peer reviewers of earlier draft.

References

1. Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157(1):29–43.
2. Ali SM, Giordano R, Lakhani S, et al. A review of randomized controlled trials of medical record powered clinical decision support system to improve quality of diabetes care. Int J Med Inf. 2016;87:91–100.
3. Jeffery R, Iserman E, Haynes RB, et al. Can computerized clinical decision support systems improve diabetes management? A systematic review and meta-analysis. Diabet Med. 2013;30(6):739–45.
4. Cleveringa FGW, Gorter KJ, van den Donk M, et al. Computerized decision support systems in primary care for type 2 diabetes patients only improve patients’ outcomes when combined with feedback on performance and case management: a systematic review. Diabetes Technol Ther. 2013;15(2):180–92.
5. Marasinghe KM. Computerised clinical decision support systems to improve medication safety in long-term care homes: a systematic review. BMJ Open. 2015;5(5):e006539.
6. Ranji SR, Rennke S, Wachter RM. Computerised provider order entry combined with clinical decision support systems to improve medication safety: a narrative review. BMJ Qual Saf. 2014;23(9):773–80.
7. Payne TH, Hines LE, Chan RC, et al. Recommendations to improve the usability of drug-drug interaction clinical decision support alerts. J Am Med Inform Assoc. 2015;22(6):1243–50.
8. Nuckols TK, Smith-Spangler C, Morton SC, et al. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis. Syst Rev. 2014;3:56.
9. Bennett JW, Glasziou PP. Computerised reminders and feedback in medication management: a systematic review of randomised controlled trials. Med J Aust. 2003;178(5):217–22.
10. Njie GJ, Proia KK, Thota AB, et al. Clinical decision support systems and prevention: a community guide cardiovascular disease systematic review. Am J Prev Med. 2015;49(5):784–95.
11. Anchala R, Pinto MP, Shroufi A, et al. The role of Decision Support System (DSS) in prevention of cardiovascular disease: a systematic review and meta-analysis. PLoS One. 2012;7(10):e47064.
12. Adams P, Riggio JM, Thomson L, et al. Clinical decision support systems to improve utilization of thromboprophylaxis: a review of the literature and experience with implementation of a computerized physician order entry program. Hosp Pract. 2012;40(3):27–39.
13. Matui P, Wyatt JC, Pinnock H, et al. Computer decision support systems for asthma: a systematic review. NPJ Prim Care Respir Med. 2014;24:14005.
14. Triñanes Y, Atienza G, Louro-González A, et al. Development and impact of computerised decision support systems for clinical management of depression: a systematic review. Rev Psiquiatr Salud Ment. 2015;8(3):157–66.
15. Holstiege J, Mathes T, Pieper D. Effects of computer-aided clinical decision support systems in improving antibiotic prescribing by primary care providers: a systematic review. J Am Med Inform Assoc. 2015;22(1):236–42.
16. Baysari MT, Lehnbom EC, Li L, et al. The effectiveness of information technology to improve antimicrobial prescribing in hospitals: a systematic review and meta-analysis. Int J Med Inf. 2016;92:15–34.
17. Goldzweig CL, Orshansky G, Paige NM, et al. Electronic health record-based interventions for improving appropriate diagnostic imaging: a systematic review and meta-analysis. Ann Intern Med. 2015;162(8):557–65.
18. Nurek M, Kostopoulou O, Delaney BC, et al. Reducing diagnostic errors in primary care. A systematic meta-review of computerized diagnostic decision support systems by the LINNEAUS collaboration on patient safety in primary care. Eur J Gen Pract. 2015;21(Suppl):8–13.
19. Thompson G, O’Horo JC, Pickering BW, et al. Impact of the electronic medical record on mortality, length of stay, and cost in the hospital and ICU: a systematic review and meta-analysis. Crit Care Med. 2015;43(6):1276–82.
20. Fillmore CL, Bray BE, Kawamoto K. Systematic review of clinical decision support interventions with potential for inpatient cost reduction. BMC Med Inform Decis Mak. 2013;13:135.
21. Hibbs SP, Nielsen ND, Brunskill S, et al. The impact of electronic decision support on transfusion practice: a systematic review. Transfus Med Rev. 2015;29(1):14–23.
22. Simpao AF, Tan JM, Lingappan AM, et al. A systematic review of near real-time and point-of-care clinical decision support in anesthesia information management systems. J Clin Monit Comput. 2016 [Epub ahead of print]. doi: 10.1007/s10877-016-9921-x.
23. Belard A, Buchman T, Forsberg J, et al. Precision diagnosis: a view of the clinical decision support systems (CDSS) landscape through the lens of critical care. J Clin Monit Comput. 2017;31(2):261–71.
24. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005;293(10):1197–203.
25. Berger RG, Kichak JP. Computerized physician order entry: helpful or harmful? J Am Med Inform Assoc. 2004;11(2):100–03.
26. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2005;116(6):1506–12.
27. Campbell EM, Sittig DF, Ash JS, et al. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2006;13(5):547–56.
28. Beeler PE, Bates DW, Hug BL. Clinical decision support systems. Swiss Med Wkly. 2014;144:w14073.
29. Ash JS, Sittig DF, Poon EG, et al. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2007;14(4):415–23.
30. Meeks DW, Smith MW, Taylor L, et al. An analysis of electronic health record-related patient safety concerns. J Am Med Inform Assoc. 2014;21(6):1053–59.
31. McCoy AB, Waitman LR, Lewis JB, et al. A framework for evaluating the appropriateness of clinical decision support alerts and responses. J Am Med Inform Assoc. 2012;19(3):346–52.
32. Ash JS, Sittig DF, Campbell EM, et al. Some unintended consequences of clinical decision support systems. AMIA Annu Symp Proc. 2007;2007:26–30.
33. Abbrecht PH, O’Leary TJ, Behrendt DM. Evaluation of a computer-assisted method for individualized anticoagulation: retrospective and prospective studies with a pharmacodynamic model. Clin Pharmacol Ther. 1982;32(1):129–36.
34. Wright A, Hickman T-TT, McEvoy D, et al. Analysis of clinical decision support system malfunctions: a case series and survey. J Am Med Inform Assoc. 2016;23(6):1068–76.
35. Garg AX, Adhikari NKJ, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293(10):1223–38.
36. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742–52.
37. Schiff G, Wright A, Bates DW, et al. Computerized Prescriber Order Entry Medication Safety (CPOEMS). 2015. www.fda.gov/downloads/Drugs/DrugSafety/MedicationErrors/UCM477419.pdf. Accessed June 15, 2017.
38. Schreiber R, Sittig DF, Ash J, et al. Orders on file but no labs drawn: investigation of machine and human errors caused by an interface idiosyncrasy. J Am Med Inform Assoc. 2017;24:958–63.
