Journal of the American Medical Informatics Association (JAMIA). 2019 Aug 2;26(11):1375–1378. doi: 10.1093/jamia/ocz133

Importance of clinical decision support system response time monitoring: a case report

David Rubins 1,2,3, Adam Wright 1,2,3, Tarik Alkasab 2,3,4, M Stephen Ledbetter 1,2,3,5, Amy Miller 1,2,3, Rajesh Patel 1,2, Nancy Wei 2,6, Gianna Zuccotti 1,2,3, Adam Landman 1,2,7
PMCID: PMC6798567  PMID: 31373352

Abstract

Clinical decision support (CDS) systems are prevalent in electronic health records and provide many safety benefits. However, CDS systems can also cause unintended consequences. Monitoring programs focused on alert firing rates are important for detecting anomalies and ensuring systems are working as intended. Monitoring efforts do not generally include system load and time to generate decision support, which is becoming increasingly important as more CDS systems rely on external, web-based content and algorithms. We report a case in which a web-based service caused a significant increase in the time to generate decision support, in turn leading to marked delays in electronic health record system responsiveness that could have led to patient safety events. Given this, it is critical to consider adding decision support generation time to ongoing CDS system monitoring programs.

Keywords: clinical decision support, computerized provider order entry, cloud-based integration, system performance monitoring, clinical decision support monitoring

INTRODUCTION

Clinical decision support (CDS) systems are widely used in electronic health records (EHRs) and have been shown to improve patient care in multiple areas.1–3 However, increased prevalence of and reliance on CDS systems can lead to unintended consequences and potential for patient harm.4–6 For example, CDS systems can cause harm by failing to notify clinicians of significant drug–disease interactions or by encouraging the ordering of medications that are contraindicated in a given patient, such as because of an allergy or comorbid illness. The potential for unintended consequences increases when CDS systems fail, which can happen in multiple ways due to both internal and external malfunctions.7

There is growing evidence supporting monitoring programs for CDS systems, with the strongest evidence focusing on recognition of anomalies in firing rates and in user responses to CDS.7,8 This type of monitoring is considered essential to the success of CDS systems because it identifies errors both before and after release to end users.

Traditionally, CDS has been implemented locally using data and tools available within the EHR. However, advanced CDS increasingly involves sharing of content and algorithms across organizations.9–11 Sharing CDS is limited by the difficulties involved in encoding knowledge.9,12 To address this and other limitations, CDS systems are moving to service-oriented architectures,9,13 making use of Continuity of Care Documents (CCDs), Fast Healthcare Interoperability Resources (FHIR), and other interoperability standards.14–16 As a result, many organizations can now take a patient’s record, encode it in a standard form, map local terminologies to national standards, and share it via web services. With the increasing use of web services in CDS systems, there is a need to monitor the interaction between the local EHR and the cloud-based decision support it calls. Each additional interacting system increases the chance of failure, because every component contributes its own probability of malfunction and each interface introduces new failure modes. Malfunctions in these interactions can lead to transmission of inaccurate information, delays in data processing, and increased decision support generation time. We present a case in which a malfunction in the interaction between the local EHR and a cloud-based decision support service led to a significant increase in decision support generation time, resulting in considerable system slowness.
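Because each alert evaluation in such an architecture involves a network round trip, the time to generate decision support depends on the remote service as much as on the local EHR, and that latency is worth measuring on every call. The following Python sketch illustrates the general pattern of such a call with an explicit timeout and round-trip timing; the endpoint URL, payload contents, and timeout value are illustrative assumptions, not the interface of any particular vendor or standard.

```python
import time
import requests  # third-party HTTP client

CDS_ENDPOINT = "https://cds.example.org/evaluate"  # hypothetical external CDS service
TIMEOUT_SECONDS = 5  # bound how long order signing may wait on the remote service


def call_external_cds(payload: dict):
    """Send patient/order context to an external CDS service.

    Returns the service's recommendation (or None on timeout/error) and the
    observed round-trip time in seconds so the caller can log it for monitoring.
    """
    start = time.monotonic()
    try:
        response = requests.post(CDS_ENDPOINT, json=payload, timeout=TIMEOUT_SECONDS)
        response.raise_for_status()
        recommendation = response.json()
    except requests.RequestException:
        # Fail open: allow the order to proceed without a recommendation rather
        # than blocking the ordering workflow indefinitely.
        recommendation = None
    elapsed = time.monotonic() - start
    return recommendation, elapsed
```

Logging the elapsed time on every call, successful or not, is what makes a later response-time monitoring program possible.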

CASE REPORT

As part of the Protecting Access to Medicare Act of 2014, Congress mandated that providers ordering advanced imaging studies for Medicare patients consult appropriate use criteria (AUC) via a “qualified CDS” system. The goal of the CDS system is to provide AUC information from the American College of Radiology or another qualified provider-led entity before the advanced imaging order is placed. At our organization, Partners HealthCare in Boston, MA, this was implemented using a Best Practice Advisory (BPA) in our EHR (Epic Corporation, Verona, WI) that calls a web service provided by National Decision Support Company (NDSC) called CareSelect. When a provider places an order for a qualifying imaging procedure, Epic aggregates the relevant information into a Continuity of Care Document (CCD) using the HL7 Clinical Document Architecture standard and sends it to NDSC. The CCD contains information about the patient, the imaging procedure order, and the indication for the order. NDSC receives and processes this information and returns to Epic an appropriateness grade for the advanced imaging order, which is then displayed to the provider. When initially implemented at our institution, the alert was set to display to ordering providers only in outpatient settings and to fire “silently” (ie, not display to the end user but still record the appropriateness grade) in inpatient settings.
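The essential content of the exchange is compact: patient context, the proposed imaging order, and its indication go out, and an appropriateness grade comes back. A minimal sketch of that request and response shape is shown below; the field names and values are hypothetical stand-ins for illustration, not the actual CCD schema or the CareSelect interface.

```python
# Hypothetical, simplified view of an AUC consult exchange. In production the
# request is an HL7 CDA/CCD document and the response format is vendor-specific.
auc_request = {
    "patient": {"mrn": "000000", "age": 67, "sex": "F"},
    "order": {"procedure": "CT HEAD WITHOUT CONTRAST", "modality": "CT"},
    "indication": {"system": "ICD-10-CM", "code": "R51.9", "display": "Headache, unspecified"},
}

auc_response = {
    "appropriateness": "usually appropriate",  # grade shown to the ordering provider
    "criteria_source": "ACR",                  # qualified provider-led entity
}
```

The practical implication is that only a small subset of the chart is needed to score the order, a point that becomes central to the remediation described below.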

On October 20, 2018, we upgraded our EHR from Epic version 2015 to Epic version 2018. The following day, we began receiving reports from inpatient providers of significant delays when ordering imaging studies (Figure 1, red line). Delays of more than 60 seconds from clicking “Sign” to the order filing were observed. After an order was signed, the EHR would freeze and the screen would display “Performing decision support checks” (Figure 2). These scenarios were difficult to replicate in test environments, so Information Systems support teams were sent to multiple units at multiple hospitals to measure the delays manually. Investigation revealed that the delay seemed to be more common for patients with large quantities of data in the EHR (eg, intensive care unit patients). Additionally, there were reports of the BPA displaying the resultant AUC recommendations to inpatient providers, which was not the intended design of the live alert; future iterations of the alert to satisfy the Protecting Access to Medicare Act requirement may involve inpatient alerting, but we had not yet implemented this. The delay was confirmed using Epic’s “System Pulse” tool, which monitors in real time the average time to complete a variety of workflow steps and the number of “exceptions” (time delays greater than a set threshold).

Figure 1. Average response times per hour for the NDSC BPA. The red line indicates the time of the upgrade from Epic v2015 to Epic v2018. The yellow line indicates when the BPA was restricted so that it no longer fired silently in the inpatient setting; it continued to fire as normal in the outpatient setting. The green line indicates the implementation of the brief CCD that contained only the information necessary to generate the Appropriate Use Criteria recommendations.

Figure 2. Frozen screen displaying “Performing decision support checks.” After the upgrade from Epic v2015 to Epic v2018, the EHR would freeze and display this screen when providers ordered imaging studies on patients with large amounts of data in the EHR.

Given the significant implications of this ordering delay for patient care and provider usability, once the problem was confirmed, we deactivated the AUC BPA in the inpatient setting (Figure 1, yellow line). We then collaborated with NDSC, Epic, and the Partners CDS team and determined that the cause of the delay was a post-upgrade change that unexpectedly increased the time needed to collate patient information and create the CCD document. The post-upgrade CCD document was significantly larger, for reasons that remain unclear. As a result, web service calls timed out, leading to increased delays for end users in the outpatient setting and, unexpectedly, for inpatient orders as well, where the silent-mode timeout caused the BPA to display unintentionally.

The error was remediated by optimizing the CCD document creation process to send the minimum amount of information needed for the AUC recommendation. With the implementation of this change (Figure 1, green line), the response time from the provider clicking “Sign” to the order being filed returned to slightly faster than the pre-upgrade baseline, and the BPA no longer displayed for inpatients. In addition to improvements in the average response time for the BPA, there was also a decrease in the number of exceptions per 15-minute interval (Figure 3).

Figure 3. Response time exception count for the NDSC BPA. This bar chart shows the number of web service calls that took more than 5 seconds (yellow) or 10 seconds (red), averaged over each 15-minute period, in the week before and after the CCD in the BPA was corrected. The red line indicates the time of the upgrade from Epic v2015 to Epic v2018. The blue line indicates when the BPA was restricted so that it no longer fired silently in the inpatient setting; it continued to fire as normal in the outpatient setting. The green line indicates the implementation of the brief CCD that contained only the information necessary to generate the Appropriate Use Criteria recommendations.

Of note, after fully investigating this issue, we found that there had also been significant delays in the months preceding the upgrade that had not been reported by end users. Had we been prospectively monitoring response times, we likely would have identified this performance issue even without end user input. In retrospect, we were not able to determine why these pre-upgrade delays occurred.

CONCLUSIONS AND RECOMMENDATIONS

As this case illustrates, there is a need for more robust monitoring of CDS systems as they become more complex and dependent on external, web-based resources. CDS monitoring programs focused on the stability and integrity of CDS need to consider overall system performance as much as firing rates. In addition to the problem described above, we learned that there had been a noticeable delay for some time that users had not reported to Information Systems.

Increased system response/load times can have significant clinical consequences, particularly for patients who are acutely ill and need emergency orders placed. EHR vendors currently provide system administrators with tools that facilitate CDS performance monitoring, such as the automated Epic “System Pulse” tool. In the future, we hope that Epic and other EHR vendors will extend these tools with more granular logging to aid troubleshooting and issue resolution.

This case also highlights the importance of CDS testing during major EHR updates, as well as the need to test on patients with realistic volumes of data. The significant delays we experienced likely would have been caught by testing on real patients or on more realistic test patients. Potential methods to accomplish this testing include migrating changes to an environment with real patient data and/or creating fake patients with more realistic volumes of data. Both of these solutions have associated challenges. Testing on real patient data raises privacy concerns and, when testing web services, requires recreating interfaces and other build components in environments that are often refreshed daily. Additionally, creating fake patient data is time-consuming, and it can be challenging to produce authentic data when a system relies on third-party interfaces to laboratory information systems and radiology information systems. One way to circumvent these testing challenges is to release CDS changes to gradually expanding subsets of the user population, as sketched below. This ensures that system performance remains stable while also limiting the impact of any unintended negative consequences of interventions.
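A staged rollout does not require vendor-specific tooling: deterministically assigning users to an exposure cohort and widening that cohort over time achieves it, provided a stable user identifier is available. The Python sketch below is a minimal illustration of that approach; the identifier format and percentages are assumptions for the example.

```python
import hashlib


def in_rollout(user_id: str, rollout_percent: int) -> bool:
    """Decide deterministically whether a user sees the new CDS behavior.

    Hashing the user ID yields a stable bucket from 0 to 99, so each user gets a
    consistent experience and the exposed population grows as rollout_percent rises.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent


# Example: expose 5% of ordering providers first, then widen to 25%, 50%, and 100%
# as response-time monitoring confirms the change behaves as expected.
print(in_rollout("provider-12345", 5))
```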

Finally, this case underscores the importance of close collaboration with EHR vendors and third-party CDS service vendors (eg, NDSC) to aid in the quick recognition and resolution of CDS system errors. In our case, we relied on diagnostic tests run on NDSC servers as well as on working with Epic to quickly pare the CCD down to the minimum required data set. After solutions were proposed by both NDSC and Epic, implementation required extensive testing, both locally and with the cloud-based NDSC system.

With the emergence of CDS Hooks and FHIR-based CDS systems, monitoring of response time and system load will become an essential element of ongoing CDS program monitoring. Based on our experience with this event, we are working to develop automated anomaly detectors that focus on the system response times and programmatic exceptions recorded by Epic. We will incorporate these detectors into our existing automated daily push notification system that monitors for other CDS malfunctions and anomalies (eg, patient-level and alert-level firing rates). We plan to extend this monitoring to alert us in real time to changes in system performance.
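As a concrete starting point, such a detector can compare each interval's average response time against a rolling baseline built from recent history and flag outliers for review. The Python sketch below assumes response times have already been aggregated per interval (for example, per 15-minute period, as a monitoring tool or log extract might provide); the window length and thresholds are illustrative assumptions rather than our production configuration.

```python
from statistics import mean, stdev


def flag_anomalies(interval_means, baseline_window=96, z_threshold=3.0, floor_seconds=1.0):
    """Flag intervals whose mean CDS response time is anomalously high.

    interval_means: per-interval average response times in seconds, ordered
    oldest to newest (eg, one value per 15-minute interval). An interval is
    flagged when it exceeds the rolling baseline by more than z_threshold
    standard deviations and by at least floor_seconds in absolute terms.
    """
    anomalies = []
    for i in range(baseline_window, len(interval_means)):
        baseline = interval_means[i - baseline_window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        current = interval_means[i]
        if current > mu + z_threshold * max(sigma, 1e-6) and current - mu > floor_seconds:
            anomalies.append((i, current, mu))
    return anomalies


# Example: a sudden jump in the most recent intervals is flagged for review.
history = [0.8] * 96 + [0.9, 12.4, 15.1]
for index, observed, expected in flag_anomalies(history):
    print(f"interval {index}: observed {observed:.1f}s vs baseline {expected:.1f}s")
```

In practice, the same logic can run over exception counts like those shown in Figure 3, and flagged intervals can feed the existing daily push notifications or a real-time alert.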

FUNDING

This work was supported by the National Library of Medicine of the National Institutes of Health, grant number R01LM011966. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

AUTHOR CONTRIBUTIONS

DR collected the data, cleaned and analyzed the data, and drafted and revised the manuscript. AL initiated the collaborative project, assisted in the concept and design of the paper, and revised the draft manuscript. AW initiated the collaborative project, assisted in conception and design of the article, and revised the draft manuscript. TA, MSL, AM, RP, NW, and GZ participated in the analysis of the problem and methods to monitor the issue going forward and revised the draft paper. All authors discussed the issue and contributed to the final manuscript.

CONFLICT OF INTEREST

Dr Landman has received consulting fees from Abbott. No other authors have affiliations with or involvement in any organization or entity with any financial interest (such as honoraria, educational grants, participation in speakers’ bureaus; membership, employment, consultancies, stock ownership, or other equity interest; or expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge, or beliefs) in the subject matter or materials discussed in this manuscript.

REFERENCES

1. Bates DW, Cohen M, Leape LL, et al. Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc 2001; 8: 299–308.
2. Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med 2003; 348 (25): 2526–34.
3. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA 1998; 280 (15): 1311–6.
4. Landman AB, Takhar SS, Wang SL, et al. The hazard of software updates to clinical workstations: a natural experiment. J Am Med Inform Assoc 2013; 20 (e1): e187–90.
5. Stone EG. Unintended adverse consequences of a clinical decision support system: two cases. J Am Med Inform Assoc 2018; 25 (5): 564–7.
6. Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med 2012; 157 (1): 29–43.
7. Wright A, Hickman TTT, McEvoy D, et al. Analysis of clinical decision support system malfunctions: a case series and survey. J Am Med Inform Assoc 2016; 23 (6): 1068–76.
8. Yoshida E, Fei S, Bavuso K, et al. The value of monitoring clinical decision support interventions. Appl Clin Inform 2018; 9: 163–73.
9. Kawamoto K, Lobach DF. Proposal for fulfilling strategic objectives of the US Roadmap for national action on clinical decision support through a service-oriented architecture leveraging HL7 services. J Am Med Inform Assoc 2007; 14 (2): 146–55.
10. Abedin Z, Hoerner R, Kawamoto K, et al. Abstract 13: evaluation of a FHIR-based clinical decision support tool for calculating CHA2DS2-VASc scores. Circ Cardiovasc Qual Outcomes 2019; 12: A13.
11. Wulff A, Haarbrandt B, Tute E, et al. An interoperable clinical decision-support system for early detection of SIRS in pediatric intensive care using openEHR. Artif Intell Med 2018; 89: 10–23.
12. Sittig DF, Wright A, Osheroff JA, et al. Grand challenges in clinical decision support. J Biomed Inform 2008; 41 (2): 387–92.
13. Wright A, Sittig DF, Ash JS, et al. Lessons learned from implementing service-oriented clinical decision support at four sites: a qualitative study. Int J Med Inform 2015; 84 (11): 901–11.
14. Spineth M, Rappelsberger A, Adlassnig K-P. Implementing CDS hooks communication in an Arden-syntax-based clinical decision support platform. Stud Health Technol Inform 2018; 255: 165–9.
15. Overview-FHIR v4.0.0. https://www.hl7.org/fhir/overview.html Accessed April 2, 2019.
16. Shang Y, Wang Y, Gou L, et al. Development of a service-oriented sharable clinical decision support system based on ontology for chronic disease. Stud Health Technol Inform 2017; 245: 1153–7.
