Abstract
Objective
Clinical decision support (CDS) hard-stop alerts—those in which the user is either prevented from taking an action altogether or allowed to proceed only with the external override of a third party—are increasingly common but can be problematic. To understand their appropriate application, we asked 3 key questions: (1) To what extent are hard-stop alerts effective in improving patient health and healthcare delivery outcomes? (2) What are the adverse events and unintended consequences of hard-stop alerts? (3) How do hard-stop alerts compare to soft-stop alerts?
Methods and Materials
Studies evaluating computerized hard-stop alerts in healthcare settings were identified from biomedical and computer science databases, gray literature sites, reference lists, and reviews. Articles were extracted for process outcomes, health outcomes, unintended consequences, user experience, and technical details.
Results
Of 32 studies, 15 evaluated health outcomes, 16 process outcomes only, 10 user experience, and 4 compared hard and soft stops. Seventy-nine percent showed improvement in health outcomes and 88% in process outcomes. Studies reporting good user experience cited heavy user involvement and iterative design. Eleven studies reported on unintended consequences including avoidance of hard-stopped workflow, increased alert frequency, and delay to care. Hard stops were superior to soft stops in 3 of 4 studies.
Conclusions
Hard stops can be effective and powerful tools in the CDS armamentarium, but they must be implemented judiciously with continuous user feedback informing rapid, iterative design. Investigators must report on associated health outcomes and unintended consequences when implementing IT solutions to clinical problems.
Keywords: hard-stop alert, decision support systems, clinical, medical order entry systems, decision support techniques, alert fatigue
OBJECTIVE
Background and significance
As usage of electronic health records (EHRs) and computerized physician order entry (CPOE) has increased in the era of meaningful use (from 23.9% of office-based physicians in 2005 to 86.9% in 20151), use of clinical decision support (CDS) in the form of alerts has increased exponentially. An alert—an automatic warning message meant to communicate essential information to the clinician using an EHR—is now generated for 6% to 8% of all orders entered into an EHR by providers.2,3 Each of these alerts represents an intention to provide useful information to the clinician, shape clinician behavior, and positively impact patient safety and outcomes.
However, the utility of these warnings is attenuated by unintended consequences such as alert fatigue (clinicians’ tendency to ignore repeated alerts) and distraction, especially in an environment where alert volume is high, and many alerts are clinically irrelevant. In fact, 90% to 95% of alerts are overridden,2–4 and this phenomenon has been identified as a cause of several high-profile errors.5,6
Efforts at minimizing alert fatigue have included tiering alerts by importance and titrating the degree of interaction required by the clinician accordingly.7 Three different alert categories have arisen from these efforts: hard stops, soft stops, and passive alerts. We define hard-stop alerts as those in which the user is either prevented from taking an action altogether or allowed to proceed only with the external override of a third party. Soft-stop alerts are those in which the user is allowed to proceed against the recommendations presented in the alert as long as an active acknowledgement reason is entered. A passive alert is one in which information is presented but does not interrupt the user workflow and does not require any interaction on the part of the user.
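The three alert categories defined above differ only in how much friction stands between the user and the action. As a purely illustrative sketch (not drawn from any system evaluated in this review; all names are hypothetical), the gating logic can be expressed as:

```python
from enum import Enum

class AlertType(Enum):
    PASSIVE = "passive"      # informational only; never interrupts the workflow
    SOFT_STOP = "soft stop"  # may be overridden with an active acknowledgement reason
    HARD_STOP = "hard stop"  # blocked unless an external third party approves

def may_proceed(alert_type, override_reason=None, third_party_approved=False):
    """Return True if the ordering workflow may continue past the alert."""
    if alert_type is AlertType.PASSIVE:
        return True
    if alert_type is AlertType.SOFT_STOP:
        # Soft stop: the user may proceed after documenting a reason
        return override_reason is not None
    # Hard stop: the user alone cannot override; a third party must approve
    return third_party_approved
```

The key distinction is that a hard stop never returns control to the ordering user alone: the override path, when one exists at all, runs through someone else, such as a pharmacist or radiologist.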
In the context of alert fatigue, hard-stop alerts would appear to prevent the kind of dangerous overrides that result in harm to the patient. They can be an appealing option when compared to soft stops and passive alerts for this reason, but they can have unintended consequences, such as delays to the delivery of appropriate therapy.8
To inform the appropriate application of alerts, we sought to characterize the risks and benefits of hard stops by asking 3 key questions: (1) To what extent are hard-stop CDS alerts in EHRs effective in improving patient health and healthcare delivery outcomes? (2) What are the unintended consequences and adverse events of computerized hard-stop alerts? (3) How do hard-stop alerts compare to soft-stop alerts in efficacy and unintended consequences?
MATERIALS AND METHODS
To address these questions, we performed a systematic review of the biomedical literature. This systematic review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines.9 A formal protocol was developed and submitted to the PROSPERO registry of systematic reviews prior to commencing the review (CRD42017057262).
Data sources and searches
We began by performing a simple search on PubMed for the term “hard stop.” Of the 38 studies returned, 21 pertained to alerts in the EHR. A full-text review was performed on these studies extracting the definition of “hard stop” from each. There was near, but not universal, consensus on the definition of hard stop in the papers we reviewed. Twenty-one papers used the term “hard stop” or “hard-stop” in the context of CDS. For 16 of these, the definition was similar: hard-stop alerts are automatic warning messages meant to communicate essential information to the clinician using an EHR that either prevent the user from taking an action altogether or allow the user to proceed only with the external override of a third party.7,10–24 For 3 of the remaining studies, the definition was most consistent with what we have termed a soft stop—an automatic warning message meant to provide information that may change the clinician’s course of action but can be ignored or overridden with only minimal action (such as clicking an acknowledgement reason).24–26 In the final 2 studies, the term “hard stop” was used but not defined.27,28 Definitions extracted from the papers can be found in Supplementary Appendix B. From these studies and others in the domain of CDS alerts, a consensus definition of hard stop was developed along with definitions for soft-stop alerts and passive alerts.
Using these studies and associated references, a search strategy was developed in collaboration with our institution’s librarian (AH) to identify all hypothesis-testing and qualitative studies in the domain of hard-stop alerts. The Yale MeSH Analyzer was used to identify key Medical Subject Headings (MeSH) in the MEDLINE database.29 Important free-text search terms included “hard stop,” “mandatory,” and “forcing functions.” These, combined with index terms relevant to computerized CDS, were used to examine the following databases: MEDLINE (Ovid), MEDLINE In-Process (Ovid), EMBASE (Ovid), The Cochrane Library (Wiley), Web of Science, and Engineering Village. Gray literature was searched and the following relevant websites examined: National Institute for Health and Care Excellence (NICE), EMIS, Vision, and SystemOne. Finally, the database search was enhanced by a review of reference lists from relevant articles. No date limits or language restrictions were used. All searches were performed by a professional librarian, initially conducted on March 9, 2017, and updated on December 9, 2017. Full search strategies are available in Supplementary Appendix A.
Study selection
We identified 826 studies using our search strategy (excluding duplicates). All titles and abstracts were screened in duplicate by 3 reviewers (EP, MS, RS), with discrepancies resolved by discussion until consensus was reached. One hundred fourteen full-text articles were then reviewed in duplicate (EP, MS), and all discrepancies were resolved in a similar manner. Thirty-two articles were included in the review. The screening process is illustrated in Figure 1 and was performed using Covidence systematic review software (Veritas Health Innovation, Melbourne, Australia).
Figure 1.
Flow of information through the different phases of the systematic review.
We included experimental and observational studies reporting quantitative measures of the impact of hard-stop alerts or qualitative analysis of the experience of hard-stop alerts. We required that the hard stop be embedded within a software program utilized in a healthcare setting. Studies in which the hard stop was one small component of a much larger intervention and not evaluated directly were excluded. In cases in which the description of the intervention was insufficient to categorize as a hard stop, soft stop, or passive alert, the authors were contacted for further clarification.
Data extraction, quality assessment, and analysis
Data were extracted into a web-based spreadsheet (Google Sheets, Google, Mountain View, CA) and references managed using a web-based reference manager (Paperpile, Paperpile LLC, Vienna, Austria). Data were extracted in full by one reviewer (EP) with confirmation by a second reviewer (MS). All discrepancies were resolved by discussion. Extracted information included hard stop definitions, synonyms, and technical specifics, including the software program used, hard-stop alert trigger, alert message text, and the process required to resolve the alert. We also extracted study design, time period, sample size, and clinical domain of the intervention as well as process outcomes, patient health and healthcare delivery outcomes, adverse events and unintended consequences, and the user experience reported in each study.
We assessed methodological quality using the Downs and Black Checklist for both randomized and nonrandomized studies.30 Downs and Black Checklist scores were further categorized into excellent, good, fair, and poor.31 The heterogeneous character and overall low quality of the studies precluded formal meta-analyses.
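The quality categorization applied to the Downs and Black scores follows fixed cut points (24-28 excellent, 19-23 good, 14-18 fair, below 14 poor, as noted in the evidentiary table footnote); a minimal sketch of the mapping:

```python
def categorize_downs_black(score: int) -> str:
    """Map a Downs and Black checklist score to the quality
    category used in this review."""
    if score >= 24:
        return "excellent"  # 24-28 points
    if score >= 19:
        return "good"       # 19-23 points
    if score >= 14:
        return "fair"       # 14-18 points
    return "poor"           # <14 points
```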
RESULTS
Definitions
Of the 32 studies included in the review, 11 used the term “hard stop.”7,8,12,13,15–20,23 Terms used by other studies to describe the same concept included “mandatory,” “obligatory,” “compulsory,” or “required.” Some studies described the action of the system such as “must be entered into the system before” or “trigger the system to not allow” without using a specific term.
Description of the evidence
The evidentiary table (Table 1) outlines the characteristics of the included studies and their outcomes. Of the 32 included studies, 3 were randomized controlled trials,8,50,52 3 retrospective cohort studies,7,18,36 18 pre-post studies,12,13,15,16,19,20,33,35,37,39–49 and 5 time series studies.17,23,32,34,38 Six were conducted before 20107,32,39,41,48,49 and 12 within the past 3 years.12,15,16,18,33–36,38,46,47,50 The majority (n = 29) were conducted in the United States, with the remaining conducted in Taiwan (n = 1),13 Korea (n = 1),48 and Israel (n = 1).41 Most (n = 26) were conducted in the hospital setting at single-institution, academic medical centers. Two were conducted at Veterans Affairs hospitals35,43 and 4 in a multi-practice primary care setting.15,41,44,49
Table 1.
Evidentiary table
| Study | Downs and Black scoreᵃ (category) | Type | Trigger/setting | Process outcomes (baseline ▴/▾ result; *p < 0.05) | Patient health/healthcare delivery outcomes (baseline ▴/▾ result; *p < 0.05) | Unintended/adverse events | User experience |
|---|---|---|---|---|---|---|---|
| — | 12 (P) | interruptive | order (any) | reconciliation completed: 50% ▴ 95% | — | — | neutral |
| — | 16 (F) | interruptive | order (med) | order modified: 34.2% ▴ 92.7% | — | increased alerts | — |
| — | 16 (F) | inline | note (procedure) | documentation completed: 62.8% ▴ 99.8%* | — | — | good |
| — | 11 (P) | inline | note (outpatient visit) | documentation completed: 74.11% ▴ 80% | — | — | poor |
| — | 14 (F) | inline | flowsheet (anesthesia case) | documentation completed: 95.6% ▴ 100% | — | — | — |
| — | 21 (G) | interruptive | order (any) | — | prophylaxis received: 25.9% ▴ 36.8%*; VTE rate: 0.51% ▾ 0.43% | no increased bleeding | — |
| — | 20 (G) | inline | order (pulmonary CTA) | appropriateness of CTA: 58% ▴ 76%* | diagnosis of PE: 0.44% ▴ 0.38% | — | poor |
| — | 17 (F) | interruptive | EHR login | median time to renewal: 189 min ▾ 133 min*; mean orders/patient: 1.46 ▴ 2.34* | median time in restraint: 235 min ▾ 130 min | no events reported | — |
| — | 18 (F) | interruptive | order set | risk stratification documentation: 3.0% ▴ 97.8%* | prophylaxis received: 66.2% ▴ 84.4%*; DVT rate: 2.25% ▾ 0.25%*; preventable VTE rate: 1.00% ▾ 0.17%* | — | neutral |
| — | 15 (F) | interruptive | order (med) | nonadherence rate: 3.5% ▾ 1.2%* | — | — | — |
| — | 11 (P) | interruptive | open patient chart | bp documentation: 40.6% ▴ 58.5%* | — | — | — |
| — | 17 (F) | inline | order set | mobility orders: 58% ▴ 82%*; mobility achieved: 22% ▴ 80%* | multiple measures: no change | no events reported | — |
| — | 17 (F) | inline | order (pulmonary CTA) | — | pulmonary CTA yield: 3.1% ▴ 16.5%* | — | — |
| — | 20 (G) | interruptive | quantity in order (med) | warning rate: 0.61% ▾ 0.16%* | — | — | — |
| — | 21 (G) | interruptive | order (lumbar-spine MRI) | guideline adherence: 78% ▴ 96%* | visits w/ lumbar-spine MRI: 5.3% ▾ 3.7%*; visits w/ any MRI: 6.5% ▾ 4.5%* | — | — |
| — | 13 (P) | interruptive | order (any) | — | prophylaxis received: 73% ▴ 90%*; prophylaxis within 24 h: 73% ▴ 93%*; VTE rate: 2% ▾ 0.5% | — | — |
| — | 16 (F) | interruptive | pharmacy approval | — | transfusion rate: 84% ▾ 76%*; estimated cost savings: $700k/yr | — | — |
| — | 15 (F) | interruptive | order set | — | prophylaxis received: 55-70% ▴ 82-85%*; racial disparity: 6-14% ▾ 1-4%* | — | — |
| — | 12 (P) | interruptive | close patient chart | screening rate: 58% ▴ 99.5% | — | — | — |
| — | 16 (F) | interruptive | order (stool culture, GC-EIA, O&P) | — | # stool O&P: 129 ▾ 46*; # GC-EIA: 47 ▾ 27; # stool cultures: 249 ▾ 106*; estimated cost savings: $8k | decrease in enteropathogenic bacteria detected | — |
| — | 13 (P) | interruptive | not noted | past reaction reporting rate: 0.23% ▴ 0.78% | rate of reactions from known allergen: 15% ▾ 1% | — | — |
| — | 17 (F) | interruptive | order (med) | compliance (not tiered vs tiered): 34% ▴ 100%*; compliance (soft vs hard): 29% ▴ 100% | — | — | good |
| Peterfreund et al.,17 2011; anesthesia adverse event reporting | 12 (P) | interruptive | close anesthesia case | number of events captured: 132 ▴ 213* | — | — | good |
| — | 16 (F) | interruptive | order (lab) | averting duplicate orders (soft vs hard): 43.6% ▴ 92.3%* | cost savings per alert (soft vs hard): $3.52 ▴ $16.08 | — | good |
| — | 19 (G) | interruptive | order (metformin) | inappropriate administration rate: 32.6% ▾ 8.8%* | lactic acidosis events: 0 ▾ 0 | improved glycemic control | — |
| — | 16 (F) | inline | note (outpatient visit) | pain diagnosis rate: 42% ▴ 36%; pain documentation: 49% ▴ 44% | — | — | poor |
| — | 18 (F) | interruptive | order (discharge) | compliance, discharges: 6.4% ▴ 94.0%*; compliance, admissions: 6.0% ▴ 44.5%* | — | — | — |
| — | 21 (G) | interruptive | order (TMP/Sulfa, warfarin) | not reordering alerting med: 13.5% ▴ 57.2% | — | 4 cases of delayed therapy | — |
| — | 20 (G) | interruptive | order (foot/ankle x-ray) | adherence, ankle x-ray: 61.8% ▴ 92.6%*; adherence, foot x-ray: 63.6% ▴ 81.0%* | visits w/ ankle x-ray: 48.9% ▾ 64.3%*; visits w/ foot x-ray: 54.0% ▾ 54.6%; fractures detected: 4.8% ▴ 10.1%* | note/CDS discordance | — |
| — | 16 (F) | interruptive | order (imaging) | % studies low-yield: 5.43% ▾ 1.92%*; % low-yield abandoned: 44.33% ▾ 11.45%* | — | — | — |
| — | 21 (G) | inline | flowsheet (anesthesia case) | median items completed: 9.5 ▴ 19*; median time to med: 240 s ▾ 520.5 s | — | — | — |
| — | 17 (F) | interruptive | order (discharge) | plan provided and complete: 79% ▴ 81%* | — | — | good |
Abbreviations: bp, blood pressure; CT, computed tomography; CTA, computed tomography angiography; DVT, deep venous thrombosis; ED, emergency department; EHR, electronic health record; GC-EIA, Giardia/Cryptosporidium enzyme immunoassay screen; HIV, human immunodeficiency virus; ICU, intensive care unit; MRI, magnetic resonance imaging; O&P, ova and parasite; PE, pulmonary embolism; TMP-SMX, trimethoprim-sulfamethoxazole; VTE, venous thromboembolism.
ᵃDowns and Black scores categorized into E for “excellent” (24-28 points), G for “good” (19-23 points), F for “fair” (14-18 points), or P for “poor” (<14 points).31
Fourteen studies targeted inpatient medicine,7,8,12,13,16,18,23,32,34,37,40,45–48 4 emergency medicine,20,38,39,43 5 primary care,13,41,44,49,51 3 anesthesia,17,36,52 and 2 pediatrics23,33 with the remainder targeting diverse specialties. Common clinical topics included venous thromboembolism (VTE) prophylaxis (n = 4),37,40,45,47 drug-drug or drug-allergy interactions (n = 4),7,8,49,53 reducing low-yield imaging (n = 3),44,50,51 and use of pulmonary CT angiography (CTA) for pulmonary embolism (n = 2).38,43
Overall, study quality was low. Seven were of poor quality, 17 of fair quality, 8 of good quality, and none of excellent quality.
Types of hard stops
Two types of hard stops were identified among the interventions: interruptive (pop-up) alerts and inline hard stops on data entry fields. For interruptive alerts, a dialog box pops up, interrupting the user workflow to communicate a message and requiring interaction before the workflow can proceed. For inline alerts, the message displays within the window in which the user is working, and no additional steps are needed to continue the task other than completing the required documentation. Twenty-three of the 32 studies evaluated interruptive alerts7,8,12,13,15–20,23,32,33,37,39–41,44,45,47–50 and 9 evaluated inline alerts.23,32,34–36,38,42,43,49 Technical specifics for each group are available in Supplementary Appendices C and D, respectively.
For interruptive alerts, the most common alert trigger was order placement for the medication or test that was the CDS target. In 2 cases, the alert was fired on discharge order initiation: 1 when the purpose of the CDS was to increase HIV screening in the ED33 and the other to increase completion of an asthma action plan prior to discharge from the hospital.23 One study fired the alert on EHR login to attempt to improve appropriate re-evaluation of restraint orders39 and another upon opening the patient chart to increase blood pressure documentation.41 Two studies fired the alert on attempt to close the chart: 1 to increase postpartum depression screening15 and the other to increase anesthesia adverse event reporting.17 Seven of the studies allowed the alert to be overridden by contacting an external third party such as a pharmacist or radiologist with whom the case could be discussed and who had the ability to enter the order successfully if approved.8,12,16,18,33,44,51 The remaining 16 did not allow for any override mechanism.
Inline alerts were primarily used to increase documentation in orders (n = 3),38,42,43 flowsheets (n = 2),32,36 and notes (n = 4).23,34,35,49 In all cases, the order could not be signed or the flowsheet/note closed until documentation was complete. In no case was there a mechanism for override other than abandoning the task altogether. For the majority of these alerts, documentation focused on items likely to be assessed in routine practice but poorly documented, such as anesthesia case start time and pain assessment. In 2 cases, alerts attempted to more actively guide clinical decision making by requiring providers to enter a Wells score within a CTA order38 or to enter a mobility order for intensive care unit (ICU) patients.42
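As a hedged illustration of the inline pattern (field and function names are hypothetical, not taken from any reviewed system), the sign action simply remains disabled until every required field holds a value:

```python
def missing_required_fields(form: dict, required: list) -> list:
    """Inline hard stop: return the required fields that are still empty.

    Signing stays blocked while this list is non-empty; as in the
    reviewed systems, the only way out is abandoning the task.
    """
    return [field for field in required if not form.get(field)]

def can_sign(form: dict, required: list) -> bool:
    # True only once no required field remains incomplete
    return not missing_required_fields(form, required)
```

For example, a pulmonary CTA order that requires a Wells score (the requirement is from the reviewed studies; the field names are illustrative) would report `wells_score` as missing until it is entered.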
Effect on patient health and healthcare delivery outcomes
Of the 32 studies, 15 evaluated patient health or healthcare delivery outcomes, and 11 showed improvement in those outcomes (Figure 2). For patient health outcomes, 4 of 8 studies reported improvement. Haut et al.40 showed a significant reduction in deep venous thrombosis (DVT) rate and preventable VTE events in trauma patients by embedding a hard-stop VTE risk assessment tool with appropriate prophylaxis orders within the admission order set. Hoo et al.43 showed an increase in the yield of pulmonary CTA for pulmonary embolism by embedding a mandatory Wells score calculator and conditional D-dimer order into the CTA order process. Park et al.48 showed a decrease in drug hypersensitivity reactions caused by readministration of a suspected causative agent in response to hard-stop entry of prior drug hypersensitivity reactions on admission. Tajmir et al.50 showed an increase in detection of clinically significant fractures and an increase in radiographic yield after embedding a mandatory Ottawa Ankle Rules CDS module in the order for plain ankle and foot radiography in the urgent care setting.
Figure 2.
(a) Number of studies reporting outcomes by category. (b) Number of studies reporting improvement in outcomes associated with the hard-stop intervention.
In 8 of 10 studies assessing healthcare delivery outcomes such as lab utilization or imaging cost savings, the intervention was associated with improvement as intended. Four studies of interventions to increase rates of appropriate VTE prophylaxis demonstrated improvement after implementation of a hard stop.37,40,45,47 Ip et al.44 showed a decrease in the percentage of lower-back pain visits resulting in lumbar-spine MRI in response to a hard-stop risk stratification module within the MRI order. Larson et al.46 showed a reduction in transfusion rates after eliminating a hard stop for critical value notification from pharmacy to provider for hemoglobin levels between 7.00 and 7.99. Procop et al.18 showed cost savings associated with a hard-stop alert preventing duplicate lab orders when compared to a soft-stop alert. Nikolic et al.16 showed a reduction in the proportion of stool cultures and stool ova/parasite examinations ordered after day 3 of hospitalization, with an associated estimated cost savings, after implementation of a hard-stop alert preventing such orders.
Process measures such as changes in documentation completion rates, order rates, and alert responses were evaluated in 26 studies, and in 16 of these they were the only outcomes reported.7,8,12,13,15,17,20,23,32,34–36,41,49,51,52 Twenty-four of these studies reported significant improvement in process measures in association with hard-stop implementation.7,8,12,13,15,17–20,23,32–34,36,38–42,44,48,50–52 Saigh et al.49 showed no increase in reported pain diagnoses after implementation of a hard-stop pain assessment module for the outpatient encounter note, and Bansal et al.35 showed no improvement in documentation of chemotherapy intent after placing mandatory fields in the visit note template.
Adverse events and unintended consequences
Eleven of the 32 included studies reported on adverse events or unintended consequences associated with hard-stop alerts. In 3 cases, the authors stated that no problems associated with the intervention were reported but did not actively evaluate for adverse events or elaborate further.18,39,42 Only 2 studies, Galanter et al. and Strom et al., specified health-related adverse events as planned outcome measures prior to initiation of the study. Galanter et al.37 evaluated the incidence of major and minor bleeding events after implementation of a hard-stop mandatory VTE risk assessment tool embedded in the admission order set, and an increase was not detected in association with the tool. Strom et al.8 encountered 4 delays in the delivery of appropriate medical therapy in their randomized controlled trial (RCT) evaluating a hard-stop drug-drug interaction alert for sulfamethoxazole-trimethoprim and warfarin, and the trial was ended early as a result. The remaining 5 studies reported a variety of unintended consequences including an increase in alert frequency,33 a compensatory increase in specialists’ use of lumbar-spine MRI in response to a hard-stop alert prohibiting lumbar-spine MRI for low back pain without red flags in the primary care setting,44 an unexpected improvement in glycemic control in association with a hard-stop alert prohibiting inpatient metformin in patients at risk for lactic acidosis and instead directing providers to an insulin order set,19 a 1.3% discordance between findings documented in the mandatory CDS tool and those documented in the clinical note,50 and a decrease in the percentage of tests positive for enteropathogenic bacteria in patients hospitalized for more than 3 days.16
Comparison to soft stops
Four studies directly compared hard-stop alerts to soft-stop alerts. Three out of the 4 studies showed superiority of the hard-stop alert in comparison to the soft-stop alert in achieving the desired process outcome.7,18,19 The fourth study found that both soft-stop and hard-stop alerts reminding the providers to complete restraint renewal orders improved ordering rates over the alert-free baseline but found no difference between the 2 alert styles in median restraint order renewal time or number of restraint orders.39 Only 1 study, Procop et al.18, directly compared the impact of a hard-stop vs soft-stop alert on a healthcare delivery outcome and showed a significant cost savings using the hard-stop alert over the soft-stop alert to prevent unnecessary duplicate lab orders. No studies compared adverse event rates between the 2 alert styles.
User experience
Ten studies reported on user experience. Among these, 4 studies reported broad provider acceptance of the hard-stop alert and attributed this to comprehensive implementation plans involving user engagement, multiple iterations, and rapid response to feedback.17,18,23,34 Two additional studies stated their interventions were well accepted, with 1 specifying that acceptance was likely tied to the low frequency of hard-stop alerts.7,32 The remaining 3 studies examined inline hard stops on documentation fields and reported poor user reception. After implementation of a hard stop on documentation of chemotherapy intent in a clinical note template, no increase in documentation was shown, and users reported avoiding use of the template because of the hard stops.35 In Geeting et al.,38 the authors hypothesized that users falsely inflated the Wells score entered in a mandatory field of a pulmonary CTA order, resulting in improvement in the appropriateness of CTA orders without the expected change in detection rates of pulmonary embolism. Finally, in Saigh et al.,49 the majority of users surveyed felt a mandatory pain assessment tool embedded in their workflow did not change their pain assessment and was difficult to use. Many providers chose the fastest way through the module to limit time spent in the tool, even if doing so produced inaccurate and contradictory clinical documentation.
DISCUSSION
Main findings
We found that hard stops are often used in clinical practice as a tool to influence provider behavior despite the concerns discussed in the literature around their use.54,55 Although several prior reviews have more broadly examined the effects of CDS and alerts on a variety of outcomes56–59 and still other reviews have commented on the use of hard-stop alerts as part of an overall analysis of CDS for a particular clinical question,11,22 this is the first review to our knowledge that specifically examines the hard-stop modality to answer these 3 key questions:
To what extent are hard-stop CDS alerts in EHRs effective in improving patient health and healthcare delivery outcomes?
In the majority of cases, hard-stop CDS alerts were associated with the desired improvements in process measures, but demonstration of improvement in patient health outcomes or healthcare delivery outcomes was less common. This is consistent with what has been shown previously for all categories of CDS alerts.60 Notably, in more than half the studies, investigators did not report on health outcomes or adverse events associated with the intervention. For the studies that did, 80% (8 of 10) showed improvement in the desired healthcare delivery outcome such as lab or imaging costs or ordering rate, but only 50% (4 of 8) showed improvement in patient health outcomes such as detection of pulmonary embolism. Patient health and healthcare delivery outcomes are harder to measure, which likely explains the relative paucity of evidence presented in these studies, but more effort needs to be made to include these outcomes in studies, as these are the outcomes that matter most from the perspective of patients, providers, and public health. Based on the evidence presented here, hard-stop alerts appear to be a useful tool in improving healthcare delivery outcomes, but no conclusion can be made about their efficacy in improving patient health outcomes.
What are the adverse effects and unintended consequences of hard-stop electronic alerts?
In agreement with what has now been recognized as an unavoidable part of health information technology,61 our review showed that unintended consequences do occur in association with hard stops and that these consequences can be either positive or negative, minor or profound. A more complete picture of the consequences specific to hard stops is impossible, given the very limited number of studies that systematically evaluated this area. Recent studies were more likely to incorporate a discussion of unintended consequences, likely reflecting increasing awareness of technology-induced error.
How do hard-stop alerts compare to soft-stop alerts in efficacy and unintended consequences?
Although only 4 studies directly compared the 2 alert styles, hard-stop alerts appear to be superior to soft-stop alerts in achieving the desired process outcomes. This is expected, as hard-stop alerts, by design, allow for only a single course of action. No conclusion can be made about how the 2 alert styles compare when considering patient health, healthcare delivery outcomes, and unintended consequences due to lack of evidence.
Implications
Implications for practice
There is a desire among providers for differentiation of the alert experience based on severity of the alert and agreement that it should be harder to override alerts for critical issues.53 Such tiering of alerts improves responses to all modalities of alerts.7 Combined with the demonstrated improvement in process measures associated with hard stops, this review supports a role for hard-stop alerts in the CDS armamentarium. However, because they can be such powerful tools, effectively prohibiting specific actions on the part of the provider, they need to be carefully implemented with diligent assessment of possible harms and continuous user involvement in the design and implementation process. A lack of user testing and iterative design is more likely to lead to unintended consequences and error-prone systems and must be addressed in accordance with the growing body of literature on this topic.61
Implications for implementation
Several design characteristics of hard-stop alerts appear to be associated with positive outcomes. These include integrating important clinical decision-making aids, such as recent relevant lab results and specific score or risk calculators,18,37,40,43,50 triggering the alert at a natural and relevant place in physician workflow,12,13,18,33,35,43,49 and providing the possibility of a third-party override, such as telephone discussion with a pharmacist or radiologist.8,12,16,18,33
Implications for research
As this review demonstrates, there continues to be marked heterogeneity in design, implementation, and evaluation, hampering the ability to disseminate these tools and ensure their safe and effective use. Based on the literature presented here, we propose that future research on hard-stop alerts must report on clearly defined patient health outcomes, healthcare delivery outcomes, adverse events, and user experience in addition to any relevant process measures. Publications must include sufficient technical detail to allow for replication elsewhere; screenshots are especially useful in achieving this goal and should be included. Study design must also be strengthened to allow more robust conclusions to be drawn. A pre-post design may be more realistic in the setting of practical clinical informatics or quality improvement initiatives, but conclusions could be strengthened with the use of cluster randomized trials or stronger quasi-experimental designs, such as an interrupted time-series analysis that accounts for temporal trends by comparison to either a local or national metric.62 Interventions should be followed for a long enough period to determine their true impact beyond the novelty effect.63 Additionally, a mixed-methods approach—integrating qualitative and quantitative methods—may help better characterize user experience and identify the design features most associated with a successful implementation.64 Hard-stop alerts are implemented in complex healthcare systems and have the capacity to affect not just the individual patient but all patients receiving care in that system. It is imperative that we approach the study of CDS with the same rigor with which we approach more traditional healthcare interventions. Development of a standard for the reporting of CDS interventions is an important step toward achieving this goal.
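The interrupted time-series analysis recommended above is commonly operationalized as a segmented regression, estimating a change in level and a change in slope at the intervention point. The following is a minimal illustrative sketch on hypothetical data (not drawn from any of the reviewed studies), for example a monthly rate of an alerted ordering behavior before and after a hard stop goes live:

```python
# Segmented regression for an interrupted time series:
#   y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t + e_t
# b2 estimates the immediate level change and b3 the slope change at the
# intervention time t0. Data below are simulated for illustration only.
import numpy as np

def fit_its(y, t0):
    """Fit a segmented regression to a series y with intervention at index t0.
    Returns (baseline level, baseline slope, level change, slope change)."""
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)                      # post-intervention indicator
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef

# Hypothetical 24-month series: steady upward drift in the target metric,
# then a hard stop introduced at month 12 cuts the level by about 5 units.
rng = np.random.default_rng(0)
months = np.arange(24)
series = 20 + 0.5 * months - 5.0 * (months >= 12) + rng.normal(0, 0.1, 24)
b0, b1, b2, b3 = fit_its(series, t0=12)
print(f"estimated level change at intervention: {b2:.1f}")  # close to the true -5.0
```

A pre-post comparison of means on the same data would conflate the intervention effect with the pre-existing upward trend; the segmented model separates the two, which is why this design supports stronger causal claims.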
With more and better-designed studies, we may be able to draw conclusions about which clinical topics or clinical actions are most and least suited to a hard-stop alert and which specific design features, such as third-party override or display of clinical calculators and lab results, are most associated with a successful implementation, as well as develop a more specific roadmap for design and implementation. We may also be able to develop a better understanding of how hard-stop alerts fit into the overall alert experience of the clinician and the frequency with which they should be used.
Limitations
The most important limitation of this review is the lack of sufficient high-quality studies examining hard-stop alerts in the EHR, as well as marked heterogeneity in the design and setting of the alerts, limiting both the strength of the conclusions and their generalizability. An additional limitation is the likely existence of significant publication bias. Although we performed a thorough search of the gray literature and included relevant abstracts not published as full articles, it is likely that many more alerts are being implemented in clinical practice than are being studied. And of those studied, alerts with positive results are perhaps more likely to be published, potentially biasing the conclusions of this review. Another important limitation was underreporting of patient health and healthcare delivery outcomes, harms, technical implementation details, and user responses. These are especially important for judging the utility of the intervention and for replicating it in new settings.
CONCLUSION
Hard stops can be useful tools in the CDS armamentarium. Because they are powerful tools, they must be implemented judiciously, with continuous user feedback informing rapid, iterative design. Investigators must report on associated health outcomes and harms when implementing IT solutions to clinical problems.
SUPPLEMENTARY MATERIAL
Supplementary material is available at Journal of the American Medical Informatics Association online.
Conflict of interest statement. None declared.
FUNDING
This work was supported by the National Library of Medicine grant number T15 LM007056 and the Agency for Healthcare Research and Quality grant numbers K08HS024332 and K08HS021271. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality or the National Library of Medicine.
CONTRIBUTORS
All authors have made substantial contribution to all of the following: (1) the concept or design of the work; or the acquisition, analysis, or interpretation of data for the work; (2) drafting the work or revising it critically for important intellectual content; (3) final approval of the version to be published; and (4) agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Acknowledgments
The authors would like to acknowledge the efforts of the Cushing/Whitney Medical Library Cross-Departmental Team, led by Vermetha Polite, in obtaining the full text of the papers included in this review.
REFERENCES
- 1. QuickStats: Percentage of Office-Based Physicians with a Basic Electronic Health Record (EHR) System, by State—National Electronic Health Records Survey, United States, 2014. MMWR Morb Mortal Wkly Rep 2015; 64(34): 963. https://www.cdc.gov/mmwr/preview/mmwrhtml/mm6434a10.htm.
- 2. Rosenberg SN, Sullivan M, Juster IA. Overrides of medication alerts in ambulatory care. Arch Intern Med 2009; 169(14): 1337; author reply 1338.
- 3. Nanji KC, Slight SP, Seger DL, et al. Overrides of medication-related clinical decision support alerts in outpatients. J Am Med Inform Assoc 2014; 21(3): 487–91.
- 4. Zenziper Straichman Y, Kurnik D, Matok I, et al. Prescriber response to computerized drug alerts for electronic prescriptions among hospitalized patients. Int J Med Inform 2017; 107: 70–5.
- 5. Wachter R. The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age. New York: McGraw Hill Professional; 2015.
- 6. Safe use of health information technology. Sentinel Event Alert 2015; 54: 1–6.
- 7. Paterno MD, Maviglia SM, Gorman PN, et al. Tiering drug-drug interaction alerts by severity increases compliance rates. J Am Med Inform Assoc 2009; 16(1): 40–6.
- 8. Strom BL, Schinnar R, Aberra F, et al. Unintended effects of a computerized physician order entry nearly hard-stop alert to prevent a drug interaction: a randomized controlled trial. Arch Intern Med 2010; 170(17): 1578–83.
- 9. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg 2010; 8(5): 336–41.
- 10. Evans AS, Lazar EJ, Tiase VL, et al. The role of housestaff in implementing medication reconciliation on admission at an academic medical center. Am J Med Qual 2011; 26(1): 39–42.
- 11. Goldzweig CL, Orshansky G, Paige NM, et al. Electronic health record–based interventions for improving appropriate diagnostic imaging: a systematic review and meta-analysis. Ann Intern Med 2015; 162(8): 557–65.
- 12. Helmons PJ, Coates CR, Kosterink JGW, et al. Decision support at the point of prescribing to increase formulary adherence. Am J Health Syst Pharm 2015; 72(5): 408–13.
- 13. Hsu C-C, Chou C-Y, Chou C-L, et al. Impact of a warning CPOE system on the inappropriate pill splitting of prescribed medications in outpatients. PLoS One 2014; 9(12): e114359.
- 14. Kilgore MR, McIlwain CA, Schmidt RA, et al. Reflex test reminders in required cancer synoptic templates decrease order entry error: an analysis of mismatch repair immunohistochemical orders to screen for Lynch syndrome. J Pathol Inform 2016; 7(1): 48.
- 15. Loudon H, Nentin F, Silverman ME. Using clinical decision support as a means of implementing a universal postpartum depression screening program. Arch Womens Ment Health 2016; 19(3): 501–5.
- 16. Nikolic D, Richter SS, Asamoto K, et al. Implementation of a clinical decision support tool for stool cultures and parasitological studies in hospitalized patients. J Clin Microbiol 2017; 55(12): 3350–4.
- 17. Peterfreund RA, Driscoll WD, Walsh JL, et al. Evaluation of a mandatory quality assurance data capture in anesthesia: a secure electronic system to capture quality assurance information linked to an automated anesthesia record. Anesth Analg 2011; 112(5): 1218–25.
- 18. Procop GW, Keating C, Stagno P, et al. Reducing duplicate testing: a comparison of two clinical decision support tools. Am J Clin Pathol 2015; 143(5): 623–6.
- 19. Rossi AP, Wellins CA, Savic M, et al. Use of computer alerts to prevent the inappropriate use of metformin in an inpatient setting. Qual Manag Health Care 2012; 21(4): 235–9.
- 20. Schnall R, Sperling JD, Liu N, et al. The effect of an electronic ‘hard-stop’ alert on HIV testing rates in the emergency department. Stud Health Technol Inform 2013; 192: 432–6.
- 21. Vawdrey DK, Chang N, Compton A, et al. Impact of electronic medication reconciliation at hospital admission on clinician workflow. AMIA Annu Symp Proc 2010; 2010: 822–6.
- 22. Yousem DM. Combating overutilization: radiology benefits managers versus order entry decision support. Neuroimaging Clin N Am 2012; 22(3): 497–509.
- 23. Zipkin R, Schrager SM, Keefer M, et al. Improving home management plan of care compliance rates through an electronic asthma action plan. J Asthma 2013; 50(6): 664–71.
- 24. Felcher AH, Gold R, Mosen DM, et al. Decrease in unnecessary vitamin D testing using clinical decision support tools: making it harder to do the wrong thing. J Am Med Inform Assoc 2017; 24(4): 776–80.
- 25. Cecchini M, Framski K, Lazette P, et al. Electronic intervention to improve structured cancer stage data capture. J Oncol Pract 2016; 12(10): e949–56.
- 26. Stenner SP, Chakravarthy R, Johnson KB, et al. ePrescribing: reducing costs through in-class therapeutic interchange. Appl Clin Inform 2016; 7(4): 1168–81.
- 27. MacKay M, Anderson C, Boehme S, et al. Frequency and severity of parenteral nutrition medication errors at a large children’s hospital after implementation of electronic ordering and compounding. Nutr Clin Pract 2016; 31(2): 195–206.
- 28. McNulty J, Donnelly E, Iorio K. Methodologies for sustaining barcode medication administration compliance: a multi-disciplinary approach. J Healthc Inf Manag 2009; 23: 30–3.
- 29. Grossetta Nardini HK, Wang L. The Yale MeSH Analyzer. 2018. http://mesh.med.yale.edu/. Accessed April 12, 2018.
- 30. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health 1998; 52(6): 377–84.
- 31. O’Connor SR, Tully MA, Ryan B, et al. Failure of a numerical quality assessment scale to identify potential risk of bias in a systematic review: a comparison study. BMC Res Notes 2015; 8(1): 224.
- 32. Bails D, Clayton K, Roy K, et al. Implementing online medication reconciliation at a large academic medical center. Jt Comm J Qual Patient Saf 2008; 34(9): 499–508.
- 33. Balasuriya L, Vyles D, Bakerman P, et al. Computerized dose range checking using hard and soft stop alerts reduces prescribing errors in a pediatric intensive care unit. J Patient Saf 2017; 13(3): 144–8.
- 34. Ban VS, Madden CJ, Browning T, et al. A novel use of the discrete templated notes within an electronic health record software to monitor resident supervision. J Am Med Inform Assoc 2017; 24: e2–8.
- 35. Bansal M, Abdallah A-O, Pennisi A, et al. Improving communication on intent of chemotherapy using QOPI scores and PDSA cycles. J Cancer Educ 2016; 31(4): 736–41.
- 36. David R, Luis R, David S, et al. Using electronic medical record features—are hard stops the way to improve documentation? [abstract]. In: The Annual Meeting of the Society for Technology in Anesthesia; 2015 Jan 7–10; Phoenix, AZ.
- 37. Galanter WL, Thambi M, Rosencranz H, et al. Effects of clinical decision support on venous thromboembolism risk assessment, prophylaxis, and prevention at a university teaching hospital. Am J Health Syst Pharm 2010; 67(15): 1265–73.
- 38. Geeting GK, Beck M, Bruno MA, et al. Mandatory assignment of modified Wells score before CT angiography for pulmonary embolism fails to improve utilization or percentage of positive cases. AJR Am J Roentgenol 2016; 207(2): 442–9.
- 39. Griffey RT, Wittels K, Gilboy N, et al. Use of a computerized forcing function improves performance in ordering restraints. Ann Emerg Med 2009; 53(4): 469–76.
- 40. Haut ER, Lau BD, Kraenzlin FS, et al. Improved prophylaxis and decreased rates of preventable harm with the use of a mandatory computerized clinical decision support tool for prophylaxis for venous thromboembolism in trauma. Arch Surg 2012; 147(10): 901–7.
- 41. Heymann AD, Hoch I, Valinsky L, et al. Mandatory computer field for blood pressure measurement improves screening. Fam Pract 2005; 22(2): 168–9.
- 42. Hildreth AN, Enniss T, Martin RS, et al. Surgical intensive care unit mobility is increased after institution of a computerized mobility order set and intensive care unit mobility protocol: a prospective cohort analysis. Am Surg 2010; 76: 818–22.
- 43. Hoo GWS, Wu CC, Vazirani S, et al. Does a clinical decision rule using D-dimer level improve the yield of pulmonary CT angiography? AJR Am J Roentgenol 2011; 196(5): 1059–64.
- 44. Ip IK, Gershanik EF, Schneider LI, et al. Impact of IT-enabled intervention on MRI use for back pain. Am J Med 2014; 127(6): 512–8.e1.
- 45. Khoury L, Dangodara AA, Lee J-A, et al. Implementation of a mandated venous thromboembolism clinical order set improves venous thromboembolism core measures. Hosp Pract 2014; 42(5): 89–99.
- 46. Larson EA, Thompson PA, Anderson ZK, et al. Decreasing the critical value of hemoglobin required for physician notification reduces the rate of blood transfusions. Int J Gen Med 2016; 9: 133–6.
- 47. Lau BD, Haider AH, Streiff MB, et al. Eliminating health care disparities with mandatory clinical decision support: the venous thromboembolism (VTE) example. Med Care 2015; 53(1): 18–24.
- 48. Park CS, Kim T-B, Kim SL, et al. The use of an electronic medical record system for mandatory reporting of drug hypersensitivity reactions has been shown to improve the management of patients in the university hospital in Korea. Pharmacoepidemiol Drug Saf 2008; 17(9): 919–25.
- 49. Saigh O, Triola MM, Link RN. Brief report: failure of an electronic medical record tool to improve pain assessment documentation. J Gen Intern Med 2006; 21(2): 185–8.
- 50. Tajmir S, Raja AS, Ip IK, et al. Impact of clinical decision support on radiography for acute ankle injuries: a randomized trial. West J Emerg Med 2017; 18(3): 487–95.
- 51. Vartanians VM, Sistrom CL, Weilburg JB, et al. Increasing the appropriateness of outpatient imaging: effects of a barrier to ordering low-yield examinations. Radiology 2010; 255(3): 842–9.
- 52. Wetmore DS, Gandhi MNA, Curatolo MC, et al. Simulation to test hard-stop implementation of a pre-anesthetic induction checklist. In: 2014 IEEE 27th International Symposium on Computer-Based Medical Systems; 2014: 564–5. New York, NY.
- 53. Yu KH, Sweidan M, Williamson M, et al. Drug interaction alerts in software—what do general practitioners and pharmacists want? Med J Aust 2011; 195(11-12): 676–80.
- 54. Bisantz AM, Wears RL. Forcing functions: the need for restraint. Ann Emerg Med 2009; 53(4): 477–9.
- 55. Bates DW. Decision support in hospitals: getting the benefits: comment on “unintended effects of a computerized physician order entry nearly hard-stop alert to prevent a drug interaction”. Arch Intern Med 2010; 170(17): 1583–4.
- 56. Page N, Baysari MT, Westbrook JI. A systematic review of the effectiveness of interruptive medication prescribing alerts in hospital CPOE systems to change prescriber behavior and improve patient safety. Int J Med Inform 2017; 105: 22–30.
- 57. Curtis CE, Al Bahar F, Marriott JF. The effectiveness of computerised decision support on antibiotic use in hospitals: a systematic review. PLoS One 2017; 12(8): e0183062.
- 58. Beeler PE, Bates DW, Hug BL. Clinical decision support systems. Swiss Med Wkly 2014; 144: w14073.
- 59. McCoy AB, Thomas EJ, Krousel-Wood M, Sittig DF. Clinical decision support alert appropriateness: a review and proposal for improvement. Ochsner J 2014; 14(2): 195–202.
- 60. Garg AX, Adhikari NKJ, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005; 293(10): 1223–38.
- 61. Borycki E, Dexheimer JW, Hullin Lucay Cossio C, et al. Methods for addressing technology-induced errors: the current state. Yearb Med Inform 2016; 1: 30–40.
- 62. Penfold RB, Zhang F. Use of interrupted time series analysis in evaluating health care quality improvements. Acad Pediatr 2013; 13 (6 Suppl): S38–44.
- 63. Gordon SR. Computing Information Technology: The Human Side. Hershey, PA: Idea Group Inc (IGI); 2003.
- 64. Sockolow P, Dowding D, Randell R, et al. Using mixed methods in health information technology evaluation. Stud Health Technol Inform 2016; 225: 83–7.