Abstract
Objective
Alert fatigue limits the effectiveness of medication safety alerts, a type of computerized clinical decision support (CDS). Researchers have suggested alternative interactive designs, as well as tailoring alerts to clinical roles. As examples, alerts may be tiered to convey risk, and certain alerts may be sent to pharmacists. We aimed to evaluate which variants elicit less alert fatigue.
Materials and Methods
We searched for articles published between 2007 and 2017 using the PubMed, Embase, CINAHL, and Cochrane databases. We included articles documenting peer-reviewed empirical research that described the interactive design of a CDS system, to which clinical role it was presented, and how often prescribers accepted the resultant advice. Next, we compared the acceptance rates of conventional CDS—presenting prescribers with interruptive modal dialogs (ie, “pop-ups”)—with alternative designs, such as role-tailored alerts.
Results
Of 1011 articles returned by the search, we included 39. We found different methods for measuring acceptance rates; these produced incomparable results. The most common type of CDS—in which modals interrupted prescribers—was accepted the least often. Tiering by risk, providing shortcuts for common corrections, requiring a reason to override, and tailoring CDS to match the roles of pharmacists and prescribers were the most common alternatives. Only 1 alternative appeared to increase prescriber acceptance: role tailoring. Possible reasons include the importance of etiquette in delivering advice, the cognitive benefits of delegation, and the difficulties of computing “relevance.”
Conclusions
Alert fatigue may be mitigated by redesigning the interactive behavior of CDS and tailoring CDS to clinical roles. Further research is needed to develop alternative designs, and to standardize measurement methods to enable meta-analyses.
Keywords: alert fatigue, decision support systems, clinical, medical order entry systems, electronic prescribing, decision support techniques
INTRODUCTION
According to the most recent U.S. government reports, 1 in every 20 deaths in the United States has been attributable to an adverse drug event (ADE).1,2 Many ADEs result from erroneous prescriptions.3 By the most conservative estimates, 1 in every 50 prescriptions is inappropriate.3
Clinical decision support (CDS) is intended to reduce prescription error by providing prescribers with automated guidance during computerized order entry.4 Some have held high hopes for CDS, believing that it would significantly reduce prescription errors.5
The reality has proved more complex—CDS can create new patient safety risks. For example, in some instances, “hard stops” have prevented patients from receiving potentially life-saving treatment in time.6 The information technology infrastructures that organizations must install to integrate CDS into the medication-ordering process—often accompanied by changes in workflow and communication patterns—can disrupt work during “roll-out,” as well as in long-term use.7 These disruptions can increase instances of ADEs, which can, in turn, increase patient mortality.7
CDS can also fail to improve patient safety due to alert fatigue.8 Alert fatigue occurs when a high number of irrelevant alerts leads users to habitually override them. It is a term derived from alarm fatigue, which psychologists and human factors researchers have used when studying high false alarm rates in fields such as aviation and nuclear power plant operation.9
Alarm fatigue was once referred to as the “cry-wolf effect” because, much like Aesop’s fable,10 it describes a situation in which people stop responding to false alarms.9 Severe consequences can result from alarm and alert fatigue conditions. For example, a 1997 plane crash was attributed to alarm fatigue—the control tower operators had disabled a minimum safe altitude alarm due to its frequent false alarms.11 Similarly, the patient safety goals of CDS can be compromised by alert fatigue.
Some researchers have focused on increasing alert sensitivity and specificity by modifying CDS rulesets.12,13 The results have been mixed. It is often difficult to justify disabling alerts due to safety concerns or pressure from patient safety groups (eg, Leapfrog).14,15
Psychologists and human factors researchers have developed strategies to reduce alarm fatigue via interaction design—the design of the way the “dialogue” unfolds between the human user and the computer. Some have applied these strategies in CDS, with promising results. For example, tiered alarms9 indicate the likelihood or severity of an adverse event, and they seem to have been well received in CDS.16 As another example, “patient” alarms—those that avoid distracting airplane pilots when they are busy—may be accepted more often than “impatient” alarms.17 Similarly, in CDS, researchers have implemented alerts that avoided requiring attention at a particular time—again, with some success.18
To address whether the interactive design of CDS affects clinical alert fatigue in the aggregate, we conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses19 guidelines. Existing systematic reviews have tended to focus on prescriber performance and patient outcomes rather than alert acceptance.20–25 We found only 3 published reviews that addressed interactive design.8,26,27 In 2006, van der Sijs et al8 conducted a conceptual analysis, noting that they had identified only 9 studies that reported override rates. Subsequent reviews by Horsky et al26 and Miller et al27 deferred to prior authors’ assessments of effectiveness. These assessments were based on a variety of factors, ranging from provider usability and satisfaction to patient morbidity and mortality. In this review, we continue this line of inquiry by placing alert fatigue at the center of the analysis and specifically examining the relationship between interactive designs and prescriber acceptance rates.
Defining acceptance
We defined acceptance—our main outcome—as a change to a prescription based on computerized advice. This definition excluded “intention to monitor” and “acknowledgment”—explanations of these concepts follow.
Some CDS alerts have allowed prescribers to select “intention to monitor” as an override justification, and some researchers have counted this justification-selection as evidence of “acceptance.” However, Slight et al28 found evidence of monitoring in only 36% of instances in which the prescriber indicated an intention to monitor.
Many CDS alerts have been presented as modal dialogs (also known as pop-ups), and some of these have provided a button that indicates “acknowledgment,” but which takes no action. Some researchers have considered a click of this button to count as “acceptance”—but this, too, may rely on an incorrect assumption. Under alert fatigue, modal dialogs become obstacles, and “acknowledgment” buttons become the work-around.29
Additionally, in this review, we paid attention to the clinical role of the recipient of the automated guidance, eg, a prescriber or a pharmacist. Other authors8,30 have identified that delivering the right guidance to the right recipient is crucial to the acceptance of the alert.
MATERIALS AND METHODS
We conducted a systematic review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses19 model. We started by searching the PubMed, Embase, CINAHL, and Cochrane literature databases. The search terms we used are shown in Table 1. We identified articles published between 2007 and 2017. Two of the authors (T.L.R., M.I.H.) screened the search results, extracting relevant details (interactive features, clinical roles, acceptance rates, and methods) from included studies for analysis. T.L.R. and M.I.H. met often to ensure consistency.
Table 1.
Search query structure
| Decision support… | Advisories… | Acceptance Rates… | Time frame |
|---|---|---|---|
| (“allergy” OR “computer-assisted” OR “computerised” OR “computerized” OR “cpoe” OR “decision support” OR “drug interaction” OR “drug-drug interaction” OR “electronic prescribing” OR “expert system” OR “order check” OR “order checks” OR “order entry” OR “prescribing” OR “prescription” OR “rules based”) | AND (“alert” OR “alerts” OR “alerting” OR “alarm” OR “message” OR “messages” OR “prompt” OR “prompts” OR “reminder” OR “warning” OR “warnings”) | AND (“alert fatigue” OR “alarm fatigue” OR “distraction” OR “error” OR “errors” OR “override” OR “overridden” OR “overrode” OR “guideline adherence” OR “non-adherence” OR “practice patterns” OR “practise patterns” OR “problem” OR “problems” OR “usability”) | AND (published between 4 Oct 2007 and 4 Oct 2017) |
Eligibility criteria
We included peer-reviewed, English-language articles reporting empirical studies about CDS for medication safety. We included articles that documented acceptance rates, as defined in the Introduction, or enough information to calculate an acceptance rate. When we found more than 1 article documenting the same CDS and setting, we retained the more thorough version.
While screening, we used the following additional criteria. First, as our goal was to understand how prescribers acting of their own free will responded to different interventions, we excluded “hard stops,” which impose heavy time penalties to override, and which therefore materially restrict the prescriber’s range of action. Readers interested in an analysis of hard stops should refer to a 2018 systematic review by Powers et al.6 Second, we excluded articles that did not describe the interactive design in enough detail to produce a description. Third, we excluded articles in which researchers made global changes to an alerting system, but only reported acceptance rates for those alerts intended to convey the most urgency, for certain drug categories, or for a selected subset of users exposed to the alert; some of these authors may have chosen to report only the most palatable results. If, on the other hand, the researchers set out to improve acceptance of a certain type of alert, like antibiotic stewardship or renal dosing, then reporting the acceptance rate for only those alerts was considered appropriate for our analysis.
Data extraction process
For included articles, we extracted interactive features, the clinical role that received CDS, measurement methods, acceptance rates, and rates of override appropriateness. For articles documenting time series trials of incremental changes to the CDS ruleset, we extracted the last recorded result. If an article reported more than 1 intervention—for example, if the authors compared plain modal dialogs with dialogs that provided additional context31—we extracted results from each intervention separately. When an acceptance rate was not directly given, we used the equations provided by McCoy et al32 to derive an acceptance rate.
The same 2 authors (T.L.R., M.I.H.) split this data extraction workload evenly and checked one another’s work. Disagreements about inclusion were resolved in person, by reaching consensus on the interpretation of the inclusion criteria.
Data analysis and synthesis of results
We coded features and measurement methods as short descriptions (eg, “tiered modal dialog presented to the prescriber,” “counted dialog button-clicks”). We sorted these descriptions into categories as commonalities emerged.
We also paid attention to the methods used to construct acceptance rates. In this article, we refer to the 2 main methods as in-dialog action analysis and event analysis.
In-dialog action analysis is only applicable when the CDS intervention takes the form of a dialog that features a button that the prescriber can click to modify or discard their order (eg, “Discard Warfarin Order”). Researchers count the number of times the “acceptance” button was clicked, and divide that count by the total number of dialogs that appeared.
Event analysis may be applied to any form of CDS, including dialogs. When conducting an event analysis, researchers search the patient chart for evidence that the prescriber accepted advice, in addition to any changes that prescribers may have made by clicking buttons inside CDS dialogs. For example, a prescriber might dismiss a modal dialog warning against a warfarin order, and then reduce the dose later. Or, a pharmacist might receive an alert from a CDS system, and counsel the prescriber by phone—in which case the researchers must check to see if the prescriber made a change to the chart.
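The difference between the 2 measurement methods can be sketched in a few lines of code. This is a hypothetical illustration only: the alert records, field names, and resulting rates below are invented for the example, not drawn from any included study.

```python
# Hypothetical alert log: for each alert, did the prescriber click the
# "accept" button inside the dialog, and does the patient chart later
# show a corrective action (eg, a dose reduction)?
alerts = [
    {"accepted_in_dialog": True,  "chart_correction": True},   # accepted via button
    {"accepted_in_dialog": False, "chart_correction": True},   # dismissed; dose reduced later
    {"accepted_in_dialog": False, "chart_correction": False},  # overridden outright
    {"accepted_in_dialog": False, "chart_correction": False},  # overridden outright
]

def in_dialog_rate(alerts):
    """In-dialog action analysis: count only 'accept' clicks inside the dialog,
    divided by the total number of dialogs shown."""
    return sum(a["accepted_in_dialog"] for a in alerts) / len(alerts)

def event_rate(alerts):
    """Event analysis: count an alert as accepted if the button was clicked
    OR the chart shows a subsequent corrective action."""
    return sum(a["accepted_in_dialog"] or a["chart_correction"] for a in alerts) / len(alerts)

print(in_dialog_rate(alerts))  # 0.25
print(event_rate(alerts))      # 0.5
```

As the sketch shows, event analysis can only ever match or exceed the in-dialog rate on the same log, since it counts a superset of the same acceptances.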
We plotted the frequencies of measurement methods by publication year to examine their popularity over time. For those studies that used more than 1 measurement method, we compared the results of those measurement methods. We also plotted the frequencies of interactive and role-tailoring features reported over time, to identify trends.
Next, we used a t test to compare acceptance rates between CDS systems by interactive design and clinical role-tailoring. In addition, we constructed a plot to holistically examine prescribers’ acceptance rates by feature.
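As a hypothetical sketch of such a comparison, the following computes a two-sample t statistic over 2 groups of acceptance rates. The rates are invented for illustration, and we show Welch's unequal-variance form, which is an assumption; the text does not specify which variant was used.

```python
import math
from statistics import mean, variance

def welch_t(xs, ys):
    """Welch's t statistic for two independent samples with unequal variances."""
    nx, ny = len(xs), len(ys)
    vx, vy = variance(xs), variance(ys)  # sample variances (n - 1 denominator)
    return (mean(xs) - mean(ys)) / math.sqrt(vx / nx + vy / ny)

# Invented acceptance rates (%) for the two groups being compared:
modal_rates       = [22, 35, 41, 48, 30, 39]  # prescriber-interruptive modals
alternative_rates = [55, 62, 70, 58, 66]      # role-tailored / modeless alternatives

t = welch_t(modal_rates, alternative_rates)
print(round(t, 2))  # negative: the modal group has the lower mean acceptance rate
```

A negative t statistic here simply reflects that the first group's mean is lower; a p-value would then be obtained from the t distribution with the Welch–Satterthwaite degrees of freedom.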
RESULTS
Study selection
As shown in Figure 1, we initially identified 2699 records by querying the literature databases. After removing duplicates, screening titles and abstracts, and examining full texts to determine eligibility, we determined that 39 articles met our inclusion criteria. Extracting results from these articles yielded 42 different interventions, since 3 articles reported 2 interventions each.
Figure 1.
Preferred Reporting Items for Systematic Reviews and Meta-Analyses19 flow diagram.
Study characteristics
The study characteristics are shown in Figure 2. Twenty-four (61%) of the 39 included articles reported studies conducted in the United States, and 3 (8%) reported studies from Taiwan. There were 2 studies from Switzerland, 2 from the Netherlands, and 1 from each of the following: the United Kingdom, China, Canada, and Belgium. Nine of the 24 (38%) studies conducted in the United States were conducted in Harvard-affiliated institutions.
Figure 2.
Study characteristics. ED: emergency department; EHR: electronic health record.
Seventeen (44%) studies were conducted in inpatient settings only, 12 (31%) were conducted in outpatient settings only, 6 (15%) studied both inpatient and outpatient settings, and the remaining 4 (10%) studies were conducted in the emergency department, in the emergency department and outpatient settings, or in an unspecified setting. Twenty-five of the 39 (64%) included articles studied academic healthcare settings, 12 (31%) studied nonacademic settings, and the remaining 2 (5%) studied both settings.
Twenty-five of the 39 (64%) included articles documented an electronic health record (EHR)–integrated CDS, 9 (23%) documented a standalone CDS, and 5 (13%) did not specify whether the CDS was integrated into an EHR. Three of the 39 included articles (8%) reported more than 1 intervention31,33,34; each intervention was treated as a separate study.
Twenty-four (61%) of the articles solely studied physician behavior. Ten (26%) studied both physician and nurse practitioner behavior, and 5 (13%) did not specify the clinical roles that were studied.
Trends in measuring acceptance
As mentioned in the Materials and Methods, we analyzed the methods that researchers used to construct acceptance rates. The number of studies that conducted in-dialog action analyses (n = 23) was approximately equal to the number that conducted event analyses (n = 22). Eight of the studies in our analysis—all between 201232 and 2017—conducted a review of appropriateness, either of the CDS alerts or of overriding behavior, using the method described by Weingart et al35 in 2003.
Three articles contained measurements of prescriber acceptance using both in-dialog action analysis and event analysis. Woods et al36 arrived at an acceptance rate of 26% using in-dialog action analysis, and an acceptance rate of 41% using event analysis. Slight et al37 arrived at acceptance rates of 40% and 66%, using in-dialog action analysis and event analysis, respectively; McCoy et al32 arrived at acceptance rates of 18% and 47%, respectively. On average, event analyses yielded acceptance rates nearly twice as high (194%) as in-dialog action analyses.
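Using the 3 pairs of rates above, the near-doubling can be reproduced as a mean of per-study ratios. This is a sketch; that the 194% figure was computed exactly this way is our assumption.

```python
# Pairs of (in-dialog, event) acceptance rates, in %, from the three
# studies that reported both measurement methods.
pairs = [
    (26, 41),  # Woods et al
    (40, 66),  # Slight et al
    (18, 47),  # McCoy et al
]

ratios = [event / in_dialog for in_dialog, event in pairs]
mean_ratio = sum(ratios) / len(ratios)
print(round(mean_ratio, 2))  # 1.95, ie, event analysis yields roughly twice the rate
```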
Trends in CDS interventions
Features present in 4 or more included studies are plotted cumulatively, over time, in Figure 3. Three of the most common interactive features—tiering alerts, providing shortcuts for common corrective actions, and requiring a reason to override—are described and illustrated in Table 2.
Figure 3.
Feature prevalence over time. “Pharmacists Received CDS” is a subcategory of “No Modals Interrupted Prescribers.” All others are subcategories of “Modals Interrupted Prescribers.” CDS: clinical decision support.
Table 2.
Common interactive features
| Name and description | Sample design |
|---|---|
| Tiered alerts: present an indication of the risks associated with an override. In some cases, higher-priority alerts are modal dialogs, while lower-priority alerts are modeless. | (illustration omitted) |
| Action shortcuts: modal dialogs provide the ability to perform common corrections. For example, one might wish to reduce the dose, or substitute another medication, rather than discard an order altogether. | (illustration omitted) |
| Override reason required: modal dialogs mandate that the prescriber provide a justification prior to dismissal. Justifications may be solicited with a pick-list, a free-text field, or both. | (illustration omitted) |
The most commonly reported type of CDS—which comprised 83% of results—interrupted prescribers with modal dialogs. The most common variants were tiered to convey levels of risk, provided shortcuts for common corrections, or required a reason to override.
We also found advisories that were not automatically issued using computerized systems. These included fax or mail alerts, and interactive designs in which a user manually retrieved a list of alerts38 or manually triggered a battery of modal dialogs.33 Only 1 article documented a design that allowed the user to dismiss a modal, and then retrieve it later for reference, rather than memorizing the contents of the alert.38 A list of all designs for presenting CDS is available in the Supplementary Appendix.
CDS acceptance by feature
For the analysis of feature acceptance, we included the 22 studies that used event analysis. Of those studies, 15 (68%) were based on CDS systems that interrupted prescribers with modal dialogs. Among the 7 alternatives, 4 (18%) presented alerts pertaining to areas such as antimicrobial stewardship or renal dosing to pharmacists,39–42 2 (9%) delivered fax or mail alerts to prescribers,43,44 and 1 (4.5%) depended on the prescriber to manually trigger a review process.38
We compared those interventions that interrupted prescribers with modal dialogs with all other interventions. The group of alternative interventions included any alerts that were sent to the pharmacist instead of the prescriber, as well as any alerts that were sent to the prescriber but were not modal dialogs. Using a t test, we found that prescriber-interrupting modals were accepted significantly less often, as predicted (38.67% v. 61.57%; P = .026). The acceptance rate distributions are shown in Figure 4.
Figure 4.
Boxplot comparing how often prescribers accepted advice directly from interruptive modal dialogs vs alternatives.
Our plot of acceptance rates by CDS feature is shown in Figure 5. In that figure, CDSs with multiple features appear on multiple lines. For example, a CDS that interrupted prescribers with tiered modal dialogs will appear twice in the figure, once on the “Modals Interrupted Prescribers” line, and once on the “Alerts Tiered to Convey Risk” line.
Figure 5.

Prescribers’ acceptance rates for clinical decision support (CDS) advice, by feature, measured using event analysis. CDSs with multiple features appear on multiple lines.
Visual inspection suggested that prescribers accepted advice from CDS-guided pharmacists more frequently and with less variability than they accepted advice when interrupted by modal dialogs.
DISCUSSION
In this systematic review, we found that modal dialogs that interrupt prescribers have become the least accepted—yet the most prevalent—design. In this section, we analyze possible reasons for this observation. Afterward, we discuss some methodological dilemmas faced in CDS research. Some of these have been a matter of methodological inconsistency—and they presented a practical barrier to meta-analysis. Finally, we conclude this section with our recommendations to improve the quality of CDS design and research.
Reasons why prescriber-interruptive modals seem to elicit alert fatigue
When we compared prescriber-interruptive modal dialogs with alternatives, we found evidence favoring the alternatives—in particular, those that tailored CDS to the roles of pharmacists and physicians. We believe there are 3 explanations for this finding. The first concerns etiquette—“proper” etiquette often makes advice easier to receive. The second concerns the division of expert labor between prescribers and pharmacists. The third concerns relevance, which comes naturally to humans, but which remains difficult to compute.
Etiquette. In the Introduction, we mentioned that psychologists and human factors researchers tend to endorse presenting guidance “politely”—even in emergencies.17 Prescribers might have accepted pharmacists’ advice so readily because those pharmacists produced behavioral patterns culturally recognized as “polite.” In some cases, it is appropriate to carefully design and program computers to produce similar “behavior” to solicit the user’s reciprocity.45 Some of the modal dialogs that we saw, which featured large, capitalized red text, and which required several clicks and keystrokes to dismiss, might have been perceived as patronizing rather than polite. In our review, attempts to imitate “politeness” in CDS were rare.
Division of expert labor. It has been common practice for prescribers to consult pharmacists about the appropriateness of particular medications for patient cases.46 Presenting certain medication-related CDS to the pharmacist—such as those concerning antibiotic targeting and renal dosing—therefore may support (rather than disrupt) an established clinical practice. This division of expert labor might have a hidden advantage: Sheltering prescribers from most of the details of pharmacy review might allow prescribers to focus more of their attention on the key details of clinical cases, so that they may think more clearly.47 We understand this may not be feasible in certain cases until regulatory barriers are changed.
Relevance. Prescribers may have found pharmacist-mediated CDS alerts highly acceptable because pharmacists filtered out irrelevant advice. Whether computers might, someday, handle relevance and context as capably as humans has been a matter of debate.48–50 Indeed, CDS does not seem to perform at the same level of precision and relevance as the humans they advise.51
The prevalence of prescriber-interruptive modal dialogs in the literature might be due to overly narrow definitions of “decision support” by certain patient advocacy groups15 or it may be due to actual prevalence in clinics. Additionally, EHR homogenization52 may have determined which types of decision support have been convenient for clinical institutions to implement, and which have been expensive, at scale.
CDS homogeneity presented 1 of several barriers to meta-analysis. The other barriers were primarily due to methodological inconsistencies in the literature. Next, we discuss methodological issues.
Mediation analysis may address methodological dilemmas
As mentioned in the Results, we found that researchers had been using 2 main ways to measure how often a prescriber accepted computer-generated advice: in-dialog action analysis and event analysis. Some studies explicitly conducted comparative analyses of the validity of the 2 methods.32,36,37
As previously mentioned, when using in-dialog action analysis, the researchers dichotomize the actions taken inside a modal dialog: The prescriber either accepts the alert (eg, by clicking “Discard Order”) or overrides it (eg, by clicking “Proceed Anyway”). We note 3 problems with this method’s validity. First, those clicks provide a rather partial story of the order—for example, they do not account for possible corrections that the prescriber may take after responding to the alert. This is related to the second problem: Applying in-dialog action analysis to modals that feature action shortcuts may artificially inflate acceptance rates with respect to other modal dialogs, because more actions that would otherwise take place outside the dialog would instead take place inside the dialog. Third, in-dialog action analysis cannot be used with CDS interventions that do not offer decision-buttons to prescribers—these interventions must be studied with event analysis.
When using event analysis, the researcher additionally searches for corrective actions that the prescriber made after dismissing any alert, including a modal dialog. For example, the prescriber may change a dose, or switch to a narrow-spectrum antibiotic, after dismissing a modal dialog. These adjustments are taken as evidence of acceptance. The main problem with this method’s validity is that there is no way to know whether the prescriber would have taken the same action had the intervention not been delivered.
One might expect that in-dialog action analysis errs on the side of specificity (it systematically fails to recognize corrective actions), while event analysis errs on the side of sensitivity (it may misattribute some corrections to interventions). The evidence we gathered from the 3 studies that compared these 2 methods32,36,37 suggests that this intuition is correct. These methods are biased in the traditional sense: they produce results that predictably depart from those that one would expect from the most accurate instrument imaginable.
Despite their limitations, we believe these methods to be valuable, as they seem to be the most cost-effective ways to capture data for CDS acceptance. However, we must caution that these methods produce results that are noncomparable. We suggest using event analysis, to enable rigorous comparisons between modal and modeless forms of CDS.
Appropriateness panel reviews32,35 were rare. We imagine these reviews to be particularly costly. Indeed, half of the included articles that reported an appropriateness review were from well-resourced academic institutions.
In fact, the scarcity of information that was useful to our review was surprising given the quantity of available CDS literature. Nearly 9 in 10 of the articles we excluded did not report prescriber acceptance—an important mediating variable between the technological intervention and patient outcomes whose value seems often to have been taken for granted. Earlier, we described a homogeneity of CDS interventions in the reported literature—specifically, prescriber-interruptive modal dialogs comprised 5 in 6 included results. This also constrained the analyses that could be conducted with adequate statistical power. Finally, a roughly 50-50 split between 2 incomparable measurement methods precluded meta-analysis.
This review revealed several issues in the literature. Future work is needed to develop standardized, low-cost, informative measures for determining acceptance for CDS, and for relating CDS acceptance to patient outcomes. Next, we present some feasible recommendations for improving the quality of the CDS literature.
Recommendations for future work
Given the preceding discussion, we propose the following 3 recommendations for future CDS research:
First, we recommend that researchers consider alternatives to prescriber-interruptive modal dialogs, since there is evidence that the latter suffers from relatively lower acceptance. Role-based tailoring appeared to improve acceptance rates, and further work is needed in this area. Ideally, those who will receive the alerts should be involved in role-tailoring decisions. Alternatives to modal dialogs should also be explored.
Second, we recommend measuring acceptance rates using event analysis, rather than in-dialog action analysis. Because event analysis is more widely applicable, using it will enable meta-analyses that accommodate varied CDS interventions.
Last, we recommend reporting both acceptance rates and patient outcomes. Much of the literature that we saw in our review reported one or the other; few reported both. This has made it difficult to analyze patient outcomes as a function of CDS design and role-tailoring, mediated53 by acceptance.
CONCLUSION
Alert fatigue remains a persistent challenge in CDS. Among prescriber-interruptive modal dialogs, acceptance rates have been highly variable. In our analysis, prescribers accepted alternative interventions more often—especially those which tailored CDS to the areas of expertise associated with clinical roles. Although there are plausible reasons why some alternative CDS interventions would improve acceptance, contemporary literature has not supported detailed analyses. We recommend that future studies pay more attention to alternative designs, measure acceptance using event analysis, and report patient outcomes as well as acceptance rates.
Supplementary Material
ACKNOWLEDGMENTS
The authors would like to thank the Editors and Reviewers for their careful consideration of this work.
FUNDING
The project described was supported in part by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through Grant (UL1 TR001414). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The project was also supported in part by Academic Senate Council on Research, Computing and Libraries (CORCL), and by the U.S. Department of Education (DoE), Graduate Assistance in Areas of National Need (GAANN).
Author CONTRIBUTIONS
Mustafa I. Hussain provided substantial contributions to conception, design, data acquisition, analysis, interpretation, and drafting. Tera L. Reynolds provided substantial contributions to the acquisition and analysis of data for the work, as well as critical revisions for important intellectual content. Kai Zheng provided substantial contributions to the conception of the work, and the interpretation of data for the work, as well as critical revisions for important intellectual content. All authors provided final approval of the version to be published, and have agreed to be accountable for all aspects of the work.
SUPPLEMENTARY MATERIAL
Supplementary material is available at Journal of the American Medical Informatics Association online.
CONFLICT OF INTEREST STATEMENT
None declared.
REFERENCES
- 1. Califf R. FAERS Reporting by Patient Outcomes by Year. Silver Spring, MD: Food and Drug Administration; 2015.
- 2. Kochanek K, Murphy SL, Xu J, et al. Deaths: Final Data for 2014. Atlanta, GA: Centers for Disease Control and Prevention; 2016.
- 3. Assiri GA, Shebl NA, Mahmoud MA, et al. What is the epidemiology of medication errors, error-related adverse events and risk factors for errors in adults managed in community care contexts? A systematic review of the international literature. BMJ Open 2018; 8 (5): e019101.
- 4. Berner ES, La Lande TJ. Overview of clinical decision support systems. In: Berner ES, ed. Clinical Decision Support Systems. New York, NY: Springer; 2016: 3–22.
- 5. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005; 293 (10): 1197–203.
- 6. Powers EM, Shiffman RN, Melnick ER, et al. Efficacy and unintended consequences of hard-stop alerts in electronic health record systems: a systematic review. J Am Med Inform Assoc 2018; 25 (11): 1556–66.
- 7. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 2005; 116 (6): 1506–12.
- 8. van der Sijs H, Aarts J, Vulto A, et al. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006; 13: 138–47.
- 9. Wickens CD, Hollands JG, Banbury S, et al. Alarm and alert systems. In: Wickens CD, Hollands JG, eds. Engineering Psychology and Human Performance. 4th ed. Upper Saddle River, NJ: Pearson Education; 2013: 23–5.
- 10. Aesop. The shepherd’s boy. In: Croxall S, ed. Fables of Aesop, and Others: With Instructive Applications. New York, NY: World Publishing House; 1877: 263.
- 11. Kowalczyk L. Alarm fatigue linked to patient death. Boston Globe. 2010. http://archive.boston.com/news/local/massachusetts/articles/2010/04/03/alarm_fatigue_linked_to_heart_patients_death_at_mass_general/. Accessed March 6, 2019.
- 12. Bryant AD, Fletcher GS, Payne TH. Drug interaction alert override rates in the Meaningful Use era: no evidence of progress. Appl Clin Inform 2014; 5 (3): 802–13.
- 13. McCoy AB, Thomas EJ, Krousel-Wood M, et al. Turning off medication alerts to reduce clinical decision support overrides. Podium presentation. AMIA Annu Symp Proc 2017; 2017: 141–2.
- 14. van der Sijs H, Aarts J, van Gelder T, et al. Turning off frequently overridden drug alerts: limited opportunities for doing it safely. J Am Med Inform Assoc 2008; 15 (4): 439–48.
- 15. Thompson CA. Leapfrog Group wants hospitals to monitor, not just implement, CPOE systems. Am J Health Syst Pharm 2010; 67 (16): 1310–1.
- 16. Paterno MD, Maviglia SM, Gorman PN, et al. Tiering drug-drug interaction alerts by severity increases compliance rates. J Am Med Inform Assoc 2009; 16 (1): 40–6.
- 17. Parasuraman R, Miller CA. Trust and etiquette in high-criticality automated systems. Commun ACM 2004; 47 (4): 51–5.
- 18. Green LA, Nease D, Klinkman MS. Clinical reminders designed and implemented using cognitive and organizational science principles decrease reminder fatigue. J Am Board Fam Med 2015; 28 (3): 351–9.
- 19. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009; 339: b2535.
- 20. Garg AX, Adhikari NKJ, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005; 293 (10): 1223–38.
- 21. Jaspers MWM, Smeulers M, Vermeulen H, et al. Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings. J Am Med Inform Assoc 2011; 18 (3): 327–34.
- 22. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety. Arch Intern Med 2003; 163 (12): 1409–16.
- 23. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005; 330 (7494): 765.
- 24. Brown CL, Mulcaster HL, Triffitt KL. A systematic review of the types and causes of prescribing errors generated from using computerized provider order entry systems in primary and secondary care. J Am Med Inform Assoc 2016; 24 (2): 432–40.
- 25. Nabovati E, Vakili-Arki H, Taherzadeh Z, et al. Information technology-based interventions to improve drug-drug interaction outcomes: a systematic review on features and effects. J Med Syst 2017; 41 (1): 1–12.
- 26. Horsky J, Phansalkar S, Desai A, et al. Design of decision support interventions for medication prescribing. Int J Med Inf 2013; 82 (6): 492–503.
- 27. Miller K, Mosby D, Capan M, et al. Interface, information, interaction: a narrative review of design and functional requirements for clinical decision support. J Am Med Inform Assoc 2018; 25 (5): 585–92.
- 28. Slight SP, Seger DL, Nanji KC, et al. Are we heeding the warning signs? Examining providers’ overrides of computerized drug-drug interaction alerts in primary care. PLoS One 2013; 8: e85071.
- 29. Zheng K, Hanauer DA, Padman R, et al. Handling anticipated exceptions in clinical care: investigating clinician use of ‘exit strategies’ in an electronic health records system. J Am Med Inform Assoc 2011; 18 (6): 883–9.
- 30. Campbell R. The five rights of clinical decision support: CDS tools helpful for meeting meaningful use. J AHIMA 2013; 85: 42–7; quiz 48.
- 31. Duke JD, Li X, Dexter P. Adherence to drug-drug interaction alerts in high-risk patients: a trial of context-enhanced alerting. J Am Med Inform Assoc 2013; 20 (3): 494–8.
- 32. McCoy AB, Waitman LR, Lewis JB, et al. A framework for evaluating the appropriateness of clinical decision support alerts and responses. J Am Med Inform Assoc 2012; 19 (3): 346–52.
- 33. Tamblyn R, Huang A, Taylor L, et al. A randomized trial of the effectiveness of on-demand versus computer-triggered drug decision support in primary care. J Am Med Inform Assoc 2008; 15 (4): 430–8.
- 34. Cornu P, Steurbaut S, Gentens K, et al. Pilot evaluation of an optimized context-specific drug–drug interaction alerting system: a controlled pre-post study. Int J Med Inf 2015; 84 (9): 617–29.
- 35. Weingart SN, Toth M, Sands DZ, et al. Physicians’ decisions to override computerized drug alerts in primary care. Arch Intern Med 2003; 163 (21): 2625–31.
- 36. Woods AD, Mulherin DP, Flynn AJ, et al. Clinical decision support for atypical orders: detection and warning of atypical medication orders submitted to a computerized provider order entry system. J Am Med Inform Assoc 2014; 21 (3): 569–73.
- 37. Slight SP, Beeler PE, Seger DL, et al. A cross-sectional observational study of high override rates of drug allergy alerts in inpatient and outpatient settings, and opportunities for improvement. BMJ Qual Saf 2017; 26 (3): 217–25.
- 38. Zhang Y, Long X, Chen W, et al. A concise drug alerting rule set for Chinese hospitals and its application in computerized physician order entry (CPOE). Springerplus 2016; 5: 2067.
- 39. Smith T, Philmon C, Johnson G, et al. Antimicrobial stewardship in a community hospital: attacking the more difficult problems. Hosp Pharm 2014; 49 (9): 839–46.
- 40. Fritz D, Ceschi A, Curkovic I, et al. Comparative evaluation of three clinical decision support systems: prospective screening for medication errors in 100 medical inpatients. Eur J Clin Pharmacol 2012; 68 (8): 1209–19.
- 41. Niedrig D, Krattinger R, Jödicke A, et al. Development, implementation and outcome analysis of semi-automated alerts for metformin dose adjustment in hospitalized patients with renal impairment. Pharmacoepidemiol Drug Saf 2016; 25 (10): 1204–9.
- 42. Joosten H, Drion I, Boogerd KJ, et al. Optimising drug prescribing and dispensing in subjects at risk for drug errors due to renal impairment: improving drug safety in primary healthcare by low eGFR alerts. BMJ Open 2013; 3: e002068.
- 43. Armstrong EP, Wang SM, Hines LE, et al. Evaluation of a drug-drug interaction: fax alert intervention program. BMC Med Inform Decis Mak 2013; 13: 32.
- 44. Feifer RA, James JM. Geographic variation in drug safety: potentially unsafe prescribing of medications and prescriber responsiveness to safety alerts. J Manag Care Pharm 2010; 16: 196–205.
- 45. Waddell TF, Zhang B, Sundar SS. Human–computer interaction. Int Encycl Interpers Commun 2016; 1–9.
- 46. Willig SH. Legal considerations for the pharmacist undertaking new drug consultation responsibilities. Food Drug Cosmet Law J 1970; 25: 444.
- 47. Cicourel AV. Cognitive overload and communication in two healthcare settings. Commun Med 2004; 1 (1): 35–44.
- 48. Grice HP. Logic and conversation. In: Ezcurdia M, Stainton RJ, eds. The Semantics-Pragmatics Boundary in Philosophy. Peterborough, Canada: Broadview Press; 2013: 47–59.
- 49. Searle JR. Is the brain’s mind a computer program? Sci Am 1990; 262 (1): 26–31.
- 50. Dreyfus H. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press; 1998.
- 51. Strom BL, Schinnar R, Aberra F, et al. Unintended effects of a computerized physician order entry nearly hard-stop alert to prevent a drug interaction. Arch Intern Med 2010; 170: 1578–83.
- 52. Koppel R, Lehmann CU. Implications of an emerging EHR monoculture for hospitals and healthcare systems. J Am Med Inform Assoc 2015; 22 (2): 465–71.
- 53. MacKinnon DP, Fairchild AJ, Fritz MS. Mediation analysis. Annu Rev Psychol 2007; 58: 593–614.
- 54. Beeler PE, Orav EJ, Seger DL, et al. Provider variation in responses to warnings: do the same providers run stop signs repeatedly? J Am Med Inform Assoc 2016; 23 (e1): e93–8.
- 55. Bell GC, Crews KR, Wilkinson MR, et al. Development and use of active clinical decision support for preemptive pharmacogenomics. J Am Med Inform Assoc 2014; 21 (e1): e93–9.
- 56. Cho I, Slight SP, Nanji KC, et al. Understanding physicians’ behavior toward alerts about nephrotoxic medications in outpatients: a cross-sectional analysis. BMC Nephrol 2014; 15: 1–9.
- 57. Cho I, Slight SP, Nanji KC, et al. The effect of provider characteristics on the responses to medication-related decision support alerts. Int J Med Inf 2015; 84 (9): 630–9.
- 58. Galanter W, Liu X, Lambert BL. Analysis of computer alerts suggesting oral medication use during computerized order entry of I.V. medications. Am J Health Syst Pharm 2010; 67 (13): 1101–5.
- 59. Genco EK, Forster JE, Flaten H, et al. Clinically inconsequential alerts: the characteristics of opioid drug alerts and their utility in preventing adverse drug events in the emergency department. Ann Emerg Med 2016; 67 (2): 240–8.e3.
- 60. Hsu M-H, Yeh Y-T, Chen C-Y, et al. Online detection of potential duplicate medications and changes of physician behavior for outpatients visiting multiple hospitals using national health insurance smart cards in Taiwan. Int J Med Inf 2011; 80 (3): 181–9.
- 61. Isaac T, Weissman J, Davis R, et al. Overrides of medication alerts in ambulatory care. Arch Intern Med 2009; 169 (3): 305–11.
- 62. Jani YH, Barber N, Wong I. Characteristics of clinical decision support alert overrides in an electronic prescribing system at a tertiary care paediatric hospital. Int J Pharm Pract 2011; 19 (5): 363–6.
- 63. Knight AM, Falade O, Maygers J, et al. Factors associated with medication warning acceptance for hospitalized adults. J Hosp Med 2015; 10 (1): 19–25.
- 64. Long A-J, Chang P, Li Y-C, et al. The use of a CPOE log for the analysis of physicians’ behavior when responding to drug-duplication reminders. Int J Med Inf 2008; 77 (8): 499–506.
- 65. Nanji KC, Slight SP, Seger DL, et al. Overrides of medication-related clinical decision support alerts in outpatients. J Am Med Inform Assoc 2014; 21 (3): 487–91.
- 66. Nishimura AA, Shirts BH, Salama J, et al. Physician perspectives of CYP2C19 and clopidogrel drug-gene interaction active clinical decision support alerts. Int J Med Inf 2016; 86: 117–25.
- 67. Perlman SL, Fabrizio L, Shaha SH, et al. Response to medication dosing alerts for pediatric inpatients using a computerized provider order entry system. Appl Clin Inform 2011; 2: 522–33.
- 68. Saxena K, Lung B, Becker J. Improving patient safety by modifying provider ordering behavior using alerts (CDSS) in CPOE system. AMIA Annu Symp Proc 2011; 2011: 1207–16.
- 69. Scharnweber C, Lau BD, Mollenkopf N, et al. Evaluation of medication dose alerts in pediatric inpatients. Int J Med Inf 2013; 82 (8): 676–83.
- 70. Sethuraman U, Kannikeswaran N, Murray KP, et al. Prescription errors before and after introduction of electronic medication alert system in a pediatric emergency department. Acad Emerg Med 2015; 22 (6): 714–9.
- 71. Simpao AF, Ahumada LM, Desai BR, et al. Optimization of drug-drug interaction alert rules in a pediatric hospital’s electronic health record system using a visual analytics dashboard. J Am Med Inform Assoc 2015; 22: 361–9.
- 72. Topaz M, Seger DL, Kenneth L, et al. High override rate for opioid drug-allergy interaction alerts: current trends and recommendations for future. Stud Health Technol Inform 2015; 216: 242–6.
- 73. van der Sijs H, van Gelder T, Vulto A, et al. Understanding handling of drug safety alerts: a simulation study. Int J Med Inf 2010; 79 (5): 361–9.
- 74. Weingart SN, Zhu J, Young-Hong J, et al. Do drug interaction alerts between a chemotherapy order-entry system and an electronic medical record affect clinician behavior? J Oncol Pharm Pract 2014; 20 (3): 163–71.
- 75. Yeh M-L, Chang Y-J, Wang P-Y, et al. Physicians’ responses to computerized drug–drug interaction alerts for outpatients. Comput Methods Programs Biomed 2013; 111 (1): 17–25.
- 76. Zimmer KP, Miller MR, Lee BH, et al. Electronic narcotic prescription writer: use in medical error reduction. J Patient Saf 2008; 4 (2): 98–105.