Abstract
Objectives
To identify interventions for optimizing hospital medication alerts post-implementation, and to characterize the methods used, the populations studied, and the effects of optimization.
Materials and Methods
A structured search was undertaken in the MEDLINE and Embase databases, from inception to August 2023. Articles providing sufficient information to determine whether an intervention was conducted to optimize alerts were included in the analysis. Snowball analysis was conducted to identify additional studies.
Results
Sixteen studies were identified. Most were based in the United States and used a wide range of clinical software. Many studies used inpatient cohorts and conducted more than one intervention during the trial period. Alert types studied included drug–drug interactions, drug dosage alerts, and drug allergy alerts. Six types of interventions were identified: alert inactivation, alert severity reclassification, information provision, use of contextual information, threshold adjustment, and encounter suppression. The majority of interventions decreased alert quantity and enhanced alert acceptance. Alert quantity decreased with alert inactivation by 1%-25.3%, and with alert severity reclassification by 1%-16.5% in 6 of 7 studies. Alert severity reclassification increased alert acceptance by 4.2%-50.2% and was associated with a 100% acceptance rate for high-severity alerts when implemented. Clinical errors reported in 4 studies were seen to remain stable or decrease.
Discussion
Post-implementation medication alert optimization interventions have positive effects for clinicians across a variety of settings. Less well reported are the impacts of these interventions on the clinical care of patients, and how endpoints such as alert quantity contribute to changes in clinician and pharmacist perceptions of alert fatigue.
Conclusion
Well-conducted alert optimization can reduce alert fatigue by reducing overall alert quantity, improving clinical acceptance, and enhancing clinical utility.
Keywords: alert fatigue, alert optimization, computerized provider order entry
Background and significance
Medication errors are a common problem in clinical medicine, with consequences ranging from the incidental, such as incorrect prescription of an inactive substance, to the severe, such as a life-threatening allergic reaction. Prescription errors are a major category of medication errors, with examples including prescribing the wrong medication or the wrong administration time, duplicate prescribing, and failure to identify significant interactions, contraindications, or adverse reactions.1
Electronic alerts are a very common component of often multipronged interventions that aim to decrease medication errors, with evidence suggesting that they can reduce medication-related errors and sometimes provide cost savings.1 Medication error alerts are messages that are displayed to a clinician when prespecified trigger criteria are met, often coming from a computerized provider order entry (CPOE) or clinical decision support system (CDS). Examples of the types of alerts that display are provided in Table 1.
Table 1.
Forms of medication prescription error, modified from Saiyed et al.2
| Types of drug alert |
|---|
Medication error alerts vary by software and institution and may be customizable, such as by care encounter (eg, a visit to the emergency department) or by user (eg, clinicians or pharmacists). Alerts can vary in complexity, from simple triggers representing binary interactions (eg, "the patient has an identified allergy, please check") through to those incorporating contextual information (eg, a drug being contraindicated below a certain level of kidney function). Alerts may also vary in their visual format, function, and mode of interaction with the user.3
Once implemented, alert systems may need to be optimized for a number of reasons. New clinical evidence and new medications may require the alerts to be revised to reflect new knowledge. Computer interface design issues or the generation of a high alert volume may lead to alert fatigue, a state in which an excess of insignificant alerts results in their dismissal, irrespective of their importance.4 This can result in adverse events from failure to acknowledge clinically significant alerts, time and efficiency costs, and may contribute to clinician burnout or job dissatisfaction.5
Faced with suboptimal clinical performance, different strategies can be employed to recalibrate medication alert systems. This scoping review explores post-implementation processes designed to optimize alerts and reduce alert fatigue and patient safety risks within hospital environments.
Methods
Objective
This review aims to characterize the published literature on post-implementation medication alert optimization in hospital settings.
Study selection
A scoping review was selected as the most appropriate method to characterize alert optimization interventions that have a large amount of heterogeneity in their studied populations, contexts, interventions used, study design, and choice of outcomes. For this review, we included all studies that reported an approach to optimizing medication alerts and describe the methods used, populations studied, and the effects of the interventions. Alert optimization was defined as any attempt to reduce the volume of alerts displayed or to increase the clinical appropriateness of the alerts. Studies were only included if they provided sufficient data to perform a PICO (population, intervention, comparator, outcomes) analysis. A full description of inclusion and exclusion criteria is provided in Table 2.
Table 2.
Inclusion and exclusion criteria.
| Component | Included | Excluded |
|---|---|---|
| Participants | Clinical settings (inpatient and outpatient) | |
| Intervention | Primary outcome is to optimize alerts in an existing setting post-implementation | Comparison between existing systems without a process to optimize alerts |
| Comparator | Presence of a comparator, whether by location or time point | Comparator was between a system without alerts (such as a paper-based system) and a system with alerts |
| Outcome | Endpoints are stated and measured | Endpoints are not stated or measured |
| Language | English | Other languages |
| Study design | Peer-reviewed publication with sufficient information to answer a PICO analysis, including grey literature (reports, letters, etc.) | |
Search strategy
A search was undertaken in the MEDLINE database from inception to August 18, 2023, and in the Embase database from inception to August 22, 2023, with the assistance of a medical librarian. Terms were formulated using a combination of subject headers (eg, Medical Order Entry Systems), and keyword searches (eg, deimplementation, optimization). Search terms are described in full in Appendix S1.
Two reviewers independently screened titles and abstracts (T.L. and K.B.-C.) using the Covidence platform, and then the full text was reviewed independently by the reviewers for eligibility and included after consensus agreement. Disagreements were resolved by discussion or by involvement of a third reviewer (E.C.). Studies identified by snowball analysis and meeting the eligibility criteria on full-text review were also included.
Data extraction
An Excel spreadsheet was developed for study data to be extracted in the PICO format. Extracted variables included: publication details (author, title, year, country of publication), study objectives, population characteristics (inpatient or outpatient setting, number of centers involved, country), system characteristics (CPOE system and knowledge bank), interventions (type, whether applied by individual alert or to a category of alerts, target user group), study design and comparator (prospective or retrospective, number of optimization steps, comparators or controls), outcomes (primary desired outcome, other outcomes), and the governance process used to optimize alerts. For a full summary of included articles, see Appendix S2.
Summarizing and reporting findings
The included studies were reviewed in full and standardized terms were used to describe and summarize the studies (Table 3). A quality appraisal of the literature was not conducted given that few studies were identified, and due to discernible heterogeneity between studies.
Table 3.
Terminology used to describe studies.
| Term | Definition |
|---|---|
| Alert | An automated message to a user, whether interruptive or non-interruptive, with a predefined trigger |
| Trigger | A predefined condition that causes an alert to be displayed |
| Alert optimization | The process of refining alerts or their triggers to better match a desired outcome |
| CDS | Clinical decision support system (CDS), a system which processes context-specific inputs to provide clinical recommendations to a user |
| CPOE system | A computerized provider order entry system (CPOE), which allows providers to electronically create a prescription |
| Provider | A clinician, such as a doctor or nurse prescriber, who initiates a prescription |
Results
Study characteristics
In total, 241 studies were identified, of which 41 were excluded as duplicates and 179 were excluded after title/abstract screening (Figure 1). Twenty-one studies were retrieved for full-text screening, and 16 were included in the final analysis. The selected studies were published between 2006 and 2021. The study design and settings are detailed in Table 4.
Figure 1.
PRISMA diagram.6
Table 4.
Summary of included studies.
| Short title | Setting | Study design | Intervention | Outcomes |
|---|---|---|---|---|
| Del Beccaro et al., 20107 | Inpatient and outpatient, single center, pediatric, United States | Prospective multistep observational | Individual reclassification, contextualization, threshold adjustment | 1° alert quantity; 2° clinical outcome |
| Bhakta et al., 20198 | Inpatient, adult multicenter, United States | Prospective single step observational | Individual reclassification, inactivation, encounter suppression | 1° alert acceptance; 2° alert quantity |
| Brodowy et al., 20169 | Unspecified cohort, single center, United States | Retrospective multistep observational | Categorical reclassification, contextualization, inactivation | 1° alert quantity; 2° alert acceptance |
| Cornu et al., 201510 | Inpatient, single center, adult, Belgium | Prospective single step controlled | Individual reclassification, contextualization, other (alert design) | 1° alert acceptance; 2° alert quantity |
| Duke et al., 201311 | Outpatient, age and site unspecified, United States | Prospective single step RCT | Individual contextualization | 1° alert acceptance; 2° alert quantity, clinical outcome |
| Eppenga et al., 201212 | Inpatient, adult, single center, Netherlands | Prospective single step observational | Individual contextualization | 1° alert utility; 2° alert quantity |
| Muhlenkamp et al., 201913 | Inpatient, multicenter, age unspecified, United States | Prospective single step observational | Individual threshold adjustment, contextualization and information provision | 1° alert quantity; 2° alert acceptance |
| Parke et al., 201514 | Site and age unspecified, United States | Retrospective single step observational | Individual reclassification | 1° alert quantity; 2° alert acceptance, clinical outcome |
| Paterno et al., 200915 | Adult and pediatric inpatient, multicenter United States | Retrospective single step observational, controlled | Individual reclassification | 1° alert adherence; 2° alert quantity |
| Rizk and Swan, 202116 | Inpatient, age unspecified, single center, United States | Prospective multistep observational | Individual information provision | 1° alert utility; 2° alert volume |
| Saiyed et al., 201717 | Inpatient and outpatient, multicenter, adult and pediatric, United States | Multistep observational | Category and individual inactivation, threshold adjustment | 1° alert quantity; 2° clinical outcome |
| Saiyed et al., 20192 | Inpatient and outpatient adult and pediatric, multicenter, United States | Retrospective multistep observational, comparative | Category and individual inactivation, threshold adjustment, encounter suppression | 1° alert quantity |
| Shah et al., 200618 | Adult outpatient, multicenter, United States | Prospective single step observational | Individual classification | 1° alert acceptance; 2° alert quantity |
| Simpao et al., 201519 | Pediatric inpatient, single center, United States | Prospective multistep observational | Other (dashboard); individual inactivation | 1° alert quantity; 2° alert acceptance |
| Hatton et al., 201120 | Inpatient, single center, age unspecified, United States | Retrospective single step observational | Individual inactivation and reclassification | 1° alert accuracy; 2° alert quantity |
| Zenziper et al., 201421 | Inpatient, single center, age unspecified, Israel | Retrospective multistep observational | Individual and category inactivation, threshold adjustment, contextualization, and information provision | 1° alert quantity |
Terminology: Setting. Single step (intervention at one time point) or multistep (interventions at multiple time points). Intervention. Individual (applied to a single alert) or category (applied broadly to a type of alert, eg, all minimum dosage alerts). Interventions are broadly described as reclassification (by severity), contextualization (adding contextual information to trigger criteria), threshold adjustment (adjusting the triggering values), information provision (providing more information in alerts when they are displayed), encounter suppression (inactivating alerts by encounter type, eg, all alerts displayed in an operation), and other (all other interventions). Outcomes. 1° = primary outcome, 2° = secondary outcome. Outcomes are broadly described as alert quantity (a simple count of alerts), alert frequency (alert quantity adjusted for potential triggering instances), clinical (eg, medication error rates), alert utility (eg, specificity or sensitivity for detecting a clinically meaningful interaction), and other (eg, cost or time effects).
Setting
Most studies originated from the United States (n = 13, 81%), with the remainder from Belgium (n = 1), Israel (n = 1), and the Netherlands (n = 1). Of the 14 studies that specified intervention settings, 12 were in inpatient settings and 5 in outpatient settings, with 3 spanning both. Nine studies described the patient population: adults (n = 4), children (n = 2), or both (n = 3). Most studies identified implementation vendors (n = 13, 81%), including Epic (n = 6), Cerner (n = 2), and one each of Chameleon, Horizon, Longitudinal Medical Record, and Centasys, as well as a bespoke system. Medication knowledge bases were noted in several studies and included First Databank (n = 3), Medispan (n = 2), Vigilanz, and SafeRx. Eight studies investigated interventions on alerts displayed to both prescribers and pharmacists; 7 investigated alerts displayed only to prescribers, and one investigated alerts displayed only to pharmacists.16
The type of alert varied between studies. Four studies included all possible medication alerts within the scope of their intervention; the majority limited their interventions to alerts within one or more specific types (n = 12). The most common were drug–drug interactions (n = 9), drug dosage alerts (n = 3), and drug allergy alerts (n = 3). Baseline alert rates differed considerably between studies, for example, from 7 to 88.4 drug–drug interaction alerts per 100 prescriptions.8,14
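The distinction between gross alert counts and prescribing-volume-adjusted rates matters when comparing baselines like these. A minimal sketch of the normalization, using hypothetical counts rather than data from any included study:

```python
def alerts_per_100_prescriptions(alert_count: int, prescription_count: int) -> float:
    """Normalize a gross alert count by prescribing volume."""
    if prescription_count <= 0:
        raise ValueError("prescription_count must be positive")
    return 100 * alert_count / prescription_count

# Hypothetical illustration: the same gross alert count yields very
# different baseline rates under different prescribing volumes.
print(alerts_per_100_prescriptions(700, 10_000))  # 7.0 per 100 prescriptions
print(alerts_per_100_prescriptions(700, 792))     # ≈ 88.4 per 100 prescriptions
```

Gross counts reported without this denominator cannot be compared across sites with different prescribing volumes, a limitation noted for several of the included studies.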
Study design
Most studies featured interventions on a CPOE from a single vendor; only one conducted a comparative study of an intervention with 2 vendors.12 Most studies were prospective in design (n = 9).
Three studies included a control group: a randomized control trial used physician allocation as the unit that was randomized,11 and 2 studies used nonrandomized control groups, 1 by clinical department,10 and 1 by clinical site.15 Another study described a nonstandardized comparison between 2 sites that used different interventions.2
Alert appearance
The visual appearance of alerts was provided in several studies.10,11,15,18 There were considerable differences in alert headings and titles. The reason for the alert was displayed at different locations on the alert screen, and the amount of additional information provided was unique for each setting. Font size and characteristics (bold, italic) were used variably to differentiate elements within the alert. Furthermore, differing interaction designs were used for provider interactions, for example, text input, selecting a radio button list, or a binary button selection.
Alert optimizing interventions
The process undertaken to optimize alerts typically had 3 elements:
Candidate alert identification: A process at the institutional level determined which alerts were candidates for optimization. Most studies identified alerts for review by measuring alert quantity within a specified time period (n = 9). Alternate approaches included reviewing the most frequently displayed alerts (n = 3), reviewing the first 100 reported alerts in order of triggering,7 selection based on clinician and pharmacist feedback,8 or after review of existing commercial and government knowledge bases.18 One intervention was determined by a literature review which supported insertion of context-specific information in alerts.11 Lists were usually generated in text, and one study used a graphic dashboard.19 The dashboard displayed alerts and override rates in a graphical and numeric format and displayed the most frequently occurring interactions.19
Target alert selection: To determine which specific alerts or sets of alerts were to be affected by an intervention, studies selected alerts for intervention following either review of individual alerts (n = 12), selection of categories of alerts only (n = 1),2 or applying interventions to a combination of both categories and individual alerts. As an example, a review of the clinical significance of individual drug interactions and inactivating those deemed clinically nonsignificant was an individual modification,20 whereas inactivation of minimum dosage alerts for all medications would be a category intervention.2
Target alerts were identified either by processes unique to the study (n = 11), using existing committees, such as a digital health or medication review committee (n = 3), or both (n = 2). Most review committees were multidisciplinary in nature, generally featuring pharmacists (n = 13) and physicians (n = 10); other members included nurses (n = 3), bioinformatics specialists (n = 1), informaticians (n = 1), a director of patient safety (n = 2), and IT staff (n = 1). For further information on governance, we refer readers to van Dort et al.22
Alert optimization: Target alerts were altered and then implemented to satisfy optimization goals. Many studies used a process that featured an intervention at a single time point, which we call a single-step intervention (n = 11). An example of a single-step intervention was the inclusion of context-specific data in the form of potassium and creatinine values within the text displayed on drug–drug interaction alerts associated with hyperkalemia.11 Several studies employed multiple interventions at more than one time point (n = 5). For example, Saiyed et al.17 employed an incremental optimization process on drug-dosage alerts over a 3-month period, with both categoric inactivation of all minimum dosage alerts and increasing maximum daily dosage threshold to 125%, followed by adjustment of individual alert triggering criteria for the top 22 most frequent alerts. This was classified as a multistep intervention, as were 2 studies that described the implementation and optimization of a CPOE with CDS.9,21
We identified the following optimization interventions. Most studies conducted more than one type of intervention during their study period (n = 10):
Inactivation of alerts (n = 7): operational alerts are turned off either by inactivating a single triggering criterion (eg, for a specific drug–drug interaction) or categorically by inactivating a category of alerts in bulk (eg, all minimum dosage alerts).
Adjustment of triggering thresholds (n = 5): for example, by altering either individually or categorically minimum or maximum dosage thresholds for when an alert should display (eg, increasing the maximum dosage threshold from 100% to 125% before an alert is triggered).
Adding contextual information (n = 7), such as patient characteristics or laboratory test values to trigger criteria for an alert,12 or within the text that an alert displays.11
Implementation or reclassification of alert severity (also called “re-tiering”) (n = 8): for example, reclassifying an interruptive alert to a noninterruptive alert.
Increasing the amount of information displayed within the alert text (n = 3): for example, changing a vague alert that states Metformin is contraindicated in kidney injury, to an alert that provides more information by specifying the stage of kidney injury and that metformin accumulates in impaired kidney function.21
Suppressing alerts depending on encounter type (n = 2): for example, inactivating duplicate drug alerts for duplicates detected between encounters, such as intra- and postoperatively.8
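Several of these interventions amount to edits of an alert's triggering rule. A rough sketch of inactivation, threshold adjustment, and severity reclassification on a hypothetical rule object (the field names are illustrative, not drawn from any vendor's actual configuration):

```python
from dataclasses import dataclass

@dataclass
class DoseAlertRule:
    """Hypothetical maximum-dosage alert rule for illustration only."""
    drug: str
    max_daily_dose_mg: float
    threshold_pct: float = 100.0  # fire when the dose exceeds this % of max
    active: bool = True           # inactivation sets this to False
    interruptive: bool = True     # severity reclassification flips this tier

    def fires(self, ordered_dose_mg: float) -> bool:
        if not self.active:
            return False
        return ordered_dose_mg > self.max_daily_dose_mg * self.threshold_pct / 100

# Illustrative drug and limit, not a clinical recommendation.
rule = DoseAlertRule("metformin", max_daily_dose_mg=2000)
print(rule.fires(2200))   # True at the default 100% threshold

# Threshold adjustment: raise the trigger to 125% of the maximum dose,
# so modest overshoots no longer generate an alert.
rule.threshold_pct = 125.0
print(rule.fires(2200))   # False: 2200 < 2000 * 1.25
```

Categorical interventions would apply the same edit in bulk, for example setting `active = False` on every minimum-dosage rule at once.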
Alert optimization outcomes
The variables measured to report outcomes differed considerably (see Table 4):
Alert quantity-related endpoints were included in all studies as gross alert quantity, with 9 studies reporting frequency based on potential triggering instances. Alert quantity and frequency were reported to decrease in 10 of 15 studies and increase in 3. A further 3 did not include sufficient data to reach a conclusion.
Clinician alert acceptance endpoints, such as override rates of displayed alerts or user perception surveys, were often reported (n = 9): direct alert acceptance or override rates (n = 7), override reason (n = 1), and a user perception survey (n = 1).13 Six of the 7 studies measuring acceptance or override rates had positive outcomes; 1 reported a worse outcome post-intervention11; and in a pre- and post-intervention user survey, no change in user perceptions was found.13
Alert utility endpoints were reported to provide information on the clinical relevance of an alert, such as by providing a negative or positive predictive value (NPV/PPV; n = 2), or sensitivity/specificity of an alert (n = 3). This was generally by reporting accuracy against a gold standard such as a pharmacist determination (n = 2).
Clinical endpoints such as medical error rates were only reported in a few studies (n = 4); for example, through a separate medical error reporting system where reports were voluntarily filed by staff members.7
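The utility endpoints above are the standard confusion-matrix metrics, computed against a gold standard such as pharmacist review of whether each alert was clinically relevant. A minimal sketch with hypothetical counts (not data from any included study):

```python
def alert_utility(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Utility endpoints for an alert judged against a gold standard.
    tp: alert fired and was relevant; fp: fired but irrelevant;
    tn: suppressed and irrelevant;  fn: suppressed but relevant."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: an optimization that suppresses many irrelevant
# alerts raises PPV (from 0.20 to 0.50) while sensitivity is unchanged.
print(alert_utility(tp=40, fp=160, tn=40, fn=10))
print(alert_utility(tp=40, fp=40, tn=160, fn=10))
```

This is why a falling alert quantity and a rising positive predictive value can coexist: removing nonsignificant alerts changes the false-positive count without touching the true positives.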
Effect of interventions
Studies varied in the outcome effects of alert optimization interventions (Appendices S2 and S3), with the majority reporting benefits. Alert quantity and frequency decreased in 10 of 15 studies that included them as endpoints, increased in 3, and 3 did not include sufficient data to reach a conclusion. Six of 7 studies measuring acceptance or override rates had positive outcomes; 1 had a worse outcome post-intervention11; and in a pre- and post-intervention user survey, both positive and negative changes in user perceptions were reported.13 All studies that included alert utility endpoints described a positive effect of their interventions. Of the studies that reported on clinical error rates, 1 described a decrease7; 2 described no significant change2,14; and 1 included insufficient information to determine a change in outcomes.11
Most interventions were reported to have positive effects. Alert inactivation, for example, was associated with absolute decreases in alert quantity of 1%-25.3%2,8 and absolute improvements in acceptance rates of 1.49%-10.76%.8,19 Alert severity reclassification was associated with an absolute decrease in alert quantity of 1%-16.5% in 6 studies,7,14 although an increase in quantity was observed in one study.10 Severity reclassification was also associated with an absolute increase in alert acceptance for all alerts of 4.2%-50.2%,10,11 and with a 100% acceptance rate for high-severity alerts when implemented.15,18
Discussion
This scoping review set out to establish what interventions, methods, and study designs have been used to optimize clinical medication alerts. It identified a heterogeneous set of alert optimization studies, predominantly undertaken in the United States, using a variety of optimization strategies, software, and knowledge bases, and applied to diverse patient populations. The majority of studies were observational and conducted interventions in more than one alert type (eg, drug–drug interactions, drug allergy alerts, etc.), with follow-up and review at more than one time point, and generally with review of individual alerts. The most frequent modifications were alert inactivation, reclassification of alert severity, addition of contextual information, and adjustment of triggering thresholds. Whilst bespoke study designs reflect the real-world processes by which alerts are optimized, they make comparisons between studies more difficult.
Inactivation of alerts, threshold adjustment, providing additional information within alerts, and reclassifying alert severity were particularly effective interventions. The majority of studies pursuing these interventions reported decreases in alert quantity and frequency and improvements in acceptance and override rates. Using contextual information was variable in its effect on alert quantity and acceptance, but seemed to increase the utility of alerts. When alerts were modified by category, the most common intervention was inactivation of minimum dosage thresholds, resulting in a decrease in alert quantity for most studies.
Practical considerations when implementing strategies to optimize alerts were reflected in the varied approaches used.4 The most common governance structure was a small multidisciplinary group (at a minimum, 1 clinician and 1 pharmacist) that would present suggested alterations to a team created for the purpose of alert optimization. Similarly, differences were observed in the use and development of in-house catalogs of alerts versus commercial or national almanacs. Such "off-the-shelf" almanacs may save time, but local clinicians did not always agree with their assessment of clinical significance.17 Dashboards used to identify alerts may be visually appealing and help draw attention to the most common alerts,23 but implementing them requires consideration of available resources, including technical aspects such as ongoing maintenance.24
Whilst alert fatigue was commonly identified as a motivating factor leading to an intervention, alert quantity was generally used as a surrogate for this complex outcome. Alert quantity is a crude metric, unadjusted for case mix, seasonal variation, or prescription volume (when reported without frequency). A variety of clinical acceptance-related measures may provide insight into causes of prescriber frustration and alert fatigue. Furthermore, whether changes in quantity were associated with consumer or clinician outcomes was rarely reported, with few studies reporting the important healthcare-related endpoint of patient harm and only one reporting physician perceptions.13
Understanding clinician override reasons, and improving the quality of override reasons that clinicians select, may prove useful when evaluating clinical alert acceptance.25,26 Technical interventions that help incorporate clinical information, such as differentiating allergic reactions and intolerances, may help to increase the specificity of alerts.27 The databases used for drug–allergy or drug–drug interactions vary widely with no generally accepted list and there is limited consensus on clinically relevant interactions.27,28
Of the few studies reporting clinical endpoints, the majority used voluntary staff-reported clinical incident notification systems. This practice generated very few reports and consequently may be insensitive to more complex changes in prescribing patterns. Alert utility endpoints varied in their choice of gold standard, often relying on pharmacist- or almanac-led determinations of clinically significant interactions; whether these correlate with clinician-perceived significance also remains unclear.
Study quality
Most interventions described favorable results, raising a potential concern for publication bias.22 Interventions that were multimodal and included observational components raised the potential for a Hawthorne effect during the study period. The limited study periods and lack of follow-up data make it difficult to assess whether changes are maintained or reflect a honeymoon effect. The large variation in baseline alert rates, for example, ranging from 7 to 88.4 drug–drug interaction alerts per 100 prescriptions,8,14 may affect preintervention levels of alert fatigue, which makes it difficult to directly compare studied populations. Additional features of the included studies that may increase the risk of bias and decrease study quality include: risk of confounding due to lack of comparator, control arm, and blinding; risk of bias due to endpoint selection and measurement (eg, volume, which may not control for seasonal activity levels); and risk of bias due to lack of measurement of clinical or investigator-determined endpoints (eg, clinical error rates, or using alert quantity as a surrogate for alert fatigue without a validated minimally significant change). Owing to the observational nature of many studies, statistical significance testing was typically not performed. No studies were powered to detect a prespecified statistically significant change in a defined endpoint.
We identified additional limitations in data reporting across the studies. First, not all data points were clearly stated; for example, the age, patient setting, or population were not provided for some studies. Second, alerts varied by type (eg, drug–drug, drug–allergy), limiting generalizability of results, particularly given the limited number of studies of each type. Third, the choice of intervention was often multimodal, introducing several potential causative factors contributing to an identified change. Fourth, comparators were generally not controlled for, and when included, differed between studies (for example, different hospital networks, or before-and-after studies that did not account for seasonal case mix variation or for changes in end user prescribing behavior secondary to the intervention). Fifth, consistent with previous evaluations of alert designs, visual appearances of alerts varied significantly (where samples were provided), and it is likely that human factors relating to alert design may confound acceptance rates.3,29 Finally, outcome selection varied greatly between study groups, also affecting generalizability. Several studies used gross alert quantity as a reported outcome without controlling for prescription number, and many studies did not report clinical outcomes, which we feel would be an appropriate metric to include for clinically based interventions. In the future, to assess the impact of specific interventions, we suggest directly measuring standard endpoints of interest such as alert fatigue, clinician burnout, and patient safety (eg, medication errors, patient harms, clinical outcomes).
Future interventions are likely to be multipronged, and may increasingly incorporate understanding of the complex sociotechnical underpinnings of CDS, including the effects of these alerts and CPOE on communication, influencing factors such as the work environment,30,31 and the effects of training or learning feedback systems.26 Practical technical approaches include increasing the specificity of alerts by narrowing triggering criteria and targeted user groups, displaying the alert at the most appropriate time within the workflow, and standardizing and improving alert text and appearance. Alternative strategies to the use of alerts are increasingly reported, such as education, workflow changes, noninterruptive messages, and inline validation.18,23,32 Recognizing the proliferation of alerts and the ever-changing healthcare system, there appears to be a real need for durable CDS governance structures with periodic review and institutional memory.23,33,34
Review limitations
We identified several limitations with our approach for conducting this scoping analysis. There is a possibility that we did not identify all relevant studies, and our review of post-implementation optimization did not identify cutting-edge strategies that might prove beneficial, such as artificial-intelligence-based outlier analysis.35,36 Additionally, we approached the issue of alert optimization as a broad topic and may have missed situation-specific alert optimization strategies that may have otherwise been reported.
Conclusion
This scoping review identified 16 articles describing interventions intended to optimize medication alerts. Most were observational studies conducted in inpatient clinical settings. A diversity of interventions was applied across varying types of medication alerts, with the most common being alert inactivation, alert severity reclassification, and information provision.
Identified interventions were generally found to be effective in reducing overall alert quantity, improving clinical acceptance, and enhancing clinical utility. However, intervention impacts on alert fatigue and on clinically important endpoints, such as clinical error rates during the trial periods or the downstream consequences of alert fatigue, were often unmeasured. Alert optimization is one tool for reducing alert fatigue, and this review describes a variety of strategies ranging in sophistication and complexity.
Contributor Information
Thomas Stephen Ledger, Royal Australasian College of Physicians, NSW 2000, Australia.
Kalissa Brooke-Cowden, Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, NSW 2109, Australia.
Enrico Coiera, Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, NSW 2109, Australia.
Author contributions
T.S.L., K.B.-C., and E.C. contributed equally and are considered co-senior authors of this work. In addition, they had full access to all the data in the study and take responsibility for the integrity of data and accuracy of the data analysis.
Supplementary material
Supplementary material is available at Journal of the American Medical Informatics Association online.
Funding
E.C. and K.B.-C. were supported by the NHMRC Centre of Research Excellence (CRE) in Digital Health, and E.C. by an NHMRC Investigator Grant (GNT2008645). The funding source did not play any role in study design, in the collection, analysis, and interpretation of data, in the writing of the report, or in the decision to submit the article for publication.
Conflict of interest
None declared.
Data availability
The data underlying this article will be shared on reasonable request to the corresponding author.
References
- 1. Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc. 2007;14(1):29-40.
- 2. Saiyed S, Davis K, Kaelber D. Differences, opportunities, and strategies in drug alert optimization: experiences of two different integrated health care systems. Appl Clin Inform. 2019;10(5):777-782.
- 3. Phansalkar S, Zachariah M, Seidling HM, et al. Evaluation of medication alerts in electronic health records for compliance with human factors principles. J Am Med Inform Assoc. 2014;21(e2):e332-e340.
- 4. Sutton RT, Pincock D, Baumgart DC, et al. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digital Med. 2020;3(1):17.
- 5. Kane-Gill S, O’Connor MF, Rothschild JM, et al. Technologic distractions (Part 1). Crit Care Med. 2017;45(9):1481-1488.
- 6. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372(88):n71.
- 7. Beccaro MAD, Villanueva R, Knudson KM, et al. Decision support alerts for medication ordering in a computerized provider order entry (CPOE). Appl Clin Inform. 2010;1(3):346-362.
- 8. Bhakta SB, Colavecchia AC, Haines L, et al. A systematic approach to optimize electronic health record medication alerts in a health system. Am J Health Syst Pharm. 2019;76(8):530-536.
- 9. Brodowy B, Nguyen D. Optimization of clinical decision support through minimization of excessive drug allergy alerts. Am J Health Syst Pharm. 2016;73(8):526-528.
- 10. Cornu P, Steurbaut S, Gentens K, et al. Pilot evaluation of an optimized context-specific drug–drug interaction alerting system: a controlled pre-post study. Int J Med Inform. 2015;84(9):617-629.
- 11. Duke JD, Li X, Dexter P. Adherence to drug–drug interaction alerts in high-risk patients: a trial of context-enhanced alerting. J Am Med Inform Assoc. 2013;20(3):494-498.
- 12. Eppenga WL, Derijks HJ, Conemans JMH, et al. Comparison of a basic and an advanced pharmacotherapy-related clinical decision support system in a hospital care setting in the Netherlands. J Am Med Inform Assoc. 2012;19(1):66-71.
- 13. Muhlenkamp R, Ash N, Ziegenbusch K, et al. Effect of modifying dose alerts in an electronic health record on frequency of alerts. Am J Health Syst Pharm. 2019;76(Supplement_1):S1-S8.
- 14. Parke C, Santiago E, Zussy B, et al. Reduction of clinical support warnings through recategorization of severity levels. Am J Health Syst Pharm. 2015;72(2):144-148.
- 15. Paterno MD, Maviglia SM, Gorman PN, et al. Tiering drug–drug interaction alerts by severity increases compliance rates. J Am Med Inform Assoc. 2009;16(1):40-46.
- 16. Rizk E, Swan JT. Development, validation, and assessment of clinical impact of real-time alerts to detect inpatient as-needed opioid orders with duplicate indications: prospective study. J Med Internet Res. 2021;23(10):e28235.
- 17. Saiyed SM, Greco PJ, Fernandes G, et al. Optimizing drug-dose alerts using commercial software throughout an integrated health care system. J Am Med Inform Assoc. 2017;24(6):1149-1154.
- 18. Shah NR, Seger AC, Seger DL, et al. Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc. 2006;13(1):5-11.
- 19. Simpao AF, Ahumada LM, Desai BR, et al. Optimization of drug–drug interaction alert rules in a pediatric hospital’s electronic health record system using a visual analytics dashboard. J Am Med Inform Assoc. 2015;22(2):361-369.
- 20. Hatton RC, Rosenberg AF, Morris CT, et al. Evaluation of contraindicated drug-drug interaction alerts in a hospital setting. Ann Pharmacother. 2011;45(3):297-308.
- 21. Zenziper Y, Kurnik D, Markovits N, et al. Implementation of a clinical decision support system for computerized drug prescription entries in a large tertiary care hospital. Isr Med Assoc J. 2014;16(5):289-294.
- 22. Van Dort BA, Zheng WY, Sundar V, et al. Optimizing clinical decision support alerts in electronic medical records: a systematic review of reported strategies adopted by hospitals. J Am Med Inform Assoc. 2021;28(1):177-183.
- 23. Chaparro J, Hussain C, Lee J, et al. Reducing interruptive alert burden using quality improvement methodology. Appl Clin Inform. 2020;11(1):46-58.
- 24. Xie CX, Chen Q, Hincapié CA, et al. Effectiveness of clinical dashboards as audit and feedback or clinical decision support tools on medication use and test ordering: a systematic review of randomized controlled trials. J Am Med Inform Assoc. 2022;29(10):1773-1785.
- 25. Dekarske BM, Zimmerman CR, Chang R, et al. Increased appropriateness of customized alert acknowledgement reasons for overridden medication alerts in a computerized provider order entry system. Int J Med Inform. 2015;84(12):1085-1093.
- 26. Khalifa M, Zabani I. Improving utilization of clinical decision support systems by reducing alert fatigue: strategies and recommendations. In: Mantas J, Hasman A, Gallos P, et al, eds. Unifying the Applications and Foundations of Biomedical and Health Informatics. IOS Press; 2016:51-54.
- 27. Luri M, Leache L, Gastaminza G, et al. A systematic review of drug allergy alert systems. Int J Med Inform. 2022;159:104673.
- 28. Baysari MT, Raban MZ. The safety of computerised prescribing in hospitals. Aust Prescr. 2019;42(4):136-138.
- 29. Phansalkar S, Edworthy J, Hellier E, et al. A review of human factors principles for the design and implementation of medication safety alerts in clinical information systems. J Am Med Inform Assoc. 2010;17(5):493-501.
- 30. Thomas SK, Coleman JJ. The impact of computerised physician order entry with integrated clinical decision support on pharmacist–physician communication in the hospital setting: a systematic review of the literature. Eur J Hosp Pharm. 2012;19(4):349-354.
- 31. Xiao S, Liu J, Chang H. Physician-nurse communication surrounding computerized physician order entry systems from social and technical perspective. Comput Inform Nurs. 2021;40(4):258-268.
- 32. Patel J, Ogletree R, Sutterfield A, et al. Optimized computerized order entry can reduce errors in electronic prescriptions and associated pharmacy calls to clarify (CTC). Appl Clin Inform. 2016;7(2):587-595.
- 33. McCoy AB, Russo EM, Johnson KB, et al. Clinician collaboration to improve clinical decision support: the Clickbusters initiative. J Am Med Inform Assoc. 2022;29(6):1050-1059.
- 34. Ng HJH, Kansal A, Naseer JFA, et al. Optimizing best practice advisory alerts in electronic medical records with a multi-pronged strategy at a tertiary care hospital in Singapore. JAMIA Open. 2023;6(3):ooad056.
- 35. Segal G, Segev A, Brom A, et al. Reducing drug prescription errors and adverse drug events by application of a probabilistic, machine-learning based clinical decision support system in an inpatient setting. J Am Med Inform Assoc. 2019;26(12):1560-1565.
- 36. Woods AD, Mulherin DP, Flynn AJ, et al. Clinical decision support for atypical orders: detection and warning of atypical medication orders submitted to a computerized provider order entry system. J Am Med Inform Assoc. 2014;21(3):569-573.