Journal of the American Medical Informatics Association: JAMIA. 2022 Oct 20;30(1):64–72. doi: 10.1093/jamia/ocac191

Distinct components of alert fatigue in physicians’ responses to a noninterruptive clinical decision support alert

Douglas A Murad 1, Yusuke Tsugawa 2, David A Elashoff 3, Kevin M Baldwin 4, Douglas S Bell 5,6
PMCID: PMC9748542  PMID: 36264258

Abstract

Objective

Clinical decision support (CDS) alerts may improve health care quality, but “alert fatigue” can reduce provider responsiveness. We analyzed how the introduction of competing alerts affected provider adherence to a single depression screening alert.

Materials and Methods

We analyzed audit data from all occurrences of a CDS alert at a large academic health system. For patients who screened positive for depression during ambulatory visits, a noninterruptive alert was presented, offering several relevant documentation actions. Alert adherence was defined as the selection of any option offered within the alert. We assessed the effects of competing clinical guidance alerts presented during the same encounter, and of the total number of CDS alerts the same provider had seen in the prior 90 days, on the probability of depression screen alert adherence, adjusting for physician and patient characteristics.

Results

The depression alert fired during 55 649 office visits involving 418 physicians and 40 474 patients over 41 months. After adjustment, physicians who had seen the most alerts in the prior 90 days were much less likely to respond (adjusted OR highest–lowest quartile, 0.38; 95% CI 0.35–0.42; P < .001). Competing alerts in the same visit further reduced the likelihood of adherence only among physicians in the middle two quartiles of alert exposure in the prior 90 days.

Conclusions

Adherence to a noninterruptive depression alert was strongly associated with the provider’s cumulative alert exposure over the past quarter. Health systems should monitor providers’ recent alert exposure as a measure of alert fatigue.

Keywords: alert, physicians, depression, clinical decision support, alert fatigue, regression modeling

BACKGROUND AND SIGNIFICANCE

Although research shows that receipt of healthcare services supported by guidelines can lead to improved outcomes, not all patients receive evidence-based care. Prior studies have shown that patients receive guideline-directed treatment only half of the time.1,2 A frequently used approach to addressing quality gaps is the implementation of clinical decision support (CDS) alerting systems that attempt to prompt clinicians to take desirable actions within electronic health record (EHR) systems.3 CDS systems play a central role in the Medicare and Medicaid Promoting Interoperability Program. In particular cases, they have been shown to improve patient safety,4,5 lower costs for health care systems,6 improve adherence to clinical guidelines,7,8 increase the quality of clinical documentation,9 and heighten diagnostic accuracy.10 However, there remain persistent concerns related to alert fatigue,11 alert inappropriateness,12 and workflow fragmentation resulting in increased cognitive load.13

The shortcomings of CDS systems often lead to high alert override rates,14,15 and have motivated considerable research to improve CDS usability.16 However, even finely crafted alerts must operate within a milieu of competing alerts. While it has been shown that alert adherence may be inversely related to the firing frequency of the individual alert,17 it is not clear how the likelihood of provider adherence is altered by the volume of alerts recently seen by a provider.

In this context, our health system implemented a noninterruptive alert that appeared in the EHR encounter when a patient’s answers on the PHQ-9 depression screening questionnaire indicated at least mild depression. The aim of this study is to describe the longitudinal adherence rates for this depression screen CDS alert and to assess the extent to which adherence is modified by the overall volume of clinical guidance alerts seen by each provider over the preceding 90 days (recent alert count) and by the number of competing alerts seen during the same encounter (competing alert count).

METHODS

Clinical decision support tool

Screening with the PHQ-9 (Patient Health Questionnaire) depression severity measure18 takes place during the nurse-led rooming process at many primary care and psychiatry clinics at UCLA Health. After PHQ-9 responses are entered, any score greater than 4, which suggests at least mild depression, triggers a noninterruptive alert in the provider’s EHR navigation window. Within the alert, providers may take three kinds of actions: add 1 of 10 depression diagnoses to the patient’s problem list; click on a link for further clinical guidance; or attest to initiation of depression treatment, to referral for further evaluation, to patient refusal of evaluation or treatment, or that the patient is already undergoing treatment (Figure 1). A preexisting depression diagnosis on the patient’s problem list did not suppress alert firing.

Figure 1. Screenshot of the depression screening clinical guidance alert. The first 10 pushbuttons allow the user to rapidly add a new depression problem to the patient’s electronic health record; the last three pushbuttons allow the user to quickly attest that the depression screen has been reviewed and to document follow-up for quality reporting.

Data

Every depression alert firing associated with an appointment between September 1, 2017 and February 28, 2021 was extracted from the relational auditing database supplied by the EHR vendor. Physician and patient characteristics were retrieved from separate tables of the same database. All problem list, encounter, billing, and past medical history diagnoses recorded on or prior to the encounter date were collected from the patient’s record. After removal of depression diagnoses made on the office visit date, the remaining codes were used to automatically generate19 AHRQ-weighted Elixhauser comorbidity scores20 for each patient encounter. Encounters with any alert firings shown to residents, nurse practitioners, scribes, or other ancillary staff were excluded, as were depression alerts that did not fire on the day of the encounter and encounters with patients younger than 18 years. For consistency over time, only alerts shown during office visits were included. Physicians with fewer than 10 encounters with positively screened patients over the entire study period were excluded.
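To make the comorbidity-scoring step concrete, the sketch below uses the comorbidity R package that the Statistical analysis section cites. The input frame and column names are illustrative assumptions, and the package’s built-in van Walraven weights stand in for the AHRQ weights the authors applied.

```r
# A minimal sketch of the comorbidity-scoring step, assuming
# comorbidity >= 1.0 and a hypothetical frame `dx` with one row per
# (encounter_id, icd10) diagnosis recorded on or before the encounter
# date, with same-day depression codes already removed.
library(comorbidity)

elix_flags <- comorbidity(
  x = dx,
  id = "encounter_id",
  code = "icd10",
  map = "elixhauser_icd10_quan",  # Quan's ICD-10 Elixhauser mapping
  assign0 = TRUE                  # apply hierarchy to avoid double-counting
)

# The package ships van Walraven ("vw") weights for Elixhauser; the
# paper applied AHRQ weights (Moore et al. 2017), which would replace
# "vw" here if supplied separately.
elix_scores <- data.frame(
  encounter_id = elix_flags$encounter_id,
  elix_score   = score(elix_flags, weights = "vw", assign0 = TRUE)
)
```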

Alert adherence

Any actions taken within the depression alerts during the patient encounter were noted. The primary, binary outcome of physician alert adherence was calculated at the level of each unique patient office visit. The decision to add a new depression problem, to click on an informational link, or to select an attestation button within the alert was denoted as positive alert adherence.

“Recent alert exposure” and “competing alert count”

For this analysis, we counted all alerts that were termed by the EHR vendor as “Best Practice Advisories”, which were reminders about recommended preventive or disease-specific care, usually presented to the clinician upon opening the chart. Medication alerts were not counted. “Recent alert exposure” was defined as the number of nonmedication alerts of any kind seen by the physician in the 90 days prior to each encounter. The “competing alert count” was defined as the count of all other nonmedication alerts presented to the physician during the encounter in which a depression alert was triggered.
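As a concrete illustration, the two exposure measures could be derived from an alert-firing log roughly as follows; the frames and column names here are assumptions, not the vendor’s audit schema.

```r
# A sketch of both exposure measures from an alert log; the frames
# `alerts` (physician_id, encounter_id, alert_type, alert_time) and
# `dep_enc` (physician_id, encounter_id, encounter_time) are assumed.
library(dplyr)

# Competing alert count: other nonmedication alerts shown in the same
# encounter as the depression alert
competing <- alerts %>%
  filter(alert_type != "depression_screen") %>%
  count(encounter_id, name = "competing_n")

# Recent alert exposure: all nonmedication alerts the physician saw in
# the 90 days before the encounter (O(n * m), fine as an illustration)
dep_enc <- dep_enc %>%
  rowwise() %>%
  mutate(recent_n = sum(
    alerts$physician_id == physician_id &
      alerts$alert_time < encounter_time &
      alerts$alert_time >= encounter_time - as.difftime(90, units = "days")
  )) %>%
  ungroup()
```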

Descriptive analysis of alert adherence rates over time

To visualize unadjusted, longitudinal alert performance data, the percent of depression alert encounters with any adherence was plotted on a monthly basis. In parallel, the average number of competing alerts shown to the provider during each encounter with a depression alert was plotted. Finally, the average number of alerts shown to the provider over the preceding 90 days was graphed over the same time interval.
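The paper drew its figures in JMP; an equivalent monthly aggregation in R might look like the following, assuming an encounter-level frame `enc` with an encounter_date column and a 0/1 adhered flag (both names are assumptions).

```r
# A sketch of the unadjusted monthly adherence series (cf. Figure 2b)
library(dplyr)
library(ggplot2)

enc %>%
  mutate(month = as.Date(format(encounter_date, "%Y-%m-01"))) %>%
  group_by(month) %>%
  summarise(adherence_pct = 100 * mean(adhered), .groups = "drop") %>%
  ggplot(aes(month, adherence_pct)) +
  geom_line() +
  labs(x = "Month", y = "% of depression alert encounters with adherence")
```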

Statistical analysis

First, we used generalized linear mixed-effect models (GLMMs) with logit link (physicians and patients as random intercepts) at the level of each individual encounter associated with a depression alert. Alert adherence was used as the binary outcome variable with recent alert count and competing alert count as covariates, and physician and patient characteristics as adjustment variables.21 We then calculated adjusted alert compliance rates using the marginal standardization method by holding covariate values at their means and varying the covariates of interest.22
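In lme4 (the package the paper cites for model fitting21), the model and the marginal standardization step might be sketched as follows; all variable names are assumptions mirroring Table 2.

```r
# A sketch of the encounter-level adherence GLMM with logit link;
# `recent_q` and `competing_cat` are the quartiled and categorized
# exposures, with other covariates as in Table 2 (names assumed).
library(lme4)
library(ggeffects)

m1 <- glmer(
  adhered ~ recent_q + competing_cat + pcp_visit + specialty +
    grad_decade + provider_sex + patient_sex + patient_age +
    phq9 + elix_score + depression_on_list + antidepressant_rx +
    (1 | physician_id),                 # physician random intercept
  data = enc, family = binomial(link = "logit")
)

# Adjusted adherence probabilities: ggpredict() holds the remaining
# covariates at their means/reference levels while varying the term
# of interest (marginal effects at the means)
ggpredict(m1, terms = "recent_q")
```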

We also investigated whether recent alert exposure and competing alert count interacted in their effect on alert adherence during each visit. An interaction term between the two variables was added to the original model, and the models with and without the interaction term were compared by ANOVA. The marginal effect on alert adherence probability was plotted for the two interacting variables.
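Under the same assumed names, the interaction step might look like:

```r
# Add the exposure-by-competing-alerts interaction and compare fits;
# anova() on merMod objects performs a likelihood-ratio test
m2 <- update(m1, . ~ . + recent_q:competing_cat)
anova(m1, m2)

# Marginal effect plot of the two interacting variables (cf. Figure 3)
plot(ggpredict(m2, terms = c("recent_q", "competing_cat")))
```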

Adjustment variables

Physician characteristics used as adjustment variables included sex, the decade of medical school graduation, primary specialty, and primary care relationship with the patient. Patient characteristics included sex, age, antidepressant prescription in the last year, active depression diagnosis on the problem list, and PHQ-9 score triggering the alert.

Sensitivity analyses

We conducted three sensitivity analyses. Given the central importance of the primary care relationship in the treatment of depression, we repeated the analysis among only the encounters between patients and their primary care provider. To assess whether the removal of providers with fewer than 10 alert encounters over the entire study period biased the main analysis, the inclusion criteria were also relaxed to include them. Finally, physicians were trialed as fixed effects, with patients remaining as random effects, in order to remove unobserved heterogeneity between physicians (a sketch of this fixed-effects variant appears below).
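A sketch of the fixed-effects variant, under the same assumed names; note that physician-level covariates (sex, specialty, graduation decade) are absorbed by the physician indicators and must be dropped.

```r
# A sketch of the fixed-effects sensitivity model: physician indicator
# variables replace the physician random intercept, while patients stay
# as random effects. Physician-level covariates are collinear with the
# indicators and are therefore omitted. Names remain assumptions.
library(lme4)

m_fe <- glmer(
  adhered ~ factor(physician_id) + recent_q + competing_cat + pcp_visit +
    patient_sex + patient_age + phq9 + elix_score +
    depression_on_list + antidepressant_rx +
    (1 | patient_id),
  data = enc, family = binomial
)
```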

Data extraction was performed in Microsoft SQL Server Management Studio 17. Statistical analyses were performed in RStudio, and graphical representations were created in JMP® software. Elixhauser comorbidity presence was computed in R using the comorbidity package. We considered P values less than .05 statistically significant. This study was approved by the institutional review board of the University of California, Los Angeles.

RESULTS

During the three-and-a-half-year study period, there were a total of 55 649 encounters with associated depression screening alerts meeting the inclusion criteria (Supplementary Table S1). These occurred among a total of 40 474 unique patients and 418 unique physicians.

Physician characteristics

Two-thirds (66%) of the 418 physicians included in the study had graduated from medical school after 2000, and nearly half (44%) had graduated after 2010 (Supplementary Table S2). Female physicians made up a slightly larger proportion (58%). Nearly three-quarters (73%) of the physicians were internists or family practitioners. The median number of depression alerts seen by the physicians over the entire study period was 72 (interquartile range 28–179).

Patient and encounter characteristics

Among the 55 649 encounters with depression alerts, nearly three-quarters (74%) were between patients and their primary care physician (Table 1). Approximately 90% of visits had a scheduled duration of 30 min or less. While the number of alerts increased between 2017 and 2019, the number of office visits with depression alerts decreased sharply after February 2020 (see Figure 2a), as expected due to the Covid-19 pandemic. Nearly two-thirds of visits were with female patients, and the median patient age was 42 (interquartile range 30–59). More than 90% of patients lacked an active depression diagnosis on their problem list at the time of their visit, and 61% had not received an antidepressant prescription over the preceding year.

Table 1.

Encounter, physician and patient characteristics for office visits with an accompanying depression alert

Characteristics N = 55 649 encounters
Visit is with patient’s PCPa, n (%)
 No 14 706 (26)
 Yes 40 943 (74)
Appointment length, n (%)
 15 min or less 21 809 (39)
 20–30 min 27 943 (50)
 Greater than 30 min 5897 (11)
Encounter year, n (%)
 2017 2205 (4)
 2018 14 127 (25)
 2019 21 969 (39)
 2020 14 759 (27)
 2021 2589 (5)
Competing BPA alert count, n (%)
 0 21 203 (38)
 1 22 034 (40)
 2 9161 (16)
 3 or more 3251 (6)
Provider specialty, n (%)
 Family medicine 21 635 (39)
 Internal medicine 20 617 (37)
 Medicine-pediatrics 7657 (14)
 Other 757 (1)
 Psychiatry 4983 (9)
Provider medical school graduation decade, n (%)
 1980s or earlier 6364 (11)
 1990s 13 803 (25)
 2000s 11 589 (21)
 2010s 23 893 (43)
Number of alerts seen by provider during last 90 days, n (%)
 <125 13 472 (24)
 125–333 13 925 (25)
 334–720 14 093 (25)
 >720 14 159 (25)
Provider sex, n (%)
 Female 34 479 (62)
 Male 21 170 (38)
Patient sex, n (%)
 Female 36 413 (65)
 Male 19 236 (35)
Patient with depression active on problem list, n (%)
 No 50 479 (91)
 Yes 5170 (9)
Patient with antidepressant Rx in last year, n (%)
 No 34 063 (61)
 Yes 21 586 (39)
Patient age, median (IQR) 42 (30 to 59)
Patient PHQ-9, median (IQR) 10 (7 to 15)
Comorbidity Scoreb, median (IQR) −1 (−5 to 2)

Abbreviations: PHQ-9: Patient Health Questionnaire-9; PCP: primary care provider; Rx: prescription.

Percentages may not total 100 because of rounding.

a Encounter provider is the patient’s attributed primary care provider.

b AHRQ-weighted Elixhauser comorbidity score.

Figure 2. Monthly exposure and response rates for depression alerts. (a) Total number of encounters accompanied by a depression alert per month. (b) Percent of depression alerts with adherence per month. (c) Average number of other clinical guidance alerts during each encounter with a depression alert. (d) Mean number of clinical guidance alerts (any) seen by the encounter provider over the 90 days prior to the encounter. Abbreviations: EHR: electronic health record; CA: California.

Longitudinal adherence rates

The number of encounters with depression alerts (Figure 2a) increased steadily from September 2017 to January 2019. During this initial period, the monthly average adherence to the depression alert remained between 50% and 60% with little change (Figure 2b). However, in June 2019, the average number of competing alerts appearing during the same encounter as the depression alert doubled, from 0.6 to 1.2 (Figure 2c). Following this change, the monthly average adherence to the depression alert gradually decreased.

Accordingly, the monthly average number of all alerts seen by physicians over the 90 days preceding each encounter (Figure 2d) also began to increase. During this period, the upward trend of this lagging indicator mirrored the downward trend in monthly average alert adherence. Notably, the overall decline in depression alert adherence clearly preceded two major system disruptions: a major EHR upgrade in November 2019 and the California Covid-19 shutdown in March 2020.

Modeling alert compliance

The median unadjusted provider-level depression alert adherence rate was 35% (IQR among the 418 providers, 7%–72%). The initial model included both patient and physician random effects; however, because the patient random effect contributed substantially less variance, it was dropped from the final model for computational simplicity (Supplementary Table S3). The final model used physician random effects with adjustment for potential confounders (Table 2). The likelihood of alert adherence was higher among encounters with physicians in the second versus the first quartile of recent alert count (aOR 1.120; P = .003). However, encounters with physicians in the higher quartiles of recent alert count were much less likely to be accompanied by depression alert adherence (Q3 aOR 0.71, Q4 aOR 0.38; P < .001 for both). Compared with encounters with no competing alerts, encounters with one or more competing alerts were uniformly less likely to be followed by depression alert adherence; the lowest adjusted odds ratio was for encounters with three or more competing alerts versus none (aOR 0.78; P < .001).

Table 2.

Generalized linear mixed-effect model of depression screen alert adherence

Covariate Adjusted probability of alert adherencea (95% CI) Adjusted OR of PHQ-9 alert adherence (95% CI)b P value
Provider sex
 Female 33.9 (27.4–41.1) Reference
 Male 29.9 (22.9–38.0) 0.833 (0.522–1.331) .445
Alerts seen by provider in last 90 daysc <.001*
 <125 (Q1) 39.3 (33.7–45.2) Reference
 125–333 (Q2) 42.0 (36.3–48.0) 1.120 (1.039–1.208) .003
 334–720 (Q3) 31.5 (26.5–36.9) 0.709 (0.655–0.768) <.001
 >720 (Q4) 19.8 (16.2–24.0) 0.382 (0.350–0.417) <.001
Competing alerts during encounterd <.001*
 0 35.2 (29.9–40.9) Reference
 1 30.8 (26.0–36.2) 0.822 (0.775–0.872) <.001
 2 30.6 (25.7–36.0) 0.813 (0.753–0.879) <.001
 3 or more 29.7 (24.6–35.3) 0.778 (0.696–0.869) <.001
Appointment with PCPe
 No 29.9 (25.1–35.2) Reference
 Yes 33.3 (28.2–38.8) 1.170 (1.096–1.249) <.001
Physician specialty <.001*
 Internal medicine 35.2 (27.8–43.4) Reference
 Family medicine 35.0 (26.3–45.0) 0.993 (0.580–1.701) .980
 Medicine-pediatrics 45.8 (28.6–64.0) 1.554 (0.683–3.537) .293
 Other 11.5 (4.9–24.7) 0.239 (0.089–0.645) .005
 Psychiatry 7.8 (4.0–14.2) 0.156 (0.071–0.345) <.001
MD Medical School Graduation Decade .538*
 1980s or earlier 31.6 (19.7–46.4) Reference
 1990s 37.3 (26.4–49.7) 1.290 (0.583–2.856) .890
 2000s 26.6 (18.1–37.5) 0.787 (0.351–1.762) .569
 2010s 32.7 (25.5–40.7) 1.052 (0.513–2.154) .890
Patient sex
 Female 31.7 (26.7–37.1) Reference
 Male 33.6 (28.5–39.2) 1.092 (1.037–1.151) .001
Patient with active depression problemf
 No 33.0 (27.9–38.3) Reference
 Yes 26.5 (22.0–31.6) 0.733 (0.678–0.793) <.001
Patient prescribed antidepressant in last year
 No 33.5 (28.4–39.0) Reference
 Yes 30.6 (25.8–36.0) 0.879 (0.836–0.924) <.001
Patient PHQ-9 Scoreg 1.049 (1.044–1.054) <.001
Patient ageg 0.996 (0.994–0.997) <.001
AHRQ-weighted Elixhauser Comorbidity Scoreg 0.995 (0.992–0.998) <.001

Abbreviations: OR: odds ratio; PCP: primary care physician; MD: medical doctor; PHQ-9: Patient Health Questionnaire-9.

The target outcome was positive adherence to the depression alert by the physician during the encounter. Patient race and ethnicity were trialed as model covariates but were not found to have statistical significance.

P values shown in bold are less than .05.

a Calculated by marginal effects at the means (MEM).

b Adjusted odds ratios from generalized mixed-effects modeling. Clustering performed at the level of the individual physician with random effects.

c Count of all clinical guidance alerts seen by the attending of record over the 90 days preceding the encounter.

d Count of other clinical guidance alerts (besides the depression alert) seen by the attending during the encounter.

e Primary care relationship between physician and patient on the date of the encounter.

f Patient’s problem list already contains one of the depression problems offered within the depression alert.

g Unit odds ratio.

* Overall significance of categorical variable assessed with ANOVA between models with and without the variable.

No difference was found between male and female physicians in terms of alert adherence. Similarly, medical school graduation decade had no significant effect on alert adherence probability. However, the likelihood of alert adherence increased with the PHQ-9 value presented within the alert (unit aOR 1.05; P < .001) and decreased with patient age (unit aOR 0.996; P < .001) and patient comorbidity (unit aOR 0.995; P < .001). Physicians seeing male patients were more likely than those seeing female patients to use the depression alert (aOR 1.09; P < .001).

Interaction of recent alert count and competing alert count

An interaction term between recent alert count and competing alert count was added to the first model, resulting in better model fit (ANOVA P value <.001). Within the first quartile of recent alert count, the adjusted probability of alert adherence decreased from 38% with 0 competing alerts to 34% with ≥3 competing alerts, but the difference was not statistically significant (Figure 3). In the second quartile of recent alert exposure, however, the adjusted odds of alert adherence were substantially lower with any competing alerts during the visit (aOR for ≥3 vs 0 competing alerts, 0.70; P = .01). The same was true in the third quartile of recent alert exposure (aOR for ≥3 vs 0 competing alerts, 0.56; P < .001). In the fourth quartile of recent alert exposure, there was no statistically significant difference in alert adherence between these levels (Supplementary Table S5).

Figure 3. Marginal effect plot of the interaction between the total number of alerts seen by the provider over the past 90 days and the number of competing alerts during the encounter.

Sensitivity analyses

Our findings were qualitatively unaffected by restricting encounters to those between patients and their attributed primary care provider (Supplementary Table S7). This was also true after including physicians who saw fewer than 10 depression alerts during the study period (Supplementary Table S8). Treating physicians as fixed effects resulted in no significant difference in the odds ratios of recent and competing alert counts on alert adherence (Supplementary Table S9).

DISCUSSION

In investigating a gradual yet substantial decline in depression screening alert adherence by physicians at a large academic health system, we found that physicians were less likely to respond within the alerts when they had recently seen greater numbers of alerts. Depression screening alert adherence was also substantially diminished by competing alerts occurring in the same encounter; however, interaction analysis revealed that this negative association occurred only during visits with physicians in the middle range of recent alert count. We also found that physicians were more likely to respond to alerts for patients with higher depression severity scores, and less likely to respond to alerts during encounters with older and more comorbid patients. A decrease in alert adherence was also found for patients with an active depression diagnosis already on the problem list and for those with an antidepressant prescription written in the preceding year.

It is reasonable to assume that major changes in alert adherence rates over time should occur soon after a causal event. In the case of the decline in use of the depression screening alert, we expected that the decline would have followed specific system-wide changes. However, our timeline showed that the decline began before two expected drivers: (1) a major EHR system upgrade and (2) the California Covid-19 shutdown, the latter being associated with major shifts in office visit volume and workflow. Analysis of alert adherence over time did not reveal the decline to be stepwise, but rather gradual over numerous months, and we found that the provider’s cumulative experience over the last quarter was more influential than competing alerts within the same visit. To our knowledge, the relationship between changing healthcare system factors, such as the introduction of competing alerts, and individual alert performance has not been examined in this fashion. Importantly, these findings indicate that declining adherence can manifest over an extended period after the inciting event, complicating both real-time detection of alert performance decay and retrospective analyses to identify drivers of waning alert use. These findings should underscore to vendors and administrators of EHR systems the criticality of long-term alert monitoring, an easily overlooked aspect of CDS system management. This need will grow increasingly important as CDS systems age and the original clinical stakeholders and alert designers move on.

We found an initial, slight improvement in depression screening alert adherence with increasing recent alert count, suggesting a sensitizing effect in this lower exposure range. However, for encounters with providers whose recent alert count was above the median, the likelihood of depression screening alert adherence was dramatically reduced. Although a small number of studies have reported a lack of correlation between alert receipt frequency and acceptance rate,14,23 there has been more evidence supporting a negative relationship between alert exposure magnitude and adherence. A comprehensive analysis of both clinical guidance and drug-related alerts17 demonstrated a marginally decreased likelihood of alert acceptance with increased exposure across an array of alert types. Interestingly, in that study, a subanalysis of a depression-related alert did not find evidence of alert fatigue. In any case, the overall decrease in alert acceptance with receipt count, on an individual provider basis, was attributed to either (1) alert fatigue or (2) a lesser need for clinical guidance among providers who see the alert more often and are therefore more likely to be familiar with management of the underlying disease. Our findings are more consistent with alert fatigue, given the initial sensitization in alert adherence up to the second quartile of recent alert exposure, followed by a sharp decrease in the upper half.

Another study by the same group of researchers11 provided further evidence that primary care physicians were less likely to accept alerts when more competing alerts of any kind were presented in the same encounter. Although the number of unique alerts and the overall alert counts were much higher than in the present study, many alerts were noted to repeat for the same patient in the same year. These findings were thought to support the conclusion that higher numbers of repetitive and uninformative alerts were associated with alert noncompliance. In a similar vein, our measure of recent alert exposure included all alerts in the preceding 90 days, whether appropriate or not. However, our measured outcome is specific to a single, noninterruptive alert that appears in response to a well-defined clinical trigger and offers follow-up actions covering a broad range of appropriate responses. Our findings suggest that even appropriate alert usage can be deleteriously affected by greater numbers of heterogeneous, mixed-quality alerts. Managers of CDS systems should not consider the success of a newly implemented alert durable, but rather inherently dependent on changing provider attitudes as well as on other EHR stimuli. Simple monitoring of alert performance over time should be supplemented with a suite of contextual, encounter-level metrics to better understand the overall environment in which alert performance may be degrading.

While the effect of competing alerts during a single visit was found to have a significantly negative impact on depression screening alert adherence, interaction analysis revealed that this effect was particular to encounters with physicians in the middle two quartiles of the recent alert count. That is, for encounters with physicians seeing very low or very high recent volumes of alerts, adherence was made no worse with an increasing number of competing alerts. At the low end, it is conceivable that these providers are less likely to be experiencing alert fatigue and potentially less susceptible to cognitive overload when presented with multiple competing alerts. At the high end of the recent alert count, alert fatigue may be driving a floor effect with depression alert adherence unaffected by the number of competing alerts. However, for encounters with providers in the middle two quartiles, the presence of any competing alerts substantially reduced the likelihood of depression alert adherence. These findings suggest the potential benefit of orchestrating an adaptive “rationing” strategy, whereby low-priority alerts that may compete for a provider’s attention could be withheld when doing so may improve alert adherence.
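Purely as an illustration of such a rationing strategy (not something the paper implemented), a delivery rule conditioned on recent exposure might look like the following; the function, its parameters, and the quartile cutoff logic are all hypothetical.

```r
# Hypothetical adaptive rationing rule motivated by the interaction
# finding: withhold low-priority competing alerts for providers in the
# middle quartiles of recent exposure, where competition hurt adherence.
should_show_alert <- function(priority, recent_quartile, n_competing_shown) {
  if (priority == "high") return(TRUE)  # never withhold high-priority alerts
  if (recent_quartile %in% c(2, 3) && n_competing_shown >= 1) {
    return(FALSE)                       # ration in the middle quartiles
  }
  TRUE
}

should_show_alert("low", recent_quartile = 3, n_competing_shown = 2)  # FALSE
```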

There was a strong association between the magnitude of the patient’s depression score, which is shown at the top of the alert, and the likelihood of alert adherence. Prior studies of drug–drug interactions have shown that the probability of alert adherence may be directly related to the tiered risk presented within the alert.24,25 Our findings reinforce the notion that end-users respond dynamically to patient-specific, contextualizing information presented within alerts. To this end, designers of CDS should aim to display any quantitative criteria driving alert firing. Provision of information regarding the severity of the underlying disorder can spotlight particularly serious cases and potentially overcome some of the deleterious effects of alert fatigue.

Our study has limitations. Encounters in which nonattending providers were also exposed to the alert were excluded because of the difficulty of attributing follow-up responsibility; some attending physicians, for example those based primarily in resident clinics, may have been excluded for this reason. Additionally, to adjust for patient complexity, the AHRQ-weighted Elixhauser comorbidity score was calculated from all prior ICD-10 codes available in the EHR; where past medical history was not recorded, or was documented only in clinical notes, the comorbidity score was likely somewhat inaccurate.26 We also analyzed the impact of alert exposure on only one noninterruptive alert, and the impact may differ for other alerts. Finally, the physicians in the study were limited to a single academic medical center, so the findings may not generalize to nonacademic settings. Strengths of the study include the detailed, encounter-based modeling that accounted for particular features of each appointment and the long period of retrospective analysis.

In conclusion, we found that a successful depression alert was negatively affected by two separate components of alert fatigue: competing alerts during the same encounter and a learning effect from the overall number of alerts seen in the recent past. Health care systems should strive to actively monitor for declines in alert performance and for increases in overall alert exposure over extended periods of time. As changes and additions are made to the EHR on a continuous basis, diagnosing and treating underperforming CDS systems requires tools that can provide a holistic understanding of the entire, dynamic EHR milieu.

Supplementary Material

ocac191_Supplementary_Data

ACKNOWLEDGMENTS

The authors would like to thank Rong Guo and Sitaram Vangala for their statistical advice.

AUTHOR CONTRIBUTIONS

Each of the authors (DAM, YT, DAE, KMB, DSB) contributed to the work’s conception and design, analysis and interpretation, drafting and revising the manuscript, and final approval of the version to be published. DAM acquired the data. All authors agree to be accountable for all aspects of the work.

FUNDING

The NIH National Center for Advancing Translational Science (NCATS) through the UCLA Clinical and Translational Science Institute (CTSI), provided salary funding through Grant Number TL1TR001883 to DAM, who was a TL1 postdoctoral fellow.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

CONFLICT OF INTEREST STATEMENT

None declared.

Contributor Information

Douglas A Murad, Medical Informatics, Kaiser Permanente Southern California, San Diego, CA, USA.

Yusuke Tsugawa, Division of General Internal Medicine, Department of Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA.

David A Elashoff, Division of General Internal Medicine, Department of Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA.

Kevin M Baldwin, UCLA Health Information Technology, Los Angeles, CA, USA.

Douglas S Bell, Division of General Internal Medicine, Department of Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA; UCLA Health Information Technology, Los Angeles, CA, USA.

Data Availability

The data underlying this article are described in the article and in its online supplementary material. Additional data will be shared on reasonable request to the corresponding author.

REFERENCES

1. McGlynn E, Asch S, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003; 348 (26): 2635–45.
2. Higashi T, Shekelle P, Adams J, et al. Quality of care is associated with survival in vulnerable older patients. Ann Intern Med 2005; 143 (4): 274–81.
3. Middleton B, Sittig D, Wright A. Clinical decision support: a 25 year retrospective and a 25 year vision. Yearb Med Inform 2016; 25 (S 01): S103–16.
4. Eslami S, de Keizer N, Dongelmans D, et al. Effects of two different levels of computerized decision support on blood glucose regulation in critically ill patients. Int J Med Inform 2012; 81 (1): 53–60.
5. Jia P, Zhang L, Chen J, et al. The effects of clinical decision support systems on medication safety: an overview. PLoS One 2016; 11 (12): e0167683.
6. Escovedo C, Bell D, Cheng E, et al. Noninterruptive clinical decision support decreases ordering of respiratory viral panels during influenza season. Appl Clin Inform 2020; 11 (2): 315–22.
7. McGinn T, McCullagh L, Kannry J, et al. Efficacy of an evidence-based clinical decision support in primary care practices: a randomized clinical trial. JAMA Intern Med 2013; 173 (17): 1584–91.
8. Kwok R, Dinh M, Dinh D, et al. Improving adherence to asthma clinical guidelines and discharge documentation from emergency departments: implementation of a dynamic and integrated electronic decision support system. Emerg Med Australas 2009; 21: 31–7.
9. Haberman S, Feldman J, Merhi Z, et al. Effect of clinical-decision support on documentation compliance in an electronic medical record. Obstet Gynecol 2009; 114 (2): 311–7.
10. Martinez-Franco A, Sanchez-Mendiola M, Mazon-Ramirez J, et al. Diagnostic accuracy in family medicine residents using a clinical decision support system (DXplain): a randomized-controlled trial. Diagnosis 2018; 5 (2): 71–6.
11. Ancker J, Edwards A, Nosal S, et al.; with the HITEC Investigators. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak 2017; 17 (1): 9.
12. Ash J, Sittig D, Poon E, et al. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc 2007; 14 (4): 415–23.
13. Sheehan B, Kaufman D, Bakken S, et al. Cognitive analysis of decision support for antibiotic ordering in a neonatal intensive care unit. Appl Clin Inform 2012; 3 (1): 105–23.
14. Bryant A, Fletcher G, Payne T. Drug interaction alert override rates in the meaningful use era: no evidence of progress. Appl Clin Inform 2014; 5 (3): 802–13.
15. Weingart S, Toth M, Sands D, et al. Physicians' decisions to override computerized drug alerts in primary care. Arch Intern Med 2003; 163 (21): 2625–31.
16. Osheroff J. Improving Medication Use and Outcomes with Clinical Decision Support: A Step by Step Guide. Chicago, IL: HIMSS; 2009.
17. Ancker J, Kern L, Edwards A, et al.; HITEC Investigators. How is the electronic health record being used? Use of EHR data to assess physician-level variability in technology use. J Am Med Inform Assoc 2014; 21 (6): 1001–8.
18. Kroenke K, Spitzer R, Williams J. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med 2001; 16 (9): 606–13.
19. Gasparini A. Comorbidity: an R package for computing comorbidity scores. J Open Source Softw 2018; 3 (23): 648.
20. Moore BJ, White S, Washington R, Coenen N, Elixhauser A. Identifying increased risk of readmission and in-hospital mortality using hospital administrative data. Med Care 2017; 55 (7): 698–705.
21. Bates D, Maechler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. arXiv 2014. https://arxiv.org/abs/1406.5823. Accessed October 13, 2022.
22. Lüdecke D. ggeffects: tidy data frames of marginal effects from regression models. J Open Source Softw 2018; 3 (26): 772.
23. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006; 13 (2): 138–47.
24. Paterno MD, Maviglia SM, Gorman PN, et al. Tiering drug–drug interaction alerts by severity increases compliance rates. J Am Med Inform Assoc 2009; 16 (1): 40–6.
25. Seidling H, Phansalkar S, Seger D, et al. Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support. J Am Med Inform Assoc 2011; 18 (4): 479–84.
26. Singh B, Singh A, Ahmed A, et al. Derivation and validation of automated electronic search strategies to extract Charlson comorbidities from electronic medical records. Mayo Clin Proc 2012; 87 (9): 817–24.


