Applied Clinical Informatics. 2013 Mar 27;4(1):144–152. doi: 10.4338/ACI-2012-12-RA-0055

Provider Use of and Attitudes Towards an Active Clinical Alert

A Case Study in Decision Support

J Feblowitz 1,2,3, S Henkin 1,2, J Pang 1,2, H Ramelson 1,2,3, L Schneider 1,3, F L Maloney 2, A R Wilcox 4, DW Bates 1,2,3,5, A Wright 1,2,3
PMCID: PMC3644821  PMID: 23650494

Abstract

Background

In a previous study, we reported on a successful clinical decision support (CDS) intervention designed to improve electronic problem list accuracy, but did not study variability of provider response to the intervention or provider attitudes towards it. The alert system accurately predicted missing problem list items based on health data captured in a patient’s electronic medical record.

Objective

To assess provider attitudes towards a rule-based CDS alert system as well as heterogeneity of acceptance rates across providers.

Methods

We conducted a by-provider analysis of alert logs from the previous study. In addition, we assessed provider opinions of the intervention via an email survey of providers who received the alerts (n = 140).

Results

Although the alert acceptance rate was 38.1%, individual provider acceptance rates varied widely, with an interquartile range (IQR) of 14.8%-54.4%, and many outliers accepting none or nearly all of the alerts they received. No demographic variables, including degree, gender, age, assigned clinic, medical school or graduation year predicted acceptance rates. Providers’ self-reported acceptance rate and perceived alert frequency were only moderately correlated with actual acceptance rates and alert frequency.

Conclusions

Acceptance of this CDS intervention among providers was highly variable but this heterogeneity is not explained by measured demographic factors, suggesting that alert acceptance is a complex and individual phenomenon. Furthermore, providers’ self-reports of their use of the CDS alerting system correlated only modestly with logged usage.

Keywords: Patient problem list, electronic medical records, primary care

Introduction

Clinical decision support (CDS), when effectively implemented and integrated with electronic health records (EHRs), has the potential to improve the quality and cost-effectiveness of patient care [1-6]. CDS systems include a wide array of tools that are designed with the goal of improving provider decision-making at the point of care. Examples of CDS include health maintenance reminders (or alerts), order sets, drug-drug and drug-allergy checking, and automated laboratory test interpretation, among numerous others [7].

Although many trials of CDS interventions have proven successful, observed improvements are often modest in magnitude despite substantial investments of time and resources. Furthermore, many interventions have failed to improve clinical practice or outcomes altogether [1, 2]. Many barriers to effective CDS implementation exist, including poor system usability and integration, lack of acceptance of CDS recommendations by providers or overall failure of providers to utilize a CDS tool [8]. These barriers have proven challenging to overcome in part because users include a diverse range of providers of varying background, clinical expertise and individual workflow styles. In order to overcome these barriers, it may be helpful to develop an improved understanding of the multifactorial influences that dictate real-world provider use of an electronic CDS tool.

One relatively simple and common form of CDS is the clinical reminder, or alert, which is designed to call a provider’s attention to a specific care recommendation (e.g. vaccination, health maintenance screening, abnormal test result, or gap in documentation). At Brigham and Women’s Hospital, for example, alerts have been implemented to remind providers to administer influenza and pneumococcal vaccines, document tobacco status, order cholesterol tests and provide several other screenings based on current clinical guidelines. Although CDS systems, including clinical alerts, have been shown to increase compliance with care guidelines, the magnitude of these improvements has often been small and the success of individual interventions has been inconsistent [1, 2, 6]. Evidence also shows that passive decision support systems are less likely to alter physician behavior than those employing active alerts [1].

Background

In this study, we carried out a secondary analysis of a completed randomized, controlled trial that examined whether a CDS alert system that employs inference rules could improve problem list completeness [9, 10]. We characterized individual providers’ use of this system and compared it with their demographic information and self-reported assessment of the CDS tool. We hypothesized that by comparing individual provider usage of a set of validated, active clinical alerts, we might uncover potential reasons for the wide variability in provider use of CDS. Our goal was to determine whether and to what extent providers’ demographic traits or their subjective opinion of a CDS tool are correlated with their logged usage.

In a prior study [9], we designed and validated problem inference rules that use clinical and billing information to predict patient problems. Using a random sample of patients who had been seen at Brigham and Women’s Hospital (BWH) (Boston, MA), we created a set of prediction rules to infer which patients were likely to have a given problem. Validation of these rules demonstrated an overall positive predictive value of 83.9% and a sensitivity of 91.7%. The complete methodology, including the rule logic and knowledge base, is described elsewhere [9].

We then created an active clinical alerting system based on these rules and conducted a randomized, controlled trial of the alerts in BWH-affiliated primary care practices [10]. When the rule criteria were met, an alert was displayed to the provider suggesting problems to be added to the problem list (► Fig. 1). Providers could ignore, override or accept the alert. If the alert was ignored, it would reappear the next time the provider signed a note for that patient. If the alert was overridden, it would not be shown again for that patient. In the trial, we demonstrated that the alert dramatically increased completeness of the electronic problem list. Target problems were roughly three times more likely to be documented when the alerts were shown, with significant increases in problem documentation for 13 of 17 conditions. Complete results of the trial are reported in a prior publication [10].

Fig. 1 Screenshot of problem inference alerts

Methods

We designed the current study as a follow-up to the trial described above [10]. In order to assess provider experience with the CDS intervention, we aggregated final data for each provider from the intervention-arm clinics, including: the total number of alerts, the number of times alerts were ignored, the number accepted, the number overridden, and the number of unique alerts (excluding alerts that were shown multiple times for the same provider and patient). We defined the acceptance rate as the total number of alerts accepted divided by the number of unique alerts (this excludes duplicate instances of an alert that was ultimately accepted after being displayed more than once). We also calculated unique alerts per note: the number of unique alerts divided by the total number of notes for a provider. The number of notes written by a given provider approximates their visit volume, which allowed us to adjust for the variable amount of clinical time per provider.
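
These per-provider metrics are straightforward to compute from raw alert logs. The sketch below is illustrative only; the log schema (records with 'provider', 'patient', 'problem' and 'action' fields) is a hypothetical simplification, not the study's actual data model:

```python
from collections import defaultdict

def provider_metrics(alert_log, note_counts):
    """Compute per-provider acceptance rate and unique alerts per note.

    alert_log: list of dicts with keys 'provider', 'patient', 'problem',
               and 'action' ('accepted', 'overridden', or 'ignored').
               Repeated displays of the same alert appear as repeated rows.
    note_counts: dict mapping provider -> total notes written in the period.
    """
    unique = defaultdict(set)    # distinct (patient, problem) alerts per provider
    accepted = defaultdict(int)  # accepted alert count per provider
    for entry in alert_log:
        unique[entry['provider']].add((entry['patient'], entry['problem']))
        if entry['action'] == 'accepted':
            accepted[entry['provider']] += 1
    metrics = {}
    for provider, alerts in unique.items():
        n_unique = len(alerts)
        metrics[provider] = {
            # accepted alerts divided by unique alerts, as defined above
            'acceptance_rate': accepted[provider] / n_unique,
            # unique alerts divided by note count (a proxy for visit volume)
            'unique_alerts_per_note': n_unique / note_counts[provider],
        }
    return metrics
```

Note that an alert ignored once and accepted on a later display counts as one unique alert and one acceptance, matching the definition in the text.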

Usage data were available for a total of 236 providers. In order to focus our study on those providers for whom the alert was relevant, we limited our sample to clinical care providers with prescriptive authority (medical doctors [MD], nurse practitioners [NP], or physician assistants [PA]); nurses, medical assistants, and administrative staff were excluded from analysis. In addition, we excluded providers who had received fewer than 20 alerts over the six-month trial period in order to focus our analysis on those with at least a moderate amount of experience with the alert. After these exclusions, the final sample contained 140 providers (130 MDs, 6 NPs, and 4 PAs).

In addition to aggregating the results of the trial, we collected demographic information on all providers (n = 140) including degree, gender, clinic location, age, academic title, medical school (for MDs), and graduation year.

We then designed a survey instrument aimed at collecting additional provider data to assess providers’ subjective experience and opinions of the alert system. Providers were asked to self-report the number of years they had been practicing, the number of weekly half-day sessions devoted to clinical care, the average number of patients per half-day session, and years of experience using the electronic health record (EHR). In addition, they were asked to assess their experience with the CDS tool along several dimensions. Additional details of the survey responses are provided in the Results section.

Providers were invited to complete the survey online via an email invitation. The initial survey invitation was sent on January 28, 2011. Additional email reminders were sent as needed to increase participation and identical paper surveys were sent to a subset of participants who did not respond to email invitations (n = 50). The survey was closed on May 9, 2011.

The survey instrument was designed and administered using the REDCap secure online survey tool [11] and is available as a supplementary file. Approval for this study was obtained from the Partners HealthCare Institutional Review Board.

Data analysis was carried out using Microsoft Excel and SAS 9.2. Multiple linear regression was used to assess the effect of demographic variables on alert acceptance. Spearman’s rank correlation coefficient was used to evaluate several dimensions of survey responses reported on a nine-point Likert scale. Significance was set at a two-tailed p-value of 0.05.

Results

There were a total of 18,044 unique alerts shown for 271,003 notes written during the original trial study period (1 unique alert per 14.9 notes), with 6,876 accepted alerts (38.1%). The number of unique alerts per note differed substantially among providers (median = 1 alert per 9.1 notes; interquartile range (IQR) 1 per 18.5 notes to 1 per 4.2 notes). In addition, there was substantial heterogeneity among providers in response to the alert (median acceptance rate = 33.4%; IQR 14.8%, 54.4%). ► Fig. 2 shows the acceptance rate and alert frequency for providers, and demonstrates this considerable heterogeneity.

Fig. 2 Scatter plot of unique alerts per note versus acceptance rate. When criteria were met, an alert was displayed at the time the provider signed the patient note electronically. “Unique alerts per note” is defined here as the total number of unique alerts (excluding identical alerts that were shown more than once for a given patient) divided by the total number of notes written by the provider during the study period.

In total, 103 of 140 providers completed the online survey (response rate: 73.6%). Twenty-eight providers (20.0%) declined to participate and nine providers (6.4%) could not be reached via email. Seven of the 103 responding providers indicated that they had not received the alerts (despite electronic logs indicating that all had received at least the 20-alert inclusion threshold, and some far more). Non-responders were significantly more likely to be male, were significantly younger, had significantly fewer total notes (a proxy for visit volume) and had significantly fewer unique alerts than responders. Demographic characteristics of respondents and non-respondents are described in full in ► Table 1.

Table 1.

Demographic characteristics, clinical experience, alert frequency and alert acceptance of survey respondents (n = 103) and non-respondents (n = 37)

Characteristics Survey Sample (n = 140)
Respondents Non-Respondents P-value
Total 103 (73.6%) 37 (26.4%)
Median age (IQR) 41.0 (32.0–53.0) 31.0 (30.0–43.0) 0.019
Women 73 (70.9%) 16 (43.2%) 0.003
Role 0.145
• MD 93 (90.3%) 37 (100%)
• NP 6 (5.8%) 0
• PA 4 (3.9%) 0
Median Note Count (IQR) 1218 (289–3705) 234 (131–1977) 0.001
Top 25 Ranked Medical School 44 (47.3%) 21 (56.8%) 0.160
Graduation Year 0.098
• ≤ 10 25 (24.3%) 7 (18.9%)
• 11-20 21 (20.4%) 6 (16.2%)
• 21-30 13 (12.6%) 1 (2.7%)
• 31+ 15 (14.6%) 4 (10.8%)
• Unknown 29 (28.2%) 19 (51.4%)
Median Total Alerts (IQR) 223 (90–416) 118 (53–322) 0.028
Median Unique Alerts (IQR) 97 (50–178) 55 (30–103) 0.005
Unweighted Median Acceptance Rate (IQR) 15.2% (5.9–32.4%) 16.7% (6.0–37.2%) 0.695
Mean Acceptance Rate (SD)* 35.7% (25.6%) 35.7% (25.9%) 0.744
Median Years Practicing (IQR) ** 8.0 (2.0–20.0) N/A N/A
Median Half-Day Sessions/Week (IQR)** 4.0 (1.0–6.0) N/A N/A
Median Patients/Session (IQR)** 8.0 (5.0–9.0) N/A N/A
Median Experience Using EHR (IQR)** 6.0 (4.0–10.0) N/A N/A

* Reflects acceptance rate (total alerts accepted ÷ unique alerts displayed)

** Self-reported from survey responses, these data were not available for non-responders

Providers’ attitudes towards the intervention varied widely across our sample. Of the 103 survey respondents, users reported a median alert frequency of 5.0 (“a few times per week”; IQR 5.0–7.0) across the entire study period. For the 96 providers who reported receiving alerts, the median alert accuracy and self-reported acceptance rate were both 5.0 (“sometimes accurate,” IQR 3.0–6.0; and “accepted alerts sometimes,” IQR 3.0–7.0). Users reported rarely accepting alerts when covering patients for another provider (median = 2, IQR 1.0–3.0). Complete survey responses with medians are shown in ► Table 2 for the providers who reported receiving alerts (n = 96).

Table 2.

Provider attitudes towards alert intervention (n = 96)*

Survey Question** (1 = low, 9 = high) Median IQR (Q1–Q3)
Alert Frequency*** 5 2 (5–7)
Alert Accuracy 5 3 (3–6)
Perceived Alert Acceptance 5 4 (3–7)
Perceived Alert Acceptance (when covering) 2 2 (1–3)
Improved Accuracy 7 2 (5–7)
Improved Efficiency 6 2 (5–7)
Overall helpfulness 5 2 (3–5)
Turn Alerts Off / Keep Alerts On 5 4 (3–7)

*Excludes providers indicating that they “never” received alerts (and thus did not respond to intervention-specific questions other than alert frequency).

**See supplementary file for complete survey instrument and full text of survey questions.

***Estimated alert frequency during trial study period (1 = almost never, 3 = a few times a month, 5 = a few times a week, 7 = a few times a day, 9 = after almost every note)

To assess potential predictors of alert acceptance, we performed linear regression on both the complete sample of providers who participated in the trial (n = 140) and the complete sample of providers who responded to the survey (n = 103). For the trial sample, we assessed whether degree (MD/NP/PA), gender, age, medical school (top 25 or non-top 25), or graduation year predicted acceptance. For the survey sample, we assessed whether degree, gender, age, medical school (top 25), graduation year, years of experience, years of experience using an EHR or patient volume (patients/week) predicted acceptance rate. In the first model, assessing those participating in the trial, no factors predicted provider acceptance to a significant degree. In the second model, graduating from a top 25 medical school was significantly associated with an increased acceptance rate of alerts (r = 0.198, p = 0.009).
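
The regression step above was run in SAS; for readers who want to see the mechanics, the sketch below shows a generic ordinary-least-squares fit via the normal equations (X'X)b = X'y on made-up data, and is not the authors’ analysis code:

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y.

    X: list of predictor rows (an intercept column of 1s is prepended here);
    y: list of outcomes (e.g. per-provider acceptance rates).
    Returns the fitted coefficient vector b = [intercept, b1, b2, ...].
    """
    rows = [[1.0] + list(r) for r in X]  # prepend intercept column
    p = len(rows[0])
    # Build the normal-equation system: X'X and X'y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    # Solve by Gaussian elimination with partial pivoting
    a = [xtx[i] + [xty[i]] for i in range(p)]  # augmented matrix
    for col in range(p):
        pivot = max(range(col, p), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(col + 1, p):
            f = a[r][col] / a[col][col]
            for c in range(col, p + 1):
                a[r][c] -= f * a[col][c]
    b = [0.0] * p
    for i in reversed(range(p)):  # back substitution
        b[i] = (a[i][p] - sum(a[i][j] * b[j] for j in range(i + 1, p))) / a[i][i]
    return b
```

In practice a statistical package also supplies standard errors and p-values for each coefficient, which is what the significance tests in the text rely on; this sketch recovers only the point estimates.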

In addition, we assessed providers’ subjective opinion of the CDS alert using Spearman’s rank correlation coefficient. Providers’ self-reported acceptance rate (r = 0.270, p = 0.008) and their opinion of the tool’s helpfulness (r = 0.304, p = 0.003) and accuracy (r = 0.338, p = 0.001) were positively correlated with their acceptance rate. In addition, their subjective assessment of alert frequency was correlated with the frequency of total alerts (r = 0.365, p<0.001) and unique alerts (r = 0.446, p<0.001) that clinicians viewed during the intervention period.

Discussion

Previously, we found that approximately 40% of unique alerts were accepted by providers [10]. This aggregate acceptance rate might reasonably be explained by one of two usage patterns: (1) most providers accepting alerts at a similar frequency or (2) some providers accepting most alerts while others accept very few. In this study, we found a wide range of variability in provider acceptance rather than a consistent or bimodal response. Further, and somewhat unexpectedly, measured provider characteristics did not appear to explain differences in CDS use among providers. Instead, our analysis demonstrated a high degree of variability between individual providers in their response to and attitudes towards the CDS alert system that was not explained by a range of variables. Further, our previous work showed that most of the rules in our system had a positive predictive value of approximately 90%, yet the acceptance rate was less than 40%. The substantial difference between these rates appears to be largely attributable to idiosyncratic differences in provider acceptance, and this discrepancy should serve as a caution against designing and implementing CDS systems without adequate post-implementation assessment and user feedback.

Our findings suggest that variation in provider use of CDS is due in part to providers’ individual perceptions of how helpful and accurate the specific tool is. Providers’ subjective assessments of CDS appear to be moderately good predictors of their actual use, although a wide range of responses was observed and the correlations were less robust than might be expected. For example, providers’ subjective perception of how often they accepted an alert was only moderately correlated (r = 0.246) with their actual acceptance rates. Likewise, perceived alert frequency was only moderately correlated (r = 0.365) with the true frequency of alerts, and wide variation was observed across the sample.

Though these findings are not surprising, they have significant implications for how to assess the effectiveness and utility of an implemented CDS system. Providers’ subjective assessments of a tool’s helpfulness and accuracy, and their self-reported usage of the tool, are not strong predictors of actual usage patterns. When studying the utility and usability of CDS, it is therefore important that the primary analysis be based on concrete usage data from system logs. Self-reported assessments appear to be a poor proxy for actual usage patterns and should be used only to supplement the assessment of a CDS system.

In comparing logged usage data with reported usage, the gap between perceived and actual alert frequency deserves special comment. Surprisingly, providers reporting that they received alerts “never”, “almost never”, “rarely” or “a few times per month” in fact viewed an average of 2.84, 8.68, 3.41 and 5.55 alerts per half-day practice session, respectively. One possible explanation for this disparity is that some providers may spend little or no time reviewing alerts before accepting or declining them, possibly due to the frequently cited phenomenon of “alert fatigue” [12, 13].

We did not find evidence that the observed variation in provider use of CDS is tied to demographic characteristics of the provider, including age, gender, experience or patient volume. Interestingly, when controlling for other factors such as patient volume, having attended a top 25 medical school was significantly associated with increased alert acceptance. On the whole, these findings suggest that variation in provider use of CDS is more likely a complex and multi-factorial phenomenon dependent on factors such as personality, local culture, governance practices, system usability and individual provider workflow.

One specific finding of note is the lack of any observed connection between age and use of the evaluated CDS tool. In the current era of EHR adoption, there is a perception (but little formal research) that older providers may be less receptive to the implementation of new technologies, including CDS [14]. However, we found no evidence of this in our sample: there was no overall trend associated with age, and providers of all ages were among both the high- and low-frequency users of the CDS alerts. Although more research will be needed to validate our findings on provider demographics, our data suggest that use of CDS does not correlate closely with age or experience.

Our study has several potential limitations. First, our assessment of providers’ subjective experience using a problem-list-focused CDS tool was limited to a single alerting system at a single practice network using a self-developed outpatient EHR. Thus, our results may not be generalizable to other sites or to different EHRs and CDS systems. More research is needed to characterize the variation in provider use of CDS across multiple institutions and multiple systems. However, due to the validated accuracy of the alerts, the active and relatively uncontroversial nature of the recommendations and the previously successful randomized trial, we believe that these findings serve as a valid starting point for additional research to characterize provider use of CDS. Second, as a result of the small sample size (n = 140), this study had limited statistical power to detect potentially subtle differences in provider use of the alerting system. In the future, it may be necessary to conduct a large-scale analysis of provider usage patterns (even across multiple sites) in order to uncover the many minor factors that are likely to influence usage. Third, the use of a retrospective survey may be subject to recall bias or to changes in providers’ opinions of the intervention over time. In the future, prospectively collecting feedback at multiple points during implementation and deployment may provide a richer picture of providers’ usage patterns and attitudes towards the alert system. Fourth, the CDS intervention was designed for use by primary care providers (PCPs) and the trial sample consisted exclusively of PCPs. Thus, our results may not be generalizable to medical and surgical specialists.
Finally, although the CDS tool was previously validated for accuracy on a large randomized sample of patient charts [9] and an additional randomized audit of the implemented alerts revealed an accuracy of just under 90% [10], the system is nevertheless imperfect and part of the variability in provider use of the tool is likely attributable to variations in actual accuracy experienced by providers across the sample.

Conclusion

Variation in CDS use by providers appears to be a complex and multi-factorial phenomenon. We found that a range of provider demographic variables were not predictive of actual use of a CDS tool. In addition, our findings suggest that self-reported provider assessments of CDS may not provide a sufficiently accurate picture of actual use. More research, both quantitative and qualitative, is needed in order to further characterize the wide variability observed in provider CDS use.

Clinical Relevance Statement

Marked variation has been observed in the extent to which providers utilize available clinical decision support (CDS) systems and the usability of such systems remains an important issue. An improved understanding of the factors that affect providers’ use of CDS is needed in order to better design and implement these tools in the future and maximize their utility for practicing clinicians.

Conflicts of Interest

The authors declare that they have no conflicts of interest in this research.

Protection of Human and Animal Subjects

The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects and was reviewed by the Partners HealthCare Institutional Review Board.

Supplementary Material

Problem List Alerts Survey
ACI-04-0144-s001.pdf (35.4KB, pdf)

References

1. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, Sam J, Haynes RB. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005; 293(10): 1223-1238
2. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005; 330(7494): 765
3. Asch SM, McGlynn EA, Hogan MM, Hayward RA, Shekelle P, Rubenstein L, Keesey J, Adams J, Kerr EA. Comparison of quality of care for patients in the Veterans Health Administration and patients in a national sample. Ann Intern Med 2004; 141(12): 938-945
4. Osheroff JA, Teich JM, Middleton B, Steen EB, Wright A, Detmer DE. A roadmap for national action on clinical decision support. J Am Med Inform Assoc 2007; 14(2): 141-145
5. Bates DW, Cohen M, Leape LL, Overhage JM, Shabot MM, Sheridan T. Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc 2001; 8(4): 299-308
6. Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, Samsa G, Hasselblad V, Williams JW, Musty MD, Wing L, Kendrick AS, Sanders GD, Lobach D. Effect of clinical decision-support systems: a systematic review. Ann Intern Med 2012; 157(1): 29-43
7. Wright A, Maloney F, Feblowitz JC. Clinician attitudes toward and use of electronic problem lists: a thematic analysis. BMC Med Inform Decis Mak 2011; 11: 36
8. Reisman Y. Computer-based clinical decision aids. A review of methods and assessment of systems. Med Inform (Lond) 1996; 21(3): 179-197
9. Wright A, Pang J, Feblowitz JC, Maloney FL, Wilcox AR, Ramelson HZ, Schneider LI, Bates DW. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record. J Am Med Inform Assoc 2011; 18(6): 859-867
10. Wright A, Pang J, Feblowitz JC, Maloney FL, Wilcox AR, McLoughlin KS, Ramelson H, Schneider L, Bates DW. Improving electronic problem list completeness through clinical decision support: a randomized, controlled trial. J Am Med Inform Assoc 2012; 19(4): 555-561
11. Harris P, Taylor R, Thielke R, Payne J, Gonzalez N, Conde J. Research electronic data capture (REDCap) – a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009; 42(2): 377-381
12. Ash J, Sittig DF, Campbell EM, Guappone KP, Dykstra RH. Some unintended consequences of clinical decision support systems. AMIA Annu Symp Proc 2007; 2007: 26-30
13. Phansalkar S, van der Sijs H, Tucker AD, Desai AA, Bell DS, Teich JM, Middleton B, Bates DW. Drug-drug interactions that should be non-interruptive in order to reduce alert fatigue in electronic health records. J Am Med Inform Assoc 2012 Sep 25 [Epub ahead of print]
14. Holmes C, Brown M, Hilaire DS, Wright A. Healthcare provider attitudes towards the problem list in an electronic health record: a mixed-methods qualitative study. BMC Med Inform Decis Mak 2012; 12: 127


Articles from Applied Clinical Informatics are provided here courtesy of Thieme Medical Publishers
