Abstract
Background
Clinical decision support systems, including electronic alerts, ideally provide immediate and relevant patient-specific information to improve clinical decision-making. Despite the growing capabilities of such alerts in conjunction with an expanding electronic medical record, there is a paucity of information regarding their perceived usefulness. We surveyed healthcare providers' opinions concerning the practicality and efficacy of a specific text-based automated electronic alert for acute kidney injury (AKI) in a single hospital during a randomized trial of AKI alerts.
Methods
Providers who had received at least one electronic AKI alert in the previous 6 months, as part of a separate randomized controlled trial (clinicaltrials.gov #01862419), were asked to complete a survey concerning their opinions about this specific AKI alert system. Individual approval of the alert system was defined by a provider's desire to continue receiving the alert after termination of the trial.
Results
A total of 98 individuals completed the survey, including 62 physicians, 27 pharmacists and 9 other providers. Sixty-nine percent of respondents approved of the alert, with no significant difference among the professions (P = 0.28). Alert approval was strongly correlated with the belief that the alerts improved patient care (P < 0.0001), and negatively correlated with the belief that alerts did not provide novel information (P = 0.0001). With each additional 30 days of trial duration, the odds of approval decreased by 20% (3–35%) (P = 0.02).
Conclusions
The alert system was generally well received, although approval waned with time. Approval was correlated with the belief that this type of alert improved patient care. These findings suggest that perceived efficacy is critical to the success of future alert trials.
Keywords: acute kidney injury, alert, alert fatigue, approval, clinical decision support
Introduction
Clinical decision support (CDS) systems include alerts and guidelines that assist physicians in diagnosing and treating patients using the patients' monitored status and their available medical information [1]. The optimal use of CDS systems by providers has the potential to lower costs, provide efficient healthcare and increase patient convenience [2]. In the hospital setting, randomized trials have shown the efficacy of computerized alert systems in improving physician performance and, in some cases, patient outcomes [3, 4]. Systematic reviews of such trials, summarized by Trowbridge and Weingarten in 2001, concluded that CDS systems have no major adverse effects and that widespread implementation of such systems is feasible [5]. Two systematic reviews have demonstrated that CDS systems improve physician performance (on diverse metrics) 64–68% of the time [6, 7]. Several of the studies included in these reviews demonstrated that alerts in particular can change physician behavior, including response time to laboratory results [8], drug prescribing [9, 10], preventive care [7] and disease management [7]. Because of this effectiveness, the use of alerts in clinical settings is becoming more common across many disciplines of medicine.
However, despite this demonstrated usefulness, there are challenges to implementing CDS systems in a clinical setting. A frequent critique of CDS systems is ‘alert fatigue’, described by van der Sijs et al. [11] as ‘the mental state that is the result of too many alerts consuming time and mental energy’. CDS systems commonly over-alert, leading many physicians to close alerts or pop-ups without reading or acknowledging them [1]. Alert fatigue can thus lead to the overriding of both important and irrelevant alerts as user dissatisfaction increases. This correlates with providers viewing CDS-based alerts as an impediment to workflow [1], a perspective that stems from having to check automated notifications and incorporate their suggestions into clinical decisions during patient treatment. Alert effectiveness may also decrease over the long term as the novelty of a new alert wears off and discontent with inappropriate or intrusive alerts mounts.
We conducted a large, randomized controlled trial of automated electronic alerts for acute kidney injury (AKI), in which the intervention was not significantly associated with clinical outcomes of death, dialysis or change in creatinine [12]. The purpose of this ancillary study was to learn more about the providers' opinions regarding this style of AKI alert. We hypothesized that providers' approval of the alert would be negatively correlated with their perception of impediment to workflow, and positively correlated with their impressions of alert effectiveness.
Materials and methods
Study population
We conducted this study in parallel with the aforementioned clinical trial. In that trial, front-line providers and pharmacists of patients randomized to the alert arm were sent a single text page reading in part ‘Your patient, [initials], in room [room number] has been identified as having acute kidney injury according to the latest creatinine value. Please consider diagnostic and therapeutic options.’ Front-line providers included interns, residents, physician assistants (PAs), nurse practitioners (NPs) and attending physicians. We approached providers identified as potential alert targets (i.e. providers whose patients had been randomized to the alert or usual care arm of the trial) during one weekday clinical shift and asked them to complete the survey. Because we approached providers during the blinded phase of the trial, some may have cared only for control patients. Providers were eligible to complete the survey if they reported receiving at least one trial-related AKI alert in the previous 6 months and had not already completed the survey. Surveys were administered throughout the duration of the trial. We did not oversample any group (e.g. pharmacists versus physicians), in order to accurately assess hospital-level alert approval overall. All providers gave informed consent to participate in the study, which was approved by the University of Pennsylvania Institutional Review Board.
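The alert was triggered by the patient's most recent creatinine value; the exact trigger logic is described in the primary trial report [12] rather than here. Purely for illustration of how a creatinine-based trigger can be implemented, the minimal sketch below applies a KDIGO-style creatinine rule. It is an assumption for illustration, not the trial's actual algorithm.

```python
from datetime import datetime, timedelta
from typing import NamedTuple

class CreatinineResult(NamedTuple):
    value_mg_dl: float
    drawn_at: datetime

def meets_kdigo_style_aki(results: list[CreatinineResult]) -> bool:
    """Hypothetical KDIGO-style creatinine rule: a rise of >=0.3 mg/dl within 48 h,
    or a rise to >=1.5x the lowest prior value. Illustrative only; not the trial's algorithm."""
    results = sorted(results, key=lambda r: r.drawn_at)
    if len(results) < 2:
        return False
    latest, prior = results[-1], results[:-1]
    # Absolute rise of >=0.3 mg/dl relative to any value drawn in the preceding 48 h
    recent = [r.value_mg_dl for r in prior
              if latest.drawn_at - r.drawn_at <= timedelta(hours=48)]
    if recent and latest.value_mg_dl - min(recent) >= 0.3:
        return True
    # Relative rise of >=50% over the lowest prior (baseline) value
    baseline = min(r.value_mg_dl for r in prior)
    return latest.value_mg_dl >= 1.5 * baseline
```

In such a design, a rule of this kind would be evaluated each time a new creatinine result posts, and a single text page would be sent the first time it returns true for a given patient and provider.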
Survey
In addition to basic demographic information, the survey (Supplementary data, Appendix A) comprised 23 questions regarding the alert itself and the providers' responses to it. With the exception of the final question, all items were measured on a five-point Likert scale. Questions addressed the logistics of the alerts (number received, impediment to workflow), their usefulness (AKI recognition, AKI documentation) and the behavioral responses they prompted (diagnostic and therapeutic management). Certain survey questions were directed only to non-pharmacists (e.g. ‘In general, the AKI alert system led me or my team to avoid testing with contrast’). We also included an open-ended question: ‘What changes would you recommend to the AKI electronic alert system?’ We administered the survey via a tablet computer; once the tablet was provided, the study coordinator allowed the participant to complete the survey in private. Surveys were administered over discrete 2-week periods at three time points during the trial (first month, fourth month and sixth month).
Primary outcome
The final question of the survey served as our measure of alert approval and read ‘After the AKI alert trial ends, would you like to continue to receive AKI alerts?’ Practitioners were required to answer ‘yes’ or ‘no’. In all analyses, we defined alert approval as a ‘yes’ response to this question.
Statistical methods
We categorized participants into non-pharmacist providers (including physicians, PAs and NPs) and pharmacists. We present continuous measures as median (interquartile range [IQR]) and categorical measures as counts or proportions. We assessed the association between continuous measures (such as the survey responses) and categorical variables using the Wilcoxon rank-sum test. We used chi-square tests to compare categorical variables. We used the absolute value of rank-sum z-scores to assess the strength of association between survey responses and the primary outcome. We used Cronbach's alpha statistic to assess the internal consistency (reliability) of the survey. This measure indicates the degree of within-test correlation among survey responses, with higher values suggesting that the survey more consistently evaluates a unified underlying theme (in this case, satisfaction with the alert). We used logistic regression to examine the association between alert approval and the timing of survey administration.
All analyses were conducted in Stata v. 14.1 (Stata Corp, College Station, TX, USA).
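For readers who wish to see the analytic steps concretely, the sketch below re-implements the core analyses in Python (the study itself used Stata, as noted above). The toy data frame, values and variable names are assumptions for illustration only, not trial data.

```python
# Minimal sketch of the analyses described above, written in Python for illustration
# (the study itself used Stata 14.1). The toy data and variable names are assumptions.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Toy data: one row per survey respondent
df = pd.DataFrame({
    "improved_care": [4, 5, 3, 2, 4, 1, 5, 3, 2, 4],   # 5-point Likert item
    "approved":      [1, 1, 1, 0, 1, 0, 1, 0, 0, 1],   # 1 = wants alerts to continue
    "trial_day":     [10, 25, 40, 70, 90, 120, 130, 150, 160, 170],
    "role":          ["MD", "RPh", "MD", "MD", "RPh", "MD", "Other", "MD", "RPh", "MD"],
})

# Wilcoxon rank-sum test: do Likert responses differ between approvers and non-approvers?
z, p_ranksum = stats.ranksums(df.loc[df.approved == 1, "improved_care"],
                              df.loc[df.approved == 0, "improved_care"])
strength = abs(z)  # |z| can be used to rank the strength of association across questions

# Chi-square test: does approval differ across provider roles?
chi2, p_chi2, dof, expected = stats.chi2_contingency(pd.crosstab(df["role"], df["approved"]))

# Cronbach's alpha: internal consistency of a block of Likert items
def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variance_sum / total_score_variance)

alpha = cronbach_alpha(pd.DataFrame({
    "q1": [4, 5, 3, 2, 4], "q2": [4, 4, 3, 1, 5], "q3": [5, 5, 2, 2, 4],
}))

# Logistic regression: alert approval versus survey timing, per 30 days of trial duration
X = sm.add_constant(df["trial_day"] / 30)
fit = sm.Logit(df["approved"], X).fit(disp=False)
or_per_30_days = np.exp(fit.params["trial_day"])  # e.g. 0.80 means 20% lower odds per 30 days

print(p_ranksum, p_chi2, alpha, or_per_30_days)
```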
Results
Of the 133 providers we approached to complete this survey, 35 (26%) had not received an alert in the previous 6 months. All of the remaining 98 agreed to participate, giving a final sample of 27 pharmacists and 71 non-pharmacist providers. Characteristics of the participants appear in Table 1.
Table 1. Characteristics of survey participants
Characteristic | n (%) |
---|---|
Total | 98 |
Physicians | 62 (63%) |
PGY-1 | 36 (37%) |
PGY-2 | 12 (12%) |
PGY-3 | 7 (7%) |
Not in training | 7 (7%) |
Pharmacists | 27 (28%) |
<1 year of experience | 1 (1%) |
1 to <2 years of experience | 5 (5%) |
2 to <3 years of experience | 2 (2%) |
3 to <4 years of experience | 1 (1%) |
≥4 years of experience | 10 (10%) |
Other providers | 9 (9%) |
1 to <2 years of experience | 2 (2%) |
2 to <3 years of experience | 1 (1%) |
≥3 years of experience | 4 (4%) |
Numbers are raw counts and percentages of the total population.
PGY, post-graduate year.
Participants reported receiving a median of 1 alert daily over the past 30 days (range 1–3). Pharmacists reported receiving significantly more alerts daily than non-pharmacist providers (P = 0.004).
Survey reliability
The survey's reliability was measured in three parts: questions answered by all providers, questions directed to pharmacists only and questions directed to non-pharmacist providers only. Across responses from all participants, the survey had a Cronbach's alpha of 0.75, demonstrating adequate internal consistency. The Cronbach's alpha coefficient for the pharmacist-oriented questions was 0.78, also indicating adequate internal consistency. When the analysis was limited to the non-pharmacist-oriented questions, internal consistency was very strong (alpha = 0.93). These findings suggest that the survey consistently evaluated the underlying construct of provider alert approval and acceptance.
Approval of AKI alerts
The majority of the sample population, 68 of the 98 (69%), approved of the alert. This rate was similar across the groups studied and included 64% of physicians, 81% of pharmacists and 67% of other providers (P = 0.28). When analyzed by level of training, there were no significant differences in alert approval among interns, residents and attending physicians.
The relationship between alert approval and responses to various survey questions is shown in Table 2. The single strongest correlation between alert approval and a survey response was with the response to the statement ‘In general, the AKI alert system improved the care of my patients’ (P < 0.0001). Of the 40 individuals who indicated agreement with that statement, 36 (90%) approved of the alert, compared with 2 of 14 (14%) of those who disagreed with the statement. The second most strongly correlated factor was the response to the question ‘I was already aware that the patient had AKI when I received an alert’. Of the 77 individuals who answered ‘All of the time’ or ‘Most of the time’, the approval rate was 64%, compared with 90% among those who answered ‘Sometimes’, ‘Rarely’ or ‘Never’.
Table 2. Alert approval by provider type and by individual survey responses
| Approved alert | Did not approve alert | P-value |
---|---|---|---|
n | 68 | 30 | |
Demographics | |||
Physician | 40 | 22 | 0.28a |
Pharmacist | 22 | 5 | |
Other provider | 6 | 3 | |
Individual survey responses | Median (IQR) | Median (IQR) | |
The amount of alerts I received adversely impacted overall patient care. | 1 (1–2) | 2 (1–2) | 0.25 |
The amount of alerts I received impeded my workflow. | 1 (1–2) | 2 (2–2) | 0.0002 |
I was already aware that the patient/s had AKI when I received an alert. | 4 (3–4) | 5 (4–5) | 0.0001 |
Provider behavior | |||
In general, the electronic AKI alert system led me to document AKI (write it in the chart) as a diagnosis more frequently. | 3 (2–4) | 2 (1–2.5) | <0.0001 |
In general, the electronic AKI alert system led me or my team to recommend redosing or discontinuing certain medications. | 3 (2–3) | 2 (1–3) | 0.04 |
In general, the AKI alert system led me or my team to change IV fluid management earlier. | 3 (2–4) | 2 (1–2) | 0.0001 |
In general, the AKI alert system led me or my team to transfer the patient to the ICU more frequently.b | 1 (1–2) | 1 (1–1) | 0.23 |
In general, the AKI alert system led me or my team to delay discharge of the patient. | 1 (1–2) | 1 (1–2) | 0.14 |
In general, the AKI alert system led me or my team to order urinalysis, urine electrolytes and/or creatinine earlier. | 3 (2–4) | 2 (1–2.5) | 0.0012 |
In general, the AKI alert system led me or my team to order a retroperitoneal ultrasound. | 1 (1–3) | 1 (1–2) | 0.62 |
In general, the AKI alert system led me or my team to order a nuclear renal scan. | 1 (1–1) | 1 (1–1) | 0.95 |
In general, the AKI alert system led me to discuss the results with my patient. | 3 (1–3) | 1 (1–2.5) | 0.005 |
In general, the AKI alert system led me to consult the renal/nephrology service. | 2 (1–3) | 1 (1–2) | 0.04 |
In general, the AKI alert system improved the care of my patients. | 4 (3–4) | 3 (2–3) | <0.0001 |
All survey questions employed a 5-point Likert Scale, where higher values indicate stronger agreement with the statement presented. Alert approval was defined as a ‘yes’ response to the question ‘After the AKI alert trial ends, would you like to continue receiving AKI alerts?’
ICU, intensive care unit.
aNote that this P-value compares alert approval rate among the three demographic groups.
bExcludes one provider who worked exclusively in the ICU.
Disapproval of the alert was also associated with self-reported lower likelihood to change certain behaviors. For example, individuals who disapproved of the alert indicated that alerts would have less impact on whether they would change medication dosage (P = 0.04), change IV fluid management (P < 0.001), order urine studies (P = 0.001) or consult the nephrology team (P = 0.04).
Alert approval waned as the study progressed. The approval rate in the first half of the study was 83%, compared with 59% in the second half (P = 0.009), and each additional 30 days of alerting decreased the odds of alert approval by 20% (3–35%; P = 0.02), corresponding to an odds ratio of 0.80 per 30 days.
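To make the odds-based wording concrete, the short sketch below shows how a per-30-day odds ratio of 0.80 (equivalent to the reported 20% decrease in odds) would translate into approval probabilities over time, taking the first-half approval rate of 83% as an assumed starting point. This is an illustration of the arithmetic only, not trial output.

```python
# Illustration of the odds-ratio arithmetic; the starting probability and odds ratio
# are assumed values consistent with the reported figures, not model output.
odds_ratio_per_30d = 0.80          # 20% decrease in odds per 30 days
baseline_prob = 0.83               # assumed starting approval probability
baseline_odds = baseline_prob / (1 - baseline_prob)

for day in (0, 90, 180):
    odds = baseline_odds * odds_ratio_per_30d ** (day / 30)
    prob = odds / (1 + odds)
    print(f"day {day:3d}: modelled approval probability {prob:.0%}")
```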
Qualitative responses
Qualitative responses were generally positive; representative examples are displayed in Table 3. Some responses were neutral, stating that the system was beneficial and straightforward but had not meaningfully changed their patient care. Some providers commented on a delay between receiving an alert and the availability of laboratory results. Despite this critique, all alerts fired within 1 h of laboratory results being posted, as verified both via electronic records and pre-trial quality assessment records.
Table 3. Representative qualitative responses
Positive responses |
‘…Due to the alert the AKI was documented in the chart and handled quickly and effectively. I appreciated the alert and its benefit in patient care.’ |
‘I'd rather receive it and be made aware than not receive it and possibly miss a dose adjustment.’ |
‘I think this is a great idea and would love to see it used in the future.’ |
Neutral responses |
‘I only received one notification and it was in the MICU (Medical Intensive Care Unit) when I already knew the pt had AKI, therefore the alert did not really do anything to change patient care for me.’ |
‘If it could be integrated with (electronic health record user interface), it would be much more noticeable to a majority of the medicine residents rather than as a text page.’ |
Negative responses |
‘The alert comes much too late. I have always recognized an AKI before getting the alert.’ |
‘It should come out immediately after a lab comes back.’ |
Samples taken from responses to the question, ‘What changes would you recommend to the AKI electronic alert system?’.
MICU, medical intensive care unit.
Discussion
This ancillary study to a large, randomized trial of automated, electronic alerting for AKI captured the perceptions of the providers who received a specific text-based electronic alert, and quantified their alert approval over the course of the trial. The key findings are that perception of patient benefit was strongly associated with alert approval (defined as the desire to continue receiving alerts after the end of the study), and that alert approval waned over the course of the trial. It is notable that these effects were so strong in the light of the primary findings of the trial, which showed no significant clinical benefit of the alert. This implies that perceived alert efficacy may matter considerably more for provider approval than actual alert efficacy.
Electronic alerting for AKI is becoming increasingly common, with multiple studies demonstrating the feasibility and efficacy of such alerts in the clinical setting [13–19]. Evidence from these studies led the National Health Service of England, in 2014, to adopt a policy requiring automated alerting of AKI events via all laboratory information management systems [20]. Although many physicians, pharmacists and other providers are, or soon will be, exposed to such alerts, few studies have rigorously assessed their experience of AKI alerting.
We noted a strong effect whereby alert approval waned over the course of the trial. There are several potential explanations for this finding. First, this may be a manifestation of alert fatigue, with providers becoming less enthusiastic about the alert the more alerts they receive. Additionally, the novelty of a new alerting system may have falsely boosted acceptance at the beginning of the study. These findings suggest that alert systems may benefit from periodic ‘goodwill’ campaigns, whereby providers are reeducated about the goals of the alert.
Some providers indicated that the alert was not timely, despite alerts being sent at most 1 h after laboratory results were posted to the electronic health record. This may be because more subtle changes in creatinine or urine output (which were not captured by the alert) had already been noted by the provider, which would also explain why providers who stated they were already aware of the presence of AKI were less likely to approve of the alert overall.
Our findings suggest that better alert targeting could substantially improve provider acceptance rates. If alerts could be targeted to individuals either at high risk of relevant outcomes (for example, those receiving nephrotoxic medications), or those who are at high risk of having AKI remain undetected (for example, those with low baseline creatinine concentrations), we might simultaneously boost the efficacy of the intervention (leading to an improvement in perceived patient benefit) and reduce the number of alerts that are perceived as extraneous or redundant.
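As a concrete illustration of such targeting, a simple pre-alert filter could suppress alerts that are likely to be redundant. The criteria, thresholds and names below are hypothetical examples for illustration, not rules evaluated in this study.

```python
from dataclasses import dataclass

# Hypothetical medication classes of interest; not a validated list.
NEPHROTOXIC_CLASSES = {"aminoglycoside", "NSAID", "ACE inhibitor/ARB", "iodinated contrast"}

@dataclass
class Patient:
    baseline_creatinine_mg_dl: float
    active_medication_classes: set[str]
    aki_documented_in_chart: bool

def should_send_aki_alert(patient: Patient) -> bool:
    """Hypothetical targeting rule: alert only when the provider is likely to gain
    new, actionable information. Thresholds are illustrative, not validated."""
    if patient.aki_documented_in_chart:
        return False  # the provider has already recognized the AKI; an alert adds little
    on_nephrotoxins = bool(patient.active_medication_classes & NEPHROTOXIC_CLASSES)
    low_baseline = patient.baseline_creatinine_mg_dl < 1.0  # AKI easier to overlook
    return on_nephrotoxins or low_baseline
```

A rule of this kind would trade a smaller number of alerts for a higher proportion of alerts that providers perceive as useful; the optimal thresholds would need to be evaluated prospectively.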
This study should be interpreted in the light of several limitations. First, the population studied, though derived from lists of providers who may have received an alert in the context of the trial, was approached during regular working hours and thus represents a convenience sample that may not be generalizable. Second, the survey was not administered immediately after an alert occurred, which may have led to recall bias. However, as our primary goal was to assess overall alert acceptance, a provider's recollection of the alerting experience may be a more valid indicator than a survey administered at the time an alert was received. Third, despite the large number of patients in the trial (2393), the pool of providers who received alerts was limited (alerts were sent only to the primary provider and unit pharmacist), leading to a small overall sample size. Despite this, the significance of our findings suggests that the effects seen here would be reproduced in larger-scale studies. Fourth, responders were approached directly by members of our research staff to limit response bias. Although surveys were taken in private, some respondents might have been more likely to indicate alert approval because of their interaction with a member of the study team. Fifth, the survey was limited in scope, and we recommend that future investigators explore in more depth why alerts engender particular attitudes. Sixth, different alerting algorithms have different sensitivities for true AKI, an issue recently discussed by Sawhney et al. [21]; other algorithms may therefore capture a different proportion of patients with AKI and influence providers' perceptions of efficacy. Seventh, this study surveyed responses to one specific type of alert within a single hospital, so the findings cannot be generalized to all AKI alerts. Finally, the alerts were delivered non-intrusively: only one alert was sent per patient per provider, in a text message format, with no requirement to respond to or acknowledge the alert and no specific instructions on follow-up. Different alert implementations may lead to different levels of provider acceptance.
In conclusion, provider approval is critical to the success of clinical decision support systems. Our study suggests that efforts to convince providers of alert efficacy will increase the likelihood that providers embrace an alert system. Additional approaches, such as attempting to avoid alerting when a provider has already recognized the condition of interest, may also improve overall acceptance.
Supplementary data
Supplementary data are available online at http://ckj.oxfordjournals.org.
Conflict of interest statement
The results presented in this paper have not been published previously, in whole or in part, except in abstract format.
Acknowledgements
This research was supported in part by grants NIH K23DK097201 (to F.P.W.) and NIH K23 HL114868 (to J.M.T.). We appreciate the advice of Chirag Parikh, MD, PhD, on the preparation of this manuscript.
References
1. Ash JS, Sittig DF, Campbell EM et al. Some unintended consequences of clinical decision support systems. AMIA Annu Symp Proc 2007: 26–30
2. Berner ES. Clinical Decision Support Systems: State of the Art. Agency for Healthcare Research and Quality, AHRQ Publication No. 09-0069-EF. Rockville, MD: AHRQ, 2009
3. Strom BL, Schinnar R, Aberra F et al. Unintended effects of a computerized physician order entry nearly hard-stop alert to prevent a drug interaction: a randomized controlled trial. Arch Intern Med 2010; 170: 1578–1583
4. Dexter PR, Perkins S, Overhage JM et al. A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med 2001; 345: 965–970
5. Trowbridge R, Weingarten S. Clinical decision support systems. In: Shojania K, Duncan B, McDonald K et al. (eds). Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Evidence Report/Technology Assessment No. 43, AHRQ Publication No. 01-E058. Rockville, MD: AHRQ, 2001
6. Kawamoto K, Houlihan CA, Balas EA et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005; 330: 765
7. Garg AX, Adhikari NK, McDonald H et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005; 293: 1223–1238
8. Kuperman GJ, Teich JM, Tanasijevic MJ et al. Improving response to critical laboratory results with automation: results of a randomized controlled trial. J Am Med Inform Assoc 1999; 6: 512–522
9. Monane M, Matthias DM, Nagle BA et al. Improving prescribing patterns for the elderly through an online drug utilization review intervention: a system linking the physician, pharmacist, and computer. JAMA 1998; 280: 1249–1252
10. Raschke RA, Gollihare B, Wunderlich TA et al. A computer alert system to prevent injury from adverse drug events: development and evaluation in a community teaching hospital. JAMA 1998; 280: 1317–1320
11. van der Sijs H, Aarts J, van Gelder T et al. Turning off frequently overridden drug alerts: limited opportunities for doing it safely. J Am Med Inform Assoc 2008; 15: 439–448
12. Wilson FP, Shashaty M, Testani J et al. Automated, electronic alerts for acute kidney injury: a single-blind, parallel-group, randomised controlled trial. Lancet 2015; 385: 1966–1974
13. Selby NM, Crowley L, Fluck RJ et al. Use of electronic results reporting to diagnose and monitor AKI in hospitalized patients. Clin J Am Soc Nephrol 2012; 7: 533–540
14. Selby NM. Electronic alerts for acute kidney injury. Curr Opin Nephrol Hypertens 2013; 22: 637–642
15. Kashani K, Herasevich V. Sniffing out acute kidney injury in the ICU: do we have the tools? Curr Opin Crit Care 2013; 19: 531–536
16. Colpaert K, Hoste E, Van Hoecke S et al. Implementation of a real-time electronic alert based on the RIFLE criteria for acute kidney injury in ICU patients. Acta Clin Belg 2007; 62: 322–325
17. Colpaert K, Hoste EA, Steurbaut K et al. Impact of real-time electronic alerting of acute kidney injury on therapeutic intervention and progression of RIFLE class. Crit Care Med 2012; 40: 1164–1170
18. Goldstein SL. Automated/integrated real-time clinical decision support in acute kidney injury. Curr Opin Crit Care 2015; 21: 485–489
19. Porter CJ, Juurlink I, Bisset LH et al. A real-time electronic alert to improve detection of acute kidney injury in a large teaching hospital. Nephrol Dial Transplant 2014; 29: 1888–1893
20. Selby NM, Hill R, Fluck R. Standardizing the early identification of acute kidney injury: the NHS England National Patient Safety Alert. Nephron 2015; 131: 113–117
21. Sawhney S, Fluck N, Marks A et al. Acute kidney injury—how does automated detection perform? Nephrol Dial Transplant 2015; 30: 1853–1861