Journal of the American Medical Informatics Association (JAMIA). 2021 Oct 19;28(12):2654–2660. doi: 10.1093/jamia/ocab179

Alert burden in pediatric hospitals: a cross-sectional analysis of six academic pediatric health systems using novel metrics

Evan W Orenstein 1,2, Swaminathan Kandaswamy 1, Naveen Muthu 3,4, Juan D Chaparro 5,6, Philip A Hagedorn 7,8, Adam C Dziorny 9,10, Adam Moses 11, Sean Hernandez 11,12, Amina Khan 4, Hannah B Huth 11, Jonathan M Beus 3,4, Eric S Kirkendall 11,13
PMCID: PMC8633657  PMID: 34664664

Abstract

Background

Excessive electronic health record (EHR) alerts reduce the salience of actionable alerts. Little is known about the frequency of interruptive alerts across health systems and how the choice of metric affects which users appear to have the highest alert burden.

Objective

(1) Analyze alert burden by alert type, care setting, provider type, and individual provider across 6 pediatric health systems. (2) Compare alert burden using different metrics.

Materials and Methods

We analyzed interruptive alert firings logged in EHR databases at 6 pediatric health systems from 2016–2019 using 4 metrics: (1) alerts per patient encounter, (2) alerts per inpatient-day, (3) alerts per 100 orders, and (4) alerts per unique clinician days (calendar days with at least 1 EHR log in the system). We assessed intra- and interinstitutional variation and how alert burden rankings differed based on the chosen metric.

Results

Alert burden varied widely across institutions, ranging from 0.06 to 0.76 firings per encounter, 0.22 to 1.06 firings per inpatient-day, 0.98 to 17.42 firings per 100 orders, and 0.08 to 3.34 firings per clinician day logged in the EHR. Custom alerts accounted for the greatest burden at all 6 sites. The rank order of institutions by alert burden was similar regardless of which alert burden metric was chosen. Within institutions, the alert burden metric choice substantially affected which provider types and care settings appeared to experience the highest alert burden.

Conclusion

Estimates of which clinical areas experience the highest alert burden varied substantially by institution and by the metric used.

Keywords: electronic health records, decision support systems, clinical burnout, professional, benchmarking, alert fatigue, health personnel

INTRODUCTION

Clinical decision support (CDS), such as computerized alerts in electronic health record (EHR) systems, is intended to increase the effectiveness, safety, and overall quality of patient care. CDS can accomplish this by bringing recommendations based on scientific evidence or best practices to the bedside.1 However, alert-related workload has also been associated with physical fatigue and cognitive weariness, 2 of the 3 dimensions of burnout.2,3 While many EHR alerts have demonstrated improved outcomes,4,5 a systematic review of randomized trials found that computerized alerts improved adherence to the intended care process by a median of only 4.2%.6 Providers override the recommended action in up to 95% of EHR alerts.7 While many overrides are appropriate,8 routinely overriding alerts habituates providers into ignoring EHR alerts (alert fatigue)9 and can cause providers to disregard important alerts. This behavior can lead to patient harm.10,11 Higher alert burden may potentiate alert fatigue, as reducing alert burden has reduced the frequency of alert overrides in some settings.12 However, alert burden metrics have not yet been standardized and there exist no benchmarks for comparisons across institutions, provider types, or care settings. This gap prevents health systems from identifying alert burden outliers compared to analogous care settings and organizations, reducing their ability to prioritize alert burden reduction efforts and to motivate organizational changes.

One of the most challenging problems in measuring alert burden is defining the appropriate denominator. Assumptions in alert burden definitions may change which alerts or care settings an organization categorizes as high burden, leading to biases in prioritization efforts. However, it remains unknown how the chosen metric of alert burden changes the relative prioritization within an institution or how this might affect comparison across institutions. Identifying an appropriate unit of comparison for alert burden could also help standardize CDS practices, facilitating multi-institutional alert burden reduction strategies. Thus, understanding how alert burden measures vary within and across institutions is critical to selecting alert burden metrics and evaluating scalable strategies to improve alert usefulness across health systems.

In this retrospective cross-sectional study, we estimated alert burden using 2 patient-focused denominators (alerts per inpatient-day and alerts per encounter) and 2 clinician-focused denominators (alerts per 100 orders and alerts per clinician day, defined as the number of unique clinicians on each calendar day with at least 1 EHR log in the system) at 6 academic pediatric health centers. Denominators were selected to encompass common vendor metrics as well as an early estimate of alert burden per unit time a clinician spends in the EHR. We assessed intra- and interinstitutional variation by care setting, provider type, alert type, and individual clinician. Finally, we determined how changes in denominator definitions and alert burden metric choice affect alert burden rankings.

MATERIALS AND METHODS

Study setting

This study was conducted at 6 large pediatric health systems: Brenner Children’s Hospital (BCH), Cincinnati Children’s Hospital Medical Center (CCHMC), Children’s Healthcare of Atlanta (CHOA), Children’s Hospital of Philadelphia (CHOP), Nationwide Children’s Hospital (NCH), and University of Rochester-Golisano Children’s Hospital (URGCH). All 6 centers use Epic Systems as their enterprise EHR vendor with multiple active alert types, including custom alerts built within the organization as well as medication alerts derived from third-party vendors with local customization. Site characteristics are described in Table 1. Data were collected between 9/1/2016 and 9/1/2019.

Table 1.

Site characteristics

System | Location | Patient population | Licensed beds | Patient days per year | ED and urgent care visits | Ambulatory clinic visits
Cincinnati Children’s Hospital Medical Center | Cincinnati, OH | Pediatric only | 670 | 152 834 | 155 889 | 1 077 789
Children’s Healthcare of Atlanta | Atlanta, GA | Pediatric only | 673 | 164 453 | 439 072 | 1 170 880
Children’s Hospital of Philadelphia | Philadelphia, PA | Pediatric only | 567 | 181 632 | 127 252 | 1 113 909
Nationwide Children’s Hospital | Columbus, OH | Pediatric only | 673 | 166 011 | 265 470 | 1 044 666
University of Rochester Golisano Children’s Hospital | Rochester, NY | Mixed pediatric and adult (a) | 886 | 300 435 | 396 691 | 2 942 922
Wake Forest Baptist Health (Brenner Children’s Hospital) | Winston-Salem, NC | Mixed pediatric and adult (a) | 1535 | 1 001 259 | 651 000 | 2 095 845
(a) Patient volume metrics for mixed systems include both pediatric and adult encounters.

All health systems are teaching hospitals and use Epic Systems as their EHR vendor. Data on beds and utilization are from the most recent public reports, spanning 2019–2020.

Data collection and aggregation

Data were collected and aggregated using a federated model.13 EHR queries were initially written at a single site (CHOA) to extract data about each interruptive alert firing as well as denominator data needed to calculate each alert burden metric described above. The initial code, deidentified data at the level of each alert firing, and summary data were reviewed by the full collaborative and iteratively improved. A second site (CHOP) translated the query from Microsoft SQL Server (Redmond, WA) to Oracle SQL Developer (Redwood Shores, CA) and validated the query locally prior to dissemination of the SQL code to all 6 sites. Each site independently verified its own data and brought concerns or questions to the collective for discussion, including approval of individual site modifications; only summary data were shared across institutions. Two of the 6 sites were combined pediatric and adult healthcare systems, and each used different strategies to restrict to pediatric encounters commensurate with local workflows. Both sites manually reviewed individual departments to identify those dedicated to pediatric encounters. One of the sites also had hybrid departments serving both adults and children (eg, the emergency department), so individual bed spaces were selected for pediatric patients. Encounters from relevant departments and bed spaces were used to filter orders and clinician interactions for all alert burden metrics. Of note, age alone was not an effective means of segmenting encounters, as pediatric subspecialists frequently followed patients beyond age 18 years.
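As an illustration of this federated approach, the sketch below shows how a single site might reduce a deidentified, firing-level extract to the summary counts shared with the collaborative. The file and column names are hypothetical placeholders, not the actual EHR database fields or the authors' query code.

```python
import pandas as pd

# Hypothetical deidentified extract of interruptive alert firings at one site.
# Column names are illustrative placeholders, not actual EHR database fields.
firings = pd.read_csv("site_alert_firings_deid.csv")
# expected columns: alert_type, care_setting, provider_type, fired_date

# In the federated model, only aggregated summary counts leave the site.
site_summary = (
    firings
    .groupby(["alert_type", "care_setting", "provider_type"])
    .size()
    .reset_index(name="firing_count")
)
site_summary.to_csv("site_summary_for_collaborative.csv", index=False)
```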

Definitions and local mapping

To facilitate comparison across institutions, the authors came to consensus definitions for alert firings, alert types, provider types, care settings, and denominators. At first, definitions were proposed by 1 site and adjusted through discussion. After reviewing and approving conceptual definitions, each site independently mapped local codes to each concept and presented ambiguous cases to the full group of authors for adjudication as well as iterative adjustment of consensus definitions. Consensus was considered achieved when no further edits were recommended after a full round of group reviews of alerts. The final definitions are presented in the Supplementary Appendix.

Briefly, we defined alert firings as the number of interruptions from custom (developed using locally managed rules), medication administration, and drug–drug interaction alerts requiring the clinician to act on the alert before being able to proceed with their workflow in the EHR.14 Of note, the current study did not include drug–dose, drug–disease, or drug–allergy alerts. If multiple interruptive alerts fired simultaneously, each alert was counted separately.

Care setting was defined as the location of the patient at the time of the alert firing and categorized as intensive care unit (ICU), inpatient non-ICU (IP Non-ICU), emergency department or urgent care (ED/UC), perioperative, ambulatory, hospital outpatient department (eg, an infusion center), or ancillary. The provider exposed to the alert was categorized as ordering provider (physician, nurse practitioner, or physician assistant), nurse, pharmacist, or other.

Alerts per encounter was defined as the number of alert firings divided by the number of patient encounters, where encounters were included if they involved synchronous activity from the provider and patient. For example, office visits, hospitalizations, and telephone encounters were all included, whereas “orders only” or other administrative encounters were excluded. Alerts for all metrics were only counted if they appeared during 1 of the included encounter types. Alerts per inpatient day was defined only for ICU and IP Non-ICU settings as the number of alert firings divided by the sum of the lengths of stay of all inpatients. Alerts per 100 orders was defined as the number of alert firings divided by the number of unique signed orders, expressed per 100 orders; the denominator did not include auto-generated child orders produced by an original parent order (eg, an albuterol q4h parent order auto-generates multiple child orders for each albuterol administration). Alerts per clinician day was defined as the number of alert firings for individual providers divided by the number of unique calendar days in which that provider had at least 1 record in the EHR audit logs.
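A minimal sketch of these 4 definitions is shown below, assuming simple site-level extracts with hypothetical file and column names (one row per firing, encounter, signed order, inpatient stay, and audit-log event, respectively); it is illustrative only, not the study code.

```python
import pandas as pd

# Hypothetical extracts; names are illustrative, not actual EHR database fields.
firings = pd.read_csv("firings.csv")        # 1 row per interruptive alert firing
encounters = pd.read_csv("encounters.csv")  # synchronous provider-patient encounters only
orders = pd.read_csv("orders.csv")          # unique signed orders, auto-generated child orders excluded
inpatient = pd.read_csv("inpatient.csv")    # 1 row per inpatient stay with length_of_stay_days
audit_log = pd.read_csv("audit_log.csv")    # 1 row per clinician EHR action with clinician_id, calendar_date

# Patient-focused metrics
alerts_per_encounter = len(firings) / len(encounters)
ip_firings = firings[firings["care_setting"].isin(["ICU", "IP Non-ICU"])]
alerts_per_inpatient_day = len(ip_firings) / inpatient["length_of_stay_days"].sum()

# Clinician-focused metrics
alerts_per_100_orders = 100 * len(firings) / len(orders)
clinician_days = audit_log[["clinician_id", "calendar_date"]].drop_duplicates().shape[0]
alerts_per_clinician_day = len(firings) / clinician_days
```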

Data analysis

We calculated descriptive statistics for alert burden measures and developed visualizations to examine intrainstitutional variation (by alert type, care setting, and provider type) and interinstitutional variation in alert burden.
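For example, the shading described in the table footnotes corresponds to a simple min-max normalization within each metric column. The sketch below uses the site-level values from Table 2 and pandas' built-in column-wise gradient styling; this is an assumed illustration of the approach, not the authors' actual visualization code.

```python
import pandas as pd

# Site-level alert burden values from Table 2 (sites A-F).
burden = pd.DataFrame(
    {
        "per_clinician_day": [0.08, 0.75, 0.10, 0.50, 0.43, 3.34],
        "per_100_orders": [0.98, 2.56, 4.99, 1.66, 5.06, 17.42],
        "per_inpatient_day": [0.22, 0.70, 0.59, 0.29, 0.76, 1.06],
        "per_encounter": [0.06, 0.19, 0.28, 0.09, 0.48, 0.76],
    },
    index=list("ABCDEF"),
)

# Min-max normalize within each metric column: lowest burden -> 0 (colorless),
# highest burden -> 1 (deepest red), matching the shading described in the table notes.
normalized = (burden - burden.min()) / (burden.max() - burden.min())

# Column-wise red gradient, eg, for rendering in a notebook or exporting to HTML.
styled = burden.style.background_gradient(cmap="Reds", axis=0)
```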

This work was approved by the Institutional Review Board at each of the 6 participating institutions.

RESULTS

Total interruptive alert firings and the alert burden for each metric are summarized in Table 2 and Figure 1. A total of 33 978 153 interruptive alerts fired across the 6 sites during the 3-year study period (range by site: 1 842 465–13 982 997). Alerts per clinician day had the greatest variability across sites, with 43.8 times more alerts per clinician day at the highest-burden site than at the lowest-burden site. Alerts per inpatient day had the least variability across sites, with a 4.7-fold difference between the sites with the highest and lowest alert burdens by this measure. When comparing across sites, the rank order of alert burden was nearly identical for alerts per 100 orders, alerts per inpatient-day, and alerts per encounter. For alerts per clinician day, the sites with the most and least alert burden remained consistent with the other metrics, with the other 4 sites clustered in the middle. Of note, the 2 sites with the highest alert burden in this sample were the only 2 combined pediatric and adult health systems.
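The fold-differences and rank ordering reported above can be checked directly from the Table 2 values; the snippet below is illustrative and uses the rounded published values, so the ratios differ slightly from those computed on the underlying counts.

```python
# Alerts per clinician day and per inpatient-day by site (Table 2, rounded values).
per_clinician_day = {"A": 0.08, "B": 0.75, "C": 0.10, "D": 0.50, "E": 0.43, "F": 3.34}
per_inpatient_day = {"A": 0.22, "B": 0.70, "C": 0.59, "D": 0.29, "E": 0.76, "F": 1.06}

# Spread across sites: roughly 42x for alerts per clinician day (43.8x on unrounded data)
# and roughly 4.8x for alerts per inpatient-day (4.7x on unrounded data).
fold_clinician_day = max(per_clinician_day.values()) / min(per_clinician_day.values())
fold_inpatient_day = max(per_inpatient_day.values()) / min(per_inpatient_day.values())

# Rank order of sites from highest to lowest burden for a given metric.
rank_clinician_day = sorted(per_clinician_day, key=per_clinician_day.get, reverse=True)
print(fold_clinician_day, fold_inpatient_day, rank_clinician_day)
```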

Table 2.

Total interruptive alert burden by site and metric

Institution | Interruptive firings (total) | Unique custom alerts (a) | Firings per clinician day | Firings per 100 orders | Firings per inpatient day | Firings per encounter
A | 2 165 414 | 702 | 0.08 | 0.98 | 0.22 | 0.06
B | 6 253 865 | 246 | 0.75 | 2.56 | 0.70 | 0.19
C | 13 982 997 | 355 | 0.10 | 4.99 | 0.59 | 0.28
D | 3 028 568 | 268 | 0.50 | 1.66 | 0.29 | 0.09
E | 1 842 465 | 269 | 0.43 | 5.06 | 0.76 | 0.48
F | 6 704 844 | 723 | 3.34 | 17.42 | 1.06 | 0.76

Firings per clinician day and per 100 orders use clinician-focused denominators; firings per inpatient day and per encounter use patient-focused denominators.

In each alert burden metric column, the cell with the highest alert burden is colored red, the cell with the lowest alert burden is colorless, and all other cells are shaded red on a scale normalized between the lowest and highest alert burdens for that specific metric.

(a) Unique custom alerts (LGL IDs in Epic Systems) with at least 1 interruptive firing during the study period.

Figure 1. Comparison of alert burden across 6 pediatric health systems using 4 different metrics.

We compared interruptive alert burden across provider types by site and by alert burden metric (Table 3). There was substantial variation across sites as to which provider type experienced the most alert burden. When measured using alerts per 100 orders, alerts per encounter, or alerts per inpatient day, nurses had the highest alert burden on average across all sites. However, there was substantial variability, with ordering providers having the greatest burden at 2 of 6 sites and pharmacists at 1 site. By contrast, when using alerts per clinician day, pharmacists appeared to experience a much higher alert burden than all other provider types, on average 3.1 times higher than the second-highest provider type (0.4–13 times higher across sites).

Table 3.

Alert burden by provider type and metric

Alerts per 100 orders (columns: sites A–F, then average)
Provider 1.63 3.11 8.32 2.30 4.48 17.65 6.25
Nurse 0.85 3.99 6.15 1.34 12.42 20.64 7.56
Pharmacy 0.88 1.46 3.49 2.59 2.03 2.97 2.24
Other 0.46 1.39 0.78 0.42 0.27 19.34 3.78
Average 0.95 2.49 4.69 1.66 4.80 15.15 4.96

Alerts per clinician day (columns: sites A–F, then average)
Provider 0.07 2.03 0.08 1.26 0.39 4.82 1.44
Nurse 0.07 1.75 0.12 0.42 1.31 7.35 1.84
Pharmacy 0.13 13.35 0.20 16.43 2.60 4.32 6.17
Other 0.05 0.54 0.04 0.16 0.05 10.91 1.96
Average 0.08 4.42 0.11 4.57 1.09 6.85 2.85

Alerts per encounter (columns: sites A–F, then average)
Provider 0.10 0.20 0.40 0.12 0.34 0.74 0.32
Nurse 0.06 0.29 0.31 0.08 1.15 0.93 0.47
Pharmacy 0.07 0.14 0.26 0.16 0.29 0.17 0.18
Other 0.02 0.10 0.05 0.02 0.03 0.83 0.17
Average 0.06 0.18 0.26 0.09 0.45 0.67 0.29

Alerts per inpatient day (columns: sites A–F, then average)
Provider 0.30 0.85 0.70 0.43 0.68 0.56 0.59
Nurse 0.30 1.18 0.73 0.10 1.95 2.49 1.12
Pharmacy 0.25 0.37 0.85 0.57 0.28 0.64 0.49
Other 0.04 0.35 0.10 0.04 0.03 0.15 0.12
Average 0.22 0.69 0.59 0.28 0.74 0.96 0.58

For each site and each metric, the cell with the highest alert burden is colored red, the cell with the lowest alert burden is colorless, and all other cells are shaded red on a scale normalized between the lowest and highest alert burdens for that specific site and metric.

Next, we compared interruptive alert burden across care settings and by alert burden metric (Table 4). The choice of alert burden metric substantially changed which care settings appeared to have the highest alert burden. When measured using alerts per 100 orders, ICU settings had the highest burden at 3 of 6 sites, whereas ED/urgent care, ambulatory, and hospital outpatient department settings each had the highest burden at 1 site. When measured using alerts per encounter, alert burden in the ICU was between 2 and 24 times greater than in the second-highest care setting at each site. When measured as alerts per clinician day, alert burden was relatively evenly distributed across care settings, with only perioperative settings having consistently low burden across sites.

Table 4.

Alert burden by care setting and metric

Alerts per 100 orders (columns: sites A–F, then average)
ED/UC 0.01 2.99 9.08 1.81 9.30 4.56 4.62
ICU 1.08 8.13 11.31 3.94 4.80 23.06 8.72
IP Non-ICU 1.04 1.96 2.89 1.55 5.99 4.67 3.02
Perioperative 0.06 2.12 2.58 2.05 3.80 5.19 2.63
Ambulatory 1.22 1.40 7.41 0.88 1.04 33.95 7.65
HOD 1.67 1.47 1.12 2.75 0.92 2.06 1.66
Ancillary 0.00 0.00 0.68 3.20 8.63 0.56 2.62
Average 0.72 3.01 5.01 2.31 4.93 10.58 4.46

Alerts per clinician day (columns: sites A–F, then average)
ED/UC 0.00 0.57 0.31 0.40 0.34 0.22 0.31
ICU 0.06 0.58 0.08 0.42 0.30 0.44 0.31
IP Non-ICU 0.08 0.67 0.07 0.65 0.48 0.81 0.46
Perioperative 0.01 0.16 0.11 0.23 0.06 0.09 0.11
Ambulatory 0.08 0.29 0.12 0.32 0.16 6.98 1.33
HOD 0.11 0.34 0.12 0.47 0.03 0.13 0.20
Ancillary 0.00 0.04 0.01 0.05 0.00 0.00 0.02
Average 0.05 0.38 0.12 0.36 0.20 1.24 0.39

Alerts per encounter (columns: sites A–F, then average)
ED/UC 0.00 0.13 0.36 0.06 0.47 0.17 0.20
ICU 1.45 13.24 22.52 2.29 6.99 50.19 16.11
IP Non-ICU 0.73 1.42 2.49 0.90 3.31 2.27 1.85
Perioperative 0.02 0.24 0.58 0.20 0.48 0.28 0.30
Ambulatory 0.04 0.03 0.18 0.02 0.03 0.74 0.17
HOD 0.21 0.14 0.13 0.41 0.02 0.11 0.17
Ancillary 0.00 0.00 0.09 0.03 0.13 0.01 0.05
Average 0.35 2.54 3.76 0.56 1.63 7.68 2.76

Alerts per inpatient day (columns: sites A–F, then average)
ICU 0.14 0.67 0.00 0.22 0.43 1.04 0.50
IP Non-ICU 0.24 0.48 0.59 0.36 0.86 0.95 0.58
Average 0.19 0.57 0.59 0.29 0.64 1.00 0.54

ED: emergency department; HOD: hospital outpatient department; ICU: intensive care unit; IP: inpatient; UC: urgent care.

For each site and each metric, the cell with the highest alert burden is colored red, the cell with the lowest alert burden is colorless, and all other cells are shaded red on a scale normalized between the lowest and highest alert burdens for that specific site and metric.

Finally, we compared alert burden by alert type (Table 5). Across all 6 sites and all 4 alert burden metrics, custom-built alerts contributed the most to alert burden with only a single exception. Globally, custom alerts accounted for 77% of all interruptive firings, ranging from 50% to 93% across the 6 sites.

Table 5.

Alert burden by alert type and metric

Alerts per 100 orders (columns: sites A–F, then average)
Drug–drug interaction 0.30 0.45 1.74 0.92 0.66 0.76 0.80
Custom 1.91 5.99 10.44 2.42 12.45 43.98 12.87
Medication administration 0.71 0.74 1.95 1.51 1.78 2.61 1.55
Average 0.97 2.39 4.71 1.62 4.96 15.78 5.07

Alerts per clinician day (columns: sites A–F, then average)
Drug–drug interaction 0.04 0.19 0.10 0.34 0.10 0.11 0.14
Custom 0.09 1.37 0.12 0.45 0.63 6.48 1.52
Medication administration 0.07 0.26 0.08 0.47 0.17 0.32 0.23
Average 0.07 0.61 0.10 0.42 0.30 2.30 0.63

Alerts per encounter (columns: sites A–F, then average)
Drug–drug interaction 0.02 0.04 0.11 0.06 0.07 0.04 0.06
Custom 0.11 0.36 0.49 0.12 1.23 1.73 0.67
Medication administration 0.05 0.06 0.12 0.08 0.16 0.12 0.10
Average 0.06 0.15 0.24 0.09 0.48 0.63 0.28

Alerts per inpatient day (columns: sites A–F, then average)
Drug–drug interaction 0.05 0.10 0.40 0.22 0.08 0.19 0.18
Custom 0.49 1.76 1.01 0.33 1.88 2.49 1.33
Medication administration 0.15 0.19 0.37 0.31 0.23 0.45 0.28
Average 0.23 0.68 0.59 0.28 0.73 1.04 0.59

For each site and each metric, the cell with the highest alert burden is colored red, the cell with the lowest alert burden is colorless, and all other cells are shaded red on a scale normalized between the lowest and highest alert burdens for that specific site and metric.

DISCUSSION

This cross-sectional study of 6 pediatric health systems demonstrated wide variability in alert burden across institutions. Custom alerts were consistently responsible for the majority of alert burden, suggesting that local institutional practices and culture for the development and governance of alerts may be a primary driver of alert burden and may present an opportunity to substantially modify alert burden. The rank order of institutions by alert burden was similar regardless of which alert burden metric was chosen. However, within each institution, the alert burden metric choice substantially affected which provider types and care settings appeared to experience the highest alert burden. For example, alert burden was distributed relatively evenly across provider types when comparing alerts per 100 orders, alerts per encounter, or alerts per inpatient-day. However, pharmacists appeared to have much higher alert burden when examining alerts per clinician day across several institutions.

While this study establishes an initial framework for benchmarking alert burden across and within institutions, it remains unknown which alert burden metric(s) are most useful for predicting clinician burnout or patient safety issues due to alert fatigue. Prior exposure to monitor alarms has been shown to affect subsequent responses, and text message alerts have demonstrated distraction-related negative effects on other tasks such as medication administration.15,16 However, none of the existing EHR alert burden metrics have been validated to determine how prior acute or chronic alert burden affects the clinician’s subsequent response to a new alert or the impact of EHR alert distraction on other tasks. In the absence of a gold standard, each metric provides a different lens into sources of alert burden that could guide institutions in their alert-burden reduction strategy.

To our knowledge, this is the first study to establish a data-sharing network focused on EHR alert burden across pediatric health systems. Many of the challenges in creating this collaborative are common to other clinical data research networks. Only aggregated data were shared from each site for analysis, whereas more granular data would allow for more rigorous analyses. This federated model13 is commonly used in early iterations of quality improvement collaboratives17,18 and research networks19 with the goal of evolving to sharing more granular data.20 To facilitate analysis of granular data across multiple sites, collaborative networks must (1) address privacy concerns21,22 through strict deidentification procedures and regulatory agreements, (2) harmonize data through mapping efforts to established standards,23 and (3) establish shared understanding of terms and metrics through joint development of definitions. All of these processes require substantial investment but have led to successful collaborations such as PEDSnet and other PCORnet partner networks.24,25 In this use case, no suitable standards were found for key dimensions in defining alert burden metrics (eg, categorizing encounter types as having synchronous activity from the provider and patient) or for important dimensions of analysis (eg, grouping clinical departments into care settings). Federating data for outcomes and dimensions with imperfect standards requires a diverse set of skills, including clinical and operational subject matter expertise, extensive institutional knowledge, and informatics literacy.26 For example, some clinical departments represent non-ICU inpatient bed spaces most of the time but frequently accommodate ICU surges; appropriate categorization requires understanding the workflow features that distinguish the 2 cases (eg, recorded level of service or clinical service of the primary attending) and the ability to retrieve the differentiating data elements. These challenges, alongside differences in technical infrastructure such as variation in database management systems and EHR audit log storage, required substantial local customization of shared code even across institutions on the same EHR vendor. Cataloging these challenges and establishing procedures for customization and onboarding of new sites are critical steps in establishing a pediatric CDS collaborative.

This study has several limitations. While there is substantial variability in workflows, EHR configuration, and technology infrastructure, the sites remain relatively homogeneous. All are academic institutions on the same EHR vendor with the clinical informatics expertise needed for participation, which may also affect alert development and culture. Thus, these early benchmarks may not be representative of community sites, sites using different EHR vendors, or sites with less local clinical informatics expertise. Additionally, analyses over millions of EHR records likely obscure some degree of data quality concerns. For example, simultaneous firing of multiple alerts was counted as separate firings in this study instead of a single interruption. Categorization of encounter types, provider types, and especially care settings also required subjective assessments and was performed by a single author at each site; thus, we were not able to calculate interrater reliability on this mapping process. While we tried to address these issues through verification of outliers, code reviews, and collaborative data mapping efforts, these alert burden estimates may evolve with greater understanding of sources of misclassification. Moreover, this study did not address variation in alert acceptance or alert dwell time, which may better assess the proximal impact of the alert on subsequent care. Finally, this early analysis is primarily descriptive, so trends in alert burden variation may not be validated when examined at a more granular level or with greater statistical sophistication.

Alert fatigue has been recognized as a high-priority patient safety issue and a contributor to clinician burnout.27 Alert burden varies substantially across institutions, with custom alerts responsible for substantially more firings than drug–drug interaction or other alert types. In the current state, organizations aiming to reduce alert burden can choose any of the metrics described in this manuscript to track over time.14 Alert burden varies by clinical context and by the metric chosen; feasible, context-specific metrics will facilitate multicenter alert optimization. In addition, appropriate alert management requires assessment not only of the deleterious effects of excess alerts on patient safety and clinician burnout but also of alert effectiveness at achieving intended outcomes. CDS utilization metrics such as order set usage patterns,28,29 alert appropriateness8 and burden measurement, and flowsheet documentation30 represent 1 arm of a translational measurement framework that compares CDS burden to its effectiveness, which is ultimately what is needed to define CDS best practices and improve outcomes.

Supplementary Material

ocab179_Supplementary_Data

FUNDING

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

AUTHOR CONTRIBUTIONS

EWO, SK, NM, JDC, PAH, ACD, AM, SH, and ESK designed the study.

EWO, SK, NM, JDC, PAH, ACD, AM, SH, AK, JMB, and ESK extracted, cleaned, and shared data.

EWO, SK, NM, JDC, PAH, ACD, SH, JMB, and ESK performed the analysis and designed visualizations.

EWO, SK, NM, JDC, PAH, ACD, AM, SH, AK, HBH, JMB, and ESK wrote the manuscript.

DATA AVAILABILITY STATEMENT

Deidentified alert burden data are available in the Dryad repository as “Alert Burden in Pediatric Hospitals: A Cross-Sectional Analysis of Six Academic Pediatric Health Systems Using Novel Metrics” (DOI: https://doi.org/10.5061/dryad.5mkkwh769).

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

COMPETING INTEREST STATEMENT

EWO and NM are co-founders and hold equity in Phrase Health, a clinical decision support analytics company.

ACKNOWLEDGMENTS

The authors would like to acknowledge the clinical informatics teams at each institution who helped write and adjust queries and who continue to work on reducing alert burden for pediatric clinicians.

REFERENCES

  • 1. Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med 2003; 348 (25): 2526–34.
  • 2. Khairat S, Coleman C, Ottmar P, et al. Association of electronic health record use with physician fatigue and efficiency. JAMA Netw Open 2020; 3 (6): e207385.
  • 3. Gregory M, Russo E, Singh H. Electronic health record alert-related workload as a predictor of burnout in primary care providers. Appl Clin Inform 2017; 8 (3): 686–97.
  • 4. Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems. Ann Intern Med 2012; 157 (1): 29–43.
  • 5. Najafi N, Cucina R, Pierre B, Khanna R. Assessment of a targeted electronic health record intervention to reduce telemetry duration. JAMA Intern Med 2019; 179 (1): 11–5.
  • 6. Shojania KG, Jennings A, Mayhew A, Ramsay C, Eccles M, Grimshaw J. Effect of point-of-care computer reminders on physician behaviour: a systematic review. CMAJ 2010; 182 (5): E216–25.
  • 7. Bryant AD, Fletcher GS, Payne TH. Drug interaction alert override rates in the Meaningful Use era: no evidence of progress. Appl Clin Inform 2014; 5 (3): 802–13.
  • 8. McCoy AB, Waitman LR, Lewis JB, et al. A framework for evaluating the appropriateness of clinical decision support alerts and responses. J Am Med Inform Assoc 2012; 19: 345–52. doi:10.1136/amiajnl-2011-000185.
  • 9. Baysari MT, Tariq A, Day RO, Westbrook JI. Alert override as a habitual behavior – a new perspective on a persistent problem. J Am Med Inform Assoc 2017; 24: 409–12.
  • 10. Carspecken CW, Sharek PJ, Longhurst C, Pageler NM. A clinical case of electronic health record drug alert fatigue: consequences for patient outcome. Pediatrics 2013; 131 (6): e1970–e1973.
  • 11. van der Sijs H, Aarts J, Vulto A, et al. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006; 13 (2): 138–47.
  • 12. Simpao AF, Ahumada LM, Desai BR, et al. Optimization of drug-drug interaction alert rules in a pediatric hospital’s electronic health record system using a visual analytics dashboard. J Am Med Inform Assoc 2014; 21: 541–8.
  • 13. Mandl KD. Federalist principles for healthcare data networks. Nat Biotechnol 2015; 33: 360–3.
  • 14. Chaparro JD, Hussain C, et al. Reducing interruptive alert burden using quality improvement methodology. Appl Clin Inform 2020; 11: 46–58.
  • 15. Bonafide CP, Lin R, Zander M, et al. Association between exposure to nonactionable physiologic monitor alarms and response time in a children’s hospital. J Hosp Med 2015; 10 (6): 345–51.
  • 16. Bonafide CP, Miller JM, Localio AR, et al. Association between mobile telephone interruptions and medication administration errors in a pediatric intensive care unit. JAMA Pediatr 2020; 174 (2): 162–9.
  • 17. Kugler JD, Beekman RH III, Rosenthal GL, et al. Development of a pediatric cardiology quality improvement collaborative: from inception to implementation. From the Joint Council on Congenital Heart Disease Quality Improvement Task Force. Congenit Heart Dis 2009; 4 (5): 318–28.
  • 18. Goldstein SL, Dahale D, Kirkendall ES, et al. A prospective multi-center quality improvement initiative (NINJA) indicates a reduction in nephrotoxic acute kidney injury in hospitalized children. Kidney Int 2020; 97 (3): 580–8.
  • 19. Brat GA, Weber GM, Gehlenborg N, et al. International electronic health record-derived COVID-19 clinical course profiles: the 4CE consortium. NPJ Digit Med 2020; 3: 1–9.
  • 20. Alonso GT, Corathers S, Shah A, et al. Establishment of the T1D Exchange Quality Improvement Collaborative (T1DX-QI). Clin Diabetes 2020; 38 (2): 141–51.
  • 21. Toh S. Analytic and data sharing options in real-world multidatabase studies of comparative effectiveness and safety of medical products. Clin Pharmacol Ther 2020; 107 (4): 834–42.
  • 22. Li X, Fireman BH, Curtis JR, et al. Validity of privacy-protecting analytical methods that use only aggregate-level information to conduct multivariable-adjusted analysis in distributed data networks. Am J Epidemiol 2019; 188 (4): 709–23.
  • 23. Kush RD, Warzel D, Kush MA, et al. FAIR data sharing: the roles of common data elements and harmonization. J Biomed Inform 2020; 107: 103421.
  • 24. Forrest CB, Margolis PA, Bailey LC, et al. PEDSnet: a national pediatric learning health system. J Am Med Inform Assoc 2014; 21 (4): 602–6.
  • 25. Home | The National Patient-Centered Clinical Research Network. https://pcornet.org/. Accessed May 1, 2021.
  • 26. Gouripeddi R, Warner PB, Mo P, et al. Federating clinical data from six pediatric hospitals: process and initial results for microbiology from the PHIS+ consortium. AMIA Annu Symp Proc 2012; 2012: 281–90.
  • 27. Alarm, Alert, and Notification Overload. https://www.ecri.org/components/HDJournal/Pages/Top_10_hazards_2020_No_6_alarms.aspx?tab=1. Accessed May 1, 2021.
  • 28. Wright A, Feblowitz JC, Pang JE, et al. Use of order sets in inpatient computerized provider order entry systems: a comparative analysis of usage patterns at seven sites. Int J Med Inform 2012; 81 (11): 733–45.
  • 29. Li RC, Wang JK, Chen JH. When order sets do not align with clinician workflow: assessing practice patterns in the electronic health record. BMJ Qual Saf 2019; 28 (12): 987–96.
  • 30. Collins S, Couture B, Kang MJ, et al. Quantifying and visualizing nursing flowsheet documentation burden in acute and critical care. AMIA Annu Symp Proc 2018; 2018: 348–57.
