Critical Care Explorations. 2023 May 3;5(5):e0909. doi: 10.1097/CCE.0000000000000909

Evaluation of Digital Health Strategy to Support Clinician-Led Critically Ill Patient Population Management: A Randomized Crossover Study

Svetlana Herasevich 1,, Yuliya Pinevich 1,2, Kirill Lipatov 3, Amelia K Barwise 4,5, Heidi L Lindroth 6,7, Allison M LeMahieu 8, Yue Dong 1, Vitaly Herasevich 1, Brian W Pickering 1
PMCID: PMC10158897  PMID: 37151891

OBJECTIVES:

To investigate whether a novel acute care multipatient viewer (AMP), created with an understanding of clinician information and process requirements, could reduce time to clinical decision-making among clinicians caring for populations of acutely ill patients compared with a widely used commercial electronic medical record (EMR).

DESIGN:

Single center randomized crossover study.

SETTING:

Quaternary care academic hospital.

SUBJECTS:

Attending and in-training critical care physicians, and advanced practice providers.

INTERVENTIONS:

AMP.

MEASUREMENTS AND MAIN RESULTS:

We compared ICU clinician performance in structured clinical task completion using two electronic environments—the standard commercial EMR (Epic) versus the novel AMP in addition to Epic. Twenty subjects (10 pairs of clinicians) participated in the study. During the study session, each participant completed the tasks on two ICUs (7–10 beds each) and eight individual patients. The adjusted time for assessment of the entire ICU and the adjusted total time to task completion were significantly lower using AMP versus the standard commercial EMR (–6.11; 95% CI, –7.91 to –4.30 min and –5.38; 95% CI, –7.56 to –3.20 min, respectively; p < 0.001). The adjusted time for assessment of individual patients was similar using both the EMR and AMP (0.73; 95% CI, –0.09 to 1.54 min; p = 0.078). AMP was associated with a significantly lower adjusted task load (National Aeronautics and Space Administration-Task Load Index) among clinicians performing the task versus the standard EMR (–22.6; 95% CI, –32.7 to –12.4 points; p < 0.001). There was no statistically significant difference in adjusted total errors when comparing the two environments (odds ratio, 0.68; 95% CI, 0.36–1.30; p = 0.280).

CONCLUSIONS:

When compared with the standard EMR, AMP significantly reduced time to assessment of an entire ICU, total time to clinical task completion, and clinician task load. Additional research is needed to assess the clinicians’ performance while using AMP in the live ICU setting.

Keywords: clinical decision support, clinical informatics, cognitive load, electronic medical record, intensive care unit


KEY POINTS

Question: Can a novel acute care multipatient viewer (AMP) reduce the time required for clinical decision-making by clinicians caring for populations of acutely ill patients, compared with a widely used commercial electronic medical record (EMR)?

Findings: In this randomized crossover study, AMP was associated with a significantly shorter time to assessment of an entire ICU, a significantly shorter total time to clinical task completion, and a reduced clinician task load.

Meaning: AMP may support clinicians caring for populations of acutely ill patients by reducing the time needed to assess patients and complete tasks, compared with a standard EMR.

A standard commercial electronic medical record (EMR) stores and provides a multitude of patient-related information captured during everyday care and decision-making (1). Because the ICU is a complex hospital setting, the quantity of information generated for an ICU patient can be 10 times greater than in other hospital settings (2, 3). In their study of information generation in the critical care setting, Manor-Shulman et al (4) reported over 1,300 clinical data points per patient per day. This vast amount of information has significant implications for clinician cognitive reasoning and the timeliness of medical decision-making, and it can contribute to medical errors (5–9).

The timeliness of decision-making and interventions for acutely ill patients significantly impacts morbidity and mortality (10, 11). Early identification of patients who need priority attention and action enables ICU providers to alter the trajectory of the illness before it leads to irreversible harm (12).

Previous studies demonstrate that clinicians take a significant amount of time to process the typical clinical information associated with the individual ICU patient, experience a high cognitive load as measured using National Aeronautics and Space Administration-Task Load Index (NASA-TLX), and commit errors in medical decision-making (5). At the same time, ICU clinicians focus on a relatively small subset of available EMR data when making decisions about individual patients (6). Conversely, the presentation of high value clinical data in user interfaces (a single-patient critical care viewer) has been shown to reliably reduce clinician cognitive load, time to decision and the risk of medical decision-making errors when caring for individual patients in the ICU (5, 13, 14). Subsequent studies confirmed the safety and reliability of this approach when deployed in live clinical settings (15, 16).

However, critical care clinicians usually care for more than one patient simultaneously. In this context, identifying the patients who need priority attention within a population of critically ill patients using a standard EMR can be challenging. Automated strategies that reliably and safely identify which patients among a population of acutely ill patients would benefit from immediate attention or clinical intervention have not been digitized or implemented in the EMR environment (17).

In the current study, we replicated the methodology and approach that was used to evaluate a single-patient viewer (5) and applied it to the task of developing and refining an effective digital health strategy to support the management of multiple ICU patients concurrently. We used a novel acute care multipatient viewer (AMP), recently introduced to the ICU clinical practice at Mayo Clinic. The aim of this study was to investigate whether AMP, created with an understanding of clinician information and process requirements, could reduce the time required for clinical decision-making by clinicians caring for populations of acutely ill patients compared with a widely used commercial EMR.

METHODS

This was a single-center prospective randomized crossover study. The study protocol, "Who needs clinician attention first? Digital health strategies to support clinician-led critically ill patient population management," was reviewed and deemed exempt by the Mayo Clinic Institutional Review Board (IRB) on March 23, 2020 (IRB number 19-012448). The requirement for written informed consent was waived by the IRB, and the study was conducted in accordance with the ethical standards of the IRB at Mayo Clinic and with the ethical principles described in the Declaration of Helsinki. The study results were reported using the Consolidated Standards of Reporting Trials (CONSORT) extension for randomized crossover trials (18) (eTable 1, http://links.lww.com/CCX/B187).

Study Participants and Recruitment

Study participants were off-duty critical care service physicians (attending physicians and physicians in training) and advanced practice providers (APPs). We collected demographic data, including age, sex, role in the ICU, type of ICU they usually work at (medical, surgical, or other), work experience in critical care, experience with the standard institutional EMR and AMP.

We used site-based distribution lists to identify all ICU attendings, fellows, and APPs. Potential participants were invited via individual e-mails with one reminder sent within 14 days of the first invitation. Participation was voluntary and no remuneration was offered.

Study Setting

Mayo Clinic in Rochester, Minnesota, is a quaternary care academic hospital with over 200 ICU beds, providing 24/7 care for more than 16,000 ICU admissions annually. Nine ICUs with a total of 100 ICU beds were included in this study. Participating units included Medical, Medical-Cardiac, Cardiovascular Surgical, Medical-Surgical-Transplant, and Multi-specialty ICUs.

The study was conducted in two quiet offices remote from clinical areas. Each office was equipped with workstations similar to those used in the ICU, including two 17-inch monitors, a keyboard, and a mouse for navigation.

Intervention

In our study, we compared the performance of ICU clinicians conducting a structured clinical task using two electronic environments—a standard EMR (Epic) and AMP. Epic was implemented at Mayo Clinic in May 2018 and is one of the most popular EMR systems, employed in more than 250 healthcare organizations nationwide and covering almost half of the U.S. population (19).

AMP is a central alert-screening and implementation system developed at Mayo Clinic and built on top of Epic. AMP was first introduced to clinical practice at Mayo Clinic Rochester in October 2020. AMP combines advanced analytics, visualization, and alert-delivery mechanisms into an electronic dashboard to improve situational awareness (SA), prioritization of care, and decision-making about cohorts of acutely ill patients (17, 20, 21). A limited subset of high-priority information informs the dashboard (eFig. 1, http://links.lww.com/CCX/B187), and the clinician can view it by geographical unit (global view) or by patient (snapshot). Both interfaces provide an overview of relevant information to inform the development of SA, supporting prioritization of and decision-making about the acute care needs of the population.

Study Procedures

Units for each study session were selected by one author (S.H.) from the participating ICUs using a random number table (22). The same author checked the eligibility of patients within the study units before each session. In this crossover study, pairs of participants were randomly assigned to complete structured clinical tasks using either Epic or AMP along with Epic. Participants logged into the workstation and were allowed to use any custom settings from their everyday clinical practice to complete the tasks. We provided all participants, regardless of their previous experience with AMP, a 4-minute educational video prior to the study, and then allowed them to explore AMP for several minutes prior to study initiation to become familiar with its overall organization and layout.

This study used real-time EMR data of critically ill adult (age ≥ 18 yr) patients admitted to the ICUs at Mayo Clinic Rochester during the time of the study session. Both participants within each pair started the tasks simultaneously, in two different study offices. One participant in each pair started with AMP along with the standard institutional EMR; we encouraged these participants to use AMP primarily for task completion, although they were free to use the EMR as well. The other participant started the task with the standard EMR. At the crossover, participants completed the same tasks on a different unit and different patients using the other electronic environment. Figure 1 illustrates the overall study design.

Figure 1. Crossover study design. AMP = acute care multipatient viewer, EMR = electronic medical record, NASA-TLX = National Aeronautics and Space Administration-Task Load Index.

Structured clinical task scenarios (eTables 2 and 3, http://links.lww.com/CCX/B187) were developed by the study team based on group discussions and insights from previously conducted semi-structured interviews with ICU providers about the multipatient viewer organization. Task scenarios were pilot tested within and outside the broader research team. The structured clinical tasks included two blocks of questions: questions related to patient prioritization in the entire unit (7–10 patients) and questions related to individual patients (assessment of four randomly selected patients from the same units). The unit-level SA task can be summarized as asking the participant to identify patients with end-organ dysfunction (evidenced by mechanical ventilation, vasopressors/inotropes, continuous dialysis, or elevated lactate level). The individual-patient task can be summarized as asking the participant to determine whether the patient is suitable for a spontaneous breathing trial, whether the patient is suitable for antibiotic de-escalation, and whether the patient has a severe metabolic derangement. To account for the variability in experience and clinical decision-making, we asked participants to use the provided simplified criteria for decision-making about patients.
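To make the unit-level screening rule concrete, the following minimal sketch (in Python) flags patients who meet any of the simplified end-organ dysfunction markers listed above. The data structure, field names, and lactate cutoff are hypothetical illustrations only and are not drawn from the study's actual criteria (eTables 2 and 3) or from AMP's internal logic.

```python
from dataclasses import dataclass
from typing import List, Optional

LACTATE_THRESHOLD_MMOL_L = 2.0  # hypothetical cutoff for "elevated lactate"


@dataclass
class PatientSnapshot:
    """Minimal per-patient summary assumed for this illustration."""
    bed: str
    on_mechanical_ventilation: bool
    on_vasopressors_or_inotropes: bool
    on_continuous_dialysis: bool
    lactate_mmol_l: Optional[float]  # most recent value; None if not measured


def has_end_organ_dysfunction(p: PatientSnapshot) -> bool:
    """Return True if any simplified end-organ dysfunction marker is present."""
    elevated_lactate = (p.lactate_mmol_l is not None
                        and p.lactate_mmol_l > LACTATE_THRESHOLD_MMOL_L)
    return (p.on_mechanical_ventilation
            or p.on_vasopressors_or_inotropes
            or p.on_continuous_dialysis
            or elevated_lactate)


def patients_needing_priority_review(unit: List[PatientSnapshot]) -> List[str]:
    """Return bed labels of patients in a unit who meet at least one criterion."""
    return [p.bed for p in unit if has_end_organ_dysfunction(p)]
```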

Outcome Assessments

The primary outcome was the difference in time required to complete structured tasks using AMP compared with the standard EMR. Time was measured in seconds by a member of the study team using an electronic stopwatch.

Secondary outcomes included the task load score of clinicians performing the task, measured using NASA-TLX. NASA-TLX is a widely accepted tool for subjectively measuring the task load experienced while performing a task (23–25). The tool rates the participant's perceived workload across six factors: mental demand, physical demand, temporal demand, frustration, effort, and performance. Additional details about NASA-TLX are described in the Supplemental Digital Content and eTable 4 (http://links.lww.com/CCX/B187). The other secondary outcome was the reliability of task completion, measured by the number of errors of cognition when using AMP versus the EMR.
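For readers unfamiliar with the instrument, the sketch below computes a raw (unweighted) NASA-TLX score as the simple mean of the six subscale ratings on a 0–100 scale. This is a generic illustration of the published scoring scheme; the article does not state whether raw or weighted scoring was used, and the example ratings are invented.

```python
from statistics import mean
from typing import Dict

# The six NASA-TLX subscales, each rated on a 0-100 scale.
SUBSCALES = ("mental_demand", "physical_demand", "temporal_demand",
             "performance", "effort", "frustration")


def raw_tlx(ratings: Dict[str, float]) -> float:
    """Raw (unweighted) NASA-TLX: the mean of the six subscale ratings."""
    missing = set(SUBSCALES) - ratings.keys()
    if missing:
        raise ValueError(f"Missing subscale ratings: {sorted(missing)}")
    return mean(ratings[s] for s in SUBSCALES)


# Hypothetical ratings from one participant after one study arm.
example = {"mental_demand": 70, "physical_demand": 10, "temporal_demand": 55,
           "performance": 30, "effort": 60, "frustration": 40}
print(round(raw_tlx(example), 1))  # 44.2
```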

We also assessed AMP usability and intention to use with the System Usability Scale (SUS) (26) and an Intention to Use survey (adapted from the Technology Acceptance Model [TAM]) (27) following completion of the study tasks. Additional details about these instruments are reported in the Supplemental Digital Content and eTables 5–7 (http://links.lww.com/CCX/B187).
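The SUS yields a single 0–100 score from ten items rated on a 1–5 Likert scale, using the standard published rule: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5. The sketch below implements that rule; the example response set is hypothetical and chosen only so that the result falls in the range of the median score reported later.

```python
from typing import Sequence


def sus_score(responses: Sequence[int]) -> float:
    """Standard System Usability Scale scoring for ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects exactly ten responses, each between 1 and 5")
    odd_items = sum(responses[i] - 1 for i in range(0, 10, 2))   # items 1, 3, 5, 7, 9
    even_items = sum(5 - responses[i] for i in range(1, 10, 2))  # items 2, 4, 6, 8, 10
    return (odd_items + even_items) * 2.5


# Hypothetical response set yielding a score of 80.
print(sus_score([4, 2, 4, 1, 4, 2, 5, 2, 4, 2]))  # 80.0
```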

Statistical Analysis

Participant demographics were summarized using median (interquartile range) for continuous variables and frequency (percentage) for categorical variables. Unadjusted and multivariable linear mixed-effects regression models assessed the association between the electronic environment (EMR or AMP) and time and cognitive load, accounting for pairs and participants with random intercepts. Unadjusted and multivariable mixed-effects proportional odds regression models assessed the association between errors and electronic environment, accounting for pairs and participants with random intercepts. Adjustment variables were chosen in consultation with our study statistician, and included number of patients per unit, participant age, and previous AMP experience. For all analyses, p values of less than 0.05 were used to signify statistical significance. Data management and analysis were performed in SAS Studio 3.8 (SAS Institute, Cary, NC).
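The models were fit in SAS Studio; as an illustration of the modeling approach only, the sketch below shows an analogous linear mixed-effects model in Python's statsmodels. It assumes a hypothetical long-format table with one row per participant-environment observation, and all column names (time_min, environment, pair_id, participant_id, n_patients, age, amp_experience) are invented for the example.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per electronic environment.
df = pd.read_csv("study_observations.csv")  # assumed file; not part of the study materials

# Linear mixed-effects model for time to task completion, adjusted for the covariates
# named in the text, with a random intercept per pair and a participant-level random
# intercept nested within pair (approximating the random-intercept structure described).
model = smf.mixedlm(
    "time_min ~ C(environment) + n_patients + age + amp_experience",
    data=df,
    groups=df["pair_id"],
    vc_formula={"participant": "0 + C(participant_id)"},
)
result = model.fit()
print(result.summary())
```

The mixed-effects proportional odds model used for the error outcome has no direct statsmodels equivalent and is omitted from this sketch.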

RESULTS

Study Participants

Between June 2021 and September 2021, 165 ICU clinicians were identified as eligible and were invited to participate in the study. Twenty subjects (10 pairs of clinicians) participated, for an overall response rate of 12% (20/165); 14 participants (70%) were male. The reasons for nonparticipation were likely related to scheduling challenges: each participant had to be paired at a mutually convenient time with another participant to perform the crossover study simultaneously in the study offices at the hospital. All participants completed the study and were included in the analysis.

The demographic characteristics of study participants are described in Table 1. All participants had experience working with the current institutional EMR for at least 2 years. However, 45% of the participants had no experience working with AMP.

TABLE 1.

Study Participant Characteristics (n = 20)

Characteristic n (%) or Median (IQR)
Sex
 Female 6 (30)
 Male 14 (70)
Age (yr) 40.0 (36.0–51.5)
Role in the ICU
 Attending physician 12 (60)
 Clinical fellow 4 (20)
 Advanced practice providers (nurse practitioners and physician assistants) 4 (20)
Primary ICU
 Medical 10 (50)
 Surgical 4 (20)
 Medical and surgical 5 (25)
 Neuro 1 (5)
Work experience in critical care (yr) 7.5 (4.0–17.0)
Current electronic medical record use experience (yr)
 < 2 0 (0)
 2–3 8 (40)
 4–5 8 (40)
 > 5 4 (20)
Acute care multipatient viewer use experience 11 (55)

IQR = interquartile range.

Outcomes

During the study session, each participant completed the tasks on two different ICUs and on eight individual patients. Primary and secondary outcomes are presented in Table 2.

TABLE 2.

Primary and Secondary Outcomes: Clinician Performance Using Acute Care Multipatient Viewer Versus Electronic Medical Record (n = 20)

Outcome | Unadjusted Estimate (95% CI), p | Adjusted Estimate (95% CI), p
Difference in time to task completion for one entire unit (min) –6.11 (–7.88 to –4.33) < 0.001 –6.11 (–7.91 to –4.30) < 0.001
Difference in time to task completion for four patients (min) 0.73 (–0.13 to 1.58) 0.090 0.73 (–0.09 to 1.54) 0.078
Difference in total time to task completion (min) –5.38 (–7.58 to –3.18) < 0.001 –5.38 (–7.56 to –3.20) < 0.001
Task load (National Aeronautics and Space Administration-Task Load Index) –22.6 (–32.6 to –12.5) < 0.001 –22.6 (–32.7 to –12.4) < 0.001
Total errors per user 0.71 (0.37–1.36) 0.333 0.68 (0.36–1.30) 0.280

Primary Outcome

The adjusted time for assessment of the entire ICU was 3.1 times lower using AMP than using the standard EMR (–6.11; 95% CI, –7.91 to –4.30 min; p < 0.001). When comparing the two electronic environments, the adjusted total time to task completion was 1.4 times lower using AMP (–5.38; 95% CI, –7.56 to –3.20 min; p < 0.001). The adjusted time for assessment of the individual patients was similar using both the EMR and AMP (0.73; 95% CI, –0.09 to 1.54 min; p = 0.078) (eFig. 2, http://links.lww.com/CCX/B187).

Secondary Outcomes

AMP was associated with a significantly lower adjusted clinician task load (NASA-TLX) versus the standard EMR (–22.6; 95% CI, –32.7 to –12.4 points; p < 0.001). The adjusted difference in total errors for each user when comparing AMP versus the EMR was not statistically significant (0.68; 95% CI, 0.36–1.30; p = 0.280).

Half of the participants (10/20) rated usability as excellent, 25% (5/20) as good, and 25% (5/20) as poor or awful (Fig. 2). The median SUS score among the 20 participants was 80 points, reflecting good usability.

Figure 2. Acute care multipatient viewer usability assessment by participants (using the System Usability Scale [SUS]).

Participants demonstrated a positive attitude toward AMP: the median TAM score was high both for perceived usefulness (questions 1–6) and for perceived ease of use (questions 7–12). The median score for each item is presented in Table 3.

TABLE 3.

Acute Care Multipatient Viewer Intention to Use Survey Scores by Specific Questions

Question Median Score (IQR)
1) Using AMP in my job would enable me to accomplish tasks more quickly 82 (71–95)
2) Using AMP would improve my job performance 83 (61–88)
3) Using AMP in my job would increase my productivity 81 (62–88)
4) Using AMP would enhance my effectiveness on the job 81 (60–88)
5) Using AMP would make it easier to do my job 83 (62–95)
6) I would find AMP useful in my job 84 (68–96)
7) Learning to operate AMP would be easy for me 87 (76–94)
8) I would find it easy to get AMP to do what I want it to do 78 (62–90)
9) My interaction with AMP would be clear and understandable 84 (74–88)
10) I would find AMP to be flexible to interact with 81 (65–89)
11) It would be easy for me to become skillful at using AMP 83 (75–88)
12) I would find AMP easy to use 86 (77–92)

AMP = acute care multipatient viewer, IQR = interquartile range.

DISCUSSION

In this randomized crossover study, we compared ICU clinicians’ performance in clinical task completion when using AMP versus a standard commercial EMR. Although 45% of participants had no prior AMP experience, AMP was associated with a significantly shorter time for assessment of the entire ICU and a shorter total time to task completion. The time for assessment of individual patients was similar using both electronic environments. AMP was associated with a significantly lower clinician task load (measured using NASA-TLX) versus the standard EMR. The number of errors per user did not differ between AMP and the EMR. Additionally, AMP demonstrated good usability during task completion, and participants’ attitude toward AMP use was positive, as reflected in the high intention-to-use scores.

The ability of digital health strategies to reduce time to clinical decision-making and task load for clinicians caring for populations of critically ill patients may be extremely helpful in highly dynamic environments, such as the ICU, where clinicians are responsible for multiple patients, need to re-evaluate patients multiple times during their shifts, and must be situationally aware at all times. The cumulative time saved by clinicians reviewing data over a day or a week may be substantial and clinically relevant, especially since they may need to scan multiple units.

The most commonly installed EMRs (28) are designed around individual patient management and are not explicitly designed to enable real-time prioritization of patient populations according to acute care needs (29, 30). A previous study by Ahmed et al (5) demonstrated that the presentation of critical data in a parsimonious way using a novel user interface reduces clinician task load, time to decision, and the risk of medical decision-making errors by ICU clinicians caring for individual patients compared with a standard EMR. However, critical care clinicians typically care for groups of 8 to 15 patients at the same time during a 12-hour shift (31–33). Increased patient-intensivist ratios are associated with poor patient outcomes (34–36).

The tools we currently have for managing populations of acutely ill patients fall into two main categories—traditional severity prediction models and specialized applications. The Acute Physiology and Chronic Health Evaluation (APACHE) prediction model is a de facto standard for the prediction of distant outcomes—hospital mortality and length of stay (37). As with other prediction models or scores, for example, the Mortality Probability Model, the Simplified Acute Physiology Score instruments, or the Sequential (sepsis-related) Organ Failure Assessment, APACHE is a useful tool for prognosis, clinical research, quality comparison, or administrative planning (38–41). However, these prediction models do not suggest any specific clinical actions and were never intended to dynamically influence real-time ICU management (40, 42).

Specialized applications (43), designed for use in the tele-ICU setting, provide some indicators of physiologic or laboratory abnormalities and severity of illness that may be used as triggers for clinical review and for patient stratification or prioritization. Although these specialized tools are more suited to the management of critically ill patient populations than general EMRs, they remain highly dependent on severity of illness or prognosis for stratification and are not optimized to address the question of who needs immediate clinician attention, and for which specific problem. In contrast, some newly developed detection algorithms, such as a ventilator-induced lung injury sniffer, are actionable and can change the course of treatment (44). However, these algorithms are typically focused on specific clinical conditions.

Demand for critical care clinicians continues to grow as the hospitalized patient population becomes older and more medically complex (45). At the same time, the availability of intensivists in underserved areas remains limited (46, 47). As telehealth grows as a mechanism for delivering acute care, the challenge of matching patient needs to clinician availability is anticipated to become more widely recognized (48, 49). In this context, the development and scientific testing of novel digital technologies that support clinician-led population management and patient prioritization become increasingly important as we expand clinical oversight to larger patient populations through remote monitoring services (50). Having a limited set of the most relevant information is particularly useful in the tele-ICU setting, where a clinician cares for up to 150 patients simultaneously and is physically remote from the bedside.

AMP has been implemented as a nonmandatory tool and is currently in use at Mayo Clinic, which provides opportunities for its further evaluation and for leveraging it as a hospital-wide surveillance, alerting, and response-tracking tool in both the hospital and hospital-at-home settings. Future research and evolution of digital health strategies should proceed with a deep understanding of the socioeconomic environment and with engagement of key stakeholders. Examples of evaluation approaches include gathering user feedback using both classical methods (surveys, interviews/focus groups, ethnography) and innovative approaches, such as video-reflexive ethnography (51), eye tracking in the field (52), or user performance evaluation in the field. Analyzing user log data can give additional insight into providers’ granular interactions with the medical record (which tasks or events clinicians perform using AMP, and when) (53). Other important aspects of digital strategy assessment may include evaluation of team safety, intention to adopt, impact on processes of care, and impact on patient outcomes. We expect that, with feedback from users, we will be able to improve usability and further refine AMP.

Strengths and Limitations

Our study has several strengths. The study was performed by a team with expertise in digital technology design and testing (5, 54, 55). It was conducted in an academic center with a robust informatics infrastructure supporting the design and development of a multipatient viewer. Clinicians involved in the study represented a variety of ICUs, including medical ICU, surgical ICU, trauma ICU, neuro-ICU, and tele-ICU. The crossover study design allows each participant to act as their own control; this minimizes between-subject variability and makes it possible to detect statistically significant effects in a relatively small study sample. We standardized study procedures to minimize confounding and used previously validated instruments for outcome assessment (56). To minimize potential bias, the study results were adjusted for participant age, clinical experience, and AMP use experience.

The study also has some limitations.

This was a single-center study, which may limit generalizability to other settings, including nonacademic settings and institutions without the informatics infrastructure that our institution has built. Nonresponse bias may also have occurred: those who responded might be more interested in digital strategies and may perform better than clinicians who are less engaged in these types of innovations. Study participants could not be blinded to the study environment and intervention (EMR or AMP), potentially modifying their performance. Similarly, outcome assessors could not be blinded to the intervention. Another important limitation was the physical isolation of providers from patients and from the potential distractions and interruptions they are usually exposed to in their workplaces. This was done to standardize the conditions in which different providers performed the task; as with many research methods, we wanted to establish efficacy before integrating and testing the tool in a real-world setting. Also, the unit-level tasks were focused on identifying the most acutely ill patients; the sickest patients may not necessarily be the same as the patients who need immediate attention and who would benefit from clinical intervention. Our study may be prone to carryover and within-participant effects. To minimize these potential flaws, we compared performance within pairs on the same patients using either AMP or the EMR, and after the crossover each participant started the task on new patients using the other electronic environment.

CONCLUSIONS

The findings of this study suggest that the novel AMP, created with an understanding of ICU clinician information and process requirements, reduces time to clinical task completion and clinician task load compared with the standard EMR. Additional research is needed to assess clinician performance while using AMP in the live ICU setting.

ACKNOWLEDGMENTS

We would like to thank Drs. Kalyan Pasupathy, Alexander Niven, and Mark Keegan for their ongoing support throughout all study processes, from review of the study protocol and data collection methodology to analysis and presentation of the study results.

Supplementary Material

cc9-5-e0909-s001.pdf (698.5KB, pdf)

Footnotes

This study was supported by grant number UL1 TR002377 from the National Center for Advancing Translational Sciences, grant number R18HS026609 from the Agency for Healthcare Research and Quality, and a Society of Critical Care Medicine Discovery Grant award.

The contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health, the Agency for Healthcare Research and Quality, or the Society of Critical Care Medicine.

The authors have disclosed that they do not have any potential conflicts of interest.

Dr. Pickering, Dr. Herasevich, Dr. Pinevich, and Dr. Lipatov contributed to the design of the study. All authors contributed to acquisition, analysis, and interpretation of the data. Drs. Herasevich and Pickering drafted the article. All authors critically revised the article for important intellectual content and approved the final version of the article.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccejournal).

REFERENCES

1. Gunter TD, Terry NP: The emergence of national electronic health record architectures in the United States and Australia: Models, costs, and questions. J Med Internet Res 2005; 7:e3
2. Herasevich V, Litell J, Pickering B: Electronic medical records and mHealth anytime, anywhere. Biomed Instrum Technol 2012; Fall(Suppl):45–48
3. Donchin Y, Gopher D, Olin M, et al: A look into the nature and causes of human errors in the intensive care unit. Crit Care Med 1995; 23:294–300
4. Manor-Shulman O, Beyene J, Frndova H, et al: Quantifying the volume of documented clinical information in critical illness. J Crit Care 2008; 23:245–250
5. Ahmed A, Chandra S, Herasevich V, et al: The effect of two different electronic health record user interfaces on intensive care provider task load, errors of cognition, and performance. Crit Care Med 2011; 39:1626–1634
6. Pickering BW, Gajic O, Ahmed A, et al: Data utilization for medical decision making at the time of patient admission to ICU. Crit Care Med 2013; 41:1502–1510
7. Zhu X, Lord W: Using a context-aware medical application to address information needs for extubation decisions. AMIA Annu Symp Proc 2005; 2005:1169
8. Klerings I, Weinhandl AS, Thaler KJ: Information overload in healthcare: Too much of a good thing? Z Evid Fortbild Qual Gesundhwes 2015; 109:285–290
9. Laker LF, Froehle CM, Windeler JB, et al: Quality and efficiency of the clinical decision-making process: Information overload and emphasis framing. Prod Oper Manage 2018; 27:2213–2225
10. Bates ER: Timeliness of treatment is more important than choice of reperfusion therapy. Cleve Clin J Med 2010; 77:567–569
11. Clark K, Normile LB: Influence of time-to-interventions for emergency department critical care patients on hospital mortality. J Emerg Nurs 2007; 33:6–13; quiz 90
12. Yadav H, Thompson BT, Gajic O: Fifty years of research in ARDS. Is acute respiratory distress syndrome a preventable disease? Am J Respir Crit Care Med 2017; 195:725–736
13. Dziadzko MA, Herasevich V, Sen A, et al: User perception and experience of the introduction of a novel critical care patient viewer in the ICU setting. Int J Med Inform 2016; 88:86–91
14. Pickering BW, Herasevich V, Ahmed A, et al: Novel representation of clinical information in the ICU: Developing user interfaces which reduce information overload. Appl Clin Inform 2010; 1:116–131
15. Olchanski N, Dziadzko MA, Tiong IC, et al: Can a novel ICU data display positively affect patient outcomes and save lives? J Med Syst 2017; 41:171
16. Pickering BW, Dong Y, Ahmed A, et al: The implementation of clinician designed, human-centered electronic medical record viewer in the intensive care unit: A pilot step-wedge cluster randomized trial. Int J Med Inform 2015; 84:299–307
17. Herasevich V, Pickering BW, Clemmer TP, et al: Patient monitoring systems. In: Biomedical Informatics. Shortliffe EH, Cimino JJ (Eds). Cham; Springer, 2021, pp 693–732
18. Dwan K, Li T, Altman DG, et al: CONSORT 2010 statement: Extension to randomised crossover trials. BMJ 2019; 366:l4378
19. Which Hospitals Use EPIC Software in Different States? 2022. Available at: https://digitalhealth.folio3.com/blog/which-hospitals-use-epic/. Accessed October 18, 2022
20. Herasevich V, Keegan MT, Johnston MD, et al: Will artificial intelligence change ICU practice? ICU Manage Pract 2019/2020; 19:218–221
21. Herasevich V, Lipatov K, Pickering BW: EHR data: Enabling clinical surveillance and alerting. In: Nursing Informatics. Health Informatics. Hübner UH, Mustata Wilson G, Morawski TS, et al (Eds). Cham; Springer, 2022, pp 155–168
22. Murdoch J, Barnes JA: Random number tables. In: Statistical Tables. Murdoch J, Barnes JA (Eds). London; Palgrave, 1986, pp 36–39
23. National Aeronautics and Space Administration: NASA TLX Task Load Index, 2006. Available at: https://humansystems.arc.nasa.gov/groups/tlx/. Accessed October 18, 2022
24. Hart SG, Staveland LE: Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In: Advances in Psychology. Vol. 52. Hancock PA, Meshkati N (Eds). North-Holland, 1988, pp 139–183
25. Agency for Healthcare Research and Quality: NASA Task Load Index. Available at: https://digital.ahrq.gov/health-it-tools-and-resources/evaluation-resources/workflow-assessment-health-it-toolkit/all-workflow-tools/nasa-task-load-index. Accessed October 18, 2022
26. Brooke J: SUS—A quick and dirty usability scale. In: Usability Evaluation in Industry, 1986, pp 189–194
27. Davis FD: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 1989; 13:319–340
28. McCormack M: EHR meaningful use market share, 2020. Available at: https://www.softwareadvice.com/resources/ehr-meaningful-use-market-share/. Accessed January 3, 2020
29. Gawande A: Why Doctors Hate Their Computers. Digitization Promises to Make Medical Care Easier and More Efficient. But Are Screens Coming Between Doctors and Patients? Available at: https://www.newyorker.com/magazine/2018/11/12/why-doctors-hate-their-computers. Accessed October 20, 2022
30. Vaghefi I, Hughes JB, Law S, et al: Understanding the impact of electronic medical record use on practice-based population health management: A mixed-method study. JMIR Med Inform 2016; 4:e10
31. Bhatla A, Ryskina KL: Hospital and ICU patient volume per physician at peak of COVID pandemic: State-level estimates. Healthc (Amst) 2020; 8:100489
32. Neuraz A, Guérin C, Payet C, et al: Patient mortality is associated with staff resources and workload in the ICU: A multicenter observational study. Crit Care Med 2015; 43:1587–1594
33. Ward NS, Afessa B, Kleinpell R, et al; Members of Society of Critical Care Medicine Taskforce on ICU Staffing: Intensivist/patient ratios in closed ICUs: A statement from the Society of Critical Care Medicine Taskforce on ICU Staffing. Crit Care Med 2013; 41:638–645
34. Dara SI, Afessa B: Intensivist-to-bed ratio: Association with outcomes in the medical ICU. Chest 2005; 128:567–572
35. Gershengorn HB, Pilcher DV, Litton E, et al: Association of patient-to-intensivist ratio with hospital mortality in Australia and New Zealand. Intensive Care Med 2022; 48:179–189
36. Kerlin MP, Caruso P: Towards evidence-based staffing: The promise and pitfalls of patient-to-intensivist ratios. Intensive Care Med 2022; 48:225–226
37. Knaus WA, Zimmerman JE, Wagner DP, et al: APACHE-Acute Physiology and Chronic Health Evaluation: A physiologically based classification system. Crit Care Med 1981; 9:591–597
38. Higgins TL, Teres D, Copes WS, et al: Assessing contemporary intensive care unit outcome: An updated Mortality Probability Admission Model (MPM0-III). Crit Care Med 2007; 35:827–835
39. Metnitz PG, Moreno RP, Almeida E, et al; SAPS 3 Investigators: SAPS 3—from evaluation of the patient to evaluation of the intensive care unit. Part 1: Objectives, methods and cohort description. Intensive Care Med 2005; 31:1336–1344
40. Vincent JL, de Mendonca A, Cantraine F, et al: Use of the SOFA score to assess the incidence of organ dysfunction/failure in intensive care units: Results of a multicenter, prospective study. Working group on “sepsis-related problems” of the European Society of Intensive Care Medicine. Crit Care Med 1998; 26:1793–1800
41. Zimmerman JE, Kramer AA, McNair DS, et al: Acute Physiology and Chronic Health Evaluation (APACHE) IV: Hospital mortality assessment for today’s critically ill patients. Crit Care Med 2006; 34:1297–1310
42. Vincent JL, Moreno R, Takala J, et al: The SOFA (Sepsis-related Organ Failure Assessment) score to describe organ dysfunction/failure. On behalf of the Working Group on Sepsis-Related Problems of the European Society of Intensive Care Medicine. Intensive Care Med 1996; 22:707–710
43. Lilly CM, McLaughlin JM, Kojicic H, et al; UMass Memorial Critical Care Operations Group: A multicenter study of ICU telemedicine reengineering of adult critical care. Chest 2014; 145:500–507
44. Herasevich V, Tsapenko M, Kojicic M, et al: Limiting ventilator-induced lung injury through individual electronic medical record surveillance. Crit Care Med 2011; 39:34–39
45. Tanaka Gutiez M, Ramaiah R: Demand versus supply in intensive care: An ever-growing problem. Crit Care 2014; 18(Suppl 1):P9
46. Halpern NA, Tan KS, DeWitt M, et al: Intensivists in U.S. acute care hospitals. Crit Care Med 2019; 47:517–525
47. Pastores SM, Kvetan V: Shortage of intensive care specialists in the United States: Recent insights and proposed solutions. Rev Bras Ter Intensiva 2015; 27:5–6
48. Khanna A, Majesko A, Johansson M: The Multidisciplinary Critical Care Workforce: An Update From SCCM. 2019. Available at: https://www.sccm.org/Communications/Critical-Connections/Archives/2019/The-Multidisciplinary-Critical-Care-Workforce-An. Accessed November 25, 2019
49. Kumar S, Merchant S, Reynolds R: Tele-ICU: Efficacy and cost-effectiveness approach of remotely managing the critical care. Open Med Inform J 2013; 7:24–29
50. Pickering BW, Litell JM, Herasevich V, et al: Clinical review: The hospital of the future—building intelligent environments to facilitate safe and effective acute care delivery. Crit Care 2012; 16:220
51. Ajjawi R, Hilder J, Noble C, et al: Using video-reflexive ethnography to understand complexity and change practice. Med Educ 2020; 54:908–914
52. Klaib AF, Alsrehin NO, Melhem WY, et al: Eye tracking algorithms, techniques, tools, and applications with an emphasis on machine learning and Internet of Things technologies. Expert Syst Appl 2021; 166:114037
53. Adler-Milstein J, Adelman JS, Tai-Seale M, et al: EHR audit logs: A new goldmine for health services research? J Biomed Inform 2020; 101:103343
54. Barwise A, Leppin A, Dong Y, et al: What contributes to diagnostic error or delay? A qualitative exploration across diverse acute care settings in the United States. J Patient Saf 2021; 17:239–248
55. Huang C, Barwise A, Soleimani J, et al: Bedside clinicians’ perceptions on the contributing role of diagnostic errors in acutely ill patient presentation: A survey of academic and community practice. J Patient Saf 2022; 18:e454–e462
56. Mills EJ, Chan AW, Wu P, et al: Design, analysis, and presentation of crossover trials. Trials 2009; 10:27
