Author manuscript; available in PMC: 2024 Jun 10.
Published in final edited form as: ACM Trans Comput Hum Interact. 2023 Jun 10;30(3):41. doi: 10.1145/3582431

Transitioning Cognitive Aids into Decision Support Platforms: Requirements and Design Guidelines

Angela Mastrianni 1, Aleksandra Sarcevic 2, Allison Hu 3, Lynn Almengor 4, Peyton Tempel 5, Sarah Gao 6, Randall S Burd 7
PMCID: PMC10489246  NIHMSID: NIHMS1878414  PMID: 37694216

Abstract

Digital cognitive aids have the potential to serve as clinical decision support platforms, triggering alerts about process delays and recommending interventions. In this mixed-methods study, we examined how a digital checklist for pediatric trauma resuscitation could trigger decision support alerts and recommendations. We identified two criteria that cognitive aids must satisfy to support these alerts: (1) context information must be entered in a timely, accurate, and standardized manner, and (2) task status must be accurately documented. Using co-design sessions and near-live simulations, we created two checklist features to satisfy these criteria: a form for entering the pre-hospital information and a progress slider for documenting the progression of a multi-step task. We evaluated these two features in the wild, contributing guidelines for designing these features on cognitive aids to support alerts and recommendations in time- and safety-critical scenarios.

Additional Keywords and Phrases: clinical decision support system, alerts, trauma resuscitation, cognitive aid, alert fatigue, electronic documentation

1. INTRODUCTION

Clinical decision support systems (CDSSs) have long been used in healthcare settings to aid decision-making, but human-computer interaction issues have reduced their adoption and effectiveness [40,63]. Most CDSSs have been built as stand-alone systems [11,44,52,85,95] because they are straightforward to develop and can be easily shared between different institutions [91]. Despite these benefits, stand-alone CDSSs often require additional work and can interrupt clinician workflows and increase provider cognitive load [85]. To mitigate this adoption barrier, prior research has proposed integrating decision support into existing clinical systems [86]. In primary care and inpatient settings, decision support has been integrated into archival systems, such as electronic health records (EHRs) [24,25,64] and computerized order entry systems [51]. Archival systems, however, are cumbersome for high-stakes, time-critical events occurring in emergency medical settings [89]. Clinicians in these settings either rely on a scribe to document the event [88] or use the archival system after the event [13]. Unlike archival systems, digital cognitive aids have become widely used during emergency medical care [1,45,49,57]. Cognitive aids are “implementation tools” designed to help users complete a task or series of tasks [58]. The aids are used concurrently with work and include checklists (or “task lists”), flowcharts, mnemonics, and manuals [59]. Because digital cognitive aids have had higher adoption rates [14], decision support features could be integrated into these aids to support clinicians during decision making.

Our long-term goal has been to design and evaluate digital cognitive aids to support decision making during time-critical clinical events. We situated this work in the context of pediatric trauma resuscitation because this setting requires rapid decisions about life-saving interventions [20]. In pediatric trauma resuscitation, an interdisciplinary team cares for a severely injured child during a short window of time. Surgical residents and fellows, along with emergency department (ED) physicians, serve as team leaders. Decision support is critical in resuscitations because clinicians’ cognitive load is substantially higher during medical crises that involve patients with potentially life-threatening injuries [83]. Even highly-trained resuscitation teams can make errors leading to preventable deaths [31]. Prior research identified several types of errors in time-critical, team-based medical work, including interpretation errors (failures to make correct diagnoses promptly) and management errors (failures to track task progress) [76]. Despite prior attempts to reduce these types of errors by implementing CDSSs, usability issues (e.g., poor workflow integration) have limited system adoption [74,94].

In this paper, we studied how cognitive aids can be transitioned into platforms for triggering decision support alerts aimed at mitigating interpretation and management errors. Decision support platforms have two components: (1) acquisition of the data needed to determine if decision support alerts should be triggered, and (2) communication of the alerts to users in some form. Here, we focus on the first component—obtaining the data needed to determine if alerts should be triggered—by studying two alerts: (1) an interpretation alert about an increased risk of needing a blood transfusion, and (2) a management alert about delays in establishing intravenous (IV) or intraosseous (IO) access. We had three research questions: (1) What types of design changes are needed to transition cognitive aids from “task lists” into decision support platforms that trigger alerts? (2) How can features on cognitive aids be designed to improve the acquisition of the data needed for decision support? (3) How do clinicians use these features to document context information and task status during time-critical medical events? To answer these research questions, we first studied how an existing checklist supported data acquisition for triggering management and interpretation alerts. We found that the checklist design poorly supported the input of context information (for interpretation alerts) and accurate documentation of multi-step tasks (for management alerts). We then used co-design sessions and near-live simulations to design two new checklist features: a pre-hospital form for obtaining context information (e.g., the patient’s injury type) and a progress slider to improve documentation accuracy. Finally, we evaluated these new features in the wild during 78 actual resuscitations. Team leaders used the pre-hospital form in most cases but documented some items less frequently than others. 
Because the pre-hospital form was used in most cases, the information entered on the form could be used to determine the patient’s risk of needing a blood transfusion and trigger appropriate decision support alerts. The accuracy of these alerts, however, could be limited due to incomplete documentation. Premature documentation of multi-step tasks decreased after we introduced the progress slider, potentially leading to more accurate alerts in cases with delays in establishing intravenous (IV) access and other multi-step tasks. However, false alerts would also increase in cases without delays because leaders frequently did not update the status of the slider after task completion.

Using our findings, we make two contributions to HCI: (1) an understanding of how cognitive aids (specifically “task lists”) can be used as decision support platforms to trigger alerts aimed at mitigating errors during time-critical decision making, and (2) design guidelines for accurately capturing context information and task status required for decision support alerts on cognitive aids. These contributions are not limited to clinical settings but could inform the design of cognitive aids in other settings with the potential for interpretation or management errors, such as driving [38] and aviation [22,37].

2. RELATED WORK

Below we review prior work on common errors occurring during time-critical medical events. We also discuss research on the use of cognitive aids and clinical decision support systems in clinical settings.

2.1. Computerized Support to Address Errors During Time-Critical Medical Work

Prior research identified four error types in time-critical, team-based medical work: communication errors, management errors (failures to track task progress), vigilance errors (failures to block team members’ errors), and interpretation errors (failures to make correct diagnoses promptly) [76]. We chose to focus on interpretation and management errors because a cognitive aid used by an individual user may be less effective in reducing other errors that involve entire teams. Several technologies have been designed to address errors in time-critical medical work [33,43,47,74,93,94]. For example, multiple CDSSs were implemented to reduce interpretation errors by assisting physicians in making rapid diagnoses with limited information, but their adoption has been limited due to poor workflow integration [74,94]. Other studies examined how cognitive aids could reduce delays and procedural errors [33,93]. Because limited provider attention has been a key issue during dynamic medical events, some cognitive aids have started showing intervention prompts and timers to help users manage task performance [93]. However, research on the design and use of alerts on cognitive aids is scarce. A recent study found that an alert on a checklist reduced delays in documenting time-critical information [61], suggesting that alerts on cognitive aids can be effective. In this study, we contribute new insight and guidelines for incorporating alerts within cognitive aids to mitigate interpretation and management errors.

2.2. Cognitive Aids in Clinical Settings: Information Displays and Checklists

Shared information displays are frequently used as cognitive aids in clinical settings to improve situational awareness and teamwork [12,27,69]. Digital information displays are especially advantageous in time-critical settings because they can adapt to the situation, presenting only relevant information [27]. Although information displays may improve overall teamwork, their success has been limited because (1) displays only increase the situational awareness of select team members [69] and (2) data entry and accuracy issues prevent using displays in real time [12]. Checklists are another type of cognitive aid used in clinical settings. Prior work has designed and implemented checklists for different resuscitation settings [32,47–49,68] and intensive care units [21,33,87]. Earlier studies compared paper and digital checklists, finding fewer unchecked items on the digital forms [47,87] and no difference in data quality between the two modalities [30]. Checklists used by an individual team member can also impact the entire team and their work in hierarchical team settings [33]. Most checklists for intensive care units and resuscitation settings are in the form of task lists, while checklists that support diagnostic thinking instead of task completion have been developed for outpatient settings [50].

Past research has also explored how the design of cognitive aids should evolve to better support users, increase adoption, and avoid fatigue [29,46,93]. Digital aids can be designed to adapt to different scenarios in time-critical medical events [93]. Non-routine cases would especially benefit from adaptive cognitive aids that highlight tasks normally not performed in routine cases [46]. Cognitive aids require context information, such as task and patient status, to provide the right content at the right time in a medical event [29]. We build on this prior work by studying how “task list” checklists can obtain accurate context information. In addition to supporting decision-support alerts, this context information would also allow for more dynamic and adaptive cognitive aids.

2.3. Clinical Decision Support Systems: Design Issues and Barriers to Implementation

Both medical [40,63,82,86] and HCI [11,42,44,52,90,95,96] studies have evaluated the use of CDSSs in healthcare settings. Medical studies found mixed results, with drug ordering systems improving clinical performance and diagnostic systems showing low accuracy due to poor data availability [40,86]. Some studies found HCI issues, such as delayed alert activation [63] and data entry burden [40]. We next review HCI studies that focused on CDSS design. We also discuss the barriers to implementation and adoption of these systems.

2.3.1. Design of Clinical Decision Support Systems

Prior HCI studies of CDSSs have focused on the types of support for different clinical settings [42,44,95,96]. For example, the design of decision support for intensive care units should be evidence-based and support three phases of decision making: (1) deciding appropriate interventions, (2) implementing the interventions, and (3) monitoring the patient [42]. CDSSs for surgical teams should (1) support autonomy, (2) provide appropriate aid given the setting, (3) enhance teamwork and communication, and (4) allow for inputting different types of data while requiring minimal interaction from clinicians [44].

A common issue with CDSSs has been alert fatigue, where users frequently ignore or override alerts [63]. Past work has proposed different strategies for mitigating alert fatigue in CDSSs and other alerting systems [10,17,35,62,95], such as triggering related alerts together [35]. Other studies examined how alerts could be designed to reduce fatigue [10,17,62,95]. Proposed design strategies involve the use of vibrotactile modalities [17] and peripheral interactions (e.g., turning off an alarm by pressing a foot pedal) [10]. Another strategy includes varying the alert format based on clinicians’ intentions [95]. Recommendations should be “unremarkable” most of the time, and only appear in more intrusive formats when the system’s recommendations conflict with clinicians’ plans. The accuracy of an alert could also impact its intrusiveness, with more intrusive designs used for alerts with high accuracy rates [62]. However, research on the accuracy of alerts triggered by data entered on cognitive aids is currently limited. In this paper, we begin addressing this gap by evaluating the accuracy of alerts triggered by data recorded on a digital checklist.

2.3.2. Barriers to Implementation and Adoption of Clinical Decision Support Systems

Although the design of CDSSs has improved over time, different barriers have prevented the implementation and adoption of these systems [96]. The main challenge has been obtaining the contextual data required for accurate recommendations and alerts. This challenge is especially present in diagnostic CDSSs because they require more data than CDSSs for preventative care or drug prescription [40]. A common method for obtaining the required data has been manual input by clinicians. Data entry burdens, however, can deter clinicians from using the system or cause incomplete information, negatively impacting the accuracy of the alerts [40]. Because clinicians may only have time for data entry after they have made decisions, alerts and recommendations may arrive too late in their workflows [90]. As a result, continued research is needed to study how data entry can be improved in CDSSs to allow for more accurate and timely decision support alerts.

Documentation practices in electronic health records (EHRs) have been studied extensively [26]. Electronic documentation can increase provider workload [5,66,71] and affect their thought processes [9,67,100]. Tensions can also arise when providers undergo additional work to enter data for secondary purposes, such as research [5]. Several studies have examined documentation practices during time-critical medical events, focusing on temporality [39], compliance [23], and the differences between paper and electronic documentation [19]. Even when teams have a dedicated role for documentation, not all information about an event is recorded. In a recent study, medical scribes documented less than half of the reports provided during a resuscitation on an electronic flowsheet [39]. Clinicians in fast-paced settings may use transitional artifacts, like paper notes, to record information before they are able to document it in formal electronic systems [13].

Integrating data from other electronic systems (e.g., patient monitoring devices) can reduce the data entry burden in CDSSs [65]. Not all data required for decision support, however, can be found in an electronic system. For example, patient home life and social support are important factors in decision-making, but this information is not frequently captured in medical histories [96]. In intensive care settings, data required for decision support is captured at different times by different providers, with some information recorded on paper records [42]. While most studies investigated documentation in archival (e.g., EHRs and flowsheets) and ordering systems, documentation on cognitive aids remains understudied. We expand this prior work by addressing how cognitive aids can be used to capture the data required for triggering decision support alerts.

3. STUDY OVERVIEW

Our study had three main components: (1) alert requirements, (2) design research, and (3) in-the-wild evaluation (Figure 1). To determine the alert requirements, we analyzed past cases to understand how the checklist interface could support data acquisition required for alerts. In the design research, we conducted co-design and near-live simulation sessions to design and evaluate features for obtaining these data. We then evaluated the modified checklist design in the wild. This study was approved by the hospital’s Institutional Review Board (IRB).

Figure 1: Study overview and data collection timeline for the three study components.

3.1. Research Site and Trauma Team Members

Our research site has been a level 1 pediatric trauma center in the United States, treating over 500 patients each year. When a child is injured, the responding emergency medical services (EMS) use an alert system to notify the trauma center. In this alert, the EMS provide information about the incoming patient, including age, mechanism of injury, and symptoms. This alert is transcribed by the hospital staff, who activate the trauma team through a page system. Team members report to the trauma bay after receiving the page, occasionally arriving after the patient. When EMS arrive with the patient, they also report the patient’s injuries and treatments. The trauma team is co-led by a surgical fellow or resident and an emergency department (ED) physician. Together, they guide the team through the Advanced Trauma Life Support protocol [3]. The surgical leader uses a digital checklist listing the resuscitation tasks (Figure 2). In the pre-arrival period, the team prepares for the patient. In the primary survey phase, the examiner evaluates the patient’s airway, breathing, and circulatory status, while the bedside nurses obtain vital signs and intravenous (IV) access. The team then examines the patient to identify other injuries (secondary survey) and discusses the next steps (departure plan). Depending on the injuries, other roles may get involved, such as anesthesiologists and respiratory therapists. Each resuscitation is recorded for quality assurance purposes using an always-on video and audio recording system with three cameras and an array of microphones. While consent from the patient’s caregiver is not required to review checklist data, consent is required to review the videos for research purposes. Videos for consented cases are stored and viewed according to IRB-approved research protocols at the trauma center.

Figure 2: Digital checklist used at the hospital.

3.2. Digital Checklist

Clinicians at the hospital have been using a digital checklist for trauma resuscitation since 2017 (Figure 2). The checklist is implemented on a Samsung Galaxy tablet and used by the surgical team leader to ensure adherence to the ATLS protocol. The checklist has five sections: two for the primary and secondary ATLS surveys, a pre-arrival section, a vital signs section, and a prepare-for-travel section. Each section contains a list of tasks that should be completed by the team. Next to each task is a checkbox and a place to take handwritten notes about the task. A few tasks have spaces for typing values (e.g., the blood pressure task has a space to type the blood pressure value). A notetaking area is placed at the top of the checklist where users can handwrite any information using the stylus. When leaders click the button to submit the checklist, a window appears with any unchecked items, giving the leader a final opportunity to check off the tasks. Each use of the checklist produces a text log file and a screenshot of the notes. The log file also contains timestamps for each check-off and any typed values.

3.3. Study Participants

Our study participants were surgical team leaders with experience using the digital checklist during trauma resuscitations at the hospital. The surgical team leaders are either fellows or senior residents. While both fellows and senior residents have training in trauma resuscitation, fellows are more experienced with pediatric patients than senior residents. Fellows rotate at the trauma center for two years, while senior residents rotate for two months. At the beginning of their rotation, leaders receive training on the digital checklist and consent to using the tool as part of the research study. All team leaders who participated in the co-design sessions and near-live simulation sessions received monetary compensation for their time.

4. ALERT REQUIREMENTS

In this study, we focused on supporting two alerts: (1) an alert about an increased risk of requiring blood transfusion (interpretation alert) and (2) an alert about delays in establishing IV access (management alert). Context information and task status are required to determine if these alerts should be triggered during a medical event. To determine the design changes needed to support these types of alerts on cognitive aids (our first research question), we evaluated how the current “task list” design supported timely and accurate documentation of context information and task status.

4.1. Interpretation Alert: Increased Risk of Requiring Blood Transfusion

To understand how a digital cognitive aid could support alerts aimed at mitigating interpretation errors, we explored an alert indicating that a patient is at an increased risk of requiring blood transfusion. Severe blood loss contributes to about 30% of pediatric deaths after traumatic injury [70]. Timely life-saving interventions, such as massive blood transfusion, lead to better patient outcomes [72]. Our collaborating clinical team identified four variables that could predict the need for blood transfusion in pediatric trauma patients based on literature review [e.g., 4]: patient age, injury type, pre-arrival vital signs, and pre-arrival Glasgow Coma Score (GCS), a measure of consciousness. We then reviewed past resuscitation cases where leaders used the digital checklist to identify design requirements for obtaining these four variables.
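As a concrete illustration of how these four variables could drive an interpretation alert, the sketch below expresses them as a simple rule-based flag. The function name and all thresholds are hypothetical placeholders rather than the predictive model developed by the clinical team; the blood pressure cutoff follows the common pediatric rule of thumb of a minimum systolic pressure of 70 + 2 × age.

```python
# Illustrative sketch only: a rule-based screen over the four predictor
# variables. All thresholds are hypothetical placeholders, not the study's model.
def transfusion_risk_flag(age_years, injury_type, prearrival_sbp, prearrival_gcs):
    """Return True if any variable suggests an elevated bleeding risk."""
    if injury_type == "penetrating":  # placeholder injury-type criterion
        return True
    if prearrival_gcs is not None and prearrival_gcs <= 8:  # placeholder GCS cutoff
        return True
    if prearrival_sbp is not None and prearrival_sbp < 70 + 2 * age_years:
        # Hypotension screen: minimum systolic BP of 70 + 2 x age (rule of thumb)
        return True
    return False
```

Any real model would weigh and combine these variables rather than apply independent cutoffs; the point here is only that all four inputs must be captured before such a rule can be evaluated.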

4.1.1. Analyzing Information Recorded on Checklist: Methods and Findings

Because the digital checklist is primarily a task list, it does not contain dedicated spaces for entering the context information required to determine the risk of needing a blood transfusion. In our prior work, however, we found that leaders can use task lists as a memory externalization tool, handwriting notes about patient status and pre-hospital information [47,77]. We transcribed and reviewed the margin notes from 154 digital checklist cases between September 2017 and November 2018, observing instances of handwritten pre-arrival context information needed for determining bleeding risk (Figure 3). We calculated the word frequencies, finding that the most common words fell into four categories: (1) demographic information (e.g., age, sex), (2) mechanism of injury (e.g., pedestrian struck, fall), (3) symptoms (e.g., loss of consciousness, pain), and (4) treatments (e.g., intubated). Although these notes contained information that could indicate bleeding risk, the notes were not taken in a standardized manner and did not always include all four variables. As part of the design research, we studied how the task list could be modified to standardize the entry of required context information, focusing on variables associated with an increased risk of bleeding.
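The frequency analysis can be sketched as follows; the category keyword sets are illustrative stand-ins for the study's coding scheme, and `word_frequencies` and `categorize` are hypothetical helper names.

```python
from collections import Counter
import re

# Illustrative keyword sets; stand-ins for the study's actual coding scheme.
CATEGORIES = {
    "demographics": {"yo", "male", "female"},
    "mechanism": {"fall", "fell", "struck", "mvc"},
    "symptoms": {"loc", "pain", "unresponsive"},
    "treatments": {"intubated", "collar"},
}

def word_frequencies(notes):
    """Count word frequencies across transcribed margin notes."""
    words = []
    for note in notes:
        words.extend(re.findall(r"[a-z]+", note.lower()))
    return Counter(words)

def categorize(freqs):
    """Sum the frequency of keywords falling into each category."""
    return {cat: sum(freqs[w] for w in kws) for cat, kws in CATEGORIES.items()}

notes = ["3yo female fell 6 ft, no LOC, GCS 13 en route",
         "14F on skateboard w/ head trauma, LOC, unresponsive"]
freqs = word_frequencies(notes)
```

In practice, shorthand and abbreviations (e.g., "Ø LOC," "LLE") make such notes harder to tokenize than this sketch suggests, which is part of the motivation for a standardized entry form.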

Figure 3: Examples of handwritten notes taken by users on the digital checklist. (A): 3yo [gender symbol] fell 6 ft, Ø LOC [loss of consciousness], GCS 13 en route. (B): L hip lower back pain Ø LOC. GCS 15 LLE/back 118/68 [blood pressure]. (C): 14F <=10 mph on skateboard w/ head trauma, ⊕ LOC [illegible text] no helmet, unresponsive, [illegible text].

4.2. Management Alert: Delays in Establishing IV/IO Access

To determine how a digital cognitive aid could support alerts aimed at mitigating management errors, we explored an alert highlighting delays in establishing IV/IO access. According to the ATLS protocol, access to the circulatory system should be promptly obtained [3]. This access is essential for administering medications, fluids, and blood products required for managing life-threatening conditions [56]. The process of obtaining IV access involves multiple steps: a provider must place a tourniquet, locate a vein, and successfully insert a needle into the vein. Several attempts may be needed to successfully place an IV. If the team has difficulty obtaining IV access, they may instead obtain IO access, which requires drilling into the bone. Some patients will arrive at the trauma center with existing IV or IO access, either receiving this access from EMS or at another institution. Given the importance of promptly establishing IV/IO access, we studied how an alert could be created on the checklist to inform the leader of delays in this multi-step process. This alert could suggest alternative means, like using a vein finder or obtaining IO access. We first analyzed the timing of successful IV placements in past cases to determine when an alert should be triggered if IV access had not been established. We then performed a retrospective analysis to examine the accuracy of an IV/IO alert on the checklist.

4.2.1. Determining When to Trigger the Alert

To determine the appropriate time for triggering the alert, we considered a combination of the time- and process-based approaches [61]. In a time-based approach, the alert is triggered if a task is not completed after a certain period. In a process-based approach, the alert is triggered if a task is not completed after a certain point in the process. We reviewed videos from 140 past cases between April 2018 and May 2019, finding that the median time from patient arrival until establishing IV access was about seven minutes. In our combined time- and process-based approach, the alert would be triggered (1) if IV access had not been marked as complete on the checklist within seven minutes of the first item check-off in the primary survey section (roughly indicating the time of patient arrival) or (2) if the task had not been marked as complete when the last task in the secondary survey section was checked off.
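Under these rules, the alert fires at the earlier of the two trigger points. A minimal sketch of the logic follows; the function names and timestamp arguments are hypothetical, since the checklist log schema is not described here.

```python
from datetime import datetime, timedelta

IV_THRESHOLD = timedelta(minutes=7)  # median IV-placement time from video review

def alert_time(first_primary_checkoff, last_secondary_checkoff):
    """The alert fires at the earlier of (1) seven minutes after the first
    primary survey check-off (a proxy for patient arrival) and (2) the last
    secondary survey check-off."""
    time_based = first_primary_checkoff + IV_THRESHOLD
    if last_secondary_checkoff is None:  # secondary survey not yet finished
        return time_based
    return min(time_based, last_secondary_checkoff)

def should_alert(iv_checked_at, first_primary, last_secondary):
    """Trigger the alert if the IV/IO task is still unchecked at the alert time."""
    t = alert_time(first_primary, last_secondary)
    return iv_checked_at is None or iv_checked_at > t
```

Taking the minimum of the two trigger points encodes the combined approach: the time-based rule catches slow cases early, while the process-based rule catches cases where the team has moved on to the secondary survey without establishing access.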

4.2.2. Assessing the Alert Accuracy: Methods and Findings

We used a retrospective analysis to determine if the checklist system could accurately trigger alerts about delays in establishing IV access. In retrospective analysis, data from past cases are used to evaluate alerts on metrics such as sensitivity and specificity before releasing the alerts [62]. Low sensitivity (a low rate of true alerts) means the alert will fail to trigger when the corresponding condition is present, while low specificity (a low rate of true non-alerts) will lead to more false alerts. The digital checklist was used in 76 of the 140 cases that we reviewed when determining the median time for establishing IV access. To perform the retrospective analysis, we examined digital checklist logs from these 76 cases, focusing on the “Confirm IV/IO access has been established” task. Per protocol, this checkbox should be checked off immediately after IV/IO access is obtained. From the video review of all 140 cases, we had a record of the time when IV/IO access was obtained in each case. We parsed each checklist log with a Python script. The script used the timestamps of primary and secondary survey task check-offs to determine the alert time (i.e., seven minutes after the first primary survey check-off or after the last secondary survey check-off, whichever event came first). The script also determined if the IV/IO task was unchecked at this time, which would have triggered the alert. Using the alert status (triggered or not triggered) and the IV placement status at the time of the alert (obtained or not obtained), we classified the 76 cases into four groups: missed alert, false alert, true alert, and true non-alert (Figure 4). We found that this alert would not have been triggered in most cases (69.4%) with delays in establishing IV access because team leaders checked off the IV/IO task before it was completed. Because of these premature check-offs, the task status on the checklist did not match the actual status of the task.
In our prior work, we found that users frequently mark multi-step tasks as complete when the team begins the task, instead of waiting to check the task off at completion [49,60]. To support management alerts about delays in multi-step tasks, we studied how to improve the design of multi-step tasks on the checklist to increase documentation accuracy.
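The four-way classification from Figure 4 and the resulting sensitivity estimate can be sketched as follows (helper names are hypothetical):

```python
def classify(alert_triggered, iv_obtained_at_alert_time):
    """Compare the alert status (from the checklist log) against the actual
    IV placement status (known from video review) at the alert time."""
    if alert_triggered:
        return "false alert" if iv_obtained_at_alert_time else "true alert"
    return "true non-alert" if iv_obtained_at_alert_time else "missed alert"

def sensitivity(cases):
    """True alerts / (true alerts + missed alerts): the share of genuinely
    delayed cases in which the alert would actually have fired."""
    labels = [classify(*c) for c in cases]
    tp = labels.count("true alert")
    fn = labels.count("missed alert")
    return tp / (tp + fn) if tp + fn else None
```

In this framing, the premature check-offs described above show up as missed alerts: the log says the task is done, so the alert stays silent even though video review shows access was still pending.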

Figure 4: Classification of alert status.

5. DESIGN RESEARCH

To answer our second research question, we used co-design sessions to develop features for improving the documentation of context information and increasing the accuracy of documenting multi-step task status. We evaluated these new decision support features using near-live simulations (Figure 1).

5.1. Co-Design Sessions

5.1.1. Procedure

From September 2019 through April 2020, we conducted four hour-long co-design sessions with eight surgical fellows and residents. Co-design sessions allow participants to directly engage with the design process by collaborating with researchers, expressing independent opinions, and making decisions [34,75]. The first three sessions were held in person, with three participants in the first session and two participants in the following two sessions. Due to the COVID-19 pandemic, we conducted the fourth session virtually over the video-conferencing platform Zoom with one participant. All participants had experience in using the checklist during trauma resuscitations. We began the session by asking participants to describe their experiences with the checklist from actual events to ground their design thinking. We also presented an overview of the current state of the checklist. Participants then individually identified issues with the checklist design and shared them with the group. Next, participants in the three in-person sessions used cardboard cutouts in a group activity to design an “ideal” checklist before drawing individual sketches of features that might resolve current issues. Participants then discussed their sketches with the group and voted to prioritize potential new features by placing stickers on Post-it notes. In the remote session we allocated extra time for sketching because we skipped the cardboard cutouts activity. We collected several types of data, including video recordings of participants discussing their sketches, photos of the sketches, photos from the prioritization activity, and notes taken by researchers during the sessions. We then inductively analyzed these data to identify current issues with the checklist, features that should be introduced on the checklist, and design ideas for these features.

5.1.2. Findings

Participants described several issues with the current version of the checklist and proposed numerous design ideas. Here, we focus on the challenges and design ideas related to documenting context information about the patient and capturing the status of multi-step tasks. Leaders highlighted that the checklist only supported marking task completion. They used the example of the IV/IO task (“Confirm IV/IO access is established”), discussing how the team may not start this task or may be in the process of obtaining IV access when the leader reaches this item on the checklist. The leader would then either skip the task or prematurely mark it as completed. One participant described a case where they wrote “will do later” [S#4, P#8] in the notes section next to the task. These conversations highlighted how the leaders needed an approach to mark the phases of multi-step tasks, rather than just checking tasks as completed or not completed.

Participants also discussed design ideas for capturing pre-hospital context information about the patient, providing different reasons for why this feature would be useful. One participant drew a detailed pre-hospital form for the checklist, with spaces for documenting the patient’s age, mechanism of injury and severity level, time of injury, pre-arrival interventions, level of consciousness, and important medical history. They described how this form would better facilitate documentation of the pre-hospital information than handwriting the information in the margin notes section:

“Instead of writing ‘13yo female, fall from bike, no helmet,’ if you click on ‘bike’ for mechanism of injury, it will drop down with options for ‘helmeted’ and ‘unhelmeted’… Instead of having to handwrite all that stuff, you could just click through it.”

[S#4, P#8]

Another participant described how information in a pre-hospital form could be used to generate a one-line summary of the case to support team preparation for the patient:

“The section for pre-hospital could be a quick one-liner so it is ‘MVC, unrestrained, how fast was the car, was it a baby in a car seat, any injuries,’ and then we can prepare by looking at this information and thinking about, ‘unrestrained means we need a collar…’”

[S#1, P#3]

Participants also discussed how the pre-hospital information could be used to adapt the tasks presented on the checklist or support decision making by suggesting different algorithms:

“You can check ‘Is this a burn?’ and then there is a suggested algorithm [for treating burn injuries] that pops up on the side and you can click on it and look at it.”

[S#1, P#2]

Another participant agreed, stating “you could do burn, blunt, and penetrating… that would be helpful if you could categorize it from the beginning” [S#1, P#1]. One participant similarly described how information recorded in the primary survey could be used to prompt the leader to consider ordering a certain type of imaging test:

“If somewhere you’re recording the primary survey and the GCS [Glasgow Coma Score] is 13, then it prompts you ‘Do you want a head CT?’”

[S#1, P#2]

When voting on the new features for the checklist, all four sessions prioritized using context information entered on the checklist to recommend interventions. Because leaders currently rely on their recollection of the diagnostic algorithms or reference them manually, they thought this feature would aid their decision making.

5.2. Design and Evaluation of Decision Support Features

5.2.1. Decision Support Features Design Process

We added two new features to the checklist interface: (1) a pre-hospital form (Figure 5(a)), and (2) a progress slider for the “Confirm IV/IO access is established” task (Figure 5(b)). We used participant sketches from the co-design sessions and our analysis of past margin notes to inform the pre-hospital form design. We included the four categories (demographic information, injury type, symptoms, and treatments) identified in the notes analysis. In the co-design sessions, participants also discussed the need to mark a task as “in progress,” instead of only “complete” or “not complete” through a checkbox. We implemented a progress slider for the IV/IO task to allow users to move between several different statuses. To simplify the progress slider, we provided three status options: “Not Started,” “In Progress,” and “Completed.” The progress slider was initially set to “Not Started,” with users able to move it to either “In Progress” or “Completed.”
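The slider's interaction model reduces to a small state holder with three positions and a "Not Started" default. The Python sketch below is illustrative only; the class, enum, and attribute names are ours, not those of the deployed checklist, and the history log is an assumption we add to support the kind of retrospective alert analysis described later:

```python
from enum import Enum

class TaskStatus(Enum):
    NOT_STARTED = "Not Started"
    IN_PROGRESS = "In Progress"
    COMPLETED = "Completed"

class ProgressSlider:
    """Minimal model of the IV/IO progress slider: three positions,
    initialized to 'Not Started'; the user may move it freely."""
    def __init__(self):
        self.status = TaskStatus.NOT_STARTED
        # Timestamped status history would support later alert analysis;
        # here we keep only the ordered sequence of statuses.
        self.history = [self.status]

    def move_to(self, new_status):
        self.status = new_status
        self.history.append(new_status)
```

Because all three statuses are always reachable, a leader can revise a prematurely marked task, which a one-way checkbox does not allow.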

Figure 5: Screenshots of two features evaluated during the near-live simulations: (A) pre-hospital form and (B) progress slider for the IV/IO task.

5.2.2. Near-Live Simulations: Procedure

In April and May 2020, we conducted four near-live simulation sessions to evaluate these new decision support features. During near-live simulations, a single user watches videos of simulated cases while using a system and thinking aloud. Near-live simulation sessions can highlight usability issues without requiring an entire clinical team to perform the simulation [54,94]. All four participants had experience using the checklist in actual resuscitations; three had also participated in the earlier co-design sessions. Each simulation session was an hour long and held virtually over Zoom. Clinicians participated from the hospital, using the new version of the checklist on a tablet. One researcher facilitated the session, while two other researchers observed and took notes. We began each session by asking the participant to describe their last trauma resuscitation to ground their thinking in real-world scenarios. We then introduced the new checklist features and provided brief training. Next, we asked the participant to watch two videos of simulated cases while using the checklist as if they were the leader. In the videos, a trauma team treated a “patient” (simulation mannequin) suffering from life-threatening injuries. The patient’s age and type of injury differed between the two videos. While the participant watched the videos, we remotely accessed their tablet, informing the participant that we could view their interactions with the checklist. The facilitator shared their screen with the other two researchers, who took notes on how the participant used the new features. After the participant completed both simulation cases, we discussed each feature, asking whether they used it, their thoughts on it, and any changes they would make. We concluded by having the participant compare the new checklist to the current version and discuss how the new features would affect their work.

We analyzed the observation notes from the near-live simulation sessions using an inductive qualitative content analysis approach [15]. The goal of the analysis was to both highlight how the participants were using these new features and identify potential design improvements. The first author performed the analysis using NVivo, a qualitative data analysis software, open coding the notes related to the pre-hospital form and progress slider and iteratively connecting the open codes to identify themes. These themes were discussed and refined in meetings with the wider research team.

5.2.3. Near-Live Simulations: Findings

All four participants used the pre-hospital form during both cases in their session. We identified three themes related to participants’ use of the pre-hospital form: (1) different documentation practices for the sporadic information, (2) persistence of handwritten notes, and (3) concerns about the quality of pre-arrival information.

We observed two different practices for documenting the sporadic information shared about the patient. During the simulations, all four participants completed the pre-hospital form before patient arrival, documenting the information from the simulated pager system. When the EMS arrived with the patient in the video, two participants [P#1,2] reopened the pre-hospital form to enter additional information provided by the EMS. The other two participants [P#3,4] handwrote this information in the margin note area instead of reopening and updating the pre-hospital form. We also identified a persistence of handwritten notes even with the use of the pre-hospital form. Although information from the pre-hospital form appeared in the summary line on top of the margin note, two participants [P#3,4] copied the information they entered in the pre-hospital form into the margin note. One participant explained that handwriting a one-line summary had become a “habit” [P#3] and that they might stop once they became accustomed to the pre-hospital form.

During the post-simulation debriefing, participants highlighted concerns about the quality of the pre-arrival information. Three participants [P#2,3,4] discussed how context information about the patient may be unknown or inaccurate. Participants #2 and #3 suggested mitigating this problem by adding an “unknown” option for certain fields on the pre-hospital form. Participant #4 discussed how they would be hesitant to document information provided by EMS on the pre-hospital form due to concerns about the reliability of this information. Participants recommended several changes to the pre-hospital form. Two participants [P#2,3] suggested making the form simpler and easier to use (e.g., changing the default in the age dropdown from “days” to “years” [P#2]). They also suggested removing the “Other” section in each category because they found it easier to handwrite that information in the margin note instead of typing it. Participant #3 stated they would not add anything else to the form to avoid complexity. Three participants [P#1,2,3] said they would use the pre-hospital form in actual cases. Participants #1 and #2 highlighted how converting the pre-hospital information into a one-line summary aided their thought processes, explaining that the summary was “a basic reminder of what you need to be thinking about” [P#1] and “it’s not a burden to check a few more boxes if it makes things smoother” [P#1]. Participant #4, however, expressed a preference for handwriting notes in the margin note area. They highlighted how they would prefer having that information auto-populated on the checklist, assuming this automation would make the tool easier to use and less distracting.

We also identified one theme in participant reactions to the IV/IO progress slider: concerns about returning to the feature to update task status. All four participants used the IV/IO progress slider in at least one of their two simulation videos. During discussion, Participant #1 stressed the importance of ensuring that IV/IO access is established but was concerned they would not go back to mark the task as complete after moving to another section of the checklist. They discussed how they needed to be selective about the items they returned to since multiple tasks could be unchecked. Participant #3 went back to the primary survey section to change the task status from in-progress to complete during the simulation. This participant suggested moving uncompleted tasks from one section over to the next to facilitate this multi-step interaction and ensure the tasks are marked as complete. However, moving unchecked tasks between sections could be complicated when teams do not follow the ATLS protocol and leaders frequently jump between checklist sections. Participant #4 suggested putting the status of the IV/IO task in a “global view,” explaining that “once you scroll away, you lose track of it.”

5.3. Final Designs for Decision-Support Checklist Features

Using the results of the near-live simulation sessions, we finalized the design for the pre-hospital form (Figure 6(a)). Leaders expressed different preferences for the pre-hospital information types they record on the checklist. Our goal was to design a simple form while capturing the information most useful for decision support. Combining user data with input from our clinical collaborators, we simplified or removed extraneous sections from the initial design, such as changing the mechanism of injury section to specify only blunt or penetrating mechanism. Because the pre-arrival GCS and vital sign measurements are also needed for decision support, we added these fields to the form. The form’s input was now modified to dynamically update some items on the checklist. For example, selecting pre-arrival IV access on the form automatically sets the IV/IO task item on the checklist to complete. Finally, we had the form appear on the screen as soon as the leader began using the checklist, removing the need to manually open it at the beginning of the resuscitation. Leaders are not required to fill out any parts of the pre-hospital form, can exit the form at any time, and still have the margin note area to take handwritten notes. They can also click a button on the main screen to reopen the form. No changes were made to the initial design of the progress slider based on findings from the near-live simulations (Figure 6(b)).
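The dynamic update described above can be expressed as a lookup from form entries to checklist task statuses. The sketch below is a minimal illustration, not the production implementation; the rule table, field names, and task identifiers are hypothetical stand-ins for the checklist's actual internals:

```python
# Hypothetical mapping: a (form field, value) pair triggers a checklist update.
# The one rule shown mirrors the example in the text: selecting pre-arrival
# IV access on the form sets the IV/IO task to complete.
FORM_RULES = {
    ("pre_arrival_iv_access", True): ("confirm_iv_io_access", "Completed"),
}

def apply_form_input(checklist, field, value):
    """Apply one pre-hospital form entry to the checklist's task statuses.

    checklist: dict mapping task identifier -> status string (mutated in place).
    """
    update = FORM_RULES.get((field, value))
    if update is not None:
        task_id, status = update
        checklist[task_id] = status
    return checklist
```

Keeping the rules in a data table rather than scattered conditionals would make it straightforward to add further adaptations, such as marking tasks "not applicable" for a given injury type.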

Figure 6: Final designs for (A) pre-hospital form and (B) progress slider.

6. IN-THE-WILD EVALUATION

To answer our third research question, we released the new features on the checklist at the hospital to study how clinicians used the features to document context information and task status in actual cases (Figure 1). We could release these features for actual use because they were evaluated for usability in the near-live simulations and leaders could choose not to use either feature during patient care. To understand how leaders used the pre-hospital form, we (1) calculated documentation rates for different variables and evaluated their accuracy, and (2) reviewed videos from cases with pre-hospital form use. Because the information entered on the form can be used to trigger decision support alerts aimed at mitigating interpretation errors, studying the documentation rates and accuracy can provide insight into the accuracy of these alerts. To determine the effects of the progress slider on the accuracy of an alert about delays in establishing IV access, we retrospectively analyzed the slider use by examining both the checklist logs and videos of cases.

6.1. Pre-Hospital Form

We released the pre-hospital form on the checklist at our research site on March 22, 2021. In the three months of data collection (through June 30, 2021), leaders used the checklist during 78 trauma resuscitations.

6.1.1. Pre-Hospital Information Documentation Rates: Methods

To evaluate documentation rates and accuracy, we examined the checklist logs and notes, along with patient charts. We began by transcribing the handwritten margin notes taken before the release of the pre-hospital form (June 1, 2020–March 22, 2021), noting each time a variable from the pre-hospital form was captured in the handwritten note. After the release of the pre-hospital form, the checklist logs contained the timestamps when the leaders opened and closed the form, along with the timestamps and values for documented data. Using these data, we calculated documentation rates for different fields on the pre-hospital form. We then compared documentation between the handwritten margin notes and pre-hospital form. To evaluate the accuracy of the pre-hospital form documentation, we compared the data entered on the form to the data documented in patient charts. We could not evaluate the accuracy of the GCS, heart rate, and blood pressure values because these values fluctuate over time. We could not ensure that the values recorded in the patient chart were measured at the same time as the values recorded on the digital checklist.
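A documentation rate of this kind reduces to counting non-empty fields across case logs. The sketch below (our own function and field names; each case log is assumed to be a dictionary with `None` for undocumented fields) shows one way to compute per-field rates:

```python
def documentation_rates(case_logs, fields):
    """Fraction of cases in which each field was documented.

    case_logs: list of dicts mapping field name -> documented value, or None
               (or a missing key) when the field was never entered.
    fields: field names to report on.
    """
    n = len(case_logs)
    return {
        field: sum(1 for log in case_logs if log.get(field) is not None) / n
        for field in fields
    }
```

Comparing the rates from checklist logs (post-release) with rates derived from transcribed margin notes (pre-release) then yields the before/after contrast reported in Table 1.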

6.1.2. Pre-Hospital Information Documentation Rates: Findings

Leaders used the pre-hospital form in 85% of the cases, frequently entering the age, sex, and injury type variables in the form (Table 1). The pre-arrival clinical measures (e.g., GCS, heart rate, blood pressure) were documented more often in the form than in the margin notes but had lower documentation rates when compared to the age, sex, and injury type. The checklist also has dedicated spaces for recording the GCS, heart rate, and blood pressure values measured by the team in the trauma bay. Pre-hospital variables were documented less frequently than variables measured during the resuscitation, highlighting the challenge of documenting pre-hospital information. While the pre-arrival GCS was documented in the pre-hospital form in only 37.7% of cases, the GCS calculated during the resuscitation was recorded in 97.4% of cases. Leaders also documented the heart rate and blood pressure values measured in the trauma bay in 90.1% of cases, while the pre-arrival heart rate and blood pressure values were documented in 15.6% and 11.7% of cases, respectively. Variables in symptoms (e.g., loss of consciousness) and treatments (e.g., intubation, CPR, IV access) also had lower documentation rates in the pre-hospital form when compared to the age, sex, and injury type variables.

Table 1: Documentation rates in cases before and after the introduction of the pre-hospital form.

| Variable | Checklist Margin Note (n=198) (June 1, 2020 – March 21, 2021) | Pre-Hospital Form (n=78) (March 22, 2021 – June 30, 2021) |
|---|---|---|
| Age (%) | 85 (42.9) | 66 (84.6) |
| Sex (%) | 51 (25.8) | 58 (74.4) |
| Injury Type (%) | 96 (48.4) | 50 (64.1) |
| GCS (%) | 7 (3.5) | 30 (38.5) |
| Heart Rate (%) | 0 (0.0) | 12 (15.6) |
| Blood Pressure (%) | 0 (0.0) | 9 (11.7) |
| Pre-arrival IV Access (%) | 0 out of 92 cases (0.0) | 6 out of 37 cases (16.2) |
| Loss of Consciousness (%) | 8 out of 36 cases (22.0) | 1 out of 10 cases (10.0) |
| Intubation (%) | 2 out of 4 cases (50.0) | 1 out of 2 cases (50.0) |
| CPR (%) | 2 out of 3 cases (66.7) | 0 out of 1 case (0.0) |

We found high levels of accuracy in the recorded age (61/66, 92.4%), sex (54/58, 93.1%), and injury type (50/50, 100.0%) variables. One age entry was off by four years, while the other four discrepancies differed by only a year or two. In two cases, the age was also inaccurate on the page sent at the beginning of the case. When evaluating variables in the treatments and symptoms categories, we found inaccuracies in the pre-arrival IV access and loss of consciousness fields. Leaders documented pre-arrival IV access in nine cases, but three of those cases did not have established access. Of the three cases marked as having loss of consciousness, one was recorded as “unknown” in the chart, while the chart for another stated, “no loss of consciousness.” We did not find any cases in which intubation or CPR was incorrectly selected in the pre-hospital form.

6.1.3. Video Review: Methods

To understand how the leaders used the pre-hospital form and the factors affecting its use, we reviewed videos from 11 resuscitations with pre-hospital form use. We started by randomly selecting 10 cases to review, ensuring an even distribution of cases throughout the post-intervention period (i.e., after the pre-hospital form release). When reviewing those cases, we observed that one case had more severe injuries than the others. We purposefully selected an additional severe case for review to determine if case severity impacted pre-hospital form use. One researcher reviewed each video, recording their observations. The researcher noted when different team members arrived, how the team communicated about the information documented on the form, and how the leader interacted with the checklist during the EMS report and patient handover. Another researcher transcribed the page texts sent to the trauma teams before patient arrival. We then analyzed the observations and page texts using an inductive qualitative analysis approach [28]. The two researchers independently examined the observation notes and page texts to begin identifying patterns of the pre-hospital form use. The researchers then met to discuss the patterns and iteratively finalize the themes.

6.1.4. Video Review: Findings

We identified four themes when analyzing videos of resuscitations with pre-hospital form use:

Team members arrive at different times resulting in multiple team briefings.

In seven of 11 cases, the ED leader arrived and briefed team members on the incoming resuscitation before the surgical leader arrived. In one case, the ED leader briefed the team twice before the surgical leader arrived. During the second briefing, they asked the team to introduce themselves, acknowledging that the full team had not arrived yet. Upon arrival, the surgical leader asked the ED leader for the results of a scan and then completed the pre-hospital form on the checklist. A few minutes later, the ED leader briefed the team for the third time.

Sources of the pre-hospital information depend on patient and leader arrival.

Leaders used different data sources to document the pre-hospital information depending on their time of arrival. When arriving before the patient, leaders filled out the form with the information conveyed in the pre-arrival page and through the ED leader’s update. In two cases, however, the team arrived at the same time as the patient. In these cases, the surgical leader completed the form using their observations and the EMS report instead of the page. For example, the team in one case had no advance notice that a patient was arriving, receiving the page “Trauma Stat **NOW** in the ER” with no additional information. The surgical leader arrived less than a minute before the patient and started filling out the pre-hospital form as EMS brought the patient into the room. The surgical leader left the pre-hospital form open for three minutes while the examiner was receiving basic information about the patient and starting the primary survey. After the examiner asked the patient for their age, the leader documented it in the form. The leader then entered the vital signs shown on the room monitor on the form.

Leaders will reopen the form to correct information but will not reopen it to enter new information.

The leaders documented an incorrect age or sex in three of the 11 cases, reopening the form to correct the information in two cases. Although leaders reopened the form to correct existing information, they entered new information in the margin note area instead of reopening the form. In one case, the leader asked the team if the patient was two years old, documenting this age on the pre-hospital form after receiving confirmation. Shortly after, the ED leader stated in their briefing that the patient was three years old, while EMS reported four years during patient handover. During the EMS report, the surgical leader took handwritten notes about the patient’s pre-arrival GCS score and loss of consciousness. They then reopened the form to correct the patient’s age but did not add the GCS score or loss of consciousness to the form.

Leaders manage uncertainty and incomplete information while documenting pre-hospital information.

In most cases, leaders had incomplete or uncertain information while filling out the pre-hospital form. The pre-arrival page texts from all cases infrequently included information about the GCS (20.7%), heart rate (31.1%), and blood pressure (29.9%) measurements. Page texts in two video-reviewed cases stated “alert and oriented” instead of providing a GCS value. In one of these cases, the leader documented a value of 15 (normal) for the pre-arrival GCS. The leader in the other case did not document a GCS value. Even in cases with the page containing the exact pre-arrival vital signs, leaders used phrases like “hemodynamically stable” and “normal vital signs” in their briefings, instead of exact values. Uncertainty and confusion also arose from the dynamic nature of the emergency department. In one case, two patients were arriving at the same time, leading to confusion about which patient the team would treat. Team members assumed they were treating one of the patients and the leader filled out the pre-hospital form with the information known about that patient.

6.2. IV/IO Task Progress Slider

6.2.1. Methods

We first added the progress slider to the IV/IO task on November 9, 2020. After releasing the slider, we reviewed checklist logs from 29 cases, finding that the progress slider was used in only two cases. From user feedback received through our clinical collaborators, we found that using both the checkbox and progress slider for this task was confusing (Figure 5(b)). We removed the checkbox on March 22, 2021, leaving just the progress slider (Figure 6(b)). After completing data collection in June 2021, we performed another retrospective analysis to evaluate how the progress slider would have impacted the accuracy of the alert for delays in establishing IV access. We reviewed 246 video-consented cases from June 2020 through June 2021 to determine when IV/IO access was established. We could not analyze six consented cases due to corrupted video files. The review process consisted of several steps. First, we completed a chart review to determine if the patient arrived with IV/IO access. For cases where the patient did not arrive with access, we reviewed each video, timestamping each step of the IV placement process (e.g., placing the tourniquet, inserting the catheter, and attaching the syringe to confirm placement). We also determined the time when the alert would have been triggered and noted if the task was marked as completed on the checklist at that time. Using these two data sources, we classified the alert status into the four categories used in our initial alert research (Figure 4): missed alert, false alert, true alert, or true non-alert. We then performed univariate analysis (chi-square) to compare the categorizations between the three periods: (1) checkbox only (n=115), (2) checkbox and progress slider (n=56), and (3) progress slider only (n=69).
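The four-way classification follows directly from crossing the video-derived ground truth (did a delay occur?) with the reconstructed alert behavior (would the alert have fired, given the checklist's documented task status at the threshold time?). A minimal sketch of this logic, using our own function name:

```python
def classify_alert(delay_occurred, alert_triggered):
    """Classify a reconstructed alert outcome against video ground truth,
    following the four categories from the initial alert research:
    true alert, missed alert, false alert, true non-alert."""
    if delay_occurred:
        # Premature task-completion documentation suppresses the alert,
        # producing a missed alert.
        return "true alert" if alert_triggered else "missed alert"
    # No delay: an alert here is spurious, typically because the task
    # was never marked complete (or was marked late).
    return "false alert" if alert_triggered else "true non-alert"
```

Applying this function per case and tabulating by period (checkbox only, checkbox and slider, slider only) yields the counts compared with chi-square tests in the findings.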

6.2.2. Findings

We found no difference in the number of cases with delays in establishing IV/IO access between the three periods: (1) checkbox only (31/115, 27.0%), (2) checkbox and progress slider (14/56, 25.0%), and (3) progress slider only (15/69, 21.7%) (p=0.7). After the introduction of the slider, premature documentation of IV/IO task completion in cases without pre-arrival access decreased from 67.8% with the checkbox-only design to 30.3% with the slider-only design. This reduction in premature documentation would have contributed to fewer missed alerts in cases with delays, improving the alert’s sensitivity (Table 2, Figure 7). False alerts, however, would have increased in cases without delays in establishing IV access, with 30 cases having false alerts. We further examined these 30 cases to understand why false alerts would have occurred. Sixteen cases included patients who arrived with IV access and 14 had patients who arrived without access. The leader only marked the IV/IO task as complete in five of the 16 cases with pre-arrival access and did so after the time when the alert would have been triggered. For the 14 cases that did not have pre-arrival IV access, the leaders did not mark the task as complete after IV access was established in the room, or they marked it as complete after the false alert would have been triggered. Team members in these cases had not started obtaining IV access (10 cases) or were in the process of obtaining access (4 cases) when the leaders reached the IV/IO access task on the checklist. The leaders then left the slider at the “Not Started” or “In Progress” points before moving to the next item on the checklist. In seven cases, the leader never returned to the IV/IO task to mark it as complete even though the team obtained access. In the other seven cases, the leader either returned to the slider to mark the task as complete or marked the task as done on the summary screen that appeared upon checklist submission. 
The median time between task completion and the task being marked as complete on the checklist was seven minutes (IQR: 5.4–8.1), and the false alert would have been triggered during this interval.

Table 2: Number of missed and false alerts in cases with different IV/IO task designs.

| | Checkbox Only (n=115) | Checkbox & Slider (n=56) | Slider Only (n=69) | p-value |
|---|---|---|---|---|
| Missed alerts in cases with delays (%) | 23/31 (74.2) | 5/14 (35.7) | 2/15 (13.3) | <0.001 |
| False alerts in cases without delays (%) | 11/84 (13.1) | 9/42 (21.4) | 21/54 (38.9) | <0.01 |
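The counts in Table 2 can be restated as alert sensitivity (share of delay cases that would have been flagged) and specificity (share of non-delay cases in which the alert would have correctly stayed silent). The sketch below recomputes both measures from the table for the checkbox-only and slider-only periods; the function names are ours:

```python
def sensitivity(missed_alerts, delay_cases):
    """Share of delay cases that would have triggered a (true) alert."""
    return (delay_cases - missed_alerts) / delay_cases

def specificity(false_alerts, non_delay_cases):
    """Share of non-delay cases that would have correctly stayed silent."""
    return (non_delay_cases - false_alerts) / non_delay_cases

# Values taken from Table 2.
sens_checkbox = sensitivity(23, 31)   # checkbox only: about 0.26
sens_slider = sensitivity(2, 15)      # slider only: about 0.87
spec_checkbox = specificity(11, 84)   # checkbox only: about 0.87
spec_slider = specificity(21, 54)     # slider only: about 0.61
```

This restatement makes the trade-off explicit: the slider sharply improves sensitivity while reducing specificity, matching the pattern in Figure 7.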
Figure 7: Percentage of missed alerts in cases with delays in establishing IV/IO access (left) and percentage of false alerts in cases without delays (right) with three different task designs.

7. DISCUSSION

To enable alerts on cognitive aids, context information and task status must be captured in a timely, accurate, and standardized manner. The current “task list” design found on most cognitive aids does not fully support documentation of the context information and task status needed for triggering accurate decision support alerts. We explored two features on a digital checklist for a time-critical medical process to address this limitation. We first created a form for documenting pre-hospital information about the patient. Because the accuracy and frequency of documentation would impact the accuracy of alerts triggered by the information entered on the form, we studied how clinicians used this form in the wild. Although clinicians frequently used the form during the in-the-wild evaluation, some variables were infrequently documented. This infrequent documentation was partially due to challenges with incomplete and sporadic information and time constraints. We also introduced a progress slider for the IV/IO task to facilitate tracking of multi-step task status. After performing a retrospective analysis of live cases in which this feature was used, we found that the number of missed alerts in cases with delays in establishing IV/IO access would have decreased with the use of the slider. The number of false alerts, however, would have increased in cases without delays because users did not always update the status of the slider after task completion. Using these findings, we next discuss how cognitive aids can serve as decision support platforms in settings with time-critical decision making and the potential for interpretation or management errors. We also discuss guidelines for designing cognitive aid features that promote timely and accurate documentation of context information and multi-step task status.

7.1. Cognitive Aids as Decision Support Platforms

Our findings highlight how cognitive aids can evolve and advance to better support users. While the ability to accurately capture context information is required for decision support alerts, it is also necessary to improve the design of cognitive aids to support advanced interactions. Prior studies have proposed that cognitive aids should progress by becoming dynamic and context-aware, selecting content based on the specifics of the event [29,46,93]. To become adaptive and context-aware, a cognitive aid requires context information to determine how the content should be altered. In our study, we used context information captured in the pre-hospital form to dynamically alter the checklist content. For example, a task was set to the “not applicable” status when the user specified a certain injury type on the pre-hospital form. Visual clutter, distraction, and fatigue are reduced when cognitive aids adapt to present only relevant tasks [29]. In addition to supporting the user in tracking task status, an adaptive cognitive aid could also assist in decision making by signaling when to consider different interventions. If an adaptive cognitive aid supports decision making, cognitive aids may become decision support platforms even before the addition of alerts. Research on both cognitive aids and decision support systems has stressed the importance of displaying the right content at the right time in the correct format [29,81], strengthening the argument for further studying how these two types of systems can be integrated.

Although we have focused on mitigating interpretation and management errors in a clinical setting, our insights and design guidelines go beyond medical domains. The transition of cognitive aids into decision support platforms could occur in any setting where users must make rapid decisions or perform tasks in a timely manner and where the potential for interpretation or management errors is high. Interpretation errors can occur across domains that involve changing information and rapid decision making, such as driving [38] and aviation [22,37]. For example, air traffic controllers must interpret and integrate many different types of information when making decisions, such as aircraft types, routes, and altitudes [22]. Similarly, pilots can make interpretation errors when assessing aircraft instruments [36]. Because pilots use checklists during both routine flights [8] and emergency situations [37], decision support could be integrated into their checklists to mitigate interpretation and management errors. We next discuss guidelines for designing cognitive aid features that promote timely and accurate documentation of context information and multi-step task status. These guidelines may be generalizable to other contexts where teams make time-critical decisions and perform dynamic tasks.

7.2. Supporting Alerts for Mitigating Interpretation Errors in Time-Critical, Team-Based Work

Decision support systems that suggest potential interventions and mitigate interpretation errors through alerts require real-time context information. Our pre-hospital form required manual effort to capture context information, but technological advances could reduce the documentation burden. Recent work has examined the use of natural language processing for obtaining context information during patient encounters, allowing for hands-free documentation [55,73,98]. Integrations with other systems could also capture context data, reducing the amount of manual documentation and better supporting decision support features [65]. For example, integrations between emergency medical services (EMS) and ED systems could provide the context information needed for some alerts, minimizing documentation of pre-arrival data by ED clinicians. To achieve this integration, electronic documentation practices must first be improved for EMS teams [7,97]. Even when electronic documentation systems exist, interoperability issues often prevent systems from sharing information [65]. Until advancements in electronic documentation are made and interoperability issues are resolved, we still need to understand how digital cognitive aids can support manual documentation of context information to provide decision support alerts. Using the findings from our study and prior work, we discuss several guidelines for designing cognitive aid features that promote rapid documentation of context data in medical and other domains where teams must rapidly document data while managing uncertainty and changing information.

Consider structured data entry to capture context information.

Our co-design participants suggested a checklist section that allowed for structured data entry of context information about the patient. Some participants also proposed using the recorded context information to trigger decision support features, such as alerts. Past work found that senior physicians perceived no need for decision support tools during complex medical processes [88]. Because the participants in our study were rotating at the hospital and less experienced with the pediatric patient population, they might have perceived a higher need for decision support than senior physicians. In addition, clinicians in intensive care settings frequently consider the opinions of other team members and work with different types of equipment, which may make them more open to CDSSs than clinicians in other specialties [39]. Structured data entry for capturing context information could also apply to the design of cognitive aids used in other domains. For example, general aviation pilots frequently record information from air traffic control on paper notes [80]. A cognitive aid could support structured data entry of the information received from air traffic control and use that information to provide decision support. Evaluating the effects of this transition from free-form note-taking to structured data entry on pilots’ cognitive processing will be critical. Clinicians in our study thought that structured data entry would support their clinical reasoning and better prepare them for the patient. These perceptions contradicted findings from prior work that showed how physicians’ documentation shifted from clinical reasoning to mechanical data entry after the introduction of an ordering system that relied on structured data entry [92].

Provide options for entering or selecting classifications.

We found instances where pre-event notifications had classifications instead of exact values for pre-arrival measurements. Instead of only providing space to enter numeric values, the system could also allow indicating the classification of a measurement when the exact value is not known or cannot be entered because of time limitations. For example, the system could have an abnormal/normal classification added to the vital signs or a scale added to the GCS measurement that assesses the patient’s consciousness (e.g., alert, verbal, pain, unresponsive). Because selecting an item is easier than typing information, the use of classifications could better facilitate data input [42]. In addition to facilitating data entry during time-critical decision making, the ability to quickly select classifications could also support pilots who have trouble typing on touchpads during turbulence [18]. Adding classifications could increase documentation rates, leading to more accurate decision support alerts. Classifications, however, are more subjective than numeric values. Inaccurate decision support alerts could be triggered if a user interprets the classification differently than the decision support system.
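The idea above can be illustrated with a minimal sketch, assuming a hypothetical data model in which a measurement field accepts either an exact numeric value or a coarse classification (here, the AVPU consciousness scale mentioned above). The field names, threshold, and alert logic are illustrative only, not clinical guidance or the paper’s actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

# AVPU consciousness scale: the coarse classification discussed above.
AVPU = ("alert", "verbal", "pain", "unresponsive")

@dataclass
class Measurement:
    exact: Optional[float] = None          # exact value, when known
    classification: Optional[str] = None   # coarse category, when not

    def is_documented(self) -> bool:
        # Either form of entry counts as documented for decision support.
        return self.exact is not None or self.classification is not None

def gcs_alert_needed(m: Measurement) -> bool:
    """Decide whether a consciousness alert should fire (illustrative).

    Uses the exact GCS score when available; otherwise falls back to the
    AVPU classification. Threshold and mapping are hypothetical.
    """
    if m.exact is not None:
        return m.exact <= 8                # illustrative GCS threshold
    if m.classification in ("pain", "unresponsive"):
        return True                        # coarse category suggests concern
    return False                           # undocumented: no alert fires
```

Accepting either form means the alert logic can still run under time pressure, at the cost of the subjectivity noted above: two users may map the same patient to different AVPU categories.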

Support documenting uncertainty and different situations.

Our pre-hospital form only allowed users to select whether a patient had certain attributes. Users were also not required to complete these fields on the form. When attributes were left unselected, it was unclear if (a) the patient did not have those attributes, (b) the leader did not know if the patient had those attributes, or (c) the leader chose not to document the field. While the form only supported “yes/no” status, the attributes had additional states during cases: “unknown” and “not documented.” Uncertainty is not limited to clinical settings. For example, operators of railroad control rooms frequently have to manage uncertain information, such as the behavior of drivers or passengers, when making decisions [84]. Informal documentation practices, such as handwritten notes, afford the capture of uncertain and incomplete information, which is still important in decision making [99]. To become a decision support platform, a cognitive aid should support documenting these different states in a standardized manner because of their different implications for triggering decision support features. For example, a decision support system designed to aid anesthesia teams during surgery displayed three potential states for attributes: (1) present, (2) absent, or (3) undetectable by the system [44]. Similarly, a cognitive aid could allow the user to indicate when a status is currently unknown by the team [50]. If the status of an attribute is critical for determining decision support and the user has not documented the status, a reminder could be incorporated within the cognitive aid interface to prompt documentation.
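A minimal sketch of the standardized states discussed above, assuming a hypothetical form field that distinguishes “unknown” (the team explicitly does not know) from “not documented” (the field was never touched). The state names and the mapping to system behavior are illustrative, not the paper’s implementation:

```python
from enum import Enum

class AttributeStatus(Enum):
    YES = "yes"
    NO = "no"
    UNKNOWN = "unknown"                  # team marked the status as not yet known
    NOT_DOCUMENTED = "not_documented"    # field left untouched

def decision_support_action(status: AttributeStatus) -> str:
    """Map each state to a distinct system behavior (hypothetical)."""
    if status is AttributeStatus.YES:
        return "trigger relevant alerts"
    if status is AttributeStatus.NO:
        return "suppress related alerts"
    if status is AttributeStatus.UNKNOWN:
        return "defer alerts until the status becomes known"
    # NOT_DOCUMENTED: the reminder behavior suggested above.
    return "prompt the user to document the field"
```

Separating the four states lets the system behave differently for a deliberate “unknown” than for a field no one has filled in, which a plain yes/no checkbox cannot express.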

Allow other team members to document and view information.

Designing decision support systems for a specific user in a team-based process can create issues because other team members lack visibility into the system [79]. In time-critical events with ad hoc teams, team members may arrive at different times. As a result, decisions may be made before a team member using the cognitive aid arrives. Although a cognitive aid may be primarily used by one team role, allowing other roles to document data could help trigger decision support features more quickly, while discussions about the event are occurring between the present team members. Clearly denoting which team role should document data in the absence of the primary cognitive aid user could help avoid confusion. Prior work has also found that decision support systems in team settings support collaborative decision-making [42]. The information documented in the cognitive aid could be included on a wall display to help late-arriving members understand the currently known information about the patient and collaborate more easily with other team members. Presenting information on a shared display could also help all team members establish a shared mental model. Prior studies in different domains, such as surgery [92] and search and rescue [41], have highlighted the importance of establishing and maintaining shared mental models during crisis management.

Support documentation of data at multiple times.

Ad-hoc, knowledge-based teams may learn about changes in status and new data at different points before, during, and after the event or work process. For example, prior work found that clinicians document sporadic information in accessible transitional artifacts (e.g., paper notes) during medical events and then formally document this data in archival systems after the event [13]. In our study, users frequently handwrote information received throughout the event in the already accessible note area instead of reopening the form to document context information. Based on our findings, features for documenting status information and other data should be easily accessible at any time or place in the system to support the capture of evolving information. Capturing this evolving information would allow for more specific and precise decision support alerts.

7.3. Supporting Alerts for Mitigating Management Errors

Despite our cognitive aid being designed as a “task list,” we observed issues with accurately capturing the status of multi-step tasks. Multi-step tasks have greater potential for delays than single-step tasks and would benefit from alerts about delays. Users, however, have a harder time accurately documenting multi-step task status, creating barriers to triggering accurate alerts on cognitive aids about delays. We propose three guidelines for designing features that track the status of multi-step activities in dynamic settings:

Clearly define the stages of progress in multi-step tasks.

In addition to supporting awareness of overall task status, cognitive aids can also display subtasks and their respective statuses. For example, prior checklists designed for creative (e.g., website design) [6] and medical [16] work used hierarchical structures to display the status of both a task and its subtasks. In our study, the multi-step task to establish IV/IO access was originally represented as a single checkbox, which only indicated overall task status (“complete” or “not complete”) and did not provide insight into the progress of the subtasks. The redesigned IV/IO task featuring a progress slider provided more information about task status by including several different stages. Because the stages may contain different information depending on the steps of the task, they can be used to trigger specific alerts or other decision support features. Our slider had three stages representing the task status: “Not Started,” “In Progress,” and “Complete,” but these stages could have different meanings to different users. For example, some users may consider the task as in progress when they instruct the team to start the task, while others might consider it as in progress when the team starts the first step of the task. An alternative approach to using task status (e.g., “Not Started”, “In Progress”) as the stages in the slider could be using the specific subtasks (e.g., “Perform venipuncture”). Clearly defining the task stages represented on the cognitive aid would reduce ambiguity and improve decision support features related to delays. The stages need to be represented at the appropriate level of granularity for a given domain so that they provide the information needed for alerts about delays without greatly increasing the documentation burden.
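The stage-based design above can be sketched as a small state holder that timestamps each stage transition, so a delay alert can fire when a task lingers “in progress.” The stage names, threshold, and class are hypothetical illustrations, not the checklist’s actual code:

```python
import time
from typing import Optional

STAGES = ("not_started", "in_progress", "complete")

class ProgressSlider:
    """Illustrative progress slider for a multi-step task (e.g., IV/IO access)."""

    def __init__(self, threshold_s: float = 300.0):
        self.stage = "not_started"
        self.threshold_s = threshold_s   # allowed time in "in_progress" (assumed)
        self.entered_at = {"not_started": time.monotonic()}

    def set_stage(self, stage: str, now: Optional[float] = None) -> None:
        # Record when each stage was entered, so delays can be measured.
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.stage = stage
        self.entered_at[stage] = time.monotonic() if now is None else now

    def delay_alert(self, now: Optional[float] = None) -> bool:
        """True when the task has been in progress longer than the threshold."""
        if self.stage != "in_progress":
            return False
        now = time.monotonic() if now is None else now
        return now - self.entered_at["in_progress"] > self.threshold_s
```

Because the alert is computed from the recorded transition time, a late or missing status update directly produces the false or missed alerts discussed in the next guideline.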

Allow users to continuously view and change the status for multi-step tasks.

Because the progress slider was in one section of the checklist, users in our study would scroll away from the slider as they moved to other sections. During the near-live simulation sessions, some participants were concerned they would forget to go back to change the slider status. Indeed, we observed during the in-the-wild evaluation that users did not update the status of the slider or only updated it after a delay. The failure to update the slider status would have led to more false alerts. Users may be more likely to update the status of a multi-step task more frequently if they can always view and change the task status. A prior study found that clinicians documented information more quickly when using a cognitive aid that continuously displayed buttons for marking task completion during resuscitations [32]. Improving the timeliness of task status updates would lead to more accurate decision support alerts. After completion, the multi-step task could be removed from the global view to reduce the amount of information shown to the user and focus their attention on the other tasks that need completion.

Consider how the design might impact the accuracy of alerts.

Because extra work is required to distinguish true alerts from false alerts [2], false alerts are detrimental in safety-critical domains, such as clinical [62], driving [53], and rail traffic control [78] settings. High rates of false alerts can reduce a user’s trust in and compliance with a system [53]. Retrospective analyses can be an effective method for estimating the effects of different cognitive aid feature designs on the proportion of true and false alerts. Our retrospective analysis of the IV/IO task progress slider use showed that the slider would have increased the number of true alerts, while also leading to more false alerts. Because the design of the checklist item (e.g., checkbox vs. slider) can impact the proportion of missed and false alerts, designers should consider the implications of false and missed alerts when designing cognitive aid features. In situations where the user can easily assess an alert’s accuracy, feature designs that lead to more false alerts (but also more true alerts) may be more acceptable. Understanding the false alert rate before implementation can also influence how the alerts are designed. For example, prior work has proposed that alerts with higher false alarm rates should use less intrusive designs [62]. Until improvements are made in recording multi-step task status, alerts about delays in task completion may need to be designed less intrusively to mitigate issues with inaccuracy that could lead to alert fatigue.
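The kind of retrospective analysis described above can be sketched as a comparison between alerts a candidate design would have fired and ground-truth delays established by review. The function, case identifiers, and metric names are hypothetical, shown only to make the true/false/missed distinction concrete:

```python
from typing import Dict, List

def alert_accuracy(fired: List[str], true_delays: List[str]) -> Dict[str, float]:
    """Estimate true and false alert proportions for a candidate design.

    `fired`: case IDs where the design would have triggered a delay alert.
    `true_delays`: case IDs where a delay actually occurred (ground truth).
    """
    true_alerts = [c for c in fired if c in true_delays]        # correct alerts
    false_alerts = [c for c in fired if c not in true_delays]   # spurious alerts
    missed = [c for c in true_delays if c not in fired]         # delays never flagged
    n = len(fired)
    return {
        "true_alert_rate": len(true_alerts) / n if n else 0.0,
        "false_alert_rate": len(false_alerts) / n if n else 0.0,
        "missed_delays": float(len(missed)),
    }
```

Comparing these metrics between, say, a checkbox-based and a slider-based representation of the same task would quantify the trade-off described above: more true alerts at the cost of more false ones.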

7.4. Study Limitations

This study had three main limitations. First, data were collected from a single research site. Other institutions may have different policies, cultures, and training protocols that impact the use of cognitive aids. Although data were collected from one research site, all users were temporarily rotating at the research site and had been exposed to different policies, cultures, and training protocols during earlier rotations at other institutions. Second, our findings may have been influenced by the physicians who elected to participate in the co-design sessions and near-live simulations. These physicians may have had certain biases toward technology, cognitive aids, and decision-support systems. While certain physicians elected to participate in the design research, the in-the-wild evaluation captured checklist use across all physicians participating in trauma resuscitations at our research site. Third, we had to exclude some cases from video review due to a lack of consent and corrupted video files. These excluded cases were a small percentage (13%) of the total cases.

8. CONCLUSION

In this paper, we studied how cognitive aids could be extended into decision support platforms and used to obtain data for triggering alerts during time-critical decision making. We focused on decision support alerts aimed at mitigating interpretation and management errors in team-based, high-risk work. We identified two requirements for enabling decision support alerts on a standard, “task list” cognitive aid: (1) context information must be entered in a timely, accurate, and standardized manner, and (2) task status must be accurately documented. We used co-design sessions and near-live simulation sessions to design and evaluate two features that satisfied these requirements: a pre-hospital form for entering context information and a progress slider for documenting the status of a multi-step task. These two features were evaluated in the wild, during 78 actual resuscitations. Based on our findings, we proposed several guidelines for designing features that capture data required for decision support on cognitive aids. In our future work, we will explore how the data obtained by these features on the cognitive aid can be used to display decision alerts.

CCS CONCEPTS.

  • Human-centered computing

  • Human computer interaction (HCI)

  • Empirical studies in HCI

ACKNOWLEDGMENTS

We thank the medical staff at Children’s National Hospital for participating in this research. This research has been supported by the National Library of Medicine of the National Institutes of Health under grant number 2R01LM011834-05 and the National Science Foundation under grant number IIS-1763509. This material is also based upon work supported by the National Science Foundation Graduate Research Fellowship Program under grant number 2041772. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Contributor Information

Angela Mastrianni, College of Computing and Informatics, Drexel University, Philadelphia, USA.

Aleksandra Sarcevic, College of Computing and Informatics, Drexel University, Philadelphia, USA.

Allison Hu, Division of Trauma and Burn Surgery, Children’s National Hospital, Washington, D.C., USA.

Lynn Almengor, College of Computing and Informatics, Drexel University, Philadelphia, USA.

Peyton Tempel, Division of Trauma and Burn Surgery, Children’s National Hospital, Washington, D.C., USA.

Sarah Gao, Division of Trauma and Burn Surgery, Children’s National Hospital, Washington, D.C., USA.

Randall S. Burd, Division of Trauma and Burn Surgery, Children’s National Hospital, Washington, D.C., USA

REFERENCES

  • [1].Agarwala Aalok V., Firth Paul G., Albrecht Meredith A., Warren Lisa, and Musch Guido. 2015. An Electronic Checklist Improves Transfer and Retention of Critical Information at Intraoperative Handoff of Care. Anesthesia & Analgesia 120, 1: 96–104. 10.1213/ANE.0000000000000506
  • [2].Ancker Jessica S., Edwards Alison, Nosal Sarah, Hauser Diane, Mauer Elizabeth, Kaushal Rainu, and with the HITEC Investigators. 2017. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Medical Informatics and Decision Making 17, 1: 36. 10.1186/s12911-017-0430-8
  • [3].ATLS Subcommittee, American College of Surgeons’ Committee on Trauma, and International ATLS working group. 2013. Advanced trauma life support (ATLS®): the ninth edition. The Journal of Trauma and Acute Care Surgery 74, 5: 1363–1366. 10.1097/TA.0b013e31828b82f5
  • [4].Baker Jay B., Korn Carrie S., Robinson Ken, Chan Linda, and Henderson Sean O.. 2001. Type and Crossmatch of the Trauma Patient. Journal of Trauma and Acute Care Surgery 50, 5: 878–881.
  • [5].Berg Marc. 1999. Accumulating and Coordinating: Occasions for Information Technologies in Medical Work. Computer Supported Cooperative Work (CSCW) 8, 4: 373–401. 10.1023/A:1008757115404
  • [6].Bharadwaj Aditya, Siangliulue Pao, Marcus Adam, and Luther Kurt. 2019. Critter: Augmenting Creative Work with Dynamic Checklists, Automated Quality Assurance, and Contextual Reviewer Feedback. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ‘19), 1–12. 10.1145/3290605.3300769
  • [7].Bledsoe Bryan E., Wasden Chad, and Johnson Larry. 2013. Electronic Prehospital Records are Often Unavailable for Emergency Department Medical Decision Making. Western Journal of Emergency Medicine 14, 5: 482–488. 10.5811/westjem.2013.1.12665
  • [8].Boorman Daniel. 2001. Today’s Electronic Checklists Reduce Likelihood of Crew Errors and Help Prevent Mishaps. ICAO Journal. Retrieved July 28, 2022 from https://trid.trb.org/view/607283
  • [9].Bossen Claus. 2006. Representations at work: a national standard for electronic health records. In Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work - CSCW ‘06, 69. 10.1145/1180875.1180887
  • [10].Guerra Miguel Cabral, Kommers Deedee, Bakker Saskia, An Pengcheng, Pul Carola van, and Andriessen Peter. 2019. Beepless: Using Peripheral Interaction in an Intensive Care Setting. In Proceedings of the 2019 on Designing Interactive Systems Conference, 607–620. 10.1145/3322276.3323696
  • [11].Cai Carrie J., Reif Emily, Hegde Narayan, Hipp Jason, Kim Been, Smilkov Daniel, Wattenberg Martin, Viegas Fernanda, Corrado Greg S., Stumpe Martin C., and Terry Michael. 2019. Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–14. 10.1145/3290605.3300234
  • [12].Calder Lisa A, Bhandari Abhi, Mastoras George, Day Kathleen, Momtahan Kathryn, Falconer Matthew, Weitzman Brian, Sohmer Benjamin, Cwinn A Adam, Hamstra Stanley J, and Parush Avi. 2018. Healthcare providers’ perceptions of a situational awareness display for emergency department resuscitation: a simulation qualitative study. International Journal for Quality in Health Care 30, 1: 16–22. 10.1093/intqhc/mzx159
  • [13].Chen Yunan. 2010. Documenting transitional information in EMR. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘10), 1787–1796. 10.1145/1753326.1753594
  • [14].Chen Yun-Yun K and Arriaga Alexander. 2021. Crisis checklists in emergency medicine: another step forward for cognitive aids. BMJ Quality & Safety 30, 9: 689–693. 10.1136/bmjqs-2021-013203
  • [15].Cho Ji and Lee Eun-Hee. 2014. Reducing Confusion about Grounded Theory and Qualitative Content Analysis: Similarities and Differences. The Qualitative Report 19, 32: 1–20. 10.46743/2160-3715/2014.1028
  • [16].Christov Stefan C., Conboy Heather M., Famigletti Nancy, Avrunin George S., Clarke Lori A., and Osterweil Leon J.. 2016. Smart checklists to improve healthcare outcomes. In Proceedings of the International Workshop on Software Engineering in Healthcare Systems (SEHS ‘16), 54–57. 10.1145/2897683.2897691
  • [17].Cobus Vanessa, Ehrhardt Bastian, Boll Susanne, and Heuten Wilko. 2018. Vibrotactile Alarm Display for Critical Care. In Proceedings of the 7th ACM International Symposium on Pervasive Displays, 1–7. 10.1145/3205873.3205886
  • [18].Cockburn A, Masson D, Gutwin C, Palanque P, Goguey A, Yung M, Gris C, and Trask C. 2019. Design and evaluation of braced touch for touchscreen input stabilisation. International Journal of Human-Computer Studies 122: 21–37. 10.1016/j.ijhcs.2018.08.005
  • [19].Coffey Carla, Wurster Lee Ann, Groner Jonathan, Hoffman Jeffrey, Hendren Valerie, Nuss Kathy, Haley Kathy, Gerberick Julie, Malehorn Beth, and Covert Julia. 2015. A Comparison of Paper Documentation to Electronic Documentation for Trauma Resuscitations at a Level I Pediatric Trauma Center. Journal of Emergency Nursing: JEN 41, 1: 52–56. 10.1016/j.jen.2014.04.010
  • [20].Croskerry Pat. 2002. Achieving Quality in Clinical Decision Making: Cognitive Strategies and Detection of Bias. Academic Emergency Medicine 9, 11: 1184–1204. 10.1197/aemj.9.11.1184
  • [21].De Bie AJR, Nan S, Vermeulen LRE, Van Gorp PME, Bouwman RA, Bindels AJGH, and Korsten HHM. 2017. Intelligent dynamic clinical checklists improved checklist compliance in the intensive care unit. British Journal of Anaesthesia 119, 2: 231–238. 10.1093/bja/aex129
  • [22].Reuck Samantha De, Donald Fiona, and Siemers Ian. 2014. Factors associated with safety events in air traffic control. Ergonomics SA: Journal of the Ergonomics Society of South Africa 26, 1: 2–18. 10.10520/EJC157026
  • [23].D’Huyvetter Cecile, Lang Ann M., Heimer Dawn M., and Cogbill Thomas H.. 2014. Efficiencies Gained by Using Electronic Medical Record and Reports in Trauma Documentation. Journal of Trauma Nursing | JTN 21, 2: 68–71. 10.1097/JTN.0000000000000031
  • [24].Febretti Alessandro, Lopez Karen Dunn, Stifter Janet, Johnson Andrew E., Keenan Gail, and Wilkie Diana. 2014. Evaluating a clinical decision support interface for end-of-life nurse care. In CHI ‘14 Extended Abstracts on Human Factors in Computing Systems, 1633–1638. 10.1145/2559206.2581170
  • [25].Fiks Alexander G., Grundmeier Robert W., Mayne Stephanie, Song Lihai, Feemster Kristen, Karavite Dean, Hughes Cayce C., Massey James, Keren Ron, Bell Louis M., Wasserman Richard, and Localio A. Russell. 2013. Effectiveness of Decision Support for Families, Clinicians, or Both on HPV Vaccine Receipt. Pediatrics 131, 6: 1114–1124. 10.1542/peds.2012-3122
  • [26].Fitzpatrick Geraldine and Ellingsen Gunnar. 2013. A Review of 25 Years of CSCW Research in Healthcare: Contributions, Challenges and Future Agendas. Computer Supported Cooperative Work (CSCW) 22, 4: 609–665. 10.1007/s10606-012-9168-0
  • [27].Gonzales Michael J., Henry Joshua M., Calhoun Aaron W., and Riek Laurel D.. 2016. Visual TASK: A Collaborative Cognitive Aid for Acute Care Resuscitation. arXiv:1605.05224 [cs]. Retrieved June 24, 2021 from http://arxiv.org/abs/1605.05224
  • [28].Graneheim UH and Lundman B. 2004. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Education Today 24, 2: 105–112. 10.1016/j.nedt.2003.10.001
  • [29].Grigg Eliot. 2015. Smarter Clinical Checklists: How to Minimize Checklist Fatigue and Maximize Clinician Performance. Anesthesia & Analgesia 121, 2: 570–573. 10.1213/ANE.0000000000000352
  • [30].Grigg Eliot, Palmer Andrew, Grigg Jeffrey, Oppenheimer Peter, Wu Tim, Roesler Axel, Nair Bala, and Ross Brian. 2014. Randomised trial comparing the recording ability of a novel, electronic emergency documentation system with the AHA paper cardiac arrest record. Emergency Medicine Journal: EMJ 31, 10: 833. 10.1136/emermed-2013-202512
  • [31].Gruen Russell L., Jurkovich Gregory J., McIntyre Lisa K., Foy Hugh M., and Maier Ronald V.. 2006. Patterns of Errors Contributing to Trauma Mortality. Annals of Surgery 244, 3: 371–380. 10.1097/01.sla.0000234655.83517.56
  • [32].Grundgeiger T, Albert M, Reinhardt D, Happel O, Steinisch A, and Wurmb T. 2016. Real-time tablet-based resuscitation documentation by the team leader: evaluating documentation quality and clinical performance. Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine 24, 1: 51. 10.1186/s13049-016-0242-3
  • [33].Grundgeiger Tobias, Huber Stephan, Reinhardt Daniel, Steinisch Andreas, Happel Oliver, and Wurmb Thomas. 2019. Cognitive Aids in Acute Care: Investigating How Cognitive Aids Affect and Support In-hospital Emergency Teams. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–14. 10.1145/3290605.3300884
  • [34].Hansen Nicolai Brodersen, Dindler Christian, Halskov Kim, Iversen Ole Sejer, Bossen Claus, Basballe Ditte Amund, and Schouten Ben. 2019. How Participatory Design Works: Mechanisms and Effects. In Proceedings of the 31st Australian Conference on Human-Computer-Interaction, 30–41. 10.1145/3369457.3369460
  • [35].Heringa Mette, Siderius Hidde, Floor-Schreudering Annemieke, de Smet Peter A G M, and Bouvy Marcel L. 2017. Lower alert rates by clustering of related drug interaction alerts. Journal of the American Medical Informatics Association: JAMIA 24, 1: 54–59. 10.1093/jamia/ocw049
  • [36].den Hoed Annemarie van, Landman Annemarie, Baelen Dirk Van, Stroosma Olaf, van Paassen M. M. (René), Groen Eric L., and Mulder Max. 2020. Leans Illusion in Hexapod Simulator Facilitates Erroneous Responses to Artificial Horizon in Airline Pilots. Human Factors. 10.1177/0018720820975248
  • [37].Huber Stephan, Gramlich Johanna, and Grundgeiger Tobias. 2020. From Paper Flight Strips to Digital Strip Systems: Changes and Similarities in Air Traffic Control Work Practices. Proceedings of the ACM on Human-Computer Interaction 4, CSCW1: 1–21. 10.1145/3392833
  • [38].Hurtado Stephanie and Chiasson Sonia. 2016. An Eye-tracking Evaluation of Driver Distraction and Unfamiliar Road Signs. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 153–160. 10.1145/3003715.3005407
  • [39].Jagannath Swathi, Sarcevic Aleksandra, Young Victoria, and Myers Sage. 2019. Temporal Rhythms and Patterns of Electronic Documentation in Time-Critical Medical Work. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ‘19, 1–13. 10.1145/3290605.3300564
  • [40].Jaspers Monique W. M., Smeulers Marian, Vermeulen Hester, and Peute Linda W.. 2011. Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings. Journal of the American Medical Informatics Association: JAMIA 18, 3: 327–334. 10.1136/amiajnl-2011-000094
  • [41].Jones Brennan, Tang Anthony, and Neustaedter Carman. 2020. Remote Communication in Wilderness Search and Rescue: Implications for the Design of Emergency Distributed-Collaboration Tools for Network-Sparse Environments. Proceedings of the ACM on Human-Computer Interaction 4, GROUP: 10:1–10:26. 10.1145/3375190
  • [42].Kaltenhauser Annika, Rheinstädter Verena, Butz Andreas, and Wallach Dieter P.. 2020. “You Have to Piece the Puzzle Together”: Implications for Designing Decision Support in Intensive Care. In Proceedings of the 2020 ACM Designing Interactive Systems Conference, 1509–1522. 10.1145/3357236.3395436
  • [43].Kelleher Deirdre C., Carter Elizabeth A., Waterhouse Lauren J., Parsons Samantha E., Fritzeen Jennifer L., and Burd Randall S.. 2014. Effect of a checklist on advanced trauma life support task performance during pediatric trauma resuscitation. Academic Emergency Medicine: Official Journal of the Society for Academic Emergency Medicine 21, 10: 1129–1134. 10.1111/acem.12487
  • [44].Klüber Sara, Maas Franzisca, Schraudt David, Hermann Gina, Happel Oliver, and Grundgeiger Tobias. 2020. Experience Matters: Design and Evaluation of an Anesthesia Support Tool Guided by User Experience Theory. In Proceedings of the 2020 ACM Designing Interactive Systems Conference, 1523–1535. 10.1145/3357236.3395552
  • [45].Kramer Heidi S. and Drews Frank A.. 2017. Checking the lists: A systematic review of electronic checklist use in health care. Journal of Biomedical Informatics 71: S6–S12. 10.1016/j.jbi.2016.09.006
  • [46].Kulp Leah, Sarcevic Aleksandra, Cheng Megan, and Burd Randall S.. 2021. Towards Dynamic Checklists: Understanding Contexts of Use and Deriving Requirements for Context-Driven Adaptation. ACM Transactions on Computer-Human Interaction 28, 2: 1–33. 10.1145/3444947
  • [47].Kulp Leah, Sarcevic Aleksandra, Cheng Megan, Zheng Yinan, and Burd Randall S.. 2019. Comparing the Effects of Paper and Digital Checklists on Team Performance in Time-Critical Work. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ‘19), 1–13. 10.1145/3290605.3300777
  • [48].Kulp Leah, Sarcevic Aleksandra, Farneth Richard, Ahmed Omar, Mai Dung, Marsic Ivan, and Burd Randall S.. 2017. Exploring Design Opportunities for a Context-Adaptive Medical Checklist Through Technology Probe Approach. In Proceedings of the 2017 Conference on Designing Interactive Systems (DIS ‘17), 57–68. 10.1145/3064663.3064715
  • [49].Kulp Leah, Sarcevic Aleksandra, Zheng Yinan, Cheng Megan, Alberto Emily, and Burd Randall. 2020. Checklist Design Reconsidered: Understanding Checklist Compliance and Timing of Interactions. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13. 10.1145/3313831.3376853
  • [50].Kuo Pei-Yi, Saran Rajiv, Argentina Marissa, Heung Michael, Bragg-Gresham Jennifer L., Chatoth Dinesh, Gillespie Brenda, Krein Sarah, Wingard Rebecca, Zheng Kai, and Veinot Tiffany C.. 2019. Development of a Checklist for the Prevention of Intradialytic Hypotension in Hemodialysis Care: Design Considerations Based on Activity Theory. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–14. 10.1145/3290605.3300872
  • [51].Lee JaeHo, Han Hyewon, Ock Minsu, Lee Sang-il, Lee SunGyo, and Jo Min-Woo. 2014. Impact of a clinical decision support system for high-alert medications on the prevention of prescription errors. International Journal of Medical Informatics 83, 12: 929–940. 10.1016/j.ijmedinf.2014.08.006
  • [52].Lee Min Hun, Siewiorek Daniel P., Smailagic Asim, Bernardino Alexandre, and Bermúdez i Badia Sergi. 2021. A Human-AI Collaborative Approach for Clinical Decision Making on Rehabilitation Assessment. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–14. 10.1145/3411764.3445472
  • [52]. Lee Min Hun, Siewiorek Daniel P., Smailagic Asim, Bernardino Alexandre, and Bermúdez i Badia Sergi. 2021. A Human-AI Collaborative Approach for Clinical Decision Making on Rehabilitation Assessment. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–14. 10.1145/3411764.3445472
  • [53]. Lees MN and Lee JD. 2007. The influence of distraction and driving context on driver response to imperfect collision warning systems. Ergonomics 50, 8: 1264–1286. 10.1080/00140130701318749
  • [54]. Li Alice C., Kannry Joseph L., Kushniruk Andre, Chrimes Dillon, McGinn Thomas G., Edonyabo Daniel, and Mann Devin M.. 2012. Integrating usability testing and think-aloud protocol analysis with “near-live” clinical simulations in evaluating clinical decision support. International Journal of Medical Informatics 81, 11: 761–772. 10.1016/j.ijmedinf.2012.02.009
  • [55]. Li Brenna, Crampton Noah, Yeates Thomas, Xia Yu, Tian Xirong, and Truong Khai. 2021. Automating Clinical Documentation with Digital Scribes: Understanding the Impact on Physicians. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–12. 10.1145/3411764.3445172
  • [56]. Luck Raemma P., Haines Christopher, and Mull Colette C.. 2010. Intraosseous access. The Journal of Emergency Medicine 39, 4: 468–475. 10.1016/j.jemermed.2009.04.054
  • [57]. Mainthia Rajshri, Lockney Timothy, Zotov Alexandr, France Daniel J., Bennett Marc, St Jacques Paul J., Furman William, Randa Stephanie, Feistritzer Nancye, Eavey Roland, Leming-Lee Susie, and Anders Shilo. 2012. Novel use of electronic whiteboard in the operating room increases surgical team compliance with pre-incision safety practices. Surgery 151, 5: 660–666. 10.1016/j.surg.2011.12.005
  • [58]. Marshall SD. 2017. Helping experts and expert teams perform under duress: an agenda for cognitive aid research. Anaesthesia 72, 3: 289–295. 10.1111/anae.13707
  • [59]. Marshall Stuart. 2013. The Use of Cognitive Aids During Emergencies in Anesthesia: A Review of the Literature. Anesthesia & Analgesia 117, 5: 1162–1171. 10.1213/ANE.0b013e31829c397b
  • [60]. Mastrianni Angela, Kulp Leah, Mapelli Emily, and Sarcevic Aleksandra. 2020. Understanding Digital Checklist Use Through Team Communication. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ‘20), 1–8. 10.1145/3334480.3382817
  • [61]. Mastrianni Angela, Sarcevic Aleksandra, Chung Lauren, Zakeri Issa, Alberto Emily, Milestone Zachary, Marsic Ivan, and Burd Randall S.. 2021. Designing Interactive Alerts to Improve Recognition of Critical Events in Medical Emergencies. In Designing Interactive Systems Conference 2021, 864–878. 10.1145/3461778.3462051
  • [62]. McGreevey John D., Mallozzi Colleen P., Perkins Randa M., Shelov Eric, and Schreiber Richard. 2020. Reducing Alert Burden in Electronic Health Records: State of the Art Recommendations from Four Health Systems. Applied Clinical Informatics 11, 1: 1–12. 10.1055/s-0039-3402715
  • [63]. Moxey Annette, Robertson Jane, Newby David, Hains Isla, Williamson Margaret, and Pearson Sallie-Anne. 2010. Computerized clinical decision support for prescribing: provision does not guarantee uptake. Journal of the American Medical Informatics Association: JAMIA 17, 1: 25–33. 10.1197/jamia.M3170
  • [64]. Nanji Karen C., Seger Diane L., Slight Sarah P., Amato Mary G., Beeler Patrick E., Her Qoua L., Dalleur Olivia, Eguale Tewodros, Wong Adrian, Silvers Elizabeth R., Swerdloff Michael, Hussain Salman T., Maniam Nivethietha, Fiskio Julie M., Dykes Patricia C., and Bates David W.. 2018. Medication-related clinical decision support alert overrides in inpatients. Journal of the American Medical Informatics Association 25, 5: 476–481. 10.1093/jamia/ocx115
  • [65]. Park JaeYeon, Rhim Soyoung, Han Kyungsik, and Ko JeongGil. 2021. Disentangling the clinical data chaos: User-centered interface system design for trauma centers. PLOS ONE 16, 5: e0251140. 10.1371/journal.pone.0251140
  • [66]. Park Sun Young, Chen Yunan, and Rudkin Scott. 2015. Technological and Organizational Adaptation of EMR Implementation in an Emergency Department. ACM Transactions on Computer-Human Interaction 22, 1: 1–24. 10.1145/2656213
  • [67]. Park Sun Young, Lee So Young, and Chen Yunan. 2012. The effects of EMR deployment on doctors’ work practices: A qualitative study in the emergency department of a teaching hospital. International Journal of Medical Informatics 81, 3: 204–217. 10.1016/j.ijmedinf.2011.12.001
  • [68]. Parsons Samantha E., Carter Elizabeth A., Waterhouse Lauren J., Fritzeen Jennifer, Kelleher Deirdre C., O’Connell Karen J., Sarcevic Aleksandra, Baker Kelley M., Nelson Erik, Werner Nicole E., Boehm-Davis Deborah A., and Burd Randall S.. 2014. Improving ATLS Performance in Simulated Pediatric Trauma Resuscitation Using a Checklist. Annals of Surgery 259, 4: 807–813. 10.1097/SLA.0000000000000259
  • [69]. Parush A, Mastoras G, Bhandari A, Momtahan K, Day K, Weitzman B, Sohmer B, Cwinn A, Hamstra SJ, and Calder L. 2017. Can teamwork and situational awareness (SA) in ED resuscitations be improved with a technological cognitive aid? Design and a pilot study of a team situation display. Journal of Biomedical Informatics 76: 154–161. 10.1016/j.jbi.2017.10.009
  • [70]. Pieracci Fredric M., Witt Jennifer, Moore Ernest E., Burlew Clay C., Johnson Jeffery, Biffl Walter L., Barnett Carlton C., and Bensard Denis D.. 2012. Early death and late morbidity after blood transfusion of injured children: a pilot study. Journal of Pediatric Surgery 47, 8: 1587–1591. 10.1016/j.jpedsurg.2012.02.011
  • [71]. Pine Kathleen H. and Chen Yunan. 2020. Right Information, Right Time, Right Place: Physical Alignment and Misalignment in Healthcare Practice. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–12. 10.1145/3313831.3376818
  • [72]. Potter D. Dean, Berns Kathleen S., Elsbernd Terri A., and Zietlow Scott P.. 2015. Prehospital use of blood and plasma in pediatric trauma patients. Air Medical Journal 34, 1: 40–43. 10.1016/j.amj.2014.07.037
  • [73]. Quiroz Juan C., Laranjo Liliana, Kocaballi Ahmet Baki, Berkovsky Shlomo, Rezazadegan Dana, and Coiera Enrico. 2019. Challenges of developing a digital scribe to reduce clinical documentation burden. npj Digital Medicine 2, 1: 1–6. 10.1038/s41746-019-0190-1
  • [74]. Salwei Megan E., Carayon Pascale, Hoonakker Peter, Hundt Ann Schoofs, Novak Clair, Wang Yudi, Wiegmann Douglas, and Patterson Brian. 2019. Assessing workflow of emergency physicians in the use of clinical decision support. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, 1: 772–776. 10.1177/1071181319631334
  • [75]. Sanders Elizabeth B.-N. and Stappers Pieter Jan. 2008. Co-creation and the new landscapes of design. CoDesign 4, 1: 5–18. 10.1080/15710880701875068
  • [76]. Sarcevic Aleksandra, Marsic Ivan, and Burd Randall S.. 2012. Teamwork Errors in Trauma Resuscitation. ACM Transactions on Computer-Human Interaction 19, 2: 1–30. 10.1145/2240156.2240161
  • [77]. Sarcevic Aleksandra, Zhang Zhan, Marsic Ivan, and Burd Randall S.. 2017. Checklist as a Memory Externalization Tool during a Critical Care Process. AMIA Annual Symposium Proceedings 2016: 1080–1089.
  • [78]. Sebok Angelia, Wickens Christopher, Laux Lila, and Jones Michael. 2015. Supporting Human-Automation Interaction in the Rail Industry by Applying Lessons from Aviation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 59, 1: 1661–1665. 10.1177/1541931215591359
  • [79]. Silsand Line and Ellingsen Gunnar. 2016. Complex Decision-Making in Clinical Practice. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, 993–1004. 10.1145/2818048.2819952
  • [80]. Simon Florine, Alonso Duran Lucia, Carrau Xavier, Delacroix Emeric, Guilbert Baptiste, Imbert Jean-Paul, Causse Mickaël, and Brock Anke M.. 2021. BLEXVFR: Designing a Mobile Assistant for Pilots. In Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI ‘21), 1–5. 10.1145/3447527.3474852
  • [81]. Sirajuddin Anwar M., Osheroff Jerome A., Sittig Dean F., Chuo John, Velasco Ferdinand, and Collins David A.. 2009. Implementation Pearls from a New Guidebook on Improving Medication Use and Outcomes with Clinical Decision Support. Journal of Healthcare Information Management 23, 4: 38–45.
  • [82]. Sittig Dean F., Wright Adam, Osheroff Jerome A., Middleton Blackford, Teich Jonathan M., Ash Joan S., Campbell Emily, and Bates David W.. 2008. Grand challenges in clinical decision support. Journal of Biomedical Informatics 41, 2: 387–392. 10.1016/j.jbi.2007.09.003
  • [83]. Sluiter J, van der Beek AJ, Frings-Dresen M, and Ursin H. 2003. Medical staff in emergency situations: severity of patient status predicts stress hormone reactivity and recovery. Occupational and Environmental Medicine 60, 5: 373–375. 10.1136/oem.60.5.373
  • [84]. Smith Penn, Blandford Ann, and Back Jonathan. 2009. Questioning, exploring, narrating and playing in the control room to maintain system safety. Cognition, Technology and Work 11, 4: 279–291. 10.1007/s10111-008-0116-1
  • [85]. Sukums Felix, Mensah Nathan, Mpembeni Rose, Massawe Siriel, Duysburgh Els, Williams Afua, Kaltschmidt Jens, Loukanova Svetla, Haefeli Walter E., and Blank Antje. 2015. Promising adoption of an electronic clinical decision support system for antenatal and intrapartum care in rural primary healthcare facilities in sub-Saharan Africa: The QUALMAT experience. International Journal of Medical Informatics 84, 9: 647–657. 10.1016/j.ijmedinf.2015.05.002
  • [86]. Sutton Reed T., Pincock David, Baumgart Daniel C., Sadowski Daniel C., Fedorak Richard N., and Kroeker Karen I.. 2020. An overview of clinical decision support systems: benefits, risks, and strategies for success. npj Digital Medicine 3, 1: 1–10. 10.1038/s41746-020-0221-y
  • [87]. Thongprayoon Charat, Harrison Andrew M., O’Horo John C., Sevilla Berrios Ronaldo A., Pickering Brian W., and Herasevich Vitaly. 2016. The Effect of an Electronic Checklist on Critical Care Provider Workload, Errors, and Performance. Journal of Intensive Care Medicine 31, 3: 205–212. 10.1177/0885066614558015
  • [88]. Walker Katherine, Ben-Meir Michael, Dunlop William, Rosler Rachel, West Adam, O’Connor Gabrielle, Chan Thomas, Badcock Diana, Putland Mark, Hansen Kim, Crock Carmel, Liew Danny, Taylor David, and Staples Margaret. 2019. Impact of scribes on emergency medicine doctors’ productivity and patient throughput: multicentre randomised trial. BMJ 364: l121. 10.1136/bmj.l121
  • [89]. Walker Katie, Dwyer Tim, and Heaton Heather A.. 2021. Emergency medicine electronic health record usability: where to from here? Emergency Medicine Journal 38, 6: 408–409. 10.1136/emermed-2021-211384
  • [90]. Wang Dakuo, Wang Liuping, Zhang Zhan, Wang Ding, Zhu Haiyi, Gao Yvonne, Fan Xiangmin, and Tian Feng. 2021. “Brilliant AI Doctor” in Rural Clinics: Challenges in AI-Powered Clinical Decision Support System Deployment. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–18. 10.1145/3411764.3445432
  • [91]. Wright Adam and Sittig Dean F.. 2008. A four-phase model of the evolution of clinical decision support architectures. International Journal of Medical Informatics 77, 10: 641–649. 10.1016/j.ijmedinf.2008.01.004
  • [92]. Wu Leslie, Cirimele Jesse, Card Stuart, Klemmer Scott, Chu Larry, and Harrison Kyle. 2011. Maintaining shared mental models in anesthesia crisis care with nurse tablet input and large-screen displays. In Proceedings of the 24th annual ACM symposium adjunct on User interface software and technology (UIST ‘11 Adjunct), 71–72. 10.1145/2046396.2046428
  • [93]. Wu Leslie, Cirimele Jesse, Leach Kristen, Card Stuart, Chu Larry, Harrison T. Kyle, and Klemmer Scott R.. 2014. Supporting crisis response with dynamic procedure aids. In Proceedings of the 2014 conference on Designing interactive systems, 315–324. 10.1145/2598510.2598565
  • [94]. Yadav Kabir, Chamberlain James M., Lewis Vicki R., Abts Natalie, Chawla Shawn, Hernandez Angie, Johnson Justin, Tuveson Genevieve, and Burd Randall S.. 2015. Designing Real-time Decision Support for Trauma Resuscitations. Academic Emergency Medicine 22, 9: 1076–1084. 10.1111/acem.12747
  • [95]. Yang Qian, Steinfeld Aaron, and Zimmerman John. 2019. Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ‘19), 1–11. 10.1145/3290605.3300468
  • [96]. Yang Qian, Zimmerman John, Steinfeld Aaron, Carey Lisa, and Antaki James F.. 2016. Investigating the Heart Pump Implant Decision Process: Opportunities for Decision Support Tools to Help. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 4477–4488. 10.1145/2858036.2858373
  • [97]. Zhang Zhan, Joy Karen, Upadhyayula Pradeepti, Ozkaynak Mustafa, Harris Richard, and Adelgais Kathleen. 2021. Data Work and Decision Making in Emergency Medical Services: A Distributed Cognition Perspective. Proceedings of the ACM on Human-Computer Interaction 5, CSCW2: 356:1–356:32. 10.1145/3479500
  • [98]. Zhang Zhan, Luo Xiao, Harris Richard, George Susanna, and Finkelstein Jack. 2022. Hands-Free Electronic Documentation in Emergency Care Work Through Smart Glasses. In Information for a Better World: Shaping the Global Future (Lecture Notes in Computer Science), 314–331. 10.1007/978-3-030-96960-8_21
  • [99]. Zhou Xiaomu, Ackerman Mark S., and Zheng Kai. 2010. Computerization and information assembling process: nursing work and CPOE adoption. In Proceedings of the 1st ACM International Health Informatics Symposium (IHI ‘10), 36–45. 10.1145/1882992.1883000
  • [100]. Zhou Xiaomu, Ackerman Mark, and Zheng Kai. 2011. CPOE workarounds, boundary objects, and assemblages. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3353–3362. 10.1145/1978942.1979439
