Author manuscript; available in PMC: 2020 May 1. Published in final edited form as: Proc SIGCHI Conf Hum Factor Comput Syst. 2019 May;2019:547. DOI: 10.1145/3290605.3300777

Comparing the Effects of Paper and Digital Checklists on Team Performance in Time-Critical Work

Leah Kulp 1, Aleksandra Sarcevic 1, Megan Cheng 2, Yinan Zheng 2, Randall S Burd 2
PMCID: PMC6800573  NIHMSID: NIHMS1054002  PMID: 31633126

Abstract

This mixed-methods study examines the effects of a tablet-based checklist system on team performance during the dynamic and safety-critical process of trauma resuscitation. We compared team performance from 47 resuscitations that used a paper checklist to that from 47 cases with a digital checklist to determine if digitizing the checklist led to improvements in task completion rates and in how quickly teams initiated the 18 most critical assessment and treatment tasks. We also examined whether checklist compliance increased with the digital design. We found that using the digital checklist led to more frequent completions of the initial airway assessment task but fewer completions of ear and lower extremities exams. We did not observe any significant differences in time to task performance, but found increased compliance with the checklist. Although improvements in team performance with the digital checklist were minor, our findings are important because they showed no adverse effects resulting from the digital checklist introduction. We conclude by discussing the takeaways and implications of these results for effective digitization of medical work.

Keywords: Digital checklist, medical records, team performance, technology implementation, trauma resuscitation, user interface design

1. INTRODUCTION

Technology implementations in healthcare contexts are increasingly replacing paper records and forms, creating new opportunities for improving patient care and medical information documentation. Prior work on Electronic Health Records (EHRs) has shown that digitizing paper forms improves workflows, access to information, and patient care [12,13,24,33,37]. Transitions from paper to digital records, however, have been challenging, leading to slower performance and time-consuming workarounds [5,6,27]. This challenge of converting paper records is in part driven by our lack of knowledge about the effects of digitization on physician and team performance during actual patient care. Wu et al. [35] compared the quality of medical decision making in simulated patient scenarios under three conditions (no cognitive aid, paper cognitive aid, and digital cognitive aid), finding that the digital aid outperformed the paper aid. Although this prior work offers important guidelines for researchers and designers of healthcare information technology (HIT), additional studies are needed to understand the effects of paper versus digital formats on team performance in situ.

In this paper, we evaluated the effects of a transition from paper to digital checklists on team performance during trauma resuscitation—a time-critical process of evaluating and stabilizing severely injured patients—in the trauma center of a regional pediatric teaching hospital. Most medical checklists were designed for use after the tasks are completed and have been shown to improve patient care and reduce errors [4,8,14,23,30,32]. These checklists, however, do not meet the needs of dynamic medical settings that require concurrent use of checklists [7,10,35]. The introduction of a newly designed digital checklist for trauma resuscitation, based on its paper predecessor, at our research site provided an opportunity for understanding the effects of digitization on complex teamwork in real time. To collect data on team performance, we reviewed videos of 94 resuscitations (47 with paper and 47 with digital checklists) and determined the start and end times for the 18 most critical assessment and treatment tasks from the checklist, as well as whether those tasks were performed to completion. We also analyzed the number of unchecked items using both paper forms and digital checklist logs from these cases. We then compared team performance between the two conditions using three measures: task completion rates, time to task performance, and checklist compliance. We found that the digital checklist improved completion rates for the initial airway assessment task, while also reducing completion rates for ear and lower extremity assessment tasks. Our results also showed no significant difference between the paper and digital checklists in time to task performance. Finally, we found increased checklist compliance when using the digital checklist. Although team performance in the digital checklist cases improved only slightly, our results suggest that a digital checklist system is a feasible replacement for its paper counterpart—it was used concurrently during complex and dynamic teamwork without causing negative effects on team performance, while also providing better access to patient and process information. With this work, we contribute (1) a design of a digital checklist system for concurrent use during time-critical medical scenarios and (2) implications for digitizing complex medical work.

2. RELATED WORK

The use of EHRs has shown improvements over paper forms, including reduced errors and better quality of documentation [12,13,24,28]. Hawley et al. [13], for example, found a significant improvement in the completeness of data in the EHR when compared to a paper health record. In another study, Reddy et al. [28] identified the ability of electronic systems to decouple information from its representations, leading to better coordination among clinicians. In contrast, some studies found decreased efficiency and increased workload with EHRs [5,6,27,36]. Chen [5], for example, observed a gap between the formal EHR documentation and actual workflow, which led clinicians to document transitional information on paper. Chiang et al. [6] showed increased documentation times and changes in the nature of documentation after EHR implementation, while Pine and Mazmanian [27] described "perfect" but inaccurate accounts of nurse documenters and the tension between following protocols and documenting what was actually done. To address these challenges and workarounds in EHR use, Bardram and Houben [2] proposed a hybrid patient record (HyPR) that consists of a paper binder and an electronic tablet. The HyPR allowed clinicians to benefit from both paper and electronic systems, while also preserving collaborative affordances such as portability, collocated access, shared overview, and mutual awareness. Although EHRs have advantages over paper forms, the misalignment between system design and actual work practices has had unintended effects on clinicians' work.

Checklists and other cognitive aids have also become part of this digitization trend in healthcare [1,7,21,29,33,35]. Checklists differ from EHRs because they support compliance with standardized protocols rather than just documentation. Prior studies have shown that replacing paper forms with electronic checklist systems in medical contexts has had positive effects on task performance and teamwork [1,7,21,29,33,35,37]. Agarwala et al. [1], for example, found that an electronic checklist for anesthesia handoffs improved relay and retention of critical patient information. In another study, an electronic trauma health record (eTHR) led to faster documentation during a usability study [37]. Despite these benefits of electronic checklists, digitization challenges similar to those found in EHR implementation persist. Designing systems for highly complex medical work is even more challenging and requires an in-depth understanding of technology use and its effects on team performance. A recent review [17], for example, showed that electronic checklists increased memory support, but that their designs were often too rigid in providing access to necessary information (e.g., [22]).

Medical researchers at our study site have previously compared team performance during pediatric trauma resuscitations with and without paper-based checklists [15], observing improvement in the completion of some tasks, as well as increased odds of task completion in the post-checklist implementation period. In a later study [21], after deploying and evaluating the digital aid through a technology probe approach, we suggested that the digital checklist for trauma resuscitation could be an effective replacement for the paper form. Here, we build on this prior work by comparing the effects of the two checklist formats on time-critical team performance. Our insights will not only inform future designs of concurrent digital checklists to maximize their effectiveness, but also offer implications for more effective digitization of medical work in general.

3. RESUSCITATION CHECKLIST OVERVIEW

To standardize the care of critically injured patients and improve outcomes, trauma teams follow the Advanced Trauma Life Support (ATLS) protocol [25]. The protocol has two parts—primary and secondary surveys. During the primary survey, physicians assess the patient’s major physiological systems (Airway, Breathing, Circulation, Disability, and Exposure, or ABCDE). The secondary survey focuses on other injuries through a head-to-toe evaluation. To reduce errors and assist with protocol compliance, a team of physicians and researchers at our site designed a paper-based checklist (Figure 1(left)) for the trauma team leader—a physician leadership role usually taken by a senior surgical resident or fellow, an attending surgeon, or an emergency medicine physician. The team leader is responsible for overseeing the care and performance of all tasks, and is hands-off most of the time. The checklist was, therefore, designed to serve as a mental and visual guide to ensure protocol compliance, and reduce deviations and skipped tasks. Some leaders administer the checklist as they move through the tasks, verbally prompting the team to complete them, while other leaders prompt the team only when tasks are skipped. Although following the order of items on the checklist is important, tasks are sometimes performed out of order to address non-routine scenarios, or in parallel to increase efficiency. After being evaluated for its effects in a simulation environment, the paper checklist was deployed and has since become standard practice at the trauma center, leading to fewer missed steps [15,26].

Figure 1: Left: Paper version of the trauma resuscitation checklist. Right: Example screens from the digital checklist.

The adoption of this low-tech artifact presented an opportunity for designing a computerized tool to further improve trauma teamwork. Although helpful, the paper checklist is static, rigid, and often incomplete, providing limited support for dynamic, rapidly evolving medical scenarios [10]. As an initial step, we converted the paper checklist into its digital counterpart (Figure 1(right)) and deployed it during actual resuscitations in a supervised manner after a period of testing with physicians [16,19–21]. These initial trials have suggested that the digital checklist could be a replacement for the paper checklist. The visual and interaction enhancements afforded by the digital version were expected to improve checklist completion rates, while also informing further design improvements.

3.1. Digital Checklist Design Process

The design and evaluation of the digital checklist took place at a metropolitan pediatric teaching hospital and a level 1 trauma center, with over 600 patients treated annually in one of two adjoining resuscitation rooms. The process included conversion from paper to digital format, initial digital checklist trials to assess usability, and the real-world implementation in patient care.

3.1.1. Conversion from Paper to Digital Checklist.

We collected and analyzed the use patterns of 800 paper checklists filled out by trauma team leaders over a 40-month period (May–August 2012; July 2015–June 2018) during actual resuscitations. The paper checklist has four main sections on a single-sided page (Figure 1(left)): the Pre-arrival Plan section includes preparation tasks; the Primary Survey section includes the ABCDE tasks, vital signs checks, and a pause section for teams to get on the same page; the Secondary Survey section lists all body parts of the head-to-toe exam; and the Prepare for Travel section prepares the patient for departure from the trauma bay. Using the paper forms, we examined the frequency of checked and unchecked items and the hand-written notes in the margins and in different sections of the checklist; we also examined user interactions with the checklists through video review. To preserve the affordances of the paper checklist and adapt its initial design to a tablet-based system with limited screen real estate, each section of the paper checklist was mapped onto a tab, while maintaining the order of items within each section. Users can move between tabbed pages by either tapping on a tab icon or scrolling up and down.

The paper checklist contains sections that are used for rarely occurring tasks, which are indicated by non-applicable (“N/A”) checkboxes that can be checked when the item does not apply (e.g., “Prepare for intubation” item does not apply when patients do not require intubation). We observed, however, that physician leaders frequently either skipped the “N/A” checkboxes or crossed off the whole section. Digitizing the checklist allowed us to have these “N/A” items checked off by default, which in turn required leaders to uncheck them only if the task was actually performed. Some “N/A” items also have multiple sub-tasks. For example, the intubation task requires the team to perform five sub-tasks, including upgrading the activation level and performing the neurological exam. These sub-tasks are often left unchecked on paper checklists when the main task does not apply. Once we transferred the design to the digital format, we hid these items in a collapsible section that leaders could expand to reveal them, if needed. This design change significantly reduced the visual clutter of the checklist.
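To make this behavior concrete, below is a minimal sketch of such an item model; the class and field names are our illustrative assumptions, not the deployed system's code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChecklistItem:
    label: str
    not_applicable_default: bool = False  # rarely applicable ("N/A") items
    subtasks: List["ChecklistItem"] = field(default_factory=list)
    collapsed: bool = True                # sub-tasks hidden until expanded

    def __post_init__(self):
        # "N/A" items start checked by default; leaders uncheck them only
        # if the task is actually performed.
        self.checked = self.not_applicable_default

# Example: intubation rarely applies, so it starts checked and its
# sub-tasks stay collapsed unless the leader expands the section.
intubation = ChecklistItem(
    "Prepare for intubation",
    not_applicable_default=True,
    subtasks=[ChecklistItem("Upgrade activation level"),
              ChecklistItem("Perform neurological exam")],
)
```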

We also included a collapsible note-taking area at the top of the screen to allow for writing the margin notes that were frequently observed on paper checklists. Similarly, we provided individual note areas for each checklist item. A written note is minimized into a readable thumbnail, so it remains visible to the user throughout checklist use. The numerical items, such as weight, vitals, neurological Glasgow Coma Score or GCS (a three-part score for indicating the patient's eye, verbal, and motor responses), and temperature, can be typed using dedicated fields. As we analyzed the paper checklists, we observed that notes were sometimes taken for specific items (e.g., values noted for each vital sign), but then the corresponding checkboxes remained unchecked. Discussing this observation with domain experts helped us clarify that recorded notes or values next to an item indicated task performance, so we implemented an auto-checking feature on the digital checklist. For example, if a team leader types in a value for "blood pressure," the Blood pressure checklist item will be automatically checked. This feature helped reduce the number of taps for the user. To provide an overview of the progress for individual sections without the need for scrolling to those sections, we incorporated progress circles around the icons. The progress circles change colors in response to item checking in each section, cycling through red (least complete), orange, yellow, and green (100% complete). Before finishing the checklist and submitting the log file, users can review and complete any unchecked items on a modal screen or tap "Back" to return to the checklist. The checklist was designed and developed for a Samsung Galaxy tablet.
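The auto-checking and progress-circle behaviors might be implemented roughly as sketched below; the function names and color thresholds are illustrative assumptions (the color cutoffs are not specified in the paper).

```python
def auto_check(item, value):
    """Typing a value (e.g., a blood pressure reading) also checks the
    corresponding item, saving the user a tap."""
    item.note = value
    item.checked = True

def progress_color(section_items):
    """Map a section's completion fraction to its progress-circle color."""
    frac = sum(it.checked for it in section_items) / len(section_items)
    if frac == 1.0:
        return "green"    # 100% complete
    if frac >= 0.66:
        return "yellow"   # assumed cutoff
    if frac >= 0.33:
        return "orange"   # assumed cutoff
    return "red"          # least complete
```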

3.1.2. Initial Digital Checklist Trials.

We evaluated the digital checklist with 11 medical experts over a 15-month period (October 2015–January 2017), making significant design changes based on their feedback and real-time use data. We first ran a usability study with three of the experts (2 team leaders, 1 hospital-based research assistant) to determine the modifications to content, color schemes, and overall flow. Five experts (2 team leaders, 3 research assistants) then used the digital checklist during 16 actual resuscitations, while shadowing leaders who were using the paper checklists. Their feedback helped us identify a range of interface design and system issues. Following this phase, we piloted the digital checklist in live trauma resuscitations with six team leaders. When the leader arrived in the trauma room, a research assistant handed him or her the digital checklist and stood by in case the leader had any questions or issues with it. Using this approach, we collected feedback from an additional 11 resuscitations [21].

The design process was highly iterative, leading to 10 major design updates and about 15 smaller changes to the system before the deployment. Some changes were tested individually with physician leaders and some were evaluated in situ during actual resuscitations. These mixed approaches to evaluation yielded results that differed in structure and relevance, so we had to reconcile them before incorporating them into the system. We prioritized updates based on their relevance to medical work. For example, we updated note-taking features to allow typing notes for all items; enabled the feature that auto-checks an item after users enter a value (e.g., for vital signs); increased the resolution of note thumbnails for better viewing; designed new icons and buttons; enabled checklist resuming to the latest screen if the application was accidentally closed; enlarged note-taking areas; embedded a crash report feature for remote monitoring of any system crashes; and redesigned the log file for more streamlined data analysis.

3.1.3. Real-World Implementation of the Digital Checklist.

Before the checklist was officially deployed in January 2017, we conducted training sessions with physician leaders. As new physicians joined the hospital (and consented to participate in the study), we trained them on the checklist before they used it during actual patient care. The leaders were instructed to use the digital checklist as they would use the paper version, i.e., to ensure that all tasks were performed and verbally confirmed, and to request completions of unperformed tasks. Although the digital checklist was available, its use was not enforced, so some physician leaders continued to use the paper form. We have been interviewing team leaders who had used the digital checklist three or more times to gather feedback on usability, interface design, and overall effects of the checklist on their work. This threshold for the number of checklist uses was set to allow for sufficient time to schedule the interviews before the end of resident or fellow rotations, while also ensuring that leaders were familiar enough with the checklist interface to provide meaningful feedback. To date, we have interviewed 13 team leaders and used their feedback to make both minor and major adjustments to the system design. Since the checklist deployment, a total of 31 physician leaders used the digital checklist during actual patient care, ranging from one to 76 use cases per leader (mean 13, SD 18, median 5). In this study, we look at a portion of those cases (selected to match the cases with paper checklists, as described next) and investigate the effects of checklist digitization on team performance and user interactions with the technology.

4. METHODS

The study was conducted at the same hospital and trauma center where we initially designed, evaluated, and trialed the digital checklist. The trauma team activation type at our site is determined based on patient acuity, and ranges from transfers, to stat (low acuity), to attending (high acuity). Upon being notified by the Emergency Medical Services (EMS) about an incoming patient, the hospital's emergency communications team sends a page to all trauma team members on call, who then assemble in the resuscitation room and prepare for patient arrival. Some patients arrive at the hospital without pre-arrival notification (trauma "now"), which may affect team readiness. Resuscitations at this center are audio and video recorded for quality improvement and research purposes under a protocol approved by the hospital's Legal and Risk Management Department. Each room is equipped with overhead and wide-angle video cameras and two directional microphones for recording live resuscitations. The system records data to a server that is accessible via a password-protected portal.

The study population included only admitted patients with pre-arrival notification and blunt injuries. We removed patient types and cases that are rarely seen at the center to avoid skewing our dataset (e.g., patients with complex procedures such as intubation, i.e., inserting a tube in the throat to assist with breathing, or patients with penetrating injuries, such as gunshot wounds). The study was approved by the hospital’s Institutional Review Board and a reliance agreement has been established with our university.

4.1. Data Collection & Study Procedures

The study involved a multi-step process for collecting and preparing data for analyses. Our goal was to compare the cases with paper checklists to those with digital checklists, so we first performed case selection and matching. We then obtained data about team performance through video review and coding of all selected cases; we marked the start and end times for 18 checklist tasks and determined if those tasks were performed to completion. Finally, we collected all paper forms from the paper checklist cases and all log files from the digital checklist cases to derive the number of unchecked items and to perform content analysis of notes that were taken on checklists during patient care.

4.1.1. Case Selection and Matching.

Our dataset consisted of 94 trauma resuscitations: 47 with paper checklists collected over a seven-month period (April–October 2016) and 47 with digital checklists collected over a fifteen-month period (January 2017–March 2018). To minimize differences in baseline patient and resuscitation features, we selected the cases by performing case matching. We began with a sample of all paper checklists collected since the checklist was introduced in 2012 and for which we already had coded team performance data (51 cases). We then filtered for admitted patients with blunt injuries, leaving us with 47 paper checklist cases for analysis. We applied a similar filtering method to the digital checklist cases, beginning with all resuscitations from January 2017 to March 2018 (252 cases). Of these, 187 were already coded for team performance data. We then filtered the 187 cases to include only admitted patients, which brought us to 95 cases, and further filtered to include only patients with blunt injuries, leaving us with 82 digital checklist cases. From these, we removed cases with patient intubation, ending with 72 cases. To select the digital cases that best matched the 47 paper checklist cases, we identified nine event features as matching variables: patient age, team activation level (transfer, stat, attending), motor component of the neurological GCS exam, and abbreviated injury scale (AIS) based on six body regions (head, face, neck, thorax, abdomen, spine). We defined the age categories as: 0–2, 3–8, 9–12, and 13+. From the remaining 72 digital checklist cases, we performed case matching using the nine features and the Hungarian algorithm [18] to calculate and minimize error scores (Euclidean distances) for each pair of cases. To evaluate whether the matched 47 paper and 47 digital cases displayed similar distributions for all variables, we ran univariate analyses (Fisher's exact test), finding no significant differences (all p>0.05) (Table 1).
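As a concrete illustration of this matching step, the sketch below pairs each paper case with its closest digital case by minimizing the total Euclidean distance; the random arrays stand in for the numerically encoded nine features (the encoding is our assumption), and SciPy's linear_sum_assignment implements the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
paper = rng.random((47, 9))    # stand-ins for the 47 paper cases x 9 features
digital = rng.random((72, 9))  # stand-ins for the 72 candidate digital cases

cost = cdist(paper, digital, metric="euclidean")  # pairwise error scores
rows, cols = linear_sum_assignment(cost)          # minimize total error
matched_digital = digital[cols]                   # the 47 best-matching cases
```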

Table 1:

Summary statistics for all patient and resuscitation features for all cases, paper checklist cases, and digital checklist cases (%).

| Characteristics | All Cases (n=94) | Paper (n=47) | Digital (n=47) | p-value |
| --- | --- | --- | --- | --- |
| Age (years) | | | | |
| 0–2 | 31.9 | 34.04 | 29.8 | 0.8 |
| 3–8 | 35.1 | 34.04 | 36.2 | |
| 9–12 | 18.1 | 14.9 | 21.3 | |
| 13+ | 14.9 | 17.02 | 12.8 | |
| Activation level | | | | |
| Attending | 5.3 | 8.5 | 2.1 | 0.3 |
| Stat | 53.2 | 55.3 | 51.1 | |
| Transfer | 41.5 | 36.2 | 46.8 | |
| GCS-Motor (%) | | | | |
| 1 | 1.1 | 2.1 | 0 | 0.7 |
| 2 | 0 | 0 | 0 | |
| 3 | 1.1 | 2.1 | 0 | |
| 4 | 1.1 | 2.1 | 0 | |
| 5 | 3.2 | 2.1 | 4.3 | |
| 6 | 93.6 | 91.5 | 95.7 | |
| AIS ≥2 (%) | | | | |
| Head | 50 | 48.9 | 48.9 | 0.5 |
| Face | 4.3 | 4.3 | 4.3 | 1 |
| Neck | 7.5 | 12.8 | 2.1 | 0.1 |
| Thorax | 9.6 | 8.5 | 10.6 | 0.5 |
| Abdomen | 3.2 | 4.3 | 2.1 | 0.7 |
| Spine | 22.3 | 23.4 | 21.3 | 1 |

4.1.2. Video Review and Coding.

Three researchers at the hospital with experience in trauma resuscitation reviewed video recordings of all 94 resuscitations to annotate the start and end times for the 18 assessment and treatment tasks from the checklist, and to determine if those tasks were performed to completion. The 18 tasks were selected based on their medical relevance and included six primary survey and 12 secondary survey tasks (Figure 1(left) and Table 2). The primary survey tasks were: airway assessment (checking if the patient's airway is clear and removing any obstructions), c-spine stabilization (stabilizing the patient's neck and cervical spine), pulses exam (checking distal pulses as part of the circulation assessment), Glasgow Coma Score (GCS) verbalized (a three-part neurological exam of the patient's eye, verbal, and motor responses), pupils exam, and exposure assessment (removing clothes, placing a warm blanket, and measuring temperature). The secondary survey tasks included physical exams of the head, ears, eyes, facial bones, nose, mouth, neck, chest, abdomen, pelvis, upper extremities, and lower extremities, as well as verbalizing those findings. Performance times were annotated based on a data dictionary, developed by physicians on our research team, that defined successful completion for each task and its start and end times. For example, start and end times for the right distal pulse exam were defined as "start: examiner's fingers placed on patient's right foot; end: fingers removed from foot." This task was considered completed after the examiner verbalized the findings (e.g., "I can feel distal pulses on the right"). The researchers also noted the patient arrival time to allow for calculating the time elapsed from patient arrival until the start time of each of the 18 tasks (time to task performance). To decrease the time required for coding videos, the researchers used video annotation software designed for rapid identification, time-stamping, and archiving of data. All three researchers were trained in coding task and time performance on a sample of resuscitations, and proceeded with video annotation for research purposes only after their inter-rater reliability results achieved a Kappa value of >.80 for both variables when compared to experienced coders on the team.
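As an illustration of this reliability gate, the sketch below compares a trainee's task-completion labels against an experienced coder's labels using Cohen's kappa; the labels are made up, and we assume scikit-learn for the computation.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical completion labels for the same tasks in the same videos.
trainee = ["complete", "complete", "incomplete", "complete", "incomplete"]
expert  = ["complete", "complete", "incomplete", "incomplete", "incomplete"]

kappa = cohen_kappa_score(trainee, expert)
ready_to_code = kappa > 0.80  # threshold used in the study
```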

Table 2:

Task completion rates for paper and digital checklist cases (%).

| Tasks | Paper (n=47) | Digital (n=47) | p-value |
| --- | --- | --- | --- |
| Primary Survey | | | |
| Airway assessment | 59.6 | 85.1 | 0.01 |
| C-spine stabilization | 85.1 | 78.7 | 0.59 |
| Pulses | 55.3 | 48.9 | 0.68 |
| GCS verbalized | 89.4 | 89.4 | 1 |
| Pupils | 95.7 | 97.9 | 1 |
| Full exposure | 55.3 | 57.4 | 1 |
| Secondary Survey | | | |
| Head | 100 | 93.6 | 0.24 |
| Ears | 95.7 | 78.7 | 0.03 |
| Eyes | 8.5 | – | 0.12 |
| Facial bones | 93.6 | 93.6 | 1 |
| Nose | 46.8 | 42.6 | 0.84 |
| Mouth | 48.9 | 31.9 | 0.14 |
| Neck | 76.6 | 72.3 | 0.81 |
| Chest | 93.6 | 85.1 | 0.32 |
| Abdomen | 100 | 91.5 | 0.12 |
| Pelvis | 61.7 | 44.7 | 0.15 |
| Upper extremities | 85.1 | 66.0 | 0.05 |
| Lower extremities | 97.9 | 83.0 | 0.03 |

4.1.3. Content Analysis of Checklists.

Paper checklists were collected at the end of resuscitations, anonymized, and then scanned and uploaded to a secure server. Digital checklist log files were saved locally on the tablet at the end of each resuscitation, then anonymized and shared through the same secure server. We transcribed all hand-written notes from paper checklists and typed or stylus-written notes from digital checklists into a file for further analysis. We also noted the checklist items or sections corresponding to those notes. Finally, we recorded all unchecked items from the paper forms and digital logs. To gain insights into user interactions and whether they differed between the two formats, we performed content analysis of the notes, looking at the nature of notes (e.g., narrative or numerical), the type of information recorded (e.g., pre-hospital information or exam findings), and their location and length. We calculated the averages of total notes and unchecked items, and used univariate analysis to compare user interactions and checklist compliance.
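A simple way to operationalize the narrative/numerical/combination distinction used in this content analysis is sketched below; the regular expressions are our assumption, not the study's coding instrument.

```python
import re

def note_type(note: str) -> str:
    """Categorize a transcribed note by its character content."""
    has_digits = bool(re.search(r"\d", note))
    has_words = bool(re.search(r"[A-Za-z]", note))
    if has_digits and has_words:
        return "combination"   # e.g., "abrasions R arm, 3 cm"
    if has_digits:
        return "numerical"     # e.g., "120/80"
    return "narrative"         # e.g., "no deformities"
```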

4.2. Team Performance Measures & Data Analysis

We selected three measures of team performance that best captured the checklist use effects: task completion rates, time to task performance, and checklist compliance.

4.2.1. Task Completion Rates.

Based on video coding, each of the 18 tasks was annotated as either complete or incomplete for each of the 94 resuscitations. We then separately calculated completion rates for paper and digital checklist cases using the following formula: # times task completed / total # of cases. Finally, we performed univariate analysis using the Fisher’s exact test to determine any differences in task completion rates between the two checklist formats (Table 2).
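The sketch below reproduces this analysis for one task: the airway assessment counts are reconstructed from the percentages in Table 2 (59.6% and 85.1% of 47 cases), arranged as a 2x2 contingency table and tested with SciPy's Fisher's exact test.

```python
from scipy.stats import fisher_exact

paper_done, paper_total = 28, 47      # 28/47 = 59.6% completion (paper)
digital_done, digital_total = 40, 47  # 40/47 = 85.1% completion (digital)

# Rows: paper vs. digital; columns: completed vs. not completed.
table = [[paper_done, paper_total - paper_done],
         [digital_done, digital_total - digital_done]]
odds_ratio, p = fisher_exact(table)   # p is approximately 0.01 (Table 2)
```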

4.2.2. Time to Task Performance.

After we determined completion rates, we calculated time to task performance, i.e., the time from patient arrival to task initiation. This measure is indicative of checklist use effects because it depends on the leader requesting the task based on their own memory or a checklist lookup. We performed a regression analysis, clustering data by checklist case and using the airway assessment task as the reference variable. We then compared time to task performance of individual tasks between digital and paper cases by first calculating the means of time to task performance for each of the 18 tasks, then using Levene's test to ensure equality of variances between all task groups, and finally using t-tests to determine any differences in means. Due to the large number of comparisons (18), we applied a Bonferroni correction to reduce the likelihood of a type I error and adjusted p values to meet a threshold of p=0.011.
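A minimal sketch of this per-task comparison is shown below; the time arrays are placeholders rather than the study data.

```python
from scipy.stats import levene, ttest_ind

# Placeholder times (seconds) from patient arrival to airway assessment.
paper_times = [70, 65, 80, 72]
digital_times = [68, 71, 74, 69]

# Levene's test checks the equal-variance assumption for the t-test.
_, p_var = levene(paper_times, digital_times)
equal_var = p_var > 0.05

_, p = ttest_ind(paper_times, digital_times, equal_var=equal_var)
significant = p < 0.011  # corrected threshold reported in the paper
```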

4.2.3. Checklist Compliance.

We analyzed checklist compliance using two checklist interaction measures: note taking and unchecked items. We defined complete checklist compliance as "all applicable checklist items checked off." The checklist content was designed so that most items are required for every patient, except for a few items marked "N/A" (Figure 1(left)). In our compliance analysis, N/A items were omitted if they did not apply to the case. We calculated the number of unchecked items for each case, then calculated averages for paper and digital checklists, and finally used univariate analysis to compare the two formats (Table 4). We used note taking as another measure of compliance, showing to what extent team leaders interacted with the checklist. We compared the average number of notes taken between paper and digital checklists using univariate analysis.

Table 4:

Comparison of note taking practices and the number of unchecked items for paper and digital checklists.

| Measure | Paper Mean (SD) | Digital Mean (SD) | p-value |
| --- | --- | --- | --- |
| Total notes | 4 (5.3) | 7 (3.9) | 0.008 |
| Unchecked items | 9.7 (6.4) | 2.9 (4.3) | <0.001 |

5. RESULTS

We analyzed 94 pediatric trauma resuscitations, 47 of which were performed between April and October of 2016 using the paper checklist, and 47 were performed between January 2017 and March 2018 using the digital checklist. Two-thirds of the patients were younger than nine years; 53.2% of cases were low-acuity activations (i.e., stat level); 93.6% of all patients had GCS motor scores of six (highest possible, meaning they were lower-risk patients who could follow commands); and, 50% of injuries were to the head (Table 1). We observed no significant differences in patient populations between the two study periods.

5.1. Task Completion Rates

Of the 18 most critical tasks from the primary and secondary surveys, the airway assessment on the primary survey was completed more frequently in resuscitations that used the digital checklist (p=0.01, Table 2). Evaluation of ears and lower extremities on the secondary survey, however, was performed at lower rates when the digital checklist was used (p=0.03 for both). No other tasks were affected by the implementation of the digital checklist. These are promising results for an initial evaluation of the effects of digital technology on time-critical team performance. Although the digital checklist did not outperform the paper checklist, our findings suggested that using the digital checklist did not add to the team workload.

5.2. Time to Task Performance

The regression analysis showed no significant difference between paper and digital checklists in the time it took to initiate the 18 tasks from the time the patient arrived (Table 3). A comparison of the mean time to task performance of individual tasks between paper and digital checklist cases also showed no significant differences for any of the 18 tasks. The average total resuscitation time (from the moment the patient enters the room to the moment the patient leaves the room) did not differ between the paper and digital checklist cases (26.4 minutes vs. 26.8 minutes, p=0.89). Although a prior study found an average of 9 seconds improvement in the time to task performance after a paper checklist was implemented [15], we did not find any significant improvements in task timeliness with the introduction of the digital checklist. These results, however, are positive because they show no adverse effects associated with the introduction of the digital checklist.

Table 3:

Mean time to task (in minutes [m] and seconds [s]) for paper and digital checklist cases.

| Tasks | n (Paper, of 47) | Paper Time (SD) | n (Digital, of 47) | Digital Time (SD) | p-value |
| --- | --- | --- | --- | --- | --- |
| Primary Survey | | | | | |
| Airway assessment | 41 | 1m 10s (44s) | 47 | 1m 10s (34s) | 0.66 |
| C-spine stabilization | 38 | 56s (163s) | 36 | 38s (80s) | 0.77 |
| Pulses | 46 | 1m 48s (57s) | 46 | 1m 53s (44s) | 0.95 |
| GCS verbalized | 42 | 2m 26s (57s) | 42 | 2m 19s (51s) | 0.42 |
| Pupils | 44 | 2m 31s (61s) | 45 | 3m 4s (58s) | 0.08 |
| Full exposure | – | 1m 52s (52s) | – | 3m 48s (141s) | – |
| Secondary Survey | | | | | |
| Head | 47 | 3m 56s (93s) | 45 | 3m 20s (80s) | 0.06 |
| Ears | 44 | 4m 46s (125s) | 39 | 5m 5s (102s) | 0.97 |
| Eyes | 11 | 7m 1s (383s) | 7 | 7m 14s (421s) | 0.74 |
| Facial bones | 43 | 4m 29s (121s) | 46 | 4m 11s (89s) | 0.23 |
| Nose | 41 | 4m 37s (110s) | 36 | 4m 50s (105s) | 0.70 |
| Mouth | 40 | 5m 2s (148s) | 38 | 4m 36s (99s) | 0.12 |
| Neck | 39 | 5m 16s (117s) | 38 | 5m 24s (122s) | 0.66 |
| Chest | 44 | 5m 27s (104s) | 46 | 4m 41s (90s) | 0.01 |
| Abdomen | 46 | 5m 36s (112s) | 47 | 5m 1s (93s) | 0.065 |
| Pelvis | 42 | 6m 5s (111s) | 39 | 5m 15s (107s) | 0.061 |
| Upper extremities | 44 | 6m 2s (125s) | 41 | 5m 9s (90s) | 0.074 |
| Lower extremities | 47 | 6m 12s (125s) | 44 | 5m 31s (106s) | 0.09 |
| Total resuscitation time | 47 | 26.4m (10m) | 47 | 26.8m (15m) | 0.89 |

5.3. Checklist Compliance

We examined a total of 325 notes from digital checklists and 177 notes from paper checklists, associated with 34 different checklist items. Our analysis showed significantly more notes on the digital checklist (Table 4). However, only 30% (14/47) of digital checklists contained margin notes, while 44.7% (21/47) of paper checklists did. Most notes on digital checklists were written for the "State GCS" task, where physicians recorded the actual values obtained through the exam (39 notes), followed by vitals (33), weight (31), and temperature (25). The most common locations of notes on paper checklists were the "Estimated weight" field (40 notes), followed by the margin areas (21) and vital signs (11). We identified categories of notes similar to those from our prior work [38], including pre-hospital information (about en route interventions), exam findings, task status (noting whether the task is in progress or needs to be done), and care plan (discussing laboratory results or next destination). We observed the following distribution of notes across these four categories: pre-hospital information—51 notes on digital and 55 on paper checklists; exam findings—260 digital, 109 paper; task status—10 digital, 6 paper; and care plan—4 digital, 7 paper. These results showed that paper checklists had more notes about pre-hospital information and care plan, while digital checklists had more notes about exam findings and task status. We also categorized the notes based on their type: whether they included words (narrative), numbers (numerical), or both letters and numbers (combination). These categories helped inform many of our design decisions (e.g., whether to offer an alphanumerical keyboard or only a numerical one). We found more narrative, numerical, and combined notes on digital checklists, but the overall trend was similar between the formats: numerical notes were most frequent (227 digital, 92 paper), followed by narrative (76 digital, 68 paper) and combined notes (22 digital, 16 paper). For note length, we found that users mostly wrote brief, one-word notes on both formats (245 digital, 110 paper), followed by notes of 2–4 words (55 digital, 48 paper) and longer notes of five or more words (25 digital, 19 paper). For both checklist formats, 50% of long notes (5+ words) were located in the margin areas.

We found more margin notes on paper checklists, written down not only in the top margin but also in the side and bottom margins; some notes were even written on the back side of the checklist sheet. The initial design of the digital checklist provided a margin note area, but physician leaders found it too small and asked for more space, especially for recording the pre-hospital information. We adapted the design based on this feedback by introducing a draggable icon for expanding and collapsing the margin note area. We also observed different note-taking preferences. For example, some physician leaders recorded the final neurological GCS exam score and the three individual scores, while others typed in the final score only.

Physician leaders who used the digital checklist left significantly fewer items unchecked than those who used the paper format (p<0.001, Table 4). When we narrowed our analysis to the most critical 18 tasks for which we also had completion rates, we found that users of the digital checklist still frequently omitted checking c-spine stabilization and exposure assessment from the primary survey, and neck exam from the secondary survey. Even so, these results suggested that the digital checklist exhibited higher compliance rates.

6. DISCUSSION

Emergency medical scenarios like trauma resuscitation are highly complex and often chaotic, involving a team of medical professionals who work in a coordinated manner to rapidly assess the patient and make critical decisions within minutes. These environments are information-rich because data about the patient and team activities come from a range of sources. Yet they are also information-poor because information technology support is minimal and rigid. To provide effective and efficient care, teams rely on memory aids and tools for recording information that are still largely paper based [13,15,23]. A review of 178 checklist studies by Hales et al. [11] found that implementing a checklist reduced errors of omission and improved standards of care, without leading to any adverse outcomes [14]. We have been developing a digital aid for fast-response medical teams and have shown the feasibility of concurrently using mobile technology to support complex teamwork. The results of our study showed that using the digital checklist led to improvements in completion of one primary survey task (airway assessment), while completions of two secondary survey tasks (ear and lower extremities exams) decreased. Although Kelleher et al. [15] found an average improvement of 9 seconds in time to task performance after trauma teams transitioned from no checklist to a paper checklist, we did not find any significant differences between the paper and digital checklists. Our results, however, showed that compliance with the checklist increased in the digital condition. The observed differences between the effects of paper and digital checklists on team performance were small and clinically irrelevant—we did not detect any adverse effects on team performance or patient care. The digital checklist also did not significantly affect the time it took trauma teams to initiate the tasks, which suggests that interactions with the checklist did not interfere with leader activities. In addition, the digital checklist provided better access to patient and process information, allowing team leaders to visualize progress and enter values efficiently. Below we discuss several takeaways from transitioning between paper and digital formats, as well as design implications for future checklists and medical work in general.

6.1. Takeaways from Digitizing Complex Work

The design and evaluation of technology for dynamic medical scenarios is challenging for several reasons: multidisciplinary teams with different needs and backgrounds are involved; time-sensitive issues are often addressed with little warning, requiring completion of a series of actions; multiple guidelines must be rapidly considered for the appropriate course of action; and information must be presented in a non-obtrusive manner. Although traditional HCI approaches to design address some of these challenges, they are often incomplete without evaluating the systems in situ and deriving data from real-world scenarios. To fill in this gap, we converted a paper checklist for trauma resuscitation into a digital format, designed its user interface and then evaluated and deployed the checklist during actual patient care. This approach allowed us to analyze the effects of new technology on many aspects of complex medical work. Although we found the process of converting the checklist successful, it posed many challenges, leading to three main takeaways: (1) how evaluating the technology in semi real-world scenarios yields different results than simulation-based evaluation; (2) how co-designing with domain experts and using alternative approaches to training lowers barriers to use; and, (3) how agile design iterations before and during deployment helped alleviate user frustrations.

First, while testing the design of new HIT before implementation is required and typically performed, most tests are conducted in simulation settings. Rather than just relying on simulations, we also asked domain experts to use the technology during actual work while shadowing members of the targeted user group. This agile design approach allowed us to emulate the real-world use and observe how this technology was used concurrently with task performance. The feedback gained was invaluable in advancing the design and significantly improving user interactions with technology. Second, we continued receiving feedback from real-world users even after we deployed the technology for actual work. We considered this agile, in-the-wild design approach to be a key element for ensuring successful adoption of the system: we gathered feedback from users about their real-world interactions with the system, quickly made design decisions and changes, and then observed how those changes affected the system use. Engaging the users in this agile design process created more investment in the success of the implementation and changes that were made. Finally, another contributing factor to increasing user adoption of our digital checklist system was listening to physician leaders’ feedback and rapidly implementing changes that alleviated user frustrations [9,34]. During interviews and informal feedback through research assistants, physician leaders provided thoughts and discussed ways in which the system or logistics around implementation (e.g., placement of the checklist tablet) could be improved. Addressing these issues and communicating the changes to users helped dissipate the frustrations, while also letting users know that their concerns were heard.

6.2. Implications for Digital Checklist Design

Our data on team performance and the effects of paper versus digital checklists provided further guidelines for designing an adaptive digital checklist for emergency medical scenarios.

The paper checklist outperformed the digital checklist by yielding higher completion rates for the ear and lower extremities exams. This finding suggests several directions for redesigning the checklist interface. The secondary survey tasks are sometimes completed simultaneously, leading to missteps or forgotten check-offs on this long list of items. To address this issue, the checklist already fades out checked items, but it could also dynamically highlight the upcoming item to draw attention (e.g., once the head item is checked and greyed out, the ear exam is highlighted to stand out among the long list of items). The lower extremities item, in particular, is placed at the bottom of the checklist, requiring the user to scroll down. A potential solution, sketched below, could include dynamic ordering of checklist items, so that checked items are moved down, pushing unchecked items to the top of the list for better visibility. Also, when the margin note area is open, not all secondary survey items fit on one page, requiring users to scroll down to the bottom of the list. The margin note area could automatically minimize the note into a thumbnail, exposing the entire secondary survey when the user gets to this page.
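A minimal sketch of these two ideas, assuming items carry a checked flag as in the earlier sketches (this is a design proposal, not an implemented feature):

```python
def reorder(items):
    """Move checked items down so unchecked items surface to the top;
    sorted() is stable, so protocol order is preserved within each group."""
    return sorted(items, key=lambda it: it.checked)  # unchecked (False) first

def next_to_highlight(items):
    """Return the first unchecked item in protocol order, if any,
    as the candidate for dynamic highlighting."""
    return next((it for it in items if not it.checked), None)
```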

Although physician leaders took more notes overall on the digital checklist, the number of margin notes was higher for paper forms. Despite enlarging the margin note area and making it expandable, users kept taking fewer margin notes on digital checklists. The most common notes found in the margin area of the digital checklist were about the EMS-relayed pre-hospital information report. Because this report follows a general pattern (e.g., patient age, sex, weight, mechanism of injury, sustained injuries, and any treatments) and requires more space, a design solution could be a pop-up form for entering the pre-hospital report, allowing users to enter the details and then condensing this information in the margin area.

We also found that one-word numerical notes were most common on both paper and digital checklists. To reduce the mental work required for calculating scores or observing trends, the digital checklist could offer quick solutions for entering this information and adapting it to user needs. In addition, we found that exam findings, often recorded in the secondary survey section, were the most frequent type of notes on both paper and digital checklists. The vocabulary used for these exam findings was fairly limited (e.g., "normal," "no deformities," "abrasions R arm"). To increase the efficiency of note taking, the digital checklist could offer options for both typing notes and selecting the most common exam findings using buttons. The buttons, for example, could offer details for the sustained injuries (e.g., laceration or bruising), indication of the injured body side (L[eft] or R[ight]), and even options for normal findings (e.g., normal or non-tender). Additionally, we observed that users added their own checkboxes on paper checklists for the subsequent care plan steps, laboratory tests, or x-rays that were needed. The only place to write these care plan items on the digital checklist is the margin area. The "Prepare for travel" section, however, could provide space at the bottom of the screen for entering custom checklist items for labs, x-rays, and other tasks. This design change would also allow for a level of customization that is often needed in emergency medical scenarios, which rapidly change based on patient response to various treatments.

As we shift from paper to digital checklists, we also need to consider the shifts in visibility of work and how interaction behaviors may be affected [31]. Prior studies of record keeping in medical work have already shown the importance of making that work visible [3,5,39]. We are seeing similar problems in our work. While the handwritten notes and checkmarks remain visible on paper checklists to the leader and surrounding team members at all times, this work becomes invisible on the digital checklist as the leader switches to a new checklist page. While a linear display of checklist items facilitates search for items in fast-paced scenarios, the design should not enforce the sequential order of task performance. An adaptive medical checklist should incorporate an overview of the status of the work to provide a big picture of progress.

7. CONCLUSIONS & FUTURE WORK

We compared and evaluated the effects of paper and digital checklists on team performance during trauma resuscitations. We found that the digital checklist was associated with lower completion rates for some tasks, but overall, it did not have adverse effects on this safety- and time-critical teamwork. While differences in team performance were nominal and the effects were fairly small from the clinical perspective, the introduction of the digital checklist did improve checklist compliance. Digitizing the checklist has also offered many advantages, including better access to patient data, integration with a future computerized decision support system, flexibility and customization of content, and dynamic display of information. The design and conversion process from paper to digital formats provided valuable lessons about the challenges and how best to overcome them, especially in high-risk medical work. Given the minimal adverse effects on one side and advantages and feasibility of the digital checklist on the other, we found the implementation of this technology overall successful.

This study has three limitations: (1) we focused on only 18 checklist tasks (out of 53) due to the time-consuming video coding; (2) most of our cases were low acuity and required routine care, as opposed to high-acuity cases, which may lead to different findings; and (3) this is a single-site study, and the digital checklist effects may differ at other centers. Our future work will include digital checklist deployment at another site that currently has no checklist for trauma resuscitations. We also plan additional analysis of interactions with the digital checklist using video review to better understand their effects on time to task performance. Finally, a study comparing the time of task performance with the time when the task was checked will help us determine the factors that contribute to different checking behaviors (e.g., delays, early checks).

CCS CONCEPTS.

  • Human-centered computing~Human-computer interaction (HCI)

ACKNOWLEDGMENTS

This research has been supported by the National Science Foundation under Award Number 1253285, and partially supported by the National Library of Medicine of the National Institutes of Health under Award Number R01LM011834. We thank Ivan Marsic, Omar Ahmed, Richard Farneth, Lauren Waterhouse, and the medical staff for their expertise and participation. We also thank Brett Rosen, Keegan Cannon, and Alyssa Klein for their contributions to the design and development of the digital checklist.

REFERENCES

  • [1].Agarwala Aalok V., Firth Paul G., Albrecht Meredith A., Warren Lisa, and Musch Guido. 2015. An electronic checklist improves transfer and retention of critical information at intraoperative handoff of care. Anesth. Analg 120, 1 (January 2015), 96–104DOI: 10.1213/ANE.0000000000000506 [DOI] [PubMed] [Google Scholar]
  • [2].Bardram Jakob E. and Houben Steven. 2018. Collaborative affordances of medical records. Comput Supported Coop Work 27, 1 (February 2018), 1–36. DOI: 10.1007/s10606-017-9298-5 [DOI] [Google Scholar]
  • [3].Berg Marc and Bowker Geoffrey. 1997. The multiple bodies of the medical record: Sociological Quarterly 38, 3 (June 1997), 513–537. DOI: 10.1111/j.1533-8525.1997.tb00490.x [DOI] [Google Scholar]
  • [4].Bergs Jochen, Hellings Johan, Cleemput Irina, Ö Zurel Vera De Troyer, Monique Van Hiel Jean-Luc Demeere, Claeys Donald, and Vandijck Dominique. 2014. Systematic review and meta-analysis of the effect of the World Health Organization surgical safety checklist on postoperative complications. Br J Surg 101, 3 (February 2014), 150–158. DOI: 10.1002/bjs.9381 [DOI] [PubMed] [Google Scholar]
  • [5].Chen Yunan. 2010. Documenting transitional information in EMR. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘10), 1787–1796. DOI: 10.1145/1753326.1753594 [DOI] [Google Scholar]
  • [6].Chiang Michael F., Sarah Read-Brown Daniel C. Tu, Choi Dongseok, Sanders David S., Hwang Thomas S., Bailey Steven, Karr Daniel J., Cottle Elizabeth, Morrison John C., Wilson David J., and Yackel Thomas R.. 2013. Evaluation of electronic health record implementation in ophthalmology at an academic medical center. Trans Am Ophthalmol Soc 111, (September 2013), 70–92. [PMC free article] [PubMed] [Google Scholar]
  • [7].Christov Stefan C., Conboy Heather M., Famigletti Nancy, Avrunin George S., Clarke Lori A., and Osterweil Leon J.. 2016. Smart checklists to improve healthcare outcomes. In Proceedings of the 2016 International Workshop on Software Engineering in Healthcare Systems (SEHS ‘16), 54–57. DOI: 10.1145/2897683.2897691 [DOI] [Google Scholar]
  • [8].Deering Shad H., Tobler Kyle, and Cypher Rebecca. 2010. Improvement in documentation using an electronic checklist for shoulder dystocia deliveries. Obstet Gynecol 116, 1 (July 2010), 63–66. DOI: 10.1097/AOG.0b013e3181e42220 [DOI] [PubMed] [Google Scholar]
  • [9].Fourcade Aude, Blache Jean-Louis, Grenier Catherine, Bourgain Jean-Louis, and Minvielle Etienne. 2012. Barriers to staff adoption of a surgical safety checklist. BMJ Qual Saf 21, 3 (March 2012), 191–197. DOI: 10.1136/bmjqs-2011-000094 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [10].Grigg Eliot. 2015. Smarter clinical checklists: How to minimize checklist fatigue and maximize clinician performance. Anesth. Analg 121, 2 (August 2015), 570–573. DOI: 10.1213/ANE.0000000000000352 [DOI] [PubMed] [Google Scholar]
  • [11].Hales Brigette, Terblanche Marius, Fowler Robert, and Sibbald William. 2008. Development of medical checklists for improved quality of patient care. Int J Qual Health Care 20, 1 (February 2008), 22–30. DOI: 10.1093/intqhc/mzm062 [DOI] [PubMed] [Google Scholar]
  • [12].Harper RH, O’Hara KP, Sellen AJ, and Duthie DJ. 1997. Toward the paperless hospital? Br J Anaesth 78, 6 (June 1997), 762–767. [DOI] [PubMed] [Google Scholar]
  • [13].Hawley Glenda, Jackson Claire, Hepworth Julie, and Wilkinson Shelley A.. 2014. Sharing of clinical data in a maternity setting: How do paper hand-held records and electronic health records compare for completeness? BMC Health Services Research 14, 1 (December 2014), 650 DOI: 10.1186/s12913-014-0650-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [14].Haynes Alex B., Weiser Thomas G., Berry William R., Lipsitz Stuart R., Breizat Abdel-Hadi S., Dellinger E. Patchen, Herbosa Teodoro, Joseph Sudhir, Kibatala Pascience L., Lapitan Marie Carmela M., Merry Alan F., Moorthy Krishna, Reznick Richard K., Taylor Bryce, Gawande Atul A, and Safe Surgery Saves Lives Study Group. 2009. A surgical safety checklist to reduce morbidity and mortality in a global population. N. Engl. J. Med 360, 5 (January 2009), 491–499. DOI: 10.1056/NEJMsa0810119 [DOI] [PubMed] [Google Scholar]
  • [15].Kelleher Deirdre C., Carter Elizabeth A., Waterhouse Lauren J., Parsons Samantha E., Fritzeen Jennifer L., and Burd Randall S.. 2014. Effect of a checklist on advanced trauma life support task performance during pediatric trauma resuscitation. Acad Emerg Med 21, 10 (October 2014), 1129–1134. DOI: 10.1111/acem.12487 [DOI] [PubMed] [Google Scholar]
  • [16].Klein Alyssa, Kulp Leah, and Sarcevic Aleksandra. 2018. Designing and optimizing digital applications for medical emergencies. In Proceedings of the 2018 CHI Conference Extended Abstracts on Human Factors in Computing Systems, LBW588. DOI: 10.1145/3170427.3188678 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [17].Kramer Heidi S. and Drews Frank A.. Checking the lists: A systematic review of electronic checklist use in health care. Journal of Biomedical Informatics. DOI: 10.1016/j.jbi.2016.09.006 [DOI] [PubMed] [Google Scholar]
  • [18]. Kuhn Harold W. 1955. The Hungarian method for the assignment problem. Naval Res. Logist. Quart. 2, 1–2 (1955), 83–97.
  • [19]. Kulp Leah and Sarcevic Aleksandra. 2017. Design in the wild: Lessons from researcher participation in design of emerging technology. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ‘17), 1802–1808. DOI: 10.1145/3027063.3053170
  • [20]. Kulp Leah and Sarcevic Aleksandra. 2018. Design in the “medical” wild: Challenges of technology deployment. In Proceedings of the 2018 CHI Conference Extended Abstracts on Human Factors in Computing Systems, LBW040. DOI: 10.1145/3170427.3188571
  • [21]. Kulp Leah, Sarcevic Aleksandra, Farneth Richard, Ahmed Omar, Mai Dung, Marsic Ivan, and Burd Randall S. 2017. Exploring design opportunities for a context-adaptive medical checklist through technology probe approach. In Proceedings of the 2017 Conference on Designing Interactive Systems (DIS ‘17), 57–68. DOI: 10.1145/3064663.3064715
  • [22]. Landmark Andreas, Selnes May-Britt, Larsen Elisabeth, Svensli Astrid, Solum Linda, and Brattheim Berit. 2012. The role of electronic checklists - case study on MRI-safety. Stud Health Technol Inform 180 (2012), 736–740.
  • [23]. Lashoher Angela, Schneider Eric B., Juillard Catherine, Stevens Kent, Colantuoni Elizabeth, and Berry William R. 2016. Implementation of the World Health Organization Trauma Care Checklist Program in 11 centers across multiple economic strata: Effect on care process measures. World J Surg (October 2016), 1–9. DOI: 10.1007/s00268-016-3759-8
  • [24]. Menachemi Nir and Collum Taleah H. 2011. Benefits and drawbacks of electronic health record systems. Risk Manag Healthc Policy 4 (May 2011), 47–55. DOI: 10.2147/RMHP.S12985
  • [25]. van Olden Ger D. J., Meeuwis J. Dik, Bolhuis Hugo W., Boxma Han, and Goris R. Jan A. 2004. Clinical impact of advanced trauma life support. Am J Emerg Med 22, 7 (November 2004), 522–525.
  • [26]. Parsons Samantha E., Carter Elizabeth A., Waterhouse Lauren J., Fritzeen Jennifer, Kelleher Deirdre C., O’Connell Karen J., Sarcevic Aleksandra, Baker Kelley M., Nelson Erik, Werner Nicole E., Boehm-Davis Deborah A., and Burd Randall S. 2014. Improving ATLS performance in simulated pediatric trauma resuscitation using a checklist. Ann. Surg 259, 4 (April 2014), 807–813. DOI: 10.1097/SLA.0000000000000259
  • [27]. Pine Kathleen H. and Mazmanian Melissa. 2014. Institutional logics of the EMR and the problem of “Perfect” but inaccurate accounts. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW ‘14), 283–294. DOI: 10.1145/2531602.2531652
  • [28]. Reddy Madhu C., Dourish Paul, and Pratt Wanda. 2001. Coordinating heterogeneous work: Information and representation in medical care. In ECSCW 2001: Proceedings of the Seventh European Conference on Computer Supported Cooperative Work, 16–20 September 2001, Bonn, Germany. Springer Netherlands, Dordrecht, 239–258. DOI: 10.1007/0-306-48019-0_13
  • [29]. Sarcevic Aleksandra, Rosen Brett, Kulp Leah, Marsic Ivan, and Burd Randall. 2016. Design challenges in converting a paper checklist to digital format for dynamic medical settings. In Proceedings of the 10th EAI International Conference on Pervasive Computing Technologies for Healthcare (Pervasive Health 2016), 1–8. DOI: 10.4108/eai.16-5-2016.2263335
  • [30]. Spector Jonathan M., Agrawal Priya, Kodkany Bhala, Lipsitz Stuart, Lashoher Angela, Dziekan Gerald, Bahl Rajiv, Merialdi Mario, Mathai Matthews, Lemer Claire, and Gawande Atul. 2012. Improving quality of care for maternal and newborn health: Prospective pilot study of the WHO Safe Childbirth Checklist Program. PLOS ONE 7, 5 (May 2012), e35151. DOI: 10.1371/journal.pone.0035151
  • [31]. Star Susan Leigh and Strauss Anselm. 1999. Layers of silence, arenas of voice: The ecology of visible and invisible work. Comput. Supported Coop. Work 8, 1–2 (February 1999), 9–30. DOI: 10.1023/A:1008651105359
  • [32]. Subbe Christian P., Kellett John, Barach Paul, Chaloner Catriona, Cleaver Hayley, Cooksley Tim, Korsten Erik, Croke Eilish, Davis Elinor, De Bie Ashley JR, Durham Lesley, Hancock Chris, Hartin Jilian, Savijn Tracy, and Welch John. 2017. Crisis checklists for in-hospital emergencies: expert consensus, simulation testing and recommendations for a template determined by a multi-institutional and multi-disciplinary learning collaborative. BMC Health Services Research 17 (May 2017), 334. DOI: 10.1186/s12913-017-2288-y
  • [33]. Thongprayoon Charat, Harrison Andrew M., O’Horo John C., Sevilla Berrios Ronaldo A., Pickering Brian W., and Herasevich Vitaly. 2016. The effect of an electronic checklist on critical care provider workload, errors, and performance. J Intensive Care Med 31, 3 (March 2016), 205–212. DOI: 10.1177/0885066614558015
  • [34]. Verdaasdonk Emiel G., Stassen Laurents P., Widhiasmara Prama P., and Dankelman Jenny. 2009. Requirements for the design and implementation of checklists for surgical processes. Surgical Endoscopy 23, 4 (April 2009), 715–726.
  • [35]. Wu Leslie, Cirimele Jesse, Leach Kristen, Card Stuart, Chu Larry, Harrison T. Kyle, and Klemmer Scott R. 2014. Supporting crisis response with dynamic procedure aids. In Proceedings of the 2014 Conference on Designing Interactive Systems (DIS ‘14), 315–324. DOI: 10.1145/2598510.2598565
  • [36]. Park Sun Young, Chen Yunan, and Rudkin Scott. 2015. Technological and organizational adaptation of EMR implementation in an emergency department. ACM Transactions on Computer-Human Interaction 22 (February 2015), 1–24. DOI: 10.1145/2656213
  • [37]. Zargaran Eiman, Schuurman Nadine, Nicol Andrew J., Matzopoulos Richard, Cinnamon Jonathan, Taulu Tracey, Ricker Britta, Garbutt Brown David Ross, Navsaria Pradeep, and Hameed S. Morad. 2014. The electronic trauma health record: Design and usability of a novel tablet-based tool for trauma care and injury surveillance in low resource settings. Journal of the American College of Surgeons 218, 1 (January 2014), 41–50. DOI: 10.1016/j.jamcollsurg.2013.10.001
  • [38]. Zhang Zhan, Sarcevic Aleksandra, Yala Maria, and Burd Randall S. 2014. Informing digital cognitive aids design for emergency medical work by understanding paper checklist use. In Proceedings of the 18th International Conference on Supporting Group Work (GROUP ‘14), 204–214. DOI: 10.1145/2660398.2660423
  • [39]. Zhou Xiaomu, Ackerman Mark S., and Zheng Kai. 2009. I just don’t know why it’s gone: Maintaining informal information use in inpatient care. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘09), 2061–2070. DOI: 10.1145/1518701.1519014
