Journal of the American Medical Informatics Association (JAMIA). 2022 Oct 27;30(1):8–15. doi: 10.1093/jamia/ocac201

Team is brain: leveraging EHR audit log data for new insights into acute care processes

Christian Rose 1, Robert Thombley 2, Morteza Noshad 3, Yun Lu 4, Heather A Clancy 5, David Schlessinger 6, Ron C Li 7,8, Vincent X Liu 9, Jonathan H Chen 10,11,12, Julia Adler-Milstein 13,
PMCID: PMC9748597  PMID: 36303451

Abstract

Objective

To determine whether novel measures of contextual factors from multi-site electronic health record (EHR) audit log data can explain variation in clinical process outcomes.

Materials and Methods

We selected one widely-used process outcome: emergency department (ED)-based team time to deliver tissue plasminogen activator (tPA) to patients with acute ischemic stroke (AIS). We evaluated Epic audit log data (which track EHR user interactions) for 3052 AIS patients aged 18 or older who received tPA after presenting to an ED at three Northern California health systems (Stanford Health Care, UCSF Health, and Kaiser Permanente Northern California). Our primary outcome was door-to-needle time (DNT), and we assessed bivariate and multivariate relationships with six audit log-derived measures of treatment team busyness and prior team experience.

Results

Prior team experience was consistently associated with shorter DNT; teams with greater prior experience specifically on AIS cases had shorter DNT (minutes) across all sites (Site 1: −94.73, 95% CI: −129.53 to −59.92; Site 2: −80.93, 95% CI: −130.43 to −31.43; Site 3: −42.95, 95% CI: −62.73 to −23.17). Teams with greater prior experience across all types of cases also had shorter DNT at two sites (Site 1: −6.96, 95% CI: −14.56 to 0.65; Site 2: −19.16, 95% CI: −36.15 to −2.16; Site 3: −11.07, 95% CI: −17.39 to −4.74). Team busyness was not consistently associated with DNT across study sites.

Conclusions

EHR audit log data offer a novel, scalable approach to measuring key contextual factors relevant to clinical process outcomes across multiple sites. Audit log-based measures of team experience were associated with better process outcomes for AIS care, suggesting opportunities to study underlying mechanisms and improve care through deliberate training, team-building, and scheduling to maximize team experience.

Keywords: audit log, stroke, teams, team experience, busyness

BACKGROUND AND SIGNIFICANCE

The setting, or context, in which care is delivered is known to contribute to variation in patient outcomes.1–3 The range of contextual factors is vast, including, for example, the busyness of the treating unit, the level of team training or experience, and the number of provider handoffs and team interactions.4–7 Relative to examination of how clinical factors (eg, comorbidities) contribute to patient outcomes, contextual factors have been understudied. This is due, in part, to the difficulty studying these factors, particularly at scale, as they often require manual measurement via direct observation or surveys that rely on subjective recollection. A byproduct of the adoption of electronic health records (EHRs) is the capture of audit log data (also known as event or access log data) that records user actions performed in the EHR.8 These data have been used to answer different types of research questions, with a predominant focus on measuring EHR-based work of clinicians and teams.9–11 However, EHR audit log data are also well-suited to create measures that describe the context surrounding clinical care.12 Yet, relative to the richness of audit log data to characterize behaviors—particularly difficult-to-measure team behaviors—few studies have evaluated whether they explain variability in patient outcomes.13

Clinical process outcomes for patients with time-sensitive conditions that require team-based care are particularly well-suited to audit log research given the complex, stepwise interventions and interactions across various roles that are involved in the provision of care. While several process outcomes meet these criteria, door-to-needle time (DNT) for those experiencing an acute ischemic stroke (AIS)—the time from patient arrival to the emergency department (ED) until delivery of tissue plasminogen activator (tPA, a medication used to break up a blood clot and restore blood flow to the brain but which must be used within 4.5 h of symptom onset to limit dangerous side-effects)—is one example. DNT is also a mature and standardized measure, offering a compelling basis for exploration of the explanatory potential of audit log-derived measures.14,15 Team-based measures are likely associated with DNT because timely treatment does not rely solely on a single doctor’s evaluation, assessment, and orders. Critical steps include triage by nurses, who must then obtain blood samples for laboratory studies; ECGs and transport of the patient through the ED and to the CT scanner by technicians; and retrieval and preparation of the medication by a pharmacist before tPA is administered to the patient. As such, contextual factors that affect the team are likely to contribute to DNT. For example, prior literature suggests that teams that have previously worked together perform better, but this has not been studied in the specific context of DNT.16–22 When caring for a patient who may be having a stroke, team members with shared experience on prior stroke cases may have learned behaviors or implicit knowledge that allow for more streamlined care. On the other hand, if those members are busier at the time of presentation, they may face limitations in order entry, retrieval of blood samples, or the ability to transport a patient to the CT scanner for imaging, which may lead to delays in care. Audit log data offer a feasible way to measure such team-based interactions and prior experience that may explain variation in DNT. Few prior studies include multi-institutional audit log data, raising questions about the generalizability of both the data themselves and the relationships they are used to detect.

OBJECTIVE

To determine whether team-level contextual factors related to busyness and team experience, measured using EHR audit log data, are associated with DNT for AIS patients. Assessing these relationships with EHR audit log data can reveal opportunities for intervention, such as staffing or scheduling approaches that promote team stability or educational initiatives to enhance teamwork, that may result in improved patient outcomes. Additional objectives include: (1) demonstrating the types of measures that can be constructed from EHR audit log data and how they can be tied to patient outcomes, an approach that may be applied to other conditions, and (2) assessing whether the measures and analyses can be reproduced across multiple institutions.

MATERIALS AND METHODS

Setting and data

We conducted our study at three large health systems in Northern California. Two health systems are major academic training institutions in the San Francisco Bay Area (Stanford Health Care and UCSF Health), while the third (Kaiser Permanente Northern California, KPNC) is a community-based integrated healthcare delivery system across Northern California with multiple residency training programs. All three health systems use an Epic EHR. For our study, we focused on AIS cases presenting to the adult emergency departments (EDs) at Stanford Health Care and UCSF Health, as well as all 21 KPNC sites. The 24 EDs included in the study are certified stroke centers as defined by the Joint Commission.23

Each site engaged in a parallel set of activities that involved defining a cohort of patients treated for AIS in the ED setting. Time periods varied somewhat, with start dates determined by Epic implementation at two sites (Stanford: January 1, 2010 through May 31, 2017; UCSF: June 1, 2012 through December 31, 2019) while the third site selected a date after Epic implementation when stroke care processes were more stable (KPNC: January 1, 2015 through May 31, 2019). For selected time periods, each site accessed clinical and audit log data for those patients to support study measures, and built an associated analytic dataset to evaluate relationships with DNT. At Stanford, data were accessed via a nightly exported copy from the Clarity Console managed by Stanford’s Research Repository (STARR) tools.24 At UCSF Health and KPNC, data were extracted directly from the Epic Clarity data warehouse. Each site secured their own IRB approval with the study considered expedited human subjects research.

Patient cohort and stroke care in ED

Each institution identified AIS patients treated with tPA. Inclusion criteria were age 18 years or older and receipt of tPA within 4.5 h of ED presentation (4.5 h from symptom onset being the standard window for safe tPA administration described by the American Heart Association and American Stroke Association).25 We characterized the cohorts at each site based on age, sex, race, and ethnicity.

For AIS patients presenting to the ED, the typical care process begins with presentation to a medical provider (commonly a triage nurse) who quickly screens the patient for possible stroke symptoms. If the patient screens positive, a “code stroke” is initiated, resulting in a cascade of actions. First, the on-call ED, neurology, and radiology physicians are alerted to the “code stroke” and the patient location. The ED physician then evaluates the patient, monitors symptoms, calculates an NIH Stroke Scale score, and determines whether the patient meets criteria for delivery of tPA (eg, timing of symptoms, current anticoagulation status, history of head or gastrointestinal bleeding).25 The radiology technician is alerted, and a CT scanner is prepared for imminent patient arrival. Simultaneously, nurses obtain venous access for diagnostic and therapeutic interventions. While the CT scan is being completed, blood samples are sent to the lab, where they are run by a lab technician. Immediately upon completion of the CT scan, the emergency, neurology, and radiology teams review the images to determine whether the patient is an appropriate candidate for tPA and discuss the risks and benefits of treatment with the patient or their family. If all agree with treatment, tPA is ordered by a physician and then retrieved and prepared for administration by the clinical pharmacist. All three sites followed this generalized care process.

Measures

Our primary outcome—DNT—was measured as the difference in minutes between arrival to the ED and tPA administration. ED arrival is recorded in the Admit, Discharge, Transfer (ADT) data model and tPA administration time is recorded in the Medication Administration Record (MAR). We used the patient’s earliest administration of tPA and relied on NDC codes to identify tPA. We selected two contextual factor domains of team performance—busyness and experience—based on a literature review and input from an expert panel with subject area knowledge in both AIS treatment and audit log data. (See Supplementary Appendix S1 for list of members.)6,14,16–18,26,27 We focused on team-based measures because the management of AIS involves tight coordination across multiple providers, in parallel and sequentially, to deliver diagnostic and therapeutic interventions. (See Supplementary Appendix S2 for additional rationale for measure inclusion and exclusion decisions. Supplementary Figure S1 has additional detail on measures below.)
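To make the outcome construction concrete, the minimal Python/pandas sketch below computes DNT from an ADT arrival table and a MAR table. The table and column names, and the placeholder NDC code set, are illustrative assumptions rather than the sites' actual Clarity schema.

    import pandas as pd

    # Placeholder NDC codes for tPA (alteplase); real codes would come from each site's formulary.
    TPA_NDC_CODES = {"PLACEHOLDER-NDC-1", "PLACEHOLDER-NDC-2"}

    def compute_dnt(adt: pd.DataFrame, mar: pd.DataFrame) -> pd.DataFrame:
        """Door-to-needle time in minutes, one row per tPA-treated encounter.

        adt: columns [encounter_id, ed_arrival_time] (datetime)
        mar: columns [encounter_id, ndc_code, admin_time] (datetime)
        """
        tpa = mar[mar["ndc_code"].isin(TPA_NDC_CODES)]
        # Earliest tPA administration per encounter
        first_tpa = tpa.groupby("encounter_id", as_index=False)["admin_time"].min()
        merged = adt.merge(first_tpa, on="encounter_id", how="inner")
        merged["dnt_minutes"] = (
            merged["admin_time"] - merged["ed_arrival_time"]
        ).dt.total_seconds() / 60
        return merged[["encounter_id", "dnt_minutes"]]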

Defining the treatment team

Though the importance of team management of AIS is well known, there is no standard—conceptually or in practice (ie, a designated field in the EHR)—by which to define the treatment team for an AIS patient.28,29 We therefore developed our own definitions, accommodating differences across the sites in role-based labels. At UCSF and KPNC, role-based labels (eg, Nurse, Technician) are captured for individual EHR users. At these two sites, the treatment team for each patient was defined as any user with a role of physician, resident, nurse, nurse practitioner, pharmacist, or technician who had at least one audit log event for the index patient within the treatment window (defined below). The Stanford data did not maintain role-based labels; we therefore defined the treatment team as any user who had at least one clinically relevant audit log event (eg, placed an order, opened a note, evaluated imaging studies, or viewed results) for the index patient within the treatment window. Further, a small number of Stanford patients were given a temporary ID while registration was completed, which could not be merged with their permanent ID and therefore limited the ability to identify the full team. Such cases were removed.

Treatment window

Our definition of the treatment team depended on defining the window of activity in which EHR actions were related to tPA administration and not to other care. We chose a window of 120 min: 60 min preceding and 60 min following the administration of tPA. This window balances sensitivity (capturing those involved from stroke diagnosis through tPA administration, a process with a goal of occurring in <60 min, as well as any associated after-the-fact documentation) with specificity (excluding activity that is not part of this acute process, eg, swallow studies or physical therapy and rehabilitation).
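As an illustration of how the treatment team and treatment window can be operationalized together, the sketch below selects users with at least one audit log event for the index patient within 60 min before or after tPA administration. The audit table columns and role labels are illustrative assumptions; the role filter reflects the UCSF/KPNC approach, and a site without role labels would instead filter on clinically relevant event types.

    import pandas as pd

    TEAM_ROLES = {"Physician", "Resident", "Nurse", "Nurse Practitioner", "Pharmacist", "Technician"}
    HALF_WINDOW = pd.Timedelta(minutes=60)  # 60 min before and after tPA administration

    def treatment_team(audit: pd.DataFrame, patient_id, tpa_time) -> set:
        """Users with at least one audit log event for the index patient in the treatment window.

        audit: columns [user_id, patient_id, role, event_time]
        """
        in_window = audit["event_time"].between(tpa_time - HALF_WINDOW, tpa_time + HALF_WINDOW)
        events = audit[(audit["patient_id"] == patient_id) & in_window]
        events = events[events["role"].isin(TEAM_ROLES)]  # role-based filter (UCSF/KPNC approach)
        return set(events["user_id"])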

Busyness

We created three measures that capture different dimensions of team busyness: Movement, Charting, and Switching. Our Movement measure was defined as the total number of unique workstations used by members of the treatment team during the treatment window, divided by the number of unique team members.30,31 More workstations visited serves as a proxy for the amount of movement within the clinical workspace (ie, between rooms or to the differing treatment areas) and thus might indicate busyness as it relates to physical movement. Our Charting measure does not relate to physical movement and instead captures the number of unique, nonindex patient charts viewed by members of the treatment team during the treatment window, divided by the number of unique team members. This serves as a proxy for busyness associated with overall patient workload. Finally, in the ED, busyness can manifest as more frequent interruptions and task switching.32–34 Our Switching measure serves as a proxy for this by counting the total number of patient-chart switches by team members during the treatment window, divided by the number of unique team members. Chart switches were counted every time a treatment team member had sequential events logged for different patients during the treatment window.
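A minimal sketch of the three busyness measures is shown below. It assumes team_events holds every audit log event generated by treatment team members during the treatment window (for any patient), with illustrative columns user_id, patient_id, workstation_id, and event_time, and it takes one plausible reading of the definitions (unique workstations and charts counted across the team as a whole, then divided by team size).

    def busyness_measures(team_events, index_patient_id) -> dict:
        """Movement, Charting, and Switching, each normalized by team size."""
        n_team = team_events["user_id"].nunique()
        # Movement: unique workstations used by the team, per team member
        movement = team_events["workstation_id"].nunique() / n_team
        # Charting: unique nonindex patient charts viewed by the team, per team member
        nonindex = team_events[team_events["patient_id"] != index_patient_id]
        charting = nonindex["patient_id"].nunique() / n_team
        # Switching: sequential events by the same user on different patients, per team member
        ordered = team_events.sort_values(["user_id", "event_time"])
        same_user = ordered["user_id"].eq(ordered["user_id"].shift())
        new_patient = ordered["patient_id"].ne(ordered["patient_id"].shift())
        switching = int((same_user & new_patient).sum()) / n_team
        return {"movement": movement, "charting": charting, "switching": switching}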

Team experience

Guided by the literature, we created three measures of team experience around the concepts of Recency, Teamwork, and tPA-specific Teamwork.16,17,35,36 Prior direct experience treating AIS with tPA was thought to relate to the ability of one or more individuals on the team to quickly recognize stroke at triage and help the team respond quickly. We measured Recency as the minimum number of days since any treatment team member was last identified as being on the treatment team of a prior tPA-treated AIS case. Across the entire team, team members who have worked together more often in the past may be able to care for AIS patients faster through improved communication, delegation, or synergistic management of tasks. We therefore created two measures of teamwork. The first, Teamwork: Any, was defined as the number of shared encounters of any type, from the previous 6 months excluding the index encounter, between all pairs of treatment team members averaged across the team (eg, if Case 1 team members were A, B, and C and Case 2 team members were B, C, and D, the average experience for Case 2 would be (1 for pair B-C + 0 for B-D + 0 for C-D) / 3 pairs = 0.33). Given the specific roles and responsibilities in stroke care, we also measured Teamwork: tPA as a similar measure, but restricted the set of shared previous encounters to only tPA-treated AIS encounters. We limited the lookback period to 6 months to account for potential bias associated with higher numbers of shared encounters for all providers as time progressed, which in turn restricted our cohort to encounters occurring after the first six months of available study data.
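The pairwise teamwork calculation can be sketched as below, where team is the set of user IDs on the index case and prior_encounters is an assumed list of user-ID sets, one per encounter in the 6-month lookback (restricted to tPA-treated AIS encounters for Teamwork: tPA). With the example above (team B, C, D and one prior encounter involving A, B, C) it returns 1/3, or approximately 0.33.

    from itertools import combinations

    def teamwork(team: set, prior_encounters: list) -> float:
        """Average number of prior shared encounters across all pairs of current team members."""
        pairs = list(combinations(sorted(team), 2))
        if not pairs:
            return 0.0
        shared = sum(
            sum(1 for enc in prior_encounters if a in enc and b in enc)
            for a, b in pairs
        )
        return shared / len(pairs)

    # Example from the text: the Case 2 team is {B, C, D}; the Case 1 team was {A, B, C}
    assert round(teamwork({"B", "C", "D"}, [{"A", "B", "C"}]), 2) == 0.33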

Analytic approach

We first produced descriptive statistics to characterize the cohort at each site and to compare the six contextual factors at each site. Specifically, for the contextual factors, we assessed the mean and standard deviation across each site’s cohort of AIS encounters. Next, we created a bivariate model to assess the association between each contextual factor and DNT at each site. We also ran multivariate models to control for the impact of patient demographics (including age, sex, ethnicity, race) and calendar year. To support interpretation of results, particularly for measures such as team experience for which the unit of measure is not intuitive, we present effect-size estimates based on a one standard deviation increase in the contextual factor. Where we observed consistent associations between a contextual factor and DNT, we reran regressions with the contextual factor measured in quartiles. These models reveal whether the associations reflect a dose-response like relationship or instead a relationship driven by one part of the distribution (eg, only very experienced teams have shorter DNT).
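A sketch of this analysis using statsmodels is shown below, with the coefficient rescaled to a one standard deviation increase in the contextual factor; the data frame and column names are illustrative assumptions rather than the study's actual analytic code.

    import statsmodels.formula.api as smf

    def fit_models(df, factor: str):
        """Bivariate and covariate-adjusted OLS models of DNT on one contextual factor.

        df: one row per tPA-treated AIS encounter at a single site, with columns
        dnt_minutes, age, sex, race, ethnicity, year, and the contextual factor.
        """
        bivariate = smf.ols(f"dnt_minutes ~ {factor}", data=df).fit()
        adjusted = smf.ols(
            f"dnt_minutes ~ {factor} + age + C(sex) + C(race) + C(ethnicity) + C(year)",
            data=df,
        ).fit()
        # Effect of a one standard deviation increase in the contextual factor
        effect_per_sd = bivariate.params[factor] * df[factor].std()
        return bivariate, adjusted, effect_per_sd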

RESULTS

Descriptive statistics

Table 1 summarizes the cohort demographics and other key features by site, including our outcome (DNT). In general, the cohorts were similar. The average number of tPA-treated AIS cases per year per ED over the study period ranged from 26.6 to 31.9. Average DNT ranged from 45.0 to 57.5 min. Average treatment team size ranged from 17.5 to 21.9.

Table 1.

Characteristics of acute ischemic stroke patients, by site

Site 1 Site 2 Site 3
tPA cases per year per ED 30.7 31.9 26.6
Door-to-needle time (min), mean (SD) 56.5 (40) 57.5 (37) 45.0 (25)
Treatment team size (n), mean (SD) 21.3 (5.2) 21.9 (6.9) 17.5 (3.6)
Age (years), mean (SD) 71 (15.2) 78 (14.4) 73 (14.5)
Sex (% female) 52.6% 56.0% 50.4%
Ethnicity
 Hispanic or Latino 10.5% 8.7% 9.2%
 Not Hispanic or Latino 89.0% 88.8% 90.8%
 Other/unknown/declined 0.4% 2.5% 0%
Race
 African American 5.3% 7.5% 11.4%
 American Indian/Alaska Native 0.4% 0.4% 0.4%
 Asian 11.4% 33.6% 14.6%
 Native Hawaiian or other Pacific Islander 1.3% 3.3% 1.3%
 White 68.0% 37.3% 56.9%
 Other/unknown 13.6% 17.8% 15.3%

ED: emergency department; SD: standard deviation; tPA: tissue plasminogen activator.

Across sites, approximately half of patients were female, and mean age ranged from 71 to 78 years. Sites had some variability in race and ethnicity, with the most notable difference being that Site 2 had the highest percentage of Asian patients (33.6%) compared to Site 1 (11.4%) and Site 3 (14.6%), as well as the lowest percentage of White patients (Site 2: 37.3%; Site 1: 68.0%; Site 3: 56.9%).

Table 2 summarizes the team busyness and experience measures. Sites had similar levels of movement as measured by average number of unique workstations during the treatment window (Site 1: Mean = 1.39 and SD = 0.46; Site 2: 2.51 (0.41); Site 3: 2.11 (0.36)). Sites also had similar levels of nonindex patient charts viewed (Site 1: Mean = 12.52 and SD = 17.62; Site 2: 10.18 (2.95); Site 3: 13.39 (5.34)). The level of chart switching was somewhat more varied, with Sites 2 and 3 showing similar, higher levels than Site 1 (Site 1: Mean = 18.80 and SD = 20.50; Site 2: 26.71 (8.52); Site 3: 26.97 (9.54)).

Table 2.

Summary statistics of contextual factors measured during the acute ischemic stroke treatment window

Measure | Definition (for all members of the patient's treatment team during the treatment window, mean) | Site 1 mean (SD) | Site 2 mean (SD) | Site 3 mean (SD)

Busyness
 Movement | Number of unique workstations used | 1.39 (0.46) | 2.51 (0.41) | 2.11 (0.36)
 Charting | Number of nonindex patient charts viewed | 12.52 (17.62) | 10.18 (2.95) | 13.39 (5.34)
 Switching | Number of chart switches | 18.80 (20.50) | 26.71 (8.52) | 26.97 (9.54)
Experience
 Recency | Length of time in days since last tPA case | 6.40 (12.40) | 13.23 (14.06) | 6.59 (11.56)
 Teamwork: any | Number of shared encounters (any type) within the prior 6 months | 0.87 (0.51) | 0.51 (0.29) | 0.73 (0.39)
 Teamwork: tPA | Number of shared tPA encounters within the prior 6 months | 0.60 (0.42) | 0.12 (0.10) | 0.17 (0.12)

SD: standard deviation; tPA: tissue plasminogen activator.

With regard to experience measures, Site 2 had the longest average Recency since any member of the team had seen a prior tPA case while Sites 1 and 3 were similar (Site 1: Mean = 6.40 and SD = 12.40; Site 2: 13.23 (14.06); Site 3: 6.59 (11.56)). Site 1 had the highest Teamwork: Any shared experience (Mean = 0.87 and SD = 0.51), followed by Site 3 (0.73 (0.39)) and Site 2 (0.51 (0.29)). This pattern was also reflected in the Teamwork: tPA experience measure (Site 1: Mean = 0.60 and SD = 0.42; Site 2: 0.12 (0.10); Site 3: 0.17 (0.12)).

Bivariate associations between contextual factors and DNT

No site had a statistically significant association between Movement and DNT (Table 3). Site 1 was the only institution for which Charting was associated with DNT, such that more nonindex patient charts viewed was associated with slower DNT (0.30 (P=.03)). A one standard deviation increase (17.62 more charts viewed, as reported for Site 1 in Table 2) was associated with more than 5 min longer DNT. Switching was also significantly associated with DNT only at Site 1 (−0.37 (P=.04)) but reflected an inverse relationship. A one standard deviation increase in chart switching (20.50 chart switches) was associated with more than 7.5 min faster DNT.
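For reference, these per standard deviation effects follow directly from the Site 1 bivariate coefficients and the Site 1 standard deviations in Table 2:

    \Delta\mathrm{DNT}_{\text{Charting}} \approx 0.30 \times 17.62 \approx 5.3 \text{ min}, \qquad \Delta\mathrm{DNT}_{\text{Switching}} \approx -0.37 \times 20.50 \approx -7.6 \text{ min}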

Table 3.

Bivariate associations between contextual factors and door-to-needle time for tPA administration

Measure | Site 1: coefficient (95% CI), P value | Site 2: coefficient (95% CI), P value | Site 3: coefficient (95% CI), P value

Busyness
 Movement | −4.52 (−20.02, 10.98), P = .56 | 2.37 (−9.12, 13.86), P = .69 | −5.30 (−11.96, 1.36), P = .12
 Charting | 0.30 (0.02, 0.58), P = .03 | −0.66 (−2.25, 0.92), P = .41 | −0.23 (−0.68, 0.22), P = .32
 Switching | −0.37 (−0.74, 0.00), P = .04 | −0.24 (−0.79, 0.31), P = .40 | −0.17 (−0.43, 0.08), P = .18
Experience
 Recency | 0.09 (−0.24, 0.43), P = .58 | 0.27 (−0.07, 0.61), P = .11 | 0.32 (0.11, 0.52), P = .002
 Teamwork: any | −6.96 (−14.56, 0.65), P = .07 | −19.16 (−36.15, −2.16), P = .03 | −11.07 (−17.39, −4.74), P < .001
 Teamwork: tPA | −94.73 (−129.53, −59.92), P < .001 | −80.93 (−130.43, −31.43), P = .001 | −42.95 (−62.73, −23.17), P < .001

Note: Bolded results are statistically significant at P < .05 level.

CI: confidence interval; tPA: tissue plasminogen activator.

Site 3 was the only site for which Recency had a significant association with DNT (0.32, P = .002), in the predicted direction: teams with a one standard deviation longer time since the last tPA-treated AIS case (11.56 days) had DNT that was 3.7 min slower. Teamwork: Any was negatively associated with DNT at all three sites and statistically significant at Sites 2 and 3 (Site 1: −6.96, P = .07; Site 2: −19.16, P = .03; Site 3: −11.07, P < .001); these coefficients correspond to DNT that was 19.16 min and 11.07 min faster per one-unit increase in average shared encounters at Sites 2 and 3, respectively. Finally, Teamwork: tPA was significantly and negatively associated with DNT at all three sites (Site 1: −94.73, P < .001; Site 2: −80.93, P = .001; Site 3: −42.95, P < .001). A one standard deviation increase in tPA-specific team experience was associated with DNT that was 39.8, 8.1, and 5.2 min faster at Sites 1, 2, and 3, respectively.

Multivariate and quartile bivariate associations

In our multivariate analysis (Supplementary Table S3), the busyness measures were no longer significant at Site 1. Most experience measures held consistent, with the exception of Site 2, where the Teamwork: Any measure was no longer statistically significant (−13.94, 95% CI: −33.01 to 5.13). Full model results are available in Supplementary Tables S4–S9. Across sites, quartile results for Teamwork: Any and Teamwork: tPA showed the strongest associations in the top quartile, with the magnitude of association increasing with each quartile (Supplementary Table S10).

DISCUSSION

This study sought to reveal new insights about a clinically important outcome (DNT) by measuring the contribution of team-level contextual factors. It also sought to demonstrate the ability of audit log data to measure team-level context across multiple sites. We found that, for all three sites, teams with more shared experience had faster DNT—particularly if they worked together on prior tPA-treated AIS cases. This important new finding suggests that team experience is a contextual factor that may contribute to better outcomes in the context of AIS care and merits further assessment. Secondarily, we identified six contextual factors that were able to be reproduced across the three study sites and the associated summary statistics reflect notable similarity. This supports ongoing work to leverage audit log data to generate new evidence using multi-site data on factors that are otherwise difficult to measure but likely affect outcomes for AIS and other conditions.

While our primary finding is novel in the context of audit log derived measures, it is consistent with prior literature that has documented a relationship between team experience and performance. Specifically, prior work has shown that when care requires highly coordinated efforts by multiple team members, it is affected by team features including communication or experience.16,18,19,37 In the case of AIS, increased experience with team members may result in improved verbal and nonverbal communication regarding immediate next steps and needs, greater trust that critical tasks will be completed by the responsible member, and smoother transitions between tasks. For example, the comprehensive NIH Stroke Scale evaluation (a 13-question examination tool for stroke severity completed at the bedside) is used to determine if a patient will receive tPA when above a certain threshold. Given the complexity and time required to complete it (averaging between 3 and 9 min),38,39 clinicians on more experienced teams may be more likely to trust the evaluation of the initial assessor as compared to less experienced teams where they may choose to repeat the examination before agreeing to definitive care. Better understanding how team experience contributes to DNT offers a promising new way to continue to improve AIS care, perhaps leading to interventions such as training or simulation sessions to create more shared experience across team members.

While we expected busyness to be associated with DNT,40 we did not observe consistent relationships across study sites. There could be several reasons why. First, our measures only examined certain dimensions of busyness (as determined by physical movement and EHR work). It may be that the type of busyness that could impact timeliness of tPA (eg, performing critical interventions at the bedside or speaking with consultants) is not well captured by these dimensions. Second, it may be that DNT is not very sensitive to busyness, perhaps because AIS is widely recognized as a time-sensitive condition, with associated protocols like the Code Stroke (that was implemented at all study sites) that require all members of the stroke team to “drop everything” and begin AIS care. In fact, such Code Stroke policies are meant to address the concern that the busyness of providers will affect their care and to provide systems which might address this factor.41 However, given the relationships observed at Site 1, future work could further explore more nuanced measures. Further, given that the busyness measures we developed can readily be applied to other conditions, it may be interesting to explore whether the approach to prioritizing AIS care has negative spillover effects on outcomes for other patients who are in the ED when Code Strokes occur. More broadly, all our contextual factor measures could easily be adapted to other cohorts to assess whether they contribute to outcomes. Examining similar time-sensitive emergent conditions—such as acute myocardial infarction or sepsis that also require the coordination of multiple team members across several specialties—would be a logical next step. Our contextual factors also lend themselves to replication using audit log data from other EHRs as the primary fields we relied upon—user ID, medical record ID, encounter ID, and time/date stamps—are not specialized concepts. However, vendors do vary in the breadth of events that are logged, which could impact the operationalization of core measure components, such as who gets assigned to the treatment team.

Limitations and related considerations

While we were able to successfully replicate six contextual factor measures across our study sites, we learned several lessons along the way that can inform future single- and multi-site audit log studies. First, a key challenge was identifying the treatment team. No definition currently exists, and there was no standard way that team members were captured in the data. In fact, as noted above, despite all sites using an Epic EHR, they did not all identify team member roles in the same way. UCSF and KPNC utilized role labels matched to providers, while Stanford did not have consistent role labeling. At that site, we therefore had to define the team based on individuals who took specific clinical actions, captured in the audit log, that would only be performed by people in a given clinical role. While this may have added variability to our results, summary statistics for contextual factors and effect magnitudes were similar across sites, suggesting that these different approaches did not materially impact the resulting measures. Nonetheless, future work using team-level measures would be substantially easier and more reliable if there were consistent roles assigned to individuals and, ideally, an approach to define the care team for any given encounter. Further, the development of a common data model (eg, OMOP) for audit log data would facilitate multi-site studies such as ours.

Similarly, no definition exists to determine the treatment window for AIS cases. While the time of tPA administration could be used to close the treatment window, not all actions are recorded in the EHR at the time they are completed. In particular, some are documented in the EHR after they have occurred—a process known colloquially as “back charting”. We therefore extended our window 60 min beyond tPA administration to capture team members who waited (or were delayed in) reviewing or documenting in the chart. This was particularly important because nurses differ from technicians both in when documentation occurs relative to care delivery and in the share of their actions that are automatically recorded versus manually entered (eg, recording the collection of blood samples versus completion of an ECG). However, decisions about time cutoffs could be made in many different ways, and selecting a window that is either too short or too long introduces noise or bias.

Relatedly, because work in the EHR does not perfectly mirror real-world care, the contextual factors measured from audit log data can be noisy despite the granularity of these data. Some care is unaccounted for (eg, talking with a patient or their family), while some recorded activity reflects unintended work (eg, opening the wrong chart and then closing it). Furthermore, these effects can be exacerbated by institution-specific differences in how actions are recorded in the EHR (eg, computers on wheels that may be moved between rooms versus stationary computers installed at workstations) or how roles are divided across team members (eg, some departments utilize a “charting nurse” to take notes during the primary evaluation). These site-dependent differences make it difficult to standardize across sites when there is no “right” or accepted definition for who is part of the team or which actions represent the care processes under study.

To overcome these limitations to the best of our ability, we used a variety of strategies. We tested alternative definitions of our contextual factor measures. For example, we evaluated our six measures using a more constrained definition of the treatment team (just physicians and nurses) and found that results were unchanged. Throughout the project we also engaged ED and AIS experts to review the measures and summary statistics to ensure they had face validity. Finally, we compared summary statistics and distributions of key measures across sites, which led to identifying and addressing differences in audit log content (eg, how John/Jane Doe accounts are handled). An additional limitation of our study is the focus on a single process outcome measure, which we selected because it is widely accepted and its measurement is standardized. However, our approach could also offer insight into other meaningful measures, such as AIS misdiagnosis, once such measures are mature.

CONCLUSION

In this first-of-its-kind study, we demonstrated that it is possible to extract key contextual factors relevant to clinical process outcomes from EHR audit log data across multiple sites. Despite audit log differences at each institution using the same EHR, we were able to produce consistent measures. This made it possible to compare results across sites, lending greater external validity to our findings. More generally, our approach offers a large-scale, systematic way to study teamwork and similar contextual factors that influence key care processes but are impractical to study at scale using other methods. Most importantly, we identify a promising avenue for future research—focused on team experience—to improve a significant clinical outcome for AIS patients.

FUNDING

This research was supported by The Gordon and Betty Moore Foundation. JHC was supported in part by the NIH/National Library of Medicine via Award R56LM013365, the Stanford Artificial Intelligence in Medicine and Imaging—Human-Centered Artificial Intelligence (AIMI-HAI) Partnership Grant, the Stanford Aging and Ethnogeriatrics (SAGE) Research Center under NIH/NIA grant P30AG059307, and Google Inc. VXL was supported in part by NIH R35GM128672.

AUTHOR CONTRIBUTIONS

All authors contributed to the writing of the manuscript. JA-M is responsible for the accuracy of the final contents of the manuscript. This research used data or services provided by STARR, “STAnford medicine Research data Repository,” a clinical data warehouse containing live Epic data from Stanford Health Care (SHC), the University Healthcare Alliance (UHA) and Packard Children’s Health Alliance (PCHA) clinics and other auxiliary data from Hospital applications such as radiology PACS. The STARR platform is developed and operated by Stanford Medicine Research IT team and is made possible by Stanford School of Medicine Research Office.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

Conflict of interest statement

JHC is the co-founder of Reaction Explorer LLC that develops and licenses organic chemistry education software. He receives paid consulting fees from Sutton Pierce and Younker Hyde MacFarlane PLLC.


Contributor Information

Christian Rose, Department of Emergency Medicine, Stanford University School of Medicine, Stanford, California, USA.

Robert Thombley, Center for Clinical Informatics and Improvement Research, Department of Medicine, University of California, San Francisco, San Francisco, California, USA.

Morteza Noshad, Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, USA.

Yun Lu, Kaiser Permanente Division of Research, Oakland, California, USA.

Heather A Clancy, Kaiser Permanente Division of Research, Oakland, California, USA.

David Schlessinger, Kaiser Permanente Division of Research, Oakland, California, USA.

Ron C Li, Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, USA; Division of Hospital Medicine, Stanford University School of Medicine, Stanford, California, USA.

Vincent X Liu, Kaiser Permanente Division of Research, Oakland, California, USA.

Jonathan H Chen, Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California, USA; Division of Hospital Medicine, Stanford University School of Medicine, Stanford, California, USA; Clinical Excellence Research Center, Stanford University School of Medicine, Stanford, California, USA.

Julia Adler-Milstein, Center for Clinical Informatics and Improvement Research, Department of Medicine, University of California, San Francisco, San Francisco, California, USA.

Data Availability

The data underlying this article cannot be shared publicly due to institutional policies that protect the privacy of individuals whose data was used in the study.

REFERENCES

1. Coles E, Anderson J, Maxwell M, et al. The influence of contextual factors on healthcare quality improvement initiatives: a realist review. Syst Rev 2020; 9 (1): 94.
2. Di Blasi Z, Harkness E, Ernst E, Georgiou A, Kleijnen J. Influence of context effects on health outcomes: a systematic review. Lancet 2001; 357 (9258): 757–62.
3. Tomoaia-Cotisel A, Scammon DL, Waitzman NJ, et al. Context matters: the experience of 14 research teams in systematically reporting contextual factors important for practice change. Ann Fam Med 2013; 11 (Suppl_1): S115–S123.
4. Chen Y, Patel MB, McNaughton CD, Malin BA. Interaction patterns of trauma providers are associated with length of stay. J Am Med Inform Assoc 2018; 25 (7): 790–9.
5. Madej-Fermo OP, Staff I, Fortunato G, Abbott L, McCullough LD. Impact of emergency department transitions of care on thrombolytic use in acute ischemic stroke. Stroke 2012; 43 (4): 1067–74.
6. Hawkes MA, Carpani F, Farez MF, Ameriso SF. Door-to-needle time in acute stroke treatment and the “July effect”. Neurohospitalist 2018; 8 (1): 24–8.
7. Curtze S, Meretoja A, Mustanoja S, et al.; for the Helsinki Stroke Thrombolysis Registry Group. Does time of day or physician experience affect outcome of acute ischemic stroke patients treated with thrombolysis? A study from Finland. Int J Stroke 2012; 7 (6): 511–6.
8. Adler-Milstein J, Adelman JS, Tai-Seale M, Patel VL, Dymek C. EHR audit logs: a new goldmine for health services research? J Biomed Inform 2020; 101: 103343.
9. Chen B, Alrifai W, Gao C, et al. Mining tasks and task characteristics from electronic health record audit logs with unsupervised machine learning. J Am Med Inform Assoc 2021; 28 (6): 1168–77.
10. Arndt BG, Beasley JW, Watkinson MD, et al. Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med 2017; 15 (5): 419–26.
11. Ruppel H, Bhardwaj A, Manickam RN, et al. Assessment of electronic health record search patterns and practices by practitioners in a large integrated health care system. JAMA Netw Open 2020; 3 (3): e200512.
12. Reznek MA, Murray E, Youngren MN, et al. Door-to-imaging time for acute stroke patients is adversely affected by emergency department crowding. Stroke 2017; 48 (1): 49–54.
13. Rule A, Chiang MF, Hribar MR. Using electronic health record audit logs to study clinical activity: a systematic review of aims, measures, and methods. J Am Med Inform Assoc 2020; 27 (3): 480–90.
14. Fonarow GC, Smith EE, Saver JL, et al. Improving door-to-needle times in acute ischemic stroke: the design and rationale for the American Heart Association/American Stroke Association’s Target: Stroke initiative. Stroke 2011; 42 (10): 2983–9.
15. Xian Y, Xu H, Smith EE, et al. Achieving more rapid door-to-needle times and improved outcomes in acute ischemic stroke in a nationwide quality improvement intervention. Stroke 2022; 53 (4): 1328–38.
16. Mundt MP, Gilchrist VJ, Fleming MF, Zakletskaia LI, Tuan W-J, Beasley JW. Effects of primary care team social networks on quality of care and costs for patients with cardiovascular disease. Ann Fam Med 2015; 13 (2): 139–48.
17. Kurmann A, Keller S, Tschan-Semmer F, et al. Impact of team familiarity in the operating room on surgical complications. World J Surg 2014; 38 (12): 3047–52.
18. Hysong SJ, Amspoker AB, Hughes AM, et al. Impact of team configuration and team stability on primary care quality. Implement Sci 2019; 14 (1): 22.
19. He W, Ni S, Chen G, Jiang X, Zheng B. The composition of surgical teams in the operating room and its impact on surgical team performance in China. Surg Endosc 2014; 28 (5): 1473–8.
20. Huckman RS, Staats BR. Fluid tasks and fluid teams: the impact of diversity in experience and team familiarity on team performance. Manuf Serv Oper Manag 2011; 13: 310–28.
21. Huckman RS, Staats BR, Upton DM. Team familiarity, role experience, and performance: evidence from Indian software services. Manage Sci 2009; 55 (1): 85–100.
22. Schouten LMT, Hulscher MEJL, Akkermans R, van Everdingen JJE, Grol RPTM, Huijsman R. Factors that influence the stroke care team’s effectiveness in reducing the length of hospital stay. Stroke 2008; 39 (9): 2515–21.
23. American Heart Association. Primary Stroke Center certification, overview sheet. https://www.heart.org/-/media/files/professional/quality-improvement/qi-international/internationaleligibility-guide782020.pdf?la=en. Accessed April 21, 2022.
24. Lowe HJ, Ferris TA, Hernandez PM, Weber SC. STRIDE—an integrated standards-based translational research informatics platform. AMIA Annu Symp Proc 2009; 2009: 391–5.
25. Powers WJ, Rabinstein AA, Ackerson T, et al.; American Heart Association Stroke Council. 2018 guidelines for the early management of patients with acute ischemic stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke 2018; 49 (3): e46–110.
26. de Sivatte I, Gordon JR, Olmos R, Simon C. The effects of site experience on job performance: a missing element in work experience. Int J Hum Resour Manag 2021; 32 (21): 4603–28.
27. Colicchio TK, Borbolla D, Colicchio VD, et al. Looking behind the curtain: identifying factors contributing to changes on care outcomes during a large commercial EHR implementation. EGEMS (Wash DC) 2019; 7 (1): 21.
28. Kurz MW, Ospel JM, Advani R, et al. Simulation methods in acute stroke treatment: current state of affairs and implications. Stroke 2020; 51 (7): 1978–82.
29. Edmans J. What makes stroke units effective? Br J Ther Rehabil 2001; 8 (2): 74–7.
30. Vankipuram A, Patel VL, Traub S, Shortliffe EH. Overlaying multiple sources of data to identify bottlenecks in clinical workflow. J Biomed Inform 2019; 100S: 100004.
31. Kannampallil TG, Denton C, Shapiro J, Patel V. Efficiency of emergency physicians: insights from an observational study using EHR log files. Appl Clin Inform 2018; 9 (1): 99–104.
32. Duan Y, Jin Y, Ding Y, Nagarajan M, Hunte G. The cost of task switching: evidence from the emergency department [published online ahead of print 2020]. SSRN Electron J. doi: 10.2139/ssrn.3519791.
33. Laxmisan A, Hakimzada F, Sayan OR, Green RA, Zhang J, Patel VL. The multitasking clinician: decision-making and cognitive demand during and after team handoffs in emergency care. Int J Med Inform 2007; 76 (11–12): 801–11.
34. Raban MZ, Walter SR, Douglas HE, Strumpman D, Mackenzie J, Westbrook JI. Measuring the relationship between interruptions, multitasking and prescribing errors in an emergency department: a study protocol. BMJ Open 2015; 5 (10): e009076.
35. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med 2005; 142 (4): 260–73.
36. Ozcan YA, Watts J, Harris JM, Wogen SE. Provider experience and technical efficiency in the treatment of stroke patients: DEA approach. J Oper Res Soc 1998; 49 (6): 573–82.
37. Salas E, Cooke NJ, Rosen MA. On teams, teamwork, and team performance: discoveries and developments. Hum Factors 2008; 50 (3): 540–7.
38. Gonzalez MA, Hanna N, Rodrigo ME, Satler LF, Waksman R. Reliability of prehospital real-time cellular video phone in assessing the simplified National Institutes of Health Stroke Scale in patients with acute stroke: a novel telemedicine technology. Stroke 2011; 42 (6): 1522–7.
39. Demaerschalk BM, Vegunta S, Vargas BB, Wu Q, Channer DD, Hentz JG. Reliability of real-time video smartphone for assessing National Institutes of Health Stroke Scale scores in acute stroke patients. Stroke 2012; 43 (12): 3271–7.
40. Dwyer M, Peterson GM, Gall S, Francis K, Ford KM. Health care providers’ perceptions of factors that influence the provision of acute stroke care in urban and rural settings: a qualitative study. SAGE Open Med 2020; 8: 2050312120921088.
41. Gomez CR, Malkoff MD, Sauer CM, Tulyapronchote R, Burch CM, Banet GA. Code stroke. An attempt to shorten inhospital therapeutic delays. Stroke 1994; 25 (10): 1920–3.
