Journal of the American Medical Informatics Association (JAMIA). 2019 Nov 21;27(3):480–490. doi: 10.1093/jamia/ocz196

Using electronic health record audit logs to study clinical activity: a systematic review of aims, measures, and methods

Adam Rule 1, Michael F Chiang 1,2, Michelle R Hribar 1,2
PMCID: PMC7025338  PMID: 31750912

Abstract

Objective

To systematically review published literature and identify consistency and variation in the aims, measures, and methods of studies using electronic health record (EHR) audit logs to observe clinical activities.

Materials and Methods

In July 2019, we searched PubMed for articles using EHR audit logs to study clinical activities. We coded and clustered the aims, measures, and methods of each article into recurring categories. We likewise extracted and summarized the methods used to validate measures derived from audit logs and limitations discussed of using audit logs for research.

Results

Eighty-five articles met inclusion criteria. Study aims included examining EHR use, care team dynamics, and clinical workflows. Studies employed 6 key audit log measures: counts of actions captured by audit logs (eg, problem list viewed), counts of higher-level activities imputed by researchers (eg, chart review), activity durations, activity sequences, activity clusters, and EHR user networks. Methods used to preprocess audit logs varied, including how authors filtered extraneous actions, mapped actions to higher-level activities, and interpreted repeated actions or gaps in activity. Nineteen studies validated results (22%), but only 9 (11%) through direct observation, demonstrating varying levels of measure accuracy.

Discussion

While originally designed to aid access control, EHR audit logs have been used to observe diverse clinical activities. However, most studies lack sufficient discussion of measure definition, calculation, and validation to support replication, comparison, and cross-study synthesis.

Conclusion

EHR audit logs have the potential to scale observational research, but the complexity of audit log measures necessitates greater methodological transparency and validated standards.

Keywords: electronic health records, audit logs, systematic review, workflow, usability

INTRODUCTION

Recently mandated logging of electronic health record (EHR) access in audit logs provides a promising resource for researchers seeking to observe clinical activities at scale. Informaticians currently use diverse methods to study clinical activities—work processes associated with patient care—including surveys, interviews, and time-motion studies.1–5 Time-motion studies in particular have seen wide adoption; however, their most common form of continuous observation by an external observer is time-consuming, expensive, and difficult to scale in terms of the diversity, duration, and detail of activity recorded.1–3 Researchers can scale certain aspects of observational studies with sensors such as Bluetooth beacons and video recorders, but this equipment can be difficult to set up and may provide, depending on the sensor, either a limited stream of data or detailed recordings that require extensive ethnographic analysis.6,7 Despite the many methods at their disposal, informaticians struggle to observe clinical activity accurately, efficiently, and at scale.

Starting in 2005, the Security Rule of the Health Insurance Portability and Accountability Act (HIPAA) required all healthcare organizations to “implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.”8 The second stage of the Meaningful Use regulations,9 released in 2014, further clarified that certified EHRs must maintain audit logs adhering to the ASTM E2147 standard for tracking health information technology (HIT) use.10 Due to these regulations, virtually all EHRs in the United States now track at least 4 pieces of information about every episode of patient record access: who accessed which patient record, at what time, and the action they performed in that record, such as adding, deleting, or copying information (Table 1). Depending on the vendor, EHR audit logs may track additional information about the computer, user, or record involved in each action, track those actions at different levels of granularity, and give them different names.

Table 1. Example EHR audit log

| TIME | USER | RECORD | ACTION | COMPUTER |
| --- | --- | --- | --- | --- |
| 05/12/2019 13:04:35 | SMITHJANE | 104738297 | Edit Note Section | MED2938 |
| 05/12/2019 13:04:37 | SMITHJANE | 104738297 | Pend Note | MED2938 |
| 05/12/2019 13:04:42 | SMITHJANE | 104738297 | Sign Note | MED2938 |
| 05/12/2019 13:04:52 | DOEJOHN | 105837489 | View Problem List | MED1238 |
| 05/12/2019 13:05:02 | DOEJOHN | 105837489 | View Note | MED1238 |
| 05/12/2019 13:05:04 | DOEJOHN | 105837489 | View Note | MED1238 |
| 05/12/2019 13:05:32 | SMITHJANE | 107483726 | View Patient Summary | MED2938 |
| 05/12/2019 13:13:32 | SMITHJANE | 107483726 | View Patient Summary | MED2938 |
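
To make this concrete, here is a brief sketch (ours, with hypothetical column names mirroring Table 1; real vendor exports differ in field names and formats) that loads such a log with pandas and parses its timestamps:

```python
import io
import pandas as pd

# Hypothetical CSV export mirroring the columns in Table 1
raw = io.StringIO(
    "time,user,record,action,computer\n"
    "2019-05-12 13:04:35,SMITHJANE,104738297,Edit Note Section,MED2938\n"
    "2019-05-12 13:04:52,DOEJOHN,105837489,View Problem List,MED1238\n"
    "2019-05-12 13:05:02,DOEJOHN,105837489,View Note,MED1238\n"
)

log = pd.read_csv(raw, parse_dates=["time"])
print(log.dtypes)                             # 'time' parsed as datetime64
print(log.groupby("user")["action"].count())  # actions logged per user
```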

While originally designed to monitor record access, EHR audit logs present a unique opportunity to study some clinical activities at a scale unachievable with direct observation. However, like other forms of time-motion study, audit log research has challenges and limitations. Audit logs are not purpose-built to track workflows and may lack vital context. Logged actions may be difficult to map to clinical activities such as chart review or patient exams, and not all clinical activities involve EHR use. While EHR audit logs have been used to study diverse clinical activities, there has been little synthesis of the aims, measures, and methods of this research. This knowledge gap hinders efforts to replicate, generalize, and compare research on clinic workflow, EHR usability, and provider burnout, which may involve audit log analysis.11–17

Objective

With this systematic review, we identify consistency and variation in the aims, measures, and methods of audit log research, and we consolidate evidence for the validity of measures derived from audit logs and the limitations of using audit logs to observe clinical activities. Our goal is to improve the quality and generalizability of audit log research and provide literature-driven recommendations for future study design.

MATERIALS AND METHODS

We identified articles for review by searching PubMed. We limited our search to PubMed because pilot queries of other potentially relevant databases (eg, IEEE Xplore, ACM Digital Library) did not yield papers meeting inclusion criteria, and many healthcare-related engineering articles are cross-indexed in PubMed. Since the terms used to describe audit logs vary, we first hand-selected 17 articles familiar to us and identified the terms each used to describe audit logs (eg, access log, usage log, timestamps). Using these synonyms for “audit log” and the synonyms for “EHR” used in prior systematic reviews,18,19 we searched PubMed in July 2019 for all literature referencing EHR audit logs (see Supplementary Appendix for the full query). No date range limitation was imposed. The PubMed query and hand-selection together returned 1775 unique articles, with only 1 hand-selected article missing from the PubMed results. Through manual title, abstract, and text review, 1 author (AR) identified 74 articles that met inclusion criteria (summarized in Figure 1). A second author (MRH) with extensive experience conducting audit log research validated article inclusion by independently reviewing 100 randomly selected articles, achieving perfect inter-rater reliability (Cohen’s kappa 1.0). By scanning the references of included articles, the authors identified 11 additional articles that met inclusion criteria, yielding a total of 85 articles for review (Figure 2). All 11 ancestor articles could be found on PubMed but were absent from the original query results for reasons including the use of generic terms like “EHR data” to describe audit logs or incomplete PubMed metadata (ie, a missing abstract).

Figure 1. Article inclusion criteria.

Figure 2. Article review process.

Data extraction included coding 1) study features (eg, EHR vendor, sample size), 2) aims, 3) measures, 4) data preprocessing methods, 5) validation/sensitivity analyses, and 6) limitations discussed of using audit logs for research. Initial codes were developed by 1 author (AR) through an iterative process of extracting the features, aims, measures, methods, validation techniques, and limitations of each article. Two authors (AR and MRH) discussed these features and together clustered them into a smaller set of codes that covered the diversity of the literature. Following the data extraction method employed in Lopetegui et al’s review of healthcare time-motion studies,3 2 authors (AR, MRH) independently applied these codes to 20% of included articles and resolved discrepancies through discussion and mutual agreement, refining the definition of each code in the process. Inter-rater reliability was initially moderate (Cohen’s kappa 0.49), with disagreements stemming from ambiguity in code definitions and centering on what constituted measures and models of EHR activity complex enough to warrant coding (eg, is a simple correlation of EHR usage with another measure a “model”?). After reaching full consensus on coding and code definitions, 1 author (AR) coded the remaining 80% of articles using the final coding scheme.
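
For readers unfamiliar with the agreement statistic used here, the following is a minimal sketch of Cohen's kappa computed on hypothetical binary include/exclude decisions (illustrative data, not the review's actual ratings):

```python
def cohen_kappa(a: list[int], b: list[int]) -> float:
    """Cohen's kappa for two raters' binary (0/1) labels."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p_yes = (sum(a) / n) * (sum(b) / n)              # chance both rate 1
    p_no = (1 - sum(a) / n) * (1 - sum(b) / n)       # chance both rate 0
    p_e = p_yes + p_no                               # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include (1) / exclude (0) decisions by two reviewers
rater_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
rater_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]
print(cohen_kappa(rater_a, rater_b))  # 0.6; 1.0 would be perfect agreement
```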

RESULTS

Features of audit log research

The 85 reviewed articles used a variety of terms in titles and abstracts to describe audit logs (Table 2). Only 30 used terms including “audit” or “access,” while the remainder referenced more ambiguous EHR data, metadata, timestamps, and logs. Articles also varied in the EHRs, features, and users studied (Table 2). Just over half analyzed audit logs from commercial EHRs (28 from 1 vendor, Epic [Verona, WI]). Most articles (65) examined all EHR activity, while a minority (20) measured interactions with specific features or data types such as infobuttons, handoff reports, or CT scans. Just over half (46 articles) examined EHR activity in individual departments such as internal medicine, outpatient primary care, or ophthalmology, while the remainder spanned departments. Only 5 articles examined EHR use across multiple institutions: 3 outside the United States and 2 with a web-based EHR. Most articles (52) studied all EHR users, while the remainder largely studied physician use (30); few focused on nurses or medical students (3). Most articles (74) reported the length of time studied, with the median duration being 1 year. Articles were less consistent in reporting the number of users, actions, patient records, and encounters studied (Table 3). Just over half of the articles were published in 2016 or later (43 articles). See Table 4 for features by article.

Table 2. Features of studies using EHR audit logs to study clinical activity

| Study Attribute | Category | # | % |
| --- | --- | --- | --- |
| Audit log term | Audit/Access (eg, audit log, access log) | 30 | 35 |
| | Generic log (eg, log file, EHR log) | 18 | 21 |
| | Usage (eg, usage log, usage patterns) | 10 | 12 |
| | Data (eg, EHR data, EHR metadata) | 8 | 9 |
| | Time (eg, timestamp, time data) | 7 | 8 |
| | Event (eg, event file, event sequence) | 6 | 7 |
| | Other (eg, system log, user log) | 6 | 7 |
| EHR type | Vendor | 45 | 53 |
| | Homegrown (ie, locally developed) | 24 | 28 |
| | Unstated | 16 | 19 |
| Scope | Whole EHR | 65 | 76 |
| | Specific feature | 20 | 24 |
| Department | Multiple | 39 | 46 |
| | Ophthalmology | 10 | 12 |
| | Primary care | 9 | 11 |
| | General internal medicine | 7 | 8 |
| | Emergency | 5 | 6 |
| | Other | 15 | 18 |
| Institution | Single | 80 | 94 |
| | Multiple | 5 | 6 |
| Users | All users | 52 | 61 |
| | All physicians | 19 | 22 |
| | Residents/fellows | 11 | 13 |
| | Nurses | 2 | 2 |
| | Medical students | 1 | 1 |

Table 3. Reported sample size of reviewed articles, including number of months, users, actions, encounters, and patients studied

| | Time (Months) | Users | Actions | Encounters | Patients |
| --- | --- | --- | --- | --- | --- |
| Studies reporting | 74 | 50 | 24 | 18 | 20 |
| Minimum | 0.25 | 15 | 20,249 | 249 | 100 |
| Median | 12 | 154 | 1,930,620 | 38,628 | 3,071 |
| Maximum | 120 | 10,659 | 118,000,000 | 3,219,910 | 815,114 |

Table 4. Select features of the 85 articles included in this systematic review. Columns are grouped as follows. Article: Ref., PMID, First Author, Year. Terms: Audit Log Term. Who: Users Studied, Department, Multi-Institute. What: EHR Type, Scope, Months. Aims: Logged EHR Use, Workflow, Care Team, Model EHR Use, Model Outcome. Measures: Action, Activity, Duration, Sequence, Cluster, Network. Methods: Filter, Activity Map, Activity Gaps/Repeat. Validation: Validate Mapping Method, Validate Time Method, Sensitivity. Each line below lists one article's values in that column order; an X indicates the article has the corresponding aim, measure, or method, and validation columns name the method used (eg, Observe, Survey).
20 8130443 Michael PA 1993 audit trail All All Homegrown Feature 9 X X X
21 14728157 Cimino JJ 2003 log file All All Homegrown Feature 6 X X Survey
22 17102263 Chen ES 2006 log file All All Homegrown Feature 18 X X
23b 17238321 Cimino JJ 2006 log file All All Homegrown Feature 24 X X
24 18693813 Cimino JJ 2007 log file All All Homegrown Feature 14 X X X
25 18308208 McLean TR 2008 metadata Trainee Surgery Homegrown Feature 5 X X
26 20180439 Bernstein JA 2010 log data All All Vendor Feature 40 X X
27 22874273 Ries M 2012 system log All All Vendor Feature 8 X X
28 25024755 Hum RS 2014 user log All NICU Vendor Feature 22 X X X
29 25954381 Jiang SY 2014 usage log All All Vendor Feature 1 X X X
30 26958202 Jiang SY 2015 audit log All All Vendor Feature 1 X X X X X
31 29696473 Mongan J 2018 audit log MD Rad Vendor Feature 14 X X X X
32 30879188 Epstein RH 2019 access log MD Anes Vendor Feature 19 X X X X
33b PMC2243598 Asaro P 2001 access log All All Homegrown EHR 12 X X X
34b 16779018 Clayton PD 2005 audit trail All All Homegrown EHR 60 X X
35 17213496 Hripcsak G 2007 audit log All ED Homegrown EHR 7 X X X
36b 18999307 Wilcox A 2008 usage statistics MD All Homegrown EHR 1 X X
37 20442152 Zheng K 2010 audit log MD Primary Homegrown EHR 14 X X X X X
38a 20841655 Bowes WA III 2010 audit log All All Homegrown EHR X X Interview
39 21292704 Sykes TA 2011 system log All All EHR X X X X
40 23909863 Park JY 2014 log data All All Homegrown EHR 24 X X X
41b 24914013 Ancker JS 2014 ehr data All Primary Vendor EHR 42 X X X X
42 26618036 Choi W 2015 log file All All Homegrown EHR 12 X X
43 26831123 Kim S 2016 log file All All Homegrown EHR 24 X X
44 27332378 Kajimura A 2016 access log Nurses IM EHR <1 X X
45 29046269 Kim J 2017 usage log MD All Homegrown EHR 10 X X X X X
46 29237579 Lee Y 2017 usage log All All Homegrown EHR 54 X X X
47 29295318 Kim J 2017 usage patterns MD All EHR 10 X X
48 31183688 Cohen GR 2019 log All Primary X EHR 1 X X X X X Vendor
49 24907594 Chi J 2014 audit data Stud All Vendor EHR 7 X X X X X X Survey
50a 26642261 Ouyang D 2016 electronic audit Trainee IM Vendor EHR 12 X X X X X
51 26913101 Chen L 2016 audit log Trainee IM Vendor EHR 4 X X X X X X Vendor
52 30522828 Cox ML 2018 time data Trainee Surgery Vendor EHR 11 X X X X X X
53 30815089 Goldstein IH 2018 audit log MD Ophth Vendor EHR 12 X X X X Survey
54 30726208 Wang JK 2019 event log Trainee IM Vendor EHR 41 X X X X X X
55 30664893 Goldstein IH 2019 audit log MD Ophth Vendor EHR 120 X X X X X
56 27195306 Senathirajah Y 2016 log file All All Homegrown EHR 36 X X X
57 30137348 Orenstein EW 2018 audit log Trainee Peds Vendor EHR 24 X X X X
58a 22195144 Zhang W 2011 audit log All All Vendor EHR 3 X X X X
59a 29481625 Chen Y 2018 interaction pattern All Trauma EHR 24 X X X X X X X
60 21277996 Malin B 2011 access log All All Homegrown EHR 5 X X X X
61 22195103 Gray JE 2011 data All All EHR 12 X X X X
62 24511889 Adler-Milstein J 2013 task log All Primary X Vendor EHR X X X X
63a 21292706 Hripcsak G 2011 audit log All All Vendor EHR X X X X X
64 29854145 Grando A 2017 event logs All Surgery Vendor Feature <1 X X X X X X Experience
65a 29049512 Read-Brown S 2017 timestamp MD Ophth Vendor EHR 4 X X X X X X Observe
66a 28373331 Tai-Seale M 2017 log MD Primary Vendor EHR 48 X X X X X X Observe
67a 28893811 Arndt BG 2017 event log MD Primary Vendor EHR 36 X X X X X Observe Observe
68 30184241 Kannampallil TG 2018 log file MD ED Vendor EHR 1.5 X X X X X X X Observe
69 14728151 Chen ES 2003 log file All All Homegrown EHR 12 X X X X X X
70 15360766 Chen ES 2004 log file All All Homegrown EHR 12 X X X X X
71 22527782 Ben-Assuli O 2012 log file All ED X EHR 48 X X X X
72 23594488 Ben-Assuli O 2013 log file All ED X EHR 36 X X X X X
73 24692078 Ben-Assuli O 2015 log file All ED X EHR 48 X X X X X
74 26767060 Wanderer JP 2015 audit log All All Feature 5 X X X X X
75a 30240357 Shenvi EC 2018 access log Trainee IM Vendor EHR 6 X X X X
76 30664473 Soh JY 2019 log MD All Homegrown EHR 12 X X X X X X
77b 24701327 Gilleland M 2014 usage data MD IM Vendor EHR 3 X X X X X X X
78 28808942 Cutrona SL 2017 access/audit log MD Primary Vendor Feature 12 X X X X X
79 23942926 Hanauer DA 2013 computer log MD Hem Vendor Feature X X X X
80 25074989 Coleman JJ 2015 audit database Trainee All Feature 12 X X X X X X X
81 30730293 Amroze A 2019 access/audit log MD Primary Vendor Feature X X X X X X X X Observe
82 26958173 Chen Y 2015 event log All All EHR 4 X X X X X
83 28269922 Yan C 2016 event sequence All Cards EHR 4 X X X X
84b 20193841 Shine D 2010 data Trainee IM Vendor EHR 4 X X X X Survey
85a 27103047 Ouyang D 2016 audit Trainee IM Vendor EHR 12 X X X X X X X X
86 30625502 Dziorny AC 2019 timestamp Trainee Peds Vendor EHR X X X Survey
87 29854253 Wu DTY 2017 audit trail log All Primary EHR 5 X X X X Consensus Experience
88a 29174994 Chen Y 2018 utilization All All EHR 4 X X X X X
89 30807297 Karp EL 2019 event file Nurses IM Feature 2 X X X Observe
90b 26958290 Hribar M 2015 timestamp All Ophth Vendor EHR X X X Observe
91 28269861 Hribar MR 2016 timestamp All Ophth Vendor EHR 24 X X X
92b 29854159 Hribar M 2017 ehr data All Ophth Vendor EHR 15 X X X
93a 27375293 Hirsch AG 2017 audit file All Clinic Vendor EHR X X X X X X
94a 29036581 Hribar MR 2018 timestamp All Ophth Vendor EHR 12 X X X X X Observe
95 30312629 Hribar MR 2019 timestamp All Ophth Vendor EHR 24 X X X Observe
96b 29854142 Goldstein IH 2017 ehr data MD Ophth Vendor EHR 12 X X X
97a 29121175 Goldstein IH 2018 timestamp MD Ophth Vendor EHR 12 X X X
98a 22574103 Vawdrey DK 2011 audit log All Card Vendor EHR 1 X X X X
99a 24845147 Chen Y 2014 ehr utilization All All Homegrown EHR X X X X
100b 25710558 Soulakis ND 2015 record usage All All Vendor EHR 12 X X X
101a 27570217 Chen Y 2017 utilization record All All Homegrown EHR 4 X X X
102 30015537 Yao N 2018 access data All All Vendor EHR 24 X X X
103 30889243 Durojaiye AB 2019 metadata All Peds Vendor EHR 15 X X X X X X X
104 31160011 Zhu X 2019 access-log All All Vendor EHR X X X X X

Abbreviations: Anes, Anesthesiology; Card, Cardiology; ED, Emergency Department; Hem, Hematology; IM, Internal Medicine; MD, Physicians; NICU, Neonatal Intensive Care Unit; Ophth, Ophthalmology; Peds, Pediatrics; PMID, PubMed ID; Rad, Radiology; Ref, Reference; Stud, Medical Students.

a Hand-selected article used to form query.

b Article identified through ancestor search.

Aims of audit log research

Most articles used audit logs to study EHR use directly (62 articles; see Table 4 for details by article).20–81 This included how often providers accessed individual pieces of information,20–32,78 patterns of EHR use across features,33–48,62,69–76 and total duration of EHR use.49–55,63–68 More recently, studies began to use audit logs to examine clinical workflows extending beyond the EHR, using audit log timestamps to mark clinical event boundaries (34 articles).64–97 For example, a few articles calculated resident duty hours using EHR login and logout timestamps, assuming these occur near shift boundaries.77,84–86 Other studies used timestamps to identify the start and end of clinical exams and calculate exam length or patient wait time.90–97 Still other workflow studies focused on sequences of actions providers took after specific events occurred (eg, receiving an alert) or on typical workflows when caring for certain patient groups, such as those with complex cardiac conditions.79–83 A third common use of audit logs was to study care team structure and dynamics (17 articles).58–64,74,96–104 While a couple of studies used EHR access to identify care teams for individual patients,61,98 more used co-access of the same records to identify which providers or departments consistently worked together across patients.60,99–104

In addition to these 3 core aims, many studies collected additional demographic, contextual, or outcome data to model the effect of EHR use on clinical outcomes (12 articles),39,49,58,59,62,68,71–73,77,85,103 or the effect of patient, provider, and context on EHR use (28 articles).29,30,37,39–41,45,46,48,49,52,55,57,59,65,66,68,75,77–81,85,88,93,99,104 For example, 1 study modeled EHR adoption as a function of providers’ demographics and professional networks.37 Several studies considered whether accessing a patient’s historical record decreased length of stay or admission rates.59,68,71–73 For this review, we coded correlations, such as the correlation between duration of EHR use and length of stay, as bidirectional models of both EHR use and outcomes.

Measures of audit log research

Reviewed articles derived a variety of measures from audit logs including 1) counts of actions captured by audit logs, 2) counts of higher-level activities imputed by researchers, 3) activity durations, 4) activity sequences, 5) activity clusters, and 6) networks of EHR users (summarized in Figure 3, see Table 4 for details by article).

Figure 3. (A) Audit logs track actions EHR users perform in patient records. Here we show a simplified example of an audit log for 1 provider performing actions (eg, “View Problem List”) in 3 different patient records. We have already mapped these actions to 3 higher-level clinical activities (record review, orders, documentation). (B) Audit logs can be used to compute a variety of measures, including simple measures such as (1) action counts, (2) higher-level activity counts, and (3) activity durations. These base measures may be used to create more complex models and measures such as (4) sequences of activities, (5) clusters of similar activity patterns, and (6) networks of providers based on their access of the same patient records.

Counts of actions captured directly by audit logs (63 articles),20–48,50,56–62,69–79,81–83,85,87,88,96–104 such as “problem list viewed,” were often used to quantify use of specific features such as infobuttons and radiology reports. Alternatively, these actions were sometimes aggregated to identify peak periods of EHR use throughout the day or week. Counting higher-level activities (27 articles)32,37,48–55,63–68,77,80,81,84–87,89–95,98 typically involved first mapping low-level actions to higher-level activities such as chart review and documentation. Alternatively, it might involve looking for significant gaps between actions to identify entire sessions of EHR use or work shifts. These activity boundaries could then be used to compute counts or rates, such as the number of unique EHR sessions across all users in the past month or the percent of encounters where providers reviewed the patient’s historical record. Other studies grouped actions into higher-level activities to compute activity durations, including total time devoted to EHR use (33 articles).24,28,49–55,63–68,74,77,78,80–82,84–86,89–95,98,104
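
To make these first 3 measures concrete, the sketch below (our illustration, with a hypothetical action-to-activity mapping, not one drawn from any reviewed study) maps low-level actions to activities, counts both, and sums per-activity durations from timestamp differences:

```python
import pandas as pd

# Hypothetical action -> activity mapping; real mappings cover hundreds
# of vendor-specific action names and are developed study by study
ACTIVITY_MAP = {
    "View Problem List": "record review",
    "View Note": "record review",
    "Edit Note Section": "documentation",
    "Sign Note": "documentation",
}

log = pd.DataFrame({
    "time": pd.to_datetime(["2019-05-12 13:04:35", "2019-05-12 13:04:52",
                            "2019-05-12 13:05:02", "2019-05-12 13:06:10"]),
    "action": ["Edit Note Section", "View Problem List",
               "View Note", "Sign Note"],
})

log["activity"] = log["action"].map(ACTIVITY_MAP).fillna("other")
print(log["action"].value_counts())    # (1) action counts
print(log["activity"].value_counts())  # (2) higher-level activity counts

# (3) durations: attribute the gap before each action to that action's
# activity -- one simplifying assumption among several used in the literature
log["seconds"] = log["time"].diff().dt.total_seconds()
print(log.groupby("activity")["seconds"].sum())
```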

These first 3 measures were used to create more complex measures and models, 3 of which were employed in multiple studies. Eight studies constructed event sequences to identify routine patterns of care and deviations from them.69,70,76,82,83,88,93,103 Thirteen studies clustered patterns of activity to identify recurring patterns of use, such as which sections of the record providers routinely accessed.20,29,30,33,45,59,69,70,76,82,83,88,103 Finally, 11 articles studying care teams used co-access of patient records to develop networks of users or departments that work together.59–61,63,64,99–104 Across all 6 measures, there was 1 significant change in measure use over time: 48% of articles published since 2016 reported a time duration, whereas only 26% of the articles published before 2016 did so (χ2 = 4.64, P < .05) (Figure 4).
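
As an illustrative sketch of the network measure, the following snippet (our simplification; the reviewed studies differ in time windows and edge weighting) links any 2 users who accessed the same patient record and weights each edge by the number of co-accessed records:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical (user, record) access pairs drawn from an audit log
accesses = [("SMITHJANE", "104738297"), ("DOEJOHN", "104738297"),
            ("SMITHJANE", "107483726"), ("LEEKIM", "107483726"),
            ("DOEJOHN", "105837489")]

users_by_record = defaultdict(set)
for user, record in accesses:
    users_by_record[record].add(user)

# Edge weight = number of patient records a pair of users co-accessed
edges = defaultdict(int)
for users in users_by_record.values():
    for pair in combinations(sorted(users), 2):
        edges[pair] += 1

print(dict(edges))  # {('DOEJOHN', 'SMITHJANE'): 1, ('LEEKIM', 'SMITHJANE'): 1}
```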

Figure 4. Audit log publications over time, with publications reporting a time duration highlighted.

Preprocessing methods of audit log research

Computing even seemingly simple measures from audit logs, such as duration of EHR use, is not necessarily straightforward. Yet, fewer than half of the articles (32) discussed how raw audit logs were preprocessed before analysis (see Table 4 for details by article). Fewer still discussed this data wrangling in enough detail to support replication. When reported, common practices included 1) filtering actions, 2) mapping actions to higher-level clinical activities, and 3) selecting criteria to define time periods. Filtering actions included removing actions that were considered incidental or irrelevant.30,31,35,37,45,49,51,52,54,69,72,73,76,80,81,85 For example, 1 study of medical student EHR use removed short bursts of activity on off-service days.49 Other studies considered all activity within 24 hours of a patient’s visit relevant,37 or only activity in periods with “more than 3 mouse clicks (or 15 keystrokes) or 1700 mouse miles (pixels) per minute.”51 Another common preprocessing practice was mapping individual actions to higher-level activities such as chart review or documentation.32,41,48,59,67,68,81,83,87 While no study reported actual action–activity mappings, some reported the process used to develop these mappings, which varied. A final recurring preprocessing step was selecting actions and criteria to define time periods.50,51,53–56,65,66,80,84,93,94,103 This involved defining which actions constituted the start and end of clinical events and how gaps in activity would be handled. Depending on the research question, meaningful gaps ranged from 5 minutes, which could indicate the end of an activity,54 to 6 hours, which could indicate the end of a shift.84 Another study identified shifts using a 3-step process of 1) identifying distinct shifts based on 4-hour gaps, 2) merging shifts that were less than 7 hours apart, if doing so would result in a combined shift length of less than 30 hours, and 3) merging shifts that were less than 2 hours long, if doing so would result in a combined shift of less than 20 hours.86
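
To illustrate gap-based segmentation, here is a minimal sketch (the assumptions are ours, with thresholds chosen to echo the range reported above) that splits 1 user's sorted timestamps into sessions wherever the gap between consecutive actions exceeds a configurable threshold:

```python
from datetime import datetime, timedelta

def segment(times: list[datetime], gap: timedelta) -> list[list[datetime]]:
    """Split a non-empty, sorted list of timestamps into sessions
    separated by gaps larger than `gap`."""
    sessions = [[times[0]]]
    for prev, curr in zip(times, times[1:]):
        if curr - prev > gap:
            sessions.append([])   # gap exceeded: start a new session
        sessions[-1].append(curr)
    return sessions

stamps = [datetime(2019, 5, 12, 13, 4, 35), datetime(2019, 5, 12, 13, 5, 2),
          datetime(2019, 5, 12, 13, 13, 32)]
# A 5-minute threshold (activity boundary) yields 2 sessions here;
# a 6-hour threshold (shift boundary) yields 1
print(len(segment(stamps, timedelta(minutes=5))))  # 2
print(len(segment(stamps, timedelta(hours=6))))    # 1
```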

Validating audit log measures

Using EHR audit logs to study clinical activity assumes that audit logs consistently and accurately track clinical activities and that the methods used to process them into more complex measures are sound. However, a minority of studies reported checking these assumptions through validation or sensitivity analyses. Validation studies, which compare measures derived from audit logs with those obtained through other methods, checked both the mapping of audit log actions to higher-level activities and the accuracy of activity patterns or durations derived from audit logs. Of the 19 studies that reported validation studies, 6 validated activity mappings and 15 validated patterns or durations (see Table 4 for details by article).

The 6 studies that reported validating action-activity mappings used a variety of methods including consensus among 2 or more researchers,87 consulting the EHR vendor,48,51 and direct observation of clinical activities.67,68,81 Only 1 study reported the accuracy of mappings, noting that 5.9% of the audit log actions were originally misclassified as representing the wrong activity when compared to direct observation.67 Of 15 studies that reported validating activity patterns or durations, 8 compared them to self-reported data.21,38,49,64,84,86,87,97 Only 7 compared timing data to values obtained through direct observation.65–67,89,90,94,95 Of these, only 5 reported measure accuracy. Accuracy for EHR time per encounter ranged from overestimating by 43% (4.3 vs 3.0 minutes)65 to underestimating by 33% (2.4 ± 1.7 vs 1.6 ± 1.2 min).90 Measures of appointment lengths were more accurate, overestimated by just 4% in 1 study (13.8 ± 8.2 vs 13.3 ± 7.3 min),95 underestimated by 14% in another (19.4 vs 22.5 min),66 and overestimated by 29% in a third (24.4 ± 13.0 vs 18.9 ± 11.0 min).90
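
The accuracy figures above follow the usual signed percent-error convention, illustrated here with 2 of the reported comparisons:

```python
def pct_error(estimate: float, observed: float) -> float:
    """Signed percent error of an audit-log estimate vs direct observation."""
    return 100 * (estimate - observed) / observed

print(round(pct_error(4.3, 3.0)))    # +43%: EHR time per encounter, overestimate
print(round(pct_error(13.8, 13.3)))  # +4%: appointment length, near parity
```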

Computing duration in particular requires a number of assumptions about what constitutes the start and end of activities and how to handle gaps in time. Four studies reported sensitivity analyses in this vein,52,54,57,85 such as varying the gap in actions considered idle activity from 5 to 10 minutes54 or examining what impact discarding the first and last 5% of actions in a shift had on shift length.85 None reported a significant change in results due to changing parameters.
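
A minimal sketch of such a sensitivity analysis (our construction, with hypothetical gap data): sweep the idle-gap threshold and observe how the total attributed EHR time changes.

```python
# Hypothetical gaps (in seconds) between one user's consecutive actions
gaps = [27, 30, 510, 45, 2, 700, 60]

for threshold_min in (5, 7.5, 10):
    # Count a gap toward active EHR time only if it is under the threshold
    active = sum(g for g in gaps if g <= threshold_min * 60)
    print(f"{threshold_min} min threshold -> {active / 60:.1f} active minutes")
```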

Challenges and limitations of audit log research

Finally, reviewed articles mentioned a few limitations of using audit logs to study clinical activity. First, 19 articles mentioned that audit logs do not provide a full picture of clinical activity but only capture EHR use.20,30,35,52,59,63,64,67,74–76,84,86,90,91,93,99,103,104 Audit logs do not track phone, pager, or face-to-face interactions, nor interactions with paper records. This may lead to underestimating interaction or workload. Second, 15 articles noted that gaps between timestamps and multiple concurrent timestamps can be difficult to interpret.38,49,53,54,63,65,67,80,81,85,87,89,92,94,95 For example, does a long gap mean the provider was engaged with the EHR that entire time or had turned away? Third, 7 articles mentioned audit log data were either too coarse or too detailed for clear interpretation.20,46,49,64,65,99,102 Logs might capture who accessed a record, but not the exact note or result viewed. More detailed logs might use different names to track access to the same piece of information on different screens, and it can take researchers substantial time to map these equivalent actions to higher-level activities. Lastly, 6 articles noted that audit logs may capture what a user did, but data from qualitative methods such as interviews are needed to understand why.21,31,35,56,63,80

DISCUSSION

With this systematic review, we surveyed articles using EHR audit logs to study clinical activities. We found a diverse literature employing a range of measures to study EHR use directly, clinical workflows extending beyond the EHR, and care team dynamics. This diversity reflects the breadth of research questions audit logs can address. These include directly measuring care efficiency and quality (eg, adherence to guidelines) as well as the impact of EHR use on care efficiency, quality, and effectiveness (eg, does chart review reduce length of stay?). The body of EHR audit log research is growing, with more than half of the reviewed articles published in the past 3.5 years. Moreover, increasing measurement of total time using EHRs may reflect growing concern over the association between EHR use and provider burnout.14–17

Several clusters of EHR audit log research by institution emerged. For example, all 10 reviewed studies focusing on ophthalmology were conducted at Oregon Health & Science University.53,55,65,90–92,94–97 A post-hoc analysis of first author affiliation also revealed 13 studies written by authors from Columbia University (many of the earliest studies using audit logs),21–24,28–30,35,36,63,69,70,98 9 from authors at Vanderbilt (many investigating sequences of actions and networks of users),58–60,74,82,83,88,99,101 6 from authors at a trio of Korean institutions focusing on mobile electronic health records,42,43,45–47,76 and 5 from authors at Stanford focused on trainees’ use of EHRs.26,49,50,54,85 These clusters may reflect the work of individual labs and institutions with expertise in the nontrivial task of analyzing audit logs.

Whereas some measures employed in this literature were relatively simple counts of actions tracked explicitly by audit logs, others required researchers to manipulate audit logs in sophisticated ways, generating durations, sequences, clusters, and networks. Many studies glossed over the details of how raw audit logs were preprocessed and analyzed to compute these measures, and, even when methods were reported, there was significant variation.

Recommendations

The variability of measures and methods in reviewed articles echoes the variability observed in prior systematic reviews of time-motion studies in healthcare.2 It also highlights areas where research using EHR audit logs might improve. We focus our recommendations on 4 areas: sample size reporting, reporting of the methods used to preprocess audit logs, validation and sensitivity analyses, and methodological transparency leading to validated standards.

First, we recommend standard reporting of the time period, number of users, and number of patient records studied. While most studies reported the duration of time studied, not all did. Just over half reported the number of users studied, and far fewer reported the number of patients or encounters analyzed. This use of time to report sample sizes likely reflects the fact that audit log data are routinely queried by time period rather than by the number of patient records or users desired for analysis. We suggest that other reported sample size measures be clinically relevant, such as the number of patient encounters, rather than dataset measures, such as the number of audit log rows, as the latter are harder to compare across vendors and institutions with different logging practices.

Second, we recommend detailed reporting of steps used to compute measures. Given the variable accuracy of time durations reported in validation studies, more accurate and consistent methods of tracking activities with audit logs are needed. Methods reporting should include any criteria used to filter logs and at least the process used to map granular actions into higher-level activities, such as documentation or chart review. Ideally researchers would also report the exact mapping of actions to activities; however, this may not be feasible given the large number of actions that may map to a single activity or the potential for EHR vendors to consider audit log action names proprietary. For time durations, we recommend authors report how they handle repeated actions and gaps in activity, as well as how they identify activity boundaries, especially if data are missing. We recommend the audit log research community develop standards for reporting more complex measures such as activity sequences, activity clusters, and user networks.

Third, we recommend researchers take more steps to validate their results. Ultimately, the validity of audit log research rests on assumptions that audit logs consistently and accurately track EHR use and clinical activities. While some methods seem to be approaching parity with direct observation for measuring the duration of longer activities such as patient exams, measures of shorter events such as EHR time per encounter are more varied. Validation may occur in a number of ways including surveys and member-checks, but the gold-standard should remain comparing measures derived from audit logs with those obtained through direct observation. More sensitivity analyses are also warranted as the parameters of methods used to preprocess audit logs may significantly affect results.

Finally, there is a need for greater methodological transparency and validated standards to support replication and synthesis. This includes clear documentation and sharing of data schemas, action-activity mappings, and preprocessing scripts between institutions. We recommend that vendors, institutions, and the audit-log research community work together to share methods and develop validated standards for tracking, querying, and analyzing audit logs to compute the diverse measures of clinical activity uncovered in this review. These standards could in turn support replication and comparison across departments and institutions to identify consistency and variation in EHR use and clinical workflows between them.

Limitations

This review has a few limitations. First, it does not survey use of all HIT logs, nor all uses of EHR audit logs. EHR-related technologies such as Personal Health Records, Health Information Exchanges, and mobile health apps often track user activity with logs similar to EHR audit logs,105–108 and workflow researchers may use timing data from patient records in their studies (such as admission time). EHR audit logs are also routinely used for their primary purpose of access control, and several publications have explored how to use them more effectively for that purpose.109–113 While the measures and methods used in these related domains may be similar to those reported in this review, we scoped our analysis to the use of EHR audit logs to study clinical activity to provide targeted insights for this growing research community. Second, we limited our search to articles on PubMed, which may exclude articles published in engineering venues not routinely indexed there. We mitigated this risk by searching the citations of included articles for relevant references, regardless of venue. Third, article selection and coding were largely subjective and primarily performed by a single author, though validated by a second with extensive experience conducting audit log research. While the authors of each article may not agree with our classification, we aimed to develop a consistent coding scheme that captured the breadth of the literature by iteratively defining and applying each category label. Finally, this review likely reflects a publication bias in which some types of audit log research are more readily published than others (eg, workflow studies vs studies of IT infrastructure needs).

CONCLUSION

EHR audit logs have been used to study a wide range of clinical activities, extending beyond their original purpose of monitoring patient record access. The 85 articles included in this review demonstrate a diverse and growing literature, reflecting researchers’ desire to gather precise data on clinical activities at scale. However, the process of turning raw audit logs into insights is complex, requires professional judgment, and varies from study to study—when it is even reported. Moreover, relatively few articles in the literature report testing the validity and sensitivity of audit log measures. This lack of rigor and reporting prevents synthesis and comparison across studies, as well as efforts to improve the accuracy of using audit logs to measure clinical activities. EHR audit logs have untapped potential to support quality improvement and research, but the continued growth of the field will require greater methodological transparency and validated standards to support replication and cross-study knowledge discovery.

FUNDING

Supported by grants R00LM12238, P30EY10572, and T15LM007088 from the National Institutes of Health (Bethesda, MD), and by unrestricted departmental funding from Research to Prevent Blindness (New York, NY). The funding organizations had no role in the design or conduct of this research.

AUTHOR CONTRIBUTIONS

AR and MRH contributed to the research design, data analysis, and manuscript preparation. MFC contributed to the manuscript preparation.

Supplementary Material

ocz196_Supplementary_Data

ACKNOWLEDGMENTS

Thank you to Julia Adler-Milstein and members of the National Research Network for EHR Audit-Logs and Metadata for help in hand-selecting articles to seed this systematic review. Thank you to Julia Adler-Milstein, Genna Cohen, and Nicole Weiskopf for feedback on early versions of this article.

Conflict of Interest statement

The authors have no commercial, proprietary, or financial interest in any of the products or companies described in this article. MFC is an unpaid member of the Scientific Advisory Board for Clarity Medical Systems (Pleasanton, CA), a Consultant for Novartis (Basel, Switzerland), and an initial member of Inteleretina, LLC (Honolulu, HI).

REFERENCES

1. Unertl KM, Novak LL, Johnson KB, et al. Traversing the many paths of workflow research: developing a conceptual framework of workflow terminology through a systematic literature review. J Am Med Inform Assoc 2010; 17 (3): 265–73.
2. Zheng K, Guo MH, Hanauer DA. Using the time and motion method to study clinical work processes and workflow: methodological inconsistencies and a call for standardized research. J Am Med Inform Assoc 2011; 18 (5): 704–10.
3. Lopetegui M, Yen P-Y, Lai A, et al. Time motion studies in healthcare: what are we talking about? J Biomed Inform 2014; 49: 292–9.
4. Friedman CP, Wyatt J. Evaluation Methods in Biomedical Informatics. New York; London: Springer; 2011.
5. Kannampallil TG, Abraham J. Evaluation of health information technology: methods, frameworks and challenges. In: Patel VL, Kannampallil TG, Kaufman DR, eds. Cognitive Informatics for Biomedicine. Berlin: Springer; 2015: 81–109.
6. Zheng K, Hanauer DA, Weibel N, et al. Computational ethnography: automated and unobtrusive means for collecting data in situ for human–computer interaction evaluation studies. In: Patel VL, Kannampallil TG, Kaufman DR, eds. Cognitive Informatics for Biomedicine. Cham: Springer International Publishing; 2015: 111–40.
7. Weibel N, Rick S, Emmenegger C, et al. LAB-IN-A-BOX: semi-automatic tracking of activity in the medical office. Pers Ubiquit Comput 2015; 19 (2): 317–34.
8. Health Insurance Portability and Accountability Act. Technical Safeguards, 45 C.F.R. § 164.312; 2003.
9. Standards for Health Information Technology to Protect Electronic Health Information Created, Maintained, and Exchanged, 45 C.F.R. § 170.210; 2015.
10. ASTM E2147: Standard Specification for Audit and Disclosure Logs for Use in Health Information Systems. https://www.astm.org/Standards/E2147.htm
11. Middleton B, Bloomrosen M, Dente MA, et al. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc 2013; 20 (e1): e2–8.
12. Zhang J, Walji MF. TURF: toward a unified framework of EHR usability. J Biomed Inform 2011; 44 (6): 1056–67.
13. Ratwani RM, Fairbanks RJ, Hettinger AZ, et al. Electronic health record usability: analysis of the user-centered design processes of eleven electronic health record vendors. J Am Med Inform Assoc 2015; 22 (6): 1179–82.
14. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med 2014; 12 (6): 573–6.
15. Shanafelt TD, Hasan O, Dyrbye LN, et al. Changes in burnout and satisfaction with work-life balance in physicians and the general US working population between 2011 and 2014. Mayo Clin Proc 2015; 90 (12): 1600–13.
16. Shanafelt TD, Dyrbye LN, Sinsky C, et al. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clin Proc 2016; 91 (7): 836–48.
17. Gardner RL, Cooper E, Haskell J, et al. Physician stress and burnout: the impact of health information technology. J Am Med Inform Assoc 2019; 26 (2): 106–14.
18. Hogan WR, Wagner MM. Accuracy of data in computer-based patient records. J Am Med Inform Assoc 1997; 4 (5): 342–55.
19. Weiskopf NG, Weng C. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research. J Am Med Inform Assoc 2013; 20 (1): 144–51.
20. Michael PA. Physician-directed software design: the role of utilization statistics and user input in enhancing HELP results review capabilities. In: Proceedings of the Annual Symposium on Computer Application in Medical Care; Washington, DC; October 30–November 3, 1993: 107–11.
21. Cimino JJ, Li J, Graham M, et al. Use of online resources while using a clinical information system. AMIA Annual Symposium Proceedings 2003: 175–9.
22. Chen ES, Bakken S, Currie LM, et al. An automated approach to studying health resource and infobutton use. Stud Health Technol Inform 2006; 122: 273–8.
23. Cimino JJ. Use, usability, usefulness, and impact of an infobutton manager. AMIA Annual Symposium Proceedings 2006: 151–5.
24. Cimino JJ, Friedmann BE, Jackson KM, et al. Redesign of the Columbia University Infobutton Manager. AMIA Annual Symposium Proceedings 2007: 135–9.
25. McLean TR, Burton L, Haller CC, et al. Electronic medical record metadata: uses and liability. J Am Coll Surg 2008; 206 (3): 405–11.
26. Bernstein JA, Imler DL, Sharek P, et al. Improved physician work flow after integrating sign-out notes into the electronic medical record. Jt Comm J Qual Patient Saf 2010; 36 (2): 72–8.
27. Ries M, Golcher H, Prokosch H-U, et al. An EMR based cancer diary: utilisation and initial usability evaluation of a new cancer data visualization tool. Stud Health Technol Inform 2012; 180: 656–60.
28. Hum RS, Cato K, Sheehan B, et al. Developing clinical decision support within a commercial electronic health record system to improve antimicrobial prescribing in the neonatal ICU. Appl Clin Inform 2014; 5: 368–87.
29. Jiang SY, Murphy A, Vawdrey D, et al. Characterization of a handoff documentation tool through usage log data. AMIA Annual Symposium Proceedings 2014: 749–56.
30. Jiang SY, Hum RS, Vawdrey D, et al. In search of social translucence: an audit log analysis of handoff documentation views and updates. AMIA Annual Symposium Proceedings 2015: 669–76.
31. Mongan J, Avrin D. Impact of PACS-EMR integration on radiologist usage of the EMR. J Digit Imaging 2018; 31 (5): 611–4.
32. Epstein RH, Dexter F, Schwenk ES. Provider access to legacy electronic anesthesia records following implementation of an electronic health record system. J Med Syst 2019; 43 (5): 105.
33. Asaro PV, Ries JE. Data mining in medical record access logs. AMIA Annual Symposium Proceedings 2001: 855.
34. Clayton PD, Naus SP, Bowes WA, et al. Physician use of electronic medical records: issues and successes with direct data entry and physician productivity. AMIA Annual Symposium Proceedings 2005: 141–5.
35. Hripcsak G, Sengupta S, Wilcox A, et al. Emergency department access to a longitudinal medical record. J Am Med Inform Assoc 2007; 14 (2): 235–8.
36. Wilcox A, Bowes WA, Thornton SN, et al. Physician use of outpatient electronic health records to improve care. AMIA Annual Symposium Proceedings 2008: 809–13.
37. Zheng K, Padman R, Krackhardt D, et al. Social networks and physician adoption of electronic health records: insights from an empirical study. J Am Med Inform Assoc 2010; 17 (3): 328–36.
38. Bowes WA. Measuring use of electronic health record functionality using system audit information. Stud Health Technol Inform 2010; 160 (Pt 1): 86–90.
39. Sykes TA, Venkatesh V, Rai A. Explaining physicians’ use of EMR systems and performance in the shakedown phase. J Am Med Inform Assoc 2011; 18 (2): 125–30.
40. Park J-Y, Lee G, Shin S-Y, et al. Lessons learned from the development of health applications in a tertiary hospital. Telemed J E Health 2014; 20 (3): 215–22.
41. Ancker JS, Kern LM, Edwards A, et al. How is the electronic health record being used? Use of EHR data to assess physician-level variability in technology use. J Am Med Inform Assoc 2014; 21 (6): 1001–8.
42. Choi W, Park M, Hong E, et al. Early experiences with mobile electronic health records application in a tertiary hospital in Korea. Healthc Inform Res 2015; 21 (4): 292–8.
43. Kim S, Lee K-H, Hwang H, et al. Analysis of the factors influencing healthcare professionals’ adoption of mobile electronic medical record (EMR) using the unified theory of acceptance and use of technology (UTAUT) in a tertiary hospital. BMC Med Inform Decis Mak 2015; 16 (1): 12.
44. Kajimura A, Takemura T, Hikita T, et al. Nurses’ actual usage of EMRs: an access log-based analysis. Stud Health Technol Inform 2016; 225: 858–9.
45. Kim J, Lee Y, Lim S, et al. What clinical information is valuable to doctors using mobile electronic medical records and when? J Med Internet Res 2017; 19 (10): e340.
46. Lee Y, Park YR, Kim J, et al. Usage pattern differences and similarities of mobile electronic medical records among health care providers. JMIR Mhealth Uhealth 2017; 5 (12): e178.
47. Kim J, Lee Y, Lim S, et al. How are doctors using mobile electronic medical records? An in-depth analysis of the usage pattern. Stud Health Technol Inform 2017; 245: 1231.
48. Cohen GR, Friedman CP, Ryan AM, et al. Variation in physicians’ electronic health record documentation and potential patient harm from that variation. J Gen Intern Med 2019. doi: 10.1007/s11606-019-05025-3.
49. Chi J, Kugler J, Chu IM, et al. Medical students and the electronic health record: an epic use of time. Am J Med 2014; 127 (9): 891–5.
50. Ouyang D, Chen JH, Hom J, et al. Internal medicine resident computer usage: an electronic audit of an inpatient service. JAMA Intern Med 2016; 176 (2): 252–4.
51. Chen L, Guo U, Illipparambil LC, et al. Racing against the clock: internal medicine residents’ time spent on electronic health records. J Grad Med Educ 2016; 8 (1): 39–44.
52. Cox ML, Farjat AE, Risoli TJ, et al. Documenting or operating: where is time spent in general surgery residency? J Surg Educ 2018; 75 (6): e97–106.
53. Goldstein IH, Hribar MR, Reznick LG, et al. Analysis of total time requirements of electronic health record use by ophthalmologists using secondary EHR data. AMIA Annual Symposium Proceedings 2018: 490–7.
54. Wang JK, Ouyang D, Hom J, et al. Characterizing electronic health record usage patterns of inpatient medicine residents using event log data. PLoS ONE 2019; 14 (2): e0205379.
55. Goldstein IH, Hwang T, Gowrisankaran S, et al. Changes in electronic health record use time and documentation over the course of a decade. Ophthalmology 2019; 126 (6): 783–91.
56. Senathirajah Y, Kaufman D, Bakken S. User-composable electronic health record improves efficiency of clinician data viewing for patient case appraisal: a mixed-methods study. EGEMS 2016; 4: 1176.
57. Orenstein EW, Rasooly IR, Mai MV, et al. Influence of simulation on electronic health record use patterns among pediatric residents. J Am Med Inform Assoc 2018; 25 (11): 1501–6.
58. Zhang W, Gunter CA, Liebovitz D, et al. Role prediction using electronic medical record system audits. AMIA Annual Symposium Proceedings 2011: 858–67.
59. Chen Y, Patel MB, McNaughton CD, et al. Interaction patterns of trauma providers are associated with length of stay. J Am Med Inform Assoc 2018; 25 (7): 790–9.
60. Malin B, Nyemba S, Paulett J. Learning relational policies from electronic health record access logs. J Biomed Inform 2011; 44 (2): 333–42.
61. Gray JE, Feldman H, Reti S, et al. Using digital crumbs from an electronic health record to identify, study and improve health care teams. AMIA Annual Symposium Proceedings 2011: 491–500.
62. Adler-Milstein J, Huckman RS. The impact of electronic health record use on physician productivity. Am J Manag Care 2013; 19: 345–52.
63. Hripcsak G, Vawdrey DK, Fred MR, et al. Use of electronic clinical documentation: time spent and team interactions. J Am Med Inform Assoc 2011; 18 (2): 112–7.
64. Grando A, Groat D, Furniss SK, et al. Using process mining techniques to study workflows in a pre-operative setting. AMIA Annual Symposium Proceedings 2017: 790–9.
65. Read-Brown S, Hribar MR, Reznick LG, et al. Time requirements for electronic health record use in an academic ophthalmology center. JAMA Ophthalmol 2017; 135 (11): 1250–7.
66. Tai-Seale M, Olson CW, Li J, et al. Electronic health record logs indicate that physicians split time evenly between seeing patients and desktop medicine. Health Aff (Millwood) 2017; 36 (4): 655–62.
67. Arndt BG, Beasley JW, Watkinson MD, et al. Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med 2017; 15 (5): 419–26.
68. Kannampallil TG, Denton CA, Shapiro JS, et al. Efficiency of emergency physicians: insights from an observational study using EHR log files. Appl Clin Inform 2018; 9: 99–104.
69. Chen ES, Cimino JJ. Automated discovery of patient-specific clinician information needs using clinical information system log files. AMIA Annual Symposium Proceedings 2003: 145–9.
70. Chen ES, Cimino JJ. Patterns of usage for a web-based clinical information system. Stud Health Technol Inform 2004; 107 (Pt 1): 18–22.
71. Ben-Assuli O, Leshno M, Shabtai I. Using electronic medical record systems for admission decisions in emergency departments: examining the crowdedness effect. J Med Syst 2012; 36 (6): 3795–803.
72. Ben-Assuli O, Shabtai I, Leshno M. The impact of EHR and HIE on reducing avoidable admissions: controlling main differential diagnoses. BMC Med Inform Decis Mak 2013; 13 (1): 49.
73. Ben-Assuli O, Shabtai I, Leshno M. Using electronic health record systems to optimize admission decisions: the Creatinine case study. Health Inform J 2015; 21 (1): 73–88.
74. Wanderer JP, Gruss CL, Ehrenfeld JM. Using visual analytics to determine the utilization of preoperative anesthesia assessments. Appl Clin Inform 2015; 6: 629–37.
75. Shenvi EC, Feupe SF, Yang H, et al. “Closing the loop”: a mixed-methods study about resident learning from outcome feedback after patient handoffs. Diagnosis (Berl) 2018; 5 (4): 235–42.
  • 76. Soh JY, Jung S-H, Cha WC, et al. Variability in doctors’ usage paths of mobile electronic health records across specialties: comprehensive analysis of log data. JMIR Mhealth Uhealth 2019; 7 (1): e12041. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77. Gilleland M, Komis K, Chawla S, et al. Resident duty hours in the outpatient electronic health record era: inaccuracies and implications. J Grad Med Educ 2014; 6 (1): 151–4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78. Cutrona SL, Fouayzi H, Burns L, et al. Primary care providers’ opening of time-sensitive alerts sent to commercial electronic health record inbaskets. J Gen Intern Med 2017; 32 (11): 1210–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
79. Hanauer DA, Zheng K, Commiskey EL, et al. Computerized prescriber order entry implementation in a physician assistant-managed hematology and oncology inpatient service: effects on workflow and task switching. J Oncol Pract 2013; 9 (4): e103–e114.
80. Coleman JJ, Hodson J, Thomas SK, et al. Temporal and other factors that influence the time doctors take to prescribe using an electronic prescribing system. J Am Med Inform Assoc 2015; 22 (1): 206–12.
81. Amroze A, Field TS, Fouayzi H, et al. Use of electronic health record access and audit logs to identify physician actions following noninterruptive alert opening: descriptive study. JMIR Med Inform 2019; 7 (1): e12650.
82. Chen Y, Xie W, Gunter CA, et al. Inferring clinical workflow efficiency via electronic medical record utilization. AMIA Annual Symposium Proceedings 2015: 416–25.
83. Yan C, Chen Y, Li B, et al. Learning clinical workflows to identify subgroups of heart failure patients. AMIA Annual Symposium Proceedings 2016: 1248–57.
84. Shine D, Pearlman E, Watkins B. Measuring resident hours by tracking interactions with the computerized record. Am J Med 2010; 123 (3): 286–90.
85. Ouyang D, Chen JH, Krishnan G, et al. Patient outcomes when housestaff exceed 80 hours per week. Am J Med 2016; 129 (9): 993–9.e1.
86. Dziorny AC, Orenstein EW, Lindell RB, et al. Automatic detection of front-line clinician hospital shifts: a novel use of electronic health record timestamp data. Appl Clin Inform 2019; 10: 28–37.
87. Wu DTY, Smart N, Ciemins EL, et al. Using EHR audit trail logs to analyze clinical workflow: a case study from community-based ambulatory clinics. AMIA Annual Symposium Proceedings 2017: 1820–7.
88. Chen Y, Kho AN, Liebovitz D, et al. Learning bundled care opportunities from electronic medical records. J Biomed Inform 2018; 77: 1–10.
89. Karp EL, Freeman R, Simpson KN, et al. Changes in efficiency and quality of nursing electronic health record documentation after implementation of an admission patient history essential data set. Comput Inform Nurs 2019; 37 (5): 260–5.
90. Hribar MR, Read-Brown S, Reznick L, et al. Secondary use of EHR timestamp data: validation and application for workflow optimization. AMIA Annual Symposium Proceedings 2015: 1909–17.
91. Hribar MR, Biermann D, Read-Brown S, et al. Clinic workflow simulations using secondary EHR data. AMIA Annual Symposium Proceedings 2016: 647–56.
92. Hribar MR, Read-Brown S, Reznick L, et al. Evaluating and improving an outpatient clinic scheduling template using secondary electronic health record data. AMIA Annual Symposium Proceedings 2017: 921–9.
93. Hirsch AG, Jones JB, Lerch VR, et al. The electronic health record audit file: the patient is waiting. J Am Med Inform Assoc 2017; 24 (e1): e28–34.
94. Hribar MR, Read-Brown S, Goldstein IH, et al. Secondary use of electronic health record data for clinical workflow analysis. J Am Med Inform Assoc 2018; 25 (1): 40–6.
95. Hribar MR, Huang AE, Goldstein IH, et al. Data-driven scheduling for improving patient efficiency in ophthalmology clinics. Ophthalmology 2019; 126 (3): 347–54.
96. Goldstein IH, Hribar MR, Read-Brown S, et al. Quantifying the impact of trainee providers on outpatient clinic workflow using secondary EHR data. AMIA Annual Symposium Proceedings 2017: 760–9.
97. Goldstein IH, Hribar MR, Read-Brown S, et al. Association of the presence of trainees with outpatient appointment times in an ophthalmology clinic. JAMA Ophthalmol 2018; 136 (1): 20–6.
98. Vawdrey DK, Wilcox LG, Collins S, et al. Awareness of the care team in electronic health records. Appl Clin Inform 2011; 2: 395–405.
99. Chen Y, Lorenzi N, Nyemba S, et al. “We work with them?” Healthcare workers interpretation of organizational relations mined from electronic health records. Int J Med Inform 2014; 83 (7): 495–506.
100. Soulakis ND, Carson MB, Lee YJ, et al. Visualizing collaborative electronic health record usage for hospitalized patients with heart failure. J Am Med Inform Assoc 2015; 22 (2): 299–311.
101. Chen Y, Lorenzi NM, Sandberg WS, et al. Identifying collaborative care teams through electronic medical record utilization patterns. J Am Med Inform Assoc 2017; 24 (e1): e111–20.
102. Yao N, Zhu X, Dow A, et al. An exploratory study of networks constructed using access data from an electronic health record. J Interprof Care 2018; 32: 666–73.
103. Durojaiye AB, Levin S, Toerper M, et al. Evaluation of multidisciplinary collaboration in pediatric trauma care using EHR data. J Am Med Inform Assoc 2019; 26 (6): 506–15.
104. Zhu X, Tu S-P, Sewell D, et al. Measuring electronic communication networks in virtual care teams using electronic health records access-log data. Int J Med Inform 2019; 128: 46–52.
105. Kim E-H, Stolyar A, Lober WB, et al. Challenges to using an electronic personal health record by a low-income elderly population. J Med Internet Res 2009; 11 (4): e44.
106. Cimino JJ, Patel VL, Kushniruk AW. What do patients do with access to their medical records? Stud Health Technol Inform 2001; 84 (Pt 2): 1440–4.
107. Weingart SN, Rind D, Tofias Z, et al. Who uses the patient internet portal? The PatientSite experience. J Am Med Inform Assoc 2006; 13 (1): 91–5.
108. Yamin CK, Emani S, Williams DH, et al. The digital divide in adoption and use of a personal health record. Arch Intern Med 2011; 171 (6): 568–74.
109. Bakker A. Access to EHR and access control at a moment in the past: a discussion of the need and an exploration of the consequences. Int J Med Inform 2004; 73 (3): 267–70.
110. Boxwala AA, Kim J, Grillo JM, et al. Using statistical and machine learning to help institutions detect suspicious access to electronic health records. J Am Med Inform Assoc 2011; 18 (4): 498–505.
111. Chen Y, Malin B. Detection of anomalous insiders in collaborative environments via relational analysis of access logs. CODASPY 2011: 63–74.
112. Chen Y, Nyemba S, Malin B. Detecting anomalous insiders in collaborative information systems. IEEE Trans Dependable Secure Comput 2012; 9 (3): 332–44.
113. Menon AK, Jiang X, Kim J, et al. Detecting inappropriate access to electronic health records using collaborative filtering. Mach Learn 2014; 95 (1): 87–101.


Supplementary Materials

ocz196_Supplementary_Data
