Abstract
Electronic health record audit logs capture a time-sequenced record of clinicians’ activities within the system. Audit log data therefore facilitate unobtrusive measurement at scale of clinical work activities and workflow, as well as derivative behavioral proxies (eg, teamwork). Given their considerable research potential, studies leveraging these data have burgeoned. As the field has matured, the challenges of using the data to answer significant research questions have come into focus. In this Perspective, we draw on our research experiences and insights from the broader audit log literature to advance audit log research. Specifically, we make 2 complementary recommendations that would facilitate substantial progress toward audit log-based measures that are: (1) transparent and validated, (2) standardized to allow for multisite studies, (3) sensitive to meaningful variability, (4) broader in scope to capture key aspects of clinical work including teamwork and coordination, and (5) linked to patient and clinical outcomes.
Keywords: audit logs, EHR use metrics, standardization
INTRODUCTION
Electronic health record (EHR) audit logs contain data on clinician interactions with the system. Such logs are mandated under the Health Insurance Portability and Accountability Act (HIPAA) and Meaningful Use (MU) regulations because of the need to track interactions with Protected Health Information.1,2 However, EHR vendors often offer audit logs that go beyond these requirements; available audit log data can span from the most granular clickstream data to the HIPAA/MU mandated event-level data to fully processed aggregate measures (eg, physician time spent writing notes). This spectrum, and particularly the readily usable, fully processed measures, has attracted researchers working across a diverse set of research domains. For example, audit log data can be aggregated or grouped to align with the clinical tasks performed for patient care (eg, documentation, ordering medications), which can then provide a time-sequenced workflow of a clinician’s EHR-based tasks. Moreover, such task measures can act as representative proxies for assessing behavioral, cognitive, and clinical correlates associated with a clinician’s EHR work, including workload, teamwork, wellness, cognitive load, and errors. Recent studies have utilized audit log data to measure administrative burden,3 cognitive load,4,5 interruptions,6,7 medication ordering,8 interface navigation,9,10 clinical documentation,11,12 and out-of-office work.13,14
Reflecting this breadth, audit logs have been described as an untapped “goldmine” for clinical informatics research,1 affording new opportunities for unobtrusive measurement at scale of clinical work and workflow, and for developing cognitive and behavioral markers of EHR-based work. Such unobtrusive measurement overcomes some of the challenges of observational studies, including cost and effort, selection bias, and Hawthorne effects.15 However, as with mining for gold, the process of working with audit log data can be messy and the yield can be low. While some challenges of working with audit log data are inherent (eg, missing activities that occur outside of the EHR, the need to protect user privacy), many others are addressable.
In this Perspective, drawing from the review from Rule et al16 and our own research experiences using audit logs, we offer detailed examples of how early efforts to use audit log data have revealed the need to advance the field toward a state in which audit log data and derivative measures are: (1) transparent and valid; (2) standardized to allow cross-site comparison; (3) broader in scope, particularly to capture teamwork; (4) sensitive to meaningful variability; and (5) linked to significant patient and clinical outcomes.15,17 Although we draw heavily on ambulatory settings where audit log work is more mature, the issues and challenges, as well as the derivative policy and practice-based recommendations we offer to advance audit log research, apply to all clinical settings.
CREATING REPRODUCIBLE, TRANSPARENT, STANDARDIZED MEASURES
Audit log data are unique because they are user (vs patient) centric and capture dimensions of care delivery that are not part of traditional medical record data fields. In contrast to core clinical data standards (and associated policy levers promoting adoption) that have rapidly expanded with the development of the US Core Data for Interoperability (USCDI), audit log standards are nascent. There is only 1 standard included in EHR certification criteria for audit logs, and it specifies a very basic structure by defining a minimum set of data elements that audit logs must contain: type of action (addition, deletion, change, query, print, copy), date and time of event, user identification, and identification of the patient data that were accessed.1 Unfortunately, there are few meaningful measures that derive directly from the HIPAA/MU mandated set of audit log data. Simply counting the number of additions to a chart, for example, is not useful without additional context about the clinician, patient, or type of care delivered. Similarly, calculating time from first login to last logout does not meaningfully reflect time spent at work or on the EHR, as much may happen in between, including long periods of EHR nonuse.
Instead, more meaningful measures, such as those characterizing clinician activities and behaviors (eg, total time spent on EHR, work outside of work [WoW], time spent on inbox), require considerable manipulation of audit log data. However, these manipulations have taken many forms, resulting in measures that are not standardized and therefore cannot be compared across studies. Notably, measuring time durations requires determining how to measure start and end times for a task, and researchers and EHR vendors have utilized different approaches.15,17
Take the example of one of the most commonly used audit log measures: total time spent using the EHR. This measure is a seemingly simple additive one that can be computed by summing the time spent on EHR-based activities as recorded by audit logs. However, the complexity arises in deciding precisely when a clinician is working in the EHR. Several heuristics and empirically assessed time-out periods have been proposed; for example, researchers have used time-out periods of 1 min18 and 5 min19,20 when constructing their own measures. Alternatively, Epic’s Signal measure uses a time-out period of 5 s,21 relying on keystroke and mouse click events that are not captured in the HIPAA/MU mandated set of audit log events (and are therefore not routinely available to researchers seeking to replicate these measures).
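To make the impact of this single decision concrete, the following minimal Python sketch groups events into sessions whenever the gap between consecutive events is shorter than a chosen inactivity time-out and sums the session durations. The event-level log shown here is hypothetical (timestamps only, for one clinician) and is not drawn from any vendor's schema.

from datetime import datetime, timedelta

# Hypothetical event-level audit log for one clinician: timestamps of logged actions only.
events = [
    datetime(2023, 1, 5, 8, 0, 10),
    datetime(2023, 1, 5, 8, 0, 40),
    datetime(2023, 1, 5, 8, 3, 30),
    datetime(2023, 1, 5, 8, 8, 0),
]

def total_ehr_time(timestamps, timeout_seconds):
    """Sum active EHR time across sessions; a new session starts whenever the gap
    between consecutive events reaches the chosen inactivity time-out."""
    timestamps = sorted(timestamps)
    total = timedelta(0)
    session_start = prev = timestamps[0]
    for ts in timestamps[1:]:
        if (ts - prev).total_seconds() >= timeout_seconds:
            total += prev - session_start  # close the current session
            session_start = ts             # start a new session
        prev = ts
    total += prev - session_start
    # Isolated events contribute zero time here; crediting each event a fixed
    # minimum duration would be yet another measurement choice.
    return total

for timeout in (60, 300):  # 1-min and 5-min time-outs reported in prior studies
    print(f"time-out {timeout} s -> active time {total_ehr_time(events, timeout)}")

Under the 1-min time-out this example day contains 30 s of active time; under the 5-min time-out, nearly 8 min, from identical events.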
Worsening this issue is additional variability in the denominator. Total time spent on the EHR per day? Per patient? Per encounter? Per scheduled patient hour? Per work Relative Value Unit? Typically, there is no clear answer for the best measure specification. As a result, researchers calculate measures in ad hoc ways and readers are left to make sense of the varied results. At a field level, such variations preclude comparing findings across studies.16 These challenges also create high barriers for multisite research leveraging audit log data. Even across sites that use the same EHR, seamless sharing of queries is difficult due to the variability in how core concepts are captured.
Indeed, concepts such as how to identify an individual patient, how to classify an EHR user (eg, based on their clinical roles), or how to determine where EHR work was physically performed (in clinic/hospital vs off-site/at home) are not standardized. For example, if a patient’s identity is unknown (eg, if they arrive at the ED unconscious), an audit log may track them under a John/Jane Doe account and then shift to logging under their identified medical record number (MRN) once they are identified; alternatively, the audit log may be set up to go back and overwrite the John/Jane Doe entries with the patient’s MRN such that a retrospective data pull would not include any John/Jane Doe entries. EHR user role types are also not consistent, and sites usually implement their own categories based on internal Human Resources data. Roles can also change over time (eg, when a resident transitions to an attending), or an individual can hold 2 roles simultaneously (eg, a clinical role and an administrative role), such that even a straightforward effort to capture measures for a given type of clinician can require a large number of assumptions and decisions. Similarly, clinicians may see patients in multiple settings (eg, a surgeon or anesthesiologist seeing patients in an ICU, an outpatient clinic, and the operating room); in such cases, appropriately classifying a patient encounter context (eg, ICU, outpatient) from audit logs can be challenging. Doing so is infeasible if the EHR does not have a way to represent context; more often, context is represented but relies on the user to select the appropriate context when logging in, resulting in a noisy measure.
Context could be inferred by ascertaining the physical location of work activities but doing so is also complex and noisy. For example, although IP addresses provide an approximate proxy for institutional location for desktop devices, mobile access is pervasive and locations are much more difficult to determine. This problem could be solved by routinely capturing geolocation for all devices as part of audit log data. As this may raise user privacy concerns, less granular options such as geofencing could achieve the same results with minimal loss of precision.22 Although it may feel like a heavy lift to pursue such options, capturing context is critical to accurately construct a widely used measure that supports efforts to address clinician burnout: WoW.
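As a sketch of the kind of inference currently required, the short Python fragment below classifies access events as on-site or remote by matching source IP addresses against institutional network ranges; the address blocks and sample addresses are invented for illustration. Events that cannot be matched, such as much mobile traffic, remain indeterminate, which is exactly the noise that richer location capture such as geofencing would reduce.

import ipaddress

# Hypothetical institutional network blocks; real ranges would come from the IT department.
ONSITE_NETWORKS = [
    ipaddress.ip_network("10.20.0.0/16"),
    ipaddress.ip_network("192.168.50.0/24"),
]

def classify_location(source_ip):
    """Label an audit log event as on-site, remote, or unknown from its source IP address."""
    if source_ip is None:
        return "unknown"  # eg, mobile app traffic without a usable source address
    addr = ipaddress.ip_address(source_ip)
    return "on-site" if any(addr in net for net in ONSITE_NETWORKS) else "remote"

for ip in ("10.20.4.17", "73.92.101.8", None):
    print(ip, "->", classify_location(ip))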
To begin to address these complex measure construction decisions, EHR vendors have created and made available to clients and researchers measures derived from audit log data (eg, Epic’s Signal measures; Cerner’s Lights-On measures). The appeal is that these measures support those who are interested in using audit log-derived measures but lack the expertise and/or resources to address these issues. However, vendor-developed measures come with several limitations. First, they are limited to the vendor’s selected measurement methodology, which can be quite coarse. For example, vendors have not solved the context issue and therefore construct the WoW measure using fixed time cutoffs (eg, all EHR work performed between 6 pm and 6 am). This is compounded by a second limitation: EHR vendor measures often lack the documentation and transparency needed to understand how they were derived from audit log data, which can make them problematic for researchers to use. As a field, this leads to the challenges described by Rule and colleagues, in which measures vary based on source (vendor vs researcher-developed), creating considerable variability and challenges in interpreting results.16 Moreover, vendor measures often change over time, without advance notice, making it difficult to compare the same measure over time.23
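To illustrate how coarse such a fixed-cutoff definition is, the sketch below (assuming only event timestamps are available, with illustrative data) counts any activity between 6 pm and 6 am as work outside work, regardless of whether the clinician was still at the clinic, at home catching up, or deliberately time-shifting their day.

from datetime import datetime

def is_wow_event(ts, evening_start_hour=18, morning_end_hour=6):
    """Fixed-cutoff rule: an event counts as work outside work if it occurs between
    6 pm and 6 am, irrespective of where or why the work was performed."""
    return ts.hour >= evening_start_hour or ts.hour < morning_end_hour

events = [
    datetime(2023, 1, 5, 14, 30),  # mid-afternoon: not counted
    datetime(2023, 1, 5, 21, 15),  # evening charting: counted
    datetime(2023, 1, 6, 5, 45),   # early-morning inbox work: counted
]
wow_count = sum(is_wow_event(ts) for ts in events)
print(f"{wow_count} of {len(events)} events fall in the fixed WoW window")

A location- and schedule-aware definition would require exactly the contextual data elements discussed above.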
Despite calls for more transparency around vendor measures17 and more standardized audit log data (both semantic and structural), EHR vendors have not pursued voluntary standardization. Thus, creating reproducible, transparent, and standardized measures likely requires a top-down approach. The ongoing expansion of USCDI offers a natural policy lever. To address these issues, our first recommendation is to use USCDI to expand the scope of mandated audit log data elements (a new data class) and set semantic standards (new code sets). To begin, USCDI has a provenance class with author time stamp and author organization as data elements, but these lack standardized code sets.24 Thus, there exists an opportunity to define such code sets in future versions of USCDI (eg, a code set of role types) as well as expand provenance data elements (eg, geofencing location classification).
UNDERSTANDING THE DRIVERS OF VARIABILITY IN MEASUREMENT USING AUDIT LOGS
Studies using audit log-derived measures have predominantly focused on characterizing frontline clinician EHR work, and in particular on EHR time; for example, 59 of the 102 articles in Rule et al16 reported on the duration of EHR use. Although the main findings often feature a large amount of time spent on average, these studies also reveal substantial variation.25 This variation exists across specialties, as would be expected, but is also present within specialties, which is harder to explain. For example, Overhage and McCallie (2020)26 report active EHR time per encounter by specialty, differentiating specialties with low EHR time, such as cardiology, from those with high EHR time, such as primary care. While the means are quite different (682 vs 1188 s, respectively), the within-specialty variability is dramatic (2312 and 2663 s, respectively).
It is hard to know what causes such variability and whether it matters. For example, it may be that a physician with very high EHR time per encounter simply has a greater clinical workload, or it may be that the individual is very inefficient at EHR work and requires considerable support. Unfortunately, there is no standardized way to account for clinical workload, which is influenced by a number of factors including the context of work (eg, inpatient vs outpatient), patient complexity, patient load, and expertise. Such effects are likely exacerbated among physician trainees who transition between clinical settings with varying degrees of clinical workload, patient load, and complexity.19 Thus, variability in EHR measures in this population is particularly hard to interpret.
Similarly, observing high levels of EHR WoW time could reflect a physician who is overwhelmed and likely to experience burnout, or it could reflect a physician who chooses to chart at night to be able to take time off during the day for personal commitments (something we would expect to be protective against burnout). Use of scribes (and how scribe activity is logged) is another factor that could explain variability but is rarely accounted for. In their review, Rule et al16 found that 5 studies (out of 102) described the impact of scribes on documentation time or note length; however, none of the studies directly examined how scribes affect the accuracy of measuring EHR time.
Although there is value in examining averages and trends, especially when there is a clear normative direction (eg, very high levels of total EHR time), more impactful insights will come from identifying where variability reveals opportunities for improving more meaningful outcomes, such as burnout. We are starting to see these types of studies emerge; examples include those that have utilized audit logs to assess the association between time spent on EHR use and exhaustion,27 in-basket messages and physician well-being,28 trainee workload and burnout and errors,19 and cognitive effort and workload.29 Such work can be facilitated by efforts to make it easier to incorporate contextual measures of EHR work—perhaps beginning with patient scheduling data and patient complexity. As an extension of our prior recommendation, these should be prioritized as new USCDI data classes.
MOVING FROM DESCRIPTIVE MEASURES TO PATIENT OUTCOMES
Ultimately, the field needs to expand from studies focused on describing EHR-based work (and explaining associated variability) to insights focused on patient and clinical outcomes. However, the challenges of ascertaining outcomes associated with audit log-based measures are non-trivial. First, audit logs typically capture generic actions—such as whether lab results were viewed. In the context of explaining a clinical outcome, it is usually necessary to capture not only whether a laboratory test result was viewed but also whether the specific result(s) relevant to that outcome were viewed. In some cases, this more precise measure may not be feasible (ie, the screen contained many results and there is no way to know what exactly the clinician viewed); in other cases, it might be feasible but complex (ie, linking from an audit log entry to another entry associated with individual test result information that specifies which result was viewed). Furthermore, the same information—like an individual test result—can often be viewed in many different ways (eg, on a lab result tab, via an inbox message), such that it is difficult to comprehensively measure a given activity or set of activities.
Determining how to measure behaviors expected to impact patient outcomes requires deep clinical and audit log expertise. More importantly, given the granular nature of HIPAA/MU mandated audit logs, outcomes need to be assessed proximally to the actions under consideration. In other words, the impact of a set of audit log actions is likely to be associated with outcomes in the near term (eg, on the order of minutes or hours) rather than the long term (eg, several days or months). For example, the cognitive load associated with constant task interruptions due to alerts is likely to affect a clinician’s tasks during that shift (eg, leading to errors during that day).30 Again, nuanced clinical and analytic expertise is required to design the approach to such alignment. For example, we are undertaking a project to assess whether audit log measures of cognitive load relate to delivery of low-value care; we had to limit this investigation to primary care encounters where we could leverage existing measures of low-value care that could be tied to actions taken during specific encounters (eg, an order for imaging for low-back pain).
Complementing efforts to advance audit log-based measures that offer new insights into patient and clinical outcomes are efforts to expand the breadth of audit log measures—particularly to capture team-level measures. Measures of care coordination have been constructed by overlaying multiple clinicians’ audit log events, including the roles (eg, resident, nurse, attending physician), timing of the events, and the sequence of actions, to re-create workflow patterns.31,32 Similarly, team coordination measures can be constructed by leveraging audit log data on EHR-mediated communication within care teams (eg, SecureChat, inbox messaging). Other studies have used audit log data to measure team structure, with 1 study establishing an association with patient length of stay.33 In our own work, we have found a relationship between prior team experience and door-to-needle time for patients experiencing an acute stroke.34
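As one hedged illustration of how such team-level measures can be constructed (the event schema, roles, and 60-min co-access window below are illustrative assumptions rather than a validated specification), the following sketch links users who access the same patient's chart close together in time and weights each tie by the number of such shared accesses.

from collections import defaultdict
from itertools import combinations

# Hypothetical audit log rows: (user_id, role, patient_id, minutes since midnight).
rows = [
    ("rn_1", "nurse", "pt_9", 100),
    ("md_2", "resident", "pt_9", 130),
    ("md_3", "attending", "pt_9", 400),
    ("md_2", "resident", "pt_4", 150),
    ("rn_1", "nurse", "pt_4", 160),
]

WINDOW = 60  # assumed: accesses to the same chart within 60 min indicate shared work

# Group accesses by patient, labeling nodes with user and role.
by_patient = defaultdict(list)
for user, role, patient, ts in rows:
    by_patient[patient].append((f"{user} ({role})", ts))

# Build weighted ties between users with near-in-time access to the same chart.
edges = defaultdict(int)
for accesses in by_patient.values():
    for (u1, t1), (u2, t2) in combinations(accesses, 2):
        if u1 != u2 and abs(t1 - t2) <= WINDOW:
            edges[tuple(sorted((u1, u2)))] += 1

print(dict(edges))  # eg, {('md_2 (resident)', 'rn_1 (nurse)'): 2}

Networks of this kind can then feed standard measures of team structure and prior shared experience of the sort used in the studies cited above.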
Although these studies reveal the significant potential of team-level measures, these measures face the same challenges of reproducibility, transparency, and standardization, such that our associated recommendation to expand USCDI to promote uniform approaches is all the more critical. However, it will take time for new measures and their underlying data elements to develop and be tested in a variety of contexts, such that they are sufficiently mature before they are proposed for integration into USCDI. For example, while our team constructed an audit log-based measure to capture prior team experience in an ED context, it will be important for the measure to be adapted for teams in other inpatient and even ambulatory settings. There is precedent for such adaptation; the rate of wrong-patient ordering errors, measured using the audit log-derived retract-and-reorder (RAR) clinical decision rule, is a validated safety measure endorsed by the National Quality Forum (NQF Measure #2723).35,36 Over time, wrong-patient orders measured using the RAR decision rule have been used extensively to measure the association between EHR-based clinician actions and errors in a wide range of clinical contexts.35–39
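For readers unfamiliar with how the RAR rule is operationalized from audit log events, the Python sketch below flags an order that is placed for one patient, retracted shortly afterward, and then re-placed by the same clinician for a different patient. The 10-min windows, the sample events, and the simplified order-matching shown here are assumptions for illustration; implementations intended for measurement should follow the validated NQF specification.

# Hypothetical order events: (clinician, patient, order_name, action, minutes since midnight).
orders = [
    ("md_7", "pt_1", "CBC", "place", 600),
    ("md_7", "pt_1", "CBC", "retract", 604),
    ("md_7", "pt_2", "CBC", "place", 607),
    ("md_7", "pt_3", "MRI brain", "place", 700),
]

RETRACT_WINDOW = 10  # assumed: retraction within 10 min of placement
REORDER_WINDOW = 10  # assumed: reorder on a different patient within 10 min of retraction

def find_rar_candidates(events):
    """Flag place -> retract -> re-place sequences by the same clinician on different patients."""
    places = [e for e in events if e[3] == "place"]
    retracts = [e for e in events if e[3] == "retract"]
    candidates = []
    for clin, pt, name, _, t_place in places:
        for r_clin, r_pt, r_name, _, t_retract in retracts:
            same_order = (clin, pt, name) == (r_clin, r_pt, r_name)
            if not same_order or not 0 <= t_retract - t_place <= RETRACT_WINDOW:
                continue
            for clin2, pt2, name2, _, t_reorder in places:
                if (clin2 == clin and name2 == name and pt2 != pt
                        and 0 <= t_reorder - t_retract <= REORDER_WINDOW):
                    candidates.append((clin, name, pt, pt2))
    return candidates

print(find_rar_candidates(orders))  # eg, [('md_7', 'CBC', 'pt_1', 'pt_2')]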
To facilitate the adaptation process, we recommend the creation of a repository for disseminating existing audit log measures (along with public availability of underlying source code). Through better dissemination, the community (including researchers and vendors) will gain a better understanding of where meaningful differences exist and how to reconcile them to allow for a feasible federal standard. Furthermore, such dissemination will facilitate adaptation of existing measures, which are often developed in 1 particular context or setting, to the broad set needed to support a federal standard. A model for a repository that could specifically support identification of emerging candidate standards for incorporation into USCDI is the Phenotype KnowledgeBase (http://phekb.org), an online database of disease phenotype algorithms developed using EHR data.40 PheKB lists phenotype algorithms that have been developed and validated by investigators (eg, chronic kidney disease) and allows investigators to collaborate on, modify, and implement these algorithms using their own datasets.
A similar repository for audit log measures would be instrumental in creating general consensus for measure standardization and would allow for community input in modifying and implementing such measures for research and operational purposes. Measures available from such a repository should specify the sources of audit log data needed, secondary data requirements (eg, patient events), queries for retrieving data, data processing pipelines, and publicly available algorithms/code for deriving the measures. This would enable the community to implement and replicate the measures for specific projects and would create a crowdsourced, practice-based foundation for identifying new USCDI audit log standards.
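As a purely illustrative example of what a machine-readable repository entry might contain (the field names, values, and URL below are hypothetical, not a proposed standard), a shared measure definition could be as simple as the following:

from dataclasses import dataclass

@dataclass
class AuditLogMeasure:
    """Illustrative metadata for a shared measure; fields mirror the elements listed above."""
    name: str
    audit_log_events_required: list   # source audit log event types
    secondary_data_required: list     # eg, scheduling data, patient events
    parameters: dict                  # tunable choices that must be reported with results
    code_url: str                     # publicly available implementation

wow_measure = AuditLogMeasure(
    name="Work outside work (fixed cutoff)",
    audit_log_events_required=["any user action with a timestamp and user identifier"],
    secondary_data_required=["clinic schedule (for a scheduled-hours variant)"],
    parameters={"evening_start_hour": 18, "morning_end_hour": 6, "inactivity_timeout_s": 300},
    code_url="https://example.org/measures/wow",  # placeholder, not a real repository
)
print(wow_measure.name, wow_measure.parameters)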
CONCLUSION
Working with audit log data is beginning to produce high-value insights along with greater clarity about the challenges. Much can be done to address these challenges, improving the efficiency and robustness of the process of deriving meaningful measures. Specifically, we recommend first building a publicly available repository to share existing audit log measures, allowing researchers and vendors to be more transparent and to iterate collaboratively. Such a repository would, in turn, identify where data classes and elements are sufficiently mature to be proposed for incorporation into USCDI. These complementary recommendations will usher in the next era of audit log goldmining—where the nuggets are bigger and found more easily.
FUNDING
Support for authors’ audit log-based research came from the National Library of Medicine (R01 LM013778-01), the American Medical Association, and the Gordon and Betty Moore Foundation.
AUTHOR CONTRIBUTIONS
TK and JA-M developed the concepts presented in this manuscript. Both authors were equally involved in drafting the initial manuscript and its revisions, and both provided approval for publication.
CONFLICT OF INTEREST STATEMENT
None declared.
Contributor Information
Thomas Kannampallil, Department of Anesthesiology, Washington University School of Medicine, St Louis, Missouri, USA; Institute for Informatics, Washington University School of Medicine, St Louis, Missouri, USA.
Julia Adler-Milstein, Department of Medicine, Center for Clinical Informatics and Improvement Research, University of California, San Francisco, California, USA.
Data Availability
There are no data available for sharing.
REFERENCES
1. Adler-Milstein J, Adelman JS, Tai-Seale M, Patel VL, Dymek C. EHR audit logs: a new goldmine for health services research? J Biomed Inform 2020; 101: 103343.
2. Kannampallil T, Abraham J, Lou SS, Payne PR. Conceptual considerations for using EHR-based activity logs to measure clinician burnout and its effects. J Am Med Inform Assoc 2021; 28 (5): 1032–7.
3. DiAngi YT, Stevens LA, Halpern-Felsher B, Pageler NM, Lee TC. Electronic health record (EHR) training program identifies a new tool to quantify the EHR time burden and improves providers’ perceived control over their workload in the EHR. JAMIA Open 2019; 2 (2): 222–30.
4. Ratanawongsa N, Matta GY, Bohsali FB, Chisolm MS. Reducing misses and near misses related to multitasking on the electronic health record: observational study and qualitative analysis. JMIR Hum Factors 2018; 5 (1): e4.
5. Ahmed A, Chandra S, Herasevich V, Gajic O, Pickering BW. The effect of two different electronic health record user interfaces on intensive care provider task load, errors of cognition, and performance. Crit Care Med 2011; 39 (7): 1626–34.
6. Gardner RL, Cooper E, Haskell J, et al. Physician stress and burnout: the impact of health information technology. J Am Med Inform Assoc 2019; 26 (2): 106–14.
7. National Academies of Sciences, Engineering, and Medicine, National Academy of Medicine, Committee on Systems Approaches to Improve Patient Care by Supporting Clinician Well-Being. Taking Action Against Clinician Burnout: A Systems Approach to Professional Well-Being. National Academies Press; 2019.
8. Shanafelt TD, Dyrbye LN, Sinsky C, et al. Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction. Mayo Clin Proc 2016; 91 (7): 836–48.
9. Kroth PJ, Morioka-Douglas N, Veres S, et al. The electronic elephant in the room: physicians and the electronic health record. JAMIA Open 2018; 1 (1): 49–56.
10. Babbott S, Manwell LB, Brown R, et al. Electronic medical records and physician stress in primary care: results from the MEMO Study. J Am Med Inform Assoc 2014; 21 (e1): e100–6.
11. Poissant L, Pereira J, Tamblyn R, Kawasumi Y. The impact of electronic health records on time efficiency of physicians and nurses: a systematic review. J Am Med Inform Assoc 2005; 12 (5): 505–16.
12. Baumann LA, Baker J, Elshaug AG. The impact of electronic health record systems on clinical documentation times: a systematic review. Health Policy 2018; 122 (8): 827–36.
13. Martin SK, Tulla K, Meltzer DO, Arora VM, Farnan JM. Attending physician remote access of the electronic health record and implications for resident supervision: a mixed methods study. J Grad Med Educ 2017; 9 (6): 706–13.
14. Arndt BG, Beasley JW, Watkinson MD, et al. Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med 2017; 15 (5): 419–26.
15. Rule A, Chiang MF, Hribar MR. Using electronic health record audit logs to study clinical activity: a systematic review of aims, measures, and methods. J Am Med Inform Assoc 2020; 27 (3): 480–90.
16. Rule A, Melnick ER, Apathy NC. Using event logs to observe interactions with electronic health records: an updated scoping review shows increasing use of vendor-derived measures. J Am Med Inform Assoc, Accepted.
17. Sinsky CA, Rule A, Cohen G, et al. Metrics for assessing physician activity using electronic health record log data. J Am Med Inform Assoc 2020; 27 (4): 639–43.
18. Sinha A, Stevens LA, Su F, Pageler NM, Tawfik DS. Measuring electronic health record use in the pediatric ICU using audit-logs and screen recordings. Appl Clin Inform 2021; 12 (4): 737–44.
19. Lou SS, Lew D, Harford D, et al. Temporal associations between EHR-derived workload, burnout, and errors: a prospective cohort study. J Gen Intern Med 2022; 37 (9): 2165–72.
20. Ouyang D, Chen JH, Hom J, Chi J. Internal medicine resident computer usage: an electronic audit of an inpatient service. JAMA Intern Med 2016; 176 (2): 252–4.
21. Melnick ER, Fong A, Nath B, et al. Analysis of electronic health record use and clinical productivity and their association with physician turnover. JAMA Netw Open 2021; 4 (10): e2128790.
22. Nguyen KT, Olgin JE, Pletcher MJ, et al. Smartphone-based geofencing to ascertain hospitalizations. Circ Cardiovasc Qual Outcomes 2017; 10 (3): e003326.
23. Hron JD, Lourie E. Have you got the time? Challenges using vendor electronic health record metrics of provider efficiency. J Am Med Inform Assoc 2020; 27 (4): 644–6.
24. USCDI Provenance. https://www.healthit.gov/isa/uscdi-data-class/provenance#uscdi-v1. Accessed June 21, 2022.
25. Neprash HT, Everhart A, McAlpine D, Smith LB, Sheridan B, Cross DA. Measuring primary care exam length using electronic health record data. Med Care 2021; 59 (1): 62–6.
26. Overhage JM, McCallie D Jr. Physician time spent using the electronic health record during outpatient encounters: a descriptive study. Ann Intern Med 2020; 172 (3): 169–74.
27. Adler-Milstein J, Zhao W, Willard-Grace R, Knox M, Grumbach K. Electronic health records and burnout: time spent on the electronic health record after hours and message volume associated with exhaustion but not with cynicism among primary care clinicians. J Am Med Inform Assoc 2020; 27 (4): 531–8.
28. Tai-Seale M, Dillon EC, Yang Y, et al. Physicians’ well-being linked to in-basket messages generated by algorithms in electronic health records. Health Aff (Millwood) 2019; 38 (7): 1073–8.
29. Lou SS, Kim S, Harford D, et al. Effect of clinician attention switching on workload and wrong-patient errors. Br J Anaesth 2022; 129 (1): e22–4.
30. Anderson JR. Spanning seven orders of magnitude: a challenge for cognitive modeling. Cogn Sci 2002; 26 (1): 85–112.
31. Li P, Chen B, Rhodes E, et al. Measuring collaboration through concurrent electronic health record usage: network analysis study. JMIR Med Inform 2021; 9 (9): e28998.
32. Chen B, Alrifai W, Gao C, et al. Mining tasks and task characteristics from electronic health record audit logs with unsupervised machine learning. J Am Med Inform Assoc 2021; 28 (6): 1168–77.
33. Chen Y, Lehmann CU, Hatch LD, Schremp E, Malin BA, France DJ. Modeling care team structures in the neonatal intensive care unit through network analysis of EHR audit logs. Methods Inf Med 2019; 58 (4/5): 109–23.
34. Noshad M, Rose CC, Thombley R, et al. Context is key: using the audit log to capture contextual factors affecting stroke care processes. AMIA Annu Symp Proc 2020; 2020: 953–62.
35. Adelman JS, Applebaum JR, Schechter CB, et al. Effect of restriction of the number of concurrently open records in an electronic health record on wrong-patient order errors: a randomized clinical trial. JAMA 2019; 321 (18): 1780–7.
36. Adelman JS, Kalkut GE, Schechter CB, et al. Understanding and preventing wrong-patient electronic orders: a randomized controlled trial. J Am Med Inform Assoc 2013; 20 (2): 305–10.
37. Adelman J, Aschner J, Schechter C, et al. Use of temporary names for newborns and associated risks. Pediatrics 2015; 136 (2): 327–33.
38. Adelman JS, Aschner JL, Schechter CB, et al. Evaluating serial strategies for preventing wrong-patient orders in the NICU. Pediatrics 2017; 139 (5): e1–7.
39. Udeh C, Canfield C, Briskin I, Hamilton AC. Association between limiting the number of open records in a tele-critical care setting and retract–reorder errors. J Am Med Inform Assoc 2021; 28 (8): 1791–5.
40. Kirby JC, Speltz P, Rasmussen LV, et al. PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability. J Am Med Inform Assoc 2016; 23 (6): 1046–52.