Journal of the American Medical Informatics Association: JAMIA
. 2021 Apr 26;28(8):1676–1682. doi: 10.1093/jamia/ocab042

Measuring time clinicians spend using EHRs in the inpatient setting: a national, mixed-methods study

Genna R Cohen 1, Jessica Boi 2, Christian Johnson 3, Llew Brown 1, Vaishali Patel 3
PMCID: PMC8324233  PMID: 33899105

Abstract

Objective

To understand hospitals’ use of EHR audit-log-based measures to address burden associated with inpatient EHR use.

Materials and Methods

Using mixed methods, we analyzed 2018 American Hospital Association Information Technology Supplement Survey data (n = 2864 hospitals; 64% response rate) to characterize measures used and provided by EHR vendors to track clinician time spent documenting. We interviewed staff from the top 3 EHR vendors that provided these measures. Multivariable analyses identified variation in use of the measures among hospitals with these 3 vendors.

Results

Fifty-three percent of hospitals reported using EHR data to track clinician time documenting, compared to 68% of hospitals using an EHR from the top 3 vendors. Among hospitals with EHRs from these vendors, usage was significantly lower among rural hospitals and independent hospitals (P < .05). Two of these vendors provided measures of time spent on specific tasks, while the third measured an aggregate of auditable activities. Vendors varied in the underlying data used to create measures, measure specification, and data displays.

Discussion

Tools to track clinicians’ documentation time are becoming more available. The measures provided differ across vendors, and disparities in use exist across hospitals. Increasing the specificity of the standards underlying the data would support a common set of core measures and make these measures more widely available.

Conclusion

Although half of US hospitals use measures of time spent in the EHR derived from EHR-generated data, work remains to make such measures and analyses more broadly available to all hospitals and to increase their utility for national burden measurement.

Keywords: EHR, hospital, Metadata, audit log, provider burden

INTRODUCTION

Electronic health record (EHR) systems have become widespread in hospitals across the United States,1 thanks in part to the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009, which provided financial incentives and resources to encourage EHR adoption.2 But as EHR use has grown, so too have concerns about the associated clinician burden. The Office of the National Coordinator for Health Information Technology (ONC) noted recently that “healthcare workers have expressed challenges related to time and effort associated with using the EHR during care delivery, as well as the administrative burden of capturing documentation to support billing and reimbursement.”3

ONC established 3 goals in support of provider burden reduction: (1) reduce the effort and time required to record information in EHRs; (2) reduce the effort and time required to meet regulatory reporting requirements; (3) improve the functionality and intuitiveness (ease of use) of EHRs.3 To assess progress toward meeting these goals, it is imperative to measure how clinicians are using EHRs and to assess the extent to which their use may contribute to burden. One option for doing so without adding to providers’ burden involves EHR log data. Initially developed to track health record access per security requirements related to HIPAA and EHR Meaningful Use criteria, audit logs record the actions that clinicians take in their EHR systems. These audit logs, along with other types of EHR system data, have become a powerful source of information on EHR use patterns and clinician burden.4

A recent report released by the Office of the Assistant Secretary for Planning and Evaluation (ASPE) under the US Department of Health and Human Services concluded that measures based on audit logs may be useful for organizations to internally capture clinician activity and track change over time, particularly in response to interventions designed to reduce burden and improve EHR use.5 Several EHR vendors are offering their clients reports that provide measures of how clinicians are using EHRs based upon log data.6–8 Studies have also leveraged log data within health systems to examine the association between volume of “in-basket” messages received by physicians with provider burnout.9,10

Much of the work leveraging log data has focused on outpatient settings, including specifying a set of measures to assess documentation time.8,11 However, it is unclear whether the outpatient measures are applicable to the inpatient setting given its unique pressures and workflow dynamics. For example, the level of acuity and wide range of clinical data related to inpatient encounters create time constraints and documentation complexity distinct from that required during outpatient encounters. Furthermore, the dynamics of shift work in the inpatient setting complicate efforts to benchmark measures based on audit logs, as it can be difficult to map clinician work in the EHR against time metrics or specific patients to foster comparisons across users.

OBJECTIVE

To better understand the potential for audit-log-based measures to help hospitals assess burden associated with EHR use, we sought to answer 3 research questions:

  1. Are hospitals using their EHR or other IT system data (such as audit log data) to track the amount of time clinicians spend completing documentation in the inpatient setting? If so, how does usage vary by EHR vendor, hospital, and area characteristics?

  2. What types of measures do vendors provide to hospitals to support this work? What are the sources of data underlying these measures?

  3. How do hospitals report using these measures?

MATERIALS AND METHODS

We used a mixed-methods approach for this study. In the first phase, we used quantitative methods to identify characteristics of hospitals that reported using EHR data to track the time clinicians spend completing documentation. In the second phase of the study, we used findings from the quantitative analysis to inform qualitative interviews with staff of the EHR vendors used by a large proportion of hospitals that reported using EHR data (>30%). No ethical approvals were required for this research because it does not meet the 45 CFR Part 46 definition of human subject research.

Quantitative methods

We obtained data for the study from the 2018 American Hospital Association (AHA) Information Technology (IT) Supplement Survey. AHA invited all US hospitals to participate in the survey, with the target respondent being the person most knowledgeable about the hospital’s health IT (typically a chief information officer). The surveys, which were administered from January to May 2019, had a response rate of 64 percent for nonfederal acute care hospitals, resulting in a sample size of 2864 hospitals.

We performed statistical analyses using SAS version 9.4 (SAS Institute Inc, Cary, NC), using hospital-level weights to produce nationally representative estimates. We derived these weights by calculating the inverse predicted propensity of survey response. We used a logistic regression model to predict the propensity of survey response as a function of hospital characteristics, including size, ownership, teaching status, system affiliation, rurality, geographic region, and presence of a cardiac intensive care unit.
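The weighting procedure described above can be sketched as follows. This is an illustrative Python version (the authors used SAS), and the feature matrix is a hypothetical stand-in for the hospital characteristics listed in the text:

```python
# Sketch of inverse-propensity nonresponse weighting: model P(responded | X)
# with logistic regression, then weight each responding hospital by the
# inverse of its predicted response propensity. X is a hypothetical matrix
# of hospital characteristics (size, ownership, teaching status, etc.).
import numpy as np
from sklearn.linear_model import LogisticRegression

def response_weights(X: np.ndarray, responded: np.ndarray) -> np.ndarray:
    """Return 1/propensity weights for the responding hospitals only."""
    model = LogisticRegression(max_iter=1000).fit(X, responded)
    propensity = model.predict_proba(X)[:, 1]  # predicted P(respond)
    return 1.0 / propensity[responded == 1]
```

Hospitals that resemble frequent nonresponders receive larger weights, so the weighted sample approximates the full population of nonfederal acute care hospitals.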

The primary outcome for this analysis was based on a hospital’s response to the survey question, “Does your hospital use EHR or other IT system data (eg, audit log data) to track the amount of time clinicians spend completing documentation?” We considered hospitals that did not respond to this question (n = 26) and those that responded no (n = 1047), do not know (n = 203), or NA (n = 23) as not using system-generated data to track documentation time. Survey respondents also identified the vendor of their primary inpatient EHR system, defined as the system used by the most patients or the system in which the hospital has made the single largest investment.
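The outcome coding above collapses every response except an affirmative one into "not using system-generated data." A minimal sketch of that recode (the response strings are simplified stand-ins for the survey instrument's options):

```python
# Recode the primary outcome: "no", "do not know", "NA", and missing
# responses are all treated as NOT using EHR data to track documentation
# time; only an explicit "yes" counts as using it.
from typing import Optional

def tracks_documentation_time(response: Optional[str]) -> bool:
    return response is not None and response.strip().lower() == "yes"
```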

To determine which EHR vendors might be offering their clients reports of documentation time, such as dashboards, we aggregated hospitals’ responses by EHR vendor to identify the vendors whose hospital clients reported the highest rates of using EHR data. Our assumption was that if 30% or more of an EHR vendor’s hospital clients used these reports, that vendor was more likely to be providing its clients standardized analyses of their audit log data. To confirm our assumption, we contacted 2 EHR vendors whose hospital clients reported lower rates of usage of these measures (<30%) and indeed found that they did not routinely provide hospitals with measures. Together, the 3 EHR vendors that met our criteria represented almost three-quarters of the hospital market (74%), based on responses to the AHA IT Supplement Survey from nonfederal acute care hospitals.
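The vendor screen described above amounts to grouping hospitals by primary EHR vendor and keeping vendors whose clients clear the 30% usage threshold. A minimal sketch (the DataFrame layout and vendor labels are hypothetical):

```python
# Aggregate hospitals' yes/no usage responses by primary EHR vendor and
# return the vendors whose hospital clients report usage at or above the
# threshold (30% in the study).
import pandas as pd

def high_use_vendors(df: pd.DataFrame, threshold: float = 0.30) -> list:
    """df needs a 'vendor' column and a binary 'tracks_time' column."""
    rates = df.groupby("vendor")["tracks_time"].mean()
    return sorted(rates[rates >= threshold].index)
```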

We then conducted subanalyses focused on the hospitals using the 3 EHR vendors with the highest rates of EHR data use (>30%). We assessed variation in hospitals’ use of EHR or other IT system data to track documentation time among hospitals using these EHR vendors for the following hospital characteristics: status as a Critical-Access Hospital, number of beds, system affiliation, teaching status, profit structure, and geographic region (urban/suburban or rural). We performed a multivariable logistic regression to identify hospital characteristics significantly associated with using system-generated data to track documentation time (P < .05).

Finally, we conducted a secondary analysis among hospitals using these top EHR vendors that reported using EHR data to track documentation time, with the goal of identifying how they used their data. Respondents could select any of the following options that applied to their hospital: vendor product improvement and troubleshooting, identify providers in need of training and support, provider burden reduction initiatives, performance/efficiency monitoring, identify areas to improve clinical workflow, and others.

Qualitative methods

To contextualize trends from the hospital-reported use of vendor measures, we conducted semistructured interviews with EHR vendor representatives from the top 3 vendors (described in the prior section) between March 2020 and May 2020 to learn about their offerings in the inpatient setting. We conducted 2 rounds of interviews, each about 1 hour long, with 2–4 executives and staff from each of the 3 vendors that met our inclusion criteria. In each instance, we spoke with the people most comfortable discussing the measures; respondents’ titles varied and included senior manager and vice president. Prior to each interview, we informed respondents that the information they shared would be anonymized and made public, and we received their consent to record the discussion to improve the accuracy of notes. We also offered respondents an opportunity to review the manuscript before submission to ensure that all information was reported accurately and was appropriate for public release. In the first interview, the respondents described the information they provide to hospitals about the time physicians and other users spend in the EHR. They also discussed the factors affecting their decision to develop the currently available measures, the data sources those measures draw on, and their limitations.

We then used data analysis matrices to compare findings across vendors, assigning each vendor a column in the matrix and summarizing different elements of their offerings in assigned rows (for example, 1 row for measures of total time spent in the EHR and 1 row for how frequently measures are updated for hospitals). One author (either GC or JB) filled in the vendor’s column based on notes and the other author (either JB or GC) reviewed to ensure the summaries were accurate and used comparable language. After comparing the rows, we conducted a second interview with respondents to ask outstanding questions and validate our characterization of their offerings while viewing the measures and data visualizations accessible to hospitals. A 2-person team conducted each interview, with 1 interviewer taking notes to capture the conversation and describe the visuals.

RESULTS

Hospitals vary in their use of EHR data to track clinician time spent documenting

Nationally, 53% of hospitals reported using their EHR or other IT system data (such as audit log data) to track the time clinicians spend completing documentation (Figure 1). However, this varied widely by EHR vendor. Overall, 68% of the hospitals using the top 3 EHR vendors (those whose hospital clients reported the highest rates of use) reported using EHR data to track documentation time, compared with 12% of hospitals using the remaining EHR vendors (Figure 1).

Figure 1.

Rates of nonfederal acute care hospitals using EHR data to track documentation time among those with top 3 vs other EHR vendors.

Source: Analysis of AHA IT Supplement Survey, 2018.

Among hospitals using the top 3 EHR vendors, the use of EHR data to track documentation varied by area and organizational characteristics (Supplementary Appendix, Table S1). Our multivariable, adjusted analyses show that among hospitals using the top 3 EHR vendors, those located in urban/suburban areas had significantly higher odds of using these data compared to those located in rural areas (OR 1.70, 95% CI 1.39–2.08) (Table 1). System-affiliated hospitals had over 3 times higher odds of using these data compared to independent hospitals (OR 3.77, 95% CI 3.14–4.52) (Table 1).

Table 1.

Adjusted odds of a hospital’s use of EHR or other IT system data to track documentation time by hospital and area characteristics, 2018

Characteristic — Odds Ratio (95% Confidence Interval)
Non-Critical Access Hospital vs Critical Access Hospital — 0.89 (0.70–1.12)
Urban/Suburban vs Rural location — 1.70* (1.39–2.08)
Medium/Large Hospital vs Small Hospital — 1.21 (0.98–1.49)
System Affiliation vs Independent — 3.77* (3.14–4.52)
Teaching Hospital vs Non-Teaching Hospital — 1.20 (1.00–1.45)
State, County, or City Hospital vs Non-Government, For-Profit Hospital — 0.85 (0.62–1.16)
Non-Government, Not-For-Profit Hospital vs Non-Government, For-Profit Hospital — 1.24 (0.97–1.58)

Source: Analysis of AHA IT Supplement Survey, 2018.

* Statistically significant difference from reference group (P < .05). The denominator is hospitals using a top 3 EHR vendor (n = 2214). A hospital was considered small if it had fewer than 100 beds, rural if it was in a nonmetropolitan statistical area, and Critical Access if it had fewer than 25 beds and was at least 35 miles from another general or Critical-Access Hospital.

EHR vendors’ measures offer varied ways for hospitals to identify patterns of EHR use

Interviews revealed that vendors initially built measures for their own internal use and adapted them for different audiences over time. In the words of 1 respondent, “We put some business logic on top of [existing data log infrastructure] to measure active time in the EHR.” Vendors first created measures to confirm that adoptees were using EHR functionality, to troubleshoot appropriate access, to learn more about application efficiency, and to perform feasibility testing of new features. However, functionality “evolved dramatically,” as vendors started to “move towards measures that provider sites can use.” Each of the vendors we spoke with provided measures to all of their hospital clients.

Measures provided to hospitals followed vendor-defined tasks, such as “notes,” “in-basket,” “orders,” and “reviewing” (Table 2). Consistent with prior literature,5,8,12 these tasks represent similar, but not identical, categories through which vendors capture EHR activity:

Table 2.

Characterizing measures for inpatient use

Data sources
  • Vendor A: Audit logs identifying events within the EHR (eg, switching tabs)
  • Vendor B: Custom logs based on mouse movement and mouse scrolling, more granular than audit log data
  • Vendor C: Custom logs based on mouse movement and mouse scrolling, more granular than audit log data

Definition of measures (eg, numerator/denominator)
  • Vendor A: For an aggregate of all auditable tasks, the standard measure displays the number of audit log entries per day
  • Vendor B: For each task, the standard measure displays minutes per day, standardized by patient volume (average patients/day)
  • Vendor C: For each task, the standard measure displays minutes per day, standardized by patient volume (patients/day)

Core tasks with associated measures
  • Vendor A: Accessing the chart; Chart review; Discharge; Documentation; E-prescribing; Message inbox; Orders
  • Vendor B: Clinical review; Discharge; Flowsheets; Message inbox; Medication reconciliation; Navigators; Notes; Orders; Patient lists; Problem list
  • Vendor C: Chart review; Documentation; Patient discovery; Orders; Problems and diagnoses; Message inbox; Discharge; Alerts; Medication reconciliation; Histories

Frequency measures are updated and disseminated
  • Vendor A: Measures aggregated, updated, and available continuously
  • Vendor B: Reports distributed monthly to inpatient customers
  • Vendor C: Measures available continuously, aggregated and updated monthly

Data retention
  • Vendor A: Up to customer, but default is 14 days (max 31)
  • Vendor B: Up to customer, but default is 3 years
  • Vendor C: 13 months of granular data (activity per day per provider) retained; aggregated data retained for several years

Source: Authors’ analyses of interview data.

  • Vendor A calculates the total number of auditable events in the EHR, aggregated across all tasks. The measure looks at how many audit entries occurred per day, broken out by scheduled hours (ie, entries during or outside of scheduled hours). The measure can be reported for an individual or for multiple individuals in the same population (such as specialty or practice). The person running the report can define scheduled hours.

  • Vendor B calculates active time in the EHR normalized to patient volume. The base measure for each task calculates minutes per day, standardized to the average number of patients per hour. The measure can be reported at different levels of aggregation, including individual users, specialties/departments, and aggregates of user types (such as physicians, fellows, or nursing students). The denominator can also be modified to reflect different approaches to measuring patient volume, such as the number of orders placed or notes written.

  • Vendor C, much like Vendor B, calculates active time in the EHR normalized to patient volume. The base measure for each task calculates minutes per day, standardized to the number of patients per day. The measures can be reported at different levels of aggregation, much like Vendor B.
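The Vendor B/C-style measure above can be sketched as a simple aggregation: sum active time per task per day, then normalize by daily patient volume. The event-log schema below (task, date, active seconds) is a hypothetical simplification; real vendor activity logs are far more granular:

```python
# Sketch of a task-level "minutes per day, normalized by patient volume"
# measure. events: iterable of (task, date, active_seconds) tuples;
# patients_per_day: {date: patient_count}. Returns average normalized
# minutes per patient-day for each task.
from collections import defaultdict

def minutes_per_patient_day(events, patients_per_day):
    task_day = defaultdict(float)
    for task, date, seconds in events:
        task_day[(task, date)] += seconds / 60.0  # active minutes per task-day
    per_task = defaultdict(list)
    for (task, date), minutes in task_day.items():
        per_task[task].append(minutes / patients_per_day[date])
    return {task: sum(v) / len(v) for task, v in per_task.items()}
```

Swapping the denominator (orders placed, notes written) instead of patient counts yields the alternative normalizations Vendor B describes.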

Underlying data

Of note, not all vendor measures of time spent in the EHR drew on audit-log data. Respondents from 2 different vendors reported relying on more granular activity log data that captured mouse movement and scrolling, noting that audit log data could not capture as much detail. In the words of 1 respondent, “If you’re only tracking when the chart is opened, you’re losing a ton of context.” Another respondent described linking audit log data to AHA survey data and users’ National Provider Identifiers to build comparison groups for benchmarking. Measure updates and dissemination varied by vendor, depending both on how the measures are displayed to customers (dashboards allow for constant accessibility) and on the granularity of the data underlying the measures and the subsequent data-processing burden. Vendors varied in how far back they stored raw data, ranging from 14 days to over a year. One vendor reported storing aggregate measures for longer than raw data due to storage limitations.

Hospitals’ use of EHR vendor measures

Among hospitals using the top 3 EHR vendors, the most common use of data on EHR documentation time was to identify providers needing training and support (Table 3). Three-quarters or more of the hospitals working with the top 3 vendors also cited using the data to identify areas to improve clinical workflow and monitor performance or efficiency. Fewer reported using the data to improve their EHR products (43%).

Table 3.

Hospital use of EHR data that tracks documentation time

Uses for EHR documentation time data Overall (%)
Identify providers in need of training and support 85%
Identify areas to improve clinical workflow 79%
Performance/efficiency monitoring 76%
Provider burden-reduction initiatives 68%
Vendor product improvement and troubleshooting 43%
Other uses 7%
Total N 2214

Source: Analysis of 2018 AHA IT Supplement Survey.

Note: Denominator is among the 68% of nonfederal acute care hospitals that have both a top 3 vendor and report using EHR data for assessing documentation time (see Figure 1).

Qualitative analyses revealed EHR vendors sought to enhance the display of information as well as provide benchmarking to make the data easier to consume and use for identifying clinicians in need of training and improving workflow (Table 4). EHR vendors offered many ways to normalize measures to facilitate appropriate comparisons, such as calculating the number of notes written or the time spent writing notes, either of which could be benchmarked per day or for the average number of patients per hour. Users could customize measures according to these preset options or work with their vendor to build novel metrics but could not otherwise alter measure definitions. As 1 respondent noted, maintaining these standard definitions was necessary to support their data visualizations and comparisons against users in other organizations.

Table 4.

Characterizing tools to enable usage of EHR vendor measures

Data display
  • Vendor A: Portal offering access to user-specified reports
  • Vendor B: Interactive report with functionality to dis/aggregate data
  • Vendor C: Interactive, real-time dashboard with functionality to dis/aggregate data

Benchmarking
  • Vendor A: Reports can include multiple users to support comparisons between individual providers within the same organization
  • Vendor B: Report benchmarks individual providers against all providers in their specialty within their organization; customized comparisons also available
  • Vendor C: Dashboard visualizations benchmark performance against internal and external users at multiple levels of aggregation

Other measures
  • Vendor A: Use of customization (eg, favorites); Evening/early morning activity; Unsigned documentation
  • Vendor B: Use of customization (eg, favorites); Evening/early morning activity
  • Vendor C: Use of customization (eg, favorites); Evening/early morning activity; Workflow patterns; Alert fatigue
Source: Authors’ analyses of interview data.

Although all vendors created data visualizations and other displays to facilitate use and interpretation of these measures, the capabilities varied by vendor. While 1 offered hospitals the ability to specify parameters (such as user or user group and date) for a series of statistics, another offered more dynamic dashboards that mapped user workflow and allowed comparisons to other users in the hospital or departments across the country.

Respondents also described using log data to assess implementation of best practices, such as “favorites” lists, shortcuts, and recommended workflows. These measures can help quantify the relative effort required for individuals to complete a given task measured in clicks, time, or screen switching. Vendors described how these measures facilitate outreach to outlier individuals, highlighting opportunities to train them on ways they might alter their documentation and general EHR usage patterns to save time and increase efficiency. As 1 respondent put it, “The biggest use case [for these measures] is proactive outreach to silent strugglers.” One vendor also explained that measuring individual use of best practices was a way for them to ensure appropriate benchmarking and analysis, noting “We don’t want to evaluate and measure providers that are not set up using best practices.”

Limitations of measures

Inpatient vs outpatient measures

Vendors collected identical data across settings and used the same algorithms to calculate measures but suggested that their use and interpretation would likely differ in the inpatient compared to the outpatient setting. For example, 1 respondent explained that inpatient users typically document the same day’s patients, whereas users in an outpatient setting might be completing their notes for patients seen on prior days. As a result of that variation, benchmarking in the inpatient setting might require comparisons normalized to the number of patients seen, while benchmarking in the outpatient setting might require comparisons normalized to the number of patients scheduled.

Some setting-agnostic measures were more natural extensions of the outpatient setting than the inpatient setting. For example, all vendors used measures or data visualizations to track EHR use in the evening or early morning (ie, between 6 pm and 6 am), a definition that is less intuitive in a setting with 24/7 patient care responsibilities.5,11 Respondents noted difficulty building algorithms that were calibrated to identify work outside of work in the inpatient setting, given schedule complexities and variations across hospital staff. As 1 vendor respondent said, “Oftentimes providers don’t have a set schedule. So how do we frame that [comparison]? We…are willing to work with our sites to measure that but haven’t been able to nail that down yet.” Some early solutions include understanding temporal patterns of EHR use for different groups of users without saying whether the time spent was during or outside work—for example, via a heat map of time spent in the EHR over the course of a day—or allowing the organization to define the working hours and calculating the percentage of EHR activity that occurred outside those hours.
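The second early solution above, letting the organization define working hours and reporting the share of EHR activity outside them, can be sketched simply. The default 6 am–6 pm window mirrors the evening/early-morning definition in the text; real schedules would vary by staff group:

```python
# Sketch of an organization-defined "activity outside working hours" measure:
# given a list of activity timestamps, return the fraction falling outside
# the [start, end) working-hours window.
from datetime import time

def pct_outside_hours(timestamps, start=time(6, 0), end=time(18, 0)):
    """timestamps: list of datetime objects for logged EHR activity events."""
    if not timestamps:
        return 0.0
    outside = sum(1 for ts in timestamps if not (start <= ts.time() < end))
    return outside / len(timestamps)
```

A heat-map variant would instead bucket the same timestamps by hour of day and count events per bucket, sidestepping any during/outside-work judgment.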

Usage for provider burden

One respondent said that subtle differences in how vendors measure different tasks could affect the ability to measure burden across vendors: “If different EHR [vendors] are attacking the issue differently, you will get variation not related to burden but just how the math is done.” But another vendor respondent said that it would still be useful to compare trends: “Even if [another vendor] counts time differently, if we are in the same ballpark, you still would be able to find opportunities for burden reduction in the aggregate…. For example, [seeing] that inpatient notes are longer than outpatient notes is simple, but [it] surprises CMOs and CMIOs, and just having the numbers can start the conversation.”

Vendor staff also identified limitations in how these measures could be interpreted, noting that differences in training and preferences would be difficult to capture with summary measures, especially given the diverse work that occurs within the same inpatient organization. Indeed, research based on audit logs has revealed variations in documentation patterns even among similar users of the same EHR system, providing some early indication of different preferences.12,13 Respondents thus cautioned that EHR-based measures would be difficult to use as the sole basis for assessing subjective experiences of burden. The measures also focus exclusively on efficiency and do not incorporate any measures of documentation quality.

LIMITATIONS

To characterize hospitals’ use of these data, we relied solely on self-reported survey data. Due to the pandemic, we were not able to interview hospitals to learn more about their use of the data, including the strengths and limitations of the measures from their perspective related to access, usability, pricing, and more. This should be the subject of future investigation when hospital executives and staff have more bandwidth.

DISCUSSION

This national mixed-methods study is the first to examine hospitals’ use of EHR system data to measure time spent by clinicians documenting in their EHR system. The findings show that about half of hospitals nationally use measures, and usage is higher among hospitals that work with vendors who provide these reports to their clients. Most frequently, hospitals cited uses of the data related to identifying clinicians in need of training and support, improving clinical workflow, and performance or efficiency monitoring. Interviews with those vendors revealed how they leveraged EHR data to develop analytical products to support their clients’ use of EHR systems.

It is encouraging that tools are available to measure how clinicians are using EHRs. This enables hospitals and EHR vendors to take steps to provide training, improve clinical workflow, and redesign systems to reduce provider burden associated with EHR use. However, work remains to provide actionable insights on reducing the burden associated with EHR use. One barrier is limited availability of comparable measures across EHR vendors. The top 3 EHR vendors, in use by three-quarters of hospitals nationally, offer this functionality to their clients; yet one-quarter of hospitals nationwide do not have routine access to these reports. Furthermore, across EHR vendor offerings, there is no standardization in underlying data, data retention or updating, or measure definition. Additionally, the complexity and robustness of the data required—which go beyond EHR audit log files to include data on more granular actions taken by end users—may be difficult for smaller EHR vendors to compile. Although health applications or other open-source tools that access the underlying EHR data through application programming interfaces may also be able to perform analytics and reporting for hospitals that lack these functions through their EHR vendor, this would likely come at additional expense for these hospitals.

Our findings also suggest that increasing the availability of these tools alone will not lead to increases in overall usage. Among vendors that made these measures available, we found disparities in use among independent and rural hospitals compared to their counterparts. Consistent with other research identifying health IT disparities among hospitals with fewer resources, these disparities in usage likely reflect a lack of the staff and resources needed to take advantage of these tools.14 Providing additional support or making the tools easier to use, such as through the interactive, real-time dashboards offered by 1 of the EHR vendors, may help increase usage more broadly among hospitals with fewer resources. Future research could advance the usage of these measures by soliciting hospitals’ perspectives on the strengths and limitations of available measures, richer detail about how they are incorporating measures into their performance improvement and burden-reduction efforts, and barriers to usage.

The measures are also limited in other ways to support burden reduction initiatives. Our interviews with vendors revealed that although these measures may offer a baseline on EHR use, they are not nuanced enough to separate valuable work from burdensome work. A recent study also identified this issue in relation to using audit log data for national quantification of burden associated with EHR use.5 Furthermore, as our respondents noted, the value of reducing documentation time should be balanced against the importance of documentation quality and completeness, which is not reflected in these measures that focus on efficiency. Although this study focused on examining the usage of measures by hospitals to improve their efforts to decrease time spent documenting, future work should explore how EHR vendors are leveraging these data to improve clinicians’ overall experience using EHRs.

Policy implications

Our findings suggest that examining trends in documentation time using existing measures (ensuring comparisons are within organizations and providers using the same EHR vendor) over a specified period may serve to assess whether time spent documenting is decreasing in the near term. In the longer term, understanding the level of burden nationally across EHR vendors would require more work to harmonize the measure definitions and data sources used by vendors. A crosswalk of key measures that identifies comparable activities will be crucial to understanding the extent to which different use patterns result from vendors' different definitions of activity, hospital or vendor characteristics that affect workflow, user preferences, or meaningful differences in burden associated with EHR use.
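The crosswalk idea described above can be sketched as a simple mapping from vendor-specific measure names onto a common activity taxonomy. All vendor names, measure names, and categories below are hypothetical placeholders, not actual vendor metrics:

```python
# Illustrative crosswalk from hypothetical vendor-specific measure names to
# a common activity taxonomy; real vendor measure names and categories differ.
CROSSWALK = {
    "vendor_a": {"NoteTimePerDay": "documentation", "InBasketTime": "inbox"},
    "vendor_b": {"doc_minutes": "documentation", "msg_minutes": "inbox"},
}

def harmonize(vendor, measures):
    """Relabel one vendor's measures onto the common taxonomy, dropping
    measures with no agreed-upon counterpart."""
    mapping = CROSSWALK[vendor]
    return {mapping[name]: value
            for name, value in measures.items() if name in mapping}

# After harmonization, measures from two vendors share one vocabulary.
print(harmonize("vendor_a", {"NoteTimePerDay": 42, "InBasketTime": 13}))
print(harmonize("vendor_b", {"doc_minutes": 38, "other_metric": 7}))
```

Even with such a crosswalk in place, renamed measures are only nominally comparable; whether the underlying vendor definitions actually capture the same activities is the harder harmonization work the paragraph above calls for.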

Increasing the specificity of existing audit log data standards and developing standards for capturing more granular activity data, a potentially resource-intensive endeavor, could yield a variety of benefits. First, it would facilitate the development of a common set of core measures to inform policies designed to reduce reporting burden on clinicians.15 This standardization could also make it easier for health application vendors to build tools that make these measures more accessible to hospitals, and to develop new features and products that use data on how clinicians use EHRs to improve the usability of EHR products or harness the data in other novel ways.

CONCLUSION

Leveraging EHR data to measure time spent using the EHR is a critical component of understanding and, ultimately, reducing burden associated with EHR use. This mixed-methods study demonstrates that such measures are available to hospitals whose EHR vendors serve a majority of hospitals; yet work remains to make measures and analyses more broadly available to all hospitals and to increase their usage among hospitals with fewer resources. Harmonizing the measure definitions and data sources used by vendors, along with developing new tools that improve EHR use for clinicians, could further advance efforts to measure the impact of national policies and strategies on reducing burden associated with EHR use.

FUNDING

This work was funded by ONC through subcontract 776-01280-000-71 (A+ Government Solutions, LLC).

AUTHOR CONTRIBUTIONS

GC, JB, LB, and VP contributed to the conception and design of the study. All authors contributed to the acquisition, analysis, and interpretation of the data (GC, CJ, VP for quantitative data; GC, JB, LB for qualitative data). All authors contributed to the writing and final approval of the manuscript. CJ and VP take responsibility for the integrity and accuracy of the quantitative data analysis; GC and JB take responsibility for the integrity and accuracy of the qualitative data.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

Supplementary Material

ocab042_Supplementary_Data

ACKNOWLEDGMENTS

The authors extend their gratitude to the vendor respondents who volunteered their time for interviews and enhanced our understanding of audit and other log data. The authors thank Sonal Parasrampuria, PhD, for her early involvement in this project, and Dr. Thomas Mason for his feedback on the manuscript and findings. The authors thank Sally Baxter, MD, MSc; Nate Apathy, PhD; Dori Cross, PhD; Michelle Hribar, PhD, MS; Chris Sinsky, MD; and other members of the National Research Network for their feedback on the findings.

DATA AVAILABILITY STATEMENT

The AHA survey data are available for purchase from AHA. The qualitative data underlying this article cannot be shared publicly in order to protect the privacy of the individuals who participated in the study.

CONFLICT OF INTEREST STATEMENT

None declared.

REFERENCES

1. Adler-Milstein J, Holmgren AJ, Kralovec P, et al. Electronic health record adoption in US hospitals: the emergence of a digital "advanced use" divide. J Am Med Inform Assoc 2017; 24 (6): 1142–8.
2. Ommaya AK, Cipriano PF, Hoyt DB, et al.; Association of American Medical Colleges. Care-centered clinical documentation in the digital environment: solutions to alleviate burnout. NAM Perspectives. Washington, DC: National Academy of Medicine 2018; 8 (1): 1–13. doi:10.31478/201801c
3. Office of the National Coordinator for Health Information Technology. Strategy on Reducing Regulatory and Administrative Burden Relating to the Use of Health IT and EHRs: Final Report. Washington, DC: Author; 2020.
4. Rule A, Chiang MF, Hribar MR. Using electronic health record audit logs to study clinical activity: a systematic review of aims, measures, and methods. J Am Med Inform Assoc 2020; 27 (3): 480–90. doi:10.1093/jamia/ocz196
5. Cohen G, Brown L, Fitzgerald M, et al. Exploring the Feasibility of Using Audit Log Data to Quantitate Burden as Providers Use Electronic Health Records. Washington, DC: Mathematica. https://aspe.hhs.gov/system/files/pdf/263356/jsk-qebhr-final-concept-report.pdf. Accessed June 2, 2020.
6. Lights On Network. https://www.cerner.com/solutions/lights-on-network. Accessed June 16, 2020.
7. Tseytlovskiy P. Using Epic's Signal Data to Measure Intervention Effectiveness. Bluetree; 2019. https://www.bluetreenetwork.com/blog/using-epics-signal-data-to-measure-intervention-effectiveness/. Accessed June 16, 2020.
8. Baxter SL, Apathy NC, Cross DA, Sinsky C, Hribar MR. Measures of electronic health record use in outpatient settings across vendors. J Am Med Inform Assoc 2020. doi:10.1093/jamia/ocaa266
9. Tai-Seale M, Dillon EC, Yang Y, et al. Physicians' well-being linked to in-basket messages generated by algorithms in electronic health records. Health Aff (Millwood) 2019; 38 (7): 1073–8.
10. Adler-Milstein J, Zhao W, Willard-Grace R, et al. Electronic health records and burnout: time spent on the electronic health record after hours and message volume associated with exhaustion but not with cynicism among primary care clinicians. J Am Med Inform Assoc 2020; 27 (4): 531–8.
11. Sinsky CA, Rule A, Cohen G, et al. Metrics for assessing physician activity using electronic health record log data. J Am Med Inform Assoc 2020; 27 (4): 639–43.
12. Cohen GR, Friedman CP, Ryan AM, et al. Variation in physicians' electronic health record documentation and potential patient harm from that variation. J Gen Intern Med 2019; 34 (11): 2355–67.
13. Overhage M, McCallie D Jr. Physician time spent using the electronic health record during outpatient encounters: a descriptive study. Ann Intern Med 2020; 172 (3): 169–74.
14. Pylypchuk Y, Alvarado CS, Patel V, Searcy T. Uncovering differences in interoperability across hospital size. Healthc (Amst) 2019; 7 (4). doi:10.1016/j.hjdsi.2019.04.001
15. Certification of Health IT. §170.315(d)(10) Auditing actions on health information. https://www.healthit.gov/test-method/auditing-actions-health-information. Accessed April 15, 2021.



Articles from Journal of the American Medical Informatics Association : JAMIA are provided here courtesy of Oxford University Press
