Journal of the American Medical Informatics Association (JAMIA). 2023 Dec 20;31(3):784–789. doi: 10.1093/jamia/ocad254

Guidance for reporting analyses of metadata on electronic health record use

Adam Rule 1, Thomas Kannampallil 2,3, Michelle R Hribar 4,5,6, Adam C Dziorny 7, Robert Thombley 8, Nate C Apathy 9,10, Julia Adler-Milstein 11
PMCID: PMC10873840; PMID: 38123497

Abstract

Introduction

Research on how people interact with electronic health records (EHRs) increasingly involves the analysis of metadata on EHR use. These metadata can be recorded unobtrusively and capture EHR use at a scale unattainable through direct observation or self-reports. However, there is substantial variation in how metadata on EHR use are recorded, analyzed, and described, limiting understanding, replication, and synthesis across studies.

Recommendations

In this perspective, we provide guidance to those working with EHR use metadata by describing 4 common types, how they are recorded, and how they can be aggregated into higher-level measures of EHR use. We also describe guidelines for reporting analyses of EHR use metadata—or measures of EHR use derived from them—to foster clarity, standardization, and reproducibility in this emerging and critical area of research.

Keywords: electronic health record, metadata, measurement, event log, audit log

Introduction

There is growing interest in measuring how people interact with electronic health records (EHRs) as it offers a window into frontline care. One increasingly common method of measuring how people interact with EHRs is to analyze metadata on EHR use which are recorded within the EHR.1 These EHR use metadata—or data about actions performed on EHR data—unobtrusively capture EHR use at a scale unattainable through direct observation or self-reports. Hundreds of studies have employed EHR use metadata in recent years,1,2 including those measuring EHR use across hundreds of health systems,3,4 examining the impact of recent changes to healthcare policy and delivery on EHR use,5–10 assessing the relationship between aspects of EHR use and potential downstream effects such as burnout and quality of care,11–15 and characterizing various aspects of clinical workflow and quality of care.16,17 A new field is emerging that specializes in drawing insights from metadata on EHR use to address some of the most pressing questions in clinical informatics regarding workflows, clinical workload, clinician wellbeing, and patient safety.

Although metadata on EHR use are a valuable tool for clinical operations and research, prior work has revealed substantial variation in how these metadata—and measures of EHR use derived from them—are recorded, analyzed, and described, limiting understanding, replication, and synthesis across studies.1 For example, different methods have been used to measure the time ambulatory providers spend interacting with EHRs outside working hours, leading to drastically different estimates of the quantity and its relationship to burnout.3,13,14,18,19 Among other differences, measures of outside hours EHR use vary in the periods of time included in the measure (eg, time outside 6 am to 6 pm on weekdays) and how active EHR use is defined. Without standard methods of recording, analyzing, and reporting EHR use, the informatics community and operational leaders are unable to build an evidence base for making policy, practice, wellness, and patient safety decisions at the individual, clinic, health system, and national levels.

In this perspective, we provide guidance to those seeking to analyze or interpret metadata on EHR use. This guidance is drawn from our own extensive experience analyzing diverse types of metadata on EHR use, prior reviews of the literature,1,2 and conversations with staff at leading EHR developers about how these metadata are recorded. We contribute a detailed description of 4 common types of EHR use metadata, how they are recorded, and how they can be aggregated into higher-level measures of EHR use. We also provide a recommended terminology (Box 1) and elements to report (Box 2) when describing analyses of EHR use metadata, or measures derived from them. The overarching purpose of this perspective is to foster clarity, standardization, and reproducibility in this critical and expanding area of research.

Box 1. Recommended terminology for describing EHR use metadata and related concepts.

Action: A discrete act performed by a user or initiated by software (eg, a click, keystroke, or automatic logout).
Event: An action recognized by software.
Event handler: Software code that recognizes an event and performs an action in response.
Metadata: Data that describe data.
EHR use metadata: Data that describe actions performed on EHR data.
Event log: A data structure containing a single record (eg, a row of data) describing each occurrence of a recorded event.
Audit log: An event log containing data that describe record access events, maintained for the purpose of auditing record privacy and security.
Domain-specific event log: An event log containing data that describe events relevant to a particular aspect of software use (eg, note writing).
Aggregate log: A data structure in which each record (eg, a row of data) describes multiple events or the relationship between them.
Active use log: An aggregate log in which each record (eg, a row of data) describes how long users spend actively using the software during a discrete period of time.
EHR use measure: A quantity describing an aspect of EHR use.
Developer-derived measure: A measure calculated by the EHR developer.
Investigator-derived measure: A measure calculated by someone other than the EHR developer.

Box 2. Recommendations for reporting analyses of EHR use metadata and measures.

  1. EHR developer: Name the developer of the EHR whose use was observed (eg, Cerner, Epic, Allscripts).

  2. Data source: Name the source of the EHR use metadata or measure.

  3. Aggregation method: Describe how the metadata were aggregated to create the measure.

  4. Normalization: Clearly define the denominator used to normalize any measures of EHR use.

  5. Measure name: Provide a clear and descriptive name for any measure of EHR use.

Current methods of recording EHR use

The process of using metadata to measure EHR use starts with recording actions performed in the EHR (Figure 1A). These actions include user actions such as clicks and keystrokes, and system actions such as page loads and automatic logouts. Actions recognized by software are known as “events” and can trigger event handlers, code which recognizes the event and performs an action in response (Figure 1B). A mouse click, for example, may trigger an event handler which opens a page or saves data to a patient record. Critically, event handlers can also record data about the event that triggered the handler (eg, a mouse click event).
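To make this concrete, here is a minimal sketch in Python of an event handler that both performs its action and records metadata about the triggering event. All names are hypothetical, and no vendor's EHR is implemented this way; the sketch only illustrates the general pattern described above.

```python
from datetime import datetime, timezone
import csv

EVENT_LOG = "event_log.csv"  # hypothetical on-disk event log

def record_event(event_type, user_id, patient_id):
    """Append one time-stamped row describing a recognized event."""
    with open(EVENT_LOG, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), event_type, user_id, patient_id]
        )

def on_click_open_chart(user_id, patient_id):
    """Hypothetical event handler for a mouse click that opens a chart."""
    # ...code that actually opens the patient's chart would run here...
    record_event("chart opened", user_id, patient_id)  # metadata side effect

on_click_open_chart(user_id="u042", patient_id="p137")
```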

Figure 1. Process of recording EHR use. Five stages: (A) EHR Use, (B) Event Handlers, (C) EHR Use Metadata, (D) Aggregation, (E) Measures.

EHRs employ several methods to record data about events and the relationships between them. One common method is to record data about individual events to an event log. Event logs are data structures which contain a separate record (eg, a row of data) for each observed event. For example, when a user signs a note, an event handler might save a row of data to an event log containing the time of the event, the type of event (eg, “note signed”), the user who signed the note, and the patient for whom the note was signed. Event handlers may also track the relationship between multiple events over time and record data about this relationship to an aggregate log. For example, Epic’s EHR tracks whether a user’s clicks, mouse movements, or keystrokes occur within 5 s of each other and saves the accumulated duration of this “active” EHR use to an aggregate log.
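A heuristic like this can be reconstructed in a few lines. The sketch below is a simplified illustration rather than Epic's actual logic: it sums the gaps between input events that occur within 5 s of each other.

```python
def active_use_seconds(event_times, threshold=5.0):
    """Sum inter-event gaps of at most `threshold` seconds as 'active' use.

    event_times: sorted timestamps (in seconds) of clicks, keystrokes,
    and mouse movements within one user session.
    """
    total = 0.0
    for prev, curr in zip(event_times, event_times[1:]):
        gap = curr - prev
        if gap <= threshold:
            total += gap  # the user stayed active across this gap
    return total

# Five interactions; the 60 s pause does not count toward active use.
print(active_use_seconds([0, 2, 4, 64, 66]))  # -> 6.0
```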

Implications of how metadata are recorded

The way in which these metadata are recorded has implications for their use, analysis, and interpretation. First, not every action associated with care is recorded in EHR use metadata. Only actions mediated by the EHR or other information technology can be automatically recorded. Actions occurring outside health information technology such as conversing with a colleague or reviewing a paper document may or may not be separately documented by users in the EHR. Even then, actions occurring in health information technology outside the EHR such as medical imaging review done in picture archiving and communication systems may not be recorded in EHR use metadata. Second, not every action performed in the EHR may be recorded. EHR developers must write code to intentionally record specific events and face tradeoffs when deciding what to record. Recording every low-level action such as every keystroke can be computationally expensive and generate large volumes of data; recording too few actions may leave insufficient data for debugging system operations or auditing system use. Third, EHR developers play a critical role in assigning meaning to events. This includes not only deciding which events to record but also what to call them (eg, “note signed”), and what additional data to record about each event. Fourth, a single action may trigger multiple event handlers that save different data about the same event to different data structures. Different types of EHR use metadata may thus provide more or less detail about the same event.

Common types of EHR use metadata

Investigators seeking to study EHR use may have access to many types of EHR use metadata depending on the EHR and how it is configured. In this section, we describe 4 common types of EHR use metadata (Figure 1C). These metadata include audit logs, other event logs, active use logs, and clinical metadata.

Audit logs

Since the passage of the second stage of Meaningful Use regulation in 2014, all EHRs certified for use in the United States have been required to maintain an audit log to support audits of patient record privacy and security.20 Currently, these audit logs—including Epic’s Access Log and Cerner’s P2Sentinel—must record at least 6 pieces of data about every action related to electronic health information (eg, adding, deleting, modifying, viewing, or printing patient information) including who performed what action within which patient record at what time. Based on how the specific EHR is configured, audit logs may also capture data beyond the required elements, such as the device on which the event occurred, or actions beyond those required for certification, such as those related to EHR customization. Audit logs are typically formatted as event logs with data for each observed event recorded to its own time-stamped row.
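For illustration, audit log rows under a hypothetical schema might look like the following. The field names are ours, and a given EHR may capture more or different elements.

```python
# Hypothetical audit log: one time-stamped row per observed event.
# Certified EHRs must capture at least who performed what action within
# which patient record at what time; these field names are illustrative.
audit_log = [
    {"timestamp": "2023-06-05T09:02:11Z", "user_id": "u042",
     "action": "view",   "patient_id": "p137", "device": "ws-12"},
    {"timestamp": "2023-06-05T09:03:45Z", "user_id": "u042",
     "action": "modify", "patient_id": "p137", "device": "ws-12"},
]
```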

Other event logs

In addition to audit logs required for EHR certification, many EHR developers maintain separate event logs which record specific aspects of EHR use in detail. For example, one event log might record each time a clinical decision support alert is displayed, another might record each time a note template is invoked, and a third might record each time a user interacts with a secure message. Even if the event is already recorded in an audit log, these domain-specific event logs enable additional data to be recorded which may be unique to a particular aspect of EHR use. Some EHRs may also maintain domain-agnostic event logs separate from an audit log which record actions across all domains of EHR use, such as every click, keystroke, scroll, and mouse movement. Such detailed domain-agnostic event logs support granular studies of EHR use but quickly grow in size such that storage costs may outweigh the benefits of retaining the data, so these logs may only be stored for a few days or weeks before being deleted.

Active use logs

Investigators often want to know not only what people do in the EHR but also how long they spend doing it. However, calculating durations of EHR use from raw event logs requires substantial analytical effort and non-trivial decisions about how to define active EHR use. To help, some EHR developers maintain active use logs—such as Epic’s UAL Lite and Cerner’s Timecards data—which record how long users spend actively using the EHR during a specific period of time (eg, 8:00-8:15 am). Active EHR use is typically defined using heuristics, such as the user performing a certain number of clicks, keystrokes, or mouse movements in a minute. Active use logs may have a separate record (eg, a row of data) for each unique combination of user, period of time, active window (eg, note reader), and patient encounter. This disaggregation enables the same metadata to be aggregated in different ways to answer different questions such as how long users spent on chart review over the previous week or how long they spent using the EHR outside of their clinic hours.
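The sketch below illustrates this flexibility using pandas and a hypothetical active use log; all column names and values are assumptions. The same rows yield both a per-user chart review total and an outside-hours total under one heuristic definition of outside hours (before 6 am, after 6 pm, or on weekends).

```python
import pandas as pd

# Hypothetical active use log: one row per user, 15-min period, and window.
log = pd.DataFrame({
    "user_id":        ["u042", "u042", "u042", "u007"],
    "period_start":   pd.to_datetime(["2023-06-05 08:00", "2023-06-05 08:15",
                                      "2023-06-05 19:00", "2023-06-10 09:00"]),
    "active_window":  ["chart review", "note writer", "chart review", "in basket"],
    "active_seconds": [540, 300, 420, 610],
})

# Total chart review time per user.
chart_review = (log[log["active_window"] == "chart review"]
                .groupby("user_id")["active_seconds"].sum())

# EHR use outside 6 am to 6 pm on weekdays (one common heuristic).
is_weekend = log["period_start"].dt.dayofweek >= 5
off_hours = (log["period_start"].dt.hour < 6) | (log["period_start"].dt.hour >= 18)
outside_hours = log[is_weekend | off_hours].groupby("user_id")["active_seconds"].sum()

print(chart_review)
print(outside_hours)
```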

Clinical data and metadata

Event logs focus on how people interact with the EHR, so they may not record every meaningful clinical event, such as the posting of lab results, or may record them in ways which are difficult to parse. However, metadata stored alongside clinical data can be used to gain additional insight into a sequence of clinical actions or clinical workflows. For example, metadata stored alongside orders may be used to determine when test results were released and metadata stored alongside inbox messages may be used to determine when those messages were sent or received. Prior work has combined these data to measure how often patients see lab results before their provider, and how many messages patients send to their providers shortly after seeing lab results.7,21
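As a simplified illustration of this kind of analysis, the sketch below joins result-release timestamps with hypothetical first-view timestamps (such as might be recovered from audit log view events) to estimate how often patients see results before their providers. The schema and values are ours, not any system's actual structure.

```python
import pandas as pd

# Hypothetical joined data: release times from order metadata plus each
# party's first "result viewed" time recovered from audit log events.
results = pd.DataFrame({
    "result_id":           ["r1", "r2", "r3"],
    "released_at":         pd.to_datetime(["2023-06-05 09:00"] * 3),
    "patient_first_view":  pd.to_datetime(["2023-06-05 09:10",
                                           "2023-06-05 12:00", None]),
    "provider_first_view": pd.to_datetime(["2023-06-05 11:00",
                                           "2023-06-05 09:30",
                                           "2023-06-05 10:00"]),
})

# Share of results the patient saw before their provider (never-viewed = no).
patient_first = results["patient_first_view"] < results["provider_first_view"]
print(round(patient_first.mean(), 2))  # -> 0.33 (1 of 3 results)
```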

Implications of different metadata types

These metadata differ in why they are recorded and the data they contain, and may be appropriate for different operational and research use cases. Moreover, the same action, such as signing a note, may be recorded at different levels of detail in different forms of metadata. Researchers should consider the various EHR use metadata available to them and weigh which source best fits their question, recognizing that answering a particular question may require merging data from multiple sources.

Translating metadata into measures

Additional value can be derived by aggregating EHR use metadata into higher-level measures of EHR use.2 For example, metadata from thousands of events may be aggregated to determine that an individual used the EHR for 7 h on a particular day. Information about individual actions is lost in this process which is both a limitation and strength of resulting measures. Aggregation enables comparisons across individuals, settings, and time, but strips data of some nuance and context. For example, knowing that a provider spent 1 h a day using the EHR outside clinic hours does not convey whether most of that time occurred just after clinic or in the middle of the night.

Individual investigators have created custom investigator-derived measures from EHR use metadata including average appointment duration, number of charts opened in a shift, and the network of providers who cared for a patient.1 However, EHR developers also play a vital role in creating measures from EHR use metadata. Developers may have access to metadata—such as data on individual clicks and keystrokes—which are not consistently saved or available to individual investigators. EHR developers can also rapidly deploy measures across EHR installations. Several market-leading EHR developers provide platforms for viewing, analyzing, and exporting developer-derived measures ranging from the volume of inbox messages providers receive to the percent of orders they place with team support.22–24 These platforms—which include Epic’s Signal and Cerner’s Advance and Lights On reporting tools—make it possible to measure EHR use without analyzing raw EHR use metadata. The implicit standardization of developer-derived measures across installations of the same developer’s EHR has also enabled studies of EHR use across hundreds of health systems.3,4

Investigator- and developer-derived measures often have different goals, with developer-derived measures primarily focusing on measuring duration and efficiency of EHR use, whereas investigator-derived measures are more likely to employ EHR use as a proxy for measuring additional constructs related to EHR use such as clinical workflow, teamwork, or quality of care.1 There is also substantial variation between EHR developers in how some measures are calculated—such as how durations of active EHR use are calculated—which complicates cross-developer comparisons.18 There are also assumptions in the logic used to aggregate data which may not be reflected in measure names. For example, measures of EHR use outside scheduled hours may be based on heuristics of what time period constitutes scheduled hours rather than exact clinic schedules.18,25 These nuances highlight the importance of measure documentation provided by EHR developers and individual investigators. Another important aspect of measure development is normalization. Many measures consist of a numerator (the quantity being measured) and a denominator (the number of parts over which to divide that quantity) and the choice of denominator can affect comparisons across groups. Prior work has shown different relative rankings in alert burden across units and provider types when alert volume is normalized by number of encounters, orders, inpatient-days, or days logged into the EHR.26
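A toy example with made-up numbers shows how the choice of denominator alone can reverse which unit appears to carry the higher alert burden:

```python
# Made-up alert counts and two candidate denominators for two units.
units  = ["ICU", "Med-Surg"]
alerts = [900, 1200]
denominators = {"per encounter": [150, 600], "per inpatient-day": [3000, 2000]}

for name, values in denominators.items():
    rates = {u: a / v for u, a, v in zip(units, alerts, values)}
    highest = max(rates, key=rates.get)
    print(f"alerts {name}: {rates} -> highest burden: {highest}")
# per encounter: ICU (6.0) ranks above Med-Surg (2.0)
# per inpatient-day: Med-Surg (0.6) ranks above ICU (0.3)
```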

Reporting guidelines

Given the variability and nuance of recording and measuring EHR use described above, we recommend 5 items be reported for every observational study based on EHR use metadata or measures derived from them including: (1) the EHR developer, (2) data source, (3) method of aggregation, (4) method of normalization, and (5) measure name (Box 2). We explain each of these recommendations below and where they fit into the overall process of recording and measuring EHR use (Figure 1).

First, we recommend reporting the name of the developer of the EHR whose use was observed. This will impact the entire process of measuring EHR use (Figure 1A-E), from which events are recorded, to how they are recorded as metadata, to how these metadata can be turned into measures. Second, we recommend reporting the data source used for the study, either for raw metadata or developer-derived measures (Figure 1C and E). Provide a developer-agnostic name for the data source such as audit log, active use log, or developer-derived measures. Consider also providing the developer-specific name of the data source in the description of methods if that name is visible to the EHR’s end users (eg, Cerner’s P2Sentinel, Epic’s UAL Lite, Epic’s Signal). Third, we recommend describing how metadata were aggregated into more abstract measures of EHR use (Figure 1D). For example, describe any method used to calculate durations of “active” EHR use, how specific events were mapped to higher-level activities such as chart review, and the definition of any periods of time such as “outside hours.” If sharing code describing a custom method of aggregation via a public repository such as GitHub, ensure that the code does not reveal any information about the EHR which is not visible to end users. Consider sharing code that reveals database schemas or other aspects of the EHR not visible to end-users via developer-provided user communities such as Epic’s UserWeb or Cerner’s uCern. Fourth, we recommend clearly reporting the denominator used to normalize any measure of EHR use (Figure 1D). For example, if normalizing EHR use “per day,” clarify if per day means per day in the observed period, per day logged into the EHR, per day with scheduled appointments, or another measure of days. Finally, we recommend reporting the name of any measure reported (Figure 1E). This includes a clear and descriptive name for any investigator-derived measure of EHR use, or the exact name of the developer-derived measure, such as “Time Outside Scheduled Hours,” if visible to end-users of the EHR.
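To make these recommendations concrete, consider a hypothetical methods excerpt that satisfies all 5 items: “We analyzed metadata from an active use log (Epic’s UAL Lite), in which active use accumulates while clicks, keystrokes, and mouse movements occur within 5 s of one another. We summed active time occurring outside 6 am to 6 pm on weekdays, divided it by the number of days with at least 1 scheduled appointment, and report the resulting investigator-derived measure as outside-hours EHR time per scheduled day.” Every element of this excerpt is illustrative, but each maps directly onto 1 of the 5 reporting recommendations.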

Conclusion

Metadata on EHR use are a powerful tool for observing EHR use, clinical workflows, and clinical quality at scale. How these metadata are recorded, stored, and aggregated into higher-level measures of EHR use is varied, complex, and can change over time. We recommend that individuals working with these data not only consult this perspective but also review developer documentation, contact developer representatives, and follow the proposed reporting guidelines to support transparency and replicability in this emerging field. We also encourage ongoing efforts to develop reporting guidelines, ontologies, and common data models that enable comparison of metadata and measures across developers and studies.

Acknowledgments

The authors would like to thank Marc Overhage, Jake Wilcox, Jeremy Hurraw, Sam Adams, and John O’Bryan for their insight into how EHR use is recorded and feedback on early drafts of this manuscript.

Contributor Information

Adam Rule, Information School, University of Wisconsin-Madison, Madison, WI 53706, United States.

Thomas Kannampallil, Department of Anesthesiology, Washington University School of Medicine, St Louis, MO 63110, United States; Institute for Informatics, Data Science and Biostatistics, Washington University School of Medicine, St Louis, MO 63110, United States.

Michelle R Hribar, Office of Data Science and Health Informatics, National Eye Institute, National Institutes of Health, Bethesda, MD 20892, United States; Department of Ophthalmology, Casey Eye Institute, Portland, OR 97239, United States; Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR 97239, United States.

Adam C Dziorny, Department of Pediatrics, University of Rochester School of Medicine, Rochester, NY 14642, United States.

Robert Thombley, Department of Medicine, Center for Clinical Informatics and Improvement Research, University of California, San Francisco, San Francisco, CA 94118, United States.

Nate C Apathy, National Center for Human Factors in Healthcare, MedStar Health Research Institute, Washington, DC 20782, United States; Center for Biomedical Informatics, Regenstrief Institute Inc, Indianapolis, IN 46202, United States.

Julia Adler-Milstein, Department of Medicine, Center for Clinical Informatics and Improvement Research, University of California, San Francisco, San Francisco, CA 94118, United States.

Author contributions

All authors (A.R., T.K., M.R.H., A.C.D., R.T., N.C.A., and J.A.-M.) have made substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND drafting the work or revising it critically for important intellectual content; AND final approval of the version to be published; AND agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Funding

This research had no specific source of funding.

Conflict of interest

The authors have no commercial, proprietary, or financial interest in any of the products or companies described in this article. A.R. reports receiving grants, honoraria, and travel support from the American Medical Association outside the reported work. T.K. reports receiving grants related to event log research from the American Medical Association, NIH, and AHRQ outside the reported work. A.C.D. reports receiving grants from the NIH outside the reported work. N.C.A. reports grants from the American Medical Association outside the reported work. J.A.-M. reports receiving grants related to event log research from the American Medical Association and NIH outside the reported work.

Data availability

No new data were generated or analyzed in support of this research.

References

1. Rule A, Melnick ER, Apathy NC. Using event logs to observe interactions with electronic health records: an updated scoping review shows increasing use of vendor-derived measures. J Am Med Inform Assoc. 2022;30(1):144-154.
2. Rule A, Chiang MF, Hribar MR. Using electronic health record audit logs to study clinical activity: a systematic review of aims, measures, and methods. J Am Med Inform Assoc. 2020;27(3):480-490.
3. Overhage JM, McCallie D. Physician time spent using the electronic health record during outpatient encounters: a descriptive study. Ann Intern Med. 2020;172(3):169-174.
4. Holmgren AJ, Downing NL, Bates DW, et al. Assessment of electronic health record use between US and non-US health systems. JAMA Intern Med. 2021;181(2):251-259.
5. Holmgren AJ, Downing NL, Tang M, et al. Assessing the impact of the COVID-19 pandemic on clinician ambulatory electronic health record use. J Am Med Inform Assoc. 2022;29(3):453-460.
6. Nath B, Williams B, Jeffery MM, et al. Trends in electronic health record inbox messaging during the COVID-19 pandemic in an ambulatory practice network in New England. JAMA Netw Open. 2021;4(10):e2131490.
7. Steitz BD, Padi-Adjirackor NA, Griffith KN, et al. Impact of notification policy on patient-before-clinician review of immediately released test results. J Am Med Inform Assoc. 2023;30(10):1707-1710.
8. Apathy NC, Hare AJ, Fendrich S, et al. Early changes in billing and notes after evaluation and management guideline change. Ann Intern Med. 2022;175(4):499-504.
9. Maisel N, Thombley R, Overhage JM, et al. Physician electronic health record use after changes in US Centers for Medicare & Medicaid Services documentation requirements. JAMA Health Forum. 2023;4(5):e230984.
10. Holmgren AJ, Thombley R, Sinsky CA, et al. Changes in physician electronic health record use with the expansion of telemedicine. JAMA Intern Med. 2023;183(12):1357-1365. doi:10.1001/jamainternmed.2023.5738
11. Yan Q, Jiang Z, Harbin Z, et al. Exploring the relationship between electronic health records and provider burnout: a systematic review. J Am Med Inform Assoc. 2021;28(5):1009-1021.
12. Johnson KB, Neuss MJ, Detmer DE. Electronic health records and clinician burnout: a story of three eras. J Am Med Inform Assoc. 2021;28(5):967-973.
13. Adler-Milstein J, Zhao W, Willard-Grace R, et al. Electronic health records and burnout: time spent on the electronic health record after hours and message volume associated with exhaustion but not with cynicism among primary care clinicians. J Am Med Inform Assoc. 2020;27(4):531-538.
14. Dyrbye LN, Gordon J, O'Horo J, et al. Relationships between EHR-based audit log data and physician burnout and clinical practice process measures. Mayo Clin Proc. 2023;98(3):398-409.
15. Rotenstein LS, Holmgren AJ, Healey MJ, et al. Association between electronic health record time and quality of care metrics in primary care. JAMA Netw Open. 2022;5(10):e2237086.
16. Zheng K, Ratwani RM, Adler-Milstein J. Studying workflow and workarounds in electronic health record–supported work to improve health system performance. Ann Intern Med. 2020;172(11 Suppl):S116-S122.
17. Adler-Milstein J, Adelman JS, Tai-Seale M, et al. EHR audit logs: a new goldmine for health services research? J Biomed Inform. 2020;101:103343.
18. Melnick ER, Ong SY, Fong A, et al. Characterizing physician EHR use with vendor derived data: a feasibility study and cross-sectional analysis. J Am Med Inform Assoc. 2021;28(7):1383-1392.
19. Rotenstein LS, Holmgren AJ, Downing NL, et al. Differences in total and after-hours electronic health record time across ambulatory specialties. JAMA Intern Med. 2021;181(6):863-865.
20. 2015 Edition Health IT Certification Criteria. 45 C.F.R. § 170.315. 2015. Accessed December 22, 2023. https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-D/part-170/subpart-C/section-170.315
21. Steitz BD, Sulieman L, Wright A, et al. Association of immediate release of test results to patients with implications for clinical workflow. JAMA Netw Open. 2021;4(10):e2129553.
22. Baxter SL, Apathy NC, Cross DA, et al. Measures of electronic health record use in outpatient settings across vendors. J Am Med Inform Assoc. 2021;28(5):955-959.
23. Cohen GR, Boi J, Johnson C, et al. Measuring time clinicians spend using EHRs in the inpatient setting: a national, mixed-methods study. J Am Med Inform Assoc. 2021;28(8):1676-1682.
24. Richwine C, Patel V. Trends in electronic health record capabilities for tracking documentation time. Am J Manag Care. 2023;29(1):50-55.
25. Arndt BG, Micek MA, Rule A, et al. Refining vendor-defined measures to accurately quantify EHR workload outside time scheduled with patients. Ann Fam Med. 2023;21(3):264-268.
26. Orenstein EW, Kandaswamy S, Muthu N, et al. Alert burden in pediatric hospitals: a cross-sectional analysis of six academic pediatric health systems using novel metrics. J Am Med Inform Assoc. 2021;28(12):2654-2660.
