AEM Education and Training. 2020 Aug 9;5(2):e10501. doi: 10.1002/aet2.10501

“EMERGing” Electronic Health Record Data Metrics: Insights and Implications for Assessing Residents’ Clinical Performance in Emergency Medicine

Stefanie S Sebok‐Syer 1, Lisa Shepherd 2, Allison McConnell 2, Adam M Dukelow 2, Robert Sedran 2, Lorelei Lingard 3
Editor: Teresa Chan
PMCID: PMC8052996  PMID: 33898906

Abstract

Objectives

Competency‐based medical education requires that residents are provided with frequent opportunities to demonstrate competence as well as receive effective feedback about their clinical performance. To meet this goal, we investigated how data collected by the electronic health record (EHR) might be used to assess emergency medicine (EM) residents’ independent and interdependent clinical performance and how such information could be represented in an EM resident report card.

Methods

Following constructivist grounded theory methodology, individual semistructured interviews were conducted in 2017 with 10 EM faculty and 11 EM residents across all 5 postgraduate years. In addition to open‐ended questions, participants were presented with an emerging list of EM practice metrics and asked to comment on how valuable each would be in assessing resident performance. Additionally, we asked participants the extent to which each metric captured independent or interdependent performance. Data collection and analysis were iterative; analysis employed constant comparative inductive methods.

Results

Participants refined and eliminated metrics as well as added new metrics specific to the assessment of EM residents (e.g., time between signup and first orders). These clinical practice metrics based on data from our EHR database were organized along a spectrum of independent/interdependent performance. We conclude with discussions about the relationship among these metrics, issues in interpretation, and implications of using EHR for assessment purposes.

Conclusions

Our findings document a systematic approach for developing EM resident assessments, based on EHR data, which incorporate the perspectives of both clinical faculty and residents. Our work has important implications for capturing residents’ contributions to clinical performances and distinguishing between independent and interdependent metrics in collaborative workplace‐based settings.


With our adoption of competency‐based medical education (CBME) comes the need to generate more assessments that reflect residents’ clinical performance. Recently, several researchers have advocated for the use of electronic health record (EHR) data to provide insights into residents’ clinical actions and decisions. 1 , 2 , 3 , 4 , 5 , 6 However, despite this endorsement of the secondary use of EHR data for educational purposes, the challenges of identifying which measures to use and how they should be reported and interpreted have led to an underutilization of the EHR as a source of data for assessing resident performance.

The EHR can be defined as a report of a patient’s health, which contains details about each patient’s medical care and treatments. 7 Although the EHR is intended to contain details about individual providers and the treatment and care they provide to patients, the quality and completeness of such information can be tenuous 8 , 9 , 10 , 11 given the misattribution of clinical actions that can occur in the EHR. 12 Recent literature has also highlighted that EHR data often tend to represent the clinical performance of a team or reflect system‐level decisions surrounding patients’ medical care. 5 None of these issues are surprising given the collaborative, team‐based approach of providing health care 13 , 14 and the supervisory relationship in medicine’s workplace‐based training model, 15 , 16 , 17 but the above issues challenge the assumption that EHR data can be seamlessly used as a representation of a physician’s individual performance.

To address this assumption, recent efforts have distinguished between contribution (i.e., how much an individual contributed to patient care) and attribution (i.e., which actions can more likely be attributed to a single individual) as a way of conceptualizing the interdependent nature of clinical performance. 18 , 19 This interdependence is not limited to human entities: medical directives, technology, and organizational systems also shape clinical performance. 20 Recently, Schumacher and colleagues 21 argued that viewing attribution and contribution as a continuum is most effective for assessing educational outcomes at any level.

Integrating such emerging conceptualizations with existing knowledge of EHRs, we argue that EHR data cannot be taken as a straightforward or transparent indicator of resident performance. Resident clinical metrics must be selected based on a systematic process of consultation, attention to contextual factors, and knowledge of how the EHR logs such performance data. 3 Residents’ contributions to clinical practice metrics and their independent and interdependent clinical interactions are documented, in various ways and to varying degrees, in the EHR. Therefore, efforts to use the EHR for assessing residents’ clinical performance must start by distinguishing independent and interdependent performances within clinical training contexts and follow up by considering how these performances are captured within the EHR. To this end, this study investigated: Which performance metrics, based on data extracted from the EHR, do faculty and residents perceive as suitable for assessing emergency medicine residents’ independent and interdependent clinical actions and decisions?

METHOD

This study was approved by our institutional health sciences research ethics board. Using constructivist grounded theory, we explored what data could be extracted from the EHR to reflect both independent and interdependent clinical performance. 22 Given our interest in how independent and interdependent performances are enacted in clinical environments, which we considered a social phenomenon, constructivist grounded theory was an appropriate methodologic approach. By employing this methodology, rather than using a consensus process such as a Delphi, we could identify practice metrics that faculty and residents view as most valuable for assessing EM residents’ independent and interdependent clinical actions and decisions as well as gain insight into the socially constructed realities of interdependence. Our research team includes experts in measurement and assessment (SSS), teamwork (LL), clinical teaching and supervision (LL, LS), EM (AD, LS, AM, RS, SSS), EM administrative leadership (AD, RS), and qualitative research methodology (LL).

Employing purposive sampling throughout the 2017 calendar year, SSS conducted individual interviews with 10 emergency medicine (EM) faculty (junior faculty = 3; mid‐career faculty = 3; and senior faculty = 4) and 11 Royal College EM program PGY‐1 to ‐5 residents (PGY‐1 = 1; PGY‐2 = 2; PGY‐3 = 2; PGY‐4 = 3; and PGY‐5 = 3). We selected the specialty of EM for three reasons. First, faculty are physically present with residents at all times while on shift, which could lead to greater insight about what clinical work is being done both independently and interdependently. Second, individuals working within EM have contact with and are required to communicate with most other clinical specialties, thus affording them awareness of aspects of independent and interdependent performance outside their own setting. Third, many residents rotate through EM for core and subspecialty training, suggesting breadth across specialties in terms of understanding the emergency department’s clinical context and the opportunities for independent and interdependent performance. We selected a sample of faculty at various points in their careers and residents from all postgraduate years because our goal was to identify independent and interdependent clinical practice metrics, captured within the Cerner database (the EHR system used at this institution), across a spectrum of learners and to describe features of those metrics. E‐mails were sent by SSS to all faculty and residents at a single midsized Canadian Royal College–accredited EM residency program inviting them to voluntarily participate in a 30‐ to 45‐minute semistructured individual interview (see Data Supplement S1, Appendixes S1 and S2 [available as supporting information in the online version of this paper, which is available at http://onlinelibrary.wiley.com/doi/10.1002/aet2.10501/full], for interview guides). Data collection occurred iteratively. We ceased data collection when we reached theoretical sufficiency, which did not mean that no new ideas would have been identified with more data collection, but rather that we had achieved sufficient data collection to describe a preliminary set of independent and interdependent clinical performance metrics. 23 , 24

During the interview, participants were asked to identify metrics that reflected residents’ clinical actions and decisions and to describe how those were represented in the EHR. Participants were then presented with an emergent list of practice metrics, which was generated by the participants themselves, and asked to comment on how valuable they would be in assessing residents’ clinical performance. Participants had the opportunity to add, edit, or remove metrics from this list. The list presented to participants was initially developed using only the EM‐specific data metrics identified by EM faculty and residents in our pilot study; 3 this list was revised by our first participant in this study and continually refined based on subsequent participant interviews. The final list of practice metrics can be found in Table 1. After each participant reviewed the list and provided reasons for why each metric was valuable or not, they were asked to go through the list one final time and indicate to what extent they felt the metric accurately represented independent or interdependent performance. We specifically asked participants to classify the metrics and gauge where these metrics exist along the spectrum of independence and interdependence; however, the vast majority of our participants interpreted this question as binary, thus classifying each metric as independent or interdependent. In asking participants to comment on the independence/interdependence of metrics, we gained insight into the conditions that enable or constrain one’s clinical performance and shape the way in which individual physicians view a particular performance metric. All interviews were recorded, professionally transcribed verbatim, and deidentified prior to data analysis.

Table 1.

Finalized List of Emergency Medicine Performance Metrics Endorsed by Faculty and Residents

Performance Metric | % of Independent Responses | % of Interdependent Responses
Turnaround time between signup and first orders (n = 16)* | 94 | 6
Time from orders/reassessment to disposition (n = 17) | 76 | 24
Number of patients seen (n = 20) | 70 | 30
Cost of laboratory tests (n = 18) | 56 | 44
Ordering of tests (n = 20)§ | 50 | 50
Ordering of medications (n = 19) | 47 | 53
Number of narcotics prescribed (n = 19) | 47 | 53
Time to antibiotics (n = 19) | 42 | 58
Time between signup and disposition (n = 15) | 33 | 67
Time to fluids (n = 19) | 16 | 84
Bounce‐back rates (n = 20) | 15 | 85
Length of stay (n = 19)* | 0 | 100

The number of responses is not the same across all metrics because the list was iteratively revised after each participant interview. Those with smaller numbers are a result of the metric being added later in the process of creating this list. Also, the first participants helped generate the initial list and, therefore, were not asked whether the metrics represented independent or interdependent performance.

* Higher levels of agreement among participants.
† Moderate level of agreement among participants.
‡ Some level of agreement among participants.
§ Little level of agreement among participants.

We conducted constant comparative inductive analysis using an iterative process in which data collection and data analysis were concurrent. Following constructivist grounded theory, 22 we engaged in three analytical stages of coding: initial, focused, and theoretical. Initial and focused coding (SSS) took place iteratively as transcripts became available. Initial coding consisted of reading the interview transcripts line by line to identify ideas. Building upon our initial codes, focused coding was then used to highlight concepts or themes within the transcripts. In regular meetings with the analysis group (SSS and LL), transcripts were reviewed, themes were discussed, and the entire data set was recoded (SSS) with special attention to instances that challenged consistency within a category. Once our data were organized accordingly, theoretical coding explored the relationships and nuances of these categories. We used the concepts of attribution and contribution, as described previously in the literature, as sensitizing concepts during the theoretical coding process to help draw our attention to important features of social interactions. The findings from our theoretical coding were presented to the entire research team for further refinement.

RESULTS

Overall, our participants were enthusiastic about providing their perspectives on what EHR data could, and should, be used for assessing EM residents’ clinical performance. In analyzing these descriptions, we gained insight into relationships among various practice metrics as well as issues that arise when trying to interpret them. Throughout the results section, faculty participants are identified by EMF# and residents by EMR#.

Participants identified and refined a variety of metrics that would be important to consider when assessing EM residents’ clinical performance. Some metrics seemed more obvious than others; for example, this EM resident felt that knowing the number of patients one sees per hour was important:

I would probably want to know what my peers are doing as per number of patients per hour, on average, per shift, that kind of thing as well. So, are they seeing two patients per hour on their shift, are they seeing .5 patients per hour, etcetera, and then based on their PGY level, I think that would be important (EMR11).

This participant went on to say that knowing the acuity level of the patients seen is also salient. The combination of number of patients per hour with acuity (e.g., Canadian Triage and Acuity Scale [CTAS] level) helps describe the clinical context:

And I'd also want to know what sort of CTAS’s my peers are seeing as well on average per shift, do you know what I mean, like, a CTAS‐1, 2, 3, etcetera, but I think that’s the thing, if you're going to give those metrics of, like, I see two patients an hour, or one patient per hour, I think you have to put into context of how complex the patient is (EMR11).

Both faculty and residents realized the importance of contextualizing data‐driven practice metrics to make them meaningful for learning and assessment.
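To make contextualization concrete, the following minimal sketch (in Python) reports a patients‐per‐hour rate alongside the CTAS acuity mix for a shift. The record fields, function names, and example values are hypothetical illustrations for this paper, not the data model used in our EHR.

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Shift:
    """One resident shift reconstructed from EHR signup events (hypothetical fields)."""
    resident_id: str
    hours: float  # shift length in hours
    ctas_levels: List[int] = field(default_factory=list)  # CTAS level (1-5) per patient signed up

def shift_summary(shift: Shift) -> Tuple[float, Counter]:
    """Return patients per hour together with the acuity mix needed to interpret that rate."""
    rate = len(shift.ctas_levels) / shift.hours if shift.hours else 0.0
    return rate, Counter(shift.ctas_levels)

# Example: 8 patients over a 6-hour shift, mostly CTAS 2-3
rate, mix = shift_summary(Shift("EMR11", 6.0, [2, 3, 3, 2, 4, 3, 2, 5]))
print(f"{rate:.1f} patients/hour; acuity mix: {dict(sorted(mix.items()))}")
```

Pairing the raw rate with the acuity mix keeps the number interpretable across shifts of very different complexity.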

When asked specifically about independence in EM, all of our participants described EM as a staff‐dependent specialty, meaning that faculty are physically in the hospital 24 hours a day, 7 days a week, and thus trainees are never given the same opportunities for independence that one might see in other specialties (e.g., internal medicine residents on an overnight call shift). A main finding was that participants, acknowledging the interdependence of much clinical work in EM, struggled to identify opportunities where residents could demonstrate independence. That said, a recurring metric thought to represent independence, as conceptualized in EM, was the turnaround time between when a resident signs up for a patient and when the first orders are entered. Participants generated this metric by reflecting on what residents do (or are expected to do) before the faculty sees a patient: sign up for the patient, take the patient’s history, and perform a physical examination. When later participants were presented with this performance metric, one faculty member articulated why it would be valuable:

Absolutely. No, I think that would be hugely valuable, and it [turnaround time between when a patient is signed up for and when those first orders are made] speaks to something like the time to fluids and the time to antibiotics. I would put those in there, and in some patients, those are extremely important to get in. It can make the difference between life and death very quickly so I think that’s an important metric. If that time was consistently 30 minutes for a senior resident, I think that would be concerning so I think it’s a good indicator … This essentially is a surrogate for how long does it take you to talk to a patient and their family and get a bit of a sense of what’s going on? I think a PGY‐1, it might take 30 minutes. A PGY‐5, it shouldn’t take more than, honestly, 10. I usually know what I want to order within a minute or 1½ minutes of seeing somebody (EMF10).
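As a rough illustration of how such timestamp‐based metrics might be derived from EHR event logs, the sketch below computes the interval from signup to the first subsequent order. The function, timestamps, and event structure are assumptions for illustration, not a description of the Cerner extract used in this study.

```python
from datetime import datetime, timedelta
from typing import List, Optional

def turnaround_to_first_order(signup_time: datetime,
                              order_times: List[datetime]) -> Optional[timedelta]:
    """Time from a resident signing up for a patient to that patient's first order, if any follows."""
    after_signup = [t for t in order_times if t >= signup_time]
    return (min(after_signup) - signup_time) if after_signup else None

# Example: signup at 14:02; orders entered at 15:40 and 14:19 -> turnaround of 17 minutes
signup = datetime(2017, 5, 4, 14, 2)
orders = [datetime(2017, 5, 4, 15, 40), datetime(2017, 5, 4, 14, 19)]
delta = turnaround_to_first_order(signup, orders)
print(f"Turnaround: {delta.total_seconds() / 60:.0f} minutes")
```

Time to antibiotics or time to fluids could be derived analogously by substituting the timestamp of the relevant order type.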

Other practice metrics, however, were viewed as less valuable or less clear in terms of whether they assess independent or interdependent performance. For instance, cosigning of orders “as an indicator for residents [was seen as] probably useless” (EMF7) because cosigning often happens verbally rather than through the EHR and is more often utilized with medical students. Another metric that was seen as less relevant was whether residents looked beyond the past week of a patient’s history. This metric appears to capture EHR fluency more than clinical insight, as many residents had learned early on to adjust the EHR default settings: “I always have my default as set to ten years and 999 test results” (EMR5). These are a few examples of metrics that were on the list at one point but did not make the final list.

Emerging from our data was a consolidated list of EHR practice metrics, generated by faculty and residents, which would be useful in assessing EM residents’ clinical actions and decisions. The finalized list of metrics is presented in Table 1.

Table 1 also shows our participants’ level of agreement or disagreement regarding the characterization of these clinical practice metrics; metrics with the same level of agreement are denoted with the same symbol. For example, “ordering of tests” is denoted with a section symbol to represent the highest level of disagreement. In this instance, our participants were evenly split (50/50) as to whether this metric represents independent or interdependent performance. On the other hand, participants were very confident that “turnaround time between signup and first orders” better represents independent performance (94/6) and that “length of stay” captures more interdependent performance (0/100); both are denoted with an asterisk to reflect the higher levels of agreement among participants. In general, we did not observe group‐level differences between faculty and residents in terms of whether they believed a metric was more independent or interdependent. However, we did gain insight into the conditions supporting why individual participants felt a particular metric should be categorized as independent or interdependent. For instance, “ordering of tests or medications” was viewed as independent if, and only if, the resident placed the order before consulting with faculty or another member of the health care team. Participants described that, depending on factors such as PGY, preferences of the supervising faculty, and overall entrustment in a particular resident, most orders for medications and tests were placed only after they were reviewed by faculty, thus representing interdependent practice despite the appearance of an independent action within the EHR system. These findings also suggest that, for some aspects of clinical performance, interdependence is in the eye of the beholder, shaped by a physician’s experiences and the hospital workplace environment.

In addition to commenting on whether particular practice metrics represented independent or interdependent performance, many participants spoke about the relationships that exist between various practice metrics, the issues associated with interpreting various metrics, and the implications of using EHR data for assessment purposes. These themes are further explained throughout the results section.

Relationships Among Practice Metrics

Although participants were generally supportive and optimistic about the practice metrics they identified, their descriptions of how various metrics relate to one another suggest that it might be inappropriate to assess any one metric in isolation from the others. Take this senior resident, for example, who explains the relationship between consult rates and bounce‐back rates and the impact that focusing on individual metrics can have on the system of health care delivery:

Exactly, so if I see a patient with chest pain that is vague, and the workup is negative, I can still consult the cardiology service. They may perceive this as a low risk or a low pretest probability for a cardiac problem for the chest pain but once I consult them they cannot refuse to see the patient. They have to see this patient. It’s a one‐way consultation protocol here. They may come to see the patient and send them home immediately, but my bounce‐back rate will be zero because the disposition has been to an inpatient service. The problem with that is now cardiology has more work load. They may do more investigations leading to an adverse event and the patient has to be in hospital longer. It slows down the department, so not always the right choice but that’s one way you could kind of artificially reduce your bounce‐back rate (EMR2).

As this example suggests, individual EHR metrics used for resident assessment need to be counterbalanced to discourage behaviors that artificially improve one metric at the expense of others and undermine the educational value of assessing aspects of clinical performance.
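A minimal sketch of what such counterbalancing might look like in a resident report, assuming hypothetical encounter records: the bounce‐back rate is only ever reported alongside the consult rate, so a low bounce‐back figure achieved by consulting liberally (as EMR2 describes) remains visible rather than hidden.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Disposition:
    """Outcome of one ED encounter as it might be reconstructed from EHR data (hypothetical fields)."""
    resident_id: str
    consulted: bool     # referred to an inpatient or consulting service
    bounced_back: bool  # unplanned return visit within the tracking window

def counterbalanced_rates(dispositions: List[Disposition],
                          resident_id: str) -> Dict[str, Optional[float]]:
    """Report a resident's bounce-back rate and consult rate together, never in isolation."""
    own = [d for d in dispositions if d.resident_id == resident_id]
    if not own:
        return {"bounce_back_rate": None, "consult_rate": None}
    n = len(own)
    return {
        "bounce_back_rate": sum(d.bounced_back for d in own) / n,
        "consult_rate": sum(d.consulted for d in own) / n,
    }
```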

Issues With Interpretation

Although all of our participants were in favor of using EHR data as part of the assessment of residents’ clinical performance, they offered varied responses regarding how EHR metrics should be interpreted. One resident commented that metrics that are the same for faculty and residents would need to be interpreted differently because of the interdependence that exists between residents and their supervisors:

I think the only hard part is the staff, incorporated into the staff numbers is the residents that are with them. That’s not incorporated into our numbers. So, if you’re my staff, if you see, let’s say you were on for a six‐hour shift, and you see 12 patients and I see 10 patients, then it looks like you saw 22 patients on the staff report card. But on my report card, it still would look like 10 patients (EMR10).

Another resident explained that, when you think of metrics in terms of all health care providers, a data point may mean something different depending on the provider’s purpose, and although the data may appear clean and precise, the interpretation might not be:

Well just, for example, when you think of bounce‐back rates, you’re thinking, oh, people I sent home inappropriately when really, there are all these other things that bounce back it would encompass. It’s just that sometimes, what you think you’re capturing you’re not actually sitting and thinking of all the people that that data point is actually capturing. It’s the same thing with ordering tests and cancelling tests later. Is it because it was an inappropriate test? Is it because it was a [EHR system] error? Was it because two services accidentally ordered the same thing? It’s just maybe on the surface a little bit more than what you actually think you’re capturing (EMR4).

As these examples suggest, the system‐level aspects of the clinical environment cannot be ignored. To preserve the value of EHR data for assessment purposes, we need to approach these metrics the same way we approach other assessments: by gathering evidence that supports both the validity arguments and the resulting interpretations.

Implications of Using EHR Data for Assessment Purposes

Many faculty and residents emphasized the implications of collecting and using EHR data for assessing residents. Some participants insisted that, if we are going to use EHR data to create metrics for EM residents, we need to: 1) be prepared to dive deeper into these metrics for particular residents, 2) provide coaching and feedback both on the clinical performance captured by a metric and on how to interpret these EHR data metrics in the context in which EM residents clinically practice, and 3) anticipate unexpected consequences. As this EM faculty member stated:

You have to decide what the goal of the [report] card is or the indicator. If you want to say that I cost more per patient in lab tests than another person, like say, Name‐X, or whatever, that’s fine. If I have a desire to improve that or change that, then I need the deep dive as to where am I ordering more. So, is it in chest pain or is it in abdominal pain, so I have to something to focus on. Similar to the consult rate. If my consult rate is twice that of everybody else’s, I can’t work on that until I know where. Is it all patients, is it medical patients, surgical, certain type of surgery, that sort of stuff? And so when you roll things like this out, the person who is doing it and the leader needs to be prepared to be able to go deeper on certain metrics for people that want to improve, as opposed to trying to pick one area. Because you may order more tests in chest pain but be ordering a lot less tests in all these other areas, so it’s a balance. The hardest part of any of this, I think, although getting electronic data to flow and designing your indicators originally, getting feedback on them, is all very challenging. The human resource‐intensive portion is the meeting with people afterwards to go over the results, answering questions, coaching people, and then deep diving into some of the metrics people want to improve (EMF8).

Using EHR data for assessment requires faculty buy‐in from those who are willing and able to help residents understand that these practice metrics are not the only way to assess one’s clinical performance.

DISCUSSION

The systematic approach we used to collect and analyze data for this study is distinct from other educational EHR studies, 1 , 2 , 4 , 5 , 6 which have focused more on what could be attained from the EHR rather than on what faculty and residents want from EHR metrics for education and assessment purposes. Current attempts at developing EHR metrics often mirror those used for faculty (e.g., length of stay), which are limited when assessing residents due to the high proportion of misattribution and the inability to account for the interdependent nature of patient care. Our study highlights interdependence 19 (the inseparable ties that exist between residents and their supervisors, fellow learners, or other health care professionals) and elucidates the potential, and the challenge, of using the EHR as a tool for resident assessment by providing insights about which metrics EM faculty and residents deem acceptable for educational purposes and which metrics require further refinement to be meaningful. For example, total length of stay would be valuable information for a resident from a systems perspective, but from an individual perspective the time between signup and first orders better reflects the clinical actions of an individual resident. A recent article by Pines and colleagues 25 outlined factors important to top clinical performance in EM; among the top EM‐specific skills and behaviors are clinical efficiency and adaptability, sufficient knowledge, and communication with staff/colleagues. Data from our participants highlighted many of these skills and behaviors, most notably metrics related to measures of clinical efficiency, which affords us the opportunity to describe and assess both independent and interdependent aspects of clinical competence within the context of team‐based clinical care. Our work takes the first step along this path, which we anticipate can eventually yield competency‐based assessments that reflect a shared understanding of the interdependent contributions of health care professionals and acknowledge how the performance of others impacts one’s own clinical performance.

The practice of using EHR data to reflect physician performance has been described previously. 26 , 27 Novel in this study is our systematic exploration of the potential of EHRs as a source of data to assess residents’ independent and interdependent clinical performance. A fundamental challenge of using EHR data to assess residents specifically is that their performance is often coupled with that of other team members, most strongly with their faculty supervisor, in addition to the systems in which they work. 19 , 20 Despite the tacit understanding that most practice metrics capture interdependent clinical actions and decisions, EHR metrics are often used to represent an individual provider’s performance without accounting for the influence of other members of the team. 5 , 14 The fact that the majority of our participants viewed most clinical performance as at least somewhat interdependent is one of the main findings of this study. Despite our participants’ attempts to identify independent EM clinical performance captured by the EHR, the majority of metrics that faculty and residents in our study cared about were interdependent. Therefore, we argue that to use EHR data appropriately for resident assessment, we need to acknowledge the interdependence of many metrics. This may mean that some metrics are not appropriate for resident feedback and that others need to be accompanied by thoughtful, formative feedback to help residents understand their individual responsibilities and contributions within interdependent metrics. The interdependence of many of the metrics in our study also suggests that our individualist focus in a CBME framework is in tension with the realities of how our health systems approach and deliver patient care. Until our assessment approaches confront and resolve this tension, we will risk distorted or incomplete assessments of our residents in the clinical environments in which they train and practice.

Our participants’ conceptualizations of interdependence warrant particular attention because this is essential to assessing both residents’ individual competence and their ability to contribute to a competent team. When asked whether specific clinical actions and decisions were more reflective of independent or interdependent performance, participants varied in their responses: they did not see interdependence the same way. Of the list of 12 practice metrics we developed, only one metric (i.e., length of stay) had complete agreement among all participants, and it was believed to represent interdependent clinical performance. Overall, participants had a difficult time defining and discriminating between independent and interdependent performance because the extent to which independence/interdependence existed in practice depended upon a variety of contextual factors. For instance, participants varied in terms of their awareness of the team‐based approach used in EM practice. Senior residents and faculty tended to see clinical performance as more interdependent because they had a greater understanding of the role of various team members and of the health care system as a whole, whereas interns and junior residents were less aware not only of their team but also of the various ways in which faculty provide clinical supervision. This finding also suggests that differences in workplace‐based clinical training (e.g., different hospitals in a health network) and rotation experiences may offer a slightly different lens for viewing clinical performance metrics.
Perhaps interdependence is a multifaceted construct that varies depending on whether one focuses on the individual or the team in the delivery of patient care; one could argue that learning to “work in interprofessional teams to enhance patient safety and improve patient care” 28 develops throughout postgraduate training.

As we gain a richer understanding of how interdependence (i.e., how individuals within a team interact with one another) is conceptualized by various stakeholders, we will increase our ability to precisely assess independent and interdependent aspects of clinical performance. Along with that achievement comes the possibility of providing residents with timely, personalized feedback about how their clinical actions contribute to the competent performance of a health care team. This shifts the focus of clinical competence to not only assessing what learners can do independently, but also describing how their performance influences, or is influenced by, other members of the health care team. Furthermore, given that we tend to compare ourselves to our peers, we need to remember that in team situations the individual with the top metrics is not always the best physician, and the one with average metrics can be the catalyst that enables the clinical actions of others. As educators, we have a responsibility to help our residents understand, interpret, and apply clinical performance data so that they can identify what is needed to practice safely and efficiently as they adapt to various clinical environments. By engaging in this level of assessment and feedback, which includes tailoring and providing individualized reports of residents’ clinical performance such as those based on EHR data, we can transform the possibility of authentically assessing clinical performance into reality. In doing so, we can identify areas for residents to improve earlier, raise awareness about the affordances provided by other team members, and appropriately match residents with coaches and mentors.

LIMITATIONS

First, our findings reflect perspectives of the faculty and residents within an EM program at a single midsized Canadian medical school; assessing their transferability to other contexts requires consideration of contextual factors, particularly the EHR system. Our hospitals and health systems currently use Cerner for EHR data management, which may limit transferability of our insights to medical groups, hospitals, and health care organizations who use other EHR software (e.g., Epic), because their features and modules vary. Other contextual aspects such as billing structure, departmental culture, and leadership priorities may also impact what EHR metrics are perceived and interpreted as valuable. Further research should explore the resonance of our proposed practice metrics and the conceptualization of interdependence in other academic health science settings. Second, the participant‐generated list of performance metrics was developed organically based on what EHR data our sample of faculty and residents felt would be most useful in guiding educational efforts. Additional work is needed to determine how the elements of such EHR metrics would fit within existing CBME frameworks such as CanMEDS or ACGME to evaluate residents’ overall competence in a given specialty. Finally, our deliberate decision to sample faculty and residents from EM allowed us to describe nuances and contextual factors unique to this specialty. Future studies should consider exploring other clinical specialties, especially those where residents are indirectly supervised, to elaborate and refine our understanding about which metrics are useful for capturing independent and interdependent clinical performance.

CONCLUSION

For the electronic health record to realize its potential as a tool for assessing residents’ clinical performance, a rich understanding of the affordances and limitations of current electronic health record metrics is imperative. We require approaches that can assess both learners’ independent and interdependent clinical performance. By attending to both independence and interdependence in the interpretation of electronic health record metrics, we can foster precision in our assessments and optimize the role the electronic health record can have in contributing valuable feedback to learners in a timely fashion.

The authors acknowledge and thank the faculty and residents for their participation in this study.

Supporting information

Data Supplement S1. Supplemental material.

AEM Education and Training 2021;5:1–10

Presented at the Canadian Association of Emergency Physicians (CAEP) Annual Meeting, Calgary, Alberta, Canada, May 2018; the 2nd World Summit on Competency‐Based Medical Education, Basel, Switzerland, August 2018; the International Conference on Residency Education, Halifax, Nova Scotia, October 2018; and the Society for Academic Emergency Medicine (SAEM) Western Regional Meeting, Napa, CA, March 2019.

This study was supported by grants from the Schulich School of Medicine & Dentistry Dean’s Research Innovation Grant.

The authors have no potential conflicts to disclose.

This study was approved by the institutional Health Sciences Research Ethics Board on September 30, 2016 (File Number: 108391).

Author contributions: SSS and LL made substantial contributions to the conception and design and the acquisition, analysis, and interpretation of data; drafted the work; provided final approval; and are accountable for all aspects of the work. LS, AMcC, AMD, and RS made substantial contributions to the acquisition, analysis, and interpretation of data; revised the work; provided final approval; and are accountable for all aspects of the work.

References

1. Levin JC, Hron J. Automated reporting of trainee metrics using electronic clinical systems. J Grad Med Educ 2017;9:361–5.
2. Rajkomar A, Ranji SR, Sharpe B. Using the electronic health record to identify educational gaps for internal medicine interns. J Grad Med Educ 2017;9:109–12.
3. Sebok‐Syer SS, Goldszmidt M, Watling CJ, Chahine S, Venance SL, Lingard L. Using electronic health record data to assess residents’ performance in the clinical workplace: the good, the bad, and the unthinkable. Acad Med 2019;94:853–60.
4. Smirnova A, Sebok‐Syer SS, Chahine S, et al. Defining and adopting clinical performance measures in graduate medical education: where are we now and where are we going? Acad Med 2019;94:671–7.
5. Triola MM, Hawkins RE, Skochelak SE. The time is now: using graduates’ practice data to drive medical education reform. Acad Med 2018;93:826–8.
6. Arora VM. Harnessing the power of big data to improve graduate medical education: big idea or bust? Acad Med 2018;93:833–4.
7. Hoerbst A, Ammenwerth E. Electronic health records: a systematic review on quality requirements. Methods Inf Med 2010;49:1–17.
8. Chan KS, Fowles JB, Weiner JP. Electronic health records and the reliability and validity of quality health measures: a review of the literature. Med Care Res Rev 2010;67:503–27.
9. Menachemi N, Collum TH. Benefits and drawbacks of electronic health record systems. Risk Manag Healthc Policy 2011;4:47–55.
10. Sulmasy LS, Lopez AM, Horwitch CA. Ethical implications of the electronic health record: in the service of the patient. J Gen Intern Med 2017;32:935–9.
11. Fry E, Schulte F. Death by a Thousand Clicks: Where Electronic Health Records Went Wrong. Hong Kong: Fortune Media IP Limited; 2017. Available at: https://fortune.com/longform/medical‐records/. Accessed Mar 18, 2019.
12. Lau BD, Streiff MB, Pronovost PJ, Haider AH, Efron DT, Haut ER. Attending physician performance measure scores and resident physicians’ ordering practices. JAMA Surg 2015;150:813–4.
13. Lingard L. Rethinking competence in the context of teamwork. In: Hodges BD, Anderson MB, editors. The Question of Competence: Reconsidering Medical Education in the Twenty‐first Century. Ithaca, NY: Cornell University Press, 2012:42–69.
14. ten Cate O, Chen HC. The parts, the sum and the whole – evaluating students in teams. Med Teach 2016;38:639–41.
15. Fuglestad MA, Schenarts PJ. Supervision is not education: the dark side of remote access to the electronic health record. J Grad Med Educ 2017;9:714–5.
16. Goldszmidt M, Faden L, Dornan T, van Merrienboer J, Bordage G, Lingard L. Attending physician variability: a model of four supervisory styles. Acad Med 2015;90:1541–6.
17. Hauer KE, ten Cate O, Boscardin C, Irby DM, Iobst W, O’Sullivan PS. Understanding trust as an essential element of trainee supervision and learning in the workplace. Adv Health Sci Educ Theory Pract 2014;19:435–56.
18. Van Melle E, Gruppen L, Holmboe ES, et al. Using contribution analysis to evaluate competency‐based medical education programs: it’s all about rigor in thinking. Acad Med 2017;92:752–8.
19. Sebok‐Syer SS, Chahine S, Watling CJ, Goldszmidt M, Cristancho S, Lingard L. Considering the interdependence of clinical performance: implications for assessment and entrustment. Med Educ 2018;52:970–80.
20. Sebok‐Syer SS, Pack R, Shepherd L, et al. Elucidating system‐level interdependence in electronic health record data: what are the ramifications for trainee assessment? Med Educ 2020;54:738–47.
21. Schumacher DJ, Dornoff E, Carraccio C, et al. The power of contribution and attribution in assessing educational outcomes for individuals, teams, and programs. Acad Med 2020;95:1014–9.
22. Charmaz K. Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis. London: Sage Publications, 2006.
23. Dey I. Grounding Grounded Theory: Guidelines for Qualitative Inquiry. London: Academic Press, 1999.
24. Varpio L, Ajjawi R, Monrouxe LV, O’Brien BC, Rees CE. Shedding the cobra effect: problematising thematic emergence, triangulation, saturation, and member checking. Med Educ 2017;51:40–50.
25. Pines JM, Alfaraj S, Batra S, et al. Factors important to top clinical performance in emergency medicine residency: results of an ideation survey and Delphi panel. AEM Educ Train 2018;2:269–76.
26. Asch DA, Nicholson S, Srinivas S, Herrin J, Epstein AJ. Evaluating obstetrical residency programs using patient outcomes. JAMA 2009;302:1277–83.
27. Hirsch AG, McAlearney AS. Measuring diabetes care performance using electronic health record data: the impact of diabetes definitions on performance measure outcomes. Am J Med Qual 2014;29:292–9.
28. Carraccio C, Benson B, Burke A, Englander R, Guralnick S, Hicks P, et al. The Pediatrics Milestone Project. Chicago: Accreditation Council for Graduate Medical Education, 2017.
