Canadian Medical Education Journal. 2021 Apr 30;12(2):e42–e56. doi: 10.36834/cmej.70943

Exploring resident perceptions of initial competency based medical education implementation


Shivani Upadhyaya,1 Marghalara Rashid,2 Andrea Davila-Cervantes,3 Anna Oswald4
PMCID: PMC8105577  PMID: 33995719

Abstract

Background

Competence by Design (CBD) is a nationally developed hybrid competency-based medical education (CBME) curricular model that focuses on residents’ abilities in order to promote successful practice and better meet societal needs. CBD is based on a commonly used framework of five core components of CBME: outcome competencies, sequenced progression, tailored learning experiences, competency-focused instruction, and programmatic assessment. There is limited literature concerning residents’ perceptions of CBME implementation.

Objective

We explored resident perceptions of this transformation and their views as they relate to the intended framework.

Methods

We recruited residents enrolled in programs undergoing CBME implementation between August 2018 and January 2019. We interviewed residents representing eight disciplines from the first two CBME implementation cohorts. Inductive thematic analysis was used to analyze the data through iterative consensus building until saturation.

Results

We identified five themes: 1) Value of feedback for residents; 2) Resident strategies for successful Entrustable Professional Activity observation completion; 3) Residents experience challenges; 4) Resident concerns regarding CBME; and 5) Resident recommendations to improve existing challenges. We found that while there was clear alignment with residents’ perceptions of the programmatic assessment core CBME component, alignment was not as clear for other components.

Conclusions

Residents perceived aspects of this transformation as helpful but overall had mixed perceptions and variable understanding of the intended underlying framework. Understanding and disseminating successes and challenges from the resident lens may assist programs at different stages of CBME implementation.

Background

Competency-based medical education (CBME) was formally introduced in 1978 as an outcomes-based approach organized around competencies centred on societal and patient needs to promote greater social accountability, training flexibility, and learner-centredness.1 Since then, CBME has emerged as a fundamentally different, outcomes-based curricular approach, as opposed to time-based training, to address current challenges in medical training. Some challenges for both residents and educators in the current model include difficulties in identifying the competencies required at each stage of training, identifying where learners may be struggling and creating tailored learning plans for them, and engaging in meaningful assessments that ensure a standard of competence across all specialties.2 While this approach has seen uptake in various ways in different jurisdictions internationally, our understanding of how best to practically implement and crystallize this change has been lacking, and authors have identified conceptual, psychometric, and logistical challenges.3

Components of CBME

CBME is an innovative overarching curricular approach that is implemented differently across programs and countries. However, research has shown that most CBME systems internationally are based on five core components: outcome competencies, sequenced progression, tailored learning experiences, competency-focused instruction, and programmatic assessment.4 CBME aims to empower learners through frequent workplace-based observations with coaching feedback, an emphasis on abilities, a de-emphasis on time-based training, and the promotion of lifelong learning skills.5,6

CBD as a Canadian hybrid model of CBME

The Royal College of Physicians and Surgeons of Canada (RCPSC) is transitioning its 67 postgraduate medical education programs to CBME through a national model called Competence by Design (CBD).7,8 This CBME design is a hybrid model in that it uses time as a resource but is not purely time-free. It is linked to the existing CanMEDS framework, which describes seven roles (Medical Expert, Professional, Communicator, Collaborator, Leader, Health Advocate, and Scholar) that physicians require to demonstrate competence and improve health outcomes for patients and communities.9 This national CBD transition provides the opportunity to study in more detail resident perspectives of the features that support implementation success.

Entrustable Professional Activities

In the CBD model, assessment and teaching are centred on workplace-based assessment through the use of Entrustable Professional Activities (EPAs) and associated CanMEDS milestones that are linked to four developmental stages (transition to discipline, foundation of discipline, core of discipline, and transition to practice).7,8 In this system, the RCPSC requires that EPAs be assessed using an entrustment scale, most commonly the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE), a validated tool for workplace-based assessment of resident competency.10,11 This scale uses five entrustment anchors: I had to do (1), I had to talk them through (2), I had to prompt them from time to time (3), I had to be there just in case (4), and I did not need to be there (5). To fulfill their responsibilities, residents are expected to request EPA observations from their supervisors; the required contexts and completion numbers needed to demonstrate competence are predetermined in national specialty guidelines and judged by program-based competence committees.
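To make the scale concrete, the minimal sketch below (in Python, purely for illustration) encodes the five anchors listed above and counts how many observations of a single EPA reach a chosen anchor. The threshold and helper names are our own assumptions for illustration only; actual competence committee decisions rest on patterns of performance, required contexts, and narrative comments defined in national specialty guidelines, not raw counts.

```python
# Illustrative sketch only (not the RCPSC's actual tooling): the five O-SCORE
# entrustment anchors described above, encoded as data, plus a hypothetical
# helper that counts observations at or above a chosen anchor.

O_SCORE_ANCHORS = {
    1: "I had to do",
    2: "I had to talk them through",
    3: "I had to prompt them from time to time",
    4: "I had to be there just in case",
    5: "I did not need to be there",
}

def count_entrusted(scores: list[int], threshold: int = 4) -> int:
    """Count EPA observations rated at or above the given anchor."""
    return sum(1 for s in scores if s >= threshold)

# Example: eight observations of one EPA; five reach the illustrative threshold.
observations = [3, 4, 5, 2, 4, 5, 3, 4]
print(count_entrusted(observations))  # -> 5
```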

Gaps in exploring learner-experiences and perceptions

Despite the increasingly widespread adoption of CBME, there is limited literature exploring learner experiences and perceptions of the initial stages of implementation. A recent study from 2019 interviewed undergraduate students to determine their educational priorities to guide the design of a CBME curriculum.12 Additionally, while the College of Family Physicians of Canada has implemented its competency-based curriculum (Triple C) for many years, studies of this initiative have focused on exploring program outcomes and faculty assessor experiences.13,14 It is surprising that little literature exists regarding residents’ perspectives, considering they are intended key stakeholders in this new educational model. CBME is intended to encourage trainees’ ongoing engagement in self-assessment and feedback-seeking behaviour and to augment learner-centredness, promoting the development of learners’ systems of self-directed maintenance of competence for unsupervised independent practice.5,6 Learners’ engagement is central to achieving many of these key goals, and the literature suggests that successful CBME implementation depends on empowered and engaged learners at its core.15 Thus, understanding residents’ perceptions of the initial implementation could help guide the future success of this initiative.

Objectives

The purpose of this study was to explore resident perceptions of the initial stages of CBME implementation and to identify whether the curriculum was being understood as intended, based on the core components framework.

Methods

Study design

A qualitative design, grounded in the principles of naturalistic inquiry, was used to explore residents’ experiences with CBME and capture their in-depth perceptions, following the descriptive qualitative methodology described by Colorafi et al.,16 who outline the key steps for researchers carrying out such a design. We followed these steps in our data collection and analysis. The descriptive qualitative design was chosen because it can be used with a wide range of sampling and data collection approaches and because it aligned best with our aim of capturing purely descriptive accounts of our participants at the initial stages of CBME implementation.17

Setting

The RCPSC is transitioning its 67 specialty postgraduate medical education programs to CBME through its national CBD model. To ease the transition, a staggered roll-out has been scheduled. This roll-out started in July 2017, with CBD implemented in Otolaryngology Head and Neck Surgery (OHNS) and Anesthesiology programs across the country. This was followed closely in July 2018 by CBD implementation in Medical Oncology, Surgical Foundations, Urology, Emergency Medicine, Forensic Pathology, and Adult Nephrology programs nationally, with other programs set to transition similarly in cohorts thereafter. This study was conducted at a single Canadian university.

Participants and sampling

Residents in programs already involved in the national implementation of CBME were recruited between August 2018 and January 2019 using convenience sampling. All eligible residents were approached and invited by email. Written informed consent was obtained from all participants. Interviews were conducted between September and October 2018. The University of Alberta Research Ethics Board and the Trainee Research Access Committee both approved the study.

Data collection

We conducted face-to-face and telephone interviews to accommodate participant schedules. In-depth semi-structured interviews explored residents’ experiences by following an interview guide (Appendix A). This interview guide was refined in two ways: initially, an education scientist at our institution reviewed our draft; subsequently, we performed the first three interviews and then reassessed the guide for appropriateness. To maintain trustworthiness, the questions and probes were revised as data were analyzed to ensure we obtained an in-depth understanding of residents’ experiences. We collected data until we reached saturation, defined as repetition and solid redundancy of existing themes and subthemes until no new information was generated.18 We observed that the majority of our themes were saturating after the eighteenth interview; however, to be confident that the data were indeed saturating, we continued collecting until no new information at all was emerging from our participants. Interviews, lasting 30-60 minutes, were conducted by members of the research team (SU, ADC). Interviews were recorded and transcribed verbatim for analysis by an independent professional transcriptionist. Participants were identified by a numbering scheme in which the first number is their de-identified program number and the second number is the interview number. Due to the small number of participants, residents from the Surgical Foundations/Urology, Surgical Foundations/General Surgery, and Surgical Foundations/Orthopedic Surgery programs were grouped under one program code to maintain anonymity.
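As a purely illustrative aside, the quote-labelling convention just described could be sketched as follows; the function, variable names, and program codes here are hypothetical and are not part of the study’s actual process.

```python
# Hypothetical sketch of the quote-labelling convention described above.
# Program codes are arbitrary de-identified numbers; the three Surgical
# Foundations streams share one code, mirroring the anonymity step in the text.

SURGICAL_FOUNDATIONS_STREAMS = {"Urology", "General Surgery", "Orthopedic Surgery"}

def quote_label(program: str, codes: dict[str, int], interview_no: int) -> str:
    """Return a de-identified label such as '(Program 3; Interview 8)'."""
    if program in SURGICAL_FOUNDATIONS_STREAMS:
        program = "Surgical Foundations"  # grouped to maintain anonymity
    return f"(Program {codes[program]}; Interview {interview_no})"

codes = {"Surgical Foundations": 3, "Anesthesiology": 1}  # illustrative codes
print(quote_label("Urology", codes, 8))  # -> (Program 3; Interview 8)
```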

Analysis

The independent professional transcriptionist anonymized the interview transcripts by removing all identifiers to ensure participants’ confidentiality. The research team (SU, MR, ADC) conducted seven meetings for data analysis. These meetings were iterative in nature: the team discussed their thoughts, feelings, and predisposed biases to ensure that none of these influenced the data analysis. The meetings also included peer debriefing, in which the research team made ongoing decisions about data collection and analysis in areas needing confirmation or validation from our participants. Data were analyzed following the steps outlined by Braun and Clarke for conducting a thematic analysis.19 We read the transcripts to familiarize ourselves with the data, created data-driven codes from the raw transcripts (Meeting 1), and identified code categories (Meeting 2). Three researchers (SU, MR, ADC) independently coded the data and identified a preliminary thematic structure (Meeting 3). The identified themes were then compared to each other and assessed for consistency and redundancy with the overall data (Meetings 4 and 5). The final thematic structure was then compared to primary quotations to ensure that all themes were relevant. Finally, all themes were named and labeled (Meeting 7). To determine whether some themes might be context specific, we conducted a further interpretive analysis looking for themes that did not reach saturation in the overall analysis but that clustered within particular programs or contexts, using a parallel process to that described above.

Trustworthiness and reflexivity

To maintain trustworthiness, the researchers (SU, MR, ADC) used memos and field notes during the analysis to keep track of decisions about coding and theme development to provide an audit trail.20-22 We conducted peer debriefing meetings with our research team on a regular basis to discuss our study recruitment, data collection, and analysis. In order to mitigate potential researcher bias, authors noted their personal feelings by being reflexive and thinking critically about their biases in relation to the research being conducted.23 We acknowledge that our roles and views that relate to CBME may influence the conceptualizations presented in this manuscript. At the time of this study, SU was a resident physician in a traditional program that was piloting CBME to optimize upcoming implementation. ADC (non-practicing physician) and MR (non-physician with qualitative research expertise) were research staff in the office of health professions scholarship and did not have personal experience with CBME. AO (physician faculty member) had a leadership role in supporting CBME implementation and so recused herself from recruitment, conducting interviews, accessing any identified data or performing primary data coding to maintain confidentiality and to provide distance from leadership of the CBME implementation.

Findings

Study participants

Participants were from a single Canadian university and included a total of 20 Royal College specialty resident physicians in CBME programs from eight different disciplines: Anesthesiology, Emergency Medicine, Medical Oncology, and Nephrology, as well as Surgical Foundations residents co-registered in the active CBME home programs of Urology and OHNS and in the non-CBME home programs of Orthopedic Surgery and General Surgery.

Five main themes were identified through our data analysis. They are listed in Table 1 and discussed in further detail below: 1) Value of feedback for residents; 2) Resident strategies for successful Entrustable Professional Activity observation completion; 3) Residents experience challenges; 4) Resident concerns regarding CBME; and 5) Resident recommendations to improve existing challenges.

Table 1.

Summary of Major Themes

1. Value of feedback for residents

“They’re generally in alignment with informal feedback I’ve had. So I haven’t really changed any behavior ‘cause it’s been in the moment when I’ve received the feedback and I’ll adjust my behavior at that time for a specific skill, for example. So what the EPA says after everything is completed doesn’t change what I’ve done during that assessment” (Program 1; Interview 1)

“I think for me the CBD is better than the ITER [In Training Evaluation Report]. The ITER sounds a bit more generic and often at the end of the clinic, the staff person may not necessarily remember every single detail thing to improve on, whereas CBD right away it’s very directed to actionable correction measures that you can do to each specific. It’s very specific detail-oriented, whereas ITER is like a big picture of things. To me that’s what it seems like” (Program 4; Interview 9)

2. Resident strategies for successful EPA observation completion

“In a weird way, sometimes as a learner you’ll probably try to target the cases that you felt like you did very well on so that you’ll get the success on a EPA. And there’s probably lots to actually talk about in terms of where you should be going forward and the actual true good learning opportunities and the good feedback sessions would kind of go towards a lot of cases that I have more trouble with but then knowing that I need to get successes on my EPAs, I’d be less inclined to actually get those EPAs done. So no, I think that often I’ve tried to get preceptors to do EPAs that I felt like I would get a success on and then ask for qualitative feedback away from any of the EPA system and that’s where probably time constraints kind of come in a little bit for some people” (Program 5; Interview 11)

“You can always swing things a certain way, you can always get EPA from people that you saw you on a good day, you can always use particular EPAs for a different thing that you planned because you had a bad day and you wanna use this really easy EPA or something. There’s ways to get around it” (Program 3; Interview 8)

3. Residents experience challenges

“Certainly staff engagement and understanding of the program is an ongoing challenge. A lot of them are aware of the program now, which is a step forward but many if not most of them still do not feel comfortable actually completing the EPA themselves, and they have poor understanding of the consequences of this, of how to respond to various questions. They don't really understand the implications of each question” (Program 6; Interview 13)

“Wording, staff adherence, procedural EPAs if staff are uncomfortable supervising them or if they haven’t done them for a while, I think that’s a big factor, time. It’s true that this should only take 5 minutes, but you add a whole bunch of things that take 5 minutes and then before you know it it’s an hour, it’s like an admission bundle. You’re like oh, EPA should only take 5 minutes, but the admission orders take another 5, 10 minutes and then the note, goals of care, that’s another 5, 10 minutes and then you have 10 admissions, that’s like 3½ hours right there. So it’s not as if things are getting less busy and even for staff if you have four or five residents on and you’re doing an EPA every day or something like that and it’s not just the actual doing the EPA, it’s reviewing, it’s giving feedback, it’s having an active discussion with the resident, all this stuff takes time” (Program 2; Interview 3)

“…there’s been a lot of challenges regarding the language used, specifically relating to not so much the milestones I guess, but the actual level of skill obtainment, meaning a lot of physicians will still say, need to be there or I was, you know, needed to be on standby, when in reality they actually didn’t do anything and obviously whatever took place could’ve been done irrespective of them being there. Most people say, I needed to still be there. And so I think sometimes the language has been vague and misleading and so then as a result the assessments have sometimes been inaccurate” (Program 2; Interview 5)

4. Resident concerns regarding CBME

“...this is the irony in all this, that the structure of CBD is great. It’s like the chair is comfy but the seatbelt hurts. The idea of all of it is great but when you all of a sudden strap people in and say well now you have to have this filled out and you actually have to have all these numbers, that’s when you start to go but wait, what happens when all of a sudden we need to slow down a bit or we need to move around a bit?” (Program 3; Interview 8)

“It turns it into a bit of a grocery list to be honest with you. Otherwise normally...you read around cases that you have, you read around the physiology and medicine specific to the type of practice you’re in at that time and then also where you study in half day. So you kinda look at bigger picture stuff and then with the kind of advent of EPAs we’re basically just gonna blast ‘em through a list to try and get stuff filled out. And, like I said, it just turns it more into a grocery list than a learning adventure” (Program 3; Interview 6)

5. Resident recommendations to improve existing challenges they face

“So I think assessments should be worded in a way that is less subjective and recognizing that all assessments are to some extent subjective, but language along the lines of the resident completed all key aspects of this particular entrustable activity without any prompting or direct supervision I think is more applicable language than I didn’t have to be there in theory, which probably gets at the same underlying kind of competency and independence but without using that same language that I think some preceptors are reluctant to sign off on” (Program 2; Interview 2)

“I think it’s good to stay on top of your EPAs and what’s expected of you because there are so many different ones that you’re expected to have filled out and you do so many different activities on a daily basis, sometimes it’s hard to realize the little procedure, little thing you just did is actually an EPA that could possibly be filled out and just asking staff repeatedly. I know I was hesitant…on asking daily, or weekly, for staff to fill out the EPAs I’d sent out, but then as a result they didn’t get filled out always… [I] feel bad to bug them more” (Program 1; Interview 1)

Theme 1: Value of feedback for residents

Residents valued specific and actionable feedback rather than offhand comments, with actionable feedback occurring most often immediately after a clinical encounter. Participants found this specific feedback particularly useful to focus on areas of improvement and structure an action plan of how to improve. These resident sentiments are best reflected in the statement below:

...For the faculty who understand what they’re supposed to tell you, I think having the really specific feedback is very helpful. It’s much better than being like oh, you did a good job versus in this specific thing that you did, this was good, in this specific thing that you did, here’s how you could do this part of that better. I think that’s very useful and it’s much more useful to you than the very vague comment of like try to do better next time. (Program 6; Interview 16)

While some residents commented that EPA observations provided opportunities for specific feedback, given their detail-oriented and timely nature, other residents noted that oral feedback was more valuable in generating actionable feedback:

When I review cases like in clinic with the staff, they’ll immediately give feedback or they’ll quiz me or they’ll say, oh you missed this or whatever. So I’m receiving feedback all the time for that…But it’s more or less the immediate feedback they’re giving you in person rather than writing it on the form, which I find is awesome. So in a way sometimes I’ll fill out my evaluation for them while they’re talking and I’ll just write their comments, what they’re saying and then I’m doing it and it’s like they’re actually speaking it like not just to fill out a form, they’re actually telling me this is what you need to work on or whatever. (Program 1; Interview 4)

Takeaway from Theme 1: Our study participants reported that receiving regular feedback from preceptors helped modify their behaviours on a daily basis. Residents believed their behaviour was driven by both informal oral and formal documented feedback.

Theme 2: Resident strategies for successful EPA observation completion

Residents in this study believed that, to successfully achieve their EPA observations, they needed to complete as many as possible to ensure continued progress in their program, as highlighted in the following quote:

CBME is like fishing. You have to cast your net wide, so try to submit or send as many evaluations as possible because you’ll probably only get 50% of them filled to meet your requirement to pass. So every chance you get, send an eval. [sic] because probably only 50% of them will be filled and at the end you don’t want your progress to be hindered because staff won’t fill the evaluations or staff don’t understand the evaluation that’s in front of ‘em and fill it incorrectly. (Program 2; Interview 3)

Although progress decisions are meant to be nuanced, considering patterns of competence, the variety of contexts encountered, and narrative comments, residents often perceived that they must receive the highest rating on the entrustment scale to be signed off as competent by the competence committee for a particular EPA. Residents developed many strategies to ensure the highest possible EPA observation ‘success’ rate. One of the main strategies was ‘cherry picking’: directing EPA observations to specific encounters that would ensure successful completion. However, residents acknowledged that learning opportunities were present in all cases and capitalized on these opportunities by requesting informal verbal feedback separate from an EPA observation. This is exemplified by the following quote:

Well, I guess that it is certainly how EPAs currently work. You’re only going to ask to be assessed if you expect that you will do well on that EPA. There is discussion of a pattern of responses being, you know, each individual EPA is not like a formal assessment, that we’re really just looking at the broad overview of all of your EPAs combined but regardless, each individual EPA makes up that broad overview. So it is important on our end to make sure that our EPAs are as good as possible every time we do one. (Program 6; Interview 13).

Another strategy residents employed was to focus on whom to ask and when to ask to ensure success. Participants reported that they tended to identify individuals who would be most inclined to fill out successful EPA observations for them. They also attempted EPA observations on their ‘good days’ to circumvent difficulties in meeting their requirements. Residents commented on this strategy becoming common knowledge and questioned the impact on future residents experiencing difficulty within their program, as noted in the following quote:

It appears that residents are going to clue into who the people are that give easy evaluations. The staff who you know just check boxes and give you the good to go sort of green light and then you’ll do 10 EPA there and then you just get them all check, check, check and you know you’re good. But in the other actual instances where you really stank it up, you’re not going to go to someone where you maybe haven’t performed as well and ask them to evaluate you. And I think then it puts it in the trainees’ position to say what their evaluation is gonna look like. And I fear that that's then going to allow for incompetent and dangerous sometimes residents to kind of go unnoticed within a program. (Program 2; Interview 5)

Takeaway from Theme 2: Residents employed multiple strategies for successful EPA observation completion: casting a wide net to increase yield, ‘cherry picking’ EPA encounters, and focusing on whom to ask and when to ask for EPAs.

Theme 3: Residents experience challenges

Residents noted that faculty development is essential in the early implementation of CBME, with varying levels of adoption noted amongst the programs and individuals. Residents found that while most preceptors were aware of the need for EPA observations, their comfort level and understanding of the EPA language varied. One resident summarized the challenges associated with this incomplete cultural shift:

I think it’s hit or miss. Some people really embrace it, some faculty and some seniors. And some people are really not into it at all and that makes it difficult. So I think part of getting your EPAs done, you need to know who to ask, who’s receptive to it, and who’s gonna actually fill them out, and then that’s who you’re asking. So I think that works in the way that you get them done. But I don’t know that you really get a good representation from all the faculty and all of the residents that you work with ’cause some people certainly have some negative opinions regarding CBD. (Program 6; Interview 16)

When residents set out to complete EPA observations in clinical settings, some encountered unforeseen situational challenges. A barrier to completing EPA observations noted by residents was lack of time. Participants indicated that receiving formal feedback was quite difficult in busy clinical settings where patient care was prioritized, and they found it challenging to identify a time to request EPA observations. Residents summarized this difficulty in balancing patient care and educational needs:

There is no time and I will reiterate that there is no time to fill out EPAs on a busy team-based specialty. It is impractical. (Program 3; Interview 8)

So I would say lack of time, lack of someone supervising you, exhaustion. Sometimes if you’ve been up for like a couple days you would rather sleep than try and find a form on your computer to send out to the staff. (Program 1; Interview 1)

Another situational challenge encountered by residents while attempting to complete EPA observations in clinical settings was a lack of direct observation by attending staff, despite this being required for many of the EPAs. Residents found their attending staff were unable to directly observe them in settings where they were supervised only by senior residents, such as on call, or when they were providing patient care in busy environments where staff were absent. This is evidenced by the following quote:

I think some of the competencies are a little bit tricky to get because as a junior resident there’s stuff that I just do and there’s not necessarily anyone around to observe it or sign off on it... I was expecting for all of the competencies [to] be ones that… are more relevant for when you’re being supervised but I don’t know [if] we’ll be able to get it done. It’s just that was I guess one kind of expectation that didn’t really meetup or align. (Program 6; Interview 17)

Finally, when filling out the EPA observation form, residents expressed frustration with the language used in the O-SCORE entrustment scale. Some felt it was unrealistic or unfair for junior residents to be assessed against an expectation of unsupervised proficiency, despite the intention that EPAs were written for particular stages of training. Further, residents perceived that they needed to obtain the highest rating on the scale (equivalent to 5/5, or the evaluator feeling that “they didn’t need to be there”) for an EPA observation to be considered ‘successful.’ However, due to a multitude of factors, including a lack of faculty education on interpreting the wording of the scale and perceptions that the wording itself may be flawed, residents felt their EPA entrustment scores were not an accurate reflection of their performance. This was summarized in the following quote:

Sometimes the preceptor that’s evaluating you is hesitant to give you the four or the five which is a pass on the EPA, but the feedback that they will give you is like yes, you did everything great, I really didn’t need to be there, you knew all about this, you really managed the patient entirely and they, for the milestones they will give you entirely achieved, but they’re hesitant to tick that four or five just because of where you are in the training. I find there’s a little bit of discrepancy there and for me it’s almost difficult because if I get all achieved but they give me a three out of five, then that EPA doesn’t count for me for anything towards any passes. The written comments are valuable but to me it’s almost like a wasted EPA if you know what I mean because it doesn’t count towards me passing... they’re not giving me the actual numerical evaluation to prove that I’m where I am at that stage so I find that’s been a bit of a challenge too. (Program 5; Interview 10)

Takeaway from Theme 3: Most residents highlighted challenges they encountered in the initial implementation of CBME. Hindrances to achieving successful EPA observations included inconsistent faculty engagement, situational limitations such as lack of time and lack of opportunities for direct observation, and frustrations with interpreting the language of the entrustment scale itself.

Theme 4: Resident concerns regarding CBME

Residents perceived that the current implementation was resulting in increased administrative documentation, and some considered CBME a ‘make work project.’ The majority of participants reported the CBME system to be cumbersome, adding unnecessary work to their schedules. They noted that the pre-existing residency system and the new EPA observations were in fact two systems running in parallel, with feedback being delivered to residents as it was previously and additional EPA observation forms having to be completed on top of that. As such, they questioned its relevance to their learning and its impact on patient care. This was evident in the following quote:

If I’m being quite honest with myself, it just seems like another thing I have to do to complete my training to an already extensive list of requirements…We barely spend 5 minutes with a patient explaining their diagnosis, which can be quite severe and whatnot and then we’re on top of it we’re expected to go through this whole charade. (Program 2; Interview 3)

It was also evident in the data that some residents feared their training might become checklist based, like bookkeeping. Residents raised concerns regarding the utility of EPA observations and whether they would provide additional learning value or ultimately end up being yet another task to complete. They were concerned that while their real-life clinical duties involved the global picture and nuances of patient cases, quantifying this into an EPA observation form seemed impractical. Some of these concerns were evidenced by the following quote:

So my issue with the CBD... is whether or not it’s just gonna be this checklist where whether or not it truly...[is] gonna be something where you just have to get checked off, get done, or especially [if it will have] utility, using it as a learning? (Program 5; Interview 12)

Takeaway from Theme 4: Residents expressed concerns regarding CBME, such as increased administrative burden equating to a ‘make work project’ and the possible reduction of their training to checklists.

Theme 5: Resident recommendations to improve existing challenges they face

Residents were keen to provide alternative wording for the entrustment scale to reflect their perception of current practice, so that evaluators could provide more precise comments on resident performance. For example:

I think one of the things that should be considered is changing the wording of the actual EPA… so maybe changing that so that it says rather than I didn’t need to be there, saying resident performed independently. Because then they don’t feel like they’re necessarily stating that you know they either did or didn’t need to be there, it’s just what actually happened. Did the resident perform independently? Did the resident need your assistance? Did the resident require help in any particular way? Because the evaluation is of the resident, not of the person giving the feedback. (Program 6; Interview 13)

Many residents commented on the importance of being aware of the details of their EPAs and suggested that future residents keenly review their EPAs beforehand so that they may identify when a learning opportunity arises:

You have to stay on top of it. So because there’s so many that you have to achieve, it’s not like before like in undergrad when you could just kind of do your work, show that you’re keen, show that you kind of have a basic understanding of stuff. You really have to stay on top of, when you only have a limited number of experiences that you can get these EPAs from, you have to really be prepared and know your EPAs inside and out so that when you’re broached with especially one of these emergency situations, you know that that’s an EPA and you know that you can get it evaluated ‘cause some of them are so specific…So you’re not gonna get to see that that many times in your residency, so you have to be prepared when you get it if there’s an EPA for it and you need to be evaluated on it. (Program 3; Interview 7)

Takeaway from Theme 5: Our study participants proposed two main recommendations to improve the early implementation of CBME: residents suggested wording changes to the entrustment scale, and they recommended that future residents learn the details of their EPAs inside out.

Interpretative analysis

Further interpretive analysis of the above themes revealed some interesting insights. On the whole, residents were actively engaging with the curriculum and acknowledged that the concept of CBME was well intended and had potential positive consequences. For example, in the theme ‘value of feedback for residents,’ some residents identified documented feedback through EPA observations as providing an opportunity for more specific and immediate feedback. However, this finding was inconsistent among our study participants. While residents valued this timely feedback, they noted that EPA observations were often completed retrospectively, which diminished their utility as they relied on the assessor’s memory. This inconsistency highlighted a discordance between the intention of CBME and its implementation. This discordance related to the theme ‘residents experience challenges,’ underscoring the key contextual issues of inconsistent faculty engagement and lack of time for feedback by faculty and residents. It seems that regardless of whether feedback was informal or formal, immediacy after a clinical encounter provided the best opportunity for residents to receive valuable specific feedback.

One interesting area of comparison to the core components framework relates to the need for competency-focused instruction whereby teaching is individualized to the learner based on what is required to progress to the next stage. As our study included residents across several programs, some interesting differences were noted between the programs in this area. For example, residents highlighted one program as already having an ingrained culture of regular targeted teaching and feedback, with time often allotted at the end of each day for these activities. Since this was part of a pre-existing culture, residents in this program found this component to be easier to achieve and lack of time for EPA observations to be less of a barrier. These residents appeared to be supported by the pre-existing context of a program already engaged in change. In contrast, residents from many other programs articulated minimal alignment with this component and identified time for targeted teaching and feedback as a barrier.

Many residents were very focused on performance and achievement scores over feedback and growth, expressing frustration when faculty did not give them the highest ratings. However, residents from some programs noted they were successfully reassured that their entire portfolio of observations would be reviewed and considered as evidence of EPA achievement and overall progression of competence.

Discussion

This study aimed to explore residents’ perspectives and interpretations of the initial implementation of a nationally developed CBME postgraduate medical education model, CBD, and to compare them with the intended CBME core components framework. Through our analysis, we identified five main themes. To contribute to our understanding of the successes and barriers to achieving the intended benefits of CBME, we compared resident perceptions with the intended framework to delineate where the curriculum as planned differed from the curriculum as perceived. In this comparison, the majority of discrepancies related to the sequenced progression, tailored learning experiences, competency-focused instruction, and programmatic assessment components. In contrast, residents did seem to appreciate and value the outcomes-based competency framework and the clarity it provided regarding the specific skills required to complete their training. In addition to introducing new findings, our study confirmed many findings already reported in the CBME literature, including concerns about potential increased administrative burden, lack of time, the perils of reducing competence to smaller elements, and the importance of faculty development in the success of CBME.24-29

In our study, residents indicated that completing EPA observations felt like a ‘make work project,’ with some perceiving that CBME duplicated their assessment system. This finding is supported by a commentary highlighting the potential perils of the increased administrative burden brought by CBME, where educators may be at risk of spending more time on assessment paperwork than on the actual learning experience itself, which appears incongruent with the aim of providing tailored, authentic, and flexible learning experiences that facilitate the acquisition of competencies.28

There have also been fears that CBME may detract from the richness of the curricular process and that its reductionist nature devalues the context and complexity of competence.27,28,30,31 This is in line with residents’ concerns in our study that their education is at risk of being reduced to a checklist, which interestingly has emerged as a theme in another similar study, underscoring the prevalence of this perception.24 While it is possible that residents are unable to appreciate the overall programmatic assessment structure and the integrative intent of EPA language due to their early stage in training, at this initial stage of implementation residents’ perceptions regarding this component were not aligned with the intended framework of CBME. It would be interesting to track resident perceptions as they progress through their learning stages to see how they evolve over time.

Residents’ perceptions of the importance of sharing responsibility for their learning and assessment with faculty align well with the core components framework’s emphasis on self-directed learning. Further, residents noted that timely and comprehensive faculty development led to more faculty engagement, a more responsive program, and higher resident satisfaction. Faculty development is known to be of particular importance in helping faculty embrace their roles as evaluators and expert coaches.29 At this stage of implementation, residents noted heterogeneity in faculty development and engagement that appeared to be largely program specific. This observation speaks to the importance of identifying program-based CBME champions to support and reinforce local faculty development efforts. Indeed, the initial experience at Queen’s University, one of the earlier centres to adopt widespread postgraduate implementation, has highlighted the critical importance of creating leadership roles among faculty, educational leaders, and residents to optimize effective implementation by actively co-engaging all stakeholders in the process.26,32

In the theme ‘value of feedback for residents,’ study participants reported that they valued specific and actionable feedback. Recent studies have reported that qualitative narrative, actionable, and specific feedback was of great value and facilitated a conversation with more credibility, allowing the learner to be more at ease.33,34

Existing literature has demonstrated that the trusting relationship fostered by narrative feedback between the assessor and learner enabled an environment that addressed emotional obstacles in facilitating feedback.34 In keeping with this, the introduction of EPA observations may have indirectly prompted this type of narrative and actionable feedback conversation, which may be more important than the documentation of the feedback event itself. The experience of timely feedback is not explicitly addressed within the core components but is most reflective of the intentions of the tailored learning experiences component.

Residents expressed frustration with the interpretation of the entrustment scale’s language and its complexity. A recent study by Melvin et al. focusing on the tensions and realities of entrustment suggested that it is important to understand the complexity and specialty-specific language of entrustment in order to provide effective assessments to learners.35 Residents’ perception that anything less than the highest rating was a “wasted EPA observation” highlighted an interesting disconnect around the programmatic assessment aspect of the core components framework. It suggests that residents view EPA assessments solely as summative, rather than as having the largely formative purpose intended, a view echoed by residents elsewhere.24 They seemed to struggle to see the value of EPA observations in contributing to assessment for learning, a key principle underlying the programmatic assessment component of the framework, which stresses assessment practices that support and document the developmental acquisition of competencies. Through these comments, residents appear to demonstrate a performance orientation, in which one is motivated to demonstrate competence, rather than a mastery orientation, in which one is motivated to improve or gain competence.36 Programs might be able to rectify this by emphasizing to residents the value of EPA observations at all levels of achievement as key contributors toward recognizing their progression of competence. Residents also felt that the language used in the O-SCORE (“I didn’t need to be there”) was not reflective of novice residents, who expect staff to actively supervise them during the early stages of learning. This perspective is discordant with the core component of sequenced progression, which recommends that CBME allow sequential progression of competencies to promote smooth transitions to the expert level throughout training. It is not clear from the current study whether this disconnect arose because residents found the EPA expectations too complex for their assigned developmental stages (transition to discipline, foundation of discipline, core of discipline, or transition to practice), or from a misinterpretation, by either the assessor or the learner, of the stage-specific language of the EPA expectations in this CBME model (i.e., EPA observations written for junior stages refer to simpler tasks, such as recognizing emergencies, rather than the more complex tasks of senior stages, such as managing emergencies). This difficulty may have been exacerbated by the fact that many international CBME models, such as the Accreditation Council for Graduate Medical Education (ACGME) milestones project, are focused on end-of-training or terminal graduate expectations.37

Our study provided perspectives from a wide variety of residents and identified themes that were common across many programs, both procedural and non-procedural. The individual response rate (40%) was quite high considering that most physician studies struggle to achieve even a 20% response rate; however, it could be considered a limitation, as the perspectives of the remaining residents may differ from the findings reported here.36 In addition, our study explored the experiences of residents at one institution, and these experiences may differ elsewhere. However, all of these programs were part of a national curriculum and assessment development process, providing some degree of transferability. Our findings align with existing literature from studies that included fewer participants and/or programs, bringing some degree of confirmability.24 Further, we identified many commonalities across a diverse group of residents, suggesting that the themes are likely common across many procedural and non-procedural programs.

Our exploration of resident perspectives within the CBME curriculum highlighted several disconnects, which we hope will catalyze development and change to inform future implementation activities. We recommend that future research expand the study to other institutions and programs as they begin implementing CBME, to see whether dissemination of these findings influences future residents’ experiences. Some practical recommendations can also be considered to address our findings. Residents revealed the strategies they employ for what they perceive as successful EPA completion; these may help inform resident orientation activities so that programs reinforce more adaptive behaviours and guide residents away from maladaptive approaches. Residents also provided recommendations for other residents and program leaders around the importance of faculty development and the need for residents to develop an in-depth working knowledge of their program’s EPAs, allowing them to capitalize on learning and assessment in less frequently encountered clinical opportunities. These strategies and recommendations could be compiled and made available as a national resource for programs undergoing CBME implementation to facilitate the transition. It will be interesting to explore residents’ perspectives longitudinally as they progress through subsequent stages of CBME to examine benefits and challenges over time and to compare their impressions of the intended CBME framework of competencies with real-life implementation. It is likely that residents and programs will refine their strategies as they gain experience with CBME.

Conclusions

Our study exploring resident perspectives on the initial implementation of a nationally developed CBME model reveals that residents had mixed reactions and variable understanding of the intended underlying framework. They appreciated the importance of feedback in molding their behaviours yet at the same time paradoxically struggled to see the value of assessment for learning. They also highlighted several major challenges they faced while trying to achieve successful EPA observations in the real-world setting including lack of direct observation, lack of time, frustration with the entrustment scale language, and variability in program faculty engagement and preparedness. Residents provided suggestions to address these challenges which highlighted the need to better explain the meaning of the entrustment scale and to ensure residents know the details of their EPAs well. We suspect that the mandated documented feedback required as part of CBME implementation facilitated residents’ positive perceptions toward the value of regular informal verbal feedback.

In comparing our findings to the intended core CBME framework we found that while the outcomes-based core component was well aligned with resident perspectives of early implementation, there was a variable degree of disconnect with the other four core components. This study provides valuable resident first-hand perceptions of the initial implementation of a nationally developed CBME model. This knowledge may help inform and positively impact future learner experiences of CBME implementation not only for this post-graduate model but also for a range of current and future international CBME initiatives across the continuum.

Our practical takeaways to facilitate future implementations of CBME include the following four observations. First, mandated documented feedback required as part of CBME implementation facilitated residents’ positive perceptions toward the value of regular feedback. Second, residents struggled to see the importance of assessment for learning. Third, residents struggled with time limitations and with variable faculty preparedness and engagement in completing EPA observations. Lastly, residents from programs with an established culture of frequent feedback had an easier time with the tasks of CBME.

Lessons for practice

  1. Mandated documented feedback required as part of CBME implementation facilitated residents’ positive perceptions toward the value of regular feedback.

  2. Residents struggled to see the importance of assessment for learning, and struggled with time limitations and variable faculty preparedness and engagement in completing EPA observations.

  3. Residents from programs with an established culture of frequent feedback had an easier time with the tasks of CBME.

Acknowledgements

We would like to acknowledge the in-kind resource support from the IDEAS Office, Faculty of Medicine & Dentistry, University of Alberta.

Conflicts of Interest

Shivani Upadhyaya, Marghalara Rashid, Andrea Davila-Cervantes, and Anna Oswald declare that they have no conflict of interest.

Appendix A. Interview guide

Prompting questions for the semi-structured interviews based on general expectations

Welcome – Thank you for participating. The purpose of this interview is to understand your experiences with the new CBD curriculum.

Ground rules:

- To maintain anonymity, we will not be transcribing anyone’s name through the interview. As much as possible, please try not to use names in the interview.

- We are also tape recording the interview, as we want to capture everything you have to say, but we will not identify anyone by name in our notes or any reports.

- If you do not want to respond to a question, feel free to skip it.

- We are very eager to hear your thoughts, but this is again voluntary, and you may choose to end your participation in the interview at any time.

General expectations and experiences of CBD:

● Can you tell me a little about your expectations going into CBD at the beginning of your training?

Probes:

○ How is the implementation of CBD going so far in your program?

○ How closely has the CBD experience aligned with your expectations?

○ How has it differed?

● What are some successes you can identify in your program’s CBD implementation?

(You can give me an example if you would like)

● Let’s talk a bit about some challenges or improvements you can identify.

● What kinds of resident input have you provided locally or nationally regarding CBD implementation?

● How responsive is your program to resident feedback in terms of your program’s CBD implementation? (In what other ways would you like to see residents having input in this system?)

Probes:

○ Can you please share some of your experiences of what you have learned?

● What would you like to share with future CBD residents as a member of the 1st cohort about CBD?

Prompting questions for the semi-structured interviews based on the five core components:

1. Tell me a bit about the competencies. Were they clearly articulated and discipline specific?

○ Probes: (If no) How could they be improved to make them clearer and more specific?

2. Do you feel the competencies are relevant for your future practice?

3. Do you find the EPAs’ levels of difficulty are appropriate for their assigned stages of training?

4. Do you notice an increased level of difficulty as you move from stage to stage?

5. Does your rotation schedule generally align with your EPA assessment needs?

○ Probes: Can most of your EPAs be assessed in real clinical work environments?

6. How often are you receiving feedback informally and formally in your program?

○ Probes: Are you satisfied with it?

7. How often do your faculty provide actionable or coaching feedback? (Please give an example)

8. How often were you observed clinically?

9. How often are you getting documented EPA assessments?

10. How do you identify when is an appropriate time to request an EPA assessment?

11. How long do you typically work with a preceptor before approaching them for assessments?

12. When do you feel competent to perform a specific clinical activity?

○ Probes: Does this relate to your EPA assessments?

○ Probes: What else contributes?

13. What are the different ways you are currently assessed in your program?

14. What reasons can you identify for not being able to complete your EPA assessments?

15. Do you find that EPA assessments are easier to complete when the EPAs are more specific or broader?

16. Tell me about some of the changes in your behavior based on the feedback you received.

17. Can you talk about how, if at all, CBD has affected the way you approach your learning?

18. What is your understanding about your program’s competence committee process?

19. How do you find accessing the electronic portfolio system for either EPA requests or portfolio review?

○ Probes: Do you have comments about the electronic portfolio system?

Closing question:

Is there anything you would like to share with us that you think is important and that has not been discussed in this interview?

Prior oral presentations

The Canadian Conference on Medical Education 2020 (virtual), The International Conference on Residency Education 2019

Prior poster presentations

IDEAS Research Day, CBME Program Evaluation Summit 2019, University of Alberta Department of Medicine Research Day 2019

Funding

This study was funded by an unrestricted educational grant from the Office of Postgraduate Medical Education (PGME), Faculty of Medicine & Dentistry, University of Alberta.

References

1. McGaghie WC, Miller GE, Sajid AW, Telder TV. Competency-based curriculum development in medical education - an introduction. http://apps.who.int/iris/bitstream/handle/10665/39703/WHO_PHP_68.pdf;jsessionid=E76BE9830C68B982EC6DA57683A0C110?sequence=1. Updated 1978. [Accessed October 15, 2018].
2. Royal College of Physicians and Surgeons of Canada. Competence by design: the rationale for change. http://www.royalcollege.ca/rcsite/cbd/rationale-why-cbd-e [Accessed May 29, 2019].
3. Norman G, Norcini J, Bordage G. Competency-based education: milestones or millstones? J Grad Med Educ. 2014;6(1):1. 10.4300/JGME-D-13-00445.1
4. Van Melle E, Frank J, Holmboe E, Dagnone D, Stockley D, Sherbino J. A core components framework for evaluating implementation of competency-based medical education programs. Acad Med. 2019;94(7):1002-1009. 10.1097/ACM.0000000000002743
5. Frank JR, Mungroo R, Ahmad Y, Wang M, De Rossi S, Horsley T. Toward a definition of competency-based education in medicine: a systematic review of published definitions. Med Teach. 2010;32(8):631-637. 10.3109/0142159X.2010.500898
6. Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32(8):638-645. 10.3109/0142159X.2010.501190
7. Royal College of Physicians and Surgeons of Canada. What is Competence by Design? http://www.royalcollege.ca/rcsite/cbd/what-is-cbd-e [Accessed May 29, 2019].
8. Royal College of Physicians and Surgeons of Canada. Competence by design: EPAs and milestones. http://www.royalcollege.ca/rcsite/cbd/implementation/cbd-milestones-epas-e. Updated 2017. [Accessed May 29, 2019].
9. Frank JR, Snell L, Sherbino J, editors. CanMEDS 2015 physician competency framework. Ottawa: Royal College of Physicians and Surgeons of Canada; 2015.
10. Gofton WT, Dudek NL, Wood TJ, Balaa F, Hamstra SJ. The Ottawa surgical competency operating room evaluation (O-SCORE): a tool to assess surgical competence. Acad Med. 2012;87(10). 10.1097/ACM.0b013e3182677805
11. MacEwan MJ, Dudek NL, Wood TJ, Gofton WT. Continued validation of the O-SCORE (Ottawa surgical competency operating room evaluation): use in the simulated environment. Teach Learn Med. 2016;28(1):72-79. 10.1080/10401334.2015.1107483
12. Storrar N, Hope D, Cameron H. Student perspective on outcomes and process - recommendations for implementing competency-based medical education. Med Teach. 2018:1-6. 10.1080/0142159X.2018.1450496
13. Berendonk C, Stalmeijer RE, Schuwirth LWT. Expertise in performance assessment: assessors’ perspectives. Adv Health Sci Educ. 2013;18(4):559-571. 10.1007/s10459-012-9392-x
14. Schultz K, Griffiths J. Implementing competency-based medical education in a postgraduate family medicine residency training program: a stepwise approach, facilitating factors, and processes or steps that would have been helpful. Acad Med. 2016;91(5):685-689. 10.1097/ACM.0000000000001066
15. Gruppen L, Frank JR, Lockyer J, et al. Toward a research agenda for competency-based medical education. Med Teach. 2017;39(6):623-630. 10.1080/0142159X.2017.1315065
16. Colorafi KJ, Evans B. Qualitative descriptive methods in health science research. HERD. 2016;9(4):16-25. 10.1177/1937586715614171
17. Lambert VA, Lambert CE. Qualitative descriptive research: an acceptable design. Pacific Rim Int J Nurs Res. 2012;16(2):255-256.
18. Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147-149. 10.1177/104973239500500201
19. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77-101. 10.1191/1478088706qp063oa
20. Shenton AK. Strategies for ensuring trustworthiness in qualitative research projects. Education for Information. 2004;22(2):63-75. 10.3233/EFI-2004-22201
21. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19:349-357. 10.1093/intqhc/mzm042
22. Richards L, Morse JM. Readme first for a user’s guide to qualitative methods. 2nd ed. Thousand Oaks, Calif: Sage Publications; 2007. http://books.google.com/books?isbn=1412927439
23. Ahern KJ. Ten tips for reflexive bracketing. Qual Health Res. 1999;9(3):407-411. 10.1177/104973299129121947
24. Branfield Day L, Miles A, Ginsburg S, Melvin L. Resident perceptions of assessment and feedback in competency-based medical education: a focus group study of one internal medicine residency program. Acad Med. 2020; published ahead of print. 10.1097/ACM.0000000000003315
25. Boet S, Pigford AE, Naik VN. Program director and resident perspectives of a competency-based medical education anesthesia residency program in Canada: a needs assessment. Korean J Med Educ. 2016;28(2):157-168. 10.3946/kjme.2016.20
26. Crawford L, Cofie N, McEwen L, Dagnone D, Taylor SW. Perceptions and barriers to competency-based education in Canadian postgraduate medical education. J Eval Clin Pract. 2020. 10.1111/jep.13371
27. Hawkins RE, Welcher CM, Holmboe ES, et al. Implementation of competency-based medical education: are we addressing the concerns and challenges? Med Educ. 2015;49(11):1086-1102. 10.1111/medu.12831
28. Malone K, Supri S. A critical time for medical education: the perils of competence-based reform of the curriculum. Adv Health Sci Educ. 2012;17(2):241-246. 10.1007/s10459-010-9247-2
29. Holmboe ES, Ward DS, Reznick RK, et al. Faculty development in assessment: the missing link in competency-based medical education. Acad Med. 2011;86(4):460-467. 10.1097/ACM.0b013e31820cb2a7
30. Brooks MA. Medical education and the tyranny of competency. Perspect Biol Med. 2009;52(1):90-102. 10.1353/pbm.0.0068
31. Huddle TS, Heudebert GR. Taking apart the art: the risk of anatomizing clinical competence. Acad Med. 2007;82(6):536-541. 10.1097/ACM.0b013e3180555935
32. Dagnone D, Stockley D, Flynn L, et al. Delivering on the promise of competency based medical education: an institutional approach. Can Med Ed J. 2019;10(1):28. 10.36834/cmej.43303
33. Marcotte L, Egan R, Soleas E, Dalgarno N, Norris M, Smith C. Assessing the quality of feedback to general internal medicine residents in a competency-based environment. Can Med Ed J. 2019;10(4):e32. 10.36834/cmej.57323
34. Tekian A, Watling CJ, Roberts TE, Steinert Y, Norcini J. Qualitative and quantitative feedback in the context of competency-based education. Med Teach. 2017;39(12):1245-1249. 10.1080/0142159X.2017.1372564
35. Melvin L, Rassos J, Stroud L, Ginsburg S. Tensions in assessment: the realities of entrustment in internal medicine. Acad Med. 2020;95(4):609-615. 10.1097/ACM.0000000000002991
36. Elliot AJ, Yeager DS, Dweck CS. Achievement goals (chapter 4). In: Handbook of competence and motivation: theory and application. The Guilford Press; 2017:43-60. http://www.vlebooks.com/vleweb/product/openreader?id=none&isbn=9781462529612&uid=none
37. Jardine D, Deslauriers J, Kamran SC, Khan N, Hamstra S, Edgar L. Milestones guidebook for residents and fellows. Accreditation Council for Graduate Medical Education (ACGME). https://acgme.org/Portals/0/PDFs/Milestones/MilestonesGuidebookforResidentsFellows.pdf?ver=2017-06-29-090859-107. Updated 2017. [Accessed May 31, 2019].
38. Nicholls K, Chapman K, Shaw T, et al. Enhancing response rates in physician surveys: the limited utility of electronic options. Health Serv Res. 2011;46(5):1675-1682. 10.1111/j.1475-6773.2011.01261.x
