Abstract
Background
Competency-based medical education (CBME) is an outcomes-based curricular paradigm focused on ensuring that graduates are competent to meet the needs of patients. Although resident engagement is key to CBME’s success, few studies have explored how trainees have experienced CBME implementation. We explored the experiences of residents in Canadian training programs that had implemented CBME.
Methods
We conducted semi-structured interviews with 16 residents in seven Canadian postgraduate training programs, exploring their experiences with CBME. Participants were equally divided between family medicine and specialty programs. Themes were identified using principles of constructivist grounded theory.
Results
Residents were receptive to the goals of CBME but, in practice, described several drawbacks primarily related to assessment and feedback. For many residents, the significant administrative burden and focus on assessment led to performance anxiety. At times, residents felt that assessments lacked meaning, as supervisors focused on “checking boxes” or provided overly broad, non-specific comments. Furthermore, they commonly expressed frustration with the perceived subjectivity and inconsistency of judgments on assessments, especially if assessments were used to delay progression to greater independence, contributing to attempts to “game the system.” Faculty engagement and support improved resident experiences with CBME.
Conclusion
Although residents value the potential for CBME to improve the quality of education, assessment and feedback, the current operationalization of CBME may not be consistently achieving these objectives. The authors suggest several initiatives to improve how residents experience assessment and feedback processes in CBME.
Introduction
Competency-based medical education (CBME) is an outcomes-based approach to the design, implementation, assessment, and evaluation of medical education programs that uses an organized framework of competencies.1 It has a history dating back half a century and builds on educational approaches including mastery learning and outcome-based education.2-4 As objective outcome measurement and societal accountability have grown increasingly important in the modern era of medicine, CBME focuses on the achievement of curricular outcomes and competencies to ensure that trainees are able to meet the needs of patients.1,5,6 Unlike the traditional, strictly time-based model of training, CBME deemphasizes time in training and aims to be more learner-centered and flexible.1,7-9
In Canadian postgraduate medical education programs, there are two systems of CBME, both hybrid outcome-based and time-based models. The Competence by Design (CBD) model is the Royal College of Physicians and Surgeons of Canada’s (RCPSC) brand of CBME; specialty training programs in Canada have been transitioning to CBD since 2017.5,9 The College of Family Physicians of Canada (CFPC) brand of CBME is the Triple C Competency-based Curriculum (Triple C), which family medicine residency training programs implemented from 2011 to 2016.3,10
In CBME curricula (including CBD and Triple C), residents receive frequent assessments and feedback that are intended to support and document their progressive achievement of the competencies required for practice, which are the focus of learning experiences in CBME.3,10-12 In CBD, CanMEDS 2015 serves as the organizing framework of competencies; in Triple C, competencies are based on CanMEDS-FM (an adaptation of CanMEDS 2015) and the CFPC Assessment Objectives and Essential Skill Dimensions.4,10
Compared with traditional curricular approaches, CBME places greater emphasis on assessment as a constructive, ongoing formative process that stimulates learning (assessment for learning) and that, in aggregate, supports summative decisions (assessment of learning).1,7,13,14 Competency-based assessment is intended to occur regularly and to provide documented, constructive feedback that guides learning and supports promotion decisions.5,10,15 This complements existing summative assessments, such as in-training evaluation reports (ITERs), as part of programmatic assessment, in which multiple data points from diverse sources inform progress decisions. In CBD, trainees receive frequent assessments based on observed performance of Entrustable Professional Activities (EPAs): authentic clinical activities of a discipline that a resident can be expected to perform independently once sufficient competence has been demonstrated.16 A judgment of competence is made on each EPA assessment. In Triple C, frequent workplace-based assessments use various performance assessment tools, including EPA assessments and field notes, to document feedback and progress towards achieving competencies. In both systems, progression of learning is monitored via a learning portfolio, and summative decisions about progression in training are made at specific intervals by a program-based committee based on aggregate performance data.5,10,17
CBME promises greater learner-centeredness, and the success of the model relies on the engagement of learners in all elements of training. In particular, learners must be actively engaged in gathering assessment data, demonstrating the acquisition of competencies, and co-producing their learning plans.18-21 Furthermore, their active participation in feedback-seeking, self-assessment, and self-reflection is essential to foster the self-regulated learning behaviours needed for future independent practice.22,23 Failure to engage learners could undermine the transformation to CBME. It is therefore critical that residents, as key stakeholders and recipients of this new education model, are consulted about their experiences with CBME and its impact on their training and engagement. Yet little is known about the resident perspective on this new curriculum and its impact on training, service, and the overall educational experience. Research on resident perceptions of CBD has focused on attitudes prior to implementation24 and on how and whether residents understand the rationale behind the new curriculum.25 A few studies of early implementation suggest potential threats to engagement, including issues with feedback quality, disruptions to workflow from frequent assessments, and threats to natural feedback processes.26-30 Additionally, studies of Triple C have mostly evaluated faculty perspectives and educational outcomes, with few exploring trainees' experiences directly, even though the importance of active learner buy-in and participation in all elements of the model is emphasized.17,31-33
Given that few published studies explore trainee perspectives on this model, despite the importance of trainee engagement, we sought to explore the resident experience with CBME programs in Canada. Understanding residents' perceptions and experiences could enable targeted interventions that harness the advantages of CBME and address potential weaknesses, such as shortcomings in assessment processes and uptake by relevant stakeholders.26-29
Methods
We used principles of constructivist grounded theory (CGT) methodology to conduct semi-structured interviews with residents who were enrolled in programs that had implemented CBME. CGT’s explicit acknowledgement that researchers co-construct meaning with study participants aligned well with our study aims (1) to elicit meaning from probing conversations with residents and (2) to leverage the research team’s own experiences as residents who have encountered CBME.34–36
CGT endorses an explicit acknowledgement of positionality: a statement on the backgrounds and motivations of the research team.34,35 Accordingly, our research team brought varying clinical expertise and perspectives to this study, and we were reflexive about our stances throughout the data collection and interpretation process. At the time of this study, the primary author (LBD) and the majority of co-authors (TC, FR, LZ, and AN) were resident doctors from different training programs across five institutions (University of Toronto, University of Manitoba, University of Calgary, University of British Columbia, and Memorial University), most of which had implemented or were piloting CBME. AM and RM are medical education researchers. Importantly, with the exception of AM, all researchers were involved in the CBME Team of Resident Doctors of Canada (RDoC; the body representing more than 10,000 resident doctors nationally), contributing to the development of projects and research to understand best practices in CBME. This study was approved by the research ethics board at the University of Manitoba.
Setting
This study was conducted in residency training programs that had transitioned to CBME in Canada, in all provinces aside from Quebec, which does not fall under the jurisdiction of RDoC. In the year of this study, residency training programs were at varying stages of implementation, ranging from family medicine programs that had fully implemented CBME in 2011 to Royal College programs that had only recently implemented it.
Sample and recruitment
As leaders with RDoC, a not-for-profit organization that represents over 10,000 residents from across Canada outside of Quebec, we aimed to understand and represent the experiences of our members across disciplines. We therefore invited residents training in all CBME residency programs under our jurisdiction to participate, via an email sent by the seven Provincial Housestaff Organizations on behalf of RDoC. From those who responded, residents were recruited via purposive sampling to include varying disciplines, training locations, and postgraduate years (PGYs).36 The sampling goal was to develop a thorough understanding of the CBME experience from the point of view of a diverse selection of residents.35-37
Data collection
Between December 2018 and February 2019, our team of residents (LBD, TC, AN, LZ, FR) conducted one-on-one semi-structured phone interviews lasting 30-40 minutes. Written informed consent was obtained from each participant prior to the interview. Information on residency training program and year of training was also collected. Interviews were designed with sensitivity towards the different terms used in CBD and Triple C. These interviews explored residents’ understanding of and experiences with CBME, what was working well and what might need improvement, specific challenges with the CBME training model, methods of assessment used in their programs, and recommendations to other residents in CBME programs. The guide was modified as the analysis progressed to address new insights (final guide in Appendix A). As the interviewers were residents, they were well positioned to elucidate the subtleties of resident experiences with CBME throughout the interview. The use of near-peer interviewers with no relationship to participants also served to minimize the power differential. Interviews were audio-recorded, transcribed verbatim, and de-identified prior to data analysis.
Data analysis
We analyzed the data iteratively using line-by-line open coding and constant comparative analysis, consistent with principles of CGT.19,22 Three researchers (TC, LBD, and RM) reviewed and coded the initial set of six interviews. Several members of the study team (LZ, AN, FR) then reviewed uncoded transcripts to verify the initial themes and coding structure. Subsequent transcripts were coded and analyzed in sets of three to four through constant comparative analysis, comparing new transcripts with earlier interviews to challenge and refine emerging concepts. The entire research team met several times to verify and challenge the coding structure, to help coalesce codes into categories, and later into themes.
Given differences in the operationalization of CBME in Triple C and CBD, interviews with specialty and family medicine trainees were initially analyzed separately as data collection progressed. However, as similar themes were found in both groups, their data were ultimately considered together. We ceased recruitment when participants’ reflections sufficiently informed our analysis and we reached theoretical saturation, the point at which sufficient data had been collected to enable a thorough understanding of the key concepts being explored.38,39 We used NVivo qualitative data analysis software, version 12.2.0 (QSR International, Doncaster, Victoria, Australia) for data management.
Results
We conducted 16 interviews (7 PGY-1s, 7 PGY-2s, 1 PGY-3 and 1 PGY-4) with residents from eight different disciplines (Family Medicine, Otolaryngology, Anesthesia, Medical Oncology, Public Health, Palliative Care, Emergency Medicine and Obstetrics and Gynecology) in seven different Canadian institutions (University of British Columbia, University of Manitoba, Queen’s University, Northern Ontario School of Medicine, University of Toronto, McMaster University, Western University). Half of our participants were family medicine residents. Participants used the terminology of their specialty-specific CBME frameworks, but had overall similar viewpoints whether in family medicine or specialty training. Representative quotes below contain numerical participant identifiers.
Although we initially examined family medicine and specialty residents separately, because the same themes were evident, their data are presented together. Though interviews contained open-ended questions about resident experiences with CBME, residents’ descriptions were often dominated by discussions of assessment and feedback. Accordingly, the overarching finding was that resident engagement and buy-in to CBME were limited by numerous challenges related to assessment and feedback. The themes in this study are: administrative burden led to frustration and anxiety; learning was overshadowed by a focus on assessment; feedback quality on assessments was variable and often did not meet expectations; gaming of the system was common; and faculty engagement and support were valued. Challenges were interrelated, and a deficiency in one or more aspects had an impact on others. For example, the administrative burden associated with assessments contributed to emotional distress and was perceived to hinder meaningful learning.
Administrative burden led to frustration and anxiety
The majority of residents expressed support for the theoretical benefits of CBME, yet, in practice, many described numerous drawbacks and barriers to their engagement that reduced the perceived value of CBME as a new model of medical education. As one participant described: “I think the change itself was a very good change in the paradigm shift... but when it comes to implementation there were major problems.” (P2)
For many residents, the volume of assessments and associated documentation in CBME, and the need to initiate frequent assessment encounters, were problematic. Assessments were felt to be a burden in terms of time and responsibility, given competing clinical responsibilities. Participants used phrases like “form fatigue” (P3), “bureaucracy” (P7), and “mountain of busy work” (P12) to describe the associated administrative burden. As P5, a resident who had previously trained in a non-CBME program, described: “[With CBME] I [am] doing the same job as a learner [compared to before] except now it's like I had all this paperwork, this bureaucracy. Effectively, I'm not doing anything else…” (P5)
Furthermore, the frequent need to initiate assessment encounters and then to “chase [supervisors] down [to document and] sign off on the observed competency” (P3) or EPA was felt to disrupt workflow and clinical care requirements:
About half the EPAs I send out require some follow up... It's at least two to three hours a week to trigger the EPA [assessment], tracking which ones I need to get done, to following up...then you've got pressures from your competency committee that you need more...It's consuming my time and my energy (P12)
It was also felt to contribute to a culture of constant assessment, leading to “frustration” (P4), “anxiety” (P1), “[being] overwhelmed” (P13), “burn-out” (P11), and distress:
That anxiety then translates into a little bit of panic sometimes, you're not sure if you're going to be able to meet all of the competencies within the given time frame. (P8)
This extra paperwork and the stress, and the anxiety... [thinking] I hope I finish all my EPAs on time so that they don’t make you stay an extra year of residency. (P5)
Learning overshadowed by focus on assessment
Residents were receptive to the philosophy of CBME and acknowledged that the focus on competencies and frequent formative assessment had the potential to lead to more frequent feedback and coaching to support learning. However, in reality, a focus on assessment was felt to “detract from learning” (P16). Although residents felt that CBME assessments were generally intended to be low-stakes, formative encounters, they recognized their ultimately summative purpose and perceived them as high-stakes evaluations that led to performance anxiety: “If I'm attempting an EPA with a staff, then I know I'm being judged. It's not the low-stake evaluation as it was proposed to us to be.” (P11)
For example, residents in Triple C described a feeling of constant scrutiny due to frequent assessments, which they felt were in tension with meaningful learning conversations outside of the form-filling exercise. For residents in CBD, and in some operationalizations of Triple C, the language of forms with standardized rating scales that conveyed discrete supervisory judgments reinforced that a judgment was occurring. Residents described feeling that these moments of assessment were not truly the low-stakes opportunities for learning that had been proposed, which they felt hindered their vulnerability and openness to learning.
It’s a little inhumane, it takes away the humanity of and the fact that…You should also be learning and going up that learning curve, not just being tested and assessed every single day on every single thing. (P2)
Additionally, in both systems of CBME, narrowing one’s focus to achieving a specific competency or checklist of assessments was felt to come at the expense of other learning experiences. Residents felt this impeded the achievement of the more holistic attributes needed to become a well-rounded physician and reinforced a mindset focused on assessment:
It took people's objectives away from being a good physician to completing this checklist…[that] don't involve the whole physicianship [sic] that is associated with being a doctor...It takes you away from being the scholar...the educator...the advocate because you don't focus on those things and being a better person as a whole, you actually focus more on the individual task. (P7)
Sometimes it does take away from allowing for natural learning experiences to happen...ignoring anything else that pops up because it doesn't fit with the EPA for today, ignore that. (P16)
Feedback quality on assessments was variable, often not meeting expectations
Residents felt that these frequent assessments and associated supervisory judgments would be worthwhile if they received consistent “specific, constructive” (P13) feedback. In reality, feedback often fell short of these expectations and was frequently “wide, vague, [and] generalized” (P10), which made the feedback lack meaning: “If I felt like I was getting better feedback...better coaching this would all be worth it...And so, this is all just added work and added stress with no benefit.” (P12)
According to many residents, the scale-based ratings or checklists of competencies on assessment forms encouraged preceptors to prioritize “checking boxes” (P15) and form completion over meaningful verbal feedback or constructive written comments. Though residents recognized that the assessment encounter should foster meaningful verbal conversation, they found it often did not. As such, many described completed assessments as an exercise in ticking boxes on a list rather than the valuable feedback moments they experienced outside of assessment encounters:
[Faculty should] use evidence of the resident's performance to either tell them [what] they're doing well or tell them where they can improve, not just sit there with a checklist and sign off all these competencies. (P6)
[When the] staff fill out these cards for me, and they're looking at it...sign their name…check a few things [on the form] …feedback is less meaningful. (P15)
Gaming the system was common
Entrustment ratings and judgments of performance were felt to vary widely between preceptors, even when residents believed their performance was consistent. This led to frustration over inconsistent performance standards, especially when these judgments contributed, in aggregate, to summative decisions that could delay progression in training.
I found it very frustrating...Some preceptors may find someone competent in one of these competencies, and for the same exact resident some other preceptors may find that that resident is not competent in that specific [interaction]…it becomes, actually, a very subjective assessment. (P10)
Attuned to assessment and the need to be deemed competent on a sufficient number of assessments to progress in training, some residents described gaming the system. In this “game” (P2), prior knowledge of supervisor preferences was shared among cohorts and residents modified their behaviours with the express purpose of being deemed sufficiently competent on assessments: “We kept a record of every staff; how they like to do their step of doing an epidural just so when we do that type of EPA with them, we can pass.” (P11)
Other residents purposefully sought out tasks they would succeed at or lenient supervisors to improve their chances of a favourable assessment: “There are certain attendings that you would want to get your competencies signed off because you know that they would sign all those competences than others who never would.” (P6)
As interpersonal dynamics were felt to influence assessment outcomes, some residents described deliberately participating in a “social game … to schmooze with people and kissing butt” (P5). Overall, these behaviours were recognized as potentially subverting the intent and validity of assessment, yet for many they were felt to be necessary given the perceived lack of consistent performance standards among supervisors, performance anxiety, and the desire to pass.
Faculty engagement and support was valued
More positive experiences were described when there was greater perceived engagement and buy-in from faculty, particularly in the assessment process. Residents valued experiences in which supervisors demonstrated an interest in coaching or initiated assessment encounters: “[I appreciate that] my family preceptor is really good about being like, ‘Hey, let's sit down and do some field notes.’” (P13)
Residents also valued faculty leaders who sought to understand and address barriers to completing program requirements and assessments, or who offered flexibility in the training program to facilitate opportunities to demonstrate attainment of competencies. Longitudinal relationships with faculty were also supportive, especially in programs with an academic coach who helped residents interpret assessment data and was not involved in performance judgments. Coaches also served to advocate for residents at the program level:
I would advise others to foster a good relationship with your program director and with your academic coach…. So [your] coach knows who you are as a person… [and at] the competency committee meeting they can also vouch for you, should you be behind. (P14)
Discussion
Our study explored the experiences of residents in CBME programs in Canada and identified challenges with assessment and feedback. Residents perceived that assessment overshadowed learning in CBME, from the need to frequently complete and track assessment forms, to the perceived inconsistency and summative intent of performance judgments, to a focus on form-filling instead of quality feedback. Residents developed behaviours to cope with these concerns, including attempts to game the system, and described emotional consequences of the tensions related to the assessment process.
While workplace-based assessments in CBME, such as EPA assessments, are designed to be formative and lower-stakes, residents in our study recognized their ultimately summative intent and felt that they still constituted higher-stakes evaluations. This led to performance anxiety that was felt to overshadow learning, hindering residents’ ability to engage with assessments as true zero-stakes learning opportunities, consistent with previous findings.26,40,41 Feedback acceptance has been shown to require psychological safety, which may be constrained by a focus on assessment over learning. This could lead to missed opportunities for coaching and learning if trainees fail to engage with the developmental intent of assessment in CBME.42-44 Strategies have been proposed to promote a developmental focus and lower the perceived stakes of assessment in CBME. For example, narrative forms without associated competence or entrustment ratings could be used more frequently to promote moments of lower-stakes assessment. Alternatively, notes could be taken during feedback and assessment encounters to later stimulate personal reflection, without contributing to assessment portfolios or summative decision making.45,46 These approaches could help learners focus more on the encounter than on the form, and allow them to be vulnerable enough to engage in feedback conversations that foster their development. Alongside such deliberate organizational strategies, programs must support a broader culture change, shifting the mindset of trainees and faculty alike, to overcome the predominance of the summative assessment paradigm in medical education and improve the overall acceptance of assessment in CBME.43,46
Compounding the perception of high-stakes evaluation, learners often felt that assessment in CBME led to poor-quality feedback. Previous work has shown that standardized numeric entrustment scales may limit the flexibility, breadth, and quality of dialogue and feedback.26,40 Moreover, a focus on form completion over coaching, even in the absence of an entrustment scale (as in some assessments in Triple C programs), may limit feedback quality.26,47 Critically, feedback that does not meet expectations may constitute a missed opportunity to achieve the intended aim of supporting learners’ growth and skill acquisition in CBME.40,48 Programs must therefore prioritize widespread faculty development to build the necessary feedback and coaching skills and to promote a growth mindset in learners, while adopting essential organizational strategies to promote valuable feedback conversations and to address challenges with the current implementation of assessment tools and encounters.23,24,49
Expanding on other recent work exploring how residents understand CBME as an educational framework,50 our study unpacked concerns that assessment burden, coupled with the view that judgments were high stakes yet inconsistent across supervisors, led residents to attempt to game the system. Similar to prior studies,51,52 some residents deliberately chose assessors who would rate them favourably or picked tasks at which they would succeed, rather than experiences that would enhance their learning. As hypothesized by Leung (2002), an unintended consequence of CBME implementation may be that it encourages trainees to do just enough to progress in training, at the expense of diversified learning.53 These behaviours could also threaten the validity of assessment data interpretation, emphasizing the need for prevention strategies.41,54 Faculty could be instructed to initiate assessment encounters more frequently to minimize the potential for strategic selection of assessors or tasks by residents.27 At a program level, monitoring strategies could be instituted to recognize when such behaviours are occurring and to help mitigate or calibrate for them. As Acai et al. suggest,51 coaches who are sensitized to “hawks” and “doves” (stringent and lenient raters) might be able to identify gaming. Yet, critically, efforts will also be needed to address contributing factors, including burdensome administrative requirements, the current assessment culture, and possibly unclear performance standards across faculty. Because diverse judgments, taken together, can have significant value for learning, some inter-faculty variability should be maintained even as performance expectations are better standardized.36
Finally, fostering longitudinal coaching relationships, as highlighted by our findings, may also support improved experiences with assessment. Strengthening faculty-trainee relationships and more consistently integrating academic advisors into training programs may help learners translate formative assessment data into learning plans and promote a focus on continuous improvement instead of assessment outcomes.13 Ultimately, structural and organizational changes, as well as widespread faculty and resident development to maximize effective feedback dialogue, must take place to improve resident experiences in CBME.23,24,49 Moreover, involving residents not only in the initial stages of implementation but also in program evaluation and improvement efforts might allow ongoing vulnerabilities to be addressed in a more timely manner.
Limitations
Limitations of this study include the small sample size relative to the large population of residents in CBME programs in Canada, although theoretical saturation was reached. Our study was intended to be exploratory, and we purposively sampled residents from a variety of CBME training programs, institutions, and years of training to obtain diverse views. However, those who volunteered to participate may differ from those who did not, potentially introducing a degree of bias to these results. Our sample also skewed towards residents early in their training. Accordingly, future studies should further explore resident perceptions and experiences to better capture all viewpoints. Our findings may also reflect early experiences of CBME. Yet the findings were similar across specialty and family medicine programs, the latter of which had been participating in CBME for nearly a decade, suggesting a potentially more pervasive series of unintended consequences that may need to be addressed. Further studies may also help delineate distinctions between CBME training models and programs that were not identified in the present study. Overall, the results of our study are important for informing educators, administrators, and licensing bodies, and have already informed work at RDoC and led to collaborations with the Royal College. An additional limitation is that we were unable to include residents from Quebec; however, similar studies of residents in Quebec have reported similar findings.30
Conclusion
Although residents value the potential for CBME to improve the quality of education, assessment, and feedback they receive, their experiences suggest that it may not be consistently achieving these objectives. Our work highlights a need to mitigate vulnerabilities associated with assessment in CBME and to optimize opportunities to support meaningful formative feedback. Initiatives to improve faculty education, to modify existing assessment forms and processes, and to foster coaching relationships may improve the resident experience.
Acknowledgements
We thank all those who participated in this study.
Appendix A
Final Interview Guide
Demographic Questions: What program are you in? What PGY year of training? Are you training in a CBME program?
- What is your understanding of the Competency-Based Medical Education (CBME) model (*Triple C Framework in Family Medicine)?
- What, in your understanding, is this model an alternative to?
- Do you think that the CBME model* has resulted in the same or different learning experience for you as compared to residents in traditional non-CBME training programs?
- If so, how? What are the impacts? (Explore positive and negative)
- If not, please explain.
- From your experience, what is working well in your training program, as it relates to CBME*?
- From your experience, what elements of your training program, as they relate to CBME*, might need improvement?
- Have you personally experienced any specific challenges with the CBME* training program model? Are there any specific benefits? If so, please elaborate.
- What methods of assessment are used to monitor your progress in the program and in acquiring specific competencies?
- How were these methods explained to you?
- Do you think these assessment methods have been adequate or appropriate? (Explore why or why not)
- In terms of your program’s implementation of the CBME* model, what do you think was done well? Is there anything you wish had been done differently?
- What recommendations would you have for residents entering programs that are about to undergo the transition, or have fully done so?
- Is there anything else you would like to bring to our attention regarding your experience?
Funding Statement
Funding: This research was funded by Resident Doctors of Canada.
Conflicts of Interest
The authors have no conflicts of interest to declare.
References
1. Frank JR, Snell LS, ten Cate O, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32:638-45. 10.3109/0142159X.2010.501190
2. ten Cate O. Competency-based postgraduate medical education: past, present and future. GMS J Med Educ. 2017;34. 10.3205/zma001146
3. Tannenbaum D, Kerr J, Konkin J, et al. Triple C competency-based curriculum. Report of the Working Group on Postgraduate Curriculum Review - Part 1. Mississauga, ON: College of Family Physicians of Canada; 2011.
4. Oandasan I, Wong E, Saucier D, Donoff M, Iglar K, Schipper S. Triple C: linking curriculum and assessment. Can Fam Physician. 2012;58:1165.
5. The Royal College of Physicians and Surgeons of Canada. What is CBD? Available from: https://www.royalcollege.ca/rcsite/cbd/what-is-cbd-e [Accessed Jun 30, 2020].
6. ten Cate O, Scheele F, ten Cate TJ. Viewpoint: competency-based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82:542-7. 10.1097/ACM.0b013e31805559c7
7. Lockyer J, Carraccio C, Chan MK, et al. Core principles of assessment in competency-based medical education. Med Teach. 2017;39:609-16. 10.1080/0142159X.2017.1315082
8. Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: from Flexner to competencies. Acad Med. 2002;77:361-7. 10.1097/00001888-200205000-00003
9. Van Melle E, Frank JR, Holmboe ES, Dagnone D, Stockley D, Sherbino J. A core components framework for evaluating implementation of competency-based medical education programs. Acad Med. 2019;94:1002-9. 10.1097/ACM.0000000000002743
10. Oandasan IF, Saucier D, eds. Triple C Competency-based Curriculum Report - Part 2: Advancing Implementation. Mississauga, ON: College of Family Physicians of Canada; 2013. https://portal.cfpc.ca/resourcesdocs/uploadedFiles/Education/_PDFs/TripleC_Report_pt2.pdf [Accessed Sept 20, 2020].
11. ten Cate O, Carraccio C. Envisioning a true continuum of competency-based medical education, training, and practice. Acad Med. 2019;94:1283-8. 10.1097/ACM.0000000000002687
12. Gofton W, Dudek N, Barton G, Bhanji F. Workplace-based assessment implementation guide: formative tips for medical teaching practice. 1st ed. The Royal College of Physicians and Surgeons of Canada; 2017:1-12. Available from: http://www.royalcollege.ca/rcsite/documents/cbd/wba-implementation-guide-tips-medical-teaching-practice-e.pdf
13. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach. 2010;32:676-82. 10.3109/0142159X.2010.500704
14. Frank JR, Mungroo R, Ahmad Y, Wang M, De Rossi S, Horsley T. Toward a definition of competency-based education in medicine: a systematic review of published definitions. Med Teach. 2010;32:631-7. 10.3109/0142159X.2010.500898
15. Gruppen LD, ten Cate O, Lingard LA, Teunissen PW, Kogan JR. Enhanced requirements for assessment in a competency-based, time-variable medical education system. Acad Med. 2018;93:S17-21. 10.1097/ACM.0000000000002066
16. ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005;39:1176-7. 10.1111/j.1365-2929.2005.02341.x
17. Ellaway RH, Mackay MP, Lee S, et al. The impact of a national competency-based medical education initiative in family medicine. Acad Med. 2018;93:1850-7. 10.1097/acm.0000000000002387
18. Hsu T, De Angelis F, Al-asaaed S, Basi SK, Tomiak A, Grenier D, et al. Ten ways to get a grip on designing and implementing a competency-based medical education training program. Can Med Educ J. 2021;12. 10.36834/cmej.70723
19. ICE Blog. Introducing a core components framework for competency-based medical education. Available from: https://icenetblog.royalcollege.ca/2021/11/18/introducing-a-core-components-framework-for-cbme/ [Accessed Dec 30, 2021].
20. Lim J, Westerman ME, Stewart NH, Correa R, Eno C. Trainee perspectives on the writing and implementation of Milestones 2.0. J Grad Med Educ. 2021;13:8-10. 10.4300/JGME-D-20-00859.1
21. Carraccio C, Englander R, Van Melle E, et al. Advancing competency-based medical education: a charter for clinician-educators. Acad Med. 2016;91(5):645-9. 10.1097/ACM.0000000000001048
22. Harrison CJ, Könings KD, Schuwirth L, Wass V, van der Vleuten C. Barriers to the uptake and use of feedback in the context of summative assessment. Adv Health Sci Educ. 2015;20:229-45. 10.1007/s10459-014-9524-6
23. Iobst WF, Sherbino J, ten Cate O, et al. Competency-based medical education in postgraduate medical education. Med Teach. 2010;32:651-6. 10.3109/0142159X.2010.500709
24. Mann S, Hastings Truelove A, Beesley T, Howden S, Egan R. Resident perceptions of competency-based medical education. Can Med Educ J. 2020;11:e31. 10.36834/cmej.67958
25. Upadhyaya S, Rashid M, Davila Cervantes A, Oswald A. Exploring resident perceptions of initial competency based medical education implementation. Can Med Educ J. 2021;12(2):e42-56. 10.36834/cmej.70943
26. Branfield Day L, Miles A, Ginsburg S, Melvin L. Resident perceptions of assessment and feedback in competency-based medical education: a focus group study of one internal medicine residency program. Acad Med. 2020;95:1712-7. 10.1097/ACM.0000000000003315
27. Marcotte L, Egan R, Soleas E, Dalgarno NJ, Norris M, Smith CA. Assessing the quality of feedback to general internal medicine residents in a competency-based environment. Can Med Educ J. 2019;10:e32-47. 10.36834/cmej.57323
28. David V, Walsh M, Lockyer J, Mintz M. Entrustable professional activities: an analysis of faculty time, trainee perspectives and actionability. Can J Gen Intern Med. 2021;16:8-13. 10.22374/cjgim.v16i1.415
29. Hall AK, Rich J, Dagnone JD, et al. It’s a marathon, not a sprint: rapid evaluation of competency-based medical education program implementation. Acad Med. 2020;95:786-93. 10.1097/ACM.0000000000003040
30. Fédération des médecins résidents du Québec. Year 3 of implementation of Competence by Design: negative impact still outweighs theoretical benefits. Observations on the day-to-day reality of CBD. 2020.
31. Schultz K, Griffiths J. Implementing competency-based medical education in a postgraduate family medicine residency training program: a stepwise approach, facilitating factors, and processes or steps that would have been helpful. Acad Med. 2016;91:685-9. 10.1097/ACM.0000000000001066
32. Hamza DM, Ross S, Oandasan I. Process and outcome evaluation of a CBME intervention guided by program theory. J Eval Clin Pract. 2020;26:1096-104. 10.1111/jep.13344
33. Ross S, Poth CA, Donoff MG, et al. Involving users in the refinement of the competency-based achievement system: an innovative approach to competency-based assessment. Med Teach. 2012;34. 10.3109/0142159X.2012.644828
34. Charmaz K, Belgrave LL. Qualitative interviewing and grounded theory analysis. In: The SAGE handbook of interview research: the complexity of the craft. 2nd ed. Thousand Oaks, CA: SAGE Publications; 2012:347-66. 10.4135/9781452218403.n25
35. Watling C, Lingard L. Grounded theory in medical education research: AMEE Guide No. 70. Med Teach. 2012;34:850-61. 10.3109/0142159x.2012.704439
36. Charmaz K. Constructing grounded theory: a practical guide through qualitative research. London: SAGE Publications Ltd; 2006.
37. Apramian T, Cristancho S, Watling C, Lingard L. (Re)Grounding grounded theory: a close reading of theory in four schools. Qual Res. 2017;17:359-76. 10.1177/1468794116672914
38. Morse JM. The significance of saturation. Qual Health Res. 1995;5:147-9. 10.1177/104973239500500201
39. Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2017;27:591-608. 10.1177/1049732316665344
40. Martin L, Sibbald M, Brandt Vegas D, Russell D, Govaerts M. The impact of entrustment assessments on feedback and learning: trainee perspectives. Med Educ. 2020;54:328-36. 10.1111/medu.14047
41. Schut S, Driessen E, van Tartwijk J, van der Vleuten C, Heeneman S. Stakes in the eye of the beholder: an international study of learners’ perceptions within programmatic assessment. Med Educ. 2018;52:654-63. 10.1111/medu.13532
42. Atkinson A, Watling CJ, Brand PLP. Feedback and coaching. Eur J Pediatr. 2021;1-6. 10.1007/s00431-021-04118-8
43. Watling C, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2019;53:76-85. 10.1111/medu.13645
44. MacNeil K, Cuncic C, Voyer S, Butler D, Hatala R. Necessary but not sufficient: identifying conditions for effective feedback during internal medicine residents’ clinical education. Adv Health Sci Educ. 2020;25(3):641-54. 10.1111/medu.14154
45. Harrison C, Wass V. The challenge of changing to an assessment for learning culture. Med Educ. 2016;50:704-6. 10.1111/medu.13058
46. Ginsburg S, Watling CJ, Schumacher DJ, Gingerich A, Hatala R. Numbers encapsulate, words elaborate: toward the best use of comments for assessment and feedback on entrustment ratings. Acad Med. 2021;96:S81-6. 10.1097/ACM.0000000000004089
47. Malone K, Supri S. A critical time for medical education: the perils of competence-based reform of the curriculum. Adv Health Sci Educ. 2012;17:241-6. 10.1007/s10459-010-9247-2
48. Ramani S, Krackov SK. Twelve tips for giving feedback effectively in the clinical environment. Med Teach. 2012;34:787-91. 10.3109/0142159x.2012.684916
49. Boet S, Pigford AAE, Naik VN. Program director and resident perspectives of a competency-based medical education anesthesia residency program in Canada: a needs assessment. Korean J Med Educ. 2016;28:157-68. 10.1111/medu.12637
50. Upadhyaya S, Rashid M, Davila Cervantes A, Oswald A. Exploring resident perceptions of initial competency based medical education implementation. Can Med Educ J. 2021;12(2):e42-56. 10.36834/cmej.70943
51. Acai A, Li SA, Sherbino J, Chan TM. Attending emergency physicians’ perceptions of a programmatic workplace-based assessment system: the McMaster Modular Assessment Program (McMAP). Teach Learn Med. 2019;31:434-44. 10.1080/10401334.2019.1574581
52. Gaunt A, Patel A, Rusius V, Royle TJ, Markham DH, Pawlikowska T. ‘Playing the game’: how do surgical trainees seek feedback using workplace-based assessment? Med Educ. 2017;51:953-62. 10.1097/acm.0000000000001523
53. Leung WC. Competency based medical training: review. BMJ. 2002;325(7366):693-6. 10.1136/bmj.325.7366.693
54. Pinsk M, Karpinski J, Carlisle E. Introduction of competence by design to Canadian nephrology postgraduate training. Can J Kidney Health Dis. 2018;5. 10.1177/2054358118786972