Abstract
Competency-Based Medical Education (CBME) is pushing the medical profession to be more accountable in its standards of assessment. This has led us to focus our efforts at the top of Miller’s pyramid, where we aim to see what the trainee ‘does’ in the clinical environment. In Canadian Royal College specialty training, this has come in the form of workplace-based supervision of trainees performing Entrustable Professional Activities (EPAs). This is unfamiliar territory for many residents and faculty, and implementation of an additional assessment process into already busy clinical practice has been particularly challenging. Because EPA assessments serve as significant contributors to new programs of assessment, failure to collect high-quality EPA assessments threatens the validity of this new system. Understanding the barriers to and enablers of EPA acquisition can inform faculty development initiatives to ensure success.
Based on our previous work studying early experiences of EPA assessment acquisition in Emergency Medicine, we have identified eight key concepts to guide faculty development initiatives, namely: the rationale for CBME, the ‘behind the scenes’ of CBME, how to construct rich narrative comments, effective use of supervision scales, the tension of EPA assessments being both formative and summative, the importance of a shared responsibility between residents and faculty for EPA assessment completion, familiarity with the suite of EPAs, and tips and tricks for incorporating EPA assessment completion into busy clinical practice. These key concepts can be integrated into an overall faculty development strategy for building this now-essential skill set.
Introduction
Workplace-based Entrustable Professional Activity (EPA) assessments have become one of the main targets of assessment in CBME. They require a trainee to be directly or indirectly observed performing an essential task of that particular specialty. The discipline-specific EPAs in Canadian Royal College specialty training are grouped according to stages, moving through Transition to Discipline, Foundations of Discipline, Core of Discipline, and Transition to Practice. This new process is intended to be more learner-centered and to focus on the outcomes and abilities of trainees. This program of assessment holds the potential to significantly improve medical training; however, implementation has faced significant challenges.1 Many faculty and residents were unfamiliar with EPAs and, as such, have found the transition to using them difficult.2
In our previous work, we interviewed Emergency Medicine faculty and residents to elicit feedback on their early experiences with EPA assessments, using the theoretical domains framework for data analysis.2 We found several challenges for faculty related to EPAs (Appendix A). These included understanding the rationale for the change to CBME, understanding the ‘back end’ of CBME, understanding how EPAs are used, how to fill them out effectively, who should fill them out, and how to fit them into the flow of clinical work.
The authors hold valuable perspectives related to faculty development of EPA skills. RW is a former Program Director and current Medical Education Fellowship Director, MB is a former resident who trained in a CBME residency program, WC is a Clinician Educator for the Royal College and a Competence Committee Chair, AH is a former CBME implementation lead and currently works at the Royal College leading program evaluation of Competence by Design, TC is a former Faculty Development Dean and Competence Committee Chair and Dean of a new medical school, and QP was a resident training prior to the implementation of CBME and is currently a Program Director. We draw on our extensive experience and the existing literature to provide eight ways to get a grip on developing faculty development initiatives that support CBME.
Eight things to focus on when developing faculty development initiatives
1. Begin by communicating the purpose of CBME
Physicians need to understand that we weren’t doing fine. Postgraduate medical education leaders have struggled with low-quality feedback, variable attainment of competence, concerns with promotion decisions, and patient safety risks.1 While the majority of trainees have done okay with the one-size-fits-all, time-based training system, a significant number have not. Trust in the medical profession has fallen. As a self-regulating profession, we need to do better, and that means a different system of assessment.3
2. Describe what happens ‘behind the scenes’ in CBME
As with the ‘why,’ front-line faculty coaches need to learn about the ‘back end’ of CBME and understand that their comments really matter. They need to understand how the information collected within EPA assessments is used by residents, competence committees, and academic advisors to inform progress decisions and develop individualized learning plans for coaching over time.4
3. Guide faculty to craft useful narrative comments
Rich narrative comments are critical for competence committee function. Because committee members were not there in the moment, they need to gain a picture of the case, including the trainee’s performance and what can be improved in future cases. This enables members to analyze clinical performance data, which can provide detailed information about trainee performance to inform progression decisions and help direct individualized learning plans for trainee growth.5,6
4. Guide faculty to effectively use supervision scales
Leniency bias has been prevalent within previous forms of assessment, and it prevails once again with EPA data.5 Faculty need to understand that they are not the judge of trainee competence, but rather a judge of performance for that one observed task.6 Determining entrustment is the job of the Competence Committee. Trainees should receive lower supervision scale scores early in training, with the goal of improving with more experience and feedback. This will help cultivate a growth mindset in both the trainees and the faculty involved.7
5. Acknowledge the tension of EPAs being both formative and summative
One of the promises of CBME was a decreased reliance on high-stakes summative assessment ‘of learning,’ with EPA assessments serving as frequent, low-stakes formative assessments ‘for learning.’ In reality, EPA assessments are formative in the moment and summative in aggregate.8 Because of this, residents struggle to collect assessments for formative purposes; faculty therefore need to acknowledge this tension and set up the learning environment and relationship in a way that allows trainees to use EPA assessments to grow and improve.6
6. Impart to faculty the shared responsibility of initiating EPAs
The burden of EPA assessment acquisition has fallen mostly on residents, but they understandably struggle to ask their busy supervisors to do this for them.8 Faculty need to prioritize completing EPA assessments as part of clinical work, so residents are not left to carry the burden themselves.9
7. Stress the importance of faculty familiarity with their trainees’ EPAs
Given the above tension between formative and summative assessment, trainees may select EPA assessments for tasks they are familiar with or performed well on.10 Faculty need to take an active role in selecting EPA assessments for trainees where critical feedback has occurred, so Competence Committees can get a robust picture of trainee performance and effectively promote growth via individualized learning plans.
8. Provide tips for fitting in EPA completion during clinical work
The quality of an EPA assessment declines rapidly if it is not completed in the moment. Faculty need to find ways either to complete these assessments during clinical work or to set up processes so that the details of the case and the coaching conversation are not lost or subject to recall bias.
Summary
We have generated a list of eight key concepts to get a grip on for faculty development relating to effective EPA assessment: the rationale for CBME, the ‘behind the scenes’ of CBME, how to construct rich narrative comments, effective use of supervision scales, the tension of EPA assessments being both formative and summative, the importance of a shared responsibility between residents and faculty for EPA completion, faculty familiarity with the suite of EPAs, and tips and tricks for fitting EPA completion into busy clinical practice. Programs and institutions can use this list when planning faculty development initiatives.
Appendix A. Key concepts for faculty development content for EPAs
| Sub-themes from Theoretical Domains Framework | Key faculty development concept |
|---|---|
| O1 – Assessor attitude toward EPAs affects EPA completion; O2 – There are different viewpoints on whether or not EPAs have a meaningful impact; BCon1 – There is a question as to whether the EPA assessment tool and its resultant data are better than previous methods of assessment, and whether the generated data are meaningful; BCon2 – The adoption of EPAs is perceived to have resulted in the loss of global assessment; BCon4 – There is concern that contrived events for observation lead to disingenuous feedback; X1 – CBME provides more feedback than before; X2 – CBME promotes reflective practice; X3 – CBME has an added burden of assessment without huge benefits; X4 – It is rewarding to see learner progression of confidence as they move up in postgraduate years | Rationale for CBME – why did we change? |
| K1 – The assessors’ knowledge of competence committee expectations affects the information included in EPAs; G8 – EPAs can be viewed as a means of communicating with the competence committee, in addition to an individual assessment tool; SI6 – A shared understanding of the EPAs’ purpose and value between assessors and learners aids in EPA completion; SI8 – Privacy, or lack thereof, impacts authenticity of feedback | Behind the scenes of CBME – how is data used? |
| K4 – EPA knowledge translation is continuous; BCap1 – Participants’ ease of translating verbal feedback into EPA documentation varies; BCon6 – Written EPA feedback may not capture the entirety of feedback and reflection that occurs in assessment; Sk1 – The act of translating observation into written EPAs takes practice; ECR2b – High-quality EPA assessment is time intensive: transposing verbal feedback into written EPAs takes time; MADP1 – High-quality feedback requires dedicated attention: explicitly making note of cases for later discussion enhances the quality of observation; MADP5 – Memory aids can facilitate improved delayed EPA feedback; Sk3 – Training and practice with EPAs impacts feedback provision; SPRI6 – The assessor approach impacts EPA acquisition and quality; SI7 – End users of the EPAs, like the Program Directors and Competence Committee, can provide feedback to assessors on EPA quality | Value and use of rich narrative comments, and how to construct a useful comment |
| BCap2 – Assessors have varying capabilities to provide written feedback and entrustment scores that are congruent with verbal feedback; K3 – There is a learning curve for EPA rating scale use; G5 – The assessor’s approach to EPA completion, either focusing on attaining minimum competency or coaching to excellence, impacts the EPA content; Sk2 – Consistency of entrustment score use varies amongst assessors | Effective use of supervision scales |
| E1 – Assessors’ ability to emotionally relate to the learner affects their assessments; I1 – Setting targets for EPA completion impacts acquisition; I2 – Participants specifically target different scenarios that will result in different types of feedback; ECR6 – The compatibility between the EPA description and the clinical case impacts EPA choice; ECR9 – Cultural expectations of performance favoring either high scores (performance orientation) or low scores (growth mindset) drive EPA-based assessment practices and assessment-seeking behaviour; G2 – Assessors’ goals in EPA completion affect the EPA feedback content; G7 – The individualization of coaching impacts the quality of EPA assessment; BCon3 – There is an inherent tension regarding the ‘true’ purpose of the EPA assessment and its generated data; Sk4 – Providing feedback within a psychologically safe context impacts EPA delivery; SI5 – The feedback culture favors the provision and seeking of documented high scores | Tension of EPAs being both formative and summative |
| K2 – Senior residents can orient junior residents to EPAs; I1 – Setting targets for EPA completion impacts acquisition; BCap5 – Comfort with seeking out and receiving feedback varies with different levels of experience; R1 – Prior use of similar assessment tools aids the transition to EPA utilization; R2 – Fellows and senior resident assessors facilitate EPA acquisition differently than faculty assessors; R3 – Previous feedback experience can positively or negatively affect EPA acquisition; ECR1 – The Emergency Department environment has priorities that compete with EPA acquisition; ECR7 – Variable faculty engagement impacts EPA assessment; ECR11 – Longitudinal supervisors facilitate high-quality feedback and high-volume EPA assessment completion; G6 – Flexibility and an opportunistic approach enable effective EPA assessment; MADP2 – Familiarity with EPAs and awareness of the clinical environment allows learners and assessors to recognize when opportunities for ideal EPA cases arise; BCon5 – Tension between social capital and EPA completion exists; SPRI1 – Learners have a responsibility to ensure their EPAs are being completed; SPRI2 – Different levels of persistence by the learner result in different EPA completion rates; SPRI3 – Longitudinal supervision provides a different insight into trainee performance; SPRI4 – Faculty teaching and feedback may not be viewed as a job requirement; SPRI5 – Variable EPA selection between assessor and trainee; SI1 – Power differentials impact the EPA experience for both residents and faculty, and seniority along residency training impacts comfort in requesting EPAs; SI2 – EPAs are jointly negotiated and constructed by both the learner and the assessor: the choice of EPAs is dependent on selection by both learner and assessor, as well as the assessor’s particular feedback patterns; SI3 – Relationship continuity, familiarity, and trust between assessors and learners influence EPA acquisition; SI4 – The EPA acquisition experience varies depending on who is doing the assessing | Shared responsibility of EPA completion |
| K5 – Varying familiarity with EPAs affects completion; ECR5 – Specific clinical exposures are unpredictable in the clinical learning environment; ECR12 – A functional electronic dashboard allows for gap identification and resultant EPA assessment; SPRI5 – Variable EPA selection between assessor and trainee; MADP2 – Familiarity with EPAs and awareness of the clinical environment allows learners and assessors to recognize when opportunities for ideal EPA cases arise; BCap4 – Power differentials between assessors and learners affect requests for assessment; BRS2 – Knowing which EPAs match which clinical rotations allows for targeted completion | Familiarity with the suite of EPAs to utilize |
| E2 – Mental energy affects assessment acquisition and quality; BCap3 – Participants’ mental energy levels affect the acquisition of EPAs; O1 – Assessor attitude toward EPAs affects acquisition; ECR2a – High-quality EPA assessment is time intensive: providing verbal feedback in a high-quality fashion takes time; ECR3 – Technology and electronic form structures impact the capacity to engage in effective EPA assessment; ECR4 – The available tools and physical Emergency Department space impact EPA engagement; ECR10 – Workplace norms impact EPA acquisition; ECR13 – Assessment forms variably align with cultural norms and assessment practices in different clinical environments; G1 – Clinical practice inherently has competing priorities; G3 – The assessor’s prioritization of EPA completion affects its completion; G4 – There is variable enthusiasm for EPA completion and assessment; Sk4 – Providing feedback within a psychologically safe context impacts EPA delivery; SPRI6 – The assessor approach impacts EPA acquisition and quality; SPRI7 – Assessment can lead to relationship tensions in the work environment; BR1 – There are in-the-moment strategies that may impact EPA completion: EPAs have varying observational requirements, and indirect observation provides a different level of assessment authenticity; BR2 – There are systems-level strategies that may impact EPA completion; MADP3 – Resource utilization can serve to direct attention towards relevant EPAs in clinical environments (EPA lists, pop-up reminders to complete EPAs during shift); MADP4 – The timing of EPA completion impacts the level of detail recalled: providing EPA feedback during the shift is beneficial | Tips and tricks for fitting EPA completion into workflow |
Edited by:
Heather Buckley (section editor); Jane Gair (senior section editor); Marcel D’Eon (editor-in-chief)
Conflicts of Interest
None
References
1. Frank JR, Karpinski J, Sherbino J, et al. Competence by Design: a transformational national model of time-variable competency-based postgraduate medical education. Perspect Med Educ. 2024;13(1). https://doi.org/10.5334/pme.1096
2. Paterson QS, Alrimawi H, Sample S, et al. Examining enablers and barriers to entrustable professional activity acquisition using the theoretical domains framework: a qualitative framework analysis study. AEM Educ Train. 2023;7(2):e10849. https://doi.org/10.1002/aet2.10849
3. Cruess SR. Professionalism and medicine’s social contract with society. Clin Orthop. 2006;449:170-176. https://doi.org/10.1097/01.blo.0000229275.66570.97
4. Oswald A, Dubois D, Snell L, et al. Implementing competence committees on a national scale: design and lessons learned. Perspect Med Educ. 2024;13(1):56-67. https://doi.org/10.5334/pme.961
5. Richardson D, Landreville JM, Trier J, et al. Coaching in Competence by Design: a new model of coaching in the moment and coaching over time to support large scale implementation. Perspect Med Educ. 2024;13(1):33-43. https://doi.org/10.5334/pme.959
6. Woods R, Elder J. Context...Performance...Recommendation and Reinforcement (CPR2): bringing narrative comments to life in competency based medical education. Available at https://icenet.blog/2024/02/06/contextperformancerecommendation-and-reinforcement-cpr2-bringing-supervisor-narrative-comments-to-life-in-competency-based-medical-education/ [Accessed Mar 6, 2024].
7. Richardson D, Kinnear B, Hauer KE, et al. Growth mindset in competency-based medical education. Med Teach. 2021;43(7):751-757. https://doi.org/10.1080/0142159X.2021.1928036
8. Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2019;53(1):76-85. https://doi.org/10.1111/medu.13645
9. Cheung WJ, Bhanji F, Gofton W, et al. Design and implementation of a national program of assessment model – integrating entrustable professional activity assessments in Canadian specialist postgraduate medical education. Perspect Med Educ. 2024;13(1):44-55. https://doi.org/10.5334/pme.956
10. Mador B, Daniels VJ, Oswald A, Turner SR. Learner phenotypes in competency-based medical education. Med Sci Educ. 2021;31(6):2061-2064. https://doi.org/10.1007/s40670-021-01380-1