Perspectives on Medical Education. 2024 Feb 6;13(1):56–67. doi: 10.5334/pme.961

Implementing Competence Committees on a National Scale: Design and Lessons Learned

Anna Oswald 1,2,3,4, Daniel Dubois 3,5, Linda Snell 3,6, Robert Anderson 3,7, Jolanta Karpinski 3,8, Andrew K Hall 3,9, Jason R Frank 10, Warren J Cheung 9,11
PMCID: PMC10854462  PMID: 38343555

Abstract

Competence committees (CCs) are a recent innovation to improve assessment decision-making in health professions education. CCs enable a group of trained, dedicated educators to review a portfolio of observations about a learner’s progress toward competence and make systematic assessment decisions. CCs are aligned with competency based medical education (CBME) and programmatic assessment. While there is an emerging literature on CCs, little has been published on their system-wide implementation. National-scale implementation of CCs is complex, owing to the culture change that underlies this shift in assessment paradigm and the logistics and skills needed to enable it. We present the Royal College of Physicians and Surgeons of Canada’s experience implementing a national CC model, the challenges the Royal College faced, and some strategies to address them. With large-scale CC implementation, managing the tension between standardization and flexibility is a fundamental issue that needs to be anticipated and addressed, with careful consideration of individual program needs, resources, and engagement of invested groups. If implementation is to take place in a wide variety of contexts, an approach that uses multiple engagement and communication strategies to allow for local adaptations is needed. Large-scale implementation of CCs, like any transformative initiative, does not occur at a single point but is an evolutionary process requiring both upfront resources and ongoing support. As such, it is important to consider embedding a plan for program evaluation at the outset. We hope these shared lessons will be of value to other educators who are considering a large-scale CBME CC implementation.

Introduction

In an era of greater social accountability, the public has come to expect that postgraduate medical education (PGME) systems have a robust assessment process to ensure the competence of physicians who graduate to unsupervised practice [1]. In PGME, training program directors are responsible for monitoring trainee progress. In the past, they often did so using processes that relied on ad hoc data, supervisors’ remote retrospective impressions, or proxy measures of performance [2,3]. In recent years, postgraduate training has been transformed by the widespread implementation of competency based medical education (CBME), a model that seeks to improve how trainee progress is structured and assessed by promoting programmatic assessment and group decision-making by a Competence Committee (CC) to guide programs in the systematic collection of trainee performance data for summative assessment of progress [4,5]. While CCs have not been universally adopted in PGME CBME systems, there has been widespread uptake to date in the USA [6,7,8,9,10] and Canada [11,12,13], and growing interest internationally (e.g., the Netherlands and Taiwan) [14,15,16].

In programmatic assessment, as trainees progress through their training, they must achieve the outcomes of a curriculum, described as a series of statements about the expected abilities of graduates. A program of assessment explicitly outlines the assessment strategies and the breadth of assessment content and contexts to guide programs in the systematic collection of trainee performance data [17]. Many samples of a learner’s progress in achieving the desired competencies are obtained over the course of the curriculum. Multiple tools and many different assessors provide a variety of inputs into the assessment of trainee progress. While programmatic assessment can be achieved without CCs, for example through the program director’s review of the varying assessment inputs, this can be challenging given the volume of data to review and the increased risk of inherent subjective biases with individual reviewers [2,3,18]. Done accurately and effectively, programmatic assessment optimizes learning, facilitates decision-making regarding learner progression toward desired outcomes, and informs the program’s quality improvement activities [19].

In Competence by Design (CBD), the transformational competency-based change to PGME designed and implemented by the Royal College of Physicians and Surgeons of Canada (hereafter referred to as the Royal College), CCs capitalize on the promised benefits of programmatic assessment by using quantitative and qualitative assessment data that are collected electronically, then curated and collated through learning analytics into meaningful information describing individual learners and learner populations [20]. Programmatic assessment and CCs have been introduced in a linked fashion in CBD in direct response to calls for improved assessment systems and validity evidence for high-stakes summative decisions (e.g., readiness for unsupervised practice) [21,22]. Learning analytics and all other forms of assessment data are prepared, reviewed, and synthesized by a trained and dedicated group of educators, the CC, to make a collective judgment about progress, promotion, and, ultimately, readiness for unsupervised practice [23]. CC decisions are made collectively, incorporating multiple perspectives to create a broad picture of a trainee’s progression toward competence.

The rationale for CCs draws from the literature on group decision-making, which suggests that groups can reach better decisions than individuals [24] and that systematic group procedures that facilitate greater information sharing can improve group decision quality [25,26]. Thus, CCs may take advantage of group decision-making processes to collectively synthesize and interpret assessment data to make judgments about trainee performance and progress. To maintain fair decision-making within CCs, there should be consistent processes and procedures for how CCs review trainees’ progression. The validity of this summative assessment may be affected by the variability in volume, quality, and interpretation of assessment data within individual programs [27,28,29]. While many benefits of group processes for summative competence decisions have been proposed, CCs are not infallible [26]. There is increasing evidence that lack of member diversity, poor data quality and synthesis, ineffective sharing of information, and groupthink can all threaten the quality and defensibility of group decisions [30,31]. Thus, special attention must be given to these threats in the design and implementation of CCs on a national scale.

In this paper we describe the Royal College model of CCs used in CBD. We reflect on the rationale for the model’s design and its intended impact and then outline some of the early successes and challenges that have been noted in the functioning of CCs in Canadian Royal College specialty PGME. Our author group comprises physician clinician educators from a variety of specialties, including members involved in the Royal College CC design, implementation, and faculty development support, and so has detailed awareness of both the aspirations and the challenges throughout this journey. We recognize that this lens means we bring the biases of those who have developed and continue to work to support this national implementation, and that our perspective may differ from that of those who are struggling with the rationale or the implementation. However, all members of our author group are or were CC chairs (DD, JK, WC) or CC members (AO, JF, LS, RA) at their local institutions, so we also have lived experience of the CC process on the ground, which may help to balance these perspectives.

The Royal College’s national model of competence committees

While the structure and function of CCs have many similarities across health education systems, the Royal College developed a national model that underpins the work of CCs in Canada [32]. CCs in the Canadian specialist CBD model of CBME are guided at a national level by processes and procedures recommended by the Royal College that are intended to facilitate optimal group function [33]. The key principles in which the national CC model is grounded are derived from the overarching CBD approach to CBME and are outlined in Table 1.

Table 1.

Key principles of the Royal College of Physicians and Surgeons of Canada’s Competence Committee design.


Developmental view of a learner: The system is set up to support progression of competence for all trainees and to ensure that every learner has a pathway to certification. The CC takes the stance that all learners have the potential to be successful, given the right opportunities. The CC supports learners through tailored learning plans paired with guidance on next steps for the trainee’s development toward competence.

Programmatic assessment: The CC uses a comprehensive approach to assessment that capitalizes on multiple data inputs from a variety of data sources and contexts. Data collation and curation allow for an in-depth review of the trainee’s complete portfolio of assessment.

Defined group process with rules: The CC is expected to interpret data in a fair and just manner by applying strategies to mitigate bias and produce defensible group decisions. CC review and deliberation should allow for consideration of diverse views and consensus building. CCs are expected to have common expectations and processes to allow for sufficient consistency within and across programs and institutions.

Criterion-referenced decision-making: The CC is guided by predefined markers of progression, established by the respective national specialty committee, to inform decision-making.

Transparency: The CC is expected to follow structured processes and procedures in its review and deliberation of trainee data to make recommendations on achievement, progress, and promotion. These processes and procedures should be made clear to all relevant invested groups, in particular the trainee.

Clear communication: The CC is expected to use clear communication strategies to ensure the outputs of the committee are communicated to all relevant invested groups.

CC = Competence Committee.

The CC process in CBD is strongly influenced by the structure for CBD curriculum design that was used by every Royal College national specialty committee to create discipline-specific content [34]. The links between the features of CBD and the key principles underlying CC design are outlined in Table 2. In the Royal College CBD model, each national specialty committee took part in a standardized design process (the specialty education design [SED] workshop series) [35]. In this process, the committees set stage-specific expectations of trainee performance and created standards for training and assessment that included guides for the assessment of entrustable professional activities (EPAs), required training experiences, and CanMEDS competencies specific to the discipline [36]. The SED workshops provided an opportunity for the Royal College and front-line clinicians and program directors to engage in co-creation, which maximized the likelihood that local programs would integrate the standards into their CC practices. The CC in each local program uses these national assessment standards to inform their recommendations about trainee progress and promotion. This link to the work of the national specialty committee promotes consistency across diverse local CC contexts, and yet also allows programs to incorporate competencies that are unique to their specific local context. The CC must ensure that both local and national competence expectations are met.

Table 2.

Links between key CBD design features and Competence Committee principles.


Each CBD design feature listed below is linked to one or more of the Competence Committee principles: developmental view of trainee, programmatic assessment, defined group process with rules, criterion-referenced decision-making, transparency, and clear communication.

Framework
  • National specialty and stage-specific EPA assessment expectations and required training experiences guide CC decisions
  • WBA provides data to support EPA assessment and decision-making
  • National accreditation standards ensure minimum requirements of CCs
  • National guidelines outline common expectations for CC processes and local implementation
  • CCs are subcommittees of the existing program committee structure
  • Comprehensive data-informed review values both quantitative and qualitative data sources to inform progress decisions
  • Flexibility in educational experiences at the local level to ensure achievement of competencies

Communication and faculty development
  • National technical guides outline areas where CCs must follow national policy, processes, and accreditation standards, and areas of flexibility where local CCs can customize to their settings
  • National documents, standards, and expectations are easily accessible
  • National, targeted faculty development initiatives support the creation and running of CCs

Expanded CC role
  • CC review contributes to the development of individualized learning plans
  • CCs are integrated as agents of program, specialty, and CBD CQI nationally

CBD = Competence by Design; CC = Competence Committee; CQI = continuous quality improvement; EPA = entrustable professional activity; WBA = workplace-based assessment.

The Royal College produced documents to guide CC structure and function in the CBD model [32,33]. These documents support consistency in the application of the CC model across all Royal College accredited programs; they include guidance on suitable CC membership and on CC processes and procedures [33], sample terms of reference for a CC [32], and a technical guide [37] that outlines specific requirements as well as areas of flexibility in CC decision-making. With regard to membership, a minimum of three members is recommended, and the committee is encouraged to include a diversity of members (e.g., in seniority, gender, urban/rural practice, and physician and non-physician roles); this aims to promote diverse interpretation of the data and consideration of differing perspectives [38]. Program directors are encouraged to join CCs as non-voting members to facilitate communication between the CC and the residency program committee (RPC), but they are discouraged from chairing the CC themselves to avoid conflicts of interest and excessive workload [39]. The processes and procedures lay out the expectation that the CC will, in its review, apply a holistic, comprehensive approach to a portfolio of assessment that extends beyond clinical competencies [40,41]. CCs are expected to have access to and incorporate a range of assessment data that may include workplace-based assessments (WBAs) such as EPA observations, non-WBA clinical assessments (e.g., OSCE and simulation assessments), and non-clinical assessments (e.g., research competencies and teaching evaluations). The review conducted by the CC is expected to be data driven and grounded in the evidence documented in the trainee’s assessment portfolio; this aims to promote transparency and serves to minimize hearsay and inherent biases [25,42]. To allow for a deep review of trainees’ portfolios while maintaining efficiency during deliberation at CC meetings, CCs are encouraged to assign a primary reviewer who completes a detailed review of individual files to inform (but not replace) the CC discussion. Programs are required to share CC recommendations with their trainees. The Royal College encourages programs to provide trainees with longitudinal coaches [14,43,44] to help create plans for acting on the CC’s recommendations; however, programs have flexibility to develop other systems that serve the same purpose.

The Royal College CBD model is unique in that CCs make judgments of trainees’ achievement of stage-specific developmentally sequenced EPAs (known as RCEPAs) rather than solely judging their achievement of terminal (end-of-training) competencies. These judgments of competence are categorical achievement decisions (yes/no) for each of the stage-specific EPAs and contribute to the trainee’s progress toward overall competence. In addition, CCs make recommendations to their RPC on the learner’s status (e.g., progressing as expected, not progressing as expected, accelerated progress), readiness for progression from one stage to the next, readiness for sitting the certification examination, and, ultimately, readiness for certification [45]. These are meant to be comprehensive decisions, informed by the entire collection of performance data in the portfolio, not only by EPA observation data. In the Royal College CBD model, CCs are tasked with synthesizing assessment data to assign these summative status recommendations within and between stages; they do not determine a “level” or degree of competence along a continuous scale of entrustment for each EPA or milestone as in other CBME models [46,47].

In Canadian PGME, the Royal College is in a unique position to support the national implementation of CCs. Owing to its responsibility for national education design and standard setting, program and institution accreditation, and individual physician credentialing, the Royal College sets standards for multiple aspects of CC functioning. Before CBME, the Royal College had a system in which successful completion of specifically itemized time-based experiences was the basis for decisions regarding examination readiness and credentialing for specialist certification. In CBD, the local CC is responsible for recommending eligibility to sit national specialty examinations and eligibility for certification. However, the Royal College administers these examinations, credentials the candidates (in large part on the basis of the CC’s recommendation), and confers certification. The Royal College develops the national standards for training and assessment applied by CCs and the national expectations for CC functioning, while the responsibility for and oversight of CCs is at the level of each university’s PGME office. This local oversight of CCs is governed by the national system of program accreditation through the Canadian Residency Accreditation Consortium (CanRAC), of which the Royal College is a member; CanRAC includes standards for CCs at both the program and the institutional PGME level [48,49]. The Royal College CBD CC model recognizes and embraces the notion that responsibility for oversight to deliver graduates who provide safe, high-quality patient care is shared among three entities: the Royal College and its national specialty committees, the local programs and their CCs, and local institutions’ PGME offices.

An important goal of CCs in CBD is to apply a developmental approach that aims to ensure trainees are provided support and guidance to promote further development of their competence and mastery [50,51]. For example, the CC may suggest learning plans or clinical experiences to support success in stage progression or in the national certification examinations. However, this means that the CCs face a dual purpose in that they must take a gatekeeper or public safety role in their determinations of trainees’ progress and assurance of achievement of competence, while also taking a developmental approach in identifying goals and providing direction for further growth.

In the Royal College CBD CC model, CCs have an additional role: contributing to the continuous quality improvement (CQI) activities of the discipline-specific specialty committees [31,52]. Specialty committees are asked to review their standards on a regular basis. As part of this review, CCs are invited to identify and report to the national specialty committee, through their program director, instances where revisions or updated versions of the national assessment guides may be needed. This provides a link between local CC practices and experiences and national CQI of the specialty education design.

Challenges and lessons learned

The large-scale implementation of CCs was a complex undertaking, and thus it is not surprising that Royal College educators, program directors, and CC chairs encountered several challenges in the process. Their experiences and the strategies they used to attempt to mitigate those challenges offer lessons learned that may guide others who are planning to implement CCs at scale as they may encounter similar challenges (see Table 3). We do not have all the answers and continue to work with the invested groups in our PGME and Royal College community to co-create solutions as part of this journey. We present what we have learned to date.

Table 3.

Challenges and lessons learned in the large-scale implementation of competence committees.


1. Standardizing process and procedures while maintaining flexibility

Royal College responses to challenges:
  • Disseminated national terms of reference and policy documents
  • Articulated, through a technical guide, where there is flexibility in the process to allow adaptation to local structures and increased ownership
  • Created a community of practice model through the CC chairs forums to help identify and develop best practices among programs with similar contexts
  • Developed annual pulse surveys distributed to invested groups to identify whether processes were implemented as intended and to identify any unforeseen challenges

Insights and lessons we learned along the way:
  • Provide clear guidance and simplified expectations to ensure consistent messaging and practices
  • Anticipate local adaptations as there is no one-size-fits-all approach
  • Anticipate tensions between flexibility and standardization of interventions
  • Use program evaluation as a key enabler to help identify and mitigate any divergence in practices and to maintain fidelity and integrity during implementation

2. Addressing the contextual variability within institutions, programs, and systems

Royal College responses to challenges:
  • Identified and recruited a national CBME Leads group with Leads within each university
  • Created a network of peers within each university and externally through individual specialties through the CC chairs forums
  • Developed ongoing two-way dialogue between the Royal College and invested groups
  • Organized multiple in-person and virtual CC chairs forums for clear communication, sharing of best practices, and identification of common challenges with implementation

Insights and lessons we learned along the way:
  • Recognize that each university and individual program will have unique contexts that require adaptable implementation
  • Identify and group common elements related to context (e.g., size of programs, institutional policies, and resources) that can help provide direction on ways to adapt CC implementation
  • Be mindful that when new systems of assessment are applied too rigidly it can lead to frustration or overburdened assessment practices
  • Engage invested groups in the process to create a shared vision and build trust

3. Working with finite human and financial resources

Royal College responses to challenges:
  • Provided centralized investment through development of free key resources (e.g., electronic platform, assessment templates, e-modules, and adaptable slide decks for faculty development)
  • Provided a venue to share best practices and locally developed approaches that could be adapted by institutions via the national CBME Leads group and the national CC chairs forum

Insights and lessons we learned along the way:
  • Recognize and plan to accommodate the wide variations in financial and human resources among institutions and programs
  • Expect the need for and support additional faculty time for portfolio review and attendance at meetings as CCs are a new structure
  • Be mindful that individual institutions may feel more comfortable using existing or locally developed resources, which may increase the resource burden to that institution

4. Providing faculty development and ensuring engagement

Royal College responses to challenges:
  • Developed and maintained a curated repository of online faculty development resources (e-modules, workshops, webinars)
  • Created a national CC chairs forum to enable effective networking, innovation sharing, and movement of knowledge to those who need it to improve their CC practices

Insights and lessons we learned along the way:
  • Plan for faculty development activities that involve longitudinal and multimodal offerings aimed at all invested groups (e.g., CC chairs, administrators, faculty, and trainees)
  • Develop faculty development strategies that emphasize interconnectedness and relationship building to help support insights on effective knowledge translation in complex systems

5. Changing the culture of assessment

Royal College responses to challenges:
  • Worked toward shared mental models among invested groups of intended CC implementation
  • Ensured alignment of national institutional policies and accreditation standards to avoid confusing or mixed messages

Insights and lessons we learned along the way:
  • Provide guidance on the policies, processes, and procedures that guide CC functioning
  • Communicate the purpose and flow of CC work to all invested groups to build transparency in the assessment system
  • Acknowledge the dual purpose of assessment for developmental and summative progress purposes while providing rationale and strategies on how to manage this tension
  • Monitor for linear or reductionist approaches to programmatic assessment that can lead to negative assessment behaviors and practices

CBME = competency based medical education; CC = Competence Committee.

1. Balancing fidelity with flexibility. The first challenge relates to the tension between maintaining consistency and alignment with the key principles of CC implementation and procedures, while also embracing the flexibility needed to accommodate the wide variability between institutions and between programs. Examples of this heterogeneity include the size of the program, the distribution of training sites, the context of the clinical work, the program’s readiness for the change to CBME, the sophistication of electronic assessment portfolios, policies at the local institution, the teaching culture, and even fee structures for clinical teachers. For example, small programs may be challenged by a lack of faculty members to achieve a CC quorum or by a limited ability to minimize conflicts of interest, while larger programs may have too many trainees to review or too many committee members to make decisions effectively. Some programs may work closely with their trainees and provide direct observation of clinical work, while others may have workflows that rely more heavily on indirect observation for their WBAs. These contextual differences necessitate local adaptations: there cannot be a one-size-fits-all or overly prescriptive approach [5]. To maintain alignment with CC principles in the face of this variability, the Royal College created a national CBME Leads group with representatives from each university and facilitated regular meetings and a national CC community of practice. These two groups have created venues for sharing best practices and managing policy to aid in the development and organization of CCs and to promote dialogue between invested groups as part of a wider communication strategy.

Standardization of procedures and operations may help provide transparency and consistency, but there still needs to be flexibility in CC practices to ensure they remain focused on the overarching principle of making defensible summative decisions based on a holistic view of trainee progress. When processes or procedures are applied too rigidly, frustration or counterproductive behaviours can result among CCs, trainees, and front-line faculty [53,54]. CCs may then focus much of their cognitive effort on organizing data and reviewing what is easy to collate rather than engaging in reasoning to make sense of all the data. This has led some programs to overvalue quantitative assessments, which can lead to “checkbox behaviours” and a performance orientation or mindset in the learner; to undervalue these assessments by being overly lenient, which has led to learner or faculty disengagement; or to overly penalize struggling trainees, leading to assessment avoidance [55,56,57]. To enable CCs to function in diverse environments and mitigate some of the challenges outlined above, the Royal College created a national technical guide that explicitly states the standards and minimum requirements and also outlines where there is flexibility [37]. For example, while the intention in the original CBD design was to incorporate the Royal College’s national CanMEDS competency framework at the milestone level, CC review practices were hindered by the limitations of the data reporting features available in the existing electronic portfolios. The technical guide acknowledges these challenges and aims to reduce assessor burden by removing the requirement for milestone assessment scales. The evolution in the Royal College’s approach to CBME, as well as the opportunity for contextual modifications while maintaining the principles outlined in Table 1 and articulated in the technical guide, has allowed for some flexibility in how CCs function.

Maintaining fidelity of implementation while supporting local adaptations also requires ongoing program evaluation and institutional accreditation to ensure key elements are still met. The program evaluation committee at the Royal College has provided information related to effective CC implementation, and these evaluation reports were disseminated to implementers and key invested groups [57,58]. Given local adaptations, the responsibility for CC oversight, continuous quality improvement, and peer review is shifting to the institution level. This shift to institutionally centred CC oversight is still in the pilot stages of implementation and evaluation. National benchmarking and aggregate data sharing could help to inform the evolution of policies and procedures, but such a step is associated with a high level of complexity and concerns regarding data safety and privacy across institutions.

2. Resourcing implementation of CCs. When major curricular changes are planned, there must be sufficient resources, and sufficient time, to support implementation. Many of the resources required to implement CBD were underestimated or not identified at the outset, which posed significant challenges [59]. Although the true costs to programs of setting up a CC are unknown, consideration needs to be given to several issues: a realistic time commitment from faculty members needs to be established, expenses associated with program administration need to be clearly identified, and the potential for novel assessment data requirements (e.g., multisource feedback, simulation) needs to be examined. Members of the CC should have protected time for faculty development and for review activities, which requires support from leadership in the form of financial incentives, time, and/or recognition. Institutions may encounter costs related to support for oversight, program evaluation, and unexpected delays or cost overruns of an electronic portfolio platform. To assist with resource requirements, the Royal College provided a centralized electronic portfolio and curated faculty development resources, and undertook a program evaluation for programs requiring this type of support.

3. Supporting faculty for CCs. Clear and concise communication of the purpose, process, and procedures for CCs is critical for member engagement and faculty understanding. The Royal College created a plan for communicating a shared mental model among invested groups to develop trust in the system. Importantly, this work supported knowledge dissemination efforts about the underlying principles (see Table 1) to increase the defensibility and acceptability of group decision-making. Throughout the implementation process the Royal College provided multiple longitudinal and reinforcing offerings for trainee and faculty development. These included workshops as well as a resource directory of recorded webinars, readily adaptable slide sets, teaching videos, infographics, technical guides, and e-modules [60]. The Royal College also developed a national community of practice model for CC chairs (the CC chairs forum) to better understand the barriers and facilitators to implementation and to provide ongoing support to CC chairs. An ongoing dialogue and a longitudinal faculty development strategy enabled the alignment of all processes with standard principles and calibrated expectations for CC conduct. Given the high turnover of CC leaders [61], communities of practice require ongoing support and development.

4. Addressing assessment culture. The ingrained culture of performance assessment has been the most formidable barrier limiting the integrity of CBME and thus of CC implementation. Given their prior experiences and perceptions of assessment, trainees may have a difficult time understanding the dual purpose of assessment as both for and of learning, and they may treat even low-stakes daily assessments as summative [62,63]. This limits the usefulness of feedback provided in WBA, as trainees may avoid or discount any feedback that they perceive as potentially harmful to their progress decisions. Faculty members on the CC may align their roles with a problem identification model and focus their efforts solely on identifying trainees who are struggling [64]. Although unintentional, these behaviours may limit the underlying benefits of CBME and CCs. The use of CCs to enact programmatic assessment therefore requires careful attention and thoughtfulness when communicating with, and providing development for, invested groups. The dual purpose of low-stakes observations informing high-stakes CC decisions has been challenging for both learners and faculty to adopt or accept, as it requires more complex conversations and fundamental trust in the CC’s goal of taking a developmental view of learner progress. Without these key conversations, there is a risk that a hidden curriculum will emerge in which summative components alone are valued. Those implementing CCs need to work to create a culture of assessment in which assessment for learning and a growth mindset are emphasized.

Conclusions

The national implementation of CCs for a new CBME system is complex owing to the expertise and skills required of those delivering and receiving the intervention; the number of groups, settings, and levels targeted; and the permitted level of flexibility of the intervention or its components. With large-scale CC implementation, managing the tension between standardization and flexibility is a fundamental issue that needs to be anticipated and addressed with careful consideration of, and engagement with, invested groups. Anticipating the challenges of implementation in a wide variety of contexts necessitates an approach that uses multiple engagement and communication strategies to allow for local adaptation. Large-scale implementation of CCs does not occur at a single point but is an evolutionary process requiring ongoing support. As such, it is important to consider embedding a program evaluation plan at the outset of implementation. We have presented the Royal College’s CC implementation experience, the challenges that were faced, and some strategies we used to address them. We hope this will be of value to other educators who are considering a large-scale CBME CC implementation.

Funding Statement

The CBD Project was funded by the Royal College of Physicians and Surgeons of Canada and some individual authors received funding from the Royal College either as staff (LS, JK, JRF) or as consultants (AO, DD, RA, AKH, WJC).

Disclaimer

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Royal College of Physicians and Surgeons of Canada (“Royal College”). Information in this article about Competence by Design (“CBD”), its implementation, and related policies and procedures does not necessarily reflect the current standards, policies, and practices of the Royal College. Please refer to the Royal College website for current information.

Competing Interests

Some individual authors received funding from the Royal College either as staff (LS, JK, JRF) or as consultants (AO, DD, RA, AKH, WJC).

References

1. Cruess SR. Professionalism and medicine’s social contract with society. Clin Orthop Relat Res. 2006; 449: 170–6. DOI: 10.1097/01.blo.0000229275.66570.97
2. Duitsman ME, Fluit CR, van der Goot WE, ten Kate-Booij M, de Graaf J, Jaarsma DA. Judging residents’ performance: a qualitative study using grounded theory. BMC Med Educ. 2019; 19(1): 1–9. DOI: 10.1186/s12909-018-1446-1
3. Castanelli DJ, Weller JM, Molloy E, Bearman M. Shadow systems in assessment: how supervisors make progress decisions in practice. Adv Health Sci Educ. 2020; 25: 131–47. DOI: 10.1007/s10459-019-09913-5
4. Lockyer J, Carraccio C, Chan MK, Hart D, Smee S, Touchie C, Holmboe ES, Frank JR, ICBME Collaborators. Core principles of assessment in competency-based medical education. Med Teach. 2017; 39(6): 609–16. DOI: 10.1080/0142159X.2017.1315082
5. Ross S, Hauer KE, Wycliffe-Jones K, Hall AK, Molgaard L, Richardson D, et al., ICBME Collaborators. Key considerations in planning and designing programmatic assessment in competency-based medical education. Med Teach. 2021; 43(7): 758–64. DOI: 10.1080/0142159X.2021.1925099
6. Accreditation Council for Graduate Medical Education. Common Program Requirements Guide. Retrieved February 24, 2023, from https://www.acgme.org/meetings-and-educational-activities/program-directors-guide-to-the-common-program-requirements/.
7. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. New Eng J Med. 2012; 366(11): 1051–6. DOI: 10.1056/NEJMsr1200117
8. Edgar L, Jones MD Jr, Harsy B, Passiment M, Hauer KE. Better decision-making: shared mental models and the clinical competency committee. J Grad Med Educ. 2021; 13(2s): 51–8. DOI: 10.4300/JGME-D-20-00850.1
9. Donato AA, Alweis R, Wenderoth S. Design of a clinical competency committee to maximize formative feedback. J Com Hosp Int Med Perspect. 2016; 6(6): 33533. DOI: 10.3402/jchimp.v6.33533
10. French JC, Dannefer EF, Colbert CY. A systematic approach toward building a fully operational clinical competency committee. J Surg Educ. 2014 Nov 1; 71(6): e22–7. DOI: 10.1016/j.jsurg.2014.04.005
11. Cheung WJ, Wagner N, Frank JR, Oswald A, Van Melle E, Skutovich A, Dalseg TR, Cooke LJ, Hall AK. Implementation of competence committees during the transition to CBME in Canada: a national fidelity-focused evaluation. Med Teach. 2022; 44(7): 781–9. DOI: 10.1080/0142159X.2022.2041191
12. Rich JV, Luhanga U, Fostaty Young S, Wagner N, Dagnone JD, Chamberlain S, McEwen LA. Operationalizing programmatic assessment: the CBME Programmatic Assessment Practice Guidelines. Acad Med. 2022; 97(5): 674–8. DOI: 10.1097/ACM.0000000000004574
13. Soleas E, Dagnone D, Stockley D, Garton K, van Wylick R. Developing academic advisors and competence committees members: a community approach to developing CBME faculty leaders. Can Med Educ J. 2020; 11(1): e46. DOI: 10.36834/cmej.68181
14. Duitsman ME, Fluit CR, van Alfen-van der Velden JA, de Visser M, ten Kate-Booij M, Dolmans DH, Jaarsma DA, de Graaf J. Design and evaluation of a clinical competency committee. Perspect Med Educ. 2019; 8: 1–8. DOI: 10.1007/S40037-018-0490-1
15. Smit MP, de Hoog M, Brackel HJ, Ten Cate O, Gemke RJ. A national process to enhance the validity of entrustment decisions for Dutch pediatric residents. J Grad Med Educ. 2019; 11(4s): 158–64. DOI: 10.4300/JGME-D-18-01006
16. Jeng-Cheng W, Wen-Hsuan H, Chien-Yu C, Shu-Liu G, Chun-Chao C. Competency-based assessment for tailored education – promotion for clinical competency committee. J Taiwan Sim Soc Healthcare. 2020 Dec; 7(2): 35–47.
17. Cheung WJ, Bhanji F, Gofton W, Hall AK, Karpinski J, Richardson D, et al. Programmatic assessment in Canadian specialist residency education: implementation and lessons learned. Perspect Med Educ. 2024; 13(1): 44–55. DOI: 10.5334/pme.956
18. Oudkerk Pool A, Govaerts MJ, Jaarsma DA, Driessen EW. From aggregation to interpretation: how assessors judge complex data in a competency-based portfolio. Adv Health Sci Educ. 2018; 23: 275–87. DOI: 10.1007/s10459-017-9793-y
19. van der Vleuten CP, Schuwirth LW, Driessen EW, Dijkstra J, Tigelaar D, Baartman LK, Van Tartwijk J. A model for programmatic assessment fit for purpose. Med Teach. 2012; 34(3): 205–14. DOI: 10.3109/0142159X.2012.652239
20. Frank JR, Karpinski J, Sherbino J, Snell LS, Atkinson A, Oswald A, et al. Competence By Design: a transformational national system of time-variable competency-based postgraduate medical education. Perspect Med Educ. 2024; 13(1): Forthcoming.
21. Gruppen LD, Ten Cate O, Lingard LA, Teunissen PW, Kogan JR. Enhanced requirements for assessment in a competency-based, time-variable medical education system. Acad Med. 2018; 93(3): S17–21. DOI: 10.1097/ACM.0000000000002066
22. Touchie C, Kinnear B, Schumacher D, Caretta-Weyer H, Hamstra SJ, Hart D, Gruppen L, Ross S, Warm E, Ten Cate O, ICBME Collaborators. On the validity of summative entrustment decisions. Med Teach. 2021; 43(7): 780–7. DOI: 10.1080/0142159X.2021.1925642
23. Thoma B, Ellaway RH, Chan TM. From utopia through dystopia: charting a course for learning analytics in competency-based medical education. Acad Med. 2021; 96(7S): S89–95. DOI: 10.1097/ACM.0000000000004092
24. Michaelsen LK, Watson WE, Black RH. A realistic test of individual versus group consensus decision-making. J Appl Psychol. 1989; 74(5): 834. DOI: 10.1037/0021-9010.74.5.834
25. Kerr NL, Tindale RS. Group performance and decision making. Annu Rev Psychol. 2004; 55: 623–55. DOI: 10.1146/annurev.psych.55.090902.142009
26. Hauer KE, Cate OT, Boscardin CK, Iobst W, Holmboe ES, Chesluk B, et al. Ensuring resident competence: a narrative review of the literature on group decision making to inform the work of clinical competency committees. J Grad Med Educ. 2016; 8(2): 156–64. DOI: 10.4300/JGME-D-15-00144.1
27. Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: a practical guide to Kane’s framework. Med Educ. 2015; 49(6): 560–75. DOI: 10.1111/medu.12678
28. Schuwirth LW, van der Vleuten CP. Programmatic assessment and Kane’s validity perspective. Med Educ. 2012; 46(1): 38–48. DOI: 10.1111/j.1365-2923.2011.04098.x
29. Downing SM. Validity: on the meaningful interpretation of assessment data. Med Educ. 2003; 37(9): 830–7. DOI: 10.1046/j.1365-2923.2003.01594.x
30. Hauer KE, Edgar L, Hogan SO, Kinnear B, Warm E. The science of effective group process: lessons for clinical competency committees. J Grad Med Educ. 2021; 13(2s): 59–64. DOI: 10.4300/JGME-D-20-00827.1
31. Pack R, Lingard L, Watling C, Cristancho S. Beyond summative decision making: illuminating the broader roles of competence committees. Med Educ. 2020; 54(6): 517–27. DOI: 10.1111/medu.14072
32. Royal College of Physicians and Surgeons of Canada. Competence Committee guidelines – terms of reference; 2018. Retrieved 13 February 2023. https://www.royalcollege.ca/content/dam/documents/accreditation/competence-by-design/directory/competence-committees-guidelines-for-terms-of-reference-e.html.
33. Royal College of Physicians and Surgeons of Canada. Competence Committee guideline: process and procedures in decision making; 2018. Retrieved 13 February 2023. https://www.royalcollege.ca/content/dam/documents/accreditation/competence-by-design/directory/competence-committees-process-procedures-e.html.
34. Royal College of Physicians and Surgeons of Canada. Specialty education design. Retrieved 13 February 2023. https://www.royalcollege.ca/ca/en/cbd/cbd-implementation.html.
35. Karpinski J, Stewart J, Oswald A, Dalseg TR, Atkinson A, Frank JR. Competency-based medical education at scale: a road map for transforming national systems of postgraduate medical education. Perspect Med Educ. 2024; 13(1): 24–32. DOI: 10.5334/pme.957
36. Karpinski J, Frank JR. The role of EPAs in creating a national system of time-variable competency-based medical education. Acad Med. 2021; 96(7S): S36–41. DOI: 10.1097/ACM.0000000000004087
37. Royal College of Physicians and Surgeons of Canada. Guide 3: Competence Committees. CBD technical guide series; 2020. Retrieved 13 February 2023. https://www.royalcollege.ca/content/dam/documents/accreditation/competence-by-design/directory/cbd-technical-guide-3-comp-committees-e.pdf.
38. Hauer KE, Edgar L, Hogan SO, Kinnear B, Warm E. The science of effective group process: lessons for clinical competency committees. J Grad Med Educ. 2021; 13(2s): 59–64. DOI: 10.4300/JGME-D-20-00827.1
39. Chan T, Oswald A, Hauer KE, Caretta-Weyer HA, Nousiainen MT, Cheung WJ, ICBME Collaborators. Diagnosing conflict: conflicting data, interpersonal conflict, and conflicts of interest in clinical competency committees. Med Teach. 2021; 43(7): 765–73. DOI: 10.1080/0142159X.2021.1925101
40. Schuwirth LW, Van der Vleuten CP. Programmatic assessment: from assessment of learning to assessment for learning. Med Teach. 2011; 33(6): 478–85. DOI: 10.3109/0142159X.2011.565828
41. Van Der Vleuten CP, Schuwirth LW, Driessen EW, Govaerts MJ, Heeneman S. Twelve tips for programmatic assessment. Med Teach. 2015; 37(7): 641–6. DOI: 10.3109/0142159X.2014.973388
42. Dickey CC, Thomas C, Feroze U, Nakshabandi F, Cannon B. Cognitive demands and bias: challenges facing clinical competency committees. J Grad Med Educ. 2017; 9(2): 162–4. DOI: 10.4300/JGME-D-16-00411.1
43. Richardson D, Landreville JM, Trier J, Cheung WJ, Bhanji F, Hall AK, et al. Coaching in Competence by Design: coaching in the moment and coaching over time. Perspect Med Educ. 2024; 13(1): 33–43. DOI: 10.5334/pme.959
44. Deiorio NM, Carney PA, Kahl LE, Bonura EM, Juve AM. Coaching: a new model for academic and career achievement. Med Educ Online. 2016; 21(1): 33480. DOI: 10.3402/meo.v21.33480
45. Royal College of Physicians and Surgeons of Canada. Status recommendations. Retrieved 13 February 2023. https://www.royalcollege.ca/ca/en/cbd/impact-cbd/competence-committees/competence-committees-status-recommendations.html.
46. Swing SR. The ACGME outcome project: retrospective and prospective. Med Teach. 2007; 29(7): 648–54. DOI: 10.1080/01421590701392903
47. General Medical Council. Standards and outcomes. Retrieved 13 February 2023. https://www.gmc-uk.org/education/standards-guidance-and-curricula/standards-and-outcomes.
48. Canadian Excellence in Residency Accreditation. About CanERA. Retrieved 13 February 2023. https://www.canera.ca/canrac/about-e.
49. Frank JR, Taber S, van Zanten M, Scheele F, Blouin D, International Health Professions Accreditation Outcomes Consortium. The role of accreditation in 21st century health professions education: report of an International Consensus Group. BMC Med Educ. 2020; 20: 1–9. DOI: 10.1186/s12909-020-02121-5
50. Richardson D, Kinnear B, Hauer KE, Turner TL, Warm EJ, Hall AK, et al., ICBME Collaborators. Growth mindset in competency-based medical education. Med Teach. 2021; 43(7): 751–7. DOI: 10.1080/0142159X.2021.1928036
51. Holmboe ES, Salzman DH, Goldstein JL, McGaghie WC. Mastery learning, milestones, and entrustable professional activities. In: Comprehensive Healthcare Simulation: Mastery Learning in Health Professions Education. Springer; 2020. p. 311–30. DOI: 10.1007/978-3-030-34811-3_17
52. Thoma B, Caretta-Weyer H, Schumacher DJ, Warm E, Hall AK, Hamstra SJ, et al., ICBME Collaborators. Becoming a deliberately developmental organization: using competency based assessment data for organizational development. Med Teach. 2021; 43(7): 801–9. DOI: 10.1080/0142159X.2021.1925100
53. Branfield Day L, Miles A, Ginsburg S, Melvin L. Resident perceptions of assessment and feedback in competency-based medical education: a focus group study of one internal medicine residency program. Acad Med. 2020; 95(11): 1712–7. DOI: 10.1097/ACM.0000000000003315
54. Martin L, Sibbald M, Brandt Vegas D, Russell D, Govaerts M. The impact of entrustment assessments on feedback and learning: trainee perspectives. Med Educ. 2020; 54(4): 328–36. DOI: 10.1111/medu.14047
55. Mann S, Truelove AH, Beesley T, Howden S, Egan R. Resident perceptions of competency-based medical education. Can Med Educ J. 2020; 11(5): e31. DOI: 10.36834/cmej.67958
56. Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2019; 53(1): 76–85. DOI: 10.1111/medu.13645
57. Hall AK, Oswald A, Frank JR, Dalseg T, Cheung WJ, Cooke L, et al. Evaluating Competence by Design as a large system change initiative: readiness, fidelity and outcomes. Perspect Med Educ. 2024; 13(1): 95–107. DOI: 10.5334/pme.962
58. Royal College of Physicians and Surgeons of Canada. Is Competence by Design working? Retrieved 13 February 2023. https://www.royalcollege.ca/rcsite/cbd/cbd-program-evaluation-e.
59. Royal College of Physicians and Surgeons of Canada. Competence by Design (CBD): cost analysis; 2019. Retrieved 13 February 2023. https://www.royalcollege.ca/content/dam/documents/accreditation/competence-by-design/non-resource-documents/cbd-costing-analysis-executive-version-e.pdf.
60. Royal College of Physicians and Surgeons of Canada. Competence by Design: resource directory. Retrieved 13 February 2023. https://www.royalcollege.ca/ca/en/cbd/cbd-tools-resources.html.
61. de Carvalho-Filho MA, Tio RA, Steinert Y. Twelve tips for implementing a community of practice for faculty development. Med Teach. 2020; 42(2): 143–9. DOI: 10.1080/0142159X.2018.1552782
62. Dijksterhuis MGK, Schuwirth LWT, Braat DDM, Teunissen PW, Scheele F. A qualitative study on trainees’ and supervisors’ perceptions of assessment for learning in postgraduate medical education. Med Teach. 2013; 35(8): e1396–402. DOI: 10.3109/0142159X.2012.756576
63. Ott MC, Pack R, Cristancho S, Chin M, Van Koughnett JA, Ott M. “The most crushing thing”: understanding resident assessment burden in a competency-based curriculum. J Grad Med Educ. 2022; 14(5): 583–92. DOI: 10.4300/JGME-D-22-00050.1
64. Hauer KE, Chesluk B, Iobst W, Holmboe E, Baron RB, Boscardin CK, Ten Cate O, O’Sullivan PS. Reviewing residents’ competence: a qualitative study of the role of clinical competency committees in performance assessment. Acad Med. 2015; 90(8): 1084–92. DOI: 10.1097/ACM.0000000000000736
