Journal of Graduate Medical Education. 2023 Aug;15(4):463-468. doi:10.4300/JGME-D-22-00756.1

Features of Effective Clinical Competency Committees

Kathleen Rowland, Deborah Edberg, Lauren Anderson, Katherine Wright
PMCID: PMC10449336; PMID: 37637335

Abstract

Background

The Clinical Competency Committee (CCC) provides accountability to the general public that physicians completing a training program have achieved competence. CCC processes and features that best identify resident outcomes along a developmental spectrum are not well described.

Objective

This study sought to describe CCC features associated with effective and efficient CCC performance.

Methods

The study was conducted as part of the 2022 Council of Academic Family Medicine Educational Research Alliance survey of family medicine residency program directors. The survey assessed CCC methods, policies, faculty development, structure, and overall CCC time required. The outcomes were identification of residents along a spectrum of development, from failing to exceeding expectations. Ordinal logistic regressions were used to explore the relationship between CCC characteristics and CCC outcomes.

Results

The response rate was 43.3% (291 of 672). Eighty-nine percent (258 of 291) of program directors reported their CCC is successful in identifying residents not meeting expectations; 69.3% (201 of 290) agree their CCC identifies residents who are exceeding expectations. Programs with written policies for synthesizing data (OR=2.53; 95% CI 1.22-5.22; P=.012) and written policies for resident feedback (OR=19.91; 95% CI 3.72-106.44; P<.001) were more likely to report successfully identifying residents below expectations. Programs whose members spent fewer than 3 hours per 6-month interval on CCC meetings were less likely to report being able to identify failing residents (OR=0.37; 95% CI 0.19-0.72; P=.004).

Conclusions

This survey of family medicine program directors suggests that formal policies, faculty development, and adequate time for CCC faculty are associated with an effective CCC, especially if goals beyond “identifying failure” are desired.

Introduction

The Clinical Competency Committee (CCC) is an Accreditation Council for Graduate Medical Education (ACGME) requirement for accreditation and serves a complex set of functions at the system, program, faculty, and resident levels.1,2 It is expected to identify failing, struggling, and advanced residents to tailor educational opportunities to meet their needs, and to synthesize datapoints to assign milestones.3

A 2015 study found that most CCCs used a problem identification model to complete their work, with fewer using a developmental model (Table 1).4 The problem identification model assumed the residents would become competent during training and focused on identifying struggling residents. The developmental model focused on identifying stages of competence for each resident. In this model, residents were assumed to have a range of skills, and CCC processes were better defined, more transparent, and focused on feedback to residents. The extent to which this model has been incorporated into graduate medical education is not well studied.

Table 1.

Problem Identification vs Developmental Model of CCCs

Problem identification model:
- Focus on identifying residents below standards
- Most time spent on a few residents
- Assumption that most residents are competent
- CCC processes are informal

Developmental model:
- Focus on identifying the level of competence for each resident
- Time spent on each resident
- Assumption that residents gain competence at different rates
- CCC processes are more formalized
- Faculty development is formalized
- Allows for individualized education plans for all residents
- CCC processes are transparent

Abbreviation: CCC, Clinical Competency Committee.

Note: Adapted from Hauer KE, Chesluk B, Iobst W, et al. Reviewing residents' competence: a qualitative study of the role of clinical competency committees in performance assessment. Acad Med. 2015;90(8):1084-1092. doi:10.1097/ACM.0000000000000736

Our study sought to explore whether family medicine program directors' reports of features consistent with a developmental approach to the CCC correlated with increased identification of residents along a spectrum of development. Specifically, we hypothesized that CCCs with specific policies and procedures for the acquisition and synthesis of data, as well as standards for faculty development of CCC members, may correlate with identification of residents who are struggling but not failing, as well as residents who are excelling.

KEY POINTS

What Is Known

Clinical Competency Committees (CCCs) have a high-stakes role in ensuring residents graduate as safe physicians; however, given the current lack of consistent best practices, they risk inefficiency and an overfocus on simply identifying residents who are struggling.

What Is New

This survey of family medicine program directors found that certain features, such as the presence of formal policies, were associated with improved ability to both identify struggling residents and those exceeding expectations.

Bottom Line

Program directors interested in improving the efficiency and nuance of their CCC outcomes could consider adding structured faculty development, formal policies, and adequate time for their CCC members.

Methods

Participants

Between April 13 and May 16, 2022, family medicine program directors (N=672) who had not previously opted out were invited to participate in the online Council of Academic Family Medicine Educational Research Alliance (CERA) program director survey.5

Survey Development

Items were developed by members of the research team after a literature review (see online supplementary data for survey). The CERA steering committee independently vetted the questions based on evidence presented, and a sample of family medicine educators pretested the questions.

The items were developed to assess factors associated with the program director’s determination of their CCC’s ability to identify residents who are struggling, excelling, or at risk of failing. Items asked about data management, formal and informal policies, faculty development for CCC members, structure, and time.

Analysis

Survey items were summarized using descriptive statistics. Ordinal logistic regressions were used to explore the relationship between various CCC characteristics and CCC outcomes. These models estimate proportional odds ratios (ORs) for each predictor (CCC characteristics) when shifting to higher levels of CCC efficiency/outcomes. All statistical analyses were performed using SPSS for Windows Version 28 (IBM Corp, Armonk, NY). Statistical significance was assessed using an alpha level of .05.
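The published analysis was performed in SPSS. For readers who want to see the general approach in code, the following is a minimal sketch of a proportional-odds (ordinal logistic) regression in Python with statsmodels; the file name, column names, and predictor coding are hypothetical illustrations, since the study's dataset and variable names are not public.

```python
# A minimal sketch of the proportional-odds (ordinal logistic) models described
# above, using Python/statsmodels instead of SPSS. The CSV file, column names,
# and predictor coding are hypothetical; the study's dataset is not public.
import numpy as np
import pandas as pd
from pandas.api.types import CategoricalDtype
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("cera_ccc_survey.csv")  # hypothetical file

# Ordinal outcome: 5-point Likert agreement that the CCC identifies failing residents.
likert = CategoricalDtype(
    ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"],
    ordered=True,
)
y = df["identifies_failing"].astype(likert)

# Binary CCC characteristics as predictors (1 = present, 0 = absent).
X = df[["written_feedback_policy", "all_members_trained", "under_3_hours"]]

model = OrderedModel(y, X, distr="logit")
result = model.fit(method="bfgs", disp=False)

# Proportional odds ratios and 95% CIs: exponentiate the slope coefficients.
# OrderedModel lists the slopes first, followed by the threshold parameters.
k = X.shape[1]
est = pd.concat([result.params[:k], result.conf_int().iloc[:k]], axis=1)
est.columns = ["coef", "ci_low", "ci_high"]
print(np.exp(est).rename(columns={"coef": "OR"}))
```

As in the published analysis, each exponentiated coefficient is a proportional odds ratio: the multiplicative change in the odds of a response in a higher outcome category when the characteristic is present.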

The study was approved by the Institutional Review Board of the American Academy of Family Physicians.

Results

The overall response rate was 44.3% (298 of 672); 43.3% (291 of 672) went on to answer the first item about their CCC. Table 2 provides demographic and program characteristic data for respondents.

Table 2.

Respondent Characteristics

Characteristic: n (%), N=291

Type of residency program:
- Community-based, university-affiliated: 164 (56.4)
- Community-based, non-affiliated: 78 (26.8)
- University-based: 39 (13.4)
- Military: 5 (1.7)
- Other: 5 (1.7)

Approximate size of the community in which the program is located:
- Less than 30 000: 31 (10.7)
- 30 000 to 74 999: 34 (11.7)
- 75 000 to 149 999: 70 (24.1)
- 150 000 to 499 999: 68 (23.4)
- 500 000 to 1 million: 37 (12.7)
- More than 1 million: 48 (16.5)
- Missing: 3 (1.0)

Total resident complement as of July 2021:
- <19: 108 (37.1)
- 19-31: 135 (46.4)
- >31: 47 (16.2)
- Missing: 1 (0.3)

Medical degree:
- MD: 237 (81.4)
- DO: 54 (18.6)

Years in current program director role, mean (SD): 6.01 (5.67)

Total years served as a program director, mean (SD): 6.83 (6.12)

Gender:
- Female/woman: 146 (50.2)
- Male/man: 137 (47.1)
- Genderqueer/gender non-conforming: 0 (0.0)
- Non-binary: 0 (0.0)
- Prefer to self-describe: 0 (0.0)
- Choose not to disclose: 7 (2.4)
- Missing: 1 (0.3)

Eighty-nine percent of respondents (258 of 291) strongly agree or agree that their CCC is successful at identifying residents not meeting expectations. A similar proportion strongly agree or agree that their CCC identifies residents who are below expectations but not failing (88.7%, 258 of 291). Fewer (69.1%, 201 of 291) strongly agree or agree that their CCC identifies residents who are exceeding expectations and may benefit from individualized education to achieve their full potential (Table 3). The full analysis is available in the online supplementary data.

Table 3.

Program Director Survey Response Frequencies

Survey prompt: n (%), N=291

My program's CCC is successful at identifying residents who are failing.
- Strongly disagree: 5 (1.7)
- Disagree: 4 (1.4)
- Neutral: 24 (8.2)
- Agree: 134 (46.0)
- Strongly agree: 124 (42.6)

My program's CCC is successful at identifying residents who require remediation in one or more areas but are not failing.
- Strongly disagree: 5 (1.7)
- Disagree: 7 (2.4)
- Neutral: 21 (7.2)
- Agree: 145 (49.8)
- Strongly agree: 113 (38.8)

My program's CCC is successful at identifying residents who are exceeding expectations in training and may benefit from individualized education to achieve their potential.
- Strongly disagree: 2 (0.7)
- Disagree: 30 (10.3)
- Neutral: 57 (19.6)
- Agree: 133 (45.7)
- Strongly agree: 68 (23.4)
- Missing: 1 (0.3)

Do CCC members receive formal faculty development or training on CCC best practices? For example, this training might include the expectations of the CCC or how to synthesize assessment data and might occur through STFM, RLS, the ACGME, or your GME office.
- Yes, all members receive formal CCC training: 59 (20.3)
- Yes, some members receive formal CCC training: 101 (34.7)
- Only the program director receives formal CCC training: 25 (8.6)
- Only one member (other than the program director) receives formal CCC training: 19 (6.5)
- No one has formal CCC training: 87 (29.9)

Is there a formal policy describing a standardized way for residents in your program to receive feedback generated from the CCC?
- Yes, we have a written policy describing this process: 140 (48.1)
- Yes, we have a process we always or usually follow but no written policy: 132 (45.1)
- No, we have no usual process, policy, or procedure, but residents usually get feedback: 12 (4.1)
- No, we have no usual process, policy, or procedure, and feedback to residents can be hit or miss: 5 (1.7)
- No, residents do not usually receive feedback after a CCC meeting: 0 (0.0)
- Missing: 2 (0.7)

Which of the following best describes the data considered in your CCC meetings?
- We use assessment data from multiple sources, such as rotation evaluation scores and written comments, procedure logs, etc: 276 (94.8)
- We mostly use data from one source, such as rotation evaluations, and consider other data sources as well: 14 (4.8)
- We rely heavily on data from one source, such as rotation evaluations: 0 (0.0)
- Something else: 0 (0.0)
- Missing: 1 (0.3)

Does your CCC have a policy or procedure for considering data from multiple sources? For example, does your CCC have a way of reviewing core faculty and non-core faculty evaluations differently, or stating they should be considered the same way?
- Yes, we have a formal written policy or procedure for how to include different kinds of data: 70 (24.1)
- Yes, we have a procedure that we usually carry out, but it is not formal or written: 174 (59.8)
- No, we do not have a usual way of integrating data, or it may vary from meeting to meeting or resident to resident: 45 (15.5)
- Missing: 2 (0.7)

For each 6-month milestone reporting interval, how much time does a typical CCC member spend on your CCC meetings, including time spent reviewing materials ahead of time, time in the meeting, and time spent completing any follow-up work afterward?
- <3 hours: 54 (18.5)
- 3-<5 hours: 91 (31.3)
- 5-<7 hours: 57 (19.6)
- >7 hours: 87 (29.9)
- Missing: 2 (0.7)

How efficient do you think your CCC is?
- Very inefficient: 10 (3.4)
- Inefficient: 50 (17.2)
- Efficient: 198 (68.0)
- Very efficient: 32 (11.0)
- Missing: 1 (0.3)

Which one of these scenarios best describes how your CCC functions?
- Individual CCC members review one or more assigned resident files prior to the meeting and present their milestone placement: 129 (44.3)
- Most milestone rankings are generated automatically from the information and evaluations in the resident management system: 32 (11.0)
- The CCC works in smaller committee format, where groups of CCC members discuss assigned residents and make recommendations: 12 (4.1)
- The whole CCC meets together and assesses each resident file one at a time at the meeting, discussing each milestone: 95 (32.6)
- Some other format: 21 (7.2)
- Missing: 2 (0.7)

Abbreviations: CCC, Clinical Competency Committee; STFM, Society of Teachers of Family Medicine; RLS, Residency Leadership Summit; ACGME, Accreditation Council for Graduate Medical Education; GME, graduate medical education.

Identifying Failing Residents

Programs were more likely to report that their CCC successfully identifies failing residents when all CCC members receive formal faculty development about the CCC (OR=3.62; 95% CI 1.02-12.90; P=.047). CCCs whose members spent less than 3 hours per 6-month milestone reporting interval on CCC meetings were less likely to report being able to identify failing residents (OR=0.37; 95% CI 0.19-0.72; P=.004).

Identifying Residents Requiring Remediation

Programs with a written policy describing a standardized way for residents to receive feedback generated from the CCC were 14 times more likely to successfully identify residents who require remediation but who are not failing (OR=14.14; 95% CI 2.64-75.63; P=.002). Use of assessment data from multiple sources was also associated with greater success (OR=4.3; 95% CI 1.52-12.21; P=.006), compared to relying mostly on a single source.

Identifying Residents Exceeding Expectations

Programs with a formal written policy or procedure for how to include different kinds of data were 5.3 times more likely to report successfully identifying residents exceeding expectations (OR=5.34; 95% CI 2.62-10.90; P<.001). Presence of a formal policy for residents to receive feedback was also associated with greater success in identifying residents exceeding expectations (OR=12.65; 95% CI 2.42-66.16; P=.003).
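For readers less familiar with this reporting convention, an odds ratio and its 95% CI come from exponentiating the model coefficient and its confidence bounds. As a worked illustration (not a reanalysis), the sketch below back-derives the log-odds coefficient and implied standard error from one published result, the written feedback policy estimate in the failing-residents model (OR=19.91; 95% CI 3.72-106.44), and reproduces the interval up to rounding.

```python
# Worked illustration of how a proportional odds ratio and its 95% CI relate
# to the underlying model coefficient. The inputs reproduce one published
# result (OR=19.91; 95% CI 3.72-106.44); beta and se are back-derived here
# from those published values, not re-estimated from study data.
import math

beta = (math.log(3.72) + math.log(106.44)) / 2  # CI midpoint on the log-odds scale, ~2.99
se = (math.log(106.44) - math.log(3.72)) / (2 * 1.96)  # implied standard error, ~0.86

odds_ratio = math.exp(beta)           # ~19.90, matching the published 19.91 up to rounding
ci_low = math.exp(beta - 1.96 * se)   # ~3.72
ci_high = math.exp(beta + 1.96 * se)  # ~106.44
print(f"OR={odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

The same exponentiation underlies every odds ratio reported in this section; very wide intervals such as this one typically reflect small cell counts in the less common response categories.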

Discussion

A model that places residents along a spectrum, rather than making a binary “failing/not failing” distinction, is more compatible with the competency-based milestone approach to resident development. This competency-based developmental model4 is not only more compatible with most program curricula but also adheres more closely to ACGME requirements.4,6

Our study found that written CCC policies correlated with better CCC operations and better resident feedback. Formal policies may provide accountability and clear expectations for communication.

An effective CCC requires substantial faculty time. Programs whose members spent fewer than 3 hours per 6-month reporting interval on CCC meetings were less likely to report being able to identify failing residents. Previous literature suggested that faculty who spent more time reviewing resident files, and who were responsible for providing feedback to residents, were more likely to assign lower ratings.1 One previous study found that only 10% of CCC members had protected time for CCC work, even though the annual time requirement exceeded 9 hours for nearly 40% of programs.7 Despite this outlay of time, the typical resident was discussed for only 10 minutes. In this and other studies, inadequate time investment was associated with worse outcomes. Without adequate time for this complex task, CCCs may default to identifying only residents at risk of failing, rather than supporting the development of all residents.4

Our study suggests faculty development is associated with better identification of residents who are not meeting expectations. Additional faculty development on the role and processes of the CCC is another investment of time that may be needed to synthesize data adequately and provide effective feedback. This is consistent with previous literature.8-10

Limitations

The response rate of the survey was 44.3%, and we do not have information on nonresponders. Program director self-report may not reflect the opinions of the CCC chair or other committee members, and it may be subject to recall bias and social desirability bias. The cross-sectional design provides insight into only a single point in time. Because most programs reported being able to identify residents who were failing or struggling, the pool of programs reporting otherwise was small, limiting those analyses. This study was limited to family medicine program directors; however, CCC requirements are common to all ACGME-accredited programs,2 and we expect many of the findings are relevant to CCCs in other specialties as well.

Conclusions

Formal written policies for CCC procedures and increasing faculty time for CCC activities appear to be associated with a developmental rather than a problem identification approach to CCC activities.


Funding Statement

Funding: The authors report no external funding source for this study.

Footnotes

Conflict of interest: The authors declare they have no competing interests.

References

1. Schumacher DJ, King B, Barnes MM, et al. Influence of clinical competency committee review process on summative resident assessment decisions. J Grad Med Educ. 2018;10(4):429-437. doi:10.4300/JGME-D-17-00762.1
2. Accreditation Council for Graduate Medical Education. Common program requirements (residency). Published 2022. Accessed May 5, 2023. https://www.acgme.org/globalassets/PFAssets/ProgramRequirements/CPRResidency_2022v2.pdf
3. Andolsek K, Padmore J, Hauer K, Ekpenyong A, Edgar L, Holmboe E. Clinical Competency Committees: A Guidebook for Programs. Accreditation Council for Graduate Medical Education; 2020.
4. Hauer KE, Chesluk B, Iobst W, et al. Reviewing residents' competence: a qualitative study of the role of clinical competency committees in performance assessment. Acad Med. 2015;90(8):1084-1092. doi:10.1097/ACM.0000000000000736
5. Seehusen DA, Mainous AG 3rd, Chessman AW. Creating a centralized infrastructure to facilitate medical education research. Ann Fam Med. 2018;16(3):257-260. doi:10.1370/afm.2228
6. Colbert CY, Dannefer EF, French JC. Clinical competency committees and assessment: changing the conversation in graduate medical education. J Grad Med Educ. 2015;7(2):162-165. doi:10.4300/JGME-D-14-00448.1
7. Bartlett K, DiPace J, Vining M. Quantifying faculty time commitment for clinical competency committee members across programs (research abstract). Acad Pediatr. 2017;17(5):e25-e26. doi:10.1016/j.acap.2017.04.084
8. Turner J, Wimberly Y, Andolsek KM. Creating a high-quality faculty orientation and ongoing member development curriculum for the clinical competency committee. J Grad Med Educ. 2021;13(2):65-69. doi:10.4300/JGME-D-20-00996.1
9. Ekpenyong A, Padmore JS, Hauer KE. The purpose, structure, and process of clinical competency committees: guidance for members and program directors. J Grad Med Educ. 2021;13(2):45-50. doi:10.4300/JGME-D-20-00841.1
10. Allen S. Learning from the implementation of milestones. Fam Med. 2021;53(7):593-594. doi:10.22454/FamMed.2021.825433
