Abstract
Background
Since 2013, US residency programs have used the competency-based framework of the Milestones to report resident progress and to provide feedback to residents. The implementation of Milestones-based assessments, clinical competency committee (CCC) meetings, and processes for providing feedback varies among programs and warrants systematic examination across specialties.
Objective
We sought to determine how varying assessment, CCC, and feedback implementation strategies result in different outcomes in resource expenditure and stakeholder engagement, and to explore the contextual forces that moderate these outcomes.
Methods
From 2017 to 2018, interviews were conducted with program directors, CCC chairs, and residents in emergency medicine (EM), internal medicine (IM), pediatrics, and family medicine (FM), querying their experiences with Milestone processes in their respective programs. Interview transcripts were coded using template analysis, with the initial template derived from previous research. The research team conducted iterative consensus meetings to ensure that the evolving template accurately represented phenomena described by interviewees.
Results
Forty-four individuals were interviewed across 16 programs (5 EM, 4 IM, 5 pediatrics, 3 FM). We identified 3 stages of Milestone-process implementation, including a resource-intensive early stage, an increasingly efficient transition stage, and a final stage for fine-tuning.
Conclusions
Residency program leaders can use these findings to place their programs along an implementation continuum and gain an understanding of the strategies that have enabled their peers to progress to improved efficiency and increased resident and faculty engagement.
Limitations
The results may be influenced by nonresponse bias and rely on the perceptions and experiences of the interview participants, potentially missing additional themes.
Bottom Line
The implementation of the Milestones takes place along a continuum, and programs can build resident and faculty engagement and enhance efficiency by improving their processes deliberately and iteratively.
Introduction
In 2001, the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties (ABMS) published the 6 general competency domains for the evaluation of resident performance: patient care, medical knowledge, practice-based learning and improvement, systems-based practice, interpersonal and communication skills, and professionalism. In 2012, the ACGME introduced the Next Accreditation System (NAS).1 An integral component of the NAS was the introduction of a competency-based, developmental framework called the Milestones to inform resident and fellow assessment and progression. Milestone sets articulated specialty-specific subcompetencies within each of the 6 competency domains.2 These Milestones were implemented nationally in 2013 for emergency medicine (EM), internal medicine (IM), and pediatrics.3 The family medicine (FM) Milestones were implemented in 2014.4
The ACGME also required all programs to create clinical competency committees (CCCs) as part of the NAS. CCCs consist of 3 or more members of the faculty and convene at least twice per year to review resident assessments and make recommendations to program directors on resident subcompetency Milestone ratings and resident progress. Programs are required to share documented performance with residents semiannually. This may occur as an in-person conversation between a faculty educational leader and the resident or fellow. Finally, program directors are required to submit the Milestone ratings semiannually to the ACGME.
The use of the Milestones is part of a larger movement toward competency-based medical education (CBME), with a specific goal of facilitating a transformation from a process-bound system of graduate medical education (GME) accreditation to one that focuses on educational and clinical outcomes. This focus on outcomes serves to prepare physicians for a changing health care system, emphasizing the functional capabilities of graduating residents and ensuring they match patient and health care system needs.
Challenges to the full realization of this vision have been identified in the literature, including concerns about assessment, CCC processes, and effective use of Milestones data for meaningful feedback to residents. While the Milestones define developmental progressions of residents in narrative terms, they are often used as a simple numeric scale in practice, leading to criticisms of reductionism.5 This has led to calls for greater faculty development efforts to make better use of the narrative descriptors, both for reflective assessment and feedback to residents.6 Another concern involves the methods employed by CCCs to inform their judgments about resident progression during their semiannual meetings.7 Residency programs vary widely in how they provide feedback to residents and in whether the Milestones are used to provide this feedback at all.8
While Milestone reporting has been required since 2014, effective practices for implementing and using Milestones across specialties are not fully known. One previous qualitative study exploring the early use of Milestones in neurological surgery programs found that Milestones helped to build a shared understanding of the competencies among faculty and that CCCs were helpful in assessing residents, while resident and faculty involvement in the Milestones was variable.9 Data collected from focus groups suggested Milestones are most effective when residents are introduced to their specialty set early in training, perform self-assessments prior to CCC meetings, compare their self-assessments with CCC feedback in person with a trusted faculty member, and create an individual learning plan.10 A deeper understanding of Milestone implementation and impact, especially in the large specialties of EM, IM, pediatrics, and FM, is needed to better inform effective practice. Lessons from these 4 specialties can guide implementation and effective practice with Milestones as well as help inform the Milestones 2.0 revision and implementation process.11
In this study, Milestone implementation was defined as the set of program-specific processes that inform the professional development of medical residents, with the goal of improving process efficiency and the engagement and knowledge of faculty and residents. For this qualitative interview study, we were interested in finding themes, processes, moderating forces, and outcomes attributable to the Milestone implementation efforts of participating programs across the 4 specialties.
Methods
Study Design and Approach
This study used transcript data from interviews with program directors (PDs), CCC chairs, and residents in EM, IM, pediatrics, and FM to determine how strategies and environmental contexts affected programmatic assessment, CCC meetings, and feedback processes. Using template analysis to build on previous insights across these 3 Milestone domains, we sought to explore what works (or not), for whom, in what circumstances, and why.12 Drawing from previous Milestone-based research and experience, investigators with qualitative content and methodological expertise from the 4 specialties and ACGME research staff created interview protocols and an initial thematic template to guide the coding of transcripts.9,10,13 The American Institutes for Research Institutional Review Board approved this study after an expedited review.
Setting and Subjects
Using purposeful sampling based on program size, geographic location, and type of sponsoring institution to capture programs with varying Milestone experiences, we identified 63 programs to invite to participate in the study. In 2017 and 2018, expert panel members from the 4 specialties sent recruitment emails to these program directors, with an a priori goal of enrolling 24 programs (6 from each of the 4 specialties). A study investigator then contacted the 63 program directors to assess their interest in participating in the study. Upon consenting to participate, PDs provided study staff with contact information for CCC chairs and program coordinators. Program coordinators, in turn, provided contact information for second- and third-year residents who could provide thoughtful responses to interview questions. All subjects who ultimately participated in the interviews verbally consented to participate.
Data Collection
For each participating program, either the program director or the CCC chair was asked to complete a pre-interview survey (available as online supplementary data), querying faculty development and CCC processes within the program. An interview guide was developed and tested by the lead author (N.A.Y.) and contained questions regarding the implementation of the Milestones, experiences using them, and resident and faculty perceptions. All interviews were conducted by telephone. Most program director and CCC chair interviews were conducted individually, but some opted to be interviewed together. A maximum of 2 residents were interviewed per program. When more than one resident in a program was participating, they were given the option to be interviewed together or separately. All interviews were recorded and transcribed. Transcriptions were edited for accuracy, and all identifiable information was anonymized prior to any coding. Any program for which at least one interview was conducted was included in the study.
Template Development
The interview transcripts were analyzed using template analysis, a form of thematic analysis that emphasizes the use of hierarchical coding by means of a coding template to represent themes identified in the data.14 The online supplementary data includes more information on the template analysis. The initial thematic template was derived from a previous qualitative study that sought to determine the effect of the ACGME Milestones on the assessment of neurological surgery residents and focus groups conducted by ACGME staff at educational meetings.9,10,13
A detailed description of the evolution of the template is included as online supplementary data. The panel held iterative consensus discussions about the template, and the second iteration of the template consisted of 3 overarching domains: processes specific to the CCCs, assessment and resident feedback, and the reported effects of Milestone implementation on the faculty, residents, and culture of residency programs. Initially, all members of the panel employed the template to code 2 interview transcripts from the same program. They were instructed to evaluate the template and its usability for accurately representing themes and phenomena reported by the interviewees. Specifically, coders were limited to affirming or supporting existing themes, inserting new themes, deleting or changing the scope of themes, and modifying the hierarchical classification. Coded transcripts and annotated templates were collected and collated, and the template was revised (third iteration) according to the recommendations of the panel.
For feasibility and timeliness of continued textual analysis, 2 coders (N.A.Y., E.C.B.) from the original 8 proceeded with coding additional interviews. The additional transcript coding and template development led to more template iterations. As a result of the second round of coding, a separate codebook was developed to account for the different resident perspectives, emphasizing their understanding of the subcompetencies and their perceptions of the validity and scope of their assessments. The penultimate template then structured the themes in terms of individual program implementation processes, highlighting the variation and commonalities across programs in terms of their assessment, CCC meeting, and feedback processes. The final template integrated the perceptions and attitudes of residents, faculty, and program leadership into these 3 processes, treating these stakeholder perspectives as moderating forces that affected both the processes themselves as well as the resultant outcomes.
In finalizing the template, members of the panel were consulted to ensure that the themes and descriptions were representative of the interview data. This final approval process included representatives from each of the 4 specialties.
Results
In total, 44 individuals from 16 programs participated in the study. Participants were from 5 EM, 3 FM, 4 IM, and 5 pediatrics programs. Details of the participating programs and the interviewees within each program are shown in Table 1. Fifteen of the 16 programs completed the pre-interview survey. Table 2 shows each program's responses to selected survey items.
Table 1.
Program Characteristics of Included Programs
| Program Code | Institution Categorization | Program Size Category (Based on Specialty) | Geographic Region | Participants Interviewed | Year of Interviews |
| Emergency Medicine (Milestones Implemented in June 2013) | |||||
| EM 1 (3-year program) | General/teaching hospital^a | Very small (fewer than 20 residents) | Northeast | PD, CCC chair, PGY-2 resident | 2017 |
| EM 2 (3-year program) | Academic medical center^b | Medium (30–39 residents) | West | PD, CCC chair, PGY-2 resident, PGY-3 resident | 2017 |
| EM 3 (3-year program) | Academic medical center | Large (more than 40 residents) | South | PD | 2018 |
| EM 4 (4-year program) | Academic medical center | Large (more than 40 residents) | Midwest | PD | 2018 |
| EM 5 (3-year program) | Academic medical center | Large (more than 40 residents) | South | PD, CCC chair | 2018 |
| Internal Medicine (Milestones Implemented in June 2013) | |||||
| IM 1 | Academic medical center | Small (fewer than 36 residents) | West | PD, CCC chair, PGY-2 resident, PGY-3 resident | 2017 |
| IM 2 | Academic medical center | Medium (36–62 residents) | South | PD, CCC chair, 2 PGY-3 residents | 2018 |
| IM 3 | Academic medical center | Large (more than 62 residents) | South | PD, CCC chair, PGY-3 resident | 2018 |
| IM 4 | Academic medical center | Large (more than 62 residents) | Northeast | PD, CCC chair | 2018 |
| Pediatrics (Milestones Implemented in June 2013) | |||||
| Peds 1 | General/teaching hospital | Medium (31–50 residents) | Midwest | PD, CCC chair, PGY-2 resident, PGY-3 resident | 2017 |
| Peds 2 | Academic medical center | Large (51–96 residents) | Midwest | PD, CCC chair, PGY-2 resident, PGY-3 resident | 2017 |
| Peds 3 | Academic medical center | Very large (more than 96 residents) | South | PD and CCC chair (same person), 2 PGY-3 residents | 2017 |
| Peds 4 | Academic medical center | Small (fewer than 31 residents) | Midwest | PD and CCC chair (same person) | 2018 |
| Peds 5 | Children's hospital^c | Large (51–96 residents) | Midwest | PD and CCC chair (same person) | 2018 |
| Family Medicine (Milestones Implemented in June 2014) | |||||
| FM 1 | Federally qualified health center^d | Medium (18–30 residents) | West | PD, CCC chair, PGY-2 resident, PGY-3 resident | 2017 |
| FM 2 | Academic medical center | Small (fewer than 18 residents) | Midwest | PD and CCC chair (same person) | 2018 |
| FM 3 | General/teaching hospital | Medium (18–30 residents) | South | PD, CCC chair | 2018 |
Abbreviations: CCC, clinical competency committee; EM, emergency medicine; FM, family medicine; IM, internal medicine; PD, program director; Peds, pediatrics; PGY, postgraduate year.
^a A teaching hospital is a hospital that provides medical training to future and current health professionals. Oftentimes teaching hospitals are affiliated with medical schools and/or physician residency programs. (Teaching Hospitals | AHA. https://www.aha.org/advocacy/teaching-hospitals. Accessed July 1, 2020.)
^b An academic medical center is a tertiary care hospital that is organizationally and administratively integrated with a medical school. The hospital is the principal site for the education of both medical students and postgraduate medical trainees and conducts medical research with approval and oversight by an Institutional Review Board or research ethics committee. (Academic Medical Center Accreditation | Joint Commission International. https://www.jointcommissioninternational.org/accreditation/accreditation-programs/academic-medical-center/. Accessed July 4, 2020.)
^c A children's hospital is a hospital that offers its services exclusively to children, adolescents, and young adults from birth to the age of 21. (Colvin JD. What is a children's hospital and does it even matter? J Hosp Med. 2016;11(11):809-810. doi:10.1002/jhm.2626)
^d Federally qualified health centers are community-based health care providers that receive funds from the HRSA Health Center Program to provide primary care services in medically underserved areas. They must meet a stringent set of requirements, including providing care on a sliding fee scale based on ability to pay and operating under a governing board that includes patients. (Roland KB, Milliken EL, Rohan EA, et al. Use of community health workers and patient navigators to improve cancer outcomes among patients served by federally qualified health centers: a systematic literature review. Health Equity. 2017;1(1):61-76. doi:10.1089/heq.2017.0001)
Table 2.
Selected Survey Responses
| Program Code | Voting Members on CCC | Non-Voting Members on CCC | Chief Residents on CCC | Number of Physicians on CCC | Non-Physicians on CCC | Frequency of CCC Meetings |
| EM 1 (3-year) | 8 | 2 | None | 8 | 1 program coordinator 1 social scientist | Quarterly |
| EM 2 (3-year) | 15 | 1 | None | 12 | 2 nurses 1 pharmacist 1 physician assistant | Twice per year |
| EM 3 (3-year) | 14 | 0 | None | 14 | None | Quarterly |
| EM 4 (4-year) | 18 | 4 | None | 18 | None | Monthly |
| EM 5 (3-year) | 10 | 0 | None | 10 | None | Twice per year |
| IM 1 | 9 | 1 | None | 9 | 1 program coordinator | Twice per year |
| IM 2 | 5 | 0 | None | 5 | None | Twice per year |
| IM 3 | More than 20 | 0 | Yes, voting | 20 | 1 program coordinator | Monthly |
| IM 4 | 14 | 5 | Yes, voting | 17 | 2 program coordinators | Monthly |
| Peds 1 | 15 | 2 | Yes, voting | 14 | 1 nurse 2 program coordinators | 6 times per year |
| Peds 2 | 11 | 1 | Yes, voting | 10 | 1 program coordinator 1 psychologist | 9 times per year |
| Peds 3 | More than 20 | 1 | Yes, voting | More than 20 | 1 program coordinator | Twice per year |
| Peds 4 | 10 | 1 | None | 10 | 1 program coordinator | 8 times per year |
| Peds 5 | More than 20 | 0 | Yes, voting | More than 20 | None | Quarterly |
| FM 1 | 6 | 3 | None | 9 | 2 program coordinators 1 psychologist | 6 times per year |
| FM 2 | 7 | 0 | None | 7 | 1 social worker | Twice per year |
| FM 3 | Did not complete pre-interview survey | |||||
Abbreviations: CCC, clinical competency committee; EM, emergency medicine; FM, family medicine; IM, internal medicine; Peds, pediatrics.
The final template emphasized 3 domains of Milestone implementation: assessments, CCCs, and feedback. This template accounts for program variation in processes, contextual moderators, and reported outcomes for each of the 3 domains. Tables 3, 4, and 5 describe some of the subprocesses, moderating factors, and outcomes within each domain. Brief descriptions of the early, transition, and late stages of implementation are outlined in the tables and presented alongside illustrative quotes.
Table 3.
Programmatic Implementation of Assessment Systems
| Implementation Element | Early Stage: Learning Curve, Reporting Mostly Challenges | Transition Stage: Reporting Improvement, Positive Outcomes | Late Stage: Integration of Milestones, Routine Use, Skillful Faculty, Fine-Tuning | |||
| Characteristics | Illustrative Quotes | Characteristics | Illustrative Quotes | Characteristics | Illustrative Quotes | |
| Processes | ||||||
| Tools: construction and distribution of evaluation forms | Difficulty transitioning to a competency-based framework; inertia in moving away from expectations based on training level. Utilizing verbatim Milestone subcompetency language and developmental progression in evaluation forms with little consideration of assessor knowledge and resources. | “We give them the opportunity to say whether resident exceeds, meets, or does not meet the Milestones or the expectations for that level of resident…They're given 3 possible responses.” –FM3 CCC Chair “In order to get those evaluations to feed into the Milestones more seamlessly, you actually need to use Milestone language and Milestone rating scales. That's the part that takes the education [of faculty].” –EM2 PD | Breaking down and differentiating evaluation forms for faculty in different rotations, aggregation and subcompetency coverage at the CCC level (mapping of assessment tools to Milestones). | “I think the way that we are gathering Milestone data for each individual resident is, not so much by asking broader non-CCC faculty to know all the Milestones and evaluate each person on all the Milestones but we're asking individual preceptors, whether they're doctors or nurses or whoever it is, to evaluate folks with a bent towards one or a handful of Milestones. We ask our nurses, social workers, front desk folks in the clinic about Milestones pertaining to teamwork, working in systems, etc. We cull data on note timeliness and procedure submissions etc. and link those to Milestones and then we look at clinic evaluation reports, tie those to different Milestones, and the inpatient ones tie those to different Milestones.” –IM1 PD | An iterative approach to the creation of evaluation forms, periodically checking in with assessors, balancing assessor burden with adequate and appropriate domain coverage. | “I want the form to be as intuitive and user-friendly as possible for the general faculty at large so that the data we get is of the best possible quality and accuracy… faculty are asked to do so many different evaluations with so many different learners. They may have a medical student that's on their service and for 3 days the medical student overlaps with the resident and then overlaps with a fellow. They're asked to evaluate all of them. It's being respectful of their time, it's finding that balance where you're getting high quality information that's accurate, but is relatively brief so that the form isn't so big or complicated that it actually causes the faculty to shut down and just check all sevens or whatever number you pick on the form.” |
| Approach | Misinterpretation of regulatory requirements, unnecessarily burdensome demands on faculty, unrealistic expectation of complete competency domain coverage across rotations and assessors. | Interviewer: The requirement that after each rotation that each learner receives a full assessment, does that come from the internal medicine community? IM2 PD: As I understand, I was supposed to sit down with the resident and go over the evaluation. Well, verbally and then fill out the form, and have the feedback to the resident based on that written evaluation. Interviewer: Every 4 weeks and sometimes within that 4 weeks twice if it's every 2 weeks, faculty are doing an assessment on all 22 subcompetencies on each resident for your program? IM2 PD: Correct. | Grouped evaluations for rotations, compiled and completed by an experienced individual before aggregated data sent to CCC. | “…we've gone to a group evaluation. We've had one coordinator or one faculty member taking responsibility for certain rotations from different departments. For example, somebody you think you can give 3 out of 5 then you have the whole group who meets together, then somebody who's maybe more adept or capable goes through and completes the evaluation. They are the ones who moderate that. I think that has helped us a lot to make the evaluations more meaningful.” –Peds4 PD | Periodically reviewing domain coverage, determining potential subcompetency deficits and addressing them. | “We're constantly looking at how we can get more data points for our residents to have more data. It's hard to make a decision if you just have one data point, where if you have 10–20, you feel a lot more comfortable saying, ‘Yeah, this is probably where this person really falls.' We've been playing around with our evaluations; we've changed their names over time. Do we make them Milestones-specific? We try to focus them, but try to come up with a system where it might be easy for somebody to always hit the patient care eval, but we also need to have people fill out the systems-based eval and the practice-based [learning and improvement]. We're constantly looking at how to improve that. We've changed a little bit. We've added a couple of different evaluations that will coordinate those different milestones and bring them up so we can get more data on specific ones where we felt like we weren't getting enough data.” –EM3 PD |
| Resident self-assessment | Little acknowledgment of its importance; self-assessment happens inconsistently or not at all. | Interviewer: Do they do any self-assessment? EM1 PD: I don't think they use the milestones for self-assessment. I think they used their procedure log for self-assessment. The number of patients they cared for, I think they used their in-training exam score. I think they just use, in general, our feedback. | Acknowledgment of importance occurs with some level of consistency, enriches formal and informal feedback processes. | “It's more informal, looking at how I did on certain days can I be a better team leader this year, being more senior on the team. What can I do to get better or talking to my intern as well as med students I think that they felt I could have done differently that would be a better teaching environment et cetera.” –IM1 PGY-2 Resident | Trainees self-assess in an objective and reflective manner. Self-assessments are consistent, grounded in competency domains, contextualized within short-term goals, and attached to long-term career goals. Presence of training components to improve individual insight and build self-assessment skills. | “At the end of every month, there's another evaluation that also goes out that is a self-assessment of basically these kind of core concepts and how we think we're doing” –EM2 PGY-2 Resident “…what we're doing with our simulation based training or whatever domains of practice and training, including debriefs, a segment of self-assessment, which is structured, which is followed by subsequent debrief feedback by evaluating faculty or simulation staff, and those debrief session routinely emphasize the importance of self-assessment by the resident and we routinely point out where necessary the contrast between items that residents should have been able to pick up on their own” –IM2 CCC Chair |
| Moderating Forces | ||||||
| Faculty engagement | Faculty members disengaged and unmotivated to complete evaluations. | “Then you have others who are fantastic in their clinical arena at the bedside. They clearly love to teach, and the residents love them, but they don't complete the evaluations. They don't attend conference. They wouldn't attend any of these functions. They're just not really interested in it. They're happy to work with residents and teach them in the ED, but they're not interested in all of this” –EM1 PD | Higher engagement level observed in some faculty members, including knowledge of the competency domains and moderate level of buy-in for the developmental narratives outlined in the subcompetencies. | “Certainly the ones that sit on our CCC that are subspecialists, they have a very good understanding of it. The fact that Milestones are now being used in fellowship training has helped just the whole concept of Milestones be I think much more widely understood even among the subspecialty faculty.” –Peds3 PD | Rotational leadership and other local champions identified and empowered to create assessment tools and influence assessment methods. | “We made the decision early on to identify champions on the faculty within key divisions and for key rotations. We enlisted the help of those faculty champions in creating the assessment tools. We didn't do it for them. So, for example, on cardiology we have a heart failure rotation, and we identified somebody who's one of the key heart failure faculty who got it and created an assessment tool that was then vetted among their faculty. So that their faculty signed off on it, they felt like they had some ownership in this evaluation system.” –IM3 PD |
| Leadership attitudes | The negative perception of Milestones and subcompetencies, difficulty understanding the value of the Milestones in the face of the significant resource cost. | “I think we're moving in a direction of getting a little too much in the weeds and losing sight of the bigger picture. It's evaluation until death. The requirements, the evaluation process starts to become so overwhelming it ends up becoming just a chore and just a bunch of check boxes. You're actually doing less with more.” –EM1 PD | Understanding of the purpose and framework, seeing the value of the structure and push for objectivity. | “The Milestones are a tool to get to a place. They are not the end result. As long as people are making progress, it's fine. And I think that before the Milestones project came along, there were a lot of people that were just giving informal feedback to residents and they were not really evaluating them, even once a year. I know some places where they just said, ‘Well, they're okay. They're doing all right.' Do you have any reason for that? ‘Nope. I just think they're okay.' And the resident says, ‘Well, I think I'm doing okay.' But they don't have any idea. So, I think that imposing some structure and some requirements—no matter what that was, honestly, was the most helpful thing.” –FM2 PD/CCC Chair | Appreciation of the potential for a shared mental model among assessors and program leadership, seeing opportunities to catch individual trainee development issues early, providing objective context for utilizing the entire developmental scale on subcompetencies. | “Our score range is also better. If you just look at it from a numerical standpoint, pre-milestones, it was all fours and fives, on a Likert scale. And now…we definitely have a wider range, and we have people who get twos, or one and a half, and the residents aren't super offended by that and I think they understand it has to do with their developmental progression as opposed to, they are a 2 out of 5 person. I've not had any residents come into my office crying because they thought they had a 2.” –Peds1 CCC Chair |
| Perceived utility of Milestones by stakeholders | Low perceived utility of subcompetency spectra. | “…there were maybe 2 out of over 1000 number one measures on any given measure. They're basically such a low bar, it's probably like a remedial medical student level. It's ‘Cannot present effectively or falsifies data to such extremes' that anchor is not really useful. The fifth level is so aspirational, it's ‘transforms child advocacy across the entire state.' It's such a high bar that many faculty didn't even reach that. So those numbers on each end are basically out and useless to be honest.” –Peds2 CCC | Emergence of common language, consistent definitions of competency domains across faculty members, subcompetency details attended to by raters. | “I think the Milestones have largely provided us with a common language to speak to our learners and, frankly, to our faculty evaluators as they're considering resident performance. And that has certainly changed since 2011. I think there was a little bit of a gestalt-o-meter before 2011 and now there's something more concrete to hang evaluations on.” –IM1 PD | Perception of utility for individual resident development and comparisons across training levels. | “…a good way to measure residents and their progress in residency and it does it in a way that you can easily compare residents across. You can compare interns to second years, both individually and as a whole, and then track their own progress as they continue through a residency. I think that it gives just a nice cohesive way to measure progress.” –Peds2 PGY-3 Resident B |
| General Outcomes | ||||||
| Prioritization and coverage of all competency domains | Devaluation of competency domains. | “I think we probably don't address [practice-based learning and improvement] very well, and we just push that down to that's the least important thing, to tell you the truth.” –IM2 PD Interviewer: For systems-based practice, you said that sometimes it's just a random number? Peds1 PD: Yeah, it feels like it. Being able to understand the system, work within the system, help patients identify resources. Three, four, five, kind of in there somewhere. | Understanding of scope of competency domains and importance of assessments and observations external to subcompetencies. Realization that Milestones were not designed to provide complete coverage for specialty-specific professional development. | “I think people don't realize the Milestones are not it. You're not supposed to stop at the Milestone as it's written…On the professionalism side you cannot. There's no easy anchor to say, ‘Able to manage a great diversity of patients across different cultural backgrounds.' You have to create tools under that. Sub-anchors. Tools to evaluate and then use those to decide what Milestone a resident is at. Not cut and paste the Milestones onto various evaluation forms.” –EM5 PD | Appreciation for the broad coverage of the 6 competency domains, leveraging the objectivity in the descriptive language across the developmental continuum for purposes of assessment. | “Initially I felt like, a little bit nervous because I'm being observed on all these different parts or things that I'm doing and things I could potentially need a lot of work on. But later I realized that this could actually be really beneficial because it can have a 360-degree view of my performance and my development in the program. I found it to be quite helpful in getting more comprehensive feedback overall. I would put it this way. Looking at the 6 domains provide pretty good coverage in terms of what is expected of us in different fields... It covers different aspects of what is expected of us. It is a good breakdown to see what we should be working on or what we should be evaluating ourselves on. And more the general framework.” –IM1 PGY-2 Resident |
| Assessor burden | Assessors overwhelmed, without significant understanding of the scope of their responsibility, expending considerable time and energy to complete evaluations, often evaluating unfamiliar constructs. | “I think we bit off more than we could chew all at one time, in terms of changing the forms and then starting the CCC all at the same time and all in full-on Milestone language. What we ran into is ... I think it's too much cognitive burden on the individual assessors, such that they pushed back against using them a lot. They also, I think blew through them in a way that didn't do due diligence to actually reading them and using them.” –Peds1 CCC Chair | Increasing familiarity with assessment language, informal faculty development processes, appropriately deferring scoring responsibilities to the CCC. | “After they kind of got used to the language, it made it easier, and our evaluations are not written distinctly in Milestone language…So we've had to at the CCC take our more traditional evaluations and comments and try to bridge those with the Milestones. That takes some education on our part. I think that they've been helpful. And I think that they're meaningful. But it's not without some relatively specialized training that I think you need to have to get to that point.” –FM2 PD/CCC Chair | Consideration of assessor expertise and scope of observable behaviors. | “In terms of the non-CCC faculty and other providers inter-professionally, we are hitting them up only for the type of data that we think that they most are able to review.” –IM1 PD |
Abbreviations: CCC, clinical competency committee; EM, emergency medicine; FM, family medicine; IM, internal medicine; PD, program director; Peds, pediatrics; PGY, postgraduate year.
Table 4.
Programmatic Implementation of Clinical Competency Committee
| Themes | Early Stage: Learning Curve, Reporting Mostly Challenges | Transition Stage: Reporting Improvement, Early Positive Outcomes | Late Stage: Integration of Milestones, Routine Use, Sophistication of CCC Members, Fine-Tuning | |||
| Characteristics | Illustrative Quotes | Characteristics | Illustrative Quotes | Characteristics | Illustrative Quotes | |
| Processes | ||||||
| Premeeting preparation | Little to no preparatory work, pressure to discuss each subcompetency for each resident during the meeting. | Interviewer: “What was it like in the beginning?” “Clumsy, slow, didn't know exactly what we were doing. I think I would imagine that was true to every program in the country. It seemed a little bit nit-picky and laborious because there's so many different things you fill out... But at first, I would say it almost seemed Mickey Mouse because to have to fill out all those things out on every single one every time seemed unnecessary.” –IM3 PD | Compilation of evaluations, usually by the program coordinator, distribution of spreadsheet or report to CCC members before the meeting. | “They are all compiled by our program coordinator for the last 6 months from the last CCC and then they're all literally printed. Everyone gets printed out a copy and handed out to every member of the CCC and they're supposed to go over it. They go over those, they're given them initially early to go over it, if they can, and then we take the time to go over them during our CCC meeting again.” –FM2 PD/CCC Chair | Using resident-specific evaluations, faculty members, APDs, the PD, and the CCC chair are each assigned a small number of residents to originate premeeting ratings for each resident on each subcompetency. Premeeting ratings serve as a starting point for CCC discussion. | “Each faculty member was assigned residents per each class to be the individual to follow their training progress and view any specific things like procedure logs and evaluations.” –EM5 CCC Chair |
| | Interviewer: “Do you go over each sub-competency for each resident every meeting?” FM3 CCC Chair: “No…We used to, but that's done individually because we send it all out ahead of time.” | CCC Chair: “We assign 3 residents to each faculty CCC member and they each get one, they have an HO1, and HO2 and an HO3…” PD: “I would also say that most of the time, for an individual resident, is spent in the faculty member who reviewed their dossier in discussing why they might have scored certain Milestones at certain levels if they are out of the ordinary, either very good or very bad.” –EM2 |
| Data review | A disconnect between evaluations and the CCC meeting in subcompetency coverage. Gaps in data perceived as an inconvenience, not as an opportunity for quality improvement. | “Yeah, I think for the purposes of just completing it and filling in a circle, that we sometimes probably said to ourselves, ‘Well we don't really have much to go by, but seems fine,' and so we just give them what we think is more their post-graduate appropriate level.” –EM1 PD | Recognition of data gaps, early plans to address gaps. A realistic understanding of the limited opportunities for direct observation and supervision. | “Procedures. We are fixing this, but up until recently, we have had a very difficult time getting meaningful data on residents doing bedside procedures. Many times the residents are doing them, once that they are deemed competent to do these unsupervised, they're not being supervised doing these. There's nobody that's filling out evaluation forms on them during these procedures, we look at notes in the chart when they've done procedures and self-reported procedures. But we are struggling with what to do with somebody we don't have data and we're trying to figure out for the specific milestones for patient care, and patient care two for internal medicine, how to properly evaluate that.” –IM3 PD | Incorporation of resident self-assessments. Increased efficiency in resident review, particularly for low performers. Taking gaps in subcompetency coverage into account, utilizing the best available data to determine resident ratings. Leveraging gaps for quality improvement, both for CCC and assessment processes. | “On a scale of 5 to 1, and it's basically how you see yourself. So how do you see your clinical knowledge? How do you see your procedural skills? Your interpersonal interactions, your ability to identify potentially serious causes of disease…it's to tailor the Competency Committee members to, ‘What do I think?' What do I think about my own performance and what do I think I'm deficient in?' So if I think I'm deficient in interpersonal interactions, that would tell the Competency Committee to examine my evaluations in those aspects more closely, and say, ‘Actually he doesn't have a problem in interpersonal interactions but he does have a problem in procedural skills.'” –EM2 PGY-2 Resident “After that meeting we came back, there was a core group. It included me, 2 or 3 associate program directors, and most importantly, the program director for our Med/Peds residency, who does a lot of work with the ACGME and me as well and was actually the brains behind a lot of what we started doing in terms of mapping milestones. Mapping assessment, we created our own EPAs and mapped them to milestones. So, we had a team of probably 6 or 7 people that were involved in really overhauling our assessment system.” –IM3 PD |
| Decision process | Discussion within the CCC meeting has little impact on reported Milestone ratings. Actual CCC decisions limited to a general sense of resident progress. | Interviewer: So, following the CCC meeting, you yourself still have a significant amount of work to do the additional step of taking the 6 general ratings you have for each resident, and putting the resident, with some help from the binders? You score them on each of the 22? IM2 PD: Yeah. And sometimes it's a matter of ... It's hard to put everything down. If some of the subcompetencies may be answered in a way with some of the comments on the side, or even just verbal discussions that we've had that, “this guy is a little below par on physical exam, or histories are fine, but he misses cardiac murmurs” or something like that. So that may be verbally discussed. Not exactly put down, maybe, on the CCC meeting, but I will recall it for the milestone submission. | Incorporation of preliminary ratings, visual representation, opportunities for faculty to provide input, emphasis on discussing outlier ratings, premeeting ratings are adjusted when necessary. | IM3 PD: When we come into the meeting every resident has already had preliminary ratings or milestones assigned to them. And then we use the spider plots because we find that's the most efficient way to present that data quickly to the group as a whole … We ask the faculty member who has done the preliminary work to say a few words, particularly if there are areas that are either exceptionally high or exceptionally low. Then we invite everybody in the room to comment. Most of the time people are in agreement with how those residents are doing particularly if they're on track and we feel like they're where we would expect them to be. Not infrequently we will have people that have had personal observations and we try to create an environment where they chime in and we make those adjustments either right there in the room or we flag that person and we work outside of the meeting to make some changes. Interviewer: What would you say is the proportion of changes that are made from those initial, temporary ratings? IM3 PD: Probably 15%–20% of the residents will have at least one milestone change. | Systematically capturing qualitative comments during the rating discussion to feed back to residents, which allows for context and understanding alongside numerical ratings. For residents who need remediation, a personalized learning plan is discussed and specific faculty are assigned to residents. | “So after each resident, after we have completed the milestones for the particular resident, there is one designated person during that time who has, it's their job to participate but also to write down what they're hearing from the group as we are going through the milestone in terms of where are their areas of strength and where are their areas of growth. And after we have completed the communication part of the milestone then that—it's our behavioralist who's doing that, will then put the comment up in front of the group, we review those comments together to edit them as we feel necessary, and then at the end of the meeting then I will send out those comments and a brief blurb to remind the residents what is the CCC, why we're meeting, what do we do, how does this impact them. And then those comments go to the program director, the residency advisor” –FM1 CCC Chair |
| Membership | Limited to program leadership and senior physician faculty, little consideration for rotation or site representation. | “So basically, clinical competency committee was created a few years back once it became part of the ACGME requirements for residencies. In general, I believe when it was initiated there were involvement from the program director, several of the assistant program directors, and just random clinical or educational faculty involved.” –EM5 CCC Chair | An emerging role for program coordinators and other non-physicians, awareness of the importance of rotation and site representation. | Peds4 PD/CCC Chair: She is there to just assist if we need something. Sometimes she'll be like, “Oh no no no, this person is so late at getting things in to me.” Or let us know if there are concerns as far as how that person's interacting with her. ‘Cause sometimes residents are more appropriate to the faculty, but then they're a little more disrespectful to the coordinator, which is not okay. They provide some of that input. And then of course they schedule the meetings, and facilitate getting everything ready logistically. But they have a pretty limited role as far as completing the milestones. | Embracing the diversity of perspective to enrich the discussion. Non-physician members such as nurse managers and psychologists have a “voting” role or a role equivalent to physician faculty, with an emphasis on rotational coverage and representation from multiple sites when appropriate. | IM1 CCC Chair: Our CCC has people who think differently, and I think that's a major positive because I feel like ideally it would be more resident-centric and you'd have all the time in the world and it would be a resident panel and the resident themselves and dadadadada. But the way that we have it now, at least we have different minded individuals as a part of the CCC who are able to ask salient questions about things that others may not have thought of, advocate for people, etc. So that when a consensus is made it's not just everyone agreeing with the PD. It's a consensus born out of important conversations and so when we go to that resident, it's been well thought out and yeah unfortunately if things need to go in a more punitive direction, then it seems kind of more ethically and legally palatable to have these groups in place. Then we also have people that have different ideas for the direction of the program. So, some folks say, “We need to just train folks who are great communicators and good people and who know a decent amount of medicine” and that's great. And then others who have very lofty goals and think we should turn out the highest performing medical minds and so the way that these different people think about different things, allows individual CCC members to sometimes identify with individual residents who are struggling, while other ones don't. And then so becomes sort of in-lieu advocate. The nurse is a nurse manager from the general pediatrics unit and she ... For sure every first year and third year passes through her domain. She hears complaints about residents from either bedside nurses or nurse managers, and positive things for sure. She's very invested in education and is also a very fair-minded person who is not just out to protect nurses at all cost. We very much welcomed her into the committee, and she's been very thoughtful. The 2 program coordinators are our program coordinators and they can speak especially to some professionalism issues, I think. |
| Role in program | Role limited to monitoring resident progress and making recommendations to the PD about Milestone ratings. | “From our perspective, I would say the role is to assess the residents on an ongoing basis in terms of their technical proficiency, their cognitive abilities, and to monitor for progression as we move them through the year and promote them.” –EM1 PD “We evaluate residents and make recommendations as program director about the competencies and the progress on the milestones and suggestions for any remediation or commendation. Twice a year, we do that twice a year and most of that is formative. Of course, the final third year is a summative and we do a quasi-kind of, it's kind of quasi-summative in making decisions or recommendations for a promotion at the second part of each postgraduate year.” –FM3 CCC Chair | The role is expanded to contributing to individual learning plans or remediation plans for residents and providing support for decisions on learner progression. | “Our CCC is the overarching, collective thinking group that reviews resident performance several times throughout the year and, when necessary, helps provide thoughtful improvement plans for residents who may have areas that need help. They also are the support of the strong-arming if something really goes off the rails and some sort of formal remediation process needs to be undertaken, which makes them sound like they're that bad guy the whole time. “For example, when we have a struggling learner, the program director may meet with that individual you know monthly, every 2 weeks, whatever sort of seems appropriate but then schedule check-ins with the clinical competency committee if things don't continue to improve. Frankly, like an advisory group. Like, ‘This isn't going well, we love this learner. Does anybody have any ideas on how we can help the learner get back on track?' And then the CCC has not had to do that but if somebody were to just not be able to exceed to be part of the decision to potentially let somebody go.” –IM1 PD | CCC has multiple roles, including deciding on resident ratings, providing qualitative context for resident feedback, and a continuous quality improvement role: both for the residency program as well as for the CCC itself. | CCC Chair: Four of them we generate that paragraph report. At the end of each semester we have 2 meetings. In those two 2-hour meetings we go through every single resident in a PowerPoint. We make a few PowerPoint slides about each resident that I put together. The program coordinator helps skeletonize, based on feedback and synthesis from one CCC member. The CCC member gets assigned to 3 or 4, maybe 5 residents and does that big compilation of material, writes the synthesis statement, and then as a committee we read it out loud, vet it, change the wording a little bit, rewrite it and then the statement is eventually submitted to the resident. That's what happens at four of the meetings, so basically December and June. Then the two interim meetings, one in Fall and one in the Spring we use basically as quality improvement, to do a postmortem on the last set of evaluative meetings, and then change our process. We've changed a couple things every single time. I think it keeps getting ... it's a living document in terms of the template we use to assess each resident and the processes we use that we go through in the meeting. –Peds1 CCC Chair |
| Training | Little real training or orientation for new members, members learn by doing. | “Well, there's a ... we haven't taken any new faculty. Well, yes we have. [IM3 PD] does that. We just sit down and go over it. It's sort of the whole group. As he goes through the milestones, he just will point out, ‘Now, this is how we do such and such.' It's sort of by osmosis. “The best example would be the chief residents. The chief residents are on the CCC. So when they come to their first one, they've never been to one before and we don't give them any responsibility, but they start watching and they're smart. They realize what we're doing and quickly, they're up to speed.” –IM3 CCC Chair | Training and orientation are informal, involving discussions by current program leadership. | “One of our APDs will sit down with [incoming APD] and talk to him about our evaluations. [CCC Chair] will probably talk to him about our CCC, and between those two we have our ... The person who's in charge of evaluations and assessment, [Faculty member], is the one who runs our Milestones meetings, and [CCC Chair] is the chair of our CCC, so between the two of them, I think our new APD will get a sense for how to think about the Milestones, how to use them, how to report them, et cetera.” | Training and orientation are formal, with roles and processes detailed as well as expectations set ahead of time for new members. Decision and data review processes can be simulated for the experience ahead of time. | “So we just had a meeting last month. And so we had a lot of turnover in our committee this past year just because we had a few faculty change roles and leave the institution, et cetera. So we had 4 new members. So we had a committee meeting where, and we also changed the leadership of the committee so since this was my first year to officially run the committee, I wanted to meet with everyone. And so we just met as a group and we talked about how the committee functions, what our role is and how we're going to evaluate the residents in terms of where we find the, like logistics about where do we find the information, what information are we looking for, how do we fill out the form that we're requesting to be filled out. Allow them to ask questions about how that works.” –Peds2 CCC Chair “Probably the focus has been on the people that are on the CCC initially. We had some sessions with them. We even took a couple of sample real resident evaluations and pulled them all together, had a workshop where we had people that are on the CCC sit down and work through, ‘Here is the kind of things you're going to have to review and it's going to be available to you. When you have to rate this person on the Milestones, where would you put this person?' We did that sort of thing to give them just some actual hands-on practical experience before we've had the first CCC a few years back.” –Peds3 PD and CCC Chair |
| Moderating Forces | ||||||
| Institutional support and resources | Little to no support, no infrastructure, no protected time for faculty to sit on CCC. | “Our sponsoring institution is the community health center, and there is no 4-year undergraduate institution in town, there's a community college. There is an osteopathic medical school, but they're resource limited, so they're not actually a place we could tap for resources. So we are having, and that is a piece for this process with some of the ACGME processes with the new accreditation system and some other pieces when you're in a large institution, you actually have a somewhat reduced workload, but when the program and the institution are one and the same it's a significant additional workload for us.” –FM1 PD | Protected time for program director for CCC and program improvement. Institutional culture and resources provide context for strong relationships across programs and departments. | “I'm very fortunate where I have a department chair who's very supportive of our training program and if I ask for resources, she'll generally provide them as long as it's a reasonable request. Most importantly, she provides me with a lot of protected time…I also think what's unique is that we, I'm biased, I think that we have a good team dynamic where there's a very strong working relationship between the associate program directors, the other residency programs on campus. Including [Med/Peds PD] from Med/Peds, [Peds PD] from pediatrics. We have a very strong DIO, [DIO], who is a wonderful mentor for me and for other program directors. I think those are the ingredients that really helped us.” –IM3 PD | Some protected time for associate program directors and faculty to sit on CCC. GME office is familiar with ongoing programmatic quality improvement efforts and works to collate and disseminate these efforts within the institution. | “I know that other residency programs within our institution do much of the same. For example, we've had a longstanding interest in social determinants of health within our program. More recently, the surgical program here started an organization called Socially Responsible Surgery. I know that our emergency medicine program does something similar. There are other programs that are doing some of these things, and our GME office has just, 6 months ago, started to collate what these are and try to start disseminating them.” –IM4 PD |
| Data management systems and support | Data systems are inadequate to accomplish assessment collation and summary tasks. Applications are unable to provide and present information in a way that readily facilitates CCC discussion. | “I haven't been entirely happy with any of the residency management systems. We use [application] here. I find it's fine for collecting data, but getting data back out of it is difficult, and some of the functionalities I needed just weren't there. I've been, ‘begging' is probably too strong a word, but I've been asking for a database person for a number of years. “In the past, when we did a CCC meeting, it has everybody completing either on actual paper or at least using a Word document, and then trying to find and losing notes and sending them around and not being able to keep track, and is this the latest or not? It drove me crazy.” –Peds3 PD and CCC Chair | Finding workarounds, usually staffing resources, to more effectively compile data and make it accessible to CCC members. | “I'm trying to think of [staff member]'s specific title, but she's with the admin office, but she has already compiled certain benchmarks that we had used in terms of assignments or rotations that we have agreed that we're allowed to meet certain milestones. After we have taken the 5 minutes to be able to review, then we take 5 minutes to discuss anything that we feel is important and probably also to get those areas of concern that we need to be aware of as we are going to the milestones. Then we go to them individually, and all of us have what we call our cheat sheets which is basically, under each of the milestones we have written notes that say if there's a specific, for example, if you have done a wellness workshop, and you have done an essay for it, in one of the behavioral health rotations, then that is our benchmark that you have met this milestone. And so we all can look at that to be able to say has that resident met it? And so people have different assignments, so if say, [Faculty] is assigned the wellness workshop she's able to then, I can look at her and say has resident x done wellness workshop, and she can say yes or no. So there is little parts that are paired out, to help make the process go faster, but everyone gets to read their own data.” –FM1 CCC Chair | The data system is organized to produce usable reports that facilitate efficient subcompetency rating submissions to the ACGME; rotation-based evaluations are readily accessible to residents, and residents can even submit disagreements with certain evaluations. | “We have this database manager and this process is much more automated now. I had the database guy creating a database. Everybody's name is in it. I can open it up. People can open it up wherever they are…They can see, when they open up who's done and who's not done, if they forget to click off a blank in one of the milestones, it tells them that, and you think you're done, but you're not. You missed something somewhere. Go back and look again. Then, at the end, I have reports that print out, so my personnel, since ACGME doesn't have a way for us to upload it directly, I have a report that comes out that looks as much as possible like the screen that my secretaries are going to have to turn around and enter, and so they got on a dual monitor and they got one screen up with the database for Johnny Jones and on the other screen is Johnny Jones's ACGME input and they can rapidly, it took maybe 2 hours even with 180-some people to get all the data in. “This was all an effort on my part to try to simplify things, and then this next time around, what I've had this database guy do is we're trying to pull in data from a variety of sources, so for instance, continuity clinic attendance, conference attendance, how many PREP (Pediatrics Review and Education Program) questions have they done…What I've had the database guy do is do a lot of that legwork for them and pull it together… The other thing I did was, this time around, I had it made so that electronically the resident can agree or disagree with their evaluations. In particular, I told them, ‘If you have disagreement, please lay out your disagreement,' and the next time around when the committee meets, in the database, that will be captured there so that when the coaches are looking, they can see, this resident says they think they have been scored too low in this particular Milestone and to take that into consideration.” –Peds3 PD and CCC Chair |
| Outcomes | ||||||
| Program resources dedicated to CCC | Heavy resource expenditure: time and work before, during, and after the meeting. | Interviewer: What was that first meeting like? EM1 PD: Colonoscopy without anesthesia. Interviewer: Was the second one a little bit easier? PD: “I think you get kind of ... We have these arguments all the time, milestones are not milestones. We all have a general sense as to how they're doing. The majority of them are on cruise control doing just fine. There are a few who need a little more hand holding. Then there's the one or two who you wish matched someplace else. “That's supposed to be a joke, but not really. Again, it feels laborious after a while. You do get fatigued. We only had [number of residents] to go through last year. I can't imagine going through [greater number of residents] per class. That's the problem, you get fatigued after a while. Those that are doing well, everyone agrees, it's just a great resident, you just fill in circles.” | Program leadership begins to make efforts to reduce the length of time spent in the CCC meeting. Early efforts to target CCC discussion to individual residents struggling with specific subcompetencies. | “We do an hour and a half a month, and then we've sort of taken an approach that the program leadership team, the APDs and me, do a lot of advance preparation for our milestone assignments. Then I try to keep those meetings short, but we run about 5-ish hours for every 6-month review. But we come in with a prepared file and say, ‘All right, here's our data, this person is right on track in all their patient care milestones and comments, but we see some concerns around communication skills. Let's talk about those specifically as a group.' Rather than trying to do all the assignments as a large group. There's probably, for those semiannual milestones, boy, I would say each member of the program leadership team probably spends an easy 20 to 25 hours doing that advanced preparation to shorten that meeting, and make that meeting really move, and not waste everyone's time.” –EM4 PD | Significant pre-work; individual residents are presented with pre-assigned Milestone ratings. Though ratings may be adjusted, the discussion centers on the overall progression and development of residents. | Interviewer: So in 2 hours you're able to cover all the ratings for each resident at a given training level. IM4 PD: Yes, but keep in mind though that there has been pre-work done before those meetings ...where each advisor, which is each Associate Program Director, in conjunction with one other person, who is usually a chief resident, has gone over their panel of people and made the initial determinations on what they're going to propose at the Milestones, so that's not done for the first time at the Milestones meeting. That's done offline, and for their say 10 or 11 people that they're going to discuss at that particular meeting, they've already reviewed all the assessments, summarized them, put down what their proposed Milestones are, and then once those are displayed, that's when we have discussion in the bigger group about each candidate. |
| Capturing problems in resident progress | CCC catches only residents with significant development issues. Competency domain coverage is incomplete, with difficulty capturing issues beyond those that are readily apparent. | “We had a CCC of sorts before it was required but, it was definitely less structured and people felt fine just talking about what somebody was wearing one day as opposed to really getting down to the important thing about how they're developing, so that's a difference. I also think we've had more residents on little remediation, academic improvement plans within our institution and I've reported more residents to the American Board of Pediatrics for unprofessional behavior and have gotten some remediation that way too. I probably had like one in 10 years before this and we've had like 4 since. I think we're feeling better about the remediation, I think we've had more success with it. We are more concrete about what we're getting because of the work of the CCC.” –Peds1 PD | Acknowledgment and use of increased competency domain coverage; increased discussion of developmental levels within subcompetencies. | “I think it really refines what is working well and what isn't working well with the residents. Historically, and currently, there's sort of the eval, and we have 2 minutes to talk about a resident ... and not quite the same, everyone is not looking at the same data sets. We're looking at that. In the CCC meeting, we have up to 45 minutes to talk about our residents, and we're all looking at the same data sets and so when we are summarizing at the end what the resident's doing well, what the resident needs to improve, for those who are sort of ... if you're a low-performing resident, it further defines really what are those areas for the low-performing residents.” –FM1 PD | Leveraging both assessment data and CCC member expertise to discuss the “true” level of resident rating on subcompetencies. CCC culture and structure foster detailed discussions and integrate different inputs for accurate placement of resident progression. | “Okay they have 30 intubations so they are at what's expected of a resident at this level of training. But by delving a little bit deeper with the CCC and faculty who are involved in that you might actually get more information that would say, ‘Well this resident isn't as prepared for managing airways because they're not understanding the dosing. They're unaware of good backup measures for failed airways.' Even though they've met this objective criteria of 30 intubations. So I think just kind of having an open forum from faculties that are involved in the CCC that work with residents clinically has helped.” |
Abbreviations: CCC, clinical competency committee; EM, emergency medicine; FM, family medicine; IM, internal medicine; PD, program director; PEDS, pediatrics; PGY, postgraduate year.
Table 5.
Programmatic Implementation of Feedback
| Implementation Element | Early Stage: Learning Curve, Reporting Mostly Challenges | Transition Stage: Reporting Improvement, Positive Outcomes | Late Stage: Integration of Milestones, Routine Use, Skillful Faculty, Fine-Tuning | |||
| Characteristics | Illustrative Quotes | Characteristics | Illustrative Quotes | Characteristics | Illustrative Quotes | |
| Processes | ||||||
| Content | Numerical data presented as feedback without context; feedback provided is not actionable. | “They get some patient per hour metrics, although people were getting so obsessed with those that I took those and some of the other clinical performance metrics about bounce-backs and unexpected ICU transfers out of the semiannual review because people were so obsessed with them that, because it was a number, it was data. I took them out of the semiannual reviews so that people stopped arguing with me over whether they were seeing 1.73 patients per hour or 1.81 patients per hour based on their own numbers. Because as long as you're seeing enough, as long as your department is not backed up, it just doesn't matter.” –EM4 PD | Developmental progression discussed, with comparisons to peers as context; strengths and weaknesses highlighted; some guidance given as to how to address weaknesses or deficiencies. | “The Milestones give you a 30,000-foot view and help me see ‘Oh okay, I'm progressing as I should, or I'm kind of deficient in this area, I need to work on this throughout this next 6 months.' I think some of it is just the level of detail and, for me, how to implement that feedback.” –IM1 PGY-3 Resident “I really like the way our program handles it, at least with every 6 months. It gives you a great not only understanding just from talking with the program director and the views of the clinical competency committee but also they give you a visual representation of where you have been and where you currently are and where your peers are so that you have some understanding of where problem areas lie and where you need to improve.” –Peds2 PGY-3 Resident | Numbers shared but communicated as signals of overall progression. Program leadership provides a summary of comments collated from the CCC meeting, establishing overall progression in plain language and placing numerical ratings in context. | Peds1 PD: “For me, what I do with my meetings as the program director, is that I go through primarily, the paragraph. I skip over; we have a little section on there about milestone ratings that are higher or lower than the other class average but, I don't know if I trust the actual score. Maybe they're more of a three and a half than a four and a half but, somebody said they were a four and a half, so okay, I'll let that go. I see it much more the other way. If they're low and they think, ‘Oh I should be more than this.' I don't want to have that conversation. I find more meaning in the comments, so I focus on that section.” Peds1 PGY-3 Resident (separate interview): “I like the paragraphs. Just because it summarizes the nitty gritty comments that we've had throughout the year. You can see how someone who maybe wasn't on those rotations with you globally viewed what they read about you in their evaluation…It's nice to have this overview.” |
| “We have an older guy who retired, who told me to get a haircut during the comment. He was joking with me. He's an old military guy, and I had a huge ponytail. I hadn't got a haircut in years, so I was ... I did get a haircut eventually. That wasn't that helpful, you know, but people be reassuring in the way that they'll say, ‘Oh, keep up the good work,' but ... It's optional for them to do all the numerical check boxes...it's nice to hear that you did okay, but then it's like it's not really helping you to be better in a specific way.” |
| Timing and structure | Inconsistent or delayed feedback; details of specific encounters are lost, and blocks of time pass without formal check-ins concerning individual resident progress. | “It's hard to have those conversations in a delayed fashion, like 2 months after something has happened. I think people just don't remember. We see tons of patients, and they might not remember a specific encounter or a specific behavior that they did. We have tried to change that culture so that some of the feedback is given real time so that by the time they come to us and we are having that 2-person meeting, it is not the first time that they've been talked to.” –EM3 PD | Increased consistency and duration; face-to-face meetings cover Milestones ratings alongside other data. | “There are 2 meetings per year that the attendings sit down and they talk about how well we're progressing overall.” –EM2 PGY-2 Resident | Detailed feedback given on an ongoing basis, rotation-specific feedback given at the end of the rotation, and Milestones ratings covered at least every 6 months. Feedback is consistent and timely, and the level of detail matches the modality. | “There are a few different ways we're evaluated. Number one, is probably the most frequent is by our attendings who watch us, ask us questions and then give us feedback also on an ongoing basis and then at the end of a rotation. Then, we get feedback from our clinical preceptors in a similar fashion, both an ongoing basis, and then periodically throughout our time in clinic, and then every 6 months we have a semiannual review, where we review Milestones, based on our in-training exam, and that's I believe based on aggregate evaluations from peers and preceptors over the course of the year, or the course of the 6 months.” –IM1 PGY-3 Resident |
| Interviewer: “What is the proportion of those meetings [with residents] that occur every 6 months?” Peds3 PD and CCC Chair: “I don't know. Not as high as I would like. I actually don't have the number for you.” | “I generally schedule them for an hour at a time and go over everything from conference attendance to completion of their patient logs and procedure logs and then I go specifically into each of their milestones. And I reiterate at the beginning of the meeting, ‘Are you familiar with milestones?' They generally say yes.” –EM5 CCC Chair |
| Directionality | Limited to faculty and program leadership providing feedback to residents; no mechanism for capturing resident feedback concerning faculty members or the program. | “During my midyear evaluation and the end of year evaluations, he [PD] also gives us a hand out, and it shows our statistics, individual hitting of milestones, so when I was an intern last year and I had my midyear evaluation, basically I was hitting all the ones for my level, so then it seems like we go up as we age. From my understanding back then, I was like ‘okay an intern hits this level,' and then as you progress and as you get better slowly, you slowly hit the milestones that, more in the middle. And then next year seniors were expected to be on the higher end of the milestones, and whatnot.” –EM1 PGY-2 Resident | Residents receive the opportunity, in person or through a digital interface, to provide some general comments about the program and the overall educational quality of their training. | “Towards the end of the session we'll have opportunities to talk about our goals as what we want to do after residency, and any other questions about things that we can do to improve the program, things I thought were good, that's where I was bad. Things that the program would like to hear about.” –IM1 PGY-2 Resident | Residents regularly provide feedback about the faculty, staff, and rotations. | “When we go to the ICU, when we go to toxicology or ultrasound, or whatever we're doing, every month we have to sit down and then evaluate that rotation. That's half of it you know, ‘This a low yield anesthesia rotation, we should consider other resources to change it up.' “…then also how you felt with regards to the volume of work you had. You rate all the things one to 10, and then there's a comment section for your ranting. You rate. You talk about the people on the rotation...it's your opportunity to praise other departments that have great nurse practitioners that invested themselves with extra learning or identifying any red flags for certain rotations where it's not the emergency department.” –EM2 PGY-3 Resident |
| “Then, they just get feedback from us on things that we wish were different or that kind of thing.” –Peds PGY-2 Resident | ||||||
| Moderating Forces | ||||||
| Perceived validity of source and content | Low perceived validity attributable to assessment sources and content; devaluation of feedback due to perceived subjectivity. | “I think they try to make the process as clear as possible, but I think there's an inherent vagueness from the resident perspective as to how they get those numbers, because as far as I know, the only data they have is our clinical evaluations, and then they use things like attendance in conference and prep questions and things like that, and then they'll use personal experience...To me, it seems like there's a lot of hand-waving vagueness to it that I think will improve as we use them more. So, I think people will become a little more familiar with what's in the milestones and hopefully gear our evaluations, which I think they're in the process of doing now.” –Peds3 PGY-3 Resident | Signs of developing trust in sources of feedback; processes in place adjust for outlier ratings or comments while still incorporating diverse inputs. | “I feel like the summary comment sheet is probably the most helpful thing. Because they might compare, well, this ER doctor said this about them. And right below them, the rheumatologist said this…they'll be reading that, and they look and see who...they go back to the full evaluation and see who the ER doctor was, or who the rheumatologist was...what that person's personality is like? Are they a hawk or a dove? And then they say, ‘Yeah, well [faculty member] said that. But, you know, he likes everybody.'” –FM2 PD and CCC Chair | Residents perceive feedback as valid and valuable; aggregated feedback from Milestones ratings and CCC reports is viewed as objective and not overly affected by rater stringency or leniency; program leadership discusses formal feedback and coproduces individual plans with residents for future development. | “From my perspective, the CCC takes some of the responsibility for giving residents objective feedback off the shoulders of an individual, ie, me, and puts it onto this much larger entity and perhaps depersonalizes it a little bit. The residents do take it fairly seriously and from that standpoint it is helpful for me because it is the conduit through which they receive formalized feedback on all aspects of their performance, which then can be discussed and sort of picked apart on a much more personal level. And with a supportive role with the program director, rather than me being both the person who tells them that they're not up to snuff in certain areas and oh, let me help you get there.” –EM2 PD |
| Individual receptivity | Residents have difficulty understanding areas for improvement; lack of adequate feedback language to address problems with individual resident insight. | “I think lack of insight is the usual problem. Particularly, most of these people are smart, but some of them don't realize how they come across to other people. The team may not be functioning well, even though they think they're doing fine. That can be a very painful discussion because they don't have the insight.” –IM3 CCC Chair | When learner insight is a problem, feedback is delivered objectively with examples, and a specific focus is placed on improvement. | “There have been times when some people have taken a while to accept the feedback and think that there is no problem. For some of them, you just need to tell them, ‘Hey, this is what you can do to try to move from this area to the next.' Then they're fine and they go and do it on their own. For others, ‘It seems like you might need a little bit more help in this. How can we help you go and think about what would be helpful for you?' They invest in that plan with us and we create a plan that actually helps them get better.” –EM3 PD | Formal feedback takes the learner perspective into account; insight and self-reflection are queried. When there are insight issues, program leadership delivers feedback objectively and with a developmental framework. | “It depends how self-reflective the people are. I don't know if I can correlate with top or bottom in that way. A lot of people are hard on themselves so even the top people will say I'm not good enough or I felt like all these numbers were really high and I would've evaluated it as lower. And sometimes that's good to say well, here's why we think you are doing that well and you should have confidence in yourself. Then trying to identify, sometimes I think often they kind of self-identify their relative strengths. Sometimes it's very hard to get caught up in trying to steer people a little bit away from worrying about are you a three versus a 3.5 or a four, to kind of pull back to the big picture then and reapplying these things. Well, how are you going to look at this differently? As you because next year, in the next half how are you going to improve upon this to move that up in that way?” –Peds2 PD |
| General Outcomes | ||||||
| Individual behavior change | Little to no change in resident behavior following feedback. | Interviewer: “What do they do with the information that you give them if you know? You sit down; you go over the milestones' ratings and evaluations, what do they then do?” EM1 PD: “I don't know. I assume they have a drink.” | Residents consistently receive and act on specific and timely targeted feedback, opportunities for improvement regularly pointed out. | “A piece of feedback I got from my first rotation was to have more specific review of systems, so then the next rotation my goal was to pare down, be specific or more detailed in my review of systems, depending on the patient's complaint. That was nice, I was like ‘Okay, I'll do this.' Then I practiced it and I worked on it and it got better in the next month or two. I made that goal on my own, based on the feedback.” –IM1 PGY-3 Resident | The program has a system of feedback in place involving program leadership and faculty, residents respond by significantly adjusting behavior patterns. | “We had been working with one of our senior residents who's not perfect by any means, but has started out residency [angering] every nurse at every emergency department, and they took that feedback to heart, they had a nurse mentor, they attended communication classes, and their trajectory has been overwhelmingly, tremendously positive. Occasionally they need to get called in again every six months to get yelled at, but it's every 6 months rather than every week. That's one of those wins. Or you work with someone who's got knowledge-based deficits to really build their knowledge base, and you see them do the work and apply it in the clinical setting and improve their metrics.” –EM4 PD |
| Resident development | Milestones ratings and feedback have little influence on residents, perceived merely as an affirmation of steady progress as opposed to any real roadmap to development. | “I would say that my learners who are not struggling, that it is largely something that we review with them at their semiannual evaluations and that I think they pay very little attention to because it is essentially, ‘You're on track, you're doing well.' You know? Then we give them specific things to work on, but it usually doesn't have a whole lot of teeth in it. So, I think it's, like CCC CHAIR was saying, ‘I'm doing okay.' I don't know that they get more granular than that.” –IM1 PD | Increased resident awareness and valuation of competency domains; residents understand their development and that of their peers. | “I think one thing that I do differently would be my view of a hospital and the systems is different. I see better how things are related, and just how pieces are stuck together is a little clearer to me than it was initially. I think these evaluations help and say, ‘Okay these are the kind of things I'll focus on in looking out for the way things develop or the way things interact.' “Then I think as far as professionalism goes, it's just good to say ‘Oh, you're on the right track' or for some people it's just good to have a reminder to be nicer to people. I don't know if it's just the process of going through residency or whether it's the Milestones themselves that have helped me be more aware of how the process works, how the education system's developed to train us to be doctors.” –IM1 PGY-3 Resident | Program leadership provides a supportive context for constructive feedback and sets the groundwork for the Milestones as the developmental framework. Residents are comfortable progressing and can respond well to both specific and general feedback. | “Residency is a rather nerve-racking time and so, of course, you want to progress at the level of your peers at the very least. And so that conversation was a little bit ... It's a little bit scary, right? Whenever you sit down, and you know that you're deficient in some area. In my case, I'm in a very supportive residency program and I feel that, of course, the resident program will not just advance you forward if you're not progressing adequately, but I feel like they provide you the resources to progress through residency as efficiently and as successfully as possible. I was a little bit nervous prior to going into the meeting. However, my fears were quickly assuaged by my advisor at the time that everybody has at least one thing that they have to improve upon, and this is your one thing. I felt a little bit better and was able to refocus my efforts and it looks like those efforts have paid off.” –EM2 PGY-2 Resident |
Abbreviations: CCC, clinical competency committee; EM, emergency medicine; FM, family medicine; IM, internal medicine; PD, program director; PEDS, pediatrics; PGY, postgraduate year.
For all 3 domains, the early stage of implementation was defined by programmatic challenges, including large resource investments without returns as well as low levels of engagement among residents and faculty. The transition stage of implementation was characterized by modest positive outcomes in terms of resident development, assessments, and CCC meeting efficiencies, as well as increased engagement. The later stage of implementation corresponded with routine use of the specialty-specific Milestones subcompetencies by faculty and CCCs. This included increased faculty skill and consistency in assessing residents and providing feedback. Programs also adopted continuous quality improvement approaches to their assessment, CCC meeting, and feedback processes in this stage.
Assessments
Table 3 describes assessment process themes, moderating forces, and outcomes reported by study participants. Programmatic processes included assessment approaches, tools, and learner self-assessment. Faculty engagement levels, attitudes of program leadership, and perceived valuation of the Milestones were identified as forces that moderated the effectiveness of the identified processes. Outcomes of assessment system implementation included differences in coverage of the 6 general competencies as well as the varying resource burdens experienced by faculty and other assessors.
The early implementation stage of program assessments was defined by difficulties transitioning to a competency-based framework of learner development, burdens on faculty assessors, and negative perceptions of the Milestones by program leadership and faculty. The transition stage was characterized by increasing engagement of assessors and the emergence of a common language to describe learner development along the subcompetency levels. The later implementation stage involved the development and fine-tuning of assessment tools targeted to the expertise and clinical observation opportunities of assessors, reduced faculty burden, and positive attitudes toward the specialty Milestones by faculty. In this later stage, as reported by PDs and CCC chairs, faculty realized the value of using the developmental subcompetency levels to catch struggling residents early and to provide more objective descriptions of professionalism and communication skills than were available before the Milestones.
Clinical Competency Committees
Table 4 highlights the CCC process themes, forces, and outcomes. Programs reported differing strategies for their pre-meeting preparation, data review, and decision processes. The reported role of the CCCs also varied among programs, as did CCC membership and the training provided to CCC members. The available institutional support and existing data management systems moderated the effectiveness of these CCC processes. In terms of outcomes attributable to CCC meetings, themes mirrored the assessment outcomes and included program resource expenditure and the extent to which CCCs uncovered specific problems in resident development.
The early stage of CCC meeting implementation was characterized by an absence of pre-meeting preparations, little or no training of CCC members, and perceptions by program leadership that gaps in evaluation data were an inconvenience instead of an opportunity for curriculum or evaluation improvement. In the transition stage, program leaders implemented CCC member training, distributed resident evaluation data before the meeting for member review, and devised visual representation of learner data to increase meeting efficiency. In the later stage, program leaders preassigned resident ratings to anchor learner-specific discussions, and CCC deliberations were summarized to provide context for the feedback given to residents.
The number of residents in specific programs also affected the CCC processes, but program strategies did not fit along an implementation continuum. One CCC chair from a small IM program commented on how the CCC addressed coverage difficulties when a resident required time away from providing clinical care. Similarly, frequency, duration, and setting of CCC meetings differed across programs as a result of contextual differences, and different implementation stages were not apparent.
Feedback
Feedback processes, forces, and outcomes are described in Table 5. The feedback received by residents differed across programs in terms of content, timing, and directionality. Learner perceptions of feedback validity as well as individual receptivity were found to influence the effects of received feedback. Resident development and reported instances of individual behavior changes emerged as outcomes of the feedback processes.
In the early implementation stage, resident feedback was inconsistent, devalued by both faculty and residents, and generally not actionable. Feedback in the transition stage was better received and resulted in individual behavior changes among residents. The later stage of program implementation was characterized by consistent, frequent, objective, and actionable feedback to residents, where faculty and program leaders utilized the Milestones to provide context to enable residents to understand and take ownership of their developmental trajectory.
General Characteristics of the Implementation Stages
Generally, when examples were given from the early implementation stage, interviewees were describing past experiences or outcomes, either in the first year of Milestones reporting or when they first joined the residency program. A few programs reported persistent implementation challenges at the time of the interviews.
The transition stages outlined in the tables signaled increasing familiarity with the new competency-based framework. Certain programs adopted practices that accelerated this familiarity, including consistent and brief reintroductions of the subcompetencies at the beginning of CCC meetings as well as concise reviews of the purpose of the Milestones and the developmental trajectory before delivering feedback to residents.
The later stage of implementation was often described by program directors, CCC chairs, and residents in aspirational terms. Many interviewees, including residents, responded to the interview questions with explanations of the long-term goals for their programs or for their specialty communities. Several programs did report specific successes, where certain processes were understood, valued, and executed by most of the faculty and the residents within the program.
Resident Experience
Resident interview transcripts proved invaluable in illustrating the learner experience across the assessment, CCC, and feedback domains. Most resident interviewees did not report observing the ways in which assessment data reach the CCC meeting and how these data are used by CCC members to make Milestone ratings and resident progression decisions. Still, many voiced a desire to be more involved in the design and implementation of the assessment and feedback processes. Others reported an appreciation for the structured framework to guide their development and even allow them to advocate for teaching or observation germane to specific subcompetencies. “Seeing where my weaker points were, and then also getting affirmed in where my stronger points were, being able to ask specifically at the start of a rotation, ‘I need to work on my procedural-based competencies in this area,' or ‘I need to work on this specific thing, do you mind setting time aside to help me with this area over the next 2 weeks?'” –IM PGY-3 Resident
Discussion
In this study of 16 residency programs, varying programmatic assessment, CCC, and feedback strategies were placed along implementation continua. The individual process components, moderating factors, and outcomes were organized as to whether they were characteristic of the resource-intensive early stage, the emerging efficiencies of the transition stage, or the fine-tuning activity of the later stages of implementation.
In our sample, certain programmatic implementation strategies were associated with reported positive outcomes. Consistent with many implementation frameworks, programs that accounted for contextual factors, such as program size and institutional support, and employed an iterative quality improvement approach to implementation also reported relatively low resource burdens, good resident and faculty engagement, and increased efficiency across each of the implementation processes.15–19 For example, programs that tailored assessment forms to the expertise and observation opportunities of raters across clinical rotations also reported improved completion and accuracy. Further, programs that checked in with faculty and CCC members periodically also reported increased faculty engagement and an emerging sense of faculty ownership over assessment and feedback processes.
We believe that these descriptions of the implementation processes can inform residency program leaders as to the various stages they will experience and help them progress more efficiently and effectively toward the later stages described in the continuum. Unfortunately, many attending physicians and residents remain skeptical of the validity and value of using the competency-based Milestones framework to provide context for residency training. Residency program leadership, in concert with national leaders in medical education and within the specialty communities, must communicate the benefits of competency-based medical education and disseminate strategies and practices that both foster faculty and resident engagement and decrease program and assessor burden.
Future Milestones research should apply these implementation continua or other implementation frameworks across a more comprehensive set of processes and could query possible differences among medical, surgical, and hospital-based specialties.
There are several limitations to this study. While we discovered new themes for the thematic template, we did not reach our a priori goal of 24 programs. The results are likely influenced by non-response bias, with additional themes potentially missed. We did, however, interview 44 participants, including program directors, CCC chairs, and residents. Recruitment also occurred over a longer period than anticipated, meaning that programs were likely at different stages of Milestones implementation when they were interviewed.
Another limitation of this analysis is that it relies solely on the perceptions and experiences of the interview participants. While respondents were asked about their perceptions of the experiences of faculty members and non-interviewed residents, their own experiences likely influenced their responses.20 In addition, the self-reported survey data collected before the interviews were not independently verified and were subject to self-reporting bias.21
Conclusions
Using template analysis of program director, CCC chair, and resident interview transcripts, we found that the implementation of assessment, CCC meeting, and feedback processes was moderated by contextual forces and resulted in varying outcomes across EM, IM, pediatrics, and FM programs. These processes, moderating forces, and outcomes can be characterized across 3 distinct stages on an implementation continuum. Based on the interview transcripts, we characterized the early stage of implementation by resource challenges, the transition stage by increased efficiency and engagement, and the later stage by skillful stakeholder execution and iterative efforts to fine-tune assessment, CCC, and feedback processes.
Acknowledgments
The authors would like to thank Lisa Conforti, MPH, for her contributions to this manuscript.
Footnotes
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.
References
- 1.Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051–1056. doi: 10.1056/NEJMsr1200117. [DOI] [PubMed] [Google Scholar]
- 2.Kavic MS. Competency and the six core competencies. JSLS. 2002;6(2):95–97. [PMC free article] [PubMed] [Google Scholar]
- 3.Swing SR, Beeson MS, Carraccio C, et al. Educational milestone development in the first 7 specialties to enter the next accreditation system. J Grad Med Educ. 2013;5(1):98–106. doi: 10.4300/JGME-05-01-33. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.The Family Medicine Milestone Project. J Grad Med Educ. 2014;6(1 suppl 1):74–86. doi: 10.4300/JGME-06-01s1-05. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Hodges BD. A tea-steeping or i-Doc model for medical education? Acad Med. 2010;85(suppl 9):34–44. doi: 10.1097/ACM.0b013e3181f12f32. [DOI] [PubMed] [Google Scholar]
- 6.ten Cate O, Regehr G. The Power of subjectivity in the assessment of medical trainees. Acad Med. 2019;94(3):333–337. doi: 10.1097/ACM.0000000000002495. [DOI] [PubMed] [Google Scholar]
- 7.Hauer KE, Cate OT, Boscardin CK, et al. Ensuring resident competence: a narrative review of the literature on group decision making to inform the work of clinical competency committees. J Grad Med Educ. 2016;8(2):156–164. doi: 10.4300/JGME-D-15-00144.1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Ekpenyong A, Baker E, Harris I, et al. How do clinical competency committees use different sources of data to assess residents' performance on the internal medicine milestones? A mixed methods pilot study. Med Teach. 2017;39(10):1074–1083. doi: 10.1080/0142159X.2017.1353070. [DOI] [PubMed] [Google Scholar]
- 9.Conforti LN, Yaghmour NA, Hamstra SJ, et al. The effect and use of milestones in the assessment of neurological surgery residents and residency programs. J Surg Educ. 2018;75(1):147–155. doi: 10.1016/j.jsurg.2017.06.001. [DOI] [PubMed] [Google Scholar]
- 10.Accreditation Council for Graduate Medical Education. The Milestones Guidebook 2020. https://www.acgme.org/Portals/0/MilestonesGuidebook.pdf Accessed February 18, 2021.
- 11.Edgar L, Roberts S, Holmboe E. Milestones 2.0: a step forward. J Grad Med Educ. 2018;10(3):367–369. doi: 10.4300/JGME-D-18-00372.1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Pawson R. The Science of Evaluation A Realist Manifesto. Thousand Oaks, CA: SAGE Publications; 2014. [Google Scholar]
- 13.Holmboe ES, Call S, Ficalora RD. Milestones and competency-based medical education in internal medicine. JAMA Intern Med. 2016;176(11):1601–1602. doi: 10.1001/jamainternmed.2016.5556. [DOI] [PubMed] [Google Scholar]
- 14.King N. Symon G, Cassell C, editors. Template analysis. Qualitative Methods and Analysis in Organizational Research A Practical Guide. 1998. In. eds. Thousand Oaks, CA: SAGE Publications.
- 15.Field B, Booth A, Ilott I, Gerrish K. Using the knowledge to action framework in practice: a citation analysis and systematic review. Implement Sci. 2014;9:172. doi: 10.1186/s13012-014-0172-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Qual Saf. 2014;23(4):290–298. doi: 10.1136/bmjqs-2013-001862. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26(1):13–24. doi: 10.1002/chp.47. [DOI] [PubMed] [Google Scholar]
- 18.Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. doi: 10.1186/1748-5908-4-50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53. doi: 10.1186/s13012-015-0242-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Schamberger MM. Elements of quality in a qualitative research interview. S A Archives Journal. 1997 39:25. http://search.ebscohost.com/login.aspx?direct=true&db=eue&AN=386202&site=eds-live.
- 21.Althubaiti A. Information bias in health research: definition, pitfalls, and adjustment methods. J Multidiscip Healthc. 2016;9:211–217. doi: 10.2147/JMDH.S104807. [DOI] [PMC free article] [PubMed] [Google Scholar]
