Arthritis Rheumatol. 2022 Aug 11;74(10):1625–1627. doi: 10.1002/art.42158

The Transition From Residency to Fellowship: Enhancing Training by Increasing Transparency

Eli M Miloslavsky 1, Anisha B Dua 2
PMCID: PMC9804378  PMID: 35536162

Introduction

The events of the last several years have highlighted the challenges of bias and systemic racism and the importance of trainee wellness and support within graduate medical education. Coupled with the rapid shift to virtual interviews, these events spotlight a critical point in training at which residents transition into fellowship programs. Each trainee enters a fellowship program with a unique background and skillset, which can make it challenging for a program to meet the needs of its fellows. In order to provide an ideal training environment, the residency‐to‐fellowship transition process should ensure an optimal match between the program and the candidate and mitigate bias inherent to the application process (pre–match phase). Additionally, the transition process should facilitate rapid and accurate learner assessment and enable programs to provide an appropriate level of support for their accepted candidates (post–match phase). However, the current process is not optimized to accomplish these goals.

There has been a robust national conversation regarding the transition from medical school to residency, culminating with a recent report by the Coalition for Physician Accountability (CoPA) (1). This initiative outlined important goals such as increasing transparency in the application process, better defining competencies and assessment metrics, addressing inequities, and improving the post–match transition process. The residency‐to‐fellowship transition faces many of the same challenges. Herein, we review the major barriers to an optimal transition process and propose potential solutions.

Pre–match phase: fellowship preparation and the application process

Student/resident performance assessment

An optimal application process would ensure the best match between the applicant and the program while minimizing bias, inequity, stress, and financial expenditure. However, emerging evidence suggests that the core assessments program directors use in fellow selection have significant limitations, including bias and structural racism.

The residency training program of an applicant is one of the most important factors in a fellowship application. The clinical training and opportunities for research, mentorship, and leadership are highly dependent on where the candidate trained in residency (2). However, the key determinants of the residency match, such as clerkship grades, United States Medical Licensing Examination (USMLE) scores, and Alpha Omega Alpha (AΩA) membership, have all been shown to have significant biases and limitations. Moreover, these metrics are also used in the fellowship application process, amplifying their impact. USMLE scores have been associated with performance on in‐training and certification exams; however, USMLE scores have not been shown to predict achievement of competencies during residency training (3). A study of more than 45,000 medical students found that USMLE Step scores were lower among underrepresented in medicine (URiM) students and female students as compared to White male students, suggesting possible bias (4). Clerkship grades are subject to shortcomings in assessment instruments, differing criteria for grading across schools, and variability in faculty evaluation skills. Evidence suggests that there is also systemic racial bias in clerkship grading and selection of candidates for AΩA membership, with a study reporting that Black and Asian medical students were less likely to be awarded AΩA membership than White medical students after controlling for USMLE Step 1 scores and extracurricular activities (adjusted odds ratio for Black students 0.16 [95% confidence interval (95% CI) 0.07–0.37]; adjusted odds ratio for Asian students 0.52 [95% CI 0.42–0.65]) (5).

Letters of recommendation are challenging to interpret because they are highly subjective and can lack substantiating objective criteria. They may also not be comparable across residents because their sources vary: an individual faculty member may write only 1 or a few letters each year, whereas a program director may be accustomed to writing many. Even letters written by program directors that adhere to the guidelines for standardized letters of recommendation have demonstrated race and sex bias, with letters for URiM applicants containing more doubt‐raising language and more terms describing empathy and interpersonal skills, 2 types of language that have been negatively associated with hiring in academia (6). Program directors in particular may face a conflict of interest between accurately describing the skills of their trainees and enhancing the reputation of the residency program. However, we would argue that increased transparency would garner trust in the residency program and minimize the likelihood of applicants matching into programs where they may struggle.

Away rotations also introduce inequity into the residency‐to‐fellowship transition process, as financial constraints and flexibility may disproportionately affect residents who are from underprivileged backgrounds or groups who are URiM. These inequities may particularly impact residents in smaller programs that do not have structured rheumatology rotations. In sum, the metrics in the current application process for both residency and fellowship programs have drawbacks and may disadvantage students who are URiM, as well as students whose relative performance improves later in medical school, thereby limiting the potential for an optimal match between the applicant and the program. Awareness of these limitations and working to reduce bias in assessment are important steps in optimizing the fellowship program application process.

Fellowship applications and interviews

Fellowship programs often receive more applications than can be meaningfully reviewed, and applications may lack discrete details that would help to meaningfully differentiate between similar candidates, making holistic review challenging. With a consistently increasing volume of applicants per fellowship spot (7), filters (such as AΩA and exam scores) may be more frequently used to identify candidates who meet selection criteria. The use of these filters in the application review process can perpetuate bias (5). Utilizing best practices during application screening and during the interview process may ameliorate these challenges (Table 1).

Table 1. Suggested best practices for fellowship program application screening and interviews

Application screening
  • Determine program values and priorities: Defining these attributes with input from faculty participating in recruitment can help create a shared value model and enable the program to focus on applicant attributes that best match the mission of the program.
  • Maximize the utility of data in the application: Understanding the meaning of "code words" and tiered rankings in the program director's letter across years may allow for direct comparison of applicants. Triangulating performance data from medical school, residency, and letters of recommendation can shift focus away from outlier data.
  • Limit bias: Strategies such as implicit bias training, screening without applicant photos, and utilizing multiple screeners may help to reduce bias inherent in the screening process.

Interviews*
  • Increase standardization and limit bias: Utilizing standard interview questions or multiple mini interviews, as well as blinding interviewers to parts of the application that may introduce bias (USMLE scores, AΩA), can limit interviewer bias.
  • Limit the impact of technology and the applicant's living situation: Provide training and support for both applicants and faculty around the use of technology and the increased cognitive load present during virtual interviews. Allow make‐up opportunities for interviews disrupted by technology failure.

* For more details, see refs. 14 and 15. USMLE = United States Medical Licensing Examination; AΩA = Alpha Omega Alpha.

From a national perspective, and mirroring recommendations set forth by CoPA, specialty‐specific best practices for recruitment should be considered in order to increase diversity across the educational continuum, and this information should be disseminated to program directors, residency programs, and institutions. Further, the development of a database of fellowship program applicants that is widely accessible, reliable, and searchable for the characteristics (demographics, geography, scores, degree, visa status, and other areas of interest) of individuals who applied, were interviewed, were ranked, and matched for each subspecialty fellowship program would enhance transparency and enable applicants to focus more on the nature of programs to which they should apply. This should be available at no cost to applicants and their advisors. Career advising is nuanced and can also introduce conflicts of interest. Reflective and honest discussion between applicants and their advisors, combined with accurate, transparent portrayal of information by programs, can enhance the value and outcomes of the application process.

The fellowship program application process is further confounded by the ways in which interviews are offered to candidates. The process by which interviews are offered varies from program to program and can be unnecessarily complex, with little regulation or structure. This can lead to increased applicant anxiety, “hoarding” of interview invitations, and hindering of the mutual interests of applicants and fellowship programs. Equity and fairness for candidates and fellowship programs could be improved by the implementation of standards for the interview offer and acceptance, including standards for the timing and methods of communication. In residency programs, there has been discussion of implementing an “early match” process for applicants and potentially limiting the number of interviews each applicant may attend. While this could significantly level the playing field by redistributing interview slots to other interested candidates, it could also have negative consequences, such as potentially increasing unmatched programs or applicants, or limiting opportunities for some candidates to fully explore potential programs, mentors, and training resources. A coordinated, informed, and concerted effort from all stakeholders in the residency‐to‐fellowship transition process will be needed to minimize biases, enhance opportunities, and optimize the match process for fellowship applicants and programs.

Post–match phase: transition to fellowship

In addition to ensuring the best match between the fellowship program and the applicant, residency assessment metrics should help fellowship programs train fellows effectively. Fellowship applications frequently lack detailed, objective, and actionable data on resident performance, and this gap is compounded by the fact that evaluations from the final year of residency are not disclosed to fellowship programs. Studies suggest that milestone ratings in residency correlate with milestone ratings in fellowship (8), but that ratings may decline from the end of residency to the beginning of fellowship (9). Therefore, a mechanism that allows residency programs to provide fellowship programs with data on resident performance over the final year would help facilitate the assessment of new fellows and allow rating differences to be discussed with trainees, furthering mutual understanding and buy‐in.

Individualized learning plans (ILPs) developed by residency programs and shared with the fellowship program would further enhance the residency‐to‐fellowship handover (10). The amount of time new fellows must devote to becoming familiar with a new institution, relocating to a new geographic area, and preparing for the Internal Medicine board exam makes the rapid and accurate assessment of new fellows challenging. Further, the ability of new fellows to be effective partners in developing a learning plan may be limited. Studies have demonstrated that learners, particularly those who are struggling, can lack the ability to effectively assess their skills (11). Fellows may also be hesitant to share their learning needs and perceived weaknesses with their new training program. An ILP is a learner‐directed tool with which trainees identify their personal educational goals, perform self‐evaluation in competencies and/or milestones, and create actionable objectives in consultation with faculty that are reviewed periodically in order to enhance the trainee's professional development (12). An ILP, agreed on by both the trainee and the residency training program, that is provided transparently to the fellowship program would facilitate early assessment and partnering with the trainee to develop and carry out a learning plan. While there is a theoretical concern that such a process would influence fellow assessment, a recent study using simulated encounters suggested that an educational handover does not influence subsequent assessment (13).

Conclusion

Limitations in assessment instruments, in the application process, and in the communication of residents’ strengths and areas of development hinder the ability of fellowship programs to provide optimal training. More objective and transparent assessment of residents, optimization of the application process, and the sharing of resident evaluations and ILPs between the residency and fellowship programs after the match may enhance the residency‐to‐fellowship transition, help limit the perpetuation of health disparities and lack of diversity in the rheumatology workforce, and positively impact trainees, faculty, and most importantly our patients.

Supporting information

Disclosure Form

REFERENCES

  1. Coalition for Physician Accountability. Initial summary report and preliminary recommendations of the undergraduate medical education to graduate medical education review committee (UGRC), 2021. URL: https://physicianaccountability.org/wp‐content/uploads/2021/04/UGRC‐Initial‐Summary‐Report‐and‐Preliminary‐Recommendations‐1.pdf.
  2. Krueger CA, Helms JR, Bell AJ, Israel H, Cannada LK. How the reputation of orthopaedic residency programs is associated with orthopaedic fellowship match results. J Bone Joint Surg Am 2020;102:e28.
  3. Prober CG, Kolars JC, First LR, Melnick DE. A plea to reassess the role of United States Medical Licensing Examination Step 1 scores in residency selection. Acad Med 2016;91:12–5.
  4. Rubright JD, Jodoin M, Barone MA. Examining demographics, prior academic performance, and United States Medical Licensing Examination scores. Acad Med 2019;94:364–70.
  5. Boatright D, Ross D, O'Connor P, Moore E, Nunez‐Smith M. Racial disparities in medical student membership in the ΑΩΑ honor society. JAMA Intern Med 2017;177:659–65.
  6. Zhang N, Blissett S, Anderson D, O'Sullivan P, Qasim A. Race and gender bias in internal medicine program director letters of recommendation. J Grad Med Educ 2021;13:335–44.
  7. The National Resident Matching Program. Fellowship match data & reports, 2021. URL: https://www.nrmp.org/fellowship-match-data/.
  8. Heath JK, Wang T, Santhosh L, Denson JL, Holmboe E, Yamazaki K, et al. Longitudinal milestone assessment extending through subspecialty training: the relationship between ACGME internal medicine residency milestones and subsequent pulmonary and critical care fellowship milestones. Acad Med 2021;96:1603–8.
  9. Sawyer T, Gray M, Chabra S, Johnston LC, Carbajal MM, Gilliam‐Krakauer M, et al. Milestone level changes from residency to fellowship: a multicenter cohort study. J Grad Med Educ 2021;13:377–84.
  10. Morgan HK, Mejicano GC, Skochelak S, Lomis K, Hawkins R, Tunkel AR, et al. A responsible educational handover: improving communication to improve learning. Acad Med 2020;95:194–9.
  11. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self‐assessments. J Pers Soc Psychol 1999;77:1121–34.
  12. Li ST, Burke AE. Individualized learning plans: basics and beyond. Acad Pediatr 2010;10:289–92.
  13. Dory V, Danoff D, Plotnick LH, Cummings BA, Gomez‐Garibello C, Pal NE, et al. Does educational handover influence subsequent assessment? Acad Med 2021;96:118–25.
  14. Marbin J, Hutchinson YV, Schaeffer S. Avoiding the virtual pitfall: identifying and mitigating biases in graduate medical education videoconference interviews. Acad Med 2021. doi: 10.1097/ACM.0000000000003914. E‐pub ahead of print.
  15. Huppert LA, Hsiao EC, Cho KC, Marquez C, Chaudhry RI, Frank J, et al. Virtual interviews at graduate medical education training programs: determining evidence‐based best practices. Acad Med 2020. doi: 10.1097/ACM.0000000000003868. E‐pub ahead of print.

Articles from Arthritis & Rheumatology (Hoboken, N.J.) are provided here courtesy of Wiley.
