The Journal of Education in Perioperative Medicine (JEPM). 2026 Apr 13;28(1):E760. doi: 10.46374/VolXXVIII_Issue1_Bhardwaj

Association of the Change in USMLE Step 1 Scoring With Research Output Among Applicants to Anesthesiology Residency Programs: A Retrospective Observational Study

Ashna Bhardwaj 1, Pratima Bajaj 1, Karim Rifai 1, Akshar Patel 1, Arjun Bhatt 1, Ali Abolhassani 1, Jack McClain 1, Asim Ahmed 1, Caryl Bailey 1, Mary Arthur 1
PMCID: PMC13070739  PMID: 41982265

Abstract

Background:

As anesthesiology residency becomes increasingly competitive, the transition of US Medical Licensing Examination (USMLE) Step 1 to pass/fail scoring has raised questions about the evolving role of research in applicant selection. Whereas national data suggest rising research engagement, there is limited insight into publication output among medical students applying to anesthesiology. The purpose of this study is to evaluate the relationship between the USMLE Step 1 transition and preresidency publication output of matched anesthesiology residents and to assess the role medical school National Institutes of Health (NIH) funding level has on publication metrics.

Methods:

This retrospective cohort study analyzed the research output of 461 anesthesiology residents across 30 programs from the classes of 2026 (numerical Step 1 scoring) and 2028 (pass/fail Step 1 scoring). Medical schools were stratified by NIH funding level to evaluate its association with research output. PubMed-indexed publications were examined using metrics including first author status, specialty relevance, citation counts, and publication types. Group differences were compared using Mann-Whitney U tests, with total publications serving as the primary outcome.

Results:

There were no significant differences in research output between pre- and post-Step 1 transition cohorts across all metrics. The 50th percentile in total publications was 0 for both cohorts. There was also no difference in publication output when groups were stratified by NIH funding level. Logistic regression analysis showed that high NIH funding (odds ratio 1.87, 95% confidence interval 1.21-2.89) was a significant predictor of having at least 1 publication.

Conclusions:

Despite increasing emphasis on research in residency applications, our findings show no significant change in publication output among anesthesiology applicants in the first class following the Step 1 transition. Whereas NIH funding level was associated with a greater likelihood of having at least 1 publication, it was not linked to higher overall publication output. This study provides a timely baseline for understanding scholarly activity in anesthesiology applicants as the residency evaluation landscape continues to evolve.

Keywords: Anesthesiology residency, research productivity, medical school, US Medical Licensing Examination

Introduction

Anesthesiology has emerged as an increasingly competitive medical specialty, fueled by its attractive earning potential, the essential role anesthesiologists play in patient care, and growing interest from highly qualified applicants. According to the National Resident Matching Program (NRMP), anesthesiology residency applicants rose from 2418 in 2020 to 3034 in 2024, representing a 25% increase in applicants. Although the number of residency positions offered increased from 1370 to 1695 over the same period, the specialty has consistently attracted nearly twice as many applicants as available spots, underscoring its competitiveness.1,2 During this same period, unmatched applicant rates rose from 10.4% to 14.8% for MD seniors and 21.3% to 41.7% for DO seniors.3-6 As more applicants compete for limited positions, programs are compelled to become increasingly selective in their matching decisions, whereas applicants must adapt their strategies to enhance competitiveness.

Although variability exists among program directors regarding the criteria for offering interviews, the US Medical Licensing Examination (USMLE) Step 1 score has historically been regarded as 1 of the most important factors across specialties, often ranking higher than research output or clinical performance grades.7,8 However, with the transition of Step 1 from numerical scoring to pass/fail in January 2022, greater emphasis is now being placed on other application components, such as USMLE Step 2 CK scores, letters of recommendation, and research productivity.9 According to the NRMP's Charting Outcomes in the Match, the average number of abstracts, presentations, and publications among matched US MD seniors applying to anesthesiology rose from 5.2 in 2020 to 9 in 2024, whereas matched DO seniors showed a similar increase from 2.5 to 4.9.3-6 These findings suggest that applicants are turning to research activities to distinguish themselves, particularly as traditional quantitative metrics have become less influential. Notably, NRMP data are self-reported and include abstracts and presentations together with peer-reviewed publications, making it difficult to assess the true volume of full-length manuscripts published by applicants. In contrast, our study focuses exclusively on published manuscripts, which are often considered the highest form of scholarly activity for medical students.

Whereas research productivity is increasingly emphasized, output can be influenced by external factors beyond an applicant's individual effort or ability. One such factor is institutional research infrastructure, commonly approximated by medical school National Institutes of Health (NIH) funding. Attending a top NIH-funded medical school has been linked to higher publication output and a lower likelihood of never having published.10 This relationship is likely multifactorial: schools with greater NIH support often offer enhanced access to research infrastructure and mentorship, but they may also attract students who are research-oriented with prior scholarly experience. Conversely, students at under-resourced institutions may face both structural barriers that limit research access and lower baseline interest or exposure to research, each of which could contribute to reduced publication output and perceived competitiveness in the residency match.

This study aims to examine trends in preresidency research output before and after the USMLE Step 1 transition to pass/fail scoring. The primary objective is to compare publication volume during medical school among matched anesthesiology residents from cohorts applying before and after the scoring change. The secondary objective is to assess the relationship between medical school NIH funding level and publication volume. As the first study to provide quantitative data on publication metrics specific to anesthesiology applicants, these findings may offer valuable insights for medical students building competitive applications and for program directors and faculty seeking clearer benchmarks for scholarly activity in this evolving application landscape.

Methodology

Ethical Considerations

This study utilized publicly available data and did not involve any direct interaction with human subjects or identifiable private information. It was, therefore, determined by the Institutional Review Board (IRB) at Augusta University to not constitute human subjects research and to meet the requirements for IRB exemption.

Program Selection

Anesthesiology residency programs were identified using Doximity Residency Navigator, filtered by “Reputation” ranking.11 These rankings are based on board-certified physician nomination surveys, resident satisfaction data, and objective program metrics and were selected as a publicly available standard for identifying US anesthesiology residency programs.12 To be included, programs had to publicly list resident information, including full names, medical degrees (MD, DO, or MD/PhD), and medical school affiliation for residents in the classes of 2026 (postgraduate year [PGY]-3/second year clinical anesthesia resident) and 2028 (PGY-1). Programs lacking this public information were excluded and replaced with the next eligible program in the same tier. Thirty programs were included and divided into 3 tiers by the order of Doximity reputation rankings. The top tier consisted of the first 5 eligible programs (ranks 1-57), the middle tier included the next 10 eligible programs (ranks 58-115), and the bottom tier comprised the final 15 eligible programs (ranks 116-172). More programs were selected from the middle and bottom tiers to balance sample sizes, as higher-ranked programs typically have larger resident class sizes. These tiers were used solely to structure program selection and ensure a standardized, representative sampling of programs, providing an objective framework to minimize selection bias in program inclusion. They were not incorporated as an analytic variable.

Resident Selection

Residents from the graduating residency classes of 2026 and 2028 were selected to represent cohorts immediately before and after the transition of USMLE Step 1 to pass/fail scoring on January 26, 2022. Based on the typical timeline of US MD and DO curricula, the 2026 cohort would have graduated from medical school in 2022 and likely took Step 1 in the summer of 2020 under numerical scoring. In contrast, the 2028 cohort would have graduated medical school in 2024 and likely took Step 1 in the summer of 2022 or later after the transition to pass/fail scoring. Residents who started medical school in 2019 were excluded as they may have tested under either scoring system depending on individual timelines. Medical school start years were determined using publicly available data from LinkedIn or Doximity resident profiles. If unavailable, start years were inferred by subtracting 4 years from medical school graduation year. This classification assumes a standard progression through medical school without interruptions. MD/PhD trainees were excluded due to their elevated research output resulting from extended research training.13 International medical graduates (IMGs) were also excluded due to variable timelines and the absence of a standardized Step 1 schedule outside the US medical education system.
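The cohort-assignment rule described above can be sketched in Python. This is an illustrative reconstruction, not the authors' code; the function name and return values are assumptions. It encodes the stated logic: infer start year as graduation year minus 4 when not publicly listed, exclude 2019 starters, and otherwise assign pre- or post-transition status.

```python
def classify_cohort(grad_year, start_year=None):
    """Assign a resident to the pre- or post-Step 1 transition cohort.

    Illustrative sketch of the rule described in the text:
    - start year inferred as graduation year - 4 if not publicly listed,
      assuming standard progression without interruptions;
    - residents starting medical school in 2019 are excluded (None),
      since they may have tested under either scoring system.
    """
    if start_year is None:
        start_year = grad_year - 4  # infer from graduation year
    if start_year == 2019:
        return None  # excluded: ambiguous Step 1 scoring format
    return "pre" if start_year <= 2018 else "post"
```

For example, a resident who graduated medical school in 2022 (class of 2026) is classified as pretransition, and a 2024 graduate (class of 2028) as post-transition.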

Publication Data Collection

Preresidency publication data were collected via PubMed using each resident's full name, medical school, and hospital affiliation. Only full manuscripts indexed before PGY-1 start dates (July 2022 for the class of 2026, July 2024 for the class of 2028) were included to capture research completed during medical school. Whereas this cutoff may include some publications accepted between Match Day (March) and residency start (July), the typical publication timeline suggests most manuscripts indexed by July were submitted well before Match Day, reflecting research conducted during medical school. Only PubMed-indexed publications were considered to maintain consistency in indexing standards and accessibility and to prevent variability introduced by nonindexed sources, such as conference abstracts or nonindexed presentations. The number of total publications and first author publications was recorded based on PubMed listings. In-specialty publications, defined as those relevant to anesthesiology, were manually classified based on the article's title, content, and journal, with inclusion limited to studies directly related to anesthesiology or its subspecialties, including perioperative medicine, critical care, pain management, airway management, anesthesia techniques, and anesthetic pharmacology. Three authors independently reviewed and verified all classifications to ensure consistency and accuracy. Publications were classified into 3 categories: journal articles (original research, clinical investigations, experimental studies), case reports, and other (observational studies, clinical trials, comparative studies, editorials).

Medical School NIH Funding Level Classification

NIH funding levels for each resident's medical school were determined using the 2024 Blue Ridge Institute for Medical Research (BRIMR) rankings, which included 144 US MD-granting institutions. Medical schools were stratified into 3 tiers: high NIH funding (ranks 1-48), moderate NIH funding (ranks 49-96), and low NIH funding (ranks 97-144). Residents were assigned a funding tier based on their medical school. Institutions not listed in BRIMR were labeled unknown NIH funding. NIH funding level was used to stratify institutions as a proxy for research opportunity but may not directly reflect student-level access to resources, particularly across departments with differing mentorship structures or resource distribution. Accordingly, NIH ranking was treated as an approximate rather than absolute indicator of research opportunity.
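The BRIMR-based stratification reduces to a simple rank-to-tier mapping, sketched below. The function name is illustrative (not from the study); it implements the stated cutoffs, with schools absent from the 2024 BRIMR list labeled unknown.

```python
def nih_tier(brimr_rank):
    """Map a 2024 BRIMR rank (1-144) to the study's NIH funding tier.

    Illustrative sketch of the stratification described in the text;
    institutions not listed in BRIMR (rank is None) are 'unknown'.
    """
    if brimr_rank is None:
        return "unknown"
    if 1 <= brimr_rank <= 48:
        return "high"
    if 49 <= brimr_rank <= 96:
        return "moderate"
    if 97 <= brimr_rank <= 144:
        return "low"
    return "unknown"
```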

Statistical Analyses

Descriptive statistics summarized resident characteristics, residency program tier distribution, and research productivity metrics. Continuous variables were reported as means ± SD, whereas categorical variables were reported as frequencies and percentages. The Kolmogorov-Smirnov test confirmed significant deviation from normality for publication counts (p < .001). The Mann-Whitney U test was, therefore, used to compare research output between pre- and post-Step 1 transition cohorts and across NIH funding tiers due to the nonnormal distribution of the data. A significance level of p < .05 was used for all statistical tests.
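To make the rank-based comparison concrete, a minimal pure-Python computation of the Mann-Whitney U statistic is sketched below (midranks for ties). This is a didactic illustration only; the study's analyses were run in Python, presumably with a library routine such as scipy.stats.mannwhitneyu, which also returns the p-value.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y.

    Didactic sketch: ranks the pooled data (average ranks for ties),
    sums the ranks of x, and subtracts the minimum possible rank sum.
    A full analysis would also compute a p-value via a library routine.
    """
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        # find the run of tied values starting at position i
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[combined[k][1]] = midrank
        i = j + 1
    n1 = len(x)
    r1 = sum(ranks[:n1])  # rank sum of sample x (indices 0..n1-1)
    return r1 - n1 * (n1 + 1) / 2
```

With no overlap between groups the statistic hits its extremes (0 or n1*n2), and heavily tied, zero-inflated counts like publication totals are exactly where this rank-based test is preferred over a t test.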

To assess predictors of research productivity, a multivariable logistic regression model was used to identify factors associated with having at least 1 publication. Independent variables included Step 1 format (pass/fail vs numerical) and NIH funding tier (high vs moderate/low). The dependent variable was dichotomized as total publications ≥1 versus 0. Model fit was evaluated using the likelihood ratio test and pseudo-R2. Odds ratios (ORs) and corresponding 95% confidence intervals (CIs) were reported. Additionally, Poisson regression was used to model total publication counts as a function of Step 1 format and NIH funding tier; incidence rate ratios (IRR) and corresponding 95% CIs were reported. Analyses were conducted in Python (version 3.12).
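For intuition about the reported odds ratios: with a single binary predictor, the exponentiated logistic regression coefficient equals the ordinary 2×2 odds ratio, which can be computed with a Wald confidence interval using only the standard library. The sketch below is illustrative; the counts in the usage note are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed with outcome,   b = exposed without outcome,
        c = unexposed with outcome, d = unexposed without outcome.
    With one binary predictor, logistic regression's exp(beta) equals
    this odds ratio. Didactic sketch; a multivariable model (as in the
    study) adjusts for additional covariates simultaneously.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, `odds_ratio_ci(20, 10, 10, 20)` on hypothetical counts returns an OR of 4.0 with its Wald interval; an interval excluding 1.0 corresponds to significance at p < .05, mirroring how the reported OR of 1.87 (95% CI 1.21-2.89) is interpreted.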

Results

A total of 461 anesthesiology residents were included, comprising 206 residents from the class of 2026 (pre-Step 1 transition) and 255 residents from the class of 2028 (post-Step 1 transition). Residents were stratified by the NIH funding level of their medical schools (Table 1). Across both cohorts, 29.7% graduated from high NIH-funded medical schools, 27.8% from moderate, 18.4% from low, and 24.1% from institutions with unknown NIH funding status.

Table 1.

Summary of Residents by Step 1 Transition Phase

NIH funding level        Pretransition (n)   Post-transition (n)
High NIH funding         53                  84
Moderate NIH funding     53                  75
Low NIH funding          41                  44
Unknown NIH funding      59                  52
Total                    206                 255

Comparisons between pre- and post-Step 1 transition cohorts showed no significant differences across any of the measured outcomes. The mean number of total publications was 1.04 ± 2.79 in the pretransition group and 0.80 ± 1.38 in the post-transition group (p = .516). First author publications (0.28 ± 0.81 vs 0.21 ± 0.58, p = .623), in-specialty publications (0.17 ± 1.01 vs 0.15 ± 0.45, p = .181), and citation counts (1.27 ± 4.82 vs 0.75 ± 1.96, p = .936) also did not differ significantly (Table 2). This shows that the transition to pass/fail scoring was not immediately associated with an increase in full-manuscript research output in the first year following this scoring change.

Table 2.

Comparative Analysis of Research Metrics Pre– and Post–Step 1 Transition

Metric               Pretransition (mean ± SD)   Post-transition (mean ± SD)   p
Total publications   1.04 ± 2.79                 0.80 ± 1.38                   .516
First authorships    0.28 ± 0.81                 0.21 ± 0.58                   .623
In-specialty         0.17 ± 1.01                 0.15 ± 0.45                   .181
Citation count       1.27 ± 4.82                 0.75 ± 1.96                   .936

When stratified by NIH funding level, no significant differences were observed across most categories. Mean values for total publications, first authorships, in-specialty publications, and citation counts were similar between the pretransition and post-transition cohorts regardless of funding tier. The only exception was a modest increase in in-specialty publications among graduates from moderately funded schools, rising from 0.03 ± 0.16 pretransition to 0.13 ± 0.39 post-transition (p = .047) (Table 3). Overall, the transition to pass/fail scoring did not correspond with meaningful changes in research output across funding strata.

Table 3.

Comparative Analysis of Research Metrics Stratified by NIH Funding Level

Metric               NIH level   Pretransition (mean ± SD)   Post-transition (mean ± SD)   p
Total publications   High        2.06 ± 4.40                 1.38 ± 1.78                   .996
                     Moderate    0.72 ± 1.44                 0.64 ± 1.27                   .925
                     Low         0.57 ± 1.07                 0.80 ± 1.31                   .180
First authorships    High        0.55 ± 1.20                 0.43 ± 0.91                   .815
                     Moderate    0.23 ± 0.63                 0.17 ± 0.43                   .854
                     Low         0.16 ± 0.43                 0.12 ± 0.33                   .821
In-specialty         High        0.44 ± 1.71                 0.19 ± 0.65                   .381
                     Moderate    0.03 ± 0.16                 0.13 ± 0.39                   .047
                     Low         0.11 ± 0.32                 0.12 ± 0.33                   .912
Citation count       High        1.96 ± 5.22                 1.42 ± 3.31                   .569
                     Moderate    1.57 ± 6.79                 0.66 ± 1.35                   .912
                     Low         0.66 ± 1.39                 0.77 ± 1.28                   .201

Publication types remained consistent before and after the Step 1 transition. Journal articles made up the majority, accounting for 87.9% of pretransition and 83.0% of post-transition publications. Case reports followed, comprising 7.6% of pretransition and 11.5% of post-transition publications. Other types, including editorials, clinical trials, observational studies, and comparative studies, each represented less than 5% of publications (Figure 1). These findings indicate relative stability in publication type distribution.

Figure 1.

Distribution of publication types before and after the Step 1 transition to pass/fail scoring. Percentages of journal articles, case reports, and other types are shown for each cohort. Journal articles comprise original research, clinical investigations, experimental studies, and other primary research articles. The “other” category refers to observational studies, clinical studies, clinical trials, comparative studies, and editorials.

Percentile analysis of research publications showed that most residents had little or no publication output. The 50th percentile for total publications, first author publications, and in-specialty publications was 0 in both cohorts, indicating that at least half had no publications before residency. Even among those who had published, output remained low. The 90th percentile of the pretransition group had a maximum of 3 publications, whereas the post-transition group had at most 2. Citation counts followed a similar pattern with median values of 0 and only slight increases among top performers. These data suggest that, whereas research productivity through publications is often emphasized, many successfully matched residents had minimal publication history.

Multivariable logistic regression was conducted to identify predictors of having at least 1 preresidency publication. Variables included Step 1 transition status (pretransition vs post-transition) and NIH funding level (high vs nonhigh). Graduating from a high NIH-funded institution was significantly associated with greater odds of having at least 1 publication (OR 1.87, 95% CI 1.21-2.89, p = .005), whereas Step 1 transition status was not (OR 1.27, 95% CI 0.82-1.96, p = .284). The model showed improved fit over the null (likelihood ratio test p = .012) and demonstrated acceptable discrimination (c-statistic = 0.61), suggesting that institutional funding may be a stronger predictor of research productivity than the transition to pass/fail Step 1 scoring. Poisson regression examining total publication counts yielded consistent findings. High NIH funding was associated with increased publication rates (IRR 2.24, 95% CI 1.81-2.77, p < .001), whereas Step 1 transition status showed no significant association (IRR 0.85, 95% CI 0.70-1.03, p = .092).

Discussion

Research Output Before and After Step 1 Transition

Our analysis revealed that the median anesthesiology residency applicant in both pre- and post-Step 1 transition cohorts had no PubMed-indexed publications with the 50th percentile in total publications equal to 0. There was also no significant difference in total research publications among anesthesiology applicants who received a numerical score on Step 1 versus pass/fail scoring. Despite the specialty's increasing competitiveness and the transition of Step 1 to pass/fail scoring, our findings indicate no meaningful change in publication volume among matched anesthesiology applicants. This contrasts with NRMP data, which report increasing research activity when combining abstracts, posters, and presentations together with peer-reviewed publications. In contrast, our study focused specifically on PubMed-indexed publications, providing a level of detail not captured in NRMP data. Based on NRMP's Charting Outcomes in the Match reports from 2020 and 2024, applicants listing 5 or more abstracts, presentations, and publications had a higher match rate compared with those with fewer research experiences. However, our analysis suggests that these totals are likely driven primarily by abstracts and presentations as publication volume among matched anesthesiology applicants remained minimal across cohorts.3-6

Whereas these results may seem surprising given the growing emphasis on research in competitive specialties, they likely reflect long-standing norms within anesthesiology, in which research publications have historically not carried the same weight in the application process as in other competitive specialties.14-18 Additionally, it is important to recognize the broader norms of research activity within the field of anesthesiology itself. Studies show that anesthesiology, as a specialty, tends to have a lower overall research output compared with other specialties, with contributing factors including limited funding, fewer structured research training opportunities, and the absence of mandatory research components within fellowship programs.19,20 This may translate to the trends seen within our study among medical students interested in anesthesiology, as opportunities for anesthesia-related publications may be lower.

It is important to note that the effects of the Step 1 scoring transition on research productivity may not yet be fully observable when interpreting these results. The class of 2028 represented the first group of applicants to take Step 1 pass/fail, and later cohorts may have had greater awareness of the change and more time to adapt their strategies. Because publication timelines often span several months to years, some manuscripts in this analysis may have originated before medical school, whereas research efforts prompted by the Step 1 policy change may have taken longer to develop and were not yet captured prior to the start of residency. However, our goal was to include all preresidency research output that would reasonably have been a part of an applicant's residency application. These timing factors limit causal interpretation of the observed trends and highlight the need for continued monitoring of future applicant cohorts.

NIH Funding Influences on Research Productivity

Our findings present a nuanced perspective of the relationship between NIH funding and research productivity among anesthesiology applicants. When residents were stratified by the NIH funding level of their medical schools, no significant differences were observed in total publication count, first author publications, in-specialty publications, or citation rates. This suggests that, at the group level, NIH funding was not associated with greater research output in terms of quantity or impact. However, logistic regression analysis revealed that graduating from a highly NIH-funded institution was significantly associated with increased odds of having at least 1 publication. This suggests that, whereas NIH funding may not directly influence how many publications an applicant produces, it may be associated with a greater likelihood of having at least 1 publication. Highly funded medical schools may draw students who are already predisposed to pursue scholarly activity, and these self-selection effects could contribute to the observed differences in publication likelihood. As such, NIH funding likely facilitates, but does not independently determine, research productivity among medical students. Because this study cannot distinguish between institutional effects and self-selection of research-oriented students, the observed association should be interpreted as descriptive rather than causal. Future studies incorporating applicant-level data such as prior research experience, mentorship exposure, and departmental support could help clarify these relationships.

Research engagement among anesthesiology applicants may, therefore, be shaped less by institutional funding and more by other factors, such as student interest, departmental culture, or the availability of mentorship. It is also likely that anesthesiology applicants engage in scholarly activities through conference abstracts, poster or oral presentations, quality improvement projects, or nonindexed publications, which were not captured in this analysis. Although prior literature demonstrates that NIH funding is a strong predictor of publication volume, these effects are often concentrated in a limited number of highly resourced institutions, contributing to disparities in research access across US medical schools.21-23 In contrast, our findings suggest that anesthesiology may not follow this pattern as closely as other specialties, warranting further investigation into the specialty-specific pathways through which applicants participate in scholarly activity.24,25

Limitations

This study has several limitations that should be acknowledged. Although the sample included 461 residents across 30 programs, it may not fully represent the broader anesthesiology applicant pool, and findings may not be generalizable to all residency programs or applicants nationwide. The selection of programs based on Doximity's reputation rankings also limits generalizability; however, this approach provided a standardized framework for program inclusion, incorporating top, middle, and lower tier programs to balance the natural skew toward more research-intensive institutions.

Second, publication identification relied on name and institutional affiliation, which may introduce false positives (eg, duplicate names) and false negatives (eg, incomplete affiliation data). Additionally, our methodology linked applicants’ publications to their home institutions but did not capture research completed externally or published in non-PubMed-indexed journals. Using the PGY-1 start date (July) as the publication cutoff may have included manuscripts accepted after Match Day, though most publications indexed by that time likely reflect work conducted during medical school. These factors may affect attribution accuracy and the true breadth of research activity among medical students. Moreover, a subset of residents graduated from schools with unknown levels of NIH funding, which may have influenced comparisons across funding tiers.

Third, residents were grouped into Step 1 numerical versus pass/fail cohorts based on medical school start years obtained from publicly available data with missing start years inferred from graduation dates. This approach does not account for nontraditional paths, such as dual-degree programs, gap years, or leaves of absence, which may affect Step 1 timing. Similarly, residents who pursued dedicated research or gap years before residency may have had additional time for publications, potentially inflating preresidency output and limiting the ability to isolate research completed during medical school. Although these cases are likely few due to the exclusion of MD/PhDs and IMGs, even minor misclassification could dilute true differences between groups. Finally, this study evaluated only the first cohort of anesthesiology residents to take Step 1 pass/fail, and subsequent groups may demonstrate evolving research trends not captured within the present analysis. Despite these limitations, this study provides a valuable foundation for understanding research productivity among matched anesthesiology applicants and offers direction for future investigations to build on these findings.

Implications and Future Directions

As the residency application landscape continues to shift in response to the Step 1 scoring change, understanding the evolving role of research in anesthesiology is increasingly important for both applicants and program leadership. For medical students, our findings suggest that publications were not universally required for matching into anesthesiology among the class of 2028, which may or may not apply to subsequent cohorts. For faculty and mentors, these results highlight that anesthesiology research opportunities for students may be influenced more by department-level culture, mentorship availability, and local resources rather than institutional NIH funding. For program directors, this study provides a snapshot of research patterns for the first residency class to take Step 1 pass/fail, offering context when interpreting publication records as part of holistic application review. Future studies should continue to monitor trends in scholarly output as the specialty's competitiveness evolves and as Step 1 pass/fail scoring reshapes applicant strategies.

Conclusion

In summary, this study provides the first detailed analysis of PubMed-indexed publication patterns among matched anesthesiology applicants, revealing persistently modest research output across cohorts before and after the Step 1 transition. These findings offer important context for understanding scholarly activity within the specialty as its competitiveness evolves. Whereas no significant increase in publication output was observed following the transition to pass/fail Step 1 scoring, ongoing monitoring is essential to track how future applicants may adapt their strategies in response to shifting evaluation metrics.

Acknowledgments

The authors gratefully acknowledge Dr Ban Majeed, biostatistician and faculty member at the Medical College of Georgia, for her invaluable guidance during the early phases of the study's design. We also thank Emily Harris and Jennifer Davis, university librarians, for their contributions in refining and standardizing the data collection methodology.

Footnotes

Funding: This research was not supported by any explicit funding sources.

References

  • 1.National Resident Matching Program Results and Data: 2024 Main Residency Match. Washington, DC: NRMP; 2024. [Google Scholar]
  • 2.National Resident Matching Program Results and Data: 2020 Main Residency Match. Washington, DC: NRMP; 2020. [Google Scholar]
  • 3.National Resident Matching Program Charting Outcomes in the Match: U.S. MD Seniors – 2024. Washington, DC: NRMP; 2024. [Google Scholar]
  • 4.National Resident Matching Program Charting Outcomes in the Match: U.S. DO Seniors - 2024. Washington, DC: NRMP; 2024. [Google Scholar]
  • 5.National Resident Matching Program Charting Outcomes in the Match: U.S. MD Seniors – 2020. Washington, DC: NRMP; 2020. [Google Scholar]
  • 6.National Resident Matching Program Charting Outcomes in the Match: U.S. DO Seniors – 2020. Washington, DC: NRMP; 2020. [Google Scholar]
  • 7.Vining CC, Eng OS, Hogg ME, et al. The impact of COVID-19 on surgical residency applicant research productivity. J Surg Res. 2022;270:480–6. [Google Scholar]
  • 8.Chow DS, Simoes CJ, Vangsness M, et al. Trends in research among applicants to orthopedic surgery residency programs. J Surg Educ. 2023;80:1268–76. [Google Scholar]
  • 9.Sterling M, Leung P, Wright D, Bishop J. Research productivity of medical students at the time of residency application: a retrospective review. Cureus. 2021;13:e13178 [Google Scholar]
  • 10.Dyess G, Bassett M, Baker N, et al. Publication volume and equity in competitive surgical residency programs: a bibliometric analysis of the 2023-2024 match. Cureus. 2025;17(4):e82451. doi: 10.7759/cureus.82451. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Doximity Residency navigator rankings: anesthesiology programs. [Accessed February 4, 2025]. https://www.doximity.com/residency/programs?specialtyKey=92737b59-99cc-40a2-8998-5debb54d36c0-internal-medicine&sortByKey=reputation
  • 12.Doximity Residency navigator research methodology. San Francisco, CA: Doximity; 2024. [Accessed October 8, 2025]. https://assets.doxcdn.com/image/upload/pdfs/residency-navigator-survey-methodology.pdf [Google Scholar]
  • 13.Vora NL, Elnicki DM, Encandela J, et al. AAMC professional competencies and the residency selection process: a survey of US program directors. Acad Med. 2013;88:1566–72. [Google Scholar]
  • 14.Gu A, Johnson A, Khouri RK, et al. Analysis of NIH funding and its association with research productivity in plastic surgery residency programs. Plast Reconstr Surg. 2024;153:651–8. [Google Scholar]
  • 15.Pawloski AR, Feghali K, Patel A, et al. Investigating scholarly trends in neurosurgery residency applicants in the Step 1 pass/fail era. World Neurosurg. 2024;184:e35–41. [Google Scholar]
  • 16.Heavner JJ, Sussman AJ, Walker E, et al. Factors anesthesiology program directors value in choosing residents. J Educ Perioper Med. 2012;14:E071. [Google Scholar]
  • 17.De Oliveira GS, Jr, Akikwala T, Kendall MC, et al. Factors influencing the selection of surgical residents: a national survey of program directors. J Surg Educ. 2012;69(5):663–70. [Google Scholar]
  • 18.Shah S, Con J, Mercado L, et al. Predictors of matching into anesthesiology and surgery: analysis of one program's results. J Surg Educ. 2023;80(9):1231–41. doi: 10.1016/j.jsurg.2023.06.021. [DOI] [PubMed] [Google Scholar]
  • 19.Culley DJ, Fahy BG, Xie Z, et al. Academic productivity of directors of ACGME-accredited residency programs in surgery and anesthesiology. Anesth Analg. 2014;118(1):200–5. doi: 10.1213/ANE.0b013e3182a8fab5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Kwon AH, Varelmann D, Karamnov S, et al. Anesthesiologists wake up! It is time for research and innovative medical entrepreneurism. J Educ Perioper Med. 2021;23(1):E658. doi: 10.46374/volxxiii_issue1_nabzdyk. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Claiborne JR, Crichlow R, McKinney R, et al. The impact of research productivity on orthopaedic residency selection. J Surg Educ. 2020;77:710–6. [Google Scholar]
  • 22.Andriole DA, Jeffe DB. The distribution of U.S. medical school graduates in residency training programs: association with academic medical center funding. Acad Med. 2000;75:470–6. [Google Scholar]
  • 23.Averill AM, Hampton LJ, Reighard CD, et al. Trends in dermatology residency applicant research productivity before and after the COVID-19 pandemic. J Am Acad Dermatol. 2021;85:1131–3. [Google Scholar]
  • 24.Pool LR, Wagner RM, Scott LL, et al. Size and characteristics of the biomedical research workforce associated with U.S. National Institutes of Health extramural grants. FASEB J. 2024;38:e12345. doi: 10.1096/fj.14-264358. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Zinn PO, Hahn G, Bauss J, et al. NIH grant funding correlates with higher research impact and productivity in dermatology. Arch Dermatol Res. 2022;314:13–21. [Google Scholar]

Articles from The Journal of Education in Perioperative Medicine : JEPM are provided here courtesy of Society for Education in Anesthesia