Journal of Clinical and Translational Science. 2025 Jul 21;9(1):e164. doi: 10.1017/cts.2025.10096

A scoping review of mentorship in a CTSA context: A summary of past work and an agenda for future research

Phillip Ianni 1, Elias Samuels 1, Ellen Champagne 1, Eric Nehl 2,3, Deborah DiazGranados 4,5
PMCID: PMC12392357  PMID: 40895453

Abstract

Mentorship is a vital part of the training provided in the K and T programs funded by the Clinical and Translational Science Awards (CTSA). However, the inputs, indicators, and outcomes associated with a successful mentoring relationship remain poorly understood. In this review, we critically examine the current body of literature on mentorship in a CTSA context. We conducted a comprehensive search of the literature for relevant research articles. We included articles that were contextualized within a CTSA hub, examined a mentorship program, and conducted evaluation research. Through an initial search of online databases and by reviewing reference sections of relevant articles, we identified 141 potentially relevant articles. Twenty-five of these articles met our inclusion criteria. We identified three categories of research: nationwide institutional surveys of CTSA mentorship programs, mentored research training programs, and mentor training programs. While the findings highlighted the effectiveness of mentor training and mentored research training programs, there is a notable lack of assessment of mentoring inputs and indicators. Based on our review, we propose a model for the evaluation of CTSA mentorship that includes measurable inputs, indicators, and outcomes. This model provides a holistic framework for evaluators and CTSA program directors to better understand their mentorship programs.

Keywords: Translational science, mentor training, CTSA, evaluation, mentorship

Introduction

The advancement of translational science requires cultivating and improving effective mentorship practices among translational scientists. Mentorship has been studied extensively in many different contexts, including health care delivery [1], corporate settings [2], not-for-profit settings [3], and government agencies [4]. In higher education, mentorship has been studied across different career stages (undergraduate, graduate, postdoc, faculty) and disciplines (biomedical science, engineering, social sciences, physical sciences, allied health professions, and humanities) [5–8]. However, relatively little research has been devoted to understanding the mentoring practices and needs that are distinctive to the clinical and translational science context. This review establishes a foundation for studying mentorship in translational science by analyzing findings and identifying research gaps in studies supported by Clinical and Translational Science Awards (CTSA) from the National Center for Advancing Translational Sciences (NCATS).

Mentorship in the context of translational science is distinct from mentorship in other STEM fields in several ways. First, the context and the activities required for effective translational science differ from those in basic science research. Unlike basic science researchers, translational science researchers focus on developing practical solutions for specific health-related problems [9]. Second, translational science is distinct in its goal of testing and disseminating tools and practices to enhance the clinical and translational research enterprise. While testing methodologies may not be unique, the process of dissemination and translation into practice requires specialized skills and knowledge, highlighting the need for mentorship. Third, beyond distinguishing translational science from basic research, we also emphasize the importance of understanding effective mentorship behaviors [10]; addressing this need can fill a gap in the mentoring literature and be applied across many fields in science.

Translational scientists require training in a wide range of competencies [11]. Their mentors play a key role in this training process. A strong, active mentoring relationship serves both a career function and a psychosocial function and is one of the best predictors of academic success. However, as noted in a previous review of research mentorship in clinical and translational science, the mentorship experience is often difficult to define and measure [12].

In addition, recent T32 and K12 notices of funding opportunity released by the NIH [13,14] put a strong emphasis on mentorship. As of 2025, all training grant applications must now include a Mentor/Trainee Assessment Plan, which requires that institutions specify how their programs will monitor mentoring relationships, which approaches and tools will be used to assess both mentors’ and mentees’ perceptions of the mentoring relationship, and how the program leadership will handle major discrepancies. For career development programs, institutions are now required to describe how participating faculty will be trained to use evidence-informed mentorship practices, how gains in perceived skill will be measured, how changes in mentoring behaviors and effectiveness as a result of mentor training will be measured, how the research training environment will be monitored and assessed, and how outstanding mentors will be recognized and rewarded.

To address these challenges, NCATS has developed programs and training resources to promote the development of effective mentors. Mentored research training programs, such as the T32 and the K12, aim to give trainees and scholars the opportunity to work on a research project with a faculty mentor. Many CTSA hubs have also developed mentor training programs [1520] that are designed to support and improve the culture of mentoring by giving mentors needed skills. However, identification of mentorship practices and principles that are distinct to the field of clinical and translational science is still needed.

While rigorous evaluations of CTSA mentorship programs have been conducted [21,22], there is little understanding of how and why they work, a challenge that can be described as the “black box” problem in program evaluation [23]. As stated in the NASEM report [5], “To fully understand mentorship, evaluation measures would ideally address both mentorship processes and mentorship outcomes and the system factors that can profoundly shape it.” (p. 127). To gain an understanding of the mechanisms of mentorship within the context of clinical and translational science, we examined the current state of the literature about mentorship provided by, or supported within, CTSA-funded research centers, identified gaps in evaluation, and articulated future directions for research that could address these gaps. Based on these findings, we propose a model of CTSA mentorship.

Methods

Study design

This study followed a scoping review methodology, guided by the framework proposed by Arksey and O’Malley [24] and refined by Levac et al. [25]. This approach was selected to examine the extent, range, and nature of the existing literature on evaluation research for mentorship programs at CTSA hubs and to identify key concepts, gaps in the research, and avenues for future inquiry. The reporting of this review adheres to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines [26].

Inclusion/exclusion criteria

The inclusion criteria for our review were based on the Population, Concept, and Context (PCC) framework as recommended by the Joanna Briggs Institute (JBI) methodology for scoping reviews [27]. Studies were eligible for inclusion if they:

  1. Were conducted at a CTSA-funded mentored research program.

  2. Included evaluation results (qualitative or quantitative).

  3. Included measurable metrics of mentoring inputs, activities, outputs, or outcomes.

  4. Examined faculty or postdoctoral participants (no CRPs, undergraduate students, etc.).

Studies were excluded if they were not directly related to the core concept, were not published in English, or if the full text was unavailable.

Search strategy

A comprehensive literature search was conducted using PubMed to identify relevant studies. The search strategy combined keywords and Medical Subject Headings (MeSH) terms related to mentoring programs. We used the following search terms: “mentor* AND program AND research AND (KL2 OR TL1 OR T32 OR K12 OR postdoctoral) AND (CTSA OR NCATS) AND (Clinical OR translational).” In addition to the electronic database search, a secondary search of the reference lists of included articles and key journals was conducted to capture additional relevant studies. The literature search was conducted in 2024. No limits were placed on year of publication, and the articles included in this review were published between 2009 and 2022.
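
For illustration, the query above can also be run programmatically against PubMed. The sketch below is a minimal example assuming Biopython’s Entrez interface and a placeholder contact email; it is not part of the workflow used in this review, in which searching was performed in PubMed and screening in Covidence.

```python
# Minimal sketch: running the review's PubMed query through NCBI E-utilities.
# Assumes Biopython is installed; the email is a placeholder required by NCBI.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder contact address

query = (
    "mentor* AND program AND research "
    "AND (KL2 OR TL1 OR T32 OR K12 OR postdoctoral) "
    "AND (CTSA OR NCATS) AND (Clinical OR translational)"
)

# Retrieve matching PubMed IDs (PMIDs) for title/abstract screening.
handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"Records found: {record['Count']}")
print(record["IdList"][:10])  # first ten PMIDs
```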

Study selection

All search results were imported into Covidence, a web-based collaboration platform that streamlines the production of systematic and other literature reviews [28]. The review process had three steps. First, three of the authors screened the titles and abstracts of the studies against the predefined eligibility criteria. Second, full-text screening was conducted by two separate authors on studies that met the initial inclusion criteria. Any disagreements between reviewers were resolved through discussion or by consulting the entire research team. Third, relevant data were extracted from the selected papers by three separate authors. We repeated this process with additional articles identified from the reference lists of the original set of papers. A diagram of this process is shown in Figure 1.

Figure 1. Preferred reporting items for systematic reviews and meta-analyses (PRISMA) flow diagram.

Results

Study types, designs, and methods

Clear commonalities are shared by the 25 studies summarized in Table 1, notably including similarities in study types, designs, and methods. Typically, these studies involved mixed-methods research, often using surveys, interviews, and/or focus groups in a pre-post design.

Table 1.

Summary of key study variables

Study | N CTSAs | CTSA program | Study participants | N participants | Study type | Study design | Study methods
Abedin 2013 | 55 | K | Administrators | 53 | Mixed methods | Cross sectional | Surveys, Interviews
Behar-Horenstein 2017 | 1 | T | Faculty mentors, Postdoctoral mentees | 10 | Qualitative | Other | Interviews
Behar-Horenstein 2019 | 1 | Other | Faculty mentors | 20 | Mixed methods | Pre-post | Surveys, Qualitative coding, Secondary data collection
Bonilha 2019 | 1 | Other | Senior faculty mentors, Junior faculty mentees | 1362 | Quantitative | Pre-post | Surveys
Burnham 2011 | 46 | K | Faculty mentors, Administrators | 91 | Qualitative | Other | Focus groups
Comeau 2017 | 1 | K | Junior faculty mentors | 46 | Mixed methods | Other | Interviews, Secondary data collection
Feldman 2009 | 1 | K, T | Mid-career and senior faculty mentors | 29 | Mixed methods | Pre-post | Surveys
Feldman 2012 | 1 | K | Mid-career and senior faculty mentors | 38 | Mixed methods | Other | Surveys
Huskins 2011 | 46 | K | Senior faculty mentors, Junior faculty mentees, Administrators | 154 | Qualitative | Other | Interviews, Focus groups, Secondary data collection
Martina 2014 | 1 | K, T | Faculty mentors | 73 | Mixed methods | Cross sectional | Surveys
McGee 2023 | 1 | K | Junior faculty mentors | 39 | Mixed methods | Pre-post | Surveys
Nearing 2020 | 1 | K, T | Faculty mentors, junior faculty and postdoctoral mentees | 158 | Mixed methods | Pre-post | Surveys
Patino 2017 | 1 | K | Community-based organization mentors, Junior faculty mentees | 6 | Qualitative | Other | Interviews
Pfund 2013 | 16 | K | Faculty mentors | 144 | Mixed methods | Pre-post, Comparison group | Surveys, Qualitative coding
Pfund 2014 | 16 | K, T | Faculty mentors, junior faculty and postdoctoral mentees | 566 | Mixed methods | Pre-post, Comparison group | Surveys, Interviews
Robinson 2016 | 9 | K | Junior faculty mentees | 40 | Qualitative | Other | Interviews, Secondary data collection
Sancheznieto 2022 | 51 | T | Administrators | 50 | Mixed methods | Cross sectional | Surveys
Schweitzer 2019 | 1 | Other | Senior faculty mentors, Junior faculty mentees | 331 | Mixed methods | Pre-post | Surveys
Silet 2010 | 46 | K | Administrators | 46 | Qualitative | Cross sectional | Interviews
Smyth 2022 | N/A | K | Junior faculty mentees | 547 | Mixed methods | Cross sectional | Surveys
Spence 2018 | 1 | Other | Senior faculty mentors, Junior faculty mentees | 36 | Quantitative | Pre-post | Surveys
Stefely 2019 | 1 | Other | Faculty mentors, MD/PhD student mentees | 74 | Mixed methods | Pre-post | Surveys
Tillman 2013 | 53 | K | Administrators | 53 | Mixed methods | Other | Surveys, Interviews
Trejo 2022 | 1 | Other | Senior faculty mentors, Junior faculty mentees | 391 | Quantitative | Pre-post | Surveys
Weber-Main 2019 | 1 | Other | Faculty mentors | 59 | Mixed methods | Pre-post, Comparison group | Surveys, Focus groups, Secondary data collection

However, there was also considerable variation across studies. For example, while most (60%) of these studies focused on a single CTSA hub, some were far broader in scope, with six studies evaluating mentoring activity in at least 45 hubs each. The number of participants in these studies also varied considerably, ranging from 6 to 1362, with an average of 177 participants per study (SD = 288).
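
As a simple check, these descriptive statistics can be recomputed from the N column of Table 1. The sketch below does so in Python; the participant counts are transcribed from Table 1, and the reported standard deviation corresponds to the population formulation.

```python
# Recompute the participant-count summary from the N column of Table 1
# (counts transcribed from the table, in row order).
from statistics import mean, pstdev

n_participants = [
    53, 10, 20, 1362, 91, 46, 29, 38, 154, 73, 39, 158, 6,
    144, 566, 40, 50, 331, 46, 547, 36, 74, 53, 391, 59,
]

print("Studies:", len(n_participants))                            # 25
print("Range:", min(n_participants), "to", max(n_participants))   # 6 to 1362
print(f"Mean: {mean(n_participants):.0f}")                        # ~177
print(f"SD (population): {pstdev(n_participants):.0f}")           # ~288
```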

As shown in Table 2, there are clear similarities in the designs and methods of the studies evaluated. Most (16, 64%) of these studies used mixed methods, with all but one of them using surveys; other evaluation methods used by these studies included interviews (4), focus groups (1), secondary data collection (3), and qualitative coding (2). Almost a quarter (24%) of the studies included in this review used only qualitative methods, including interviews (5), focus groups (2), and secondary data collection (2). Three of the studies used solely quantitative methods, all of which involved surveys.

Table 2.

Study type, design, and methods extracted from the articles evaluated

Study type
 Mixed methods 64%
 Qualitative 24%
 Quantitative 12%
Study design
 Cross sectional 20%
 Pre-post 48%
 Other 32%
Study methods
 Surveys 72%
 Interviews or focus groups 44%
 Secondary data collection 20%

Roughly half of the studies (12, 48%) had a pre- and post-test design to measure change over the course of the intervention; three of these also used a comparison group to measure outcomes against a meaningfully comparable group of individuals. Five studies (20%) had a cross-sectional design, and the remaining eight studies (32%) had other study designs.

Types of interventions

Most of the studies (n = 16, 64%) examined a K Scholars program, with four of the 16 studies examining both T trainee and K Scholars programs. Six studies examined other mentoring programs designed for health sciences faculty overall [15,17,22,29], junior faculty [17], or physician-scientists [30]. Regarding the source of the data collected, ten studies (40%) gathered data from both mentors and mentees, and over half (n = 13, 52%) only collected data from mentors or program administrators. Two studies collected data only from mentees [31,32].

Statistical tests used and interview/survey questions asked

Roughly half of the studies evaluated (n = 13, 52%) utilized inferential statistics, such as t-tests, chi-square tests, logistic regression, or MANOVAs, to inform conclusions or predictions about mentoring activities and impacts. Nine (36%) of the evaluated studies used validated survey measures of mentoring, with survey questions typically derived from the Mentoring Competency Assessment (MCA) [33]. Some studies used other validated scales of mentoring activity either alone [18,34] or in combination with the MCA [35]. Seven studies (28%) employed both inferential statistics and validated survey measures.
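
To illustrate the most common analytic pattern noted above, the sketch below applies a paired-samples t-test to hypothetical pre- and post-training MCA composite scores. The data and variable names are invented for illustration and are not drawn from any of the reviewed studies.

```python
# Hypothetical pre/post MCA composite scores (1-7 scale) for ten mentors,
# compared with a paired-samples t-test as in many of the reviewed studies.
from statistics import mean
from scipy import stats

pre_mca = [4.2, 5.1, 3.8, 4.9, 5.5, 4.0, 4.6, 5.0, 3.9, 4.4]
post_mca = [5.0, 5.6, 4.5, 5.2, 5.9, 4.8, 5.1, 5.4, 4.6, 5.0]

result = stats.ttest_rel(post_mca, pre_mca)
gain = mean(b - a for a, b in zip(pre_mca, post_mca))
print(f"Mean gain: {gain:.2f}, t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```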

Considered as a whole, the open-ended survey questions and focus group or interview protocols used in these studies were diverse. Many of the open-ended survey questions solicited information about the mentoring experience, gains in mentoring knowledge and skills, the quality of training programs, examples of valuable mentoring interactions, and opportunities for developmental or programmatic improvements that could be made by CTSAs. The impact of mentoring on mentees’ research studies and research careers was also the subject of many of these survey questions, as well as of the focus group and interview protocols. The qualitative data collected through these means were typically coded by multiple members of the research team and grouped into thematic categories; only occasionally were these qualitative data grouped into themes without methodical coding carried out by multiple raters.

Inputs measured

Only five studies used inferential statistical tests that included independent variables [15,21,29,32,35]. All of these studies analyzed differences in outcomes across groups defined by sex or faculty rank. Some studies also tested for and found statistically significant differences associated with other participant characteristics, including race, age, tenure track, and years of experience, or between mentors and mentees [21,29]. The remaining studies (n = 20, 80%) did not test for statistically significant differences between subgroups of participants, although many tested for significant differences among participants before and after they received a programmatic intervention, such as mentorship training. The following summary of the results addresses these differences in the context of the overall findings of the studies reviewed.

Data collected in articles reviewed

We identified three types of studies in our review. Several studies focused largely on evaluating training programs across the whole CTSA Consortium [36–41]. A few studies focused on evaluating specific mentored research training programs within the CTSA Consortium [30,32,38,42]. Finally, the largest proportion of studies focused on evaluating mentor training programs [15–22,29,31,34,35,43–46].

We have structured the review according to these three types of papers. The findings of each type of paper are reviewed below.

Mentorship programs across the CTSA consortium

To understand the current state of mentoring in the CTSA Consortium, we examined the findings of several manuscripts that evaluated K and T mentoring training programs across the CTSA Consortium [36–41]. These works demonstrate that CTSA K and T programs share many common mentoring training requirements, training opportunities, resources, and outcomes [36,37,41]. However, they also show that there are considerable differences across both K and T programs, notably in the mechanisms for evaluating mentoring performance [39–41].

The most common form of institutional support for mentors was mentor training. There was great variety in the mode of delivery of mentor training. Mentor training is usually given as a one-time orientation (51%), but informal training (33%), Web-based training (28%) and face-to-face training (23%) were other common formats [36]. Mentor training was typically offered by both KL2 and TL1 programs but was often not mandatory. Thirty-four (64%) KL2 programs reported that they offered mentor training, but only 56% of those programs required mentors to attend training [41]. Similarly, for TL1 predoctoral training programs, 90% offered mentor training, but this training was mandatory for only 20% of hubs [39]. For TL1 postdoctoral training programs, 84% of hubs offered mentor training, but only 30% required this training [39]. Most TL1 programs also provided mentee training to their mentees, with training available for both predoctoral (78%) and postdoctoral (76%) trainees. However, as with mentor training, mentee training was rarely required (22% of predoctoral programs and 32% of postdoctoral programs) [39].

KL2 programs often utilized multiple strategies to facilitate KL2 Scholars’ mentoring experience. In addition to mentor training, other strategies included aligning mentor and mentee expectations, mentor evaluations, mentor awards, and subsidizing membership in mentoring academies [37,40]. Fifty-three percent of KL2 programs reported offering incentives, such as consideration in annual evaluation or promotion [41]. KL2 programs often utilized existing institutional mentoring policies, training, and resources [36,41].

A majority of KL2 programs had mechanisms to communicate the programmatic expectations for the mentoring relationship to mentors (52%) and scholars (54%), such as contracts, agreements, signed letters, orientation meetings, handbooks, oversight committees, and initial meetings with the program director [38]. However, only 28%–38% of KL2 programs had written mentor-mentee agreements [40,41] and 21% of KL2 programs had written policies to manage conflicts [41].

To evaluate mentors, most KL2 programs reported having some kind of formal evaluation, with 79% reporting the use of one or more evaluation processes [41]. Thirty-seven percent of KL2 programs reported conducting formal evaluations of the mentoring relationship, with 11% of programs developing their own survey instrument. Typically, mentees rate their mentors using annual or semi-annual surveys [40,41].

While there has been research on the use of specific mentoring activities, such as Individual Development Plans (IDPs), in the broader mentorship literature [47–49], there has been little research on their influence on mentee outcomes in the CTSA literature. One recent survey [39] of the mentoring activities of TL1 predoctoral and postdoctoral programs examined the strategic use of IDPs, which are currently required for NIH-supported research training programs. They found that IDPs were primarily being used to track trainee progress, set milestones, and provide opportunities for “midcourse” corrections in the training. However, as far as we are aware, there have been no studies examining how IDP adherence affects mentee productivity. Lastly, there were several reported barriers to mentoring, including a lack of knowledge of or appreciation for the importance of mentor training, a lack of resources to provide training, and a lack of accountability for organizing mentor training activities [36,40]. There was also a noted inconsistency in how mentor training is conducted across institutions [41]. The use of IDPs also varied, with IDPs not being used for any evident purpose by 7% of TL1 predoctoral programs and 10% of TL1 postdoctoral programs. Concerningly, some CTSA hubs reported that mentors and program directors contributed to trainees’ IDPs without trainee input [39].

Mentored research programs

To understand how mentoring impacted mentees, we examined the results of manuscripts on mentored research programs. Across all four studies we reviewed, mentees reported that their mentors were critical to their success. Specifically, mentees reported that their mentors helped them generate ideas for research studies, plan and design studies, review drafts of grants and manuscripts, and acted as a resource for advice, encouragement, and feedback [30,32,42]. Additionally, mentees emphasized the importance of identifying and aligning expectations with their mentors [38].

Several other programmatic outcomes were also reported in these articles; however, we chose not to report those findings here because we could not determine whether they were specifically attributable to the mentoring that participants received. Mentored research programs often include a range of didactic elements, including grant and scientific writing, team science, entrepreneurship, leadership, community engagement, and health disparities research [38,50].

Mentor training programs

To examine whether mentor training programs had an impact on mentors and mentees, we reviewed several studies. Most of the manuscripts evaluating mentor training programs demonstrated the impact of mentor training on one or more levels of evaluation outcomes, including participant experience, learning or skill, behavior, programmatic results, or organizational impact [51,52]. Most of these manuscripts assessed both the training experience and learning outcomes [18,22,43–46]. The types of mentor training programs, inputs, outcomes, and statistical tests used in these papers are shown in Table 3.

Table 3.

Type of mentor training, outcomes, inputs, and statistical tests

Study | Mentor/mentee training offered | Outcome variables | Input variables | Statistical tests
Behar-Horenstein 2017 | | Career advancement (e.g., promotion, tenure and leadership roles); research career (e.g., continuing work in research roles) | N/A | N/A
Behar-Horenstein 2019 | Workshops/seminars, mentor academy or master mentor | Six MCA constructs/26 specific skills (items), overall quality of mentoring, meeting of mentee’s expectations | Gender and rank (associate vs. assistant vs. full professor) | Percentages, means, effect sizes, p-values, paired-samples t test, MANOVA, Wilcoxon test
Bonilha 2019 | University of Wisconsin Program (Entering Mentoring) | Satisfaction with department support and commitment to institution, career advancement (e.g., promotion, tenure and leadership roles), research career (e.g., continuing work in research roles) | Have mentor (Yes/No), gender (Male/Female), knowledge of promotion criteria (Yes/No) | Descriptive statistics, chi-square tests, Mantel-Haenszel chi-square, Fisher’s exact test, multivariable logistic regression model
Feldman 2009 | Workshops/seminars | Mentoring skills, career satisfaction, confidence in mentoring skills | N/A | Averages, standard deviation and percent of pre and post skills confidence, paired t-tests, p-values, open coding for qualitative data
Feldman 2012 | Workshops/seminars | Frequency of the application of knowledge, attitude, or skills obtained, career satisfaction, confidence in mentoring skills, ability to assist mentees, grant related outcomes | N/A | Averages, standard deviation and percent
Martina 2014 | Workshops/seminars, online training, training as dyads or triads | N/A | N/A | Counts and percentages
McGee 2023 | Workshops/seminars, University of Wisconsin Program (Entering Mentoring) | Mentoring skills, intended behavior changes, overall satisfaction, perceived value of mentor training | N/A | Descriptives, Cronbach’s alpha, paired t-tests, Cohen’s D
Nearing 2020 | Workshops/seminars, training as dyads or triads | Self-reported levels of experience and confidence in mentorship-related skills | N/A | Paired samples t tests
Patino 2017 | | Mentor pair communication, mentor pair success in meeting program goals, completing a community-engaged activity within the program, grant related outcomes, research career (e.g., continuing work in research roles) | N/A | N/A
Pfund 2013 | Workshops/seminars, University of Wisconsin Program (Entering Mentoring) | Six MCA constructs/26 specific skills (items) | N/A | Basic descriptive analysis (counts, percentages, means), paired t-tests of the post minus retrospective scores
Pfund 2014 | Workshops/seminars, University of Wisconsin Program (Entering Mentoring) | Six MCA constructs/26 specific skills (items) | N/A | Wilcoxon rank-sum test, mean group difference, p-values, chi-square tests
Robinson 2016 | | Mid-career challenges and successes | N/A | Counts, percentages
Schweitzer 2019 | Workshops/seminars, training as dyads or triads, other | Six MCA constructs/26 specific skills (items) | N/A | Paired t-tests, means
Spence 2018 | Workshops/seminars, online training, other | Six MCA constructs/26 specific skills (items), research career (e.g., continuing work in research roles) | Financial, senior faculty mentors, grant specialists, biostatisticians, mentoring toolkits, career development and mentoring workshops | Counts, percentages, means, and paired t-tests
Trejo 2022 | Workshops/seminars, University of Wisconsin Program (Entering Mentoring) | Mentoring quality, mentoring behaviors, institutional climate, six MCA constructs/26 specific skills (items), supportive environment for women and UR, morale | N/A | Means and percent, Mann–Whitney U test, Fisher’s exact test (two-tailed)
Weber-Main 2019 | Workshops/seminars, online training | Perceived skills gains, intentions to change mentoring behaviors, implemented changes in mentoring behaviors | N/A | Two-sample t-tests, paired t-tests, p-values

*MCA = Mentoring Competency Assessment.

Overall, quantitative data analysis provided evidence for the effectiveness of mentor training. Most studies found that mentor training programs increased mentors’ self-reported confidence in their mentoring skills [15,18,22,35,44–46]. There is evidence that this increase in confidence was durable and long-lasting [34] and significantly greater than that of a control group that did not receive training [21]. This finding is consistent with previous research on mentor training programs in medicine [53]. There is also evidence that mentor training programs have similar benefits for mentees, with mentees feeling more confident in their ability to connect with potential and future mentors, know what characteristics to look for in current and future mentors, and manage the work environment [45].

There appear to be several other benefits of mentor training. In one study [15], as a result of mentor training, mentors reported statistically significant increases in the overall quality of their mentoring, their perception that they were currently meeting their mentees’ expectations, and their ability to set clear expectations for mentees. However, these benefits may differ by faculty rank. While assistant and associate professors’ perceived ability to help mentees acquire resources and set clear expectations significantly increased from pre-test to post-test, full professors’ scores on these items did not change significantly [15]. Mentor training has also been found to significantly increase the percentage of faculty familiar with their departmental mentoring plans, the percentage of faculty (instructors, assistant professors, and associate professors) with a mentor, the percentage of full professors in a mentoring role, the percentage of faculty familiar with their college’s promotion criteria, and the percentage of faculty satisfied with departmental support, and to significantly decrease the number of mentees perceiving mentoring resources to be insufficient [29].

There has been little research in the CTSA context on the effects of demographic characteristics on the mentorship process. Evidence from two studies suggests that mentor training may be more beneficial for male mentors than for female mentors. Tests of interaction effects showed that, as a result of the training they received, males provided more constructive feedback and helped mentees develop strategies to meet goals; no significant increase was found for females [15]. However, female gender was significantly associated with satisfaction with departmental support among mentees [29]. We caution that the findings of these two studies are narrowly focused and so cannot provide meaningful insights into the effects of other demographic characteristics on the mentorship process.

A number of studies also evaluated the impact of mentor training on participant behavior [15–17,19,20,31,34]. In an experimental study, Pfund and colleagues [20,21] compared the impact of mentor training on participants’ experience and learning, measured with the MCA, against a control group, and also evaluated participants’ actual mentoring practices throughout the course of a multi-session training. They found that, compared to the control group, mentors in the intervention group reported a significantly greater increase in MCA composite scores and also reported more changes in their mentoring practices.

Two studies focused on evaluating the impact of mentor training at organizational or systemic levels. Trejo and colleagues [35] found that participants in a faculty mentor training program reported improved overall morale and an increased perception of a supportive organizational environment. Mentor training can also improve rates of faculty satisfaction and retention [29].

Discussion

Summary of findings

The results of nationwide surveys show that there is a wide variety of mentoring practices across the CTSA Consortium, including the mode of delivery, whether mentor training is mandatory or optional, strategies to facilitate KL2 Scholars’ mentorship experience, mechanisms to communicate programmatic expectations for the mentoring relationship, and methods to evaluate mentors. Studies on mentored research training programs demonstrated that mentors can have a substantive impact on their mentees’ professional work. The findings of mentor training programs in a CTSA context are consistent with the body of literature on mentor training programs in medicine more broadly [53]. Specifically, mentor training was consistently found to boost mentors’ confidence in several domains of mentoring competence.

While there is an ample body of research on mentorship supported by CTSA institutions, there are also key gaps in the current research. Most notably, many research studies on mentorship use small sample sizes, self-report measures, and correlational designs, which have limited effectiveness for evaluating student-faculty mentorship programs [54]. There is a need for more rigorous designs, such as longitudinal and quasi-experimental methods, that enable researchers to better understand the dynamics of mentoring relationships over the course of a mentee’s research career. It is reasonable to speculate that this gap exists because cross-sectional evaluations of mentoring practices, programs, and training are much less time- and resource-intensive to conduct than longitudinal studies of mentoring relationships over long periods of time. Outside the CTSA context, a recent review [55] found that only 24 of 109 reviewed articles on mentoring in medical and STEM settings used a longitudinal design, suggesting that this kind of design is still rarely used in mentoring research. The implications of this general trend in the scientific literature are discussed further below in the directions for future research.

There are also clear gaps in the literature regarding mentorship teams, processes, and quality improvement initiatives. Few of the papers reviewed here included process measures of mentorship in their evaluations. As Steiner [56] writes: “Such structural tools will not improve mentorship outcomes unless they are consistently adopted into the day-to-day process of mentorship…since all good mentorships are journeys, training programs should monitor and evaluate mentoring relationships continuously—similarly to the existing strategies for evaluating clinical or classroom teaching. Systematic assessments of mentorship are surprisingly rare.” (p. 3–4). Potentially measurable mentorship processes identified in the STEM literature include career support, psychosocial support, role modeling, and negative mentoring experiences [5,8]. As far as we are aware, these processes have not yet been investigated in a translational science mentoring context.

There were several limitations of this review. First, it is possible that we could have missed relevant articles in our literature search because of our choice of search terms, which were intentionally designed to be limited and focused. In addition, at the time of this writing, some of the studies (n = 10) included in this review are over 10 years old and may not reflect the current state of mentor activity and training across the CTSA Consortium. Lastly, publication bias likely excluded findings that were not significant. Because of this, the studies reviewed may not give an accurate picture of the impact of mentor training programs [57].

Directions for future research

Future research on CTSA mentored research programs should utilize sophisticated statistical methods involving comparison groups, power analysis, quasi-experimental designs, and propensity score matching [15,20,58]. The use of such rigorous methods can answer important empirical questions about causality, providing program directors with useful, in-depth knowledge of how their programs work. While there have been a few rigorous evaluations that statistically model the impact of a mentorship program on mentee outcomes in clinical and translational science [17], much more of this research is needed. Rigorous studies of the impact of mentorship in STEM fields could provide a guide for evaluation of mentorship programs in clinical and translational science [59–61]. In addition, CTSA institutions ought to use standardized methodologies to evaluate their programs [42]. To facilitate this process, software programs such as Flight Tracker [62] can be used to extract data required for standardized evaluations of mentorship programs, including mentee grant success and counts of papers co-authored by mentors and mentees [56].
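
As one illustration of the comparison-group methods recommended here, the sketch below estimates propensity scores for receipt of mentor training from hypothetical covariates and forms matched pairs before comparing an outcome. All data and variable names are invented; a real evaluation would also require balance diagnostics and appropriate variance estimation.

```python
# Hypothetical sketch of propensity score matching: model the probability of
# receiving mentor training from baseline covariates, then pair each trained
# mentor with the untrained mentor whose score is closest.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Invented baseline covariates: faculty rank (0-2) and years of experience.
rank = rng.integers(0, 3, n)
years = rng.normal(10, 5, n).clip(0)
X = np.column_stack([rank, years])
treated = rng.integers(0, 2, n).astype(bool)      # received mentor training
outcome = rng.normal(5 + 0.5 * treated, 1.0)      # e.g., post-training MCA composite

# 1. Estimate propensity scores P(training | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Greedy 1:1 nearest-neighbor matching on the propensity score.
available = set(np.flatnonzero(~treated))
pairs = []
for t in np.flatnonzero(treated):
    if not available:
        break
    c = min(available, key=lambda j: abs(ps[t] - ps[j]))
    pairs.append((t, c))
    available.remove(c)

# 3. Average treated-minus-control difference across matched pairs.
diffs = [outcome[t] - outcome[c] for t, c in pairs]
print(f"Matched pairs: {len(pairs)}; mean outcome difference: {np.mean(diffs):.2f}")
```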

Future research should also evaluate the criteria and processes used to match mentees with appropriate mentors and mentorship teams, how mentor and mentee expectations are aligned, and how mentor training affects long-term career outcomes [16,38,46]. This includes research on whether mentee demographics (e.g., race, ethnicity, gender, sex, and other commonly used measures of demographic representation) might affect mentoring practices or outcomes. The paucity of research on demographic effects on mentorship in the CTSA context stands in contrast to the broader mentorship literature, for which ample research exists. For example, previous research has found that women perceived mentorship to be more valuable to their career development and reported receiving more psychosocial support [63–65]. More research is needed to establish whether similar relationships also exist in a CTSA context.

There is also a need to understand how mentees cultivate relationships with multiple mentors as part of a mentoring team, and how mentors help mentees launch and sustain an independent scientific career [16]. Mentorship teams are particularly important for K Scholars, and more research is needed to understand how mentorship networks function.

More research is needed on how specific mentorship activities influence mentee outcomes. As noted in the NASEM report [5], “there is little consensus on how to determine either the most essential specific forms of mentorship support or the programmatic or institutional structures that could enhance, incentivize, or reward mentorship support.” (p. 138). Prior research in academic medicine suggests that adequate institutional support, allowing mentees to choose their mentors, giving mentors and mentees protected time for mentorship, and using written statements to set boundaries and provide accountability all contribute to successful mentorship [66]. The CTSA context could provide a good testing ground for groundbreaking research that could inform the broader body of knowledge on mentorship. One possible avenue of research would be to examine the effects of IDP adherence on mentee productivity. While we predict that mentees who adhere to their IDPs will be more productive, to our knowledge no research supporting this hypothesis currently exists, even outside the CTSA context.

Finally, future research should utilize conceptual models of mentorship. An early review of evaluations of clinical and translational research mentors [12] proposed a mentorship evaluation model that measured outcome as a function of individual (e.g., demographic factors, education, personality) and environmental (e.g., institutional resources, institutional attitudes and institutional policies) factors. We build upon this model by adding indicators of mediating processes (Figure 2).

Figure 2. Proposed model for evaluating clinical and translational science awards (CTSA) mentoring.

The simple model shown in Figure 2 is intended to help guide future research on mentorship and mentored research training programs in the CTSA Consortium. Our goal in developing this model was to address a specific gap: the lack of a comprehensive framework explicitly designed to guide the evaluation of mentoring relationships, particularly within the complex environment of CTSA programs. We were informed by established conceptual frameworks depicting the functions of mentorship (e.g., psychosocial and career support) [67] and overarching theories that explain mentoring processes (such as ecological systems theory and social capital theory) [68–70]. Our awareness of these foundational theoretical contributions significantly shaped our understanding of mentoring dynamics.

Moreover, we considered process-oriented models of mentoring, notably the seminal work by Eby et al. [7]. This model’s emphasis on inputs, processes, and outcomes directly informed our conceptualization of how an effective evaluation framework should function to better understand mentoring relationships. While existing models describe what mentoring is or how it functions, our aim is to provide a framework that translates these understandings into actionable components for rigorous evaluation efforts. We identified a distinct need for a practical model that helps researchers and practitioners systematically assess the effectiveness and impact of mentoring relationships in a structured way, rather than solely describing mentoring processes or functions.

The first column of the model includes several key inputs, including the mentors’ and mentees’ prior experiences, similarities and differences between the mentor and mentee, mentorship training, and mentorship structure. The second column includes several mediating indicators of relationship quality, including the quality of the relationship between the mentor and mentee and the frequency of interaction between mentors and mentees. The third column of the model includes subjective and objective outputs, including mentee career-related performance, attitudinal outcomes, persistence, and learning outcomes. Lastly, there are several contextual factors representing institutional and systemic influences (such as organizational support and culture, funding availability, and institutional priorities) that we hypothesize influence inputs, mediating indicators, and outputs.

Conclusion

This scoping review shows that the CTSA Consortium is producing a growing body of research on mentorship and mentor training. Surveys conducted across the CTSA Consortium show that there is great diversity in scholars’ and trainees’ mentoring experiences and outcomes. Evaluations of mentored research training programs demonstrate their contributions to scholars’ and trainees’ professional careers, and there is ample evidence demonstrating that mentor training is an effective strategy to build the mentorship skills and competencies needed to cultivate the clinical and translational science workforce. However, there is a clear need for more practice-based investigation, especially to identify potential inputs and processes. The current literature provides a solid foundation on which future research should build.

Author contributions

Phillip Ianni: Data curation, Formal analysis, Investigation, Methodology, Project administration, Writing-original draft, Writing-review & editing; Elias Samuels: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing-original draft, Writing-review & editing; Ellen Champagne: Data curation, Formal analysis, Visualization, Writing-original draft, Writing-review & editing; Eric Nehl: Conceptualization, Formal analysis, Writing-original draft, Writing-review & editing; Deborah DiazGranados: Conceptualization, Formal analysis, Writing-original draft, Writing-review & editing.

Funding statement

This research was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Numbers KL2TR002381, K12TR004374, R25TR004776, TL1TR002382, T32TR004371, T32TR004764, UL1TR002378, UM1TR004360, and UM1TR004404.

Competing interests

The authors declare no competing interests.

References

  • 1. Burgess A, van Diggele C, Mellis C. Mentorship in the health professions: a review. Clin Teach. 2018;15:197–202. doi: 10.1111/tct.12756. [DOI] [PubMed] [Google Scholar]
  • 2. Underhill CM. The effectiveness of mentoring programs in corporate settings: a meta-analytical review of the literature. J Vocat Behav. 2006;68:292–307. [Google Scholar]
  • 3. Bortnowska H, Seiler B. Formal mentoring in nonprofit organizations. Model proposition. Management. 2019;23:188–208. [Google Scholar]
  • 4. Ehrich L, Hansford B. Mentoring in the public sector. Int J Pract Exp in Pro Educ. 2008;11:1–16. [Google Scholar]
  • 5. Dahlberg ML, Byars-Winston A, eds. The Science of Effective Mentorship in STEMM. Washington, DC: National Academies Press, 2019. [PubMed] [Google Scholar]
  • 6. Pleschova G, McAlpine L. Enhancing university teaching and learning through mentoring: a systematic review of the literature. Int J Ment Coach Educ. 2015;4:107–125. [Google Scholar]
  • 7. Eby LT, Allen TD, Hoffman BJ, et al. An interdisciplinary meta-analysis of the potential antecedents, correlates, and consequences of protege perceptions of mentoring. Psychol Bull. 2013;139:441–476. doi: 10.1037/a0029279. [DOI] [PubMed] [Google Scholar]
  • 8. Hernandez PR. Landscape of assessments of mentoring relationship processes in postsecondary STEMM contexts: a synthesis of validity evidence from mentee, mentor, and institutional/programmatic perspectives. Paper commissioned by the National Academies of Sciences, Engineering, and Medicine Committee on the Science of Effective Mentorship in STEMM, 2018.
  • 9. Rubio DM, Schoenbaum EE, Lee LS, et al. Defining translational research: implications for training. Acad Med. 2010;85:470–475. doi: 10.1097/ACM.0b013e3181ccd618. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Kraiger K, Finkelstein LM, Varghese LS. Enacting effective mentoring behaviors: development and initial investigation of the cuboid of mentoring. J Bus Psychol. 2019;34:403–424. [Google Scholar]
  • 11. Gilliland CT, White J, Gee B, et al. The fundamental characteristics of a translational scientist. ACS Pharmacol Transl Sci. 2019;2:213–216. doi: 10.1021/acsptsci.9b00022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Meagher E, Taylor L, Probsfield J, Fleming M. Evaluating research mentors working in the area of clinical translational science: a review of the literature. Clin Transl Sci. 2011;4:353–358. doi: 10.1111/j.1752-8062.2011.00317.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. National Institutes of Health. PAR-25-195 Limited Competition: Ruth L. Kirschstein National Research Service Award (NRSA) Postdoctoral Research Training Grant for the Clinical and Translational Science Awards (CTSA) Program (T32 Clinical Trial Not Allowed). 2024.
  • 14. National Institutes of Health. PAR-25-196 Limited Competition: Mentored Research Career Development Program Award in Clinical and Translational Science Awards (CTSA) Program (K12 Clinical Trial Optional) 2024.
  • 15. Behar-Horenstein LS, Feng X, Prikhidko A, Su Y, Kuang H, Fillingim RB. Assessing mentor academy program effectiveness using mixed methods. Mentor Tutoring. 2019;27:109–125. doi: 10.1080/13611267.2019.1586305. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Behar-Horenstein LS, Prikhidko A. Exploring mentoring in the context of team science. Mentor Tutoring. 2017;25:430–454. doi: 10.1080/13611267.2017.1403579. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Spence JP, Buddenbaum JL, Bice PJ, Welch JL, Carroll AE. Independent investigator incubator (I(3)): a comprehensive mentorship program to jumpstart productive research careers for junior faculty. BMC Med Educ. 2018;18:186. doi: 10.1186/s12909-018-1290-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Feldman MD, Huang L, Guglielmo BJ, et al. Training the next generation of research mentors: the university of California, San Francisco, clinical & translational science institute mentor development program. Clin Transl Sci. 2009;2:216–221. doi: 10.1111/j.1752-8062.2009.00120.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Patino CM, Kubicek K, Robles M, Kiger H, Dzekov J. The community mentorship program: providing community-engagement opportunities for early-stage clinical and translational scientists to facilitate research translation. Acad Med. 2017;92:209–213. doi: 10.1097/acm.0000000000001332. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Pfund C, House S, Spencer K, et al. A research mentor training curriculum for clinical and translational researchers. Clin Transl Sci. 2013;6:26–33. doi: 10.1111/cts.12009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Pfund C, House SC, Asquith P, et al. Training mentors of clinical and translational research scholars: a randomized controlled trial. Acad Med. 2014;89:774–782. doi: 10.1097/ACM.0000000000000218. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Weber-Main AM, Shanedling J, Kaizer AM, Connett J, Lamere M, El-Fakahany EE. A randomized controlled pilot study of the University of Minnesota mentoring excellence training academy: a hybrid learning approach to research mentor training. J Clin Transl Sci. 2019;3:152–164. doi: 10.1017/cts.2019.368. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Solmeyer A, Constance N. Unpacking the “black box” of social programs and policies: introduction. Am J Eval. 2015;36:470–474. [Google Scholar]
  • 24. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Method. 2005;8:19–32. [Google Scholar]
  • 25. Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5:69. doi: 10.1186/1748-5908-5-69. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169:467–473. doi: 10.7326/M18-0850. [DOI] [PubMed] [Google Scholar]
  • 27. Peters MDJ, Godfrey CM, McInerney P, Soares CB, Khalil H, Parker D. The Joanna Briggs Institute Reviewers Manual. The Joanna Briggs Institute; 2015. [Google Scholar]
  • 28. Veritas Health Innovation. Covidence systematic review software. Veritas Health Innovation, (www.covidence.org) Accessed February 26, 2024. [Google Scholar]
  • 29. Bonilha H, Hyer M, Krug E, et al. An institution-wide faculty mentoring program at an academic health center with 6-year prospective outcome data. J Clin Transl Sci. 2019;3:308–315. doi: 10.1017/cts.2019.412. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Stefely JA, Theisen E, Hanewall C, et al. A physician-scientist preceptorship in clinical and translational research enhances training and mentorship. BMC Med Educ. 2019;19:89. doi: 10.1186/s12909-019-1523-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31. Robinson GF, Schwartz LS, DiMeglio LA, Ahluwalia JS, Gabrilove JL. Understanding career success and its contributing factors for clinical and translational investigators. Acad Med. 2016;91:570–582. doi: 10.1097/acm.0000000000000979. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32. Smyth SS, Coller BS, Jackson RD, et al. KL2 scholars’ perceptions of factors contributing to sustained translational science career success. J Clin Transl Sci. 2022;6:e34. doi: 10.1017/cts.2021.886. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Fleming M, House S, Hanson VS, et al. The mentoring competency assessment: validation of a new instrument to evaluate skills of research mentors. Acad Med. 2013;88:1002–1008. doi: 10.1097/ACM.0b013e318295e298. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. Feldman MD, Steinauer JE, Khalili M, et al. A mentor development program for clinical translational science faculty leads to sustained, improved confidence in mentoring skills. Clin Transl Sci. 2012;5:362–367. doi: 10.1111/j.1752-8062.2012.00419.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. Trejo J, Wingard D, Hazen V, et al. A system-wide health sciences faculty mentor training program is associated with improved effective mentoring and institutional climate. J Clin Transl Sci. 2022;6:e18. doi: 10.1017/cts.2021.883. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Abedin Z, Rebello TJ, Richards BF, Pincus HA. Mentor training within academic health centers with clinical and translational science awards. Clin Transl Sci. 2013;6:376–380. doi: 10.1111/cts.12067. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37. Burnham EL, Schiro S, Fleming M. Mentoring K scholars: strategies to support research mentors. Clin Transl Sci. 2011;4:199–203. doi: 10.1111/j.1752-8062.2011.00286.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38. Huskins WC, Silet K, Weber-Main AM, et al. Identifying and aligning expectations in a mentoring relationship. Clin Transl Sci. 2011;4:439–447. doi: 10.1111/j.1752-8062.2011.00356.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39. Sancheznieto F, Sorkness CA, Attia J, et al. Clinical and translational science award T32/TL1 training programs: program goals and mentorship practices. J Clin Transl Sci. 2022;6:e13. doi: 10.1017/cts.2021.884. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40. Silet KA, Asquith P, Fleming MF. Survey of mentoring programs for KL2 scholars. Clin Transl Sci. 2010;3:299–304. doi: 10.1111/j.1752-8062.2010.00237.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41. Tillman RE, Jang S, Abedin Z, Richards BF, Spaeth-Rublee B, Pincus HA. Policies, activities, and structures supporting research mentoring: a national survey of academic health centers with clinical and translational science awards. Acad Med. 2013;88:90–96. doi: 10.1097/ACM.0b013e3182772b94. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42. Comeau DL, Escoffery C, Freedman A, Ziegler TR, Blumberg HM. Improving clinical and translational research training: a qualitative evaluation of the Atlanta clinical and translational science institute KL2-mentored research scholars program. J Investig Med. 2017;65:23–31. doi: 10.1136/jim-2016-000143. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43. Martina CA, Mutrie A, Ward D, Lewis V. A sustainable course in research mentoring. Clin Transl Sci. 2014;7:413–419. doi: 10.1111/cts.12176. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44. McGee RE, Blumberg HM, Ziegler TR, et al. Mentor training for junior faculty: a brief evaluation report from the Georgia clinical and translational science alliance. J Investig Med. 2023;71:577–585. doi: 10.1177/10815589231168601. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45. Nearing KA, Nuechterlein BM, Tan S, Zerzan JT, Libby AM, Austin GL. Training mentor-mentee pairs to build a robust culture for mentorship and a pipeline of clinical and translational researchers: the colorado mentoring training program. Acad Med. 2020;95:730–736. doi: 10.1097/ACM.0000000000003152. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46. Schweitzer JB, Rainwater JA, Ton H, Giacinto RE, Sauder CAM, Meyers FJ. Building a comprehensive mentoring academy for schools of health. J Clin Transl Sci. 2019;3:211–217. doi: 10.1017/cts.2019.406. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47. Vanderford NL, Evans TM, Weiss LT, Bira L, Beltran-Gastelum J. Use and effectiveness of the individual development plan among postdoctoral researchers: findings from a cross-sectional study. F1000Res. 2018;7:1132. doi: 10.12688/f1000research.15610.2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48. Vanderford NL, Evans TM, Weiss LT, Bira L, Beltran-Gastelum J. A cross-sectional study of the use and effectiveness of the individual development plan among doctoral students. F1000Res. 2018;7:722. doi: 10.12688/f1000research.15154.2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49. Thompson HJ, Santacroce SJ, Pickler RH, et al. Use of individual development plans for nurse scientist training. Nurs Outlook. 2020;68:284–292. doi: 10.1016/j.outlook.2020.01.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50. Sorkness CA, Scholl L, Fair AM, Umans JG. KL2 mentored career development programs at clinical and translational science award hubs: practices and outcomes. J Clin Transl Sci. 2020;4:43–52. doi: 10.1017/cts.2019.424. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51. Kirkpatrick D, Kirkpatrick J. Evaluating training programs: the four levels. 3rd ed. Berrett-Koehler, 2006. [Google Scholar]
  • 52. Yardley S, Dornan T. Kirkpatrick’s levels and education ‘evidence’. Med Educ. 2012;46:97–106. doi: 10.1111/j.1365-2923.2011.04076.x. [DOI] [PubMed] [Google Scholar]
  • 53. Sheri K, Too JYJ, Chuah SEL, Toh YP, Mason S, Radha Krishna LK. A scoping review of mentor training programs in medicine between 1990 and 2017. Med Educ Online. 2019;24:1555435. doi: 10.1080/10872981.2018.1555435. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54. Campbell CD. Best practices for student-faculty mentoring programs. In: Allen TD, Eby LT, eds. The Blackwell Handbook of Mentoring. Oxford, UK: Blackwell Publishing, 2007: 325–343. [Google Scholar]
  • 55. Gangrade N, Samuels C, Attar H, et al. Mentorship interventions in postgraduate medical and STEM settings: a scoping review. CBE Life Sci Educ. 2024;23:ar33. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56. Steiner JF. Promoting mentorship in translational research: should we hope for Athena or train mentor? Acad Med. 2014;89:702–704. doi: 10.1097/ACM.0000000000000205. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57. Torgerson C. Publication bias: the Achilles’ heel of systematic reviews? Brit J Educ Stud. 2006;54(1):89–102. [Google Scholar]
  • 58. Samuels E, Ianni PA, Eakin B, Champagne E, Ellingrod V. A quasiexperimental evaluation of a clinical research training program. Perf Impr Qtr. 2023;36:4–13. [Google Scholar]
  • 59. Kuchynka SL, Gates AE, Rivera LM. When and why is faculty mentorship effective for underrepresented students in STEM? A multicampus quasi-experiment. Cult Divers Ethn Min. 2023;31:69–75. [DOI] [PubMed] [Google Scholar]
  • 60. Hernandez PR, Ferguson CF, Pedersen R, Richards-Babb M, Quedado K, Shook NJ. Research apprenticeship training promotes faculty-student psychological similarity and high-quality mentoring: a longitudinal quasi-experiment. Mentor. Tutor.: Partnership Learn. 2023;31:163–183. [Google Scholar]
  • 61. Estrada M, Hernandez PR, Schultz PW. A longitudinal study of how quality mentorship and research experience integrate underrepresented minorities into STEM careers. CBE Life Sci Educ. 2018;17:ar9. doi: 10.1187/cbe.17-04-0066. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62. Helton R, Pearson S, Hartmann K. Flight Tracker: a REDCap tool to streamline career development grant preparation and reporting. Presented at: Association for Clinical and Translational Science; 2024; Las Vegas, Nevada. [Google Scholar]
  • 63. Shen MR, Tzioumis E, Andersen E, et al. Impact of mentoring on academic career success for women in medicine: a systematic review. Acad Med. 2022;97:444–458. doi: 10.1097/ACM.0000000000004563. [DOI] [PubMed] [Google Scholar]
  • 64. Farkas AH, Bonifacino E, Turner R, Tilstra SA, Corbelli JA. Mentorship of women in academic medicine: a systematic review. J Gen Intern Med. 2019;34:1322–1329. doi: 10.1007/s11606-019-04955-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65. O’Brien KE, Biga A, Kessler SR, Allen TD. A meta-analytic investigation of gender differences in mentoring. J Manage. 2010;36:537–554. [Google Scholar]
  • 66. Kashiwagi DT, Varkey P, Cook DA. Mentoring programs for physicians in academic medicine: a systematic review. Acad Med. 2013;88:1029–1037. doi: 10.1097/ACM.0b013e318294f368. [DOI] [PubMed] [Google Scholar]
  • 67. Kram KE. Phases of the mentor relationship. Acad Manage J. 1983;26:608–625. [Google Scholar]
  • 68. Chandler DE, Kram KE, Yip J. An ecological systems perspective on mentoring at work: a review and future prospects. Acad Manag Ann. 2011;5:519–570. [Google Scholar]
  • 69. Hezlett SA, Gibson SK. Linking mentoring and social capital: implications for career and organization development. Adv. Dev. Human Resour. 2007;9:384–411. [Google Scholar]
  • 70. Aikens ML, Sadselia S, Watkins K, Evans M, Eby LT, Dolan EL. A social capital perspective on the mentoring of undergraduate life science researchers: an empirical study of undergraduate-postgraduate-faculty triads. CBE Life Sci Educ. 2016;15:ar16. doi: 10.1187/cbe.15-10-0208. [DOI] [PMC free article] [PubMed] [Google Scholar]
