Abstract
A hallmark of success for early career biomedical researchers is the acquisition of research funding. There are marked disparities among principal investigators (PIs) both in grant submission rates and in the likelihood of receiving national funding. The National Research Mentoring Network (NRMN) was funded by the National Institutes of Health to diversify the biomedical research workforce and included grantsmanship training for early career researchers. Self-efficacy in developing research grant applications improves significantly over time with training and experience. We created a 19-item self-efficacy assessment inventory. Our aims were to confirm the internal consistency of a three-factor solution for grantsmanship confidence and to test the likelihood that self-efficacy influences grant proposal submission timing. We gathered data from 190 diverse biomedical trainees who completed NRMN grantsmanship training between August 2015 and June 2017. Findings revealed high internal consistency for the items in each of three factors. There was a statistically significant association between self-efficacy mean scores and grant submission timing: for every one-point increase in the mean score, the odds of submitting a grant within 6 months posttraining increased by 69%. An abbreviated inventory of grantsmanship skills self-efficacy is a promising tool for monitoring changes over time in early career researchers and for promoting tailored grantsmanship interventions.
Keywords: research grantsmanship, self-efficacy assessments, the U.S. National Research Mentoring Network, research workforce diversity
Introduction
African American, Hispanic, Native American, and Native Alaskan people remain vastly underrepresented in the U.S. biomedical research workforce despite previous and ongoing efforts to support the training and career progression of investigators from underrepresented groups.1–5 An important hallmark of career success for biomedical researchers is the acquisition of major research funding from agencies such as the National Institutes of Health (NIH) of the United States. However, two recent reports on the racial composition of NIH research grant applicants and awardees revealed marked disparities in both proposal submissions and the likelihood of receiving an NIH award.6,7 Interventions to reduce and ultimately eliminate these funding disparities are vital to increasing the proportion of underrepresented scientists who enter and persist in the biomedical research workforce and to addressing the health needs of the United States’ increasingly diverse population.8
Few have written about mentoring programs that address underrepresented minority faculty or about the success in real-world conditions of those that have been implemented.9 One current innovative approach—implemented in 2015 as part of the NIH National Research Mentoring Network to Diversify the Biomedical Workforce (NRMN)—involves skills training and practice coupled with intensive coaching designed for early career postdoctoral fellows and junior faculty in lengthy (3–12 months) research proposal development programs. The NRMN offers four models of grantsmanship coaching programs (GCP, described below) that were adapted from existing successful models implemented at academic institutions across the U.S. Although each of the GCP models has distinct features, all strive to increase participants’ self-efficacy in domains that support the development of high quality, competitive research grant proposals.
Bandura has shown that confidence in one’s ability to perform specific tasks, known as self-efficacy, is related to positive outcomes in self-regulating, or controlling, one’s behavior.10,11 Others have applied Bandura’s self-efficacy theory to develop interventions that improve confidence in the research skills needed to advance through a clinical research career pipeline12,13 and that influence the timing of research grant and publication submissions.14,15 For example, Mullikin and colleagues developed the first measure of research self-efficacy to target the unique clinical research skills required of physician-scientists.15 Described in detail in 2007, their iterative process resulted in the 88-item Clinical Research Appraisal Inventory (CRAI). It was tested (n = 173 racially diverse academic clinical researchers across five academic ranks participating in controlled experiments) for internal consistency on eight skill-set factors: study design and data analysis; funding a study; reporting and presenting a study; conceptualizing; responsible conduct of research; collaborating with others; managing project staff; and organizing a study. The authors reported a median Cronbach’s coefficient alpha of 0.96 with a range of 0.89 to 0.97 and ANOVA eta-squared effect size estimates ranging from 0.05 to 0.14.
Bakken, a co-investigator on the CRAI development project, and her colleagues used a quasi-experimental design to evaluate a clinical research short-course intervention with 58 biomedical graduate students and postdoctoral fellows.16 Using the original 88-item CRAI, they measured research self-efficacy pre- and postintervention and reported an effect size estimate of 0.598 for within-intervention-group differences.
Subsequently, Jeffe and her colleagues shortened a 69-item version of the CRAI, seeking a more sensitive, less burdensome, and streamlined instrument that could reliably capture repeated, change-over-time measures before, after, and 12 months beyond a research-mentoring intervention (n = 152 diverse U.S. and Puerto Rican early career investigators in three program cohorts implemented between 2011 and 2014).17 Their tailored CRAI-19 measured four research self-efficacy subscales: designing a study, collaboration, writing, and human subjects consent. The authors reported that principal components analysis (PCA; n = 131) explained 81% of the overall variance for the 19-item CRAI, with a range across the four factors from 13% to 28%. The repeated-measures ANOVA partial eta-squared (effect size) was moderate (0.09–0.25) at 14% for research self-efficacy as a potential predictor of increased publications 12 months postintervention, using the Scopus database as the source of participant publications. Research on self-efficacy assessments from Mullikin, Bakken, Jeffe, and others15–24 supports the use of positive role models, mentorship, and learning environments to influence research self-efficacy and performance confidence and, potentially, career-path persistence and research productivity, including the timing of publications16,17 and research grant applications,20–22 for female and male investigators from racial and ethnic minority groups.16,17,19,23,24
Our reasons for developing a shortened version of the CRAI were similar to Jeffe’s purpose—we sought a low-burden, reliable, repeated-measures instrument to assess research self-efficacy over time specifically targeting research grantsmanship skills as opposed to general research skills development. The purpose of our current study was to assess the internal consistency and structural validity of a three-factor solution for 19 items drawn from the 88-item CRAI. We used pre-/post-assessment data gathered from 190 racially diverse trainees who had completed one of 15 GCP program cohorts implemented between August 1, 2015, and June 30, 2017, and who had also reported the status of their grant proposal in a 6-month follow-up survey. Our second purpose was to use the self-efficacy data to test the likelihood of submitting a grant application within 6 months of completing training. We begin this report by briefly describing the NRMN and its four GCP models.
Description of the National Research Mentoring Network (NRMN)
The NRMN was formed in late 2014 as part of the NIH Diversity Program Consortium (https://diversityprogramconsortium.org). The Consortium includes U.S. higher education institutions that provide mentorship and professional development initiatives aimed at diversifying the biomedical research workforce. The NRMN offers a myriad of programs (e.g., guided virtual mentoring, a social networking platform for NRMN members, mentor/mentee training events, and a series of professional development webinars, virtual seminars, and talks; see https://nrmnet.net) to engage participants at different stages of their training and career progression, from undergraduate/graduate students and postdoctoral fellows to early career researchers and senior investigators. By the end of its third year of funding, the NRMN had engaged over 7000 participants. The current study examines data from just one of the NRMN initiatives: intensive GCPs for early career biomedical researchers, such as postdoctoral fellows, research associates, and junior faculty.
Overview of NRMN GCPs
The NRMN GCPs incorporate effective practices from four successful professional development programs that originated at NRMN investigators’ home institutions. Those existing programs were adapted to address the national NRMN mission by (1) extending their reach to a broader, nationwide population of investigators, with an emphasis on recruiting researchers from underrepresented groups in biomedical research fields in need of additional mentorship; (2) tailoring the pace, program content, and coaching to accommodate variation in trainees’ levels of experience; (3) expanding and diversifying trainees’ professional networks through engagement with investigators (peers and coaches) from outside their institution; and (4) training new coaches from diverse backgrounds and U.S. locations to enable program expansion. The primary goal of the GCPs is to increase the diversity of investigators who submit grant applications and receive funding for biomedical research, especially NIH awards for research (R-series) and research career development (K-series). At the time of this writing, the GCP models were offered as training and coaching services rather than experimentally controlled interventions.
NRMN program directors select participants from a pool of GCP applications obtained from around the nation through email blasts, information tabling at scientific conferences, and strategic outreach to individuals and groups from underrepresented racial and ethnic minorities and to researchers from minority-serving institutions. Eligible applicants are either referred to, or self-select into, a particular GCP model based primarily on their level of readiness to actively develop, write, and submit a research grant proposal by the end of training. To date, selected trainees represent over 100 academic institutions and a variety of research disciplines. Key features of each of the four GCP models and three variations on those models are presented below and summarized in Table 1; participant demographics across the seven models are summarized in Table 2.
Table 1. Key features of the NRMN grantsmanship coaching program (GCP) models.

| Program Model^a | Main Institution | Mean Length in Months | Trainee Selection Criteria^b | Mean Cohort Size | Trainee:Coach Ratio | Total Trained | Percent in Study^c |
|---|---|---|---|---|---|---|---|
| GUMSHOE | University of Colorado and Washington State University | 6 | Little to no experience | 28 | 3 | 83 | 77% |
| STAR | University of North Texas Health Science Center | 12 | Little to no experience | 11 | 2 | 11 | 82% |
| P3-UMN | University of Minnesota | 6 | Ready to write | 12 | 2 | 37 | 100% |
| P3-UC Davis | University of California, Davis | 4 | Ready to write | 11 | 2 | 11 | 91% |
| NU-NU | Northwestern University | 3 | Ready to write | 20 | 5 | 40 | 85% |
| NU-NE | Boston University | 3 | Ready to write | 12 | 6 | 23 | 87% |
| NU-SE | Morehouse School of Medicine | 3 | Ready to write | 18 | 6 | 37 | 43%^d |

^a Model abbreviations: GUMSHOE, grant writing uncovered: maximizing strategies, help, opportunities, experiences; STAR, steps toward academic research fellowship program; P3, proposal preparation program; NU, Northwestern University grant writers coaching group; NE, northeastern hub; SE, southeastern hub.

^b Experience in grant proposal development.

^c Criteria for case inclusion in this study were a full set of pre- and posttest assessment data and a grant submission status update (submitted, still writing, or abandoned) from a 6-month follow-up survey.

^d The low proportion of eligible trainees from this southeastern hub variation of the NU model reflects the fact that the self-efficacy assessment tool was not fully administered to first-cohort members prior to the program kick-off event.
Table 2. Trainee demographics and research experience by GCP training program.

| | GUMSHOE (n = 83) | NU (n = 40) | NU-NE (n = 23) | NU-SE (n = 37) | P3 (n = 37) | P3-UC Davis (n = 11) | STAR (n = 11) | Total (n = 242) |
|---|---|---|---|---|---|---|---|---|
| Number of cohorts | 4 | 2 | 2 | 2 | 3 | 1 | 1 | 15 |
| Race | | | | | | | | |
| Black | 23% | 43% | 22% | 51% | 43% | - | 82% | 35% |
| White | 36% | 13% | 35% | 14% | 5% | 36% | - | 22% |
| Asian | 6% | 15% | 17% | 24% | 16% | 36% | - | 14% |
| Hispanic | 5% | 20% | 17% | 8% | 22% | - | 18% | 12% |
| Native American | 18% | 3% | - | - | 5% | - | - | 7% |
| Mixed race | 6% | 5% | 4% | - | 5% | 9% | - | 5% |
| Hawaiian and Pacific Islander | 4% | - | - | - | - | 18% | - | 2% |
| Gender | | | | | | | | |
| Female | 80% | 65% | 68% | 54% | 73% | 82% | 55% | 70% |
| Male | 19% | 33% | 32% | 41% | 27% | 18% | 45% | 28% |
| Professional degree | | | | | | | | |
| PhD | 82% | 75% | 70% | 81% | 84% | 55% | 82% | 79% |
| MD | 5% | 5% | 4% | 3% | 5% | 18% | 9% | 5% |
| PhD/MD | 4% | 10% | 9% | 5% | 3% | 18% | 9% | 6% |
| PhD/DVM | - | 3% | - | 3% | - | 9% | - | 1% |
| Other | 5% | 3% | 9% | 3% | 3% | - | - | 4% |
| Publications | | | | | | | | |
| Mean number | 11 | 15 | 15 | 16 | 19 | 17 | 6 | 14 |
| Mean first/senior authorships | 6 | 8 | 7 | 8 | 8 | 10 | 3 | 7 |
| Prior research experience | | | | | | | | |
| 0 to < 1 year | 32% | 26% | 30% | 24% | 35% | 36% | 73% | 32% |
| 1 to 2 years | 20% | 24% | 30% | 14% | 16% | 18% | 9% | 19% |
| 3 to 5 years | 18% | 24% | 22% | 16% | 22% | 18% | 18% | 20% |
| > 5 years | 30% | 16% | 17% | 46% | 27% | 27% | - | 28% |
| Career stage | | | | | | | | |
| Postdoctoral | 17% | 38% | 30% | 22% | 22% | 9% | 45% | 24% |
| Assistant professor | 55% | 50% | 48% | 54% | 59% | 73% | 27% | 54% |
| Associate professor | 8% | 3% | - | 8% | 8% | 9% | - | 6% |
| Full professor | 2% | - | 4% | 5% | - | - | - | 2% |
| Other | 14% | 8% | 9% | 8% | 11% | 9% | 9% | 11% |

Note: Demographic metrics are captured in program application materials.
Grant Writing Uncovered: Maximizing Strategies, Help, Opportunities, Experiences (GUMSHOE).
This Colorado- and Washington State-supported model targets investigators from, or working with, specific underrepresented racial and ethnic groups. Cohort 1 concentrated on Native American, Native Alaskan, and Native Hawaiian people; cohorts 2 and 4 were directed toward rural populations; and cohort 3 focused on African American/Black people. Each 6-month program cycle begins with an intensive, highly didactic and experiential 3-day workshop in which trainees prepare and review specific aims and other NIH research grant application components, followed by 6 months of virtual guided coaching, peer-to-peer learning, and professional development activities. Participants typically have little research grant preparation experience before their GUMSHOE training.
From the GUMSHOE model, we identified eligible data for this study from 83 trainees.
Steps toward academic research fellowship program (STAR).
The STAR model, based at the University of North Texas Health Science Center, focuses on basic scientific grant proposal development and writing skills, the grant funding process, mock grant review, and other professional development topics. The program consists of 35 in-person and virtual training sessions delivered over 12 consecutive months. Individuals selected for STAR cohorts typically have little to no grant preparation experience. The only STAR cohort eligible for this study produced usable data for 11 trainees.
Proposal preparation program (P3).
The P3 model, originating from the University of Minnesota, targets early career investigators with reasonably well-developed research projects and a strong likelihood of submitting a proposal within 6 months after P3 completion. This 6-month program begins with a 2-day, in-person session anchored in a group review of participants’ specific aims pages and biosketches. Other activities include individual coach consultations, panel discussions with NIH staff and successful early career grantees, and didactic presentations of grantsmanship principles. Over the ensuing months, cohort trainees and coaches attend biweekly video conferences to review drafts in progress. The final in-person session offers a mock NIH study section; reviewers are selected for their content expertise, enabling them to provide highly valuable critiques to guide P3 trainees’ final revisions. A 4-month version of the P3 model was implemented in 2016 at the University of California, Davis and is included in the sample cohorts.
For this study, three P3 cohorts plus the UC Davis cohort produced usable data for 48 trainees.
Northwestern University (NU) Grant Writers Coaching Group.
The 3–4 month, Chicago-based NU model begins with a 2-day, in-person meeting to introduce the writing framework upon which the model is based and to initiate writing groups. Groups of three to five participants are formed based on research type (e.g., laboratory, clinical/epidemiology, and social/behavioral); each group is led by an experienced faculty coach. After the initial in-person meeting, group members attend weekly or biweekly video and/or audio conferences for approximately 4 months, depending on the group’s needs. Individuals are selected based on their readiness to develop and craft a specific grant proposal to be submitted soon after training is completed.
Two regional expansion programs that use the NU model were established for two Eastern United States regions—Northeast (NE) and Southeast (SE). Two cohorts from these expansions are included in this report: NU-NE Boston College and NU-SE Morehouse School of Medicine. Across the three versions of the NU model, we identified eligible data for 100 trainees.
Methods
Design
Using pretest, posttest, and 6-month follow-up data collected from NRMN GCP trainees, we sought first to confirm that our 19-item version of the more extensive 88-item CRAI performed reliably as a consistent measure of grantsmanship self-efficacy. Additionally, we examined whether grantsmanship self-efficacy scores predict grant submission timing.
Participant inclusion and exclusion criteria
To determine the internal consistency of items in three grantsmanship confidence domains, we chose assessment data gathered from the first 15 cohorts delivered and completed between August 1, 2015, and July 1, 2017. Additionally, to test the predictive qualities of grantsmanship self-efficacy for grant submission timing, we used data only from those trainees (across cohorts) who completed both pre- and postintervention assessments and who had reported the status of their grant applications in a 6-month follow-up survey. One hundred ninety (n = 190) trainee cases of the 242 individuals trained met those data-based criteria for inclusion in both parts of the current study.
Measurement
We gathered demographic, career stage, and research experience data from participant applications for the following variables: seven categories of race/ethnicity (Black, White, Asian, Hispanic, Native American/Native Alaskan, Native Hawaiian and Pacific Islander, and multiracial); sex (female, male); five categories of highest degree earned (PhD, MD, PhD/MD, PhD/DVM, other professional degree); counts of preintervention publications and first/senior-author publications; prior research experience in years (continuous, reported here in ranges: 0 to <1, 1 to 2, 3 to 5, >5 years); and five categories of career stage (postdoctoral trainee, assistant professor, associate professor, full professor, and other) (Table 2).
We created a self-administered grantsmanship self-efficacy assessment instrument by selecting 18 items from the existing 88-item CRAI15 and adding one item to measure a grantsmanship skill addressed in the GCP intervention models but not covered by the CRAI. At the beginning and conclusion of their GCP training, we asked trainees to rate their confidence in performing the 19 tasks on the same 11-point scale (0 to 10) used in the original CRAI, where 0 represents no confidence and 10 indicates complete confidence in one’s ability to successfully perform the tasks related to three domains: conceptualizing, designing, and funding a study.
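For readers implementing a similar instrument, the sketch below illustrates how the 19 item ratings can be aggregated into the three domain scores and an overall mean. It is a minimal illustration in Python, assuming responses sit in a pandas DataFrame with hypothetical column names (item_01 to item_19) grouped to mirror the factor structure reported in Table 3; it is not the SAS code used in our analyses.

```python
import pandas as pd

# Hypothetical item-to-domain mapping mirroring Table 3
# (8 conceptualizing, 7 designing, 4 funding items); column names are assumed.
DOMAINS = {
    "conceptualize": [f"item_{i:02d}" for i in range(1, 9)],    # items 1-8
    "design":        [f"item_{i:02d}" for i in range(9, 16)],   # items 9-15
    "fund":          [f"item_{i:02d}" for i in range(16, 20)],  # items 16-19
}

def score_self_efficacy(responses: pd.DataFrame) -> pd.DataFrame:
    """Compute per-trainee domain means and an overall 19-item mean (0-10 scale)."""
    scores = pd.DataFrame(index=responses.index)
    for domain, items in DOMAINS.items():
        scores[domain] = responses[items].mean(axis=1)
    all_items = [col for items in DOMAINS.values() for col in items]
    scores["overall"] = responses[all_items].mean(axis=1)
    return scores
```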
To monitor posttraining grant submission timing, we created a follow-up survey to capture self-reported research status; for this study, we used the current status of the grant application worked on during the training program (submitted, still writing, or abandoned).
Data collection
GCP cohort trainees assessed their grantsmanship self-efficacy before training began (preintervention assessment) and soon after training ended (postintervention assessment). For a few early cohorts in this study, we also collected grant submission information at the immediate end of training; all GCP cohort trainees, however, were followed at 6-month intervals for 18 months posttraining to monitor change over time in grantsmanship self-efficacy scores and in grant submission and award status. Due to the timing of this preliminary study, we used only grant status data from the 6-month follow-up.
We administered all data collection questionnaires through Research Electronic Data Capture (REDCap) software.25 The use and sharing of these data were deemed exempt by the appropriate institutional and federal institutional review board entities.
Analyses
To examine differences in self-efficacy domain scores by program and demographics, we used paired t-tests to compare pre- to posttraining factor scores for each GCP model and by sex and race/ethnicity categories.
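As an illustration of this step, the following sketch runs a paired t-test on pre- and posttraining domain scores with SciPy; the DataFrame and column names (pre_fund, post_fund, program) are hypothetical stand-ins for our REDCap export, and the reported analyses were run in SAS.

```python
import pandas as pd
from scipy import stats

def paired_change_test(df: pd.DataFrame, pre_col: str, post_col: str):
    """Paired t-test of pre- vs. posttraining scores, dropping incomplete pairs."""
    paired = df[[pre_col, post_col]].dropna()
    t_stat, p_value = stats.ttest_rel(paired[post_col], paired[pre_col])
    mean_change = (paired[post_col] - paired[pre_col]).mean()
    return mean_change, t_stat, p_value

# Example (hypothetical columns): change in funding-a-study scores within one GCP model
# mean_change, t, p = paired_change_test(df[df["program"] == "STAR"], "pre_fund", "post_fund")
```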
To examine the structure of the self-efficacy instrument, we conducted confirmatory factor analysis (CFA). Previous work with the 88-item CRAI instrument suggested a three-factor solution, and we estimated this model using a maximum likelihood method on the 19 items chosen for this shortened version of the CRAI. We examined factor loading magnitudes and model fit to determine the adequacy of the measurement model. To examine the internal consistency of items, we calculated Cronbach’s alpha on all 19 items.
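A compact way to check internal consistency is to compute Cronbach’s alpha directly from the item variances, as in the helper below. The commented lines then show one possible three-factor CFA specification using the third-party semopy library with the same hypothetical item names as the earlier scoring sketch; both are illustrative sketches rather than the SAS procedures behind the reported estimates.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    complete = items.dropna()
    k = complete.shape[1]
    item_vars = complete.var(axis=0, ddof=1)
    total_var = complete.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Possible CFA specification (hypothetical item names), e.g., with semopy:
# import semopy
# desc = """
# conceptualize =~ item_01 + item_02 + item_03 + item_04 + item_05 + item_06 + item_07 + item_08
# design        =~ item_09 + item_10 + item_11 + item_12 + item_13 + item_14 + item_15
# fund          =~ item_16 + item_17 + item_18 + item_19
# """
# model = semopy.Model(desc)
# model.fit(item_data)   # maximum likelihood estimation
# model.inspect()        # parameter estimates, including loadings
```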
Finally, we used bivariate logistic regression to test our hypothesis that postintervention self-efficacy is associated with grant submission timing.
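This model can be illustrated with statsmodels: regress the binary 6-month submission indicator on the posttraining 19-item mean and exponentiate the coefficient to obtain an odds ratio. The variable names (submitted_6mo, post_mean) are hypothetical, and the published estimates come from SAS rather than this sketch.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def submission_odds_ratio(df: pd.DataFrame):
    """Bivariate logistic regression of 6-month submission on posttraining mean self-efficacy."""
    data = df[["submitted_6mo", "post_mean"]].dropna()
    X = sm.add_constant(data[["post_mean"]])
    result = sm.Logit(data["submitted_6mo"], X).fit(disp=False)
    odds_ratios = np.exp(result.params)    # e.g., OR = 1.69 implies 69% higher odds per point
    conf_int = np.exp(result.conf_int())   # 95% CI on the odds-ratio scale
    return result, odds_ratios, conf_int
```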
We analyzed all data using Base SAS® version 9.4 (SAS Institute Inc., Cary, North Carolina).
Results
Grantsmanship self-efficacy domains across and within training programs
We found variation in pretest scores for the three domains across all GCP training models (Table 4). Mean scores were lowest for STAR and GUMSHOE trainees in all three domains, aligning well, by design, with the lower level of grant preparation experience among individuals accepted into these program cohorts.
Table 4. Mean pre- and posttraining grantsmanship self-efficacy domain scores by GCP program model.

| Program model | Conceptualize a study (8 items): Pre | Post | Change | Design a study (7 items): Pre | Post | Change | Fund a study (4 items): Pre | Post | Change |
|---|---|---|---|---|---|---|---|---|---|
| GUMSHOE | 6.4 | 7.4 | 1.0 | 6.2 | 7.0 | 0.8 | 4.9 | 7.0 | 1.9 |
| NU | 7.1 | 8.1 | 1.0 | 6.7 | 7.7 | 1.1 | 6.2 | 7.9 | 1.5 |
| NU-NE | 8.0 | 8.7 | 0.8 | 7.2 | 8.2 | 1.0 | 6.5 | 7.8 | 0.8 |
| NU-SE | 7.6 | 8.4 | 0.8 | 6.5 | 7.8 | 1.2 | 6.1 | 7.8 | 1.5 |
| P3 | 7.5 | 8.7 | 1.2 | 7.3 | 8.4 | 1.1 | 6.6 | 8.7 | 1.8 |
| P3-UC Davis | 7.4 | 8.0 | 0.6 | 6.9 | 7.6 | 0.8 | 6.2 | 7.8 | 1.0 |
| STAR | 5.6 | 7.5 | 1.8 | 5.5 | 6.7 | 1.2 | 3.3 | 7.5 | 4.0^c |
| Column means | 7.0 | 8.0 | 1.0 | 6.6 | 7.6 | 1.0 | 5.7 | 7.8 | 1.8 |

Self-efficacy (confidence in the ability to perform the associated tasks) item scale = 0 to 10, from “no confidence” to “complete confidence.”

No statistically significant differences were found for race and gender (P > 0.50); data not shown.

^c Statistically significant change-score differences (P < 0.01) across models within domains.

Bold type indicates statistically significant change-score differences (P < 0.01) within models across domains.
We found no statistically significant between-model differences (at the P < 0.01 level) in mean pre- versus posttest change scores for two self-efficacy domains (conceptualizing and designing a study). However, for the third domain, funding a study, the across-model difference in change scores was statistically significant (P < 0.01): STAR trainees improved by 4.0 points compared with an average improvement of 1.4 points across the six other models.
In comparing pre-/posttest mean difference scores across domains within each GCP training model, we found statistically significant mean increases (P < 0.01) in all three domains for each model.
We examined mean domain-score differences by gender and race and found no statistically significant differences (gender P > 0.50; race P > 0.35) (output is not shown).
Factor structure and internal consistency
Missing values on 10 self-efficacy items for six study trainees (0.3% of 3610 expected values) were replaced by the individual’s subscale mean score, following the imputation procedure used by Bakken and colleagues16 in their CRAI validation study. An analysis (not shown) of the data omitting these six participants yielded no statistically significant differences in point estimates compared with the analysis that included the six cases with imputed item scores.
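A minimal sketch of this person-level, subscale-mean imputation is given below, reusing the hypothetical DOMAINS mapping from the earlier scoring sketch: each missing item rating is replaced by that trainee’s mean of the remaining items in the same subscale. It is an illustration of the approach under those assumptions, not the code used for the reported analysis.

```python
import pandas as pd

def impute_subscale_means(responses: pd.DataFrame, domains: dict[str, list[str]]) -> pd.DataFrame:
    """Replace each missing item with the respondent's mean of the other items in that subscale."""
    filled = responses.copy()
    for items in domains.values():
        # Per-trainee mean of the available items in this subscale (NaN if all are missing).
        subscale_mean = filled[items].mean(axis=1, skipna=True)
        for col in items:
            filled[col] = filled[col].fillna(subscale_mean)
    return filled
```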
We found statistically significant (P < 0.0001) standardized factor loadings across the three-factor solution, ranging from 0.714 to 0.900 (Table 3). Similar to Mullikin’s findings,15 Cronbach’s coefficient alpha scores revealed high internal consistency (α ≥ 0.90; range: 0.90–0.95) for our three factors; internal consistency for the overall instrument scale was 0.96.
Table 3. Confirmatory factor analysis of the 19-item grantsmanship self-efficacy instrument: factors, internal consistency, and standardized factor loadings.

| Factor and item labels | Number of factor items, α score | Factor loadings |
|---|---|---|
| GCP confidence self-efficacy, overall | 19, α = 0.96 | |
| Conceptualizing a study | 8, α = 0.95 | |
| Articulate clear purpose | | 0.898 |
| Select suitable topic area | | 0.714 |
| Refine a problem to investigate | | 0.806 |
| Organize research ideas in writing | | 0.900 |
| Justify importance of research | | 0.888 |
| Convince reviewers that the research is worth funding (added) | | 0.745 |
| Logical rationale for research | | 0.867 |
| Relate questions to underlying theory | | 0.855 |
| Designing a study | 7, α = 0.95 | |
| Design data analysis strategy | | 0.855 |
| Select methods of data collection | | 0.868 |
| State purpose, strengths, limits of study design | | 0.839 |
| Determine population and sample of study | | 0.837 |
| Choose appropriate research design | | 0.832 |
| Determine how each variable will be measured | | 0.864 |
| Determine adequate number of subjects | | 0.834 |
| Funding a study | 4, α = 0.90 | |
| Write a competitive grant | | 0.781 |
| Identify appropriate funding | | 0.848 |
| Converse with funders about the project | | 0.824 |
| Describe funding process | | 0.842 |
Fit indices: Chi-square: 534.35, df = 149, P < 0.0001; adjusted GFI: 0.70; CFI: 0.89; RMSEA: 0.12 (90% CI: 0.11–0.13); SRMR: 0.06; Bentler comparative fit: 0.89; Bentler-Bonett non-normed fit: 0.87.
Cronbach’s alpha scores range from 0 to 1.0 and indicate the level of internal consistency of the items measuring the underlying abstract construct/factor for a given sample.
Factor loadings are standardized correlation coefficient estimates for each item in each factor.
Item scores are self-rated on a 0 to 10 scale where 0 = no confidence and 10 = complete confidence in ability to perform the task. There were no differences in factor loading whether we used pre- or postintervention data from the study sample.
The maximum likelihood procedure used eliminates incomplete cases from the dataset; there were seven cases with some missing values.
All factor loadings are significant at P < 0.0001.
Fit indices shown in the last row of Table 3 represent comparative and non-normed fit (CFI and Bentler-Bonett) related to the chi-square values; fit estimates of 0.89 for both measures indicate a moderately acceptable model fit, close to the conventionally recommended values of 0.90 or 0.95. Our estimates of the standardized root mean square residual (SRMR) and the root mean square error of approximation (RMSEA), 0.06 and 0.12, respectively, indicate acceptable model fit, where lower values between 0 and 1 indicate better-fitting models and these indices are less sensitive to sample size. However, our 0.70 adjusted goodness-of-fit (AGFI) index for factor parsimony indicates room for improving our factor measures. Overall, we are confident that our three-factor model is a good fit for estimating and monitoring grantsmanship self-efficacy over time for diverse early career research faculty.
Predictive validity analysis
Survey data from our 190 study trainees revealed that 74 (39%) had submitted a research grant proposal within 6 months posttraining. Our logistic regression revealed a statistically significant (P < 0.01) association between a trainee’s posttest mean score on all 19 items combined and grant submission, predicting that, for every one-point increase in mean self-efficacy, the odds of submitting a grant within 6 months posttraining increased by 69% (odds ratio 1.69; 95% CI: 1.14, 2.53). Using bivariate logistic regressions, we further tested the relationship between early submission and three individual-level indicators of grantsmanship readiness taken from trainee applications (post-professional-degree grant development experience, number of articles published, and senior authorship). We found no statistically significant relationship between pretraining readiness and grant proposal submission timing (all indicators P > 0.27; reference = none to <1 year of experience).
Discussion
We used an abbreviated, 19-item version of the 88-item CRAI to assess and monitor over time the effects of intensive training and coaching on grantsmanship self-efficacy among diverse groups of early career biomedical researchers. Our aim for this study was to use existing assessment data to confirm the internal consistency of a three-factor solution for repeated measures and to test the likelihood that self-efficacy can predict grant proposal submission timing for 190 trainees who completed training in one of 15 NRMN grantsmanship training and coaching cohorts implemented between August 2015 and June 2017.
The three-factor structure of our instrument aligned well with the subscale coefficient alpha reliability of the original CRAI subscales15 (conceptualizing, designing, and funding a study: CRAI 0.96, 0.97, 0.97 versus GCP 0.95, 0.95, 0.90). For our purposes with these sample data, the 19-item grantsmanship self-efficacy assessment implemented at pre-/posttraining yielded consistently reliable scores for monitoring grantsmanship self-efficacy over time and for addressing early career productivity for diverse biomedical investigators. Our findings with the abbreviated assessment inventory show potential for predicting the likelihood of early grant submission and contribute to our understanding of the role of confidence in the biomedical research career trajectory, especially as attributed to skills addressed in intensive grantsmanship programs tailored to individual readiness and cultural diversity.
Our findings on grantsmanship self-efficacy suggest that this instrument can be used to track and monitor self-efficacy change over time as early career investigators gain more confidence in the grant development process. The positive association of self-efficacy with grant submissions also supports the contribution of tailored and intensive grantsmanship training programs to improve the timing of biomedical research grant submission for diverse researchers; over 80% of NRMN GCP completers who submitted grants also self-reported being a member of an underrepresented minority group. This outcome bodes well for increasing diversity in the pool of NIH grant applicants and, ultimately, biomedical research workforce diversity.
Limitations
A limitation of our study is the inability to generalize our early findings beyond the current population of grantsmanship coaching group trainees. Because the aim of this study was primarily to determine measurement factor reliability and to validate the instrument’s value in predicting grant submission timing, our sample data were selected based on availability from 15 completed cohorts across seven diverse but intensive program models. Trainee selection is also based in large part on the investigator’s readiness to develop a grant proposal. Furthermore, the individual-level study sample was restricted to data from 190 early career research investigators (79% of 242 trainees) who had completed electronic pre-/posttraining assessments and the first of the three follow-up surveys. This sample size borders on being too low for CFA; yet, others (for example, see Ref. 17, with a sample size of n = 152) have used relatively small samples to test model fit for similar abbreviated versions of the CRAI instrument.
Our assessment protocols, the reliability and predictive validity findings described here, and our plans for assessing longitudinal self-efficacy for up to 18 months beyond program completion may inspire others conducting similar grantsmanship training programs nationally and internationally to adopt our abbreviated version of the CRAI. Future studies using our brief standardized assessment tool will increase our understanding of the long-term contributions of a variety of grantsmanship training programs to participants’ confidence self-efficacy, productivity, and career development.
Acknowledgements
The National Research Mentoring Network is supported by the National Institutes of Health Common Fund and Office of Scientific Workforce Diversity under award number U54GM119023, administered by the National Institute of General Medical Sciences. Additionally, the project described was supported by award number UL1TR000114 from the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH), which provided access to the REDCap electronic data capture tools hosted at the University of Minnesota for data collection and management. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center for Research Resources or the National Institutes of Health.
Footnotes
Competing interests
The authors declare no competing interests.
REFERENCES
- 1. U.S. National Academy of Sciences, U.S. National Academy of Engineering, Institute of Medicine, U.S. National Research Council. 2011. Report to Congress. Accessed January 9, 2018 (http://www.nationalacademies.org/annualreport/Report_to_Congress_2011.pdf).
- 2. Landivar LC. 2013. Disparities in STEM employment by sex, race, and Hispanic origin. American Community Survey Reports, United States Census Bureau.
- 3. U.S. National Institutes of Health (NIH). 2012. Report of the Advisory Committee to the Director Working Group on Diversity in the Biomedical Research Workforce. Accessed January 9, 2018 (https://acd.od.nih.gov/documents/reports/DiversityBiomedicalResearchWorkforceReport.pdf).
- 4. Xie Y, Fang M & Shauman K. 2015. STEM education. Annual Review of Sociology. 41: 331–357.
- 5. U.S. National Science Foundation (NSF), U.S. National Center for Science and Engineering Statistics. 2016. Assessing the impact of frame changes on trend data from the survey of graduate students and post-doctorates in science and engineering. Special Report NSF 16-314.
- 6. Ginther DK, Schaffer WT, Schnell J, Masimore B, Liu F, Haak LL & Kington R. 2011. Race, ethnicity, and NIH research awards. Science. 333(6045): 1015–1019.
- 7. U.S. Census Bureau. 2015. 2011–2015 American Community Survey 5-Year Estimates. Accessed January 9, 2018 (https://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?src=bkmk).
- 8. Valantine HA & Collins FS. 2015. National Institutes of Health addresses the science of diversity. Proceedings of the National Academy of Sciences of the United States of America. 112(40): 12240–12242.
- 9. Beech BM, Calles-Escandon J, Hairston KG, Langdon SE, Latham-Sadler BA & Bell RA. 2013. Mentoring programs for underrepresented minority faculty in academic medical centers: a systematic review of the literature. Academic Medicine. 88(4): 1–17.
- 10. Bandura A. 1991. Social cognitive theory of self-regulation. Organizational Behavior and Human Decision Processes. 50: 248–287.
- 11. Bandura A. 1997. Self-Efficacy: The Exercise of Control. New York, NY: Freeman Press.
- 12. Pajares F. 1996. Self-efficacy beliefs in academic settings. Review of Educational Research. 66(4): 543–578.
- 13. Tate KA, Fouad NA, Reid Marks L, Young G, Guzman E & Williams EG. 2014. Underrepresented first-generation, low-income college students’ pursuit of a graduate education. J Career Assessment. 23(3): 427–441.
- 14. Phillips JC & Russell RK. 1994. Research self-efficacy, the research training environment, and research submission timing among graduate students in counseling psychology. Counseling Psychologist. 22(4): 628–641.
- 15. Mullikin EA, Bakken LL & Betz NE. 2007. Assessing research self-efficacy in physician-scientists: the Clinical Research Appraisal Inventory. J Career Assessment. 15(3): 367–387.
- 16. Bakken LL, Byars-Winston A, Gundermann DM, Ward EC, Slattery A, King A, Scott D & Taylor R. 2010. Effects of an educational intervention on female biomedical scientists’ research self-efficacy. Adv Hlth Sci Ed. 15: 167–183.
- 17. Jeffe DB, Rice TK, Boyington JEA, Rao DC, Jean-Louis G, Davila-Roman VG, Taylor AL, Pace BS & Boutjdir M. 2017. Development and evaluation of two abbreviated questionnaires for mentoring and research self-efficacy. Ethnicity & Disease. 27(2): 179–188.
- 18. Pasupathy R & Oginga Siwatu K. 2014. An investigation of research self-efficacy beliefs and research submission timing among faculty members at an emerging research university in the USA. Higher Ed Research & Development. 33(4): 728–741.
- 19. Lent R, Brown S & Hackett G. 1994. Toward a unifying social cognitive theory of career and academic interest, choice, and performance. J Voc Beh. 45: 79–122.
- 20. Hollingsworth MA & Fassinger RE. 2002. The role of faculty mentors in the research training of counseling psychology doctoral students. J Counseling Psych. 49: 324–330.
- 21. Kahn JH. 2001. Predicting the scholarly activity of counseling psychology students: a refinement and extension. J Counseling Psych. 48: 344–354.
- 22. Byars-Winston A, Gutierrez B, Topp S & Carnes M. 2011. Integrating theory and practice to increase scientific workforce diversity: a framework for career development in graduate research training. Life Sciences Ed. 10(4): 357–367.
- 23. Bakken LL, Byars-Winston A & Wang M. 2005. Viewing clinical research career development through the lens of social cognitive career theory. Adv Hlth Sci Ed. 11(1): 91–110.
- 24. Manson SM. 2009. Personal journeys, professional paths: persistence in navigating the crossroads of a research career. Am J Public Health. 99: S20–S25.
- 25. Harris PA, Taylor T, Thielke R, Payne J, Gonzalez N & Conde JG. 2009. Research electronic data capture (REDCap): a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 42(2): 377–381.