Introduction
Radiation oncology is unique among oncologic specialties in the depth of its exposure to scientific research. Notably, most radiation oncology residents spend 9-12 months of training conducting research, and a considerable number become academic investigators (1). Success as an early-career academic investigator (i.e., a trainee or junior faculty member) is a multifaceted concept, encompassing personal growth, professional development, and a deep sense of purpose. However, metrics of success have often centered on quantifiable measures of productivity, namely publications and funded grants. Regrettably, individuals often enter academic careers without a clear sense of which factors lead to funding success. Moreover, given that most radiation oncology junior faculty report inadequately protected research time (2), a solid understanding of the factors underlying the funding landscape is paramount to ensure that limited research hours are productive and impactful.
While the existing literature has quantitatively explored predictors of funding success (3), we herein present three overarching principles (Figure 1) that can serve as guiding lights for early-career investigators. These principles are derived from empirical evidence coupled with the collective wisdom of experienced academics. By internalizing these principles, we hope early-career investigators can build a robust mental framework for a successful academic journey.
Figure 1.
An overview of the three principles underlying this editorial and how their interactions lead to successful early-career academic investigators. Individuals should seek to liberally apply for funding (Principle 3), thereby not only gaining potential early wins (Principle 1) but also gaining crucial experience from inevitable failures (Principle 2), ultimately maximizing their long-term success.
Principle 1. The Matthew Effect: Early Wins Predict Late Wins.
The “Matthew Effect” describes a phenomenon whereby successful, well-established individuals tend to experience a disproportionate amount of further success compared with their less-established counterparts (4). The phenomenon draws its name from a biblical verse whose gist is captured by the old adage that “the rich get richer and the poor get poorer.” In the context of scientific funding, the Matthew Effect suggests that researchers who have already secured funding are more likely to receive additional funding, creating a cycle of advantage and disadvantage among grant applicants.
Reputational factors play a substantial role in the Matthew Effect. Grant reviewers are more likely to be familiar with the work of well-established scientists. Moreover, established institutions often have greater resources to support grant applications, including the availability of experienced faculty mentors, increasing the chances of their researchers securing funding. However, successful academics may be cultivated from a broad spectrum of institutional pedigrees. Notably, the National Institutes of Health (NIH), which provides a significant portion of scientific funding in the United States, has recently considered removing institutional factors as grant-scoring criteria (5), but the ramifications of this change may not be seen for some time.
Interestingly, Bol et al. (6) demonstrated that early-career scientists just above the funding review score threshold (i.e., “narrow-win”) accumulated more than twice as much research funding during the following eight years as their peers just below the threshold (i.e., “near-miss”) (Figure 2). This emergent funding gap arose because near-miss scientists, likely limited in time and resources and disheartened by initial failures, applied for grants less often. Effectively, the Matthew Effect operates through the mutually reinforcing processes of supply (i.e., winners self-select to apply for more funding) and demand (i.e., winners are evaluated more positively). In essence, early grant success, even if marginal, is crucial for future grant success. Because early-career investigators typically benefit from more generous funding paylines and a wider availability of career development awards from professional organizations, securing grants as a trainee or junior faculty member is especially consequential.
Figure 2.
Effect of an early-career grant on winning a mid-career grant (left) and applying for a mid-career grant (right). Shown is the percentage of early-career grant applicants winning a mid-career grant (left, vertical axis) and applying for a mid-career grant (right, vertical axis) for different evaluation ranks of applicants in the early-career grant competition (horizontal axis). Applicants with positive ranks (+) to the right of the funding threshold (vertical line) received an early-career grant, while applicants with negative ranks (−) did not. For each estimate, an exact 95% confidence interval is displayed. Reprinted with permission from Bol et al. (6).
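To make the supply-and-demand feedback loop underlying the Matthew Effect concrete, the minimal simulation below sketches how a single early award can compound over time. All parameters (per-application success rate, reputational bump per prior award, application frequency, and award size) are illustrative assumptions for the purposes of this editorial, not estimates from Bol et al.

```python
import random

def simulate_subsequent_funding(initial_win, years=8, n_sims=10_000, seed=0):
    """Toy model of cumulative advantage ("the rich get richer") in grant funding.

    Illustrative assumptions (not estimates from Bol et al.):
      - base probability that any single application is funded: 15%
      - reputational bump: +3 percentage points per prior award
      - winners are encouraged and submit 2 applications the next year;
        otherwise 1 application is submitted (winners "self-select")
      - each award is worth a nominal $250,000
    Only funding won *after* the initial competition is counted.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        awards = 1 if initial_win else 0   # outcome of the early-career competition
        encouraged = initial_win
        subsequent_funding = 0
        for _ in range(years):
            n_apps = 2 if encouraged else 1
            encouraged = False
            for _ in range(n_apps):
                p_win = min(0.15 + 0.03 * awards, 0.90)
                if rng.random() < p_win:
                    awards += 1
                    subsequent_funding += 250_000
                    encouraged = True
        totals.append(subsequent_funding)
    return sum(totals) / len(totals)

narrow_win = simulate_subsequent_funding(initial_win=True)
near_miss = simulate_subsequent_funding(initial_win=False)
print(f"Mean subsequent funding, narrow-win: ${narrow_win:,.0f}")
print(f"Mean subsequent funding, near-miss:  ${near_miss:,.0f}")
print(f"Narrow-win / near-miss ratio: {narrow_win / near_miss:.2f}")
```

Even with these deliberately modest assumptions, the narrow-win cohort accumulates noticeably more subsequent funding than the near-miss cohort, simply because a single early award raises both how often one applies and how favorably one is reviewed.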
The Matthew Effect is a complex phenomenon largely rooted in biases intrinsic to the funding landscape. Unfortunately, given that most funding organizations will remain single-blind (i.e., reviewers know the applicants’ identities) for the foreseeable future, the Matthew Effect will likely remain an important phenomenon in years to come. Notably, radiation oncology garners only a small percentage of research funding despite its major role in cancer therapy (7). This funding scarcity could further intensify the Matthew Effect, underscoring the necessity for investigators focused on radiation oncology to comprehend this phenomenon. Principles 2 and 3 offer insights into how early-career investigators can leverage strategies to benefit from the dynamics surrounding the Matthew Effect.
In summary, the Matthew Effect suggests that well-established investigators often secure more grants due to reputational advantages, with early successes significantly impacting future grant achievements—an issue intensified by the funding landscape and particularly challenging for fields like radiation oncology.
Principle 2. Persistence: Early Failure Attenuates Late Wins.
Early-career scientists often face fierce competition for limited funding resources, and failure to secure these resources can be perceived as a significant setback. These failures, though disheartening, can potentially serve as catalysts for future success. A recent study by Wang et al. (8) sheds light on early-career setbacks and their impact on future careers. Like Bol et al., they investigated junior scientists, in this case applicants for NIH grants, and found that those who initially failed exhibited an alarmingly high attrition rate. However, by focusing their investigation on individuals at the margins of success, they discovered that “near-miss” applicants who continued to apply for grants systematically outperformed their successful “narrow-win” counterparts when evaluated on long-term performance (Figure 3). Moreover, they also determined that funding differences between these groups effectively disappeared given a long enough time frame. In other words, applicants who persisted despite their initial failure were able to succeed in the long run, giving credence to the saying “what doesn’t kill you makes you stronger.” In a similar vein, Yin et al. (9) quantified the dynamics of failure across different domains, including NIH funding, and demonstrated the complex interplay between failure and success. They suggest that the ability to handle and learn from failure is critical to long-term success, providing scientists with unique insights often absent from a smooth career trajectory.
Figure 3.
Near-misses (orange) outperformed narrow-wins (blue) in scientific impact. Here, the authors considered three measures probing the clinical relevance of their research: 1. clinical trial papers [direct contribution to clinical translation]; 2. papers cited by at least one clinical trial paper [indirect contribution to clinical translation]; 3. papers with potential to become translational research. (a) Near misses outperformed narrow wins in terms of the probability of producing clinical trial papers in the next 1–5 years, and 6–10 years; (b) The same as (a) but for papers cited by clinical trials in the future; (c) The same as (a) but for papers with potential to become translational research; ***p < 0.001; Error bars represent the standard error of the mean. Modified from Wang et al. (8); this is an open-access article distributed under a CC BY license.
It is important to acknowledge that the capacity to recover from failures is unevenly distributed among individuals. Having systems in place to ensure that early-career researchers from under-resourced institutions also have the support to overcome funding setbacks is of paramount importance. Notably, the significant role of effective mentorship, be it from internal or external sources, in enhancing academic productivity within radiation oncology is well established (10) and should therefore be fervently championed. Moreover, it is crucial to highlight underutilized yet highly beneficial, freely accessible resources, such as virtual NIH grant-writing workshops, particularly for individuals lacking a mentor. Finally, departments may consider acknowledging trainees and junior faculty who apply for multiple funding opportunities regardless of the outcome, in addition to honoring those who compete successfully for grants and awards.
In summary, early-career investigators who persevere after initial funding setbacks often outperform those with early successes, emphasizing the importance of resilience, effective mentorship, and supportive resources for long-term achievement in academic fields like radiation oncology.
Principle 3. Grant Review Is (Partially) Stochastic: Frequent Attempts Are Advantageous for Early/Late Wins.
Despite its pivotal role in science, peer review—the process where work is evaluated by domain-specific experts—is not a perfect system for evaluating grants. Notably, many have called for a complete overhaul of the approach (11). Nevertheless, grant peer review is likely here to stay for the foreseeable future. In this section, we illuminate an important but often under-acknowledged aspect of grant evaluation: its inherent stochastic or random nature.
An important study by Pier et al. (12) revealed a surprising level of disagreement among reviewers assessing the same NIH grant applications. Despite receiving identical instructions, a large group of reviewers assessing applications above a minimum quality threshold was unable to produce comparable qualitative or quantitative evaluations (Figure 4). This inconsistency can significantly impact the fate of a grant application. The consequences can be particularly severe for early-career researchers or those operating at the margins of established research domains, where the work is less understood or appreciated. This uncertainty in grant reviewing can be attributed to several factors. Reviewers come from diverse backgrounds and carry different experiences, expertise, and biases that can influence their assessment of a grant application. Moreover, the complexity of grant proposals and the sheer volume of information they contain can make them difficult to evaluate consistently.
Figure 4.
Visual depiction of the three measures of agreement (intraclass correlation [ICC], Krippendorff’s alpha [α], Similarity Score) among grant reviewers with 95% confidence intervals. Note that only the upper bound of the confidence interval is shown for the ICCs because the lower bound is, by definition, 0. Reprinted with permission from Pier et al. (12).
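As a simple illustration of how reviewer-to-reviewer noise alone can change a proposal's fate, the toy simulation below repeatedly "reviews" the same proposal with freshly drawn panels. The scoring scale, noise level, panel size, and payline are assumed values chosen for illustration and are not drawn from Pier et al.

```python
import random
import statistics

def funding_rate(true_quality, n_panels=10_000, reviewers_per_panel=3,
                 noise_sd=1.0, payline=5.0, seed=1):
    """Toy model of stochastic peer review.

    Illustrative assumptions (not parameters from Pier et al.): each reviewer
    scores the proposal as its "true" quality plus independent Gaussian noise,
    the panel score is the mean of the reviewers' scores, and the proposal is
    funded whenever the panel score meets a fixed payline.
    """
    rng = random.Random(seed)
    funded = 0
    for _ in range(n_panels):
        scores = [rng.gauss(true_quality, noise_sd)
                  for _ in range(reviewers_per_panel)]
        if statistics.mean(scores) >= payline:
            funded += 1
    return funded / n_panels

# The same proposal, re-reviewed by freshly drawn panels: near the payline,
# reviewer noise alone largely determines whether it is funded.
for quality in (4.5, 5.0, 5.5):
    print(f"true quality {quality}: funded by {funding_rate(quality):.0%} of panels")
```

For a proposal sitting near the payline, the simulated funding decision behaves much like a coin flip, which is precisely the variability that multiple submissions help to average out.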
Understanding and accepting the stochastic nature of the peer-review process, while frustrating, can serve as valuable insight for new investigators embarking on their scientific journey. Notably, most grant-reviewing bodies (e.g., the NIH) do not penalize applicants for multiple submissions or resubmissions. Given the potential for variability in reviewer judgments, one potentially effective strategy is to increase the number of grant applications submitted. This approach is grounded in the logic of probability: the more applications submitted, the higher the chance of securing funding, simply by virtue of increased opportunities. In essence, applicants are rewarded for “more shots on goal.” Naturally, however, it should go without saying that quality is also important. Every application should be written with diligence and care, thoroughly showcasing the research in the best possible light; it is, after all, “near misses” that we describe throughout this editorial, not “complete misses.” Moreover, as noted in Principle 2, it is crucial not to be disheartened by rejections. Critiques from reviewers, while sometimes hard to swallow, can be instrumental in refining future applications.
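The "shots on goal" logic can also be made explicit with a back-of-the-envelope calculation. Treating each submission as an independent draw with a fixed per-application success probability is an admitted simplification (real applications are neither independent nor of fixed quality), but it conveys how quickly the odds of at least one award grow with the number of applications.

```python
# Back-of-the-envelope "shots on goal" arithmetic. Assumption (a deliberate
# simplification): each submission is an independent draw with the same
# per-application success probability; real applications are neither
# independent nor of fixed quality.
def p_at_least_one_award(p_single: float, n_applications: int) -> float:
    return 1 - (1 - p_single) ** n_applications

for n in (1, 3, 5, 10):
    print(f"{n:>2} applications at a 15% per-application success rate -> "
          f"{p_at_least_one_award(0.15, n):.0%} chance of at least one award")
```

Under this assumed 15% per-application rate, three submissions yield roughly a 39% chance of at least one award, five yield roughly 56%, and ten yield roughly 80%.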
In summary, the grant peer-review process is inherently unpredictable owing to reviewer inconsistencies, making it essential for early-career investigators to submit multiple high-quality applications to improve their chances of success and to value reviewer feedback when refining future proposals.
Conclusion
Herein, we have provided a set of three core principles that we hope will help ground new investigators as they begin their academic careers and seek funding opportunities. Namely, these principles consist of understanding: 1) the Matthew Effect, i.e., accumulated advantage, 2) the importance of persistence, and 3) the stochasticity of peer review. These principles interlock in a self-reinforcing manner (Figure 1). In essence, early-career investigators should seek to liberally apply for funding (Principle 3), thereby not only gaining potential early wins (Principle 1) but also gaining crucial experience from inevitable failures (Principle 2). Importantly, systemic questions remain about how institutions can best equip their trainees and junior faculty to leverage these principles effectively. Investigating these aspects at an organizational level, which would contribute to a more comprehensive support system for early-career investigators, warrants further attention.
Acknowledgments:
The authors extend their gratitude to the past, present, and future members of the Fuller lab. Their dedication to the principles outlined in this manuscript has been, and will continue to be, the driving force behind the lab’s collective success.
Funding Statement:
KAW was supported by an Image Guided Cancer Therapy (IGCT) T32 Training Program Fellowship from T32CA261856 and an NIH F31 fellowship (5F31DE031502-02). CDF received/receives unrelated funding and salary support from: NIH National Institute of Dental and Craniofacial Research (NIDCR) Academic Industrial Partnership Grant (R01DE028290) and the Administrative Supplement to Support Collaborations to Improve AIML-Readiness of NIH-Supported Data (R01DE028290-04S2); NIDCR Establishing Outcome Measures for Clinical Studies of Oral and Craniofacial Diseases and Conditions award (R01DE025248); NSF/NIH Interagency Smart and Connected Health (SCH) Program (R01CA257814); NIH National Institute of Biomedical Imaging and Bioengineering (NIBIB) Research Education Programs for Residents and Clinical Fellows Grant (R25EB025787); NIH NIDCR Exploratory/Developmental Research Grant Program (R21DE031082); NIH/NCI Cancer Center Support Grant (CCSG) Pilot Research Program Award from the UT MD Anderson CCSG Radiation Oncology and Cancer Imaging Program (P30CA016672); Patient-Centered Outcomes Research Institute (PCS-1609-36195) sub-award from Princess Margaret Hospital; National Science Foundation (NSF) Division of Civil, Mechanical, and Manufacturing Innovation (CMMI) grant (NSF 1933369). CDF receives grant and infrastructure support from MD Anderson Cancer Center via: the Charles and Daneen Stiefel Center for Head and Neck Cancer Oropharyngeal Cancer Research Program; the Program in Image-guided Cancer Therapy; and the NIH/NCI Cancer Center Support Grant (CCSG) Radiation Oncology and Cancer Imaging Program (P30CA016672).
Footnotes
Declaration of generative AI and AI-assisted technologies in the writing process: During the preparation of this work, the authors used ChatGPT (GPT-4 architecture) to facilitate ideation and improve the grammatical accuracy of the text. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
References
1. Culbert MM, Parekh A, Shah A, et al. Predictors of an Academic Career in Radiation Oncology 5 to 10 Years After Residency. Am. J. Clin. Oncol. 2023;46:45–49.
2. Lalani N, Griffith KA, Jones RD, et al. Salary and Resources Provided to Junior Faculty in Radiation Oncology. Int. J. Radiat. Oncol. Biol. Phys. 2019;103:310–313.
3. Eblen MK, Wagner RM, RoyChowdhury D, et al. How Criterion Scores Predict the Overall Impact Score and Funding Outcomes for National Institutes of Health Peer-Reviewed Applications. PLoS One. 2016;11:e0155060.
4. Merton RK. The Matthew Effect in Science. Science. 1968;159:56–63.
5. Kozlov M. NIH plans grant-review overhaul to reduce bias. Nature. 2022;612:602–603.
6. Bol T, de Vaan M, van de Rijt A. The Matthew effect in science funding. Proc. Natl. Acad. Sci. U. S. A. 2018;115:4887–4890.
7. Jagsi R, Wilson LD. Research funding for radiation oncology: an unfortunately small sliver of an inadequate pie. Int. J. Radiat. Oncol. Biol. Phys. 2013;86:216–217.
8. Wang Y, Jones BF, Wang D. Early-career setback and future career impact. Nat. Commun. 2019;10:4331.
9. Yin Y, Wang Y, Evans JA, et al. Quantifying the dynamics of failure across science, startups and security. Nature. 2019;575:190–194.
10. Holliday EB, Jagsi R, Thomas CR Jr, et al. Standing on the shoulders of giants: results from the Radiation Oncology Academic Development and Mentorship Assessment Project (ROADMAP). Int. J. Radiat. Oncol. Biol. Phys. 2014;88:18–24.
11. Fang FC, Casadevall A. NIH peer review reform – change we need, or lipstick on a pig? Infect. Immun. 2009;77:929–932.
12. Pier EL, Brauer M, Filut A, et al. Low agreement among reviewers evaluating the same NIH grant applications. Proc. Natl. Acad. Sci. U. S. A. 2018;115:2952–2957.