The Oncologist
. 2026 Apr 3;31(5):oyag125. doi: 10.1093/oncolo/oyag125

Pivotal studies of pharmacotherapies approved by the US FDA for cancer treatment: a meta-analysis

Ronald Chow 1,2, James H B Im 3, Camilla Zimmermann 4, Georgia C Richards 5, Carl Heneghan 6
PMCID: PMC13099410  PMID: 41926737

Abstract

Background

This study aimed to examine the number of FDA-approved cancer pharmacotherapies and analyse pivotal study characteristics over time, including sample sizes.

Methods

We developed a web scraper to collate a cohort of FDA-approved cancer pharmacotherapies from 1953 until 31 December 2024. For each pharmacotherapy, details of pivotal studies leading to approval were recorded, including protocol and final sample size, study design, therapy type, and quality assessment using Cochrane Risk of Bias tools. We used regression analyses and discretization to identify trends in sample size. Type I error was set at 0.05.

(Study protocol pre-registration: https://doi.org/10.17605/OSF.IO/KVA23)

Results

We identified 255 pharmacotherapies, supported by 306 pivotal studies; 125 (49%) were targeted pharmacotherapies, 61 (24%) chemotherapies, 47 (18%) immunotherapies, 21 (8%) hormonal therapies, and 1 (0.4%) other. The median sample size was 290 (IQR = 427); sample sizes increased in the 1990s (median = 407) and remained stable thereafter. Stratified analysis demonstrated smaller sample sizes for phase 1 and 2 studies before 1980, with no change in phase 3 studies. For 165 studies reporting protocol sample sizes, studies from 2020-2024 (median = 147.5) were smaller than those from 2010-2019 (median = 320).

Conclusions

The increase in sample sizes during the 1990s may reflect new policies and legislation. Subsequent stability in sample size could be due to modern trial designs (eg, basket/umbrella studies, surrogate endpoints) that require smaller sample sizes. The recent decrease in protocol sample sizes may herald a similar decline for future studies but requires post-market surveillance to verify credibility.

Keywords: cancer, pivotal trials, sample size


Implications for Practice.

  • For methodologists: there may be a shift towards surrogate endpoints, biomarker selection, and efficient designs. These approaches require careful validation of endpoints, rigorous power calculations, and transparent reporting. Future research should refine adaptive and targeted designs to uphold statistical robustness while supporting innovation.

  • For clinicians: trials are based on selected populations. Clinicians should consider these uncertainties during decision-making, emphasize in patient education the provisional nature of early trial data, and incorporate post-market evidence where available.

  • For policymakers: continued emphasis on transparency in sample size assumptions and endpoint choice is important. As reliance on surrogate outcomes grows, stronger post-market surveillance requirements may help safeguard patient benefit and verify trial credibility.

Introduction

Cancer is the leading global cause of morbidity and mortality, with approximately 20 million incident cases and 10 million deaths annually.1 Consequently, substantial international investment in cancer research, exceeding 8 billion US dollars annually, drives efforts to understand the disease and develop new treatments.2 This investment has led to extensive research output and the approval of hundreds of pharmacotherapies by regulatory bodies such as the US Food and Drug Administration (FDA).2,3 Recent advances in immunotherapy have further accelerated the development and approval of new treatments.4

There are many FDA-approved cancer treatments,5 including standard-of-care therapies, drugs used in clinical trials, and others not used in practice. Prior reviews have assessed subsets of these therapies, examining complete response rates,6 types of primary endpoints used,7 and trial sample sizes required for label extensions or secondary indications.8,9

To date, no systematic review has been conducted of all FDA-approved pharmacotherapies, focusing on the sample sizes of pivotal studies leading to approval. A review of all approved pharmacotherapies can provide valuable insights into changes in practice over time. Furthermore, approval pathways have evolved and become faster,10,11 which may have impacted the required sample size. Comprehensive inclusion of all pharmacotherapies, without limitation to a certain time interval, allows for a broader understanding of changes in policy and practice across a longer time horizon. Additionally, prior reviews have not conducted quality assessments of trials, a key component of systematic review methodology.

This systematic review aimed to examine all US FDA-approved pharmacotherapies for cancer, focusing specifically on the sample sizes of pivotal studies required for approval.

Methods

A systematic review of all FDA-approved cancer pharmacotherapies was designed and the study protocol was preregistered on an open repository (doi: https://doi.org/10.17605/OSF.IO/KVA23).12 The FDA was chosen as the focal agency due to its rigorous and centralised drug approval process compared to other agencies.13 There was one protocol deviation: only non-parametric testing was conducted, as the underlying data were not normally distributed. The reporting of this review adheres to the PRISMA 2020 statement.14

Eligibility criteria

All FDA-approved pharmacotherapies listed on the US FDA’s website for cancer treatment were reviewed. Pharmacotherapies with different routes of administration (eg, intravenous versions of previously approved oral drugs) or combinations of previously approved therapies were excluded. Biochemical modulators, which lack a directly antineoplastic mechanism of action, were also excluded.

Outcomes

The primary outcomes were the year of FDA approval for each pharmacotherapy’s first cancer treatment use and the sample size of the pivotal studies. Pivotal studies were defined as the efficacy studies cited by the FDA to support approval. Secondary outcomes included the pivotal studies’ primary outcome and, where available, the a-priori protocol sample size and estimated effect size (and whether it fell within the 95% confidence interval of the observed effect size).

Data records and management

Data extraction was conducted in two rounds. We developed a web scraper (publicly accessible on GitHub: https://doi.org/10.5281/zenodo.15316361)15 to record all approved drugs and their cancer indications from the US FDA’s website listing5 of all approved pharmacotherapies for cancer, and manually verified (RC) the data (Methodology S1).

We also extracted study design (phase 1, 2, or 3) and the primary outcome’s effect size if reported as a relative risk ratio. The study protocol was searched for in published papers and on clinicaltrials.gov. If available, the a-priori protocol sample size and estimated effect size as a relative risk ratio were recorded. The extracted data are publicly available in an open repository.12

Quality assessment

Two reviewers (RC, JHBI) independently assessed the quality of each pivotal study using the Cochrane Risk of Bias in Non-Randomized Studies of Interventions version 2 (ROBINS-I v2) tool16 for non-randomised trials, and the Cochrane Risk of Bias version 2 (RoB 2) tool17 for randomised trials. Discrepancies were resolved by discussion and consensus, with involvement of a third reviewer (CZ) if needed.

Data synthesis

A narrative synthesis with descriptive statistics summarised the collected data. We reported each US FDA-approved cancer pharmacotherapy, its first approved cancer site, approval year, therapy type, and pivotal study details, including sample size. Quality assessment was summarised using the robvis visualisation tool developed in R.18 For studies with protocols, we noted the difference between the final and protocol sample size, and whether the estimated effect size was within the 95% confidence interval of the observed final effect size. We also reported the median estimated and observed effect size, and their median relative difference; we repeated this for estimated and observed event rates.

Sample sizes were summarised using boxplots with accompanying statistics of median and interquartile range (IQR). If FDA approval for a pharmacotherapy depended on multiple pivotal studies, the summed sample size was used for analysis. The Shapiro-Wilk test assessed normality; as the data were not normally distributed, non-parametric analyses were used. We compared medians by quality assessment, study design, and type of pharmacotherapy using the Kruskal–Wallis test and post hoc Dunn’s test with Bonferroni correction. Final quality assessments using the RoB 2 and ROBINS-I v2 tools were categorized together (Supplemental Methodology S2).
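The non-parametric workflow described above (Shapiro-Wilk for normality, Kruskal–Wallis across groups, then Dunn’s post hoc test with a Bonferroni correction) can be sketched in Python. This is an illustrative sketch on synthetic data, not the authors’ Stata code; the group names, distributions, and sizes below are invented for demonstration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical sample sizes for three therapy types (illustrative only,
# not the paper's data): log-normal draws, hence non-normal.
groups = {
    "hormonal": rng.lognormal(mean=6.5, sigma=0.6, size=21),
    "chemo": rng.lognormal(mean=4.8, sigma=1.0, size=61),
    "immuno": rng.lognormal(mean=5.3, sigma=0.7, size=47),
}

# Shapiro-Wilk: test each group for normality.
for name, x in groups.items():
    w, p = stats.shapiro(x)
    print(f"{name}: Shapiro-Wilk p = {p:.3g}")

# Kruskal-Wallis: do the distributions differ across groups?
h, p_kw = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_kw:.3g}")

# Dunn's post hoc test with Bonferroni correction: rank all observations
# together, then compare mean ranks pairwise against a normal reference.
pooled = np.concatenate(list(groups.values()))
ranks = stats.rankdata(pooled)
n_total = len(pooled)
mean_ranks, sizes, idx = {}, {}, 0
for name, x in groups.items():
    mean_ranks[name] = ranks[idx:idx + len(x)].mean()
    sizes[name] = len(x)
    idx += len(x)

names = list(groups)
n_pairs = len(names) * (len(names) - 1) // 2
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = names[i], names[j]
        se = np.sqrt(n_total * (n_total + 1) / 12.0
                     * (1.0 / sizes[a] + 1.0 / sizes[b]))
        z = (mean_ranks[a] - mean_ranks[b]) / se
        p = min(1.0, 2 * stats.norm.sf(abs(z)) * n_pairs)  # Bonferroni
        print(f"{a} vs {b}: z = {z:.2f}, Bonferroni p = {p:.3g}")
```

The Bonferroni correction here simply multiplies each pairwise p-value by the number of comparisons, matching the paper’s stated procedure.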

Logistic regression, reporting odds ratios (ORs), was used to determine whether year of approval was associated with pharmacotherapies being immunotherapies or targeted therapies, and whether sample size was associated with studies having a high/serious/critical risk of bias or being phase 3 studies. 95% confidence intervals (CIs) were obtained using non-parametric bootstrapping with 1,000 replications.

We grouped studies into 10-year intervals and compared median sample size across decades using the Kruskal–Wallis test, with post hoc Dunn’s test with Bonferroni correction; we repeated the analysis by 25-year, 20-year, and 5-year intervals. We analysed sample sizes by year of approval using cubic spline regressions with knots at 20-year, 10-year, and 5-year intervals. Stratified analysis was conducted by study design.

To assess the relationship between effect size and sample size, we grouped studies into the following categories: 1-250, 251-500, 501-750, 751-1000, and >1000 patients. Across the groups, we compared the point estimate of effect size and the upper limit of its 95% confidence interval. Differences between medians were analysed using the Kruskal–Wallis test, followed by post hoc Dunn’s test with Bonferroni correction. We analysed trends of effect size by sample size using cubic spline regressions with knots at intervals of 500 (500 and 1000 patients), 250 (250, 500, 750, and 1000 patients), 200 (200, 400, 600, 800, and 1000 patients), and 100 (100, 200, 300, 400, 500, 600, 700, 800, 900, and 1000 patients).
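A cubic spline regression with prespecified knots can be written down directly using a truncated-power basis. The sketch below uses invented (sample size, effect size) data, with knots at intervals of 250 as one of the specifications above, and reports the adjusted R² used to compare models:

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power cubic spline design matrix:
    columns 1, x, x^2, x^3, then (x - k)^3_+ for each knot k."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def adjusted_r2(y, yhat, n_params):
    """R^2 penalized for the number of fitted parameters."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    return 1 - (rss / (n - n_params)) / (tss / (n - 1))

rng = np.random.default_rng(2)

# Hypothetical (sample size, relative effect) pairs, illustrative only:
# effect estimates drift toward the null as trials get larger.
n_pat = rng.uniform(20, 1500, size=128)
effect = 0.45 + 0.25 * (1 - np.exp(-n_pat / 400)) + rng.normal(0, 0.1, 128)

knots = [250, 500, 750, 1000]            # knots at intervals of 250
X = cubic_spline_basis(n_pat, knots)
beta, *_ = np.linalg.lstsq(X, effect, rcond=None)
yhat = X @ beta
r2 = adjusted_r2(effect, yhat, X.shape[1])
print(f"adjusted R^2 = {r2:.3f}")
```

Because adding knots always improves raw R², the adjusted R² is the appropriate comparison across knot specifications, which is how the paper reports model fit.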

For analysis of protocol sample size, we likewise grouped studies into 10-year intervals and compared the median across decades using the Kruskal–Wallis test and post hoc Dunn’s test with Bonferroni correction. Type I error was set at 0.05. All analyses were performed in StataBE 18.0.

Results

We identified 306 pivotal studies for 255 FDA-approved cancer pharmacotherapies (Figure S1). Table S1 lists each pharmacotherapy, approval date, pivotal study name, and its sample size.E1-E294 One pharmacotherapy (Aldesleukin) was approved based on seven studies, nine (4%) on three, 23 (9%) on two, and 222 (87%) on one.

Among pharmacotherapies, 49% (n = 125) were targeted therapies, 24% (n = 61) chemotherapy, 18% (n = 47) immunotherapy, 8% (n = 21) hormonal therapy, and 0.4% (n = 1) stem cell mobilisers. The number of immunotherapies (OR = 1.08, 95% CI: 1.03-1.13; P = .01) and targeted therapies (OR = 1.09, 95% CI: 1.06-1.12; P < .01) increased after 2010 (Figure S2). Approval was based on phase 3 studies for 50% (n = 128) of pharmacotherapies, phase 2 studies for 40% (n = 103), and phase 1 studies for 9.4% (n = 24). The number of phase 2 and 3 studies increased after 1990 (Figure S3), with more recent pharmacotherapies more likely to be supported by phase 3 studies (OR = 1.03, 95% CI: 1.01, 1.04; P < .01).

The majority of studies had low risk of bias. Quality assessments of pivotal studies are detailed in Table S2 and summarized in Figures S4 and S5; 50% (n = 77) of randomized controlled trials and 58% (n = 88) of non-randomized trials had low risk of bias. More recent pharmacotherapies were more likely to have low risk of bias (OR = 1.08, 95% CI: 1.04, 1.10; P < .01) (Figure S6).

Sample size

Median sample size per pharmacotherapy was 290 (IQR = 427). Hormonal therapies (median = 764, IQR = 473) had larger sample sizes than chemotherapies (median = 120, IQR = 342) (P < .01) and immunotherapies (median = 206, IQR = 230) (P < .01) (Figure S7). Phase 3 studies of pharmacotherapies had larger sample sizes (median = 520.5, IQR = 433.5) than phase 1 (median = 47, IQR = 96.5) (P < .01) and phase 2 studies (median = 143, IQR = 138) (P < .01); phase 2 studies were larger than phase 1 studies (P = .02) (Figure S8). Studies with low (median = 312, IQR = 391) or moderate (median = 480, IQR = 544) risk of bias had larger sample sizes than those with high risk of bias (median = 165, IQR = 294) (P < .01) (Figure S9).

Figure S10 reports the number of pharmacotherapies by year and Figure 1 shows sample size by decade. Sample sizes were smaller from 1950-1979 (1950-1959: median = 61, IQR = 86; 1960-1969: median = 40, IQR = 37; 1970-1979: median = 48, IQR = 29) compared to post-1990 (1990-1999: median = 407, IQR = 561; 2000-2009: median = 329, IQR = 344; 2010-2019: median = 350, IQR = 459.5; 2020-2024: median = 290, IQR = 67). No significant differences were observed post-1990.

Figure 1.


Sample size of pharmacotherapies, by 10-year approval periods.

Sample sizes were not the same across decades (P < .01). Sample sizes from 1950-1979 were smaller than those post-1990.

Comparisons by 25-year, 20-year and 5-year categories showed similar trends; sample sizes were larger after 1990. The regression models fit poorly; the best model, with knots at 10-year intervals, explained 9.2% of variability (adjusted R2= 0.092).

In stratified analysis by study design, median sample sizes differed across time for pharmacotherapies approved on phase 1 (P = .04) and phase 2 (P = .01) studies; median sample sizes were similar across decades for phase 3 studies (P = .59). Among phase 1 studies, those approved from 1960-1969 (median = 38.5, IQR = 26.5) had smaller sample sizes than those approved from 2010-2019 (median = 174, IQR = 141) (P = .02). Among phase 2 studies, those approved from 1970-1979 (median = 50, IQR = 45) had smaller sample sizes than those approved from 1990-1999 (median = 162, IQR = 129) (P = .03), 2010-2019 (median = 160, IQR = 128) (P = .01), and 2020-2024 (median = 146, IQR = 205) (P = .04).

Among 128 pharmacotherapies based on phase 3 studies, 2 (2%) reported only a relative risk ratio, 47 (37%) only absolute risks, and 79 (62%) reported both. Absolute risks were just over 50% more likely to be reported than relative measures (likelihood ratio = 1.56, 95% CI: 1.35-1.79; P < .01).

Pharmacotherapies with 1-250 patients (median = 0.47, IQR = 0.49) and 251-500 patients (median = 0.52, IQR = 0.20) had relative estimates further from the null than larger studies with 501-750 (median = 0.67, IQR = 0.16), 751-1000 (median = 0.70, IQR = 0.09), and >1000 patients (median = 0.70, IQR = 0.15) (Figure S11). Regression of relative estimates with knots at every 100 patients showed moderate fit (adjusted R2=0.303). The upper limit of 95% confidence intervals was not the same across sample size groups (P = .03), with no differences between specific groups (P > .05) (Figure S12).

Protocol sample size

Of 255 pharmacotherapies, 165 (65%) had protocols reporting a-priori protocol sample size. All were approved after 1999. Pharmacotherapies approved in 2020-2024 (median = 147.5, IQR = 300) had smaller protocol sample sizes than those in 2010-2019 (median = 320, IQR = 506; P = .01) (Figure 2). Regression with the best model of knots at 10-year intervals showed poor fit (adjusted R2=0.033). Final sample sizes exceeded protocol sizes by 28%. There was no significant difference over time (P = .68).

Figure 2.


Protocol sample size of pharmacotherapies, by 10-year approval periods.

Sample sizes were not the same across decades (P = .01); sample sizes in 2020-2024 were smaller than those from 2010-2019 (P = .01).

Effect sizes

Of these 165 protocols, 38% (n = 63) reported estimated effect sizes used in sample size calculation. The median estimated effect size was 0.60 (IQR 0.17-0.70) and the median observed effect size was 0.53 (IQR 0.28-0.67); the median relative underestimation of effect size was 10% (IQR 1-29%). Three-quarters of protocol effect sizes fell within the 95% confidence interval of the observed effect size; the remainder fell closer to the null. A higher proportion of recent pharmacotherapies had an estimated effect size within the 95% confidence interval of the observed effect size (15 of 19 (78.9%) studies from 2020-2024, compared to 29 of 39 (74.4%) from 2010-2019 and 2 of 4 (50%) from 2000-2009). The median proposed event rate was 69% (IQR 51-73%) and the median observed event rate was 57% (IQR 47-70%); across studies, the median relative overestimation of event rate was 4%.

Discussion

In this first systematic report of all FDA-approved pharmacotherapies for cancer treatment, reporting on 255 pharmacotherapies, we found that the sample size of the 306 pivotal studies used for FDA approval was significantly larger for pharmacotherapies approved after 1990, particularly for phase 1 and 2 studies. After 1990, the median sample size was stable at approximately 350 participants until 2020, when it decreased to 290. There were no clear differences in levels of significance for results across trial sizes. This may appear counterintuitive, as ongoing improvements in standard care should lower baseline mortality rates, thereby decreasing absolute event rates in control arms.19 Detecting a consistent relative risk under these conditions generally requires larger trials to detect smaller changes in absolute event rates.

The increase in sample sizes in the 1990s may be linked to policies and legislation introduced early in that decade. In 1990, the US FDA, alongside the European Union and Japan, launched the International Council for Harmonisation of Technical Requirements of Pharmaceuticals for Human Use (ICH), which established stricter, harmonized drug development standards.20 The ICH focused on safety, efficacy, and quality, publishing guidelines on clinical trials, statistical principles, and control group selection.21 These guidelines called for more rigorous methodologies,22–27 which likely contributed to the lower risk of bias and larger sample sizes of pivotal studies observed in our critical appraisal of over 300 pivotal studies. In 1992, the US passed the Prescription Drug User Fee Act (PDUFA), allowing the FDA to collect fees from sponsors of new drug or biologics license applications and in exchange mandating faster review processes.28 In conjunction with the ICH’s more rigorous clinical trial guidelines,29 PDUFA accelerated the FDA’s review and raised evidentiary standards, likely contributing to the rise in sample sizes of pivotal studies post-1990.

The stabilization of sample sizes since 1990 is likely multifactorial. Recent trials often use surrogate endpoints, like progression-free survival and response rates instead of overall survival, which can increase power by increasing control event rate and therefore reduce the required sample size.30,31 Additionally, the rise in targeted therapies, which require biomarker-defined subpopulations, has led to the use of efficient trial designs such as basket and umbrella studies, allowing smaller sample sizes.32,33 These methodological changes may explain the lack of significant increase in pivotal study sample size.

Furthermore, trial sample size is determined not only by regulatory standards but also by sponsor planning: our comparison of protocol-specified and observed effect sizes (median estimated effect size 0.60 vs observed 0.53; median relative underestimation 10%; ∼75% of estimates within observed 95% CIs) suggests that sponsor power calculations frequently approximate observed effects, which together with the proliferation of mechanistically similar agents (for example, PD-1/PD-L1 inhibitors) and convergent trial designs contributes to the observed stability of sample sizes in recent decades.

A related and important paradigm shift in oncology regulation has been the increasing acceptance of robust phase 1/2 evidence, particularly in biomarker-selected populations and for agents with compelling early efficacy signals, often under accelerated approval or breakthrough therapy designations. This practice can lead to approvals based on larger early-phase cohorts and single-arm designs that nevertheless require fewer patients than large phase 3 trials in unselected populations; such regulatory and development changes likely contribute to the composition of trial phases and sample sizes observed in our dataset.

Studies generally enrolled 28% more patients than planned, with no change over time, and effect size estimates in power calculations were relatively reliable regardless of sample size. Thus, protocol sample size may be a good predictor of final sample size. Given the recent decline in protocol sample sizes, we predict that the trend of declining study sample sizes may continue without statistical compromise in the final study results.

Strengths and limitations

Study strengths included critical appraisal of 306 studies and preregistration on an open repository.12 Data extraction was conducted through a two-stage process, involving a reproducible web scraper to automate data collection supplemented by human review; the web scraper15 is available on GitHub for transparency and reproducibility, and employs web scraping methodology similar to existing large-scale tools.34 Extraction followed prespecified criteria, and quality assessment was conducted in duplicate and transparently reported for each pivotal study. We also reported excluded pharmacotherapies, conducted stratified analyses by study design, and adhered to PRISMA 2020 reporting guidelines.

For pharmacotherapies approved before the 1970s, identifying the pivotal study was challenging due to unclear documentation of which trial supported FDA approval. We manually searched online resources to identify the major trial immediately preceding FDA approval, relying on the assumption that relevant trials were published. This assumption may not always hold,35 but was nevertheless the most feasible method of approximation.

Over one-third of pharmacotherapies approved on phase 3 studies reported only absolute risks as primary outcomes; these were excluded from analyses of relative efficacy. Nearly one-third of pharmacotherapies lacked protocol sample sizes, and of those reporting these, nearly two-thirds did not report estimated effect size in power calculations, leading to incompleteness of analyses, particularly for earlier decades.

Only non-parametric testing was used due to non-normal data, which may have limited statistical power to detect subtle differences. For pharmacotherapies approved based on multiple studies, we used summed sample sizes. Although this may obscure the contribution of individual trials, it was appropriate given our focus on sample sizes required for approval, rather than for individual trials. Additionally, exploratory regressions with multiple spline knots may risk overfitting, but we addressed this by reporting adjusted R2 to penalize model complexity.

Implications for clinicians, methodologists and policymakers

For methodologists, the recent stabilisation and decline of sample sizes amid increasing trial complexity may reflect a previously observed shift towards surrogate endpoints, biomarker selection, and efficient designs.36 These approaches require careful validation of endpoints, rigorous power calculations, and transparent reporting. Future research should refine adaptive and targeted designs to uphold statistical robustness while supporting innovation.

For clinicians, the results described above may similarly indicate the recent trend in approval of newer cancer therapies based on trials using surrogate endpoints and selected populations, with limited evidence on overall survival and long-term benefit.36 Clinicians should consider these uncertainties during decision-making, emphasize in patient education the provisional nature of early trial data, and incorporate post-market evidence where available.

Finally, for policymakers, the post-1990 rise in sample size may reflect stronger international standards. The recent decline in protocol sample size should be monitored to ensure it reflects methodological advances, not weaker evidence thresholds. Continued transparency in sample size assumptions and endpoint choice is important. As reliance on surrogate outcomes grows, stronger post-market surveillance requirements may help safeguard patient benefit and verify trial credibility.

In conclusion, there was a substantial increase in the sample size of pivotal studies in the 1990s, which was likely related to new policies and legislation at that time. Since then, no further increase has occurred, possibly due to the adoption of surrogate endpoints and more efficient trial designs. Protocol sample sizes have decreased recently without statistical compromise in the final study results, which may herald a future decline in sample sizes of pivotal studies.

Supplementary Material

oyag125_Supplementary_Data

Contributor Information

Ronald Chow, Centre for Evidence-Based Medicine, University of Oxford, Oxford, OX2 6GG, United Kingdom; Department of Supportive Care, Princess Margaret Cancer Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, M5G 2M9, Canada.

James H B Im, Faculty of Medicine & Dentistry, University of Alberta, Edmonton, T6G 2R7, Canada.

Camilla Zimmermann, Department of Supportive Care, Princess Margaret Cancer Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, M5G 2M9, Canada.

Georgia C Richards, Department of Analytical, Environmental and Forensic Sciences, Institute of Pharmaceutical Sciences, School of Cancer & Pharmaceutical Sciences, Faculty of Life Sciences & Medicine, King’s College London, London, SE1 9NH, United Kingdom.

Carl Heneghan, Centre for Evidence-Based Medicine, University of Oxford, Oxford, OX2 6GG, United Kingdom.

Author contributions

Ronald Chow (Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing—original draft, Writing—review & editing), James H. B. Im (Data curation, Writing—review & editing), Camilla Zimmermann (Methodology, Writing—review & editing), Georgia C Richards (Conceptualization, Formal analysis, Funding acquisition, Methodology, Project administration, Supervision, Writing—review & editing), and Carl Heneghan (Conceptualization, Formal analysis, Funding acquisition, Project administration, Supervision, Writing—review & editing)

Funding

None declared.

Conflicts of interest

GCR is employed by King’s College London through a grant from the King’s Prize Fellowship supported by the Anthony and Elizabeth Mellows Charitable Settlement (2024-2026). GCR has an honorary Senior Associate Tutor contract with the University of Oxford and has received a travel fellowship from the Pandemic EVIDENCE Collaboration, funded by the McCall MacBain Foundation (2024-2026). GCR has had travel expenses reimbursed for attending and presenting at conferences, and received fees for speaking at events, teaching, and training coroners. GCR is the director of a limited company, a community interest company, and receives subscriptions to a personal Substack newsletter for the Preventable Deaths Tracker.

CZ is supported by the Harold and Shirley Lederman Chair in Psychosocial Oncology and Palliative Care, a joint Chair among the University of Toronto, Princess Margaret Cancer Centre/University Health Network and the Princess Margaret Cancer Foundation.

Data availability

All data are publicly available in published literature.

References

  • 1. Sung H, Ferlay J, Siegel RL, et al.  Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin.  2021;71:209-249. [DOI] [PubMed] [Google Scholar]
  • 2. Abudu R, Bouche G, Bourougaa K, et al.  Trends in international cancer research investment 2006-2018. JCO Glob Oncol.  2021;7:602-610. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Anand U, Dey A, Chandel AKS, et al.  Cancer chemotherapy and beyond: current status, drug candidates, associated risks and progress in targeted therapeutics. Genes Dis.  2023;10:1367-1401. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Twomey JD, Zhang B.  Cancer immunotherapy update: FDA-approved checkpoint inhibitors and companion diagnostics. AAPS J.  2021;23:39. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. National Cancer Institute. Drugs approved for different types of cancer. 2024. https://www.cancer.gov/about-cancer/treatment/drugs/cancer-type2024.
  • 6. Chen EY, Raghunathan V, Prasad V.  An overview of cancer drugs approved by the US food and drug administration based on the surrogate end point of response rate. JAMA Intern Med.  2019;179:915-921. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Arora S, Narayan P, Osgood CL, et al.  U.S. FDA drug approvals for breast cancer: a decade in review. Clin Cancer Res.  2022;28:1072-1086. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Hutchinson N, Carlisle B, Doussau A, et al.  Patient participation in clinical trials of oncology drugs and biologics preceding approval by the US food and drug administration. JAMA Netw Open.  2021;4:e2110456. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Ouimet C, Hutchinson N, Wang C, Matyka C, Del Paggio JC, Kimmelman J.  Large numbers of patients are needed to obtain additional approvals for new cancer drugs: a retrospective cohort study. Sci Rep.  2023;13:16138. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Beakes-Read G, Neisser M, Frey P, Guarducci M.  Analysis of FDA’s accelerated approval program performance december 1992-December 2021. Ther Innov Regul Sci.  2022;56:698-703. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Michaeli DT, Michaeli T, Albers S, Boch T, Michaeli JC.  Special FDA designations for drug development: orphan, fast track, accelerated approval, priority review, and breakthrough therapy. Eur J Health Econ.  2024;25:979-997. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Chow R, Richards GC, Zimmermann C, Heneghan C. Protocol: a systematic review of pharmacotherapies approved by the United States Food and Drug Administration for the treatment of cancer. 2025. 10.17605/OSF.IO/KVA23. [DOI]
  • 13. Van Norman GA.  Drugs and devices: comparison of European and U.S. approval processes. JACC Basic Transl Sci.  2016;1:399-412. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Page MJ, McKenzie JE, Bossuyt PM, et al.  The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ.  2021;372:n71. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15. Chow R. Web scraper for systematic review of pharmacotherapies approved by the United States Food and Drug Administration for the treatment of cancer. 2025. 10.5281/zenodo.15316361.
  • 16. Sterne JAC, Higgins JPT. ROBINS-I v2 tool. 2024. Accessed January 7, 2025. https://www.riskofbias.info/welcome/robins-i-v2.
  • 17. Sterne JAC, Savović J, Page MJ, et al.  RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ.  2019;366:l4898. [DOI] [PubMed] [Google Scholar]
  • 18. McGuinness LA, Higgins JPT.  Risk-of-bias VISualization (robvis): an R package and shiny web app for visualizing risk-of-bias assessments. Res Synth Methods. 2021;12:55-61. [DOI] [PubMed] [Google Scholar]
  • 19. Tenny S, Hoffman MR. Relative risk. 2023. Accessed May 2025. https://www.ncbi.nlm.nih.gov/books/NBK430824/.
  • 20. U.S. Food & Drug Administration. ICH overview. 2022. https://www.fda.gov/media/165161/download2025.
  • 21.International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. Efficacy guidelines. 2024. https://www.ich.org/page/efficacy-guidelines2025.
  • 22. Bass AS, Hombo T, Kasai C, Kinter LB, Valentin JP.  A historical view and vision into the future of the field of safety pharmacology. Handb Exp Pharmacol.  2015;229:3-45. [DOI] [PubMed] [Google Scholar]
  • 23. Pledger G.  Proof of efficacy trials: choosing the dose. Epilepsy Res.  2001;45:23-28. discussion 9–30. [DOI] [PubMed] [Google Scholar]
  • 24. Gould AL.  Substantial evidence of effect. J Biopharm Stat.  2002;12:53-77. [DOI] [PubMed] [Google Scholar]
  • 25. Rockhold FW.  Industry perspectives on ICH guidelines. Stat Med.  2002;21:2949-2957. [DOI] [PubMed] [Google Scholar]
  • 26. Akacha M, Bretz F, Ruberg S.  Estimands in clinical trials - broadening the perspective. Stat Med.  2017;36:5-19. [DOI] [PubMed] [Google Scholar]
  • 27. Jahanshahi M, Gregg K, Davis G, et al.  The use of external controls in FDA regulatory decision making. Ther Innov Regul Sci.  2021;55:1019-1035. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Berndt ER, Gottschalk AH, Philipson TJ, Strobeck MW.  Industry funding of the FDA: effects of PDUFA on approval times and withdrawal rates. Nat Rev Drug Discov.  2005;4:545-554. [DOI] [PubMed] [Google Scholar]
  • 29. Mitchell AP, Trivedi NU, Bach PB.  The prescription drug user fee act: much more than user fees. Med Care.  2022;60:287-293. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Kemp R, Prasad V.  Surrogate endpoints in oncology: when are they acceptable for regulatory and clinical decisions, and are they currently overused?  BMC Med.  2017;15:134. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31. Timothée O, Alyson H, Dagney O, Eduardo F, Vinay P.  Bedside implications of the use of surrogate endpoints in solid and haematological cancers: implications for our reliance on PFS, DFS, ORR, MRD and more. BMJ Oncol.  2024;3:e000364. [Google Scholar]
  • 32. Kasim A, Bean N, Hendriksen SJ, Chen TT, Zhou H, Psioda MA.  Basket trials in oncology: a systematic review of practices and methods, comparative analysis of innovative methods, and an appraisal of a missed opportunity. Front Oncol.  2023;13:1266286. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Barnett AG, Glasziou P.  Target and actual sample sizes for studies from two trial registries from 1999 to 2020: an observational study. BMJ Open.  2021;11:e053377. [Google Scholar]
  • 34. DeVito NJ, Richards GC, Inglesby P.  How we learnt to stop worrying and love web scraping. Nature.  2020;585:621-622. [DOI] [PubMed] [Google Scholar]
  • 35. Lee K, Bacchetti P, Sim I.  Publication of clinical trials supporting successful new drug applications: a literature analysis. PLoS Med.  2008;5:e191. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Heneghan C, Goldacre B, Mahtani KR.  Why clinical trial outcomes fail to translate into benefits for patients. Trials.  2017;18:122. [DOI] [PMC free article] [PubMed] [Google Scholar]
