Published in final edited form as: Semin Radiat Oncol. 2023 Oct;33(4):429–437. doi: 10.1016/j.semradonc.2023.06.007

Challenges, complexities, and considerations in the design and interpretation of late-phase oncology trials

Timothy A Lin 1, Alexander D Sherry 2, Ethan B Ludmir 2,3,*
PMCID: PMC10917127  NIHMSID: NIHMS1914148  PMID: 37684072

Abstract

Optimal management of cancer patients relies heavily on late-phase oncology randomized controlled trials. A comprehensive understanding of the key considerations in designing and interpreting late-phase trials is crucial for improving subsequent trial design, execution, and clinical decision-making. In this review, we explore important aspects of late-phase oncology trial design. We begin by examining the selection of primary endpoints, including the advantages and disadvantages of using surrogate endpoints. We address the challenges involved in assessing tumor progression and discuss strategies to mitigate bias. We define informative censoring bias and its impact on trial results, including illustrative examples of scenarios that may lead to informative censoring. We highlight the traditional roles of the log-rank test and hazard ratio in survival analyses, describe their limitations in the presence of non-proportional hazards, and introduce alternative survival estimands, such as restricted mean survival time and MaxCombo. We emphasize the distinctions between the design and interpretation of superiority and non-inferiority trials, and compare Bayesian and frequentist statistical approaches. Finally, we discuss appropriate utilization of phase II and phase III trial results in shaping clinical management recommendations, evaluating the inherent risks and benefits associated with relying on phase II data for treatment decisions.

Introduction

Late-phase oncology randomized controlled trials (RCTs) form the primary basis for approval of new cancer therapies by regulatory agencies. Therefore, it is essential to comprehensively understand the design of these trials, recognize potential biases that may arise during a study, and address the subsequent challenges in interpreting the findings. This knowledge is invaluable for both trialists and clinicians, enabling informed decision-making and empowering patients to participate actively in treatment decisions. Furthermore, given the substantial costs and time investment involved in the development of novel therapies,1 it is imperative to optimize all available resources for the benefit of patient care. In this article, we present an overview of key topics in late-phase clinical trial design and execution (Table 1). Topics covered include the selection of primary endpoints, identification of potential sources of bias, considerations for superiority and non-inferiority trials, exploration of alternative survival endpoints, and the role of Bayesian statistical inference in cancer trials. Lastly, we discuss controversies regarding the appropriate use of late-phase trial data to inform regulatory decisions.

Table 1:

Key concepts affecting late-phase trial design, execution, and interpretation

Primary endpoint selection and surrogate endpoints
• Background: Surrogate endpoints (e.g., progression-free survival) are increasingly used as primary endpoints in late-phase trials and as the basis for regulatory approval of new therapies.
• Issues and impact: Surrogate endpoints often do not correlate with overall survival or quality-of-life endpoints, creating a heightened risk of approving therapies with uncertain benefit.
• Takeaway: Exercise caution when interpreting surrogate endpoint trial data, particularly in the absence of a validated correlation with overall survival or quality of life.

Assessment of tumor progression
• Background: Tumor progression is usually assessed with an existing standard such as the Response Evaluation Criteria in Solid Tumors (RECIST).
• Issues and impact: Inter-rater variability exists despite standardized definitions of progression, and open-label trials in which tumor progression is assessed by the local investigator may be at risk of bias.
• Takeaway: Blinded independent central review (BICR) of imaging may reduce inter-rater variability but introduces financial and logistical costs and the risk of different biases.

Informative censoring
• Background: Informative censoring bias occurs when censored patients are more or less likely to experience an event than patients who remain on study; it may be more likely in studies requiring active follow-up or imaging scans to assess the study endpoint.
• Issues and impact: Informative censoring may distort trial results, particularly those with surrogate endpoints. For example, it may artificially improve the progression-free survival results of an experimental therapy that is more toxic than standard of care if experimental-arm patients who discontinue treatment were likely to progress with minimal added follow-up.
• Takeaway: Trialists can reduce the risk of informative censoring by assessing overall survival, ensuring the control arm receives standard-of-care therapy, and requiring additional imaging assessments after treatment discontinuation.

Non-proportional hazards
• Background: In survival analyses, the validity of the hazard ratio depends upon the proportional hazards assumption, i.e., a constant hazard ratio over the follow-up period.
• Issues and impact: Non-proportional hazards (NPH) cause uncertainty in interpreting the hazard ratio and reduce the power of the log-rank test. NPH, present in roughly 1 in 4 late-phase oncology trials, may result in unclear or inappropriate interpretation of trial results.
• Takeaway: Trialists should routinely assess for proportional hazards and consider alternative survival estimands that do not depend upon proportional hazards (e.g., restricted mean survival time).

Superiority and non-inferiority trials
• Background: Superiority trials determine whether one therapy is more effective than another, whereas non-inferiority trials determine whether one therapy is not meaningfully worse than another.
• Issues and impact: Superiority and non-inferiority trials differ in key respects (e.g., sample size requirements, analysis cohort), and lack of superiority does not imply non-inferiority. Superiority trials with negative results have in practice been used to justify non-inferiority (e.g., CONVERT, RTOG 0538), leading to challenges in the clinical interpretation of such results.
• Takeaway: Clinicians should resist the temptation to extrapolate non-inferiority from a negative superiority trial, and trialists must continue to carefully weigh the implications of selecting between superiority and non-inferiority designs.

Frequentist vs. Bayesian statistical inference
• Background: Oncology trials traditionally rely on frequentist statistical inference (i.e., p-values, 95% confidence intervals), as opposed to Bayesian statistical inference, which incorporates data obtained prior to or during the study period.
• Issues and impact: P-values do not describe the probability of the study hypothesis being true or false and do not provide information about effect size or clinical significance. Rare diseases are difficult to study because sample size requirements outpace enrollment, and frequentist trial results are, in practice, interpreted in the context of prior data, biological plausibility, and other external information.
• Takeaway: Bayesian trials allow for transparency in the assumptions used to interpret trial results, and Bayesian adaptive trials incorporate data obtained during the study, potentially yielding more efficient trials that are useful when enrollment is difficult.

Interpretation of phase II trial results
• Background: Phase III trials are the gold standard for practice-changing evidence but are costly and time-consuming, and the quality of evidence from phase II and III studies varies by trial design, endpoint selection, and the strength of trial results. Phase II data have been used to change the standard of care (e.g., RTOG 0529).
• Issues and impact: Phase II trials meeting their primary endpoint often do not replicate those results in phase III, phase II studies often lack randomization, and phase II studies may underestimate or overestimate effect sizes depending upon how the studies are powered.
• Takeaway: Phase III studies should remain the mainstay evidence used to change the standard of care, and caution should be exercised when using phase II data to inform practice changes.

Primary endpoint selection

Selecting an appropriate primary endpoint is a critical consideration in late-phase RCTs with time-to-event endpoints. Traditionally, overall survival, defined as the time from diagnosis or intervention to death from any cause, has been the gold standard for evaluating the efficacy of novel therapies. However, modern cancer trials increasingly utilize intermediate endpoints, also known as surrogate endpoints, such as progression-free survival, relapse-free survival, or disease-free survival.2

The adoption of surrogate endpoints provides an alternative approach to assess the effectiveness of experimental therapies. Surrogate endpoints offer the potential to expedite trial completion3 and reporting,4 accelerating the introduction of new treatments for patients. Unlike overall survival, surrogate endpoints are less affected by treatment crossover at the time of disease progression, which may complicate the detection of differences in overall survival between therapies.5

Despite the potential advantages of surrogate endpoints, it is essential to exercise caution when interpreting trial results that rely on them. A positive outcome measured with a surrogate endpoint does not guarantee an overall survival benefit6 or improved quality-of-life outcomes.7 In fact, novel treatments associated with improved progression-free survival may be associated with greater cost and toxicity compared to the standard of care.8 In many cases, the correlation between surrogate endpoints and overall survival may not have been adequately validated before regulatory approval.9,10 Several factors contribute to the lack of association between surrogate and overall survival endpoints, including the diluting effect of crossover at the time of progression,5,11 heterogeneity in how progression is determined and evaluated, and informative censoring that may bias the results of surrogate endpoint comparisons.12 In the subsequent sections, we discuss several of these issues and their implications for late-phase trial design and interpretation.

Assessing tumor progression: the role of blinded independent central review

Trials that rely on radiographic progression of an existing lesion often use a standardized definition of disease progression, such as the Response Evaluation Criteria in Solid Tumors (RECIST).13 However, it is important to recognize that delaying tumor progression does not always correlate with prolonged survival or improved patient quality of life.14 Furthermore, despite standardized criteria to define progression, even for specific agents such as immunotherapy, inter-rater variability exists,15,16 and the potential for investigator bias in open-label trials, however well-intentioned the investigators, may also confound results.17

To address these concerns, it is usually recommended that trials incorporate blinded independent central review (BICR) of imaging scans to mitigate inter-rater variability, especially when blinding investigators or patients to the treatment assignment is not feasible. In trials utilizing BICR, an independent evaluator, unaware of the treatment assignment or other patient information, determines whether progression has occurred. However, the implementation of BICR presents financial and logistical challenges, and it is uncertain whether BICR fully eliminates systematic bias.16 In fact, BICR may introduce different forms of bias into survival analyses. For instance, if a local investigator identifies progression before the central reviewer does, the patient may discontinue the study. That patient would then be censored before progression is determined by BICR, despite being more likely to experience progression at the next BICR assessment than other patients.18 This phenomenon is an example of informative censoring, a type of bias explored in greater detail in the next section.

Informative censoring biases

Informative censoring is another factor that can weaken the association between surrogate endpoints and overall survival. Censoring is an important aspect of Kaplan-Meier survival analyses, which are frequently used in trials with time-to-event endpoints. Censoring occurs when a patient does not experience the event of interest during the follow-up period; it is assumed to be non-informative, meaning that a censored patient's risk of experiencing the event is similar to that of patients who remain on study with continued follow-up. However, the presence of informative censoring can distort survival analyses by artificially influencing outcomes in a particular study arm.19

Surrogate endpoints are more susceptible to informative censoring bias than overall survival. For example, consider an experimental treatment with a higher toxicity profile than the control treatment. In this scenario, patients receiving the experimental therapy may be more likely to discontinue or withdraw from the study before experiencing progression, leading to informative censoring of the surrogate endpoint. However, these patients would not be censored in the assessment of overall survival, as determining the date of death is usually more straightforward than determining the date of progression, which often requires dedicated follow-up and imaging scans.
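To make this concrete, the following is a minimal simulation sketch, not drawn from any trial; all numbers are invented for illustration and the Python lifelines package is assumed to be available. It shows how censoring patients shortly before they would have progressed (e.g., toxicity-driven dropout) inflates the apparent progression-free survival estimated by the Kaplan-Meier method.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 5000
true_pfs = rng.exponential(scale=12.0, size=n)   # hypothetical true times to progression (months)
admin_cutoff = 24.0                              # administrative censoring at 24 months

# Scenario 1: non-informative (administrative) censoring only
time_ni = np.minimum(true_pfs, admin_cutoff)
event_ni = true_pfs <= admin_cutoff

# Scenario 2: informative censoring -- 30% of patients (e.g., toxicity-driven dropouts)
# leave the study one month before they would have progressed and are censored then
dropout = rng.random(n) < 0.30
cens_time = np.where(dropout, np.maximum(true_pfs - 1.0, 0.1), admin_cutoff)
time_inf = np.minimum(true_pfs, cens_time)
event_inf = true_pfs <= cens_time

kmf = KaplanMeierFitter()
kmf.fit(time_ni, event_ni)
print("Median PFS, non-informative censoring:", round(kmf.median_survival_time_, 1))
kmf.fit(time_inf, event_inf)
print("Median PFS, informative censoring:    ", round(kmf.median_survival_time_, 1))
# The second estimate is biased upward: censored patients were about to progress,
# violating the assumption that censoring is unrelated to event risk.
```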

Similarly, in trials with suboptimal control arm therapies,20 some argue that patients may be more prone to early study discontinuation due to disappointment with their randomization. Healthier or better-resourced patients may be more willing and able to drop out, leading to the premature removal of patients with more favorable prognostic outcomes from the control arm. Consequently, this may negatively impact the observed outcomes in the control arm, resulting in an exaggeration of the benefit associated with the experimental therapy.21

Further research is needed to better understand the impact of informative censoring on surrogate endpoint outcomes. In the interim, routinely assessing for the presence of informative censoring and ensuring that patients in the control arm have access to standard-of-care therapies may help mitigate the potential influence of informative censoring on surrogate endpoint outcomes. Additionally, exploring alternative surrogate endpoints that incorporate treatment harm from toxicity or early study discontinuation, such as treatment failure analysis,22 may prove valuable.

Proportional hazards violations and alternative survival measures

The log-rank test is the most commonly utilized statistical test for evaluating treatment efficacy in RCTs. The log-rank test evaluates whether there is a difference in the probability of survival at any time point23 between groups. However, it does not provide information about effect size or which treatment is better. To address this, the hazard ratio is typically calculated, comparing the hazard functions of the experimental and control study arms. The hazard ratio estimated by Cox regression relies on the assumption of proportional hazards (PH), which holds that the relative hazards of the comparison groups remain constant over the follow-up period.
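As a concrete illustration of this standard workflow, the sketch below uses simulated data and arbitrary parameters with the Python lifelines package (it is not code from any trial): a log-rank test for any difference between arms, followed by a Cox model to obtain the hazard ratio as the effect-size estimate.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 400
arm = rng.integers(0, 2, size=n)                                     # 0 = control, 1 = experimental
event_time = rng.exponential(scale=np.where(arm == 1, 14.0, 10.0))   # months, assumed scales
cens_time = rng.uniform(6.0, 36.0, size=n)                           # random administrative censoring
df = pd.DataFrame({
    "time": np.minimum(event_time, cens_time),
    "event": (event_time <= cens_time).astype(int),
    "arm": arm,
})

# Log-rank test: is there evidence of any survival difference between the arms?
res = logrank_test(
    df.loc[df.arm == 1, "time"], df.loc[df.arm == 0, "time"],
    event_observed_A=df.loc[df.arm == 1, "event"],
    event_observed_B=df.loc[df.arm == 0, "event"],
)
print("Log-rank p-value:", res.p_value)

# Cox proportional hazards model: quantifies the size and direction of the difference,
# assuming the hazard ratio is constant over follow-up.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print("Hazard ratio (experimental vs control):", cph.hazard_ratios_["arm"])
```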

In oncology trials, the PH assumption is routinely violated,24,25 and under conditions of non-proportional hazards (NPH), the hazard ratio has an unclear interpretation.26 Trials assessing immunotherapy appear to have particularly high rates of non-proportional hazards,27,28 as immunotherapy often exhibits a delayed treatment effect, presenting challenges to the interpretation of newer trials that increasingly incorporate these agents.29,30 Perhaps the most recognizable scenario of NPH is the crossing of treatment effects, also known as crossing hazards, in which the direction of effect changes over the course of the follow-up period.31 A decaying treatment effect, in which the treatment effect appears to decline over time, may also lead to non-proportional hazards.
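Because such violations are common, a routine diagnostic step is worth building into the analysis plan. The sketch below is illustrative only (simulated data with an assumed delayed treatment effect, using lifelines): it fits a Cox model and applies a Schoenfeld-residual-based test for time-varying effects, one common way of flagging NPH.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(2)
n = 600
arm = rng.integers(0, 2, size=n)

# Simulate a delayed treatment effect: the experimental arm behaves like control
# for the first 6 months, after which its event times are stretched out.
t = rng.exponential(scale=10.0, size=n)
t = np.where((arm == 1) & (t > 6.0), 6.0 + (t - 6.0) * 1.8, t)
cens = rng.uniform(6.0, 36.0, size=n)
df = pd.DataFrame({"time": np.minimum(t, cens),
                   "event": (t <= cens).astype(int),
                   "arm": arm})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# Schoenfeld-residual-based test: a small p-value suggests the hazard ratio is not constant.
ph = proportional_hazard_test(cph, df, time_transform="rank")
ph.print_summary()

# lifelines can also print plain-language diagnostics and suggested remedies
# (stratification, time-varying coefficients, alternative estimands).
cph.check_assumptions(df, p_value_threshold=0.05)
```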

While the log-rank test does not require the PH assumption, it may have reduced power to detect treatment differences under NPH conditions.32–34 Alternative survival estimands have been proposed to overcome these limitations. One such alternative is the difference in restricted mean survival time (RMST),35,36 which represents the area under the survival function up to a specific time point. Reconstructed patient-level data from the CELESTIAL trial37 (NCT01908426) are shown in Figure 1 to illustrate the RMST. RMST provides a patient-centric description of treatment efficacy and the magnitude of treatment benefit in a single test.38 Mean survival time may offer a more precise measure of survival than median survival,39 as it incorporates information across the follow-up period rather than a single point in time. RMST is particularly useful when events are scarce, as the hazard ratio loses precision in such situations.40 Another test, MaxCombo, incorporates the standard log-rank test and a series of additional weighted log-rank tests designed for different scenarios of NPH, such as a delayed treatment effect, before selecting the best-performing test.27 However, the use of multiple weighted tests may increase the risk of type I error.41 Other proposed alternatives include the integrated log-rank test,42 flexible parametric cure models,43 and Bayesian inference.44 These alternative approaches offer valuable options to address NPH and may provide a more comprehensive and valid interpretation of treatment efficacy in oncology trials.
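As a worked sketch of the RMST idea (simulated data, an arbitrary 36-month restriction time, and the lifelines package; this is not a reanalysis of CELESTIAL), the code below estimates each arm's Kaplan-Meier curve, integrates it up to tau, and reports the difference as months of survival gained on average within that window.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time

rng = np.random.default_rng(3)
n, tau = 500, 36.0                                    # tau = pre-specified restriction time (months)

def simulate_arm(scale):
    t = rng.exponential(scale=scale, size=n)          # hypothetical event times
    c = rng.uniform(12.0, 48.0, size=n)               # administrative censoring
    return np.minimum(t, c), (t <= c).astype(int)

time_exp, event_exp = simulate_arm(scale=15.0)        # experimental arm (assumed scale)
time_ctl, event_ctl = simulate_arm(scale=10.0)        # control arm (assumed scale)

km_exp = KaplanMeierFitter().fit(time_exp, event_exp, label="experimental")
km_ctl = KaplanMeierFitter().fit(time_ctl, event_ctl, label="control")

rmst_exp = restricted_mean_survival_time(km_exp, t=tau)   # area under the KM curve up to tau
rmst_ctl = restricted_mean_survival_time(km_ctl, t=tau)
print(f"RMST to {tau:.0f} months: experimental {rmst_exp:.1f} vs control {rmst_ctl:.1f}")
print(f"Difference: {rmst_exp - rmst_ctl:.1f} months gained on average by {tau:.0f} months")
```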

Figure 1:

Reconstructed Kaplan-Meier curves of overall survival from the CELESTIAL trial (NCT01908426) comparing cabozantinib to placebo in patients with hepatocellular carcinoma (A), with the restricted mean survival time of the cabozantinib arm (B) depicted by the area under the curve up to 36 months.

Superiority and Non-inferiority Trials: Differences in design and interpretation

Phase 3 RCTs are most often designed to assess the superiority of experimental therapy compared to the standard of care. However, in certain scenarios where the new therapy offers advantages such as convenience, cost-effectiveness, or reduced toxicity, non-inferiority trials are employed to determine if the new therapy is not meaningfully worse than the standard of care. Non-inferiority radiotherapy trials comparing hypofractionated regimens to conventional fractionation feature prominently in the literature for breast cancer,45 prostate cancer,46,47 and other indications.

There are several key differences in the design of trials assessing superiority versus non-inferiority. First, non-inferiority trials generally require larger sample sizes than superiority trials. Determining the sample size for a non-inferiority assessment involves establishing the non-inferiority margin, which represents the acceptable level of detriment of the experimental therapy compared to the standard therapy. While the non-inferiority margin should always be set below the expected benefit from the standard therapy, defining this margin is often quite controversial and challenging given a degree of inherent subjectivity and interpretation.48,49 Detecting smaller treatment differences necessitates larger sample sizes, contributing to the increased cost and resources required for non-inferiority trials. Second, the statistical analyses used in superiority and non-inferiority assessments may differ. In superiority trials, intention-to-treat analyses are typically preferred to provide a conservative estimate of treatment effect by accounting for factors that may obscure treatment differences, such as patient withdrawal, protocol deviations, or non-compliance.50,51 In non-inferiority assessments, intention-to-treat analyses may increase the likelihood of observing a non-inferior result through the same washing out of treatment differences; thus, per-protocol analyses are more likely to also be included in the statistical analysis plan of non-inferiority trials.52
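To illustrate why detecting smaller differences drives up sample size, the sketch below uses Schoenfeld's approximation for the number of events required in a time-to-event comparison; the hazard-ratio target (0.75) and non-inferiority margin (1.15) are invented for illustration and are not drawn from any of the trials discussed here.

```python
import numpy as np
from scipy.stats import norm

def schoenfeld_events(log_hr_detect, alpha_one_sided=0.025, power=0.80, alloc=0.5):
    """Approximate number of events needed to detect a given log hazard-ratio difference."""
    z_alpha = norm.ppf(1 - alpha_one_sided)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) ** 2 / (alloc * (1 - alloc) * log_hr_detect ** 2)

# Superiority design: detect a true hazard ratio of 0.75 versus 1.0
events_superiority = schoenfeld_events(np.log(0.75))

# Non-inferiority design: exclude a margin of HR = 1.15, assuming the true HR is 1.0
events_noninferiority = schoenfeld_events(np.log(1.15))

print(f"Events for superiority (HR 0.75):            ~{events_superiority:.0f}")
print(f"Events for non-inferiority (margin HR 1.15): ~{events_noninferiority:.0f}")
# Because log(1.15) is much smaller in magnitude than log(0.75), the non-inferiority
# design requires several times more events (and hence patients or follow-up).
```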

The choice between superiority and non-inferiority designs can have profound consequences for the interpretation of trial results. For example, the superiority of twice-daily (BID) fractionation compared to once-daily fractionation for limited-stage small cell lung cancer was established by RTOG 8815.53 However, the study did not account for differences in the biologically effective dose of radiation between the two arms, resulting in a sub-optimal biologically effective dose in the daily fractionation arm. The subsequent CONVERT study54 was a superiority RCT comparing the standard-of-care BID regimen to 66 Gy in 33 daily fractions, but it did not demonstrate that the once-daily regimen was superior to the BID regimen. Similarly, the CALGB 30610 / RTOG 0538 superiority RCT comparing 70 Gy given once daily versus standard-of-care BID did not demonstrate improved overall survival associated with the dose-escalated arm,55 concluding that BID fractionation remains the standard of care. Despite these results, many clinicians continue to use daily fractionation,56 possibly due to the logistical challenges for patients and physicians associated with the BID regimen. It is important to note that there are clear methodological pitfalls in inferring that the two arms of either CONVERT or RTOG 0538 have similar efficacy, as neither trial was powered to assess non-inferiority. Nevertheless, it is notable that the next generation of phase III clinical trials for limited-stage small cell lung cancer, including NRG-LU005 (NCT03811002) and ADRIATIC, allow for either fractionation regimen, as do the NCCN guidelines.57 This uncertainty highlights the challenges that trialists face in the trial design process and that clinicians face in interpreting imperfect data from well-accrued negative superiority-design studies. Overall, careful consideration of trial design and proper interpretation of results are crucial to informing treatment decisions and advancing patient care for both superiority and non-inferiority trials.

Bayesian statistical inference in cancer trials

Traditionally, oncology trials have employed a frequentist approach to statistical inference, utilizing p-values and 95% confidence intervals to compare treatment efficacy. A p-value indicates the probability of obtaining the observed result, or a more extreme result, if the null hypothesis (i.e., no effect of a treatment on survival) were true. The conventional threshold for rejecting the null hypothesis is a p-value less than 0.05. However, there are numerous issues associated with the use and interpretation of p-values.58–60 Importantly, the p-value does not indicate the probability of the study hypothesis being true or false, nor does it describe the likelihood that the data were produced solely by random chance. Large p-values do not provide evidence of no effect, and p-values do not offer insights into effect size or clinical significance.
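A small simulation can make the first point concrete. In the sketch below, all numbers are invented (10% of tested therapies are assumed truly effective, with a fixed effect size and simple two-arm comparisons): a substantial fraction of "statistically significant" results nonetheless comes from ineffective therapies, illustrating that p < 0.05 is not the probability that the null hypothesis is true.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_trials, n_per_arm = 20000, 100
truly_effective = rng.random(n_trials) < 0.10            # assume only 10% of therapies work
effect = np.where(truly_effective, 0.5, 0.0)             # standardized benefit when real

control = rng.normal(0.0, 1.0, size=(n_trials, n_per_arm))
treated = rng.normal(effect[:, None], 1.0, size=(n_trials, n_per_arm))
p = stats.ttest_ind(treated, control, axis=1).pvalue     # one p-value per simulated trial

significant = p < 0.05
share_null = np.mean(~truly_effective[significant])
print(f"Share of 'significant' trials where the therapy was ineffective: {share_null:.0%}")
```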

In contrast to the frequentist approach of assigning probabilities to data, Bayesian inference assigns probabilities to hypotheses by incorporating priors, composed of information known or believed about the efficacy of a given treatment independent of the study.61,62 This information can come from published data, expert opinion, or biological plausibility. To account for the subjectivity of and potential differences between priors, Bayesian analyses typically utilize multiple potential priors, including a non-informative prior, an "enthusiastic" prior that assigns a higher probability to a clinically meaningful effect, and a "skeptical" prior that assigns a lower probability to a clinically meaningful effect.63,64 Priors are then combined with the likelihood, representing the data from the study participants, to obtain the posterior. All three (prior, likelihood, and posterior) are represented by probability distributions. Bayesian inference makes transparent the assumptions that, with frequentist approaches, are applied implicitly when interpreting trial data. Bayesian analyses are particularly valuable when few patients can feasibly be enrolled and analyzed, as is often the case in the study of rare diseases.65 Additionally, Bayesian inference may be used in adaptive clinical trials, incorporating accumulating data during the trial to make more efficient inferences.66,67 Yet, despite the potential advantages of Bayesian inference, its utilization in phase 3 cancer trials has been limited.68 Challenges such as unfamiliarity with the approach among trialists and statisticians, the need for coordination in updating and communicating trial results in adaptive designs,69 and uncertainty among clinicians in interpreting and applying results from Bayesian-designed trials may contribute to the slow adoption of Bayesian approaches. Nonetheless, Bayesian methods have the potential to enhance trial efficiency, interpretation, and even validity, especially for adaptive trial designs. Addressing familiarity and logistical challenges could facilitate wider implementation of Bayesian approaches in late-phase cancer trials.
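The sketch below illustrates this prior-to-posterior logic with the common normal approximation on the log hazard ratio. All numbers are hypothetical (an observed HR of 0.80 with standard error 0.10, and illustrative non-informative, skeptical, and enthusiastic priors); the posterior is simply the precision-weighted combination of prior and data.

```python
import numpy as np
from scipy.stats import norm

def posterior_log_hr(prior_mean, prior_sd, obs_log_hr, obs_se):
    """Conjugate normal update: precision-weighted average of prior belief and trial data."""
    w_prior, w_data = 1.0 / prior_sd**2, 1.0 / obs_se**2
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * obs_log_hr)
    return post_mean, np.sqrt(post_var)

obs_log_hr, obs_se = np.log(0.80), 0.10                 # hypothetical trial result: HR 0.80

priors = {                                              # (mean, sd) on the log-HR scale
    "non-informative": (0.0, 10.0),                     # essentially flat
    "skeptical":       (0.0, 0.10),                     # centered on no effect, fairly tight
    "enthusiastic":    (np.log(0.70), 0.15),            # expects a clinically meaningful benefit
}

for name, (m, s) in priors.items():
    post_m, post_s = posterior_log_hr(m, s, obs_log_hr, obs_se)
    prob_benefit = norm.cdf(0.0, loc=post_m, scale=post_s)   # Pr(HR < 1 | data, prior)
    print(f"{name:15s} posterior HR {np.exp(post_m):.2f}, Pr(benefit) = {prob_benefit:.2f}")
```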

Interpretation of Phase II and Phase III Trials

Phase III clinical trials are widely regarded as the gold standard for establishing and changing the standard of care. However, conducting and completing phase III trials presents numerous challenges, including significant costs, time investment, and resource requirements. Many phase III trials fail to accrue patients, and in fact one in 15 phase III trials does not reach publication.70 Furthermore, the relevance of the clinical question posed at the trial's initiation may be affected by advances in other fields by the time the trial concludes. An example is RTOG 1112, which randomized patients with unresectable hepatocellular carcinoma to sorafenib, the standard of care at the time of trial initiation in 2012, with or without stereotactic body radiotherapy.71,72 While overall survival and progression-free survival benefits were demonstrated with the addition of stereotactic body radiotherapy to sorafenib, the optimal management of unresectable hepatocellular carcinoma remains uncertain, as standard-of-care systemic therapy changed from sorafenib to atezolizumab plus bevacizumab between RTOG 1112 initiation in 2012 and trial reporting in 2022.73 These challenges raise important questions for oncologists, payors, and national guideline committees: is a phase III trial necessary to change the standard of care? When is it justifiable to change the standard of care based on phase II study findings? Under what circumstances, if any, is it even desirable to do so? How do the endpoints, design, and clinical context of the clinical question affect these considerations?

Advocates of changing the standard of care based on phase II data emphasize the importance of timely delivery of new therapies to patients, particularly in settings with high mortality rates. An accelerated pathway for FDA drug approval has been in place for three decades, leading to approval of dozens of drugs based on phase II data without phase III confirmation.74,75 Additionally, improvements in central quality assurance across surgical and radiotherapy trials have enhanced the credibility of modern phase II trials compared to historical phase III studies.76 Moreover, the application and extrapolation of evidence to patient care is necessarily influenced by study design. It is worth noting that 70% of oncology drugs approved by the FDA rely on phase III trials with surrogate primary endpoints, rather than what many argue are more patient-centered endpoints, such as overall survival or quality of life.77,78 Should the norm in oncology be to change the standard of care based on phase III studies that improve only overall response rate, rather than on phase II trials that improve overall survival? In situations where a well-conducted phase II randomized study demonstrates a survival benefit (especially if the survival benefit occurs in the setting of cross-over), there may be a perceived lack of equipoise and limited enthusiasm among oncologists and patients for a randomized phase III study.79 Other design factors, such as patient selection criteria and the treatments used in the control arm, can significantly impact interpretability.20,80–84 Therefore, it is essential to consider whether a poorly designed phase III trial should hold greater evidentiary weight than a well-conducted phase II study. Furthermore, phase II data often inform changes in the standard of care for rare conditions where conducting and completing a phase III trial may be unfeasible. An example is the phase II RTOG 0529 trial, which definitively changed the standard-of-care approach to chemoradiation in anal squamous cell carcinoma from three-dimensional conformal radiotherapy to intensity-modulated radiotherapy.85 RTOG 0529 also illustrates two even more controversial considerations for changing the standard of care based on phase II data: RTOG 0529 did not meet its primary endpoint (acute grade ≥ 2 gastrointestinal and genitourinary toxicity), and RTOG 0529 was a single-arm trial comparing outcomes to historical controls from another study (RTOG 9811).86

While RTOG 0529 successfully changed the standard of care, there are strong criticisms regarding the use of phase II data to drive such changes. One key concern is that single-arm phase II designs lack a controlled comparison between the experimental treatment and the standard of care. These designs are typically intended to estimate effect sizes for powering subsequent phase III studies, but when they replace the phase III trial, the absence of randomization introduces several uncontrolled statistical biases that can adversely impact interpretation. One well-known bias is the Will Rogers phenomenon, wherein improvements in diagnostic modalities lead to stage migration, resulting in apparent improvements in clinical outcomes even if the experimental treatment is not genuinely superior to the historical treatment.87 Furthermore, even if a phase II study is randomized, there is a risk of serious underestimation or overestimation of the effect size depending on how investigators power the study to control for type I and type II errors. In particular, powering a non-inferiority design with a phase II sample size carries the greatest risk of type II error. It is important to recognize that the majority of drugs meeting the primary endpoint in phase II trials ultimately fail to demonstrate primary endpoint efficacy in phase III, underscoring the need for careful consideration when using phase II data to inform changes in the standard of care.88

Ultimately, given the above, while there may be exceptional circumstances in which changing the standard of care based on phase II data is reasonable, phase III trials should continue to serve as the gold standard for guiding standard-of-care decisions. Oncologists should prioritize the most evidence-based care for their patients. Innovative trial designs, such as adaptive phase II/III platforms or multi-arm umbrella studies, are gaining popularity due to time and cost advantages compared to the traditional sequence of phase II followed by phase III trials.89,90 Further exploration and study of these innovative design platforms are warranted.

Conclusions

Late-phase randomized controlled trials play a crucial role in introducing therapies that have the potential to enhance and lengthen the lives of patients with cancer. It is imperative to approach the design, execution, and interpretation of these studies with careful deliberation. This article has highlighted several considerations and potential challenges associated with late-phase cancer trials. To improve the interpretation of trial results and their application to patient care, ongoing education of oncologists is necessary. Furthermore, continued research is warranted to gain a deeper understanding of these issues and to propose innovative solutions that can enhance cancer clinical trial design. Addressing these concerns can continue to advance the field of oncology and optimize late-phase trials for the benefit of cancer patients.


References

1. Prasad V, Mailankody S. Research and Development Spending to Bring a Single Cancer Drug to Market and Revenues After Approval. JAMA Intern Med. 2017;177(11):1569–1575. doi: 10.1001/jamainternmed.2017.3601
2. Chen EY, Haslam A, Prasad V. FDA Acceptance of Surrogate End Points for Cancer Drug Approval: 1992–2019. JAMA Intern Med. 2020;180(6):912–914. doi: 10.1001/jamainternmed.2020.1097
3. Chen EY, Joshi SK, Tran A, Prasad V. Estimation of Study Time Reduction Using Surrogate End Points Rather Than Overall Survival in Oncology Clinical Trials. JAMA Intern Med. 2019;179(5):642–647. doi: 10.1001/jamainternmed.2018.8351
4. Lin TA, Fuller CD, Verma V, et al. Trial Sponsorship and Time to Reporting for Phase 3 Randomized Cancer Clinical Trials. Cancers (Basel). 2020;12(9):2636.
5. Haslam A, Prasad V. When is crossover desirable in cancer drug trials and when is it problematic? Ann Oncol. 2018;29(5):1079–1081. doi: 10.1093/annonc/mdy116
6. Pasalic D, McGinnis GJ, Fuller CD, et al. Progression-free survival is a suboptimal predictor for overall survival among metastatic solid tumour clinical trials. Eur J Cancer. 2020;136:176–185. doi: 10.1016/j.ejca.2020.06.015
7. Hwang TJ, Gyawali B. Association between progression-free survival and patients' quality of life in cancer clinical trials. Int J Cancer. 2019;144(7):1746–1751. doi: 10.1002/ijc.31957
8. Booth CM, Eisenhauer EA. Progression-free survival: meaningful or simply measurable? J Clin Oncol. 2012;30(10):1030–1033. doi: 10.1200/JCO.2011.38.7571
9. Kim C, Prasad V. Strength of Validation for Surrogate End Points Used in the US Food and Drug Administration's Approval of Oncology Drugs. Mayo Clin Proc. Published online 2016. doi: 10.1016/j.mayocp.2016.02.012
10. Hess LM, Brnabic A, Mason O, Lee P, Barker S. Relationship between Progression-free Survival and Overall Survival in Randomized Clinical Trials of Targeted and Biologic Agents in Oncology. J Cancer. 2019;10(16):3717–3727. doi: 10.7150/jca.32205
11. Hashim M, Pfeiffer BM, Bartsch R, Postma M, Heeg B. Do Surrogate Endpoints Better Correlate with Overall Survival in Studies That Did Not Allow for Crossover or Reported Balanced Postprogression Treatments? An Application in Advanced Non–Small Cell Lung Cancer. Value Health. 2018;21(1):9–17. doi: 10.1016/j.jval.2017.07.011
12. Gilboa S, Pras Y, Mataraso A, Bomze D, Markel G, Meirson T. Informative censoring of surrogate end-point data in phase 3 oncology trials. Eur J Cancer. 2021;153:190–202. doi: 10.1016/j.ejca.2021.04.044
13. Eisenhauer EA, Therasse P, Bogaerts J, et al. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer. 2009;45(2):228–247. doi: 10.1016/j.ejca.2008.10.026
14. Villaruz LC, Socinski MA. The clinical viewpoint: definitions, limitations of RECIST, practical considerations of measurement. Clin Cancer Res. 2013;19(10):2629–2636. doi: 10.1158/1078-0432.CCR-12-2935
15. Yoon SH, Kim KW, Goo JM, Kim D-W, Hahn S. Observer variability in RECIST-based tumour burden measurements: a meta-analysis. Eur J Cancer. 2016;53:5–15. doi: 10.1016/j.ejca.2015.10.014
16. Jianrong Z, Yiyin Z, Shiyan T, et al. Systematic bias between blinded independent central review and local assessment: literature review and analyses of 76 phase III randomised controlled trials in 45 688 patients with advanced solid tumour. BMJ Open. 2018;8(9):e017240. doi: 10.1136/bmjopen-2017-017240
17. Seymour L, Bogaerts J, Perrone A, et al. iRECIST: guidelines for response criteria for use in trials testing immunotherapeutics. Lancet Oncol. 2017;18(3):e143–e152. doi: 10.1016/S1470-2045(17)30074-8
18. Dodd LE, Korn EL, Freidlin B, et al. Blinded independent central review of progression-free survival in phase III clinical trials: important design element or unnecessary expense? J Clin Oncol. 2008;26(22):3791–3796. doi: 10.1200/jco.2008.16.1711
19. Ranganathan P, Pramesh CS. Censoring in survival analysis: Potential for bias. Perspect Clin Res. 2012;3(1):40. doi: 10.4103/2229-3485.92307
20. Hilal T, Sonbol MB, Prasad V. Analysis of Control Arm Quality in Randomized Clinical Trials Leading to Anticancer Drug Approval by the US Food and Drug Administration. JAMA Oncol. 2019;5(6):887–892. doi: 10.1001/jamaoncol.2019.0167
21. Meirson T, Markel G, Prasad V, Goodman AM, Mohyuddin GR. Post-protocol therapy and informative censoring in the CANDOR study. Lancet Oncol. 2022;23(3):e97. doi: 10.1016/s1470-2045(22)00075-4
22. Huang B, Sun R, Claggett B, Tian L, Ludmir EB, Wei L-J. Handling Informative Premature Treatment or Study Discontinuation for Assessing Between-Group Differences in a Comparative Oncology Trial. JAMA Oncol. 2022;8(10):1502–1503. doi: 10.1001/jamaoncol.2022.2394
23. Bentzen SM, Vogelius IR. Using and Understanding Survival Statistics - or How We Learned to Stop Worrying and Love the Kaplan-Meier Estimate. Int J Radiat Oncol Biol Phys. 2023;115(4):839–846. doi: 10.1016/j.ijrobp.2022.11.035
24. Lin T, Koong A, Lin C, et al. Incidence and impact of proportional hazards violations in phase 3 cancer clinical trials. J Clin Oncol. 2022;40(16_suppl):1561. doi: 10.1200/JCO.2022.40.16_suppl.1561
25. Ludmir EB, McCaw ZR, Kim DH, Tian L, Wei L-J. Fulvestrant plus capivasertib for metastatic breast cancer. Lancet Oncol. 2020;21(5):e233. doi: 10.1016/S1470-2045(20)30228-X
26. Uno H, Claggett B, Tian L, et al. Moving beyond the hazard ratio in quantifying the between-group difference in survival analysis. J Clin Oncol. 2014;32(22):2380–2385. doi: 10.1200/JCO.2014.55.2208
27. Mukhopadhyay P, Ye J, et al. Log-Rank Test vs MaxCombo and Difference in Restricted Mean Survival Time Tests for Comparing Survival Under Nonproportional Hazards in Immuno-oncology Trials: A Systematic Review and Meta-analysis. JAMA Oncol. Published online July 21, 2022. doi: 10.1001/JAMAONCOL.2022.2666
28. Rahman R, Fell G, Ventz S, et al. Deviation from the Proportional Hazards Assumption in Randomized Phase 3 Clinical Trials in Oncology: Prevalence, Associated Factors, and Implications. Clin Cancer Res. 2019;25(21):6339–6345. doi: 10.1158/1078-0432.CCR-18-3999
29. Alexander BM, Schoenfeld JD, Trippa L. Hazards of Hazard Ratios — Deviations from Model Assumptions in Immunotherapy. N Engl J Med. 2018;378(12):1158–1159. doi: 10.1056/nejmc1716612
30. Ludmir EB, McCaw ZR, Grossberg AJ, Wei L-J, Fuller CD. Quantifying the benefit of non-small-cell lung cancer immunotherapy. Lancet. 2019;394(10212):1904. doi: 10.1016/S0140-6736(19)32503-6
31. Mantel N, Stablein DM. The Crossing Hazard Function Problem. J R Stat Soc Ser D (The Statistician). 1988;37(1):59–64. doi: 10.2307/2348379
32. Freidlin B, Korn EL. Methods for Accommodating Nonproportional Hazards in Clinical Trials: Ready for the Primary Analysis? J Clin Oncol. 2019;37(35):3455–3459. doi: 10.1200/JCO.19.01681
33. Zhao L, Tian L, Uno H, et al. Utilizing the integrated difference of two survival functions to quantify the treatment contrast for designing, monitoring, and analyzing a comparative clinical study. Clin Trials. 2012;9(5):570–577. doi: 10.1177/1740774512455464
34. Uno H, Tian L, Claggett B, Wei LJ. A versatile test for equality of two survival functions based on weighted differences of Kaplan-Meier curves. Stat Med. 2015;34(28):3680–3695. doi: 10.1002/sim.6591
35. Royston P, Parmar MK. Restricted mean survival time: An alternative to the hazard ratio for the design and analysis of randomized trials with a time-to-event outcome. BMC Med Res Methodol. 2013;13(1):1–15. doi: 10.1186/1471-2288-13-152
36. Pak K, Uno H, Kim DH, et al. Interpretability of Cancer Clinical Trial Results Using Restricted Mean Survival Time as an Alternative to the Hazard Ratio. JAMA Oncol. 2017;3(12):1692–1696. doi: 10.1001/jamaoncol.2017.2797
37. Abou-Alfa GK, Meyer T, Cheng A-L, et al. Cabozantinib in Patients with Advanced and Progressing Hepatocellular Carcinoma. N Engl J Med. 2018;379(1):54–63. doi: 10.1056/NEJMoa1717002
38. Ludmir EB, McCaw ZR, Fuller CD, Wei L-J. Progression-free survival in the ICON8 trial. Lancet. 2020;396(10253):756. doi: 10.1016/S0140-6736(20)31175-2
39. Das A, Lin TA, Lin C, et al. Assessment of Median and Mean Survival Time in Cancer Clinical Trials. JAMA Netw Open. 2023;6(4):e236498. doi: 10.1001/JAMANETWORKOPEN.2023.6498
40. Uno H, Wittes J, Fu H, et al. Alternatives to hazard ratios for comparing the efficacy or safety of therapies in noninferiority studies. Ann Intern Med. 2015;163(2):127–134. doi: 10.7326/M14-1741
41. Magirr D, Burman C-F. The MaxCombo Test Severely Violates the Type I Error Rate. JAMA Oncol. 2023;9(4):571–572. doi: 10.1001/jamaoncol.2022.7747
42. O'Quigley J. Testing for Differences in Survival When Treatment Effects Are Persistent, Decaying, or Delayed. J Clin Oncol. 2022;40(30):3537–3545. doi: 10.1200/JCO.21.01811
43. Filleron T, Bachelier M, Mazieres J, et al. Assessment of Treatment Effects and Long-term Benefits in Immune Checkpoint Inhibitor Trials Using the Flexible Parametric Cure Model: A Systematic Review. JAMA Netw Open. 2021;4(12):e2139573. doi: 10.1001/jamanetworkopen.2021.39573
44. Castañon E, Sanchez-Arraez Á, Jimenez-Fonseca P, et al. Bayesian interpretation of immunotherapy trials with dynamic treatment effects. Eur J Cancer. 2022;161:79–89. doi: 10.1016/j.ejca.2021.11.002
45. Whelan TJ, Pignol J-P, Levine MN, et al. Long-term results of hypofractionated radiation therapy for breast cancer. N Engl J Med. 2010;362(6):513–520. doi: 10.1056/NEJMoa0906260
46. Widmark A, Gunnlaugsson A, Beckman L, et al. Ultra-hypofractionated versus conventionally fractionated radiotherapy for prostate cancer: 5-year outcomes of the HYPO-RT-PC randomised, non-inferiority, phase 3 trial. Lancet. 2019;394(10196):385–395. doi: 10.1016/S0140-6736(19)31131-6
47. Dearnaley D, Syndikus I, Mossop H, et al. Conventional versus hypofractionated high-dose intensity-modulated radiotherapy for prostate cancer: 5-year outcomes of the randomised, non-inferiority, phase 3 CHHiP trial. Lancet Oncol. 2016;17(8):1047–1060. doi: 10.1016/S1470-2045(16)30102-4
48. Center for Drug Evaluation and Research; Center for Biologics Evaluation and Research. Non-Inferiority Clinical Trials to Establish Effectiveness — Guidance for Industry; 2016.
49. D'Agostino RBS, Massaro JM, Sullivan LM. Non-inferiority trials: design concepts and issues - the encounters of academic consultants in statistics. Stat Med. 2003;22(2):169–186. doi: 10.1002/sim.1425
50. Gupta SK. Intention-to-treat concept: A review. Perspect Clin Res. 2011;2(3):109–112. doi: 10.4103/2229-3485.83221
51. Sicklick JK, Kato S, Okamura R, Kurzrock R. Precision oncology: the intention-to-treat analysis fallacy. Eur J Cancer. 2020;133:25–28. doi: 10.1016/j.ejca.2020.04.002
52. Cuzick J, Sasieni P. Interpreting the results of noninferiority trials-a review. Br J Cancer. 2022;127(10):1755–1759. doi: 10.1038/s41416-022-01937-w
53. Turrisi AT 3rd, Kim K, Blum R, et al. Twice-daily compared with once-daily thoracic radiotherapy in limited small-cell lung cancer treated concurrently with cisplatin and etoposide. N Engl J Med. 1999;340(4):265–271. doi: 10.1056/NEJM199901283400403
54. Faivre-Finn C, Snee M, Ashcroft L, et al. Concurrent once-daily versus twice-daily chemoradiotherapy in patients with limited-stage small-cell lung cancer (CONVERT): an open-label, phase 3, randomised, superiority trial. Lancet Oncol. 2017;18(8):1116–1125. doi: 10.1016/S1470-2045(17)30318-2
55. Bogart J, Wang X, Masters G, et al. High-Dose Once-Daily Thoracic Radiotherapy in Limited-Stage Small-Cell Lung Cancer: CALGB 30610 (Alliance)/RTOG 0538. J Clin Oncol. 2023;41(13):2394–2402. doi: 10.1200/JCO.22.01359
56. Levy A, Hendriks LEL, Le Péchoux C, et al. Current management of limited-stage SCLC and CONVERT trial impact: Results of the EORTC Lung Cancer Group survey. Lung Cancer. 2019;136:145–147. doi: 10.1016/j.lungcan.2019.08.007
57. National Comprehensive Cancer Network. Small Cell Lung Cancer. doi: 10.1016/B978-0323-37753-9.50112-2
58. Colquhoun D. The reproducibility of research and the misinterpretation of p-values. R Soc Open Sci. 2017;4(12):171085. doi: 10.1098/rsos.171085
59. Goodman S. A dirty dozen: twelve p-value misconceptions. Semin Hematol. 2008;45(3):135–140. doi: 10.1053/j.seminhematol.2008.04.003
60. Nuzzo R. Scientific method: statistical errors. Nature. 2014;506(7487):150–152. doi: 10.1038/506150a
61. Fornacon-Wood I, Mistry H, Johnson-Hart C, Faivre-Finn C, O'Connor JPB, Price GJ. Understanding the Differences Between Bayesian and Frequentist Statistics. Int J Radiat Oncol Biol Phys. 2022;112(5):1076–1082. doi: 10.1016/j.ijrobp.2021.12.011
62. Adamina M, Tomlinson G, Guller U. Bayesian statistics in oncology: a guide for the clinical investigator. Cancer. 2009;115(23):5371–5381. doi: 10.1002/cncr.24628
63. Goodman SN. Toward evidence-based medical statistics. 2: The Bayes factor. Ann Intern Med. 1999;130(12):1005–1013. doi: 10.7326/0003-4819-130-12-199906150-00019
64. Spiegelhalter DJ, Freedman LS, Parmar MKB. Bayesian Approaches to Randomized Trials. J R Stat Soc Ser A Stat Soc. 1994;157(3):357–387. doi: 10.2307/2983527
65. Quintana M, Viele K, Lewis RJ. Bayesian Analysis: Using Prior Information to Interpret the Results of Clinical Trials. JAMA. 2017;318(16):1605–1606. doi: 10.1001/jama.2017.15574
66. Giovagnoli A. The Bayesian Design of Adaptive Clinical Trials. Int J Environ Res Public Health. 2021;18(2). doi: 10.3390/ijerph18020530
67. Angus DC, Alexander BM, Berry S, et al. Adaptive platform trials: definition, design, conduct and reporting considerations. Nat Rev Drug Discov. 2019;18(10):797–807. doi: 10.1038/s41573-019-0034-3
68. Fors M, González P. Current status of Bayesian clinical trials for oncology, 2020. Contemp Clin Trials Commun. 2020;20:100658. doi: 10.1016/j.conctc.2020.100658
69. Tidwell RSS, Thall PF, Yuan Y. Lessons Learned From Implementing a Novel Bayesian Adaptive Dose-Finding Design in Advanced Pancreatic Cancer. JCO Precis Oncol. 2021;(5):1719–1726. doi: 10.1200/PO.21.00212
70. Pasalic D, Fuller CD, Mainwaring W, et al. Detecting the Dark Matter of Unpublished Clinical Cancer Studies: An Analysis of Phase 3 Randomized Controlled Trials. Mayo Clin Proc. 2021;96(2):420–426. doi: 10.1016/j.mayocp.2020.08.015
71. Dawson LA, Winter KA, Knox JJ, et al. NRG/RTOG 1112: Randomized phase III study of sorafenib vs. stereotactic body radiation therapy (SBRT) followed by sorafenib in hepatocellular carcinoma (HCC). Published online 2023.
72. Llovet J, Ricci S, Mazzaferro V, et al. Sorafenib in Advanced Hepatocellular Carcinoma. N Engl J Med. 2008;359:378–390.
73. Finn RS, Qin S, Ikeda M, et al. Atezolizumab plus Bevacizumab in Unresectable Hepatocellular Carcinoma. N Engl J Med. 2020;382(20):1894–1905. doi: 10.1056/NEJMoa1915745
74. Johnson JR, Ning YM, Farrell A, Justice R, Keegan P, Pazdur R. Accelerated approval of oncology products: the food and drug administration experience. J Natl Cancer Inst. 2011;103(8):636–644. doi: 10.1093/jnci/djr062
75. Chabner B. Approval of New Agents after Phase II Trials. Am Soc Clin Oncol Educ Book. Published online 2012:e1–3. doi: 10.14694/EdBook_AM.2012.32.114
76. Corrigan KL, Kry S, Howell RM, et al. The radiotherapy quality assurance gap among phase III cancer clinical trials. Radiother Oncol. 2022;166:51–57. doi: 10.1016/j.radonc.2021.11.018
77. Abi Jaoude J, Kouzy R, Ghabach M, et al. Food and Drug Administration approvals in phase 3 Cancer clinical trials. BMC Cancer. 2021;21(1):695. doi: 10.1186/s12885-021-08457-5
78. Kemp R, Prasad V. Surrogate endpoints in oncology: when are they acceptable for regulatory and clinical decisions, and are they currently overused? BMC Med. 2017;15(1):134. doi: 10.1186/s12916-017-0902-9
79. AlHamaly MA, Alzoubi KH, Khabour OF, Jaber RA, Aldelaimy WK. Review of Clinical Equipoise: Examples from Oncology Trials. Curr Rev Clin Exp Pharmacol. 2023;18(1):22–30. doi: 10.2174/2772432817666211221164101
80. Patel RR, Parisi R, Verma V, et al. Association between Prior Malignancy Exclusion Criteria and Age Disparities in Cancer Clinical Trials. Cancers (Basel). 2022;14(4). doi: 10.3390/cancers14041048
81. Ludmir EB, Espinoza AF, Jethanandani A, et al. Incidence and correlates of HIV exclusion criteria in cancer clinical trials. Int J Cancer. 2020;146(8):2362–2364. doi: 10.1002/ijc.32800
82. Abi Jaoude J, Kouzy R, Mainwaring W, et al. Performance Status Restriction in Phase III Cancer Clinical Trials. J Natl Compr Canc Netw. 2020;18(10):1322–1326. doi: 10.6004/jnccn.2020.7578
83. Ludmir EB, Mainwaring W, Lin TA, et al. Factors associated with age disparities among cancer clinical trial participants. JAMA Oncol. 2019;5(12):1769–1773. doi: 10.1001/jamaoncol.2019.2055
84. Ludmir EB, Fuller CD, Moningi S, et al. Sex-Based Disparities Among Cancer Clinical Trial Participants. J Natl Cancer Inst. 2020;112(2):211–213. doi: 10.1093/jnci/djz154
85. Kachnic LA, Winter K, Myerson RJ, et al. RTOG 0529: a phase 2 evaluation of dose-painted intensity modulated radiation therapy in combination with 5-fluorouracil and mitomycin-C for the reduction of acute morbidity in carcinoma of the anal canal. Int J Radiat Oncol Biol Phys. 2013;86(1):27–33. doi: 10.1016/j.ijrobp.2012.09.023
86. Ajani JA, Winter KA, Gunderson LL, et al. Fluorouracil, mitomycin, and radiotherapy vs fluorouracil, cisplatin, and radiotherapy for carcinoma of the anal canal: a randomized controlled trial. JAMA. 2008;299(16):1914–1921. doi: 10.1001/jama.299.16.1914
87. Feinstein AR, Sosin DM, Wells CK. The Will Rogers Phenomenon. N Engl J Med. 1985;312(25):1604–1608. doi: 10.1056/nejm198506203122504
88. Takebe T, Imai R, Ono S. The Current Status of Drug Discovery and Development as Originated in United States Academia: The Influence of Industrial and Academic Collaboration on Drug Discovery and Development. Clin Transl Sci. 2018;11(6):597–606. doi: 10.1111/cts.12577
89. Korn EL, Freidlin B, Abrams JS, Halabi S. Design issues in randomized phase II/III trials. J Clin Oncol. 2012;30(6):667–671. doi: 10.1200/JCO.2011.38.5732
90. Carthon BC, Antonarakis ES. The STAMPEDE trial: paradigm-changing data through innovative trial design. Transl Cancer Res. 2016;5(3 Suppl):S485–S490. doi: 10.21037/tcr.2016.09.08
