Hey and Kimmelman dispute the purported ethical advantage of outcome-adaptive randomization in two-arm trials and early phase studies, while concluding that, compared to equal randomization, adaptive randomization might have a compelling ethical basis in multi-arm trials. They raise important considerations; however, many of their criticisms of adaptive randomization are broad and unjustified. Adaptive randomization deserves a more thorough evaluation.
Hey and Kimmelman use “ethics” to cover many things. It helps to discuss the usefulness of a trial design in terms of three categories: statistical properties, practical considerations, and real-life performance.
Statistical properties
A trial’s operating characteristics can be evaluated by theory and/or simulations. Optimal designs can be constructed theoretically given the design parameters and optimization criteria. For example, Neyman allocation minimizes the variance of the test statistic to maximize statistical power, whereas RSIHR allocation minimizes the total number of failures.1 While equal randomization is optimal under the null hypothesis of no difference in treatment efficacy, adaptive randomization tailors the allocation proportion to optimize specified statistical properties. Simulations can also be used to evaluate a trial’s operating characteristics. For example, Korn and Freidlin compared the number of non-responders under equal and adaptive randomization designs while controlling type I and II errors.2 They preferred equal randomization because adaptive randomization yielded more non-responders: the unequal arm sizes under adaptive randomization required a larger trial to reach the same power. In all settings, however, adaptive randomization allocates a higher percentage of patients to the better arm than does equal randomization. Lee et al.3 and Du et al.4 used simulations that consider the total number of patients available both in and beyond the trial. They preferred adaptive randomization when the efficacy difference between treatments is large or when the number of patients beyond the trial is small. There are inherent tensions between equal and adaptive randomization, but their differences diminish when futility and efficacy early stopping rules are implemented. Equal randomization emphasizes statistical power to benefit future patients by designing the smallest trial that definitively compares treatment efficacy (collective ethics). In contrast, adaptive randomization focuses on providing better treatment to the patients enrolled in the trial while still reaching the desired statistical power (individual ethics).
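For binary outcomes, the two allocation rules mentioned above can be written down in closed form. A minimal sketch (the 0.5 versus 0.2 response rates are hypothetical, chosen only for illustration):

```python
import math

def neyman_allocation(p1, p2):
    """Target proportion assigned to arm 1 under Neyman allocation for
    binary outcomes: allocation proportional to sqrt(p*q), which minimizes
    the variance of the estimated difference in response rates."""
    s1, s2 = math.sqrt(p1 * (1 - p1)), math.sqrt(p2 * (1 - p2))
    return s1 / (s1 + s2)

def rsihr_allocation(p1, p2):
    """Target proportion assigned to arm 1 under RSIHR allocation:
    allocation proportional to sqrt(p), which minimizes the expected
    number of failures for a fixed variance."""
    return math.sqrt(p1) / (math.sqrt(p1) + math.sqrt(p2))

# Hypothetical response rates of 0.5 (arm 1) vs 0.2 (arm 2):
print(round(neyman_allocation(0.5, 0.2), 3))  # 0.556
print(round(rsihr_allocation(0.5, 0.2), 3))   # 0.613
```

Note that the two criteria give different targets: RSIHR pushes more patients toward the better arm than Neyman allocation does, reflecting the power-versus-patient-benefit tension discussed above.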
The performance of equal and adaptive randomization has been compared under the concept of the patient horizon, i.e., the total number of patients with the disease under study.4–6 Both have also been evaluated in trials of targeted agents.7–9 Each design shows strengths and weaknesses in reaching desirable operating characteristics, and no single design prevails in all settings. From the viewpoint of statistical properties, investigators can choose an adaptive procedure that is optimal for multiple objectives, such as improving power and/or reducing expected treatment failures.10
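The simulation comparisons cited above can be sketched in a few lines. The following is a minimal illustration, not a reproduction of any published study: the response rates (0.2 versus 0.5), burn-in length, and Monte Carlo settings are all hypothetical choices. The adaptive arm uses a simple Bayesian rule, assigning to arm B with probability equal to the posterior probability that B has the higher response rate.

```python
import random

def run_trial(n, p_true, adaptive, seed, burn_in=20, draws=100):
    """Simulate one two-arm trial with binary outcomes.

    adaptive=False: equal (1:1) randomization throughout.
    adaptive=True:  after a burn-in, assign to arm B with probability
    equal to a Monte Carlo estimate of P(p_B > p_A | data) under
    independent Beta(1, 1) priors.
    Returns per-arm sample sizes and per-arm responder counts.
    """
    rng = random.Random(seed)
    counts, successes = [0, 0], [0, 0]
    for i in range(n):
        if adaptive and i >= burn_in:
            # Estimate P(p_B > p_A | data) by posterior sampling.
            wins = sum(
                rng.betavariate(1 + successes[1], 1 + counts[1] - successes[1])
                > rng.betavariate(1 + successes[0], 1 + counts[0] - successes[0])
                for _ in range(draws)
            )
            prob_b = wins / draws
        else:
            prob_b = 0.5
        arm = 1 if rng.random() < prob_b else 0
        counts[arm] += 1
        successes[arm] += rng.random() < p_true[arm]
    return counts, successes

# Hypothetical response rates: 0.2 on arm A, 0.5 on arm B.
counts, successes = run_trial(100, (0.2, 0.5), adaptive=True, seed=1)
print("allocation:", counts, "non-responders:", sum(counts) - sum(successes))
```

Averaged over repeated runs, the adaptive rule assigns noticeably more patients to arm B than 1:1 randomization does, at the cost of a less balanced comparison; this is the trade-off the cited simulation studies quantify.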
Practical considerations
For two-arm trials and early phase studies, Hey and Kimmelman find no ethical advantage for adaptive over equal randomization. Their first argument is that in early phase trials, short-term efficacy endpoints often do not translate to long-term patient benefit; therefore it will not matter whether adaptive randomization assigns more patients to the better arm based on the short-term endpoint. However, a poor choice of surrogate endpoint is a study design flaw, no matter which randomization method is used. The failure of more than 50% of phase III trials reflects the inefficiency of the current drug development process and not a lack of merit for adaptive randomization. In fact, the vast majority of randomized phase II trials have applied equal rather than adaptive randomization.11 Contrary to Hey and Kimmelman’s recommendation, novel clinical trial designs, including those involving adaptive randomization, should be considered in early phase drug development to more efficiently determine the best dose/schedule and identify effective agents.12–14
Their second argument against adaptive randomization is that most new treatments deliver a small efficacy improvement over that of the standard treatment. Hence, adaptive randomization offers limited benefit but requires larger sample sizes. I agree that when the expected efficacy gain for a new treatment is small, adaptive randomization has limited advantage. However, in biomarker-based stratified designs with properly chosen predictive markers, extremely large treatment effects have been observed, for example, for imatinib, erlotinib, vemurafenib, and crizotinib among patients with BCR-ABL translocation, EGFR mutation, BRAF mutation, and EML4-ALK fusion, respectively.15,16 Adaptive designs shine in these situations.
Hey and Kimmelman argue that phase III trial endpoints that reflect patient benefit take longer to observe and therefore limit the ability of adaptive randomization to incorporate outcome information into the allocation unless accrual is slow. Indeed, adaptive randomization requires that accrual not be too fast relative to the time needed to observe the outcome; and if accrual is too slow, the trial will take too long to complete. A recent article addresses this problem by finding a “sweet spot” that optimizes a utility function incorporating treatment efficacy and tolerability, sample size, and accrual rate.17
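The accrual-versus-delay tension is easy to quantify. Under the simplifying assumptions of uniform accrual and a fixed outcome observation time (the numbers below are hypothetical), the fraction of enrolled patients whose outcomes are available to inform the next randomization is:

```python
def observed_fraction(k, accrual_rate, outcome_delay):
    """Fraction of the first k enrolled patients whose outcomes are
    already observed when patient k+1 arrives, assuming uniform accrual
    (patients/month) and a fixed outcome delay (months).

    Patient i enrolls at time i/accrual_rate, and the outcome is observed
    outcome_delay months later, i.e., roughly after another
    accrual_rate * outcome_delay patients have enrolled.
    """
    observed = sum(1 for i in range(1, k + 1)
                   if i + accrual_rate * outcome_delay <= k + 1)
    return observed / k

# With 50 patients enrolled and a 3-month outcome:
print(observed_fraction(50, accrual_rate=10, outcome_delay=3))  # 0.42
print(observed_fraction(50, accrual_rate=2, outcome_delay=3))   # 0.9
```

With fast accrual (10 patients/month), less than half of the outcome data are available to adapt the allocation; with slow accrual (2 patients/month), 90% are, but the trial takes five times as long, which is precisely the trade-off the “sweet spot” optimization targets.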
Hey and Kimmelman underscore the greater use of resources by adaptive randomization because of larger sample sizes. They consider a dichotomy: research systems versus care systems. They emphasize efficiency above all other concerns to minimize research resources and expeditiously pass findings to the care delivery systems. This is a typical, old-school argument that research is not intended to benefit the research participant, under which collective ethics trumps individual ethics. However, when faced with a life-threatening disease, is there a single patient who does not want to benefit from participating in a clinical trial rather than simply contribute to generalizable knowledge? Marshall eloquently stated that Institutional Review Boards “should be specifically and predominantly the moral advocates of the research subject, and only to a lesser degree concerned with the advancement of research and its public goals”.18 While determining treatment efficacy, adaptive randomization has the advantage of considering patient benefit. Furthermore, in-depth genomic profiling and analysis of environmental impacts have shown us that no two patients are alike, which renders patient homogeneity and generalizability of knowledge less relevant and blurs the boundary between research and patient care. The future of precision medicine is integrating research with patient care, providing every patient with the best possible treatment based on the available information, and continuing to learn and to improve the system.
Hey and Kimmelman construct an artificial case to invalidate adaptive randomization, but the argument is flawed. They say that adaptive randomization might be useful in trials employing placebos because it will allocate fewer patients to the placebo should the novel agent show favorable activity. They then counter that adaptive randomization should not be used because it is unethical to use a placebo rather than standard care. In that scenario, however, a placebo-controlled study should not be designed at all, regardless of whether adaptive or equal randomization is used.
Hey and Kimmelman are concerned that adaptive randomization worsens patients’ misconceptions of randomization, instilling the erroneous belief that they will receive the best possible treatment. They consider a 0.8 probability of randomization to the superior arm and worry that an adaptive randomization advocate ignores the 20% chance of a patient being randomized to the inferior arm. This illustrates not an issue of adaptive versus equal randomization, but the challenge of communicating accurately in the informed consent document. I agree that the benefit of adaptive randomization should not be overstated. Patients should not be drawn to trials based on unrealistic expectations of treatment benefit, regardless of whether adaptive or equal randomization is applied. An important point is that all trials begin randomization before sufficient evidence is available to definitively conclude the comparative efficacy of two or more treatments. Adaptive randomization judiciously uses the accumulating outcome information to assign patients enrolled later in the trial to the putatively better arm with higher probability. In contrast, equal randomization ignores the information accumulating in the trial.
Hey and Kimmelman point out that adaptive randomization designs are vulnerable to population and treatment changes during the trial, which threatens their internal validity. Indeed, this is a valid concern. Solutions to mitigate this problem include the incorporation of a concurrent control group, block randomization, and covariate-adjusted data analysis.
Real-life performance
Adaptive designs require more resources to plan and implement, and given the current research infrastructure, not all institutions are ready to conduct such trials. Except in phase I cancer trials, adaptive designs have not been widely adopted.19,20 More tools need to be developed to design, simulate, and implement adaptive randomization trials, and the clinical trial infrastructure needs to be revamped to collect study endpoints in a timely manner and to conduct trials more efficiently. At MD Anderson Cancer Center, we have developed freely available software tools for the design and conduct of adaptive trials.21 While conducting the BATTLE trial, we developed a web-based database application for data entry, e-mail notification, web services that call R functions to calculate the randomization probability, interim study monitoring, and analysis. Although this takes more effort, the advantage is that study data are collected more efficiently, promptly, and accurately.22
Adaptive randomization has been implemented in only a small fraction of clinical trials. Much effort is needed in education, software development, trial conduct, data analysis, and the interpretation of adaptive randomization methods. The benefit of adaptive randomization (or lack thereof) has not been demonstrated convincingly in clinical trials. However, with rapid advancements in medical knowledge and drug development, we need to challenge the status quo and strive to provide better treatments that benefit more patients.
Acknowledgments
Funding
This research was supported in part by grant CA016672 from the National Cancer Institute.
Footnotes
Declaration of conflicting interests
The author declares no conflict of interest.
References
1. Hu F, Rosenberger WF. The theory of response-adaptive randomization in clinical trials. Hoboken, NJ: John Wiley & Sons; 2006.
2. Korn EL, Freidlin B. Outcome-adaptive randomization: Is it useful? J Clin Oncol. 2011;29:771–776. doi:10.1200/JCO.2010.31.1423.
3. Lee JJ, Chen N, Yin G. Worth adapting? Revisiting the usefulness of outcome-adaptive randomization. Clin Cancer Res. 2012;18:4498–4507. doi:10.1158/1078-0432.CCR-11-2555.
4. Du Y, Wang X, Lee JJ. Simulation study for evaluating the performance of response-adaptive randomization. Contemp Clin Trials. 2014;40C:15–25. doi:10.1016/j.cct.2014.11.006.
5. Berry DA, Eick SG. Adaptive assignment versus balanced randomization in clinical trials: a decision analysis. Stat Med. 1995;14:231–246. doi:10.1002/sim.4780140302.
6. Cheng Y, Berry DA. Optimal adaptive randomized designs for clinical trials. Biometrika. 2007;94:673–687.
7. Lee JJ, Gu X, Liu S. Bayesian adaptive randomization designs for targeted agent development. Clin Trials. 2010;7:584–596. doi:10.1177/1740774510373120.
8. Lai TL, Lavori PW, Shih M-CI, et al. Clinical trial designs for testing biomarker-based personalized therapies. Clin Trials. 2012;9:141–154. doi:10.1177/1740774512437252.
9. Wason JMS, Trippa L. A comparison of Bayesian adaptive randomization and multi-stage designs for multi-arm clinical trials. Stat Med. 2014;33:2206–2221. doi:10.1002/sim.6086.
10. Rosenberger WF, Sverdlov O, Hu F. Adaptive randomization for clinical trials. J Biopharm Stat. 2012;22:719–736. doi:10.1080/10543406.2012.676535.
11. Lee JJ, Feng L. Randomized phase II designs in cancer clinical trials: Current status and future directions. J Clin Oncol. 2005;23:4450–4457. doi:10.1200/JCO.2005.03.197.
12. Wu W, Shi Q, Sargent DJ. Statistical considerations for the next generation of clinical trials. Semin Oncol. 2011;38:598–604. doi:10.1053/j.seminoncol.2011.05.014.
13. Rubin EH, Gilliland DG. Drug development and clinical trials - the path to an approved cancer drug. Nat Rev Clin Oncol. 2012;9:215–222. doi:10.1038/nrclinonc.2012.22.
14. Qin R, Kohli M. Pharmacogenetics- and pharmacogenomics-based rational clinical trial designs in oncology. Per Med. 2013;10:859–869. doi:10.2217/pme.13.78.
15. La Thangue NB, Kerr DJ. Predictive biomarkers: A paradigm shift towards personalized cancer medicine. Nat Rev Clin Oncol. 2011;8:587–596. doi:10.1038/nrclinonc.2011.121.
16. DeVita VT, Eggermont AMM, Hellman S, et al. Clinical cancer research: The past, present and the future. Nat Rev Clin Oncol. 2014;11:663–669. doi:10.1038/nrclinonc.2014.153.
17. Gajewski BJ, Berry SM, Quintana M, et al. Building efficient comparative effectiveness trials through adaptive designs, utility functions, and accrual rate optimization: Finding the sweet spot. Stat Med. 2015; in press. doi:10.1002/sim.6403.
18. Marshall E. Does the moral philosophy of the Belmont Report rest on a mistake? IRB. 1986;8:5–6.
19. Chevret S. Bayesian adaptive clinical trials: A dream for statisticians only? Stat Med. 2012;31:1002–1013. doi:10.1002/sim.4363.
20. Lee JJ, Chu CT. Bayesian clinical trials in action. Stat Med. 2012;31:2955–2972. doi:10.1002/sim.5404.
21. The University of Texas MD Anderson Cancer Center, Division of Quantitative Sciences, Department of Biostatistics. Software download site, https://biostatistics.mdanderson.org/SoftwareDownload/ (2012, accessed 8 December 2014).
22. Kim ES, Herbst RS, Wistuba II, et al. The BATTLE trial: Personalizing therapy for lung cancer. Cancer Discov. 2011;1:44–53. doi:10.1158/2159-8274.CD-10-0010.