Abstract
Background
Response adaptive randomization is popular in adaptive trial designs, but the literature detailing its execution is lacking. These designs are desirable for patients and stakeholders, particularly in comparative effectiveness research, because of potential benefits such as improving participant buy-in by allocating more participants to the better-performing treatment during the trial. Frequentist approaches have often been used, but adaptive designs naturally fit the Bayesian methodology, which was developed to deal with data as they come in by updating prior information.
Methods
PAIN-CONTRoLS was a comparative effectiveness trial utilizing Bayesian response adaptive randomization to one of four drugs (nortriptyline, duloxetine, pregabalin, or mexiletine) for patients with cryptogenic sensory polyneuropathy (CSPN). The aim was to determine which treatment was most tolerable and effective in reducing pain. Quit and efficacy rates were combined into a utility function to form a single outcome, which, together with treatment sample size, drove the adaptive randomization. Prespecified interim analyses allowed the study to stop early for success or to update the randomization probabilities in favor of the better-performing treatments.
Results
Seven adaptations to the randomization occurred before the trial ended due to reaching the maximum sample size, with more participants receiving nortriptyline and duloxetine. At the end of follow-up, nortriptyline and duloxetine had lower posterior quit rates and higher posterior efficacy rates. Mexiletine had the highest quit rate but a higher efficacy rate than pregabalin.
Conclusions
Response adaptive randomization has become a popular trial tool, especially in trials utilizing Bayesian methods for analyses. By illustrating the execution of a Bayesian adaptive design using the PAIN-CONTRoLS trial data, this paper continues the work of providing literature for conducting Bayesian response adaptive randomized trials.
Keywords: Conducting adaptive design, Interim analyses, Bayesian methods, Outcome driven
1. Background
Adaptive trial designs, unlike fixed trial designs, allow scheduled reviews of the accumulating data while the trial is ongoing, enabling prespecified changes to flexible components of the trial without undermining its integrity or validity [[1], [2], [3]]. Response adaptive randomization developed as a solution to an ethical dilemma of fixed trials: participants have only a fixed chance of receiving the new treatment throughout the trial, regardless of the accumulating evidence [4]. Response adaptive randomization, which shifts the allocation probabilities to favor the better-performing treatment, has grown in popularity over the last few decades [[5], [6], [7]]. The launch of the groundbreaking I-SPY 2 clinical trial [6] and the guidance from the U.S. Food and Drug Administration (FDA) in 2010 [8] boosted the use of adaptive clinical trials. In 2012, the Patient-Centered Outcomes Research Institute (PCORI) adopted specific policies and guidelines to encourage the use of Bayesian adaptive designs in comparative effectiveness trials [9]. The FDA released updated guidance in 2019 for sponsors and applicants on the appropriate use of adaptive designs in clinical trials to provide evidence of the effectiveness and safety of a drug [10].
Adaptive designs offer potential advantages over non-adaptive designs because of their fundamental property of adjusting to information that was not available when the trial began [11]. These benefits include statistical efficiency (a greater chance to detect a true drug effect, or the same statistical power with a smaller expected sample size), ethical considerations (stopping early, more participants receiving the more effective drug), improved understanding of drug effects, and acceptability to stakeholders [[10], [11], [12], [13]]. Adaptive designs have often been applied in clinical trials using frequentist approaches [13,14], but further advantages in trial design and analysis can be gained by using Bayesian methods [15]. Adaptive designs naturally fit the Bayesian methodology, which was developed to deal with new data as they come in by updating prior information, often using hierarchical methods [16]. The Bayesian approach allows the use of different sources of information, such as intermediate and endpoint assessments, while providing direct probability statements about the treatment effect, which can offer clinicians insight into the likelihood that they are using the best therapy [12,17]. Bayesian adaptive designs can also use simulations to evaluate frequentist operating characteristics such as power and type I error rate, while the inference is not affected by the number or timing of interim analyses [17,18]. Efforts to promote Bayesian approaches and to detail the execution of response-adaptive randomized clinical trials are therefore needed.
One such example of a Bayesian response adaptive randomized clinical trial designed during this boost in outcomes-driven adaptive trials is PAIN-CONTRoLS [19]. A previous paper (Brown et al., 2016, Trials) provides much of the design detail and the technical processes involved in running an adaptive design (a "how to" paper) [20]. This paper builds on that work by describing the design's execution with the actual trial data and the detailed results of the interim and final analyses, demonstrating the response adaptive randomization throughout the trial. We hope this paper continues to fill the void in the adaptive trial design literature by illustrating the execution of a Bayesian adaptive design using the PAIN-CONTRoLS data.
2. Methods
2.1. Study information
The Patient Assisted Intervention for Neuropathy: Comparison of Treatment in Real Life Situations (PAIN-CONTRoLS) trial was a comparative effectiveness study utilizing a Bayesian adaptive design with response adaptive randomization to one of four drugs for participants with cryptogenic sensory polyneuropathy (CSPN) [19,21]. No prior studies had compared the effectiveness of medications in controlling pain for participants with CSPN. The primary aim of this study was to determine which study drug was most tolerable and most effective in providing pain relief and improving quality of life in participants with CSPN. The study was a prospective, randomized, comparative effectiveness adaptive design trial enrolling participants who did not have diabetes and in whom no other cause for neuropathy had been found. The four drugs compared were nortriptyline, duloxetine, pregabalin, and mexiletine, all commonly prescribed by physicians caring for patients with CSPN. None of the four medications is FDA approved for CSPN; pregabalin and duloxetine are FDA approved for diabetic peripheral neuropathy, while mexiletine and nortriptyline are often used in U.S. clinics to treat painful neuropathy. We chose a Bayesian adaptive design with participant burden and efficiency in mind. The trial also included an added layer of complexity by randomizing participants across 40 sites in North America. The results from this study gave patients and doctors meaningful, practical information to guide them in selecting the drug for pain that may be the most effective while having the fewest side effects, which would otherwise result in stopping the medication [21,22].
2.2. Outcome
Pain is a major symptom for patients with CSPN. As is the practice in most clinics, we chose improvement in a participant's rating on a Visual Analog Scale Likert pain assessment as the main endpoint, with 0 representing no pain and 10 representing severe pain. Patients and the Patient Advisory Board consistently endorsed pain as a central study measure, recognizing that participants' lack of pain relief contributes to medication quits, poor quality of life, and diminished ability to engage in desired daily activities, as well as having a negative impact on emotional wellbeing. The outcome of the study is therefore a utility function that combines two measures: treatment efficacy and the quit rate. At follow-up, a participant was categorized into one of three levels: treatment quit; treatment efficacious and non-quit; or treatment non-efficacious and non-quit. Specifically, a treatment was deemed efficacious for a participant if they reported a 50 % or greater reduction on the pain scale from the baseline visit to the 12-week visit. The treatment efficacy rate was the percentage of participants who were efficacious and did not quit; the quit rate was the observed percentage of participants who quit treatment before the study endpoint (12-week) visit for any reason or were lost to follow-up. To develop a single primary outcome measure, we combined the efficacy and quit rates into a single utility function, described below [20,23].
2.3. Longitudinal statistical modeling for interim analyses
Study participants were randomized to one of four drugs, and pain was measured at 4 and 8 weeks, in addition to the 12-week primary endpoint. At each measurement time, each participant was rated as either staying on the drug or having quit the drug due to lack of efficacy or adverse side effects. If the participant remained on the drug, the drug was classified as efficacious if the participant had at least a 50 % reduction in pain from baseline, and non-efficacious otherwise. At each interim analysis, we applied Bayesian response adaptive randomization based on 12-week participant outcomes, using a Bayesian model to predict the 12-week outcomes from the 4- and 8-week follow-up visits for any participants with incomplete data. The updated randomization ratios depended on the accumulating information in the trial and the treatment sample sizes.
A conditional multinomial model was created to predict the final 12-week values from the two early follow-up response values. The Bayesian model was built to learn from the accruing information and to use the information from participants with incomplete data to the extent that the 4- and 8-week values are predictive of the 12-week values. A separate version of the model was used for each time period: if a participant had only 4-week data, the 4-week Bayesian longitudinal model performed the multiple imputation of the final 12-week values; if the participant had 8-week data, the 8-week model was used. The multinomial models were applied only to participants who stayed on medication because, once a participant quit a medication, all subsequent follow-ups were also quit. We label the responses for participant i at weeks 4, 8, and 12 as vectors of length three, Yi,4, Yi,8, and Yi,12, respectively, with each component of the three-dimensional vector being a 1/0 indicator of the three levels of participant outcome at the follow-up: quit; efficacious and non-quit; or non-efficacious and non-quit. Fairly weakly informative priors were used for the conditional multinomial models; details are provided in Brown et al. [20]. A sketch of this imputation step follows.
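To make the imputation step concrete, here is a minimal sketch in R (the language the trial used for its computations [24]) that multiply imputes a 12-week outcome from an observed 8-week state via a conditional multinomial model with a Dirichlet prior. The transition counts and prior values are illustrative assumptions, not the trial's data or its exact model.

```r
## Hypothetical sketch: impute a 12-week outcome (1 = quit, 2 = efficacious &
## non-quit, 3 = non-efficacious & non-quit) from an 8-week state.
set.seed(42)

rdirichlet1 <- function(alpha) {     # one Dirichlet draw via normalized gammas
  g <- rgamma(length(alpha), shape = alpha)
  g / sum(g)
}

## Observed 8-week -> 12-week transitions among non-quitters (rows: 8-week
## state 2 or 3; columns: 12-week state 1, 2, 3). Hypothetical counts.
trans_counts <- rbind(c(3, 20, 5),
                      c(6,  4, 15))
prior <- c(1/3, 1/3, 1/3)            # weakly informative Dirichlet prior

impute_week12 <- function(state8) {
  if (state8 == 1) return(1)         # once quit, always quit
  p <- rdirichlet1(trans_counts[state8 - 1, ] + prior)  # posterior draw of row
  sample(1:3, 1, prob = p)           # draw the imputed 12-week level
}

impute_week12(2)  # e.g., a participant efficacious & non-quit at week 8
```

Repeating the draw many times yields multiple imputations that propagate the uncertainty in the transition probabilities into the interim analysis.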
2.3.1. Primary analysis statistical model
The response for participant i at the 12-week visit is a three-dimensional vector, Yi,12, that follows a multinomial distribution with parameter θai, where ai is the treatment arm for participant i, and θa = [θQa, θEa, θNa] is a three-dimensional vector for arm a representing the response to pain medication (the rates of quit; non-quit and efficacious; non-quit and non-efficacious). Vague priors were used: θa ∼ Dirichlet(1/3, 1/3, 1/3) [23]. Using the 12-week data and these priors, we ran Markov chain Monte Carlo (MCMC) computations in R [24] to obtain the Bayesian posterior distribution of θa at each interim and at the final analysis.
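As a concrete illustration of this model for one arm, consider the following minimal sketch (R). Because the Dirichlet prior is conjugate to the multinomial likelihood, the posterior of θa is Dirichlet(counts + 1/3) and can be simulated directly; the trial obtained the same posterior via MCMC in R [24]. The outcome counts are hypothetical, not trial data.

```r
## Hypothetical sketch: posterior of theta_a = [theta_Qa, theta_Ea, theta_Na].
set.seed(1)
counts <- c(quit = 15, eff_nonquit = 10, noneff_nonquit = 14)  # 12-week outcomes, arm a

rdirichlet <- function(n, alpha) {   # n Dirichlet draws via normalized gammas
  g <- matrix(rgamma(n * length(alpha), shape = alpha), nrow = n, byrow = TRUE)
  g / rowSums(g)
}

theta_a <- rdirichlet(10000, counts + 1/3)     # posterior draws
colMeans(theta_a)                              # posterior means of the three rates
apply(theta_a, 2, quantile, c(0.025, 0.975))   # 95% Bayesian credible intervals
```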
2.3.2. Bayesian quantities of interest
To measure the performance of a treatment, the posterior quit and efficacy components of θa are combined into a utility, Ua, for the ath drug: Ua = 0.75θEa + (1 − θQa), where θQa and θEa represent the posterior quit and efficacy rates, respectively, for drug a. The following Bayesian quantities were used for the adaptive design and were calculated at each interim analysis. The treatment with the highest utility was denoted the best treatment. Because we did not know which treatment was best while the trial was ongoing, we estimated the probability that each treatment was the best using MCMC: specifically, Pr(a is best) = Pr(Ua > UX, Ua > UY, and Ua > UZ), where X, Y, and Z represent the treatments other than a.
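The following minimal sketch (R) turns posterior draws into these two quantities: the utility for each arm and the probability that each arm is best. The per-arm outcome counts are hypothetical, not trial data.

```r
## Hypothetical sketch: posterior utilities and Pr(each treatment is best).
set.seed(2)
rdirichlet <- function(n, alpha) {   # n Dirichlet draws via normalized gammas
  g <- matrix(rgamma(n * length(alpha), shape = alpha), nrow = n, byrow = TRUE)
  g / rowSums(g)
}

## Per-arm (quit, efficacious & non-quit, non-efficacious & non-quit) counts.
counts <- list(nortriptyline = c(15, 10, 14),
               duloxetine    = c(14,  9, 16),
               pregabalin    = c(17,  6, 16),
               mexiletine    = c(23,  8,  9))

M <- 10000
U <- sapply(counts, function(k) {
  th <- rdirichlet(M, k + 1/3)       # posterior draws of theta_a
  0.75 * th[, 2] + (1 - th[, 1])     # Ua = 0.75*theta_Ea + (1 - theta_Qa)
})

## Pr(arm is best) = fraction of draws in which the arm has the highest utility.
p_best <- table(factor(names(counts)[max.col(U)], levels = names(counts))) / M
round(p_best, 3)
```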
2.3.3. Bayesian adaptive design for PAIN-CONTRoLS
The goal of the adaptive randomization was to allocate more participants to the better-performing treatments while learning which treatment was most effective and tolerable. After a burn-in period with a 0.25:0.25:0.25:0.25 allocation ratio for the first 80 participants randomized, the vector of randomization probabilities, q = (q1, q2, q3, q4), was updated to favor the drugs most likely to be the best. A burn-in period with equal randomization helps to avoid extreme randomization probabilities. The randomization allocation vector is based on the posterior distribution of the utility function for each arm. Let na be the number of participants enrolled in arm a and Var(Ua) be the posterior variance of the utility for arm a. The information for each treatment is defined as Ia = sqrt(Pr(a is best) × Var(Ua) / (na + 1)), and the randomization probabilities are proportional to this information, so that qa = Ia / (I1 + I2 + I3 + I4). The interim analyses and allocation updates occurred again after 100 participants had endpoint data (quit or 12-week follow-up) and then every 13 weeks until the study stopped for success or the total accrual of 400 was met. The study could stop early only for success, which required the probability that a treatment was best to exceed 0.925 at any interim analysis after 100 participants had endpoint data. There was no early stopping for futility and no arm dropping in this adaptive design. At the final analysis, a treatment would be declared most effective and tolerable if the probability that it was best was >0.925, and a treatment would be deemed a loser if the probability that it was best was <0.01.
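A minimal sketch (R) of one allocation update follows, assuming the information-based rule described above (per the design platform in Brown et al. [20]). The probabilities of being best and the sample sizes are taken from the second adaptation in Table 1, while the posterior variances of the utilities are hypothetical, so the output illustrates the mechanism rather than reproducing the table exactly.

```r
## Hypothetical sketch: one update of the randomization probability vector q.
pbest <- c(nortriptyline = 0.33, duloxetine = 0.36,
           pregabalin = 0.20, mexiletine = 0.10)   # Pr(arm is best), Table 1
varU  <- c(0.006, 0.007, 0.006, 0.007)             # Var(Ua), hypothetical values
n     <- c(43, 24, 32, 32)                         # enrolled per arm, Table 1

info <- sqrt(pbest * varU / (n + 1))  # information Ia for each treatment
q    <- info / sum(info)              # normalize so the probabilities sum to 1
round(q, 2)
```

Dividing by na + 1 damps the allocation toward arms that are already large, which stabilizes the randomization when one arm pulls ahead early.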
3. Results
3.1. Trial execution
Between December 1, 2014, and June 14, 2017, 402 participants with CSPN were enrolled in the study, with the last participant's follow-up occurring in October 2017. The initial adaptation to the randomization was preplanned to occur after the burn-in phase, when 20 participants had been randomized to each treatment arm. Accrual was slow at the beginning, so this did not happen until more than one year after the first enrollee; as shown in Table 1, it occurred in December 2015. Table 1 provides details regarding the timing of each adaptation, the progression of each treatment's sample size, the Bayesian posterior probability that each treatment was best, and the subsequent adaptation probabilities for the next cohort of participants. At each interim analysis, a blinded Data Safety Monitoring Board (DSMB) report was generated and sent to the DSMB members for review. The report contained the information in Table 1, the screen-fail information, and safety information on side effects reported by patients.
Table 1.
Progression of sample size, utility, and randomization probabilities during the trial.
| | Nortriptyline (n = 134) | Duloxetine (n = 126) | Pregabalin (n = 73) | Mexiletine (n = 69) | Total (N = 402) |
|---|---|---|---|---|---|
| First Adaptation – 12/2015, N | 20 | 20 | 20 | 20 | 80 |
| Completersᵃ, N | 12 | 9 | 9 | 9 | 39 |
| Prob. Treatment has best utility | 0.27 | 0.01 | 0.06 | 0.66 | |
| Adaptation randomization probs. | 0.35 | 0.07 | 0.17 | 0.40 | |
| 2nd Adaptation – 03/2016, N | 43 | 24 | 32 | 32 | 131 |
| Completersᵃ, N | 26 | 22 | 26 | 27 | 101 |
| Prob. Treatment has best utility | 0.33 | 0.36 | 0.20 | 0.10 | |
| Adaptation randomization probs. | 0.24 | 0.40 | 0.21 | 0.16 | |
| 3rd Adaptation – 06/2016, N | 47 | 41 | 38 | 37 | 163 |
| Completersᵃ, N | 44 | 25 | 33 | 33 | 135 |
| Prob. Treatment has best utility | 0.45 | 0.25 | 0.15 | 0.15 | |
| Adaptation randomization probs. | 0.28 | 0.30 | 0.21 | 0.22 | |
| 4th Adaptation – 09/2016, N | 59 | 50 | 46 | 40 | 195 |
| Completersᵃ, N | 48 | 42 | 37 | 38 | 165 |
| Prob. Treatment has best utility | 0.27 | 0.61 | 0.06 | 0.06 | |
| Adaptation randomization probs. | 0.26 | 0.43 | 0.14 | 0.17 | |
| 5th Adaptation – 12/2016, N | 74 | 75 | 52 | 49 | 250 |
| Completersᵃ, N | 61 | 55 | 43 | 46 | 205 |
| Prob. Treatment has best utility | 0.43 | 0.47 | 0.10 | 0.01 | |
| Adaptation randomization probs. | 0.34 | 0.37 | 0.20 | 0.09 | |
| 6th Adaptation – 03/2017, N | 92 | 96 | 62 | 55 | 305 |
| Completersᵃ, N | 75 | 78 | 56 | 49 | 258 |
| Prob. Treatment has best utility | 0.66 | 0.29 | 0.03 | 0.02 | |
| Adaptation randomization probs. | 0.46 | 0.29 | 0.12 | 0.13 | |
| 7th Adaptation – 06/2017, N | 127 | 121 | 72 | 67 | 387 |
| Completersᵃ, N | 99 | 97 | 63 | 56 | 314 |
| Prob. Treatment has best utility | 0.36 | 0.57 | 0.06 | 0.01 | |
| Adaptation randomization probs. | 0.31 | 0.40 | 0.19 | 0.10 | |
| Final Analysis – 10/2017, N | 134 | 126 | 73 | 69 | 402 |
| Prob. Treatment has best utility | 0.52 | 0.43 | 0.05 | 0.00 |
ᵃ Count of participants who had outcome variable (12-week) data at the time of the interim analysis.
The assessment of stopping the trial early for success first occurred after 101 subjects had endpoint data; by that time, 131 subjects had been randomized. All subsequent interim analyses occurred every 13 weeks thereafter. A total of seven adaptations to the randomization occurred before the trial ended due to reaching the maximum sample size. Fig. 1, plots (a), (b), and (c), visually displays the information found in Table 1. The first interim analysis, shown in both Table 1 and Fig. 1(a), demonstrates the importance of implementing a burn-in period and stringent stopping criteria in an adaptive trial design. At the initial adaptation after the burn-in, the probability that duloxetine was best was found to be 0.01, but only 41 of the 80 randomized participants had endpoint data, well under the 100 endpoints the protocol required before stopping. By the next interim analysis, additional endpoint data provided more information about the better-performing treatments.
Fig. 1.
Progression of sample size, utility, and randomization probabilities during the trial. Posterior probabilities each treatment is best (a), the subsequent allocation probabilities (b), and the sample size for each treatment at each of the interim analyses (c).
Fig. 1(a) makes clear that nortriptyline and duloxetine alternated in having the best utility throughout the study, while pregabalin and mexiletine continuously performed worse than both. Thus, due to the adaptive randomization, the study had unbalanced treatment arms, with fewer participants randomized to pregabalin and mexiletine over the successive adaptive randomization eras because of those two treatments' small probabilities of having the best utility (Fig. 1(b) and (c)). The total sample sizes for nortriptyline and duloxetine were larger, at 134 and 126 participants, respectively, compared with pregabalin (73) and mexiletine (69).
Despite the unbalanced treatment arms, all four therapy groups were well matched for baseline characteristics [19]. Overall, the mean age was 60.1 years, and 53 % of the subjects were men. Most of the cohort was non-Hispanic (94.3 %) and white (85.3 %). The primary endpoint measure, the Likert pain scale score (possible range 0–10), was similar at baseline across the four groups, with means of 6.87, 6.73, 6.44, and 6.54 for nortriptyline, duloxetine, pregabalin, and mexiletine, respectively. For each interim analysis, any available follow-up data were used to estimate transition probabilities from the outcome at an early time point to the final outcome, using the longitudinal model that predicted participants' week-12 data from data at earlier time points (weeks 4 and 8). For the final analysis, all endpoint (week-12) data were used in the utility function, and, per the protocol, participants who were lost to follow-up were imputed as quits. Table 2 shows the posterior probabilities of the quit, efficacious, and non-efficacious rates for each of the four treatments at the final analysis. The utility of each treatment, with its 95 % Bayesian credible interval (BCI), and the probability that the treatment is the best were calculated from the estimated quit and efficacy rates.
Table 2.
Final analysis of posterior probabilities for each treatment.
| | Nortriptyline | Duloxetine | Pregabalin | Mexiletine |
|---|---|---|---|---|
| Utility | 0.81 | 0.80 | 0.69 | 0.58 |
| [95 % Bayesian Credible Interval (BCI)] | [0.69, 0.93] | [0.68, 0.92] | [0.55, 0.84] | [0.42, 0.75] |
| Probability Treatment is Best | 0.52 | 0.43 | 0.05 | 0.00 |
| Week 12 Outcome, [95 % BCI] | ||||
| Quit | 0.38 [0.30, 0.46] | 0.37 [0.29, 0.46] | 0.42 [0.31, 0.54] | 0.58 [0.46, 0.69] |
| Efficacious and Non-Quit | 0.25 [0.18, 0.33] | 0.23 [0.16, 0.31] | 0.15 [0.08, 0.24] | 0.20 [0.12, 0.31] |
| Non-Efficacious and Non-Quit | 0.36 [0.29, 0.45] | 0.40 [0.31, 0.48] | 0.42 [0.31, 0.54] | 0.22 [0.13, 0.32] |
At the end of the 12-week follow-up period, nortriptyline and duloxetine had the lowest probabilities of participants stopping the study medication, with 38 % and 37 % quitting, respectively. Mexiletine had the highest quit rate, with many participants quitting due to gastrointestinal side effects [19]. Participant-reported side effects were also the primary reason for quitting nortriptyline and duloxetine. There were many reasons for quitting among participants randomized to pregabalin, including side effects and cost due to lack of insurance coverage. Again, any participant who reported at least a 50 % reduction on the pain scale (e.g., from 6 to 3) was deemed efficacious. Pregabalin had the lowest rate of efficacious participants, with only 15 % achieving a 50 % reduction in the VAS pain scale. The other three medications had similar efficacy rates, with nortriptyline having the highest at 25 %. These estimated quit and efficacy rates provided the final probability that each treatment was best. Per the protocol, a treatment could be declared the best if its final probability was >0.925 or a loser if that probability was <0.01. As Table 2 shows, we were able to declare mexiletine, with a very high quit rate and a very low efficacy rate, a loser, i.e., a drug that clinicians should not recommend to patients with CSPN. For the performance of mexiletine on secondary outcomes among participants who did not quit the medication, see [22].
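As a quick arithmetic check (R), plugging the reported posterior quit and efficacy rates from Table 2 into the utility function reproduces the reported utilities to within rounding of the displayed rates.

```r
## Check of Table 2: Ua = 0.75*theta_Ea + (1 - theta_Qa).
quit <- c(nortriptyline = 0.38, duloxetine = 0.37, pregabalin = 0.42, mexiletine = 0.58)
eff  <- c(nortriptyline = 0.25, duloxetine = 0.23, pregabalin = 0.15, mexiletine = 0.20)

round(0.75 * eff + (1 - quit), 2)
## reported utilities: 0.81, 0.80, 0.69, 0.58 (mexiletine's 0.57 here reflects
## rounding of the reported rates)
```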
4. Conclusion
Response adaptive randomization trials have grown rapidly in the last few decades, especially those utilizing Bayesian methods for analyses. Bayesian adaptive trials can be challenging to design and more difficult to implement than a fixed trial design [25,26]. Even though adaptive trials have grown rapidly, the literature detailing the execution and processes involved in running these trials is severely lacking. We have provided a thorough description of each stage of this Bayesian response-adaptive study. Some challenges we faced while utilizing response adaptive randomization included the timing of the interim analyses interfering with subject recruitment and vacation schedules, and unexpectedly slow recruitment. We alleviated some of the timing issues by working closely with the sites to ensure we would not interrupt any potential recruitment, performing some of the updates to the randomization table in the evening outside of clinic hours, and delegating tasks within our team when someone was out of the office.
Bayesian methods and adaptive designs are particularly beneficial in comparative effectiveness research, where the effect size between treatments may be smaller than in a placebo-controlled trial [26]. For future work, suppose we started the next trial, another comparative effectiveness study with response adaptive randomization, keeping nortriptyline and duloxetine but adding six new treatment drugs [27]. We could use a time-adjusted Bayesian drift model, the Bayesian Time Machine, to bridge the results from the original PAIN-CONTRoLS trial [19] to this hypothesized next trial, drawing conclusions about the most and least effective medications as well as whether any temporal trends occurred during or between the two comparative effectiveness studies [[28], [29], [30]].
We hope this paper begins to serve in filling the gap in the literature that provides details on the implementation and execution of response-adaptive randomized trials, specifically using Bayesian modeling methods.
Funding
This study was supported by PCORI award: University of Kansas Medical Center CER-1306-02496 and NIH Clinical and Translational Science Award UL1TR002366.
Role of the Funder/Sponsor
The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
Data will be made available on request.
References
- 1. Pallmann P., Bedding A.W., Choodari-Oskooei B., et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16(1):29. doi: 10.1186/s12916-018-1017-7.
- 2. Berry D.A. Adaptive clinical trials: the promise and the caution. J. Clin. Oncol. 2011;29(6):606–609. doi: 10.1200/JCO.2010.32.2685.
- 3. Chow S.C., Chang M. Adaptive design methods in clinical trials - a review. Orphanet J. Rare Dis. 2008;3:11. doi: 10.1186/1750-1172-3-11.
- 4. Hu F., Rosenberger W.F. The Theory of Response-adaptive Randomization in Clinical Trials. Wiley; Hoboken, NJ: 2006.
- 5. Berry D.A. The Brave New World of clinical cancer research: adaptive biomarker-driven trials integrating clinical practice with clinical research. Mol. Oncol. 2015;9(5):951–959. doi: 10.1016/j.molonc.2015.02.011.
- 6. Barker A.D., Sigman C.C., Kelloff G.J., Hylton N.M., Berry D.A., Esserman L.J. I-SPY 2: an adaptive breast cancer trial design in the setting of neoadjuvant chemotherapy. Clin. Pharmacol. Ther. 2009;86(1):97–100. doi: 10.1038/clpt.2009.68.
- 7. Bauer P., Bretz F., Dragalin V., et al. Twenty-five years of confirmatory adaptive designs: opportunities and pitfalls. Stat. Med. 2016;35(3):325–347. doi: 10.1002/sim.6472.
- 8. Food and Drug Administration. Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics. Food and Drug Administration; Washington DC, USA: 2010.
- 9. Patient Centered Outcomes Research Institute. PCORI Methodology Standards – Report. Published December 14, 2012. http://www.pcori.org/assets/PCORI-Methodology-Standards.pdf
- 10. Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER). Adaptive Designs for Clinical Trials of Drugs and Biologics: Guidance for Industry. U.S. Department of Health and Human Services, Food and Drug Administration; 2019.
- 11. Luce B.R., Drummond M.F., Dubois R.W., et al. Principles for planning and conducting comparative effectiveness research. J. Comp. Eff. Res. 2012;1(5):431–440. doi: 10.2217/cer.12.41.
- 12. Connor J.T., Elm J.J., Broglio K.R.; ESETT and ADAPT-IT Investigators. Bayesian adaptive trials offer advantages in comparative effectiveness trials: an example in status epilepticus. J. Clin. Epidemiol. 2013;66(8 Suppl):S130–S137. doi: 10.1016/j.jclinepi.2013.02.015.
- 13. Perkins G.D., Ji C., Deakin C.D., et al. A randomized trial of epinephrine in out-of-hospital cardiac arrest. N. Engl. J. Med. 2018;379(8):711–721. doi: 10.1056/NEJMoa1806842.
- 14. Combes A., Hajage D., Capellier G., et al. Extracorporeal membrane oxygenation for severe acute respiratory distress syndrome. N. Engl. J. Med. 2018;378(21):1965–1975. doi: 10.1056/NEJMoa1800385.
- 15. Ryan E.G., Lamb S.E., Williamson E., Gates S. Bayesian adaptive designs for multi-arm trials: an orthopedic case study. Trials. 2020;21(1):83. doi: 10.1186/s13063-019-4021-0.
- 16. Chevret S. Bayesian adaptive clinical trials: a dream for statisticians only? Stat. Med. 2012;31(11–12):1002–1013. doi: 10.1002/sim.4363.
- 17. Berry D.A. Adaptive clinical trials in oncology. Nat. Rev. Clin. Oncol. 2011;9(4):199–207. doi: 10.1038/nrclinonc.2011.165.
- 18. Wang Y., Travis J., Gajewski B. Bayesian adaptive design for pediatric clinical trials incorporating a community of prior beliefs. BMC Med. Res. Methodol. 2022;22(1):118. doi: 10.1186/s12874-022-01569-x.
- 19. Barohn R.J., Gajewski B., Pasnoor M., et al. Patient Assisted Intervention for Neuropathy: Comparison of Treatment in Real Life Situations (PAIN-CONTRoLS): Bayesian adaptive comparative effectiveness randomized trial [published correction appears in JAMA Neurol. 2020 Nov 1;77(11):1453]. JAMA Neurol. 2021;78(1):68–76. doi: 10.1001/jamaneurol.2020.2590.
- 20. Brown A.R., Gajewski B.J., Aaronson L.S., et al. A Bayesian comparative effectiveness trial in action: developing a platform for multisite study adaptive randomization. Trials. 2016;17(1):428. doi: 10.1186/s13063-016-1544-5.
- 21. Pasnoor M., Dimachkie M.M., Barohn R.J. Cryptogenic sensory polyneuropathy. Neurol. Clin. 2013;31(2):463–476. doi: 10.1016/j.ncl.2013.01.008.
- 22. Bhai S.F., Brown A., Gajewski B., et al. A secondary analysis of PAIN-CONTRoLS: pain's impact on sleep, fatigue, and activities of daily living. Muscle Nerve. 2022;66(4):404–410. doi: 10.1002/mus.27637.
- 23. Gajewski B.J., Berry S.M., Quintana M., et al. Building efficient comparative effectiveness trials through adaptive designs, utility functions, and accrual rate optimization: finding the sweet spot. Stat. Med. 2015;34(7):1134–1149. doi: 10.1002/sim.6403.
- 24. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing; Vienna, Austria: 2022. https://www.R-project.org/
- 25. Angus D.C., Berry S., Lewis R.J., et al. The REMAP-CAP (Randomized Embedded Multifactorial Adaptive Platform for Community-acquired Pneumonia) study: rationale and design. Ann. Am. Thorac. Soc. 2020;17(7):879–891. doi: 10.1513/AnnalsATS.202003-192SD.
- 26. Gao G., Gajewski B.J., Wick J., et al. Optimizing a Bayesian hierarchical adaptive platform trial design for stroke patients. Trials. 2022;23(1):754. doi: 10.1186/s13063-022-06664-4.
- 27. Barohn R.J., Allen J., Avila D., et al. Determining best or inferior drug(s) using an adaptive platform for cryptogenic sensory polyneuropathy. RRNMF Neuromuscular J. 2022;3(2):45–98. doi: 10.17161/rrnmf.v3i2.18133.
- 28. Lipsky A.M., Greenland S. Confounding due to changing background risk in adaptively randomized trials. Clin. Trials. 2011;8(4):390–397. doi: 10.1177/1740774511406950.
- 29. Saville B.R., Berry D.A., et al. The Bayesian Time Machine: accounting for temporal drift in multi-arm platform trials. Clin. Trials. 2022;19(5):490–501. doi: 10.1177/17407745221112013.
- 30. Berry S.M., Reese C.S., Larkey P.D. Bridging different eras in sports. J. Am. Stat. Assoc. 1999;94(447):661–676. doi: 10.1080/01621459.1999.10474163.