Author manuscript; available in PMC: 2024 Jan 2.
Published in final edited form as: J Biopharm Stat. 2022 Nov 21;33(1):43–52. doi: 10.1080/10543406.2022.2148161

The Value of a Two-Armed Bayesian Response Adaptive Randomization Trial

Byron J Gajewski 1, Susan E Carlson 2, Alexandra R Brown 1, Dinesh Pal Mudaranthakam 1, Elizabeth H Kerling 2, Christina J Valentine 3
PMCID: PMC9812849  NIHMSID: NIHMS1797917  PMID: 36411742

Abstract

We investigate the value of a two-armed Bayesian response adaptive randomization (RAR) design for comparing early preterm birth rates between a high and a low dose of docosahexaenoic acid during pregnancy. Unexpectedly, the COVID-19 pandemic forced recruitment to pause at 1100 participants rather than the planned 1355. The estimated power at the paused and planned sample sizes was 87% and 90%, respectively, and we decided to stop the study. This paper describes how the RAR design was used to execute the study. The value of RAR in two-armed studies is quite high, and its use in the future is promising.

Keywords: Group sequential designs, beta-binomial, predicting accrual

1. INTRODUCTION

Bayesian adaptive clinical trials have become very popular over the last several decades because they can provide more timely results and save on resources relative to a classical trial design (Berry et al., 2010; Jiang et al., 2013). This is particularly evident in trials that randomize participants into more than two arms. In fact, a Bayesian adaptive design that utilizes response adaptive randomization (RAR) is not only more powerful (when there are at least three arms), but can also place more trial participants on the better performing arms during the conduct of the clinical trial (e.g. Gajewski et al., 2019; Viele et al., 2020). These are extremely attractive characteristics for investigators on the trial and provide great selling points to get support to conduct the trial (Wick et al., 2017).

Unfortunately, these designs can have reduced power in the two-armed trial (e.g. Azriel et al., 2012; Hey & Kimmelman, 2015). However, for the two-armed trial reported here, the benefit of allocating more participants to the better performing arm was a positive to the investigative team and National Institutes of Health reviewers, and later, to the Data and Safety Monitoring Board. Further, using a sufficient burn-in at the beginning of the trial, during which allocation to arms is 1:1, results in little loss in power (Viele et al., 2020).

With strong conviction we chose a trial design that utilized RAR. This choice was substantiated by simulations comparing it to a fixed trial design. Using this randomization design and data from our previous trial, the simulated study had 90% power to detect a reduction in early preterm birth (ePTB, <34 weeks gestation) from 4% to 1% of births with an estimated 938 births, a trial duration of 184 weeks, and 59% of the participants in the better performing group. A conventional equal allocation randomization trial would have 90% power, but be larger (1200 births), slower (230 weeks), and have a lower rate of participants assigned to the winning group (50%) (Carlson et al., 2017).

In this paper we describe the conduct of the ADORE trial (Assessment of DHA on Reducing Early Preterm Birth), focusing on the statistical conduct of the trial rather than the final results (Carlson et al., 2021) or the clinical trials management system (Mudaranthakam et al., in press). We explore the details of the trial conduct, including the prespecified interim analyses (CDER, 2019). In particular we focus on the impact the COVID-19 pandemic had on recruitment. We ultimately stopped enrollment because we were concerned over the safety of participants and clinical staff, and cognizant of the funding timeline. We show below that this decision was also supported by power calculations.

The remainder of the manuscript is structured as follows. In Section 2, we describe the trial design, an overview of the trial execution during the COVID-19 pandemic, and the decision to halt enrollment based on power. In Section 3 we describe the results and details of the trial execution up to the final endpoint, emphasizing the RAR benefit to the trial participants. Discussion is in Section 4 and concluding remarks are in Section 5.

2. METHODS

2.1. Summary of Trial Design

The prespecified design is summarized as follows and is detailed in the published study protocol (Carlson et al., 2017). The primary hypothesis of the study was that participants assigned to a prenatal supplement of 1000 mg DHA daily would have a lower rate of ePTB than those assigned to 200 mg daily. The maximum sample size was 1355 participants, and interim analyses were planned every 13 weeks of enrollment. The first interim analysis occurred after a burn-in of 300 participants, allocated equally to the two arms (standard of care 200 mg/day DHA versus 1000 mg/day DHA). After that, the randomization allocation ratios were updated (response adaptive randomization) every 13 weeks based on data from all births observed up to that point. Once 800 participants were enrolled, the trial had an opportunity to stop for success at each interim if the posterior probability (pp) that a group was best exceeded 0.99 for either group, or for short pp > 0.99. The maximum number of participants to be enrolled was 1355 (with an expected 1200 observed births due to dropout). The primary endpoint, early preterm birth, was binary: the count Yj was modeled conditional on the number of births nj as a binomial distribution, Yj ~ Binomial(nj, θj). The rate θj had the prior logit(θj) ~ N(−3.5, 1.5²), which, notably, is extremely close to θj ~ Beta(0.3, 4.7). The posterior probability (pp) that the high dose (arm 1) was better than the low dose (arm 2) was pp = Pr(θ1 < θ2 | data). We utilized an information-based formula for response adaptive randomization: the allocation to arm 1 is proportional to (pp · Var(θ1)/(n1 + 1))^(1/4), and the allocation to arm 2 is proportional to ((1 − pp) · Var(θ2)/(n2 + 1))^(1/4). To be more specific, we can break down the RAR allocation formula into two parts: pp^(1/4) (or (1 − pp)^(1/4)) and (Var(θj)/(nj + 1))^(1/4). The first part weights future participants toward the arm having a lower rate of early preterm births. The second part considers the nj births in arm j together with the posterior variance Var(θj). By adding a participant to this arm, the variance becomes approximately nj · Var(θj)/(nj + 1); thus the variance is reduced by Var(θj) − nj · Var(θj)/(nj + 1) = Var(θj)/(nj + 1). The information-based RAR formula therefore balances placing future participants on the better performing arm against encouraging more participants on arms with less information.
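To make the allocation rule concrete, the following sketch (an illustrative Python translation; the trial's own code was in R, and the function name and example counts here are hypothetical) estimates pp and Var(θj) by Monte Carlo under the Beta(0.3, 4.7) approximation to the prior, then forms the fourth-root weights:

```python
import numpy as np

def rar_allocation(y1, n1, y2, n2, a=0.3, b=4.7, draws=100_000, seed=1):
    """Information-based RAR weights for a two-armed beta-binomial trial.

    y_j = ePTB counts and n_j = births in arm j; each rate gets a Beta(a, b)
    prior (the approximation to the protocol's logit-normal prior noted above).
    """
    rng = np.random.default_rng(seed)
    th1 = rng.beta(a + y1, b + n1 - y1, draws)   # posterior draws, arm 1
    th2 = rng.beta(a + y2, b + n2 - y2, draws)   # posterior draws, arm 2
    pp = np.mean(th1 < th2)                      # Pr(arm 1 is better | data)
    w1 = (pp * th1.var() / (n1 + 1)) ** 0.25     # (pp·Var(θ1)/(n1+1))^(1/4)
    w2 = ((1 - pp) * th2.var() / (n2 + 1)) ** 0.25
    return w1 / (w1 + w2), w2 / (w1 + w2)        # normalized allocation ratios

# Hypothetical post-burn-in data: 3/79 ePTBs in arm 1, 6/83 in arm 2
r1, r2 = rar_allocation(y1=3, n1=79, y2=6, n2=83)
```

With no data the two weights are equal, reproducing the 1:1 burn-in allocation; as evidence accumulates for one arm, its weight grows, tempered by the fourth root so the allocation never swings too sharply.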

2.2. Overview of Trial Execution during the COVID-19 Pandemic

We begin with a brief recap of the conduct of the trial. As many investigators can attest, the execution of clinical trials has many layers of complexity, and the adaptive portion of our trial design added to that complexity (Brown et al., 2016). We had a minor deviation from the adaptive specification: the first interim analysis was performed after 301 participants were enrolled rather than the planned 300, because we did not want to pause enrollment while doing the analysis. The deviation was reviewed and documented by the study team under the supervision of the principal investigator.

The unexpected, forced pause in recruitment due to the COVID-19 pandemic occurred, coincidentally, right after the 1100th participant was enrolled and at the same time as our 11th interim analysis. All members of the research team were required to lock down at home, and recruiting centers forecasted potentially long restrictions on access to patients. We faced a difficult decision: should we stick to the goal of 1355 or accept a trial with only 1100 enrollees? Two key aspects weighing on our decision were that we did not know when research would resume, and that the trial funding was capped at five years. In the beginning of the pandemic, only critical COVID-related studies could operate. We decided to stop the trial based on a power analysis. Note that the issues presented here are not unique to the pandemic; the same considerations would arise in any trial that must halt recruitment early for other reasons.

2.3. The Decision to Halt Enrollment Based on Power

Table 1 summarizes the power calculations used to help inform the decision to stop the entire trial. The scenarios represent several alternative hypotheses justified from previous DHA clinical trials. The argument for permanently stopping enrollment was based on the “very likely” scenario, in which the reduction in sample size caused the power to drop only from 90% to 87%.

Table 1.

Power calculations for the full design that compared N=1100 to N=1355 enrolled.

Description     Scenario     Power, N=1100   Power, N=1355
Unlikely        3% vs 1%     59%             63%
Likely          3% vs 0.5%   87%             91%
Very unlikely   3% vs 2%     15%             27%
Very likely     4% vs 1%     87%             90%
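As a rough illustration of how a power entry in Table 1 can be approximated, the sketch below simulates the “very likely” scenario (4% vs 1%) in Python under two simplifying assumptions the actual design does not make: fixed 1:1 allocation and a single final look with success declared at pp > 0.99. It sketches the logic of the calculation rather than reproducing the design's operating characteristics.

```python
import numpy as np

def approx_power(n_total, p_ctrl=0.04, p_trt=0.01, sims=2000,
                 draws=4000, a=0.3, b=4.7, thresh=0.99, seed=7):
    """Monte Carlo power for a two-armed beta-binomial comparison,
    simplified to equal allocation and one final analysis."""
    rng = np.random.default_rng(seed)
    n = n_total // 2                       # births per arm under 1:1 allocation
    wins = 0
    for _ in range(sims):
        y_t = rng.binomial(n, p_trt)       # ePTB count, 1000 mg/day arm
        y_c = rng.binomial(n, p_ctrl)      # ePTB count, 200 mg/day arm
        th_t = rng.beta(a + y_t, b + n - y_t, draws)
        th_c = rng.beta(a + y_c, b + n - y_c, draws)
        pp = np.mean(th_t < th_c)          # Pr(high dose better | data)
        wins += pp > thresh                # success at this (only) look
    return wins / sims

power_1100 = approx_power(1100)
power_1355 = approx_power(1355)
```

Because the RAR updates, interim looks, and dropout are omitted, the resulting estimates will differ somewhat from the tabled 87% and 90%.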

3. RESULTS

3.1. Trial Execution Up to the Pandemic

Our standard operating procedure (SOP) for interim analyses specified that we never pause enrollment while performing an interim analysis. The potential enrollments were too valuable (for trial duration), and we believed it unfair to take the opportunity away from potential participants. Figure 1 shows the SOP for each of the interim analyses. Preparation for the interim analyses was ongoing, but the team began focused preparation 1.5 weeks before each was to take place. This is one reason why a time-based schedule (e.g., every 13 weeks) for interim analyses is important. Interim analyses are akin to performing several final analyses; thus data cleaning is a must and weeks of preparation are necessary. Further, several human checks are placed in the data cleaning process to ensure accurate data. Once the interim analysis time arrived, the team had two working days to perform the interim analysis, which included uploading the new allocation ratio. From interim initiation to allocation upload, all interims were accomplished within the two days.

Figure 1.

Standard operating procedure for each interim analysis. Expected due date (EDD) and date of birth (DOB) are used to calculate whether delivery was designated as an early preterm birth (<34 weeks) or not. Comprehensive research information system (CRIS) was the clinical trials management system used for the trial.

Figure 2 summarizes the progression of key clinical trial metrics across all interim analyses as a video and a final figure. The 1000 mg/day arm had a lower early preterm birth rate (ePTB, <34 weeks gestation) than the 200 mg/day arm across the interim analyses. This resulted in more deliveries in the group assigned to 1000 mg/day, due to more participants being allocated there because of its success. While the posterior probability that 1000 mg/day is better than 200 mg/day never reached 0.99, it got as high as 0.90 early in the trial. The final analysis calculated a posterior probability of 0.81, reported in the primary paper (Carlson et al., 2021). The final frame in Figure 2 reports the 95% highest posterior density interval for each of the arms.

Figure 2.

Download video file (7.8MB, mp4)

The ADORE interim analyses as an approximately 3-minute movie (.mp4 file), describing the number of deliveries by arm, the probability that 1000 mg/day is better than 200 mg/day on the primary endpoint, and the 95% highest posterior density (HPD) intervals for early preterm birth (ePTB) across arms. The 12th interim analysis is the final analysis. The two arms are colored red for 1000 mg/day and blue for 200 mg/day.

Figure 3 uses a Bayesian accrual model (Gajewski, Simon, & Carlson, 2008) and the associated R package (Jiang et al., 2016) to predict enrollment up to the time we stopped the trial. Enrollment ran slightly ahead of projection early on but was right on target for the 1100 that actually occurred, meaning enrollment was not a concern prior to the pandemic. It is a challenge to say what the projections would have been at the point recruitment was stopped by the pandemic. Some references indicate as much as a 75% drop in enrollment rates across all clinical trials (Unger et al., 2020). We originally projected 1355 participants enrolled across 3 years. The impact of the pandemic on enrollment began around March 20, 2020. If our projection was to gain 255 enrollments in 1.5 years, then the 95% interval for the completion date would be 18 to 24 months [using the “accrual” package in R: accrual.T.plot(n=1100/2, T=45, P=1, m=0, tm=0, np=255, Method=“Informative Prior”)]. This is outside of our target funding range, and “no cost” extensions are not guaranteed. In addition, if the accrual rate dropped 75% due to the pandemic, the 95% interval for accrual completion would be 22 to 33 months later.
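The completion-date interval quoted above comes from the accrual package's informative-prior method. A simplified Python analogue of the underlying idea, a conjugate gamma posterior on the monthly enrollment rate followed by a posterior predictive draw of the waiting time to the remaining target, is sketched below. The prior rate and prior weight are illustrative assumptions, so the interval will not match the paper's 18-to-24-month figure.

```python
import numpy as np

def predict_completion(n_obs, t_obs, n_remaining, prior_rate,
                       prior_weight_months, draws=100_000, seed=11):
    """Posterior-predictive months needed to enroll n_remaining more participants.

    Enrollment is modeled as Poisson with rate lam/month. A gamma prior worth
    `prior_weight_months` months of data at `prior_rate`/month is updated with
    n_obs enrollments over t_obs months (a simplified stand-in for the
    informative-prior model of Gajewski, Simon, & Carlson, 2008).
    """
    rng = np.random.default_rng(seed)
    a = prior_rate * prior_weight_months + n_obs   # gamma shape
    b = prior_weight_months + t_obs                # gamma rate (in months)
    lam = rng.gamma(a, 1 / b, draws)               # posterior draws of rate
    t_more = rng.gamma(n_remaining, 1 / lam)       # waiting time to n_remaining
    return np.percentile(t_more, [2.5, 50, 97.5])

# Hypothetical inputs: 1100 enrolled over 45 months, 255 remaining,
# with an assumed pandemic-slowed prior rate of ~14 per month.
lo, med, hi = predict_completion(1100, 45, 255, prior_rate=14,
                                 prior_weight_months=45)
```

The prior weight controls how strongly the assumed slowdown pulls the posterior rate below the historical 1100/45 per month; the package's actual parameterization differs.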

Figure 3.

Prediction of accrual of 1100 participants at 45 months by ‘months since first participant enrolled’ (solid red), summarized as the posterior mean of the prediction (solid) and 95% prediction intervals (dashed).

3.2. Impact of RAR on Allocation and Outcomes

Next we look in more detail at the response adaptive randomization throughout the trial. The calculations for the formulas used in the RAR allocation at the beginning of the trial and at each interim analysis are presented in Table 2. In the beginning, there is equal allocation because pp = 0.5, Var(θ1) = Var(θ2), and n1 = n2 = 0. Following the burn-in of 301 participants, the allocation favors arm 1 because pp > 0.5, despite (Var(θ2)/(n2 + 1))^(1/4) > (Var(θ1)/(n1 + 1))^(1/4). This trend of arm 1 having higher RAR than arm 2 continues until interim 7, when pp regresses toward 0.5 and (Var(θ2)/(n2 + 1))^(1/4) becomes high enough to balance the allocation. This happens because fewer births occurred in arm 2, and the higher early preterm birth rate there resulted in higher posterior variance. Interim analysis 11 was back to equal allocation before approaching the final analysis. The final analysis resulted in 576 participants enrolled in arm 1 and 524 in arm 2. This ratio, 576/1100 = 0.524, was lower than in the ‘very likely’ simulations in Table 3, which averaged 0.587. In fact, it was closer to the average allocation of 0.528 for the ‘very unlikely’ scenario. Also shown in Table 3, the ‘very unlikely’ simulations had an average of 61 more subjects enrolled in the better performing arm, which is very close to the observed 52.

Table 2.

Results of response adaptive randomization (RAR) at the beginning of the study (Interim = 0) as well as at each interim analysis. The RAR weight for arm 1 is proportional to (pp · Var(θ1)/(n1 + 1))^(1/4), where pp is the posterior probability that arm 1 is better than arm 2, Var(θ1) is the posterior variance, and n1 is the number of births. N1 and N2 are the totals enrolled, and n1 and n2 are the total births observed.

Interim  N1  N2  n1  n2  pp^(1/4)  (1−pp)^(1/4)  (Var(θ1)/(n1+1))^(1/4)  (Var(θ2)/(n2+1))^(1/4)  RAR1  RAR2
0 0 0 0 0 0.84 0.84 0.3105 0.3122 0.50 0.50
1 150 151 79 83 0.91 0.75 0.0387 0.0427 0.52 0.48
2 197 189 116 114 0.97 0.56 0.0295 0.0393 0.56 0.44
3 245 228 153 151 0.97 0.60 0.0277 0.0339 0.57 0.43
4 294 264 196 184 0.96 0.61 0.0253 0.0308 0.56 0.44
5 341 299 246 217 0.91 0.75 0.0239 0.0274 0.52 0.48
6 385 338 287 255 0.91 0.75 0.0224 0.0252 0.52 0.48
7 428 377 333 291 0.88 0.79 0.0214 0.0237 0.50 0.50
8 464 414 367 319 0.88 0.79 0.0200 0.0222 0.50 0.50
9 497 448 409 360 0.88 0.79 0.0190 0.0208 0.51 0.49
10 541 490 443 399 0.88 0.80 0.0178 0.0194 0.50 0.50
11 576 524 479 432 0.88 0.80 0.0168 0.0182 0.50 0.50
Final 576 524
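Each RAR column in Table 2 is recoverable from the two component columns, since an arm's unnormalized weight is the product pp^(1/4) · (Var(θj)/(nj+1))^(1/4) (with the (1−pp)^(1/4) analogue for arm 2). A quick Python check against a few tabulated rows:

```python
# Each row: (pp^(1/4), (1-pp)^(1/4),
#            (Var(θ1)/(n1+1))^(1/4), (Var(θ2)/(n2+1))^(1/4)) from Table 2
rows = {
    1:  (0.91, 0.75, 0.0387, 0.0427),
    3:  (0.97, 0.60, 0.0277, 0.0339),
    11: (0.88, 0.80, 0.0168, 0.0182),
}
for interim, (pp4, qq4, v1, v2) in rows.items():
    w1, w2 = pp4 * v1, qq4 * v2        # unnormalized allocation weights
    rar1 = w1 / (w1 + w2)              # normalized allocation to arm 1
    print(interim, f"{rar1:.2f}")
```

Up to rounding of the tabulated inputs, these reproduce the RAR1 values 0.52, 0.57, and 0.50.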

Table 3.

For the N=1100 design, the proportion of trial participants enrolled into the better performing arm (%Best), with the average number of additional participants enrolled in the better arm.

Description     Scenario     %Best   Average More Enrolled in Better Arm
Unlikely        3% vs 1%     0.558   117
Likely          3% vs 0.5%   0.576   140
Very unlikely   3% vs 2%     0.528   61
Very likely     4% vs 1%     0.587   162

Figure 4 further displays, for the ‘very unlikely’ scenario, the simulated %Better Arm and the average number of additional participants enrolled in the better arm, with an overlay of the observed values from our trial.

Figure 4.

10,000 trials simulated under the ‘very unlikely’ scenario, with an overlay of the observed values in the trial for the percentage allocated to the better arm (%Better) and the number of additional participants enrolled in the better arm (negative implies more enrolled in the worse arm).

3.3. How many ePTBs did we prevent with RAR?

At the end of the study, the birth and ePTB counts in the 1000 mg/day and 200 mg/day arms (from 1100 enrolled participants) were n1 = 540, y1 = 9, n2 = 492, and y2 = 12. Under the RAR design, the lower ePTB rate favored allocation of participants to the 1000 mg/day arm over the 200 mg/day arm, resulting in 540 and 492 births respectively, i.e., 48 more births in the 1000 mg/day arm. A 1:1 allocation throughout would move 24 of the births from the 1000 mg/day arm to the 200 mg/day arm. Let y** ~ Hypergeometric(24, 9, 540), representing the number of ePTBs among the 24 births switching from 1000 mg to 200 mg. Using the beta-binomial model, the posterior predictive rate of ePTB in the 200 mg/day arm is p2* = (0.3 + 12)/(0.3 + 4.7 + 492) = 0.0247, so the number of ePTBs among the 24 births switched to the 200 mg/day arm is y2* ~ Binomial(24, 0.0247). The total number of predicted ePTBs under a switch from RAR to 1:1 allocation is then z* = 12 + y2* + 9 − y**. The results did not shift much, mainly because the allocation ended up fairly close to 1:1, since 540/(540 + 492) = 0.52. The probability that more ePTBs would occur under 1:1 allocation is Pr(z* > 21) = 0.33, the probability that 1:1 would have the same number of ePTBs is Pr(z* = 21) = 0.46, and least likely is that 1:1 would have fewer ePTBs, Pr(z* < 21) = 0.21. The indication is that the RAR design conferred a benefit of fewer expected ePTBs.
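The probabilities quoted above can be checked by Monte Carlo (the appendix gives an R version of the same calculation); below is an equivalent Python sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
S = 200_000
# y**: ePTBs among 24 births hypothetically moved out of the 1000 mg/day arm
# (24 draws without replacement from 540 births containing 9 ePTBs)
y_star_star = rng.hypergeometric(9, 540 - 9, 24, S)
# y2*: predicted ePTBs for those 24 births under the 200 mg/day arm's
# posterior predictive rate (0.3 + 12) / (0.3 + 4.7 + 492) ≈ 0.0247
y2_star = rng.binomial(24, (0.3 + 12) / (0.3 + 4.7 + 492), S)
z = 12 + y2_star + 9 - y_star_star   # total ePTBs under 1:1 allocation
p_more = np.mean(z > 21)             # 1:1 would have had more ePTBs
p_same = np.mean(z == 21)            # same number as the observed 21
p_less = np.mean(z < 21)             # 1:1 would have had fewer ePTBs
```

The estimates should land near the reported 0.33, 0.46, and 0.21.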

4. DISCUSSION

RAR is impactful through its efficient design and can also lessen subjects' risk. Indeed, our own research in this area has been quite demonstrative (Gajewski et al., 2015; Gajewski et al., 2019). There have been critiques over the years about the potential pitfalls of RAR for two-armed trial designs. Many of those concerns are alleviated by designing the trial with a proper burn-in, that is, having an adequate number of participants allocated 1:1 before initiating the RAR switch. This keeps the design close to optimal power while also benefiting from treating more participants on the better performing arm. We had hypothesized a much larger benefit in our trial because the hypothesized effect size was much larger than what was observed. The biggest reason for this, we believe, is that the use of prenatal DHA supplements by US women increased over the past decade as these supplements became widely available. During this same time, the value of DHA during pregnancy has been increasingly appreciated, in part due to research we and others have reported (Carlson et al., 2013; Carlson et al., 2021). We routinely measure the DHA status of women enrolling in our trials and define low DHA status at enrollment as <6% of red blood cell phospholipid fatty acids as DHA and high status as ≥6%. In our KUDOS trial, conducted from 2006 to 2010, 9.1% of 300 participants had high DHA status at enrollment, whereas in ADORE high-status mothers constituted 53.4% of the cohort of 1100 women enrolled. Coupled with this, the effect of 1000 mg versus 200 mg DHA on the rate of ePTB for high-status mothers (1.4% and 1.1%, respectively) is smaller than for low-status mothers (2.0% and 4.1%, respectively) (Carlson et al., 2021). This subgroup analysis was not the primary analysis, but given this information, one design change we could justify would be to focus the primary analysis on the low DHA status subgroup. However, it was not known how prenatal supplements with different DHA concentrations had changed the baseline status of pregnant women until we were well into this clinical trial.

Note that the subgroup choice is justified by other recently published research. During the execution of ADORE, we learned the results of a concurrent trial in Australia, ORIP (Makrides et al., 2019), which enrolled 5517 pregnant women randomized to either a placebo (0 mg/day of DHA) or a high dose of DHA (800 mg/day) that also included 100 mg of another long-chain omega-3 fatty acid, eicosapentaenoic acid (EPA). This was a frequentist trial and, like ours, had a primary endpoint comparing ePTB rates across arms. Their primary result was a p-value of 0.50, with 2.2% ePTB in the high-dose arm and 2.0% ePTB in the placebo arm. However, in a follow-up analysis (Simmonds et al., 2020), they reported that participants entering the ORIP trial with low DHA status benefited from a high dose of DHA during pregnancy, but participants with high DHA status at enrollment did not.

Given the current results, if we repeated the study with a maximum sample size of 2500 (~5% type I error, more than double the accrual rate, 600 enrolled before RAR starts, and 1600 enrolled before success can be declared) using the observed effect sizes (ePTB rates of 2.5% and 1.7% for 200 mg and 1000 mg, respectively), the operating characteristics would be 2397 average enrollment, 24% power, 53.9% on the better arm, and 186 weeks to conduct. Focusing the study instead on women with low DHA status at enrollment, the effect sizes would be 4.1% and 2.0%, and the operating characteristics would be vastly improved: 2059 average enrollment, 81% power, 58.9% on the better arm, and 137 weeks to conduct; a fixed 1:1 trial of 2500 would have 81% power with only 50% on the better arm. Further improvements to the operating characteristics could be made using a mixture of normals model (Yelland et al., 2016; Gajewski et al., 2016; Carlson et al., 2018).

5. Conclusion

We emphasize that one of the major reasons to use RAR in a two-armed study is its attractiveness to reviewers and stakeholders, owing to enrolling more participants in the better performing group. The success we had with ADORE has reinvigorated our strong desire to conduct RAR across a broad range of trials with various numbers of study arms, including the two-armed study. This conclusion does not hold in all situations, and simulation should be used to compare the operating characteristics of RAR to other designs, including, for example, the fixed 1:1 trial design.

Grant support:

Research reported in this publication was supported by the National Institutes of Health Grant R01HD083292.

Appendix: R code

# Final analysis: primary analysis using the Beta(0.3, 4.7) prior
SS=100000
P1=rbeta(SS,9+.3,540-9+4.7)    # posterior draws of ePTB rate, 1000 mg/day arm
P2=rbeta(SS,12+.3,492-12+4.7)  # posterior draws of ePTB rate, 200 mg/day arm
mean(P2>P1)                    # Pr(1000 mg/day is better | data)
library(HDInterval)
c(mean(P1),hdi(P1,.95))*100    # posterior mean and 95% HPD interval, arm 1 (%)
c(mean(P2),hdi(P2,.95))*100    # posterior mean and 95% HPD interval, arm 2 (%)

# How many ePTBs did we prevent with RAR?
yss=rhyper(10000, 9, 540-9, 24)  # y**: ePTBs among 24 births moved out of the 1000 mg arm
ys=rbinom(10000, 24, .0247)      # y2*: predicted ePTBs for those 24 births on 200 mg
z=12+ys+9-yss                    # z*: total ePTBs under 1:1 allocation
mean(z>21); mean(z==21); mean(z<21)  # compare with the 21 ePTBs observed under RAR

Footnotes


ETHICS APPROVAL AND INFORMED CONSENT DETAIL

The ADORE study was approved by the Institutional Review Board (IRB) at The University of Kansas Medical Center under #STUDY00003455. Subjects (participants) were able either to read or to orally understand the study in English or Spanish and signed an informed consent form. The consent form was available in both English and Spanish, and translators were available to obtain informed consent from potential subjects who were not fluent in English.

References

  1. Azriel D, Mandel M, Rinott Y (2012), “Optimal allocation to maximize the power of two-sample tests for binary response,” Biometrika, 99(1), 101–113.
  2. Berry SM, Carlin BP, Lee JJ, and Mueller P (2010), Bayesian Adaptive Methods for Clinical Trials, Chapman & Hall.
  3. Brown AR, Gajewski BJ, Aaronson LS, Mudaranthakam DP, Hunt SL, Berry SM, Quintana M, Pasnoor M, Dimachkie MM, Jawdat O, Herbelin L, Barohn RJ (2016), “A Bayesian Comparative Effectiveness Trial in Action: Developing a Platform for Multi-Site Study Adaptive Randomization,” Trials, 17(1), 428.
  4. Carlson SE, Colombo J, Gajewski BJ, Gustafson KM, Mundy D, Yeast J, Georgieff MK, Markley LA, Kerling EH, & Shaddy DJ (2013), “DHA supplementation and pregnancy outcomes,” The American Journal of Clinical Nutrition, 97(4), 808–815.
  5. Carlson SE, Gajewski BJ, Alhayek S, Colombo J, Kerling EH, Gustafson KM (2018), “Dose-Response Relationship Between Docosahexaenoic Acid (DHA) Intake and Lower Rates of Early Preterm Birth, Low Birth Weight and Very Low Birth Weight,” Prostaglandins, Leukotrienes and Essential Fatty Acids (PLEFA), 138, 1–5.
  6. Carlson SE, Gajewski BJ, Valentine CJ, Rogers LK, Weiner CP, DeFranco DO, Buhimschi CS (2017), “Assessment of DHA on reducing early preterm birth: the ADORE randomized controlled trial protocol,” BMC Pregnancy and Childbirth, 17(1), 62.
  7. Carlson, Gajewski, Valentine, Kerling, Weiner, Cackovic, Buhimschi, Rodgers, Sands, Brown, Mudaranthakam, Crawford, DeFranco (2021), “Higher dose docosahexaenoic acid supplementation during pregnancy reduces early preterm birth: a randomised, double-blind, adaptive-design superiority trial,” EClinicalMedicine.
  8. Gajewski BJ, Berry SM, Quintana M, Pasnoor M, Dimachkie M, Herbelin L, and Barohn R (2015), “Building Efficient Comparative Effectiveness Trials through Adaptive Designs, Utility Functions, and Accrual Rate Optimization: Finding the Sweet Spot,” Statistics in Medicine, 34(7), 1134–1149.
  9. Gajewski BJ, Reese CS, Colombo J, & Carlson S (2016), “Commensurate Priors on a Finite Mixture Model for Incorporating Repository Data in Clinical Trials,” Statistics in Biopharmaceutical Research, 8(2), 151–160.
  10. Gajewski B, Simon S, and Carlson S (2008), “Predicting Accrual in Clinical Trials with Bayesian Posterior Predictive Distributions,” Statistics in Medicine, 27(13), 2328–2340.
  11. Gajewski, Statland, Barohn (2019), “Using adaptive designs to avoid selecting the wrong arms in multi-arm comparative effectiveness trials,” Statistics in Biopharmaceutical Research, 11(4).
  12. Hey S, Kimmelman J (2015), “Are outcome-adaptive allocation trials ethical?,” Clinical Trials, 12(2), 102–106.
  13. Jiang Y, Guarino G, Ma S, Simon S, Mayo MS, Raghavan R, & Gajewski BJ (2016), “Bayesian Accrual Prediction for Interim Review of Clinical Studies: Open Source R Package and Smart Phone Application,” Trials, 17(1), 336.
  14. Jiang F, Lee JJ, & Muller P (2013), “A Bayesian decision-theoretic sequential response-adaptive randomization design,” Statistics in Medicine, 32, 1975–1994.
  15. Mudaranthakam, Brown, Kerling, Carlson, Valentine, Gajewski (in press), “A Successful synchronized orchestration of an investigator-initiated multi-center trial through use of a clinical trial management system and team approach,” JMIR Formative Research.
  16. Makrides M, Best K, Yelland L, McPhee A, Zhou S, Quinlivan J, et al. (2019), “A Randomized Trial of Prenatal n-3 Fatty Acid Supplementation and Preterm Delivery,” The New England Journal of Medicine, 381(11), 1035–1045.
  17. Simmonds LA, Sullivan TR, Skubisz M, Middleton PF, Best KP, Yelland LN, Quinlivan J, Zhou SJ, Liu G, McPhee AJ, Gibson RA, Makrides M (2020), “Omega-3 fatty acid supplementation in pregnancy—baseline omega-3 status and early preterm birth: exploratory analysis of a randomised controlled trial,” BJOG: An International Journal of Obstetrics & Gynaecology, 127(8), 975–981.
  18. Unger JM, Blanke CD, LeBlanc M, Hershman DL (2020), “Association of the Coronavirus Disease 2019 (COVID-19) Outbreak With Enrollment in Cancer Clinical Trials,” JAMA Network Open, 3(6), e2010651.
  19. U.S. Department of Health and Human Services Food and Drug Administration, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER) (2019), Adaptive Designs for Clinical Trials of Drugs and Biologics: Guidance for Industry.
  20. Viele K, Broglio K, McGlothlin A, Saville BR (2020), “Comparison of methods for control allocation in multiple arm studies using response adaptive randomization,” Clinical Trials, 17(1), 52–60.
  21. Wick J, Berry SM, Yeh H, Choi W, Pacheco CM, Daley C, Gajewski BJ (2017), “A Novel Evaluation of Optimality for Randomized Controlled Trials,” Journal of Biopharmaceutical Statistics, 27(4), 659–672.
  22. Yelland L, Gajewski B, Colombo J, Gibson R, Makrides M, Carlson S (2016), “Predicting the effect of maternal docosahexaenoic acid (DHA) supplementation to reduce early preterm birth in Australia and the United States using results of within country randomized controlled trials,” Prostaglandins, Leukotrienes and Essential Fatty Acids (PLEFA), 112, 44–49.
