Clinical Infectious Diseases: An Official Publication of the Infectious Diseases Society of America
. 2017 Nov 7;66(7):1140–1146. doi: 10.1093/cid/cix907

Adaptive Designs for Clinical Trials: Application to Healthcare Epidemiology Research

W Charles Huskins 1,, Vance G Fowler Jr 2, Scott Evans 3
PMCID: PMC6018921  PMID: 29121202

Compared with conventional designs, adaptive trial designs provide advantages and flexibility that may enable a clinical trial to better achieve its primary objective. However, these designs also have limitations that investigators must consider carefully in applying them.

Keywords: research design, clinical trials, randomized controlled trials, infection control, drug resistance, microbial

Abstract

Clinical trials with adaptive designs use data that accumulate during the course of the study to modify study elements in a prespecified manner. The goal is to provide flexibility such that a trial can serve as a definitive test of its primary hypothesis, preferably in a shorter time period, involving fewer human subjects, and at lower cost. Elements that may be modified include the sample size, end points, eligible population, randomization ratio, and interventions. Accumulating data used to drive these modifications include the outcomes, subject enrollment (including factors associated with the outcomes), and information about the application of the interventions. This review discusses the types of adaptive designs for clinical trials, emphasizing their advantages and limitations in comparison with conventional designs, and opportunities for applying these designs to healthcare epidemiology research, including studies of interventions to prevent healthcare-associated infections, combat antimicrobial resistance, and improve antimicrobial stewardship.


Clinical trials provide high-quality evidence necessary to improve healthcare. However, they can be complex, lengthy, and expensive, and the results are often inconclusive. Clinical trials with adaptive designs may help generate the evidence needed to improve healthcare more effectively and efficiently [1].

The goal of an adaptive design is to provide flexibility to enable a trial to serve as a definitive test of its primary hypothesis, preferably in a shorter time period, involving fewer human subjects, and at lower cost. To accomplish this goal, an adaptive trial uses data that accumulate during the study to modify study elements in a prespecified manner [2–4]. The nature of the change is driven by the accumulating data, but the plan for the change is specified in advance and by design. Elements that may be modified include the sample size, end points, eligible populations, randomization ratio, and interventions. Accumulating data used to drive these modifications include the outcomes, subject enrollment (including factors associated with the outcomes), and information about the application of the interventions.

However, adaptive designs present challenges: they are not always efficient, carry threats to trial integrity, and can be complex to implement. The planning process may be prolonged by the need to specify how study elements will be modified in response to accumulating data. Design changes may reveal interim results to investigators, clinicians, and study subjects. This unmasking may introduce operational bias, resulting in changes in the recruitment or retention of subjects, adherence with the intervention, or the objectivity of outcome assessments that compromise the validity of the study [5]. Data monitoring committees (DMCs) can reduce operational bias by examining data (eg, treatment effects) that trial sponsors and investigators should not review during the trial. However, DMCs should not be responsible for redesigning the trial after reviewing unmasked data [6–8]. Design changes may be based on observed effects that are ultimately determined to be clinically irrelevant. Statistical methods for adaptive designs are more complex and must account for the inflation of type I (α) error (false-positive interpretation of trial results) associated with interim analyses [9]. Modifying a study element during the course of the trial may raise ethical concerns and complicate informed consent [10]. Finally, it may be difficult to estimate the cost of the trial's adaptations in advance.
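The type I error inflation from repeated, unadjusted interim looks can be made concrete with a short simulation. The sketch below (plain Python; all parameters are illustrative, not taken from any cited trial) tests a null effect at 5 looks, each at an unadjusted two-sided α of .05, and estimates the probability of a false-positive result at any look:

```python
import random
import statistics

def simulate_type1_inflation(n_trials=2000, n_per_look=100, looks=5,
                             alpha=0.05, seed=1):
    """Estimate the overall type I error when a null effect is tested
    at several interim looks, each at an unadjusted two-sided alpha."""
    z_crit = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_trials):
        total, n = 0.0, 0
        for _ in range(looks):
            # accrue another block of outcomes under the null (mean 0, SD 1)
            for _ in range(n_per_look):
                total += rng.gauss(0.0, 1.0)
                n += 1
            z = total / n ** 0.5  # z-statistic for H0: mean = 0
            if abs(z) > z_crit:
                rejections += 1
                break
    return rejections / n_trials
```

With these settings the simulated error rate lands well above the nominal .05 (the classic analytic value for 5 unadjusted looks is roughly .14), which is why adaptive methods must explicitly control the error spent at each analysis.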

Interest in adaptive designs comes from the pharmaceutical industry, the Food and Drug Administration, the National Institutes of Health (NIH), and academia [2–4, 11–13]. Although trials related to drug and device development were the initial focus, adaptive designs are applicable to a broad swath of clinical research including many types of clinical trials, comparative effectiveness research, large-scale pragmatic trials, and interventions to improve quality of care. Guidelines for reporting studies using adaptive designs have been described (Table 1) [14]. An extension of the consolidated standards of reporting trials statement for adaptive trials is under development.

Table 1.

Guidelines for the Reporting of Adaptive Trials*

Describe:
- The adaptation
- Whether the adaptation was planned or unplanned
- The rationale for the adaptation
- When the adaptation was made
- The data on which the adaptation was based and whether the data were unblinded
- The planned process for the adaptation, including who made the decision regarding the adaptation
- Deviations from the planned process
- Consistency of results before vs after the adaptation

Discuss:
- Potential biases induced by the adaptation
- Adequacy of firewalls to protect against operational bias
- The effects on error control and the multiplicity context

*Adapted from Evans and Ting [14].

The purposes of this review are to discuss (1) types of adaptive clinical trial designs, emphasizing advantages and limitations in comparison to conventional designs; and (2) opportunities for applying adaptive designs to healthcare epidemiology research, including studies of interventions to prevent healthcare-associated infections, combat antimicrobial resistance, and improve antimicrobial stewardship.

TYPES OF ADAPTIVE DESIGNS FOR CLINICAL TRIALS

For many years, exploratory phase I studies have used adaptive designs to adjust sample sizes and to eliminate dosing regimens that are not tolerated or are ineffective [15, 16]. Subsequent work has applied adaptive designs to confirmatory studies examining the efficacy and safety of interventions [4, 11–13]. Commonly used designs are outlined in Table 2 and discussed in the following subsections, with examples from the infectious diseases literature. A critical factor in this discussion is whether the adaptive design relies on analyses that use blinded versus unblinded outcome data, with the latter posing a much greater threat to trial integrity.

Table 2.

Study Design Issues Identified During Trial Planning and Types of Adaptive Designs to Potentially Address These Issues

Study design issue identified in trial planning, with the adaptive design that may address it*:

- Imprecise estimate of the control group response rate or of variation in responses: sample size adjustment
- Imprecise magnitude and/or precision of the effect size: sample size adjustment; predicted intervals
- Uncertainty regarding the subjects most likely to experience a benefit or a toxicity: population enrichment
- Uncertainty regarding the optimal dose of a new drug to assess its efficacy: seamless phase II/III trial; multiarm, multistage trial
- Multiple drugs, drug combinations, or treatment or testing strategies need to be evaluated in a consistent and efficient manner: multiarm, multistage trial; platform trial
- An uncommon or rare condition makes it difficult to recruit sufficient subjects: umbrella or basket trial
- Uncertainty or evolution in the optimal end point(s) to evaluate efficacy or safety: changing end points

*See text for discussion of the advantages and limitations of specific designs.

Group Sequential Designs

A group sequential trial analyzes accumulating data and uses prespecified criteria to determine whether the trial should be terminated early based on interim evidence of efficacy, harm, or futility, while preserving statistical error rates through error-spending strategies and sample size adjustments. Group sequential designs allow DMCs to assess unblinded interim safety or efficacy results to decide whether to stop or continue a trial. We do not regard these as adaptive designs here, because early termination does not involve modifying a study element and continuing the trial with the revised methods. The adaptive designs discussed in the sections that follow can, however, be incorporated into a group sequential trial.
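The error-preservation idea can be illustrated with a simulation sketch. The code below (illustrative only, not a production group sequential calculator) searches by bisection for a constant per-look critical value, in the spirit of a Pocock boundary, such that the overall simulated type I error across 3 looks stays near .05:

```python
import random

def overall_alpha(z_bound, looks=3, n_per_look=50, n_trials=3000, seed=2):
    """Simulated probability, under the null, of the z-statistic crossing
    a constant boundary at any of the interim looks."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        total = 0.0
        for k in range(1, looks + 1):
            for _ in range(n_per_look):
                total += rng.gauss(0.0, 1.0)
            if abs(total / (k * n_per_look) ** 0.5) > z_bound:
                hits += 1
                break
    return hits / n_trials

def constant_boundary(alpha=0.05, looks=3, tol=0.002):
    """Bisect for the constant per-look critical value that keeps the
    overall simulated type I error near alpha (Pocock-style boundary)."""
    lo, hi = 1.5, 4.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if overall_alpha(mid, looks=looks) > alpha:
            lo = mid  # boundary too permissive; raise it
        else:
            hi = mid
    return (lo + hi) / 2
```

The result should land near the published Pocock constant for 3 looks (about 2.29), versus 1.96 for a single analysis; the gap is the statistical price paid for the interim looks.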

Predicted Intervals

Prediction and predicted intervals (PIs) augment interim analyses of group-sequential designs by considering the magnitude and precision of the effect size estimate associated with trial continuation [17, 18]. The confidence interval if the trial were to continue is predicted conditional on the interim data and assumptions about data yet to be collected (eg, current trends, best/worst case scenarios). PIs convey information regarding the effect size, allowing assessment of both clinical and statistical significance. The gain in precision with continued enrollment can be assessed by comparing the width of the confidence interval based on interim data to the width of the PI. Examples of studies using PI include multicountry trials of treatments for human immunodeficiency virus (HIV) infection and tuberculosis [19, 20].
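A rough sketch of the PI idea follows; the formula assumes the interim estimate and its standard error carry forward under the current trend, with the standard error shrinking as 1/√n (this is a simplification for illustration, not the exact method of the cited trials):

```python
import statistics

def predicted_interval(diff_hat, se_interim, n_interim, n_final, level=0.95):
    """Predict the confidence interval at full enrollment, assuming the
    interim effect estimate holds (current trend) and that the standard
    error shrinks in proportion to 1/sqrt(n)."""
    z = statistics.NormalDist().inv_cdf(0.5 + level / 2)
    se_final = se_interim * (n_interim / n_final) ** 0.5
    return diff_hat - z * se_final, diff_hat + z * se_final
```

For example, an interim risk difference of 0.08 (SE 0.05) at 200 of 500 planned subjects yields a predicted final interval of roughly (0.02, 0.14); comparing its width with the width of the interim confidence interval shows the precision to be gained by continuing enrollment.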

Sample Size Adjustment Designs

Sample size calculations are based on estimates of multiple parameters related to the study end point, which may be unknown or estimated imprecisely during the planning stage [4]. Given this uncertainty, sample size assumptions can be reevaluated using interim data collected during the trial. For example, a cluster randomized trial examining the efficacy of ring vaccination using a novel Ebola vaccine included a plan to adjust the number of clusters based on unblinded analysis of transmission rates within clusters and vaccine treatment efficacy [21].

A sample size adjustment based on nuisance parameters (eg, the control group response rate or the variance of a continuous outcome) is straightforward, provided the frequency of interim analysis is limited so as not to substantially inflate type I error. For example, if resizing based on a variance, researchers can estimate the number of patients needed to estimate the variance with an acceptable level of precision and then plan an interim analysis to adjust the sample size after gathering sufficient data. The preferred approach is to use pooled blinded data, since procedures using blinded data generally have good operating characteristics, and the use of blinded data minimizes type I error inflation and avoids the operational bias associated with reviewing unblinded results. The overall type I error can be controlled by changing the test statistic, with any sample size change, to one that partitions the data before and after the adjustment and then applies weights to each component such that type I error is preserved [22].
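A blinded nuisance-parameter adjustment for a two-arm comparison of means can be sketched as follows. The function recomputes the per-arm sample size from the pooled (blinded) standard deviation observed at the interim analysis, using the standard normal-approximation formula; the numbers in the usage example are illustrative:

```python
import math
import statistics

def reestimate_n_per_arm(pooled_sd, delta, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-arm comparison of means, recomputed
    from the pooled (blinded) SD observed at the interim analysis."""
    nd = statistics.NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_b = nd.inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) * pooled_sd / delta) ** 2)
```

If the trial was planned around an SD of 1.0 but blinded pooled data suggest an SD of 1.3, the per-arm target grows from 85 to 143 for the same detectable difference of 0.5; because only pooled data are used, no treatment effect can be back-calculated from the new target.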

Sample size adjustment based on the observed treatment effect is more complex because it involves review of unblinded results. This approach is subject to operational bias because knowledge of the new sample size may allow back-calculation of the observed treatment effect even by persons peripherally involved in the study. To protect against operational bias, the methods for recalculation can be put into a “closed” protocol available only to the DMC. This adjustment extracts a greater statistical price through inflation of the type I error. In addition, it is important to assess the clinical relevancy of the observed treatment effect. For example, if the treatment effect is much smaller than expected, it may be more appropriate to terminate the study rather than to increase the sample size substantially.

A popular approach is to make adjustments based on conditional power by dividing the observed conditional power into 3 zones: favorable, promising, and unfavorable [23]. If results fall in the favorable or unfavorable zone, the sample size is unchanged. If results fall within the promising zone, the sample size is adjusted to bring the conditional power to an acceptable level. Although this method can be used, it has been shown to be less efficient than a standard group sequential design powered for the target effect [24]. Methods for adaptive sample size calculation in trials with coprimary end points (eg, trials that evaluate a clinical outcome and an antibiotic use outcome as coprimary) have also been developed [25–29].
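A hedged sketch of the promising-zone logic follows; the zone cutoffs below are illustrative placeholders rather than the published values of the cited method [23]. Conditional power is computed under the current-trend assumption that the drift implied by the interim z-statistic continues to the final analysis:

```python
import statistics

def conditional_power(z_interim, info_frac, z_alpha=1.959964):
    """Conditional power at the final analysis under the current-trend
    assumption: the drift implied by the interim z-statistic is assumed
    to continue through the remaining information fraction."""
    drift = z_interim / info_frac ** 0.5
    return statistics.NormalDist().cdf(
        (drift - z_alpha) / (1 - info_frac) ** 0.5)

def zone(cp, promising_range=(0.36, 0.80)):
    """Classify an interim result; these cutoffs are illustrative only."""
    if cp < promising_range[0]:
        return "unfavorable"
    if cp < promising_range[1]:
        return "promising"
    return "favorable"
```

At half the information (info_frac 0.5), an interim z of 1.5 gives conditional power near 0.59 (promising zone), whereas a z of 1.0 gives about 0.22 (unfavorable), so only the former would trigger a sample size increase.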

Multiarm Multistage Designs

A multiarm multistage (MAMS) design is an efficient approach for investigating >1 new drug or treatment strategy [30]. These designs maintain strong control of statistical error rates through group-sequential design principles. Examples of MAMS designs include new medications for treatment of noninfectious diarrhea and diabetes in HIV-infected persons and new treatment strategies for tuberculosis [31–33].

One form that a multistage design can take is a seamless phase II/III study evaluating a new drug. This design specifies initial testing using multiple dosing regimens (including a group that will serve as an appropriate concurrent control throughout the study). The optimal dosing regimen for the new drug is determined by an interim analysis of the phase II component. This regimen is continued in parallel with the control group (eliminating less appropriate dosing regimens) in the phase III study of the drug’s efficacy and safety. Data are combined across both phases in the final analysis. This design reduces the overall number of subjects and eliminates delay during the transition between the 2 phases.
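The selection step can be sketched as a toy simulation (not a real trial engine; the response rates and sample sizes are invented for illustration, and a real seamless trial must also adjust the final test for the dose-selection step):

```python
import random

def seamless_trial(p_control, p_doses, n_stage1, n_stage2, seed=3):
    """Toy seamless phase II/III: stage 1 enrolls every dose arm plus
    control; an interim analysis keeps the best-performing dose; stage 2
    continues only that dose and control; data from both stages are
    pooled for the final comparison."""
    rng = random.Random(seed)

    def responders(p, n):
        # number of responders among n subjects with true response rate p
        return sum(rng.random() < p for _ in range(n))

    stage1 = [responders(p, n_stage1) for p in p_doses]
    best = max(range(len(p_doses)), key=lambda i: stage1[i])
    n_total = n_stage1 + n_stage2
    dose_rate = (stage1[best] + responders(p_doses[best], n_stage2)) / n_total
    ctrl_rate = (responders(p_control, n_stage1)
                 + responders(p_control, n_stage2)) / n_total
    return best, dose_rate, ctrl_rate
```

With true response rates of 0.3 for control and (0.35, 0.5, 0.4) for three candidate doses, the interim analysis will almost always retain the middle dose, and the pooled comparison uses all subjects from both stages in each continuing arm.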

A multiarm design can be used for a trial comparing new drugs, established drugs in various combinations, or different treatment strategies, in comparison with each other and with a control. New arms can be added after trial initiation with between-arm comparisons restricted to use of concurrent data to retain randomization integrity. The control may be eliminated if a treatment arm is shown to be superior, in which case this arm becomes the new control though with added complexities.

Operational bias may be introduced if the results of the interim analysis lead to modifications of the trial arms from which treatment effects can be inferred, particularly if the results are published or the modifications are easily and widely recognized. For seamless phase II/III trials, the impact of this bias is not likely to differ from that of separate phase II and III studies, especially if availability of the new drug is restricted to participation in the study or to compassionate use. Operational bias may be more difficult to control in a multiarm design if clinicians have access to the drugs and modify their behavior in ways that compromise the trial.

Population Enrichment Designs

In a population enrichment design, the subject eligibility criteria are modified to increase the enrollment of subjects likely (or decrease enrollment of subjects less likely) to experience a treatment effect [34]. An adjustment may be based on accumulating data regarding demographic or clinical characteristics of enrolled subjects, analysis of biomarkers, or other factors that are known or observed to affect treatment response (eg, host genomic markers, antimicrobial resistance genotypes or phenotypes). Eligibility criteria may be modified based on interim analyses of blinded or unblinded data.

Modification of eligibility criteria based on pooled blinded data regarding subject characteristics is straightforward, whereas modification of eligibility criteria based on unblinded data of treatment effects is more complicated. Limitations associated with this approach are similar to those discussed previously for sample size recalculation using blinded and unblinded results. In addition, population enrichment may reduce the generalizability of the trial.

Response-Adaptive Randomization Designs

A response-adaptive randomization (RAR) design takes the approach of population enrichment using unblinded data a step further [35]. In a RAR design, the randomization ratio may be modified during the trial depending on the observed responses of trial participants. For example, a trial comparing intermittent versus continuous dosing of aerosolized ribavirin for treatment of respiratory syncytial virus infection in cancer patients shifted the randomization ratio toward intermittent dosing because it had a lower observed failure rate [36]. The motivation behind this adaptation is primarily ethical, attempting to assign fewer patients to less effective treatment based on early results [37]. Some have argued that this is misleading because equipoise must exist for randomization to occur in the first place [38].
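A minimal sketch of the allocation update follows; the add-one smoothing and the allocation floor are illustrative design choices of this sketch, not features taken from the cited trial:

```python
def rar_allocation(successes_a, n_a, successes_b, n_b, floor=0.2):
    """Probability of assigning the next subject to arm A, proportional
    to smoothed observed success rates, with a floor so that neither
    arm's allocation collapses to zero."""
    rate_a = (successes_a + 1) / (n_a + 2)  # add-one smoothing for small n
    rate_b = (successes_b + 1) / (n_b + 2)
    p_a = rate_a / (rate_a + rate_b)
    return min(max(p_a, floor), 1 - floor)
```

After 30/50 successes on arm A versus 20/50 on arm B, the next-subject allocation to A rises to about 0.60; the floor keeps extreme early results from freezing out an arm entirely.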

While appealing, a RAR design can produce biased estimates of effects when there is temporal or geographic variation in disease or advances in supportive care. Modeling can be used to adjust for the induced confounding, though imperfectly. Two trials comparing similar regimens, one using RAR and one using fixed randomization, produced different results, raising questions about this potential bias [39]. Consequently, RAR may be contraindicated for trials of long duration. Confounding by geographic factors can also occur if sites start after, or complete before, the randomization ratio is adapted. Given the gradual and episodic evolution of antimicrobial resistance and the geographic diversity in the incidence and molecular mechanisms of infections caused by multidrug-resistant bacteria, RAR may be a suboptimal fit for trials of interventions in infectious disease settings [40].

RAR can be inefficient from the statistical perspective as well. Trial power is maximized when the randomization ratio is 1:1. Diversion from 1:1 requires a larger sample size to maintain desired power, increasing costs and prolonging the trial. This also eliminates the advantage of fewer participants being randomized to the potentially inferior arm [41, 42].
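The efficiency cost of unequal allocation follows from the variance of the difference in means, σ²(1/n_A + 1/n_B), which is minimized at 1:1 for a fixed total N. A small sketch (the z-multiplier corresponds to two-sided α = .05 with 90% power; the effect size and SD are illustrative):

```python
import math

def required_total_n(alloc_frac, delta=0.5, sd=1.0, z_total=3.2415):
    """Total sample size for a two-arm mean comparison when a fraction
    alloc_frac of subjects goes to arm A; z_total is z_{alpha/2} + z_{power}
    (about 3.24 for two-sided alpha .05 and 90% power)."""
    # variance factor 1/r + 1/(1-r) is minimized (value 4) at r = 0.5
    variance_factor = 1 / alloc_frac + 1 / (1 - alloc_frac)
    return math.ceil((z_total * sd / delta) ** 2 * variance_factor)
```

Moving from 1:1 to 2:1 allocation raises the total from 169 to 190 subjects for the same power, and 4:1 raises it to 263, illustrating how diversion from equal allocation prolongs and enlarges the trial.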

RAR can induce operational bias in open-label trials, given that treatment effects may be inferred from observed randomization rates. RAR is also complex to apply because the randomization schedule cannot be generated before trial initiation.

Umbrella, Basket, and Platform Designs

An umbrella design includes 2 or more subtrials linked through a common subject-screening infrastructure [34, 43]. Subjects are screened for a set of characteristics, such as a specific infection, pathogen, or antimicrobial resistance genotype or phenotype, and assigned to an appropriate subtrial. The screening process acts as an enrichment process, facilitating accrual for trials examining low-prevalence diseases. The design has the flexibility to add, drop, or modify subtrials. Threats to integrity are minimal because each subtrial can run independently.

A basket design, often used in oncology trials, evaluates the effect of a drug on a single mutation in a variety of cancer types or, in some cases, on multiple mutations in a single cancer type [34, 43]. Basket trials can screen multiple drugs across many cancer types. This design may be attractive for testing a new antibiotic against a rare, highly resistant pathogen or antimicrobial-resistant genotype or phenotype, when alternatives are limited by the challenge of recruiting a sufficient number of subjects. A significant concern is that heterogeneity in treatment effects across infection sites is likely to go undetected.

A platform design is an extension of the multiarm design evaluating different treatment strategies, potentially involving several subtrials [43, 44]. A platform trial may also evaluate the efficacy of treatment strategies in subgroups identified by subject characteristics, illness severity, biomarkers, or antimicrobial resistance genotypes or phenotypes. The Antimicrobial Resistance Leadership Group (ARLG) has proposed a platform trial to evaluate new treatments for infections at various sites caused by multidrug-resistant bacteria [45]. The European Commission has funded a platform trial to evaluate multiple treatments (eg, antimicrobials, adjunctive agents, ventilation strategies) for severe community-acquired pneumonia (ClinicalTrials.gov, NCT02735707) [46].

Changing End Points

Revisions to end points may compromise the validity of the trial, so they should be considered only rarely: for instance, when scientific knowledge has evolved significantly after trial initiation, new benefits or harms are identified, improved surrogate markers become available, or patient perspectives need to be incorporated [47]. A trial examining the effectiveness of guideline recommendations for the duration of antibiotic treatment in hospitalized adults with community-acquired pneumonia included a change in its primary end points [48].

APPLICATION OF ADAPTIVE DESIGNS TO HEALTHCARE EPIDEMIOLOGY RESEARCH

We outline opportunities for applying adaptive designs to trials of interventions to prevent healthcare-associated infections, treat infections caused by antimicrobial-resistant bacteria, and enhance antimicrobial stewardship (Table 3).

Table 3.

Conventional and Adaptive Trial Designs for Healthcare-Associated Infection Prevention, Treatment of Infections Caused by Antimicrobial-Resistant Bacteria, and Antimicrobial Stewardship: Examples, Challenges of Conventional Designs, and Potential Advantages and Limitations of Adaptive Designs

Healthcare-associated infection prevention (What is the effect of a novel screening test to identify patients colonized with MDR GNB on admission on the transmission of these organisms in healthcare facilities?)
- Conventional design: group-level cluster-randomized trial
- Common challenge: inaccurate power calculation due to imprecise estimates of the incidence and the within- and between-group variance of the control group-level outcome
- Adaptive design: sample size recalculation (number of groups or duration of study) using blinded interim analysis of baseline incidence and within- and between-group variance of the group-level outcome
- Potential advantage: adequately powered study; early identification of futility
- Limitation: may be unable to add groups midway through the trial; difficult to estimate the budget for the trial

- Common challenge: variation in population characteristics among groups likely to affect the response to the intervention
- Adaptive design: population enrichment using blinded interim analysis of population characteristics among groups
- Potential advantage: more likely to detect a positive effect associated with the intervention; more precise estimate of the effect size in subjects likely to respond
- Limitation: may overestimate the size of the effect in a general population; difficult to estimate the budget for the trial

Treatment of infections caused by antimicrobial-resistant bacteria (What is the effect of a novel antibiotic, used alone or in combination with existing antibiotics, on the MDR GNB BSI-associated mortality rate?)
- Conventional design: separate phase II and III trials for new drug development
- Common challenge: delay and inefficiency of conducting separate phase II and III trials of a new drug
- Adaptive design: multistage seamless phase II/III trial with interim analysis to choose the optimal dosing regimen to complete the trial
- Potential advantage: eliminates delay between phases II and III; smaller overall sample size; combines data from both phases for the final analysis
- Limitation: additional planning required to specify the phase II to III transition; operational bias when changes can be used to infer phase II results

- Conventional design: separate randomized trials of different treatment strategies (eg, different doses, administration regimens, agents, or combinations)
- Common challenge: inefficiency of conducting separate trials of various strategies
- Adaptive design: multiarm design to test different treatment strategies vs a common control; interim analysis to choose the best-performing strategy to continue the trial
- Potential advantage: simultaneous evaluation of multiple treatment strategies allows direct comparison of each; interim analyses allow discontinuation of ineffective strategies or early stopping if one strategy is superior; arms testing new treatment strategies can be added
- Limitation: careful planning and coordination required for multiarm studies; operational bias when changes in arms reveal interim results

Antimicrobial stewardship (What is the effect of different novel rapid tests for detection of resistance genes in MDR GNB on time to initiation of effective antibiotic treatment for MDR GNB BSI?)
- Conventional design: separate randomized trials of new diagnostic tests
- Common challenge: inefficiency of conducting separate trials of new diagnostic tests
- Adaptive design: umbrella or platform design to evaluate multiple new diagnostic tests using samples from single subjects (see text)
- Potential advantage: simultaneous evaluation of multiple new diagnostic tests allows direct comparison of each; smaller overall sample size compared with separate trials
- Limitation: careful planning and coordination required

Abbreviations: BSI, bloodstream infection; GNB, gram-negative bacilli; MDR, multidrug-resistant.

Prevention of Healthcare-Associated Infections

Cluster (group) randomized trials have been used to evaluate interventions to prevent healthcare-associated infections caused by antimicrobial-resistant bacteria and Clostridium difficile. These studies face challenges regarding accurate sample size calculation (due to imprecise estimates of baseline incidence and of within- and between-group variance of the group-level outcome), differences in population characteristics among groups, and suboptimal adherence with the intervention, either overall or in individual groups. Adaptive sample size recalculation and population enrichment offer potential solutions (Table 3). Limitations include the inability to increase the number of groups midway through the trial, reduced generalizability, and difficulty estimating the budget for the trial.
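For cluster randomized trials, the sample size question hinges on the design effect 1 + (m − 1) × ICC, where m is the cluster size and ICC is the intraclass correlation; a blinded interim re-estimate of the ICC can be plugged into a sketch like this (all numbers in the usage example are illustrative):

```python
import math

def design_effect(cluster_size, icc):
    """Variance inflation for cluster randomization: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def clusters_needed(n_individual, cluster_size, icc):
    """Number of clusters required once an individually randomized
    sample size is inflated by the design effect."""
    n_clustered = n_individual * design_effect(cluster_size, icc)
    return math.ceil(n_clustered / cluster_size)
```

If 170 subjects would suffice under individual randomization, clusters of 20 with an ICC of 0.05 inflate the requirement by a factor of 1.95 (to 332 subjects, ie, 17 clusters); a blinded interim ICC estimate of 0.10 would instead push the requirement to 25 clusters.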

Treatment of Infections Caused by Antimicrobial-Resistant Bacteria

The development of new treatments for infections caused by highly antimicrobial-resistant gram-negative bacteria is complicated by the relative infrequency of these infections [49]. MAMS designs may speed development of these treatments. For instance, a multistage seamless phase II/III design may be a more efficient approach for development of a new drug (Table 2). A multiarm design may be a more effective approach for comparing different doses, administration regimens, agents, or combinations of drugs for treatment of ≥1 resistant pathogen versus a common control (Table 3). Limitations of these approaches include the increased planning and coordination required, the reluctance of pharmaceutical companies to participate in head-to-head comparisons, and the potential for operational bias.

Enhance Antimicrobial Stewardship

Trials evaluating antimicrobial stewardship interventions may need to use cluster randomization, in which case previous statements about appropriate adaptive designs are applicable (Table 3). Trials that evaluate interventions at the subject level may use sample size recalculation or population enrichment designs [50]. The ARLG has proposed a platform design to evaluate multiple new diagnostic tests, individually and in comparison (Table 3) [51]. A (diagnostic) platform study to evaluate the performance of nucleic acid amplification tests for detection of Neisseria gonorrhoeae and Chlamydia trachomatis in extragenital sites is currently enrolling subjects (ClinicalTrials.gov, NCT02870101). This approach will reduce the number of subjects required and reduce costs substantially. Although this design will not involve treatment strategies, the development of new diagnostic tests to direct appropriate antimicrobial therapy is a key stewardship priority.

CONCLUSIONS

Used appropriately, adaptive designs can be informative and efficient, and they represent a promising innovation for clinical trials in healthcare epidemiology research. Used inappropriately, these designs threaten trial integrity through loss of control of statistical error rates and the induction of operational bias. Investigators must consider these factors in determining how to apply these designs. Increasing experience and additional developments in the field will yield more insight into how these designs can be applied most appropriately to improve the effectiveness and efficiency of healthcare epidemiology research.

Notes

Acknowledgments. We acknowledge Robert Weinstein for his invitation to write this article and the many members of the Antimicrobial Resistance Leadership Group (ARLG) who have contributed to the development of ARLG studies, including those cited in this review.

Disclaimer. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health (NIH).

Financial support. This study was supported by the National Institute of Allergy and Infectious Diseases (NIAID), NIH (grant UM1 AI104681).

Potential conflicts of interest. W. C. H. reports grants from NIH/NIAID. V. G. F. reports grants from NIH, the Centers for Disease Control and Prevention, MedImmune, Cerexa/Forest/Actavis/Allergan, Pfizer, Advanced Liquid Logics, Theravance, Novartis, Cubist/Merck, Medical Biosurfaces, Locus, Affinergy, Contrafect, Karius, and Genentech; consultancies with Pfizer, Novartis, Galderma, Novadigm, Durata, Debiopharm, Genentech, Achaogen, Affinium, Medicines Co, Cerexa, Tetraphase, Trius, MedImmune, Bayer, Theravance, Cubist, Basilea, Affinergy, Janssen, xBiotech, and Contrafect; personal fees from Green Cross, Cubist, Cerexa, Durata, Theravance, and Debiopharm; royalties from UpToDate; and a patent pending with Sepsis Diagnostics. S. E. reports grants from NIH/NIAID, during the conduct of the study; and personal fees from Takeda/Millennium; Pfizer; Roche; Novartis; Achaogen; the Huntington’s Study Group; Auspex; Alcon; Merck; Chelsea; Mannkind; QRx Pharma; Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities and Networks (ACTTION); Genentech; Affymax; FzioMed; Amgen; GlaxoSmithKline; Boehringer-Ingelheim; American Statistical Association; the Food and Drug Administration; Osaka University; City of Hope; the National Cerebral and Cardiovascular Center of Japan; the NIH; the Muscle Study Group; the Society for Clinical Trials; Drug Information Association; the University of Rhode Island; New Jersey Medical School (NJMS)/Rutgers; Preclinical Pain Research Consortium for Investigating Safety and Efficacy (PPRECISE); Statistical Communications in Infectious Diseases; Cubist; AstraZeneca; Teva; Repros; Austrian Breast & Colorectal Cancer Study Group/Breast International Group; the Alliance Foundation Trials; Zeiss; Dexcom; the American Society for Microbiology; Taylor and Francis; Claret Medical; Vir; Arrevus; Five Prime; Shire; Alexion; Gilead; and Spark, outside the submitted work. 
All authors have submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Conflicts that the editors consider relevant to the content of the manuscript have been disclosed.

References

  • 1. Dzau VJ, McClellan MB, McGinnis JM, et al. Vital directions for health and health care: priorities from a National Academy of Medicine initiative. JAMA 2017; 317:1461–70.
  • 2. Gallo P, Chuang-Stein C, Dragalin V, Gaydos B, Krams M, Pinheiro J; PhRMA Working Group. Adaptive designs in clinical drug development—an executive summary of the PhRMA Working Group. J Biopharm Stat 2006; 16:275–83; discussion 285–91, 293–8, 311–2.
  • 3. Office of Biostatistics and the Office of New Drugs, Center for Drug Evaluation and Research, Food and Drug Administration. Guidance for industry: adaptive design clinical trials for drugs and biologics, draft guidance. 2010. Available at: https://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM201790.pdf. Accessed 12 June 2017.
  • 4. Bhatt DL, Mehta C. Adaptive designs for clinical trials. N Engl J Med 2016; 375:65–74.
  • 5. Evans SR, Ting N. Fundamental concepts for new clinical trialists. Boca Raton, FL: Chapman & Hall/CRC, 2015.
  • 6. Antonijevic Z, Gallo P, Chuang-Stein C. Views on emerging issues pertaining to data monitoring committees for adaptive trials. Ther Innov Regul Sci 2013; 47:495–502.
  • 7. Sanchez-Kam M, Gallo P, Loewy J, et al. A practical guide to data monitoring committees in adaptive trials. Ther Innov Regul Sci 2014; 48:316–26.
  • 8. Clinical Trials Transformation Initiative. CTTI recommendations: data monitoring committees. 2017. Available at: https://www.ctti-clinicaltrials.org/sites/www.ctti-clinicaltrials.org/files/recommendations/dmc-recommendations.pdf. Accessed 12 June 2017.
  • 9. Emerson SS, Fleming TR. Adaptive methods: telling “the rest of the story.” J Biopharm Stat 2010; 20:1150–65.
  • 10. Saxman SB. Ethical considerations for outcome-adaptive trial designs: a clinical researcher’s perspective. Bioethics 2015; 29:59–65.
  • 11. Coffey CS, Levin B, Clark C, et al. Overview, hurdles, and future work in adaptive designs: perspectives from a National Institutes of Health-funded workshop. Clin Trials 2012; 9:671–80.
  • 12. Meurer WJ, Lewis RJ, Tagle D, et al. An overview of the Adaptive Designs Accelerating Promising Trials into Treatments (ADAPT-IT) project. Ann Emerg Med 2012; 60:451–7.
  • 13. Chow SC, Chang M. Adaptive design methods in clinical trials. 2nd ed. Boca Raton, FL: Chapman & Hall/CRC, 2011.
  • 14. Evans SR, Ting N. Publishing trial results: fundamental concepts for new clinical trialists. Boca Raton, FL: Chapman & Hall/CRC, 2015.
  • 15. Liu Q, Chi GY. Understanding the FDA guidance on adaptive designs: historical, legal, and statistical perspectives. J Biopharm Stat 2010; 20:1178–219.
  • 16. Bauer P, Bretz F, Dragalin V, König F, Wassmer G. Twenty-five years of confirmatory adaptive designs: opportunities and pitfalls. Stat Med 2016; 35:325–47.
  • 17. Evans SR, Li L, Wei LJ. Data monitoring in clinical trials using prediction. Drug Inf J 2007; 41:733–42.
  • 18. Li L, Evans SR, Uno H, Wei LJ. Predicted interval plots (PIPS): a graphical tool for data monitoring of clinical trials. Stat Biopharm Res 2009; 1:348–55.
  • 19. Evans SR, Simpson DM, Kitch DW, et al.; Neurologic AIDS Research Consortium; AIDS Clinical Trials Group. A randomized trial evaluating Prosaptide for HIV-associated sensory neuropathies: use of an electronic diary to record neuropathic pain. PLoS One 2007; 2:e551.
  • 20. Hosseinipour MC, Bisson GP, Miyahara S, et al.; Adult AIDS Clinical Trials Group A5274 (REMEMBER) Study Team. Empirical tuberculosis therapy versus isoniazid in adult outpatients with advanced HIV initiating antiretroviral therapy (REMEMBER): a multicountry open-label randomised controlled trial. Lancet 2016; 387:1198–209.
  • 21. Henao-Restrepo AM, Longini IM, Egger M, et al. Efficacy and effectiveness of an rVSV-vectored vaccine expressing Ebola surface glycoprotein: interim results from the Guinea ring vaccination cluster-randomised trial. Lancet 2015; 386:857–66.
  • 22. Cui L, Hung HM, Wang SJ. Modification of sample size in group sequential clinical trials. Biometrics 1999; 55:853–7.
  • 23. Gao P, Ware JH, Mehta C. Sample size re-estimation for adaptive sequential design in clinical trials. J Biopharm Stat 2008; 18:1184–96.
  • 24. Mehta CR, Pocock SJ. Adaptive increase in sample size when interim results are promising: a practical guide with examples. Stat Med 2011; 30:3267–84.
  • 25. Asakura K, Hamasaki T, Evans SR, Sugimoto T, Sozu T. Group-sequential designs when considering two binary outcomes as co-primary endpoints. In: Chen Z, Liu A, Qu Y, Tang L, Ting N, Tsong Y, eds. Applied statistics in biomedicine and clinical trials design. Cham, Switzerland: Springer, 2015:235–62.
  • 26. Asakura K, Hamasaki T, Sugimoto T, Hayashi K, Evans SR, Sozu T. Sample size determination in group-sequential clinical trials with two co-primary endpoints. Stat Med 2014; 33:2897–913.
  • 27. Ando Y, Hamasaki T, Evans SR, et al. Sample size considerations in clinical trials when comparing two interventions using multiple co-primary binary relative risk contrasts. Stat Biopharm Res 2015; 7:81–94.
  • 28. Hamasaki T, Asakura K, Evans SR, Ochiai T. Group-sequential clinical trials with multiple co-objectives. New York, NY: Springer, 2016.
  • 29. Ochiai T, Hamasaki T, Evans SR, Asakura K, Ohno Y. Group-sequential three-arm noninferiority clinical trial designs. J Biopharm Stat 2017; 27:1–24.
  • 30. Wason JM, Jaki T. Optimal design of multi-arm multi-stage trials. Stat Med 2012; 31:4269–79.
  • 31. Macarthur RD, Hawkins TN, Brown SJ, et al. Efficacy and safety of crofelemer for noninfectious diarrhea in HIV-seropositive individuals (ADVENT trial): a randomized, double-blind, placebo-controlled, two-stage study. HIV Clin Trials 2013; 14:261–73.
  • 32. Wason J, Magirr D, Law M, Jaki T. Some recommendations for multi-arm multi-stage trials. Stat Methods Med Res 2016; 25:716–27.
  • 33. Bratton DJ, Phillips PP, Parmar MK. A multi-arm multi-stage clinical trial design for binary outcomes with application to tuberculosis. BMC Med Res Methodol 2013; 13:139.
  • 34. Mandrekar SJ, Dahlberg SE, Simon R. Improving clinical trial efficiency: thinking outside the box. Am Soc Clin Oncol Educ Book 2015; e141–7.
  • 35. Zelen M. Play the winner rule and the controlled clinical trial. J Am Stat Assoc 1969; 64:131–46.
  • 36. Chemaly RF, Torres HA, Munsell MF, et al. An adaptive randomized trial of an intermittent dosing schedule of aerosolized ribavirin in patients with cancer and respiratory syncytial virus infection. J Infect Dis 2012; 206:1367–71.
  • 37. Berry D, Esserman L. Adaptive randomization of neratinib in early breast cancer. N Engl J Med 2016; 375:1592–3.
  • 38. Buyse M, Saad ED, Burzykowski T. Adaptive randomization of neratinib in early breast cancer. N Engl J Med 2016; 375:1591–2.
  • 39. Joensuu H. Adaptive randomization of neratinib in early breast cancer. N Engl J Med 2016; 375:1592.
  • 40. Fleming TR, Ellenberg SS. Evaluating interventions for Ebola: the need for randomized trials. Clin Trials 2016; 13:6–9.
  • 41. Korn EL, Freidlin B. Outcome-adaptive randomization: is it useful? J Clin Oncol 2011; 29:771–6.
  • 42. Korn EL, Freidlin B. Reply to Y. Yuan et al. J Clin Oncol 2011; 29:e393.
  • 43. Woodcock J, LaVange LM. Master protocols to study multiple therapies, multiple diseases, or both. N Engl J Med 2017; 377:62–70.
  • 44. Berry SM, Connor JT, Lewis RJ. The platform trial: an efficient strategy for evaluating multiple treatments. JAMA 2015; 313:1619–20.
  • 45. Platform Trial for the Evaluation of Antimicrobials for the Treatment of Multiple Resistant Bacterial Pathogens in Bacteremia. 2017. Available at: https://www.arlg.org/studies-in-progress. Accessed 12 June 2017.
  • 46. REMAP-CAP: Randomized, Embedded, Multifactorial, Adaptive Platform Trial for Community-Acquired Pneumonia. 2017. Available at: https://www.prepare-europe.eu/About-us/Workpackages/Workpackage-5. Accessed 12 June 2017.
  • 47. Evans S. When and how can endpoints be changed after initiation of a randomized clinical trial? PLoS Clin Trials 2007; 2:e18.
  • 48. Uranga A, España PP, Bilbao A, et al. Duration of antibiotic treatment in community-acquired pneumonia: a multicenter randomized clinical trial. JAMA Intern Med 2016; 176:1257–65.
  • 49. Doi Y, Bonomo RA, Hooper DC, et al.; Gram-Negative Committee of the Antibacterial Resistance Leadership Group (ARLG). Gram-negative bacterial infections: research priorities, accomplishments, and future directions of the Antibacterial Resistance Leadership Group. Clin Infect Dis 2017; 64:30–5.
  • 50. Anderson DJ, Jenkins TC, Evans SR, et al.; Stewardship and Infection Control Committee of the Antibacterial Resistance Leadership Group (ARLG). The role of stewardship in addressing antibacterial resistance: Stewardship and Infection Control Committee of the Antibacterial Resistance Leadership Group. Clin Infect Dis 2017; 64:36–40.
  • 51. Patel R, Tsalik EL, Petzold E, Fowler VG Jr, Klausner JD, Evans S; Antibacterial Resistance Leadership Group (ARLG). MASTERMIND: bringing microbial diagnostics to the clinic. Clin Infect Dis 2017; 64:355–60.

Articles from Clinical Infectious Diseases: An Official Publication of the Infectious Diseases Society of America are provided here courtesy of Oxford University Press