Abstract
Background
The clinical trials community is continually searching for dependable ways to improve clinical research. This exploration has led to considerable interest in adaptive clinical trial designs, which provide the flexibility to adjust trial characteristics on the basis of data reviewed at interim stages. Statisticians and clinical investigators have proposed or implemented a wide variety of adaptations in clinical trials, but specific approaches have met with differing levels of support. Within industry, investigators are actively exploring the benefits and pitfalls associated with adaptive designs (ADs). For example, a Drug Information Association (DIA) working group on ADs has engaged regulatory agencies in discussions. Many researchers working on publicly funded clinical trials, however, are not yet fully engaged in this discussion. We organized the Scientific Advances in Adaptive Clinical Trial Designs Workshop to begin a conversation about using ADs in publicly funded research. Held in November 2009, the 1½-day workshop brought together representatives from the National Institutes of Health (NIH), the Food and Drug Administration (FDA), the European Medicines Agency (EMA), the pharmaceutical industry, nonprofit foundations, the patient advocacy community, and academia. The workshop offered a forum for participants to address issues of ADs that arise at the planning, design, and execution stages of clinical trials, and to hear the perspectives of influential members of the clinical trials community. The participants also set forth recommendations for guiding action to promote the appropriate use of ADs. These recommendations have since been presented, discussed, and vetted in a number of venues, including the University of Pennsylvania Conference on Statistical Issues in Clinical Trials and the Society for Clinical Trials annual meeting.
Purpose
To provide a brief overview of ADs, describe the rationale behind conducting the workshop, and summarize the main recommendations that were produced as a result of this workshop.
Conclusions
There is growing interest in the use of adaptive clinical trial designs. However, a number of logistical barriers need to be addressed in order to obtain the potential advantages of an AD. Currently, the pharmaceutical industry is well ahead of academic trialists with respect to addressing these barriers. Academic trialists will need to address important issues such as education, infrastructure, modifications to existing funding models, and the impact on Data and Safety Monitoring Boards (DSMBs) in order to achieve the possible benefits of adaptive clinical trial designs.
Introduction
In the traditional clinical trial setting, study design often involves making a number of assumptions about important aspects of the study. For example, choosing a fixed sample size can be complicated by the need to select a clinically meaningful treatment effect or to specify values for nuisance parameters (such as the variance for a continuous outcome, the overall event rate for a binary outcome, or the accrual rate for a time-to-event outcome). Inaccurate estimates of the parameters lead to an underpowered or overpowered study, both of which have negative consequences.
Since knowledge accrues as the study progresses, adaptive designs (ADs) allow investigators to review design elements and parameters during the trial [1–7]. In general, an AD allows for changing or modifying the characteristics of a trial on the basis of cumulative information. An AD usually consists of at least two stages. At each stage, data analyses are conducted and adaptations implemented on the basis of updated information [8]. This approach provides investigators with an attractive solution to address some of the uncertainty that exists when the trial is originally designed. Changes may be based on parameters such as the rate of patient recruitment, baseline covariate information, or the rate of information accrual such as frequencies of endpoints – all of which can be evaluated without unmasking treatment assignment. Other adaptations might involve parameters that require unmasking, such as the effect of treatment on response. Adaptations based on reestimated nuisance parameters may or may not require the unmasking of treatment assignment.
Recently, research and interest in the area of ADs have risen considerably. However, the rapid proliferation of interest in ADs and inconsistent use of terminology have created confusion about the similarities and differences among the various techniques. For example, the definition of an ‘AD’ itself is a common source of confusion. The term currently applies to a variety of situations: ‘adaptive’ can refer to a broad approach to conducting a trial or to a specific element of a trial’s design. Because the term applies to so many different designs, everyone in a large group conversation could describe the recent AD that they are working on, yet the designs could have very little in common. To advance the field, the scientific community must first agree on a definition.
Fortunately, two recent publications have alleviated some of this confusion. In 2005, a multidisciplinary AD working group was formed to ‘… foster and facilitate wider usage and regulatory acceptance of ADs and to enhance clinical development, through fact-based evaluation of the benefits and challenges associated with these designs’ [1]. The group, originally sponsored by the Pharmaceutical Research and Manufacturers of America (PhRMA), is currently sponsored by the Drug Information Association (DIA). An important early contribution of this group was a published white paper that provided one of the first formal definitions of an AD – ‘By adaptive design we refer to a clinical study design that uses accumulating data to modify aspects of the study as it continues, without undermining the validity and integrity of the trial’ [1]. The group went on to specify that ‘… changes are made by design, and not on an ad hoc basis’, and that ADs are ‘… not a remedy for inadequate planning’.
In 2010, the Food and Drug Administration (FDA) released a draft guidance document – ‘Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics’ [9]. The document included a definition similar to that of the working group: ‘… a study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on analysis of data (usually interim data) from subjects in the study’.
Both groups support the general concept of ‘adaptive by design’, which implies that the only way to ensure that an AD preserves the validity and integrity of a trial is to prespecify the rules for adaptation (i.e., ‘if X happens, then we will do Y’) before seeing any data. If the adaptations are prespecified, extensive simulations can be conducted. These simulations are crucial for determining whether the proposed adaptation introduces bias, how best to correct for bias if it occurs, and for comparing the properties of the AD with those obtained from a standard fixed design (ADs are not always better). For these and other reasons, it is necessary to plan adaptations in advance. Although opportunities may arise to perform simulations with unplanned adaptations, such simulations would generally apply only to activity from the point of adaptation forward. There is no defensible way to go back and capture the randomness of different scenarios that might have led to similar or different ad hoc changes. Therefore, only planned adaptations can be guaranteed to avoid any unknown bias due to the adaptation and provide a replicable and, thus, statistically rigorous experiment. This caveat about prespecified adaptations is often the source of misunderstanding between investigators and regulatory agencies. Many investigators say that they would like to conduct an AD but are hesitant because they think the FDA is unlikely to accept anything out of the ordinary. On the other hand, an FDA reviewer may encourage the use of ADs, but only if researchers provide sufficient evidence that the design preserves the operating characteristics of the study (control of type I error rate, lack of bias, etc.). Specifying adaptations in advance, and conducting appropriate simulation studies, can address the concerns of both parties.
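The role of such simulations can be illustrated with a small Monte Carlo sketch. The example below is hypothetical (the sample sizes, number of looks, and thresholds are illustrative choices, not taken from any trial discussed here): it estimates the type I error rate of a naive two-stage design that tests at the full nominal level at each look, which is precisely the kind of inflation a prespecified simulation study is meant to detect and correct.

```python
import numpy as np

def simulate_type1(n_per_stage=100, n_sims=20000, seed=1):
    """Monte Carlo estimate of the type I error rate, under the null
    hypothesis of no treatment effect, for a naive two-stage design
    that tests at the nominal two-sided 5% level at each look.
    Illustrative sketch only; all parameter values are hypothetical."""
    rng = np.random.default_rng(seed)
    z_crit = 1.96  # two-sided nominal 5% critical value
    rejections = 0
    for _ in range(n_sims):
        # Stage 1: standardized treatment-control difference at interim
        t1 = rng.normal(size=n_per_stage)
        c1 = rng.normal(size=n_per_stage)
        z1 = (t1.mean() - c1.mean()) / np.sqrt(2.0 / n_per_stage)
        if abs(z1) > z_crit:        # naive interim test at full alpha
            rejections += 1
            continue
        # Stage 2: pool with additional data, test again at full alpha
        t2 = np.concatenate([t1, rng.normal(size=n_per_stage)])
        c2 = np.concatenate([c1, rng.normal(size=n_per_stage)])
        z2 = (t2.mean() - c2.mean()) / np.sqrt(2.0 / (2 * n_per_stage))
        if abs(z2) > z_crit:
            rejections += 1
    return rejections / n_sims

if __name__ == "__main__":
    rate = simulate_type1()
    print(f"Empirical type I error with two naive looks: {rate:.3f}")
```

Run under the null, the empirical rejection rate lands well above the nominal 5%, demonstrating why group sequential methods spend alpha across looks rather than testing at the full level each time.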
Many diseases have an engaged patient community, often actively communicating through websites. In some diseases, patients, advocates, and their families have expressed intense interest in the potential for ADs to make effective treatments available to patients more quickly. For this conversation to be productive, these groups must understand what ADs are and are not. ADs alone cannot ‘change the answer’ regarding the effectiveness of a particular treatment, but they can increase the efficiency with which the answer is found. ADs alone cannot make an ineffective treatment suddenly effective. In fact, one of the biggest potential benefits of an AD is the ability to identify ineffective treatments more quickly. Stopping development of such a treatment early minimizes the resources expended to study an ineffective treatment and allows those resources to be redistributed to potentially more promising interventions.
The remainder of this article first provides an overview of the commonly proposed types of ADs. We then describe some hurdles that must be addressed, particularly for implementing ADs outside of industry. In particular, we present a summary of the recommendations from a November 2009 ‘Scientific Advances in Clinical Trials Design’ workshop that was held to address these hurdles. We then discuss the future work based on the recommendations that came from this workshop. We close by highlighting several recent steps that have been taken by National Institutes of Health (NIH) and academic researchers to address some of the most salient issues.
Overview
As implied by the definition above, many design features of a study can be altered. Alterations to some of these elements are more controversial than others. For example, a group sequential design [10] can be thought of as an AD that allows premature termination of a trial due to efficacy or futility, based on the results of an interim analysis. In fact, group sequential designs are some of the most commonly used ADs in clinical trials. In briefly summarizing specific ADs used in clinical research, we include the early learning phase of clinical trials, the late confirmatory phase, and adaptive seamless designs that seek to integrate the two phases.
ADs are generally well accepted in the learning (exploratory) stage of clinical trials [9]. Adaptive dose-finding methods, such as the continual reassessment method [11], offer more efficient ways to learn about dose–response; they can provide more information about dose–response earlier in development than do more conventional designs. These trials can be used to estimate the maximum tolerated dose (MTD), that is, the highest dose with less than some specified percentage of treated subjects having dose-related toxicities. After the MTD has been determined, the late learning phase generally aims to choose a dose (less than or equal to the MTD) that shows the most promise of affecting the clinical outcome of interest. An adaptive dose–response working group published a white paper concluding that typical sample sizes in dose-ranging studies are inadequate for accurately estimating dose–response [12]. The group went on to claim that adaptive dose-ranging methods provided a clear advantage, improving both dose–response detection and estimation. The group favored a general adaptive dose allocation approach, as employed in the Acute Stroke Therapy by Inhibition of Neutrophils (ASTIN) study on acute ischemic stroke [13]. Both of the above-mentioned methods are examples of Bayesian designs. In that sense, all Bayesian designs are adaptive; however, not all ADs are Bayesian designs.
Several authors have proposed a number of different ADs for use in the confirmatory phase of clinical trials. From the FDA’s perspective, some of these are ‘well understood’, while others are less so [9]. A few of the most commonly discussed designs are summarized below, but we note that many more types of adaptations exist than can be covered here.
Adaptive randomization designs allow for modifying randomization schedules when assigning subjects to treatment groups during a trial [2,14]. A study may employ response adaptive randomization (RAR), covariate adaptive randomization (CAR), or a combination of the two. With RAR, the allocation probability is based on the responses observed in previous subjects. With CAR, the allocation probability is chosen to reduce covariate imbalance between groups.
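As a concrete sketch of how RAR updates allocation probabilities, the example below uses a simple Beta-Bernoulli, Thompson-sampling-style rule: each patient's allocation is driven by posterior draws for each arm's response probability. The response rates, sample size, and update rule are hypothetical illustrations, not drawn from any trial or method cited here.

```python
import random

def rar_trial(p_a, p_b, n_patients=200, seed=7):
    """Sketch of response adaptive randomization with a Beta-Bernoulli
    (Thompson-sampling-style) rule: one posterior draw per arm, and the
    patient is allocated to the arm with the larger draw, so the
    allocation probability equals the posterior probability that the
    arm is better. Hypothetical illustration only."""
    random.seed(seed)
    # Beta(1, 1) priors on each arm's response probability
    a_succ, a_fail, b_succ, b_fail = 1, 1, 1, 1
    n_a = 0
    for _ in range(n_patients):
        draw_a = random.betavariate(a_succ, a_fail)
        draw_b = random.betavariate(b_succ, b_fail)
        if draw_a >= draw_b:
            n_a += 1
            # Observe a Bernoulli response with true rate p_a
            if random.random() < p_a:
                a_succ += 1
            else:
                a_fail += 1
        else:
            if random.random() < p_b:
                b_succ += 1
            else:
                b_fail += 1
    return n_a / n_patients  # fraction of patients allocated to arm A

if __name__ == "__main__":
    frac = rar_trial(p_a=0.6, p_b=0.3)
    print(f"Fraction of patients allocated to arm A: {frac:.2f}")
```

When arm A truly has the higher response rate, the allocation drifts toward A as evidence accumulates, which is the defining behavior of RAR; CAR would instead drive the allocation probability from covariate imbalance rather than responses.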
A biomarker AD allows for adaptations based on short-term biomarkers that are thought to be informative about the treatment effect on a clinical endpoint. The appeal of this type of design is greatest when a biomarker exists that can be measured earlier, more easily, and more frequently than a gold standard endpoint (such as survival). In such designs, the biomarker can be used at an interim analysis to inform possible adaptations to the design, even though the final analysis will still be based on the gold standard endpoint. Although theoretically appealing, these designs have not been widely implemented to date. This is likely due to the lack of validated biomarkers in many disease settings. Without a validated biomarker, there are major concerns regarding whether any positive result based on a biomarker will translate into success in the confirmatory setting. As a consequence of this concern, researchers may be hesitant to modify important design characteristics based on unvalidated biomarkers.
An enrichment design originally enrolls from a large population of subjects, but then uses an initial screening period to determine those candidates most likely to benefit from the test agent [15,16]. As an example, the screening period may be used to determine which subgroup of subjects is most likely to benefit. Correspondingly, the inclusion and exclusion criteria of the second stage are modified to focus on this subgroup of subjects, who are then randomized to receive either the active agent or control. This approach can increase power when a suitable subgroup can be identified, because any effects are not diluted by subgroups less likely to respond. There are examples in the literature of studies that have successfully used an enrichment design [17,18]. However, these designs have also been criticized due to biased treatment effect estimates and lack of generalizability [19].
A sample size recalculation design allows adjustment of the sample size based on a review of the interim data [20]. The appeal of this type of approach depends greatly on which parameters are being reestimated. A great deal of controversy surrounds the use of ADs based on reestimation of observed treatment effects [21]. The FDA considers this type of design ‘less well understood’ and has noted the potential for inefficiency, an increased type I error rate, difficulties in interpretation, and magnification of treatment bias [9]. The methods are sometimes defended for use in specific contexts, such as in cases of limited initial funding [22]. In contrast, numerous authors have shown that internal pilot (IP) designs can be used in large randomized clinical trials to reassess nuisance parameters and make appropriate sample size modifications without affecting the type I error rate [5,23,24]. From a regulatory standpoint, methods that maintain the blinding of treatment allocation at the time of the interim reassessment are preferred whenever possible [25]. Accordingly, the FDA has classified blinded IP designs as ‘well understood’ [9]. The FDA may accept unblinded procedures if sufficient safeguards are implemented to minimize the number of individuals with access to the unblinded information (e.g., the use of A/B pseudoblinding for an interim report), and detailed simulations have been conducted to define and correct for any inflation of the type I error rate or bias associated with the adaptation.
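A blinded IP reestimation can be sketched in a few lines: the pooled, treatment-blind variance estimated from the pilot stage replaces the planning value in the standard two-sample sample size formula. All numbers below are hypothetical, and the small correction for any treatment-effect contribution to the pooled variance is deliberately omitted to keep the sketch minimal.

```python
import math
import random

def blinded_ssr(initial_var=1.0, delta=0.5, pilot_n=60,
                true_var=2.25, seed=3):
    """Sketch of blinded internal-pilot sample size reestimation for a
    two-arm trial with a continuous outcome. The pilot outcomes are
    pooled without treatment labels; their sample variance replaces the
    planning variance in the usual two-sample formula. Hypothetical
    values throughout; the adjustment for the treatment-effect
    contribution to the blinded variance is omitted for simplicity."""
    random.seed(seed)
    z_alpha, z_beta = 1.96, 0.84   # two-sided 5% test, 80% power

    def n_per_arm(var):
        # Standard per-arm size: n = 2 * var * (z_a + z_b)^2 / delta^2
        return math.ceil(2 * var * (z_alpha + z_beta) ** 2 / delta ** 2)

    planned = n_per_arm(initial_var)
    # Blinded pilot data: outcomes pooled across arms, labels unknown
    pilot = [random.gauss(0.0, math.sqrt(true_var)) for _ in range(pilot_n)]
    mean = sum(pilot) / pilot_n
    est_var = sum((x - mean) ** 2 for x in pilot) / (pilot_n - 1)
    revised = n_per_arm(est_var)
    return planned, revised

if __name__ == "__main__":
    planned, revised = blinded_ssr()
    print(f"Planned n/arm: {planned}, revised n/arm: {revised}")
```

When the planning variance was too optimistic, the revised per-arm size grows accordingly, rescuing the study from being underpowered without any unblinding of treatment assignment.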
A seamless design combines objectives traditionally addressed in separate trials into a single trial. Such designs may be considered ‘operationally seamless’ and are aimed at increasing the efficiency over traditional approaches by reducing the resources and time required to conclude and analyze the results of the first trial before designing and initiating the second trial. An adaptive seamless design, also known as an ‘inferentially seamless’ design, takes this one step further. In an adaptive seamless design, the separate trials are combined, and the subjects from the first phase are included in the final confirmatory hypothesis test [26,27]. Most interest to date has focused on a seamless transition between phase IIb (late learning) and phase III (confirming). However, opportunities also exist for seamless designs in early development [28]. Adaptive seamless designs may potentially reduce the timeline for approval in the drug development process. However, the analysis of data from this type of design generally requires specialized methods to correct for the bias introduced using data from the first stage in both the decision-making and final analysis. Hence, extra planning is necessary when implementing an adaptive seamless design, and the potential benefits should be carefully weighed against the challenges of such designs [29].
Hurdles and description of the workshop
Much of the research on ADs has been driven by drug development within the pharmaceutical industry, but many basic principles remain the same regardless of the funding environment. However, some specific challenges differ when considering the use of ADs in trials funded by the NIH, the Veterans Administration, foundations, or nonprofit organizations. As stated above, for example, justifying the properties of a proposed AD often requires conducting extensive simulation studies. Burton et al. [30] provide an excellent description of how to develop a protocol for a simulation study. The scope of these required simulations is generally nontrivial. Many pharmaceutical companies are developing in-house teams whose primary responsibility is to assist with such simulations. Greater barriers exist for implementing the same type of infrastructure within the NIH-funded environment. For example, grant applications often require that these simulations be done in advance of submission and before any funding is available to support these activities. This requirement is somewhat analogous to the need to collect preliminary data prior to the submission of an R01 application. However, funding mechanisms exist, both through the NIH and internally through academic institutions, that can support the collection and analysis of preliminary data needed to support a future grant application. At present, few mechanisms are available to support the complex simulations needed to support the conduct of an AD. Furthermore, even if a researcher has access to a funding mechanism, the researcher must also have access to, and support for, a biostatistician, bioinformaticist, or other expert able to conduct the simulation study. Consequently, a growing divide exists between industry and academia in the practicality and feasibility of conducting adaptive clinical trials.
Greater clarity is needed within the academic clinical trials community regarding how to remove these barriers in order to promote the use of appropriate ADs.
As one step toward addressing these concerns, the authors organized a workshop in November 2009. This ‘Scientific Advances in Adaptive Clinical Trial Designs’ workshop emerged from the adaptive clinical trial of high-dose coenzyme Q10 in amyotrophic lateral sclerosis (QALS) [28,31]. After the study ended, two members of the trial’s Data and Safety Monitoring Board (DSMB: C.C. and C.S.C.) and the trial’s senior biostatistician (B.L.) obtained funding for a workshop to advance the use of ADs in publicly funded research. The 1½-day workshop invited 50 representatives from the clinical trials community (see ‘Acknowledgments’), including participants from the NIH, FDA, European Medicines Agency (EMA), patient advocacy and nonprofit organizations, professional associations, and pharmaceutical companies.
The workshop had two specific aims. The first aim was to provide a forum for exploring the potential of adaptive clinical trial designs to achieve reliable results more quickly and with fewer resources than required by conventional designs. The second aim was to provide an opportunity for participants to make recommendations regarding future work to increase research, education, and coordinated activity related to ADs. As a result of the discussions at the workshop, and subsequent discussions at national meetings, several key recommendations have emerged for addressing the barriers that exist to implementing ADs in the academic trial setting. These are briefly summarized below.
Recommendation 1 (need for a better-defined taxonomy): more opinions should be gathered about the need to bring a commonly understood framework to the field of ADs
Although recent publications have more clearly delineated the definition of an AD, more clarification is needed. For example, as previously indicated, both the AD working group and the FDA support the general concept of ‘adaptive by design’. The FDA definition is, however, somewhat more general: ‘The term prospective here means that the adaptation was planned (and details specified) before data were examined in an unblinded manner … this can include plans that are introduced or made after the study has started if the blinded state of the personnel involved is unequivocally maintained when the modification plan is proposed’. This definition clearly allows for more flexibility, but is somewhat confusing because different individuals become unblinded at different points in a trial. For example, after an interim analysis, the unblinded statistician and the DSMB have seen unblinded data, but the principal investigator (PI) has not. If the PI, who has not seen unblinded data, asks for a design change that must be approved by the DSMB, which has seen unblinded data, it is not clear whether this fits the spirit of the FDA definition. Hence, although substantial progress has been made toward defining a taxonomy for clinical trial designs, no single agreed-upon taxonomy exists in the literature. If the experts in adaptive clinical trial design cannot agree on a consistent taxonomy, it is no surprise that the general scientific community remains confused. In our experience, many very knowledgeable clinical trialists do not understand the important, subtle differences between different types of ADs. Most importantly, many fail to distinguish the ADs that the FDA has classified as ‘well understood’ and ‘accepted’ from those classified as ‘less well understood’ or ‘not accepted’.
Thus, a great deal of confusion still exists in the general scientific community, and future discussion is needed to better clarify what distinguishes one type of AD from another.
Recommendation 2 (need to better quantify statistical risks): there is a need to better quantify the statistical risks (e.g., statistical bias, potential increase in type I error rates, and risk for covariate imbalance) associated with proposed adaptive approaches
Advocates and proponents of ADs need to move away from general, qualitative statements regarding advantages and disadvantages of ADs. Rather, more formal quantitative comparisons of the issues associated with alternative proposed adaptations are necessary. Despite the considerable amount of methodological work already done in this area, much work remains to be done. Because there are many possible adaptations, some will be more acceptable than others. The FDA draft guidance document characterization of ADs as ‘well understood’ or ‘less well understood’ reflects this view. It will be important to continue examining the proposed adaptations to classify them through a common metric. It also will be important for future researchers to provide guidance regarding whether a particular adaptation is acceptable for regulatory purposes, what evidence will be needed to gain such acceptance, and, if accepted, what issues must be addressed when implementing that type of design. This also underscores the need for a well-defined taxonomy that allows the field to classify which types of adaptations are acceptable, and which are not.
Recommendation 3 (better understanding of the concept of ‘adaptive by design’): investigators should minimize (or preferably avoid) ad hoc changes
The concept of ‘adaptive by design’ is unequivocally important. Investigators should also limit the number and complexity of the proposed adaptations. As previously stated, simulations to assess bias can only occur when the changes are prespecified and can be adequately described in advance. The literature contains many examples of studies that have implemented unplanned adaptations. Although not supported by the AD working group and FDA definitions, many individuals see merit in these types of designs. However, many of the concerns raised by these examples could be eliminated by giving more thought to potential adaptations during the planning stages of a trial. Thus, future educational efforts should focus on clarifying the importance of pre-specifying potential adaptations at the outset of a trial. Of course, ‘the best laid schemes o’ mice an’ men gang aft agley’. Some flexibility must be allowed when major unanticipated events occur during the course of clinical trials. However, researchers should be encouraged to think through reasonable possibilities that might occur during the design of a trial and consider potential adaptations accordingly.
Recommendation 4 (modifications to current NIH funding model): NIH should offer more recognition and funding for planning clinical trials that might benefit from ADs
The NIH is uniquely positioned to help promote and support the use of ADs in clinical trials. Participants in the workshop were particularly concerned about the challenge associated with the grant review process for trials proposing to use an AD. Proposals using an AD will require reviewers with substantial experience and expertise with ADs, a still relatively small core group. This will be critical because a grant reviewer uncomfortable with the unknowns associated with an AD may have trouble understanding the specific aspects of the proposed design, and this discomfort may lead to a low score. To help assure reviewers that the design is feasible, investigators should provide sufficient details regarding the proposed adaptations in their grant applications. However, explaining the nuances of an AD, including the complex simulation studies that are often required, is difficult within the space constraints of the new 12-page NIH grant application. Reviewers of grant proposals also need to understand how AD trials work, when they are beneficial, and how evaluating their progress differs from evaluating traditional trials. A grant proposal should not receive a positive score merely because a reviewer likes the fact that the trial uses something innovative. Similarly, a strong proposal should not get a negative score simply because none of the reviewers understand the aspects of the proposed adaptations.
Some type of alternative arrangement is needed within the application format to provide adequate specific details about the proposed adaptation. One possible solution would be to allow the inclusion of an appendix, which reviewers would be required to consider, summarizing the design and results of the simulation study conducted to validate the proposed design. Whenever possible, researchers should also be encouraged to pursue the publication of ADs in the literature, along with detailed simulation reports as electronic appendices. The peer review of such publications would allow the quality and performance of the design to be reviewed and documented prior to the review of the grant application.
The NIH should also implement a targeted program for developing tools associated with the use of ADs, such as software for the modeling and simulation required to assess a proposed adaptation. The use of an AD generally requires more (not less) upfront planning time. The development of a funding mechanism to support this type of design work, specifically to examine various options for clinical trials under consideration, would greatly help to advance the use of ADs in academic clinical trials. This investment could ultimately save resources by encouraging the use of simulations and modeling that lead to carefully planned trials that take far less time and many fewer resources than traditional trials. A great deal of recent progress has been achieved through industry partnerships with software developers. However, this software does not cover the entire range of possible adaptations, so future development is required.
ADs sometimes require adjustments to funding, but such changes often introduce logistical problems. The NIH should consider implementing flexible mechanisms that address long-term funding of clinical trials and commit to funding properly designed adaptive trials. Furthermore, a better understanding is emerging that one of the greatest benefits of ADs is the ability to declare a study futile much earlier in the process. Thus, ADs provide an avenue for directing available resources to better scientific advantage. This highlights one of the notable divides between academic and industry trials: industry trialists will readily stop a study that shows itself to be futile and reassign those resources to more promising areas. In contrast, the current structure of academic grant funding provides a strong disincentive to early termination of a trial for futility. An academic clinical trial group, especially one mostly or entirely reliant on NIH funding, faces an immediate loss of support for its funded positions if a planned multi-year study ends abruptly. This reality leads to an understandable reluctance among academic trialists to stop a study for futility until very late in the study life cycle. The academic arena needs to rethink this inherent disincentive to ending futile trials in any case, and, where applicable, ADs provide an efficient means with which to stimulate these discussions.
Recommendation 5 (better understand the impact on a DSMB): the use of ADs may require a different way of thinking about the structure and conduct of DSMBs
As ADs become more common, the role of the DSMB (or, equivalently, the Data Monitoring Committee (DMC)) may change. Future adaptive clinical trials have the potential to require a level of commitment from DSMB members that is beyond the current standard. A board’s knowledge of interim results might be perceived as a potential source of bias if the DSMB has too much leeway in implementing the adaptations. Again, this reiterates the importance of the ‘adaptive by design’ concept. Investigators need to provide clear rules and decision triggers to the DSMB to minimize the role of the DSMB’s judgment at the time the adaptations are implemented. However, the DSMB must also be prepared to make decisions based on both the pre-specified rules and any unexpected events that arise during the conduct of the trial. This added complexity greatly complicates the role of the DSMB. This issue has been addressed in the literature [32–34], but merits further discussion.
The use of ADs will also require greater efforts in the training of potential DSMB members. For example, statisticians who serve on DSMBs for trials with an AD should be familiar with the theory and practice of ADs. Such DSMBs should more often include individuals with data management expertise, since timely, high-quality data management is crucial for the successful implementation of an adaptive trial design. While data quality is always important, an AD places greater emphasis on timely resolution of data queries, which strongly affects the infrastructure of the information management systems used for a given trial. The worst-case scenario is a planned design change that is implemented but later found to have been based on incorrect data, where a different decision would have been made had the data been correct.
Recommendation 6 (need for education): there is a need for education of the clinical trials community regarding the use of ADs
Education about ADs is a major unmet need for the clinical trials community. Given limited resources, educating all stakeholders will be challenging and will require a collective, collaborative effort. One possible mechanism for meeting this challenge is additional forums that continue the dialogue about ADs. For example, the NIH could develop programs for educating and training researchers, reviewers, and potential DSMB members about areas that might benefit from the use of ADs, and about which ADs are appropriate in which settings. The AD working group has created many training presentations that are publicly available. The academic clinical trials community could seek broader representation within the existing AD working groups (which have focused more heavily on industry-sponsored trials). Alternatively, interested parties could form separate groups under the auspices of an appropriate umbrella organization such as the Society for Clinical Trials (SCT). Finally, the medical media (including medical journal editors and reviewers) should understand the issues associated with the use of adaptive trial designs and recognize that not everything called an 'AD' warrants the label according to the definitions presented in this article and published elsewhere.
Patient advocacy groups, professional associations, and nonprofit organizations could also collaborate on ways to educate professionals, patients, family members, and the media about ADs. Many investigators acknowledge the role of nonprofits as adjunct trial funders, as potential portals to trial subjects, and as liaisons with members of the public without whom there would be no clinical trial research. Because of the especially important role these organizations play in educating patient communities and advancing stakeholders' understanding of ADs, they are an important resource to incorporate into future educational activities.
Discussion
Clearly, interest in the use of ADs is increasing. In general, regulatory agencies will accept some ADs but are cautious about others, particularly for pivotal studies [9]. An adaptive trial generally requires much more upfront planning, both for the design itself and for the simulations typically required to demonstrate the design's validity. Yet properly planned ADs can lead to more efficient trials requiring less time and fewer resources overall than traditional designs.
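The kind of validation simulation mentioned above can be sketched in a few lines. This hypothetical example (not taken from the workshop materials) checks the overall type I error of a two-look design under the null: testing at the unadjusted two-sided 0.05 level at both an interim and the final analysis inflates the false-positive rate well above 0.05, which is exactly the sort of problem such a simulation is meant to detect before a protocol is finalized.

```python
import random
import math

def type1_error(crit, n_trials=40_000, looks=(50, 100), seed=2):
    """Estimate the overall type I error when the same critical value
    `crit` is applied at each interim and final look."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_trials):
        # Per-subject treatment-control differences under the null (variance 2).
        diffs = [rng.gauss(0, math.sqrt(2)) for _ in range(looks[-1])]
        # Reject if the z-statistic crosses the boundary at any look.
        if any(abs(sum(diffs[:n]) / math.sqrt(2 * n)) > crit for n in looks):
            rejections += 1
    return rejections / n_trials

naive = type1_error(crit=1.96)    # unadjusted 0.05-level boundary at both looks
pocock = type1_error(crit=2.178)  # Pocock boundary for two equally spaced looks
```

The naive estimate comes out near the known theoretical value of 0.083, well above the nominal 0.05, while the Pocock-adjusted boundary restores the overall error rate to roughly 0.05.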
While the development of new statistical methodology is still needed, appropriate statistical methods already exist for implementing a number of well-accepted ADs. Before any AD can be practically implemented, however, a number of logistical barriers must be overcome [35,36]. The AD working group developed a 'good practices' article that summarized lessons learned from several years of industry experience [33]. This article was provided as a handout at a conference to discuss the FDA Draft Guidance and includes an electronic attachment of a DMC charter that has been used effectively in practice. While many of these recommendations are also helpful to academic trialists, there are nuances of NIH-funded trials that have not yet been adequately addressed. The workshop described here was an attempt to initiate discussion within the academic trials community of how to address these important limitations.
Although issues related to infrastructure are often rate limiting, several recent steps have addressed this concern. One example is the creation of the National Institute of Neurological Disorders and Stroke (NINDS)-funded Network of Excellence in Neuroscience Clinical Trials (NeuroNEXT) [37], whose goal is to provide infrastructure to support the conduct of phase II studies in neuroscience. Another is the recently funded 'Accelerating Drug and Device Evaluation through Innovative Clinical Trial Design' project [38]. This NIH- and FDA-supported project, based at the University of Michigan, proposes to use ADs to optimize the design of several large trials being conducted within the NINDS-funded Neurological Emergency Treatment Trials (NETT) network. The development of such infrastructure dramatically increases the feasibility of using more novel trial designs, including ADs. Additional infrastructure-building efforts are needed in other areas to further advance the use of ADs, particularly in the publicly funded environment. Importantly, although the increased interest in ADs has brought many of these infrastructure issues to light (the need for more upfront planning, for better training of DSMB members, for efficient data management, etc.), the same issues are arguably just as important in traditional clinical trials. The increased consideration of ADs has placed much greater emphasis on addressing these issues across clinical trial research. It is critical to recognize their importance whether one is considering an adaptive or a traditional design, and it can be argued that ADs are making an extremely positive contribution to clinical trials in general by pushing these often overlooked issues to the forefront.
Acknowledgments
The authors are grateful for the generous participation and valuable contributions of the workshop participants: Keaven Anderson, PhD, Merck & Company, Inc.; Irina Antonijevic, MD, CHDI Foundation, Inc.; Barbara Araneo, PhD, Juvenile Diabetes Research Foundation; Sukirti Bagal, MD, National Organization for Rare Disorders; Sarah Baraniuk, PhD, the University of Texas School of Public Health; Monica Barnette, Palladian Partners, Inc.; Colin Begg, PhD, Memorial Sloan-Kettering Cancer Center; Steven Bramer, PhD, National Neurovision Research Institute; Erica Brittain, PhD, National Institute of Allergy and Infectious Diseases; Lucie Bruijn, PhD, the ALS Association; Bibhas Chakraborty, PhD, Mailman School of Public Health, Columbia University; Kathryn Chaloner, PhD, College of Public Health, the University of Iowa; Ken Cheung, PhD, Mailman School of Public Health, Columbia University; Emory Clark, JD, Foundation for Interdisciplinary Motor Neuron Medicine; Amy Comstock Rick, JD, Parkinson’s Action Network; Jason Connor, PhD, Berry Consultants; Robin Conwit, MD, National Institute of Neurological Disorders and Stroke; Jacqueline Corrigan-Curay, JD, MD, Office of the Director, National Institutes of Health; Simon Day, PhD, Roche Products Ltd; Patrice Desvigne-Nickens, MD, National Heart, Lung, and Blood Institute; Kay Dickersin, PhD, Center for Clinical Trials, Johns Hopkins Bloomberg School of Public Health; Valerie Durkalski, MPH, PhD, Department of Medicine, Medical University of South Carolina; David Eckstein, PhD, Office of Rare Diseases Research, National Institutes of Health; Brian Fiske, PhD, the Michael J. 
Fox Foundation for Parkinson’s Research; Karen Furie, MD, MPH, Harvard University, Massachusetts General Hospital; Wendy Galpern, MD, PhD, National Institute of Neurological Disorders and Stroke; Brenda Gaydos, PhD, Eli Lilly and Company; Nancy Geller, PhD, National Heart, Lung, and Blood Institute; Steve Gibson, the ALS Association; Steve Goodman, MD, PhD, Johns Hopkins School of Medicine; Jennifer Gorman, NIH Nobel Laureate Hall and Visitors Center, and Office of the Director, National Institutes of Health; Michael Hill, MD, MSc, FRCPC, University of Calgary; Karen Johnston, MD, MSc, University of Virginia; Petra Kaufmann, MD, MSc, National Institute of Neurological Disorders and Stroke; Franz Koenig, PhD, European Medicines Agency; Walter Koroshetz, MD, National Institute of Neurological Disorders and Stroke; Minjung Kwak, PhD, National Heart, Lung, and Blood Institute; Michael Manganiello, HCM Strategists, LLC; Elizabeth McKenna, PhD, Foundation Fighting Blindness, Inc.; Nancy Miller, PhD, Office of the Director, National Institutes of Health; Stephanie Moran, Juvenile Diabetes Research Foundation; Claudia Moy, PhD, the National Institute of Neurological Disorders and Stroke; Robert O’Neill, PhD, US Food and Drug Administration; Yuko Palesch, PhD, Medical University of South Carolina; Michael Proschan, PhD, National Institute of Allergy and Infectious Diseases; Bernard Ravina, MD, MSCE, University of Rochester; Stephen Rose, PhD, Foundation Fighting Blindness, Inc.; Ira Shoulson, MD, University of Rochester; Robert Silbergleit, MD, University of Michigan; Robert Temple, MD, US Food and Drug Administration; Ronnie Tepp, HCM Strategists, LLC; Peter Thall, PhD, the University of Texas MD Anderson Cancer Center; Seamus (J.L.P.) 
Thompson, PhD, Mailman School of Public Health, Columbia University; Veronica Todaro, Parkinson’s Disease Foundation; Daniel van Kammen, MD, PhD, CHDI Foundation, Inc.; Philip Wang, MD, DrPH, National Institute of Mental Health; Adam Wanner, Alpha-1 Foundation; and John Warner, PhD, CHDI Foundation, Inc.
Funding
The Scientific Advances in Adaptive Clinical Trial Designs Workshop was supported by the National Institute of Neurological Disorders and Stroke (award number R13NS065622) with contributing support from the NIH Office of Rare Diseases Research; the National Institute of Allergy and Infectious Diseases; and the National Heart, Lung, and Blood Institute. In addition, the Foundation for Interdisciplinary Motor Neuron Medicine provided seed money for planning the workshop. The following nonprofit organizations generously contributed support to implement the workshop and produce this summary: Alpha-1 Foundation, American Parkinson Disease Association, CHDI Foundation, Hope for ALS, Juvenile Diabetes Research Foundation International, The Michael J. Fox Foundation for Parkinson’s Research, National Neurovision Research Institute/Foundation Fighting Blindness, National Organization for Rare Disorders, Parkinson’s Action Network, The Parkinson Alliance, and Parkinson’s Disease Foundation.
Footnotes
Conflict of interest
The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the contributors from the National Institutes of Health or the nonprofit organizations.
References
- 1. Gallo P, Chuang-Stein C, Dragalin V, et al. Adaptive designs in clinical drug development – An executive summary of the PhRMA working group. J Biopharm Stat. 2006;16:275–83. doi: 10.1080/10543400600614742.
- 2. Chow S, Chang M. Adaptive Design Methods in Clinical Trials. Chapman & Hall/CRC; Boca Raton, FL: 2007.
- 3. Krams M, Burman CF, Dragalin V, et al. Adaptive designs in clinical drug development: Opportunities, challenges, and scope reflections following PhRMA's November 2006 workshop. J Biopharm Stat. 2007;17:957–64. doi: 10.1080/10543400701643764.
- 4. Chow SC, Chang M. Adaptive design methods in clinical trials – A review. Orphanet J Rare Dis. 2008;3:11. doi: 10.1186/1750-1172-3-11.
- 5. Coffey CS, Kairalla JA. Adaptive clinical trials: Progress and challenges. Drugs R D. 2008;9(4):229–42. doi: 10.2165/00126839-200809040-00003.
- 6. Bretz F, Branson M, Burman CF, Chuang-Stein C, Coffey CS. Adaptivity in drug discovery and development. Drug Dev Res. 2009;70:169–90.
- 7. Bretz F, Koenig F, Brannath W, Glimm E, Posch M. Adaptive designs for confirmatory clinical trials. Stat Med. 2009;28:1181–217. doi: 10.1002/sim.3538.
- 8. Dragalin V. Adaptive designs: Terminology and classification. Drug Inf J. 2006;40:425–35.
- 9. Food and Drug Administration. Guidance for industry: Adaptive design clinical trials for drugs and biologics. Draft guidance, 2010. Available at: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM201790.pdf (accessed 7 July 2010).
- 10. Jennison C, Turnbull BW. Group Sequential Methods with Applications to Clinical Trials. Chapman & Hall/CRC; Boca Raton, FL: 2000.
- 11. Garrett-Mayer E. The continual reassessment method for dose-finding studies: A tutorial. Clin Trials. 2006;3:57–71. doi: 10.1191/1740774506cn134oa.
- 12. Bornkamp B, Bretz F, Dmitrienko A, et al. Innovative approaches for designing and analyzing adaptive dose-ranging trials. J Biopharm Stat. 2007;17:965–95. doi: 10.1080/10543400701643848.
- 13. Krams M, Lees KR, Hacke W, et al. ASTIN: An adaptive dose-response study of UK-279,276 in acute ischemic stroke. Stroke. 2003;34:2543–49. doi: 10.1161/01.STR.0000092527.33910.89.
- 14. Zhang L, Rosenberger W. Adaptive randomization in clinical trials. In: Hinkelmann K, editor. Design and Analysis of Experiments, Special Designs and Applications. Vol. 3. John Wiley & Sons; Hoboken, NJ: 2012. pp. 251–282.
- 15. Wang SJ, Hung HMJ, O'Neill RT. Adaptive patient enrichment designs in therapeutic trials. Biom J. 2009;51:358–74. doi: 10.1002/bimj.200900003.
- 16. Temple R. Enrichment of clinical study populations. Clin Pharmacol Ther. 2010;88:774–78. doi: 10.1038/clpt.2010.233.
- 17. Van der Baan FH, Knol MJ, Klungel OH, et al. Potential of adaptive clinical trial designs in pharmacogenetic research. Pharmacogenomics. 2012;13:571–78. doi: 10.2217/pgs.12.10.
- 18. Ho TW, Pearlman E, Lewis D, et al. Efficacy and tolerability of rizatriptan in pediatric migraineurs: Results from a randomized, double-blind, placebo-controlled trial using a novel adaptive enrichment design. Cephalalgia. 2012;32(10):750–65. doi: 10.1177/0333102412451358.
- 19. Emerson SS, Fleming TR. Adaptive methods: Telling 'The Rest of the Story'. J Biopharm Stat. 2010;20:1150–65. doi: 10.1080/10543406.2010.514457.
- 20. Proschan MA. Sample size re-estimation in clinical trials. Biom J. 2009;51:348–57. doi: 10.1002/bimj.200800266.
- 21. Tsiatis AA, Mehta C. On the inefficiency of the adaptive design for monitoring clinical trials. Biometrika. 2003;90:367–78.
- 22. Mehta C, Pocock SJ. Adaptive increase in sample size when interim results are promising: A practical guide with examples. Stat Med. 2011;30:3267–84. doi: 10.1002/sim.4102.
- 23. Proschan MA. Two-stage sample size re-estimation based on a nuisance parameter: A review. J Biopharm Stat. 2005;15:559–74. doi: 10.1081/BIP-200062852.
- 24. Friede T, Kieser M. Sample size recalculation in internal pilot study designs: A review. Biom J. 2006;48:537–55. doi: 10.1002/bimj.200510238.
- 25. Zucker DM, Wittes JT, Schabenberger O, et al. Internal pilot studies II: Comparison of various procedures. Stat Med. 1999;18:3493–509. doi: 10.1002/(sici)1097-0258(19991230)18:24<3493::aid-sim302>3.0.co;2-2.
- 26. Maca J, Bhattacharya S, Dragalin V, et al. Adaptive seamless phase II/III designs: Background, operational aspects, and examples. Drug Inf J. 2006;40:463–73.
- 27. Stallard N, Todd S. Seamless phase II/III designs. Stat Methods Med Res. 2010;20:623–34. doi: 10.1177/0962280210379035.
- 28. Kaufmann P, Thompson JLP, Levy G, et al. Phase II trial of CoQ10 for ALS finds insufficient evidence to justify phase III. Ann Neurol. 2009;66:235–44. doi: 10.1002/ana.21743.
- 29. Levin B, Thompson JLP, Chakraborty B, et al. Statistical aspects of the TNK-S2B trial of tenecteplase versus alteplase in acute ischemic stroke: An efficient, dose-adaptive, seamless phase II/III design. Clin Trials. 2011;8:398–407. doi: 10.1177/1740774511410582.
- 30. Burton A, Altman DG, Royston P, et al. The design of simulation studies in medical statistics. Stat Med. 2006;25:4279–92. doi: 10.1002/sim.2673.
- 31. Levy G, Kaufmann P, Buchsbaum R, et al. A two-stage design for a phase II clinical trial of coenzyme Q10 in ALS. Neurology. 2006;66:660–63. doi: 10.1212/01.wnl.0000201182.60750.66.
- 32. Gallo P. Confidentiality and trial integrity issues for adaptive designs. Drug Inf J. 2006;40:445–50.
- 33. Gaydos B, Anderson KB, Berry D, et al. Good practices for adaptive clinical trials in pharmaceutical product development. Drug Inf J. 2009;43:539–56.
- 34. Chow SC, Corey R, Lin M. On the independence of data monitoring committees in adaptive design clinical trials. J Biopharm Stat. 2012;22:853–67. doi: 10.1080/10543406.2012.676536.
- 35. Quinlan JA, Krams M. Implementing adaptive designs: Logistical and operational considerations. Drug Inf J. 2006;40:437–44.
- 36. Quinlan J, Gaydos B, Maca J, et al. Barriers and opportunities for implementation of adaptive designs in pharmaceutical product development. Clin Trials. 2010;7:167–73. doi: 10.1177/1740774510361542.
- 37. The Lancet Neurology. NeuroNEXT: Accelerating drug development in neurology. Lancet Neurol. 2012;11:119. doi: 10.1016/S1474-4422(12)70008-X.
- 38. Meurer WJ, Lewis RJ, Tagle D, et al. An overview of the adaptive designs accelerating promising trials into treatments (ADAPT-IT) project. Ann Emerg Med. 2012;60:451–57. doi: 10.1016/j.annemergmed.2012.01.020.
