Abstract
Background
Identifying efficacious interventions for the prevention and treatment of human diseases depends on the efficient development and implementation of controlled clinical trials. Essential to reducing the time and burden of completing the clinical trial lifecycle is determining which aspects take the longest, which delay other stages, and where changes could improve resource utilization without diminishing scientific quality, safety, or the protection of human subjects.
Purpose
In this study we modeled time-to-event data to explore relationships between clinical trial protocol development and implementation times, as well as identify potential correlates of prolonged development and implementation.
Methods
We obtained time interval and participant accrual data from 111 interventional clinical trials initiated between 2006 and 2011 by NIH’s HIV/AIDS Clinical Trials Networks. We determined the time (in days) required to complete defined phases of clinical trial protocol development and implementation. Kaplan-Meier estimates were used to assess the rates at which protocols reached specified terminal events, stratified by study purpose (therapeutic, prevention) and phase group (pilot/phase I, phase II, and phase III/IV). We also examined several potential correlates of prolonged development and implementation intervals.
Results
Although phase grouping did not determine the development or implementation times of either therapeutic or prevention studies, we observed wide variation in protocol development times overall. Moreover, we detected a trend toward phase III/IV therapeutic protocols exhibiting longer development times (median 2 ½ years) and implementation times (more than 3 years). We also found that protocols exceeding the median number of days for completing the development interval had significantly longer implementation times.
Limitations
The use of a relatively small set of protocols may have limited our ability to detect differences across phase groupings, and some timing effects present for a specific study phase may have been masked by combining protocols into phase groupings. The presence of informative censoring, such as the withdrawal of protocols from development after investigator interest waned, complicates interpretation of the Kaplan-Meier estimates. Because this study constitutes a retrospective examination over an extended period of time, it does not allow precise identification of the factors affecting timing.
Conclusions
Delays not only increase the time and cost of completing clinical trials; they also diminish their usefulness by failing to answer research questions in a timely manner. We believe that research analyzing the time spent traversing defined intervals across the clinical trial protocol development and implementation continuum can stimulate business process analyses and re-engineering efforts, leading to reductions in the time from clinical trial concept to results and thereby accelerating progress in clinical research.
Introduction
The identification of efficacious interventions for the prevention and treatment of human diseases depends in large part on the efficient development and implementation of controlled clinical trials. This is particularly challenging in global, multi-center settings where the clinical trials process is complex, costly, time-consuming, and subject to a number of factors at legal, regulatory, policy, and operational levels. Consequently, delays in the process of protocol development and implementation can adversely impact costs and the ability to meet enrollment targets. Previous research found that 90% of industry-sponsored clinical trials experienced delayed enrollment [1], and 40% of oncology trials in the National Cancer Institute’s (NCI) cancer therapy evaluation program (CTEP) did not meet accrual goals due to delays in the development of protocols [2]. The impact of such delays is noteworthy, and the resource implications are important. The systematic study of clinical trials research processes can yield insights as to where efficiencies across the research lifecycle may be maximized to reduce performance burdens and costs, while accelerating the research process [3,4]. Several studies examining barriers to activation of clinical trials and low accrual performance across a variety of settings have revealed substantial use of resources during development and implementation with little scientific benefit [5,6]. Still others have found numerous opportunities to remove non-value-added steps prior to opening clinical trials in order to save time and increase efficiency [7,8]. The results of such studies provide important information on which aspects of the development pathway take the longest, may impact other stages, and can lead to better strategy management and resource utilization in clinical research.
Essential to reducing the time and burden of moving a clinical trial from conceptualization through completion is the identification of steps in the protocol lifecycle that are amenable to streamlining in ways that do not diminish scientific quality, safety, or the protection of human subjects. Every protocol decision has a downstream effect on speed and efficiency [9], and decisions beyond the scope of scientific relevance, safety, and ethical issues that delay the opening of clinical trials can have negative repercussions on the likelihood of attaining accrual goals [2]. This is especially germane to phase III clinical trials, which may offer the greatest promise for translation of effective interventions into clinical practice, but are frequently burdened by over-complexity, large resource requirements, and inefficient processes, thereby delaying their successful execution [10,11].
The U.S. National Institute of Allergy and Infectious Diseases (NIAID), within the U.S. National Institutes of Health (NIH), restructured its HIV/AIDS Clinical Trials Networks in 2006 and initiated the development of a comprehensive, utilization-focused evaluation system for this program. Stakeholders from across the six networks¹ collaboratively authored an evaluation framework and identified factors critical to the success of this large research initiative [12]. Within the context of this evaluation framework, increasing the efficiency and shortening the timelines of protocol development was seen as a high priority by system stakeholders (e.g., study participants, investigators, and collaborators). Our early studies of these networks focused on time-based analyses of the absolute and relative times required to traverse defined phases of clinical protocol development [13]. That work identified the time during which clinical research sites complete requirements to initiate enrollment as a significant source of delay in international trials; more importantly, it indicated that by integrating elements of business process modeling with the analysis of time-based events across a standardized clinical trial protocol lifecycle, we could begin to determine the factors contributing to lengthy trial development times [13]. The purpose of the present study is to build upon our initial work by modeling time-to-event data across both the clinical trial protocol development and implementation intervals to gain a more complete picture of the entire process. The approach used herein addresses questions regarding which intervals to measure and how to incorporate cases in which specific events had not yet been reached. We also sought to explore relationships between protocol development and implementation times, and to identify potential correlates of prolonged development and implementation.
Materials and Methods
Sample and Data Compilation
All therapeutic and prevention clinical trials protocols from the NIH HIV/AIDS Clinical Trials Networks that were ongoing or in development in 2006 through 2010 were eligible for inclusion in this study. Protocols were screened and prioritized at the network level prior to submission to the NIAID Division of Acquired Immunodeficiency Syndrome (DAIDS). Data were obtained from the protocol management component of the DAIDS Enterprise System (DAIDS-ES). The DAIDS-ES was launched in 2006 as a management information system modeled around a protocol lifecycle paradigm that features standardized protocol status, milestone, and event definitions [14]. Designed to harmonize protocol data elements reported by the network-specific data management centers, the DAIDS-ES provides an easily accessible, quality-assured source of specific date-based protocol tracking information from across the networks, and it enables the type of time interval analyses needed to answer the study questions.
From the DAIDS-ES database we identified 277 network and non-network protocols submitted for approval between February 20, 2006 and October 7, 2010. We delimited this set further by selecting the 165 network protocols, which included a variety of study types. From this initial sample, several criteria were employed to define a key set of protocols for analysis in order to facilitate accurate comparisons. First, we selected only “main” studies, filtering out sub-studies. Second, we selected only interventional protocols out of the main studies, removing observational, natural history, and diagnostic study types. These “main interventional” studies constituted the final set of 111 clinical trial protocols for analysis. All clinical trials protocols were implemented at the individual (vs. community) level.
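For concreteness, the sample-delimitation steps above can be expressed as a sequence of filters. The sketch below uses pandas with hypothetical column names and values (source, study_level, study_type); the actual DAIDS-ES export format is not described in this paper.

```python
import pandas as pd

# Hypothetical export of the 277 submitted protocols; column names and
# values are assumptions, not the actual DAIDS-ES schema.
protocols = pd.read_csv("daids_es_protocols.csv")

network = protocols[protocols["source"] == "network"]        # 165 network protocols
main = network[network["study_level"] == "main"]             # drop sub-studies
interventional = main[~main["study_type"].isin(
    ["observational", "natural history", "diagnostic"])]     # final 111 protocols
```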
Next, we operationalized two time intervals covering the “development” and “implementation” periods of the protocol process. The starting point of the development interval was the date the protocol was submitted for review by the DAIDS scientific review committees (the Clinical Science Review Committee [CSRC] for therapeutic studies or the Prevention Science Review Committee [PSRC] for prevention studies). Protocol development activities do occur prior to the start of this period and are typically managed virtually, with occasional face-to-face meetings where feasible; however, this pre-submission activity is not standardized or routinely recorded across networks. The terminal event of the development interval was the date the protocol was approved by DAIDS to open to accrual. The implementation interval was defined as the period from the date a protocol opened to accrual to the date it closed to accrual.
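As a minimal sketch of how these two intervals and their censoring indicators might be derived from protocol dates, consider the following; the record fields and the cutoff date are hypothetical, and a missing date signifies that the event had not occurred by the administrative cutoff.

```python
from datetime import date

CUTOFF = date(2010, 10, 7)  # hypothetical administrative censoring date

# Illustrative record; field names are assumptions, not the DAIDS-ES schema.
protocol = {"submitted": date(2007, 3, 1), "approved": date(2009, 6, 15),
            "opened": date(2009, 8, 1), "closed": None}

def interval_days(start, end, cutoff=CUTOFF):
    """Return (duration in days, event observed), right-censoring at the
    cutoff when the terminal event has not yet been reached."""
    observed = end is not None
    return ((end if observed else cutoff) - start).days, observed

dev_days, dev_event = interval_days(protocol["submitted"], protocol["approved"])
if protocol["opened"] is not None:  # only protocols that opened enter implementation
    impl_days, impl_event = interval_days(protocol["opened"], protocol["closed"])
```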
In order to assess differences in the timing patterns of selected protocols, we stratified the sample by study type (therapeutic or prevention) and by clinical trial phase. To ensure adequate numbers of cases in the subgroup analyses, we created three phase groupings of protocols for each study type: (a) pilot and phase I protocols, combined; (b) phase II protocols (except HIV vaccine phase IIb trials, which were included with phase III/IV studies); and (c) phase III and phase IV protocols, combined.
Statistical Analyses
Time intervals were subject to right-censoring, either because our summaries were being prepared before trial completion (administrative censoring) or because a decision was made to withdraw a trial from development or stop it for operational futility (informative censoring). We provide Kaplan-Meier (K-M) estimates of the rates at which protocols reached a specified terminal event. Since withdrawals and stoppages are unlikely to be statistically independent forms of censoring, K-M estimates do not have the same clear interpretation as they would in the presence of independent censoring. Accordingly, we limit the analysis to descriptive summaries and make no assertions regarding the statistical significance of between-curve differences.
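For illustration, the stratified K-M estimates described here could be computed as follows, assuming the Python lifelines library; the durations and event indicators below are invented stand-ins for the per-protocol intervals defined above.

```python
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

# Toy data: days in the development interval, whether the terminal event
# (approved to open) was reached (1) or the case was censored (0), and stratum.
df = pd.DataFrame({
    "days":  [412, 910, 655, 1200, 530, 840],
    "event": [1, 0, 1, 0, 1, 1],
    "phase_group": ["Pilot/Phase I", "Phase II", "Pilot/Phase I",
                    "Phase III/IV", "Phase II", "Phase III/IV"],
})

ax = plt.subplot(111)
kmf = KaplanMeierFitter()
for group, sub in df.groupby("phase_group"):
    kmf.fit(sub["days"], event_observed=sub["event"], label=group)
    kmf.plot_survival_function(ax=ax)  # one step curve per phase grouping
ax.set_xlabel("Days from submission")
ax.set_ylabel("Proportion not yet approved to open")
```

Because the withdrawals and stoppages noted above constitute informative censoring, curves produced this way serve only as descriptive summaries, consistent with the analysis plan.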
Results
A total of 111 main interventional HIV/AIDS therapeutic and prevention protocols from the clinical trials networks were eligible for inclusion in the analysis set. Table 1 summarizes the number of protocols that entered both the development and implementation intervals, stratified by study type and phase grouping, and the number that had not yet reached the respective terminal events (i.e., open and closed to accrual). As shown, few protocols reached the implementation terminal event during the study period. The proportions of censored cases were, however, fairly consistent across the study types and phase groupings.
Table 1.
Summary of cases, events, and censored cases, stratified by protocol type and phase group.
| | Development Interval | | Implementation Interval | |
|---|---|---|---|---|
| | Number Entering Interval | Number (%) Censored | Number Entering Interval | Number (%) Censored |
| Therapeutic | | | | |
| Pilot/Phase I | 27 | 11 (40.7) | 15 | 7 (46.7) |
| Phase II | 14 | 2 (14.3) | 10 | 7 (70.0) |
| Phase III/IV | 21 | 10 (47.6) | 11 | 8 (72.7) |
| Total | 62 | 23 (37.1)ᵃ | 36 | 22 (61.1) |
| Prevention | | | | |
| Pilot/Phase I | 28 | 7 (25.0) | 20 | 7 (35.0) |
| Phase II | 15 | 3 (20.0) | 12 | 5 (41.7) |
| Phase III/IV | 6 | 2 (33.3) | 4 | 2 (50.0) |
| Total | 49 | 12 (24.5)ᵇ | 36 | 14 (38.9) |
| Overall | 111 | 35 (31.5) | 72 | 36 (50.0) |

ᵃ Informatively censored cases = 12
ᵇ Informatively censored cases = 6
Estimates of the distributions of development times did not exhibit marked differences across the three phase groupings (pilot/phase I, phase II, and phase III/IV) in the pooled set of protocols, and the same was true of the distributions of implementation times. However, the survival curves presented in Figures 1a and 2a suggest that both development and implementation times for therapeutic trials tend to be longer for phase III/IV trials than for pilot/phase I trials. No clear trends appear in Figures 1b and 2b for prevention trials.
Figure 1.
Survival distributions for development interval, stratified by phase grouping, for therapeutic and prevention protocols (numbers of protocols not having reached end of development at 0, 400, 800, and 1,200 days from start are shown below).
Figure 2.
Survival distributions for implementation interval, stratified by phase grouping, for therapeutic and prevention protocols (numbers of protocols not having reached end of accrual at 0, 400, 800, and 1,200 days from start are shown below).
An obvious question in monitoring the development and management of a clinical trial research program is whether prolonged development time is useful for forecasting prolonged implementation time. Each protocol was coded as either “at/below” or “above” the median development time for its respective study type and phase grouping. Pooling therapeutic and prevention trials, Figure 3 displays K-M estimates of implementation-time distributions according to whether development times were at or below versus above the medians, for pilot/phase I and phase II protocols, respectively. There was indeed evidence of such an association, although the data were too sparse to attempt estimates for phase III/IV protocols.
Figure 3.
Survival distributions for implementation interval, stratified by development interval completion, for Phase I/Pilot and Phase II protocols (numbers of protocols not having reached end of accrual at 0, 400, 800, and 1,200 days from start are shown below).
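A sketch of the median-split coding used for Figure 3 is shown below; the data frame and its column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Toy development times; a real analysis would use the per-protocol intervals.
df = pd.DataFrame({
    "study_type":  ["therapeutic"] * 4 + ["prevention"] * 4,
    "phase_group": ["Pilot/Phase I", "Pilot/Phase I", "Phase II", "Phase II"] * 2,
    "dev_days":    [300, 620, 410, 980, 250, 540, 380, 700],
})

# Median development time within each study type x phase group stratum,
# then code each protocol relative to its own stratum's median.
stratum_median = df.groupby(["study_type", "phase_group"])["dev_days"].transform("median")
df["dev_split"] = np.where(df["dev_days"] <= stratum_median,
                           "at/below median", "above median")
# df["dev_split"] then defines the strata for the implementation-interval K-M curves.
```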
Finally, we examined several potential correlates of prolonged development and implementation. We found that phase III/IV therapeutic trials had significantly more study agents than the phase III/IV prevention trials, F(1, 19) = 6.31, p < .05, requiring significantly more clinical trials agreements, F(1, 19) = 5.10, p < .05. Moreover, phase III/IV therapeutic trials had a substantially larger proportion of non-US-based sites compared to the other phase groupings of therapeutic trials and the phase III/IV prevention trials group, χ2(1) = 27.81, p < .001.
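The comparisons reported here are standard one-way ANOVA F-tests and a chi-square test of independence; a sketch using scipy follows. The counts are invented for illustration (the degrees of freedom F(1, 19) imply 21 phase III/IV trials in total, but the underlying values are not reported in the paper).

```python
from scipy import stats

# Invented study-agent counts per phase III/IV trial, for illustration only.
therapeutic_agents = [4, 3, 5, 2, 4, 3, 6, 4, 3, 5, 4, 2, 5, 3, 4]
prevention_agents  = [2, 1, 2, 1, 2, 1]

f_stat, p_value = stats.f_oneway(therapeutic_agents, prevention_agents)

# Chi-square test on a 2x2 table of site location (non-US vs US) by group;
# counts again invented for illustration.
table = [[120, 45],   # phase III/IV therapeutic: non-US, US
         [10,  60]]   # comparison group:         non-US, US
chi2, p, dof, expected = stats.chi2_contingency(table)
```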
Discussion
This study examined time-to-event data for a set of NIH HIV/AIDS Clinical Trials Network protocols, stratified by study type and phase grouping, across discrete intervals in the protocol lifecycle. Although neither study type (i.e., prevention versus therapeutic) nor phase grouping determined protocol development or implementation times, we observed wide variation in protocol development times overall. Moreover, we found that the HIV/AIDS phase III/IV therapeutic trials had the longest development times and took the longest to enroll: the median development time for phase III/IV therapeutic protocols was nearly 2 ½ years, and the median time required to enroll these trials exceeded 3 years. Although there are unique disease-specific aspects of oncology and HIV/AIDS trials that make direct comparisons difficult, the pattern of timing delays found in our study is similar to that found in research examining the activation of phase III oncology trials [2,7]. The variable and lengthy development times are noteworthy given that the trial networks, by design, have well-established and familiar processes and procedures for protocol review and approval. Considering the financial and human resource costs in relation to the potential value of successfully completed phase III trials to clinical practice [10,11], this trend, if valid, suggests that efforts to shorten the development times of therapeutic phase III/IV protocols would likely yield operational and scientific benefits.
Several factors may contribute to our observation of longer development times for phase III/IV protocols. First, in earlier studies we observed that, on average, non-US-based sites took more than three times as long as US sites to complete regulatory requirements [13]. Among the protocol groups studied herein, the proportion of international sites was greatest for the phase III/IV therapeutic protocols. Second, the phase III/IV therapeutic protocols often involved greater numbers of study products from multiple industrial partners. Third, the time to develop and negotiate the requisite clinical trial agreements may also have contributed to increased development times. Lastly, a procedural difference whereby therapeutic protocols are often submitted at an earlier stage of development than prevention trials may have inflated the development time computations for these studies, as there is frequently a need for greater clarification and refinement of such protocols. Although all are plausible contributors, more data are needed to establish their importance convincingly and to provide a basis for system changes to expedite trial activation.
Another finding of this study was the effect of increased development time upon implementation time, after adjusting for study type and phase grouping. We found that protocols exceeding the median number of days for completing the development interval took significantly longer to complete the implementation interval. Lengthy development times are common across a wide range of clinical trials and have been implicated in protocols not meeting minimum accrual goals [2,8,10,15,16]. Although the potential disruptors to trial implementation are numerous, the degree to which protracted development compounds lengthy implementation may be an important system issue to address in accelerating trial conduct.
This report is our second look at time-to-event analyses of protocols from this clinical trials program. Compared to our initial findings [13], our current analyses found longer protocol development times. This is likely because our earlier results were obtained from a much smaller protocol dataset that included only completed protocols, as well as observational studies and sub-studies; both of the latter study types typically develop in much shorter time frames than the main interventional studies that make up the newer and larger protocol dataset investigated here.
Several limitations of this study are worth noting. First, the set of protocols was relatively small, which limited our ability to detect differences across phase groupings. Although the grouping of protocols by phase appeared reasonable, some timing effects present for a specific study phase (i.e., I, II, IIa, IIb, etc.) may have been masked by combining protocols. Second, the time for protocols to complete the development and implementation process is long, so many of the protocols are still in process. While we employed survival analysis to account for cases where information was not yet available, and thereby avoid a completion bias in our estimates, a high proportion of cases in each subgroup was censored. Third, because this study constitutes a retrospective examination over an extended period of time (2006-2010), we do not know whether the factors affecting protocol timing early in the observation period were the same as those operating more recently; as such, it is difficult to identify precisely which factors have the greatest impact. Finally, the multi-factorial reasons for delay in protocol development are not systematically captured in a common database across protocols (i.e., the DAIDS-ES); capturing them would aid future analyses and help direct process improvement and re-engineering efforts.
Despite these limitations, this inquiry has several strengths, notably: (a) the precision in defining and stratifying the protocol data set; (b) the fact that protocol status and milestone definitions (process intervals) are standard for all of the protocols; (c) the reliability of time-to-event data from a carefully monitored, quality-assured database; and (d) the maturity of the HIV/AIDS trials networks as research organizations with established protocol review, development, and oversight processes. It is common, if not universal, practice for trialists to review their experience to identify past challenges and adjust future protocol development and implementation accordingly. The question of interest to us here was whether it is possible to go beyond that constructive learning from cases (i.e., anecdotes) and bring to bear principles of management science to measure processes systematically and identify consistent patterns permitting active intervention and system-wide improvement [17]. The results of our initial attempts are reported here. It is fair to conclude that the challenges are considerable, beginning with the reality that observations are not plentiful and the data are noisy. We continue to believe, however, that the goal of making trials more efficient is best pursued through systematic study rather than case study alone.

We posit that research analyzing the time spent traversing defined intervals across the trial development and implementation continuum can inform and help guide others who seek to accelerate trials. We recommend systematic approaches beginning with analyses of “as is” processes, using defined process markers [18] to identify the elements whose resource demands exceed their value and that can feasibly be re-engineered within organizational constraints. Though the core components of clinical trial development are essentially the same for all, it is the specific operational processes (e.g., scientific and ethical review, protocol writing/revision, partner negotiations) within a particular trial environment that determine the rate-limiting steps and resource consumption that should drive re-engineering. Further, such studies, done correctly, can themselves be costly and time-consuming, and are therefore infeasible without a sustained commitment to partnership between management and scientific leadership around defined goals. Management must provide support for the work, including systems analysis expertise, tools, and dedicated funding; investigators need to engage in the work by informing the process analysis to generate meaningful, usable results.

Successful work of this type has catalyzed improvements in ethical review (IRB) processes and, for some institutions, led to measurable reductions in the time consumed by this oft-cited barrier to clinical research efficiency [19]. An organizational model of success can be seen in the efforts of the National Cancer Institute (NCI), which has begun to show that, with the sustained engagement of vital stakeholders, a strategic plan to shorten the timeline to trial activation can yield demonstrable efficiencies and measurable improvements. Simulated economic models show that for large multi-center clinical trials, a 2-month reduction in planning and development and a 6-month reduction in enrollment result in overall trial cost reductions of 0.4% and 1.6%, respectively [20].
Taken together, we believe that thoughtful systematic analysis and re-engineering of rate-limiting processes can support better scientific strategy and resource management broadly, but perhaps particularly in NIH-sponsored programs where research is supported under time-limited awards.
Acknowledgments
This project has been funded in whole or in part with funds from the United States Government Department of Health and Human Services, National Institutes of Health (NIH), National Institute of Allergy and Infectious Diseases (NIAID), under: 1) Grant U01AI068614, “Leadership Group for a Global HIV Vaccine Clinical Trials Network” (Office of HIV/AIDS Network Coordination), to the Fred Hutchinson Cancer Research Center; and 2) a subcontract to Concept Systems, Inc. under Contract N01AI-50022, “HIV Clinical Research Support Services”.
Footnotes
¹ The six NIH HIV/AIDS Clinical Trials Networks include the AIDS Clinical Trials Group (ACTG), the HIV Prevention Trials Network (HPTN), the HIV Vaccine Trials Network (HVTN), the International Maternal Pediatric Adolescent AIDS Clinical Trials (IMPAACT), the International Network for Strategic Initiatives in Global HIV Trials (INSIGHT), and the Microbicide Trials Network (MTN).
References
1. Getz KA, Wenger J, Campo RA, et al. Assessing the impact of protocol design changes on clinical trial performance. Am J Ther. 2008;15(5):450–457. doi: 10.1097/MJT.0b013e31816b9027.
2. Cheng SK, Dietrich MS, Dilts DM. A sense of urgency: evaluating the link between clinical trial development time and the accrual performance of cancer therapy evaluation program (NCI-CTEP) sponsored studies. Clin Cancer Res. 2010;16(22):5557–5563. doi: 10.1158/1078-0432.CCR-10-0133.
3. Getz K. Protocol design trends and their effect on clinical trial performance. RAJ Pharma. 2008:315–316.
4. Kaitin KI, editor. Growing protocol design complexity stresses investigators, volunteers. Tufts Center for the Study of Drug Development Impact Report. 2008 Jan-Feb;10(1).
5. Kitterman DR, Cheng SK, Dilts DM, et al. The prevalence and economic impact of low-enrolling clinical studies at an academic medical center. Acad Med. 2011;86(11):1360–1366. doi: 10.1097/ACM.0b013e3182306440.
6. Wang-Gillam A, Williams K, Novello S, et al. Time to activate lung cancer clinical trials and patient enrollment: a representative comparison study between two academic centers across the Atlantic. J Clin Oncol. 2010;28(24):3803–3807. doi: 10.1200/JCO.2010.28.1824.
7. Dilts DM, Sandler AB. Invisible barriers to clinical trials: the impact of structural, infrastructural, and procedural barriers to opening oncology clinical trials. J Clin Oncol. 2006;24(28):4545–4552. doi: 10.1200/JCO.2005.05.0104.
8. Dilts DM, Cheng SK, Crites JS, et al. Phase III clinical trial development: a process of chutes and ladders. Clin Cancer Res. 2010;16(22):5381–5389. doi: 10.1158/1078-0432.CCR-10-1273.
9. Getz KA. The heavy burden of protocol design. Appl Clin Trials. 2008;17(3):38–40.
10. Dilts DM, Sandler AB, Cheng SK, et al. Steps and time to process clinical trials at the Cancer Therapy Evaluation Program. J Clin Oncol. 2009;27(11):1761–1766. doi: 10.1200/JCO.2008.19.9133.
11. Dilts DM, Sandler AB, Baker M, et al. Processes to activate phase III clinical trials in a Cooperative Oncology Group: the case of Cancer and Leukemia Group B. J Clin Oncol. 2006;24(28):4553–4557. doi: 10.1200/JCO.2006.06.7819.
12. Kagan JM, Kane M, Quinlan KM, et al. Developing a conceptual framework for an evaluation system for the NIAID HIV/AIDS clinical trials networks. Health Res Policy Syst. 2009;7(12). doi: 10.1186/1478-4505-7-12.
13. Kagan JM, Rosas S, Trochim WMK. Integrating utilization-focused evaluation with business process modeling for clinical research improvement. Res Eval. 2010;19(4):239–250. doi: 10.3152/095820210X12827366906607.
14. Kagan JM, Gupta N, Varghese S, et al. The NIAID Division of AIDS enterprise information system: integrated decision support for global clinical research programs. J Am Med Inform Assoc. 2011;18(Suppl 1):i161–165. doi: 10.1136/amiajnl-2011-000114.
15. Cheng SK, Dietrich MS, Dilts DM. Predicting accrual achievement: monitoring accrual milestones of NCI-CTEP-sponsored clinical trials. Clin Cancer Res. 2011;17(7):1947–1955. doi: 10.1158/1078-0432.CCR-10-1730.
16. McDonald AM, Knight RC, Campbell MK, et al. What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies. Trials. 2006;7(1):9. doi: 10.1186/1745-6215-7-9.
17. Dilts DM, Rosenblum D, Trochim WM. A virtual national laboratory for reengineering clinical translational science. Sci Transl Med. 2012;4(118):118cm2. doi: 10.1126/scitranslmed.3002951.
18. Trochim W, Kane C, Graham MJ, Pincus HA. Evaluating translational research: a process marker model. Clin Transl Sci. 2011;4(3):153–162. doi: 10.1111/j.1752-8062.2011.00291.x.
19. McJoynt TA, Hirzallah MA, Satele DV, et al. Building a protocol expressway: the case of Mayo Clinic Cancer Center. J Clin Oncol. 2009;27(23):3855–3860. doi: 10.1200/JCO.2008.21.4338.
20. Eisenstein EL, Collins R, Cracknell BS, et al. Sensible approaches for reducing clinical trial costs. Clin Trials. 2008;5(1):75–84. doi: 10.1177/1740774507087551.