Clinical and Translational Science. 2015 Nov 17;8(6):655–661. doi: 10.1111/cts.12352

Accrual Index: A Real‐Time Measure of the Timeliness of Clinical Study Enrollment

Lauren Corregano 1, Katelyn Bastert 1, Joel Correa da Rosa 2, Rhonda G Kost 1
PMCID: PMC4703441  NIHMSID: NIHMS730009  PMID: 26573223

Abstract

Background

Achieving timely accrual into clinical research studies remains a challenge for clinical translational research. We developed an evaluation measure, the Accrual Index (AI), normalized for sample size and study duration, using data from the protocol and study management databases. We applied the AI retrospectively and prospectively to assess its utility.

Methods

Accrual Target, Projected Time to Accrual Completion (PTAC), Evaluable Subjects, Dates of Recruitment Initiation, Analysis, and Completion were defined. AI is (% Accrual Target accrued/% PTAC elapsed). Changes to recruitment practices were described, and data extracted from study management databases.

Results

December 2014 (or final) AI was analyzed for 101 studies initiating recruitment from 2007 to 2014. Median AI was ≥1 for protocols initiating recruitment in 2011, 2013, and 2014. The AI varied widely for studies pre‐2013. Studies with AI > 4 utilized convenience samples for recruitment. Data‐justified PTAC was refined in 2013–2014 after which the AI range narrowed. Protocol characteristics were not associated with study AI.

Conclusion

Protocol AI reflects the relative agreement between accrual feasibility assessment (PTAC), and accrual performance, and is affected by recruitment practices. The AI may be useful in managing accountability, modeling accrual, allocating recruitment resources, and testing innovations in recruitment practices.

Keywords: participant recruitment; clinical research; translational research; clinical trial; clinical research methods

Introduction

Timely accrual of participants into clinical trials remains a critical bottleneck in the completion of clinical translational research.1, 2 Poor accrual at US academic clinical research sites incurs economic costs,3, 4 jeopardizes funding,5, 6 and delays the translation of research discoveries into practices that improve health. Sponsored clinical trials and investigator‐initiated studies face similar challenges in tracking and achieving study accrual.7 In 2013, to incentivize timely enrollment of studies at National Institutes of Health (NIH)‐funded US academic medical centers, the Evaluation Committee of the National Center for Advancing Translational Sciences (NCATS)/CTSA proposed specific metrics to be reported by Clinical Translational Science Awards (CTSAs), such as the number of studies completing accrual “on time.”8 The committee further suggested that grantees develop evaluation methods to provide information on program performance and milestones achieved, both to hold grantees accountable and to guide program development. To that end, in its latest request for applications, NCATS incorporated the evaluation of “accrual success” into its review criteria for clinical research;9 however, no specific measures of accrual success were proposed.

Although principal investigators may be aware of the accrual status of their own studies, evaluation at the level of institutional or program management of whether a clinical study has met a targeted enrollment deadline is typically retrospective, assessing at the end of the proposed enrollment period whether 100% of the expected accrual has been achieved. With such a model, institutions miss opportunities for early intervention and are slow to recognize systemic problems that require correction of accrual trajectories. The need for increased accountability regarding accrual for all studies, and the lack of simple predictors of accrual success, highlight the need for additional methods and measures to manage and analyze accrual, including the capability for real‐time assessment of the timeliness of accrual, so that effort and resources can be directed rationally where they are most needed.

Previously we described the creation of a data‐rich recruitment core infrastructure to provide recruitment expertise and support, data‐driven feasibility and recruitment planning, recruitment services, and detailed data capture for performance evaluation and improvement.10 The core reported that the median time until the first screening visit was 10 days; however, we did not report an assessment of accrual success at that time. Here we build on that core infrastructure model to describe a simple, novel measure, the Accrual Index (AI), which can be derived from data readily extracted from the study design and the recruitment plan. To normalize for differences across protocols in sample size and in the anticipated duration of recruitment efforts, we created an equation that expresses the progress of accrual relative to the proportion of the proposed enrollment time line elapsed, capturing the timeliness of accrual at any point during the protocol enrollment period. We applied the AI retrospectively to evaluate the effectiveness of our recruitment performance and practices, and prospectively to illustrate the potential utility of the Index.

Methods

Research ethics

This research was reviewed and approved by The Rockefeller University Institutional Review Board (IRB) prior to initiation of the work. The volunteer and participant data related to recruitment and enrollment activities were collected and analyzed for research purposes under an IRB‐approved research protocol.

Infrastructure

The Rockefeller University Center for Clinical and Translational Science (CCTS) supports an electronic platform for protocol writing, IRB management, and clinical research study management,11 in which IRB‐approved screening and Accrual Targets and proposed time lines for completion of accrual are captured, and in which participant screening and enrollment outcomes for each protocol are tracked in real time. In parallel, the CCTS‐supported recruitment core (CRROSS) provides recruitment expertise and services for detailed recruitment planning, and CRROSS tracks participant recruitment and enrollment activities and outcome data in the CRROSS Database.6 CRROSS staff routinely integrate recruitment data and enrollment outcome data to assess recruitment effectiveness.

Recruitment consultation services provided by CRROSS include the development of a study‐specific projected time line for completion of accrual. The estimation of the projected time line has undergone iterative refinement since the inception of CRROSS. The projection was initially based on the team's estimation of the burdens and incentives for participants as incorporated into the protocol plan and the apparent availability of the target population. Prior to 2011, investigators received advice during recruitment consultations and proposed predicted time lines for the completion of accrual of their studies without explicit justification, usually described as an anticipated rate of enrollment per year. Starting in 2011–2012, in consultation with investigators, CRROSS proactively elicited information about the timing of anticipated vacations, conference travel, and competing grant deadlines with which to refine projected times for accrual completion provided to investigators. As a further refinement, in 2013–2014, CRROSS began to systematically elicit additional specific information for planning purposes (e.g., the number of appointments research team members would be available for per week for screening; a specific calendar of availability; research assay readiness; competing priorities [protocols, projects, grants, and manuscripts]; planned leaves of absence for staff/coordinators or key research technicians [vacation, maternity leave, other leaves of absence]; seasonal variation in the disease under study; historical seasonal lulls in recruitment [archived data]; and others). Since CRROSS manages the scheduling of screening appointments for some investigators,10 availability estimates are readily verified. Under this refinement, in collaboration with investigators, CRROSS staff calculated and offered investigators explicitly justified, protocol‐specific time lines projecting the completion of accrual.
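The planning inputs described above (screening capacity, attrition, planned absences) lend themselves to a rough capacity-based projection. The sketch below is our illustration of such a back-of-the-envelope estimate, not the actual CRROSS calculation; the dropout rate, screen-failure rate, and screening capacity are hypothetical inputs.

```python
from math import ceil

def estimate_ptac(accrual_target, dropout_rate, screen_fail_rate,
                  screens_per_month, blackout_months=0):
    """Rough Projected Time to Accrual Completion (months).

    Works backward from the Accrual Target (evaluable participants)
    to the total number of screens needed, then divides by the team's
    screening capacity. All rates are hypothetical illustrations.
    """
    # Evaluables needed -> enrollments needed (inflate for dropouts)
    enrollments = ceil(accrual_target / (1.0 - dropout_rate))
    # Enrollments needed -> screens needed (inflate for screen failures)
    screens = ceil(enrollments / (1.0 - screen_fail_rate))
    # Months of active screening, plus months the team is unavailable
    return ceil(screens / screens_per_month) + blackout_months
```

For example, a target of 25 evaluables with 20% dropout, 50% screen failure, 8 screens per month, and a 2-month planned absence projects to roughly 10 months.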

Development of the AI

The AI was developed to frame the progress toward the accrual goal in the context of the progression of the allotted time for accrual: how well is accrual performance matching the consultation‐based prediction? Definitions were chosen carefully to avoid underestimating the time needed to reach the Accrual Target, or underestimating the number of evaluable participants prior to study end points. Protocol data, including sample size, dates for recruitment milestones, and recruitment and enrollment outcome data, were extracted from the IRB‐approved protocol submission and the study management and recruitment management databases at the Rockefeller University CCTS.

Definitions

Additional detail is provided for each definition in Table 1.

Table 1.

Definitions used in assessing accrual outcomes

Accrual Target (integer): Number of evaluable participants required to satisfy the sample size (for power calculation) defined in the IRB‐approved protocol.

Projected Time to Accrual Completion (PTAC) (months): Predicted duration of the recruitment period necessary to attain the Accrual Target, from the date of initiation of recruitment efforts until the Accrual Target is achieved, but estimated based on Total Enrollment, including participants who go on to drop out, as they contribute to the time line.

  • For studies starting recruitment in 2013–2014 with CRROSS assistance, PTAC was extracted from the explicitly justified projected time to enrollment in the recruitment plan.

  • For studies initiating recruitment prior to 2013, or in 2013–2014 without CRROSS assistance, the planned total enrollment (all signing the informed consent form) and the planned enrollment in the next year stated in the protocol were used to project the PTAC.

Evaluable Subjects Enrolled: The number of participants having completed the study, plus those actively on‐study at the time of accrual assessment. This total does not include participants who have screen‐failed or withdrawn at the time of accrual assessment.

Recruitment Initiation Date: The date, subsequent to IRB approval, when the investigator requested that recruitment begin. Since 2010, the recruitment initiation date has been documented for CRROSS‐assisted studies. Pre‐2010, and for studies without CRROSS assistance, the median time to first enrollment for that year12 is used to estimate the start date.

Date of First Enrollment: The date on which the first participant enrolled in a study and signed the informed consent for that study.

Enrollment Closure Date: The date after which the study is officially closed to enrollment and no further recruitment or enrollment activities take place.

Recruitment Time Elapsed: The number of days elapsed from the Recruitment Initiation Date until the date of analysis, formal cessation of recruitment, full accrual, or study closure, whichever comes first, divided by 30.

Percentage Accrual: (Number of Evaluables (on‐study + completed) / Accrual Target) x 100%

Accrual Target is defined as the number of evaluable participants required to satisfy the sample size (for power calculation) defined in the IRB‐approved protocol. In the absence of a power calculation, Accrual Target is the target enrollment stated in the protocol.

Projected Time to Accrual Completion (PTAC) (months) is the planned duration of the recruitment period (in months) necessary to attain the target number of evaluable participants. PTAC is calculated based on the estimated total number of volunteers to sign consent on the way to achieving the Accrual Target, and the rate at which the team anticipates being able to enroll participants. PTAC is projected from the date of initiation of recruitment until the Accrual Target is expected to be achieved.

Evaluable Subjects Enrolled, for each protocol, is the real‐time total number of participants who have either completed all study procedures or are still enrolled on study at the time of accrual assessment.

The Recruitment Initiation Date is defined as the date, subsequent to IRB approval, on which an investigator requested recruitment efforts to begin.

Date of First Enrollment for a given study is defined as the date on which the first participant enrolled and signed the informed consent for that study.

Enrollment Closure Date is the date on which the study is officially closed to enrollment, or absent formal closure of enrollment, the date of completion of accrual (the date on which the last participant signed the consent form).

Recruitment Time Elapsed was calculated as the number of days elapsed from the Recruitment Initiation Date until the date of analysis, divided by 30, if recruitment was ongoing at the time of analysis. For studies closed to enrollment at the time of analysis, Recruitment Time Elapsed was calculated from the Recruitment Initiation Date until the Enrollment Closure Date.

Percentage Accrual is the number of Evaluable Subjects Enrolled divided by the Accrual Target (multiplied by 100).

Accrual Index expresses the fraction of the Accrual Target accrued over the fraction of PTAC time elapsed, to express the timeliness of accrual, at any point during the protocol enrollment period.

AI = (% of Accrual Target accrued) / (% of PTAC elapsed) = (Evaluable Subjects Enrolled / Accrual Target) / (Recruitment Time Elapsed / PTAC)

When accrual progress is exactly in time with expectations, for example, one‐fourth accrued in one‐fourth of the PTAC, the AI = 1.0. An AI of less than 1.0 reflects slower than planned accrual, and an AI greater than 1.0 reflects accrual ahead of schedule.
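As a concrete sketch, the AI can be computed from four inputs. The Python below follows the paper's definitions, including the convention of dividing elapsed days by 30 to obtain months; the function and argument names are ours.

```python
from datetime import date

def accrual_index(evaluables, accrual_target,
                  recruitment_start, as_of, ptac_months):
    """Accrual Index = (% of Accrual Target accrued) / (% of PTAC elapsed).

    evaluables: participants on-study plus completed (screen failures
    and withdrawals excluded). Elapsed time follows the paper's
    convention: days since recruitment initiation divided by 30.
    """
    pct_accrued = evaluables / accrual_target
    months_elapsed = (as_of - recruitment_start).days / 30
    pct_time_elapsed = months_elapsed / ptac_months
    return pct_accrued / pct_time_elapsed
```

For instance, 10 of 40 evaluables accrued 3 months into a 12-month PTAC gives one-fourth of the target in one-fourth of the time, so AI = 1.0.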

Protocols

Protocol data were extracted from IRB‐approved documents in the protocol and study management databases. Protocols used to evaluate the AI met the following criteria: (1) participants were seen at The Rockefeller University for study visits or procedures; (2) the IRB‐approved protocol clearly stated the number of evaluable participants (those reaching a study outcome/end point) required, in alignment with the power calculation if applicable; (3) the protocol contained information about the anticipated rate of enrollment, expressed as the number of evaluable participants expected to be enrolled annually and/or as a specific projected time to total enrollment (e.g., 18 months).

Protocols were classified by the calendar year in which recruitment was initiated regardless of the amount of anticipated or actual time required for completion of accrual (e.g., extending into subsequent years).

Protocols were categorized as to whether they were open or closed to enrollment at the time of the analysis. Protocols open to enrollment were further defined as those actively recruiting participants with assistance from the CRROSS staff, or exclusively through other means. Studies with previously initiated recruitment that was temporarily suspended, for example, to address assay refinement or staffing issues, were included as accruing protocols. Protocols closed to enrollment were defined as protocols that had met the enrollment Accrual Target, or were permanently closed to enrollment by the investigator without reaching the stated Accrual Target. Sponsored clinical trials as well as investigator‐initiated studies, studies of disease mechanism, and other non–randomized clinical trial (RCT) studies were included in the analysis.

Protocols were excluded from this analysis if they had the following characteristics: (1) the principal investigator (PI) exclusively enrolled individuals with rare diseases (i.e., Fanconi anemia, certain blood disorders, etc.) through anticipated or serendipitous provider referral without utilizing other approaches to direct recruitment; (2) the protocols did not include plans to recruit any participants at The Rockefeller University site (e.g., all participants enrolled at collaborating centers, or were data‐only or sample‐only studies conducted at Rockefeller, or exclusively enrolled lab peers as healthy volunteers); (3) IRB‐approved protocols that were open for conduct but previously closed to enrollment at the start of the evaluation period for this study.

Protocol characteristics

In addition to the sample size, Accrual Targets, and the timing of recruitment and enrollment, additional characteristics of protocols that might reasonably impact the ease of accrual were captured for analysis. These included: (1) whether the CRROSS team assisted with any aspect of recruitment; (2) characteristics of the protocol recorded in the IRB‐approved document, such as the level of risk assigned to the protocol in the Data Safety Monitoring Plan (as defined in the federal register [45 CFR 46] as minimal risk, or as defined in written institutional guidance documents for low, moderate, or significant risk); (3) the total number of inclusion and exclusion criteria; (4) the number of types of procedures listed in the protocol (e.g., phlebotomy, fat biopsy, magnetic resonance imaging [MRI]); (5) the use of placebo in the protocol; (6) the presence or absence of direct benefit to the participant as stated in the IRB‐approved informed consent form. For protocols using recruitment services, these factors are routinely assessed and optimized in evaluating the barriers to and feasibility of recruitment.

Statistical considerations

Statistical data analysis was carried out to illustrate the dynamic behavior of the AI over time and its association with a set of potential predictors among protocol characteristics. First, in order to eliminate the influence of outliers and/or extreme points, the AI was log‐transformed and a box plot was used to illustrate its distribution. Second, time course dynamics of the AI were described by box plots categorized by different levels of protocol characteristics. Scatterplots were used for visualization of the degree of association with quantitative characteristics. Finally, significance of association with presence/absence of CRROSS help, Placebo, and Subject Benefit was tested by two‐sided unpaired t‐tests at the 95% confidence level. Spearman's correlation coefficient was applied to measure the significance of association with the number of inclusion/exclusion criteria, number of procedures, and number of visits. Analysis of variance (ANOVA) was used to assess the association of the AI with different levels of Data Safety Monitoring Plan (DSMP) risk.
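As a sketch of this analysis pipeline, the log-transform, t-test, Spearman correlation, and ANOVA steps might look like the following; the synthetic data here merely stand in for the protocol dataset, which is not public.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical protocol-level data standing in for the paper's dataset
ai = rng.lognormal(mean=0.0, sigma=0.5, size=40)  # Accrual Index per protocol
crross_help = rng.integers(0, 2, size=40)         # 1 = CRROSS-assisted
n_criteria = rng.integers(5, 30, size=40)         # inclusion/exclusion count
dsmp_risk = rng.integers(0, 3, size=40)           # 0=minimal, 1=low, 2=moderate

log_ai = np.log(ai)  # log-transform to damp outliers, as in the paper

# Two-sided unpaired t-test: CRROSS-assisted vs. not
t, p_t = stats.ttest_ind(log_ai[crross_help == 1], log_ai[crross_help == 0])

# Spearman correlation with a quantitative characteristic
rho, p_rho = stats.spearmanr(ai, n_criteria)

# One-way ANOVA across DSMP risk levels
groups = [log_ai[dsmp_risk == k] for k in np.unique(dsmp_risk)]
f, p_f = stats.f_oneway(*groups)
```

With real data, each p-value would be compared against the 0.05 threshold implied by the 95% confidence level.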

Results

Protocols

A total of 190 protocols were IRB approved in the observation period; of these, 101 met criteria for inclusion in the analysis (Table 2). From 2007 to 2014, the total number of new protocols initiating recruitment and enrolling participants at the RU‐CCTS rose modestly each year. During this time, the median Accrual Target declined from 53 (in 2007–2009) to 25 evaluable participants in 2014. The median PTAC remained unchanged at 12 months across the observation period; however, the range of anticipated accrual time frames narrowed during this period (from 12–400 months to 2–24 months).

Table 2.

Characteristics of protocols included in the analysis

Columns give the year in which recruitment was initiated.

| Characteristic | 2007–2009* | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|---|---|---|---|---|---|
| Protocols initiating recruitment | 14 | 17 | 18 | 19 | 20 | 13 |
| Accrual Target, median (range) | 53 (4–500) | 47.5 (5–500) | 47.5 (5–500) | 38 (5–300) | 30 (10–180) | 25 (8–80) |
| Projected Time to Accrual Completion in months, median (range) | 13 (12–400) | 12 (1–120) | 12 (12–48) | 12 (12–72) | 12 (4–42) | 12 (2–24) |
| CRROSS recruitment assistance provided | 13 (93%) | 9 (53%) | 13 (72%) | 9 (50%) | 13 (65%) | 13 (100%) |
| Protocols with placebo | 2 (14%) | 2 (12%) | 4 (22%) | 4 (21%) | 0 | 2 (20%) |
| Protocols with direct benefits to subjects | 4 (28%) | 6 (36%) | 3 (17%) | 7 (41%) | 4 (20%) | 5 (39%) |
| DSMP risk: 0—Minimal | 1 | 0 | 2 | 0 | 3 | 1 |
| DSMP risk: 1—Low | 6 | 6 | 5 | 3 | 6 | 3 |
| DSMP risk: 2—Moderate | 7 | 8 | 10 | 14 | 8 | 8 |
| DSMP risk: 3—Significant | 0 | 0 | 0 | 0 | 0 | 0 |

*Due to a low number of protocols initiating recruitment from 2007 to 2009, data from these years were grouped together.

Percent accrual

The historical measure, Percent Accrual of the Accrual Target (not adjusted for time) was calculated for all protocols initiating enrollment during 2007–2014. Studies were grouped according to whether enrollment was closed or ongoing at the time of analysis. Of the 71 studies both initiating and closing enrollment during the analysis period, 22 (31%) achieved 100% of the Accrual Target before closing (Figure 1 A). For the 30 protocols still actively recruiting participants at the time of analysis, the simple Percent Accrual is of limited value (Figure 1 B). The Percent Accrual does not incorporate what proportion of the enrollment period has elapsed and therefore does not distinguish between a low percentage reflecting long‐term accrual failure versus that reflecting accrual progress early in the enrollment period.

Figure 1.

Figure 1

The Percent Accrual (Evaluables/Accrual Target) is shown for protocols initiating recruitment from 2007 to 2014. Studies closed to enrollment at the time of analysis (Panel A) and studies with ongoing enrollment (Panel B) are shown separately. As the Percent Accrual does not reflect the anticipated or elapsed duration of recruitment efforts, Percent Accrual does not illustrate whether accrual is meeting expectations.

Accrual index

The AI—the fraction of Accrual Target achieved as compared to the fraction of the PTAC elapsed—was calculated for a single time point for each of the protocols initiating recruitment from 2007 to 2014, using the last available data on December 31, 2014 (Figure 2). For the years 2007–2009, 2011, 2013, and 2014, the median AI is near or above 1. The range of protocol AIs is largest in 2011, 2012, and 2013 reflecting a relative mismatch between the projected rate of enrollment and actual accrual performance. In 2014, the variance among protocol AIs is smaller and the median AI is at or above 1; compared with prior years, in 2014 accrual prediction more closely anticipated actual accrual performance.

Figure 2.

Figure 2

The Accrual Index is shown for studies initiating recruitment from 2007 to 2014. The Accrual Index represents the fraction of Accrual Target achieved, divided by the fraction of Predicted Time to Accrual Completion that has elapsed. The median AI for each year is indicated with a dark line. Studies are distinguished as closed to enrollment at the time of analysis (red circle) or remaining open to enrollment (green circle). In order to eliminate the influence of outliers and/or extreme points, the Accrual Index was log‐transformed. Box and whisker plots show the distribution of AI for the group of studies initiating recruitment in each year indicated. The reference line for on‐time accrual, where AI = 1, is shown with a darker line. An AI value less than 1 indicates accrual that is behind schedule; an AI value greater than 1 indicates accrual ahead of schedule.

Retrospective study profiles

To gain a more detailed view of the timeliness of accrual throughout the enrollment period, and to examine whether there are recognizable patterns to the AI, the AI was calculated retrospectively at monthly intervals over the period of enrollment for all protocols initiating recruitment in 2013 and 2014. In addition to their greater variance in the last‐measured or final AI, protocols initiating recruitment in 2013 exhibited large interprotocol variance in initial and monthly AI, such that no unifying pattern was discernible (data not shown). Most protocols initiating recruitment in 2014 showed less interprotocol monthly variance and accrued consistently closer to an AI of 1. The rate of accrual was not linear across the life of the protocols and exhibited several coarse patterns: some protocols' AIs started and stayed well above the AI = 1 line, while others began well below AI = 1 and recovered toward 1 slowly, if at all. The position of the 1‐month AI, greater than or less than 1, did not predict the final AI.
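A retrospective monthly AI profile of the kind described here can be computed from a protocol's enrollment dates. The sketch below simplifies by treating every enrollee as evaluable (the paper counts on-study plus completed participants); the function name and inputs are illustrative.

```python
from datetime import date

def monthly_ai_series(enroll_dates, accrual_target,
                      recruitment_start, ptac_months, n_months):
    """Retrospective monthly Accrual Index profile for one protocol.

    enroll_dates: dates on which each evaluable participant enrolled
    (a simplification of the paper's on-study-plus-completed count).
    Returns a list of (month, AI) pairs, one per 30-day interval.
    """
    series = []
    for m in range(1, n_months + 1):
        cutoff_days = m * 30
        # Evaluables accrued by the end of month m
        evaluables = sum(
            (d - recruitment_start).days <= cutoff_days for d in enroll_dates
        )
        pct_accrued = evaluables / accrual_target
        pct_time = (cutoff_days / 30) / ptac_months
        series.append((m, pct_accrued / pct_time))
    return series
```

A protocol enrolling one participant per month against a target of 12 evaluables over a 12-month PTAC would trace a flat profile at AI = 1.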

In order to understand this variance, we examined the AI in detail, plotting the AI at the first and third months and quarterly thereafter, for several illustrative studies from the overall observation period (Figure 3). Each study entailed participant‐related recruitment challenges, such as stringent eligibility criteria and uncomfortable procedures in the first study, detailed and specific metabolic criteria and use of an investigational agent in the second study, and extensive neurologic testing and use of an investigational agent in the third. In each study's prospective recruitment plan, a period of reduced enrollment capacity was anticipated and the PTAC calculated to accommodate it. In each study, accrual initially appears to be ahead of schedule (AI > 1), subsequently dipping below an AI of 1, before completing accrual on time or slightly ahead of schedule. The third study, still open at the time of analysis, fell far behind its anticipated accrual time line before exhibiting a steady and sustained climb toward successful accrual, albeit behind the projected time line (Figure 3 C). This study faced significant feasibility challenges at the outset, including several unanticipated barriers, solutions for which were more than a year in the making, as evidenced by the delayed but clear rise in the AI.

Figure 3.

Figure 3

Retrospective analysis of the AI across three individual protocol accrual histories is shown to analyze the effectiveness of recruitment activities and the accuracy of the Projected Time to Accrual Completion developed during recruitment planning. Planning incorporates anticipated challenges to accrual. The AI is shown 1 month after the initiation of recruitment activities, and quarterly thereafter. In one study, the PI anticipated a planned 3‐month absence to occur 5 months into the study, with consequent halving of the capacity to conduct screening and study visits for those 3 months (A). In a second study, eligibility required that participants have very low vitamin D levels. In a prior study, the investigator and the recruitment core found that, during the summer months (when vitamin D levels typically rise due to increased sun exposure), otherwise eligible participants did not meet the low vitamin D criteria but could later be found eligible in winter months. The recruitment plan was therefore designed to anticipate the inability to recruit eligible subjects during the summer months (B). For a third study, the recruitment planning and PTAC estimated the prevalence of the disease under consideration and team availability, but underestimated barriers to recruitment of a population new to our institution and competing protocols at area institutions. After approximately a year of partnership building and empiric testing of recruitment strategies, enrollment began to rise sustainably (C).

We also examined the source of the increased variance in the AI, specifically the studies with AIs much greater than 1, for the years 2007–2014. During this period, eight studies (in years 2010, 2011, and 2013) accrued far ahead of schedule, with AI values greater than 4.0. Each of these studies utilized a ready convenience sample (e.g., sequential patients coming to the clinic for other purposes), an existing cohort (roll‐over enrollment from another study), or a cohort of healthy volunteers from an existing registry, with few eligibility restrictions. In the same period, 19 studies demonstrated AI ≤ 0.3. Dominant factors in the underaccrual of these studies included a change in the investigator's availability or interest (n = 10), overestimation of population availability (n = 6), or an outside factor (Sponsor or regulatory agency action).

Relationship of AI to study characteristics

Characteristics of the protocols were captured as part of an ongoing effort to identify factors that positively or negatively impact the recruitability of a study. Across all studies, there was an inverse correlation between the level of risk associated with the study, as recorded in the Data Safety Monitoring Plan, and the AI. In addition, the presence of a direct benefit to the study subject was associated with a final AI ≥ 1.0 (Figure 4). The total number of inclusion and exclusion criteria, the number of types of procedures listed in the protocol, and the presence of placebo in the protocol did not correlate with the AI in our studies (Spearman's coefficient).

Figure 4.

Figure 4

The Protocol Accrual Index Dashboard is shown, configured with summary data for simplicity for the user. The “AI, current month” is calculated as (Current Evaluable Subjects (on‐study participants plus those who have completed the study) / Accrual Target), divided by (((Current date – Date recruitment initiated)/30) / PTAC). The “AI, prior month” is carried over from the prior dashboard. Conditional formatting provides a green arrow for AI ≥ 1, a yellow arrow for 0.9 ≤ AI < 1, and a red arrow for AI < 0.9. The “Change in AI” is calculated as a slope, normalized to 1 month in the event there is a gap in AI data. Conditional formatting signals a green arrow for an AI slope > 0, a yellow arrow for an unchanged AI with slope = 0, and a red arrow for a slope < 0. The percentage of PTAC elapsed (((current date – date recruitment initiated)/30/PTAC) x 100%) is shown for reference. It is intended that users would look first at the Current AI and slope to assess whether accrual is on time, then, if needed, reference the Percent PTAC elapsed to understand the significance of the AI within the protocol life cycle.

Prospective analysis: AI dashboard

To examine how the AI could be used to evaluate an individual protocol's performance prospectively (monthly, quarterly, semiannually, etc.), we developed a dashboard to assess progress in real time. Drawing from data captured routinely by the recruitment core, the dashboard incorporates all the data needed to calculate the AI, including the Recruitment Initiation Date, Accrual Target, PTAC (months), updated totals of Evaluable Subjects Enrolled, and the calculated time elapsed since recruitment initiation (months). These data are integrated to provide, for each protocol listed: the current AI, the most recent prior AI, and the slope of change in the AI since last measured (Figure 4). Red, yellow, and green conditionally formatted arrow icons make the dashboard readable at a glance, indicating whether the timeliness of accrual for a protocol is in need of immediate remediation, merits cautious observation due to a downward trend, or is consistently on time or ahead of schedule. Further, the dashboard affords a view of the whole protocol portfolio, providing a platform to identify trends, test and compare practices, and prioritize distribution of effort and resources.
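The dashboard's conditional-formatting logic, with its thresholds at AI ≥ 1 and 0.9 ≤ AI < 1 and a slope normalized to one month, can be sketched as follows; the function shape and field names are our illustration, not the actual dashboard implementation.

```python
def dashboard_row(ai_current, ai_prior, months_between=1):
    """Status flags for one protocol, mirroring the dashboard's
    green/yellow/red scheme (thresholds from the paper; the function
    itself is an illustrative sketch)."""
    if ai_current >= 1.0:
        level = "green"   # on-time or ahead of schedule
    elif ai_current >= 0.9:
        level = "yellow"  # slightly behind: merits cautious observation
    else:
        level = "red"     # behind schedule: needs immediate remediation
    # Slope normalized to one month in case of a gap in AI data
    slope = (ai_current - ai_prior) / months_between
    trend = "green" if slope > 0 else ("yellow" if slope == 0 else "red")
    return {"ai": ai_current, "level": level, "slope": slope, "trend": trend}
```

A manager scanning a portfolio would read the level and trend columns first, then consult the percent of PTAC elapsed to judge how much of the enrollment period remains.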

Discussion

Despite broad recognition of the negative consequences of underaccrual of participants into clinical studies, few organizations monitor recruitment and hold investigators accountable to their proposed enrollment deadlines.5 Historically, the evaluation of whether a clinical study has met a targeted enrollment deadline has been retrospective, assessing at the end of the proposed enrollment period, or at the end of all enrollment efforts, whether 100% of the expected accrual was achieved. This measure is often calculated without regard to the time elapsed, making it impossible to know whether recruitment was timely. Data from oncology trials point to early enrollment of the first participant as an early predictor of accrual success,3 and so we used the shortening of the time to first enrollment as a metric in our previous study demonstrating the value of a data‐rich recruitment core. We recognized, however, that this is a limited metric, and so in this study we developed the AI as a real‐time measure of the timeliness of accrual.

The AI reflects the actual recruitment at any given time point as a percentage of the expected recruitment at that time point, based on the investigator's proposed enrollment time line. Its simplicity allows for easy monitoring in a dashboard format, even for a large number of protocols. Its major limitation is that, unless adjusted, it assumes a constant rate of enrollment throughout the study. Investigators reviewing one or a few protocols may find it useful to follow the AI regularly to assess timeliness, paired with the details of the explicitly justified time line, to illuminate which assumptions may need refinement or additional attention. Recruitment core managers may use the dashboard to quickly identify which protocols among a field of many need additional attention, and in the longer term to evaluate and improve the effectiveness of feasibility and recruitment planning practices.

Implementation of the AI at our institution has provided valuable information for analyzing the progress of ongoing studies and, equally important, has informed the organization and strategic planning of our centralized recruitment program. Prior to 2013, the PTAC was either proposed by the PI without explicit justification or implied in the PI's plan for anticipated annual enrollment captured in the protocol. In mid‐July 2013, CRROSS staff began to provide a data‐justified projected Enrollment Target Time Line as part of the recruitment consultation, and this information was then included in the protocol submission to the IRB. The calculation of the PTAC attempts to account for foreseeable factors that may affect the ability to achieve timely study accrual.

Although the data are only preliminary, we believe that the reduced variance in the AI in 2014 reflects the increased reliability of the recruitment plans and enrollment predictions developed by investigators with the assistance of the recruitment core.

Of the 56 studies that opened and closed before implementation of data‐justified enrollment time lines, from 2007 to 2012, 13 (23%) achieved 100% of the Accrual Target by the time enrollment was closed. In contrast, for protocols initiated and closed in 2013–2014, after implementation of data‐justified enrollment time lines, 12 of 15 (80%) protocols attained 90–100% of the Accrual Target at the time they closed to enrollment. (The three remaining studies closed prematurely having achieved their scientific end points without full accrual.)

Some protocols initiating recruitment in 2007–2012 have an AI greater than or equal to 1, implying good performance (Figure 2), while also displaying a very low Percent Accrual at the time of study closure (Figure 1A), implying poor accrual. This seeming contradiction is explained in part by the calculation of the AI from projected annual enrollment rates (when a PTAC was provided), and in part by the very large Accrual Targets of some studies during this period. The refinement in recent years toward explicitly justified PTACs and Accrual Targets has reduced the incidence of open‐ended enrollment plans and resulted in a narrower range of Accrual Targets and PTACs (Table 2).

We tried to identify protocol factors that correlated with the AI, starting with variables previously reported to negatively impact recruitment,12 for example, the number of procedures, protocol intensity, number of eligibility requirements, and compensation, but we did not observe a relationship between the AI and any of these factors. One possible explanation is that the recruitment consultation process and the PTAC fully took these factors into account. In particular, factors that the IRB, Protocol Navigation,11 and CRROSS recruitment consultation10 processes attempt to mitigate, such as the risk level of the study and the perceived benefit for the participant, were not associated with the AI. Participants in experience surveys have identified the inclusion of a participant‐perceived benefit in a study design, regardless of the type of study, as a major incentive to joining or remaining enrolled in a study.10 The lack of association of these factors with the AI may indicate that our PTAC estimates have correctly accounted for their impact. Alternatively, their effect may be small compared with that of other factors we have not yet identified. We anticipate that the iterative evaluation of AI data will provide valuable information for refining our PTAC estimates and improving the accuracy of our forecasting.

The AI is a useful real‐time measure and can quickly signal the need for refinement of recruitment efforts by offering a summary of accrual timeliness to the investigator, and to leadership and resource managers, at any point during a study. It is a useful tool for testing and refining our predictions and improving practice. The retrospective application of the index highlights the complexity of analyzing individual research protocols and the study‐specific characteristics that affect recruitment. Identifying slow accrual in a timely manner makes it possible to intervene early to improve recruitment.

We conclude that as a standard measure of accrual timeliness, the AI can be valuable in the prospective management of individual protocols, or of large portfolios of protocols, and in retrospectively analyzing accrual performance to identify factors that contribute to efficient timely accrual.

Sources of Funding

Supported in part by grant # UL1 TR000043 from the National Center for Research Resources (NCRR) and National Center for Advancing Translational Sciences (NCATS), National Institutes of Health (NIH), and Clinical and Translational Science Award (CTSA) program.

Ethical Approval

The volunteer and participant data related to recruitment and enrollment activities were collected and analyzed for research purposes under an IRB‐approved research protocol in place since 2007. This research was reviewed and approved by The Rockefeller University Institutional Review Board (IRB) prior to initiation of the work.

Conflict of Interest

The authors have no conflicts of interest to disclose.

Acknowledgment

The authors would like to acknowledge Barry S. Coller and Barbara O'Sullivan for helpful discussions.

References

  • 1. RFA‐TR‐14‐009. 2014. Available at: http://grants.nih.gov/grants/guide/rfa‐files/RFA‐TR‐14‐009.html. Accessed August 1, 2015.
  • 2. Institute of Medicine. The CTSA Program at NIH: Opportunities for Advancing Clinical and Translational Research. Washington, DC: The National Academies Press; 2013: 178.
  • 3. Cheng SK, Dietrich MS, Dilts DM. Predicting accrual achievement: monitoring accrual milestones of NCI‐CTEP‐sponsored clinical trials. Clin Cancer Res. 2011; 17(7): 1947–1955.
  • 4. Kitterman DR, Cheng SK, Dilts DM, Orwoll ES. The prevalence and economic impact of low‐enrolling clinical studies at an academic medical center. Acad Med. 2011; 86(11): 1360–1366.
  • 5. Glickman SW, McHutchison JG, Peterson ED, Cairns CB, Harrington RA, Califf RM, Schulman KA. Ethical and scientific implications of the globalization of clinical research. N Engl J Med. 2009; 360(8): 816–823.
  • 6. Hanauer SB. Outsourcing clinical trials. Nat Rev Gastroenterol Hepatol. 2009; 6(4): 191.
  • 7. Kost RG, Mervin‐Blake S, Hallarn R, Rathmann C, Kolb HR, Himmelfarb CD, D'Agostino T, Rubinstein EP, Dozier AM, Schuff KG. Accrual and recruitment practices at the CTSAs: a call for expectations, expertise and evaluation. Acad Med. 2014; 89(8): 1180–1189.
  • 8. Trochim WM, Rubio DM, Thomas VG. Evaluation guidelines for the Clinical and Translational Science Awards (CTSAs). Clin Transl Sci. 2013; 6(4): 303–309.
  • 9. Clinical Translational Science Awards. 2014. Request for Proposals. Available at: http://grants.nih.gov/grants/guide/rfa‐files/RFA‐TR‐14‐009.html. Accessed February 23, 2015.
  • 10. Kost RG, Corregano LM, Rainer TL, Melendez C, Coller BS. A data‐rich recruitment core to support translational clinical research. Clin Transl Sci. 2015; 8(2): 91–99.
  • 11. Brassil D, Kost RG, Dowd KA, Hurley AM, Rainer TL, Coller BS. The Rockefeller University Navigation Program: a structured multidisciplinary protocol development and educational program to advance translational research. Clin Transl Sci. 2014; 7(1): 12–19.
  • 12. Getz KA, Wenger J, Campo RA, Seguine ES, Kaitin KI. Assessing the impact of protocol design changes on clinical trial performance. Am J Ther. 2008; 15(5): 450–457.