Journal of Oncology Practice. 2016 Mar 22;12(5):e536–e547. doi: 10.1200/JOP.2015.008920

Assessing Clinical Trial–Associated Workload in Community-Based Research Programs Using the ASCO Clinical Trial Workload Assessment Tool

Marjorie J. Good, Patricia Hurley, Kaitlin M. Woo, Connie Szczepanek, Teresa Stewart, Nicholas Robert, Alan Lyss, Mithat Gönen, Rogerio Lilenbaum
PMCID: PMC5702801  PMID: 27006354

Abstract

Purpose:

Clinical research program managers are regularly faced with the quandary of determining how much of a workload research staff members can manage while they balance clinical practice and still achieve clinical trial accrual goals, maintain data quality and protocol compliance, and stay within budget. A tool was developed to measure clinical trial–associated workload, to apply objective metrics toward documentation of work, and to provide clearer insight to better meet clinical research program challenges and aid in balancing staff workloads. A project was conducted to assess the feasibility and utility of using this tool in diverse research settings.

Methods:

Community-based research programs were recruited to collect and enter clinical trial–associated monthly workload data into a web-based tool for 6 consecutive months. Descriptive statistics were computed for self-reported program characteristics and workload data, including staff acuity scores and number of patient encounters.

Results:

Fifty-one research programs that represented 30 states participated. Median staff acuity scores were highest for staff with patients enrolled in studies and receiving treatment, relative to staff with patients in follow-up status. Treatment trials typically resulted in higher median staff acuity, relative to cancer control, observational/registry, and prevention trials. Industry trials exhibited higher median staff acuity scores than trials sponsored by the National Institutes of Health/National Cancer Institute, academic institutions, or others.

Conclusion:

The results from this project demonstrate that trial-specific acuity measurement is a better measure of workload than simply counting the number of patients. The tool was shown to be feasible and useable in diverse community-based research settings.

INTRODUCTION

Effective and efficient management of a clinical research program is becoming increasingly challenging as a result of growing regulatory burden; fewer available, yet more complex, trials; increasingly restrictive inclusion and exclusion criteria; and the need to screen large numbers of patients for small cohorts with selected molecular targets. These challenges are compounded by the diminished resources available to accomplish the work. Investigators and program managers must be able to use their staff as effectively and efficiently as possible. Research managers are regularly faced with the question of how much of a workload each research staff member is able to manage while still achieving program accrual goals, maintaining data quality and protocol compliance, and staying within budget. A way to measure clinical trial workload and apply objective metrics toward documentation of actual work may provide clearer insights to better meet today’s challenges and aid in balancing staff workloads. A balanced workload has the potential to positively affect research staff job satisfaction by reducing individual staff burden, which can translate into lower turnover, dollars saved, and, ultimately, improved quality of clinical trial data.

A 2010 survey of community-based investigators conducted by the ASCO Community Research Forum reinforced the need for a mechanism to assess clinical trial–associated workload: a tool to determine research staff workload was ranked in the top three of 12 proposed projects. ASCO responded by forming a Workload Assessment Working Group, whose goal was to develop such a tool. After reviewing the literature and available tools,1-7 the Working Group concluded that a useful workload measurement tool should be simple, reproducible, and usable over the long term; a tool that is too complex and requires many layers of measurement (and, therefore, too much time and effort) will not be used in the long term. The Wichita Community Clinical Oncology Program Protocol Acuity Tool (WPAT),1 in use for more than 10 years, is based on assignment of a numeric weight, or acuity score, to cancer clinical trials to reflect complexity and intensity of care; the WPAT met the Working Group criteria and was selected as the model on which the ASCO tool would be based. An important next step was to assess the potential feasibility and utility of such a tool in diverse research settings. This article describes the findings from the ASCO Clinical Trial Workload Assessment Tool Project, which was conducted from May through November 2013.

Description of Clinical Trial Workload Assessment Tool Project

The goal of the project was to determine the feasibility of using a clinical trial workload assessment tool among multiple community-based oncology research programs, including evaluation of its ease of adoption and utility. The primary objective of the project was to gather information about average acuity levels and numbers of trial-specific patient encounters per research nurse (RN) and clinical research associate (CRA) full-time equivalent (FTE) for industry and National Cancer Institute (NCI)–funded clinical trials (classified as treatment, cancer control, and other types) according to the following defined categories: on study/on active treatment (currently receiving active treatment as delineated in a protocol); on study but off treatment (completed specified trial-related treatment but still required to adhere to trial-defined criteria/testing); or off study in follow-up (no longer receiving trial treatment [eg, because of disease progression] and no longer required to adhere to trial-defined criteria/testing).

The project data also were expected to inform the development and applicability of such a tool for use beyond community-based sites, within the broader ASCO membership and oncology research field.

The project focused only on clinical trial workload associated with patient-centered encounters or clinically focused efforts, defined as any in-person protocol-required evaluation and management visit that was designated as required on the protocol study calendar/study plan (eg, physician/health care professional physical examination and/or toxicity assessment used to determine treatment dosing modification for next cycle of therapy). The assessment of regulatory-based workload and other nonclinical elements associated with clinical trial management was not pursued within this project.

METHODS

Two interrelated tools were used to conduct the project. First, the ASCO Protocol Acuity Scoring Worksheet (Worksheet; available online) was created with refined and edited WPAT protocol scoring criteria. A 4-point rating scale was maintained, in which a score of 1.0 reflected a lower workload (eg, registry and observational trial) and a score of 4.0 reflected a complex trial associated with a higher workload (eg, acute leukemia trial). Before assignment of an acuity score, each protocol was to be assessed for the complexity of the treatment/intervention, trial-specific laboratory and/or testing requirements, treatment toxicity potential, complexity and number of data forms required, degree of coordination required, and the number of random assignment and/or registration steps involved.
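To make the scoring criteria concrete, the following sketch in R (the environment used for this project’s analyses) represents a worksheet-style protocol record. The criterion names mirror the factors listed above, but the sub-ratings and the roll-up rule are hypothetical illustrations, not the published Worksheet scoring logic.

# Hypothetical worksheet-style record for one protocol. Each criterion is
# rated 1 (low) to 4 (high); the values here are illustrative only.
protocol <- list(
  id                   = "ABC-123",
  treatment_complexity = 4,
  lab_testing_burden   = 3,
  toxicity_potential   = 4,
  data_form_burden     = 3,
  coordination_needed  = 4,
  registration_steps   = 2
)

# One possible roll-up (an assumption, not the published rule): average the
# criterion ratings, round to the nearest half point, and clamp to 1.0-4.0.
ratings <- unlist(protocol[-1])
acuity  <- min(4, max(1, round(mean(ratings) * 2) / 2))
acuity  # 3.5 for this example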

Second, a web-based platform entitled the ASCO Clinical Trial Workload Assessment Tool (Tool) was created to facilitate the collection of clinical trial–associated workload data. Two acuity metrics were accounted for: protocol acuity scores assigned to individual protocols, and individual staff acuity scores, calculated with assigned protocol acuity scores. Rather than use days per week worked as in the WPAT model, the Tool was programmed to use FTE status. All patient encounters recorded in the Tool as on study but off treatment or off study in follow-up were automatically assigned an acuity score of 1.0. The Tool was programmed to calculate individual staff acuity scores with the following formula, which accounted for each assigned protocol acuity score and the number of patient encounters for the specific trials during a month, and which factored in the individual staff FTEs: (No. of patient encounters × protocol acuity score) / staff member FTE value = individual staff acuity score.
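As a concrete illustration of this formula, the sketch below computes one staff member’s monthly acuity score in R; the encounter counts and protocol scores are hypothetical, and the summation across a staff member’s protocols follows the monthly aggregate described in the footnote to Table 3.

# Sketch of the Tool's monthly staff acuity calculation: sum over protocols
# of (No. of patient encounters x protocol acuity score), divided by the
# staff member's FTE value. Example values are hypothetical.
staff_acuity_score <- function(encounters, protocol_acuity, fte) {
  stopifnot(length(encounters) == length(protocol_acuity), fte > 0)
  sum(encounters * protocol_acuity) / fte
}

# A 1.0-FTE research nurse with three active protocols this month; by rule,
# encounters in on-study/off-treatment or off-study/follow-up status would
# carry a protocol acuity score of 1.0.
encounters      <- c(4, 6, 2)        # protocol-required patient encounters
protocol_acuity <- c(3.0, 2.0, 1.0)  # assigned protocol acuity scores
staff_acuity_score(encounters, protocol_acuity, fte = 1.0)  # = 26

The division by FTE normalizes the score to staff effort: the same encounters logged by a 0.5-FTE staff member would yield a score of 52.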

Beginning in May 2013, research programs were recruited by ASCO through the ASCO Community Research Forum, NCI Community Clinical Oncology Programs (CCOPs), NCI Minority-Based CCOPs (MB-CCOPs), NCI Community Cancer Center Programs (NCCCPs), Oncology Nursing Society’s Clinical Trial Nurses’ Special Interest Group, Sarah Cannon Research Institute, and The US Oncology Network. To be eligible, a participating research program had to be a community-based oncology research program; currently accruing to industry and/or NCI-funded cooperative group trials; able to generate information requested by the Tool; willing to collect and submit the required data in a timely manner via the Tool once per month for the duration of the project; willing to participate in a training webinar and conference calls; and willing to comply with all project requirements.

Data Collection

As part of the registration process, each research program provided program-specific information about the structure of the program, staffing, and portfolio of active trials. Research program administrators were instructed to use the Worksheet to assign a protocol acuity rating score to each of their trials that had patients in the category of on study/on active treatment and to enter staff workload–associated data (eg, number of patient encounters per protocol) into the Tool on a monthly basis for 6 consecutive months.

Data Analyses

Self-reported characteristics of all participating programs, as well as the characteristics of the data they contributed during the project period, were summarized with frequencies and percentages. Because of the variability of the participating research programs, the programs were grouped into five categories on the basis of program type and size, according to information provided during the registration process. Group 1 comprised CCOPs/MB-CCOPs with seven or fewer FTEs (n = 13 research programs); group 2, CCOPs/MB-CCOPs with more than seven FTEs (n = 10 research programs); group 3, community hospitals/NCCCPs (n = 8 research programs); group 4, non–hospital-based private practices/private research networks (n = 12 research programs); and group 5, hospital-based private practices (n = 7 research programs).

This categorization allowed programs to compare themselves with the most applicable or similar types of programs. All comparisons among groups were descriptive. Research staff members who worked less than full time, particularly those with fewer than 0.5 FTEs, may have had workload experiences and levels different from those of staff members devoted full time to patient-centered care. Therefore, only data from staff members designated as 1.0 FTEs were used to summarize staff acuity scores and numbers of patient encounters for each group; the data were reported as medians and ranges by patient status, trial sponsor type, type of trial, and staff title (Data Supplement). The data provided by one program were excluded because the unique structure of the program did not fit into any of the five groups. Fisher’s exact test was used to compare, across sponsor types, the proportions of on-study treatment protocols whose assigned protocol acuity scores varied among programs. The test was two sided, and P < .05 was considered significant. All data analyses were done in R 3.0.1 (R Development Core Team; https://www.r-project.org/).
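To illustrate the sponsor comparison (see Results), the following R sketch applies fisher.test to a 2 × 4 table of on-study treatment protocols with versus without acuity-score variability, by sponsor type. The per-sponsor counts are hypothetical stand-ins chosen only to approximate the reported percentages; they are not the project data.

# Hypothetical 2 x 4 contingency table: on-study treatment protocols whose
# assigned acuity scores did vs. did not vary across programs, by sponsor.
# Counts are illustrative stand-ins, not the project data.
acuity_variability <- matrix(
  c(54,  96,   # industry:  ~36% variable
    60, 290,   # NIH/NCI:   ~17% variable
     1,  33,   # academic:   ~3% variable
     3,  67),  # other:      ~4% variable
  nrow = 2,
  dimnames = list(score   = c("variable", "uniform"),
                  sponsor = c("industry", "NIH/NCI", "academic", "other"))
)

# Two-sided Fisher's exact test, as specified in the analysis plan.
fisher.test(acuity_variability)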

RESULTS

More than 100 research programs expressed initial interest in project participation. Twenty-four declined to participate after they received the recruitment package, citing high workload and insufficient time. Another 31 programs were duplications of research programs (n = 6), were ineligible (n = 2), did not respond after they received a recruitment package (n = 17), or withdrew before reporting any data (n = 6). Fifty-one research programs, which represented 30 states, completed the project. Forty-six of the programs (90%) provided all 6 months of workload data; two sites provided 5 months of data, and the remaining three sites provided 2, 3, and 4 months of data, respectively. The self-reported characteristics of the programs revealed a variety of program types and various degrees of experience and accrual volumes (Table 1). Almost half (47%) of the research programs identified themselves as federally sponsored (ie, CCOP, MB-CCOP, or NCCCP). The remainder identified themselves as community hospital (14%); private physician practice, hospital based, not academic (14%); private physician practice, not hospital based (22%); private research network (2%); or other (2%). The participating programs reported having conducted clinical trials for as few as 7 years to more than 30 years. During the prior 3 years, they reported accruing a median of 150 patients (range, 6 to 900) overall into clinical trials; a median of 45 patients (range, 0 to 821) into National Institutes of Health (NIH)/NCI-sponsored trials; and a median of 22 patients (range, 0 to 290) into industry-sponsored trials. The participating programs reported having a median of 37 open and actively accruing trials (range, 9 to 186).

Table 1.

Self-Reported Characteristics of Participating Research Programs and Their Clinical Trial Portfolios

Characteristic No. (%) of Participating Research Programs (N = 51)
Type of research program
 CCOP 16 (31)
 MBCCOP, academic based 3 (6)
 MBCCOP, not academic based 2 (4)
 NCCCP and CCOP 2 (4)
 NCCCP 1 (2)
 Community hospital 7 (14)
 Private physician practice, hospital based, not academic 7 (14)
 Private physician practice, not hospital based 11 (22)
 Private research network 1 (2)
 Other 1 (2)
No. of years conducting clinical trials
 7-13 13 (25)
 14-25 15 (29)
 26-30 14 (27)
 > 30 9 (18)
No. of patients accrued annually*
 Overall
  6-60 14 (27)
  61-150 13 (25)
  151-245 11 (22)
  > 245 13 (25)
 To NCI-sponsored trials
  0-13 13 (25)
  14-45 13 (25)
  46-124 12 (24)
  > 124 13 (25)
 To industry-sponsored trials
  0-6 13 (25)
  7-22 13 (25)
  23-50 12 (24)
  > 50 13 (25)
No. of phase I trials
 0 35 (69)
 1-10 12 (24)
 > 10 4 (8)
No. of phase II trials
 0-7 13 (25)
 8-16 15 (29)
 17-30 11 (22)
 > 30 12 (24)
No. of phase III trials
 7-19 14 (27)
 20-33 12 (24)
 34-47 12 (24)
 > 47 13 (25)
No. of cancer control trials
 0 14 (27)
 1-4 14 (27)
 5-9 12 (24)
 > 9 11 (22)
No. of prevention trials
 0 26 (51)
 1-5 25 (49)
No. of correlative science trials
 0 17 (33)
 1-10 25 (49)
 > 10 9 (18)
No. of observational/registry trials
 0 6 (12)
 1-5 32 (63)
 > 5 13 (25)
No. of investigator-initiated trials
 0 27 (53)
 1-5 18 (35)
 > 5 6 (12)
No. of industry-sponsored trials
 0 6 (12)
 1-10 16 (31)
 11-27 17 (33)
 > 27 12 (24)
No. of NCI-sponsored trials
 0-19 13 (25)
 20-40 13 (25)
 41-83 12 (24)
 > 83 13 (25)

NOTE. The data source is participating research programs’ self-reported responses on the registration form reported in 2013.

Abbreviations: CCOP, Community Clinical Oncology Program; MBCCOP, Minority-Based CCOP; NCCCP, NCI Community Cancer Center Program; NCI, National Cancer Institute.

*On average per year in the previous 3 years.

The research programs contributed clinical trial–associated workload data for a total of 323 staff members, including RNs (49%), CRAs (28%), research coordinators (16%), and administrators/managers (1%); an additional 6% were identified as staff teams with multiple members. These data represented 963 unique protocols and 165 unique sponsors. Of the 963 unique protocols, 604 had patients on study treatment.

Participating programs were congruent, assigning the same protocol acuity rating, for 461 (76%) of the protocols with patients on study treatment. A difference of one point was found in 120 protocols (20%), and a difference of two points in 23 protocols (4%). Variability in assigned scores was found in 36% of industry trials, compared with only 17% of NIH/NCI-sponsored trials, 3% of academic-sponsored trials, and 4% of trials with other sponsors (P < .001). The median acuity rating assigned to treatment trials was 3; cancer control trials were assigned a median of 2; correlative science trials, a median of 1.5; and observational/registry trials, a median of 1.

Table 2 summarizes the number of trials for which data were collected, by sponsor, trial type, and research staff for each group. Trials were primarily sponsored by the NIH/NCI in all groups except group 4, in which 60% of trials were sponsored by industry. Approximately 80% or more of all trials in each group were treatment trials. The types of research staff varied across the groups: approximately 70% of the research staff from groups 3, 4, and 5 were RNs, whereas the majority of research staff members from group 1 were CRAs. Group 2, with the highest number of staff represented in our data (n = 147), had similar proportions of RNs, CRAs, and research coordinators. Table 3 summarizes the program characteristics in each group, as well as the staff acuity scores and the numbers of patient encounters per month for each of the five groups by patient status, trial sponsor type, type of trial, and staff title, for the data from members with FTEs of 1.0. Summaries of prevention and other trials, as well as of administrator/manager staff members and teams, were omitted because of small sample sizes.

Table 2.

Patient Encounters by Variable of Interest and Group Category

Variable No. (%) of Trials and Staff With Patient Encounters by Group Category
Group 1 (n = 415 trials; 63 staff), Group 2 (n = 451 trials; 147 staff), Group 3 (n = 180 trials; 32 staff), Group 4 (n = 272 trials; 55 staff), Group 5 (n = 139 trials; 26 staff)
Sponsor type
 NIH/NCI cooperative group/research base 350 (84) 313 (69) 115 (64) 69 (25) 84 (60)
 Industry 38 (9) 84 (19) 47 (26) 162 (60) 40 (29)
 Academic 6 (1) 28 (6) 9 (5) 5 (2) 4 (3)
 Other 21 (5) 26 (6) 9 (5) 36 (13) 11 (8)
Type of trial
 Treatment 345 (83) 379 (84) 145 (81) 237 (87) 110 (79)
 Cancer control 45 (11) 56 (12) 19 (11) 16 (6) 15 (11)
 Correlative science 13 (3) 2 (<1) 4 (2) 2 (1) 3 (2)
 Observational/registry 12 (3) 12 (3) 12 (7) 16 (6) 11 (8)
 Prevention 0 1 0 1 0
 Other 0 1 0 0 0
Type of research staff*
 Clinical research associate 35 (56) 40 (27) 5 (16) 9 (16) 2 (8)
 Research coordinator 1 (2) 37 (25) 5 (16) 4 (7) 6 (23)
 Research nurse 24 (38) 53 (36) 22 (69) 40 (73) 18 (69)
 Administrator/manager 0 1 (1) 0 2 (4) 0
 Team 3 (5) 16 (11) 0 0 0

NOTE. Group 1, CCOP/MB-CCOP with seven or fewer FTEs (n = 13 research programs); group 2, CCOP/MB-CCOP with more than seven FTEs (n = 10 research programs); group 3, community hospital/NCCCP (n = 8 research programs); group 4, private practice, not hospital based/private research network (n = 12 research programs); group 5, private practice, hospital based (n = 7 research programs).

Abbreviations: CCOP, Community Clinical Oncology Program; MBCCOP, Minority-Based CCOP; NCCCP, NCI Community Cancer Center Program; NCI, National Cancer Institute; NIH, National Institutes of Health.

*Clinical research associate (non-nurse): unlicensed, with or without a college degree; responsible for collection, submission, and monitoring of clinical trial–associated data; may be involved in the recruitment and screening process but has minimal patient contact. Research coordinator: responsible for protocol evaluation and feasibility, including budget evaluation, trial preparation and planning, assembling and instructing the trial team, developing and evaluating patient information and informed consent forms, and patient recruitment; may also coordinate and manage research staff. Research nurse: licensed registered nurse responsible for assessing patients for clinical trials; enrolling and monitoring patients (eg, assessing adverse events); managing the consent process; communicating with physician investigators, patients, and family/friends; and verifying that collected data are accurate. Administrator/manager: responsible for research staff and the overall research program.

Table 3.

Monthly Staff Acuity Scores and Number of Patient Encounters by Variables of Interest

Site Characteristic or Variable (acuity score and No. of patient encounters) Group 1 Group 2 Group 3 Group 4 Group 5
For each group, values are No. of research staff,* median, and range.
Site characteristic
 No. of individual staff 6 1-8 15.5 9-22 4 2-7 3 1-14 3 1-8
 No. of unique protocols 69 12-150 88 20-160 33 9-50 27 5-75 20 16-47
 Total No. of staff FTEs 5 1-7 14.9 8.3-20.9 3.25 1.3-6.5 2.85 1-12 2.7 0.8-6.6
Patient status
 On study treatment
  Acuity score 38 27.75 6-117 103 17 2-117.5 25 8 1-62 37 47.5 3-98.5 13 23.5 3-40.5
  No. of patient encounters 10.5 2-75 6 1-37.5 3 1-22 14.5 1-31.5 8.5 1-17.5
 On study off treatment
  Acuity score 20 11.5 1-107.5 68 2.25 1-49 15 2 1-17.5 33 2.5 1-9.5 7 1.5 1-4
  No. of patient encounters 11.5 1-107.5 2.25 1-49 2 1-17.5 2.5 1-9.5 1.5 1-4
 Off study follow-up
  Acuity score 28 2.75 1-91 64 2 1-35 14 1.75 1-5 26 1.25 1-4.5 9 2 1-4
  No. of patient encounters 2.75 1-91 2 1-35 1.75 1-5 1.25 1-4.5 2 1-4
Sponsor
 NIH/NCI Cooperative group/research base
  Acuity score 39 34.5 8-265.5 107 14.5 1-97 20 8.25 1-69 23 2 1-66 8 7.75 2-37
  No. of patient encounters 18 3.5-187.5 7 1-56.5 3.5 1-33 1.5 1-26 3.75 1-4
 Industry
  Acuity score 14 5.75 1-27.5 67 8 1-110 14 5.75 1-22.5 36 40.5 2-95 12 14.25 1-39
  No. of patient encounters 3.5 1-18.5 2.5 1-27.5 2 1-17.5 13 1-36 4.5 1-13
 Academic
  Acuity score 6 3 1-19 22 3 1-24 8 1.5 1-25.5 1 3 3 2 1-5
  No. of patient encounters 3 1-19 2 1-11 1 1-8.5 3 1 1-5
 Other
  Acuity score 12 6.5 1-67 27 10 1-30 7 3 1-6 26 2 1-17.5 6 4 1-8
  No. of patient encounters 3.5 1-67 5 1-15 2 1-2 1.5 1-7.5 2 1-5
Trial type
 Treatment
  Acuity score 39 33 7-235.5 107 18 1-119 24 8.75 1-66 37 48.5 1.5-99 11 27 3-41.5
  No. of patient encounters 15 2-165.5 9 1-56.5 4.25 1-29 16.5 1.5-40 9.5 1-14.5
 Cancer control
  Acuity score 33 4 1-68 73 4 1-54 14 2.75 1-20 9 4 1-6 8 3.75 1-13
  No. of patient encounters 3 1-67 2 1-27 2 1-17.5 2 1-3 1.75 1-12.5
 Correlative science
  Acuity score 10 4.5 1-23 11 2 1-6 5 1.5 1-6 1 50 2 1 1-1
  No. of patient encounters 3 1-23 2 1-3.5 1.5 1-3 25 1 1-1
 Observational registry
  Acuity score 25 3.5 1-18 42 2.25 1-19.5 10 3.5 1-22.5 18 2.5 1-14 7 2 1-5
  No. of patient encounters 3.5 1-10 2 1-10 2 1-7.5 2.25 1-6.5 1.5 1-5
Staff title
 Research nurse
  Acuity score 22 42 10.5-265.5 45 19.5 1-115 18 12.75 2-70 28 53 11.5-107.5 9 30 3-42.5
  No. of patient encounters 16 3.5-81 9 1-41.5 5 1-34 18 5-43 11.5 1-20
 Research coordinator
  Acuity score 1 44 34 33.5 2-110 3 20.5 11-23.5 1 21 4 16.75 1-27
  No. of patient encounters 44 15.5 2-41 10.5 9-15.5 10.5 9.25 1-13
 Clinical research associate
  Acuity score 6 38.75 8-93 34 15.5 1-119 5 15 3.5-25.5 7 50 5.5-79 1 3
  No. of patient encounters 26.75 7-187.5 7.5 1-38.5 8.5 2.5-15 25 5.5-29.5

NOTE. Group 1, CCOP/MB-CCOP with seven or fewer FTEs (n = 13 research programs); group 2, CCOP/MB-CCOP with more than seven FTEs (n = 10 research programs); group 3, community hospital/NCCCP (n = 8 research programs); group 4, private practice, not hospital based/private research network (n = 12 research programs); group 5, private practice, hospital based (n = 7 research programs).

Abbreviations: CCOP, Community Clinical Oncology Program; FTE, full-time equivalent; MBCCOP, Minority-Based CCOP; NCCCP, NCI Community Cancer Center Program; NCI, National Cancer Institute; NIH, National Institutes of Health.

*No. of staff are research staff engaged in patient-associated encounters for the specific variable of interest.

Monthly median was calculated by taking the median of monthly aggregate protocol acuity scores × the number of patient encounters.

Across all groups, median staff acuity scores were highest for staff that had patients who were on study and receiving treatment, relative to staff with patients who were in follow-up (either on study but off treatment, or off study and in follow-up). Treatment trials typically resulted in higher median staff acuity relative to cancer control, observational/registry, and prevention trials. Industry trials exhibited higher median staff acuity scores than trials sponsored by NIH/NCI, academic institutions, or others.

Group 4 had some of the highest median acuity scores: 47.5 for patients on study treatment; 40.5 for industry trials; 48.5 for treatment trials; and 53 and 50 for RNs and CRAs, respectively. Group 5 had a higher median staff acuity score for industry trials than for trials sponsored by NIH/NCI, academic institutions, and others, whereas groups 1, 2, and 3 had higher median acuity scores for NIH/NCI trials than for industry trials.

DISCUSSION

On the basis of the currently available literature, this was the first effort, to our knowledge, to use a web-based tool to measure and capture clinical trial–associated workload data across multiple programs. The preliminary goal was to enroll 25 to 30 research programs in the project. That more than 100 research programs inquired about possible participation, and that 24 of them ultimately declined primarily because of time constraints, underscores the importance of addressing clinical trial–associated workload. The majority of the 51 participating research programs (48 of 51; 94%) were able to provide at least 5 months of data. This completion rate, along with feedback received from the participants, demonstrates that the Tool is simple and easy to use, supports its long-term feasibility and utility for community-based research programs, and limits the potential for bias in the findings and conclusions.

Historically, clinical trial workload has been measured by counting the number of patients enrolled and observed; however, this practice does not take into account the complexity of the trials themselves. The results from this project support the idea that the work associated with some trials exceeds that associated with others: treatment trial acuity scores were consistently higher than those for cancer control trials, and industry trials had higher acuity scores than NIH/NCI-funded trials. The value of acuity, or complexity, as a measure of workload was also evident when groups were compared. For example, groups 1 and 4 had similar numbers of staff, and if only the median numbers of patient encounters are considered, the two groups appear similar. However, when the median acuity scores for the patient status of on study treatment are compared in treatment trials and in trials sponsored by NIH/NCI or industry, the median acuity scores for group 4 are higher than those for group 1. Similarly, group 5 reported more patient encounters for NIH/NCI trials (Table 2), but median acuity scores were actually higher for industry trials (Table 3). Measurement of workload on the basis of acuity scoring, or trial-specific effort, therefore provides a more accurate view of the actual work being conducted.

An initial objective of the project was to establish an average, single-benchmark acuity score and an associated number of patient encounters for an RN or CRA for various types of trials and trial sponsors during a given period of time that could be used as a reference for community-based research programs. Because there was high variability in the self-reported characteristics of the programs that participated, the Working Group made a decision to group the participating programs into similar categories, which allowed programs to compare themselves to the most applicable or similar types of program. It is hoped that this approach will provide a more meaningful assessment of the landscape of clinical trial–associated workload among similar research programs.

Research staff members who work less than full time, particularly those with 0.5 or fewer FTEs, or who work full time but devote their efforts to multiple research activities (eg, regulatory efforts) in addition to patient-centered care, may have workload experiences and levels different from those of staff members devoted full time to patient-centered care. Therefore, to provide reliable and comparable summaries, the Working Group limited the data analysis to research staff members who reported 1.0 FTEs of work. Research programs can easily convert the data listed in Table 3 to any FTE; for example, an acuity score of 30.0 would translate to a score of 15.0 for an FTE of 0.5. The majority of community-based research programs do not have the luxury of hiring a research team composed entirely of 1.0-FTE staff members; as a result, programs often hire part-time staff and/or apportion multiple tasks to a single 1.0-FTE member. Consequently, research staff members often find that they are unable to devote the required time to all assigned activities, and they become overwhelmed. In these cases, it becomes even more important for the research manager to monitor workload attentively to ensure that the assigned work is getting done, quality is not compromised, and staff members are not overly burdened.

In conclusion, this project focused on a core, consistent element of clinical trial–associated workload rather than on all of its elements. Future studies are needed to evaluate other elements of clinical trial workload, such as regulatory and screening efforts. Nevertheless, these data are, to our knowledge, the first multisite data available and offer a preliminary understanding of the complexity of measuring clinical trial–associated workload.

It is challenging to compare one staff member with another. Research managers cannot expect all staff members to function at an exceptional level, because there are innate differences in skill sets, experience, attention to detail, and work efficiency. However, a benchmark reflective of a similar research program that research managers can use as a comparative gauge has long been needed. Participation in this project was limited to community-based research programs; however, notwithstanding infrastructure and organizational differences, the data obtained from this project could also be used by non–community-based programs, such as academic centers. No matter the tool(s) used or the type of program, every research program should regularly assess its staff workload and staffing metrics and, preferably, should consider the level of complexity of the work being conducted. Such information, captured and assessed at regular intervals over time, may be invaluable as a means for research programs to establish their own benchmarks; to monitor trends and shifts; to justify current staffing, and the need for additional staff, to management and institutional administration; to assist with budget planning; to provide metrics for staff performance; to ensure workload balance among staff; and, ultimately, to improve staff satisfaction and potentially reduce staff burnout and turnover.

The final version of the ASCO Clinical Trial Workload Assessment Tool was released in October 2014 and is freely accessible at http://workload.asco.org. As of December 2015, nearly 200 research programs, which vary in type, funding source, and clinical trial portfolio, had registered to use it. The data entered into the Tool will provide a wealth of clinical trial–associated workload information for the research programs that use it.

Acknowledgment

ASCO acknowledges the important contributions of the 51 research programs and their affiliate sites, without whom this project, and the refinements and launch of the tool, would not have been possible. The authors thank ASCO staff members who were integral to the development of the tool and the success of the project, including Eden Mesfin, who provided administrative support, and Angela Strano and Krystal Brady, who assisted with the design and development of the web-based tool.

AUTHOR CONTRIBUTIONS

Conception and design: Marjorie J. Good, Patricia Hurley, Connie Szczepanek, Teresa Stewart, Nicholas Robert, Alan Lyss, Mithat Gönen, Rogerio Lilenbaum

Collection and assembly of data: Marjorie J. Good, Patricia Hurley, Connie Szczepanek, Alan Lyss, Rogerio Lilenbaum

Data analysis and interpretation: Marjorie J. Good, Patricia Hurley, Kaitlin M. Woo, Connie Szczepanek, Teresa Stewart, Nicholas Robert, Alan Lyss, Mithat Gönen, Rogerio Lilenbaum

Manuscript writing: All authors

Final approval of manuscript: All authors

AUTHORS' DISCLOSURES OF POTENTIAL CONFLICTS OF INTEREST

Assessing Clinical Trial–Associated Workload in Community-Based Research Programs Using the ASCO Clinical Trial Workload Assessment Tool

The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO's conflict of interest policy, please refer to www.asco.org/rwc or jop.ascopubs.org/site/misc/ifc.xhtml.

Marjorie J. Good

No relationship to disclose

Patricia Hurley

No relationship to disclose

Kaitlin M. Woo

No relationship to disclose

Connie Szczepanek

No relationship to disclose

Teresa Stewart

No relationship to disclose

Nicholas Robert

Honoraria: New Century Health

Other Relationship: Paradigm Dx

Alan Lyss

Leadership Role: Verdi Oncology

Consulting or Advisory Role: Verdi Oncology

Mithat Gönen

No relationship to disclose

Rogerio Lilenbaum

Honoraria: Verastem, Incyte

Consulting or Advisory Role: Genentech, Boehringer Ingelheim, Celgene, Clovis Oncology

Research Funding: Celgene

Travel, Accommodations, Expenses: Roche

References

1. Good MJ, Lubejko B, Humphries K, et al: Measuring clinical trial–associated workload in a community clinical oncology program. J Oncol Pract 9:211–215, 2013
2. Roche K, Paul N, Smuck B, et al: Factors affecting workload of cancer clinical trials: Results of a multicenter study of the National Cancer Institute of Canada Clinical Trials Group. J Clin Oncol 20:545–556, 2002
3. Fowler DR, Thomas CJ: Protocol acuity scoring as a rational approach to clinical research management. Research Practitioner 4:64–71, 2003
4. Smuck B, Bettello P, Berghout K, et al: Ontario protocol assessment level: Clinical trial complexity rating tool for workload planning in oncology clinical trials. J Oncol Pract 7:80–84, 2011
5. James P, Bebee P, Beekman L, et al: Creating an effort tracking tool to improve therapeutic cancer clinical trials workload management and budgeting. J Natl Compr Canc Netw 9:1228–1233, 2011
6. National Cancer Institute, National Institutes of Health: NCI trial complexity elements and scoring model. http://ctep.cancer.gov/protocolDevelopment/docs/trial_complexity_elements_scoring.doc
7. Sherman SL, Waldinger WB, Lepisto EM, et al: The 2008 National Comprehensive Cancer Network (NCCN) research benchmarking survey (RBS): Clinical trials (CTs) operations in the academic center. J Clin Oncol 28:15s, 2010 (suppl; abstr 6154)
