Journal of Oncology Practice. 2015 Dec 1;12(1):63–64. doi: 10.1200/JOP.2015.005181

ReCAP: Clinical Trial Assessment of Infrastructure Matrix Tool to Improve the Quality of Research Conduct in the Community

Eileen P Dimond 1, Robin T Zon 1, Bryan J Weiner 1, Diane St Germain 1, Andrea M Denicoff 1, Kandie Dempsey 1, Angela C Carrigan 1, Randall W Teal 1, Marjorie J Good 1, Worta McCaskill-Stevens 1, Stephen S Grubbs 1
PMCID: PMC4976452  PMID: 26627979

Abstract

QUESTION ASKED:

Is there a tool for sites engaged in cancer clinical research to use to assess their infrastructure and improve their research conduct toward exemplary levels of performance beyond the standard of Good Clinical Practice (GCP)?

SUMMARY ANSWER:

The NCI Community Cancer Centers Program (NCCCP) sites, with input from NCI clinical trial advisors, created the clinical trial Best Practice Matrix, a self-assessment tool for evaluating research infrastructure. The tool identified nine attributes (eg, physician engagement in clinical trials, accrual activity, clinical trial portfolio diversity), each with three progressive levels (I to III), for sites to score infrastructural elements from less (I) to more (III) exemplary. For example, a level-one site might have active phase III treatment trials in two to three disease sites and review its portfolio diversity once a year, whereas a level-three site has active phase II as well as phase I or I/II trials across five or more disease sites and reviews its portfolio quarterly. The tool also provided a road map toward more exemplary practices.

METHODS:

From 2011 to 2013, 21 NCCCP sites self-assessed their programs with the tool annually. Sites reported significant increases in level III (more exemplary) scores across the original nine attributes combined (P < .001; see Figure 1). During 2013 to 2014, NCI collaborators conducted a five-step formative evaluation of the tool, resulting in expansion of the attributes from nine to 11 and a new name: the Clinical Trial Assessment of Infrastructure Matrix (CT AIM) tool, which is described and fully presented in the manuscript.

BIAS, CONFOUNDING FACTOR(S), DRAWBACKS:

Tool scores are self-reported and thus subject to potential bias. The tool was developed by community hospital-based cancer centers and has not been psychometrically validated; use of scores for ranking between programs is not recommended at this time. The attributes and indicators in the tool may need to be adapted for other settings (eg, academic or private practice settings) and over time as research practice evolves. Not all sites can, or want to, move beyond GCP in their research programs. Adherence to GCP meets the minimum criteria for clinical trial conduct, and some of the attributes in the CT AIM can be both fiscally and administratively challenging to implement.

REAL-LIFE IMPLICATIONS:

The CT AIM gives community programs a tool to assess their research infrastructure as they strive to move beyond the basics of GCP toward more exemplary performance. Experience within the NCCCP suggests the CT AIM may be useful for improving programmatic quality, benchmarking research performance, reporting progress, and communicating program needs with institutional leaders. The tool may also serve as a companion to existing clinical trial education and program resources. Although used in a small group of community cancer centers, the tool may be adapted as a model in other disease disciplines.

FIG 1.

Level-three reporting for 2011, 2012, and 2013 for 21 National Cancer Institute Community Cancer Centers Program sites. Although all 21 sites completed the self-assessment each year, bars do not add to 21 because the figure represents the number of sites reporting a level-three score per indicator in each year. The increase in level-three scores over time across all nine attributes combined was significant at P < .001. (*) Significant P value for change over time (clinical trial communication, P = .0281; clinical trial portfolio, P = .0228).

J Oncol Pract. 2015 Dec 1;12(1):e23–e35. doi: 10.1200/JOP.2015.005181

Original Contribution: Clinical Trial Assessment of Infrastructure Matrix Tool to Improve the Quality of Research Conduct in the Community

Eileen P Dimond 1, Robin T Zon 1, Bryan J Weiner 1, Diane St Germain 1, Andrea M Denicoff 1, Kandie Dempsey 1, Angela C Carrigan 1, Randall W Teal 1, Marjorie J Good 1, Worta McCaskill-Stevens 1, Stephen S Grubbs 1

Experience within the NCCCP suggests that the CT AIM is useful for improving quality, benchmarking research performance, reporting progress, and communicating program needs with institutional leaders.

Abstract

Purpose:

Several publications have described minimum standards and exemplary attributes for clinical trial sites to improve research quality. The National Cancer Institute (NCI) Community Cancer Centers Program (NCCCP) developed the clinical trial Best Practice Matrix tool to facilitate research program improvements through annual self-assessments and benchmarking. The tool identified nine attributes, each with three progressive levels, to score clinical trial infrastructural elements from less to more exemplary. The NCCCP sites correlated tool use with research program improvements, and the NCI pursued a formative evaluation to refine the interpretability and measurability of the tool.

Methods:

From 2011 to 2013, 21 NCCCP sites self-assessed their programs with the tool annually. During 2013 to 2014, NCI collaborators conducted a five-step formative evaluation of the matrix tool.

Results:

Sites reported significant increases in level-three scores across the original nine attributes combined (P < .001). Two specific attributes exhibited significant change: clinical trial portfolio diversity and management (P = .0228) and clinical trial communication (P = .0281). The formative evaluation led to revisions, including renaming the Best Practice Matrix as the Clinical Trial Assessment of Infrastructure Matrix (CT AIM), expanding infrastructural attributes from nine to 11, clarifying metrics, and developing a new scoring tool.

Conclusion:

Broad community input, cognitive interviews, and pilot testing improved the usability and functionality of the tool. Research programs are encouraged to use the CT AIM to assess and improve site infrastructure. Experience within the NCCCP suggests that the CT AIM is useful for improving quality, benchmarking research performance, reporting progress, and communicating program needs with institutional leaders. The tool model may also be useful in disciplines beyond oncology.

Introduction

The National Cancer Institute (NCI) has a long history of promoting clinical research in the community setting, where a majority of patients with cancer receive care. In 1983, the NCI established the Community Clinical Oncology Program, followed by the Minority-Based Community Clinical Oncology Program in 1990. In 2007, the NCI launched the NCI Community Cancer Centers Program (NCCCP), with an emphasis on health care disparities across the cancer continuum and a distinct effort to enhance access to high-quality cancer care and expand clinical research capacity in the participating hospitals.1-4 To help the NCCCP sites enhance their clinical trial infrastructure, a tool for self-assessment and programmatic improvement was created and referred to as the clinical trial Best Practice Matrix. In 2013, the NCI pursued refinements of the tool through a formative evaluation, and with input from community researchers and field testing with stakeholders, the tool evolved into the Clinical Trial Assessment of Infrastructure Matrix (CT AIM) tool, which will be described in this article.

A growing body of literature and commentary has emerged recognizing the importance of benchmarking toward excellence in oncology clinical trial performance. Specifically, in 2008, the American Society of Clinical Oncology (ASCO) published a special article in Journal of Clinical Oncology describing minimum standards and exemplary attributes of clinical trial sites.5 This was followed by a series of publications in Journal of Oncology Practice (JOP) related to attributes of exemplary research,6-8 including a full JOP exemplary clinical trial series.9

The series identified attributes that move a program beyond the Good Clinical Practices established by the International Conference on Harmonisation10 to safely implement research with human participants. The series was also a response to the 2005 report by the Clinical Trials Working Group of the NCI National Cancer Advisory Board and the 2010 Institute of Medicine report “A National Cancer Clinical Trials System for the 21st Century: Reinvigorating the NCI Cooperative Group Program” to act as a guide and benchmark for research programs in lieu of a full research certification program.11,12

To develop the original tool, NCCCP site representatives of varied disciplines (eg, nursing, physicians, clinical research staff), in conjunction with NCI program advisors, used the existing literature, the NCCCP goals (eg, improve community outreach to enhance accrual to clinical trials), and their collective experience as a guiding framework to collaboratively develop the clinical trial Best Practice Matrix, a self-assessment tool designed to benchmark oncology clinical trial programs. The tool was developed with four objectives in mind: to develop and improve research programs within community hospital settings, to benchmark program performance, to capture metrics to report progress to funders and sponsors, and to communicate program needs with senior leadership.

The clinical trial Best Practice Matrix consisted of nine clinical trial infrastructure attributes: underserved community outreach and accrual, quality assurance, clinical trial portfolio diversity and management, physician engagement in clinical trials, participation in the clinical trial process, multidisciplinary team involvement, educational standards, accrual, and clinical trial communication and awareness. Each attribute contained multiple indicators. Each indicator had three levels that progressed in complexity from level one (least complex) to level three (most complex) (eg, level one, active phase III treatment trials v level three, active phase I, II, and III treatment and cancer control trials). Sites selected the most applicable level for each indicator, which they then used to determine their score for each of the nine attributes (score range, 9 to 27). Concurrent with the matrix development, NCCCP sites focused on building a culture of research with improved capacity in areas such as examining their clinical trial portfolio, engaging navigation in research, vetting trial eligibility at multidisciplinary conferences, improving clinical trial communication and outreach into the community, and building biospecimen capacity.13 Detailed information on the NCCCP capacity- and program-building efforts can be found in various publications.14-20

Methods

The clinical trial Best Practice Matrix was launched in 2011. A total of 21 NCCCP sites used it to self-assess their clinical trial programs annually for 3 years (2011 to 2013). The tool was most often completed by site research administrators or coordinators; however, sites were encouraged to seek clinical trial team input in completing the tool (eg, principal investigators [PIs], lead administrators, clinical research associates, research nurses). The results were analyzed to ascertain programmatic infrastructural change, indicated by advancements in level scores over the years. A likelihood ratio χ2 test was performed on the proportions of level-three responses across the 3-year period to determine whether the proportions of success (level three, yes) had changed over time. This project was determined to be not human subjects research by the National Institutes of Health Office of Human Subjects Research (exemption No. 11514).
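For readers unfamiliar with this analysis, the sketch below shows one way a likelihood ratio χ2 (G) test on level-three proportions by year could be run in Python; the counts are hypothetical and the original analysis was not necessarily implemented this way.

```python
# Minimal sketch of a likelihood ratio chi-square (G) test on level-three
# responses by year, using hypothetical counts for a single indicator.
from scipy.stats import chi2_contingency

# Rows: sites reporting level three (yes/no); columns: 2011, 2012, 2013.
# Each column sums to 21 sites (illustrative values only).
observed = [
    [6, 10, 15],   # level three: yes
    [15, 11, 6],   # level three: no
]

# lambda_="log-likelihood" requests the likelihood ratio (G) statistic
# rather than the default Pearson chi-square statistic.
g_stat, p_value, dof, expected = chi2_contingency(observed, lambda_="log-likelihood")
print(f"G = {g_stat:.3f}, df = {dof}, P = {p_value:.4f}")
```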

On the basis of the favorable results of the clinical trial Best Practice Matrix21 and input from NCCCP sites and Community Clinical Oncology Program/Minority-Based Community Clinical Oncology Program PIs and administrators, the NCI sought to continue the development of the tool via a formative evaluation process. In 2013, the NCI began a five-step formative evaluation in collaboration with health services researchers at the University of North Carolina at Chapel Hill to further develop, refine, and evaluate the tool. The steps were as follows:

  • Stakeholder input from community researchers at two national research meetings (2013 NCI Community Clinical Oncology Program Annual Meeting and 2013 ASCO Community Research Forum)

  • Cognitive interviews with four pairs of PIs and program administrators from NCI-funded community cancer programs to gather data on the interpretability of the tool

  • A pilot test of a revised tool with four additional pairs of PIs and administrators to assess ease of use and consistency in responses within pairs

  • A field test with nine more PIs to compare alternative scoring and feedback reporting methods with the revised tool

  • A three-round Delphi panel with six PIs to explore opinion about the relative importance (weighting) of attributes

Results

NCCCP 3-Year Site Self-Assessment Scores

The likelihood ratio χ2 test of 21 NCCCP site self-assessment scores over 3 years showed significant increases in level-three indicator scoring over time across all nine attributes of the original tool combined (P < .001; Figure 1). In addition, two specific attributes individually exhibited significant change over the 3 years of assessments: clinical trial portfolio diversity and management (P = .0228) and clinical trial communication (P = .0281).

FIG 1.

Level-three reporting for 2011, 2012, and 2013 for 21 National Cancer Institute Community Cancer Centers Program sites. Although all 21 sites completed the self-assessment each year, bars do not add to 21 because the figure represents the number of sites reporting a level-three score per indicator in each year. The increase in level-three scores over time across all nine attributes combined was significant at P < .001. (*) Significant P value for change over time (clinical trial communication, P = .0281; clinical trial portfolio, P = .0228).

Development of New CT AIM Tool

On the basis of the formative evaluation process, the following revisions were made to the clinical trial Best Practice Matrix: the best practice designation was replaced with assessment of infrastructure to better reflect the purpose of the tool, details were added to better clarify indicator terms and the cumulativeness of levels (an example of one indicator's evolution [clinical trial portfolio diversity and management] is depicted in Figure 2), and the nine attributes were expanded to 11, resulting in:

  • Folding underserved accrual into a broader accrual attribute

  • Revising clinical trial communication and awareness into clinical trial education and community outreach

  • Adding clinical trial workload assessment,22,23 clinical research team and navigator engagement, and biospecimen research infrastructure attributes

FIG 2.

Example of Clinical Trial Assessment of Infrastructure Matrix tool evolution: clinical trial portfolio diversity and management attribute.

Appendix Figure A1 (online only) provides the current CT AIM tool. Four PI and administrator pairs were then queried about the revised CT AIM indicators. No respondents answered “don't understand this indicator,” suggesting that the additional detail improved indicator clarity. Of 11 “don't know the answer to this indicator” responses, seven originated from one program. Most of the “don't know” responses were related to the biospecimen research attribute, indicating some uncertainty in program leaders' knowledge about biospecimen program infrastructure. PIs responded differently from their administrators 36% of the time, indicating that completion of the tool by the research team could promote a more accurate reflection of the program's infrastructure.

Community input and field testing of the scoring and reporting functions led to changes in the scoring report layout and content. A level zero was added for sites that were not yet at level-one performance. PIs perceived average scoring for each attribute to be more accurate and more sensitive to incremental program improvements than cumulative worst-count scoring. Worst-count scoring uses a decision rule that at least two thirds of the indicators for a given attribute must be at the same level or higher. For example, in the physician engagement attribute (see Appendix Figure A1 for the tool), if a program scored a level two for physician accrual and referral activity, a level three for physician leadership of the clinical trial program, and a level two for nononcology physician participation, the program would receive a level-two score for physician engagement in clinical trials, because at least two thirds of the indicators for this attribute were level two or higher. If a program scored a level one, two, and three for these three indicators, the program would again be scored as a level two for the same reason.
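To make the decision rule concrete, the sketch below contrasts worst-count and average scoring for the physician engagement example above. It assumes indicator levels are recorded as integers from 0 to 3 and is illustrative only; it is not part of the published tool.

```python
# Sketch of the two scoring approaches discussed above.

def worst_count_score(levels):
    """Highest level L such that at least two thirds of indicators are at level L or higher."""
    for level in (3, 2, 1):
        if sum(1 for x in levels if x >= level) >= (2 / 3) * len(levels):
            return level
    return 0

def average_score(levels):
    """Mean indicator level, which PIs found more sensitive to incremental change."""
    return sum(levels) / len(levels)

# Physician engagement example from the text: indicator levels 2, 3, and 2.
physician_engagement = [2, 3, 2]
print(worst_count_score(physician_engagement))        # 2
print(round(average_score(physician_engagement), 2))  # 2.33

# Second example from the text: levels 1, 2, and 3 also yield a worst-count score of 2.
print(worst_count_score([1, 2, 3]))  # 2
```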

PIs also indicated that the graphical display of the scoring report was acceptable, easy to understand, and actionable. The display showed the mean score per attribute and the number of indicators selected at each of levels zero, one, two, and three (Figure 3).

FIG 3.

Example of Clinical Trial Assessment of Infrastructure Matrix (CT AIM) scoring report. The scoring report shares three pieces of information: the attribute level (range, 0 to 3), an overview of how many indicators fall at each level with the corresponding percentages, and an overall score (based on the average of all attribute scores).

A pilot Delphi panel was conducted among six seasoned community PIs to assess the potential of weighted scoring. The Delphi method is a structured communication technique for obtaining consensus of opinion among a panel of experts, in this case regarding weighting of the attributes of the tool.24,25 Although the cognitive interview results indicated that the six PIs thought all 11 attributes were important for characterizing the level of clinical trial infrastructure, the Delphi results indicated that they regarded some attributes as relatively more important than others (eg, physician engagement and accrual activity received the highest weights, and clinical trial workload assessment, educational standards, and clinical trial education and community outreach received the lowest weights). The difference between weighted and equally weighted tool scores was minimal in these six cases; weighting increased the average scores by 0.1 to 0.3 points (eg, a tool score went from 2.5 to 2.6 or from 2.4 to 2.7).
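As a rough illustration of why the weighted and unweighted results differed so little, the sketch below compares an equally weighted mean of attribute scores with a weighted mean. The attribute scores and weights are hypothetical; the panel's actual weights are not reported in this article.

```python
# Sketch comparing equally weighted and weighted overall tool scores.
# All values below are illustrative, not data from the Delphi panel.

attribute_scores = {
    "physician engagement": 3.0,
    "accrual activity": 2.7,
    "clinical trial workload assessment": 2.0,
    "educational standards": 2.2,
    # ... remaining attributes omitted for brevity
}

# Higher weights for attributes judged more important (hypothetical values).
weights = {
    "physician engagement": 1.5,
    "accrual activity": 1.4,
    "clinical trial workload assessment": 0.8,
    "educational standards": 0.8,
}

equal_score = sum(attribute_scores.values()) / len(attribute_scores)
weighted_score = (
    sum(attribute_scores[a] * weights[a] for a in attribute_scores)
    / sum(weights[a] for a in attribute_scores)
)
print(round(equal_score, 2), round(weighted_score, 2))  # e.g., 2.48 vs 2.59
```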

Discussion

Using the original clinical trial Best Practice Matrix, statistically significant increases in level-three indicator scores were seen between 2011 and 2013 across all nine attributes combined. The reasons for this improvement are likely multifactorial, including individual institutional efforts as well as involvement in the NCCCP as a whole. The program included numerous efforts relevant to enhancing the research culture that could also influence clinical trial infrastructure, such as increasing multidisciplinary conferences and focusing on community outreach, quality improvement initiatives, and navigation efforts.26

Two of the nine original clinical trial Best Practice Matrix attributes (ie, clinical trial portfolio diversity and management, and clinical trial communication and awareness) showed significant change over time, as reported by the 21 NCCCP sites. One reason for the change in clinical trial portfolio diversity and management could be the extensive effort by the NCCCP in creating and using a screening and accrual log. The log was initially used by the NCCCP sites to assess accrual barriers and portfolio gaps with selected NCI cooperative group trials, although the sites reported expanding the log effort across all of their trials. Closer scrutiny of languishing trials and of gaps and successes in site portfolios likely contributed to the increase in level-three function in this attribute. Details about the log and its analysis have been published elsewhere.13,27,28

The improvement in clinical trial communication and awareness could be attributed to an emphasis within the NCCCP on shifting clinical trial education beyond the institutionally focused research team and promoting a broader understanding of clinical trials among the general medical and lay communities associated with the site. As part of the completion of the NCCCP in June 2014, site closeout calls were conducted with the NCI. During these calls, a theme shared qualitatively by most sites was the high value of the clinical trial Best Practice Matrix. The process of completing the matrix was reported to be worthwhile because it provided benchmarks and metrics that research teams could use in their programmatic planning over time. In addition, because the NCCCP fostered a collaborative learning environment, sites shared lessons learned and best practices (eg, how to leverage telemedicine to enhance rural accruals, how to provide better trial access in the community, how to address language barriers, how to improve collaboration with pathology and surgery to support research tissue acquisition). The sites reported that these exchanges fostered rapid progress toward positive infrastructure changes.

Finally, because the formative evaluation showed that PIs and administrators at the same site had differential knowledge about the attributes of their clinical trial programs, we recommend that program leaders take a team approach to assessing their programs with the tool, including all relevant program departments and staff experts in the evaluation process.

There are some limitations to our study. The original developers of the CT AIM tool were the NCCCP sites (ie, community hospital-based cancer centers). The data reported are limited to the 21 participating sites and thus cannot be broadly generalized. During the formative evaluation, additional input was obtained from health care professionals not affiliated with the NCCCP, yet many of them were also from NCI-supported community-based organizations. For this reason, the attributes and/or indicator definitions may need to be adapted for varied clinical trial infrastructural environments. Refinement is needed to better identify and analyze key attributes and indicators most relevant across different organizations and practices (eg, office/group practices, academic cancer centers).

The CT AIM has undergone extensive revision; however, it has not been psychometrically validated. Caution in score interpretation is warranted, and use for ranking between programs is not recommended at this time. The NCCCP evaluation of site clinical trial program infrastructure was based on self-reported programmatic information from a limited number of sites. Self-reports are subject to potential bias, and absent independent observation or use of unobtrusive measures, the authors cannot validate stated program improvements.

Because the tool scores were self-reported from a limited number of sites, further research with larger numbers of sites could be undertaken to corroborate self-reported infrastructure scores with objective data (eg, accrual statistics, audit performance, portfolio mix, number of active multidisciplinary conferences, credentialed staff), possibly via extended observation at the sites, to link reported performance with actual exemplary performance.

Scoring is also not weighted at this point. Input from broader community researchers, as well as from nonphysicians (eg, administrators, clinical research associates), is needed to create consensus on attribute weights, because the expert opinion of six PIs may not represent the opinions of PIs as a whole or of nonphysician team members. Validation efforts could also be explored, but tool attributes may need to change as clinical trials evolve with new scientific opportunities. As a result, validation becomes a more elusive end point, because the effort would be directed at a tool that may exist in its current form for only a limited period of time.

Finally, not all sites can or desire to move their programs beyond Good Clinical Practice compliance toward exemplary performance. Adherence to Good Clinical Practices meets the minimum criteria for clinical trial conduct, and the authors recognize that some of the attributes described in the CT AIM can certainly be fiscally and/or administratively challenging to implement, especially for smaller sites.

In conclusion, the primary purpose of the CT AIM is to provide community programs a tool to assess their clinical trial infrastructure as they strive toward excellence beyond the requirements of Good Clinical Practice. Through the formative evaluation with other NCI-funded community sites, broader community researcher insight was gained to make the tool applicable to sites beyond the NCCCP. This input significantly affected the evolution of the metrics, content, and utility of the tool and moved it beyond the initial Best Practice Matrix to the current CT AIM tool.

On the basis of the experiences of the NCCCP sites with the original tool and revisions during the formative evaluation to improve clarity and utility, the CT AIM may be useful for institutional and/or program quality improvement, benchmarking research performance, progress reporting, and communication of program needs with institutional leaders. As oncology practices increasingly are influenced by a new era of clinical trials, as well as policy and regulatory changes, the tool will need built-in flexibility to support frequent updates.

Future research could include further refinement of attributes and indicator levels in varied environments (eg, private practice, academic centers), weighting of scores, and collection of objective site data to correlate with site self-scoring as a means to better define and validate exemplary research performance metrics. The tool may also be a relevant companion to existing clinical trial education and program resources.

Research program leaders are encouraged to consider using the CT AIM with research team members to benchmark and develop their site infrastructure. Although the tool has been used in only a small group of community cancer centers, adaptation of this type of assessment model in other disease disciplines may show utility.

Acknowledgment

Supported by the National Cancer Institute, National Institutes of Health, under Contract No. HHSN261200800001E. Presented in part at the 50th Annual Meeting of ASCO, Chicago, IL, May 30-June 3, 2014, and the ASCO Quality Care Symposium, Boston, MA, October 17-18, 2014. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government.

We thank the following key contributors to the formation of the clinical trial Best Practice Matrix: Maria Gonzalez, MPH, Providence St Joseph Medical Center, Burbank, CA; James Bearden, MD, Gibbs Cancer Center and Research Institute, Spartanburg Regional Healthcare, Spartanburg, SC; Lucy Gansauer, RN, MSN, OCN, Gibbs Cancer Center and Research Institute, Spartanburg Regional Healthcare, Spartanburg, SC; Phil Stella, MD, St Joseph Mercy Hospital, Ann Arbor, MI; Beth LaVasseur, RN, St Joseph Mercy Hospital, Ann Arbor, MI; Mitch Berger, MD, PricewaterhouseCoopers; Donna Bryant, MSN, ANP-C, OCN, CCRC, Cancer Program of Our Lady of the Lake and Mary Bird Perkins Cancer Center, Baton Rouge, LA; Kathy Wilkinson, RN, BSN, OCN, Billings Clinic, Billings, MT; and Maria Bell, MD, Sioux Valley/University Hospital, Sanford Health, Sioux Falls, SD. We also thank Octavio Quinones, MSPH, for his statistical support and Kathleen Igo, Leidos Biomedical Research, for her editorial contributions. We are also grateful to the National Cancer Institute Community Cancer Centers Program sites for their contributions to this effort.

Appendix

FIG A1.

Full Clinical Trial Assessment of Infrastructure Matrix tool.

AUTHOR CONTRIBUTIONS

Conception and design: Eileen P. Dimond, Robin T. Zon, Bryan J. Weiner, Diane St. Germain, Andrea M. Denicoff, Kandie Dempsey, Angela C. Carrigan, Marjorie J. Good, Worta McCaskill-Stevens, Stephen S. Grubbs

Collection and assembly of data: Bryan J. Weiner, Angela C. Carrigan, Randall W. Teal

Data analysis and interpretation: All authors

Manuscript writing: All authors

Final approval of manuscript: All authors

AUTHORS' DISCLOSURES OF POTENTIAL CONFLICTS OF INTEREST

Clinical Trial Assessment of Infrastructure Matrix Tool to Improve the Quality of Research Conduct in the Community

The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO's conflict of interest policy, please refer to www.asco.org/rwc or jop.ascopubs.org/site/misc/ifc.xhtml.

Eileen P. Dimond

No relationship to disclose

Robin T. Zon

Research Funding: Agendia (Inst), Amgen (Inst)

Other Relationship: Medical Protective Advisory Board

Bryan Weiner

No relationship to disclose

Diane St. Germain

No relationship to disclose

Andrea M. Denicoff

No relationship to disclose

Kandie Dempsey

No relationship to disclose

Angela C. Carrigan

No relationship to disclose

Randall W. Teal

No relationship to disclose

Marjorie J. Good

No relationship to disclose

Worta McCaskill-Stevens

No relationship to disclose

Stephen S. Grubbs

Leadership: Blue Cross and Blue Shield of Delaware

References

