Clinical and Translational Science. 2013 Apr 19;6(4):317–320. doi: 10.1111/cts.12051

Evaluating Various Areas of Process Improvement in an Effort to Improve Clinical Research: Discussions from the 2012 Clinical Translational Science Award (CTSA) Clinical Research Management Workshop

Jane E Strasser 1, Philip A Cola 2, Daniel Rosenblum 3
PMCID: PMC3740438  NIHMSID: NIHMS455383  PMID: 23919369

Abstract

Emphasis has been placed on assessing the efficiency of clinical and translational research as part of the National Institutes of Health (NIH) goal to “improve human health.” Improvements identified and implemented by individual organizations cannot address the research infrastructure needs of all clinical and translational research. NIH's National Center for Advancing Translational Sciences (NCATS) has brought together 61 Clinical and Translational Science Award (CTSA) sites, creating a virtual national laboratory that reflects the diversity and breadth of academic medical centers to collectively improve clinical and translational science. The annual Clinical Research Management workshop is organized by the CTSA consortium with participation from CTSA awardees, NIH, and others with an interest in clinical research management. The primary objective of the workshop is to disseminate information that improves clinical research management, although the specific objectives of each workshop evolve within the consortium. The fifth annual workshop, entitled “Learning by doing; applying evidence‐based tools to re‐engineer clinical research management,” took place in June 2012. The primary objective of the 2012 workshop was to utilize data to evaluate, modify, and improve clinical research management. This report provides a brief summary of the workshop proceedings and the major themes discussed among the participants.

Keywords: clinical trials, translational research, trials

Introduction

The purpose of the CTSAs is to make translational science better, faster, and more efficient without sacrificing quality, cost, or safety, a charge emphasized by Josephine Briggs, NCATS' Director of the Division of Clinical Innovation, when she opened the 2012 workshop. Although many organizations and institutions have begun evaluating processes and implementing change, what works for one institution often does not translate directly to another. As a group, the 61 institutions that comprise the CTSA consortium reflect the diversity of academic medical centers, including private and public, adult and pediatric institutions, creating a “virtual national laboratory” that is identifying both specific and global obstacles to efficient clinical research and their resolutions.1

To improve clinical research it is important to analyze the time, cost, and value of each step in any given process, identify bottlenecks, clarify and streamline processes, and then continually assess and refine. Keynote speaker Patrick Hagan described his experience as President and Chief Operating Officer of Seattle Children's Hospital (SCH), a research-intensive hospital, and its efforts to become the “best children's hospital.” Striving toward this goal, it became clear that SCH had to improve its research infrastructure and processes. SCH modeled its approach on Toyota's management system: minimizing waste and inconsistency, respecting people, and engaging in continuous improvement. Using this approach, SCH was able to increase participation in research while improving patient care, increasing patient safety, improving family perception of service, and improving efficiency and employee morale. By engaging patients, families, and staff in an iterative, ongoing process of evaluating actual places, actual people, and actual processes, SCH achieved a 61% increase in families rating care as excellent and increased research visits to 9% of total patient visits (12% of potential research visits). Hagan emphasized the need to eliminate the unnecessary and to continually reevaluate in a positive, inclusive manner, suggesting the mantra “presence, knowledge, participation, tenacity, and patience.”

Study Start‐Up

Seanne Falconer (Harvard University) described Harvard's efforts to streamline study approval and start-up. With input from stakeholders across the components of the Harvard CTSA, they evaluated processes, eliminated unnecessary steps, ran processes in parallel, and developed a shared, electronic system to facilitate workflow. To date (across 69 pediatric and 91 adult protocols), this approach has decreased protocol approval time by 56%. Harvard plans to make this system open source in 2013.

Kim Toussant and Carson Reider (Ohio State University; OSU) demonstrated the process improvement tools available at www.morestream.com (SigmaPedia) and showed how OSU has utilized them to improve study start-up times. Jill van Dalfsen (SCH) described the development of electronic quality improvement programs for clinical research (eQUIP-CR), an online toolbox developed by the Cystic Fibrosis Therapeutic Network; its process improvement tools were developed for cystic fibrosis research but are broadly applicable to all clinical research.

IRB Review and Approval

Marc Drezner and Nichelle Cobb (University of Wisconsin) presented data from an observational study entitled “The collection of metrics of IRB performance at CTSA sites” (43 sites, 1,401 protocols). The data were compared to those from the first CTSA IRB study (33 sites, 425 protocols), which provided valuable benchmarking information despite substantial variability (e.g., 20–100 days from submission to approval; median of 64 days), reflecting both the wide diversity in processes used and the need to refine and clarify data points (e.g., “submission” meant different things at different sites).

In the second, expanded study, the median time from submission to IRB approval was 54 days. Factors that decreased approval time included preparation of the protocol by a centralized program with regulatory expertise and a larger number of active protocols at the institution. Protocol authorship by an institutional investigator increased approval time; accreditation status and the use of electronic protocol review processes had no impact.
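The central metric in these studies, time from submission to approval, is straightforward to compute once “submission” is defined consistently across sites. Below is a minimal Python sketch of the per-site and overall median calculation; the site names, dates, and data structure are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of the kind of metric computed in the CTSA IRB studies:
# median days from protocol submission to IRB approval, per site and overall.
from datetime import date
from statistics import median

protocols = [
    # (site, submission date, approval date); both studies found that
    # "submission" must mean the same thing at every site for the
    # benchmark to be meaningful.
    ("Site A", date(2011, 3, 1), date(2011, 4, 12)),
    ("Site A", date(2011, 5, 2), date(2011, 7, 20)),
    ("Site B", date(2011, 3, 15), date(2011, 5, 1)),
    ("Site B", date(2011, 6, 1), date(2011, 7, 5)),
]

def days_to_approval(submitted: date, approved: date) -> int:
    return (approved - submitted).days

overall = median(days_to_approval(s, a) for _, s, a in protocols)
print(f"Overall median: {overall} days")

# Per-site medians support the internal and external benchmarking
# discussed above.
for site in sorted({p[0] for p in protocols}):
    site_days = [days_to_approval(s, a) for name, s, a in protocols if name == site]
    print(f"{site}: median {median(site_days)} days over {len(site_days)} protocols")
```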

There is substantial pressure to eliminate redundant regulatory review for multisite studies, and three models for multisite review were presented. Model one, from Indiana University (Shelley Bizilla, Sarah Crabtree, and Edye Taylor), allows a single IRB review. Model two, from Vanderbilt University (Jenni Beadles), utilizes a collaborative IRB model, IRBshare, which shares documents and reviews with flexibility in the assignment of the “IRB of record.” Model three, presented by Sarah White (Harvard University), is the NeuroNext centralized IRB (CIRB) model funded by NIH's National Institute of Neurological Disorders and Stroke. For any given study, the initiating site submits the protocol to the CIRB, and amendments are used to add sites. The CIRB is responsible for regulatory oversight (review, approval, HIPAA, etc.) while each local site is responsible for the site-specific context (including limited modifications to the informed consent document) and ancillary reviews. White cautioned that the start-up and long-term costs of a CIRB should not be underestimated. Many different models are being implemented; no single model fits all circumstances.

Contract Review and Approval

Jean Gatewood and Nicky O'Connor (New York University Medical Center; NYUMC) described the benefits of process improvement in contract invoicing, explaining how to run specific processes in parallel and handle contracts “first in, first out,” with flexibility to accommodate urgent and important exceptions. Using this approach, NYUMC was able to bridge silos, overcome resistance to change, define terms, identify opportunities to standardize, and utilize triggers for invoicing, resulting in a 98% increase in the amount invoiced and a 39% decrease in manual invoicing. Nickie Bruce (Mayo Clinic) used process mapping to collect data and identify both unnecessary steps and steps that could be run in parallel. Bruce emphasized that the factors necessary for successful improvement include an internal manual of SOPs to ensure consistency; defined roles and responsibilities; checklists; a library of standard language clauses; training; and communication. Improvements implemented included reviewing and finalizing contracts via conference call, resulting in a single “redlined” document, and relying on electronic signatures, which resulted in a 71% decrease in processing time; negotiation time decreased from 105 days to fewer than 30, and signature time went from 13 days to fewer than 5.
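As a rough illustration of “first in, first out” handling with an escape hatch for urgent exceptions, the sketch below models the contract queue as a priority heap. The class, field names, and urgency flag are assumptions for illustration, not NYUMC's actual system.

```python
# A minimal sketch of FIFO contract handling with priority for urgent
# exceptions; an arrival counter breaks ties so that routine contracts
# remain strictly first in, first out.
import heapq
import itertools

class ContractQueue:
    """FIFO queue in which urgent contracts jump ahead of routine ones."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # arrival order breaks ties

    def submit(self, contract_id: str, urgent: bool = False) -> None:
        # Priority 0 beats priority 1; within a priority, earlier arrivals first.
        priority = 0 if urgent else 1
        heapq.heappush(self._heap, (priority, next(self._counter), contract_id))

    def next_contract(self) -> str:
        return heapq.heappop(self._heap)[2]

q = ContractQueue()
q.submit("CT-101")
q.submit("CT-102")
q.submit("CT-103", urgent=True)        # an "urgent and important" exception
assert q.next_contract() == "CT-103"   # the exception is handled first
assert q.next_contract() == "CT-101"   # then strict first in, first out
assert q.next_contract() == "CT-102"
```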

The use of standardized language and agreements would substantially expedite contract negotiations. Libby Salberg (Vanderbilt University) facilitated a discussion focused on the need to implement master contract language. Existing master contracts were discussed, and commonalities and divergences identified (e.g., public vs. private institutions). Discussants agreed to standardize template confidentiality and data use agreements and to quantify cost savings to build momentum toward master contract acceptance.

Recruitment and Retention

Failure to enroll is a major obstacle to successful clinical research. Rhonda Kost (Rockefeller University) summarized data from Ken Getz (Tufts University Center for Information and Study on Clinical Research Participation; www.ciscrp.org) demonstrating that enrollment difficulties delay 90% of clinical trials, with 30% of trials under-enrolling, 20% failing to enroll any participants, and only 7% of sites delivering the projected number of participants. Kost described methods to improve recruitment, including establishing institutional expectations; developing and supporting infrastructure; developing and standardizing policies and procedures; formalizing accountability and oversight; treating recruitment as a science; collecting and analyzing data systematically; using data to drive improvements; and publishing data and practices.
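The site-level figures cited here reduce to simple classification arithmetic over actual versus projected enrollment. The sketch below shows that calculation on invented site data; the percentages printed apply to the toy data, not to Getz's national figures.

```python
# Illustrative arithmetic behind enrollment-performance statistics:
# classify sites by actual vs. projected enrollment. Data are invented.
sites = {
    "Site A": (48, 40),  # (actual, projected)
    "Site B": (12, 40),
    "Site C": (0, 25),
    "Site D": (30, 40),
}

delivered = [s for s, (actual, proj) in sites.items() if actual >= proj]
under = [s for s, (actual, proj) in sites.items() if 0 < actual < proj]
none_enrolled = [s for s, (actual, proj) in sites.items() if actual == 0]

print(f"Met projection:   {len(delivered) / len(sites):.0%}")
print(f"Under-enrolled:   {len(under) / len(sites):.0%}")
print(f"Enrolled no one:  {len(none_enrolled) / len(sites):.0%}")
```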

Kost described the development and validation of a tool to evaluate the perceptions of research participants, initially using focus groups2 and then expanding into surveys that were utilized at 17 facilities. Results indicate that most participants (70–85%) trusted their research team and the process, leaving room for improvement. The data collected are being used to identify and prioritize opportunities for improvement and to engage stakeholders. Using the survey data as a baseline, four centers are working on high-priority concerns (e.g., improving informed consent) and will continue to resurvey and benchmark both internally and externally. Rockefeller negotiated the price of fielding the survey to facilitate its use by academic centers and conveyed a royalty-free license for the survey to NRC Picker (http://www.nrcpicker.com/research‐participant‐survey); users are encouraged to adhere to the survey methodology.

Nariman Nasser (University of California San Francisco; UCSF) emphasized the need to follow up immediately once potential participants are determined to be eligible for enrollment, recommending as recruitment tools a centralized, branded call center available beyond business hours, text messaging, Web screening, volunteer registries, electronic medical record-based strategies, and mobile applications. UCSF has had success using text messaging to screen potential participants. Nasser emphasized the need to determine the return on investment for any approach and to evaluate what is best for each project.

Deb Gipson and Molly Dwyer-White described recruitment processes at the University of Michigan. In 2009, a review of 4 years of clinical research enrollment data estimated the cost of poorly enrolling studies at over $2 million. The University of Michigan made increasing research participation a priority, setting a goal of doubling enrollment within 5 years. Using interviews, surveys, and focus groups, they identified altruism, connections to academic medical centers, higher education, income status, and increasing age as positive correlates of clinical research participation, while fear, misconceptions, and time constraints were identified as barriers to participation. Using input from participant and research communities, they developed tools (www.UMClinicalStudies.org and www.michrrecruitingtoolkit.org) designed to define and engage eligible participant populations and minimize research obstacles3 (e.g., omitting recruitment from the budget and coordinator turnover). Addressing concerns, enhancing research profiles, and building relationships with stakeholders increased research volunteer participation by over 10,000 people.

The co-chairs of the CTSA Research Coordinator Taskforce (Nancy Needler, University of Rochester; Sylvia Baedorf Kassis, Massachusetts General Hospital; and Lisa Speicher, Children's Hospital of Philadelphia) described CTSA consortium survey results that identified and prioritized the needs of coordinators, along with tools that assist in training and retaining coordinators. Tools include standardized job descriptions, career ladders, ongoing education and training to promote professional development, and a supportive professional network.

Resource Allocation and Cost Recovery

John Roache (University of Texas Health Science Center) spoke of the need to understand and evaluate different models for resource allocation and cost recovery in clinical services cores (CSCs), beginning with the need to develop a consistent vocabulary. The CTSA consortium CSC subgroup's survey identified multiple resource-sharing models that describe each stakeholder's share, the aggregate costs of CSC resources, and how those costs are recovered; additional information includes which resources can be subsidized, what or who incurs costs, and how rates and charges are assessed.
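As one hypothetical illustration of how a cost-recovery model might translate into a billing rate, the sketch below recovers unsubsidized costs across projected billable hours. The formula, subsidy model, and figures are assumptions for illustration, not findings from the subgroup's survey.

```python
# A hedged sketch of one possible cost-recovery calculation for a clinical
# services core (CSC): recover whatever the institution does not subsidize
# across the hours the core expects to bill.
def hourly_rate(aggregate_annual_cost: float,
                institutional_subsidy: float,
                billable_hours: float) -> float:
    """Rate needed to recover unsubsidized cost over billable service hours."""
    return (aggregate_annual_cost - institutional_subsidy) / billable_hours

# e.g., a core costing $500,000/year, half subsidized, 4,000 billable hours:
print(f"${hourly_rate(500_000, 250_000, 4_000):.2f}/hour")  # -> $62.50/hour
```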

Business Management of Clinical Research

Kate Marusina (University of California Davis) described financial processes for clinical research, including ownership of and responsibility for financial processes, collaboration with information technology, and knowledge retention and dissemination. Philip Cola (University Hospitals Case Medical Center) and Madeleine Williams (Huron Life Sciences) described coverage analysis at an academic medical center. The benefits of coverage analysis include information for patients about the costs of participating; protection from billing errors and from violations of the False Claims Act; and facilitation of budget development and assessment of costs in clinical trials.

Ideas for Continued Progress and Success

Jonathan Kagan, of the National Institute of Allergy and Infectious Diseases' AIDS Clinical Trials Program, presented a “systems thinking” approach to research administration, viewing system components in the context of their relationships with each other and with other systems. To promote teamwork and build collaboration, Kagan polled stakeholders to understand how they measured success and used that information to develop a logic model4 showing what constitutes clinical trial success and how it is measured. To ensure that the highest priorities are addressed and to develop a culture of ongoing evaluation, the networks track protocol development, accrual, and completion of clinical trials. The AIDS Clinical Trials networks also agreed that, regardless of the outcome of a trial, public dissemination of the data was critical and that the success of the publication must be evaluated. They expanded traditional bibliometrics to include journal ranking, the target audience and specialties impacted, inclusion in guidelines, systematic reviews, and meta-analyses, and the co-authorship network.5 Systems thinking streamlined protocol development, engaged participants and the community more effectively, and improved cross-network coordination. Using this approach and polling annually, Kagan has increased communication, satisfaction, and participation, helping participants prioritize, ensure relevance, and maximize success.

The Clinical Research Management Process Excellence Group described the use of a standardized template (an “A3” form, widely used in process improvement) to report completed process improvement projects and to develop a system for sharing knowledge. This form was utilized by many of the presenters and attendees in evaluating and improving processes at their sites. By sharing data in a standardized format and using metrics compiled across the consortium, one can benchmark individual steps and sites and continually assess improvements.
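To illustrate how a standardized template supports knowledge sharing and benchmarking across sites, the sketch below captures a few plausible A3-style fields as structured data. The field names and example values are assumptions for illustration, not the actual form used by the group.

```python
# A sketch of an A3-style process improvement report as structured data,
# so completed projects can be compared across consortium sites.
from dataclasses import dataclass

@dataclass
class ProcessImprovementReport:
    site: str
    target_process: str       # e.g., "median IRB approval time"
    baseline_metric: float    # measured before the intervention
    result_metric: float      # measured after the intervention
    intervention: str
    unit: str = "days"

    def improvement(self) -> float:
        """Fractional improvement relative to the baseline measurement."""
        return (self.baseline_metric - self.result_metric) / self.baseline_metric

report = ProcessImprovementReport(
    site="Example CTSA",
    target_process="median IRB approval time",
    baseline_metric=64,
    result_metric=54,
    intervention="centralized protocol preparation",
)
print(f"{report.improvement():.0%} improvement")  # -> 16% improvement
```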

Clay Johnston (University of California San Francisco) posited that, if they are to improve, clinical trial sites should develop leadership that promotes process improvement and funds it; develop a culture of cooperation; and align incentives to motivate constituent groups to prioritize performance for the entire site, not just a single component. To exemplify, Johnston described the recent CTSA consortium contracts study. The time from start to final agreement on contracts had a mean of 55 days (range of means, 13–116 days); however, the mean time from start to execution was 103 days (range, 39–109 days), in large part because reviews (budget, contract, IRB) were processed sequentially. Johnston promoted a graduated national certification of clinical trial sites, with the CTSA consortium leading the way. He recommended convening stakeholders, setting standards, establishing a funding model, and enabling a certifier.
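The arithmetic behind Johnston's point about sequential reviews is simple: steps run one after another cost the sum of their durations, while steps run concurrently cost only as long as the slowest one. The review durations in the sketch below are illustrative, not the study's data.

```python
# Why sequential reviews inflate time-to-execution: total elapsed time is
# the sum of step durations when run in sequence, but only the maximum
# when the reviews proceed in parallel. Durations are invented.
reviews = {"budget": 30, "contract": 55, "IRB": 48}  # days per review

sequential_days = sum(reviews.values())  # 133 days, one review at a time
parallel_days = max(reviews.values())    # 55 days, all reviews concurrently

print(f"Sequential: {sequential_days} days; parallel: {parallel_days} days")
```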

Conclusion

Clinical research management is often inefficient, cumbersome, and costly. One of the primary charges of the CTSAs is to improve clinical research management, and the CRM workshop provides a venue for the presentation of evidence to support formalized process improvement. The data presented and described herein emphasize the need for metrics that can be used as benchmarks and for defined review processes that facilitate the elimination of unnecessary steps. Many individual organizations and institutions are working to improve clinical research management within their group, within and across individual processes. The diversity of operations involved in clinical research management precludes a one-size-fits-all solution, although there is clearly overlap. The CTSA consortium provides a virtual national laboratory where metrics are being identified, processes delineated, improvements implemented, and best practices identified by the individual CTSA sites (Table 1). In this paper our intent is to describe the meeting presentations and events and to limit the report to the meeting material itself. At the CRM workshops this knowledge is disseminated and tools are provided for modification, refinement, and implementation at other institutions, facilitating improvements at each individual site without reinventing processes or repeating mistakes. The efforts described herein can be summarized as follows: gain institutional commitment; obtain input and buy-in from clinical research stakeholders; eliminate silos and build collaborative teams; identify, prioritize, and communicate goals, outcomes, and metrics; identify and eliminate unnecessary steps; coordinate processes in parallel; provide infrastructure, training, and support; identify and engage potential participants; do not invest in studies that will not succeed at your site; quantify outcomes and publish findings; and strive for continuous reassessment and quality improvement. This approach facilitates the removal of unnecessary steps and regulatory burdens6 to more efficiently move the national clinical and translational science agenda forward.

Table 1.

Summary of best practices from the 5th Annual Clinical Research Management Workshop

CTSA site: University of Washington
Target: Overall assessment of research hospital
Best practice: Introduce “Toyota” management
Effect:
• Reduced mean IRB approval time from 44 to 13 days
• Reduced concurrent protocol processing from 370 to 69 studies under review
• Increased research visits to 9% of total visits
• Decreased “problem score” from 58 to 35

CTSA site: Harvard University
Target: Study approval time
Best practice: Streamline processing, eliminate unnecessary steps, parallel processing, electronic workflow, electronic scheduling; 524 protocols reviewed
Effect: Study approval time reduced by 56%

CTSA site: CTSA Consortium (van Dalfsen)
Target: Time to first participant enrolled in a multisite national study
Best practice: Compare best and worst performers nationally; key success factors were:
• Shared leadership (PI/manager)
• Clear, shared process for research
• Regular, effective communication
• Business-like approach (financial systems and practices)
• Hiring the right people
Effect: Best quartile activated studies soonest and enrolled 16–40% of participants; worst quartile activated studies latest and enrolled 0–15% of participants

CTSA site: University of Michigan
Target: Increase research participation
Best practice: Research volunteer registry based on a preliminary study of perceptions (investigators and participants):
• Teach participants about research
• Increase awareness of studies
• Increase access to registration
• Electronic management
Effect:
• Increased participants in system by 39% (6,500 → 9,300)
• Volunteer registry of 10,700 participants (a 224% increase)

CTSA site: CTSA Consortium (Drezner, Cobb)
Target: IRB approval time
Best practice: 2009 study of IRB processing (33 sites, 425 protocols) vs. 2011 study (43 sites, 1,401 protocols)
Effect: Reduced median approval time from 64 days to 54 days

Acknowledgments

The authors apologize to the outstanding presenters and participants whose input could not be captured herein. The agenda for the workshop and the presentations are available at https://www.ctsacentral.org/committee/clinical‐research‐management; clicking through the agenda provides access to the individual presentations (direct links are also provided at the end of this section).

Yale University served as the host institution, with support from NCATS U13TR000059‐04. The efforts of the authors were supported by UL1TR000077 and UL1TR000439; the efforts of the presenters are delineated in their presentations. All of the presentations described were underwritten by the presenting institution's CTSA and supported by an NCATS award, except for those by Patrick Hagan and Jonathan Kagan. The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.

These proceedings would not have occurred without the efforts of the presenters and the workshop planning committee, particularly Royce Sampson, Eric Rubenstein, Terri Edwards, Kim Toussant, Rhonda Kost, Deborah Keeling, Fred DePourq, Kelly Burton, and Barbara Bigby.

Direct links to presentations:

Hagan: https://www.ctsacentral.org/documents/crm‐workshop‐keynote‐address‐hagan
Toussant/Reider: https://www.ctsacentral.org/sites/default/files/files/Process_Improvement_breakout_session.pdf
Drezner/Cobb: https://www.ctsacentral.org/documents/crm‐workshop‐irb‐study‐drezner
Bizilla/Beadles/White: https://www.ctsacentral.org/documents/crm‐workshop‐irb‐models
Gatewood/O'Connor: https://www.ctsacentral.org/documents/crm‐workshop‐invoicing‐gatewood
Bruce: https://www.ctsacentral.org/sites/default/files/documents/10_Bruce‐Day2_0.pdf
Kost: https://www.ctsacentral.org/documents/crm‐workshop‐participant‐survey‐kost
Nasser: https://www.ctsacentral.org/documents/crm‐workshop‐recruitment‐nasser
Gipson/Dwyer‐White: https://www.ctsacentral.org/documents/crm‐workshop‐recruitment‐gipson
Needler/Baedorf Kassis: https://www.ctsacentral.org/documents/crm‐workshop‐coordinator‐training‐needlercassis
Roache: https://www.ctsacentral.org/documents/crm‐workshop‐csc‐resource‐sharing‐roache
Marusina: https://www.ctsacentral.org/documents/crm‐workshop‐business‐management‐breakout‐session
Kagan: https://www.ctsacentral.org/documents/crm‐workshop‐systems‐thinking‐kagan

References

1. Dilts DM, Rosenblum D, Trochim WM. A virtual national laboratory for reengineering clinical and translational science. Sci Transl Med. 2012; 4(118): 118cm2.
2. Kost RG, Lee LM, Yessis J, Coller BS, Henderson DK; Research Participant Perception Survey Focus Group Subcommittee. Assessing research participants' perceptions of their clinical research experiences. Clin Transl Sci. 2011; 4(6): 403–413.
3. Dwyer-White M, Doshi A, Hill M, Pienta KJ. Centralized research recruitment: evolving a local clinical research recruitment web application to better meet user needs. Clin Transl Sci. 2011; 4(5): 363–368.
4. Kagan JM, Kane M, Quinlan KM, Rosas S, Trochim WMK. Developing a conceptual framework for an evaluation system for the NIAID HIV/AIDS clinical trials networks. Health Res Policy Syst. 2009; 7: 12.
5. Rosas SR, Kagan JM, Schouten JT, Slack PA, Trochim WM. Evaluating research and impact: a bibliometric analysis of research by the NIH/NIAID HIV/AIDS clinical trials networks. PLoS One. 2011; 6(3): e17428.
6. Kim S, Ubel P, De Vries R. Pruning the regulatory tree. Nature. 2009; 457: 534–535.
