Journal of General Internal Medicine. 2014 Jul 17;29(Suppl 3):714–723. doi: 10.1007/s11606-014-2896-8

Highly Effective Cystic Fibrosis Clinical Research Teams: Critical Success Factors

George Z Retsch-Bogart 1, Jill M Van Dalfsen 2, Bruce C Marshall 3, Cynthia George 3, Joseph M Pilewski 4, Eugene C Nelson 5, Christopher H Goss 2,6, Bonnie W Ramsey 2,6
PMCID: PMC4124113  PMID: 25029977

ABSTRACT

BACKGROUND

Bringing new therapies to patients with rare diseases depends in part on optimizing clinical trial conduct through efficient study start-up processes and rapid enrollment. Suboptimal execution of clinical trials in academic medical centers not only results in high cost to institutions and sponsors, but also delays the availability of new therapies. Addressing the factors that contribute to poor outcomes requires novel, systematic approaches tailored to the institution and disease under study.

OBJECTIVE

To use clinical trial performance metrics data analysis to select high-performing cystic fibrosis (CF) clinical research teams and then identify factors contributing to their success.

DESIGN

Mixed-methods research, including semi-structured qualitative interviews of high-performing research teams.

PARTICIPANTS

CF research teams at nine clinical centers from the CF Foundation Therapeutics Development Network.

APPROACH

Survey of site characteristics, direct observation of team meetings and facilities, and semi-structured interviews with clinical research team members and institutional program managers and leaders in clinical research.

KEY RESULTS

Critical success factors noted at all nine high-performing centers were: 1) strong leadership, 2) established and effective communication within the research team and with the clinical care team, and 3) adequate staff. Other frequent characteristics included a mature culture of research, customer service orientation in interactions with study participants, shared efficient processes, continuous process improvement activities, and a businesslike approach to clinical research.

CONCLUSIONS

Clinical research metrics allowed identification of high-performing clinical research teams. Site visits identified several critical factors leading to highly successful teams that may help other clinical research teams improve clinical trial performance.

Electronic supplementary material

The online version of this article (doi:10.1007/s11606-014-2896-8) contains supplementary material, which is available to authorized users.

KEY WORDS: clinical trials, qualitative research, quality improvement, clinical research metrics, benchmarking, process improvement, cystic fibrosis

INTRODUCTION

The implementation of clinical trials is complex and involves the interactions of multiple components, including the center research team, the study sponsor, a contract research organization, and institutional services such as the institutional review board (IRB) and the contracts and budgets office. Effective communication between the clinical research team and clinical care team may also be necessary to identify research participants. Suboptimal conduct of clinical trials in academic medical centers both raises the costs to institutions and sponsors and delays the availability of new therapies.1 The slow development of protocols, redundant scientific and ethical reviews, and protocol requirements that hinder enrollment contribute to the inferior conduct of clinical trials in academic centers.2–4 In 2006, the Clinical and Translational Science Award (CTSA) program of the National Institutes of Health (NIH) was developed to support a national consortium of biomedical research institutions to accelerate progress in clinical research. The NIH subsequently requested proposals for work that would improve processes related to the development, approval, activation, enrollment, and completion of clinical trials.

The limited patient population available for clinical trials in rare diseases makes optimizing clinical trial conduct especially important for these patients. The Cystic Fibrosis Foundation’s Therapeutics Development Network (TDN), now comprising 77 centers, was established in 1998 to accelerate the testing of new therapeutics, collect data on the natural history of CF through observational studies, and test the utility of new outcome measures for people with cystic fibrosis.5 Because 31 of the 77 TDN centers were also CTSA institutions that had collected clinical trial metrics data, it was possible to identify high-performing centers and evaluate their clinical research practices.

The aim of this project was a collaborative benchmarking inquiry6,7 to identify the critical factors that enable high-performing clinical research teams to excel at clinical trial initiation and study execution. We aimed to use clinical trial performance metrics data to identify centers with superior performance, and then mixed methods, including semi-structured qualitative interviews, to identify common success factors.

METHODS

Selection of Centers to Benchmark

Clinical Study Metrics Data Collection and Analysis

All CF TDN centers contribute study metrics quarterly into a web-accessed database for each TDN study. Data collected include the number of patients enrolled and the time from regulatory packet receipt to: 1) Institutional Review Board (IRB) approval, 2) contract execution, 3) approval to enroll (site activation), and 4) first patient enrolled.

Analyzing study metrics for a network of research centers, even within the same disease entity, is complicated by variations in the study portfolio of each TDN center. No studies include all 77 of the network centers; rather, each sponsor selects participating centers based on requirements of the protocol. Furthermore, because each study has its own factors that influence study metrics, normalization of the metrics data is required to compare centers within the network and to identify centers with consistently superior performance.

Normalization for Enrollment

We developed a scoring algorithm to weight the enrollment data according to study complexity and burden. The weighting score considers observational vs. interventional study, duration, complexity of procedures, visit intensity (many visits over a short period of time), restrictiveness of inclusion/exclusion criteria, and ease of working with the sponsor (e.g., responsiveness to inquiries, quality of materials provided, and ease of budget and contract negotiation). The weighting scores for each study were evaluated and approved by the TDN Steering Committee, a group of 15 TDN principal investigators and research coordinators from different network centers. The weighting scores range from 0.1 (single-visit observational studies) to 3.0 (highly complex long-duration interventional studies). Enrollment is normalized by calculating a percent weighted enrollment for each center:

Percent weighted enrollment = [Σ over studies (subjects enrolled × study weighting score) / total number of CF patients at the center] × 100
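To make this concrete, here is a minimal sketch of the calculation in Python; the function name, data layout, and the exact form of the formula (weighted enrollments summed across studies and expressed as a percentage of the center’s total CF population, per the Figure 1 caption) are our assumptions rather than the TDN’s actual implementation.

```python
# Sketch of the percent weighted enrollment calculation (names hypothetical).
# Each study's enrollment is multiplied by its TDN-approved weighting score
# (0.1 for single-visit observational studies up to 3.0 for highly complex
# interventional studies), summed, and expressed as a percentage of the
# center's total CF patient population.

def percent_weighted_enrollment(study_enrollments, cf_population):
    """study_enrollments: list of (subjects_enrolled, weighting_score) pairs.
    cf_population: total number of CF patients followed at the center."""
    weighted_total = sum(n * w for n, w in study_enrollments)
    return 100.0 * weighted_total / cf_population

# Hypothetical center with 250 patients and three studies:
print(percent_weighted_enrollment([(12, 1.5), (5, 3.0), (30, 0.1)], 250))
# ~14.4 (% weighted enrollment)
```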

Normalization for Start-up Timing

Start-up timing metrics are normalized for each study by performing quartile analyses for each of the key start-up milestones. For example, on a 40-center study, the first ten centers to achieve the milestone of first patient enrolled would receive a quartile score of 4, the next ten centers a quartile score of 3, and so on, with the slowest ten centers receiving a quartile score of 1. For each center, an average quartile score is calculated for each of the start-up milestones by summing the quartile scores for that milestone and then dividing by the number of studies with data for that milestone.
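A minimal sketch of this quartile scoring follows, assuming the even quartile split of the 40-center example; handling of ties and of study sizes not divisible by four is not specified in the text, and all names and values below are hypothetical.

```python
# Sketch of the quartile normalization for one start-up milestone.
# Times are days from regulatory packet receipt to the milestone for
# each center on a single study; faster centers get higher scores.

def quartile_scores(milestone_days):
    """Assign each center a quartile score: 4 = fastest quarter, 1 = slowest."""
    ranked = sorted(milestone_days, key=milestone_days.get)  # fastest first
    n = len(ranked)
    return {center: 4 - (4 * rank) // n for rank, center in enumerate(ranked)}

def average_quartile(per_study_scores):
    """Average a center's quartile scores over the studies reporting data
    for this milestone."""
    return sum(per_study_scores) / len(per_study_scores)

# Hypothetical 8-center study: days to first patient enrolled.
days = {"A": 30, "B": 45, "C": 60, "D": 75, "E": 90, "F": 120, "G": 150, "H": 200}
print(quartile_scores(days))   # {'A': 4, 'B': 4, 'C': 3, 'D': 3, 'E': 2, ...}
print(average_quartile([4, 3, 4, 2]))  # 3.25 for a center across four studies
```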

During the time period assessed for this study, 67 multi-center studies (with at least four participating centers) provided data for analysis. To select highly successful centers, the start-up milestone deemed most important was the time to first patient enrolled, since it reflected the efficiency of the entire study start-up process.

To select centers for benchmarking visits, we conducted a composite analysis that plotted the average quartile score for time to first patient enrolled against the percent weighted enrollment for each center (Fig. 1). The latter was considered more important.

Figure 1. Composite analysis of start-up timing and enrollment success. Each dot represents a single center. The Y axis shows the average quartile score for the metric of time from regulatory package receipt to first patient enrolled. A quartile score of 4 indicates that a center is in the fastest 25 % for that particular metric. The X axis shows the percentage of a center’s total CF patient population (weighted for study complexity) enrolled into studies during two consecutive evaluation periods, the first lasting 24 months and the second 18 months. Dotted lines mark the medians for these metrics during the designated time period. Centers represented in the (shaded) upper right quadrant had start times that were generally faster and enrolled a greater proportion of their patients in clinical studies compared to other centers. Letters designate centers selected for benchmarking. These centers showed either sustained high performance or significant improvement between the first and second evaluation periods.
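A minimal sketch of this quadrant selection is shown below, assuming the criterion is exceeding the median on both axes, as the dotted lines in Figure 1 suggest; all center labels and values are hypothetical.

```python
# Sketch of the composite analysis: flag centers above the median on both
# axes (the shaded upper-right quadrant of Fig. 1). Values are hypothetical.
from statistics import median

# center: (percent weighted enrollment, avg quartile score for first enrolled)
centers = {
    "P": (28.0, 3.3), "Q": (15.0, 2.7), "R": (22.0, 2.2),
    "S": (31.0, 3.1), "T": (12.0, 3.0), "U": (9.0, 2.1),
}
enroll_median = median(e for e, _ in centers.values())
quartile_median = median(q for _, q in centers.values())

upper_right = [c for c, (e, q) in centers.items()
               if e > enroll_median and q > quartile_median]
print(upper_right)  # ['P', 'S']
```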

The subset of data for the 31 CTSA centers was used to select centers for benchmarking using data from two time periods (10/1/06–9/30/08 and 10/1/08–3/31/10 [month/day/year]). Later in the project, a small non-CTSA center (Center E) was added to represent similar centers. We selected seven centers with sustained high performance (generally within the upper right quadrant of Fig. 1 for each of the two periods evaluated) and two centers (Centers C and G) with the greatest improvement in enrollment (at least twofold) between the first and second evaluation periods. We also considered center size (small to large) and CF translational focus (none to significant) in making our selection. As a final step, we solicited informal feedback from study sponsors to confirm that the centers we had selected were also perceived as delivering quality data (i.e., general perceptions of responsiveness and acceptable rates for protocol violations, subject completion, and queries). This feedback resulted in the exclusion of several centers that might have been selected on the basis of the metrics data analysis alone.

CONDUCTING THE BENCHMARKING ACTIVITIES

Realist Evaluation Framework

We applied the realist evaluation framework to our benchmarking strategy and the development of the semi-structured interview questions.8 Realist evaluation is a mixed-methods approach that begins with a practical theory about what works under specific circumstances and then refines and improves the theory based on testing using qualitative and quantitative data collected over time. The realist evaluation approach is based on a formula: mechanism + context = outcomes. Mechanisms are processes that trigger and produce reactions to generate outcomes in specific contexts, while contexts are defined as places or settings in which the mechanisms work and in which the outcomes are produced. The “theory of the case” thus specifies a prediction or explanation about what happens (mechanisms) in specific research institutions (contexts) to produce changes in performance (outcomes). Our team developed a theory about the success characteristics of high-performing centers (Fig. 2) that was evaluated and refined based on data collected during the benchmarking research.

Figure 2. A priori theory of the case. We established this list of likely mechanisms of success based on information gathered between 1998 (when the CF TDN was established) and 2010 through informal discussions, workshops, and training sessions with TDN centers. Our site selection process allowed us to evaluate centers with different institutional contexts by including centers of varying size and translational focus, centers located in densely populated urban areas as well as centers located in smaller cities, some centers with nearby competing programs, and varying levels of institutional resources for all centers.

Investigational Team

The benchmarking protocol development and site-visiting team included personnel with significant experience in clinical research management, CF investigators and research coordinators, and CF clinical care and quality improvement scientists from several universities (Seattle Children’s Hospital Research Institute/University of Washington, University of North Carolina at Chapel Hill, University of Pittsburgh and Dartmouth) and the CF Foundation.

Data Collection and Analysis

All centers selected for benchmarking were approached, agreed to the visit requirements, and were visited between July and November 2010. The site visit included an initial meeting with the clinical research team to present an overview of the program’s structure and processes. Each team member was given a questionnaire developed around the theory of the case and asked to select the five factors from those listed that they believed were most important to their team’s success (Online Appendix 1). Notably, key team members were asked to complete the questionnaires before our interviews and were given only the brief description of each characteristic included in the questionnaire when providing their rankings. The following day, the site visitors observed a research team meeting, toured the facilities, and conducted individual interviews (TDN principal investigators and lead research coordinators/managers) or group interviews (including other members of the research team). When possible, institutional leadership such as CTSA program directors or department chairs, IRB managers, and contracts office managers were also interviewed to assess institutional characteristics. Although the interview questions were primarily designed to probe the theory of the case, open-ended questions were also included to allow center participants to describe factors and processes not specifically addressed (Online Appendix 2). Our approach during site visits and interviews was appreciative inquiry, which acknowledges achievement and seeks to identify the reasons for success.9

The research plan was approved by the Seattle Children’s Institutional Review Board. All interviews were recorded and transcribed, and the interview information was analyzed using content analysis software (ATLAS.ti, Berlin, Germany) to code individual quotes into various themes. The results of the interview content analysis were integrated with questionnaire data and observer notes using an iterative process, and cross-case methodology10 was applied. For each center, an individual case study summary was written, then reviewed and edited by the site visitors. Finally, each center reviewed its own case summary to confirm that key concepts had not been missed.

RESULTS

Institutional Context and Outcomes

The centers ranged in size from 101 to 454 patients, with five of nine centers visited having adult CF populations that made up more than half of the patient population served (Table 1). All teams had weekly research meetings. The ratio of research staff to patient population spanned nearly a threefold range. Center staff and investigators noted a variety of challenges: (1) working across two institutions (Centers A and D), (2) significant travel time between staff offices and research visit space (Centers F and I), (3) low ratio of research staff to CF patient population (Centers B, C, F and G), and (4) institutional barriers to timely start-up [Center H (IRB) and Center I (contract)]. However, as reflected in each center’s overall success, the clinical research teams were able to compensate for these challenges.

Table 1.

Institutional Context and Outcomes for the Nine Benchmarked Centers

Center
A B C D E‡ F G H I
Number of CF Patients (% Adult) 297 (38.9) 165 (54.0) 369 (39.1) 231 (39.9) 101 (41.2) 454 (54.5) 450 (54.4) 326 (61.0) 264 (54.2)
Separate Adult and Pediatric Institutions Yes No No Yes No No No No No
Combined Adult/Pediatric Research Support Staff No* Yes Yes Yes Yes Yes Yes Yes Yes
CF Translational Research Focus|| ++ + +++ + - +++ ++ +++ ++
Staff Travel Time (Minutes) to Research Visit Location 5 0 0 5 0 15 5 0 10
Estimated FTE for Research Support Staff 4.6 1.5 3.8 5.0 1.6 3.5 4.0 5.0 5.0
Number of Research Staff per 100 Patients with CF at Site 1.5 0.9 1.0 2.2 1.6 0.8 0.9 1.5 1.9
Average Quartile Time to IRB Approval§ 2.5 2.6 2.5 2.8 3.8 2.9 2.5 1.8 3.3
Average Quartile Time to Contract Execution§ 3.3 3.5 2.2 2.4 3.0 2.3 2.6 2.2 1.7
Average Quartile Time to Activation§ 2.7 3.3 2.3 2.8 3.2 3.0 2.4 1.8 2.4
Average Quartile Time to First Patient Enrolled§ 3.3 2.7 2.2 3.1 3.0 2.2 2.4 2.5 2.7

*The pediatric research team was the primary focus of the review because at the time of the benchmarking, adult subjects were recruited by the pediatric research team and were seen at the pediatric center for research visits

†Research support staff included research managers, research coordinators (both nurse and non-nurse), laboratory staff, regulatory document coordinators, and budget and contract support. All teams had some dedicated CF clinical research staff, while some teams also had institutional support for various functions (for which FTE was estimated). There was variation within each research team regarding the specific personnel and who performed which function

‡Center E was not a CTSA Awardee

§ The average quartile time presented in this table comes from Period 2 (10/1/2008 – 3/31/2010). For comparison, the average quartile time for the 23 CTSA sites that were not benchmarked was 2.1 or 2.2 for all milestones

||Centers were categorized as having no (−), limited (+), moderate (++), or significant (+++) translational focus based on the amount of basic and investigator-initiated CF research being conducted at the institution

Top-Ranked Success Factors for Benchmarked Centers

For each center, the five factors mentioned most frequently in the individual questionnaires are noted in Table 2. Teams were unanimous regarding the importance of shared leadership between the principal investigator and research coordinator, the importance of communication between the clinical care and research teams, and the value of a customer-service orientation in interactions with study participants. However, what was perceived as important varied considerably for all of the other factors. For example, the centers that identified the longevity of their research staff as most important did not rate structured training or regular meetings as highly important, while those that did not cite longevity ranked those two factors as important. This suggests that teams with newer staff depended more on structured communication and training than teams with more experienced staff did.

Table 2.

Top Ranked Success Factors for the Nine Benchmarked Centers

Center Institutional Support Multi-level Shared Leadership Support Staff are Medical Professionals Staff Training Staff Longevity Customer Service Orientation (Participants) Customer Service Orientation (Sponsors) Culture of Research Research/Clinical Communication Research Team Meetings Documented Processes Databases to Identify Potential Subjects IT Systems to Manage Processes Process Improvement
A 1 4 -- -- -- 2 -- 1 3 5 -- -- -- --
B -- 5 -- 4 -- 3 -- -- -- 2 -- 1 -- --
C 2 5 -- -- -- -- -- -- 1 4 3 -- -- --
D 3 2 -- -- 1 4 -- -- 5 -- -- -- -- --
E 4 5 -- -- 3 2 -- -- 1 -- -- -- -- --
F -- 5 -- -- -- 4 -- 1 3 -- -- 2 -- --
G -- 4 -- -- 3 2 -- 1‡ 5 -- -- -- -- 1
H -- 5 -- -- 4 3 -- 2 1 -- -- -- -- --
I -- 5 -- -- 4 3 -- 2 1 -- -- -- -- --
Average Ranking* 1.1 4.4 0 0.4 1.7 2.5 0 0.8 2.2 1.2 0.3 0.3 0 0.1

*The aggregate top five factors for each center are noted above, with the item cited the most often for that center given a score of 5, the item cited the second most often given a score of 4, etc.

†An equal number of team members at Center A included Institutional Support and Culture of Research in their top five factors

‡An equal number of team members at Center G included Culture of Research and Process Improvement in their top five factors
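A minimal sketch of the scoring described in the table footnotes follows; the citation counts, factor names, and treatment of unranked factors (scored 0, consistent with the Average Ranking row) are assumptions for illustration, and the tie-handling noted in the footnotes is omitted.

```python
# Sketch of the Table 2 scoring: each center's five most-cited factors
# receive scores of 5 (most cited) down to 1, and the Average Ranking row
# averages each factor's score over all centers, counting unranked factors
# as 0. All citation counts below are hypothetical.

def top_five_scores(citation_counts):
    """Map a center's five most-cited factors to scores 5..1."""
    top5 = sorted(citation_counts, key=citation_counts.get, reverse=True)[:5]
    return {factor: 5 - i for i, factor in enumerate(top5)}

def average_ranking(per_center_scores, factors):
    """Average each factor's score across all centers (0 if unranked)."""
    n = len(per_center_scores)
    return {f: sum(s.get(f, 0) for s in per_center_scores) / n for f in factors}

# Hypothetical counts for one center: 7 staff cited leadership, 5 cited
# communication, and so on.
counts = {"Leadership": 7, "Communication": 5, "Staff": 4,
          "Meetings": 3, "Culture": 2, "IT": 1}
print(top_five_scores(counts))
# {'Leadership': 5, 'Communication': 4, 'Staff': 3, 'Meetings': 2, 'Culture': 1}
```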

Key Success Characteristics Based on Interviews and Observation

Table 3 presents the characteristics identified by site visitors based on their observations and the interviews. Leadership, effective communication, and adequate staff to complete the work were identified at all centers.

Table 3.

Specific Examples of the 12 Success Characteristics

Success Characteristic No. of Centers* Examples of How this Characteristic was Met Key Demonstrative Quotes
Leadership with Clinical Research as a Priority 9 • Single highly motivated, engaged PI (either adult or pediatric)
• Shared leadership between multiple engaged PIs (both adult and pediatric)
• Shared leadership between the PI(s) and the research coordinator or manager
“The heart of our program is the very strong leadership of the Pediatric PI and the Adult PI. Really, without them we would have maintained the status quo.”
“If I had to say who has the greatest influence on our team, I would have to say both the Pediatric PI and the Research Manager.”
“He (PI) is very hands-on. He’ll sit down with us rather than working us to death and say, ‘Let’s re-evaluate, what can we do to get through this, how can I help you?’ He really makes this a great job.”
Adequate Staff 9 • Recruit good staff looking for team player, problem-solving skills, attention to detail, self-motivation, and good fit with the rest of the team
• Enough staff for the work: monitor workload and hire or improve processes (i.e., create efficiencies) such that the existing staff can perform the work
• Focus on retention by providing opportunities for growth
“I think you can have the best job descriptions and all the funding you need, but if you don’t get good people it’s a non-starter. And it’s a certain type of person…the people that work in our program are all self-starters who you don’t need to tell what to do.”
“When I interviewed her…her energy and how quickly she learns things were so apparent to me, and it didn’t take long for her to make key contributions to our group.”
“You can’t be like the kid in the candy shop and just keep going….you have to know when the staff is at their limit.”
Effective Communication 9 • Regular research team meetings (adult and pediatric staff)
• Research staff attend clinical care meetings
• Research staff share office space with clinical colleagues
• Adult and pediatric research teams share office space
• Responsiveness and clear communication of expectations
“There is just great communication. Beyond the communication about who is coming in and might be eligible for a study, the research team does an outstanding job of communication with us if there is a clinical issue that they notice during a research visit. We will often hear about changes in labs or lung function. There is just a great interchange of information.”
“The more you communicate with the sponsors and the quicker you get things back to them, the quicker they’re going to get things back to you. The only way that I can do submissions so quickly is because I call them and say, ‘I am willing to give you my full attention for the next day and get this done for you, but here is what I need from you.’”
Customer Service 8 • Study visits scheduled at odd times of the day and on weekends
• Attention paid to the comfort and convenience of the study subjects
“The coordinators just have that attitude…’oh we will make it work—we will find you a place to stay when you’re in town, we will get you a gas card if that’s a problem.’ The subject feels valued and that they are important.”
“We really coddle our subjects. The coordinators are just so flexible with them…. We’ll come in at night, we’ll do stuff on the weekends, just do the best we can to accommodate their schedules.”
“We really make an effort to see these people when they are not participating in studies, which I think they really appreciate because we see them as people, not just research participants.”
Culture of Research 8 • Considered a shared responsibility for clinical care and research teams
• Observational studies offered to subjects when young
• Research team members attend all CF clinics to meet patients, talk about research in general or about specific studies
“Historically, no one was really approaching the families…it just wasn’t part of the fabric of the center. I give credit to the Pediatric and Adult PIs, because that has visibly changed here. Families now know and understand that we do research here.”
“Even before we had the Port CF registry, we had a consent form for an in-house CF database. That consent form gives permission to put their data in the database, but also gets permission to have their data reviewed for potential eligibility for studies. So it is right up front that research is part of the culture here.”
Continuous Process Improvement Driven by the CF Research Team 6 • CF team approached institutional offices to work on reducing time for key milestones (IRB or contract approval)
• Fixing problems as soon as they arise
• Regularly polling staff about what is working, what isn’t, and what they need
“We did a QI project last year when we got our metrics and we did not like our number of days from regulatory packet to final contract.”
“It’s the denial part….you’ve got this whole list of excuses, but then you realize that other centers face similar challenges but are doing better. It was a great motivator for us to see how underperforming our center was compared to others. So we presented it (our metrics report) to the group and started talking about ideas to help improve recruitment.”
“About three times a year, I ask them individually a list of questions: What’s working? What’s not working? Is there someone I should recognize who has really made your job or life better? Do you have the tools you need to do your job?”
Great Team Dynamics 6 • Shared vision about the value of the work being done
• Getting to know each other outside of work
• Research teams work together to raise money for CF research
“There is a program-wide sense of commitment and camaraderie. We all not only work together here, but we all turn out for Great Strides [fund-raising event]. There is this whole sense of community that this is something that really matters.”
Shared, Efficient Processes 5 • Longevity of staff resulted in shared processes that were not always written down
• Written processes, checklists, white boards
• Shared calendars
• Mock or practice study visit prior to study initiation
“There is an order in which we do things, these are the things we do simultaneously, this our time frame. There is a lot of meticulous cross checking and people keep things up to date and filed so that you can go back and find things.”
“I am just constantly in awe of how well it’s run and how organized everyone is, from all these boards telling me what trials are coming up and when all of the subject visits are.”
“We prep well in advance anything that we could possibly need at a subject visit, it’s all right there. You are never running around saying, ‘oh, I need this or I need that’…It’s already put together and ready.”
Business-like Approach 5 • Develop adequate study budgets that cover all costs; the team understands the risk of financial loss if fewer subjects are enrolled than planned
• Institution allows IRB, budget, and contract activities to run simultaneously (not consecutively) to improve speed of start-up
“…sponsors and customers see us as an anomaly in the academic world because we’ve made purposeful decisions to be nimble, to be business-like, to be aware of the world outside of here and to be flexible if needed.”
“When we produce value, people will come back to us for value and they will keep coming back to us for value.”
IT Systems 3 • Databases used to identify potential subjects
• Electronic tracking systems used to facilitate and monitor processes, financial accountability, and metrics
“We involve our data management group…and I think that this has been instrumental in transforming both clinical care and research. We can send them inclusion/exclusion criteria, and within 24 hours we have the exact number of subjects who potentially meet eligibility”
Balance of CF Programs 2 • Adult and pediatric research teams equally invested in research
• Clinical care and research are equally valued
• Basic research also valued
“As the clinical director, it is really important for us to empower the research people, to let them know that we want them involved…not that it is just OK if you come by to talk to this patient about a study, but that we WANT you to come and talk to this patient about that study.”
Institutional Support of Clinical Research 2 • Institution supports research activities (budget development, contract negotiation, regulatory document preparation, pool of research coordinators)
• Institution provides adequate, well-located space for research visits and research staff office space
“It really helped when we moved. We didn’t use to have office space together, but now we are on the same floor, we see each other every day….we just talk more.”

*Number of visited centers where this characteristic was identified as a key contributor to success.

CF = cystic fibrosis, IT = information technology, PI = principal investigator, QI = quality improvement

Leadership

Leadership was highly ranked in the questionnaire and in the interviews. The presence of at least one highly engaged leader and a model of shared leadership were universally valued. The shared leadership might be between a pediatric and adult principal investigator, or between the principal investigator and research coordinator. The most valued qualities of leadership included a visible commitment to TDN studies and goals, availability to the team, the solicitation of team opinions in decision-making, and in particular, attention to team workload. The two centers selected for benchmarking visits based on their observed improvement reported that the primary reason for the change was new team leaders who focused on improvement and used metrics data to compare their study performance to that of peer institutions.

Effective Communication

The diverse components of the clinical research process within an institution are linked through effective communication. This was demonstrated through regular research team meetings, close interaction between clinical research and clinical care teams, and shared expectations for responsiveness to calls and email correspondence. Communication was also facilitated by physically placing team members near one another and near the clinical program, where conversation could replace email.

Staff Adequacy

This refers to a broad range of team qualities, including having staff whose skills match the work, having a sufficient number of coordinators and support staff, and ensuring that the total number of studies managed by the team does not exceed the staff’s capacity. In general, comments about what attributes were sought when hiring came from centers that had recently recruited new staff. Three of the four centers with a low ratio of research support staff to patients (C, F, and G) were the largest centers, suggesting that there may be some efficiencies of scale. However, most of these centers had recently experienced turnover and planned to hire additional staff.

One key characteristic we identified had not been included in the theory of the case: the importance of a businesslike approach to the financial sustainability of the program. This strategy includes the development of adequate study budgets, enrolling the number of subjects specified within the contract, financial tracking to ensure that payments due have been received, and examining final accounting to inform future budgeting.

We had anticipated that a culture of research would be a key success factor, but did not expect that such a large amount of time and effort would be necessary to develop such a culture. For many centers, the effort had been an essential part of the program for years. Clinical research was woven into discussions in clinic visits from infancy through adulthood, and reflected a longstanding general focus on CF research in the program (basic, translational and clinical). In other centers, having a culture of research was only recently recognized as a success characteristic, and intentional activity to develop such a culture had begun only a few years before our benchmarking work.

DISCUSSION

Critical success factors noted at all nine high-performing centers were strong leadership, established and effective communication within the research team and with the clinical care team, and adequate staff to complete the work. Other frequent characteristics included a mature culture of research, a customer service orientation in interactions with study participants, shared efficient processes, continuous process improvement activities, and a businesslike approach to clinical research. We believe that collecting clinical trial performance metrics is central to facilitating improvement in clinical research, and have continued to monitor metrics data for all centers in the network. Since 2009, TDN centers have received annual metrics reports that compare their performance with that of other centers in the network. This allows them to identify potential areas for improvement and to measure the impact of changes made over time.

Our theory of the case was largely confirmed and reinforces the notion that the people comprising the team are at least as important as specific clinical research processes in achieving success. Some of our findings were not predicted by our theory, notably the attention paid to financial sustainability at some centers, while other factors were less important than we expected (e.g., IT systems to manage processes and a customer-service orientation in interactions with sponsors). Overall, we gained a rich understanding of how individual teams leveraged different strengths to achieve similar goals.

Our initial assumptions regarding the importance of multi-level shared leadership between the principal investigator and the lead research coordinator were broadened to reflect the variety of leadership structures observed at the centers. We found that the strongest leadership could come from the pediatric or adult program investigators, the lead research coordinator, or the research manager. In some cases, it was strong leadership from both adult and pediatric investigators or an investigator and coordinator equally sharing the role. Each of these models was effective and depended on the people involved and the institutional context. Not surprisingly, turnover of strong leadership affects a team’s dynamics and performance. Such perturbations force the team to adapt and provide another opportunity to measure how some teams recover and improve. Given the inevitability of staff turnover, those who manage such disruptions well can share their strategies for preserving team effectiveness or accelerating recovery.

We identified several apparent contradictions between the forced rankings of success factors that each research team member completed before their interview (Table 2) and the actual observations and analysis of the interview content (Table 3). For example, only one center ranked Process Improvement as one of its top five factors, yet information shared during the interviews was coded to a category that included process improvement, and it was deemed a significant contributor for six centers. We offer four explanations for these apparent discrepancies: 1) forcing team members to rank the five most important factors caused them to leave out other important factors; 2) team members completed the questionnaires and performed forced rankings before the interviews, relying only on the brief description in the questionnaire; 3) the coding process used for the interview content and site visitor observations changed over time to group commonly cited, related themes together into a specific coded category; and 4) the interview questions themselves may have solicited more comments about some themes than others.

Our study has several limitations. Our selection of centers was not entirely driven by data, but was modified by additional factors, such as observations by others regarding study quality and our decision to pick a broad range of centers by size and research environment. Although we used a consensus approach to develop the benchmarking protocol, bias is inevitable when creating a theory of the case, which then guides the content of the questionnaires and interview guides. The technique of appreciative inquiry focuses primarily on what is going well rather than what is not, and may therefore introduce positive bias. We studied only centers that were highly successful, which prevented us from formally comparing their characteristics with those of centers with average or below-average performance to determine whether the two groups differed substantially. The content analysis approach relies on grouping data into themes by the same investigators who developed the study, which also introduces bias. We observed only a small number of centers and made our observations during a single time period rather than in multiple visits over an extended period; thus, we could not prospectively assess how teams cope with new challenges. Our analysis of the institutional context was limited to a few factors. We did not pursue an in-depth examination of factors that could affect success in recruitment, such as demography, socioeconomic profile, population density of patients served in the region where the center is located, presence of competing clinical research centers in large metropolitan areas, and distance patients traveled to the center. During the interview process, we were able to determine the CF research team’s perceptions of institutional research support (and were able to observe the physical facilities available to the research teams); however, we did not systematically evaluate the institutional review boards, contracts and grants offices, or other institutional clinical research offices.

Through this work, we identified a number of modifiable factors common to high-performing centers. If these factors are unique to high-performing centers, or if particular combinations of them are associated with high performance, they may be the characteristics that set these teams apart. However, because we were not able to use quantitative methods, we cannot offer evidence of a causal association between these factors and high performance relative to centers with average or below-average performance. We were able to assess the context or institutional setting in only a limited way, so it is possible that the factors we identified at high-performing centers are more dependent on the institution than we appreciated, making them less transferable to other teams.

Modifiable factors include those that may be adapted by any study team regardless of institutional context, such as regular team meetings, use of white boards to share key information, use of checklists to ensure consistent and efficient processes, sharing space to facilitate communication, and offering flexible hours in the evenings and on weekends for study visits. Establishing strong connections and a presence in the clinical care setting helps keep the clinical team informed about research studies and provides a venue for explaining clinical research to patients outside of recruitment to specific studies.

For clinical research teams who wish to improve their performance, familiarity with the basic principles and standard methods of quality improvement is a first step. This includes an understanding of change models, the importance of systems-based approaches, an appreciation of group dynamics, and quantitative assessment of the impact of changes made. Collecting and tracking the team’s study metrics (start-up and enrollment) represent such measures, which can be extended more broadly to the institutional or research network level.

Our benchmarking work identified several key factors and practices associated with clinical research success that can be modified by study teams. Clinical research teams may find these observations useful as they undertake their own quality improvement work through assessing their clinical research performance, identifying suitable targets for improvement, and considering strategies or practices to adopt. We would anticipate that such changes would enhance the experience of clinical research for study participants and teams alike, as well as improve the conduct and completion of clinical trials.

Electronic supplementary material

ESM 1 (125KB, pdf)

(PDF 125 kb)

ESM 2 (155.9KB, pdf)

(PDF 155 kb)

Acknowledgements

The authors would like to acknowledge the many staff members at the nine benchmarked centers who participated in interviews and allowed us to observe their meetings and processes. We are also very grateful for our collaboration with Leila Atry, Elizabeth Hartigan, Diane Towle, and Kathryn Sabadosa, who participated on the site visit teams.

Funding

This research was funded by grant #3UL1RR025014-03S2 (currently ITHS-UL1TR000423) from the National Institutes of Health, and by the Cystic Fibrosis Foundation.

Conflict of Interest

E. Nelson reports stock ownership in Quality Data Management Inc. No other conflicts of interest were identified for the authors.

REFERENCES

1. Kitterman DR, Cheng SK, Dilts DM, Orwoll ES. The prevalence and economic impact of low-enrolling clinical studies at an academic medical center. Acad Med. 2011;86(11):1360–6. doi: 10.1097/ACM.0b013e3182306440.
2. Califf RM. Clinical research sites–the underappreciated component of the clinical research system. JAMA. 2009;302(18):2025–7. doi: 10.1001/jama.2009.1655.
3. Dilts DM, Sandler AB. Invisible barriers to clinical trials: the impact of structural, infrastructural, and procedural barriers to opening oncology clinical trials. J Clin Oncol. 2006;24(28):4545–52. doi: 10.1200/JCO.2005.05.0104.
4. English R, Lebovitz Y, Griffin R. Transforming Clinical Research in the United States: Challenges and Opportunities: Workshop Summary. Washington (DC): National Academies Press (US); 2010.
5. Rowe SM, Borowitz DS, Burns JL, Clancy JP, Donaldson SH, Retsch-Bogart G, et al. Progress in cystic fibrosis and the CF Therapeutics Development Network. Thorax. 2012;67(10):882–90. doi: 10.1136/thoraxjnl-2012-202550.
6. Nelson EC, Batalden PB, Huber TP, Mohr JJ, Godfrey MM, Headrick LA, et al. Microsystems in health care: Part 1. Learning from high-performing front-line clinical units. Jt Comm J Qual Improv. 2002;28(9):472–93. doi: 10.1016/s1070-3241(02)28051-7.
7. Ettorchi-Tardy A, Levif M, Michel P. Benchmarking: a method for continuous quality improvement in health. Healthc Policy. 2012;7(4):e101–19.
8. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10(Suppl 1):21–34. doi: 10.1258/1355819054308530.
9. Carter CA, Ruhe MC, Weyer S, Litaker D, Fry RE, Stange KC. An appreciative inquiry approach to practice improvement and transformative change in health care settings. Qual Manag Health Care. 2007;16(3):194–204. doi: 10.1097/01.QMH.0000281055.15177.79.
10. Donaldson MS, Mohr JJ. Exploring Innovation and Quality Improvement in Health Care Micro-Systems: A Cross-Case Analysis. A Technical Report for the Institute of Medicine Committee on the Quality of Health Care in America. Washington (DC): Institute of Medicine; 2000.
