Abstract
BACKGROUND:
To improve the site selection process for clinical trials, we expanded a site survey to include standardized assessments of site commitment time, team experience, feasibility of tight timelines, and local medical community equipoise as factors that might better predict performance. We also collected contact information for institutional research services ahead of site onboarding.
AIM:
As a first step, we wanted to confirm that an expanded survey is feasible and generalizable—that asking site teams for more details upfront is acceptable and that the survey can be completed in a reasonable amount of time, despite the assessment length.
METHODS:
A standardized, two-part Site Assessment Survey Instrument (SASI), incorporating qualitative components and multiple contact-list sections, was developed on a publicly accessible survey platform and later transferred to a REDCap platform. After multiple rounds of internal testing, the SASI was deployed 11 times for multicenter trials. Follow-up questionnaires were sent to site teams to confirm that an expanded survey instrument is acceptable to the research community and can be completed during a brief work shift.
RESULTS:
Respondents thought the SASI collected useful and relevant information about their sites (100%). Sites were “comfortable” (90%) supplying detailed information early in the site selection process and 57% completed the SASI in one to two hours.
CONCLUSIONS:
Coordinating centers and sites found the SASI an acceptable and helpful tool for collecting data to inform multicenter trial site selection.
Keywords: Clinical trials, Site Selection, Trial Management, Site Performance Metrics, Quality Improvement, Feasibility Survey
“It is remarkable how little scientific understanding we have of one of the most fundamental aspects of the trials we undertake, that is, site selection.” -McMurray 20161
INTRODUCTION
Taking the time to match trials to committed and high-performing enrollment sites is key to meeting clinical trial recruitment goals and collecting reliable study data.2–4 Ultimately, we rely on site teams to achieve recruitment targets.5 In fact, after effective trial design and statistical planning, the single most important determinant of a trial’s success will be which site teams are selected to conduct the trial. Given how important a role the site teams will play, understanding how each site team compares with the characteristics of a high-performing model, and with other teams signing up for the same trial, should be a high priority for trial sponsors. Yet there are very few site selection guidelines for investigators and, historically, few standards for what to measure or what information to collect when considering site selection.6
One approach that could improve our scientific understanding of site selection is an in-depth, comparative, standardized, objective survey that explores the qualities associated with high-performing teams, and whether local compliance department personnel are able to meet tight timelines, to help inform site selection decisions and form better partnerships. In a recent review, Buse et al. put forth that widespread adoption of common principles and site readiness practices could improve the performance and success of clinical trials by streamlining site selection and trial initiation with a common set of expectations for trial sites and sponsors.7 We designed a comprehensive and adaptable site assessment survey instrument (SASI) asking site teams not only to report on patient counts, infrastructure, and oversight,6–9 but also to describe characteristics that might better predict a fast startup time and the likelihood of meeting recruitment targets—characteristics such as trial commitment time, team experience, adaptiveness to tight timelines, and local medical community equipoise. The survey instrument also asks site teams to collect and share contact information and estimated response times for institutional research services as part of site consideration. Our long-term goal is to determine whether this SASI method can serve as a predictive model, informing the selection of site teams that will be more likely to activate promptly and meet recruitment goals. First, we wanted to confirm that an expanded survey instrument is feasible—that asking site teams for more details upfront is acceptable and that the survey can be completed within a reasonable amount of time despite the effort required.
BACKGROUND
Prior to the SASI, a series of eight multicenter, international trials of a single intervention (CLEAR, 2009–2015;10,11 MISTIE, 2006–201712,13), spanning more than a decade, provided the opportunity to expand site selection from one based on pre-established relationships among site investigators to one based more objectively on site performance and readiness. The first list of items required at a site focused on facilities, equipment, and patient care protocols; investigators were identified based on relationships; and the clinical experience and trial track record of the investigator was assumed. The first rounds of recruitment performance, however, indicated that clinical qualifications and trial track records should not be assumed, and the checklist was revised to assess past trial successes and investigator understanding of the protocol and its demands. Over the next few trials, a structured interview was added prior to final selection to ask more qualitative questions about the research environment, coordinator assignments, team track records, equipoise of the referring medical community, and local care standards. During the interview, the site teams were also given the opportunity to ask clarifying questions about the study protocol and expectations in return. By the end of these eight trials, the checklist had grown into a web-based survey and dataset (VISION, Prelude Dynamics, Austin, TX), and the structured interview questions and statements of trial expectations became a permanent feature of the site assessment process. In 2018, the National Institutes of Health’s National Center for Advancing Translational Sciences (NCATS) Trial Innovation Network (TIN) proposed a working group to standardize a site-selection tool for use among the network’s investigator-initiated, multicenter clinical trial projects. In a recent call to action, Clinical and Translational Science Awards (CTSA) hubs have been challenged to use their robust centralized infrastructure and the sophisticated clinical trial expertise of their personnel to lead the charge in adopting and sharing site readiness practices across the Trial Innovation Network.14
METHODS
From Web-Based Survey to Site Assessment Survey Instrument (SASI)
Over a two-year period, the working group combined the web-based survey with additional qualitative interview content and expanded the contact information sections (Fig. 1). In addition to questions about the team make-up and experience, qualitative questions were added to explore whether introducing a specific trial intervention, or a clinical trial in general, complemented the clinical environment or would be disruptive, vis-à-vis coordinator and investigator time commitment and care pathway practices. Personnel information was expanded to include institutional compliance contact names and addresses. The questions were refined in multiple rounds of internal testing, both from the perspective of the study principal investigator (PI) selecting enrolling centers and from the perspective of site teams completing the SASI.
Figure 1:
Survey items excerpted from a recent SASI deployment. Shown are a few questions containing statements that not only garner information from a site team, but also help the site team understand trial expectations. The first two items explore recruitment commitments in general; the last two items ask the site team to describe its practices with culturally diverse populations.
SASI Platform and Structure
As the survey design became more comprehensive, the SASI was divided into two parts, with part 1 prioritizing institutional and general go/no-go critical questions to narrow down the list of potential sites to advance to part 2, a more detailed study of the team, environment, and readiness to meet trial goals. For example, part 1 confirms a commitment to institutional review board (IRB) reliance, data-sharing systems, enrollment and retention goals and Good Clinical Practice standards; it also collects contact details to expedite introductions between site and coordinating center personnel.15 Part 2 narrows the focus to help the local teams understand the protocol and agree to protocol performance goals. It explores intervention-specific site resources and team expertise, gathering information on specific skillsets and capabilities and team enthusiasm for and willingness to be a part of the trial. Across both parts, questions cover important local policies and rapid processing commitments of the Human Research Protection Program office, the Office of Research Administration, and the data protection and legal departments.
The SASI was created first in Qualtrics (Qualtrics Software Company, Seattle, WA), a free, flexible platform for building surveys quickly. After six Qualtrics SASI deployments, feedback from study sponsors was collected, and the SASI was reimplemented on the REDCap (Vanderbilt University, Nashville, TN) platform to automate deployment and site response comparisons.16
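As an illustration of the kind of automation REDCap enables, the sketch below pulls all survey records from a project through REDCap's standard record-export API so that site responses can be compared side by side. This is a minimal sketch under stated assumptions, not the working group's actual tooling; the endpoint URL and token are placeholders.

```python
import requests

# Placeholder values; each REDCap project has its own API endpoint and token.
REDCAP_API_URL = "https://redcap.example.edu/api/"
API_TOKEN = "YOUR_PROJECT_TOKEN"

def export_sasi_records():
    """Export all SASI responses as flat JSON records via the REDCap record-export API."""
    payload = {
        "token": API_TOKEN,
        "content": "record",   # standard REDCap record export
        "format": "json",
        "type": "flat",        # one row per record
    }
    response = requests.post(REDCAP_API_URL, data=payload)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    records = export_sasi_records()
    print(f"Exported {len(records)} site responses for comparison")
```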
Standardization and Customization
The SASI includes both fixed and customized questions. Standardization is important for ensuring that the essential investigator questions and commitments are being asked. This allows for future reliability and validity testing. In addition to the usual evidence of satisfactory volumes of participants meeting the eligibility criteria, investigators are asked about the availability of clinical research coordinators and medical personnel to perform trial tasks (team cohesion and commitment) and whether there is equipoise and support among health care delivery colleagues (medical community commitment). Prospective investigators also are asked to comment on the value of the trial to their patients (participant commitment) and future health care (personal PI commitment). They are also asked to comment on the scientific rationale and its local impact, the approach and acceptability of the design, whether the research can be safely performed and accepted in the local community, the confidence in the leadership team at the coordinating center level, and whether there are ethical concerns about conducting the research.
Customization is an important feature for trial-specific context; therefore, bespoke questions are created by each study PI. For example, an important consideration in a clinical trial might be 24-hour imaging and laboratory testing availability. These tailored questions are important to confirm that protocol-specific resources, facilities, and special equipment will be available for use throughout a trial and that a team can meet trial-specific reporting requirements. In addition, partially customizable questions that relate to every clinical trial but differ numerically, such as expected monthly screening and enrollment numbers, are customized fields. A structured work sheet is used to help principal investigators create trial-specific questions, equally distributed across parts 1 and 2 of the SASI.
Selecting the Relevant Standardized Questions
In general, we designed the survey to keep questions simple, avoid question bias, and limit open-ended responses. Fixed yes/no responses are scored 1 or 5, and mutually exclusive multiple-choice responses are scored on a 1-to-5 scale (1, least favorable; 5, most favorable). Open-ended answers are not scored, questions are not weighted, and no pass/fail score is designated. Responses are totaled, and sites are then ordered from the most favorable responses to the least favorable.
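To make the scoring concrete, here is a minimal sketch of the unweighted scheme just described, using hypothetical item names and answers (the actual SASI items appear in the Appendix): yes/no items score 1 or 5, multiple-choice items score 1 through 5, open-ended text is left unscored, and sites are ranked by their totals.

```python
# Minimal sketch of the unweighted SASI scoring described above.
# Item names and answers are hypothetical; open-ended answers are not scored.

YES_NO_SCORE = {"Yes": 5, "No": 1}

def score_site(responses):
    """Total the scored items for one site, skipping open-ended (text) answers."""
    total = 0
    for answer in responses.values():
        if isinstance(answer, int):        # multiple-choice item, pre-coded 1-5
            total += answer
        elif answer in YES_NO_SCORE:       # fixed yes/no item, scored 1 or 5
            total += YES_NO_SCORE[answer]
        # any other value is an open-ended answer and is left unscored
    return total

sites = {
    "Site A": {"irb_reliance": "Yes", "coordinator_effort": 4, "notes": "strong referrals"},
    "Site B": {"irb_reliance": "No", "coordinator_effort": 2, "notes": "new team"},
}

# No weighting and no pass/fail cutoff: sites are simply ordered by total score.
ranked = sorted(sites, key=lambda name: score_site(sites[name]), reverse=True)
print(ranked)  # ['Site A', 'Site B']
```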
The standardized questions focus on site organizational attributes and commitment indicators. The indicators are related to an ability to recruit and retain participants and a willingness to comply with protocol and data collection procedures. The four indicators are:
Team commitment: cohesive coordinator resources available to perform trial tasks;
Medical community commitment: equipoise and support of health care delivery colleagues;
Participant commitment: access to an adequate volume of participants who meet the eligibility criteria and will find value in being in a clinical trial; and
Personal PI commitment: site PI enthusiasm that studying the question is of value to local clinical practice.
For each of the four indicators, the questions elicit standardized responses, based on objective criteria, to determine:
1. Resources for trial tasks.
Subsequent investigator performance has been correlated with past trial performance, as experience usually repeats itself, both good and poor.17 Questions include estimations of investigator effort and site coordinator time protected for trial tasks, past performance in reaching enrollment goals, the availability of treatment/care areas needed to perform the trial, perceived budgetary barriers, and equipment access;
2. Equipoise and colleague support.
Many risk factors for low enrollment are related to the research environment in which a trial is launched and attitudes about the disease, the treatment being evaluated, and the trial design.18,19 The support of the referring community is often overlooked and is difficult to ascertain; investigators are asked if there will be local acceptance of the trial as an extension of treatment for the disease or intervention being studied.20 Several questions also ask about competing research, standard procedures, and insurance compatibility, as well as compatibility with hospital/clinic standards of care, operating procedures, and care pathways;
3. Volume of participants meeting the eligibility criteria and ease of access to them to meet site recruitment goals.
Most trial sponsors survey sites for electronic health record profiles (patient counts); however, recruitment access extends much further than patient counts.21 The survey includes operational and situational considerations—questions about whether the PI can commit a specific amount of effort to study activities and to the recruitment of culturally diverse patients; whether cultural/language competence training is provided; the hours of clinic operation; and the ease of referral.
4. Value.
Recruitment can be challenging when investigators are conflicted in their dual roles of clinicians and researchers.20 Investigators are asked to rate the enthusiasm and the eagerness of their teams to participate in the research in question.2,9
SASI Deployments
Over a two-year period, we deployed the SASI 11 times for multicenter clinical trials testing a variety of therapeutic indications. The Qualtrics platform was used for six deployments covering seven trials (one deployment covered two related trials); the REDCap platform was used for the remaining five deployments. Often, site selection started with sites selected in advance, usually handpicked from pre-established networks and relationships. In these cases, the SASI was deployed to new and previously selected sites alike, as the SASI serves the dual purpose of helping the study investigator select sites and acting as an ongoing directory of site information for upcoming trial activities.
SASI Follow-Up Questionnaires
Follow-up questionnaires were sent to the site teams so they could provide feedback on the SASI process and user experience. The questionnaires were conducted anonymously. For the Qualtrics deployments, the time lapse from SASI completion to the request for user feedback was two to six months. For the REDCap deployments, a questionnaire link (URL) was built into the platform to deploy the feedback questionnaire automatically on receipt of SASI Part 2. Questionnaires were sent to 106 site teams; 31 teams had responded as of the time of data export.
SASI Descriptive Statistics
Data included in the results section describe the type of trial, survey deployment platform, response rates, and response times. Results from the questionnaire developed and sent to the site teams during the same period are also included. Analyses investigating associations between specific SASI questions and site performance are beyond the scope of this investigation, as the trials are in their respective data gathering phases.
RESULTS
Simple descriptive statistics of 11 SASI deployments between 2020 and 2022 are provided (Table 1). Four SASI deployments were conducted without preselected sites; one deployment used the survey to fill open slots (~20%); four deployments used the SASI to gather data from entirely pre-selected sites; and two deployments (#1 and #11) used the SASI to select sites to include in grant submissions.
Table 1:
SASI Deployments in Chronological Order. Shown are qualitative characteristics of the deployments, counts of sites that were sent and returned a completed SASI, the median and range of the time to return (in hours), and the number of sites that entered startup. All SASI deployments were complete at the time of data export, except deployment #11, where Part 2 responses were ongoing.
Deployment # | Therapeutic Indication | Platform | Preselected Sites | Parts 1 & 2 combined | Part 1††: % Returned (Returned/Sent) | Part 1††: Hours to Return, Median (min, max) | Part 2: % Returned (Returned/Sent) | Part 2: Hours to Return, Median (min, max)
---|---|---|---|---|---|---|---|---
1 | Idiopathic Normal Pressure Hydrocephalus in Adults | Qualtrics | Some | Yes | 47% (26/55) | 1419.0 (3.1, 2627.3) | N/A | N/A |
2 | Renal Anhydramnios | Qualtrics | All | Yes | 62% (8/13) | 147.6 (90.1, 280.9) | N/A | N/A |
3 | Cognitive Impairment in Alzheimer’s | Qualtrics | All | Yes | 100% (11/11) | 232.5 (81.7, 731.3) | N/A | N/A |
4 | 2 Related COVID-19 Trials | Qualtrics | None | Yes | 54% (45/84) | 53.2 (3.1, 839.9) | N/A | N/A |
5 | Acute Ischemic Stroke | Qualtrics | None | No | 76% (35/46) | 171.5 (1.8, 1154.9) | 96% (27/28) | 145.6 (1.1, 679.1)
6 | Osteoarthritis Knee Pain | Qualtrics | None | No | 85% (29/34) | 238.6 (22.2, 1898.9) | 96% (25/26) | 259.8 (4.2, 2849.5) |
7 | Intracerebral Hemorrhage | REDCap | Some | No | 51% (22/43) | 26.4 (1.0, 8948.2) | 95% (19/20) | 121.9 (0.7, 8755.0)
8 | Pediatric Pulmonary Arterial Hypertension | REDCap | All | No | 100% (12/12) | * | 100% (12/12) | 75.8 (22.3, 340.3) |
9 | Post-mastectomy Pain | REDCap | None | No | 51% (18/35) | 16.4 (0.5, 218.4) | 71% (12/17) | 82.7 (0.5, 171.4) |
10 | Pediatric Sickle Cell Disease | REDCap | Some | No | 54% (19/35) | 94.0 (1.0, 139.7) | 63% (12/19) | 160.4 (30.7, 817.2) |
11 | Atrial Fibrillation in Adults | REDCap | All | No | 96% (72/75) | 48.7 (0.7, 1557.4) | 69% (50/72)† | 168.2 (0.5, 1120.1) |
*An error in the survey prevented saving the return date, making this metric unavailable
†Partial count; response collection ongoing at the time of data export
††Deployments 1 through 4 show the return percentage and turnaround time for the combined SASI
Site Performance
Shown in Table 1 are qualitative characteristics of the SASI deployments, counts of sites that were sent and returned a completed SASI, and the median and range of the time, in hours, to return the SASI. Preselected site teams responded at higher rates (103 of 111 institutions) than centers that were not yet included in a consortium (127 of 199 institutions). Early responders completed and returned the SASI in as little as 30 minutes. Most site teams reported needing 1–2 hours to complete each part, which was corroborated by review of return timing metrics on the earlier Qualtrics platform. Differences between overall time-to-return distributions, measured as the number of hours between SASI send-out and return, are attributable to differences in the deadlines given to site teams. These deadlines ranged from approximately 2 months (deployment #1) to 3 days for Part 1 and 7 days for Part 2 (deployments #9, #10, and #11). Sites were able to return the SASI within these timelines, with many sites returning both parts on the day they received the SASI link.
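For clarity, the return-time metric in Table 1 reduces to a simple computation over send-out and return timestamps. The sketch below uses hypothetical timestamps for a single deployment, since the actual export schema is not shown here.

```python
from datetime import datetime
from statistics import median

# Hypothetical send-out/return timestamps for one deployment.
pairs = [
    ("2021-03-01 09:00", "2021-03-01 10:30"),   # returned the same day
    ("2021-03-01 09:00", "2021-03-03 14:00"),
    ("2021-03-01 09:00", "2021-03-08 09:00"),
]

FMT = "%Y-%m-%d %H:%M"
hours = [
    (datetime.strptime(ret, FMT) - datetime.strptime(sent, FMT)).total_seconds() / 3600
    for sent, ret in pairs
]
print(f"Hours to return, median (min, max): "
      f"{median(hours):.1f} ({min(hours):.1f}, {max(hours):.1f})")
```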
SASI Follow-Up Questionnaire Results
Survey respondents (Table 2) thought the SASI collected useful and relevant information about their sites (100%), that the questions were easy to understand (100%), and that the survey followed a logical order (97%). To answer the question of whether the SASI could be completed within a reasonable amount of time, we asked the site teams how much time was required to complete and return the survey. Fifty-seven percent completed the SASI in one to two hours. Of the remaining respondents, 20% completed the SASI in less than one hour and 23% required more than two hours. Eighty-three percent (25/30) of the respondents reported that they completed the survey during business hours.
Table 2:
SASI Follow-Up Questionnaire Results.
SASI Follow-Up Questionnaire Questions | % of Responses (Yes/Total) |
---|---|
In general, did you find that the survey collected useful and relevant information about your site? | 100% (31/31) |
In general, did you find that it was easy to understand what was being asked by the questions? | 100% (31/31) |
How much combined time did you and your team need to complete the site assessment survey? | |
Less than 1 hour | 20% (6/30) |
Between 1 and 2 hours | 57% (17/30) |
Between 2 and 4 hours | 23% (7/30) |
Did the due date allow you and/or your team time to complete the survey during business hours? | 83% (25/30) |
The site assessment survey is designed to obtain critical information about each site early in the process before the trial begins. | |
Were you comfortable answering the questions early in the trial process? | |
Yes | 90% (28/31) |
No | 3% (1/31) |
Indifferent | 6% (2/31) |
Did the survey questions follow a logical order? | 97% (30/31) |
Did you find that the topics covered in the survey targeted your site’s capabilities? (asked for each topic below) | |
Personnel identification | 97% (29/30) |
Personnel experience | 93% (28/30) |
Commitment to trial goals & expectations | 97% (29/30) |
Institutional & IRB (Institutional Review Board) Information | 97% (29/30) |
Contracts & Office of Research Administration | 97% (29/30) |
Study-specific questions | 93% (27/29) |
Were the questions helpful in understanding what it takes to be a successful site in this trial? | 84% (26/31) |
The numerator counts and percentages reflect “Yes” answers except where other options are explicitly stated.
There were three additional free-response questions.
Sites were “comfortable” (90%) supplying detailed information early in the site selection process, with only 3% responding that the information was being asked for too early. As mentioned earlier, sharing trial expectations within the SASI is an important feature. Eighty-four percent of the respondents felt the survey questions clarified what it takes to be a successful enrolling site. To round out the feedback, we asked if the SASI, as constructed, gave the site teams the opportunity to highlight their capabilities and their commitment to the trial; 93%–97% responded “yes” to each of the specific site capability topics.
DISCUSSION
We have successfully implemented the SASI to assess site readiness to participate in multicenter clinical trials. The SASI was found to be useful from the participants’ perspective across multiple preselected and de novo trial cohorts and multiple diseases. At the request of study sponsors, four of the six Qualtrics deployments combined parts 1 and 2 into a single survey; the remaining two used the preferred two-part deployment model. Following these six deployments, the SASI was built as a REDCap project to further standardize survey item construction and to automate deployment and results comparison. The two-part format quickly differentiates site teams not interested in a particular protocol from those with a sustained interest to continue to part 2. For the site teams continuing to part 2, dividing the effort into two parts reduces the time required at a single session.
The SASI response rates per deployment ranged from 47% to 100%, with an average return rate of 71% across the 11 deployments. Response rates were similar on the two platforms, indicating that the IT (information technology) platform is not the primary driver of response rates. Interestingly, when the SASI was used to gather information from preselected sites, the preselected site teams responded at a higher rate. This could suggest that those preselected felt a greater obligation to complete and submit the SASI. It could also suggest that some prospective new investigators are less interested in submitting a SASI-type survey without assurances of trial participation. Alternatively, this could indicate a natural selection process—investigators who do not consider a trial a suitable fit, or are not interested, will be less likely to reply with the requested information in time or at all.
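For transparency, the 71% figure is consistent with the unweighted mean of the 11 per-deployment return rates in Table 1 (combined or Part 1); this interpretation is our assumption:

```latex
\bar{r} = \frac{47+62+100+54+76+85+51+100+51+54+96}{11}\% = \frac{776}{11}\% \approx 70.5\% \approx 71\%
```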
The SASI return period was dictated by turn-around times authorized by each trial sponsor. The earlier Qualtrics-based deployments offered site teams longer deadlines, resulting in longer data collection cycles, whereas the later REDCap-based deployments instructed site teams to meet shorter return deadlines, which they did, notably during the extremely rapid pace of the COVID pandemic. The rapid return of a questionnaire is treated as another indicator of motivation and was included as an assessment element during the final review and selection for all deployments.22
In addition to site teams being able to complete and return their information in a few hours (Questionnaire item #3), during normal working hours (Questionnaire item #4), and to meet survey return deadlines, site team feedback has been positive. Site teams report that the SASI was helpful in sharing and collecting useful and relevant information about site capabilities and trial expectations, despite the survey’s length. This is good news. Although it may seem labor intensive in the short run, taking more time to select the best sites should reduce later delays caused by low recruitment and poor data quality.22
Our data demonstrate that a detailed, structured site-assessment survey can be completed by most research teams in a few hours. Comments from the open-ended questions on the SASI questionnaire indicate that the most time-consuming part of completing the SASI was tracking down institutional personnel to gather contact information. Given that this contact information will be required during the start-up phase regardless, this represents a shift in when the information is collected rather than an additional task or new burden. And earlier information gathering could benefit site productivity: it gives site support personnel early awareness of, and involvement with, the research team’s plans for operationalizing the trial within their institution, potentially eliminating slowdowns during site startup.
STRENGTHS AND LIMITATIONS
Our work has strengths and weaknesses. A strength, unlike other reports of using a standardized site selection process,17,18 is the heterogeneity of trial types and therapeutic indications. This is, to our knowledge, the first effort to standardize site assessment across different trials and different networks in a prospective manner. This assessment of use indicates that a standardized set of questions is generalizable. Eleven deployments, however, are a limited data set for demonstrating generalizability or the ability to predict performance. We do not yet have the full trial operational metrics and trial completion data that would allow a more robust analysis of relationships between site team statements at the time of site selection and subsequent site performance. Site teams may have reasons to present their local factors and characteristics as better than they really are. A survey alone should not be the basis for site selection but rather part of a broader assessment, including interviews and site visits that confirm the accuracy of the information in the surveys. The CREST (Carotid Revascularization Endarterectomy versus Stenting Trial) investigators suggest that site selection may need to focus on the treatment under assessment, which is why the SASI includes questions customized specifically for each trial.23 The generalizability of the SASI in our experience, however, indicates that a standardized set of questions could be tested as a predictor of motivation and other performance characteristics, such as early activation and timely data collection, in addition to expertise in the treatment under assessment, as the CREST group found.
The SASI deployments include reminder messages for slow responders, but we have not designed a follow-up process for those not returning a SASI. The absence of data about nonresponders leaves a void in understanding why potential investigators are unable, or choose not, to return a completed SASI. The post-SASI feedback questionnaire was sent to a homogeneous audience (those who returned a completed SASI), but the response rate was low. A common concern with low response is the potential for non-response bias, although this is not a given.24–26 There are many reasons why site teams might not have answered the questionnaire—survey fatigue, too many emails, or not understanding or liking the SASI process. Whatever the cause, crucial information that could improve the experience of site teams participating in this type of site selection process may be missing.
NEXT STEPS
Still to be investigated is whether consolidating trial expectations, protocol feasibility, institutional contact details, and site qualities or assets into a survey for site selection improves trial timelines, with faster start-up and better overall trial performance, or whether comparing interested investigators using a standardized survey tool confidently predicts which investigators to choose. Future work will include statistical analyses to better understand the potential of a survey in optimizing site selection. A comprehensive statistical analysis can be conducted when performance data are available to relate SASI responses to clinical trial performance. The analysis should examine whether survey summary scores are associated with site selection decisions, and whether selected sites’ scores are associated with site-level performance. The REDCap platform provides better data organization and will facilitate future analyses of whether individual survey questions can predict the promptness of start-up and the likelihood of meeting a recruitment obligation. Exploring which SASI questions may be the most predictive of these performance metrics is another important analysis.
Following our selected analyses, we intend to make the SASI available in a downloadable format for standalone use on the publicly accessible TIN website. Recently, Viera et al. suggested that the centralized structure, rich expertise, and collaborative nature of CTSA hubs make them well positioned to lead the charge on randomized clinical trial (RCT) site readiness across institutions.14 The hubs, which collaborate and partner with the TIN, will provide a unique opportunity to promote the SASI and engage local investigators in adapting the SASI as a framework for site selection.
CONCLUSION
There is a need to understand more about what makes a trial team a high-performing team—ready, interested, and welcoming the addition of trials into their local care paradigms. Utilizing such information in a predictive mode is important to efficient trial execution. Currently, site vetting and eventual site selection are not predictable. Although there will never be guarantees, trial operations could be better served if study leaders better identified the characteristics of their candidate sites and more closely matched local support systems and commitments to the demands of trial timelines and protocols before making their site selections.
Site surveys hold promise to help us do this. But they should be designed to be flexible while remaining true to their purpose: assessing and educating site teams and improving the site vetting process, no matter the technical or performance requirements. The economy of using a uniform site survey to collect multiple points of information simultaneously should benefit all clinical trial stakeholders, site teams included.
Supplementary Material
Figure 2:
A structured work sheet is used to customize trial-specific questions, equally distributed across Parts 1 and 2 of the SASI. In addition, appraisals that relate to every clinical trial, such as enrollment rates and equipment availability, are partially customizable as fill-in-the-blank survey items.
ACKNOWLEDGMENTS
We would like to show our gratitude to the Clinical and Translational Science Awards (CTSA) Consortium, the SASI Working Group, the study principal investigators who implemented the SASI, and the principal investigators and study coordinators who participated in the first 11 SASI deployments. In addition, we would like to thank the REDCap development group at Vanderbilt University and the Trial Innovation Network Trial Innovation Center project leads, project managers, site navigators, and reviewers for their assistance in each SASI selection process. Special thanks to Emily Bartlett, Jasmine Moon, and Elizabeth Holthouse for editorial support.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This work is supported by the National Institutes of Health’s National Center for Advancing Translational Sciences (NCATS) and the National Institute on Aging (NIA), in support of the Trial Innovation Network, under grant numbers U24TR004440 (Johns Hopkins University Trial Innovation Center); U24TR001609 (Johns Hopkins/Tufts Universities); U24TR001579, U24TR004437, and U24TR004432 (Vanderbilt University); and U24TR001608 (Duke/Vanderbilt Universities).
APPENDIX: Site Assessment Survey Instrument, Parts I and II
Declaration of interests
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests:
Bradley Barney’s relationships with Johns Hopkins University include consulting or advisory services and financial support. Salina Waddy and Ken Wiley work for the National Institutes of Health.
REFERENCES
1. McMurray JJV. Site Selection and Performance in Clinical Trials: The Need for Better Understanding. Circ Heart Fail. 2016;9(9). doi:10.1161/CIRCHEARTFAILURE.116.003490
2. Fogel DB. Factors associated with clinical trials that fail and opportunities for improving the likelihood of success: A review. Contemp Clin Trials Commun. 2018;11:156–164. doi:10.1016/j.conctc.2018.08.001
3. Desai M. Recruitment and retention of participants in clinical studies: Critical issues and challenges. Perspect Clin Res. 2020;11(2):51–53. doi:10.4103/picr.PICR_6_20
4. Clinical Trials: A Data Driven Feasibility Approach. Pharmaceutical Outsourcing. Accessed November 27, 2023. https://www.pharmoutsourcing.com/Featured-Articles/333830-Clinical-Trials-A-Data-Driven-Feasibility-Approach/
5. Huang GD, Bull J, Johnston McKee K, Mahon E, Harper B, Roberts JN. Clinical trials recruitment planning: A proposed framework from the Clinical Trials Transformation Initiative. Contemp Clin Trials. 2018;66:74–79. doi:10.1016/j.cct.2018.01.003
6. Potter JS, Donovan DM, Weiss RD, et al. Site selection in community-based clinical trials for substance use disorders: strategies for effective site selection. Am J Drug Alcohol Abuse. 2011;37(5):400–407. doi:10.3109/00952990.2011.596975
7. Buse JB, Austin CP, Johnston SC, et al. A framework for assessing clinical trial site readiness. J Clin Transl Sci. 2023;7(1):e151. doi:10.1017/cts.2023.541
8. Warden D, Trivedi MH, Greer TL, et al. Rationale and methods for site selection for a trial using a novel intervention to treat stimulant abuse. Contemp Clin Trials. 2012;33(1):29–37. doi:10.1016/j.cct.2011.08.011
9. Harper BD, Zuckerman D. Critical success factors for planning for site selection and patient recruitment planning. BioExecutive International. 2006;2(6):16–28.
10. Hanley DF, Lane K, McBee N, et al. Thrombolytic removal of intraventricular haemorrhage in treatment of severe stroke: results of the randomised, multicentre, multiregion, placebo-controlled CLEAR III trial. Lancet. 2017;389(10069):603–611. doi:10.1016/S0140-6736(16)32410-2
11. Naff N, Williams MA, Keyl PM, et al. Low-dose recombinant tissue-type plasminogen activator enhances clot resolution in brain hemorrhage: the intraventricular hemorrhage thrombolysis trial. Stroke. 2011;42(11):3009–3016. doi:10.1161/STROKEAHA.110.610949
12. Hanley DF, Thompson RE, Rosenblum M, et al. Efficacy and safety of minimally invasive surgery with thrombolysis in intracerebral haemorrhage evacuation (MISTIE III): a randomised, controlled, open-label, blinded endpoint phase 3 trial. Lancet. 2019;393(10175):1021–1032. doi:10.1016/S0140-6736(19)30195-3
13. Hanley DF, Thompson RE, Muschelli J, et al. Safety and efficacy of minimally invasive surgery plus alteplase in intracerebral haemorrhage evacuation (MISTIE): a randomised, controlled, open-label, phase 2 trial. Lancet Neurol. 2016;15(12):1228–1237. doi:10.1016/S1474-4422(16)30234-4
14. Viera L, James L, Shekhar A, Ioachimescu OC, Buse JB. Site readiness practices for clinical trials - considerations for CTSA hubs. J Clin Transl Sci. 2023;7(1):e146. doi:10.1017/cts.2023.569
15. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. ICH Harmonised Guideline: General Considerations for Clinical Studies E8(R1). Accessed November 27, 2023. https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf
16. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–381. doi:10.1016/j.jbi.2008.08.010
17. Getz KA. Predicting Successful Site Performance. Applied Clinical Trials. 2011. Accessed November 27, 2023. https://www.appliedclinicaltrialsonline.com/view/predicting-successful-site-performance
18. Nipp RD, Hong K, Paskett ED. Overcoming Barriers to Clinical Trial Enrollment. Am Soc Clin Oncol Educ Book. 2019;39:105–114. doi:10.1200/EDBK_243729
19. Isaksson E, Wester P, Laska AC, Näsman P, Lundström E. Identifying important barriers to recruitment of patients in randomised clinical studies using a questionnaire for study personnel. Trials. 2019;20(1):618. doi:10.1186/s13063-019-3737-1
20. Kaur G, Smyth RL, Williamson P. Developing a survey of barriers and facilitators to recruitment in randomized controlled trials. Trials. 2012;13:218. doi:10.1186/1745-6215-13-218
21. Nelson SJ, Drury B, Hood D, et al. EHR-based cohort assessment for multicenter RCTs: a fast and flexible model for identifying potential study sites. J Am Med Inform Assoc. 2022;29(4):652–659. doi:10.1093/jamia/ocab265
22. Hurtado-Chong A, Joeris A, Hess D, Blauth M. Improving site selection in clinical studies: a standardised, objective, multistep method and first experience results. BMJ Open. 2017;7(7):e014796. doi:10.1136/bmjopen-2016-014796
23. Demaerschalk BM, Brown RD Jr, Roubin GS, et al. Factors Associated with Time to Site Activation, Randomization, and Enrollment Performance in a Stroke Prevention Trial. Stroke. 2017;48(9):2511–2518. doi:10.1161/STROKEAHA.117.016976
24. Groves RM. Nonresponse Rates and Nonresponse Bias in Household Surveys. Public Opinion Quarterly. 2006;70(4):646–675.
25. Groves RM, Peytcheva E. The Impact of Nonresponse Rates on Nonresponse Bias: A Meta-Analysis. Public Opinion Quarterly. 2008;72(2):167–189. http://www.jstor.org/stable/25167621
26. Davern M. Nonresponse rates are a problematic indicator of nonresponse bias in survey research. Health Serv Res. 2013;48(3):905–912. doi:10.1111/1475-6773.12070