Abstract
Introduction:
Clinical trials are a vital component of translational science, providing crucial information on the efficacy and safety of new interventions and forming the basis for regulatory approval and/or clinical adoption. At the same time, they are complex to design, conduct, monitor, and report successfully. Over the last two decades, concerns about the quality of trial design and the frequent failure to complete and report clinical trials, characterized as a lack of "informativeness" and highlighted by the experience of the COVID-19 pandemic, have led to several initiatives to address the serious shortcomings of the United States clinical research enterprise.
Methods and Results:
Against this background, we detail the policies, procedures, and programs that we have developed in The Rockefeller University Center for Clinical and Translational Science (CCTS), supported by a Clinical and Translational Science Award (CTSA) program grant since 2006, to support the development, conduct, and reporting of informative clinical studies.
Conclusions:
We have focused on building a data-driven infrastructure to both assist individual investigators and bring translational science to each element of the clinical investigation process, with the goal of both generating new knowledge and accelerating the uptake of that knowledge into practice.
Keywords: Clinical trials, CTSA, metrics, translational research, translational science
Introduction and Background
Clinical trials are a vital component of translational science, providing crucial information on the efficacy and safety of new interventions and forming the basis for regulatory approval and/or clinical adoption. At the same time, they are complex to design, conduct, and report successfully since they must meet stringent bioethical, statistical, and regulatory requirements, and be able to recruit the required number of participants in a prescribed time period.
Concerns about the lack of completion and reporting of clinical trials led to the Food and Drug Administration Amendments Act of 2007 (FDAAA), which mandated registration of trials and timely reporting of results on the ClinicalTrials.gov website [1]. Despite the FDAAA, in 2013 Nguyen et al. reported that only 17% of cancer randomized clinical trials were either reported on ClinicalTrials.gov or published within 12 months of study completion or termination [2], and Gordon et al. reported that only 23% of National Heart, Lung, and Blood Institute (NHLBI) clinical trials were published within 12 months [3]. In recognition of the challenges in designing and conducting successful clinical trials, the NIH Collaboratory, initiated in 2012 to develop pragmatic clinical trials, implemented a two-phase development program in which studies were first provided with extensive support from core working groups to refine the protocols, after which the trials were finalized and evaluated for funding [4]. Similarly, in 2018 NHLBI introduced a biphasic, milestone-driven mechanism of clinical trial approval comprising a start-up phase to refine the protocol followed by a clinical trial execution phase [5]. The CTSA program responded in 2017 by creating the Trial Innovation Network to serve as a laboratory to study and improve the clinical trial process, building into the network a series of services, consultations, and pilot studies to ensure that protocols meet high standards [6].
Zarin et al. in 2019 published a commentary entitled "Harms From Uninformative Trials," in which they defined an uninformative trial as one whose results are not of meaningful use to a patient, clinician, researcher, or policymaker [7]. They identified five conditions necessary for a study to be informative (Table 1), focusing on the study hypothesis, design, feasibility, analysis, and reporting. They pointed out that while IRBs assess risk/benefit, they are often unable to assess scientific merit beyond that needed to justify risk. While NIH peer review of clinical trials and the programs described above undoubtedly improve the quality of studies, NIH-supported trials represent only a small portion of all clinical trials. In fact, Zarin et al. reported that in March 2019 there were 9,484 open clinical trials registered on ClinicalTrials.gov that were enrolling over 5 million American participants and had no evidence of external funding. They therefore called on academic institutions to take on the responsibility of scientific review. Most recently, the COVID-19 pandemic raised additional concerns about uninformative trials, with Bugin and Woodcock reporting that only 5% of the clinical trials testing drugs as COVID-19 treatments were randomized and adequately powered to provide clinically meaningful data [8].
Table 1.

| Conditions for trial informativeness | Departments and cores | Processes | Metrics | Performance |
|---|---|---|---|---|
| 1. Importance: Trial hypothesis is likely to inform an important scientific, medical, or policy decision [53–58] | | | | |
| 2. Design: Trial methods are likely to provide meaningful evidence related to study hypothesis [59–63] | | | | |
| 3. Feasibility: The trial is likely to be feasible [64–67] | | | | |
| 4. Integrity: Trial is conducted and analyzed in a scientifically valid manner that is faithful to design [68,69] | | | | |
| 5. Reporting: Systems are in place to ensure timely, complete, and accurate reporting [70,71] | | | Number of problem records on ClinicalTrials.gov register | 0 records with problem status in 2023 |
| 6. Return of Results (RoR): Investigators are counseled on appropriate return of aggregate and individual results, including actionable genetic information | | | | |
| 7. Data Sharing | | | | |
| 8. Representative Enrollment | | | | |
ACCTS, advisory committee for clinical and translational science; IRB, institutional review board; TSE, translational science expert/educator; CAB, community advisory board; DSMB, data safety and monitoring board; IT, information technology; TRN, translational research navigation program; R3, research rigor, reproducibility, and reporting program; DSMP, data safety monitoring plan; ICF, informed consent form; CRSO, clinical research support office; RU, Rockefeller University
The NIH General Clinical Research Center (GCRC) program, which was inaugurated in the 1960s and terminated when the Clinical and Translational Science Awards (CTSA) program began in 2006, required that recipient institutions create a GCRC Advisory Committee (GAC) to provide a scientific review of protocols to be conducted in the center and that the GAC also review a Data Safety and Monitoring Plan (DSMP) for each study. The DSMP is designed to be a quality assurance plan for the study, encompassing subject safety, data integrity, subject privacy, data confidentiality, product accountability, study documentation, and study coordination. With the advent of the CTSA program, the requirement for a GAC was eliminated, but in 2015 a CTSA Consensus Working Group recommended that institutions create Scientific Review Committees (SRCs) and specified the composition and function of such a committee [9]. Five years later, Selker et al. reported on the implementation or modification of SRCs at 10 institutions, stressing the importance of having a clear mandate from institutional leadership and at the local level, as well as clarity on integrating procedures and responsibilities with the IRB [10].
This background provides a framework for presenting the policies, procedures, and programs that we have developed in the Rockefeller University Center for Clinical and Translational Science (CCTS), supported in part by a CTSA grant since 2006, to assist in the development, conduct, and reporting of informative clinical studies.
The Rockefeller University Clinical and Translational Science Program
Overview
Rockefeller University opened as a research institute with the mission, Scientia pro bono humani generis (Science for the benefit of humanity), in 1901, and in 1910 the Rockefeller Institute Hospital opened as the first research hospital in the United States. From its beginning, the Hospital reflected the vision of its first Director, Rufus Cole, one of William Osler's trainees at Johns Hopkins Medical School and Hospital, to create a cadre of physician-scientists who would evaluate patients at the bedside, at the autopsy table, and most importantly, at the laboratory bench. The guiding principles were that all patients were research patients, and thus not charged for their hospital or medical care, and all physicians were salaried and did not receive payment for their medical services so that they could devote themselves to the research mission. The Rockefeller Institute became The Rockefeller University in the 1960s when it began conferring PhD degrees, and in 2006, it began conferring master's degrees in Clinical and Translational Science when the KL2 Clinical Scholars program for junior translational scientists was developed under the CTSA grant [11].
Today, Rockefeller University comprises approximately 70 separate laboratories, each headed by a senior scientist, most of whom hold PhD degrees and engage in basic science. A small number of labs are led by MD/PhD and MD investigators who variably divide their efforts between basic and clinical studies. The core educational experience for the KL2 Clinical Scholars is experiential, namely designing, conducting, and reporting a human participant protocol [11]. Many of the KL2 Clinical Scholars join basic science labs where they use their clinical skills and experience to initiate translational research programs that build on the basic science discoveries from that lab. To encourage basic scientist participation in human participant translational research and assist Clinical Scholars who have limited experience in conducting human studies, we developed a series of programs to help both groups develop, conduct, analyze, and report informative and impactful studies and to identify stakeholders and potential community-based partners to aid in study design and conduct, as well as dissemination and implementation of study results.
Translational Research Navigation (TRN) Program
Overview: This program is led by a senior core of Translational Science Experts/Educators (TSEs) who interact extensively with each other on a weekly basis to assist investigators in developing, conducting, and analyzing their studies (Table 2). TRN originally focused on protocol development [12], and was later expanded to foster standards across the full life cycle of a protocol. The TSEs reviewed the criteria proposed by Zarin et al. to ensure that protocols are informative and collectively decided to add three additional criteria: (1) "Return of results." This was based in part on data derived from the Research Participant Perception Survey (RPPS) developed at Rockefeller [13,14], which have consistently demonstrated the high value that research participants place on receiving research results, as well as recommendations from the Secretary's Advisory Committee on Human Research Protection, the revised Common Rule, and the landmark 2018 National Academies of Sciences, Engineering, and Medicine report recommending a paradigm shift for return of research results [15–17]. Thus, return of results, which has often been neglected, is not only a crucial element of dissemination but also an important element in showing respect for research participants as partners in the research process and demonstrating that the institution is worthy of participant and public trust [18–20]. (2) "Data Sharing." This was based on the recognition that beyond reporting aggregate results, there is a need for sharing primary data and metadata to ensure transparency and reproducibility of research results and to facilitate the conduct of additional analyses (e.g., meta-analysis) by other investigators. (3) "Representative Enrollment." This ensures that studies are conducted with participants who accurately reflect the target population and that individuals in the target population have access to participation.
Table 2.
Translational science experts/educators' title | ACCTS meeting | Senior staff meeting | Team science leadership review | IRB |
---|---|---|---|---|
PI/PD and Physician-in-Chief | √* | √ | √ | √ |
Clinical Research Facilitation Leader | √ | √ | √ | √ |
Director of Nursing | √ | √ | √ | √ |
Hospital Information Manager | √ | √ | √ | – |
Director of Regulatory Affairs/HIS | – | √ | √ | – |
Clinical Research Officer | √* | √ | √ | √* |
PI/PD and Hospital CEO | √* | √ | √ | √* |
Pharmacy Director | √* | √ | √ | √* |
Medical Director | √* | √ | √ | √* |
Bionutrition Director | √ | √ | √ | √ |
IRB Chair and KL2 Director | √* | – | √ | √* |
Lead Community and Collaboration | √* | √ | √ | – |
Bioinformatics Director | √ | √ | √ | – |
Biostatistics Director | √* | √ | √ | √* |
Chief Operating Officer/CTSA Administrator | √* | √ | √ | √ |
Basic Science Faculty members | √* | – | √ | √* |
*Voting member.
ACCTS, advisory committee on clinical and translational science (monthly meeting); CEO, chief executive officer; HIS, hospital information systems; IRB, institutional review board (monthly meeting); PI/PD, CTSA principal investigator/program director; senior staff meeting (weekly meeting); TSLR, team science leadership review (periodic meetings)
These eight criteria are listed in Table 1, along with the measures we take to ensure that we address each of them. To support several of the criteria, and in recognition of the increasing importance of scientific rigor in research design, analysis, and reporting, the TSEs created a University-wide program to enhance rigor, reproducibility, and reporting, R3 [21], with active support from data management experts in the University’s library. R3 sponsors a series of presentations throughout the year and maintains a website with background materials; NIH and NSF requirements and resources; a group of available aids (guidelines, methods, and checklists); and research support resources and tools, including open science framework, navigation tools, and a data sharing wizard.
We are committed to incorporating the views and priorities of diverse, under-represented communities in our research and achieve this in part through a wide variety of outreach methods to engage special populations and through our 15-year collaboration with Clinical Directors Network (CDN), an award-winning Practice-Based Research Network that performs clinical investigation to high standards in collaboration with Federally Qualified Health Centers (FQHCs) and other safety-net primary care practices. The president of CDN, Dr. Jonathan N. Tobin, serves as the Co-Lead of the Rockefeller University Community and Collaboration Core. Thus, studies performed over the past years have included vulnerable older adults from minority populations under-represented in research and have engaged practicing clinicians and other staff workers in FQHCs, older adult centers, and other community-based health and social service providers. These valued research collaborators diversify the research workforce and broaden the research perspective, sensitizing investigators to service delivery concerns and the barriers and facilitators to implementation that exist in a wide range of medical care settings.
Through a variety of mechanisms, investigators at Rockefeller have also built relationships with special populations, including patients with Fanconi anemia, Down syndrome, neurodevelopmental disorders, rare immunodeficiencies, fibrolamellar hepatocellular carcinoma, hidradenitis suppurativa, several different malignancies, COVID-19, and defects in facial recognition, as well as LGBTQ persons and women in the criminal justice system at high risk of developing HIV-1 infection. Our institution also conducts studies of normal physiology in healthy individuals.
Fig. 1 provides an overview of the roles of the TSEs, TRN, and R3 in supporting the multidisciplinary teams in developing, conducting, analyzing, and reporting their studies, as well as the role of the TSEs in workforce development. Our continuous quality improvement program is driven by metrics of the performance of each element in TRN and the outcome data about the quality of the protocols as expressed by our research participants.
Protocol development: The TRN process begins with a lead TSE with extensive experience in protocol development and conduct under Good Clinical Practice (GCP) exploring the scientific hypothesis with the investigator and other TSEs, who provide guidance on (1) how to engage communities in all phases of the research; (2) key considerations related to patient safety, biostatistics, bioethics, research nursing support, bioinformatics, pharmaceuticals, regulatory requirements, and data management, security, and reporting; (3) how to articulate a hypothesis that operationalizes a scientific question in a clinical and/or community context; (4) alignment of aims and outcomes; (5) robust study design, including calculations of power and sample size; and (6) participant recruitment feasibility and the likely time needed to complete recruitment into the study, as well as the best methods to identify eligible participants [12,22]. The TRN process also brings IRB leadership into early discussions when required, as well as the Research Hospitalist, a position that our Hub created to provide medical support to studies conducted by basic scientists and to ensure the safety of research participants by reviewing the medical aspects of all protocols and the compliance of novel research procedures and devices with infection-control and electrical engineering standards [23]. Additional multidisciplinary TRN protocol development meetings are held until the protocol is refined and judged by the investigators and TSEs, as well as by the clinical research coordinator responsible for the protocol's conduct, as ready for formal review. Investigators are encouraged but not mandated to accept recommendations from TRN staff. Protocols that do not meet high standards of validity, feasibility, bioethics, safety, or other key considerations despite TRN support are terminated before submission.
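As one illustration of the power and sample-size discussions that occur at this stage, the standard two-arm comparison of means can be sketched with the usual normal-approximation formula, n per group = 2(z₁₋α/₂ + z₁₋β)²σ²/Δ². The sketch below is a generic textbook calculation, not the TRN biostatisticians' actual tooling; the effect size and standard deviation are hypothetical values chosen for illustration.

```python
import math
from statistics import NormalDist

def n_per_group(delta: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-arm comparison of means
    (two-sided test, normal approximation):
        n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / delta^2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = z.inv_cdf(power)            # quantile corresponding to desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

# Detecting a 5-unit difference with SD 10 at alpha = 0.05 and 80% power:
print(n_per_group(delta=5, sd=10))  # 63 per group
```

Even this simple calculation makes the feasibility stakes concrete: halving the detectable effect size quadruples the required enrollment, which directly shapes the recruitment discussions described below.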
Protocols that meet the standards are submitted for review through iRIS (Cayuse Data Corporation), a protocol development, review, and conduct electronic software application that the TSEs have customized over many years to support research at Rockefeller.
Protocol review: Submitted protocols undergo two rounds of review:
The scientific review is conducted by the Advisory Committee for Clinical and Translational Science (ACCTS), which is composed of TSEs, university faculty engaged in clinical and basic research, university legal counsel with expertise in contracting and conflict of interest policies, and the two Clinical Scholars selected as Chief Clinical Scholars (Table 2). The ACCTS starts by analyzing the hypothesis, aims, and outcomes for scientific and medical validity and novelty. The selected endpoint measures and the study design and plan for analysis are reviewed by scientists, clinicians, and the biostatistician. For studies that incorporate nucleotide sequencing and analysis, the bioinformatician is consulted. The ACCTS also reviews the data management plan; the required research nursing and research pharmacy support; funding, space, and other aspects of feasibility; and participant safety and data and safety monitoring. ACCTS also reviews resource utilization and fairly apportions resources when there are competing demands. If the scientific hypothesis is not sufficiently supported or justified, or if there is insufficient information about the relevance and validity of the measures, the sample size, and the likelihood of a meaningful finding, the protocol is either returned to the investigator with one or more stipulations that need to be addressed as a condition for later approval or tabled because the protocol needs extensive revision. In addition, some protocols that do receive full approval are returned to the investigator with non-binding recommendations for improvement. Thus, ACCTS policies and procedures meet all of the recommendations detailed in the CTSA Consortium Consensus Scientific Review Committee Working Group Report on the SRC Process [9].
The IRB reviews the protocol for ethical design, regulatory compliance, safety, and other aspects of human protections, with a focus on the clarity of consent. The IRB also reviews the plan for Return of Results to study participants and the community, as well as recruitment feasibility, and the plan for, and likelihood of achieving, representative enrollment. Protocols without credible recruitment plans are returned for revision, which usually involves a comprehensive recruitment consultation. The recruitment core utilizes a data-rich approach [22] to identify successful strategies, achieve representative enrollment, and predict achievable accrual [24]. The IRB also makes an independent assessment of conflict of interest issues.
The IRB chair and staff participate actively in the TRN process, meeting with investigators early during protocol development to address issues that may arise during the IRB process. The chair is also a member of the ACCTS, providing an opportunity to participate in the scientific review process and identify issues that may impact the IRB review. To ensure that the dual review process by ACCTS and the IRB does not delay the approval of protocols that are complete, the ACCTS meets the day prior to the IRB meeting. There is broad overlapping membership between ACCTS and the IRB (Table 2) and members of both committees can view the stipulations of both committees so that there is excellent communication between the committees.
Additional safeguards to ensure that approved protocols continue to meet our quality standards include, when available, assignment of experienced research coordinators to protocols; IT assistance with developing REDCap databases to record research information; early audits (i.e., after the first few participants are enrolled) of studies led by Clinical Scholars or new PhD investigators and studies that are complex or considered high risk; ongoing review of protocol recruitment progress, with the research team and recruitment staff communicating regularly to assess recruitment success and modify approaches to support timely enrollment [24]; and at least yearly review by ACCTS and the IRB. Information obtained from the above review processes informs the design of educational activities, including a bi-monthly GCP newsletter sent to all investigators that identifies emerging issues in GCP and reinforces proper clinical practice, including, where needed, systemic corrections such as changes to templates, policies, or workflow. This corrective information is then incorporated into the TRN process to proactively prevent recurrence (Fig. 1).
As indicated above, TRN was expanded beyond protocol development [12] to include Community Engaged Research Navigation [25], Protocol Initiation/Implementation Navigation, Protocol Conduct Navigation, and Protocol Completion Navigation, including assistance with reporting of results to ClinicalTrials.gov and return of results to participants. It thus provides a comprehensive structure to maintain quality control. Separate from their TRN advisory activities, as indicated in Fig. 1, TSEs also provide important services to directly support the conduct of protocols and play vital roles in educating the Clinical Scholars and assessing their progress in mastering Team Science Leadership competencies [26]. Initiatives under development include opportunities for Clinical Scholars to participate in, and lead, research teams that include practicing clinicians and other service providers, including gaining experience in training community-based collaborators in human participant research while learning about community health needs directly from clinicians and their patients. This experience will inform future implementation science studies that design and compare alternative strategies to implement, scale up, and sustain innovations developed in the laboratories at Rockefeller.
Recruitment
One of the most common reasons for studies to fail to be informative is the inability to recruit sufficient numbers of participants. We found that inexperienced investigators often did not begin to think in detail about their recruitment plan until their protocol was approved by the ACCTS and IRB, and they almost always underestimated the challenges to timely recruitment, the required budget, and the time needed to achieve full recruitment. To address this, we instituted a comprehensive recruitment program [22] comprising the following five elements. 1. Comprehensive Consultation early in the design of the protocol. As part of TRN, all investigators are strongly encouraged to obtain a comprehensive consultation from the members of the Recruitment Core early during protocol development. This provides an opportunity to educate the investigator about the "leaky pipe" concept, in which they can anticipate the loss of eligible participants at multiple stages in the recruitment process, and how to anticipate the likely loss at each stage based on the population being recruited, the stringency of the inclusion and exclusion criteria, the benefits and burdens of the study as assessed by potential participants, and the effectiveness of the advertising campaign. The Recruitment Core also helps investigators develop a realistic assessment of the likely time to achieve full recruitment based on the above information, as well as the availability of the investigator and team members in light of holidays, attendance at professional meetings, and other competing demands on their time. Finally, the Recruitment Core suggests a specific advertising campaign, including the production of advertisements designed to enhance equitable enrollment by ensuring that the images and wording are appropriate for a diverse group of participants, and provides a partial subsidy of the advertising. 2. Recruitment plan.
Investigators are required to include a detailed recruitment plan in their protocol, including where appropriate, identifying stakeholders and collaborators who can both facilitate recruitment and support later dissemination and implementation activities. The ACCTS focuses on whether the proposed plan is realistic and the IRB focuses on whether the strategy will offer fair access to representative populations in deciding whether to approve the protocol. 3. Participant Repository. We recognized the importance of being able to contact individuals who previously volunteered to participate in a study at Rockefeller to assess their willingness to participate in a future study [22]. This is especially valuable for our CTSA hub because many of our studies involve healthy participants. In response, we developed an IRB-approved protocol to establish a Research Volunteer Repository that permits individuals who agree to add their names and some demographic data to the Repository to be recontacted about participation in future research studies. Of note, ∼95% of individuals who contact Rockefeller to inquire about their eligibility for a study agree to join the Repository, and the Repository now contains more than 10,000 individuals. One metric that highlights the value of the Repository is that 62% of all Repository members have enrolled in at least one study. 4. Centralized Contact Center. The Recruitment Core supports individual studies by providing personnel to receive contacts from potential participants who respond to advertisements for the study. The staff member engages the caller, collects basic demographic and referral information, and then assesses their eligibility for one or more studies by reviewing a list of prescreening criteria. Volunteers who meet the criteria are scheduled or referred for screening by the investigator. The staff follows an IRB-approved script to offer enrollment into the Repository, regardless of the prescreening outcome. 
When individuals do not meet the criteria or decline participation, the staff member records the reason, and this information is compiled and shared with the investigator to inform whether it would be beneficial to modify the advertising or the protocol to improve recruitment into the study. 5. The Accrual Index. Based on the Recruitment Core's extensive experience, a realistic estimate is made of the total time likely to be required for full accrual of the study [24]. From this, the expected fractional level of recruitment at any time during the study can be estimated. This is then converted into a dashboard that allows the Recruitment Core to monitor studies for the expected progress toward completion. When studies deviate from the expected timeline, the investigators and recruiting staff analyze the reason(s) and make appropriate modifications in the protocol or recruitment strategy, including, where appropriate, identifying additional academic and community partners.
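The "leaky pipe" and Accrual Index concepts lend themselves to a simple arithmetic sketch. In the example below, the stage retention fractions, enrollment target, and timelines are hypothetical illustrations rather than Rockefeller's figures, and the expected accrual is modeled as a simple linear ramp, whereas the actual Accrual Index rests on the Recruitment Core's empirical estimates [24].

```python
import math

# Hypothetical stage-by-stage retention in the recruitment "leaky pipe":
# the fraction of people surviving each stage, from initial contact to enrollment.
retention = {
    "responds to ad": 0.60,
    "passes prescreen": 0.40,
    "eligible at screening visit": 0.50,
    "consents and enrolls": 0.80,
}

def contacts_needed(target_enrolled: int) -> int:
    """Work the funnel backward: initial contacts needed to hit the enrollment target."""
    overall = math.prod(retention.values())  # fraction enrolled per initial contact
    return math.ceil(target_enrolled / overall)

def accrual_status(enrolled: int, target: int,
                   weeks_elapsed: float, weeks_planned: float) -> str:
    """Accrual-index style check: compare actual fractional accrual against the
    fraction expected under a constant-rate (linear) recruitment timeline."""
    expected_fraction = min(weeks_elapsed / weeks_planned, 1.0)
    actual_fraction = enrolled / target
    return "on track" if actual_fraction >= expected_fraction else "behind schedule"

target = 40
print(contacts_needed(target))  # 417 initial contacts for 40 enrollees
print(accrual_status(12, target, weeks_elapsed=20, weeks_planned=52))  # behind schedule
```

The funnel arithmetic shows why early consultation matters: with these retention rates, only about 10% of initial contacts ultimately enroll, so an investigator budgeting for contacts equal to the sample size would fall far short.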
Contracting and Conflict of Interest
A dedicated university counsel oversees clinical trial contracting, material transfer agreements, and the university’s conflict of interest program. She serves on both the ACCTS and the IRB, facilitating the flow of information on both topics to the appropriate committees.
Regulatory Requirements: Clinical Trials Registration, FDA Documents, Monitoring, and Auditing
The Clinical Research Support Office (CRSO) assists investigators in registering their trials on ClinicalTrials.gov and in reporting the required updates when they are due. It also assists investigators in interacting with the FDA to obtain and maintain Investigational New Drug applications (INDs). The CRSO also oversees compliance with internal monitoring of studies, as well as institutional not-for-cause audits of studies, including the early auditing of studies so that corrective actions and training can be implemented as soon as possible.
Research Participant Perception Survey (RPPS)
The motivation to create the RPPS came from a desire to have an outcome measure of the quality of the investigative team's ability to conduct clinical studies to high standards as judged from the perspective of the individuals who participated in the studies. This was in reaction to realizing that successfully completing a quiz after reading an online human research protection training module, which was the nearly universal approach adopted by institutions in response to the mandate for training in human research protections starting in the early 2000s, was at best a weak measure of quality. This led us to work with colleagues at eight NIH-funded CTSA hubs, GCRCs, and the NIH Clinical Center to develop the first version of the validated RPPS to assess the perceptions of individuals who actually participated in the study [27]. In addition to overall assessments of the participants' experience and whether they would recommend participation to others, more detailed questions probed whether the informed consent process fully prepared the participant for what they experienced during the study, whether members of the investigative team built trust and mutual respect with the participant, and whether the investigative team was available when needed. The survey underwent extensive validation analysis, including deployment in a study conducted at 15 NIH-supported institutions that reported on the responses from a diverse group of almost 5,000 participants [13,14]. A number of key conclusions came from that study, including the desire of participants to receive information about the results of the study, the central role of altruism in motivating participation, and the importance participants place on feeling that they are partners in discovery. Later research was aimed at shortening the survey, based on a careful analysis of the incremental value of each question in the original version, and at comparing alternative methods of deployment to maximize the response rate [28–30].
The RPPS continues to be deployed at Rockefeller as a crucial outcome measure that now can be analyzed across multiple dimensions, including individual protocols, race, ethnicity, gender, age, study type, and study intensity or duration. This provides vital information for process improvement as the data are reviewed with the Senior Staff, individual investigators, and community and participant stakeholders. Specific interventions have been developed in response to data obtained from the RPPS, including enhancing educational tools to improve the informed consent process in complex studies, emphasizing clear communication, and requiring a prospective plan for return of results to participants. Thus, the RPPS facilitates a robust performance improvement cycle with the potential to monitor the impact of interventions by analysis of subsequent surveys. Most recently, Rockefeller has led an NCATS-funded 6-CTSA collaboration to develop a shared infrastructure to streamline adoption of the RPPS at other institutions and to build a learning collaborative, a data aggregation platform, and a dashboard that can be used for benchmarking [30].
Evaluation and Metrics
Our evaluation of the effectiveness of our program has undergone continual evolution, and that evolution continues as we introduce new programs and appreciate subtleties in the complexity of interpreting specific metrics. We are well aware of what has been termed the tyranny of metrics [31], wherein attempts to improve the metric result in gaming the system and perverse unintended consequences, now enshrined as Goodhart’s Law (as modified by Strathern): “When a measure becomes a target, it ceases to be a good measure” [32]. To address these concerns, we think the value of a metric should be judged based on the answers to the following questions:
1. To what use will the metric be put?
2. Who will use the metric and how will they use it?
3. What are the implicit assumptions underlying the metric?
4. Are those assumptions supported by evidence?
5. Are there any risks that the metric will distort decision-making in a way that would have a negative impact on optimal productivity?
6. How high is the priority of the metric proposed?
7. Is there a related metric that would have a higher priority?
8. Are there automated or minimally obtrusive methods to capture the needed data?
As a result, our primary focus is on refining outcome measures that provide the most valuable information on whether we have achieved our goals and, where we fall short, how we can best improve our processes and policies. We rely on process and surrogate measures when outcome data are not yet attainable or are not practical to obtain, and on utilization measures to assess which of our services are in demand so that we can adjust our resources accordingly. Table 3 contains a list of the metrics we use, organized by this taxonomy, and Table 1 contains a selection of these metrics and data from our experience.
Table 3.
FDA, Food and Drug Administration; ACCTS, advisory committee on clinical and translational science; IRB, institutional review board; RPPS, research participant perception survey; CCTS, center for clinical and translational science; TSEs, translational science experts/educators; GCP, good clinical practice
Outcome measures: The fundamental goal of an informative study is to improve human health; this can be measured by whether the study leads to FDA approval of a new drug, medical device, or diagnostic method, or whether it results in any measurable changes in clinical practice and public health statistics. For example, pioneering studies supported by the CTSA-funded infrastructure conducted in the Rockefeller University Hospital on the pathogenesis of psoriasis, and later on the safety and efficacy of several different novel agents to inhibit T-cell activation, IL-23, and different isoforms of IL-17 family cytokines, led to the approval of multiple drugs that have dramatically improved the therapy of psoriasis and psoriatic arthritis [33–40]. These initial studies in psoriasis also led to these new drugs being used in other autoimmune diseases, including Crohn’s disease, ulcerative colitis, rheumatoid arthritis, and Type 1 diabetes [41,42]. Impact on public health statistics can be assessed by analyzing ongoing regional or national surveys, for example, those conducted by the CDC National Center for Health Statistics, state and municipal health departments, and “big data” electronic health record repositories.
Measures of the quality of a protocol include whether it is approved by the ACCTS and IRB and the time required for approval. From 2018 to 2022, 75% of protocols submitted to the ACCTS were approved on initial review and 20% received conditional approval with stipulations; all of the latter gained approval after revision and resubmission (Table 1). Fewer than 4% of protocols submitted to the ACCTS were tabled for significant deficiencies. Investigators with tabled protocols returned to the TRN process and submitted revised protocols that were granted approval within 1–3 months.
Data on the time to IRB approval from protocol submission were collected from 2012 to 2018 as part of the CTSA Common Metrics program and demonstrated median times ranging from 10 to 26 days (average 17 days; n = 108). For comparison, the median times for all of the hubs in the CTSA consortium for the years 2015–2017 ranged from 42 to 45 days. More recently, for the years 2018–2022, the mean number of days from submission to approval for all protocols at Rockefeller (expedited and full review) was 20 days, and for those undergoing full review, 67% received approval in < 30 days, with a median of 20 days and a mean of 32 days. We now also qualitatively review data on specific ACCTS and IRB stipulations and requirements for reviewed protocols and use the data to inform educational initiatives and for incorporation into the TRN to improve the program.
To assess the quality of the informed consent process and the ability of the study team to build trust with participants, we carefully review the RPPS responses to the selected validated questions below. Also indicated below are the percentages of participants in studies conducted in 2022 selecting the indicated rating. The 142 respondents were 49% female, 21% Hispanic, 72% White, 24% Black, 4% Asian, 2% American Indian/Alaska Native, and 1% Native Hawaiian or other Pacific Islander. The response rate was 25%, which is comparable to the 26% response rate for the patient experience survey used by the Centers for Medicare and Medicaid Services [43].
Informed consent:
Did the informed consent form prepare you for what to expect during the study? 93%, “Completely”
Did the information and discussions you had before participating in the research study prepare you for your experience in the study? 92%, “Completely”
During your discussion about the study, did you feel pressure from the research staff to join the study? 91%, “Never”
Building trust:
Rate your overall experience in the research study (0–10). 85% chose grades 9 or 10
Would you recommend joining a research study to your family and friends? 76%, “Definitely yes”
Did the research team members listen carefully to you? 94% “Always”
Did the research team members treat you with courtesy and respect? 97%, “Always”
Did you feel like you were a valued partner in the research process? 92%, “Always”
To assess the representativeness of recruitment, we compare the race and ethnicity of participants in our studies and in our Research Volunteer Repository over the past five years to demographics from census data for New York City in 2022 (Table 4). The sex distribution of our participants matches the New York City data, with males constituting 46.3% of our participants compared to 48.0% citywide. Because almost 23% of our participants fall into the Unknown race category, compared to almost 15% for New York City as a whole, our race percentages must be adjusted upward to account for this difference when comparing to the New York City data. While our racial data from 2018 to 2022 generally track the New York City data, our enrollment of Black individuals is somewhat lower than both the citywide figures and the data from our own Repository. Similarly, enrollment of Latino/Hispanic individuals was proportionately lower than in the citywide data and in the Repository. Since our latest data are from 2018 to 2022, it is possible that the COVID-19 pandemic, which disproportionately impacted the Black and Latino/Hispanic populations in New York City, affected their participation in our studies. This is an area under active analysis.
Table 4.
Race/ethnicity/sex | New York City (%) | RU Repository (%) | RU enrollment, 2018–2022 (%) |
---|---|---|---|
White | 39.8 | 31.2 | 43.3 |
Black | 23.4 | 35.2 | 18.0 |
Other | 0.6 | 10.4 | 8.4 |
Asian | 14.2 | 4.6 | 8.8 |
More than one race | 7.1 | 3.7 | 5.4 |
Unknown | 14.9 | 11.7 | 22.9 |
Latino/Hispanic | 28.9 | 18.6 | 12.5 |
Male | 48.0 | 53.6 | 46.3 |
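The upward adjustment described above can be sketched numerically. The snippet below (the helper name is illustrative, not from the paper; values are the race percentages from Table 4) renormalizes the known-race categories to sum to 100% so that cohorts with different Unknown shares can be compared on a common basis:

```python
def renormalize_known(race_pct):
    """Rescale race percentages over the known categories (Unknown excluded)
    so they sum to 100, enabling comparison across cohorts whose Unknown
    shares differ."""
    known = {k: v for k, v in race_pct.items() if k != "Unknown"}
    total = sum(known.values())
    return {k: round(100 * v / total, 1) for k, v in known.items()}

# Race percentages from Table 4 (Latino/Hispanic is reported as ethnicity,
# separately from race, so it is not part of the renormalization).
nyc = {"White": 39.8, "Black": 23.4, "Other": 0.6, "Asian": 14.2,
       "More than one race": 7.1, "Unknown": 14.9}
ru_enrollment = {"White": 43.3, "Black": 18.0, "Other": 8.4, "Asian": 8.8,
                 "More than one race": 5.4, "Unknown": 22.9}

print(renormalize_known(nyc))            # NYC shares among known races
print(renormalize_known(ru_enrollment))  # RU shares among known races
```

After renormalization, the Black share of RU enrollment can be compared directly to the citywide share without distortion from the larger Unknown fraction in the RU data.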
To assess the quality of study conduct, we analyze protocol deviations, violations, stoppages, and reporting to regulatory agencies. When appropriate, full root cause analyses are performed to identify potential underlying weaknesses in policies and procedures and to institute measures to prevent their recurrence. We also review protocol variance and audit data to identify issues recurring across studies that may afford opportunities to improve education, forms, workflow, or policies for systemic impact.
Process and surrogate measures: Table 3 also contains a series of metrics that provide valuable information on translational science activities and the proper conduct of clinical studies. Data on intellectual property, publications, and presentations give some insight into translational discoveries and where they are along the path to improving the health of communities. Process and surrogate measures of the impact of the TRN program in supporting clinical research by basic scientists at Rockefeller include the number of PhD-led laboratories that have undergone TRN for new protocols, which in 2022 was 25, and the number of human participant protocols that are or have been led by PhD principal investigators, which stands at 48. Other process metrics primarily chart compliance with regulatory requirements. We are planning to track study completion more systematically.
Discussion
Ensuring that clinical studies are informative is a complex process, and each institution needs to customize its approach to align with its structure, culture, and goals. We recognize that Rockefeller University is unlike academic medical centers in many ways, especially with regard to size, the number of protocols, and the near-exclusive focus on early-stage studies, and so the infrastructure we have developed is unlikely to be scalable without considerable modification. At the same time, some of the fundamental principles of the approaches we have taken are likely generalizable, and these are reflected in the elements of our program designed to address the core requirements for informative trials listed in Table 1. Most important is institutional and granting agency recognition that a robust multidisciplinary infrastructure is crucial, especially in providing support for studies led by trainees and less experienced investigators. The cost of building, maintaining, and, when needed, expanding the infrastructure to support informative trials is considerable and has increased dramatically during the past two decades in response to expectations about training, regulatory compliance, and informativeness. Each institution requires a plan for stable funding that meets its particular needs, recognizing that relying primarily or exclusively on chargebacks to investigators will likely discourage clinical investigation, especially for basic scientists and junior investigators with limited budgets. CTSA funding was extremely important in expanding our infrastructure when the program began in 2006, but now represents a much smaller fraction of the costs.
The most important theme that unites our approach is the application of the scientific method to each element in the process. The clinical science enterprise itself has not been the subject of rigorous research, in part because grant funds from the categorical NIH Institutes have in the past focused on disease-specific research rather than the process of performing that research, now identified as a component of translational science [44]. This began to change with the realization by the NIH Institutes that many of the clinical studies they funded were not yielding informative results in an acceptable timeframe or in a form that leads to effective dissemination and implementation. The creation of the National Center for Advancing Translational Science (NCATS) provided an opportunity to address this more directly. NCATS recently began to focus its funding on translational science rather than disease-specific translational research, but without robust funding for follow-on R-type grants, there still is not a clear path for investigators to build academic careers in studies of translational science beyond what individual CTSA hubs can fund with extremely limited uncommitted resources. Partnerships with other funding agencies, including the Centers for Disease Control and Prevention (CDC), the Agency for Healthcare Research and Quality, and the Patient-Centered Outcomes Research Institute, may provide support to examine the structure, process, and outcome of research conducted with patients, as well as provide an opportunity to broaden the research workforce by engaging individuals involved in direct care delivery settings as part of translational research teams. This will enhance workforce diversity and inject real-world lived experience perspectives and insights into the design and conduct of research.
In turn, this will increase the likelihood that studies are patient-centered and designed for dissemination and that the results can be applied across diverse clinical settings [45,46].
The COVID-19 pandemic demonstrated the weaknesses in the clinical trial enterprise [8] and has stimulated both soul-searching and requests for ideas about reorganization, most recently from the White House Office of Science and Technology Policy [47,48]. The CTSA program and NCATS are ideally positioned to lead this effort, bringing rigorous scientific analysis to the clinical enterprise itself. Identifying policies and methods to ensure that clinical studies are informative is a major component of that effort and an important investment in enhancing the future of translational research.
Acknowledgments
Supported in part by grants UL1 TR001866 and U01TR003206 from the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health (NIH) Clinical and Translational Science Award (CTSA) program. We thank Suzanne Rivera for outstanding administrative assistance.
Disclosures
The authors have no disclosures to declare.
References
- 1. Hirsch L. Trial registration and results disclosure: Impact of US legislation on sponsors, investigators, and medical journal editors. Curr Med Res Opin. 2008;24(6):1683–1689. [DOI] [PubMed] [Google Scholar]
- 2. Nguyen TA, Dechartres A, Belgherbi S, Ravaud P. Public availability of results of trials assessing cancer drugs in the United States. J Clin Oncol. 2013;31(24):2998–3003. [DOI] [PubMed] [Google Scholar]
- 3. Gordon D, Taddei-Peters W, Mascette A, et al. Publication of trials funded by the national heart, lung, and blood institute. New Engl J Med. 2013;369(20):1926–1934. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4. Johnson KE, Neta G, Dember LM, et al. Use of PRECIS ratings in the national institutes of health (NIH) health care systems research collaboratory. Trials. 2016;17(1):32. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5. National Heart, Lung, and Blood Institute (NHLBI). Single-site investigator-initiated clinical trials, 2018. Accessed January 23, 2023.
- 6. Shah MR, Culp MA, Gersing KR, et al. Early vision for the CTSA program trial innovation network: A perspective from the national center for advancing translational sciences. Clin Trans Sci. 2017;10(5):311–313. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Zarin DA, Goodman SN, Kimmelman J. Harms from uninformative clinical trials. JAMA. 2019;322(9):813–814. doi: 10.1001/jama.2019.9892 [DOI] [PubMed] [Google Scholar]
- 8. Bugin K, Woodcock J. Trends in COVID-19 therapeutic clinical trials. Nat Rev Drug Discov. 2021;20(4):254–255. [DOI] [PubMed] [Google Scholar]
- 9. Selker HP, Buse JB, Califf RM, et al. CTSA consortium consensus scientific review committee (SRC) working group report on the SRC processes. Clin Trans Sci. 2015;8:623–631. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. Selker HP, Welch LC, Patchen-Fowler E, et al. Scientific review committees as part of institutional review of human participant research: Initial implementation at institutions with clinical and translational science awards. J Clin Trans Sci. 2020;4(2):115–124. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11. Schlesinger SJ, Romanick M, Tobin JN, et al. The rockefeller university clinical scholars (KL2) program 2006-2016. J Clin Trans Sci. 2017;1(5):285–291. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12. Brassil D, Kost RG, Dowd KA, et al. The rockefeller university navigation program: A structured multidisciplinary protocol development and educational program to advance translational research. Clin Trans Sci. 2014;7(1):12–19. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13. Kost RG, Lee LN, Yessis JL, et al. Research participant-centered outcomes at NIH-supported clinical research centers. Clin Trans Sci. 2014;7(6):430–440. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14. Kost RG, Lee LM, Yessis J, et al. Assessing participant-centered outcomes to improve clinical research. New Engl J Med. 2013;369(23):2179–2181. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. National Academies of Sciences, Engineering, and Medicine. Returning Individual Research Results to Participants: Guidance for a New Research Paradigm. Washington, DC: The National Academies Press; 2018. [PubMed] [Google Scholar]
- 16. U.S. Department of Health and Human Services. SACHRP recommendations. Sharing study data and results: Return of individual results, 2016. https://www.hhs.gov/ohrp/sachrp-committee/recommendations/attachment-b-return-individual-research-results/index.html. Accessed March 15, 2023.
- 17. U.S. Department of Health and Human Services. Office for Human Research Protections. Requirements (2018 Common Rule), 2018. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/revised-common-rule-regulatory-text/index.html. Accessed March 15, 2023.
- 18. Stallings SC, Cunningham-Erves J, Frazier C, et al. Development and validation of the perceptions of research trustworthiness scale to measure trust among minoritized racial and ethnic groups in biomedical research in the US. JAMA Netw Open. 2022;5(12):e2248812. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19. Canedo JR, Villalta-Gil V, Grijalva CG, et al. How do hispanics/Latinos perceive and value the return of research results? Hisp Health Care Int. 2022;20(4):238–247. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20. Griffith DM, Jaeger EC, Bergner EM, Stallings S, Wilkins CH. Determinants of trustworthiness to conduct medical research: Findings from focus groups conducted with racially and ethnically diverse adults. J Gen Intern Med. 2020;35(10):2969–2975. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21. The Rockefeller University Hospital center for clinical and translational science. R3: Enhancing scientific rigor, reproducibility, and reporting, 2020. https://www.rockefeller.edu/research/r3/. Accessed January 23, 2023.
- 22. Kost RG, Corregano LM, Rainer TL, Melendez C, Coller BS. A data-rich recruitment core to support translational clinical research. Clin Trans Sci. 2015;8(2):91–99. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23. O’Sullivan B, Coller BS. The research hospitalist: Protocol enabler and protector of participant safety. Clin Trans Sci. 2015;8(3):174–176. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Corregano L, Bastert K, Correa da Rosa J, Kost RG. Accrual index: A real-time measure of the timeliness of clinical study enrollment. Clin Trans Sci. 2015;8(6):655–661. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25. Kost RG, Leinberger-Jabari A, Evering TH, et al. Helping basic scientists engage with community partners to enrich and accelerate translational research. Acad Med. 2017;92(3):374–379. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26. Vaughan R, Romanick M, Brassil D, et al. The rockefeller team science leadership training program: Curriculum, standardized assessment of competencies, and impact of returning assessments. J Clin Trans Sci. 2021;5(1):e165. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27. Yessis JL, Kost RG, Lee LM, Coller BS, Henderson DK. Development of a research participants’ perception survey to improve clinical research. Clin Trans Sci. 2012;5(6):452–460. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28. Kost RG, de Rosa JC. Impact of survey length and compensation on validity, reliability, and sample characteristics for ultrashort-, short-, and long-research participant perception surveys. J Clin Trans Sci. 2018;2(1):31–37. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29. Kelly-Pumarol IJ, Henderson PQ, Rushing JT, et al. Delivery of the research participant perception survey through the patient portal. J Clin Trans Sci. 2018;2(3):163–168. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30. Empowering the Participant Voice. The rockefeller university hospital center for clinical and translational science, 2022. https://www.rockefeller.edu/research/epv/. Accessed January 28, 2023.
- 31. Muller JZ. The Tyranny of Metrics. Princeton, NJ: Princeton University Press; 2018. [Google Scholar]
- 32. Goodhart CA. Goodhart’s law. In: Rochon, Rossi S, eds. The Encyclopedia of Central Banking. United Kingdom: Edward Elgar Publishing; 2015:227. [Google Scholar]
- 33. Lowes MA, Kikuchi T, Fuentes-Duculan J, et al. Psoriasis vulgaris lesions contain discrete populations of Th1 and Th17 T cells. J Invest Dermatol. 2008;128(5):1207–1211. [DOI] [PubMed] [Google Scholar]
- 34. Zaba LC, Cardinale I, Gilleaudeau P, et al. Amelioration of epidermal hyperplasia by TNF inhibition is associated with reduced Th17 responses. J Exp Med. 2007;204(13):3183–3194. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35. Krueger JG, Fretzin S, Suárez-Fariñas M, et al. IL-17A is essential for cell activation and inflammatory gene circuits in subjects with psoriasis. J Allergy Clin Immun. 2012;130(1):145–154.e149. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. Lowes MA, Suárez-Fariñas M, Krueger JG. Immunology of psoriasis. Annu Rev Immunol. 2014;32(1):227–255. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37. Sofen H, Smith S, Matheson RT, et al. Guselkumab (an IL-23-specific mAb) demonstrates clinical and molecular response in patients with moderate-to-severe psoriasis. J Allergy Clin Immun. 2014;133(4):1032–1040. [DOI] [PubMed] [Google Scholar]
- 38. Armstrong AW, Read C. Pathophysiology, clinical presentation, and treatment of psoriasis: A review. J Amer Med Assoc. 2020;323(19):1945–1960. [DOI] [PubMed] [Google Scholar]
- 39. Abrams JR, Kelley SL, Hayes E, et al. Blockade of T lymphocyte costimulation with cytotoxic T lymphocyte-associated antigen 4-immunoglobulin (CTLA4Ig) reverses the cellular pathology of psoriatic plaques, including the activation of keratinocytes, dendritic cells, and endothelial cells. J Exp Med. 2000;192(5):681–694. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40. Krueger J, Clark JD, Suarez-Farinas M, et al. Tofacitinib attenuates pathologic immune pathways in patients with psoriasis: A randomized phase 2 study. J Allergy Clin Immun. 2016;137(4):1079–1090. [DOI] [PubMed] [Google Scholar]
- 41. Chamian F, Lowes MA, Lin SL, et al. Alefacept reduces infiltrating T cells, activated dendritic cells, and inflammatory genes in psoriasis vulgaris. Proc Natl Acad Sci U S A. 2005:2075–2080. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42. Rigby MR, Harris KM, Pinckney A, et al. Alefacept provides sustained clinical and immunological effects in new-onset type 1 diabetes patients. J Clin Invest. 2015;125(8):3285–3296. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43. Centers for Medicare & Medicaid Services. Introduction to HCAHPS survey training, 2022. https://hcahpsonline.org/globalassets/hcahps/training-materials/2022_training-materials_slides_introduction.pdf. Accessed March 15, 2023.
- 44. Austin CP. Opportunities and challenges in translational science. Clin Trans Sci. 2021;14(5):1629–1647. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45. Kwan BM, Brownson RC, Glasgow RE, Morrato EH, Luke DA. Designing for dissemination and sustainability to promote equitable impacts on health. Annu Rev Public Health. 2022;43(1):331–353. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46. Ahmed S, Zidarov D, Eilayyan O, Visca R. Prospective application of implementation science theories and frameworks to inform use of PROMs in routine clinical care within an integrated pain network. Qual Life Res. 2021;30(11):3035–3047. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47. Office of Science and Technology Policy. Request for information: Clinical research infrastructure and emergency clinical trials. Federal Register. 2022;87(206). Accessed January 23, 2023.
- 48. Office of Science and Technology Policy. Request for information on data collection for emergency clinical trials and interoperability pilot. Federal Register. 2022;87(208). Accessed January 23, 2023.
- 49. Yordanov Y, Dechartres A, Porcher R, et al. Avoidable waste of research related to inadequate methods in clinical trials. BMJ-Brit Med J. 2015;350:h809. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50. Altman DG. The scandal of poor medical research. Brit Med J. 1994;308(6924):283–284. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51. Freedman B. Scientific value and validity as ethical requirements for research: A proposed explication. IRB-Ethics & Hum Res. 1987;9:7–10. [PubMed] [Google Scholar]
- 52. Tatsioni A, Karassa FB, Goodman SN, et al. Lost evidence from registered large long-unpublished randomized controlled trials: A survey. Ann Intern Med. 2019;171(4):300–301. doi: 10.7326/M19-0440 [DOI] [PubMed] [Google Scholar]
- 53. Wieschowski S, Chin WWL, Federico C, et al. Preclinical efficacy studies in investigator brochures: Do they enable risk-benefit assessment? PLoS Biol. 2018;16(4):e2004879. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54. Anderson AJ, Piltti KM, Hooshmand MJ, Nishi RA, Cummings BJ. Preclinical efficacy failure of human CNS-derived stem cells for use in the pathway study of cervical spinal cord injury. Stem Cell Rep. 2017;8(2):249–263. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55. Report by the Temporary Specialist Scientific Committee (TSSC), “FAAH (Fatty Acid Amide Hydrolase),” on the causes of the accident during a Phase 1 clinical trial, 2016. https://archiveansm.integra.fr/var/ansm_site/storage/original/application/744c7c6daf96b141bc9509e2f85c227e.pdf. Accessed January 28, 2023.
- 56. Habre C, Tramer MR, Popping DM, Elia N. Ability of a meta-analysis to prevent redundant research: Systematic review of studies on pain from propofol injection. Brit Med J. 2014;348:g5219. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57. Fergusson D, Glass KC, Hutton B, Shapiro S. Randomized controlled trials of aprotinin in cardiac surgery: Could clinical equipoise have stopped the bleeding? Clin Trials. 2005;2(3):218–229.discussion 229-232. [DOI] [PubMed] [Google Scholar]
- 58. Federico CA, Wang T, Doussau A, et al. Assessment of pregabalin postapproval trials and the suggestion of efficacy for new indications: A systematic review. JAMA Intern Med. 2019;179(1):90–97. [DOI] [PubMed] [Google Scholar]
- 59. Gan HK, You B, Pond GR, Chen EX. Assumptions of expected benefits in randomized phase III trials evaluating systemic treatments for cancer. J Natl Cancer I. 2012;104(8):590–598. [DOI] [PubMed] [Google Scholar]
- 60. Keen HI, Pile K, Hill CL. The prevalence of underpowered randomized clinical trials in rheumatology. J Rheumatol. 2005;32(11):2083–2088. [PubMed] [Google Scholar]
- 61. Viteri OA, Sibai BM. Challenges and limitations of clinical trials on labor induction: A review of the literature. Am J Perinatol Reports. 2018;8(04):e365–e378. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 62. Khan Z, Milko J, Iqbal M, Masri M, Almeida DRP. Low power and type II errors in recent ophthalmology research. Can J Ophthalmology. 2016;51:368–372. [DOI] [PubMed] [Google Scholar]
- 63. Kohler O, Benros ME, Nordentoft M, et al. Effect of anti-inflammatory treatment on depression, depressive symptoms, and adverse effects: A systematic review and meta-analysis of randomized clinical trials. Jama Psychiat. 2014;71:1381–1391. [DOI] [PubMed] [Google Scholar]
- 64. Carlisle B, Kimmelman J, Ramsay T, MacKinnon N. Unsuccessful trial accrual and human subjects protections: An empirical analysis of recently closed trials. Clin Trials. 2015;12(1):77–83. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65. Stensland KD, McBride RB, Latif A, et al. Adult cancer clinical trials that fail to complete: An epidemic? J Natl Cancer I. 2014;106(9):1–6. doi: 10.1093/jnci/dju229 [DOI] [PubMed] [Google Scholar]
- 66. Williams RJ, Tse T, DiPiazza K, Zarin DA. Terminated trials in the ClinicalTrials.gov results database: Evaluation of availability of primary outcome data and reasons for termination. PLoS One. 2015;10(5):e0127242. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 67. Kost RG, Mervin-Blake S, Hallarn R, et al. Accrual and recruitment practices at clinical and translational science award (CTSA) institutions: A call for expectations, expertise, and evaluation. Acad Med. 2014;89(8):1180–1189. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68. Dwan K, Altman DG, Clarke M, et al. Evidence for the selective reporting of analyses and discrepancies in clinical trials: A systematic review of cohort studies of clinical trials. Plos Med. 2014;11(6):e1001666. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 69. Zarin DA, Tse T, Williams RJ, Rajakannan T. Update on trial registration 11 Years after the ICMJE policy was established. New Engl J Med. 2017;376(4):383–391. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 70. Whitlock EP, Dunham KM, DiGioia K, et al. Noncommercial US funders’ policies on trial registration, access to summary results, and individual patient data availability. JAMA Netw Open. 2019;2(1):e187498. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 71. Stretton S, Lew RA, Ely JA, et al. Sponsor-imposed publication restrictions disclosed on ClinicalTrials.gov. Account Res J. 2016;23:67–78. [DOI] [PubMed] [Google Scholar]