Abstract
In the decade following the journal editors’ trial registration policy, a global trial reporting system (TRS) has arisen to supplement journal publication by increasing the transparency and accountability of the clinical research enterprise (CRE), ultimately advancing evidence-based medicine. Trial registration is a foundational component of the TRS. In this article, we assess the impact of trial registration on the CRE with respect to two key goals: (1) establishing a publicly accessible, structured record of all trials and (2) ensuring access to date-stamped protocol details that change during a study. After characterizing the international trial registry landscape, we summarize the published evidence to date of the impact of registration laws and policies on the CRE. We present three analyses using ClinicalTrials.gov registration data to illustrate approaches for assessing and monitoring the TRS: (1) the timing of registration (i.e., prior to trial initiation [prospective] or after trial initiation [retrospective or “late”]); (2) the degree of specificity and consistency of registered primary outcome measures compared with descriptions in study protocols and published articles; and (3) a survey of the published literature characterizing how ClinicalTrials.gov data have been used in research on the CRE. These findings suggest that, while the TRS is largely moving toward these goals, key stakeholders need to do more in the next decade.
Introduction
Laws and policies within the U.S. and abroad have greatly increased the transparency and accountability of the clinical research enterprise (CRE). This has been accomplished by the development of a global “trial reporting system” or TRS. The three TRS components are trial registration, aggregate results reporting, and access to individual participant data (IPD). Of these, trial registration is foundational to understanding and interpreting trial results by providing information about all relevant clinical trials (to put results into broader context) and their prespecified protocol details (to ensure adherence to the scientific plan).
In this article, we describe the current trial registration landscape and evidence of its impact to date. We then present analyses using ClinicalTrials.gov data to provide additional evidence regarding the degree to which current practices are fulfilling certain key goals initially envisioned for trial registration. Finally, we identify challenges and suggest potential responses for the next decade.
Key Registration Goals of the TRS
Trial registration involves the submission of descriptive information about a clinical trial to a publicly accessible web-based registry. The following goals underlie key registration requirements:
Establish a publicly accessible and searchable database for disseminating a minimum set of structured information about all ongoing and completed trials. Trial registries are designed to document publicly all human experiments, facilitate identification of trials for potential participants, and permit the incorporation of the clinical research findings into the medical evidence base.
Provide access to date-stamped protocol details throughout the study lifecycle. Access to structured, archival information allows the public to track the progress of individual studies and assess whether reported results are consistent with the prespecified protocol or statistical analysis plan.
Evolution of the Global TRS
Following announcement of the International Committee of Medical Journal Editors (ICMJE) trial registration policy in September 2004, a series of related laws and policies were implemented within the United States1 and internationally,2 increasing the scope and content of mandatory prospective registration. The World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) established the minimum Trial Registration Data Set3 and continues to coordinate a global network of trial registries (Table 1). To address well-documented results disclosure biases in the published literature,4–6 some organizations subsequently enacted laws and policies requiring the systematic reporting of aggregate results to publicly accessible results databases. In the U.S., the Food and Drug Administration Amendments Act of 2007 (FDAAA) established a legal mandate requiring those responsible for initiating clinical trials to register and report summary results for certain trials of drug, biological, and device products.7,8 In response, the National Institutes of Health (NIH) launched the ClinicalTrials.gov structured results database in September 2008.1 In September 2016, the Department of Health and Human Services (HHS) promulgated regulations to implement, clarify, and expand the legal requirements under FDAAA for trial registration and results information submission.9 Simultaneously, NIH issued a final policy requiring registration and results reporting for all clinical trials funded by NIH, whether or not a trial falls under the legal requirements of FDAAA.10
Table 1.
Trial Registry | Total Number of Studies (% WHO-Identified Overlap with ClinicalTrials.gov*) | Year Launched
---|---|---
Australian New Zealand Clinical Trials Registry (ANZCTR) | 11,703 (1.9%) | 2005 |
Brazilian Clinical Trials Registry (ReBec) | 746 (2.7%) | 2010 |
Chinese Clinical Trials Registry (ChiCTR) | 7,927 (0.3%) | 2007 |
Clinical Research Information Service, Republic of Korea (CRiS) | 1,771 (11.6%) | 2010 |
ClinicalTrials.gov** | 208,822 (100%) | 2000 |
Clinical Trials Registry – India (CTRI) | 6,562 (14.0%) | 2007 |
Cuban Public Registry of Clinical Trials (RPCEC) | 207 (0%) | 2007 |
European Union Clinical Trials Register (EU-CTR)** | 27,380 (33.2%) | 2004 |
German Clinical Trials Register (DRKS) | 4,293 (29.1%) | 2008 |
Iranian Registry of Clinical Trials (IRCT) | 9,770 (0.5%) | 2008 |
ISRCTN registry | 14,364 (6.3%) | 2000 |
Japan Primary Registries Network (JPRN) | 22,652 (4.1%) | 2008 |
Thai Clinical Trials Registry (TCTR) | 598 (1.0%) | 2009 |
The Netherlands National Trial Register (NTR) | 5,422 (1.3%) | 2004 |
Pan African Clinical Trial Registry (PACTR) | 614 (3.4%) | 2009 |
Sri Lanka Clinical Trials Registry (SLCTR) | 187 (0.5%) | 2006 |
WHO ICTRP Search Portal – total number of study records from all WHO ICTRP registries | 323,018 (64.4%) | 2007 |
* Identified by WHO ICTRP using matched Secondary Identifying Numbers listed on study records
** Includes a results database
As of October 2016, ClinicalTrials.gov contained information on over 227,000 studies, of which nearly 23,000 have posted results entries; we estimate that only half of these have results published in the literature.11 ClinicalTrials.gov receives approximately 580 new study registrations and 100 new results submissions per week, and about 170 million page views and 1.1 million unique visitors per month. The remainder of this paper analyzes data from ClinicalTrials.gov, which accounts for two-thirds of total global registrations.
Assessing ClinicalTrials.gov and the Evolving TRS
Table 2 identifies specific evaluation criteria for each of the two foundational goals of registration. For example, the degree to which minimum trial information is publicly accessible can be assessed in many ways, including scope and coverage of registries and/or registration policies, completeness or timeliness of the registry data, accuracy of submitted information, and utility of available data to the broader community. While published evidence supports several criteria, many evidentiary gaps currently exist. To investigate some of these gaps, we collected and analyzed recent ClinicalTrials.gov data. Our efforts focused on the degree to which posted trial information (1) is registered prior to trial initiation, (2) reports outcome measures with a sufficient degree of specificity, and (3) has been used to characterize the CRE through journal publications.
Table 2.

Key Goal 1. Establish a publicly accessible and searchable database for disseminating a minimum set of structured information about all ongoing and completed trials

Evaluation Criteria & Selected Evidence:
- a. Scope and coverage of registered trials
- b. Completeness and timeliness of registered information
- c. Utility of registered information

Key Goal 2. Provide access to date-stamped protocol details throughout the study lifecycle

Evaluation Criteria & Selected Evidence:
- a. Detection of incompleteness/inadequacies in registered information
- b. Specification sufficiency of registered outcome measures (OMs)
- c. Detection of infidelity/inconsistency between registration information and other sources
The Timing of Trial Registration
Public trial registration at study initiation ensures timely access to information about all ongoing trials, while avoiding the risk of selective reporting (Key Goal 1), and documents information about the initial protocol, such as prespecified outcome measures (Key Goal 2). Comprehensive prospective registration is necessary to ensure that registered trials and, ultimately, published trial results are not substantially biased by cherry picking. Although no direct mechanism exists for systematically identifying unregistered trials, late registrations are a marker that stakeholders are enabling trials to proceed without prospective registration.19,24
Our goal was to identify trials that were “registered late.” On March 18, 2015, we downloaded ClinicalTrials.gov records for interventional studies (clinical trials) first received during the three-year period from 2012 to 2014. After excluding records with missing study start dates, we sorted the remaining records into two categories: (1) trials received before or within three months of the study start date and (2) trials received three or more months after the study start date (“registered late”). We also subcategorized records by funder type and by number of months late. We chose registration within three months of the study start date as a conservative definition of “on time”: the ICMJE policy requires registration prior to enrollment of the first participant (i.e., before the study start date), FDAAA requires registration within 21 days of enrollment of the first participant, and ClinicalTrials.gov collects the study start date in Month-Year format. Of the 49,856 trials first received between 2012 and 2014, we excluded 105 that were missing a study start date.
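The month-granularity sorting just described can be sketched in a few lines of Python. The function names and category labels are our own illustration, not code from the study; real records would require parsing the Month-Year fields from the downloaded data.

```python
from datetime import date

def months_between(start: date, received: date) -> int:
    # Whole months from study start to first receipt of the registration
    # record; negative values mean the record was received before the start.
    return (received.year - start.year) * 12 + (received.month - start.month)

def registration_category(start: date, received: date) -> str:
    # Apply the three-month threshold described above: records received
    # before or within three months of the study start date count as
    # "on time"; later records are "late", with >12 months broken out.
    delta = months_between(start, received)
    if delta < 3:
        return "on time"
    if delta <= 12:
        return "late"
    return "late (>12 months)"
```

Because ClinicalTrials.gov collects start dates only in Month-Year format, the comparison is necessarily at month granularity, which is one reason a three-month threshold is conservative.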
Of the 49,751 analyzed trials, 32.8% (16,342) were “registered late,” with similar rates across years but some variation across funder types: 23.5% (3,819/16,264) of industry-funded trials, 24.9% (775/3,111) of NIH-funded trials, and 38.7% (11,748/30,376) of trials funded by academic, non-profit, or other government organizations. Among all trials “registered late,” 57.0% (9,321/16,342) were submitted to ClinicalTrials.gov more than 12 months after the study start date, with similar percentages across years and funder types.
Specificity and Consistency of Primary Outcome Measure Reporting Across Sources
The ICMJE policy34 requires the registration of prespecified primary and secondary outcome measures (Key Goal 2). To assess whether current registration practices provide sufficient specificity to permit assessment of the fidelity of published reports to the protocol, we assessed the level of specificity in registered primary outcome measures (POMs) using the framework we described previously.11 We also assessed consistency across corresponding protocols, registration records, and published results using the same data set.
We identified 40 articles in each of the New England Journal of Medicine (NEJM) (extracted on September 16, 2015) and the Journal of the American Medical Association (JAMA) (extracted on August 5, 2016) that reported the results of non-phase 1 clinical trials, had full protocols available online, and cited at least one ClinicalTrials.gov Identifier (NCT Number) in the abstract. Descriptions of the primary outcome measures (POMs) were extracted from the final version of the full protocol; the version of the ClinicalTrials.gov record displayed at the time of journal publication; and the Methods section of the published article. We note that such information could have been modified following initial submission of the record (e.g., based on ClinicalTrials.gov quality control review) or manuscript (e.g., based on feedback during peer review).
From our sample of 80 publications, we identified 83 trials (some articles reported results on multiple trials) and 101 registered POMs (some trials listed multiple POMs) with the following levels of specification for each POM:
0% domain only (e.g., “anxiety”),
11.9% specific measurement (e.g., “Hamilton Anxiety Rating Scale”),
42.6% specific metric (e.g., change from baseline),
45.6% method of aggregation (e.g., “mean change from baseline”), and
94.1% included a specific timeframe (e.g., “52 weeks”).
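As an illustration only, this specification framework can be encoded as a small structured record per POM, with a tally mirroring the percentages above. The field names follow the levels listed above, but the encoding itself is hypothetical and is not the instrument used in the study.

```python
from dataclasses import dataclass

@dataclass
class POMSpecification:
    # One flag per specification element in the framework above.
    domain: bool = False                 # e.g., "anxiety"
    specific_measurement: bool = False   # e.g., "Hamilton Anxiety Rating Scale"
    specific_metric: bool = False        # e.g., change from baseline
    method_of_aggregation: bool = False  # e.g., "mean change from baseline"
    time_frame: bool = False             # e.g., "52 weeks"

FIELDS = ["domain", "specific_measurement", "specific_metric",
          "method_of_aggregation", "time_frame"]

def tally(poms):
    # Percentage of POMs exhibiting each element, rounded to one decimal,
    # as in the results reported above.
    n = len(poms)
    return {f: round(100 * sum(getattr(p, f) for p in poms) / n, 1)
            for f in FIELDS}
```

Recording each element as a discrete flag, rather than free text, is what makes this kind of cross-trial comparison tractable.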
We identified only two published POMs with apparent inconsistencies among the three sources (Table S1). One article pooled data from two studies registered with different POMs: the POM registered for one study (NCT01605136) was reported for the pooled analysis, while the POM registered for the other (NCT00979745) was reported as a secondary outcome measure.35 The second discrepancy was an article that reported results for a POM that differed from the registered POM (NCT01680744) in both the described measure and the analysis population (i.e., an outcome pertaining to kidney donors versus recipients of kidney transplants).36 The remaining 99 POMs appeared consistent in their description of the specific measurement across sources, although differences in the level of detail of definitions and/or criteria made it difficult in some cases to confirm that measures were truly identical. For example, the meaning of “progression-free survival (PFS)” depends critically on the criteria used to determine “progression,” and consistency cannot be assessed if only one source provides those criteria. We also noted poor or inconsistent reporting of time frames, especially for time-to-event measures (Table S2).
Published Research Using ClinicalTrials.gov Data
Many researchers have used data from ClinicalTrials.gov to examine various aspects of the CRE. To understand the nature of such uses of the TRS more precisely and to evaluate the degree to which ClinicalTrials.gov data are meeting the needs of “meta-researchers” (i.e., researchers studying the CRE), we conducted a preliminary evaluation of the published literature.
On August 7, 2015, we searched MEDLINE via PubMed to identify publications that conducted original research using data retrieved from the ClinicalTrials.gov registry and/or results database. The PubMed search string excluded MEDLINE records listing a specific ClinicalTrials.gov Identifier in the Secondary Source ID field (e.g., publications reporting the findings of a particular trial) and limited retrieval to abstracts published in English. The authors manually reviewed eligible publications for year of publication and source of ClinicalTrials.gov data (i.e., registry only, results database only, or both).
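The exact search string is not reproduced here; the sketch below shows how such a query might be composed using standard PubMed field tags ([si] for Secondary Source ID, [la] for language). It is an approximation for illustration, not the string used in the study.

```python
def build_pubmed_query() -> str:
    # Approximate reconstruction of the search described above: find
    # articles mentioning ClinicalTrials.gov while excluding records that
    # merely list an NCT number in the Secondary Source ID field ([si]),
    # limited to English-language ([la]) records with abstracts.
    parts = [
        '"clinicaltrials.gov"[All Fields]',
        "NOT clinicaltrials.gov[si]",
        "AND english[la]",
        "AND hasabstract",
    ]
    return " ".join(parts)
```

A query of this shape distinguishes publications *about* the registry's data from publications that simply report results of a registered trial, which is the key filtering step in this analysis.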
Based on this search, we retrieved 339 research articles and 1,218 systematic reviews published between 2010 and 2015 that used data from the ClinicalTrials.gov registry, results database, or both. The number of research articles increased from 24 in 2010 to 94 in 2014. We reviewed and categorized each research article into six broad areas (Table 3).
Table 3.

Research Area and Examples by Article Title | Number (and Percentage) of Research Articles (N = 339)
---|---
1. Characterization of Clinical Research on Specific Conditions | 151 (45%)
2. Research on Ethics, Adverse Event Reporting, Data Mining and Other Topics | 44 (13%)
3. Quality of Registered Data and Consistency with Registration and Results Reporting Policies | 43 (13%)
4. Characterization of the Overall Clinical Research Landscape | 41 (12%)
5. Evaluating Publication Bias/Selective Reporting | 34 (10%)
6. Assessing Specific Research-related Methods and Issues | 26 (8%)
Discussion
The ICMJE registration policy instigated a cascade of events that have greatly expanded and transformed the TRS.50 Before 2004, most investigators did not register their trials, and no public summary results database existed. At that time, readers and editors had no way of knowing whether unpublished results existed for similar trials, or whether manuscripts reporting trial results accurately reflected trial protocols. Following implementation of the ICMJE policy, trial registration (whether prospective or retrospective) has become routine, and acceptance of the need for structured summary results reporting is growing, with most industry sponsors and some academic institutions developing infrastructure to help their investigators report summary results.51 Analysis of ClinicalTrials.gov data has informed policy and research discussions and fueled, in part, the ongoing call for sharing IPD and associated trial documents. However, gaps in the TRS and its associated policies (e.g., the lack of legal reporting requirements for phase 1 trials), together with evidence of suboptimal compliance and underutilization of available tools, suggest room for improvement.1 The recent issuance of the FDAAA final rule and the NIH trial reporting policy will fill some of those gaps and create a framework for monitoring compliance, though considerable work remains.
For example, some funders, sponsors, IRBs, and journals continue to allow non- or late-registered trials to be conducted, and potentially published. This practice undermines Key Goal 1 by interfering with the processes designed to ensure that registries contain a list of all initiated trials; if trials can be registered late, then some trials can proceed without ever being registered at all. We found that about a third of trials across sponsor classes were registered three months or more after the study start date, a large proportion of them 12 months or more after. We recognize that some late registrations reflect changes in organizational disclosure policies (e.g., Boehringer Ingelheim registered 361 studies in 2014 alone, some dating back to 1990),52 but this positive development does not explain the overall number of late registrations across all funder classes.
The ability to use registries to detect fidelity to the protocol through date-stamped records has improved vastly since 2004. Requiring researchers to declare prespecified outcome measures and other study design elements as discrete, structured data elements enables the tracking of each element (e.g., POMs) and facilitates comparison across trials.
Motivated editors and reviewers can compare publications with trial registry entries, a process mirrored by our consistency analysis comparing the POMs reported in publications and protocols with those in registration records. We note that the recently launched COMPare Project provides an ongoing platform for similar assessments.53 While the analyzed POMs were quite consistent across sources, we observed variation in levels of specification and in the amount of detail provided about the criteria or definitions associated with a measure. Others have noted the potential impact of differing definitional details (e.g., variations in the operational meaning of “disease-free survival” across breast cancer trials).54 It is difficult to determine which discrepancies reflect benign variations in level of detail (e.g., stating “respiratory infection” as shorthand for “severe lower respiratory infection”) and which mask potential cherry picking (e.g., post hoc selection of particular subgroups of participants). Similarly, a lack of specificity in the listed OM or its time frame leaves room for unacknowledged post hoc analytic decisions. Reporting of time frames for “time to event” OMs appears especially problematic in all three sources, perhaps reflecting an underappreciation of their statistical importance.55 (Table S2) Additionally, the lack of standards for structured protocols, which allows internal inconsistencies and uncertainty about key study design features, reinforces the importance of requiring and enforcing registry entries that reflect the prespecified scientific plan accurately and unambiguously. In our experience, non-scientific personnel assigned to register trial information may have trouble identifying the relevant information in unstructured protocols, which may explain some poor registry entries.
Finally, our results reflect the protocol and registration record as of the time of publication; discrepancies could well be greater in the originally prespecified versions of these documents. We anticipate that the systematic posting of full protocols and statistical analysis plans (SAPs), now required at ClinicalTrials.gov under the FDAAA final rule and the NIH policy, will allow the research community to discuss and eventually develop consistent standards for the specificity and structure needed to ensure valid interpretation of reported results. Efforts to standardize protocols are already underway.56
ClinicalTrials.gov has become a critical resource for characterizing and evaluating the CRE. Nevertheless, many opportunities remain for analyzing the data more systematically to inform key decisions by investigators, funders, IRBs, and others. The next phase of the evolution of the TRS requires concerted effort from all stakeholder groups in the clinical trial ecosystem (Table 4). Full implementation of FDAAA and the NIH trial reporting policy is expected to enhance the scope and completeness of trial reporting. However, there will always be a gap between meeting the “letter” and the “spirit” of the law. For example, investigators can meet the reporting requirements while providing minimally informative data; editors, funders, and others can go through the motions of determining that a trial was registered without actually using the information to assess the quality of the published reports or to inform their understanding of the results. Ultimately, significant improvements in trial reporting will require changes in the values, incentives, and scientific norms of the institutions that conduct clinical trials and the entities that use trial results to inform medical and policy decisions. Continued attention to trial registration and summary results reporting is critical, particularly as the community considers other endeavors, such as sharing of IPD.50
Table 4.

Stakeholder Group | Sample Actions for Improving the TRS
---|---
Funders |
IRBs |
Academic medical centers (AMCs) |
Trialists | Before starting a trial— Once trial is designed and funded— Once trial is completed—
Journal editors and peer reviewers |
Meta-research researchers |
ClinicalTrials.gov (and other trial registries and results databases) | To improve support for data submitters— To improve support for data users—
Supplementary Material
Acknowledgments
Funding: Supported by the Intramural Research Program of the National Library of Medicine, National Institutes of Health
We thank Drs. Kevin M. Fain and Heather D. Dobbins for assistance with data analysis.
References
- 1.Weber WE, Merino JG, Loder E. Trial registration 10 years on. BMJ. 2015;351:h3572. doi: 10.1136/bmj.h3572. [DOI] [PubMed] [Google Scholar]
- 2.Gulmezoglu AM, Pang T, Horton R, Dickersin K. WHO facilitates international collaboration in setting standards for clinical trial registration. Lancet. 2005;365:1829–31. doi: 10.1016/S0140-6736(05)66589-0. [DOI] [PubMed] [Google Scholar]
- 3.Sim I, Chan AW, Gulmezoglu AM, Evans T, Pang T. Clinical trial registration: transparency is the watchword. Lancet. 2006;367:1631–3. doi: 10.1016/S0140-6736(06)68708-4. [DOI] [PubMed] [Google Scholar]
- 4.Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet. 1991;337:867–72. doi: 10.1016/0140-6736(91)90201-y. [DOI] [PubMed] [Google Scholar]
- 5.Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. The New England journal of medicine. 2008;358:252–60. doi: 10.1056/NEJMsa065779. [DOI] [PubMed] [Google Scholar]
- 6.Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H., Jr Publication bias and clinical trials. Controlled clinical trials. 1987;8:343–53. doi: 10.1016/0197-2456(87)90155-3. [DOI] [PubMed] [Google Scholar]
- 7.Food and Drug Administration Amendments Act of 2007. Public Law 110–85. 2007 Sep 27; ( http://www.gpo.gov/fdsys/pkg/PLAW-110publ85/pdf/PLAW-110publ85.pdf)
- 8.Final rule — clinical trials registration and results information submission. Fed Regist. 2016 Sep 21;81:64981–65157. ( https://www.federalregister.gov/documents/2016/09/21/2016-22129/clinical-trials-registration-and-results-information-submission) [PubMed] [Google Scholar]
- 9.Zarin DA, Tse T, Williams RJ, Carr S. Trial Reporting in ClinicalTrials.gov - The Final Rule. The New England journal of medicine. 2016 doi: 10.1056/NEJMsr1611785. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Hudson KL, Lauer MS, Collins FS. Toward a New Era of Trust and Transparency in Clinical Trials. JAMA. 2016;316:1353–4. doi: 10.1001/jama.2016.14668. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database--update and key issues. The New England journal of medicine. 2011;364:852–60. doi: 10.1056/NEJMsa1012065. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.van Valkenhoef G, Loane RF, Zarin DA. Previously unidentified duplicate registrations of clinical trials: an exploratory analysis of registry data worldwide. Systematic reviews. 2016;5:116. doi: 10.1186/s13643-016-0283-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Huic M, Marusic M, Marusic A. Completeness and changes in registered data and reporting bias of randomized controlled trials in ICMJE journals after trial registration policy. PloS one. 2011;6:e25258. doi: 10.1371/journal.pone.0025258. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Drazen JM, Zarin DA. Salvation by registration. The New England journal of medicine. 2007;356:184–5. doi: 10.1056/NEJMe068291. [DOI] [PubMed] [Google Scholar]
- 15.Durivage H. Slide 9. "ICMJE Consequences of Not Registering Trials". New Haven, CT: Yale School of Medicine; Mar 13, 2012. Clinical Trials Disclosure Presentation. ( http://ycci.yale.edu/Durivage-Godlew-ctgov-2012-03-13_119069_284_5.pdf) [Google Scholar]
- 16.Hooft L, Korevaar DA, Molenaar N, Bossuyt PM, Scholten RJ. Endorsement of ICMJE's Clinical Trial Registration Policy: a survey among journal editors. The Netherlands journal of medicine. 2014;72:349–55. [PubMed] [Google Scholar]
- 17.Wager E, Williams P; Project Overcome failure to Publish nEgative fiNdings C. "Hardly worth the effort"? Medical journals' policies and their editors' and publishers' views on trial registration and publication bias: quantitative and qualitative study. BMJ. 2013;347:f5248. doi: 10.1136/bmj.f5248. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Harriman SL, Patel J. When are clinical trials registered? An analysis of prospective versus retrospective registration. Trials. 2016;17:187. doi: 10.1186/s13063-016-1310-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Scott A, Rucklidge JJ, Mulder RT. Is Mandatory Prospective Trial Registration Working to Prevent Publication of Unregistered Trials and Selective Outcome Reporting? An Observational Study of Five Psychiatry Journals That Mandate Prospective Clinical Trial Registration. PloS one. 2015;10:e0133718. doi: 10.1371/journal.pone.0133718. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Centers for Medicare and Medicaid Services. Guidance for the Public, Industry, and CMS Staff: Coverage with Evidence Development. 2014 Nov 20; ( https://www.cms.gov/medicare-coverage-database/details/medicare-coverage-document-details.aspx?MCDId=27)
- 21.Department of Veterans Affairs. ORD Sponsored Clinical Trials: Registration and Submission of Summary Results. 2015 ( http://www.research.va.gov/resources/ORD_Admin/clinical_trials/)
- 22.Patient-Centered Outcomes Research Institute. PCORI's Process for Peer Review of Primary Research and Public Release of Research Findings. 2015 Feb 24; ( http://www.pcori.org/sites/default/files/PCORI-Peer-Review-and-Release-of-Findings-Process.pdf)
- 23.Killeen S, Sourallous P, Hunter IA, Hartley JE, Grady HL. Registration rates, adequacy of registration, and a comparison of registered and published primary outcomes in randomized controlled trials published in surgery journals. Annals of surgery. 2014;259:193–6. doi: 10.1097/SLA.0b013e318299d00b. [DOI] [PubMed] [Google Scholar]
- 24.Viergever RF, Karam G, Reis A, Ghersi D. The quality of registration of clinical trials: still a problem. PloS one. 2014;9:e84727. doi: 10.1371/journal.pone.0084727. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Zarin DA, Tse T. Trust but verify: trial registration and determining fidelity to the protocol. Annals of internal medicine. 2013;159:65–7. doi: 10.7326/0003-4819-159-1-201307020-00011. [DOI] [PubMed] [Google Scholar]
- 26.Pranic S, Marusic A. Changes to registration elements and results in a cohort of Clinicaltrials.gov trials were not reflected in published articles. Journal of clinical epidemiology. 2016;70:26–37. doi: 10.1016/j.jclinepi.2015.07.007. [DOI] [PubMed] [Google Scholar]
- 27.van Lent M, IntHout J, Out HJ. Differences between information in registries and articles did not influence publication acceptance. Journal of clinical epidemiology. 2015;68:1059–67. doi: 10.1016/j.jclinepi.2014.11.019. [DOI] [PubMed] [Google Scholar]
- 28.Weston J, Dwan K, Altman D, et al. Feasibility study to examine discrepancy rates in prespecified and reported outcomes in articles submitted to The BMJ. BMJ open. 2016;6:e010075. doi: 10.1136/bmjopen-2015-010075. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR. Comparison of protocols and registry entries to published reports for randomised controlled trials. The Cochrane database of systematic reviews. 2011:MR000031. doi: 10.1002/14651858.MR000031.pub2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Zhang S, Liang F, Li W, Tannock I. Comparison of Eligibility Criteria Between Protocols, Registries, and Publications of Cancer Clinical Trials. Journal of the National Cancer Institute. 2016;108 doi: 10.1093/jnci/djw129. [DOI] [PubMed] [Google Scholar]
- 31.Fleming PS, Koletsi D, Dwan K, Pandis N. Outcome discrepancies and selective reporting: impacting the leading journals? PloS one. 2015;10:e0127495. doi: 10.1371/journal.pone.0127495. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Jones CW, Keil LG, Holland WC, Caughey MC, Platts-Mills TF. Comparison of registered and published outcomes in randomized controlled trials: a systematic review. BMC medicine. 2015;13:282. doi: 10.1186/s12916-015-0520-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302:977–84. doi: 10.1001/jama.2009.1242. [DOI] [PubMed] [Google Scholar]
- 34.De Angelis C, Drazen JM, Frizelle FA, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. The New England journal of medicine. 2004;351:1250–1. doi: 10.1056/NEJMe048225. [DOI] [PubMed] [Google Scholar]
- 35.Langendonk JG, Balwani M, Anderson KE, et al. Afamelanotide for Erythropoietic Protoporphyria. The New England journal of medicine. 2015;373:48–59. doi: 10.1056/NEJMoa1411481. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Niemann CU, Feiner J, Swain S, et al. Therapeutic Hypothermia in Deceased Organ Donors and Kidney-Graft Function. The New England journal of medicine. 2015;373:405–14. doi: 10.1056/NEJMoa1501969. [DOI] [PubMed] [Google Scholar]
- 37.Bourgeois FT, Olson KL, Ioannidis JP, Mandl KD. Association between pediatric clinical trials and global burden of disease. Pediatrics. 2014;133:78–87. doi: 10.1542/peds.2013-2567. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Suda KJ, Greene J, Shelton CM. Geographic location of antiretroviral clinical trials in HIV infected pediatric patients. International journal of clinical pharmacy. 2013;35:1203–7. doi: 10.1007/s11096-013-9849-x. [DOI] [PubMed] [Google Scholar]
- 39.Faubel S, Chawla LS, Chertow GM, et al. Ongoing clinical trials in AKI. Clinical journal of the American Society of Nephrology : CJASN. 2012;7:861–73. doi: 10.2215/CJN.12191111. [DOI] [PubMed] [Google Scholar]
- 40.Bethel MA, Sourij H. Impact of FDA guidance for developing diabetes drugs on trial design: from policy to practice. Current cardiology reports. 2012;14:59–69. doi: 10.1007/s11886-011-0229-7. [DOI] [PubMed] [Google Scholar]
- 41.Li J, Lu Z. Systematic identification of pharmacogenomics information from clinical trials. Journal of biomedical informatics. 2012;45:870–8. doi: 10.1016/j.jbi.2012.04.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Anderson ML, Chiswell K, Peterson ED, Tasneem A, Topping J, Califf RM. Compliance with results reporting at ClinicalTrials.gov. The New England journal of medicine. 2015;372:1031–9. doi: 10.1056/NEJMsa1409364. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Krall RL. State of the controlled clinical trial enterprise in the United States. Clinical pharmacology and therapeutics. 2011;89:225–8. doi: 10.1038/clpt.2010.292. [DOI] [PubMed] [Google Scholar]
- 44.Califf RM, Zarin DA, Kramer JM, Sherman RE, Aberle LH, Tasneem A. Characteristics of clinical trials registered in ClinicalTrials.gov, 2007–2010. Jama. 2012;307:1838–47. doi: 10.1001/jama.2012.3424. [DOI] [PubMed] [Google Scholar]
- 45.Mayor S. Half of drug trials with results on ClinicalTrials.gov are not published in journals. Bmj. 2013;347:f7219. doi: 10.1136/bmj.f7219. [DOI] [PubMed] [Google Scholar]
- 46.Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. PLoS medicine. 2013;10:e1001566. doi: 10.1371/journal.pmed.1001566. discussion e. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Hartung DM, Zarin DA, Guise JM, McDonagh M, Paynter R, Helfand M. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Annals of internal medicine. 2014;160:477–83. doi: 10.7326/M13-0480. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Cepeda MS, Lobanov V, Berlin JA. Use of ClinicalTrials.gov to estimate condition-specific nocebo effects and other factors affecting outcomes of analgesic trials. The journal of pain : official journal of the American Pain Society. 2013;14:405–11. doi: 10.1016/j.jpain.2012.12.011. [DOI] [PubMed] [Google Scholar]
- 49.Gopal AD, Desai NR, Tse T, Ross JS. Reporting of noninferiority trials in ClinicalTrials.gov and corresponding publications. Jama. 2015;313:1163–5. doi: 10.1001/jama.2015.1697. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Zarin DA, Tse T. Sharing Individual Participant Data (IPD) and Scaling the Transparency Summit: But Don’t Forget the Foundation of the Trial Reporting System (TRS) PLoS medicine. 2015;13:e1001946. doi: 10.1371/journal.pmed.1001946. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.O'Reilly EK, Hassell NJ, Snyder DC, et al. ClinicalTrials.gov reporting: strategies for success at an academic health center. Clinical and translational science. 2015;8:48–51. doi: 10.1111/cts.12235. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Boehringer Ingelheim GmbH. Policy on Transparency and Publication of Clinical Study Data. ( http://trials.boehringer-ingelheim.com/transparency_policy/policy.html)
- 53.COMPare: Tracking Switched Outcomes in Clinical Trials. ( http://compare-trials.org/)
- 54.Hudis CA, Barlow WE, Costantino JP, et al. Proposal for standardized definitions for efficacy end points in adjuvant breast cancer trials: the STEEP system. Journal of clinical oncology : official journal of the American Society of Clinical Oncology. 2007;25:2127–32. doi: 10.1200/JCO.2006.10.3523. [DOI] [PubMed] [Google Scholar]
- 55.Altman DG, De Stavola BL, Love SB, Stepniewska KA. Review of survival analyses published in cancer journals. British journal of cancer. 1995;72:511–8. doi: 10.1038/bjc.1995.364. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Annals of internal medicine. 2013;158:200–7. doi: 10.7326/0003-4819-158-3-201302050-00583. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Doshi P. Is this trial misreported? Truth seeking in the burgeoning age of trial transparency. Bmj. 2016;355:i5543. doi: 10.1136/bmj.i5543. [DOI] [PubMed] [Google Scholar]