Author manuscript; available in PMC: 2022 Aug 16.
Published in final edited form as: Clin Trials. 2022 Apr 28;19(4):442–451. doi: 10.1177/17407745221093567

Reporting of clinical trial safety results in ClinicalTrials.gov for FDA-approved drugs: a cross-sectional analysis

Krista Y Chen 1, Erin Borglund 1, Emma Charlotte Postema 2, Adam G Dunn 2,1, Florence T Bourgeois 1,3
PMCID: PMC9378423  NIHMSID: NIHMS1792945  PMID: 35482320

Abstract

Background:

Adverse events identified during clinical trials can be important early indicators of drug safety, but complete and timely data on safety results have historically been difficult to access. The aim was to compare the availability, completeness, and concordance of safety results reported in ClinicalTrials.gov and peer-reviewed publications.

Methods:

We analyzed clinical trials used in the FDA safety assessment of new drugs approved between 7/1/2018 and 6/30/2019. The key safety outcomes examined were all-cause mortality, serious adverse events, adverse events, and withdrawals due to adverse events. Availability of safety results was measured by the presence and timing of a record of trial-level results in ClinicalTrials.gov and a corresponding peer-reviewed publication. For the subset of trials with available results, completeness was defined as the reporting of safety results for all participants and compared between ClinicalTrials.gov and publications. To assess concordance, we compared the numeric results for safety outcomes reported in ClinicalTrials.gov and publications to results in FDA trial reports.

Results:

Among 156 trials studying 52 drugs, 91 (58.3%) trials reported safety results in ClinicalTrials.gov and 106 (67.9%) in peer-reviewed publications (risk difference [RD] −9.6%, 95% CI −20.3 to 1.0). All-cause mortality was reported sooner in published articles compared with ClinicalTrials.gov (log rank test, p=0.01). There was no difference in time to reporting for serious adverse events (p=0.05), adverse events (p=0.09), or withdrawals due to adverse events (p=0.20). Complete reporting of all-cause mortality was similar in ClinicalTrials.gov and publications (74.7% vs 78.3%, respectively; RD −3.6%, 95% CI −15.5 to 8.3), and higher in ClinicalTrials.gov for serious adverse events (100% vs 79.2%; RD 20.8%, 95% CI 13.0 to 28.5) and adverse events (100% vs 86.8%; RD 13.2%, 95% CI 6.8 to 19.7). Withdrawals due to adverse events were less often completely reported in ClinicalTrials.gov (62.6% vs 92.5%; RD −29.8%, 95% CI −40.1 to −18.7). No difference was found in concordance of results between ClinicalTrials.gov and publications for all-cause mortality, serious adverse events, or withdrawals due to adverse events.

Conclusions:

Safety results were available in ClinicalTrials.gov at a similar rate as in peer-reviewed publications, with more complete reporting of certain safety outcomes in ClinicalTrials.gov. Future efforts should consider adverse event reporting in ClinicalTrials.gov as an accessible data source for post-marketing surveillance and other evidence synthesis tasks.

Keywords: Clinical trials, drug safety, trial registries, adverse event reporting, post-marketing surveillance, evidence synthesis

Introduction

Many therapeutic interventions are found to have serious safety issues after they are approved, which can take many years to uncover. A third of all novel drugs and biologics approved by the US Food and Drug Administration (FDA) from 2000 to 2010 were associated with a withdrawal, relabeling, or safety communication a median of 4 years after approval.1 Analyses of high-profile drugs with major safety concerns have found that critical safety data could have been made available sooner to clinicians, scientists, and regulators to protect patients from adverse outcomes.2,3

The FDA has increasingly adopted a lifecycle approach to drug evaluation, whereby ongoing surveillance of drug safety and efficacy is performed during the post-marketing phase.4 This serves in part to balance the increasing use of expedited approval pathways, which may rely on evidence from fewer or shorter clinical trials and have been associated with higher rates of post-approval safety findings.1,5,6 During the post-marketing phase, data sources used to detect new safety signals and monitor known risks include observational studies,7–9 spontaneous adverse event reporting systems,10 clinical trials, and combinations of these data with literature discovery.11 Between 2010 and 2018, clinical trial data were the basis for more than a third of the post-market safety concerns described in FDA drug safety communications, though the median time from FDA approval to safety communication was greater than 10 years.12

A challenge in using safety data from clinical trials for drug safety monitoring is that results reporting from trials remains slow and incomplete.13–15 Further, around half of all clinical trials are estimated to have a missing outcome in the published report.16 Incomplete reporting has been observed specifically for safety outcomes, including serious adverse events and patient withdrawals due to adverse events, with about a third of publications omitting or providing only partial data on serious adverse events and almost half lacking reports on withdrawals related to adverse events.17,18 This incomplete reporting may contribute to the underutilization of more sophisticated approaches to adverse event data analysis, though these are frequently applied by sponsors in pre-market analyses.19,20

Structured results data in ClinicalTrials.gov may represent a novel source of information on adverse events that could be leveraged in the post-marketing phase. Results reporting in ClinicalTrials.gov has grown rapidly following mandates by medical journals, funding agencies, and regulatory bodies that sponsors report trial safety and efficacy data at specified time points.21–24 The database employs standardized vocabularies and formatting for adverse event reporting, enabling easier synthesis of adverse event data from ClinicalTrials.gov compared to published articles. Prior studies have compared results reporting between the registry and publications, focusing primarily on completeness and discrepancies in results, with the goal of informing efforts to improve clinical trial reporting practices and increase data quality.25–28 To assess the value of ClinicalTrials.gov as a novel source of drug safety data, the reporting of all trials pertaining to a given drug must be examined, including time to data availability, completeness of reporting across safety outcomes, and quality of the results data. Drug approval documents compiled by the FDA include analysis of a specified set of pivotal trials and provide trial results that can serve as a reference standard to evaluate the accuracy of data reporting in other sources.29,30 Focusing on the availability of results for trials included in FDA reviews is one way to avoid biases associated with sampling trials from public result reporting sources when assessing the availability of trial results in ClinicalTrials.gov and publications.

Accordingly, to explore the potential application of trial registry data for drug safety surveillance, our aim was to compare the availability, completeness, and concordance of key clinical trial safety results between ClinicalTrials.gov and peer-reviewed publications for trials included in FDA reviews for novel therapeutic agents.

Methods

Trial selection

We identified all trials used to support the safety assessment of New Molecular Entities and New Biologic Approvals receiving FDA approval between July 1, 2018 and June 30, 2019. These dates were selected to allow sufficient time for follow-up to publication. For each drug, we reviewed the approval documents available at Drugs@FDA.gov and selected the trials listed as contributing to the primary safety database.

Trial safety data reported in ClinicalTrials.gov

Each safety trial was linked to the corresponding trial registration in ClinicalTrials.gov, where we determined whether safety data had been reported. For most trials, the National Clinical Trial number was included in the FDA reports, allowing for direct linkage to the registry record. When the National Clinical Trial number was not available, ClinicalTrials.gov was systematically searched using keywords for the active ingredient and drug name. Trial records were matched to the FDA safety trials using information on sponsor, trial design, trial phase, site locations, sample size, and dates. A final update to result availability was performed on December 31, 2020.

Trial safety data published in journal articles

We searched MEDLINE via PubMed to identify publications in peer-reviewed journals. When available, we used PubMed ID numbers or links to publications included in ClinicalTrials.gov. If these were unavailable, we searched PubMed with keywords and trial features, using a similar protocol to the searches performed in ClinicalTrials.gov. If multiple publications were identified for the same trial, we extracted data from the primary safety publication with trial level data on the full study cohort. For unpublished trials, a final search for publications was performed on December 31, 2020.

Data extraction

Data elements were extracted from ClinicalTrials.gov and publications for each trial, including drug name, active ingredient, study identifiers (sponsor trial ID, National Clinical Trial number, and PMID), publication date, comparator arms, and interventions. For ClinicalTrials.gov, we noted the date results were first posted. For publications, we collected the earliest date of online publication.

The key safety outcomes analyzed were all-cause mortality, serious adverse events, adverse events, and withdrawals due to adverse events. In ClinicalTrials.gov, we extracted data on all-cause mortality from the adverse event section and withdrawals due to adverse events from the participant flow section. Adverse events are presented in the registry using a standardized set of tables structured as serious adverse events and other adverse events. This format differs from most publications, where safety data are presented as serious adverse events and any adverse events. For our analysis of adverse events, we used other adverse events from ClinicalTrials.gov and any adverse events from publications. For each trial, the number of patients experiencing each of the safety outcomes in every trial arm was extracted.

We also collected information on restrictions in adverse event reporting to assess completeness of adverse event reporting. Restrictions consisted of thresholds based on a specified number or percentage of patients experiencing a particular adverse event (e.g., nausea, rash), below which adverse events were not reported.

For some of the drugs, safety data were pooled across multiple trials and results reported in aggregate. Whenever possible, we attempted to identify trial-level results, but noted when only pooled safety data were available. Certain trials included multiple phases, such as open-label extensions, or cross-over phases. Only safety data reported for the primary trial phase were considered.

Two authors (KC and EB) developed the data definitions and extraction protocol using the trials associated with a random set of 12 drugs. Double-data extraction was performed using a standard data collection form for trials associated with a new set of 8 drugs, demonstrating high concordance across all data elements extracted from ClinicalTrials.gov (agreement 99.6%, 259 of 260 data elements) and publications (agreement 96.5%, 166 of 172), but greater discrepancy for data from FDA documents (agreement 62.0%, 124 of 200). Based on these findings, we performed double-data extraction from FDA documents for all trials and single-data extraction in ClinicalTrials.gov and publications for the remainder of trials.

Completeness of reporting

Completeness of reporting was assessed for each of the four key safety outcomes. Reporting was considered complete if the number of participants experiencing a safety outcome was provided for all trial arms. Reporting was classified as partial if the number of participants experiencing a safety outcome was provided for only a subset of trial arms and missing if this number was not available for participants in any trial arm. Only safety data available at the trial level were considered complete.28
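As a sketch, this three-way classification can be expressed directly in code. Representing an unreported arm-level result as None is an assumption made for illustration, not the paper's actual data model:

```python
def completeness(arm_counts):
    """Classify reporting of one safety outcome for one trial.

    arm_counts: number of participants experiencing the outcome,
    one entry per trial arm, with None where no result was
    reported for that arm (illustrative representation).
    """
    reported = [count is not None for count in arm_counts]
    if all(reported):
        return "complete"  # results for all trial arms
    if any(reported):
        return "partial"   # results for only a subset of arms
    return "missing"       # no results for any trial arm

print(completeness([4, 2]))         # → complete
print(completeness([4, None]))      # → partial
print(completeness([None, None]))   # → missing
```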

Concordance in results reporting

Results reported in ClinicalTrials.gov and publications were assessed for concordance with values provided in FDA medical and statistical reviews, which were considered the reference standard, consistent with prior approaches.29,30 Sponsors submit clinical study reports with participant-level data, which are analyzed by FDA statisticians and reviewers and used to generate approval documents. Since sponsors are legally required to provide accurate data to the FDA, the reviews represent an appropriate data source for a reference standard.29 Concordance of results across all trial arms was calculated for trials with trial-level results available for matched trials (i.e. trials with results available in both FDA reports and ClinicalTrials.gov and trials with results available in both FDA reports and published articles). Concordance was not evaluated for the safety outcome of adverse events since these were defined differently across documents (i.e. other or any adverse events). Results for safety outcomes were considered concordant with FDA reports if the number of patients experiencing a key safety outcome was the same for all trial arms.
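Under this definition, concordance is a strict arm-by-arm equality check on patient counts against the FDA review. A minimal sketch, with the function name and list-of-counts representation chosen for illustration (the paper does not describe its implementation):

```python
def is_concordant(fda_counts, other_counts):
    """Patient counts for one safety outcome, one entry per trial arm,
    listed in the same arm order for both sources. A trial's result is
    concordant only if every arm's count matches the FDA report exactly."""
    return (len(fda_counts) == len(other_counts)
            and all(a == b for a, b in zip(fda_counts, other_counts)))

# e.g. serious adverse events in a two-arm trial
print(is_concordant([12, 7], [12, 7]))  # → True
print(is_concordant([12, 7], [12, 6]))  # → False
```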

Analyses

Descriptive analyses were performed to describe trial characteristics and reporting of the key safety outcomes. To assess the availability of safety data, we determined the proportion of trials with safety outcomes reported in each data source and compared reporting levels between ClinicalTrials.gov and publications by calculating absolute risk differences. Time to reporting of trial-level safety outcomes after FDA approval in ClinicalTrials.gov and publications was evaluated using a Kaplan-Meier analysis and log-rank test with censoring of data on December 31, 2020, allowing a minimum of 18 months of follow-up for each trial. Completeness of reporting for the four key safety outcomes in ClinicalTrials.gov and published articles was compared by assessing the risk difference for the proportion of trials reporting each of the safety outcomes. Concordance of results was calculated for trials with results available in both FDA reports and ClinicalTrials.gov or a published article, also using risk difference assessments. A p value less than 0.05 was considered significant. All analyses were performed using R version 4.0.3.
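The paper does not state the exact method used for the risk-difference confidence intervals, but an unpaired Wald interval reproduces the headline figures from the abstract (91/156 trials with results in ClinicalTrials.gov vs 106/156 in publications). The analyses were performed in R; the sketch below, in Python, is illustrative only:

```python
from math import sqrt

def risk_difference(x1, n1, x2, n2, z=1.96):
    """Unpaired Wald risk difference between two proportions,
    with a z-based confidence interval (z=1.96 for 95% CI)."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# availability of results: ClinicalTrials.gov (91/156) vs publications (106/156)
rd, lo, hi = risk_difference(91, 156, 106, 156)
print(f"RD {rd*100:.1f}% (95% CI {lo*100:.1f} to {hi*100:.1f})")
# → RD -9.6% (95% CI -20.3 to 1.0)
```

Note that the Wald interval treats the two proportions as independent even though they are measured on the same 156 trials; that simplification matches the reported CI but is an assumption about the authors' method.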

Results

The study cohort comprised 52 drugs, which were approved based on safety data from 156 clinical trials. The number of safety trials per drug ranged from 1 to 19, with a median of 2 trials per drug. The most common drug classes were oncology products (n=17), anti-infective agents (n=7), and hematologic agents (n=4).

Availability of trial data

Trial-level safety results were available for 91 (58.3%) trials in ClinicalTrials.gov and 106 (67.9%) trials in published journal articles after a median follow-up of 27.0 months (risk difference [RD]= −9.6%, 95% CI, −20.3 to 1.0). At 12 and 24 months after drug approval, 84 (53.8%) and 91 (58.3%) trials, respectively, had results reported in ClinicalTrials.gov, and 100 (64.1%) and 106 (67.9%) trials had results reported in publications. There were an additional 2 (1.3%) trials reporting results pooled across multiple trials in ClinicalTrials.gov, and 14 (9.0%) trials in publications. Among the 118 trials with results available in either source, 12 (10.2%) were available only in ClinicalTrials.gov and 27 (22.9%) only in publications.

Comparing availability in ClinicalTrials.gov and journal articles, all-cause mortality was reported in 43.6% of ClinicalTrials.gov records and 55.1% of publications (RD −11.5%, 95% CI −22.6 to −0.5). Withdrawals due to adverse events were reported in 37.2% of ClinicalTrials.gov records and 64.1% of publications (RD −27.0%, 95% CI −37.6 to −16.2) (Table 1). No evidence of a difference was found in reporting levels for serious adverse events or adverse events between ClinicalTrials.gov and publications.

Table 1.

Availability of Trial-Level Safety Data for 156 Clinical Drug Trials

Safety outcome | ClinicalTrials.gov, N (%) | Publications, N (%) | Risk difference (95% CI) | Median time to reporting,a months: ClinicalTrials.gov (IQR) | Publications (IQR)
All-cause mortality | 68 (43.6%) | 86 (55.1%) | −11.5% (−22.6 to −0.5) | 2.7 (1.2–5.6) | 0 (0–4.2)
Serious adverse events | 91 (58.3%) | 85 (54.5%) | 3.8% (−7.2 to 14.8) | 1.9 (0–3.9) | 0 (0–2.4)
Adverse events | 91 (58.3%) | 93 (59.6%) | −1.3% (−12.2 to 9.6) | 1.9 (0–3.9) | 0 (0–3.4)
Withdrawals due to adverse events | 58 (37.2%) | 100 (64.1%) | −27.0% (−37.6 to −16.2) | 2.1 (0–5.1) | 0 (0–3.4)

a Represents time from FDA drug approval to availability of trial safety data.

We examined time to availability of results in ClinicalTrials.gov and article publications for each safety outcome (Figure 1). All-cause mortality was reported sooner in published articles compared with ClinicalTrials.gov reports (log-rank test p=0.01). There was no difference in time to reporting for serious adverse events (log-rank test p=0.05), adverse events (log-rank test p=0.09), or withdrawals due to adverse events (log-rank test p=0.20). For all safety outcomes, differences in median time to reporting between ClinicalTrials.gov and published articles ranged from 1.9 to 2.7 months.

Figure 1.

Time to availability of key safety outcomes for trials with reports in ClinicalTrials.gov or peer-reviewed publications. Time represents time from FDA drug approval to availability of each of the safety outcomes.

Completeness of trial data

Completeness of reporting for all-cause mortality was similar between ClinicalTrials.gov and publications, with 74.7% (68 of 91) of ClinicalTrials.gov reports providing data for all participants in all trial arms compared with 78.3% (83 of 106) of publications (RD −3.6%, 95% CI −15.5 to 8.3) (Table 2). For serious adverse events, reporting was more complete in ClinicalTrials.gov, where 100% (91 of 91) of reports included serious adverse events for all participants, compared to 79.2% (84 of 106) of published articles (RD 20.8%, 95% CI 13.0 to 28.5). Adverse event reporting showed similar results, with 100% (91 of 91) of trials with complete reporting in ClinicalTrials.gov and 86.8% (92 of 106) in published articles (RD 13.2%, 95% CI 6.8 to 19.7). Withdrawals due to adverse events were less often completely reported for all trial participants in ClinicalTrials.gov at 62.6% (57 of 91) compared to 92.5% (98 of 106) in published articles (RD −29.8%, 95% CI −40.1 to −18.7).

Table 2.

Completeness of Reporting of Key Safety Outcomes in ClinicalTrials.gov and Publications

Safety outcome | ClinicalTrials.gov (n = 91), N (%) | Publications (n = 106), N (%) | Risk difference (95% CI)
All-cause mortality
  All participants in all trial arms | 68 (74.7%) | 83 (78.3%) | −3.6% (−15.5 to 8.3)
  Participants in a subset of trial arms | 0 (0%) | 3 (2.8%)
  No results for any trial arms | 23 (25.3%) | 20 (18.9%)
Serious adverse events
  All participants in all trial arms | 91 (100%) | 84 (79.2%) | 20.8% (13.0 to 28.5)
  Participants in a subset of trial arms | 0 (0%) | 1 (0.9%)
  No results for any trial arms | 0 (0%) | 21 (19.8%)
Adverse event reporting
  All participants in all trial arms | 91 (100%) | 92 (86.8%) | 13.2% (6.8 to 19.7)
  Participants in a subset of trial arms | 0 (0%) | 1 (0.9%)
  No results for any trial arms | 0 (0%) | 13 (12.3%)
Reporting threshold for adverse eventsa
  No threshold applied | 9/91 (9.9%) | 3/93 (3.2%) | 6.7% (−0.4 to 13.8)
  Threshold applied | 82/91 (90.1%) | 70/93 (75.3%)
  Not specified | 0/91 (0%) | 20/93 (21.5%)
Withdrawals due to adverse events
  All participants in all trial arms | 57 (62.6%) | 98 (92.5%) | −29.8% (−40.1 to −18.7)
  Participants in a subset of trial arms | 1 (1.1%) | 2 (1.9%)
  No results for any trial arms | 33 (36.3%) | 6 (5.7%)

a Analysis of reporting threshold limited to trials with adverse events reported for all participants or for participants in a subset of trial arms.

In published articles, when results for participants across all study arms were not reported, results for participants in a subset of study arms were provided for a small percentage of trials. By contrast, in ClinicalTrials.gov, owing to the structured format for reporting, results were available either for all trial arms or none.

The most common restriction in adverse event reporting was providing only adverse events observed above a certain frequency. In ClinicalTrials.gov, all trials used this measure, with thresholds ranging from 0.9% to 5% of study participants. In publications, the types of restrictions varied widely. When adverse event frequencies were used, thresholds ranged from 1% to 20%. However, many other measures were applied, including reporting only adverse events above a certain frequency in specified intervention arms, adverse events that were most severe, adverse events that occurred at a higher frequency in the intervention group compared to the placebo group, or some combination of these.

Concordance of results with FDA reports

The concordance of results in ClinicalTrials.gov and publications with results provided in FDA reports was similar (Table 3). Concordance was highest for all-cause mortality, with 70.0% (28 of 40) of ClinicalTrials.gov reports and 79.2% (38 of 48) of publications reporting concordant results (RD −9.2%, 95% CI −27.4 to 9.1). For serious adverse events, concordance was 66.7% (22 of 33) and 70.4% (19 of 27), respectively (RD −3.7%, 95% CI −27.3 to 19.9). Concordance was lowest for withdrawals due to adverse events, with results in 44.1% (15 of 34) of ClinicalTrials.gov reports and 35.9% (23 of 64) of publications concordant with FDA reports (RD 8.2%, 95% CI −12.2 to 28.6).

Table 3.

Concordance of Safety Results Between FDA Reviews and Results Reported in ClinicalTrials.gov and Publications

Safety outcome | ClinicalTrials.gov, concordant/total (%) | Publications, concordant/total (%) | Risk difference (95% CI)
All-cause mortality | 28/40 (70.0%) | 38/48 (79.2%) | −9.2% (−27.4 to 9.1)
Serious adverse events | 22/33 (66.7%) | 19/27 (70.4%) | −3.7% (−27.3 to 19.9)
Withdrawals due to adverse events | 15/34 (44.1%) | 23/64 (35.9%) | 8.2% (−12.2 to 28.6)

Discussion

In this assessment of safety results from a large cohort of clinical drug trials, key safety outcomes were available in ClinicalTrials.gov in a timely fashion and were as complete as results provided in published articles. These findings support the use and further evaluation of trial registries as a source of drug safety data to complement other approaches to drug safety monitoring in the post-marketing phase. While time to reporting was shorter for certain outcomes in publications compared with ClinicalTrials.gov, differences in median time to reporting were less than 3 months for all safety outcomes assessed. Further, results for certain safety outcomes were reported more completely in ClinicalTrials.gov compared to published articles, in part due to the structured reporting format of the registry. We were able to compare concordance of results for only a small number of trials with matched reports in FDA documents, but for available results, the concordance was similar between ClinicalTrials.gov and publications.

Prior studies examining results reporting in ClinicalTrials.gov have focused on varying types of interventional trials and trial outcomes to ascertain reporting discrepancies in ClinicalTrials.gov and publications, and highlight the use of registries to supplement information available in publications, evaluate publication bias and selective reporting of results, and advance efforts to improve reporting quality.17,26–28 Building on this work, we assessed the utility of ClinicalTrials.gov as a data source for post-approval surveillance of drug safety and other evidence synthesis tasks. We selected all clinical trials used to support drug safety evaluations conducted by the FDA as a method to evaluate a comprehensive set of safety trials and compare reporting practices in ClinicalTrials.gov and publications with FDA documents as the reference standard. Our findings indicate that use of ClinicalTrials.gov for results reporting has reached a point where safety data are available at nearly the same levels in the registry as in publications, with only small differences in timing favoring publications. Importantly, when results are reported in ClinicalTrials.gov, the structure of the database and requirements around data entry support more complete reporting of adverse events and serious adverse events.

Our findings are consistent with prior studies indicating that trial results are posted in a timely fashion in ClinicalTrials.gov and tend to report more complete results compared to publications. A study of phase III randomized controlled trials (not limited to drug interventions) found that the median time from trial completion to reporting of serious adverse events was 22 months in ClinicalTrials.gov compared to 72 months in publications.28 Another evaluation of randomized drug trials reported median times to reporting for any results to be 19 months for ClinicalTrials.gov and 20 months for publications.17 Analyses assessing trials across a variety of interventions have also reported more complete reporting for flow of participants, efficacy results, adverse events, and serious adverse events.17,25,28 On the other hand, similar to our findings, a study examining interventional phase III studies found fewer trials with results on deaths in the registry compared to corresponding publications.25

Two prior studies have also assessed discrepancies in results reporting between ClinicalTrials.gov and FDA drug reviews.29,30 One study, based on 15 trials, reported concordance for serious adverse event results for 67% of trials in ClinicalTrials.gov and 70% of trials in publications.29 Another study reported 50% (7 of 14) of trials with concordant serious adverse event results between the registry and FDA documents.30 These results are in line with our findings that 67% of ClinicalTrials.gov reports and 70% of publications provided results on serious adverse events that were concordant with FDA reviews. For mortality, prior studies have reported concordance rates of 28% and 30% with results in ClinicalTrials.gov and 47% with results in publications.29,30 These are lower than our rates of 70% for ClinicalTrials.gov and 78% for publications. Possible explanations for these differences include the smaller sample sizes and different timeframes, with one study evaluating 14 trials for drugs approved in 2013 and 2014, and the other 15 trials for drugs approved between 2013 and 2015. For ClinicalTrials.gov in particular, prior reports have highlighted specific deficiencies in the handling of mortality data, which may have led to changes in the structuring of the results database and improved reporting.30,31

Our results have implications for post-marketing surveillance, systematic reviews, and other evidence synthesis tasks. We found that 10% of trial reports were available only in ClinicalTrials.gov, underscoring the need to utilize the registry in addition to bibliographic datasets to obtain more complete clinical trial safety data. Further, when results were reported in the registry, all trials reported complete results for adverse events and serious adverse events across participants in all trial arms, compared with 87% and 79%, respectively, of trials in publications. ClinicalTrials.gov also employed a consistent approach to reporting thresholds for adverse events, compared to very heterogeneous reporting practices observed in publications. These features facilitate evidence synthesis tasks and may improve the ability to detect new or rare adverse events across trials. In addition to the traditional sources of data used in post-marketing surveillance and systematic reviews, ClinicalTrials.gov may represent a novel source of data for drug safety, providing structured, computable data that are amenable to automated extraction and synthesis, and lend themselves to a more comprehensive, continuous system of safety surveillance and evidence synthesis.32

Our results also point to areas for potential improvement of the registry, particularly in the reporting of all-cause mortality and withdrawals due to adverse events. For both of these safety outcomes, rates of reporting were higher in publications compared to ClinicalTrials.gov. In addition, data on withdrawals due to adverse events were less complete in the registry compared to publications. Concordance with FDA documents was also lowest for this outcome, with only 44% of trials reporting the same results in the registry as in the FDA documents. However, this rate was similar to publications, where 36% of results were found to be concordant. We were not able to ascertain the sources of these discrepancies, which are likely multi-factorial, including variations in case definitions, differences in the cohorts or time periods used in reporting certain events, differences in reporting purposes and data processing methods, and errors in data recording and submission.27,29,30 Furthermore, most trials report only adverse events above a threshold of 5% in ClinicalTrials.gov, which prevents full visibility into adverse events that could be useful for systematic reviews and represents poor reporting practice.33 Finally, the overall rate of result availability for the cohort of safety trials was only 58% after a median follow-up of 2 years, pointing to the need for additional efforts to ensure compliance with timely reporting requirements in the registry.34 Further development of ClinicalTrials.gov should address mechanisms to increase compliance with reporting of certain safety outcomes and ensure clear and standard definitions for the reporting of comprehensive safety outcomes.

A strength of the study was the inclusion of a comprehensive set of trials used to support the safety evidence for recently approved drugs. This study design allowed us to assess the availability and quality of results reporting in ClinicalTrials.gov and publications compared to FDA documents as a reference standard. This approach builds on prior reports examining matched sets of trial reports to assess discrepancies between results and allowed us to examine the utility of ClinicalTrials.gov as a data source to augment adverse event surveillance and evidence synthesis tasks. This study design is also the source of a limitation of the study, since reporting practices may be different for pre-market trials compared to those in the post-marketing phase, even though trial registration and reporting requirements apply to all controlled clinical investigations (other than phase 1 trials) of FDA-regulated products.35 For example, it is possible that sponsors are more vigilant about timely results reporting for trials undergoing review as part of the drug approval process than for trials conducted after market authorization has been obtained. In addition, trials conducted in the post-marketing phase are likely to be led by a variety of stakeholders beyond sponsors, including academic institutions and research networks, which may lead to differences in the reporting of results compared with pre-market trials. However, it is unclear whether these factors would favor one reporting source over another, and prior studies have found high rates of results reporting in ClinicalTrials.gov for registered post-marketing studies.36,37 Other potential limitations of the study included constraints on the measurements of completeness and concordance based on trial data availability. When measuring concordance for results in ClinicalTrials.gov and published articles, we could only examine the subset of trials with matched trial reports in FDA documents.
Future studies should extend our analysis to ongoing post-marketing trials, especially longer and larger trials designed specifically to measure safety outcomes.

Conclusion

Among clinical trials used to support the approval of new drugs, safety results were available in ClinicalTrials.gov at approximately the same rate as in published articles, and data on adverse events and serious adverse events were more complete in ClinicalTrials.gov. These findings support the use of ClinicalTrials.gov for drug safety surveillance and evidence synthesis tasks. Further, the structured format of the results database lends itself to automated methods for data extraction and synthesis, and may provide the basis for a comprehensive, continuous system for drug safety monitoring.

Funding

National Library of Medicine, National Institutes of Health (R01LM012976). The funder had no role in the design of the study, collection, analysis, and interpretation of data, or writing of the manuscript.

Footnotes

Declaration of conflicting interests

The authors declare that they have no competing interests.

Availability of data and materials

The datasets generated and/or analyzed during the current study are available in the ClinicalTrials.gov trial registry and the FDA’s dataset of approved drugs at Drugs@FDA.

References

1. Downing NS, Shah ND, Aminawung JA, et al. Postmarket safety events among novel therapeutics approved by the US Food and Drug Administration between 2001 and 2010. JAMA 2017;317(18):1854–1863.
2. Jüni P, Nartey L, Reichenbach S, et al. Risk of cardiovascular events and rofecoxib: cumulative meta-analysis. Lancet 2004;364(9450):2021–2029.
3. Nissen SE. The rise and fall of rosiglitazone. Eur Heart J 2010;31(7):773–776.
4. Psaty BM, Meslin EM and Breckenridge A. A lifecycle approach to the evaluation of FDA approval methods and regulatory actions: opportunities provided by a new IOM report. JAMA 2012;307(23):2491–2492.
5. Kesselheim AS, Wang B, Franklin JM, et al. Trends in utilization of FDA expedited drug development and approval programs, 1987–2014: cohort study. BMJ 2015;351:h4633.
6. Frank C, Himmelstein DU, Woolhandler S, et al. Era of faster FDA drug approval has also seen increased black-box warnings and market withdrawals. Health Aff 2014;33(8):1453–1459.
7. Hammad TA, Neyarapally GA, Iyasu S, et al. The future of population-based postmarket drug risk assessment: a regulator's perspective. Clin Pharmacol Ther 2013;94(3):349–358.
8. Platt R, Wilson M, Chan KA, et al. The new Sentinel Network—improving the evidence of medical-product safety. N Engl J Med 2009;361(7):645–647.
9. Oliveira JL, Lopes P, Nunes T, et al. The EU-ADR Web Platform: delivering advanced pharmacovigilance tools. Pharmacoepidemiol Drug Saf 2013;22(5):459–467.
10. Harpaz R, DuMouchel W, LePendu P, et al. Performance of pharmacovigilance signal-detection algorithms for the FDA Adverse Event Reporting System. Clin Pharmacol Ther 2013;93(6):539–546.
11. Xu R and Wang Q. Large-scale combining signals from both biomedical literature and the FDA Adverse Event Reporting System (FAERS) to improve post-marketing drug safety signal detection. BMC Bioinformatics 2014;15(1):17.
12. Tau N, Shochat T, Gafter-Gvili A, et al. Association between data sources and US Food and Drug Administration drug safety communications. JAMA Intern Med 2019;179(11):1590–1592.
13. Manzoli L, Flacco ME, D'Addario M, et al. Non-publication and delayed publication of randomized trials on vaccines: survey. BMJ 2014;348:g3058.
14. Ross JS, Mulvey GK, Hines EM, et al. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med 2009;6(9):e1000144.
15. Rees CA, Pica N, Monuteaux MC, et al. Noncompletion and nonpublication of trials studying rare diseases: a cross-sectional analysis. PLoS Med 2019;16(11):e1002966.
16. Dwan K, Altman DG, Arnaiz JA, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 2008;3(8):e3081.
17. Riveros C, Dechartres A, Perrodeau E, et al. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. PLoS Med 2013;10(12):e1001566.
18. Pitrou I, Boutron I, Ahmad N, et al. Reporting of safety results in published reports of randomized controlled trials. Arch Intern Med 2009;169(19):1756–1761.
19. Phillips R and Cornelius V. Understanding current practice, identifying barriers and exploring priorities for adverse event analysis in randomised controlled trials: an online, cross-sectional survey of statisticians from academia and industry. BMJ Open 2020;10(6):e036875.
20. Chuang-Stein C and Xia HA. The practice of pre-marketing safety assessment in drug development. J Biopharm Stat 2013;23(1):3–25.
21. Zarin DA, Fain KM, Dobbins HD, et al. 10-year update on study results submitted to ClinicalTrials.gov. N Engl J Med 2019;381(20):1966–1974.
22. Zarin DA, Tse T, Williams RJ, et al. Update on trial registration 11 years after the ICMJE policy was established. N Engl J Med 2017;376(4):383–391.
23. Zarin DA, Tse T, Williams RJ, et al. Trial reporting in ClinicalTrials.gov — the Final Rule. N Engl J Med 2016;375(20):1998–2004.
24. Zarin DA. The culture of trial results reporting at academic medical centers. JAMA Intern Med 2020;180(2):319–320.
25. Hartung DM, Zarin DA, Guise JM, et al. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Ann Intern Med 2014;160(7):477–483.
26. Talebi R, Redberg RF and Ross JS. Consistency of trial reporting between ClinicalTrials.gov and corresponding publications: one decade after FDAAA. Trials 2020;21(1):675.
27. Becker JE, Krumholz HM, Ben-Josef G, et al. Reporting of results in ClinicalTrials.gov and high-impact journals. JAMA 2014;311(10):1063–1065.
28. Tang E, Ravaud P, Riveros C, et al. Comparison of serious adverse events posted at ClinicalTrials.gov and published in corresponding journal articles. BMC Med 2015;13:189.
29. Pradhan R and Singh S. Comparison of data on serious adverse events and mortality in ClinicalTrials.gov, corresponding journal articles, and FDA medical reviews: cross-sectional analysis. Drug Saf 2018;41(9):849–857.
30. Schwartz LM, Woloshin S, Zheng E, et al. ClinicalTrials.gov and Drugs@FDA: a comparison of results reporting for new drug approval trials. Ann Intern Med 2016;165(6):421–430.
31. Earley A, Lau J and Uhlig K. Haphazard reporting of deaths in clinical trials: a review of cases of ClinicalTrials.gov records and matched publications—a cross-sectional study. BMJ Open 2013;3(1):e001963.
32. Dunn AG and Bourgeois FT. Is it time for computable evidence synthesis? J Am Med Inform Assoc 2020;27(6):972–975.
33. Ioannidis JPA, Evans SJW, Gøtzsche PC, et al. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med 2004;141(10):781–788.
34. Anderson ML, Chiswell K, Peterson ED, et al. Compliance with results reporting at ClinicalTrials.gov. N Engl J Med 2015;372(11):1031–1039.
35. U.S. Food and Drug Administration. FDA Amendments Act of 2007 (FDAAA). Public Law No. 110–85 § 801; 2007.
36. Wallach JD, Luxkaranayagam AT, Dhruva SS, et al. Postmarketing commitments for novel drugs and biologics approved by the US Food and Drug Administration: a cross-sectional analysis. BMC Med 2019;17(1):117.
37. Wallach JD, Egilman AC, Dhruva SS, et al. Postmarket studies required by the US Food and Drug Administration for new drugs and biologics approved between 2009 and 2012: cross sectional analysis. BMJ 2018;361:k2031.
