Author manuscript; available in PMC: 2018 Apr 13.
Published in final edited form as: JAMA. 2015 Mar 17;313(11):1163–1165. doi: 10.1001/jama.2015.1697

Reporting of Non-inferiority Trials in ClinicalTrials.gov and Corresponding Publications

Anand Gopal 1, Nihar Desai 2, Tony Tse 3, Joseph Ross 2
PMCID: PMC5897692  NIHMSID: NIHMS957306  PMID: 25781447

Non-inferiority clinical trials are designed to determine whether an intervention is not worse than a comparator by more than a prespecified difference, known as the non-inferiority margin. Selection of an appropriate margin is fundamental to the validity of a non-inferiority trial, yet it is a frequent point of ambiguity.1,2 Given the increasing use of non-inferiority trial designs, maintaining high standards for conduct and reporting is a priority.3,4 Publicly accessible trial registries and results databases promote transparency and accountability by requiring specification of research designs and endpoints and disclosure of summary results.1,5 To better understand reporting of non-inferiority trials, we examined registration records and results posted on ClinicalTrials.gov, as well as their corresponding publications, for information about the non-inferiority margin and statistical analyses, and determined their association with trial and journal characteristics.
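To make the margin logic concrete, the following is a minimal sketch of the standard fixed-margin decision rule, using hypothetical counts rather than data from any trial in this study: non-inferiority is concluded when the upper bound of the two-sided 95% CI for the new-minus-comparator difference in event rates lies below the prespecified margin. The Wald interval used here is one common choice; individual trials may use other analyses.

```python
import math

def noninferior(events_new, n_new, events_ref, n_ref, margin, z=1.96):
    """Fixed-margin non-inferiority check on a risk difference.

    Returns (conclusion, upper_bound): conclusion is True when the upper
    bound of the two-sided 95% Wald CI for the (new - reference) event
    rate difference lies below the prespecified margin, i.e. the new
    intervention is at most `margin` worse. Illustrative sketch only.
    """
    p_new, p_ref = events_new / n_new, events_ref / n_ref
    diff = p_new - p_ref  # positive = new arm has more events (worse)
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    upper = diff + z * se
    return upper < margin, upper

# Hypothetical trial: 80/400 vs 75/400 events, margin of 0.10 (10 points)
ok, upper = noninferior(80, 400, 75, 400, margin=0.10)
```

Note that a nonsignificant test of superiority would not justify the same conclusion; the explicit margin is what makes the inference valid, which is why its specification and justification matter.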

Methods

Because ClinicalTrials.gov does not currently require registration of non-inferiority-specific information, we searched Ovid MEDLINE for non-inferiority trials published between January 2012 and June 2014 using keywords pertaining to “non-inferior” and “equivalence,” limited to English-language publications. We then selected publications reporting primary analyses of non-inferiority trials indexed with a ClinicalTrials.gov identifier, excluding publications without trial registration (n=163) or registered in other registries (n=97). For each trial, we abstracted details on trial design, including specification and justification of the non-inferiority margin, and results, including reporting of non-inferiority statistical analyses, from both ClinicalTrials.gov (July/August 2014) and corresponding publications. We used chi-square tests to compare reporting by study sponsor, condition, location, intervention, trial design characteristics, and journal impact factor (Table), with a two-sided type I error level of 0.006 to account for multiple comparisons. Analyses were performed using JMP (version 10.0.0, SAS Institute).
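The comparison described above can be sketched as follows for one contrast, using the sponsor counts from the Table (48/210 industry vs. 51/134 non-industry trials reporting an NI design on ClinicalTrials.gov). This stdlib-only Pearson chi-square (no continuity correction, 1 df) stands in for the JMP routine actually used, so the exact statistic may differ slightly from the published P value.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Counts from the Table: 48 of 210 industry trials reported an NI design
# on ClinicalTrials.gov vs. 51 of 134 non-industry trials.
chi2 = chi2_2x2(48, 162, 51, 83)

# Critical value of chi-square with 1 df at the study's adjusted
# two-sided alpha of 0.006: the square of the normal quantile 2.748.
CRIT = 2.748 ** 2  # ~7.55
significant = chi2 > CRIT
```

Comparing the statistic to the 0.006-level critical value reproduces the multiple-comparison adjustment described above; here the sponsor contrast clears it, consistent with the reported P = 0.002.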

Table.

Reporting of trial design and results for non-inferiority trials registered on ClinicalTrials.gov and published between January 2012 and June 2014, stratified by trial and journal characteristics (n=344).

| Characteristic | Overall, n (column %, 95% CI) | NI design reported on CTgov, n (row %, 95% CI) | P value | Trial results with NI analysis posted on CTgov, n (row %, 95% CI)^e | P value |
| --- | --- | --- | --- | --- | --- |
| Overall | 344 (100) | 99 (28.8, 24.2–33.8) | | 76 (22.1, 18.0–26.8) | |
| Trial sponsor^a | | | 0.002 | | <0.001 |
| &nbsp;Industry | 210 (61.0, 55.8–66.1) | 48 (22.9, 17.7–29.0) | | 70 (33.3, 27.3–40.0) | |
| &nbsp;Non-industry | 134 (39.0, 33.9–44.2) | 51 (38.1, 30.3–46.5) | | 6 (4.5, 2.1–9.4) | |
| Condition^a | | | 0.09 | | 0.05 |
| &nbsp;Heart and blood diseases | 63 (18.3, 14.6–22.7) | 24 (38.1, 27.1–50.4) | | 10 (15.9, 8.9–26.8) | |
| &nbsp;Nutritional and metabolic disorders | 43 (12.5, 9.4–16.4) | 7 (16.3, 8.1–30.0) | | 17 (39.5, 26.4–54.4) | |
| &nbsp;Cancers and other neoplasms | 42 (12.2, 9.2–16.1) | 11 (26.2, 15.3–41.1) | | 6 (14.3, 6.7–27.8) | |
| &nbsp;Bacterial, fungal, and parasitic diseases | 39 (11.3, 8.4–15.1) | 16 (41.0, 27.1–56.6) | | 7 (17.9, 9.0–32.7) | |
| &nbsp;Viral diseases | 38 (11.0, 8.2–14.8) | 10 (26.3, 15.0–42.0) | | 9 (23.7, 13.0–39.2) | |
| &nbsp;Other | 119 (34.6, 29.8–39.8) | 31 (26.1, 19.0–34.6) | | 27 (22.7, 16.1–31.0) | |
| Location^a,d | | | 0.64 | | <0.001 |
| &nbsp;International only | 166 (50.2, 44.8–55.5) | 51 (30.7, 24.2–38.1) | | 19 (11.4, 7.5–17.2) | |
| &nbsp;US/Canada only | 107 (32.3, 27.5–37.5) | 31 (29.0, 21.2–38.2) | | 31 (29.0, 21.2–38.2) | |
| &nbsp;US/Canada and international | 58 (17.5, 13.8–22.0) | 14 (24.1, 15.0–36.5) | | 24 (41.4, 29.6–54.2) | |
| Intervention^b | | | 0.35 | | <0.001 |
| &nbsp;Pharmacologic | 179 (52.0, 46.8–57.3) | 50 (27.9, 21.9–34.9) | | 53 (29.6, 23.4–36.7) | |
| &nbsp;Vaccine or biologic | 79 (23.0, 18.8–27.7) | 19 (24.1, 16.0–34.5) | | 17 (21.5, 13.9–31.8) | |
| &nbsp;Medical device | 32 (9.3, 6.7–12.8) | 13 (40.6, 25.5–57.7) | | 2 (6.3, 1.7–20.1) | |
| &nbsp;Other | 54 (15.7, 12.2–19.9) | 17 (31.5, 20.7–44.7) | | 4 (7.4, 2.9–17.6) | |
| Randomized^b | | | 0.54 | | 0.04 |
| &nbsp;Yes | 330 (95.9, 93.3–97.6) | 96 (29.1, 24.5–34.2) | | 76 (23.0, 18.8–27.9) | |
| &nbsp;No | 14 (4.1, 2.4–6.7) | 3 (21.4, 7.6–47.6) | | 0 (0, 0–0) | |
| Masking^b | | | 0.03 | | <0.001 |
| &nbsp;Open-label | 168 (48.8, 43.6–54.1) | 49 (29.2, 22.8–36.4) | | 31 (18.5, 13.3–25.0) | |
| &nbsp;Double-blind | 121 (35.2, 30.3–40.4) | 26 (21.5, 15.1–29.6) | | 42 (34.7, 26.8–43.5) | |
| &nbsp;Single-blind | 44 (12.8, 9.7–16.7) | 19 (43.2, 29.7–57.8) | | 3 (6.8, 2.3–18.2) | |
| &nbsp;Not reported | 11 (3.2, 1.8–5.6) | 5 (45.5, 21.3–72.0) | | 0 (0, 0–0) | |
| Enrollment^b | | | 0.25 | | 0.004 |
| &nbsp;≥500 patients | 150 (43.6, 38.5–48.9) | 48 (32.0, 25.1–39.8) | | 44 (29.3, 22.6–37.1) | |
| &nbsp;<500 patients | 194 (56.4, 51.1–61.5) | 51 (26.3, 20.6–32.9) | | 32 (16.5, 11.9–22.4) | |
| Journal impact factor^c | | | 0.34 | | 0.24 |
| &nbsp;≥10 | 112 (32.6, 27.8–37.7) | 36 (32.1, 24.2–41.3) | | 29 (25.9, 18.7–34.7) | |
| &nbsp;<10 | 232 (67.4, 62.3–72.2) | 63 (27.2, 21.8–33.3) | | 47 (20.3, 15.6–25.9) | |

Notes: NI=non-inferiority; CTgov=ClinicalTrials.gov; CI=confidence interval.

^a Data abstracted from ClinicalTrials.gov.

^b Data abstracted from journal publication.

^c Data abstracted from Journal Citation Reports (Thomson Reuters).

^d Total n=331; study location information was missing for 13 trials.

^e A total of 129 trial records posted summary results, of which 76 reported that non-inferiority analyses were performed and provided appropriate confidence intervals or p-values to interpret results.

Results

We identified and characterized 344 unique trials registered on ClinicalTrials.gov, published in 338 articles (6 described multiple trials) that reported primary results of non-inferiority trials (Table). Consistent with our search strategy, all publications described non-inferiority designs and nearly all (n=340 trials; 98.8%) provided non-inferiority margins. However, rationales for choosing margins were provided for only 95 (27.6%); the most commonly cited reasons were previous research (including historical data and meta-analyses) (n=46) and reliance on expert opinion/clinical judgment (n=43). In contrast, fewer than one-third of trials (n=99; 28.8%) described non-inferiority designs on ClinicalTrials.gov, among which 15 (4.4% of total) specified non-inferiority margins, 9 of which (2.6% of total) were prespecified at initial registration. The ClinicalTrials.gov and published margin values were concordant for all 15.

Nearly all publications reported non-inferiority analyses and results (n=342, 99.4%). On ClinicalTrials.gov, 129 (37.5%) had posted summary results, among which 76 (22.1% of total) reported that non-inferiority analyses were performed and provided appropriate confidence intervals (CI) or p-values to interpret results. On ClinicalTrials.gov, industry-sponsored trials were less likely to register non-inferiority designs when compared with non-industry-sponsored trials (22.9%, 95% CI, 17.7%–29.0%, vs. 38.1%, 95% CI, 30.3%–46.5%; p=0.002), but were more likely to provide results with appropriate details of non-inferiority analyses (33.3%, 95% CI, 27.3%–40.0%, vs. 4.5%, 95% CI, 2.1%–9.4%; p<0.001). Location, intervention, masking, and enrollment were also associated with providing results with appropriate details (Table).

Discussion

Our cross-sectional analysis of non-inferiority trials published between 2012 and 2014 demonstrated near-universal reporting of non-inferiority designs and margins within our sample of publications, although not rationales. However, voluntary reporting of non-inferiority designs and margins in corresponding ClinicalTrials.gov records was suboptimal, consistent with prior research.6 Moreover, among trials with results reported on ClinicalTrials.gov, more than one-third provided insufficient information to interpret non-inferiority analyses.

Our study was limited to a sample of recently published non-inferiority trials registered on ClinicalTrials.gov. While ClinicalTrials.gov does not currently provide specific registration data elements for specifying non-inferiority trial designs, it does provide specific elements for reporting non-inferiority results. Nevertheless, modifications to the registration data elements may improve reporting and temper the possibility of post hoc distortion of designs and margins, facilitating transparency and accountability in non-inferiority trial conduct. Our findings raise concerns about the adequacy of non-inferiority trial registration and results reporting within publicly accessible trial registries and highlight the need for continued efforts to improve reporting quality.

Acknowledgments

The authors would like to thank Rebecca J. Williams, PharmD, MPH, of the National Library of Medicine for her comments on an earlier draft of this manuscript. Dr. Williams was not compensated for her contributions to this work.

Data access and responsibility: Mr. Gopal and Dr. Ross had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Funding/support and role of the sponsor: This project was not supported by any external grants or funds. Dr. Ross is supported by the National Institute on Aging (K08 AG032886) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Tse was supported by the Intramural Research Program of the National Library of Medicine, National Institutes of Health.

Conflicts of interest: Dr. Tse is an employee of the U.S. National Institutes of Health and a program analyst for ClinicalTrials.gov. Dr. Ross receives support from Medtronic, Inc. and Johnson & Johnson to develop methods of clinical trial data sharing, from the Centers of Medicare and Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting, and from the Food and Drug Administration (FDA) to develop methods for post-market surveillance of medical devices.

Author contributions: Mr. Gopal and Drs. Desai and Ross were responsible for the conception and design of this work. Mr. Gopal and Dr. Tse were responsible for acquisition of data. Mr. Gopal and Dr. Ross drafted the manuscript and conducted the statistical analysis. Dr. Ross provided supervision. All authors participated in the analysis and interpretation of the data and critically revised the manuscript for important intellectual content.

References

1. Gotzsche PC. Lessons from and cautions about noninferiority and equivalence randomized trials. JAMA. 2006;295(10):1172–1174. doi:10.1001/jama.295.10.1172
2. Schumi J, Wittes JT. Through the looking glass: understanding non-inferiority. Trials. 2011;12:106. doi:10.1186/1745-6215-12-106
3. Piaggio G, Elbourne DR, Pocock SJ, Evans SJ, Altman DG. Reporting of noninferiority and equivalence randomized trials: extension of the CONSORT 2010 statement. JAMA. 2012;308(24):2594–2604. doi:10.1001/jama.2012.87802
4. Le Henanff A, Giraudeau B, Baron G, Ravaud P. Quality of reporting of noninferiority and equivalence randomized trials. JAMA. 2006;295(10):1147–1151. doi:10.1001/jama.295.10.1147
5. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database--update and key issues. N Engl J Med. 2011;364(9):852–860. doi:10.1056/NEJMsa1012065
6. Dekkers OM, Soonawala D, Vandenbroucke JP, Egger M. Reporting of noninferiority trials was incomplete in trial registries. J Clin Epidemiol. 2011;64(9):1034–1038. doi:10.1016/j.jclinepi.2010.12.008
