JRSM Open. 2014 Apr 9;5(5):2042533313517688. doi: 10.1177/2042533313517688

Discrepancies between registration and publication of randomised controlled trials: an observational study

Kate F Walker 1, Graham Stevenson 2, James G Thornton 1
PMCID: PMC4012655  PMID: 25057391

Summary

Objectives

To determine the consistency between information contained in the registration and publication of randomised controlled trials (RCTs).

Design

An observational study of RCTs published between May 2011 and May 2012 in the British Medical Journal (BMJ) and the Journal of the American Medical Association (JAMA) comparing registry data with publication data.

Participants and Settings

Data extracted from published RCTs in BMJ and JAMA.

Main outcome measures

Timing of trial registration in relation to completion of trial data collection and publication. Registered versus published primary and secondary outcomes, sample size.

Results

We identified 40 RCTs in BMJ and 36 in JAMA. All 36 JAMA trials and 39 (98%) BMJ trials were registered. All registered trials were registered prior to publication. Thirty-two (82%) BMJ trials recorded the date of data completion; of these, in two trials the date of trial registration postdated the registered date of data completion. There were discrepancies between primary outcomes declared in the trial registry information and in the published paper in 18 (47%) BMJ papers and seven (19%) JAMA papers. The original sample size stated in the trial registration was achieved in 24 (60%) BMJ papers and 21 (58%) JAMA papers.

Conclusions

Compulsory registration of RCTs is meaningless if the content of registry information is not complete or if discrepancies between registration and publication are not reported. This study demonstrates that discrepancies in primary and secondary outcomes and sample size between trial registration and publication remain commonplace, giving further strength to the World Health Organisation’s argument for mandatory completion of a minimum number of compulsory fields.

Keywords: clinical trial, randomised controlled trial, consensus development conference

Background

Since 2004, clinical trial registration has been recommended, and reputable medical journals claim to insist on it.1 However, in practice registration may be omitted or postdated, or the information provided in the trial registration may not match the eventual trial publication. ‘Publication bias’ arises when the results of trials are not published because of the strength or direction of the results. ‘Outcome reporting bias’ refers to the selection for publication of a subset of the originally chosen variables, based on the results.2 Both publication and outcome reporting bias threaten the validity of evidence-based medicine, because clinicians only have access to the results which the researcher chooses to publish. Strict, comprehensive registration of trials at their outset exposes the differences between what was originally planned and what is eventually published, allowing critical evaluation of the trial and minimising these two sources of bias.

A Cochrane Review in 2011 found that discrepancies between trial registration and publication were common and often not declared in the publication.2 The 2010 CONSORT checklist, a guide to what to include when reporting clinical trials, includes ‘Any changes to trial outcomes after the trial commenced, with reasons’, which should provide further transparency. In a comprehensive review of the historical context of biased reporting of clinical trials, Dickersin and Chalmers3 conclude that it is a ‘serious and extensive problem, which threatens the best interests of patients, undermines the scientific enterprise and wastes resources’. Mathieu et al.7 in a survey of reviewers of clinical trial manuscripts found that only 34% had examined trial registry information in the process of peer review. We sought to compare trial registrations with trial publications from two reputable general medical journals to see if anything had changed.

Methods

We searched for randomised controlled trials (RCTs) published between May 2011 and May 2012 in the British Medical Journal (BMJ) and the Journal of the American Medical Association (JAMA) using the journals’ respective webpages. We assessed the abstracts of retrieved citations to determine if the study was randomised. We defined a randomised study as a comparative study in which there is random allocation of participants to an intervention and a control group, with follow-up to examine differences in outcomes between the two groups.5 We excluded review articles, observational studies, meta-analyses and follow-up studies.

We examined the corresponding trial registry information using the trial registration number in the published paper where it was available. Where a trial registration number was unavailable, we searched for trial registry information in the following clinical trial registries: International Standard Randomised Controlled Trial Number Register (ISRCTN), ClinicalTrials.gov (NCT) and the registry of the country of the first author of the paper.

We collected data on whether the trial was registered, date of trial registration and date of trial publication, registered and published primary endpoints, registered and published sample sizes, registered and published statistical analysis plans and whether discrepancies were declared in the published paper.

We read the full-text articles of all RCTs and compared trial registry information with information from the corresponding trial publication. Where available, we examined the archived trial registry data, looking for changes to the registration to see whether changes had been made to the registry post trial publication.

Results

We identified 40 RCTs in BMJ and 36 in JAMA. All 36 JAMA trials and 39 (98%) BMJ trials were registered; only one trial was unregistered. This was a trial of sildenafil citrate use for pulmonary arterial hypertension, and the paper reported on the ocular safety of the drug. In the publication arising from this trial, two different trial registration numbers were reported, but neither referred to an RCT in which ocular safety was an outcome measure (primary or secondary).

All 39 BMJ trials that were registered were registered prior to publication. Thirty-two (82%) BMJ trials recorded the anticipated or actual date of outcome data completion. In 31 of those trials, the date of data completion preceded the date of trial publication by an average of 24 months (range 7–56 months). One trial registered an anticipated date of data completion nine months after the date of trial publication. In two of the 32 trials, the date of trial registration postdated the registered date of data completion (by 5 and 6 months). Excluding these two trials, the average time between trial registration and the date of data completion was 27 months (range 5–63 months).

All 36 JAMA trials were registered prior to publication. Thirty-three (92%) JAMA trials recorded the date of outcome data completion. Of those, in 31 trials the date of data completion preceded the date of trial publication by an average of 21 months. In two trials, the registered anticipated date of data completion postdated the date of trial publication (by 8 months and 47 months). All 31 trials were registered before the date of data completion, by an average of 46 months (range 3–142 months).

There were discrepancies between primary outcomes declared in the trial registry information and in the published paper in 18 BMJ papers (47%) and seven JAMA papers (19%). The discrepancies are listed in Table 1.

Table 1.

Discrepancies between registered and published primary outcomes.

Registered and published primary outcomes BMJ (n = 39) JAMA (n = 36)
No discrepancy between registered and published primary outcome 21 (53%) 29 (81%)
No primary outcome registered 2 (5%) 1 (3%)
Change in timing of the assessment of the primary outcome 4 (10%) 1 (3%)
New primary outcome introduced in the paper 7 (18%) 3 (8%)
Registered primary outcome not reported in the paper 2 (5%) 0
Registered primary endpoint reported as secondary outcome in the paper 1 (3%) 1 (3%)
Registered primary endpoint reported in a previous publication 1 (3%) 0
Registered secondary outcome reported as a primary outcome in the paper 1 (3%) 0
Change in study population 0 1 (3%)

The original sample size stated in the trial registration was achieved in 24 (60%) BMJ papers and 21 (58%) JAMA papers. Of those RCTs where the sample size was not achieved, the shortfall was less than 10% in 7/11 BMJ studies and 3/14 JAMA studies (Table 2). Where the registered sample size was not achieved, this was openly disclosed in the trial publication in 4/16 BMJ papers and 4/15 JAMA papers.

Table 2.

Discrepancies between registered and published sample size.

Sample size BMJ (n = 40) JAMA (n = 36)
Sample size achieved 24 (60%) 21 (58%)
Sample size not achieved 11 (28%) 14 (39%)
 By >10% 4 11
 By <10% 7 3
Sample size not registered 2 (5%) 0
Registered sample size changed to actual sample size achieved 3 (8%) 1 (3%)

We did not find evidence of a registered statistical analysis plan for any of the trials we included from BMJ, and only one JAMA trial provided details of its statistical analysis plan in the trial registration.

Discussion

Compulsory registration of RCTs is meaningless if the content of registry information is not complete or if discrepancies between registration and publication are not reported. This study demonstrates that while the majority of published RCTs in major journals are registered in a timely fashion, discrepancies in primary and secondary outcomes and sample size between trial registration and publication are commonplace, and often not mentioned in the published paper.

Although the Cochrane Review included 16 studies with a median of 54 RCTs, many people believe there has been a recent improvement since its publication. Our study of 76 RCTs is larger than the average study included in the Cochrane Review and more up to date.

The papers from the two journals were examined by only one author (KW, BMJ; GS, JAMA). It would have added more scientific rigour if each paper had been examined separately by two authors.

In a study of 110 RCTs published in 2009 comparing trial registry information with trial publication, Ewart et al.6 found that a primary outcome had been changed in 31% of trials and a secondary outcome had been changed in 70% of trials. Mathieu et al.4 in a similar paper published in 2009 examining 147 studies found a discrepancy in the primary outcome between registration and publication in 31% of studies. We found discrepancies in the primary outcome in 33% of trials. Ewart et al.6 clearly state that they examined trial registry data as it appeared on the day of examination but did not examine changes to trial registration in the archive. We did examine archived trial registry information for changes, which may explain why a greater number of discrepancies was uncovered and why there appears to have been no improvement since the publication of the Cochrane Review. Hannink et al.8 in a study of 327 surgical RCTs found that only 152 trials were registered before the end of the trial; of those 152 trials, 75 (49%) showed evidence of a discrepancy between registered and published outcomes, and in 28% of trials these discrepancies favoured statistically significant results. Ross et al.10 scrutinised the completeness of trial registry information in ClinicalTrials.gov and found that while compulsory data sets were completed nearly 100% of the time, reporting of optional data sets varied: principal investigator name (63%), enrolment (82%), start date (87%), end date (53%), primary outcome measure (66%) and secondary outcome measure (56%). Huic et al.9 in a study of 149 RCTs found that 77.6% of RCTs had a different sample size recorded in the registration data than in the published data.

This paper gives further strength to the argument that the World Health Organisation Minimal Registration Data Set should be adopted by trial registries and journals, so that compulsory fields are completed when a trial is registered (WHO Trial Registration Data Set Version 1.2.1). Reveiz et al.11 argue that making trial protocols publicly available would expose the full methodological detail of trials and improve the ability of journals to critically appraise their reporting.

It is well established that clinical trials reporting positive outcomes are more likely to be published than those that feature negative results. But the inclusion of positive outcomes can also influence the speed of publication and the accessibility of the work.3

It would be interesting to know whether trials with discrepancies in primary and secondary outcomes between registration and publication are similarly more likely to report positive outcomes than trials that adhere to their registered intentions.

Conclusion

It is clear that compulsory registration without scrutiny of the registration contents leads to widespread discrepancies between the trial registration and the eventual trial publication. Registration should be a more stringent exercise, in which certain fields are compulsory.

Declarations

Competing interests

None declared

Funding

None declared

Ethical approval

The study was an observational study of published RCTs and therefore did not require ethical approval.

Guarantor

JGT

Contributorship

JGT had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: JGT, KW and GS. Acquisition of data: KW and GS. Analysis and interpretation of data: KW. Drafting of manuscript: KW. Critical revision of manuscript: JGT.

Acknowledgements

None

Provenance

Not commissioned; peer-reviewed by Katrien Oude Rengerink

References

1. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. Lancet 2004; 364(9438): 911–912.
2. Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR. Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev 2011; 19(1): MR000031.
3. Dickersin K, Chalmers I. Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the WHO. J R Soc Med 2011; 104(12): 532–538.
4. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009; 302(14): 977–984.
5. Khan K, Kunz R, Kleijnen J, et al. Systematic reviews: to support evidence based medicine, 2nd ed. London, UK: Hodder Arnold, 2011.
6. Ewart R, Lausen H, Millian N. Undisclosed changes in outcomes in randomized controlled trials: an observational study. Ann Fam Med 2009; 7(6): 542–546.
7. Mathieu S, Chan AW, Ravaud P. Use of trial register information during the peer review process. PLoS One 2013; 8(4). DOI: 10.1371/journal.pone.0059910.
8. Hannink G, Gooszen HG, Rovers MM. Comparison of registered and published primary outcomes in randomized clinical trials of surgical interventions. Ann Surg 2013; 257(5): 818–823.
9. Huic M, Marusic M, Marusic A. Completeness and changes in registered data and reporting bias of randomized controlled trials in ICMJE journals after trial registration policy. PLoS One 2011; 6(9): 1–9.
10. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med 2009; 6(9): 1–9.
11. Reveiz L, Chan AW, Krleza-Jeric K, Granados CE, Pinart M, Etxeandia I, et al. Reporting of methodologic information on trial registries for quality assessment: a study of trial records retrieved from the WHO search portal. PLoS One 2010; 5(8): 1–6.

