Journal of the Royal Society of Medicine
2011 Dec;104(12):532–538. doi: 10.1258/jrsm.2011.11k042

Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the WHO

Kay Dickersin,1 Iain Chalmers2
PMCID: PMC3241511  PMID: 22179297

Why is incomplete reporting of research a problem?

Under-reporting of the results of research in any field of scientific enquiry is scientific misconduct because it delays discovery and understanding. In the field of clinical research, incomplete and biased reporting has resulted in patients suffering and dying unnecessarily.1 Reliance on an incomplete evidence base for decision-making can lead to imprecise or incorrect conclusions about an intervention's effects. Biased reporting of clinical research can result in overestimates of beneficial effects2 and suppression of harmful effects of treatments. Furthermore, planners of new research are unable to benefit from relevant past research.

Failure to publish is also unethical. Participants in clinical research are usually assured that their involvement will contribute to knowledge; but this does not happen if the research is not reported publicly and accessibly. Moreover, failure to publish is simply a waste of precious research and other resources.3 Every year an estimated 12,000 clinical trials that should have been fully reported are not; the resources wasted correspond to just under a million tonnes of carbon dioxide annually – roughly the emissions of 800,000 round-trip flights between London and New York.4

In brief, failure to report research findings is not only unscientific but also unethical.5–8 How did this problem come to be recognized and investigated, and what steps are being taken today to deal with it?

Evidence of biased reporting of studies

‘Reporting bias’ occurs when the nature and direction of the results of research influence their dissemination. Research results that are not statistically significant (‘negative’) tend to be under-reported,9 while results that are regarded as exciting or statistically significant (‘positive’) tend to be over-reported.10–12 The nature and direction of research results can influence whether or not research is reported at all,9,13 and if so, in which forms.14 They can also influence the speed at which results are reported,15–17 the language in which they are published,18,19 and the likelihood that the research will be cited.20–25

Failure to publish research findings is pervasive.26,27 It has been documented for research conducted in many countries, including Australia, France, Germany, Spain, Switzerland, the United Kingdom and the United States. For example, an analysis of follow-up studies of 29,729 research reports initially made available only in abstract form found that fewer than half of the studies went on to full publication, and that ‘positive’ results were associated with full publication, regardless of whether ‘positive’ had been defined as any statistically significant result or as a result favouring the experimental treatment.14

Recognition and investigation of biased reporting of research

The problem of reporting bias has been recognized for hundreds of years. In the 17th century, Francis Bacon noted that ‘The human intellect … is more moved by affirmatives than by negatives’;28 and Robert Boyle, the chemist, lamented the common tendency among scientists not to publish their results until they had a ‘system’ worked out, with the result that ‘many excellent notions or experiments are, by sober and modest men, suppressed’.29 Other scientists, across many fields, have also recognized the problem over the years.30–35

For example, the bronze statue of Albert Einstein outside the National Academy of Sciences in Washington, DC is inscribed with a quotation from a letter that he wrote on 3 March 1954, for a conference of the Emergency Civil Liberties Committee:

Academic freedom as I understand it means having the right to seek the truth and to publish and teach what is believed to be true. Naturally this right comes together with the duty not to withhold a part of what is believed to be true. It is clear that any restriction on academic freedom hinders the dissemination of knowledge in the population and therefore restrains rational judgement and action.36

In 1959, the father of medical statistics in Britain, Austin Bradford Hill, wrote:

A negative result may be dull but often it is no less important than the positive; and in view of that importance it must, surely, be established by adequate publication of the evidence.33

And in the same year, Seymour Kety, an American psychiatrist wrote:

A positive result is exciting and interesting and gets published quickly. A negative result, or one which is inconsistent with current opinion, is either unexciting or attributed to some error and is not published. So that at first in the case of a new therapy there is a clustering toward positive results with fewer negative results being published. Then some brave or naïve or nonconformist soul, like the little child who said that the emperor had no clothes, comes up with a negative result which he dares to publish. That starts the pendulum swinging in the other direction, and now negative results become popular and important.37

Although the importance of reporting biases had been recognized for centuries, it was not until the second half of the 20th century that researchers began to investigate the phenomenon. The impetus for these investigations came from the development of research synthesis, first by social scientists, then by health researchers.38–40 Unsurprisingly, researchers who have exposed reporting biases are often those who have also been involved in the application of methods for research synthesis.

Investigations of biased reporting of research began with surveys of journal articles, which revealed improbably high proportions of published studies showing statistically significant differences.41–43 Subsequent surveys of authors and peer reviewers showed that research that had yielded ‘negative’ results was less likely than other research to be submitted or recommended for publication.44–47 These findings were reinforced by the results of experimental studies, which showed that studies with no reported statistically significant differences were less likely to be accepted for publication.48–50

The most direct evidence of publication bias in the medical field has come from following up cohorts of studies identified at the time of funding,51 ethics approval,52,53 submission for drug licences,54–56 or when they were reported in summary form, for example in conference abstracts.14,57 Systematic reviews of this body of evidence have shown that ‘positive findings’ are the principal factor associated with subsequent publication: a systematic review of data from five cohort studies following research projects from inception found that, overall, the odds of publication for studies with ‘positive’ findings were about two and a half times greater than the odds of publication for studies with ‘negative’ or ‘null’ results, and that study results were the principal factor explaining these differences in reporting.9,13,27,58
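The ‘two and a half times greater odds’ statistic can be made concrete with a small worked example. The counts below are hypothetical, invented only so that the arithmetic reproduces a roughly 2.5-fold odds ratio; they are not data from any of the cited cohort studies.

```python
# Hypothetical counts, not data from the cohort studies cited above:
# an inception cohort of 210 studies, classified by result and by
# whether they were eventually published.

def odds_ratio(pub_pos, unpub_pos, pub_neg, unpub_neg):
    """Odds of publication for 'positive' studies divided by the odds
    of publication for 'negative' or 'null' studies."""
    return (pub_pos / unpub_pos) / (pub_neg / unpub_neg)

# 'Positive' studies: 80 published, 40 unpublished -> odds 2.0
# 'Negative'/'null' studies: 40 published, 50 unpublished -> odds 0.8
print(round(odds_ratio(80, 40, 40, 50), 2))  # → 2.5
```

Note that the odds ratio (2.5) is larger than the corresponding ratio of publication *proportions* (80/120 versus 40/90, about 1.5), which is why reviews in this area are careful to report odds rather than simple percentages.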

Even when studies are eventually reported in substantive publications, ‘negative’ findings take longer to appear in print:15,17,59,60 on average, clinical trials with ‘positive’ results are published about a year sooner than trials with ‘null’ or ‘negative’ results. There is also evidence that, compared with negative or null results, statistically significant results tend to be published in journals with higher impact factors,52 and that publication in the mainstream (‘non-grey’) literature is associated with an overall 9% larger estimate of treatment effects than reports in the grey literature.61 Articles reporting negative findings for efficacy, or reporting adverse events associated with an exposure, may be published but ‘hidden’ in harder-to-access sources.62 Furthermore, even when studies initially published in abstract form are published in full, ‘negative’ results are less likely than ‘positive’ results to appear in high-impact journals.63

Selective reporting of suspected or confirmed adverse treatment effects is of particular concern because of the potential for patient harm. In a study of adverse drug events submitted to Scandinavian drug licensing authorities, subsequently published studies were less likely than unpublished studies to have recorded adverse events.54 The lay and scientific media have drawn attention to failures to report adverse events accurately, for example for selective serotonin reuptake inhibitors for depression,64,65 rosiglitazone for diabetes,66 and rofecoxib for arthritis pain.67

Biased reporting of data within studies

Even when substantive reports of research are published, there may be biased reporting of outcome data within the reports.13,56,68–71 Comparisons of published articles with the study protocols approved by an ethics committee in Denmark found that in nearly two-thirds of trial reports at least one planned outcome had been changed, introduced, or omitted in the published article.70 In a similar comparison of randomized trials funded by the Canadian Institutes of Health Research, primary outcomes differed between the protocol and published article 40% of the time.69 In both of these studies, outcomes that were statistically significant in favour of an experimental intervention had a higher chance of being published in full compared to those that were not statistically significant. Other analyses have shown important discrepancies between journal articles and information supplied for trial registration.72–75

Biased outcome reporting has also been shown in a comparison of data about 12 antidepressant agents submitted for review to the Food and Drug Administration (FDA) with subsequent publications.56 Almost a third (31%) of the 74 FDA-registered studies were not published, and publication was associated with a ‘positive’ outcome (as determined by the FDA). Studies that the FDA had considered ‘negative’ or ‘questionable’ (n = 36) were either not published (22 studies), reported with a positive interpretation (11 studies), or reported in a manner consistent with the FDA interpretation (3 studies). In summary, evidence from the published literature suggested that 94% of studies had positive findings, while the FDA analysis concluded that only 51% had positive findings.
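The 94% and 51% figures can be reconstructed arithmetically from the reported counts. The sketch below uses the numbers given in the text; the one detail not stated above (a single FDA-‘positive’ study that also went unpublished) is taken from the cited report56 and is flagged as such.

```python
# Reconstructing the arithmetic of the FDA antidepressant comparison (ref. 56).
# All counts come from the paragraph above except pos_unpublished, which is
# taken from the cited report itself.

total = 74
fda_positive = 38                       # judged 'positive' by the FDA
fda_negative = total - fda_positive     # 36 'negative' or 'questionable'

neg_unpublished = 22
neg_spun_positive = 11                  # published with a positive interpretation
neg_reported_negative = 3
pos_unpublished = 1                     # from the cited report (assumption flagged)

published = total - neg_unpublished - pos_unpublished                   # 51
appear_positive = (fda_positive - pos_unpublished) + neg_spun_positive  # 48

print(round(100 * appear_positive / published))  # → 94  (the literature's view)
print(round(100 * fda_positive / total))         # → 51  (the FDA's view)
```

The gap between the two printed percentages is the combined effect of non-publication (23 studies) and positive ‘spin’ in 11 published reports.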

Who is responsible for biased reporting of clinical research?

Reporting bias can arise because researchers and sponsors fail to submit study findings for publication, or because journal editors and others reject submitted reports. Numerous surveys of investigators have left little doubt that almost all failure to publish arises because investigators do not submit reports for publication,63,76 with only a small proportion of studies remaining unpublished because of rejection by journals.77 Indeed, qualitative studies of editorial discussion indicate that a study's scientific rigour is editors' area of greatest concern.78 Investigators usually report that they did not write up and submit their research because they were ‘not interested’ in the results; editorial rejection by journals is only rarely given as a cause of failure to publish. Even investigators who have initially published their results as conference abstracts are less likely to submit their findings for full publication unless the results are ‘significant’.14

It is now also well established that biased reporting of research studies is associated with the source of funding. In particular, research funded by the pharmaceutical industry has been shown to be less likely to be published than research funded from other sources,79,80 and studies sponsored by pharmaceutical companies are more likely than studies with other sponsors to report outcomes favouring the sponsor.81,82 There are several possible explanations for the association between industry support and failure to publish ‘negative’ results. Industry may selectively publish findings supporting a product's efficacy. It is also possible that industry is more likely to design studies with a high likelihood of a positive outcome, for example by selecting a comparison population likely to yield results favouring the product.83,84 This is clearly unethical.

The practice of hiring a commercial firm to write up the results of a clinical trial is common in industry trials.85 It has been estimated that 75% of industry-initiated studies approved by two ethics committees in Denmark had ghost authors,86 and the named authors rarely included the hired writer. The World Association of Medical Editors has made clear that it considers such ghost authorship dishonest (see http://www.wame.org/resources/policies – accessed 1 August 2008). Unnamed, paid medical writers may be asked to serve commercial interests in the way that research methods and results are presented. When a sufficiently large proportion of the literature on a drug is produced in this way, the literature, and thus opinion about the drug, may be influenced.87

Because industry is the main funder of clinical research, it must inevitably shoulder a high proportion of the blame for this unscientific and unethical behaviour. The responsibility for biased reporting of clinical research does not lie solely with industry, however. As long ago as 1998, the Ethics Committee of the Faculty of Pharmaceutical Medicine, which represents physicians working in industry in particular, declared that:

Pharmaceutical physicians … have a particular ethical responsibility to ensure that the evidence on which doctors should make their prescribing decisions is freely available … the outcome of all clinical trials on a medicine should be reported.88

Dealing with incomplete and biased reporting of research

Investigations of incomplete and biased reporting of clinical research conducted over the past three decades have made clear that this is a serious and extensive problem, which threatens the best interests of patients, undermines the scientific enterprise, and wastes resources.

Various attempts have been made to overcome the effects of reporting biases. These have included statistical adjustments of the results of published studies,89–91 surveys of investigators in attempts to locate unpublished studies,92 editorial ‘amnesties’ for unpublished trials,93,94 and journals and journal sections95–97 specifically designated for reporting ‘negative results’ (a misconceived notion5). None of these approaches has proved satisfactory, however.
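One of the earliest of the ‘statistical adjustments’ mentioned above is Rosenthal's fail-safe N:89 an estimate of how many unreported null studies would have to be sitting in file drawers before a set of published results lost statistical significance. The sketch below is a minimal illustration, using the conventional one-tailed 5% critical value; the study z-scores are invented for the example.

```python
# Rosenthal's fail-safe N (ref. 89), a minimal sketch.
# How many averaged-null (z = 0) unpublished studies would be needed to
# drag the Stouffer combined z-score of k published studies below z_crit?
# Solves sum(z) / sqrt(k + N) = z_crit for N.

def fail_safe_n(z_scores, z_crit=1.645):
    """Number of additional null studies needed to make the combined
    result non-significant at one-tailed p = .05 (z_crit = 1.645)."""
    k = len(z_scores)
    return (sum(z_scores) / z_crit) ** 2 - k

# Illustrative input: five published studies, each with z = 2.0.
n = fail_safe_n([2.0] * 5)
print(round(n))  # → 32
```

A large fail-safe N was read as reassurance that the published result was robust to the file drawer; the method's weakness, which helps explain why ‘none of these approaches has proved satisfactory’, is that it assumes unpublished studies average to a true null rather than being biased.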

In 1986, John Simes showed that analyses of treatments for ovarian cancer based on the results of trials that had been registered before their results were known revealed no statistically significant differences, while analyses based on all published trials did. He postulated that these differences reflected biased under-reporting of trials, and suggested that the problem should be addressed by establishing an international registry of clinical trials.98 Over the following two decades, pressure to register trials gradually increased.99–104

It took a public scandal in 2004 to provide the momentum needed to reach a consensus that clinical trial registration, which had been called for repeatedly over the previous two decades, should become mandatory. In June of that year, Eliot Spitzer, the Attorney General of the State of New York, sued GlaxoSmithKline, maker of the antidepressant paroxetine, for suppressing evidence of possible serious harmful effects, thus depriving physicians of the information needed to assess the drug's risks.64,65 A systematic review of the relevant published and unpublished data showed that the favourable impression created by the published studies was negated when unpublished data were included.105

The scandal prompted the International Committee of Medical Journal Editors to announce that their journals would require, as a condition of considering reports of clinical trials for publication, that the studies had been registered prior to enrolling participants.67 Furthermore, under the aegis of the World Health Organization (WHO), it was agreed that basic information about all clinical trials should be registered, at inception, and that this information should be publicly accessible through the WHO International Clinical Trials Registry Platform.106

Public availability of full study protocols, whether at trial inception,107,108 at registration,71,109 or alongside reports of trials,110 is also gaining momentum.74,111 This development has been fuelled by evidence of biased reporting of outcomes within studies13,56,68–71,112 and is reflected in the development of reporting guidelines for protocols.113

It remains to be seen how well these measures will deal with a serious problem recognized nearly four centuries ago by Francis Bacon.28

DECLARATIONS

Competing interests

None declared

Funding

None

Ethical approval

Not applicable

Guarantor

KD

Contributorship

Both authors contributed equally

Acknowledgements

The authors are grateful to Doug Altman and Mike Clarke for drawing their attention to relevant historical material; and to Doug Altman, An-Wen Chan and Sally Hopewell for commenting on earlier drafts of this brief history of reporting biases. Additional material for this article is available from the James Lind Library website (www.jameslindlibrary.org), where it was originally published.

References

  • 1.Cowley AJ, Skene A, Stainer K, Hampton JR The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: an example of publication bias. Int J Cardiol 1993;40:161–6 [DOI] [PubMed] [Google Scholar]
  • 2.Sterne J, Egger M, Moher D, on behalf of the Cochrane Bias Methods Group, eds. Chapter 10. Addressing reporting biases. In: Higgins JPT, Green S, Cochrane Handbook for Systematic Reviews of Interventions. Version 5.0.0 Oxford: The Cochrane Collaboration, 2008. See www.cochrane-handbook.org [Google Scholar]
  • 3.Chalmers I, Glasziou P Avoidable waste in the production and reporting of research evidence. Lancet 2009;374:86–9 [DOI] [PubMed] [Google Scholar]
  • 4.Chalmers I, Glasziou PG The environmental, scientific and ethical scandal of biased under-reporting of research. BMJ 2009. See http://www.bmj.com/cgi/eletters/339/oct30_1/b4187#224821
  • 5.Chalmers I Proposal to outlaw the term ‘negative trial’. BMJ 1985;290:1002 [Google Scholar]
  • 6.Chalmers I Underreporting research is scientific misconduct. JAMA 1990;263:1405–8 [PubMed] [Google Scholar]
  • 7.Antes G, Chalmers I Under-reporting of clinical trials is unethical. Lancet 2003;361:978–9 [DOI] [PubMed] [Google Scholar]
  • 8.World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. Ferney-Voltaire: WMA, 2008 [Google Scholar]
  • 9.Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev 2009;1:MR000006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Rochon PA, Gurwitz JH, Simms RW, et al. A study of manufacturer supported trials of non-steroidal anti-inflammatory drugs in the treatment of arthritis. Arch Intern Med 1994;154:157–63 [PubMed] [Google Scholar]
  • 11.Tramèr M, Reynolds DJ, Moore RA, McQuay HJ Impact of covert duplicate publication on meta-analysis: A case study. BMJ 1997;315:635–40 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Von Elm E, Poglia G, Walder B, Tramèr MR Different patterns of duplicate publication. An analysis of articles used in systematic reviews. JAMA 2004;291:974–80 [DOI] [PubMed] [Google Scholar]
  • 13.Dwan K, Altman DG, Arnaiz JA, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE 2008;3:e3081 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Scherer RW, Langenberg P, Von Elm E Full publication of results initially presented in abstracts. Cochrane Database Syst Rev 2007;2:MR000005 [DOI] [PubMed] [Google Scholar]
  • 15.Stern JM, Simes RJ Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 1997;315:640–5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Dickersin K, Olson CM, Rennie D, et al. Association between time interval to publication and statistical significance. JAMA 2002;287:2829–31 [DOI] [PubMed] [Google Scholar]
  • 17.Hopewell S, Clarke M, Stewart L, Tierney J Time to publication for results of clinical trials. Cochrane Database Syst Rev 2007;2:MR000011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G Language bias in randomised controlled trials published in English and German. Lancet 1997;350:326–9 [DOI] [PubMed] [Google Scholar]
  • 19.Juni P, Holenstein F, Sterne J, Bartlett C, Egger M Direction and impact of language bias of controlled trials: An empirical study. Int J Epidemiol 2002;31:115–23 [DOI] [PubMed] [Google Scholar]
  • 20.Gøtzsche PC Reference bias in reports of drug trials. BMJ 1987;195:654–6 [PMC free article] [PubMed] [Google Scholar]
  • 21.Ravnskov U Frequency of citation and outcome of cholesterol lowering trials. BMJ 1992;305:717 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Ravnskov U Quotation bias in reviews of the diet heart idea. J Clin Epidemiol 1995;48:713–19 [DOI] [PubMed] [Google Scholar]
  • 23.Kjaergaard LL, Gluud C Citation bias of hepato-biliary randomized clinical trials. J Clin Epidemiol 2002;55:407–10 [DOI] [PubMed] [Google Scholar]
  • 24.Schmidt LM, Gøtzsche PC Of mites and men: reference bias in narrative review articles: a systematic review. J Fam Practice 2005;54:334–8 [PubMed] [Google Scholar]
  • 25.Nieminen P, Rucker G, Miettunen J, Carpenter J, Schumacher M Statistically significant papers in psychiatry were cited more often than others. J Clin Epidemiol 2007;60:939–46 [DOI] [PubMed] [Google Scholar]
  • 26.Dickersin K Publication bias: Recognizing the problem, understanding its origins and scope, and preventing harm. : Rothstein H, Sutton A, Borenstein M, Publication bias in meta-analysis: prevention, assessment, and adjustments. London: Wiley, 2005:11–33 [Google Scholar]
  • 27.Song F, Parekh S, Hooper L, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 2010;14:iii, ix–xi, 1–193 [DOI] [PubMed] [Google Scholar]
  • 28.Bacon F Franc. Baconis de Verulamio / Summi Angliae Cancellarii /Novum organum scientiarum. [Francis Bacon of St. Albans Lord Chancellor of England. A ‘New Instrument’ for the sciences] Lugd. Bat: apud Adrianum Wiingaerde, et Franciscum Moiardum. Aphorism XLVI 1645:45–6
  • 29.Hall MB In defense of experimental essays. : Robert Boyle on natural philosophy: An essay with selections from his writings. Bloomington, IN: Indiana University Press, 1980:119–31 [Google Scholar]
  • 30.Alanson E Practical observations on amputation, and the after-treatment. 2nd edn London: Joseph Johnson, 1782 [Google Scholar]
  • 31.Editorial The reporting of unsuccessful cases. Boston Medical and Surgical Journal 1909;161:263–4 [Google Scholar]
  • 32.Earp JR The need for reporting negative results. JAMA 1927;88:119 [Google Scholar]
  • 33.Hill AB Discussion of a paper by DJ Finney. J Roy Stat Soc Stat Soc 1959;119:19–20 [Google Scholar]
  • 34.Feynman RP Surely You're Joking, Mr. Feynman! New York, NY: Norton, 1985 [Google Scholar]
  • 35.Gould SJ Urchin in the Storm. Essays about Books and Ideas. New York, NY: Norton, 1987 [Google Scholar]
  • 36.Einstein A Statement for a conference of the Emergency Civil Liberties Committee, 3 March. Jerusalem: Albert Einstein Archives, Hebrew University of Jerusalem, 1954:28–1025 [Google Scholar]
  • 37.Kety S Comment. In: Cole JO, Gerard RW, Psychopharmacology. Problems in Evaluation. Publication 583. Washington, DC: National Academy of Sciences, 1959:651–2 [Google Scholar]
  • 38.Hunt M How science takes stock: The story of meta-analysis. New York: Russell Sage Foundation, 1997 [Google Scholar]
  • 39.Chalmers I, Hedges LV, Cooper H A brief history of research synthesis. Eval Health Prof 2002;25:12–37 [DOI] [PubMed] [Google Scholar]
  • 40.O'Rourke K. 2006. An historical perspective on meta-analysis: dealing quantitatively with varying study results. The James Lind Library. See http://www.jameslindlibrary.org .
  • 41.Sterling TD Publication decisions and their possible effects on inferences drawn from tests of significance – or vice versa. J Am Stat Assoc 1959;54:30–4 [Google Scholar]
  • 42.Smart RG The importance of negative results in psychological research. Can Psychol 1964;5:225–32 [Google Scholar]
  • 43.Chalmers TC, Koff RS, Grady GF A note on fatality in serum hepatitis. Gastroenterol 1965;49:22–6 [Google Scholar]
  • 44.Greenwald AG Consequences of prejudice against the null hypothesis. Psychol Bull 1975;82:1–20 [Google Scholar]
  • 45.Coursol A, Wagner EE Effect of positive findings on submission and acceptance rates: A note on meta-analysis bias. Prof Psychol Res Pract 1986;17:136–7 [Google Scholar]
  • 46.Shadish WR, Doherty M, Montgomery LM How many studies are in the file drawer? An estimate from the family/marital psychotherapy literature. Clin Psychol Rev 1989;9:589–603 [Google Scholar]
  • 47.Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H Publication bias and clinical trials. Control Clin Trials 1987;8:343–53 [DOI] [PubMed] [Google Scholar]
  • 48.Mahoney MJ Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research 1977;1:161–75 [Google Scholar]
  • 49.Peters D, Ceci S Peer review practice of psychologic journals: The fate of published articles submitted again. Behav Brain Sci 1982;5:187–95 [Google Scholar]
  • 50.Epstein WM Confirmational response bias among social work journals. Sci Tech Hum Val 1990;15:9–37 [Google Scholar]
  • 51.Dickersin K, Min Y-I NIH clinical trials and publication bias. Online J Curr Clin Trials 1993. Apr 28; Doc No 50 [PubMed] [Google Scholar]
  • 52.Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR Publication bias in clinical research. Lancet 1991;337:867–72 [DOI] [PubMed] [Google Scholar]
  • 53.Dickersin K, Min YI, Meinert CL Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA 1992;267:374–8 [PubMed] [Google Scholar]
  • 54.Hemminki E Study of information submitted by drug companies to licensing authorities. BMJ 1980;280:833–6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B Evidence-b(i)ased medicine – selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ 2003;326:1171–3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252–60 [DOI] [PubMed] [Google Scholar]
  • 57.Scherer RW, Dickersin K, Langenberg P Full publication of results initially presented in abstracts. JAMA 1994;272:158–62 [PubMed] [Google Scholar]
  • 58.Song F, Parekh-Bhurke S, Hooper L, et al. Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC Med Res Methodol 2009;9:79 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Ioannidis JP Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA 1998;279:281–6 [DOI] [PubMed] [Google Scholar]
  • 60.Misakian AL, Bero LA Publication bias and research on passive smoking. Comparison of published and unpublished studies. JAMA 1998;280:250–3 [DOI] [PubMed] [Google Scholar]
  • 61.Hopewell S, McDonald S, Clarke M, Egger M Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev 2007;2:MR000010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Bero LA, Rennie D Influences on the quality of published drug studies. Int J Technol Assess Health Care 1996;12:209–37 [DOI] [PubMed] [Google Scholar]
  • 63.Timmer A, Hilsden RJ, Cole J, Hailey D, Sutherland LR Publication bias in gastroenterological research – a retrospective cohort study based on abstracts submitted to a scientific meeting. BMC Med Res Methodol 2002;2:7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Healy D Did regulators fail over selective serotonin reuptake inhibitors? BMJ 2006;333:92–5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Bass A Side Effects: A Prosecutor, a Whistleblower, and a Bestselling Antidepressant on Trial. Boston, MA: Algonquin, 2008 [Google Scholar]
  • 66.Drazen JM, Morrissey S, Curfman GD Rosiglitazone – continued uncertainty about safety. N Engl J Med 2007;357:63–4 [DOI] [PubMed] [Google Scholar]
  • 67.DeAngelis CD, Drazen JD, Frizelle FA, et al. Is This Clinical Trial Fully Registered? A Statement From the International Committee of Medical Journal Editors. JAMA 2005;293:2927–9 [DOI] [PubMed] [Google Scholar]
  • 68.Hahn S, Williamson PR, Hutton JL Investigation of within-study selective reporting in clinical research: follow-up of applications submitted to a local research ethics committee. J Eval Clin Pract 2002;8:353–9 [DOI] [PubMed] [Google Scholar]
  • 69.Chan AW, Krleža-Jeric K, Schmid I, Altman D Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ 2004;171:735–40 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG Empirical Evidence for selective reporting of outcomes in randomized trials. Comparison of protocols to published articles. JAMA 2004;291:2457–65 [DOI] [PubMed] [Google Scholar]
  • 71.Vedula S, Bero L, Scherer RW, Dickersin K Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med 2009;361:1963–71 [DOI] [PubMed] [Google Scholar]
  • 72.Ross MG, Mulvey OK, Hines EM, Nissen SE, Krumholz HM Trial publication after registration in clinicaltrials.gov: a cross-sectional analysis. PLoS Med 2009;6:e1000144; doi:10.1371/journal.pmed.1000144 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Al-Marzouki S, Roberts I, Evans S, Marshall T Selective reporting in clinical trials: analysis of trial protocols accepted by The Lancet. Lancet 2008;372:201 [DOI] [PubMed] [Google Scholar]
  • 74.Chan A-W Bias, spin, and misreporting: time for full access to trial protocols and results. PLoS Medicine 2008;5: e230 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009;302:977–84 [DOI] [PubMed] [Google Scholar]
  • 76.Godlee F, Dickersin K Bias, subjectivity, chance, and conflict of interest in editorial decisions. : Godlee F, Jefferson T, Peer review in health sciences. 2nd edn. London: BMJ Books, 2003 [Google Scholar]
  • 77.Olson CM, Rennie D, Cook D, et al. Publication bias in editorial decision making. JAMA 2002;287:2825–8 [DOI] [PubMed] [Google Scholar]
  • 78.Dickersin K, Ssemanda E, Mansell C, Rennie D What do JAMA editors say when they discuss manuscripts that they are considering for publication? Developing a schema for classifying the content of editorial discussion. BMC Med Res Methodol 2007;7:44 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79.Lexchin J, Bero LA, Djulbegovic B, Clark O Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ 2003;326:1–10 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Sismondo S Pharmaceutical company funding and its consequences: A qualitative systematic review. Contemp Clin Trials 2008;29:109–13 [DOI] [PubMed] [Google Scholar]
  • 81.Als-Nielsen B, Chen W, Gluud C, Kjærgaard LL Association of funding and conclusions in randomized drug trials: A reflection of treatment effects or adverse events? JAMA 2003;290:921–8 [DOI] [PubMed] [Google Scholar]
  • 82.Bhandari M, Busse JW, Jackowski D, et al. Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. CMAJ 2004;170:477–80 [PMC free article] [PubMed] [Google Scholar]
  • 83.Djulbegovic B, Lacevic M, Cantor A, et al. The uncertainty principle and industry-sponsored research. Lancet 2000;356:635–8 [DOI] [PubMed] [Google Scholar]
  • 84.Mann H, Djulbegovic B Why comparisons must address genuine uncertainties. James Lind Library 2004. See http://www.jameslindlibrary.org
  • 85.Sismondo S Ghost management: How much of the medical literature is shaped behind the scenes by the pharmaceutical industry? PLoS Med 2007;4:1429–33 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Gøtzsche PC, Hrobjartsson A, Johansen HK, Haahr MT, Altman DG, Chan A-W Ghost authorship in industry-initiated randomised trials. PLoS Med 2007;4:47–52 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 87.Healy D, Cattell D Interface between authorship, industry and science in the domain of therapeutics. Br J Psychiatry 2003;183:22–7 [PubMed] [Google Scholar]
  • 88.Faculty of Pharmaceutical Medicine Ethical Issues Working Group. Ethics in pharmaceutical medicine. Int J Pharmaceutical Medicine 1998;12:193–8 [Google Scholar]
  • 89.Rosenthal R The ‘file drawer problem’ and tolerance for null results. Psychol Bull 1979;86:638–41 [Google Scholar]
  • 90.Light RJ, Pillemer DB Summing up. Cambridge, MA: Harvard University Press, 1984 [Google Scholar]
  • 91.Vandenbroucke JP Passive smoking and lung cancer: a publication bias? BMJ 1988;296:391–2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92.Hetherington J, Dickersin K, Chalmers I, Meinert CL Retrospective and prospective identification of unpublished controlled trials: lessons from a survey of obstetricians and pediatricians. Pediatrics 1989;84:374–80 [PubMed] [Google Scholar]
  • 93.Smith R, Roberts I An amnesty for unpublished trials. BMJ 1997;315:622 [PMC free article] [PubMed] [Google Scholar]
  • 94.Roberts I An amnesty for unpublished trials. BMJ 1998;317:763–4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 95.Editorial Negative results section. JAMA 1962;181:42–3 [Google Scholar]
  • 96.Shields PG Publication bias is a scientific problem with adverse ethical outcomes: the case for a section for null results. Cancer Epidemiol Biomarkers Prev 2000;9:771–2 [PubMed] [Google Scholar]
  • 97.BioMed Central (2002). Journal of Negative Results in Biomedicine. See http://www.jnrbm.com/info/about/
  • 98.Simes RJ Publication bias: the case for an international registry of clinical trials. J Clin Oncol 1986;4:1529–41 [DOI] [PubMed] [Google Scholar]
  • 99.Meinert CL Toward prospective registration of clinical trials. Controlled Clin Trials 1988;9:1–5 [DOI] [PubMed] [Google Scholar]
  • 100.Ad Hoc Working Party of the International Collaborative Group on Clinical Trials Registries. Position paper and consensus recommendations on clinical trial registries. Clin Trials Metaanal 1993;29:255–66 [PubMed] [Google Scholar]
  • 101.Dickersin K How important is publication bias? A synthesis of available data. AIDS Educ Prev 1996;9 (Suppl. 1):15–21 [PubMed] [Google Scholar]
  • 102.Wager E, Field EA, Grossman L Good publication practice for pharmaceutical companies. Curr Med Res Opin 2003;19:149–54 [DOI] [PubMed] [Google Scholar]
  • 103.Dickersin K, Rennie D Registering clinical trials. JAMA 2003;290:516–23 [DOI] [PubMed] [Google Scholar]
  • 104.Chalmers I From optimism to disillusion about commitment to transparency in the medico-industrial complex. J R Soc Med 2006;99:337–41 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 105.Whittington CJ, Kendall T, Fonagy P, Cottrell D, Cotgrove A, Boddington E Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data. Lancet 2004;363:1341–5 [DOI] [PubMed] [Google Scholar]
  • 106.Gülmezoglu AM, Pang T, Horton R, Dickersin K WHO facilitates international collaboration in setting standards for clinical trial registration. Lancet 2005;365:1829–31 [DOI] [PubMed] [Google Scholar]
  • 107.Horton R Pardonable revisions and protocol reviews. Lancet 1997;349:6 [DOI] [PubMed] [Google Scholar]
  • 108.BioMedCentral Information for authors: Publish your study protocols. See http://www.biomedcentral.com/info/authors/protocols (last checked 8 March 2010)
  • 109.Krleža-Jeric K, Chan A-W, Dickersin K, Sim I, Grimshaw J, Gluud C, for the Ottawa Group Principles for international registration of protocol information and results from human trials of health-related interventions. Ottawa Statement (Part 1). BMJ 2005;330:956–8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 110.Siegel J Editorial review of protocols for clinical trials. N Engl J Med 1990;323:1355 [DOI] [PubMed] [Google Scholar]
  • 111.Miller JD Registering clinical trial results: the next step. JAMA 2010;303:773–4 [DOI] [PubMed] [Google Scholar]
  • 112.Kirkham J, Dwan KM, Altman DG, et al. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ 2010;340:c365 [DOI] [PubMed] [Google Scholar]
  • 113.SPIRIT Initiative See http://www.equator-network.org/resource-centre/library-of-health-research-reporting/reporting-guidelines-under-development/ (last checked 15 February 2010)
