Journal of Cerebral Blood Flow & Metabolism
Editorial
2010 Jul 1;30(7):1263–1264. doi: 10.1038/jcbfm.2010.51

Fighting publication bias: introducing the Negative Results section

Ulrich Dirnagl and Martin Lauritzen, Editors-in-Chief
PMCID: PMC2949220  PMID: 20596038

‘For many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias' (Ioannidis, 2005).

Only data that are available via publications—and, to a certain extent, via presentations at conferences—can contribute to progress in the life sciences. However, it has long been known that a strong publication bias exists, in particular against the publication of data that do not reproduce previously published material or that refute the investigators' initial hypothesis. The latter type of contradictory evidence is commonly known as ‘negative data.' This slightly derogatory term reflects the bias against studies in which investigators were unable to reject their null hypothesis (H0), a tool of frequentist statistics that states that there is no difference between experimental groups.
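To make the frequentist logic concrete, here is a minimal sketch (with made-up measurements, not data from any study) of an experiment that fails to reject H0: a two-sided permutation test for a difference in group means. The group labels and values are purely illustrative assumptions.

```python
import random
from statistics import mean

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Returns the fraction of label shufflings whose absolute mean
    difference is at least as large as the observed one (the p-value).
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if perm_diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical measurements from two groups drawn from the same
# underlying distribution (i.e., H0 is in fact true):
control   = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9]
treatment = [4.3, 4.0, 4.8, 4.2, 4.6, 4.1]
p = permutation_test(control, treatment)
# A large p-value means we fail to reject H0 -- a 'negative' result,
# which is not proof that no effect exists.
```

Failing to reject H0 here says only that the data are compatible with no group difference, which is precisely the kind of result the publication bias discussed above tends to suppress.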

Researchers are well aware of this bias, as journals are usually not keen to publish reports that a phenomenon or treatment effect does not exist. They know that editors have little interest in publishing data that refute, or do not reproduce, previously published work—with the exception of spectacular cases that guarantee the attention of the scientific community, as well as garner extra citations (Ioannidis and Trikalinos, 2005). The authors of negative results are required to provide evidence for failure to reject the null hypothesis under numerous conditions (e.g., dosages, assays, outcome parameters, additional species or cell types), whereas a positive result would be considered worthwhile under any one of these conditions (Rockwell et al, 2006). Indeed, there is a dilemma: one can never prove the absence of an effect, because, as Altman and Bland (1995) remind us, ‘absence of evidence is not evidence of absence'.

It has been demonstrated that studies reporting positive, or significant, results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported (Dwan et al, 2008). Negative results are more likely than positive results to be published in journals with lower impact factors (Littner et al, 2005). Many of you have experienced this phenomenon yourselves—often scientists mention in conversation that they ‘were not able to reproduce' a particular finding, a statement that is very often countered by the question ‘Why did you not publish this? It would have been important for me to know.'

Publication bias has been systematically investigated, particularly in clinical trials (e.g., Liebeskind et al, 2006). Systematic reviews and meta-analyses have exposed the problem, as they are heavily confounded by this phenomenon (Sutton et al, 2000). Given a sufficiently large number of original studies, meta-analysis can even quantify the bias attributable to unpublished data. Where this has been done—for example, with Egger plots and trim-and-fill analysis (Duval and Tweedie, 2000)—imputation of the probable results of the unpublished experiments not only reveals the amount of missing data but also estimates the ‘true' effect sizes that would result from inclusion of the missing data. Quite commonly, a substantial proportion of the existing data appears to be missing. Inclusion of the modeled missing data in the meta-analysis sometimes results in a complete loss of the published effect of an intervention or of the existence of a phenomenon. In many cases effect sizes shrink dramatically, indicating that the literature very often represents the ‘positive' tip of an iceberg, whereas unpublished data loom below the surface. Such missing data could substantially alter our pathophysiological understanding or our treatment concepts.
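As an illustration of how funnel-plot asymmetry can be detected, the following sketch implements a bare-bones, unweighted Egger regression (standardized effect size against precision) on hypothetical effect sizes and standard errors; it is a simplification for illustration only, not the trim-and-fill procedure of Duval and Tweedie (2000), and the numbers are invented.

```python
from statistics import mean

def egger_intercept(effects, ses):
    """Simplified Egger regression: regress standardized effect size
    (effect / SE) on precision (1 / SE) by ordinary least squares.
    Under no publication bias the intercept should be near zero;
    a clearly non-zero intercept suggests funnel-plot asymmetry."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    xm, ym = mean(x), mean(y)
    sxx = sum((xi - xm) ** 2 for xi in x)
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return ym - slope * xm                      # the intercept

# Hypothetical meta-analysis: small studies (large SE) report larger
# effects -- the classic signature of publication bias.
effects = [0.9, 0.8, 0.7, 0.5, 0.4, 0.3]
ses     = [0.50, 0.40, 0.30, 0.20, 0.15, 0.10]
b0 = egger_intercept(effects, ses)
# A positive intercept here reflects the built-in small-study bias.
```

In practice, published implementations additionally test whether the intercept differs significantly from zero and complement the regression with trim-and-fill imputation of the presumed missing studies.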

Only recently have systematic reviews been introduced into experimental medicine. Indeed, the stroke and cerebrovascular fields have pioneered this movement. These systematic reviews have exposed various sources of bias and produced the first indications that publication bias is highly prevalent (Macleod et al, 2004). Macleod and colleagues have now, for the first time, quantified publication bias in animal stroke studies and demonstrated that it leads to major overstatements of efficacy (Sena et al, 2010).

The phenomenon of publication bias has long been known and long been bemoaned. Its substantial negative impact on science has been quantified. But how can we improve this lamentable situation, which may contribute greatly to our difficulties in translating bench findings to the bedside? The impetus must now come from the journals and publishers (De Maria, 2004; Diguet et al, 2004; Dirnagl, 2006; Knight, 2003). To our knowledge, only one journal in the neurosciences, Neurobiology of Aging, has thus far formally addressed the problem of negative publication bias by introducing a special section (Coleman, 2004). The Journal of Negative Results in Biomedicine (a BioMed Central publication) provides a forum for negative results and ‘promotes a discussion of unexpected, controversial, provocative and/or negative results in the context of current tenets' (Pfeffer and Olsen, 2002). However, the latter has developed into an eclectic repository in which relevant results may not receive the exposure they deserve.

As part of its drive to improve quality in experimental research (Dirnagl, 2006; MacLeod et al, 2009), the Journal of Cerebral Blood Flow and Metabolism, together with the Nature Publishing Group, is tackling negative publication bias by introducing a Negative Results section. Each such study will be published as a one-page summary (maximum 500 words, two figures) in the print edition of the journal, and the accompanying full paper will appear online.

We invite authors to submit data that did not substantiate their alternative hypotheses (i.e., a difference between experimental groups) and/or did not reproduce published findings. A common criticism of the publication of negative results is that the experimentation involved may not have been as extensive as in research with positive results, which are often further complemented by additional, mechanistic experiments. A survey of the existing literature exposes this as wishful thinking, as most experimental studies are grossly underpowered. Importantly, the quality of the data submitted to our Negative Results section must meet the same rigorous standards that our journal applies to all other submissions. In fact, it may be said that the standards must even exceed those applied currently, as type II error (false negatives) considerations need to be included. Of note, in clinical studies, a priori sample-size calculations (at given levels for type I and II error, α and β) are mandatory. Experimental medicine has deplorably escaped this requirement, at least partially explaining why experimental results often have a very low positive predictive value.
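For illustration, an a priori sample-size calculation of the kind mandatory in clinical studies can be sketched as follows, for a two-sided, two-sample comparison of means using the normal approximation. The chosen effect size d, α, and power are assumptions for the example, not values from this editorial.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """A priori sample size per group for a two-sided two-sample
    comparison of means (normal approximation).

    d     -- expected standardized effect size (Cohen's d)
    alpha -- acceptable type I error rate
    power -- 1 - beta, where beta is the type II error rate
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for type I error
    z_beta  = z(power)           # critical value for type II error
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A 'large' effect (d = 0.8) at alpha = 0.05 and 80% power:
n_large = n_per_group(0.8)   # -> 25 per group
# A 'medium' effect (d = 0.5) under the same conditions:
n_medium = n_per_group(0.5)  # -> 63 per group
```

Because the normal approximation slightly understates the sample size relative to the exact t-based calculation, dedicated power-analysis software is preferable for real study planning; the point here is simply that underpowered designs inflate the type II error and thereby erode the positive predictive value of experimental results.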

The Negative Results section of the Journal of Cerebral Blood Flow and Metabolism will provide a platform and raise awareness of a problem with a proven negative impact on scientific progress as well as bench-to-bedside translation. Now researchers must step up to this platform. It is an experiment, but, if successful, it may serve as a role model for other journals and other research fields and thus help to reduce publication bias.

References

  1. Altman D, Bland M. Absence of evidence is not evidence of absence. Br Med J. 1995;311:485. doi: 10.1136/bmj.311.7003.485.
  2. Coleman PD. Negative results can be valuable. Neurobiol Aging. 2004;25:iii.
  3. De Maria AN. Publication bias and journals as policemen. J Am Coll Cardiol. 2004;44:1007–1008. doi: 10.1016/j.jacc.2004.09.018.
  4. Diguet E, Gross CE, Tison F, Bezard E. Rise and fall of minocycline in neuroprotection: need to promote publication of negative results. Exp Neurol. 2004;189:1–4. doi: 10.1016/j.expneurol.2004.05.016.
  5. Dirnagl U. Bench to bedside: the quest for quality in experimental stroke research. J Cereb Blood Flow Metab. 2006;26:1465–1478. doi: 10.1038/sj.jcbfm.9600298.
  6. Duval SJ, Tweedie RL. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000;56:455–463. doi: 10.1111/j.0006-341x.2000.00455.x.
  7. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, Decullier E, Easterbrook PJ, Von Elm E, Gamble C, Ghersi D, Ioannidis JP, Simes J, Williamson PR. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008;3:e3081. doi: 10.1371/journal.pone.0003081.
  8. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124. doi: 10.1371/journal.pmed.0020124.
  9. Ioannidis JP, Trikalinos TA. Early extreme contradictory estimates may appear in published research: the Proteus phenomenon in molecular genetics research and randomized trials. J Clin Epidemiol. 2005;58:543–549. doi: 10.1016/j.jclinepi.2004.10.019.
  10. Knight J. Negative results: null and void. Nature. 2003;422:554–555. doi: 10.1038/422554a.
  11. Liebeskind DS, Kidwell CS, Sayre JW, Saver JL. Evidence of publication bias in reporting acute stroke clinical trials. Neurology. 2006;67:973–979. doi: 10.1212/01.wnl.0000237331.16541.ac.
  12. Littner Y, Mimouni FB, Dollberg S, Mandel D. Negative results and impact factor: a lesson from neonatology. Arch Pediatr Adolesc Med. 2005;159:1036–1037. doi: 10.1001/archpedi.159.11.1036.
  13. Macleod MR, O'Collins T, Howells DW, Donnan GA. Pooling of animal experimental data reveals influence of study design and publication bias. Stroke. 2004;35:1203–1208. doi: 10.1161/01.STR.0000125719.25853.20.
  14. Macleod MR, Fisher M, O'Collins V, Sena ES, Dirnagl U, Bath PM, Buchan A, van der Worp HB, Traystman RJ, Minematsu K, Donnan GA, Howells DW. Good laboratory practice: preventing introduction of bias at the bench. J Cereb Blood Flow Metab. 2009;29:221–223. doi: 10.1161/STROKEAHA.108.525386.
  15. Pfeffer C, Olsen BR. Editorial: journal of negative results in biomedicine. J Negat Results Biomed. 2002;1:2. doi: 10.1186/1477-5751-1-2.
  16. Rockwell S, Kimler BF, Moulder JE. Publishing negative results: the problem of publication bias. Radiat Res. 2006;165:623–625. doi: 10.1667/RR3573.1.
  17. Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol. 2010;8:e1000344. doi: 10.1371/journal.pbio.1000344.
  18. Sutton AJ, Duval SJ, Tweedie RL, Abrams KR, Jones DR. Empirical assessment of effect of publication bias on meta-analyses. Br Med J. 2000;320:1574–1577. doi: 10.1136/bmj.320.7249.1574.
