Letter. 2013 Jan 2; 2:1. [Version 1]. doi: 10.12688/f1000research.2-1.v1

Biased under-reporting of research reflects biased under-submission more than biased editorial rejection

Iain Chalmers 1,a, Kay Dickersin 2
PMCID: PMC3782352  PMID: 24358860

Abstract

Stephen Senn challenges Ben Goldacre’s assertion in ‘Bad Pharma’ that biased editorial acceptance of reports with ‘positive’ findings is not a cause of biased under-reporting of research. We agree with Senn that biased editorial decisions may contribute to reporting bias, but Senn ignores the evidence that biased decisions by researchers about whether to submit reports for possible publication are the main cause of the problem.


Stephen Senn challenges Ben Goldacre’s assertion in ‘Bad Pharma’ 1 that biased editorial acceptance of reports with ‘positive’ findings is not a cause of biased under-reporting of research, and concludes that "the prospects for disentangling cause and effect when it comes to publication bias are not great" 2. Senn apparently overlooks the studies – including controlled experiments – which have investigated reporting biases. These are summarised in an article 3 from which the following is an excerpt:

Who is responsible for biased reporting of clinical research?

Reporting bias can be due to researchers and sponsors failing to submit study findings for publication, or due to journal editors and others rejecting reports for publication. Numerous surveys of investigators have left little doubt that almost all failure to publish is due to the failure of investigators to submit reports for publication 4, 5, with only a small proportion of studies remaining unpublished because of rejection by journals 6, although positive-outcome bias has been demonstrated among peer reviewers 7. Qualitative studies of editorial discussion indicate that a study’s scientific rigour is the area of greatest concern 8. Researchers report that the reason they do not write up and submit reports of their research for publication is usually because they are "not interested" in the results ("editorial rejection by journals" is only rarely given as a cause of failure to publish). Even those investigators who have initially published their results as (conference) abstracts are less likely to submit their findings for full publication unless the results are ‘significant’ 9.

Investigations of biased reporting of research began with surveys of journal articles, which revealed improbably high proportions of published studies showing statistically significant differences 10–14. Subsequent surveys of authors and peer reviewers showed that research that had yielded ‘negative’ results was less likely than other research to be submitted or recommended for publication 15–18. These findings have been reinforced by the results of experimental studies, which showed that studies with no reported statistically significant differences were less likely to be accepted for publication 7, 19–21.

Senn’s use of the term ‘publication bias’ in his commentary suggests that he is restricting it to editorial bias whereas, as indicated above, the origins of reporting bias lie largely in researchers’ decisions not to submit, not in editorial decisions not to accept. The analyses of observational data cited by Ben Goldacre in his book ‘Bad Pharma’ 1 do not detect editorial bias, but neither do they support a confident conclusion that no editorial bias exists. However, we believe Goldacre is correct to castigate researchers and research sponsors as being more culpable than editors in betraying their responsibility to the patients who have participated in trials.

The controlled experiments suggest that it is the results of studies, not their quality, that predispose them to editorial bias. Senn believes that any editorial bias that exists can be ‘very plausibly explained’ by preferential publication of ‘positive’ studies, and that it "seems plausible that higher quality studies are more likely to lead to a positive result". Unless he is using the word ‘positive’ to mean something other than ‘a beneficial effect’, however, Senn appears to be overlooking substantial evidence challenging the plausibility of his belief (see, for example, reference 22). Given the estimated likelihood of new treatments proving superior to standard treatments 23, it surprises us that, "as a statistician", Senn would find this evidence "unpalatable".

Funding Statement

The author(s) declared that no grants were involved in supporting this work.


References

1. Goldacre B: Bad Pharma. London: 4th Estate; 2012.
2. Senn S: Misunderstanding publication bias: editors are not blameless after all [v1; ref status: indexed, http://f1000r.es/YvAwwD]. F1000Research. 2012;1(59). doi: 10.12688/f1000research.1-59.v1
3. Dickersin K, Chalmers I: Recognising, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the World Health Organisation. JLL Bulletin: Commentaries on the history of treatment evaluation. (www.jameslindlibrary.org)
4. Timmer A, Hilsden RJ, Cole J, et al.: Publication bias in gastroenterological research – a retrospective cohort study based on abstracts submitted to a scientific meeting. BMC Med Res Methodol. 2002;2:7. doi: 10.1186/1471-2288-2-7
5. Godlee F, Dickersin K: Bias, subjectivity, chance, and conflict of interest in editorial decisions. In: Godlee F, Jefferson T, eds. Peer review in health sciences, 2nd edition. London: BMJ Books; 2003.
6. Olson CM, Rennie D, Cook D, et al.: Publication bias in editorial decision making. JAMA. 2002;287(21):2825–2828. doi: 10.1001/jama.287.21.2825
7. Emerson GB, Warme WJ, Wolf FM, et al.: Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Arch Intern Med. 2010;170(21):1934–1939. doi: 10.1001/archinternmed.2010.406
8. Dickersin K, Ssemanda E, Mansell C, et al.: What do JAMA editors say when they discuss manuscripts that they are considering for publication? Developing a schema for classifying the content of editorial discussion. BMC Med Res Methodol. 2007;7:44. doi: 10.1186/1471-2288-7-44
9. Scherer RW, Langenberg P, Von Elm E: Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2007;(2):MR000005. doi: 10.1002/14651858.MR000005.pub3
10. Sterling TD: Publication decisions and their possible effects on inferences drawn from tests of significance – or vice versa. J Am Statistical Assoc. 1959;54:30–34. doi: 10.1080/01621459.1959.10501497
11. Smart RG: The importance of negative results in psychological research. Can Psychologist. 1964;5(4):225–232. doi: 10.1037/h0083036
12. Chalmers TC, Koff RS, Grady GF, et al.: A note on fatality in serum hepatitis. Gastroenterology. 1965;49:22–26.
13. Light RJ, Pillemer DB: Summing up. Cambridge: Harvard University Press; 1984.
14. Song F, Parekh S, Hooper L, et al.: Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14(8):iii, ix–xi, 1–193.
15. Greenwald AG: Consequences of prejudice against the null hypothesis. Psychol Bull. 1975;82(1):1–20. doi: 10.1037/h0076157
16. Coursol A, Wagner EE: Effect of positive findings on submission and acceptance rates: A note on meta-analysis bias. Prof Psychol: Res Pract. 1986;17(2):136–137. doi: 10.1037/0735-7028.17.2.136
17. Dickersin K, Chan S, Chalmers TC, et al.: Publication bias and clinical trials. Control Clin Trials. 1987;8(4):343–353. doi: 10.1016/0197-2456(87)90155-3
18. Shadish WR, Doherty M, Montgomery LM, et al.: How many studies are in the file drawer? An estimate from the family/marital psychotherapy literature. Clin Psychol Rev. 1989;9(5):589–603. doi: 10.1016/0272-7358(89)90013-5
19. Mahoney MJ: Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cog Ther Res. 1977;1(2):161–175. doi: 10.1007/BF01173636
20. Peters D, Ceci S: Peer review practice of psychologic journals: The fate of published articles submitted again. Behav Brain Sci. 1982;5(2):187–195. doi: 10.1017/S0140525X00011183
21. Epstein WM: Confirmational response bias among social work journals. Sci Technol Hum Values. 1990;15(1):9–38. doi: 10.1177/016224399001500102
22. Savović J, Jones HE, Altman DG, et al.: Influence of reported study design characteristics on intervention effect estimates from randomized controlled trials. Ann Intern Med. 2012;157(6):429–438. doi: 10.7326/0003-4819-157-6-201209180-00537
23. Djulbegovic B, Kumar A, Glasziou PP, et al.: New treatments compared with established treatments in randomized trials. Cochrane Database Syst Rev. 2012;10:MR000024. doi: 10.1002/14651858.MR000024.pub3
F1000Res. 2013 Jan 9.

Referee response for version 1

Steven A Julious 1

I would just add one anecdotal observation, concerning second studies that replicate the findings of a study already published in a journal. An editor may turn down the second study because 'nothing new' is being said, although most would argue that replication is important.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2013 Jan 8.

Referee response for version 1

Luigi Naldi 1

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2013 Jan 8.

Referee response for version 1

Riekie de Vet 1

The authors comment on an article by Stephen Senn, who questions Ben Goldacre’s assertion in the book “Bad Pharma” that the editorial process is not the main cause of publication bias. They present a large amount of evidence from the literature that researchers are the main cause of publication bias, by selectively submitting papers for publication.

They provide a lot of convincing information in this short reaction. However, some sentences are very difficult to read, especially for readers who haven’t read the book by Goldacre, the comment by Senn, and some of the other references. I had to reread the first sentence about five times before I understood it. The sentence is especially difficult to read because it contains a double negation. Splitting the sentence into the statement by Ben Goldacre and the comment by Stephen Senn may help. The last sentence of the comment is also difficult to understand, especially when the reader is unaware of the conclusion of reference 23.

The second part of the citation, “the prospects for disentangling cause and effect when it comes to publication bias are not great”, is difficult to understand and, as far as I can see, is not referred to again in the comment. Consider whether that part can be omitted, or refer to it again at the end of the comment.

The last section starts with ‘The controlled experiments’. It is not clear to which experiments this refers. To the ‘studies – including controlled experiments’ mentioned in the first section?

In conclusion, this is a very important and informative comment. However, the readability should be improved to make it more understandable for readers who have not read all the previous papers.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

