Sir,
The Indian Journal of Anaesthesia should be complimented for publishing a separate issue dedicated to research methodology, covering all its aspects meticulously and comprehensively.[1]
Evidence-based medicine is the golden bridge that connects the research world and the clinical domain. Systematic reviews are hailed as the foundation of this approach. They track down the best ‘available’ research in a particular area of interest and review it systematically to generate a body of evidence that clinicians can adopt into their daily practice. The operative word in this statement is ‘available’.
Evidence suggests that a vast body of research has been conducted but never reported or published, for various reasons. It stands to reason that evidence built on only the few available studies, excluding this enormous unpublished trove, will be found wanting in the validity and relevance of its extrapolated results.[2] This brings to the fore the debate on ‘the file drawer effect’, which refers to the practice of tucking negative, neutral or statistically non-significant research findings away into ‘file drawers’, leaving them inaccessible and hidden from reviewers.[3] It has been speculated that every significant result in the published literature has 19 non-significant counterparts in file drawers.[4]
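One common reading of that 19:1 figure is the extreme scenario sketched by Rosenthal: if every null hypothesis tested were actually true, only about 1 in 20 studies would cross the conventional 5% significance threshold by chance, leaving roughly 19 non-significant results in drawers for each significant one in print. A minimal sketch of that arithmetic, assuming a significance level of α = 0.05 and all null hypotheses true, is:
\[
\frac{\text{non-significant}}{\text{significant}} \approx \frac{1-\alpha}{\alpha} = \frac{0.95}{0.05} = 19.
\]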
Failure to report all the findings of a clinical trial breaches the core values of honesty, trustworthiness and integrity expected of researchers, and research with potential implications for wider society lies wasted.[5] One of the main reasons cited for this situation is the bias of the scientific community towards positive and statistically significant results. There is an innate tendency in human beings to question and challenge new ideas that differ from pre-existing beliefs.[6] Negative results tend to be equated with erroneous and flawed study designs. Research shows that studies with positive results are about two and a half times more likely to be published than those with negative results.[2] Studies with negative results also take longer to appear in print and attract fewer citations, yielding a lesser overall impact.[2]
This deprives the scientific community of valuable evidence which, if made available, might significantly alter the direction of the overall body of evidence. The public scandal involving the drug paroxetine in 2004 is a classic example of how the hopeful impressions created by published data can be negated when unpublished studies are unearthed and analysed.[2]
When research with negative results goes unreported, money, material and manpower are spent elsewhere trying to repeat the same work. Incomplete reporting can lead to overestimation of the benefits or underestimation of the risks of an intervention.[2] Such fallacies can be overcome by transparent reporting that follows the appropriate reporting checklists.[5]
Publication of any research is ethically imperative. The World Health Organization, in its statement on public disclosure of clinical trial results, has called for the reporting of all clinical trials, including past unreported trials, and for the data to be made available in searchable clinical trial registries. It has also stated that publication in a peer-reviewed journal is desirable.[7] Although several journals are devoted entirely to publishing studies with negative results, all peer-reviewed journals must explicitly consider publishing research conducted with good scientific rigour, regardless of the nature of its results.[8] There is a pressing need to reorient researchers' attitudes towards negative/null results. It is vital to inculcate a positive attitude towards negative results and to understand that they are equally important.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Acknowledgements
We would like to acknowledge Dr. Muthu Karuppaiah. R. M.D.S., Reader and Guide, Dr. Palanivel Pandian R. M.D.S., Senior Lecturer, Department of Public Health Dentistry, Best Dental Science College, Madurai, Tamil Nadu, India, for their valuable inputs and guidance.
REFERENCES
- 1. Harsoor SS, Bhaskar SB. Learning research methodology: Revisiting the evidence. Indian J Anaesth. 2016;60:619–21. doi: 10.4103/0019-5049.190613.
- 2. Dickersin K, Chalmers I. Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: From Francis Bacon to the WHO. J R Soc Med. 2011;104:532–8. doi: 10.1258/jrsm.2011.11k042.
- 3. Rosenthal R. The file drawer problem and tolerance for null results. Psychol Bull. 1979;86:638.
- 4. Praveen G, Anitha A, Ramesh M. Negating the negativity: Spotlight on “file drawer effect” in health care research. J Indian Assoc Public Health Dent. 2016;14:243.
- 5. Nicholls SG, Langan SM, Benchimol EI, Moher D. Reporting transparency: Making the ethical mandate explicit. BMC Med. 2016;14:44. doi: 10.1186/s12916-016-0587-5.
- 6. Matosin N, Frank E, Engel M, Lum JS, Newell KA. Negativity towards negative results: A discussion of the disconnect between scientific worth and scientific culture. Dis Model Mech. 2014;7:171–3. doi: 10.1242/dmm.015123.
- 7. World Health Organization. WHO Statement on Public Disclosure of Clinical Trial Results. Geneva: WHO; 2015. [Last accessed on 2017 May 28]. Available from: http://www.who.int/ictrp/results/reporting/en/
- 8. Simundic AM. Bias in research. Biochem Med (Zagreb). 2013;23:12–5. doi: 10.11613/BM.2013.003.