Abstract
The US National Institute of Neurological Disorders and Stroke convened major stakeholders in June 2012 to discuss how to improve the methodological reporting of animal studies in grant applications and publications. The main workshop recommendation is that, at a minimum, studies should report on sample-size estimation, whether and how animals were randomized, whether investigators were blinded to the treatment, and the handling of data. We recognize that achieving a meaningful improvement in the quality of reporting will require a concerted effort by investigators, reviewers, funding agencies and journal editors. Requiring better reporting of animal studies will raise awareness of the importance of rigorous study design and thereby accelerate scientific progress.
Dissemination of knowledge is the engine that drives scientific progress. Because advances hinge primarily on previous observations, it is essential that studies are reported in sufficient detail to allow the scientific community, research funding agencies and disease advocacy organizations to evaluate the reliability of previous findings. Numerous publications have called attention to the lack of transparency in reporting, yet studies in the life sciences in general, and in animals in particular, still often lack adequate reporting on the design, conduct and analysis of the experiments. To develop a plan for addressing this critical issue, the US National Institute of Neurological Disorders and Stroke (NINDS) convened academic researchers and educators, reviewers, journal editors and representatives from funding agencies, disease advocacy communities and the pharmaceutical industry to discuss the causes of deficient reporting and how they can be addressed. The specific goal of the meeting was to develop recommendations for improving how the results of animal research are reported in manuscripts and grant applications. There was broad agreement that: (1) poor reporting, often associated with poor experimental design, is a significant issue across the life sciences; (2) a core set of research parameters exist that should be addressed when reporting the results of animal experiments; and (3) a concerted effort by all stakeholders, including funding agencies and journals, will be necessary to disseminate and implement best reporting practices throughout the research community. Here we describe the impetus for the meeting and the specific recommendations that were generated.
Widespread deficiencies in methods reporting
In the life sciences, animals are used to elucidate normal biology, to improve understanding of disease pathogenesis, and to develop therapeutic interventions. Animal models are valuable, provided that experiments employing them are carefully designed, interpreted and reported. Several recent articles, commentaries and editorials highlight that inadequate experimental reporting can render such studies uninterpretable and difficult to reproduce1–8. For instance, an NINDS-funded program to replicate spinal cord injury studies determined that many studies could not be replicated because of incomplete or inaccurate descriptions of the experimental design, especially of how animals were randomized to the various test groups, how groups were composed, and how animal attrition and exclusion were handled7. A review of 100 articles published in Cancer Research in 2010 revealed that only 28% of papers reported that animals were randomly allocated to treatment groups, just 2% reported that observers were blinded to treatment, and none stated the method used to determine the number of animals per group, a determination required to avoid false outcomes2. In addition, analysis of several hundred studies conducted in animal models of stroke, Parkinson’s disease and multiple sclerosis also revealed deficiencies in reporting key methodological parameters that can introduce bias6. Similarly, a review of 76 high-impact animal studies (each cited more than 500 times) showed that the publications lacked descriptions of crucial methodological information that would allow informed judgment about the findings9. These widespread deficiencies in the reporting of animal study design raise the concern that reviewers of these studies could not adequately identify potential limitations in the experimental design and/or data analysis, limiting the benefit of the findings.
Some poorly reported studies may in fact be well designed and well conducted, but analysis suggests that inadequate reporting correlates with overstated findings10–14. Problems related to inadequate study design surfaced early in the stroke research community, as investigators tried to understand why multiple clinical trials based on positive results in animal studies ultimately failed. Part of the problem is, of course, that no animal model can fully reproduce all the features of human stroke. It also became clear, however, that many of the difficulties stemmed from a lack of methodological rigor in the preclinical studies, a shortcoming that their published reports did not disclose15. For instance, a systematic review and meta-analysis of studies testing the efficacy of the free-radical scavenger NXY-059 in models of ischaemic stroke revealed that publications that included information on randomization, concealment of group allocation, or blinded assessment of outcomes reported significantly smaller effect sizes for NXY-059 than studies lacking this information10. In certain cases, a series of poorly designed studies, obscured by deficient reporting, may, in aggregate, serve erroneously as the scientific rationale for large, expensive and ultimately unsuccessful clinical trials. Such trials may unnecessarily expose patients to potentially harmful agents, prevent those patients from participating in other trials of possibly effective agents, and drain valuable resources and energy that might otherwise be spent more productively.
A core set of reporting standards
The large fraction of poorly reported animal studies, and the empirical evidence of associated bias (defined broadly as the introduction of an unintentional difference between comparison groups)6,10–14,16–20, led various disease communities to adopt general21–23 and animal-model-specific6,24–26 reporting guidelines. However, for guidelines to be effective and broadly accepted by all stakeholders, they should be universal and focus on widely accepted core issues that are important for study evaluation. Therefore, based on the available data, we recommend that, at a minimum, authors of grant applications and scientific publications report on randomization, blinding, sample-size estimation and the handling of all data (see below and Box 1).
BOX 1. A core set of reporting standards for rigorous study design.
Randomization
Animals should be assigned randomly to the various experimental groups, and the method of randomization reported.
Data should be collected and processed randomly or appropriately blocked.
Blinding
Allocation concealment: the investigator should be unaware of the group to which the next animal taken from a cage will be allocated.
Blinded conduct of the experiment: animal caretakers and investigators conducting the experiments should be blinded to the allocation sequence.
Blinded assessment of outcome: investigators assessing, measuring or quantifying experimental outcomes should be blinded to the intervention.
Sample-size estimation
An appropriate sample size should be computed when the study is being designed and the statistical method of computation reported.
Statistical methods that take into account multiple evaluations of the data should be used when an interim evaluation is carried out.
Data handling
Rules for stopping data collection should be defined in advance.
Criteria for inclusion and exclusion of data should be established prospectively.
How outliers will be defined and handled should be decided when the experiment is being designed, and any data removed before analysis should be reported.
The primary end point should be prospectively selected. If multiple end points are to be assessed, then appropriate statistical corrections should be applied.
Investigators should report on data missing because of attrition or exclusion.
Pseudo-replication issues need to be considered during study design and analysis.
Investigators should report how often a particular experiment was performed and whether results were substantiated by repetition under a range of conditions.
Randomization and blinding
Choices made by investigators during the design, conduct and interpretation of experiments can introduce bias, resulting in false-positive results. Many have emphasized the importance of randomization and blinding as means to reduce bias6,21–23,27, yet inadequate reporting of these aspects of study design remains widespread in preclinical research. It is important to report whether the allocation, treatment and handling of animals were the same across study groups. The selection and source of control animals need to be reported as well, including whether they are true littermates of the test groups. Best practices should also include reporting on the method of randomizing animals to the various experimental groups, as well as on random (or appropriately blocked) sample processing and data collection. Attention to these details will avoid mistaking batch effects for treatment effects (for example, when samples from a large study are divided into multiple lots that are then processed separately). Investigators should also report whether the individuals caring for the animals and conducting the experiments were blinded to the allocation sequence and to group allocation, and, whenever possible, whether the persons assessing, measuring or quantifying the experimental outcomes were blinded to the intervention.
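To make these recommendations concrete, the following minimal sketch (in Python; purely illustrative, not a procedure prescribed by the workshop) shows one way to randomize animals to balanced groups while supporting allocation concealment: experimenters work only with neutral codes, and the code-to-group key is withheld until outcomes have been assessed.

```python
import random

def blinded_allocation(animal_ids, groups, seed=None):
    """Randomly assign animals to groups in a balanced fashion.

    Returns (blind_codes, allocation_key). Experimenters and caretakers see
    only blind_codes; allocation_key is held by a third party until outcome
    assessment is complete (allocation concealment).
    """
    rng = random.Random(seed)
    n = len(animal_ids)
    # Balanced design: repeat group labels to cover all animals, then shuffle.
    labels = (groups * ((n + len(groups) - 1) // len(groups)))[:n]
    rng.shuffle(labels)
    allocation_key = dict(zip(animal_ids, labels))
    # Assign neutral codes in shuffled order so codes carry no group information.
    shuffled = list(animal_ids)
    rng.shuffle(shuffled)
    blind_codes = {aid: f"ID{i:03d}" for i, aid in enumerate(shuffled)}
    return blind_codes, allocation_key

codes, key = blinded_allocation([f"mouse{i}" for i in range(12)],
                                ["treatment", "vehicle"], seed=1)
# Report the method (and seed, if one was used) so the randomization is reproducible.
```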
Sample-size estimation
Minimizing the use of animals in research is not only a requirement of funding agencies around the world but also an ethical obligation. It is unethical, however, to perform underpowered experiments with insufficient numbers of animals that have little prospect of detecting meaningful differences between groups. In addition, with smaller studies, the positive predictive value is lower, and false-positive results can ensue, leading to the needless use of animals in subsequent studies that build upon the incorrect results28. Studies with an inadequate sample size may also provide false-negative results, where potentially important findings go undetected. For these reasons it is crucial to report how many animals were used per group and what statistical methods were used to determine this number.
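As an illustration of the kind of calculation that should be reported, the sketch below computes the number of animals per group for a two-group comparison using the standard normal-approximation formula n ≈ 2((z₁₋α/₂ + z₁₋β)/d)², where d is the anticipated standardized effect size. The values of α, power and d are illustrative assumptions, not workshop recommendations.

```python
from statistics import NormalDist
import math

def n_per_group(effect_size_d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample comparison."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size_d) ** 2)

print(n_per_group(1.0))  # large anticipated effect: 16 animals per group
print(n_per_group(0.5))  # moderate effect: 63 per group -- power is expensive
```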
Data handling
Common practices related to data handling that can also lead to false positives include interim data analysis29, ad hoc exclusion of data30, retrospective selection of the primary end point31, pseudo-replication32 and overinterpretation of small effect sizes33.
Interim data analysis
It is not uncommon for investigators to collect some data and perform an interim data analysis. If the results are statistically significant in favour of the working hypothesis, the study is terminated and a paper is written. If the results look ‘promising’ but are not statistically significant, additional data are collected. This has been referred to as ‘sampling to a foregone conclusion’ and can lead to a high rate of false-positive findings29,30. Therefore, sample size and rules for stopping data collection should be defined in advance and properly reported. Unplanned interim analyses, which can inflate false-positive outcomes and require unblinding of the allocation code, should be avoided. If there are interim analyses, however, these should be reported in the publication.
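The inflation of false positives caused by such 'peeking' is easy to demonstrate by simulation. In the hypothetical sketch below, all data are generated under the null hypothesis, yet testing after every batch and stopping at the first p < 0.05 yields a false-positive rate of roughly 13% in this configuration, well above the nominal 5%.

```python
import random
from statistics import NormalDist, mean

def peeking_experiment(rng, batch=10, max_batches=5, alpha=0.05):
    """Collect data in batches under the null; stop at the first 'significant' look."""
    data = []
    for _ in range(max_batches):
        data += [rng.gauss(0, 1) for _ in range(batch)]  # true mean is 0
        z = mean(data) * len(data) ** 0.5                # z-test with known sigma = 1
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        if p < alpha:
            return True                                   # stop and 'publish'
    return False

rng = random.Random(1)
trials = 10_000
rate = sum(peeking_experiment(rng) for _ in range(trials)) / trials
print(f"false-positive rate with unplanned interim looks: {rate:.3f}")  # ~0.13, not 0.05
```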
Ad hoc exclusion of data
Animal studies are often complex and outliers are not unusual. Decisions to include or exclude specific animals on the basis of outcomes (for example, state of health, dissimilarity to other data) have the potential to influence the study results. Thus, rules for inclusion and exclusion of data should be defined prospectively and reported. It is also important to report whether all animals that were entered into the experiment actually completed it, or whether they were removed, and if so, for what reason. Differential attrition between groups can introduce bias. For example, a treatment may appear effective if it kills off the weakest or most severely affected animals whose fates are then not reported. In addition, it is important to report whether any data were removed before analysis and the reasons for this data exclusion.
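One way to operationalize this is to fix the exclusion rule before the data are seen and to log every exclusion for reporting, as in this hypothetical sketch (the 20% weight-loss criterion is an invented example, not a standard endorsed by the workshop).

```python
def apply_exclusion_rule(records, max_weight_loss=0.20):
    """Apply a prospectively defined health-based exclusion rule and
    log every exclusion so attrition can be fully reported."""
    kept, excluded = [], []
    for r in records:
        loss = (r["baseline_g"] - r["final_g"]) / r["baseline_g"]
        (excluded if loss > max_weight_loss else kept).append(r)
    report = {"n_initial": len(records),
              "n_excluded": len(excluded),
              "excluded_ids": [r["id"] for r in excluded],
              "rule": f"body-weight loss > {max_weight_loss:.0%}, prespecified"}
    return kept, report

records = [{"id": "m1", "baseline_g": 25.0, "final_g": 24.1},
           {"id": "m2", "baseline_g": 26.0, "final_g": 19.2}]
kept, report = apply_exclusion_rule(records)
print(report)  # {'n_initial': 2, 'n_excluded': 1, 'excluded_ids': ['m2'], ...}
```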
Retrospective primary end-point selection
It is well known that assessment of multiple end points, and/or assessment of a single end point at multiple time points, inflates the type-I error (false-positive results)31. Yet it is not uncommon for investigators to select a primary end point only after data analyses. False-positive conclusions arising from such practices can be avoided by specifying a primary end point before the study is undertaken, the time(s) at which the end point will be assessed, and the method(s) of analysis. Significant findings for secondary end points can and should be reported, but should be delineated as exploratory in nature. If multiple end points are to be assessed, then appropriate statistical corrections should be applied to control type-I error, such as Bonferroni corrections31,34.
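As a simple worked example (with made-up p-values), a Bonferroni correction compares each of k end-point p-values against α/k, which keeps the family-wise type-I error at or below α.

```python
def bonferroni(p_values, alpha=0.05):
    """Return each p-value with its significance after Bonferroni correction."""
    threshold = alpha / len(p_values)
    return [(p, p < threshold) for p in p_values]

# Three end points assessed; only the first survives correction (0.05/3 ~ 0.0167).
for p, significant in bonferroni([0.004, 0.030, 0.049]):
    print(f"p = {p:.3f} -> {'significant' if significant else 'not significant'}")
```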
Pseudo replicates
When considering sample-size determination and experimental design, pseudo-replication issues need to be considered32. There is a clear, but often misunderstood or misrepresented, distinction between technical and biologic replicates. For example, in analysing effects of pollutants on reproductive health, multiple sampling from a litter, regardless of how many littermates are quantified, provides data from only a single biologic replicate. When biologic variation in response to some intervention is the variable of interest, as in many animal experiments, analysis of samples from multiple litters is essential. The unit of assessment is the smallest unit (animal, cage, litter) to which the intervention in question can be independently administered35.
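The practical consequence is illustrated by the hypothetical sketch below (numbers invented; assumes SciPy is available): pooling pups across litters treats within-litter measurements as independent and overstates n, whereas summarizing each litter to a single value analyses the data at the level of the independent biological unit.

```python
from statistics import mean
from scipy.stats import ttest_ind

# Three treated and three control litters; several pups measured per litter.
treated_litters = [[7.1, 7.4, 6.9], [6.2, 6.0, 6.4, 6.3], [6.8, 7.0]]
control_litters = [[6.1, 5.9, 6.2], [5.7, 5.8], [6.0, 6.3, 6.1]]

# Pseudo-replicated: pooling pups inflates n beyond the number of independent units.
pooled_p = ttest_ind(sum(treated_litters, []), sum(control_litters, [])).pvalue

# Correct unit of analysis: one summary value per litter.
litter_p = ttest_ind([mean(l) for l in treated_litters],
                     [mean(l) for l in control_litters]).pvalue

print(f"pup-level (pseudo-replicated) p = {pooled_p:.4f}")
print(f"litter-level p = {litter_p:.4f}  # n = number of litters")
```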
Small effect sizes
A statistically significant result provides no information on the magnitude of the effect and thus does not necessarily mean that the effect is robust, which could account for the poor reproducibility of certain studies36. Investigators are therefore encouraged to report whether results were substantiated by repetition, preferably under a range of conditions that demonstrate the robustness of the effect. Reporting how often a particular experiment was performed would also strengthen the validity of experimental results, by countering the general tendency to publish only the best results. To this end, carefully designed and powered animal studies should be budgeted for in grant applications, and funding agencies should consider supporting replication studies where appropriate.
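Reporting a standardized effect size alongside the p-value is one simple way to convey magnitude. The sketch below computes Cohen's d with a pooled standard deviation (the data are invented for illustration).

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Standardized mean difference (Cohen's d) using a pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

treated = [7.8, 7.1, 8.4, 7.3, 8.0]
control = [7.2, 7.6, 6.8, 7.5, 7.0]
print(f"Cohen's d = {cohens_d(treated, control):.2f}")  # ~1.13: a large effect
```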
An important note about exploratory experiments
For the most part, these best practices do not apply to early-stage observational experiments searching for possible differences among experimental groups. Such exploratory testing is frequently conducted using a small sample size, does not have a primary outcome and is often unblinded. However, because such experiments are likely to be subject to many of the limitations described above, they should be viewed as hypothesis-generating experiments and interpreted as such. Potential discoveries arising from the exploratory phase of the research should be supported by follow-up, hypothesis-testing experiments that take into consideration and adequately report on the core standards detailed above (Box 1).
The path to implementation
Improving the transparency and quality of reporting cannot be achieved by a single party; it will require cooperation among all stakeholders, including investigators, reviewers, funding agencies and journals. Calling upon investigators to provide key information about the design, execution and analysis of the animal experiments described in grant applications and manuscripts, and encouraging reviewers to consider these issues in their evaluations, should, over time, increase both the quality and predictive value of preclinical research. Potential strategies for achieving this goal can be adopted from the clinical trials community, which also contended with poor reporting and associated bias. Evidence that clinical trials can yield biased results if they lack methodological rigor37–42 led to the development and implementation of the CONSORT guidelines for randomized clinical trials (among other guidelines), now adopted by many clinical journals and funding organizations. These guidelines require authors to report whether and how their studies were randomized and blinded, how sample size was determined, whether data are missing owing to attrition or exclusion, and other important experimental parameters43–45. Importantly, the guidelines have improved the transparency of clinical study reporting in journals that have adopted them46–49. Additional evidence for the power of such guidelines comes from the observation that, although few animal studies report on randomization, blinding or sample-size determination, most describe compliance with animal regulations, which journals require6,9,10,50,51.
As a first step, we recommend that funding organizations and journals provide reviewers with clear guidance about core features of animal study design (listed in Box 1). The goal is not to be prescriptive or proscriptive, but rather to delineate the minimum set of standards that should routinely be considered in evaluating the appropriateness of a study. Such guidance would ease the task of reviewers of manuscripts and grant applications, who volunteer their time and are often overextended. In addition, investigators and reviewers should be encouraged to consult published generic and model-specific guidelines for designing in vivo animal experiments6,21–27,52,53. To help reviewers, editors and funding organizations verify that applications and manuscripts contain sufficient information on the core reporting recommendations (Box 1), authors could be asked to append the relevant information on a standardized form that accompanies the submission. This form could be as simple as a checklist indicating the page on which each core reporting standard is addressed. Such forms are already used by clinical research journals.
In addition to the measures proposed above, better dissemination of knowledge will be greatly facilitated by addressing publication bias, the phenomenon that few studies with negative outcomes are published54–63. This reporting deficiency contributes to needless repetition of similar studies by investigators unaware of earlier efforts60,61. There is a widely held belief that the scientific community, promotions committees, funding agencies and journals favour positive outcomes, an impression that can itself lead to bias64. Possible solutions include incentivizing investigators to publish negative outcomes, supporting independent replication studies, encouraging journals to publish more studies reporting negative outcomes, creating a database for negative outcomes (analogous to http://ClinicalTrials.gov/), and linking raw data to publications.
Change will not occur overnight. The importance of training scientists to properly design and adequately report animal studies cannot be overstated. Training and education focused on key features of experimental design should be an ongoing process for both novice and veteran biomedical researchers. Attention to better reporting of study design should be communicated at major meetings, brought to the attention of reviewers, editors and funders, required by the publishers of peer-reviewed journals, and included in the training of graduate students and postdoctoral fellows. Furthermore, good mentorship is crucial for developing such skills and should be encouraged and rewarded. Rigorous experimental design and adequate reporting need to be emphasized across the board and monitored in training grants awarded by the US National Institutes of Health (NIH) and other funding agencies. Professional societies can also play an important role by highlighting this issue in their respective communities.
An important gatekeeper of quality remains the peer review of grant applications and journal manuscripts. We therefore call upon funding agencies and publishing groups to take action to reinforce the importance of methodological rigor and reporting. NINDS has begun taking steps to promote best practices for preclinical therapy development studies. In 2011, a Notice was published in the NIH Guide encouraging the scientific community to address the issues described above in their grant applications, both in describing the project being proposed and in presenting the supporting data on which it is based (http://grants.nih.gov/grants/guide/notice-files/NOT-NS-11-023.html). Points that should be considered in a well-designed study are listed on the NINDS website (http://www.ninds.nih.gov/funding/transparency_in_reporting_guidance.pdf). Furthermore, reviewers of applications handled by the NINDS Scientific Review Branch are reminded of these issues and asked to pay careful attention to the scientific premise of the proposed projects.
We believe that improving how animal studies are reported will raise awareness of the importance of rigorous study design. Such increased awareness will accelerate both scientific progress and the development of new therapies.
Acknowledgments
Funded by NINDS.
Footnotes
Author Contributions R.F., A.K.G., S.C.L., J.D.P., S.D.S., U.U. and W.K. organized the workshop. R.B.D., S.E.L., S.C.L., M.R.M. and S.D.S. wrote the manuscript. All authors participated in the workshop and contributed to the editing of the manuscript.
Author Information Reprints and permissions information is available at www.nature.com/reprints. The authors declare no competing financial interests. Readers are welcome to comment on the online version of the paper.
References
1. Begley CG, Ellis LM. Raise standards for preclinical cancer research. Nature. 2012;483:531–533. doi:10.1038/483531a.
2. Hess KR. Statistical design considerations in animal studies published recently in Cancer Research. Cancer Res. 2011;71:625. doi:10.1158/0008-5472.CAN-10-3296.
3. Kilkenny C, et al. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS ONE. 2009;4:e7824. doi:10.1371/journal.pone.0007824.
4. Moher D, Simera I, Schulz KF, Hoey J, Altman DG. Helping editors, peer reviewers and authors improve the clarity, completeness and transparency of reporting health research. BMC Med. 2008;6:13. doi:10.1186/1741-7015-6-13.
5. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nature Rev Drug Discov. 2011;10:712. doi:10.1038/nrd3439-c1. The first report that many published studies cannot be reproduced by the pharmaceutical industry.
6. Sena E, van der Worp HB, Howells D, Macleod M. How can we improve the pre-clinical development of drugs for stroke? Trends Neurosci. 2007;30:433–439. doi:10.1016/j.tins.2007.06.009.
7. Steward O, Popovich PG, Dietrich WD, Kleitman N. Replication and reproducibility in spinal cord injury research. Exp Neurol. 2012;233:597–605. doi:10.1016/j.expneurol.2011.06.017.
8. van der Worp HB, Macleod MR. Preclinical studies of human disease: time to take methodological quality seriously. J Mol Cell Cardiol. 2011;51:449–450. doi:10.1016/j.yjmcc.2011.04.008.
9. Hackam DG, Redelmeier DA. Translation of research evidence from animals to humans. J Am Med Assoc. 2006;296:1731–1732. doi:10.1001/jama.296.14.1731. A study reporting that a large fraction of high-impact publications in highly reputable journals lack important information related to experimental design.
10. Macleod MR, et al. Evidence for the efficacy of NXY-059 in experimental focal cerebral ischaemia is confounded by study quality. Stroke. 2008;39:2824–2829. doi:10.1161/STROKEAHA.108.515957. A study demonstrating that lack of reporting of key methodological parameters is associated with bias.
11. Bebarta V, Luyten D, Heard K. Emergency medicine animal research: does use of randomization and blinding affect the results? Acad Emerg Med. 2003;10:684–687. doi:10.1111/j.1553-2712.2003.tb00056.x.
12. Crossley NA, et al. Empirical evidence of bias in the design of experimental stroke studies: a metaepidemiologic approach. Stroke. 2008;39:929–934. doi:10.1161/STROKEAHA.107.498725.
13. Rooke ED, Vesterinen HM, Sena ES, Egan KJ, Macleod MR. Dopamine agonists in animal models of Parkinson’s disease: a systematic review and meta-analysis. Parkinsonism Relat Disord. 2011;17:313–320. doi:10.1016/j.parkreldis.2011.02.010.
14. Vesterinen HM, et al. Improving the translational hit of experimental treatments in multiple sclerosis. Mult Scler J. 2010;16:1044–1055. doi:10.1177/1352458510379612.
15. Stroke Therapy Academic Industry Roundtable (STAIR). Recommendations for standards regarding preclinical neuroprotective and restorative drug development. Stroke. 1999;30:2752–2758. doi:10.1161/01.str.30.12.2752.
16. Fanelli D. “Positive” results increase down the hierarchy of the sciences. PLoS ONE. 2010;5:e10068. doi:10.1371/journal.pone.0010068.
17. Jerndal M, et al. A systematic review and meta-analysis of erythropoietin in experimental stroke. J Cereb Blood Flow Metab. 2010;30:961–968. doi:10.1038/jcbfm.2009.267.
18. Macleod MR, O’Collins T, Horky LL, Howells DW, Donnan GA. Systematic review and meta-analysis of the efficacy of FK506 in experimental stroke. J Cereb Blood Flow Metab. 2005;25:713–721. doi:10.1038/sj.jcbfm.9600064.
19. Sena ES, et al. Factors affecting the apparent efficacy and safety of tissue plasminogen activator in thrombotic occlusion models of stroke: systematic review and meta-analysis. J Cereb Blood Flow Metab. 2010;30:1905–1913. doi:10.1038/jcbfm.2010.116.
20. Wheble PCR, Sena ES, Macleod MR. A systematic review and meta-analysis of the efficacy of piracetam and piracetam-like compounds in experimental stroke. Cerebrovasc Dis. 2008;25:5–11. doi:10.1159/000111493.
21. Festing MF, Altman DG. Guidelines for the design and statistical analysis of experiments using laboratory animals. ILAR J. 2002;43:244–258. doi:10.1093/ilar.43.4.244.
22. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8:e1000412. doi:10.1371/journal.pbio.1000412.
23. van der Worp HB, et al. Can animal models of disease reliably inform human studies? PLoS Med. 2010;7:e1000245. doi:10.1371/journal.pmed.1000245.
24. Fisher M, et al. Update of the stroke therapy academic industry roundtable preclinical recommendations. Stroke. 2009;40:2244–2250. doi:10.1161/STROKEAHA.108.541128.
25. Ludolph AC, et al. Guidelines for preclinical animal research in ALS/MND: a consensus meeting. Amyotroph Lateral Scler. 2010;11:38–45. doi:10.3109/17482960903545334.
26. Shineman DW, et al. Accelerating drug discovery for Alzheimer’s disease: best practices for preclinical animal studies. Alzheimers Res Ther. 2011;3:28. doi:10.1186/alzrt90.
27. Unger EF. All is not well in the world of translational research. J Am Coll Cardiol. 2007;50:738–740. doi:10.1016/j.jacc.2007.04.067.
28. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2:e124. doi:10.1371/journal.pmed.0020124.
29. Dienes Z. Bayesian versus orthodox statistics: which side are you on? Perspect Psychol Sci. 2011;6:274–290. doi:10.1177/1745691611406920.
30. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22:1359–1366. doi:10.1177/0956797611417632.
31. Beal KG, Khamis HJ. A problem in statistical analysis: simultaneous inference. Condor. 1991;93:1023–1025.
32. Lazic SE. The problem of pseudoreplication in neuroscientific studies: is it affecting your analysis? BMC Neurosci. 2010;11:5. doi:10.1186/1471-2202-11-5.
33. Scott S, et al. Design, power, and interpretation of studies in the standard murine model of ALS. Amyotroph Lateral Scler. 2008;9:4–15. doi:10.1080/17482960701856300. An enlightening analysis of how small sample sizes can lead to false-positive outcomes.
34. Proschan MA, Waclawiw MA. Practical guidelines for multiplicity adjustment in clinical trials. Control Clin Trials. 2000;21:527–539. doi:10.1016/s0197-2456(00)00106-9.
35. Festing MFW. Design and statistical methods in studies using animal models of development. ILAR J. 2006;47:5–14. doi:10.1093/ilar.47.1.5.
36. Nakagawa S, Cuthill IC. Effect size, confidence interval and statistical significance: a practical guide for biologists. Biol Rev Camb Philos Soc. 2007;82:591–605. doi:10.1111/j.1469-185X.2007.00027.x.
37. Chalmers TC, Celano P, Sacks HS, Smith H. Bias in treatment assignment in controlled clinical trials. N Engl J Med. 1983;309:1358–1361. doi:10.1056/NEJM198312013092204.
38. Jüni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. Br Med J. 2001;323:42. doi:10.1136/bmj.323.7303.42.
39. Pildal J, et al. Impact of allocation concealment on conclusions drawn from meta-analyses of randomized trials. Int J Epidemiol. 2007;36:847–857. doi:10.1093/ije/dym087.
40. Pocock SJ, Hughes MD, Lee RJ. Statistical problems in the reporting of clinical trials. A survey of three medical journals. N Engl J Med. 1987;317:426–432. doi:10.1056/NEJM198708133170706.
41. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. J Am Med Assoc. 1995;273:408–412. doi:10.1001/jama.273.5.408.
42. Wood L, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. Br Med J. 2008;336:601–605. doi:10.1136/bmj.39465.451748.AD.
43. Moher D, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Br Med J. 2010;340:c869. doi:10.1136/bmj.c869.
44. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357:1191–1194. Revision of guidelines by the CONSORT group to improve the reporting of randomized clinical trials.
45. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. PLoS Med. 2010;7:e1000251. doi:10.1371/journal.pmed.1000251.
46. Plint AC, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185:263–267. doi:10.5694/j.1326-5377.2006.tb00557.x.
47. Kane RL, Wang J, Garrard J. Reporting in randomized clinical trials improved after adoption of the CONSORT statement. J Clin Epidemiol. 2007;60:241–249. doi:10.1016/j.jclinepi.2006.06.016.
48. Prady SL, Richmond SJ, Morton VM, Macpherson H. A systematic evaluation of the impact of STRICTA and CONSORT recommendations on quality of reporting for acupuncture trials. PLoS ONE. 2008;3:e1577. doi:10.1371/journal.pone.0001577.
49. Smith BA, et al. Quality of reporting randomized controlled trials (RCTs) in the nursing literature: application of the consolidated standards of reporting trials (CONSORT). Nurs Outlook. 2008;56:31–37. doi:10.1016/j.outlook.2007.09.002.
50. Macleod MR, O’Collins T, Howells DW, Donnan GA. Pooling of animal experimental data reveals influence of study design and publication bias. Stroke. 2004;35:1203–1208. doi:10.1161/01.STR.0000125719.25853.20.
51. Macleod MR, O’Collins T, Horky LL, Howells DW, Donnan GA. Systematic review and meta-analysis of the efficacy of melatonin in experimental stroke. J Pineal Res. 2005;38:35–41. doi:10.1111/j.1600-079X.2004.00172.x.
52. Gallo JM. Pharmacokinetic/pharmacodynamic-driven drug development. Mt Sinai J Med. 2010;77:381–388. doi:10.1002/msj.20193.
53. Moher D, et al. Describing reporting guidelines for health research: a systematic review. J Clin Epidemiol. 2011;64:718–742. doi:10.1016/j.jclinepi.2010.09.013.
54. Callaham ML, Wears RL, Weber EJ, Barton C, Young G. Positive-outcome bias and other limitations in the outcome of research abstracts submitted to a scientific meeting. J Am Med Assoc. 1998;280:254–257. doi:10.1001/jama.280.3.254.
55. Dickersin K, Chalmers I. Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the WHO. J R Soc Med. 2011;104:532–538. doi:10.1258/jrsm.2011.11k042.
56. Fanelli D. Negative results are disappearing from most disciplines and countries. Scientometrics. 2012;90:891–904.
57. Kyzas PA, Denaxa-Kyza D, Ioannidis JPA. Almost all articles on cancer prognostic markers report statistically significant results. Eur J Cancer. 2007;43:2559–2579. doi:10.1016/j.ejca.2007.08.030.
58. Liu S. Dealing with publication bias in translational stroke research. J Exp Stroke Transl Med. 2009;2:16–21. doi:10.6030/1939-067x-2.1.16.
59. Rockwell S, Kimler BE, Moulder JE. Publishing negative results: the problem of publication bias. Radiat Res. 2006;165:623–625. doi:10.1667/RR3573.1.
60. Rosenthal R. The file drawer problem and tolerance for null results. Psychol Bull. 1979;86:638–641.
61. Sterling TD. Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. J Am Stat Assoc. 1959;54:30–34.
62. Song F, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14:1–220. doi:10.3310/hta14080.
63. Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol. 2010;8:e1000344. doi:10.1371/journal.pbio.1000344.
64. Fanelli D. Do pressures to publish increase scientists’ bias? An empirical support from US states data. PLoS ONE. 2010;5:e10271. doi:10.1371/journal.pone.0010271.