Abstract
Sharing knowledge is a basic tenet of the scientific community, yet publication bias arising from the reluctance or inability to publish negative or null results remains a long-standing and deep-seated problem, albeit one that varies in severity between disciplines and study types. Recognizing that previous endeavors to address the issue have been fragmentary and largely unsuccessful, this Consensus View proposes concrete and concerted measures that major stakeholders can take to create and incentivize new pathways for publishing negative results. Funders, research institutions, publishers, learned societies, and the research community all have a role in making this an achievable norm that will buttress public trust in science.
Sharing knowledge is a fundamental principle within the scientific community, yet null and negative results are still being underreported. This Consensus View discusses the problem of such publication bias and presents a roadmap to a solution that has a role for everyone in the scientific community in promoting knowledge sharing, research transparency and rigor.
Introduction
Scientific progress relies on the dissemination of knowledge so that others can build upon it. Yet not all scientific knowledge is disseminated [1]. Studies with ‘nonsignificant’ or ‘unexciting’ results often remain unpublished, a phenomenon known as ‘the file drawer problem’ or publication bias [2]. This bias persists despite decades of warnings about its consequences [3–8]. By its very nature, the extent of publication bias is difficult to ascertain, but the available data clearly indicate that null findings are underreported. For instance, according to recent studies, fewer than 2 in 100 articles on prognostic markers or animal models of stroke report null findings [9,10], a result that is unlikely to arise from a system that faithfully reports the outcome of experiments regardless of statistical significance. More concretely, published meta-analyses frequently show evidence of publication bias [11], and the introduction of the ‘registered report’ format, in which journals commit to publication before experimental outcomes are known, substantially increased the proportion of null findings in some psychology journals [12]. Nevertheless, publication bias remains the subject of ongoing debate. In a wide-ranging critique of the so-called ‘crisis narrative’ that has emerged in recent years, Fanelli [13] recognizes the problem but cautions against universalizing it, citing the lack of evidence across many fields. Silvertown and McConway [14] argue that the impact of publication bias may have been overstated due to underestimates of the tendency for false positives to eventually succumb to further scrutiny.
The extent of publication bias does seem to vary across disciplines. For example, surveys of meta-analyses suggest that publication bias is greater in some social science disciplines than it is in biomedical or physical sciences [11,15]. This variation can arise from any number of factors, ranging from choice of research methodology (e.g., observational studies, or research that is more descriptive, qualitative, or theoretical) to distinctive disciplinary norms or cultures. Importantly, both the magnitude and consequences of under-reporting of null findings also vary across disciplines. In biomedicine and clinical research, unreported null results can lead to patient-care risks, whereas in fields like economics or ecology, the societal impact of underreported null findings might be less obvious than their impact on research efficiency and the advancement of knowledge [16].
Notwithstanding the modulating effects of methodological and disciplinary variations, the consequences of publication bias can be severe. Unpublished null studies waste resources, slow the pace of science, and impede career advancement. Unwitting researchers are likely to expend time and money conducting similar experiments, not realizing that prior work has yielded null results: time and money that could have been spent pursuing new and more promising ideas. The failure to publish null findings also distorts the literature, resulting in exaggerated effect sizes [17–20], biased meta-analyses [11], inaccurate clinical trial results, flawed policy interventions [21–23], and the acceptance of false claims if incorrect ideas are not challenged [20,24]. Moreover, the rise of artificial intelligence (AI) models trained on incomplete data can magnify this problem, potentially contributing to misinformation [25] and undermining public trust in science.
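The claim that selective publication exaggerates effect sizes [17–20] can be illustrated with a minimal simulation (a hypothetical sketch with arbitrary parameters, not an analysis of the cited studies): when only statistically significant studies of a small true effect reach the literature, the average published estimate greatly exceeds the truth.

```python
import random
import statistics

random.seed(1)

def simulate_study(true_effect=0.2, n=30):
    """Simulate a two-group study; return (effect estimate, significant?)."""
    treat = [random.gauss(true_effect, 1.0) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treat) - statistics.mean(ctrl)
    # Standard error of the difference in means; simple z-test approximation.
    se = ((statistics.variance(treat) + statistics.variance(ctrl)) / n) ** 0.5
    return diff, abs(diff / se) > 1.96

all_effects, published_effects = [], []
for _ in range(5000):
    effect, significant = simulate_study()
    all_effects.append(effect)
    if significant:  # the 'file drawer': only significant results are published
        published_effects.append(effect)

print(f"true effect:              0.20")
print(f"mean over all studies:    {statistics.mean(all_effects):.2f}")
print(f"mean over published only: {statistics.mean(published_effects):.2f}")
```

With these (arbitrary) parameters, the mean across all simulated studies recovers the true effect, while the mean across the ‘published’ subset is inflated several-fold, mirroring the overstatement documented in meta-analyses of selectively published literatures.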
Publication bias is deeply rooted in scientific culture. Null studies are often perceived as less valuable, leading to harsher peer reviews [26–28] and lower citation rates [29,30]. Null studies are also widely believed to stand little chance of acceptance at higher-impact journals [31], further limiting their visibility and reinforcing biases against reporting such findings. Tenure and promotion systems frequently prioritize journal impact factors over methodological rigor [32,33], further discouraging researchers from sharing null results. Additional discouragement arises because reviewers of funding applications may focus on the perceived prestige of the publication outlet rather than the content of investigators’ prior work [34]. As a result, many investigators are reluctant to publish null studies despite the time and effort spent conducting experiments and other empirical studies [35–37]. Even apart from external drivers, investigators may, for good reason, lose confidence in a line of inquiry without having validated or controlled their findings to the rigorous degree that might be applied to a positive result, and therefore be less likely to write them up for publication. There are many ways for an experiment to go awry, and the constant pressure on researchers to focus on what they perceive to be their most productive lines of inquiry may lead them to set aside lines of research that have not yielded positive results.
Journal practices can also exacerbate publication bias. A recent analysis (https://go.nih.gov/looOO3o) by the US National Institute of Neurological Disorders and Stroke (NINDS) found that 180 out of 215 neuroscience journals did not explicitly welcome null studies, while only 14 appeared to accept null studies without additional conditions (e.g., a higher burden of evidence than required for positive studies). Although newer scientific dissemination mechanisms such as study preregistration and Registered Reports [38–41] have shown promise in increasing the publication of null results [12,42], they are not universally applicable [43–47], despite recent efforts to extend their use to exploratory and observational studies [48,49]. Other more recent formats, such as micropublications [50] and modular publications [51], also offer promising avenues for sharing null studies, as do F1000 open research platforms and preprint and data repository platforms such as bioRxiv, arXiv, OSFPreprints, Zenodo, Figshare, and Dryad (bioRxiv even has a dedicated ‘Contradictory Results’ section), all of which can offer more frictionless avenues to dissemination than traditional journals [52,53]. However, these platforms still underrepresent null findings (https://go.nih.gov/looOO3o).
Given the many factors that sustain publication bias, what scope is there to counteract it? In this Consensus View, we present a framework for practical action that seeks to enlist contributions from all the relevant stakeholders within the research ecosystem. Although our perspective is primarily biomedical, we believe there is a fresh opportunity for a wider discussion of the problem of publication bias across all disciplines and the construction of proportionate and effective measures to address it.
Methodology
The ideas and reflections presented here were generated in discussions by participants of the Novel Approaches to Preventing Publication Bias Workshop (https://go.nih.gov/Y6iYPxU), which was run by the NINDS Office of Research Quality in May 2024 in Bethesda, Maryland, USA. This meeting aimed to assess progress and identify remaining barriers to disseminating null studies across the research ecosystem.
Around 50 stakeholders were invited to ensure broad representation by sector (researchers and research institutions; traditional and nontraditional publishers; journal editors; funders; nonprofit organizations; industry; scientific societies), career stage (senior and early career researchers), and geography (United States of America, United Kingdom, Ireland, Germany, Spain, France, Mexico, Australia, plus virtual participation from Kenya). Although most attendees were drawn from biomedical fields, experts in psychology, economics, business, ecology, and public health were included to capture cross-disciplinary perspectives.
The agenda combined plenary presentations with five sector-focused breakout sessions (researchers, research institutions, traditional and nontraditional publishers, scientific societies and nonprofit organizations, and funders and policymakers), each facilitated by a domain expert and an early career participant. Sessions were followed by a closing plenary in which all attendees synthesized key insights.
Facilitators recorded breakout session notes using a shared template. These notes were consolidated in the closing plenary, then thematically coded by two independent analysts. Emergent themes were organized into the framework presented here, ensuring that domain-specific nuances and overarching patterns informed the final recommendations.
Values-based approach to system change
To address the causes of publication bias, scholars from diverse disciplinary and geographical backgrounds gathered at a NINDS-hosted meeting in May 2024. They agreed that while scientists should be free to choose which questions to pursue, whatever the results of their research, the current scientific culture clearly incentivizes the production of ‘significant’ or positive results over methodological rigor [54–57]. To transform this culture, we argue that there is a need to shift away from valuing only positive or ‘exciting’ results towards prioritizing the importance of the research question and the quality of the research process, regardless of outcome [55,58,59]. We therefore propose a values-based approach [60–66] to reforming policies, activities, and incentives to reduce or eliminate the occurrence of publication bias (Fig 1). At its core, we believe this means removing barriers to sharing all knowledge (regardless of statistical significance or perceived impact) and enabling, incentivizing, and centering the key academic values of transparency, accessibility, and openness.
Fig 1. Values-based approach to reducing publication bias.
Conceptual diagram illustrating the cyclic and iterative nature of the steps involved in seeking to end publication bias.
Drawing principally on experience from biomedical and related science, technology, engineering and mathematics domains, but with relevance to social science, the humanities, and beyond, we therefore call on all sectors of the scientific enterprise to commit openly to the dissemination of all knowledge and to enact specific, practical changes to accomplish this goal within their respective disciplines. We propose a framework for action that includes improved incentive structures to reward transparency and rigor, the creation of simpler mechanisms for reporting null results, and collaboration among sectors of the scientific community to achieve this goal.
Sustaining these changes will require reinforcement from a wide range of actors in the funding, delivery, and dissemination of research. Below, we outline concrete steps for different stakeholders to reduce publication bias.
Openly commit to the value of disseminating all knowledge
As a first step, we ask all scientific entities (including institutions, departments, core facilities, libraries, ethics committees, scientific societies, funders, journals, laboratory groups, and individual scientists) to reflect on the value of sharing and disseminating all knowledge and to hold internal conversations on why this is crucial for their missions. This is essential to winning internal buy-in from across organizations so that they are positioned to make credible and achievable commitments within their respective domains to sharing all knowledge and eliminating publication bias.
Identify which practices align with sharing all knowledge
Funders, research-performing organizations, and publishers should assess whether current practices facilitate or hinder transparent research processes. For example, do current practices of researcher assessment (e.g., graduation/degree requirements, promotion, and tenure policies) encourage dissemination of all high-quality research, including null results, or do they focus only on the most ‘exciting’ results (e.g., by heavily weighing bibliometrics or media coverage, which are dubious measures of research quality [67–70])? Does manuscript peer review give due weight to the questions being addressed and the methodological quality of the experiments when determining scientific merit and impact? Does grant review focus on the methodological rigor of investigators’ prior work rather than on the journals in which their papers were published? In many current research settings, the answer is no [55,71]. Explicitly outlining which practices should be modified enables a systematic approach to conceptualizing and implementing these changes, such as through new resources or incentives. Sharing research findings publicly should be a default part of the research workflow rather than an optional step that depends on study outcomes. Commitments to sharing all knowledge should be accompanied by a clear plan of action so that organizations can publicly be held accountable. Table 1 illustrates specific actions for different entities to consider.
Table 1. High-priority interventions that promote the dissemination of all knowledge, including null studies, across research fields.
| Entity | High-priority activities that align with sharing all knowledge |
|---|---|
| Funders | • Require reporting of all outcomes from previously funded proposals (including work packages not undertaken and null studies) in curricula vitae/biosketches and progress reports, and take this into account during funding considerations. • Support existing platforms and pilots for simplifying and facilitating the default dissemination of all rigorous studies, such as free, funder-backed online venues for disseminating short, quality-controlled null studies linked to their underlying protocols and data. • Modify grant review criteria and train peer reviewers to focus more heavily on the investigator’s history of sharing all knowledge and the methodological rigor of the prior work supporting the research question and of the proposed work (and less heavily on exciting but potentially misleading preliminary outcomes of experiments). |
| Research institutions and their sub-entities | • Reform researcher assessment across all career stages (e.g., training, hiring, promotion and tenure) by enlarging definitions of research ‘excellence’ and ‘impact’ to include ‘pursuing important research questions and performing sound research that is transparently reported’, rather than relying primarily on bibliometric indicators, and by ensuring PhD graduation requirements include reporting of all rigorously performed studies in the dissertation or another public venue, without expectation for positive results. • Provide training to researchers at all career stages on the importance and process of sharing all knowledge, including null studies, within undergraduate and graduate curricula and other professional development programs • Provide infrastructure and logistical support for sharing a variety of research outputs |
| Traditional publishers | • Publicly welcome and actively promote the submission and publication of methodologically sound null studies and Registered Reports • Track submission of null studies to monitor progress • Provide guidance to peer reviewers and editors on how to assess methodology while remaining neutral to the direction (or statistical significance) of outcomes |
| Newer research dissemination venues | • Build upon existing platforms that remove barriers to submission of methodologically sound null findings by including: free submission and publication; simple formats that do not require complete ‘stories’; the capacity to describe full methodology, such as linking to published protocols; pre- or post-publication peer-review of methods; and interconnected data repositories to facilitate data citation. |
| Societies & organizations | • Raise awareness of the importance of null studies, for example, through: hosting discussions of the extent and impacts of publication bias within their disciplines; encouraging submission of null studies to associated journals; developing best practice toolkits and educational workshops; creating early career ambassador programs to advocate for change; featuring null studies at conferences; and providing travel awards to individuals who have performed high-quality studies that happened to produce null results. |
| Researchers of all career stages | • Be a role model for other researchers (including mentees) by sharing null studies publicly. • Advocate for change via roles as colleagues, mentors, peer reviewers, journal editors, institutional administrators, or scientific society officers, and by creating or joining networks of other researchers to share best practices for effecting change. |
While the ideas in Table 1 represent possible starting points for different stakeholders, we do not wish to be overly prescriptive in how to effect change. The most effective solutions are likely to be tailored to disciplinary norms and practices and will therefore require ground-level input, but we would nevertheless highlight some ideas for practical implementation by different stakeholders. Funders, for example, could pilot a ‘null-results summary’ program that requires a one-page report of null findings linked to project registries that are accessible to reviewers of subsequent funding applications; such reports could also usefully include mention of work packages or proposed research questions that were not pursued (e.g., because of unanticipated developments or shifts in priorities). Institutions could further support this effort by hosting discipline-specific ‘null-data clinics’, periodic forums where researchers present null results alongside methodological lessons learned. Publishers could place a more explicit emphasis on the review of methodological rigor when handling manuscripts and highlight high-quality papers reporting null findings to encourage submissions. They could also monitor acceptance rates of manuscripts containing positive or negative results to iteratively refine their guidelines for authors and reviewers.
Enable and reinforce change
Given their influence in the system, research funders (public and private) are uniquely positioned to lead the implementation of policies that incentivize research transparency. Funder policies that are adequately enforced provide strong but proportionate incentives to change researcher reporting behavior and institutional researcher assessment practices. Funders also have the resources to spearhead the development of new infrastructure for easy dissemination outside of traditional publishing venues, for example, by supporting platforms that enable simple and widespread sharing of concise reports of null findings or unfinished projects.
Academic institutions will no doubt want to ensure their researchers are responding to funder incentives, but can synergize in other ways with efforts to address publication bias. For example, when assessing researchers, they should prioritize methodological quality in evaluating experimental outcomes. This could encourage a shift away from reliance on publication prestige and support outlets that emphasize methodological quality and openness. Enhancing education and awareness about effective knowledge sharing will further support these changes.
More ambitious actions, such as creating new tools or platforms for publication and review of null results, will likely require careful staged implementation, piloting, and engagement with different stakeholders (e.g., funders, publishers, researchers, and reviewers). Only with proper incentives and well-designed processes will barriers to the publication of null results be overcome.
Evaluate and iteratively improve interventions over time
As with any effort to enact genuine reform, a crucial step will be to document, evaluate, and publicly share whether modifications to processes, policies, and practices increase research transparency. At the same time, reformers should be mindful to avoid any negative unintended consequences (e.g., inequity, inappropriate gaming of the system, or undue burden). There are a number of formal evaluation frameworks that may be helpful for guiding the change process [72–74]. These emphasize the importance of a systems-level perspective, collaborating with stakeholders to develop and contextualize the framework, and iterative incorporation of new insights to refine the intervention. Openness to critical evaluation throughout cycles of reform (Fig 1) will be essential if we are to make real progress in addressing publication bias.
Conclusions
While sharing knowledge is a fundamental principle within the scientific community, publication bias has remained a stubbornly persistent problem and requires renewed attention from the broader research community as part of wider debates about the health of the research and scholarly enterprise. Addressing the challenge of publication bias will not only enhance the progress of research but also buttress the social contract on which publicly funded research relies for continued support. Our roadmap has a role for all stakeholders (funders, institutions, publishers, and researchers) in promoting knowledge sharing, research transparency, and rigor, but change will only happen if we are all willing to play our part.
Acknowledgments
The authors would like to thank the organizers of the 2024 National Institute of Neurological Disorders and Stroke (NINDS) Novel Approaches to Preventing Publication Bias workshop, including Devon Crawford, Mariah Hoye, and Shai Silberberg, who contributed to early drafts of this manuscript. The authors would also like to thank all workshop participants for lively and informative discussions that shaped this paper, including Prachee Avasthi, Anna Hatch, Erin McKiernan, and Bodo Stern, who provided insightful comments on this manuscript. The content of this publication does not reflect the views or policies of the Department of Health and Human Services or of authors’ affiliated organizations, nor does mention of trade names, commercial products, or organizations imply endorsement by the United States Government.
Funding Statement
D.B. is supported by research funding from the National Institutes of Health (National Institute on Aging, National Institute of Neurological Disorders and Stroke). N.B. acknowledges funding through the National Institutes of Health (National Institute of General Medical Sciences, Grant Ref. R35GM141835). M.G.T. receives partial funding from the Michael J Fox Foundation, the National Institutes of Health (National Institute of Neurological Disorders and Stroke), and the Indiana Collaborative Initiative for Talent Enrichment (INCITE) Lilly Endowment Fund. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
References
- 1. Fanelli D. Negative results are disappearing from most disciplines and countries. Scientometrics. 2011;90(3):891–904. doi: 10.1007/s11192-011-0494-7
- 2. Rosenthal R. The file drawer problem and tolerance for null results. Psychol Bull. 1979;86(3):638–41. doi: 10.1037/0033-2909.86.3.638
- 3. Greenwald AG. Consequences of prejudice against the null hypothesis. Psychol Bull. 1975;82(1):1–20. doi: 10.1037/h0076157
- 4. Coursol A, Wagner EE. Effect of positive findings on submission and acceptance rates: a note on meta-analysis bias. Prof Psychol: Res Pr. 1986;17(2):136–7. doi: 10.1037/0735-7028.17.2.136
- 5. Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol. 1986;4(10):1529–41. doi: 10.1200/JCO.1986.4.10.1529
- 6. Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA. 1990;263(10):1385. doi: 10.1001/jama.1990.03440100097014
- 7. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan A-W, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008;3(8):e3081. doi: 10.1371/journal.pone.0003081
- 8. Rothstein HR. Publication bias as a threat to the validity of meta-analytic results. J Exp Criminol. 2007;4(1):61–81. doi: 10.1007/s11292-007-9046-9
- 9. Kyzas PA, Denaxa-Kyza D, Ioannidis JPA. Almost all articles on cancer prognostic markers report statistically significant results. Eur J Cancer. 2007;43(17):2559–79. doi: 10.1016/j.ejca.2007.08.030
- 10. Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol. 2010;8(3):e1000344. doi: 10.1371/journal.pbio.1000344
- 11. Bartoš F, Maier M, Wagenmakers E-J, Nippold F, Doucouliagos H, Ioannidis JPA, et al. Footprint of publication selection bias on meta-analyses in medicine, environmental sciences, psychology, and economics. Res Synth Methods. 2024;15(3):500–11. doi: 10.1002/jrsm.1703
- 12. Scheel AM, Schijen MRMJ, Lakens D. An excess of positive results: comparing the standard psychology literature with registered reports. Adv Methods Pract Psychol Sci. 2021;4(2). doi: 10.1177/25152459211007467
- 13. Fanelli D. Is science in crisis? Research integrity. New York: Oxford University Press. 2022. p. 93–121. doi: 10.1093/oso/9780190938550.003.0004
- 14. Silvertown J, McConway KJ. Does “Publication Bias” lead to biased science? Oikos. 1997;79(1):167. doi: 10.2307/3546101
- 15. Fanelli D, Costas R, Ioannidis JPA. Meta-assessment of bias in science. Proc Natl Acad Sci U S A. 2017;114(14):3714–9. doi: 10.1073/pnas.1618569114
- 16. Fanelli D. Opinion: is science really facing a reproducibility crisis, and do we need it to? Proc Natl Acad Sci U S A. 2018;115(11):2628–31. doi: 10.1073/pnas.1708272114
- 17. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358(3):252–60. doi: 10.1056/NEJMsa065779
- 18. Turner EH, Knoepflmacher D, Shapley L. Publication bias in antipsychotic trials: an analysis of efficacy comparing the published literature to the US Food and Drug Administration database. PLoS Med. 2012;9(3):e1001189. doi: 10.1371/journal.pmed.1001189
- 19. Roest AM, de Jonge P, Williams CD, de Vries YA, Schoevers RA, Turner EH. Reporting bias in clinical trials investigating the efficacy of second-generation antidepressants in the treatment of anxiety disorders: a report of 2 meta-analyses. JAMA Psychiatry. 2015;72(5):500–10. doi: 10.1001/jamapsychiatry.2015.15
- 20. de Vries YA, Roest AM, de Jonge P, Cuijpers P, Munafò MR, Bastiaansen JA. The cumulative effect of reporting and citation biases on the apparent efficacy of treatments: the case of depression. Psychol Med. 2018;48(15):2453–5. doi: 10.1017/S0033291718001873
- 21. van der Worp HB, Howells DW, Sena ES, Porritt MJ, Rewell S, O’Collins V, et al. Can animal models of disease reliably inform human studies? PLoS Med. 2010;7(3):e1000245. doi: 10.1371/journal.pmed.1000245
- 22. Marks-Anglin A, Chen Y. A historical review of publication bias. Res Synth Methods. 2020;11(6):725–42. doi: 10.1002/jrsm.1452
- 23. Nakagawa S, Lagisz M, Yang Y, Drobniak SM. Finding the right power balance: better study design and collaboration can reduce dependence on statistical power. PLoS Biol. 2024;22(1):e3002423. doi: 10.1371/journal.pbio.3002423
- 24. Nissen SB, Magidson T, Gross K, Bergstrom CT. Publication bias and the canonization of false facts. Elife. 2016;5:e21451. doi: 10.7554/eLife.21451
- 25. Brazil R. Illuminating “the ugly side of science”: fresh incentives for reporting negative results. Nature. 2024. doi: 10.1038/d41586-024-01389-7
- 26. Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, Leopold SS. Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Arch Intern Med. 2010;170(21):1934–9. doi: 10.1001/archinternmed.2010.406
- 27. Muradchanian J, Hoekstra R, Kiers H, van Ravenzwaaij D. The role of results in deciding to publish: a direct comparison across authors, reviewers, and editors based on an online survey. PLoS One. 2023;18(10):e0292279. doi: 10.1371/journal.pone.0292279
- 28. von Klinggraeff L, Burkart S, Pfledderer CD, Saba Nishat MN, Armstrong B, Weaver RG, et al. Scientists’ perception of pilot study quality was influenced by statistical significance and study design. J Clin Epidemiol. 2023;159:70–8. doi: 10.1016/j.jclinepi.2023.05.011
- 29. Fanelli D. Positive results receive more citations, but only in some disciplines. Scientometrics. 2012;94(2):701–9. doi: 10.1007/s11192-012-0757-y
- 30. Jannot A-S, Agoritsas T, Gayet-Ageron A, Perneger TV. Citation bias favoring statistically significant studies was present in medical research. J Clin Epidemiol. 2013;66(3):296–301. doi: 10.1016/j.jclinepi.2012.09.015
- 31. Joober R, Schmitz N, Annable L, Boksa P. Publication bias: what are the challenges and can they be overcome? J Psychiatry Neurosci. 2012;37(3):149–52. doi: 10.1503/jpn.120065
- 32. Schimanski LA, Alperin JP. The evaluation of scholarship in academic promotion and tenure processes: past, present, and future. F1000Res. 2018;7:1605. doi: 10.12688/f1000research.16493.1
- 33. McKiernan EC, Schimanski LA, Muñoz Nieves C, Matthias L, Niles MT, Alperin JP. Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. Elife. 2019;8:e47338. doi: 10.7554/eLife.47338
- 34. Rockey S. Changes to the biosketch. Extramural Nexus. 2014.
- 35. ter Riet G, Korevaar DA, Leenaars M, Sterk PJ, Van Noorden CJF, Bouter LM, et al. Publication bias in laboratory animal research: a survey on magnitude, drivers, consequences and potential solutions. PLoS One. 2012;7(9):e43404. doi: 10.1371/journal.pone.0043404
- 36. Echevarría L, Malerba A, Arechavala-Gomeza V. Researcher’s perceptions on publishing “negative” results and open access. Nucleic Acid Ther. 2021;31(3):185–9. doi: 10.1089/nat.2020.0865
- 37. Herbet M, Leonard J, Santangelo MG, Albaret L. Dissimulate or disseminate? A survey on the fate of negative results. Learned Publishing. 2022;35(1):16–29. doi: 10.1002/leap.1438
- 38. Nosek BA, Ebersole CR, DeHaven AC, Mellor DT. The preregistration revolution. Proc Natl Acad Sci U S A. 2018;115(11):2600–6. doi: 10.1073/pnas.1708274114
- 39. Drax K, Clark R, Chambers CD, Munafò M, Thompson J. A qualitative analysis of stakeholder experiences with Registered Reports Funding Partnerships. Wellcome Open Res. 2021;6:230. doi: 10.12688/wellcomeopenres.17029.1
- 40. Henderson EL, Chambers CD. Ten simple rules for writing a Registered Report. PLoS Comput Biol. 2022;18(10):e1010571. doi: 10.1371/journal.pcbi.1010571
- 41. Hardwicke TE, Wagenmakers E-J. Reducing bias, increasing transparency and calibrating confidence with preregistration. Nat Hum Behav. 2023;7(1):15–26. doi: 10.1038/s41562-022-01497-2
- 42. Allen C, Mehler DMA. Open science challenges, benefits and tips in early career and beyond. PLoS Biol. 2019;17(12):e3000587. doi: 10.1371/journal.pbio.3000587
- 43. Bakker M, Veldkamp CLS, van Assen MALM, Crompvoets EAV, Ong HH, Nosek BA, et al. Ensuring the quality and specificity of preregistrations. PLoS Biol. 2020;18(12):e3000937. doi: 10.1371/journal.pbio.3000937
- 44. Pham MT, Oh TT. Preregistration is neither sufficient nor necessary for good science. J Consum Psychol. 2021;31(1):163–76. doi: 10.1002/jcpy.1209
- 45.Chambers CD, Tzavella L. The past, present and future of Registered Reports. Nat Hum Behav. 2022;6(1):29–42. doi: 10.1038/s41562-021-01193-7 [DOI] [PubMed] [Google Scholar]
- 46.Anthony N, Tisseaux A, Naudet F. Published registered reports are rare, limited to one journal group, and inadequate for randomized controlled trials in the clinical field. J Clin Epidemiol. 2023;160:61–70. doi: 10.1016/j.jclinepi.2023.05.016 [DOI] [PubMed] [Google Scholar]
- 47.Manago B. Preregistration and registered reports in sociology: Strengths, weaknesses, and other considerations. Am Soc. 2023;54(1):193–210. doi: 10.1007/s12108-023-09563-6 [DOI] [Google Scholar]
- 48.Leavitt VM. Strengthening the evidence: a call for preregistration of observational study outcomes. Mult Scler. 2020;26(12):1608–9. doi: 10.1177/1352458519886071 [DOI] [PubMed] [Google Scholar]
- 49.Dal-Ré R, Ioannidis JP, Bracken MB, Buffler PA, Chan A-W, Franco EL, et al. Making prospective registration of observational research a reality. Sci Transl Med. 2014;6(224):224cm1. doi: 10.1126/scitranslmed.3007513 [DOI] [PubMed] [Google Scholar]
- 50.Raciti D, Yook K, Harris TW, Schedl T, Sternberg PW. Micropublication: incentivizing community curation and placing unpublished data into the public domain. Database (Oxford). 2018;2018:bay013. doi: 10.1093/database/bay013 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Dhar P. Octopus and ResearchEquals aim to break the publishing mould. Nature. 2023. doi: 10.1038/d41586-023-00861-0 [DOI] [PubMed] [Google Scholar]
- 52.Tennant J, Bauin S, James S, Kant J. The evolving preprint landscape: introductory report for the Knowledge Exchange working group on preprints. Center for Open Science. 2018. doi: 10.31222/osf.io/796tu [DOI] [Google Scholar]
- 53.Sarabipour S, Debat HJ, Emmott E, Burgess SJ, Schwessinger B, Hensel Z. On the value of preprints: an early career researcher perspective. PLoS Biol. 2019;17(2):e3000151. doi: 10.1371/journal.pbio.3000151 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Casadevall A, Fang FC. Causes for the persistence of impact factor mania. mBio. 2014;5(2):e00064-14. doi: 10.1128/mBio.00064-14 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Moore S, Neylon C, Eve MP, O’Donnell DP, Pattinson D. Excellence R US: University research and the fetishisation of excellence. Palgrave Communications. 2017;3:16105. doi: 10.1057/palcomms.2017.105 [DOI] [Google Scholar]
- 56.Ellis RJ. Questionable research practices, low statistical power, and other obstacles to replicability: Why preclinical neuroscience research would benefit from registered reports. eNeuro. 2022;9(4). doi: 10.1523/ENEURO.0017-22.2022 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Crawford DC, Hoye ML, Silberberg SD. From methods to monographs: fostering a culture of research quality. eNeuro. 2023;10(8). doi: 10.1523/ENEURO.0247-23.2023 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58.Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JPA, Goodman SN. Assessing scientists for hiring, promotion, and tenure. PLoS Biol. 2018;16(3):e2004089. doi: 10.1371/journal.pbio.2004089 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59.Laitin DD, Miguel E, Alrababa’h A, Bogdanoski A, Grant S, Hoeberling K, et al. Reporting all results efficiently: a RARE proposal to open up the file drawer. Proc Natl Acad Sci U S A. 2021;118(52):e2106178118. doi: 10.1073/pnas.2106178118 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60.Dougherty MR, Slevc LR, Grand JA. Making research evaluation more transparent: aligning research philosophy, institutional values, and reporting. Perspect Psychol Sci. 2019;14(3):361–75. doi: 10.1177/1745691618810693 [DOI] [PubMed] [Google Scholar]
- 61.Agate N, Kennison R, Konkiel S, Long CP, Rhody J, Sacchi S, et al. The transformative power of values-enacted scholarship. Humanit Soc Sci Commun. 2020;7(1). doi: 10.1057/s41599-020-00647-z [DOI] [Google Scholar]
- 62.Schmidt R, Curry S, Hatch A. Creating SPACE to evolve academic assessment. Elife. 2021;10:e70929. doi: 10.7554/eLife.70929 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 63.HuMetricsHSS. Walking the talk: toward a values-aligned academy. 2022. https://hcommons.org/deposits/item/hc:44631/
- 64.Carter C, Dougherty MR, McKiernan EC, Tananbaum G. Promoting values-based assessment in review, promotion, and tenure processes. Commonplace. 2023. doi: 10.21428/6ffd8432.9eadd603 [DOI] [Google Scholar]
- 65.Himanen L, Conte E, Gauffriau M, Strøm T, Wolf B, Gadd E. The SCOPE framework - implementing ideals of responsible research assessment. F1000Res. 2024;12:1241. doi: 10.12688/f1000research.140810.2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66.McKiernan E, Carter C, Dougherty MR, Tananbaum G. A framework for values-based assessment in promotion, tenure, and other academic evaluations. Center for Open Science. 2024. doi: 10.31219/osf.io/s4vc5 [DOI] [Google Scholar]
- 67.Nieminen P, Carpenter J, Rucker G, Schumacher M. The relationship between quality of research and citation frequency. BMC Med Res Methodol. 2006;6:42. doi: 10.1186/1471-2288-6-42 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68.Aksnes DW, Langfeldt L, Wouters P. Citations, citation indicators, and research quality: an overview of basic concepts and theories. Sage Open. 2019;9(1). doi: 10.1177/2158244019829575 [DOI] [Google Scholar]
- 69.Saginur M, Fergusson D, Zhang T, Yeates K, Ramsay T, Wells G, et al. Journal impact factor, trial effect size, and methodological quality appear scantly related: a systematic review and meta-analysis. Syst Rev. 2020;9(1):53. doi: 10.1186/s13643-020-01305-w [DOI] [PMC free article] [PubMed] [Google Scholar]
- 70.Dougherty MR, Horne Z. Citation counts and journal impact factors do not capture some indicators of research quality in the behavioural and brain sciences. R Soc Open Sci. 2022;9(8):220334. doi: 10.1098/rsos.220334 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 71.Aubert Bonn N, Pinxten W. Rethinking success, integrity, and culture in research (part 1) - a multi-actor qualitative study on success in science. Res Integr Peer Rev. 2021;6(1):1. doi: 10.1186/s41073-020-00104-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 72.Fernandez ME, Ruiter RAC, Markham CM, Kok G. Intervention mapping: theory-and evidence-based health promotion program planning: perspective and examples. Front Public Health. 2019;7:209. doi: 10.3389/fpubh.2019.00209 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Kidder DP, Fierro LA, Luna E, Salvaggio H, McWhorter A, Bowen S-A, et al. CDC program evaluation framework, 2024. MMWR Recomm Rep. 2024;73(6):1–37. doi: 10.15585/mmwr.rr7306a1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 74.Longworth GR, Goh K, Agnello DM, Messiha K, Beeckman M, Zapata-Restrepo JR, et al. A review of implementation and evaluation frameworks for public health interventions to inform co-creation: a Health CASCADE study. Health Res Policy Syst. 2024;22(1):39. doi: 10.1186/s12961-024-01126-6 [DOI] [PMC free article] [PubMed] [Google Scholar]