On March 6, 1665, the first scientific journal, Philosophical Transactions (of the Royal Society of London), was launched in England. The publication marked the birth of scientific publishing as a formal mechanism of dissemination and as a repository of scientific knowledge. The journal was edited and published by the German-born natural philosopher Henry Oldenburg, the first Secretary of the Royal Society of London. The inaugural issue included reports such as An account of a very odd monstrous calf (Boyle 1665), A narrative concerning the success of pendulum-watches at sea for the longitudes (Holmes 1665), and other curious short notes that bear little resemblance to modern scientific articles. The decision to publish these early reports rested with the editor.
During the nineteenth century, editorial authority gradually became more distributed, so that publication decisions no longer relied on the judgment of a single individual. In the twentieth century, the current anonymous peer review system became standard practice. Peer review, coupled with improvements in experimental design and statistical analysis, added credibility to scientific reports and has been a cornerstone of public trust in the academic publishing process.
Since 1665, more than 50 million scientific papers have been published (Jinha 2010), but some aspects of the current academic environment are concerning. For example, Richardson et al. (2025) estimated that the number of scientific articles doubles every 15 years, but that retracted and fraudulent papers are doubling much faster, every 3.3 years and 1.5 years, respectively. There are several likely reasons for the increasing rates of retraction and fraud, but they undoubtedly include incentive systems that reward bad actors, either financially or reputationally, through quantitative metrics such as the number of published papers, impact factors, or h-indices. [To read a humorous proposal about a new h-index, we refer you to the Editorial by Corbett-Detig (2026) published in this issue.]
Scientific fraud is common enough that the term “paper mills” has been coined to describe operations that produce poor-quality or fraudulent papers designed to resemble genuine research, often including offers of authorship in exchange for money (Parker et al. 2024). Unfortunately, editorial boards are not immune to poor practices, and they may even promote them. Richardson et al. (2025), for example, found that particular editors at PLOS One handled only 1.3% of published papers but nearly a third of the retracted papers. Another study estimated that 2.6% of 2.85 million articles published in life and health science journals showed some evidence of paper mill production (Van Noorden 2023).
Although 2.6% may sound like a small number, it is a damaging proportion because any fraudulent activity erodes public trust in science. Moreover, faulty or fraudulent data can enter the public sphere; in the United States, putatively fraudulent data have recently been used by policy makers to argue against scientific consensus. Other anecdotes suggest that attacks on scientific ethics are likely quite common. Here are just a few examples: hidden messages have been discovered in submitted manuscripts in an attempt to manipulate AI-assisted peer review (Gibney 2025), a Molecular Biology and Evolution (MBE) Editor-in-Chief was recently approached with an offer of financial compensation in exchange for citations to papers, and a social media platform advertised authorship on an MBE paper in exchange for paying article processing charges. [This is a good place to remind the SMBE community that MBE offers many waivers for article processing charges; see Gaut and Russo (2025) for more details.]
To counteract threats to publishing ethics, the first meeting of the Committee on Publication Ethics (COPE) was convened in April 1997. The meeting was organized after Michael Farthing detected several misconduct cases during his first year as Editor-in-Chief of the journal Gut. Soon after, the first COPE guidelines were published (Fulford et al. 1999) with the goal of upholding the integrity of the scholarly record. Currently, COPE provides best-practice standards throughout the editorial process, including guidelines for editors, reviewers, and authors. Through the COPE platform, members can submit ethical cases, and COPE publishes an anonymized version of each case along with its recommendation. Listed cases include paper retractions, author removals, image alterations, concerns about animal welfare, duplicate submissions, non-compliance of authors with information requests, and other topics. All COPE trustees, council members, and advisors are volunteers.
MBE has been a COPE member since 2015, and the journal has actively incorporated COPE principles into our editorial policies and practices. For example, COPE has specific guidance on editorial board participation and transparency in the editorial organization. To that end, our editorial oversight is distributed across three levels: two co-Editors-in-Chief, 19 Senior Editors, and 59 Associate Editors, all assisted by our Managing Editor (Heather Howe) and an Editorial Assistant (Chloe Masten). Editors-in-Chief serve a once-renewable four- or five-year term, and Senior and Associate Editors serve renewable three-year terms. We strive for a board that represents the diversity of topics in our field, maintains fair gender representation (currently 38 women and 42 men), and has geographical representation that aligns with submission trends.
In terms of transparency, it is worth elaborating on the editorial steps that lead to manuscript decisions. The editorial office conducts an initial assessment of all submitted manuscripts, primarily checking formatting requirements, topical pertinence, and potential impact. Manuscripts that pass this first assessment are assigned to a Senior Editor (SE) with expertise in the broad subject area of the manuscript. The SE then conducts a more in-depth evaluation of the topic, scope, and overall suitability for MBE, and may either assign the manuscript to an Associate Editor (AE) with specialized expertise or reject it without review. These “desk rejects” are designed to provide rapid feedback to our community of authors so that they can submit to other journals without delay; indeed, the average time to a desk reject is just over four days. A manuscript can also be desk-rejected after it is assigned to an AE, but most papers assigned to AEs are sent for peer review. In total, approximately 50% of submitted manuscripts are sent for peer review; the other half are rejected without review, usually due to limited scope, incremental objectives, or lack of a clear evolutionary focus. These rejections do not necessarily reflect the quality of the science but rather the topic and perceived impact. Generally, MBE seeks papers with clear, competent science that contain multi-layered analyses or approaches, are sufficiently novel, and target a broad audience of evolutionary biologists.
Peer review at MBE relies on a single-anonymized system: authors’ identities are known to editors and reviewers, but reviewers’ identities remain concealed from authors. Both editors and reviewers are expected to adhere to COPE guidelines by disclosing any potential conflicts of interest. Once reviews are returned, AEs make a recommendation, but the final decision rests with the SE, pending approval by the editorial office. Although this authority is seldom used, MBE editors can rescind unsuitable reviews, and they may make minor edits to reviews when inappropriate terms or phrases have been used, provided that the edits do not change the meaning or intention of the review. What should be clear from this discussion is that publication in MBE requires agreement and consensus across an editorial hierarchy; even papers rejected without review are seen by at least two layers of the editorial structure.
At MBE, peer review is not a small operation. In 2025, our editorial board invited 4,986 reviewers to assess submissions. Of those, 36% (or 1,793 reviewers) accepted the reviewing assignment for 676 submissions, resulting in an average of 2.6 reviewers per manuscript. It is not trivial to produce high-quality reviews rapidly, but for the past few years, the median time to a first peer-review decision has been around 50 days. Each year, MBE receives around 1,300 manuscripts, of which 18% to 20% are accepted for publication.
In its history, which began in 1983, MBE has published more than 8,000 scientific reports. Of course, errors occasionally find their way into published papers, and these require transparent correction following COPE guidelines. Corrections (also known as corrigenda or errata) are short post-publication notices that address errors identified after a paper is published. Most corrections describe minor but necessary figure adjustments, updates to supplemental material, modifications of links to data, or rewordings, but changes to the author list have also been issued. Post-publication corrections are typically initiated at the unanimous request of the authors or, if the authors disagree, by an appropriate institutional authority. Since 1983, MBE has issued 200 corrections that represent efforts to rectify the scientific record. More serious concerns about the reliability of a publication may warrant retraction when conclusive evidence of malfeasance or gross error is available, but this is rare: only four papers have been retracted from MBE to date. For both post-publication corrections and retractions, MBE follows COPE guidelines carefully to vet the nature of the error, ensure rigorous standards of objectivity are met, and issue any needed corrections to the scholarly record.
What is MBE doing to try to avoid ethical problems? Recently, MBE and OUP have implemented integrity-screening tools designed to identify problematic or potentially fraudulent manuscripts, including those generated by paper mills. These screens use artificial intelligence (AI) tools and advanced plagiarism-detection algorithms. Every submitted manuscript is subjected to a plagiarism check, and we are also piloting tools that flag AI-generated text and images. In general, MBE recommends that AI-based tools not be used to generate, alter, or interpret scientific content. However, they may be used for data processing, for code and script revisions, and for minor image adjustments, as long as image interpretation is unaffected. The use of AI to improve readability and to revise or translate English text can be very helpful to the non-native speakers in our community (Mank 2025). MBE simply requires that authors disclose all AI use in both the cover letter and the acknowledgments, and that reviewers disclose AI use in their reviews.
As evidenced by offers of authorship-for-pay posted on social media, high standards of authorship are essential. At MBE, authorship must be restricted to researchers who have actively contributed to the experimental design and project execution and, as COPE recommends, who are able to take long-term responsibility for the published article. Hence, our policies prohibit ghost authorship and bar individuals who made little or no contribution from being listed as authors, although this is admittedly difficult to police. AI language tools, such as ChatGPT, also do not qualify for authorship (Thorp 2023; Mank 2025). Another authorship concern is “parachute science,” a term that refers to a seemingly collaborative project led by researchers from wealthier countries while researchers from other countries are relegated to smaller roles focused on access to specimens or data. SMBE is taking a leading stance against this problem (Lerat 2025), and MBE supports that stance.
Despite our best efforts, scientific errors are bound to occur both in peer review and in MBE publications. For the former, authors can appeal a rejection and request reconsideration when they believe that the decision was based on misinterpretation or bias. We note, however, that desk-rejected manuscripts are not eligible for appeal, as these decisions are largely judgments of topic, impact, and suitability. If an appeal is accepted by the editorial office, the revised manuscript will likely repeat the full editorial process, and the handling editors will have access to the original version of the manuscript. In line with COPE recommendations, we also allow critical commentaries that address papers published in MBE and elsewhere. These are often published as Brief Communications and should be of interest to our broad readership rather than focusing solely on criticism of a single manuscript. Importantly, commentaries and criticisms are subject to the same peer-review rigor as regular manuscripts, and they are expected to provide new insights.
We are extremely proud of the manuscripts published in MBE. MBE is a highly respected journal within and beyond our immediate molecular evolutionary community. It remains so because of our vibrant and active volunteer community of reviewers and editors, whom we applaud for their willingness to support a society-owned, non-profit journal in an environment of ever-expanding for-profit publishing (Gaut and Russo 2025). We understand the burden placed on our community and continue to search for ways to lessen it with modern tools, such as AI detection of internal inconsistencies among images, tables, text, and supplemental materials, or AI evaluation of the appropriate use of references (Perlis et al. 2025). We want to introduce improved practices while maintaining a transparent and meaningful editorial process for authors (see also Mank 2025). It takes a vigilant community to guard against the growth of malfeasance in the academic publishing environment, and society-owned journals like MBE must take a leading role. Along with the core responsibilities of dissemination and serving as a repository of the scientific record, we must safeguard the reliability of that record to ensure the long-term durability of science as a way of knowing.
Acknowledgments
AI tools were used to revise the text and flow of this editorial, but did not generate content.
Contributor Information
Claudia A M Russo, Genetics Department, Federal University of Rio de Janeiro, Rio de Janeiro, RJ 21941-902, Brazil.
Brandon S Gaut, Department of Ecology and Evolutionary Biology, U.C. Irvine, Irvine, CA 92697-2525, USA.
Funding
This study was financially supported by the National Council for Scientific and Technological Development of Brazil (CNPq 301659/2025-7) and by the Rio de Janeiro State Research Funding Agency (FAPERJ E-26/010.001887/2019, SEI-260003/001170/2020, SEI-260003/012995/2021, SEI-260003/006126/2024) through grants received by CAMR. This work was also supported by U.S. National Science Foundation grant DEB-2414478 to BSG.
Data Availability
There are no data associated with this editorial.
References
- Boyle R. An account of a very odd monstrous calf. Philos Trans. 1665:1:10. 10.1098/rstl.1665.0007.
- Fulford P. Committee on Publication Ethics (COPE): guidelines on good publication practice. Meeting April 27, 1999. The Committee on Publication Ethics Report 1999. 10.1136/oem.57.8.506.
- Gaut BS, Russo CAM. A message from the Editors-in-Chief. Mol Biol Evol. 2025:42:msaf022. 10.1093/molbev/msaf022.
- Gibney E. Scientists hide messages in papers to game AI peer review. Nature. 2025:643:887–888. 10.1038/d41586-025-02172-y.
- Holmes M. A narrative concerning the success of pendulum-watches at sea for the longitudes. Philos Trans. 1665:1:13–15. 10.1098/rstl.1665.0011.
- Jinha AE. Article 50 million: an estimate of the number of scholarly articles in existence. Learned Publ. 2010:23:258–263. 10.1087/20100308.
- Lerat E. SMBE secretary's report 2025. Mol Biol Evol. 2025:42:msaf280. 10.1093/molbev/msaf280.
- Mank JE. Editorial: accountability, voice, and trust—responsible use of GenAI in scientific publishing. Evol Lett. 2025:9:381–382. 10.1093/evlett/qraf027.
- Parker L, Boughton S, Bero L, Byrne JA. Paper mill challenges: past, present, and future. J Clin Epidemiol. 2024:176:111549. 10.1016/j.jclinepi.2024.111549.
- Perlis RH, et al. Artificial intelligence in peer review. J Am Med Assoc. 2025:334:e92. 10.1001/jama.2025.15827.
- Richardson RAK, Hong SS, Byrne JA, Stoeger T, Amaral LA. The entities enabling scientific fraud at scale are large, resilient, and growing rapidly. Proc Natl Acad Sci U S A. 2025:122:e2420092122. 10.1073/pnas.2420092122.
- Thorp HH. ChatGPT is fun, but not an author. Science. 2023:379:313. 10.1126/science.adg7879.
- Van Noorden R. How big is science's fake-paper problem? Nature. 2023:623:466–467. 10.1038/d41586-023-03464-x.
