EMBO Reports. 2023 Sep 15;24(10):e57887. doi: 10.15252/embr.202357887

How to avoid mistakes in science publications

Fiona M Watt
PMCID: PMC10561164  PMID: 37712331

Abstract

The key to reducing errors in science is collaboration between all practitioners—researchers, funders and editors—through a shared motivation to nurture scientists and promote discovery.


Subject Categories: Careers, Science Policy & Publishing


My first encounter with a publication error was in the 1970s when I was an undergraduate. The internet had not been invented, and our knowledge came from in‐person lectures, tutorials and trips to the library. Our class had progressed from reading textbooks to journal articles and, very occasionally, we were given reprints of papers that had appeared in a journal. The authors would hand out reprints sparingly, often signed, to a few select colleagues. You can imagine the reverence with which students treated the few reprints we were given.

In one tutorial, a fellow student told our lecturer—now deceased—that he did not understand one of the figures in the reprint we were studying. “Quite right,” said the lecturer, “I made a mistake.” The lecturer produced a fountain pen, crossed out the erroneous figure with a flourish and returned the reprint. This event stayed in my memory for several reasons. I realised that my fellow student was pretty smart (Reader, I married him); I was shocked that someone would deface a reprint (albeit with a fountain pen); and, of course, I learned that not everything in the scientific literature is correct. I don't know whether a correction was ever published, but the idea that we students should report the mistake anonymously for fear of retribution simply never entered our heads. We enjoyed being cub scientists and couldn't wait to begin our own research.

Fast forward to the present day and there is an active exchange amongst scientists seeking clarification of data pre‐ and postpublication and identifying and correcting published errors. It is easier than ever before to correct the scientific literature by linking comments, corrigenda and, if necessary, retractions to the original articles. Of course, scientists may be slow to own up to a mistake because they fear it will cause reputational harm, and some journals still cling to the concept of a paper being an end‐stage scholarly record as opposed to being updatable. More can be done to lower the hurdles to self‐correction, such as ensuring that it attracts credit when a scientist's research outputs are assessed.

There has also been a rise in the identification and publicising of mistakes by people who choose to remain anonymous, conceivably for fear of retribution. One of the pioneers is “Clare Francis,” the pseudonymous whistle‐blower who contacts not only journals but also the authors' funding agencies and host institutions. “Clare” has been joined by others, including the platform “PubPeer,” which allows anonymous commenting on papers, and so there is no shortage of routes to identifying mistakes, often many years after the original publication.

Not all authors receive an alert when comments on their work are posted. It is therefore unwise to assume that authors have chosen not to respond—they may be unaware that there is a comment on their paper. There is actually a paucity of information about the extent to which mistakes are unintended or a deliberate effort to mislead readers (Pulverer, 2015).

There are now many institutional research integrity offices and organisations such as the European Network of Research Integrity Offices (ENRIO) and the European Network of Research Ethics Committees (EUREC). The burgeoning of such institutions, combined with the high volume of cases to be investigated, has had several unintended consequences. It can reduce the speed with which corrections and retractions are published; it can transfer evaluation of mistakes into the hands of nonscientists; and it does not provide a mechanism to weigh the severity of the error. The more granular correction mechanisms adopted by some journals serve science best, although they are labour‐intensive and require all parties to collaborate (e.g. see Bechara et al, 2023).

Preventing publication errors

So much for spotting and reporting errors postpublication. How can we stop them from being published in the first place? We should start in the laboratory, where experiments are being designed and results generated. In a lab setting, there are clear parallels between avoiding mistakes in publishing and avoiding accidents. Just as all lab members contribute to ensuring that the laboratory is a safe place to work, they should assume collective responsibility for ensuring that data for publication are sound. While overall responsibility for safety typically rests with the head of the laboratory, implementation of a safety culture depends on individual lab members. Members of the laboratory will typically have more expertise in carrying out specific experiments than the head of the laboratory and will often have more practical experience in maintaining a safe working environment than the person who signs off the risk assessment forms.

When it comes to publishing research my two favourite lab mantras are: “There's nothing wrong with being wrong” and “Let's imagine how the story goes.” The first encourages people to be completely open about the mistakes, flaws and inconsistencies in experimental design and data collection while the research is being carried out. Mistakes can occur through ignorance or cutting corners, being in a hurry, or feeling pressure to produce the “right” results. A collaborative team in which people help one another out with experiments and analysis, and in which individuals are not singled out for blame, can make all the difference in spotting and correcting errors. Going back to the safety analogy, hiding the release of a genetically modified organism or a radioactive spill to avoid getting into trouble can clearly have disastrous consequences, and so blame‐free accident reporting is key—the same is true in generating data.

As for my second mantra, discussing the “story”—in other words, what might constitute a future publication—helps scientists think beyond data collection and design new experiments. It also helps us challenge and, if necessary, abandon a favourite hypothesis. Speculating about the “punch line”—the last sentence of the abstract—is very useful when it comes to finalising how data are presented in a publication. Constructing a potential narrative for an ongoing research project is an example of “creative problem solving,” which balances generating ideas and turning them into concrete solutions. Reframing problems as questions, deferring judgement of ideas and focussing on “Yes, And” rather than “No, But” are well‐established tools in business. Using positive language to build and maintain an environment that fosters ideas is also important in a lab setting.

Creative problem‐solving does have its dangers. One is that power differentials exist within a laboratory (even amongst PhD students), and another is that different cultural norms co‐exist within individual laboratories just as they permeate society as a whole. Someone could misinterpret “storytelling” as an instruction to obtain a particular result to fit the story. Scientists from different backgrounds can be under strong financial, institutional and family pressure to succeed, with success being judged as publishing in a specific journal, even if that pressure does not exist within the laboratory itself. Recognising such dangers is a first step towards mitigation, and fostering trust is very important. During the COVID‐19 pandemic, many lab meetings moved from in‐person to virtual, and while the positives of virtual meetings are clear, there are attendant risks: presentations can become too polished, and lab members can be lazy about asking questions—they become spectators rather than participants. In‐person one‐to‐one discussions, group meetings and lab retreats all stimulate creativity and build mutual trust and understanding, thereby fostering the collaborative environment that helps avoid errors.

Keeping mistakes to a minimum

Even with our best efforts to keep the laboratory safe, accidents do happen—and, equally, mistakes do enter the scientific literature. But there are a number of ways in which errors can be spotted and corrected while preparing a paper for submission. Online text editing platforms, like Google Docs, have transformed the ability of scientists to write collaboratively. Plagiarism checkers such as Turnitin can be used to detect text duplication prior to submission. However, given the multiple data types and formats that can make up a single publication, it remains harder for authors to share and explore the data destined for figures while they are working on the text of a paper.

Good lab management software exists, which includes sample tracking and online lab notebooks; there are public repositories for different types of data; and some journals try to ensure that the source data for publications are available as part of a commitment to reproducible research. Nevertheless, these individual platforms are not fully integrated—if they were, they would be more widely used. The challenge is not insurmountable, and some interesting platforms are already in development (e.g. https://sdash.sourcedata.io/) that can facilitate collaboration in preparing data for publication.

The classic route to publication is that a paper is submitted to a specific journal, undergoes peer review by two or three experts, receives an editorial decision, is revised, re‐reviewed and then, all being well, published. The weaknesses of this route have been highlighted many times—the difficulty of finding reviewers and the existence of cartels of positive or negative reviewers are but two. All this is changing, fortunately, through the rise of preprints and refereed preprints. As journal clubs focus on preprints rather than published papers, there is the possibility to catch—and correct—mistakes prior to publication. And journal agnostic peer review, as practised by Review Commons, helps minimise the number of cycles of review and helps prevent biases and misperceptions of scientists who want to keep a competitor's work out of specific journals.

Just as collaborative strategies can prevent mistakes from being submitted for publication, the same is true of reviewing. Collaboration across journals (for example through the Manuscript Exchange Common Approach www.niso.org/standards‐committees/meca) and between journals and preprint servers could ensure that the source data and statistical analysis are robust, regardless of whether the paper is published. Given the sheer volume of scientific publishing, this requires a huge amount of effort and money. Nevertheless, the implementation of AI tools to—for example—check for figure duplication and manipulation reduces the cost and increases speed and accuracy.
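To illustrate the kind of automated check mentioned above, the sketch below shows a toy "difference hash" comparison for spotting near‐duplicate figure panels. This is purely illustrative and not any journal's actual pipeline: real integrity tools are far more sophisticated, but the core idea—reduce each image to a compact fingerprint, then compare fingerprints—is the same. Images are represented here as plain 2D lists of grayscale values for simplicity.

```python
# Illustrative sketch only: a minimal "difference hash" comparison for
# detecting near-duplicate figure panels. Not a real integrity tool.

def dhash(pixels):
    """Fingerprint a 2D list of grayscale values.

    Each bit records whether a pixel is brighter than its right-hand
    neighbour, so the hash captures structure rather than absolute
    brightness (a uniformly brightened copy hashes identically).
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two equal-length fingerprints."""
    return sum(x != y for x, y in zip(a, b))

def likely_duplicate(img_a, img_b, threshold=2):
    """Flag two panels whose fingerprints differ in at most `threshold` bits."""
    return hamming(dhash(img_a), dhash(img_b)) <= threshold

# A panel and a uniformly brightened copy of it:
panel = [[10, 20, 30], [40, 35, 50], [5, 60, 55]]
brightened = [[v + 5 for v in row] for row in panel]
print(likely_duplicate(panel, brightened))  # True: same underlying structure
```

Because the hash encodes only relative brightness, simple manipulations such as contrast or brightness adjustment do not change it, which is why fingerprint approaches can catch reused panels that are not byte‐identical.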

Classically peer review is conducted by subject experts, but one could envisage deploying additional reviewers with specific expertise in methodology and statistics who look at the structure of the paper independent of the storyline. The requirement for data availability sections in journals and reproducible methods and protocols (https://osf.io/x85gh) both help mitigate mistakes and facilitate the reproducibility of published experiments.

Positive as this sounds, it is unrealistic to suggest that the scientific literature can ever be pristine and error‐free. There are two quite prosaic reasons why mistakes are still published—one of which is understandable and the other completely avoidable.

When projects are truly interdisciplinary or involve collaborations between different laboratories, the lead author may genuinely have no way of knowing whether the data are correct. Collaborations between biologists and physicists or mathematicians are highly enriching, but it is unrealistic to expect the biologist to fully understand, let alone be accountable for, the contributions of such partners. It is therefore important that collaborations are built on mutual trust and shared responsibility.

The other issue is that co‐authors can be incredibly lazy—failures to share a paper prior to submission, to circulate or read the proofs, to check for broken links to websites, and even to confirm that names and affiliations are correct are all quite common in my experience. Fortunately, many journals notify authors that a paper has been submitted in their name, and require author sign‐off of the work prior to publication. But an even more significant development in recent years has been the introduction of the Contributor Roles Taxonomy (CRediT) to specify the roles of authors in a paper. CRediT not only clarifies who did what in a piece of work but should also help whistle‐blowers refine the assignment of blame when errors are spotted. EMBO Press has gone much further than CRediT, enabling authorship to be claimed at the level of individual figure panels.

So, to end where I began, mistakes happen but science can be self‐correcting. The key to keeping mistakes out of the published literature lies in collaboration between all practitioners through a shared motivation to nurture scientists and promote discovery.

Disclosure and competing interests statement

The author is the Director of EMBO, an EMBO member and a member of the editorial advisory board of EMBO Molecular Medicine.


References

  1. Bechara A, Nawabi H, Moret F, Yaron A, Weaver E, Bozon M, Abouzid K, Guan J, Tessier‐Lavigne M, Lemmon V et al (2023) FAK–MAPK‐dependent adhesion disassembly downstream of L1 contributes to semaphorin3A‐induced collapse. EMBO J 42: e113962
  2. Pulverer B (2015) When things go wrong: correcting the scientific record. EMBO J 34: 2483–2485
