Proceedings of the National Academy of Sciences of the United States of America
2015 Mar 10;112(13):E1512. doi: 10.1073/pnas.1501371112

Reply to Margalida and Colomer: Science should strive to prevent mistakes, not corrections

Kyle Siler a,1, Kirby Lee b, Lisa Bero c
PMCID: PMC4386344  PMID: 25759436

Margalida and Colomer (1) proffer a “mistake index” based on corrections published by scientific journals to gauge peer review quality. This is an interesting idea but has numerous theoretical and practical problems.

Even with rates rising slightly in recent years, only 2–3% of articles in Nature, PNAS, and Science receive corrections, and most of these are minute or trivial. We examined a sample of the last 100 corrections issued for research articles in Nature, PNAS, and Science in 2013. There were six retractions and one seemingly important revision. Beyond those seven corrections, the other 93 mostly involved small details that did not affect the research findings of the original article. Thirty corrections involved author names, affiliations, or acknowledgments, which have no bearing on the content of science.
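Ref. 1 defines the index formally; as a back-of-the-envelope sketch only, the code below assumes the index is simply corrections issued divided by articles published and tallies our sample. The function name, the 25-corrections-per-1,000-articles journal figure, and the derived count of 63 "other minor" corrections (100 − 6 − 1 − 30) are illustrative assumptions, not data from ref. 1.

    # Sketch of a correction-based "mistake index" plus our sample breakdown.
    # The ratio definition below is an assumption for illustration; see ref. 1
    # for the authors' actual formulation.
    def mistake_index(corrections_issued: int, articles_published: int) -> float:
        """Share of published articles that later receive a correction."""
        return corrections_issued / articles_published

    # Hypothetical journal-year figures, in line with the 2-3% rate noted above.
    print(f"{mistake_index(25, 1000):.1%}")  # -> 2.5%

    # Our sample of 100 recent corrections; the 63 "other minor" corrections
    # are the arithmetic remainder (100 - 6 - 1 - 30).
    sample = {
        "retractions": 6,
        "substantive revisions": 1,
        "names/affiliations/acknowledgments": 30,
        "other minor details": 63,
    }
    assert sum(sample.values()) == 100
    for category, count in sample.items():
        print(f"{category}: {count}% of sampled corrections")

Because the sample size is 100, each count doubles as a percentage; the point of the tally is that only 7 of 100 corrections plausibly reflect anything peer review could have caught.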

Because most retractions involved author malfeasance, it is unfair to pin those errors on peer review. Further, the 30 corrections to names and acknowledgments had nothing to do with scientific quality control. Expecting perfection from peer reviewers is unrealistic, particularly with minutiae and unshared data. Peer reviewers may have done a brilliant job filtering out most mistakes and refining articles in other ways; errors that make it into print are visible, unlike errors averted through quality control. As data and information become more accessible in the internet age, increased transparency enables greater scrutiny of research findings by larger and more diverse audiences. Consequently, published corrections, or at least challenges to published articles, should continue to rise in science. This would likely be a positive development: debate and refinement of previous work drive scientific progress.

In our sample, the lack of corrections dealing with moderate or severe errors was notable. This may reflect the effectiveness of peer review at filtering out or revising such mistakes, as well as the skill of researchers. However, surely more than 2–3% of articles have flaws of some sort. Gelman (2) lamented that it is excessively difficult to publish criticisms of published articles and to obtain their data. Although criticisms raised during peer review must be addressed, criticisms raised after publication are held to a much higher bar. As a result, it is often difficult to directly refute mistakes, which the extremely low correction rates in scientific journals may reflect. Publishing mistakes also extend beyond empirical accuracy: our article (3) suggested that gatekeepers reject many high-quality articles. Still, in science as in life, imperfection does not necessarily imply a lack of merit.

It is worth noting that articles published in high-impact outlets are under intense scrutiny, which is partly how and why small (and usually inconsequential) errors are spotted and reported. A mistake index would also give journals incentives not to issue corrections, or to prefer less risky or less complex articles.

Although even venial errors are lamentable, corrections are not, especially because scientific errors can affect lives beyond the ivory tower (4). Corrections help ensure that scientists conduct research based on accurate information, teach lessons regarding how and why mistakes occur, and reinforce norms of meticulousness and vigilance in science.

Footnotes

The authors declare no conflict of interest.

References

1. Margalida A, Colomer MÀ. Mistake index as a surrogate of quality in scientific manuscripts. Proc Natl Acad Sci USA. 2015;112:E1511. doi: 10.1073/pnas.1500322112.
2. Gelman A. It’s too hard to publish criticisms and obtain data for replication. Chance. 2013;26(3):49–52.
3. Siler K, Lee K, Bero L. Measuring the effectiveness of scientific gatekeeping. Proc Natl Acad Sci USA. 2015;112(2):360–365. doi: 10.1073/pnas.1418218112.
4. Krugman P. The Excel depression. New York Times. April 19, 2013, p A31.
