Funding agencies, journal editors, and hiring and promotion committees expend large amounts of time and resources deciding how to allocate precious funds, what science to publish, and which scientists deserve a job or promotion on the basis of their scientific contributions. Now imagine an automated information system that could make this process much more efficient. Every academic researcher in the world is ranked according to a productivity index. Let us call it the Metric for Evaluation of Scientific Scholarship (MESS). This system relies on an algorithm that tabulates the number of grants awarded to a researcher, the award amount for each grant, the number of publications authored, and the number of times those publications were downloaded and cited. In addition, the MESS would include a variable reflecting the prestige of the journals in which a researcher's work was published. The MESS would be adopted to rank all researchers in the world, providing the basis for hiring, funding, and promotion decisions. And no need for a Nobel Prize selection committee, either! We have the MESS. The MESS algorithm would be undisclosed proprietary information, but the MESS rankings would be available to anyone willing to pay a fee for access.
Such a frightening system might have a few minor flaws, such as favoring senior investigators and researchers in fields with more funding and more publications, and replacing the quest for deeper understanding with a quest for a higher MESS. As absurd as this imaginary system might seem, the research community, sadly, often relies on one that is almost as absurd, and researchers have only themselves to blame for this unfortunate state of affairs.
The Journal Impact Factor (JIF), developed to help librarians make subscription decisions, has de facto been repurposed by researchers, journals, administrators, and funding and hiring committees as a proxy for the quality and importance of research publications. The result of this shortcut is that researchers are judged by where their articles are published rather than by the content of their publications. This is fundamentally wrong.
To address the issue, a group that includes representatives from many leading scientific journals, funding agencies, and research institutions across the globe has released the San Francisco Declaration on Research Assessment (DORA), which has been posted on the website of the American Society for Cell Biology (www.ascb.org/SFdeclaration.html) and is attached here as Supplemental Material. This document is a call for reform of how research outputs are assessed. Anyone who wishes to support this cause can sign the document.
There are many reasons why shortcuts to research assessment don't work. One reason is that the outputs and outcomes from researchers are varied. In addition to publications, these include data sets, new methods, reagents and computer programs, trained scientists, contributions to society, and influence on public policy, better health, a cleaner environment, and more efficient use of energy resources. Assessing the value of any of these outputs and outcomes requires an appreciation of context and history and often can only truly be achieved retrospectively.
Although the JIF is the metric that is most often misused to quantitatively assess research outputs, many other metrics, based on different assumptions and algorithms, have been introduced over the years. The drafters and signers of the San Francisco Declaration believe that assessing a research publication requires actually reading it and understanding its content. Metrics, particularly article-specific metrics, may augment such an assessment by providing a numerical gauge of how well an article has been received and has influenced subsequent work, but in such cases a whole array of metrics, not just one numerical ranking, should be used.
Misuse of journal metrics has harmful consequences for scientists and science. Many scientists feel pressure to publish in journals with the highest JIFs. In embracing JIFs as meaningful tools for research assessment, scientists are in effect handing over research assessment to journal editors. Journal editors do their best to select good work for publication and to choose papers with the widest appeal to journal readers, but their decisions often amount to informed guesses. The value of an article cannot really be determined until after it has been published. Moreover, when a publication is evaluated according to the JIF of the journal in which it appeared, it is really being evaluated not on its own merits but on the number of citations to all of the other articles that happen to be published in that journal. But of course it is perfectly possible for very important (and highly cited) work to be published in a low-JIF journal or for work that ultimately turns out to be uninteresting or even wrong to be published in a high-JIF journal. Does the scientific community really want its work to be evaluated by the company it keeps rather than on its own merits?
The San Francisco Declaration is a call for scientists to take control of research assessment. The Declaration encourages researchers serving on funding, hiring, and promotion committees to judge research articles based on their content, not on where they are published, and still less on the JIFs of the journals in which they are published. In addition, researchers are encouraged to look beyond metrics when choosing where to submit a manuscript for publication. Different journals fill different niches. Some journals are highly specialized, and some target general audiences. Some journals mainly publish short articles, and some publish long ones. The editorial board of each journal covers certain areas better than others. We look forward to establishing a scientific culture in which authors have the incentive to target their manuscripts to a journal based on the fit between the manuscript and the journal's audience, format, and scope, not on that journal's JIF.
Footnotes
Stefano Bertuzzi is Executive Director of the American Society for Cell Biology. David G. Drubin is Editor-in-Chief of Molecular Biology of the Cell.