Howy Jacobs's light-hearted exploration of where the perverse incentives surrounding the quest for publication in high-impact-factor journals might lead paints a gruesome picture. Of course, there is a serious message in his October editorial, and even now many would argue that our fixation with judging the scientific influence and importance of people, articles, labs and departments on the basis of a single statistic is deeply flawed and ultimately detrimental to science (Anon, 2006; PLoS Medicine Editors, 2006; Lawrence, 2008; Adler et al, 2008; Simons, 2008). Even Nature's own editor-in-chief has bemoaned the deficiencies of the impact factor (Campbell, 2008), although his own journal still celebrates its metric to three decimal places (http://www.nature.com/nature/about).
But what is to be done when the journal impact factor is so tightly woven into the fabric of research assessment? How could we escape Jacobs's nightmare scenario? At PLoS, we believe that articles should be judged on their own merits, rather than on the basis of the journal in which they happen to be published. After all, most readers now use search engines such as Google and PubMed to find research that is relevant to them, rather than browsing a particular set of journals. We also think it is important to look beyond citation counts; although they do provide some indication of how the academic community values a piece of work, they are only one of many possible measures.
Over the past months, we have therefore taken what we hope are some useful steps towards improving research assessment. In March of this year, we launched an ‘article-level metrics’ programme, whereby every PLoS article carries information about a range of measures of its impact. In addition to citations, we add online usage data—page views and downloads—as well as the number of social bookmarks, comments, notes, blog posts and ratings made concerning the article (http://www.plos.org/cms/node/485). As far as we know, this is the most comprehensive and transparent set of article-level data that any publisher provides.
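To make that list of indicators concrete, here is a minimal sketch of how the metrics for a single article might be represented and summarised. The field names, the example figures and the summarise() helper are illustrative assumptions made for this sketch only; they are not PLoS's actual data model or service.

```python
# Illustrative sketch only: a hypothetical record of the article-level
# indicators listed above (citations, usage, bookmarks, comments, notes,
# blog posts, ratings). Field names and figures are invented.
article_metrics = {
    "doi": "10.xxxx/example-article",  # placeholder identifier
    "citations": 12,
    "page_views": 3450,
    "downloads": 780,
    "social_bookmarks": 25,
    "comments": 4,
    "notes": 2,
    "blog_posts": 3,
    "ratings": 6,
}

def summarise(metrics: dict) -> str:
    """Return a one-line, human-readable summary of the indicators."""
    usage = metrics["page_views"] + metrics["downloads"]
    discussion = (metrics["comments"] + metrics["notes"]
                  + metrics["blog_posts"] + metrics["ratings"])
    return (f"{metrics['doi']}: {metrics['citations']} citations, "
            f"{usage} usage events, {metrics['social_bookmarks']} bookmarks, "
            f"{discussion} discussion and rating items")

print(summarise(article_metrics))
```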
Article-level data are not without their problems, and so it is important to interpret them carefully. But we believe that providing the data in the first place will inspire new ideas about how to assess research. Rather than being limited to the journal impact factor, it will become possible to ask sophisticated questions about the impact and influence of published research, and to obtain meaningful answers. For example, for a piece of research that is aimed at practitioners, we might want to know the extent to which it has actually changed practice—citation metrics would probably not be of much help in that case. It should also become possible to identify work that emerges only with the passage of time as crucial to the development of a particular field.
Another potential consequence of focusing attention at the article level is that it might matter less where a piece of work is published, so long as it is openly available. In turn, this might reduce the tremendous amount of energy and pain associated with trying to publish work in the highest-impact journals possible (Raff et al, 2008). It would also reduce the risk of another element of Jacobs's nightmare vision—that the journals with the highest impact factors could charge ever-escalating fees for publication, and that only the richest labs would be able to afford to publish in them.
As alternatives begin to emerge, the primacy of the impact factor will be challenged. But this will only happen if other stakeholders also take a stand. The Wellcome Trust has made its position clear in its policy on open access by affirming “the principle that it is the intrinsic merit of the work, and not the title of the journal in which an author's work is published, that should be considered in making funding decisions” (Wellcome Trust, 2008). And it was encouraging to see the recent statement that journal impact factors “should not be used as a basis for evaluating the significance of an individual scientist's past performance or scientific potential,” which was unanimously adopted at the International Respiratory Journals Editors Roundtable (Adler, 2009). In addition to focusing on article-level metrics, at PLoS we have also decided to no longer promote impact factors anywhere on our sites—we would love to see other publishers do the same.
References
- Adler KB (2009) Impact factor and its role in academic promotion. Am J Respir Cell Mol Biol 41: 127–128
- Adler R, Ewing J, Taylor P (2008) Citation statistics. A report from the International Mathematical Union. http://www.mathunion.org/publications/report/citationstatistics
- Anon (2006) Cash-per-publication…is an idea best avoided. Nature 441: 785–786
- Campbell P (2008) Escape from the impact factor. Ethics Sci Environ Polit 8: 5–7
- Lawrence PA (2008) Lost in publication: how measurement harms science. Ethics Sci Environ Polit 8: 9–11
- PLoS Medicine Editors (2006) The impact factor game. PLoS Med 3: e291
- Raff M, Johnson A, Walter P (2008) Painful publishing. Science 321: 36
- Simons K (2008) The misused impact factor. Science 322: 165
- Wellcome Trust (2008) Position statement in support of open and unrestricted access to published research. London, UK: Wellcome Trust. http://www.wellcome.ac.uk/about-us/policy/spotlight-issues/open-access/policy/index.htm
