Journal editors and experts in scientometrics are increasingly concerned with the reliability of the Journal Impact Factor (JIF, Clarivate Analytics, formerly the IP & Science business of Thomson Reuters) as a tool for assessing the influence of scholarly journals. A paper by Larivière et al. (1), which was posted on the bioRxiv preprint server and commented on in Nature (2), reminded all stakeholders of science communication that the citability of most papers in an indexed journal deviates significantly from its JIF. These authors recommend displaying journal citation distributions instead of the JIF, and the proposal has been widely discussed on social networking platforms (3,4).
The overall impression is that the discussion over the JIF is endless. The JIF, along with the h-index, is among the simplest and most studied indicators in scientometrics (5,6). However, the commentary in Nature (2) and subsequent debates over citation distributions revived the scientific community's interest in empirical analyses of the JIF and its uses and misuses in research evaluation.
After all these discussions, research evaluators should have realized that the JIF should not be used to measure the impact of single papers. However, some experts argue that the use of the JIF at the level of single papers cannot simply be separated from its use at the journal level (4). In some circumstances, the JIF may help authors and readers to pick, read, and cite certain papers. Papers from high-impact journals are more likely to be picked and cited than similar ones from low-impact periodicals.
The JIF should not be demonized. It can still be employed for research evaluation purposes, provided that the context and academic environment are carefully considered. Elsevier, the provider of the Scopus database, rates the JIF as so important that the company recently introduced a near-doppelgänger, CiteScore (see https://journalmetrics.scopus.com/). The JIF measures the average impact of papers published in a journal, with a citation window of only one year. The JIFs are calculated and published annually in the Journal Citation Reports (JCR, Clarivate Analytics). Papers counted in the denominator of the JIF formula are those published within the 2 years preceding the calculation of this citation metric. In contrast to the JIF, the new CiteScore metric considers papers from the preceding 3 years (instead of 2).
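In schematic form, the 2 metrics can be written as follows (a simplified rendering for illustration, not the databases' official calculation specifications):

```latex
% Simplified sketch of the two window-based averages discussed above.
% C_y(S): citations received in year y by the item set S;
% P_i: citable items published by the journal in year i.
\[
\mathrm{JIF}_{y} = \frac{C_{y}\left(P_{y-1} \cup P_{y-2}\right)}{|P_{y-1}| + |P_{y-2}|},
\qquad
\mathrm{CiteScore}_{y} = \frac{C_{y}\left(P_{y-1} \cup P_{y-2} \cup P_{y-3}\right)}{|P_{y-1}| + |P_{y-2}| + |P_{y-3}|}
\]
```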
As such, the JIF (and also the CiteScore) captures only short-term interest in papers (i.e., interest at the research front) and overlooks the long-term implications of publication activity (the so-called sticky knowledge) (7). The focus on the short-term attention of the field-specific community makes sense, since the JIF was initially designed to guide librarians in purchasing the most used current periodicals for their libraries. Accordingly, the JIF cannot and should not be employed for evaluating the average impact of a journal's papers in the long run.
The JIF formula calculates an average intended to reveal the central tendency of a journal's impact. As such, one or a few highly cited papers published within the 2-year window may boost the JIF. That is particularly the case with Nature, Science, and other influential journals (1). The skewed citation distribution implies that the JIF values do not reflect the real impact of most papers published in the indexed journal. The absolute number of citations received by a single paper is the correct measure of its impact. Currently, the Web of Science and Scopus databases can provide citation counts for evaluating the impact of single papers.
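A minimal numerical sketch (with hypothetical citation counts, not real journal data) illustrates how a skewed distribution lets a few highly cited papers pull a JIF-like average far above what most papers actually receive:

```python
# Hypothetical citations received in the JIF year by items published in the
# 2 preceding years; the two outliers stand in for a journal's "hit" papers.
from statistics import mean, median

citations = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 40, 120]

jif_like_mean = mean(citations)    # analogous to the JIF's average
typical_paper = median(citations)  # a more typical paper in the distribution
below_mean = sum(c < jif_like_mean for c in citations) / len(citations)

print(f"mean (JIF-like): {jif_like_mean:.1f}")                # ~13.1
print(f"median: {typical_paper}")                             # 3
print(f"papers cited less than the mean: {below_mean:.0%}")   # ~87%
```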
Importantly, the JIF is the best predictor of single papers' citability (8). Studies examining the predictive value of the JIF along with the number of authors and pages support this notion (9). One can expect more citations to single papers published in higher-impact journals than to comparable ones in lower-impact journals.
Another important point is the field-dependency of the citations contributing to the JIFs. Citation rates differ across disciplines and subject categories, regardless of the scientific quality of the papers, and are confounded by field-specific authorship rules, publication activity, and referencing patterns (10). Such differences justified the development of field-normalized indicators, which are employed for evaluating individual researchers, research groups, and institutions (11,12). Since the JIF is not a field-normalized indicator, it can only be used for evaluations within a single subject category.
The SCImago Journal Rank (SJR) indicator, a variant of the JIF, was employed for institutional excellence mapping at www.excellencemapping.net (13,14). For institutions worldwide, this site maps the results of 2 indicators. First, the ‘best paper rate’ measures the long-term impact of papers in a size-independent way, using percentiles as a field-normalized indicator. Second, the ‘best journal rate’ is based on the citation impact of the journals publishing the institutions' papers. That indicator is the proportion of papers published in journals belonging to the top 25% of journals in their subject categories in terms of citation impact. Through the consideration of journal sets, the indicator is a field-normalized metric at the journal level. The indicator demonstrates how successful academic institutions are at publishing their papers in high-impact journals (13,14). Thus, the so-called success at www.excellencemapping.net is measured by the ability to publish in high-impact target journals and to receive the long-term attention of the scientific community.
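The following sketch is illustrative only, with hypothetical journal quartiles rather than the actual excellencemapping.net methodology described in references 13 and 14; it merely shows the kind of proportion the ‘best journal rate’ expresses:

```python
# Hypothetical data: for each of an institution's papers, the quartile of its
# journal within that journal's own subject category (1 = top 25% by citation
# impact). The 'best journal rate' idea is the share of papers in quartile 1.
paper_journal_quartiles = [1, 1, 2, 1, 3, 4, 1, 2, 1, 3]

best_journal_rate = (
    sum(q == 1 for q in paper_journal_quartiles) / len(paper_journal_quartiles)
)
print(f"best journal rate: {best_journal_rate:.0%}")  # 50%
```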
The JIF can be used to measure the ability of individual researchers and institutions to publish their research successfully. However, the JIF should not be used as a proxy for measuring the impact of single papers. In this regard, more appropriate indicators should be considered (e.g., data from the “Field Baselines” tables in the Essential Science Indicators [ESI] by Clarivate Analytics). The baselines can be used to assess whether a specific paper received an impact far above or below the worldwide average performance in its field. For example, the 2006 baseline for chemistry is approximately 23 (as of November 14, 2016). If a chemistry paper from 2006 published by an evaluated entity attracts 50 citations, the impact of that paper is far above the baseline, whereas with 10 citations the impact would be far below it.
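A minimal sketch of this comparison, assuming one expresses it as a simple ratio of a paper's citations to the ESI field baseline (the baseline value is the approximate figure quoted above; a ratio of 1.0 corresponds to the worldwide average):

```python
# Simple citations-to-baseline ratio; 1.0 corresponds to the worldwide average
# impact of papers from the same field and publication year.
def citations_vs_baseline(citations: int, baseline: float) -> float:
    return citations / baseline

CHEMISTRY_2006_BASELINE = 23.0  # approximate ESI 'Field Baselines' value from the text

print(round(citations_vs_baseline(50, CHEMISTRY_2006_BASELINE), 2))  # 2.17, far above the baseline
print(round(citations_vs_baseline(10, CHEMISTRY_2006_BASELINE), 2))  # 0.43, far below the baseline
```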
There is only one scenario in which the use of the JIFs is justifiable for the assessment of individual scientists (15). It is when recently published papers are considered for research evaluation, which is routinely practised for intramural monitoring of staff productivity, academic promotion, or recruitment. The evaluators pay particular attention to the most recent publications. For these items, however, the citation window is too short to quantify their citation impact reliably (16); in that case, the reputation of the publishing journals, along with their JIFs, can conditionally be employed as a proxy of single papers' impact (9). InCites (Clarivate Analytics) has already implemented the calculation of specialty-specific percentile-transformed JIFs (17), which reflect field-normalized journal impact values and can be used for assessing recently published papers.
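A minimal sketch in the spirit of the rank-normalized impact factor of reference 17, which transforms a journal's JIF rank within its subject category into a percentile-like value so that journals from differently cited fields become comparable (the JIF values below are hypothetical):

```python
# Rank-normalized impact factor in the spirit of reference 17:
# rnIF = (K - R + 1) / K, where K is the number of journals in the subject
# category and R is the journal's rank by JIF (rank 1 = highest JIF).
def rank_normalized_if(jifs_in_category: dict[str, float], journal: str) -> float:
    ranked = sorted(jifs_in_category, key=jifs_in_category.get, reverse=True)
    k = len(ranked)
    r = ranked.index(journal) + 1
    return (k - r + 1) / k

category = {"Journal A": 12.5, "Journal B": 4.2, "Journal C": 2.8, "Journal D": 1.1}
print(rank_normalized_if(category, "Journal B"))  # 0.75, i.e., the same scale in every category
```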
Footnotes
DISCLOSURE: The authors have no potential conflicts of interest to disclose.
AUTHOR CONTRIBUTION: Conceptualization: Bornmann L, Pudovkin AI. Writing - original draft: Bornmann L, Pudovkin AI. Writing - review & editing: Bornmann L, Pudovkin AI.
References
- 1. Larivière V, Kiermer V, MacCallum CJ, McNutt M, Patterson M, Pulverer B, Swaminathan S, Taylor S, Curry S. A simple proposal for the publication of journal citation distributions. bioRxiv. 2016:062109.
- 2. Callaway E. Beat it, impact factor! Publishing elite turns against controversial metric. Nature. 2016;535:210–211. doi: 10.1038/nature.2016.20224.
- 3. de Rijcke S. Let's move beyond too simplistic notions of ‘misuse’ and ‘unintended effects’ in debates on the JIF [Internet] [accessed on 16 November 2016]. Available at https://www.cwts.nl/blog?article=n-q2x234.
- 4. Waltman L. The importance of taking a clear position in the impact factor debate [Internet] [accessed on 16 November 2016]. Available at https://www.cwts.nl/blog?article=n-q2w2c4&title=the-importance-of-taking-a-clear-position-in-the-impact-factor-debate.
- 5. Hönekopp J, Kleber J. Sometimes the impact factor outshines the H index. Retrovirology. 2008;5:88. doi: 10.1186/1742-4690-5-88.
- 6. Bornmann L, Marx W, Gasparyan AY, Kitas GD. Diversity, value and limitations of the journal impact factor and alternative metrics. Rheumatol Int. 2012;32:1861–1867. doi: 10.1007/s00296-011-2276-1.
- 7. Baumgartner SE, Leydesdorff L. Group-based trajectory modeling (GBTM) of citations in scholarly literature: dynamic qualities of “transient” and “sticky knowledge claims”. J Assoc Inf Sci Technol. 2014;65:797–811.
- 8. Bornmann L, Leydesdorff L. Does quality and content matter for citedness? A comparison with para-textual factors and over time. J Informetrics. 2015;9:419–429.
- 9. Onodera N, Yoshikane F. Factors affecting citation rates of research articles. J Assoc Inf Sci Technol. 2015;66:739–764.
- 10. Gasparyan AY, Yessirkepov M, Voronov AA, Gerasimov AN, Kostyukova EI, Kitas GD. Preserving the integrity of citations and references by all stakeholders of science communication. J Korean Med Sci. 2015;30:1545–1552. doi: 10.3346/jkms.2015.30.11.1545.
- 11. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: the Leiden Manifesto for research metrics. Nature. 2015;520:429–431. doi: 10.1038/520429a.
- 12. Waltman L. A review of the literature on citation impact indicators. J Informetrics. 2016;10:365–391.
- 13. Bornmann L, Stefaner M, de Moya Anegón F, Mutz R. Ranking and mapping of universities and research-focused institutions worldwide based on highly-cited papers: a visualisation of results from multi-level models. Online Inf Rev. 2014;38:43–58.
- 14. Bornmann L, Stefaner M, de Moya Anegón F, Mutz R. What is the effect of country-specific characteristics on the research performance of scientific institutions? Using multi-level statistical models to rank and map universities and research-focused institutions worldwide. J Informetrics. 2014;8:581–593.
- 15. Wouters P, Thelwall M, Kousha K, Waltman L, de Rijcke S, Rushforth A, Franssen T. The Metric Tide: Literature Review (Supplementary Report I to the Independent Review of the Role of Metrics in Research Assessment and Management). London: Higher Education Funding Council for England (HEFCE); 2015.
- 16. Wang J. Citation time window choice for research impact evaluation. Scientometrics. 2013;94:851–872.
- 17. Pudovkin AI, Garfield E. Rank-normalized impact factor: a way to compare journal performance across subject categories. Proc Assoc Inf Sci Technol. 2004;41:507–515.