In the 1960s, the Institute for Scientific Information (ISI), now part of Thomson Reuters, devised the journal “impact factor.” After using journal statistical data in-house to compile the Science Citation Index® (SCI®) for many years, Thomson Reuters began to publish Journal Citation Reports® (JCR®) in 1975 as part of the SCI and the Social Sciences Citation Index® (SSCI®). The JCR provides quantitative tools for ranking, evaluating, categorizing, and comparing journals. The impact factor is one of these tools: a measure of the frequency with which the “average article” in a journal has been cited in a particular year or period. The annual JCR impact factor is a ratio between citations and recent citable items published. It is frequently used as a proxy for the relative importance of a journal within its field, with journals having higher impact factors deemed more important than those with lower ones. Impact factors are frequently weighed in academic promotion decisions, pressuring academics to keep targeting higher-impact journals; indeed, some institutions in China reportedly pay scientists bonuses based on the impact factors of the journals in which they publish.
In any given year, the impact factor of a journal is the average number of citations received per paper published in that journal during the two preceding years [1]. For example, if a journal has an impact factor of 3 in 2013, then its papers published in 2011 and 2012 received 3 citations each on average in 2013. The 2013 impact factor of a journal would be calculated as follows:
2013 impact factor = A/B, where:
A = the number of times that articles published in that journal in 2011 and 2012 were cited by articles in indexed journals during 2013;
B = the total number of “citable items” published by that journal in 2011 and 2012. (“Citable items” are usually articles, reviews, proceedings, or notes; not editorials or letters to the editor.)
(Note that 2013 impact factors are actually published in 2014; they cannot be calculated until all of the 2013 publications have been processed by the indexing agency).
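To make the arithmetic concrete, here is a minimal sketch of the two-year calculation in Python; the function name and the counts are hypothetical illustrations, not Thomson Reuters’ actual indexing procedure, which depends on how the agency classifies “citable items.”

```python
def impact_factor(citations, citable_items):
    """Two-year journal impact factor, IF = A / B.

    citations:     times the journal's articles from the two preceding
                   years were cited by indexed journals in the JCR year (A)
    citable_items: articles, reviews, proceedings, or notes the journal
                   published in the two preceding years (B)
    """
    if citable_items == 0:
        raise ValueError("no citable items: impact factor is undefined")
    return citations / citable_items

# Hypothetical journal: 450 citations in 2013 to the 150 citable items
# it published in 2011-2012 gives a 2013 impact factor of 3.0.
print(impact_factor(citations=450, citable_items=150))  # 3.0
```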
New journals, which are indexed from their first published issue, receive an impact factor after 2 years of indexing; in this case, the citations to, and the number of articles published in, the year prior to Volume 1 are known to be zero. Journals that are indexed starting with a volume other than the first will not receive an impact factor until they have been indexed for 3 years. Annuals and other irregular publications sometimes publish no items in a particular year, which affects the count. The impact factor relates to a specific time period; it is possible to calculate it for any desired period, and the Journal Citation Reports (JCR) also includes a five-year impact factor [2]. The JCR ranks journals by impact factor, and the rankings can be broken down by discipline, such as biophysics or dermatology; the impact factor is thus used to compare different journals within a given field.
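The five-year variant uses the same ratio over a wider window. A hypothetical generalization of the sketch above (the year-keyed data layout is an assumption for illustration, not the JCR’s format):

```python
def impact_factor_windowed(citations_by_year, items_by_year, jcr_year, window=2):
    """Impact factor of jcr_year over the `window` preceding years."""
    years = range(jcr_year - window, jcr_year)
    # Years before a journal's Volume 1 are simply absent, i.e., zero.
    cites = sum(citations_by_year.get(y, 0) for y in years)
    items = sum(items_by_year.get(y, 0) for y in years)
    return cites / items if items else float("nan")

cites = {2011: 180, 2012: 270}  # invented counts
items = {2011: 70, 2012: 80}
print(impact_factor_windowed(cites, items, jcr_year=2013))            # 3.0
print(impact_factor_windowed(cites, items, jcr_year=2013, window=5))  # 3.0 here; window=5 is the JCR five-year variant
```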
The impact factor is highly dependent on the academic discipline, in part because of the speed with which papers get cited in a field: the percentage of total citations occurring in the first 2 years after publication varies widely among disciplines, from 1 to 3 % in the mathematical and physical sciences to 5–8 % in the biological sciences [3]. Impact factors therefore cannot be used to compare journals across disciplines.
This problem is exacerbated when the use of impact factors is extended to evaluate not only the journals, but also the papers therein. The Higher Education Funding Council for England was urged by the House of Commons Science and Technology Select Committee to remind Research Assessment Exercise panels that they are obliged to assess the quality of the content of individual articles, not the reputation of the journal in which they are published [4]. The effect of outliers can be seen in the case of the article “A short history of SHELX,” which included this sentence: “This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination.” This article received more than 6,600 citations. As a consequence, the impact factor of the journal Acta Crystallographica Section A rose from 2.051 in 2008 to 49.926 in 2009, higher than that of Nature (31.434) or Science (28.103) [5]. By contrast, the second-most cited article in Acta Crystallographica Section A in 2008 had only 28 citations [6].
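A toy calculation (with invented counts, not the journal’s actual figures) shows how completely a single heavily cited item can dominate the mean:

```python
# Hypothetical journal: 139 items cited twice each, plus one outlier
# cited 6,600 times (all counts invented for illustration).
citations = [2] * 139 + [6600]

mean_citations = sum(citations) / len(citations)            # ~49.1, the impact-factor view
median_citations = sorted(citations)[len(citations) // 2]   # 2, what a typical paper receives

print(mean_citations, median_citations)
```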
Finally, journal rankings based solely on impact factors correlate only moderately with rankings compiled from the results of expert surveys [7]. It is important to note that the impact factor is a journal-level metric and should not be used to assess individual researchers or institutions [8, 9].
Some journals, however, are starting to take more innovative approaches. One such journal is PLOS One, which provides individual article metrics to anyone who accesses the article. Instead of letting the reputation of the journal decide the impact of its papers, PLOS One provides information about the influence of each article at a more granular level. Other novel measures of impact, including the h-index (the largest number h such that an author has published h papers that have each been cited at least h times), continue to depend on citations as a surrogate for impact. In truth, history is the best judge of the impact of any research, and scientists will have to apply to measures of impact the same scientific standards they apply to their research in order to devise a measure that captures the quality and impact of investigative work.
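For concreteness, a minimal sketch of the standard h-index computation (the function name and sample counts are illustrative):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# An author with papers cited [10, 8, 5, 4, 3] times has h = 4: four
# papers with at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```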
A journal can adopt editorial policies to increase its impact factor [10]. For example, journals may publish a larger percentage of review articles, which generally are cited more than research reports [11]. Review articles can thus raise the impact factor of a journal, and review journals will, therefore, often have the highest impact factors in their respective fields [11]. Some journals set their submission policy to “by invitation only” and invite exclusively senior scientists to publish “citable” papers, thereby increasing the journal’s impact factor [12].
Journals may also attempt to limit the number of “citable items” (the denominator of the impact factor equation), either by declining to publish articles that are unlikely to be cited (such as case reports in medical journals) or by altering articles (for instance, by not allowing an abstract or bibliography) in the hope that Thomson Scientific will not deem them “citable items.” As a result of negotiations over whether items are “citable,” impact factor variations of more than 300 % have been observed [13].
Interestingly, items considered uncitable, and thus not counted in the denominator of the impact factor calculation, can nevertheless, if cited, still enter the numerator, despite the ease with which such citations could be excluded. This effect is hard to evaluate because the distinction between editorial comment and short original articles is not always obvious; “letters to the editor,” for example, may fall into either class.
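A toy comparison (with invented counts) makes this numerator/denominator asymmetry concrete:

```python
# Hypothetical journal: 100 "citable" items drawing 200 citations, plus
# 50 front-matter items (editorials, letters) drawing 100 citations.
citable_items, citable_cites = 100, 200
front_items, front_cites = 50, 100

# All citations enter the numerator, but only "citable" items enter the
# denominator, so reclassifying items as uncitable inflates the ratio.
if_symmetric  = (citable_cites + front_cites) / (citable_items + front_items)  # 2.0
if_asymmetric = (citable_cites + front_cites) / citable_items                  # 3.0

print(if_symmetric, if_asymmetric)
```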
Another, less insidious, tactic that journals employ is to publish a large portion of their papers, or at least the papers expected to be highly cited, early in the calendar year, giving those papers more time to gather citations. Several methods, not necessarily with nefarious intent, also exist for a journal to cite articles in the same journal, which will increase the journal’s impact factor [14]. Coercive citation is a practice in which an editor, in order to inflate the journal’s impact factor, forces an author to add spurious citations to that journal before agreeing to publish the article.
Finally, there are other ways of measuring the impact and visibility of scholarly articles. Thomson Reuters now faces competition from organizations that have developed online tools for citation counting, such as Google Scholar and CrossRef, and this competition may help bring about overdue change. Other measures of scientific impact may also become widely adopted, such as the usage factor, which is being promoted by the United Kingdom Serials Group (http://www.uksg.org/rfp.pdf), or the Y factor, a combination of the impact factor and the weighted PageRank algorithm popularized by Google (http://www.soe.ucsc.edu/~okram/papers/journal-status.pdf). Perhaps even measures such as these will become outmoded as the Internet allows users to interact more directly with published articles. Journals have taken a step toward such a future with the publication of e-letters, and the physics preprint server arXiv.org has been promoting such interaction for many years. As more and more articles become available in full electronically, and as search engines get more sophisticated at mining the Web and assessing usage, such interaction with the literature will become easier, and readers will be able to judge papers for themselves rather than relying on outmoded surrogates for quality such as the impact factor. If authors are going to quote the impact factor of a journal, they should understand what it can and cannot measure. The opening up of the literature means that better ways of assessing papers and journals are coming, and we should be ready to embrace them.
Gautam N Allahbadia, MD
is the Editor-in-Chief of the Journal of Obstetrics & Gynecology of India as well as of IVF Lite (Journal of Minimal Stimulation IVF). He is the Medical Director of Rotunda - The Center for Human Reproduction, the world-renowned fertility clinic at Bandra, Mumbai, India, as well as of the New Hope IVF Clinic at Sharjah, UAE. He is a noted world authority on ultrasound-guided embryo transfers and one of the pioneers of third-party reproduction in South-East Asia. Dr. Allahbadia was responsible for India’s first trans-ethnic surrogate pregnancy involving a Chinese couple’s baby delivered by an unrelated Indian surrogate mother. He has over 100 peer-reviewed publications to his credit and is on the editorial boards of several international journals. Throughout his career, Dr. Allahbadia has been instrumental in developing new fertility-enhancing protocols and propagating the use of ultrasound in embryo transfer procedures.
References
- 1.Introducing the Impact factor. http://thomsonreuters.com/journal-citation-reports/ Retrieved 14 May 2014.
- 2.Inclusion of Eigenfactor™ Metrics Creates Multi-Faceted View of Journal Performance. http://thomsonreuters.com/press-releases/012009/350008 Retrieved 14 May 2014.
- 3.Nierop EV. Why do statistics journals have low impact factors? Stat Neerl. 2009;63(1):52–62. doi: 10.1111/j.1467-9574.2008.00408.x.
- 4.Integrity of the publishing process. “House of Commons—Science and Technology—Tenth Report”. http://www.publications.parliament.uk/pa/cm200304/cmselect/cmsctech/399/39912.htm Retrieved 14 May 2014.
- 5.Grant, Bob (21 June 2010). “New impact factors yield surprises”. The Scientist. http://www.the-scientist.com/?articles.view/articleNo/29093/title/New-impact-factors-yield-surprises/ Retrieved 14 May 2014.
- 6.“What does it mean to be #2 in Impact?”, Thomson Reuters Community. http://community.thomsonreuters.com/t5/Citation-Impact-Center/What-does-it-mean-to-be-2-in-Impact/ba-p/11386 Retrieved 14 May 2014.
- 7.Serenko A, Dohan M. Comparing the expert survey and citation impact journal ranking methods: example from the field of Artificial Intelligence. J Inform. 2011;5(4):629–648. doi: 10.1016/j.joi.2011.06.002.
- 8.Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314(7079):498–502. doi: 10.1136/bmj.314.7079.497.
- 9.EASE Statement on Inappropriate Use of Impact Factors. European Association of Science Editors. November 2007. http://www.ease.org.uk/publications/impact-factor-statement Retrieved 14 May 2014.
- 10.Arnold DN, Fowler KK. Nefarious Numbers. Not Am Math Soc 2011;58(3):434–437. arXiv:1010.0278. Bibcode:2010arXiv1010.0278A.
- 11.Moustafa K. The disaster of the impact factor. Sci Eng Ethics 2014;1–11. doi: 10.1007/s11948-014-9517-0.
- 12.PLoS Medicine Editors (6 June 2006). The Impact Factor Game. PLoS Med 3(6): e291. doi: 10.1371/journal.pmed.0030291.
- 13.Agrawal A. Corruption of journal impact factors. Trends Ecol Evol. 2005;20(4):157. doi: 10.1016/j.tree.2005.02.002.
- 14.Fassoulaki A, Papilas K, Paraskeva A, et al. Impact factor bias and proposed adjustments for its determination. Acta Anaesthesiol Scand. 2002;46(7):902–905. doi: 10.1034/j.1399-6576.2002.460723.x.