BMJ. 2007 Mar 17;334(7593):568. doi: 10.1136/bmj.39146.549225.BE

Should we ditch impact factors?

Gareth Williams 1
PMCID: PMC1828313  PMID: 17363827

Abstract

Even advocates of impact factors admit that they are a flawed measure of quality. Gareth Williams believes we should get rid of them whereas Richard Hobbs thinks refinement is the answer


Proper measurement of the quality of research requires a thorough understanding of the subject, balanced evaluation of evidence (which may take years to acquire), and ultimately consensus among experts. All in all, a tall order—as shown by the decades which the Nobel Prize Committee may take to recognise achievement and by the controversy which often follows its decisions.

Enter the impact factor, which at first sight is a welcome solution to this conundrum.1 The impact factor has become the global currency for a journal's scientific standing and, by implication, for the papers it publishes. Available at the click of a mouse (http://scientific.thomson.com/isi/) from the Institute for Scientific Information and updated every year, the impact factor is quoted to three decimal places and spans an impressive range from close to zero to over 30. Some journals delight in flaunting their impact factors, and when the big names such as Nature do this you could be forgiven for believing that the impact factor is both credible and important.

Sadly, this is not the case. Even superficial scratching beneath the hype shows this currency to be so seriously debased that only the naive could attach any value to it. A journal's impact factor for a given year is the number of citations received that year by its eligible articles (full papers and reviews) published during the previous two years, divided by the number of eligible articles published over the same period. The basic assumption that this ratio reflects the journal's scientific quality has been challenged on many counts, including the heavy citation of reviews, self citation, and the period of measurement.2 3 4 5 6 7 8 It doesn't even matter if a paper turns out to be rubbish (or if the only reason for citing it is to point this out), because all citations count and contribute equally to the journal's impact factor.
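To make the arithmetic concrete, here is a minimal worked example; the journal and the figures are invented for illustration and are not taken from the article or from any real data:

\[
\mathrm{IF}_{2006} = \frac{\text{citations received in 2006 by items published in 2004--05}}{\text{citable items published in 2004--05}} = \frac{820}{230} \approx 3.565
\]

Because every citation counts equally in the numerator, a handful of heavily cited reviews can lift the figure for every paper in the journal, including those that are never cited at all.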

Research quality

The further leap of faith, that the stature of an individual paper equates to the impact factor of the journal in which it appears, is fatally flawed. Every scientist knows that the vagaries of peer review can push a “not so good” paper into a “good” journal and vice versa. It is patently absurd to believe that the intrinsic value of a piece of research is increased just because the editor of a “good” journal takes a shine to it. Even the basic mathematics don't add up: numerous studies have found that as few as 10-20% of a journal's papers can account for most of its citations,3 9 10 and 10-50% of articles may never be cited at all. Thus, the impact factor enables research that has made no detectable impression on the academic community to steal prestige from more conspicuous articles that happen to appear in the same journal.

Over the years, the pseudoscientific rationale of the impact factor has been comprehensively demolished, notably by Per Seglen.2 Of the first 50 references listed by Google Scholar (accessed on 27 February 2007), 33 were critical of one or more aspects of the impact factor's validity. Even though 10 of the other references listed were by Eugene Garfield, one of the progenitors of the impact factor,1 none of the substantive criticisms seems to have been adequately rebutted. The inescapable conclusion is therefore that the impact factor is worthless. So why, in this age of critical, evidence based analysis, is it still around?

Part of the answer is that it is produced as a commercial venture, driven by profits milked from the academic community. In 2003, the Institute for Scientific Information mounted a vigorous legal defence against a potential competitor, which suggests that the citation industry must generate big bucks. Ultimately, though, the impact factor survives only because of the acquiescence and support of the academic community. Even worse, it feeds off three attributes that no academic could be proud of: gullibility, intellectual sloppiness, and (for those who enjoy surfing this particular wave) vanity.

It could be argued that the impact factor is just a harmless numerical distraction, like the music charts. Unfortunately, some accord it an importance that can do real damage. Nowadays, many applicants for jobs or promotion tag their publications with the journal's impact factor, and there is a risk that impressionable assessors might take this seriously. Of much greater concern is evidence that the impact factor profile of individual academics is used by universities and funding bodies to determine employability and grant support11 12—even though this is scientifically indefensible.

As academics, we should have all the skills needed to evaluate the quality of our work. The impact factor is a pointless waste of time, energy, and money, and a powerful driver of perverse behaviours in people who should know better. It should be killed off, and the sooner the better. Academics should now acknowledge that we have been conned for long enough, and the academic community as a whole should now agree to consign the impact factor to the dustbin. Crucially, the journals and libraries which have kept the citation industry alive should follow suit. Perhaps Nature could lead the way?

Competing interests: None declared.

References

