In the current issue of the Netherlands Heart Journal, Opthof and Wilde [1] assessed the bibliometric parameters of 37 Dutch professors in clinical cardiology (two of whom have recently retired). The data were largely based on the Hirsch index (h-index), and the authors compiled a top 10 for parameters such as first-authored papers, number of papers, number of citations, and citations per year. Interestingly, they found no ‘golden’ parameter. The authors observed much heterogeneity in the citation parameters used, and they found that the number of co-authors and the strategic network of a scientist played a dominant role in the ‘level’ of scientific productivity. They concluded that citation analysis should always be applied with great care in science policy.
Of course, it is always a delicate matter to report inter-individual comparisons between close colleagues. First, the ‘outer world’ might wrongly perceive the numbers as an absolute measure of scientific quality, which might, in the case of higher scores, open more avenues to better academic positions and larger financial grants. Second, the false impression might be given that the numbers are based solely on individual achievements, thereby further encouraging such personal endeavours. In reality, most individually obtained rewards rest on close cooperation between academic/research institutions, as testified by the achievements of the Interuniversity Cardiology Institute of the Netherlands (ICIN) over the past 40 years. Fortunately, the authors themselves clearly state that the observed differences in h-index-derived parameters should not be seen as differences in scientific quality.
Since Hirsch’s first publication of the h-index in 2005 [2], this measure of academic performance has generated increasing interest. The h-index measures both the productivity and the cumulative impact of a researcher’s output by looking at the number of citations his/her work has received. In other words, a scientist with an index of 30 has published 30 papers, each of which has been cited by others at least 30 times. The h-index is preferable to other single-number criteria, such as the total number of papers, the total number of citations, and citations per paper, because it combines an assessment of both quantity (number of papers) and quality (impact, or citations to these papers). One cannot have a high h-index without publishing a substantial number of papers; however, these papers need to be cited by other academics in order to count towards the h-index. The h-index therefore favours scientists who publish a continuous stream of papers with lasting and above-average impact. It has become a generally accepted measure of academic achievement, and ISI Thomson has now included the h-index as part of its new ‘citation report’ feature in the Web of Science (WOS).
The h-index appears to have considerable validity. Hirsch calculated the h-index of Nobel prize winners and found 84% of them to have an h-index of at least 30; this holds for 22 (60%) of our Dutch professors! Hirsch himself suggested that an h-index of 10–12 might be a useful guideline for tenure decisions at major research universities, a value of 18 could mean a full professorship, and an h-index of 45 or higher could mean membership in the United States National Academy of Sciences (N.B. 11 (30%) of the Dutch professors could become a member). However, we should realise that Albert Einstein would have had an h-index of only 5 (based on his Annus Mirabilis papers of 1905)!
There are several considerations to the h-index in general, as also previously noted by the authors [3]:
The h-index does not account for the number of authors of a paper and disregards the position of an author in the author list: there is no difference in appreciation between the first, the last, or a middle author. This might be a reason for some scientists to become involved in guidelines and other consensus papers, as these documents are frequently cited
The h-index does not account for confounding factors such as ‘gratuitous authorship’ and the favourable citation bias associated with review articles. It is somewhat hilarious to note that some papers have more authors than patients or animals studied
The h-index can only increase, never decline, implying that scientists who retire after more than 20 active years of publishing maintain, and may even improve, their h-index even if they never publish another paper (as long as their papers keep being cited)
The h-index is an inadequate measure for junior academics, or for scientists with a late career start, as their papers have not yet had the time to accumulate citations. For these individuals, the impact factor of a journal might be a more realistic measure of scientific impact. As a result, the h-index provides a more realistic assessment of senior researchers who started publishing at least 10 years ago
The h-index ignores the number of citations to each individual article beyond what is needed to achieve a certain h-index. Consequently, a scientist with an h-index of 10 could theoretically have a total of only 100 citations (10 for each paper), but could also have a total of 2000 citations (9 papers with 220 citations each and one paper with 20 citations).
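The definition given earlier, and the numerical example in the last point, can be checked with a short sketch (a minimal illustration added here, not part of the original article), assuming a researcher’s record is given simply as a list of per-paper citation counts:

```python
def h_index(citations):
    """Return the largest h such that the researcher has h papers
    each cited at least h times."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the paper at this rank still "supports" h = rank
            h = rank
        else:
            break
    return h

# Minimal case: 10 papers with exactly 10 citations each (100 citations total).
print(h_index([10] * 10))          # → 10

# Citation-heavy case: 9 papers with 220 citations and one with 20
# (2000 citations in total) yields exactly the same h-index.
print(h_index([220] * 9 + [20]))   # → 10
```

Both records score h = 10, which is precisely the point of the last consideration: the index is blind to citations above the threshold.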
Keeping these considerations in mind, it is still worthwhile to learn that the mean scientific level of our Dutch clinical cardiology professors is fairly high (mean h-index well over 40!) relative to the h-indices indicated by Hirsch for eminent scientists pursuing an academic career. Nevertheless, as pointed out by the authors [1], this sort of citation analysis should always be viewed with a critical eye, and citation levels should not be considered the merit of only one individual. Through close cooperation among scientists, the mean h-index of a country (or consortium) will rise more steadily than it would on an individual basis. In general, a combination of intelligence, an open mind, and scientific drive is a more important parameter in the evaluation of a scientist. Along those lines, Hirsch himself provides a strong caveat against the use of the h-index as the sole parameter of scientific performance [2]:
Obviously a single number can never give more than a rough approximation to an individual’s multifaceted profile, and many other factors should be considered in combination in evaluating an individual. This and the fact that there can always be exceptions to rules should be kept in mind especially in life-changing decisions such as the granting or denying of tenure.
References
- 1. Opthof T, Wilde AA. Bibliometric data in clinical cardiology revisited. The case of 37 Dutch professors. Neth Heart J. 2011. doi:10.1007/s12471-011-0128-y.
- 2. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci USA. 2005;102:16569–72. doi:10.1073/pnas.0507655102.
- 3. Opthof T, Wilde AA. The Hirsch-index: a simple, new tool for the assessment of scientific output of individual scientists: the case of Dutch professors in clinical cardiology. Neth Heart J. 2009;17:145–54. doi:10.1007/BF03086237.