Letter
EMBO Reports (2013) 14(6): 494, doi:10.1038/embor.2013.63

Comments on the response of Rodríguez-Navarro

Lutz Bornmann 1, Werner Marx 2

EMBO Reports (2013) 14(6): 493, doi:10.1038/embor.2013.60

EMBO Reports (2013) 14(3): 226–230, doi:10.1038/embor.2013.9

In his critical response [1] to our article ‘How good is research really?', Alonso Rodríguez-Navarro raises several points that we address here. First, he criticizes us for using debatable arguments to justify our preference for percentiles and for ignoring the validity of the approach. Unfortunately, he does not specify which of our arguments he finds debatable. We believe there are many good reasons in favour of percentile-based indicators: they are normalized for subject area and time period, they are independent of the skewed distribution of citations, and they offer the option of focusing on specific percentile rank classes. Rodríguez-Navarro himself proposes the x-index, a “percentile-based index of the high-citation tail” [2]; he therefore appears to be persuaded by the percentile approach.
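
To make the mechanics concrete, here is a minimal sketch in Python of how a percentile-based indicator can be computed. It is our illustration, not code from any cited study; the reference-set data and the tie-handling convention are invented assumptions, and actual bibliometric databases differ on both.

```python
# Minimal sketch (illustrative only): a paper's citation percentile within a
# reference set of papers from the same subject area and publication year.
# The tie-handling convention and the data below are invented assumptions.

def citation_percentile(citations: int, reference_set: list[int]) -> float:
    """Percentage of reference-set papers with fewer citations."""
    below = sum(1 for c in reference_set if c < citations)
    return 100.0 * below / len(reference_set)

# Hypothetical citation counts of same-field, same-year papers
field_year_citations = [0, 1, 3, 3, 7, 12, 25, 25, 40, 112]
print(citation_percentile(25, field_year_citations))  # 60.0
```

Because each paper is located within its own field- and year-specific reference set, the resulting percentiles are comparable across subject areas and publication years, and the skewness of raw citation counts drops out.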

Lutz Bornmann has already undertaken several studies of the validity of bibliometric indicators, in line with Harnad's recommendation [3]: “Scientometric predictors of research performance need to be validated by showing that they have a high correlation with the external criterion they are trying to predict.” For example, references [4,5] validate the h-index against expert assessments as an external criterion of quality. Bornmann and Leydesdorff use data from F1000 to validate citation percentiles externally and show that percentiles correlate more strongly with F1000 scores than any other indicator does [6].
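
This validation logic reduces to a rank correlation between the indicator and the external criterion. The sketch below uses SciPy's spearmanr on invented numbers; Bornmann and Leydesdorff [6] naturally worked with real F1000 scores.

```python
# Illustrative sketch: validating an indicator against an external criterion
# by rank correlation. All numbers are invented for demonstration.
from scipy.stats import spearmanr

percentiles = [55.0, 72.5, 90.1, 98.7, 33.2, 81.4]  # citation percentiles
f1000_scores = [1, 2, 2, 3, 1, 3]                   # expert ratings (1-3)

rho, p_value = spearmanr(percentiles, f1000_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```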

Rodríguez-Navarro proposes forming quotients of a research indicator and the corresponding research expenditure. The question, of course, is whether the validity of such indicators has already been tested on a broad database; we doubt it. Bibliometricians face great problems in accounting for research effort in their studies (for example, the number of researchers and the scope of research resources), particularly when institutions are compared. These data are usually difficult to obtain, frequently incomplete and always inconsistent. As an external criterion for reviewing the validity of bibliometric indicators, Rodríguez-Navarro proposes the number of Nobel Prizes. This indicator is also problematic: the Nobel Prize is too infrequent an event; it is often unclear to which institution a prize should be attributed; and prizes are awarded many years after the research in question was carried out.
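
For concreteness, such a quotient indicator is nothing more than an output measure divided by an expenditure figure, as in the hypothetical sketch below. Both figures are invented, and, as argued above, reliable expenditure data of this kind are exactly what is hard to obtain in practice.

```python
# Hypothetical efficiency-style quotient: output indicator per unit of
# research expenditure. Both figures are invented for illustration.
top10_papers = 120        # number of top-10% papers (assumed)
expenditure_meur = 85.0   # research expenditure in million EUR (assumed)

print(f"{top10_papers / expenditure_meur:.2f} top-10% papers per million EUR")
```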

Second, Rodríguez-Navarro criticizes our choice of the proportion and number of the top 10% most-cited papers. Generally speaking, we have found that comparisons of institutions yield similar results with the top 5% or the top 20%. We used the top 10% most-cited papers because they are equated with highly cited publications throughout the bibliometric literature [7,8,9,10]. Furthermore, this indicator is already used in the Leiden Ranking [11] and the SCImago Institutions Ranking [12].
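
Assuming papers have already been assigned citation percentiles as sketched earlier, the proportion indicator (often written PP(top 10%)) is simply the share of an institution's papers at or above the 90th percentile. The sketch below uses invented percentile values.

```python
# Sketch of a PP(top 10%)-style indicator: the share of an institution's
# papers among the 10% most cited of their field and year. Data invented.

def pp_top10(paper_percentiles: list[float]) -> float:
    """Proportion of papers at or above the 90th citation percentile."""
    top = sum(1 for p in paper_percentiles if p >= 90.0)
    return top / len(paper_percentiles)

institution = [95.2, 40.0, 91.0, 60.5, 88.9, 99.1, 10.3, 77.7]
print(f"PP(top 10%) = {pp_top10(institution):.2f}")  # 0.38 (3 of 8 papers)
```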

Third, Rodríguez-Navarro states that a bibliometric indicator should be able to distinguish between normal and revolutionary science. But what counts as normal and what as revolutionary science? Kuhn [13] used examples from history to explain the difference between the two, but shed no light on how the distinction could be drawn in modern science. We have already addressed Kuhn's concept in bibliometric terms. Our results, based on the examples of the Big Bang theory [14] and plate tectonics [15], show that dividing the development of research into two phases does not adequately reflect its complexity. We have therefore proposed enhancing Kuhn's paradigm concept with the Anna Karenina principle [16]: a scientific revolution can only be expected when several key conditions have been fulfilled. For example, solid evidence answering basic questions must be presented and taken up by colleagues, and must be amenable to verification by means of independent data and methods.

Fourth, Rodríguez-Navarro incorrectly mixes societal and scientific impact measurements of research. The indicators we propose measure scientific impact, no more and no less. Indicators of research “that is relevant to society” have not yet been developed [17,18]. At present, the case-study approach, which is very complex, is regarded as the best method for measuring societal impact. We have developed a simpler approach in which scientists write assessment reports, such as those of the IPCC, that explain how their results could be used or applied in society [19].

Footnotes

The authors declare that they have no conflict of interest.

References

  1. Rodríguez-Navarro A (2013) EMBO Rep 14: 493. doi:10.1038/embor.2013.60
  2. Rodríguez-Navarro A (2011) PLoS ONE 6: e20510
  3. Harnad S (2009) Scientometrics 79: 147–156
  4. Bornmann L, Daniel H-D (2005) Scientometrics 65: 391–392
  5. Bornmann L, Wallon G, Ledin A (2008) Res Eval 17: 149–156
  6. Bornmann L, Leydesdorff L (2013) J Informetr 7: 286–291
  7. Bornmann L, de Moya Anegón F, Leydesdorff L (2012) J Informetr 6: 333–335
  8. Sahel JA (2011) Sci Transl Med 3: 84cm13
  9. Tijssen R, van Leeuwen T (2006) Centres of research ‘excellence' and science indicators. In Ninth International Conference on Science and Technology Indicators (ed. Glänzel W), pp 146–147. Katholieke Universiteit Leuven
  10. Tijssen R, Visser M, van Leeuwen T (2002) Scientometrics 54: 381–397
  11. Waltman L et al (2012) J Am Soc Inf Sci Technol 63: 2419–2432
  12. SCImago Research Group (2012) SIR World Report 2012. University of Granada
  13. Kuhn TS (1962) The Structure of Scientific Revolutions. University of Chicago Press
  14. Marx W, Bornmann L (2010) Scientometrics 84: 441–464
  15. Marx W, Bornmann L (2013) Scientometrics 94: 595–614
  16. Bornmann L, Marx W (2012) J Am Soc Inf Sci Technol 63: 2037–2051
  17. Bornmann L (2012) EMBO Rep 13: 673–676
  18. Bornmann L (2013) J Am Soc Inf Sci Technol 64: 217–233
  19. Bornmann L, Marx W (2013) Scientometrics [Epub ahead of print] doi:10.1007/s11192-013-1020-x

