Sudanese Journal of Paediatrics
Editorial
2011;11(1):6–7.

Evaluation of Science

Adnan Mahmmood Usmani (1), Sultan Ayoub Meo (2)
PMCID: PMC4949783  PMID: 27493300

Abstract

Publishing a scientific manuscript in a peer-reviewed biomedical journal is an important ingredient of research, bringing career-enhancing advantages and a significant amount of personal satisfaction. The road to evaluating science (research and scientific publications) among scientists often seems complicated. A scientist’s career is generally summarized by the number of publications and citations, teaching undergraduate, graduate and post-doctoral students, writing or reviewing grants and papers, preparing for and organizing meetings, participating in collaborations and conferences, advising colleagues, and serving on editorial boards of scientific journals.

Scientists have been sizing up their colleagues since science began. Scientometricians have invented a wide variety of algorithms, called science metrics, to evaluate science. Many of these metrics are unknown even to the everyday scientist. Unfortunately, there is no all-in-one metric: each has its own strengths, limitations and scope. Some are mistakenly applied to evaluate individuals, and each is surrounded by a cloud of variants designed to help it apply across different scientific fields or career stages [1]. A suitable indicator should be chosen by considering the purpose of the evaluation and how the results will be used. Scientific evaluation assists us in computing research performance, comparing with peers, forecasting growth, identifying excellence in research, ranking by citations, finding the influence of research, measuring productivity, making policy decisions, securing funds for research and spotting trends. The key concepts in science metrics are output and impact, and the evaluation of science is traditionally expressed in terms of citation counts. Although most science metrics are based on citation counts, the two most commonly used are the impact factor [2] and the h-index [3].
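Both metrics are simple functions of citation counts: the h-index is the largest h such that an author has h papers cited at least h times each, and a journal’s impact factor for year Y is the number of citations received in Y by items published in the two preceding years, divided by the number of citable items published in those years. The following minimal sketch (not part of this editorial; all numbers are illustrative) shows both definitions in Python:

def h_index(citations):
    """Hirsch's h-index [3]: the largest h such that the author has
    h papers cited at least h times each."""
    ranked = sorted(citations, reverse=True)
    # Counts the prefix of the ranked list where citations >= rank.
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
    """Journal impact factor [2] for year Y: citations received in Y to
    items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / citable_items_prev_two_years

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3
print(impact_factor(450, 180))          # -> 2.5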

Appropriate use of Science Metrics

We should understand the difference between individual and article-level metrics, journal-level metrics and institutional-level metrics. Some researchers mistakenly use an inappropriate metric or indicator to draw conclusions about the quality, quantity, influence and impact of science. While evaluating or drawing conclusions, we must consider the following core issues (a minimal sketch computing these indicators follows the list):

  • Measuring the productivity and research output by paper counts

  • Measuring the influence by citation counts

  • Measuring the impact by counting cites per paper

  • Measuring the influence/efficiency by the Hirsch (h) index

  • Measuring the efficiency by considering cited and uncited papers

  • Measuring relative impact by benchmarking against baselines
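The sketch below (illustrative, not from this editorial) computes these indicators for a single author from a list of per-paper citation counts; field_baseline, the average citations per paper in the author’s field, is an assumed input used only for the relative-impact benchmark.

def author_indicators(citations, field_baseline):
    papers = len(citations)                        # productivity: paper count
    total_cites = sum(citations)                   # influence: citation count
    cites_per_paper = total_cites / papers         # impact
    uncited = sum(1 for c in citations if c == 0)  # efficiency: uncited papers
    ranked = sorted(citations, reverse=True)
    h = sum(1 for rank, c in enumerate(ranked, 1) if c >= rank)  # h-index
    return {
        "papers": papers,
        "citations": total_cites,
        "cites_per_paper": round(cites_per_paper, 2),
        "h_index": h,
        "uncited_share": round(uncited / papers, 2),
        "relative_impact": round(cites_per_paper / field_baseline, 2),
    }

# Illustrative data: seven papers, field average of 4 citations per paper.
print(author_indicators([25, 8, 5, 3, 3, 1, 0], field_baseline=4.0))
# {'papers': 7, 'citations': 45, 'cites_per_paper': 6.43,
#  'h_index': 3, 'uncited_share': 0.14, 'relative_impact': 1.61}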

Do we need new metrics?

The two most commonly used science metrics for assessing scientific performance, the impact factor and the h-index, are a subject of deep concern, especially among younger scientists. Given that scientometricians continue to devise metrics of ever-increasing sophistication, universities and scientific societies need to help decision-makers keep abreast [4]. Traditional science metrics were developed in an age of sparse information, whereas we live in an age of excess information. The debate and criticism around traditional evaluation metrics are calling for new measures. Scientific activity has moved online over the past decade, and to better capture scientific impact in the digital era a variety of new, more sophisticated science metrics is required [5]. Former Nature editor Charles G. Jennings summarizes the basic requirements for a scientific quality-assessment system: it should be reliable, digestible, economical, fast and resistant to ‘gaming’.

The evaluation of science is a multi-dimensional construct and cannot be assessed by a single indicator. We need to devise new indicators and metrics that bridge the gap between citation-based metrics and usage-based metrics (raw internet/digital access data), as well as between quantitative, qualitative and impact indicators. Usage-based metrics offer a new dimension, and it is suggested that collaboratively aggregated metadata may help to fill the gap.
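As a purely hypothetical illustration of one such bridge, the sketch below blends a citation-based signal with a usage-based signal (download counts). The weights and normalisation caps are illustrative assumptions, not any published metric.

def composite_score(citations, downloads,
                    w_cites=0.7, w_usage=0.3,
                    cite_norm=100.0, usage_norm=10000.0):
    # Hypothetical indicator: weighted blend of normalised (capped)
    # citation and download counts; all constants are assumptions.
    return (w_cites * min(citations / cite_norm, 1.0)
            + w_usage * min(downloads / usage_norm, 1.0))

print(composite_score(citations=40, downloads=6000))  # 0.7*0.4 + 0.3*0.6 = 0.46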

References

