Journal of Pharmacology & Pharmacotherapeutics. 2013 Apr-Jun;4(2):125–129. doi: 10.4103/0976-500X.110894

Scientific evaluation of the scholarly publications

Alok Saxena 1, Vijay Thawani 1, Mrinmoy Chakrabarty 2, Kunda Gharpure 3
PMCID: PMC3669571  PMID: 23760040

Abstract

The worthiness of any scientific journal is measured by the quality of the articles published in it. The impact factor (IF) is one popular tool which analyses the quality of a journal in terms of the citations received by its published articles. It is usually assumed that journals with a high IF carry meaningful, prominent, and quality research. Since the IF does not assess a single contribution but the whole journal, the evaluation of research authors should not be influenced by the IF of the journal. The h index, g index, m quotient, and c index are some alternatives for judging the quality of an author. These address the shortcomings of the IF, viz. the number of citations received by an author, active years of publication, length of academic career, and citations received by recent articles. Quality being the most desirable aspect for evaluating an author's work over the active research phase, various indices have attempted to accommodate different possible variables. However, each index has its own merits and demerits. We review the available indices, identify their fallacies and, to correct these, propose the Original Research Publication Index (ORPI) for the evaluation of an author's original work, which can also take care of the bias arising from self-citations, gift authorship, an inactive phase of research, and the length of the non-productive period in research.

Keywords: c index, g index, h index, impact factor, m quotient, original research publication index, self-citation

BACKGROUND

The scientific journal impact factor (IF) is sought by scientific research writers who wish to target their manuscripts at journals matching their research worthiness and expectations. The available literature on the journal IF as a tool to measure the standard of a journal is scarce; hence there is a need to analyse the quality of journals in terms of citations. The basic assumption that journals with a high IF carry meaningful and prominent research work does not mean that journals with a low IF have poor quality. Factors like editorial policy, frequency, language, medium, regularity and timeliness of publication, readership, circulation, and quality of publication decide the reputation of a journal.

Hence it is inappropriate to evaluate the research impact of any author by looking at the IF of the journal or the number of citations received by his published work. In recent years many new indices have been proposed to evaluate the research worthiness of authors, each with its own merits and demerits. We briefly review the available indices and also propose a new variant for evaluating the scientific originality and continuity of an individual's research publications.

JOURNAL EVALUATION

The idea of the journal IF was first propagated by Eugene Garfield in Science in 1955.[1] A core group of large and highly cited articles was required for mandatory coverage under the Science Citation Index (SCI) to be considered for the IF.[1] In 1975, Thomson Reuters started publishing the Journal Citation Reports (JCR) as part of the SCI and the Social Sciences Citation Index (SSCI).[2] The JCR shows rankings of journals by IF, grouped by discipline if required, and also gives a five-year IF.

Calculation of journal impact factor

There are two elements in the calculation of the IF: the total citations received (the numerator) and the total citable items (the denominator). The numerator includes citations to all types of articles, such as original articles, reviews, proceedings, editorials, letters to the editor, and news items, while the denominator counts only the citable items, i.e., original articles, reviews, and proceedings, published during the two years used for the calculation. The IF is the ratio of the total number of citations received in a given year by all articles published in that journal during the previous two years, to the total number of citable items published during those same two years.[3,4] Thus, for example, the IF of a journal for 2011 is calculated by dividing the total number of citations during 2011 to articles published in 2009 and 2010 by the total number of citable items published in 2009 and 2010. Garfield decided on the two-year frame on the basis of readers' interest in the current contents of a journal. He observed that 25% of the citations drawing the attention of readers belonged to the year of publication and the two previous years. He also calculated 3-, 5-, 7- and 15-year IFs, which are available in the Institute for Scientific Information's (ISI) journal performance indicator.[5]
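To make the arithmetic concrete, the following minimal Python sketch computes a two-year IF from hypothetical counts (the counts and the function name are invented for illustration):

```python
def impact_factor(citations_in_target_year, citable_items_prior_two_years):
    """Two-year impact factor: citations received in the target year to articles
    published in the two preceding years, divided by the number of citable items
    (original articles, reviews, proceedings) published in those two years."""
    return citations_in_target_year / citable_items_prior_two_years

# Hypothetical example: IF for 2011 uses citations in 2011 to items from 2009-2010.
citations_2011_to_2009_2010 = 480  # numerator counts citations to all article types
citable_items_2009_2010 = 200      # denominator counts only citable items
print(impact_factor(citations_2011_to_2009_2010, citable_items_2009_2010))  # 2.4
```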

Other methods of ranking journal performance

The Journal Quality List (JQL), compiled and edited by Prof. Anne-Wil Harzing, contains 21 rankings of 933 journals. The JQL is a collation of journal rankings from a variety of sources. The prime motive for publishing the JQL was to differentiate journals on the basis of their quality standard, not to evaluate staff.

The VHB 03 scale, developed by the Association of Professors of Business in German-speaking countries, includes 15,000 journals associated with business administration and related areas (economics, psychology, political science); its evaluation grading ranges from 1 (very low) to 10 (very high).

The Hong Kong Baptist 05 (HKB05) rating, instituted by the Hong Kong Baptist University School of Business Executive Committee, ranks journals as A, B+, B, and B−, where A denotes the highest quality, B+ and B medium quality, and B− the lowest.

The JQL has been widely used and is currently in its 44th edition. Harzing has expressed the view that quality in terms of the number of citations and quantity in terms of the number of papers cannot, by themselves, be a perfect measure.[6]

Problems with impact factor

In spite of the wide use of the IF, many factors have been found to influence it and therefore to question its validity. Dissemination of knowledge about the IF is important for research scholars, to enable them to target their manuscripts at journals of an appropriate standard. This may also lead to arguments about the factors which bias the calculation of the IF.

Language of the publication

The presumption that the language of a journal affects its IF is supported by the previous literature. English being the most widely used global language of science, journal publishers prefer it to attract a larger reader base, resulting in more visibility, increased citations, and a higher IF, as compared with German, Latin, or Greek, which were erstwhile popular languages.[5,7]

Subject area

Rapidly evolving fields always attract more interest from readers. Fundamental subjects gain more citations than super-specialised subjects, which have limited readership, thus creating bias in the calculation of the IF. The IF of journals in the fundamental life sciences is higher than that of journals in the neurosciences.[3] A neuroscience journal carries only subject-specific articles, whereas fundamental life-science journals cover varied articles, including those on neuroscience. Therefore, the latter are bound to receive more readership, more citations, and consequently a higher IF.

Category of an article

Short letters receive immediate citations for about two years, whereas review articles receive longer-term citations, resulting in a higher IF.[3] Hence journals publishing letters and case reports have greater immediate impact but a short cited half-life, while the reverse holds for review articles.

Self-citations

The journal IF can be manipulated through self-citation, i.e., when an author cites his or her own published articles. Such self-citations increase the citation frequency and the IF.[8] Aksnes recommends excluding self-citations from citation counts at the micro and meso levels.[9] However, self-citation has its own justification: a lack of self-citation may lead peer reviewers to interpret that the author has little or no research background. On the other hand, self-citation can also reflect an author's egotism and disregard of the respective co-authors. Hence it should be used only to evaluate the research background of an author.[10]

Numerator and denominator

The numerator and denominator used for the calculation of the IF can mislead the result. The numerator includes all types of articles, but the inclusion criteria for the denominator are limited: editorials and letters to the editor are not counted in the denominator. Hence the numerator is inflated relative to the denominator, leading to an exaggerated IF.[3]

Frequency of publication

Journals which are published more frequently have greater visibility and citations than others, resulting in higher IF.

EVALUATING THE AUTHORS

The IF is more focussed on quantity than quality. The IF is thus handicapped, yet it remains researchers' favourite indicator for assessing journal quality, for want of an alternative. However, it should not be used in the academic assessment of faculty.[10] Seglen feels that, for evaluating the scientific merit of a publication, it is better to upgrade and validate the evaluative principles, procedures, and criteria used, rather than to suggest more advanced versions of indicators, which practically serve little purpose.[11] The European Association of Science Editors (EASE) advises that the IF should be used, pragmatically, for comparing the influence of entire journals and not of single papers.[12]

Focus on quality

Hirsch proposed the h index to evaluate the research impact of a scientist author by looking at the number of citations that the author's work has received.[13] It has the advantage of providing both a quantitative (number of papers) and a qualitative (impact, or citations to these papers) assessment.[14]

Calculation of h index

A scientist has index h if h of his or her papers, out of the total number of papers published over n years (Np), have ≥ h citations each and the remaining papers have no more than h citations each.[15] Thus an h index of 40 means that a scientist has published 40 papers with a minimum of 40 citations each. Since it is simple and easily calculated, the index has received positive acceptance worldwide, but objections have been raised about its performance. The h index slants in favour of academicians who publish a continuous stream of papers with lasting and above-average impact.[16] The strongest indication that the h index is accepted as a measure of academic achievement is that ISI Thomson has included it in the new citation report feature of the Web of Science.
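As an illustration, the definition can be coded directly: rank the papers by citations in descending order and take the largest rank h at which the paper at that rank still has at least h citations. The sketch below uses hypothetical citation counts:

```python
def h_index(citations):
    """h index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's papers.
print(h_index([25, 12, 8, 8, 5, 3, 1]))  # 5: the top 5 papers have >= 5 citations each
```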

Disadvantages of h index

It only includes citations to journal articles and not to books, book chapters, working papers, reports, or conference papers. It counts only citations appearing in journals listed in the ISI Thomson database, which, especially for the social sciences and humanities, covers only a small proportion of academic journals. When the h index is calculated from citations of papers on the Web of Science, papers by a different scientist bearing the same name may creep in and thus give erroneous results. It can be inflated by self-citations. It cannot decline even if a scientist stops publishing after 10-20 active years of publication, thus always maintaining a high h index.[13]

The g index

The h index ignores the number of citations each individual article receives beyond the h threshold. Hence, in order to give more weight to an author's highly cited articles, Leo Egghe proposed the g index. With the articles of an author ranked in descending order of the number of citations received, the g index is the largest number such that the top g articles together received at least g² citations.[17,18] A higher g score reflects a higher number of citations obtained by the top articles.[13,17] The g index has not yet attracted much attention or empirical verification, yet it is a very useful complement to the h index.
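Using the same hypothetical citation counts as in the h index sketch above, the g index can be obtained by accumulating citations down the ranked list and keeping the largest rank whose running total still reaches the square of that rank:

```python
def g_index(citations):
    """g index: the largest g such that the top g papers (ranked by citations,
    descending) together have at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

# Cumulative citations are 25, 37, 45, 53, 58, 61, 62, so g = 7 (62 >= 49).
print(g_index([25, 12, 8, 8, 5, 3, 1]))  # 7
```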

Zhang's e index

The e index is the square root of the surplus of citations in the h set beyond h2, i.e., beyond the theoretical minimum required to obtain an h-index of ‘h’. The aim of the e-index is to differentiate between scientists with similar h-indices but different citation patterns.[18,19]
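A minimal sketch of the e index on the same hypothetical data, reusing the h_index function from the earlier sketch:

```python
import math

def e_index(citations):
    """e index (Zhang): square root of the citations in the h core that exceed
    the theoretical minimum of h**2 needed to reach the h index."""
    ranked = sorted(citations, reverse=True)
    h = h_index(ranked)               # h_index as defined in the sketch above
    excess = sum(ranked[:h]) - h * h  # surplus citations within the h core
    return math.sqrt(excess)

print(e_index([25, 12, 8, 8, 5, 3, 1]))  # sqrt(58 - 25) = sqrt(33) ≈ 5.74
```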

Individual h index (original)

The individual h index proposed by Batista et al. divides the standard h index by the average number of authors of the articles that contribute to the h index, in order to reduce the effects of co-authorship.[18,20] They suggest that, since manuscripts with more authors usually receive more self-citations and since co-authorship behaviour is characteristic of disciplines, the individual h index might serve to quantify an individual's scientific output by indicating the number of papers, with at least that number of citations, an academician would have written had he or she worked alone.[20]
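A sketch of this idea with invented (citations, number of authors) pairs, again reusing the h_index function from the sketch above: the standard h index is divided by the mean number of authors of the papers in the h core.

```python
def h_index_individual(papers):
    """Batista-style individual h index: standard h index divided by the mean
    number of authors of the papers that make up the h core."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    h = h_index([cites for cites, _ in ranked])  # h_index from the sketch above
    mean_authors = sum(authors for _, authors in ranked[:h]) / h
    return h / mean_authors

# Hypothetical (citations, number of authors) pairs.
print(h_index_individual([(25, 5), (12, 2), (8, 1), (8, 4), (5, 1)]))  # 5 / 2.6 ≈ 1.92
```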

Individual h index (PoP variation)

This is also an individual h index, but instead of dividing the total h index it first normalizes the number of citations for each published paper by dividing it by the number of authors of that paper, and then calculates the h index over the normalized citation counts. This approach accounts more accurately for any co-authorship effect that might be present and hence is a better approximation of the per-author impact, which is what the original h index set out to provide.[18]
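In code, the difference from the previous variant is that each paper's citation count is divided by its author count before the ordinary h computation is applied (hypothetical data, reusing the h_index sketch above):

```python
def h_index_pop_individual(papers):
    """PoP-style individual h index: normalize each paper's citations by its
    number of authors, then compute the h index of the normalized counts."""
    normalized = [cites / authors for cites, authors in papers]
    return h_index(normalized)  # h_index from the sketch above

# Same hypothetical (citations, number of authors) pairs as before.
print(h_index_pop_individual([(25, 5), (12, 2), (8, 1), (8, 4), (5, 1)]))  # 4
```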

Contemporary h index

Proposed by Sidiropoulos et al. (2006), the contemporary h index is concerned with the research output of active and inactive researchers and predominantly counts citations received by recent articles. It does justice to the work of active scientists, unlike the h index, which allows an inactive scientist to maintain a high h index many years after contributing. It also differentiates between junior and senior scholars, as the output of junior scholars is mostly recent.[13]

m quotient

To facilitate comparisons between academicians with different lengths of academic career, Hirsch proposed the measure "m", derived by dividing the h index by the number of years the academician has been active (measured as the number of years since the first published paper). It discriminates against academicians who work part time or have had career interruptions. It is given by the formula:

m = h index/number of years the academician has been active since the first published paper.[13]

The m index enables comparisons between academicians who have had different lengths of academic careers as well as those who have had one or multiple career interruptions during their academic career.

Author impact analysis

The software "Publish or Perish"[13] provides a variety of outputs: the total number of papers, total number of citations, average number of citations per paper, average number of citations per author, average number of papers per author, the h index and its related parameters (Hirsch's a and m), Zhang's e index, Egghe's g index, the contemporary h index (shown as the hc index with its ac parameter), variants of the individual h index (hI, hI,norm, and hm), the age-weighted citation rate, and an analysis of the number of authors per paper.

Shadows of h index

The Hirsch index has become so popular that its variants are used in many fields. Bornmann and Daniel[16] have described three further indices based on the h index:

h-b index: It applies to interesting topics and compounds which attract the maximum attention of readers; e.g., carbon nanotubes (h-b = 167) and nanowires (h-b = 105) are currently among the most discussed topics in physics.

c index

This is an alternative to m and indicates the number of citations received by an academician in the most recent calendar year.

a index

This was devised to compensate for the overall insensitivity of the other available indices to highly cited papers. It is the average number of citations garnered by the articles in the Hirsch core, i.e., the articles at ranks smaller than or equal to h.
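A sketch of the a index on the same hypothetical citation counts, averaging the citations of the h papers that form the Hirsch core (h_index as in the earlier sketch):

```python
def a_index(citations):
    """a index: average number of citations of the papers in the Hirsch core,
    i.e., the h most-cited papers."""
    ranked = sorted(citations, reverse=True)
    h = h_index(ranked)  # h_index from the sketch above
    return sum(ranked[:h]) / h if h else 0.0

print(a_index([25, 12, 8, 8, 5, 3, 1]))  # 58 / 5 = 11.6
```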

OUR PROPOSITION: ORIGINAL RESEARCH PUBLICATION INDEX

Having gone through the available indices, we realised the fallacies they suffer from. Hence we propose an index for the performance measurement of an academician which is independent of the cumulative IF of the journals one has published in. Our idea stems from the fact that an article gets citations based upon its overall visibility and access. It is possible that an article of high academic calibre and impact may go unnoticed if it is published in a less popular journal. If fewer readers from other disciplines access the article, that should not compromise its academic worth. Hence we propose a new indicator which gives an overall value of an individual's research output along with the orientation of that output, i.e., research, report or review:

ORPI = N/I + (C − SC)/T

Where ORPI is an acronym for Original Research Publication Index, calculated for the author's publications as first author,

N = Total number of original articles published in PubMed-indexed journals (since it is the most extensively used database for citations) by the author, starting from the first indexed publication till date,

C = Total number of citations received by the "N" original articles published in indexed journals by the author, starting from the first indexed publication till date,

SC = Total number of self-citations of the "N" original articles published in indexed journals by the author, starting from the first indexed publication till date,

I = Total number of citable items, i.e., original articles, reviews, case reports, and proceedings, published by the author in indexed journals, starting from the first indexed publication till date,

T = Time in years starting from the first indexed publication till date (this gives the time depth of the publication track).
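The proposed formula translates directly into a short calculation. The sketch below uses an entirely hypothetical author record to show how the two terms combine:

```python
def orpi(n_original, citable_items, citations, self_citations, years):
    """Original Research Publication Index as proposed above:
    ORPI = N/I + (C - SC)/T."""
    return n_original / citable_items + (citations - self_citations) / years

# Hypothetical record: 12 first-author original articles out of 30 citable items,
# 150 citations of which 20 are self-citations, over 10 years of publishing.
print(orpi(n_original=12, citable_items=30, citations=150,
           self_citations=20, years=10))  # 0.4 + 13.0 = 13.4
```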

Strengths of ORPI

It indicates the originality of a researcher through the publication of original articles vis-à-vis the total publication output during the given time span, irrespective of the journals' ranking. It indicates how the author fares in terms of citations. It nullifies the self-citation bias that otherwise creeps into other indices. It eliminates the interference of gift authorship, since gift authorship is usually not conferred as first authorship. It indicates the continuity of original research output. It gives more weight to first authorship and prompts researchers to inculcate the habit of making original contributions to research. Original articles of merit published in journals with a low IF also get due credit. Thus the ORPI score is an indicator of originality, productivity, and visibility, without citation bias.

CONCLUSION

All researchers wish to see their work published in the best scientific journals with the highest ratings. Currently, the value of a medical journal is judged chiefly through the impact factor (IF). However, the IF has its own shortcomings, which have been only partially addressed by other indices; there is as yet no perfect alternative. Hence, we have proposed a new index to overcome the deficiencies identified. Our Original Research Publication Index (ORPI) scores over the rest in the performance evaluation of a researcher with continuous research publications as a first author. We hope that the ORPI suggested by us will be received with open minds.

Footnotes

Source of Support: Nil

Conflict of Interest: None declared.

REFERENCES

1. Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295:90–3. doi: 10.1001/jama.295.1.90.
2. Impact factor [homepage on the Internet]. Wikipedia [updated 2012 Mar 30; last cited 2011 Nov 06]. Available from: http://www.en.wikipedia.org/wiki/impact_factor
3. Amin M, Mabe M. Impact factors: Use and abuse. Medicina (B Aires). 2003;63:347–54.
4. Saha S, Saint S, Christakis DA. Impact factor: A valid measure of journal quality? J Med Libr Assoc. 2003;91:42–6.
5. Garfield E. The meaning of the impact factor. Int J Clin Health Psychol. 2003;3:363–9.
6. Harzing AW. Journal quality list [homepage on the Internet] [updated 2008 Sept 26; last cited 2012 Mar 31]. Available from: http://www.harzing.com
7. Dong P, Loh M, Mondry A. The "impact factor" revisited. Biomed Digit Libr. 2005;2:7. doi: 10.1186/1742-5581-2-7.
8. Fowler JH, Aksnes DW. Does self-citation pay? Scientometrics. 2007;72:427–37.
9. Aksnes DW. A macro study of self-citation. Scientometrics. 2003;56:235–46.
10. Sammarco PW. Journal visibility, self-citation, and reference limits: Influences on impact factor and author performance review. Ethics Sci Environ Polit. 2008;8:121–5.
11. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314:498–502. doi: 10.1136/bmj.314.7079.497.
12. EASE. EASE statement on inappropriate use of impact factors [homepage on the Internet] [last cited 2012 Mar 31]. Available from: http://www.ease.org.uk/sites/default/files/ease_statement_ifs_final.pdf
13. Harzing AW. Reflections on the h-index [homepage on the Internet] [last cited 2011 Dec 13]. Available from: http://www.harzing.com/pop_hindex.htm
14. Glanzel W. On the opportunities and limitations of the H index. Sci Focus. 2006;1:10–1.
15. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005;102:16569–72. doi: 10.1073/pnas.0507655102.
16. Bornmann L, Daniel HD. What do we know about the h index? J Am Soc Inf Sci Technol. 2007;58:1381–5.
17. Egghe L. Theory and practise of the g-index. Scientometrics. 2006;69:131–52.
18. Welfens PJJ [homepage on the Internet] [last cited 2012 Mar 31]. Available from: http://www.welfens.wiwi.uniwuppertal.de/fileadmin/welfens/daten/Presse/Bibliometrie.pdf
19. Zhang CT. The e-index, complementing the h-index for excess citations. PLoS One. 2009;4:e5429. doi: 10.1371/journal.pone.0005429.
20. Batista PD, Campiteli MG, Kinouchi O, Martinez AS. Is it possible to compare researchers with different scientific interests? Scientometrics. 2006;68:179–89.
