Journal of Clinical Orthopaedics and Trauma. 2020 Jul 16;11(Suppl 4):S684–S685. doi: 10.1016/j.jcot.2020.07.010

What is there in the scoring and rating of journals?

Raju Vaishya a, Abhishek Vaish a, Abid Haleem b
PMCID: PMC7394839  PMID: 32774051

Abstract

Medical and associated speciality journals aim to disseminate area-specific knowledge, discoveries, experiences, and cases, and to substantiate or negate previously published information. These journals are essential for doctors, researchers, and scientists to share their work, research, and experiences with the rest of the world. However, it is often quite challenging to choose an appropriate journal to which to submit work for possible publication. Researchers try to choose the most suitable platform to highlight their research so that their work is published, read, and cited widely. Hence, selecting an appropriate journal is a vital task for them. Although no ranking or scoring system can be 100% perfect and foolproof, scoring systems are required to be fair and objective in scoring journals on various metrics and parameters.

Keywords: Scoring, Rating, Impact factor, Cite score, Journal


Medical and associated speciality journals aim to disseminate area-specific knowledge, discoveries, experiences, and cases, and to substantiate or negate previously published information. These journals are essential for doctors, researchers, and scientists to share their work, research, and experiences with the rest of the world. However, it is often quite challenging to choose an appropriate journal to which to submit work for possible publication. Researchers try to choose the most suitable platform to highlight their research so that their work is published, read, and cited widely. Hence, selecting an appropriate journal is a vital task for them.

One of the essential factors in journal selection is the relevance, reputation, and importance of the journal. Citation metrics are used to evaluate the value of a journal, although these metrics are increasingly criticized because their values correlate poorly with the real importance of individual articles.1 Notably, highly cited journals are regarded as more prestigious and attract higher-quality research and authors. Several scoring systems exist to rank journals, such as the Impact Factor (IF), CiteScore (CS), and Eigenfactor Score (EFS); the two most commonly used are the IF and the CS. Both metrics measure impact through yearly citations, yet they differ in several respects (Table 1), including the number of years used to calculate the metric, access to the underlying data, and the number of journals covered.2 The Journal Impact Factor (JIF), also known simply as the Impact Factor (IF), is the oldest and best-known citation metric; it was created in the 1950s and is published by the Web of Science (WOS), whereas Elsevier launched the CS as a new citation metric in 2016. Table 1 gives a criterion-wise comparison of the CS and the IF. The limitations of the IF have paved the way for other journal metrics, such as the SCImago Journal Rank (SJR) and the Source Normalized Impact per Paper (SNIP), which address some of these shortcomings.3,4

Table 1.

Comparison of Impact Factor and CiteScore.

Criteria | Impact Factor | CiteScore
Company | Web of Science (Clarivate Analytics) | Scopus (Elsevier)
Journal coverage | Smaller | Much larger
Window for analysis | Two years | Four years
Inclusions | Publications indexed in Web of Science | Publications indexed in Scopus
Exclusions | Different paper types are used for the numerator and denominator; the denominator excludes non-citable items (letters, news, editorials, etc.) | Non-peer-reviewed items (letters, news, editorials, etc.) are excluded from both numerator and denominator
Scoring year | Not included | Included
Subscription | Paid | Free
Transparency | Less | More
Membership | Complicated | Simpler
Disciplinary differences | Not accounted for | Not accounted for
Scoring | Mid subsequent year | Mid subsequent year

The Impact Factor is proprietary to the Web of Science (WOS), owned by Clarivate Analytics, whereas Elsevier manages the CS through Scopus. Both the IF and the CS are calculated annually, in the middle of the year following the year of calculation, by dividing the number of citations a journal received in the assessment period by the number of items it published in the relevant window. From 2019, Scopus employs a four-year window for calculating the CS; earlier, the window was three years. The window for the IF, however, is only two years,5 which benefits disciplines with rapid citation patterns; moreover, the IF does not take into account disciplinary differences in the expected number of citations. The CS has recently become popular and is competing well with the IF, as it offers several advantages over the IF: it is free to access, the Scopus database is much larger and more diverse than WOS, and it uses a four-year citation window. The CiteScore tracker tool also reflects the score trends every month.
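To make the arithmetic concrete, the following is a minimal sketch, with hypothetical citation and publication counts, of how the two metrics are computed under the windows described above; the function names and numbers are illustrative and not an official implementation of either metric.

# IF 2019 = citations received in 2019 to items published in 2017-2018,
#           divided by citable items published in 2017-2018 (two-year window).
# CS 2019 = citations received in 2016-2019 to peer-reviewed items published in 2016-2019,
#           divided by peer-reviewed items published in 2016-2019
#           (four-year window, scoring year included).

def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """Two-year Journal Impact Factor (illustrative sketch)."""
    return citations_to_prev_two_years / citable_items_prev_two_years

def cite_score(citations_four_year_window: int, peer_reviewed_items_four_year_window: int) -> float:
    """CiteScore under the 2019 Scopus methodology (illustrative sketch)."""
    return citations_four_year_window / peer_reviewed_items_four_year_window

# Hypothetical journal: 450 citations to 180 citable items in the IF window,
# 1600 citations to 400 peer-reviewed items in the CS window.
print(round(impact_factor(450, 180), 2))  # 2.5
print(round(cite_score(1600, 400), 2))    # 4.0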

Scopus has now changed and adjusted its scoring methodology. It now includes only peer-reviewed publications (articles, reviews, conference papers, book chapters, and data papers) in both the citation numerator and the publication denominator, allowing a fairer and more robust comparison between journals. Previously, all publications, including non-peer-reviewed article types such as editorials, news items, letters, and notes, were included in the calculation. Furthermore, publications from the previous four years, up to and including the calculation year, are now counted. This gives even new journals, with just a single year of publication, a fair chance to have a CS.6
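The following small sketch shows what this document-type rule implies in practice, assuming a hypothetical record format in which each publication carries its type and citation count; only peer-reviewed types enter both the numerator and the denominator.

# Hypothetical records; the field names are assumptions for illustration only.
PEER_REVIEWED_TYPES = {"article", "review", "conference paper", "book chapter", "data paper"}

documents = [
    {"type": "article",   "citations": 12},
    {"type": "editorial", "citations": 3},   # excluded under the revised methodology
    {"type": "review",    "citations": 25},
    {"type": "letter",    "citations": 1},   # excluded under the revised methodology
]

counted = [d for d in documents if d["type"] in PEER_REVIEWED_TYPES]
score = sum(d["citations"] for d in counted) / len(counted)
print(round(score, 2))  # (12 + 25) / 2 = 18.5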

The inclusion of a journal in the WOS is much more complicated, time-consuming, and sometimes arbitrary. It may take a couple of years for an application to be processed, and journals sometimes complain of a lack of transparency and accountability in this process (Table 1). Further, applicants cannot track their application status or find out the reason for a delay. Moreover, some journals, especially from the developed world, tend to get the WOS nod earlier than journals from the developing world. Here, the IF is losing ground to the CS because of its lack of transparency, the difficulty of getting indexed, and its less versatile scoring system. The CS may therefore become the benchmark for journal scoring and ranking in the near future.

Fig. 1 tabulates the 2019 IF and CS values for the top 20 orthopaedic and trauma journals. On average, the CS is 1.56 times higher than the IF, with totals across the 20 journals of 117.4 for the CS and 75.3 for the IF. The percentage difference between the CS and the IF was smallest for Clinical Orthopaedics and Related Research (6.97%) and largest for the Journal of Pediatric Orthopaedics (73.53%), both published by Wolters Kluwer. These differences probably arise because the CS now takes into account four years of citations, versus the two years used by the IF, and because the Scopus database is much more extensive than the WOS database and covers a larger number of journals.
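The aggregate figures quoted above can be checked directly, as sketched below; the per-journal percentage difference is illustrated with one plausible definition (relative to the IF), since the article does not state the exact formula used, and the per-journal values themselves are only given in Fig. 1.

# Totals across the 20 journals, as quoted in the text.
total_cs, total_if = 117.4, 75.3
print(round(total_cs / total_if, 2))  # ~1.56, the average CS-to-IF ratio

def percent_difference(cs: float, if_value: float) -> float:
    """Percentage by which the CS exceeds the IF (one plausible definition)."""
    return (cs - if_value) / if_value * 100

# Hypothetical example: a journal with IF 3.0 and CS 4.5 shows a 50.0% difference.
print(round(percent_difference(4.5, 3.0), 1))  # 50.0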

Fig. 1. Top 20 Orthopaedic Journals of 2019, with their comparative CiteScores and Impact Factors.

Higher-ranked journals receive more and better-quality research submissions than lower-ranked journals, and most researchers want their work to be published in the ‘best’ journals in their speciality. However, the scoring systems have some caveats: a) the difficulty of assessing the quality of research, b) the impact of the sheer quantity of publications, and c) the possibility of ‘gaming’. Hence, there is an element of arbitrariness in the present scoring systems, which are unable to capture the full complexity of “best-ness”.7

Although no ranking or scoring system can be 100% perfect and foolproof, scoring systems are required to be fair and objective in scoring journals on various metrics and parameters. The bigger question of the ‘quality’ of the publications in a given journal remains to be adequately answered by scoring agencies such as Scopus and WOS. The most reputed and highly ranked journals tend to publicize their ranking scores proudly in order to attract the best research and to stay in contention with competing journals. On the other hand, journals with low or no ranking tend to criticize the shortcomings and the value of the scoring systems, which is understandable. The world’s best journals and institutions need not worry much about rankings; instead, they should ensure that their peers respect them for the quality of the research they publish. Nevertheless, this is easier said than done, especially for newly launched journals with lower ranks!

“It is sour grapes, I admit. I want to be more famous, so people are examining my work couplet by couplet, you know what I mean? That is the level where I want to go.” - Black Francis.

References

