The number of original research articles published per issue of scientific journals has declined. Nonetheless, as in almost every issue of our journal, including this one, original research articles retain an important place. This decline may be due, at least in part, to the growing number of journals, but that is clearly not the only factor responsible: the drive to increase the impact factor (IF) likely also plays an important role.
Various metrics are used to measure a journal’s quality and impact. The best known is the IF, which is calculated annually from the citations received by the articles a journal has published. For example, the 2016 IF of a journal is the number of citations received in 2016 by papers published in 2014 and 2015, divided by the total number of original research articles and reviews published in 2014 and 2015.
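Written out as a formula, with purely illustrative numbers rather than data from any real journal, the two-year calculation looks like this:

$$
\mathrm{IF}_{2016} \;=\; \frac{\text{citations received in 2016 to items published in 2014 and 2015}}{\text{number of original research articles and reviews published in 2014 and 2015}}
$$

A journal that published 100 such items in 2014 and 2015 and received 350 citations to them in 2016 would therefore have a 2016 IF of 350/100 = 3.5.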
No metric is perfect, and the IF has significant problems (1). It covers only a short time window; a small number of highly cited articles can raise it considerably; and citations containing typographic errors are not counted. The inordinate importance attached to the IF may also lead journals to direct their efforts toward raising it, for example by publishing fewer articles or by preferentially selecting articles with a stronger chance of being cited (e.g., reviews or guidelines).
Various metrics have been developed to address the deficiencies and criticized aspects of the IF, such as the 5-year IF, Eigenfactor, Article Influence Score, Immediacy Index, CiteScore, Source Normalized Impact per Paper, SCImago Journal Rank, h-index, and h5-index (2). The IF is calculated by Clarivate Analytics, a private company. Scopus, Elsevier’s citation database, developed its own metric, CiteScore. CiteScore is based on a 3-year window rather than 2 and draws its citations from a larger set of journals than is used in calculating the IF (2). CiteScore also counts all documents published in a journal, whereas the IF counts only citable items (2); see the sketch below. All metrics have limitations, and some serve to compensate for others’ deficiencies. With so many metrics in circulation, however, confusion often ensues, and many scientists are not even aware of what some of them mean.
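For contrast, CiteScore’s three-year calculation can be sketched in the same form; this is a schematic rendering for comparison, not Scopus’s official wording:

$$
\mathrm{CiteScore}_{2016} \;=\; \frac{\text{citations received in 2016 to documents published in 2013–2015}}{\text{number of all documents published in 2013–2015}}
$$

The broader denominator (all documents rather than only citable items) and the longer window are the main structural differences from the IF.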
Despite its deficiencies and criticized aspects, the IF has maintained its prominence, and it seems likely to remain the most important metric for ranking and comparing journals. A competition for a high IF has even been observed among the three leading cardiology journals: the journal that ranked first in 2016 celebrated its accomplishment (3), while the journal that lost first place criticized its competitor, arguing that the win was owed largely to guidelines (4).
The metrics mentioned above are all based on citations. Can an article’s effectiveness and contribution to science be measured by citations alone? Might an article that receives only a few citations still be read frequently? And might not a frequently read article contribute to the field of medicine even though it receives few citations?
Today, everything is done on smartphones and computers. We use them to reach all manner of information, and we use them to read scientific articles. Most of us have forgotten, or perhaps never knew, the smell of a journal’s paper and glue. The shift of everything to online platforms has now led to a new metric that is not based on citations: Altmetrics. It measures how often an article is mentioned in mainstream media, on Twitter, in scientific blogs, and in academic networks such as Mendeley, showing how much attention an article attracts and how widely it is discussed. Altmetrics, too, has many deficiencies (5): a very popular article is not necessarily a high-quality one. It would be wrong to neglect Altmetrics, but it would be equally wrong to attach too much importance to it, as this could lead to negative consequences such as the marketing of scientific articles on online platforms.
References
- 1. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314:498–502. doi: 10.1136/bmj.314.7079.497.
- 2. Crotty D. Other Metrics: beyond the Impact Factor. Eur Heart J. 2017;38:2646–2647. doi: 10.1093/eurheartj/ehx446.
- 3. Lüscher TF. Record high EHJ Impact Factor 2016. Eur Heart J. 2017;38:2524–2526. doi: 10.1093/eurheartj/ehx424.
- 4. Fuster V. Impact Factor: A Curious and Capricious Metric. J Am Coll Cardiol. 2017;70:1530–1531. doi: 10.1016/j.jacc.2017.08.002.
- 5. Crotty D. Altmetrics. Eur Heart J. 2017;38:2647–2648. doi: 10.1093/eurheartj/ehx447.