Abstract
Research organizations are becoming more reliant on quantitative approaches to determine how to recruit and promote researchers, allocate funding, and evaluate the impact of prior allocations. Many of these quantitative metrics are based on research publications. Publication metrics are not only important for individual careers, but also affect the progress of science as a whole via their role in the funding award process. Understanding the origin and intended use of popular publication metrics can inform an evaluative strategy that balances the usefulness of publication metrics with the limitations of what they can convey about the productivity and quality of an author, a publication, or a journal. This paper serves as a brief introduction to citation networks like Google Scholar, Web of Science Core Collection, Scopus, Microsoft Academic, and Dimensions. It also explains two of the most popular publication metrics: the h‐index and the journal impact factor. The purpose of this paper is to provide practical information on using citation networks to generate publication metrics, and to discuss ideas for contextualizing and juxtaposing metrics, in order to help researchers in translational science and other disciplines document their impact in as favorable a light as may be justified.
INTRODUCTION
As the scale of global research continues to increase, research organizations are becoming more reliant on quantitative approaches to determine how to recruit and promote researchers, allocate funding, and evaluate the impact of prior allocations. It has been common practice for funders; appointment, tenure, and promotion committees; academic administrations; publishers; and others to apply a variety of quantitative metrics to rank researchers, papers, journals, and even institutions and countries. 1 , 2 , 3 Many of these quantitative metrics are based on research publications. The total number of publications can be used to infer scientific output or productivity, whereas the number of citations to those publications may be used to infer the impact of the research. In aggregate, these “publication metrics” have potential to serve both researchers and those evaluating the work of researchers. 4 For the researcher, metrics can highlight the scope and strengths of one’s work, forming a useful starting point to answer the question, “what have I done?” Researchers can use metrics to structure their review of their past work, design efficient summaries of their prior research trajectory, and inform future decision making. For an evaluator (and many researchers eventually find themselves in the position of evaluating the scientific achievements of others), understanding a researcher’s publication and citation record provides context for judging their achievements and future potential.
Publication metrics are not only important for individual careers, but also affect the progress of science as a whole via their role in the funding award process. Funders that receive many grant applications may see quantitative publication metrics as a shortcut to assess research quality and impact. Researchers competing for grants may in turn strive to achieve a perceived threshold for certain metrics. An understanding of the origin and intended use of popular publication metrics can inform an evaluative strategy that balances the usefulness of metrics with the limitations of what they can convey about the productivity and quality of an author, a paper, or a journal.
This is especially important in translational science, a discipline created to improve patient and population outcomes. Translational science researchers are iteratively called upon by peers, funders, and their institutions to use their publication records to document their progress toward these outcomes, and translational science evaluators use publication metrics in their assessments. 5 , 6 The purpose of this paper is to briefly describe the most frequently used quantitative publication metrics, provide practical information on generating metrics, and discuss ideas for contextualizing and juxtaposing metrics, in order to help researchers in translational science and other disciplines document their impact in as favorable a light as may be justified.
CITATION NETWORKS
Overview
Citation counts represent the number of times a publication has been cited by other publications. Because citation counts are a key constituent of the most frequently used publication metrics, understanding their source is necessary to use publication metrics effectively. Citation counts are usually provided by citation networks or indices, which are systems that connect each publication to every publication it cites, as well as to every publication that has cited it. No single citation network functions as the dominant source of citation data. Instead, several citation networks exist and vary according to which publications they include. As a consequence, citation networks also vary in the citation numbers they generate for any given publication. Citation networks include traditional indexed databases, which contain article metadata ingested from publisher sources and accessed by users via a searchable interface; academic search engines, which crawl the web for relevant content and allow users to search the content via a web interface; and metadata datasets that can only be accessed computationally (e.g., via an API [Application Programming Interface]). However citation data are compiled, networks are created by connecting each citation reference in a publication’s bibliography to that reference’s record in the database. For example, if paper A cites another paper B, the citation network “reads” that citation in paper A’s bibliography and adds one to the existing total citation count of paper B. Users have a choice among multiple science citation networks. Six of the largest are described in Table 1.
TABLE 1.
Descriptions of common citation networks
| Citation network | Accessibility | Type of database | Description | Owner | Number of publication records | Number of citation connections |
|---|---|---|---|---|---|---|
| Web of Science Core Collection 50 | Subscription required | Indexed database (i.e., its publication metadata comes from publishers); citation network | Clarivate publishes several citation indices that cover publications in different disciplines and formats, but the largest is the Science Citation Index Expanded, which is included in the Web of Science Core Collection | Clarivate Analytics | 53 million | 1.1 billion |
| Scopus 51 | Subscription required | Indexed database; citation network | Scopus’s essential functionality is very similar to Web of Science Core Collection | Elsevier | 75 million | 1.4 billion |
| Google Scholar 52 | Freely available a | Academic search engine (i.e., it crawls the web looking for scholarly content); citation network | Google Scholar contains many publications beyond journal articles, such as books, reports, patents, presentations, posters, and other materials. As it crawls the web, creating citation connections, sometimes it encounters incorrect or difficult‐to‐parse bibliographies and erroneously creates duplicate records for the same publication. Google Scholar’s broad and opaque definition of scholarly content, as well as its automated citation record creation process, usually results in higher citation count numbers than Web of Science and Scopus. | Google | Unknown, but recently estimated at 389 million 53 | Unknown |
| OpenCitations Index of Crossref Open DOI‐to‐DOI Citations 54 | Freely available a | Publication metadata and citation index dataset | The data are accessible through an API or a public website, 55 but the search is limited and the interface may be difficult to navigate for users. | Crossref | 58 million | 720 million |
| Microsoft Academic 56 | Freely available a | Indexed database; academic search engine; citation network | Launched in 2016, Microsoft Academic’s citation index is unique in presenting citation counts not only as verified connections between papers in its own index, but also as an “estimated” citation count using a statistical prediction tool to compensate for possible citations that may exist outside of its own dataset. 57 | Microsoft | 240 million | 2.2 billion (estimated) |
| Dimensions 58 | Freely available a | Indexed database that also leverages additional open and proprietary data; citation network | Dimensions is the newest publication index and citation data source, launched in 2018. Dimensions makes use of open data such as Crossref, its parent company Digital Science’s other research‐related products, and publisher partnerships to index and link its records. 59 | Digital Science | 106 million | 1.2 billion |
Abbreviation: API, Application Programming Interface.
Note that with the exception of Crossref, a not‐for‐profit 501(c)6 organization, all the “freely available” networks listed in this table are owned by for‐profit companies. The citation networks may be free to the user, but they likely generate revenue for their parent company by collecting user data from searches and academics’ profiles.
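The linking step described above can be illustrated with a short sketch. The paper identifiers and bibliographies below are hypothetical; real networks resolve references against millions of metadata records, but the core operation is the same reverse index.

```python
from collections import defaultdict

# Hypothetical bibliographies: each paper maps to the references it cites.
bibliographies = {
    "paper_A": ["paper_B", "paper_C"],
    "paper_B": ["paper_C"],
    "paper_C": [],
}

# Build the reverse index: for each cited reference, record which papers cite it.
cited_by = defaultdict(list)
for citing, references in bibliographies.items():
    for cited in references:
        cited_by[cited].append(citing)

# A publication's citation count is the number of papers that cite it.
citation_counts = {paper: len(citers) for paper, citers in cited_by.items()}
print(citation_counts)  # paper_C is cited by both paper_A and paper_B
```

Differences between networks arise because each one starts from a different set of indexed bibliographies: a record missing from the index contributes no citations, which is why the same paper can show different counts in, say, Scopus and Google Scholar.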
Practical applications
To select one or more citation networks, users may consider (1) the network’s coverage of publication types and areas of research, (2) its number of citation linkages between publications, (3) the user‐friendliness of its interface, and (4) its functionality for automatically generating metrics. Researchers should be aware that most citation networks are composed primarily of journal articles. Therefore, it may be more difficult to assess the citation impact of gray literature such as white papers, reports, clinical trials, or other nontraditional publications. 7 Recent studies comparing various citation networks for accuracy and completeness may help inform the decision to choose a particular citation network. 8 , 9 , 10 , 11 , 12 Because publication metrics are derived from citation counts within citation networks, and citation counts vary depending on the network’s publication coverage, metrics derived for a given author from one network will not necessarily be concordant with metrics for the same author derived from another network. A researcher may find that one citation network contains records for most of their publications, whereas another network may only have records for some of their publications. Although it is generally advantageous for researchers to find a network containing records for all of their publications, 8 researchers selecting among networks must also consider that networks’ bibliometric data vary according to the quality of the included data, in addition to the quantity of publications reported. For example, a researcher is likely to find more of their publications, and therefore a higher citation count and h‐index, by using Google Scholar. However, as described in Table 1, Google Scholar may contain erroneous or duplicate records due to the way it collects publication data from the web.
Although at first glance the researcher may think that Google Scholar offers a higher number and therefore a “better” metric, the accuracy of that metric may be questionable if it is based upon faulty bibliographic data. Precise documentation by researchers of the citation network(s) they select to inform their publication metrics provides the opportunity for their evaluators to assess the accuracy of their analyses.
If a researcher determines that metrics from multiple citation networks are useful to show context for their work, two or more different citation networks can be documented clearly to avoid confusion in interpreting their metrics. For example, a researcher working on a tenure and promotion dossier may decide to primarily use Web of Science Core Collection to search for their journal articles, and use Web of Science Core Collection’s citation counts and calculated h‐index to document their career’s published articles. This researcher may also use Google Scholar to find their gray literature publications, and decide to include the citation counts of those publications to promote their research that resulted in a white paper, report, or other nonarticle publication type. In this example, the dossier should clearly indicate that the metrics presented for the journal articles came from Web of Science Core Collection, while the metrics presented for the gray literature publications came from Google Scholar.
PUBLICATION‐LEVEL METRICS
Overview
Publication‐level metrics, which apply to both articles and nonarticle publications, encompass any quantitative measure relating to an individual publication. Most commonly, this takes the form of citation counts: the number of citations to any given publication. In addition to citation counts, the number of article views and downloads are frequently listed on journal articles hosted on publisher websites. Other emerging metrics known as “alternative metrics” often seek to indicate social impact rather than solely scientific impact. 13 , 14 Although they may theoretically be applied to authors, institutions, journals, or other entities, in practice, the most prevalent implementation of alternative metrics is publication‐level. Examples include the number of times a publication has been shared on social media or blogs, the number of comments or “likes” it has received, or the number of times it has been mentioned in mass media. Due to their loosely defined and rapidly changing nature, alternative metrics are difficult to locate, although one company, Altmetric, 15 has monetized the centralization of various indicators into an “attention score.” Alternative metrics can add societal context and diversity to a research evaluation, 16 but researchers and evaluators should keep in mind that metrics reflecting public engagement may not correlate with scientific impact. 13 , 17
Practical applications
Citation counts for an individual publication can be generated by searching a title or Digital Object Identifier (DOI) in any of the citation networks described in Table 1. All six citation networks display the number of citations to a particular publication on the search results page for that publication. Individual publication citation counts may be used to highlight particularly impactful publications, but a more creative approach for a researcher’s dossier might be to group publications together and write about the citation impact of the group. For example, a researcher may aggregate citations by time period (e.g., before or after getting a prior promotion or being awarded a grant), by their different research fields or subfields (e.g., clinical and basic science), or by authorship type (e.g., first vs. senior [last] author). This facilitates discussion of publication impact in context, and may be useful to assert the value of a previous grant investment, explain impact variation within different fields, or provide evidence that research leadership affected impact. Another approach for a researcher or evaluator might be to selectively use comparative metrics by comparing a single publication or a group of publications to any of the following: other articles published in the same field, other articles published within the same journal, or other articles published by peer researchers.
One strategy for utilizing publication‐level metrics for a grouped set of publications is to use the mean number of citations per publication. This number may be higher or lower than the same author's h‐index (see below) depending on the distribution of citations within the body of work. Supplementing the mean number of citations for a large list of articles with the median and/or the standard deviation would help evaluators to understand the spread of the citation counts. Such measures of central tendency and variability could be used alongside, or instead of, direct citation counts for individual publications when presenting any of the previously discussed methods of grouping publications.
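As an illustration, these summary statistics can be computed with Python’s standard library; the citation counts below are Author A’s hypothetical values from Table 2.

```python
import statistics

# Author A's citation counts from Table 2 (hypothetical data).
citations = [270, 250, 210, 170, 120, 116, 101, 29, 17, 10, 9, 8, 5, 0, 0]

mean = statistics.mean(citations)      # sensitive to a few highly cited papers
median = statistics.median(citations)  # robust to the skewed top of the list
stdev = statistics.stdev(citations)    # spread of the citation counts

print(f"mean={mean:.1f}, median={median}, stdev={stdev:.1f}")
```

For this skewed distribution, the mean (about 88) is roughly three times the median (29), which is exactly the situation in which reporting only a mean would overstate the typical paper’s impact; reporting the median and standard deviation alongside it gives evaluators a truer picture of the spread.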
AUTHOR‐LEVEL METRICS
Overview
The h‐index 18 is the number (N) for an author such that at least N of the author’s publications have a minimum of N citations each. For example, imagine an author, with any number of total publications, who has at least 10 papers with 10 or more citations each, but does not have 11 papers with 11 or more citations each. This author’s h‐index would be 10 (Table 2). The h‐index is a widely used and easily understood metric that demonstrates the citation impact across an author’s career. However, users of the h‐index should recognize several major limitations of this metric. The h‐index is consistently skewed toward researchers’ older papers, which have had more time to accumulate citations. A high h‐index is challenging to achieve for early career researchers. The h‐index also weights all authors equally regardless of authorship position, meaning it does not provide information about the relative contribution of authors. Additionally, h‐indices may be lower for researchers who have published extensively, but have only a limited number of highly cited publications, compared with researchers whose papers’ citations are more evenly distributed. The h‐index is also vulnerable to extreme instances of self‐citation, or in‐group citation, which artificially inflate it. 19 , 20 Finally, and importantly, the h‐index should not be used to compare researchers across fields, as citation rates vary widely between disciplines. 21 As long as these drawbacks are understood, the h‐index can be a useful tool in an analysis comparing the total publication output of an author with the distribution of citations to their work. Numerous alternatives to the h‐index have been proposed that attempt to correct for such drawbacks, including variations on the h‐index itself, 22 , 23 , 24 , 25 the e‐index, 26 the g‐index, 27 and the m‐quotient, 18 , 28 but none have reached the popularity of the original h‐index.
TABLE 2.
Two different patterns of the distribution of citations across authors’ publications
| Author A’s 15 total publications, sorted in order of decreasing citation count | Author B’s 100 total publications, sorted in order of decreasing citation count |
|---|---|
| Publication #1: 270 citations | Publication #1: 5000 citations |
| Publication #2: 250 citations | Publication #2: 1000 citations |
| Publication #3: 210 citations | Publication #3: 800 citations |
| Publication #4: 170 citations | Publication #4: 685 citations |
| Publication #5: 120 citations | Publication #5: 469 citations |
| Publication #6: 116 citations | Publication #6: 371 citations |
| Publication #7: 101 citations | Publication #7: 196 citations |
| Publication #8: 29 citations | Publication #8: 82 citations |
| Publication #9: 17 citations | Publication #9: 57 citations |
| Publication #10: 10 citations | Publication #10: 11 citations |
| Publication #11: 9 citations | Publication #11: 9 citations |
| Publication #12: 8 citations | Publication #12: 8 citations |
| Publication #13: 5 citations | Publication #13: 8 citations |
| Publication #14: 0 citations | Publication #14: 7 citations |
| Publication #15: 0 citations | Publication #15: 6 citations |
| Publications #16–100: 5 or fewer citations each | |
| Summary: Author A has an h‐index of 10, because A has at least 10 papers with at least 10 citations each. | Summary: Author B also has an h‐index of 10, because B has at least 10 papers with at least 10 citations each, even though they have a more extensive publication history and more individual citation counts on their most highly cited papers. |
Practical applications
An h‐index can be calculated manually from a list of an author's publications’ total citation counts. Ideally, citation counts should be generated from a single citation network; citation counts collected from multiple networks should be presented separately and not joined into a single h‐index. If the list of citation counts for each paper is sorted from highest to lowest, it is simple to spot the crossover point at which the number of citations meets or exceeds the number of publications (Table 2). If an author does not have a list of their publications at hand, an h‐index can also be generated by searching Web of Science Core Collection, Scopus, or Google Scholar. In Web of Science and Scopus, an author’s publications can be searched by author name, affiliation, or unique identifier (such as ORCID); and an h‐index may be generated from the result set. In Google Scholar, authors will need to create a profile page and add their publications to their account to have their h‐index displayed. Researchers should be aware that Google Scholar may display duplicate records for their publications. This can cause inflated citation counts, if duplicate records are counted separately as citing papers. It can also cause the total number of citations to one publication to be split across the duplicate records for that same publication, decreasing the author’s h‐index. Researchers are encouraged to verify their publication records in Google Scholar. As with the publication‐level metrics discussed previously, it may be useful to consider multiple h‐indices for groups of publications that represent temporal, thematic, or authorship responsibility to either argue for or evaluate specific impact.
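The manual procedure just described (sort the counts from highest to lowest, then find the crossover point) can be written as a short function; the example data are Author A’s hypothetical counts from Table 2.

```python
def h_index(citation_counts):
    """Return the largest h such that at least h papers have >= h citations."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # this paper still clears the crossover point
        else:
            break     # all later papers have even fewer citations
    return h

# Author A's citation counts from Table 2 (hypothetical data).
author_a = [270, 250, 210, 170, 120, 116, 101, 29, 17, 10, 9, 8, 5, 0, 0]
print(h_index(author_a))  # → 10, matching Table 2
```

Because the function takes a plain list of counts, it can be applied separately to temporal, thematic, or authorship‐based groupings of publications, producing the multiple h‐indices suggested above, provided each list of counts comes from a single citation network.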
JOURNAL‐LEVEL METRICS
Overview
The most popular journal metric is the journal impact factor (JIF), 29 created by the scientometrician Eugene Garfield. The JIF for a given year is the total number of citations received that year by items the journal published during the prior 2‐year period, divided by the number of “citable items” the journal published in that same 2‐year period. The denominator is currently defined to include articles, review articles, and proceedings papers, 30 whereas the numerator includes citations to all publications in a journal. The JIF is a proprietary metric owned by Clarivate Analytics, which publishes Journal Citation Reports (JCR; subscription required), a database of annually updated JIFs, journal rankings, and other journal‐level metrics. The JIF was originally designed to indicate a relationship between a journal’s publications and citations, but there have been many critiques of its evolution into a single‐number proxy for broad scientific value. 31 , 32 , 33
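As a sketch with entirely hypothetical numbers, the standard 2‐year JIF calculation reduces to a single division: citations received in the JIF year to items the journal published in the prior 2 years, divided by the citable items from those 2 years.

```python
# Hypothetical journal: JIF for year Y, based on publications from years Y-1 and Y-2.
citations_to_prior_two_years = 1200  # citations in year Y to anything published in Y-1 and Y-2
citable_items_prior_two_years = 400  # articles, reviews, and proceedings papers only

jif = citations_to_prior_two_years / citable_items_prior_two_years
print(jif)  # → 3.0
```

Note the asymmetry: the numerator counts citations to all publication types, while the denominator counts only “citable items,” so moving content out of the citable categories raises the quotient without any change in citation behavior.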
Responsible application of JIFs requires an understanding of how the impact factor is calculated. For example, because citable items are defined to include research papers but to exclude nonresearch publication types (e.g., letters and editorials), editors may restructure their publication types so that research articles appear in sections Clarivate has classified as “editorial,” excluding those items from the denominator while citations to them still count in the numerator. Reducing the number of items in the denominator increases the JIF. Because Clarivate does not disclose information on journals’ citable item sections, it is impossible for users to know whether the metric is fair or accurate. 34 Editors may also pursue a higher impact factor via their journal’s submissions by asking submitters to include more citations to the journal in their manuscripts, or by soliciting more highly cited article types. Review articles consistently receive more citations than original research articles, 35 so editors may be incentivized to focus on secondary rather than primary publications. The pursuit of citations contributes to publication bias, wherein prestigious, aspirational journals reject incremental or replicative research in favor of novel results whose findings may not be reliable. 36 As with h‐indices, JIFs are also susceptible to fraudulent citations. 37 Most importantly, the impact factor of a journal is not capable of conveying the quality, scientific accuracy, or impact of any particular article published within that journal. The impact factor reflects citation patterns to the journal title as a whole, not the impact of any individual publication. Other journal metrics include Eigenfactor, 38 Scopus CiteScore, 39 SCImago Journal Rank indicator, 40 and various modifications of the JIF itself, 41 , 42 which may be useful for researchers wishing to explore or verify journal metrics for a particular context. However, the original JIF remains, by far, the most familiar journal‐level metric.
Practical applications
The journal impact factor for a particular journal title can be searched via JCR. Although some individual journals may list their impact factors on their websites, it is recommended that dates and JIFs be verified via JCR. Research evaluators may not be familiar with the relative prestige of journals outside their own discipline, so researchers may use this opportunity to make a compelling presentation of the JIFs of the journals where they have published. JCR contains journal ranking data, simplifying the process of comparative analysis. Researchers can compare the JIFs of the journals in which they have published to other journals in the same field. A journal without a sky‐high impact factor may still be in the top quartile of journals within one’s field. JCR also contains historical impact factor data, which may be useful for discussion of a researcher’s decision to publish in up‐and‐coming journals.
LIMITATIONS
Some of the key limitations of citation networks and their citation counts, the h‐index, and the JIF have been discussed in the present paper. However, other metric considerations, as well as the broader concept of quantitative publication metrics as a whole, should be studied further as evaluation policies and procedures are improved. This paper is intended as an introduction to the most frequently used publication metrics in the context of research careers or grant evaluations, and not as a thorough analysis of all available metrics. Additionally, this paper seeks to present practical information on how to access and apply popular metrics and tools in the context of research evaluation. Many of the products mentioned in this paper require expensive subscriptions that may be beyond the budget of some institutions. Understanding how the “free” alternatives, which collect user data in lieu of subscriptions, compare to the major subscription databases may be helpful for researchers trying to understand their options for accessing and presenting their publication metrics. Those who wish to gain a deeper understanding of their local subscriptions, or who seek further information about scientometrics, are encouraged to contact their institution’s librarian.
CONCLUSION
The use of quantitative strategies as a proxy for the scientific productivity, impact, and quality of research publications has both strengths and limitations. 43 , 44 No metric can serve as a fully representative proxy for research quality. The research itself, which may include nonpublication outputs, must be evaluated based on scientific integrity, societal need, advancement of the field, and other potentialities that matter to the evaluators (such as emphasis on support for new or under‐represented researchers, or previously unfunded research topics). There is increasing recognition of the importance of utilizing publication metrics responsibly in research evaluation. 45 The San Francisco Declaration on Research Assessment and the Leiden Manifesto provide recommendations and principles for improving research assessment and the appropriate use of metrics. 46 , 47 Quantitative publication metrics may serve as one component of a holistic assessment. However, even when integrated into a peer‐reviewed evaluative process that also includes qualitative assessment, metrics can either overly inflate or miss the perceived “impact” of research. Nevertheless, publication metrics’ ubiquity demands that funders, authors, and the publishing industry have a solid grasp of the strengths and weaknesses of using numbers as a proxy for scientific impact. A prudent utilization of publication metrics requires a thoughtful approach that includes a realistic understanding of what individual and aggregate metrics are capable of conveying. When used as part of a larger narrative, publication metrics can provide insight into an article’s reach, a journal’s evolution, or a researcher’s career. Strategic application of metrics can empower researchers to tell a clearer and more holistic story of their work, and responsible interpretation of metrics can empower evaluators to more efficiently, fairly, and consistently determine the future of scientific funding and advancement. 
Future improvements in research evaluation strategies can incentivize Open Science and the greater dissemination of research outputs. 48 , 49 Ultimately, the considered and transparent application and interpretation of publication metrics may help address some of the social inequities in science, provide more opportunity for under‐represented researchers and research areas, improve the wellbeing of researchers caught in the burnout “publish or perish” cycle, and speed the most promising basic research to clinical and policy implementation, and improved outcomes.
CONFLICT OF INTEREST
The authors declared no competing interests for this work.
Funding information
This research was supported in part by NIH National Center for Advancing Translational Science (NCATS) UCLA CTSI Grant Number UL1TR001881.
REFERENCES
- 1. Research and Innovation Rankings . 2020. Accessed July 29, 2020. https://www.scimagoir.com/rankings.php
- 2. Studies (CWTS) C for S and T . CWTS Leiden Ranking. Accessed July 29, 2020. http://www.leidenranking.com
- 3. Nature Index ‐ County/territory outputs ‐ 1 June 2019 ‐ 31 May 2020. https://www.natureindex.com/country‐outputs/generate/All/global/All/score
- 4. Carpenter CR, Cone DC, Sarli CC. Using publication metrics to academic productivity and research impact. Acad Emerg Med. 2014;21(10):1160‐1172. 10.1111/acem.12482 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5. Llewellyn N, Carter DR, Rollins L, Nehl EJ. Charting the publication and citation impact of the NIH Clinical and Translational Science Awards (CTSA) program from 2006 through 2016. Acad Med. 2018;93(8):1162‐1170. 10.1097/acm.0000000000002119 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6. Schneider M, Kane CM, Rainwater J, et al. Feasibility of common bibliometrics in evaluating translational science. J Clin Transl Sci. 2017;1(1):45‐52. 10.1017/cts.2016.8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Bonato S. Google Scholar and Scopus. J Med Libr Assoc. 2016;104(3):252‐254. 10.5195/jmla.2016.31 [DOI] [Google Scholar]
- 8. Martín‐Martín A, Orduna‐Malea E, Thelwall M, Delgado L‐Cózar E. Google Scholar, Web of Science, and Scopus: a systematic comparison of citations in 252 subject categories. J Informetr. 2018;12(4):1160‐1177. 10.1016/j.joi.2018.09.002 [DOI] [Google Scholar]
- 9. Anker MS, Hadzibegovic S, Lena A, Haverkamp W. The difference in referencing in Web of Science, Scopus, and Google Scholar. ESC Heart Failure. 2019;6(6):1291‐1312. 10.1002/ehf2.12583 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. Harzing A‐W. Two new kids on the block: how do Crossref and Dimensions compare with Google Scholar, Microsoft Academic, Scopus and the Web of Science? Scientometrics. 2019;120(1):341‐349. 10.1007/s11192-019-03114-y [DOI] [Google Scholar]
- 11. Thelwall M. Microsoft Academic: a multidisciplinary comparison of citation counts with Scopus and Mendeley for 29 journals. J Informetr. 2017;11(4):1201‐1212. 10.1016/j.joi.2017.10.006 [DOI] [Google Scholar]
- 12. van Eck NJ, Waltman L, Larivière V, Sugimoto C. Crossref as a new source of citation data: a comparison with Web of Science and Scopus. CWTS. Accessed July 20, 2020. https://www.cwts.nl/blog?article=n‐r2s234
- 13. Bornmann L. Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics. J Informetr. 2014;8(4):895‐903. 10.1016/j.joi.2014.09.005 [DOI] [Google Scholar]
- 14. Bornmann L, Haunschild R. Alternative article‐level metrics. EMBO Rep. 2018;19(12):e47260. 10.15252/embr.201847260 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. The donut and Altmetric Attention Score. Altmetric. Published July 9, 2015. Accessed July 20, 2020. https://www.altmetric.com/about-our-data/the-donut-and-score/
- 16. Piwowar H, Priem J. The power of altmetrics on a CV. Bull Am Soc Inform Sci Technol. 2013;39(4):10‐13. 10.1002/bult.2013.1720390405
- 17. Warren HR, Raison N, Dasgupta P. The rise of altmetrics. JAMA. 2017;317(2):131‐132. 10.1001/jama.2016.18346
- 18. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci USA. 2005;102(46):16569‐16572. 10.1073/pnas.0507655102
- 19. Van Noorden R, Chawla DS. Hundreds of extreme self‐citing scientists revealed in new database. Nature. 2019;572(7771):578‐579. https://www.nature.com/articles/d41586-019-02479-7
- 20. Bartneck C, Kokkelmans S. Detecting h‐index manipulation through self‐citation analysis. Scientometrics. 2010;87(1):85‐98. 10.1007/s11192-010-0306-5
- 21. Marx W, Bornmann L. On the causes of subject‐specific citation rates in Web of Science. Scientometrics. 2015;102(2):1823‐1827. 10.1007/s11192-014-1499-9
- 22. Batista PD, Campiteli MG, Kinouchi O. Is it possible to compare researchers with different scientific interests? Scientometrics. 2006;68(1):179‐189. 10.1007/s11192-006-0090-4
- 23. Sidiropoulos A, Katsaros D, Manolopoulos Y. Generalized h‐index for disclosing latent facts in citation networks. arXiv:cs/0607066. Published online July 13, 2006. Accessed July 21, 2020. http://arxiv.org/abs/cs/0607066
- 24. Bornmann L, Mutz R, Daniel H‐D. Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. J Am Soc Inform Sci Technol. 2008;59(5):830‐837. 10.1002/asi.20806
- 25. Post A, Li AY, Dai JB, et al. c‐index and subindices of the h‐index: new variants of the h‐index to account for variations in author contribution. Cureus. 10(5):e2629. 10.7759/cureus.2629
- 26. Zhang C‐T. The e‐Index, complementing the h‐index for excess citations. PLoS One. 2009;4(5):e5429. 10.1371/journal.pone.0005429
- 27. Egghe L. An improvement of the h‐index: the g‐index. https://www.researchgate.net/publication/242393078_An_improvement_of_the_H-index_The_G-index
- 28. Harzing A‐W. Reflections on the h‐index. Harzing.com. Published February 2016. Accessed July 21, 2020. https://harzing.com/publications/white-papers/reflections-on-the-h-index
- 29. Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295(1):90. 10.1001/jama.295.1.90
- 30. About Citable Items. Accessed July 21, 2020. http://help.incites.clarivate.com/incitesLiveJCR/9607-TRS/version/17
- 31. Larivière V, Sugimoto CR. The journal impact factor: a brief history, critique, and discussion of adverse effects. In: Glänzel W, Moed HF, Schmoch U, Thelwall M, eds. Springer Handbook of Science and Technology Indicators. Cham, Switzerland: Springer Handbooks; 2019:3‐24. 10.1007/978-3-030-02511-3
- 32. Neuberger J, Counsell C. Impact factors: uses and abuses. Eur J Gastro Hepatol. 2002;14(3):209‐211. https://journals.lww.com/eurojgh/fulltext/2002/03000/impact_factors__uses_and_abuses.1.aspx
- 33. Teixeira da Silva JA. The Journal Impact Factor (JIF): science publishing’s miscalculating metric. Acad Quest. 2017;30(4):433‐441. 10.1007/s12129-017-9671-3
- 34. Davis P. Citable items: the contested impact factor denominator. The Scholarly Kitchen. Published February 10, 2016. Accessed July 21, 2020. https://scholarlykitchen.sspnet.org/2016/02/10/citable-items-the-contested-impact-factor-denominator/
- 35. Lei L, Sun Y. Should highly cited items be excluded in impact factor calculation? The effect of review articles on journal impact factor. Scientometrics. 2020;122(3):1697‐1706. 10.1007/s11192-019-03338-y
- 36. Brembs B. Prestigious science journals struggle to reach even average reliability. Front Hum Neurosci. 2018;12:37. 10.3389/fnhum.2018.00037
- 37. Davis P. Visualizing citation cartels. The Scholarly Kitchen. Published September 26, 2016. Accessed July 21, 2020. https://scholarlykitchen.sspnet.org/2016/09/26/visualizing-citation-cartels/
- 38. Eigenfactor: About. Accessed July 21, 2020. http://www.eigenfactor.org/about.php
- 39. How Scopus Works: Metrics. Elsevier. Accessed July 21, 2020. https://www.elsevier.com/solutions/scopus/how-scopus-works/metrics#Journal
- 40. SJR: Scientific Journal Rankings. Accessed July 21, 2020. https://www.scimagojr.com/journalrank.php
- 41. Kianifar H, Sadeghi R, Zarifmahmoudi L. Comparison between impact factor, eigenfactor metrics, and SCimago journal rank indicator of pediatric neurology journals. Acta Inform Med. 2014;22(2):103‐106. 10.5455/aim.2014.22.103-106
- 42. Yuen J. Comparison of impact factor, eigenfactor metrics, and SCImago journal rank indicator and h‐index for neurosurgical and spinal surgical journals. World Neurosurg. 2018;119:e328‐e337. 10.1016/j.wneu.2018.07.144
- 43. Aksnes DW, Langfeldt L, Wouters P. Citations, citation indicators, and research quality: an overview of basic concepts and theories. SAGE Open. 2019;9(1):2158244019829575. 10.1177/2158244019829575
- 44. Chapman CA, Bicca‐Marques JC, Calvignac‐Spencer S, et al. Games academics play and their consequences: how authorship, h‐index and journal impact factors are shaping the future of academia. Proc Royal Soc B Biol Sci. 2019;286(1916):20192047. 10.1098/rspb.2019.2047
- 45. Gadd E. Influencing the changing world of research evaluation. Insights. 2019;32(1):6. 10.1629/uksg.491
- 46. San Francisco Declaration on Research Assessment. DORA. Accessed April 4, 2021. https://sfdora.org/read/
- 47. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: the Leiden Manifesto for research metrics. Nature News. 2015;520(7548):429. https://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351
- 48. Working Group on Rewards Under Open Science. Evaluation of Research Careers Fully Acknowledging Open Science Practices: Rewards, Incentives and/or Recognition for Researchers Practicing Open Science [O'Carroll C, Rentier B, Cabello Valdes C, Esposito F, Kaunismaa E, Maas K, Metcalfe J, McAllister D, Vandevelde K, eds]. Luxembourg: Publications Office of the European Union; 2017. Accessed April 4, 2021. https://op.europa.eu/s/pbVp
- 49. Morais R, Borrell‐Damián L. Open Access in European Universities: Results from the 2016/2017 EUA Institutional Survey. Brussels, Belgium: European University Association; 2018. Accessed April 4, 2021. https://eua.eu/resources/publications/324:open-access-in-european-universities-results-from-the-2016-2017-eua-institutional-survey.html
- 50. Web of Science Core Collection. Web of Science Group. Accessed July 16, 2020. https://clarivate.com/webofsciencegroup/solutions/web-of-science-core-collection/
- 51. About Scopus: Abstract and Citation Database. Elsevier. Accessed July 20, 2020. https://www.elsevier.com/solutions/scopus
- 52. About Google Scholar. Accessed July 20, 2020. https://scholar.google.com/intl/en/scholar/about.html
- 53. Gusenbauer M. Google Scholar to overshadow them all? Comparing the sizes of 12 academic search engines and bibliographic databases. Scientometrics. 2019;118(1):177‐214. 10.1007/s11192-018-2958-5
- 54. Heibi I, Peroni S, Shotton D. Software review: COCI, the OpenCitations Index of Crossref open DOI‐to‐DOI citations. Scientometrics. 2019;121(2):1213‐1228. 10.1007/s11192-019-03217-6
- 55. OpenCitations Indexes Search Interface. Accessed July 20, 2020. https://opencitations.net/index/search
- 56. Microsoft Academic. Microsoft Research. Accessed July 20, 2020. https://www.microsoft.com/en-us/research/project/academic/
- 57. Harzing A‐W. Microsoft Academic: is the phoenix getting wings? Scientometrics. 2017;110:371‐383. 10.1007/s11192-016-2185-x
- 58. Why did we build Dimensions? Dimensions. Accessed July 20, 2020. https://www.dimensions.ai/why-dimensions/
- 59. Hook DW, Porter SJ, Herzog C. Dimensions: building context for search and evaluation. Front Res Metr Anal. 2018;3:1–11. 10.3389/frma.2018.00023
