Universities across the United States are competing for a shrinking pool of research funding. This bleak prospect has turned their attention to new ways of competing, for example, drawing on their unique strengths and resources to decide how to organize themselves, whom to recruit, and in what to invest. Such positioning, however, requires a rich understanding of the institution, grounded in actual data. If an institution wishes to secure funding from the Department of Defense for traumatic brain injury research, for example, then it must understand its real strengths and gaps in that area to be a credible candidate.
SciVal, by Elsevier Research Intelligence, is designed to meet this need. It delivers research performance metrics, based on the Scopus database, for 4,600 research universities and other institutions in more than 200 countries. It incorporates and builds on some of Elsevier's previous products, such as Spotlight and Strata, to deliver a module-based reporting tool that allows institutions to understand the position and productivity of various groups of researchers, whether user-defined or as organized in Scopus. These modules include institutional overview, benchmarking, collaboration, and trends. Their intended audience ranges from the individual faculty member to the institutional leader, although the latter will likely derive the greatest benefit, since the tool is really designed to evaluate groups of researchers. A group can be an entire institution, a department, or a custom set of researchers within an institution. SciVal also allows for scenario modeling (like fantasy football) to see how adding or removing a researcher would affect the group's performance as a whole. A department could devise a scenario in which it recruits a star researcher from another institution to see how its productivity and influence would change.
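To make the idea concrete, here is a minimal sketch of what scenario modeling amounts to: recomputing a group-level metric over a modified roster. The researcher data and the simple citations-per-paper metric below are invented for illustration; this is not SciVal's interface or its actual calculation.

```python
# Conceptual sketch of scenario modeling: recompute a group-level metric
# after adding a researcher. The data and the citations-per-paper metric
# are invented; SciVal's real metrics and interface are far richer.

def citations_per_paper(group):
    """Average citations per paper across all researchers in the group."""
    counts = [c for researcher in group for c in researcher["citations"]]
    return sum(counts) / len(counts)

department = [
    {"name": "Researcher A", "citations": [4, 7, 2]},   # hypothetical counts
    {"name": "Researcher B", "citations": [10, 3]},
]
star_recruit = {"name": "Star Researcher", "citations": [120, 85, 60]}

print(f"Current department: {citations_per_paper(department):.1f}")
print(f"With star recruit:  {citations_per_paper(department + [star_recruit]):.1f}")
```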
In theory, the tool also allows users to assess potential and existing collaborations, drawing on publication and citation data as well as newer measurements such as downloads (i.e., "usage" data). Vice presidents for research will like the academic-industry metrics, which show coauthorship with industry partners, as well as the ability to see which companies are downloading papers. The tool also permits visualization of collaborations on Google Maps. These administrators may also like the potential to identify possible collaborators for grant applications or research projects. Institutions can likewise use citation data to assess research trends, identifying "rising stars" as well as top performers.
The major advantage that SciVal has over other metrics and reporting tools is the sheer amount of available data. SciVal takes advantage of supercomputer technology to process more than 32 million publication records from almost 22,000 journals and 5,000 publishers across the globe, most of them drawn from the Scopus database. SciVal uses these data to produce a robust assortment of metrics that can be used alone or in combination to interrogate the data through several lenses. These include standard metrics, such as the h-index and citation counts, as well as unique ones such as the field-weighted citation impact (more about that in a moment). All of these metrics can be combined in various ways, and institutions can generate visually useful and appealing reports in table, graph, or chart form. The metrics are classified into six groups (an individual metric may fall into more than one): productivity metrics, which measure the volume of publications; citation impact metrics, which demonstrate influence; collaboration metrics, which demonstrate partnerships; disciplinarity metrics, which show how an institution's publications are distributed across fields; "power" metrics, which scale with institutional size; and, finally, proprietary measurements called "Snowball Metrics" [1]. These last are perhaps the most important and useful from an institutional perspective and constitute the "special sauce" that gives SciVal an advantage over competing products.
Snowball Metrics have been endorsed by research-intensive universities in the United Kingdom and elsewhere in Europe but are less familiar in the United States. The intention is that they will become global standards. One of the most interesting of these metrics is the field-weighted citation impact. This measurement shows how the number of citations received by a group's publications compares with the world average for similar publications, normalized for research field as documented in the Scopus database. The baseline value is 1.0, so anything over 1.0 is desirable. If a group of researchers has a field-weighted citation impact of 3.00, the group has been cited 200% more than the world average; by contrast, if the score were 0.75, the group would be cited 25% less than the world average. Of particular interest to our faculty at Oregon Health & Science University (OHSU) is how "similar publications" are defined: those in the Scopus database that were published in the same year, are of the same publication type (e.g., peer-reviewed paper), and are in the same discipline. For the field-weighted citation impact, field assignment happens at the level of the individual publication (rather than the journal) to accommodate multidisciplinary work, that is, research relevant to more than one field. The assignment algorithm is also designed not to double-count influence across fields, which would skew the weighting.
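In rough terms (a schematic rendering of the definition above, not Elsevier's exact computation), the metric is a ratio:

$$
\mathrm{FWCI} = \frac{\text{citations actually received}}{\text{average citations of similar publications (same year, type, and discipline)}}
$$

So, to take invented numbers, if comparable publications worldwide average 12 citations and a group's papers average 36, the field-weighted citation impact is 36/12 = 3.00 (200% above the world average); an average of 9 citations would yield 9/12 = 0.75 (25% below it).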
The field-weighted citation impact is useful for benchmarking, since it smooths out, for the most part, differences in size, discipline, time, and publication type. It can therefore demonstrate the success, or lack thereof, of a group of researchers in comparison to other groups at the same institution or similar groups elsewhere. Since the metric already accounts for time (it adjusts for the fact that recent publications have had less time to accrue citations), it can provide institutions with a more accurate measurement of influence. In this way, it improves on the h-index, whose values vary by discipline, making it difficult to compare, for example, a basic science researcher with a more clinically focused one. It also normalizes for the distinct citation practices of the social sciences and humanities, extending its utility beyond the hard and biomedical sciences. There are some caveats with this metric: because it is an average, it may not be accurate for small groups (that is, small data sets), which are more prone to skewing by outliers, as the sketch below illustrates. In addition, since it is a new metric that requires some explanation, it may take some selling internally (as with any "alternative" metric). It is useful, then, to pair it with more traditional metrics, such as citation counts, that show magnitude.
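As a toy illustration of the small-group caveat (all numbers invented, with a plain mean standing in for the field-weighted average), a single heavily cited outlier can dominate a four-paper group while barely moving a hundred-paper one:

```python
# Toy illustration: one heavily cited outlier dominates a small group's
# average but is diluted in a large one. All numbers are invented, and a
# plain mean stands in for the field-weighted average.
from statistics import mean

small_group = [0.9, 1.1, 1.0, 14.0]        # four papers, one outlier
large_group = small_group + [1.0] * 96     # the same outlier among 100 papers

print(f"Small group: {mean(small_group):.2f}")   # 4.25: the outlier dominates
print(f"Large group: {mean(large_group):.2f}")   # 1.13: the outlier is diluted
```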
Another caution about SciVal is that although most reports are returned quickly, more complex reporting may take several days or, in a few cases, weeks. There is also a learning curve with the metrics; it is important to understand them as well as possible and to use them to ask clear questions. Data-loving faculty may be tempted to play with these metrics to "improve" upon them, so a strong working knowledge of the metrics also helps one channel this enthusiasm effectively. Elsevier publishes a helpful companion guide that explains in detail how the metrics are derived and how they are best used [2].
We have been using SciVal at OHSU to bolster several strategic initiatives, among them evaluating faculty productivity to support fundraising campaigns and providing critical background information for institutional organizational strategies. This kind of analysis is essential for institutions as we slouch toward scarcity in scientific funding: SciVal, and tools like it, should allow us to deploy institutional resources as judiciously as possible by helping us answer questions about recruitment, organization, and investment. Ultimately, the goal is to create a stable environment that fosters better science. Tools such as SciVal will be instrumental in reaching it.
References
- 1. Elsevier. Snowball metrics [Internet]. Elsevier; 2012 [cited 19 Mar 2015]. <http://www.snowballmetrics.com>.
- 2. Colledge L, Verlinde R. SciVal metrics guidebook [Internet]. Version 1.01. Elsevier; 2014 [cited 13 Apr 2015]. <http://www.elsevier.com/__data/assets/pdf_file/0006/184749/scival-metrics-guidebook-v1_01-february2014.pdf>.
