Editorial. eLife. 2013 May 16;2:e00855. doi: 10.7554/eLife.00855

Reforming research assessment

Randy Schekman, Mark Patterson
PMCID: PMC3656620  PMID: 23700504

Abstract

It is time for the research community to rethink how the outputs of scientific research are evaluated and, as the San Francisco Declaration on Research Assessment makes clear, this should involve replacing the journal impact factor with a broad range of more meaningful approaches.


One of the aims of eLife is to publish research articles in all areas of the life sciences and biomedicine, ranging from insights into basic biology through to translational and more applied work, and to date we have published articles on topics ranging from genome editing and plant-predator interactions to global life expectancy and the neurobiology of walking.

The impacts of such a broad range of research topics will be similarly diverse. Some articles will stimulate further research by other scientists in the same field, some will lead to clinical or commercial applications, some will be covered in the media and be of interest to the public, some will achieve all of the above and some, inevitably, will have limited impact. The recently released San Francisco Declaration on Research Assessment (http://www.ascb.org/SFdeclaration.html) aims to ‘improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties’.

Currently, however, there is a widespread perception that research assessment is dominated by a single metric, the journal impact factor, which measures the average rate at which recent articles in a given journal are cited. There are many reasons why the impact factor of a journal cannot and should not be used as a proxy for the importance of individual articles in the journal (Seglen, 1992; Adler et al., 2008; Campbell, 2008; Curry, 2012). Yet even though most of these reasons are well known, the most frequently asked question for any journal is ‘what’s your impact factor?’
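As a concrete illustration (a simplified sketch of the calculation; the precise definition of ‘citable items’ is set by the indexing database that computes the metric), the standard two-year impact factor of a journal for a given year Y is:

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

Because citation distributions within a journal are highly skewed, this average says little about how often any individual article is cited (Seglen, 1992).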

The consequences of such a narrow view of research assessment have been discussed many times (Vale, 2012; Vosshall, 2012). There is intense competition for publication in high-impact-factor journals, frequently resulting in multiple rounds of review and revision; and if the manuscript is ultimately rejected, the whole depressing cycle is often repeated at a new journal. The resultant delays in the communication of new findings hinder scientific progress and waste limited resources. The focus on publication in a high-impact-factor journal as the prize also distracts attention from other important responsibilities of researchers—such as teaching, mentoring and a host of other activities (including the review of manuscripts for journals!). For the sake of science, the emphasis needs to change.

Anecdotally, we as scientists and editors hear time and again from junior and senior colleagues alike that publication in high-impact-factor journals is essential for career advancement. However, deans and heads of departments send out a different message, saying that letters of recommendation hold more sway than impact factors in promotion and tenure decisions (Abbott et al., 2010; Zare, 2012). Moreover, some research funders (including the Wellcome Trust and Research Councils UK) now stress that assessments of funding applications should focus on the merits of the work proposed rather than the journals (and therefore their impact factors) in which an applicant has published. Similarly, researchers on the sub-panels assessing the quality of research in higher education institutions in the UK as part of the Research Excellence Framework (REF) have been told: ‘No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs’. However, there is evidence that some universities are making use of journal impact factors when selecting the papers that will be included in their submission to the REF (Rohn, 2012). And it remains sadly true that at many institutions, in countries where internal resources may be inadequate to give proper consideration to expert letters and to review a candidate’s published work thoroughly, the impact factor remains a convenient crutch on which to base an imperfect evaluation of merit.

There are, however, early signs of an encouraging shift in focus from the journal in which a finding is published to the work itself, with this shift being supported by the availability of metrics at the level of individual articles for many journals. PLOS have been pioneers in this area and, since 2009, have been providing a rich array of metrics on every article published. Using these approaches, assessment can be further extended to a broader array of research outputs, via services that support the deposition of outputs other than full articles, such as Dryad (for datasets), Figshare (for the results of individual experiments, figures and datasets) and Slideshare (for presentations). The emergence of new services, such as Altmetric, Impact Story and Plum Analytics, which aggregate media coverage, citation counts, social web metrics and other indicators for individual research outputs, will also provide authors with a more complete picture of the impact of their research.

The changes that are slowly taking place, and which are being facilitated by new technology and tools, lend support to the view that it is time for the research community to reclaim ownership of research evaluation (Vale, 2012). The Declaration on Research Assessment identifies some steps that can now be taken. Recommendations are proposed for all of the key constituencies involved–researchers, publishers, institutions and funders–because it will take commitment and persistence across these groups if we are to reform current practices.

At eLife, we strongly support the improvement of research assessment, and the shift from journal-based metrics to an array of article (and other output) metrics and indicators. If and when eLife is awarded an impact factor, we will not promote this metric. Instead, we will continue to support a vision for research assessment that relies on a range of transparent evidence–qualitative as well as quantitative–about the specific impacts and outcomes of a collection of relevant research outputs. In this way, the concept of research impact can be expanded and enriched rather than reduced to a single number or a journal name.

With less (or ideally no) involvement of impact factors in research assessment, we believe that research communication will undergo substantial improvement. Journals can focus on scientific integrity and quality, and promote the values and services that they offer, supported by appropriate metrics as evidence of their performance. Authors can choose their preferred venue based on service, cost and reputation in their field. All constituencies will then benefit from a deeper understanding of the significance and influence of our collective investment in research, and ultimately a more effective system of research communication.

Footnotes

Competing interests: RS and MP attended the initial meeting at the ASCB annual meeting in San Francisco that led to the creation of the Declaration on Research Assessment and participated in its drafting.

References

  1. Abbott A, Cyranoski D, Jones N, Maher B, Schiermeier Q, Van Noorden R. Do metrics matter? Nature. 2010;465:860–862. doi: 10.1038/465860a.
  2. Adler R, Ewing J, Taylor P. Citation Statistics. 2008. http://www.mathunion.org/publications/report/citationstatistics0/
  3. Campbell P. Escape from the impact factor. Ethics Sci Environ Polit. 2008;8:5–7. doi: 10.3354/esep00078.
  4. Curry S. Sick of impact factors. 2012. http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/
  5. Rohn J. Business as usual in judging the worth of a researcher? 2012. http://www.guardian.co.uk/science/occams-corner/2012/nov/30/1
  6. Seglen PO. The skewness of science. J Am Soc Inf Sci. 1992;43:628–638. doi: 10.1002/(SICI)1097-4571(199210)43:9<628::AID-ASI5>3.0.CO;2-0.
  7. Vale RD. Evaluating how we evaluate. Mol Biol Cell. 2012;23:3285–3289. doi: 10.1091/mbc.E12-06-0490.
  8. Vosshall LB. The glacial pace of scientific publishing: why it hurts everyone and what we can do to fix it. FASEB J. 2012;26:3589–3593. doi: 10.1096/fj.12-0901ufm.
  9. Zare RN. Assessing academic researchers. Angew Chem Int Ed. 2012;51:7338–7339. doi: 10.1002/anie.201201011.
