Proceedings of the National Academy of Sciences of the United States of America
Editorial. 2010 Nov 22;107(50):21233. doi: 10.1073/pnas.1016516107

Impacting our young

Eve Marder, Helmut Kettenmann, Sten Grillner
PMCID: PMC3003078  PMID: 21098264

Much has been written about impact factors, how they are calculated, and what they do and do not measure. Briefly, the Institute for Scientific Information (ISI) impact factor of a journal is the average number of citations received in a given year by the papers that journal published in the prior 2 years. The 2-year impact factor was designed as an aid to librarians deciding which journals to purchase, giving them a rough sense of a journal's influence in its field. In this context, the impact factor makes sense. Nonetheless, the use of the impact factor to judge individual scientists, departments, and institutions is a remarkable case study in the law of unintended consequences. Like so many well-intentioned interventions in social policy, ecology, and medicine, reliance on the impact factor in the evaluation of candidates and programs has caused myriad problems, although it has perhaps solved some.
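
To make the arithmetic concrete, here is a minimal worked example (the figures are hypothetical, chosen purely for illustration): suppose a journal published 200 citable items in 2008 and 2009, and those items were cited 500 times during 2010. Its 2010 impact factor would then be

\[ \mathrm{IF}_{2010} = \frac{C_{2010}}{N_{2008\text{--}2009}} = \frac{500}{200} = 2.5, \]

where C_{2010} counts the 2010 citations to the 2008–2009 items and N_{2008–2009} counts the citable items published in those 2 years.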

Today, the impact factor is often used as a proxy for the prestige of a journal. This proxy is convenient for those wishing to assess young scientists across fields, because it requires neither knowledge of the reputation of individual journals nor specific expertise in every field. In some countries, it was hoped that the impact factor would provide a more objective metric of scientific excellence than reliance on scientific pedigree. For these reasons, the impact factor has become a formal part of the evaluation process for job candidates and promotions in many countries, with both salutary and pernicious consequences.

There are many reasons why reliance on the impact factor to evaluate individual scientists makes little sense. The impact factor is a journal-level average: the least important paper published in a journal shares it with the most important papers in the same journal, so the number of citations a given article actually receives often bears little relation to the impact factor of the journal in which it appears. However, our major concern is not whether using the impact factor to evaluate individuals makes sense but its negative consequences for our young scientists as they decide how to do science, publish their work, and apply for positions. It is our contention that overreliance on the impact factor is a corrupting force on our young scientists (and on more senior scientists as well) and that we would be well served to divest ourselves of its influence.

The scientific enterprise is about the creation and dissemination of new knowledge. In today's world, where findings can simply be posted on the web, scientific journals add value by providing peer review. At some journals, peer review asks primarily whether the work was done correctly, whether appropriate controls and statistics are present, whether the figures and text are clear, and whether the arguments make logical sense. At other journals, peer review emphasizes the potential significance and novelty of the work.

Not surprisingly, the journals with the highest impact factors (leaving aside the review journals) are those that place the highest premium on perceived novelty and significance. This can distort decisions about how to undertake a scientific project. Many, if not most, important scientific findings come from serendipitous discovery; new knowledge is new precisely because it was unanticipated. Consequently, it is hard to predict which projects will generate useful and informative data that add to our body of knowledge and which will yield that home-run finding. Today, too many of our postdocs believe that getting a paper into a prestigious journal matters more to their careers than doing the science itself.

We have seen postdocs waste years submitting a paper to a high impact factor journal, having it rejected, and then revising it for journals further down the prestige chain, costing them month after month that would be better spent doing new science. Sadly, this process erodes their sense of accomplishment. Instead of being satisfied by reviews saying that the work was well done and clearly presented, they are disappointed by the impact factor of the journal in which the paper is eventually published. Too many postdocs tell us that their favorite journals, where they find the papers they like to read and where they would choose to publish absent the pressure to publish in high impact factor journals, are off limits to them because of the evaluation systems of their home governmental review panels. The hypocrisy inherent in choosing a journal for its impact factor rather than for the science it publishes undermines the ideals by which science should be done. This contributes to disillusionment, driving some of our talented and creative young people out of science.

There are countries that give financial and other bounties to young scientists for publications in high impact factor journals. We understand the wish to encourage young people to aspire to international recognition for their work. However, placing too much emphasis on publication in high impact factor journals is a recipe for disaster. At the extreme, it creates a temptation to falsify data. Even among the most scrupulous, it sends the message that the honest pursuit of truth in science is not sufficient for success.

Is there a solution? Minimally, we must forgo using impact factors as a proxy for excellence and replace them with in-depth analyses of the science produced by candidates for positions and grants. This requires more time and effort from senior scientists and cooperation from the international community, because not every country has the necessary expertise in all areas of science. Already, a number of countries around the world solicit opinions internationally. We must all be willing to participate in international reviews, because this is the only way to free our young scientists from the tyranny of the impact factor. As a society of scientists, we must be vigilant to ensure, by all of our actions, that our job is the pursuit of new knowledge and its dissemination, not the pursuit of glory before truth.

Footnotes

The authors declare no conflict of interest.

