Summary
Science and finance: same symptoms, same dangers?
We have witnessed the worst global financial crisis in a century and the repercussions are still being felt in developed and developing countries alike. A dangerous cocktail of short-term gains prevailing over long-term interests, herding, increasing pressure to deliver results, the absence of effective oversight, and blind trust that the system would regulate itself eventually exploded when Lehman Brothers imploded in September 2008. Looking at the causes of the crisis and how it unfolded, I cannot help but draw parallels with academic research. Indeed, although the scientific system will not necessarily crash, it is still in danger of seriously damaging itself if we do not fix it.
…although the scientific system will not necessarily crash, it is still in danger of seriously damaging itself if we do not fix it
There are many problems that afflict contemporary science, but the core issue is the increasing competition to publish in a small number of influential journals. Scientists are employed and paid to produce and disseminate knowledge in a competitive process in which the first to publish is the ‘winner'. Indeed, despite the scepticism of some economists, who question whether it is the most cost-efficient strategy for producing knowledge (Dasgupta & David, 1994; Humphrey et al, 1995), competition between scientists has been a major driving force behind the exponential increase of knowledge and its practical applications. But why should scientists have to compete to publish their results in a few select journals? There is no logical reason to do so: once experiments yield and confirm an interesting insight or observation that contributes to our knowledge about the world, the results should be made public, and it should not matter where the knowledge is published, as long as it can be accessed, used and built on by the scientific community. Yet most scientists see a publication in Nature, Cell or Science as a major career achievement, rather than a contribution to knowledge.
Young et al (2008) have compared the modern publishing frenzy to a global auction in which the highest bidder wins. Under the hammer is publication in a widely respected journal: a prize that increases the winner's chances of securing grant funding and recognition among his or her peers. Nature and Science offer roughly 1,500 slots in which to publish scientific papers each year. Any one of the millions of scientists in the world can make a bid; that is, propose a manuscript that might please the editors sufficiently to be sent for peer review. Here comes the first problem: the editors of the most popular and influential journals do not work solely in the interest of science; they also work in the interest of the shareholders and owners of these journals. At the end of the day, their job is to maximize the income generated by the journal by attracting readership and selling subscriptions, reprints or downloads. The higher the journal's impact factor (IF), the more the journal appeals to authors and readers, as it suggests that the science published therein is of high quality; the IF itself is calculated by Thomson ISI (Philadelphia, PA, USA), another commercial enterprise with its own interests. This is a crucial flaw in the publication system: the scientific community has relinquished immense power to a few publishers whose agenda and interests differ from those of most scientists. The analogy to the global financial crisis is obvious: the global economy and even governments have become increasingly dependent on a few enormously powerful banks, whose interests are not the same as those of national governments or the economy at large.
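For reference, the two-year IF, as commonly defined, is nothing more than an average citation ratio; for a journal in year $y$:

$$\mathrm{IF}_y = \frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}$$

It is an average over a journal's entire recent output, and therefore says nothing about the quality of any individual paper published in it.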
…why should scientists have to compete to publish their results in a few select journals?
Given that very few papers are published in high-IF journals, and that vast numbers of scientists are competing, how does a scientist win the auction? Here is the second problem: it is a widespread illusion that merit alone determines what gets published in Nature, Cell or Science. Merit, measured in terms of a paper's relevance, technical quality and future impact on research, is of course a prerequisite, but it is not what makes the difference. Setting aside the quirks and unpredictable effects of the peer-review process, the papers that are eventually accepted usually combine good research with spectacular, unexpected results in a trendy field. Again, the analogy to the financial world is more than obvious: risky speculations promising short-term yields gained prominence over solid, long-term investments.
Like high-yield investments, spectacular publications come with a high risk. Most scientific papers report results based on a statistical analysis of data; inevitably, there is a chance of reporting a false-positive conclusion, and the accepted threshold for this risk is usually set at 5%. Anyone who writes or reviews a paper that relies on statistical analysis should keep in mind that, by definition, one in 20 tests of a true null hypothesis will on average come out positive purely by chance. Given the huge number of papers submitted, there is a considerable chance that a significant portion of Nature and Science papers report false-positive results (Ioannidis, 2005a). The fact that positive results are more valued than negative ones further reinforces this trend. Most papers based on preclinical and clinical studies, for instance, report only positive results, even when the treatment is more likely to have negative effects.
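To see why a 5% threshold does not cap the proportion of false positives in the literature at 5%, consider a back-of-the-envelope calculation in the spirit of Ioannidis (2005a). The following minimal sketch assumes illustrative values for the fraction of tested hypotheses that are true and for statistical power; neither figure comes from the articles cited here.

```python
# Sketch of why a 5% significance threshold does not limit the share of
# false positives among PUBLISHED positive results to 5%. The prior and
# power values below are illustrative assumptions only.

alpha = 0.05   # significance threshold: false-positive rate per true-null test
power = 0.80   # probability that a test detects a genuinely true effect

for prior_true in (0.5, 0.1, 0.01):  # fraction of tested hypotheses that are true
    true_positives = prior_true * power          # real effects correctly detected
    false_positives = (1 - prior_true) * alpha   # null effects 'detected' by chance
    fdr = false_positives / (true_positives + false_positives)
    print(f"if {prior_true:.0%} of hypotheses are true, "
          f"{fdr:.0%} of positive results are false")
```

The rarer true effects are among the hypotheses being tested, the larger the fraction of ‘significant' results that are false; a race towards spectacular, unexpected findings pushes research towards exactly this low-prior regime.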
This also leads to the ‘winner's curse': the common exaggeration of ‘groundbreaking' insights. If the value of an auctioned item is difficult to determine, as is the case for scientific findings, the winner of the auction tends to overpay for it; in the case of publication, overpaying means overstating the importance of the finding (Young et al, 2008). Not surprisingly, retractions, contradictions and secondary papers that correct the initially spectacular findings have become more common (Ioannidis, 2005b).
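The selection effect behind the winner's curse can be illustrated with a toy simulation; the true effect size, noise level and number of competing studies below are arbitrary choices for illustration, not empirical values.

```python
import random

# Toy model of the 'winner's curse': many groups estimate the same true
# effect with noise, but only the most spectacular estimate wins the
# publication 'auction'. All parameters are illustrative assumptions.

random.seed(1)
TRUE_EFFECT = 1.0    # the real underlying effect size
NOISE_SD = 0.5       # sampling noise in each study's estimate
N_STUDIES = 20       # competing studies per auction
N_AUCTIONS = 10_000  # repeat to average out the randomness

winning_estimates = []
for _ in range(N_AUCTIONS):
    estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]
    winning_estimates.append(max(estimates))  # the most striking result wins

mean_winner = sum(winning_estimates) / N_AUCTIONS
print(f"true effect: {TRUE_EFFECT:.2f}")
print(f"average winning estimate: {mean_winner:.2f}")  # systematically too high
```

Even though every simulated study is honest and unbiased on its own, selecting the most extreme of many noisy estimates guarantees that the published ‘winner' overstates the true effect.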
…useless papers are the toxic assets of the scientific system
The competition for the few available publication slots in high-IF journals has encouraged another problem: data manipulation and falsification. Journal editors spend an increasing amount of time and use sophisticated software to scrutinize submissions for inappropriate image manipulation or falsification (Rossner et al, 2007). This should not come as a surprise; scarce goods command a high price, and the higher the price, the more people are inclined to bend or break the rules to obtain them. Nonetheless, although fraud, which cannot be measured quantitatively, is a growing concern among scientists, it probably has only a marginal effect on the publication of false results.
What are the consequences? For the journals, the effect of publishing exaggerated or irreproducible results is almost negligible, as long as they are able to keep the number of retractions low. There is even some gain: controversy and debate increase interest in the journal, and retracted papers continue to be cited and thus add to the journal's IF (Unger & Couzin, 2006). Journals can always shift the blame for errors to authors for not being thorough enough, and senior authors, in turn, can blame the experimental set-up or some postdoc. The real loser, however, is the scientific community; the literature is becoming swamped with useless papers in which the data are flawed and the conclusions are wrong. In science, the truth is intrinsically difficult to establish; nowadays, the task is further complicated by thousands of papers that should never have been published in the first place. Returning to the financial analogy, these useless papers are the toxic assets of the scientific system. Not only do they represent a huge waste of money in terms of the experiments needed to re-examine and correct their findings and conclusions, they also devalue truly good papers that make no exaggerated claims.
The current rat race to publish in the top journals affects not only the scientific community as a whole, but also individual researchers and their careers. How is it possible to determine the quality of a scientist's contribution to knowledge by analysing their performance in a system that is largely artificial and does not make much sense from a scientific point of view? The businesses of the financial world, notwithstanding their other flaws, at least evaluate and reward their employees for what they produce: the more money an employee earns for the bank, the higher their yearly bonus will be. In science, we seem to accept that being a good scientist means a shiny publication record. But how good is the correlation between that shiny record and the real contribution a scientist makes to their community and to the pursuit of knowledge?
In science, we seem to accept that being a good scientist means a shiny publication record
The excessive use of the IF in evaluation processes has been discussed elsewhere (Smith, 1998; Jacobs, 2009). Despite being an inappropriate measure of a scientist's contribution to knowledge, IFs remain the most important criterion for assessment by many grant and promotion panels. As a result, and because they usually depend on grant money that is allocated for only a few years, scientists are under continuous pressure to publish in top-ranking journals. The problem is exacerbated by the low acceptance rates at top journals, which create an illusion of exclusivity based on merit and fuel even more frenzied competition to publish in these journals. This is how the system feeds itself: success and merit are mistaken for one another. It is a disturbing development that Nature and Science, the most illustrious scientific journals, have recently introduced ‘people' sections highlighting the lives of not-so-famous scientists. One could take it as a sign that fashion and fame are gaining ground over serious scientific endeavour.
Unfortunately, many scientists are not in a position to avoid the external pressures that make publishing seem like an end goal
Chief among the IF addicts, funding agencies are to blame for putting too much pressure on scientists to publish in high-IF journals. This pressure brings with it the confusion of long-term and short-term goals: scientists pursue results that will provide short-term income rather than long-term insight and understanding. Instead of building the house of knowledge with rock-solid bricks, scientists tend to jump to new, attractive fields that look more rewarding in terms of publication. The short intervals in today's ‘secure funding, obtain results, produce publications' cycle set de facto deadlines for obtaining results. In science, as in finance, deadlines create stress, lead to sloppiness and encourage questionable behaviour.
The obsession with short-term gains and ever-higher returns on investments has been a major component of the current financial crisis. In the scientific arena, the same obsession leads to many scientists confusing ends and means. I am always stunned when I hear a scientist say that his or her goal is to publish. This is simply wrong. Publishing is not the goal of a scientist; the goal is to make significant and solid contributions to the current body of knowledge, to disseminate and exploit findings, and to train students. Period. Publishing is a means to achieving some of these things, not an end in itself. Unfortunately, many scientists are not in a position to avoid the external pressures that make publishing seem like an end goal. As a young colleague from my department put it: “The department wants me to engage in long-term projects. But how can I reach the long-term if I do not survive the short-term?”
Psychologists have noted that the confusion of means and ends is an ingredient of unethical behaviour (Schweitzer et al, 2004). Moreover, herding, that is, obtaining rewards by copying the successful behaviour and strategies of colleagues (De Bondt & Forbes, 1999), is a well-known effect of competition for short-term goals (Cote & Goodstein, 1999) and one that has been a major cause of the current financial crisis (Bikhchandani & Sharma, 2000; Hott, 2008). In science, herding means that scientists tend to imitate one another and focus their work on topics that are more easily sold to both funding agencies and top journals. A direct consequence of herding and the race for high-IF publications is that some areas of research become neglected, which reduces the diversity of research (Dasgupta & David, 1994). Moreover, if researchers abandon neglected areas simply because they cannot publish their results in influential journals, expertise in these fields is lost, creating a problem in the long term.
We like to think of scientists as disinterested seekers of truth who gather and analyse facts without prejudice or preconceptions, and who are immune to common human failings such as pride or personal ambition. This is, of course, a rather idealized view; scientists are as human as anyone else. However, the modern scientific system has forced scientists to become like securities traders: they add value to pre-existing knowledge, that is, to assets, by using their intellectual skills and publicly available information. Like traders, they behave in an eminently selfish way, although this does not prevent them from herding when it seems to suit their needs. They are in a merciless race with little room for altruistic behaviour; collaborations between scientists or research groups are dictated by the rules of the grant agencies and the demands of the publishing business. Scientists might have a vague perception that the system is rotten, but the importance of quickly producing publishable results obstructs their vision.
Another apt comparison between science and global finance is the lack of effective oversight. In the financial sector, the dominant ideology during the past decade was that markets are better left alone to regulate themselves and that oversight is both unnecessary and counter-productive. The idea prevailed that ‘natural selection' would increase the fitness of the system as a whole and of the competing elements within it. The sight of bankers asking for government support and lining up for bailouts from taxpayers in the autumn of 2008 demonstrated the failure of this school of thought. Similarly, science works without global oversight. Although governments can direct scientific research by prioritizing funding for certain topics, and although they can impose some regulations, the way in which knowledge is produced and disseminated remains self-organized, a legacy of the past. There is no international board setting the rules, just as there were no rules for financial markets. Will the ‘let the market regulate itself' ideology prove more robust for science than it was for global finance?
Will the ‘let the market regulate itself' ideology prove more robust for science than it was for global finance?
In the aftermath of the global financial crisis, journalists and politicians are discovering that many experts issued warnings about the system's shortcomings long before the situation deteriorated. Why were their Cassandrian warnings of inevitable collapse not heard? The answer is that the stars of the financial world, who had grown enormously rich within the system, were the ones who called the tune and exerted an enormous influence on governments. Similarly, researchers who regularly publish in top-ranking journals and are thus rewarded by honours, promotions and grants, are the stars of science. They, too, are unlikely to criticize a system that largely benefits them. Some renowned scientists have pointed out the risks of the current system (Lawrence, 2003), but their warnings have had little impact. In a system that confuses success and merit, wisdom rarely prevails.
The only viable, long-term solution is to release the pressure on scientists
This short commentary advocates neither the end of prestigious scientific journals, nor the regulation of science by the United Nations or any other global organization. My purpose here is only to highlight the striking similarities between science and the financial sector, which have much more in common than one might surmise at first glance. Unless one believes that these similarities are purely coincidental, the global financial crisis should be an eye-opener for scientists.
Theoretically, it should be possible to fix the system before it turns into a real crisis, as the dangerous developments highlighted in this article are not an intrinsic problem of science itself; rather, they are an adaptive response to the harsh competition imposed on researchers by external agents. There have been various initiatives from scientists themselves to reform the publication process, most notably the launch of open-access journals. However, such solutions tinker with the symptoms, not with the cause of the illness. The only viable, long-term solution is to release the pressure on scientists.
The ball is clearly in the court of policy-makers and funding agencies. They must reconsider the modus operandi of their evaluation processes. They must weigh the long-term effects of the current resource-allocation model, and they must reconsider the assumption that more competition leads to more and better science. Finally, they must engage in thorough discussion with the scientific community about quality control, a term that, strangely enough, is rarely used in connection with the production of scientific knowledge. Unlike the financial system, the scientific system is unlikely to suffer a systemic crash. But, if we do not fix it soon, it might seriously damage itself by steadily undermining its own credibility.
Footnotes
The author declares that he has no competing interests.
References
- Bikhchandani S, Sharma S (2000) Herd Behavior in Financial Markets: A Review. Washington, DC, USA: International Monetary Fund
- Cote J, Goodstein J (1999) A breed apart? Security analysts and herding behavior. J Bus Ethics 18: 305–314
- Dasgupta P, David PA (1994) Toward a new economics of science. Res Pol 23: 487–521
- De Bondt W, Forbes WF (1999) Herding in analyst earnings forecasts: evidence from the United Kingdom. Europ Finan Manage 5: 143–163
- Hott C (2008) Herding behavior in asset markets. J Finan Stab 5: 35–56
- Humphrey C, Moizer P, Owen D (1995) Questioning the value of the research selectivity process in British university accounting. AAAJ 8: 141–164
- Ioannidis J (2005a) Why most published research findings are false. PLoS Med 2: e124
- Ioannidis J (2005b) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294: 218
- Jacobs H (2009) Pay to cite. EMBO Rep 10: 1067
- Lawrence P (2003) The politics of publication. Nature 422: 259–261
- Rossner M, Van Epps H, Hill E (2007) Show me the data. J Exp Med 204: 3052–3053
- Schweitzer ME, Ordonez L, Douma B (2004) Goal setting as a motivator of unethical behavior. Acad Manage J 47: 422–432
- Smith R (1998) Unscientific practice flourishes in science. BMJ 316: 1036–1040
- Unger K, Couzin J (2006) Scientific misconduct. Even retracted papers endure. Science 312: 40
- Young NS, Ioannidis JP, Al-Ubaydli O (2008) Why current publication practices may distort science. PLoS Med 5: e201