Abstract
For most researchers, academic publishing serves two goals that are often misaligned—knowledge dissemination and establishing scientific credentials. While both goals can encourage research with significant depth and scope, the latter can also pressure scholars to maximize publication metrics. Commercial publishing companies have capitalized on the centrality of publishing to the scientific enterprises of knowledge dissemination and academic recognition to extract large profits from academia by leveraging unpaid services from reviewers, creating financial barriers to research dissemination, and imposing substantial fees for open access. We present a set of perspectives exploring alternative models for communicating and disseminating scientific research. Acknowledging that the success of new publishing models depends on their impact on existing approaches for assigning academic credit that often prioritize prestigious publications and metrics such as citations and impact factors, we also provide various viewpoints on reforming academic evaluation.
Keywords: academic journals, alternative publishing models, academic prestige economy, publish or perish culture, publication bias
The original purpose of academic journals is to disseminate scientific research. However, for many researchers, this goal has become entangled with serving the academic prestige economy (the system where academic reputation hinges on prestigious publications, citations, impact factors, and affiliations). The problem is that the goals of publishing—the documentation of new knowledge and establishing scientific credentials—are often in tension. The former benefits from accumulating a large body of integrated results and detailed, carefully investigated theoretical explanations. The latter encourages scientists to publish in ways that maximize their metrics. For example, maximizing metrics can lead scientists to prioritize novelty and sensationalize findings with the hope of publishing in prestigious journals. On the one hand, prioritizing novel findings might speed scientific progress. On the other hand, it can impede dissemination, as researchers may hide null results due to concerns about their publishability in top journals.
Commercial publishing companies have leveraged the centrality of publishing to both knowledge dissemination and academic recognition, generating huge profits in the process. Some argue that the profit-driven goals of commercial publishing organizations can foster exploitation of the free services of reviewers, make it costly for scientists to distribute their work, and result in academics and libraries paying hefty fees for open access (1–3). In some years, commercial publishing companies’ profit margins approach those of big tech companies such as Google and Apple (4). Many scientific societies, whether operating as a nonprofit publisher or relying on a commercial publisher, also view their journals as sources of revenue supporting essential activities like annual conferences, travel grants, and research awards. In this perspective, we discuss the complex issues of incentive alignment in academic publishing and alternative publication models aimed at addressing these concerns.
Brief History of Academic Publishing and the Professionalization of Academia
The history of academic publishing is intertwined with the growth of universities and the professionalization of academia (5). Prior to the 19th century, there were few academic institutions, and most scholars were wealthy individuals who could self-finance their work or who pursued scientific studies alongside their main profession. Starting in the late 17th century, these scholars began creating scientific societies [such as the Royal Society in 1660 (6)] to promote their scholarship. Many of these societies subsidized the production of scholarly periodicals (such as the “Philosophical Transactions” published by the Royal Society of London starting in 1665) without the goal of generating a profit.
During this time, commercial publishing firms were also established. These entities tended to focus on publishing brief research reports and scientific news rather than detailed primary research articles. Before 1900, few commercial publishers managed to be profitable (5). It was rare for academic publications to yield enough income to offset their expenses, which included the cost of materials like paper, ink, and typesetting. Some publishers played up the more lurid aspects of certain research (e.g., the sexual customs of far-away indigenous people) to prop up sales (7).
The nineteenth century saw the growth of academic institutions and the establishment of new universities, leading to the creation of a professional academic community (8, 9). In addition to teaching, professors were expected to actively participate in research, which was typically done through engagement with scientific societies and their periodicals. In 1830, Babbage (10) argued that scholarship should be evaluated through authorship of scholarly work. By the end of the 19th century, research success and academic employment typically depended on lists of published journal articles.
After the Second World War, there was massive growth in universities and large changes in research cultures and publication practices. Governments in both North America and across Europe drove the expansion of higher education. With this expansion came the massive hiring of academic staff [e.g., the number of academics employed in higher education in the United Kingdom (UK) grew from 4,000 prior to the Second World War to 200,000 in 2015; (11)] accompanied by a growing need to evaluate their scholarly contributions. Academic career advancement became codified by universities, with “research prestige” being the primary criterion for hiring and promotion (12, 13).
During the massive expansion of universities following the Second World War, commercial publishing firms were able to take advantage of the growth of academic research and university libraries. Dutch firm Elsevier and British firm Pergamon Press had a three-pronged profit generating strategy (5). First, they shifted the focus of commercial publishing from scientific news and brief reports to primary research articles. As part of this shift, they also created many new research journals in emerging scientific disciplines, targeting young societies that did not have their own journals. In 1950, there were about 10,000 journals worldwide. That number climbed to 62,000 by 1980 (14) and to 80,000 by 2019 (15). Second, they switched their consumer focus. Instead of primarily selling to individuals, they targeted institutions that could pay more per subscription. Third, they took advantage of the international market by publishing in English and targeting institutions around the world. The approaches taken by Elsevier and Pergamon Press were so successful in generating profits that other commercial publishing firms and ultimately mission-oriented publishers (e.g., scientific societies and university presses) followed suit (5).
Commercial publishers realized that in order to make the transition from publishing scientific news and short reports to primary research papers, their journals would need to be viewed as legitimate outlets of high-quality research (16). This involved adopting the refereeing practices of scientific societies. During the 18th and 19th centuries, it was common for scientific societies to recruit qualified members to voluntarily review papers before publication. Similarly, commercial publishers recruited academics to serve on editorial boards and as reviewers. Since reviewing had traditionally been a voluntary role, commercial publishers saw no need to compensate this work. Thus, the modern peer review process was born.
Peer review has become a cornerstone of the academic prestige economy. Today, research that is disseminated outside of the peer review system receives little weight in most institutional evaluations. In many fields, commercial publishers dominate the landscape and thus control the academic prestige economy. In 2015, it was estimated that 70% of the articles in the social sciences and 50% of the articles in the natural sciences were published by one of four large commercial firms (Springer Nature, Elsevier, Wiley-Blackwell, and Taylor & Francis) (17).
As the number of scholarly journals continued to grow, publishers adopted metrics (e.g., citation counts and journal impact factors) to help distinguish themselves. With the advent of digital publishing in the early 2000s, these metrics became easier to collect and analyze and have thus become key elements of the academic prestige economy. Like peer review, these metrics are intertwined with the profit-driven goals of publishing firms.
Perspectives on the Role of Modern Journals
In this section, we provide perspectives on the multiple (and sometimes conflicting) goals of modern journals, including 1) generating money (for both commercial publishers and scientific societies), 2) disseminating research, and 3) assigning academic credit for career advancement.
Journals as Revenue Streams.
Some argue that previous attempts to reform the publishing business, such as open access, have failed to halt the commercialization of scientific journals (18). Moreover, although open access often gives researchers in low-income countries free access to read the literature, it has made it more costly for them to publish their own work. In addition, predatory journals publish almost anything for profit, and paper mills (i.e., profit-driven entities often operating outside of legal and ethical academic norms) fabricate and sell fake manuscripts, imitating authentic research, on an industrial scale.
The profit-driven motives of commercial publishers can be beneficial for scientific societies that rely on them for publishing their journals, even when these societies only receive a small portion of their journals’ revenue. An interesting case study is that of two different societies in psychology, the Psychonomic Society and the Society for Mathematical Psychology. The Psychonomic Society, a preeminent society for the experimental study of cognition, was founded in 1959, and their publishing program was started by Clifford T. Morgan in 1964 (19). Morgan was an academic who became independently wealthy through the sale of his textbook Introduction to Psychology. Morgan owned Psychonomic Press and started publishing the journal Psychonomic Science in 1964. This was followed by Psychonomic Monograph Supplements in 1965 and Perception & Psychophysics in 1966. In 1967, Morgan gifted the three journals to the Psychonomic Society. The estimated total value of the gift was between $60,000 to $70,000 at the time (19).
From 1967 to 2010, the Psychonomic Society controlled its journals and expanded its publishing program to a total of six journals, all published in-house. Then, in 2010, it entered into a publishing partnership with Springer Nature for all six journals. Importantly, the Psychonomic Society retained ownership of the journals and their titles. This partnership has generated significant revenue for the Psychonomic Society, allowing them to hold annual conferences with no registration fees through 2023 and to sponsor multiple travel, career, and best-paper awards. In addition, the revenue stream is used to fund an endowment that will support the Society deep into the uncertain future. It is intended for the endowment to reach “a level that can support the Society’s core operations with earned interest and with capital gains under normal market conditions in perpetuity” (20).
In contrast, the Society for Mathematical Psychology and its journal had a very different publishing history. The flagship journal of the Society for Mathematical Psychology, the Journal of Mathematical Psychology (JMP), predated the formal organization of the society. A group of senior mathematical psychologists entered into a publication contract with Academic Press for JMP in 1964. This group of psychologists constituted the editorial board of the journal, and it was only in 1977 that the Society for Mathematical Psychology was formally incorporated. In 1980, the Society for Mathematical Psychology signed an indefinite contract with Academic Press for JMP, naming the publisher as the sole and exclusive owner of the journal and its title. At the time, Academic Press was a subsidiary of Harcourt Brace Jovanovich, which was acquired by Elsevier in 2001 (21). The contract for JMP became the property of Elsevier and thus they took ownership of the journal and title.
From the founding of the journal until 2018, the Society for Mathematical Psychology received zero revenue for JMP. After intense negotiations with Elsevier, the society entered into a new contract for JMP in 2018 which provided the society with a small annual sum of money. While this amount has increased a little over recent years, it is likely an order of magnitude less than the profit Elsevier earns from publishing JMP. Since Elsevier owns the journal and title, it is unlikely the Society for Mathematical Psychology will ever generate large revenues from JMP.
Journals as Curators of Research.
Journals play a pivotal role in disseminating academic research, serving as key platforms for sharing scholarly findings within the academic community and beyond. However, some view the current process as highly problematic. In particular, journals act as gatekeepers where only articles that pass peer review are published. Journals traditionally derive prestige from being highly selective, rejecting many manuscripts and accepting only a few for publication. Some argue that such selectivity can benefit science because it focuses the reporting only on the best research. However, others argue that this approach does not necessarily align with promoting rigor of the published work, since it focuses the editorial and peer review process on a binary accept/reject decision instead of the improvement of the reporting. This might lead scientists to adopt binary decisions as well: either fully committing to a high-prestige publication (attempting the next prestigious journal after every rejection) or not producing a manuscript at all, file-drawering research data that have little chance of meeting journals’ acceptance criteria (which often favor novel, unambiguous results). This has prompted discussions about how to improve peer review to enhance the dissemination of research (see the article “The present and future of peer review: ideas, interventions, and evidence” in this special feature).
In addition, the traditional subscription-based model limits access to research articles behind paywalls. While this publication model is typically free for authors, it often requires high subscription fees for entities wanting to access journals, creating disparities in access. Many researchers, students, and institutions, especially those with limited financial resources, face challenges in accessing important scientific literature. This exclusivity can impede collaboration, hinder progress, and perpetuate knowledge gaps across different demographics and regions.
In the case of open-access publishing, high Article Processing Charges (APCs) imposed by journals create financial barriers for researchers aiming to disseminate their work. These fees can place a significant burden on individual researchers, particularly those from underfunded institutions or developing countries. The inability to access or publish research due to financial constraints impedes the sharing of crucial scientific knowledge and innovation.
Recently, commercial publishers have sought to resolve issues surrounding paywalls and high APCs by transitioning from traditional subscription-based models to transformative agreements (an umbrella term used to describe agreements between institutions and publishers where prior subscription costs are redirected to support open-access publishing). Under these agreements, a significant portion or the entirety of the publisher’s content becomes openly accessible to readers without paywalls. Additionally, many of these agreements cover the cost for authors at their institution to publish their work in open access formats without additional APCs. However, transformative agreements require institutions to allocate significant funds for these deals. The costs associated with these agreements may strain institutional budgets, particularly for smaller or underfunded institutions. Additionally, the uncertainty about long-term costs and sustainability poses a challenge, as the financial models for these agreements might evolve over time, potentially affecting institutions’ ability to maintain participation.
Journals as the Cornerstone of the Academic Prestige Economy.
Early on, prestige was closely tied to membership in a select scientific society, such as the Royal Society of London. As the number of professional scientists increased, this honor was diluted, and new ways of assigning prestige were needed (10). In 1830, Charles Babbage criticized the awarding of medals, due to their subjective nature, and recommended that Philosophical Transactions publish an annual report counting how many articles were contributed by each member, with different member classes assigned depending on publication count. This idea was later generalized in the 1867 publication of the Catalogue of Scientific Papers, which spanned many fields and publication outlets. Although its initial goal was to gather an index of existing knowledge, it quickly became used to evaluate scientific productivity by counting the cumulative number of publications of different authors (22). Over time, this led to a drastic change in academic evaluations. For example, in the early 19th century, applications for elected positions in the Royal Society consisted mainly of narrative summaries of contributions to knowledge. By the middle of the century, these were almost entirely replaced by publication lists (22), a practice familiar to all of us today.
Today, the major factors that influence tenure and promotion in science and many other academic disciplines are publications, citations, and grant funding. These are interdependent, as the likelihood of obtaining grants is affected by one’s publication record, and the ability to publish is dependent on (among other things) getting one’s research funded. Both of these factors put a great deal of pressure on researchers, especially in the early stages of their careers. This has led to a “publish or perish” culture in academia as well as publication bias: Researchers face significant expectations to continuously produce and publish scholarly work and also optimize their chances of publishing in high-prestige journals by being selective about the data they report.
The prevailing publish or perish culture has resulted in a “counting” mindset, where the number of publications and the prestige of the journals are critical for career advancement. This counting mindset is partially responsible for a considerable rise in both the average number of papers per author and the number of coauthors per paper. Rewarding researchers on the basis of the number of publications favors, all other things being equal, publishing least-publishable units (splitting the totality of one’s results into smaller bits to augment publication counts) and/or repackaging the same data for different purposes and venues. It also favors having a large number of coauthors per paper, with authorship becoming a gift to be exchanged. We note that this latter issue can be difficult to disentangle from the rise of large collaborative science that took off at the end of the twentieth century with initiatives such as the Human Genome Project (23, 24) and big teams operating large-scale equipment, such as the Large Hadron Collider (25). Thus, it is worth distinguishing research which requires large collaborations from work in which coauthorship is nominal.
While tracking the number of publications is easy to implement, relying on publication counts overlooks the importance of paper quality and may deter scientists from pursuing deeper and riskier research objectives. For example, researchers might be incentivized to think too narrowly, pursue short-term goals, and test only the immediate (vs. long-run) implications of interventions. Assessments of the quality of publications should push in the opposite direction, motivating researchers to tackle more important questions and to publish more consequential papers. Unfortunately, in the absence of a direct metric, assessing the quality of work is difficult, as doing so is time-consuming and often requires expertise in the area, and thus academic institutions tend to rely on proxies for scientific quality, such as citation counts and journal impact factor.
The incentive to produce more papers is likely to have a larger impact on people from structurally disadvantaged backgrounds—both from the researcher side and the participant side, with terrible implications for science. Researchers from underrepresented and structurally disadvantaged backgrounds are less likely to have large networks that facilitate large coauthorship opportunities. If what “counts” is impact factor, and impact factor is a function of who knows (and thus cites) whom (26), then researchers from underrepresented and structurally disadvantaged backgrounds are less likely to be cited by other scholars—even if their work is more innovative (27). For example, an analysis of more than 1.2 million PhD recipients (27) found that scholars from underrepresented groups innovated at a higher rate (based on natural language processing measures developed to quantify substantive concepts in dissertations) than those from majority demographic groups, yet their contributions were systematically devalued and discounted (as measured by citation rates). As a result, these scholars were at a disadvantage in reaching the same milestones of success, such as a faculty position at a top research university (e.g., an R1 institution in the United States), as researchers from majority groups.
From the perspective of diverse samples and populations, incentives to run “easier” studies with more easily accessible populations means that many more difficult-to-access diverse populations are left out of study designs—and scientific knowledge, especially regarding generalizability and replicability in diverse populations, suffers as a consequence. Thus, many argue that the publish or perish culture of academia is bad for everyone and especially bad for underrepresented groups.
Perspectives on Journal Reform and Alternative Publishing Models
In this section, we present a set of perspectives rethinking the whole system of producing and communicating scientific research. These include arguments that academic institutions/associations should take over the top journals or create their own, turning the entire enterprise into a nonprofit operation where “science controls science.” We also discuss new publishing models and platforms, such as preprint servers, the Peer Community In (PCI) family, modular publishing (e.g., Research Equals), and micropublications.
Academia Retaking Control of Publishing.
One perspective holds that academic institutions and scientific associations should retake control of scientific publishing. This is unlikely to be achieved by asking Elsevier, Springer Nature, and others to give up their lucrative businesses and hand over the rights to their journals. Journal reform could instead proceed along a different path. First, academic institutions/associations create parallel top journals controlled by themselves and ask the editorial teams of current for-profit journals to switch to the new nonprofit journals. Second, scientific academies and societies call upon the scientific community to join their fight for independence and publish, review, and serve as editors solely for nonprofit journals in a joint effort to regain control and save money. Third, because most academies do not have the resources and personnel for the technical work of online or print publishing, the academies make a competitive call for technical companies that can do this work (experienced companies, including for-profit publishers, can apply). The hope is that such changes will result in lower costs and better science.
Scientists have begun retaking control of their journals from for-profit companies. For instance, the entire editorial team of NeuroImage, a leading journal for brain-imaging research, resigned in 2023 in protest against the perceived “greed” of the journal’s publisher Elsevier. The team urged the scientific community to reject Elsevier and submit papers instead to a nonprofit, open-access journal called Imaging Neuroscience, which the team was launching (28). The launch of the Springer Nature journal Nature Machine Intelligence in 2019 was met with large resistance from the machine learning community because for-profit publishing goes against the field’s norms of open science where most research is made freely available on preprint servers. Thousands of researchers, including many from top institutions, signed a statement refusing to engage with the new journal (29). French mathematicians have set up the “Cost of Knowledge” website where over twenty thousand scientists have signed with their name and affiliation to refrain from publishing, refereeing, or doing editorial work for Elsevier (30).
The actions of scientists to reclaim their journals can have measurable impacts on existing publishers. For example, in 2006, the entire editorial board of Topology resigned in protest of Elsevier’s high subscription prices and launched the new Journal of Topology under the auspices of the London Mathematical Society. Topology went out of business three years later. The European Economic Association terminated its contract with Elsevier’s European Economic Review as the Association’s official journal and founded the new Journal of the European Economic Association in 2003. Elsevier’s European Economic Review still exists, but with a lower impact factor.
From this perspective, publishing should be by scientists for scientists, not by companies that make profits close to 40% by using the free services of scientists and charging high fees to libraries and universities (4).
Preprint Servers.
In certain communities, such as machine learning, alternative publishing models such as preprints and society proceedings are already regarded as more reputable than journal publications. This is because certain influential and highly cited articles may reside solely on preprint servers without undergoing formal publication. Additionally, the nature of benchmark-driven research, such as in the field of machine learning, often obviates the need for prior peer review. The community has a vested interest in replicating techniques that set new benchmarks, thereby facilitating a self-correcting mechanism. In this context, preprint servers can expedite the pace of “rapid science” in benchmark-driven research (31).
In the context of other fields, preprints are often viewed as riskier indicators of a researcher’s performance compared to traditional journal articles. Yet, preprints are increasingly valuable. This is because such manuscripts are highly likely to be formally published at some point. Ultimately, the true merit of a paper can only be ascertained by reading and evaluating it, regardless of whether it appears as a preprint or in a prestigious journal. Therefore, the recognition of preprints is on the rise. When it comes to assessing quality and credibility, metrics like download counts or citation numbers are available for preprints. While these metrics are not foolproof indicators of quality, they may be no less reliable than traditional measures like impact factors or the reputation of the publishing venue.
Journal Reviewed Preprints.
A major concern many researchers have with preprint servers is that articles have not been reviewed. Additionally, it can be difficult to navigate the vast preprint landscape. The new publication model undertaken by the journal eLife, an independent nonprofit publisher, offers solutions to both of these concerns. eLife has been a well-respected journal in the biomedical sciences since 2012. In its first decade, it operated on the standard peer-review journal model. Beginning in 2022/2023, it moved to a fundamentally different model where 1) only articles made available on preprint servers are reviewed, 2) peer review is no longer used as a basis to accept/reject an article, and 3) the number of articles published is not artificially limited. In this model, the decision of whether to host (rather than publish in the classic sense) an article at eLife is made by editors (who are active researchers) prior to peer review, and reviews are presented as commentaries alongside the article.
While this is a significant change in approach, it is not disruptive. The journal still curates research since editors are now tasked with determining what is suitable to be hosted. The seemingly incremental nature of this change from a practical perspective belies the significance of the philosophical change that underlies it. Review processes that curate manuscripts that are already available can move away from binary accept/reject decisions. The journal can instead provide the service of improving the reporting and integrating it into the existing knowledge base. This refocuses the publishing process on its original goal: disseminating research and serving its readers. The publication process at eLife has changed only recently, and research is needed on its practical consequences.
Community Reviewed Preprints.
eLife is not the only organization that has taken on the responsibility of reviewing preprints. PCI is a community-sourced service which provides free, journal-independent review of preprints. Once reviewers and preprint authors have worked together to improve a preprint, the review process is intended to lead to a “recommendation,” by which a recommender (analogous to an editor at a traditional journal) endorses the article. The PCI-recommended preprint can be cited and used as a peer-reviewed article, circumventing the need for traditional journal publication. Alternatively, it can still be published in a journal (for instance, one that is “PCI-friendly,” which readily accepts and publishes recommended articles). One perspective holds that bottom-up initiatives like this, owing to their “peer-based character and format,” have the potential to reach widespread adoption, as they give voice to individual scholars within the community (32).
Recently, PCI has been integrated into the multidisciplinary open archive HAL (Hyper Articles en Ligne), which is a popular preprint server for French academics. When authors deposit a paper on HAL, they can also select to have it sent directly to the relevant PCI. If PCI recommends the preprint, then HAL will display the bibliographic reference to the recommendation. This type of integration will aid both authors and readers by streamlining and simplifying the process of preprint posting and reviewing. Other preprint servers such as ArXiv and PsyArXiv could integrate PCI in a similar manner to further aid researchers wanting to use PCI.
Another benefit of PCI is that it does not require significant divergence from the traditional peer-review process, while at the same time it offers a departure from the traditional journal publication system, in that the journal itself can be completely cut out. One final variable that is likely to increase uptake of PCI within the community is cost: the use of PCI is free. This has multiple positive implications for its adoption chances. First, PCI is free because the service is provided by volunteer recommenders and reviewers. Unlike a traditional journal, however, no publisher benefits from volunteer reviewers; the benefit is conferred solely on the preprint authors. Second, an open access model with no APCs can help combat publication bias. If moving away from accept/reject decisions improves the cost/benefit calculus of complete reporting, the absence of fees avoids adding a further discouragement to producing manuscripts. Third, PCI being free is especially beneficial for independent researchers and researchers with limited institutional funding and support. This means that PCI is likely to be used by those with limited financial resources, in addition to other members of the community. Finally, some subgroups in the science reform movement are strongly invested in accessibility, openness, and inclusivity (33). They may want to support the PCI model, by contributing to it and using it, on principle as much as for its utility.
Society Endorsed Preprints.
One concern about PCI is that it might lack the prestige that scientific societies can provide. One possible solution to this problem is for societies to assume the role of reviewing/endorsing preprints. This could follow eLife’s approach of endorsing articles and providing commentaries alongside the article. Societies could form “endorsement boards” similar to journal editorial boards to take on this responsibility.
To facilitate this process, preprint servers (such as ArXiv and PsyArXiv) could modify their platforms to allow societies the ability to provide their endorsements and commentaries directly alongside the preprint. In this way, societies do not have to assume the burden of building out infrastructure to host their endorsed preprints. For example, PsyArXiv currently uses Plaudit, an eLife Labs product, to allow ORCiD account holders to endorse preprints. The functionality of Plaudit could be expanded to allow groups of individuals (such as societies) to provide endorsements and the ability to upload commentaries. Further, preprint servers could enhance their search features so that researchers can search directly for articles endorsed by specific societies, making it easier for individuals to navigate articles on the server.
Preprint servers would incur some costs in making these changes. We note that search engines, such as Early Evidence Base, already exist for prioritizing refereed preprints and organizing preprints around scientific topics (34), so the proposed changes to preprint servers are quite feasible. Federal granting agencies and private foundations could provide the necessary resources to support these infrastructure changes. eLife’s transition from a traditional peer-reviewed journal to one focused on curating and reviewing preprints was supported by several funders, including the Howard Hughes Medical Institute, Knut and Alice Wallenberg Foundation, the Max Planck Society, and Wellcome. Although this publication model does not yet exist, it is similar to eLife’s model and the PCI model. It also provides a mechanism for scientific societies to retake control of publishing. Additionally, papers endorsed by scientific societies would carry the prestige of those societies.
This proposed model presumes that a publication system must convey prestige to be effective. Some argue that the emphasis on perceived prestige is a fundamental cause of numerous issues within the current publication system. Therefore, attempting to replicate prestige may perpetuate existing problems rather than resolve them. Ultimately, using publications as a signal of prestige is a decision made by scientists. Therefore, one solution is for scientists to stop relying on publications for conveying prestige.
Modular Publishing Platforms and Micropublications.
Modular publishing is an alternative publishing approach that breaks up a paper into small sections called modules (35). Micropublications are small articles which describe just one result or claim, without a “broader narrative.” F1000, eLife, and PLOS Biology are among the many journals that publish micropublications. While modular publications and micropublications have much potential, they do represent a different way of writing, reporting, and disseminating research. For example, Research Equals is a modular publishing platform run by Liberate Science, based in Berlin, Germany, and Octopus is a UK-based modular publishing platform. In the case of Research Equals, there is no set number of modules. Depending on the kind of research, researchers might publish half a dozen modules, or they might publish fifteen. Octopus offers a more structured process of eight modules, including one for reviewing. Octopus is also designed to allow the “threading” of modules into a coherent narrative that can be submitted to a journal for conventional publication. The journal Royal Society Open Science has an agreement with Octopus to consider such articles for publication.*
Barriers to Change.
The more a new publishing model diverges from existing modi operandi, the more difficult it will be to establish its widespread adoption. Indeed, as Armeni and colleagues (32) stress, the perceived costs of change (implementing new practices, along with the extra learning and effort this requires) can lead to resistance from potential adopters. A cost–benefit balance must be achieved, where the short-term costs of adopting a new practice do not outweigh the long-term benefits of such a change.
One significant barrier to journal reform is potential changes to the underlying publication funding model. Under alternative publishing models, who should pay for research publication and dissemination? In the case of preprint servers and PCI, these services are provided free of charge to authors. However, the fee for publishing with eLife is $2,000, charged when the journal commits to peer reviewing the work. For scientific societies, such as the Psychonomic Society, that currently earn substantial revenue from their traditional journals, switching to an alternative publication model would incur significant costs. Thus, a key for societies to change their publication practices is resolving the journal revenue problem.
A pay-to-publish model, such as the one at eLife, is likely the most obvious choice. However, the pay-to-publish model requires that scientists have the funds to pay at their disposal, from (publicly funded) grants, etc. A concern is that publication success (number of publications or venue) then may depend more on financial status than quality. In cases in which publication is underwritten by universities, there are other potential inequalities because different universities have very different levels of resources.
Another solution to the society funding problem is to switch from the journal revenue model to one where societies are directly funded by members, granting agencies, and academic institutions. For example, the NSF funds conference grants to scientific societies and other organizations in the United States. This program could be expanded to provide more support to societies and thus reduce their reliance on journal revenue. Academic institutions could also directly support scientific societies. Right now, they provide indirect support to societies by paying hefty subscription fees to commercial publishers for society journals. An interesting related perspective is that academics seem fine with letting publishers charge high prices to their home institutions as long as some of this money is delivered to the societies of which they are members.
From the perspective of researchers, the rewards for changing one’s research workflow and surrounding practices are often unclear. This is especially true for early adopters who do not have the benefit of seeing how those ahead of them fared, and for researchers who have already found the traditional academic system rewarding and lucrative (36). For example, publishing models with alternative peer review processes (e.g., PCI and society endorsement boards) might be at a disadvantage if researchers view these models as less prestigious and also think of peer review as a hurdle rather than a service.
Disciplinary norms and standards likely also have a role to play in whether a publishing reform will flourish or fail. For instance, it is unlikely that a model like micropublications would find a foothold in disciplines where qualitative traditions are dominant. This is because the value of empirical research in such disciplines rests heavily on the richness of the context and narrative of the study; decontextualizing findings in the way a micropublication does would undermine the validity and quality of the research. By contrast, micropublications proved highly popular in fields such as epidemiology during the COVID-19 pandemic, as the results of empirical studies were themselves the focal point, the context was self-evident, and a short turnaround time in publication was vital (37).
Publishing reforms that have strong backing from the research community, and which are similar to or easily integrate into researchers’ existing scientific workflows, are more likely to reach a critical mass of acceptance, especially in disciplines where the proposed model is appropriate to the disciplines’ epistemological and practice norms. The three biggest challenges to journal reform are 1) the lack of independence of scientific journals from commercial for-profit publishing companies, 2) the financial impacts on societies that currently generate substantial revenue from their journals, and 3) resistance to adoption because of concerns regarding academic prestige, detailed in the section below.
Perspectives on Reforms in Academic Evaluation
For any alternative publishing model, a major challenge is generating “career value,” that is, recognition among hiring committees, grant agencies, and prize committees. Thus, knowing which publication venues are regarded as reputable among the community (e.g., among members of a hiring committee) is critical for deciding where to publish. Indeed, Rowley et al. (38) found in a survey that researchers consider the reputation of a journal to be one of the most important factors when deciding where to submit their work for publication. Thus, for alternative models to achieve widespread adoption, researchers will need to regard publications in alternative venues as no less valuable than publications in traditional venues.
This “incentive obstacle” may be most readily overcome by altering the incentive structures surrounding academic evaluation. However, we caution that any changes to academic incentive structures could have unexpected outcomes. People can adapt to changes in payoffs and penalties in ways that are difficult to anticipate. For instance, the pressure on researchers to produce a certain number of first-authored articles in order to get promotion or tenure has been a leading cause of the rise of paper mills that sell authorships on bogus papers. An estimated 2% of all scientific papers published in 2022 appear to be fake, a share that is likely to increase with the rise of generative AI (39). One possible approach to understanding the impacts of new incentive structures is to study these structures using tools from economics, such as game theory. At present, this is an underutilized approach.
In this section, we present a set of perspectives on changing the publish-or-perish culture by reforming academic evaluation. We encourage the scientific community to experiment with these models and to continue proposing viable alternatives, keeping in mind the potential unexpected consequences of changes in incentive structures.
Abandoning Problematic Metrics.
Abandoning the “counting” mindset altogether would be unrealistic. However, many of the metrics currently being used to assess research quality, such as citation counts and journal impact factor, are highly problematic (40). Citation counts are correlated with many factors that are independent of research quality (see, for example, ref. 41) and, at least on some analyses, are not related to research quality (40). Similar problems arise from the use of impact factors. They are a poor proxy for citation impact, since journals with high and low impact factors have similar, heavily overlapping citation distributions, and using them to assess researcher quality or the impact of individual papers is unjustified (42). Abandoning impact factor would have the added benefit of supporting alternative publishing models, which often do not have impact factors.
Many existing metrics are also easily gamed. For example, researchers have suggested that the h-index incentivizes self-citation or the formation of citation cartels, where scholars informally agree to strategically track and cite each other’s work (43, 44). Indeed, game-theoretic analyses suggest that a rational maximization of h-index involves increasing self-citations while decreasing citations of papers of competing research groups (45). Another study found that 20 randomly selected economists could increase their h-index by 20% by strategically adding just 1.8% more citations (43). Such gaming is made easier by software, such as Google Scholar, that tracks citations and automatically calculates a researcher’s h-index. Altmetric, which measures the online attention research papers receive, can be manipulated by increasing social media posts, enlisting friends to share the content, or using automated programs to post or repeatedly download specific articles (44). There is also evidence that journal editors game metrics for profit. For example, to get fake papers published, paper mills offer to raise a journal’s impact factor by extensively citing its articles in their bogus papers. Editors are also offered bribes scaled to the journal’s current impact factor, such as $1,000 times the impact factor for each published paper (46, 47).
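To make concrete how little movement such gaming requires, the following minimal sketch (using hypothetical citation counts, not data from the cited studies) computes an h-index and shows how a handful of well-placed self-citations shifts it.

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers.
papers = [25, 14, 9, 8, 8, 7, 7, 6, 3, 1]
print(h_index(papers))  # 7 (seven papers have at least 7 citations)

# Adding just four self-citations to the papers sitting at the
# threshold (7, 7, 6 -> 8, 8, 8) raises the index by one.
gamed = [25, 14, 9, 8, 8, 8, 8, 8, 3, 1]
print(h_index(gamed))   # 8
```

Because the index depends only on the papers sitting at the threshold, a citation cartel needs to move very few citation counts to change it.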
There is precedent for abandoning these problematic metrics in academic evaluation (e.g., in promotion decisions). The pan-European “Agreement on Reforming Research Assessment” asked signatories to commit to moving away from metrics such as impact factor and the h-index in assessing researchers for jobs, promotions, and grants (48). Efforts of this kind are not confined to Europe. The Declaration on Research Assessment (DORA; https://sfdora.org/about-dora/), established in 2012, is dedicated to improving the evaluation of scholarly research outputs. DORA’s primary goals include increasing awareness of innovative assessment tools and facilitating the adoption of responsible metrics in hiring, promotion, and funding decisions.
DORA has evolved into a global initiative encompassing all academic disciplines. As of December 2023, 24,320 individuals and organizations across 164 countries have endorsed DORA. The European Research Council (ERC) endorsed DORA in 2021 (https://sfdora.org/resource/european-research-council-erc/), and applicants are explicitly asked not to include journal impact factors. The Dutch Research Council (https://www.nwo.nl/en/dora) signed DORA in 2019 and has progressively integrated its principles into assessment procedures, including the elimination of references to impact factors and the h-index in funding calls and application forms and the introduction of narrative or evidence-based Curriculum Vitae (CVs). Utrecht University in the Netherlands promptly followed suit, formally abandoning the use of the impact factor in all hiring and promotion decisions in early 2022 after signing DORA in 2019 (49).
Utrecht University’s initiative has been inspired by the ambitious Recognition & Rewards program, a collaborative initiative involving Dutch universities, university medical centers, research institutes, and funders (50). The program aims to diversify academic career paths, encompassing not only research but also teaching, outreach, and organizational responsibilities, by balancing individual and team performance, promoting open science and academic leadership, and emphasizing quality in assessing academic performance, with a strong emphasis on DORA principles.
Adopting Responsible Metrics.
One way to encourage universities and other organizations to abandon impact factor and the h-index is to replace them with more responsible metrics. For example, the Center for Science and Technology Studies (CWTS) at Leiden University in the Netherlands has been working on the issue of more responsible metrics for some years now. Among other initiatives, they have introduced the impactful “Leiden Manifesto for Research Metrics” (51), which featured 10 principles designed to guide institutions through a responsible evaluation of researchers. These principles are, in the words of the authors, a “distillation of best practice” in research assessment, allowing researchers to hold their evaluators to account. They emphasize qualitative expert assessment [including qualitatively evaluating individuals’ CVs—the “narrative/evidence-based CV” style being increasingly used in some academic fields is an example of this in practice (52)], transparency, and recognition of systemic issues in the process of evaluation.
A Menu of Quantitative Metrics.
One drawback to expert qualitative assessment is the difficulty and cost of implementing it. Further, qualitative assessment is subjective and would likely have similar problems as other forms of subjective academic evaluation. An alternative to qualitative expert assessment is the development of a menu of quantitative metrics that could be applied to various evaluation contexts. Below, we discuss quantitative approaches to measuring researcher impact, replicability, and societal impact. These three examples highlight the diversity of possible metrics that could be included in such a menu. We note that no metric is perfect on its own and there are drawbacks to each. The key to the menu approach is that multiple metrics will help balance each other and reduce the chance of individuals gaming the system.
We also note that simpler metrics are more likely to be adopted. Even if a metric is accurate, it is unlikely to be adopted if it is difficult to understand. Even if a menu of metrics existed, people might gravitate to using the simplest ones. Thus, the complexity of a metric should be considered during its development.
Measuring researcher impact.
In evaluating the unique impact of a researcher, one could lower the weight of a paper as a function of the number of coauthors. For example, a single-authored paper would be counted as 1 publication, whereas a 3-author paper would be counted as a third of a publication. Yet, in evaluating the network of that researcher (how central and well-connected they are), one should give more weight to papers with more coauthors, and give more weight to more connected coauthors (i.e., a well-connected coauthor “helps” this index). One benefit of the proposed approach is its simplicity, as it is easy to explain and understand.
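A minimal sketch of these two complementary scores, assuming a hypothetical record in which each paper is simply a list of author names: the fractional count weights each paper by 1/(number of authors), while the simple connectedness score counts distinct coauthors (weighting coauthors by how well connected they themselves are would require a centrality measure over the full coauthorship graph).

```python
# Hypothetical publication record: each paper is a list of author names.
papers = [
    ["Alice"],
    ["Alice", "Bob", "Carol"],
    ["Alice", "Bob", "Dana", "Eve"],
]

def fractional_count(papers, author):
    """Unique-impact score: each paper contributes 1 / (number of authors)."""
    return sum(1 / len(team) for team in papers if author in team)

def coauthor_degree(papers, author):
    """Simple connectedness score: the number of distinct coauthors."""
    coauthors = {name for team in papers if author in team
                 for name in team if name != author}
    return len(coauthors)

print(round(fractional_count(papers, "Alice"), 2))  # 1.58 = 1 + 1/3 + 1/4
print(coauthor_degree(papers, "Alice"))             # 4 distinct coauthors
```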
An immediate concern in discounting large, multiauthored collaborative papers (e.g., by counting a single-authored paper as 1 and a 3-authored paper as a third of a paper) is that it disincentivizes large research teams as well as interdisciplinary research, which often advance the field. For example, an analysis of over 65 million papers, patents, and other research products showed that larger teams “develop” and build on past work while smaller teams “disrupt” existing paradigms (53). We need both approaches for scientific progress. In addition, this counting system could disincentivize giving credit where credit is due. People such as research assistants who substantially contributed are already all too often not acknowledged. This would only be exacerbated if adding an author decreases the measured unique impact of the other authors. Such a system could lead principal investigators to list research assistants in acknowledgments rather than as coauthors, which could have a larger impact on people from structurally disadvantaged backgrounds.
Counting replications.
The publish or perish culture of academia has fueled the replication crisis by incentivizing researchers to publish novel and unambiguous results. Rather than judging the impact of one’s work by number of citations, one could count the number of replication attempts it has produced. Having peers try to replicate a set of results is not a proxy for impact, as citations are, but a direct measure of the actual impact and interest the research has created in the peer community.
Incentivizing timely replications could be achieved by considering each replication as replicating not only the original study but also all the replications that preceded it. In this manner, the original study gains the most from follow-up replications, the first replication gains the second most, etc. In this way, we incentivize a race to be an early replicator of work that is judged by independent researchers as likely to generate large waves of replications. This mechanism also has the advantage of being self-regulatory. As more replications (either successful or not) accumulate, a better evaluation of the robustness of the result is achieved while at the same time the incentive to replicate declines as being a late replicator is less likely to produce many future replications, unless the robustness is still unresolved.
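On one reading of this scheme (a sketch of the mechanism, not a worked-out proposal), a finding with R replications in chronological order gives the original study R credits, the first replication R − 1, and so on, since each new replication credits the original and every replication that preceded it.

```python
def replication_credits(n_replications):
    """Credit earned by the original study and each replication, assuming
    each new replication also credits all earlier replications.

    Returns a list: index 0 is the original study, index k is the k-th
    replication in chronological order.
    """
    # The original is credited by all n_replications attempts; the k-th
    # replication is credited by the (n_replications - k) attempts after it.
    return [n_replications - k for k in range(n_replications + 1)]

print(replication_credits(4))  # [4, 3, 2, 1, 0]
```

The final entry stays at zero until a further replication appears, mirroring the declining incentive for late replicators described above.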
While incentivizing replication might help with the replication crisis, it could come at the cost of scientific discovery. Replication is fundamental for verifying the robustness of scientific findings, but excessive emphasis solely on replication may divert resources and attention away from innovative research endeavors. Thus, care is needed when designing an incentive model for replication so that an emphasis on reproducibility does not hinder exploration and innovation (54). For example, the incentivization model proposed in this section is designed to be self-regulatory to avoid excessive emphasis solely on replication. While such a model might be self-regulatory, it is more complex than simply counting replication successes and failures. Thus, the complexity of the approach could hinder its adoption.
Rewarding societal impact.
Much research is funded by public bodies. It is therefore reasonable for the public—either directly or through their elected representatives—to expect some payoff in return (at least in the long run). This necessitates some measure of the societal impact of research. For example, in the UK there is now much emphasis on “public engagement” and, even more so, on “impact.” Both of these have obvious advantages as alternative means of evaluating scientific contributions, but they are also beset with problems because they are often difficult to measure or evaluate. One solution is to consider output that can be easily counted (such as being cited in policy decisions, legal briefs, etc.) that shows how one’s work is impacting the world. This, of course, assumes that such reports and briefings are discoverable. The Overton.io (https://www.overton.io/) platform represents a notable step in that direction, permitting quantification of the policy influence of academic outputs and of the “gray” literature such as technical reports for governments and other public bodies.
However, many forms of public engagement (such as talks at public gatherings, science fairs, etc.) are difficult to track with performance measures such as surveys or polls. One solution is to measure the input rather than the output—that is, measure the time and effort spent on generating engagement or impact and review the scientific products that go into those efforts (e.g., talks, policy briefs) rather than the downstream consequences that may or may not be timely and measurable. It should be noted that this solution could introduce its own set of problems regarding how researchers should prioritize these efforts relative to other activities. Additionally, metrics of time and effort might be more complex and more difficult to understand than ones based on counting output.
Incentivizing Quality Over Quantity.
Even without developing alternative metrics, there are some easy and relatively minor changes to academic evaluation that could alter narrow incentive structures. For example, one possibility that avoids some of the pernicious effects of counting citations is for researchers to indicate and describe their top N publications (where N is relatively small, such as 5 or 10, and determined by the field) from the past several years (with the window again determined by the field), thus incentivizing quality over quantity (also see the article “Alternative Models of Research Funding” in this special feature for a discussion of these issues). Focusing more explicitly on the best quality research, and having the scientists themselves express what is truly novel and important about it, can also reduce the burden on review committees, making such assessments more economical. For this approach to be most effective, it is critical for members of committees and panels to read the top papers and their summaries rather than relying on metrics to evaluate these papers.
Several organizations have already adopted this approach. The ERC asks investigators to list no more than their five (Starting Grants) or ten (Consolidator and Advanced Grants) best outputs in their proposals. The Dutch Research Council asks investigators for their 10 most relevant outputs, including papers, software, databases, and patents. The NSF in the United States similarly limits the publications listed in investigators’ biographical sketches to the five products most closely related to the proposed project and five other significant products. The Research Excellence Framework (REF) in the UK, a system for evaluating the research quality of higher education institutions, likewise limits the number of outputs academics can submit: in the 2021 REF, no academic could submit more than five outputs for evaluation (55).
The “quality over quantity” approach is not without limitations. Whether it works as intended or is abused hinges on how “quality” is defined. For example, in economics, quality is often equated with publishing in the “Top Five” journals, and a single publication in the Top Five is often the difference between getting tenure or not (56). This emphasis on a select few journals can create a “publication funnel,” in which innovative or unconventional ideas may struggle to gain recognition if they do not align with the perceived preferences of the top journals. Moreover, this intense focus may incentivize researchers to prioritize publishing in these journals over addressing pressing real-world issues, potentially limiting the field’s responsiveness to current societal challenges.
Thus, the key to a “top publication” evaluation approach is to disentangle the papers from the journals they are published in. Importantly, institutions should recognize alternative publication models as valuable forms of research dissemination. Otherwise, we simply replace one flawed metric (quantity assessed by total number of publications or citations) with another (quality assessed by journal impact factor).
Conclusions
Science has made significant advances, addressing crises such as HIV/AIDS and the recent COVID-19 pandemic with antiretroviral therapy and vaccines, respectively. However, it faces challenges stemming from misaligned incentive systems. Commercial publishers often capitalize on unpaid reviewers and charge high fees for sharing and accessing knowledge. Scientific societies, whether operating as nonprofit publishers or relying on commercial ones, regard their journals as revenue sources that sustain important activities such as annual conferences and research awards. The academic prestige economy has led to problems such as publication bias and, in severe cases, academic fraud (57), and has contributed to barriers for researchers from underrepresented and structurally disadvantaged backgrounds. These incentives are often misaligned with the core purposes of academic publishing, namely knowledge creation and dissemination. We advocate for aligning the objectives that publishing serves: research dissemination and academic evaluation for researchers, and resource support for publishers and scientific societies. In this perspective, we have presented various viewpoints on alternative publication and academic evaluation models that attempt to tackle this incentive alignment problem. We encourage the scientific community to explore these and other models and to study the impact of incentive changes on researchers’ behavior. Collectively, we can reshape academic publishing to better serve researchers and their scientific endeavors.
Acknowledgments
J.S.T. is supported by NSF grants SES-1846764 and SES-2242962, and the Alfred P. Sloan Foundation. A.F. is supported by the IBM faculty research fund at the University of Chicago, Booth School of Business. W.R.H. is supported by NSF grant SES-2242962. S.L. acknowledges financial support from the European Research Council (ERC Advanced Grant 101020961 PRODEMINFO), the Humboldt Foundation through a research award, the Volkswagen Foundation (grant “Reclaiming individual autonomy and democratic discourse online: How to rebalance human and algorithmic decision-making”), and the European Commission (Horizon 2020 grants 964728 JITSUVAX and 101094752 SoMe4Dem). S.L. also receives funding from Jigsaw (a technology incubator created by Google), from UK Research and Innovation through the Centre of Excellence, the Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN), and from European Union (EU) Horizon replacement funding (grant number 10049415). D.M. is supported by a Vidi grant (VI.Vidi.191.091) from the Dutch Research Council (NWO). M.C.M. is supported by NSF grants #2222453, 2243778, and 2322330; a Raikes Foundation Grant; and a Bill and Melinda Gates Foundation Grant. S.M. was supported by Schmidt Science Fellows, in partnership with the Rhodes Trust.
Author contributions
J.S.T., D.B.A., S.M.F., A.F., S.D.M.G., G.G., W.R.H., S.L., D.M., M.C.M., S.M., V.P., A.L.R., J.t.S., and A.R.T. wrote the paper.
Competing interests
J.S.T. is a former President of the Society for Mathematical Psychology. J.S.T. was involved in the negotiations of the Journal of Mathematical Psychology contract with Elsevier in 2018. S.M.F. is editor-in-chief of the Journal of Trial and Error. S.D.M.G. is cofounder of the Journal of Trial and Error. G.G. is vice-president of the European Research Council, beginning in 2024. S.L. is Chair of the Governing Board of the Psychonomic Society in 2024. S.L. serves on the European Research Advisory Council for Springer Nature in an unpaid capacity. D.M. is Editor-in-Chief of Behavior Research Methods. M.C.M. is a Schmidt Futures Foundation Innovation Fellow and a Research Affiliate at the Center for Advanced Study in the Behavioral Sciences at Stanford University. A.L.R. is a former Senior Editor of Neuron. J.t.S. is in the process of becoming editor (“recommender”) at Peer Community In Registered Reports.
Footnotes
J.S.T. is an organizer of this Special Feature.
This article is a PNAS Direct Submission.
Data, Materials, and Software Availability
There are no data underlying this work.
References
- 1. Bergstrom T. C., Free labor for costly journals? J. Econ. Perspect. 15, 183–198 (2001).
- 2. R. Johnson, A. Watkinson, M. Mabe, The STM Report, 5th edition: An Overview of Scientific and Scholarly Publishing (International Association of Scientific, Technical and Medical Publishers, 2018), p. 94.
- 3. Aczel B., Szaszi B., Holcombe A. O., A billion-dollar donation: Estimating the cost of researchers’ time spent on peer review. Res. Integr. Peer Rev. 6, 1–8 (2021).
- 4. K. Yup, How scientific publishers’ extreme fees put profit over progress (2023). www.thenation.com/article/society/neuroimage-elsevier-editorial-board-journal-profit/. Accessed 18 December 2023.
- 5. A. Fyfe et al., Untangling academic publishing: A history of the relationship between commercial interests, academic prestige and the circulation of research. Zenodo. 10.5281/zenodo.546100. Accessed 10 November 2023.
- 6. Lyons H. G., The Royal Society (CUP Archive, 1960).
- 7. S. Bull, A purveyor of garbage? Charles Carrington and the marketing of sexual science in late-Victorian Britain. Vic. Rev. 38, 55–76 (2012).
- 8. Anderson R., British Universities Past and Present (Bloomsbury Publishing, 2006).
- 9. Ringer F. K., The Decline of the German Mandarins: The German Academic Community, 1890–1933 (Wesleyan University Press, 1990).
- 10. C. Babbage, “Reflections on the decline of science in England and on some of its causes” in The Works of Charles Babbage, M. Campbell-Kelly, Ed. (London Pickering, 1830).
- 11. Collins P., The Royal Society and the Promotion of Science Since 1960 (Cambridge University Press, 2016).
- 12. Morley L., Troubling intra-actions: Gender, neo-liberalism and research in the global academy. J. Educ. Policy 31, 28–45 (2016).
- 13. Coate K., Howson C. K., Indicators of esteem: Gender and prestige in academic work. Br. J. Sociol. Educ. 37, 567–585 (2016).
- 14. Meadows J., The Growth of Journal Literature: A Historical Perspective. The Web of Knowledge: A Festschrift in Honor of Eugene Garfield (Information Today Inc., Medford, NJ, 2000).
- 15. Suiter A. M., Sarli C. C., Selecting a journal for publication: Criteria to consider. Mo. Med. 116, 461 (2019).
- 16. Baldwin M., Credibility, peer review, and Nature, 1945–1990. Notes Rec. R. Soc. J. Hist. Sci. 69, 337–352 (2015).
- 17. Larivière V., Haustein S., Mongeon P., The oligopoly of academic publishers in the digital era. PLoS One 10, e0127502 (2015).
- 18. Butler D., The dark side of publishing. Nature 495, 433 (2013).
- 19. Dewsbury D. A., History of the Psychonomic Society II: The journal publishing program. Psychon. Bull. Rev. 3, 322–338 (1996).
- 20. Psychonomic Society: Strategic plan 2024–2030 (2023). https://www.psychonomic.org/page/stratplancomments. Accessed 18 December 2023.
- 21. J. Bing, Reed Elsevier buys Harcourt General (2000). https://variety.com/2000/more/news/reed-elsevier-buys-harcourt-general-1117788402/. Accessed 18 December 2023.
- 22. Csiszar A., How lives became lists and scientific papers became data: Cataloguing authorship during the nineteenth century. Br. J. Hist. Sci. 50, 23–60 (2017).
- 23. Olson M. V., The human genome project. Proc. Natl. Acad. Sci. U.S.A. 90, 4338–4344 (1993).
- 24. Roberts L., Timeline: A history of the human genome project. Science 291, 1195–1200 (2001).
- 25. Brüning O., Burkhardt H., Myers S., The Large Hadron Collider. Prog. Part. Nucl. Phys. 67, 705–734 (2012).
- 26. Newman M. E., The structure of scientific collaboration networks. Proc. Natl. Acad. Sci. U.S.A. 98, 404–409 (2001).
- 27. Hofstra B. et al., The diversity-innovation paradox in science. Proc. Natl. Acad. Sci. U.S.A. 117, 9284–9291 (2020).
- 28. A. Fazackerley, “Too greedy”: Mass walkout at global science journal over “unethical” fees (2023). https://www.theguardian.com/science/2023/may/07/too-greedy-mass-walkout-at-global-science-journal-over-unethical-fees. Accessed 18 December 2023.
- 29. N. Lawrence, Why thousands of AI researchers are boycotting the new Nature journal (2018). https://www.theguardian.com/science/blog/2018/may/29/why-thousands-of-ai-researchers-are-boycotting-the-new-nature-journal. Accessed 18 December 2023.
- 30. The cost of knowledge. http://thecostofknowledge.com. Accessed 18 December 2023.
- 31. Lampinen A. K., Chan S. C. Y., Santoro A., Hill F., Publishing fast and slow: A path toward generalizability in psychology and AI. Behav. Brain Sci. 45, e26 (2022).
- 32. Armeni K. et al., Towards wide-scale adoption of open science practices: The role of open science communities. Sci. Public Policy 48, 605–611 (2021).
- 33. S. M. Field, “Charting the Constellation of Science Reform,” PhD thesis, University of Groningen, Groningen, Netherlands (2022).
- 34. EMBO, Early evidence base (2023). https://eeb.embo.org/refereed-preprints/review-commons. Accessed 18 December 2023.
- 35. P. Dhar, Octopus and ResearchEquals aim to break the publishing mould. Nature (2023). 10.1038/d41586-023-00861-0. Accessed 15 January 2024.
- 36. S. Field, Risk reform, or remain within the academic monolith? Psy. 36, 45–47 (2023).
- 37. Yamada Y., Micropublishing during and after the COVID-19 era. Collabra: Psychol. 6, 36 (2020).
- 38. Rowley J., Sbaffi L., Sugden M., Gilbert A., Factors influencing researchers’ journal selection decisions. J. Inf. Sci. 48, 321–335 (2022).
- 39. Sanderson K., Science’s fake-paper problem: High-profile effort will tackle paper mills. Nature 626, 17–18 (2024).
- 40. Dougherty M. R., Horne Z., Citation counts and journal impact factors do not capture some indicators of research quality in the behavioural and brain sciences. R. Soc. Open Sci. 9, 220334 (2022).
- 41. Bornmann L., Schier H., Marx W., Daniel H. D., What factors determine citation counts of publications in chemistry besides their quality? J. Informet. 6, 11–18 (2012).
- 42. V. Larivière et al., A simple proposal for the publication of journal citation distributions. bioRxiv [Preprint] (2016). 10.1101/062109. Accessed 10 February 2024.
- 43. Haley M. R., On the inauspicious incentives of the scholar-level h-index: An economist’s take on collusive and coercive citation. Appl. Econ. Lett. 24, 85–89 (2017).
- 44. Chapman C. A. et al., Games academics play and their consequences: How authorship, h-index and journal impact factors are shaping the future of academia. Proc. R. Soc. B 286, 20192047 (2019).
- 45. Tagiew R., Ignatov D. I., “Behavior mining in h-index ranking game” in CEUR Workshop Proceedings (Experimental Economics and Machine Learning, 2017), vol. 1968, pp. 52–61.
- 46. B. A. Sabel, E. Knaack, G. Gigerenzer, M. Bilc, Fake publications in biomedical science: Red-flagging method indicates mass production. medRxiv [Preprint] (2023). 10.1101/2023.05.06.23289563. Accessed 15 May 2024.
- 47. Joelving F., Retraction Watch, Paper trail. Science 383, 252–255 (2024).
- 48. Woolston C., Grants and hiring: Will impact factors and h-indices be scrapped? Nature (2022). 10.1038/d41586-022-02984-2. Accessed 15 December 2023.
- 49. Woolston C. et al., Impact factor abandoned by Dutch university in hiring and promotion decisions. Nature 595, 462 (2021).
- 50. Universiteiten van Nederland, Room for everyone’s talent: Towards a new balance in the recognition and rewards of academics (2020). https://recognitionrewards.nl/wp-content/uploads/2020/12/position-paper-room-for-everyones-talent.pdf. Accessed 15 December 2023.
- 51. Hicks D., Wouters P., Waltman L., De Rijcke S., Rafols I., Bibliometrics: The Leiden Manifesto for research metrics. Nature 520, 429–431 (2015).
- 52. Narrative CVs: A new challenge and research agenda (2023). https://www.leidenmadtrics.nl/articles/narrative-cvs-a-new-challenge-and-research-agenda. Accessed 18 December 2023.
- 53. Wu L., Wang D., Evans J. A., Large teams develop and small teams disrupt science and technology. Nature 566, 378–382 (2019).
- 54. Lewandowsky S., Oberauer K., Low replicability can support robust and efficient science. Nat. Commun. 11, 358 (2020).
- 55. Guidance on REF 2021 results (2022). https://archive.ref.ac.uk/guidance-on-results/guidance-on-ref-2021-results. Accessed 18 December 2023.
- 56. Heckman J. J., Moktan S., Publishing and promotion in economics: The tyranny of the top five. J. Econ. Lit. 58, 419–470 (2020).
- 57. N. Scheiber, A dishonesty expert is labeled a liar. New York Times, 2008, p. 1.