PLOS ONE. 2021 Feb 23;16(2):e0246675. doi: 10.1371/journal.pone.0246675

Honest signaling in academic publishing

Leonid Tiokhin 1,*, Karthik Panchanathan 2, Daniel Lakens 1, Simine Vazire 3, Thomas Morgan 4,5, Kevin Zollman 6
Editor: Wing Suen
PMCID: PMC7901761  PMID: 33621261

Abstract

Academic journals provide a key quality-control mechanism in science. Yet, information asymmetries and conflicts of interests incentivize scientists to deceive journals about the quality of their research. How can honesty be ensured, despite incentives for deception? Here, we address this question by applying the theory of honest signaling to the publication process. Our models demonstrate that several mechanisms can ensure honest journal submission, including differential benefits, differential costs, and costs to resubmitting rejected papers. Without submission costs, scientists benefit from submitting all papers to high-ranking journals, unless papers can only be submitted a limited number of times. Counterintuitively, our analysis implies that inefficiencies in academic publishing (e.g., arbitrary formatting requirements, long review times) can serve a function by disincentivizing scientists from submitting low-quality work to high-ranking journals. Our models provide simple, powerful tools for understanding how to promote honest paper submission in academic publishing.

Introduction

Imagine a world in which scientists were completely honest when writing up and submitting their papers for publication. What might such a world look like? You might start with the observation that, like everything else, scientific research varies. Some research is well thought-through, conducted rigorously, and makes an important contribution to science. Other research is poorly thought-through, conducted hastily, and makes little contribution to science. You might imagine that scientists know quite a bit about the quality of their research and do everything that they can to honestly communicate this information to readers. High-quality research would be written-up as “amazing”, “inspiring”, and “robust”, while low-quality research would be written-up as “weak”, “derivative” and “inconclusive”. Scientists would then submit high-quality research to high-ranking journals and low-quality research to lower-ranking ones. Imagine, no deception. In such a world, peer- and editorial-review of submissions would be straightforward, because quality would be readily apparent. Publication would be quick and painless, journals would reliably distinguish high- from low-quality research, and journal rank would provide a good proxy for research quality. Sure, you might feel like a dreamer. But you wouldn’t be the only one [1–8].

Obviously, we don’t live in such a world. After all, scientists are only human. We sometimes struggle to explain why our work is important or forget to write up critical aspects of our research protocols. Like all humans, we also seek out information that confirms our pre-existing beliefs, use biased reasoning strategies to arrive at pre-desired conclusions, and self-deceive to present ourselves in a better light [9–11]. Such factors can lead us to overestimate the quality of our work. And even if we were perfectly aware of our work’s quality, we are incentivized to present it in an overly positive light [6]. One consequence is that we increasingly describe our research as “amazing”, “inspiring”, and “robust”, despite the fact that such research is increasingly rare [12, 13]. We then submit this research to high-ranking journals, in part, because our careers benefit from publishing in prestigious and impactful outlets [14–17]. As a consequence, journal editors and peer reviewers must invest considerable time and effort to distinguish high- from low-quality research. While this process does help to filter work by its quality, it is slow and unreliable, and depends on reviewers’ limited goodwill [7, 18–20]. As a result, low-quality research gets published in high-ranking journals [21–23]. An unfortunate property of this system is that journal rank provides a poor proxy for research quality [24]. And to top it off, we are well aware of these problems, and regularly complain that the publishing process is “sluggish and inconsistent” [25], “painful” [7], and “a game of getting into the right journals” [8].

What can be done to remedy this situation? This paper addresses one set of impediments to achieving an honest and effective publishing system: the information asymmetries and conflicts of interest that incentivize scientists to deceive journals about aspects of their research. Note that the term “deception” is used to refer to behaviors in which individuals communicate information that does not accurately (i.e., honestly) reflect some underlying state. This paper makes no assumptions about the motives of individual scientists or the mechanisms by which deception occurs.

Information asymmetries and conflicts of interest in academic publishing

Academic journals are vulnerable to deception. This vulnerability exists for two interrelated reasons. First, there are information asymmetries between scientists and journals, as scientists have more information about aspects of their research than is presented in a paper (e.g., what the raw data look like, the scientists’ original predictions versus those reported, what occurred during data-collection and analysis versus what was written up) [26]. Second, there are conflicts of interest between scientists and journals. For example, scientists have an incentive to publish each paper in a high-ranking journal, but high-ranking journals prefer to publish only a subset of papers (e.g., those with rigorous methods, compelling evidence, or novel results). By getting research published in high-ranking journals regardless of its true value, scientists can reap the benefits of high-ranking publications without doing high-value research [17].

One dimension along which journals are vulnerable to deception is research quality, and such deception imposes costs on the scientific community. First, if low-quality research is “deceptively” submitted to high-ranking journals, editors and reviewers must waste time evaluating and filtering out low-quality submissions. This extra time burden reduces the efficiency of science. Second, because peer-review is imperfect, some low-quality papers will “slip through the cracks” and be published in high-ranking journals. As a consequence, any correlation between journal rank and paper quality will be reduced. This reduced correlation impedes accurate decision-making, as scientists rely on journal rank to decide which papers to read, which research paradigms to emulate, and which scientists to hire and fund [3, 16, 17, 27]. Third, if low-quality research requires less time to conduct than high-quality research but can still be published in high-ranking journals, then scientists have little incentive to invest in high-quality research. This can result in adverse selection: high-quality research is driven out of the academic market until low-quality research is all that remains [26, 28].

The problem of deception in communication systems is not unique to academic publishing—whenever there are information asymmetries and conflicts of interest, there are incentives to deceive. Consider three examples.

  1. A mother bird brings food back to her nest and must decide which nestling to feed. The mother prefers to feed her hungriest child, and thus benefits from knowing how much food each child needs. But each child may prefer to receive the food for itself. That is, the mother would benefit if her children honestly communicated their level of hunger, but each child may benefit from deceiving its mother by claiming to be the hungriest.

  2. A family needs a new car and decides to buy from a used-car dealership. The family prefers to buy a car without mechanical defects, worn-out parts, or a history of major collisions. But the used-car salesman prefers to make as much money as he can. This means selling even unreliable cars. The family would benefit if the used-car salesman honestly communicated which cars were reliable and which weren’t, but the used-car salesman may benefit from deceiving the family by claiming that an unreliable car is of high-quality.

  3. A university department is hiring a new faculty member and invites three candidates to give job talks. All else equal, the department prefers to hire a rigorous scholar—one who carefully thinks through each project, uses rigorous methods, and transparently reports all results and analyses. But each candidate prefers to get a job, even if they are not rigorous. That is, departments would benefit if job candidates gave talks that honestly communicated their scholarly rigor, but candidates benefit from deceiving departments by only communicating information that makes them seem rigorous (even if they are not).

How can honesty be ensured despite incentives for deception? The theory of honest signaling can shed light on this question. In economics [29, 30] and biology [31, 32], signaling theory represents an attempt to understand how honest communication can exist in situations where there appear to be incentives for dishonesty. Below, we apply the logic of signaling theory to the publication process using a set of formal theoretical models. This formal approach has several affordances, including making assumptions clear and explicit, ensuring that arguments are logically coherent, and providing insights into questions that are difficult to intuit (e.g., should publishing be made as efficient as possible so that papers can be rapidly submitted and evaluated by reviewers, or might removing publishing inefficiencies have unintended consequences?). However, our approach also has important limitations (see Discussion).

A simple model of academic publishing

For simplicity, assume that a scientist produces two kinds of papers: high-quality and low-quality (see Discussion). In addition to other features, a high-quality paper might be one that thoroughly describes prior research, is methodologically rigorous, conducts appropriate statistical analyses and sensitivity checks, honestly reports all analyses and measures (e.g., no p-hacking or selective reporting of positive results), and clearly distinguishes between exploratory and confirmatory findings (see [5] for other factors that affect research quality). In contrast, a low-quality paper may have fewer or none of these qualities. In reality, scientists may disagree about which methods or analyses are best, but there is often consensus that certain practices reduce research quality.

Conditional on paper quality, the scientist decides whether to submit to a high- or low-ranking journal (because submission is conditioned on paper type, the proportion of high- or low-quality papers is not relevant to our model). Publishing in a high-ranking journal results in payoff B, while publishing in a low-ranking journal results in payoff b, where B > b. These payoffs represent all the benefits that a scientist may receive from publication, including prestige, promotion, citations, or an increased probability of obtaining future funding.

Journals have imperfect information about the quality of submitted papers and thus probabilistically determine submission quality. High-ranking journals accept high- and low-quality papers with probabilities Ph and Pl, respectively. We assume that high-ranking journals are more likely to accept high-quality than low-quality papers (Ph > Pl). The screening process that makes high-quality papers have a higher probability of acceptance could occur via various mechanisms (e.g., editor screening; peer-reviewer evaluation). If a paper is rejected from a high-ranking journal, the scientist resubmits the paper to a low-ranking journal. We assume that low-ranking journals accept all submitted papers. This assumption makes the model easier to understand without affecting its generality (because only the ratio of high- to low-ranking acceptance probabilities affects scientists’ expected payoffs) and the model results are identical if low-ranking journals probabilistically accept submitted papers (see S1 File). Further, as scientists receive a fixed payoff for low-ranking submissions, this payoff can also be interpreted as the fixed payoff to a scientist who cuts their losses and chooses not to submit their work for publication.

In this model and subsequent modifications, we focus on situations in which scientists strategically submit papers to journals and journals adopt fixed behaviors. However, our qualitative results generalize to a model in which journals are also strategic actors (see S1 File and S1 Fig). The assumption of fixed journal behaviors is reasonable if scientists can adjust their behavior on shorter timescales than can academic journals. Further, it allows us to simply illustrate the same mechanisms that ensure honesty in true signaling games with two-sided strategic interaction (e.g., differential benefits and differential costs) [33].

Given this model, where should a scientist send high- and low-quality work in order to maximize their payoffs?

A high-quality paper is worth submitting to a high-ranking journal instead of a low-ranking journal when:

BPh + b(1 − Ph) > b
Ph(B − b) > 0 (1)

A low-quality paper is worth submitting to a high-ranking journal instead of a low-ranking journal when:

BPl + b(1 − Pl) > b
Pl(B − b) > 0 (2)

Both conditions are satisfied if 1) a scientist can publish in a high-ranking journal with non-zero probability and 2) a high-ranking publication is worth more than a low-ranking publication. In other words, in this model, scientists benefit from submitting all papers to high-ranking journals, regardless of paper quality.
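To make the payoff comparison in Eqs (1) and (2) concrete, the expected values can be computed directly. The short Python sketch below uses illustrative parameter values (our own assumptions, not estimates from data) and shows that, whenever Ph, Pl > 0 and B > b, the expected payoff of a high-ranking submission exceeds b for both paper types.

    # Baseline model (no submission costs): expected payoff of first submitting to a
    # high-ranking journal, assuming rejected papers are then accepted by a
    # low-ranking journal. Parameter values are illustrative assumptions.
    def expected_payoff_high(B, b, P_accept):
        """Expected payoff of submitting to the high-ranking journal first."""
        return P_accept * B + (1 - P_accept) * b

    B, b = 10.0, 2.0      # payoffs of high- vs. low-ranking publication (B > b)
    Ph, Pl = 0.6, 0.1     # acceptance probabilities at the high-ranking journal

    for label, P in [("high-quality", Ph), ("low-quality", Pl)]:
        ev = expected_payoff_high(B, b, P)
        print(f"{label}: E[high-ranking submission] = {ev:.2f} vs. low-ranking payoff b = {b}")
    # Both expected values (6.80 and 2.80) exceed b = 2, so without costs the
    # scientist submits every paper to the high-ranking journal (Eqs 1 and 2).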

This illustrates a key conflict of interest in academic publishing. Scientists are incentivized to signal that their work is high-quality (even when it is not), whereas journals prefer to know the true quality of the work. However, this conflict can be resolved by changing publishing incentives. Making journal submissions costly is one mechanism for doing so.

Submission costs

Now assume that submitting a paper for publication is costly. Such costs could include any aspect of the submission process that requires time (e.g., writing a compelling cover letter, meeting stringent formatting requirements, waiting for a journal’s decision) or money (e.g., submission fees), independent of paper quality. These costs can be conceptualized as either originating from the scientist (e.g., a signal) or as being enforced by the journal (e.g., a screening mechanism) [34]. Assume that scientists pay a cost, C, to submit a paper to a high-ranking journal and a cost, c, to submit a paper to a low-ranking journal, where B > C and b > c. All scientists pay a cost once, but those whose papers are rejected from the high-ranking journal and are re-submitted to the low-ranking journal pay both costs. For mathematical simplicity, we analyze a case where c = 0, and do not include c in our analyses. The qualitative results are identical in cases where low-ranking submissions have a non-zero cost (see S1 File).

With the addition of submission costs, how should a scientist now behave? A high-quality paper is worth submitting to a high-ranking journal instead of a low-ranking journal when:

BPh + b(1 − Ph) − C > b
Ph(B − b) > C (3)

A low-quality paper is worth submitting to a high-ranking journal instead of a low-ranking journal when:

BPl + b(1 − Pl) − C > b
Pl(B − b) > C (4)

With submission costs, scientists only submit to high-ranking journals when the benefits of doing so outweigh the costs.

Separating high-quality from low-quality papers

Eqs (3) and (4) define the conditions under which each kind of paper should be submitted to a high-ranking journal. If we seek to separate high-quality papers from low-quality papers based on the journal to which they are submitted, then the ideal case is where scientists submit high-quality papers to high-ranking journals and low-quality papers to low-ranking journals. Such a separating equilibrium [35] exists when:

Ph(B − b) − C > 0 > Pl(B − b) − C
Ph(B − b) > C > Pl(B − b) (5)

Introducing submission costs creates a range of parameters in which honest submission is possible. The key insight is that imposing costs can promote honesty. This occurs because scientists who have low- and high-quality papers pay the same submission cost but have different expected benefits [33, 36] when submitting to high-ranking journals, as low-quality papers are less likely to be accepted.

Honesty is possible when the cost of high-ranking submission, C, is larger than the expected added benefit of submitting a low-quality paper to a high-ranking journal, Pl (B–b), but smaller than the expected added benefit of submitting a high-quality paper to a high-ranking journal, Ph (B–b). As high-ranking publications become worth more than low-ranking publications (larger values of B–b), larger submission costs are required to ensure honest submission; otherwise, scientists will be tempted to submit all papers to high-ranking journals. However, if submission costs are too large, no separation exists, as no paper is worth submitting to a high-ranking journal. As journals become better able to differentiate high- from low-quality papers (larger values of Ph−Pl), the range of conditions under which honesty can exist becomes larger. Consider a case where high-ranking journals accept most high-quality papers and reject most low-quality ones. A scientist who submits low-quality papers to high-ranking journals pays a cost for each submission, but rarely obtains a high-ranking publication. As a result, dishonesty is not worthwhile unless the rare high-ranking publication is extremely valuable. In an extreme case where journals cannot tell the difference between high- and low-quality papers (Ph = Pl), honest submission cannot exist, because scientists lack any incentive to condition journal submission on paper quality.
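As an illustration of Eq (5), the sketch below (with the same assumed, purely illustrative parameter values as before) computes the interval of submission costs C for which honest submission is an equilibrium: Pl(B − b) < C < Ph(B − b).

    # Separating range for the submission cost C (Eq 5), with differential
    # benefits only. Parameter values are illustrative assumptions.
    B, b = 10.0, 2.0
    Ph, Pl = 0.6, 0.1

    lower = Pl * (B - b)   # below this, low-quality papers are also worth submitting
    upper = Ph * (B - b)   # above this, even high-quality papers are not worth submitting
    print(f"Honest submission requires {lower:.2f} < C < {upper:.2f}")
    # With these numbers: 0.80 < C < 4.80. Increasing B - b requires a larger C;
    # increasing Ph - Pl widens the honest range itself.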

In the above model, all scientists received the same benefit for high-ranking publications—differential benefits arose because of different acceptance probabilities for high- and low-quality papers. However, differential benefits can also exist if low-quality publications in high-ranking journals yield lower payoffs than high-quality publications, and the same logic holds in that case. Thus, differential benefits of this kind can also ensure honest submission (see S1 File). Such differential benefits are plausible. For example, publications in high-ranking journals are preferentially chosen for direct replication [21–23] and may be more heavily scrutinized for errors and fraud [24, 37], which increases the probability that low-quality papers in high-ranking journals are detected.

Differential costs

In the previous model, all papers were equally costly to submit for publication, and honesty was maintained because high- and low-quality papers produced different expected benefits for scientists. Another mechanism by which costs can ensure honesty is via differential costs. Differential costs exist if signal costs vary conditional on a signaler’s type [33, 38]. In the context of our model, this would mean that submission costs differ depending on paper quality. Examples of such differential costs could include peer-review taking longer for low-quality papers [39] or it being more difficult to present low-quality work as compelling research (just as it may be harder to write compelling grant proposals for bad ideas than good ones [40]).

Assume that scientists pay ex ante submission costs Cl and Ch to submit low- and high-quality papers, respectively, to high-ranking journals, where low-quality papers have higher submission costs (Cl > Ch). With differential costs, a high-quality paper is worth submitting to a high-ranking journal instead of a low-ranking journal when:

Ph(B − b) > Ch (6)

A low-quality paper is worth submitting to a high-ranking journal instead of a low-ranking journal when:

Pl(B − b) > Cl

Rewriting this inequality to express the submission costs of low- and high-quality papers in the same units, substitute Cl = kCh, where k > 1. The inequality thus becomes:

Pl(B − b) > kCh (7)

A separating equilibrium, such that scientists submit only high-quality papers to high-ranking journals, exists when:

Ph(B − b) > Ch > Pl(B − b)/k (8)

As in the differential-benefits only model (Eq 5), honest submission is more likely when journals can reliably differentiate between high- and low-quality papers. The range of conditions for honesty becomes larger as submitting low- versus high-quality papers to high-ranking journals becomes relatively costlier (large values of k). Differential costs promote honest submission (regardless of whether scientists receive differential benefits) by reducing the relative payoff of submitting low- versus high-quality papers to high-ranking journals.
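A minimal numeric check of Eq (8), again under assumed parameter values: with a cost multiplier k for low-quality submissions (a hypothetical value chosen for illustration), honesty requires Pl(B − b)/k < Ch < Ph(B − b).

    # Differential costs (Eq 8): low-quality submissions cost k times more than
    # high-quality ones (Cl = k * Ch). Parameter values are illustrative assumptions.
    B, b = 10.0, 2.0
    Ph, Pl = 0.6, 0.4      # a journal that discriminates only weakly
    k = 3.0                # low-quality papers are 3x costlier to submit

    lower = Pl * (B - b) / k
    upper = Ph * (B - b)
    print(f"Honest submission requires {lower:.2f} < Ch < {upper:.2f}")
    # Without differential costs (k = 1) the range would be (3.20, 4.80);
    # with k = 3 it widens to (1.07, 4.80), so honesty is easier to sustain.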

Costs for resubmitting rejected papers

In the previous models, we assumed that initial submissions were costly, but that there was no unique cost to resubmitting rejected papers to low-ranking journals. Below, we modify this assumption by making resubmission costly. Resubmission costs may be important if papers rejected from high-ranking journals have a lower expected value due to an increased probability of being “scooped” or take longer to publish elsewhere because scientists must make formatting modifications. Alternatively, if decision letters and reviews are openly available [41], rejected papers may lose value if other scientists lower their assessment of a paper’s quality based on previous negative reviews.

Consider a modified version of the differential-benefits model (Eq 5) such that re-submitting papers rejected from high-ranking journals has a cost, cr, where cr < b. A high-quality paper is worth submitting to a high-ranking journal instead of a low-ranking journal when:

BPh + (1 − Ph)(b − cr) − C > b
Ph(B − b + cr) > C + cr (9)

A low-quality paper is worth submitting to a high-ranking journal instead of a low-ranking journal when:

BPl + (1 − Pl)(b − cr) − C > b
Pl(B − b + cr) > C + cr (10)

A separating equilibrium, such that scientists submit only high-quality papers to high-ranking journals, exists when:

Ph > (C + cr)/(B − b + cr) > Pl (11)

Because low-quality papers are more likely to be rejected than high-quality papers, resubmission costs are disproportionately paid by scientists who submit low-quality papers to high-ranking journals. This can ensure honesty, even when initial submissions are cost free (C = 0).
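The same kind of check applies to Eq (11). The sketch below, again with assumed illustrative values and with the initial submission cost set to zero, shows how a resubmission cost cr alone can open a separating range.

    # Resubmission costs (Eq 11): honesty requires Pl < (C + cr)/(B - b + cr) < Ph.
    # Initial submission is free (C = 0); values are illustrative assumptions.
    B, b = 10.0, 2.0
    Ph, Pl = 0.6, 0.1
    C, cr = 0.0, 1.5       # free initial submission, costly resubmission

    threshold = (C + cr) / (B - b + cr)
    print(f"threshold = {threshold:.2f}; separating equilibrium: {Pl < threshold < Ph}")
    # Here the threshold is about 0.16, which lies between Pl = 0.1 and Ph = 0.6,
    # so only high-quality papers are worth sending to the high-ranking journal first.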

Limiting the number of submissions: A cost-free mechanism to ensure honesty

We have thus far assumed that scientists could resubmit rejected papers to low-ranking journals. This is how academic publishing tends to work: scientists can indefinitely resubmit a paper until it is accepted somewhere [14]. However, such a system allows authors to impose a large burden on editors and reviewers. Might it be beneficial to limit submissions in some way? Below, we modify our model such that resubmissions are not possible. This could represent a situation in which papers can only be submitted a limited number of times [39] (see [4] for a related idea about limiting scientists’ lifetime number of publications and [42] for a related proposal to limit scientists to one publication per year). Alternatively, it could represent a situation in which the cost of resubmitting papers to low-ranking journals is larger than the benefit of low-ranking publications (cr > b), such that scientists have no incentive to resubmit rejected papers. Whether such reforms are feasible or desirable is unclear, but their logical consequences are important to understand.

For simplicity, assume that scientists can only submit each paper to one journal and that all submissions are cost-free. A high-quality paper is worth submitting to a high-ranking journal instead of a low-ranking journal when:

BPh > b (12)

A low-quality paper is worth submitting to a high-ranking journal instead of a low-ranking journal when:

BPl > b (13)

Thus, a separating equilibrium, such that scientists submit only high-quality papers to high-ranking journals, exists when:

BPh > b > BPl (14)

Limiting the number of submissions can ensure honesty, even if submission is cost-free. As in the previous models, the range of conditions for honesty becomes larger when journals can reliably differentiate between high- and low-quality papers (large values of Ph−Pl). If high-ranking publications are worth much more than low-ranking ones (B >> b), honest submission can only exist if low-quality papers are usually rejected from high-ranking journals. In contrast, if high-ranking publications are not worth much more than low-ranking ones (small values of B–b), honest submission can only exist if high-quality papers are usually accepted by high-ranking journals.

Limiting the number of submissions works because scientists receive no payoff when a paper is rejected. In contrast, in the previous models, scientists could resubmit rejected papers to low-ranking journals and receive the smaller benefit, b. When the number of submissions is limited, scientists face an opportunity cost because submitting to one journal precludes submission to another. This disincentivizes deceptive submission as long as the expected value of a higher-probability, low-ranking publication outweighs the expected value of a lower-probability, high-ranking one. Note that, for illustrative purposes, we modeled an extreme case in which papers could only be submitted once. In the real world, less-strict submission limitations may be more feasible, but the mechanism by which limiting submissions ensures honesty would still apply.
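For a concrete feel of Eq (14), the sketch below compares the two expected payoffs when each paper can be submitted only once, under the same assumed, purely illustrative parameter values.

    # One-shot submission (Eq 14): a rejected paper earns nothing, so the choice is
    # between B * P (high-ranking) and the sure payoff b (low-ranking).
    # Parameter values are illustrative assumptions.
    B, b = 10.0, 2.0
    Ph, Pl = 0.6, 0.1

    print(f"high-quality: B*Ph = {B * Ph:.1f} vs. b = {b}")
    print(f"low-quality:  B*Pl = {B * Pl:.1f} vs. b = {b}")
    # B*Ph = 6.0 > b, but B*Pl = 1.0 < b, so the opportunity cost of a likely
    # rejection keeps low-quality papers out of high-ranking journals, even
    # though submission itself is free.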

Relation to existing models in economics

Similar questions regarding signaling [30, 43, 44] and incentive structures in academic publishing [45–47] have a long history of study in economics. Most relevant to our paper, models have analyzed the optimal order in which authors should submit papers to academic journals, conditional on varying payoffs to publication, acceptance probabilities, publication delays, and authors’ levels of impatience [39, 48–52]. For example, in one model of editorial delay and optimal submission strategies [48], authors know the quality of their papers and prefer to publish each paper in a top journal, but top journals have higher rejection rates, paper submission is costly, and authors experience a time delay between rejection and resubmission. If submission costs and resubmission delays are sufficiently low, then authors’ optimal policy is to initially submit papers to the very top journal and subsequently work down the journal hierarchy. However, as submission costs and resubmission delays increase, authors are incentivized to make initial submissions to lower-ranking journals, thereby decreasing journals’ overall reviewing burden (see [49, 53] for the role of different types of publishing costs). In another game-theoretic model of the academic review process where authors have more information about paper quality than do journals, journals can increase the average quality of their published papers by increasing submission costs and reducing the noisiness of the review process, both of which disincentivize authors from submitting low-quality papers for publication [54]. Similar qualitative findings have emerged in several other models [50, 51].

Despite their potential to inform current discussions on scientific reform, the aforementioned models are not widely appreciated outside of economics. In part, this is due to their mathematical complexity, narrow target audience, and lack of connection to the burgeoning scientific-reform movement [5]. Our paper addresses these gaps by developing simple models that relate explicitly to proposals for scientific reform. We also go beyond economic theory and incorporate concepts from the theory of honest signaling in evolutionary biology (e.g., differential benefits and differential costs), which provide powerful conceptual tools for thinking about how to ensure honest communication. The explicit application of these models to recent proposals for scientific reform is essential, because the practical utility of models depends on the narrative within which they are embedded [55].

Implications

Our models have implications for how to modify academic publishing to promote honest paper submission, and provide a range of insights regarding the repercussions of various publishing reforms.

Publishing inefficiencies can serve a function

Scientists often complain about apparent inefficiencies in academic publishing, such as long review times, arbitrary formatting requirements, high financial costs to publication (e.g., Springer Nature’s recent "guided open access" scheme charging €2,190 for editorial assessment and peer-review of manuscripts [56]) and seemingly-outdated norms (e.g., writing cover letters) [7, 8, 57–59]. As a result, academic journals lower submission costs by offering rapid turnaround times (e.g., Nature, Science [60, 61]), allowing authors to pay for expedited peer-review (e.g., Scientific Reports [62]), offering “short report” formats [63, 64], or recommending against writing cover letters [65]. Our models imply that such moves towards efficiency, even if well intentioned, may produce collateral damage because inefficiencies can serve a function: the costs associated with publishing reduce the incentive to submit low-quality research to high-ranking journals. Consider an extreme scenario in which high-ranking journals made submissions cost-free, removed all formatting requirements, and guaranteed reviews within 48 hours. If the benefits of high-ranking publications remained large, scientists would have even larger incentives to submit low-quality research to high-ranking journals, because the costs of doing so would be trivial.

Cutting costs could have additional repercussions. In signaling games where signal costs are too low to ensure separating equilibria, there exist “hybrid” equilibria where receivers of signals engage in random rejection behavior to prevent being exploited by deceptive signalers [33]. As a result, honest signalers are sometimes mistaken for dishonest signalers and honest signals are rejected more frequently than in the higher-cost separating equilibrium. In academic publishing, well-meaning reforms to reduce publication costs could inadvertently lead to similar outcomes—scientists more frequently submit low-quality work to high-ranking journals, high-ranking journals counteract this by increasing rejection rates, and more high-quality papers are rejected as a consequence.

This illustrates why removing publishing inefficiencies without considering how they function in the larger scientific ecosystem may be counterproductive. An important question is whether the costs imposed on scientists are outweighed by the benefits of ensuring honest submission. This will not always be the case. For example, in research-funding competitions, the aggregate cost of writing proposals may outweigh the societal benefits of differentiating between high- and low-quality projects [40]. Similarly, the high signal costs necessary to ensure honest communication can leave both signalers and receivers worse off than in a system without any communication [66]. Making submission costs too large could also dissuade scientists from submitting to high-ranking journals at all (e.g., if fees such as Springer Nature’s “guided open access” charge mentioned above became prohibitive [56]). The fact that some high-ranking journals (e.g., Nature, Science, PNAS) continue to attract many papers and are preferentially targeted for initial submissions suggests that current submission costs are not this excessive [14, 67].

Better peer review promotes honest journal submission

If journals preferentially accept high- versus low-quality research, given sufficient submission costs, scientists will not benefit from submitting low-quality papers to high-ranking journals. Ensuring this outcome requires that journals reliably differentiate between high- and low-quality work. In theory, peer review is the primary pre-publication mechanism for doing so. But in practice, peer-review regularly fails to differentiate between submissions of varying quality [18–20, 68]. Improving the quality of peer-review is a major challenge, and some minor reforms (e.g., short-term educational interventions) have had limited success [69]. This suggests that more substantial changes should be considered. The space of possibilities for such reforms is large, and includes outsourcing peer-review to highly-trained experts [70], harnessing the wisdom of crowds via various forms of open peer-review [71], allowing scientists to evaluate each other’s peer-review quality [72], and supplementing peer review with "red teams" of independent critics [73]. However, because improving peer-review may be costly for journals, it is important to consider whether such costs are outweighed by the benefits of better discrimination between high- and low-quality papers [54].

Transparent research practices promote honest journal submission

Improving peer reviewers’ ability to distinguish low- from high-quality papers is difficult. In part, this is because reviewers lack relevant information to assess submission quality [26], a problem that is exacerbated by short-report article formats [64]. One solution is to reduce information asymmetries by mandating transparent and open research practices. Mechanisms for doing so include pre-registration, open sharing of data and materials [26], validating analyses before publication [41], removing word limits from the methods and results sections of manuscripts [74], requiring research disclosure statements along with submissions [75, 76], or requiring scientists to indicate whether their work adheres to various transparency standards (e.g., via the curatescience.org Web platform [77]).

It is worth noting that such reforms potentially increase the cost of peer-review, because reviewers spend extra time evaluating pre-registrations, checking raw data, and re-running analyses. Unless such costs are compensated (e.g., with financial rewards), reviewers will have even fewer incentives to do a good job. A similar problem exists in animal communication: if assessing signal veracity is too costly, receivers of signals may be better off by settling for signals that are less reliable [78]. This highlights the importance of ongoing efforts to reduce peer-review costs for papers with open data and pre-registered research (e.g., SMART pre-registration [79], machine-readable hypothesis tests [80]).

Reducing the relative benefit of publishing low-quality papers in high-ranking journals promotes honest journal submission

Honest submission is more likely if low-quality, high-ranking publications are less beneficial than high-quality, high-ranking publications. Ways to generate such differential benefits include targeting high-ranking publications for direct replication [81, 82], or preferentially scrutinizing them for questionable research practices [75] and statistical/mathematical errors [83, 84]. This would increase the probability that low-quality papers are detected post-publication. Subsequently, scientists should pay costs that reduce the benefits associated with such publications. These might include financial penalties, fewer citations for the published paper, or reputational damage (e.g., fewer citations for future work or lower acceptance probabilities for future papers [85]). The fact that retraction rates correlate positively with journal impact factor suggests that high-ranking publications already receive extra scrutiny [24]. However, the fact that many high-ranking findings fail to replicate [21–23] suggests that current levels of scrutiny are not sufficient to ensure that only high-quality work is submitted to high-ranking journals [86].

Costs for resubmitting rejected papers promote honest journal submission

As submission and resubmission costs become smaller, scientists have greater incentives to initially submit all papers to high-ranking journals, because getting rejected is not very costly. Resubmission costs solve this problem by making rejection costly, which disproportionately affects low-quality submissions. In principle, such costs could be implemented in several ways. If papers are associated with unique identifiers (e.g., DOI) and their submission histories are openly available, journals could refuse to evaluate papers that have been resubmitted too quickly. If journals preferentially reject low-quality papers, editors could wait before sending “reject” decisions, thereby creating disproportionate delays for low-quality submissions [39]. Note that both of the above approaches raise ethical concerns (e.g., slowing down the communication of scientific findings). Another possibility is to make all peer-review and editorial decisions openly available, even for rejected papers, as is current policy at Meta-Psychology [41]. Although such a reform could introduce complications (e.g., generating information cascades or increasing the probability that authors are scooped pre-publication), it provides a plausible way to increase differential costs. For example, to the extent that low-quality papers are more likely to receive negative reviews, scientists will have fewer incentives to submit such papers to high-ranking journals, because receiving negative reviews could decrease the probability that the paper is published elsewhere, decrease its perceived scientific value once published, or harm scientists’ reputations.

Limiting the number of submissions (or rejections) per paper promotes honest journal submission

When scientists can indefinitely submit papers for publication and submission is cost-free or sufficiently low-cost, scientists are incentivized to submit all papers to high-ranking journals. Limiting the number of times that papers can be submitted or rejected solves this problem by introducing opportunity costs: submitting to one journal means losing out on the chance to submit elsewhere [39]. If papers were assigned unique identifiers before initial submission, journals could potentially track submission histories and reject papers that had been submitted to or rejected from too many other journals. Such a policy would cause some papers to never be published. However, these papers could easily be archived on preprint servers and remain permanently available to the scientific community. Whether this would harm science as a whole remains an open question.

Discussion

We have shown how honest signaling theory provides a tool for thinking about academic publishing. Making submission costly can disincentivize scientists from “deceptively” submitting low-quality work to high-ranking journals. Honest submission is more likely when 1) journals can reliably differentiate high- from low-quality papers, 2) high-quality, high-ranking publications are more beneficial than low-quality, high-ranking publications, and 3) low-quality papers are costlier to submit to high-ranking journals than high-quality papers. When journal submission is cost free or sufficiently low-cost, scientists are incentivized to submit all papers to high-ranking journals, unless 4) resubmission is costly or 5) the number of submissions is limited.

Our paper provides a formal framework for thinking about a wide range of deceptive publishing behaviors, without requiring any assumptions about scientists’ motivations for engaging in practices that mislead readers about the quality of their work. That said, we provide just one formalization of the academic publishing process. In light of this, we note several potential extensions. We also discuss challenges associated with reforming the cost structure of academic publishing.

Just as experiments simplify reality to clearly establish cause-and-effect relationships, models necessarily simplify reality to serve as useful “thinking aids” for scientists [87–89]. As with all models, our analysis ignores many real-world details. For example, we do not address other factors that authors may consider when deciding where to submit a paper (e.g., whether a paper is a better fit for an interdisciplinary or specialty journal), although our model is generally relevant to cases where journals vary in selectivity and authors receive different payoffs for publishing in different journals. Further, we focus on papers that vary only in quality, whereas papers and their perceived value actually vary along many dimensions (e.g., novelty, type of result (positive or negative), whether conclusions support scientists’ preconceptions [6, 68, 90]). That said, our models are general enough to accommodate alternative interpretations of the single “quality” parameter. For example, we could have described papers as containing either positive or negative results, journals as preferentially accepting positive results, and authors as preferring to get all results published in high-ranking journals. If there were differential benefits to positive versus negative results, there would be some submission cost at which authors would only benefit from submitting positive results to high-ranking journals. It is worth noting that some emerging publishing formats, such as Registered Reports [91], ameliorate this issue by ensuring results-blind evaluation of submissions. More generally, future reforms would benefit from considering how publishing norms and incentives vary for different types of research and across different scientific fields.

Our models assume that papers vary in quality but do not address the process that generates different types of papers. A potential extension would be to allow scientists to influence paper quality by adjusting how much to invest in projects (e.g., sample size or methodological rigor [92, 93]), as has been done in related models [45, 94–96]. Certain reforms (e.g., greater transparency, costs for rejected papers) decrease the payoff for low-quality research, and may incentivize scientists to produce more high-quality research in the first place. Further, although we model one type of strategic interaction between authors and journals (see S1 File), this model is too simple to capture all interesting aspects of journals’ strategic decisions (for other formalizations, see [49, 53]). For example, journals that invest more in peer review (e.g., soliciting more reviews) may be more likely to reject low-quality papers, but publish fewer papers than journals with lax peer-review policies. In turn, scientists might obtain larger benefits for publishing in more-rigorous journals. Depending on how journals are rewarded, this could be a viable strategy.

Future extensions could also analyze cases where authors have only an imperfect estimate of a paper’s quality [51, 54], either due to noise or strategic quality overestimation. Additionally, given that a work’s perceived value is determined in part by social interactions between scientists [97], the stochasticity introduced by such processes might undermine honest submission (but see [98]). For example, if the cost of mistakenly submitting low-quality work to high-ranking journals is large, scientists may prefer to avoid such mistakes by only submitting to low-ranking journals.

We assume that submission costs vary only as a function of paper quality and journal type. However, in the real world, relative submission costs depend on other factors. For example, well-funded scientists with big labs can easily pay submission fees and offload costs onto junior lab members (e.g., writing grants or cover letters), whereas lone scientists with minimal funding are less capable of doing so. All else equal, this predicts that better-funded scientists will be more likely to submit low-quality work to high-ranking journals. Our models also assume that the benefits of high- and low-ranking publications are invariant across scientists. In the real world, the benefits of publication depend on other factors (e.g., career stage, scientific values). For example, well-established scientists may benefit less from high-ranking publications or, alternatively, may prefer to file-drawer a paper instead of submitting it to a lower-ranking journal.

It is also important to extend our models to allow for repeated interactions between scientists and journals. Several existing models of repeated signaling provide a starting point for doing so. Repeated interactions can ensure honesty if deceivers receive a bad reputation (e.g., are not believed in future interactions), thereby missing out on the benefits of long-term cooperative interactions. If deception is easily detected, receivers can simply not believe future signals from a partner who has been caught being deceptive [99]. If deception is detected probabilistically, receivers can more easily detect deceivers by pooling observations from multiple individuals to form a consensus [100]. And if deceptive signals are never detected but can be statistically detected in the long run, receivers can monitor the rate of signaling and forgo interactions with individuals who signal too frequently [101]. Similar reputation-based mechanisms can promote honesty in academic publishing. Journals that catch scientists engaging in misconduct can ban future submissions from those scientists. If editors and reviewers have information about the quality of authors’ past publications, they can obtain a better estimate of a current submission’s quality. Although such non-anonymous peer-review could introduce biases into the review process, complete anonymity would prevent editors and reviewers from basing publication decisions on scientists’ history of producing low- or high-quality work.

Other extensions could incorporate the dynamics of author-journal interactions, as has been done in the signaling literature [102, 103]. This could be important, as dynamical models reveal the probability that populations reach different equilibria, as opposed to only establishing equilibrium stability. Real-world academic publishing does involve dynamic interactions between authors and journals: changes in journal policy in one time period affect optimal submission behavior in subsequent time periods, and journal editors who make policy changes may themselves be affected by such changes when they submit future papers [45].

Our models do not address whether submission costs harm or benefit science. In biological signaling theory, it is well-established that the high signal costs necessary for a separating equilibrium can leave all parties worse off than in a system with lower signal costs and partial or complete pooling of signals across types [33, 66]. Given the difficulty of optimally calibrating submission costs, future work could extend our analysis to determine what combination of honesty and submission cost would lead to the most desirable scientific outcomes (see [53]). It is also worth noting that submission costs can exacerbate inequalities, as submissions may be costlier for individuals with fewer resources (e.g., early-career researchers, scientists in developing countries). One solution is to make submission costs conditional on scientists’ ability to pay them [39]. This might mean faster review times for early-career researchers or lower submission fees for scientists who have limited funding.

Although we have focused on the utility of signaling theory for understanding reforms to academic publishing, existing theoretical frameworks from many disciplines will provide complementary insights. Some of these include economic theories of markets with asymmetric information [43] and public goods [85], cultural evolutionary theory [104] and its relevance to the scientific process [81, 94], and statistical decision theory [105]. Drawing on diverse theoretical frameworks will improve our ability to implement effective reforms and sharpen our intuitions about how incentives are likely to affect scientists’ behavior. It will also improve our theoretical transparency, which has arguably lagged behind improvements in empirics [104, 106–108].

Conclusion

How can we feasibly reform academic publishing to make it more honest, efficient, and reliable? We still lack definitive answers to this question. However, to the extent that we seek a publishing system in which journal rank correlates with paper quality, our models highlight several solutions. These include making submission costly, making rejection costly, making it costlier to submit low- versus high-quality papers to high-ranking journals, reducing the relative benefits of low- versus high-quality publications in high-ranking journals, improving the quality of peer review, increasing the transparency of submitted papers, openly sharing editorial decisions and peer-reviews for all submitted papers, and limiting the number of times that papers can be submitted for publication. Reforms based on these ideas should be subjected to rigorous theoretical and experimental test before implementation. Doing so will be our best hope for improving the efficiency and reliability of science, while minimizing the risk of collateral damage from the unintended consequences of otherwise well-intentioned reforms.

Supporting information

S1 Fig. Academic publishing with two-sided strategic interactions.

A decision tree with possible moves by both the scientist and journals. In the first move, papers are randomly determined to be high- or low-quality. In the second move, the scientist chooses whether to submit the paper to either the high-ranking journal, with or without paying the submission cost, or to the low-ranking journal. The high-ranking submission cost is Ch and Cl for high-quality and low-quality papers, respectively. In the third move, a high-ranking journal decides whether to send the paper to harsh or soft peer review. Journals have imperfect information about paper quality. When high-ranking journals send papers to harsh peer-review, they accept high- and low-quality papers with probabilities P̲h and P̲l, respectively. When high-ranking journals send papers to soft peer-review, they accept high- and low-quality papers with probabilities P̄h and P̄l, respectively. Low-ranking journals accept all submissions. Papers rejected from high-ranking journals are re-submitted to low-ranking journals (not depicted). Dotted lines depict the journal’s information sets. For each node in an information set, the journal does not know at which node it is.

(TIF)

S1 File. Supplementary analyses and discussion.

(DOCX)

Acknowledgments

We thank Anne Scheel, Peder Isager, and Tim van der Zee for helpful discussions, and Carl Bergstrom for initially pointing us to the relevant economics literature. We thank Katherine Button, Barbara Spellman, Marcus Munafo and several anonymous reviewers for constructive feedback on previous versions of this paper.

Data Availability

No data was generated or analyzed for the current study. The minimal data set for this paper consists solely of mathematical equations, which can be found in the manuscript and supplementary materials.

Funding Statement

LT and DL were supported by the Netherlands Organization for Scientific Research (NWO) VIDI grant 452-17-01. KZ was supported by the National Science Foundation (NSF) grant SES 1254291. The funders had no role in any aspects of this study, the preparation of the manuscript, or the decision to publish.

References

1. Alberts B, Kirschner MW, Tilghman S, Varmus H. Rescuing US biomedical research from its systemic flaws. Proc Natl Acad Sci. 2014;111: 5773–5777. doi: 10.1073/pnas.1404402111
2. Chambers C. The seven deadly sins of psychology: A manifesto for reforming the culture of scientific practice. Princeton University Press; 2017.
3. Hicks D, Wouters P, Waltman L, De Rijcke S, Rafols I. Bibliometrics: the Leiden Manifesto for research metrics. Nat News. 2015;520: 429. doi: 10.1038/520429a
4. Martinson BC. Give researchers a lifetime word limit. Nat News. 2017;550: 303. doi: 10.1038/550303a
5. Munafò MR, Nosek BA, Bishop DV, Button KS, Chambers CD, du Sert NP, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1: 0021. doi: 10.1038/s41562-016-0021
6. Nosek BA, Spies JR, Motyl M. Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspect Psychol Sci. 2012;7: 615–631. doi: 10.1177/1745691612459058
7. Raff M, Johnson A, Walter P. Painful publishing. Science. 2008;321: 36. doi: 10.1126/science.321.5885.36a
8. Stern BM, O’Shea EK. A proposal for the future of scientific publishing in the life sciences. PLoS Biol. 2019;17: e3000116. doi: 10.1371/journal.pbio.3000116
9. Kunda Z. The case for motivated reasoning. Psychol Bull. 1990;108: 480. doi: 10.1037/0033-2909.108.3.480
10. Mercier H, Sperber D. Why do humans reason? Arguments for an argumentative theory. Behav Brain Sci. 2011;34: 57–74. doi: 10.1017/S0140525X10000968
11. Von Hippel W, Trivers R. The evolution and psychology of self-deception. Behav Brain Sci. 2011;34: 1–16. doi: 10.1017/S0140525X10001354
12. Bloom N, Jones CI, Van Reenen J, Webb M. Are ideas getting harder to find? National Bureau of Economic Research; 2017.
13. Vinkers CH, Tijdink JK, Otte WM. Use of positive and negative words in scientific PubMed abstracts between 1974 and 2014: retrospective analysis. BMJ. 2015;351: h6467. doi: 10.1136/bmj.h6467
14. Calcagno V, Demoinet E, Gollner K, Guidi L, Ruths D, de Mazancourt C. Flows of research manuscripts among scientific journals reveal hidden submission patterns. Science. 2012;338: 1065–1069. doi: 10.1126/science.1227833
15. Lawrence PA. The politics of publication. Nature. 2003;422: 259. doi: 10.1038/422259a
16. McKiernan EC, Schimanski LA, Nieves CM, Matthias L, Niles MT, Alperin JP. Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. PeerJ Inc.; 2019. Report No.: e27638v1. doi: 10.7287/peerj.preprints.27638v1
17. van Dijk D, Manor O, Carey LB. Publication metrics and success on the academic job market. Curr Biol. 2014;24: R516–R517. doi: 10.1016/j.cub.2014.04.039
18. Cicchetti DV. The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behav Brain Sci. 1991;14: 119–135. doi: 10.1017/S0140525X00065675
19. Lindsey D. Assessing precision in the manuscript review process: A little better than a dice roll. Scientometrics. 1988;14: 75–82.
20. Siler K, Lee K, Bero L. Measuring the effectiveness of scientific gatekeeping. Proc Natl Acad Sci. 2015;112: 360–365. doi: 10.1073/pnas.1418218112
21. Camerer CF, Dreber A, Forsell E, Ho T-H, Huber J, Johannesson M, et al. Evaluating replicability of laboratory experiments in economics. Science. 2016;351: 1433–1436. doi: 10.1126/science.aaf0918
22. Camerer CF, Dreber A, Holzmeister F, Ho T-H, Huber J, Johannesson M, et al. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nat Hum Behav. 2018;2: 637. doi: 10.1038/s41562-018-0399-z
23. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349: aac4716. doi: 10.1126/science.aac4716
24. Brembs B, Button K, Munafò M. Deep impact: unintended consequences of journal rank. Front Hum Neurosci. 2013;7: 291. doi: 10.3389/fnhum.2013.00291
25. Kravitz D, Baker CI. Toward a new model of scientific publishing: discussion and a proposal. Front Comput Neurosci. 2011;5: 55. doi: 10.3389/fncom.2011.00055
26. Vazire S. Quality uncertainty erodes trust in science. Collabra Psychol. 2017;3. Available: http://www.collabra.org/articles/10.1525/collabra.74/print/
27. Stephan P, Veugelers R, Wang J. Reviewers are blinkered by bibliometrics. Nat News. 2017;544: 411. doi: 10.1038/544411a
28. Akerlof GA. The market for “lemons”: Quality uncertainty and the market mechanism. Uncertainty in economics. Elsevier; 1978. pp. 235–251.
29. Connelly BL, Certo ST, Ireland RD, Reutzel CR. Signaling theory: A review and assessment. J Manag. 2011;37: 39–67.
30. Spence M. Job market signaling. Q J Econ. 1973;87: 355–374.
31. Maynard Smith J, Harper D. Animal signals. New York, NY: Oxford University Press; 2003.
32. Searcy WA, Nowicki S. The evolution of animal communication: reliability and deception in signaling systems. Princeton University Press; 2005.
33. Zollman KJ, Bergstrom CT, Huttegger SM. Between cheap and costly signals: the evolution of partially honest communication. Proc R Soc B Biol Sci. 2013;280: 20121878. doi: 10.1098/rspb.2012.1878
34. Riley JG. Silver signals: Twenty-five years of screening and signaling. J Econ Lit. 2001;39: 432–478.
35. Rothschild M, Stiglitz J. Equilibrium in competitive insurance markets: An essay on the economics of imperfect information. Q J Econ. 1976;90: 629–649.
36. Maynard Smith J. Honest signalling: the Philip Sidney game. Anim Behav. 1991;42: 1034–1035.
37. Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci. 2012;109: 17028–17033. doi: 10.1073/pnas.1212247109
38. Grafen A. Biological signals as handicaps. J Theor Biol. 1990;144: 517–546. doi: 10.1016/s0022-5193(05)80088-8
39. Azar OH. The academic review process: How can we make it more efficient? Am Econ. 2006;50: 37–50.
40. Gross K, Bergstrom CT. Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biol. 2019;17: e3000065. doi: 10.1371/journal.pbio.3000065
41. Carlsson R, Danielsson H, Heene M, Innes-Ker Å, Lakens D, Schimmack U, et al. Inaugural editorial of Meta-Psychology. Meta-Psychol. 2017;1.
42. Nelson LD, Simmons JP, Simonsohn U. Let’s publish fewer papers. Psychol Inq. 2012;23: 291–293. doi: 10.1080/1047840X.2012.705245
43. Löfgren K-G, Persson T, Weibull JW. Markets with asymmetric information: the contributions of George Akerlof, Michael Spence and Joseph Stiglitz. Scand J Econ. 2002;104: 195–211.
44. Crawford VP, Sobel J. Strategic information transmission. Econom J Econom Soc. 1982; 1431–1451.
45. Ellison G. Evolving standards for academic publishing: A q-r theory. J Polit Econ. 2002;110: 994–1034.
46. Engers M, Gans JS. Why referees are not paid (enough). Am Econ Rev. 1998;88: 1341–1349.
47. McCabe MJ, Snyder CM. Open access and academic journal quality. Am Econ Rev. 2005;95: 453–459.
48. Azar OH. The review process in economics: is it too fast? South Econ J. 2005; 482–491.
49. Cotton C. Submission fees and response times in academic publishing. Am Econ Rev. 2013;103: 501–509.
50. Heintzelman M, Nocetti D. Where should we submit our manuscript? An analysis of journal submission strategies. BE J Econ Anal Policy. 2009;9. doi: 10.2202/1935-1682.2340
51. Leslie D. Are delays in academic publishing necessary? Am Econ Rev. 2005;95: 407–413.
52. Oster S. The optimal order for submitting manuscripts. Am Econ Rev. 1980;70: 444–448.
53. Müller-Itten M. Gatekeeping under asymmetric information. Manuscript. 2019.
  • 54.Azar OH. A model of the academic review process with informed authors. BE J Econ Anal Policy. 2015;15: 865–889. [Google Scholar]
  • 55.Otto SP, Rosales A. Theory in service of narratives in evolution and ecology. Am Nat. 2019. 10.1086/705991 [DOI] [PubMed] [Google Scholar]
  • 56.Else H. Nature journals reveal terms of landmark open-access option. Nature. 2020. [cited 26 Nov 2020]. 10.1038/d41586-020-03324-y [DOI] [PubMed] [Google Scholar]
  • 57.Jiang Y, Lerrigo R, Ullah A, Alagappan M, Asch SM, Goodman SN, et al. The high resource impact of reformatting requirements for scientific papers. PLOS ONE. 2019;14: e0223976 10.1371/journal.pone.0223976 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.LeBlanc AG, Barnes JD, Saunders TJ, Tremblay MS, Chaput J-P. Scientific sinkhole: The pernicious price of formatting. PLOS ONE. 2019;14: e0223116 10.1371/journal.pone.0223116 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Vale RD. Accelerating scientific publication in biology. Proc Natl Acad Sci. 2015;112: 13439–13446. 10.1073/pnas.1511912112 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.The Editors. Science Magazine—Information for Authors: Contributors’ FAQ. 2019 [cited 18 Mar 2019]. Available: https://www.nature.com/nature/for-authors/editorial-criteria-and-processes
  • 61.The Editors. Editorial criteria and processes | Nature. 2019 [cited 18 Mar 2019]. Available: https://www.nature.com/nature/for-authors/editorial-criteria-and-processes
  • 62.Jackson A. Fast-track peer review experiment: First findings. 2015. Available: http://blogs.nature.com/ofschemesandmemes/2015/04/21/fast-track-peer-review-experiment-first-findings
  • 63.Editors TPBS. Broadening the scope of PLOS Biology: Short Reports and Methods and Resources. PLOS Biol. 2019;17: e3000248 10.1371/journal.pbio.3000248 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Ledgerwood A, Sherman JW. Short, sweet, and problematic? The rise of the short report in psychological science. Perspect Psychol Sci. 2012;7: 60–66. 10.1177/1745691611427304 [DOI] [PubMed] [Google Scholar]
  • 65.The Editors. Contributor FAQ. 2010 [cited 26 May 2020]. Available: https://www.psychologicalscience.org/journals/ps/faq.cfm
  • 66.Bergstrom C, Lachmann M. Signalling among relatives. I. Is costly signalling too costly? Philos Trans R Soc Lond B Biol Sci. 1997;352: 609–617. [Google Scholar]
  • 67.The Editors. A decade in numbers. Nat Mater. 2012;11: 743–744. 10.1038/nmat3424 [DOI] [Google Scholar]
  • 68.Mahoney MJ. Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cogn Ther Res. 1977;1: 161–175. [Google Scholar]
  • 69.Schroter S, Black N, Evans S, Carpenter J, Godlee F, Smith R. Effects of training on quality of peer review: randomised controlled trial. BMJ. 2004;328: 673 10.1136/bmj.38023.700775.AE [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Marsh HW, Jayasinghe UW, Bond NW. Improving the peer-review process for grant applications: reliability, validity, bias, and generalizability. Am Psychol. 2008;63: 160 10.1037/0003-066X.63.3.160 [DOI] [PubMed] [Google Scholar]
  • 71.Ross-Hellauer T. What is open peer review? A systematic review. F1000Research. 2017;6 10.12688/f1000research.11369.2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Wicherts JM, Kievit RA, Bakker M, Borsboom D. Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front Comput Neurosci. 2012;6: 20 10.3389/fncom.2012.00020 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Lakens D. Pandemic researchers-recruit your own best critics. Nature. 2020;581: 121 10.1038/d41586-020-01392-8 [DOI] [PubMed] [Google Scholar]
  • 74.Eich E. Business Not as Usual. Psychol Sci. 2014;25: 3–6. 10.1177/0956797613512465 [DOI] [PubMed] [Google Scholar]
  • 75.Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22: 1359–1366. 10.1177/0956797611417632 [DOI] [PubMed] [Google Scholar]
  • 76.Submission Guidelines–Association for Psychological Science. 2020 [cited 19 Sep 2019]. Available: https://www.psychologicalscience.org/publications/psychological_science/ps-submissions
  • 77.LeBel EP, McCarthy RJ, Earp BD, Elson M, Vanpaemel W. A unified framework to quantify the credibility of scientific findings. Adv Methods Pract Psychol Sci. 2018;1: 389–402. [Google Scholar]
  • 78.Dawkins MS, Guilford T. The corruption of honest signalling. Anim Behav. 1991;41: 865–873. 10.1016/S0003-3472(05)80353-7 [DOI] [Google Scholar]
  • 79.Hardwicke TE. SMART Pre-registration. 26 Jun 2018 [cited 8 May 2019]. Available: https://osf.io/zjntc/
  • 80.Lakens D, DeBruine L. Improving transparency, falsifiability, and rigour by making hypothesis tests machine readable. 2020. [Google Scholar]
  • 81.McElreath R, Smaldino PE. Replication, communication, and the population dynamics of scientific discovery. PLoS One. 2015;10: e0136088 10.1371/journal.pone.0136088 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82.Zwaan RA, Etz A, Lucas RE, Donnellan MB. Making replication mainstream. Behav Brain Sci. 2018;41. [DOI] [PubMed] [Google Scholar]
  • 83.Brown NJ, Heathers JA. The GRIM test: A simple technique detects numerous anomalies in the reporting of results in psychology. Soc Psychol Personal Sci. 2017;8: 363–369. [Google Scholar]
  • 84.Nuijten MB, van Assen MA, Hartgerink CH, Epskamp S, Wicherts J. The validity of the tool “statcheck” in discovering statistical reporting inconsistencies. 2017. [Google Scholar]
  • 85.Engel C. Scientific disintegrity as a public bad. Perspect Psychol Sci. 2015;10: 361–379. 10.1177/1745691615577865 [DOI] [PubMed] [Google Scholar]
  • 86.Zollman KJ. The scientific ponzi scheme. Unpubl Manuscr. 2019. [Google Scholar]
  • 87.Gunawardena J. Models in biology: ‘accurate descriptions of our pathetic thinking.’ BMC Biol. 2014;12: 29 10.1186/1741-7007-12-29 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 88.Kokko H. Modelling for field biologists and other interesting people. Cambridge University Press; 2007. [Google Scholar]
  • 89.Weisberg M. Simulation and similarity: Using models to understand the world. Oxford University Press; 2012. [Google Scholar]
  • 90.Greenwald AG. Consequences of prejudice against the null hypothesis. Psychol Bull. 1975;82: 1. [Google Scholar]
  • 91.Chambers CD, Tzavella L. Registered Reports: Past, Present and Future. MetaArXiv; 2020. February 10.31222/osf.io/43298 [DOI] [PubMed] [Google Scholar]
  • 92.Fraley RC, Vazire S. The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power. PLOS ONE. 2014;9: e109019 10.1371/journal.pone.0109019 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 93.Sarewitz D. The pressure to publish pushes down quality. Nature. 2016;533 10.1038/533147a [DOI] [PubMed] [Google Scholar]
  • 94.Tiokhin L, Morgan T, Yan M. Competition for priority and the cultural evolution of research strategies. 2020. [cited 26 May 2020]. 10.1177/1745691620966795 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 95.Smaldino PE, McElreath R. The natural selection of bad science. R Soc Open Sci. 2016;3: 160384 10.1098/rsos.160384 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 96.Higginson AD, Munafò MR. Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLoS Biol. 2016;14: e2000995 10.1371/journal.pbio.2000995 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 97.Starbuck WH. How much better are the most-prestigious journals? The statistics of academic publication. Organ Sci. 2005;16: 180–200. [Google Scholar]
  • 98.Meacham F, Perlmutter A, Bergstrom CT. Honest signalling with costly gambles. J R Soc Interface. 2013;10: 20130469 10.1098/rsif.2013.0469 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 99.Silk JB, Kaldor E, Boyd R. Cheap talk when interests conflict. Anim Behav. 2000;59: 423–432. 10.1006/anbe.1999.1312 [DOI] [PubMed] [Google Scholar]
  • 100.Boyd R, Mathew S. Third-party monitoring and sanctions aid the evolution of language. Evol Hum Behav. 2015;36: 475–479. [Google Scholar]
  • 101.Rich P, Zollman KJ. Honesty through repeated interactions. J Theor Biol. 2016;395: 238–244. 10.1016/j.jtbi.2016.02.002 [DOI] [PubMed] [Google Scholar]
  • 102.Huttegger S, Skyrms B, Tarres P, Wagner E. Some dynamics of signaling games. Proc Natl Acad Sci. 2014;111: 10873–10880. 10.1073/pnas.1400838111 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103.Huttegger SM, Zollman KJ. Methodology in biological game theory. Br J Philos Sci. 2013;64: 637–658. [Google Scholar]
  • 104.Muthukrishna M, Henrich J. A problem in theory. Nat Hum Behav. 2019; 1 10.1038/s41562-018-0525-y [DOI] [PubMed] [Google Scholar]
  • 105.Wald A. Statistical decision functions. 1950. [Google Scholar]
  • 106.Borsboom D. Theoretical amnesia. Open Sci Collab Blog. 2013. Available: http://osc.centerforopenscience.org/category/misc6.html [Google Scholar]
  • 107.Guest O, Martin AE. How computational modeling can force theory building in psychological science. 2020. [cited 27 May 2020]. 10.31234/osf.io/rybh9 [DOI] [PubMed] [Google Scholar]
  • 108.Robinaugh D, Haslbeck J, Ryan O, Fried EI, Waldorp L. Invisible hands and fine calipers: A call to use formal theory as a toolkit for theory construction. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Jonathan Jong

27 Oct 2020

PONE-D-20-19925

Honest signaling in academic publishing

PLOS ONE

Dear Dr. Tiokhin,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

As you can see, the paper received two divergent reviews, though both were fundamentally positive. I too am fundamentally positive about the paper, but some of R1's concerns about the model's assumptions occurred to me as well. However, I am not suggesting that you change your assumptions, merely that you acknowledge their limitations. Of course, you may be persuaded by R1's challenge, in which case, by all means, change your assumptions and modify the model, or better yet, run multiple models with slightly different assumptions. I will probably not send the next version of this paper out for further review, but will make the decision myself.

Please submit your revised manuscript by Nov 30 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Jonathan Jong, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2.  Thank you for stating the following in the Competing Interests section:

"The authors have declared that no competing interests exist.".

We note Simine Vazire's PLOS Board of Directors membership.

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

ii) Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

3. Thank you for stating the following in the Acknowledgments Section of your manuscript:

"LT and DL were supported by the Netherlands

Organization for Scientific Research (NWO) VIDI grant 452-17-01. KZ was supported by the

National Science Foundation (NSF) grant SES 1254291. The funders had no role in any aspects

of this study, the preparation of the manuscript, or the decision to publish."

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

"The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper discusses an issue in academic publishing – the information asymmetry between authors and editors – and considers how the logic of costly signaling theory can be applied to ensure that only high-quality papers are submitted to high-quality journals. Their model considers two types of papers and two types of journals: high quality and low quality. It argues that in the absence of costs, authors will always be incentivized to submit all papers to high-quality journals, but that low-quality papers being reviewed by or published in high-quality journals is a problem. The authors consider how costs to submit papers can change the decision calculus so that rational authors will only submit high-quality papers to high-quality journals.

I found the paper to be an interesting exercise, but I wasn’t particularly convinced by its arguments. The model is very simple and relies on a number of assumptions that either rarely hold or, in the case of submission costs, *always* hold, raising questions about the conclusions one can draw. I found the discussion around the model to be cursory and at times naïve, with some occasionally problematic recommendations. That said, I thought the approach was interesting, and I found no major errors in the analysis. Connecting signaling theory to academia is a good idea, and I found there to be useful food for thought in this paper. It could also provide the foundation for more detailed modeling work on the subject. Given the broad appeal of the topic, it seems like PLOS ONE could provide a good home for the paper, pending consideration of the comments below.

I first want to talk about the basic assumptions of the model. The model assumes that

1) Scientists should be honest

2) Scientists can accurately assess the quality of their research

3) Journal rank is a mark of quality, not specificity.

4) All research must be submitted as a paper.

The focus is on (1), and on mechanisms to enforce honesty. The others are implied and not explored, but I think there are some real issues there.

First, the extent to which researchers accurately assess the value of their work is highly questionable. It may also be adaptive to overestimate the value of one’s work – scholars that undervalue their work may be selected against in a competitive institutional system.

Second, the distinction between high- and low-quality journals seems very artificial. Certainly some journals are known to be crap (e.g. some predatory journals), and some are very prestigious, but there are also many journals that are excellent in terms of editorial curation and reputation but somewhat narrow in scope – what are often called technical or specialty journals. The dilemma many authors face is whether their paper has broad enough appeal to be submitted to a fancy interdisciplinary journal like Nature or a more solid but niche society journal.

Third, the authors write that in their ideal scenario, “Scientists would then submit high-quality research to high-ranking journals and low-quality research to lower-ranking ones” (lines 9-10). But why? Why publish low-quality research at all, especially if it’s readily identifiable as low quality? I kept thinking about this throughout the analysis, especially in terms of the cost tradeoffs. It seems like a crucial but missing component was the decision to not submit anything at all. This would carry no cost (except maybe the sunk cost of having done the work) and provide no benefit. This also speaks to something that is glossed over but important. ALL paper submissions incur some cost. It is not trivial to write up the paper and give up time to have the paper under review. The fact that most journals require papers to not be under submission elsewhere is a real tangible cost for all submissions.

Another assumption made is that “high-quality” journals are always incentivized to publish quality research. However, they are also incentivized to publish things that will get attention, and to minimize the risk of needing a retraction due to quality. This probably correlates with quality as the authors describe it, but not always – it would be good to address this, since the model strongly relies on this assumption and might run into problems if journal editors are also acting strategically.

The authors' main conclusions are that high-quality journals should impose (or continue to impose) costs for submission to dissuade trivial submission of low-quality papers. There are a few concerns about this. First, as noted, all submission is at least somewhat costly. Second, journals have desk rejection, which is a relatively low-cost way to weed out the more obviously low-quality papers. And third, and most importantly, imposing high costs to submission has a potential social consequence, which is that it will tend to preferentially favor researchers who can afford to pay such costs (and so for whom the costs are effectively less). This favors senior researchers at wealthy, prestigious institutions doing normal paradigmatic science. This seems to me to be a major downside and needs to be addressed in the authors’ discussion.

_______OTHER COMMENTS_______

The section “Relation to existing models in economics” is useful but probably belongs much earlier in the paper, before the detailed model description. Also, an important but missing reference the authors might consider engaging with is:

Crawford and Sobel (1982) Strategic information transmission. Econometrica 50.

In the description of high vs. low quality research, they write (lines 112-116):

“A high-quality paper might be one that thoroughly describes prior research, is methodologically rigorous, conducts appropriate statistical analyses and sensitivity checks, honestly reports all analyses and measures (e.g., no p-hacking or selective reporting of positive results), and clearly distinguishes between exploratory and confirmatory findings (see (5) for other factors that affect research quality). In contrast, a low-quality paper may have fewer or none of these qualities.”

I would argue that you can have low quality research that has all these properties but fails to ask meaningful or important questions. This isn’t just about impact, it’s about insight. Indeed, many papers are likely rejected from “high quality journals” not because the research isn’t rigorous, but because they fail to be asking probing questions in a sufficiently deep fashion. Also, something to think about might be the fact that some research (often but not always “high quality”) may itself be more costly to conduct, and therefore authors may feel “entitled” to publishing in more prestigious journals to justify the investment, while they may feel more comfortable publishing “low-quality” research in third-tier journals because it doesn’t represent a substantial investment.

Lines 144-145: “papers are randomly determined to be high- or low-quality.”

In reality, there may be many more low quality than high quality papers. Would a skewed ratio change the calculus?

Lines 165: “This illustrates a key conflict of interest in academic publishing.”

The conflict here is that without costs, scientists are always incentivized to submit everything to “high-quality” journals, but HQ journals only want to publish HQ work. This being a conflict is contingent on a number of assumptions holding, including the absence of costs. But as noted above, there are almost always costs. There are costs to writing up results, for example, so another option is to just not submit. Consider this paper, which found that most null results are never even written up:

Franco et al. (2014) Publication bias in the social sciences: Unlocking the file drawer. Science 345.

This comment also impinges on the assumption that the cost of submission to LQ journals is c = 0, which they say “does not affect our model’s generality” (line 179). This holds only because the choice is between submitting high or submitting low. If another option is to submit nowhere, which imposes no cost and confers no benefit, there might be cases in which a strategy could be “go big or go home” — submit to the high-ranking journal and, if rejected, submit nowhere. This reduces the generalizability of c = 0.
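One minimal way to formalize this point, using notation that is a hypothetical addition rather than part of the manuscript's baseline model: let b be the value of a low-ranking publication, let c_\ell > 0 be the cost of a low-ranking submission, and let the payoff from not submitting at all be 0. Then

\[ \text{submit low} \;\succeq\; \text{do not submit} \quad\Longleftrightarrow\quad b - c_\ell \ge 0. \]

If instead c_\ell > b, a paper rejected by the high-ranking journal is better left in the file drawer than resubmitted, and the payoff from "go big or go home" is P B + (1 - P)\cdot 0 - C rather than P B + (1 - P)(b - c_\ell) - C, where P is the relevant acceptance probability. Normalizing the low-journal submission cost to zero rules this case out by construction.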

In terms of deriving values of C to separate HQ and LQ papers, “The key insight is that imposing costs can promote honesty” (line 200). Maybe, but it also seems likely that calibrating those costs will often be quite difficult. Especially since the costs and benefits are not constant for all individuals, but vary by research career stage, current prestige of position, workflow to submit and re-submit, etc. Further, if P_h is similar to P_l, there is a very narrow range of C, which interplays with the variation between individuals on both B and C.
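One way to make the "very narrow range of C" concern concrete is the following sketch. It assumes the simple setup the review refers to: high- and low-ranking publications are worth B and b, the high-ranking journal accepts high- and low-quality papers with probabilities P_h and P_l, submission to the high-ranking journal costs C, and rejected papers are resubmitted to a low-ranking journal (which accepts everything) at no further cost. This is a reconstruction for illustration, not a verbatim restatement of the manuscript's derivation.

\[ P_l B + (1 - P_l)\,b - C \;\le\; b \quad\Longleftrightarrow\quad C \;\ge\; P_l\,(B - b) \qquad \text{(low-quality papers stay out)} \]

\[ P_h B + (1 - P_h)\,b - C \;\ge\; b \quad\Longleftrightarrow\quad C \;\le\; P_h\,(B - b) \qquad \text{(high-quality papers still submit)} \]

Honest submission therefore requires C to lie in the interval [P_l (B - b), P_h (B - b)], whose width (P_h - P_l)(B - b) shrinks as P_h approaches P_l, which is consistent with the worry that calibration becomes difficult when peer review barely discriminates between paper types.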

Lines 207-209: “If high-ranking publications are worth much more than low-ranking publications (large values of B – b), large submission costs are required to ensure honest submission; otherwise, scientists are tempted to submit all papers to high-ranking journals.”

Not necessarily, if P_h is low. In that case, costs could still be low. Perhaps this suggests that HQ journals should just be more selective?
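A purely illustrative calculation under the sketch above (the numbers are hypothetical, not taken from the manuscript): if B - b = 10 and the high-ranking journal accepts low-quality papers with probability P_l = 0.05, then deterring low-quality submissions requires only

\[ C \;\ge\; P_l\,(B - b) \;=\; 0.05 \times 10 \;=\; 0.5, \]

so a large gap between journal ranks can be policed by a modest cost when P_l is small. This is consistent with the suggestion that greater selectivity toward low-quality submissions reduces the cost needed to sustain honesty.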

Regarding limiting the number of submissions, they cite suggestions to limit scientists’ lifetime number of publications or to limit scientists to one publication per year (lines 289-291). This example isn’t really appropriate, because it’s talking about limiting the number of publications, not the number of submissions.

In the Implications, the authors first reiterate the main conclusion: “the costs associated with publishing reduce the incentive to submit low-quality research to high-ranking journals” (lines 334-336).

They seem to have ignored desk rejection, which is the main weapon that journals have against wasting time with low-quality submissions. In truth, that might be enough. You haven’t talked at all about the decision calculus for the journals. Why should they become more or less selective? What criteria should they use to select papers?

Lines 337-339: “If the benefits of high-ranking publications remained large, scientists would have even larger incentives to submit low-quality research to high-ranking journals, because the costs of doing so would be trivial.”

An interesting counterexample might be Sociological Science, which guarantees up/down decisions in 30 days and does not allow major revisions. Their rejection rate is high, and they have established a good reputation.

Lines 405-406: “When submissions and resubmissions are cost free…”

Again, I think this is almost never the case in practice. Especially if there’s the additional choice of not submitting at all.

Lines 409-411: “If journals preferentially reject low-quality papers, editors can wait some time before sending authors “reject” decisions, thereby causing disproportionate delays for low-quality submissions.”

Jesus Christ. No. Editors that do this should be punched in their stupid faces.

Lines 424-426: “Limiting the number of times that papers can be submitted or rejected…”

Beyond this being an impossible level of top-down control and overly paternalistic, such a limitation already occurs in practice because most journals have a policy that papers shouldn’t be submitted elsewhere.

Lines 472-473: “we note several extensions.”

Rather than simply listing extensions, which is about what you could have done but didn’t, why not spend some more time talking about the limitations and caveats of your conclusions based on your current model’s assumptions?

As a final note, something I kept asking myself while reading it was: Is this a high quality paper? Is PLOS ONE a high quality journal? Were the authors able to accurately assess the quality of their own research and use their model to help them decide where to submit? This is asked mostly rhetorically, but also seriously to the extent that it forces the question: are there real lessons to be drawn from this?

Reviewer #2: I really liked this paper. I thought it was clear and simple, with a valuable insight and useful prescriptions.

The basic idea is straightforward (and well explained): there is a fundamental informational and incentives problem in publishing—authors have an incentive to over-sell their work and have information about its value that the readers cannot as easily access, such as how much they had to twist the result or model or citations to make the results sound compelling and novel and interesting. The authors do a good job summarizing some of the inefficiencies this information+incentives problem creates, and then discuss how this problem can be understood and targeted using a standard costly signaling framework. Namely, in order to help readers differentiate between high- and low-quality papers, it’s essential that high-quality papers can get into higher-quality journals with lower relative cost—e.g., by having more attentive or better equipped referees, an academic system that gives less credence to bad papers in good journals, or mechanisms that make journal submission more onerous, make misrepresenting results harder, or limit the frequency of submissions. I think these are all nice insights and valuable prescriptions, and costly signaling proves to be a useful perspective to look at this problem.

Perhaps I missed something significant that other referees will catch, but on my reading, I couldn’t think of anything I would want changed in this paper or any reason to prevent its publication.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Moshe Hoffman

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 1

Wing Suen

25 Jan 2021

Honest signaling in academic publishing

PONE-D-20-19925R1

Dear Dr. Tiokhin,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Wing Suen

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

The paper provides a nice application of signaling theory to academic publishing. The authors have adequately addressed the referees' comments from the earlier round. However, I have two simple suggestions to make before the paper can go into production.

1. Figure 1 of the paper is not particularly helpful. In the interest of brevity, I would suggest taking out the figure.

2. Referee 1 had some concerns about the suggestion that "editors could wait before sending authors 'reject' decisions, thereby causing disproportionate delays for low-quality submissions" (Lines 439-440, page 17). I share a similar concern. I can understand the underlying logic of making submissions differentially more costly to induce a separating equilibrium, but the suggestion borders on unethical editorial behavior. Inducing a separating equilibrium may be desirable, but it is not an objective that overrides all other concerns. I would suggest that the authors either remove this recommendation or clearly spell out the competing ethical issues that need to be considered.


Acceptance letter

Wing Suen

15 Feb 2021

PONE-D-20-19925R1

Honest signaling in academic publishing

Dear Dr. Tiokhin:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Wing Suen

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Fig. Academic publishing with two-sided strategic interactions.

    A decision tree with possible moves by both the scientist and the journals. In the first move, papers are randomly determined to be high- or low-quality. In the second move, the scientist chooses whether to submit the paper to the high-ranking journal, with or without paying the submission cost, or to the low-ranking journal. The high-ranking submission cost is $C_h$ for high-quality papers and $C_l$ for low-quality papers. In the third move, the high-ranking journal decides whether to send the paper to harsh or soft peer review. Journals have imperfect information about paper quality. When high-ranking journals send papers to harsh peer review, they accept high- and low-quality papers with probabilities $\underline{P}_h$ and $\underline{P}_l$, respectively. When high-ranking journals send papers to soft peer review, they accept high- and low-quality papers with probabilities $\bar{P}_h$ and $\bar{P}_l$, respectively. Low-ranking journals accept all submissions. Papers rejected from high-ranking journals are re-submitted to low-ranking journals (not depicted). Dotted lines depict the journal’s information sets. For each node in an information set, the journal does not know at which node it is.

    (TIF)

    S1 File. Supplementary analyses and discussion.

    (DOCX)

    Attachment

    Submitted filename: Response to reviewers.docx

    Data Availability Statement

    No data was generated or analyzed for the current study. The minimal data set for this paper consists solely of mathematical equations, which can be found in the manuscript and supplementary materials.

