PLoS One. 2023 Apr 13;18(4):e0284212. doi: 10.1371/journal.pone.0284212

How many submissions are needed to discover friendly suggested reviewers?

Pedro Pessoa 1,2, Steve Pressé 1,2,3,*
Editor: Paolo Cazzaniga
PMCID: PMC10101443  PMID: 37053223

Abstract

It is common in scientific publishing to request from authors reviewer suggestions for their own manuscripts. The question then arises: How many submissions are needed to discover friendly suggested reviewers? To answer this question, as the data we would need is anonymized, we present an agent-based simulation of (single-blind) peer review to generate synthetic data. We then use a Bayesian framework to classify suggested reviewers. To set a lower bound on the number of submissions required, we create an optimistically simple model that should allow us to more readily deduce the degree of friendliness of the reviewer. Despite this model's optimistic conditions, we find that one would need hundreds of submissions to classify even a small reviewer subset. Thus, the task is virtually unfeasible under realistic conditions. This ensures that the peer review system is sufficiently robust to allow authors to suggest their own reviewers.

1 Introduction

Peer review is the cornerstone of quality control in academic publishing. However, the daunting task of selecting appropriate reviewers [1, 2] relies on identifying at least two scholars, free of conflicts of interest, who have: 1) the necessary expertise to judge the quality and perceived impact of the work; and 2) the willingness to perform the work pro bono. On account of this, it is ever more common that journals request, and often require, authors to suggest candidate reviewers. That is, to provide the names and contact information of scholars the authors deem qualified to review.

It is natural to imagine, at first glance, that this incentivizes authors to submit "friendly" names, that is, to suggest reviewers whom they have reason to believe would be favorably inclined toward them. The fear of such peer review manipulation is potentiated by reports that author-suggested reviewers are more likely to recommend acceptance [3–10]. However, some of these same studies mention that the quality of reports by author-suggested reviewers does not differ from that of editor-suggested reviewers [3–5, 8, 9]. It is also reported that the difference in suggesting acceptance between author-suggested and editor-suggested reviewers is not significant when comparing reports on the same submission [7], nor is it observed to have an effect on the article's acceptance [3, 7], and this discrepancy can even vanish entirely in some fields [11].

The question then naturally arises: can a scientist infer from their personal history of submissions which reviewers are likely to bias the decision in their favor? In what follows, we present an optimistic agent-based model that surely underestimates the number of submissions required to ascertain the friendliness of the reviewer with high confidence. What we find is that, due to multiple sources of uncertainty (e.g., lack of knowledge as to which reviewer the editor selects), such an effort would require a number of submissions vastly exceeding the research output of all but the most productive scientists. That is, hundreds and sometimes thousands of submissions.

As neither a manuscript's submission history, the reviewers selected by the editor, nor the reviewers suggested by the authors are publicly available, we adapt an agent-based simulation model [12–14], already used in generating simulated peer review data [14], and develop an inference strategy on this model's output to ask whether we can uncover favorably inclined reviewers. This fits into a larger effort to quantitatively study the dynamics of scientific interactions [15–18].

As we initially simulate the data, we intentionally make assumptions using agent-based models that would result in easy classification in order to obtain a lower bound on the number of submissions required to confidently classify reviewers. These assumptions read as follows:

  • i) For each submission, the author will always suggest a small number of reviewers (three, in our simulation) from a fixed and small (ten elements, in our simulation) pool of names.

  • ii) The editor will always select one of the reviewers suggested by the authors.

  • iii) The "friendliness" of any given reviewer remains the same for all subsequent submissions.

  • iv) Submissions from the same author all have the same overall quality.

Shortly, we will lift the assumptions of this "cynical model" and introduce a "quality factor model," or simply, quality model. In particular, we will lift assumption iv). As we will see, lifting assumptions only raises, often precipitously, the already unfeasibly high lower bound on the number of submissions an author requires to confidently classify reviewers and leverage this information to bias reports in their favor.

2 Methods

In order to set a lower bound on the number of submissions required to confidently classify reviewers, the present study focuses on a simplified peer review process characterized by three types of agents: the author(s), the editor, and the reviewers. Each submission is reviewed according to the following steps:

  1. During submission, the author will send to the editor a list of suggested reviewers, S. The suggested reviewers are chosen from a larger set of possible reviewers R—such that S is a subset of R.

  2. The editor will select one reviewer, namely r1, from S randomly with uniform probability.

  3. The editor will also select a second reviewer, r2, from a pool of reviewers considerably larger than R and representative of the scientific community.

  4. The reviewers will write single-blind reports, which will be shared with the authors. These are classified as overall positive or negative, with a being the number of positive reviews out of the two reports.

A diagram of this idealized process is presented in Fig 1.

Fig 1. Diagram presenting the simplified peer review process.

See the steps described in Section 2 for definitions.

In the spirit of identifying a lower bound on submissions, we make the dramatic assumption that r1 belongs to either the friend or the rival class, while r2 is otherwise neutral. Later, we will devise a Bayesian inference strategy to achieve classification of the suggested reviewer (r1).

The procedure described in the steps above refers to a single submission. However, as our end goal is to determine how many submissions are necessary to classify reviewers, we must consider multiple submissions. For this reason, we represent a history of M identical and independent submissions using the index μ ∈ {1, 2, …, M}, such that S_μ and a_μ are, respectively, the set of suggested reviewers and the number of positive reviews accrued for the μ-th submission. Naturally, reviewer reports are fully written letters rather than binary positive or negative answers. However, introducing a more complete spectrum of the positiveness of a report would certainly introduce further uncertainty regarding the reviewers' classes and only increase the lower bound on the number of submissions necessary to classify reviewers.

Now that we have qualitatively described our agent-based model, we provide a detailed mathematical formulation of the simulation and inference.

2.1 Mathematical formulation

Here each element of R is a reviewer. We denote by x_i the state of each reviewer as belonging to one of two classes: either x_i = friend or x_i = rival. The method can immediately be generalized to accommodate the addition of a third (neutral) class. Put differently, each suggested reviewer is treated as a categorical random variable realized as either friend or rival. Collecting all states as a sequence, we write x = [x_1, x_2, …, x_{|R|}], with |R| understood as the cardinality of R. For two classes, we have 2^{|R|} allowed configurations of x. It is convenient to index configurations with a superscript j, where j ∈ {1, 2, …, 2^{|R|}}, for which x^j = [x_1^j, x_2^j, …, x_{|R|}^j]. For the sake of clarity alone, we provide a concrete example enumerating all configurations for two possible suggested reviewers in Table 1.

Table 1. Example of the construction and enumeration of the possible configurations, x^j, for a set of two possible suggested reviewers (|R| = 2).

As described in the first paragraph of Section 2.1.

j    x^j = [x_1^j, x_2^j]
1    x^1 = [rival, rival]
2    x^2 = [rival, friend]
3    x^3 = [friend, rival]
4    x^4 = [friend, friend]
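For concreteness, this enumeration can be carried out programmatically. The following is a minimal Python sketch (illustrative only, not taken from the authors' repository; names such as `configurations` are ours):

```python
# Enumerate all 2^|R| configurations x^j, each reviewer being either
# a "friend" or a "rival". With R_SIZE = 2 this reproduces Table 1.
from itertools import product

R_SIZE = 2  # |R|; the paper's simulations use |R| = 10

configurations = list(product(["rival", "friend"], repeat=R_SIZE))
for j, x in enumerate(configurations, start=1):
    print(f"x^{j} = {list(x)}")
# x^1 = ['rival', 'rival']
# x^2 = ['rival', 'friend']
# x^3 = ['friend', 'rival']
# x^4 = ['friend', 'friend']
```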

We will now use Bayesian inference to determine the probability we assign to each configuration. That is, we compute posterior probabilities, P(x^j | {a_μ}, {S_μ}), over each x^j given the set of positive reports {a_μ} received after suggesting the subsets {S_μ} of reviewers. In the present article, we study two models for reviewer behavior.

The first is the simpler cynical model, where a friend writes a positive review with unit probability and, by contradistinction, a rival writes a positive review with null probability. The reviewer not selected from the author's list, r2, writes a positive review with probability ½. This iteration of the model should make it easiest (i.e., quickest in terms of number of submissions) to sharpen our posterior and classify reviewers.

The second model is the quality model, which introduces a new layer of stochasticity. Here, each submission is associated with a quality factor q ∈ (0, 1) reflecting the quality of that submission. In this model, an unbiased reviewer (r2) writes a positive review with probability q. By contrast, rivals and friends "double guess" their own judgment of the article, meaning that they evaluate the submitted article twice independently. A rival will only suggest acceptance if they deem the submission worthy of publication in both assessments, so a rival writes a positive review with probability q². Analogously, a friend will only reject if they "reject twice"; hence they write a negative review with probability (1 − q)² or, equivalently, a positive review with probability 1 − (1 − q)² = q(2 − q). A summary of these probabilities is presented in Table 2. As done with a_μ and S_μ, we index the quality factor of the μ-th submission as q_μ.

Table 2. Probabilities for reviewers of each class to write a positive report (accept) or a negative report (reject) according to the quality model when reviewing a paper of quality factor q.

                   accept                        reject
r_2                q                             1 − q
r_1 is a rival     q^2                           1 − q^2
r_1 is a friend    1 − (1 − q)^2 = q(2 − q)      (1 − q)^2

Not all authors, naturally, have distributions over q centered at the same value. It is therefore of interest to compute the effect on the lower bound of submissions needed (i.e., on how quickly our posterior sharpens around the ground truth) for distributions over q centered at the extremes (average high or average low quality) in addition to middle-of-the-road distributions centered at q = ½. As we will see, middle-of-the-road distributions allow for the most rapid posterior sharpening. Notwithstanding this paltry incentive to write middle-of-the-road papers, we will see that the lower bound on the number of submissions remains unfeasibly high even for this idealized scenario.

2.2 Simulation

Following the steps described at the beginning of Section 2, the first step of the simulation involves sampling the list of suggested reviewers, with \binom{|R|}{|S_\mu|} possible sets of suggested reviewers, or 120 given our simulation parameters (|S_μ| = 3 and |R| = 10 for all μ). Each S_μ is independently sampled with uniform probability. We show the effects of changing the number of suggested reviewers (four and five reviewers per submission) in S1 File.
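As a hedged illustration of this step (our own sketch with hypothetical names, not the authors' code), the count of possible suggestion sets and the uniform sampling of S_μ can be written as:

```python
import math
import numpy as np

N_REVIEWERS = 10  # |R|
N_SUGGESTED = 3   # |S_mu|

print(math.comb(N_REVIEWERS, N_SUGGESTED))  # 120 possible suggestion sets

def sample_S(rng):
    """Step 1: draw S_mu uniformly among all C(|R|, |S_mu|) subsets of R."""
    return frozenset(rng.choice(N_REVIEWERS, size=N_SUGGESTED, replace=False))

rng = np.random.default_rng(0)
print(sorted(sample_S(rng)))  # three distinct reviewer indices
```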

To start our simulation, we must initialize the ground truth configuration (the identity of x). Initially, we set an equal number of friends and rivals though we generalize to two other cases (seven and nine friends) in the S2 File.

The subsequent steps of the simulation (steps 2–3) are straightforward. Step 4 for the cynical model is equally straightforward (and deterministic in r1): a positive review is returned if r_1^μ is a friend and a negative review is returned otherwise, while r2 writes a positive review with probability ½. Further mathematical simulation details are found in S3 File.
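A minimal sketch of one such cynical-model submission (assuming the reviewer-pool representation of the snippets above; the function name is ours):

```python
def review_round_cynical(S, x, rng):
    """Steps 2-4, cynical model: return a_mu, the number of positive
    reviews (0, 1, or 2), given a suggestion set S and a ground-truth
    configuration x (a tuple of "friend"/"rival" labels)."""
    r1 = rng.choice(sorted(S))           # step 2: editor picks r1 uniformly from S
    a1 = 1 if x[r1] == "friend" else 0   # friend always accepts, rival never
    a2 = int(rng.random() < 0.5)         # neutral r2 accepts with probability 1/2
    return a1 + a2
```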

For the quality model, each submission (μ) is associated with a quality factor q_μ ∈ (0, 1). As is usual for a variable bounded to the interval (0, 1), we take q_μ to be a Beta random variable such that

P(q_\mu) = \frac{q_\mu^{\alpha-1} (1 - q_\mu)^{\beta-1}}{B(\alpha, \beta)} \qquad (1)

where B(\alpha, \beta) = \Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha + \beta), with Γ being Euler's gamma function. Again, in an effort to compute a lower bound alone on the number of submissions required, we assume that all q_μ are sampled from the same, stationary, distribution with constant α = 12 and β = 12 for now (a middle-of-the-road quality distribution), for which the mean is 〈q〉 = ½ and the variance is σ_q² = .01.
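These moments are easy to check numerically; a quick sanity check (assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.beta(12, 12, size=1_000_000)           # samples from Eq (1), alpha = beta = 12
print(round(q.mean(), 3), round(q.var(), 3))   # ~0.5 and ~0.01
```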

In reality, it is conceivable that one's quality factor distribution shifts to the right with experience. It is also conceivable that a prolific researcher's quality factor shifts to the left as they start venturing into new fields. Such effects only make it harder to assess which reviewer is friendly and further raise the lower bound on the number of submissions required. In any case, in the S4 File, we consider different quality distributions (both high and low). Foreshadowing the conclusions, it may be intuitive that very high or very low quality factors result in less information gathered per reviewer report. That is, we learn the class to which reviewers belong best by sampling quality factors around ½, not by constant rejection or acceptance.

Thus, with each sampled q_μ, step 4) of the quality model is implemented by having reviewers write positive reviews according to the probabilities in Table 2. Further mathematical details of the quality model are relegated to S3 File.
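A corresponding sketch of one quality-model submission, mirroring the cynical-model snippet above (again illustrative, using the accept probabilities of Table 2):

```python
def review_round_quality(S, x, rng, alpha=12, beta=12):
    """Steps 2-4, quality model: return a_mu for one submission."""
    q = rng.beta(alpha, beta)            # quality factor q_mu of this submission
    r1 = rng.choice(sorted(S))           # editor picks r1 uniformly from S
    p1 = q * (2 - q) if x[r1] == "friend" else q ** 2  # Table 2 accept probabilities
    a1 = int(rng.random() < p1)          # suggested reviewer's verdict
    a2 = int(rng.random() < q)           # neutral r2 accepts with probability q
    return a1 + a2
```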

Importantly, for the purposes of classifying which reviewers are friendly, it is not necessary to know whether the article is accepted by the editor, only the count of positive or negative reviews per submission.

2.3 Inference strategy

Inference consists of constructing the posterior P(x^j | {a_μ}, {S_μ}) and drawing samples from it. To construct this posterior, we write the likelihood, P({a_μ} | x^j, {S_μ}), as a product over all independent submissions

P(\{a_\mu\} \mid \mathbf{x}^j, \{S_\mu\}) = \prod_\mu P(a_\mu \mid \mathbf{x}^j, S_\mu) \qquad (2)

as follows

P(\mathbf{x}^j \mid \{a_\mu\}, \{S_\mu\}) = \frac{P(\mathbf{x}^j \mid \{S_\mu\})}{P(\{a_\mu\} \mid \{S_\mu\})} \, P(\{a_\mu\} \mid \mathbf{x}^j, \{S_\mu\}). \qquad (3)

Since the number of configurations is finite, we may start by taking the prior as uniform over these countable options, P(x^j | {S_μ}) = 2^{-|R|}. Keeping all dependency on x^j explicit, we may write

P(\mathbf{x}^j \mid \{a_\mu\}, \{S_\mu\}) \propto P(\{a_\mu\} \mid \mathbf{x}^j, \{S_\mu\}) = \prod_\mu P(a_\mu \mid \mathbf{x}^j, S_\mu). \qquad (4)

We end with a note on the likelihood, which we compute explicitly by treating r_1^μ as a latent variable over which we sum. That is,

P(a_\mu \mid \mathbf{x}^j, S_\mu) = \sum_{r_1^\mu} P(a_\mu \mid r_1^\mu) \, P(r_1^\mu \mid \mathbf{x}^j, S_\mu). \qquad (5)

In terms of the factors within the summation, P(r_1^μ | x^j, S_μ) follows from step 2). That is, since the editor selects r_1^μ with uniform probability from S_μ, the probability of selecting an r_1^μ from the class of friends is the fraction of friends, f, in S_μ according to the configuration x^j. This can be written more rigorously as

P(r_1^\mu = \mathrm{friend} \mid \mathbf{x}^j, S_\mu) = f(\mathbf{x}^j, S_\mu) \equiv \frac{1}{|S_\mu|} \sum_{i \in S_\mu} F(x_i^j), \qquad (6)

where

F(x_i^j) = \begin{cases} 0 & \text{if } x_i^j = \mathrm{rival} \\ 1 & \text{if } x_i^j = \mathrm{friend}. \end{cases} \qquad (7)

It follows that P(r_1^μ = rival | x^j, S_μ) = 1 − f(x^j, S_μ).

We now turn to the term P(a_μ | r_1^μ) within (5), which is computed differently in the cynical and quality models.
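Since the configuration space is small (2^{|R|} = 1024 for |R| = 10), the posterior of Eqs (2)–(7) can be evaluated exactly by brute force. A minimal sketch (ours, not the authors' code; `p_a` stands for the per-class report probabilities derived below, Table 3 or Table 5):

```python
import numpy as np
from itertools import product

def f_friend(x, S):
    """Eq (6): fraction of the suggested set S that are friends under x."""
    return sum(x[i] == "friend" for i in S) / len(S)

def likelihood(a, x, S, p_a):
    """Eq (5): marginalize the latent class of the editor-picked reviewer r1.
    p_a maps each class to [P(a=0), P(a=1), P(a=2)]."""
    f = f_friend(x, S)
    return f * p_a["friend"][a] + (1 - f) * p_a["rival"][a]

def posterior(history, p_a, n_reviewers):
    """Eqs (2)-(4): uniform prior times per-submission likelihoods."""
    configs = list(product(["rival", "friend"], repeat=n_reviewers))
    post = np.ones(len(configs))
    for a, S in history:  # history = [(a_mu, S_mu), ...]
        post *= [likelihood(a, x, S, p_a) for x in configs]
    return configs, post / post.sum()
```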

2.3.1 Inference in the cynical model

Calculating P(a_μ | r_1^μ) for the cynical model is straightforward. Given that a friendly r1 always writes a positive review, a rival r1 always writes a negative one, and r2 writes a positive review with probability ½, the values of P(a_μ | r_1^μ) immediately follow, as tabulated in Table 3. Eqs (3)–(7) and Table 3 summarize what is needed to perform Bayesian classification within the cynical model formulation.

Table 3. Probabilities for the number of positive reports, a_μ, in the cynical model, conditioned on the class of the suggested reviewer, r_1^μ.

P(a_μ | r_1^μ)       a_μ = 0    a_μ = 1    a_μ = 2
r_1^μ = friend       0          1/2        1/2
r_1^μ = rival        1/2        1/2        0
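In code, Table 3 is the `p_a` input of the posterior sketch above (hypothetical usage with made-up data):

```python
P_CYNICAL = {"friend": [0.0, 0.5, 0.5], "rival": [0.5, 0.5, 0.0]}  # Table 3

history = [(2, {0, 1, 2}), (0, {0, 3, 4})]  # illustrative (a_mu, S_mu) pairs
configs, post = posterior(history, P_CYNICAL, n_reviewers=10)
```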

2.3.2 Inference in the quality model

The major difference between inference in the quality and cynical models lies in the fact that the author will not have access to the individual q_μ's. However, since we aim for a lower bound, we proceed with the calculation under the assumption that, while individual q_μ's are unknown, the author knows the distribution from which q_μ is sampled. If the author were uncertain of the distribution, this would add yet another layer of stochasticity and further raise the lower bound. From Table 2, it is straightforward to calculate the probability of each a_μ given r_1^μ and q_μ in the quality model. The result is found in Table 4.

Table 4. Probability for the number of positive reviews a_μ conditioned on the quality factor q_μ and the class of the reviewer r_1^μ.

P(a_μ | q_μ, r_1^μ)    a_μ = 0                       a_μ = 1                    a_μ = 2
r_1^μ = friend         1 − 3q_μ + 3q_μ^2 − q_μ^3     3q_μ − 5q_μ^2 + 2q_μ^3     2q_μ^2 − q_μ^3
r_1^μ = rival          1 − q_μ − q_μ^2 + q_μ^3       q_μ + q_μ^2 − 2q_μ^3       q_μ^3

Without access to q_μ in (5), we further need to marginalize P(a_μ | q_μ, r_1^μ) over q_μ as follows

P(a_\mu \mid r_1^\mu) = \int \mathrm{d}q_\mu \, P(a_\mu, q_\mu \mid r_1^\mu) = \int \mathrm{d}q_\mu \, P(a_\mu \mid q_\mu, r_1^\mu) \, P(q_\mu) = \left\langle P(a_\mu \mid q_\mu, r_1^\mu) \right\rangle_{q_\mu}. \qquad (8)

For example, if qμ is sampled from a Beta distribution (1) with parameters α = β = 12, as proposed in Section 2.2, marginalization (8) yields the values of P(aμ|r1μ) shown in Table 5.

Table 5. Probability for the number of positive reviews a_μ conditioned on the class of the reviewer r_1^μ.

Calculated by marginalizing q_μ in Table 4, as described in (8), for α = β = 12.

P(a_μ | r_1^μ)      a_μ = 0    a_μ = 1    a_μ = 2
r_1^μ = rival       .38        .48        .14
r_1^μ = friend      .14        .48        .38
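The entries of Table 5 can be reproduced by carrying out the marginalization (8) numerically, here with the Table 4 polynomials and a Beta(12, 12) density (a sketch of ours, assuming SciPy is available):

```python
from scipy.integrate import quad
from scipy.stats import beta as beta_dist

pdf = beta_dist(12, 12).pdf  # P(q) of Eq (1)
table4 = {
    "rival":  [lambda q: 1 - q - q**2 + q**3,       # a = 0
               lambda q: q + q**2 - 2 * q**3,       # a = 1
               lambda q: q**3],                     # a = 2
    "friend": [lambda q: 1 - 3*q + 3*q**2 - q**3,
               lambda q: 3*q - 5*q**2 + 2*q**3,
               lambda q: 2*q**2 - q**3],
}
for cls, polys in table4.items():
    print(cls, [round(quad(lambda q, p=p: p(q) * pdf(q), 0, 1)[0], 2) for p in polys])
# rival  [0.38, 0.48, 0.14]
# friend [0.14, 0.48, 0.38]
```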

Inference otherwise proceeds exactly as in the cynical model, as summarized by Eqs (3)–(8) and Table 4.

3 Results

The previous section focused on constructing the 2^{|R|}-dimensional posterior P(x^j | {a_μ}, {S_μ}), which is otherwise difficult to visualize. Since our goal is to determine the number of submissions required to correctly classify suggested reviewers, we introduce metrics measuring how well the posterior classifies reviewers. Moreover, these metrics ought to have an assigned value at each submission, and thus be a function of m for each m ∈ {1, 2, …, M}. Thus, for a fixed data set of M submissions, we calculate each metric using the first m submissions for all m.

Each metric is a stochastic function of the dataset (decisions made by reviewers and quality factors sampled), inheriting the variation of the posterior with the data supplied. For this reason, we consider multiple metric realizations, which allow us to compute their mean, median, and 50% and 95% credible (or confidence) intervals. Borrowing language from dynamical systems, we refer to these realizations, up to the m-th submission, as trajectories.

3.1 Metrics

The first metric, akin to a marginal decoder obtained for mixture models [19, 20], concerns itself with the probability of the class of one specific reviewer i. From the posterior over all configurations, we obtain probabilities over reviewer i's class through marginalization

P(x_i = \mathrm{friend} \mid \{a_\mu\}, \{S_\mu\}) = \sum_j P(\mathbf{x}^j \mid \{a_\mu\}, \{S_\mu\}) \, F(x_i^j), \qquad (9)

where F was defined in (7). Equivalently, P(x_i = rival) = 1 − P(x_i = friend).

Thus, the first metric is defined as the marginal probability of reviewer i being a friend based on the results of m papers where reviewer i was suggested

\rho_i(m) \equiv P(x_i = \mathrm{friend} \mid \{a_\mu\}_{\mu=1:m}, \{S_\mu\}_{\mu=1:m}), \qquad (10)

where {a_μ}_{μ=1:m} and {S_μ}_{μ=1:m} represent the subsets of the first m elements of {a_μ} and {S_μ}, respectively.
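Given the posterior over configurations (as in the brute-force sketch of Section 2.3), ρ_i(m) is a one-line marginalization; an illustrative helper:

```python
def rho(i, configs, post):
    """Eqs (9)-(10): marginal posterior probability that reviewer i is a friend."""
    return sum(p for x, p in zip(configs, post) if x[i] == "friend")
```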

The second metric, a global metric, computes the maximum a posteriori (MAP) estimate for x after m submissions, x̄(m),

\bar{\mathbf{x}}(m) \equiv \underset{\mathbf{x}^j}{\mathrm{argmax}} \, P(\mathbf{x}^j \mid \{a_\mu\}_{\mu=1:m}, \{S_\mu\}_{\mu=1:m}), \qquad (11)

and compares, element-wise, how x̄(m) differs from the ground truth.

The MAP error, while less informative than considering the full posterior, serves as an estimate of the number of submissions necessary to classify reviewers (within tolerable error) as a function of m and of the number of friends in the original pool of reviewers. Further robustness analysis is performed in S2 File.

As a third metric, we look for a more general measure of how "well-classified" the reviewers are. Following the work of Shannon [21], we note that the entropy, defined as

S(m) \equiv -\sum_{\mathbf{x}^j} P(\mathbf{x}^j \mid \{a_\mu\}_{\mu=1:m}, \{S_\mu\}_{\mu=1:m}) \log_2 P(\mathbf{x}^j \mid \{a_\mu\}_{\mu=1:m}, \{S_\mu\}_{\mu=1:m}), \qquad (12)

measures, in rough terms, how many reviewers are left unclassified (base 2 for the logarithm in (12) was chosen because we are dealing with binary classification). A mock example of how entropy works for the classification of 2 reviewers is presented in Table 6. For more general insight on the role of entropy see, e.g., Refs. [22–24] and references therein.

Table 6. Example of how entropy is to be interpreted. In this mock example for the classification of two reviewers, similar to Table 1, the first column assigns equal probability to all configurations, and thus the entropy takes its maximal value of 2.

The second column shows a case where the first reviewer is not yet classified, but is considerably more likely to belong to one class, hence entropy takes on some value between 1 and 2. The third column contains an example where the first reviewer is fully identified, but the probability does not favor any classification for the second reviewer leading to the entropy value of 1. The fourth column has an example where the first reviewer is fully identified, but the probability favors one classification (rival) for the second reviewer, leading to the entropy value between 0 and 1. The last column contains an example where one configuration has probability 1, hence the reviewers are fully classified, leading to 0 entropy.

j    x^j = [x_1^j, x_2^j]       P(x^j)   P(x^j)   P(x^j)   P(x^j)   P(x^j)
1    x^1 = [rival, rival]       1/4      1/8      0        0        0
2    x^2 = [rival, friend]      1/4      1/8      0        0        0
3    x^3 = [friend, rival]      1/4      3/8      1/2      7/8      1
4    x^4 = [friend, friend]     1/4      3/8      1/2      1/8      0
entropy −∑_j P(x^j) log_2 P(x^j)    2    ≈ 1.811    1    ≈ 0.5436    0
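A short sketch evaluating Eq (12) on the columns of Table 6 (ours, assuming NumPy; the 0 log 0 = 0 convention is handled by dropping zero entries):

```python
import numpy as np

def entropy(post):
    """Eq (12): base-2 Shannon entropy of a posterior over configurations."""
    p = np.asarray(post, dtype=float)
    p = p[p > 0]  # convention: 0 log 0 = 0
    return float(0.0 - (p * np.log2(p)).sum())

for column in ([1/4] * 4, [1/8, 1/8, 3/8, 3/8], [0, 0, 1/2, 1/2],
               [0, 0, 7/8, 1/8], [0, 0, 1, 0]):
    print(round(entropy(column), 4))  # 2.0, 1.8113, 1.0, 0.5436, 0.0
```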

The fourth, and final, metric is the third-largest marginal posterior, or the posterior for the third reviewer most likely to be friendly,

T(m) \equiv {\max_i}^3 \, \rho_i(m), \qquad (13)

where max_i^n denotes the n-th largest element in the set indexed by i and ρ_i(m) is defined in (10). Unlike the first metric, which classifies each reviewer individually, and the second and third metrics, which classify all reviewers in R, this metric addresses a scenario where authors only seek to classify the minimum number of suggested reviewers (|S_μ| = 3 in our simulations). Therefore, whenever we present results for this fourth metric, we show how many publications are required to reach the 95% confidence level. Even upon reaching this threshold, it is possible to misclassify the third referee; details are provided in S5 File. In the same Supporting Information, we also show that suggesting reviewers based on outcomes of previous submissions does not lead to a significant reduction in the number of submissions necessary to classify reviewers.
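Reusing the marginal helper `rho` defined above, this fourth metric is simply the third-largest of the |R| marginals (illustrative sketch):

```python
def third_largest_marginal(configs, post, n_reviewers):
    """Eq (13): T(m), the third-largest marginal friend probability."""
    rhos = sorted((rho(i, configs, post) for i in range(n_reviewers)), reverse=True)
    return rhos[2]
```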

3.2 Cynical model results

The marginal probability (first metric) of a reviewer belonging to the friend class in the cynical model is shown in Fig 2. We interpret this result as indicating that one needs to suggest this reviewer in a little over 75 submissions to strongly classify this reviewer (marginal posterior exceeding 0.95 for one of the classes) in the median case. Assuming this reviewer is picked uniformly from the author's pool of 10 reviewers, then, on average, a total of 250 submissions would be required.

Fig 2. Posterior marginal probability of a single reviewer's class—ρ_i defined in (10)—as a function of the number of submissions where the reviewer was suggested.

The graph on the left corresponds to values of ρ_i(m) for which the reviewer belongs to the friend class in the simulation's ground truth, while the graph on the right corresponds to rivals in the simulation's ground truth. We observe that the median trajectory reaches a probability of .95 for the correct class after a little more than 75 submissions involving the suggested reviewer. Meanwhile, it takes between 100 and 125 such submissions for the class with the highest posterior to match the ground truth within the 95% credible interval.

By contrast, around 100 submissions suggesting this reviewer are necessary to weakly classify them, meaning to classify this reviewer using the class with the highest marginal posterior and obtain the correct class within the 95% credible interval. In the S2 File, we see that if we have more friends in the ground truth configuration, friends are classified faster, but rivals are likely to be misclassified.

The number of errors from the MAP (second metric) for the cynical model as a function of the number of submissions is shown in Fig 3. There, we see that if we attempt to classify reviewers using the MAP, we would obtain the correct configuration, in the median case, after approximately 100 submissions. However, to guarantee the correct configuration within the 95% confidence interval, one needs between 250 and 300 submissions.

Fig 3. Number of errors when using maximum a posteriori (MAP) classification, i.e., the number of misclassifications appearing in the MAP configuration (11) when compared to the simulation's ground truth, as a function of the number of submissions in the cynical model.

We observe that the median trajectory finds the correct ground truth configuration using the MAP estimate after approximately 100 submissions, while it takes approximately 250 submissions to reach the correct configuration within the 95% credible interval.

The posterior entropy (third metric) for the cynical model as a function of the number of submissions is shown in Fig 4. In this case, we would need, in the median case, between 150 and 200 submissions to fully classify a set of 10 reviewers with 3 suggested per submission. In the S2 File, we see that the posterior entropy does not fall considerably faster with more friends in the ground truth (as compared to this case with 5 friends).

Fig 4. The posterior’s entropy—defined in (12)—as a function of the number of submissions in the cynical model.

Fig 4

We observe that, in the cynical model, we need between 150 and 200 reviewed submissions in order for the entropy of a median trajectory to reach zero, meaning that for half of submissions, the posterior only fully classifies a set of 10 reviewers after 150 submissions.

Finally, we present the third-largest marginal posterior as a function of the number of submissions in Fig 5. We observe that it takes approximately 80 submissions for the median trajectory to reach T(m) = 0.95. In the same figure, we also see that it takes on average 70 submissions to reach that confidence for all top 3 reviewers. In the S5 File, we show that if one stops classifying reviewers once this mark is reached, at least one rival would be classified as a friend in 6.9% of cases.

Fig 5. The left panel presents the marginal probability of the third most likely reviewer (according to the posterior) to be friendly—defined in (13)—as a function of the number of submissions.

We observe that this probability reaches 95% after approximately 80 submissions for the median trajectory. In the right panel, we see the number of submissions needed to reach 95% credibility for the same metric per sampled simulation. We observe that the mean and median number of submissions are slightly greater than 70.

3.3 Quality model results

Similar to the analysis of the cynical model, the marginal probability of a single reviewer's class in the quality model is shown in Fig 6. The results indicate that one needs to suggest a reviewer in approximately 400 submissions before they can strongly classify the reviewer in the median case.

Fig 6. Marginal posterior probability over a single reviewer's class, analogous to Fig 2, for the quality model.

We observe that the median trajectory indicates that a single reviewer ought to be suggested in approximately 400 submissions in order to reach a probability of 0.95 for the correct class.

The MAP error as a function of the number of submissions is shown in Fig 7, indicating that we would need more than 500 submissions to correctly classify reviewers through the MAP in the median case. We would need a little fewer than 2000 to find the correct configuration within a .95 credible interval.

Fig 7. The number of errors when using maximum a posteriori (MAP) classification, analogous to Fig 3, as a function of the number of submissions in the quality model.

We observe that the median trajectory finds the correct configuration using the MAP estimate after approximately 500 submissions, while it takes around 2000 submissions to reach the correct configuration within the 95% credible interval.

The posterior entropy as a function of the number of submissions for the quality model is shown in Fig 8. The results suggest that we would need more than 1500 submissions to fully classify a set of 10 reviewers.

Fig 8. The posterior’s entropy, analogous to Fig 4, for the quality model.

Fig 8

We observe that we need around 1500 submissions in order to fully classify the reviewers (entropy approach zero) in the median trajectory.

The third-largest marginal posterior as a function of the number of submissions, as well as the number of submissions necessary to reach 95% credibility, are presented in Fig 9. We observe that, in the quality model, it takes approximately 400 submissions to find 3 friendly suggested reviewers with 95% credibility. Meanwhile, in the S5 File, we show that this strategy misclassifies reviewers in less than 1.0% of datasets.

Fig 9. The left panel presents the marginal probability of the third most likely reviewer to be friendly as a function of the number of submissions in the quality model, while the right panel presents the number of submissions taken to reach 95% credibility for the same metric, analogous to Fig 5.

Both indicate that approximately 400 submissions are necessary.

As mentioned in Section 2.2, Figs 6–9 were constructed from a simulation where the quality factors q_μ are sampled from a Beta distribution (1) with α = β = 12. In the S4 File, we consider other sampling distributions for the quality factor and justify that this unusually tight distribution provides what is close to the overall lower bound. For example, any broader distribution (e.g., α = β = 2) only further increases the lower bound.

4 Discussion

Assessing whether a reviewer is positively or negatively inclined is a question riddled with challenges. Although we can find publicly available data on peer-review outcomes (e.g., [25, 26]) that allow one to quantify potential biases based on factors such as prestige [27], gender [28–30], ethnicity [31, 32], and place of origin [33, 34], these data are anonymized to protect the privacy of authors and reviewers. On account of this, it is not possible to track the submission history of a single author across multiple publication venues to determine whether they are biasing editorial decisions through their list of suggested reviewers. Yet the answer to the question posed by the title is not fundamentally unknowable despite the paucity of data. This is because we can simulate, and analyze, realistic outcomes based on agent-based models.

Indeed, doing so, our study shows that it is virtually unfeasible, in a single-blind peer review process, for authors to suggest reviewers who will bias the decision in their favor. Even modeling the most cynical and predictable reviewer behavior, we find that an author requires about 100 submissions to correctly classify even a set of 10 reviewers, while it takes about 70 submissions just to find 3 friendly reviewers with high credibility (see Figs 2–5). When the model is upgraded to a more realistic one (albeit still too simple), at least 400 submissions become necessary for the same task (see Figs 6–8).

This large number exceeds the submission count of all but a small minority of even the most prolific scientists. Moreover, large submission numbers introduce further complications. For example, a reviewer may exhibit friendliness toward the author in one area and not another, which is especially problematic for prolific authors who publish across fields; it is also reasonable to expect that a reviewer may change their opinions in the time necessary to write hundreds of articles.

Relaxing the severe idealizations of even our marginally more realistic model would only further compound the difficulty of identifying friendly reviewers, as would any further layer of stochasticity introduced. For example: we may account for more nuanced reviewer reports, rather than classifying them as only accept or reject; allow the original pool of reviewers to change as the author gains more experience in the field; allow for neutral suggested reviewers; allow friends to become neutral or rivals over time (or vice versa); allow the author's quality factor distribution to change over time; or allow the editor to select a variable number of suggested reviewers.

Another layer of stochasticity is the difference in expectations among editors and reviewers across different journals. Some journals are intended for a very specific audience, while others cater to a broad readership. Additionally, some journals prioritize scientific rigor above all else, while others prioritize publication citation rates (e.g., reviews). As a result, the same manuscript may possess different quality factors when submitted to various journals, creating uncertainty for authors attempting to determine their own quality factor distributions.

Naturally, this study assumes that the author tries to identify reviewers using only information available to them. Cases of fraud or collusion should be handled through careful editorial scrutiny. While our simulation assumes an extremely impartial editor, a good editor will verify that the suggested reviewers have the necessary competency to properly evaluate the submission; see, e.g., the Committee on Publication Ethics (COPE) guidelines [35]. Only after approval by the editor do reviewers receive invitations. If no candidate is deemed appropriate, editors may well select no reviewers from the suggested list, introducing yet another layer of stochasticity. This is especially important in small fields where the pool of appropriate reviewers is naturally limited. The task of finding only the minimal requested number of friendly reviewers is therefore nearly pointless for an author who publishes across fields, as is expected of prolific researchers.

Moreover, reviewers who have a close relationship with the authors may choose to withdraw voluntarily from the peer-review process upon realizing that their association could be perceived as a conflict of interest. This can limit the usefulness of the authors’ suggestions.

Had the lower bound on the number of submissions found in the cynical model been small, it would have been incumbent on us to consider these complexities in order to identify which, if any, assure the soundness of the single-blind review process. But this is not the case, and the results were surprising even to us. Indeed, even the simplest model confirms that the single-blind review process is sufficiently reliable to allow authors to suggest their own reviewers without clouding or otherwise biasing the publication decision.

Supporting information

S1 File. Cynical model results for different numbers of suggested reviewers.

(PDF)

S2 File. Results with a larger ratio of friendly reviewers.

(PDF)

S3 File. Sampling details.

(PDF)

S4 File. Quality results with different parameters.

(PDF)

S5 File. Errors and aggressive strategy for the fourth metric.

(PDF)

Data Availability

The code performing the simulation, inference, and generating figures is available on GitHub https://github.com/PessoaP/how_many_submissions.

Funding Statement

This work is supported by funds from the National Institutes of Health (https://www.nih.gov/) grant No. R01GM134426, R01GM130745, and the MIRA R35 entitled "Toward high spatiotemporal resolution models of single molecules for in vivo applications", all of which were awarded to SP. The funder did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Willis M. Why do peer reviewers decline to review manuscripts? A study of reviewer invitation responses. Learned Publishing. 2016;29:5. doi: 10.1002/leap.1006
  • 2. Fox CW. Difficulty of recruiting reviewers predicts review scores and editorial decisions at six journals of ecology and evolution. Scientometrics. 2017;113:465. doi: 10.1007/s11192-017-2489-5
  • 3. Schroter S, Tite L, Hutchings A, Black N. Differences in review quality and recommendations for publication between peer reviewers suggested by authors or by editors. JAMA. 2006;295:314. doi: 10.1001/jama.295.3.314
  • 4. Wager E, Parkin EC, Tamber PS. Are reviewers suggested by authors as good as those chosen by editors? Results of a rater-blinded, retrospective study. BMC Medicine. 2006;4:13. doi: 10.1186/1741-7015-4-13
  • 5. Rivara FP, Cummings P, Ringold S, Bergman AB, Joffe A, Christakis DA. A comparison of reviewers selected by editors and reviewers suggested by authors. The Journal of Pediatrics. 2007;151:202. doi: 10.1016/j.jpeds.2007.02.008
  • 6. Bornmann L, Daniel HD. Do author-suggested reviewers rate submissions more favorably than editor-suggested reviewers? A study on atmospheric chemistry and physics. PLOS ONE. 2010;5:e13345. doi: 10.1371/journal.pone.0013345
  • 7. Moore JL, Neilson EG, Siegel V, Associate editors at Journal of American Society of Nephrology. Effect of recommendations from reviewers suggested or excluded by authors. J Am Soc Nephrol. 2011;22:1598. doi: 10.1681/ASN.2011070643
  • 8. Kowalczuk MK, Dudbridge F, Nanda S, Harriman SL, Patel J, Moylan EC. Retrospective analysis of the quality of reports by author-suggested and non-author-suggested reviewers in journals operating on open or single-blind peer review models. BMJ Open. 2015;5:e008707. doi: 10.1136/bmjopen-2015-008707
  • 9. Liang Y. Should authors suggest reviewers? A comparative study of the performance of author-suggested and editor-selected reviewers at a biological journal. Learned Publishing. 2018;31:216. doi: 10.1002/leap.1166
  • 10. Shopovski J, Bolek C, Bolek M. Characteristics of peer review reports: Editor-suggested versus author-suggested reviewers. Sci Eng Ethics. 2020;26:709. doi: 10.1007/s11948-019-00118-y
  • 11. Zupanc GKH. Suggested reviewers: friends or foes? J Comp Physiol A Neuroethol Sens Neural Behav Physiol. 2022;208:463.
  • 12. Bonabeau E. Agent-based modeling: Methods and techniques for simulating human systems. Proceedings of the National Academy of Sciences. 2002;99:7280. doi: 10.1073/pnas.082080899
  • 13. Abar S, Theodoropoulos GK, Lemarinier P, O'Hare GMP. Agent based modelling and simulation tools: A review of the state-of-art software. Comput Sci Rev. 2017;24:13. doi: 10.1016/j.cosrev.2017.03.001
  • 14. Feliciani T, Luo J, Ma L, Lucas P, Squazzoni F, Marušić A, et al. A scoping review of simulation models of peer review. Scientometrics. 2019;121:555. doi: 10.1007/s11192-019-03205-w
  • 15. Barabási AL, Jeong H, Néda Z, Ravasz E, Schubert A, Vicsek T. Evolution of the social network of scientific collaborations. Physica A: Statistical Mechanics and its Applications. 2002;311:590. doi: 10.1016/S0378-4371(02)00736-7
  • 16. Peterson GJ, Pressé S, Dill KA. Nonuniversal power law scaling in the probability distribution of scientific citations. Proceedings of the National Academy of Sciences. 2010;107:16023. doi: 10.1073/pnas.1010757107
  • 17. Sekara V, Deville P, Ahnert SE, Barabási AL, Sinatra R, Lehmann S. The chaperone effect in scientific publishing. Proceedings of the National Academy of Sciences. 2018;115:12603. doi: 10.1073/pnas.1800471115
  • 18. Wang D, Barabási AL. The science of science. Cambridge University Press; 2021.
  • 19. Thompson A, May MR, Moore BR, Kopp A. A hierarchical Bayesian mixture model for inferring the expression state of genes in transcriptomes. Proceedings of the National Academy of Sciences. 2020;117:19339. doi: 10.1073/pnas.1919748117
  • 20. Mathews JC, Nadeem S, Pouryahya M, Belkhatir Z, Deasy JO, Levine AJ, et al. Functional network analysis reveals an immune tolerance mechanism in cancer. Proceedings of the National Academy of Sciences. 2020;117:16339. doi: 10.1073/pnas.2002179117
  • 21. Shannon CE. A mathematical theory of communication. The Bell System Technical Journal. 1948;27:379. doi: 10.1002/j.1538-7305.1948.tb00917.x
  • 22. Jaynes ET. Probability theory: The logic of science. Cambridge University Press; 2003.
  • 23. Caticha A. Entropic Physics: Probability, Entropy, and the Foundations of Physics; 2012. Available from: https://www.arielcaticha.com/my-book-entropic-physics.
  • 24. Pressé S, Ghosh K, Lee J, Dill KA. Principles of maximum entropy and maximum caliber in statistical physics. Reviews of Modern Physics. 2013;85:1115. doi: 10.1103/RevModPhys.85.1115
  • 25. Fox CW, Paine CET. Data from: Gender differences in peer review outcomes and manuscript impact at six journals of ecology and evolution; 2019. Available from: http://datadryad.org/stash/dataset/doi:10.5061/dryad.7p048mk.
  • 26. Farjam M. Replication Data for "Peer review and gender bias: A study on 145 scholarly journals"; 2021. doi: 10.7910/DVN/3IKRGI
  • 27. Frachtenberg E, McConville KS. Metrics and methods in the evaluation of prestige bias in peer review: A case study in computer systems conferences. PLOS ONE. 2022;17(2):1–29. doi: 10.1371/journal.pone.0264131
  • 28. Tamblyn R, Girard N, Qian CJ, Hanley J. Assessment of potential bias in research grant peer review in Canada. Canadian Medical Association Journal. 2018;190(16):E489–E499. doi: 10.1503/cmaj.170901
  • 29. Fox CW, Paine CET. Gender differences in peer review outcomes and manuscript impact at six journals of ecology and evolution. Ecology and Evolution. 2019;9(6):3599–3619. doi: 10.1002/ece3.4993
  • 30. Squazzoni F, Bravo G, Farjam M, Marusic A, Mehmani B, Willis M, et al. Peer review and gender bias: A study on 145 scholarly journals. Science Advances. 2021;7(2):eabd0299. doi: 10.1126/sciadv.abd0299
  • 31. Ginther DK, Schaffer WT, Schnell J, Masimore B, Liu F, Haak LL, et al. Race, ethnicity, and NIH Research Awards. Science. 2011;333(6045):1015–1019. doi: 10.1126/science.1196783
  • 32. Karvonen KL, Bonachea EM, Burris HH, Fraiman YS, Lee HC, Proaño A, et al. Addressing bias and knowledge gaps regarding race and ethnicity in neonatology manuscript review. Journal of Perinatology. 2022;42(11):1546–1549. doi: 10.1038/s41372-022-01420-7
  • 33. Skopec M, Issa H, Reed J, Harris M. The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis. Research Integrity and Peer Review. 2020;5(1). doi: 10.1186/s41073-019-0088-0
  • 34. Kowal M, Sorokowski P, Kulczycki E, Żelaźniewicz A. The impact of geographical bias when judging scientific studies. Scientometrics. 2021;127(1):265–273. doi: 10.1007/s11192-021-04176-7
  • 35. COPE Council. COPE Flowcharts and infographics—How to recognise potential manipulation of the peer review process—English; 2017. Available from: https://publicationethics.org/node/34311.

Decision Letter 0

Paolo Cazzaniga

13 Mar 2023

PONE-D-23-03685

How many submissions does it take to discover friendly suggested reviewers?

PLOS ONE

Dear Dr. Presse,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 27 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Paolo Cazzaniga

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Acknowledgments Section of your manuscript:

“This work is supported by funds from the National Institutes of Health (grant No. R01GM134426 and R01GM130745).”

We note that you have provided funding information that is currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

“This work is supported by funds from the National Institutes of Health (https://www.nih.gov/) grant No. R01GM134426 and R01GM130745 both awarded to SP.

The funder did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

3. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

I agree with the reviewers about the quality of the manuscript.

I therefore suggest to accept it after minor modifications, as suggested in the points raised by the reviewers.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Reviewer #1: In this work the authors combined agent-based models to emulate single-blind peer review processes and a Bayesian inference system to set a lower bound in terms of how many papers are necessary to an author to determine the "friendliness" of a reviewer. The results presented show that even in the most simple scenarios, the lower bound is too high for non very prolific authors.

The paper is well written in English.

I suggest to accept this paper after some minor revisions suggested below.

line 46, I think that the term "unfeasibly" should be changed into "unfeasible".

I don't think that a new paragraph is required at lines 84/85.

line 96, the authors forgot to put a "the" before easiest.

I don't know if such a problem is due to the submission process or intended by the authors, but in the latter case I suggest them to remove the red squares around the hyper-text links.

line 203, the authors state "for a fixed a data set", the 'a' after 'fixed' is not necessary.

I suggest to the authors to use vector images, such as pdf images, to strongly improve the readability of the figures and the overall quality of the paper.

In this work, the authors leveraged simple models and already obtained sound lower bounds. Nonetheless, I think it would be interesting to discuss how the lower bounds increase if even more complicated models are considered. I'm aware that this request can be out of the scope of the paper, but models characterized by more realistic rules and numbers can be of interest and give more realistic bounds. I suggest them to evaluate how the boundaries change if an author can suggest more than 3 reviewers and if a large number of impartial reviewers are considered; i.e., the set R is larger.

Reviewer #2: This is a very interesting and, I believe, novel exercise to ascertain how easy it is for authors to influence the publication outcome of their articles by suggesting potentially 'friendly' reviewers. Reassuringly, the answer based on the agent-based model used here is that it is extremely difficult. As the authors state, 'the single-blind review process is sufficiently reliable to allow authors to suggest their own reviewers without clouding or biasing the publication decision.’

Being a non-expert in agent-based modelling, I am not qualified to comment on the methodology used or the accuracy of the interpretation of the results. Actually this may not be too problematic, because many people for whom this work will be of interest may also lack training in ABM, and the authors might consider elaborating some concepts to help understanding. If any of my comments below betray my lack of expertise in ABM, I hope the authors will make allowance for this.

That aside, I do have a few observations and questions relating to the broader questions behind the authors’ work:

• They state that one of the challenges in exploring whether reviewers are positively or negatively inclined is that manuscript history and reviews are ‘kept under lock and key’. This is not correct; some of the studies they cite are based on actual journal data. Moreover, the PEERE group assembled a vast data set that, although now a few years old, could be used to explore the question (https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/3IKRGI). If the authors wish to comment in their Discussion on future possible avenues of study, they could encourage others to apply some of their methods to real data sets.

• The modelling is predicated on a binary ‘accept’ – ‘reject’ inclination on the part of reviewers. In practice there is more of a spectrum from positive to negative sentiment on the part of reviewers about a manuscript. This affects the applicability of the ABM and is a limitation that I don’t believe the authors commented on, unless I missed it.

• Authors may submit the same article to multiple journals. How much would this affect the outcome of the model used by the authors? ‘Submission’ and ‘manuscript’ are often used interchangeably, but it seems important to distinguish here between the actual manuscript and the act of submission. Moreover, different journals may set different expectations for reviewers even for the same manuscript – for example, reviewers for a journal that looks for novelty may evaluate the same manuscript differently from how they would evaluate it for a journal that looks only for sound science. This is another layer of stochasticity that should be mentioned.

• Could another layer of stochasticity be that a niche field will have a much smaller pool of reviewers, and possibly also reviewers may be either much more positively inclined towards the author’s work – because they know the author well – or much more negatively inclined towards it – because it competes directly with their own?

• I am not sure I understand why, in the model, authors have access only to the number of positive reviewers, and not also to the number of negative reviews.

• How would the results of the model be affected if potentially friendly reviewers were to decline to review (because, for example, they realise they have a conflict of interest)?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Daniele M Papetti

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2023 Apr 13;18(4):e0284212. doi: 10.1371/journal.pone.0284212.r002

Author response to Decision Letter 0


20 Mar 2023

We are thankful for the editorial efforts and the reviewers’ suggestions on our manuscript. Overall, both reviewers have indicated agreement with publication. In the attached “Response to Reviewers” file we give a point-by-point answer to each of their queries. Changes in the manuscript are marked in red.

Attachment

Submitted filename: Reply.pdf

Decision Letter 1

Paolo Cazzaniga

27 Mar 2023

How many submissions are needed to discover friendly suggested reviewers?

PONE-D-23-03685R1

Dear Dr. Pressé,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up-to-date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Paolo Cazzaniga

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: Thank you very much for responding to my suggestions and queries. I am happy to recommend the manuscript be published without any further changes.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Daniele M Papetti

Reviewer #2: Yes: Michael Willis

**********

Acceptance letter

Paolo Cazzaniga

3 Apr 2023

PONE-D-23-03685R1

How many submissions are needed to discover friendly suggested reviewers?

Dear Dr. Pressé:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Paolo Cazzaniga

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Cynical model results for different numbers of suggested reviewers.

    (PDF)

    S2 File. Results with a larger ratio of friendly reviewers.

    (PDF)

    S3 File. Sampling details.

    (PDF)

    S4 File. Quality results with different parameters.

    (PDF)

    S5 File. Errors and aggressive strategy for the fourth metric.

    (PDF)

    Attachment

    Submitted filename: Reply.pdf

    Data Availability Statement

    The code performing the simulation, inference, and generating figures is available on GitHub https://github.com/PessoaP/how_many_submissions.

