PLoS One. 2022 Oct 26;17(10):e0270618. doi: 10.1371/journal.pone.0270618

Publication games: In the web of reciprocity

Zoltán Barta 1,*
Editor: Alberto Baccini2
PMCID: PMC9604877  PMID: 36288263

Abstract

Current research assessment practices, which focus on one or a few related scientometric indices, foster questionable authorship practices, such as gifting authorship to people who have not contributed. An especially harmful one of these unethical practices is the formation of publication cartels, in which authors reciprocally offer each other gift authorship. Here, by developing a simple model and a simulation of the publication process, I investigate how beneficial cartels can be and what measures can be used to restrict them. My results indicate that publication cartels can significantly boost members’ productivity even if paper counts are weighted by the inverse of the number of authors (the 1/n rule). Nevertheless, applying the 1/n rule generates conflicts of interest both among cartel members themselves and between cartel members and non-members, which might lead to the self-purification of the academic publishing industry.

Introduction

Research integrity (ethical behaviour, sound methodology and rigorous peer review [1]) provides assurance that scientific activities lead to trustworthy and replicable results. Research integrity is, however, under threat because of how science currently operates. The recent, unprecedented expansion of science, exemplified, for instance, by the exponentially growing number of scientific articles [2], has given way to the widespread use of scientometrics for assessing the productivity and impact of researchers [3]. As science is usually funded from public resources, the desire to measure the performance of its actors is well justified. Assessing scientists by one or a few metrics, such as the number of publications or citations, combined with the hyper-competitiveness of science has, however, had somewhat unexpected consequences [4] (not only among scholars but even among scholarly institutions [5]).

As, among others, Charles Goodhart observed, when a metric becomes a target it ceases to be a good metric [2, 6]. This happens because people, in response to the introduction of a target, alter their behaviour to affect the metric directly instead of modifying the activity the metric was intended to capture [7]. In the recent corporatisation of science two such metrics have become dominant: the numbers of papers and of citations [8].

Goodhart’s law is well illustrated by the introduction of the number of papers as a measure of productivity in science. Using this measure rests on the assumption that the characteristics of scientific papers (such as length or number of coauthors) are fixed, and hence targeting more papers automatically leads to the generation of more new knowledge. Unfortunately, this is not what happened; scientists responded in unexpected, clearly rational, but sometimes unethical ways [9, 10]. For instance, they reduced the length of papers [2], i.e. they publish the same amount of knowledge in more papers (salami articles). Furthermore, manipulation of authorship appeared: offering authorship to those who did not contribute considerably to the given paper (honorary authorship) can quickly increase their number of publications, again without any increase in the knowledge produced [3, 4, 9, 10]. A possible sign of this questionable authorship practice is the recent rise in the number of authors per paper [2]. One may argue that more authors per paper is a sign of science becoming more interdisciplinary. A recent analysis, however, does not support this conclusion; the number of coauthors increases with time even after controlling for attributes related to the complexity of science [11]. Another reason for the increased number of coauthors might be the increased efficiency that follows from the greater division of labour that more authors make possible [12]. In this case, however, the number of papers per author should also increase, which seems not to be the case [2].

Questionable authorship practice, on the other hand, appears to be common. Recent surveys suggest that about 30% of authors have been involved in these unethical practices [4, 8–10, 13, 14]. One of these practices is ghost authorship, when someone who has contributed significantly to the article is excluded from the author list [15]. In other forms (honorary authorship) just the opposite happens: authorship is offered to people who have not (considerably) contributed to the work published [9, 10]. Several reasons can lie behind gifting authorship to someone. Junior authors might include more senior ones out of respect, or because they are forced to do so [16]. Senior authors may gift authorship to juniors to help them obtain post-doctoral scholarships or tenure [17].

A very efficient way to increase the number of publications is to practice honorary authorship reciprocally. The most organised form of this behaviour is the founding of publication cartels. A cartel is formed by a group of people who agree to mutually invite each other onto their own publications as guest authors, without any contribution. As under current assessment practice a coauthored paper counts as a whole publication for every coauthor on the author list, publication cartels can significantly boost the productivity of cartel members. This is the phenomenon called a ‘publication club’ by [12]. As the noun ‘club’ carries a positive connotation, I prefer the term ‘cartel’ for this understudied but highly unethical behaviour. A simple argument suggests that sharing the credit of a publication among the coauthors can decrease the incentive to form cartels [12]. The simplest sharing scheme is the 1/n rule, under which only a 1/n share of a publication is attributed to each of the n coauthors of the given paper [12].

In this paper I develop a simple model of publication cartels to understand how effective they are at increasing members’ productivity and whether it is possible to eliminate them by applying different measures, such as the 1/n rule. I then extend my study to situations more closely resembling real-world conditions by developing a computer simulation of cartels. I use this simulation to investigate how different metrics of productivity affect authors outside of cartels.

The model

We compare the publication performance of two authors, author A1 and author B1. Authors work in separate groups (group A and group B respectively) each of which contains Gi (i = A, B) people (including the focal author). Each author in group A produces pA papers in a year by collaborating with cA authors from outside of the group, i.e. their primary production is pA. Similarly, each author in group B primarily produces pB papers by collaborating with cB people outside of the group. The difference between authors A1 and B1 is that authors in group A work independently of each other, while authors in group B invite all other group members to be a coauthor on their papers independently of their contribution to that paper (Fig 1). In other words, authors in group B form a publication cartel.

Fig 1. The publication relationships in groups A and B of the model.


Nodes are authors, while edges symbolise shared publications. Groups of four authors are marked by the underlying shapes. In group A authors work with several coauthors from outside the group but do not invite group mates to be coauthors on their own papers. In contrast, authors in group B form a publication cartel, i.e. each author invites all other authors in the group to be a coauthor (note the connections between group members).

For simplicity, we assume that GA = GB = G (G > 1), pA = pB = p and cA = cB = c, i.e. the author groups are of the same size, authors produce the same number of primary papers and they have the same number of coauthors from outside the group. In this case the total numbers of papers produced by the groups, the group productivities, are equal (Gp = GApA and Gp = GBpB, respectively). The total numbers of papers (co)authored by authors A1 and B1 are, however, different. Author A1 writes nA = pA = p papers. On the other hand, author B1 (co)authors nB = pB + (GB − 1)pB = GBpB = Gp papers. In the case of author B1 the term (GB − 1)pB represents the papers on which author B1 is invited as an honorary author. It is easy to see that as long as G > 1, author B1 will have many more papers than author A1, i.e. nB > nA.

A natural way to correct for this bias is to take into account the number of authors each paper has: instead of counting the papers themselves as a measure of productivity, one sums the inverse of the number of authors (weighted number of papers or the 1/n rule, [12, 18]):

w = \sum_{i=1}^{n} \frac{1}{1+C}.

Here, the 1 in the denominator stands for the focal author, while C is the number of coauthors. For author A1, C = cA = c. On the other hand, for author B1, C = (GB − 1) + cB = (G − 1) + c. If c = 0, then dividing by the number of authors works as intended: we regain the number of papers the authors produced without inviting their group members.

For author A1:

w_A = \sum_{i=1}^{n_A} \frac{1}{1} = \sum_{i=1}^{n_A} 1 = n_A = p.

For author B1:

w_B = \sum_{i=1}^{n_B} \frac{1}{1+G-1} = \sum_{i=1}^{Gp} \frac{1}{G} = \frac{Gp}{G} = p.

On the other hand, if the focal authors collaborate with others outside of their groups, as Fig 1 illustrates, the situation changes (Fig 2):

Fig 2. Publication performance when authors collaborate with people from outside of their groups.


Weighted publication performance of authors A1 and B1 (a). Weighted publication performance of author B1 relative to that of author A1 (b). The weighted publication performance is calculated by taking into account the number of coauthors. In this calculation first authorship can be rewarded by a bonus, b. If b = 0, then each coauthor receives the same weight for a given publication. On the other hand, if b > 0, the weight of the first author is higher than that of the coauthors, i.e. the first author of a paper is rewarded. In subpanel (a) b = 0.2; in (b) b is given on the right margin.

For author A1:

w_A = \sum_{i=1}^{p} \frac{1}{1+c} = \frac{p}{1+c}.

For author B1:

w_B = \sum_{i=1}^{Gp} \frac{1}{G+c} = \frac{Gp}{G+c}.

The weighted number of papers produced by author B1 relative to author A1, wB/wA, is:

\frac{w_B}{w_A} = \frac{Gp}{G+c} \Big/ \frac{p}{1+c} = \frac{Gp}{G+c} \times \frac{1+c}{p} = \frac{G(1+c)}{G+c} = \frac{G+Gc}{G+c}.

The ratio wB/wA is greater than one if G + Gc > G + c, which is always true if c > 0 (as we already assumed G > 1, Fig 2). This means that if authors collaborate with anyone from outside their groups, then authors in group B will always have a higher publication performance than authors in group A, despite the fact that the two groups have the same productivity.

To compensate for this productivity bias, author A1 would have to produce wB/wA times more papers, pA = pB(G + Gc)/(G + c). This surplus of papers needed to compensate for the productivity bias increases with c and tends to G.

Authors in group A can also compensate for the productivity bias by decreasing the number of their collaborators from outside the group. This reduction must be by a factor of G: cA = cB/G.
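To make the algebra above concrete, the following minimal Julia sketch evaluates wA, wB and their ratio under the 1/n rule; the helper names (w_A, w_B) and the parameter values are illustrative assumptions, not taken from the paper.

```julia
# 1/n-weighted publication credit of the two focal authors in the model.
w_A(p, c)    = p / (1 + c)        # author A1: p papers, each with c outside coauthors
w_B(p, G, c) = G * p / (G + c)    # author B1: Gp papers, each with (G - 1) + c coauthors

p, G, c = 3, 4, 5                 # illustrative values only
ratio = w_B(p, G, c) / w_A(p, c)  # equals G * (1 + c) / (G + c), > 1 whenever c > 0 and G > 1
@show w_A(p, c) w_B(p, G, c) ratio
# Compensation discussed above: author A1 needs `ratio` times more primary papers,
# or G times fewer outside collaborators, to match author B1's weighted score.
```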

A useful modification of the 1/n rule is the so-called first-author-emphasis scheme [18]. In this scheme, the first author receives a bonus, b, to recognise their leading role in producing the paper. Under this scheme the weighted publication performance of author A1, wA, is:

w_A = \sum_{i=1}^{n_A} \left( b + \frac{1-b}{1+c_A} \right) = \sum_{i=1}^{p} \left( b + \frac{1-b}{1+c} \right) = \frac{p(1+bc)}{1+c}.

Here, the first author, who is author A1 for all of these papers, gets a bonus b for contributing most to the paper, and the rest of the credit, 1 − b, is divided equally among all authors (including the first author, [18]). The weighted publication performance of author B1 under the first-author scheme, wB, is:

w_B = \sum_{i=1}^{p_B} \left( b + \frac{1-b}{G_B+c_B} \right) + \sum_{i=1}^{(G_B-1)p_B} \frac{1-b}{G_B+c_B},

where the first term gives the credit for first-authored papers, while the second is for the coauthored papers. After simplification, we obtain:

w_B = \frac{p(G+bc)}{G+c}.

By comparing wB to wA it is easy to show that author B1 will always have a higher publication performance than author A1, i.e. wB/wA > 1, if G > 1 and b < 1. Further analysis,

\frac{w_B}{w_A} = \frac{p(G+bc)}{G+c} \times \frac{1+c}{p(1+bc)} = \frac{G + c[G + b(1+c)]}{G + c[1 + b(G+c)]},

shows that for wB/wA > 1, the condition c > 0 must also be fulfilled. As numerical computation indicates (Fig 2), the bias is reduced by introducing the first-authorship bonus, but it remains substantial. Reference [18], for instance, recommends a bonus of b = 0.2, but in this case author B1 still receives around 50% more credit for the same work than author A1 does. The difference between authors A1 and B1 decreases as b increases (Fig 2b), but then coauthorship is worth less and less, undermining the possible benefits of collaboration.
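The first-author-emphasis formulas can be checked numerically with a similar sketch; again the function names and parameter values are assumptions for illustration, with b = 0.2 used because it is the bonus recommended by [18].

```julia
# First-author-emphasis scheme of [18]: the first author gets a bonus b and the
# remaining credit 1 - b is shared equally among all authors of the paper.
w_A_fa(p, c, b)    = p * (b + (1 - b) / (1 + c))               # = p(1 + bc)/(1 + c)
w_B_fa(p, G, c, b) = p * (b + (1 - b) / (G + c)) +             # first-authored papers
                     (G - 1) * p * (1 - b) / (G + c)           # gifted coauthorships

p, G, c, b = 3, 4, 5, 0.2
@show w_A_fa(p, c, b) w_B_fa(p, G, c, b) w_B_fa(p, G, c, b) / w_A_fa(p, c, b)
# The ratio stays above 1 for any G > 1, c > 0 and b < 1, in line with the analysis above.
```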

To summarise, this simple model shows that the formation of publication cartels can be an advantageous, but unethical, strategy to increase publication productivity even if one controls for the number of coauthors on papers. Note, however, that this model may be overly simplified, as all authors have the same primary productivity and we have not investigated how the productivity of authors outside the cartels changes as a consequence of founding cartels. To obtain a more realistic understanding of publication cartels I next develop a simulation of the publication process.

The simulation

We start simulating the publication process by constructing a publication matrix of papers and authors, MP (Fig 3). Element aij of MP is one if author j is on the author list of paper i and zero otherwise. MP can therefore be considered a matrix representation of a bipartite graph, where rows and columns represent the two types of nodes, papers and authors, respectively. To construct MP we consider a community of c authors. The number of papers written by author j (j = 1, 2, …, c) in the community is given by kj. For the community we construct an empty matrix (all aij = 0) of size p × c, where p > max(kj). Then, for each column j, we randomly distribute kj ones over the p empty places. Having constructed MP, we create a weighted collaboration (or coauthorship) matrix, MC, by projecting MP onto the author nodes. The weights of MC, Jij, are Jaccard similarity indices calculated between each pair of authors i and j (i ≠ j) as

J_{ij} = \frac{|P_i \cap P_j|}{|P_i \cup P_j|}.

Fig 3. The construction of publication network.


The top left panel shows the publication matrix, MP. Each row and column of this matrix represents a paper and an author, respectively. Values of 1 indicate that an author is on the author list of a given paper, while dots symbolise zeros. From the publication matrix one can derive the collaboration matrix, MC (bottom right panel), by calculating the Jaccard similarity (top right) for each possible pair of authors. The bottom left panel shows the resulting weighted, undirected collaboration graph, GC. Node size is proportional to the number of coauthors (degree), while edge width shows the strength of the connection between two authors (i.e. it is proportional to their Jaccard similarity). The red rectangles in the matrices exemplify the calculation of the Jaccard similarity.

Here, Pi is the set of papers to which author i contributed. In other words, the weight between two authors is the proportion of shared papers to the total number of unique papers to which either author i or j contributed. It varies between zero (no common publication between authors i and j) and one (all publications by the two authors are shared). Note that the Jaccard similarity between authors in group A of the above model is zero, while between authors in group B it is one. From MC we construct a collaboration graph, GC.
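The matrix construction and the Jaccard projection described above can be sketched in Julia as follows; the function names (publication_matrix, collaboration_matrix) are hypothetical, and the published PaperPump code may be organised differently.

```julia
using Random

# Random publication matrix: c authors, author j appears on k[j] of the p papers.
function publication_matrix(p, k::Vector{Int}; rng = Random.default_rng())
    c = length(k)
    MP = zeros(Int, p, c)                      # papers x authors
    for j in 1:c
        rows = randperm(rng, p)[1:k[j]]        # k[j] randomly chosen papers for author j
        MP[rows, j] .= 1
    end
    return MP
end

# Projection onto authors: Jaccard similarity of the paper sets of each author pair.
function collaboration_matrix(MP)
    c = size(MP, 2)
    MC = zeros(Float64, c, c)
    for i in 1:c, j in 1:c
        i == j && continue
        Pi, Pj = MP[:, i] .== 1, MP[:, j] .== 1
        u = count(Pi .| Pj)                    # |Pi ∪ Pj|
        MC[i, j] = u == 0 ? 0.0 : count(Pi .& Pj) / u
    end
    return MC
end

MP = publication_matrix(60, fill(3, 30))       # p = 60, c = 30, all k_j = 3
MC = collaboration_matrix(MP)
```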

After creating a random publication matrix, I simulate the formation of cartels as follows (Fig 4). First, I choose several authors to form the set κ, the set of authors from the community who form the cartel (i.e. the cartel members). The size of the cartel is given by |κ|. Then, with probability pc, I change each element aij = 0 of MP to aij = 1 where the following conditions are met: j ∈ κ and at least one aik = 1 with k ∈ κ and k ≠ j. I project the resulting publication matrix, MP, to MC and construct the corresponding collaboration graph, GC.

Fig 4. The formation of cartels.


The panels on the left illustrate a publication network without a cartel. The panels on the right show how a cartel between authors A1, A4 and A6 can be formed: author A6 invites authors A1 and A4 to be coauthors on paper p2, while author A1 does the same with authors A4 and A6 on paper p8. The small red rectangles mark the authorships gained this way. The bottom right panel shows the resulting collaboration graph. Node size is proportional to the number of coauthors (degree), while edge width shows the strength of the connection between two authors (i.e. it is proportional to their Jaccard similarity). The red edges connect cartel members. Note (i) the strong connections between members and (ii) that adding a cartel also changes the connections of non-members.
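A minimal sketch of the cartel-formation step, building on the previous sketch; the function name add_cartel! is an assumption, not taken from the PaperPump repository.

```julia
# Cartel formation: with probability pc, each cartel member not yet on a paper is
# added to it, provided at least one other member is already among its authors.
function add_cartel!(MP, cartel::Vector{Int}, pc; rng = Random.default_rng())
    for i in 1:size(MP, 1)                                   # loop over papers
        any(MP[i, m] == 1 for m in cartel) || continue       # a member must already be on it
        for j in cartel
            if MP[i, j] == 0 && rand(rng) < pc
                MP[i, j] = 1                                  # gift authorship to member j
            end
        end
    end
    return MP
end

κ = [1, 2, 3, 29, 30]                      # the cartel used in Figs 5 and 6
MP_cartel = add_cartel!(copy(MP), κ, 1.0)  # MP from the previous sketch
```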

The construction of publication networks and the formation of cartels were repeated 1000 times for a given set of parameter values. After constructing the graphs and adding the cartels, I calculated the number of papers and the weighted number of papers for each author in the community, without and with the cartel formed, for each repetition. These measures were then averaged for each author across the 1000 repetitions. To investigate the effects of cartel formation I compare these averaged measures without and with cartel formation. The simulation was implemented in the Julia programming language [19] and is available on GitHub (https://github.com/zbartab/PaperPump).
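The two productivity metrics compared throughout, the raw and the 1/n-weighted paper counts per author, can be read off the publication matrix as in the following sketch (assumed helper names):

```julia
# Raw and 1/n-weighted paper counts per author, read off a publication matrix.
papers(MP) = vec(sum(MP, dims = 1))                    # number of papers per author

function weighted_papers(MP)
    nauthors = vec(sum(MP, dims = 2))                  # number of authors per paper
    return [sum(1 ./ nauthors[MP[:, j] .== 1]) for j in 1:size(MP, 2)]
end

# Effect of the cartel on each author, one realisation; averaging such differences
# over 1000 random publication matrices gives the quantities compared below.
Δn = papers(MP_cartel)          .- papers(MP)
Δw = weighted_papers(MP_cartel) .- weighted_papers(MP)
```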

By setting all kj = k and p ≫ k we can simulate the case of equal productivity and no collaboration from outside of the group. Here the simulation produces the same results as the model: the productivity of cartel members increases, but this can be accounted for by using the weighted number of publications (results not shown).

To induce collaboration between authors I next set p < \sum_{j=1}^{c} k_j = ck (the authors still have the same productivity prior to cartel formation). Under these conditions, if we consider the number of papers, the productivity of cartel members increases significantly when a cartel is formed, while the productivity of non-members does not change (Fig 5). In accordance with the model, the productivity of cartel members increases even if we consider the weighted number of papers. Interestingly, the productivity of non-members decreases when a cartel is formed (Fig 5).

Fig 5. The effect of cartel formation on the productivity of cartel members and non-members: Equal prior productivity of authors.


The top panels illustrate the collaboration graph without and with cartel formation. The middle panels show how the number of papers produced by members and non-members changes after founding a cartel. The bottom panels illustrate the same but using the weighted number of papers as a measure of productivity. Collaboration graphs were formed with c = 30, k = 3, p = 60, pc = 1 and κ = {1, 2, 3, 29, 30} (the cartel composition is arbitrary, as all authors are the same in terms of prior productivity). Averages of 1000 repetitions are plotted and different colours represent different authors.

I further generalise the simulation results by setting the prior productivity of authors to different values (Fig 6). Using the number of papers as the metric leads to the same conclusions: members’ productivity increases after cartel formation, while non-members’ productivity does not change. Using the weighted number of papers, similarly to the previous case, the productivity of non-members decreases as a consequence of cartel formation. Cartel members’ productivity increases with cartel formation, but this increase is uneven: members with low prior productivity see a significant increase, while the increase for members with high prior productivity is marginal (Fig 6). This suggests that different individuals might benefit differently from cartel formation.

Fig 6. The effect of cartel formation on the productivity of cartel members and non-members: Prior productivity of authors differs.


The top panels illustrate the collaboration graph without and with cartel formation. The middle panels show how the number of papers produced by members and non-members changes after founding a cartel. The bottom panels illustrate the same but using the weighted number of papers as a measure of productivity. Collaboration graphs were formed with c = 30, kj = j, p = 60, pc = 1 and κ = {1, 2, 3, 29, 30} (the cartel composition illustrates the case when authors of very low and very high productivity form a cartel). Averages of 1000 repetitions are plotted and different colours represent different authors.

To investigate individual differences further, I randomly created 1000 cartels and, as above, repeated the simulation of the publication process with each of these cartels 100 times. From these simulations I calculated the difference in the weighted number of papers with and without the cartel for each cartel member, averaged over the repetitions of each cartel. This value represents the effect of cartel formation, i.e. how an individual’s weighted number of publications would change if they participated in the given cartel compared to the case of no cartel formation. I characterised cartels by the mean and standard deviation (SD) of their members’ prior productivity, kj. A low mean indicates a cartel formed by low-productivity individuals, while a high mean signals just the opposite. Cartels with a low SD represent a uniform group of individuals, i.e. everybody has similar prior productivity, while a high SD means diverse cartels where the members’ prior productivities differ widely.
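A sketch of this random-cartel experiment, reusing the helpers assumed above; the cartel size, number of repetitions and the characterisation by mean and SD follow the description in the text, while the function name cartel_experiment is hypothetical.

```julia
using Random, Statistics

# Draw a random cartel, simulate the publication process with and without it, and
# record the members' mean/SD of prior productivity together with their average
# gain in weighted paper count.
k = collect(1:30)                                    # prior productivity k_j = j
function cartel_experiment(; csize = 5, reps = 100, p = 60, pc = 1.0,
                            rng = Random.default_rng())
    cartel = sort(randperm(rng, length(k))[1:csize])
    Δ = zeros(csize)
    for _ in 1:reps
        MP  = publication_matrix(p, k; rng = rng)
        MPc = add_cartel!(copy(MP), cartel, pc; rng = rng)
        Δ .+= (weighted_papers(MPc) .- weighted_papers(MP))[cartel]
    end
    return (members = cartel, mean_k = mean(k[cartel]),
            sd_k = std(k[cartel]), gain = Δ ./ reps)
end

res = cartel_experiment()                            # one of the 1000 random cartels
```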

As Fig 7 (left panel) shows, the change in the weighted number of papers increases with the mean prior productivity of the cartel, i.e. cartel formation is most beneficial for a given individual if their cartel partners are highly productive. Members’ gains, however, are far from equal; individuals with low prior productivity benefit most from cartel formation. Individuals with high prior productivity can even lose with cartel formation if their partners’ mean prior productivity is low (note the negative values of change for individuals with k = 29, 30). The diversity of cartels influences the benefit of cartel formation differently for different individuals (Fig 7, right panel). Cartel members of low productivity gain more and more as the diversity of their cartel increases. If a low-productivity individual finds itself in a diverse cartel, that necessarily means it is teamed up with highly productive individuals who produce many papers on which authorship can be gifted to the low-productivity individual. For individuals of intermediate productivity, cartel diversity has no effect. On the other hand, highly productive individuals can even lose in diverse cartels. When a highly productive individual is in a diverse cartel, most of its partners are of low productivity, whose papers cannot contribute significantly to the gain of the high-productivity author.

Fig 7. Effect of cartel formation on productivity for differently productive cartel members.


The left panel shows the difference in the weighted number of papers between simulations with and without cartels for individuals of different prior productivity (see legend on the right panel) as a function of the mean prior productivity of cartel members. Note the negative values (i.e. cartel formation decreases productivity) for highly productive members (k = 29, 30) at low average cartel productivity. The right panel shows the same change as a function of cartel diversity (measured as the standard deviation (SD) of members’ prior productivity). Note that the effect of cartel formation increases with cartel diversity for individuals of low prior productivity, while it decreases for highly productive members (it can even be negative at high cartel diversity). Collaboration graphs were formed with c = 30, kj = j, p = 60, pc = 1 and cartels formed randomly. Each point is the average of 100 simulations for the same cartel. For clarity, only data for authors with extremely low, intermediate and high prior productivity are shown.

Conclusion

Under the current climate of widespread use of scientometric indices to assess academics, publication cartels can provide huge, although unethical, benefits. As my results indicate, cartel members, by reciprocally inviting each other as honorary authors, can easily boost their own publication productivity, i.e. the number of papers they appear on as (co)authors. As many scientometrics currently in use are strongly associated with the number of publications a scholar has produced [3, 8, 9], becoming a cartel member can have a very general positive effect on one’s academic career.

One may consider that fighting off cartels is unnecessary on the grounds of “no harm, no foul”: research integrity may not be inevitably damaged by cartel foundation, and cartels can produce high-quality research. Nevertheless, cartels do distort the research competition landscape. This might mean that highly competent, talented researchers who are not members of any cartel are forced into inferior roles, which, in turn, compromises society’s ability to produce more novel and innovative results. Therefore, cartel formation should be restricted.

Fighting against cartels is, however, not trivial. First, identifying cartels, let alone proving that a group of researchers is cartelling, is inherently difficult. Investigating the properties of coauthorship networks might help, as indicated here by the strong connections among cartel members in the simulated collaboration networks. Nevertheless, a possible way to restrict cartels without identifying them is to use scientometrics that penalise cartel formation. An obvious choice is to weight the number of publications an author has by the inverse of the number of authors on these papers, the so-called 1/n rule [12, 18]. As my calculation shows, this rule is only fully effective if coauthorship occurs exclusively between cartel members. As soon as collaboration is widespread among both cartelling and non-cartelling authors, my results indicate that the 1/n rule breaks down and cartel members still gain undeserved benefits. On the other hand, my computations also show that the 1/n rule can still be useful against publication cartels, because it generates conflicts of interest among the parties. Collaborators of cartel members suffer a loss if the 1/n rule is applied, which might force them either to change the unethical behaviour of cartel members or to stop collaborating with them.

The computations also show that, if the 1/n rule is used to rate scholars, authors of different productivity should pursue different strategies when forming or joining cartels. Low-productivity authors do best by being the only low-productivity member in a cartel of high-productivity authors. On the other hand, for prolific authors the best strategy is to form cartels among themselves, because establishing a cartel with low-productivity authors can have a detrimental effect on their own productivity. As highly productive scientists can be assumed to have more power than their low-productivity fellows, i.e. they are able to exclude weakly performing authors from their circles, it is expected that under the 1/n rule cartels will be formed by authors of similar prior productivity. The results also suggest that scholars of low productivity gain the most from cartel formation, so founding cartels might be most common among them. Prolific authors might, however, also have an interest in forming cartels to avoid being overtaken by cartel-forming lower-productivity authors. As these arguments suggest, the introduction of the 1/n rule for researcher assessment can generate a dynamic publication landscape where several conflicts of interest arise. This is very different from the current assessment scheme, where everybody’s interest is the same. Currently, it is not entirely clear whether this dynamism can help fight against cartel formation.

To summarise, I strongly argue for using the 1/n rule as the basis of scientometrics. Unfortunately, its general use is opposed by many parties for many reasons. It remains to be seen whether these reasons are valid, but my calculations indicate that the application of the 1/n rule can generate processes which may ultimately lead to the self-purification of the academic publishing industry. Of course, abandoning the current, metric-only research assessment system can also help.

Acknowledgments

I thank Miklós Bán, Gábor Lövei, Tibor Magura, Jácint Tökölyi and an anonymous referee for reviewing a previous version of the manuscript.

Data Availability

Computer code needed to reproduce this study is available from GitHub (https://github.com/zbartab/PaperPump).

Funding Statement

ZB was supported by the Thematic Excellence Programme (TKP2020-IKA-04) of the Innovációs és Technológiai Minisztérium, Hungary and the Thematic Excellence Programme (TKP2021-NKTA-32) of the Nemzeti Kutatási, Fejlesztési és Innovációs Alap, Hungary. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Szomszor M, Quaderi N. Research Integrity: Understanding our shared responsibility for a sustainable scholarly ecosystem. Institute for Scientific Information; 2020. Available from: https://clarivate.com/wp-content/uploads/2021/02/ISI-Research-Integrity-Report.pdf.
2. Fire M, Guestrin C. Over-optimization of academic publishing metrics: observing Goodhart’s Law in action. GigaScience. 2019;8(6). doi: 10.1093/gigascience/giz053
3. Aubert Bonn N, Bouter L. Research assessments should recognize responsible research practices—Narrative review of a lively debate and promising developments. MetaArXiv; 2021. Available from: https://osf.io/82rmj.
4. Biagioli M, Kenney M, Martin BR, Walsh JP. Academic misconduct, misrepresentation and gaming: A reassessment. Research Policy. 2019;48(2):401–413. doi: 10.1016/j.respol.2018.10.025
5. Faria JR, Mixon FG. Opportunism vs. Excellence in Academia: Quality Accreditation of Collegiate Business Schools. American Business Review. 2022;25(1):4–24. doi: 10.37625/abr.25.1.4-24
6. Edwards MA, Roy S. Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition. Environmental Engineering Science. 2017;34(1):51–61. doi: 10.1089/ees.2016.0223
7. Werner R. The focus on bibliometrics makes papers less useful. Nature. 2015;517(7534):245. doi: 10.1038/517245a
8. Grossman GD, DeVries DR. Authorship decisions in ecology, evolution, organismal biology and natural resource management: who, why, and how. Animal Biodiversity and Conservation. 2019;42(2):337–346. doi: 10.32800/abc.2019.42.0337
9. Fong EA, Wilhite AW. Authorship and citation manipulation in academic research. PLOS ONE. 2017;12(12):e0187394. doi: 10.1371/journal.pone.0187394
10. Gopalakrishna G, Riet Gt, Vink G, Stoop I, Wicherts J, Bouter L. Prevalence of questionable research practices, research misconduct and their potential explanatory factors: a survey among academic researchers in The Netherlands. MetaArXiv; 2021. Available from: https://osf.io/preprints/metaarxiv/vk9yt/.
11. Papatheodorou SI, Trikalinos TA, Ioannidis JPA. Inflated numbers of authors over time have not been just due to increasing research complexity. Journal of Clinical Epidemiology. 2008;61(6):546–551. doi: 10.1016/j.jclinepi.2007.07.017
12. de Mesnard L. Attributing credit to coauthors in academic publishing: The 1/n rule, parallelization, and team bonuses. European Journal of Operational Research. 2017;260(2):778–788. doi: 10.1016/j.ejor.2017.01.009
13. Halaweh M. Actual Researcher Contribution (ARC) Versus the Perceived Contribution to the Scientific Body of Knowledge. In: Ceci M, Ferilli S, Poggi A, editors. Digital Libraries: The Era of Big Data and Data Science. Communications in Computer and Information Science. Cham: Springer International Publishing; 2020. p. 93–102.
14. Marušić A, Bošnjak L, Jerončić A. A Systematic Review of Research on the Meaning, Ethics and Practices of Authorship across Scholarly Disciplines. PLOS ONE. 2011;6(9):e23477. doi: 10.1371/journal.pone.0023477
15. Jabbehdari S, Walsh JP. Authorship Norms and Project Structures in Science. Science, Technology, & Human Values. 2017;42(5):872–900. doi: 10.1177/0162243917697192
16. Pan SJA, Chou C. Taiwanese Researchers’ Perceptions of Questionable Authorship Practices: An Exploratory Study. Science and Engineering Ethics. 2020;26(3):1499–1530. doi: 10.1007/s11948-020-00180-x
17. Von Bergen CW, Bressler MS. Academe’s unspoken ethical dilemma: author inflation in higher education. Research in Higher Education Journal. 2017;32:17.
18. Vavryčuk V. Fair ranking of researchers and research teams. PLoS ONE. 2018;13(4):e0195509. doi: 10.1371/journal.pone.0195509
19. Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: A Fresh Approach to Numerical Computing. SIAM Review. 2017;59(1):65–98. doi: 10.1137/141000671

Decision Letter 0

Alberto Baccini

26 Jul 2022

PONE-D-22-16973
Publication games: in the web of reciprocity
PLOS ONE

Dear Dr. Barta,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Reviewer #1 suggests that the paper should be accepted. I directly read the paper and I think that it should be slightly revised before publication. 

1) The paper omitted completely the technicalities of simulations. I think that you should add information about the technique and code you used. This probably led to some difficulties in reading Figure 5 and Figure 6.

2) In all the figures, the size of node changes. I assume it is proportional to the weighted number of papers authored by a node author. Moreover, the use of different colours in the lower panels of figures 5- and 6 is not explicitly addressed.  Finally, the choice of representing a specific cartel in the graph is not commented at all.

3) There are some minor inconsistencies in the notation (use of ’ instead of  ').

4) As for discussion and policy indication, in I understand correctly your analysis, the suggestion that the adoption of the rule of 1/n may lead to self-purification is not consistent with your results. The conflict of interest induced by the rule is valid only for authors of different productivity. As you explicitly stated, if a group of similar productivity authors forms a cartel, it boosts the productivity of its members. Hence, there is a clear strategy that similar-productivity authors may adopt for gaining positions in the publication game. Moreover, in a dynamic game, where people of similar productivity gain position, also the higher-productivity authors may have an interest to form cartels in view of avoiding being reached by cartels formed by lower-productivity authors.

Please submit your revised manuscript by Sep 09 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Alberto Baccini, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at http://journals.plos.org/plosone/s/latex.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper deals with an important distortion in the production of science, which is the formation of cartel publications. Besides individual incentives associated with higher productivity leading to higher salaries and promotion, there are also institutional incentives to push faculty to increase their research output, such as public founding agencies and/or college associations membership.

See, for example, Besancenot et al. (2009) Why Business Schools do so much research: A signaling Explanation (2009) Research Policy 38, 1093-1101

Faria, J. and F. Mixon (2022) Opportunism vs. excellence in academia: Quality accreditation of collegiate business schools (2022) American Business Review, open source.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Oct 26;17(10):e0270618. doi: 10.1371/journal.pone.0270618.r002

Author response to Decision Letter 0


20 Sep 2022

Alberto Baccini, Ph.D.

Academic Editor

PLOS ONE

__RE: PONE-D-22-16973, "Publication games: in the web of reciprocity"__

Dear Dr. Baccini,

Many thanks for your effort in dealing with my manuscript and your overall positive opinion. Below I respond to all of your and the reviewer's comments (typed in italics) in detail. Line numbers correspond to the marked-up copy of the MS.

_1) The paper omitted completely the technicalities of simulations. I think that you should add information about the technique and code you used. This probably led to some difficulties in reading Figure 5 and Figure 6._

Thank you for pointing out this deficiency in the MS. Now I added a paragraph (l. 171-179) about the simulations. During this I recognised that simulations were performed only once so I repeated them 1000 times. As a result my conclusion on Fig 6 changed a bit (l. 194-202), so I performed more simulations to clarify this which resulted in a new figure (Fig 7) and some more text (l. 203-231). These results show that the application of 1/n rule can have different effect of authors of different prior productivity. I now discuss these findings in the Conclusions (l. 263-286).

_2) In all the figures, the size of node changes. I assume it is proportional to the weighted number of papers authored by a node author. Moreover, the use of different colours in the lower panels of figures 5- and 6 is not explicitly addressed. Finally, the choice of representing a specific cartel in the graph is not commented at all._

You are right again here. I added to the figure legends that node size is proportional to the number of coauthors (degree). Use of different colours are also clarified in the legends of Fig 5 and 6 as well as the use of specific cartels are justified here.

_3) There are some minor inconsistencies in the notation (use of ’ instead of ')._

These are corrected now.

_4) As for discussion and policy indication, in I understand correctly your analysis, the suggestion that the adoption of the rule of 1/n may lead to self-purification is not consistent with your results.. The conflict of interest induced by the rule is valid only for authors of different productivity. As you explicitly stated, if a group of similar productivity authors forms a cartel, it boosts the productivity of its members. Hence, there is a clear strategy that similar-productivity authors may adopt for gaining positions in the publication game. Moreover, in a dynamic game, where people of similar productivity gain position, also the higher-productivity authors may have an interest to form cartels in view of avoiding being reached by cartels formed by lower-productivity authors._

You are absolutely right here, I modified the Conclusions accordingly taking into account the new results presented in Fig 7 as well.

_Reviewer #1: The paper deals with an important distortion in the production of science, which is the formation of cartel publications. Besides individual incentives associated with higher productivity leading to higher salaries and promotion, there are also institutional incentives to push faculty to increase their research output, such as public founding agencies and/or college associations membership. See, for example, Besancenot et al. (2009) Why Business Schools do so much research: A signaling Explanation (2009) Research Policy 38, 1093-1101 Faria, J. and F. Mixon (2022) Opportunism vs. excellence in academia: Quality accreditation of collegiate business schools (2022) American Business Review, open source._

Thank you for pointing out these wider implications. Faria & Mixon is referenced now to call attention for this phenomenon too.

I hope that after these modifications the MS is now suitable for publication in PLOS ONE.

Sincerely yours,

Zoltan Barta

Attachment

Submitted filename: PP-MS_metrics-plosone-rebutal.docx

Decision Letter 1

Alberto Baccini

11 Oct 2022

Publication games: in the web of reciprocity

PONE-D-22-16973R1

Dear Dr. Barta,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Alberto Baccini, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: n/a

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

Acceptance letter

Alberto Baccini

17 Oct 2022

PONE-D-22-16973R1

Publication games: in the web of reciprocity

Dear Dr. Barta:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Prof. Alberto Baccini

Academic Editor

PLOS ONE
