PLOS ONE. 2022 May 5;17(5):e0267312. doi: 10.1371/journal.pone.0267312

And the credit goes to … - Ghost and honorary authorship among social scientists

Gernot Pruschak 1,2,3,*, Christian Hopp 1,3
Editor: Alberto Baccini
PMCID: PMC9070929  PMID: 35511807

Abstract

The proliferation of team-authored academic work has led to the spread of two kinds of authorship misconduct: ghost authorship, in which contributors are not listed as authors, and honorary authorship, in which non-contributors are listed as authors. Drawing on data from a survey of 2,222 social scientists from around the globe, we study the prevalence of authorship misconduct in the social sciences. Our results show that ghost and honorary authorship occur frequently in these fields and may be driven by social scientists’ misconceptions about authorship criteria: respondents frequently deviate from a common point of authorship reference (the ICMJE authorship criteria). On the one hand, they tend to award authorship more broadly to more junior scholars, while on the other hand, they may withhold authorship from senior scholars if those are engaged in collaborations with junior scholars. Authorship misattribution, even if it is based on a misunderstanding of authorship criteria rather than egregious misconduct, alters academic rankings and may constitute a threat to the integrity of science. Based on our findings, we call for journals to implement contribution disclosures and to define authorship criteria more explicitly to guide and inform researchers as to what constitutes authorship in the social sciences. Our results also hold implications for research institutions, universities, and publishers to move beyond authorship-based citation and publication rankings in hiring and tenure processes and instead to focus explicitly on contributions in team-authored publications.

Introduction

“Publish or Perish” characterizes the academic reward system across scientific disciplines. Hiring and tenure decisions depend upon publication metrics, and funding agencies are increasingly employing publications as their main criteria [1–5]. For academics, competition for publication spots in top-tier journals is fierce [6–8]. Responding to the increasingly complex nature of academic inquiries [9] and to the competition in academia, researchers are working more frequently in teams [10,11].

Team-authored academic work allows researchers to increase their productivity by profiting from specialization and the division of labor [12,13]. Bringing together scientists from different backgrounds enhances creativity and the depth of the work [14,15], as well as reproducibility, through mutual checks for potential errors [16]. Nonetheless, the rise of co-authorship has also opened the door to new forms of academic misconduct. In team-authored work, it has become possible to withhold authorship from heavily engaged contributors (ghost authorship) and/or to award authorship to someone who did not participate in a research project (honorary authorship) [17]. Because authorship misconduct obscures who contributed what, wrongly assigned co-authorship can distort individual credit.

In a 2005 study, Martinson, Anderson and de Vries [18] found that 10% of the researchers they surveyed had assigned authorship inadequately at least once in their career. Across disciplines, scientists perceive authorship misconduct to be ten times more likely to happen than data fabrication or falsification [19]. While individual instances of authorship misconduct are less damaging to science than transgressions such as data fabrication or falsification, this type of misconduct seems to be much more widespread, and it does have consequences: ghost authorship and honorary authorship distort citation and publication counts, which, as noted above, are among the primary metrics of academic productivity [20].

Therefore, it is not surprising that, to prevent adulterations of academic rankings, many highly ranked journals, especially in the natural and life sciences, now require so-called contribution disclosures in which each contributor discloses his or her specific contribution to the paper [21,22]. In 2018, Elsevier, a top publisher of social scientific research [23], adopted guidelines to encourage [24] authors to employ the “CRediT” system [25] for contribution disclosures. CRediT, which stands for Contributor Roles Taxonomy, breaks down the various roles a contributor can play in writing a paper; contributors can then be listed according to the precise roles they played, including conceptualization, methodology, software, and writing, reviewing, and editing. In turn, some social scientific journals have begun to encourage authors to submit statements on what role each author played in the composition of a submitted article, but, unlike the journals in the natural and life sciences, they do not require these statements [26–28]. In addition, the ethics guidelines of the largest social science research societies do not even mention contribution disclosures [29–32].
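
To make the idea of a contribution disclosure concrete, the sketch below represents a CRediT-style statement as simple structured data. The author names and role assignments are hypothetical illustrations, not drawn from any journal's actual submission system.

```python
# Hypothetical sketch of a CRediT-style contribution disclosure.
# Author names and role assignments are invented for illustration only;
# the role labels follow the CRediT taxonomy mentioned above.
contribution_disclosure = {
    "Author A": ["Conceptualization", "Methodology", "Writing - original draft"],
    "Author B": ["Software", "Formal analysis", "Writing - review & editing"],
    "Author C": ["Data curation", "Writing - review & editing"],
}

# A structured disclosure makes simple checks possible, e.g., that every
# listed author is credited with at least one contributor role.
for author, roles in contribution_disclosure.items():
    assert roles, f"{author} is listed without any contributor role"
    print(f"{author}: {', '.join(roles)}")
```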

The lack of mandatory authorship contribution statements in the social sciences leads to our research question: How prevalent are ghost authorship and honorary authorship in the social sciences? To address this question, we provide clear and comprehensive facts on the prevalence, distribution, and motivational factors of ghost and honorary authorship in the social sciences.

Authorship

To adequately assess publication counts and to attribute citation counts to individual authors, it is important to recognize contributions made in team-authored publications. Yet the prevailing definitions of authorship are broad and imprecise because of the wide variety of academic fields they need to apply to [33]. As a case in point, the prevailing modus operandi would identify scholars as authors if they made “substantial contributions” [34] to the publication. Admittedly, this definition of authorship is vague and leaves many degrees of freedom for the scholars involved. Consequently, there is a wide discrepancy between what authors should ideally attribute and what authors do attribute.

In the social sciences, this discrepancy is further exemplified by the authorship definitions of large societies whose codes of ethics or conduct state that individuals who contributed substantially to a publication should receive authorship [29–31]. Regrettably, the definition of a substantial contribution remains unclear in those guidelines. To solve this issue, editorial policies at an increasing number of prominent interdisciplinary journals that also publish social science articles have begun to implement more specific guidance on who should receive authorship and who should not. These guidelines are either based on [35,36], or very similar to [37,38], the authorship criteria defined by the International Committee of Medical Journal Editors (ICMJE) [39] (the authorship guidelines of Nature [37] and PNAS [38] are more relaxed as they only require responsibility for submission and one other task). According to the ICMJE, “Authorship credit should be based only on 1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; [AND] 2) drafting the article or revising it critically for important intellectual content; AND 3) final approval of the version to be published” [39]. The ICMJE revised its criteria in 2013, adding the “agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved” [40] as a fourth criterion. Every researcher fulfilling all four requirements is not only eligible for but must receive authorship [40]. It is not only interdisciplinary journals that refer to the ICMJE for details on authorship: the Committee on Publication Ethics (COPE), a multidisciplinary advisory body on ethical issues to which several large academic institutions, journals, and societies subscribe, advises its members to employ the ICMJE authorship standards as well [41]. This highlights the relevance of the ICMJE authorship criteria in the social sciences. Consequently, we employ these criteria as the benchmark definition in our empirical analysis.
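
As an illustration only, the following minimal sketch encodes the four ICMJE criteria as a boolean check per contributor; the field names are our own shorthand for the criteria quoted above, not part of the ICMJE text.

```python
# Minimal sketch: the four ICMJE criteria as booleans per contributor.
# The attribute names are shorthand for the criteria quoted above.
from dataclasses import dataclass

@dataclass
class Contributor:
    substantial_contribution: bool  # criterion 1: conception/design, data acquisition, or analysis/interpretation
    drafted_or_revised: bool        # criterion 2: drafting or critical revision
    approved_final_version: bool    # criterion 3: final approval of the version to be published
    accountable: bool               # criterion 4 (added 2013): accountability for all aspects of the work

def meets_icmje_criteria(c: Contributor) -> bool:
    # Authorship requires ALL four criteria; anyone meeting all four must receive authorship.
    return (c.substantial_contribution and c.drafted_or_revised
            and c.approved_final_version and c.accountable)

print(meets_icmje_criteria(Contributor(True, True, True, True)))   # True: must be listed as an author
print(meets_icmje_criteria(Contributor(True, False, True, True)))  # False: not eligible
```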

Ghost authorship

Having established the definition of authorship based on the ICMJE criteria, we can elaborate on what constitutes authorship misconduct. The first type we will consider is ghost authorship. A ghost author contributes to a research paper, but, willingly or unwillingly, is not named as an author [42]. Ghost authorship shares with plagiarism the misattribution of intellectual work [43]. Yet, in plagiarism, the originators are usually not aware of someone else stealing their ideas.

Existing research provides three general explanations for the presence and prevalence of ghost authorship. First, pressure from co-authors can lead to researchers declining authorship or being denied it. The rising use of fractional counting in assessing scholars’ productivity is one potential driver, because it motivates researchers to include as few authors as possible to maximize their own publication counts [44]. In such a case, perhaps the ghostwriter is a subordinate of one of the authors, with little power in the relationship [45]. Second, scientists may voluntarily decline author credits because they perceive the findings in the research as controversial, dubious, or weak; perhaps they fear that the publication of the paper may have negative consequences for their future career [46,47]. Third, researchers may evade authorship to disguise potential conflicts of interest. For example, a freelancer sponsored by a pharmaceutical company may approach life scientists with a biased bundle of articles and ask them to write a research paper based on it [48], hoping the paper will promote the official approval, and/or boost the sales, of a drug. As the freelancers do not receive authorship (and thus are ghost authors), the monetary commitment of the firm stays secret [49].
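
To illustrate the incentive created by fractional counting, the short sketch below assumes the simplest variant, in which each of n listed authors receives 1/n of the credit for a paper; other weighting schemes exist.

```python
# Fractional counting (simplest variant): each of n co-authors receives credit 1/n per paper.
def fractional_credit(n_authors: int) -> float:
    return 1.0 / n_authors

# Leaving a contributor off the author list raises every remaining author's share:
print(fractional_credit(3))  # 0.333... with three listed authors
print(fractional_credit(2))  # 0.5 if one contributor is omitted (a ghost author)
```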

Honorary authorship

In contrast to leaving contributors out of published work, authorship misconduct may also result from adding individuals to publications to which they have contributed either insubstantially or not at all. This behavior is called honorary authorship [50], gift authorship, or guest authorship [51]. It is beneficial to those receiving it because they can get the credit for the publications without exerting the effort and time needed to conduct research and write articles. As with ghost authorship, pressure and power dynamics explain some of the occurrences of honorary authorship [52]: for example, senior scientists may demand authorship from their subordinates in return for employing them, from their Ph.D. students in return for supervising them, or from either in return for providing funding [53,54]. In other cases, the original authors voluntarily include honorary authors. The inclusion of a well-known researcher in the author list increases the chances of publication, especially with single-blind review processes and for high-impact journals [55]. A famous co-author often boosts citations, too, which is beneficial to all authors [56]. Toward these ends, authors might ask well-known experts to co-author their papers without having participated in the research or writing processes [51]. A third type of honorary authorship occurs when researchers engage in reciprocal relationships. The general increase in co-authored papers enables researchers to trade co-authorship: a scientist adds only a small, non-authorship-worthy contribution to a research project but still receives author credit; the original author then receives reciprocal co-authorship when the honorary author publishes his or her next paper. This behavior may be especially common within chairs or research groups, where reciprocal proofreading might lead to honorary authorship [57].

Research questions

In the following, we aim to investigate to what extent these findings may generalize across disciplinary boundaries. We therefore extend prior work by examining authorship assignments in a field where journals only recently started encouraging scholars to disclose their contributions, where the average number of authors is manageable, and where there is (to the best of our knowledge) limited prior work on honorary and ghost authorship: the social sciences.

We believe this examination to be valuable because results from the life and natural sciences may provide little guidance for the social sciences, given the many contextual differences in which publishing takes place. For example, the average acceptance rates of papers submitted to the respective field journals are quite different: in the social sciences, journals publish only around 25% of all submissions [58], whereas in the natural and life sciences, journals publish approximately 40 to 50% of all submissions [59]. In the following, we study the prevalence of authorship misconduct using a large-scale survey of social science researchers.

Subsequently, we conduct exploratory research on ghost and honorary authorship using various possible determinants of authorship misassignment. We look at job positions because supervisor-subordinate collaborations seem susceptible to authorship misconduct [60–63]. We also look at time spent in academia, as it shapes the specific tasks that researchers take on [64] and thus might relate to authors’ eligibility according to the ICMJE criteria. (On the subject of tasks, it is worth noting that a recent study surveying “scientists who publish a paper every five days” [65], which equals 72 or more published papers per year, showed that more than 70% of the respondents did not conduct at least one of the three required tasks of the ICMJE authorship criteria, excluding accountability, in at least every fourth paper.) In addition, we take into account researchers’ gender. Female researchers are arguably among the most vulnerable, as credit often tends to go primarily to their male colleagues. This tendency is known as the Matilda effect, named for the suffragist and abolitionist Matilda Joslyn Gage, who wrote about how women have been denied credit for their achievements throughout history [66]. Finally, we look at culture, as it, too, may influence authorship assignments. Salita [67] discussed the possibility that cultural differences might induce perceptions of authorship that could deviate from the ICMJE criteria.

We thus ask which authors are more likely to report ghost or honorary authorship in their published team-authored work. In studying the determinants of these types of misconduct, we provide information that university officials, journal editors, publishers, and other stakeholders can use to implement effective countermeasures.

Materials and methods

Distribution

To address our research question, we draw on a large-scale survey of social scientists. We designed the survey in spring 2018 using the online survey tool Qualtrics. In May 2018, we asked colleagues for feedback. After incorporating their feedback, we conducted a test run by sending the survey link to 275 scholars who had presented at least one paper at the 2018 European Accounting Association Annual Congress. Data and feedback from the test run showed no need to adapt the questionnaire further; therefore, the data gathered in the pilot phase is included in the analysis. We did not obtain approval for the survey from an institutional review board (IRB), as the survey was conducted when both authors were employed at RWTH Aachen University. At RWTH Aachen University, only life-science and psychological experimental studies are reviewed by the independent ethics committee. The authors contacted the independent ethics committee of RWTH Aachen University with a request to review the study, yet the committee replied that it does not provide ethics oversight for empirical social scientific studies. In fact, neither RWTH Aachen University nor German state or federal agencies require or offer ethics oversight for empirical research in the social sciences.

To ensure broad dissemination of the questionnaire among researchers from various fields of research in the social sciences, we selected corresponding authors of published articles in well-known journals as well as of papers presented at conferences organized by large field-specific research societies between January 2007 and June 2018. S8 Table includes a list of the societies and journals. After deleting duplicates, we had a total of 126,480 unique email addresses. A random selection of half of these addresses led to an initial sample of 63,240. These scholars received an email containing a brief explanation of the purpose of the study and a Qualtrics URL link to the questionnaire in late August and early September 2018. The link to the survey was the same for everyone to ensure the anonymity of respondents. We ruled out multiple responses by the same individual through IP address restrictions. After sending out the survey, 15,573 emails automatically bounced back due to the email addresses being no longer in use. The contacted sample, therefore, contains 47,697 valid recipients.

Sample

The dissemination of the survey resulted in 2,817 responses. This constitutes 4.45% of all contacted email addresses and 5.91% of all valid email addresses. These rates are comparable to other recently conducted non-incentivized online surveys among scientists investigating academic misconduct [68,69]. Moreover, the demographic characteristics of our sample (available in S1 and S2 Tables) are comparable to the summarized database of more than 30 million Zippia.com profiles affiliated as social scientists [70]. For example, the share of women is 33.2% in our study and 37.2% in Zippia.com [70]. Of those who completed the survey, 2,223 were social scientists. We then deleted one respondent with nonsensical answers who claimed to have been working in academia for 100 years despite being only 67 years old. Therefore, our original sample consists of 2,222 respondents. This corresponds to a response rate of 3.51% among all email addresses and 4.66% among valid email addresses. Our dropout rates are comparable to existing research on academic misconduct [71]; nevertheless, we investigate whether these dropouts induced potential sample selection bias in the robustness checks. The samples used for the following analyses differ, however, because some conference attendees had not yet published in journals and some respondents selected “N/A” for one or more responses. Respondents who had not published a journal paper did not receive the questions on their last published paper but were still able to respond to two vignettes in the last section that described projects and asked respondents to assign authorship to the researchers involved. Respondents stating that their last published paper did not include any non-author contributors were excluded when comparing the actual and perceived rate of ghost authors in Model 5 of Table 1 to avoid division by zero.

Table 1. Regression results of actual and perceived prevalence of ghost and honorary authorship.

Variable | (1) Ghost Authorship | (2) Honorary Authorship | (3) # of Ghost Authors | (4) # of Honorary Authors | (5) Perceived Ghost Authors | (6) Perceived Honorary Authors
Rate of Ghost Authors | | | | | 0.714 (0.392) |
Rate of Honorary Authors | | | | | | 0.935 (0.176)
Female | -0.324 (0.302) | 0.175 (0.106) | -0.345 (0.331) | 0.109 (0.070) | 0.151 (0.209) | 0.054 (0.121)
Anglophone | -1.619 (0.456) | -0.964 (0.206) | -1.416 (0.509) | -0.612 (0.123) | -0.663 (0.352) | -0.237 (0.206)
Continental Europe | -1.227 (0.419) | -0.786 (0.203) | -1.228 (0.493) | -0.522 (0.121) | -0.577 (0.337) | -0.373 (0.203)
Developing Countries | 0.259 (0.405) | 0.131 (0.231) | 0.265 (0.516) | -0.118 (0.134) | -0.085 (0.340) | -0.267 (0.227)
Age | 0.031 (0.020) | 0.009 (0.008) | 0.015 (0.024) | 0.006 (0.005) | -0.006 (0.017) | -0.005 (0.009)
Ph.D. Student | 1.268 (0.440) | 0.513 (0.204) | 1.520 (0.526) | 0.290 (0.122) | 0.012 (0.340) | 0.204 (0.194)
Professor | -0.201 (0.343) | -0.287 (0.128) | 0.017 (0.386) | -0.221 (0.085) | -0.087 (0.256) | -0.171 (0.150)
Editor | 0.245 (0.307) | 0.067 (0.121) | 0.084 (0.344) | 0.113 (0.080) | -0.001 (0.243) | 0.058 (0.143)
Years in Academia | -0.009 (0.021) | -0.011 (0.008) | 0.007 (0.024) | -0.010 (0.006) | 0.007 (0.017) | -0.008 (0.010)
Published Papers | -0.023 (0.108) | 0.074 (0.042) | -0.105 (0.125) | 0.062 (0.027) | 0.075 (0.082) | 0.024 (0.048)
Written Reviews | 0.063 (0.096) | -0.010 (0.036) | 0.039 (0.110) | -0.020 (0.024) | -0.164 (0.084) | -0.023 (0.042)
Business | -0.397 (0.399) | 0.216 (0.166) | -0.363 (0.466) | 0.074 (0.109) | 0.181 (0.311) | -0.050 (0.182)
Economics and Finance | -0.515 (0.500) | -0.430 (0.199) | -0.560 (0.597) | -0.395 (0.139) | -0.110 (0.399) | -0.278 (0.235)
Computer and Statistics | 0.056 (0.407) | 0.319 (0.188) | 0.328 (0.485) | 0.150 (0.120) | -0.267 (0.400) | -0.183 (0.213)
Political Sciences | -0.967 (0.670) | -0.625 (0.222) | -1.294 (0.792) | -0.523 (0.157) | 0.129 (0.427) | -0.584 (0.282)
Psychology | -0.410 (0.676) | 0.321 (0.240) | -0.060 (0.693) | 0.338 (0.150) | -0.311 (0.548) | 0.056 (0.256)
Sociology | -0.334 (0.605) | -0.241 (0.226) | 0.247 (0.640) | -0.150 (0.152) | 0.280 (0.427) | -0.132 (0.256)
Chi-Square | 55.30 | 145.69 | 44.53 | 140.49 | 23.68 | 70.45
P > Chi-Square | 0.00 | 0.00 | 0.00 | 0.00 | 0.1659 | 0.00
Pseudo R-squared | 0.01 | 0.06 | 0.06 | 0.03 | 0.04 | 0.04
Observations | 1854 | 1857 | 1854 | 1857 | 804 | 1818

Models 1 and 2 present marginal effects derived from logistic regressions with standard errors in parentheses. Models 3 and 4 present coefficients derived from negative binomial regressions with standard errors in parentheses. Models 5 and 6 present coefficients derived from Poisson regressions with standard errors in parentheses. The (omitted) reference categories for the categorical variables are: Asia for geographical region, Postdoc for job position, and General Social Sciences for research field. S5 Table includes confidence intervals, p-values, and more information on the varying number of observations.

Survey design

The survey consists of three parts. In the first section, respondents answered questions about their demographic and job characteristics. The second part of the survey asked about the distribution of authorship and contributor acknowledgments in the latest published paper that names the respondents as authors. The third section covered two vignettes, as noted above, which described research projects and asked respondents to assign authorship to the researchers involved.

Prevalence of authorship misconduct

To elicit the extent of ghost and honorary authorship, we asked respondents about the number of authors and the number of other people, excluding peer reviewers, who had contributed to their last published paper. Following Mowatt et al. [72], we asked respondents to specify for each of the authors and contributors (or for the top five, where there were more than five authors or contributors) whether that person participated in creating the research design, searching for literature, analyzing the literature, collecting and/or preparing data, describing the results, writing up the paper, reviewing and commenting on the written paper, and approving the final version of the paper. According to the ICMJE authorship criteria, only a person who approved the final version of the paper, wrote up the paper and/or reviewed and commented on the written paper, and engaged in at least one of the other tasks mentioned should receive authorship (for the sake of simplicity and comparability with existing research, we refer to the original three ICMJE authorship criteria without the recently added criterion of accountability throughout the article) [39]. Based on these criteria, we identified as ghost authors those scholars who fulfill the ICMJE requirements but were not listed as authors, and as honorary authors those who do not fulfill the ICMJE requirements but were listed as authors. Furthermore, respondents were asked to indicate on a scale from 0 (disagree) to 100 (agree) whether they agree that, for their last published paper, all researchers who made significant contributions were named as authors. We employed this question as an indicator of respondents’ perceptions of the possible presence of ghost authors. The last question in the second part of the survey asked respondents to indicate on a scale from 0 (disagree) to 100 (agree) whether, for their last published paper, researchers received authorship only if they participated actively in the creation process. In this way, survey recipients stated their perceptions of the possible presence of honorary authors.
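
The following sketch illustrates how this classification rule can be applied to contributor records; the task labels, record structure, and example data are hypothetical, and, as in our analysis, only the first three ICMJE criteria are operationalized.

```python
# Hedged sketch of the classification rule described above, on hypothetical records.
# Task labels are our shorthand for the survey items listed in the text.
WRITING_TASKS = {"writing_up", "reviewing_commenting"}        # ~ ICMJE criterion 2
OTHER_TASKS = {"research_design", "literature_search", "literature_analysis",
               "data_collection", "describing_results"}       # ~ ICMJE criterion 1
FINAL_APPROVAL = "final_approval"                              # ~ ICMJE criterion 3

def deserves_authorship(tasks: set) -> bool:
    # Operationalization used here: final approval AND (writing up or reviewing/commenting)
    # AND at least one of the other tasks.
    return (FINAL_APPROVAL in tasks
            and bool(tasks & WRITING_TASKS)
            and bool(tasks & OTHER_TASKS))

def classify(person: dict) -> str:
    eligible = deserves_authorship(person["tasks"])
    if eligible and not person["listed_as_author"]:
        return "ghost author"
    if not eligible and person["listed_as_author"]:
        return "honorary author"
    return "correctly assigned"

# Hypothetical example records for one paper:
people = [
    {"name": "A", "listed_as_author": True,
     "tasks": {"research_design", "data_collection", "writing_up", "final_approval"}},
    {"name": "B", "listed_as_author": True,
     "tasks": {"reviewing_commenting"}},                                  # listed, but below the threshold
    {"name": "C", "listed_as_author": False,
     "tasks": {"data_collection", "writing_up", "final_approval"}},       # not listed, but eligible
]
for p in people:
    print(p["name"], "->", classify(p))   # A: correctly assigned, B: honorary author, C: ghost author
```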

Determinants of authorship misconduct

In addition to determining the prevalence of authorship misconduct, we also aim to provide exploratory evidence regarding potential determinants of authorship misconduct. As discussed in the introduction, there are a wide variety of potential determinants of authorship misconduct. The questionnaire, therefore, included demographic questions on gender, age, and geographical region. Moreover, we asked respondents to indicate their primary research field, their job position, and how long they had worked in academia. In addition, we asked about respondents’ productivity, measured by the number of articles they had published in the three years leading up to the date of the survey, and their level of academic engagement, measured by whether they held a position in an editorial board as well as the number of reviews they had written in the year leading up to the survey date.

Assignment of ghost and honorary authorship

The assignment of ghost and honorary authorship in two vignettes was intended to help us further investigate social scientists’ conceptions and misconceptions about authorship. We developed two hypothetical scenarios capturing different compositions of the teams involved in developing research papers. Ideas for these vignettes were derived from [61–63], though our research focus and context differ from these studies. Respondents read each vignette and afterward had to decide whether to award authorship to each individual mentioned in the vignettes.

The first vignette described a collaboration between a postdoc and a professor where both contributed to a similar extent towards the publication, while a student assistant helped in the data collection process. According to the ICMJE authorship criteria, the professor and the postdoc should receive authorship, while the student assistant should not.

For the second vignette, Qualtrics randomly split the respondents into two groups. Respondents in Group A assessed a postdoc/professor collaboration while respondents in Group B assessed a professor/professor collaboration, again with a student assistant helping in the data collection. For both groups, the researcher mentioned first in the second vignette exerted substantially higher efforts than the researcher mentioned second, who reached the minimum threshold of the ICMJE authorship criteria by participating in the conception, reviewing, and final approval. The vignettes differ only in the type of collaboration. In neither case should an author credit go to the student assistant.

As researchers’ workloads are the same in the first vignette, the comparison between the randomly assigned groups should not result in differences. Yet the workloads differ between the first and second vignette, resulting in possible variations in authorship assignments between the vignettes.

If, however, we additionally observe differences between the groups in the second vignette, these would be attributable to the perception of rank differences between postdoc/professor and professor/professor collaborations. Thus, we can elicit authorship judgments by the respondents as well as whether these judgments are affected by subjective perceptions about the roles and academic positions of the involved researchers.

Statistical analysis

We conducted the entire data analysis using Stata 16. For the analyses of authorship assignments in respondents’ last published papers, our first set of dependent variables indicates whether the author list of a paper suffers from Ghost Authorship and/or Honorary Authorship. Our second set of dependent variables employs the exact number of Ghost Authors and Honorary Authors per paper. Our third set includes the indications of the extent of Perceived Ghost Authors and Perceived Honorary Authors in respondents’ last published papers. For the analyses of the vignettes, our binary dependent variables capture the authorship assignments made by each individual respondent.

We employ different empirical estimation strategies for each set of dependent variables. We apply logistic regressions for the dichotomous variables of Ghost Authorship and Honorary Authorship. For Ghost Authors and Honorary Authors, we employ negative binomial regressions due to the overdispersion of the count variables. We utilize Poisson regressions for the equidispersed perception-based dependent variables of Perceived Ghost Authorship and Perceived Honorary Authorship. Last, we employ logistic regression again for the authorship assignments associated with the vignettes.
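
For illustration, the sketch below reproduces these three estimator choices in Python's statsmodels on synthetic placeholder data; the original analyses were conducted in Stata 16, and the variable names here are not those of our dataset.

```python
# Hedged sketch of the estimator choices described above, using statsmodels on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = sm.add_constant(rng.normal(size=(n, 3)))           # placeholder covariates
binary_y = rng.integers(0, 2, size=n)                   # e.g., Ghost/Honorary Authorship (0/1)
count_y = rng.negative_binomial(2, 0.4, size=n)         # e.g., # of Honorary Authors (overdispersed)
perceived_y = rng.poisson(1.5, size=n)                  # e.g., perceived occurrences (equidispersed)

logit_fit = sm.Logit(binary_y, X).fit(disp=0)           # logistic regression for dichotomous outcomes
nb_fit = sm.NegativeBinomial(count_y, X).fit(disp=0)    # negative binomial for overdispersed counts
pois_fit = sm.Poisson(perceived_y, X).fit(disp=0)       # Poisson for equidispersed outcomes

print(logit_fit.params, nb_fit.params, pois_fit.params, sep="\n")
```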

Results

Actual prevalence of authorship misconduct

Based on the ICMJE authorship criteria, out of 1,878 papers with full data on the authorship tasks, one ghost author participated in the creation of 43 (2.29%) papers and two or more ghost authors participated in the creation of 21 (1.12%) papers (Fig 1A). Moreover, regardless of the definition of authorship, we can clearly identify 34 ghost authors who have contributed to all tasks of a research project but did not receive authorship. Honorary authorship, as defined by the ICMJE requirements, occurs much more frequently, with 418 papers (22.22%) containing one honorary author, 234 papers (12.44%) containing two honorary authors, 107 papers (5.69%) containing three honorary authors, and 57 (3.03%) containing four or more honorary authors (Fig 1B). In addition, regardless of the definition of authorship, we can clearly identify 134 honorary authors who did not contribute at all to a research project on which they were named as authors.

Fig 1. Rates of ghost (A) and honorary (B) authorship. N. of obs. are 1,878 for (A) and 1,881 for (B).

We compare the rate of ghost authors that we identified based on the ICMJE guidelines to the degree to which respondents believed that authorship was inappropriately withheld from one or more contributors. Fig 2A shows that social scientists perceive ghost authorship to be more prevalent than it is: the perceived rate of ghost authorship (light blue) exceeds the identified rate of ghost authorship (dark blue). The difference in the means (4.65% for actual and 13.09% for perceived) is significant according to a two-sided t-test (t = -8.0874; df = 812; p<0.00).

Fig 2. Shares of the identified rate of ghost authors and perceived rate of ghost authors (A), as well as shares of the identified rate of honorary authors and perceived rate of honorary authors (B). N. of obs. are 813 for (A) (because we can only assess the rate of ghost authors among papers including at least one non-author contributor) and 1,842 for (B).

Fig 2B compares the rate of identified honorary authors in each paper to respondents’ assessments of the rate of honorary authors in their last published paper. The difference between the identified (dark blue) and perceived (light blue) occurrences is not as stark as for the perception of ghost authorship (the mean of actual is 23.86% and the mean of perceived is 17.26%). Nevertheless, using a two-sided t-test, we find that the perceived rate of honorary authors is, on average, significantly lower than the identified rate of honorary authors (t = 7.5946; df = 1841; p<0.00).
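
For reference, a paired two-sided t-test of this kind can be computed as in the following sketch; the arrays are synthetic placeholders rather than the survey data.

```python
# Hedged sketch of a two-sided paired t-test comparing, per paper, the identified rate
# of honorary authors with the respondent's perceived rate. Synthetic placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
identified_rate = rng.uniform(0, 1, size=200)                                   # ICMJE-based rate per paper
perceived_rate = np.clip(identified_rate - 0.05 + rng.normal(0, 0.1, 200), 0, 1)  # respondent's assessment

t_stat, p_value = stats.ttest_rel(identified_rate, perceived_rate)
print(f"t = {t_stat:.4f}, df = {len(identified_rate) - 1}, p = {p_value:.4f}")
```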

Potential antecedents to authorship misassignment

To identify the reasons for the mismatches between scholars’ perceptions of the prevalence of authorship issues and the actual occurrence of these issues, we investigate their antecedents and correlates. Table 1 depicts the regression results using a host of explanatory variables. Models 1 and 2 report the results from a logistic regression using a binary dependent variable that takes on the value of one if the researchers’ last papers include at least one ghost author (1) or honorary author (2) based on the ICMJE’s definition of authorship. Models 3 and 4 report the results from a negative binomial regression employing the number of ghost and honorary authors in each paper as dependent variables.

The results show that women do not exhibit more ghost authorship in their papers than men. The positive coefficient pointing towards women facing honorary authorship more often in their papers is not significant at the conventional p < 0.05 level but is marginally significant at p < 0.1 (S5 Table). Furthermore, we find strong evidence that scholars in Anglophone and Continental European countries report fewer ghost and honorary authors than scholars from other world regions. Respondents who are Ph.D. students indicate 51.3% more occurrences of honorary authorship and 127% more occurrences of ghost authorship in their last published paper. In contrast, respondents who are professors indicate fewer occurrences of honorary authors but do not differ significantly from the baseline category of junior faculty members regarding ghost authorship. The more papers a scholar has published in the last three years, the more honorary authors he or she reports, though the coefficients are small. Concerning differences across research fields, we find that scholars in economics, finance, and political science are about 40% less likely to report the presence of honorary authors in their last published paper than are members of the baseline group of interdisciplinary social scientists. By contrast, psychologists report even more instances of honorary authors in their published papers than interdisciplinary social scientists do.

Models 5 and 6 employ respondents’ perceived assessments of the rates of ghost and honorary authors in their last published paper as the dependent variables. The results show that the higher the identified rates of ghost and honorary authors according to the ICMJE criteria, the higher respondents perceived the occurrence of these forms of authorship misconduct to be. The effect for ghost authors, however, is not significant at the conventional p < 0.05 level but only marginally significant at p < 0.1. Nevertheless, the relatively low explanatory power (as measured by Pseudo-R2) of these regressions further attests to the discrepancy between the perception of authorship misconduct and its actual occurrence.

Hypothetical assignments of authorship

The results on the prevalence and the perception of authorship misconduct revealed some preliminary evidence that the survey respondents seem to be applying an authorship definition that is fairly wide and does not adhere to a common point of reference such as the ICMJE definition. However, it is unclear whether the occurrences of these authorship issues derive from a lack of knowledge of common authorship criteria or from other considerations (whether conscious or unconscious) not mentioned in the survey questions.

To explore the two explanations further, we randomly assigned respondents into two groups and presented them with two vignettes. Table 2 depicts respondents’ authorship assignments for these two hypothetical scenarios.

Table 2. Authorship assignments in the vignettes split by treatment group.

Assessment | Group A | Group B
Total Assessments, Vignette 1 | 973 | 984
Professor, Vignette 1 | 935 | 937
Postdoc, Vignette 1 | 955 | 968
Student Assistant, Vignette 1 | 284 | 292
Total Assessments, Vignette 2 | 975 | 986
Postdoc/Professor, Vignette 2 | 950 | 963
Professor, Vignette 2 | 607 | 700
Student Assistant, Vignette 2 | 82 | 87

The numbers correspond to the number of respondents who assigned authorship to the respective figure in the respective vignette. The number of observations differs between Vignette 1 and Vignette 2 because two respondents in each group chose N/A options in the first vignette but not in the second vignette.

The first vignette lists the same scenario for both groups: a professor and a postdoc collaboratively write a paper, and a student assistant supports the data collection. By design, the answers concerning who should be listed as an author should not differ between the two groups. The results indicate that the answers of the two groups are indeed nearly identical and that respondents assign authorship similarly. Almost all respondents (917 in Group A and 923 in Group B) award authorship to the professor and the postdoc, in concordance with the ICMJE authorship definition. Yet a substantial number of scholars (284 in the first group and 292 in the second group) deviate from the ICMJE criteria by awarding author credits to the student assistant, who did not participate in the writing, revision, or submission process. This finding corroborates the prior conjecture that social scientists appear to have a broader conception of authorship.

Importantly, Models 1, 2, and 3 in Table 3 show that the authorship assessments do not differ significantly between the groups when we control for demographic and job-related factors, as the coefficient of Group is insignificant. This helps to establish a baseline behavior for the subsequent analysis when each group judges slightly different scenarios to explore the misconception of authorship criteria among social scientists in more detail.

Table 3. Regression results of hypothetical authorship assignments in the vignettes.

Variable | (1) Vignette 1: Prof. | (2) Vignette 1: Postd. | (3) Vignette 1: SA | (4) Vignette 2: Prof./Postd. | (5) Vignette 2: Prof. | (6) Vignette 2: SA
Group | -0.268 (0.233) | 0.025 (0.362) | 0.012 (0.103) | -0.006 (0.302) | 0.429 (0.101) | 0.082 (0.167)
Female | -0.014 (0.247) | -0.411 (0.390) | 0.010 (0.113) | -0.133 (0.321) | -0.060 (0.110) | -0.193 (0.189)
Anglophone | 1.081 (0.482) | 2.603 (0.623) | 0.044 (0.225) | 0.709 (0.499) | -0.195 (0.232) | -0.017 (0.352)
Continental Europe | 0.062 (0.437) | 1.705 (0.469) | 0.299 (0.220) | 1.052 (0.515) | -0.772 (0.227) | 0.080 (0.341)
Developing Countries | -0.350 (0.460) | 1.051 (0.509) | 0.631 (0.243) | 0.145 (0.540) | -0.503 (0.255) | 0.513 (0.363)
Age | -0.064 (0.015) | -0.027 (0.030) | 0.011 (0.009) | 0.008 (0.025) | 0.001 (0.008) | 0.006 (0.014)
Ph.D. Student | -0.320 (0.405) | -1.036 (0.609) | 0.117 (0.199) | -0.713 (0.482) | 0.260 (0.196) | -0.267 (0.359)
Professor | 0.136 (0.294) | 0.197 (0.499) | -0.181 (0.135) | -0.118 (0.412) | -0.194 (0.134) | -0.198 (0.214)
Editor | -0.945 (0.265) | -0.597 (0.414) | 0.070 (0.129) | -0.337 (0.366) | 0.094 (0.130) | 0.370 (0.196)
Years in Academia | 0.044 (0.016) | 0.004 (0.031) | 0.001 (0.009) | -0.012 (0.026) | 0.002 (0.009) | 0.002 (0.014)
Published Papers | -0.009 (0.101) | -0.188 (0.130) | 0.083 (0.042) | -0.140 (0.119) | 0.136 (0.045) | 0.255 (0.064)
Written Reviews | 0.319 (0.109) | 0.036 (0.133) | -0.037 (0.039) | 0.236 (0.128) | 0.004 (0.038) | -0.169 (0.065)
Business | 0.339 (0.340) | 0.503 (0.638) | -0.804 (0.171) | -0.325 (0.420) | -0.058 (0.172) | -0.673 (0.281)
Economics and Finance | 1.461 (0.582) | 0.683 (0.861) | -0.808 (0.210) | 0.599 (0.685) | -0.089 (0.202) | -0.067 (0.309)
Computer and Statistics | 0.813 (0.436) | -0.830 (0.564) | 0.345 (0.183) | 0.732 (0.618) | 0.642 (0.208) | 0.595 (0.266)
Political Sciences | -0.274 (0.399) | -0.696 (0.675) | -0.514 (0.218) | -0.393 (0.552) | -0.959 (0.213) | -0.871 (0.425)
Psychology | 0.791 (0.668) | 0.762 (1.127) | -0.637 (0.260) | Perfect Predictor | 1.044 (0.318) | -1.315 (0.560)
Sociology | 0.076 (0.435) | 0.894 (1.120) | -0.225 (0.221) | 0.107 (0.687) | -0.506 (0.221) | -0.088 (0.352)
Chi-Square | 81.01 | 47.40 | 120.42 | 20.69 | 169.26 | 86.48
P > Chi-Square | 0.000 | 0.000 | 0.000 | 0.241 | 0.000 | 0.000
Pseudo R-squared | 0.117 | 0.142 | 0.051 | 0.048 | 0.069 | 0.076
Observations | 1931 | 1931 | 1931 | 1935 | 1935 | 1935

Coefficients correspond to marginal effects derived from logistic regressions with standard errors in parentheses. The (omitted) reference categories for the categorical variables are: Asia for geographical region, Postdoc for job position and General Social Sciences for research field. S6 Table includes confidence intervals, p-values and more information on the varying number of observations.

In the second vignette, the assignment of authorship to the primary researcher (a postdoc for Group A and a professor for Group B), who takes over most of the tasks, differs only very slightly from the assignment of authorship in the first vignette: 950 respondents in Group A and 963 respondents in Group B assign authorship to the primary researcher. These assignments concur with the ICMJE authorship criteria. Yet respondents in both groups award authorship less often to the second researcher (a professor) who only performs the minimum number of tasks (revising the research design, revising the paper, and approving the submission of the paper) that would fulfill the ICMJE authorship criteria. In Group A, 607 respondents assign authorship to the professor; in Group B, 700 respondents do so. Again, the responses deviate substantially from common conventions of authorship; more importantly, respondents even tend to withhold authorship from an eligible researcher. Differences between the groups remain highly significant even when controlling for various demographic and job-related factors. Model 5 in Table 3 shows that respondents in Group B were 42.9% more likely to award authorship to the second researcher.

Summing up, we find differences in the authorship assignment practices not only between the two vignettes but also across the two groups. The respondents less often assign authorship to the second-mentioned researcher if the primary researcher was a postdoc rather than a professor.

Robustness of results

We conduct several robustness and endogeneity checks. While we are unfortunately not able to conduct a non-respondent analysis due to the preservation of respondents’ anonymity, we analyzed the levels at which respondents disengaged from the survey. We compare the characteristics of those respondents who disengaged from the survey after filling out the insensitive questions on demographics, job, and academic position to the sample respondents. S8 and S9 Tables show the descriptive statistics as well as the results from a Wilcoxon rank sum test (we use this non-parametric test because there are 137 respondents who solely answered the insensitive questions). There exist only two significant differences: those who only answered the insensitive questions had, on average, more co-authors as well as more contributors on their last published paper. This suggests that respondents most likely disengaged from the survey because it constituted too much work for them, not because of the existence of sensitive questions. On the page following the insensitive questions, respondents had to indicate all of their co-authors’ and contributors’ participation for each task. Consequently, the second stage of the survey was more extensive for participants with more co-authors and/or contributors, leading them to quit more often. However, this deviation does not seem to heavily impact the composition of our sample, because our sample average of 2.83 authors per paper (S3 Table) lies within the social scientific field averages [73].

First, we run all models with robust standard errors; the levels of significance remain invariant. Second, we estimate all models using OLS regressions and calculate variance inflation factors (VIFs) to assess whether our models suffer from multicollinearity. All VIFs are below the conservative threshold of 5 [74], so we have no reason to believe that our models suffer from multicollinearity (see the sketch below). Third, we apply firthlogit, a special form of logistic regression that accounts for rare events, to correct for the relatively low number of 64 ghost authorship observations [75]. The results remain invariant. Fourth, the average team size differs across research fields [76]. Therefore, we estimate the same regressions with the percentages of honorary and ghost authors in the supplementary material. The only difference is that psychologists no longer report a higher rate of honorary authors. Fifth, we run all analyses including only respondents whose papers had a maximum of five authors and contributors to avoid any issues arising from the approximations of respondents stating that their last papers included more than five authors/contributors. The significance levels and implications do not differ from the findings above. Sixth, to detect potential unobserved heterogeneity within the groups, we create interaction terms as the product of Group and all explanatory variables for Models 4 and 5 in Table 3 and again run logistic regressions. The results are available in S7 Table. The coefficients of Group change only slightly, and the significance levels remain invariant. Last, Model 5 in Table 1 might suffer from sample selection, since honorary authorship increases the number of authors and lowers the number of contributors. The inclusion of only papers with at least one contributor in the analysis of the relationship between the perception and the occurrence of ghost authors may, therefore, suffer from endogeneity. To investigate this issue, we employ a Heckman two-step regression with Honorary Authorship as the selection variable. This regression returns an insignificant inverse Mills ratio; therefore, our results do not suffer from sample selection bias [77].
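
As an illustration of the VIF-based multicollinearity check mentioned above, the following sketch computes variance inflation factors on synthetic placeholder data and applies the threshold of 5.

```python
# Hedged sketch of the VIF-based multicollinearity check on synthetic placeholder data.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(300, 4)))  # placeholder explanatory variables plus a constant

# Compute a VIF for each explanatory variable (skipping the constant in column 0).
for i in range(1, X.shape[1]):
    vif = variance_inflation_factor(X, i)
    status = "below the threshold of 5" if vif < 5 else "flagged for multicollinearity"
    print(f"variable {i}: VIF = {vif:.2f} ({status})")
```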

Discussion

We analyzed the prevalence of ghost and honorary authorship in the social sciences. We find that many researchers apply very broad authorship criteria that do not accord with the criteria laid out by the ICMJE. This is remarkable, as many social scientific societies subscribe to institutional arrangements such as COPE (which attributes authorship based on ICMJE guidelines) [41]. Nevertheless, the results are in line with prior work that indicated deviations from the ICMJE standards within the life sciences [19,46]. Interestingly, the social scientists in our study report more honorary authorship but less ghost authorship than life scientists [49,78], using ICMJE criteria as the point of reference.

We also investigated how the misattribution of authorship comes about. By and large, our results show that researchers tend to award authorship more broadly to junior scholars and at the same time may withhold authorship from senior scholars if those are engaged in collaborations with junior scholars. Many social scientists in the sample we studied believe that more of their non-author collaborators should receive authorship despite the fact that many of them do not fulfill the ICMJE standards. We thus find a general pattern that scholars tend to be more generous when it comes to assigning authorship. This may imply that social scientists have their own authorship criteria in mind that do not necessarily match commonly applied criteria such as those laid out by ICMJE. Misattributions of authorship can go both ways: the results from the second vignette show that a substantial number of respondents are more restrictive and even tend to withhold authorship from senior researchers who work with junior scholars. As noted above, in the second vignette, we presented a scenario where both scholars should receive authorship following the ICMJE criteria, despite unequal inputs.

In sum, our vignettes document that the prevalence of ghost and honorary authorship is to a large extent driven by the fact that many participants did not adhere to common authorship criteria. Moreover, the discrepancy in authorship attribution could be exacerbated by fairness expectations and benevolent discrimination.

We show that some social scientists withhold authorship from individuals if they collaborate with others who put more effort into a research project. Fairness expectations provide a plausible explanation for this finding: Scholars perceive it as unfair to award authorship to all researchers if the distribution of efforts is highly uneven. After all, even the best social scientists are still human beings and thus learn fairness expectations and altruism from early childhood on [79].

Benevolent discrimination “is a subtle and structural form of discrimination that is difficult to see for those performing it, because it frames their action as positive, in solidarity with the (inferior) other who is helped, and within a hierarchical order that is taken for granted” [80]. Following this reasoning, it appears that researchers in our sample award authorship more generously to undeserving student assistants and withhold authorship from professors if they collaborate with junior scholars without sharing the work equally.

Moreover, the results from the empirical study exhibit suggestive evidence that benevolent discrimination appears not only in the hypothetical assignments but also in actual authorship assignments. Ph.D. students are much more likely than faculty members to have encountered cases of honorary authorship. Even the ethics policies of large social scientific societies like the Academy of Management [29], the American Sociological Association [30], and the American Psychological Association [31] enforce benevolent discrimination by requiring Ph.D. students to become first authors when published articles are based on their dissertations, without considering the distribution of contributions in such research projects.

Concerning the determinants of authorship misattributions, our results show a clear gender difference: women are more likely to report that honorary authors were included in their papers. While our results do not speak in favor of discriminatory effects here, they highlight an area for future inquiry into gender inequalities in the social sciences [66].

Hierarchical pressure might explain some regional differences in the prevalence of ghost and honorary authorship. Scholars living in Anglophone and Continental European countries report that authorship misconduct is less prevalent in their research. Work by researchers residing in Asia, by contrast, more often includes ghost and honorary authors. The cultural background of these scientists, who come from countries with generally higher levels of power distance [81], might explain this phenomenon, as department or faculty heads may receive authorship even without having read the paper [67,82].

Our research also adds an observation to the literature on highly prolific authors. Awarding authorship to department or faculty heads might also explain why respondents who had published more papers also more often had honorary authors in their papers and assigned honorary authorship more often in the vignettes. More generous authorship assignments may result in higher publication counts [65,83].

Last, we show that the prevalence of ghost and honorary authorship varies across research fields. Different author ordering conventions might explain this. Economics, finance, and political science usually rank authors alphabetically [84,85], while other fields, such as psychology, order by contribution. Consequently, in the former fields, awarding authorship to yet another author means that the main author(s) may disappear into the “et al.” rubric in future citations. In the latter fields, there are more incentives to include individuals in charge of financing the research project (e.g., the department head) as the last author, even if they have not participated in the research process [86]. Arguably, ordering by contributions rather than alphabetically increases the chances of honorary authorship assignments.

Implications

Our results highlight a high prevalence of ghost and honorary authorship and a broad deviation of authorship assignments from commonly employed criteria such as those of the ICMJE. These findings highlight the necessity to introduce authorship criteria that are better tailored to the needs, preferences, and perceptions of social scientists. Hence, we call upon large research societies such as the Academy of Management, the American Economic Association, the American Psychological Association, and the American Sociological Association, as well as the most prominent publishers such as Elsevier, Taylor & Francis, and Springer [23], to revise their existing guidelines on authorship. The revision should focus on establishing clear, precise, and specific criteria that should also be discussed and taught at their annual meetings. This would increase knowledge and awareness of authorship and would ease the job of journal editors, as they could rely on accurate guidelines when making attributions of authorship. In turn, these attributions would lead to fairer comparisons of authors for career decisions. One might, however, also account for more multi-dimensional criteria in these comparisons. As a case in point, Moher et al. [87] highlighted several methods of assessing scientific performance, including creativity, openness, transparency, and addressing and solving societal problems, that go beyond pure publication counts.

The differences in authorship attributions in our study highlight the need to rethink contribution disclosures. Contribution disclosures allow insights into the workload distributions among author teams. Even if researchers’ understanding of authorship varies, contribution disclosures would give an outside observer a better chance to assess who did what regarding a specific publication. This would not rule out misconduct, of course, but it would make it much harder to add individuals who did not contribute at all. If journals require contribution statements from each author and contributor, scholars will face high coordination efforts to submit factually wrong but congruent contribution disclosure statements. Hence, mandatory contribution disclosure statements raise the barrier for submitting falsified author lists. The introduction of contribution disclosures has already reduced instances of honorary authorship in the life sciences [33]. Moreover, contribution disclosures might also reduce instances of benevolent discrimination, as uneven divisions of efforts are accounted for in the contribution statements.

Of course, the introduction of contribution disclosures is not a panacea. It requires editors and authors to be aware of authorship criteria and the consequences of potential misconduct. To ensure this, we recommend that academic societies, research institutions, and publishers establish social scientific authorship criteria in their guidelines and provide tutorials and workshops on authorship.

Limitations and future research

Every research project has to be understood and interpreted in light of its limitations. Perhaps the greatest limitation of this study is that existing authorship criteria in the social sciences are overly vague, leaving researchers with very little guidance. In our work, we applied the ICMJE authorship criteria as a common point of reference to better understand authorship assignments in the social sciences. Although universities and societies should apply these criteria (through associating with COPE, for example), social scientists’ perceptions about what constitutes authorship clearly differ from the ICMJE criteria. We attribute this, among other factors, to inherent differences in the research and authorship attribution processes of the social and life sciences [21]. Consequently, though we find that ghost and honorary authorship are common, some social science researchers might argue that this does not represent misconduct but rather reflects common authorship practices in the social sciences.

Also, the results of our study may not necessarily generalize. Our findings include a relatively low number of identified ghost authors. While the application of firthlogit indicates that the findings are robust and valid, the small number of observations might be obfuscating further effects. The quantitative survey contained sensitive questions that, despite the fact that respondents were assured of anonymity, could lead to the understatement of actual wrongdoings, a common problem in research on questionable research practices and academic misconduct [68]. For this reason, the actual prevalence of ghost and honorary authorship might be higher than what our study uncovered, and survey designs that explicitly address sensitive items might reveal more of it.

Also, the unequal distribution of researchers among geographical regions and research fields might reduce the applicability of our results. Some fields might exhibit higher or lower levels of ghost and honorary authorship. More field-specific studies could determine the prevalence more precisely.

Several fruitful areas for future research can be derived from this study. First, future research could discuss whether common authorship practices in the social sciences, such as automatically assigning Ph.D. students first authorship, should be continued or replaced by more merit-based mechanisms. Second, comparing research-field-specific authorship criteria could explain why scholars from different disciplines vary in their authorship assessments. Third, closely examining the authorship assignments of extremely prolific social science scholars could clear up doubts about whether they really exhibit higher productivity levels or receive honorary authorship more often. Fourth, a qualitative study surveying researchers who have experienced ghost and/or honorary authorship could provide a better understanding of the motivations and consequences of authorship misattribution. Finally, the use of anonymity-preserving survey measures such as item-sum techniques [88] could increase respondents’ perceived anonymity and therefore lead to a more accurate assessment of ghost and honorary authorship.
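
To illustrate the item-sum logic, the following minimal Stata sketch (again with hypothetical variable names) estimates the mean of a sensitive item as the difference in mean list totals between a treatment group, whose list of innocuous items additionally contains the sensitive item, and a control group, whose list does not.

* total_items: respondent's reported sum over all items on their list
* treatment:   1 if the list included the sensitive item, 0 otherwise
ttest total_items, by(treatment)

* The item-sum estimate of the sensitive item's mean is the difference
* between the treatment-group mean and the control-group mean:
display "Estimated mean of sensitive item: " r(mu_2) - r(mu_1)

Because no respondent reports the sensitive item directly, such designs preserve anonymity at the individual level while still allowing aggregate estimation.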

Conclusion

This study analyzed the prevalence of ghost and honorary authorship in the social sciences. Our results show that social scientists perceive authorship differently than established in the ICMJE criteria and by allied organizations that seek to create standard definitions. We find that authorship misconduct, in the form of ghost and honorary authorship, is highly prevalent in the social sciences. We also investigated the correlates of authorship misassignments and found that fairness expectations and benevolent discrimination are prime candidates to explain why and to what extent researchers may either assign authorship too freely or restrict it too much. We discuss potential solutions: the introduction of social scientific authorship criteria by the largest research societies, the enforcement of contribution disclosures by journals and publishers, and a shift away from citation and publication counts in hiring and tenure processes could all alleviate the problem of misattributing authorship and distorting publication records.

Supporting information

S1 Table. Descriptive statistics for dichotomous variables.

(PDF)

S2 Table. Descriptive statistics for integer variables.

(PDF)

S3 Table. Pairwise correlation coefficients for all employed variables.

(PDF)

S4 Table. Regression results of actual and perceived prevalence of ghost and honorary authorship including confidence intervals.

(PDF)

S5 Table. Regression results of hypothetical authorship assignments in the vignettes including confidence intervals.

(PDF)

S6 Table. Regression results of hypothetical authorship assignments in the vignettes.

(PDF)

S7 Table. Comparison of samples for dummy variables.

(PDF)

S8 Table. Comparison of samples for integer variables.

(PDF)

S9 Table. List of journals and societies included in the data collection.

(PDF)

S1 File. Do-file with Stata code for generating all figures and tables.

(DO)

S1 Text. The text of the vignettes employed in the study.

(PDF)

Acknowledgments

We thank the members of the Department of Business Decisions and Analytics of the University of Vienna, especially Oliver Fabel and Rudolf Vetschera, for their input on this paper.

Data Availability

The data cannot be made publicly available because they contain sensitive information on respondents’ authorship malpractices in previous publications. While the data are anonymized, certain combinations of demographics might still reveal survey participants’ identities. This could compromise survey participants and violate respondents’ privacy. Moreover, the GDPR prohibits the public sharing of any data that might lead to the identification of individuals. When providing their informed consent to participate in the study, participants were assured that their privacy would be protected. They did not provide consent for their data to be shared in a repository. Data requests must be addressed to the Data Protection Officer at Bern University of Applied Sciences, who will provide access after the signing of a non-disclosure agreement: datenschutz@bfh.ch.

Funding Statement

The authors received no specific funding for this work.

References

  • 1.Kendall DL, Campanario SC. Honoring God through Scientific Research: Navigating the Ethics of Publishing With our Students. Int J Christ Educ. 2016. 20(2): 133–148. [Google Scholar]
  • 2.McGrail M, Rickard CM, Jones R. Publish or Perish: A Systematic Review of Interventions to Increase Academic Publication Rates. High Educ Res Dev. 2006. 25(1): 19–35. [Google Scholar]
  • 3.Miller AN, Taylor SG, Bedeian AG. Publish or Perish: Academic Life as Management Faculty Live it. Career Dev. 2011. 16(5): 422–445. [Google Scholar]
  • 4.Rawat S, Meena S. Publish or Perish: Where Are We Heading? J Res Med Sci. 2014. 19(2): 87–89. [PMC free article] [PubMed] [Google Scholar]
  • 5.Sestak J, Fiala J, Gavrichev KS. Evaluation of the Professional Worth of Scientific Papers, their Citation Responding and the Publication Authority. J Therm Anal Calorim. 2018. 131(1): 463–471. [Google Scholar]
  • 6.Di Bitetti MS, Ferreras JA. Publish (in English) or Perish: The Effect on Citation Rate of Using Languages other than English in Scientific Publications. Ambio. 2017. 46(1): 121–127. doi: 10.1007/s13280-016-0820-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Tardy C. The Role of English in Scientific Communication: Lingua Franca or Tyrannosaurus Rex? J English Acad Purp. 2004. 3(3): 247–269. [Google Scholar]
  • 8.van Raan AFJ. Competition Among Scientists for Publication Status: Toward a Model of Scientific Publication and Citation Distributions. Scientometrics. 2001. 51(1): 347–357. [Google Scholar]
  • 9.Glaenzel W, Schubert A. Double Effort = Double Impact? A Critical View at International Co-authorship in Chemistry. Scientometrics. 2001. 50(2): 199–214. [Google Scholar]
  • 10.Lee S, Bozeman B. The Impact of Research Collaboration on Scientific Productivity. Soc Stud Sci. 2005. 35(5): 673–702. [Google Scholar]
  • 11.Jones B F. The Burden of Knowledge and the “Death of the Renaissance Man”: Is Innovation Getting Harder? Rev. Econ. Stud. 2009. 76(1): 283–317. [Google Scholar]
  • 12.Wuchty S, Jones BF, Uzzi B. The Increasing Dominance of Teams in Production of Knowledge. Science. 2007. 316(5827): 1036–1039. doi: 10.1126/science.1136099 [DOI] [PubMed] [Google Scholar]
  • 13.Ossenblok TLB, Verleysen FT, Engels TCE. Coauthorship of Journal Articles and Book Chapters in the Social Sciences and Humanities (2000–2010). J Assoc Inf Sci Technol. 2014. 65(5): 882–897. [Google Scholar]
  • 14.Ductor L. Does Co-authorship Lead to Higher Academic Productivity? Oxf Bull Econ Stat. 2015. 77(3): 385–407. [Google Scholar]
  • 15.Bush GP, Hattery LH. Teamwork and Creativity in Research. Adm Sci Q. 1956. 1(3): 361–372. [Google Scholar]
  • 16.Bergman RG, Danheiser RL. Reproducibility in Chemical Research. Angew. Chem. Int. Ed. 2016. 55(41): 12548–12549. doi: 10.1002/anie.201606591 [DOI] [PubMed] [Google Scholar]
  • 17.Claxton LD. Scientific Authorship–Part 2. History, Recurring Issues, Practices, and Guidelines. Mutat Res. 2005. 589(1): 31–45. doi: 10.1016/j.mrrev.2004.07.002 [DOI] [PubMed] [Google Scholar]
  • 18.Martinson BC, Anderson MS, de Vries R. Scientists Behaving Badly. Nature. 2005. 435(7043): 737–738. doi: 10.1038/435737a [DOI] [PubMed] [Google Scholar]
  • 19.Marusic A, Bosnjak L, Jeroncic A. A Systematic Review of Research on the Meaning, Ethics and Practices of Authorship across Scholarly Disciplines. PLoS ONE. 2011. 6(9): e23477. doi: 10.1371/journal.pone.0023477 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Lane J. Let’s Make Science Metrics More Scientific. Nature. 2010. 464: 488–489. doi: 10.1038/464488a [DOI] [PubMed] [Google Scholar]
  • 21.McDonald RJ, Neff KL, Rethlefsen ML, Kallmes DF. Effects of Author Contribution Disclosures and Numeric Limitations on Authorship Trends. Mayo Clin Proc. 2010. 85(10): 920–927. doi: 10.4065/mcp.2010.0291 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Sauermann H, Haeussler C. Authorship and Contribution Disclosure. Sci Adv. 2017. 3(11): e1700404. doi: 10.1126/sciadv.1700404 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Lariviere V, Haustein S, Mongeon P. The Oligopoly of Academic Publishers in the Digital Era. PLoS One. 2015. 10(6): e0127502. doi: 10.1371/journal.pone.0127502 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Elsevier. Ethical Publishing . [cited 26 May 2021]. In: Elsevier; [Internet]. Oxford: 2021. Available from: https://www.elsevier.com/authors/policies-and-guidelines. [Google Scholar]
  • 25.Brand A, Allen L, Altman M, Hlava M, Scott J. Beyond Authorship: Attribution, Contribution, Collaboration, and Credit. Learned Publishing. 2015. 28(2): 151–155. [Google Scholar]
  • 26.Journal of Business Venturing. Author Information Pack. [cited 26 May 2021]. In: Elsevier; [Internet]. Oxford: 2018. Available from: https://www.elsevier.com/journals/journal-of-business-venturing/0883-9026?generatepdf=true. [Google Scholar]
  • 27.Research Policy. Author Information Pack. [cited 26 May 2021]. In: Elsevier; [Internet]. Oxford: 2018. Available from: https://www.elsevier.com/journals/research-policy/0048-7333?generatepdf=true. [Google Scholar]
  • 28.Social Sciences & Humanities Open. Author Information Pack. [cited 26 May 2021]. In: Elsevier; [Internet]. Oxford: 2018. Available from: https://www.elsevier.com/journals/social-sciences-and-humanities-open/2590-2911?generatepdf=true. [Google Scholar]
  • 29.Academy of Management. AOM Code of ethics. Academy of Management. 2018. [Cited 2018 October 18]. Available from http://aom.org/About-AOM/AOM-Code-of-Ethics.aspx. [Google Scholar]
  • 30.American Sociological Association. Code of ethics. American Sociological Association. 2018. [Cited 2018 October 18]. Available from: http://www.asanet.org/sites/default/files/asa_code_of_ethics-june2018.pdf. [Google Scholar]
  • 31.American Psychological Association. Authorship guidelines for dissertation supervision. Washington, DC: American Psychological Association; 1983. [Google Scholar]
  • 32.American Economic Association. AEA Code of Professional Conduct. 2018. [Cited 26 May 2021]. Available from: https://www.aeaweb.org/about-aea/code-of-conduct. [Google Scholar]
  • 33.Bates T, Anic A, Marusic M, Marusic A. Authorship Criteria and Disclosure of Contributors–Comparison of 3 General Medical Journals with Different Author Contribution Forms. JAMA. 2004. 292(1): 86–88. doi: 10.1001/jama.292.1.86 [DOI] [PubMed] [Google Scholar]
  • 34.Osborne JW, Holland A. What is Authorship and What Should it be? A Survey of Prominent Guidelines for Determining Authorship in Scientific Publications. Pract Assess. 2009. 14(15): 1–19. [Google Scholar]
  • 35.PLoS One. Authorship. 2021. [Cited 26 May 2021]. Available from: https://journals.plos.org/plosone/s/authorship. doi: 10.1371/journal.pone.0251194 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Science Magazines. Science Journals: Editorial Policies. 2021. [Cited 26 May 2021]. Available from: https://www.sciencemag.org/authors/science-journals-editorial-policies. [Google Scholar]
  • 37.Springer Nature. Authorship. 2021. [Cited 26 May 2021]. Available from: https://www.nature.com/nature-portfolio/editorial-policies/authorship. [Google Scholar]
  • 38.McNutt MK, Bradford M, Drazen JM, Hanson B, Howard B, Jamieson KH, et al. Transparency in Authors’ Contributions and Responsibilities to Promote Integrity in Scientific Publication. PNAS. 2018. 115(11): 2557–2560. doi: 10.1073/pnas.1715374115 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Albert T, Wager E. How to Handle Authorship Disputes: A Guide for New Researchers. In: White E, editor. The COPE Report 2003. London: BMJ Books; 2004. Pp. 32–34. [Google Scholar]
  • 40.International Committee of Medical Journal Editors. The New ICMJE Recommendations. 2013. [Cited 26 May 2021]. Available from: http://www.icmje.org/news-and-editorials/new_rec_aug2013.html. [Google Scholar]
  • 41.Wager E. The Committee on Publication Ethics (COPE): Objectives and Achievements 1997–2012. Presse Med. 2012. 41(9): 861–866. [DOI] [PubMed] [Google Scholar]
  • 42.Bosch X, Ross JS. Ghostwriting: Research Misconduct, Plagiarism or Fool’s Gold? AM J Med. 2012. 125(4): 324–326. doi: 10.1016/j.amjmed.2011.07.015 [DOI] [PubMed] [Google Scholar]
  • 43.Liddell J. A Comprehensive Definition of Plagiarism. Community Jr Coll Libr. 2003. 11(3): 43–52. [Google Scholar]
  • 44.Waltman L. A Review of the Literature on Citation Impact Indicators. J. Informetr. 2016. 10(2): 365–391. [Google Scholar]
  • 45.Oberlander SE, Spencer RJ. Graduate Students and the Culture of Authorship. Ethics Behav. 2006. 16(3): 217–232. [Google Scholar]
  • 46.Bennett DM, Taylor DM. Unethical Practices in Authorship of Scientific Papers. Emerg Med. 2003. 15(3): 263–270. doi: 10.1046/j.1442-2026.2003.00432.x [DOI] [PubMed] [Google Scholar]
  • 47.Klein CJ, Moser-Veillon PB. Authorship: Can You Claim a Byline? J Am Diet Assoc. 1999. 99(1): 77–79. doi: 10.1016/S0002-8223(99)00020-6 [DOI] [PubMed] [Google Scholar]
  • 48.Brennan TA. Buying Editorials. N Engl J Med. 1994. 331(10): 673–675. doi: 10.1056/NEJM199409083311012 [DOI] [PubMed] [Google Scholar]
  • 49.Moffatt B, Elliott C. Ghost Marketing: Pharmaceutical Companies and Ghostwritten Journal Articles. Perspect Biol Med. 2007. 50(1): 18–31. doi: 10.1353/pbm.2007.0009 [DOI] [PubMed] [Google Scholar]
  • 50.Greenland P, Fontanarosa PB. Ending Honorary Authorship. Science. 2012. 337(6098): 1019. doi: 10.1126/science.1224988 [DOI] [PubMed] [Google Scholar]
  • 51.da Silva JAT, Dobranszki J. Multiple Authorship in Scientific Manuscripts: Ethical Challenges, Ghost and Guest/Gift Authorship, and the Cultural/Disciplinary Perspective. Sci Eng Ethics. 2016. 22(5): 1457–1472. doi: 10.1007/s11948-015-9716-3 [DOI] [PubMed] [Google Scholar]
  • 52.Feeser VR, Simon JR. The Ethical Assignment of Authorship in Scientific Publications: Issues and Guidelines. Acad Emerg Med. 2008. 15(10): 963–969. doi: 10.1111/j.1553-2712.2008.00239.x [DOI] [PubMed] [Google Scholar]
  • 53.Drenth JPH. Multiple Authorship–The Contribution of Senior Authors. JAMA. 1998. 280(3): 219–221. doi: 10.1001/jama.280.3.219 [DOI] [PubMed] [Google Scholar]
  • 54.Moffatt B. Responsible Authorship: Why Researchers must Forgo Honorary Authorship. Account Res. 2011. 18(2): 76–90. doi: 10.1080/08989621.2011.557297 [DOI] [PubMed] [Google Scholar]
  • 55.Tomkins A, Zhang M, Heavlin WD. Reviewer Bias in Single- versus Double-Blind Peer Review. PNAS. 2017. 114(48): 12708–12713. doi: 10.1073/pnas.1707323114 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Valderas JM, Bentley RA, Buckley R, Wray KB, Wuchty S, Jones BF et al. Why do Team-authored Papers get Cited More? Science. 2007. 317(5844): 1496–1498. doi: 10.1126/science.317.5844.1496b [DOI] [PubMed] [Google Scholar]
  • 57.Padmanabhan S. Who is an Author? Rights and Righteousness!. J Indian Orthod Soc. 2015. 49(3): 127–128. [Google Scholar]
  • 58.Sugimoto CR, Lariviere V, Ni C, Cronin B. Journal Acceptance Rates: A Cross-Disciplinary Analysis of Variability and Relationships with Journal Measures. J. Informetr. 2013. 7(4): 897–906. [Google Scholar]
  • 59.Björk BC. Acceptance Rates of Scholarly Peer-Reviewed Journals: A Literature Survey. El Profesional de la Información. 2019. 28(4): e280407. [Google Scholar]
  • 60.Bartle SA, Fink AA, Hayes BC. Psychology of the Scientist: LXXX. Attitudes Regarding Authorship Issues in Psychological Publications. Psychol Rep. 2000. 86: 771–788. doi: 10.2466/pr0.2000.86.3.771 [DOI] [PubMed] [Google Scholar]
  • 61.Costa MM, Gatz M. Determination of Authorship Credit in Published Dissertations. Psychol Sci. 1992. 3(6): 354–357. [Google Scholar]
  • 62.Sandler JC, Russel BL. Faculty-student Collaborations: Ethics and Satisfaction in Authorship Credit. Ethics Behav. 2005. 15(1): 65–80. [Google Scholar]
  • 63.Tryon GS, Bishop JL, Hatfield TA. Doctoral Students’ Beliefs about Authorship Credit for Dissertations. Train Educ Prof Psychol. 2007. 1: 184–192. [Google Scholar]
  • 64.Lariviere V, Desrochers N, Macaluso B, Mongeon P, Paul-Hus A, Sugimoto CR. Contributorship and Division of Labor in Knowledge Production. Soc. Stud. Sci. 2016. 46(3): 417–435. doi: 10.1177/0306312716650046 [DOI] [PubMed] [Google Scholar]
  • 65.Ioannidis JPA, Klavans R, Boyack KW. Thousands of Scientists Publish a Paper Every Five Days. Nature. 2018. 561(7722): 167–169. doi: 10.1038/d41586-018-06185-8 [DOI] [PubMed] [Google Scholar]
  • 66.Rossiter MW. The Matthew Matilda Effect in Science. Soc. Stud. Sci. 1993. 23(2): 325–341. [Google Scholar]
  • 67.Salita JT. Authorship Practices in Asian Cultures. Write Stuff. 2010. 19(1): 36–38. [Google Scholar]
  • 68.Hopp C, Hoover GA. How Prevalent is Academic Misconduct in Management Research? J Bus Res. 2017. 80(3): 73–81. [Google Scholar]
  • 69.Liao QJ, Zhang YY, Fan YC, Zheng MH, Bai Y, Eslick GD et al. Perceptions of Chinese Biomedical Researchers Towards Academic Misconduct: A Comparison between 2015 and 2010. Sci Eng Ethics. 2018. 24(2): 629–645. doi: 10.1007/s11948-017-9913-3 [DOI] [PubMed] [Google Scholar]
  • 70.Zippia, Inc. Social Scientist Statistics and Facts in the US. Available from https://www.zippia.com/social-scientist-jobs/demographics/. [Google Scholar]
  • 71.Barczak G, Hopp C, Kaminski J, Piller F, Pruschak G. How Open is Innovation Research?–An Empirical Analysis of Data Sharing among Innovation Scholars. Ind. Innov. 2022. 29(2): 186–218. [Google Scholar]
  • 72.Mowatt G, Shirran L, Grimshaw JM, Rennie D, Flanagin A et al. Prevalence of Honorary and Ghost Authorship in Cochrane Reviews. JAMA. 2002. 287(21): 2769–2771. doi: 10.1001/jama.287.21.2769 [DOI] [PubMed] [Google Scholar]
  • 73.Henriksen D. The Rise in Co-Authorship in the Social Sciences (1980–2013). Scientometrics. 2016. 107: 455–476. [Google Scholar]
  • 74.Alin A. Multicollinearity. WIREs Comput. Stat. 2010. 2(3): 370–374. [Google Scholar]
  • 75.Williams R. Analyzing Rare Events with Logistic Regression. University of Notre Dame Working paper.2018. Available from https://www3.nd.edu/~rwilliam/stats3/rareevents.pdf. [Google Scholar]
  • 76.Wren JD, Kozak KZ, Johnson KR, Deakyne SJ, Schilling LM, Dellavalle RP. The Write Position: A Survey of Perceived Contributions to Papers Based on Byline Position and Number of Authors. EMBO Rep. 2007. 8(11): 988–991. doi: 10.1038/sj.embor.7401095 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Heckman JJ. Micro Data, Heterogeneity, and the Evaluation of Public Policy: Nobel Lecture. J Polit Econ. 2001. 109(4): 673–748. [Google Scholar]
  • 78.Wislar JS, Flanagin A, Fontanarosa PB, DeAngelis CD. Honorary and Ghost Authorship in High Impact Biomedical Journals: A Cross Sectional Survey. BMJ. 2011. 343: d6128. doi: 10.1136/bmj.d6128 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79.Schmidt MFH, Sommerville JA. Fairness Expectations and Altruistic Sharing in 15-Month-Old Human Infants. PLoS One. 2011. 6(10): e23223. doi: 10.1371/journal.pone.0023223 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Romani L, Holck L, Risberg A. Benevolent Discrimination: Explaining how Human Resources Professionals can be Blind to the Harm of Diversity Initiatives. Organization. 2019. 26(3): 371–390. [Google Scholar]
  • 81.Hofstede G. Dimensionalizing Cultures: The Hofstede Model in Context. Online Readings in Psychology and Culture. 2011. 2(1): 8. [Google Scholar]
  • 82.Yukawa Y, Kitanaka C, Yokoyama M. Authorship Practices in Multi-Authored Papers in the Natural Sciences at Japanese Universities. Int. J. Jpn. Sociol. 2014. 23(1): 80–91. [Google Scholar]
  • 83.Kressel HY, Dixon AK. Where Is the Honor in Honorary Authorship? Radiology. 2011. 259(2): 324–327. doi: 10.1148/radiol.11110422 [DOI] [PubMed] [Google Scholar]
  • 84.Laband DN. Contribution, Attribution and the Allocation of Intellectual Property Rights: Economics Versus Agricultural Economics. Labour Econ. 2002. 9(1): 125–131. [Google Scholar]
  • 85.Lake DA. Who’s On First? Listing Authors by Relative Contribution Trumps the Alphabet. PS. 2010. 43(1): 43–47. [Google Scholar]
  • 86.Yu-Wei C. Definition of Authorship in Social Science Journals. Scientometrics. 2019. 118: 563–585. [Google Scholar]
  • 87.Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JPA, Goodman SN. Assessing Scientists for Hiring, Promotion, and Tenure. PLoS Biology. 2018. 16(3): e2004089. doi: 10.1371/journal.pbio.2004089 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 88.Hopp C, Speil A. How Prevalent is Plagiarism Among College Students? Anonymity Preserving Evidence from Austrian Undergraduates. Account. Res. 28(3): 133–148. doi: 10.1080/08989621.2020.1804880 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Emmanuel Manalo

12 Mar 2021

PONE-D-20-30365

And the credit goes to … - Ghost and honorary authorship among social scientists

PLOS ONE

Dear Dr. Pruschak,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that carefully and systematically addresses all the points raised by the three reviewers during the review process.

All three reviewers explain clearly the revisions that they consider necessary for your manuscript to reach publishable standards. Reviewers 1 and 3 also provide detailed comments and suggestions pertaining to specific parts of the text of your manuscript. It is important that you address all of these, explaining especially in instances where your responses may not be exactly the same as what the reviewers suggest you need to do.

Please submit your revised manuscript by Apr 26 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Emmanuel Manalo, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We suggest you thoroughly copyedit your manuscript for language usage, spelling, and grammar. If you do not know anyone who can help you do this, you may wish to consider employing a professional scientific editing service.  

Whilst you may use any professional scientific editing service of your choice, PLOS has partnered with both American Journal Experts (AJE) and Editage to provide discounted services to PLOS authors. Both organizations have experience helping authors meet PLOS guidelines and can provide language editing, translation, manuscript formatting, and figure formatting to ensure your manuscript meets our submission guidelines. To take advantage of our partnership with AJE, visit the AJE website (http://learn.aje.com/plos/) for a 15% discount off AJE services. To take advantage of our partnership with Editage, visit the Editage website (www.editage.com) and enter referral code PLOSEDIT for a 15% discount off Editage services.  If the PLOS editorial team finds any language issues in text that either AJE or Editage has edited, the service provider will re-edit the text for free.

Upon resubmission, please provide the following:

  • The name of the colleague or the details of the professional service that edited your manuscript

  • A copy of your manuscript showing your changes by either highlighting them or using track changes (uploaded as a *supporting information* file)

  • A clean copy of the edited manuscript (uploaded as the new *manuscript* file)

3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

 

We will update your Data Availability statement on your behalf to reflect the information you provide.

4. Please note that the ICMJE statement now lists a fourth authorship criterion, namely "Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved." http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html#two.

5. Please improve statistical reporting and refer to p-values as "p<.001" instead of "p=.000". Our statistical reporting guidelines are available at https://journals.plos.org/plosone/s/submission-guidelines#loc-statistical-reporting.

6. In your methods section, please provide additional information about your participant inclusion and exclusion criteria as well as the selection criteria for the journals from which you selected authors. Please also provide the list of journals as supporting information.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: PONE-D-20-30365

This paper presents a survey study on gift and ghost authorship in the social sciences. Before speaking to the content of the paper there is the research ethics review situation. According to the authors, this study did not obtain approval for the survey study. The authors explain that social science studies do not require IRB approval in Germany. Does PLOS allow this justification? If so, I suggest that the authors could make explicit the policies (regulations) or institutional norms that protect human subjects (e.g. federal data protection, confidentiality mechanisms, informed consent).

Given the limited literature regarding authorship in this field, this paper does present some interesting empirical findings regarding rates of honorary and ghost authorship as well as power and gender. Although I do believe that such results should eventually be published, the paper still has important shortcomings at this point.

Major issues:

1. There is a generalized vagueness throughout the paper which makes it difficult for the reader to follow. For example, it is unclear how multiple authors create better quality and higher productivity papers. It is also unclear why the simple presence of multiple authors gives way to malpractice. There were questionable practices before multiple authorship. This is further developed in the specific issues section below. A thorough review of the English may also help.

2. The authors often state that there are no applicable authorship criteria in the social sciences (e.g. p. 30). However, there is authorship guidance in various professional societies (e.g. the American Sociological Association Code of Ethics and the American Psychological Association Code of Conduct). Moreover, the Committee on Publication Ethics does have some guidance on authorship: https://publicationethics.org/authorship. Lastly, certain journals, research centers as well as funding agencies have developed some guidance that applies to social scientists.

3. Surprisingly, even though the authors say there are no criteria, the authors seem to explain that there is a correct and incorrect way to distribute authorship. Even in the abstract, the authors seem to want to evaluate if scholars assign authorship “correctly” in three hypothetical situations. However, if there are no criteria, how could there be a correct and incorrect way to distribute authorship?

4. The recommendations at the end of the paper (p.27) do not logically follow from the actual results of the survey. For example, the authors mention the importance of contributor declarations but never explain why this would be important. If a person lies about authorship they could also simply lie about contributorship. Also, ghost authors will not be listed as contributors. The authors also mention that there are power dynamics and thus whistleblowing should be used to go outside the system and change past submissions. Why is whistleblowing used to counter power dynamics as opposed to other methods? Moreover, changing past submissions would have a huge impact on science. Not only would authors change on past papers, so would the citations and references. Do we also take tenure away from people after the fact? Feasibility and cost would be a serious consideration. This is almost a completely different paper and argument.

Minor comments:

In many countries, the notion of malpractice is often a legal term. Is this term found in Germany in policy? To my knowledge authorship regulations are not found directly in legal documents. It may be lack of compliance, misbehavior or questionable research conduct.

p.3 l. 42 There are statements that are often vague and not explained. For example, the first sentence “Publish or Perish” epitomizes the academic reward system across scientific disciplines nowadays better than ever before”. Why is this notion more important now as compared to before? When did this shift happen?

p.54-56; sentence nonsensical. I am not sure what the gold standard is in this sentence. Often we say peer review is the gold standard. I have not heard this about citation and publication counts being the gold standard.

p.4 l.61 As the authors mention, there is a lot more literature in the life sciences and medical sciences when compared to the social sciences. However, the authors should also cite the literature that does exist regarding the rise of co-authorship in the social sciences (1). The authors can also look at studies on interdisciplinarity that take a cross-disciplinary lens (2–8).

p. 4 l. 63 The authors suggest that contributor declarations have not been included in social science journals. There are no citations and it is not a commonly accepted fact. There are journals of an interdisciplinary nature that include social science work (e.g. PLOS). Some social science journals may be gathering information about contributions during paper submission without actually publishing the content. Also, if there are no citations to illustrate this fact, it would be relevant to exemplify this point by citing journal policies.

p.6 l. 103 ; please explain Harbring and Irlenbush study. This is not clear.

p.10 l. 183; I am unsure what the antepenultimate and penultimate task would be. What is the “last task”? Is this the one that takes the least amount of time? The most value? The idea? The methods? The manuscript writing?

p.11 l. 203-205; unclear

l. 206–209; what is collaboration on eye level?

l.213 – workload shift should be division of labor.

p.25 l.444-447. I am unsure what result leads to the notion that not everyone knows what constitutes authorship. There may simply be differences in interpretation without the individuals not understanding the concept.

p.25 l.448-452. The authors mention objective authorship criteria. Although there are regulations stating that individuals must have substantially contributed to the research to be included as authors, the notion of “substantial contribution” is an ongoing debate. Thus it remains a subjective notion. If the authors believe there is an objective notion at play, please explain.

p.26. Although Weinstein may have had an impact in this context, there is also increased awareness regarding gender inequities in science. The authors should explain the negative impact of these inequities by referencing the literature on the subject.

p.27 l.475; the authors state that researchers from Asian countries have a stronger likelihood of obedience. As written, it could be construed as stereotyping. Can you refer to the literature to explain the hierarchy in certain groups?

p.26. l. 444 “This accords to the authorship guidelines of their academic societies: Articles based on dissertations should always include the PhD students as authors” Can you cite these academic societies? Is this also the case in university regulations?

p. 26 I am not sure what the authors consider as elderly authors. Is this actually based on age? Is it based on academic seniority? The notion “elderly” has a very negative connotation and should be avoided.

p. 26. When talking about gender please use the notion women/men. Gender is a social construct. Avoid the notion of male/female that refers to sex as a biological variable.

p. 29 the authors mention a difference between authorship and effort which is certainly true. They conclude that because there is a difference, research institutions should rely on other estimates. What are other estimates and why and how would one include the notion of effort in evaluation?

p.31 l. 554 I am unsure what expressiveness implies in this context.

Methods and Results: please ensure that a second peer-review looks over statistics. I am not a quantitative researcher.

Reviewer #2: This paper reports results of a survey of authors of social sciences articles showing that ghost and honorary authorships occur in the social sciences, and correspondingly suggests that authorship standards should be implemented in the social sciences, much as they already are in certain journals in the hard sciences.

Overall, the paper is well done.

The one issue I see is with vignette #3. As written, this vignette mingles the authorship issues with a severe ethical issue. In this vignette “After the successful graduation of the PhD student the professor creates a scientific paper based on the empirical results of the thesis and submits it to a journal.” The vignette does not state if the masters student was listed as an author. It does not state if the masters student gave permission for the professor to turn the thesis into a paper. While the professor provided input to the study and thesis, the vignette makes it look like the professor is appropriating the work of the student. If this is indeed the case, then I have a hard time trusting the results of the survey for this vignette because it mingles the ethical issue (appropriation of work) with the authorship issues. I would imagine that many of those completing the survey would have had the same reaction, and that their answers regarding authorship would have been clouded by the ethical issue.

Given the potential confounding of vignette #3, I would suggest it be removed from the paper unless the authors can make a convincing case that no confounding occurred. In my view, removal of this vignette would not really weaken the conclusions. If the results of this vignette are removed, it would still be worth keeping something about it in the paper to highlight the ethical issue.

Regarding data availability, I would suggest that the authors add a table (or supplement) with the list of journals/conferences associated with the papers/authors who answered the survey. The list of sources is important and cannot be considered confidential. Also, I would suggest the authors do what is necessary to fully anonymize the survey data and to make it available. Although they say “available upon request (with caveats)”, in practice this tends to simply delay the availability question and in most cases authors then decline to share data. I think data availability must be addressed before the paper is published.

Reviewer #3: In this study, the authors conduct a survey of publishing authors in social science circles in order to understand the prevalence of ghost and honorary authorship in the social sciences. Ghost authorship is when a contributor fits the definition of an author, according to the ICMJE, but wasn’t an author on a paper. Honorary authorship, in contrast, is when the contributor does not meet the criteria of an author but is an author on the paper anyway. The authors of this study ask respondents about their most recent paper, and also have respondents assign authorship to authors in three vignettes. Using these results, the authors conduct an exploratory analysis of the factors relating to actual and perceived ghost/honorary authorship, and also of how surveyed authors perceive contributions in the vignettes. The findings are that ghost/honorary authorship is a common issue in the social sciences, yet given their responses to the vignettes, authors also have a strong working definition of what constitutes authorship. The authors interpret this to mean that, at least in the social sciences, perceptions do not match reality when it comes to authorship criteria, and some people are undeservedly being added to or left off of authorship bylines.

The topic is interesting from a scientometrics and science evaluation standpoint, and should be of interest to general readers, though the authors could do a better job motivating their study. The quality of the writing was variable throughout, and I would appreciate work improving the clarity of the writing, as well as structuring the manuscript in a more readable way. The methods and statistical analyses are fair, although I would appreciate more detail on both. My largest issues are with the framing of the paper and the findings, specifically with the choice of the ICMJE as the “correct” criteria, and the decision to frame the interpretation of results in terms of ‘correct’ and ‘incorrect’ responses.

Below I provide some major and minor comments which I hope will improve the quality of the manuscript.

# Major comments:

Major Comment 1:

My biggest issue with the manuscript is the use of the “correct/incorrect” lens for interpreting results. Specifically, I think that this sentence in the discussion is problematic, as well as similar text throughout: ‘In addition, social scientists’ perception of the occurrence of authorship malpractices provide grounds to suspect that not everyone might be aware of what constitutes authorship.”

But perceptions are important! What constitutes authorship on a paper is really a norm of a disciplinary field, and so it would make sense that the norms of the medical and life sciences (exemplified in the ICMJE criteria) would not transfer cleanly to the social sciences, just as we wouldn’t expect the norms of high-energy physics to transfer cleanly to the life sciences. Ideally, the authors would use more social-science-based criteria for determining the “correct” contributions; however, no such criteria exist. But if such criteria were created, I imagine that they would be different from the ICMJE’s, and based on social scientists’ perceptions of contribution in their own field.

My point here is that a “significant contribution” to the typical social scientist will likely mean something very different than to the typical life scientist. Therefore, using a “correct/incorrect” framing, based on the ICMJE, is very strongly applying one field’s standard onto another, where it perhaps isn’t warranted.

I suggest that the authors take a more cautious approach to interpreting the results, particularly avoiding language of “correctness/incorrectness”, and be careful not to dismiss the importance of social scientists’ perceptions. The use of the ICMJE is mostly warranted, if imperfect (see Major Comment 2), but its imperfection should be a factor in interpreting results.

Major comment 2:

Norms of authorship and contribution vary according to field. What warrants a co-authorship in one field, like life sciences, may warrant only an acknowledgement in another. In a particularly-interesting case, students of the physicist Stephen Hawking received no credit at all! (Though the prestige of their posting still granted them great career success).

In this paper, the authors use a definition of contribution from the ICMJE, an organization in the life and medical sciences, and apply it to situations in the social sciences. Because notions of contribution can vary so widely between fields, the choice of the ICMJE definition needs to be better motivated, and the potential consequences discussed.

One way to better motivate the choice of the ICMJE is to make it explicit that PLoS also bases its definition of contribution on the same guidelines, and PLoS, as you are aware, publishes a great deal in the social sciences.

The paper would also benefit from discussions of other definitions of contribution, such as those used by Nature journals. For example, would an alternative definition of contribution change the “correct” answer of any of the vignettes?

The pages with PLoS and Nature authorship guidelines are linked below:

https://journals.plos.org/plosone/s/authorship

https://www.nature.com/nature-research/editorial-policies/authorship

Major comment 3:

Aspects of the study are introduced in the methods section without being motivated in the introduction. For example, the authors examine gender, country, age, and more in relation to ghost authorship, however the introduction lacks any mention of any demographic items and their significance to ghost and honorary authorship.

I suggest the authors flesh out the introduction somewhat, bringing up the importance of professional rank (postdoc vs. professor) and gender in order to set the narrative for exploring these variables later in the analysis.

The discussion on these variables is also quite light, and is not well-contextualized in the existing literature. I would appreciate a more in-depth and referenced discussion that considers how gender and other demographics relate to authorships. For example, on the gendered division of labor in science, and also the “Matilda Effect” in science. I linked both of these below.

https://doi.org/10.1177/0306312716650046

https://www.jstor.org/stable/285482

Major Comment 4:

Assuming I’m understanding the authors correctly, I don’t believe the authors’ interpretation of the Central Limit Theorem is correct (in section Methods.C). The CLT has to do with repeated samples from an underlying distribution, such that taking the mean of repeated samples (say, samples of size 100 from a population of size 10,000) will result in those means being normally distributed. Here, the authors take a single sample of 2,000 or so survey responses from some underlying population of scholars. As such, the CLT, at least in my understanding, doesn’t seem to make sense here. The authors should update Methods.C accordingly.

I would also feel more comfortable with additional diagnostics to determine whether the models fit statistical assumptions. One issue, for example, could be multicollinearity, or the degree to which independent variables are correlated, which can cause problems in logistic regression. There should be a function in STATA or some other statistical software (the `vis` package in R, for example) to run a test on this assumption.

Additionally, “# of ghost authors”, “# of honorary authors”, “perceived ghost authors”, and “perceived honorary authors” all appear non-normally distributed, and even zero-inflated, in figure 2. This might confound the results from linear regressions using these values. I would advise adding either a) diagnostics of the model, such as Q-Q plots, to demonstrate that model assumptions are not violated, or b) additional robustness checks to ensure that the main results don’t qualitatively change under different models. For example, a zero-inflated Poisson model on “# of ghost authors”.

Major Comment 5:

The authors use t-tests and linear regression coefficients throughout, but there is little discussion of the effect size, or in other words, how meaningful or trivial the observed effects are. For example, the authors state in line 292 “that the perceived share of honorary authors is significantly lower than the identified share of honorary authors (t=7.5946; df=1841; p=0.0000)”, but how meaningful is the exact difference?

I advise that the author include a mention of the effect size when discussing these statistical tests. For example, in the above sentence, include the difference in the shares of perceived and actual honorary authors. Similarly, the authors should contextualize the coefficients from the regression, discussing how much of the variance they actually explain in the analysis.

This also brings me to another point—that the R2 values, or variance explained by these regressions are all very small. This isn’t a problem, it is exploratory after all, but consider incorporating this fact into the paper’s discussion, that even the strongest coefficients explain only a small part of the variance in the data.

# Minor comments:

- The authors find that “authorship issues also occur in the social sciences”, but is there any evidence that it is better/worse than in the life sciences? Other fields? Also, is there any evidence that policies implemented to combat these issues, such as contribution disclosures, have improved the situation in other fields?

- Line 28: This is a strong claim. Surely there is one social science journal that requires contribution disclosures? Or perhaps a Mega Journal like PLoS or Nature that requires them, and also published them in the social sciences.

Line 46: This sentence implies a causal mechanism for collaboration, namely, that researchers do it to bolster their productivity. However, I'm not confident that this claim is supported by the cited papers. For example, people might collaborate because information technology makes it easier, or because modern scientific research is highly-complex, and productivity is not a consideration. I advise being a little more cautious with the wording here.

- Line 109: I’m not sure what “aspire” means here, replace “might not aspire” with “might decline”

- Line 110-112: a reference should be given here for this claim about freelancers

- Fig 1: pie charts are difficult to read and interpret. Replace figure 1 with a bar chart.

- The terms “ghost” and “honorary” authorship are used in the abstract and introduction, but not formally defined until the literature review. A brief plain-language definition should be given for each of these terms at their initial use, in both the abstract and intro.

- A helpful paper for section C of the intro, in relation to honorary authorships, could be the “Chaperone effect in scientific publishing”: https://www.pnas.org/content/pnas/115/50/12603.full.pdf

- Line 159-160: The 20% and 50% numbers quoted appear to come from an article cited in 1993 (I don’t have access to it). This is an old reference, and surely not representative of contemporary acceptance rates. I suggest finding an updated reference and set of numbers. A starting point could be this paper: https://www.sciencedirect.com/science/article/abs/pii/S1751157713000710

- The authors first state that they used “Qualtrics” on line 200, but this should be brought up much earlier, at the beginning of the methods. Really, Methods.B should be made the first section of the methods.

- line 206: I’m not sure what “eye-level” means there, perhaps another term could be used?

- line 235: the authors state that they collect authors from “well renowned expert journals”, but how were these journals and conferences selected? Are they representative of the social sciences as a whole, or are they skewed towards certain fields?

- line 244: “ballot stuffing” has a very specific, political meaning in the U.S. at least, and so I would rephrase this sentence to avoid it.

- Line 456: Contrarily -> Contrary

- Line 465: the authors should give a few more words about the Weinstein scandal, as some readers may not immediately know what it is in reference to.

- Line 530: Is there evidence that hiring committees are not already screening applicants along a variety of criteria? #Publications, while important, rarely seems like the -only- criterion used in hiring and promotion decisions.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Elise Smith

Reviewer #2: No

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: PONE- PR OCT 2020.docx

Decision Letter 1

Alberto Baccini

28 Feb 2022

PONE-D-20-30365R1

And the credit goes to … - Ghost and honorary authorship among social scientists

PLOS ONE

Dear Dr. Pruschak,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I was appointed as academic editor of the paper on 20 January 2022. I read the long editorial history of the manuscript and the two reviewers' reports agreeing on minor revision. I apologize in advance for highlighting, only at this stage, a couple of points that require careful revision.

A first very general point is about the design of the sample for the survey. You started from a sample of 63,240 scholars (email addresses). Due to errors in email addresses, the sample was reduced to 47,697 recipients. Respondents numbered 2,817. The respondents represent 4.5% of the sample (5.91% of the recipients). The final number of completed surveys used for analysis is 2,222, i.e. 3.5% of the sample. This point should also be indicated.

This dramatic loss of respondents should be discussed more carefully than in the current version of the paper, also in reference to the presence of sensitive questions in the survey. It appears that the authors did not design the survey to handle the presence of sensitive questions and the possible self-selection of respondents. Hence, results should not be generalized to ‘social scientists’ as in the abstract rr. 29-30 and in the conclusion rr. 689-690. More generally, I think that the paper should be more cautious about the general robustness of the survey.

A second point is about the regression analysis. From tables S5 and S6 it appears that the confidence intervals for most of the considered variables include zero, i.e. the estimates are not statistically significant. This is not highlighted in the presentation of the results. In particular, the discussion of the gender effect is based on data that are not statistically significant; this is also true for other variables. I would like to suggest a more careful interpretation of regression results and, possibly, the clear indication of significance levels directly in Tables 1 and 3.

Please submit your revised manuscript by Apr 14 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Alberto Baccini, Ph.D.

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: (No Response)

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: I Don't Know

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: No

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: Most of my comments have been addressed. However, my request that the journal/conference list be published has been ignored. The authors have successfully argued that the data cannot be made publicly available due to the potential for reconstruction of PII. I'll acquiesce on that. However, they cannot argue that the journal list should not be published. No PII can be inferred from a simple journal list without any other data. Addition of the list will add credibility to the paper in that readers will see names of journals they know.

Reviewer #3: I thank and commend the authors for the hard work of revising this manuscript. Its quality is greatly improved, and I especially appreciate how the ICMJE guidelines are re-contextualized to be less absolutist. In total, I feel that my comments and suggestions were adequately addressed, though I still have some minor issues which I believe should be addressed before submission, and which I detail below:

- The number of survey respondents is inconsistent. The abstract says 2,222, but the paper reports 2,223 in another case. Moreover, smaller numbers are used in the actual statistical analyses. I advise the authors to ensure consistency between these numbers and to make clear why the numbers change throughout the manuscript. Moreover, the abstract should list the actual number of respondents used in the analysis, not the number before exclusions, so as not to overstate the size of the data being analyzed.

- Line 50: The use of “development of technologies” here is awkward, largely because a) technology is not listed in the corresponding reference, b) it is of a different kind to the other items in the list, and c) it is unclear whether “complex nature…” refers to “development of technologies”. Overall, I recommend cutting this phrase.

- Line 51: When mentioning the increasing complexity of science, the authors should cite the canonical paper in this area about the Burden of Knowledge: https://academic.oup.com/restud/article-abstract/76/1/283/1577537

- Line 70: The use of “falsify” is, while accurate, also very strong and brings to mind fraud, which I believe the authors don’t want to insinuate. I suggest replacing it with a word like “distort” or “confound”, which has a similar meaning but a less harsh connotation.

- Line 71: There is a double-period here, which should be fixed

- Line 81: the use of the phrase “and even” in the sentence “…methodology, software, and even writing, reviewing, and editing” is strange and may confuse the reader—I suggest simply cutting this phrase.

- Line 113: It is unclear when reading the list of ICMJE criteria whether an author must meet one of these criteria or all of them. The ICMJE website makes it clear that the author must meet -all- criteria by appending an “AND” at the end of each list item. I suggest the authors do the same, such that the list looks like: “…acquisition of data, or analysis and interpretation of data; and 2) drafting the article…”

- Line 137, in the sentence “First, pressure from co-authors, whose citation and publications counts usually benefit from a lower number of contributors can lead to researchers declining or being declined authorship;…”; while I see how credit itself (something that is abstract, social) is affected by adding co-authors, it is unclear how adding co-authors would affect the citation and publication counts of a paper, which in nearly all cases use “full-counting” to count every contribution equally. I suggest rephrasing this sentence to be more clear, or to provide a reference for how publication/citation counts would be affected.

- Line 165: I appreciate the authors adding in the chaperone effect, but I think the current description is a little misleading. The Chaperone Effect is not really about reputational bias in peer review at elite journals, as the authors claim here. Rather, it refers to a kind of mentorship, where a senior author guides their student in how to write for a specific, elite journal. If the authors would like a reference about the bias towards famous authors, I recommend this paper by Tomkins, Zhang, & Heavlin (2017): https://www.pnas.org/content/114/48/12708

- Line 203: I appreciate that the authors added a mention of the Matilda Effect, as I suggested, but I think that the description provided misses a key nuance—it is not only that women receive less credit for their work, but that the credit tends to go to male colleagues involved in the study. This can be potentially damaging in the case of honorary authorship, in which adding honorary senior (often male) colleagues can “steal credit” away from women in the team.

- Line 581: The authors should be more specific with “Anglophone”, “Central European”, and “Asian” scholars. Specifically, there are likely many Anglophone scholars in Central Europe, and Asian scholars in North America and Europe. As such, referring to these groups by their ethnicity can be misleading, as a Korean scholar residing at a Canadian university is likely to face very different authorship pressures than if they were in their home country. My suggestion is to rephrase to say something like “scholars residing in Anglophone, Central European, and Asian countries…”.

- Regression tables: The authors should mention the reference level for each categorical variable, either included directly in the table (as a named row with no coefficients) or in the table caption. For example, it is clear that “Male” is the reference level for the gender variable, but it is not clear what the reference level for geography is, or job title.

- Fig 1: The decimal points for percentages on the Y-axis can be removed to make the graph cleaner

- Fig 1: An optional suggestion—making the styles consistent between figures 1 and 2 could really make them more aesthetically pleasing and make the final manuscript look more professional.

- Fig 2: the text is too small and difficult to read; the font size should be increased as much as possible

- Fig 2: the graph would be easier to read if there were borders around the bars that clearly delineated them from each other.

- Fig 2: This is only a suggestion, but this plot could be more readable and impactful if the authors find a way to combine A & B into a single panel, along with C & D into a second panel. For example, the bars for the identified share of ghost authors could be shown, with a single point, or a line, to mark the perceived share. There are several ways this could be done; the point is simply that comparing histograms side-by-side is more difficult than comparing within a single panel.

- Supplemental texts: It would be preferable to upload the supporting information as .pdf rather than .docx files. PDFs are more universal, so using them ensures that the files are more accessible to whoever needs them.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 2

Alberto Baccini

7 Apr 2022

And the credit goes to … - Ghost and honorary authorship among social scientists

PONE-D-20-30365R2

Dear Dr. Pruschak,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Alberto Baccini, Ph.D.

Academic Editor

PLOS ONE


Acceptance letter

Alberto Baccini

20 Apr 2022

PONE-D-20-30365R2

And the credit goes to … - Ghost and honorary authorship among social scientists

Dear Dr. Pruschak:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Prof. Alberto Baccini

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Descriptive statistics for dichotomous variables.

    (PDF)

    S2 Table. Descriptive statistics for integer variables.

    (PDF)

    S3 Table. Pairwise correlation coefficients for all employed variables.

    (PDF)

    S4 Table. Regression results of actual and perceived prevalence of ghost and honorary authorship including confidence intervals.

    (PDF)

    S5 Table. Regression results of hypothetical authorship assignments in the vignettes including confidence intervals.

    (PDF)

    S6 Table. Regression results of hypothetical authorship assignments in the vignettes.

    (PDF)

    S7 Table. Comparison of samples for dummy variables.

    (PDF)

    S8 Table. Comparison of samples for integer variables.

    (PDF)

    S9 Table. List of journals and societies included in the data collection.

    (PDF)

    S1 File. Do-file with Stata code for generating all figures and tables.

    (DO)

    S1 Text. The text of the vignettes employed in the study.

    (PDF)

    Attachment

    Submitted filename: PONE- PR OCT 2020.docx

    Attachment

    Submitted filename: ResponseToReviewer2.docx

    Attachment

    Submitted filename: Response to Reviewer 3.pdf

    Data Availability Statement

    The data cannot be made publicly available because they contain sensitive information on respondents’ authorship malpractices in previous publications. While the data are anonymized, certain combinations of demographics might still reveal survey participants’ identities. This could compromise survey participants and violate respondents’ privacy. Moreover, the GDPR prohibits the public sharing of any data that might lead to the identification of individuals. When providing their informed consent to participate in the study, participants were assured that their privacy would be protected. They did not provide consent for their data to be shared in a repository. Data requests must be addressed to the Data Protection Officer at Bern University of Applied Sciences, who will provide access after the signing of a non-disclosure agreement: datenschutz@bfh.ch.

