Abstract
Identifying reviewers is argued to improve the quality and fairness of peer review, but is generally disfavoured by reviewers. To gain some insight into the factors that influence when reviewers are willing to have their identity revealed, I examined which reviewers voluntarily reveal their identities to authors at the journal Functional Ecology, at which reviewer identities are confidential unless reviewers sign their comments to authors. I found that 5.6% of reviewers signed their comments to authors. This proportion increased slightly over time, from 4.4% in 2003–2005 to 6.7% in 2013–2015. Male reviewers were 1.8 times more likely to sign their comments to authors than were female reviewers, and this difference persisted over time. Few reviewers signed all of their reviews; reviewers were more likely to sign their reviews when their rating of the manuscript was more positive, and papers that had at least one signed review were more likely to be invited for revision. Signed reviews were, on average, longer and recommended more references to authors. My analyses cannot distinguish cause and effect for the patterns observed, but my results suggest that ‘open-identities’ review, in which reviewers are not permitted to be anonymous, will probably reduce the degree to which reviewers are critical in their assessment of manuscripts and will differentially affect recruitment of male and female reviewers, negatively affecting the diversity of reviewers recruited by journals.
Keywords: blind review, gender, open-identities review, open review, peer review, scholarly publishing
1. Introduction
The rise of the open science movement over the past decade has been accompanied by calls for greater transparency in peer review [1], including making peer-review reports public (open reports) and identifying reviewers (unblinded review, or ‘open-identities’ review) [2,3]. Identifying reviewers, either privately (just to authors) or publicly alongside their published review, is argued to have many benefits, including increasing accountability of reviewers, which should in turn increase the quality, fairness, constructiveness and courteousness of reviews ([4,5]; reviews in [6–9]). Identification of reviewers also allows them to receive credit for their efforts [6]. Despite a variety of perspectives on the likely benefits and costs of open peer review, there are only a few randomized trials evaluating the effect of identifying reviewers on the content of peer reviews and the recommendations made by reviewers [4,10,11], including a series of related studies by BMJ [12,13]. Unfortunately, these studies reach different conclusions about the consequences of anonymous review for review quality and reviewer recommendations.
Though identifying reviewers will lead to greater transparency and might improve the fairness and quality of peer review, reviewers commonly, and often overwhelmingly, prefer that they be allowed to remain anonymous [7,14–21]. In surveys, scholars report that they are less likely to agree to review [5,22], and less likely to be candid and honest in their evaluation [16,23], if they are to be identified to authors, probably for fear of retaliation by authors [15]. These concerns may be more substantial for scholars in more vulnerable positions in academia, such as junior researchers [5,6,8,15] (but see [23]) or researchers from lower prestige institutions and/or developing countries. In particular, women are less likely than men to believe that the peer-review process is fair [18], and thus we expect women to be more likely to prefer anonymity at all stages of the review process (as authors and as reviewers). Unfortunately, surveys of reviewer attitudes towards being identified during open peer review (citations above) have been small and/or have not explored the biographical features of reviewers (such as age, gender and ethnicity) that influence their willingness to have their identity revealed on their reviews.
The goal of this study is to gain some insight into the factors that influence when reviewers are willing to have their identity revealed by examining which reviewers voluntarily reveal their identity to authors when given a choice to remain anonymous. I take advantage of a large dataset on peer review at an ecology journal, Functional Ecology [24,25]. Functional Ecology has historically used single-blind peer review (until 2019 [26]), in which reviewers know the identity of authors but not vice versa. However, reviewers were allowed to voluntarily identify themselves by signing the reports that are sent to the authors. I examine which reviewers voluntarily sign their comments to authors and examine the features of manuscripts that covary with whether reviewers identify themselves to authors. Specifically, I ask (i) how often reviewers sign their reports, (ii) whether this frequency has changed over time (comparing 2003–2005 with 2013–2015), (iii) whether men are more likely than women to sign their reviews, (iv) whether the observed gender difference varies with geographic location of the reviewers or authors, (v) whether reviewers suggested by the authors are more likely to sign their reports, (vi) whether signed reports are more likely to be those that make positive recommendations, (vii) whether papers with signed reviews are more likely to be invited for revision, (viii) whether signed reviews differ in content (word count and/or a number of suggested references) from unsigned reviews, and (ix) how often reviewers that sign their reviews are acknowledged by authors in final versions of papers.
2. Methods
(a) About the journal
Functional Ecology is an international journal published by Wiley on behalf of the British Ecological Society (BES). The journal used traditional single-blind peer review, in which authors are identified to reviewers but not vice versa, during the period covered by this study. When submitting reviews to Functional Ecology, reviewers are asked to provide ‘Confidential Comments to the Editor’, which are not to be shared with the authors, and ‘Comments to the Corresponding Author’, which are sent to the authors with the editorial decision. Journal policy, which is stated on the journal website and in guidelines provided to reviewers, is that reviewer names ‘are not revealed to authors unless they choose to sign their review’. Also, the box in which reviewers submitted their comments to the author has a banner at the top that says, ‘Your review will be anonymous unless you identify yourself’.
(b) The dataset
Functional Ecology uses ScholarOne Manuscripts (previously Manuscript Central) to manage manuscript submissions and peer review. In this study, I examine the peer review of ‘standard’ papers submitted to Functional Ecology in an early period (2003–2005) and a later period (2013–2015). ‘Standard’ papers are typical research studies (empirical or theoretical); they exclude reviews, commentaries, perspectives, editorials and other non-research manuscripts. I only examined papers sent for peer review and only the reviews of papers during their initial submission to the journal. Papers sent for re-review following revision are not included in this analysis.
(c) Identifying signed reviews
In addition to extracting the reviewer database from ScholarOne Manuscripts, I downloaded the reviewer comments for every standard paper submitted during the study time frame. I considered a review to be signed only if the reviewer identified themselves by name in their ‘Comments to the Corresponding Author’, usually with a signature line at the end of the review (most common), but occasionally in a header line at the top of the review or after introductory comments that preceded a more detailed review. Reviews that referenced the reviewer's own research (e.g. suggesting citations) were not considered signed unless the reviewer specifically stated their name as the person writing the review. Also, reviewers commonly signed the confidential comments to the editor, but this was not considered signing their review because reviewers were informed that these comments would not be provided to authors. Some reviewers unintentionally revealed their identity through metadata fields in uploaded documents; I evaluate the frequency of this in the electronic supplementary material.
This dataset includes data on authors of submitted papers, and every individual that reviewed for the journal, over the time frame of the study. The dataset includes author and reviewer given and family names, residence region, review score and reviewer comments. I also know which invited reviewers were suggested by the authors [27]. Out of 4415 reviews of 2201 papers, I could identify whether a review was signed for 4381 of them. The remaining 34 reviews either lacked comments for the author or those comments could not be extracted from the journal database.
(d) Author and reviewer details
I assigned gender to all authors and reviewers in the dataset using an online database of given names (genderize.io) that includes greater than 200 000 unique names (additional details in [28,29]). I was able to assign a gender to all but 73 of 4381 reviewers and approximately 94% of authors [28].
Authors and reviewers self-identified their home country; I used the most recent location of the author or reviewer according to their last database entry. I used the United Nations' M.49 area codes and their continental regions defined by the United Nations’ Statistical Commission (unstats.un.org) to categorize reviewer localities, but with three exceptions: (i) I split Europe into the United Kingdom and ‘other Europe’, because Functional Ecology is owned by a British society (the BES) and a large proportion of manuscript authors and reviewers are British (resulting in a much greater representation of reviewers from the United Kingdom than expected from the distribution of ecologists in Europe); (ii) I split the Americas into Latin America (which includes North America other than the United States and Canada) and North America (the United States and Canada); and (iii) the dataset has few entries from countries in Oceania other than Australia and New Zealand, so I consider those two countries as a single category, ‘Australia and New Zealand’.
The journal does not collect data on the age or professional seniority of reviewers. However, following the advice of one of the peer reviewers, I examined reviewer salutations as a proxy for professional seniority. Reviewers and authors are asked, when they first create an account on ScholarOne Manuscripts, to identify their preferred salutation. At Functional Ecology, the two most common salutations chosen by our reviewers are ‘Dr’ (71% of all reviewers) or ‘Professor’ (14%). Only 4.3% chose one of the remaining options (Miss, Ms, Mrs or Mr) and the rest (11%) either chose ‘None’ or did not select a salutation. Though the information conveyed by an individual's choice of salutation varies among countries and cultures, it is commonly the case that ‘Professor’ is a more senior salutation than is ‘Dr’; I thus examine whether reviewers that self-identify as ‘Professor’ are more likely to sign their reviews compared to reviewers that self-identify as ‘Dr’. These salutations were extracted from the ScholarOne Manuscripts database in September 2021, and reflect the salutation used by the reviewer in 2021 (rather than when they performed the review). Nonetheless, those who identify as ‘Professor’ should generally be more senior than those who use other salutations. Because only a very small proportion of reviewers chose Miss, Ms, Mrs or Mr as their salutation, I do not consider these categories in analyses presented below.
(e) Review content
To test whether signed reviews were generally longer than unsigned reviews, I counted the number of words in reviews for the subset of papers for which at least one reviewer but not all reviewers signed their review (to do a paired analysis) (n = 224 papers). Word counts included only the main body of the review, excluding suggested references and any header or footer text (e.g. the paper name, paper number and reviewer signature). I also counted the number of references suggested by each reviewer for this subset of papers.
(f) Analyses
Most of the analyses here examine whether a review is signed, which is a binary response [yes/no], and thus analysed using logistic regression of the form Signed = IndependentVariables using SAS version 9.4, Proc Logistic [30]. RevisionInvited [yes/no] was also analysed using logistic regression. Analysis of ReviewScore (ratings given to papers by reviewers) was done with SAS Proc Genmod. Specifics of the models are described as they are presented in the Results. The uses of additional analyses, such as McNemar's test, Wilcoxon signed-rank tests and sign tests, are indicated where presented in the Results.
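The core binary-response model can be sketched outside SAS as well. The following Python snippet is a minimal, illustrative stand-in for Proc Logistic (a plain gradient-ascent fit of the log-likelihood); the mini-dataset, dummy coding and step counts are all hypothetical, not the journal's data:

```python
import math

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Fit P(signed = 1) = sigmoid(b0 + b1*x1 + b2*x2) by gradient ascent
    on the log-likelihood; a toy stand-in for SAS Proc Logistic."""
    beta = [0.0] * (len(X[0]) + 1)
    for _ in range(steps):
        grad = [0.0] * len(beta)
        for row, target in zip(X, y):
            z = beta[0] + sum(b * v for b, v in zip(beta[1:], row))
            p = 1.0 / (1.0 + math.exp(-z))      # predicted Pr(signed)
            grad[0] += target - p
            for j, v in enumerate(row):
                grad[j + 1] += (target - p) * v
        beta = [b + lr * g / len(y) for b, g in zip(beta, grad)]
    return beta

# Hypothetical mini-dataset: predictor columns are 0/1 dummies for
# [late period (2013-2015), male reviewer]; y is whether the review was signed.
X = [[0, 0]] * 4 + [[0, 1]] * 4 + [[1, 0]] * 4 + [[1, 1]] * 4
y = [0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0]
beta = fit_logistic(X, y)
print([round(b, 2) for b in beta])  # intercept, period effect, gender effect (log-odds)
```

With this coding, exp(beta[2]) is the fitted odds ratio for male versus female reviewers in the toy data, analogous to the odds ratios reported in the Results.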
3. Results
Averaged across years, 5.6% of reviewers signed their comments to the authors. The proportion of reviewers that signed their review varied across years (figure 1) (logistic regression, Signed [yes/no] = Year + ReviewerGender [female/male]; Year: p = 0.002); reviewers were 1.21 times more likely to sign their reviews in the later period of the dataset (2013–2015; average across years = 6.7%) than in the earlier period of the dataset (2003–2005; average across years = 4.4%; contrast between early and late periods, p = 0.002).
Figure 1.

Men are more likely than women to sign their comments to the authors at the journal Functional Ecology. (Online version in colour.)
(a) Gender and geographic variation in who signs their reviews
Men were 2.2 times more likely than women to sign their review—6.3% of men, but only 2.9% of women, signed their comments to the authors, averaged across all years (figure 1) (logistic regression, Signed [yes/no] = Year + ReviewerGender [female/male]; ReviewerGender: p < 0.001). Though both men and women were more likely to sign their reviews in the later period of the study, the difference between men and women in frequency of signing was fairly consistent across years (no statistically significant Year × ReviewerGender interaction; p = 0.43) (figure 1). For this first set of analyses, I treated individual reviews as independent data points. However, a few reviewers submitted reviews for more than one paper, which leads to some double counting in these estimates. If I instead examine the proportion of reviewers that signed at least one of their reviews, the gender difference is similar; men were 1.8 times as likely as women to sign at least one of their reviews (7.2% of men versus 4.0% of women signed at least one review, averaged across all years) (logistic regression, SignedAtLeastOneReview [yes/no] = Year + ReviewerGender [female/male]; ReviewerGender: p = 0.001).
In the complete dataset, the proportion of reviewers that signed their review did not vary significantly with the geographic location of the reviewer (figure 2) (logistic regression; Signed [yes/no] = Year + ReviewerGender + ReviewerLocation; ReviewerLocation: p = 0.13). However, the sample sizes for reviewers from developing regions of the world are fairly small (table 1). Excluding regions with fewer than 300 reviewers, and thus limiting the analysis to just reviewers located in North America, Europe, the United Kingdom, and Australia/New Zealand, there is significant geographic variation in the proportion of reviewers that sign their comments to the authors (ReviewerLocation: p = 0.04); of these four major regions, reviewers from North America (the United States and Canada) were the least likely (4.8%) to sign their reviews, and reviewers from the United Kingdom were the most likely (7.5%). There was no evidence that the gender difference in the proportion of reviewers signing varied among these four major reviewer locations (figure 2; Gender × ReviewerLocation interaction; p = 0.51), nor that variation among geographic regions differed between years (Year × ReviewerLocation interaction; p = 0.23). (Note that the apparently very high proportion of women from Latin America that signed their reviews, the rightmost closed point of figure 2, is based on a very small sample of n = 29 female reviewers.)
Figure 2.

Variation among geographic regions in the proportion of peer reviewers that sign their comments to authors. Sample sizes are low for Asia, Africa and Latin America (table 1), and thus the proportions should be interpreted with caution for those regions. (Online version in colour.)
Table 1.
The number of papers reviewed by men, women and people not assigned a gender.
| reviewer location | male | female | unknown | total |
|---|---|---|---|---|
| North America | 1398 | 467 | 16 | 1881 |
| Europe | 833 | 264 | 14 | 1111 |
| United Kingdom | 454 | 122 | 7 | 583 |
| Australia and New Zealand | 286 | 94 | 3 | 383 |
| Asia | 109 | 20 | 4 | 133 |
| Latin America | 88 | 29 | 1 | 118 |
| Africaa | 48 | 24 | 1 | 73 |
| unknown locationb | 56 | 16 | 27 | 99 |
aThese reviewers are mostly (71 of 73) from South Africa.
bIncludes one reviewer from a country in Oceania that was not classified into one of the other geographic regions.
Author gender did not predict whether a review was signed, regardless of whether I considered the gender of the first, last or corresponding author (logistic regression, Signed [yes/no] = Year + AuthorGender [f/m] + ReviewerGender [f/m] + AuthorGender × ReviewerGender interaction; AuthorGender: p > 0.74 for all author positions), and there was no evidence of an interaction between author and reviewer gender (p > 0.79). There was also no evidence that the geographic location of the author predicted whether a reviewer signed their comments to the authors (p > 0.91 for all author positions) and no evidence of an interaction between author and reviewer location (p > 0.99). I also asked the more specific question of whether reviewers are more likely to sign their review when the author is the same gender or in the same geographic location as the reviewer, but found no evidence for either type of homophily (p > 0.10 for both gender and location across all author positions).
Reviewers that self-identified as ‘Professor’ were 1.6 times more likely to sign their reviews than those that self-identified as ‘Dr’ (7.8% versus 5.0%; logistic regression, Signed [yes/no] = Year + Salutation [Prof/Dr]; Salutation: p = 0.006). The same difference appears when considering only the subset of papers for which one reviewer signed their review and another did not, and for which one reviewer self-identified as ‘Professor’ and the other as ‘Dr’ (n = 47 papers)—the person who signed their review was more often the reviewer who self-identified as ‘Professor’ (31 of these 47 papers; sign test, p = 0.04). Because reviewers that self-identify as ‘Professor’ are likely to be more senior than those that self-identify as ‘Dr’, these results suggest that more senior scientists are more likely to sign their reviews. Also, because male reviewers are likely to be older, on average, than female reviewers [31], it is possible that salutation (if it covaries with reviewer age) could explain much of the gender difference described above. However, when salutation (Professor versus Dr) was included in the statistical model testing for differences between male and female reviewers, the large difference between men and women in the proportion of reviewers signing their reviews remained highly significant (logistic regression, Signed [yes/no] = Year + ReviewerGender [female/male] + Salutation [Prof/Dr]; ReviewerGender: p < 0.001).
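The sign test used here reduces to a binomial test on the 47 paired papers. A minimal exact version, assuming a fair-coin null (each pair equally likely to go either way), reproduces the reported p-value from the counts alone:

```python
import math

def sign_test_p(successes, n):
    """Two-sided exact sign test: probability of a split at least this
    lopsided among n pairs under a fair-coin null."""
    k = max(successes, n - successes)
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)

# In 31 of 47 paired papers, the signer was the 'Professor'-salutation reviewer.
print(round(sign_test_p(31, 47), 3))  # ≈ 0.04, as reported
```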
(b) Author-suggested reviewers
Functional Ecology allowed (until mid-2010) or required (from mid-2010 onward) authors to suggest potential reviewers for their manuscript, and these author-suggested reviewers are often invited to review by editors [27]. The dataset includes the identity of all author-suggested reviewers from 2004 to the middle of 2015, so I can ask whether author-suggested reviewers are more likely to sign their reviews. Because authors vary substantially in whether (pre-2010) and how many reviewers they suggest, and editors vary in whether and how many of these suggestions they invite to review, I constrained the dataset to include only papers for which at least one of the reviewers was author-suggested and the other was not, and for which at least one author signed their review and one did not (allowing for a paired analysis controlling for manuscript quality; n = 47 papers). Though the sample size is small, the pattern is very clear: author-suggested reviewers signed their review 72% of the time whereas the reviewers not suggested by the author signed just 28% of the time (McNemar's test, S = 9.38, p = 0.002) (these percentages add to 100% because I excluded papers for which no reviewer or all reviewers signed their comments to authors).
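McNemar's test for this paired design uses only the discordant pairs (papers where exactly one of the two reviewers signed). The reported S = 9.38 can be recovered from counts consistent with the percentages given; the split of roughly 34 versus 13 papers below is my reconstruction from the 72%/28% figures, not taken directly from the dataset:

```python
import math

def mcnemar_s(b, c):
    """McNemar's chi-square statistic from the two discordant counts."""
    return (b - c) ** 2 / (b + c)

def mcnemar_exact_p(b, c):
    """Two-sided exact version: binomial test of min(b, c) successes
    in b + c trials at p = 0.5."""
    n = b + c
    tail = sum(math.comb(n, i) for i in range(0, min(b, c) + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)

# ~34 papers where only the author-suggested reviewer signed,
# ~13 where only the non-suggested reviewer did (72% vs 28% of 47).
print(round(mcnemar_s(34, 13), 2))  # → 9.38, matching the reported S
```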
(c) Review scores differ between signed and unsigned reviews
On average, reviewers that signed their review gave papers a better score; i.e. they rated the paper as either more significant or higher quality (Functional Ecology's peer-review score does not distinguish the specific reason for a rating) (analysis of variance, ReviewScore = Year + Signed [yes/no]; F1,4298 = 66.7, p < 0.001). The difference in mean review score between signed and unsigned reviews was quite substantial, nearly half a point (2.72 ± s.e.m. 0.06 versus 2.24 ± 0.01; higher is better; range 1–4). One possible explanation is that reviewers are more likely to sign reviews for better papers. However, limiting the dataset to just papers for which one reviewer signed their review and the other did not (a paired analysis; n = 224 papers), the reviewer that signed their review gave the paper, on average, a higher rating than did the reviewer that did not sign (mean rating = 2.71 versus 2.32; Wilcoxon signed-rank test, W = 2349, p < 0.001).
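These paired comparisons use the Wilcoxon signed-rank statistic (W: the rank-sum of positive paired differences, after dropping zero differences and averaging ranks across ties). A self-contained sketch, run on hypothetical ratings rather than the journal's data:

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic for paired samples x, y:
    sum of the ranks (of |difference|) belonging to positive differences.
    Zero differences are dropped; tied |differences| get average ranks."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return sum(r for r, d in zip(ranks, diffs) if d > 0)

# Hypothetical paired review scores (signed vs unsigned) on the 1-4 scale.
signed_scores = [3, 4, 2, 3, 4]
unsigned_scores = [2, 3, 2, 4, 2]
print(wilcoxon_w(signed_scores, unsigned_scores))  # → 8.0
```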
Most reviewers that sign their reviews sign only some of them. For example, of those reviewers that submitted exactly two reviews to the journal during the time period covered by the dataset (n = 508), 47 (9.3%) signed at least one review but only 14 (2.8%) signed both of their reviews. Of reviewers that submitted at least three reviews (n = 209), 30 (14.3%) signed at least one review, but only 11 (5.3%) signed at least two of their reviews, and no one signed all of their reviews. Of reviewers that reviewed more than once and signed at least one review but not all of their reviews (n = 59), reviewers tended to sign reviews for which they gave higher review scores (Wilcoxon signed-rank test; W = 242.5, p < 0.001); 53% of reviewers gave higher scores to the reviews they signed (averaged across all signed versus unsigned reviews) than to the reviews they did not sign, whereas only 19% gave higher scores to unsigned reviews, with the remaining 29% giving equivalent scores to signed and unsigned reviews.
(d) Reviewer signatures predict editorial decisions
Authors were much more likely (1.65 times more likely) to be invited to submit a revision for their paper if at least one reviewer signed their review; of papers that were sent for review, 45.4% of papers (averaged across years) that had at least one signed review were invited for revision, whereas only 27.5% of papers without a signed review were invited for revision (logistic regression; RevisionInvited [yes/no] = Year + AtLeastOneSignedReview [yes/no]; AtLeastOneSignedReview: p < 0.001). Most of this relationship arises because review scores given to papers are higher when reviews are signed. However, whether a review is signed continued to predict editorial decisions even after accounting for the average score given to the paper by reviewers (RevisionInvited [yes/no] = Year + AtLeastOneSignedReview [yes/no] + ReviewScore; ReviewScore: p < 0.001; AtLeastOneSignedReview: p = 0.02).
(e) Content of signed versus unsigned reviews
For papers that had at least one signed and one unsigned review, the signed review was more often the longer review (median 719 words for signed reviews versus 616 for unsigned reviews; Wilcoxon signed-rank test, W = 3135, p < 0.001). Sixty-seven per cent of signed reviews recommended specific references that authors should consider, compared to only 50% of unsigned reviews (W = 1072, p < 0.001).
(f) Reviewers that sign their comments are commonly acknowledged by authors
The journal Functional Ecology acknowledges reviewer contributions by publishing an annual list of all individuals who reviewed a paper for the journal. This list does not identify which papers each individual reviewed, and, given the large number of reviewers, it should be difficult for authors or readers to associate reviewer names with individual manuscripts (e.g. the 2018 list includes 1006 names). However, authors commonly identify, in their acknowledgements, the reviewers that sign their reviews. This is allowed by the journal, and neither encouraged nor discouraged. Fifty-three per cent of reviewers that signed their review for a paper eventually published by the journal (n = 126 reviewers) were identified by the authors in the acknowledgements section of their paper.
4. Discussion
To gain some insight into the factors that influence when reviewers are willing to have their identity revealed to authors, I examined which reviewers voluntarily sign their review reports when given a choice to remain anonymous. Only about 6% of reviewers voluntarily sign their comments to authors at the journal Functional Ecology. This proportion increased slightly over time, being approximately 1.2 times higher in 2013–2015 compared to ten years earlier, 2003–2005. Men were approximately 1.8 times more likely to sign their comments to authors than were women, and reviewers that use the salutation ‘Professor’ were approximately 1.6 times more likely to sign their reviews than were those that used ‘Dr’. Of those reviewers that signed reviews, most signed only some of their reviews. On average, reviewers were more likely to sign their comments to authors when their rating of the manuscript was more positive, and were about three times more likely to sign when they were suggested as potential reviewers by the manuscript's authors. Papers that had at least one signed review were, on average, more likely to be invited for revision. Signed reviews were on average longer, and recommended more new references to authors, compared to unsigned reviews.
(a) Few reviewers voluntarily sign their reviews
When asked in a cross-disciplinary survey whether identifying reviewers to authors would improve peer review, 44% of respondents agreed that making a reviewer's identity open would increase peer-review quality, and 44% agreed it is more fair to authors (e.g. [7]). However, in surveys of community attitudes towards open peer review, a minority of respondents, ranging from about 13% [17] to 25% [19], indicate a preference for open peer review and, in a randomized trial, reviewers randomly assigned to the treatment identifying them to authors were more likely to decline to review compared to reviewers assigned to the anonymous treatment [13]. These survey results suggest that while a large proportion of scholars agree that open review improves review quality, they also value their anonymity during peer review, and thus only a minority actually prefer open review.
My results, and those of other published studies, are consistent with reviewers generally preferring to be anonymous. Fewer than 6% of reviewers for Functional Ecology, averaged across years, voluntarily signed their comments to authors. This proportion varied among years, and increased between the early period (2003–2005) and later period (2013–2015), but even in the year in which the highest proportion of reviewers signed their comments (2014), only 7.6% of review reports were signed. This is only slightly lower than the proportion of reviewers that agreed to have their names published alongside their review during an open peer-review trial at five unnamed Elsevier journals (8.1% [32]), but substantially lower than the proportion of reviewers that signed their published reviews at PeerJ (42% [33]) and in a small randomized trial at the Journal of General Internal Medicine (JGIM) (43% [4]). The large difference between Functional Ecology and both PeerJ and JGIM likely stems from the degree of encouragement to self-identify provided by the journal. PeerJ specifically requests that reviewers sign their review reports (though reviewers can decline), whereas Functional Ecology allows but neither encourages nor discourages reviewers from signing their reviews. Similarly, in the randomized trial at JGIM, reviewers were encouraged to sign their review [4], and yet only 43% did so. That reviewers rarely volunteer to sign their reviews, and that a majority decline to do so even when encouraged, suggests that many prospective reviewers may decline to review if required to sign their reviews (e.g. if a journal adopts a fully open-identities review model).
(b) Women sign their reviews less often than do men
The most striking result I found is that female reviewers signed their comments to authors about half as often as did male reviewers at the journal Functional Ecology, and that this difference is observed across the major geographic regions of the world. The proportion of both male and female reviewers that signed their comments increased between the early and late periods of my study (2003–2005 versus 2013–2015), but the gender difference remained fairly constant between periods. I know of only two other studies that presented data on gender differences in the frequency at which male and female reviewers sign their reviews. At the European Scientific Journal, when reviewers were encouraged to reveal their identity as part of an open review process, female reviewers did so 56% as often as male reviewers [34]. The other study [32,35] tested whether women were less likely to review for one of five Elsevier journals if their review reports were to be published online (and found no difference in willingness to review between men and women), but did not test whether women were less likely to agree to reveal their identity on their published reviews (reviewers were given the option of being anonymous on their published review). However, their dataset is available as electronic supplementary material for their manuscript. In an analysis similar to that described above for Functional Ecology (ReviewSigned = Journal [random effect] + Year [covariate] + ReviewerGender [male/female]), male reviewers were 1.9 times more likely than female reviewers to reveal their identity on their published review reports (ReviewerGender effect: p < 0.001; 1.9 is the ratio of the least-squares means from the logistic regression model; this analysis includes only the time periods for which the journals were ‘open review’ and excludes reviewers of unassigned gender).
That this gender difference is similar to that observed for Functional Ecology suggests it generalizes across journals and peer-review models (private peer review at Functional Ecology and open review at these five Elsevier journals).
The current study, and those of Bolek et al. [34] and Bravo et al. [32], find that women are less likely than men to sign their reviews when given the option to be anonymous, but none of these studies tests why. One possibility is that women are generally more risk averse than men [36], and thus more likely to avoid the risk of damage to their reputation (if their criticisms are perceived as invalid, unfair or inappropriately motivated), damage to future professional relationships, or retaliation [37]. Alternatively, the consequences of writing a critical review may be (or be perceived to be) different for women and men [37]. Critical comments from women are often viewed more negatively than those from men; for example, women who criticize a paper may be considered less competent, or more aggressive and competitive, than men who provide similar evaluations [38,39]. Women may thus be at greater risk of reputational harm or retaliation from signing their reviews [39].
Regardless of the reason why women sign their reviews less often than do men, the current results suggest that switching from single-blind review (in which reviewers are anonymous) to open-identities review (in which reviewer identities are disclosed to authors or made public alongside the review report) may have different effects on male and female reviewers. In particular, women might be less likely to review, relative to men, if their identity is to be revealed. Women are already underrepresented in many aspects of the scholarly publishing process [40,41]. Requiring reviewer identities to be disclosed may have negative effects on reviewer diversity, possibly counteracting journal efforts to increase the diversity of their reviewer populations.
(c) Reviewers sign their positive reviews more often than their negative reviews
Survey data show that a large majority of researchers believe that requiring reviewers to sign their reviews will lead to peer reviewers being less critical [7]. In the current analysis of peer reviews submitted to Functional Ecology, reviewers gave papers, on average, higher ratings when they signed their reviews, and papers with signed reviews were more likely to be accepted for publication. This difference in peer-review scores between signed and unsigned reviews was observed both when comparing signed versus unsigned reviews of individual manuscripts and when comparing different reviews (of different papers) submitted by individual reviewers (when reviewers signed at least one, but not all, of their reviews). Similarly, both Bravo et al. [32] and Bolek et al. [34] found that, when asked to sign reviews to be posted publicly alongside the manuscript being reviewed, those reviewers who agreed gave substantially more positive recommendations. At least two randomized trials have observed this same relationship: reviewers who were required to sign their reviews were more likely to recommend publication [4,11], though this was not observed in two other randomized trials [12,13] or in a comparison between open-review and single-blind journals [42]. My analyses cannot distinguish the direction of cause and effect in these relationships: whether reviewers are more likely to sign positive reviews, or are more likely to give higher scores to reviews that they sign, though the results of McNutt et al. [4] and Walsh et al. [11] suggest the latter accounts for at least some of the relationship. At Functional Ecology, papers with signed reviews were more likely to receive positive editorial decisions (to be invited for revision), probably because of their higher review scores (the effect remains statistically significant, though much smaller in magnitude, after controlling for review scores).
Regardless of the direction of cause and effect between the signing of reviews and the positivity of the reviewer reports, the observation that review scores are higher for signed reports suggests that reviewers are cognizant of the potential consequences of signing critical reviews and adjust their reviewing behaviour in response, either choosing not to sign their more critical reviews or choosing not to be critical in reviews they will sign.
(d) Author-suggested reviewers commonly sign their reviews
Functional Ecology allows authors to suggest prospective reviewers for their paper. Editors commonly invite one or more of these author-suggested reviewers, though it is rare for a paper to be reviewed only by author-suggested reviewers [27]. Reviewers suggested by the authors also tend to give papers much higher review scores, and the likelihood that a paper is eventually accepted for publication increases with the proportion of all reviews that come from author-suggested reviewers [27]. In the current study, reviewers suggested by authors were three times more likely to sign their reviews than reviewers not suggested by the authors. As above, the direction of cause and effect cannot be tested with the current dataset, but this observation strongly suggests that reviewers are more likely to sign their reviews when they do research closely related to that of the authors (close enough to be suggested as a reviewer), or when they have a professional relationship with the author (and are thus suggested as a likely friendly reviewer).
(e) Signed reviews are longer and recommend more references to authors
Multiple studies have evaluated whether revealing reviewer identities to authors improves review quality and/or tone. Most of these have shown no differences between signed and unsigned reviews [4,12,13,43,44], though a few have shown that signed reviews were of higher quality and/or more courteous than unsigned reviews [11,42] (review in [45]). In the current study of Functional Ecology reviews, signed reviews were generally longer (more words) and recommended more references to the authors. Reviews submitted to PeerJ also had higher word counts when signed than when unsigned [33], though the effect sizes in both Functional Ecology and PeerJ were small. I did not examine here whether reviewers who recommended their own papers, rather than papers of other researchers, were more likely to sign their reviews, but that question was examined by Levis et al. [46], who found no evidence that reviewers were more or less likely to recommend their own papers when their identities were known to authors.
5. Conclusion
Open-identities peer review is not currently widespread [47], but is commonly promoted as a means to increase transparency and accountability in the peer-review process. This study attempts to gain insights into the consequences of de-anonymizing peer reviewers by comparing reviewers (and their reviews) who voluntarily reveal their identity to authors with those who do not. Most reviewers choose to retain their anonymity even when given the option (this study) or encouragement [32] to sign their reviews, as expected from survey results. Importantly, female reviewers reveal their identities substantially less often than do male reviewers, and reviewers sign their positive reviews more often than their negative reviews. Though the current study is descriptive and unable to disentangle causes for the observed patterns, my results, combined with those of other studies and community surveys, strongly suggest that adopting a fully open-identities peer-review model, in which journals reveal reviewer identities to authors (or publicly), will discourage some scholars, those who are more likely to experience (or expect to experience) discrimination in academia and scholarly publishing, from participating in the review process. In particular, it will discourage women from reviewing more than it will discourage men, countering the goal of many journals to increase the diversity of reviewers. My results, combined with some published studies, also suggest that reviewers will be less critical of papers, and/or they will be less willing to submit reviews when their assessment is negative, if their names are to be revealed to authors, reducing the value of peer review in advising editors on which papers to publish and how those papers should be revised.
Supplementary Material
Acknowledgements
The BES provided permission to access their databases. Jennifer Meyer extracted the reviewer databases for the BES journals. Josiah Ritchey revised the R code (shared with us by C. Sean Burns) for submitting author and reviewer given names to genderize.io. Ruth Bryan, Luke Holman, Allyssa Kilanowski and Mario Malicki provided comments on earlier versions of this paper or these analyses.
Endnote
The high proportion of women from Latin America that signed their reviews reflects only four signed reviews out of 29 total reviews.
Ethics
This work was approved by the University of Kentucky's Institutional Review Board (IRB 15-0890).
Data accessibility
The data necessary to recreate most analyses are available from the Dryad Digital Repository: https://doi.org/10.5061/dryad.8pk0p2np3 [48]. Some data cannot be made available (e.g. geographic location, reviewer identity and manuscript ID number) because they would allow readers to de-anonymize some authors or reviewers.
Competing interests
The author is executive editor of the journal Functional Ecology.
Funding
This work was supported in part by the Kentucky Agricultural Research Station at the University of Kentucky. The BES provided some funding to support this project.
References
- 1. Tennant JP. 2018. The state of the art in peer review. FEMS Microbiol. Lett. 365, fny204. (10.1093/femsle/fny204)
- 2. Moylan E, Junge K, Oman C, Morris E, Graf C. 2020. Transparent peer review at Wiley: two years on, what have we learnt? Authorea Preprints.
- 3. Wolfram D, Wang P, Hembree A, Park H. 2020. Open peer review: promoting transparency in open science. Scientometrics 125, 1033-1051. (10.1007/s11192-020-03488-4)
- 4. McNutt RA, Evans AT, Fletcher RH, Fletcher SW. 1990. The effects of blinding on the quality of peer review: a randomized trial. JAMA 263, 1371-1376. (10.1001/jama.1990.03440100079012)
- 5. Mulligan A, Hall L, Raphael E. 2013. Peer review in a changing world: an international study measuring the attitudes of researchers. J. Am. Soc. Information Sci. Technol. 64, 132-161. (10.1002/asi.22798)
- 6. Schmidt B, Deppe A, Bordier J, Ross-Hellauer T. 2016. Peer review on the move from closed to open. In Positioning and Power in Academic Publishing: Players, Agents and Agendas: Proceedings of the 20th International Conference on Electronic Publishing (eds Loizides F, Schmidt B), pp. 91-98. Amsterdam, The Netherlands: IOS Press.
- 7. Ross-Hellauer T, Deppe A, Schmidt B. 2017. Survey on open peer review: attitudes and experience amongst editors, authors and reviewers. PLoS ONE 12, e0189311. (10.1371/journal.pone.0189311)
- 8. Tennant JP, et al. 2017. A multi-disciplinary perspective on emergent and future innovations in peer review. F1000Research 6, 1151. (10.12688/f1000research.12037.3)
- 9. Ross-Hellauer T, Görögh E. 2019. Guidelines for open peer review implementation. Res. Integrity Peer Rev. 4, 4. (10.1186/s41073-019-0063-9)
- 10. Godlee F, Gale CR, Martyn CN. 1998. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA 280, 237-240. (10.1001/jama.280.3.237)
- 11. Walsh E, Rooney M, Appleby L, Wilkinson G. 2000. Open peer review: a randomised controlled trial. Br. J. Psychiatry 176, 47-51. (10.1192/bjp.176.1.47)
- 12. van Rooyen S, Godlee F, Evans S, Smith R, Black N. 1998. Effect of blinding and unmasking on the quality of peer review: a randomized trial. JAMA 280, 234-237. (10.1001/jama.280.3.234)
- 13. van Rooyen S, Godlee F, Evans S, Black N, Smith R. 1999. Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial. BMJ 318, 23-27. (10.1136/bmj.318.7175.23)
- 14. Melero R, Lopez-Santovena F. 2001. Referees' attitudes toward open peer review and electronic transmission of papers. Food Sci. Technol. Int. 7, 521-527. (10.1106/0MXD-YM6F-3LM6-G9EB)
- 15. Regehr G, Bordage G. 2006. To blind or not to blind? What authors and reviewers prefer. Med. Educ. 40, 832-839. (10.1111/j.1365-2929.2006.02539.x)
- 16. Baggs JG, Broome ME, Dougherty MC, Freda MC, Kearney MH. 2008. Blinding in peer review: the preferences of reviewers for nursing journals. J. Adv. Nurs. 64, 131-138. (10.1111/j.1365-2648.2008.04816.x)
- 17. Ware M. 2008. Peer review in scholarly journals: perspective of the scholarly community—results from an international study. Information Services Use 28, 109-112. (10.3233/ISU-2008-0568)
- 18. Ho RCM, Mak KK, Tao R, Lu Y, Day JR, Pan F. 2013. Views on the peer review system of biomedical journals: an online survey of academics from high-ranking universities. BMC Med. Res. Methodol. 13, 74. (10.1186/1471-2288-13-74)
- 19. Moylan EC, Harold S, O'Neill C, Kowalczuk MK. 2014. Open, single-blind, double-blind: which peer review process do you prefer? BMC Pharmacol. Toxicol. 15, 55-59. (10.1186/2050-6511-15-55)
- 20. Rowley J, Sbaffi L. 2018. Academics' attitudes towards peer review in scholarly journals and the effect of role and discipline. J. Information Sci. 44, 644-657. (10.1177/0165551517740821)
- 21. Besançon L, Rönnberg N, Löwgren J, Tennant JP, Cooper M. 2020. Open up: a survey on open and non-anonymized peer reviewing. Res. Integrity Peer Rev. 5, 1-11. (10.1186/s41073-020-00094-z)
- 22. Publishing Research Consortium. 2016. Publishing research consortium peer review survey 2015. London, UK: Mark Ware Consulting.
- 23. Publons. 2018. Global state of peer review. Clarivate Analytics. See https://publons.com/static/Publons-Global-State-Of-Peer-Review-2018.pdf (accessed 30 Sep 2019).
- 24. Fox CW, Burns CS, Muncy AD, Meyer JA. 2016. Gender differences in patterns of authorship do not affect peer review outcomes at an ecology journal. Funct. Ecol. 30, 126-139. (10.1111/1365-2435.12587)
- 25. Fox CW, Burns CS, Meyer JA. 2016. Editor and reviewer gender influence the peer review process but not peer review outcomes at an ecology journal. Funct. Ecol. 30, 140-153. (10.1111/1365-2435.12529)
- 26. Fox CW, Thompson K, Knapp A, Ferry LA, Rezende EL, Aimé E, Meyer J. 2019. Double-blind peer review—an experiment. Funct. Ecol. 33, 4-6. (10.1111/1365-2435.13269)
- 27. Fox CW, Burns CS, Muncy AD, Meyer JA. 2017. Author-suggested reviewers: gender differences and influences on the peer review process at an ecology journal. Funct. Ecol. 31, 270-280. (10.1111/1365-2435.12665)
- 28. Fox CW, Ritchey JP, Paine CT. 2018. Patterns of authorship in ecology and evolution: first, last, and corresponding authorship vary with gender and geography. Ecol. Evol. 8, 11492-11507. (10.1002/ece3.4584)
- 29. Fox CW, Paine CT. 2019. Gender differences in peer review outcomes and manuscript impact at six journals of ecology and evolution. Ecol. Evol. 9, 3599-3619. (10.1002/ece3.4993)
- 30. SAS Institute Inc. 2016. SAS/SHARE 9.4: user's guide, 2nd edn. Cary, NC: SAS Institute Inc.
- 31. Shaw AK, Stanton DE. 2012. Leaks in the pipeline: separating demographic inertia from ongoing gender differences in academia. Proc. R. Soc. B 279, 3736-3741. (10.1098/rspb.2012.0822)
- 32. Bravo G, Grimaldo F, López-Iñesta E, Mehmani B, Squazzoni F. 2019. The effect of publishing peer review reports on referee behavior in five scholarly journals. Nat. Commun. 10, 322. (10.1038/s41467-018-08250-2)
- 33. Wang P, You S, Manasa R, Wolfram D. 2016. Open peer review in scientific publishing: a Web mining study of PeerJ authors and reviewers. J. Data Information Sci. 1, 60-80.
- 34. Bolek C, Marolov D, Bolek M, Shopovski J. 2020. Revealing reviewers' identities as part of open peer review and analysis of the review reports. Liber Q. 30, 1. (10.18352/lq.10347)
- 35. Mehmani B. 2016. Is open peer review the way forward? See https://www.elsevier.com/connect/reviewers-update/is-open-peer-review-the-way-forward.
- 36. Harris CR, Jenkins M. 2006. Gender differences in risk assessment: why do women take fewer risks than men? Judgment Dec. Making 1, 48-63.
- 37. Wu C, Fuller S, Shi Z, Wilkes R. 2020. The gender gap in commenting: women are less likely than men to comment on (men's) published research. PLoS ONE 15, e0230043.
- 38. Sinclair L, Kunda Z. 2000. Motivated stereotyping of women: she's fine if she praised me but incompetent if she criticized me. Pers. Soc. Psychol. Bull. 26, 1329-1342. (10.1177/0146167200263002)
- 39. Rudman LA, Phelan JE. 2008. Backlash effects for disconfirming gender stereotypes in organizations. Res. Org. Behav. 28, 61-79. (10.1016/j.riob.2008.04.003)
- 40. Helmer M, Schottdorf M, Neef A, Battaglia D. 2017. Gender bias in scholarly peer review. eLife 6, e21718. (10.7554/eLife.21718)
- 41. Lundine J, Bourgeault IL, Clark J, Heidari S, Balabanova D. 2018. The gendered system of academic publishing. The Lancet 391, 1754-1756. (10.1016/S0140-6736(18)30950-4)
- 42. Kowalczuk MK, Dudbridge F, Nanda S, Harriman SL, Patel J, Moylan EC. 2015. Retrospective analysis of the quality of reports by author-suggested and non-author-suggested reviewers in journals operating on open or single-blind peer review models. BMJ Open 5, e008707. (10.1136/bmjopen-2015-008707)
- 43. Godlee F, Gale CR, Martyn CN. 1998. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA 280, 237-240. (10.1001/jama.280.3.237)
- 44. van Rooyen S, Delamothe T, Evans SJ. 2010. Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial. BMJ 341, c5729. (10.1136/bmj.c5729)
- 45. Bruce R, Chauvin A, Trinquart L, Ravaud P, Boutron I. 2016. Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis. BMC Med. 14, 1-16. (10.1186/s12916-016-0631-5)
- 46. Levis AW, Leentjens AF, Levenson JL, Lumley MA, Thombs BD. 2015. Comparison of self-citation by peer reviewers in a journal with single-blind peer review versus a journal with open peer review. J. Psychosomatic Res. 79, 561-565.
- 47. Ross-Hellauer T. 2017. What is open peer review? A systematic review. F1000Research 6, 588. (10.12688/f1000research.11369.1)
- 48. Fox CW. 2022. Data from: Which peer reviewers voluntarily reveal their identity to authors? Insights into the consequences of open-identities peer review. Dryad Digital Repository. (10.5061/dryad.8pk0p2np3)