PLOS Biology. 2024 Aug 22;22(8):e3002645. doi: 10.1371/journal.pbio.3002645

An empirical appraisal of eLife’s assessment vocabulary

Tom E Hardwicke 1,*, Sarah R Schiavone 1, Beth Clarke 1, Simine Vazire 1
Editor: Christopher D Chambers2
PMCID: PMC11340897  PMID: 39172747

Abstract

Research articles published by the journal eLife are accompanied by short evaluation statements that use phrases from a prescribed vocabulary to evaluate research on 2 dimensions: importance and strength of support. Intuitively, the prescribed phrases appear to be highly synonymous (e.g., important/valuable, compelling/convincing) and the vocabulary’s ordinal structure may not be obvious to readers. We conducted an online repeated-measures experiment to gauge whether the phrases were interpreted as intended. We also tested an alternative vocabulary with (in our view) a less ambiguous structure. A total of 301 participants with a doctoral or graduate degree used a 0% to 100% scale to rate the importance and strength of support of hypothetical studies described using phrases from both vocabularies. For the eLife vocabulary, most participants’ implied ranking did not match the intended ranking on both the importance (n = 59, 20% matched, 95% confidence interval [15% to 24%]) and strength of support dimensions (n = 45, 15% matched [11% to 20%]). By contrast, for the alternative vocabulary, most participants’ implied ranking did match the intended ranking on both the importance (n = 188, 62% matched [57% to 68%]) and strength of support dimensions (n = 201, 67% matched [62% to 72%]). eLife’s vocabulary tended to produce less consistent between-person interpretations, though the alternative vocabulary still elicited some overlapping interpretations away from the middle of the scale. We speculate that explicit presentation of a vocabulary’s intended ordinal structure could improve interpretation. Overall, these findings suggest that more structured and less ambiguous language can improve communication of research evaluations.


Research articles published by eLife are accompanied by statements that use prescribed phrases to evaluate importance and strength of support. This study uses an online repeated-measures experiment to gauge whether the eLife phrases were interpreted as intended, finding that most participants’ implied ranking did not match the intended ranking.

Introduction

Peer review is usually a black box—readers only know that a research paper eventually surpassed some ill-defined threshold for publication and rarely see the more nuanced evaluations of the reviewers and editor [1]. A minority of journals challenge this convention by making peer review reports publicly available [2]. One such journal, eLife, also accompanies articles with short evaluation statements (“eLife assessments”) representing the consensus opinions of editors and peer reviewers [3]. In 2022, eLife stated that these assessments would use phrases drawn from a common vocabulary (Table 1) to convey their judgements on 2 evaluative dimensions: (1) “significance”; and (2) “strength of support” (for details see [4]). For example, a study may be described as having “landmark” significance and offering “exceptional” strength of support (for a complete example, see Box 1). The phrases are drawn from “widely used expressions” in prior eLife assessments and the stated goal is to “help convey the views of the editor and the reviewers in a clear and consistent manner” [4]. Here, we report a study which assessed whether the language used in eLife assessments is perceived clearly and consistently by potential readers. We also assessed alternative language that may improve communication.

Table 1. Phrases and their definitions (italicised) from the eLife vocabulary representing 2 evaluative dimensions: significance and strength of support.

The significance dimension is represented by 5 phrases and the strength of support dimension is represented by 6 phrases. In a particular eLife assessment, readers only see 1 phrase from each of the evaluative dimensions. Phrases are accompanied by eLife definitions, but these are not shown in eLife assessments (though some words from the definitions may be used).

eLife vocabulary

Significance
  • Landmark: findings with profound implications that are expected to have widespread influence
  • Fundamental: findings that substantially advance our understanding of major research questions
  • Important: findings that have theoretical or practical implications beyond a single subfield
  • Valuable: findings that have theoretical or practical implications for a subfield
  • Useful: findings that have focused importance and scope

Strength of support
  • Exceptional: exemplary use of existing approaches that establish new standards for a field
  • Compelling: evidence that features methods, data, and analyses more rigorous than the current state-of-the-art
  • Convincing: appropriate and validated methodology in line with current state-of-the-art
  • Solid: methods, data, and analyses broadly support the claims with only minor weaknesses
  • Incomplete: main claims are only partially supported
  • Inadequate: methods, data, and analyses do not support the primary claims

Box 1. A complete example of an eLife assessment. This particular example uses the phrase “important” to convey the study’s significance and the phrase “compelling” to convey the study’s strength of support

“The overarching question of the manuscript is important and the findings inform the patterns and mechanisms of phage-mediated bacterial competition, with implications for microbial evolution and antimicrobial resistance. The strength of the evidence in the manuscript is compelling, with a huge amount of data and very interesting observations. The conclusions are well supported by the data. This manuscript provides a new co-evolutionary perspective on competition between lysogenic and phage-susceptible bacteria that will inform new studies and sharpen our understanding of phage-mediated bacterial co-evolution.” [5].

Our understanding (based on [4]) is that eLife intends the common vocabulary to represent different degrees of each evaluative dimension on an ordinal scale (e.g., “landmark” findings are more significant than “fundamental” findings and so forth); however, in our view the intended ordering is sometimes ambiguous or counterintuitive. For example, it does not seem obvious to us that an “important” study is necessarily more significant than a “valuable” study, nor does a “compelling” study seem necessarily stronger than a “convincing” study. Additionally, several phrases, such as “solid” and “useful,” could be interpreted broadly, leading to a mismatch between intended meaning and perceived meaning. The phrases also do not cover the full continuum of measurement and are unbalanced in terms of positive and negative phrases. For example, the “significance” dimension has no negative phrases—the scale endpoints are “landmark” and “useful.” We also note that the definitions provided by eLife do not always map onto gradations of the same construct. For example, the eLife definitions of phrases on the significance dimension suggest that the difference between “useful,” “valuable,” and “important” is a matter of breadth/scope (whether the findings have implications beyond a specific subfield), whereas the difference between “fundamental” and “landmark” is a matter of degree. In short, we are concerned that several aspects of the eLife vocabulary may undermine communication of research evaluations to readers.

In Table 2, we outline an alternative vocabulary that is intended to overcome these potential issues with the eLife vocabulary. Phrases in the alternative vocabulary explicitly state the relevant evaluative dimension (e.g., “support”) along with a modifying adjective that unambiguously represents degree (e.g., “very low”). The alternative vocabulary is intended to cover the full continuum of measurement and be balanced in terms of positive and negative phrases. We have also renamed “significance” to “importance” to avoid any confusion with statistical significance. We hope that these features will facilitate alignment of readers’ interpretations with the intended interpretations, improving the efficiency and accuracy of communication.

Table 2. Phrases from the alternative vocabulary representing 2 evaluative dimensions: importance and strength of support. Each dimension is represented by 5 phrases.

Alternative vocabulary

Importance
  • Very high importance
  • High importance
  • Moderate importance
  • Low importance
  • Very low importance

Strength of support
  • Very strong support
  • Strong support
  • Moderate support
  • Weak support
  • Very weak support

The utility of eLife assessments will depend (in part) on whether readers interpret the common vocabulary in the manner that eLife intends. Mismatches between eLife’s intentions and readers’ perceptions could lead to inefficient or inaccurate communication. In this study, we empirically evaluated how the eLife vocabulary (Table 1) is interpreted and assessed whether an alternative vocabulary (Table 2) elicited more desirable interpretations. Our goal was not to disparage eLife’s progressive efforts, but to make a constructive contribution towards a more transparent and informative peer review process. We hope that a vocabulary with good empirical performance will be more attractive and useful to other journals considering adopting eLife’s approach.

Our study is modelled on prior studies that report considerable individual differences in people’s interpretation of probabilistic phrases [6–12]. In a prototypical study of this kind, participants are shown a probabilistic statement like “It will probably rain tomorrow” and asked to indicate the likelihood of rain on a scale from 0% to 100%. Analogously, in our study participants read statements describing hypothetical scientific studies using phrases drawn from the eLife vocabulary or the alternative vocabulary and were asked to rate the study’s significance/importance or strength of support on a scale from 0 to 100. We used these responses to gauge the extent to which people’s interpretations of the vocabulary were consistent with each other and consistent with the intended rank order.

Research aims

Our overarching goal was to identify clear language for conveying evaluations of scientific papers. We hope that this will make it easier for other journals/platforms to follow in eLife’s footsteps and move towards more transparent and informative peer review.

With this overall goal in mind, we had 3 specific research aims:

  • Aim One. To what extent do people share similar interpretations of phrases used to describe scientific research?

  • Aim Two. To what extent do people’s (implied) ranking of phrases used to describe scientific research align with (a) each other; and (b) with the intended ranking?

  • Aim Three. To what extent do different phrases used to describe scientific research elicit overlapping interpretations and do those interpretations imply broad coverage of the underlying measurement scale?

Methods

Our methods adhered to our preregistered plan (https://doi.org/10.17605/OSF.IO/MKBTP) with one minor deviation: our target sample size was 300, but we accidentally recruited an additional participant, so the actual sample size was 301.

Ethics

This study was approved by a University of Melbourne ethics board (project ID: 26411).

Design

We conducted an experiment with a repeated-measures design. Participants were shown short statements that described hypothetical scientific studies in terms of their significance/importance or strength of support using phrases drawn from the eLife vocabulary (Table 1) and from the alternative vocabulary (Table 2). The statements were organised into 4 blocks based on vocabulary and evaluative dimension; specifically, block one: eLife-significance (5 statements), block two: eLife-support (6 statements), block three: alternative-importance (5 statements), block four: alternative-support (5 statements). Each participant saw all 21 phrases and responded using a 0% to 100% slider scale to indicate their belief about each hypothetical study’s significance/importance or strength of support.

Materials

There were 21 statements that described hypothetical scientific studies using one of the 21 phrases included in the 2 vocabularies (Tables 1 and 2). Statements referred either to a study’s strength of support (e.g., Fig 1) or a study’s significance/importance (e.g., Supplementary Figure A in S1 Text). For the alternative vocabulary, we used the term “importance” rather than “significance.” To ensure the statements were grammatically accurate, it was necessary to use slightly different phrasing when communicating significance with the eLife vocabulary (“This is an [phrase] study”) compared to communicating importance with the alternative vocabulary (“This study has [phrase] importance”; e.g., Supplementary Figure B in S1 Text).

Fig 1. An example summary statement referring to a study’s strength of support and the corresponding response scale with an arbitrary response shown.


Additionally, there was 1 attention check statement (Supplementary Figure C in S1 Text), a question asking participants to confirm their highest completed education level (options: Undergraduate degree (BA/BSc/other)/Graduate degree (MA/MSc/MPhil/other)/Doctorate degree (PhD/other)/Other), and a question asking participants the broad subject area of their highest completed education level (options: Arts and Humanities/Life Sciences and Biomedicine/Physical Sciences/Social Sciences/Other). The veridical materials are available at https://osf.io/jpgxe/.

Sample

Sample source

Participants were recruited from the online participant recruitment platform Prolific (https://www.prolific.co/). As of 23rd August 2023, the platform had 123,064 members. Demographic information about Prolific members is provided in S8 Text.

Sample size

As data collection unfolded, we intermittently checked how many participants had met the inclusion criteria, aiming to stop data collection when we had eligible data for our target sample size of 300 participants. Ultimately, 461 participants responded to the survey. Of these 461 participants, 156 failed the attention check and 12 took longer than 30 min to complete the study and were therefore excluded. No participants failed to respond to all 21 statements or completed the study too quickly (<5 min). We applied these exclusion criteria one-by-one, which removed data from 160 participants in total (some participants met more than one criterion), and retained eligible data from 301 participants (we unintentionally recruited 1 additional participant).
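For readers interested in how such sequential exclusions can be applied in practice, the sketch below illustrates the logic on a hypothetical response table. The column names, example values, and thresholds are illustrative assumptions only; they are not taken from the authors’ actual analysis scripts (which are available on OSF).

```python
import pandas as pd

# Hypothetical response table: one row per participant.
# Column names and values are assumptions for illustration.
responses = pd.DataFrame({
    "participant_id": [1, 2, 3, 4],
    "passed_attention_check": [True, False, True, True],
    "duration_minutes": [12.0, 35.0, 45.0, 9.5],
    "n_items_answered": [21, 21, 21, 21],
})

eligible = responses.copy()
# Apply each exclusion criterion in turn. A participant removed by an earlier
# criterion is simply absent when later criteria are applied, so the
# per-criterion counts need not sum to the total number excluded.
eligible = eligible[eligible["passed_attention_check"]]
eligible = eligible[eligible["duration_minutes"] <= 30]   # not too slow
eligible = eligible[eligible["duration_minutes"] >= 5]    # not too fast
eligible = eligible[eligible["n_items_answered"] == 21]   # responded to all statements

print(f"Retained {len(eligible)} of {len(responses)} participants")
```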

Sample size justification

The target sample size of 300 was based on our resource constraints and expectations about statistical power and precision (see S2 Text).

Inclusion criteria

Participants had to have a ≥95% approval rate for prior participation on the recruitment platform (Prolific). Additionally, Prolific prescreening questions were used to ensure that the study was only available to participants who reported that they speak fluent English, were aged between 18 and 70 years, and had completed a doctorate degree (PhD/other).

Procedure

  1. Data collection and recruitment via the Prolific platform began on September 13, 2023 and was completed on September 14, 2023.

  2. After responding to the study advert (https://osf.io/a25vq), participants read an information sheet (https://osf.io/39vay) and provided consent (https://osf.io/xdar7). During this process, they were told that the study seeks to understand “how people perceive words used to describe scientific studies so we can improve communication of research to the general public.”

  3. Participants completed the task remotely online via the Qualtrics platform. Before starting the main task, they read a set of instructions and responded to a practice statement (S3 Text).

  4. For the main task, statements were presented sequentially, and participants responded to them in their own time. The order of presentation was randomized, both between and within the 4 blocks of statements. After each statement, there was a 15-s filler task during which participants were asked to complete as many multiplication problems (e.g., 5 × 7 =?) as they could from a list of 10. The multiplication problems were randomly generated every time they appeared using the Qualtrics software. Only numbers between 1 and 15 were used to ensure that most of the problems were relatively straightforward to solve. A single “attention check” statement (Supplementary Figure C in S1 Text) appeared after all 4 blocks had been completed.

  5. Participants were required to respond to each statement before they could continue to the next statement. The response slider could be readjusted as desired until the “next” button was pressed, after which participants could not return to or edit prior responses.

  6. After responding to all 21 statements and the attention check, participants were shown a debriefing document (https://osf.io/a9gve).

Results

All analyses adhered to our preregistered plan (https://doi.org/10.17605/OSF.IO/MKBTP). Numbers in square brackets represent 95% confidence intervals computed with the Sison–Glaz method [13] for multinomial proportions or bootstrapped with the percentile method [14] for percentile estimates.
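As an illustration of the percentile bootstrap [14] used for the percentile estimates reported below, the sketch shows how a 95% confidence interval for a median could be computed; the ratings, seed, and number of resamples are invented for demonstration and may differ from the authors’ R implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 0-100 ratings for a single phrase (illustrative data only).
ratings = rng.integers(60, 96, size=301)

def percentile_bootstrap_ci(x, stat=np.median, n_boot=10_000, alpha=0.05, rng=rng):
    """Percentile bootstrap CI: resample with replacement many times and take
    the empirical alpha/2 and 1 - alpha/2 quantiles of the statistic."""
    boot_stats = np.array([
        stat(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)
    ])
    return np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])

low, high = percentile_bootstrap_ci(ratings)
print(f"median = {np.median(ratings):.0f}, 95% CI [{low:.0f}, {high:.0f}]")
```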

Participant characteristics

Participants stated that their highest completed education level was either a doctorate degree (n = 287) or graduate degree (n = 14). Participants reported that the subject areas that most closely represented their degrees were Life Sciences and Biomedicine (n = 97), Social Sciences (n = 77), Physical Sciences (n = 57), Arts and Humanities (n = 37), and various “other” disciplines (n = 33).

Response distributions

The distribution of participants’ responses to each phrase is shown in Fig 2 (importance/significance dimension) and Fig 3 (strength of support dimension). These “ridgeline” plots [15] are kernel density distributions which represent the relative probability of observing different responses (akin to a smoothed histogram).
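The densities in Figs 2 and 3 can be thought of as smoothed histograms. A minimal sketch of how a kernel density estimate might be computed for one phrase’s ratings is given below, using simulated data and a default bandwidth; the published figures may use different smoothing settings.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Illustrative ratings for one phrase, clipped to the 0-100 response scale.
ratings = rng.normal(loc=75, scale=10, size=301).clip(0, 100)

grid = np.linspace(0, 100, 501)          # evaluation points across the scale
density = gaussian_kde(ratings)(grid)    # relative probability of each response

# The 25th/50th/75th percentiles marked on each ridgeline in Figs 2 and 3:
q25, q50, q75 = np.percentile(ratings, [25, 50, 75])
print(q25, q50, q75)
```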

Fig 2. Responses to each phrase on the importance/significance dimension as kernel density distributions with the 25th, 50th (i.e., median), and 75th quantiles represented by black vertical lines and the 25th–75th quantile region (i.e., interquartile range) highlighted in blue. The data underlying this figure can be found in https://osf.io/mw2q4/files/osfstorage.

Fig 3. Responses to each phrase on the strength of support dimension as kernel density distributions with the 25th, 50th (i.e., median), and 75th quantiles represented by black vertical lines and the 25th–75th quantile region (i.e., interquartile range) highlighted in blue. The data underlying this figure can be found in https://osf.io/mw2q4/files/osfstorage.

Tables 3 and 4 show the 25th, 50th (i.e., median), and 75th percentiles of responses for each phrase (as represented by the black vertical lines in Figs 2 and 3). The tables include 95% confidence intervals only for medians to make them easier to read; however, confidence intervals for all percentile estimates are available in Supplementary Table A in S5 Text and Supplementary Table B in S5 Text.

Table 3. Percentile estimates for participant responses to phrases on the significance/importance dimension for eLife and alternative vocabularies.

The data underlying this table can be found in https://osf.io/mw2q4/files/osfstorage.

Importance dimension
Percentile
Phrase 25th Median [CI] 75th
eLife vocabulary
Landmark 86 94 [92,95] 98
Fundamental 70 83 [81,85] 90
Important 66 72 [71,75] 80
Valuable 60 70 [70,71] 80
Useful 50 60 [60,63] 70
Alternative vocabulary
Very high importance 84 90 [90,91] 94
High importance 75 80 [80,81] 88
Moderate importance 48 50 [50,51] 58
Low importance 10 15 [13,18] 20
Very low importance 4 8 [6,9] 11

CI, 95% confidence intervals.

Table 4. Percentile estimates for participant responses to phrases on the strength of support dimension for eLife and alternative vocabularies.

The data underlying this table can be found in https://osf.io/mw2q4/files/osfstorage.

Strength of support dimension
Percentile
Phrase 25th Median [CI] 75th
eLife vocabulary
Exceptional 90 95 [94,95] 98
Compelling 70 80 [76,80] 86
Convincing 65 75 [71,75] 81
Solid 65 74 [71,75] 82
Incomplete 10 20 [16,20] 30
Inadequate 5 10 [10,13] 20
Alternative vocabulary
Very strong support 80 88 [85,90] 92
Strong support 70 77 [75,80] 85
Moderate support 46 50 [50,51] 56
Weak support 10 15 [14,18] 22
Very weak support 5 9 [7,10] 13

CI, 95% confidence intervals.

Implied ranking of evaluative phrases

Do participants’ implied rankings match the intended rankings?

Although participants rated each statement separately on a continuous scale, these responses also imply an overall ranking of the phrases (in order of significance/importance or strength of support). Ideally, an evaluative vocabulary elicits implied rankings that are both consistent among participants and consistent with the intended ranking. Fig 4 shows the proportion of participants whose implied ranking matched the intended ranking (i.e., “correct ranking”) for the different evaluative dimensions and vocabularies.
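Concretely, a participant’s implied ranking can be obtained by ordering the phrases by that participant’s 0–100 ratings and comparing the result against the intended order. The sketch below illustrates this for one hypothetical participant; the ratings are invented, and ties (which would need an explicit handling rule) are ignored for simplicity.

```python
# Hypothetical ratings from one participant for the alternative importance phrases.
ratings = {
    "very low importance": 8,
    "low importance": 15,
    "moderate importance": 52,
    "high importance": 80,
    "very high importance": 90,
}

# Intended order, from least to most important.
intended = ["very low importance", "low importance", "moderate importance",
            "high importance", "very high importance"]

# Implied ranking: sort phrases by this participant's rating (ascending).
implied = sorted(ratings, key=ratings.get)

print("match" if implied == intended else "mismatch")
```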

Fig 4. Proportion of participants (N = 301) whose implied ranking matched the intended ranking (i.e., “correct ranking”) for both evaluative dimensions and both vocabularies.


Error bars represent 95% confidence intervals using the Sison–Glaz method [13]. The data underlying this figure can be found in https://osf.io/mw2q4/files/osfstorage.

On the significance/importance dimension, 59 (20% [15% to 24%]) participants’ implied rankings of the eLife vocabulary aligned with the intended ranking and 188 (62% [57% to 68%]) participants’ implied rankings of the alternative vocabulary aligned with the intended ranking. We performed an “exact” McNemar test and computed the McNemar odds ratio with Clopper–Pearson 95% confidence intervals adjusted with the “midp” method, as recommended by [16]. The McNemar test indicated that observing a difference between the vocabularies this large, or larger, is unlikely if the null hypothesis were true (odds ratio = 8.17, 95% CI [5.11, 13.69], p = 1.34e-26). The intended ranking was the most popular for both vocabularies; however, participants had 55 different implied rankings for the eLife vocabulary and 8 different implied rankings for the alternative vocabulary (for details, see Supplementary Tables A–D in S6 Text).
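For illustration, a minimal version of the exact McNemar test with the mid-p adjustment recommended by [16] can be computed from the discordant pair counts alone. The counts below are hypothetical, the sketch covers only the p-value and the conditional odds ratio (not the Clopper–Pearson interval), and the authors’ actual R analysis may differ in detail.

```python
from scipy.stats import binom

# Hypothetical paired outcomes: b = participants who matched the intended ranking
# with the alternative vocabulary but not the eLife vocabulary; c = the reverse.
b, c = 40, 7
n = b + c
k = min(b, c)

# Exact (conditional) two-sided McNemar p-value: under the null hypothesis the
# discordant pairs split 50:50, so the smaller count follows Binomial(n, 0.5).
p_exact = min(1.0, 2 * binom.cdf(k, n, 0.5))

# Mid-p adjustment (per Fagerland et al. [16]): subtract the point probability
# of the observed count from the exact p-value.
p_midp = p_exact - binom.pmf(k, n, 0.5)

# Conditional (McNemar) odds ratio is the ratio of the discordant counts.
odds_ratio = b / c
print(p_exact, p_midp, odds_ratio)
```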

On the strength of support dimension, 45 (15% [11% to 20%]) participants’ ratings of the eLife phrases were in accordance with the intended ranking relative to 201 (67% [62% to 72%]) participants who correctly ranked the alternative vocabulary. A McNemar test indicated that observing a difference between the vocabularies this large, or larger, is unlikely if the null hypothesis were true (odds ratio = 11.4, 95% CI [6.89, 20.01], p = 5.73e-35). The intended ranking was the most popular for both vocabularies; though for the eLife vocabulary, an unintended ranking that swapped the ordinal positions of “convincing” and “solid” came a close second, reflected in the ratings of 44 (15% [10% to 19%]) participants. Overall, there were 34 different implied rankings for the eLife vocabulary, relative to 10 implied rankings for the alternative vocabulary. Note that these counts should be compared with caution, as on this dimension the eLife vocabulary had more (6) phrases than the alternative vocabulary (which had 5 phrases and therefore fewer possible rankings).

Quantifying ranking similarity

Thus far, our analyses have emphasised the binary difference between readers’ implied rankings and eLife’s intended rankings. A complementary analysis quantifies the degree of similarity between rankings using Kendall’s tau distance (Kd)—a metric that describes the difference between 2 lists in terms of the number of adjacent pairwise swaps required to convert one list into the other [17,18]. The larger the distance, the larger the dissimilarity between the 2 lists. Kd ranges from 0 (indicating a complete match) to n(n-1)/2 (where n is the size of one list). Because the eLife strength of support dimension has 6 phrases and all other dimensions have 5 phrases, we report the normalised Kd which ranges from 0 (maximal similarity) to 1 (maximal dissimilarity). Further explanation of Kd is provided in S7 Text.
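A minimal sketch of the (normalised) Kendall’s tau distance between an implied and an intended ranking is given below; it simply counts the discordant pairs, which equals the number of adjacent pairwise swaps needed to turn one ordering into the other. The example rankings are illustrative.

```python
from itertools import combinations

def kendall_distance(ranking_a, ranking_b, normalise=True):
    """Number of discordant pairs between two rankings of the same items
    (equivalently, adjacent swaps needed to convert one into the other)."""
    pos_a = {item: i for i, item in enumerate(ranking_a)}
    pos_b = {item: i for i, item in enumerate(ranking_b)}
    discordant = sum(
        1 for x, y in combinations(ranking_a, 2)
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )
    n = len(ranking_a)
    return discordant / (n * (n - 1) / 2) if normalise else discordant

intended = ["inadequate", "incomplete", "solid", "convincing", "compelling", "exceptional"]
implied = ["inadequate", "incomplete", "convincing", "solid", "compelling", "exceptional"]
print(kendall_distance(implied, intended))  # one discordant pair -> 1/15, about 0.067
```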

Fig 5 illustrates the extent to which participants’ observed rankings deviated from the intended ranking in terms of normalised Kd. This suggests that although deviations from the intended eLife ranking were common, they only tended to be on the order of 1 or 2 discordant rank pairs. By contrast, the alternative vocabulary rarely resulted in any deviations, and when it did, these were typically only in terms of one discordant rank pair.

Fig 5. Extent to which participants’ (N = 301) implied rankings deviated from intended rankings using normalised Kendall’s distance.


Zero represents a perfect match (green bar), other values (grey bars) represent increasing dissimilarity from the intended ranking up to 1, which represents maximal dissimilarity. The data underlying this figure can be found in https://osf.io/mw2q4/files/osfstorage.

Locus of ranking deviations

So far, we have examined how many participants adhered to the intended ranking (Fig 4) and the extent to which their implied rankings deviated from the intended rankings (Fig 5). However, these approaches do not optimally illustrate where the ranking deviations were concentrated (i.e., which phrases were typically being misranked). The heat maps in Fig 6 show the percentage of participants whose implied rankings matched or deviated from the intended ranking at the level of individual phrases. Ideally, a phrase’s observed rank will match its intended rank for 100% of participants. For example, the heat maps show that almost all participants (98%) correctly ranked “moderate importance” and “moderate support” in the alternative vocabulary. The heat maps also reveal phrases that were often misranked with each other, for example: “solid,” “convincing,” and “compelling” in the eLife vocabulary.

Fig 6. Heat maps showing the percentage of participants (N = 301) whose implied rankings were concordant or discordant with the intended ranking at the level of individual phrases.


Darker colours and higher percentages indicate greater concordance between the implied rank and the intended rank of a particular phrase. The data underlying this figure can be found in https://osf.io/mw2q4/files/osfstorage.
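The underlying quantity in Fig 6 is, for each phrase, the percentage of participants whose implied rank for that phrase fell at each possible rank position. A compact sketch of that tabulation is given below, using invented rank data (real ratings could also produce ties, which are ignored here).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
phrases = ["useful", "valuable", "important", "fundamental", "landmark"]
intended_rank = {p: i + 1 for i, p in enumerate(phrases)}  # 1 = lowest significance

# Invented implied ranks for 301 participants (rows) and 5 phrases (columns).
implied = pd.DataFrame(
    [rng.permutation(np.arange(1, 6)) for _ in range(301)], columns=phrases
)

# Percentage of participants assigning each phrase to each rank position.
heat = pd.concat(
    {p: implied[p].value_counts(normalize=True).sort_index() * 100 for p in phrases},
    axis=1,
).round(1)
print(heat)  # rows = rank positions 1-5, columns = phrases
```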

Discussion

Research articles published in eLife are accompanied by evaluation statements that use phrases from a prescribed vocabulary (Table 1) to describe a study’s importance (e.g., “landmark”) and strength of support (e.g., “compelling”). If readers, reviewers, and editors interpret the prescribed vocabulary differently to the intended meaning, or inconsistently with each other, it could lead to miscommunication of research evaluations. In this study, we assessed the extent to which people’s interpretations of the eLife vocabulary are consistent with each other and consistent with the intended ordinal structure. We also examined whether an alternative vocabulary (Table 2) improved consistency of interpretation.

Overall, the empirical data supported our initial intuitions: while some phrases in the eLife vocabulary were interpreted relatively consistently (e.g., “exceptional” and “landmark”), several phrases elicited broad interpretations that overlapped a great deal with other phrases’ interpretations (particularly the phrases “fundamental,” “important,” and “valuable” on the significance/importance dimension (Fig 2) and “compelling,” “convincing,” and “solid” on the strength of support dimension (Fig 3)). This suggests these phrases are not ideal for discriminating between studies with different degrees of importance and strength of support. If the same phrases often mean different things to different people, there is a danger of miscommunication between the journal and its readers. Responses on the significance/importance dimension were largely confined to the upper half of the scale, which is unsurprising, given the absence of negative phrases. It is unclear if the exclusion of negative phrases was a deliberate choice on the part of eLife’s leadership (because articles with little importance would not be expected to make it through editorial triage) or an oversight. Most participants’ implied rankings of the phrases were misaligned with the ranking intended by eLife—20% of participants had aligned rankings on the significance/importance dimension and 15% had aligned rankings on the strength of support dimension (Fig 4). The degree of mismatch was typically in the range of 1 or 2 discordant ranks (Fig 5). Heat maps (Fig 6) highlighted that phrases in the middle of the scale (e.g., “solid,” “convincing”) were most likely to have discordant ranks.

By contrast, phrases in the alternative vocabulary tended to elicit more consistent interpretations across participants and interpretations that had less overlap with other phrases (Figs 2 and 3 and Tables 3 and 4). The alternative vocabulary was more likely to elicit implied rankings that matched the intended ranking—62% of participants had aligned rankings on the significance/importance dimension and 67% had aligned rankings on the strength of support dimension (Fig 4). Mismatched rankings were usually misaligned by one rank (Fig 5). Although the alternative vocabulary had superior performance to the eLife vocabulary, it was nevertheless imperfect. Specifically, interpretations of phrases away from the middle of the scale on both dimensions (e.g., “low importance” and “very low importance”) tended to have some moderate overlap (Figs 2, 3, and 6). We do not know what caused this overlap, but, as discussed in the next paragraph, one possibility is that it is overly optimistic to expect people’s intuitions to align when they judge phrases in isolation, without any knowledge of the underlying scale.

Rather than presenting evaluative phrases in isolation (as occurs for eLife readers and occurred for participants in our study), informing people of the underlying ordinal scale may help to improve communication of evaluative judgements. eLife could refer readers to an external explanation of the vocabulary; however, prior research on interpretation of probabilistic phrases suggests this may be insufficient as most people neglect to look up the information [6,19]. A more effective option might be to explicitly present the phrases in their intended ordinal structure [19]. For example, the full importance scale could be attached to each evaluation statement with the relevant phrase selected by reviewers/editors highlighted (Fig 7A). Additionally, phrases could be accompanied by mutually exclusive numerical ranges (Fig 7B); prior research suggests that this can improve consistency of interpretation for probabilistic phrases [19]. It is true that the limits of such ranges are arbitrary, and editors may be concerned that using numbers masks vague subjective evaluations in a veil of objectivity and precision. To some extent we share these concerns; however, the goal here is not to develop an “objective” measurement of research quality, but to have practical guidelines that improve accuracy of communication. Specifying a numerical range may help to calibrate the interpretations of evaluators and readers so that the uncertainty can be accurately conveyed. Future research could also explore the relationship between the number of items included in the vocabulary and the level of precision that reviewers/editors wish to communicate.

Fig 7. Explicit presentation of the intended ordinal structure (A–C), potentially with numerical ranges (B), could improve consistency of interpretation. Judgements by multiple reviewers could also be represented (by different arrows) without forcing consensus (C).

Our study has several important limitations. First, we did not address whether editor/reviewer opinions provide valid assessments of studies or whether the vocabularies provide valid measurements of those opinions. We also note that eLife assessments are formed via consensus, rather than representing the opinions of individuals, which raises questions about how social dynamics may affect the evaluation outcomes. It may be more informative to solicit and report individual assessments from each peer reviewer and editor, rather than force a consensus (e.g., see Fig 7C). Although these are important issues, they are beyond the scope of this study, which is focused on clarity of communication.

Second, we are particularly interested in how the readership of eLife interpret the vocabularies, but because we do not have any demographic information about the readership, we do not know the extent to which our sample is similar to that population. We anticipated that the most relevant demographic characteristics were education status (because the content is technical), knowledge of subject area (because eLife publishes biomedical and life sciences), and language (because the content is in English). All of our participants reported speaking fluent English, the vast majority had doctoral degrees, and about one third had a degree in the Biomedical and Life Sciences. Relative to this sample, we expect the eLife readership probably consists of more professional scientists, but otherwise we think the sample is likely to be a good match to the target population. Also note that eLife explicitly states that eLife assessments are intended to be accessible to non-expert readers [4], therefore, our sample is still a relevant audience, even if it might contain fewer professional scientists than eLife’s readership.

Third, to maintain experimental control, we presented participants with very short statements that differed only in terms of the phrases we wished to evaluate. In practice however, these phrases will be embedded in a paragraph of text (e.g., Box 1) which may also contain “aspects” of the vocabulary definitions (Table 1) “when appropriate” [4]. It is unclear if the inclusion of text from the intended phrase definitions will help to disambiguate the phrases and future research could explore this.

Fourth, participants were asked to respond to phrases with a point estimate; however, it is likely that a range of plausible values would more accurately reflect their interpretations [9,11]. Because asking participants to respond with a range (rather than a point estimate) creates technical and practical challenges in data collection and analysis, we opted to obtain point estimates only.

Conclusion

Overall, our study suggests that using more structured and less ambiguous language can improve communication of research evaluations. Relative to the eLife vocabulary, participants’ interpretations of our alternative vocabulary were more likely to align with each other, and with the intended interpretation. Nevertheless, some phrases in the alternative vocabulary were not always interpreted as we intended, possibly because participants were not completely aware of the vocabulary’s underlying ordinal scale. Future research, in addition to finding optimal words to evaluate research, could attempt to improve interpretation by finding optimal ways to present them.

Supporting information

S1 Text. Example stimuli and attention check. (DOCX)

S2 Text. Sample size planning. (DOCX)

S3 Text. Task instructions. (DOCX)

S4 Text. Peer Community in Registered Reports Design Template. (DOCX)

S5 Text. Expanded versions of Table 3 and Table 4 percentile estimates with confidence intervals. (DOCX)

S6 Text. Frequency of implied rankings. (DOCX)

S7 Text. Further explanation of Kendall’s distance. (DOCX)

S8 Text. Demographics of Prolific members. (DOCX)

Acknowledgments

The Stage 1 version of this preregistered research article has been peer-reviewed and recommended by Peer Community in Registered Reports (https://rr.peercommunityin.org/articles/rec?id=488).

Research transparency statement

The research question, methods, and analysis plan were peer reviewed and preregistered as a Stage One Registered Report via the Peer Community in Registered Reports platform (https://doi.org/10.17605/OSF.IO/MKBTP). There was only 1 minor deviation from the preregistered protocol (target sample size = 300; actual sample size = 301). A Peer Community in Registered Reports design table is available in S4 Text. All data, materials, and analysis scripts are publicly available on the Open Science Framework (https://osf.io/mw2q4/files/osfstorage/). A reproducible version of the manuscript and associated computational environment is available in a Code Ocean container (https://doi.org/10.24433/CO.4128032.v1).

Data Availability

All data, materials, and analysis scripts are publicly available on the Open Science Framework (https://osf.io/mw2q4/files/osfstorage/). A reproducible version of the manuscript and associated computational environment is available in a Code Ocean container (https://doi.org/10.24433/CO.4128032.v1).

Funding Statement

This study was supported by funding awarded to SV and TEH from the Melbourne School of Psychological Sciences, University of Melbourne. The funders did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Vazire S. Opening the Black Box of Peer Review. Physics. 2021;14:169. Available from: https://physics.aps.org/articles/v14/169.
  • 2. Wolfram D, Wang P, Hembree A, Park H. Open peer review: promoting transparency in open science. Scientometrics. 2020;125:1033–1051. doi: 10.1007/s11192-020-03488-4
  • 3. Eisen MB, Akhmanova A, Behrens TE, Diedrichsen J, Harper DM, Iordanova MD, et al. Peer review without gatekeeping [editorial]. Elife. 2022;11:e83889. doi: 10.7554/eLife.83889
  • 4. eLife’s New Model: What is an eLife assessment? In: eLife [Internet]. eLife Sciences Publications Limited; 20 Oct 2022 [cited 2023 Jan 24]. Available from: https://elifesciences.org/inside-elife/db24dd46/elife-s-new-model-what-is-an-elife-assessment.
  • 5. Rendueles O, de Sousa JA, Rocha EP. Competition between lysogenic and sensitive bacteria is determined by the fitness costs of the different emerging phage-resistance strategies. Elife. 2023;12:e83479. doi: 10.7554/eLife.83479
  • 6. Budescu DV, Por H-H, Broomell SB, Smithson M. The interpretation of IPCC probabilistic statements around the world. Nature Clim Change. 2014;4:508–512. doi: 10.1038/nclimate2194
  • 7. Budescu DV, Wallsten TS. Consistency in interpretation of probabilistic phrases. Organ Behav Hum Decis Process. 1985;36:391–405. doi: 10.1016/0749-5978(85)90007-X
  • 8. Lichtenstein S, Newman JR. Empirical scaling of common verbal phrases associated with numerical probabilities. Psychon Sci. 1967;9:563–564.
  • 9. Reagan RT, Mosteller F, Youtz C. Quantitative meanings of verbal probability expressions. J Appl Psychol. 1989;74:433–442. doi: 10.1037/0021-9010.74.3.433
  • 10. Theil M. The role of translations of verbal into numerical probability expressions in risk management: a meta-analysis. J Risk Res. 2002;5:177–186. doi: 10.1080/13669870110038179
  • 11. Wallsten TS, Budescu DV, Rapoport A, Zwick R, Forsyth B. Measuring the vague meanings of probability terms. J Exp Psychol Gen. 1986;115:348–365. doi: 10.1037/0096-3445.115.4.348
  • 12. Willems S, Albers C, Smeets I. Variability in the interpretation of probability phrases used in Dutch news articles—a risk for miscommunication. JCOM. 2020;19:A03. doi: 10.22323/2.19020203
  • 13. Sison CP, Glaz J. Simultaneous confidence intervals and sample size determination for multinomial proportions. J Am Stat Assoc. 1995;90:366–369. doi: 10.1080/01621459.1995.10476521
  • 14. Rousselet GA, Pernet CR, Wilcox RR. The percentile bootstrap: a primer with step-by-step instructions in R. Adv Methods Pract Psychol Sci. 2021;4:2515245920911881. doi: 10.1177/2515245920911881
  • 15. Wilke C. Fundamentals of data visualization: a primer on making informative and compelling figures. 1st ed. Sebastopol, CA: O’Reilly Media; 2019.
  • 16. Fagerland MW, Lydersen S, Laake P. The McNemar test for binary matched-pairs data: mid-p and asymptotic are better than exact conditional. BMC Med Res Methodol. 2013;13:91. doi: 10.1186/1471-2288-13-91
  • 17. Kendall MG. A new measure of rank correlation. Biometrika. 1938;30:81–93.
  • 18. van Doorn J, Westfall HA, Lee MD. Using the weighted Kendall Distance to analyze rank data in psychology. TQMP. 2021;17:154–165. doi: 10.20982/tqmp.17.2.p154
  • 19. Wintle BC, Fraser H, Wills BC, Nicholson AE, Fidler F. Verbal probabilities: Very likely to be somewhat more confusing than numbers. PLoS ONE. 2019;14:e0213522. doi: 10.1371/journal.pone.0213522

Decision Letter 0

Roland G Roberts

25 Apr 2024

Dear Tom,

Thank you for submitting your ex-PCI-RR manuscript entitled "Finding the right words to evaluate research: An empirical appraisal of eLife’s assessment vocabulary" for consideration as a Meta-Research Article by PLOS Biology.

Your stage 2 Pre-Registered Research Article manuscript has now been evaluated by the PLOS Biology editorial staff, as well as by an academic editor with relevant expertise, and I'm writing to let you know that we would like to send your submission out for re-review.

IMPORTANT: Please can you change the article type to "Pre-Registered Research Article" when you upload your additional metadata (see next paragraph)?

However, before we can send your manuscript to reviewers, we need you to complete your submission by providing the metadata that is required for full assessment. To this end, please login to Editorial Manager where you will find the paper in the 'Submissions Needing Revisions' folder on your homepage. Please click 'Revise Submission' from the Action Links and complete all additional questions in the submission questionnaire.

Once your full submission is complete, your paper will undergo a series of checks in preparation for re-review. After your manuscript has passed the checks it will be sent out for review. To provide the metadata for your submission, please Login to Editorial Manager (https://www.editorialmanager.com/pbiology) within two working days, i.e. by Apr 29 2024 11:59PM.

During the process of completing your manuscript submission, you will be invited to opt-in to posting your pre-review manuscript as a bioRxiv preprint. Visit http://journals.plos.org/plosbiology/s/preprints for full details. If you consent to posting your current manuscript as a preprint, please upload a single Preprint PDF.

Feel free to email us at plosbiology@plos.org if you have any queries relating to your submission.

Kind regards,

Roli

Roland Roberts, PhD

Senior Editor

PLOS Biology

rroberts@plos.org

Decision Letter 1

Roland G Roberts

21 May 2024

Dear Dr Hardwicke,

Thank you for your patience while your revised registered report "Finding the right words to evaluate research: An empirical appraisal of eLife’s assessment vocabulary" went through peer-review at PLOS Biology. I am sending you this decision letter on behalf of my colleague Roli Roberts, who is out of the office this week, in order to avoid unnecessary delays. Your manuscript has now been evaluated by the PLOS Biology editors, an Academic Editor with relevant expertise, and by the three independent reviewers that assessed the stage 1 submission at PCI-RR. All of the reviewers have signed their reports: Reviewer #1 is Ross Mounce, Reviewer #2 is Štěpán Bahník and Reviewer #3 is Veli-Matti Karhulahti.

In light of the reviews, which you will find at the end of this email, we are pleased to offer you the opportunity to address the following points, which we anticipate should not take you very long. We will then assess your revised manuscript and your response to the reviewers' comments with our Academic Editor without further rounds of peer-review.

1) Please add a column to the right of the study design table that reports the actual outcome per row and whether it confirmed or disconfirmed the hypothesis.

2) As you will see, both R1 and R3 remain uncomfortable about your level of engagement with eLife, uncomfortable enough to raise this independently, because they did not have a chance to see a revised stage 1. Although it is not a precondition for publication, we would tend to agree and encourage you to engage with eLife about their intention behind the assessment system, as this would make the discussion more straightforward and certain (eg about intentionality). In addition, this work is likely to receive a fair amount of scrutiny post-publication, including the peer-review file, and so the reviewer qualms might be discussed in post-publication commentary around the article.

3) Reviewer 2 also speaks of additional analyses that could be done with the data. Again, this would not be required for publication, but we did want to give you the opportunity to address this report if you wanted. Especially the assessment of whether the frequency of actual ratings might be justified given pre-selection based on quality, or whether the reviewers disproportionately use words on the lower end of the scale, appears to be relatively doable in a reasonable time frame, should you wish to complement the work in that way.

4) Could the COI statement for author Simine Vazire be modified in our submission system to state the following?

"SV is a member of the board of directors of The Public Library of Science (PLOS). This role has in no way influenced the outcome or development of this work or the peer-review process, nor does it alter our adherence to PLOS Biology policies on sharing data and materials."

We expect to receive your revised manuscript within 1 month. Please email us (plosbiology@plos.org) if you have any questions or concerns, or would like to request an extension.

At this stage, your manuscript remains formally under active consideration at our journal; please notify us by email if you do not intend to submit a revision so that we withdraw the manuscript.

**IMPORTANT - SUBMITTING YOUR REVISION**

Please submit the following files along with your revised manuscript:

1. A 'Response to Reviewers' file - this should detail your responses to the editorial requests, present a point-by-point response to all of the reviewers' comments, and indicate the changes made to the manuscript (if any).

*NOTE: In your point-by-point response to the reviewers, please provide the full context of each review. Do not selectively quote paragraphs or sentences to reply to. The entire set of reviewer comments should be present in full and each specific point should be responded to individually.

You should also cite any additional relevant literature that has been published since the original submission and mention any additional citations in your response.

2. In addition to a clean copy of the manuscript, please also upload a 'track-changes' version of your manuscript that specifies the edits made. This should be uploaded as a "Revised Article with Changes Highlighted " file type.

*Resubmission Checklist*

When you are ready to resubmit your revised manuscript, please refer to this resubmission checklist: https://plos.io/Biology_Checklist

To submit a revised version of your manuscript, please go to https://www.editorialmanager.com/pbiology/ and log in as an Author. Click the link labelled 'Submissions Needing Revision' where you will find your submission record.

Please make sure to read the following important policies and guidelines while preparing your revision:

*Published Peer Review*

Please note while forming your response, if your article is accepted, you will have the opportunity to make the peer review history publicly available. For the sake of transparency, we strongly recommend that you opt into this. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. Please see here for more details:

https://blogs.plos.org/plos/2019/05/plos-journals-now-open-for-published-peer-review/

*PLOS Data Policy*

Please note that as a condition of publication PLOS' data policy (http://journals.plos.org/plosbiology/s/data-availability) requires that you make available all data used to draw the conclusions arrived at in your manuscript. If you have not already done so, you must include any data used in your manuscript either in appropriate repositories, within the body of the manuscript, or as supporting information (N.B. this includes any numerical values that were used to generate graphs, histograms etc.). For an example see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5

Thank you again for your submission to our journal. We hope that our editorial process has been constructive thus far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Nonia

Nonia Pariente, PhD

Editor in Chief

PLOS Biology

npariente@plos.org

on behalf of

Roland

Roland Roberts, PhD

Senior Editor

PLOS Biology

rroberts@plos.org

----------------------------------------------------------------

REVIEWS:

Reviewer #1: Ross Mounce

Reviewer #2: Štěpán Bahník

Reviewer #3: Veli-Matti Karhulahti

Reviewer #1: I re-read the stage 1 protocol submission, which is registered here: https://doi.org/10.17605/OSF.IO/MKBTP (which I will hereafter refer to simply as S1)

The rationale and the stated hypotheses of this Stage 2 manuscript submitted to PLOS Biology (which I will hereafter refer to simply as S2), are indeed the same as approved in S1. The wording between the S1 and S2 is not precisely the same.

A brief digression / sidenote: not a fault of the authors, but I must remark it is a little unhelpful to provide reviewers only PDF files when asking peer reviewers to check the similarity between two separate documents with separate formatting (one with line numbering, one without!). I converted the S1 and S2 into plain text files, chopping-off the S2 at the appropriate places to ensure maximum comparability with the content of the S1. From here a simple bag of words analysis comparing between the plain text of S1 and S2 revealed to me a few linguistic changes that seem to have occurred between S1 and S2 e.g.

In S1: "eLife recently stated that these assessments will use phrases drawn from a common vocabulary…"

In S2: "eLife stated that these assessments would use phrases drawn from a common vocabulary…"

Although there are small changes of wording detectable between the S1 and S2 I did not perceive them to have changed the meaning of the introduction, rationale, or stated hypotheses.

I note that the authors aimed for a target sample size of 300 in the stage 1 protocol and report a target sample size of 301 in the stage 2 manuscript. This tiny difference is noted by the authors in the stage 2 manuscript and is NOT a significant deviation from the stage 1 protocol.

As far as I can tell the authors have adhered to the approved S1 experimental procedures.

The data are to an extent able to test the authors proposed hypotheses.

As I noted in my review of S1 I still think it is a bit of a cop-out by the authors to choose not to ask eLife for the broad (e.g. country-level) demographic characteristics of eLife readers - most well-run journals absolutely do hold data on the geography of where in-bound page views are coming from, at least to the country-level if not in much finer detail via standard web traffic analysis captured by e.g. Google Analytics or Matomo. I can see from inspection of their HTML that eLife use Google Analytics to track website usage data. Their global site tag is currently "GTM-WVM8KG" - you'll see it tagged in every single page. They will know the IP address of every visitor to their site and from the IP address will be able to determine the geolocation of the reader with a high degree of accuracy (relatively few readers take steps to obfuscate the IP address they are reading from).

The conclusions of S2 are valid given the data. However, the conclusions do pre-suppose that eLife intend their vocabulary to have a clear and unambiguous ranking order between words. What if the apparent ambiguity that the data & research has uncovered was _intentional_ on eLife's part? It's not clear to me if the authors here have ever asked eLife if they intended all the words in the vocabulary to have a strict and clear order of ranking. Might the authors want to ask eLife about this? If the intention was to have a clear and unambiguous ranking, why wouldn't eLife have just used numbers rather than words (which clearly as this study and many others show, can be interpreted differently by different people)? Some of the wording in the S2 rather assumes a certain intentionality on behalf of eLife and I'm not 100% convinced that the authors here can speak confidently of eLife's intentionality without directly asking eLife.

I have examined the data deposited at OSF (https://doi.org/10.17605/OSF.IO/MW2Q4) to support S2. The data is in a public repository, it is appropriately described, rights-waived under CC0 1.0 to enable maximum re-use potential, and it is time-stamped (April 6, 2024) to demonstrate that the data was collected after the S1 approval. The data complies with the PLOS data availability policy.

Conclusion: Whilst I retain and reiterate my quibbles from the S1 review stage, this S2 does represent sound, pre-registered research and should be published.

Reviewer #2: The second stage of the registered report follows the plan from the first stage. I did not notice any deviations. The results are clearly described and the text in discussion and conclusion is justified given the results.

My biggest regret for the study is that it only purports to show that the eLife vocabulary is not optimal, that the alternative vocabulary is better, but it does not purports to show that the alternative vocabulary is good itself. This is correctly described in the conclusion, where the authors note that future research could find optimal vocabulary, so I do not think the description needs to be changed. I just hope that the alternative vocabulary will not be adopted based only on the results of this study. As the authors correctly note, the wording of the alternative vocabulary overlaps (at the two most extreme points on each end of the scale), which suggests that a different wording might be better. An informative and useful vocabulary should take into account not only whether the descriptions are correctly ordered, but also how well they cover the importance and support in actual studies. This would be possible to quantify, but the authors opted not do that during the first stage of the registered report. Relatedly, the authors note in the discussion that the eLife vocabulary covers only the upper half of the scale. It would be possible to look at the frequency of actual ratings to see whether this might be justified given pre-selection based on quality, or whether the reviewers disproportionately use words on the lower end of the scale.

Reviewer #3: Thank you for inviting me to review the Stage 2 of this manuscript. I was excited about the plan at Stage 1 and it's indeed interesting to read the results at Stage 2. First, as far as my assessment goes, the authors have followed the plan carefully, the materials and data are shared, the solitary hypothesis was successfully tested, and the paper thus deserves to be accepted as well as further debated in post-publication discussion. For editors, I acknowledge to have reviewed the html script but didn't run the code in R and am (still) not qualified to comment on the details of Kendall's tau distance.

When reading the final version of the paper alongside the Stage 1 review, I must re-address a few points that were raised earlier. Because the revised Stage 1 plan was not re-assessed after revisions, I did not have an earlier opportunity to comment on the final Stage 1. Therefore, my Stage 2 review revisits some issues from Stage 1; this is mainly for the open review record and may not entail action from the authors. To be clear, these points were considered resolved by the editors/recommenders, so the authors should not be required to resolve them, but may optionally engage if and as they see fit.

1. One of my and other reviewers' comments at Stage 1 was to further involve eLife in the study. Considering that the paper systematically uses words like "intended" as the core epistemic lens through which the data are analysed, it still feels awkward that these "intentions" have not been backed up beyond the journal's public statement "…to help convey the views of the editor and the reviewers in a clear and consistent manner." Reading the author responses post hoc, I can see there was an informal dinner with one editor before the study, but this was only to inform them about the study. Of course, it could be that the short description on the eLife website represents all there is to the intended purpose, but that remains unknown, for readers at least. I'd suggest having a representative of eLife review the full Stage 2 in order to ensure that their intentions have not been misinterpreted. For example, it feels strange to publish sentences like "It is unclear if the exclusion of negative phrases was a deliberate choice on the part of eLife's leadership" when all it takes to figure that out is to ask. Moreover, it would have been valuable to hear what specifically eLife means by "consistency", which is operationalised in a particular way here.

2. Related to this, while we still don't know what eLife fully intended, their stated decision to use a "common vocabulary" implies that the idea was indeed not to use an overtly quantifying ranking (Likert-scale agreement, numeric scores, etc.) but, in their own words, a "rich" vocabulary that perhaps serves to balance ranking against the inherent vagueness of scientific assessment, where comparisons between studies rarely make sense. One could reasonably argue that the results of the study support such a balance: the eLife vocabulary appears to be somewhat successful at communicating correctly ordered rankings while not expressing excessive confidence in the quantifiability of such a ranking. In this regard, it's possible that, for example, reporting percentages (proposed on p. 24) would move the assessment system even further away from eLife's intended epistemology and/or values. The authors are aware of this, but given the lack of eLife's involvement, all this remains speculation for readers (and could easily be cleared up).

3. As commented at Stage 1, the paper has one explicit registered test of a hypothesis, which remains difficult for readers to separate from the rest of the analytic context on page 17: "The McNemar test indicated that observing a difference between the vocabularies this large, or larger, is unlikely if the null hypothesis were true (odds ratio = 8.17, 95% CI [5.11,13.69], p = 1.34e-26)." The authors must be commended.
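For context, the McNemar test quoted above compares paired match/mismatch outcomes for the two vocabularies within the same participants. A minimal R sketch on purely hypothetical counts is given below; the reported odds ratio and exact confidence interval suggest the authors used an exact variant (e.g., from the exact2x2 package) rather than base R's mcnemar.test, so this is only an approximation of the approach.

```r
# Minimal sketch with purely hypothetical counts (not the study data):
# a McNemar test on paired outcomes, where each participant either matched
# or mismatched the intended ranking under each vocabulary.
paired <- matrix(c(40,  15,    # eLife match:    alternative match / mismatch
                   150, 96),   # eLife mismatch: alternative match / mismatch
                 nrow = 2, byrow = TRUE,
                 dimnames = list(eLife = c("match", "mismatch"),
                                 alternative = c("match", "mismatch")))
mcnemar.test(paired)
# The discordant cells (mismatch under one vocabulary, match under the other)
# drive the test; their ratio corresponds to the matched-pairs odds ratio.
```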

Decision Letter 2

Roland G Roberts

27 Jun 2024

Dear Tom,

Thank you for your patience while we considered your revised manuscript "Finding the right words to evaluate research: An empirical appraisal of eLife’s assessment vocabulary" for publication as a Preregistered Research Article at PLOS Biology. This revised version of your manuscript has been evaluated by the PLOS Biology editors and the Academic Editor.

Based on our Academic Editor's assessment of your revision, we are likely to accept this manuscript for publication, provided you satisfactorily address the following data and other policy-related requests.

IMPORTANT - please attend to the following:

a) Please remove the first phrase from the title, making it simply "An empirical appraisal of eLife’s assessment vocabulary."

b) Thank you for adding the extra column to the design table (currently Table D1 in the supplement); please could you move this Table to the main paper and adjust the manuscript to accommodate it?

c) Please can you include the following rubric in an "Acknowledgements" section (after the "Conclusion"): "The Stage 1 version of this pre-registered research article has been peer-reviewed and recommended by Peer Community In Registered Reports (https://rr.peercommunityin.org/articles/rec?id=488)". [This is a formula that we have used previously in manuscripts that have come to us down the PCI route]

d) Thanks for providing the underlying data in OSF. Please cite the location of the data clearly in all relevant main Figure legends, e.g. “The data underlying this Figure can be found in S1 Data” or “The data underlying this Figure can be found in https://osf.io/mw2q4/files/osfstorage"

e) Please move your ethics statement [“This study was approved by a University of Melbourne ethics board (project ID: 26411).”] to an appropriate part of the Methods section.

f) Please remove the funding information and competing interests statement from the manuscript file; it seems that these are already in the Editorial Manager metadata, from which they will automatically be pulled.

g) Please move the "Research transparency statement" into the main manuscript. Perhaps next to the Acknowledgements? (I don't have a precedent for this, so one of my Production colleagues might move it later)

As you address these items, please take this last chance to review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the cover letter that accompanies your revised manuscript.

We expect to receive your revised manuscript within two weeks.

To submit your revision, please go to https://www.editorialmanager.com/pbiology/ and log in as an Author. Click the link labelled 'Submissions Needing Revision' to find your submission record. Your revised submission must include the following:

- a cover letter that should detail your responses to any editorial requests, if applicable, and whether changes have been made to the reference list

- a Response to Reviewers file that provides a detailed response to the reviewers' comments (if applicable; if not applicable, please do not delete your existing 'Response to Reviewers' file)

- a track-changes file indicating any changes that you have made to the manuscript.

NOTE: If Supporting Information files are included with your article, note that these are not copyedited and will be published as they are submitted. Please ensure that these files are legible and of high quality (at least 300 dpi) in an easily accessible file format. For this reason, please be aware that any references listed in an SI file will not be indexed. For more information, see our Supporting Information guidelines:

https://journals.plos.org/plosbiology/s/supporting-information

*Published Peer Review History*

Please note that you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. Please see here for more details:

https://blogs.plos.org/plos/2019/05/plos-journals-now-open-for-published-peer-review/

*Press*

Should you, your institution's press office or the journal office choose to press release your paper, please ensure you have opted out of Early Article Posting on the submission form. We ask that you notify us as soon as possible if you or your institution is planning to press release the article.

*Protocols deposition*

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Please do not hesitate to contact me should you have any questions.

Sincerely,

Roli

Roland Roberts, PhD

Senior Editor

rroberts@plos.org

PLOS Biology

------------------------------------------------------------------------

DATA NOT SHOWN?

- Please note that per journal policy, we do not allow the mention of "data not shown", "personal communication", "manuscript in preparation" or other references to data that is not publicly available or contained within this manuscript. Please either remove mention of these data or provide figures presenting the results and the data underlying the figure(s).

Decision Letter 3

Roland G Roberts

9 Jul 2024

Dear Tom,

Thank you for the submission of your revised Preregistered Research Article "An empirical appraisal of eLife’s assessment vocabulary" for publication in PLOS Biology. On behalf of my colleagues and the Academic Editor, Christopher Chambers, I'm pleased to say that we can in principle accept your manuscript for publication, provided you address any remaining formatting and reporting issues. These will be detailed in an email you should receive within 2-3 business days from our colleagues in the journal operations team; no action is required from you until then. Please note that we will not be able to formally accept your manuscript and schedule it for publication until you have completed any requested changes.

Please take a minute to log into Editorial Manager at http://www.editorialmanager.com/pbiology/, click the "Update My Information" link at the top of the page, and update your user information to ensure an efficient production process.

PRESS: We frequently collaborate with press offices. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximise its impact. If the press office is planning to promote your findings, we would be grateful if they could coordinate with biologypress@plos.org. If you have previously opted in to the early version process, we ask that you notify us immediately of any press plans so that we may opt out on your behalf.

We also ask that you take this opportunity to read our Embargo Policy regarding the discussion, promotion and media coverage of work that is yet to be published by PLOS. As your manuscript is not yet published, it is bound by the conditions of our Embargo Policy. Please be aware that this policy is in place both to ensure that any press coverage of your article is fully substantiated and to provide a direct link between such coverage and the published work. For full details of our Embargo Policy, please visit http://www.plos.org/about/media-inquiries/embargo-policy/.

Thank you again for choosing PLOS Biology for publication and supporting Open Access publishing. We look forward to publishing your study. 

Sincerely, 

Roli

Roland G Roberts, PhD

Senior Editor

PLOS Biology

rroberts@plos.org

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Text. Example stimuli and attention check.

    (DOCX)

    pbio.3002645.s001.docx (122.6KB, docx)
    S2 Text. Sample size planning.

    (DOCX)

    pbio.3002645.s002.docx (13.7KB, docx)
    S3 Text. Task instructions.

    (DOCX)

    pbio.3002645.s003.docx (45.8KB, docx)
    S4 Text. Peer Community in Registered Reports Design Template.

    (DOCX)

    pbio.3002645.s004.docx (18.9KB, docx)
    S5 Text. Expanded versions of Table 3 and Table 4 percentile estimates with confidence intervals.

    (DOCX)

    pbio.3002645.s005.docx (17.4KB, docx)
    S6 Text. Frequency of implied rankings.

    (DOCX)

    pbio.3002645.s006.docx (17.6KB, docx)
    S7 Text. Further explanation of Kendall’s distance.

    (DOCX)

    pbio.3002645.s007.docx (14KB, docx)
    S8 Text. Demographics of Prolific members.

    (DOCX)

    pbio.3002645.s008.docx (13.5KB, docx)
    Attachment

    Submitted filename: Response to reviews.docx

    pbio.3002645.s009.docx (308.5KB, docx)
    Attachment

    Submitted filename: Response to reviews.docx

    pbio.3002645.s010.docx (308.5KB, docx)

    Data Availability Statement

    All data, materials, and analysis scripts are publicly available on the Open Science Framework (https://osf.io/mw2q4/files/osfstorage/). A reproducible version of the manuscript and associated computational environment is available in a Code Ocean container (https://doi.org/10.24433/CO.4128032.v1).

