PLOS ONE. 2021 Oct 28;16(10):e0257340. doi: 10.1371/journal.pone.0257340

How faculty define quality, prestige, and impact of academic journals

Esteban Morales 1,*, Erin C McKiernan 2, Meredith T Niles 3, Lesley Schimanski 4, Juan Pablo Alperin 4,*
Editor: Florian Naudet
PMCID: PMC8553056  PMID: 34710102

Abstract

Despite the calls for change, there is significant consensus that when it comes to evaluating publications, review, promotion, and tenure processes should aim to reward research that is of high "quality," is published in "prestigious" journals, and has an "impact." Nevertheless, such terms are highly subjective, making it difficult to ascertain precisely what such research looks like. Accordingly, this article responds to the question: how do faculty from universities in the United States and Canada define the terms quality, prestige, and impact of academic journals? We address this question by surveying 338 faculty members from 55 different institutions in the U.S. and Canada. Although the study relies on self-reported definitions that are not linked to faculty behavior, its findings highlight that faculty often describe these distinct terms in overlapping ways. Additionally, results show that the marked variance in definitions across faculty does not correspond to demographic characteristics. This study’s results highlight the subjectivity of common research terms and the importance of implementing evaluation regimes that do not rely on ill-defined and potentially context-specific concepts.

Introduction

Although faculty work involves a wide range of activities and priorities [1–3], evidence confirms that faculty, even those at teaching and public institutions, believe that research is the most highly valued aspect of their work [4–6]. It is therefore unsurprising that despite significant efforts to broaden how university faculty are assessed, research continues to be treated as the main component of a professor’s job [6–8] and, accordingly, research assessment features prominently in how faculty are evaluated for career advancement. Yet, despite its perceived importance and critical role, there is still debate about how to evaluate research outputs.

Research assessment, which is codified for faculty career progression in review, promotion, and tenure (RPT) processes, is itself a controversial topic that has been subject to much debate over the years [cf. 9]. Critics have argued that current assessment practices overemphasize the use of metrics—such as the Journal Impact Factor (JIF) [10]—fail to recognize new and evolving forms of scholarship, such as datasets and research software [11–14], and fail to encourage reproducible science [15], or, worse, that current approaches encourage unethical practices, such as gift authorship, p-value hacking, or result manipulation [16, 17]. These discussions point to different perspectives regarding what constitutes research that is worthy of being celebrated and rewarded.

Despite the calls for change and the differences of opinion, there is significant consensus that when it comes to evaluating publications, the RPT process should aim to reward research that is of high "quality" and has "impact" [9]. However, such terms are highly subjective, making it difficult to ascertain precisely what such research looks like and where it ought to be published. Furthermore, their subjectivity presents additional challenges for comparing research and individuals, as is regularly done during the RPT process. The use of these subjective concepts, and others like them, may serve primarily as “rhetorical signalling device[s] used to claim value across heterogeneous institutions, researchers, disciplines, and projects rather than a measure of intrinsic and objective worth” [18].

Others have previously noted the lack of clear definitions surrounding many of the terms and concepts used in research assessment [18–21]. Without definitions, individuals and committees are bound to apply different standards, which inevitably leads to inequities in how faculty and research are evaluated. If, as Hatch [20] suggests, “most assessment guidelines permit sliding standards,” then the definition used in any given situation can easily shift depending on whose work is being assessed, in ways that allow (or even encourage) biases to creep into the evaluation process (be they conscious or unconscious). Even if individuals are consistent and unbiased in how they apply their own definitions, there is rarely agreement in assessments between academics, even among those of the same discipline. Moore and colleagues [18] point to such conflicting assessments as the clearest example of the lack of any agreed-upon definition of ‘excellence’—an argument that can easily be extended to other terms and concepts commonly used in research assessment.

The known pernicious effects of using ill-defined criteria have resulted in calls to “swap slogans for definitions” [20] and for greater “conceptual clarity” [22] in research assessment. To aid in this effort, this study focuses on faculty’s perceptions of academic journals, as journals carry significant weight in current RPT processes, especially at universities in the United States and Canada [24]. Accordingly, this study addresses the question: how do faculty understand the concepts of ‘quality’, ‘prestige’, and ‘impact’ as they pertain to academic journals?

Previous research

The current study builds on our previous work studying the documents related to RPT processes at institutions in the United States and Canada [10, 23, 24], as well as similar work by others in this area [25–27]. In one of our previous studies [23], we reported that nearly 60% of institutions overall and nearly 80% of research-intensive universities mentioned ‘impact’ in their RPT documents. Using the same data and analysis, but previously unreported, we also found that 73% of institutions in our sample and 89% of research-intensive universities mentioned ‘quality’ in their RPT documents, and that those percentages were 29% and 47%, respectively, for mentions of ‘prestige.’ That is to say, there was a high prevalence of such concepts in academic evaluations, but a closer reading of these instances shows that few of these documents gave a clear definition of what these terms meant or how they were to be measured.

Despite the frequent use of these terms in relation to research assessment, it is difficult to know how faculty understand them, especially in relation to how they will assess others and how they expect to be assessed themselves. A survey by DeSanto and Nichols [28] found that “…a significant number of faculty [are] unsure of their department’s RPT expectations for demonstrating scholarly impact” (p. 156). Results from that same study show there is substantial disagreement among faculty as to how impact should be measured and evaluated, with many pushing for traditional journal-level metrics like the Journal Impact Factor (JIF) and a small percentage favoring new article-level metrics and altmetrics. Similarly, there is disagreement as to how research quality should be measured, with little evidence suggesting that it can be assessed through citation-based metrics like the JIF [29]. In lieu of an objective measure of quality, it has become common to use the perceived prestige of the publication venue (i.e., the journal where an article is published) as a proxy. To confound things further, prestige is itself sometimes associated with the JIF [e.g., 30], even while the association of the JIF with both quality and prestige has been heavily criticized, most notably in the San Francisco Declaration on Research Assessment [31, 32], the HuMetricsHSS Initiative [33], and the Leiden Manifesto [34].

This interplay between the JIF, quality, prestige, and impact, and how it features in research assessment, is also evident when faculty make decisions about where to publish their research. A recent faculty survey [35] shows that nearly 80% of faculty report the JIF as one of the most important factors influencing their decisions on where to publish, something echoed in our own survey findings [24]. In some instances, faculty are guided by librarians to use the JIF to determine the prestige of a given journal [30] but, according to the same Ithaka survey, the majority of faculty make such decisions based on their own perceptions of the quality, prestige, and potential impact of a given journal. As the report states: “less than 20% of respondents reported they receive help determining where to publish to maximize impact, and assessing impact following publication” [35]. Further complicating our understanding of how faculty understand and use these concepts is the fact that demographic characteristics—such as gender and career stage—significantly affect how scholarship is perceived and practiced [36, 37]. In our own previous work [24], for example, we found that women tend to publish fewer articles than men but often assign more importance to the number of publications than their male peers.

These surveys show that, when it comes to making decisions about where to publish, faculty see an interplay between the notions of quality, prestige, and impact and have themselves linked these to metrics like the JIF, although their precise understanding of these terms remains unclear. To some extent, these interconnected concepts have been codified in the RPT guidelines and documents that govern academic careers. As noted above, in previous work we uncovered the high incidence of the terms quality and impact in these documents [23] and, in another study, we found that the JIF and related terms appear in the RPT documents of 40% of R-type institutions, with the overwhelming majority of those mentions supporting their use [10]. Similarly, Rice et al. [26] found mentions of the JIF in nearly 30% of the RPT guidelines from several countries, and also found support for the measure’s use. Moreover, we found that although it is not always stated what the JIF is intended to measure, 63% of institutions that mentioned the JIF in their documents had at least one instance of associating the metric with quality, 40% had at least one mention associating it with impact, and 20% with prestige [10]. These results are in stark contrast to a number of studies showing that the JIF has little or nothing to do with research quality [38–41].

The complexity of these terms, intertwined with their application in faculty behavior and promotion decisions, demonstrates a need to further understand how faculty themselves perceive them. Their persistent use in both publication decisions and research assessment indicates their importance, and their ambiguity, regardless of its origin, suggests that further study is needed. As such, we sought to answer the question: how do faculty from universities in the United States and Canada define the terms quality, prestige, and impact?

Methods

To conduct this study, we sent an online survey, administered using SurveyMonkey, to 1,644 faculty members from 334 academic units at 60 universities in Canada and the United States. As described in greater detail in Niles et al. [24], we created this contact list based on a random sample of universities from which we had previously collected RPT guidelines. Faculty were invited to participate in the survey between September and October 2018. Ethics approval was provided through Simon Fraser University under application number 2018s0264. The study was not pre-registered. Written consent was obtained prior to data collection. We received responses from 338 faculty (21%) from 55 different institutions. Of these, 84 (25%) were faculty at Canadian institutions and the remaining 254 (75%) were from the United States; 223 (66%) were from R-type institutions, 111 (33%) from M-type institutions, and 4 (1%) from B-type institutions. Full methodological details and demographic reporting of respondents can be found in Niles et al. [24].

In this paper, we present a detailed analysis of the responses to the question “In your own words, how would you define the following terms, sometimes used to describe academic journals?” The terms included in this question were high quality, prestigious, and high impact. Of the 338 respondents, 249 (74%) answered this set of open-ended questions, for an effective response rate of 15% of the 1,644 invitations that were sent. We analyzed the responses using open coding and constant comparison [42]. To do so, we first organized all the responses into segments—sentences, or parts of sentences, that convey a single idea. We then assigned codes to these segments, grouping those that represent the same idea or creating new codes when a new idea appeared. Each response could contain one or multiple segments, each of which could be coded differently, allowing a single response to have multiple codes. This process continued until we developed a codebook that included the name of each code, its description, and examples, as shown in Tables 1–3 (below, in the results section).
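To make this coding structure concrete, the following is a minimal sketch (in Python) of how multi-coded responses could be represented and tallied. It is not the authors' actual workflow: the example segments reuse quotes reported later in this article, the response identifiers are taken from those quotes, and the counting conventions are assumptions made purely for illustration.

```python
# Illustrative sketch only: a possible data structure for open-coded survey
# responses, where each response is split into segments and each segment
# carries one code, so one response can map to several codes.
from collections import Counter

# Hypothetical coded data: response id -> list of (segment, code) pairs.
coded_responses = {
    848: [("rigorous review and selection of articles to publish", "Review process")],
    1624: [("Good methodology", "Value"), ("quality writing", "Value")],
    500: [("Journal impact factor", "Impact factor and metrics")],
}

# Tally how often each code appears across all segments.
segment_counts = Counter(code for segments in coded_responses.values()
                         for _, code in segments)

# Share of responses containing at least one segment with a given code.
# (This counting convention is an assumption, not the authors' definition.)
def share_of_responses(code):
    hits = sum(any(c == code for _, c in segs) for segs in coded_responses.values())
    return hits / len(coded_responses)

print(segment_counts)
print(f"{share_of_responses('Value'):.0%}")
```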

Table 1. Categories identified in the definitions of high quality.

Category | Description of the category | % of responses | Examples
Impact factor and metrics | Established measurement of the journal based on the articles published and the citations generated | 11.5% | "High journal ranking" [373]; "Journal impact factor" [500]
Value | Quality and applicability of the articles published in the journal, including its scientific rigor and contribution to the field | 35.3% | "Quality of the research being presented" [582]; "Publishes well-researched, innovative articles" [703]
Readership | Focuses on how much the published work is read by academic and non-academic people | 2.4% | "Large readership" [62]; "Is widely read" [621]
Reputation | Influence and recognition of all the elements related to the journal, such as the editorial board, the scholars who publish, or the journal itself | 8.5% | "Well-established, well-known editorial board" [568]; "Name recognition of journal" [162]
Review process | Elements related to the process of reviewing the articles that are published, such as editors, feedback, and rejection rate | 42.2% | "Peer-reviewed publication" [1610]; "Rigorous, reviewed by top reviewers in the field" [600]

Name, description, percentage of responses, and examples of the categories found in participants’ definitions of High Quality. Numbers in square brackets represent anonymized identification of participants.

Table 3. Categories identified in the definitions of high impact.

Category | Description of the category | % of responses | Examples
Impact factor and metrics | Established measurement of the journal based on the articles published and the citations generated | 49.2% | "Interplay between Impact Factor and number of cites per year" [500]; "Use of impact factor to identify a journal's worthiness" [862]
Impact on academia | Relevance and influence of the articles on future research | 16.0% | "Immediately impacting the next work to be published" [583]; "Influences a lot of other researchers" [391]
Impact outside academia | Impact of the articles on practices and public policies, as well as reproduction in media outlets | 10.7% | "Immediate impact on practice" [1256]; "Impact on policy & practice" [478]
Quality | Scientific rigor and applicability of the articles published in the journal | 7.8% | "Rigorous, robust, important, field changing, important, correct" [488]; "High quality research" [732]
Readership | Focuses on how much the published work is read by academic and non-academic people | 16.4% | "High readership, broad readership" [154]; "Read widely" [1171]

Name, description, percentage of responses, and examples of the categories found in participants’ definitions of High Impact. Numbers in square brackets represent anonymized identification of participants.

To determine the inter-rater reliability of the codebook, two researchers independently coded the same randomly chosen set of 20 responses for each of the three terms and compared the results using NVivo 12; this led to adjustments to the codebook and, ultimately, an overall kappa value of 0.87 [43]. After achieving this level of inter-rater reliability, all the responses were coded by one of the authors of this study (EM). Finally, the results of the open-coding process were analyzed by running chi-square tests in Excel to examine how the definitions varied across different respondent characteristics.
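As an illustration of the agreement statistic reported above, the sketch below computes Cohen's kappa for two hypothetical coders using scikit-learn. The authors computed their kappa of 0.87 in NVivo 12, so this is not their procedure; it is only a comparable chance-corrected calculation on invented segment-level codes.

```python
# Minimal sketch, assuming one code per segment and two coders; data invented.
from sklearn.metrics import cohen_kappa_score

coder_a = ["Review process", "Value", "Impact factor and metrics", "Reputation", "Value"]
coder_b = ["Review process", "Value", "Impact factor and metrics", "Value", "Value"]

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(coder_a, coder_b)
print(round(kappa, 2))  # ~0.71 for this toy data
```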

Results

We present the results in three parts: first, we describe the results of the open-coding process for the three terms, as the codes themselves capture the definitions used by respondents; second, we analyze the frequencies of each code in relation to respondents’ demographic information; and, finally, given the high incidence of the JIF in faculty definitions of the three terms, we explore the relationship between faculty definitions and the presence of the JIF in the RPT guidelines of the faculty member’s institution and academic unit.

Defining quality, prestige and impact

The analysis of the definitions provided for high quality resulted in 295 segments. These segments were categorized into five groups: Impact factor and Metrics, Value, Readership, Reputation, and Review Process. Table 1 provides the description of each group, as well as some examples for each of them.

The result of this coding process shows that faculty most commonly define high quality academic journals based on the review process, referring to the perceived rigor of the process of evaluating, gatekeeping and editing academic articles for the journal (e.g., “rigorous review and selection of articles to publish” [848]). Another common view on what determines the quality of an academic journal is related to the perceived value of the articles published within it, including how consequential they are for the field, the quality of the writing, and the methodological standards of the research (e.g., “Good methodology, quality writing” [1624]).

The analysis of the definitions provided for prestige resulted in 262 segments. These segments were coded under six categories: Impact Factor and Metrics, Quality and Relevance, Readership, Relation to Associations, Reputation, and Review Process. Table 2 provides definitions for each of the categories, as well as some examples for each of them.

Table 2. Categories identified in the definitions of prestigious.

Category | Description of the category | % of responses | Examples
Impact factor and metrics | Established measurement of the journal based on the articles published and the citations generated | 18.3% | "Highly ranked" [561]; "Super impact factor and circulation" [761]
Value | Scientific rigor and applicability of the articles published in the journal | 12.2% | "Field changing, important, correct" [488]; "Usually very good quality" [1351]
Readership | Focuses on how much the published work is read by academic and non-academic people | 4.6% | "Widely read" [1171]; "Large reading audience" [86]
Relation to associations | Relation of the journal to organizations that support its operation | 4.2% | "Association sponsored" [812]; "Affiliated with a widely recognized organization" [577]
Reputation | Recognition of the journal itself, the authors of the articles published in the journal, or the authors citing the journal | 42.7% | "High name recognition" [581]; "Held in high regard by researchers" [214]
Review process | Elements related to the process of reviewing the articles that are published, such as editors, feedback, and rejection rate | 17.9% | "Expert peer-review" [388]; "Hard to get accepted for publication" [365]

Name, description, percentage of responses, and examples of the categories found in participants’ definitions of Prestige. Numbers in square brackets represent anonymized identification of participants.

The result of this coding process shows that the prestige of academic journals is, in a somewhat circular fashion, most commonly defined by their reputation, which is related to the name recognition of the journal, the people who publish in it or the people in charge of the review process. This was exemplified by definitions like “well-regarded by others in a field” [719], “the journal has a known name in my field of study” [1565] or “well regarded with global recognition” [827]. Prestige was also often defined based on Impact Factor and Metrics and by the Review Process used by the journal.

Finally, the analysis of the definitions provided for high impact resulted in 242 segments. These segments were coded under five categories: Impact Factor and Metrics, Impact on Academia, Impact Outside Academia, Quality, and Readership. Table 3 provides definitions for each of the categories, as well as some examples for each of them.

The result of this coding process shows that the high impact of academic journals is defined by the JIF or other citation metrics in almost half of all instances. Definitions in this category included: “some factor that assess the impact” [581], “number of citations/ papers” [478], or “Interplay between Impact Factor and number of cites per year” [500]. To a lesser extent, high impact was defined by the volume of readership the research receives and by the impact that the work had on practice, public policy, or the media.

Differences by demographics

We performed chi-square tests on the definitions of the three terms to determine whether they varied by the gender, age, or academic discipline of the faculty member, or by the type of institution to which they belong (R-type or M-type). The definitions provided by surveyed faculty did not vary significantly across any of these characteristics, which implies that how academics conceive of these terms is independent of their gender, age, academic discipline, and the type of institution at which they are employed (see supporting information).

Differences by RPT guidelines

Finally, given the importance that academics give to the JIF and other metrics when defining high quality, prestige, and high impact, we compared the survey responses with the RPT documents from the respondents’ academic units. In particular, we performed a chi-square test to see whether respondents who worked in academic units that mentioned the JIF and related terms in their RPT documents were any more likely to use a definition coded as “Impact Factor and Metrics” for each of the terms. Fig 1 shows the prevalence of this definition among the two groups of faculty (those in academic units that mention the JIF and those in units that do not). We do not find statistically significant differences between these groups [χ²(5, N = 202) = 0.85, p > .05], indicating that the mention of the JIF and related terms in RPT documents is not associated with how faculty define high quality, prestige, and high impact.
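For readers unfamiliar with the test reported here, the sketch below shows how a chi-square test of independence can be run on counts of respondents who did or did not invoke “Impact Factor and Metrics,” split by whether their unit’s RPT document mentions the JIF. The counts are invented, and the table is simplified to 2×2 (one degree of freedom), whereas the test reported above has five degrees of freedom; it is illustrative only, not the authors' analysis.

```python
# Hedged sketch of a chi-square test of independence on a contingency table;
# all counts below are hypothetical.
from scipy.stats import chi2_contingency

#            used metrics definition, did not
observed = [[45, 60],   # RPT document mentions the JIF
            [40, 57]]   # RPT document does not mention the JIF

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}, N={sum(map(sum, observed))}) = {chi2:.2f}, p = {p:.3f}")
```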

Fig 1. Use of “Impact factor and metrics” as a definition of various terms.

Percentage of responses that contained at least one segment in participants’ definitions of High Quality, Prestige, and High Impact that relies on “Impact Factor and metrics”, as a proportion of all the RPT guidelines that mentioned the JIF and of the guidelines that did not.

Discussion

Our analysis of how faculty define quality, prestige, and impact of academic journals suggests three important outcomes. First, it shows that these three terms, despite referring to three very different concepts, are often defined in overlapping ways (with references to each other and to themselves). Second, it shows that individual faculty members apply very different definitions to each of the terms, with no single definition used by over 50% of respondents for any of the three terms. Finally, the marked variance in definitions across faculty does not correspond to demographic characteristics, such as age, gender, or discipline, nor to characteristics of the institution for which they work, including mentions of the JIF in their academic unit’s RPT guidelines.

While it is known that there is a lack of definitions for many of the terms and concepts used in research assessment [18–21], this study explores how three key terms are understood by faculty in the absence of such definitions. Specifically, our results indicate that the concepts of quality, prestige, and impact are often seen as synonymous by faculty, with high quality sometimes being defined by reputation, which is itself one of the most common definitions given for prestige; similarly, high impact is at times defined by quality, and high quality is at times defined by the impact factor. In fact, all three terms—quality, prestige, and impact—are defined through the use of the Impact Factor and other citation metrics by some faculty (with the term impact itself defined in this way nearly half the time).

Given the subjective nature of these concepts and the overlapping definitions provided by faculty, it is perhaps unsurprising to see that, in all three cases, some faculty resort to the use of quantitative citation metrics as the basis for their definition. The rise in the use of metrics in research assessment has been well documented [44, 45], including in our own work that showed their prevalence in RPT documents and guidelines [10, 23]. In this sense, our study confirms that some faculty believe that quality, prestige, and impact of academic journals can be understood through citation metrics. However, contrary to our hypothesis that faculty would be more likely to think in terms of metrics if they were mentioned in the RPT guidelines for their institution, our study showed that respondents were no more or less likely to use citation metrics in their definitions when this was the case. Indeed, results of this study suggest that although RPT guidelines are meant to govern how quality, prestige, and impact are framed at universities, they might not have as much impact as intended among faculty. In other words, despite their importance in determining individual career advancement, the mention of the JIF in RPT guidelines is not correlated with how faculty define quality, prestige, or impact.

This, of course, only raises further questions about how faculty arrive at their understanding of quality, prestige, and impact. While our study does not offer direct answers to these questions, it does point to the wide range of definitions that are currently used by faculty when considering these career-determining aspects of research. Our study shows that in the absence of common definitions, faculty are applying their own understanding of each term and are doing so in ways that differ from their own colleagues, highlighting the subjectivity of the terms. This may stem from their own personal experiences with certain journals, reviewers, editors, and colleagues. For example, the largest share of respondents in our survey perceived quality as being related to the review process of a journal. However, most journals do not make article reviews public, suggesting that the review process of a journal is not widely known or understood beyond individuals’ own experiences with it. As it is widely documented that the peer review process can vary significantly, this highlights how personal experiences may shape how people perceive these different terms. This is precisely the situation that Hatch [20] warns could lead to biases (conscious or not) in evaluation processes and could explain in part why faculty are generally unsure of what is expected of them for tenure [28].

More broadly, our findings present a challenge for those seeking the most effective ways to bring about research evaluation reform. Unfortunately, our initial exploration suggests that the pathway for making changes to research assessment may not be as simple as clarifying how definitions are presented in assessment guidelines, given that the inclusion of metrics-related terms in RPT documents was not a determining factor in whether faculty used citation metrics in defining high quality, prestige, or high impact. This is not to say that such changes would not be worthwhile; guidelines, like policies and mandates, are an important way for departments and institutions to signal their values, even when those guidelines are not strictly adhered to. However, our research also suggests that determining how to define these specific terms would be challenging, given how subjectively faculty themselves define them. Overall, our research points to the need to understand the additional cultural and environmental factors that shape faculty thinking and that cut across age, gender, institution type, and discipline.

The reliance on metrics shown throughout this study further highlights two related issues. First, when it comes to understanding quality, prestige, and impact in evaluation, there is a tendency among faculty to gravitate towards definitions that facilitate comparisons between people and outputs, which, given the subjectivity of such definitions, creates challenges for comparison, especially across disparate disciplines. Second, in the search for such comparable measures, metrics like the JIF have come to be seen as a way of objectively assessing these qualities. These issues are problematic, in part because comparing measures of quality, prestige, and impact across research or individuals is itself questionable, as these concepts are context dependent, and in part because such comparisons fail to account for the numerous limitations and biases that are present in the creation and implementation of citation metrics [46–48].

In response to these issues, it is worth recognizing how efforts towards the responsible use of metrics in research assessment—advanced by initiatives such as the San Francisco Declaration on Research Assessment (DORA) [31, 32], the Leiden Manifesto [34], the HuMetricsHSS Initiative [33], the Hong Kong Principles [49], and narrative-based evaluations (see Saenen et al. [50] for a review)—may indeed help us to move away from the most problematic uses of metrics, those laden with the same challenges as ambiguous definitions. DORA advises against using any journal-level metrics, and the JIF in particular, as measures of research quality in evaluations and recommends instead relying on a variety of article-level metrics and qualitative indicators to better assess individual works; the Leiden Manifesto cautions against over-reliance on and incorrect uses of quantitative metrics, and emphasizes the importance of qualitative assessments as well as contextual and disciplinary considerations; HuMetricsHSS encourages the development of value-based assessments and proposes a definition of ‘quality’ as “a value that demonstrates one’s originality, willingness to push boundaries, methodological soundness, and the advancement of knowledge”; the Hong Kong Principles recommend assessing research practices, such as complete reporting or the practices of open science, instead of citation-based metrics; and a number of funders and institutions worldwide are increasingly promoting the use of narrative approaches to allow researchers to more fully describe their work and its importance or influence, in an effort to decrease the reliance on metrics and avoid problematic quantitative comparisons altogether [50].

Complementing these initiatives, the results of this study further highlight the need to implement evaluation regimes that do not rely on comparisons of ill-defined concepts like those discussed here. Indeed, while we acknowledge the complexity of academic careers, our findings demonstrate that when terms such as “high quality”, “prestige”, and “high impact” are used in research and journal assessment, people perceive them differently across fields, contexts, and individual experiences. The use of such terms may lead academics in many directions—which may be quite desirable in terms of promoting a wide range of academic activities and outputs—but will likely lead to inconsistencies in how these activities are judged by RPT committees. Given the impossibility of universal definitions, it is understandable that many faculty fall back on measures that are comparable, but that, in reality, cannot capture the diversity of interpretations that exist within each context. As such, the findings of this study invite us to reconsider how, if at all, we want quality, prestige, and impact to be critical components of research assessment.

Limitations

There are several limitations to the scope and interpretation of this work. First, the geographic focus on Canada and the U.S. means that this work may not be representative of other regions, especially of places without comparable academic reward systems. As well, given the constraints of a brief survey instrument, we acknowledge that the answers reported may not fully reflect the nuance of how faculty understand the terms studied. Future research could better capture their understanding through other means, such as in-depth interviews. Finally, given that the survey relies on self-reported information, we acknowledge that the definitions provided may not reflect those actually applied by researchers when assessing academic journals. Future research could better connect individual responses with the results of evaluations that rely on the terms studied.

Supporting information

S1 Table. Breakdown of definitions by demographic characteristics.

Overview of the participants’ definition of Quality, Prestige and Impact by their gender, institution type, and age. The color scale illustrates the distribution of responses, where green indicates a high percentage of responses and red indicates a low percentage of responses.

(DOCX)

S2 Table. Breakdown of definitions by demographic characteristics.

Overview of the participants’ definition of Quality, Prestige, and Impact by discipline. The color scale illustrates the distribution of responses, where green indicates a high percentage of responses and red indicates a low percentage of responses.

(DOCX)

Data Availability

Survey responses can be found at the following data publication: Niles, Meredith T.; Schimanski, Lesley A.; McKiernan, Erin C.; Alperin, Juan Pablo, 2020, "Data for: Why we publish where we do", https://doi.org/10.7910/DVN/MRLHNO, Harvard Dataverse, V1. Data regarding RPT documents can be found at the following data publication: Alperin, Juan Pablo; Muñoz Nieves, Carol; Schimanski, Lesley; McKiernan, Erin C.; Niles, Meredith T., 2018, "Terms and Concepts found in Tenure and Promotion Guidelines from the US and Canada", https://doi.org/10.7910/DVN/VY4TJE, Harvard Dataverse, V3, UNF:6:PQC7QoilolhDrokzDPxxyQ== [fileUNF].

Funding Statement

Funding for this project was provided to JPA, MTN, ECM, and LAS from the Open Society Foundations (OR2017-39637). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Fox JW. Can blogging change how ecologists share ideas? In economics, it already has. Ideas Ecol Evol. 2012;5: 74–77. doi: 10.4033/iee.2012.5b.15.f
2. Gruzd A, Staves K, Wilk A. Tenure and promotion in the age of online social media. Proc Am Soc Inf Sci Technol. 2011;48: 1–9. doi: 10.1002/meet.2011.14504801154
3. Miller JE, Seldin P. Changing Practices in Faculty Evaluation: Can better evaluation make a difference? In: AAUP [Internet]. 2014. Available: https://www.aaup.org/article/changing-practices-faculty-evaluation
4. Acker S, Webber M. Discipline and publish: The tenure review process in Ontario universities. Assembling and Governing the Higher Education Institution. Palgrave Macmillan, London; 2016. pp. 233–255. doi: 10.1057/978-1-137-52261-0_13
5. Bergeron D, Ostroff C, Schroeder T, Block C. The Dual Effects of Organizational Citizenship Behavior: Relationships to Research Productivity and Career Outcomes in Academe. Hum Perform. 2014;27: 99–128. doi: 10.1080/08959285.2014.882925
6. Harley D, Acord SK, Earl-Novell S, Lawrence S, King CJ. Assessing the future landscape of scholarly communication: An exploration of faculty values and needs in seven disciplines. Cent Stud High Educ. 2010 [cited 27 Nov 2016]. Available: http://escholarship.org/uc/item/15x7385g
7. Chen CY. A Study Showing Research Has Been Valued over Teaching in Higher Education. J Scholarsh Teach Learn. 2015; 15–32. doi: 10.14434/josotl.v15i3.13319
8. Gordon CK. Organizational rhetoric in the academy: Junior faculty perceptions and roles. University of North Texas. 2008. Available: http://digital.library.unt.edu/ark:/67531/metadc9779/m2/1/high_res_d/thesis.pdf
9. Schimanski L, Alperin JP. The evaluation of scholarship in the academic promotion and tenure process: Past, present, and future. F1000Research. 2018. doi: 10.12688/f1000research.16493.1
10. McKiernan EC, Schimanski LA, Muñoz Nieves C, Matthias L, Niles MT, Alperin JP. Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. eLife. 2019;8: e47338. doi: 10.7554/eLife.47338
11. Genshaft J, Wickert J, Gray-Little B, Hanson K, Marchase R, Schiffer P, et al. Consideration of Technology Transfer in Tenure and Promotion. Technol Innov. 2016; 197–20. doi: 10.3727/194982416X14520374943103
12. Howard J. Rise of “altmetrics” revives questions about how to measure impact of research. Chron High Educ. 2013;59: A6–A7.
13. Piwowar H. Altmetrics: Value all research products. Nature. 2013;493: 159. doi: 10.1038/493159a
14. Sanberg PR, Gharib M, Harker PT, Kaler EW, Marchase RB, Sands TD, et al. Changing the academic culture: Valuing patents and commercialization toward tenure and career advancement. Proc Natl Acad Sci. 2014;111: 6542–6547. doi: 10.1073/pnas.1404094111
15. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349: aac4716. doi: 10.1126/science.aac4716
16. Chapman CA, Bicca-Marques JC, Calvignac-Spencer S, Fan P, Fashing PJ, Gogarten J, et al. Games academics play and their consequences: how authorship, h-index and journal impact factors are shaping the future of academia. Proc R Soc B Biol Sci. 2019;286: 20192047. doi: 10.1098/rspb.2019.2047
17. Edwards MA, Roy S. Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition. Environ Eng Sci. 2017;34: 51–61. doi: 10.1089/ees.2016.0223
18. Moore S, Neylon C, Paul Eve M, Paul O’Donnell D, Pattinson D. “Excellence R Us”: university research and the fetishisation of excellence. Palgrave Commun. 2017;3: 16105. doi: 10.1057/palcomms.2016.105
19. Dean E, Elardo J, Green M, Wilson B, Berger S. The importance of definitions. Principles of Microeconomics: Scarcity and Social Provisioning. Open Oregon; 2016.
20. Hatch A. To fix research assessment, swap slogans for definitions. In: Nature [Internet]. 2019. Available: https://www.nature.com/articles/d41586-019-03696-w
21. van Mil JWF, Henman M. Terminology, the importance of defining. Int J Clin Pharm. 2016 [cited 9 Feb 2021]. doi: 10.1007/s11096-016-0294-5
22. Belcher B, Palenberg M. Outcomes and Impacts of Development Interventions: Toward Conceptual Clarity. Am J Eval. 2018;39: 478–495. doi: 10.1177/1098214018765698
23. Alperin JP, Muñoz Nieves C, Schimanski LA, Fischman GE, Niles MT, McKiernan EC. How significant are the public dimensions of faculty work in review, promotion and tenure documents? eLife. 2019;8. doi: 10.7554/eLife.42254
24. Niles MT, Schimanski LA, McKiernan EC, Alperin JP. Why we publish where we do: Faculty publishing values and their relationship to review, promotion and tenure expectations. Useche SA, editor. PLOS ONE. 2020;15: e0228914. doi: 10.1371/journal.pone.0228914
25. Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JPA, Goodman SN. Assessing scientists for hiring, promotion, and tenure. PLOS Biol. 2018;16: e2004089. doi: 10.1371/journal.pbio.2004089
26. Rice DB, Raffoul H, Ioannidis JPA, Moher D. Academic criteria for promotion and tenure in biomedical sciences faculties: cross sectional analysis of international sample of universities. BMJ. 2020; m2081. doi: 10.1136/bmj.m2081
27. Snider A, Hight K, Brunson A, Payakachat N, Franks AM. Qualitative Content Analysis of Research and Scholarship Criteria within Promotion and Tenure Documents of US Colleges/Schools of Pharmacy. Am J Pharm Educ. 2020 [cited 14 Dec 2020]. doi: 10.5688/ajpe7983
28. DeSanto D, Nichols A. Scholarly Metrics Baseline: A Survey of Faculty Knowledge, Use, and Opinion about Scholarly Metrics. Coll Res Libr. 2017;78: 150–170. doi: 10.5860/crl.78.2.150
29. Aksnes DW, Langfeldt L, Wouters P. Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories. SAGE Open. 2019;9: 215824401982957. doi: 10.1177/2158244019829575
30. Vinyard M, Colvin JB. How research becomes impact: Librarians helping faculty use scholarly metrics to select journals. Coll Undergrad Libr. 2018;25: 187–204. doi: 10.1080/10691316.2018.1464995
31. Cagan R. The San Francisco Declaration on Research Assessment. Dis Model Mech. 2013;6: 869–870. doi: 10.1242/dmm.012955
32. DORA. Good Practices–Funders–DORA. In: San Francisco Declaration on Research Assessment [Internet]. [cited 17 Sep 2018]. Available: https://sfdora.org/good-practices/funders/
33. HuMetricsHSS Initiative. Available: https://humetricshss.org/
34. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: The Leiden Manifesto for research metrics. Nat News. 2015;520: 429. doi: 10.1038/520429a
35. Blankstein M, Wolff-Eisenberg C. Ithaka S+R US Faculty Survey 2018. 2019. p. 70.
36. Holman L, Stuart-Fox D, Hauser CE. The gender gap in science: How long until women are equally represented? PLoS Biol. 2018;16: 1–20. doi: 10.1371/journal.pbio.2004956
37. Hammarfelt B. Recognition and reward in the academy: Valuing publication oeuvres in biomedicine, economics and history. Aslib J Inf Manag. 2017;69: 607–623. doi: 10.1108/AJIM-01-2017-0006
38. Brembs B. Prestigious Science Journals Struggle to Reach Even Average Reliability. Front Hum Neurosci. 2018;12. doi: 10.3389/fnhum.2018.00037
39. Fraley RC, Vazire S. The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power. Ouzounis CA, editor. PLoS ONE. 2014;9: e109019. doi: 10.1371/journal.pone.0109019
40. Munafò MR, Stothart G, Flint J. Bias in genetic association studies and impact factor. Mol Psychiatry. 2009;14: 119–120. doi: 10.1038/mp.2008.77
41. Szucs D, Ioannidis JPA. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature. PLoS Biol. 2017;15: e2000797. doi: 10.1371/journal.pbio.2000797
42. Strauss A, Corbin J. Basics of Qualitative Research. Sage Publications; 1990.
43. McHugh ML. Interrater reliability: the kappa statistic. Biochem Medica. 2012; 276–282. doi: 10.11613/BM.2012.031
44. Aung HH, Zheng H, Erdt M, Aw AS, Sin SJ, Theng Y. Investigating familiarity and usage of traditional metrics and altmetrics. J Assoc Inf Sci Technol. 2019;70: 872–887. doi: 10.1002/asi.24162
45. Wilsdon J, Allen L, Belfiore E, Campbell P, Curry S, Hill S, et al. The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. 2015 [cited 10 Feb 2021]. doi: 10.13140/RG.2.1.4929.1363
46. Brembs B, Button K, Munafò M. Deep impact: Unintended consequences of journal rank. Front Hum Neurosci. 2013;7: 291. doi: 10.3389/fnhum.2013.00291
47. Haustein S, Larivière V. The use of bibliometrics for assessing research: Possibilities, limitations and adverse effects. Incentives and Performance. Springer, Cham; 2015. pp. 121–139. doi: 10.1007/978-3-319-09785-5_8
48. Sugimoto CR, Larivière V. Measuring research: What everyone needs to know. Oxford University Press; 2018.
49. Moher D, Bouter L, Kleinert S, Glasziou P, Sham MH, Barbour V, et al. The Hong Kong Principles for assessing researchers: Fostering research integrity. PLOS Biol. 2020;18: e3000737. doi: 10.1371/journal.pbio.3000737
50. Saenen B, Hatch A, Curry S, Proudman V, Lakoduk A. Reimagining Academic Career Assessment: Stories of innovation and change. European University Association; January 2021. Available: https://www.eua.eu/resources/publications/952:reimagining-academic-career-assessment-stories-of-innovation-and-change.html

Decision Letter 0

Florian Naudet

15 May 2021

PONE-D-21-10048

How faculty define quality, prestige, and impact in research

PLOS ONE

Dear Dr. Alperin,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

First of all, I would like to thank the two reviewers. They were fast and provided very insightful comments. One suggested to accept the manuscript and the second one suggested major revisions. In my opinion, the major revisions are doable and I will be very pleased to assess this manuscript after your careful revisions (based on the 2 reviewers' comments). Some additional comments : 

- In order to facilitate my assessment, please follow a reporting guideline (and please add a form of checklist). I suggest SRQR (https://www.equator-network.org/reporting-guidelines/srqr/) but I may be wrong so feel free to use any other guideline if you think that it fits better with your research. Thank you in advance. 

- Please also make it explicit in the method section if there was a pre-specified protocol for this very specific research question (the one presented in this paper) and it if it was registered (and where). Please attach the protocol in a supplementary file. If there was no protocol, nor registration, please make it explicit and justify. 

- Please add a few words in the text and abstract about the main limitations, to avoid any spin. 

Please submit your revised manuscript by Jun 29 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Florian Naudet, M.D., M.P.H., Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. If materials, methods, and protocols are well established, authors may cite articles where those protocols are described in detail, but the submission should include sufficient information to be understood independent of these references (https://journals.plos.org/plosone/s/submission-guidelines#loc-materials-and-methods).


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Review of PONE-D-21-10048

This manuscript reports on the answers to an open survey question about how faculty of universities in the USA and Canada would define quality, prestige and impact of academic journals. The article is interesting and well-written but a number of major and minor concerns should be adequately responded to with a view to optimize clarity and relevance of the manuscript.

Major concerns

• It’s not very clear what quality, prestige and impact refer to: research (as in the title), journals (as asked in the survey), to the publication oeuvre of an individual researcher (as one would expect given the focus on review, promotion, and tenure (RPT) processes), or to all of these categories combined (which would be strange and confusing). Please clarify and apply the choice consistently throughout the manuscript.

• I agree that having a clear definition or description of core concepts used in the assessment of research and researchers is a necessary starting point but please also discuss the following.

o The meta-question whether quality, prestige and impact are the main concepts we need to look at. You allude to this a bit in the Introduction section but remain silent in the Discussion section and the Abstract.

o The practical question how quality, prestige and impact can be operationalized. This seems to be the main focus of your respondents (and less so the conceptual definitions).

• Maybe also mention (in the Discussion section) the best practices presented by DORA on their website and the recently introduced Hong Kong principles:

Moher D, Bouter L, Kleinert S, Glasziou P, Sham MH, Barbour V, Coriat AM, Foeger N, Dirnagl U. The Hong Kong principles for assessing researchers: fostering research integrity. PLoS Biology 2020; 18: e3000737. (https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000737) (translated in Chinese, German, Portuguese)

• I’m frankly puzzled that both the documents you analysed earlier and the faculty you surveyed don’t focus much more on the H-index. Although this measure – which is frequently used in researcher assessments in Europe – is also deeply flawed it’s clearly superior to the JIF for assessing researchers for two reasons: 1) the H-index concerns the whole publication oeuvre of an individual, and 2) the H-index is based on the actual citations to the articles the researcher at issue published. Please comment on this in the Discussion section.

• It’s not so clear to me why you would expect differences between subgroups defined by demographic characteristic and why that would be interesting or relevant to know.

• It’s also not clear why you would expect a high correlation between the views of faculty and the policy documents in their university. On a slightly cynical note: who is reading these documents? My guess is hardly anyone.

• The manuscript is much longer than necessary and should be shortened substantially by e.g.

o Removing lines 16-30 on page 5 and lines 1-21 on page 6. This lengthy description of earlier work doesn’t belong in the Methods section (or elsewhere in the article) and can be replaced by one or two sentences that only explain what’s really of direct importance to understand the methods of what is reported in this manuscript.

o Figures 1-3 have little informative value and can easily be removed when the percentages reported are transferred to tables 1-3.

o Table 4 and 5 can better be moved to the digital supplements as their message (‘no significant differences between subgroups’) is already well described in two lines in the text.

Minor concerns

� What was the response of the survey? It’s unclear how many invitations were sent. And how many participants answered the open question on which this manuscript is based? Respondents often skip open questions. Please produce a flow chart.

� Was your survey pre-registered? Please clarify and add the pertaining link if you did.

� Please keep the order of quality, prestige and impact uniform throughout the text and also order tables 1, 2 and 3 likewise.

� In table 1 you say ‘definitions of High Quality’ in the title but talk about ‘definitions of High Impact in the note directly below the table. That last formulation seems to be wrong.

• It’s not clear what the different colours in Tables 4 and 5 indicate.

I sign my reviews:

Lex Bouter, Amsterdam University Medical Centers and Vrije Universiteit Amsterdam

Reviewer #2: The research shows that absolute, general definitions of quality, impact, and prestige, of the kind that could be used to compare researchers and research within disciplines and across academia, may be difficult if not impossible to formulate. It is partly because of this that academia started to use ‘objective’ indicators and metrics, which have, however, been shown to be poor proxies for quality and excellence. In an attempt to go beyond these flawed metrics, there are now many initiatives to design meaningful ways to evaluate research. Based on their observations in this paper, the authors feel that this is problematic, since quality, excellence, and prestige are ill-defined, researchers hold many different views of them, and most of the time they rely on the flawed classical metrics, i.e. papers and where they are published and cited.

I would suggest that the authors include a discussion of those initiatives that are really trying to deal with this problem. In the Introduction they cite several of the papers that discuss this, e.g. Aksnes et al. 2019, Hicks et al. 2015 (the Leiden Manifesto), and Moher et al. 2018.

Instead of looking for the impossible, i.e. absolute, timeless measures for quality and prestige, these initiatives start with the conclusion that quality, excellence and prestige are context dependent. Research quality, its products and evaluation are highly context dependent, which makes direct comparisons of historians, philosophers and chemists, and even of researchers within, for instance, the domain of biomedical and health research, irrelevant. It is about ‘rigor, plausibility, originality, societal value’ (Aksnes 2019), but in a given research setting, with a strategy, aim, specific goals, a process (e.g., Open Science practices) and, if applicable, actions in the corresponding societal context (Nowotny et al., Rethinking Science, 2001).

To say that research is excellent or good requires from peers/reviewers a narrative, a motivated judgement of strengths and weaknesses based on reading and understanding the content of the work.

This is the approach taken in the recently adopted National Strategic Evaluation Protocol (SEP) in The Netherlands

https://www.vsnu.nl/files/documenten/Domeinen/Onderzoek/SEP_2021-2027.pdf

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Lex Bouter, Amsterdam University Medical Centers and Vrije Universiteit Amsterdam

Reviewer #2: Yes: Frank Miedema

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Oct 28;16(10):e0257340. doi: 10.1371/journal.pone.0257340.r002

Author response to Decision Letter 0


2 Jul 2021

Dear Editor and Reviewers,

Thank you and the two reviewers for the helpful feedback on our manuscript “How faculty define quality, prestige, and impact in research”. We are pleased to resubmit our manuscript with the requested revisions. Attached you will find the reviewers’ feedback (in gray) interspersed with a description of how we addressed each of the points raised (in black).

We would like to take this opportunity to express our appreciation to the reviewers for their thoughtful feedback. We are convinced that we have adequately addressed all the expressed concerns and that the manuscript has been improved as a result of this process.

Sincerely,

Juan Pablo Alperin

Assistant Professor, Publishing

Associate Director, Public Knowledge Project

Director, Scholarly Communications Lab

Simon Fraser University

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Florian Naudet

7 Jul 2021

PONE-D-21-10048R1

How faculty define quality, prestige, and impact of academic journals

PLOS ONE

Dear Dr. Alperin,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Thank you for answering the reviewers' comments. However, after a rapid check, I'm afraid that you may have missed some of my editorial points. Please excuse me if I am wrong.

- In order to facilitate my assessment, please follow a reporting guideline (and please add a form of checklist). I suggest SRQR (https://www.equator-network.org/reporting-guidelines/srqr/) but I may be wrong so feel free to use any other guideline if you think that it fits better with your research. Thank you in advance. 

- Please also make it explicit in the method section if there was a pre-specified protocol for this very specific research question (the one presented in this paper) and if it was registered (and where). Please attach the protocol in a supplementary file. If there was no protocol or registration, please make this explicit and justify it.

- Please add a few words in the text and abstract about the main limitations, to avoid any spin. 

Please submit your revised manuscript by Aug 21 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Florian Naudet, M.D., M.P.H., Ph.D.

Academic Editor

PLOS ONE


Reviewers' comments:



Decision Letter 2

Florian Naudet

26 Jul 2021

PONE-D-21-10048R2

How faculty define quality, prestige, and impact of academic journals

PLOS ONE

Dear Dr. Alperin,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

First of all, I would like to thank the 2 reviewers for their fast peer review. As you will see, there are still 2 minor issues. I agree with one reviewer that the 2 tables can be moved to an appendix, and I suggest following this suggestion. If you disagree, please explain why these tables/figures are, in your view, necessary. There is also a last conceptual point from the other reviewer, and I think that he makes a good point here. Please address his comment.

Pending these two minor changes/edits, I'll be more than happy to accept this manuscript for publication. 

Please submit your revised manuscript by Sep 09 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Florian Naudet, M.D., M.P.H., Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: My comment:

"Instead of looking for the impossible, i.e. absolute, timeless measures for quality and prestige, these initiatives start with the conclusion that quality, excellence and prestige are context dependent. Research quality, its products and evaluation are highly context dependent, which makes direct comparisons of historians, philosophers and chemists, and even of researchers within, for instance, the domain of biomedical and health research, irrelevant. It is about 'rigor, plausibility, originality, societal value' (Aksnes 2019), but in a given research setting, with a strategy, aim, specific goals, a process (e.g., Open Science practices) and, if applicable, actions in the corresponding societal context (Nowotny et al., Rethinking Science, 2001).

To say that research is excellent or good requires from peers/reviewers a narrative, a motivated judgement of strengths and weaknesses based on reading and understanding the content of the work.

This is the approach taken in the recently adopted National Strategic Evaluation Protocol (SEP) in The Netherlands"

https://www.vsnu.nl/files/documenten/Domeinen/Onderzoek/SEP_2021-2027.pdf

The author's response:

We very much agree with this reviewer’s view and believe that the research presented here

(alongside the other publications from this project) help to confirm this perspective by showing,

through concrete empirical evidence, some of the pernicious effects of using seemingly objective

measures and standard definitions for evaluations that should be context specific

My response now:

Despite this positive reaction, the authors do not respond in the paper to this major comment of mine. They write in the paper: "While it is known that there is a lack of definitions for many of the terms and concepts used in research assessment (Dean et al., 2016; Hatch, 2019; Moore et al., 2017; van Mil & Henman, 2016), this study explores how three key terms are understood by faculty in absence of these definitions."

They leave open at the end of the discussion how to deal with this problem. It is felt to be a problem by those who believe it useful to compare evaluations between very different fields of research. These comparisons, now based on the JIF, h-index, numbers of citations, etc., are, as they correctly argue, deeply flawed. They do not conclude, as they seem to do in their reply to my comments, that these terms depend on disciplines, sub-disciplines, and strategy, and on the thematics of research fields and topics. Given the changes in science and society, they are not timeless either.

Thus, the fact that the many terms they have investigated are quite different in use and have no absolute, universal definition or meaning must mean that looking for absolute, generally applicable indicators is not the way to approach what many feel to be a problem. This inescapable conclusion, which may be difficult for many brought up in current science, should be mentioned in the discussion, along with the suggestion that, as a way forward, we need to broadly introduce and train the use of narratives, from researchers as well as reviewers, auditors, and peers, to deal transparently with this context dependency. This is a possible impact of their work on the development of research evaluation policies and is highly relevant to suggest.

The SEP in The Netherlands has dealt with this 'problem', but narratives have been introduced in the REF in the UK and elsewhere before.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Lex Bouter, Professor of Methodology and Integrity, Amsterdam University Medical Centers and Vrije Universiteit Amsterdam, The Netherlands

Reviewer #2: Yes: Frank Miedema



PLoS One. 2021 Oct 28;16(10):e0257340. doi: 10.1371/journal.pone.0257340.r006

Author response to Decision Letter 2


26 Aug 2021

Thank you and the two reviewers for this rapid second review. We apologize for our own delay in returning our manuscript “How faculty define quality, prestige, and impact in research”. The response came while many of us were away on vacation, and it is only now that we have all been able to review our response.

We are pleased to resubmit our manuscript with the requested revisions. As you can see in the attached manuscript, we have moved the two tables to supplementary materials and made substantial additions to directly address the remaining reviewer comment in the discussion. You will now find additions that directly include the reviewer’s suggestions on p. 13 (lines 15-16; 19-29), p. 14 (lines 13-17; 22-28), as well as in a line in the abstract.

We are certain these additions would be met with agreement by the reviewer, as they substantially incorporate their views into the discussion and conclusions of our work. We trust you will agree.

Attachment

Submitted filename: Response to Reviewers R&R2.docx

Decision Letter 3

Florian Naudet

31 Aug 2021

How faculty define quality, prestige, and impact of academic journals

PONE-D-21-10048R3

Dear Dr. Alperin,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

I would like to thank you for all the work you did during the peer review process and I would like to thank the 2 reviewers again for their important feedback.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Florian Naudet, M.D., M.P.H., Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Frank Miedema

Acceptance letter

Florian Naudet

18 Oct 2021

PONE-D-21-10048R3

How faculty define quality, prestige, and impact of academic journals

Dear Dr. Alperin:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Pr. Florian Naudet

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Breakdown of definitions by demographic characteristics.

    Overview of the participants’ definition of Quality, Prestige and Impact by their gender, institution type, and age. The color scale illustrates the distribution of responses, where green indicates a high percentage of responses and red indicates a low percentage of responses.

    (DOCX)

    S2 Table. Breakdown of definitions by demographic characteristics.

    Overview of the participants’ definition of Quality, Prestige, and Impact by discipline. The color scale illustrates the distribution of responses, where green indicates a high percentage of responses and red indicates a low percentage of responses.

    (DOCX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers R&R2.docx

    Data Availability Statement

Survey responses can be found in the following data publication: Niles, Meredith T.; Schimanski, Lesley A.; McKiernan, Erin C.; Alperin, Juan Pablo, 2020, "Data for: Why we publish where we do", https://doi.org/10.7910/DVN/MRLHNO, Harvard Dataverse, V1. Data regarding RPT documents can be found in the following data publication: Alperin, Juan Pablo; Muñoz Nieves, Carol; Schimanski, Lesley; McKiernan, Erin C.; Niles, Meredith T., 2018, "Terms and Concepts found in Tenure and Promotion Guidelines from the US and Canada", https://doi.org/10.7910/DVN/VY4TJE, Harvard Dataverse, V3, UNF:6:PQC7QoilolhDrokzDPxxyQ== [fileUNF].
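
For those who want to retrieve these deposits programmatically, a minimal sketch using the Harvard Dataverse native API is shown below. It assumes the standard Dataverse endpoints and JSON layout, which should be verified against the Dataverse API documentation, and uses the survey-data DOI cited above.

    # Minimal sketch: list the files of the survey dataset via the Harvard Dataverse
    # native API. Endpoint paths and JSON field names are assumed from the standard
    # Dataverse API and should be checked against its documentation.
    import requests

    BASE = "https://dataverse.harvard.edu/api"
    DOI = "doi:10.7910/DVN/MRLHNO"  # "Data for: Why we publish where we do"

    resp = requests.get(f"{BASE}/datasets/:persistentId/", params={"persistentId": DOI})
    resp.raise_for_status()
    files = resp.json()["data"]["latestVersion"]["files"]

    for f in files:
        data_file = f["dataFile"]
        print(data_file["id"], data_file["filename"])
        # Each file can then be downloaded from f"{BASE}/access/datafile/{data_file['id']}"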


