F1000Res. 2018 Jun 27;7:920. [Version 1] doi: 10.12688/f1000research.15436.1

Who and why do researchers opt to publish in post-publication peer review platforms? - findings from a review and survey of F1000 Research

Jamie Kirkham 1,a, David Moher 2
PMCID: PMC6053701  PMID: 30079245

Abstract

Background: Preprint servers and alternative publication platforms enable authors to accelerate the dissemination of their research. In recent years there has been an exponential increase in the use of such servers and platforms in the biomedical sciences, although little is known about who publishes on them, why, and what experiences researchers have with them. In this article we explore one of these alternative publication platforms, F1000 Research, which offers immediate publication followed by post-publication peer review.

Methods: From an unselected cohort of articles published between 13th July 2012 and 30th November 2017 in F1000 Research, we provided a summary of who and what was published on this platform and calculated the percentage of published articles that had been indexed on a bibliographic database (PubMed) following successful post-publication peer review. We also surveyed corresponding authors to further understand the rationale and experiences of those who have published using this platform.

Results: A total of 1865 articles had been published in the study cohort period, of which 80% (n=1488) had successfully undergone peer review and were indexed on PubMed, allowing a minimum of six months from first publication. Nearly three-quarters of articles passed the peer review process with their initial submission. Survey responses were received from 296 corresponding authors. Open access, open peer review and the speed of publication were the three main reasons why authors opted to publish with F1000 Research.

Conclusions: Many who published with F1000 Research had a positive experience and indicated that they would publish again with this same platform in the future.  Nevertheless, there remained some concerns about the peer review process and the quality of the articles that were published.

Keywords: F1000 Research, Journalology, Peer Review, Rapid Publication

Introduction

The conventional method of journal publication involves manuscript submission, peer review and editorial oversight, revision and publication. While this process is presumed to ensure the scientific integrity of the research undertaken, the availability of the research findings entering the public domain may take several months or even years, depending on factors such as a journal editor’s decision to publish or reject an article, peer reviewer availability or a journal’s publication frequency. The inefficiency of this process was underlined in a recent peer review survey conducted in 2018 by ASAPbio (Accelerating Science and Publication in biology). The survey revealed that for their most recent published article, about 50% of the authors surveyed (132/259) submitted their article to two or more journals, with 7% (18/259) submitting to five or more [http://asapbio.org/peer-review/survey]. This process can result in a substantial delay in research findings entering the public domain. Traditionally, authors have not been able to add such research findings to their curriculum vitae and/or grant applications. In some scientific fields, such as pandemics or humanitarian emergencies, the time to deliver research findings may be as important as research quality, and may be critical to health care provision.

While some journals may offer a ‘fast track’ service to publication, preprint servers offer rapid publication of all articles, albeit without systematic refereeing, which brings significant benefits to authors and readers. The profile of preprint servers is increasing, with many major funders (e.g. the Wellcome Trust and the National Institutes of Health (NIH)) now endorsing their use, particularly for grant applications [https://wellcome.ac.uk/news/we-now-accept-preprints-grant-applications, http://www.sciencemag.org/news/2017/03/nih-enables-investigators-include-draft-preprints-grant-proposals].

However, due to the lack of peer review, preprint servers in a life sciences setting have been criticised: they may lack quality control and consequently have the potential to report flawed research which may harm patients 1. To improve scientific integrity, new options for publication are emerging as part of the ‘Open Science’ movement. An example is peer review before the results are known (‘registered reports’), which aims to eliminate questionable research practices and poor research design [https://cos.io/rr/]. The registered reports publishing format is currently used by over 100 journals.

F1000 Research is an example of an alternative publishing ‘platform’ which offers the advantage of a preprint server, namely immediate publication of a variety of biomedical research article types, with the added advantage of post-publication open peer review. Once peer review is complete (at least two approved referee reviews, or one approved plus two approved-with-reservations reviews), the article is indexed in a bibliographic database such as PubMed.
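To make the indexing criterion concrete, here is a minimal sketch of the rule as stated above (the function name and status labels are illustrative, not part of any F1000 Research API):

```python
from collections import Counter

def meets_indexing_criterion(review_statuses):
    """True if an article satisfies the stated rule: at least two
    'approved' reviews, or one 'approved' plus two
    'approved with reservations' reviews."""
    counts = Counter(review_statuses)
    return (counts["approved"] >= 2 or
            (counts["approved"] >= 1 and
             counts["approved with reservations"] >= 2))

# One approval plus two approvals-with-reservations qualifies:
print(meets_indexing_criterion(
    ["approved", "approved with reservations",
     "approved with reservations"]))  # True
```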

As of May 2018, F1000 Research has published over 2000 articles since its inception in 2012, yet little is known about who publishes in F1000 Research and why. The aim of this study is to provide a descriptive summary of the research that has been published in F1000 Research, and to determine how much of this published research has been accepted for bibliographic database indexing. We also surveyed authors who have published in F1000 Research to further understand the rationale and experiences of those who have published using this publication platform.

Methods

We studied a cohort of all article types that were first published on F1000 Research between 13th July 2012 (earliest publication) and 30th November 2017. A data extraction form was developed and piloted on the first page of 20 listed publications. For each article, the following information was extracted: article type, the year of publication, funding sources, the country of the first listed corresponding author and the peer review status. The peer review status for all articles was last verified on 30th May 2018, i.e. six months after the last published article in the study cohort. At the same time, we also checked whether articles were indexed on the bibliographic database, PubMed.
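As a rough illustration of the extraction described above, the per-article record might be modelled as follows (field names and value labels are our own shorthand, not the actual extraction form):

```python
from dataclasses import dataclass

@dataclass
class ArticleRecord:
    article_type: str           # e.g. "Research Article"
    year_published: int         # year of first publication
    funding: str                # "Commercial", "Non-Commercial" or "None"
    corresponding_country: str  # country of first listed corresponding author
    peer_review_status: str     # as last verified on 30 May 2018
    indexed_on_pubmed: bool     # checked at the same time

def percent_indexed(records):
    """Percentage of the cohort indexed on PubMed."""
    return 100 * sum(r.indexed_on_pubmed for r in records) / len(records)
```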


Survey of corresponding authors who have published with F1000 Research

With the exception of Editorials and F1000 Faculty Critiques, which are published by invitation only and not subject to external peer review, the first listed corresponding authors of all published studies in the study cohort were contacted via a personalised email. We removed any duplicate email addresses such that a corresponding author who had published multiple articles was contacted only once. Participants were asked to complete a short online survey about their main reasons for, and experiences of, publishing with F1000 Research. The survey was constructed using the online survey software REDCap [https://www.project-redcap.org/], and was open for responses between 6th April 2018 and 10th May 2018. The survey questions are available in Supplementary File 1 and reflect the importance of a series of factors that may influence the decision to submit to F1000 Research, as rated on a five-point Likert scale ranging from ‘very important’ to ‘not important’. Similarly, on a five-point Likert scale we asked about the importance of articles being indexed on a bibliographic database, the importance of a transparent peer review process and the likelihood that respondents would submit future manuscripts to F1000 Research or recommend the platform to others. Participants could also provide free-text comments on positive or negative experiences associated with submitting or publishing with the platform. Non-responders were contacted periodically if a response to the survey was not received. The data were presented as the frequency distribution for each level of response. All positive and negative experiences were independently reviewed by both authors and categorized into common topics. Any discrepancies were resolved via discussion.
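A minimal sketch of the frequency-distribution summary described here, assuming each answer is stored as one of the five Likert labels (the intermediate labels are assumptions, as only the scale endpoints are given above):

```python
from collections import Counter

LIKERT_LEVELS = ["very important", "important", "moderately important",
                 "slightly important", "not important"]  # middle labels assumed

def frequency_distribution(responses):
    """Count and percentage of responses at each Likert level."""
    counts = Counter(responses)
    n = len(responses)
    return {level: (counts[level], round(100 * counts[level] / n, 1))
            for level in LIKERT_LEVELS}

# Example with three mock responses:
print(frequency_distribution(["very important", "important", "important"]))
```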

The University of Liverpool Ethics Committee was consulted and granted ethical approval for this study (Reference 3233). Informed consent was assumed if a participant responded to the survey.

Results

Summary of articles published in F1000 Research

A total of 1865 articles were published in F1000 Research between 13th July 2012 and 30th November 2017. Just over a third of the articles published were research articles (677/1865; 36%); the next most common types were case reports (192/1865; 10%) and opinion articles (174/1865; 9%) (Table 1). The majority of articles published received non-commercial funding (1054/1865; 57%), while a large proportion declared no funding source (745/1865; 40%). The first listed corresponding author was from a high income country in nearly 80% of published articles (1480/1865; 79%), and fewer than 2% were from low income countries (Table 1, Figure 1). The six countries with more than 50 articles published were the USA (618 articles), UK (232), Germany (91), Australia (84), India (82) and Canada (78). There appeared to be a gradual increase in the number of articles published over time, with over 400 articles published in each of the last two years of the study cohort (Table 1).

Table 1. Article characteristics of all articles published in F1000 Research (13th July 2012 to 30th November 2017).

Article Characteristics, n (%); N=1865

Article Type:
            Antibody Validation Article 13 (0.7)
            Case Report 192 (10.3)
            Clinical Practice Article 18 (1.0)
            Commentary 23 (1.2)
            Correspondence 33 (1.8)
            Data Article 8 (0.4)
            Data Note 34 (1.8)
            Editorial 51 (2.7)
            F1000 Faculty Critique 7 (0.4)
            Method Article 104 (5.6)
            Observation Article 15 (0.8)
            Opinion Article 174 (9.3)
            Research Article 677 (36.3)
            Research Note 161 (8.6)
            Review 96 (5.2)
            Short Research Article 41 (2.2)
            Software Tool Article 153 (8.2)
            Study Protocol 23 (1.2)
            Systematic Review 17 (0.9)
            Web Tool 25 (1.3)
Funding (a):
            Commercial 66 (3.5)
            Non-Commercial 1054 (56.5)
            None 745 (39.9)
Corresponding Author Location (b):
            Low Income Country 30 (1.6)
            Lower Middle Income Country 181 (9.7)
            Upper Middle Income Country 174 (9.3)
            High Income Country 1480 (79.4)
Year submitted:
            2012 (earliest submission 13th July) 72 (3.9)
            2013 287 (15.4)
            2014 322 (17.3)
            2015 357 (19.1)
            2016 418 (22.4)
            2017 (up to 30th November) 409 (21.9)

(a) Studies that were partially funded by industry (e.g. pharmaceutical) were classified as ‘commercial funding’.

(b) Economic status was classified according to the World Bank list of economies (June 2017).

Figure 1. Map of corresponding author locations of all articles published in F1000 Research (13th July 2012 to 30th November 2017).


Peer review and bibliographic database indexing

Allowing for a minimum of six months from the first publication of all articles, 80% (1488/1865) had successfully undergone peer review (with the exception of 51 editorials that were not subject to peer review) and were indexed in the bibliographic database, PubMed (Table 2). For the remaining 20% of articles, the lack of indexing was because peer review was incomplete (n=317), peer review had been discontinued (n=36) or the article had been removed by the authors (n=3). Where peer review was incomplete, the peer review process had been ongoing for over 12 months for 80% of articles (253/317). In a small number of articles (n=14) the peer review process was complete but the article had not yet appeared in PubMed (Table 2). Of the articles that passed the peer review process, 74% (1065/1448) did so with the initial submission, 23% (n=336) after one revision and 3% (n=42) after two revisions, while five articles required four or five revisions.
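The revision-count percentages above can be checked directly; the denominator (1448) is the sum of the four revision categories reported:

```python
revisions = {"initial submission": 1065, "one revision": 336,
             "two revisions": 42, "four or five revisions": 5}
total = sum(revisions.values())  # 1448
for label, n in revisions.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.0f}%")
# initial submission: 1065/1448 = 74%
# one revision: 336/1448 = 23%
```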

Table 2. Article peer review and bibliographic database indexing status of all articles published in F1000 Research (13th July 2012 to 30th November 2017).

Submitted Article Status N=1865 (%)
Indexed in Bibliographic Database 1488 (79.8)
            Article underwent full external peer review 1437 (a, b)
            Editorials not subject to external peer review 51
Not Indexed in Bibliographic Database 377 (20.2)
            Peer review incomplete 356
                        Peer review ongoing 317 (c)
                        Peer review discontinued 36
                        Article removed by authors 3
            Peer review complete 14 (d)
            F1000 Faculty Critique (not indexed) 7

(a) Two of these articles were not published by F1000 Research (South Asian Journal of Cancer (case report); La Tunisie Medicale (research article)).

(b) One article was indexed on a bibliographic database but the peer review process was incomplete.

(c) For 253 of these, peer review had been ongoing for over 12 months since the article was first submitted.

(d) For four of these, peer review was complete but the article had not been indexed on a bibliographic database within 12 months of the last publication date.

Survey of corresponding authors publishing with F1000 Research

After excluding 58 articles that did not undergo peer review (editorials and F1000 Faculty Critiques), there were 1476 unique first listed corresponding author email addresses (out of 1807 articles) that were targeted in the survey. Notably, two individuals were each the corresponding author on multiple articles: one (Germany) on 18 articles and one (USA/India) on 16.

Responses to the survey were received from 296 corresponding authors. An exact response rate was difficult to estimate, but we approximate it to be between 25–30% given the number of survey emails returned with invalid and/or expired addresses or automated ‘out of office’ replies during the period the survey was live. The majority of responders were academically affiliated (74%; 219/296), while smaller proportions represented non-profit organisations (9%; 22/296), industry (5%; 14/296) and government (7%; 22/296). The remaining 14 represented other entities such as independent self-employed private researchers, consulting agencies, schools and hospitals. Respondents skewed towards greater research experience, with 7% (21), 18% (52), 33% (98) and 42% (125) of the 296 respondents representing trainee, early-, mid- and senior researchers, respectively.
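A short worked check of the approximated response rate; the counts of undeliverable addresses below are assumptions (‘several hundred’, per the Discussion), since the exact number was not recorded:

```python
responses, contacted = 296, 1476
print(f"raw rate: {100 * responses / contacted:.1f}%")  # 20.1%
# Removing an assumed 300-500 invalid/expired addresses from the
# denominator recovers the quoted 25-30% range:
for invalid in (300, 400, 500):
    print(f"{invalid} invalid: {100 * responses / (contacted - invalid):.1f}%")
# 300 -> 25.2%, 400 -> 27.5%, 500 -> 30.3%
```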

The importance of factors that influenced an author’s decision to submit an article to F1000 Research is presented in Figure 2. The open access policy, the open peer review policy and the speed of publication were the top three reasons for publishing with F1000 Research, with more than 70% of participants rating these factors as either important or very important. Linked to the peer review policy, the transparency of the F1000 Research peer review system, which includes reviewers’ names, was rated as important or very important by nearly 70% of respondents (202/295). Of less importance were a colleague’s recommendation to publish in F1000 Research, having previously peer reviewed for F1000 Research, and considerations of promotion and tenure. Nevertheless, 80% (237/295) of respondents said they would be either likely or very likely to recommend F1000 Research to a colleague.

Figure 2. The level of importance of factors that were influential to authors when deciding to publish with F1000 Research.


The respondents listed 58 additional items relating to three themes that influenced their decision to publish with F1000 Research. One theme was linked to promotional activities connected directly with F1000 Research, which included fee waivers (n=14), personal invitations to submit an article (n=2) and commissioned calls for specific articles (n=3). In another theme, the reasons were unconnected to F1000 Research but were the result of failures to publish in alternative journals (n=16); reasons cited included other journals not being interested in the type of article/analysis, bad reviews, biased peer review and editorial biases. In a third theme, authors chose to submit to F1000 Research due to specific characteristics of the publishing platform. These included: accessibility of previous versions of the article and the ability to access and respond to reviewer comments (n=5), the ability to publish negative findings and material on controversial topics (n=3), no size limit on articles (n=2), the ability to share public datasets (n=1), the quality of publication of images (n=1) and good altmetrics (n=1). The remaining ten items related to the fact that authors were intrigued to test out a new publication platform.

Nearly 90% of respondents (261/295) stated that it was either important or very important that, following immediate publication with F1000 Research, their article was later indexed on a bibliographic database once approved through peer review. The 261 corresponding authors who thought this was important listed 198 reasons. The most common reason related to making the article visible and easily accessible (n=81) and ensuring that the article was sufficiently exposed (n=27) to its intended readership. Others stated that indexing enhanced the credibility and quality (n=32) of articles, as it was the benchmark showing that the article had undergone peer review. Article indexing for assessment purposes (n=37) was also seen to be important in terms of assessing research impact, assessing scientists (e.g. for promotion), assessing institutes (e.g. the Research Excellence Framework in the UK) and applying for grant income. Finally, some respondents thought that the indexing of articles was important for personal distinction (n=21); examples included personal recognition amongst peers, citation counts and the enhancement of personal portfolios.

Experiences of those submitting articles to F1000 Research

The main criticism from those submitting to F1000 Research related to the peer review process following publication. Authors found the process of nominating reviewers, and getting reviewers to agree to review, challenging (n=9) given the strict criteria for selection [https://f1000research.com/for-authors/tips-for-finding-referees]; it was felt this typically led to a longer period of peer review (n=12) than at most other journals. Some authors also questioned the quality of the reviews (n=9), with the suggestion that author-selected reviewers may be biased, or reserved in their criticism given the public record and naming of reviewers when a platform operates an open peer review policy. A few authors felt ‘trapped’ in the peer review process, feeling that once the article was published there was ‘no way out’ if reviewers could not be found or reviewers stopped providing reviews. Publication fees were seen as a major barrier for this publication platform, particularly in areas of research where funds available for publishing are limited (n=10). The impact of articles published with F1000 Research was also seen as a limitation (n=16); while this was not necessarily the authors’ personal concern, the perception that such articles would be considered low-quality publications and poorly cited, on a platform with no reputable impact factor, was foreseen as an issue with peers and within scientific organisations. Three authors provided strongly critical reviews of the platform, suggesting that it ‘provided an easy opportunity to publicly criticise the work of others in an act that constitutes unwarranted bullying’, and felt forced into using the platform to correct and refute the criticisms to protect their personal reputation. A few authors (n=5) also commented that the platform was difficult to use for editing and writing purposes, and particularly tedious when making a data deposition (n=2).

Despite some negative feedback regarding F1000 Research, there were also many positive responses, with a number of respondents stating that this was ‘their best experience in publishing’ and hoping that this publication style becomes ‘dominant in the future’. Based on their experience, 74% (218/296) of respondents said that they would be likely or very likely to submit to F1000 Research in the future. The speed and efficiency of publication (n=11) was the most frequently cited reason that authors felt the experience was positive, while others (n=8) thought the extent and transparency of the reviews was both helpful and important. Some authors also found the editorial staff to be cooperative and professional (n=7), while other benefits included the ease of use of the platform and the standard of the publication.

Discussion

The number of preprint platforms and the number of researchers submitting to preprint servers and alternative platforms in the biomedical sciences are rising exponentially [http://www.prepubmed.org/monthly_stats/], yet relatively little is known about them. F1000 Research offers a unique publishing platform which, like preprint servers, offers immediate publication, but with the added advantages of post-publication peer review and eventual article indexing on a bibliographic database. The speed of publication, alongside the open access and open peer review policies, were particularly attractive traits to authors who submitted their research to this platform. Having an article indexed on a bibliographic database was seen to be important by the majority of respondents, and this study revealed that 80% of articles achieved indexing within six months of submission to F1000 Research. Visibility and accessibility of research articles were the most commonly cited reasons why indexing mattered. The immediate visibility of research published ahead of peer review plays another important role: peer review for about 15% of the articles had either been ongoing for more than a year or been discontinued completely, meaning that many of these articles might have joined the vast quantity of inaccessible unpublished literature (with the attendant potential for publication bias) had they been subjected to the standard peer-review-before-publication model.

The F1000 Research publication model was not without criticism. Some found that the peer review process took longer than at standard journals because more of the emphasis falls on authors, rather than editors, to find peer reviewers. There was also a sense that an article could become caught up in the process: immediate publication meant there was limited scope to withdraw it or submit elsewhere if peer reviewers could not be found or existing reviewers failed to provide subsequent reviews. While these criticisms may reflect the experiences of some survey respondents, this process is not dissimilar from that of the many standard journals/publishers which request names of peer reviewers, and which in some instances release articles if peer reviewers cannot be found in a reasonable timeframe.

The majority of respondents generally saw open peer review as a good thing, but some felt the process could lead to reserved, poorer-quality reviews that lacked criticism; this was perhaps evidenced by the fact that 74% of articles passed the peer review stage based on the first submitted version. Despite this finding, a randomised trial found that asking a reviewer to consent to being identified to the author had no important effect on the quality of the review, although it may significantly increase the likelihood of reviewers declining to review 2.

The strength of this study is that we evaluated a large, unselected cohort of articles that were published with F1000 Research. The response to the survey was quite poor, with only 296 corresponding authors engaging from the 1476 unique email addresses identified. Nevertheless, calculating an exact response rate was particularly challenging given that several hundred of those contacts were found to be invalid or expired, a consequence of targeting some authors who published their articles several years ago. Even with the potential for such response bias, the open-text comments received appeared to be relatively balanced in terms of positive and negative experiences of publishing with F1000 Research, with key themes identified. There was also a general sense that the F1000 Research platform appeared ‘modern’ and yet was potentially ‘less attractive to the early career researcher’, because publishing in recognised journals with high impact factors is still considered the standard by the vast majority of researchers and institutes for gaining promotion and tenure.

It was clear that researchers from all around the world have published on the F1000 Research platform. The importance of alternative publication platforms is beginning to extend beyond an author’s choice to submit to them. For example, the Public Library of Science (PLOS) has recently partnered with the preprint server bioRxiv and, from 1st May 2018, authors have the option to post their submitted manuscript onto the preprint server in order to disseminate their work prior to peer review [http://blogs.plos.org/plos/2018/04/one-small-step-for-preprints-one-giant-step-forward-for-open-scientific-communications/]. To a large extent this mimics the idea of the existing F1000 Research publication model.

In conclusion, there is undoubtedly an increase in researchers publishing their research on alternative platforms in the biomedical sciences, but a level of dogma still surrounds their use by many, and concerns remain about the quality of the articles published on these platforms.

Data availability

The data referenced by this article are under copyright with the following copyright statement: Copyright: © 2018 Kirkham J and Moher D

Data associated with the article are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication). http://creativecommons.org/publicdomain/zero/1.0/

Dataset 1: Articles published in F1000 Research between 13th July 2012 and 30th November 2017. 10.5256/f1000research.15436.d208308 3

Funding Statement

The author(s) declared that no grants were involved in supporting this work.

[version 1; referees: 2 approved]

Supplementary material

Supplementary Appendix A: Survey questions sent to corresponding authors of articles published in F1000 Research between 13th July 2012 and 30th November 2017

References

  • 1. Chalmers I, Glasziou P: Should there be greater use of preprint servers for publishing reports of biomedical science? [version 1; referees: not peer reviewed]. F1000Res. 2016;5:272. 10.12688/f1000research.8229.1
  • 2. van Rooyen S, Godlee F, Evans S, et al.: Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ. 1999;318(7175):23–27. 10.1136/bmj.318.7175.23
  • 3. Kirkham J, Moher D: Dataset 1 in: Who and why do researchers opt to publish in post-publication peer review platforms? - findings from a review and survey of F1000 Research. F1000Research. 2018. 10.5256/f1000research.15436.d208308
F1000Res. 2018 Jul 19. doi: 10.5256/f1000research.16821.r35534

Referee response for version 1

Anisa Rowhani-Farid 1

Overall

This is an excellent meta-research study of the open science platform F1000 Research, using observational data and survey results. The results of this study contribute to the shaping of journals’ conceptual frameworks, which influence their publishing practices, such as the adoption of post-publication peer review, reducing publication costs, implementing open access policies, and reducing time between submission and publication. I approve this study for publication, upon addressing the following points:

Title

The title was a bit confusing grammatically as the “who and why” would require differing conjugation and the “survey of F1000 Research” phrase might need to include “researchers”.  Here is a suggestion for the title:  A meta-research study of the post-publication platform, F1000 Research – who publishes there and why – findings from a review and survey. 

Introduction

This was good and covered the issue of “waste” in research because of the traditional journal publication system.  I wonder whether there is a dollar figure that demonstrates this waste?

I think two other articles might be worthwhile citing in this introduction:

Aleksic et al 1

Tracz et al 2

Methods and results

The methods used are scientifically sound and clear.

I think the ~one-month response period for the survey could be identified as a limitation of the study, especially given that there was only a 25-30% response rate, and after a few reminders too.  The emails could have ended up in researchers’ spam/clutter folders too.   

I have had a look at the data shared. The Excel spreadsheet contains the observational data of the articles published with F1000 Research from 13 July 2012 to 30 November 2017. It might be worthwhile to export an anonymised copy of the survey data from REDCap to share as well.

I am wondering why peer review was discontinued for those 36 articles.

Discussion

This was clear.  I have no further comments.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

References

  • 1. Aleksic J, Alexa A, Attwood TK, Chue Hong N, Dahlö M, Davey R, Dinkel H, Förstner KU, Grigorov I, Hériché JK, Lahti L, MacLean D, Markie ML, Molloy J, Schneider MV, Scott C, Smith-Unna R, Vieira BM, as part of the AllBio: Open Science & Reproducibility Best Practice Workshop: An open science peer review oath. F1000Res. 2014;3:271. 10.12688/f1000research.5686.2
  • 2. Tracz V, Lawrence R: Towards an open science publishing platform. F1000Res. 2016;5:130. 10.12688/f1000research.7968.1
F1000Res. 2018 Jul 10. doi: 10.5256/f1000research.16821.r35535

Referee response for version 1

David Mellor 1

The authors provide an appropriate and needed review of a leading platform in post-publication peer review and preprints, F1000. They collect information on the country of origin, the "success" at being formally indexed, and survey authors on their experiences with the process (which is rarely shared for any other platform, so this information is particularly needed). My critiques below are all modest points for improvement.

"Traditionally, authors have not been able to add such research findings to their curriculum vitae and/or grant applications." The only harm implied by this lag is to the author. This roundabout system is also inefficient for reviewers, journals, and for dissemination to potentially interested readers. 

"In some scientific fields such as pandemics or humanitarian emergencies, the time to deliver research findings may be as equally as important as research quality, and may be critical to health care provision." The authors should couch this as a balance between two conflicting needs: speed for potentially urgent information and ensuring that health-related information is of the highest possible quality. 

"Open Science Initiative" I am not aware of a formal "initiative". Recommend revise to: "...new emerging options towards publication are being considered as part of the the "Open Science" movement."

An additional benefit of Registered Reports is to address publication bias towards significant or novel results. 

It might be worthwhile to mention that F1000 itself offers the RR workflow.   https://blog.f1000.com/2017/10/12/transparency-meets-transparency/ 

In the intro, the following citations would provide a bit more context for the reader 

  1. Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers 1

  2. Altmetric Scores, Citations, and Publication of Studies Posted as Preprints 2

  3. And for the section mentioning the rationale for Registered Reports, "Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond" 3

"Non-responders were contacted periodically if a response to the survey was not received." Please provide your rule for re-contact (e.g. once per week until response was received or four contacts were made). 

"All positive and negative experiences were independently reviewed by both authors and categorized into common topics. Any discrepancies were solved via discussion." What does this mean? That each free response, positive/negative opinion was scored by this study's authors (since you're describing a survey or authors, please clarify who the "authors" are in this sentence). 

"with no more than 10% of the remaining articles published representing a different article type" Case report was slightly over 10% (this is obviously a nitpick, but it did lead me to wonder if I was looking at the correct column in the correct table). 

"The majority of articles published received non-commercial funding" Recommend revise to "The majority of articles published reported receiving non-commercial funding..."

"Economic status was classified according to the World Bank list of economies (June 2017)" Please provide a specific link to this list.

"For the remaining 20% of articles, the lack of indexing was because peer review was incomplete (n=317), peer review had discontinued (n=36) or the article had been removed by authors (n=3)." Does "incomplete" include both articles that received poor reviews and articles that have received 1 or fewer reviews? Clarify the number of articles that received something equivalent to "rejections". I am casually familiar with F1000, perhaps more so than typical but less so than a frequent reader. It is unclear to me if "peer review ongoing" could ever revert to another status. Were there any articles that received multiple reviews that indicated poor article quality? Perhaps provide a bit more explanation of the F1000 workflow in the introduction and also define these categories a bit more precisely. 

"29% (n=336), after one revision," There appears to be a math error here, probably should be 23%, but please check.

Figure 2, "Recommendation" I think should be revised to "Recommendation by colleague", as all the other titles were self-explanatory (to me), but that one I had to open up the survey to understand what it meant (thanks for providing the survey!).

"Authors found the process of nominating and reviewers agreeing to review challenging (n=9)" This sentence seems like you are inferring an opinion by 9 people to the entire author pool. I recommend leading the section "Experiences of those submitting articles to F1000 Research" with opinions that can be reliably noted by larger groups from within your sample, and then mention these small N opinions with appropriate caveats (e.g. "A few authors noted...."). Giving the Ns means you are not misleading anyone obviously, it just reads a bit odd to present this as a common experience. Likewise with the last sentence in that paragraph, "A number of authors [had various complains, n=5 and 2]." 

"The speed and efficiency of publication (n=11) was the main reason that authors felt the experience was positive" change "main" to "most often noted in the free responses" (if that is what you mean by "main").

"The peer review for about 15% of the articles had either been ongoing for more than a year or discontinued completely meaning that many of these articles may have contributed to the vast quantity of inaccessible unpublished literature (and the potential for publication bias) had these articles been subjected to the standard peer review before publication model." This sentence implies that the rest of the articles would have been published elsewhere if not submitted to F1000. I think that is overly optimistic. I would describe that 15% as a floor of that estimate. Also, this sentiment is likely to be met by a cynical reader (one who is more skeptical about the value of preprints than I am), that this is a good thing, that many published articles, and certainly many preprints, do not "deserve" to be published. I recommend addressing or acknowledging that sentiment here. 

"There was also a general sense that the F1000 Research platform appeared to be ‘modern’ and yet was potentially ‘less attractive to the early career researcher’ because the need to publish in recognised journals with high impact factors is still considered the standard by the vast majority of researchers and institutes to gain promotion and tenure."

‘less attractive to the early career researcher’  Is that a quote from a survey respondent? State that in this part of the discussion, as that could also have come from other sources discussing the pressures on ECRs to publish in more "prestigious" venues.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

References

  • 1. Ross-Hellauer T, Deppe A, Schmidt B: Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers. PLoS One. 2017;12(12):e0189311. 10.1371/journal.pone.0189311
  • 2. Serghiou S, Ioannidis J: Altmetric Scores, Citations, and Publication of Studies Posted as Preprints. JAMA. 2018;319(4). 10.1001/jama.2017.21168
  • 3. Chambers CD, Feredoes E, Muthukumaraswamy SD, Etchells PJ: Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience. 2014;1(1):4–17. 10.3934/Neuroscience.2014.1.4
F1000Res. 2018 Jul 4. doi: 10.5256/f1000research.16821.r35531

Referee response for version 1

Larry Peiperl 1

F1000 Research provides a platform for immediate publication of research followed by open peer review and, if approved by reviewers, indexing in PubMed or other databases. In the current article, the authors, who are leaders in the fields of meta-research and journalology, provide a descriptive summary of the 1865 articles that were published in F1000 Research from its earliest publication in July 2012 through November 2017, verifying peer review status through May 2018. They also conduct an online survey in April-May 2018 through email requests to corresponding authors of articles that had been subject to external peer review.

The results are descriptive and presented in summary format, which seems appropriate to the study design.  No statistical inference is undertaken.

A spreadsheet of study-specified characteristics for all 1865 F1000 articles, and survey questions (not responses, which may possibly reflect an effort to preserve confidentiality of respondents) are provided as supplementary files.

The bibliometric analysis appears straightforward. The authors may wish to clarify the following points, most of which pertain to the survey and interpretation of its (partially qualitative) results.

Methods

1. “All positive and negative experiences were independently reviewed by both authors and categorized into common topics. Any discrepancies were solved via discussion.”  Can you provide more detail on how this qualitative analysis was performed? For example, were the “common topics” pre-defined?  If not, how did you arrive at the 3 thematic categories that you present in the Results?  Especially in light of the appropriately declared author competing interest, and the apparent relevance of the conclusions to the platform on which the authors are publishing the article, further information about how the study methodology may have supported objective interpretation of the qualitative data could strengthen the presentation.

Results

2. Thirty-six articles had peer review “discontinued”. Under what circumstances does discontinuation of peer review occur?

3. Of 1476 unique corresponding author email addresses targeted in the survey, you received 296 responses, representing some 20% of the corresponding authors.  You identify the low response rate as a limitation of the study, but can you comment further on the representativeness of this sample?  Are you able to draw any useful conclusions based on the data you obtained for the full set of articles (year of publication, country income level, etc) regarding the extent to which responders may have differed from non-responders?  If such an analysis is not feasible, more caution would seem appropriate in drawing conclusions on the basis of the survey results.

Discussion

The authors might clarify their reasoning at several points:

4. Article: “There was also a sense that there was the potential for an article to become caught up in the process, immediate publication meant that there was limited scope to remove or submit elsewhere if peer reviewers could not be found or existing reviewers failed to provide subsequent reviews. While these criticisms may have reflected the experiences of some survey respondents, this process is not dissimilar from the many journals/publishers of standard journals which request names of peer reviewers, and in some instances release articles if peer reviewers cannot be found in a reasonable timeframe.”

Comment: If an author believes that an article published prior to peer review in F1000 cannot then be submitted to another journal, that author’s situation would seem different from when a journal rejects an article after peer reviewers cannot be found.  In the latter situation, the author has the option to pursue indexing by submitting the article to another journal.  It’s not clear from this paragraph whether or not authors have this option for articles that haven’t passed peer review in F1000.

5.  While there are some similarities, I’m not sure it’s fair to say that the PLOS partnership with bioRxiv “largely mimics the idea of the existing F1000 Research publication model.”   PLOS is partnering with bioRxiv in a way that other journals may choose to adapt, to complement the traditional journal publishing model by speeding dissemination while maintaining author choice regarding preprint posting and ultimate publication venue. Within this collaboration, bioRxiv does not function as an end-to-end publishing platform like F1000Research. Preprints posted to bioRxiv are not identified as having been submitted to a PLOS journal prior to publication in that journal, and authors are free to submit their posted work to another journal if the PLOS journal does not accept it.

6. Article: “There still remains a level of dogma surrounding [preprint platform] use by many, and there remain concerns about the quality of the articles published on these platforms.”

Comment:  It's not clear what you refer to as “dogma." The term seems dismissive, yet your research article identifies a number of reasonable concerns pertaining to preprint platform use.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
