Abstract
Gold OA as a publishing model has created conflicts of interest for authors and journals that put research integrity at risk. Alternative models of publishing and peer review could resolve some of these conflicts.
Subject Categories: S&S: Media & Publishing; S&S: Politics, Policy & Law
In recent years, an increasing number of research funders and governments have been supporting Open Access (OA) publishing. In the USA, the Fair Access to Science and Technology Research Act and the Public Access to Public Science Act require that results from publicly funded research be made freely accessible 1. The EU has likewise decided that all publications funded by Horizon 2020 should be freely available 2. Most authors who publish their work as OA, whether mandated to or not, choose Gold OA, in which the cost of publishing is covered by the authors (Fig 1).
Figure 1. The current conflicts of interest could be alleviated through a new publishing system.
Simplified schematic representation of conflicts of interest in Gold OA publishing (left) and how they can be alleviated (right). Accessibility to services and scientific outcomes is illustrated with locks and may be free (open) or subject to fees (closed).
However, Gold OA has some drawbacks. Most importantly, it creates a conflict of interest: in a situation where there are more scientists than available positions, both journals and scientists benefit from publishing as many articles as possible 3. This is fertile ground for cheaters on both sides and helps to explain the emergence of predatory journals 4 and the increasing number of cases of scientific misconduct 5.
An alternative to Gold OA is Green OA, by which authors self‐archive their articles in online repositories 6. However, the fact that Green OA lacks peer review discourages many scientists from self‐archiving their papers. In addition, journals offer a range of services, such as editorial review, peer review, copy editing and long‐term archiving, that make it more attractive for scientists to submit their papers.
New publishing alternatives therefore try to combine the advantages of both models by focusing on low costs and transparent peer review. One of these is F1000 Research. Manuscripts are published immediately after submission, and peer review is performed openly by researchers identified by their name and institution. Once a paper has been approved by at least two referees, or one has approved it and two have “approved it with reservations”, it is indexed in PubMed and other databases. The publication costs are substantially lower than for most Gold OA journals.
Recently, major funding organisations such as the Wellcome Trust and the Bill & Melinda Gates Foundation, and institutions like the Montreal Neurological Institute and Hospital, have reached agreements with F1000, the company behind F1000 Research, to create their own publishing platforms. Another important funder, the European Commission, could follow suit 7. However, these platforms are only open to scientists funded by these institutions, which generates a new conflict of interest: if the platforms are successful, the prestige associated with publishing on them will likely increase, pushing scientists to align their research with the interests of these funders. Moreover, only the minority of scientists who are already well funded will benefit, thus promoting a rich‐get‐richer model.
In addition, new grassroots platforms for publishing are emerging. One of them is “Peer Community in”, a non‐profit scientific organisation that promotes the review and recommendation of articles in different fields. Its most successful branch, the Peer Community in Evolutionary Biology, offers free peer review. Once a manuscript is uploaded to a preprint repository, a “recommender” selects at least two reviewers. If, based on their reports, the recommender endorses the article, it can be cited as peer‐reviewed. A similar procedure can be applied to papers that have already been published. However, the platform does not publish negative reviews, although doing so could be an interesting way to deal with low‐quality publications.
A more ambitious example of a collaborative platform is the Self‐Journals of Science (SJS). Articles submitted to SJS are posted online and open to peer review by any registered scholar. Reviews are signed and are themselves subject to debate by other peers. Users can vote on whether they believe that a paper “has reached scientific standards” or “needs revisions”. Authors can then improve the paper, and the contribution of reviewers is visible to everyone in subsequent versions. SJS also publishes positive and negative comments on previously published papers. Finally, any scientist can curate a group of papers and convert them into a “Self‐Journal”. Importantly, all these tasks are tracked and evaluated by peers, generating a self‐organised process of transparent publishing and fair evaluation.
While these initiatives go in the right direction, the lack of funding for long‐term archiving and evaluation holds back progress. The current situation therefore requires public institutions to take the lead towards a more ambitious publishing system that not only makes research publicly accessible, but also more transparent and free of conflicts of interest such as the one between Elsevier and German universities 8.
We therefore suggest a model that would redistribute funding and the roles of different actors, such as scientists, metric companies and librarians, to maximise the impact of their respective skills for the benefit of science (Fig 1).
Research papers and scientific data should be published in several specialised, open and publicly funded storage repositories (SR). These repositories need not be centralised, so that each can offer its own specialised services, but they should implement standardised protocols to ensure interoperability. Librarian services would be critical for these platforms, and there should be specific funding to support them. This path is already being followed by the Confederation of Open Access Repositories (COAR).
Peer review should be self‐organised in a centralised and publicly funded peer review platform (PRP). Data and articles, or even already published manuscripts, would be linked to each scientific profile in the PRP and subject to discussion. Thus, the PRP would be a space for scientific debate where all activities performed by each scientist, such as commenting or peer‐reviewing, would be tracked and publicly evaluated by other scientists. The surge of new services to integrate publications, such as ReFigure by eLife, should be seen as a sign of the urgent need for such a space, and a number of successful self‐organised projects, such as Wikipedia or StackOverflow, support the viability of peer‐to‐peer models. Metrics developed to evaluate scientists should mainly take into account their activity within the platform, to minimise the influence of funders or publishers.
Companies that evaluate scientific research could convert the self‐evaluation of scientists in the PRP into simple metrics for funders. They would also have the important task of estimating, under these new standards, the impact of articles that have already been published.
Free from organising peer review and its associated costs, journals could focus on making research more accessible. This would allow them to expand their audiences to include industry and the public at large. Although publishing in journals would no longer constitute the main evaluation of scientists, researchers would still be interested in collaborating with journals to generate reviews, commentaries and other products that give them more visibility among their peers and facilitate outreach.
Public institutions should take the lead towards this change. They should establish a clear roadmap for introducing the new evaluation procedures, after which only publications in a PRP would be considered. In economic terms, the cost of implementing this model would be minuscule compared with current investments.
In summary, we believe that it is urgent to free science from the existing conflicts of interest, given how the current system is deteriorating. Our model would enable journals, evaluators, scientists, librarians and funders to develop and improve their activities, while separating their services and economic interests from research. Scientists would be freer to pursue research for its own sake, and their evaluation would become more transparent.
Conflict of interest
APG is a working member of OpenScholar, a not‐for‐profit independent organisation of scientists governing the Self‐Journals of Science. OpenScholar had no role in the preparation of the manuscript, and APG does not receive any economic reward for his activity in the organisation.
References
- 1. Kaiser J (2015) Science. https://doi.org/10.1126/science.aac8931
- 2. Enserink M (2017) Science. https://doi.org/10.1126/science.aag0577
- 3. Buranyi S (2017) The Guardian. https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science
- 4. Beall J (2016) Nature 534: 326
- 5. Kornfeld DS, Titus SL (2016) Nature 537: 29–30
- 6. Houghton JW, Oppenheim C (2009) Prometheus 26: 41–54
- 7. Enserink M (2017) Science. https://doi.org/10.1126/science.aal0977
- 8. Matthews D (2017) Times Higher Education. https://www.timeshighereducation.com/news/no-deal-between-germany-and-elsevier-what-would-it-mean