Abstract
At Proceedings of the Royal Society A, one issue about which we are always concerned and vigilant is publication malpractice. This editorial examines the background to some small changes to our reviewer forms that will help us identify patterns of worrying behaviour. The importance of this for the relationship of science to policy-making, and for the public perception of science, is stressed.
Keywords: citations, bibliometrics, ethics
1. Introduction
We all have personal experience of metrics, originally designed to quantify and improve performance, having unintended consequences. In his 2018 book The Tyranny of Metrics [1], Muller catalogues and analyses examples from many walks of modern life. It therefore comes as no surprise that bibliometrics in academia are open not only to unintended consequences, but also to misuse and malpractice. The more that bibliometrics are used to decide academic appointments and promotions, to award research grants and academic prizes to individuals, or to quantify the status (‘impact factor’) of a journal, the greater the incentive to manipulate those metrics to one’s own advantage. One would think that academics would have the knowledge and the intelligence to know better; however, there is always somebody who thinks they can ‘game the system’ and get away with it, seemingly not knowing, or not caring, about the damage they do.
This is an issue of which we are very aware at Proceedings of the Royal Society A (PRSA), and one that we take very seriously. Peer review originated in 1665 with Henry Oldenburg [2], the Royal Society’s first secretary and the first editor of Philosophical Transactions. Over the following 200 years, peer review evolved, with the growing professionalization of science, into the key mechanism by which science arrives at a consensus. It is often described as out-dated, frustrating, cumbersome, flawed and inefficient. It is, for sure, all of those things, with one very important exception: it is certainly not out-dated. At a time when powerful commercial and political forces see the benefit of undermining scientific consensus where it gets in the way of their self-interest, it is more important than ever that we allow nothing to compromise the integrity of peer review and the consensus it builds. This means that ‘gaming’ the peer-review system is not a harmless bit of self-promotion; it brings into disrepute the central pillar of the age of reason that has brought so many benefits to all mankind. Keeping the literature record ‘clean’ is that important.
As discussed in the next section, journals have uncovered examples of ‘citation rings’ (a cabal of authors who agree to cite each other’s papers even where the citations are completely inappropriate) and ‘fake reviewer accounts’ (where authors create fake accounts and suggest them as potential reviewers, enabling them to review their own papers and to demand that authors whose papers they review cite their work). This has required some drastic and dramatic corrective actions. In the current issue, the paper by Smith and Cumberledge, ‘Quotation errors in general science journals’ [3], raises some smaller-scale but still uncomfortable concerns about possible malpractice in academic publication procedures for general science journals which, like PRSA, cover a wide range of scientific, mathematical and engineering disciplines.
2. Some examples of publication malpractice in science
In July 2014, SAGE announced the retraction of 60 articles implicated in a peer review and citation ring at the Journal of Vibration and Control (JVC). The full extent of the malpractice was only uncovered following a 14-month investigation and centred on the suspected misconduct of Peter Chen, formerly of National Pingtung University of Education, Taiwan, and possibly of other authors at the same institution. This even led to the resignation of the Education Minister, Chiang Wei-ling, because of his links to Chen. While investigating the JVC papers submitted and reviewed by Chen, it was discovered that as an author he had created various aliases, providing different email addresses to set up more than one reviewer account. Consequently, SAGE further scrutinized the co-authors of, and reviewers selected for, Chen’s papers; these names appeared to form part of a peer-review ring. The investigation also revealed that on at least one occasion, Chen reviewed his own paper under one of the aliases he had created. Sadly, this is not the only example of such behaviour that has been uncovered, and several other publishers have subsequently found it necessary to retract large numbers of papers (often more than 50).
Sometimes the malpractice originates from within the journal structure and procedures [4]. In January 2020, the editors at the Journal of Theoretical Biology announced in an editorial that they had investigated and barred an unnamed editor from the board for ‘scientific misconduct of the highest order’. The publisher, Elsevier, later confirmed that the barred editor was biophysicist Kuo-Chen Chou, who founded and ran the Gordon Life Science Institute in Boston, USA. Chou reportedly asked authors of dozens of papers he was editing to cite many of his publications, in some instances more than 50, and had even suggested that they change the titles of their papers to mention his work. The investigation found that Chou had requested, on average, that authors add 35 citations, 90% of them to papers he had authored or co-authored [5].
Such malpractice can crop up in any discipline and in any part of the world. As topical editor and reviewer, Artemi Cerdà handled 82 manuscripts for two journals of the European Geosciences Union (EGU), Solid Earth and SOIL. An investigation by EGU published in February 2017 found that for half of these manuscripts, he had suggested that authors add a total of 622 additional references. In one case, he suggested that an author add 53 references to a single manuscript. The report found that the authors complied with 399 of these citation requests (64%). In his role as reviewer, Cerdà also reportedly suggested that authors add another 423 references. EGU concluded: ‘From our analysis, it appears that only one editor, Artemi Cerdà, violated our ethical rule that any manipulation of citations (e.g. including citations not contributing to a manuscript’s scientific content, citations solely aiming at increasing an author’s or a journal’s citations) is regarded as scientific malpractice’. Cerdà and several other editors of the two journals resigned.
These examples were brazen, but the results of Smith & Cumberledge [3] suggest that we need to be aware of the potential for subtler malpractice at a lower level that is much harder to detect. These authors find an alarming number of ‘misquotations’, that is, citations that do not support what is claimed of them. Of course, these can arise from typographical errors, from authors misremembering which paper contains which evidence, or perhaps even from simple misunderstanding of the cited paper. However, misquotation is also the most likely sign that a citation ring is at work. The authors make the point that spotting misquotations currently requires expert knowledge, and this is a general problem for attempts to develop algorithms that automate the detection of citation rings [6]. As a result, Smith and Cumberledge reviewed only papers in areas where they had, or had access to, the required expertise. This means their results cannot be taken as an overall indication of the scale of the problem, as results are very likely to vary between disciplines. However, they are enough to set alarm bells ringing.
We worry about this for the sake of science in general but also for the journal itself. Thomson Reuters suspended the journal Applied Clinical Informatics (ACI) for its role in distorting the citation performance of Methods of Information in Medicine (MIM). Both journals are published by Schattauer Publishers in Germany. They found that 39% of 2015 citations to MIM came from ACI. More importantly, 86% of these citations were directed to the previous 2 years of publication, the years that count toward the journal’s impact factor. Thomson Reuters avoided using the terms ‘cartel’ or ‘cabal’ which imply a deliberate attempt to cheat, and used the more ambiguous term ‘citation stacking’ to describe the pattern, but the existence of the pattern alone was enough to cause them to suspend the journal.
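The pattern described above is effective because of how a journal impact factor is computed: citations made in year Y to items the journal published in years Y−1 and Y−2, divided by the number of citable items in that window. A toy calculation (all counts invented for illustration) shows why stacked citations are aimed at that two-year window:

```python
# Toy illustration of why 'citation stacking' targets the impact-factor
# window. The impact factor for year Y counts citations made in Y to items
# published in Y-1 and Y-2, per citable item. All figures are invented.

def impact_factor(citations_in_window, citable_items):
    """Citations in year Y to items from Y-1 and Y-2, per citable item."""
    return citations_in_window / citable_items

# Suppose a target journal published 200 citable items over the window.
organic_citations = 150   # citations arising in the normal way
stacked_citations = 120   # citations funnelled in from a partner journal

without_stacking = impact_factor(organic_citations, 200)
with_stacking = impact_factor(organic_citations + stacked_citations, 200)

print(f"{without_stacking:.2f} -> {with_stacking:.2f}")  # prints 0.75 -> 1.35
```

Citations aimed at older papers would inflate an author's totals but leave this ratio untouched, which is why the 86% figure above is so telling.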
These are just some examples; others are detailed on the website Retraction Watch, which was launched in August 2010 by two health journalists, Ivan Oransky and Adam Marcus. However, a 2018 report in the journal Science, produced in collaboration with Retraction Watch, concluded that the rapid growth in the total number of papers requiring retraction (due to honest error, data fraud and plagiarism, in addition to fake peer review and citation malpractice) was due more to better awareness and vigilance by journals than to a rapid rise in malpractice [7]. The rate of growth in the number of retractions required (for journals, excluding conference publications) has slowed. The number of scientific papers published worldwide roughly doubled between 2003 and 2016, and although the fraction needing retraction roughly doubled between 2003 and 2009, since 2012 it has stayed roughly constant at about four papers in every 10 000.
3. What are we doing to guard against malpractice?
At PRSA, we take these issues very seriously indeed, for the sake of the honest authors who publish with us, for the sake of the reputation of the journal and the Society, and for the sake of science in general. The Royal Society is a member of the Committee on Publication Ethics (COPE) and follows the guidance it has provided [8]. We continue to monitor the development of automated algorithms to detect malpractice and will implement them as soon as they are sufficiently effective and reliable [6]. In this context, we note that the analysis of citation practices to spot emerging scientific trends (e.g. [9]) is an important area of study, and the algorithms needed could emerge from such research. Meanwhile, we have to rely on the expertise of individuals. In the recent past, we have investigated a number of worrying patterns, in some cases challenging individuals to explain them, and we maintain a watching brief on anything that has given rise to concern in the past. But we are now also revising our guidelines so that reviewers and authors can help the editors and the journal office detect patterns that may need investigation.
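To give a flavour of what such automated detection involves, the following is a minimal sketch, not the method of [6] itself, of the kind of check a citation-network algorithm performs: flagging author pairs whose mutual citation counts are both unusually high. All names, counts and the threshold are invented for illustration; real detectors work on full citation networks and are far more sophisticated.

```python
# Sketch: flag suspiciously reciprocal citation relationships between
# authors in a toy directed citation graph. Names and counts are invented.
from collections import Counter

citations = Counter()  # (citing_author, cited_author) -> number of citations

def record(citing, cited, n=1):
    citations[(citing, cited)] += n

def reciprocal_pairs(threshold):
    """Author pairs where each cites the other at least `threshold` times."""
    flagged = set()
    for (a, b), n in citations.items():
        # a < b ensures each unordered pair is examined only once
        if a < b and n >= threshold and citations[(b, a)] >= threshold:
            flagged.add((a, b))
    return flagged

record("author_A", "author_B", 40)
record("author_B", "author_A", 37)
record("author_A", "author_C", 3)   # ordinary, low-volume citing

print(reciprocal_pairs(threshold=20))  # prints {('author_A', 'author_B')}
```

The hard part, as Smith and Cumberledge's results underline, is not counting citations but judging whether they are appropriate, which is why expert human review remains essential.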
Mixed in with attempts to deal with this is the well-understood phenomenon of ‘self-citation’. Excessive self-citation by authors is a common problem, and often arises simply because authors know their own work better than the wider, and sometimes more appropriate, literature. Because ‘excessive’ is a matter of judgement, this is not usually malpractice, but neither is it acceptable. Given the ever-growing volume of modern publication, it is very likely on the rise, as it becomes harder for one individual, or even one research team, to know all the relevant literature. However, it is relatively easy to detect, and most reviewers are well aware of the potential for it to occur and can (and do) request that other papers be cited (even if the request is for one of their own papers, which, given the cases discussed above, editors now unfortunately have to check is valid). We still ask referees and editors to suggest alternative or additional citations where they consider them appropriate, but in future these must be listed separately at the end of a review, and they will be kept in a database that will be searched for worrying patterns. Authors will be at liberty to ignore any suggested additional citations that are not listed in this way. We will also revise our advice to authors and to referees to remind them of the importance of adhering to high ethical standards. Advice to authors will ask that they alert us if they feel they are being coerced into citing inappropriate publications. We will also ask referees to make particular note of citations that they think are clear, and potentially deliberate, misquotations, and we will keep a database of such cases.
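We have not prescribed a particular search algorithm for that database. As one invented illustration of a ‘worrying pattern’ query, a simple signal is the share of a reviewer's suggested citations that point at a single author (for example, themselves); the function and all names below are hypothetical.

```python
# Hypothetical sketch: measure how concentrated a reviewer's suggested
# citations are on one author. A high value does not prove malpractice,
# but marks a pattern an editor might want to examine. Names invented.
from collections import Counter

def concentration(suggested_authors):
    """Fraction of suggestions pointing at the most-suggested author."""
    counts = Counter(suggested_authors)
    return max(counts.values()) / len(suggested_authors)

# Reviewer X suggested 10 citations across several reviews, 8 to one author.
suggestions = ["author_Y"] * 8 + ["author_Z", "author_W"]
print(concentration(suggestions))  # prints 0.8
```

Any real query would, of course, account for legitimate reasons for concentration, such as a field dominated by one group.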
To stress the key point, gaming the publication system for one’s own benefit is an attack on the very fabric of science and genuinely endangers the massive societal, technological, health and economic benefits that come from science. Sadly, a large and growing factor in how the world makes decisions is the Dunning–Kruger effect [10]. Because they fail to understand the full implications, many seek to cherry-pick the science, dismiss science that is inconvenient to them, and search the literature, press and social media with confirmation bias [11,12]. This is a particular problem for areas such as climate science (e.g. [13]) and the COVID-19 pandemic (e.g. [14]), where science has urgent and vital implications for policy. Hence evidence of malpractice in just one paper in a journal can, and will, be used by individuals exemplifying the Dunning–Kruger syndrome to disparage inconvenient science in a perfectly valid paper in the same journal. If mankind is to continue to enjoy the massive benefits of the age of reason, we must not allow this to happen, and we must do all we can to stamp out publication malpractice.
Data accessibility
This article has no additional data.
Competing interests
I declare I have no competing interests.
Funding
No funding has been received for this article.
References
- 1. Muller J. 2018. The tyranny of metrics. Princeton, NJ: Princeton University Press. (doi:10.2307/j.ctvc77h85)
- 2. Rix H. 1893. Henry Oldenburg, first secretary of the Royal Society. Nature 49, 9–12. (doi:10.1038/049009a0)
- 3. Smith N, Cumberledge A. 2020. Quotation errors in general science journals. Proc. R. Soc. A 476, 20200538. (doi:10.1098/rspa.2020.0538)
- 4. Wilhite AW, Fong EA. 2012. Coercive citation in academic publishing. Science 335, 542–543. (doi:10.1126/science.1212540)
- 5. Van Noorden R. 2020. Highly cited researcher banned from journal board for citation abuse. Nature 578, 200–201. (doi:10.1038/d41586-020-00335-7)
- 6. Fister I Jr, Fister I, Perc M. 2016. Toward the discovery of citation cartels in citation networks. Front. Phys. 4, 49. (doi:10.3389/fphy.2016.00049)
- 7. Brainard J. 2018. Rethinking retractions. Science 362, 390–393. (doi:10.1126/science.362.6413.390)
- 8. COPE Council. 2019. COPE discussion document: citation manipulation, July 2019. (doi:10.24318/cope.2019.3.1)
- 9. Asatani K, Mori J, Ochi M, Sakata I. 2018. Detecting trends in academic research from a citation network using network representation learning. PLoS ONE 13, e0197260. (doi:10.1371/journal.pone.0197260)
- 10. Kruger J, Dunning D. 1999. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J. Pers. Soc. Psychol. 77, 1121–1134. (doi:10.1037/0022-3514.77.6.1121)
- 11. Nickerson RS. 1998. Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2, 175–220. (doi:10.1037/1089-2680.2.2.175)
- 12. Tappin BM, van der Leer L, McKay RT. 2017. The heart trumps the head: desirability bias in political belief revision. J. Exp. Psychol. Gen. 146, 1143–1149. (doi:10.1037/xge0000298)
- 13. Yang X, Chen L, Ho SS. 2019. Does media exposure relate to the illusion of knowing in the public understanding of climate change? Public Underst. Sci. 29, 94–111. (doi:10.1177/0963662519877743)
- 14. Oehmen J, Locatelli G, Wied M, Willumsen P. 2020. Risk, uncertainty, ignorance and myopia: their managerial implications for B2B [business-to-business] firms. Ind. Mark. Manage. 88, 330–338. (doi:10.1016/j.indmarman.2020.05.018)