Medical Journal, Armed Forces India. 2016 Dec 24;73(2):181–183. doi: 10.1016/j.mjafi.2016.11.008

The ethics of peer and editorial requests for self-citation of their work and journal

Jaime A. Teixeira da Silva
PMCID: PMC5592272  PMID: 28924321

Abstract

Peer reviewers are expected to be experts in a field of study and should be well versed in the literature pertinent to the manuscript they are reviewing. Editors might not necessarily be experts in a particular field, but they have the responsibility of overseeing the requests made by peers and of assessing whether or not these are ethically appropriate. Thus, requests by peers to cite unrelated literature, which may or may not be their own, could be unethical, especially if the objective is to improve their own citations or to boost the citations of the journal for which they are reviewing. In contrast, requests to cite pertinent work that is in fact missing from the paper's reference list, even if it is the reviewer's or editor's work, or from the same journal, are acceptable. Editors ultimately approve the requests and suggestions made by reviewers, so inappropriate suggestions made by peer reviewers are the responsibility of the editor and journal. There needs to be a bias-free mechanism in place that offers protection to authors who wish to complain, and consequences for editors who do not reach an impartial decision. Authors have the right to challenge such suggestions, but may face unfair retaliation in the form of a rejection if they resist making changes that they perceive as inappropriate.

Keywords: Citation manipulation, Journal impact factor (JIF), Self-citation, Unrelated or inappropriate references

Incentives in science publishing are perverted

The Thomson Reuters Journal Impact Factor (JIF) is often erroneously used to judge the quality of a paper, and also the quality of scientists who publish in JIF-based journals.1 The JIF was originally established to assess the level to which a journal was cited, and was meant exclusively to judge the "popularity" of a journal, not the productivity or quality of a paper or author. With the expansion of the "publish or perish" culture in science, in which research institutes increasingly expect to see tangible proof of research productivity, most notably as published papers, the JIF gradually became a centrepiece of the publishing industry, luring repeat and new authors with this simple carrot. It may also stimulate cheating and misconduct.2 Such metrics, including the non-JIF-based metrics of the "predatory" open access movement, are mere tactics to offer a false sense of appraisal to scientists, who can then attach a tangible and quantifiable value to their research output and publishing productivity, even if erroneously. For example, very crudely, authors whose papers are never cited might be credited with the JIF score of that journal,3 i.e., they may be falsely rewarded with a value even though their work is not tangibly appreciated, or used, by fellow peers.

Unfortunately, metrics like the JIF are used for formal evaluations and promotions of scientists in academia, particularly in developing nations, and are thus open to wide abuse by authors, editors and journals. Several developing countries, such as Iran or China, reward their scientists financially based on the JIF of their publications, and offer other perverted rewards based on this dubious academic criterion, namely higher salaries, larger research grants, or even professorships, if the JIF is high enough or if the target journal is prestigious enough, such as Science or Nature. Thus, a system is in place that encourages potential abuse by authors who wish to promote themselves in a score-based hierarchy, by editors who wish to see their journal achieve a higher JIF while under their leadership, and by publishers who wish to see authors and editors satisfied by this citation game, thereby attracting new authors to the same journal. When the academic base of a journal and of publishing is compromised by a non-academic parameter such as the JIF, or other metrics, then academic objectives are skewed, true intentions become fuzzy, and trust is lost.

Journals that strive to newly obtain a JIF, or that wish to increase their JIF, may be tempted to manipulate citations in ways that boost it. Citation stacking and citation cartels, which involve collusion between two or more journals that inappropriately cite each other to mutually increase their citations,4 and citation rings,5 which involve fraudulent techniques such as fake email addresses or fabricated peer reports to guarantee publication and thus citation, form part of a widening corrupting trend in the abuse of the JIF, and thus contribute to an overall collapse in academic integrity. There are concerns that the recent (July 11, 2016) sale of the JIF, as part of Thomson Reuters' intellectual property and science business, to Onex Corporation and Baring Private Equity Asia, both private equity firms, will result in further abuse of the JIF as it becomes more commercialized and marketed not as an academic parameter but as a sales pitch.
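For reference, a minimal sketch of the standard two-year JIF definition (not stated in this article, but widely documented) helps to show why attributing the JIF to an individual paper or author is misleading:

$$\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}$$

Because this is a journal-level average, a journal with a JIF of 5 receives, on average, five citations per citable item, yet a given paper in that journal may have received none; crediting that paper or its authors with the journal's JIF therefore rewards them with a value their own work has not earned.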
The use of non- or pseudo-academic indices like the JIF in an attempt to classify the academic quality of a journal can of course lead to abuses, leading some prominent journals to begin turning away from the JIF.6 However, this movement likely represents a minority of cases, and the overall trend will be toward increased use of, and reliance on, the JIF for scoring journal and paper quality, and scientists' quality and productivity.

The role of peers and editors in securing the integrity of the academic record

Peer reviewers and editors serve as the gatekeepers of the scientific quality, and integrity, of a published paper, even though authors, including the corresponding author, should ultimately take responsibility for what they have submitted and published.7, 8 The responsibility of peers is to offer a strict, but unbiased and balanced, critique of the scientific validity of the claims made, and where possible, editors should use several peers to complete the task of vetting the scientific validity of a paper. Bias in the selection of peers by editors, and in the evaluation of manuscripts, must be avoided, even though editors are endowed with editorial independence.9 The validity of a peer should also be carefully vetted, to avoid abuse of the peer review process, either as a result of unqualified peers or as the result of fraudulently nominated peers.10 Peers who suggest the citation of their own work when it is clearly unrelated to the paper being reviewed should be removed from the peer reviewer pool (i.e., blacklisted), and their review reports should be disregarded by the handling editor. Similarly, there should be a channel of complaint for authors who detect similar abuse of the scientific vetting process by editors, and authors should be able to approach the publisher in confidence and without fear of professional retaliation.

The risks imposed when editors or peers request citation of their work or journal

During peer review, peer reviewers or editors may request that authors add citations to their own literature or to literature from the same journal, most commonly in the introduction or discussion sections, but occasionally in the methods section. If these references are thematically closely related to the paper, and if they fortify the background or an understanding of the mechanism behind the paper's findings, then such requests are perfectly valid and ethical, even if the references are the editor's or reviewer's own. However, requests to add citations from that journal as a pre-condition of acceptance, or the inclusion of references that are of marginal or no relevance to the paper's topic, constitute unethical requests and should be called out and sanctioned, without fear of retaliation. Scientists may feel pressure to include such references, or to abide by requests for citation manipulation, simply because they fear that not doing so would result in a rejection, even if the scientific aspects of their papers are intact. Unfortunately, the culture of fear that pervades the traditional publishing system11 will likely spur authors to bend their ethical stance to satisfy a peer reviewer or editor rather than enter into direct conflict and face a rejection. Scientists should not be placed in such uncomfortable situations, where they are forced to accede to an unethical editorial request to guarantee the acceptance of their paper. Making acceptance conditional by placing authors in an ethical bind is unfair and may also constitute professional incompetence on the part of the reviewer or editor, who should be selected on strict academic criteria, including a solid ethical stance and thematic qualifications. Moreover, authors deserve stronger rights to decline such requests, to report such unethical behavior to the publisher's management, and to seek the removal of such editors from the editorial board of a journal. Sadly, in the publishing process, authors' rights are minimal or suppressed,12 and the culture of fear and of peer and editorial coercion persists when authors are faced with unethical and/or pressure-inducing requests to cite the literature of that peer, editor, or journal.

Conclusions

Science publishing remains a very biased process by virtue of the biased nature of humans. Since the human factor cannot be eliminated from the publishing process, publishers and editors have devised ways of reducing bias, but all currently used processes have flaws. The first flaw is the existence of perverted publishing incentives that spur scientists to publish for the wrong reasons, often chasing a JIF score for a published paper rather than a structurally sound manuscript that has value to peers, independent of the venue. Research institutes and bureaucrats seeking to promote the profiles of higher institutes of learning are mostly responsible for inculcating this perverse culture of metrics. Publishers naturally build their empires on such perverted incentives, disguise their true for-profit intentions with sweetly worded marketing punch-lines, coat their business model with a veneer of ethics, and exploit the biases and fallibilities of human nature to lure authors, while exploiting the professionalism of qualified professionals who serve as free labor in the form of peer reviewers and editors, elevating them to smoky pedestals with no real intrinsic value. This human exploitation also expresses itself in the form of false incentives, like the JIF and other metrics, which are themselves, academically speaking, useless. Yet the publishing market, and author demand and participation, are remarkably driven by such metrics. Within this highly biased and toxic environment, perverted incentives and citation cartels13 may also drive peer reviewers and editors to seek excessive citation of their own work, or of their journal, in a bid to drive up their own citations. Authors need a fortified and independent framework in which their rights are clearly spelled out and in which they are able to challenge such editorial abuses, to ensure that the publishing process is just a little more robust, accountable, and ultimately trustworthy.

Conflicts of interest

The author has none to declare.

Acknowledgments

The author thanks Omid Mahian (Department of Mechanical Engineering, Ferdowsi University of Mashhad, Iran) for exchanging ideas during an initial stage of the development of this article.

References

1. Casadevall A., Fang F.C. Impacted science: impact is not importance. mBio. 2015;6(5):e01593-15. doi: 10.1128/mBio.01593-15.
2. Qiu J. Publish or perish in China. Nature. 2010;463:142–143. doi: 10.1038/463142a.
3. Nielsen M.B., Seitz K. Impact factors and prediction of popular topics in a journal. Ultraschall Med. 2016;37(4):343–345. doi: 10.1055/s-0042-111209.
4. Mongeon P., Waltman L., de Rijcke S. What do we know about journal citation cartels? Some evidence and a call for information. 2016. https://www.cwts.nl/blog?article=n-q2w2b4 (accessed 23.10.16).
5. Biagioli M. Watch out for cheats in citation game. Nature. 2016;535:201. doi: 10.1038/535201a.
6. Callaway E. Beat it, impact factor! Publishing elite turns against controversial metric. Nature. 2016;535:210–211. doi: 10.1038/nature.2016.20224.
7. Teixeira da Silva J.A. Responsibilities and rights of authors, peer reviewers, editors and publishers: a status quo inquiry and assessment. Asian Aust J Plant Sci Biotechnol. 2013;7(Special Issue 1):6–15.
8. Teixeira da Silva J.A., Dobránszki J., Van P.T., Payne W.A. Corresponding authors: rules, responsibilities and risks. Asian Aust J Plant Sci Biotechnol. 2013;7(Special Issue 1):16–20.
9. Kassirer J.P. Editorial independence: painful lessons. Lancet. 2016;387(10026):1358–1359. doi: 10.1016/S0140-6736(16)30089-7.
10. Ferguson C., Marcus A., Oransky I. Publishing: the peer-review scam. Nature. 2014;515:480–482. doi: 10.1038/515480a.
11. Teixeira da Silva J.A., Dobránszki J. Problems with traditional science publishing and finding a wider niche for post-publication peer review. Account Res. 2015;22(1):22–40. doi: 10.1080/08989621.2014.899909.
12. Al-Khatib A., Teixeira da Silva J.A. What rights do authors have? Sci Eng Ethics. 2016 (in press). doi: 10.1007/s11948-016-9808-8.
13. Davis P. Visualizing citation cartels. 2016. https://scholarlykitchen.sspnet.org/2016/09/26/visualizing-citation-cartels (accessed 23.10.16).
