The use of artificial intelligence (AI) in academic paper writing is witnessing a surge in popularity. Specifically, ChatGPT, a member of the generative pre-trained transformer (GPT) family of large language models (LLMs), has found application across diverse scientific and medical domains, encompassing detection of drug interactions, drug safety monitoring, medical image analysis, and the refinement of research papers.1
Expediting the scientific writing process
Through the use of sophisticated algorithms and data analysis capabilities, ChatGPT plays a pivotal role in expediting the scientific writing process for researchers. This assistance is particularly valuable, as researchers often grapple with time constraints during the composition of scientific papers, a task known for its inherent lengthiness. ChatGPT assists researchers by organizing the relevant material, gathering and analyzing data, suggesting titles, outlining data analysis methods and results, drawing conclusions from the detailed results, generating the initial draft of a scientific paper, and proofreading the final version.2 Furthermore, ChatGPT proves invaluable for non-native speakers of the academic language, who may encounter difficulties in crafting scientific manuscripts.
Ethical considerations
Although ChatGPT offers undeniable advantages in scientific paper writing, several concerns have emerged. One notable apprehension is its potential to facilitate plagiarism if unscrupulous researchers misuse the software to effortlessly produce complete essays without putting in any genuine effort. The emergence of AI-generated texts presents a novel and intriguing development; however, it is fraught with unresolved issues from both legal and ethical standpoints.
Fostering dependency on AI
Research by Elali and Rachid3 has demonstrated that ChatGPT may produce erroneous or misleading interpretations in scientific writing. Fabricating or falsifying research becomes a disturbingly simple endeavor, especially in critical fields such as medicine, in which treatment decisions directly affect patients’ well-being. Consequently, researchers using ChatGPT must exercise caution and demonstrate a profound understanding of its multifaceted components to uphold ethical standards. Another potential drawback lies in the possibility that ChatGPT could discourage originality in writing by fostering excessive dependence. Overreliance on ChatGPT for generating ideas may diminish researchers’ capacity to formulate unique thoughts and perspectives, ultimately leading to a stagnant writing style bereft of creativity and originality (Figure 1).
Challenges in detection and authorship
Moreover, as previously reported,4 academic reviewers presented with abstracts generated by ChatGPT were able to identify only 63% of the fraudulent ones. Furthermore, authorship entails accountability for the work, a requirement that LLMs cannot fulfill. As a result, current authorship guidelines, including those of all Springer Nature journals5 and the Elsevier journals (see https://www.elsevier.com/about/policies/publishing-ethics), unequivocally stipulate that LLMs such as ChatGPT should not be acknowledged as co-authors.
Embracing the benefits of ChatGPT
Notwithstanding the potential drawbacks and risks associated with ChatGPT in academic paper writing, its benefits are both substantial and undeniable. ChatGPT can enable novel and creative approaches to academic paper writing by providing constructive feedback, guidance, and resources. It can also enhance efficiency and effectiveness by streamlining some of the mundane and laborious tasks of writing, such as formatting, referencing, and proofreading (Figure 1). The incorporation of ChatGPT into academic paper writing processes holds the potential to elevate the quality and productivity of research endeavors, yielding superior outcomes.
In sum, ChatGPT is a technological tool with the potential to yield both positive and negative outcomes contingent upon its prudent and judicious application. Although it has the potential to be a valuable asset for scientific writing, it is incapable of supplanting human writing and critical thinking skills. Therefore, any use of AI in academic writing must adhere steadfastly to the principles of honesty, rigor, and originality.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (grants 82374108 and 82004004) and the Shanghai Municipal Science and Technology Major Project.
Declaration of interests
The authors declare no competing interests.
Published Online: October 17, 2023
References
- 1. Lee P., Bubeck S., Petro J. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. N. Engl. J. Med. 2023;388:1233–1239. doi: 10.1056/NEJMsr2214184.
- 2. Salvagno M., Taccone F.S., Gerli A.G. Can artificial intelligence help for scientific writing? Crit. Care. 2023;27:75. doi: 10.1186/s13054-023-04380-2.
- 3. Elali F.R., Rachid L.N. AI-generated research paper fabrication and plagiarism in the scientific community. Patterns. 2023;4:100706. doi: 10.1016/j.patter.2023.100706.
- 4. Thorp H.H. ChatGPT is fun, but not an author. Science. 2023;379:313. doi: 10.1126/science.adg7879.
- 5. Editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613:612. doi: 10.1038/d41586-023-00191-1.