Korean Journal of Radiology
Letter. 2023 Aug 16;24(9):924–925. doi: 10.3348/kjr.2023.0738

The Integration of Large Language Models Such as ChatGPT in Scientific Writing: Harnessing Potential and Addressing Pitfalls

Shunsuke Koga
PMCID: PMC10462902; PMID: 37634646

I recently read the article by Dr. Park in the Korean Journal of Radiology addressing the integration of generative artificial intelligence (AI) in scientific publications [1]. The rapid rise of AI, particularly advanced language models such as ChatGPT, has showcased the proficiency of these systems in generating coherent, adaptable text. This capability has sparked discussion about AI's potential to transform the drafting of scientific manuscripts [2,3]. I appreciate the comprehensive and thoughtful approach Dr. Park adopted in exploring both the challenges and opportunities presented by large language models (LLMs) and other AI tools in this rapidly evolving domain [4]. With this foundation, I wish to contribute additional insights on the topic.

With the increasing influence of LLMs in scientific writing, there is a pressing need for targeted education on their strengths and limitations. Traditionally, universities and libraries have provided guidance on scientific writing, ethics, and literature searches; it has now become essential to incorporate education on LLMs into these instructional sessions. A prominent limitation of LLMs, commonly referred to as “hallucination,” arises when they generate seemingly credible but fabricated information [5]. This is especially concerning when LLMs create fictitious citations [6]. Such pitfalls demand heightened awareness among researchers and professionals who use these tools.

Furthermore, I agree with Dr. Park’s perspective on the role of LLMs in improving the linguistic quality of submissions. The dominance of English in scientific discourse, while streamlining global communication, also introduces barriers for non-native speakers, potentially sidelining valuable research and perspectives because of linguistic challenges [7]. This is particularly relevant in light of a recent study highlighting the difficulties faced by non-native English speakers in the scientific community [8]. As that study suggests, non-native speakers often invest considerably more effort in conducting scientific activities in English. Using LLMs as linguistic aids can bridge this language gap, ensuring that their research and insights are not marginalized by linguistic constraints.

While the potential of LLMs in drafting initial manuscripts is evident, human authors must continue to play a pivotal role in the creative process [1,9]. LLMs can be instrumental in shaping these drafts, but the final review, validation, and approval must remain a human endeavor [10]. Moreover, to capitalize on the benefits of LLMs, authors must acquire new skills in meticulously reviewing and editing their outputs. Our roles as authors will inevitably expand, heightening our responsibilities in the process.

In conclusion, the proposed guidelines, with their focus on human oversight and ethical considerations, are well positioned to guide the responsible use of generative AI in scientific publications [4]. Clear guidelines and continuing education are the cornerstones of ensuring the ethical and effective incorporation of AI into our scientific community.

Acknowledgments

This manuscript was proofread by ChatGPT (GPT-4) on August 6, 2023, and the author has verified the final content.

Footnotes

Conflicts of Interest: The author has no potential conflicts of interest to disclose.

Funding Statement: None

References

1. Park SH. Use of generative artificial intelligence, including large language models such as ChatGPT, in scientific publications: policies of KJR and prominent authorities. Korean J Radiol. 2023;24:715–718. doi: 10.3348/kjr.2023.0643.
2. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613:620–621. doi: 10.1038/d41586-023-00107-z.
3. Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379:313. doi: 10.1126/science.adg7879.
4. Park SH. Authorship policy of the Korean Journal of Radiology regarding artificial intelligence large language models such as ChatGPT. Korean J Radiol. 2023;24:171–172. doi: 10.3348/kjr.2023.0112.
5. Ji ZW, Lee N, Frieske R, Yu TZ, Su D, Xu Y, et al. Survey of hallucination in natural language generation. ACM Comput Surv. 2023;55:1–38.
6. McGowan A, Gui Y, Dobbs M, Shuster S, Cotter M, Selloni A, et al. ChatGPT and Bard exhibit spontaneous citation fabrication during psychiatry literature search. Psychiatry Res. 2023;326:115334. doi: 10.1016/j.psychres.2023.115334.
7. Woolston C, Osório J. When English is not your mother tongue. Nature. 2019;570:265–267. doi: 10.1038/d41586-019-01797-0.
8. Amano T, Ramírez-Castañeda V, Berdejo-Espinola V, Borokini I, Chowdhury S, Golivets M, et al. The manifold costs of being a non-native English speaker in science. PLoS Biol. 2023;21:e3002184. doi: 10.1371/journal.pbio.3002184.
9. Liebrenz M, Schleifer R, Buadze A, Bhugra D, Smith A. Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit Health. 2023;5:e105–e106. doi: 10.1016/S2589-7500(23)00019-5.
10. Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023 Jul 17 [Epub ahead of print]. doi: 10.1038/s41591-023-02448-8.
