Editorial

Korean J Radiol. 2023 Jul 17;24(8):715–718. doi: 10.3348/kjr.2023.0643

Use of Generative Artificial Intelligence, Including Large Language Models Such as ChatGPT, in Scientific Publications: Policies of KJR and Prominent Authorities

Seong Ho Park

PMCID: PMC10400373 | PMID: 37500572

Generative artificial intelligence (AI) refers to algorithms that can create new content, such as text, code, images, video, and audio. Since the introduction of generative adversarial networks (GANs) to medical imaging [1,2], generative AI has attracted significant attention in the scientific community, leading to numerous publications in the past few years. The Korean Journal of Radiology (KJR) has published several articles on this topic [3,4,5]. However, the landscape of generative AI in scientific research and publication has shifted dramatically with the emergence of generative large language models (LLMs), such as ChatGPT, which generate text that closely resembles human writing and are easily accessible to the public. The use of LLMs is rapidly expanding in scientific publications [6], creating ethical and legal concerns and challenges related to research integrity, plagiarism, copyright infringement, and authorship, not only for authors but also for peer reviewers and editors [7,8,9]. Moreover, these concerns and challenges extend beyond AI-generated text and LLMs to other AI-generated content used in scientific publications.

Despite these concerns and challenges, generative AI can significantly enhance the reporting of scientific work if used responsibly; thus, an outright ban on this technology would be shortsighted [10]. Instead, it is crucial to establish guidelines that promote the responsible and effective use of generative AI in scientific publications [10]. KJR has already adopted a policy that explicitly prohibits authorship assignment to LLMs [11]. Herein, we present a more comprehensive journal policy regarding the use of generative AI in scientific publications. Our policy aligns with those of several prominent authorities in scientific publishing, as summarized in Table 1 [6,9,12,13,14,15,16,17]. Notably, the Science journals take a stricter stance than others, including KJR, banning the use of AI-generated content without explicit permission from the editors [16].

Table 1. Comparative summary of policies on the use of generative artificial intelligence by prominent authorities in scientific publication and the Korean Journal of Radiology.

Journal

JAMA and JAMA Network journals [12]
Guidelines for AI authorship: Nonhuman AI, language models, machine learning, or similar technologies do not qualify for authorship. If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools.
Additional guidelines for authors, reviewers, and editors:
• The submission and publication of content/images created by AI, language models, machine learning, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.
• Authors should report the use of AI, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or the Methods section if this is part of formal research design or methods. This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer. (Note: this does not include basic tools for checking grammar, spelling, references, etc.)

Journal of Clinical Oncology (JCO) [13]
Guidelines for AI authorship: JCO does not accept manuscripts with nonhuman authors. LLMs and AI tools cannot be listed as an author under any circumstances.
Additional guidelines for authors, reviewers, and editors:
• Authors must be aware of the rapidly evolving capabilities and deficiencies of these tools. Authors remain responsible for the accuracy of all content submitted and are liable for any breach of publication ethics.
• JCO generally discourages the use of LLMs and AI tools to generate written content in submissions. LLMs and AI tools used to assist in writing Original Reports or Clinical Trial Updates must be noted in the Acknowledgments. If LLMs or AI tools are used in the research itself (eg, data analysis), it must be disclosed in the Methods section. In either place, the authors must note the LLM or AI tool used, the version number, the date accessed, and the manufacturer/creator name, along with a description of how and for which parts of the submission the tools were used. AI tools used to assist with grammar, spelling, formatting, and reference clean up do not need to be disclosed.
• JCO forbids the use of LLMs or AI tools in the preparation of submissions primarily advancing the authors' opinion and perspective.
• Reviewers may not use LLMs or AI tools when reviewing work submitted to JCO for peer review.

Korean Journal of Radiology (KJR)
Guidelines for AI authorship: Authorship assignment to AI is prohibited.
Additional guidelines for authors, reviewers, and editors:
• Authors who employ generative AI tools are solely responsible for all content produced and submitted.
• KJR discourages the use of generative AI tools for the primary purpose of creating any type of content for scientific manuscripts. If such tools are used, the authors must report their use transparently, including specific details and a comprehensive explanation of their use in the study conduct and manuscript writing.
• The use of LLMs or other AI tools to enhance the linguistic quality of a submission is considered acceptable and does not require specific disclosure.
• When generative AI itself is the focus of a study, the use of AI should be explicitly detailed in the Materials and Methods section.
• Reviewers are forbidden from using LLMs for the primary purpose of generating review comments.

Nature and Springer Nature journals [14,15]
Guidelines for AI authorship: LLMs, such as ChatGPT, do not currently satisfy our authorship criteria.
Additional guidelines for authors, reviewers, and editors: Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript.

Science journals [16]
Guidelines for AI authorship: An AI program cannot be an author of a Science journal paper.
Additional guidelines for authors, reviewers, and editors: Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be the products of such tools, without explicit permission from the editors.

Organization

COPE [6]
Guidelines for AI authorship: COPE joins organisations, such as WAME and the JAMA Network among others, to state that AI tools cannot be listed as an author of a paper.
Additional guidelines for authors, reviewers, and editors: Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.

ICMJE [17]
Guidelines for AI authorship: Chatbots (such as ChatGPT) should not be listed as authors because they cannot be responsible for the accuracy, integrity, and originality of the work, and these responsibilities are required for authorship. Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author.
Additional guidelines for authors, reviewers, and editors:
• At submission, the journal should require authors to disclose whether they used AI-assisted technologies (such as LLMs, chatbots, or image creators) in the production of submitted work.
• Authors who use such technology should describe, in both the cover letter and the submitted work, how they used it.
• Humans are responsible for any submitted material that included the use of AI-assisted technologies.
• Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased.
• Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI.
• Humans must ensure there is appropriate attribution of all quoted material, including full citations.

WAME [9]
Guidelines for AI authorship: Chatbots cannot be authors.
Additional guidelines for authors, reviewers, and editors:
• Authors should be transparent when chatbots are used and provide information about how they were used.
• Authors are responsible for material provided by a chatbot in their paper (including the accuracy of what is presented and the absence of plagiarism) and for appropriate attribution of all sources (including original sources for material generated by the chatbot).
• Editors and peer reviewers should specify, to authors and each other, any use of chatbots in the evaluation of the manuscript and generation of reviews and correspondence. If they use chatbots in their communications with authors and each other, they should explain how they were used.
• Editors need appropriate tools to help them detect content generated or altered by AI. Such tools should be made available to editors regardless of ability to pay for them, for the good of science and the public, and to help ensure the integrity of healthcare information and reduce the risk of adverse health outcomes.

Authorities are listed in alphabetical order within each category. Entries for authorities other than KJR are direct quotes from their respective statements; the KJR entry summarizes the journal's current policy. Please refer to the main text for further details.

AI = artificial intelligence, LLM = large language model, COPE = Committee on Publication Ethics, WAME = World Association of Medical Editors, ICMJE = International Committee of Medical Journal Editors

We present the following guidelines for the proper use of generative AI in manuscripts submitted to KJR:

1. Authorship assignment to AI is prohibited, as stated in our previous policy editorial [11].

2. Authors who employ generative AI tools are solely responsible for all content produced and submitted. They shall be accountable for any ethical or legal breach, such as plagiarism or copyright infringement.

3. KJR discourages the use of generative AI tools for the primary purpose of creating any type of content for scientific manuscripts, except for studies mentioned in point 5 below. However, if such tools are used, the authors must report their use transparently. The report should include specific details, such as the name and version of the AI tool, the date of access, and the name of the manufacturer/creator, as well as a comprehensive explanation of the use in the study conduct and manuscript writing. Authors may provide this information in a relevant section of the manuscript (e.g., figure legends for AI-generated figures) or collectively in the Acknowledgments section (see the example disclosure following these guidelines).

4. The use of LLMs or other AI tools to enhance the linguistic quality of a submission is considered acceptable. This includes improving grammatical accuracy, rectifying typographical errors, enhancing formatting, ensuring clarity, etc. Such applications can be particularly beneficial for non-native English speakers and do not require specific disclosure.

5. When generative AI itself is the focus of a study, for example, research employing GANs in medical image analysis or investigating the use of LLMs for medical inquiries [3,5,18,19], the use of AI should be explicitly detailed in the Materials and Methods section.

6. Reviewers are forbidden from using LLMs for the primary purpose of generating review comments. The review process is valued for its human expert perspective, and substituting this perspective with AI-generated input is not permitted. However, reviewers may use LLMs or other AI tools to enhance the linguistic quality of their review comments (improve grammatical accuracy, rectify typographical errors, enhance formatting, ensure clarity, avoid demeaning or condescending tones, etc.).
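As a hypothetical illustration of the disclosure described in point 3, an Acknowledgments statement might read as follows (the tool name, version, access date, and described use are examples only, not prescribed wording): "ChatGPT (GPT-4, May 2023 version; OpenAI), accessed on June 1, 2023, was used to generate a first draft of the background description in the Introduction; the authors reviewed, edited, and verified all AI-generated text and take full responsibility for its accuracy."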

KJR acknowledges that authors and reviewers may find generative AI tools, particularly LLMs, useful for scientific writing and review processes. However, generative AI tools should be used carefully and responsibly. We believe that these guidelines will promote the proper use of generative AI and facilitate the sharing of valuable scientific information through publications while avoiding scientific misconduct and breach of publication ethics.

Footnotes

Conflicts of Interest: The author has no potential conflicts of interest to disclose.

Funding Statement: None

References

1. Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Ghahramani Z, Welling M, Cortes C, Lawrence N, Weinberger KQ, editors. Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS 2014); 2014 Dec 8-13; Montréal, Canada. Cambridge: MIT Press; 2014. p. 2672–2680.
2. Wolterink JM, Mukhopadhyay A, Leiner T, Vogl TJ, Bucher AM, Išgum I. Generative adversarial networks: a primer for radiologists. Radiographics. 2021;41:840–857. doi: 10.1148/rg.2021200151.
3. Bae K, Oh DY, Yun ID, Jeon KN. Bone suppression on chest radiographs for pulmonary nodule detection: comparison between a generative adversarial network and dual-energy subtraction. Korean J Radiol. 2022;23:139–149. doi: 10.3348/kjr.2021.0146.
4. Park JE, Vollmuth P, Kim N, Kim HS. Research highlight: use of generative images created with artificial intelligence for brain tumor imaging. Korean J Radiol. 2022;23:500–504. doi: 10.3348/kjr.2022.0033.
5. Yan C, Lin J, Li H, Xu J, Zhang T, Chen H, et al. Cycle-consistent generative adversarial network: effect on radiation dose reduction and image quality improvement in ultralow-dose CT for evaluation of pulmonary tuberculosis. Korean J Radiol. 2021;22:983–993. doi: 10.3348/kjr.2020.0988.
6. Committee on Publication Ethics. Authorship and AI tools: COPE position statement [accessed on July 10, 2023]. Available at: https://publicationethics.org/cope-position-statements/ai-author
7. Donker T. The dangers of using large language models for peer review. Lancet Infect Dis. 2023;23:781. doi: 10.1016/S1473-3099(23)00290-6.
8. Garcia MB. Using AI tools in writing peer review reports: should academic journals embrace the use of ChatGPT? Ann Biomed Eng. 2023 Jun 27 [Epub]. doi: 10.1007/s10439-023-03299-7.
9. Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña JF, et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications [accessed on July 10, 2023]. Available at: https://wame.org/page3.php?id=106
10. Li H, Moon JT, Purkayastha S, Celi LA, Trivedi H, Gichoya JW. Ethics of large language models in medicine and medical research. Lancet Digit Health. 2023;5:e333–e335. doi: 10.1016/S2589-7500(23)00083-3.
11. Park SH. Authorship policy of the Korean Journal of Radiology regarding artificial intelligence large language models such as ChatGPT. Korean J Radiol. 2023;24:171–172. doi: 10.3348/kjr.2023.0112.
12. JAMA. Instructions for authors [accessed on July 10, 2023]. Available at: https://jamanetwork.com/journals/jama/pages/instructions-for-authors
13. Miller K, Gunn E, Cochran A, Burstein H, Friedberg JW, Wheeler S, et al. Use of large language models and artificial intelligence tools in works submitted to Journal of Clinical Oncology. J Clin Oncol. 2023;41:3480–3481. doi: 10.1200/JCO.23.00819.
14. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613:612. doi: 10.1038/d41586-023-00191-1.
15. Nature. For authors: initial submission [accessed on July 10, 2023]. Available at: https://www.nature.com/nature/for-authors/initial-submission
16. American Association for the Advancement of Science. Science journals: editorial policies [accessed on July 10, 2023]. Available at: https://www.science.org/content/page/science-journals-editorial-policies
17. International Committee of Medical Journal Editors. Recommendations [accessed on July 10, 2023]. Available at: https://www.icmje.org/recommendations
18. Haver HL, Ambinder EB, Bahl M, Oluyemi ET, Jeudy J, Yi PH. Appropriateness of breast cancer prevention and screening recommendations provided by ChatGPT. Radiology. 2023;307:e230424. doi: 10.1148/radiol.230424.
19. Rahsepar AA, Tavakoli N, Kim GHJ, Hassani C, Abtin F, Bedayat A. How AI responds to common lung cancer questions: ChatGPT vs Google Bard. Radiology. 2023;307:e230922. doi: 10.1148/radiol.230922.
