Journal of Korean Medical Science. 2025 Jun 10;40(23):e187. doi: 10.3346/jkms.2025.40.e187

Defining the Boundaries of AI Use in Scientific Writing: A Comparative Review of Editorial Policies

Jin-Hong Yoo
PMCID: PMC12170296  PMID: 40524628

Abstract

The rapid rise of generative artificial intelligence (AI) is fundamentally transforming the landscape of medical writing and publishing. In response, major academic organizations and high-impact journals have released guidelines addressing core ethical concerns, including authorship qualification, disclosure of AI use, and the attribution of accountability. This review analyzes and compares key statements from several international organizations of medical and scientific editors, along with the submission policies of leading journals. It also evaluates the AI usage policy of the Journal of Korean Medical Science (JKMS), which presents one of the most specific frameworks among Korean journals, and offers suggestions for refinement. While most journals prohibit listing AI tools as authors, their stances on AI-assisted writing vary. JKMS aligns with international norms by prohibiting AI authorship and recommending that authors explicitly report the tool name, prompt, purpose, and scope of AI use. This policy demonstrates a flexible but principled approach to AI integration. The limitations of AI detection tools are also discussed. These tools often struggle with accuracy and bias, with known tendencies to misclassify human-written content as AI-generated. As such, sole reliance on detection tools is insufficient for editorial decisions. Instead, fostering a culture of ethical authorship and responsible disclosure remains essential. This review highlights the need for balanced policies that promote transparency without impeding innovation. By clarifying disclosure expectations and reinforcing human accountability, journals can guide the ethical use of AI in scientific writing and maintain the integrity of scholarly communication.

Keywords: Generative AI, ChatGPT, Authorship, Writing, Publishing

Graphical Abstract


INTRODUCTION

The recent emergence of generative artificial intelligence (AI) is fundamentally transforming the paradigm of medical publishing.1,2 Large language model-based AI tools, including ChatGPT, have made significant progress in recent years, with rapid performance improvements. Consequently, the use of these tools in the preparation of medical manuscripts has increased dramatically.3,4,5 However, this technological advancement has raised numerous ethical and practical concerns, including issues of academic integrity, copyright, and accountability. In response, leading international organizations of medical and scientific editors and major journals have been expediting the establishment of clear guidelines for AI utilization.6

Generative AI has advanced at an unprecedented pace. If the exponential growth in large language model capability continues, machine writing performance may soon reach a “writing singularity,” the point at which machine-generated text is virtually indistinguishable from human writing. Tasks that once required weeks or even months, such as the exhaustive collection and organization of source materials, can now be completed in a matter of hours. Because the essence of scientific writing lies in how creatively and rigorously researchers refine their own scholarly questions, rather than in the manual drudgery of data gathering, continuing to rely on outdated workflows is difficult to justify. Inevitably, most researchers will turn to AI tools, including the “deep research” features of ChatGPT and Gemini, to streamline repetitive tasks. AI-assisted writing has become an increasingly integral part of contemporary scholarship. Given the scale and inevitability of this transformation, it is no longer sufficient to treat AI as a marginal tool. In light of this paradigm shift, an outright ban on AI use in scientific publishing warrants serious reconsideration: does such a restriction genuinely advance academic progress, or does it risk hindering it?

This is not a purely theoretical concern, as emerging publication patterns already hint at AI’s tangible impact. A concrete example illustrates how generative AI may already be reshaping the academic publishing ecosystem: a recent study reported a striking surge, beginning around 2022, in research papers utilizing public datasets such as the National Health and Nutrition Examination Survey (NHANES).7 These papers have been disproportionately submitted to a limited number of journals and exhibit remarkably similar structures and analytical approaches. Rather than representing a triumph of reproducibility, this trend suggests a pattern of mass production resembling that of a “paper mill,” following rigid templates.

What is particularly noteworthy is the temporal overlap between this phenomenon and the widespread availability of generative AI technologies. GPT-based drafting tools have enabled researchers—even those without substantial experience in coding or statistical analysis—to rapidly generate manuscripts. This growing reliance on AI-generated drafts is further compounded by academic evaluation systems that prioritize publication count as a key performance metric, inadvertently incentivizing volume over quality.

It is likely that the NHANES-related trend represents only the tip of the iceberg. If left unaddressed, such trends risk undermining the integrity of scholarly communication. Scientific journals therefore cannot safeguard that integrity merely by detecting technical errors or instances of plagiarism. Instead, they should strengthen qualitative peer review criteria that assess the originality of research questions, creativity in data utilization, and the soundness of analytical methods. Such developments exemplify the pressing need for clearer, ethically sound guidelines on how generative AI can be used, appropriately and transparently, within scientific writing.

This review provides a comparative analysis of submission guidelines on AI usage, with particular emphasis on the policies of the Journal of Korean Medical Science (JKMS) as a reference point. It examines the guidelines of major publication-ethics organizations, namely the International Committee of Medical Journal Editors (ICMJE), the World Association of Medical Editors (WAME), and the Committee on Publication Ethics (COPE), as well as those of leading medical and scientific journals. By identifying commonalities and differences among these guidelines, this review proposes directions for enhancing AI-assisted writing policies.

To provide a foundation for this work, it is first necessary to define the types of AI involvement in scientific writing.

AI-GENERATED VS. AI-ASSISTED WRITING

The use of AI in medical research writing is at the center of an ethical debate. It is crucial to clearly distinguish between AI-generated scientific writing (i.e., fully AI-generated text) and AI-assisted scientific writing (i.e., AI-supported writing). The former refers to cases where AI autonomously generates text without human intervention, which can seriously compromise academic integrity and transparency, making it unacceptable. This concern is particularly significant in medical research, where trustworthiness and accuracy are paramount, as it directly impacts human health and lives.

On the other hand, AI-assisted scientific writing refers to the process where researchers leverage AI tools to support their writing tasks. This may include generating preliminary drafts, enhancing sentence structures, or suggesting language improvements. While the use of AI to generate drafts is considered ethically acceptable by many, this acceptance is typically predicated on the condition that human authors retain full responsibility by critically reviewing and revising the content. The key point is that researchers do not blindly accept the AI-generated content. Instead, they critically review, edit, and ensure that the final work adheres to scientific validity and ethical standards. Such active human involvement guarantees accountability and ultimately maintains the reliability of the research.

GUIDELINES OF INTERNATIONAL EDITORIAL ORGANIZATIONS

ICMJE

The ICMJE explicitly clarified in its 2023 guidelines that generative AI tools do not meet the criteria for authorship and, therefore, cannot be recognized as authors.8 This determination is based on the fact that AI cannot perform tasks such as creative decision-making, assuming accountability, or disclosing conflicts of interest. Furthermore, if AI tools are used in manuscript preparation, the ICMJE mandates that this must be clearly disclosed in the manuscript, with a detailed explanation of the purpose and extent of use. The ICMJE emphasizes that such disclosure is essential for maintaining transparency and academic integrity.

WAME

The WAME does not explicitly prohibit the use of AI tools but strongly advocates for clear boundaries.6 WAME insists that if authors use AI, this must be explicitly disclosed, especially because AI involvement can potentially influence scientific interpretation and conclusions. Authors must ensure that the entire manuscript is thoroughly reviewed and validated by human authors, who retain full responsibility. WAME further emphasizes that AI use must not replace or obscure the authors' actual contributions, and academic accountability must remain clear.

COPE

The COPE does not ban the use of AI but requires transparent disclosure so that editors and reviewers can recognize and assess the influence of AI usage.9 COPE warns that AI use can pose risks to research integrity, including data fabrication, copyright infringement, and ethical violations. Therefore, it recommends the establishment of an editorial judgment system to preemptively mitigate these risks. COPE also highlights the need to collect cases of publication ethics violations related to AI use and to strengthen educational efforts for editors, reviewers, and authors.

SUBMISSION GUIDELINES ON AI USAGE IN MAJOR ACADEMIC JOURNALS

Leading academic journals around the world adopt varying positions regarding the use of AI, but they consistently emphasize two core principles: author accountability and disclosure.

The New England Journal of Medicine (NEJM) requires that any use of AI tools in a submitted manuscript be disclosed, with a clear description of the specific AI tools used and the content they generated. AI cannot be credited as an author, and authors are responsible for ensuring the accuracy, integrity, and originality of AI-generated materials, which cannot be cited as primary sources.10

The Lancet does not impose an outright ban on the use of AI but requires clear disclosure. Authors are expected to specify the scope and role of AI usage within the manuscript. However, it strongly emphasizes that full responsibility for the content remains with the human authors.11

Nature acknowledges the potential use of AI tools but explicitly states that AI cannot be listed as a co-author. It mandates that any use of AI must be clearly disclosed, specifying its role and scope, particularly within the Methods section of the manuscript.12

The Science journals adopt one of the strictest positions regarding the use of generative AI. According to their official policy, tools such as ChatGPT do not qualify for authorship and may not be credited as authors or coauthors. Moreover, even citations of content generated by AI tools are prohibited. If authors use AI tools to assist in manuscript preparation, they must clearly disclose this in the cover letter and the acknowledgments section. In addition, the methods section should include the exact prompt used and the version of the AI tool employed. Editors may decline to proceed with review if AI has been used inappropriately—effectively amounting to a ban on undisclosed or unapproved AI-generated text. This policy goes beyond merely clarifying authorship qualifications. It emphasizes scientific accuracy, the prevention of plagiarism, and caution against algorithmic bias. Notably, Science also prohibits peer reviewers from using AI tools when drafting evaluations, underscoring the need to protect confidentiality and ethical responsibility during the review process.13

The British Medical Journal (BMJ) permits the use of AI tools in submitted manuscripts but requires clear disclosure of such usage, including the name of the AI tool used, its purpose, the manner of its use, and the content it generated. AI cannot be recognized as an author, as The BMJ only accepts human authors who can take full responsibility for the work. The responsibility for the accuracy, integrity, and originality of AI-generated content rests entirely with the human authors, who are accountable for any errors or inaccuracies in AI-generated material.14,15

Annals of Internal Medicine permits the use of AI tools in the creation of submitted manuscripts. However, authors are required to explicitly disclose any use of such technologies at the time of submission. This disclosure must be clearly stated in both the cover letter and the manuscript itself, detailing the specific AI tools used, their purpose, and the manner in which they were applied. AI tools cannot be listed as authors. This policy aligns with the recommendations of the ICMJE, which stipulate that authors must be accountable for the accuracy, integrity, and originality of their work. Authors must ensure that any material produced using AI is accurate, free of errors, and ethically sound, and they are fully accountable for any inaccuracies or ethical breaches associated with AI-generated content.16

Journal of the American Medical Association (JAMA) maintains that generative AI tools cannot replace the creativity, judgment, and ethical responsibility required of human authors. Accordingly, these tools cannot be credited as authors, and the use of AI-generated text or images must be explicitly disclosed. Specifically, authors must report the name and version of the tool used, the prompt entered, and the nature of the AI-generated content, typically in the Acknowledgments or Methods section. The submission system also includes a structured process to confirm whether such tools were used. In addition, the use of AI-generated clinical images is, in principle, prohibited—exceptions may be granted only when such use is an integral part of the study design. Peer reviewers are similarly restricted: entering manuscript content into an AI tool to analyze or generate review text constitutes a breach of confidentiality. If AI tools are used in a supportive capacity, the reviewer must disclose this transparently. Interestingly, JAMA does allow limited use of AI technologies on the editorial side. These tools may assist with technical processes such as detecting duplicate submissions, parsing metadata, and verifying citations. However, final editorial decisions remain the sole responsibility of human editors. JAMA’s policies align closely with the recommendations issued by COPE and the ICMJE, and the journal notes that its policies will continue to evolve in response to future developments in AI technology.17

Cell also adheres to similar guidelines, maintaining that AI cannot be recognized as an author, and requires transparent disclosure of any AI usage in manuscript preparation.18

Among Korean journals, JKMS provides some of the most detailed guidelines regarding AI usage and adopts a relatively flexible management policy.19 Notably, differences exist among journals in the permissible scope of AI usage, the specificity of disclosure requirements, and the extent of editorial discretion. Journals such as Yonsei Medical Journal (YMJ) and Korean Journal of Radiology (KJR) primarily follow the guidelines established by the ICMJE.20,21

As summarized in Table 1, most academic journals and institutions share the following common principles regarding the use of generative AI:

Table 1. Comparative summary of generative AI policies in major journals and editorial organizations.

| Journal/Organization | AI use prohibited? | Disclosure required? | Disclosure details | AI authorship |
| --- | --- | --- | --- | --- |
| ICMJE | No | Yes | Detailed disclosure of AI usage (purpose, extent) | No authorship |
| WAME | No | Yes | Explicit disclosure; human responsibility emphasized | No authorship |
| COPE | No | Yes | Transparent disclosure; ethical risk warnings | No authorship |
| NEJM | No | Yes | Tool name, content generated, use details | No authorship |
| Lancet | No | Yes | Scope and role of AI specified | No authorship |
| Nature | No | Yes | Role and scope disclosed in Methods section | No authorship |
| Science | Effectively yes | Yes | Tool name, version, prompts disclosed (Methods, Acknowledgments, cover letter) | No authorship; no AI-generated citations |
| BMJ | No | Yes | Tool name, purpose, content generated | No authorship |
| Ann Intern Med | No | Yes | Tool name, purpose, usage details in cover letter and manuscript | No authorship |
| JAMA | Yes (for images) | Yes | Tool name, version, prompt, content; structured submission form | No authorship; peer-review use restricted |
| Cell | No | Yes | Transparent disclosure required | No authorship |
| JKMS | No | Yes | Detailed guidance; flexible policy | No authorship |
| YMJ / KJR | No | Yes | Follow ICMJE guidelines | No authorship |

AI = artificial intelligence, ICMJE = International Committee of Medical Journal Editors, WAME = World Association of Medical Editors, COPE = Committee on Publication Ethics, NEJM = New England Journal of Medicine, BMJ = British Medical Journal, JAMA = Journal of the American Medical Association, JKMS = Journal of Korean Medical Science, YMJ = Yonsei Medical Journal, KJR = Korean Journal of Radiology.

First, AI cannot be recognized as an author. This reflects the understanding that AI cannot replace human researchers or make scientific judgments.

Second, the use of AI must be explicitly disclosed, ensuring transparency in research and allowing readers to understand the role of AI in the work.

Lastly, all responsibility for the content lies with the human authors. AI is considered a tool, and the scientific accuracy and ethical responsibility for the generated results ultimately rest with the human researchers.

However, despite these common principles, there are notable differences in the approaches and requirements for the use of AI. Some journals prohibit the use of AI-generated text, while others allow it, provided that the usage is transparently disclosed. The level of disclosure required varies depending on the journal’s policy. Some journals may require a description of AI usage in the Methods section, while others might require explicit mention of the specific AI tools used. These differences are determined by each journal’s policies and editorial direction, which researchers need to consider when using AI in their work.

Many journals may permit AI-assisted scientific writing, provided that ethical red lines are respected. This is because AI is used purely as a tool, while final decisions and responsibility rest solely with the researchers. However, even in this context, researchers must transparently disclose the use of AI, clearly specify the extent of AI’s contribution, and ensure that readers understand the distinction. This approach provides transparency to readers and protects the integrity of the research. While AI is a powerful tool, human judgment must remain final and decisive, especially in the field of medicine.

DISCLOSURE OF AI USE: TO WHAT EXTENT IS IT AN OBLIGATION?

The question of “To what extent and how should AI usage be disclosed?” has emerged as a new ethical issue across academia.22 While it is true that AI tools can be beneficial for researchers, clear boundaries must be established. For instance, citing AI-generated ideas without proper attribution or presenting them as original creations can lead to issues of plagiarism and misrepresentation of contribution, which constitute serious violations of research ethics. If a substantial portion of a manuscript is written by AI but this fact is not disclosed, it can severely undermine the transparency and credibility of the research.

Furthermore, using AI-modified or synthesized images of patient information without disclosure can lead to violations of privacy and misinterpretation of results. Conversely, using AI for minor tasks such as grammar correction or sentence refinement does not constitute creative contribution and may be ethically acceptable if properly disclosed. Similarly, using AI for structural design assistance or reference management is generally permissible, provided that such usage is transparently disclosed.

Resnik and Hosseini23 categorize the disclosure of AI usage into three levels—mandatory, optional, and unnecessary—based on ethical principles. They define the criteria for mandatory disclosure as “intentional and substantial use.” Specifically, any instance where AI is used to establish research hypotheses, write portions of the manuscript, collect or analyze data, or generate figures and tables must be disclosed, as such use directly impacts the essence of the work.

Conversely, if AI is used for tasks such as grammar correction, text refinement, or sentence restructuring, disclosure is considered optional, provided that the human author maintains primary control over the content. Finally, if AI is used solely for basic spell-checking, reference management, or as a search assistant, disclosure is deemed unnecessary.

As listed in Table 2, the following are considered unethical uses of AI in academic research and publishing:

Table 2. Ethical and disclosure-based assessment of common AI usage cases in academic research.

| AI usage scenario | Disclosure | Ethical assessment | Remarks |
| --- | --- | --- | --- |
| 1. Generating research hypotheses | Mandatory | Acceptable if disclosed | Considered substantial and intentional use |
| 2. Writing portions of the manuscript | Mandatory | Acceptable if disclosed | Directly impacts originality; must be transparent |
| 3. Collecting or analyzing data | Mandatory | Acceptable if disclosed | AI influences core scientific processes |
| 4. Generating figures or tables | Mandatory | Acceptable if disclosed | Affects interpretation and presentation |
| 5. Modifying/synthesizing patient images | Mandatory | Unethical if undisclosed | Serious risk of privacy breach and misrepresentation |
| 6. Citing AI-generated ideas as if original | - | Unethical | Constitutes plagiarism; attribution is mandatory |
| 7. Using AI for grammar correction/sentence refinement | Optional | Acceptable | Provided that the human author retains content control |
| 8. Restructuring text (e.g., for fluency or clarity) | Optional | Acceptable | No creative authorship; human judgment remains central |
| 9. Using AI for spell-checking or formatting references | Unnecessary | Acceptable | Routine technical tasks; minimal influence |
| 10. Using AI as a literature search assistant | Unnecessary | Acceptable | Comparable to traditional search engines |
| 11. Using AI to design manuscript structure (outline/framework) | Optional/Mandatory (context-dependent) | Acceptable if disclosed | Transparency in Methods section recommended |

This table has been adapted and reorganized on the basis of the framework (mandatory, optional, unnecessary) proposed by Resnik and Hosseini.23 The examples listed here are illustrative rather than exhaustive, and many more cases are likely to emerge. A future casebook of our own is under consideration, grounded in this framework.

AI = artificial intelligence.

1) Citing AI-generated ideas without proper attribution or presenting them as original creations.

2) Allowing AI to generate substantial portions of a manuscript without disclosing this fact.

3) Using AI-modified or synthesized images without disclosure.

On the other hand, the following uses of AI are considered ethically acceptable:

1) Using AI for linguistic assistance, such as grammar correction or text refinement.

2) Utilizing AI to help establish a structural framework for the manuscript, provided this use is transparently described in the Methods section.

3) Employing AI for non-creative auxiliary tasks, such as formatting references.

Ultimately, the shift from the initial stance of “disclose all AI usage” to the current position of “disclose only intentional and substantial use” should not be viewed as a mere retreat but rather as a practical advancement that reflects real-world considerations. Key challenges will include refining disclosure standards that account for the context of each academic field, enhancing the education of editors and reviewers, and achieving harmonization of guidelines across journals (Table 2).

TOOLS FOR DETECTING AI-GENERATED TEXT IN ACADEMIC PAPERS AND THEIR LIMITATIONS

Various tools for detecting AI-generated text, such as GPTZero (https://gptzero.me), Originality.AI (https://originality.ai), the Turnitin AI Detector (https://www.turnitin.com/solutions/topics/ai-writing/), and the OpenAI classifier (https://openai.com/index/new-ai-classifier-for-indicating-ai-written-text/), have emerged, but their accuracy and practicality remain highly debated.

For instance, GPTZero analyzes the perplexity and burstiness indices to detect sentences that are likely generated by AI.24,25 Perplexity measures how predictable a sentence is to a language model, while burstiness reflects the variation in sentence length and structure. Generally, AI-generated text tends to be more predictable and uniform, resulting in lower scores on both metrics.
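For readers who want a concrete sense of these two indices, the sketch below computes them in Python. GPTZero’s actual scoring method is proprietary, so this is only a minimal illustration under common textbook definitions: perplexity as the exponential of the average negative token log-probability (the log-probabilities here are invented toy values; in practice they would come from a language model), and burstiness approximated as the coefficient of variation of sentence lengths.

```python
import math
import statistics

def perplexity(token_logprobs):
    """exp(-(1/N) * sum(log p(token_i))): lower values mean the text is
    more predictable to the scoring language model."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def burstiness(sentence_lengths):
    """Coefficient of variation of sentence lengths: higher values mean
    more alternation between short and long sentences."""
    return statistics.stdev(sentence_lengths) / statistics.mean(sentence_lengths)

# Toy token log-probabilities (natural log); real values come from an LLM.
human_like = [-4.1, -0.2, -6.3, -1.8, -0.5, -5.0]  # uneven, occasionally surprising
ai_like = [-1.0, -0.9, -1.1, -1.0, -0.8, -1.2]     # uniformly predictable

print(f"perplexity, human-like: {perplexity(human_like):.1f}")   # ~19.7 (high)
print(f"perplexity, AI-like:    {perplexity(ai_like):.1f}")      # ~2.7 (low)

print(f"burstiness, varied lengths:  {burstiness([5, 31, 9, 24, 3]):.2f}")    # ~0.86
print(f"burstiness, uniform lengths: {burstiness([18, 19, 17, 18, 20]):.2f}") # ~0.06
```

Both indices are crude statistical proxies for authorship, which is precisely why the limitations discussed next arise.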

However, this approach has inherent limitations. Human writing styles vary widely with individual, cultural, and contextual factors. In particular, non-native English writers often use simpler syntax and a more limited vocabulary, which may inadvertently resemble AI-generated patterns. Conversely, advanced language models can produce text that mimics human creativity and variation with surprising fluency. Although GPTZero claims a false-positive rate of 1% and an accuracy of 96% on mixed samples, a further concern is that these tools may unfairly disadvantage non-native English authors.26 Moreover, AI-generated content can be paraphrased or lightly edited by a human to evade detection, further undermining the reliability of these tools.27 The other detection tools mentioned earlier appear to share similar strengths and weaknesses with GPTZero.

OpenAI discontinued its AI text classifier in July 2023, in part because of its low accuracy and the confusion caused by false positives.28,29

What truly matters is not detecting superficial sentence patterns suggestive of AI authorship, but assessing whether the core ideas and direction of the paper were meaningfully shaped by human authorship. In this sense, I think the premise behind current detection tools is fundamentally flawed. The core of academic integrity lies not in who typed a sentence, but in who contributes intellectually and accepts responsibility for it. The ethical significance lies not in the use of AI itself, but in how it is understood and applied. Integrity in academic writing depends not on automation, but on transparent disclosure, genuine authorship, and human ethical judgment. Given their limitations, current detection tools cannot be fully trusted; their results should be used only as supplementary input, not as the basis for academic decisions.

Moreover, as AI capabilities continue to advance, they could ultimately reach human-level proficiency. Given this trajectory, there will inevitably come a singularity point at which these tools can no longer distinguish papers written by AI from those genuinely written by humans. In the end, the final determination regarding the use of AI will still rely on the author’s disclosure, conscience, and the self-regulation of the academic community.

This naturally raises a skeptical question:

If efforts to detect AI-generated text ultimately cannot keep up with AI that closely resembles human writing, what is the significance of all the guidelines we have discussed so far?

The answers to this question are as follows:

First, even if AI usage guidelines do not have legal binding force, they serve as a reference point for establishing ethical self-regulation within the research community. For instance, ICMJE stipulates that “AI cannot be an author, and its use must be disclosed,” while the COPE also recommends that “responsibility for AI lies with human authors, and its use should be transparently disclosed.” By clearly defining the boundaries between permissible and prohibited use, these guidelines promote a culture of responsible AI utilization.

Second, when issues such as errors, copyright infringement, or data manipulation arise from AI usage, guidelines provide a basis for post-incident regulation and determination of responsibility. In particular, failure to disclose AI usage may lead to investigations for research misconduct.

Third, guidelines also serve educational and symbolic functions. They convey the message to beginners or young researchers that “AI may be a convenient tool, but it does not replace academic responsibility,” thereby enhancing ethical awareness.

As AI’s document generation capabilities become increasingly sophisticated in the future, ethical self-regulation within the research community and a culture of transparent disclosure will become far more important than regulation through “detection.” Researchers must not only enjoy the convenience of AI but also take full responsibility for every tool they use. This responsibility extends not only to the final product but also to the integrity of the entire process of creation. As AI’s creative assistance capabilities continue to advance, the role of human authors must be increasingly redefined as originators of creativity, custodians of logic, and guardians of ethics.

Journal editorial policies and publication ethics should adopt more detailed and proactive measures in line with this trend. For example, rather than merely requiring a disclosure of “whether AI was used,” journals could mandate that authors specify the exact level and role of AI usage. They could also establish provisions that preemptively restrict certain types of automatically generated text. Furthermore, policy design should encourage disclosure of AI usage so as to promote researcher accountability, ensuring that researchers remain responsible for the integrity of their work.

Beyond policy enforcement, the broader imperative lies in establishing social norms and institutional frameworks that position AI not as a tool for writing papers on behalf of researchers but as an aid that enables researchers to ask better questions and develop more refined thinking. This is the true direction that academic ethics should pursue in the era of AI.

EVALUATION AND DIRECTIONS FOR IMPROVEMENT OF JKMS GUIDELINES

In June 2023, the JKMS became the first Korean academic journal to establish clear guidelines on AI usage.19

This policy is based on the recommendations of the ICMJE and COPE, and its significance lies in explicitly denying AI authorship and mandating transparent disclosure. However, with respect to the clarity of its detailed criteria, the following improvements are deemed necessary:

First, the current general guideline of JKMS offers a desirable degree of flexibility, which should be maintained. Given the diversity of research environments and manuscript preparation practices, enforcing a rigid, uniform standard in all cases would be neither practical nor fair. A flexible approach allows authors to disclose AI usage in a manner appropriate to their specific context.

However, it would be advisable to offer more detailed instructions for authors—preferably in the form of selectable templates or a structured checklist. To implement this effectively, it is essential to classify the purpose and extent of AI use during the manuscript preparation process.

For instance, if AI tools were used solely for minor language editing—such as correcting grammatical errors or improving stylistic clarity—a brief explanation may suffice. In contrast, when AI plays a substantive role—such as drafting introductory sections, organizing the logical flow, or enhancing the overall structure of the manuscript—a more comprehensive and transparent disclosure is warranted.
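Purely as an illustration (the wording below is hypothetical and not official JKMS template language), tiered disclosure statements might read as follows. For minor use: “ChatGPT (GPT-4o, OpenAI) was used only to correct grammar and improve stylistic clarity; the authors wrote and verified all content.” For substantive use: “Gemini (Google) was used to draft an initial outline and the first version of the Introduction from the authors’ notes; the full prompts are provided as supplementary material, and the authors critically revised all AI-generated text and accept full responsibility for it.”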

Such differentiation would provide editors and reviewers with a clearer understanding of the degree of AI involvement and would ultimately enhance the credibility and transparency of the submitted work.

Second, it is essential to develop separate and detailed guidelines specifically intended for editors and peer reviewers, particularly regarding the use of AI tools during the review process. While current policies predominantly focus on authors’ responsibilities in disclosing AI use, reviewers are also likely to increasingly rely on such technologies, necessitating equally rigorous standards on the editorial side. In the absence of clear guidance, these practices may raise legitimate concerns about the fairness, independence, and integrity of the peer review process.

To address this, journals should implement a transparent disclosure policy, to be enforced by editors, requiring reviewers to report whether AI tools were used. Editors, in turn, must be equipped to assess the appropriateness of such AI involvement and, when necessary, take corrective steps—such as requesting additional review or replacing the reviewer.

Establishing this framework would not only help prevent misuse of AI but also reinforce the core principles of independent and unbiased peer review.

Third, it is necessary to present concrete and systematically categorized examples of potential ethical violations that may arise from the misuse or abuse of AI. While most current guidelines remain at the level of general principles, they offer insufficient guidance for addressing the complex ethical dilemmas that authors and editors increasingly face. To move beyond broad policy statements, academic journals should delineate specific categories of likely violations and illustrate them with real-world examples.

Providing concrete examples of these misconduct scenarios serves both a preventive and a normative function: it alerts authors to unacceptable practices and equips editors with clear evaluative criteria. These examples can also be incorporated into internal editorial training materials, used as case studies in research ethics workshops, or raised as discussion points in broader academic dialogues on responsible AI use. Such efforts help cultivate a culture of integrity and accountability in an era when the boundaries between human authorship and AI assistance are becoming increasingly blurred.

These considerations call for journal-level leadership in policy innovation and ethical standard-setting. Building upon its early adoption of AI usage guidelines, JKMS has the potential to set an ethical and operational benchmark for other Korean journals. As outlined in Table 3, these efforts, which go well beyond editorial policy alone, position JKMS to play a central role in guiding the broader academic community toward responsible and constructive engagement with emerging technologies.

Table 3. Directions for improvement of JKMS guidelines on AI-assisted writing.

| Area | Current status | Directions for improvement | Expected effect |
| --- | --- | --- | --- |
| Author disclosure | Disclosure required in cover letter and manuscript | Maintain flexibility, but provide structured templates or checklists; differentiate disclosure level by purpose and degree of AI use (e.g., grammar check vs. content drafting) | Enhances transparency and avoids confusion |
| Reviewer/Editor policy | No detailed policy yet for editors or reviewers using AI | Develop separate, explicit guidelines for reviewers and editors; require disclosure of AI use in peer review; allow editors to assess appropriateness and take corrective steps if needed | Ensures fairness and maintains review integrity |
| Ethical misuse examples | General ethical principles stated; lack of concrete examples | Provide concrete and categorized examples of AI-related misconduct (e.g., paraphrased plagiarism, fabricated citations, hallucinated data); use in training materials and workshops | Aids prevention and editorial consistency |
| Educational role | Early adoption of AI policy | Expand role via education programs for authors/reviewers/editors; initiate community dialogue platforms to discuss evolving AI challenges | Promotes a culture of responsible AI use |
| Leadership in Korea | First domestic journal to implement AI disclosure policy | Continue positioning as national benchmark by balancing Korean scholarly context with international ethical standards | Sets national benchmark for responsible AI use |

JKMS = Journal of Korean Medical Science, AI = artificial intelligence.

CONCLUSION

Generative AI is rapidly expanding its influence in academic research and publishing. Its introduction has improved overall productivity by reducing time and effort in manuscript preparation.

However, this transformation has also revealed critical vulnerabilities in the publication system, especially regarding ethics, transparency, and accountability. A central debate concerns whether AI can be credited as an author. Major academic organizations and journals have firmly stated that AI cannot qualify for authorship, primarily because it cannot assume legal or ethical responsibility. Consequently, while AI contributions must be transparently disclosed, final accountability must rest with human authors.

Leading medical and scientific journals now mandate clear disclosure of AI usage and offer detailed authorship guidelines. This reflects the principle that even when AI is used as a tool, researchers must clarify its scope and limitations. Korean journals are also aligning with this global movement by adopting similar policies.

At the same time, concerns persist regarding the reliability of AI detection tools such as GPTZero. These tools attempt to determine whether a text was AI-generated, but their accuracy remains limited and they may unfairly penalize non-native authors. Such limitations highlight the need to shift from technical detection to human-centered evaluation, authorial responsibility, and ethical oversight.

Clearly, AI can enhance both the quality and efficiency of academic writing. But to avoid ethical ambiguity or diffusion of responsibility, the academic community must establish independent standards and invest in ongoing ethics education.

As this review concludes, I would like to offer a personal concern and perspective. If AI continues to evolve rapidly, it may eventually rival human-level proficiency. At that point, our current understanding of AI-assisted writing—grounded in voluntary ethics—could be fundamentally disrupted.

Whether this leads to an outright ban or to recognizing AI as a co-author, existing frameworks may not suffice. Of course, the latter may seem unrealistic for now, but once technology crosses a certain threshold, all possibilities must be considered.

The pressing question is whether today’s ethical norms and individual conscience can withstand the pace of technological change. While we are striving to build a sound ethical framework, it may prove fragile in the face of future transformations. No one can predict with certainty which direction this paradigm shift will take, but one thing is clear: the time to initiate serious discussion and preparation is now. To ensure that human authors are not relegated to mere users of tools, we must begin thinking about, and preparing for, that future today.

Footnotes

Disclosure: The author has no conflict of interest to disclose.

References


