To the Editor:
The review by Jin-Hong Yoo,1 the esteemed Editor-in-Chief of this Journal, is one of the most comprehensive and insightful reviews of artificial intelligence (AI) in medical writing. Of the issues he raised, I believe the most important is that we must now “prepare” for the future in which AI will surpass human writing ability. I fully agree, and I add several points that may complement Yoo’s viewpoint.
First, we must consider for whom medical papers exist. The top priority is patients, although patients themselves rarely read papers; papers must convey “more useful” data to readers (mainly doctors) “more clearly.” Suppose that AI identifies and describes a “new DNA triple-helical pattern” that humans could not have identified, or could have described only less clearly. AI would then be creating “more useful” data and writing manuscripts “more clearly” than humans, and in that case we should adopt wider use of AI in research and writing. All is good that benefits patients: an easily acceptable starting point.
Second, we must recognize that it takes time before apparently groundbreaking data that initially looked highly beneficial to patients are confirmed to be so. In my half-century career in obstetrics and gynecology, I have never seen a single study change practice overnight. In general, considerable time passes before a study amounting to a Copernican revolution proves true and benefits patients. What if an AI-generated study that initially looked “great” proves wrong? To err is human; but can we be at ease facing “to err is AI”? This leads to a deeper consideration.
Third, though this contradicts my first claim, research, and especially papers, also exists for “researchers and authors.” I have written over 600 papers. Looking back, I believe my research and papers benefited patients; however, it is doubtful whether they “greatly” changed practice. In contrast, writing my own papers greatly trained my brain. I struggled to write, and that memory is priceless.2 I am concerned whether, if AI takes the lead, such deep satisfaction will remain. Indeed, beyond memory, self-satisfaction is the fundamental driving force of researchers. A further consideration arises here.
Fourth, some researchers may, from intrinsic motivation, adhere to their own writing, whereas others, relying heavily on AI, write papers quickly: the former may produce one paper annually and the latter ten. This may create inequity in competition. The “publish or perish” atmosphere, an extrinsic motivation, may overwhelm the intrinsic motivation that may be inherent to humans. Thus, we are entangled in these four considerations: there seems to be no one-size-fits-all solution at present.
I have two humble propositions, both related to preparing for the future, as Yoo concluded, though my viewpoint is slightly different. First, for the time being, many researchers (including myself) understand both the merits and demerits of human writing and AI-aided writing; they are familiar with the present “hybrid” writing. Soon this group will disappear, or at least shrink, because AI use will become the norm. Thus, now is the time to consider deeply how we use AI in medical writing. If we delay, discussion of AI’s pros and cons will become difficult, because the population adhering to hybrid or fully human writing will have greatly diminished.
Second, we cannot dismiss the scenario in which AI-crafted writing is shown to bring some “harm” to medical writing or to individual humans. For that reason, we must preserve genuine human writing. This may sound somewhat dogmatic; however, as described previously,3,4 letters, opinions, and viewpoint pieces (thought-expressing works that are mainly subjective and not based on objective data) should be written solely by humans and regulated as such. This may serve as a natural heritage amid artificial constructs.
ACKNOWLEDGMENTS
Part of the present concept was described previously and is cited herein.
Disclosure: The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References
- 1. Yoo JH. Defining the boundaries of AI use in scientific writing: a comparative review of editorial policies. J Korean Med Sci. 2025;40(23):e187. doi: 10.3346/jkms.2025.40.e187.
- 2. Matsubara S. Comparing letters written by humans and ChatGPT: a preliminary study. Int J Gynaecol Obstet. 2025;168(1):320–325. doi: 10.1002/ijgo.15827.
- 3. Matsubara S, Matsubara D. What’s the difference between human-written manuscripts versus ChatGPT-generated manuscripts involving “human touch”? J Obstet Gynaecol Res. 2025;51(2):e16226. doi: 10.1111/jog.16226.
- 4. Matsubara S. ChatGPT use should be prohibited in writing letters. Am J Obstet Gynecol. 2024;231(3):e110. doi: 10.1016/j.ajog.2024.04.046.
