Dear Editor,
A letter by Polat et al., published in Malaysian Family Physician in November 2025, highlights a marked reduction in hallucination – defined as inaccurate outputs or fabricated information – in the latest version of ChatGPT (GPT-5).1 We agree that this significant improvement is a promising step towards increasing trust in the contribution of generative artificial intelligence (AI) to the medical and scientific literature.1
Hallucination has been a major challenge in the use of generative AI.2 In our experience, one of its most apparent forms involves fabricated references. In 2023, we conducted a simple study using a generative AI model (GPT-3.5) to generate the introduction section of a dummy article with references. We found that 100% of the references produced did not actually exist.3 This confirmed the presence of hallucination in our setting.
AI developers have continued to enhance their models to reduce such issues. For instance, the newer ChatGPT model (GPT-5) has been reported to produce significantly fewer hallucinations.4 We repeated our previous experiment in November 2025 and, on this occasion, all of the provided references were real. This improvement supports the growing trust that human authors can place in generative AI as a reliable assistant.
However, a critical review of the generated text and reference list showed that some referenced publications were not appropriately cited. For example, although some elements of the referenced publications could be used to support parts of the AI-generated text, they were not central to the publications’ main conclusions and originated from different research settings. This underscores the essential role of human authors. Ultimately, authors retain full responsibility for the accuracy and integrity of AI-generated outputs. Major organisations related to medical and scientific publishing, including the Committee on Publication Ethics and the International Committee of Medical Journal Editors, reinforce authors’ responsibility and accountability for accuracy, originality and ethical compliance.5,6
In conclusion, recent AI models show substantial improvements in reducing hallucination. Generative AI can serve as an assistant or a co-pilot to human authors but not as a co-author. We believe that AI will become even more capable and reliable in the future, continuing to accelerate progress in the scientific and medical literature. Nonetheless, issues such as ethical considerations, copyright and authorship roles will remain important areas for discussion in 2026 and beyond.
Acknowledgments
We used ChatGPT (GPT-5.1, OpenAI, USA) to check grammar and refine the language.
Funding Statement
None.
Author Contributions
AW, SW and CM conceived the study. All authors edited and approved the final version of the manuscript.
Conflicts of interest
AW is an editorial board member of the journal. The other authors declare no conflicts of interest.
References
1. Polat S, Alyanak B, Dede BT, Temel MH, Yildizgoren MT, Bagcier F. Marked reduction in hallucination rates with GPT-5: a positive development for medical and scientific writing. Malays Fam Physician. 2025;20:75. doi: 10.51866/lte.1004.
2. Jamaluddin J, Gaffar NA, Din NSS. Hallucination: a key challenge to artificial intelligence-generated writing. Malays Fam Physician. 2023;18:68. doi: 10.51866/lte.527.
3. Wattanapisit A, Photia A, Wattanapisit S. Should ChatGPT be considered a medical writer? Malays Fam Physician. 2023;18:69. doi: 10.51866/lte.483.
4. OpenAI. Why language models hallucinate. Accessed November 30, 2025. https://openai.com/index/why-language-models-hallucinate/
5. International Committee of Medical Journal Editors. Defining the role of authors and contributors. Accessed November 30, 2025. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html
6. Committee on Publication Ethics. Authorship and AI tools. Accessed November 30, 2025. https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
