Malaysian Family Physician: the Official Journal of the Academy of Family Physicians of Malaysia

Letter. 2026 Feb 25;21:12. doi: 10.51866/lte.1083

Affirming generative artificial intelligence as a co-pilot, not a co-author, in medical and scientific writing

Apichai Wattanapisit 1,2,3,, Sanhapan Wattanapisit 4, Christian Mallen 5
PMCID: PMC12967415  PMID: 41804336

Dear Editor,

A letter by Polat et al., published in Malaysian Family Physician in November 2025, highlights a marked reduction in hallucination – defined as inaccurate outputs or fabricated information – in the latest version of ChatGPT (GPT-5).1 We agree that this significant improvement is a promising step towards increasing trust in the contribution of generative artificial intelligence (AI) to the medical and scientific literature.1

Hallucination has been a major challenge in the use of generative AI.2 In our experience, one of the most apparent forms of hallucination involves fabricated references. In 2023, we conducted a simple study using a generative AI model (GPT-3.5) to generate the introduction section of a dummy article with references. We found that 100% of the references produced did not actually exist.3 This confirmed the presence of hallucination in our setting.

AI developers have continued to enhance their models to reduce such issues. For instance, the newer ChatGPT model (GPT-5) has been reported to produce significantly fewer hallucinations.4 We repeated our previous experiment in November 2025 and, on this occasion, all of the generated references were real. This improvement supports the growing trust that human authors can place in generative AI as a reliable assistant.

However, a critical review of the generated text and reference list showed that some referenced publications were not appropriately cited. For example, although some elements of the referenced publications could be used to support parts of the AI-generated text, they were not central to the publications’ main conclusions and originated from different research settings. This underscores the essential role of human authors. Ultimately, authors retain full responsibility for the accuracy and integrity of AI-generated outputs. Major organisations related to medical and scientific publishing, including the Committee on Publication Ethics and the International Committee of Medical Journal Editors, reinforce authors’ responsibility and accountability for accuracy, originality and ethical compliance.5,6

In conclusion, recent AI models show substantial improvements in reducing hallucination. Generative AI can serve as an assistant or a co-pilot to human authors but not as a co-author. We believe that AI will become even more capable and reliable in the future, continuing to accelerate the production of scientific and medical literature. Nonetheless, issues such as ethical considerations, copyright and authorship roles will remain important areas for discussion in 2026 and beyond.

Acknowledgments

We used ChatGPT (GPT-5.1, OpenAI, USA) to check grammar and refine the language.

Funding Statement

None.

Author Contributions

AW, SW and CM conceived the study. All authors edited and approved the final version of the manuscript.

Conflicts of interest

AW is an editorial board member of the journal. The other authors declare no conflicts of interest.

References


