We appreciate Wang and Zhou’s [1] keen interest in our publication [2]. Below are our responses to their comments.
First, Wang and Zhou [1] argued that potential biases could arise from the 8 experts’ personal preferences, academic or cultural backgrounds, or familiarity with the subject matter. However, the 30 papers we selected came from 3 different journals and covered a wide range of topics (as outlined in Table S3 in Multimedia Appendix 1 of our paper [2]). Given this substantial variation among the articles, such biases are unlikely.
Second, we did not use ChatGPT 3.5 directly. We used ChatPDF, which is based on ChatGPT 3.5 but can efficiently analyze PDF content.
Third, Wang and Zhou argued that if artificial intelligence (AI) assistance in refining an article does not alter the accuracy of the research and aids in clearer communication of the content, such a practice should be deemed acceptable. However, in our study, 3 of the 30 ChatGPT-generated abstracts contained incorrect conclusions, indicating that AI assistance cannot be assumed to preserve accuracy.
Finally, Wang and Zhou rightly caution against complete reliance on AI language models for labor-intensive tasks, which may lead to a lack of critical thinking and the production of low-quality or erroneous articles. However, this perspective differs from the purpose of our study, which was to assess the applicability of an AI model in generating abstracts for basic preclinical research. We found that the quality of ChatGPT-generated abstracts for basic preclinical research was suboptimal and that their accuracy was not 100%.
Abbreviations
- AI: artificial intelligence
Footnotes
Conflicts of Interest: None declared.
References
- 1. Wang Z, Zhou C. Reassessing AI in medicine: exploring the capabilities of AI in academic abstract synthesis. J Med Internet Res. 2024. doi: 10.2196/55920. https://www.jmir.org/2024/1/e55920/
- 2. Cheng S, Tsai S, Bai Y, Ko C, Hsu C, Yang F, Tsai C, Tu Y, Yang S, Tseng P, Hsu T, Liang C, Su K. Comparisons of quality, correctness, and similarity between ChatGPT-generated and human-written abstracts for basic research: cross-sectional study. J Med Internet Res. 2023 Dec 25;25:e51229. doi: 10.2196/51229. https://www.jmir.org/2023//e51229/