Dear Editor,
I am writing in response to the recent publication “Using AI-generated suggestions from ChatGPT to optimize clinical decision support.”1 This groundbreaking study offers an essential exploration of the potential of artificial intelligence (AI), specifically large language models, to augment the logic of clinical decision support (CDS) alerts, addressing a critical challenge in healthcare. While this is a promising stride forward, I would like to shed light on some limitations and offer suggestions for future research.
First, the research underscores the considerable potential of the ChatGPT model to understand CDS logic and provide relevant optimization suggestions. However, the model's applicability is constrained by its training data cut-off, which extends only to 2021. This is a significant concern, as the model cannot reflect advances or changes in medical knowledge that occur after that date. It is a common and pressing issue in the implementation of AI in healthcare: the model must be updated continually to remain current with the dynamic nature of medical knowledge. It would be valuable for future investigations to consider the logistics and feasibility of real-time or frequent updates to the model's training data.
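To make this concrete for technically inclined readers, one pragmatic alternative to full retraining is retrieval augmentation, in which current guideline text is fetched and supplied to the model at query time. The sketch below is purely illustrative: the guideline store, the keyword-overlap scoring, and all function names are hypothetical simplifications, not the method used in the study.

```python
# Minimal sketch of retrieval augmentation: rather than retraining the model,
# current guideline text is retrieved at query time and prepended to the prompt.
# The guideline store and the naive scoring are illustrative placeholders.

GUIDELINES = {
    "hypertension-2023": "2023 guideline: initiate therapy at BP >= 130/80 in high-risk adults.",
    "anticoagulation-2024": "2024 guideline: prefer DOACs over warfarin for nonvalvular AF.",
}

def retrieve(query: str, store: dict[str, str], k: int = 1) -> list[str]:
    """Rank guideline snippets by keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        store.values(),
        key=lambda text: len(q_terms & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(alert_logic: str) -> str:
    """Ground the optimization request in up-to-date guidance."""
    context = "\n".join(retrieve(alert_logic, GUIDELINES))
    return (
        f"Current guidance:\n{context}\n\n"
        f"Suggest improvements to this CDS alert logic:\n{alert_logic}"
    )

print(build_prompt("Alert when BP >= 140/90 in adults with hypertension"))
```

The design point is that the knowledge lives outside the model, so updating it becomes a content-curation task rather than a retraining task.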
Second, while the evaluation of AI-generated suggestions by CDS experts provides a practical initial assessment, it remains an inherently subjective measure. Furthermore, it does not directly measure the ultimate goal of improved patient outcomes. Subsequent studies should therefore incorporate objective, quantifiable metrics such as the impact of these AI suggestions on clinical outcomes, patient satisfaction, and healthcare cost-effectiveness. Real-world implementation and monitoring of the AI suggestions could be considered to gather these data.
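To illustrate what such an objective metric might look like, the sketch below compares alert override rates before and after an AI-suggested logic change; the field names and figures are entirely hypothetical.

```python
# Illustrative sketch of an objective, quantifiable metric: the fraction of
# fired alerts that clinicians overrode, compared before and after an
# AI-suggested logic change. All data here are fabricated for illustration.

def override_rate(alerts: list[dict]) -> float:
    """Fraction of fired alerts the clinician overrode."""
    if not alerts:
        return 0.0
    return sum(a["overridden"] for a in alerts) / len(alerts)

before = [{"overridden": True}] * 70 + [{"overridden": False}] * 30
after = [{"overridden": True}] * 45 + [{"overridden": False}] * 55

print(f"Override rate before: {override_rate(before):.0%}")  # 70%
print(f"Override rate after:  {override_rate(after):.0%}")   # 45%
```

A falling override rate is only a proxy, of course; linking suggestions to clinical outcomes would require longer observation windows and appropriate controls.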
Additionally, the study revealed instances of “hallucination,” a phenomenon where the AI model generates incorrect or non-existent information. In the clinical context, this could potentially lead to harmful outcomes. Therefore, developing a rigorous validation system or a method to cross-check the AI suggestions with the latest clinical guidelines or expert knowledge before actual implementation is a crucial step that future research should address.
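One narrow form such a cross-check could take is verifying that every drug a suggestion references exists in a curated formulary before the suggestion reaches implementation. The sketch below is a simplified illustration: the formulary contents, the [drug:...] markup convention, and the function are all hypothetical assumptions, not part of the published pipeline.

```python
# Minimal sketch of one pre-implementation cross-check: every drug a
# suggestion references must exist in a curated formulary; unknown names
# are flagged as possible hallucinations and held for expert review.

import re

FORMULARY = {"metformin", "lisinopril", "apixaban", "warfarin"}

def flag_unknown_drugs(suggestion: str, formulary: set[str]) -> list[str]:
    """Return drug references in the suggestion that are not in the formulary."""
    # Hypothetical convention: suggestions mark drugs as [drug:name].
    cited = re.findall(r"\[drug:(\w+)\]", suggestion.lower())
    return [name for name in cited if name not in formulary]

suggestion = "Replace [drug:warfarin] alert with [drug:apixaban]; add [drug:zorblatin] check."
unknown = flag_unknown_drugs(suggestion, FORMULARY)
if unknown:
    print(f"Hold for expert review; unverified drugs: {unknown}")
```

Checks of this kind cannot catch every hallucination, but they turn a subset of them into mechanically detectable events rather than relying solely on reviewer vigilance.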
Another aspect not addressed in the study is the potential for unconscious bias in expert ratings. While it is noted that the AI-generated suggestions differed somewhat in tone and length from human suggestions, these differences could influence the experts' perception and lead to biased evaluation. A possible solution is a double-blind design in which evaluators are unaware of the origin of the suggestions, ensuring an unbiased rating process.
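Procedurally, such blinding can be lightweight: origin labels are stripped and the presentation order shuffled before suggestions reach raters, with the unblinding key held separately until scoring is complete. The sketch below uses entirely hypothetical data to show the idea.

```python
# Sketch of a blinded rating protocol: suggestions are shuffled and stripped
# of their origin before reaching raters; the origin key is held separately
# for unblinding only after scoring. Data are illustrative.

import random

suggestions = [
    {"id": 1, "origin": "human", "text": "Suppress alert for stable renal function."},
    {"id": 2, "origin": "ai", "text": "Add eGFR threshold to dosing alert."},
]

random.shuffle(suggestions)

# What the rater sees: no origin field.
blinded = [{"id": s["id"], "text": s["text"]} for s in suggestions]

# Held by the study coordinator until all ratings are collected.
unblinding_key = {s["id"]: s["origin"] for s in suggestions}

for item in blinded:
    print(f"Rate suggestion {item['id']}: {item['text']}")
```

Stylistic tells such as tone and length could still leak origin, so normalizing formatting before presentation would be a sensible additional step.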
Despite the limitations, the paper sets a substantial foundation for the integration of AI into healthcare. The research highlights the usefulness of AI, not as a replacement but as a complementary tool to human expertise. It brings forth the idea of a symbiotic relationship between AI and human clinicians—where AI provides a unique perspective, and human experts refine these suggestions based on their clinical experience and knowledge.2,3
In conclusion, the paper1 provides valuable groundwork, demonstrating the potential role of AI in healthcare, while also highlighting the need for future research to address the model's limitations and the practicalities of integration. I am hopeful that my comments can contribute to the growing body of knowledge on effectively integrating AI into healthcare, ultimately improving patient care and outcomes.
Author contributions
Partha Pratim Ray (study conceptualization, methodology, writing of original draft, software use)
Funding
None declared.
Conflicts of interest
None declared.
References
1. Liu S, Wright AP, Patterson BL, et al. Using AI-generated suggestions from ChatGPT to optimize clinical decision support. J Am Med Inform Assoc. 2023;30:1237-1245.
2. Liao Z, Wang J, Shi Z, Lu L, Tabata H. Revolutionary potential of ChatGPT in constructing intelligent clinical decision support systems. Ann Biomed Eng. 2024;52:125-129.
3. Liu J, Wang C, Liu S. Utility of ChatGPT in clinical practice. J Med Internet Res. 2023;25:e48568.
