Journal of Clinical Sleep Medicine: JCSM: Official Publication of the American Academy of Sleep Medicine
Letter. 2023 Dec 1;19(12):2133. doi: 10.5664/jcsm.10768

ChatGPT, obstructive sleep apnea, and patient education

Amnuay Kleebayoon,1 Viroj Wiwanitkit2,3
PMCID: PMC10692939  PMID: 37551829

We found the article “Evaluating ChatGPT responses on obstructive sleep apnea for patient education”1 to be interesting. To better educate patients, Campbell et al1 assessed the quality of ChatGPT responses to queries about obstructive sleep apnea (OSA) and examined the effects of chatbot prompts on the accuracy, estimated grade level, and references of the answers. According to Campbell et al’s analysis, ChatGPT generally offers suitable responses to the majority of OSA questions.1 Given ChatGPT’s rapid adoption, sleep experts may seek to further examine its medical literacy and utility for patients.1 Campbell et al noted that, while prompting lowered the response grade level, all responses remained above accepted recommendations for presenting medical information to patients.

The study examined ChatGPT’s performance with only 4 types of prompts: no prompting, patient-friendly prompting, physician-level prompting, and statistics/reference prompting. While these prompts cover a range of circumstances, they may not reflect the full spectrum of variables in real-world clinical situations. A broader set of prompts could enable a more comprehensive assessment of ChatGPT’s performance in various clinical contexts. Furthermore, while the study focused on the accuracy of ChatGPT’s responses, it did not analyze the quality or reliability of the information supplied. Even when ChatGPT provides accurate responses, it is critical to assess whether the material is evidence-based and up to date. Further research into ChatGPT’s sources and references would be valuable to ensure the authenticity and dependability of the information it delivers. Finally, there was no comparison with other existing tools or resources routinely used in sleep medicine education and practice. Comparing ChatGPT’s performance with other recognized references or expert opinions would provide a more thorough assessment of its utility and efficacy.

Effective governance and monitoring procedures must be put in place to ensure that the benefits and risks of generative artificial intelligence (AI) are balanced. Humans should be able to review sensitive content before it is created, modified, or approved by AI.2 ChatGPT can provide a wealth of information on clinical issues and recommendations, but the findings on ChatGPT imply that some of its underlying datasets may contain false presumptions or beliefs. As a result, patients might be given inaccurate or misleading information. Before using AI chatbots in academic research, it is essential to consider any potential ethical concerns. Any biases in the data, algorithms, authorship attribution, or intellectual property rights ought to be thoroughly investigated.

DISCLOSURE STATEMENT

The authors report no conflicts of interest.

Citation: Kleebayoon A, Wiwanitkit V. ChatGPT, obstructive sleep apnea, and patient education. J Clin Sleep Med. 2023;19(12):2133.

REFERENCES

1. Campbell DJ, Estephan LE, Mastrolonardo EV, Amin DR, Huntley CT, Boon MS. Evaluating ChatGPT responses on obstructive sleep apnea for patient education. J Clin Sleep Med. 2023;19(12):2135–2136.
2. Kleebayoon A, Wiwanitkit V. Artificial intelligence, chatbots, plagiarism and basic honesty: comment. Cell Mol Bioeng. 2023;16(2):173–174.
