Author manuscript; available in PMC: 2025 Jan 17.
Published in final edited form as: Tob Control. 2025 Apr 1;34(2):251–253. doi: 10.1136/tc-2023-058009

Exploring the ChatGPT platform with scenario-specific prompts for vaping cessation

Samia Amin 1, Crissy T Kawamoto 1, Pallav Pokhrel 1
PMCID: PMC10792116  NIHMSID: NIHMS1925759  PMID: 37460216

INTRODUCTION

Recently, Artificial Intelligence (AI) technology has generated a great deal of interest among health practitioners, as AI may be used to diagnose disease, predict disease progression, and design personalized treatment plans [1]. Applying AI-driven virtual chatbots to lifestyle modification interventions appears promising for the development of scalable and cost-effective health promotion interventions, including substance use prevention and treatment programs [2]. For example, the World Health Organization recently developed Florence, an AI-based virtual chatbot that purports to provide trustworthy information for assistance with smoking cessation [3]. However, research shows that current AI-based virtual chatbots may lack adequate functionality to understand and appropriately respond to different types of queries concerning public health issues such as poor mental health, interpersonal violence, physical health, pregnancy, and addiction-related help-seeking [4–6].

The recently launched AI chatbot ChatGPT can generate coherent and contextually suitable textual outputs and engage in humanlike dialogue [7]. At present, there is a lack of studies examining the usefulness of ChatGPT in responding to queries about issues relevant to public health. E-cigarette use, or vaping, has emerged as a significant public health issue in recent times [8]. Notably, an increasing number of vapers appear to be willing to quit vaping and may approach ChatGPT for guidance [9,10]. Thus, we aimed to test how ChatGPT may respond to scenario-specific prompts for vaping cessation. To simulate real-world queries about vaping cessation, similar to a previous study [11], we obtained texts from a Reddit discussion forum where individuals asked questions about vaping cessation and used those texts as ChatGPT prompts.

METHOD

We posted 10 randomly selected real-world prompts collected from Reddit (the r/Quitvaping community) to ChatGPT (see Supplement 1) and invited (via email) 5 experts in tobacco and other substance use research to evaluate each response. We summarized the qualitative outcome of each ChatGPT response using NVivo (version 12) [12]. Inductive thematic saturation for the identification of themes was achieved using the new-information threshold approach [13].

Each ChatGPT response was quantitatively evaluated by the experts on the following criteria, each scored on a 3-point Likert scale (coded as 3 for ‘excellent’, 2 for ‘satisfactory’, and 1 for ‘poor’): ‘purpose’ (relevance and appropriateness of the content), ‘accuracy’ (how well the response addressed the specific needs of the target audience), ‘quality’ (logical flow and consistency of ideas), ‘clarity’ (language correctness and readability), and ‘empathy’ (emotionally grounded content). This method of assessing ChatGPT outputs is similar to methods used in prior studies [11,14]. The means and standard deviations of the ratings were computed using IBM SPSS Statistics Version 22.
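The rating aggregation described above can be sketched in a few lines of standard-library Python (the study itself used SPSS). The ratings below are hypothetical, purely illustrative values for one response scored by five experts, not the study’s data:

```python
# Illustrative sketch of the rating aggregation described in the Method.
# Ratings are HYPOTHETICAL (not the study's data): 5 experts each score one
# ChatGPT response on five criteria, coded 3 = excellent, 2 = satisfactory, 1 = poor.
from statistics import mean, stdev

ratings = {
    "purpose":  [3, 3, 2, 3, 3],
    "accuracy": [2, 3, 2, 3, 2],
    "quality":  [3, 3, 3, 2, 3],
    "clarity":  [3, 2, 3, 3, 3],
    "empathy":  [2, 2, 3, 2, 2],
}

# Mean and sample standard deviation per criterion, rounded to 2 decimals.
summary = {c: (round(mean(r), 2), round(stdev(r), 2)) for c, r in ratings.items()}

for criterion, (m, sd) in summary.items():
    print(f"{criterion}: {m} \u00b1 {sd}")
```

Averaging these per-response summaries across all 10 prompts would yield the kind of per-criterion Mean ± SD figures reported in the Results.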

RESULTS

ChatGPT generated fairly detailed recommendations for each prompt (see Supplement 1). Of the 10 prompts, 6 were inquiries related to one’s own vaping cessation, 3 were inquiries related to family members’ (i.e., mom, sister, and brother) vaping cessation, and 1 was a general question about the benefits of quitting vaping. The themes represented in the responses included self-care guidance (e.g., “find healthier habits that you enjoy,” 4 of 10 prompts), peer support (e.g., “seek support from friends and family,” 4 of 10 prompts), nicotine withdrawal symptoms (e.g., “you may experience nicotine withdrawal symptoms,” 4 of 10 prompts), benefits of vaping cessation (e.g., “improve lung health,” 3 of 10 prompts), professional advice (e.g., “consider seeking professional help,” 2 of 10 prompts), nicotine replacement therapy (e.g., “NRT can help you quit,” 1 of 10 prompts), quit date (e.g., “choose a quit date,” 1 of 10 prompts), authentic reference (e.g., “according to the American Cancer Society,” 1 of 10 prompts), and encouragement (e.g., “keep trying and don’t give up,” 4 of 10 prompts) for vaping cessation.

The experts’ average ratings across responses for ‘purpose’ (Mean ± SD: 2.7 ± 0.27), ‘accuracy’ (Mean ± SD: 2.5 ± 0.27), ‘quality’ (Mean ± SD: 2.7 ± 0.13), ‘clarity’ (Mean ± SD: 2.7 ± 0.14) and ‘empathy’ (Mean ± SD: 2.3 ± 0.39) ranged between the scores of 2 and 3.

DISCUSSION

The present study examined ChatGPT responses to queries specific to vaping cessation, with the purpose of initiating a conversation about the potential applicability of ChatGPT or similar chatbots to vaping cessation interventions and treatment programs. The qualitative analysis indicated that ChatGPT, across the ten prompts used in the study, covered themes central to the promotion of smoking cessation. For example, the responses provided information on nicotine withdrawal symptoms; highlighted the importance of self-regulation, peer support, and NRT; and attempted to provide motivational support. Expert evaluation of the ChatGPT responses indicated that they were ‘satisfactory’ to ‘excellent’ in the areas of accuracy, quality, clarity, and empathy. The finding that ChatGPT’s responses tended to be empathetic is consistent with a recent study in which ChatGPT responses to patient questions were rated as significantly more empathetic than physician responses [11]. Hence, the strategic utilization of ChatGPT in healthcare may enhance care providers’ efficiency, maximize performance, and better serve patient needs [15].

However, there appear to be certain limitations to ChatGPT’s current functionality, and ChatGPT may not always answer a particular question properly. In the current study, for instance, ChatGPT was not able to respond appropriately to a potential health emergency indicated by a prompt that expressed difficulty with breathing. This highlights the fact that ChatGPT can only draw on the information included in its training data and may not be able to respond appropriately to new information included in some or most queries. Relatedly, a study found that a ChatGPT model could answer 77% of cirrhosis and hepatocellular carcinoma (HCC) related questions correctly but was unable to determine decision-making cut-offs and treatment durations [16]. Compared with physicians and trainees, ChatGPT lacked knowledge of regional variations in guidelines, such as those pertaining to HCC screening criteria [16]. Overcoming such challenges may require combining human expertise with advanced AI technology.

The present study has a number of implications for future research on the potential utilization of ChatGPT in vaping cessation research. First, there needs to be a more rigorous study focused on the quality of ChatGPT responses to a wider variety of user prompts related to vaping cessation. Such studies may involve evaluation of a representative sample of prompts by a nationally representative sample of experts. Second, future research may need to focus on how to train AI models to respond to nuanced questions related to vaping cessation. Effective training of AI models may also require the ability to rapidly incorporate emerging evidence regarding effective vaping cessation strategies; at the same time, such training may need to include strategies to filter out misinformation. Third, more research is needed on how to include AI in models of cessation service delivery, which may involve coaching those seeking help on how to effectively use authorized AI chatbots.

A main limitation of the current study is its small sample size (N=10 prompts), which limits our ability to draw firm conclusions about the application of ChatGPT in vaping cessation research and practice. Another limitation is that we used a convenience sample of five tobacco/substance use experts to rate the ChatGPT responses. Third, the rating criteria and scales that we used to evaluate the ChatGPT responses have not been validated. Despite these limitations, the study is significant for starting a conversation on how ChatGPT may be effectively utilized in service of an emerging public health problem, namely vaping. For this preliminary investigation, we conducted both qualitative and quantitative analyses to explore the relevance and accuracy of ChatGPT responses. A group of experts in tobacco control research evaluated the ChatGPT responses, and we found that ChatGPT can answer common vaping cessation related queries quickly and proficiently.

Thus, the present results indicate that, if managed by a group of experts including clinicians and behavioral and computer scientists, a platform such as ChatGPT may be leveraged to design tailored interventions for tobacco use, including vaping cessation. Developing robust algorithms and machine learning models may generate vaping cessation strategies tailored to meet users’ individual needs as well as the needs of various vulnerable demographic groups. Hence, a synergistic collaboration among transdisciplinary experts devoted to public health may be crucial to developing tailored and scalable technology-driven vaping cessation interventions using chatbots.

Supplementary Material

Supp1

What this paper adds.

What is already known on this subject?

  • Application of Artificial Intelligence-driven virtual chatbots appears to show some promise in promoting healthy behavior.

What important gaps in knowledge exist on this topic?

  • ChatGPT’s potential for promoting vaping cessation is poorly understood.

What this study adds

  • This study sought to highlight ChatGPT’s potential to promote vaping cessation.

  • Real-world queries about vaping cessation were examined as ChatGPT prompts.

  • ChatGPT responses represented major strategies used to promote tobacco use cessation.

  • Experts evaluated ChatGPT responses to be ‘satisfactory’ to ‘excellent’.

Funding disclosure

The study was supported by funds from the National Cancer Institute (R01CA228905).

Footnotes

Conflicts of interest

None

References

  • 1.Secinaro S, Calandra D, Secinaro A, Muthurangu V, Biancone P. The role of artificial intelligence in healthcare: a structured literature review. BMC Med Inform Decis Mak. 2021;21(1):125. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Ogilvie L, Prescott J, & Carson J (2022). The use of chatbots as supportive agents for people seeking help with substance use disorder: a systematic review. Eur Addict Res, 28(6), 405–418. [DOI] [PubMed] [Google Scholar]
  • 3.World Health Organization (2021). Meet Florence. Accessed January 21, 2023, https://www.who.int/europe/news/item/14-02-2021-meet-florence-who-s-digital-health-worker-who-can-help-you-quit-tobacco
  • 4.Nobles AL, Leas EC, Caputi TL, Zhu SH, Strathdee SA, & Ayers JW (2020). Responses to addiction help-seeking from Alexa, Siri, Google Assistant, Cortana, and Bixby intelligent virtual assistants. NPJ Digit Med, 3, 11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Miner AS, Milstein A, Schueller S, Hegde R, Mangurian C, & Linos E (2016). Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Intern Med, 176(5), 619–625. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Schindler-Ruwisch J, & Palancia Esposito C (2021). “Alexa, Am I pregnant?”: A content analysis of a virtual assistant’s responses to prenatal health questions during the COVID-19 pandemic. Patient Educ Couns, 104(3), 460–463. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Chat GPT. Accessed January 21, 2023. https://openai.com/blog/chatgpt
  • 8.Jones K, & Salzman GA (2020). The vaping epidemic in adolescents. Mo Med, 117(1), 56–58. [PMC free article] [PubMed] [Google Scholar]
  • 9.Tattan-Birch H, Perski O, Jackson S, Shahab L, West R, & Brown J (2021). COVID-19, smoking, vaping and quitting: a representative population survey in England. Addiction, 116(5), 1186–1195. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Cuccia AF, Patel M, Amato MS, Stephens DK, Yoon SN, & Vallone DM (2021). Quitting e-cigarettes: quit attempts and quit intentions among youth and young adults. Prev Med Rep, 21, 101287. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Ayers JW, Poliak A, Dredze M, Leas EC, Zhu Z, Kelley JB, Faix DJ, Goodman AM, Longhurst CA, Hogarth M, & Smith DM (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med, 183(6), 589–596. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.QSR International Pty Ltd. NVivo, Version 12; QSR International: Burlington, MA, USA, 2018 [Google Scholar]
  • 13.Guest G, Namey E, & Chen M (2020). A simple method to assess and report thematic saturation in qualitative research. PloS one, 15(5), e0232076. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Taherdoost H, (2019). What is the best response scale for survey and questionnaire design; review of different lengths of Rating Scale /Attitude Scale /Likert Scale. Int. J. Acad. Res. Manag, 8 (1): 1–10 [Google Scholar]
  • 15.Homolak J (2023). Opportunities and risks of ChatGPT in medicine, science, and academic publishing: a modern Promethean dilemma. Croat Med J, 64(1), 1–3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Yeo YH, Samaan JS, Ng WH, Ting PS, Trivedi H, Vipani A, Ayoub W, Yang JD, Liran O, Spiegel B, et al. (2023) Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. medRxiv, 2023; Preprint [DOI] [PMC free article] [PubMed] [Google Scholar]
