Human Vaccines & Immunotherapeutics. 2023 Sep 3;19(2):2235200. doi: 10.1080/21645515.2023.2235200

Chatting with ChatGPT to learn about safety of COVID-19 vaccines – A perspective

Antonio Salas a,b,c, Irene Rivero-Calle b,c,d,e, Federico Martinón-Torres b,c,d,e
PMCID: PMC10478732  PMID: 37660470

ABSTRACT

Vaccine hesitancy is among the top 10 threats to global health, according to the World Health Organization (WHO). In this exploration, we delve into ChatGPT’s capacity to generate opinions on vaccine hesitancy by interrogating this AI chatbot with the 50 most prevalent counterfeit messages, false and true contraindications, and myths circulating on the internet regarding vaccine safety. Our results indicate that, while the present version of ChatGPT’s default responses may be incomplete, they are generally satisfactory. Although ChatGPT cannot substitute for an expert or the scientific evidence itself, this form of AI has the potential to guide users toward information that aligns well with scientific evidence.

KEYWORDS: ChatGPT, chatbot, artificial intelligence, vaccine safety, COVID-19, misinformation

Introduction

In 2019, the World Health Organization (WHO) signaled vaccine hesitancy as one of the top 10 threats to global health because it “threatens to reverse progress made in tackling vaccine-preventable disease.”1 The circulation of misinformation on (social) media has significantly contributed to generating unfavorable reactions among the population toward COVID-19 vaccination and other pandemic control measures linked to social and public health. Acceptance of vaccines is facing new challenges,2 and European populations were already recognized as being among the least vaccine-confident in the world in 2016.

ChatGPT is an artificial intelligence (AI) chatbot released by OpenAI. It utilizes natural language processing and machine learning to enable users to engage in conversations and interactions with a virtual assistant, generating immediate responses to written prompts. However, concerns have been raised in editorials published in high-ranking journals regarding its potential misuse within the academic and scientific communities; consequently, strict policies are being implemented to regulate its use.3,4 Its user-friendly interface makes it accessible to a wide population, a fact that has been profusely echoed in the media, and many governments are expressing worries about its potential to be used fraudulently in educational settings (https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html).5

The WHO Collaborating Center for Vaccine Safety at the University of Santiago de Compostela (WHO-CC-VSS; Spain) has been addressing myths and false contraindications to vaccination. These misconceptions, often encountered in clinical practice and widely disseminated on social media, contribute significantly to vaccine hesitancy and reluctance among populations. This center has taken several actions to counteract these issues, including the development of specific educational platforms (www.covid19infovaccines.com) and materials (https://apps.who.int/iris/handle/10665/350968).6 In light of this context, we took the opportunity to assess the accuracy of ChatGPT regarding the safety aspects of COVID-19 vaccines.

Methods

To assess the ability of this AI to generate responses in accordance with the available scientific evidence, we challenged it with the 50 most frequently asked questions received at the WHO-CC-VSS, organized under three headings: (i) misconceptions regarding safety (such as the integration of the mRNA vaccine into the human genome, or the vaccine causing Long COVID), (ii) false contraindications (related to the use of the vaccines in immunosuppressed patients, breastfeeding women, etc.), and (iii) true contraindications, safety signals, or precautions (linked to anaphylaxis, myocarditis, etc.). The answers were analyzed independently by three professionals who specialize in this field (WHO-CC-VSS: Siddhartha Sankar Datta, IR-C, and FM-T). The responses were rated in terms of veracity and precision against the current scientific evidence and the recommendations of the WHO and other supranational agencies (see Supplementary Text). This exercise is highly relevant because popular information sources, such as social media platforms (e.g., Twitter, Instagram) or internet search engines (e.g., Google Web Search™), often employ algorithms that cater to public preferences, potentially leading to biased or incorrect answers.7
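The study itself graded ChatGPT’s default responses as returned to the user. Purely as an illustration, the minimal sketch below shows how a similar batch of queries could be scripted against OpenAI’s public chat-completions endpoint so that the default answers can be collected for independent expert grading; the model name, example questions, and output file are assumptions, not part of the original protocol.

```python
# Illustrative sketch only: shows how the 50 questions could be submitted
# programmatically and the default answers stored for expert grading.
# Model name, questions, and file name are assumptions.
import json
import os

import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

questions = [
    "Does the mRNA in COVID-19 vaccines integrate into the human genome?",
    "Is breastfeeding a contraindication to COVID-19 vaccination?",
    # ... remaining questions from the three headings
]

responses = []
for q in questions:
    r = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",  # assumed model version
            "messages": [{"role": "user", "content": q}],
        },
        timeout=60,
    )
    r.raise_for_status()
    answer = r.json()["choices"][0]["message"]["content"]
    responses.append({"question": q, "default_answer": answer})

# Save the default answers so the three experts can rate them independently.
with open("chatgpt_default_answers.json", "w") as f:
    json.dump(responses, f, indent=2, ensure_ascii=False)
```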

Results

All the questions under the three headings were evaluated collectively, since no significant differences in veracity and precision were detected. In terms of precision, the majority of questions received accurate answers, with most responses graded as ‘excellent’ or ‘good’ (average score of 9 out of 10; SD: 0.9). The experts also rated the responses as ‘accurate’ (85.5% on average) or ‘accurate but with gaps’ (14.5% on average) (Figure 1). As an example of a response deemed by an evaluator to be ‘accurate but with gaps’, we reproduce the following query: “COVID-19 vaccination during pregnancy causes birth defects” (Query 1.8; Supplementary Text). The default answer refers only to mRNA vaccines; however, if the user asks the system to elaborate on this point, the chatbot provides an expanded response without losing scientific rigor. Overall, ChatGPT constructs a narrative that is aligned with the available scientific evidence, debunking myths circulating on social media and thereby potentially facilitating an increase in vaccine uptake. Furthermore, it provides correct answers both to queries that could be ascribed to genuine myths and to those commonly considered in clinical recommendation guidelines as false or true contraindications.

Figure 1.

Results of the evaluation by three experts of the 50 most popular questions related to vaccine safety, on (1) veracity, which categorized responses as accurate, accurate but with gaps, or wrong; (2) precision, which assessed the quality of answers as excellent, good, average, or insufficient; and (3) a quality rank, scored from 1 (worst) to 10 (best). Average values for the 50 questions are provided. Veracity and precision were used as subjective terms based on the opinions of three independent experts. Therefore, the present study serves as a pragmatic exercise aimed at evaluating the potential scope of ChatGPT in addressing misconceptions and falsehoods related to vaccine safety. Abbreviations: E1, E2, and E3, experts 1 to 3. See Supplementary Text for more details.
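For readers who wish to see how aggregate values of the kind reported in Figure 1 (mean quality score with its SD, and the share of each veracity category) can be derived from the individual expert grades, the following minimal sketch illustrates the arithmetic; the ratings shown are hypothetical placeholders, not the study data (which are in the Supplementary Text).

```python
# Hypothetical grades for two questions, one entry per expert (E1-E3).
from statistics import mean, stdev

ratings = [
    {"veracity": ["accurate", "accurate", "accurate"], "score": [9, 10, 9]},
    {"veracity": ["accurate", "accurate but with gaps", "accurate"], "score": [8, 7, 9]},
]

# Mean quality score and its standard deviation across all expert grades.
all_scores = [s for r in ratings for s in r["score"]]
print(f"quality: {mean(all_scores):.1f} (SD {stdev(all_scores):.1f})")

# Share of individual veracity grades falling into each category.
all_veracity = [v for r in ratings for v in r["veracity"]]
for label in ("accurate", "accurate but with gaps", "wrong"):
    share = 100 * all_veracity.count(label) / len(all_veracity)
    print(f"{label}: {share:.1f}%")
```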

However, there are a few considerations to bear in mind. The responses generated by ChatGPT are dependent on the phrasing of the prompts. The responses are also dynamic, and ChatGPT usually provides different answers if the same question is repeated within a short time frame (although, in its current form, these alternative answers remain consistent in content). The tool has the capability to interact with users, which means it could be trained to provide answers that deviate from scientific evidence, potentially leading to an undesirable confirmation bias. We have solely evaluated the default responses, recognizing that ChatGPT has the capacity to generate a multitude of complex scenarios with users. ChatGPT is rapidly evolving, and future versions of this AI may exhibit variations in how it responds to users. There are already browser extensions incorporating ChatGPT that offer additional information to users, such as links to original references supporting its narrative.
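The dynamic behavior described above can be probed by sending the same prompt several times and comparing the answers. The sketch below is only an illustration under the same assumptions as before (public chat-completions endpoint, assumed model name, an example prompt); it is not how the evaluation was performed.

```python
# Sketch of probing response variability: repeat an identical prompt and
# count how many distinct wordings come back. Endpoint, model, and prompt
# are assumptions used only for illustration.
import os

import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
PROMPT = "Do COVID-19 mRNA vaccines alter human DNA?"

answers = []
for _ in range(3):
    r = requests.post(
        API_URL,
        headers=HEADERS,
        json={
            "model": "gpt-3.5-turbo",  # assumed model version
            "messages": [{"role": "user", "content": PROMPT}],
            "temperature": 1.0,  # default sampling keeps output non-deterministic
        },
        timeout=60,
    )
    r.raise_for_status()
    answers.append(r.json()["choices"][0]["message"]["content"])

# Differently worded answers to an identical prompt illustrate the sampled,
# dynamic nature of the chatbot's output.
print(len(set(answers)), "distinct wordings out of", len(answers), "runs")
```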

Discussion

It has recently been reported in a JAMA editorial that, when challenged with controversial topics, ChatGPT produces well-written responses, but these “are formulaic (which was not easily discernible), not up to date, false or fabricated, without accurate or complete references, and worse, with concocted nonexistent evidence for claims or statements it makes.”8 While we partially agree with this statement, we argue that interacting with this form of AI can serve as an information medium for the general public, even those without specialized knowledge in the topic, and can positively influence decision-makers to align with the available scientific evidence.

Overall, ChatGPT can detect counterfeit questions related to vaccines and vaccination. In its current form, the language used by this AI is not overly technical, making it easily understandable to the public but without sacrificing scientific rigor. We acknowledge that the present-day version of ChatGPT cannot replace an expert or scientific evidence per se. However, the results suggest it could be a reliable source of information to the public.


Acknowledgments

We are indebted to Siddhartha Sankar Datta for his very kind contribution to the manuscript as a WHO expert evaluator.

Funding Statement

This study received support from Instituto de Salud Carlos III (ISCIII): GePEM [PI16/01478/Cofinanciado FEDER; A.S.], DIAVIR [DTS19/00049/Cofinanciado FEDER, A.S.], Resvi-Omics [PI19/01039/Cofinanciado FEDER, A.S.], Agencia Gallega de Innovación (GAIN): Grupos con Potencial de Crecimiento [IN607B 2020/08, A.S.]; Agencia Gallega para la Gestión del Conocimiento en Salud (ACIS): BI-BACVIR [PRIS-3, A.S.], and CovidPhy [SA 304 C, A.S.]; ReSVinext [PI16/01569/Cofinanciado FEDER, F.M.-T.], Enterogen [PI19/01090/Cofinanciado FEDER, F.M.-T.], OMI-COVI-VAC [PI PI22/00406/Cofinanciado FEDER, F.M.-T], and consorcio Centro de Investigación Biomédica en Red de Enfermedades Respiratorias [CB21/06/00103; F.M.-T.]; GEN-COVID [IN845D 2020/23, F.M.-T.] and Grupos de Referencia Competitiva [IIN607A2021/05, F.M.-T.]. The funders were not involved in the study design, collection, analysis, interpretation of data, the writing of this article, or the decision to submit it for publication.

Disclosure statement

AS declares no competing interests. IRC has participated in advisory boards organized by MSD, GSK, Sanofi and Pfizer. IRC has been involved in clinical trials funded by Ablynx, Abbot, Seqirus, Sanofi Pasteur MSD, Cubist, Wyeth, Merck, Pfizer, Roche, Regeneron, Jansen, Medimmune, Novavax, Novartis and GSK, although the funds were awarded to the institution. FM-T has received honoraria from GSK, Pfizer Inc, Moderna, Astra Zeneca, Sanofi Pasteur, MSD, Seqirus, Biofabri and Janssen for taking part in advisory boards and expert meetings and for acting as a speaker in congresses outside the scope of the submitted work. Federico Martinón-Torres has also acted as principal investigator in randomized controlled trials of the above-mentioned companies as well as Ablynx, Gilead, Regeneron, Roche, Abbott, Novavax, and MedImmune, with honoraria paid to his institution.

Author’s contributions

A.S. and F.M.-T. conceived the manuscript. All the authors analyzed the data. A.S. and F.M.-T. wrote the first draft of the manuscript. All authors made contributions to the manuscript and approved the submitted version.

Supplementary data

Supplemental data for this article can be accessed on the publisher’s website at https://doi.org/10.1080/21645515.2023.2235200.

References

  • 1. World Health Organization. Ten threats to global health in 2019. 2019. https://www.who.int/news-room/spotlight/ten-threats-to-global-health-in-2019
  • 2. Karafillakis E, Van Damme P, Hendrickx G, Larson HJ. COVID-19 in Europe: new challenges for addressing vaccine hesitancy. Lancet. 2022;399(10326):699–3. doi: 10.1016/S0140-6736(22)00150-7.
  • 3. Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313. doi: 10.1126/science.adg7879.
  • 4. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613(7945):620–1. doi: 10.1038/d41586-023-00107-z.
  • 5. Stokel-Walker C. AI bot ChatGPT writes smart essays - should professors worry? Nature. 2022. doi: 10.1038/d41586-022-04397-7.
  • 6. Rivero I, Raguindin PF, Buttler R, Martinon-Torres F. False vaccine contraindications among healthcare providers in Europe: a short survey among members of the European Society of Pediatric Infectious Diseases. Pediatr Infect Dis J. 2019;38(9):974–6. doi: 10.1097/INF.0000000000002401.
  • 7. Pias-Peleteiro L, Cortes-Bordoy J, Martinon-Torres F. Dr Google: what about the human papillomavirus vaccine? Hum Vaccin Immunother. 2013;9(8):1712–19. doi: 10.4161/hv.25057.
  • 8. Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023;329(8):637. doi: 10.1001/jama.2023.1344.
