JAMIA Open. 2025 Feb 19;8(1):ooaf005. doi: 10.1093/jamiaopen/ooaf005

VaxBot-HPV: a GPT-based chatbot for answering HPV vaccine-related questions

Yiming Li 1, Jianfu Li 2, Manqi Li 3,4, Evan Yu 5, Danniel Rhee 6, Muhammad Amith 7,8, Lu Tang 9, Lara S Savas 10, Licong Cui 11, Cui Tao 12
PMCID: PMC11837857  PMID: 39975811

Abstract

Objective

The Human Papillomavirus (HPV) vaccine is an effective measure for preventing and controlling the diseases caused by HPV. However, widespread misinformation and vaccine hesitancy remain significant barriers to its uptake. This study focuses on the development of VaxBot-HPV, a chatbot aimed at improving health literacy and promoting vaccination uptake by providing information and answering questions about the HPV vaccine.

Methods

We constructed the knowledge base (KB) for VaxBot-HPV, which consists of 451 documents from biomedical literature and web sources on the HPV vaccine. For training and testing, we extracted 202 question-answer pairs from the KB and obtained 39 additional pairs generated by GPT-4. To comprehensively understand the capabilities and potential of GPT-based chatbots, 3 models were involved in this study: GPT-3.5, VaxBot-HPV, and GPT-4. The evaluation criteria included answer relevancy and faithfulness.

Results

VaxBot-HPV demonstrated superior performance in answer relevancy and faithfulness compared to baselines. For test questions in KB, it achieved an answer relevancy score of 0.85 and a faithfulness score of 0.97. Similarly, it attained scores of 0.85 for answer relevancy and 0.96 for faithfulness on GPT-generated questions.

Discussion

VaxBot-HPV demonstrates the effectiveness of fine-tuned large language models in healthcare, outperforming generic GPT models in accuracy and relevance. Fine-tuning mitigates hallucinations and misinformation, ensuring reliable information on HPV vaccination while allowing dynamic and tailored responses. The specific fine-tuning, which includes context in addition to question-answer pairs, enables VaxBot-HPV to provide explanations and reasoning behind its answers, enhancing transparency and user trust.

Conclusions

This study underscores the importance of leveraging large language models and fine-tuning techniques in the development of chatbots for healthcare applications, with implications for improving medical education and public health communication.

Keywords: HPV vaccine, GPT, large language model, chatbot, medical education

Introduction

Human Papillomavirus (HPV) is a group of viruses that infect the skin and mucous membranes, with over 100 types identified.1 HPV is primarily transmitted through sexual contact and can infect the genital area, leading to genital warts and various cancers, including cervical, anal, penile, vaginal, vulvar, and oropharyngeal cancers.2–4 Among these, cervical cancer stands out as the most common HPV-related cancer and a leading cause of cancer-related deaths in women worldwide, with an estimated 266 000 cervical cancer deaths attributable to HPV infection annually.5–9 This burden is especially pronounced in low- and middle-income countries where access to screening and treatment is limited.5

As with vaccines for other infectious diseases, the development of HPV vaccines has been a significant advancement in preventive healthcare.10–19 HPV vaccines primarily target HPV types 16 and 18, which are responsible for approximately 70% of cervical cancers and a significant proportion of other HPV-related cancers.20 By preventing HPV infection, these vaccines can effectively reduce the incidence of HPV-related diseases, including cervical cancer.20 Clinical trials have demonstrated the high efficacy of HPV vaccines in preventing HPV infection and related diseases.21 Furthermore, population-based studies have shown a substantial decline in HPV infections and HPV-related outcomes in countries with high HPV vaccination coverage, highlighting the real-world effectiveness of these vaccines.22 Overall, HPV vaccines are a crucial tool in the prevention of HPV-related diseases, particularly cervical cancer. Widespread vaccination has the potential to significantly reduce the burden of HPV-related cancers and improve the overall health outcomes of populations globally.23

Despite the proven benefits of HPV vaccination, there are various concerns and forms of hesitancy surrounding its use.24 Some individuals and communities are hesitant due to insufficient and inadequate information about HPV vaccination or misinformation about the vaccine’s safety and efficacy through social media and other channels.25–27 Concerns about the long-term effects of the vaccine and its perceived necessity for individuals who may not consider themselves to be at high risk for HPV-related diseases also contribute to hesitancy.27 Additionally, cultural or religious beliefs, distrust of pharmaceutical companies, and concerns about the vaccination’s affordability and accessibility in low-resource settings can all play a role in vaccine hesitancy.28 Addressing these concerns through accurate information, targeted education campaigns, and improved access to vaccination services is crucial in increasing HPV vaccination rates and reducing the burden of HPV-related diseases.

Traditionally, question answering (QA) systems have been developed using rule-based approaches, information retrieval techniques, deep learning-based approaches, or hybrid methods.29,30 Rule-based QA systems rely on predefined rules and patterns to extract relevant information from a knowledge base (KB) or document collection in response to a question.31 Tsampos and Marakakis, for example, developed a rule-based medical QA system in Python using spaCy for natural language processing (NLP) and Neo4j for graph database management.32 They used Cypher queries to retrieve information from the graph database to answer user questions, and the system can handle complex questions by searching for relations between remote nodes and using synonyms to match nodes or paths.32 Cairns et al. developed MiPACQ, a rule-based QA system, by first retrieving candidate answer paragraphs using a paragraph-level baseline system based on the Lucene search engine.33 The paragraphs were then re-ranked using a fixed formula that incorporated semantic annotations from the MiPACQ annotation pipeline.33 This method utilized a scoring function that combined original paragraph scores with bag-of-words and UMLS entity components, ensuring that relevant paragraphs were prioritized for better QA performance.33 Information retrieval-based QA systems use keyword matching and ranking algorithms to retrieve documents or passages likely to contain the answer.34 For example, Guo et al. developed a retrieval-based medical QA system that efficiently retrieves answers using Elasticsearch and enhances them with semantic matching and knowledge graphs.35 The system’s novel siamese-based answer selection architecture outperformed baseline models and systems in both Chinese and English datasets, demonstrating consistent improvements in quantification and qualification evaluations.35 Deep learning-based QA systems have emerged as a more flexible and adaptable approach, leveraging techniques such as powerful neural network architectures to automatically learn to understand and respond to questions.36 Yin et al. developed Evebot, a conversational system for detecting negative emotions and preventing depression through positive suggestions.37 It uses deep-learning models including a Bi-LSTM for emotion detection and an anti-language sequence-to-sequence neural network for counseling.37

While these traditional QA systems have been effective for certain types of questions and domains, they have several limitations. One major limitation is the reliance of rule-based approaches on predefined rules or keywords, which makes them less flexible and adaptable to new or complex questions.38 These systems also struggle with understanding natural language queries and context, often leading to inaccurate or incomplete answers. Additionally, traditional QA systems are limited by the quality and coverage of their underlying KB or document collection, which can affect the accuracy and relevance of their answers.39 For deep learning-based QA systems, one major limitation is their dependency on large amounts of labeled training data.40–43 These systems require vast datasets to learn patterns in language and develop accurate models, which can be challenging and resource-intensive to obtain, especially for specialized domains or languages.40 Additionally, deep learning-based QA systems may struggle with out-of-domain or adversarial examples, where the input falls outside the scope of the training data, leading to errors or inaccurate responses.36,44,45

Another limitation of traditional QA systems is their inability to provide explanations or reasoning behind their answers.46 These systems typically return a single answer without any supporting context or evidence, making it challenging for users to understand how the answer was derived.47,48 This lack of transparency can reduce user trust and confidence in the system, especially in critical applications such as healthcare or legal domains.49 Overall, while traditional QA systems have been valuable in certain contexts, their limitations have led to the development of more advanced approaches.

In recent years, the advent of large language models (LLMs), such as the Generative Pre-trained Transformer (GPT), has revolutionized the field of NLP and opened up new possibilities for conversational agents.42,50–53 GPT, developed by OpenAI, is a state-of-the-art deep learning model capable of generating human-like text based on the input it receives.42,50–52,54 The latest iteration, GPT-4, is distinguished by its ability to learn from vast amounts of text data, supported by its billions of parameters, enabling it to capture complex patterns in language and generate highly coherent and informative text.55–58 However, a significant challenge with GPT models, including ChatGPT, is their tendency to produce hallucinations or responses that, while plausible, are factually incorrect.59 This issue has raised concerns about the reliability of these models, especially in critical applications such as healthcare.60 To address this problem, researchers and developers are investigating the use of well-curated KBs to refine the models. By integrating authenticated and reliable information from KBs, the goal is to enhance the model’s capability to generate pertinent and accurate responses, thereby decreasing the risk of hallucinations. This has led to the development of chatbots and QA systems powered by GPT that can provide information and assistance across various domains.57

In the context of healthcare, the potential of GPT-powered QA systems and chatbots is particularly promising.61 Seenivasan et al. developed an end-to-end trainable Language-Vision Generative Pre-trained Transformer (LV-GPT) model to leverage GPT-based LLMs for Visual Question Answering (VQA) in robotic surgery.62 The LV-GPT model extends GPT2 to process vision input (images) by incorporating a vision tokenizer and vision token embedding.62 The model outperforms other state-of-the-art VQA models on public surgical-VQA datasets and a newly annotated dataset, demonstrating its effectiveness in capturing context from both language and vision modalities.62 Shi et al. developed a GPT-based QA System for Fundus Fluorescein Angiography (FFA) with an image-text alignment module and a GPT-based interactive QA module.63 The system showed satisfactory performance in automatic evaluation and high accuracy and completeness in manual assessments, facilitating dynamic communication between ophthalmologists and patients for enhanced diagnostic processes.63 Although GPT-powered QA systems and chatbots in healthcare hold significant promise, we found that these systems exhibit hallucination issues because they use pre-trained GPT models directly without fine-tuning.63 In the case of HPV vaccination, where inadequate information and misconceptions are prevalent, leveraging fine-tuning techniques with GPT models can significantly enhance the accuracy and reliability of information provided. A GPT-powered chatbot, when properly fine-tuned, could play a crucial role in educating the public and increasing awareness about the importance of vaccination.

In this article, we present the development and evaluation of a GPT-powered chatbot (VaxBot-HPV) designed to provide information and answer questions about the HPV vaccine. We also describe the design and implementation of the chatbot, its capabilities and limitations, as well as its potential impact on public health.

Methods

The study is structured around 3 primary stages. Initially, we constructed a KB and collected question-answer pairs relevant to the HPV vaccine within the KB to develop the benchmark. Subsequently, we inferred answers for the questions in the test benchmark using both pretrained GPT models and GPT models fine-tuned on the benchmark. Finally, we assessed the results in terms of faithfulness and answer relevancy. Figure 1 shows the overview of the study framework.

Figure 1.

Overview of the framework: (1) construction of a knowledge base (KB) and collection of HPV vaccine-related question-answer pairs; (2) fine-tuning GPT models and inferring answers using both pretrained GPT models and VaxBot-HPV; (3) evaluation of results based on faithfulness and answer relevancy.

KB and gold standard construction

To construct the KB for VaxBot-HPV, we referred to the studies by Amith et al.64,65 A search was conducted from April 2022 to July 2022, focusing on both scientific literature and other patient-friendly resources like hospital websites. The primary goal was to provide comprehensive and credible information about the HPV vaccine.

Screening of scientific literature

The peer-reviewed biomedical literature served 2 key purposes in informing the chatbot’s responses: (1) providing context for the generated answers to ensure reliability and alignment with established scientific consensus, and (2) identifying and generating additional topics or questions that were not present in web sources.

The search for scientific literature aimed to identify peer-reviewed journals that discuss influential factors related to HPV vaccination. We applied the following inclusion and exclusion criteria.

Inclusion criteria:

  • Journals discussing influential factors suggested by peer-reviewed studies.

  • The study population included healthcare providers or patients.

  • The research must be directly related to HPV vaccination.

  • Publications from the last 20 years.

Exclusion criteria:

  • Journals with purely theoretical discussions or without concrete, actionable results.

Screening of web-based resources

We expanded the KB to include patient-friendly resources by reviewing information from reputable health websites such as the Centers for Disease Control and Prevention (CDC), National Cancer Institute (NCI), Mayo Clinic, MD Anderson, Cleveland Clinic, and Johns Hopkins. These sources were selected to provide accurate, accessible information relevant to patients and caregivers.

For the web-based resources, there were no strict inclusion or exclusion criteria, as the goal was to complement the scientific literature with essential, easy-to-understand information. We focused on ensuring that each resource addressed key factors related to HPV vaccination, supplementing the KB with practical information.

Ensuring comprehensive and credible responses

By combining peer-reviewed literature and patient-friendly web content, we aimed to ensure the KB, consisting of a total of 451 documents, would provide both comprehensive and credible responses to potential questions about the HPV vaccine. We implemented several quality control measures. In addition to including studies from peer-reviewed literature or reputable sources, the content of the KB was meticulously reviewed by 2 invited domain experts in the fields of immunization and public health. These experts assessed the accuracy and relevance of both the scientific literature and patient-friendly resources, ensuring that the information was reliable and comprehensive.

Constructing gold standard question-answer pairs

In this study, the gold standard was developed from both KB-derived and GPT-generated questions.

To extract frequently asked questions (FAQs) and their corresponding answers related to the HPV vaccine from the collected webpages in the KB, we first identified and localized the FAQ sections within each webpage. These sections were often distinguished by special fonts or formats that set the questions and answers apart from the rest of the content. For instance, questions were frequently displayed in bold or italic font, and some utilized larger font sizes to create a clear visual hierarchy. Additionally, we noticed that many webpages presented questions with specific indents or bullet points, while answers were often found in separate paragraphs or text boxes, sometimes accompanied by icons such as question marks for emphasis. We leveraged these unique formatting characteristics and developed extraction rules to automate the process of identifying and extracting the question-answer pairs.
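The exact extraction rules are not published with the article; the sketch below illustrates one plausible rule of this kind, assuming questions appear as bold or heading elements ending in a question mark and answers follow in the next paragraph-level element. It uses BeautifulSoup, and the tag names and heuristics are assumptions rather than the study's actual rules.

```python
# Minimal sketch of rule-based FAQ extraction from an HPV-vaccine webpage.
# Assumes questions appear as bold/heading elements ending in "?" and that
# the following paragraph-level element holds the answer; the study's actual
# selectors and heuristics may differ.
from bs4 import BeautifulSoup

def extract_faq_pairs(html: str) -> list[dict]:
    """Return question-answer pairs found in a webpage's FAQ-style sections."""
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    for tag in soup.find_all(["strong", "b", "h3", "h4"]):
        question = tag.get_text(strip=True)
        if not question.endswith("?"):
            continue  # only bold/heading text ending in "?" is treated as a question
        answer_tag = tag.find_next(["p", "div", "li"])
        if answer_tag is None:
            continue
        answer = answer_tag.get_text(" ", strip=True)
        if answer:
            pairs.append({"question": question, "answer": answer})
    return pairs
```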

To enhance question diversity and ensure the generalizability of our findings, we also employed GPT-4 to generate 80 question-answer pairs using the following prompt:

“Using the provided context from referencing articles on HPV vaccine, formulate a question that captures an important fact from the context. Restrict the question to the context information provided. Please only output the question.”
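A hedged sketch of how such questions could be generated with the OpenAI chat completions API, using the prompt above; the model name, temperature, and message layout are assumptions rather than the study's exact configuration.

```python
# Hypothetical sketch: generating one question from a KB passage with GPT-4,
# using the prompt shown above. Model name and parameters are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Using the provided context from referencing articles on HPV vaccine, "
    "formulate a question that captures an important fact from the context. "
    "Restrict the question to the context information provided. "
    "Please only output the question."
)

def generate_question(context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": context},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content.strip()
```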

After the initial extraction, we conducted a thorough manual review of the extracted question-answer pairs to ensure their accuracy and completeness. During this step, we carefully checked the content for any inconsistencies or errors and identified any missing questions that might not have been captured through the automated process. To ensure the quality of the FAQs, 2 domain experts (D.R. and M.A.) independently evaluated each question-answer pair based on 3 criteria: relevance, clarity, and reliability. A question-answer pair was considered relevant if it directly addressed significant topics related to HPV vaccination, clear if the language was easily understandable and free from ambiguity, and reliable if it was factually accurate and supported by credible sources. Each question-answer pair was scored on a binary scale (0 or 1) for all 3 domains, with “1” indicating the question-answer pair met the criteria for that domain and “0” indicating it did not. In cases where experts disagreed in their evaluations, a negotiation process was carried out to reach consensus on the final score for each question-answer pair. Only those question-answer pairs that received a score of “1” in all 3 domains were included in the final dataset. As a result of this rigorous process, we obtained 202 FAQ pairs from the KB and 39 pairs generated by GPT-4.
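As a simple illustration of the final filtering step, the sketch below keeps only pairs with a consensus score of 1 in all 3 domains; the field names and example records are hypothetical.

```python
# Hypothetical sketch of the consensus filtering step: a pair is retained only
# if it scored 1 for relevance, clarity, and reliability after adjudication.
DOMAINS = ("relevance", "clarity", "reliability")

def meets_all_criteria(consensus_scores: dict) -> bool:
    return all(consensus_scores.get(d) == 1 for d in DOMAINS)

reviewed_pairs = [  # illustrative records, not actual study data
    {"question": "Is the HPV vaccine safe?",
     "answer": "...",
     "consensus": {"relevance": 1, "clarity": 1, "reliability": 1}},
    {"question": "What is a virus?",
     "answer": "...",
     "consensus": {"relevance": 0, "clarity": 1, "reliability": 1}},
]
final_dataset = [p for p in reviewed_pairs if meets_all_criteria(p["consensus"])]
```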

Models

We utilized 2 state-of-the-art LLMs developed by OpenAI, GPT-3.5 and GPT-4, as the key components of this study.

  1. GPT-3.5: GPT-3.5 is an iteration in OpenAI’s series of large-scale language models. With a large model size (175 billion parameters) and enhanced capabilities, GPT-3.5 exhibits proficiency in understanding and generating human-like text, showcasing its potential for a wide range of applications including chatbots, content creation, and language translation.66,67

  2. GPT-4: GPT-4, the latest advancement in OpenAI’s GPT series, marks a significant milestone in the field of NLP. With a substantially larger model than its predecessors (its exact parameter count has not been officially disclosed), GPT-4 handles more complex language tasks with improved accuracy and understanding.66

Experiment setup

The question-answer pairs derived from the KB were divided into 162 samples for training and 40 for testing. Among the GPT-generated questions, 28 question-answer pairs were randomly selected for training and the remaining 11 for testing.
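A minimal sketch of this split, assuming the pairs are held in Python lists; the seed and placeholder data are illustrative, not the study's actual procedure.

```python
# Hypothetical sketch of the train/test split described above.
import random

def split_pairs(pairs, n_train, seed=0):
    """Shuffle the pairs reproducibly and split off n_train training samples."""
    shuffled = pairs[:]
    random.Random(seed).shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

kb_pairs = [{"question": "Who should get the HPV vaccine?", "answer": "..."}]   # 202 KB-derived pairs in practice
gpt_pairs = [{"question": "Which HPV types does the vaccine target?", "answer": "..."}]  # 39 GPT-generated pairs in practice

kb_train, kb_test = split_pairs(kb_pairs, n_train=162)    # 162 train / 40 test
gpt_train, gpt_test = split_pairs(gpt_pairs, n_train=28)  # 28 train / 11 test
```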

Initially, GPT-3.5 models were fine-tuned using the OpenAI API on the training set of the extracted question-answer pairs. The training was conducted using the default parameters; after fine-tuning, the parameters for VaxBot-HPV were refined as outlined in Table 1. During the inference process, we used the following prompt to instruct the GPT models when answering a query:

Table 1.

Parameters of VaxBot-HPV.

Parameter Value
n_epochs 2
batch_size 1
learning_rate_multiplier 1
Temperature 0.3
context_window 2048
Token limit 4096

“You are an expert Q&A system that is trusted around the world.

Always answer the query using the provided context information, and not prior knowledge.

Some rules to follow:

  1. Never directly reference the given context in your answer.

  2. Avoid statements like ‘Based on the context, …’ or ‘The context information …’ or anything along those lines.”
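The paper does not publish its training script; the sketch below illustrates how a comparable fine-tuning job could be submitted through the OpenAI fine-tuning API, combining the system prompt above, KB context, and the Table 1 hyperparameters. File names, the record layout, and the base model identifier are assumptions, not the authors' exact setup.

```python
# Hedged sketch of the fine-tuning step, not the study's actual script.
# Assumes a chat-style JSONL training file and the OpenAI fine-tuning API.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an expert Q&A system that is trusted around the world. "
    "Always answer the query using the provided context information, "
    "and not prior knowledge."
)

training_pairs = [  # illustrative placeholder: (KB context, question, reference answer)
    ("<KB passage on HPV vaccine safety>", "Is the HPV vaccine safe?", "<reference answer>"),
]

# Each training record pairs a question (with its KB context) with the gold answer.
with open("vaxbot_train.jsonl", "w") as f:
    for context, question, answer in training_pairs:
        record = {"messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Context:\n{context}\n\nQuery: {question}"},
            {"role": "assistant", "content": answer},
        ]}
        f.write(json.dumps(record) + "\n")

training_file = client.files.create(file=open("vaxbot_train.jsonl", "rb"),
                                    purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
    hyperparameters={"n_epochs": 2, "batch_size": 1, "learning_rate_multiplier": 1},
)
print(job.id)  # the fine-tuned model ID becomes available once the job finishes
```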

VaxBot-HPV’s development involved comparing its performance with that of GPT-3.5 and GPT-4 for each experimental set.

The experiments were carried out using a high-performance server containing 8 Nvidia A100 GPUs, each with a memory capacity of 80GB. This server configuration facilitated the effective training and evaluation of the models, ensuring the production of reliable and precise results.

Evaluation

The evaluation involved answer relevancy and faithfulness. Both are critical aspects in assessing the quality of generated responses. Answer relevancy gauges the extent to which the answers align with the questions, while faithfulness ensures factual accuracy, a fundamental requirement for reliable information retrieval. These metrics collectively provide a comprehensive evaluation of the model’s performance in understanding and responding to user queries. The assessment of all outcomes was carried out using the Ragas metrics, which are GPT-supported measures widely adopted in NLP tasks to evaluate the quality of generated text.68
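A minimal sketch of how such an evaluation could be run with the ragas package's evaluate() interface; the column schema follows its documentation, but exact API details vary by version, and an OpenAI key is assumed to be configured for the underlying judge model. The example texts are placeholders.

```python
# Hedged sketch of the Ragas-based evaluation of answer relevancy and
# faithfulness; not the study's exact evaluation code.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

eval_data = {
    "question": ["What are the risks of the HPV vaccine?"],
    "answer": ["Common side effects include soreness at the injection site..."],
    "contexts": [["Over 12 years of safety monitoring show that most HPV "
                  "vaccine side effects are mild and short-lived..."]],
}
dataset = Dataset.from_dict(eval_data)

# Both metrics are computed by an LLM judge over question, answer, and contexts.
result = evaluate(dataset, metrics=[answer_relevancy, faithfulness])
print(result)  # e.g. scores keyed by 'answer_relevancy' and 'faithfulness'
```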

Results

Table 2 illustrates the automatic performance evaluation of different GPT models in answer relevancy and faithfulness on the questions extracted from the KB. The results indicate that the VaxBot-HPV outperformed both the GPT-3.5 and GPT-4 models in terms of answer relevancy, achieving a score of 0.85 compared to 0.80 and 0.83, respectively. Similarly, the VaxBot-HPV exhibited higher faithfulness, scoring 0.97, compared to 0.92 for the GPT-3.5 model and 0.91 for the GPT-4 model. These results suggest that fine-tuning the GPT-3.5 model leads to improved performance in both answer relevancy and faithfulness compared to using the models in their pretrained states.

Table 2.

Performance evaluation of different GPT models in answer relevancy and faithfulness on the questions extracted from the knowledge base.

Model Answer relevancy Faithfulness
GPT-3.5 0.80 0.92
VaxBot-HPV 0.85 0.97
GPT-4 0.83 0.91

Table 3 presents the performance evaluation of different GPT models in terms of answer relevancy and faithfulness on questions generated by GPT-4. The GPT-3.5 model achieved an answer relevancy score of 0.80 and a faithfulness score of 0.90. In comparison, VaxBot-HPV showed improved performance with an answer relevancy score of 0.85 and a faithfulness score of 0.96. These results highlight the benefits of fine-tuning the GPT model, demonstrating its broader generalizability, applicability and robustness.

Table 3.

Performance evaluation of different GPT models in answer relevancy and faithfulness on the questions generated by GPT-4.

Model Answer relevancy Faithfulness
GPT-3.5 0.80 0.90
VaxBot-HPV 0.85 0.96

Figure 2 shows 2 sample questions and the answers generated by 4 systems. We selected 2 questions: one (“What are the risks of cervical cancer besides pregnancy at an early age?”) generated by GPT, and another (“What are the risks of the HPV vaccine?”) from the test benchmark. VaxBot-HPV demonstrates an advantage in providing comprehensive and accurate responses to health-related inquiries compared to the other systems. For instance, when asked about the risks of cervical cancer besides early pregnancy, VaxBot-HPV effectively listed multiple risk factors, including having multiple sexual partners, weakened immune systems, and specific health conditions. In contrast, GPT-3.5 failed to identify any additional risk factors, while GPT-4 provided information not directly relevant to the question, such as “genital warts occurred most in adolescents and young adults,” which could be misleading. Additionally, ChatGPT, although comprehensive, was not succinct and failed to answer the question directly. Furthermore, for the question “What are the risks of the HPV vaccine?”, VaxBot-HPV effectively summarized over 12 years of safety monitoring, highlighted common and rare side effects, and provided actionable advice on preventing fainting-related injuries, all while maintaining a clear and concise format. In contrast, GPT-3.5 and GPT-4, though accurate, lacked depth, information sources, and reassurance, merely listing side effects without addressing common myths or providing detailed context. ChatGPT-4, despite its comprehensiveness, often failed to deliver succinct answers, resulting in verbose responses that lacked focus. These examples illustrate that VaxBot-HPV not only enhances the specificity and clarity of responses but also ensures that users receive accurate, reliable, and actionable health information efficiently.

Figure 2.

Sample questions and answers from the 4 systems (VaxBot-HPV, GPT-3.5, GPT-4, and ChatGPT-4): (A) a GPT-generated question, “What are the risks of cervical cancer besides pregnancy at an early age?”; (B) a question from the test benchmark, “What are the risks of the HPV vaccine?”

Discussion

The development and evaluation of VaxBot-HPV, a chatbot designed to provide information and answer questions about the HPV vaccine, demonstrates the potential of large language models, particularly GPT-3.5 and GPT-4, in healthcare applications. Compared to traditional QA systems, VaxBot-HPV leverages the capabilities of GPT models, especially after fine-tuning, to generate relevant and accurate responses to user queries.

VaxBot-HPV has a substantial advantage over existing pre-trained GPT models. The extensive pre-training of its underlying model gives VaxBot-HPV a deep understanding of language and context, enabling it to provide more relevant and accurate answers to user queries. Unlike rule-based systems, which rely on predefined rules and patterns, and retrieval-based systems, which use keyword matching and ranking algorithms, VaxBot-HPV generates responses based on a broader understanding of the topic. This capability enhances the chatbot’s ability to address a wide variety of questions and provide more informative and helpful responses to users. Moreover, VaxBot-HPV generates answers dynamically, potentially offering more tailored responses to users compared to standard, one-size-fits-all answers. The fine-tuning process further enhances VaxBot-HPV’s performance, particularly in the context of HPV vaccination, by adapting it to the specific domain. This adaptation improves answer relevancy and faithfulness, addressing common issues of ChatGPT such as hallucinations, where the model generates plausible but inaccurate responses. By fine-tuning on a dataset specific to HPV vaccination, VaxBot-HPV can learn the nuances of the topic, including relevant terminology, common misconceptions, and specific concerns that users may have. This specificity allows the chatbot to provide more accurate and tailored responses, increasing its overall effectiveness in addressing user queries related to the HPV vaccine.

Furthermore, in the field of health communication, health misinformation has severe consequences, such as delays in seeking care, vaccine hesitancy, medication non-compliance, and increased disease outbreaks and burden, particularly in underserved populations.69 Inaccurate health information can also mislead individuals, contributing to poor health decisions and thereby impacting quality of life and health behavior.70 Worse, the spread of misinformation undermines public trust in healthcare recommendations and erodes the clinician-patient relationship.71 The fine-tuning process helps mitigate bias and misinformation that may be present in generic language models, ensuring that VaxBot-HPV provides reliable and trustworthy information to users seeking information about HPV vaccination. Additionally, the specific fine-tuning, which includes context in addition to question-answer pairs, enables VaxBot-HPV to extend beyond simply answering questions: it can also provide explanations or reasoning behind its answers, increasing transparency and user trust. This feature is particularly important in healthcare applications, where understanding the rationale behind medical advice is crucial for informed decision-making.

In terms of evaluations, incorporating multiple sources, including questions generated by GPT models, strengthens the credibility and reliability of our findings regarding VaxBot-HPV’s performance. By leveraging questions from diverse sources, we were able to assess the chatbot’s ability to handle a wide range of queries beyond those explicitly included in the KB. This comprehensive evaluation approach not only ensures the robustness of our results but also demonstrates VaxBot-HPV’s versatility in addressing various user inquiries. Overall, the use of multiple evaluation metrics underscores the effectiveness and adaptability of VaxBot-HPV in providing reliable information and support to users.

While VaxBot-HPV demonstrates promising performance, there are several limitations to consider. First, the chatbot’s effectiveness is contingent on the quality and comprehensiveness of the underlying KB. Incomplete or inaccurate information in the KB could lead to erroneous or insufficient responses from the chatbot. Additionally, the chatbot’s reliance on text-based interactions may limit its accessibility to individuals with visual or cognitive impairments who may benefit from alternative communication methods. Moreover, the evaluation of VaxBot-HPV was primarily based on its performance in answering questions, overlooking other aspects of user interaction such as ease of use, user satisfaction, or engagement. The inclusion of manual evaluations is needed to provide a holistic assessment of the chatbot’s performance, enhancing the depth and validity of our conclusions. Furthermore, the small dataset size reduced the statistical power, limiting our ability to conduct hypothesis testing or determine confidence intervals to assess the significance of fine-tuning. Finally, the generalizability of our findings may be limited to the specific domain of HPV vaccination and may not extend to other healthcare contexts.

Future research could focus on several areas to enhance the capabilities and impact of VaxBot-HPV. First, expanding the KB to include a broader range of topics related to HPV vaccination and addressing emerging concerns or misconceptions could improve the chatbot’s effectiveness and relevance. Second, integrating multimedia capabilities, such as image or video recognition, could enhance the chatbot’s ability to provide information and support in a more interactive and engaging manner. Third, incorporating feedback mechanisms to gather user input and improve the chatbot’s responses over time could enhance its usability and user satisfaction. Furthermore, exploring the integration of VaxBot-HPV with existing healthcare systems or platforms could facilitate its adoption and integration into clinical workflows, potentially improving access to information and promoting HPV vaccination uptake. Additionally, to ensure VaxBot-HPV remains up-to-date, we will integrate it with RefAI, our automated tool for retrieving relevant biomedical literature.58 Collaboration with healthcare institutions and public health organizations will also facilitate access to the latest research, policy updates, and recommendations on HPV vaccination. We plan to automate regular updates by accessing live databases or APIs from organizations like the CDC and WHO, along with employing automated literature tracking. These measures will enable the chatbot to continuously provide accurate, up-to-date information as knowledge of HPV vaccines evolves. Lastly, we need to add a user interface to VaxBot-HPV to make it more accessible and user-friendly, enhancing the overall user experience and encouraging more people to use the chatbot for reliable information on HPV vaccination and related topics.

Conclusion

In conclusion, the development of VaxBot-HPV demonstrates the potential of GPT-powered chatbots in healthcare, particularly in promoting vaccination uptake and addressing common concerns and misconceptions. The study also underscores the importance of leveraging large language models and fine-tuning techniques in healthcare chatbot development. The efficacy of VaxBot-HPV highlights the transformative impact of such technologies on medical education, healthcare communication, and information dissemination.

Contributor Information

Yiming Li, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States.

Jianfu Li, Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL 32224, United States.

Manqi Li, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States; Department of Biostatistics and Data Science, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States.

Evan Yu, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States.

Danniel Rhee, Department of Health Promotion and Behavioral Sciences, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States.

Muhammad Amith, Department of Biostatistics and Data Science, School of Public and Population Health, The University of Texas Medical Branch, Galveston, TX 77550, United States; Department of Internal Medicine, The University of Texas Medical Branch, Galveston, TX 77550, United States.

Lu Tang, Department of Communication and Journalism, College of Arts and Science, Texas A&M University, College Station, TX 77843, United States.

Lara S Savas, Center for Health Promotion and Prevention Research, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States.

Licong Cui, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, United States.

Cui Tao, Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL 32224, United States.

Author contributions

Yiming Li, Jianfu Li, Cui Tao (Methodology); Yiming Li and Jianfu Li (Software); Lu Tang and Lara S. Savas (Validation); Manqi Li, Yiming Li, and Jianfu Li (Formal analysis); Muhammad Amith, Danniel Rhee, Evan Yu (Investigation); Cui Tao (Resources); Yiming Li (Data curation); Yiming Li (Writing—original draft preparation); Cui Tao (Writing—review and editing); Yiming Li (Visualization); Cui Tao (Supervision); Cui Tao (Project administration); Cui Tao, Licong Cui (Funding acquisition). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Institute of Allergy And Infectious Diseases of the National Institutes of Health grant number [R01AI130460 and U24AI171008], National Institute of Diabetes and Digestive and Kidney Diseases grant number [R21DK134815], and CPRIT grant number [RP220244].

Conflicts of interest

The authors declare that there are no competing interests.

Ethics approval and consent to participate

Not applicable.

Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.

References

  • 1. das Leto M GP, dos Santos Júnior GF, Porro AM, et al. Human papillomavirus infection: etiopathogenesis, molecular biology and clinical manifestations. An Bras Dermatol. 2011;86:306-317. 10.1590/S0365-05962011000200014
  • 2. Brianti P, De Flammineis E, Mercuri SR. Review of HPV-related diseases and cancers. New Microbiol. 2017;40:80-85.
  • 3. Alhamlan FS, Alfageeh MB, Al Mushait MA, et al. Human papillomavirus-associated cancers. In: Kishore U, ed. Microbial Pathogenesis: Infection and Immunity. Springer International Publishing; 2021:1-14.
  • 4. Chelimo C, Wouldes TA, Cameron LD, et al. Risk factors for and prevention of human papillomaviruses (HPV), genital warts and cervical cancer. J Infect. 2013;66:207-217. 10.1016/j.jinf.2012.10.024
  • 5. Hull R, Mbele M, Makhafola T, et al. Cervical cancer in low and middle-income countries (review). Oncol Lett. 2020;20:2058-2074. 10.3892/ol.2020.11754
  • 6. Okunade KS. Human papillomavirus and cervical cancer. J Obstet Gynaecol. 2020;40:602-608. Published Online First: July 3.
  • 7. Arbyn M, Weiderpass E, Bruni L, et al. Estimates of incidence and mortality of cervical cancer in 2018: a worldwide analysis. Lancet Glob Health. 2020;8:e191-e203. 10.1016/S2214-109X(19)30482-6
  • 8. Arbyn M, Castellsagué X, de Sanjosé S, et al. Worldwide burden of cervical cancer in 2008. Ann Oncol. 2011;22:2675-2686. 10.1093/annonc/mdr015
  • 9. Tesfaye E, Kumbi B, Mandefro B, et al. Prevalence of human papillomavirus infection and associated factors among women attending cervical cancer screening in setting of Addis Ababa, Ethiopia. Sci Rep. 2024;14:4053. 10.1038/s41598-024-54754-x
  • 10. Ali SS, Nirupama AY, Chaudhuri S, et al. Therapeutic HPV vaccination: a strategy for cervical cancer elimination in India. Indian J Gynecol Oncol. 2024;22:38. 10.1007/s40944-024-00800-5
  • 11. Li Y, Li J, Dang Y, et al. Adverse events of COVID-19 vaccines in the United States: temporal and spatial analysis. JMIR Public Health Surveill. 2024;10:e51007. 10.2196/51007
  • 12. Stanley M. Immunobiology of HPV and HPV vaccines. Gynecol Oncol. 2008;109:S15-21. 10.1016/j.ygyno.2008.02.003
  • 13. Markowitz LE, Schiller JT. Human papillomavirus vaccines. J Infect Dis. 2021;224:S367-S378. 10.1093/infdis/jiaa621
  • 14. Li Y, Lundin SK, Li J, et al. Unpacking adverse events and associations post COVID-19 vaccination: a deep dive into vaccine adverse event reporting system data. Expert Rev Vaccines. 2024;23:53-59. 10.1080/14760584.2023.2292203
  • 15. Lu B, Kumar A, Castellsagué X, et al. Efficacy and safety of prophylactic vaccines against cervical HPV infection and diseases among women: a systematic review & meta-analysis. BMC Infect Dis. 2011;11:13. 10.1186/1471-2334-11-13
  • 16. World Health Organization. WHO position on HPV vaccines. Vaccine. 2009;27:7236-7237. 10.1016/j.vaccine.2009.05.019
  • 17. Li Y, Li J, Dang Y, et al. Temporal and spatial analysis of COVID-19 vaccines using reports from vaccine adverse event reporting system. JMIR Prepr. 2023. 10.2196/preprints.51007
  • 18. Li Y, Li J, Dang Y, et al. COVID-19 vaccine adverse events in the United States: a temporal and spatial analysis. JMIR Prepr. 2024;10:e51007. 10.2196/51007
  • 19. Zhang K, Dang Y, Li Y, et al. Impact of climate change on vaccine responses and inequity. Nat Clim Chang. 2024;14:1216-1218. 10.1038/s41558-024-02192-y
  • 20. Iqbal L, Jehan M, Azam S. Advancements in mRNA vaccines: a promising approach for combating human papillomavirus-related cancers. Cancer Control. 2024;31:10732748241238629. 10.1177/10732748241238629
  • 21. Gonçalves CA, Pereira-da-Silva G, Silveira RCCP, et al. Safety, efficacy, and immunogenicity of therapeutic vaccines for patients with high-grade cervical intraepithelial neoplasia (CIN 2/3) associated with human papillomavirus: a systematic review. Cancers (Basel). 2024;16:672. 10.3390/cancers16030672
  • 22. Webster EM, Ahsan MD, Kulkarni A, et al. Building knowledge using a novel web-based intervention to promote HPV vaccination in a diverse, low-income population. Gynecol Oncol. 2024;181:102-109. 10.1016/j.ygyno.2023.12.005
  • 23. Lowy DR, Schiller JT. Reducing HPV-associated cancer globally. Cancer Prev Res (Phila Pa). 2012;5:18-23. 10.1158/1940-6207.CAPR-11-0542
  • 24. Szilagyi PG, Albertin CS, Gurfinkel D, et al. Prevalence and characteristics of HPV vaccine hesitancy among parents of adolescents across the US. Vaccine. 2020;38:6027-6037. 10.1016/j.vaccine.2020.06.074
  • 25. Nguyen KH, Santibanez TA, Stokley S, et al. Parental vaccine hesitancy and its association with adolescent HPV vaccination. Vaccine. 2021;39:2416-2423. 10.1016/j.vaccine.2021.03.048
  • 26. Jennings W, Stoker G, Bunting H, et al. Lack of trust, conspiracy beliefs, and social media use predict COVID-19 vaccine hesitancy. Vaccines (Basel). 2021;9:593. 10.3390/vaccines9060593
  • 27. Gauna F, Verger P, Fressard L, et al. Vaccine hesitancy about the HPV vaccine among French young women and their parents: a telephone survey. BMC Public Health. 2023;23:628. 10.1186/s12889-023-15334-2
  • 28. Adeyanju G. Behavioral insights into vaccine hesitancy determinants in sub-Saharan Africa. Published Online First: September 21, 2022. Accessed March 28, 2024. https://www.db-thueringen.de/receive/dbt_mods_00053424
  • 29. Chen Y, Zulkernine F. BIRD-QA: a BERT-based information retrieval approach to domain specific question answering. In: 2021 IEEE International Conference on Big Data (Big Data). IEEE; 2021:3503-3510.
  • 30. Vanitha G, Sanampudi S, Lakshmi MI. Approaches for question answering systems. Int J Eng Sci Technol. 2011;3:990-995.
  • 31. Thalib I, Widyawan, Soesanti I. A review on question analysis, document retrieval and answer extraction method in question answering system. In: 2020 International Conference on Smart Technology and Applications (ICoSTA). IEEE; 2020:1-5.
  • 32. Tsampos I, Marakakis E. A medical question answering system with NLP and graph database. 2023. Accessed December 11, 2024. https://ceur-ws.org/Vol-3379/HeDAI_2023_paper406.pdf
  • 33. Cairns BL, Nielsen RD, Masanz JJ, et al. The MiPACQ clinical question answering system. AMIA Annu Symp Proc. 2011;2011:171-180.
  • 34. Feng X, Liu Q, Lao C, et al. Design and implementation of automatic question answering system in information retrieval. In: Proceedings of the 7th International Conference on Informatics, Environment, Energy and Applications. Association for Computing Machinery; 2018:207-211.
  • 35. Guo Q, Cao S, Yi Z. A medical question answering system using large language models and knowledge graphs. Int J Intell Sys. 2022;37:8548-8564. 10.1002/int.22955
  • 36. Saeed N, Humaira A, Jhanjhi N. Deep learning based question answering system (survey). Preprints. Published Online First: December 22, 2023. 10.20944/preprints202312.1739.v1
  • 37. Yin J, Chen Z, Zhou K, et al. A deep learning based chatbot for campus psychological therapy. 2019. Accessed December 11, 2024. 10.48550/arXiv.1910.06707
  • 38. Khennouche F, Elmir Y, Himeur Y, et al. Revolutionizing generative pre-traineds: insights and challenges in deploying ChatGPT and generative chatbots for FAQs. Expert Syst Appl. 2024;246:123224. 10.1016/j.eswa.2024.123224
  • 39. Yin P, Duan N, Kao B, et al. Answering questions with complex semantic constraints on open knowledge bases. In: Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. Association for Computing Machinery; 2015:1301-1310.
  • 40. Abdallah A, Piryani B, Jatowt A. Exploring the state of the art in legal QA systems. J Big Data. 2023;10:127. 10.1186/s40537-023-00802-8
  • 41. Li Y, Peng X, Li J, et al. Development of a natural language processing tool to extract acupuncture point location terms. In: 2023 IEEE 11th International Conference on Healthcare Informatics (ICHI). IEEE; 2023:344-351. 10.1109/ICHI57859.2023.00053
  • 42. Li Y, Tao W, Li Z, et al. Artificial intelligence-powered pharmacovigilance: a review of machine and deep learning in clinical text-based adverse drug event detection for benchmark datasets. J Biomed Inform. 2024;152:104621. 10.1016/j.jbi.2024.104621
  • 43. He J, Li F, Li J, et al. Prompt tuning in biomedical relation extraction. J Healthc Inform Res. 2024;8:206-224. 10.1007/s41666-024-00162-9
  • 44. Stroh E, Mathur P. Question answering using deep learning. Accessed December 11, 2024. http://cs224d.stanford.edu/reports/StrohMathur.pdf
  • 45. Li J, Li Y, Pan Y, et al. Mapping vaccine names in clinical trials to vaccine ontology using cascaded fine-tuned domain-specific language models. J Biomed Semantics. 2024;15:14. 10.1186/s13326-024-00318-x
  • 46. Lu P, Mishra S, Xia T, et al. Learn to explain: multimodal reasoning via thought chains for science question answering. 2022. Accessed December 11, 2024. https://proceedings.neurips.cc/paper_files/paper/2022/hash/11332b6b6cf4485b84afadb1352d3a9a-Abstract-Conference.html
  • 47. Lin J, Quan D, Sinha V, et al. What makes a good answer? the role of context in question answering. 2023. Accessed December 11, 2024. https://d1wqtxts1xzle7.cloudfront.net/46870824/download-libre.pdf?1467161548=&response-content-disposition=inline%3B+filename%3DWhat_makes_a_good_answer_The_role_of_con.pdf&Expires=1738301442&Signature=VuIos2gQUJu6m-qMf-9GzhZSm7xIkYY3∼rugfHwSgRHas2FKVjUyMPQdJsl9DEO8cx2z1VUvH6z1EcQVIaIojuCnYxLryQAuiWwIR8trUHoIyKXGf-pzprhRWeF4-1dbplhWHK-LprGjvf9nNzR7K0-pLDDaUXvDYZzQ-i∼∼X79HqeZiF6MdaiRyExvEokNXHBOZTK65ErxGy∼tUUEO6ff5WwLXL-6dYttSJKb362EwsTrWXee9-av41h277ECwcXYyc0pZfKTtn5MErUymrH8CJPmGUPEioFr12o8RLBsr6Ta4Nv1zAfnUOJvcOWrESNAb∼RvEACxyyyTXC1qJNxA__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA
  • 48. Min S, Zhong V, Socher R, et al. Efficient and robust question answering from minimal context over documents. 2018. Accessed December 11, 2024. 10.48550/arXiv.1805.08092
  • 49. Goyal N, Briakou E, Liu A, et al. What else do I need to know? The effect of background information on users’ reliance on QA systems. 2023. Accessed December 11, 2024. 10.48550/arXiv.2305.14331
  • 50. Li Y, Li J, He J, et al. AE-GPT: using large language models to extract adverse events from surveillance reports-a use case with influenza vaccine adverse events. PLoS One. 2024;19:e0300919. 10.1371/journal.pone.0300919
  • 51. Hu Y, Ameer I, Zuo X, et al. Zero-shot clinical entity recognition using ChatGPT. arXiv.org. Published Online First: 2023. Accessed October 3, 2023. 10.48550/arXiv.2303.16416
  • 52. Li Y, Peng X, Li J, et al. Relation extraction using large language models: a case study on acupuncture point locations. J Am Med Inform Assoc. 2024;31:2622-2631. 10.1093/jamia/ocae233
  • 53. Li Y, Wei Q, Chen X, et al. Improving tabular data extraction in scanned laboratory reports using deep learning models. J Biomed Inform. 2024;159:104735. 10.1016/j.jbi.2024.104735
  • 54. Li Y, Viswaroopan D, He W, et al. Improving entity recognition using ensembles of deep learning and fine-tuned large language models: a case study on adverse event extraction from multiple sources. arXiv.org. Published Online First: 2024. Accessed December 11, 2024. 10.48550/arXiv.2406.18049
  • 55. Chang E. Examining GPT-4’s capabilities and enhancement with SocraSynth. 2023. Accessed December 11, 2024. https://www.researchgate.net/profile/Edward-Chang-22/publication/374753069_Examining_GPT-4's_Capabilities_and_Enhancement_with_SocraSynth/links/656a9327b1398a779dced10c/Examining-GPT-4s-Capabilities-and-Enhancement-with-SocraSynth.pdf
  • 56. Kalyan KS. A survey of GPT-3 family large language models including ChatGPT and GPT-4. Nat Lang Process J. 2024;6:100048. 10.1016/j.nlp.2023.100048
  • 57. Al-Hasan TM, Sayed AN, Bensaali F, et al. From traditional recommender systems to GPT-based chatbots: a survey of recent developments and future directions. BDCC. 2024;8:36. 10.3390/bdcc8040036
  • 58. Li Y, Zhao J, Li M, et al. RefAI: a GPT-powered retrieval-augmented generative tool for biomedical literature recommendation and summarization. J Am Med Inform Assoc. 2024;31:2030-2039. 10.1093/jamia/ocae129
  • 59. Li J, Cheng X, Zhao X, et al. HaluEval: a large-scale hallucination evaluation benchmark for large language models. In: Bouamor H, Pino J, Bali K, eds. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics; 2023:6449-6464.
  • 60. McIntosh TR, Liu T, Susnjak T, et al. A culturally sensitive test to evaluate nuanced GPT hallucination. IEEE Trans Artif Intell. 2024;5:2739-2751. 10.1109/TAI.2023.3332837
  • 61. García-Méndez S, de Arriba-Pérez F. Large language models and healthcare alliance: potential and challenges of two representative use cases. Ann Biomed Eng. 2024;52:1928-1931. Published Online First: February 3. 10.1007/s10439-024-03454-8
  • 62. Seenivasan L, Islam M, Kannan G, et al. SurgicalGPT: end-to-end language-vision GPT for visual question answering in surgery. In: Greenspan H, Madabhushi A, Mousavi P, et al., eds. Medical Image Computing and Computer Assisted Intervention—MICCAI 2023. Springer Nature Switzerland; 2023:281-290.
  • 63. Shi D, Chen X, Zhang W, et al. FFA-GPT: an interactive visual question answering system for fundus fluorescein angiography. 2023. Accessed December 11, 2024. https://www.researchsquare.com/article/rs-3307492/v1
  • 64. Amith M, Zhu A, Cunningham R, et al. Early usability assessment of a conversational agent for HPV vaccination. Stud Health Technol Inform. 2019;257:17-23.
  • 65. Amith M, Lin R, Cunningham R, et al. Examining potential usability and health beliefs among young adults using a conversational agent for HPV vaccine counseling. AMIA Jt Summits Transl Sci Proc. 2020;2020:43-52.
  • 66. Koubaa A. GPT-4 vs GPT-3.5: a concise showdown. Preprints. Published Online First: March 24, 2023. 10.20944/preprints202303.0422.v1
  • 67. Nayanam K, Sharma V. Towards architecting research perspective future scope with Chat GPT. 2024. Accessed December 11, 2024. https://www.researchgate.net/profile/Kamal-Nayanam/publication/378704631_TOWARDS_ARCHITECTING_RESEARCH_PERSPECTIVE_FUTURE_SCOPE_WITH_CHAT_GPT/links/65e5cd55c3b52a117015b7e7/TOWARDS-ARCHITECTING-RESEARCH-PERSPECTIVE-FUTURE-SCOPE-WITH-CHAT-GPT.pdf
  • 68. Metrics | Ragas. Accessed March 8, 2024. https://docs.ragas.io/en/latest/concepts/metrics/index.html
  • 69. Goldwire MA, Johnson ST, Abdalla M, et al. Medical misinformation: a primer and recommendations for pharmacists. J Am Coll Clin Pharm. 2023;6:497-511. 10.1002/jac5.1760
  • 70. Indira DE, G R. Comprehending the characteristics and effects of the health misinformation—a study among social media users. ShodhKosh J Vis Perform Arts. 2023;4:147-157. 10.29121/shodhkosh.v4.i1SE.2023.409
  • 71. Arora VM, Madison S, Simpson L. Addressing medical misinformation in the patient-clinician relationship. JAMA. 2020;324:2367-2368. 10.1001/jama.2020.4263
