Frontiers in Digital Health
2026 Mar 19;8:1777667. doi: 10.3389/fdgth.2026.1777667

Chatbots as frontline educators in sexual reproductive health rights: evidence, limitations, and ethical considerations

Chioma Uzoma 1, Goldie Chukwuedo 1, Ejeh Isaac 1, Victory Uzoma 1, Stanley Chinedu Eneh 2,3, Felix Peace Chinyere 4, Oluchi Mmesoma Precious 5, Francisca Ogochukwu Onukansi 6,*
PMCID: PMC13044073  PMID: 41938607

Abstract

Chatbots are increasingly used in digital health to expand access to information and support user engagement. In sexual and reproductive health and rights (SRHR), where stigma, privacy concerns, and health system constraints often limit timely access to accurate information, chatbots have been proposed as scalable tools for delivering education and facilitating service navigation. This perspective examines the role of chatbots as frontline educators in SRHR, drawing insights from existing evidence to assess the effectiveness of chatbots, practical implications, key limitations, and ethical considerations. Available evidence suggests that chatbots can enhance access to SRHR information, support user engagement, and contribute to improved knowledge, confidence, and linkage to services. However, the current evidence base remains uneven, with limited rigorous evaluation of long-term behavioural or health outcomes. Challenges related to accuracy, contextual responsiveness, privacy, equity, and accountability persist, underscoring the need for careful design and governance. Positioning chatbots as complementary components within integrated SRHR strategies, rather than standalone solutions, may offer a pragmatic pathway to harness their potential while safeguarding user rights and health outcomes.

Keywords: artificial intelligence in health care, chatbots, digital health, ethics, data privacy, health education, sexual and reproductive health and rights

Introduction

Digital health technologies are reshaping how health education and information are delivered, with chatbots emerging as one of the most rapidly adopted tools for scalable, user-centric engagement (1). These conversational agents, ranging from simple rule-based systems to advanced artificial intelligence models, can provide on-demand, tailored responses to users’ queries without direct human mediation (2). Chatbots can be broadly grouped into rule-based and AI-enabled systems. Rule-based chatbots operate through predefined scripts and decision trees that trigger fixed responses to specific user inputs, making them suitable for structured education, frequently asked questions, and service signposting but less adaptable to complex or sensitive discussions; for example, Tess delivers psychoeducational content through scripted conversational pathways and has been evaluated as a rule-driven mental health support agent (3, 4). In contrast, AI-enabled chatbots use natural language processing and machine-learning models to interpret free-text input and generate context-responsive replies, allowing more flexible and personalized interaction, as demonstrated by conversational agents such as Woebot and large language model–based systems like ChatGPT. Recent evidence also shows that prompt-tuned AI chatbots can outperform base models in delivering sexual health information, while still remaining prone to occasional errors, highlighting the need for human oversight (5, 6). Some chatbots combine rule-based logic with AI-driven features, forming a hybrid category that leverages structured guidance while allowing limited adaptive interaction, particularly in SRHR contexts (7).
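To make the rule-based pattern described above concrete, the following is a minimal, hypothetical sketch of how a scripted chatbot maps user input to fixed, pre-approved responses. It is illustrative only: the keywords, replies, and function names are placeholders invented for this example, not content from Tess, Woebot, or any deployed SRHR system.

```python
# Minimal sketch of a rule-based chatbot: predefined keywords trigger
# fixed, pre-approved responses, as described in the text. All keywords
# and replies are hypothetical placeholders.

RULES = {
    "contraception": "Scripted, pre-reviewed information about contraceptive options...",
    "testing": "Scripted, pre-reviewed information about STI testing services...",
    "clinic": "Scripted signposting text directing the user to nearby services...",
}

FALLBACK = ("I'm not able to answer that. "
            "Please speak with a qualified health provider.")

def respond(user_input: str) -> str:
    """Return the first scripted reply whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    # A hybrid system would hand off to an AI model or a human here;
    # a purely rule-based system returns a safe fallback.
    return FALLBACK
```

The fixed mapping is what makes such systems predictable and easy to review clinically, but also what limits them to structured education and signposting rather than open-ended or sensitive conversation.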

Within sexual and reproductive health and rights (SRHR), where stigma, privacy concerns, and provider shortages often limit access to timely information, the potential for chatbots to bridge gaps in knowledge and support is especially compelling (8). A synthesis of digital healthcare literature suggests that chatbots can serve as promising interventions for SRHR information and service delivery because they offer anonymous, responsive conversation and linkage to services where appropriate, though they remain underdeveloped in many contexts and require careful integration with broader health systems (8–10). Empirical work in adjacent domains of health education supports the notion that AI-powered chatbots may improve engagement and promote health behaviors (11). Consistently, a systematic review found modest but positive effects of chatbots on health behavior change, including lifestyle and wellness outcomes, and highlighted features such as personalization and 24/7 availability as key drivers of user engagement (12). In the mental health space, evidence indicates that chatbots can reduce distress and support behavior change among adolescents and young adults, though their effectiveness varies by design and context and rarely surpasses traditional intervention approaches (13).

Despite these promising signals, the evidence base remains fragmentary and context-dependent. Studies of SRHR chatbots reveal mixed perceptions among health professionals about their usefulness, with acceptance higher for general information and signposting but lower for complex counselling or emotional support (14). Moreover, questions about accuracy, equity, privacy, and integration into existing health services persist across digital health domains, underscoring the need for critical evaluation of both intended benefits and unintended consequences (15). This perspective therefore examines the evidence on effectiveness, practical limitations, and ethical considerations of deploying chatbots as frontline educators in SRHR, with an emphasis on aligning technological innovation with health outcomes and community needs.

Role of chatbots in sexual and reproductive health rights education

Chatbots in sexual and reproductive health and rights (SRHR) are digital conversational agents designed to simulate human dialogue and deliver tailored health information and guidance. These systems range from rule-based interfaces to advanced AI-driven chatbots capable of interpreting user input and offering context-specific responses (9). In digital health research, conversational agents are recognized for enhancing access to information and supporting patient engagement, particularly where traditional resources are limited or considered taboo (16).

One of the principal roles of chatbots in SRHR education is to offer anonymous and nonjudgmental access to sensitive information (17). Evidence from a realist synthesis indicates that individuals are more likely to engage with SRHR content when they can interact with a chatbot without social or interpersonal judgment, especially in contexts where sexual health discussions are stigmatized. These interactions help users disclose personal concerns that might otherwise remain unspoken in face-to-face consultations, thereby facilitating initial empowerment and knowledge seeking (9).

Closely associated with anonymity is chatbots’ ability to provide responsive and conversational information delivery. Unlike static content, chatbots can present complex SRHR topics, such as contraception options, sexually transmitted infection (STI) prevention, or consent frameworks, in an interactive sequence that adapts to user questions and follow-ups (9, 10). Evidence suggests that such conversational structures can increase user understanding and engagement, making health education more accessible and digestible (9, 18, 19).

Chatbots also play an important role in guiding users through SRHR service pathways. Several implementations integrate educational dialogue with signposting functions, such as directing users to nearby clinics, testing services, counselling resources, or appointment booking systems. Evidence from pilot deployments shows that chatbot interactions can prompt users to take concrete next steps toward care, positioning these tools as bridges between information access and service utilization (20).

Beyond individual information exchange, chatbots are increasingly embedded within broader digital health ecosystems, including mobile health applications, youth-friendly platforms, and clinic-based digital interfaces. Their integration allows SRHR programs to extend reach beyond physical facilities and standard operating hours, while maintaining consistency in messaging and alignment with established health guidelines (1, 9).

Evidence on chatbot effectiveness

Empirical research on the effectiveness of chatbots as educational or behavioral interventions in sexual and reproductive health and rights (SRHR) remains limited, but emerging evidence provides initial insights into their influence on knowledge, attitudes, and behaviors (10). For instance, a mixed-methods study of a pleasure-oriented SRHR chatbot deployed with young adults in Kenya found statistically significant improvements in users’ confidence discussing contraception (P ≤ 0.02), confidence discussing sexual feelings and needs (P ≤ 0.001), and ability to exercise sexual rights (P ≤ 0.01) after engaging with the chatbot. Participants valued the chatbot as a confidential and judgment-free source of information, and qualitative data indicated increased sex-positive communication and safer practices following interaction (21). Comparable digital interventions exist in both high-income and LMIC settings. For example, Ask Roo and Layla's Got You have supported millions of conversations on contraception, STIs, and healthy relationships among adolescents and young adults in the United States, serving as widely cited models of chatbot-mediated SRHR education and signposting (10). Similarly, the SnehAI chatbot in India was designed to support adolescents and young adults with culturally tailored SRHR information and demonstrated strong engagement and accessibility features in program evaluation research (22). In addition to SnehAI, chatbots like Tina and a pleasure-positive bot documented in systematic reviews illustrate implementations in Uganda and other LMICs, offering contraceptive information and behavior support to broader audiences (10).

Chatbot effectiveness has also been noted in other health domains. For instance, a systematic review and meta-analysis of chatbot-delivered interventions on vaccination attitudes found that tailored chatbot interactions led to significant improvements in attitude metrics compared with non-tailored approaches, although effects on actual vaccination intentions were mixed (23). Similarly, comprehensive reviews of AI chatbot interventions in women's health indicate that chatbot use was associated with improvements in psychological and behavioural outcomes, including reductions in anxiety and enhanced engagement with health information, across multiple studies including randomized controlled trials and experimental designs. These findings suggest that chatbot interventions can influence intermediate outcomes relevant to SRHR education (24). More broadly, research on SRHR chatbot interventions across multiple countries highlights that such systems are being implemented in diverse geographic and health-system contexts, including high-income and low-income settings, with varying levels of integration into service delivery and education programs (9).

Notably, pilot implementation data also speak to the feasibility and uptake of chatbot applications. In a pilot rollout of an AI chatbot for sexual and reproductive health information in clinical and community settings, the system recorded 1,749 user queries from 425 unique users over nine months, and around 10% of users went on to schedule a clinic appointment after interacting with the chatbot (20). While this study was not designed as a randomized efficacy trial, it demonstrates real-world engagement and suggests a potential link between chatbot interaction and health service utilization. Similarly, recent evaluations using real-world clinical sexual-health questions show that chatbot performance can vary substantially depending on model design and training, reinforcing the need for careful evaluation of accuracy and safety when deploying SRHR chatbots in practice (6).

Reviews specifically focused on SRHR chatbots emphasize that, although evidence supporting definitive effects on health outcomes is limited, chatbots are promising interventions for delivering responsive information and supporting service access (9, 10). Overall, existing empirical work shows early signals of effectiveness in improving SRHR-related knowledge, attitudes, and service linkage, but also highlights that rigorous evaluation, particularly controlled trials with health behavior or clinical outcomes, remains sparse (23). This calls for more standardized evaluation frameworks, larger sample sizes, and longer follow-up to better assess how chatbot interventions compare to traditional health education strategies and how they influence sustained behavior change.

Practical implications for using chatbots in SRHR programs

Emerging evidence, though limited, points toward several practical considerations for integrating chatbots into sexual and reproductive health and rights (SRHR) programs. First, chatbots can serve as scalable tools for expanding access to reliable SRHR information, particularly in settings where stigma or limited provider availability constrains traditional channels (9, 25). Evaluation studies using real clinical queries found that prompt-tuned conversational agents achieved high levels of correctness and safety in delivering sexual health information, with one AI chatbot outperforming a base language model on key measures including accuracy and provision of necessary information. This suggests that chatbots can supplement existing client education systems by offering immediate, guideline-aligned responses to common SRHR questions (6).

Programs aiming to increase knowledge and empowerment among target populations may also find value in chatbots designed to influence attitudes and self-efficacy. A study from Kenya reported statistically significant improvements among young adults in confidence discussing contraception, sexual rights, and sexual feelings after interacting with a pleasure-oriented SRHR chatbot. Participants described increased access to trustworthy, on-demand information and reported positive shifts in communication with partners, a practical outcome that community-focused education programs can leverage (21). Second, from a service integration perspective, chatbots can function as navigational bridges between information and care access. Real-world feasibility data from a clinic-based AI chatbot showed measurable linkage to services, with approximately one-tenth of users proceeding from chatbot interaction to schedule clinical care. This indicates potential for chatbots to support demand generation for SRHR services and to embed them as pre-visit engagement tools within service delivery pathways (20).

Third, for programs prioritizing quality control and safety, broader systematic evidence from women's health research indicates that chatbot interventions can have measurable effects on health-related outcomes, such as reducing anxiety and improving engagement with care processes. Although this evidence is not SRHR-specific, it underscores the feasibility and potential utility of AI chatbots to support psychosocial and cognitive aspects of health education when coupled with appropriate oversight (24). Broader reviews of chatbot-based health interventions indicate that evidence for sustained behavioural change remains mixed, with effects varying across outcomes and intervention designs. This suggests that SRHR chatbots are likely to be most effective when integrated within multicomponent strategies, such as referral pathways or complementary digital and community-based interventions, rather than deployed as standalone solutions for complex behavioural change (26). Therefore, SRHR programs considering chatbot integration should prioritize domain-specific content adaptation, expert-informed quality assurance, and thoughtful integration within existing service delivery and referral systems.

Key chatbot limitations

Despite their potential, chatbots used in health education face notable limitations related to safety, communication capacity, and evaluation quality. Studies highlight persistent gaps in chatbots' handling of complex or high-risk interactions and point to a broader lack of rigorous randomized trials and standardized evaluation frameworks, which constrains confidence in the sustained impact and generalizability of findings (27, 28). First, a key limitation concerns the accuracy and reliability of chatbot output, particularly in complex or ambiguous health contexts. Evidence indicates that AI chatbots may generate imprecise or inappropriate information and are susceptible to producing plausible but incorrect responses, underscoring the need for domain-specific validation and human oversight to ensure safe use (24). Hallucinations, in which generative conversational models produce confident but fabricated or misleading health information, are a further concern, particularly in responses to sensitive or context-dependent SRHR queries. Such risks are reported in recent evaluations of AI-driven health chatbots, highlighting the importance of safeguards such as curated medical knowledge integration, response verification layers, and escalation pathways to human professionals (6, 29).

Second, privacy, data security, and user trust are important constraints. Empirical analyses of health chatbot applications reveal significant gaps in privacy protections, with many widely used apps lacking transparent privacy policies, offering limited options for users to control data sharing, and providing minimal safeguards against unauthorized access or misuse of sensitive health information. These issues have implications for user trust and ethical compliance, especially in SRHR, where disclosures may involve highly sensitive personal details (30, 31). Concerns about data handling may also influence how users interact with chatbots, including their willingness to disclose sensitive information or continue engagement, thereby affecting both safety and effectiveness (32). In addition, limitations in communication and contextual understanding may hinder chatbot effectiveness.

Third, a related study (33) suggests that chatbots can struggle with complex language, cultural nuance, and the context-specific cues essential to SRHR education, and that insufficient integration of behaviour-change theory and cultural adaptation may limit user engagement and relevance. Importantly, these limitations may differ by chatbot architecture. Rule-based or hybrid chatbots connected to verified health knowledge libraries may provide more consistent factual accuracy but reduced conversational flexibility. In contrast, fully generative AI chatbots enable more natural interaction yet carry higher risks of misinformation, hallucinated responses, or contextual errors unless appropriately constrained. Recent work demonstrates how hybrid designs combining generative dialogue with structured health databases can improve the reliability and safety of outputs (34, 35).
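One way to picture the hybrid constraint described above is an answer layer that will only return content drawn from a curated, expert-reviewed knowledge base, escalating to a human when no verified entry matches. The sketch below is hypothetical: the knowledge-base entries, matching approach, and function names are invented for illustration and do not reflect any system cited in this article.

```python
# Sketch of a hybrid safeguard layer: replies come only from a curated,
# expert-reviewed knowledge base; queries with no sufficiently similar
# verified entry are escalated rather than answered generatively.
# All entries and names are hypothetical placeholders.

from difflib import SequenceMatcher

KNOWLEDGE_BASE = {
    "how do condoms prevent stis": "Reviewed answer text approved by clinical staff...",
    "where can i get contraception": "Reviewed signposting text with local service details...",
}

ESCALATE = "This question needs a human: connecting you to a health worker."

def verified_answer(query: str, threshold: float = 0.6) -> str:
    """Return the best-matching reviewed answer, or escalate when no
    knowledge-base entry is similar enough to the user's query."""
    query = query.lower().strip()
    best_key, best_score = None, 0.0
    for key in KNOWLEDGE_BASE:
        score = SequenceMatcher(None, query, key).ratio()
        if score > best_score:
            best_key, best_score = key, score
    return KNOWLEDGE_BASE[best_key] if best_score >= threshold else ESCALATE
```

In a production system the string-similarity match would typically be replaced by semantic retrieval, and a generative model might rephrase (but never invent) the verified content; the design point is that the factual payload stays under expert control.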

Fourth, accessibility and equity concerns further limit the reach of chatbot solutions. Uneven access to smartphones, reliable internet connectivity, and digital literacy across and within populations may exacerbate existing disparities in health information access (28). Finally, the ethical and accountability landscape for chatbot deployment remains underdeveloped. Questions persist about responsibility for harm when machine-generated advice leads to adverse outcomes, and few established regulatory frameworks clearly delineate accountability for errors or misinformation, as argued by Kooli (36). These concerns highlight the need for explicit safety safeguards in chatbot deployment, including transparency about chatbot limitations, monitoring for harmful responses, audit trails, and clear referral mechanisms to qualified providers when risks are detected (36).

Ethical and privacy considerations

Beyond technical limitations, the use of chatbots in SRHR raises important ethical and privacy concerns. Reviews of health chatbots indicate that privacy and data security are often insufficiently addressed, with many studies failing to report clear safeguards, data governance practices, or compliance with privacy regulations, highlighting a gap between technological deployment and responsible practice (37). Empirical evidence also highlights the role of user trust and privacy perceptions in chatbot adoption. Studies in digital health show that concerns about data privacy can reduce engagement and willingness to disclose sensitive information (38), a dynamic that is particularly salient in SRHR contexts involving intimate personal details.

Beyond privacy, the ethics literature on AI in health care further highlights accountability, transparency, and preservation of human-centred care as key concerns. Unclear responsibility for AI-generated guidance, limited transparency, and over-reliance on automated systems pose ethical risks, underscoring the need for clear disclosure and oversight (39, 40), particularly in sensitive SRHR contexts. Additional work in digital health ethics reviews underscores the need for procedural safeguards around informed consent and data governance. Users should be made aware of how their information is handled, what data is collected, and how it might be used. Transparent communication about data practices, coupled with robust encryption and adherence to relevant legal standards, is foundational to maintaining confidentiality and autonomy in sensitive health domains (41, 42). Additionally, user experience research indicates that trust in AI health chatbots extends beyond technical performance to include emotional and relational factors. While users may value accessibility, concerns about relational distance and privacy persist, highlighting the importance of ethical design that incorporates transparency, empathy, and respect for user autonomy (43).

Future directions

Future research and practice on chatbots as frontline educators in sexual and reproductive health and rights (SRHR) should move beyond proof-of-concept deployments toward rigorous evaluation, ethical governance, and context-responsive design. Although early evidence suggests potential benefits for access, engagement, and education, multiple reviews highlight that the current evidence base remains fragmented and methodologically uneven, limiting conclusions about long-term effectiveness and scalability (26). First, there is a clear need for robust effectiveness studies using standardized outcome measures. Systematic reviews of health chatbot interventions consistently note a shortage of randomized controlled trials and longitudinal designs, with many studies relying on short-term self-reported outcomes or usability metrics (27). Future SRHR-focused research should prioritize experimental and quasi-experimental designs that assess not only knowledge gains but also sustained effects on attitudes, service uptake, and informed decision-making, while accounting for contextual and demographic variation.

Second, theoretical grounding and intervention transparency should be strengthened. Reviews of digital and conversational health interventions indicate that many chatbot systems lack explicit integration of established behaviour-change or educational theories, making it difficult to interpret mechanisms of action or replicate successful models (32). Embedding SRHR chatbots within clearly articulated conceptual frameworks, such as health literacy, empowerment, or social cognitive models, would enhance interpretability and guide more systematic development. Additionally, future directions should emphasize equity-centred and context-sensitive design, particularly for low- and middle-income settings. Evidence from global digital health research suggests that digital interventions may inadvertently reinforce existing inequalities when access, language, cultural norms, and digital literacy are insufficiently addressed (27, 44).

Notably, advancing the field will require clearer governance, accountability, and regulatory frameworks for AI-enabled SRHR tools. Ethics-focused analyses of AI in health care emphasize the absence of clear responsibility structures when automated systems deliver health information, particularly in sensitive domains (40). Future work should explore governance models that define accountability for content accuracy, data protection, and harm mitigation, alongside transparent disclosure of AI use and limitations to users. Finally, future research should investigate hybrid models that integrate chatbots with human-led SRHR services. Exploring models where chatbots support triage, education, and navigation while maintaining clear pathways to human support may offer a pragmatic balance between scalability and ethical responsibility.

Conclusion

Chatbots show promise as complementary tools for delivering SRHR education and facilitating access to services, particularly in contexts shaped by stigma and limited provider availability. While early evidence indicates benefits for information access and user engagement, significant limitations and ethical considerations remain. Advancing their role in SRHR will require rigorous evaluation, equity-centred design, and robust governance to ensure that technological innovation aligns with rights-based and health system priorities.

Funding Statement

The author(s) declared that financial support was not received for this work and/or its publication.

Footnotes

Edited by: Ann Borda, The University of Melbourne, Australia

Reviewed by: Nkosi Nkosi Botha, University of Cape Coast, Ghana

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author contributions

CU: Conceptualization, Supervision, Writing – original draft, Writing – review & editing. GC: Writing – original draft, Writing – review & editing. EI: Writing – original draft, Writing – review & editing. VU: Writing – original draft, Writing – review & editing. SE: Investigation, Methodology, Validation, Writing – original draft, Writing – review & editing. FC: Writing – original draft, Writing – review & editing. OP: Writing – original draft, Writing – review & editing. FO: Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence, and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Barreda M, Cantarero-Prieto D, Coca D, Delgado A, Lanza-León P, Lera J, et al. Transforming healthcare with chatbots: uses and applications—a scoping review. Digit Health. (2025) 11:20552076251319174. 10.1177/20552076251319174
2. Mariani MM, Hashemi N, Wirtz J. Artificial intelligence empowered conversational agents: a systematic literature review and research agenda. J Bus Res. (2023) 161:113838. 10.1016/j.jbusres.2023.113838
3. Fulmer R, Joerin A, Gentile B, Lakerink L, Rauws M. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: randomized controlled trial. JMIR Ment Health. (2018) 5(4):e9782. 10.2196/mental.9782
4. Dhinagaran DA, Martinengo L, Ho MHR, Joty S, Kowatsch T, Atun R, et al. Designing, developing, evaluating, and implementing a smartphone-delivered, rule-based conversational agent (DISCOVER): development of a conceptual framework. JMIR MHealth UHealth. (2022) 10(10):e38740. 10.2196/38740
5. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. (2017) 4(2):e7785. 10.2196/mental.7785
6. Latt PM, Aung ET, Htaik K, Soe NN, Lee D, King AJ, et al. Evaluation of artificial intelligence (AI) chatbots for providing sexual health information: a consensus study using real-world clinical queries. BMC Public Health. (2025) 25(1):1788. 10.1186/s12889-025-22933-8
7. Brennan K. Chatbots in sexual and reproductive health: bridging the divide in accessibility and equity. Contraception. (2025) 0:111199. 10.1016/j.contraception.2025.111199
8. Tamrat T, Malpani R, Mengistu S, Kovacs A, Zhao Y, Kapilashrami A, et al. Beyond no harm: advancing research on artificial intelligence for sexual and reproductive health and rights. NPJ Womens Health. (2025) 3(1):65. 10.1038/s44294-025-00113-8
9. Mills R, Mangone ER, Lesh N, Mohan D, Baraitser P. Chatbots to improve sexual and reproductive health: realist synthesis. J Med Internet Res. (2023) 25:e46761. 10.2196/46761
10. Mills R, Mangone ER, Lesh N, Jayal G, Mohan D, Baraitser P. Chatbots that deliver contraceptive support: systematic review. J Med Internet Res. (2024) 26(1):e46758. 10.2196/46758
11. Chen T, Li Q, Zhao D, Zhang W, Chen Y, Yang J, et al. Utilisation of AI-driven chatbots for perioperative health information seeking: a descriptive qualitative study of orthopaedic patients and family members. BMJ Open. (2025) 15(9):e099824. 10.1136/bmjopen-2025-099824
12. Aggarwal A, Tam CC, Wu D, Li X, Qiao S. Artificial intelligence-based chatbots for promoting health behavioral changes: systematic review. J Med Internet Res. (2023) 25:e40789. 10.2196/40789
13. Feng X, Tian L, Ho GWK, Yorke J, Hui V. The effectiveness of AI chatbots in alleviating mental distress and promoting health behaviors among adolescents and young adults: systematic review and meta-analysis. J Med Internet Res. (2025) 27(1):e79850. 10.2196/79850
14. Nadarzynski T, Lunt A, Knights N, Bayley J, Llewellyn C. “But can chatbots understand sex?” Attitudes towards artificial intelligence chatbots amongst sexual and reproductive health professionals: an exploratory mixed-methods study. Int J STD AIDS. (2023) 34(11):809–16. 10.1177/09564624231180777
15. Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: a narrative review. Heliyon. (2024) 10(4):e26297. 10.1016/j.heliyon.2024.e26297
16. Tudor Car L, Dhinagaran DA, Kyaw BM, Kowatsch T, Joty S, Theng YL, et al. Conversational agents in health care: scoping review and conceptual analysis. J Med Internet Res. (2020) 22(8):e17158. 10.2196/17158
17. Mondal H, Mondal S. The role of large language model chatbots in sexual education: an unmet need of research. J Psychosexual Health. (2025) 7(2):120–7. 10.1177/26318318251323714
18. Nadarzynski T, Bayley J, Llewellyn C, Kidsley S, Graham CA. Acceptability of artificial intelligence (AI)-enabled chatbots, video consultations and live webchats as online platforms for sexual health advice. BMJ Sex Reprod Health. (2020) 46(3):210–7. 10.1136/bmjsrh-2018-200271
19. Balaji D, He L, Giani S, Bosse T, Wiers R, de Bruijn GJ. Effectiveness and acceptability of conversational agents for sexual health promotion: a systematic review and meta-analysis. Sex Health. (2022) 19(5):391–405. 10.1071/SH22016
20. Bull S, Hood S, Mumby S, Hendrickson A, Silvasstar J, Salyers A. Feasibility of using an artificially intelligent chatbot to increase access to information and sexual and reproductive health services. Digit Health. (2024) 10:20552076241308994. 10.1177/20552076241308994
21. Njogu J, Jaworski G, Oduor C, Chea A, Malmqvist A, Rothschild CW. Assessing acceptability and effectiveness of a pleasure-oriented sexual and reproductive health chatbot in Kenya: an exploratory mixed-methods study. Sex Reprod Health Matters. (2023) 31(4):2269008. 10.1080/26410397.2023.2269008
  • 22.Wang H, Gupta S, Singhal A, Muttreja P, Singh S, Sharma P, et al. An artificial intelligence chatbot for young people’s sexual and reproductive health in India (SnehAI): instrumental case study. J Med Internet Res. (2022) 24(1):e29969. 10.2196/29969 PubMed PMID: 34982034; PubMed Central PMCID: PMC8764609. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Chan PSF, Fang Y, Cheung DH, Zhang Q, Sun F, Mo PKH, et al. Effectiveness of chatbots in increasing uptake, intention, and attitudes related to any type of vaccination: a systematic review and meta-analysis. Appl Psychol Health Well-Being. (2024) 16(4):2567–97. 10.1111/aphw.12564 [DOI] [PubMed] [Google Scholar]
  • 24.Kim HK. The effects of artificial intelligence chatbots on women’s health: a systematic review and meta-analysis. Healthc Basel Switz. (2024) 12(5):534. 10.3390/healthcare12050534 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Deva R, Ramani D, Divate T, Jalota S, Ismail A. “Kya family planning after marriage hoti hai?”: integrating cultural sensitivity in an LLM chatbot for reproductive health. In: Yamashita N, editor. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery; (2025). p. 1–23. (CHI ‘25). Available online at: 10.1145/3706598.3713362 (Accessed December 18, 2025). [DOI] [Google Scholar]
  • 26.Singh B, Olds T, Brinsley J, Dumuid D, Virgara R, Matricciani L, et al. Systematic review and meta-analysis of the effectiveness of chatbots on lifestyle behaviours. NPJ Digit Med. (2023) 6(1):118. 10.1038/s41746-023-00856-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Xue J, Zhang B, Zhao Y, Zhang Q, Zheng C, Jiang J, et al. Evaluation of the current state of chatbots for digital health: scoping review. J Med Internet Res. (2023) 25:e47217. 10.2196/47217 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Laymouna M, Ma Y, Lessard D, Schuster T, Engler K, Lebouché B. Roles, users, benefits, and limitations of chatbots in health care: rapid review. J Med Internet Res. (2024) 26:e56930. 10.2196/56930 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Kim Y, Jeong H, Chen S, Li SS, Lu M, Alhamoud K, et al. Medical Hallucination in Foundation Models and Their Impact on Healthcare. New York: medRxiv; (2025). p. 2025.02.28.25323115. 10.1101/2025.02.28.25323115 Available online at: https://www.medrxiv.org/content/10.1101/2025.02.28.25323115v1 (Accessed February 20, 2026). [DOI] [Google Scholar]
  • 30.Wairimu S, Iwaya LH. On the security and privacy of AI-based mobile health chatbots. arXiv (2025). Available online at: http://arxiv.org/abs/2511.12377 (Accessed December 19, 2025).
  • 31.Yener R, Chen GH, Gumusel E, Bashir M. Can I trust this chatbot? Assessing user privacy in AI-healthcare chatbot applications. arXiv (2025). Available online at: http://arxiv.org/abs/2509.14581 (Accessed December 19, 2025).
  • 32.Gumusel E. A literature review of user privacy concerns in conversational chatbots: a social informatics approach: an annual review of information science and technology (ARIST) paper. J Assoc Inf Sci Technol. (2025) 76(1):121–54. 10.1002/asi.24898 [DOI] [Google Scholar]
  • 33.Choi К(, Fitzek S. User and provider experiences with health education chatbots: qualitative systematic review. JMIR Hum Factors. (2025) 12(1):e60205. 10.2196/60205 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Feng S, Li X (Leah), Wake AN. Engaging artificial intelligence (AI)-based chatbots in digital health: a systematic review. PLOS Digit Health. (2026) 5(2):e0001201. 10.1371/journal.pdig.0001201 PubMed PMID: 41678558; PubMed Central PMCID: PMC12900317. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Du Q, Ren Y, Meng Z-L, He H, Meng S. The efficacy of rule-based versus large language model-based chatbots in alleviating symptoms of depression and anxiety: systematic review and meta-analysis. J Med Internet Res. (2025) 27:e78186. 10.2196/78186 PubMed PMID: 41343858; PubMed Central PMCID: PMC12677872. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Kooli C. Chatbots in education and research: a critical examination of ethical implications and solutions. Sustainability. (2023) 15(7):5614. 10.3390/su15075614 [DOI] [Google Scholar]
  • 37.May R, Denecke K. Security, privacy, and healthcare-related conversational agents: a scoping review. Inform Health Soc Care. (2022) 47(2):194–210. 10.1080/17538157.2021.1983578 [DOI] [PubMed] [Google Scholar]
  • 38.Ellis JR, Dellavalle NS, Hamer MK, Akerson M, Andazola M, Moore AA, et al. The halo effect: perceptions of information privacy among healthcare chatbot users. J Am Geriatr Soc. (2025) 73(5):1472–83. 10.1111/jgs.19393 [DOI] [PubMed] [Google Scholar]
  • 39.van Kolfschooten H, Gonçalves J, Orchard N, Figueroa C. AI Chatbots for promoting healthy habits: legal, ethical, and societal considerations. Digit Health. (2025) 11:20552076251390004. 10.1177/20552076251390004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Wang C, Liu S, Yang H, Guo J, Wu Y, Liu J. Ethical considerations of using ChatGPT in health care. J Med Internet Res. (2023) 25(1):e48009. 10.2196/48009 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Badawy W, Zinhom H, Shaban M. Navigating ethical considerations in the use of artificial intelligence for patient care: a systematic review. Int Nurs Rev. (2025) 72(3):e13059. 10.1111/inr.13059 [DOI] [PubMed] [Google Scholar]
  • 42.Radanliev P. AI Ethics: integrating transparency, fairness, and privacy in AI development. Appl Artif Intell. (2025) 39(1):2463722. 10.1080/08839514.2025.2463722 [DOI] [Google Scholar]
  • 43.Rahayu B, Subiyanto P. Patients’ lived experiences of trust and privacy in AI health chatbots for chronic care. J Digit Health Innov Med Technol. (2025) 1(9):381–91. [Google Scholar]
  • 44.Bitomsky L, Nißen M, Kowatsch T. Equity by design principles for digital health interventions. Int J Equity Health. (2025) 24:271. 10.1186/s12939-025-02645-6 [DOI] [PMC free article] [PubMed] [Google Scholar]

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
