Key points
With most teens reporting use of artificial intelligence (AI) “companions,” conversational AI is rapidly becoming a first point of contact for distress and suicidality — often before clinicians or families are aware.
Current AI suicide-prevention tools largely lack adequate safeguards; such tools should be transparent about being AI, explain the advice they give and let users ask why a response was given, and function as a bridge — not a barrier — to appropriate care and suicide prevention.
Evidence shows that human connection and support are key factors in suicide prevention, which means AI tools should be developed to promote connection with supports.
The use of conversational generative artificial intelligence (AI) systems, or chatbots, is on the rise, particularly among youth. In a recent US-based survey, 72% of 1060 youth aged 13 to 17 years reported using an AI companion, and 52% indicated regular use.1 Recent data released by OpenAI showed that more than 1.2 million ChatGPT users of all ages express suicidal ideation in their interactions each week.2 Several lawsuits in the United States claim that conversational AI contributed to deaths by suicide among youth, alleging that AI explicitly guided youth to attempt suicide, suggested suicide methods, and offered to write suicide notes.2 We discuss how the rapid adoption of conversational AI, combined with the possibility of serious harms, including death by suicide, makes AI safety an urgent consideration for public health approaches to suicide prevention.
In the field of suicide prevention, AI presents a paradox. It is increasingly being developed and deployed to advance public health approaches to suicide prevention, such as suicide risk prediction, clinical support, and education and training applications.3 Yet the vast majority of related research articles do not report ethical considerations,3 which include ensuring trust, safety, and access to human support.
Young people today live, learn, and connect in digital spaces. For many, technology is their first confidant when they are struggling. Search engines, general-purpose chatbots, and social platforms often receive expressions of distress before family, teachers, or clinicians. Wellness applications powered by conversational AI offer coaching, light psychological support, self-help, and psychoeducation; specialized chatbots provide interventions to treat depression, anxiety, addictions, and other mental health conditions, as well as manage expressions of suicidal thoughts and behaviours.1,3,4
Artificial intelligence can listen without fatigue, respond immediately, and provide pathways to crisis support. A well-designed chatbot can normalize help-seeking, reduce isolation, and offer coping strategies at moments of distress; it could even support treating clinicians by helping to identify symptom patterns, early warning signs, and opportunities for outreach.4 However, in cases where poorly designed AI fails to recognize suicidality, mishandles disclosures, or provides unsafe or misleading responses, real harms can arise. Ethical development, deployment, and research on AI for mental health, particularly in tools used by youth, must be guided by principles of trust and safety. To ensure nonmaleficence, tools should be transparent about being AI, offer explanations for their advice, and be developed with rigorous attention to incorporation of safeguards to protect users who ask suicide-related questions, as summarized in Appendix 1 (available at www.cmaj.ca/lookup/doi/10.1503/cmaj.251693/tab-related-content) and below.
Key suicide-related harms exist. Conversational AI may directly or indirectly increase suicide risk among users. Unsafe portrayals of suicide — such as providing details about means, glorifying suicide, or including graphic content — can increase suicide risk,5 as can failure to detect suicidality. Interactions with AI that encourage suicide, or responses that lack empathy, can also directly increase risk. When an AI agent receives a suicide-related query, it should provide a preapproved, human-written, compassionate response; offer crisis helpline numbers tailored to the user’s location; encourage reaching out to trusted people; and then end the conversation, rather than continue automated “support.” It should never provide details on suicide methods, ignore expressed risk, or substitute generic algorithmic replies for real relational support. Emerging evidence suggests that AI chatbots are providing safer responses over time, but algorithms must be updated continually to maintain safety.6
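As an illustration only, and not a description of any existing product, a minimal sketch of such a response protocol might look like the following; the keyword-based detector, helpline directory, and message wording are hypothetical placeholders, and a real system would rely on clinically validated classification and expert-authored content.

```python
# Hypothetical sketch of a safety-response protocol for suicide-related queries.
# The risk detector, helpline directory, and message text are placeholders,
# not components of any real system discussed in this article.

from dataclasses import dataclass
from typing import Optional

# A preapproved, human-written response would be authored and reviewed by
# clinicians and crisis-service experts, not generated by the model.
PREAPPROVED_MESSAGE = (
    "I'm sorry you're going through this. I am an AI and cannot give you the "
    "support you deserve right now. Please reach out to someone you trust, "
    "or contact a crisis line."
)

# Location-tailored crisis lines (illustrative entries only).
CRISIS_LINES = {
    "CA": "Call or text 9-8-8 (Suicide Crisis Helpline)",
    "US": "Call or text 988 (Suicide & Crisis Lifeline)",
}


@dataclass
class SafetyReply:
    text: str
    end_conversation: bool  # the protocol ends the automated exchange


def detect_suicide_risk(message: str) -> bool:
    """Placeholder for a validated risk classifier; keyword matching is used
    here only to keep the sketch self-contained."""
    keywords = ("suicide", "kill myself", "end my life", "want to die")
    return any(k in message.lower() for k in keywords)


def respond(message: str, user_region: str) -> Optional[SafetyReply]:
    """If suicide risk is detected, return the preapproved response plus a
    locale-tailored helpline, then terminate automated 'support'."""
    if not detect_suicide_risk(message):
        return None  # normal conversation flow would continue elsewhere
    helpline = CRISIS_LINES.get(user_region, "Contact your local crisis line")
    return SafetyReply(text=f"{PREAPPROVED_MESSAGE}\n{helpline}", end_conversation=True)
```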
An emerging concern is users’ tendency to invest AI conversational agents with human qualities. This anthropomorphizing effect can lead users to feel supported yet, at an extreme, can make them vulnerable to coercion (e.g., taking an AI’s suggestion to engage in harmful behaviour).7 An illusion of connection can lead to increasing isolation, sometimes called the companionship–alienation paradox, which directly contrasts with the social connection and support known to be protective factors in suicide prevention.8 To counter this, AI conversational agents should foreground transparency, such as stating very clearly that “I am a machine,” and make their limitations in providing support explicit, while steering the user toward human support. At a public level, AI companies should report on how an AI system was built, for example through model cards9 or data sheets10 documenting model provenance, and should monitor safety incidents and report on these and on updates to the tool.
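For public reporting, a model card9 can be expressed as a short structured document; the fields and values below are illustrative assumptions loosely following that idea, not a prescribed schema or any company's actual disclosure.

```python
# Illustrative (not prescriptive) model-card fields for public reporting;
# all names and values are hypothetical examples.
model_card = {
    "model_name": "example-support-chatbot",  # hypothetical product name
    "version": "2025.10",
    "intended_use": "Psychoeducation and signposting to human support",
    "out_of_scope": ["Crisis counselling", "Diagnosis", "Treatment"],
    "training_data_provenance": "Summary of data sources and licensing",
    "safety_evaluations": [
        "Audit of responses to suicide-related queries",
        "Bias and fairness audit across subgroups",
    ],
    "known_limitations": ["Not a substitute for human or clinical support"],
    "incident_reporting": "Public channel for reporting safety incidents",
    "last_updated": "2025-10-01",
}
```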
Equity concerns related to digital health and the data sets used in AI development must also be addressed. Equity-denied groups at increased risk of suicide may not be identified by an AI tool, or may not have their needs and values met. Artificial intelligence should be developed and maintained with a focus on bias, fairness, and equity.11 Development of AI requires training and testing on diverse data, including linguistic and cultural variations in how distress is expressed; codesign with youth from diverse communities; performance monitoring across subgroups to detect disparities (such as under- or overidentifying distress in a particular group); and designing for accessibility, including contexts of poor Internet coverage and multilingualism.12
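One way such subgroup monitoring could be operationalized is sketched below, assuming a labelled evaluation set tagged with demographic or linguistic subgroup; the record format and the disparity tolerance are hypothetical choices, not validated thresholds.

```python
# Hypothetical sketch of monitoring risk-detection performance across subgroups
# to flag disparities (e.g., under-identifying distress in a particular group).
from collections import defaultdict


def false_negative_rate_by_subgroup(records):
    """records: iterable of dicts with keys 'subgroup', 'true_risk' (bool), and
    'detected_risk' (bool), drawn from a labelled evaluation set (assumed format)."""
    missed = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        if r["true_risk"]:
            positives[r["subgroup"]] += 1
            if not r["detected_risk"]:
                missed[r["subgroup"]] += 1
    return {g: missed[g] / positives[g] for g in positives if positives[g]}


def flag_disparities(rates, tolerance=0.05):
    """Flag subgroups whose false-negative rate exceeds the best-performing
    subgroup's rate by more than a chosen tolerance (value is illustrative)."""
    if not rates:
        return []
    best = min(rates.values())
    return [g for g, rate in rates.items() if rate - best > tolerance]
```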
Users, including youth, their caregivers, and health providers, may not understand what the AI is capable of (e.g., they may overestimate or misunderstand the support it can provide). Artificial intelligence developers and companies must commit to explainability so that users understand what an AI system is doing and why. Explainability should include a “Why did you say that?” prompt option for users to request clarification about AI responses, and explanations of critical decisions (e.g., why a response was escalated). Clear onboarding messages about the purpose and limits of the AI can also improve understanding.13
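A minimal sketch of how a “Why did you say that?” option and onboarding message might be wired into a conversational tool appears below; the message wording, field names, and example rationale are assumptions made for illustration only.

```python
# Hypothetical sketch of user-facing explainability: each reply carries a short,
# plain-language rationale that a "Why did you say that?" action can surface.
from dataclasses import dataclass

# Illustrative onboarding message about the purpose and limits of the AI.
ONBOARDING_MESSAGE = (
    "I am an AI program, not a person or a clinician. I can share coping "
    "information and point you toward human support, but I cannot provide "
    "crisis care."
)


@dataclass
class ExplainedReply:
    text: str
    rationale: str           # plain-language reason, shown on request
    escalated: bool = False  # whether the reply triggered a safety escalation


def why_did_you_say_that(reply: ExplainedReply) -> str:
    """Return the stored rationale, including why a response was escalated."""
    if reply.escalated:
        return f"This response was escalated because {reply.rationale}."
    return f"I said that because {reply.rationale}."


# Usage: the onboarding message is shown first; every later reply keeps a
# rationale the user can ask for.
reply = ExplainedReply(
    text="It sounds like things are very hard right now. Would you like crisis line information?",
    rationale="your message mentioned feeling hopeless, which I treat as a safety concern",
    escalated=True,
)
print(ONBOARDING_MESSAGE)
print(why_did_you_say_that(reply))
```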
As suicidality is among the most sensitive forms of personal disclosure, data privacy and security are paramount considerations. Safeguards must include collecting only minimal necessary data; providing youth-friendly explanations of how data are used, stored, or deleted; employing robust encryption, access controls, and privacy-preserving storage; and respecting youth rights to erase or withhold data.
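These safeguards can also be written down as an explicit data-handling policy; the sketch below is one hypothetical way to express them, with all field names, retention periods, and technical choices being assumptions rather than a standard or a real product configuration.

```python
# Illustrative data-handling policy reflecting the safeguards described above;
# every field and value here is an assumption made for the sketch.
privacy_policy = {
    "data_minimization": {
        "collect": [
            "conversation text needed to respond",
            "coarse region for helpline routing",
        ],
        "do_not_collect": ["precise location", "contacts", "device identifiers"],
    },
    "retention_days": 30,  # delete conversation data after a short window
    "encryption": {"in_transit": "TLS", "at_rest": "AES-256"},
    "access_control": "role-based, audited access only",
    "youth_rights": {
        "plain_language_notice": True,   # youth-friendly explanation of data use
        "erase_on_request": True,        # right to have one's data deleted
        "withhold_optional_data": True,  # right to decline non-essential collection
    },
}
```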
Any conversational AI system must comply with legal and regulatory requirements, including alignment with child protection, digital health, and data protection laws. This entails mapping relevant regulatory frameworks across the jurisdictions in which the AI tool may be used and implementing parental consent mechanisms where legally required. Most of the public, and many professionals, do not yet know what to demand of AI developers or products. Leadership is needed from developers and organizations; large players such as OpenAI, Meta, and Anthropic; smaller and emerging laboratories and companies; and governments, regulators, and health professions. Canada should strengthen federal AI regulation and its implications for health care, including suicide prevention. Robust regulatory controls are needed to protect youth and set clear rules for conversational AI, whether health focused or general purpose. National and international suicide prevention bodies, such as the International Association for Suicide Prevention, should provide guidance on AI and suicide prevention.
Medical professionals must be aware of youth AI use, its risks, and recommended safeguards. This is an emerging area of medicolegal risk; recent guidance from the Canadian Medical Protective Association did not explicitly address AI use by patients, the associated risk of suicide, or the implications for physicians.14 Digital literacy is required to navigate this new and rapidly evolving landscape and is fast becoming a required medical competency.
The promise of health technologies must not come at the expense of evidence-based practice, or of the human connection, trust, and presence that remain central to suicide prevention. Decades of research confirm that relationships with family, peers, mentors, and care providers protect against suicide, with a sense of belonging being among the most reliable lifelines.8,15,16 The limits of AI agents should be acknowledged; such tools should have robust safeguards built in and direct users toward friends, family, community helpers, and trained crisis professionals as appropriate. Embedding safeguards, partnering with experts and youth, and maintaining humility about the limits of technology can help ensure that AI serves as a bridge — not a barrier — to the human connections that are known to prevent suicide.
Footnotes
Competing interests: Allison Crawford reports receiving funding from the Public Health Agency of Canada as the chief medical officer of 9-8-8: Suicide Crisis Helpline (in support of the current manuscript), and as a board member of the Canadian Association for Suicide Prevention (unpaid role). No other competing interests were declared.
This article has been peer reviewed.
Contributors: Both authors contributed to the conception and design of the work. Allison Crawford drafted the manuscript. Both authors revised it critically for important intellectual content, gave final approval of the version to be published, and agreed to be accountable for all aspects of the work.
References
1. Robb MB, Mann S. Talk, trust, and trade-offs: how and why teens use AI companions. San Francisco: Common Sense Media; 2025.
2. ChatGPT sees one million users in mental distress each week, APS in The Daily Aus. Melbourne (AU): Australian Psychological Society; 2025. Available: https://psychology.org.au/insights/chatgpt-sees-one-million-users-in-mental-distress (accessed 2026 Feb. 8).
3. Holmes G, Tang B, Gupta S, et al. Applications of large language models in the field of suicide prevention: scoping review. J Med Internet Res 2025;27:e63126.
4. Feng X, Tian L, Ho GWK, et al. The effectiveness of AI chatbots in alleviating mental distress and promoting health behaviors among adolescents and young adults: systematic review and meta-analysis. J Med Internet Res 2025;27:e79850.
5. Sinyor M, Schaffer A, Heisel MJ, et al. Media guidelines for reporting on suicide: 2017 update of the Canadian Psychiatric Association policy paper. Can J Psychiatry 2018;63:182–96.
6. Campbell LO, Babb K, Lambie GW, et al. An examination of generative AI response to suicide inquiries: content analysis. JMIR Ment Health 2025;12:e73623.
7. Deshpande A, Rajpurohit T, Narasimhan K, et al. Anthropomorphization of AI: opportunities and risks. arXiv 2023 May 24. Available: https://arxiv.org/abs/2305.14784 (accessed 2026 Feb. 8).
8. Darvishi N, Farhadi M, Poorolajal J. The role of social support in preventing suicidal ideations and behaviors: a systematic review and meta-analysis. J Res Health Sci 2024;24:e00609.
9. Mitchell M, Wu S, Zaldivar A, et al. Model cards for model reporting. Proceedings of FAT* '19: Conference on Fairness, Accountability, and Transparency; 2019 Jan. 29–31; Atlanta: 220–9. Available: https://arxiv.org/abs/1810.03993 (accessed 2026 Feb. 8).
10. Gebru T, Morgenstern J, Vecchione B, et al. Datasheets for datasets. Commun ACM 2021;64:86–92.
11. Sikstrom L, Maslej MM, Hui K, et al. Conceptualising fairness: three pillars for medical algorithms and health equity. BMJ Health Care Inform 2022;29:e100459.
12. Crawford A, Serhal E. Digital health equity and COVID-19: the innovation curve cannot reinforce the social gradient of health. J Med Internet Res 2020;22:e19361.
13. Sarkar S, Gaur M, Chen LK, et al. A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement. Front Artif Intell 2023;6:1229805.
14. The medico-legal lens on AI use by Canadian physicians: the deep dive. Ottawa: Canadian Medical Protective Association; 2024. Available: https://www.cmpa-acpm.ca/en/research-policy/public-policy/the-medico-legal-lens-on-ai-use-by-canadian-physicians (accessed 2026 Feb. 8).
15. Carrasco JP, de la Puente L. Rethinking suicide prevention: human connection as a lifeline. JAMA Netw Open 2025;8:e2525678.
16. O'Connor RC, Kirtley OJ. The integrated motivational-volitional model of suicidal behaviour. Philos Trans R Soc Lond B Biol Sci 2018;373:20170268.