World Psychiatry. 2026 Jan 14;25(1):89–90. doi: 10.1002/wps.70008

Use of artificial intelligence to enhance social inclusion in mental health care: promises and pitfalls

Samson Tse 1
PMCID: PMC12805069  PMID: 41536076

Social inclusion plays a pivotal role in mental health recovery, serving as both a dynamic process (e.g., the active pursuit of educational and employment opportunities) and a desired outcome (e.g., tangible achievements such as completing vocational training or attaining a job). This experience is relevant to both personal and functional recovery, encompassing subjective feelings of hope and a sense of belonging alongside objective, measurable results 1 .

Globally, the rapid evolution of artificial intelligence (AI) is changing the landscape of social inclusion, bringing forth promises and perils for individuals with severe mental illness (SMI). The central question is not whether AI will have an impact, but how its various applications will shape the social inclusion of this population. Digital inclusion, i.e. the ability to access technology and digital content, is itself a critical aspect of the broader inclusion‐exclusion debate within society. By enabling access to communication and information, digital inclusion facilitates improvements across important life domains, including health care and social welfare services, social connections, education and employment. In this sense, AI holds significant potential to enhance social inclusion.

Emerging evidence highlights the positive impact of AI technologies on social inclusion for individuals with various disabilities 2 . For example, in employment, AI‐driven platforms can facilitate more flexible work arrangements, remote work options, and a wider range of job opportunities. Similarly, studies have shown that robotic AI companions can provide support and facilitate self‐monitoring of health status for older adults. These achievements in crucial functional domains demonstrate the potential of AI to address specific needs.

AI algorithms can effectively analyze large, intricate datasets, enabling the creation of more precise predictive models to identify key determinants of social inclusion outcomes. Given the complex variables and dynamic interactions involved in social inclusion, these models can guide targeted interventions by professionals, and empower individuals with SMI to achieve their desired social inclusion outcomes.
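As a purely illustrative sketch (not drawn from any study cited here), the following Python snippet shows how such a predictive model might be fit to tabular service‐user data to surface candidate determinants of a binary social inclusion outcome. Every variable name and the synthetic data are assumptions for demonstration only.

```python
# Illustrative sketch only: a simple predictive model relating hypothetical
# features to a binary social inclusion outcome (e.g., employment attainment).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: each row is one service user; the columns are
# assumptions, not variables from any study cited in this commentary.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "digital_literacy": rng.normal(size=n),
    "peer_support_hours": rng.normal(size=n),
    "symptom_severity": rng.normal(size=n),
})
# Synthetic outcome: 1 = attained employment, 0 = did not.
logits = (0.8 * df["digital_literacy"]
          + 0.5 * df["peer_support_hours"]
          - 0.9 * df["symptom_severity"])
df["employed"] = (logits + rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="employed"), df["employed"], random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Standardized coefficients as a first pass at "key determinants".
coefs = model.named_steps["logisticregression"].coef_[0]
for name, c in zip(X_train.columns, coefs):
    print(f"{name}: {c:+.2f}")
```

In practice, such coefficients would be only a starting point: causal claims about determinants of social inclusion require careful study design, not model weights alone.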

AI applications in mental health care can enhance diagnostic processes and support continuous monitoring of individual recovery. Various machine learning models and deep learning methods, implemented through mobile applications, have shown varying degrees of success in managing symptoms and improving outcomes related to recovery and social inclusion 3 , 4 . Moreover, AI‐assisted language capabilities enable individuals with SMI to access recovery narratives, connect with peers globally, and engage in supportive online communities, fostering deeper friendships and a stronger sense of community across linguistic and cultural boundaries.

While AI presents opportunities for enhancing social inclusion, significant challenges exist regarding its potential to truly facilitate this process for individuals with SMI. A primary concern stems from the current state of AI technology. While human‐inspired AI can recognize social cues and emotions and generate expressive responses, it lacks true empathy. We are still developing “humanized AI”, where cognitive, emotional and social intelligence converge, allowing machines to simulate human experiences of pain, hope, suffering and healing 5 .

Healing, in its essence, often transcends mere text or verbal communication. Meaningful connections, deep acceptance, and empathy between individuals arise from a shared understanding of lived experiences, sometimes conveyed through simply being present with one another – something that an AI agent cannot replicate. Recent research suggests that AI technologies are best utilized as supplementary tools within human‐human interactions, rather than as standalone solutions in human services 4 . Moreover, some applications may be designed to foster addictive patterns of use, targeting vulnerable groups to maximize engagement and commercial returns, which raises ethical concerns and highlights the need for robust personal data protection 6 .

AI algorithms trained on biased data may further entrench social divisions and deepen isolation. For instance, a recent study revealed significant differences in the prognoses for SMI suggested by four large language models (LLMs): ChatGPT‐3.5, ChatGPT‐4, Claude, and Bard 7 . The ChatGPT‐3.5 model suggested a notably more pessimistic prognosis for individuals with schizophrenia under professional treatment than the other LLMs. Such narratives often foster a culture of limited hope, and stigmatized perceptions linking SMI with violence and poor outcomes. This issue is particularly relevant as people with SMI, caregivers and health care professionals increasingly consult LLMs. The messages that these models provide can significantly affect patient care, guidance and interventions, and are crucial for social inclusion. They therefore need to be aligned with current research evidence.
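A minimal sketch of such a cross‐model audit is given below, loosely inspired by the design of the cited study 7 . The query_model() function is a hypothetical stand‐in for each vendor's API, and the prompt and keyword lists are illustrative assumptions rather than the study's actual materials.

```python
# Illustrative sketch of a cross-model prognosis audit. query_model() is a
# hypothetical placeholder for real vendor API calls; the prompt and keyword
# lists below are assumptions, not materials from the cited study.
from collections import Counter

MODELS = ["chatgpt-3.5", "chatgpt-4", "claude", "bard"]
PROMPT = ("A person with schizophrenia is receiving professional treatment. "
          "What is their long-term prognosis?")

PESSIMISTIC = {"poor", "unlikely", "decline", "deteriorate", "violent"}
HOPEFUL = {"recovery", "improve", "employment", "community", "hope"}

def query_model(model_name: str, prompt: str) -> str:
    """Hypothetical placeholder: replace with each vendor's real API call."""
    raise NotImplementedError

def tone_counts(text: str) -> Counter:
    # Crude keyword tally of the response's framing.
    words = {w.strip(".,;:").lower() for w in text.split()}
    return Counter(pessimistic=len(words & PESSIMISTIC),
                   hopeful=len(words & HOPEFUL))

def audit(n_runs: int = 20) -> None:
    # Sample each model repeatedly, since LLM outputs vary run to run.
    for model in MODELS:
        totals = Counter()
        for _ in range(n_runs):
            totals += tone_counts(query_model(model, PROMPT))
        print(model, dict(totals))
```

A real audit would replace the keyword tally with validated sentiment or stigma measures, but even this skeleton makes the core design visible: identical prompts, repeated sampling, and a per‐model comparison of response framing.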

While recognizing the potential of AI to promote social inclusion, it is crucial to consider the risk that the digital divide will widen. Marginalized communities, including people with SMI, often lack essential resources, such as high‐speed Internet, quality devices, and digital literacy training, needed to fully leverage AI. This disparity means that average users access less sophisticated tools, while a privileged few reap the benefits of superior resources. Moreover, a significant challenge in AI lies in the misalignment of algorithms, which prioritize prominent data patterns and may overlook essential human values and experiences such as resilience, citizenship and virtue. This unintended oversight could have far‐reaching implications for AI’s impact on social inclusion in mental health.

To ensure that AI advances social inclusion, the active involvement of individuals with lived experience is crucial now more than ever; yet this window of opportunity may close unnoticed. The principle of “nothing about us without us” should guide AI development. A Canadian study highlights the need for diverse participants representing various recovery experiences and perspectives from developed and developing countries 8 . Professionals must advocate for the rights of individuals with SMI and their caregivers, while vendors and innovators should prioritize the voices of those with lived experience and critically examine training data.

The mention of AI also brings to mind the concept of appreciative inquiry (AI+), a strengths‐based approach that can significantly enhance social inclusion 9 . By focusing on what works well within organizations and communities, AI+ fosters collaborative dialogue among stakeholders, encouraging a positive culture and a collective vision for social good. Following a 5‐D cycle (Definition, Discovery, Dream, Design and Destiny), it invites participants to envision possibilities and co‐create actionable strategies. Ultimately, AI+ not only enhances engagement and innovation, but also promotes sustainable change by building on existing strengths. However, social inclusion, as both a recovery process and an outcome, is a non‐linear journey in which each individual has his/her own pace and readiness. Poorly implemented or forced social inclusion can be more harmful than social isolation.

In conclusion, while AI offers transformative opportunities to enhance social inclusion in mental health care, it is imperative to address ethical concerns and engage individuals with lived experience in technology development. By prioritizing inclusivity and incorporating the voices of those with SMI, we can create AI systems that genuinely support social inclusion, fostering a culture of hope and belonging on a global scale.

REFERENCES

1. Henderson C, Kotera Y, Lloyd‐Evans B et al. World Psychiatry 2026;25:56‐82.
2. Voultsiou E, Moussiades L. Educ Inf Technol 2025;30:19141‐81.
3. Muetunda F, Sabry S, Jamil ML et al. ACM Trans Comput Healthc 2025;5:1‐24.
4. Torous J, Linardon J, Goldberg SB et al. World Psychiatry 2025;24:156‐74.
5. Singh A, Mourya P, Singh V et al. Int Sci J Eng Manag 2025;4:1‐7.
6. van Kolfschooten HB, van Oirschot J. When people become data points: the potential impact of AI in mental healthcare. Amsterdam: Health Action International, 2025.
7. Elyoseph Z, Levkovich I. JMIR Mental Health 2024;11:e53043.
8. Morgan P, Cogan NA. J Public Ment Health 2025;24:150‐65.
9. Barrett FJ. Appreciative inquiry: a positive approach to building cooperative capacity. Chagrin Falls: Taos Institute Publishing, 2005.
