Abstract
Mental health disorders contribute significantly to the global burden of disease, affecting quality of life and causing disability. These challenges are compounded by inequitable access to timely and effective mental health services, particularly in low-resource settings. Recently, artificial intelligence (AI) has emerged as a transformative tool in mental healthcare, offering novel approaches to enhance diagnosis, personalize treatment, and support continuous patient monitoring. This review explores the current landscape of non-generative AI applications in mental health, focusing on core methodologies such as machine learning, deep learning, and natural language processing. We conducted a narrative review of literature published between January 2019 and June 2025, screened records in duplicate, and applied thematic synthesis across diagnosis, therapy support, and monitoring. These techniques show promise in improving diagnostic accuracy, enabling adaptive and scalable digital therapy delivery systems, and facilitating real-time mental health risk prediction through the analysis of multimodal data. Across the reviewed studies, most reported gains in therapy personalization and diagnostic accuracy; however, significant challenges persist owing to limited dataset diversity, algorithmic bias, and a lack of clinical validation. Ethical considerations and the need for transparent, explainable, and clinician-trustworthy AI are increasingly recognized as critical to successful implementation. Overall, AI-driven methods have strong potential to improve accessibility and effectiveness in mental health treatment, provided future studies prioritize equity, interpretability, and clinical relevance.
Keywords: Artificial intelligence, mental health, digital health, explainable AI, multimodal data analysis
Introduction
Anxiety disorders, bipolar disorder, schizophrenia, and post-traumatic stress disorder (PTSD) rank among the most significant and persistent global public health challenges. As reported by the World Health Organization (WHO), nearly one in eight individuals globally is affected by a mental health condition, contributing substantially to the overall burden of disease and to disability-adjusted life years (DALYs). 1 Despite increased global awareness and the implementation of mental health policies, service provision remains critically under-resourced, particularly in low- and middle-income countries (LMICs). Key barriers include a severe shortage of trained mental health professionals, persistent stigma surrounding mental illness, and the lack of coordinated and continuous care delivery systems. Within this challenging landscape, the advent of artificial intelligence (AI) technologies offers a promising avenue for the transformation of mental healthcare delivery, with the potential to enhance early diagnosis, optimize therapeutic interventions, and support long-term patient monitoring and engagement.2,3
AI refers broadly to computational systems that emulate aspects of human intelligence, such as learning, reasoning, and decision making. In the realm of healthcare, AI has already demonstrated transformative potential in areas such as radiology, genomics, drug discovery, and personalized treatment planning. Mental health, though more complex and context-dependent, is increasingly becoming a focus of AI research and application. 4
A key driver behind the growing role of AI in mental health is the digitization of behavioral data. Individuals today generate massive volumes of data through smartphones, wearable devices, social media platforms, and online interactions. 5 These digital footprints can provide valuable insights into psychological states, behavioral patterns, and social functioning. AI systems can harness such data streams to detect mental health risks in real time, often before clinical symptoms become apparent. 6
Another compelling advantage of AI in this domain is its ability to uncover latent patterns in complex data sets. Mental health conditions are inherently multifactorial, influenced by genetic, neurobiological, psychological, and environmental variables. 7 AI models, particularly those using machine learning (ML) and deep learning (DL), are capable of integrating diverse data modalities from text and speech to neuroimaging and genomic profiles to identify predictive markers and stratify patient subgroups. 8
The integration of AI into mental health care is not without challenges. One major concern is the quality and representativeness of training data. Mental health data sets often suffer from small sample sizes, demographic imbalances, and subjective labeling. 9 Moreover, mental health data is highly sensitive, raising important questions about privacy, consent, and data governance. The opaque nature of many AI algorithms, often described as black boxes, also poses ethical and clinical concerns. Without clear interpretability, clinicians may hesitate to rely on AI-generated outputs, especially when they contradict clinical intuition or patient narratives. 10
Another layer of complexity is the social and cultural context of mental health. Emotional expression, symptom presentation, and help-seeking behavior can vary widely across cultures. AI models developed in one context may fail to perform accurately in another, highlighting the need for culturally aware and inclusive AI development. Furthermore, the human elements of mental health care, such as empathy, rapport, and therapeutic alliance, cannot be easily replicated by machines. AI should not be viewed as a replacement for human clinicians but as a complementary tool to enhance care delivery. 11
The COVID-19 pandemic further accelerated interest in AI-enabled mental health solutions. Lockdowns, social isolation, and economic uncertainty led to a surge in mental health issues, while simultaneously disrupting access to in-person care. 12 Digital health platforms saw unprecedented growth during this period, and AI-based tools played a pivotal role in expanding access and managing patient loads. As we move into a post-pandemic world, the role of AI in supporting mental health systems is expected to become even more pronounced. 13
Given this backdrop, the objective of this review is to provide a comprehensive overview of how AI is currently being applied in the field of mental health, with a focus on three primary domains: diagnosis, therapy support, and patient monitoring. The paper begins by outlining the core AI technologies and methods commonly used in mental health applications. It then examines key use cases, including early detection of depression, AI chatbots for therapy, and wearable-based mood prediction. The review also delves into ongoing challenges (ethical, technical, and societal) and concludes with a discussion of emerging trends and future research directions.
In choosing a narrative review format, this article does not aim to exhaustively catalog every study or quantitative result. Instead, it synthesizes themes and developments from the current literature to provide a broad yet insightful picture of the AI-mental health intersection. Such a perspective can be particularly valuable for researchers, clinicians, developers, and policymakers who seek to understand both the opportunities and limitations of deploying AI in this sensitive and vital domain.
Scope and purpose of this review
Figure 1 outlines the aims and scope of this narrative review, which explores the evolving role of AI in mental health care, focusing on how AI technologies are being applied to enhance diagnosis, therapy, and patient monitoring. Emphasizing non-generative AI methods such as ML, DL, and natural language processing (NLP), the review synthesizes current research and practical implementations to provide a clear picture of ongoing advancements and challenges.
Figure 1.
An overview of the review’s scope and purpose in artificial intelligence (AI) for mental health.
Scope
The scope of the review covers:
Core AI methodologies relevant to mental health applications.
Clinical use cases including diagnostic support, AI-based therapy tools (e.g. chatbots), and real-time monitoring through mobile and wearable technologies.
Types of data used in AI models, such as text, voice, facial expressions, and behavioral patterns.
Ethical, technical, and cultural challenges, including privacy, bias, and interoperability.
Future research directions, such as explainable AI (XAI) and culturally adaptive systems.
Purpose
Provide interdisciplinary readers with a foundational understanding of AI in mental health.
Highlight promising applications and real-world use cases.
Encourage responsible and inclusive AI development.
Support informed decision-making by clinicians, researchers, and policymakers.
Contributions to the literature
This review makes several key contributions to the growing body of literature at the intersection of AI and mental health care:
Interdisciplinary synthesis: It integrates insights from computer science, clinical psychiatry, and digital health to provide a holistic overview of AI applications in mental health, making it accessible to both technical and non-technical audiences.
Focus on practical applications: Unlike many technically focused reviews, this paper emphasizes clinically relevant use cases, including AI-based diagnosis, therapy support tools (e.g. chatbots), and behavioral monitoring, thereby bridging the gap between research and real-world implementation.
Highlighting non-generative AI methods: The review specifically concentrates on non-generative AI techniques, such as ML, DL, and NLP, providing clarity on their distinct roles and limitations in mental health contexts.
Ethical and cultural contextualization: It addresses not only the technical potential but also the ethical, social, and cultural considerations necessary for the responsible deployment of AI in mental healthcare settings, particularly in diverse populations.
Forward-looking perspective: The article outlines emerging research directions, such as multimodal emotion recognition and culturally adaptive AI systems, offering a roadmap for future interdisciplinary collaboration.
How this review adds value: Recent 2025 systematic reviews focus on specific modalities or settings. Our goal is complementary: a cross-modal, application-focused synthesis (diagnosis, therapy workflows, and in-the-wild monitoring/ecological momentary assessment (EMA)) with deployment realities (governance, interoperability, and model stewardship). Where systematic reviews exist, we acknowledge and contrast them, then extend them with practical guidance for implementation.
Positioning and novelty
Several 2025 systematic reviews synthesize AI in mental health. Dehbozorgi et al. 14 survey broad AI applications and ethical issues; Cruz-Gonzalez et al. 2 structure evidence across diagnosis, monitoring, and intervention up to February 2024; Wang et al. 15 focus on generative AI (large language models (LLMs)). In contrast, our narrative review: (i) concentrates on non-generative AI (ML/DL/NLP) with governance and implementation as core through-lines; (ii) is updated to June 2025; (iii) adds a representativeness analysis (high-income country (HIC) vs. LMIC; age groups) and a deployment readiness checklist (data-centric practices, external validation, model reporting, and fairness/privacy); and (iv) includes a modality-outcome mapping table to support clinical translation. Table 1 compares recent reviews on AI in mental health with this narrative review.
Table 1.
Comparison of recent reviews on AI in mental health and this narrative review.
| Reference | AI scope | Methodological approach | Application domains | Governance/implementation emphasis | Distinctive outputs/notes |
|---|---|---|---|---|---|
| Dehbozorgi et al. 14 | Broad AI (applications and challenges) | Systematic review | Diagnosis, monitoring, and intervention | Privacy, transparency, and ethics discussed alongside applications | Field-level scan of AI uses and challenges in mental health |
| Cruz-Gonzalez et al. 2 | Predominantly non-generative AI | Systematic review | Diagnosis, monitoring, and intervention (structured across three pillars) | Calls for stronger transparency/interpretability and more diverse data sets | Domain-based synthesis linking methods to clinical use |
| Wang et al. 15 | Generative AI (LLMs) | Systematic review | Assessment/diagnosis, counseling/chatbots, and use cases | Emphasis on trustworthiness, safety, and cultural competence | GenAI-specific lens and roadmap of evidence gaps |
| Thakkar et al. 16 | Broad AI with positive mental-health emphasis | Narrative review | Awareness, support, and interventions; ML/DL/NLP overview | Ethics and cultural sensitivity highlighted | Conceptual overview focused on well-being/positive health |
| Alhuwaydi 17 | Broad AI in mental healthcare | Narrative review | Screening, diagnosis, and treatment; challenges/limitations | Privacy/ethics and implementation considerations | Practice-oriented discussion of roles and risks |
| Lee et al. 18 | Broad AI for clinical care | Review/perspective (clinician-facing synthesis) | Diagnosis, prognosis, and treatment; pathway integration | Barriers discussed: bias, privacy, and interpretability | Translational overview for clinicians |
| This work | Non-generative AI (ML/DL/NLP) with governance and implementation emphasis | Narrative review | Diagnosis/screening; therapy and clinical decision support | Practice-focused: data-centric development, external validation, model reporting, and fairness/privacy | Deployment-readiness checklist and modality-outcome mapping to support clinical translation |
AI: artificial intelligence; ML: machine learning; DL: deep learning; NLP: natural language processing; LLMs: large language models.
Methods
Design. Narrative review, reported with SANRA 2.0 in mind.
Sources. PubMed/MEDLINE, Scopus, Web of Science Core Collection, PsycINFO, and IEEE Xplore, plus Google Scholar (top 200 results by relevance) and backward citation checks.
Time window. 1 January 2019–30 June 2025.
Search strategy. ("mental health" OR depression OR anxiety OR suicid*) AND ("machine learning" OR "deep learning" OR "natural language processing") NOT (generative OR LLM)
Eligibility criteria
Population/setting. Human participants in clinical or community settings; no age restriction.
Intervention/exposure. Non-generative AI (e.g. machine learning, deep learning, classical NLP, and multimodal fusion) applied to diagnosis/screening, therapy or clinical decision support, monitoring/relapse detection, or phenotyping in mental health.
Comparators. Any or none (e.g. usual care and clinician judgment).
Outcomes. Model performance (e.g. area under the curve (AUC) and F1), clinical/functional outcomes, usability/acceptability, implementation, interpretability, and privacy/fairness.
Included study types. Primary empirical studies (randomized and non-randomized trials; cohort/case-control; cross-sectional diagnostic/prognostic validation; and feasibility/implementation).
Excluded. Studies focused solely on generative models; simulation-only or animal-only studies; narrative/opinion pieces without primary data; and non-English full texts.
Extraction. We extracted domain, population, data type (text, speech, sensors, and electronic health records (EHRs)), method (ML/DL/NLP), task, sample size, validation approach, metrics, and limitations.
Synthesis. Thematic synthesis. Studies were coded and grouped into diagnosis, therapy support, and monitoring/risk. Conflicting findings were handled by examining data set quality, setting, and methods; when results disagreed, both are reported.
AI techniques in mental health
The integration of AI into mental health care has opened new avenues for diagnosis, treatment, and monitoring. Key AI techniques, namely ML, DL, NLP, and multimodal AI, have shown promise in enhancing mental health services. This section reviews these techniques, highlighting recent advancements and applications.
Machine learning
ML involves algorithms that learn from data to make predictions or decisions. In mental health, ML has been utilized for:
Symptom classification and risk prediction
One of the most impactful applications of ML in mental health is its ability to classify psychiatric symptoms and predict an individual’s risk for developing mental health disorders. 19 By leveraging structured clinical data, behavioral patterns, and even unstructured digital traces from social media or smartphone usage, ML models can identify early warning signs of conditions such as depression, anxiety, and suicidal ideation. 20 Recent studies have demonstrated the efficacy of supervised learning algorithms such as support vector machines (SVMs), random forests, and logistic regression in detecting depressive symptoms with high accuracy. 21 Other studies report robust supervised-learning performance for depression detection and severity estimation, including SVMs (e.g. electroencephalogram (EEG)-based classification), random forests (e.g. biomarkers plus clinical features), and logistic regression baselines in EHR prediction pipelines. 22
ML models are increasingly being used in real-time digital platforms to enhance continuous mental health monitoring. A notable example is a system that integrated sentiment analysis with mobile-based ML algorithms to detect early signs of depression and anxiety from users’ social media activity and text messages. 23 The system, trained on longitudinal data, provided automated alerts to mental health professionals when risk thresholds were exceeded, enabling timely outreach. These developments suggest that AI can act not only as a diagnostic aid but also as a preventative tool capable of bridging the gap between symptom emergence and clinical diagnosis, especially in underserved populations or high-stigma settings where traditional mental health services are less accessible. 24
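To make the pipeline concrete, the sketch below shows the kind of supervised workflow these studies describe: tabular features (here random placeholders standing in for questionnaire and behavioral signals) are fed to logistic regression and random forest classifiers and scored by cross-validated AUC. All data and names are illustrative, not drawn from any cited study.

```python
# Minimal sketch of a supervised screening classifier of the kind the
# studies above describe. The feature matrix X (e.g. questionnaire items,
# activity summaries) and binary risk labels y are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                        # 200 people, 12 signals
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)  # placeholder labels

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]:
    # 5-fold cross-validated AUC: the metric most reviewed papers report.
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.2f}")
```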
Treatment response prediction
One of the most promising applications of AI in mental health is the prediction of treatment response, which aims to identify how individual patients will respond to specific therapeutic interventions before those treatments are fully administered. By leveraging ML algorithms trained on historical clinical data, symptom profiles, and biometric information, AI systems can forecast the likely efficacy of antidepressants, psychotherapy, or other interventions. For instance, a recent study demonstrated that combining early EEG recordings with ML models could predict antidepressant treatment response with over 70% accuracy within the first week of therapy. Such predictive capabilities enable clinicians to tailor treatment plans to individual patients, potentially shortening recovery times and reducing adverse effects. 25
These AI-based predictive models often utilize a variety of features, including genetic data, patient demographics, baseline symptom severity, and digital phenotyping data from smartphones or wearables. DL models have also been employed to capture nonlinear interactions among these variables, improving predictive performance. 26 However, while the results are encouraging, real-world clinical implementation remains limited due to challenges such as data set heterogeneity, lack of external validation, and ethical concerns regarding algorithmic transparency. 27
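The external-validation gap noted above can be illustrated with a minimal sketch: a response predictor is trained on one synthetic cohort and then evaluated, untouched, on a second cohort with shifted feature distributions. The features loosely stand in for baseline EEG band power plus clinical covariates; nothing here reproduces any cited model.

```python
# Sketch of the external-validation step this literature often lacks:
# train on one cohort ("site A"), evaluate unchanged on another ("site B").
# All data are synthetic; site B has deliberately shifted features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X_a, X_b = rng.normal(size=(300, 20)), rng.normal(loc=0.3, size=(120, 20))
y_a = (X_a[:, :3].sum(axis=1) > 0).astype(int)  # "responder" labels, site A
y_b = (X_b[:, :3].sum(axis=1) > 0).astype(int)  # site B, covariate shift

model = GradientBoostingClassifier(random_state=1).fit(X_a, y_a)
print("internal AUC:", roc_auc_score(y_a, model.predict_proba(X_a)[:, 1]))
print("external AUC:", roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]))
# The external AUC is typically lower; that gap is the optimism
# single-site studies cannot detect.
```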
Multimodal data integration
Multimodal data integration refers to the process of combining multiple data sources such as speech, facial expressions, physiological signals, and textual input to create a more comprehensive and accurate representation of a person’s mental state. 28 This approach is particularly valuable in mental health, where symptoms are often expressed through diverse behavioral and physiological cues. 29 For example, recent work demonstrated that integrating audio, video, and text modalities significantly improved the accuracy of depression detection compared to single-modality models. 30
Multimodal integration plays a critical role in remote and real-time mental health assessment systems. In teletherapy or virtual consultations, AI systems can synthesize facial micro-expressions, vocal tone, and verbal content to infer the patient’s emotional state, thus supporting clinicians in making more informed evaluations even in the absence of in-person interaction. 31 Wearable devices and smartphone sensors further expand the scope of multimodal data by providing continuous, passive monitoring of behavioral patterns such as movement, sleep, and social interaction. Platforms like RADAR-MDD utilize multimodal fusion techniques to detect mood changes and relapse risk in patients with depression. As these systems evolve, they promise to offer scalable, personalized, and context-aware mental health support while also raising important considerations around data privacy, interoperability, and ethical deployment. 29
Deep learning
DL, a subset of ML, utilizes neural networks with multiple layers to model complex patterns in data. In mental health, DL has been applied to:
Speech and audio analysis
Speech and audio signals are rich sources of behavioral and emotional information, making them valuable for the early detection and monitoring of mental health conditions. 32 Recent advances in DL, particularly the use of recurrent neural networks (RNNs) and long short-term memory (LSTM) architectures, have enabled the automatic analysis of prosodic features such as pitch, tone, speaking rate, and pauses. One recent study developed a speech-based depression detection system using a hybrid LSTM-convolutional neural network (CNN) framework, achieving high classification accuracy across both clinical and non-clinical data sets. Such models allow for passive, non-intrusive mental health screening through regular voice recordings, offering a scalable alternative to traditional diagnostic interviews. 33
Beyond depression detection, speech analysis is also being applied to evaluate the severity and progression of disorders such as schizophrenia, bipolar disorder, and PTSD. Temporal patterns in spontaneous speech and the semantic coherence of verbal responses can provide indicators of cognitive disorganization or emotional distress. 34 Moreover, mobile applications are increasingly incorporating voice analysis tools for real-time assessment, offering continuous support outside clinical settings. As these technologies mature, integrating speech-based models into telepsychiatry and mobile health (mHealth) platforms holds immense potential to improve access, continuity of care, and early intervention in mental health treatment. 35
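A minimal PyTorch sketch of the hybrid CNN-LSTM pattern described above follows: 1D convolutions summarize local acoustic frames, an LSTM models their temporal evolution, and a linear head outputs a depression-risk logit. Layer sizes and the 40-dimensional MFCC-like input are assumptions for illustration, not the architecture of the cited study.

```python
# Illustrative CNN-LSTM for speech-based screening; all sizes are assumed.
import torch
import torch.nn as nn

class SpeechDepressionNet(nn.Module):
    def __init__(self, n_features=40, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(            # local acoustic texture
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)  # temporal dynamics
        self.head = nn.Linear(hidden, 1)      # logit for "at risk" class

    def forward(self, x):                     # x: (batch, time, n_features)
        z = self.conv(x.transpose(1, 2))      # -> (batch, 64, time/2)
        out, _ = self.lstm(z.transpose(1, 2))
        return self.head(out[:, -1])          # score from last time step

# Example: a batch of 8 clips, 300 frames of 40 MFCC-like features each.
logits = SpeechDepressionNet()(torch.randn(8, 300, 40))
print(logits.shape)   # torch.Size([8, 1])
```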
Facial expression recognition (FER)
FER has emerged as a powerful tool in the assessment of mental health conditions, particularly for detecting subtle affective cues that may be overlooked during traditional clinical evaluations. 36 By leveraging computer vision and DL techniques, especially CNNs, FER systems can automatically analyze facial muscle movements, micro-expressions, and gaze patterns to infer emotional states. 37 Recent advancements have enabled these systems to capture nonverbal indicators of depression, anxiety, and emotional dysregulation with increasing accuracy. CNN-based models trained on annotated facial data sets can identify sadness, apathy, or avoidance behaviors in patients, which are common in mood disorders. These systems are particularly valuable in telehealth settings, where clinicians may have limited visual access to patients, and in longitudinal studies requiring non-intrusive, automated monitoring. 38
FER has been integrated into multimodal AI systems to enhance diagnostic robustness by combining facial cues with vocal tone, linguistic features, and physiological data. This approach has shown promise in improving the early detection of disorders such as major depressive disorder (MDD) and schizophrenia. 39 For example, a study by Li et al. 37 utilized a multimodal framework combining FER and speech emotion recognition to detect depressive symptoms in real time, achieving over 85% accuracy in clinical simulations.
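The following sketch shows the shape of a small CNN-based FER model: a stack of convolutions maps a 48x48 grayscale face crop (the layout of public data sets such as FER2013) to scores over seven basic expression classes. The architecture is illustrative only, not any cited system.

```python
# Toy CNN for facial expression recognition on 48x48 grayscale crops.
import torch
import torch.nn as nn

fer_model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
    nn.Linear(128, 7),   # e.g. 7 basic expression classes
)

faces = torch.randn(4, 1, 48, 48)   # a batch of 4 face crops
print(fer_model(faces).shape)       # torch.Size([4, 7])
```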
Multimodal data integration
Multimodal data integration in mental health AI, shown in Figure 2, refers to the combined analysis of diverse data streams such as facial expressions, speech patterns, text input, physiological signals, and behavioral metrics to capture a comprehensive picture of an individual’s mental state. Unlike unimodal approaches, which analyze a single type of input (e.g. text or audio), multimodal systems can detect complex, subtle patterns by correlating signals across modalities. 40 For example, an individual’s voice tone, facial micro-expressions, and word choices during a therapy session may each suggest mild distress, but their combined analysis can significantly increase the reliability and sensitivity of detecting early signs of depression or anxiety. 41 Recent research has demonstrated that such integrative methods improve the performance of emotion recognition and mood prediction systems, particularly in naturalistic settings where single-modality data may be noisy or incomplete. 42
Figure 2.
Integration of multimodal data for mental health.
Advancements in DL architectures, particularly transformer-based models and attention mechanisms, have enabled more sophisticated fusion of multimodal inputs. These models dynamically weigh the importance of each input channel (e.g. visual, auditory, and textual) to make real-time inferences.43,44 Applications include telepsychiatry platforms that automatically assess affective states during video consultations and wearable-integrated apps that track movement, heart rate, and verbal interactions to identify changes in mood or stress levels. 45
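A compact sketch of the attention-weighted fusion idea is given below: each modality encoder is assumed to emit a fixed-size embedding, and a learned score decides how much each channel contributes to the fused affect prediction. The encoders themselves are replaced by random placeholders.

```python
# Illustrative attention fusion across modality embeddings (audio, video,
# text). Dimensions, class count, and inputs are assumptions for the sketch.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=128, n_classes=3):
        super().__init__()
        self.score = nn.Linear(dim, 1)        # relevance score per modality
        self.head = nn.Linear(dim, n_classes)

    def forward(self, embeddings):            # (batch, n_modalities, dim)
        weights = torch.softmax(self.score(embeddings), dim=1)
        fused = (weights * embeddings).sum(dim=1)   # weighted modality sum
        return self.head(fused), weights.squeeze(-1)

# Three modality encoders are assumed to produce 128-d embeddings;
# here they are random placeholders.
batch = torch.randn(8, 3, 128)
logits, weights = AttentionFusion()(batch)
print(logits.shape, weights[0])   # per-sample modality weights sum to 1
```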
Natural language processing
NLP enables machines to understand and interpret human language. In mental health, NLP has been instrumental in:
Sentiment analysis and mood detection
Sentiment analysis and mood detection are among the most widely used NLP techniques in mental health research and practice. These methods analyze textual data ranging from social media posts and text messages to clinical notes and therapy transcripts to identify emotional tone and infer psychological states. 46 By detecting linguistic patterns associated with sadness, hopelessness, anxiety, or agitation, sentiment analysis tools can serve as early warning systems for mental health conditions such as depression and anxiety. 47 ML-based sentiment classifiers trained on large data sets of labeled mental health content have achieved impressive accuracy in detecting depressive symptoms from Twitter and Reddit posts. Recent studies have demonstrated that sentiment scores, when tracked over time, can reveal mood fluctuations and correlate strongly with clinically validated depression scales, suggesting their potential for continuous digital mental health monitoring. 48
In clinical settings, sentiment analysis is also being integrated into electronic health records (EHRs) and patient-reported outcome platforms to support therapists and psychiatrists. 49 By automatically highlighting emotionally charged language or sudden shifts in tone, these tools can assist clinicians in identifying emerging psychological crises or treatment response. Moreover, recent advancements in transformer-based language models, such as BERT and RoBERTa, have further improved the granularity and contextual accuracy of mood detection. 50 These models can capture subtleties in expression, such as sarcasm, hesitation, or emotional masking, that traditional NLP methods often miss. As sentiment analysis continues to evolve, its integration with multimodal systems (e.g. voice tone and facial expression) holds promise for building more comprehensive and real-time mental health assessment tools that can augment both self-monitoring applications and professional care. 51
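As a mechanical illustration, the snippet below scores the sentiment of short messages with the Hugging Face pipeline API and a general-purpose sentiment model; a clinical tool would substitute a validated, domain-specific model and track scores over time rather than judging single messages.

```python
# Sentiment scoring over a message stream with a general-purpose model.
# This only illustrates the mechanics of tracking tone; it is not a
# clinically validated screening tool.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # default SST-2-style model

messages = [
    "Had a nice walk with a friend today.",
    "I can't sleep and nothing feels worth doing anymore.",
]
for msg, result in zip(messages, classifier(messages)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {msg}")
# Scores aggregated over days or weeks, not single messages, are what
# the studies above correlate with depression scales.
```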
Chatbot development for therapy support
In recent years, AI-powered chatbots have emerged as scalable tools for delivering mental health support, particularly cognitive behavioral therapy (CBT)-based interventions. These systems leverage NLP and ML algorithms to simulate therapeutic conversations, offering users real-time emotional assistance, psychoeducation, and mood tracking. Popular examples include Woebot, Wysa, and Tess, which have been deployed globally to support individuals experiencing depression, anxiety, or stress. These chatbots use conversational agents trained on large data sets of therapy interactions and psychological scripts to provide structured, empathetic responses.
Several studies have demonstrated the efficacy and acceptability of such chatbots in both clinical and non-clinical populations. 52 A randomized controlled trial found that users engaging with a CBT-based chatbot reported significant reductions in anxiety symptoms over a four-week period compared to control groups. Moreover, chatbots offer a degree of anonymity and accessibility that lowers barriers for individuals hesitant to seek traditional therapy due to stigma or logistical constraints. 53 While these tools are not intended to replace professional care, they serve as valuable adjuncts, especially in contexts with limited mental health resources. Ongoing advancements in NLP, emotion recognition, and personalization are further enhancing the responsiveness and clinical utility of AI-driven mental health chatbots. 54
Clinical documentation analysis
Clinical documentation, including progress notes, discharge summaries, and psychiatric evaluations, contains a wealth of unstructured data that is critical for understanding patient history and mental health trajectories. NLP techniques have emerged as powerful tools for extracting clinically relevant information from these narratives. 55 By parsing linguistic patterns, identifying named entities (e.g. symptoms, medications, and risk factors), and mapping text to standardized ontologies such as SNOMED CT or ICD-10, NLP models can convert qualitative records into structured data suitable for analysis. This capability supports mental health professionals in diagnosing conditions, tracking symptom progression, and tailoring treatment plans with greater precision. 56
Recent advancements have demonstrated the effectiveness of NLP in automating and enhancing clinical workflows in mental health settings. NLP-based models could accurately identify markers of depression, anxiety, and suicidal ideation from clinician notes, improving both screening efficiency and diagnostic accuracy. 57
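A toy version of this extraction step is sketched below using spaCy's PhraseMatcher: symptom and medication mentions in an invented note are matched against a small hand-built lexicon. Production systems instead use trained clinical NER models and map matches to ontologies such as SNOMED CT.

```python
# Rule-based extraction of symptom/medication mentions from a clinical
# note. The lexicon and note are invented; real pipelines use trained
# clinical NER models and ontology mapping (e.g. SNOMED CT).
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("SYMPTOM", [nlp(t) for t in ["low mood", "insomnia", "anhedonia"]])
matcher.add("MEDICATION", [nlp(t) for t in ["sertraline", "fluoxetine"]])

note = nlp("Patient reports low mood and insomnia; continued on sertraline.")
for match_id, start, end in matcher(note):
    print(nlp.vocab.strings[match_id], "->", note[start:end].text)
```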
Multimodal AI
Multimodal AI systems integrate data from multiple sources such as text, audio, video, and physiological signals to provide a comprehensive assessment of mental health. Applications include:
Emotion recognition
Emotion recognition is a critical capability within AI-enabled mental health systems, aiming to identify an individual’s emotional state based on observable cues such as facial expressions, vocal tone, and linguistic patterns. 39 Traditional approaches relying on single modalities like text-only sentiment analysis often fail to capture the full spectrum of human emotion, especially in complex mental health contexts where symptoms may be subtle or atypical. 40 Multimodal AI overcomes this limitation by integrating inputs from multiple channels, resulting in a more robust and nuanced understanding of affective states. When a patient’s facial expression suggests neutrality but their speech tone and word choice indicate sadness or distress, a multimodal model can reconcile these signals to more accurately classify the emotional state. 58
Wearable technology integration
Wearable technologies have emerged as powerful tools for real-time monitoring of mental health states by continuously collecting physiological and behavioral data. Devices such as smartwatches, fitness bands, and biosensors can track parameters like heart rate variability, sleep quality, physical activity, skin conductance, and even voice tone. These data streams offer insights into emotional regulation, stress levels, and depressive symptomatology. 59
Moreover, the fusion of wearable sensor data facilitates the development of personalized and adaptive interventions that align with the user’s daily life and environment. Figure 3 highlights how NLP serves as a cornerstone in processing multimodal inputs from wearables, facilitating key functionalities like natural language understanding, classification, and interactive dialogue systems. Recent studies show that AI-enhanced wearable systems can outperform traditional assessment tools by identifying subtle, moment-to-moment fluctuations in mood and behavior that might be missed during routine clinical visits. 60 As wearable devices become increasingly affordable and ubiquitous, their role in population-level mental health monitoring is likely to grow, offering scalable and non-invasive means to support early detection, self-management, and remote care delivery. 61
Figure 3.
Core components of NLP in wearable AI systems for mental health. NLP: natural language processing; AI: artificial intelligence.
Telepsychiatry enhancements
Remote consultations can limit the nonverbal cues available to clinicians, and multimodal AI has emerged as a powerful tool to bridge this gap by integrating audio, video, and text-based data in real time. These systems analyze vocal features (such as tone, pitch, and speech rate), facial expressions, and linguistic patterns simultaneously, allowing clinicians to access a richer and more objective set of indicators. AI can flag subtle speech hesitations or facial micro-expressions associated with anxiety or depression, signals that might be missed in a standard video call. Such capabilities not only augment the clinician’s assessment but also support early detection of mood fluctuations and emotional distress. 61
Moreover, advanced AI models can operate passively during virtual consultations to generate continuous mental health risk scores, enabling clinicians to monitor patients over time without increasing session length or clinician workload. Recent studies have demonstrated the efficacy of combining NLP with facial and speech analytics to improve diagnostic accuracy in telehealth settings. 62 These tools can also be adapted for culturally diverse populations by training models on language- and context-specific data sets, thus improving inclusivity and reducing diagnostic bias. As telepsychiatry continues to expand, integrating multimodal AI can transform remote mental health care into a more data-driven, accessible, and personalized service, especially for underserved or geographically isolated populations.63,64
Applications in mental health
AI has increasingly become integral to mental health care, offering innovative solutions for diagnosis, therapy support, and continuous monitoring. The following sections delve into these applications, highlighting recent advancements and their implications. Figure 4 illustrates the potential domains of AI application in mental health care, and Table 2 summarizes AI techniques and applications in mental health.
Figure 4.
Potential domains of artificial intelligence (AI) application in mental healthcare, highlighting both opportunities and challenges.
Table 2.
Summary of AI techniques and applications in mental health.
| AI techniques | Key algorithms/models | Common data types | Mental health applications | Ref |
|---|---|---|---|---|
| Machine learning | Support vector machines, random forest, and XGBoost | Clinical scores and self-report questionnaires | Risk prediction, symptom classification, and patient subgroup clustering | Ndikumana et al. 80 and Madububambachu et al. 81 |
| Deep learning | Convolutional neural networks, recurrent neural networks, long short-term memory, and transformers | Audio recordings, video data, and facial expression images | Depression detection, emotion recognition, and facial micro-expression analysis | Marriwala and Chaudhary 82 |
| Natural language processing | BERT, GPT, and latent Dirichlet allocation | Social media posts, clinical notes, and therapy transcripts | Sentiment analysis, chatbot-based therapy, and mood tracking | Malgaroli et al. 62 and Zhang et al. 83 |
| Multimodal AI | Multimodal fusion models and hybrid DL architectures | Combined text, voice, facial expression, and physiological signals | Emotion recognition, real-time monitoring, and telepsychiatry applications | Sadeghi et al. 40 and Khoo et al. 84 |
AI: artificial intelligence; BERT: bidirectional encoder representations from transformers; GPT: generative pre-trained transformer; DL: deep learning.
Diagnosis
AI technologies have revolutionized the diagnostic process in mental health by enabling the analysis of complex data sets, including speech patterns, text inputs, and behavioral activities. These tools assist in identifying conditions such as depression, anxiety, PTSD, and bipolar disorder.
Speech and text analysis
Speech and text analysis have become essential tools in the application of AI for mental health diagnostics. ML algorithms can extract and interpret linguistic features such as tone, prosody, word choice, sentence structure, and speech pace to detect subtle cues associated with psychological conditions. 62 These models have shown significant promise in identifying early indicators of depression, anxiety, and PTSD, especially when traditional clinical assessments are unavailable or infeasible. Variations in speaking rate, reduced lexical diversity, and increased use of negative-emotion words are commonly observed in individuals experiencing depressive episodes. 65
Federated learning offers a complementary direction: this decentralized model enables collaborative learning from data distributed across devices, making it particularly suitable for mental health applications where confidentiality is paramount. One such system demonstrated improved accuracy in detecting both depression and anxiety by training on diverse, real-world language samples without aggregating sensitive personal data. This innovation not only advances the technical frontiers of AI-based diagnosis but also addresses critical concerns related to ethics, data protection, and clinical applicability in digital mental health tools. 66
Behavioral activity monitoring
Behavioral activity monitoring has become a critical application of AI in mental health care, leveraging data from wearable devices and smartphone sensors to gain real-time insights into an individual’s physical and social behaviors. These technologies continuously collect metrics such as step count, heart rate variability, sleep duration, screen time, and geolocation. AI algorithms then process this high-frequency, high-dimensional data to identify patterns and anomalies that may be indicative of emerging mental health issues. By detecting these behavioral changes early, AI systems offer the possibility of proactive, rather than reactive, mental health interventions. 67
ML models can classify mood states and predict mental health deterioration by analyzing multimodal behavioral signals, even in the absence of explicit self-reporting. 68 Research using ecological momentary assessment (EMA) integrated with smartphone sensor data has demonstrated promising results in predicting short-term suicide risk and depressive relapse. While behavioral monitoring presents a powerful tool for continuous mental health surveillance, its effectiveness depends on data quality, user compliance, and ethical handling of sensitive personal information. Ensuring transparency, user consent, and cultural adaptability remains essential for the responsible deployment of AI-based behavioral monitoring systems. 69
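One simple way to operationalize such monitoring, sketched below under invented assumptions, is per-day anomaly detection: daily summaries (steps, sleep hours, screen time, outings) from a person's own baseline train an isolation forest, which then flags days that deviate sharply, the kind of signal such systems might escalate for clinical review.

```python
# Anomaly detection over daily behavioral summaries; all data synthetic.
# Columns: steps, sleep hours, screen-time hours, outings per day.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
baseline = rng.normal(loc=[8000, 7.5, 3.0, 2.0],
                      scale=[1500, 0.8, 0.7, 0.8],
                      size=(60, 4))                   # 60 typical days
withdrawn = np.array([[1200, 11.0, 9.5, 0.0]])        # markedly atypical day

detector = IsolationForest(random_state=7).fit(baseline)
print(detector.predict(withdrawn))   # [-1] -> flagged as anomalous
```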
Multimodal data integration
Multimodal data integration has emerged as a critical approach in enhancing the diagnostic accuracy of mental health conditions by leveraging diverse data streams. Traditional assessments often rely on self-reported symptoms or clinician observations, which can be limited by subjectivity and recall bias. In contrast, integrating speech patterns, textual inputs (such as social media posts or therapy transcripts), and behavioral metrics (like sleep, movement, or phone usage) provides a richer, more objective picture of an individual’s mental state. 70
Recent work has underscored the significant potential of passive sensing data collected from smartphones and wearable devices in predicting suicidal ideation and high-risk behaviors. These findings highlight that combining data modalities not only improves predictive performance but also enables more proactive mental health care. 71 A model that merges GPS location data with communication frequency and voice sentiment can identify social withdrawal, a key indicator of depression, far earlier than traditional clinical assessments. 72
Therapy support
AI-based tools have expanded access to therapeutic resources, particularly through chatbots offering CBT techniques and emotional support.
AI chatbots
AI-powered chatbots such as Woebot and Wysa are increasingly being integrated into digital mental health interventions, offering users immediate, interactive platforms to manage emotional distress and psychological challenges. These chatbots leverage NLP to engage users in conversational exchanges that mimic human interaction, providing real-time coping strategies, mood tracking, and CBT-based guidance. 73
Empirical evidence supports the effectiveness of these AI chatbots in reducing symptoms of mental distress. A recent study demonstrated that users of Woebot experienced significant reductions in depression and anxiety symptoms, along with high rates of user satisfaction and continued engagement. 74 Similarly, Wysa has been adopted by millions globally, with users reporting improved emotional resilience and well-being over time. Despite their promising results, experts emphasize that chatbots should not replace human therapists but rather serve as complementary tools within a stepped-care model providing scalable, low-intensity support while flagging more severe cases for professional intervention. 75
Accessibility and cost-effectiveness
AI-powered chatbots provide a scalable, cost-effective solution to the growing demand for mental health support, particularly in regions where access to professional care is limited. These chatbots, available around the clock, can engage users in therapeutic conversations, deliver evidence-based interventions such as CBT, and assist with mood tracking and emotional regulation. 76 By eliminating geographical and financial barriers, AI chatbots have democratized access to mental health services, especially for individuals in rural or low-resource settings where mental health professionals are scarce or overburdened. 77
Recent trends in countries like Taiwan and China highlight the increasing reliance on AI chatbots among younger populations. Faced with long wait times, stigma surrounding mental illness, and a shortage of clinicians, many individuals are turning to digital tools as the first step toward managing their mental health. These chatbots are perceived as nonjudgmental, private, and easily accessible, qualities that are especially appealing in cultures where open discussion of emotional distress may be discouraged. However, experts caution that while AI chatbots can provide immediate, supportive care, they should be used as a complement to, not a replacement for, professional mental health services.78,79
Integration with professional care
While AI-powered chatbots have demonstrated significant promise in offering immediate, accessible mental health support, they are not intended to replace the expertise of licensed mental health professionals. These tools can effectively deliver cognitive behavioral strategies, emotional check-ins, and self-care prompts, making them valuable for early intervention and ongoing self-management. 85 However, their responses are limited by programmed algorithms and cannot fully replicate the nuanced understanding, empathy, and clinical judgment of human therapists. As such, chatbots are best positioned as supplementary resources that can support individuals between therapy sessions or during periods when traditional care is inaccessible. 11
Mental health experts consistently emphasize the importance of integrating AI systems within a broader, professionally guided treatment framework. When used in conjunction with clinical care, AI tools can enhance therapeutic outcomes by providing continuous engagement, tracking patient-reported outcomes, and identifying early warning signs of relapse or crisis. 86 However, over-reliance on these technologies without human oversight may lead to misinterpretation of symptoms or neglect of complex psychological issues. Therefore, a hybrid model combining the scalability of AI with the personalized care of mental health professionals offers the most promising path for sustainable, effective mental health support. 65
Monitoring and risk prediction
Continuous monitoring of mental health through AI enables early detection of mood swings and potential crises, facilitating timely interventions.
Wearable devices
Wearable devices have become increasingly vital in the real-time monitoring of physiological and behavioral markers associated with mental health. Smartwatches, fitness bands, and rings are equipped with sensors that track heart rate variability, sleep quality, movement, and even skin temperature, factors closely linked to emotional regulation and stress levels. 87 These continuous, non-invasive data streams allow for the detection of early signs of mood fluctuations, anxiety, or depressive episodes. Irregular sleep patterns or reduced physical activity captured over days or weeks can signal an impending mental health crisis, prompting timely intervention. These tools are particularly valuable for individuals at risk of relapse or those with limited access to clinical care, providing clinicians with objective, longitudinal data for informed decision-making. 88
Advancements in wearable technology have introduced a new generation of home-based devices specifically tailored for mental health and sleep disorders. One such innovation is a smart ring capable of high-fidelity sleep pattern monitoring, designed to aid in the early detection and treatment of obstructive sleep problems, which are often comorbid with mood disorders. 89 These wearables not only enhance patient engagement by providing feedback on daily wellness but also integrate seamlessly with mobile applications that analyze and visualize mental health trends. As these tools evolve, they offer promising potential for more personalized, data-driven mental healthcare, though their adoption must be accompanied by strict adherence to privacy, consent, and ethical guidelines. 90
Smartphone applications
Smartphone applications have become a valuable tool in the assessment and support of mental health by leveraging the ubiquitous presence of mobile devices, which can passively capture behavioral data streams in daily life. 91 By applying ML algorithms to these data streams, smartphone-based platforms can detect early signs of psychological distress, mood fluctuations, and behavioral changes associated with conditions such as depression and anxiety. This passive sensing allows for unobtrusive, real-time mental health monitoring, providing a scalable solution for early intervention, especially in settings where access to clinicians is limited. 92
Recent research underscores the clinical impact of these technologies. A study demonstrated that individuals on waiting lists for traditional therapy experienced significant reductions in depression, anxiety, and suicidality after using evidence-based mental health apps in combination with wearable sensors. 93 These digital tools offered users immediate coping strategies, self-assessment features, and behavioral feedback, bridging the gap in care during critical waiting periods. However, experts emphasize the need for careful validation, personalization, and data privacy safeguards to ensure these tools are both effective and ethically responsible in long-term mental health management. 94
Ecological momentary assessment (EMA)
EMA is a data collection method that captures individuals’ moods, thoughts, and behaviors in real time and within their natural environments. Unlike retrospective assessments, EMA minimizes recall bias and provides a more accurate, dynamic view of a person’s psychological state. By prompting users to record their experiences multiple times a day via smartphones or wearable devices, EMA generates rich, temporally sensitive data. This approach is especially valuable in mental health research, where symptoms such as mood fluctuations or stress responses can vary significantly throughout the day and are influenced by contextual factors. 95
Recent advancements in AI have enhanced the utility of EMA by enabling predictive modeling of acute mental health risks. Studies have shown that ML algorithms applied to EMA data can effectively identify patterns associated with near-term suicidal ideation, stress episodes, or depressive relapses. This integration of EMA and AI paves the way for proactive and personalized mental health interventions, allowing for early support before a crisis occurs. However, challenges remain regarding data privacy, adherence, and the need for culturally sensitive algorithms that generalize across diverse populations. 96
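The sketch below illustrates one plausible EMA-to-prediction pipeline: recent prompts are summarized into window features (mean mood, variability, trend) and passed to a logistic regression risk model. The data, labeling rule, and window length are all synthetic placeholders, not any cited system.

```python
# Near-term risk prediction from EMA mood ratings; all data synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(mood_ratings):
    """Summarize one window of EMA mood ratings (e.g. last 6 prompts)."""
    x = np.asarray(mood_ratings, dtype=float)
    slope = np.polyfit(np.arange(len(x)), x, 1)[0]   # linear trend
    return [x.mean(), x.std(), slope]

# Synthetic training windows; label 1 uses a placeholder "low mood" rule.
rng = np.random.default_rng(3)
X = np.array([window_features(rng.integers(1, 8, size=6)) for _ in range(200)])
y = (X[:, 0] < 4).astype(int)

model = LogisticRegression().fit(X, y)
recent = window_features([5, 4, 4, 3, 2, 2])   # declining mood over prompts
print("risk score:", model.predict_proba([recent])[0, 1])
```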
Data-centric AI
Privacy-preserving learning (federated and secure enclaves) and synthetic data can widen data access but create new risks (shifted distributions and disclosure through rare combinations). Projects should document data lineage, audit drift post-deployment, and evaluate fairness on subgroups, not only overall metrics.
By keeping data local, federated learning provides robust privacy protections, but it has limitations as well. Particularly in low-resource environments, heterogeneity among sites, such as differences in sample distributions, device quality, or data collection procedures, can result in models that are inconsistent or biased. Furthermore, it remains difficult to guarantee equal participation across devices, so underrepresented groups can unintentionally be left out of training. Similarly, synthetic data generation can widen access to training data sets but presents its own ethical and legal challenges: synthetic samples may reproduce hidden biases or, in rare instances, enable re-identification through distinctive feature combinations. Rules on disclosure and authenticity also remain ambiguous, because existing frameworks such as the EU General Data Protection Regulation (GDPR) and the US Health Insurance Portability and Accountability Act (HIPAA) offer only partial guidance. To protect trust and accountability in mental health platforms, data-centric AI techniques should therefore be paired with transparent governance, including data set lineage tracking, fairness audits, and unambiguous disclosure of synthetic data use.
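The federated-averaging idea at the heart of this discussion can be shown in a few lines: each site fits a model locally, and only parameter vectors, never raw records, are shared and averaged. The sketch uses least-squares models on synthetic site data; real deployments additionally weight sites by size, secure the aggregation, and audit the biases noted above.

```python
# Minimal federated averaging (FedAvg-style) with synthetic site data.
# Only fitted weights leave each site; raw records stay local.
import numpy as np

rng = np.random.default_rng(5)
true_w = np.array([0.8, -0.5, 0.3])

def local_fit(n, shift):
    """One site's least-squares fit on its own (shifted) data."""
    X = rng.normal(loc=shift, size=(n, 3))        # site-specific distribution
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return np.linalg.lstsq(X, y, rcond=None)[0]   # local weights only

site_weights = [local_fit(n, s) for n, s in [(200, 0.0), (50, 0.5), (20, -0.7)]]
global_w = np.mean(site_weights, axis=0)          # the averaging step
print("federated estimate:", np.round(global_w, 2))
```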
Challenges and limitations
The integration of AI into mental health care offers promising advancements in diagnosis, treatment, and patient monitoring. However, several challenges and limitations, summarized in Table 3 and discussed in detail below, must be addressed to ensure ethical, equitable, and effective implementation. Key concerns include data privacy, bias and fairness, and the interpretability of AI systems.
Table 3.
Challenges and limitations in applying AI to mental health.
| Challenge | Description | Impact on practice | Mitigation strategies | References |
|---|---|---|---|---|
| Data privacy | Mental health data is highly sensitive and vulnerable to breaches | Erodes patient trust and violates ethical norms | Federated learning, differential privacy, and secure storage protocols | Lundberg and Lee 97 |
| Bias and fairness | AI models trained on limited or skewed data sets may reflect systemic biases | Misdiagnosis or exclusion of minority populations | Diverse data sets and fairness-aware ML models | Benrimoh 98 |
| Model interpretability | DL models often act as “black boxes” | Limits clinical trust and regulatory acceptance | Explainable AI and feature attribution methods | Topol 99 |
| Cultural sensitivity | AI may fail to recognize culture-specific emotional expressions or behaviors | Reduced accuracy and appropriateness in multicultural settings | Cultural calibration and local data set inclusion | Gill 100 |
| Clinical integration | Tools may not align with clinical workflows or time constraints | Low adoption by clinicians | User-centered design and co-development with practitioners | Weng 101 |
| Regulation and ethics | Lack of consistent regulation for AI mental health tools | Legal ambiguity and misuse risks | Standards for AI auditability and consent | Zhang 102 |
| Data labeling | Annotating mental health data (e.g. severity of depression) is time-consuming and subjective | Poor training quality and model drift | Semi-supervised learning and expert validation | Naslund 103 |
| Long-term effectiveness | Few models are validated over time in real-world settings | Reduced reliability and user engagement | Longitudinal studies and post-deployment monitoring | Gao 104 |
AI: artificial intelligence; DL: deep learning.
Data privacy
Mental health data is inherently sensitive, encompassing personal narratives, behavioral patterns, and clinical diagnoses. The digitization and analysis of such data by AI systems raise significant privacy concerns.
Risks of data breaches and unauthorized access
The potential for data breaches presents a profound threat to the integrity and confidentiality of mental health records. Unlike general medical data, mental health information often includes intimate personal disclosures, therapy notes, emotional histories, and behavioral patterns. When such sensitive data is compromised, it can lead to significant emotional harm, social stigma, and even legal consequences for affected individuals. In one widely reported breach of a psychotherapy provider, the compromise not only violated patient privacy but also led to blackmail attempts and widespread public distress, illustrating the uniquely high stakes of mental health data security. 105
These incidents underscore the urgent need for robust cybersecurity infrastructure and strict data governance frameworks in AI-powered mental health systems. AI applications often require extensive data sets to function effectively, increasing the attack surface for malicious actors. 106 As mental health services become more digitized, with mobile apps, online counseling platforms, and wearable devices collecting data, the complexity and vulnerability of these systems grow. Developers and healthcare providers must implement end-to-end encryption, secure cloud storage, and anonymization protocols. 107
Regulatory challenges
The regulatory landscape surrounding AI in mental health is still in its formative stages, struggling to keep pace with the rapid development and deployment of these technologies. As AI tools become more prevalent in mental health diagnostics and therapy, they often operate in legal gray areas regarding consent, data usage, and cross-border data flows. 108 The lack of standardized guidelines makes it difficult for developers and healthcare institutions to determine acceptable practices, especially when sensitive patient data is involved. While general frameworks such as the EU’s GDPR offer baseline protections, they do not yet account for the unique ethical and clinical nuances associated with AI-based mental health interventions. 109
A notable example highlighting the urgent need for more tailored regulation is the case in which Italy’s data protection authority fined Luka Inc., the company behind the Replika AI chatbot, €5 million. The fine was issued for violations including the collection and processing of user data without a legal basis and the absence of effective age verification mechanisms. As mental health applications increasingly involve emotionally vulnerable populations, strong oversight mechanisms are essential to ensure ethical compliance and to foster public trust.110,111
Public perception and trust
Public trust in AI technologies is a cornerstone for their successful integration into mental health care. While many users express curiosity and optimism about the role of AI in expanding access and efficiency, they often remain skeptical about the safety of their personal and psychological data. This skepticism is further amplified in the context of mental health, where users may share highly intimate and stigmatized information, making them particularly sensitive to issues of data misuse or exposure. 112
Moreover, the perception of AI systems as impersonal or emotionally detached can also impact trust, especially in therapeutic settings where empathy, nuance, and human connection play central roles. Users may question whether AI-driven interventions can truly understand or support their psychological struggles. Building trust requires active user involvement, culturally sensitive design, and communication strategies that clarify the benefits and limitations of AI systems. As digital mental health tools become more prevalent, aligning technological capabilities with ethical and user-centered design will be key to gaining sustained public confidence. 113
Mitigation strategies
To address growing concerns around the privacy of mental health data, developers and researchers are increasingly adopting privacy-enhancing technologies such as federated learning and differential privacy. Federated learning enables AI models to be trained over decentralized devices or servers that hold local data samples, without transferring the raw data to a central repository. 114 This technique is especially relevant for mental health applications that involve mobile phones or wearable devices, where user data can remain on the device while contributing to global model improvement. In parallel, differential privacy adds controlled statistical noise to data sets or outputs to prevent the identification of individual users, even when multiple queries are made. These technologies collectively offer robust protection mechanisms, allowing developers to uphold user confidentiality while still extracting meaningful insights from sensitive data sets. 115
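To ground these two ideas, the following is a minimal sketch in Python/NumPy of federated averaging with clipped, noise-perturbed client updates. The client data, the model (a bare logistic-regression step), the clipping bound, and the noise scale are all invented for illustration; this is not a production differential-privacy implementation and provides no formal privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One gradient step on a client's private data; raw data never leaves."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))      # sigmoid
    grad = X.T @ (preds - y) / len(y)               # logistic-loss gradient
    return weights - lr * grad

def dp_noise(shape, clip=1.0, sigma=0.5):
    """Gaussian noise scaled to the clipping bound (simplified DP mechanism)."""
    return rng.normal(0.0, sigma * clip, size=shape)

n_features = 8
global_w = np.zeros(n_features)

# Simulated clients: each holds its own sensor-derived features and labels.
clients = [(rng.normal(size=(50, n_features)), rng.integers(0, 2, 50))
           for _ in range(5)]

for _ in range(20):
    updates = []
    for X, y in clients:
        w = local_update(global_w.copy(), X, y)
        delta = w - global_w
        # Clip each client's update, then add noise before it is shared.
        delta = delta / max(1.0, np.linalg.norm(delta))
        updates.append(delta + dp_noise(delta.shape))
    global_w += np.mean(updates, axis=0)   # federated averaging on the server
```

Only the clipped, noised update leaves each simulated device, which is the property that makes the approach attractive for phone- and wearable-based mental health data.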
Transparent consent mechanisms, user control over data sharing, and the ability to withdraw participation are critical components of trustworthy AI systems in mental health care. Furthermore, collaborations with legal experts and compliance with regulations such as the GDPR or HIPAA are essential to align AI deployments with privacy laws. Recent initiatives have also called for “privacy-by-design” frameworks, where data protection is integrated at every stage of AI system development. 116
Bias and fairness
AI systems in mental health care are susceptible to biases that can lead to unfair or inaccurate outcomes, particularly for underrepresented groups.
Sources of bias
Bias in AI models often originates from imbalanced or non-representative training data sets, which fail to capture the diversity of the populations they are intended to serve. In mental health contexts, this can result in models that perform well for certain demographic groups, such as white, English-speaking populations, but poorly for others, including ethnic minorities or non-native speakers. 117 When AI models are trained on such narrow data sets, their outputs may systematically underdiagnose or misclassify mental health conditions in marginalized communities, reinforcing existing healthcare disparities rather than alleviating them. 118
Impact on diverse populations
AI systems used in mental health often reflect the biases inherent in the data sets on which they are trained. When these data sets underrepresent certain demographic groups, such as racial minorities, non-native language speakers, or individuals from low-income communities, the resulting models tend to perform poorly for those populations. For instance, studies have shown that AI models trained to detect depression via social media text achieved high accuracy in white populations but exhibited significantly reduced performance when applied to posts by Black Americans. 119 These discrepancies are attributed to differences in linguistic expression, cultural context, and the use of vernacular language, which are often not captured adequately in training data. This kind of systemic bias risks excluding or misclassifying vulnerable populations, leading to inequitable access to AI-supported mental health care. 120
Cultural variation in the expression and conceptualization of mental illness further complicates AI deployment across diverse groups. What constitutes a symptom in one culture may be normalized behavior in another, resulting in either over-diagnosis or underdiagnosis. Language barriers, dialect differences, and varying norms about help-seeking behavior can affect the inputs that AI systems rely on, particularly those using NLP or voice analysis. If these systems are not trained with culturally diverse data or adapted to recognize such variability, their conclusions may be misleading or even harmful. 121
Mitigation strategies
Addressing bias in AI systems for mental health care requires a multifaceted approach that begins with the construction of diverse and representative data sets. Models trained exclusively on homogeneous populations, whether defined by geography, ethnicity, age, or socioeconomic status, are more likely to perpetuate inequalities and produce unreliable results for underrepresented groups. 122 Efforts must be made to include data from varied demographics to ensure that AI models capture the full spectrum of human behavior and cultural nuances. This includes leveraging community-based data collection, partnering with global research institutions, and ensuring informed consent in ethically appropriate ways. Moreover, regular audits of training data sets for imbalances or exclusionary patterns should become standard practice in AI development workflows. 123
In parallel, the use of fairness-aware algorithms and post-hoc bias correction techniques can mitigate disparities in model predictions. These methods adjust for known biases either during model training (e.g. reweighting samples) or after deployment (e.g. recalibrating outputs). Equally important is the implementation of continuous model monitoring to assess performance across different subgroups over time. Regulatory oversight, such as mandatory fairness reporting and model explainability standards, can further enforce accountability, while the integration of interdisciplinary ethics boards in AI projects ensures that social, cultural, and legal dimensions are adequately addressed. Together, these strategies foster transparency, improve model trustworthiness, and contribute to more equitable mental health outcomes. 124
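As a hedged illustration of these two levers, the sketch below computes inverse-frequency sample weights for an imbalanced cohort (the training-time adjustment) and then recalibrates decision thresholds per group (the post-deployment adjustment). The group labels, scores, and target rate are simulated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=1000, p=[0.85, 0.15])  # imbalanced cohort

# (1) Reweighting: rare subgroups get proportionally larger training weights,
# so the loss is not dominated by the majority group.
values, counts = np.unique(groups, return_counts=True)
freq = dict(zip(values, counts / len(groups)))
sample_weights = np.array([1.0 / freq[g] for g in groups])
# These weights would be passed to a trainer, e.g. model.fit(X, y, sample_weight=...).

# (2) Post-hoc recalibration: choose a decision threshold per group so that
# predicted positive rates match across groups.
scores = rng.uniform(size=1000)          # stand-in for model risk scores
target_rate = 0.20
thresholds = {g: np.quantile(scores[groups == g], 1 - target_rate)
              for g in values}
predictions = np.array([scores[i] >= thresholds[groups[i]]
                        for i in range(len(scores))])

for g in values:
    print(g, round(predictions[groups == g].mean(), 2))  # ~equal rates per group
```

Which fairness criterion to equalize (rates, sensitivity, calibration) is itself a clinical and ethical choice, which is why the ethics-board involvement noted above matters.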
Interpretability
The “black-box” nature of many AI models poses challenges for their adoption in clinical settings, where transparency and explainability are paramount.
Challenges of black-box models
One of the most critical limitations of advanced AI systems, particularly DL models, lies in their inherent lack of transparency, often referred to as the “black-box” problem. These models, including CNNs and RNNs, are capable of identifying subtle patterns across vast, high-dimensional data sets such as voice recordings, social media posts, or facial micro-expressions. However, the internal logic by which they arrive at specific predictions or classifications is typically not accessible or interpretable to human users. In the context of mental health care, where clinical decisions must be justified and understood, this opacity poses a significant barrier to adoption. Unlike traditional diagnostic tools that are based on interpretable criteria (e.g. DSM-5 checklists), black-box AI outputs cannot be easily explained to clinicians, patients, or regulatory bodies, raising questions about accountability and clinical validity. 130
This disconnect between model reasoning and clinical understanding can result in missed opportunities for intervention or inappropriate reliance on potentially flawed outputs. Moreover, legal and ethical implications emerge when decisions are made based on systems that cannot offer a rationale. As mental health professionals are ultimately responsible for patient outcomes, reliance on non-transparent tools without clear decision paths may conflict with clinical guidelines and malpractice standards. 131
Recent studies have emphasized the importance of bridging this gap by developing models that balance predictive power with interpretability. For example, attention-based neural networks and post-hoc explanation methods such as SHAP and LIME have been applied in mental health contexts to clarify which features (e.g. tone of voice, specific keywords, or behavioral patterns) contributed to a prediction. 132 However, these approaches are still evolving and may not fully resolve the trust deficit. Until interpretability becomes a standard feature of AI tools used in psychiatry and psychology, the black-box challenge will remain a significant obstacle to widespread clinical integration and acceptance. 130
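To make this concrete, below is a minimal, hedged sketch of attaching a post-hoc SHAP explanation to a screening classifier. The model, feature names, and data are invented stand-ins for the vocal and behavioral markers described above; it assumes the open-source shap package, whose output layout varies across versions.

```python
import numpy as np
import shap                                # pip install shap
from sklearn.ensemble import RandomForestClassifier

# Invented stand-ins for the kinds of vocal/behavioral markers discussed above.
feature_names = ["speech_rate", "pause_length", "negative_word_freq",
                 "sleep_hours", "activity_level"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(feature_names)))
# Synthetic labels loosely tied to two features, purely for illustration.
y = (X[:, 2] - X[:, 4] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])
# For classifiers, shap may return one attribution array per class (or a single
# array with a class axis), depending on the installed version. Either way,
# each row gives per-feature contributions to one individual's prediction --
# the kind of case-level rationale a clinician could inspect.
```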
XAI approaches
XAI has emerged as a critical area of research in response to the “black-box” nature of many AI models, particularly DL architectures. In the context of mental health care, where clinical decisions must be transparent and justifiable, XAI offers a way to make complex model outputs more understandable to clinicians, patients, and regulatory bodies. Unlike traditional ML models, which may rely on a small number of human-readable features, many DL systems generate decisions from millions of parameters, making it difficult to trace their reasoning. This opacity has raised concerns about accountability, especially when AI tools are used to support sensitive diagnoses like depression, anxiety, or suicidal ideation. 133
One recent study presented an XAI-enhanced framework for mental health screening, demonstrating how model predictions can be accompanied by visual or textual explanations. The system integrated SHAP values to highlight which input features, such as changes in language sentiment, vocal tone, or behavioral data, contributed most significantly to a diagnosis of depression or anxiety. By surfacing these model contributions, the framework enabled clinicians to better understand the rationale behind each prediction, improving confidence in AI-assisted tools. Importantly, the model was evaluated not only on accuracy but also on the clarity and usefulness of its explanations from the perspective of practicing mental health professionals. 131
Beyond SHAP, other XAI techniques such as LIME, attention visualization in neural networks, and counterfactual reasoning are being explored for mental health AI. These methods allow developers and clinicians to examine how small changes in input data can alter predictions, revealing underlying model behavior and potential biases. For instance, an AI model trained to assess social media data for depression risk could use LIME to show which phrases or keywords were most influential in the model’s classification. This can help identify whether the model is relying on clinically relevant indicators or spurious correlations, thereby guiding improvements in model design and data curation. 134
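As an illustration of the LIME use case just described, here is a hedged sketch: `predict_proba` below is a toy keyword-count classifier standing in for a real depression-risk model, and the keyword list and class names are invented.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer   # pip install lime

# Toy stand-in classifier: risk rises with the count of "negative" keywords.
NEGATIVE_WORDS = {"hopeless", "tired", "alone", "worthless"}

def predict_proba(texts):
    """Map raw texts to [p(low_risk), p(elevated_risk)] -- LIME's expected API."""
    counts = np.array([sum(w in NEGATIVE_WORDS for w in t.lower().split())
                       for t in texts], dtype=float)
    p_risk = 1.0 / (1.0 + np.exp(-(counts - 1.0)))
    return np.column_stack([1.0 - p_risk, p_risk])

explainer = LimeTextExplainer(class_names=["low_risk", "elevated_risk"])
post = "i feel hopeless and tired of being alone all the time"
explanation = explainer.explain_instance(post, predict_proba, num_features=5)
print(explanation.as_list())   # [(word, weight), ...] driving the prediction
```

Inspecting the returned word weights is exactly the check described above: if the influential tokens are clinically implausible, the model is likely exploiting spurious correlations.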
Despite these advancements, the implementation of XAI in clinical mental health settings still faces significant hurdles. One challenge is the trade-off between model complexity and interpretability: simpler models are easier to explain but may lack the predictive power of more complex architectures. Additionally, there is a risk that superficial interpretability tools might lend a false sense of transparency, potentially leading to overreliance on flawed models. As such, future efforts must prioritize not only the development of technically sound XAI methods but also the co-design of these tools with mental health practitioners to ensure they align with real-world clinical reasoning and decision-making processes.135,136
Clinical integration
For AI systems to be effectively integrated into mental health care, they must align with existing clinical workflows and complement, rather than disrupt, clinician decision-making. Mental health professionals rely heavily on clinical judgment, therapeutic relationships, and nuanced understanding of individual patient histories. If AI tools are perceived as intrusive, opaque, or inconsistent with standard practices, their utility in real-world settings diminishes. Therefore, seamless integration requires AI models to provide actionable insights in a format that clinicians can readily interpret and incorporate into their care routines. 137
Interpretable models play a critical role in fostering clinician trust and adoption. Unlike “black-box” systems that offer predictions without explanation, interpretable AI enables mental health professionals to trace how and why a model arrived at a particular conclusion. For example, attention-based neural networks or decision trees can highlight which symptoms or behavioral indicators contributed most to a diagnosis or risk score. This level of transparency not only improves clinical confidence but also facilitates more informed discussions with patients, particularly when AI-generated insights influence treatment recommendations. 138
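As a minimal sketch of the interpretable-model idea, the snippet below fits a shallow decision tree whose decision rules can be printed and read directly; the feature names are invented stand-ins for the symptom and behavioral indicators mentioned above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
features = ["phq9_score", "sleep_disruption", "social_withdrawal"]  # hypothetical
X = rng.normal(size=(300, 3))
# Synthetic risk labels, loosely tied to two indicators for illustration.
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)

# Depth-limited tree: a rule set short enough to audit in a chart review.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))   # human-readable if/else rules
```

A depth-limited tree trades some accuracy for rules a clinician can verify line by line, which is the transparency-versus-power trade-off discussed throughout this section.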
Moreover, clinical integration involves more than model transparency; it also requires interoperability with EHR systems, alignment with ethical guidelines, and adaptability across diverse patient populations. AI tools must be rigorously validated in clinical trials, tested across multiple settings, and continuously monitored for performance drift over time. 139 Institutions should also provide training for mental health professionals on how to interpret and apply AI outputs. By ensuring these systems are intuitive, reliable, and context-aware, we can bridge the gap between cutting-edge technology and compassionate, patient-centered care. 140
Representativeness and selection bias
The evidence synthesized here is disproportionately drawn from high-income countries (HICs) and academic or tertiary care settings, with comparatively fewer studies originating from LMICs and community services. Many cohorts reflect convenience samples with stable access to smartphones, wearables, or well-curated EHRs, which may under-represent populations experiencing digital exclusion or fragmented care. Age distributions are often adult-dominant, with limited youth and older-adult representation, and demographic reporting is inconsistently granular, constraining subgroup assessment. As a result, model performance and implementation findings may not generalize across health systems, languages, devices, or culturally mediated symptom expression.
Language and indexing bias
Our search was restricted to English-language sources, which likely excluded relevant studies published in other languages or in local journals and repositories, particularly from LMIC contexts. This introduces language and indexing bias and may over-weight HIC evidence. Future reviews should incorporate multilingual searches (e.g. regional databases) and translation workflows, and primary studies should report standardized demographics and conduct external validation across diverse sites to strengthen generalizability.
Future directions
Table 4 highlights key future directions for AI in mental health, outlining the objectives, implementation needs, and anticipated impact of each. The table also lists relevant references supporting each direction, guiding the development of more ethical, interpretable, and inclusive AI systems in mental health care.
Table 4.
Key future directions in AI applications for mental health, with associated goals and implementation needs.
| Directions | Objectives | Implementation needs | Anticipated impact | References |
|---|---|---|---|---|
| Culturally sensitive AI | Develop models that recognize linguistic, cultural, and behavioral differences in emotional expression | Use diverse, inclusive training data sets and local context calibration | Improve fairness and effectiveness across global populations | Okolo 125 |
| Multimodal emotion analysis | Combine voice, facial expressions, and text to assess mental state more accurately | Fusion of multiple data modalities using DL and attention mechanisms | Enhance real-time emotion recognition and diagnostic precision | Wu et al. 126 |
| Clinician-AI collaboration | Position AI as a decision support tool rather than a replacement | Co-design interfaces with clinicians, provide training, and integrate with EHR systems | Increase clinician trust and adoption and improve shared decision-making | Auf et al. 86 |
| XAI | Ensure transparency and interpretability in AI decisions for mental health | Develop feature attribution, local surrogate models, and saliency maps | Improve model trust, enable regulatory compliance, and facilitate validation | Adadi and Berrada 127 |
| Longitudinal validation | Test AI models over time and in diverse, real-world clinical settings | Conduct prospective cohort studies and real-world monitoring trials | Ensure generalizability, detect model drift, and maintain performance | Lee et al. 18 |
| Adaptive and personalized AI | Build AI systems that learn from user interactions and tailor outputs to individual patterns | Implement reinforcement learning, user-feedback loops, and dynamic thresholds | Provide personalized care recommendations and increase patient engagement | Karine and Marlin 128 |
| Ethical governance | Establish clear legal and ethical frameworks for AI in mental health | Define standards for consent, data governance, accountability, and auditability | Ensure responsible deployment and protect users’ rights and data | Morley et al. 129 |
| Low-resource adaptation | Design models that can operate in low-data or low-computing environments (e.g. rural clinics) | Lightweight architectures, synthetic data generation, and federated learning | Broaden access to AI mental health tools in under-resourced areas | Okolo 125 |
AI: artificial intelligence; DL: deep learning; XAI: explainable artificial intelligence; EHRs: electronic health records.
Culturally sensitive AI
As AI systems become more integrated into mental health care, ensuring cultural sensitivity is paramount. Mental health experiences and expressions vary widely across cultures, and AI models must account for these differences to provide effective and equitable care.
Recent studies highlight the importance of training AI models on diverse data sets that encompass various cultural contexts. For instance, a systematic review emphasized that generative AI tools often lack cultural competency, leading to misinterpretations or inappropriate recommendations in mental health assessments. To address this, researchers advocate for the inclusion of culturally diverse data during model development and the incorporation of feedback from diverse user groups.141,142 As illustrated in Figure 5, the integration of AI into EHRs provides numerous advantages, such as predictive analysis, NLP, and improved data retrieval, each of which contributes to culturally sensitive and efficient mental health care.
Figure 5.
Future prospects of AI in EHRs, highlighting potential benefits across clinical and analytical domains. AI: artificial intelligence; EHRs: electronic health records.
Moreover, initiatives like the NIH-funded project on improving patient-provider cultural attunement using AI are exploring ways to enhance therapeutic alliances through culturally sensitive AI interventions. These efforts aim to ensure that AI tools not only understand linguistic nuances but also respect cultural values and norms, thereby improving inclusivity in mental health care. 143
Multimodal emotion analysis
Understanding human emotions is a complex task that benefits from analyzing multiple modalities, such as facial expressions, speech tone, and textual content. Multimodal emotion analysis leverages these diverse data sources to provide a more comprehensive assessment of an individual’s mental state. 144
Advancements in this area include the development of models that integrate EEG and electrocardiogram data with traditional modalities to enhance emotion recognition accuracy. Systematic reviews have identified effective AI-based multimodal dialogue systems capable of emotion recognition, highlighting the potential of these technologies in mental health interventions. 144
However, challenges remain in ensuring the accuracy and reliability of these systems across diverse populations. Future research should focus on developing models that can adapt to individual differences and cultural variations in emotional expression, thereby enhancing the effectiveness of AI-based mental health assessments.
Figure 6 illustrates the general workflow of an AI-driven multimodal emotion analysis system. It begins with the analysis of various data inputs, such as text, speech, and physiological signals, and proceeds through structured stages of understanding, feature extraction, and algorithmic processing to generate emotion-related outputs.
Figure 6.
Artificial intelligence (AI)-driven pipeline for multimodal emotion analysis, illustrating five key stages from input analysis to output generation.
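To make the fusion stage of such a pipeline concrete, here is a minimal, hedged sketch of weighted late fusion across three modalities. The per-modality models are stubs returning fixed class probabilities, and the modality weights are assumed values, not parameters from any cited system.

```python
import numpy as np

# Stub per-modality models: each returns probabilities over
# (neutral, sad, anxious). Real systems would run trained classifiers here.
def text_model(text):    return np.array([0.2, 0.6, 0.2])
def speech_model(audio): return np.array([0.3, 0.5, 0.2])
def physio_model(hr):    return np.array([0.4, 0.3, 0.3])

MODALITY_WEIGHTS = np.array([0.5, 0.3, 0.2])   # assumed reliability weights

def fuse(text, audio, hr):
    """Weighted late fusion: combine per-modality class probabilities."""
    probs = np.stack([text_model(text), speech_model(audio), physio_model(hr)])
    fused = MODALITY_WEIGHTS @ probs           # weighted average over modalities
    return ["neutral", "sad", "anxious"][int(np.argmax(fused))]

print(fuse("i can't sleep", audio=None, hr=88))   # -> "sad" under these stubs
```

Late fusion keeps each modality's model independent, which simplifies handling missing inputs (e.g. no audio); early-fusion designs instead concatenate features before a single model, trading robustness for richer cross-modal interactions.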
Clinician-AI collaboration
Integrating AI into clinical practice requires a collaborative approach where AI serves as a decision support tool rather than a replacement for human clinicians. This collaboration can enhance trust and effectiveness in mental healthcare settings.
Studies reveal that AI-enabled Clinical Decision Support Systems (AI-CDSS) show great potential in assisting healthcare professionals by providing personalized treatment recommendations and enhancing shared decision-making between clinicians and patients. These systems can analyze vast amounts of patient data to generate evidence-based insights, potentially improving diagnostic accuracy and treatment outcomes. However, integrating AI-CDSS into routine clinical workflows presents significant challenges. One of the primary concerns among clinicians is the trustworthiness of AI-generated outputs, especially in high-stakes medical situations. Many healthcare providers remain cautious about relying solely on AI recommendations, stressing the need for human validation and oversight. The prevailing sentiment is that AI should complement, not replace, clinical judgment. This highlights the need for transparency, interpretability, and effective collaboration between AI systems and medical professionals to ensure successful adoption and seamless integration into clinical practice. 145
To address these concerns, future efforts should prioritize the development of explainable AI models that offer clear and interpretable insights. Additionally, implementing comprehensive training programs for clinicians on effectively using AI tools, while incorporating their feedback into AI system design, can promote smoother integration. By positioning AI as a complement to human expertise, mental health care can achieve improved diagnostic accuracy and more tailored treatment planning.
Conclusion
AI is transforming the field of digital health by enabling earlier detection of mental health conditions, offering scalable support tools, and providing continuous real-time monitoring of emotional and behavioral changes. Our analysis of non-generative AI techniques (ML, DL, NLP, and multimodal approaches) demonstrates how they can support clinical decisions, expand access through digital resources, and improve diagnostic precision. Nonetheless, issues with accessibility, algorithmic bias, data security, and cultural diversity persist. To safeguard patient trust and clinical efficacy, researchers and practitioners should collaborate on the design of AI tools, prioritize explainability, and conduct post-deployment assessment. For implementation, it is crucial to curate appropriate data sets and integrate AI outputs into existing clinical workflows. To ensure responsible deployment, policymakers and healthcare providers should mandate data lineage records, confidentiality-preserving mechanisms (such as federated learning), and fairness audits. Sustained funding for small-scale, culturally sensitive AI solutions is also required. With careful oversight and cooperation, AI has the potential to become a reliable adjunct to personalized mental health services.
Footnotes
ORCID iD: Zeeshan Abbas https://orcid.org/0000-0003-1472-183X
Ethical approval: Not applicable.
Authors’ contributions: MA and SA: conceptualization; MA, SA, and QA: methodology, data curation, and writing–original draft preparation; SA and QA: investigation; MA and ZA: writing–review and editing; ZA and SWL: supervision; SWL: funding acquisition. All authors have read and agreed to the published version of the manuscript.
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by the Ministry of Education and Ministry of Science & ICT, Republic of Korea (grant numbers: NRF [2021-R1-I1A2 (059735)], RS [2024-0040 (5650)], RS [2024-0044 (0881)], RS [2019-II19 (0421)], and RS [2025-2544 (3209)]).
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Guarantor: SWL
References
- 1.World Health Organization. World mental health report: Transforming mental health for all. WHO, 2022. https://www.who.int/publications/i/item/9789240049338.
- 2.Cruz-Gonzalez P, He AWJ, Lam EP, et al. Artificial intelligence in mental health care: a systematic review of diagnosis, monitoring, and intervention applications. Psychol Med 2025; 55: e18.
- 3.Dehbozorgi R, Zangeneh S, Khooshab E, et al. The application of artificial intelligence in the field of mental health: a systematic review. BMC Psychiatry 2025; 25: 132.
- 4.Pandey HM. Artificial intelligence in mental health and well-being: evolution, current applications, future challenges, and emerging evidence. arXiv preprint 2024; 2501.10374.
- 5.Mohr DC, Zhang M, Schueller SM. Personal sensing: understanding mental health using ubiquitous sensors and machine learning. Annu Rev Clin Psychol 2017; 13: 23–47.
- 6.Huckvale K, Venkatesh S, Christensen H. Toward clinical digital phenotyping: a timely opportunity to consider purpose, quality, and safety. NPJ Digit Med 2019; 2: 88.
- 7.Bzdok D, Meyer-Lindenberg A, Thirion B. Machine learning for precision psychiatry: opportunities and challenges. Biol Psychiatry Cogn Neurosci Neuroimaging 2021; 6: 223–230.
- 8.Etkin A. Precision psychiatry: a neural circuit taxonomy for depression and anxiety. Lancet Psychiatry 2022; 9: 303–313.
- 9.Bouderhem R. Shaping the future of AI in healthcare through ethics and governance. Humanit Soc Sci Commun 2024; 11: 1–12.
- 10.Timmons AC, Duong JB, Simo Fiallo N, et al. A call to action on assessing and mitigating bias in artificial intelligence applications for mental health. Perspect Psychol Sci 2023; 18: 1062–1096.
- 11.Zhang Z, Wang J. Can AI replace psychotherapists? Exploring the future of mental health care. Front Psychiatry 2024; 15: 1444382.
- 12.Moreno C, Wykes T, Galderisi S, et al. “How mental health care should change as a consequence of the COVID-19 pandemic”: correction. Lancet Psychiatry 2021; 8: e14.
- 13.Chandra A, Sreeganga SD, Ramaprasad A. Mental healthcare systems research during COVID-19: lessons for shifting the paradigm post-COVID-19. Urban Governance 2024; 4: 5–15.
- 14.Dehbozorgi R, Zangeneh S, Khooshab E, et al. The application of artificial intelligence in the field of mental health: a systematic review. BMC Psychiatry 2025; 25: 132.
- 15.Wang L, Bhanushali T, Huang Z, et al. Evaluating generative AI in mental health: systematic review of capabilities and limitations. JMIR Ment Health 2025; 12: e70014.
- 16.Thakkar A, Gupta A, De Sousa A. Artificial intelligence in positive mental health: a narrative review. Front Digit Health 2024; 6: 1280235.
- 17.Alhuwaydi AM. Exploring the role of artificial intelligence in mental healthcare: current trends and future directions – a narrative review for a comprehensive insight. Risk Manag Healthc Policy 2024; 17: 1339–1348.
- 18.Lee EE, Torous J, De Choudhury M, et al. Artificial intelligence for mental health care: clinical applications, barriers, facilitators, and artificial wisdom. Biol Psychiatry Cogn Neurosci Neuroimaging 2021; 6: 856–864.
- 19.Abdul Rahman H, Kwicklis M, Ottom M, et al. Machine learning-based prediction of mental well-being using health behavior data from university students. Bioengineering 2023; 10: 575.
- 20.Razavi M, Ziyadidegan S, Mahmoudzadeh A, et al. Machine learning, deep learning, and data preprocessing techniques for detecting, predicting, and monitoring stress and stress-related mental disorders: scoping review. JMIR Ment Health 2024; 11: e53714.
- 21.Bader M, Abdelwanis M, Maalouf M, et al. Detecting depression severity using weighted random forest and oxidative stress biomarkers. Sci Rep 2024; 14: 16328.
- 22.Yang CY, Chen YZ. Support vector machine classification of patients with depression based on resting-state electroencephalography. Asian Biomed Res Rev News 2024; 18: 212–223.
- 23.Mardini MT, Khalil GE, Bai C, et al. Identifying adolescent depression and anxiety through real-world data and social determinants of health: machine learning model development and validation. JMIR Ment Health 2025; 12: e66665.
- 24.Torous J, Linardon J, Goldberg SB, et al. The evolving field of digital mental health: current evidence and implementation issues for smartphone apps, generative artificial intelligence, and virtual reality. World Psychiatry 2025; 24: 156–174.
- 25.Jaworska N, de la Salle S, Ibrahim MH, et al. Leveraging machine learning approaches for predicting antidepressant treatment response using electroencephalography (EEG) and clinical data. Front Psychiatry 2019; 9: 768.
- 26.Abbas Z, Rehman MU, Tayara H, et al. Ori-explorer: a unified cell-specific tool for origin of replication sites prediction by feature fusion. Bioinformatics 2023; 39: btad664.
- 27.D’Souza RF, Mathew M, Amanullah S, et al. Navigating merits and limits on the current perspectives and ethical challenges in the utilization of artificial intelligence in psychiatry – an exploratory mixed methods study. Asian J Psychiatr 2024; 97: 104067.
- 28.Li Z, An Z, Cheng W, et al. MHA: a multimodal hierarchical attention model for depression detection in social media. Health Inf Sci Syst 2023; 11: 6.
- 29.Matcham F, Leightley D, Siddi S, et al. Remote assessment of disease and relapse in major depressive disorder (RADAR-MDD): recruitment, retention, and data availability in a longitudinal remote measurement study. BMC Psychiatry 2022; 22: 136.
- 30.Zhang Z, Zhang S, Ni D, et al. Multimodal sensing for depression risk detection: integrating audio, video, and text data. Sensors 2024; 24: 3714.
- 31.Huang X, Wang F, Gao Y, et al. Depression recognition using voice-based pre-training model. Sci Rep 2024; 14: 12734.
- 32.Liu L, Liu L, Wafa HA, et al. Diagnostic accuracy of deep learning using speech samples in depression: a systematic review and meta-analysis. J Am Med Inform Assoc 2024; 31: 2394–2404.
- 33.Seneviratne N, Espy-Wilson C. Speech based depression severity level classification using a multi-stage dilated CNN-LSTM model. arXiv preprint 2021; 2104.04195.
- 34.Plank L, Zlomuzica A. Reduced speech coherence in psychosis-related social media forum posts. Schizophrenia 2024; 10: 60.
- 35.Worlikar H, Coleman S, Kelly J, et al. Mixed reality platforms in telehealth delivery: scoping review. JMIR Biomed Eng 2023; 8: e42709.
- 36.Aina J, Akinniyi O, Rahman MM, et al. A hybrid learning-architecture for mental disorder detection using emotion recognition. IEEE Access 2024; 12: 91410–91425.
- 37.Li T, Zhang X, Wang C, et al. Facial expression analysis using convolutional neural network for drug-naive and chronic schizophrenia. J Psychiatr Res 2025; 181: 225–236.
- 38.Hadjar H, Vu B, Hemmje M. Therasense: deep learning for facial emotion analysis in mental health teleconsultation. Electronics 2025; 14: 422.
- 39.Kraack K. A multimodal emotion recognition system: integrating facial expressions, body movement, speech, and spoken language. arXiv preprint 2024; 2412.17907.
- 40.Sadeghi M, Richer R, Egger B, et al. Harnessing multimodal approaches for depression detection using large language models and facial expressions. npj Mental Health Res 2024; 3: 66.
- 41.Udahemuka G, Djouani K, Kurien AM. Multimodal emotion recognition using visual, vocal and physiological signals: a review. Appl Sci 2024; 14: 8071.
- 42.Goyal S, Dutta R, Dev S, et al. Mindlift: AI-powered mental health assessment for students. Neurosci Informat 2025; 5: 100208.
- 43.Flores R, Tlachac ML, Shrestha A, et al. Wavface: a multimodal transformer-based model for depression screening. IEEE J Biomed Health Inform 2025; 29: 3632–3641.
- 44.Abbas Z, Rehman MU, Tayara H, et al. m5c-seq: machine learning-enhanced profiling of RNA 5-methylcytosine modifications. Comput Biol Med 2024; 182: 109087.
- 45.Steurer B, Vanhaelen Q, Zhavoronkov A. Multimodal transformers and their applications in drug target discovery for aging and age-related diseases. J Gerontol A Biol Sci Med Sci 2024; 79: glae006.
- 46.Benrouba F, Boudour R. Emotional sentiment analysis of social media content for mental health safety. Soc Netw Anal Min 2023; 13: 17.
- 47.Hur JK, Heffner J, Feng GW, et al. Language sentiment predicts changes in depressive symptoms. Proc Natl Acad Sci 2024; 121: e2321321121.
- 48.Merayo N, Ayuso-Lanchares A, González-Sanguino C. Machine learning and natural language processing to assess the emotional impact of influencers’ mental health content on Instagram. PeerJ Comput Sci 2024; 10: e2251.
- 49.Zhang T, Yang K, Ananiadou S. Sentiment-guided transformer with severity-aware contrastive learning for depression detection on social media. In: Proceedings of the 22nd workshop on biomedical natural language processing and BioNLP shared tasks, 2023, pp.114–126.
- 50.Hossain MM, Hossain MS, Mridha MF, et al. Multi-task opinion enhanced hybrid BERT model for mental health analysis. Sci Rep 2025; 15: 3332.
- 51.Vale L, Lee A. Advancing medical diagnosis: enhancing sentiment analysis in electronic medical records with transformer models. J Comput Technol Software 2024; 3.
- 52.Klos MC, Escoredo M, Joerin A, et al. Artificial intelligence-based chatbot for anxiety and depression in university students: pilot randomized controlled trial. JMIR Form Res 2021; 5: e20678.
- 53.He Y, Yang L, Be BW, et al. Mental health chatbot for young adults with depressive symptoms: a single-blind, three-arm, randomized controlled trial. J Med Internet Res 2022; 24. DOI: 10.2196/41504.
- 54.Li H, Zhang R, Lee YC, et al. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. NPJ Digit Med 2023; 6: 236.
- 55.Chaturvedi J, Stewart R, Ashworth M, et al. Distributions of recorded pain in mental health records: a natural language processing based study. BMJ Open 2024; 14: e079923.
- 56.Scharp D, Hobensack M, Davoudi A, et al. Natural language processing applied to clinical documentation in post-acute care settings: a scoping review. J Am Med Dir Assoc 2024; 25: 69–83.
- 57.Vance LA, Way L, Kulkarni D, et al. Natural language processing to identify suicidal ideation and anhedonia in major depressive disorder. BMC Med Inform Decis Mak 2025; 25: 20.
- 58.Pan J, Fang W, Zhang Z, et al. Multimodal emotion recognition based on facial expressions, speech, and EEG. IEEE Open J Eng Med Biol 2023; 5: 396–403.
- 59.Li K, Cardoso C, Moctezuma-Ramirez A, et al. Heart rate variability measurement through a smart wearable device: another breakthrough for personal health monitoring? Int J Environ Res Public Health 2023; 20: 7146.
- 60.Rashid N, Mortlock T, Al Faruque MA. Stress detection using context-aware sensor fusion from wearable devices. IEEE Internet Things J 2023; 10: 14114–14127.
- 61.Hassan L, Milton A, Sawyer C, et al. Utility of consumer-grade wearable devices for inferring physical and mental health outcomes in severe mental illness: systematic review. JMIR Ment Health 2025; 12: e65143.
- 62.Malgaroli M, Hull TD, Zech JM, et al. Natural language processing for mental health interventions: a systematic review and research framework. Transl Psychiatry 2023; 13: 309.
- 63.Naderbagi A, Loblay V, Zahed IUM, et al. Cultural and contextual adaptation of digital health interventions: narrative review. J Med Internet Res 2024; 26: e55130.
- 64.Karimzadeh D, Saeedi A. AI for mental health assessment and intervention: a systematic review. Int J Modern Achiev Sci Eng Technol 2024; 2: 96–104.
- 65.Ajayi R. AI-powered innovations for managing complex mental health conditions and addiction treatments. Int Res J Mod Eng Technol Sci 2025; 7: 87.
- 66.Abbas SR, Abbas Z, Zahir A, et al. Federated learning in smart healthcare: a comprehensive review on privacy, security, and predictive analytics with IoT integration. Healthcare 2024; 12: 2587.
- 67.Kargarandehkordi A, Li S, Lin K, et al. Fusing wearable biosensors with artificial intelligence for mental health monitoring: a systematic review. Biosensors 2025; 15: 202.
- 68.Morgiève M, Yasri D, Genty C, et al. Acceptability and satisfaction with emma, a smartphone application dedicated to suicide ecological assessment and prevention. Front Psychiatry 2022; 13: 952865.
- 69.Colombo D, Fernández-Álvarez J, Patané A, et al. Current state and future directions of technology-based ecological momentary assessment and intervention for major depressive disorder: a systematic review. J Clin Med 2019; 8: 465.
- 70.Diep B, Stanojevic M, Novikova J. Multi-modal deep learning system for depression and anxiety detection. arXiv preprint 2022; 2212.14490.
- 71.Narayan S, Chaurasia S, Waize S, et al. Machine learning application in detecting mental health issues using social media. SSRN preprint 2025. https://papers.ssrn.com/abstract=5192456.
- 72.Büscher R, Winkler T, Mocellin J, et al. A systematic review on passive sensing for the prediction of suicidal thoughts and behaviors. npj Mental Health Res 2024; 3: 42.
- 73.Olawade DB, Wada OZ, Odetayo A, et al. Enhancing mental health with artificial intelligence: current trends and future prospects. J Med Surg Public Health 2024; 3: 100099.
- 74.Yao X, Mikhelson M, Watkins SC, et al. Development and evaluation of three chatbots for postpartum mood and anxiety disorders. arXiv preprint 2023; 2308.07407.
- 75.Suharwardy S, Ramachandran M, Leonard SA, et al. Feasibility and impact of a mental health chatbot on postpartum mental health: a randomized controlled trial. AJOG Global Rep 2023; 3: 100165.
- 76.Balcombe L. AI chatbots in digital mental health. Informatics 2023; 10: 82.
- 77.Farzan M, Ebrahimi H, Pourali M, et al. Artificial intelligence-powered cognitive behavioral therapy chatbots: a systematic review. Iran J Psychiatry 2025; 20: 102–110.
- 78.Kruger A, Chan E, Zhang S. Verification, monitoring and responsible reporting in an age of information disorder: a guide for practitioners in Southeast Asia. Available upon request or through institutional repository, 2022.
- 79.Sofyan S, Sofyan AS, Mansyur A. Evaluating Indonesian Islamic financial technology scholarly publications: a bibliometric analysis. IKONOMIKA: J Ekonomi Dan Bisnis Islam 2022; 7: 233–256.
- 80.Ndikumana F, Izabayo J, Kalisa J, et al. Machine learning-based predictive modelling of mental health in Rwandan youth. Sci Rep 2025; 15: 1–14.
- 81.Madububambachu U, Ukpebor A, Ihezue U. Machine learning techniques to predict mental health diagnoses: a systematic literature review. Clin Pract Epidemiol Ment Health 2024; 20: e17450179315688.
- 82.Marriwala N, Chaudhary D. A hybrid model for depression detection using deep learning. Measurement: Sensors 2023; 25: 100587.
- 83.Zhang T, Schoene AM, Ji S, et al. Natural language processing applied to mental illness detection: a narrative review. NPJ Digit Med 2022; 5: 46.
- 84.Khoo LS, Lim MK, Chong CY, et al. Machine learning for multimodal mental health detection: a systematic review of passive sensing approaches. Sensors 2024; 24: 348.
- 85.Fiske A, Henningsen P, Buyx A. Your robot therapist will see you now: ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. J Med Internet Res 2019; 21: e13216.
- 86.Auf H, Svedberg P, Nygren J, et al. The use of AI in mental health services to support decision-making: scoping review. J Med Internet Res 2025; 27: e63548.
- 87.Akre S, Seok D, Douglas C, et al. Advancing digital sensing in mental health research. npj Digit Med 2024; 7: 362.
- 88.Gomes N, Pato M, Lourenco AR, et al. A survey on wearable sensors for mental health monitoring. Sensors 2023; 23: 1330.
- 89.Wu Z, Wu H, Fang K, et al. A transformer-based deep learning model for sleep apnea detection and application on ringconn smart ring. In: Proceedings of the 2024 IEEE international symposium on circuits and systems (ISCAS), 2024, pp.1–5. IEEE. DOI: 10.1109/ISCAS49353.2024.10319605.
- 90.Wang M, Chen C, Wu H, et al. Will smart ring be next wave of wearables? In: Proceedings of the 2024 IEEE biomedical circuits and systems conference (BioCAS), 2024, pp.1–5. IEEE. DOI: 10.1109/BioCAS58943.2024.10330352.
- 91.Beames JR, Han J, Shvetcov A, et al. Use of smartphone sensor data in detecting and predicting depression and anxiety in young people (12–25 years): a scoping review. Heliyon 2024; 10. DOI: 10.1016/j.heliyon.2024.e28079.
- 92.Lamichhane B, Moukaddam N, Sabharwal A. Mobile sensing-based depression severity assessment in participants with heterogeneous mental health conditions. Sci Rep 2024; 14: 18808.
- 93.Rastpour A, McGregor C. Predicting patient wait times by using highly deidentified data in mental health care: enhanced machine learning approach. JMIR Ment Health 2022; 9: e38428.
- 94.Garrido S, Oliver E, Chmiel A, et al. Encouraging help-seeking and engagement in a mental health app: what young people want. Front Digit Health 2022; 4: 1045765.
- 95.Hammoud R, Tognin S, Smythe M, et al. Smartphone-based ecological momentary assessment reveals an incremental association between natural diversity and mental wellbeing. Sci Rep 2024; 14: 7051.
- 96.Balli M, Dogan AE, Senol SH, et al. Machine learning based identification of suicidal ideation using non-suicidal predictors in a university mental health clinic. Sci Rep 2025; 15: 13843.
- 97.Lundberg SM, Lee SI. A unified approach to interpreting model predictions. Adv Neural Inf Process Syst 2017; 30.
- 98.Mautang T. The global and cultural context of using AI for mental health. J Public Health 2023; 46: e343.
- 99.Topol E. Deep medicine: how artificial intelligence can make healthcare human again. Basic Books, 2019.
- 100.Shumate J. Governing AI in mental health: 50-state legislative review. JMIR Ment Health 2025; 12. DOI: 10.2196/80739.
- 101.Rodrigues F. Semi-supervised and ensemble learning to predict work-related stress. J Intell Inf Syst 2024; 64: 77–90.
- 102.Yan W. Challenges for artificial intelligence in recognizing mental disorders. Diagnostics 2023; 13: 2.
- 103.Naslund JA. Technology use and interest in digital apps for mental health promotion and lifestyle intervention among young adults with serious mental illness. J Affect Disord Rep 2021; 6: 100227.
- 104.Gao F. GIVL: improving geographical inclusivity of vision-language models with pre-training methods. arXiv preprint 2023.
- 105.Looi JC, Looi RC, Maguire PA, et al. Psychiatric electronic health records in the era of data breaches – what are the ramifications for patients, psychiatrists and healthcare systems? Australas Psychiatry 2024; 32: 121–124.
- 106.Javaid M, Haleem A, Singh RP, et al. Towards insighting cybersecurity for healthcare domains: a comprehensive review of recent practices and trends. Cyber Secur Appl 2023; 1: 100016.
- 107.Sig HBD, Model GHLC, Leaders YE, et al. Artificial intelligence and cybersecurity in healthcare, 2023. Unpublished report.
- 108.Osamika D, Adelusi BS, Kelvin-Agwu MTC, et al. A systematic review of security, privacy, and compliance challenges in electronic health records: current practices and future directions. World Scientific News 2024; 190: 1–20.
- 109.Patel P. Chronovault – human AI collaborative mental health treatment. Authorea Preprints 2025. https://www.authorea.com/users/123456/articles/chronovault-human-ai-collaborative-mental-health-treatment.
- 110.Piispanen JR, Myllyviita T, Vakkuri V, et al. Smoke screens and scapegoats: the reality of general data protection regulation compliance – privacy and ethics in the case of Replika AI. arXiv preprint 2024; 2411.04490. https://arxiv.org/abs/2411.04490.
- 111.Dewitte P. Better alone than in bad company: addressing the risks of companion chatbots through data protection by design. Comput Law Secur Rev 2024; 54: 106019.
- 112.Witkowski K, Okhai R, Neely SR. Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care. BMC Med Ethics 2024; 25: 74.
- 113.Shen J, DiPaola D, Ali S, et al. Empathy toward artificial intelligence versus human experiences and the role of transparency in mental health and social support chatbot design: comparative study. JMIR Ment Health 2024; 11: e62679.
- 114.Grataloup A, Kurpicz-Briki M. A systematic survey on the application of federated learning in mental state detection and human activity recognition. Front Digit Health 2024; 6: 1495999.
- 115.Khalil SS, Tawfik NS, Spruit M. Exploring the potential of federated learning in mental health research: a systematic literature review. Appl Intell 2024; 54: 1619–1636.
- 116.Amini M, Jesus M, Fanaei Sheikholeslami D, et al. Artificial intelligence ethics and challenges in healthcare applications: a comprehensive review in the context of the European GDPR mandate. Mach Learn Knowl Extr 2023; 5: 1023–1035.
- 117.Hanna M, Pantanowitz L, Jackson B, et al. Ethical and bias considerations in artificial intelligence/machine learning. Mod Pathol 2025; 38: 100686.
- 118.Alanzi T, Alsalem AA, Alzahrani H, et al. AI-powered mental health virtual assistants’ acceptance: an empirical study on influencing factors among generations X, Y, and Z. Cureus 2023; 15: e0.
- 119.Rai S, Stade EC, Giorgi S, et al. Key language markers of depression on social media depend on race. Proc Natl Acad Sci 2024; 121: e2319837121.
- 120.Weisenburger RL, Mullarkey MC, Labrada J, et al. Conversational assessment using artificial intelligence is as clinically useful as depression scales and preferred by users. J Affect Disord 2024; 351: 489–498.
- 121.Heinz MV, Bhattacharya S, Trudeau B, et al. Testing domain knowledge and risk of bias of a large-scale general artificial intelligence model in mental health. Digit Health 2023; 9: 20552076231170499.
- 122.Ueda D, Kakinuma T, Fujita S, et al. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol 2024; 42: 3–15.
- 123.Saeidnia HR, Hashemi Fotami SG, Lund B, et al. Ethical considerations in artificial intelligence interventions for mental health and well-being: ensuring responsible implementation and impact. Soc Sci 2024; 13: 381.
- 124.Koçak B, Ponsiglione A, Stanzione A, et al. Bias in artificial intelligence for medical imaging: fundamentals, detection, avoidance, mitigation, challenges, ethics, and prospects. Diagn Interv Radiol 2025; 31: 75.
- 125.Okolo CT. AI in the “real world”: examining the impact of AI deployment in low-resource contexts. arXiv preprint 2020; 2012.01165. https://arxiv.org/abs/2012.01165.
- 126.Wu Y, Zhang S, Li P. Improvement of multimodal emotion recognition based on temporal-aware bi-direction multi-scale network and multi-head attention mechanisms. Appl Sci 2024; 14: 3276.
- 127.Adadi A, Berrada M. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 2018; 6: 52138–52160.
- 128.Karine K, Marlin B. Using LLMs to improve RL policies in personalized health adaptive interventions. In: Proceedings of the second workshop on patient-oriented language processing (CL4Health), 2025, pp.137–147.
- 129.Morley J, Machado CC, Burr C, et al. The ethics of AI in health care: a mapping review. Soc Sci Med 2020; 260: 113172.
- 130.Joyce DW, Kormilitzin A, Smith KA, et al. Explainable artificial intelligence for mental health through transparency and interpretability for understandability. npj Digit Med 2023; 6: 6.
- 131.Tang H, Miri Rekavandi A, Rooprai D, et al. Analysis and evaluation of explainable artificial intelligence on suicide risk assessment. Sci Rep 2024; 14: 6163.
- 132.Loh HW, Ooi CP, Seoni S, et al. Application of explainable artificial intelligence for healthcare: a systematic review of the last decade (2011–2022). Comput Methods Programs Biomed 2022; 226: 107161.
- 133.Saranya A, Subhashini R. A systematic review of explainable artificial intelligence models and applications: recent developments and future trends. Decis Anal J 2023; 7: 100230.
- 134.Mertes S, Huber T, Weitz K, et al. GANterfactual – counterfactual explanations for medical non-experts using generative adversarial learning. Front Artif Intell 2022; 5: 825565.
- 135.Alsaleh MM, Allery F, Choi JW, et al. Prediction of disease comorbidity using explainable artificial intelligence and machine learning techniques: a systematic review. Int J Med Inform 2023; 175: 105088.
- 136.Scarpato N, Nourbakhsh A, Ferroni P, et al. Evaluating explainable machine learning models for clinicians. Cognit Comput 2024; 16: 1436–1446.
- 137.Miele F, Godoy Jr C, van Deen WK. “Being informed about my health without going to a doctor’s appointment”: doctors’ and patients’ narratives about a future with AI. In: Reframing algorithms: STS perspectives to healthcare automation. Cham: Springer International Publishing, 2024, pp.123–145.
- 138.Rosenbacke R, Melhus A, McKee M, et al. How explainable artificial intelligence can increase or decrease clinicians’ trust in AI applications in health care: systematic review. JMIR AI 2024; 3: e53207.
- 139.Ye J, Woods D, Jordan N, et al. The role of artificial intelligence for the application of integrating electronic health records and patient-generated data in clinical decision support. In: AMIA summits on translational science proceedings. American Medical Informatics Association, 2024, p.459.
- 140.Tavory T. Regulating AI in mental health: ethics of care perspective. JMIR Ment Health 2024; 11: e58493.
- 141.Sai S, Gaur A, Sai R, et al. Generative AI for transformative healthcare: a comprehensive study of emerging models, applications, case studies and limitations. IEEE Access 2024; 12: 31078–31106.
- 142.Kolding S, Lundin RM, Hansen L, et al. Use of generative artificial intelligence (AI) in psychiatry and mental health care: a systematic review. Acta Neuropsychiatr 2024; 1–14. DOI: 10.1017/neu.2024.5.
- 143.del Río Diéguez M, Jiménez CP, Ávila BSA. Art therapy as a therapeutic resource integrated into mental health programmes: components, effects and integration pathways. Arts Psychother 2024; 91: 102215.
- 144.Yan J, Li P, Du C, et al. Multimodal emotion recognition based on facial expressions, speech, and body gestures. Electronics 2024; 13: 3756.
- 145.Hassan N, Slight R, Bimpong K, et al. Systematic review to understand users’ perspectives on AI-enabled decision aids to inform shared decision making. npj Digit Med 2024; 7: 332.