Abstract
Background
The rising prevalence of mental disorders, coupled with limited access to mental health services, underscores the urgent need for innovative solutions. Artificial Intelligence (AI) offers transformative potential in managing mental health conditions through multimodal data analysis.
Objective
This study explores emerging applications of AI in early detection, personalized treatment, and the prevention of symptom escalation in mental disorders.
Methods
A narrative review was conducted using comprehensive searches of PubMed, Scopus, and IEEE Xplore databases (2015–2025). Selected sources included studies on natural language processing (NLP), deep learning, and the analysis of multimodal data (eg, voice, text, and biosensor inputs). A qualitative synthesis was employed to identify key patterns, challenges, and innovations.
Findings
AI enhances early detection through concepts such as the “digital psychological signature”, and some studies report high performance (reported accuracies vary widely, eg, up to ~91% in selected cohorts). However, many high-accuracy reports derive from single-site or limited datasets with variable external validation; therefore, these figures should be interpreted cautiously. We discuss study-specific limitations (sample size, validation methods, and population diversity) in the Methods and Critical Appraisal sections.
Conclusion
AI provides a patient-centered, preventive framework for reimagining mental health care. However, its effective integration requires robust ethical standards and digital infrastructure. Ethical considerations are critically linked to clinical implementation, particularly regarding privacy, fairness, and transparency in AI-assisted decision-making.
Keywords: artificial intelligence, early detection, personalized treatment, prevention, mental health, digital psychological signature
Graphical Abstract
Introduction
Mental disorders affect nearly one billion people worldwide (WHO, 2022), yet access to timely diagnosis and effective treatment remains profoundly unequal, particularly in low- and middle-income regions where the psychiatrist-to-population ratio often falls below one per 100,000.1,2 Despite growing awareness, late detection, social stigma, and limited service capacity continue to widen the global treatment gap. At the same time, rapid digitalization and the proliferation of mobile and wearable technologies have created new opportunities for data-driven mental health innovation. Artificial intelligence (AI), encompassing methods such as natural language processing (NLP), deep learning, and multimodal data fusion, offers unique potential to detect, predict, and monitor mental health symptoms before they escalate.
AI-driven systems can analyze textual, auditory, and biometric inputs to identify subtle behavioral markers of mental distress. For instance, NLP techniques can detect depressive signals from linguistic features in social media content,3 while deep learning models applied to wearable data have been used to forecast mood changes and relapse episodes.4 By integrating heterogeneous data modalities such as voice, text, and physiology,5 AI enables the construction of more comprehensive and individualized psychological profiles. These approaches increasingly align with clinical care pathways, ranging from early screening and triage to relapse monitoring and prevention, bridging the gap between research and clinical application.
Recent reviews have further explored AI-driven psychotherapy across multiple disorders, highlighting both its potential and limitations. For example, Beg & Verma (2024) and Beg et al provide comprehensive syntheses of digital and AI-based psychotherapy in ADHD, OCD, schizophrenia, and substance use disorders, identifying persistent gaps in methodological rigor and generalizability.6,7 Building on these foundations, the present review extends the discussion toward a broader conceptual synthesis that unifies early detection, personalized intervention, and preventive care within a single AI-enabled ecosystem.
To clarify scope and novelty, this narrative review deliberately focuses on clinically oriented AI applications (screening, diagnosis, and therapy augmentation) rather than general wellness or productivity tools. It draws on sources published between 2015 and 2025, reflecting the decade of most rapid AI progress in healthcare. The narrative approach was selected to accommodate interdisciplinary perspectives and emerging conceptual frameworks that may not fit within rigid systematic inclusion criteria.
Three integrative constructs frame the discussion: (1) the “digital psychological signature”, referring to AI-derived, multimodal behavioral patterns that may signal early risk; (2) “empathetic AI”, encompassing emotion-aware conversational systems that enhance therapeutic engagement; and (3) the “digital mental health ecosystem”, representing an interconnected infrastructure for continuous monitoring and preventive intervention. These constructs are anchored in established fields such as digital phenotyping and affective computing rather than presented as entirely novel entities.
From an ethical and equity perspective, this review also acknowledges key risks associated with dataset bias, limited demographic diversity, and privacy leakage. These issues are further examined in later sections, along with the role of international governance frameworks (eg, WHO, ISO/IEC, and national data protection policies) that can ensure transparency, accountability, and trust in clinical AI deployment.
Accordingly, this review addresses three guiding questions:
(1) What added value do multimodal AI models offer for early detection and diagnosis of mental disorders?
(2) How can empathetic and adaptive AI systems enhance personalization and therapeutic alliance in digital care?
(3) What ethical, regulatory, and infrastructural standards are required to ensure safe, equitable, and clinically responsible use of AI in mental health care?
Methods
This study was conducted as a narrative review with a transparent and reproducible search and selection process, aligning with the SANRA (Scale for the Assessment of Narrative Review Articles) guidelines to improve methodological clarity and reduce selection bias. While narrative reviews allow conceptual flexibility, this review emphasizes transparency in how the literature was searched, selected, and synthesized.
Search Strategy
A structured and transparent search was performed across three major academic databases, PubMed, Scopus, and IEEE Xplore, for publications between 2015 and 2025. This time frame was chosen to capture recent advances in artificial intelligence (AI) technologies and their growing applications in mental health. The search was conducted in May 2025, using combinations of the following keywords and MeSH terms:
“artificial intelligence”, “machine learning”, “deep learning”, “digital phenotyping”, “digital psychological signature”, “empathetic AI”, “chatbot”, “mental health”, “depression”, “bipolar disorder”, “schizophrenia”, “prediction”, and “monitoring”.
Reference lists of included articles and relevant reviews were also screened to ensure comprehensiveness.
Inclusion and Exclusion Criteria
Given the exploratory scope of a narrative review, we included peer-reviewed empirical studies, systematic reviews, narrative reviews, and technical or policy reports that addressed the application of AI in detection, diagnosis, treatment, or prevention of mental health disorders. We excluded non-English papers, conference abstracts without full text, and studies unrelated to psychological or clinical applications of AI. When multiple reports described the same dataset, the most comprehensive or recent version was included.
Selection Process
The lead author screened all titles and abstracts for relevance. Full texts of potentially relevant papers were reviewed and cross-checked by co-authors to ensure consistency. Disagreements were resolved through discussion until consensus was reached. A PRISMA-type flow was not applied due to the conceptual (rather than quantitative) aim of the review.
Data Extraction and Synthesis
For each included study, we extracted details on study design, sample size, AI model type, validation method (internal/external), performance metrics (accuracy, sensitivity, specificity, AUC), and reported limitations.
Findings were synthesized qualitatively and organized under three key thematic domains:
Early detection and diagnosis (eg, speech, facial, and behavioral data through AI-based digital signatures);
Personalized interventions and treatment augmentation (eg, therapeutic chatbots and adaptive algorithms);
Preventive monitoring and relapse prediction (eg, multimodal sensor and wearable data analytics).
Critical Appraisal
We conducted a qualitative critical appraisal of all empirical studies to assess the reliability of reported findings. Evaluation criteria included sample size, demographic diversity, validation strategy, and external generalizability.
Several studies reported high performance metrics (eg, accuracies above 90%) but often relied on small or single-site datasets without external validation. To contextualize these results, we categorized studies as high, moderate, or low reliability based on transparency of reporting and risk of bias.
This appraisal provides a balanced understanding of the robustness and limitations of the existing evidence.
Analytical Framework
An interdisciplinary approach was adopted to integrate perspectives from psychology, data science, and ethics. Beyond summarizing evidence, the review introduces a conceptual integration framework, linking AI-driven digital signatures, affective computing, and empathic algorithms, to envision a preventive mental health ecosystem.
Ethics Statement
This review did not involve the collection of new data from human or animal participants. Therefore, institutional ethical approval was not required. All analyzed materials were obtained from publicly available, peer-reviewed scientific literature, and all sources have been appropriately acknowledged and cited.
Critical Appraisal of Included Studies
A structured qualitative appraisal was conducted for empirical studies cited in this review. Key appraisal domains included sample size, study design (prospective vs retrospective), internal and external validation of AI models, demographic diversity, and reporting of limitations. Overall, we found the following patterns:
1. Sample sizes and validation: Several high-accuracy reports (reported accuracies in the range of ~85–92%) were derived from single-site or cohort studies with limited external validation. For example, Lee et al reported high predictive performance in a prospective cohort of 320 patients, but external validation in other populations was not reported. Where external validation was performed, performance typically decreased, indicating possible overfitting in development datasets.4
2. Population diversity: Many studies used geographically or demographically restricted samples (eg, single-country cohorts or convenience samples), limiting generalizability across ethnic and socioeconomic groups.
3. Reporting quality: A proportion of studies did not report key methodological details such as exact train/test splits, cross-validation procedures, or handling of missing data, information necessary to assess reproducibility and bias.
4. Outcome measures and clinical relevance: Reported performance metrics (accuracy, AUC) often lack confidence intervals or calibration metrics; few studies reported prospective clinical impact or cost-effectiveness.
Based on these observations, we categorized the body of evidence as mixed: promising signal and technical feasibility in controlled datasets, but limited evidence of external validity and generalizability, and inconsistent reporting of methodological rigor.
Main Body
Early Detection with AI: The Digital Psychological Signature
Early detection of mental disorders, such as depression, anxiety, or bipolar disorder, is critical for reducing symptom severity and improving treatment outcomes.8 Artificial intelligence (AI), with its ability to analyze complex data and uncover hidden patterns, holds transformative potential in this domain. AI technologies, particularly natural language processing (NLP), deep learning, and multimodal data analysis, enable the identification of psychological symptoms before overt clinical signs emerge. This section explores the applications of these technologies and introduces the innovative concept of the “digital psychological signature”, which can make early detection more personalized and precise.
Natural language processing (NLP), a cornerstone of AI, facilitates the analysis of textual and spoken content to identify indicators of mental disorders. For instance, Balani et al demonstrated that NLP algorithms can detect signs of depression from social media posts, such as those on Twitter, with up to 85% accuracy. By analyzing linguistic features, such as the frequency of negative emotion-related words or reduced social engagement, the study identified patterns associated with depression.3
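As a minimal sketch of this kind of text-based screening, assuming a small labeled corpus and standard scikit-learn components (none of which come from the cited study), the Python example below combines word-frequency features with a logistic regression classifier:

```python
# Illustrative sketch of text-based risk screening; not the pipeline from the cited study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 1 = depressive-signal content, 0 = neutral content
posts = [
    "I feel empty and tired all the time",
    "Nothing matters anymore, I just want to sleep",
    "Had a great run this morning with friends",
    "Excited about the new project at work",
]
labels = [1, 1, 0, 0]

# TF-IDF word/bigram frequencies stand in for linguistic markers such as
# negative emotion terms and reduced social references.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)

# Screening a new, unseen post (toy example only).
print(model.predict_proba(["I can't sleep and nothing feels worth doing"]))
```

In practice, such models require large, demographically diverse corpora and external validation before any clinical use.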
Deep learning, utilizing complex neural networks, excels at detecting nonlinear patterns in large datasets. For example, Lee et al applied deep learning models to data collected from wearable devices, such as smartwatches, to predict symptom exacerbation in bipolar disorder.4 Their findings revealed that changes in sleep patterns, physical activity, and heart rate could predict symptom escalation with up to 90% accuracy, up to a week in advance. This approach, known as “digital phenotyping”, supports continuous, non-invasive monitoring.5
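To make the sequence-modeling idea concrete, the sketch below (Python/PyTorch) defines a small LSTM that maps a week of daily wearable features to a relapse-risk score; the architecture, window length, and feature set are illustrative assumptions, not the model reported by Lee et al:

```python
# Minimal sketch of sequence modeling on wearable data; architecture and features are illustrative.
import torch
import torch.nn as nn

class RelapseRiskLSTM(nn.Module):
    """Maps a window of daily wearable features to a risk score in [0, 1]."""
    def __init__(self, n_features: int = 3, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, days, features), eg, 7 days of [sleep_hours, step_count, resting_hr]
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1]))  # one risk score per patient window

# Hypothetical batch: 4 patients, 7 days, 3 normalized features
model = RelapseRiskLSTM()
window = torch.randn(4, 7, 3)
risk = model(window)
print(risk.shape)  # torch.Size([4, 1])
```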
Multimodal data analysis, which integrates textual, auditory, and biometric data (eg, from wearable sensors), significantly enhances diagnostic accuracy. For instance, Cummins et al combined acoustic analysis (eg, tone of voice and speech rate) with textual data to detect depression symptoms, achieving a diagnostic accuracy of 92%. This study highlighted that multimodal data can capture more complex patterns than single-source data.9 Similarly, recent projects10 have integrated biometric sensor data (eg, sleep patterns) and speech analysis to predict symptom exacerbation in schizophrenia, reporting promising results.
Figure 1 summarizes the reported performance ranges of major AI-based methods used for early detection of psychiatric disorders, based on data synthesized from peer-reviewed studies.3–7
Figure 1.
Accuracy of AI Algorithms in Early Detection of Psychiatric Disorders.
Physiological signal analysis shows reported accuracies ranging between 88% and 91% for depression detection, followed closely by deep learning approaches for bipolar disorder (86–90%) and physiological modeling for schizophrenia (84–88%).
Natural Language Processing (NLP)–based methods for depression detection typically achieve accuracies in the 80–85% range.
These figures should be interpreted with caution, as they reflect heterogeneous datasets and variable validation procedures (eg, internal cross-validation vs external testing). Differences in population diversity, data sources, and outcome labels may influence reported performance.
Data sources: synthesized from representative studies by Balani et al, Cummins et al, Lee et al, and related works cited in Early Detection with AI: The Digital Psychological Signature.3,4,9
Visualization: created by the authors to illustrate relative trends; scales are standardized for interpretability.
Limitations of Reported Performance Metrics
Reported high accuracies (eg, 85–92%) across studies often come from datasets with limited external validation or narrow population sampling. For instance, Lee et al reported high predictive accuracy in a prospective South Korean cohort (n=320) but did not provide multi-country external validation; other studies base high performance on retrospective convenience samples. Crucially, many published algorithms lack reported confidence intervals, calibration measures, or details on external testing, factors that limit confidence in direct clinical translation.4
Innovative Concept: The Digital Psychological Signature
This article introduces the concept of the “digital psychological signature”, which refers to an AI-driven algorithm that integrates an individual’s unique behavioral patterns, such as changes in voice tone, sleep patterns, online activity, or social interactions, to enable early detection of mental disorders. Drawing inspiration from digital phenotyping,8 this concept emphasizes advanced personalization, allowing the creation of tailored psychological profiles for each patient. For instance, a digital psychological signature could combine textual data (eg, analysis of a patient’s text messages), auditory data (eg, variations in voice tone), and biometric data (eg, heart rate patterns from a smartwatch).
By integrating multimodal data into a cohesive AI model, this approach can enhance diagnostic accuracy and facilitate timely interventions.
We position the “digital psychological signature” not as an entirely novel phenomenon but as an integrative advancement of established approaches such as digital phenotyping and multimodal affective computing. Its contribution is primarily conceptual: harmonizing heterogeneous data streams into a clinically useful, personalized profile.
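As a hedged illustration of how such a signature could be assembled computationally, the Python sketch below concatenates per-modality feature vectors into a single classifier input (a simple late-fusion strategy); the feature names, toy data, and model choice are assumptions for illustration, not a published implementation:

```python
# Illustrative late-fusion sketch for a "digital psychological signature"; features are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def build_signature(text_feats, voice_feats, bio_feats):
    """Concatenate per-modality feature vectors into one multimodal profile."""
    return np.concatenate([text_feats, voice_feats, bio_feats])

# Hypothetical per-patient features:
#   text:      negative-word rate, first-person pronoun rate
#   voice:     mean pitch change, speech rate
#   biometric: sleep-duration change, heart-rate variability
X = np.array([
    build_signature([0.12, 0.30], [-0.8, 0.6], [-1.2, 0.4]),
    build_signature([0.02, 0.10], [0.1, 1.0], [0.3, 1.1]),
    build_signature([0.15, 0.28], [-0.6, 0.5], [-0.9, 0.2]),
    build_signature([0.03, 0.12], [0.0, 0.9], [0.2, 0.9]),
])
y = np.array([1, 0, 1, 0])  # toy labels: 1 = elevated early-risk pattern

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict_proba(X[:1]))  # per-patient risk estimate from the fused profile
```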
Figure 2 illustrates the conceptual workflow for constructing a digital psychological signature by integrating heterogeneous data streams, textual (eg, messages, social media posts), auditory (eg, tone, speech rate), and biometric (eg, heart rate, sleep patterns). Specialized analytic pathways process each input type using methods such as natural language processing (NLP), acoustic signal analysis, and sensor data interpretation. These outputs are then merged within a unified AI model to generate a personalized psychological profile capable of identifying early signs of mental health disorders and issuing timely alerts.
Figure 2.
Process of Generating a “Digital Psychological Signature” Using an Integrated AI Model.
The framework reflects multimodal fusion approaches reported in representative studies (Balani et al, 2015; Cummins et al, 2015; Lee et al, 2023; Ceccarelli & Mahmoud, 2022) and emphasizes interpretability and clinical applicability.3,4,9
Data sources: synthesized from peer-reviewed studies cited in Early Detection with AI: The Digital Psychological Signature. Visualization created by the authors for illustrative purposes.
Real-World Examples
Emerging examples illustrate the potential of this approach. For instance, Ceccarelli and Mahmoud (2022) utilized machine learning models to integrate multimodal data, including speech, wearable device data, and online activity, achieving an 88% accuracy in predicting symptom exacerbation in bipolar disorder. This study demonstrated that combining multimodal data not only enhances accuracy but also enables more personalized detection.10
Proposed Innovation
The primary innovation proposed in this section is the broader adoption of multimodal data to create personalized psychological profiles. Unlike traditional methods that rely on static clinical criteria (eg, DSM-5 questionnaires), the digital psychological signature enables dynamic and continuous detection. This approach can be implemented globally using existing technologies, such as smartwatches and mobile applications. For example, an AI-equipped app could analyze real-time auditory data (from phone calls), textual data (from messages), and biometric data (from wearable sensors) to issue early warnings to patients or clinicians. Inspired by recent studies,5,10 this concept could shift mental health care from a reactive to a preventive paradigm.
Personalized Treatments: Empathetic AI
Personalized treatments in mental health, tailored to individual needs, play a crucial role in improving therapeutic outcomes. Artificial intelligence (AI), with its advanced capabilities in processing emotional and behavioral data, enables the delivery of dynamic, individualized interventions. This section explores the applications of therapeutic chatbots (eg, Woebot and Tess) and virtual reality (VR) in treating disorders such as post-traumatic stress disorder (PTSD) and phobias, while introducing the innovative concept of “empathetic AI”, a system that delivers more human-like therapeutic responses by analyzing emotional data, such as voice tone and facial expressions.
Therapeutic Chatbots
Therapeutic chatbots, such as Woebot and Tess, leverage natural language processing (NLP) and cognitive-behavioral therapy (CBT)-based techniques to provide scalable psychological support. A study by Tong et al investigated the effectiveness of a topic-based chatbot in reducing anxiety and depression symptoms. This randomized controlled trial with 285 participants found that 10 days of chatbot use led to significant improvements in self-care efficacy (P<0.01) and a reduction in anxiety symptoms (Cohen’s d=0.39).11 Similarly, Fitzpatrick et al evaluated Woebot’s effectiveness compared to an information-only control group, reporting a significant reduction in depression symptoms (Cohen’s d=0.44).12 Inkster et al further demonstrated that frequent users of the Wysa chatbot experienced greater reductions in depression symptoms (Cohen’s d=0.47).13
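For context on the effect sizes reported in this section, Cohen’s d is the standardized difference between intervention and control group means (the standard definition, not a study-specific formula):

```latex
d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^{2} + (n_2 - 1)\,s_2^{2}}{n_1 + n_2 - 2}}
```

By convention, values around 0.2, 0.5, and 0.8 are read as small, medium, and large effects, so the chatbot effects cited here fall in the small-to-medium range.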
Tess, another CBT-based chatbot, was evaluated by Fulmer et al for reducing anxiety symptoms among university students. The results showed that consistent use of Tess over four weeks led to significant reductions in anxiety symptoms (P<0.05) and increased psychological resilience.14 By offering guided conversations, mindfulness exercises, and educational tools, these chatbots enhance access to therapy in low-resource settings or for individuals who avoid in-person interactions.15
Figure 3 presents a comparative summary of the reported effectiveness of four therapeutic chatbot interventions in reducing psychological symptoms: Woebot (depression), Woebot (anxiety), Tess (anxiety), and Wysa (depression), based on peer-reviewed clinical studies. Effectiveness is expressed as the magnitude of symptom reduction (Cohen’s d). Among the compared systems, Wysa for depression demonstrates the highest effect size (d = 0.47), followed by Woebot for depression (d = 0.44), suggesting stronger outcomes for depressive symptoms than for anxiety management. These data synthesize findings from randomized and real-world studies to illustrate relative therapeutic performance. Data sources: Fitzpatrick et al, Fulmer et al, Inkster et al, Tong et al. Visualization created by the authors for interpretive comparison.11–14
Figure 3.
Comparative Effectiveness of Therapeutic Chatbots in Reducing Psychological Symptoms.
Virtual Reality in Exposure Therapy
Virtual reality (VR) has gained prominence as a complementary tool for exposure therapy in disorders such as post-traumatic stress disorder (PTSD) and specific phobias. VR enables the creation of controlled, simulated environments where patients can gradually and safely confront fear-inducing stimuli. A study by Jonathan et al demonstrated that VR-based exposure therapy for patients with trauma-related PTSD resulted in a 35% reduction in symptom severity (based on the CAPS-5 scale) after eight weeks. The study utilized VR simulations to recreate traumatic scenes in a safe setting, comparing its effectiveness to standard cognitive-behavioral therapy (CBT).16 Similarly, Botella et al explored VR’s application in treating specific phobias, such as fear of heights, reporting significant improvements in phobia severity (P<0.01). This technology offers interactive, controlled experiences, allowing the intensity of exposure to be tailored to each patient’s needs.17
Innovative Concept: Empathetic AI
This study introduces the concept of “empathetic AI”, a system that integrates emotional data analysis (eg, voice tone, facial expressions, and speech patterns) with advanced language models to deliver more human-like therapeutic responses. Unlike traditional chatbots that rely solely on text or predefined patterns, empathetic AI can analyze a user’s emotional state through multimodal data (eg, video, audio, and biometric sensors) and generate responses tailored to their emotional needs.
For instance, a study by de Gennaro et al showed that chatbots equipped with emotion-detection algorithms provided effective emotional support for individuals experiencing social isolation, achieving a user satisfaction rate of 4.3 out of 5.18
“Empathetic AI” should be understood as an evolution of affective computing and emotion-aware conversational agents rather than a wholly new paradigm. The novelty lies in integrating real-time biometric cues with advanced language models to enhance therapeutic adaptivity while recognizing the foundational work in affective computing and existing chatbot research.
Figure 4 illustrates the operational framework of an “Empathetic AI” system designed to deliver emotionally responsive mental health interventions.
Figure 4.
Emotion-Aware AI Workflow for Personalized Therapeutic Interventions.
Multimodal emotional inputs, including vocal tone, facial expressions, and physiological signals (eg, heart rate, skin conductance), are processed through affective computing algorithms and advanced language models.
These analyses dynamically inform the selection and delivery of therapeutic content, such as Cognitive Behavioral Therapy (CBT) exercises and mindfulness interventions, which are adapted in real time to the user’s emotional state.
By integrating real-time emotion recognition with adaptive dialogue systems, this workflow transcends static chatbot scripts and exemplifies how empathetic AI can enhance therapeutic empathy and adaptability, particularly for individuals with limited access to human clinicians.
Data sources: synthesized and conceptually adapted from representative studies, including de Gennaro et al, Laymouna et al, and Eskandar (2024).18–20 Visualization created by the authors for illustrative purposes.
Recent Studies
Recent studies (Laymouna et al, 2024) have shown that chatbots equipped with empathetic capabilities, such as voice tone recognition and personalized responses, can enhance users’ sense of trust and emotional connection. These chatbots, leveraging machine learning models to analyze emotional data, adjust therapeutic content (eg, CBT exercises or relaxation techniques) based on the user’s anxiety or depression severity.19 For example, the Wysa chatbot, combining NLP and audio data analysis, has proven effective in reducing anxiety symptoms in patients with chronic conditions like diabetes.11
Proposed Innovation
The primary innovation proposed in this section is the integration of advanced language models (eg, GPT-4 or similar) with biometric sensors to deliver dynamic, empathetic treatments. This approach could involve using real-time data on heart rate, sleep patterns, or voice tone changes (via smartphone microphones) to tailor interventions. For instance, an empathetic AI system might detect acute anxiety (based on elevated heart rate and anxious voice tone) and automatically provide breathing exercises or guided CBT dialogues. Inspired by recent studies,18 this concept could transform mental health treatments from static to dynamic, responsive interventions.
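A minimal sketch of this adaptive logic is shown below (Python); the thresholds, input scores, and intervention mapping are hypothetical illustrations rather than a validated clinical protocol:

```python
# Illustrative sketch of emotion-aware intervention selection; thresholds and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class EmotionalState:
    heart_rate: float       # beats per minute from a wearable sensor
    voice_arousal: float    # 0-1 arousal score from an acoustic emotion model
    text_sentiment: float   # -1 (negative) to +1 (positive) from NLP analysis

def select_intervention(state: EmotionalState) -> str:
    """Map a detected emotional state to a candidate digital intervention."""
    if state.heart_rate > 100 and state.voice_arousal > 0.7:
        return "guided breathing exercise"        # acute anxiety pattern
    if state.text_sentiment < -0.5:
        return "CBT thought-reframing dialogue"   # persistent negative appraisal
    return "brief mindfulness check-in"           # default supportive content

print(select_intervention(EmotionalState(heart_rate=112, voice_arousal=0.8, text_sentiment=-0.2)))
# -> guided breathing exercise
```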
This approach could also integrate with virtual reality to create more interactive therapeutic environments. For example, a VR system equipped with empathetic AI could analyze a user’s facial expressions via VR cameras and adjust therapeutic content (eg, exposure therapy or mindfulness exercises) in real time. However, challenges such as data privacy and the need for bias-free algorithms must be addressed. Eskandar et al emphasized that AI systems require transparent protocols to protect users’ emotional data and build public trust.20
Prevention and Monitoring: A Digital Mental Health Ecosystem
Preventing the escalation of mental disorder symptoms and continuously monitoring patients’ conditions are key priorities in mental health care. With growing social, occupational, and economic pressures, the need for scalable, accessible preventive solutions is increasingly urgent.1 Artificial intelligence (AI), with its ability to analyze large datasets and detect subtle patterns, enables real-time monitoring and prevention of mental health crises. This section examines AI’s role in preventing symptom exacerbation, focusing on data from wearable devices like smartwatches, and proposes a “digital mental health ecosystem” that seamlessly integrates AI, biometric sensors, and human interventions.
AI’s Role in Prevention and Monitoring
AI, utilizing data from wearable devices (eg, smartwatches and fitness trackers) and multimodal data analysis, can detect subtle changes in behavioral and physiological patterns that may signal early symptom escalation. For instance, a study by Lee et al demonstrated that machine learning models analyzing biometric sensor data (eg, heart rate, sleep patterns, and physical activity) from smartwatches could predict depressive episodes in bipolar disorder patients with 91% accuracy up to 10 days in advance. The study, involving 320 patients over 12 months, found that changes in heart rate variability (HRV) and reduced physical activity served as predictive indicators.4 Similarly, Abd-alrazaq et al reported in a systematic review that AI-equipped wearable devices, such as Fitbit and Apple Watch, could detect anxiety symptoms with 80–84% accuracy by analyzing physiological data like HRV and sleep patterns.21
Beyond wearables, AI can identify early warning signs through the analysis of textual and routinely collected clinical data. For example, Garriga et al showed that a machine learning model analyzing electronic health records could predict mental health crises, with alerts judged clinically relevant in 64% of cases.22 These tools enable non-invasive, continuous monitoring and can send early alerts to clinicians or patients. Additionally, a recent project combined biometric sensor data and textual data to monitor schizophrenia patients, achieving 89% accuracy in predicting symptom exacerbation; that work used acoustic data (eg, changes in speech rate) and wearable data (eg, movement patterns) to develop a comprehensive predictive model.21
Figure 5 presents a comparative summary of reported accuracy ranges of artificial intelligence (AI) models used to predict four major mental disorders, depression, schizophrenia, anxiety, and bipolar disorder, based on findings synthesized from peer-reviewed studies.4,21–23 Ensemble and deep learning methods demonstrate high predictive performance, with accuracies typically ranging between 84% and 91%. Models predicting depression and schizophrenia show the highest accuracy (around 90%), followed closely by those targeting anxiety and bipolar disorder (approximately 85–88%).
Figure 5.
Accuracy of AI Models in Predicting Mental Disorders.
These data illustrate the promising potential of AI to assist in clinical diagnostics through multimodal data integration (eg, biometric, linguistic, and behavioral signals). However, the reported accuracies should be interpreted cautiously, as they often derive from single-site datasets or limited external validation cohorts.4,21
Data sources: synthesized from Lee et al, Abd-Alrazaq et al, Garriga et al, and Cotes et al. Visualization created by the authors for interpretive comparison.4,21–23
Digital Mental Health Ecosystem
This article proposes the innovative concept of a “digital mental health ecosystem”, an integrated system that combines AI, biometric sensors, and human interventions to deliver preventive care and continuous monitoring. This ecosystem comprises three core components:
Multimodal Data Collection: Utilizing wearable devices (eg, smartwatches to track heart rate, sleep, and physical activity), mobile applications (for self-reported mood and daily activity logs), and social media data (to analyze linguistic patterns and social interactions).
AI Analysis and Prediction: Leveraging machine learning models (eg, logistic regression, recurrent neural networks, and transformer-based models) to identify predictive patterns of symptom exacerbation. These models integrate multimodal data to generate personalized alerts.
Human and Digital Interventions: Combining automated interventions (eg, mindfulness recommendations via apps or therapeutic chatbots) with human interventions (eg, consultations with psychologists or psychiatrists) to provide comprehensive, responsive care.
This ecosystem can be accessed through digital platforms, such as mobile apps or clinical dashboards, by patients and healthcare providers. For instance, an AI-powered app could analyze heart rate and sleep data from a smartwatch, detect mood changes from a user’s text messages, and send notifications to both the patient and clinician if a risk of symptom escalation is identified. This approach, inspired by models proposed in recent studies,23,24 emphasizes the integration of multimodal data for continuous monitoring.
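To illustrate how the three components might connect in software, the Python sketch below wires a simple risk heuristic between a multimodal observation and a notification step; the data fields, weights, threshold, and notification target are assumptions for illustration only:

```python
# Conceptual sketch of an ecosystem alert loop; field names, weights, and threshold are illustrative.
from typing import Callable

def daily_observation(patient_id: str) -> dict:
    """Placeholder for multimodal data collection (wearable, app self-report, text analysis)."""
    return {"sleep_hours": 4.5, "hrv_ms": 28.0, "mood_self_report": 2, "negative_word_rate": 0.14}

def risk_score(obs: dict) -> float:
    """Stand-in for a trained model; here a simple weighted heuristic."""
    score = 0.0
    score += 0.3 * (obs["sleep_hours"] < 5)          # disrupted sleep
    score += 0.3 * (obs["hrv_ms"] < 30)              # reduced heart-rate variability
    score += 0.2 * (obs["mood_self_report"] <= 2)    # low self-reported mood
    score += 0.2 * (obs["negative_word_rate"] > 0.1) # negative language in messages
    return score

def monitor(patient_id: str, notify: Callable[[str, str], None], threshold: float = 0.6) -> None:
    obs = daily_observation(patient_id)
    score = risk_score(obs)
    if score >= threshold:
        notify(patient_id, f"Elevated relapse risk ({score:.2f}); consider early intervention.")

monitor("patient-001", notify=lambda pid, msg: print(pid, msg))
```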
Figure 6 illustrates the conceptual architecture of a Digital Mental Health Ecosystem, an integrated, AI-enabled framework designed for continuous and preventive mental health care. The ecosystem combines multimodal data streams, including biometric signals (eg, heart rate variability, sleep, physical activity) from wearable devices, self-reported and textual inputs from mobile applications, and linguistic or behavioral cues from digital interactions and social media.
Figure 6.
Digital Mental Health Ecosystem.
These heterogeneous data sources are processed through advanced machine learning and deep learning models (eg, recurrent and transformer-based networks) to predict potential symptom exacerbations in real time. Based on these predictions, the system delivers timely and personalized interventions, either automated (eg, chatbot-based cognitive behavioral therapy or mindfulness recommendations) or clinician-guided (eg, teleconsultations).
This framework reflects the integration pathways discussed in recent empirical studies and highlights how combining continuous monitoring with adaptive intervention can create a scalable, personalized, and preventive mental health infrastructure.
Data sources: synthesized from peer-reviewed studies.4,23,24 Visualization created by the authors for conceptual representation.
Real-World Examples
Recent projects (2025) have provided notable examples of AI applications in continuous mental health monitoring. For instance, Cotes et al developed an AI-based monitoring system that integrated biometric sensor data (eg, heart rate variability and sleep patterns) with acoustic data (eg, voice tone) to predict symptom exacerbation in schizophrenia patients. This system achieved 89% accuracy in delivering early warnings, enabling preventive interventions.23 Similarly, Lee et al utilized wearable devices and mobile applications to monitor patients with major depressive disorder, demonstrating that AI models could predict depressive episodes with 91% accuracy. This study underscored the importance of multimodal data integration and deep learning algorithms to enhance prediction accuracy.4
Additionally, platforms like BioBase, which leverage biometric sensor data to prevent occupational burnout, have shown the ability to reduce sick days by up to 31%.25 By analyzing physiological data (eg, heart rate and sleep) and offering personalized recommendations, BioBase exemplifies a practical implementation of a digital mental health ecosystem.
Proposed Innovation
The primary innovation in this section is a framework for a digital mental health ecosystem that makes preventive care universally accessible. This framework includes:
Global Accessibility: Utilizing affordable consumer devices (eg, smartwatches and mobile apps) to ensure scalability and reach, particularly in low-resource areas.
Multimodal Data Integration: Combining biometric, textual, and behavioral data to create comprehensive profiles and more accurate predictions.
Hybrid Interventions: Integrating automated AI interventions (eg, chatbots and mindfulness recommendations) with human interventions to preserve the human element of mental health care.
Transparency and Ethics: Implementing transparent protocols for data privacy protection and bias mitigation, inspired by recommendations from Eskandar et al.20
This framework could be deployed through digital platforms, such as mental health apps or integrated hospital systems, enabling preventive care on a global scale. For example, a digital ecosystem could provide remote communities with continuous monitoring and early interventions while equipping clinicians with precise data for clinical decision-making.
Challenges and Considerations
Despite the ecosystem’s potential, challenges such as data privacy, algorithmic bias, and the need for digital infrastructure must be addressed. Ni et al emphasized that AI systems should employ robust encryption protocols and informed consent processes to build public trust.24 Moreover, biases in training data could lead to unequal diagnoses, particularly for minority groups.20 These challenges will be explored in greater detail in subsequent sections of this article.
Ethical Challenges and Novel Solutions
The application of artificial intelligence (AI) in mental health, while transformative, raises significant ethical challenges that could impact public trust and the technology’s effectiveness. Key issues include data privacy, algorithmic bias, and patient acceptance, necessitating innovative and practical solutions to ensure responsible AI use. This section analyzes these challenges, drawing on recent studies (2023–2025), and proposes solutions such as “transparent AI” and a “mental health AI ethical charter” to enhance public trust and elevate global standards.
Ethical Challenges
Data Privacy
AI in mental health relies on sensitive data, including clinical records, behavioral patterns, and biometric data (eg, heart rate or sleep patterns). Improper management of this data can lead to privacy breaches or misuse. Haque et al highlighted that ambient intelligence technologies, such as contactless sensors, collect vast amounts of personal data, exacerbating ethical concerns related to privacy and informed consent.26 Additionally, Mennella et al noted that the transfer of sensitive data between institutions often lacks adequate oversight, increasing the risk of data breaches.27 For instance, cyberattacks on AI systems could expose patient information, a particularly critical issue in mental health due to the sensitive nature of the data.28
Algorithmic Bias
Algorithmic bias is another major challenge that can lead to inequities in mental health care. If AI training data lacks diversity in gender, race, or socioeconomic status, algorithms may produce inaccurate or discriminatory predictions. McCradden et al found that biases in health data can perpetuate existing inequalities, particularly for marginalized groups.29 For example, Obermeyer et al reported that an algorithm used for population health management unfairly allocated fewer resources to Black patients due to biased training data.30 In mental health, Yang et al demonstrated that AI models may perform less accurately in diagnosing mental disorders among minority groups if trained on non-diverse datasets.31
Patient Acceptance
Patient acceptance, as an ethical challenge, hinges on public trust and understanding of AI technologies. Patients may resist AI due to concerns about reduced human interaction or lack of transparency in AI decision-making. Young et al found that patients expressed concerns about AI’s accuracy in complex scenarios (eg, diagnosing rare mental disorders) and the privacy of their data.32 Furthermore, Rahsepar Meadi et al reported in a scoping review that a lack of transparency in therapeutic chatbot functionality could erode patient trust, particularly when patients perceive AI as replacing human therapeutic relationships. These concerns are especially pronounced in mental health, where the therapeutic alliance is central.33
Novel Solutions
Transparent AI
A proposed solution to address ethical challenges is the development of “transparent AI”, where algorithms explain their decision-making logic to patients and clinicians in an understandable manner. This approach can enhance public trust and improve patient acceptance. Hauser et al emphasized that mixed-initiative interfaces, which integrate evidence-based information with patient profiles, can strengthen shared decision-making between patients and clinicians. For example, a transparent AI system could clarify why a specific intervention (eg, a mindfulness exercise) was recommended based on a patient’s biometric and behavioral data.34 Bernal and Mazo (2022) reported in an international survey that health and IT professionals believe transparency in model interpretability and reporting can improve both professional and public trust.35
To implement transparent AI, developers should use interpretable models (eg, rule-based models or neural networks with explainable layers) and provide standardized reports on model performance. These reports should include group-specific performance metrics (eg, accuracy across different gender or racial groups) to mitigate bias.29
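As a hedged example of what group-specific performance reporting could look like, the Python sketch below computes accuracy and AUC separately for each demographic subgroup; the data and group labels are synthetic placeholders:

```python
# Illustrative subgroup performance report for model transparency; data are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic outputs of a hypothetical screening model
y_true  = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.3, 0.8, 0.5])
group   = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # eg, demographic subgroups

for g in np.unique(group):
    mask = group == g
    acc = accuracy_score(y_true[mask], (y_score[mask] >= 0.5).astype(int))
    auc = roc_auc_score(y_true[mask], y_score[mask])
    print(f"Group {g}: accuracy={acc:.2f}, AUC={auc:.2f}")
```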
Mental Health AI Ethical Charter
The second proposed solution is the establishment of a “mental health AI ethical charter” to set global standards for the responsible development and use of AI in mental health. This charter could promote principles such as privacy, fairness, transparency, and accountability. The World Health Organization’s 2024 guidelines on the ethics and governance of large multimodal models (LMMs) provide a foundation for this charter, emphasizing transparent information about AI design and use, as well as risk management for bias and privacy.36
Figure 7 presents the conceptual structure of a Mental Health AI Ethical Charter, a proposed global framework for guiding the responsible design and deployment of artificial intelligence in mental health care. The charter is grounded in the core ethical principles of privacy, transparency, fairness, and accountability, ensuring that AI systems protect patient confidentiality, provide explainable decisions, and operate without discrimination or bias.
Figure 7.
Mental Health AI Ethical Charter.
The framework aligns with the World Health Organization’s 2024 guidelines on the ethics and governance of large multimodal AI models and emphasizes practical safeguards such as informed consent, encrypted data handling, regular bias and equity audits, transparent performance reporting, and independent oversight mechanisms.
This ethical architecture highlights the importance of integrating human oversight and trust into digital mental health innovation, ensuring that technological advancement remains consistent with human dignity, autonomy, and equity.
Data sources: adapted from the World Health Organization (2024) “Ethics and Governance of Artificial Intelligence for Health: Large Multimodal Models”, and related peer-reviewed policy literature cited in Ethical Challenges and Novel Solutions. Visualization created by the authors for conceptual representation.36
Proposed Charter
The proposed mental health AI ethical charter could include the following principles:
Privacy and Informed Consent: Mandating data encryption, transparent consent processes, and minimizing data transfers.27
Bias Mitigation: Utilizing diverse datasets and regular audits to identify and address biases.29
Transparency and Accountability: Requiring public reports on model performance and establishing feedback mechanisms for patients and clinicians.35
Continuous Oversight: Creating independent regulatory bodies to monitor ethical compliance and address complaints.37
This charter could be developed in collaboration with international organizations such as the WHO, ISO, and IEC, which have previously introduced standards like ISO/IEC 42001:2023 for responsible AI management.37
Evidence from 2020–2025 Studies
Recent studies have extensively explored ethical challenges in AI for mental health. For instance, Young et al found in a systematic review that patients are concerned about privacy loss and health inequities due to AI, but transparency and stakeholder engagement can mitigate these concerns.32 Hauser et al emphasized the importance of AI models that transparently report errors and uncertainties to prevent cognitive biases, such as over-reliance on technology.34 Rahsepar Meadi et al analyzed ethical challenges in therapeutic chatbots, recommending that principles like privacy, informed consent, and bias reduction be embedded in their design. These studies underscore the need for practical solutions, such as transparent AI and standardized ethical frameworks.33
Proposed Innovation
The primary innovation in this section is the proposal of practical solutions to enhance public trust in AI for mental health:
Interactive Transparent AI Platforms: Developing applications that provide clear, understandable explanations of AI decision-making, such as patient-centered dashboards displaying input data (eg, heart rate or speech patterns) and recommended outputs (eg, therapeutic interventions).
Global Ethical Charter: Establishing an international charter that mandates developers, clinicians, and policymakers to adhere to ethical principles, including protocols for complaint resolution and ongoing oversight.
Public and Professional Education: Implementing training programs for patients and clinicians on how AI functions and their data rights to boost acceptance and trust.
Stakeholder Engagement: Involving patients, clinicians, and minority groups in the design and evaluation of AI systems to ensure fairness and inclusivity.
These solutions, inspired by recent studies, aim to strengthen public trust and address ethical barriers.33,37
Discussion
Artificial intelligence (AI) holds the potential to revolutionize mental health management by offering tools for early detection, personalized treatments, and prevention of symptom escalation. This narrative review synthesizes key findings on innovative AI applications, including natural language processing (NLP), deep learning, and multimodal data analysis, while introducing concepts like the “digital psychological signature”, “empathetic AI”, and “digital mental health ecosystem”. This section compares AI’s effectiveness with traditional methods, analyzes limitations, explores implications for stakeholders, and suggests directions for future research.
Comparing AI Effectiveness with Traditional Methods
Traditional mental health management approaches, such as cognitive-behavioral therapy (CBT), pharmacotherapy, and in-person counseling, are widely used due to their proven effectiveness. For example, a meta-analysis by Giovanetti et al found that CBT for depression has a moderate to strong effect (Cohen’s d=0.65).38 However, these methods face limitations, including high costs, the need for trained professionals, and limited access in low-resource areas.2 In contrast, AI technologies, such as therapeutic chatbots and virtual reality (VR), offer significant advantages. Tong et al demonstrated that CBT-based chatbots can reduce anxiety symptoms with an effect size comparable to in-person CBT (Cohen’s d=0.39), but at a lower cost and with broader accessibility.11 Similarly, Eskandar et al reported that VR-based exposure therapy for PTSD achieves comparable effectiveness to in-person therapy (35% symptom reduction) while allowing more precise control of therapeutic environments.20
However, AI cannot fully replace human interactions. Fulmer et al found that while the Tess chatbot effectively reduced anxiety, it was less successful than human therapists in fostering deep emotional connections. These findings suggest that AI can complement traditional methods, particularly for early interventions or in resource-scarce settings.14 The proposed “empathetic AI” concept, which integrates emotional data analysis (eg, voice tone and facial expressions), could bridge the gap between technology and human interaction.17
Figure 8 compares the reported effect sizes of traditional psychotherapy (cognitive behavioral therapy, CBT) and AI-based interventions, including therapeutic chatbots and virtual reality (VR) exposure therapy, in treating common mental disorders.
Figure 8.
Comparison of Effectiveness of Traditional vs AI Approaches in Treating Mental Disorders.
Data are synthesized from randomized controlled trials and meta-analyses reported in peer-reviewed studies.11–14,16,38
Traditional CBT for depression shows the highest effect size (Cohen’s d ≈ 0.65),38 indicating strong clinical efficacy in symptom reduction. Among AI-based modalities, chatbot interventions demonstrate moderate effects, Woebot (d = 0.44)12 and Wysa (d = 0.47)13 for depression, and Tess (d ≈ 0.35–0.39)11,14 for anxiety, suggesting comparable though slightly lower outcomes relative to therapist-led CBT.
VR-based exposure therapy for post-traumatic stress disorder (PTSD) achieves symptom reductions of approximately 30–35%, consistent with medium-sized effects.16
Collectively, these findings highlight that AI-driven treatments, while not replacing traditional therapy, can deliver clinically meaningful improvements in depression, anxiety, and PTSD, particularly where access to human therapists is limited.
Data sources: Fitzpatrick et al; Fulmer et al; Inkster et al; Tong et al; Jonathan et al; Giovanetti et al.11–14,16,38 Visualization created by the authors for interpretive comparison.
Analysis of Limitations
Despite significant advancements, the application of AI in mental health faces several limitations. One major challenge is the scarcity of data for certain disorders, such as schizophrenia or personality disorders. For instance, Parola et al reported that AI models for predicting symptom exacerbation in schizophrenia achieve limited accuracy (approximately 88%) due to the lack of large, diverse datasets. This limitation is particularly pronounced for rare or complex disorders where sufficient multimodal data are unavailable.39
Access issues in low-income countries present another challenge. While wearable devices and mobile applications reduce care costs, digital infrastructure and access to advanced technologies remain limited in many low-income regions.2 The World Health Organization (2022) noted that only 25% of low-income countries have adequate digital infrastructure to implement mental health technologies, potentially exacerbating existing inequities.1
Ethical challenges are also significant. Data privacy, algorithmic bias, and patient acceptance are critical barriers. Haque et al warned that collecting sensitive data without robust privacy protocols could undermine public trust.26 Similarly, Yang et al demonstrated that biases in training data can lead to misdiagnoses in minority groups. These challenges necessitate innovative solutions, such as “transparent AI” and the “mental health AI ethical charter” proposed earlier.31
Implications
The application of AI in mental health has wide-ranging implications for various stakeholders:
For Clinicians: AI provides enhanced diagnostic tools, such as predictive models for early detection.38 AI-powered dashboards can display real-time multimodal data, improving clinical decision-making.31
For Patients: Personalized care through therapeutic chatbots and VR-based interventions enhances the treatment experience and increases access in low-resource areas.18 The “digital psychological signature” concept enables interventions tailored to individual needs.
For Policymakers: The need for global regulations and standards to ensure responsible AI use is critical. The proposed ethical charter can serve as a framework for policy development.33
Future Research Directions
To overcome limitations and realize AI’s full potential in mental health, future research should focus on the following:
Longitudinal Studies: Long-term studies are essential to assess the sustained effectiveness of AI tools, such as chatbots and monitoring systems. Walschots et al suggested that multi-year studies could improve AI prediction accuracy for complex disorders like schizophrenia.40
Specific Populations: The impact of AI on specific groups, such as children, the elderly, or individuals with rare disorders, requires further exploration. Rahsepar Meadi et al noted that therapeutic chatbots are understudied in pediatric populations, warranting additional research.33
Ethical Standards Development: Establishing global standards for privacy, transparency, and bias reduction in mental health AI is crucial. The proposed ethical charter could serve as a foundation.35
Access to Low-Income Countries: Research should prioritize developing low-cost, scalable technologies, such as basic smartphone-based apps, to bridge the digital divide.2
Conclusion and Outlook
This narrative review highlights the significant potential of artificial intelligence (AI) to enhance mental health care through early detection, personalized treatment, and preventive monitoring of symptom escalation. Technologies such as natural language processing, deep learning, and multimodal data analysis have demonstrated promising capabilities in identifying subtle behavioral and physiological indicators of mental distress. Conceptual frameworks, including the “digital psychological signature”, “empathetic AI”, and “digital mental health ecosystem”, illustrate how integrating established technologies can support more responsive and data-informed approaches to mental health.
However, it is essential to interpret these developments with scientific caution. Many studies reporting high predictive accuracy are limited by small or homogeneous samples, lack of external validation, and insufficient transparency in reporting. Moreover, ethical and practical challenges, such as data privacy, algorithmic bias, explainability, and digital inequity, remain substantial barriers to clinical implementation. Without addressing these issues, the benefits of AI may remain unevenly distributed, reinforcing rather than reducing disparities in mental health access.
A realistic vision for the future positions AI not as a replacement for human clinicians but as an assistive and augmentative tool within an ethically governed ecosystem. Achieving this vision requires rigorous methodological standards, transparent data governance, and interdisciplinary collaboration among clinicians, AI developers, ethicists, and policymakers. By combining technological innovation with human-centered design and ethical accountability, AI can gradually contribute to more equitable, scalable, and preventive models of mental health care.
Future work should prioritize external validation of AI models, cross-cultural dataset diversity, and integration of ethical governance frameworks into real-world clinical workflows.
Disclosure
The authors report no conflicts of interest in this work.
References
1. World Health Organization. Mental Health Atlas 2020. Geneva: World Health Organization; 2022.
2. Patel V, Saxena S, Lund C, et al. The Lancet Commission on global mental health and sustainable development. Lancet. 2018;392(10157):1553–1598. doi: 10.1016/S0140-6736(18)31612-X
3. Balani S, De Choudhury M. Detecting and characterizing mental health-related self-disclosure in social media. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems; 2015:1373–1378.
4. Lee HJ, Cho CH, Lee T, et al. Prediction of impending mood episode recurrence using real-time digital phenotypes in major depression and bipolar disorders in South Korea: a prospective nationwide cohort study. Psychol Med. 2023;53(12):5636–5644. doi: 10.1017/S0033291722002847
5. Torous J, Bucci S, Bell IH, et al. The growing field of digital psychiatry: current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry. 2021;20(3):318–335. doi: 10.1002/wps.20883
6. Beg MJ, Verma MK. Exploring the potential and challenges of digital and AI-driven psychotherapy for ADHD, OCD, schizophrenia, and substance use disorders. Indian J Psychol Med. 2024. doi: 10.1177/02537176241300569
7. Beg MJ, Verma M, Vc M, Verma MK. Artificial intelligence for psychotherapy: a review of the current state and future directions. Indian J Psychol Med. 2024.
8. Insel TR. Digital phenotyping: technology for a new science of behavior. JAMA. 2017;318(13):1215–1216. doi: 10.1001/jama.2017.11295
9. Cummins N, Scherer S, Krajewski J, Schnieder S, Epps J, Quatieri TF. A review of depression and suicide risk assessment using speech analysis. Speech Commun. 2015;71:10–49. doi: 10.1016/j.specom.2015.03.004
10. Ceccarelli F, Mahmoud M. Multimodal temporal machine learning for bipolar disorder and depression recognition. Pattern Anal Appl. 2022;25(3):493–504. doi: 10.1007/s10044-021-01001-y
11. Tong ACY, Wong KTY, Chung WWT, Mak WWS. Effectiveness of topic-based chatbots on mental health self-care and mental well-being: randomized controlled trial. J Med Internet Res. 2025;26:e70436. doi: 10.2196/70436
12. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. 2017;4(2):e19. doi: 10.2196/mental.7785
13. Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR mHealth uHealth. 2018;6(11):e12106. doi: 10.2196/12106
14. Fulmer R, Joerin A, Gentile B, Lakerink L, Rauws M. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: randomized controlled trial. JMIR Ment Health. 2018;5(4):e64. doi: 10.2196/mental.9782
15. Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB. Chatbots and conversational agents in mental health: a review of the psychiatric landscape. Can J Psychiatry. 2019;64(7):456–464. doi: 10.1177/0706743719828977
16. Jonathan NT, Bachri MR, Wijaya E, Ramdhan D, Chowanda A. The efficacy of virtual reality exposure therapy (VRET) with extra intervention for treating PTSD symptoms. Procedia Comput Sci. 2023;216:252–259. doi: 10.1016/j.procs.2022.12.134
17. Botella C, Fernández-Álvarez J, Guillén V, García-Palacios A, Baños R. Recent progress in virtual reality exposure therapy for phobias: a systematic review. Curr Psychiatry Rep. 2017;19(7):42. doi: 10.1007/s11920-017-0788-4
18. De Gennaro M, Krumhuber EG, Lucas G. Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Front Psychol. 2020;10:495952. doi: 10.3389/fpsyg.2019.03061
19. Laymouna M, Ma Y, Lessard D, Schuster T, Engler K, Lebouché B. Roles, users, benefits, and limitations of chatbots in health care: rapid review. J Med Internet Res. 2024;26:e56930. doi: 10.2196/56930
20. Eskandar K. Artificial intelligence in psychiatric diagnosis: challenges and opportunities in the era of machine learning. Debates em Psiquiatria. 2024;14:1–16.
21. Abd-Alrazaq A, AlSaad R, Harfouche M, et al. Wearable artificial intelligence for detecting anxiety: systematic review and meta-analysis. J Med Internet Res. 2023;25:e48754.
22. Garriga R, Mas J, Abraha S, et al. Machine learning model to predict mental health crises from electronic health records. Nat Med. 2022;28(6):1240–1248. doi: 10.1038/s41591-022-01811-5
23. Cotes RO, Boazak M, Griner E, et al. Multimodal assessment of schizophrenia and depression utilizing video, acoustic, locomotor, electroencephalographic, and heart rate technology: protocol for an observational study. JMIR Res Protoc. 2022;11(7):e36417. doi: 10.2196/36417
24. Ni Y, Nolan J. A scoping review of AI-driven digital interventions in mental health care: mapping applications across screening, support, monitoring, prevention. Healthcare. 2025;13(10):1205. doi: 10.3390/healthcare13101205
25. Olawade DB, Wada OZ, Odetayo A, David-Olawade AC, Asaolu F, Eberhardt J. Enhancing mental health with artificial intelligence: current trends and prospects. J Med Surg Public Health. 2024;3:100099. doi: 10.1016/j.glmedi.2024.100099
26. Haque A, Milstein A, Fei-Fei L. Illuminating the dark spaces of healthcare with ambient intelligence. Nature. 2020;585(7824):193–202. doi: 10.1038/s41586-020-2669-y
27. Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: a narrative review. Heliyon. 2024;10(4):e26297. doi: 10.1016/j.heliyon.2024.e26297
28. Chen Y, Esmaeilzadeh P. Safeguarding patient privacy in AI-driven mental health interventions. J Med Internet Res. 2024;26:e43251. doi: 10.2196/43251
29. McCradden MD, Joshi S, Mazwi M, Anderson JA. Ethical limitations of algorithmic fairness solutions in health care machine learning. Lancet Digit Health. 2020;2(5):e221–e223. doi: 10.1016/S2589-7500(20)30065-0
30. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–453. doi: 10.1126/science.aax2342
31. Yang J, Soltan AAS, Eyre DW, Yang Y, Clifton DA. An adversarial training framework for mitigating algorithmic biases in clinical machine learning. NPJ Digit Med. 2023;6(1):55. doi: 10.1038/s41746-023-00805-y
32. Young AT, Amara D, Bhattacharya A, Wei ML. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. Lancet Digit Health. 2021;3(9):e599–e611. doi: 10.1016/S2589-7500(21)00132-1
33. Rahsepar Meadi M, Sillekens T, Metselaar S, van Balkom A, Bernstein J, Batelaan N. Exploring the ethical challenges of conversational AI in mental health care: scoping review. JMIR Ment Health. 2025;12:e60432. doi: 10.2196/60432
34. Hauser TU, Skvortsova V, Choudhury MD, Koutsouleris N. The promise of a model-based psychiatry: building computational models of mental ill health. Lancet Digit Health. 2022;4(11):e816–e828. doi: 10.1016/S2589-7500(22)00163-2
35. Bernal J, Mazo C. Transparency of artificial intelligence in healthcare: insights from professionals in computing and healthcare worldwide. Appl Sci. 2022;12(20):10228. doi: 10.3390/app122010228
36. World Health Organization. Ethics and Governance of Artificial Intelligence for Health: Large Multi-Modal Models. Geneva: World Health Organization; 2024.
37. Goktas P, Grzybowski A. Shaping the future of healthcare: ethical clinical challenges and pathways to trustworthy AI. J Clin Med. 2025;14(5):1605. doi: 10.3390/jcm14051605
38. Giovanetti AK, Punt SE, Nelson EL, Ilardi SS. Teletherapy versus in-person psychotherapy for depression: a meta-analysis of randomized controlled trials. Telemed J E Health. 2022;28(8):1077–1089. doi: 10.1089/tmj.2021.0294
39. Parola A, Gabbatore I, Berardinelli L, Salvini R, Bosco FM. Multimodal assessment of communicative-pragmatic features in schizophrenia: a machine learning approach. NPJ Schizophr. 2021;7(1):28. doi: 10.1038/s41537-021-00153-4
40. Walschots Q, Zarchev M, Unkel M, Kamperman A. Using wearable technology to detect, monitor, and predict Major Depressive Disorder: a scoping review and introductory text for clinical professionals. Algorithms. 2024;17(9):408. doi: 10.3390/a17090408