ABSTRACT
This integrative literature review examines the evolving role of artificial intelligence (AI) and machine learning (ML) based clinical decision support systems (CDSS) in mental health (MH) care, expanding on findings from a prior review (Higgins et al. 2023). Using an integrative review framework, a systematic search of six databases was conducted with a focus on primary research published between 2022 and 2024. Five studies met the inclusion criteria and were analysed for key themes, methodologies, and findings. The results reaffirm AI's potential to enhance MH care delivery by improving diagnostic accuracy, alleviating clinician workloads, and addressing missed care. New evidence highlights the importance of clinician trust, system transparency, and ethical concerns, including algorithmic bias and equity, particularly for vulnerable populations. Advancements in AI model complexity, such as multimodal learning systems, demonstrate improved predictive capacity but underscore the ongoing challenge of balancing interpretability with innovation. Workforce challenges, including clinician burnout and staffing shortages, persist as fundamental barriers that AI alone cannot resolve. The review not only confirms the findings from the first review but also adds new layers of complexity and understanding to the discourse on AI‐based CDSS in MH care. While AI‐driven CDSS holds significant promise for optimising MH care, sustainable improvements require the integration of AI solutions with systemic workforce enhancements. Future research should prioritise large‐scale, longitudinal studies to ensure equitable, transparent, and effective implementation of AI in diverse clinical contexts. A balanced approach addressing both technological and workforce challenges remains critical for advancing mental health care delivery.
Keywords: artificial intelligence, clinical decision support systems, ethical AI, machine learning, mental health, missed care, psychiatry, workforce challenges
1. Introduction
The rapid advancements in artificial intelligence (AI) and machine learning (ML) have significantly expanded the technological capabilities available to healthcare systems, particularly with the emergence of large language models (LLMs) such as ChatGPT. These developments present opportunities to transform healthcare delivery while raising challenges related to their integration into clinical workflows, adherence to ethical standards, and alignment with community benefit. Since the publication of a widely cited literature review on AI‐ and ML‐based decision support systems in the mental health context (Higgins et al. 2023), the capabilities of these technologies have advanced significantly, and the pace of change necessitates a sequel review to ensure that the newest evidence is presented. This sequel updates the original literature review with research published between 1 January 2022 and 30 June 2024. Notably, five eligible articles have been published in the past 18 months, compared with four in the previous 5 years, reflecting the rapid growth of AI and ML in health care. This updated review continues to examine how these technologies can support and enhance clinicians' decision making, providing advanced decision support tools to improve care delivery. Additionally, it investigates how AI/ML systems can enhance the client's experience in mental health (MH) services by ensuring safe and high‐quality care. Finally, the review aims to deepen the understanding of AI/ML adoption in MH care, highlighting both the opportunities and challenges that have emerged as these technologies continue to evolve and become more prevalent in clinical practice.
This updated review revisits the original research aims (Higgins et al. 2023), with a particular focus on how AI can complement and augment clinicians' decision‐making processes. The findings highlight the potential of AI‐driven clinical decision support systems (CDSS) to enable clinicians to make more informed, accurate, and timely decisions, ultimately reducing instances of missed care. However, the importance of maintaining clinician oversight remains a central theme, ensuring that AI tools enhance, rather than replace, human judgement. The review further addresses the findings of the original review by examining ongoing challenges in AI adoption, particularly issues of trust, transparency, and bias. These factors are essential to building confidence among clinicians and patients in the use of AI systems. LLMs, with their complex and often opaque decision‐making processes, pose additional concerns about interpretability and transparency. Ultimately, this review underscores the need for a balanced approach to integrating AI into healthcare systems. While AI innovations show promise in complementing clinical decision‐making, addressing systemic workforce challenges and ensuring trust and transparency in AI application remain pivotal. A synergistic strategy that combines technological advancements with improvements to the healthcare workforce will be crucial for achieving sustainable and meaningful progress in MH care, aligning with the overarching aims of this study.
2. Methods
2.1. Aims
The research questions guiding this integrative review were:
1. Is there evidence to support the use of artificial intelligence or machine learning‐based decision support systems in the delivery of mental health care?
2. What barriers exist for mental health end users (clinicians and patients) in the adoption of artificial intelligence or machine learning‐based decision support systems?
2.2. Design
This integrative literature review followed the framework outlined by Whittemore and Knafl (2005), which encompasses the stages of problem identification, conducting a systematic literature search, retrieving data, evaluating articles, and performing data analysis and presentation. This framework was chosen for its ability to rigorously and systematically analyse data from two disciplines that are not traditionally combined, facilitating a comprehensive synthesis of information while accommodating all research designs. The methodological guidelines provided in the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) 2020 statement checklist (Page et al. 2021) guided the review process to ensure it was thorough and adhered to high standards (Figure 1).
FIGURE 1. PRISMA flowchart.
2.3. Search Strategy
A literature search of databases was conducted on 30 June 2024. The search was developed in Medline and then adapted as necessary to Scopus, Web of Science, Google Scholar, IEEE Xplore, and CINAHL with Full Text (EBSCOhost). To find relevant literature, the search terms ("machine learning" OR "artificial intelligence") AND ("mental health" OR "psychiatry") AND "decision support" were used. The search was limited to articles published between 2022 and 2024, written in English, and focused on AI‐ or ML‐based CDSS for MH. Only primary research articles published from 2022 onward and accessible online were included. This approach aimed to identify technological advancements in AI/ML‐based decision support systems for MH, while excluding non‐primary research, non‐English articles, and studies not focused on AI‐ or ML‐based CDSS for MH or psychiatry. For further details, see Higgins et al. (2023). The method proposed by Whittemore and Knafl (2005) was employed in synthesising and analysing the data. This approach involved data reduction, display, comparison, and conclusion verification. Information from each article was categorised as study demographics, methodologies, sample populations, key findings, and limitations. The articles were grouped based on these themes for comparison and conclusion verification, and the resulting information is displayed in Table 1 to highlight further themes and relationships.
TABLE 1.
Article summary.
| Author(s) | Study aim | Design | Results | Limitations | Recommendations |
|---|---|---|---|---|---|
| Liu et al. (2024), USA | Evaluate and optimise a machine‐learning model for predicting postpartum depression (PPD). | Model evaluation. Three datasets were used: (1) EHR data from an academic medical centre (AMC) in 2019; (2) EHR data from the same AMC in 2020; and (3) EHR data from a clinical research network between January 2014 and September 2018. Decision curve analysis was used to evaluate the model's clinical utility and net benefit across different decision thresholds. | A revised model, using fairness through blindness, was identified as the most suitable, balancing performance and fairness considerations. This approach aimed to mitigate potential biases in the prediction model, particularly concerning race. | The study relied primarily on data from a single urban region within the US, potentially limiting the generalisability of the findings to other settings and populations. | |
| Moggia et al. (2022), Germany | Introduce a data‐informed approach to case formulation in psychotherapy using the Trier Treatment Navigator (TTN). | Case study of a single patient. The TTN is used to guide case formulation and treatment decisions based on routinely collected outcome monitoring data. | The TTN facilitated a comprehensive and dynamic case formulation process, allowing continuous adaptation of treatment strategies based on the patient's progress and feedback. The authors propose that data‐informed case formulation and psychotherapy share a common goal of treatment personalisation and mutually benefit each other. | The study's reliance on a single case limits the generalisability of the findings. | |
| Ngan et al. (2022), USA | Develop a healthcare decision support system (HDSS) to determine whether a patient is at risk of MH problems. | Case study. n = 298 participants recruited from a southwestern community agency serving over 30 000 low‐income immigrant individuals and families in West Texas and New Mexico. Bilingual English–Spanish survey containing 81 questions in ten categories. | The HDSS achieved 91.11% accuracy and 92.30% sensitivity, indicating its effectiveness in identifying potential MH risks in the studied population. This approach outperformed the purely data‐analytic approach but was slightly less accurate than domain experts' criteria. | The study was limited to a specific population and geographical area. | |
| Qassim et al. (2023), Canada | Evaluate the perceived clinical utility of Aifred and its impact on the clinician–patient relationship. | Naturalistic follow‐up. 7 clinicians and 14 patients participated in a study assessing the acceptability and usability of Aifred, a novel AI‐enabled CDSS designed for treating adults with major depression. Questionnaires and semi‐structured interviews were conducted with clinicians and patients. | 86% of clinicians perceived a more comprehensive understanding of patients' situations, with 71% finding the information helpful. 62% of patients reported improved care, while 46% noticed an improvement in the clinician–patient relationship. | Small sample size and pre‐existing treatment changes limit the study's ability to verify the impact on treatment outcomes. The virtual study design, necessitated by COVID‐19, limited opportunities for shared screen viewing between clinicians and patients. | |
| Yang et al. (2024), China | Develop a multimodal, multitask learning model to enhance psychiatric rehabilitation outcomes. | n = 6727 patients diagnosed with serious mental illness (SMI) at the Shanghai MH Centre. | The model achieved an accuracy exceeding 78.5% on all four tasks and performed exceptionally well on "medication adherence" (94.3%) and "dangerous behaviour" (90.2%). The AUC was above 0.70 for all four tasks and, except for "dangerous behaviour", exceeded 0.75, indicating robustness. | The fairness of AI decisions may be influenced by potential biases in the training data, and robustness might be insufficient when dealing with novel or extreme cases. Due to the inherent complexity of the ML models, their decision‐making processes may lack full transparency, a potential limitation in their application to clinical decision support. | |
3. Results
Clinical decision support systems for MH, including those incorporating AI, show promise in improving patient care. Overall, the results of this follow‐up literature review highlight the potential of CDSS to improve MH care but emphasise the importance of rigorous evaluation, debiasing efforts, and ongoing monitoring to ensure fairness, accuracy, and clinical utility. Five articles were deemed eligible and included in this review. Interestingly, two of these articles, Qassim et al. (2023) and Moggia et al. (2022), report further research from studies that were included in the original literature review (Higgins et al. 2023). The included articles span a range of study designs and methods. One employed a mixed‐methods approach, utilising questionnaires and semi‐structured interviews to examine the feasibility of an AI‐powered clinical decision support system for depression treatment in Quebec, Canada (Qassim et al. 2023). Another developed a healthcare decision support system to predict MH risks, drawing on data from a bilingual survey, in‐depth interviews, and a case study of immigrants and refugees in the US–Mexico border region (Ngan et al. 2022). Similarly exploring AI's role in MH, a study from Shanghai, China, investigated a multimodal, multitask learning model for rehabilitating patients with serious mental illnesses, using patient records and expert interviews to evaluate the algorithm (Yang et al. 2024). A German study focusing on data‐informed case formulation within psychotherapy does not explicitly state its study design or methodology (Moggia et al. 2022).
The evaluation of an ML model's suitability for integration into an electronic health record (EHR) system to predict postpartum depression (PPD) risk has been a focal point of recent research. One study utilised this model to identify high‐risk patients and recommended preventative interventions for clinicians, with a specific emphasis on refining decision thresholds through decision curve analysis to optimise sensitivity while minimising the risk of overtreatment (Liu et al. 2024). Similarly, the feasibility and acceptability of integrating AI‐driven CDSS, such as Aifred, into the treatment of major depression were explored, rather than directly testing interventions. This investigation highlighted how clinicians incorporated the system into their practice, particularly in terms of appointment length and the clinician–patient relationship (Qassim et al. 2023), building upon previous work by Benrimoh et al. (2020) which was included in Higgins et al. (2023).
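The net‐benefit quantity that decision curve analysis plots can be sketched in a few lines. The threshold formula below is the standard one; the patient labels and predicted risks are entirely hypothetical and are not drawn from Liu et al. (2024):

```python
# Illustrative sketch of decision curve analysis arithmetic.
# Net benefit at threshold pt = (TP - FP * pt/(1-pt)) / N, i.e. true positives
# credited against false positives weighted by the harm:benefit trade-off
# implied by the chosen threshold.

def net_benefit(y_true, y_prob, threshold):
    """Net benefit of treating everyone whose predicted risk >= threshold."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return (tp - fp * threshold / (1 - threshold)) / n

# Hypothetical PPD outcomes (1 = developed PPD) and model risk scores
y_true = [1, 0, 1, 0, 0, 1, 0, 0]
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.3, 0.8]

for pt in (0.25, 0.5, 0.75):
    print(f"threshold {pt:.2f}: net benefit {net_benefit(y_true, y_prob, pt):.3f}")
```

Plotting net benefit across a range of thresholds, and comparing against "treat all" and "treat none" strategies, is what lets a study select a threshold that optimises sensitivity while limiting overtreatment.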
In another study, a novel multimodal, multitask learning model was directly applied as an intervention to predict rehabilitation outcomes for individuals with severe mental illness (SMI) (Yang et al. 2024). The model, which integrated Bidirectional Encoder Representations from Transformers (BERT) for text data (e.g., doctor's notes) and TabNet for structured diagnostic information, was evaluated for its predictive accuracy in comparison to traditional single‐task models across four domains: referral risk, dangerous behaviours, self‐awareness, and medication adherence (Yang et al. 2024).
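The late‐fusion, multitask pattern described above can be sketched with toy stand‐ins for the real encoders. Only the four task names come from Yang et al. (2024); the hash‐based "embeddings", dimensions, and weights below are hypothetical placeholders for BERT and TabNet outputs, not the paper's architecture:

```python
# Sketch of multimodal late fusion with one prediction head per task.
# encode_text / encode_tabular are toy stand-ins for BERT and TabNet.
import math
import random

DIM_TEXT, DIM_TAB = 8, 4
TASKS = ["referral_risk", "dangerous_behaviour", "self_awareness",
         "medication_adherence"]

def encode_text(note):
    """Stand-in for a BERT embedding of a free-text clinical note."""
    r = random.Random(note)  # deterministic toy vector per note
    return [r.uniform(-1, 1) for _ in range(DIM_TEXT)]

def encode_tabular(features):
    """Stand-in for a TabNet encoding of structured diagnostic fields."""
    return [math.tanh(x) for x in features]

# One linear head per task over the fused (concatenated) representation;
# a shared trunk is omitted for brevity.
_rng = random.Random(42)
heads = {t: [_rng.uniform(-1, 1) for _ in range(DIM_TEXT + DIM_TAB)]
         for t in TASKS}

def predict(note, tabular):
    """Fuse both modalities, then score every task from the same features."""
    fused = encode_text(note) + encode_tabular(tabular)
    return {t: 1 / (1 + math.exp(-sum(w * x for w, x in zip(heads[t], fused))))
            for t in TASKS}

probs = predict("patient reports improved sleep and adherence",
                [1.0, 0.0, 3.5, 2.0])
```

The design point the sketch captures is that all four tasks share one fused representation, so signal learned from the notes can inform every prediction head at once.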
Further research has detailed the development of a healthcare decision support system (HDSS) aimed at identifying MH risks among immigrants and refugees. Rather than implementing a separate intervention, the study compared the HDSS's performance in detecting these risks with assessments from domain experts and a purely data‐driven analytics approach (Ngan et al. 2022). Additionally, the potential of the Trier Treatment Navigator (TTN), a CDSS for psychotherapy, was examined, with an emphasis on its role in data‐informed case formulation and treatment adaptation. While this study did not focus on a specific intervention or comparative analysis, it built on previous work by Lutz et al. (2022), also referenced in Higgins et al. (2023), and presented the TTN's application through a clinical case study (Moggia et al. 2022).
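Headline figures such as the HDSS's 91.11% accuracy and 92.30% sensitivity reduce to simple confusion‐matrix arithmetic, sketched here on made‐up labels (not Ngan et al.'s data):

```python
# How accuracy and sensitivity derive from confusion-matrix counts.
def accuracy_sensitivity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)  # share of true at-risk cases the system catches
    return accuracy, sensitivity

y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]  # 1 = at risk of MH problems (hypothetical)
y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]  # hypothetical HDSS-style flags
acc, sens = accuracy_sensitivity(y_true, y_pred)
```

Sensitivity is the clinically salient figure for a screening tool of this kind, since a missed at‐risk patient (a false negative) is costlier than a spurious referral.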
3.1. Thematic Analysis of Research on Decision Support Systems in Mental Health
The findings from the follow‐up review align well with the key themes identified in the previous literature review, particularly concerning the potential and limitations of AI‐based CDSS in MH care. Both reviews underscore the significant promise of AI in improving MH care delivery, with an emphasis on supplementing, rather than replacing, clinical judgment. However, this follow‐up review extends the discussion by incorporating recent studies that further illustrate the practical applications and challenges that AI‐driven systems face.
The current review reinforces the theme of AI's role in alleviating clinicians' workload, as highlighted in Higgins et al. (2023). However, it also builds upon the previous insights by stressing the necessity of rigorous evaluation frameworks, debiasing efforts, and continuous monitoring of these systems to ensure fairness, accuracy, and clinical utility. For instance, Qassim et al. (2023) and Moggia et al. (2022), both from studies that were included in the original review (via Lutz et al. (2022) and Benrimoh et al. (2020)), offer further evidence of the feasibility and acceptability of AI‐driven CDSS in real‐world clinical settings. This continuation of previously reviewed work illustrates how AI tools can be integrated into clinical practice, highlighting the need for clinician engagement and system refinement to improve decision‐making processes. The review strengthens the focus on interdisciplinary collaboration, particularly by incorporating various study designs, such as mixed‐methods approaches (Qassim et al. 2023), and examining CDSS systems across diverse populations and settings. The use of AI in the context of marginalised populations, such as immigrants and refugees (Ngan et al. 2022), adds depth to the conversation on equity and the ethical deployment of AI, a key issue raised in the earlier review.
Additionally, the introduction of advanced ML models, such as the multimodal, multitask learning model explored by Yang et al. (2024), builds upon the previous emphasis on developing interpretable AI models tailored to MH applications. This research (Yang et al. 2024) highlights how such models can be used to predict complex MH outcomes, enhancing the capacity of AI to support clinical decision‐making in rehabilitation contexts. This reinforces the theme of transparency and interpretability from the previous review, where it was noted that clinician trust is pivotal for the successful adoption of AI technologies in MH care.
The ongoing exploration of the TTN, as discussed in Moggia et al. (2022), aligns with earlier considerations on using AI to improve data‐informed case formulation in psychotherapy. While previous literature acknowledged AI's potential to optimise workflows and enhance clinical decision‐making, the case study in this review offers further examples of how these systems can be applied in psychotherapy, though without focusing on specific interventions.
This review builds upon the original findings reported in Higgins et al. (2023) by further demonstrating the practical applications of AI in MH care, expanding the range of study designs and methodologies, and reinforcing the importance of clinician and patient engagement. It also advances the discussion on ethical concerns, particularly around transparency, interpretability, and bias mitigation, while providing new insights into AI's role in addressing MH challenges across diverse populations. These studies collectively emphasise that, while AI has significant potential in MH care, its integration must be carefully managed through ongoing research, collaboration, and a balanced approach that combines technological innovation with workforce development. The included research articles explore the intersection of clinical expertise and data‐driven insights in MH care, particularly through developing and applying CDSS. These systems aim to provide clinicians with tools to enhance decision making, personalise treatment, and potentially improve patient outcomes. A recurring theme is the potential of CDSS to integrate diverse data sources, including patient demographics, diagnostic information, treatment history, and even unstructured text data like clinicians' notes, to generate predictions about various aspects of MH.
3.1.1. Harnessing Data for Personalised Mental Health Care
A central theme across the articles is the pursuit of personalised treatment in MH care. Traditionally, treatment decisions often rely on clinicians' judgement and experience. However, the articles highlight the potential of CDSS to leverage data to tailor treatments to individual patient characteristics and needs. This shift towards data‐informed personalisation is exemplified by systems like Aifred (Qassim et al. 2023), which provide clinicians with remission probabilities for different treatment options based on patient data and predictive algorithms. Similarly, the HDSS described by Ngan et al. (2022) combines domain expertise with data analytics to predict the risk of MH problems in specific populations, such as immigrants and refugees.
3.1.2. The Evolving Role of Technology and Data in Clinical Practice
The research articles collectively illustrate the evolving role of technology and data in MH care. There is a clear emphasis on moving beyond traditional methods of diagnosis and treatment towards more data‐driven approaches. This transition is evident in the development of CDSS, which integrates routine outcome monitoring (ROM) and measurement‐based care (MBC) into clinical workflows (Moggia et al. 2022). ROM and MBC regularly collect and analyse patient data throughout treatment to track progress and inform treatment decisions. By incorporating these data‐driven practices, the articles suggest clinicians can gain a more objective and comprehensive understanding of their patients' needs and progress.
3.1.3. Addressing Challenges and Ethical Considerations
While the potential benefits of CDSS are widely acknowledged, the research articles also address the challenges and ethical considerations associated with their development and implementation. A key concern is ensuring the accuracy, fairness, and clinical utility of these systems. One article emphasises the importance of rigorously evaluating CDSS before widespread adoption to avoid potential biases and ensure they are effective in real‐world clinical settings (Yang et al. 2024). The same study explores the technical aspects of developing a multimodal, multitask learning model for assessing the rehabilitation status of patients with severe mental illness. The authors highlight the complexities of working with multimodal data (e.g., text records and structured diagnostic data) and the need for robust model evaluation to ensure accuracy and fairness.
Beyond technical challenges, the articles also underscore the importance of addressing ethical considerations, such as data privacy and the potential for bias. For instance, the development of the multimodal, multitask learning model included obtaining informed consent from all participants and addressing privacy concerns by ensuring data confidentiality. The researchers also acknowledge the potential for algorithmic bias and the need for ongoing monitoring to mitigate these risks.
3.1.4. Fostering Collaboration and Trust in the Age of AI
A crucial aspect of successfully implementing CDSS in MH care is fostering collaboration and trust between clinicians, patients, and technology. The articles emphasise that CDSS are not intended to replace clinical judgement but rather to augment it by providing clinicians with additional insights and support. For example, one article describes how the TTN provides therapists with theoretical input, examples of therapeutic dialogues, and potential exercises to support their clinical decision‐making in the context of data‐informed psychotherapy (Moggia et al. 2022).
Ultimately, the research articles convey a sense of cautious optimism about CDSS's potential to transform MH care. They highlight the opportunities presented by data‐driven insights while acknowledging the importance of addressing ethical concerns and prioritising patient well‐being. The articles suggest that, by carefully navigating these complexities, the field can harness the power of technology to deliver more personalised, effective, and equitable MH care for all.
4. Discussion: Advancing Artificial Intelligence–Based CDSS in Mental Health Care
The follow‐up review reaffirms and extends the themes identified in the initial literature review (Higgins et al. 2023) on the potential of AI‐based CDSS in MH care. The findings continue to highlight the promise of AI technologies in addressing missed care, alleviating clinicians' workloads, improving diagnostic accuracy, and addressing workforce challenges. However, the newly reviewed studies bring further insights and raise further complexities, particularly concerning the real‐world implementation, ethical considerations, and the need for more rigorous evaluations of AI systems in diverse MH contexts.
4.1. Addressing Missed Care Through Artificial Intelligence
A growing body of literature underscores the capacity of AI technologies to alleviate clinicians' workloads and improve diagnostic accuracy in MH care. Studies such as those by Stein et al. (2022) and Bzdok and Meyer‐Lindenberg (2018) have shown that AI‐driven systems can assist clinicians in decision making by analysing vast amounts of patient data, thus enabling more accurate diagnoses and personalised treatment plans. This aligns with the findings reported in Higgins et al. (2023), which emphasise AI's role in addressing missed care and supporting overburdened clinicians. Baddal et al. (2024) further argue that AI‐based tools can mitigate clinician burnout by automating routine tasks, allowing professionals to focus on more complex aspects of patient care. This suggests that the benefits of AI extend beyond clinical decision making to improving overall healthcare workflow, supporting the workforce challenges highlighted in the original review.
4.2. Building Clinician Trust and System Transparency
One of the key contributions of this review is the deeper exploration of clinician engagement and trust (Higgins et al. 2024), which emerged as critical factors in the first review (Higgins et al. 2023). The original literature review underscored that the successful implementation of AI‐based CDSS in health care hinges on clinician trust, system transparency, and interpretability (Higgins et al. 2024). The current review builds on this by examining studies such as Qassim et al. (2023), which investigated the feasibility and acceptability of the Aifred system in clinical settings. This adds new empirical data on how clinicians interact with AI systems in practice, with a particular focus on patient–clinician interactions, appointment length, and the clinician's confidence in AI‐driven recommendations. These findings confirm the initial review's assertion that system usability and clinicians' involvement in development are essential for successful adoption (Higgins et al. 2024).
The issue of clinician trust and system transparency, as discussed in Higgins et al. (2023) and Higgins et al. (2024), has been further elaborated in recent studies. For instance, Shortliffe and Sepúlveda (2018) argue that the “black box” nature of many AI models contributes to clinicians' hesitancy in adopting these systems. They advocate the development of interpretable AI models that provide clinicians with clear reasoning behind the generated recommendations, a call echoed by Molnar (2020). Molnar's work on explainable AI (XAI) highlights that the integration of transparency‐enhancing techniques, such as local interpretable model‐agnostic explanations or SHapley Additive exPlanations (SHAP), could significantly improve clinician trust. This aligns with the empirical findings from Qassim et al. (2023) and Higgins et al. (2024) suggesting that clinician engagement in the development of AI systems is crucial to ensuring their usability and acceptance in clinical settings.
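The SHAP idea Molnar describes can be illustrated with exact Shapley values for a tiny, hypothetical risk model, computed by enumerating feature coalitions. The model, feature names, and values below are invented for illustration; production tools such as the `shap` library approximate this computation at scale:

```python
# Exact Shapley attribution for a toy linear "risk" model.
# Each feature's value is its weighted average marginal contribution
# across all coalitions of the other features.
from itertools import combinations
from math import factorial

FEATURES = ["phq9_score", "prior_episodes", "sleep_hours"]
W = {"phq9_score": 0.05, "prior_episodes": 0.10, "sleep_hours": -0.03}

def model(x):
    """Hypothetical linear risk score standing in for any CDSS predictor."""
    return sum(W[f] * x[f] for f in FEATURES)

def shapley(x, baseline):
    """Exact Shapley values: 'absent' features take their baseline values."""
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for coal in combinations(others, k):
                with_f = {g: x[g] if (g in coal or g == f) else baseline[g]
                          for g in FEATURES}
                without = {g: x[g] if g in coal else baseline[g]
                           for g in FEATURES}
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (model(with_f) - model(without))
        phi[f] = total
    return phi

x = {"phq9_score": 18, "prior_episodes": 2, "sleep_hours": 4}
baseline = {"phq9_score": 5, "prior_episodes": 0, "sleep_hours": 7}
phi = shapley(x, baseline)
# By construction the attributions sum to model(x) - model(baseline),
# which is what lets a clinician see exactly why a score moved.
```

For a linear model each attribution collapses to the weight times the feature's deviation from baseline; the same coalition machinery is what makes SHAP applicable to the opaque models that raise clinicians' trust concerns.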
4.3. Ethical and Fairness Considerations in Artificial Intelligence
Another notable advancement in this review is the incorporation of research addressing the ethical and fairness concerns surrounding AI in MH care. The earlier review identified these as key areas of concern, particularly regarding algorithmic bias and transparency. The study by Ngan et al. (2022), which focuses on healthcare decision support systems for migrants and refugees, illustrates the importance of addressing equity issues in AI implementation. Ngan et al. (2022) provides insights into the performance of AI systems when applied to vulnerable populations, demonstrating that, while AI can offer substantial benefits, the risk of perpetuating bias remains a significant challenge. The inclusion of this study in the follow‐up review underscores the necessity for ongoing monitoring and the development of robust debiasing strategies, as recommended in the previous literature review (Higgins et al. 2023).
The ethical considerations associated with AI in MH care have garnered increasing attention in recent years. In particular, algorithmic bias, a key concern in the review by Higgins et al. (2023), has been explored in depth by scholars such as Obermeyer et al. (2019) and Gerke et al. (2020). Obermeyer et al. (2019) demonstrated that many AI systems inadvertently perpetuate healthcare inequalities, particularly in marginalised groups, due to biased training datasets. This aligns with the findings of Ngan et al. (2022), whose research on healthcare decision support systems for refugees and migrants highlights the urgent need for bias mitigation strategies in AI deployment. Gerke et al. (2020) further argue that the inclusion of diverse datasets in AI model development is crucial to prevent these systems from reinforcing existing healthcare disparities. These authors collectively reinforce the call for continuous monitoring and the development of robust debiasing frameworks, as discussed in the follow‐up review.
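"Fairness through blindness", as applied in the Liu et al. (2024) model, amounts to excluding the protected attribute from the feature set before training. A minimal sketch, with entirely hypothetical field names and records, follows; note that blindness alone does not remove proxy variables correlated with the protected attribute, which is one reason the literature above still calls for ongoing bias monitoring:

```python
# Sketch of fairness-through-blindness preprocessing: drop protected
# attributes from each record before it reaches the training pipeline.
PROTECTED = {"race", "ethnicity"}

def blind(record):
    """Return a copy of a patient record without protected attributes."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

# Hypothetical EHR-style rows
patients = [
    {"age": 29, "race": "B", "epds_score": 14, "prior_depression": 1},
    {"age": 35, "race": "W", "epds_score": 7, "prior_depression": 0},
]
training_rows = [blind(p) for p in patients]
```

Keeping the original records intact while training on blinded copies also preserves the ability to audit model outputs by group afterwards, which debiasing frameworks recommend.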
4.4. Advancements in Artificial Intelligence Modalities and Model Complexity
The current review also brings to the forefront the growing complexity of AI models being applied in MH care. The research by Yang et al. (2024), which evaluated a novel multimodal, multitask learning model for predicting rehabilitation outcomes in severe mental illness, marks a progression in the sophistication of AI models since the first review. While the original review called for the development of interpretable ML models tailored specifically to MH applications, this new study illustrates the potential of more complex models to integrate diverse data sources, such as clinical notes and structured diagnostic information. Although promising, these advancements also highlight the ongoing challenge of balancing model complexity with interpretability, reinforcing the need for clinician‐friendly AI systems (Higgins et al. 2024).
Furthermore, the advancements in AI complexity, such as the multimodal and multitask learning model explored by Yang et al. (2024), are indicative of broader trends in AI research. Similarly, Miotto et al. (2018) highlight the increasing use of deep learning algorithms in health care to process heterogeneous data sources, such as electronic health records, clinical notes, and imaging data. These models are not only more sophisticated but also offer the potential to uncover new insights into patient care. However, as Miotto et al. (2018) and Kamel Rahimi et al. (2024) both point out, the challenge remains in balancing model complexity with interpretability. Kamel Rahimi et al. (2024) specifically emphasise the need for clinician‐friendly interfaces that allow healthcare professionals to interact with AI systems without needing extensive technical knowledge. This reinforces the findings in Higgins et al. (2023) and Yang et al. (2024) regarding the ongoing challenge of integrating complex AI models into clinical workflows while maintaining transparency and ease of use.
4.5. Artificial Intelligence Support in Mental Health Case Formulation
The role of AI in supporting MH case formulation, as explored by Moggia et al. (2022), complements the findings from the earlier review on AI's ability to optimise clinical workflows. The Trier Treatment Navigator (TTN) provides an example of AI‐driven decision support in psychotherapy, although without the rigorous intervention or comparative analysis frameworks seen in other studies (Moggia et al. 2022). This aligns with the call in Higgins et al. (2023) for the development of interpretable AI systems and demonstrates the continuing evolution of AI applications in diverse MH contexts, including psychotherapy.
AI's role in psychotherapy and MH case formulation is also attracting broader research interest. Studies by Thieme et al. (2023) and Shatte et al. (2019) have highlighted the potential of AI‐driven systems to assist in therapeutic decision making and in personalising treatment interventions. Thieme et al. (2023) argue that AI can be used to analyse session transcripts, identifying patterns and suggesting therapeutic approaches based on the content of conversations. This complements the findings of Moggia et al. (2022) on the TTN system and underscores the continued evolution of AI applications in psychotherapy. However, Shatte et al. (2019) caution that the efficacy of AI in psychotherapy remains contingent on rigorous clinical validation, a point also raised in the original literature review. Shatte et al. (2019) further argue that, without robust intervention frameworks, the use of AI in therapeutic settings could lead to an over‐reliance on technology and a potential loss of the human touch in psychotherapy, a critical element in patient outcomes.
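The kind of transcript analysis described above can be illustrated, in deliberately simplified form, by counting occurrences of theme‐related keywords in a session transcript and surfacing the dominant theme as a prompt for the clinician to review. This toy sketch bears no relation to the systems studied by Thieme et al. (2023); the transcript and keyword lists are invented and carry no clinical validity.

```python
import re
from collections import Counter

# Invented example transcript; real systems would use far richer
# language models rather than keyword matching.
transcript = """
I couldn't sleep again last night. I keep worrying about work,
and when I finally fall asleep I wake up exhausted and anxious.
"""

# Hypothetical theme lexicons, for illustration only.
THEMES = {
    "sleep": {"sleep", "asleep", "exhausted", "night"},
    "anxiety": {"worrying", "anxious", "worry", "panic"},
    "mood": {"sad", "hopeless", "down", "empty"},
}

tokens = re.findall(r"[a-z']+", transcript.lower())
counts = Counter()
for token in tokens:
    for theme, keywords in THEMES.items():
        if token in keywords:
            counts[theme] += 1

# Surface the most frequent theme as a suggestion for the clinician
# to review -- decision support, not an autonomous decision.
top_theme = counts.most_common(1)[0][0] if counts else None
print(f"Most frequent theme: {top_theme}")
```

Even in this caricature, the output is framed as a prompt for human judgement rather than a recommendation, reflecting the caution from Shatte et al. (2019) about over‐reliance on technology in therapeutic settings.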
5. Conclusion: Integrating Artificial Intelligence With Workforce Solutions for Sustainable Care
In conclusion, this review not only confirms the findings from the first review but also adds new layers of complexity and understanding to the discourse on AI‐based CDSS in MH care. However, despite the advances in AI technology, systemic workforce challenges, such as staffing shortages and clinician burnout, remain fundamental drivers of missed care, as noted in Higgins et al. (2023). The review reiterates that AI should be seen as a complement to, rather than a replacement for, human care. As previously highlighted, without addressing these workforce challenges, AI alone cannot resolve the root causes of missed care. The research continues to show significant potential for AI technologies to improve care delivery, reduce clinicians' burden, and enhance decision‐making processes. Nevertheless, it also highlights that the successful deployment of these systems is contingent upon ongoing efforts to address issues of trust, transparency, bias, and workforce integration. Future research must focus on conducting large‐scale studies to ensure these systems can be effectively and equitably implemented in diverse real‐world settings. This balanced approach of integrating AI‐driven solutions with systemic workforce improvements will be essential for achieving sustainable and meaningful improvements in MH care.
Author Contributions
Oliver Higgins: concept development, project design, data collection, data analysis, manuscript preparation. Rhonda L. Wilson: project design, data collection, data analysis, manuscript contribution, supervision of project.
Ethics Statement
The authors have nothing to report.
Conflicts of Interest
Prof. Rhonda Wilson is an Editorial Board Member of the International Journal of Mental Health Nursing.
Acknowledgements
The authors would like to acknowledge the support of Central Coast Local Health District. Open access publishing facilitated by RMIT University, as part of the Wiley ‐ RMIT University agreement via the Council of Australian University Librarians.
Funding: Partial financial support was received from the NSW Ministry of Health as part of the Towards Zero Suicides initiative.
Data Availability Statement
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
References
- Baddal, B., Taner F., and Uzun Ozsahin D. 2024. “Harnessing of Artificial Intelligence for the Diagnosis and Prevention of Hospital‐Acquired Infections: A Systematic Review.” Diagnostics (Basel) 14, no. 5: 484. 10.3390/diagnostics14050484.
- Benrimoh, D., Tanguay‐Sela M., Perlman K., et al. 2020. “Using a Simulation Centre to Evaluate Preliminary Acceptability and Impact of an Artificial Intelligence‐Powered Clinical Decision Support System for Depression Treatment on the Physician‐Patient Interaction.” BJPsych Open 7, no. 1: e22. 10.1192/bjo.2020.127.
- Bzdok, D., and Meyer‐Lindenberg A. 2018. “Machine Learning for Precision Psychiatry: Opportunities and Challenges.” Biological Psychiatry: Cognitive Neuroscience and Neuroimaging 3, no. 3: 223–230. 10.1016/j.bpsc.2017.11.007.
- Gerke, S., Minssen T., and Cohen G. 2020. “Chapter 12—Ethical and Legal Challenges of Artificial Intelligence‐Driven Healthcare.” In Artificial Intelligence in Healthcare, edited by Bohr A. and Memarzadeh K., 295–336. Academic Press. 10.1016/B978-0-12-818438-7.00012-5.
- Higgins, O., Chalup S. K., and Wilson R. L. 2024. “Artificial Intelligence in Nursing: Trustworthy or Reliable?” Journal of Research in Nursing 29, no. 2: 143–153. 10.1177/17449871231215696.
- Higgins, O., Short B. L., Chalup S. K., and Wilson R. L. 2023. “Artificial Intelligence (AI) and Machine Learning (ML) Based Decision Support Systems in Mental Health: An Integrative Review.” International Journal of Mental Health Nursing 32: 966–978. 10.1111/inm.13114.
- Kamel Rahimi, A., Pienaar O., Ghadimi M., et al. 2024. “Implementing AI in Hospitals to Achieve a Learning Health System: Systematic Review of Current Enablers and Barriers.” Journal of Medical Internet Research 26: e49655. 10.2196/49655.
- Liu, Y. F., Joly R., Turchioe M. R., et al. 2024. “Preparing for the Bedside‐Optimizing a Postpartum Depression Risk Prediction Model for Clinical Implementation in a Health System.” Journal of the American Medical Informatics Association 31, no. 6: 1258–1267. 10.1093/jamia/ocae056.
- Lutz, W., Deisenhofer A. K., Rubel J., et al. 2022. “Prospective Evaluation of a Clinical Decision Support System in Psychological Therapy.” Journal of Consulting and Clinical Psychology 90, no. 1: 90–106. 10.1037/ccp0000642.
- Miotto, R., Wang F., Wang S., Jiang X., and Dudley J. T. 2018. “Deep Learning for Healthcare: Review, Opportunities and Challenges.” Briefings in Bioinformatics 19, no. 6: 1236–1246. 10.1093/bib/bbx044.
- Moggia, D., Schaffrath J., Bommer J., Weinmann‐Lutz B., and Lutz W. 2022. “Data‐Informed Case Formulation With the Trier Treatment Navigator.” Revista De Psicoterapia 33, no. 123: 151–171. 10.33898/rdp.v33i123.35971.
- Molnar, C. 2020. Interpretable Machine Learning. Leanpub. https://christophm.github.io/interpretable-ml-book/.
- Ngan, C. K., Paat Y. F., and Green R. 2022. “HDSS: A Healthcare Decision Support System on Combining Domain Knowledge and Data Analytics for Predicting Potential Risk of Mental Health.” International Journal of Applied Decision Sciences 15, no. 4: 465–491. 10.1504/IJADS.2022.123844.
- Obermeyer, Z., Powers B., Vogeli C., and Mullainathan S. 2019. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366, no. 6464: 447–453. 10.1126/science.aax2342.
- Page, M. J., McKenzie J. E., Bossuyt P. M., et al. 2021. “The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews.” British Medical Journal 372: n71. 10.1136/bmj.n71.
- Qassim, S., Golden G., Slowey D., et al. 2023. “A Mixed‐Methods Feasibility Study of a Novel AI‐Enabled, Web‐Based, Clinical Decision Support System for the Treatment of Major Depression in Adults.” Journal of Affective Disorders Reports 14: 100677.
- Shatte, A. B. R., Hutchinson D. M., and Teague S. J. 2019. “Machine Learning in Mental Health: A Scoping Review of Methods and Applications.” Psychological Medicine 49, no. 9: 1426–1448. 10.1017/s0033291719000151.
- Shortliffe, E. H., and Sepúlveda M. J. 2018. “Clinical Decision Support in the Era of Artificial Intelligence.” JAMA 320, no. 21: 2199–2200. 10.1001/jama.2018.17163.
- Stein, D. J., Shoptaw S. J., Vigo D. V., et al. 2022. “Psychiatric Diagnosis and Treatment in the 21st Century: Paradigm Shifts Versus Incremental Integration.” World Psychiatry: Official Journal of the World Psychiatric Association (WPA) 21, no. 3: 393–414. 10.1002/wps.20998.
- Thieme, A., Hanratty M., Lyons M., et al. 2023. “Designing Human‐Centered AI for Mental Health: Developing Clinically Relevant Applications for Online CBT Treatment.” ACM Transactions on Computer‐Human Interaction 30, no. 2: 1–50.
- Whittemore, R., and Knafl K. 2005. “The Integrative Review: Updated Methodology.” Journal of Advanced Nursing 52, no. 5: 546–553. 10.1111/j.1365-2648.2005.03621.x.
- Yang, H. Y., Zhu D., He S. Y., et al. 2024. “Enhancing Psychiatric Rehabilitation Outcomes Through a Multimodal Multitask Learning Model Based on BERT and TabNet: An Approach for Personalized Treatment and Improved Decision‐Making.” Psychiatry Research 336: 115896. 10.1016/j.psychres.2024.115896.