Abstract
Background
Despite a growing body of research into both artificial intelligence (AI) and mental health inpatient flow, few studies adequately combine the two. This review summarises findings from the past five years in the fields of AI in psychiatry and patient flow, identifies links between them, and highlights gaps for future research.
Methods
The OVID database was used to access Embase and Medline. Top journals such as JAMA, Nature and The Lancet were screened for other relevant studies. Selection bias was limited by strict inclusion and exclusion criteria.
Results
3,675 papers were identified in March 2020, of which a limited number focused on AI for mental health unit patient flow. After initial screening, 323 were selected and 83 were subsequently analysed. The literature review revealed a wide range of applications with three main themes: diagnosis (33%), prognosis (39%) and treatment (28%). The main themes that emerged from AI in patient flow studies were: readmissions (41%), resource allocation (44%) and limitations (91%). The review extrapolates those solutions and suggests how they could potentially improve patient flow on mental health units, along with challenges and limitations they could face.
Conclusion
Research widely addresses potential uses of AI in mental health, with some studies focused on its applicability in psychiatric inpatient units; however, research rarely discusses improvements in patient flow. Studies have investigated various uses of AI to improve patient flow across specialities. This review highlights a gap in the research and the unique opportunity it presents.
Keywords: Mental health, Patient flow, Artificial intelligence, National health service, Inpatient units
1. Introduction
Healthcare services face a number of challenges in improving the quality and efficiency of care delivery amid rising costs and demand. Internal inefficiencies, such as poor patient flow, affect patient safety, patient/staff satisfaction, and the overall quality of care and outcomes [1, 2]. Mental health is, by its nature, a particularly complex sector. The growing demand for healthcare coupled with limited resources has created opportunities for digital and technological solutions such as artificial intelligence (AI) to help solve some of these challenges. AI can be used to improve clinical outcomes and patient safety, as well as for cost reductions, population measurement, and advancements in research [3]. Patient flow can be defined as ‘the ability of healthcare systems to manage patients effectively and with minimal delays as they move through stages of care’ [2], with quality and patient satisfaction maintained throughout the process. As such, the concept of using patient flow to improve care has received increasing interest, ‘especially in relation to reductions in patient waiting times for emergency and elective care’ [4]. So far, a significant amount of work and attention has been devoted to implementing AI in the patient-facing environment; however, potential for improvement remains in “back-end” operations and service provision. Although research has been done on possible applications of AI in mental health as well as in patient flow, few studies identify specific opportunities for improving patient flow in inpatient mental health units using AI. A bibliometric analysis conducted by Tran, McIntyre & Latkin et al. [5] found an increasing number of papers over the last few years discussing and evaluating the applications of AI in various aspects of depression and other psychiatric fields, including: “clinical predictive analytics, neuropsychiatric diseases' treatment and healthcare, and biomedical applications”.
However, their keyword and abstract analysis uncovered an absence of literature content addressing the privacy and confidentiality aspects of using AI with patient data, which is an essential limitation of AI to address in the current era of big data.
This review will be a useful addition to the current AI literature given the abundance of information, its rapidly evolving nature, and the lack of consensus on best practices. This paper aims to explore the potential of AI to improve patient flow in mental health inpatient units, further discussing the technical, regulatory and logistical hurdles AI poses, by reviewing the literature from the past five years and identifying gaps for future research.
2. Methodology
A narrative literature review was conducted to bridge the gap between AI in mental health and AI in patient flow (Figures 1 and 2). A content analysis method was adopted in this review as a way to identify, analyse and report patterns uncovered in the literature in the form of themes. The OVID platform was used to access the Embase and Medline databases. Top journals (JAMA, Nature, The Lancet) were screened for the most relevant studies. Selection bias was limited by strict inclusion and exclusion criteria (Appendix Tables 1 & 2). 3,675 papers were identified in March 2020 based on the search strings (Appendix Table 3); 323 were selected after initial screening and 83 were ultimately analysed in this review (Appendix Tables 4 & 5). The papers were then split into two topics: AI in inpatient mental health units (49 papers) and AI in patient flow (34 papers). The number of papers specifically targeting the use of AI for improving patient flow in mental health units was limited.
Figure 1.
PRISMA flowchart of studies selected for this review.
Figure 2.
Flowchart showing the structure of this literature review.
3. Results
3.1. AI in inpatient mental health
Three main themes emerged from the literature for AI in inpatient mental health units: diagnosis, prognosis and treatment.
3.2. Diagnosis
Diagnosis of mental health disorders still lacks objective measures [6], often relying on subjective self-reported questionnaires, which makes misdiagnosis and underdiagnosis commonplace [7]. Problems with diagnosis lead to poor outcomes and resource inefficiencies. Applications of AI in diagnosis can occur at all three stages: prediagnosis, peridiagnosis and postdiagnosis.
3.2.1. Prediagnosis
In the prediagnosis stage, AI could provide assistance in triaging patients and diverting those who do not need interventions [8]. Brodey et al. [9] used machine learning (ML) to validate an early psychosis screener and achieved an area under the curve (AUC) of 0.899 (an AUC of 0.8–0.9 is considered an excellent discriminator) when differentiating low-risk from high-risk individuals. Similarly, Singh et al. [10] developed a triage system for psychiatric cases with an overall classification accuracy of 77%. Although promising, these studies have limited generalisability and require validation on large and carefully chosen samples.
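To make the AUC figures above concrete: the AUC is the probability that a randomly chosen positive case receives a higher screener score than a randomly chosen negative case, with ties counting half. A minimal sketch, using illustrative scores rather than data from the studies cited:

```python
def auc(pos_scores, neg_scores):
    """Probability that a random positive outranks a random negative (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative screener scores for high-risk (positive) and low-risk (negative) individuals
high_risk = [0.91, 0.84, 0.73]
low_risk = [0.62, 0.78, 0.21]
print(round(auc(high_risk, low_risk), 3))  # 8 of 9 pairs ranked correctly -> 0.889
```

An AUC of 1.0 would mean the screener ranks every high-risk individual above every low-risk one; 0.5 is no better than chance.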
3.2.2. Peridiagnosis
In the peridiagnosis stage, AI can help diagnose patients accurately and enable new, objective methods of diagnosis. AI could also enhance our understanding of diseases. A recent study used clustering to identify five psychosis subgroups that had ‘distinctive clinical signatures and illness courses’ [11]. Drysdale et al. [12] defined four novel subtypes of depression using clustering analysis of functional MRI (magnetic resonance imaging) scans; their neuroimaging biomarkers achieved high sensitivity and specificity (82–93%) in multisite and out-of-sample replication. Treatments could be personalised based on the different subtypes.
Many studies were based on neuroimaging data. ML has been used to enhance our understanding of the brain, to distinguish patients from controls and, more recently, to construct predictive models [13]. Koutsouleris et al. [14] used MRI-based multivariate pattern classification to distinguish schizophrenia from major depressive disorder (72%–80% accuracy). Similarly, Lu et al. [15] used ‘support vector machines’ (SVM) to analyse MRIs and discriminate schizophrenia patients from controls (88.4% accuracy). Although these studies showed that ML can be beneficial for neuroanatomical diagnosis in uncertain scenarios, it is important to highlight that studies based on imaging data often lack consistency in the techniques and datasets used [16].
Apart from neuroimaging, other types of data, such as blood inflammatory markers [17] and blood DNA methylation data [18], are being analysed to improve diagnostic accuracy. Liang et al. [19] used ML to develop an objective blood biomarker for hazardous alcohol consumption based on DNA methylation, which achieved higher diagnostic accuracy than traditional self-reporting (73.9% vs 57.5%). Additionally, research has successfully screened and diagnosed patients based on ‘natural language processing’ (NLP) analysis of self-reports [20] and clinical notes [21].
3.2.3. Postdiagnosis
In the postdiagnosis stage, AI could be used for reviews, detection of errors and quality improvement, and monitoring patients’ progress. Research focused on therapy and prognosis will be discussed further on in the review.
3.3. Prognosis
Predictive models are crucial in evidence-based medicine, as they guide healthcare professionals' decisions on investigations and treatments [22]. Currently, predictions are made by assigning a patient to a certain group and referring to that group's averages [23]. Given the multimodality of mental health disorders, current predictions are rarely specific to an individual. ML can enhance understanding of the complex relationships between risk factors and outcomes [8]. Nineteen of the 49 reviewed studies had prognosis as their major theme, and the main subthemes identified were depression, psychosis and suicide.
Several papers focused on the use of AI to predict the risk [24] or severity of depression [25, 26]. Kautzky et al. [27] used 47 clinical and sociodemographic factors to predict treatment-resistant depression using Random Forests (RF) and 10-fold cross-validation (75% accuracy).
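The 10-fold cross-validation design used by Kautzky et al. can be sketched in outline: the data are split into ten folds, each fold serves once as the held-out test set, and the held-out accuracies are averaged. The sketch below uses synthetic labels and a trivial majority-class predictor standing in for the Random Forest, purely to show the validation loop:

```python
def k_fold_accuracy(labels, k=10):
    """Average held-out accuracy of a majority-class predictor over k folds."""
    n = len(labels)
    accuracies = []
    for fold in range(k):
        test = [labels[i] for i in range(n) if i % k == fold]
        train = [labels[i] for i in range(n) if i % k != fold]
        majority = max(set(train), key=train.count)  # "train" the trivial model
        correct = sum(1 for y in test if y == majority)
        accuracies.append(correct / len(test))
    return sum(accuracies) / k

# Synthetic outcome labels: 1 = treatment-resistant, 0 = responder (30 patients)
labels = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0] * 3
print(k_fold_accuracy(labels, k=10))  # averages to 0.7 on this synthetic split
```

Because every patient is tested exactly once on a model that never saw them, the averaged figure estimates out-of-sample accuracy rather than fit to the training data.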
Numerous approaches were taken for psychosis-related predictions. For example, SVM analysis of psychopathological data predicted transition to psychosis with 64.6% accuracy in high-risk patients [28]. To improve the clinical utility of the model, more types of data should be analysed. Fond et al. [29] found that physical aggressiveness and anger were the best predictors of psychosis relapse using ML analysis of neurophysiological, biological and socio-demographic factors (71% sensitivity), while other studies used neuroanatomical data [30]. A novel approach used NLP analysis of unstructured text and speech [16]. Bedi et al. [31] discovered speech features that predicted psychosis development with 100% accuracy; however, further validation will be needed given the study's small sample size.
Inpatient suicide can be difficult to predict and prevent. The current gold standard for suicide screening is an inpatient psychiatric assessment, which has a high level of variability. A number of AI studies aimed to predict the progression from various mental health conditions to suicide [32]. Common methods include analysis of electronic health records (EHR), national registries [33] and self-reported questionnaires. Melhem et al. [34] conducted a longitudinal study in which they developed a risk scale through statistical analysis of self-reports from children whose parents had mood disorders. A risk score of three and above proved to be 87% sensitive and 63% specific for predicting suicide attempts. However, the reliability of self-reported symptoms is limited by the patient's motivation, capacity and self-awareness. Desjardins et al. [35] constructed a suicide risk assessment tool using a neural network model, with 94% accuracy compared to expert psychiatric assessments. This tool also suggests an appropriate intervention based on the risk score, which can be particularly useful in busy wards to help triage high-risk patients. Moreover, it increases efficiency by allowing nurses to conduct the screening regularly without the need for experts in the initial stages. Further research comparing the rates of suicide with and without the tool would be a useful addition to the literature.
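Sensitivity and specificity figures such as the 87%/63% reported for the risk scale follow directly from the confusion counts at the chosen cut-off (here, score ≥ 3). A brief sketch of how such figures are derived; the scores and outcomes below are illustrative, not data from the study:

```python
def sensitivity_specificity(scores, attempted, cutoff=3):
    """Sensitivity and specificity of flagging patients with score >= cutoff."""
    tp = sum(1 for s, y in zip(scores, attempted) if s >= cutoff and y)
    fn = sum(1 for s, y in zip(scores, attempted) if s < cutoff and y)
    tn = sum(1 for s, y in zip(scores, attempted) if s < cutoff and not y)
    fp = sum(1 for s, y in zip(scores, attempted) if s >= cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative risk scores and outcomes (True = later suicide attempt)
scores    = [5, 4, 3, 1, 2, 6, 0, 3, 1, 2]
attempted = [True, True, False, False, False, True, False, True, True, False]
sens, spec = sensitivity_specificity(scores, attempted)
print(round(sens, 2), round(spec, 2))  # 0.8 0.8 on this toy cohort
```

Lowering the cut-off raises sensitivity at the cost of specificity, which is exactly the trade-off a ward must weigh when triaging high-risk patients.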
Researchers often utilise EHR data to predict outcomes. Miotto et al. [36] applied unsupervised learning to the EHRs of more than 76,000 patients encompassing 78 diseases, including schizophrenia, to predict 1-year health outcomes. As any analysis of EHRs is highly reliant on the quality of the primary data, the researchers note that pre-processing of the EHRs was helpful in achieving high accuracy (above 80%). In another study, Menger et al. [37] developed a model to assess inpatient violence risk using ML analysis of EHRs validated at two different sites (AUC from 0.643 to 0.797). Similar research was done by Suchting et al. [38], who identified the strongest predictors of aggressive events on psychiatric units: homelessness, having been convicted of assault, and having witnessed abuse. Walsh et al. [39] predicted future suicide attempts based on EHRs (AUC = 0.84). Some models are already being used in practice, for example in the US Veterans’ Administration system [40]. McCoy et al. [41] used NLP to extract additional information from hospital discharge notes on psychiatric units, specifically signs of sentiment and their correlation with readmission and mortality risks. Greater positive sentiment was associated with decreased risk of readmission and mortality. A valuable addition to the literature would be a similar longitudinal study that also follows up the patients. Recent predictive models have been built on passive (EHR, ECG, online) or active (self-reported) data [42] and, although promising, are often subject to data quality issues.
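The sentiment signal McCoy et al. extracted with NLP can be illustrated with the simplest possible stand-in: a lexicon count over discharge-note text. Real clinical NLP pipelines are far richer; the word lists and example note below are assumptions for illustration only:

```python
# Assumed toy lexicons; a clinical pipeline would use validated sentiment resources
POSITIVE = {"improved", "stable", "engaged", "cooperative", "recovering"}
NEGATIVE = {"agitated", "deteriorated", "withdrawn", "refused", "distressed"}

def sentiment_score(note):
    """Net sentiment of a note: (positive - negative word count) / total words."""
    words = note.lower().split()
    pos = sum(1 for w in words if w in POSITIVE)
    neg = sum(1 for w in words if w in NEGATIVE)
    return (pos - neg) / len(words)

note = "patient improved and remained stable and engaged with therapy"
print(sentiment_score(note) > 0)  # True: net positive sentiment
```

In the study's framing, notes scoring higher on positive sentiment were associated with lower readmission and mortality risk; a deployed model would correlate such scores with outcomes across a whole cohort.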
3.4. Treatment
Pharmaceutical and psychotherapeutic treatments are effective for only 30–50% of mental health patients, and currently there is no objective gold standard for combining psychotherapy and pharmacological treatments [23]. Treatments tailored to patients based on a thorough understanding of the disease and predictions about its development could maximise the likelihood of recovery and optimise resource use. Recent AI research in this field has focused on predicting a patient's response to treatment and on decision support for selecting the most promising intervention for a given patient.
A large part of the literature focuses on neuroimaging. For example, researchers used brain MRI scans to accurately predict response to antidepressants [43, 44], cognitive behavioural therapy [45, 46] and neurostimulation therapy [47]. Although these are promising studies with high accuracy, more research using external validation and larger (ideally prospective) datasets needs to be performed. It is unclear whether the accuracy would be maintained with different scanners and in different populations.
Other studies focused on individualised medication. Research has been particularly successful in predicting responses to drugs. Chekroud et al. [48] used unsupervised ML to measure the responsiveness of symptom clusters to certain medications and found significant differences between the groups. Large datasets and a robust study design ensured high significance and generalisability of the results, which were used in a commercial decision support platform for primary care providers (Spring Health) [23]. However, the researchers used patient-reported outcomes to describe the severity of depression (a 25-item questionnaire), which carry some degree of subjectivity, so the quality of the results depended on the design of those self-reporting questionnaires.
Although depression seems to be researched most frequently, some studies predicted treatment outcomes for patients with substance use disorders [49], schizophrenia [50] and psychosis. For example, Koutsouleris et al. [14] used pre-treatment patient data to predict psychosis outcomes after 12 and 52 weeks with 75% and 73.8% accuracy respectively. Researchers were able to predict risk of non-adherence to treatment and other outcomes using factors such as unemployment, poor education, functional deficits, and more. The high quality of the study was ensured by conducting leave-site-out validation across 44 European sites. Although replication is needed before implementation into clinical practice, the study was a successful attempt to show the usefulness of ML in predicting treatment outcomes.
Some studies focused on genetic biomarkers. Genetic variants have been linked to therapy outcomes, for example in lithium therapy for bipolar disorder [51]. Those results still need to be confirmed (possibly with the use of more modern, robust ML approaches); however, they are a promising step towards finding accurate genetic markers.
Monitoring is an important part of effective therapy. Nurses are required to take regular observations, which range from hourly checks to close-proximity supervision at all times [52]. However, these have also been shown to disturb the patient, particularly at night, lowering their chances of a faster recovery, which can in turn lead to a longer length of stay [53]. Barrera et al. [53] used AI to introduce digitally assisted nursing observations, which improved both patient and staff experiences. The tool allowed nurses to take observations remotely via a sensor that uses computer vision, signal processing and AI to detect the micromovements from which pulse and breathing rate can be calculated. As the study has so far collected only preliminary qualitative data, more research is needed to validate the intervention (e.g. accuracy and health outcome measurement). Digital observations can potentially have broad application as they do not require any patient cooperation, unlike standard methods or wearable technology.
Future work could focus on the discovery of both objective biomarkers for the development of targeted treatments and objective ways to measure a therapy's effectiveness [53]. Studies on AI in therapy aim to enhance decisions and personalise interventions to improve recovery and allocate resources efficiently. The most widely researched conditions include depression, psychosis, bipolar disorder, schizophrenia and substance misuse disorders.
3.5. AI in patient flow
Clinical patient data has been widely used to predict outcomes such as length of stay and costs [36, 54, 55, 56]. Recommendations and guidelines were made based on accurate predictions of such measures [57, 58, 59].
Factors such as biomarkers, sociodemographic characteristics, lifestyle and co-morbidities are used to estimate prognosis, costs, length of stay and risk of readmission [36, 54, 55, 56, 60, 61]. These important clinical predictions can then serve as quality indicators and improve resource allocation, for example through better bed occupancy, more accurate local funding and personalised post-discharge care packages [62, 63, 64].
Three main themes emerged in the literature for AI in patient flow: bed occupancy, use of EHR and risk of readmission.
3.5.1. Bed occupancy
Bed occupancy has been used as a measure of quality of care in the NHS, and the Royal College of Psychiatrists [65] recommends a maximum occupancy level of 85%. AI solutions could enhance efficient bed allocation by ‘preventing avoidable admissions, reducing variation in LOS (length of stay) and improving discharge of patients’ [66].
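The 85% recommendation can be monitored with a simple occupancy calculation; an AI forecaster would feed predicted admissions and discharges into the same arithmetic. A minimal sketch, with illustrative figures:

```python
def occupancy_rate(occupied_bed_days, available_beds, days):
    """Mean bed occupancy over a period, as a fraction of total capacity."""
    return occupied_bed_days / (available_beds * days)

# Illustrative ward: 20 beds over a 30-day month, 552 occupied bed-days
rate = occupancy_rate(552, available_beds=20, days=30)
print(f"{rate:.0%}")  # 92%
print(rate > 0.85)    # True: above the Royal College of Psychiatrists' 85% ceiling
```

In practice the value of an AI model lies upstream of this calculation: predicting the admissions and lengths of stay that determine the occupied bed-days before they occur.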
3.5.2. Use of EHRs
EHRs are an important source of data for AI models that aim to improve patient flow. For example, Wolff et al. [67] analysed EHR data, applying statistical models to discover hidden trends and predict length of stay. Kovalchuk et al. [68] used ‘a combination of data, text, process mining techniques, and ML approaches for the analysis of EHRs with discrete-event simulation and queueing theory’ to build a framework for different patient pathways. This simulation of patient flow was implemented in the clinical setting to predict the outcomes of each pathway, in turn reducing length of stay.
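The queueing-theory component that Kovalchuk et al. combined with EHR mining can be sketched as a deterministic discrete-event simulation: patients arrive, wait if all beds are occupied, and are admitted as beds free up. The bed count, arrival times and lengths of stay below are assumptions for illustration:

```python
import heapq

def simulate_waits(arrivals, lengths_of_stay, beds):
    """Return each patient's wait (days) given arrival times, LOS and bed count."""
    free_at = [0.0] * beds  # time at which each bed next becomes free
    heapq.heapify(free_at)
    waits = []
    for arrive, los in zip(arrivals, lengths_of_stay):
        bed_free = heapq.heappop(free_at)  # earliest-available bed
        start = max(arrive, bed_free)      # admit on arrival, or when a bed frees up
        waits.append(start - arrive)
        heapq.heappush(free_at, start + los)
    return waits

# Three beds; arrival day and LOS (days) for six patients
arrivals = [0, 0, 1, 2, 3, 4]
los      = [7, 5, 6, 3, 4, 2]
print(simulate_waits(arrivals, los, beds=3))
```

Running such a simulation under different admission and discharge policies is how a patient-flow framework can compare pathways before any change is made on the ward.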
3.5.3. Risk of readmission
Many AI models have been able to predict LOS and evaluate the risk of readmission at the time of discharge [56, 69, 70]. Purushotham et al. [71] used linear regression to benchmark four clinical predictions: risk of mortality, LOS, physiological decline and phenotype classification. The model was designed to overcome the barrier of single-prediction-at-a-time by formulating ‘a heterogeneous multitask’ algorithm that predicts all four factors simultaneously. In a recent study, Alaeddini et al. [72] challenged the approach and suggested that risk predictions of LOS are dynamic and nonlinear. Researchers argued that post-discharge monitoring is vital and risk prediction should be made on an individual basis. They investigated the timing of discharge to optimise the discharge process, reducing the likelihood of adverse events post-discharge [72].
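The regression benchmarking of Purushotham et al. rests on the same least-squares machinery as any single-task model. A one-variable sketch predicting LOS from an admission severity score, using synthetic data and the closed-form ordinary least squares solution:

```python
def fit_ols(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x, slope

# Synthetic: admission severity score vs length of stay (days)
severity = [1, 2, 3, 4, 5]
los = [4, 6, 8, 10, 12]  # perfectly linear for the sketch: LOS = 2 + 2 * severity
intercept, slope = fit_ols(severity, los)
print(intercept, slope)  # 2.0 2.0
```

A multitask algorithm of the kind Purushotham et al. describe shares structure across several such targets (mortality, LOS, decline, phenotype) rather than fitting each in isolation, which is precisely what Alaeddini et al. argue needs to become dynamic and nonlinear.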
Donzé et al. [69] used the HOSPITAL score to predict avoidable readmissions. The scoring system was used to categorise patients into different risk clusters based on their biochemistry results and hospital care history. This “nip-it-in-the-bud” approach has the potential to reduce the burden of emergency admissions on hospitals by predicting and deploying interventions before admission. Some patients may have a shorter LOS, but they require more treatment and care, whereas others may stay for longer, but need less intensive monitoring. The key is to ensure that all services of the hospital are utilised without exceeding its limits [73].
Even with the advancement in digital technologies and data analysis, clinical predictions can still be problematic and can differ from hospital to hospital [67]. A 2019 report suggested that more accurate and complex prediction models need to be applied to healthcare; the “untapped potential” in patient data cannot be unlocked without a joint effort from both the clinicians as well as the technicians. Most healthcare professionals lack the skills and tools, and the highly fragmented analytical community also limits the opportunity of making such high-quality complex analytical models [74].
4. Discussion
This literature review shows that research investigating the use of AI to improve mental health diagnostics, prognostics and treatment rarely took patient flow into account as one of the potential measures of improvement, or even acknowledged it as a problem to be solved. However, knowing the stages of patient flow, one could argue that innovations in those areas can have an indirect positive effect on it. The inpatient journey begins at admission, and thus an ML-enhanced triage system could improve the efficiency of the flow and reduce workload by channelling patients to the right services and potentially replacing triage interviews with self-report screening.
Development of objective, quick-to-administer, accurate, AI-enhanced diagnostic tools could also ultimately improve patient flow by speeding up processes and time-to-treatment. Research suggests that such tools could be developed from a combination of patient data such as neuroimaging, biomarkers and self-reports. Husain, Yu & Tran et al. determined functional near-infrared spectroscopy (fNIRS) to be cost-effective compared with the traditional fMRIs used in psychiatry discussed above. fNIRS also offers additional benefits to patients as it does not require the use of ionising radiation, restraints or loud noise [75]. While fMRIs show notable changes in the brains of depressed patients, they are significantly more expensive than fNIRS and, due to their lack of mobility, pose an accessibility issue. A systematic review conducted by Ho, Lim & Lim et al. [76] suggested increased uptake of fNIRS in psychiatry as a diagnostic and predictive tool for major depressive disorder, due to the consistent patterns it uncovered in the brains of depressed patients. Adopting ML to read these fNIRS scans could lead to quicker diagnosis and thus earlier administration of accurate treatment, eliminating the current trial-and-error approach and the problem of misdiagnosis. Prognostic solutions could also have an impact on patient flow. As discussed, predictive tools have a wide range of applications and could aid earlier intervention and prevention as well as the planning and delivery of services.
Applying AI in treating mental health could also have an impact on patient flow. Studies show that, with the use of AI, it may be possible to offer appropriate treatment options to those who are most likely to benefit from them, which could ultimately improve recovery and thus reduce readmission and length of stay, consequently improving patient flow.
The studies that focused on patient flow were rarely conducted in the context of mental health units, however they reveal that AI could be a useful tool to enhance the flow of patients. The same indicators - LOS, risk of readmission - could be used on inpatient mental health units to improve resource allocation which could lead to better outcomes in bed occupancy, more accurate local funding, and personalised post-discharge care packages.
These projections are theoretical and more research needs to be done to investigate the use of AI in mental health inpatient units to improve patient flow. Prediction of LOS is incredibly complex and multidimensional [77]. Moreover, mental health inpatient patient flow is complex and difficult to measure with a single variable. In this literature review, we have outlined numerous applications of AI in mental health and patient flow and discussed how those could potentially be combined to improve outcomes on mental health inpatient units, however further research is needed. It is imperative that a congruent, dynamic and multifactorial model is designed which encompasses the entire patient journey including all of a patient's personal factors.
4.1. Technical hurdles
A useful model needs to be accurate, externally validated in a large, heterogeneous population, and to demonstrate improvements in clinical outcomes [8]. Liu et al. [8] evaluated studies that used AI in medical imaging and concluded that less than 1% of the papers were of sufficiently robust design to be included in the meta-analysis [78]. Initiatives that improve the reporting, transparency and quality of prediction models, such as TRIPOD [79] or CONSORT [80] and their specific extensions for AI (TRIPOD-ML, CONSORT-AI), should be widely implemented to decrease the risk of bias and ensure clinical usefulness. Sultan et al. [81] outlined two main barriers to external validation: the heterogeneity of datasets and the methodological challenges. Data sharing initiatives and “big data” need to be put in place in the future to ensure that algorithms are trained on large and varied datasets [82]. The choice of model architecture needs to be appropriate for the setting, and more modern ML techniques are preferable [8]. With the fast pace of technological development and new AI approaches, timely updating of models is crucial [83].
4.2. Regulatory & ethical hurdles
A lack of clear regulations and guidelines complicates the development of AI models for healthcare [84] and can lead to ethical issues [83]. The advancement of AI diagnosis, treatment and prognosis predictions may result in inequality of care, as hospitals may preferentially treat less severe cases with better predicted outcomes [64]. Poorly representative training datasets (e.g. based solely on white British men) can lead to biased algorithms [85]. Appropriate measures need to be put in place to minimise the risk of bias, for example the US FDA (Food & Drug Administration) “excellence criteria” for assessing medical software [86]. Medical data handling is also controversial. Mental health data is particularly sensitive, and any data disclosure could have significant consequences. He et al. [87] highlighted concerns around the use of AI in clinical settings such as data sharing, transparency of decision-making, and patient safety. The European GDPR (General Data Protection Regulation) sets consent and data-processing requirements and standards to address data-sharing challenges in legislation [85]. The issue of transparency relates to “black-box” algorithms that are not fully interpretable, meaning developers cannot understand how the algorithm derived specific outputs from the given inputs. This raises questions regarding clinical decisions supported by black-box AI and concerns about doctors becoming over-reliant on AI. Finally, there are issues regarding liability: who is responsible for medical errors that occur when AI is used in clinical contexts [88]?
4.3. Logistic hurdles
Implementation of a new technology may come at great monetary cost and time disruption for health providers [88]. The complexity of hospital environments can mean significant difficulties with any potential change. Often, new solutions require training, and the transition period may be disruptive. For example, when introducing EHRs, some hospitals required double documentation in the initial stages to prevent data loss, which increased the administrative workload for HCPs (healthcare professionals). Cresswell et al. [89] categorised the factors that contribute to the slow implementation of information technology in hospitals as strategic, organisational, social and technical. Oftentimes, benefits are difficult to measure or appear over a longer period of time, which can be discouraging. Studies have tried to determine factors that contribute to successful implementation, such as perceived ease-of-use and perceived usefulness [90]. Technological solutions may have poor usability, functionality and performance, which can lead to frustration and even threaten patient safety [89]. Sheikh et al. [91] found that some technological solutions decreased face-to-face contact amongst HCPs and between HCPs and patients. On the other hand, Harvey et al. [92] showed that when a site receives appropriate support from a national agency when implementing technology, receptiveness is high.
In order for the solution to be adopted, it needs to be perceived as clinically useful and time-efficient rather than disrupting HCPs. Collaboration between medical professionals, academia, and technical experts is required to ensure the right solutions are being implemented with the right support for the hospital staff.
4.4. Patient experience
Although research on AI applications in healthcare is accelerating, we still lack an understanding of the impact of such solutions on patients' experiences; however, experts have speculated about potential benefits as well as risks.
ML-enhanced triaging systems could aid in directing patients straight to the healthcare services most suitable for their needs. This would reduce the number of times patients move between services before reaching the right care service, which could lead to shorter waiting times, better clinical outcomes and higher satisfaction. Moreover, predictive and diagnostic tools could also improve clinical outcomes and thus potentially improve patients' experiences. Predictive AI tools offering earlier intervention and prevention can forecast which patients are most at risk of adverse psychiatric events, allowing for earlier contact and intervention-based care and reducing the detrimental consequences these conditions have when treated later on. A potential benefit of AI is the personalisation of care and the elimination of a “one-size-fits-all” approach to treatment. By taking into account all of a patient's personal factors and tailoring management plans to them, there is potential for greater patient engagement, reduced non-compliance and improved long-term outcomes. Finally, some administrative jobs could be automated with AI, granting healthcare professionals more time to spend with their patients, probably the most important factor for patient experience and satisfaction. These additional patient contact hours foster better doctor-patient relationships, resulting in better therapeutic outcomes in psychiatric care [93] and raising staff morale and retention.
With all this being said, there is also a risk that AI could adversely affect relationships between patients and healthcare practitioners. If AI is not adequately implemented, it could create more work for doctors, who would need to divert time from their patients to troubleshoot problems or complete additional administrative tasks. Ineffective implementation of technology can result in less patient contact, as concluded by Sheikh et al. [91]. The impact such logistical hurdles could have on HCPs, and consequently on their patients, might be detrimental, so the benefits of AI need to be weighed thoroughly against the risks.
Patient trust in healthcare AI and the patient's relationship with their doctor have to be considered as a two-way street: one can affect the other. The doctor-patient relationship has been described as the “heart and art of medicine” [94], with trust at its core [95]. There are three key routes to doctor-patient trust: the licensure and regulation of doctors as having expert knowledge in the field, the values placed on a doctor due to their position in society, and repeated experience of a doctor's abilities [96]. AI has the potential to disrupt all three in both positive and negative ways, for example through patients' willingness to disclose more socially negative information to AI than to humans [97], by raising questions around the regulation of AI, and by asking whether AI can hold these same social values.
The way in which AI would be integrated into doctor-patient conversations must also be considered, as this constant dialogue and information transfer is at the epicentre of the patient experience in healthcare. As patients become more aware of the increasing use of AI in healthcare, concerns remain prevalent: in one poll, 43% of Europeans considered AI mostly harmful, compared with 38% who considered it mostly helpful [98]. Lack of trust and of human touch have been identified as drivers of negative perceptions [99], which could lead to patients becoming less open and thus impair clinicians' ability to treat them effectively. With the frequency, magnitude, and resulting financial losses of healthcare data breaches and misuse on the rise, largely due to increased incidents of hacking [100], it is unsurprising that one study found only 10.2% of patients would be willing to share their anonymous data with “a tech company, for the purposes of improving health care” [101].
Given the potential benefits of the technology, these risks need to be addressed as technologies are developed and evaluated. Regulatory and legal frameworks are needed to ensure the safe development and implementation of AI technologies, and alternatives need to be offered continually while AI-assisted healthcare becomes the norm. Patient trust in, and attitudes towards, their doctors, the NHS as a whole, and other healthcare providers could be enhanced by positive patient experiences with AI, or marred by negative ones.
5. Conclusions
The literature review of AI in mental health revealed that AI could be used to improve diagnostic accuracy, personalise treatment, and predict clinical outcomes to ensure the timely delivery of interventions, with the main objective often being improvement in the quality of patient care. The literature review on AI in patient flow explored predicting avoidable readmissions, improving care efficiency, optimising resource allocation, reducing length of stay, and validating existing algorithms for more generalised purposes. In both areas, studies vary in their accuracy and generalisability. There is a need for further research focused specifically on patient flow in mental health units. Although this review focused on solutions in the hospital setting, it is important to highlight that the future of healthcare will involve high integration between services, so any community intervention will have a significant indirect impact on inpatient patient flow.
These potential AI implementations can affect patients' experience either positively or negatively. Further studies should examine the patient perspective on the integration of AI in healthcare, as patient experience is a key consideration, especially in a patient-centred approach. Addressing patient concerns, in particular maintaining patient autonomy over sensitive data, is crucial to wider-scale implementation of AI, especially in the field of psychiatry.
The use of AI in psychiatric healthcare remains largely unexplored. Important aspects such as patient experience, clinical significance, and ethical considerations require further study and evaluation.
Declarations
Author contribution statement
All authors listed have significantly contributed to the development and the writing of this article.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability statement
No data was used for the research described in the article.
Declaration of interests statement
The authors declare no conflict of interest.
Additional information
No additional information is available for this paper.
Appendix A. Supplementary data
The following is the supplementary data related to this article:
Appendix Table 1-5 HLY e06626 _V2
References
- 1. Tlapa D., Zepeda-Lugo C.A., Tortorella G.L., Baez-Lopez Y.A., Limon-Romero J., Alvarado-Iniesta A. Effects of lean healthcare on patient flow: a systematic review. Value in Health. 2020;23(2):260–273. doi: 10.1016/j.jval.2019.11.002.
- 2. NHS Improvement. Good Practice Guide: Focus on Improving Patient Flow. 2017. Available from: https://improvement.nhs.uk/documents/1426/Patient_Flow_Guidance_2017___13_July_2017.pdf
- 3. NHSx. The NHS Long Term Plan. 2019. Available from: https://www.longtermplan.nhs.uk/wp-content/uploads/2019/01/nhs-long-term-plan-june-2019.pdf
- 4. Mental Health Foundation. What Are Mental Health Problems? 2019. Available from: https://www.mentalhealth.org.uk/your-mental-health/about-mental-health/what-are-mental-health-problems
- 5. Tran B.X., McIntyre R.S., Latkin C.A., Phan H.T., Vu G.T., Nguyen H.L.T. The current research landscape on the artificial intelligence application in the management of depressive disorders: a bibliometric analysis. Int. J. Environ. Res. Publ. Health. 2019;16(12):2150. doi: 10.3390/ijerph16122150.
- 6. Fakhoury M. Artificial intelligence in psychiatry. Front. Psychiatr. 2019:119–125. doi: 10.1007/978-981-32-9721-0_6.
- 7. Al-Huthail Y.R. Accuracy of referring psychiatric diagnosis. Int. J. Health Sci. 2008;2(1):35–38. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3068718/
- 8. Liu Y., Chen P.-H.C., Krause J., Peng L. How to read articles that use machine learning: users' guides to the medical literature. JAMA. 2019;322(18):1806–1816. doi: 10.1001/jama.2019.16489.
- 9. Brodey B.B., Girgis R.R., Favorov O.V., Addington J., Perkins D.O., Bearden C.E. The Early Psychosis Screener (EPS): quantitative validation against the SIPS using machine learning. Schizophr. Res. 2018;197:516–521. doi: 10.1016/j.schres.2017.11.030.
- 10. Singh V.K., Shrivastava U., Bouayad L., Padmanabhan B., Ialynytchev A., Schultz S.K. Machine learning for psychiatric patient triaging: an investigation of cascading classifiers. J. Am. Med. Inf. Assoc. 2018;25(11):1481–1487. doi: 10.1093/jamia/ocy109.
- 11. Dwyer D.B., Kalman J.L., Budde M., Kambeitz J., Ruef A., Antonucci L.A. An investigation of psychosis subgroups with prognostic validation and exploration of genetic underpinnings: the PsyCourse study. JAMA Psychiatr. 2020;77(5):523–533. doi: 10.1001/jamapsychiatry.2019.4910.
- 12. Drysdale A.T., Grosenick L., Downar J., Dunlop K., Mansouri F., Meng Y. Resting-state connectivity biomarkers define neurophysiological subtypes of depression. Nat. Med. 2016;23(1):28–38. doi: 10.1038/nm.4246.
- 13. Woo C.-W., Chang L.J., Lindquist M.A., Wager T.D. Building better biomarkers: brain models in translational neuroimaging. Nat. Neurosci. 2017;20(3):365–377. doi: 10.1038/nn.4478.
- 14. Koutsouleris N., Meisenzahl E.M., Borgwardt S., Riecher-Rössler A., Frodl T., Kambeitz J. Individualized differential diagnosis of schizophrenia and mood disorders using neuroanatomical biomarkers. Brain. 2015;138(7):2059–2073. doi: 10.1093/brain/awv111.
- 15. Lu X., Yang Y., Wu F., Gao M., Xu Y., Zhang Y. Discriminative analysis of schizophrenia using support vector machine and recursive feature elimination on structural MRI images. Medicine. 2016;95(30). doi: 10.1097/MD.0000000000003973.
- 16. Shatte A.B.R., Hutchinson D.M., Teague S.J. Machine learning in mental health: a scoping review of methods and applications. Psychol. Med. 2019;49(9):1426–1448. doi: 10.1017/S0033291719000151.
- 17. Powell T., Gaspar H.A., Chung R., Keohane A., Gunasinghe C., Uher R. A study of 42 inflammatory markers in 321 control subjects and 887 major depressive disorder cases: the role of BMI and other confounders, and the prediction of current depressive episode by machine learning. Eur. Neuropsychopharmacol. 2019;29:S908.
- 18. Chen J., Zang Z., Braun U., Schwarz K., Harneit A., Kremer T. Association of a reproducible epigenetic risk profile for schizophrenia with brain methylation and function. JAMA Psychiatr. 2020. doi: 10.1001/jamapsychiatry.2019.4792.
- 19. Liang X., Justice A.C., So-Armah K., Krystal J.H., Sinha R., Xu K. DNA methylation signature on phosphatidylethanol, not on self-reported alcohol consumption, predicts hazardous alcohol consumption in two distinct populations. Mol. Psychiatr. 2020:1–16. doi: 10.1038/s41380-020-0668-x.
- 20. He Q., Veldkamp B.P., Glas C.A.W., de Vries T. Automated assessment of patients' self-narratives for posttraumatic stress disorder screening using natural language processing and text mining. Assessment. 2017;24(2):157–172. doi: 10.1177/1073191115602551.
- 21. Tran T., Kavuluru R. Predicting mental conditions based on “history of present illness” in psychiatric notes with deep neural networks. J. Biomed. Inf. 2017;75S:S138–S148. doi: 10.1016/j.jbi.2017.06.010.
- 22. Fusar-Poli P., Hijazi Z., Stahl D., Steyerberg E. The science of prognosis in psychiatry: a review. JAMA Psychiatr. 2018. doi: 10.1001/jamapsychiatry.2018.2530.
- 23. Dwyer D.B., Falkai P., Koutsouleris N. Machine learning approaches for clinical psychology and psychiatry. Annu. Rev. Clin. Psychol. 2018;14(1):91–118. doi: 10.1146/annurev-clinpsy-032816-045037.
- 24. Dipnall J.F., Pasco J.A., Berk M., Williams L.J., Dodd S., Jacka F.N. Getting RID of the blues: formulating a Risk Index for Depression (RID) using structural equation modeling. Aust. N. Z. J. Psychiatr. 2017;51(11):1121–1133. doi: 10.1177/0004867417726860.
- 25. Hatton C.M., Paton L.W., McMillan D., Cussens J., Gilbody S., Tiffin P.A. Predicting persistent depressive symptoms in older adults: a machine learning approach to personalised mental healthcare. J. Affect. Disord. 2019;246:857–860. doi: 10.1016/j.jad.2018.12.095.
- 26. Kessler R.C., van Loo H.M., Wardenaar K.J., Bossarte R.M., Brenner L.A., Cai T. Testing a machine-learning algorithm to predict the persistence and severity of major depressive disorder from baseline self-reports. Mol. Psychiatr. 2016;21(10):1366–1371. doi: 10.1038/mp.2015.198.
- 27. Kautzky A., Dold M., Bartova L., Spies M., Vanicek T., Souery D. Refining prediction in treatment-resistant depression: results of machine learning analyses in the TRD III sample. J. Clin. Psychiatr. 2018;79(1). doi: 10.4088/JCP.16m11385.
- 28. Mechelli A., Lin A., Wood S., McGorry P., Amminger P., Tognin S. Using clinical information to make individualized prognostic predictions in people at ultra high risk for psychosis. Schizophr. Res. 2017;184:32–38. doi: 10.1016/j.schres.2016.11.047.
- 29. Fond G., Bulzacka E., Boucekine M., Schürhoff F., Berna F., Godin O. Machine learning for predicting psychotic relapse at 2 years in schizophrenia in the national FACE-SZ cohort. Prog. Neuro Psychopharmacol. Biol. Psychiatr. 2019;92:8–18. doi: 10.1016/j.pnpbp.2018.12.005.
- 30. Chung Y., Addington J., Bearden C.E., Cadenhead K., Cornblatt B., Mathalon D.H. Use of machine learning to determine deviance in neuroanatomical maturity associated with future psychosis in youths at clinically high risk. JAMA Psychiatr. 2018;75(9):960–968. doi: 10.1001/jamapsychiatry.2018.1543.
- 31. Bedi G., Carrillo F., Cecchi G.A., Slezak D.F., Sigman M., Mota N.B. Automated analysis of free speech predicts psychosis onset in high-risk youths. npj Schizophrenia. 2015;1(1):1–7. doi: 10.1038/npjschz.2015.30.
- 32. Torous J., Larsen M.E., Depp C., Cosco T.D., Barnett I., Nock M.K. Smartphones, sensors, and machine learning to advance real-time prediction and interventions for suicide prevention: a review of current progress and next steps. Curr. Psychiatr. Rep. 2018;20(7). doi: 10.1007/s11920-018-0914-y.
- 33. Gradus J.L., Rosellini A.J., Horváth-Puhó E., Street A.E., Galatzer-Levy I., Jiang T. Prediction of sex-specific suicide risk using machine learning and single-payer health care registry data from Denmark. JAMA Psychiatr. 2019. doi: 10.1001/jamapsychiatry.2019.2905.
- 34. Melhem N.M., Porta G., Oquendo M.A., Zelazny J., Keilp J.G., Iyengar S. Severity and variability of depression symptoms predicting suicide attempt in high-risk individuals. JAMA Psychiatr. 2019;76(6):603–613. doi: 10.1001/jamapsychiatry.2018.4513.
- 35. Desjardins I., Cats-Baril W., Maruti S., Freeman K., Althoff R. Suicide risk assessment in hospitals. J. Clin. Psychiatr. 2016;77(7):e874–e882. doi: 10.4088/JCP.15m09881.
- 36. Miotto R., Li L., Kidd B.A., Dudley J.T. Deep patient: an unsupervised representation to predict the future of patients from the electronic health records. Sci. Rep. 2016;6(1). doi: 10.1038/srep26094.
- 37. Menger V., Spruit M., van Est R., Nap E., Scheepers F. Machine learning approach to inpatient violence risk assessment using routinely collected clinical notes in electronic health records. JAMA Network Open. 2019;2(7). doi: 10.1001/jamanetworkopen.2019.6709.
- 38. Suchting R., Green C.E., Glazier S.M., Lane S.D. A data science approach to predicting patient aggressive events in a psychiatric hospital. Psychiatr. Res. 2018;268:217–222. doi: 10.1016/j.psychres.2018.07.004.
- 39. Walsh C.G., Ribeiro J.D., Franklin J.C. Predicting risk of suicide attempts over time through machine learning. Clin. Psychol. Sci. 2017;5(3):457–469.
- 40. Lyon J. New data on suicide risk among military Veterans. JAMA. 2017;318(16):1531. doi: 10.1001/jama.2017.15982.
- 41. McCoy T.H., Castro V.M., Cagan A., Roberson A.M., Kohane I.S., Perlis R.H. Sentiment measured in hospital discharge notes is associated with readmission and mortality risk: an electronic health record study. PloS One. 2015;10(8). doi: 10.1371/journal.pone.0136341.
- 42. Ebert D.D., Harrer M., Apolinário-Hagen J., Baumeister H. Digital interventions for mental disorders: key features, efficacy, and potential for artificial intelligence applications. Front. Psychiatr. 2019:583–627. doi: 10.1007/978-981-32-9721-0_29.
- 43. Williams L.M., Korgaonkar M.S., Song Y.C., Paton R., Eagles S., Goldstein-Piekarski A. Amygdala reactivity to emotional faces in the prediction of general and medication-specific responses to antidepressant treatment in the randomized iSPOT-D trial. Neuropsychopharmacology. 2015;40(10):2398–2408. doi: 10.1038/npp.2015.89.
- 44. Crane N.A., Jenkins L.M., Bhaumik R., Dion C., Gowins J.R., Mickey B.J. Multidimensional prediction of treatment response to antidepressants with cognitive control and functional MRI. Brain. 2017;140(2):472–486. doi: 10.1093/brain/aww326.
- 45. Hahn T., Kircher T., Straube B., Wittchen H.-U., Konrad C., Ströhle A. Predicting treatment response to cognitive behavioral therapy in panic disorder with agoraphobia by integrating local neural information. JAMA Psychiatr. 2015;72(1):68–74. doi: 10.1001/jamapsychiatry.2014.1741.
- 46. Whitfield-Gabrieli S., Ghosh S.S., Nieto-Castanon A., Saygin Z., Doehrmann O., Chai X.J. Brain connectomics predict response to treatment in social anxiety disorder. Mol. Psychiatr. 2016;21(5):680–685. doi: 10.1038/mp.2015.109.
- 47. Bhugra D., Tasman A., Pathare S., Priebe S., Smith S., Torous J. The WPA-lancet psychiatry commission on the future of psychiatry. Lancet Psychiatr. 2017;4(10):775–818. doi: 10.1016/S2215-0366(17)30333-4.
- 48. Chekroud A.M., Gueorguieva R., Krumholz H.M., Trivedi M.H., Krystal J.H., McCarthy G. Reevaluating the efficacy and predictability of antidepressant treatments. JAMA Psychiatr. 2017;74(4):370. doi: 10.1001/jamapsychiatry.2017.0025.
- 49. Acion L., Kelmansky D., van der Laan M., Sahker E., Jones D., Arndt S. Use of a machine learning framework to predict substance use disorder treatment success. PloS One. 2017;12(4). doi: 10.1371/journal.pone.0175383.
- 50. Cao B., Cho R.Y., Chen D., Xiu M., Wang L., Soares J.C. Treatment response prediction and individualized identification of first-episode drug-naïve schizophrenia using brain functional connectivity. Mol. Psychiatr. 2020;25(4):906–913. doi: 10.1038/s41380-018-0106-5.
- 51. Hou L., Heilbronner U., Degenhardt F., Adli M., Akiyama K., Akula N. Genetic variants associated with response to lithium treatment in bipolar disorder: a genome-wide association study. Lancet. 2016;387(10023):1085–1093. doi: 10.1016/S0140-6736(16)00143-4.
- 52. NHS Solent. Psychiatric Observations and Engagement. 2018. Available from: https://www.solent.nhs.uk//media/1191/psychiatric-observations-and-engagement-policy.pdf
- 53. Barrera A., Gee C., Wood A., Gibson O., Bayley D., Geddes J. Introducing artificial intelligence in acute psychiatric inpatient care: qualitative study of its use to conduct nursing observations. Evid. Base Ment. Health. 2020;23(1):34–38. doi: 10.1136/ebmental-2019-300136.
- 54. Zhen X., Lundborg C.S., Zhang M., Sun X., Li Y., Hu X. Clinical and economic impact of methicillin-resistant Staphylococcus aureus: a multicentre study in China. Sci. Rep. 2020;10(1):1–8. doi: 10.1038/s41598-020-60825-6.
- 55. Harutyunyan H., Khachatrian H., Kale D.C., Steeg G.V., Galstyan A. Multitask learning and benchmarking with clinical time series data. Sci. Data. 2019;6(1):96. doi: 10.1038/s41597-019-0103-9.
- 56. Rahimian F., Salimi-Khorshidi G., Payberah A.H., Tran J., Ayala Solares R., Raimondi F. Predicting the risk of emergency admission with machine learning: development and validation using linked electronic health records. PLoS Med. 2018;15(11). doi: 10.1371/journal.pmed.1002695.
- 57. Tomašev N., Glorot X., Rae J.W., Zielinski M., Askham H., Saraiva A. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature. 2019;572(7767):116–119. doi: 10.1038/s41586-019-1390-1.
- 58. Komorowski M., Celi L.A., Badawi O., Gordon A.C., Faisal A.A. The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care. Nat. Med. 2018;24(11):1716–1720. doi: 10.1038/s41591-018-0213-5.
- 59. Park J.H., Cho H.E., Kim J.H., Wall M.M., Stern Y., Lim H. Machine learning prediction of incidence of Alzheimer's disease using large-scale administrative health data. npj Digital Med. 2020;3(1):1–7. doi: 10.1038/s41746-020-0256-0.
- 60. Davoren M., Byrne O., O'Connell P., O'Neill H., O'Reilly K., Kennedy H.G. Factors affecting length of stay in forensic hospital setting: need for therapeutic security and course of admission. BMC Psychiatr. 2015;15(1). doi: 10.1186/s12888-015-0686-4.
- 61. Durojaiye A., McGeorge N., Puett L., Stewart D., Fackler J., Hoonakker P. Mapping the flow of pediatric trauma patients using process mining. Appl. Clin. Inf. 2018;9(3):654–666. doi: 10.1055/s-0038-1668089.
- 62. Anselmi L., Everton A., Shaw R., Suzuki W., Burrows J., Weir R. Estimating local need for mental healthcare to inform fair resource allocation in the NHS in England: cross-sectional analysis of national administrative data linked at person level. Br. J. Psychiatr. 2019:1–7. doi: 10.1192/bjp.2019.185.
- 63. NHS England. Developing a Capitated Payment Approach for Mental Health: Detailed Guidance. 2016. Available from: https://improvement.nhs.uk/documents/491/Developing_a_capitated_payment_approach_for_mental_health_FINAL.pdf
- 64. Macdonald A., Elphick M. Combining routine outcomes measurement and “Payment by Results”: will it work and is it worth it? Br. J. Psychiatr. 2011. doi: 10.1192/bjp.bp.110.090993.
- 65. Royal College of Psychiatrists. Bed Occupancy across Mental Health Trusts. Mental Health Watch. Available from: https://mentalhealthwatch.rcpsych.ac.uk/indicators/bed-occupancy-across-mental-health-trust
- 66. Adlington K., Brown J., Ralph L., Clarke A., Bhoyroo T., Henderson M. Better care: reducing length of stay and bed occupancy on an older adult psychiatric ward. BMJ Open Quality. 2018;7(4). doi: 10.1136/bmjoq-2017-000149.
- 67. Wolff J., McCrone P., Patel A., Kaier K., Normann C. Predictors of length of stay in psychiatry: analyses of electronic medical records. BMC Psychiatr. 2015;15(1). doi: 10.1186/s12888-015-0623-6.
- 68. Koblauch H., Reinhardt S.M., Lissau W., Jensen P.-L. The effect of telepsychiatric modalities on reduction of readmissions in psychiatric settings: a systematic review. J. Telemed. Telecare. 2016;24(1):31–36. doi: 10.1177/1357633X16670285.
- 69. Donzé J.D., Williams M.V., Robinson E.J., Zimlichman E., Aujesky D., Vasilevskis E.E. International validity of the HOSPITAL score to predict 30-day potentially avoidable hospital readmissions. JAMA Int. Med. 2016;176(4):496–502. doi: 10.1001/jamainternmed.2015.8462.
- 70. Islam M., Hasan M., Wang X., Germack H., Noor-E-Alam M. A systematic review on healthcare analytics: application and theoretical perspective of data mining. Healthcare. 2018;6(2):54. doi: 10.3390/healthcare6020054.
- 71. Purushotham S., Meng C., Che Z., Liu Y. Benchmarking deep learning models on large healthcare datasets. J. Biomed. Inf. 2018;83:112–134. doi: 10.1016/j.jbi.2018.04.007.
- 72. Alaeddini A., Helm J., Shi P., Faruqui S. An integrated framework for reducing hospital readmissions using risk trajectories characterization and discharge timing optimization. IISE Transactions on Healthcare Systems Engineering. 2019. doi: 10.1080/24725579.2019.1584133.
- 73. Karakusevic S. Briefing: Understanding Patient Flow in Hospitals. 2016. Available from: http://finnamore.co/images/pdfs/understanding_patient_flow_in_hospitals_web_1.pdf
- 74. Bardsley, Steventon, Fothergill. Untapped potential: investing in health and care data analytics. The Health Foundation. Available from: https://www.health.org.uk/publications/reports/untapped-potential-investing-in-health-and-care-data-analytics
- 75. Husain S.F., Yu R., Tang T.-B., Tam W.W., Tran B., Quek T.T. Validating a functional near-infrared spectroscopy diagnostic paradigm for Major Depressive Disorder. Sci. Rep. 2020;10(1). doi: 10.1038/s41598-020-66784-2.
- 76. Ho C.S.H., Lim L.J.H., Lim A.Q., Chan N.H.C., Tan R.S., Lee S.H. Diagnostic and predictive applications of functional near-infrared spectroscopy for major depressive disorder: a systematic review. Front. Psychiatr. 2020;11. doi: 10.3389/fpsyt.2020.00378.
- 77. Mental Health Foundation. What are mental health problems? Available from: https://www.mentalhealth.org.uk/your-mental-health/about-mental-health/what-are-mental-health-problems
- 78. Faes L., Liu X., Wagner S.K., Fu D.J., Balaskas K., Sim D.A. A clinician's guide to artificial intelligence: how to critically appraise machine learning studies. Trans. Vis. Sci. Techn. 2020;9(2):7. doi: 10.1167/tvst.9.2.7.
- 79. Moons K.G.M., Altman D.G., Reitsma J.B., Ioannidis J.P.A., Macaskill P., Steyerberg E.W. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): explanation and elaboration. Ann. Intern. Med. 2015;162(1):W1. doi: 10.7326/M14-0698.
- 80. Begg C. Improving the quality of reporting of randomized controlled trials. JAMA. 1996;276(8):637. doi: 10.1001/jama.276.8.637.
- 81. Sultan A.A., West J., Grainge M.J., Riley R.D., Tata L.J., Stephansson O. Development and validation of risk prediction model for venous thromboembolism in postpartum women: multinational cohort study. BMJ. 2016;355:i6253. doi: 10.1136/bmj.i6253.
- 82. Davenport T., Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc. J. 2019;6(2):94–98. doi: 10.7861/futurehosp.6-2-94.
- 83. Kelly C.J., Karthikesalingam A., Suleyman M., Corrado G., King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17(1). doi: 10.1186/s12916-019-1426-2.
- 84. Reed C. How should we regulate artificial intelligence? Phil. Trans. Math. Phys. Eng. Sci. 2018;376(2128):20170360. doi: 10.1098/rsta.2017.0360.
- 85. Vayena E., Blasimme A., Cohen I.G. Machine learning in medicine: addressing ethical challenges. PLoS Med. 2018;15(11). doi: 10.1371/journal.pmed.1002689.
- 86. Lee T.T., Kesselheim A.S. U.S. Food and Drug Administration precertification pilot program for digital health software: weighing the benefits and risks. Ann. Intern. Med. 2018;168(10):730. doi: 10.7326/M17-2715.
- 87. He J., Baxter S.L., Xu J., Xu J., Zhou X., Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 2019;25(1):30–36. doi: 10.1038/s41591-018-0307-0.
- 88. Reddy S., Allan S., Coghlan S., Cooper P. A governance model for the application of AI in health care. J. Am. Med. Inf. Assoc. 2019. doi: 10.1093/jamia/ocz192.
- 89. Cresswell K.M., Sheikh A. Health information technology in hospitals: current issues and future trends. Fut. Hosp J. Roy. Coll. Phys. 2015;2(1):50–56. doi: 10.7861/futurehosp.2-1-50.
- 90. Ruiz Morilla M.D., Sans M., Casasa A., Giménez N. Implementing technology in healthcare: insights from physicians. BMC Med. Inf. Decis. Making. 2017;17(1). doi: 10.1186/s12911-017-0489-2.
- 91. Sheikh A., Cornford T., Barber N., Avery A., Takian A., Lichtner V. Implementation and adoption of nationwide electronic health records in secondary care in England: final qualitative results from prospective national evaluation in “early adopter” hospitals. BMJ. 2011;343:d6054. doi: 10.1136/bmj.d6054.
- 92. Harvey G., Llewellyn S., Maniatopoulos G., Boyd A., Procter R. Facilitating the implementation of clinical technology in healthcare: what role does a national agency play? BMC Health Serv. Res. 2018;18. doi: 10.1186/s12913-018-3176-9.
- 93. Lakdawala P. Doctor-patient relationship in psychiatry. Mens Sana Monogr. 2015;13(1):82. doi: 10.4103/0973-1229.153308.
- 94. Ha J.F., Longnecker N. Doctor-patient communication: a review. Ochsner J. 2010;10(1):38–43. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3096184/
- 95. Goold S.D., Lipkin M. The doctor-patient relationship. J. Gen. Intern. Med. 1999;14(S1):S26–S33. doi: 10.1046/j.1525-1497.1999.00267.x.
- 96. LaRosa E., Danks D. Impacts on trust of healthcare AI. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES '18). 2018.
- 97. Lucas G.M., Gratch J., King A., Morency L.-P. It's only a computer: virtual humans increase willingness to disclose. Comput. Hum. Behav. 2014;37:94–100.
- 98. Neudert L.-M., Knuutila A., Howard P. Global Attitudes towards AI, Machine Learning & Automated Decision Making. 2020. Available from: https://oxcaigg.oii.ox.ac.uk/wp-content/uploads/sites/124/2020/10/GlobalAttitudesTowardsAIMachineLearning2020.pdf
- 99. Gao S., He L., Chen Y., Li D., Lai K. Public perception of artificial intelligence in medical care: content analysis of social media. J. Med. Internet Res. 2020;22(7). doi: 10.2196/16649.
- 100. Seh A.H., Zarour M., Alenezi M., Sarkar A.K., Agrawal A., Kumar R. Healthcare data breaches: insights and implications. Healthcare. 2020;8(2):133. doi: 10.3390/healthcare8020133.
- 101. Ghafur S., Van Dael J., Leis M., Darzi A., Sheikh A. Public perceptions on data sharing: key insights from the UK and the USA. Lancet Dig. Health. 2020. doi: 10.1016/S2589-7500(20)30161-8.