INTRODUCTION
Globally, one in every three individuals experiences a mental illness during their lifetime. Low- and middle-income countries (LAMICs) bear 80% of the mental health disease burden. The stigma associated with mental disorders results in delayed help-seeking, reduced access to health services, suboptimal treatment, poor outcomes, and an increased risk of human rights violations. The actual burden is arguably far higher: when the overlap between psychiatric and neurological disorders is accounted for, mental health disorders (MHDs) account for 32.4% of total years of health lost due to disability (YLDs). Depression, anxiety disorders, bipolar disorder, schizophrenia and other psychoses, dementia, substance use disorders, attention-deficit/hyperactivity disorder, and developmental disorders, including autism, are the leading contributors to mental health morbidity. It is estimated that by 2030, depression will be the third and second highest cause of disease burden in low-income and middle-income countries, respectively.[1]
Moreover, mental health challenges have grown in recent decades, with rises in suicide, substance use, and loneliness that were worsened by the coronavirus disease 2019 (COVID-19) pandemic. The delivery of mental health care is further hampered by a shortage of mental health professionals, stigma, and a lack of facilities. Artificial intelligence (AI) presents a potential response to this shortage of mental health professionals, and it is increasingly employed in healthcare fields such as oncology, radiology, ophthalmology, and dermatology.[2]
WHAT IS AI
AI is a field of science and engineering concerned with the computational understanding of what is commonly called intelligent behavior and with the creation of artifacts that exhibit such behavior. AI systems are programs that enable computers to function in ways that would be considered intelligent if done by people.[3] AI builds on several underlying techniques, including image processing, computer vision, artificial neural networks, machine learning (ML), deep learning (DL), and natural language processing [Table 1]. Among these, ML is at the heart of AI. ML comprises various methods of enabling an algorithm to learn: it is the process of fitting predictive models to data or of identifying informative groupings within data. ML attempts to approximate or imitate, computationally and objectively, humans' ability to recognize patterns.[4] The styles of ML most commonly used in healthcare are supervised, semi-supervised, unsupervised, deep, and reinforcement learning.[5] A major strength of AI is its rapid pattern analysis of large datasets. AI is used in health care for the early detection of diseases, better understanding of disease progression, optimization of medication and treatment dosages, and discovery of novel treatments.[5] Additionally, intelligent systems are increasingly used to support clinical decision-making, as AI-powered machines can rapidly synthesize information from a vast number of medical information sources. AI performs best on very large datasets (e.g., electronic health records) that can be analyzed computationally to reveal trends and associations in human behavior that are often hard for humans to extract.
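As a concrete, if simplified, illustration of "fitting predictive models to data," the sketch below trains a supervised classifier on synthetic tabular features and scores it on held-out cases. The features, labels, and dataset size are hypothetical stand-ins, not data from any study cited here.

```python
# A minimal sketch of supervised machine learning: fit a predictive
# model to data, then check it on held-out cases. All data here are
# synthetic stand-ins for structured clinical features (e.g., from
# electronic health records); nothing is drawn from the cited studies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # 500 hypothetical patients, 4 features
# Outcome depends on two of the features plus noise.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)  # "fitting a predictive model to data"
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```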
Table 1.
A. Image processing: The AI transforms an image or pattern of measurements through a specific mathematical algorithm; essentially, the input is a picture and the output is a better-defined picture for a specific applied purpose.
B. Computer vision: The AI identifies the content of an image input and provides an output in response.
C. Artificial neural networks (ANNs): These imitate the human brain in processing several types of data and creating patterns for decision-making. In an ANN, the input is passed through a set of algorithms whose output is fed into a further set of algorithms, and so on, until the final output is reached.
D. Machine learning: The ability of a computer to learn from experience, that is, to modify its processing based on newly acquired information. This can rest on a simple decision tree of if-then rules leading to a conclusion, or on DL algorithms.
E. Deep learning: A subset of machine learning structured similarly to human brain processing. Multiple data inputs are considered at the same time, evaluated, and reprocessed for second, third, and further evaluations until an output is reached. Each evaluation is conducted in a different layer, meaning that it operates on the output of the previous layer. These intermediate layers of computation are called hidden layers because their inputs and outputs are not directly visible.
F. Natural language processing (NLP): How computers process and analyze human language in the form of unstructured text, including language translation, information extraction, and semantic understanding.
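To make the "hidden layers" idea in item E of Table 1 concrete, the toy sketch below passes one input vector through stacked layers, each operating on the previous layer's output. The layer sizes and random weights are arbitrary illustrations, not a trained model.

```python
# A toy forward pass through "hidden layers" (Table 1, item E): each
# layer's output becomes the next layer's input. Sizes and random
# weights are arbitrary illustrations, not a trained clinical model.
import numpy as np

rng = np.random.default_rng(1)

def layer(inp, n_out):
    """One fully connected layer with a ReLU nonlinearity."""
    W = rng.normal(size=(n_out, inp.size))
    return np.maximum(0.0, W @ inp)  # this layer's inputs/outputs stay "hidden"

x = rng.normal(size=8)   # one input vector with 8 features
h1 = layer(x, 16)        # first hidden layer processes the raw input
h2 = layer(h1, 16)       # second hidden layer processes the first layer's output
out = layer(h2, 1)       # final output is computed from the last hidden layer
print(out)
```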
AI algorithms can perform as well as, or better than, experienced clinicians in evaluating images for abnormalities or for subtleties undetectable to the human eye (e.g., identifying a person's sex from retinal images).[5] Among the different branches of medicine, AI-driven pattern recognition has been most successfully applied in ophthalmology, cancer detection, and radiology.[5]
AI IN PSYCHIATRY
Mental illnesses pose a heavy burden on society at large. AI appears to hold great potential in psychiatry, given the growing need for, and use of, AI bots to manage psychiatric symptoms and augment therapeutic treatments [Table 2].
Table 2.
Applications of AI in health and in mental health and illness
AI has been used to support mental health care in three main ways: digital phenotyping through personal sensing, natural language processing, and chatbots.
Digital phenotyping analyzes data from personal devices for mental health markers: activity measured by Global Positioning System (GPS) geolocation, sleep parameters, typing speed, interactions on social media, and so on. For example, digital phenotyping can assess a person's activity level, interaction patterns, and time spent on social media to indicate whether the person is in a manic or depressive phase of illness.
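A minimal sketch of what such phenotyping might compute follows: two crude behavioral markers (daily movement radius and night-time phone use) derived from raw GPS and screen logs. The log formats, coordinates, and thresholds are hypothetical illustrations, not any deployed system's schema.

```python
# Hedged sketch of digital phenotyping: derive simple behavioral
# markers from raw GPS/screen logs. Field names, coordinates, and
# thresholds are hypothetical; real systems use validated features.
from datetime import datetime
from math import dist

gps_log = [  # (timestamp, x_km, y_km) -- toy coordinates, not real GPS
    (datetime(2024, 1, 1, 9), 0.0, 0.0),
    (datetime(2024, 1, 1, 13), 2.5, 1.0),
    (datetime(2024, 1, 1, 22), 0.1, 0.2),
]
screen_on = [datetime(2024, 1, 1, 2), datetime(2024, 1, 1, 3), datetime(2024, 1, 1, 14)]

# Movement radius: max distance from the day's first recorded location.
home = gps_log[0][1:]
radius_km = max(dist(home, (x, y)) for _, x, y in gps_log)

# Night-time screen events (midnight to 6 am) as a crude sleep-disruption proxy.
night_use = sum(1 for t in screen_on if t.hour < 6)

print(f"movement radius: {radius_km:.1f} km, night screen events: {night_use}")
```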
Natural language processing (NLP) analyzes and tracks the use of language on the social media platforms (Facebook, WhatsApp, Instagram, etc.) that most people use on their smartphones to communicate, shop, and work. Based on this analysis, mental health conditions can be monitored, depressive content with suicidal ideation can be flagged, and preventive action can be initiated.
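The sketch below illustrates the shape of such a pipeline (text, features, risk score) with a tiny bag-of-words classifier. The training examples are toy illustrations; a real monitoring system would require clinically validated data, calibration, and human review before any action is taken.

```python
# A hedged sketch of NLP-based monitoring of posts: a tiny bag-of-words
# classifier scores short texts for concerning content. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "had a great day out with friends",
    "excited about the new project at work",
    "feeling hopeless and empty again",
    "I cannot see the point of going on",
]
train_labels = [0, 0, 1, 1]  # 1 = content to flag for human review

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

for post in ["feeling hopeless today", "a great day at the park"]:
    score = clf.predict_proba([post])[0, 1]  # estimated probability of flagging
    print(f"{post!r}: {score:.2f}")
```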
Chatbots are already used in industries such as banking and aviation as well as in mental health care. They can interact with patients, ask screening questions, and direct them to appropriate courses of action, such as psychotherapy and/or pharmacotherapy. Chatbots are now capable of delivering psychotherapy in ways that patients may be unable to distinguish from a human psychotherapist.
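A minimal sketch of the "ask screening questions, then route" pattern follows. The questions and routing rules are illustrative stand-ins, not a validated instrument such as the PHQ-9, and any real deployment would need clinical oversight.

```python
# Minimal rule-based screening chatbot sketch: ask questions, then
# route the user. Questions and thresholds are hypothetical examples.
QUESTIONS = [
    "Over the last two weeks, have you often felt down or hopeless? (y/n) ",
    "Have you had little interest or pleasure in doing things? (y/n) ",
]

def screen() -> str:
    score = sum(input(q).strip().lower().startswith("y") for q in QUESTIONS)
    if score == 0:
        return "No concerns flagged. Self-help resources may be enough."
    if score == 1:
        return "Some symptoms reported; consider guided self-help or therapy."
    return "Several symptoms reported; please contact a mental health professional."

if __name__ == "__main__":
    print(screen())
```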
Woebot is an automated conversational application, available through Facebook Messenger or mobile apps, that provides tools automating the process of cognitive behavioral therapy (CBT). It was developed to monitor symptoms and manage episodes of anxiety and depression through learned skills, such as identifying and challenging cognitive distortions.[6] In a randomized controlled trial (RCT), 70 subjects were randomized to Woebot or to a self-help e-book on depression; the Woebot group reported a significant decrease in depression compared with the e-book group.[7]
Tess is another program, available through a phone number, that uses text messaging to coach individuals through times of emotional distress. It allows the user to have therapeutic conversations similar to those they would have with a psychologist and delivers emotional wellness coping strategies.[8]
A similar approach underlies Avatar therapy, in which computer-generated faces driven by intelligent algorithms interact with patients with schizophrenia. Patients undergo six ten-minute sessions in which they confront an avatar representing their persecutory voice hallucinations, gradually learning to gain control over the distressing voices. Initial studies have shown that Avatar therapy decreases the distress patients feel in relation to their voices, the frequency of hearing voices, and the extent to which patients feel overwhelmed by them.[9]
AI-enabled robots have also been studied as aids for children with autism spectrum disorders (ASDs) in education and therapy. Robots such as Kaspar and Nao can teach social skills to children and help them with facial recognition and appropriate gaze response, with initial studies reporting that children with ASDs perform better with the robotic intervention than with human therapists.[10]
Another application that has made an impact on these individuals is Apple's virtual assistant Siri, which can engage children who have ASDs and accommodate the hyper-focus on specific interests that can come with the disorder. Humans may not have the desire or patience to engage with children in the minutiae they are focusing on, but Siri does. Through such engagements with AI assistants, children can develop the skills necessary for social interaction without negative repercussions for the social faux pas that inevitably occur. Siri can be of great help by providing the child with a safe learning environment and the patience necessary to practice these skills.[11]
Research in embodied AI has increasing clinical relevance for therapeutic applications in mental health services. With innovations ranging from "virtual psychotherapists" to social robots in dementia and autism care and robots for sexual disorders, artificially intelligent virtual and robotic agents are increasingly taking on high-level therapeutic interventions that used to be offered exclusively by highly trained, skilled health professionals. To enable responsible clinical implementation, the ethical and social implications of the increasing use of embodied AI in mental health must be identified and addressed.[6] AI-supported, virtually embodied psychotherapeutic devices are developing rapidly.
Often cited as digital tools for reaching underserved populations across the world that lack mental health services, such bots can explain to users the clinical terms for what they are experiencing, such as cognitive distortions, or provide concrete advice for recognizing and dealing with difficult situations.[12]
The use of AI in mental health is expected to increase access to care: it can reduce lengthy and costly travel to centrally located mental health clinics, enable overburdened mental health professionals to extend the reach of their services, and help people whose conditions restrict their ability to travel, such as patients with agoraphobia or physical health problems and persons who struggle to build in-person connections.[13] AI can also improve access by circumventing some of the stigma surrounding mental illness and can enable greater personalization of mental health care. Sophisticated AI analysis of large volumes of data can improve the prediction of therapeutic response and side-effect profiles across medications, potentially reducing the number of medication trials a patient must undergo. The use of AI and NLP can increase objectivity in the assessment of patients with mental illness, and data from sensors and smartphone applications can improve the monitoring of patients in the community. AI can also be used to monitor and improve medication adherence, while physiological data from wearable sensors can provide objective measures of mental health and support the delivery of well-established interventions (e.g., cognitive behavior therapy through chatbots).[13]
AI IN WELL-BEING
As mentioned earlier, sensor-based data can help monitor human well-being: sleeping and eating habits, physical activity, social interactions and networks, exposure to green spaces, noise, and air pollution, cognitive functioning, and the practice of relaxation.
Replika is a smartphone application that allows users to have conversations about themselves and thereby gain a better understanding of their own good qualities. Replika reconstructs a footprint of the user's personality from the text conversations the user has with their avatar. One of the strongest draws of Replika is that users can have vulnerable conversations with their avatar without fear of judgment. Much as in therapy sessions with a psychiatrist or personal conversations with a trusted friend, the avatar can hold therapeutic conversations and help users gain insight into their own personality.[14]
In addition to AI designed to replicate human processes, clinicians and scientists have explored the use of intelligent animal-like robots to improve psychiatric outcomes, such as reducing stress, loneliness, and agitation and improving mood. Companion bots, such as Paro, a robotic seal, and eBear, an expressive bear-like robot, interact with patients and provide the benefits of animal therapy. Paro has already been used to help patients with dementia who may be isolated or experiencing feelings of depression. Sex robots are being used for sex education, and AI-powered companions are being used to combat loneliness. However, there are concerns that excessive reliance on chatbots and similar technologies may instead increase loneliness and social isolation.
It is therefore important to recognize both the advantages and the disadvantages of using AI in mental health [Table 3].
Table 3.
Advantages and disadvantages of the use of AI in mental health
AI models trained on imaging data acquired in one setting may generalize poorly to other practice settings, locations, and patient populations. Models built on training data that are not representative of the target population, case mix, modalities, and acquisition protocols can compromise performance and confidence in their use, particularly if overfitting has occurred.[15]
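The sketch below reproduces this failure mode on synthetic data: a model trained at one "site" learns a site-specific shortcut feature (e.g., a scanner artifact correlated with the label) and degrades badly at a second site where the shortcut no longer holds. The sites, features, and correlations are all synthetic assumptions for illustration.

```python
# Hedged sketch of poor generalization across settings: the model
# overfits a site-specific shortcut and collapses at an external site.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_site(n, shortcut_sign):
    signal = rng.normal(size=n)                     # the true disease signal
    y = (signal > 0).astype(int)
    # Site artifact: tracks the label at the training site (+1),
    # but is reversed at the external site (-1).
    shortcut = shortcut_sign * (2 * y - 1) + rng.normal(scale=0.3, size=n)
    noisy_signal = signal + rng.normal(scale=2.0, size=n)
    X = np.column_stack([noisy_signal, shortcut, rng.normal(size=(n, 3))])
    return X, y

X_a, y_a = make_site(2000, shortcut_sign=+1.0)      # training site
X_b, y_b = make_site(2000, shortcut_sign=-1.0)      # external validation site

model = LogisticRegression().fit(X_a, y_a)
print(f"Site A accuracy: {model.score(X_a, y_a):.2f}")  # looks excellent
print(f"Site B accuracy: {model.score(X_b, y_b):.2f}")  # collapses under shift
```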
ETHICAL ISSUES
At the same time, the use of AI in mental health raises significant ethical challenges [Table 4].
Table 4.
Ethical challenges in the use of AI in mental health: patient safety; privacy; responsibility; justice; autonomy; data ownership
Privacy protection typically concerns regulating access to a person or to what is known about him or her. Such access may involve the individual's right to bodily privacy, personal information, property or place of habitation, or control of his or her name, image, or likeness. Informational privacy is protected by data security measures that ensure reasonable steps are taken to prevent protected health information from being accessed by individuals who have no right to it or need to know it. In AI applications in particular, clinical and business entities must protect such data from hackers and be careful not to place protected data on insecure or vulnerable servers.[17]
Deep learning is data-hungry. Scientists developing deep learning applications will gladly take hundreds of thousands of cases to develop and test new tools, especially given their familiarity with databases such as ImageNet, which now holds over 14 million images. The desire to create and market new AI applications in medicine has created a demand and marketplace for patient-derived data. However, ownership of and the rights to use these data are complex and vary by jurisdiction, sometimes depending on the degree to which the data have been de-identified or anonymized.[18,19]
Many psychiatrists have expressed concern about the therapeutic relationship between patient and therapist, a vital factor in treating psychiatric illness. AI may save time and treatment costs, but it appears to lack the empathy required for developing rapport. Facial expressions, body posture, and speech interpretation, although basic elements of the mental status examination, do not by themselves suffice to reach a diagnosis, and an AI relying on them alone may make mistakes. A smaller proportion of psychiatrists supported the role of AI, emphasizing that AI is nonjudgmental in diagnosing a case and that the chances of countertransference are zero.[20]
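To make the de-identification point concrete, here is a minimal, hypothetical sketch of scrubbing a record before it is shared for model development. Real standards (e.g., HIPAA Safe Harbor) cover far more fields, a salted hash or lookup table would be safer than the plain hash used here, and free-text notes would still need NLP-based scrubbing.

```python
# Hedged sketch of basic de-identification: strip direct identifiers
# and coarsen quasi-identifiers. Field names and values are invented.
import hashlib

record = {  # hypothetical patient record
    "name": "Jane Doe",
    "mrn": "00123456",
    "birth_date": "1958-06-14",
    "zip": "98101",
    "note": "reports low mood and poor sleep",
}

def deidentify(rec: dict) -> dict:
    return {
        # Unsalted hash shown for brevity; real pipelines should salt it.
        "pseudo_id": hashlib.sha256(rec["mrn"].encode()).hexdigest()[:12],
        "birth_year": rec["birth_date"][:4],  # keep year only
        "zip3": rec["zip"][:3],               # truncate ZIP code
        "note": rec["note"],                  # free text still needs scrubbing
    }

print(deidentify(record))
```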
A further problem is that many algorithms and technologies are proprietary and opaque, which makes their evaluation and reproducibility difficult.
FUTURE OF AI: ARTIFICIAL WISDOM (AW)
AI is modeled after human intelligence, and today's AI can accomplish various concrete tasks far more quickly than a human by replicating discrete human intelligence skills, such as processing speed, memory, quantitative reasoning, visuospatial ability, auditory processing, and comprehension knowledge. AI will continue to improve and may develop into superintelligence. Yet it lacks the ability to make compassionate, fair, and equitable decisions: AI cannot self-reflect, self-correct, or weigh diversity among people and perspectives, ethics, and morality. Reframing such future AI for what it should really be, that is, AW, highlights the limitations of today's AI and the wisdom it will need to offer in the future. Human wisdom is a multicomponent personality trait that includes prosocial behaviors, such as empathy and compassion, emotional regulation, self-reflection (with self-correction), acceptance of uncertainty and diversity of perspectives, social decision-making, and perhaps spirituality.[21] Wisdom, rather than intelligence, is associated with greater individual and societal well-being. In all likelihood, only humans can be truly wise, as unique human characteristics, including consciousness, autonomy, and will, are key to cultivating wisdom. The notion of creating AI that shares our societal values and could be considered wise is a relatively new area of exploration.[22,23,24] Future AI will need some measure of emotional intelligence,[25] morality,[26] and empathy.[27] Paiva et al.[28] defined "empathic agents" as "agents that have the capacity to place themselves into the position of a user's or another agent's emotional situation and respond appropriately." It is improbable that human wisdom could be fully programmed into a robot, but partial examples exist in the form of robotic social workers and physical therapists deployed in nursing homes,[29,30] social robots for loneliness, and robots providing cognitive assistance to older adults. These tools illustrate efforts to build computers whose actions employ wise principles and result in wise acts.
Ethical and wise AI, that is, AW, will help promote individual and societal well-being.[31,32,33] As noted above, freedom from bias is essential for widespread practical use. The evolution of AW will require active collaboration among computer scientists, engineers, psychiatrists, psychologists, neuroscientists, and ethicists.
CONCLUSION
Common concerns about AI today often focus on the “four horsemen of the AI apocalypse”: loss of jobs for humans, unethical decision-making, hostile robot-led takeovers, and uninterpretable “black-box” decision-making.[34]
These fears can be countered by the hope of developing behavior-based digital biomarkers, redefining diagnoses, detecting mental illnesses earlier, building continuous learning systems that assess patients in context, giving patients and clinicians tools to better understand an illness and themselves, personalizing diagnosis and treatment, and embedding computational models that make mental health care safer, more efficient, and more personalized.
This dichotomy between fears and aspirations can be reframed by considering ways to develop AW to support mental healthcare. The path to achieving sustainability, implementation, and clinician and patient acceptance will require transparency, trust, and wisdom.[2]
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
REFERENCES
1. Javed A, Lee C, Zakaria H, Buenaventura RD, Cetkovich-Bakmas M, Duailibi K, Ng B, Ramy H, et al. Reducing the stigma of mental health disorders with a focus on low- and middle-income countries. Asian J Psychiatr. 2021;58:102601. doi: 10.1016/j.ajp.2021.102601.
2. Lee EE, Torous J, De Choudhury M, Depp CA, Graham SA, Kim HC, et al. Artificial intelligence for mental health care: Clinical applications, barriers, facilitators, and artificial wisdom. Biol Psychiatry Cogn Neurosci Neuroimaging. 2021;6:856–64. doi: 10.1016/j.bpsc.2021.02.001.
3. Ramesh AN, Kambhampati C, Monson JR, Drew PJ. Artificial intelligence in medicine. Ann R Coll Surg Engl. 2004;86:334. doi: 10.1308/147870804290.
4. Greener JG, Kandathil SM, Moffat L, Jones DT. A guide to machine learning for biologists. Nat Rev Mol Cell Biol. 2022;23:40–55. doi: 10.1038/s41580-021-00407-0.
5. Graham S, Depp C, Lee EE, Nebeker C, Tu X, Kim HC, et al. Artificial intelligence for mental health and mental illnesses: An overview. Curr Psychiatry Rep. 2019;21:1–8. doi: 10.1007/s11920-019-1094-0.
6. Fiske A, Henningsen P, Buyx A. Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. J Med Internet Res. 2019;21:e13216. doi: 10.2196/13216.
7. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Ment Health. 2017;4:e19. doi: 10.2196/mental.7785.
8. Fulmer R, et al. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: Randomized controlled trial. JMIR Ment Health. 2018;5:e64. doi: 10.2196/mental.9782.
9. Garety P, et al. Optimising AVATAR therapy for people who hear distressing voices: Study protocol for the AVATAR2 multi-centre randomised controlled trial. Trials. 2021;22:366. doi: 10.1186/s13063-021-05301-w.
10. Pham KT, Nabizadeh A, Selek S. Artificial intelligence and chatbots in psychiatry. Psychiatr Q. 2022;93:249–53. doi: 10.1007/s11126-022-09973-8.
11. Raccio AJ. Newman: To Siri with love: A mother, her autistic son, and the kindness of machines. J Autism Dev Disord. 2019;49:3472–3.
12. Sachan D. Self-help robots drive blues away. Lancet Psychiatry. 2018;5:547. doi: 10.1016/S2215-0366(18)30230-X.
13. Lovejoy CA. Technology and mental health: The role of artificial intelligence. Eur Psychiatry. 2019;55:1–3. doi: 10.1016/j.eurpsy.2018.08.004.
14. Murphy M, Templin J. Replika. Available from: https://replika.ai/about/story [Last accessed on 2021 Sep 16].
15. Park SH, Han K. Methodologic guide for evaluating clinical performance and effect of artificial intelligence technology for medical diagnosis and prediction. Radiology. 2018;286:800–9. doi: 10.1148/radiol.2017171920.
16. Luxton DD. Artificial intelligence in psychological practice: Current and future applications and implications. Prof Psychol Res Pract. 2014;45:332–9.
17. Safdar NM, Banja JD, Meltzer CC. Ethical considerations in artificial intelligence. Eur J Radiol. 2020;122:108768. doi: 10.1016/j.ejrad.2019.108768.
18. Hall MA, Schulman KA. Ownership of medical information. JAMA. 2009;301:1282. doi: 10.1001/jama.2009.389.
19. Cartwright-Smith L, Gray E, Thorpe JH. Health information ownership: Legal theories and policy implications. Vand J Ent Tech L. 2016;19:207.
20. Doraiswamy PM, Blease C, Bodner K. Artificial intelligence and the future of psychiatry: Insights from a global physician survey. Artif Intell Med. 2020;102:101753. doi: 10.1016/j.artmed.2019.101753.
21. Jeste DV, Lee EE. Emerging empirical science of wisdom: Definition, measurement, neurobiology, longevity, and interventions. Harv Rev Psychiatry. 2019;27:127. doi: 10.1097/HRP.0000000000000205.
22. Jeste DV, Graham SA, Nguyen TT, Depp CA, Lee EE, Kim HC. Beyond artificial intelligence: Exploring artificial wisdom. Int Psychogeriatr. 2020;32:993–1001. doi: 10.1017/S1041610220000927.
23. Sevilla DC. The quest for artificial wisdom. AI Soc. 2013;28:199–207.
24. Tsai C. Artificial wisdom: A philosophical framework. AI Soc. 2020;35:937–44.
25. Fan L, Scheutz M, Lohani M, McCoy M, Stokes C. Do we need emotionally intelligent artificial agents? First results of human perceptions of emotional intelligence in humans compared to robots. In: Beskow J, Peters C, Castellano G, O'Sullivan C, Leite I, Kopp S, editors. Intelligent Virtual Agents. IVA 2017. Lecture Notes in Computer Science. Vol. 10498. Cham: Springer; 2017.
26. Conitzer V, Sinnott-Armstrong W, Borg JS, Deng Y, Kramer M. Moral decision making frameworks for artificial intelligence. Proceedings of the AAAI Conference on Artificial Intelligence. 2017;31.
27. Banerjee S. A framework for designing compassionate and ethical artificial intelligence and artificial consciousness. Interdiscip Descr Complex Syst. 2020;18:85–95.
28. Paiva A, Leite I, Boukricha H, Wachsmuth I. Empathy in virtual agents and robots: A survey. ACM Trans Interact Intell Syst. 2017;7:1–40. doi: 10.1145/2912150.
29. Šabanović S, Chang WL, Bennett CC, Piatt JA, Hakken D. A robot of my own: Participatory design of socially assistive robots for independently living older adults diagnosed with depression. In: Zhou J, Salvendy G, editors. Human Aspects of IT for the Aged Population: Design for Aging. ITAP 2015. Lecture Notes in Computer Science. Vol. 9193. Cham: Springer; 2015.
30. Hebesberger D, Koertner T, Gisinger C, Pripfl J, Dondrup C. Lessons learned from the deployment of a long-term autonomous robot as companion in physical therapy for older adults with dementia: A mixed methods study. In: Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI); 2016. p. 27–34.
31. Torous J, Larsen ME, Depp C, Cosco TD, Barnett I, Nock MK, et al. Smartphones, sensors, and machine learning to advance real-time prediction and interventions for suicide prevention: A review of current progress and next steps. Curr Psychiatry Rep. 2018;20:51. doi: 10.1007/s11920-018-0914-y.
32. Torous J, Firth J. Bridging the dichotomy of actual versus aspirational digital health. World Psychiatry. 2018;17:107–8. doi: 10.1002/wps.20464.
33. Nebeker C, Torous J, Ellis RJB. Building the case for actionable ethics in digital health research supported by artificial intelligence. BMC Med. 2019;17:137. doi: 10.1186/s12916-019-1377-7.
34. Matheny M, Israni ST, Ahmed M, Whicher D. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Washington, DC: National Academy of Medicine; 2020. p. 94–7.