Bulletin of the World Health Organization. 2020 Jan 27;98(4):245–250. doi: 10.2471/BLT.19.237198

Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare


Angeliki Kerasidou
PMCID: PMC7133472  PMID: 32284647

Abstract

Empathy, compassion and trust are fundamental values of a patient-centred, relational model of health care. In recent years, the quest for greater efficiency in health care, including economic efficiency, has often resulted in the side-lining of these values, making it difficult for health-care professionals to incorporate them in practice. Artificial intelligence is increasingly being used in health care. This technology promises greater efficiency and more free time for health-care professionals to focus on the human side of care, including fostering trust relationships and engaging with patients with empathy and compassion. This article considers the vision of efficient, empathetic and trustworthy health care put forward by the proponents of artificial intelligence. The paper suggests that artificial intelligence has the potential to fundamentally alter the way in which empathy, compassion and trust are currently regarded and practised in health care. Moving forward, it is important to re-evaluate whether and how these values could be incorporated and practised within a health-care system where artificial intelligence is increasingly used. Most importantly, society needs to re-examine what kind of health care it ought to promote.

Introduction

Empathy, compassion and trust are fundamental values of a patient-centred, relational model of health care. In recent years, the pursuit of greater efficiency in health care, including economic efficiency, has often resulted in these values being side-lined, making it difficult or even impossible for health-care professionals to incorporate them in practice. Artificial intelligence is increasingly being used in health care and promises greater efficiency, effectiveness and a level of personalization not possible before. Artificial intelligence could help improve diagnostic and treatment accuracy, streamline workflow processes, and speed up the operation of clinics and hospital departments. The hope is that, by improving efficiency, time will be freed for health-care professionals to focus more fully on the human side of care, which involves fostering trust relationships and engaging with patients with empathy and compassion. However, the transformative force of artificial intelligence has the potential to disrupt the relationship between health-care professionals and patients as it is currently understood, and to challenge both the role and the nature of empathy, compassion and trust in this context. At a time of increasing use of artificial intelligence in health care, it is important to re-evaluate whether and how these values could be incorporated and exercised; most importantly, society needs to re-examine what kind of health care it ought to promote.

Empathy, compassion and trust

Over the past decades, the rise of patient-centred care has shifted the culture of clinical medicine away from paternalism, in which the therapeutic relationship (the relationship between the health-care professional and the patient) is led by medical expertise, towards more active engagement of patients in shared medical decision-making. This model of engagement requires the health-care professional to understand the patient’s perspective and guide the patient in making the right decision: a decision that reflects the patient’s needs, desires and ideals, and also promotes health-related values.1 The central point of the patient-centred model of the doctor–patient relationship is that medical competency should not be reduced to technical expertise, but must include relational moral competency, particularly empathy, compassion and trust.2

Empathy, compassion and trust are broadly recognized as fundamental values of good health-care practice.3–5 Empathy allows health-care professionals to understand and share the patient’s feelings and perspective.6 Compassion is the desire to help, instigated by the empathetic engagement with the patient.7,8 Patients seek out and prefer to engage with health professionals who are competent, but who also have the right interpersonal and emotional skills. The belief and confidence in the professional’s competency, understanding and desire to help is what underpins patient trust.9–13 Research has demonstrated the benefits of patient trust and empathetic care, including improved patient satisfaction, increased treatment adherence and improved health outcomes.14,15

Despite their importance, empathy and compassion in health care are often side-lined. In recent years, for example, socioeconomic factors, including an ageing population and the austerity policies adopted in Europe after the 2008 financial crisis, have led to the marginalization of these values.16 As health-care systems struggle with resourcing, the space for empathy and compassion has shrunk while the need for efficiency has grown.17 In the United Kingdom of Great Britain and Northern Ireland, high-profile cases and reports, such as the Francis report, which followed the Mid Staffordshire scandal,18 the report by the Health Service Ombudsman entitled Dying without dignity,19 and the Leadership Alliance for the Care of Dying People report,20 all pointed to a lack of empathy as a major problem in clinical care. These cases also revealed a conflict between the need for empathy and the pursuit of greater economic efficiency and operational targets. In 2017, Sir Robert Francis, who chaired the inquiry into the Mid Staffordshire scandal, mentioned in an interview that “at the time at Mid Staffordshire there was huge pressure on organizations to balance their books, to make productivity improvements and matters of that nature. It all became about figures in the books, rather than outcomes for the patient. And I do believe there’s a danger of that happening again.”21 Research conducted in 2017 in accident and emergency departments in England on the effect of austerity policies on the everyday experiences of health-care professionals found that the pressure to meet targets undermined doctors’ and nurses’ ability and opportunity to practise empathetic and holistic care,22 leading to moral distress and burnout among these professionals.23

Against this backdrop, artificial intelligence has been heralded as a way to save struggling national health-care systems24 and transform the future of health care by providing greater efficiency, effectiveness and high levels of personalized care.25

Artificial intelligence in health care

Artificial intelligence is broadly defined as “computing technologies that resemble processes associated with human intelligence, such as reasoning, learning and adaptation, sensory understanding, and interaction.”26 The hope is that these technologies will transform health-care delivery by “streamlining workflow processes […] improving the accuracy of diagnosis and personalizing treatment, as well as helping staff work more efficiently and effectively.”25 Artificial intelligence could help health-care systems achieve greater efficiency, including economic efficiency, in two ways: (i) by improving the speed and accuracy of diagnosis and treatment, and where possible assisting with early prevention; and (ii) by using health-care staff more efficiently.

A report published in 2018 in the United Kingdom suggested that the national health system could save up to 10% of its running costs by outsourcing repetitive and administrative tasks to artificial intelligence technologies.24 The same report also envisaged bedside robots performing social-care tasks, such as helping patients to eat, wash and dress, thus reducing the workload on care staff by 30%. But it is not only nursing and administrative tasks that artificial intelligence can help with. With regard to effectiveness, artificial intelligence systems could be used to deliver better clinical services, both by assisting with the diagnosis and management of patients and by making diagnoses and prescribing treatments themselves. Research conducted so far has shown that machines can perform as well as, or even better than, humans in detecting skin cancer,27 heart arrhythmia28 and Alzheimer disease.29 Furthermore, human–machine partnerships can provide far better results than either humans or machines alone.30 In these examples, the principal benefits of artificial intelligence stem from its ability to improve efficiency and effectiveness by guiding diagnoses, delivering more accurate results and thus reducing human error. With regard to greater efficiency through prevention, artificial intelligence technologies that track and analyse the movement of individuals could be used to detect people at risk of stroke and eliminate that risk through early intervention.31
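As a purely illustrative aside, the assistive mode of use described above can be sketched in a few lines of code. The following Python sketch assumes a hypothetical, untrained stand-in for a deep diagnostic network and an arbitrary abstention band; none of these details come from the cited studies, and a real system would require validated models and regulatory approval.

```python
# Minimal sketch of assistive (not autonomous) AI diagnosis.
# The model, labels and abstention band are hypothetical illustrations.
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    """Toy stand-in for a deep diagnostic network such as those cited above."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, 2),  # two classes: benign vs malignant
        )

    def forward(self, x):
        return self.net(x)

def assistive_read(model, image, abstain_band=(0.4, 0.6)):
    """Return a recommendation, deferring to the clinician when uncertain."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    p_malignant = probs[1].item()
    if abstain_band[0] <= p_malignant <= abstain_band[1]:
        return "refer to clinician", p_malignant  # human keeps the final say
    label = "flag as suspicious" if p_malignant > 0.5 else "likely benign"
    return label, p_malignant

model = LesionClassifier()  # untrained here; in practice, trained on labelled images
decision, p = assistive_read(model, torch.rand(1, 3, 224, 224))
print(decision, round(p, 2))
```

The abstention band in this sketch is one simple way of keeping the health-care professional in the loop: the system offers a recommendation only when its confidence is high, and otherwise hands the decision back to the clinician.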

Health care already uses technology to improve its efficiency and effectiveness. From scalpels and syringes to stethoscopes and X-ray machines, the list of technologies used in medicine to facilitate and improve patient care is long. However, artificial intelligence differs from previous medical technological advances. Whereas earlier technologies extended the senses and physical capacities of health-care professionals (consider how the stethoscope enhanced doctors’ hearing and X-rays their vision), the main role of artificial intelligence is to augment their reasoning and decision-making capacities. In this way, artificial intelligence is entering the health-care arena as another morally relevant actor that assists, guides or makes independent decisions regarding the treatment and management of patients.

Proponents of artificial intelligence technology in health care maintain that outsourcing tasks and decisions to rational machines will free up time for health-care professionals to engage in empathetic care and foster trust relationships with patients.4,25,32,33 A review outlining recommendations for the National Health Service to become the world leader in using technology to benefit patients notes that, while artificial intelligence cannot deliver indispensable human skills such as compassion and empathy, “the gift of time delivered by the introduction of these technologies […] will bring a new emphasis on the nurturing of the precious inter-human bond, based on trust, clinical presence, empathy and communication.”25

The hope is that more free time for health-care professionals would lead not only to more trustworthy and empathetic care for patients, but also to less stress and burnout among doctors and nurses.34 In addition, despite concerns that artificial intelligence will lead to job losses in health care, a report by the British Academy on the impact of artificial intelligence on work pointed out that professions requiring the application of expertise and interaction with people will be less affected by automation.35 According to these publications, the introduction of artificial intelligence technologies in health care offers the possibility of a win–win situation: patients benefit from more accurate diagnosis, better treatment outcomes, and increased empathy and compassion from medical staff, who in turn experience greater job satisfaction and less burnout.

The reimagination of health care, where artificial intelligence takes over specific, and even specialist, tasks while freeing time for health-care professionals to communicate and empathize with patients, assumes that the value attached to empathy, compassion and trust will remain high. However, patients and the health-care system might value accuracy and efficiency more than empathy and judgement, which could shift the focus in medicine away from human-specific skills.36 In which direction health-care delivery will evolve is an important theoretical and practical question that requires examination. Currently, it is still unclear whether and how health-care practice will be transformed by artificial intelligence, and what effect it may have, particularly on the role of health-care professionals and on the therapeutic relationship.

Potential implications of artificial intelligence

Clinical competency is a fundamental aspect of the identity of health-care professionals and underpins the trust relationship between doctors and patients. Patient trust is based on the belief that doctors and nurses have the skills and expertise required to help the patient, and also the right motivation to do so. This combination of clinical skill with empathy and compassion is what justifies patients assuming a position of vulnerability towards health-care professionals. Vulnerability is a fundamental characteristic of a trust relationship.37 The person placing trust in another knows and accepts that the trusted person can decisively influence the outcome of the entrusted action. Trust relationships involve a degree of uncertainty that cannot be mitigated; it is only the belief in the trusted person’s abilities and good will that justifies taking on the risk of this uncertainty. In the clinical context, the patient knows that things can go wrong, but believes and hopes that any harm would not be intentional, but rather the result of bad luck or unforeseeable circumstances. Rules and regulations are put in place to protect patients from negligence and preventable mistakes. The constant quest to improve care highlights the fundamental moral obligations of non-maleficence and of acting in the best interests of patients. However, the fact remains that, in some cases, preventable harm could be the outcome of a medical action.

The use of artificial intelligence to optimize the accuracy of diagnosis and treatment could raise issues of accountability when things go wrong, not only in cases where doctors follow the recommendations of artificial intelligence, but also when they decide to override these recommendations.38 In such situations, it is unclear who should be held accountable: the algorithm developer, the data provider, the health system that adopted the artificial intelligence tool, or the health-care professional who used it. In addition, even in situations where the role of artificial intelligence is assistive, health-care professionals might not feel confident to override its recommendations. If machines are brought into health care because they are better than humans at making certain rational decisions, how could humans rationally argue against them? Yet accountability is not the only issue raised here. The role and nature of trust in the therapeutic relationship is also at stake. Would and should patients still trust health-care professionals? If the introduction of artificial intelligence tools results in outsourcing clinical and technical skills to machines, would a belief in the good will of the doctor be enough to sustain a therapeutic trust relationship as currently understood? One of the great promises of artificial intelligence is that, by increasing effectiveness, accuracy and levels of personalization in clinical care, it will succeed in replacing trust with certainty.39 In this case, patients might stop considering health-care professionals as experts in whose skills and knowledge they need to place their trust. This change might lead to a different relationship between health-care professionals and patients, one characterized not by vulnerability but by assistive partnership.2 However, even in this more positive scenario, how society’s expectations of care provision and the role of health-care professionals would be transformed remains unclear. It is therefore important to consider how the introduction of artificial intelligence will alter the public’s perception and understanding of trust in the clinical encounter, as well as the way in which trust relationships will be formed in this context.

Similarly, artificial intelligence calls into question the role and value of empathy and compassion in health care. As mentioned earlier, in patient-centred care, empathy allows health-care professionals to understand the patients’ perspective, and thus helps health professionals tailor care to promote the patients’ values and address their individual needs. Empathy and compassion therefore play a very important role in an interpersonal model of care that rejects medical paternalism and brings the doctor and the patient together to discuss options and find appropriate solutions.40 To preserve this ideal of patient-centred care, artificial intelligence systems should be built in a way that allows for value-plurality, meaning the possibility that different patients might hold different values and have different priorities related to their care.41 In this way, the ethical ideal of shared decision-making can be maintained and not be replaced by another form of paternalism, one practised not by doctors, but by artificial intelligence algorithms.
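As a rough illustration of what such value-plurality might look like in code, the Python sketch below re-ranks the same clinical options under different patient-supplied value weights. The options, attributes and weights are entirely hypothetical; a deployed system would need clinically validated utilities and shared decision-making around the weights themselves.

```python
# Minimal sketch of value-plurality: identical options, different patient
# priorities, different rankings. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    survival_gain: float    # normalized 0-1
    quality_of_life: float  # normalized 0-1
    burden: float           # normalized 0-1; higher means more burdensome

def rank(options, weights):
    """Order options by a weighted score reflecting the patient's own values."""
    def score(o):
        return (weights["survival"] * o.survival_gain
                + weights["quality"] * o.quality_of_life
                - weights["burden"] * o.burden)
    return sorted(options, key=score, reverse=True)

options = [
    Option("aggressive treatment", survival_gain=0.9, quality_of_life=0.4, burden=0.8),
    Option("palliative care", survival_gain=0.2, quality_of_life=0.9, burden=0.2),
]

# Two patients with different value profiles receive different top recommendations.
print(rank(options, {"survival": 0.7, "quality": 0.2, "burden": 0.1})[0].name)
print(rank(options, {"survival": 0.2, "quality": 0.6, "burden": 0.2})[0].name)
```

The point of the sketch is not the arithmetic but the design choice: the value weights are an explicit input elicited from the patient, rather than a single objective fixed by the developer, which is what preserves shared decision-making rather than algorithmic paternalism.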

Even if artificial intelligence tools are able to operate in a care context characterized by value-plurality, the role of empathy remains unclear. If what patient-centred care needs to survive in a future of artificial intelligence in health care is machines programmed to incorporate more than one value, what does this mean for the nature and role of empathy in care provision? Is empathy still a professional value, or should it now be understood as another technology to be written into code and optimized? Indeed, research in the field of artificial intelligence suggests that it is possible to create empathetic machines42,43 as a way of relieving doctors and nurses of the substantial emotional work their professions require.44 The likely effects of such complete optimization and operationalization of health care are unclear. This optimization could improve health-care outcomes and personalized care; alternatively, it could lead to the reinstitution of a reductionist approach to medicine.45,46 Beyond these practical concerns, one should also consider whether something intangible, yet morally important, will be lost if the therapeutic relationship is reduced to a set of functions performed by a machine, however intelligent. On the other hand, will our current understanding of empathy, compassion and trust change to fit a new context in which some parts of care are provided by intelligent machines?

Conclusion

The potential impact of artificial intelligence on health care in general, and on the therapeutic relationship between health-care providers and patients in particular, is widely acknowledged,38,47,48 as is the fact that society needs to learn how to deal “with new forms of agents, patients and environments.”49 Artificial intelligence has great potential to improve efficiency and effectiveness in health care. However, whether artificial intelligence can support other values central to the delivery of patient-centred care, such as empathy, compassion and trust, requires careful examination. Moving forward, as artificial intelligence increasingly enters health care, it is important to consider whether these values should be incorporated and promoted within the new type of health care that is emerging and, if so, how. More importantly, it is crucial to reflect on what kind of health care society should promote and how new technologies, including artificial intelligence, could help achieve it.

Acknowledgements

AK is also affiliated with the Wellcome Centre for Ethics and Humanities, University of Oxford.

Competing interests:

None declared.

References

1. Emanuel EJ, Emanuel LL. Four models of the physician–patient relationship. JAMA. 1992 Apr 22–29;267(16):2221–6. doi:10.1001/jama.1992.03480160079038
2. Bauchat JR, Seropian M, Jeffries PR. Communication and empathy in the patient-centered care model: why simulation-based training is not optional. Clin Simul Nurs. 2016;12(8):356–9. doi:10.1016/j.ecns.2016.04.003
3. Serrant L. Compassion NHS: evidencing the impact. London: NHS England; 2016. Available from: https://www.england.nhs.uk/wp-content/uploads/2016/05/cip-yr-3.pdf [cited 2019 Apr 17].
4. Tweedie J, Hordern J, Dacre J. Advancing medical professionalism. London: Royal College of Physicians; 2018.
5. Spiro H. Commentary: the practice of empathy. Acad Med. 2009 Sep;84(9):1177–9. doi:10.1097/ACM.0b013e3181b18934
6. Halpern J. From detached concern to empathy: humanizing medical practice. New York: Oxford University Press; 2001. doi:10.1093/acprof:osobl/9780195111194.001.0001
7. Nussbaum M. Compassion: the basic social emotion. Soc Philos Policy. 1996;13(1):27–58. doi:10.1017/S0265052500001515
8. Goetz JL, Keltner D, Simon-Thomas E. Compassion: an evolutionary analysis and empirical review. Psychol Bull. 2010 May;136(3):351–74. doi:10.1037/a0018807
9. Charon R. The patient–physician relationship. Narrative medicine: a model for empathy, reflection, profession, and trust. JAMA. 2001 Oct 17;286(15):1897–902. doi:10.1001/jama.286.15.1897
10. Chin JJ. Doctor–patient relationship: a covenant of trust. Singapore Med J. 2001 Dec;42(12):579–81.
11. O’Neill O. Autonomy and trust in bioethics. Cambridge: Cambridge University Press; 2002. doi:10.1017/CBO9780511606250
12. Mechanic D. Changing medical organization and the erosion of trust. Milbank Q. 1996;74(2):171–89. doi:10.2307/3350245
13. Halpern J. What is clinical empathy? J Gen Intern Med. 2003 Aug;18(8):670–4. doi:10.1046/j.1525-1497.2003.21017.x
14. Kelley JM, Kraft-Todd G, Schapira L, Kossowsky J, Riess H. The influence of the patient–clinician relationship on healthcare outcomes: a systematic review and meta-analysis of randomized controlled trials. PLoS One. 2014 Apr 9;9(4):e94207. doi:10.1371/journal.pone.0094207
15. Joffe S, Manocchia M, Weeks JC, Cleary PD. What do patients value in their hospital care? An empirical perspective on autonomy centred bioethics. J Med Ethics. 2003 Apr;29(2):103–8. doi:10.1136/jme.29.2.103
16. Kerasidou A, Kingori P, Legido-Quigley H. “You have to keep fighting”: maintaining healthcare services and professionalism on the frontline of austerity in Greece. Int J Equity Health. 2016 Jul 26;15(1):118. doi:10.1186/s12939-016-0407-8
17. Kerasidou A, Horn R. Making space for empathy: supporting doctors in the emotional labour of clinical care. BMC Med Ethics. 2016 Jan 27;17(1):8. doi:10.1186/s12910-016-0091-7
18. Francis R. Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry. London: The Stationery Office; 2013.
19. Dying without dignity: investigations by the Parliamentary and Health Service Ombudsman into complaints about end of life care. London: Parliamentary and Health Service Ombudsman; 2015.
20. One chance to get it right. London: Leadership Alliance for the Care of Dying People; 2014. Available from: https://wales.pallcare.info/files/One_chance_to_get_it_right.pdf [cited 2019 Apr 27].
21. Matthews-King A. Government says NHS hospitals can wring out another £300m in efficiency savings. Independent. 2017 Nov 8.
22. Kerasidou A. Empathy and efficiency in healthcare at times of austerity. Health Care Anal. 2019 Sep;27(3):171–84. doi:10.1007/s10728-019-00373-x
23. Kerasidou A, Kingori P. Austerity measures and the transforming role of A&E professionals in a weakening welfare system. PLoS One. 2019 Feb 13;14(2):e0212314. doi:10.1371/journal.pone.0212314
24. Darzi L. The Lord Darzi review of health and care. London: Institute for Public Policy Research; 2018.
25. Topol E. The Topol review: preparing the healthcare workforce to deliver the digital future. London: National Health Service; 2019.
26. Artificial intelligence (AI) in healthcare and research. London: Nuffield Council on Bioethics; 2018.
27. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017 Feb 2;542(7639):115–8. doi:10.1038/nature21056
28. Hannun AY, Rajpurkar P, Haghpanahi M, Tison GH, Bourn C, Turakhia MP, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med. 2019 Jan;25(1):65–9. doi:10.1038/s41591-018-0268-3
29. Fraser KC, Meltzer JA, Rudzicz F. Linguistic features identify Alzheimer’s disease in narrative speech. J Alzheimers Dis. 2015;49(2):407–22. doi:10.3233/JAD-150520
30. Patel BN, Rosenberg L, Willcox G, Baltaxe D, Lyons M, Irvin J, et al. Human–machine partnership with artificial intelligence for chest radiograph diagnosis. NPJ Digit Med. 2019 Nov 18;2(1):111. doi:10.1038/s41746-019-0189-7
31. Villar JR, González S, Sedano J, Chira C, Trejo-Gabriel-Galan JM. Improving human activity recognition and its application in early stroke diagnosis. Int J Neural Syst. 2015 Jun;25(4):1450036. doi:10.1142/S0129065714500361
32. Davenport TH, Glover WJ. Artificial intelligence and the augmentation of health care decision-making. Massachusetts: NEJM Catalyst; 2018.
33. Topol E. Deep medicine: how artificial intelligence can make healthcare human again. New York: Basic Books; 2019.
34. Nundy S, Hodgkins ML. The application of AI to augment physicians and reduce burnout [internet]. Health Affairs Blog. 2018 Sep 18. Available from: https://www.healthaffairs.org/do/10.1377/hblog20180914.711688/full/ [cited 2019 Apr 27].
35. The impact of artificial intelligence on work: an evidence synthesis on implications for individuals, communities, and societies. London: British Academy and Royal Society; 2018.
36. Susskind R, Susskind D. The future of the professions: how technology will transform the work of human experts. Oxford: Oxford University Press; 2015.
37. Wright S. Trust and trustworthiness. Philosophia. 2010;38(3):615–27. doi:10.1007/s11406-009-9218-0
38. Ross J, Webb C, Rahman F. Artificial intelligence in healthcare. London: Academy of Medical Royal Colleges; 2019.
39. Zuboff S. The age of surveillance capitalism: the fight for a human future at the new frontier of power. London: Profile Books; 2019.
40. Ahmad N, Ellins J, Krelle H, Lawrie M. Person-centred care: from ideas to action. London: The Health Foundation; 2014.
41. McDougall RJ. Computer knows best? The need for value-flexibility in medical AI. J Med Ethics. 2019 Mar;45(3):156–60. doi:10.1136/medethics-2018-105118
42. el Kaliouby R. We need computers with empathy. Cambridge: MIT Technology Review; 2017.
43. Johnson K. Google empathy lab founder: AI will upend storytelling and human–machine interaction. VentureBeat. 2018 Mar 11. Available from: https://venturebeat.com/2018/03/11/google-empathy-lab-founder-ai-will-upend-storytelling-and-human-machine-interaction/ [cited 2019 Jan 20].
44. Srivastava K, Das RC. Empathy: process of adaptation and change, is it trainable? Ind Psychiatry J. 2016 Jan–Jun;25(1):1–3. doi:10.4103/0972-6748.196055
45. Goldhahn J, Rampton V, Spinas GA. Could artificial intelligence make doctors obsolete? BMJ. 2018 Nov 7;363:k4563. doi:10.1136/bmj.k4563
46. Beresford MJ. Medical reductionism: lessons from the great philosophers. QJM. 2010 Sep;103(9):721–4. doi:10.1093/qjmed/hcq057
47. Fenech M, Strukelj N, Buston O. Ethical, social and political challenges of artificial intelligence in health. London: Wellcome Trust and Future Advocacy; 2018.
48. Loh E. Medicine and the rise of the robots: a qualitative review of recent advances of artificial intelligence in health. BMJ Leader. 2018;2(2):59–63. doi:10.1136/leader-2018-000071
49. Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, et al. AI4People – an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach (Dordr). 2018;28(4):689–707. doi:10.1007/s11023-018-9482-5
