Abstract
Over the past decade, there has been a notable surge in AI-driven research, specifically geared toward enhancing crucial clinical processes and outcomes. The potential of AI-powered decision support systems to streamline clinical workflows, assist in diagnostics, and enable personalized treatment is increasingly evident. Nevertheless, the introduction of these cutting-edge solutions poses substantial challenges in clinical and care environments, necessitating a thorough exploration of ethical, legal, and regulatory considerations.
A robust governance framework is imperative to foster the acceptance and successful implementation of AI in healthcare. This article delves deep into the critical ethical and regulatory concerns entangled with the deployment of AI systems in clinical practice. It not only provides a comprehensive overview of the role of AI technologies but also offers an insightful perspective on the ethical and regulatory challenges, making a pioneering contribution to the field.
This research aims to address the current challenges in digital healthcare by presenting valuable recommendations for all stakeholders eager to advance the development and implementation of innovative AI systems.
Keywords: Artificial intelligence, Technologies, Decision-making, Healthcare, Ethics, Regulatory guidelines
1. Introduction
Reforming and improving long-term care programs and healthcare outcomes face significant challenges under current healthcare policies. Recent technological advancements have generated interest in cutting-edge healthcare applications, turning this field into a growing area of research. Monitoring and decision support systems based on artificial intelligence (AI) show promise in extending individualized care programs to remote and home settings.
The World Health Organization (WHO) has recognized the crucial role of technology in seamlessly integrating long-term healthcare services into patients' daily lives [1]. Technology is considered vital in achieving universal health coverage for all age groups by promoting cost-effective integration and service delivery. Technological advancements, particularly in AI and robotics, are now being harnessed to support healthcare research and practice.
Over the past decade, research into utilizing AI to enhance critical clinical processes and outcomes has steadily expanded. AI-based decision support systems, in particular, have the potential to optimize clinical workflows, improve patient safety, aid in diagnosis, and enable personalized treatment [2]. Various digital technologies, including smart mobile and wearable devices, have been developed to gather data and process information for assessing health and tracking individualized therapy progress. Additionally, social and physical assistance systems are being deployed to aid individuals in their recovery from illness, injury, or sensory, motor, and cognitive impairments [3].
Cutting-edge AI technologies are transforming healthcare, achieving notable successes across medical areas [4]. For instance, in skin disease identification, a smart learning system displayed remarkable accuracy, outperforming other methods and offering quick assistance through a mobile app [5]. Similarly, a model predicting breast cancer spread achieved high accuracy, aiding doctors in precise analysis and potentially preventing complications [6]. Several models for timely COVID-19 identification offered high-accuracy solutions, reducing reliance on time-consuming tests [7], [8]. In cancer imaging, AI-based analysis improved accuracy, providing valuable insights for clinicians [9]. Moreover, the exponential growth of medical data is reshaping the landscape of security and privacy in healthcare. Robust security upgrades are crucial to guarantee the safe handling of vast datasets, and technological advances play a pivotal role in enabling secure data storage, as discussed in recent studies [10]. These strides underscore the transformative influence of AI technologies and medical data, promising to significantly improve healthcare efficiency and patient outcomes.
While these AI-driven innovations promise to improve healthcare outcomes, ethical and regulatory challenges must be addressed. The rapid evolution of AI in healthcare has led to the emergence of tools and applications that often lack regulatory approvals, posing ethical and legal concerns. Therefore, it is crucial to comprehensively explore and understand the ethical and regulatory challenges associated with AI technologies in healthcare to ensure responsible development and practical implementation. This research aims to fill the existing gap in the literature by providing valuable insights into these challenges and contributing to the responsible integration of AI-driven healthcare applications.
The integration of AI in healthcare, while promising, brings substantial ethical, legal, and regulatory challenges. The need to ensure patient safety, privacy, and compliance with existing healthcare standards makes it imperative to address them, and these challenges form the basis for research in this area.
AI is reshaping the landscape of decision-support systems in healthcare by making them more effective, efficient, and patient-centered. These AI-driven systems are not only enhancing the way healthcare professionals interact with data but are also playing a crucial role in monitoring and assisting patients, ultimately improving the overall quality of healthcare delivery.
In this context, AI healthcare applications driven by advanced computational algorithms hold the potential to implement individual and efficient integrated programs for patients, particularly in remote and home care settings. However, it is important to acknowledge that these significant innovations pose substantial future challenges in clinical and care settings, encompassing ethical, legal, and regulatory considerations.
The practical implications of AI in improving healthcare outcomes make it a crucial area of investigation. However, while the broader applications of AI in healthcare are well researched, the ethical and regulatory considerations specific to these technologies remain under-explored. This article seeks to shed light on the key ethical and regulatory issues, rules, and principles associated with the introduction of AI technologies in healthcare services.
The research aims to provide valuable insights into the ethical and regulatory challenges posed by AI technologies in healthcare. By addressing these challenges, the research contributes to the responsible development and practical implementation of AI-driven healthcare applications. The outcomes of this research can inform policymakers, healthcare professionals, and developers, ensuring that AI benefits are realized without compromising ethical standards or regulatory compliance.
2. Materials & methods
The primary objective of this study was to provide a comprehensive overview of the existing evidence concerning AI technologies, examining both technical aspects and regulatory considerations. The focal point was the identification of categories within clinical practice, aiming to deepen understanding of the diverse technologies involved and the overarching framework of ethics and legal guidelines governing the application of AI in healthcare.
To achieve this, a rigorous narrative review was undertaken, scrutinizing literature from a wide array of sources, including books, published reports, newsletters, media reports, and electronic or paper-based journal articles [11], [12].
In the exploration of literature, reports, and studies related to the ethical and regulatory framework, thorough searches were conducted in relevant databases. International organizations' websites were consulted, and comprehensive searches were performed on Scopus, ACM Digital Library, and PubMed using the following query: ‘Artificial intelligence’ AND ‘clinical decision’ AND (ethics OR law OR regulation).
The information gleaned from the identified references underwent systematic assessment for relevance and was judiciously utilized in constructing the narrative review.
This review aims to offer a comprehensive analysis of AI categories within the healthcare field, shedding light on the diverse technologies that have evolved in this dynamic and rapidly advancing field. Consequently, it strives to provide a nuanced overview of the ethical and regulatory landscape associated with the integration of AI in healthcare, addressing the main challenges linked to incorporating such technologies into clinical practice.
The structure of the review is outlined as follows.
Section 3 presents a comprehensive overview of emerging trends in AI technologies employed as decision-support and assistive systems in healthcare. This section delves into the specific role of AI in bolstering digital healthcare.
Section 4 succinctly summarizes prevailing ethical principles and regulatory aspects intended to guide the development and deployment of AI technologies in the healthcare domain.
Section 5 discusses the primary challenges associated with the utilization of AI technologies in clinical practice.
Section 6 concludes the work by emphasizing key issues that require attention to steer the future development of AI technologies within the realm of digital healthcare services.
3. The impact of AI in healthcare
3.1. Foundations
The concept of AI was first discussed in 1956 [13], referring to technology used to mimic human behavior. Since then, the field has made remarkable strides. Machine Learning (ML), a subfield of AI, was conceptualized by Arthur Samuel in 1959 [14]; he emphasized the importance of systems that learn automatically from experience instead of being explicitly programmed. In the 1980s, ML demonstrated great potential in forecasting and predictive analytics, including clinical practice and machine translation [15]. Deep Learning (DL), a subfield of ML, has ushered in new breakthroughs in information technology: DL can learn underlying features in data through multiple processing layers of neural networks, loosely inspired by the human brain [16]. Since the 2010s, DL has garnered immense attention in many fields, especially image and speech recognition [17].
The term AI generally refers to the performance, by software and/or devices, of tasks commonly associated with intelligent beings. A more specific definition, given in the OECD Council Recommendation on Artificial Intelligence, states: ‘An artificial intelligence system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions that affect real or virtual environments.’
AI technologies are based on algorithms, which may operate with different levels of autonomy (see Table 1) [18].
Table 1. Levels of AI autonomy in clinical decision-making.

| Level | Definition | Example | Final decision |
|---|---|---|---|
| 0 | No presence of AI | Standard care. | Human |
| 1 | AI suggests a decision to the human | Clinicians consider AI recommendations but ultimately make the final decision on treatment and therapy. | Human |
| 2 | AI makes decisions, with permanent human supervision | AI makes clinical decisions on treatment and therapy, with human doctors providing ongoing supervision. | Human |
| 3 | AI makes decisions, with no continuous human supervision but human backup available | AI autonomously makes clinical decisions regarding treatment and therapy, but it can alert human users in case of uncertainty, minimizing the need for constant supervision. | AI |
| 4 | AI makes decisions, with no human backup available | AI autonomously governs clinical decisions with no human backup. | AI |
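As a purely illustrative sketch (the enum names and routing function below are our own construction, not part of the cited framework), the levels in Table 1 reduce to a simple rule for who holds the final decision:

```python
# Illustrative sketch: routing the final decision according to the
# autonomy levels of Table 1. Names are hypothetical, for exposition only.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    NO_AI = 0            # standard care, no AI involved
    AI_SUGGESTS = 1      # AI recommends, clinician decides
    AI_SUPERVISED = 2    # AI decides under permanent human supervision
    AI_WITH_BACKUP = 3   # AI decides, human available as backup
    AI_AUTONOMOUS = 4    # AI decides with no human backup

def final_decision_maker(level: AutonomyLevel) -> str:
    """Return who holds the final decision at a given autonomy level."""
    return "human" if level <= AutonomyLevel.AI_SUPERVISED else "AI"

for level in AutonomyLevel:
    print(f"Level {level.value}: final decision by {final_decision_maker(level)}")
```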
Two main approaches are currently used to develop AI algorithms: the rule-based approach and the ML-based approach, whose definitions are given in Table 2 [19]. These algorithms are translated into computer code containing instructions for the rapid analysis and transformation of data into conclusions, information, or other results.
Table 2. The two main approaches to developing AI algorithms.

| Category | Definition | Example |
|---|---|---|
| Rule-based algorithm [20] | The system follows a set of rules predefined by experts. | The expert defines the knowledge representation of a phenomenon and integrates this model into the system. |
| ML-based algorithm [21] | The AI system incorporates intricate knowledge representations using statistics and probability theory. | The focus is not on defining a prior knowledge model but on collecting data and integrating them into a training set. This approach allows knowledge in a specific application domain to be developed continuously through the ongoing use of data. |
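To make the contrast in Table 2 concrete, the following minimal sketch (with toy data and hypothetical thresholds, assuming scikit-learn is available) answers the same triage question once with a hand-written rule and once with a learned model:

```python
# Illustrative sketch: the same triage question answered by a rule-based
# system and an ML-based one. Data and thresholds are invented toy values.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based_flag(temperature_c: float, heart_rate: int) -> bool:
    """Rule-based: thresholds predefined by experts."""
    return temperature_c >= 38.0 or heart_rate >= 110

# ML-based: the decision boundary is learned from a (toy) training set
# instead of being hand-written as rules.
X_train = np.array([[36.8, 72], [37.1, 80], [38.6, 115], [39.2, 120]])
y_train = np.array([0, 0, 1, 1])  # 0 = no flag, 1 = flag for review
model = LogisticRegression().fit(X_train, y_train)

patient = [38.1, 105]
print("rule-based:", rule_based_flag(*patient))
print("ML-based:  ", bool(model.predict([patient])[0]))
```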
Huge amounts of data, combined with the ability to analyze them rapidly, enable AI to perform increasingly complex tasks [22]. For this reason, AI in medicine fuels the idea that AI could replace doctors and human decision-making. However, applications of AI are still relatively new, AI is not yet routinely used in clinical decision-making, and few of these systems have been evaluated in clinical studies [23].
The use of AI in clinical care is expected to bring about the major changes schematically reported in Fig. 1. There are four main trends recognized by the WHO: i) the evolution of the role of the patient in the clinical care process; ii) the shift from hospital to community-based care; iii) the use of AI to provide clinical care outside the formal healthcare system; iv) the use of AI for resource allocation and prioritization [24]. Each of these trends has ethical implications, which will be discussed later.
Patients already take significant responsibility for their care, including taking medication, improving their diet and nutrition, engaging in physical activity, treating wounds, or administering injections. However, AI could further change the way patients independently manage their medical conditions, particularly chronic diseases. AI could support self-care, for instance through conversational agents such as ‘chatbots’, health monitoring tools, and risk prediction technologies for prevention programs [25]. While the move to patient-based care may be seen as empowering and beneficial for some patients, others may find the added responsibility stressful, and the shift may limit an individual's access to formal healthcare services.
Telemedicine is part of a broad revolution marking the transition from hospital to home care, and the use of AI technologies is helping to accelerate this journey. The shift to home care has been partly facilitated by the increase in the use of search engines (which rely on algorithms) for medical information, as well as the growth in the number of text or voice chatbots for healthcare [26]. In addition, AI technologies can play a more active role in managing patients' health outside clinical settings, such as in ‘just-in-time adaptive interventions’. These rely on sensors to provide patients with specific interventions according to previously collected data [27]. The growth and use of wearable sensors and devices may improve the effectiveness of ‘just-in-time adaptive interventions’, but also raise concerns in light of the amount of data these technologies are collecting, how they are being used, and the burden these technologies may shift to patients.
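A minimal sketch of the trigger logic behind such a ‘just-in-time adaptive intervention’ is given below; the function name, activity target, and message are hypothetical, not drawn from the cited work:

```python
# Hedged illustration of a just-in-time adaptive intervention trigger:
# prompt the patient only when recent sensor data falls below a target.
from typing import Optional

def jitai_prompt(hourly_steps: list, target_per_hour: int = 500) -> Optional[str]:
    """Return an intervention message only when recent context calls for one."""
    if sum(hourly_steps) < target_per_hour * len(hourly_steps):
        return "Activity below your usual level: a short walk may help."
    return None

print(jitai_prompt([120, 80, 150]))   # low recent activity -> prompt is sent
print(jitai_prompt([600, 550, 700]))  # on target -> None, no interruption
```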
The increasing use of digital self-management applications and technologies also raises broader questions about whether these technologies should be regulated as clinical applications, thus requiring more regulatory control, or as ‘wellness applications’, requiring less regulatory control. Many digital self-management technologies are likely to fall into a ‘grey area’ between these two categories and may present a risk if they are used by patients for their own disease management or clinical care while remaining largely unregulated or being used without prior medical advice. These concerns are exacerbated by the distribution of such applications by entities that are not part of the formal healthcare system. Indeed, AI applications in healthcare are no longer used exclusively in healthcare (or home care) systems, as AI technologies for health can easily be acquired and used by entities outside the health system. The use of AI to extend ‘clinical’ care beyond the formal healthcare system is therefore an emerging issue.
Finally, with the trend towards self-management, the use of mobile and wearable devices driven by software capable of acquiring and processing data through sophisticated algorithms has increased [28]. Self-management systems empower individuals to take an active role in managing their health. They provide the resources and tools patients need to track their health, adhere to treatment plans, and make informed decisions about their well-being, fostering a proactive approach to healthcare in which patients become partners in their own health management.

Wearable technologies include those placed in the body (artificial limbs, smart implants), on the body (insulin pump patches, electroencephalogram devices), or near the body (activity trackers, smart watches, and smart glasses). Wearable devices will create more opportunities to monitor a person's health and capture more data to predict health risks, often more efficiently and in a more timely manner. Although such monitoring of ‘healthy’ individuals may generate data to predict or detect health risks or improve a person's treatment when necessary, it raises concerns, as it allows for near-constant surveillance and the collection of excessive data that would otherwise remain unknown or uncollected.

Such data collection also contributes to the growing practice of ‘bio-surveillance’, a form of surveillance of health and other biometric data, such as facial features, fingerprints, temperature, and pulse [29]. The growth of biosurveillance raises significant ethical and legal concerns, including the use of such data for medical and non-medical purposes for which explicit consent may not have been obtained, or the re-use of such data for non-health purposes by a government or company, such as within the criminal justice or immigration systems. Therefore, such data should be subject to the same levels of protection and security as data collected on an individual in a formal clinical care setting.
3.2. Clinical applications
Digital technologies are at the forefront of transforming healthcare practices. Recent innovations hold the promise of improving preventive measures, facilitating early detection of severe illnesses, and enabling remote management of chronic conditions beyond the confines of traditional healthcare settings. These developments open up new possibilities for delivering healthcare services at any time and in any place, aligning with the era of disruptive and minimally invasive medicine.
Within the realm of digital health technologies, a diverse array of innovative healthcare tools has emerged, such as health information technologies, telemedicine applications, robotic platforms, mobile and wearable devices, and Internet of Things (IoT) networks. While these technologies may differ significantly from a technical perspective, they all share a common objective: to provide decision support in the context of healthcare practice. They accomplish this by gathering data during their use, consequently enriching the informativeness and effectiveness of medical practice through the analysis of the recorded information.
A decision support system in healthcare can be considered a computerized tool or software that assists healthcare professionals, including doctors, nurses, and administrators, in making informed and evidence-based decisions related to patient care, treatment options, and healthcare management. It integrates patient data, medical knowledge, and analytical tools to provide real-time information and recommendations, aiding healthcare providers in diagnosing conditions, creating treatment plans, and optimizing healthcare operations. Decision support systems have the potential to enhance the quality of care, reduce errors, and improve efficiency by offering valuable insights and suggestions based on the latest medical research and patient data.
In detail, AI is playing a pivotal role in revolutionizing decision support systems in healthcare, fundamentally transforming the way data are collected, analyzed, and utilized. The integration of AI introduces several significant advancements.

AI algorithms can process vast amounts of patient data rapidly and with a high degree of accuracy, allowing more nuanced and precise diagnostic and treatment recommendations, and can identify patterns and anomalies in patient data that might be challenging for human professionals to detect.

AI enables the tailoring of treatment plans to individual patients. By analyzing a patient's unique health data, genetic information, and treatment history, AI can suggest personalized therapies that are more effective and have fewer side effects, leading to better outcomes and a higher quality of care.

AI-driven decision support systems can use predictive data analysis to anticipate potential health issues. By continuously monitoring and analyzing patient data, AI can alert healthcare providers to early signs of disease or complications, enabling proactive intervention and preventive measures.

AI can monitor patients in real time, whether they are in a healthcare facility or at home. Wearable devices and sensors connected to AI systems can provide continuous updates on a patient's health status; such real-time monitoring is particularly valuable for chronic disease management and remote patient care.

AI also allows the extraction of valuable information from unstructured data sources such as medical notes and reports, simplifying the retrieval of relevant patient information and aiding diagnosis and treatment planning. Finally, AI-powered care assistive systems enhance the patient experience during remote consultations: they can assist in collecting and interpreting patient data during telehealth visits, ensuring that healthcare providers have access to the information they need for informed decision-making.
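To illustrate the suggestion-not-decision pattern described above, the following minimal sketch (the Recommendation structure, alert threshold, and feature names are hypothetical) packages a model's risk estimate as an alert that a clinician must still review:

```python
# Minimal sketch (hypothetical names and threshold) of how a decision
# support system might turn a predicted risk into an alert while leaving
# the final decision with the clinician.
from dataclasses import dataclass

@dataclass
class Recommendation:
    risk_score: float  # model output in [0, 1]
    alert: bool        # whether to notify the care team
    rationale: str     # top contributing features, for transparency

def support_decision(risk_score: float, top_features: list,
                     alert_threshold: float = 0.7) -> Recommendation:
    """Package a model prediction as a suggestion, never a final decision."""
    return Recommendation(
        risk_score=risk_score,
        alert=risk_score >= alert_threshold,
        rationale="driven by: " + ", ".join(top_features),
    )

rec = support_decision(0.82, ["HbA1c trend", "BMI", "family history"])
print(rec)  # the clinician reviews the alert and decides on treatment
```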
To delve deeper into the discussion on AI technologies, these systems can be methodically categorized into two primary groups: diagnosis support systems and care assistive systems.
The subsequent paragraphs offer an in-depth examination of each system category, elucidating their distinct healthcare functions as shown in Fig. 2. A comprehensive overview of the AI technologies adopted for each category and current trends as evidenced in existing literature are reported.
3.2.1. Diagnosis support systems
Diagnosis support systems are designed to assist healthcare professionals in accurately diagnosing medical conditions, often by analyzing patient data, medical records, and clinical information. They aid in the formulation of precise diagnoses and the selection of appropriate treatment plans.
The use of AI in disease diagnosis and treatment has been a focus of research since the 1970s when MYCIN, developed at Stanford, was used for diagnosing blood-borne bacterial infections [30].
In various medical fields, researchers have harnessed a range of AI-based techniques to detect diseases that require early diagnosis. Fig. 3 offers a comprehensive summary of the distribution of medical areas of interest, considering the results of a recent systematic review [31].
AI is proving to be a valuable tool for image analysis and is increasingly employed by professionals in radiology for early disease diagnosis and the reduction of diagnostic errors in preventive medicine. AI also aids in analyzing images and signals from various diagnostic tools to support decision-making. For instance, the Ultromics platform, implemented in Oxford, utilizes AI to analyze echocardiography scans, detecting heartbeat patterns and ischemic heart disease [32]. AI has shown promising results in the early detection of diseases such as breast and skin cancer, eye diseases, and pneumonia using various imaging techniques [33], [34], [35]. More recently, a system integrated with the advanced DL model enhanced precision in identifying ductal carcinoma in breast cancer imaging, providing valuable insights for practitioners [9]. Furthermore, AI is becoming an integral part of clinical practice, aiding in diagnostic and therapeutic imaging analysis within the context of the prostate cancer pathway [36]. This integration facilitates enhanced risk stratification and enables more precisely targeted subsequent management.
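For illustration, a generic inference pass of the kind used in such imaging studies might look as follows. This is a sketch assuming PyTorch and torchvision are available; the backbone choice, the two-class normal/abnormal head, and the random tensor standing in for a preprocessed scan are all placeholders, not any cited system:

```python
# Illustrative only: a generic image-classification inference pass of the
# kind used in diagnostic imaging. Model, weights, and classes are
# hypothetical placeholders.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=None)          # backbone; weights omitted here
model.fc = nn.Linear(model.fc.in_features, 2)  # binary head: normal/abnormal
model.eval()

scan = torch.randn(1, 3, 224, 224)             # stand-in for a preprocessed scan
with torch.no_grad():
    probs = torch.softmax(model(scan), dim=1)
print(f"P(abnormal) = {probs[0, 1]:.2f}")      # shown to the radiologist, not final
```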
Furthermore, AI-based analysis of speech patterns has shown a surprising ability to predict psychotic episodes and identify features of neurological diseases such as Parkinson's disease [37], [38]. In a recent study, AI-based models predicted the onset of diabetes [39]. Additionally, AI has been instrumental in the battle against COVID-19, playing a crucial role in its diagnosis using various imaging techniques, such as computed tomography (CT), X-rays, magnetic resonance imaging (MRI), and ultrasound (US) [40], [41], [42].
AI is significantly impacting clinical decision-making and disease diagnosis. It can process, analyze, and report large volumes of data from different sources, aiding in disease diagnosis and clinical decision-making. AI has the potential to assist physicians in making more informed clinical decisions and, in some cases, may even replace human decisions in therapeutic domains [43]. Moreover, investigations employing computer-aided diagnostics have shown remarkable sensitivity, accuracy, and specificity in detecting subtle radiographic abnormalities, contributing to advancements in public health. However, it is worth noting that the assessment of AI outcomes in imaging studies often focuses on lesion detection, potentially overlooking the biological severity and type of a lesion; this may lead to a skewed interpretation of AI output. Additionally, the use of non-patient-related radiological and pathological endpoints may increase sensitivity at the cost of more false positives, and may overestimate diagnoses by detecting minor abnormalities that mimic subclinical disease [44]. Furthermore, AI has found application in motion analysis, with machine learning-driven video analysis showcasing the capacity of computers to automate the identification of gait abnormalities and associated pathologies in individuals with orthopedic and neurological disorders [45], [46].
Despite significant advancements in recent years, precise clinical diagnostics still faces several challenges that demand continuous improvement to effectively combat emerging illnesses and diseases. Healthcare professionals themselves acknowledge obstacles that must be addressed before illnesses can be reliably detected in collaboration with AI. Currently, doctors do not rely entirely on AI-based approaches because they are uncertain about their ability to predict diseases and associated symptoms. Substantial efforts are therefore needed to train AI-based systems and enhance their accuracy in disease diagnosis. Consequently, future AI-based research should take these limitations into account to establish a mutually beneficial relationship between AI and clinicians. Utilizing a unified diagnostic model across different institutions could significantly improve accuracy, thereby aiding the early diagnosis of diseases.
3.2.2. Care assistive systems
Care assistive systems encompass a diverse range of versatile systems that offer comprehensive monitoring and support capabilities across various healthcare settings. They enable real-time monitoring and provide assistance, fostering patient engagement and compliance in care. Whether the patient is physically present with the healthcare provider or in a remote location, these systems ensure accessibility and convenience, thereby improving the overall patient experience.
Due to remarkable technological advancements, AI has ushered in innovative applications within the realm of assistive care [23]. Significant progress has been made in wearable devices capable of measuring physiological changes and facilitating real-time patient monitoring [47]. Remote patient monitoring, a subset of telehealth, enables healthcare providers to remotely monitor and assess patient conditions, reducing the reliance on traditional in-person visits. This approach leverages sensors and communication technologies, simplifying the remote collection and evaluation of health data and empowering patients to take control of their health [48], [49].
Traditionally, patient monitoring systems relied heavily on clinicians' time management and invasive methods requiring skin contact. However, patient remote monitoring in healthcare now incorporates innovative IoT techniques, including contact-based sensors, wearable devices, and telehealth applications. These technologies enable the examination of vital signs and physiological variables, such as motion recognition, which supports medical decision-making and therapeutic strategies for various conditions, including psychological illnesses and movement disorders [50]. Healthcare providers have also harnessed remote patient monitoring platforms to ensure the continuity of patient care during the COVID-19 pandemic.
Conventional AI techniques are commonly employed in virtual applications to detect early signs of patient deterioration, understand patient behavior patterns through reinforcement learning, and tailor the monitoring of patient health parameters via federated learning. AI plays a pivotal role in managing chronic diseases, including diabetes mellitus, hypertension, sleep apnea, and chronic bronchial asthma, through non-invasive, wearable sensors [51]. Smart homes equipped with sensors that monitor physiological variables such as respiratory rate, pulse rate, breathing waveform, blood pressure, and ECG can aid residents in their daily activities and alert caregivers when assistance is required.

Additionally, smart mobile and wearable devices allow users to collect data and monitor progress toward personalized therapy goals [52], [53]. Inertial sensors in wearable technology can assess individuals' adherence to exercise regimens, particularly in rehabilitation programs [54], [55]. Recent reviews highlight the potential of AI integration into wearable technologies while addressing the issue of user retention, and underscore the need for patient education to improve AI acceptance [56], [57], [58]. AI-driven processing of sensor data can track patterns in physiological measurements, positional data, and kinematic data, offering insights into improving athletic performance.

AI can enhance injury prediction models, increase the diagnostic accuracy of risk stratification systems, enable continuous patient health monitoring, and improve the patient experience. Despite these benefits, the adoption of AI in wearable devices faces challenges such as missing data, socioeconomic bias, data security, outliers, signal noise, and the acquisition of high-quality data using wearable technology [59], [60]. Patient acceptance also presents a critical hurdle to the widespread adoption of these technologies. More broadly, AI's transformative potential in remote patient monitoring is accompanied by challenges related to privacy, signal processing, data volume, uncertainty, imbalanced datasets, feature extraction, and explainability [50].
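A minimal sketch of such monitoring logic is shown below, assuming nothing more than a stream of heart-rate samples; the window size, z-score limit, and toy readings are our own choices, not from the cited studies:

```python
# Sketch under assumptions: a rolling z-score check on streamed heart-rate
# samples, a simple stand-in for the AI-driven monitoring described above.
from collections import deque
import statistics

def monitor(stream, window=30, z_limit=3.0):
    """Yield (sample, is_anomaly) pairs; alert once a baseline exists."""
    history = deque(maxlen=window)
    for hr in stream:
        if len(history) >= 10:
            mu = statistics.mean(history)
            sd = statistics.pstdev(history) or 1.0  # guard against zero spread
            yield hr, abs(hr - mu) / sd > z_limit
        else:
            yield hr, False  # not enough data for a baseline yet
        history.append(hr)

readings = [72, 74, 71, 73, 75, 72, 70, 74, 73, 72, 71, 128]  # toy data
for hr, anomaly in monitor(readings):
    if anomaly:
        print(f"Alert: heart rate {hr} bpm deviates from recent baseline")
```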
The integration of AI language models can further revolutionize patient care, with digital applications assisting patients in managing their treatment regimens. These virtual assistants, similar to personal healthcare advisors, can provide reminders for medication adherence and offer health status updates. The rise of virtual patient assistants built on AI natural language models exemplifies this application of AI in healthcare: such assistants can guide patients in managing chronic conditions like diabetes, recommend over-the-counter medications, and offer guidance on remote therapy sessions.

Physically and socially supportive robots have proven invaluable in aiding individuals in their recovery from injuries or illnesses, bridging cognitive, motor, or sensory deficits. These technologies contribute significantly to enhancing functional abilities, independence, and overall well-being [53]. ML methods have also found application in the evaluation of patient data, clinical decision support, and diagnostic imaging. In therapy, artificial cognitive applications can assess rehabilitation sessions based on machine-generated signals [52]. The incorporation of AI-driven tools, such as AI natural language models, into rehabilitation sessions can complement traditional therapy, offering personalized guidance and monitoring for patients during their recovery journey [61]. Such models can also assist individuals in practicing speech and language skills, both at home and elsewhere. Studies have demonstrated ChatGPT's ability to redraft text empathetically, enhancing peer-to-peer mental health support and various community-based self-managed therapy tasks, including cognitive behavioral therapy [62].

Furthermore, the utilization of apps and online portals for patient-physician communication has significantly improved patient engagement rates. Healthcare apps securely store and distribute patient data in the cloud, providing patients with easy access to their health information and facilitating better health outcomes. AI-based medical consultation apps allow patients to obtain (non-emergency) information and even offer medication reminders. The integration of AI language models into healthcare apps streamlines time-consuming tasks like summarization, note-taking, and report generation, making healthcare more efficient. These apps also assist patients in symptom checking, appointment scheduling, medication management, patient education, and the self-management of chronic diseases [63]. Various digital platforms, including mobile apps, voice assistants, and websites, can facilitate access to assistive care. However, the utilization of virtual assistants in healthcare is not without challenges, encompassing ethical concerns, data interpretation, privacy, security, consent, and liability issues [63].
AI is sparking a transformative wave in patient care, touching upon a multitude of healthcare domains, ranging from remote monitoring to injury prevention and virtual assistance. Through ongoing innovation and comprehensive education, AI stands poised to elevate the quality and global reach of healthcare services. Yet, it simultaneously raises pertinent questions regarding ethical considerations and the evolving regulatory landscape.
4. The regulatory framework
4.1. Ethical principles and guidelines
The ethical foundation in medicine rests upon a set of fundamental principles that guide healthcare professionals in delivering compassionate and patient-centered care. Central to these principles is respect for patient autonomy, acknowledging individuals' rights to make informed decisions about their healthcare. Concurrently, the principle of beneficence underscores the healthcare provider's duty to act in the best interests of patients, promoting well-being and striving to maximize positive outcomes while minimizing harm, in adherence to the principle of non-maleficence [64].
Justice in medical ethics emphasizes the equitable distribution of healthcare resources, treatments, and opportunities, addressing disparities and ensuring universal access. Veracity and confidentiality highlight the importance of honest and transparent communication while safeguarding patient privacy.
Fidelity, or professional faithfulness, underscores the commitment of healthcare professionals to fulfill their duties and obligations, maintaining trust within the physician-patient relationship and the broader healthcare system. These principles collectively form the ethical compass that guides decision-making in medicine, ensuring a balance between individual rights, societal equity, and the integrity of the medical profession. As the healthcare landscape evolves, these principles remain essential, fostering ethical practices that prioritize patient welfare and uphold the core values of the medical profession. Continuous reflection on these ethical considerations ensures that healthcare professionals navigate the complexities of medical practice with integrity, compassion, and an unwavering commitment to ethical standards.
When developing digital technologies for healthcare, it is essential to take into account the requirements for monitoring patient safety, privacy, traceability, accountability, and security. Furthermore, plans should be established to address any breaches that may occur.
The WHO initiated in 2019 the development of a framework to facilitate the integration of digital innovations and technology into healthcare. The WHO's guidelines for digital interventions in healthcare emphasize the importance of evaluating these technologies based on factors such as ‘benefits, potential drawbacks, acceptability, feasibility, resource utilization, and considerations of equity.’ These recommendations underscore that these digital tools should be perceived as essential aids in the quest for universal health coverage and long-term sustainability [65].
The ethical principles for applying AI in the field of healthcare are designed to provide guidance to developers, users, and regulators to enhance the design and utilization of these technologies while ensuring proper oversight.
At the heart of all ethical principles lie human dignity and the intrinsic worth of every individual. These foundational values underpin the ethical guidelines that outline duties and responsibilities within the sphere of developing, implementing, and continually evaluating AI technologies for healthcare. The European AI regulation proposed on April 21, 2021, categorizes AI products according to their potential risk to fundamental rights such as health and safety, dignity, freedom, equality, democracy, the right to be free from discrimination, and data protection.
Given this classification, ethical principles play a pivotal role for all stakeholders engaged in the responsible advancement, deployment, and assessment of AI technologies for healthcare. This inclusive group encompasses physicians, system developers, healthcare system administrators, health authority policymakers, as well as local and national governments. Ethical principles should serve as catalysts, encouraging and aiding governments and public sector agencies in adapting to the rapid evolution of AI technologies through legislation and regulation. Moreover, these principles should empower medical professionals to judiciously employ AI technologies in their practice.
Within a general framework for the use of AI techniques in the service of society, six fundamental principles in favor of the ethical development of such technologies have been identified in the literature. Some are fundamental principles commonly used in bioethics: beneficence and non-maleficence (i.e., to do good and to do no harm, weighing the benefit/risk trade-off), autonomy (respecting the individual's interest in making decisions), and justice (ensuring fairness and that no person or group is subjected to discrimination, neglect, manipulation, domination, or abuse). Other principles draw from moral and legal standards. These emphasize the epistemological aspect of intelligibility, encompassing both the need for clear explanations of a technology's operations and the responsibility to trace cause-and-effect relationships resulting from a technology's actions. They also underscore the importance of safeguarding and upholding individual privacy, empowering individuals to retain control over sensitive information concerning themselves, thereby preserving their capacity for self-determination and, in turn, respecting their autonomy [66].
In a recent WHO initiative conducted in June 2021, a comprehensive set of indications, recommendations, and guidelines on the development, application, and utilization of AI technologies in medicine was established [24]. The WHO work offered a detailed exploration of fundamental ethical principles designed to guide the development and implementation of AI technologies. Given the strong endorsement of this updated document within the AI domain, the subsequent paragraphs provide a comprehensive review of the key ethical guidelines delineated by the WHO. Fig. 4 provides a schematic summary of recommended ethical principles and guidelines for AI in healthcare.
Protection of autonomy

The integration of AI may lead to scenarios where decision-making is either transferred to or shared with machines. Upholding the principle of autonomy necessitates that any expansion of machine autonomy should not compromise human autonomy. In the context of healthcare, this implies that individuals should retain complete control over healthcare systems and medical choices. AI systems should be meticulously and consistently designed to align with established principles and human rights, specifically focusing on their role in aiding individuals, whether they are healthcare professionals or patients, in making well-informed decisions. Respecting autonomy also encompasses the accompanying responsibilities of safeguarding privacy, maintaining confidentiality, and ensuring informed and valid consent through the implementation of appropriate legal frameworks for data protection.
Promoting welfare, safety and public interest

AI technologies must adhere to strict regulatory standards concerning safety, accuracy, and effectiveness before they are made available to the public. These requirements are essential not only for initial quality control but also to promote continuous improvement. As a result, individuals and entities involved in the funding, development, and utilization of AI technologies carry an ongoing duty to evaluate and oversee the performance of AI algorithms to confirm their intended functionality.
Ensuring transparency, explainability and intelligibility

AI should be comprehensible to developers, users, and regulators. Achieving transparency necessitates the provision of adequate information, which should be documented or made available before an AI technology is designed and implemented. This commitment to transparency not only enhances overall system quality but also serves as a protective measure for patient safety and public health. For instance, system evaluators rely on transparency to identify and rectify errors, and government regulators depend on it to carry out effective oversight.
AI technologies should be as explainable as possible and tailored to the comprehension levels of their intended audience. Striking a balance between full algorithmic explainability (even at the cost of some accuracy) and enhanced accuracy (possibly at the expense of explainability) is a critical consideration.
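This trade-off can be made tangible with a small experiment on synthetic data, sketched below assuming scikit-learn is available; the shallow decision tree exposes auditable rules, while the boosted ensemble is typically more accurate but opaque. None of this code derives from the cited guidance:

```python
# Minimal sketch of the explainability/accuracy trade-off: a shallow tree
# whose rules can be printed and audited versus a more opaque ensemble.
# Synthetic data stands in for clinical datasets.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
boost = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

print("interpretable rules:\n", export_text(tree))  # auditable by a regulator
print(f"tree accuracy:  {tree.score(X_te, y_te):.2f}")
print(f"boost accuracy: {boost.score(X_te, y_te):.2f}")  # often higher, opaque
```

The printed rules are what an evaluator or regulator can actually inspect, which is precisely the transparency the preceding paragraphs call for.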
Promoting accountability and responsibility

Accountability can be established through the application of ‘human assurance’, which entails the evaluation of AI technologies by both patients and physicians during their development and implementation. In the context of human assurance, regulatory principles are employed both upstream and downstream of the algorithm, creating human supervision points. The primary aim is to ensure that the algorithm maintains its medical effectiveness, remains open to evaluation, and upholds ethical accountability. Consequently, the utilization of AI technologies in the field of medicine necessitates accountability within intricate systems where responsibility is distributed among various stakeholders.
In cases where AI technologies make medical decisions that result in harm to individuals, accountability processes should unequivocally identify the respective roles of producers and clinical users in the harm incurred. To avert the diffusion of responsibility, where ‘everyone's problem becomes nobody's responsibility’, a robust accountability model, often referred to as ‘collective responsibility’, holds all parties involved in the development and deployment of AI technologies accountable. This approach encourages all stakeholders to act with integrity and minimize harm. It is worth noting that this remains a continually evolving challenge and has yet to be fully addressed in the laws of many countries.
To ensure appropriate redress for individuals and groups adversely affected by decisions made by algorithm-based systems, mechanisms for compensation must be put in place. This should encompass access to prompt and effective remedies and redress facilitated by both government bodies and companies employing AI technologies in the healthcare sector.
Ensuring inclusivity and equity

Inclusivity dictates that AI employed in healthcare should be intentionally designed to promote the broadest possible and equitable utilization and access, irrespective of factors such as age, gender, income, ability, or other distinguishing characteristics. AI technologies should not solely cater to the requirements and usage patterns prevalent in high-income settings; they must also be adaptable to accommodate various devices, telecommunications infrastructures, and data transfer capabilities, particularly in less economically privileged environments. Both industry and governments bear the responsibility of bridging the ‘digital divide’ within and between countries to ensure that new AI technologies are accessible on an equitable basis.
AI developers must ensure that AI data, especially training data, are free from sampling bias and characterized by accuracy, comprehensiveness, and diversity. Special provisions should be in place to safeguard the rights and well-being of vulnerable populations, coupled with mechanisms for redress in cases where biases and discrimination emerge or are alleged.
Promoting responsiveness and sustainability

AI responsiveness necessitates that designers, developers, and users engage in a continuous, systematic, and transparent assessment of AI technology to ensure that it functions effectively and appropriately, in accordance with the expectations and requirements of the context in which it is deployed. When an AI technology proves to be ineffective or causes dissatisfaction, the obligation to be responsive involves instituting a structured process to resolve the issue, which may include discontinuing the technology's use. Therefore, AI technologies should only be introduced if they can be seamlessly integrated into the healthcare system and receive adequate support. Regrettably, in under-resourced healthcare systems, new technologies are frequently underutilized, left unrepaired, or not upgraded, which squanders precious resources that could have been invested in other beneficial interventions.
Sustainability also hinges on the proactive response of governments and companies to anticipate workplace disruptions. This includes providing training for healthcare professionals to adapt to the integration of AI and addressing potential job losses due to the adoption of automated systems for routine health functions and administrative tasks.
4.2. Legislative measures
It is imperative that AI models maintain simplicity in their properties and functions to ensure ease of operation by healthcare providers [67]. Nevertheless, several hurdles hinder the widespread adoption of AI in healthcare. These challenges encompass capacity limitations in developing and maintaining infrastructure to support AI processes, elevated costs associated with data storage and backup for research purposes, and the substantial expenses required to enhance data reliability [68]. AI algorithms, while powerful, are not without their limitations, including limited applicability beyond the training domain, and susceptibility to bias [69]. To address these challenges, healthcare stakeholders should develop and execute a carefully planned strategy for AI implementation in healthcare to handle the cost, technological infrastructure, and AI system integration into clinical workflows. Furthermore, clinicians frequently experience mistrust and lack of understanding concerning AI-based clinical decision support systems, primarily due to undisclosed risks and the reliability of such systems [70]. This skepticism acts as a substantial obstacle to broad adoption. To address this, there is a growing focus on implementing explainable AI solutions to boost user trust and navigate these issues [71].
All this evidence emphasizes that the increasing integration of AI technologies into healthcare necessitates effective governance to address regulatory, ethical, and trust-related concerns [72], [25]. A recent study also underscores the critical role of governing AI technologies at the healthcare-system level in ensuring patient safety and healthcare system accountability, bolstering clinician confidence, enhancing acceptance, and delivering substantial healthcare benefits [73].
Maintaining control over regulated domains, especially healthcare, underscores the urgency of implementing national and international regulations. These regulations are essential for the responsible integration of AI-driven applications in healthcare while upholding the tenets of medical ethics [74]. This section concentrates on the proposal and refinement of five essential European acts, which together can form a unified regulatory framework for governing AI in healthcare (see Fig. 5).
In this endeavor, the European Union (EU) has already initiated actions by enacting the General Data Protection Regulation (GDPR) in 2018. GDPR is specifically crafted to protect personal data handled by data processors or controllers operating within the EU, setting a precedent for substantial regulatory changes in the United States and Canada [75].
Moreover, the European Commission has introduced the recent Artificial Intelligence Act (AIA), which is designed to address a range of risks linked to the extensive implementation of AI [76], [77]. This set of regulations advocates for the responsible deployment of AI and endeavors to prevent or alleviate potential harms arising from specific technology applications. According to this proposed act, high-risk AI systems are obligated to undergo pre-deployment compliance assessments and post-market monitoring to ensure their adherence to all the requirements outlined by the AIA [78].
Furthermore, the European Union has recently implemented the Medical Device Regulation (MDR) 2017/745/EU [79], which replaces both the Medical Device Directive (MDD) 93/42/EEC [80] and the Active Implantable Medical Device Directive (AIMDD) 90/385/EEC [81], with the aim of enhancing the regulation of the medical device market.
In general, the implementation of the MDR does not remove any requirements from the replaced regulatory acts. Instead, it introduces new ones, emphasizing a life-cycle approach to device safety. To summarize, the primary changes brought about by the MDR's entry into force and application for medical devices are as follows:
i) the MDR enhances controls to ensure the safety and effectiveness of devices.
ii) the mechanism allowing for the acceleration of placing devices on the market or putting them into service through equivalence to existing devices is no longer applicable for all medical devices.
iii) post-marketing clinical follow-up is extended to all medical devices, leading to an increased importance and number of clinical evaluations and investigations.
According to Article 51(1) of the MDR, medical devices are classified into four main classes: I, IIa, IIb, and III. This classification depends on their intended purpose and inherent risks, based on the criteria specified in Annex VIII. In general, class I includes most non-invasive and non-active devices, representing the lowest risk. Class IIa devices are of medium risk, while class IIb devices are of medium to high risk. Class III is reserved for high-risk devices. Regarding software, it is classified as class IIa when it is intended to provide information for diagnostic or therapeutic decisions, unless these decisions could result in death or irreversible deterioration of a patient's health (in which case it is classified as class III) or in serious health deterioration or surgery (in which case it is classified as class IIb). Furthermore, if the software is designed for monitoring physiological processes, it is classified as class IIa, unless it is meant for monitoring vital physiological parameters, where changes in these parameters could pose an immediate danger to the patient (in which case it is classified as class IIb).
All other software falls under class I (as per Rule 11 of Annex VIII). It should be noted, however, that for AI-based medical devices the class I risk classification is typically not applicable [82]. AI technologies do qualify as medical devices in light of their definition under the MDR, but this conclusion requires further clarification. Specifically, the choice of applicable law hinges on whether a medicinal product incorporated into an AI device is considered ‘ancillary’, leading to the application of the MDR, or ‘non-accessory’, which triggers the application of laws related to medicines for human use [83]. In this context, one could argue that, according to Article 1 of the MDR, if the medicinal product incorporated into the device has an ancillary role concerning the device, it falls under the evaluation and authorization process defined by the MDR. However, if it serves a principal (i.e., non-accessory) function, the comprehensive product will be regulated by Directive 2001/83/EC of the European Parliament and of the Council, which pertains to medicinal products for human use, or, when applicable, by Regulation 726/2004/EC, which governs Community procedures for authorizing and overseeing medicinal products for human and veterinary use and establishes the European Medicines Agency (EMA).
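The classification logic of Rule 11, as summarized above, can be sketched as a simple decision procedure. This is an illustration of the text only, not a legally usable classifier:

```python
# Simplified sketch of the Rule 11 (Annex VIII, MDR) logic described above.
def classify_software(informs_decisions: bool,
                      may_cause_death_or_irreversible: bool,
                      may_cause_serious_deterioration_or_surgery: bool,
                      monitors_physiology: bool,
                      monitors_vital_parameters: bool) -> str:
    """Return the MDR risk class of a software device per Rule 11."""
    if informs_decisions:
        if may_cause_death_or_irreversible:
            return "III"
        if may_cause_serious_deterioration_or_surgery:
            return "IIb"
        return "IIa"
    if monitors_physiology:
        return "IIb" if monitors_vital_parameters else "IIa"
    return "I"  # residual class; rarely applicable to AI-based devices

# Example: software informing decisions where an error could be fatal -> III
print(classify_software(True, True, False, False, False))
```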
Furthermore, when discussing accountability and transparency in AI, the Clinical Trial Regulation (CTR) offers relevant suggestions for regulation.
Of particular significance are the regulations surrounding two key aspects of the product lifecycle, particularly during the pre-market phase: the responsibilities of sponsors and investigators. A sponsor of a clinical study or experiment can be an individual, a company, an institution, or an organization that assumes the responsibility for initiating, managing, and financing the said investigation or experiment. On the other hand, an investigator in a clinical study or experiment is an individual responsible for conducting it at a clinical investigation site. Under the MDR, the sponsor, whether acting alone or in conjunction with the investigator, is obliged to adhere to various obligations concerning the execution of clinical investigations. These obligations, outlined in Article 11 of the MDR and Article 25 of the AIA, encompass the following tasks:
i) maintaining a copy of the EU Declaration of Conformity and technical documentation available for review by the national competent authority in the context of market surveillance;
ii) furnishing the national competent authority, upon request, with all the necessary information and documentation to demonstrate the conformity of a high-risk AI system;
iii) collaborating with the national competent authority, upon request, on any action taken regarding the AI system.
Finally, the accountability process for regulating AI products may be extended to users, a category encompassing health professionals, laypersons, legal entities, public authorities, agencies, and other organizations that utilize medical devices under their jurisdiction.
Article 29 of the AIA lays down a series of responsibilities for users of high-risk AI systems. These include adhering to the provided instructions for use, ensuring that input data align with the system's intended purpose, diligently monitoring the system's operation as per the instructions, immediately notifying the developer or distributor, and discontinuing use of the AI medical device if there are concerns that its application does not comply with the instructions for use. Users are also expected to report identified serious risks to the sponsor and investigator, to utilize the relevant information in fulfilling their obligations, and to comply with the data protection requirements specified in Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, where applicable.
5. Open challenges
A plethora of challenges emerges in the healthcare landscape with the growing integration of AI. These challenges span technological advancements, regulatory considerations, and ethical dimensions.
This section delves into the nuanced implications of AI technologies in clinical practice, focusing the discussion on two pivotal aspects: the doctor-patient relationship and AI-driven clinical decision-making. Additionally, challenges related to health record data are addressed, offering insights into current practices in the utilization of medical data.
5.1. The doctor-patient relationship
The patient-doctor relationship is a critical aspect of healthcare, characterized by mutual trust, effective communication, and collaboration. Establishing trust between patients and healthcare providers is foundational for successful medical care. It involves open and transparent communication, empathy, and a shared understanding of the patient's concerns, values, and treatment preferences [84].
Issues related to trust within the patient-doctor relationship are multifaceted. Patients place trust in the expertise and knowledge of their healthcare providers, relying on their guidance for accurate diagnoses and effective treatments. At the same time, healthcare providers trust the information shared by patients to make informed decisions about their care. Exploring these trust dynamics is essential for a comprehensive understanding of the patient-doctor relationship, especially when integrating AI into clinical decision support systems.
The incorporation of AI prompts a reassessment of the conventional doctor-patient relationship.
Some argue that the traditional model of a nurturing relationship has become outdated. It appears that the concept of patients seeking a doctor's expert advice and placing themselves under their care is evolving into a model where patients actively participate in generating health knowledge and acquiring expertise to manage their illnesses [85].
The healing relationship must be understood as an idealized picture of the relationship between ‘expert doctors’ and ‘vulnerable patients’. This concept encompasses the factors that drive patients to seek professional medical assistance or to leverage knowledge and technology for self-care. Whether one opts for professional medical services or self-directed care, the fiduciary responsibilities arising from this vulnerability remain consistent, regardless of the sources of expertise. These sources can include medical professionals, repositories of medical information and guidance, or other technologies and systems that support self-care, such as telemedicine or easily accessible medical information on the internet.
In this context, redefining how the fiduciary obligations of medicine are met takes on renewed significance, especially with the future deployment of AI in medicine. Relevant questions have been raised, for instance, about the validity and effectiveness of medical knowledge available through Internet portals. Furthermore, even as medical information becomes increasingly accessible through various means, the role of expertise as an indicator of trustworthiness remains unchanged.
Building on this foundation, the healing-relationship model can be construed as an illustration of the ethical attributes and responsibilities intrinsic to medical practice. Traditionally, these principles were embodied by healthcare professionals, but they now extend across diverse platforms and individuals, encompassing web portals, consumer device creators, wellness service providers, and others. While contemporary medicine has progressed beyond the conventional doctor-patient model delineated in the therapeutic relationship, the responsibilities associated with this connection have not evaporated. Instead, the diffusion and transfer of these responsibilities to new technological participants in medicine raise concerns regarding the oversight of AI integration in healthcare.
When assessing the impact of AI and algorithmic technologies on the doctor-patient relationship, the selection of metrics becomes paramount. If we solely gauge it in terms of cost-effectiveness or utility, the rationale for incorporating AI into healthcare and expanding its role is evident. Nonetheless, while algorithmic technologies might enhance efficiency and reduce the cost of treating more patients, they could potentially erode the non-mechanical aspects of care. One can distinguish between the effects of algorithmic systems (and their utility components) that contribute to the welfare of the patient or the practice of medicine, governed by established ethical standards, and those that benefit medical institutions and healthcare services.
The moral engagement intrinsic to the doctor-patient relationship, where treatment ideally stems from the practitioner's contextual and historically informed evaluation of the patient's condition, is challenging to replicate in interactions with AI systems. The role of the patient, the factors prompting individuals to seek medical assistance, and the vulnerability of patients remain unaltered with the introduction of AI as a mediator or enhancer of medical care. What does transform is how care is administered, the possibilities for its delivery, and who delivers it. The delegation of care skills and responsibilities to AI systems can be disruptive in multiple respects.
The deployment of AI machines and robots in medicine, particularly when they demonstrate enhanced efficiency, precision, speed, and cost-effectiveness, appears appealing when considering their substitution for humans in tasks that are repetitive, tedious, perilous, demeaning, or exhausting. If used judiciously, AI has the potential to reduce the time healthcare professionals allocate to bureaucratic, routine tasks, or those that expose them to unnecessary risks, thereby offering a lower-risk, patient-focused environment.
Traditionally, clinical care and the doctor-patient relationship are ideally rooted in the physician's contextual and historically informed evaluation of the patient's condition. This type of care is challenging to replicate in technologically mediated healthcare. Patient data representations inherently confine the doctor's comprehension of the patient's case to quantifiable attributes. This can pose challenges as clinical assessments increasingly rely on data representations derived from sources like remote monitoring technologies or data collected without in-person interactions.
Patient data representations can be perceived as an ‘objective’ gauge of an individual's health and overall well-being, potentially diminishing the significance of contextual health elements or the perspective of the patient as a socially situated individual. These data representations can generate an ‘illusion of certainty,’ where ‘objective’ monitoring data are regarded as an accurate portrayal of the patient's condition, often overlooking the patient's interpersonal context and other unspoken knowledge [86].
Healthcare providers encounter this challenge when integrating AI systems into patient care protocols. The volume and intricacy of data and suggestions derived from technology can complicate the identification of crucial missing contextual details regarding a patient's condition. Depending solely on data obtained from health apps or monitoring technologies (e.g., smartwatches) as the primary information source about a patient's health may lead to the oversight of facets of a patient's well-being that are not readily measurable. This encompasses vital aspects of mental health and overall well-being, such as the patient's emotional, mental, and social status. Consequently, a ‘decontextualization’ of the patient's state may transpire, wherein the patient relinquishes some influence over how their condition is conveyed and comprehended by healthcare professionals and caregivers. These scenarios imply that the interactions essential for cultivating the fundamental trust inherent in a traditional doctor-patient relationship may be impeded by technological intervention. Technologies that obstruct the conveyance of ‘psychological signals and emotions’ may impede the physician's comprehension of the patient's condition, jeopardizing ‘the establishment of a physician-patient relationship grounded in trust and the pursuit of healing’ [85].
Occupying an intermediary position between the doctor and the patient, AI systems modify the dynamics between healthcare providers and patients, as they delegate a portion of the patient's continuous care to a technological platform. This shift could lead to a growing divide between healthcare professionals and patients, potentially implying a missed chance to cultivate an intuitive grasp of the patient's health and overall well-being [86].
Relying on AI systems for clinical care or expert diagnostic capabilities may hinder the cultivation of expertise, professional networks, and the establishment of ‘best practice’ standards in the field of medicine. This phenomenon is commonly known as ‘de-skilling’ and contradicts the principles of ‘human-centered AI’ advocated by the WHO. Human-centered AI aims to bolster and enrich human skills and competency development rather than diminishing or substituting them [87].
As care becomes increasingly technologically mediated and involves non-professional entities, the development, maintenance, and enforcement of internal standards aimed at fulfilling moral obligations to patients may face potential compromises. There is a looming prospect that algorithmic systems could supplant the traditional roles of healthcare professionals, with a primary focus on efficiency and cost-effectiveness.
To safeguard against the erosion of comprehensive, patient-centered care, new care providers and entities outside the conventional medical community must place significant emphasis on upholding these moral obligations to benefit and respect patients. In doing so, they can ensure that healthcare remains not only technically ‘efficient’ but also holistic and genuinely beneficial.
A key role of human clinical expertise is to safeguard the well-being and safety of patients. When this human expertise is compromised through skill degradation or displaced by automation bias, it becomes essential for tests and clinical effectiveness trials to step in and bridge the gap, thus ensuring patient safety [88]. This trade-off is mirrored in the context of transparency and precision. Some scholars contend that medical AI systems may not require comprehensible explanations if their clinical accuracy and effectiveness can be consistently verified [89].
Automation in data acquisition, interpretation, diagnosis processing, and therapy identification cannot operate in complete isolation from human involvement. It necessitates continuous validation and, therefore, does not diminish the significance of the doctor-patient relationship's uniqueness. Each individual's ailment is inherently distinctive, and personal interaction remains the fundamental component of every diagnosis and treatment. In this context, machines cannot replace humans in a relationship founded on the interplay of complementary realms of autonomy, competence, and responsibility.
AI should be regarded solely as a tool to aid physicians in their decision-making processes, which must always remain under human control and supervision. Ultimately, the responsibility for making the final decision rests with the doctor: the machine's role is exclusively supportive, involving data gathering and analysis in an advisory capacity. It is important to underscore that an ‘automated cognitive assistance system’ in diagnostic and therapeutic procedures does not equate to an ‘autonomous decision-making system’ [88]. It assists by collating clinical and documentary data, comparing them with statistics on similar patients, and expediting the physician's analytical process.
An important concern arises: what happens when AI outperforms the capabilities of a doctor? In specific scenarios, this is indeed a technical possibility that must be considered. It is within this particular realm that the feared ‘replacement’ of machines for humans might potentially occur in the future.
However, a more immediate consequence might involve the delegation of decision-making to technology. Entrusting complex tasks to AI systems can result in the erosion of human and professional qualities. To maintain the doctor-patient relationship as one grounded in trust, in addition to care, it is crucial to preserve the essential role of the ‘human doctor.’ Only the human doctor possesses the unique capacities of empathy and genuine understanding that cannot be replicated by AI. While predetermined standards of behavior and codes of conduct, such as protocols and guidelines, provide support based on knowledge and experience in professional practice, the demands of diagnosis and treatment often necessitate deviation from these predetermined models.
It would be of great concern if the space seemingly left to the presumed neutrality of machines led to the ‘neutralization’ of the patient. The vast potential offered by AI should be viewed as a valuable opportunity through which technology can expand ethical horizons, enhancing the patient's opportunity to be heard and fostering a deeper connection with the progression of their illness. In this context, AI serves as a valuable tool that saves the physician time on routine tasks, allowing more time to be dedicated to the doctor-patient relationship.
5.2. AI-powered clinical decision-making
A substantial portion of the momentum in AI stems from the belief that applying these technologies in diagnosis, care, or healthcare systems could enhance clinical and institutional decision-making.
Physicians and healthcare professionals are susceptible to various cognitive biases, leading to diagnostic errors. The National Academy of Sciences (NAS) has reported that approximately 5% of US adults seeking healthcare guidance are subject to misdiagnosis, with such errors contributing to 10% of all patient fatalities [90].
AI has the potential to mitigate inefficiencies and errors, leading to a more judicious allocation of resources, provided that the foundational data is both accurate and representative. Accountability plays a pivotal role in holding individuals and entities responsible for any adverse consequences of their actions. It is an indispensable element for upholding trust and safeguarding human rights.
Nevertheless, certain attributes of AI technologies introduce complexities into notions of accountability. These attributes include their opacity, their reliance on human input and interaction, their discretionary functions, their scalability, their capacity to uncover concealed insights, and their software intricacy. One particular challenge in ascribing responsibility arises from the ‘control problem’ associated with AI: developers and designers may not be held accountable, as AI-driven systems can operate independently and evolve in ways that the developer may assert are unpredictable [91]. Assigning responsibility to the developer could nevertheless provide an incentive to take all possible measures to minimize harm to the patient. Such expectations are already well established for manufacturers of other commonly used medical technologies, including drug and vaccine manufacturers, medical device companies, and medical equipment manufacturers.
Another challenge is the ‘traceability of harm,’ which is a persistent issue in complex decision-making systems within healthcare and other domains, even in the absence of AI. Due to the involvement of numerous agents in AI development, ascribing responsibility is a complex task, entailing both legal and moral dimensions. The diffusion of responsibility can have adverse consequences, including the lack of compensation for individuals who have suffered harm, incomplete identification of the harm and its root causes, unaddressed harm, and potential erosion of societal trust in such technologies when it appears that neither developers nor users can be held accountable [92].
Physicians routinely employ various non-AI technologies in the diagnosis and treatment process, ranging from X-rays to computer software. When a medical professional commits an error in the utilization of such technology, they can be held accountable, especially if they have received training in its application [93]. However, in cases where an error arises from the algorithm or data used to train an AI technology, it may be more appropriate to assign responsibility to those involved in the development or testing of the AI system. This approach avoids placing the onus on the doctor to assess the AI technology's effectiveness and usefulness, as they might not possess the expertise to evaluate complex AI systems [94].
Numerous factors argue against placing exclusive responsibility on physicians for decisions made by AI technologies, many of which apply to assigning responsibility for the use of healthcare technologies beyond AI. To begin with, physicians do not wield control over an AI-driven technology or the recommendations it provides [95]. Nonetheless, physicians should not be entirely absolved of liability for inaccuracies in the content, as this is necessary to prevent ‘automation bias’ and to encourage critical evaluation of whether the technology aligns with their requirements and those of their patients [92]. Automation bias occurs when a physician disregards errors that should have been identified through human-guided decision-making. While it is crucial for doctors to have trust in an algorithm, they should not set aside their own experience and judgment to blindly endorse a machine's recommendation [94].
Certain AI technologies may present not just a single decision but a range of options from which a physician must make a selection. When a physician makes an incorrect choice, determining the criteria for holding them accountable becomes a multifaceted challenge.
The complexity of attributing liability is magnified when AI technology is integrated into a healthcare system. In such cases, the developer, the institution, and the physician may all have contributed to a medical error, making it difficult to pinpoint full responsibility [90]. Consequently, accountability might not rest solely with the provider or developer of the technology; instead, it may lie with the government agency or institution responsible for selecting, validating, and implementing the technology.
Fortunately, as of today, the shift of decision-making in healthcare from humans to machines has not reached its culmination. While AI is currently proposed primarily to augment human decision-making in the practice of public health and medicine, debates on epistemic authority have begun to address why, in some circumstances, AI systems (as with the use of computer simulations) can displace humans from the center of knowledge production [96], [97]. These debates point to a possible complete transfer of routine medical tasks to AI, prompting questions about the legality of such full delegation. Modern laws increasingly acknowledge an individual's right not to be solely subjected to automated decisions when these decisions hold substantial consequences. Additional concerns may emerge if human judgment is progressively supplanted by machine-driven assessment, giving rise to broader ethical considerations associated with the loss of human oversight, particularly if predictive healthcare becomes standard practice.
Nonetheless, it is improbable that AI in the field of medicine will attain complete autonomy; instead, it may reach a stage of conditional automation or necessitate ongoing human support [98].
Substituting human judgment with AI and relinquishing control over certain aspects of clinical care offers distinct advantages. Humans may make decisions that are less equitable and more biased than those of machines (the concern about bias in AI usage is elaborated further below).
The utilization of AI systems for specific, well-defined decisions can be entirely justified if there is compelling clinical evidence indicating that the system outperforms a human in that particular task. However, the transition to the application of AI technologies for more intricate aspects of clinical care presents a set of challenges.
One such challenge is the potential emergence of a ‘disagreement between equals’ involving two competent experts: an AI machine and a doctor. In such scenarios, there is no feasible method for reconciling the two decisions or reasoning with the algorithm, since one cannot interrogate it or persuade it to change its conclusion. Furthermore, there are no clear-cut rules for determining which entity is correct, or whether a patient should place trust in the technology or in the doctor. The choice may hinge on factors that are not rooted in the ‘competence’ of either the machine or the doctor. Some argue that the algorithm's recommendation should be favored, as it incorporates the expertise of multiple individuals and a wealth of data points. However, opting for one over the other may result in an undesirable outcome. If the physician disregards the machine's recommendation, the AI's added value is limited. If the physician concurs with the machine's decision, it might erode their authority and diminish their responsibility [90].
The delegation of decision-making to AI-driven technologies and the subsequent loss of human control could impact various aspects of clinical care and the healthcare system.
While offering individuals more opportunities to share data and access autonomous health advice might enhance their sense of empowerment and self-care, it could also potentially trigger anxiety and fatigue [93]. As these technologies accumulate more personal data and incorporate it into the decision-making process of physicians, patients might find themselves gradually marginalized in shared decision-making, potentially compromising their ability to exercise free will or autonomy in health-related decisions [90].
The utilization of AI in medicine, especially when its use is not disclosed, poses challenges to the fundamental principles of informed consent and broader public trust in healthcare. While some may view the reduction of physician control over patients as a way to promote patient autonomy, there is an equally significant risk of relinquishing decision-making to AI technology. This risk is heightened if the technology is presented to the patient as offering a superior understanding of their health status and prognosis compared to a physician [94].
5.3. Health data records
The integration of AI technology in the healthcare sector faces notable limitations, particularly in the realm of managing health record data. Health data encompasses a patient's complete medical history, including details from physical examinations, investigations, and treatment records—all stored in electronic form.
The 2018 WHO-UNICEF Global Primary Health Care (PHC) Conference underscored the importance of harnessing AI techniques to augment existing systems and leverage data for electronic decision support and analytics [99]. The overarching objective is to attain Universal Health Coverage and advance healthcare equity for both individuals and populations. However, ethical challenges arise in the context of health records. The use of electronic health records introduces ethical issues related to beneficence, autonomy, fidelity, and justice. Autonomy is compromised when patients' health data are shared or linked without their knowledge. Fidelity is at risk when health organizations fail to take adequate precautions to safeguard identifiable health data. Justice is undermined when disparities exist in access to health information and services based on income, language, age, geography, literacy, and disability [100].
A comprehensive examination of health records and their implications necessitates the involvement of various stakeholders, including health personnel, leaders, policymakers, ethicists, and engineers.
Despite significant strides in computerization, challenges persist throughout the data life cycle—from collection and documentation to storage, management, sharing, and utilization [101], [102]. Efforts to address these challenges are essential to fully realize the potential benefits of AI in healthcare.
Legislation for AI regulation in the EU emphasizes trustworthy and ethical AI with explicit processes and human oversight for high-risk systems. Equitable AI is proposed as a solution, emphasizing fairness in capacity-building strategies. Integrating data quality assessment and management with information governance in the big data environment is essential for ethical health information ecosystems.
To effectively navigate the complexities of integrating AI technology into healthcare, a comprehensive approach that combines data usage, data quality management, governance, and ethics is essential. In this context, key recommendations have been identified to guide stakeholders in the enhancement of data utilization in healthcare: transparency, use limitation, access and correction, data quality, and security [103].
Healthcare professionals play significant roles as creators, collectors, managers, and users of observational health data. This raises the question of whether we need new consent models from clinicians, patients, and other healthcare providers within the referral and integrated care network at different points in the data life cycle. Determining the adequacy of consent, its informativeness, and relevance becomes a critical consideration, prompting the need for clear criteria and decision points throughout the life cycle.
In promoting good governance, it is essential to establish consent or employ other legal processes, such as opting out or utilization for public health purposes, to regulate access and sharing. Adhering to ethical principles, data quality categories, and information governance is crucial, ensuring their relevance at various stages in the data life cycle. Innovative technologies like blockchain are actively being utilized to expedite the consent process for clinical trials, potentially alleviating concerns related to personal data privacy associated with integrated service delivery [104].
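As a purely hypothetical illustration of why an append-only, hash-linked record is attractive for consent management, the sketch below chains consent entries so that any retroactive edit becomes detectable. It is a toy hash chain, not a distributed blockchain, and all identifiers are invented for demonstration.

```python
import hashlib
import json
import time

def add_consent(chain, patient_id, purpose, granted):
    """Append a tamper-evident consent record (toy hash chain)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "patient_id": patient_id, "purpose": purpose,
        "granted": granted, "timestamp": time.time(), "prev_hash": prev_hash,
    }
    # Hash the record's contents; each entry commits to its predecessor.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

chain = []
add_consent(chain, "patient-42", "trial-enrolment", True)
add_consent(chain, "patient-42", "data-sharing", False)
# Any later edit to an earlier record breaks the hash linkage:
assert chain[1]["prev_hash"] == chain[0]["hash"]
```

In a real deployment the chain would be replicated across mutually distrusting parties, which is what gives blockchain-based consent logs their integrity guarantees.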
Establishing and maintaining trust is paramount in implementing sustainable data creation and collection practices.
It is crucial to recognize that community-driven health data repositories may not provide the level of privacy citizens assume, particularly in the context of free online services whose user agreements allow the service owner to utilize the collected data. The deployment of applications and systems facilitating the unethical and unlawful exchange of digital information erodes trust. Building and maintaining trust in the health information exchange network requires reciprocity, transparency, and mutual trust among all actors, including data custodians and providers. The trustworthiness of data custodians hinges on their competence, commitment, and motives. Patients and service users generally have high levels of trust in the professionalism of clinical teams and health services. However, unnecessary tests and procedures resulting from unclear data or misinterpretations of unstructured data can compromise patient autonomy and hinder informed decision-making.
In the United States, legislation such as the Health Insurance Portability and Accountability Act (HIPAA) specifically excludes protections once data leaves a covered entity. Controversy surrounds the transmission of data to large companies for data mining, primarily due to the lack of explicit consent from patients, placing control in the hands of system designers and owners. In contrast, the EU's GDPR empowers individuals with control over their personal data.
Careful consideration is needed for all data processes, as they may harbor unrecognized risks.
Transferring data from information systems to data repositories demands a commitment to security, safety, and accuracy [105]. Privacy is a paramount concern, and while privacy-preserving linkage techniques exist for integrating observational data from information systems, their accuracy and security are not always guaranteed [106]. Beyond the threat of re-identification during data linkage, there exists the potential for data loss and compromise of data integrity.
For instance, a recent study underscored inaccuracies in cohort identification when utilizing vocabulary mappings within a common data model during data processing [107]. These mappings, integral to the data transformation process, may suffer from inaccuracies stemming from programming bugs and errors that escape detection during quality assurance stages.
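The following toy example, with invented codes and records, illustrates how a single omitted vocabulary mapping can silently shrink a cohort without raising any error:

```python
# Hypothetical source codes mapped to a common-data-model concept;
# the commented-out entry simulates a mapping omitted by a programming bug.
diabetes_mapping = {
    "ICD10:E11.9": "CDM:diabetes_t2",    # maps to the standard concept
    # "ICD10:E11.65": "CDM:diabetes_t2", # variant accidentally left unmapped
}

patient_records = [
    {"patient_id": 1, "code": "ICD10:E11.9"},
    {"patient_id": 2, "code": "ICD10:E11.65"},
]

# Cohort identification silently drops patient 2: no error is raised.
cohort = {r["patient_id"] for r in patient_records
          if diabetes_mapping.get(r["code"]) == "CDM:diabetes_t2"}
print(cohort)  # {1}
```

Because the exclusion produces no warning, such defects can only be caught by deliberate quality-assurance checks on mapping coverage.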
It is imperative to thoroughly identify and assess all risks associated with the data transfer process and to establish contingency plans to mitigate them. Vigilance in recognizing potential pitfalls and implementing safeguards is essential for ensuring the reliability and security of data transfer [108].
Transparent data processing is imperative, along with the ethical and secure sharing of research data, methodologies, and algorithms. Achieving reproducibility and generalizability while safeguarding patient privacy, financial investments, and intellectual property is essential.
Explainable AI (XAI) algorithms allow doctors to understand and validate results, aligning with the ‘learned intermediary’ principle where clinicians play a central role in decision-making. However, discerning between biased or inadequately explained AI guidance and the clinician's interpretation poses challenges. Collaborative efforts among clinicians can potentially address these biases and enhance the design of XAI, ensuring a critical appraisal of AI guidance in primary care that augments rather than undermines the patient-physician relationship [109].
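As a minimal sketch of how model explanations can be surfaced for clinical review, the example below trains a classifier on synthetic data and ranks input features by permutation importance, one simple model-agnostic explanation technique. The feature names and data are fabricated for illustration, and production XAI pipelines (e.g., SHAP or LIME) are considerably richer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["age", "systolic_bp", "hba1c", "bmi"]  # hypothetical inputs

# Synthetic cohort in which risk is driven mainly by hba1c and systolic_bp.
X = rng.normal(size=(500, len(features)))
y = (1.5 * X[:, 2] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much performance drops when a feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} {score:.3f}")  # influential features appear first
```

A ranking like this gives the ‘learned intermediary’ a starting point for judging whether the model is attending to clinically plausible factors.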
Conscious efforts to develop discrimination-aware algorithms aim to reduce bias in databases and associated applications, ensuring that data mining models do not lead to discriminatory decisions against vulnerable groups, even if the underlying dataset is inherently biased. Empirical studies evaluating AI practices are necessary to highlight ethical concerns and propose solutions, and trials comparing AI-based practices with conventional methods are vital for evidence-based policymaking in the face of poorly regulated AI practices. Reducing the opacity and complexity of AI methods, including deep learning and neural networks, is imperative.
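One elementary building block of such discrimination-aware evaluation is an audit of a model's positive-decision rates across demographic groups (a demographic-parity check). The sketch below, using invented decisions, shows how this disparity can be quantified; real fairness audits consider many additional metrics and their legal context.

```python
from collections import defaultdict

# Hypothetical model decisions annotated with a sensitive attribute.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]

# Positive-decision (selection) rate per group, and the gap between groups.
rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # selection rate per group
print(f"demographic-parity gap: {gap:.3f}")   # a large gap flags potential bias
```

A persistent gap does not prove discrimination by itself, but it identifies where a deeper, context-sensitive review of the data and model is warranted.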
The ethical governance of AI platforms necessitates modifications to traditional medical research ethics principles. This includes considerations for informed consent, protecting individual and group-level harms and benefits, ensuring patient empowerment, preserving the patient-doctor relationship, and upholding research subject rights in AI-supported projects. Data protection regulations for research and personal health services must be robust and effectively enforced.
In conclusion, adopting an integrated strategy encompassing data utilization, quality assurance, governance, and ethical considerations is pivotal. Key recommendations involve ensuring consent remains a focal point throughout the data life cycle, fostering sustainable data practices, meticulously examining data processes, seamlessly integrating data governance with quality management, navigating ethical challenges posed by AI, and applying a structured ethical framework aligned with the data life cycle. These proactive measures collectively strive to uphold trust in current systems and anticipated developments.
6. Conclusions
This study serves as a valuable addition to the existing literature by aggregating insights into AI's applications in healthcare. It also provides a comprehensive perspective on the ethical and governance challenges encountered by stakeholders interested in introducing AI in healthcare.
In the ongoing narrative of healthcare, AI assumes a pivotal role. As the driving force behind precision medicine, it fulfills a pressing need for enhanced care. While initial efforts to provide diagnoses and treatment recommendations have been met with many challenges, the trajectory suggests that AI will eventually master this domain.
The primary hurdle to widespread AI adoption in healthcare does not revolve around the capabilities of the technology but rather centers on ensuring seamless integration into daily clinical practice. For comprehensive adoption, AI systems must receive regulatory approval, adhere to standardized protocols that guarantee uniform functionality, be accompanied by clinician training, secure financial support from public and private payer organizations, and be updated iteratively under ongoing supervision. While these adoption challenges will ultimately be surmounted, they will require more time than the maturation of the technologies themselves.
AI technologies are making remarkable strides across a spectrum of healthcare applications, serving as a vital catalyst for advancements in medical diagnosis, virtual patient care, treatment adherence, and administrative efficiency. Nevertheless, AI confronts a constellation of technical, ethical, and governance challenges on its path to healthcare integration. Issues about data security and privacy loom large due to the utilization of sensitive, legally bound health data. Additionally, the limitations of AI in mirroring distinctly human emotional and behavioral traits, like compassion, can pose constraints on its utility.
Crucially, it is becoming increasingly evident that AI systems are not poised to replace human clinicians on a grand scale; rather, they will augment the capabilities of human caregivers. As time progresses, healthcare professionals are likely to shift their focus toward roles that leverage uniquely human skills, while AI evolves into an essential tool that enhances the collaborative efforts of clinicians as they strive to deliver superior patient care.
AI undeniably offers significant advantages, but it is crucial to emphasize that it can never supplant the human connections that underlie collaborative healthcare decisions. The intricate dynamics of multidisciplinary teamwork elude machines, as they cannot forge the profound bonds that humans establish through their experiential knowledge in personalizing patient care. To tackle these challenges, the future governance of AI must prioritize human expertise and experience in overseeing these technologies, ensuring that the ultimate decisions, even when contrary to AI recommendations, remain in the hands of healthcare professionals.
Additionally, the lack of AI accountability continues to pose a significant obstacle to its adoption. AI systems operate as black boxes, taking inputs and generating outputs without disclosing their underlying measurements or reasoning, a challenge known as the black-box problem. To tackle this issue and prevent healthcare practitioners from being wrongly held responsible for AI errors, the implementation of standardized policies and governmental measures is imperative.
A holistic approach involving all stakeholders, including providers, payers, and patients, is crucial for understanding clinical needs before advancing the adoption of AI in healthcare. To ensure the secure and proficient use of AI in the future, it is advisable to include AI training in the curricula of healthcare professionals. This will enhance their understanding of potential risks and their expectations regarding AI performance.
In conclusion, this review may have far-reaching implications across various stakeholders. It may serve as a valuable resource offering policymakers insights into potential pitfalls and hurdles in integrating these systems into healthcare. This knowledge may facilitate the development of frameworks that balance innovation with the protection of patient safety, privacy, and ethical considerations. Healthcare professionals may benefit from the guidance provided, enabling them to navigate challenges, make informed decisions, and incorporate AI technologies responsibly into clinical workflows. Moreover, the research may contribute to responsible development by aiding developers and technologists in designing systems that prioritize patient safety, privacy, and compliance with healthcare standards. The identification of ethical and regulatory challenges may also serve as a proactive approach to risk mitigation, contributing to the overall success and acceptance of AI technologies. Patient trust, a critical factor in healthcare, may be enhanced by addressing ethical concerns and ensuring compliance with regulations, fostering transparency in the implementation of AI technologies. Additionally, the research may support education and training initiatives for healthcare professionals and stakeholders, ensuring they are well informed and equipped to navigate the complexities of AI technologies. Lastly, the study underscores the need for continuous improvement in ethical guidelines and regulatory frameworks to keep pace with the evolving landscape of AI technology in healthcare.
CRediT authorship contribution statement
Ciro Mennella: Writing – review & editing, Writing – original draft, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Umberto Maniscalco: Writing – review & editing, Supervision, Methodology. Giuseppe De Pietro: Writing – review & editing. Massimo Esposito: Writing – review & editing, Validation, Supervision.
Declaration of Competing Interest
All authors have participated in (a) conception and design, or analysis and interpretation of the data; (b) drafting the article or revising it critically for important intellectual content; and (c) approval of the final version. This manuscript has not been submitted to, nor is under review at, another journal or other publishing venue. The authors have no affiliation with any organization with a direct or indirect financial interest in the subject matter discussed in the manuscript.
Acknowledgement
Ciro Mennella is a PhD student enrolled in the National PhD in Artificial Intelligence, XXXVII cycle, course on Health and life sciences, organized by Università Campus Bio-Medico di Roma (Via Alvaro del Portillo 21, 00128 Roma, Italy).
References
- 1.World Health Organization. 2017. Integrated care for older people: guidelines on community-level interventions to manage declines in intrinsic capacity. [PubMed] [Google Scholar]
- 2.World Health Organization. 2022. Ageism in artificial intelligence for health: WHO policy brief. [Google Scholar]
- 3.Magrabi F., Ammenwerth E., McNair J.B., De Keizer N.F., Hyppönen H., Nykänen P., Rigby M., Scott P.J., Vehko T., Wong Z.S.-Y., et al. Artificial intelligence in clinical decision support: challenges for evaluating ai and practical implications. Yearb. Med. Inform. 2019;28:128–134. doi: 10.1055/s-0039-1677903. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Kumar Y., Koul A., Singla R., Ijaz M.F. Artificial intelligence in disease diagnosis: a systematic literature review, synthesizing framework and future research agenda. J. Ambient Intell. Humaniz. Comput. 2022:1–28. doi: 10.1007/s12652-021-03612-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Srinivasu P.N., SivaSai J.G., Ijaz M.F., Bhoi A.K., Kim W., Kang J.J. Classification of skin disease using deep learning neural networks with mobilenet v2 and lstm. Sensors. 2021;21:2852. doi: 10.3390/s21082852. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Vulli A., Srinivasu P.N., Sashank M.S.K., Shafi J., Choi J., Ijaz M.F. Fine-tuned densenet-169 for breast cancer metastasis prediction using fastai and 1-cycle policy. Sensors. 2022;22:2988. doi: 10.3390/s22082988. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Rao G.E., Rajitha B., Srinivasu P.N., Ijaz M.F., Woźniak M. Hybrid framework for respiratory lung diseases detection based on classical cnn and quantum classifiers from chest x-rays. Biomed. Signal Process. Control. 2024;88 [Google Scholar]
- 8.Ahmad I., Merla A., Ali F., Shah B., AlZubi A.A., AlZubi M.A. A deep transfer learning approach for Covid-19 detection and exploring a sense of belonging with diabetes. Front. Public Health. 2023;11 doi: 10.3389/fpubh.2023.1308404. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Praveen S.P., Srinivasu P.N., Shafi J., Wozniak M., Ijaz M.F. Resnet-32 and fastai for diagnoses of ductal carcinoma from 2d tissue slides. Sci. Rep. 2022;12 doi: 10.1038/s41598-022-25089-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Kumar M., Verma S., Kumar A., Ijaz M.F., Rawat D.B., et al. Anaf-iomt: a novel architectural framework for iomt-enabled smart healthcare system by enhancing security based on recc-vc. IEEE Trans. Ind. Inform. 2022;18:8936–8943. [Google Scholar]
- 11.Baumeister R.F., Leary M.R. Writing narrative literature reviews. Rev. Gen. Psychol. 1997;1:311–320. [Google Scholar]
- 12.Slavin R.E. Best evidence synthesis: an intelligent alternative to meta-analysis. J. Clin. Epidemiol. 1995;48:9–18. doi: 10.1016/0895-4356(94)00097-a. [DOI] [PubMed] [Google Scholar]
- 13.McCarthy J., Minsky M.L., Rochester N., Shannon C.E. A proposal for the dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag. 2006;27:12–14. [Google Scholar]
- 14.Samuel A.L. Some studies in machine learning using the game of checkers. IBM J. Res. Dev. 2000;44:206–226. [Google Scholar]
- 15.Bengio Y., Courville A., Vincent P. Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013;35:1798–1828. doi: 10.1109/TPAMI.2013.50. [DOI] [PubMed] [Google Scholar]
- 16.LeCun Y., Bengio Y., Hinton G. Deep learning. Nature. 2015;521:436–444. doi: 10.1038/nature14539. [DOI] [PubMed] [Google Scholar]
- 17.Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85–117. doi: 10.1016/j.neunet.2014.09.003. [DOI] [PubMed] [Google Scholar]
- 18.La Rue F. Human Rights Council; 2011. Report of the special rapporteur on the promotion and protection of the right to freedom of opinion and expression; p. 16. [Google Scholar]
- 19.Organisation for Economic Co-operation and Development. 2019. Recommendation of the council on artificial intelligence. (OECD Legal Instruments. OECD/LEGAL/0449) [Google Scholar]
- 20.La Vattiata F.C. Ai-based medical devices: the applicable law in the European Union. BioLaw J.-Riv. BioDiritto. 2022:412–437. [Google Scholar]
- 21.Liu R., Wang M., Zheng T., Zhang R., Li N., Chen Z., Yan H., Shi Q. An artificial intelligence-based risk prediction model of myocardial infarction. BMC Bioinform. 2022;23:1–17. doi: 10.1186/s12859-022-04761-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Lambercy O., Lehner R., Chua K., Wee S.K., Rajeswaran D.K., Kuah C.W.K., Ang W.T., Liang P., Campolo D., Hussain A., et al. Neurorehabilitation from a distance: can intelligent technology support decentralized access to quality therapy? Front. Robot. AI. 2021;8 doi: 10.3389/frobt.2021.612415. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Mennella C., Maniscalco U., De Pietro G., Esposito M. The role of artificial intelligence in future rehabilitation services: a systematic literature review. IEEE Access. 2023 [Google Scholar]
- 24.Russell S.J. Pearson Education, Inc.; 2010. Artificial Intelligence: A Modern Approach. [Google Scholar]
- 25.World Health Organization. WHO Guidance; 2021. Ethics and Governance of Artificial Intelligence for Health. [Google Scholar]
- 26.NHS Health Education England. The Topol review: preparing the healthcare workforce to deliver the digital future. 2019. https://topol.hee.nhs.uk/
- 27.Nadarzynski T., Miles O., Cowie A., Ridge D. Acceptability of artificial intelligence (ai)-led chatbot services in healthcare: a mixed-methods study. Digit. Health. 2019;5 doi: 10.1177/2055207619871808. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Roski J., Hamilton B., Chapman W., Heffner J., Trivedi R., Del Fiol G., Kukafka R., Bleicher P., Estiri H., Klann J., et al. 2019. How Artificial Intelligence Is Changing Health and Health Care. [Google Scholar]
- 29.Gamble A. Artificial intelligence and mobile apps for mental healthcare: a social informatics perspective. Aslib J. Inf. Manag. 2020;72:509–523. [Google Scholar]
- 30.Bush J. How ai is taking the scut work out of health care. Harv. Bus. Rev. 2018;5 [Google Scholar]
- 31.Kumar Y., Koul A., Singla R., Ijaz M.F. Artificial intelligence in disease diagnosis: a systematic literature review, synthesizing framework and future research agenda. J. Ambient Intell. Humaniz. Comput. 2022:1–28. doi: 10.1007/s12652-021-03612-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Ghosh P. Ai early diagnosis could save heart and cancer patients, Science Correspondent. BBC News. 2018 [Google Scholar]
- 33.Wang D., Khosla A., Gargeya R., Irshad H., Beck A.H. Deep learning for identifying metastatic breast cancer. 2016. arXiv:1606.05718 arXiv preprint.
- 34.Esteva A., Kuprel B., Novoa R.A., Ko J., Swetter S.M., Blau H.M., Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115–118. doi: 10.1038/nature21056. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Rajpurkar P., Irvin J., Zhu K., Yang B., Mehta H., Duan T., Ding D., Bagul A., Langlotz C., Shpanskaya K., et al. Chexnet: radiologist-level pneumonia detection on chest x-rays with deep learning. 2017. arXiv:1711.05225 arXiv preprint.
- 36.Thomas M., Murali S., Simpson B.S.S., Freeman A., Kirkham A., Kelly D., Whitaker H.C., Zhao Y., Emberton M., Norris J.M. Use of artificial intelligence in the detection of primary prostate cancer in multiparametric mri with its clinical outcomes: a protocol for a systematic review and meta-analysis. BMJ Open. 2023;13 doi: 10.1136/bmjopen-2023-074009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Bedi G., Carrillo F., Cecchi G.A., Slezak D.F., Sigman M., Mota N.B., Ribeiro S., Javitt D.C., Copelli M., Corcoran C.M. Automated analysis of free speech predicts psychosis onset in high-risk youths. npj Schizophr. 2015;1:1–7. doi: 10.1038/npjschz.2015.30. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Cecchi G. 2017. With AI, our words will be a window into our mental health. [Google Scholar]
- 39.Chou C.-Y., Hsu D.-Y., Chou C.-H. Predicting the onset of diabetes with machine learning methods. J. Pers. Med. 2023;13:406. doi: 10.3390/jpm13030406. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.Gudigar A., Raghavendra U., Nayak S., Ooi C.P., Chan W.Y., Gangavarapu M.R., Dharmik C., Samanth J., Kadri N.A., Hasikin K., et al. Role of artificial intelligence in Covid-19 detection. Sensors. 2021;21:8045. doi: 10.3390/s21238045. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Khanna V.V., Chadaga K., Sampathila N., Prabhu S., Chadaga R., Umakanth S. Diagnosing Covid-19 using artificial intelligence: a comprehensive review. Netw. Model. Anal. Health Inform. Bioinform. 2022;11:25. [Google Scholar]
- 42.Krishnan K.S., Krishnan K.S. 2021 6th International Conference on Signal Processing, Computing and Control (ISPCC) IEEE; 2021. Vision transformer based Covid-19 detection using chest x-rays; pp. 644–648. [Google Scholar]
- 43.Secinaro S., Calandra D., Secinaro A., Muthurangu V., Biancone P. The role of artificial intelligence in healthcare: a structured literature review. BMC Med. Inform. Decis. Mak. 2021;21:1–23. doi: 10.1186/s12911-021-01488-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Oren O., Gersh B.J., Bhatt D.L. Artificial intelligence in medical imaging: switching from radiographic pathological data to clinically meaningful endpoints. Lancet Digit. Health. 2020;2:e486–e488. doi: 10.1016/S2589-7500(20)30160-6. [DOI] [PubMed] [Google Scholar]
- 45.Aggarwal R., Ganvir S.S., et al. Artificial intelligence in physiotherapy. Physiotherapy. 2021;15:55. [Google Scholar]
- 46.Lambercy O., Lehner R., Chua K., Wee S.K., Rajeswaran D., Kuah C., Ang W., Liang P., Campolo D., Hussain A., Aguirre-Ollinger G., Guan C., Kanzler C., Wenderoth N., Gassert R. Neurorehabilitation from a distance: can intelligent technology support decentralized access to quality therapy? Front. Robot. AI. 2021;8 doi: 10.3389/frobt.2021.612415. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Natarajan A., Su H.-W., Heneghan C. Assessment of physiological signs associated with Covid-19 measured using wearable devices. npj Digit. Med. 2020;3:156. doi: 10.1038/s41746-020-00363-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Al Kuwaiti A., Nazer K., Al-Reedy A., Al-Shehri S., Al-Muhanna A., Subbarayalu A.V., Al Muhanna D., Al-Muhanna F.A. A review of the role of artificial intelligence in healthcare. J. Pers. Med. 2023;13:951. doi: 10.3390/jpm13060951. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Baig M.M., GholamHosseini H., Moqeem A.A., Mirza F., Lindén M. A systematic review of wearable patient monitoring systems–current challenges and opportunities for clinical adoption. J. Med. Syst. 2017;41:1–9. doi: 10.1007/s10916-017-0760-1. [DOI] [PubMed] [Google Scholar]
- 50.Shaik T., Tao X., Higgins N., Li L., Gururajan R., Zhou X., Acharya U.R. Remote patient monitoring using artificial intelligence: current state, applications, and challenges. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2023;13 [Google Scholar]
- 51.Kim J., Campbell A.S., Wang J. Wearable non-invasive epidermal glucose sensors: a review. Talanta. 2018;177:163–170. doi: 10.1016/j.talanta.2017.08.077. [DOI] [PubMed] [Google Scholar]
- 52.Anderson D. Artificial intelligence and applications in PM&R. Am. J. Phys. Med. Rehabil. 2019;98:e128–e129. doi: 10.1097/PHM.0000000000001171. [DOI] [PubMed] [Google Scholar]
- 53.Luxton D.D., Riek L.D. Artificial intelligence and robotics in rehabilitation. 2019. https://doi.org/10.1037/0000129-031
- 54.Goldzweig C.L., Orshansky G., Paige N.M., Towfigh A.A., Haggstrom D.A., Miake-Lye I., Beroes J.M., Shekelle P.G. Electronic patient portals: evidence on health outcomes, satisfaction, efficiency, and attitudes: a systematic review. Ann. Intern. Med. 2013;159:677–687. doi: 10.7326/0003-4819-159-10-201311190-00006. [DOI] [PubMed] [Google Scholar]
- 55.Sinsky C.A., Willard-Grace R., Schutzbank A.M., Sinsky T.A., Margolius D., Bodenheimer T. In search of joy in practice: a report of 23 high-functioning primary care practices. Ann. Fam. Med. 2013;11:272–278. doi: 10.1370/afm.1531. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.Tran V.-T., Riveros C., Ravaud P. Patients' views of wearable devices and ai in healthcare: findings from the compare e-cohort. npj Digit. Med. 2019;2(53) doi: 10.1038/s41746-019-0132-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Soliño-Fernandez D., Ding A., Bayro-Kaiser E., Ding E.L. Willingness to adopt wearable devices with behavioral and economic incentives by health insurance wellness programs: results of a US cross-sectional survey with multiple consumer health vignettes. BMC Public Health. 2019;19:1–8. doi: 10.1186/s12889-019-7920-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58.Gao Y., Li H., Luo Y. An empirical study of wearable technology acceptance in healthcare. Ind. Manag. Data Syst. 2015;115:1704–1723. [Google Scholar]
- 59.Chidambaram S., Maheswaran Y., Patel K., Sounderajah V., Hashimoto D.A., Seastedt K.P., McGregor A.H., Markar S.R., Darzi A. Using artificial intelligence-enhanced sensing and wearable technology in sports medicine and performance optimisation. Sensors. 2022;22:6920. doi: 10.3390/s22186920. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60.Gichoya J.W., McCoy L.G., Celi L.A., Ghassemi M. Equity in essence: a call for operationalising fairness in machine learning for healthcare. BMJ Health Care Inf. 2021;28 doi: 10.1136/bmjhci-2020-100289. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 61.Mennella C., Maniscalco U., De Pietro G., Esposito M. A deep learning system to monitor and assess rehabilitation exercises in home-based remote and unsupervised conditions. Comput. Biol. Med. 2023 doi: 10.1016/j.compbiomed.2023.107485. [DOI] [PubMed] [Google Scholar]
- 62.Sharma A., Lin I.W., Miner A.S., Atkins D.C., Althoff T. Human–ai collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nat. Mach. Intell. 2023;5:46–57. [Google Scholar]
- 63.Javaid M., Haleem A., Singh R.P. BenchCouncil Trans. Benchmarks, Stand. Eval., vol. 3. 2023. Chatgpt for healthcare services: an emerging stage for an innovative perspective. [Google Scholar]
- 64.Beauchamp T.L., Childress J.F. Oxford University Press; USA: 2001. Principles of Biomedical Ethics. [Google Scholar]
- 65.Guideline W. World Health Organization; 2019. Recommendations on Digital Interventions for Health System Strengthening. 2020-10. [PubMed] [Google Scholar]
- 66.What is ‘biosurveillance’? The Covid-19 measures getting under our skin. 2020. https://medium.com/digital-freedom-fund/what-is-biosurveillance-c8bffe70d16f
- 67.Brown R. Data Science Central; 2022. Challenges to successful ai implementation in healthcare. [Google Scholar]
- 68.Tachkov K., Zemplenyi A., Kamusheva M., Dimitrova M., Siirtola P., Pontén J., Nemeth B., Kalo Z., Petrova G. Barriers to use artificial intelligence methodologies in health technology assessment in central and East European countries. Front. Public Health. 2022;10 doi: 10.3389/fpubh.2022.921226. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 69.Marcus G. Deep learning: a critical appraisal. 2018. arXiv:1801.00631 arXiv preprint.
- 70.Choudhury A., Asan O. Impact of accountability, training, and human factors on the use of artificial intelligence in healthcare: exploring the perceptions of healthcare practitioners in the US. Hum. Factors Healthc. 2022;2 [Google Scholar]
- 71.Giuste F., Shi W., Zhu Y., Naren T., Isgut M., Sha Y., Tong L., Gupte M., Wang M.D. Explainable artificial intelligence methods in combating pandemics: a systematic review. IEEE Rev. Biomed. Eng. 2022 doi: 10.1109/RBME.2022.3185953. [DOI] [PubMed] [Google Scholar]
- 72.Reddy S., Allan S., Coghlan S., Cooper P. A governance model for the application of ai in health care. J. Am. Med. Inform. Assoc. 2020;27:491–497. doi: 10.1093/jamia/ocz192. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Marwaha J.S., Landman A.B., Brat G.A., Dunn T., Gordon W.J. Deploying digital health tools within large, complex health systems: key considerations for adoption and implementation. npj Digit. Med. 2022;5:13. doi: 10.1038/s41746-022-00557-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 74.House of Lords Select Committee on Artificial Intelligence. House of Lords; 2018. AI in the UK: Ready, Willing and Able; p. 36. [Google Scholar]
- 75.Forcier M.B., Gallois H., Mullan S., Joly Y. Integrating artificial intelligence into health care through data access: can the gdpr act as a beacon for policymakers? J. Law Biosci. 2019;6:317–335. doi: 10.1093/jlb/lsz013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 76.European Commission . European Commission; Brussels, Belgium: 2021. Proposal for a Regulation Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) p. 206. COM (2021) 206 final. [Google Scholar]
- 77.European Commission . 2021. Commission Staff Working Document, Impact Assessment, Accompanying the Proposal for a Regulation Laying down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) p. 1. SWD (2021) 84 final, Part 1/2. [Google Scholar]
- 78. Schaake M. The European Commission's Artificial Intelligence Act. Stanford University Human-Centered Artificial Intelligence (HAI); Stanford, CA: June 2021.
- 79.Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on Medical Devices, Amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and Repealing Council Directives 90/385/EEC and 93/42/EEC.
- 80.Council Directives 90/385/EEC and 93/42/EEC.
- 81. Habli I., Lawton T., Porter Z. Artificial intelligence in health care: accountability and safety. Bull. World Health Organ. 2020;98:251. doi: 10.2471/BLT.19.237487.
- 82. Martelli N., Eskenazy D., Déan C., Pineau J., Prognon P., Chatellier G., Sapoval M., Pellerin O. New European regulation for medical devices: what is changing? Cardiovasc. Interv. Radiol. 2019;42:1272–1278. doi: 10.1007/s00270-019-02247-0.
- 83. Migliore A. On the new regulation of medical devices in Europe. Expert Rev. Med. Devices. 2017;14:921–923. doi: 10.1080/17434440.2017.1407648.
- 84. Goold S.D., Lipkin M. Jr. The doctor–patient relationship: challenges, opportunities, and strategies. J. Gen. Intern. Med. 1999;14:S26. doi: 10.1046/j.1525-1497.1999.00267.x.
- 85. Floridi L., Cowls J. A unified framework of five principles for AI in society. In: Machine Learning and the City: Applications in Architecture and Urban Design. 2022. pp. 535–545.
- 86. Lupton D. M-health and health promotion: the digital cyborg and surveillance society. Soc. Theory Health. 2012;10:229–244.
- 87. Coeckelbergh M. E-care as craftsmanship: virtuous work, skilled engagement, and information technology in health care. Med. Health Care Philos. 2013;16:807–816. doi: 10.1007/s11019-013-9463-7.
- 88. Kluttz D.N., Mulligan D.K. Automated decision support technologies and the legal profession. Berkeley Technol. Law J. 2019;34:853–890.
- 89. Council of Europe. Responsibility and AI. 2019. https://rm.coe.int/responsability-and-ai-en/168097d9c5
- 90. Topol E.J. High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 2019;25:44–56. doi: 10.1038/s41591-018-0300-7.
- 91. The Swedish National Council on Medical Ethics (Smer). In brief – artificial intelligence in healthcare. 2020. https://smer.se/wp-content/uploads/2020/06/smer-2020-2-in-brief-artificial-intelligence-in-healthcare.pdf
- 92. Yeung K. A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. 2018. MSI-AUT(2018)05.
- 93. Grote T., Berens P. On the ethics of algorithmic decision-making in healthcare. J. Med. Ethics. 2020;46:205–211. doi: 10.1136/medethics-2019-105586.
- 94. Ross J., Webb C., Rahman F. Academy of Medical Royal Colleges; London: 2019. Artificial Intelligence in Healthcare.
- 95. Braun M., Hummel P., Beck S., Dabrock P. Primer on an ethics of AI-based decision support systems in the clinic. J. Med. Ethics. 2021;47:e3. doi: 10.1136/medethics-2019-105860.
- 96. Babic B., Gerke S., Evgeniou T., Cohen I.G. Beware explanations from AI in health care. Science. 2021;373:284–286. doi: 10.1126/science.abg1834.
- 97. Durán J.M. Springer; 2018. Computer Simulations in Science and Engineering.
- 98. Humphreys P. The philosophical novelty of computer simulation methods. Synthese. 2009;169:615–626.
- 99. World Health Organization; Geneva: 2016. Framework on Integrated, People-Centred Health Services.
- 100. Layman E.J. Ethical issues and the electronic health record. Health Care Manag. 2008;27:165–176. doi: 10.1097/01.HCM.0000285044.19666.a8.
- 101. De Lusignan S., Liaw S.-T., Krause P., Curcin V., Vicente M.T., Michalakidis G., Agreus L., Leysen P., Shaw N., Mendis K. Key concepts to assess the readiness of data for international research: data quality, lineage and provenance, extraction and processing errors, traceability, and curation. Yearb. Med. Inform. 2011;20:112–120.
- 102. Liaw S.-T., Powell-Davies G., Pearce C., Britt H., McGlynn L., Harris M.F. Optimising the use of observational electronic health record data: current issues, evolving opportunities, strategies and scope for collaboration. Aust. Fam. Phys. 2016;45:153–156.
- 103. Wilkinson M.D., Dumontier M., Aalbersberg I.J., Appleton G., Axton M., Baak A., Blomberg N., Boiten J.-W., da Silva Santos L.B., Bourne P.E., et al. The FAIR guiding principles for scientific data management and stewardship. Sci. Data. 2016;3:1–9. doi: 10.1038/sdata.2016.18.
- 104. Benchoufi M., Ravaud P. Blockchain technology for improving clinical research quality. Trials. 2017;18:1–5. doi: 10.1186/s13063-017-2035-z.
- 105. Prasser F., Spengler H., Bild R., Eicher J., Kuhn K.A. Privacy-enhancing ETL-processes for biomedical data. Int. J. Med. Inform. 2019;126:72–81. doi: 10.1016/j.ijmedinf.2019.03.006.
- 106. Culnane C., Rubinstein B.I., Teague V. Health data in an open world. arXiv preprint arXiv:1712.05627; 2017.
- 107. Guo G.N., Jonnagaddala J., Farshid S., Huser V., Reich C., Liaw S.-T. Comparison of the cohort selection performance of Australian medicines terminology to anatomical therapeutic chemical mappings. J. Am. Med. Inform. Assoc. 2019;26:1237–1246. doi: 10.1093/jamia/ocz143.
- 108. Liaw S.-T., Liyanage H., Kuziemsky C., Terry A.L., Schreiber R., Jonnagaddala J., de Lusignan S. Ethical use of electronic health record data and artificial intelligence: recommendations of the primary care informatics working group of the international medical informatics association. Yearb. Med. Inform. 2020;29. doi: 10.1055/s-0040-1701980.
- 109. Lin S.Y., Mahoney M.R., Sinsky C.A. Ten ways artificial intelligence will transform primary care. J. Gen. Intern. Med. 2019;34:1626–1630. doi: 10.1007/s11606-019-05035-1.