TABLE 3.
A summary of the five main aspects of AIHT’s exceptionalism.
| Aspect | Key considerations (key sub-considerations in italics) | Examples from the reviewed sample |
|---|---|---|
| 1. AIHT’s Distinctive Features from Traditional Health Technologies | AIHTs differ from traditional health technologies in their capacity to learn continuously, their potential for ubiquity throughout the health care system, the opaqueness of their recommendations and the ambiguity of their definition (*Ambiguous Definition of AIHTs*) | Locked AIHTs may become outdated from the moment they are prevented from evolving; locking an AIHT thus risks obsolescence and an increased chance of contextual bias in real-life contexts |
| | Locked algorithms always yield the same result when fed the same data. They are not per se safer and may require new regulatory approvals, though they are easier to assess than unlocked algorithms. Unlocked or adaptive algorithms improve over time, which demands that their safety and security be continually re-evaluated. ‘Lifecycle’ regulation seems key to addressing these concerns, but the burden largely lies on regulators to adjust their assessment of an AIHT in light of evolving evidence, a resource-intensive task that HTA agencies are not yet equipped to conduct (*Locked and Unlocked AIHTs*) | |
| | Algorithms will need to be updated regularly (at high or even prohibitive prices) to keep pace with advances in medical knowledge and access to new datasets, or their use risks becoming malpractice. Updating or replacing an AIHT will involve additional post-acquisition costs for the clinics and hospitals that purchased them. The difficulty of managing the consequences of an outdated algorithm outweighs that of a drug or other health product that must be withdrawn from the market (*The Update Problem*) | |
| 2. Systemic Impacts on Health | AI may have systemic effects that are felt across an entire health care system, or across health care systems in several jurisdictions, initiating extensive and lasting transformations likely to affect all actors working in, using or financing the health system. In addition, AIHTs can have systemic real-world consequences for patients and for non-ill or non-frequent users of the health care system. However, AI will not address everything related to people’s overall well-being (*Disruptive for Both the Healthcare Sector and for Individuals*) | AI’s role in health surveillance, care optimization, prevention, public health, and telemedicine will cause AIHTs to affect non-ill or non-frequent users of the health care system. An AIHT trained on medico-administrative data in a context where physicians have often modified their billing to enter the highest-paying codes for clinical procedures would infer that these codes represent the usual, standard or common practice to be recommended, introducing a bias into the algorithm and leading to a cascade of non-cost-effective recommendations |
| | Mistakes due to AIHTs used in clinical care and within the health care system can widely affect the patient population, making it all the more necessary that all algorithms be submitted to extensive scrutiny. In addition, “tropic effects” (i.e., code-embedded propensities toward certain behaviors or effects) may increase the risk of inappropriate treatment and care, and may result in importing AIHT-fueled standards and practices that are exogenous and non-idiosyncratic to local organizations. Furthermore, the large-scale systematization of certain behaviors may result in significant costs and harms (*Harms, Tropism and Framing Effect*) | |
| | Some authors suggest AIHTs should be regarded as a “health system transformation lever” for improving health care and a key enabler of learning healthcare systems (LHS) (*AI as a Transformation Lever for the Health Sector*) | |
| 3. Increased Expectations | “Automation bias” describes the belief that an AI-generated outcome is inherently better than a human one. This is reinforced by the technological imperative, i.e., the pressure to use a new technology simply because it exists (*Belief that Since a Result Comes from AI it is Better*). These high expectations toward AIHTs underpin the perceived inevitability of AI in health. However, the concept of the AI chasm refers to the phenomenon whereby, although AIHTs are very promising, very few will actually succeed once implemented in clinical settings; recognizing this can help rebalance expectations. HTA agencies have an important role to play in containing this phenomenon (*Inevitability of AI in Healthcare*) | The adoption and impact of AIHTs are unlikely to be uniform or to improve performance in all health care contexts because of the technology’s distinctive features, its systemic effects on health care organizations and the human biases associated with the use of these technologies. AIHTs can significantly affect and expose particularities of the workflow and design of individual hospital systems, causing them not to respond as intended. AIHTs therefore pose great challenges for deciding whether marketing authorization is justified |
| | AI is currently in an era of promises rather than of fulfillment of what is expected from it. The possible consequences of this hype can be very significant, but HTA agencies and regulators have an important role to play (*Navigating the Hype*) | |
| 4. New Ethical, Legal and Social Challenges | AIHTs present new ethical, legal and social challenges in the context of health care delivery, by calling into question the roles of patients, HCPs and decision-makers, and by conflicting with medicine’s ethos of transparency | Patients whose profiles compare well with historical patient data will be the ones benefiting most from AIHTs, calling for caution with regard to patient and disease heterogeneity |
| | Key ethical challenges stemming from AIHTs in care delivery are: AI-fostered potential bias; patient privacy protection; the trust of clinicians and the general public in machine-led medicine; and new health inequalities (*Health Care Delivery*) | Practical and procedural ethical guidance for supporting HTA of AIHTs has not yet been thoroughly defined. For instance, the role of distributive justice in HTA of AIHTs is not well specified |
| | Because AI is unlike most other health technologies, it forces a questioning of the very essence of being human. It also raises new existential questions regarding the role of regulators and public decision-makers. AIHTs’ unparalleled autonomy intensifies ethical and regulatory challenges (*Existential Questions*) | AI-stemmed existential questioning includes the growing reflection among clinicians about the proper role of health care professionals and what it means to be a doctor, a nurse, etc., and, from the patients’ perspective, what it means to be cared for by machines and to feel increasingly like a number in a vast system run by algorithms |
| | AIHTs are often opaque, which poses serious problems for their acceptance, regulation and implementation in the health care system. AI’s benefits for health care will come at the price of raising ethical issues specific to the technology (*Challenging Medical Ethics’ Ethos*) | |
| 5. New Evaluative Constraints | AIHTs raise new evaluative constraints at the technological level due to the data and infrastructure required (*Data-Generated Issues*). New constraints also appear at the clinical level because AIHTs’ performance varies more between the test environment and the real-world context than that of drugs and medical devices (*Real-World Usages and Evidential Issues*) | The adoption and impact of AIHTs are unlikely to be uniform or to improve performance in all health care contexts because of the technology’s distinctive features, its systemic effects on health care organizations and the human biases associated with the use of these technologies. AIHTs therefore pose great challenges for deciding whether marketing authorization is justified, forcing the question of whether marketing authorization at the 10,000-foot level for the product is appropriate and efficient, as opposed to authorization for more specific uses closer to the affected communities and the point of delivery |
| | This high level of complexity requires regulation of AIHTs specifically adapted to that complexity (*Undeveloped Regulatory Infrastructure and Processes*) | |