Artificial intelligence (AI) encompasses computational algorithms that, partially or completely autonomously, perform beneficial tasks usually considered representative of human intelligence. 1 This technology has the potential to reshape healthcare in fundamental ways. From data-driven treatment recommendations and real-time intraprocedural support to outcome prediction, there are vast possibilities for implementing AI in interventional radiology (IR) to help maximize patient care. 2 3 4 5 While there is much enthusiasm for integrating this cutting-edge technology into IR, its use raises many ethical issues, such as questions about data ownership and distribution, culpability in the setting of AI-associated adverse events, and amplification of inequities and bias. This article explores some of these challenges and suggests a framework for navigating them.
Data Ownership, Distribution, and Protection
Personal health information is of great value not only to the patients whose medical care it guides but also to the technology industry. In 2016, IBM spent $2.6 billion to acquire Truven Health Analytics, gaining a bank of millions of health records that could subsequently be monetized through analysis, access, and use. 6 Google similarly signaled the value of such data, paying $2.1 billion in 2019 to acquire Fitbit and its data. 7 Access to these valuable datasets is necessary for developing useful AI models but also raises questions regarding ownership and appropriate use of healthcare datasets.
One set of questions revolves around ownership and sale of healthcare data. Do the healthcare institutions where the care was provided own these data? What about a clinician who aggregates and organizes them into a usable database, and what about the patients themselves? Do they deserve compensation, or at least the ability to opt out of having their data used? Much of the health data utilized for AI is used secondarily; that is, the information has already fulfilled its primary purpose of guiding medical care for that patient. While laws and regulations protect data collected for primary use, there is no clear consensus regarding ownership and sharing of de-identified “secondary use” data for AI systems. 8 In some places, patients control how their sensitive health data are reused. 9 10 In others, this control is superseded by the potential to benefit society at large, or, in the case of radiological data, ownership may even belong to the entity that conducted the imaging. 10
It can be argued that all participants in healthcare systems (patients, providers, institutions, and industry alike) bear some moral responsibility to improve those systems, with patients contributing mainly through the secondary use of their de-identified data to guide discovery, learning, and medical development. 11 To level the playing field and address issues of ownership, it is reasonable to treat secondary use of clinical data for AI as a public good intended to benefit future patients, with for-profit sale and distribution under exclusive arrangements prohibited. 8 In other words, if one considers these valuable anonymized datasets a public good, they may be shared freely without explicit consent but should not be sold. This is distinct from AI algorithms or systems developed from those datasets, which are the intellectual property of their developers and may be sold for profit.
For this premise to work, guidelines would need to be developed and widely accepted by all individuals and entities with access to patient health data, including methods for ensuring accountability; such a framework is further outlined by Larson et al. 8 For example, protecting patients' privacy in how their information is used and distributed is vital for social acceptance of AI. 12 13 With ever-increasing cyberattacks, and even the ability to re-associate de-identified data with their human sources, data protection for AI systems is challenging and constantly evolving. 10 13 14 Rather than reacting to these risks, IRs developing or implementing AI algorithms should proactively ensure that policies and safeguards are in place and routinely reassess them. 15 16 IRs can also advocate for such protections through interactions with industry and organizations as well as through mentorship of trainees.
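As a concrete illustration of one such safeguard, the sketch below blanks a handful of common protected health information (PHI) fields in a DICOM header before an image enters a research dataset. It is a minimal sketch only: it assumes the open-source pydicom library, the tag list and file names are hypothetical, and a production pipeline would follow a complete de-identification profile (e.g., DICOM PS3.15) rather than this short list.

```python
# Minimal DICOM de-identification sketch (illustrative, not exhaustive).
# Assumes the open-source pydicom library; a production pipeline should
# follow a complete profile (e.g., DICOM PS3.15) and be routinely audited.
import pydicom

# Hypothetical subset of identifying attributes; real profiles cover far more.
PHI_KEYWORDS = [
    "PatientName",
    "PatientID",
    "PatientBirthDate",
    "PatientAddress",
    "ReferringPhysicianName",
    "InstitutionName",
]

def deidentify(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for keyword in PHI_KEYWORDS:
        if keyword in ds:          # blank the element only if present
            setattr(ds, keyword, "")
    ds.remove_private_tags()       # private vendor tags often hide PHI
    ds.save_as(out_path)

deidentify("case_0001.dcm", "case_0001_deid.dcm")  # hypothetical file names
```

Blanking obvious header fields does not, by itself, prevent re-identification; burned-in annotations and linkable metadata require their own checks, which is one reason such safeguards need routine reassessment.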
Complications and Culpability
Another set of questions raised by integrating AI systems into clinical practice revolves around the management of adverse events. Who or what is at fault when the use of AI results in patient harm? What are patients and families owed when these adverse events occur? In many respects, it seems reasonable to approach adverse events caused by AI the same way as those caused by any other clinical tool or device. When an adverse event occurs, clinicians and institutions/practices should have mechanisms (e.g., morbidity and mortality conferences, quality review teams) to evaluate the event, determine its root cause, and identify potential means of preventing similar events in the future. Patients and families should be offered the same answers. When a complication occurs, people tend to seek clarity regarding why it happened. 17 18 Transparency and explanation not only allow patients to seek timely care to correct any issues but also rebuild trust and a vital sense of support. 19 Conversely, lack of communication and transparency when complications occur is a common driver of litigation. 20
From a legal perspective, litigation regarding clinical use of an AI system should mirror medical device suits, in which courts differentiate whether the clinician utilizing the technology or the technology's developers are at fault. For example, recent Cook and Bard IVC filter multidistrict litigation has focused on potential deficiencies in device manufacturing, which would place fault with the device companies rather than the clinicians who used them. 21 This is distinct from a clinician who uses an IVC filter in a manner that deviates from practice standards and leads to harm; in that case, the clinician, rather than the manufacturer, would be liable. However, AI systems add two complexities: their inscrutable “black box” nature, which makes identifying the exact sources of their errors more challenging, and the extensive list of actors within the complex AI–health network, which complicates the allocation of responsibility and accountability. 22 23
A related potential issue is that integrating AI systems may further decrease clinician–patient interaction in exchange for increased efficiency. Establishing trust and rapport with patients before a procedure can be invaluable, particularly when complications occur. There may be pressure to use AI systems to automate and streamline tasks such as consent conversations or postoperative questions and check-ins. IRs should be wary of these uses of AI, since the technology lacks the necessary emotional and social intelligence (at least at this time) to account for patients' values, and it is the clinician's ability to elucidate and integrate those values that ultimately separates algorithmic prediction from meaningful intervention. 22 23 24 25
Perpetuating Bias and Inequity
A final set of questions about integrating AI systems into clinical workflows concerns their potential to perpetuate biases that exist within current healthcare data and thereby worsen inequities. 26 AI models are only as comprehensive as the data they are built with, and existing datasets in IR often underrepresent minority and low socioeconomic status patients. 27 Models built on such data will tend to provide recommendations in line with those biases. 5 For example, if socioeconomically disadvantaged patients tend to do worse with treatment than the overall population, AI algorithms may recommend against treating them. 9 Integrating AI systems into clinical workflows also requires initial capital to invest in such a system and local expertise to use it. As with other technological advances, adoption is likely to occur more readily in urban and affluent communities, which may further widen inequalities in the care people receive.
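One simple way to surface the kind of skew described above is to audit a model's performance by patient subgroup rather than reporting a single aggregate metric. The sketch below is a toy illustration using only NumPy and simulated data (all arrays and group labels are hypothetical); a real audit would use clinically meaningful metrics and carefully defined cohorts.

```python
# Toy per-subgroup performance audit with simulated (hypothetical) data.
# A single aggregate accuracy can hide large gaps between patient groups.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated cohort: 85% majority group, 15% underrepresented group.
group = rng.choice(["group_a", "group_b"], size=n, p=[0.85, 0.15])
y_true = rng.integers(0, 2, size=n)

# Simulate a model that errs more often on the underrepresented group.
y_pred = y_true.copy()
flip = (group == "group_b") & (rng.random(n) < 0.30)
y_pred[flip] = 1 - y_pred[flip]

print(f"overall accuracy: {np.mean(y_true == y_pred):.2f}")  # looks reassuring
for g in np.unique(group):
    mask = group == g
    acc = np.mean(y_true[mask] == y_pred[mask])
    print(f"{g}: n={mask.sum()}, accuracy={acc:.2f}")  # until stratified
```

In this toy cohort the overall accuracy looks acceptable while the underrepresented group fares substantially worse, which is exactly the pattern that aggregate reporting conceals.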
Addressing bias in AI is challenging because bias can be introduced at any stage of the AI development pipeline. 26 It can stem from the data used to develop the models, from the way those data are handled, and from the selection of performance evaluation metrics. 26 To mitigate this, IRs involved in future studies can work to ensure that emerging datasets better reflect diverse patient populations and can stratify outcomes where feasible. IRs involved in developing AI systems can also advocate for thoughtful use of such data and systems to avoid introducing additional bias and to ensure wider access to these systems. For example, it may be necessary to over-sample data from underrepresented populations when developing AI algorithms to compensate for their underrepresentation in available data, as sketched below.
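The sketch below illustrates that over-sampling idea in its simplest form: randomly resampling the underrepresented group's records, with replacement, until group sizes match. All arrays here are hypothetical and this is only a sketch of the concept; in practice one might reach for dedicated tooling (e.g., the imbalanced-learn library) and would need to weigh over-sampling against over-fitting to duplicated records.

```python
# Minimal random over-sampling sketch for an underrepresented group.
# Arrays are hypothetical; real pipelines may use dedicated libraries.
import numpy as np

def oversample_group(X, y, group, minority):
    """Resample `minority` rows with replacement until group sizes match."""
    rng = np.random.default_rng(42)
    min_idx = np.flatnonzero(group == minority)
    maj_idx = np.flatnonzero(group != minority)
    deficit = len(maj_idx) - len(min_idx)
    if deficit <= 0:                      # already balanced (or larger)
        return X, y
    extra = rng.choice(min_idx, size=deficit, replace=True)
    keep = np.concatenate([maj_idx, min_idx, extra])
    return X[keep], y[keep]

# Hypothetical training set: 900 majority rows, 100 minority rows.
X = np.random.rand(1000, 5)
y = np.random.randint(0, 2, size=1000)
group = np.array(["majority"] * 900 + ["minority"] * 100)

X_bal, y_bal = oversample_group(X, y, group, minority="minority")
print(X_bal.shape)  # (1800, 5): each group now contributes 900 rows
```

Duplicating records changes only each group's weight during training, not the information available about it, so over-sampling complements, rather than replaces, collecting more representative data.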
Conclusion
Application of AI in IR carries vast potential for enhancing healthcare, but it also raises questions regarding the distribution and sale of the healthcare datasets used to build AI systems, the management of adverse events that occur with the use of AI, and how to keep these systems from exacerbating biases and inequitable care. These questions, and the best ways to navigate them, will continue to evolve as AI systems are integrated. IRs involved in developing and using AI systems can be proactive in considering these issues and in establishing policies and safeguards to ensure that these systems elevate the care we provide rather than undermine it.
Funding Statement
None.
Footnotes
Conflict of Interest: The authors have no relevant disclosures.
References
- 1. Russell S, Norvig P. Artificial Intelligence: A Modern Approach. 2nd ed. Upper Saddle River, NJ: Prentice-Hall; 2003.
- 2. Abajian A, Murali N, Savic LJ. Predicting treatment response to intra-arterial therapies for hepatocellular carcinoma with the use of supervised machine learning—an artificial intelligence concept. J Vasc Interv Radiol. 2018;29(06):850–857. doi: 10.1016/j.jvir.2018.01.769.
- 3. Gurgitano M, Angileri SA, Rodà GM. Interventional radiology ex-machina: impact of artificial intelligence on practice. Radiol Med (Torino). 2021;126(07):998–1006. doi: 10.1007/s11547-021-01351-x.
- 4. von Ende E, Ryan S, Crain MA, Makary MS. Artificial intelligence, augmented reality, and virtual reality advances and applications in interventional radiology. Diagnostics (Basel). 2023;13(05):892. doi: 10.3390/diagnostics13050892.
- 5. Malpani R, Petty CW, Bhatt N, Staib LH, Chapiro J. Use of artificial intelligence in nononcologic interventional radiology: current state and future directions. Dig Dis Interv. 2021;5(04):331–337. doi: 10.1055/s-0041-1726300.
- 6. Mearian L. Yes, Google's using your healthcare data – and it's not alone. Computerworld. Published November 15, 2019. Accessed March 30, 2023 at: https://www.computerworld.com/article/3453818/yes-googles-using-your-healthcare-data-and-its-not-alone.html
- 7. Austin PL. The real reason Google is buying Fitbit. Time. Published November 4, 2019. Accessed March 30, 2023 at: https://time.com/5717726/google-fitbit/
- 8. Larson DB, Magnus DC, Lungren MP, Shah NH, Langlotz CP. Ethics of using and sharing clinical imaging data for artificial intelligence: a proposed framework. Radiology. 2020;295(03):675–682. doi: 10.1148/radiol.2020192536.
- 9. Fernandez-Quilez A. Deep learning in radiology: ethics of data and on the value of algorithm transparency, interpretability and explainability. AI Ethics. 2022;3(01):257–265.
- 10. Brady AP, Neri E. Artificial intelligence in radiology—ethical considerations. Diagnostics (Basel). 2020;10(04):231. doi: 10.3390/diagnostics10040231.
- 11. Faden RR, Kass NE, Goodman SN, Pronovost P, Tunis S, Beauchamp TL. An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics. Hastings Cent Rep. 2013;43(s1):S16–S27.
- 12. Anom BY. Ethics of big data and artificial intelligence in medicine. Ethics Med Public Health. 2020;15:100568.
- 13. Aggarwal R, Farag S, Martin G, Ashrafian H, Darzi A. Patient perceptions on data sharing and applying artificial intelligence to health care data: cross-sectional survey. J Med Internet Res. 2021;23(08):e26162. doi: 10.2196/26162.
- 14. Kruse CS, Frederick B, Jacobson T, Monticone DK. Cybersecurity in healthcare: a systematic review of modern threats and trends. Technol Health Care. 2017;25(01):1–10. doi: 10.3233/THC-161263.
- 15. Geis JR, Brady AP, Wu CC. Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Radiology. 2019;293(02):436–440. doi: 10.1148/radiol.2019191586.
- 16. Bhuyan SS, Kabir UY, Escareno JM. Transforming healthcare cybersecurity from reactive to proactive: current status and future recommendations. J Med Syst. 2020;44(05):98. doi: 10.1007/s10916-019-1507-y.
- 17. Levinson W. Disclosing medical errors to patients: a challenge for health care professionals and institutions. Patient Educ Couns. 2009;76(03):296–299. doi: 10.1016/j.pec.2009.07.018.
- 18. Keller EJ. Reflect and remember: the ethics of complications in interventional radiology. Semin Intervent Radiol. 2019;36(02):104–107. doi: 10.1055/s-0039-1688423.
- 19. O'Connor E, Coates HM, Yardley IE, Wu AW. Disclosure of patient safety incidents: a comprehensive review. Int J Qual Health Care. 2010;22(05):371–379. doi: 10.1093/intqhc/mzq042.
- 20. Phillips-Bute PD. Transparency and disclosure of medical errors: it's the right thing to do, so why the reluctance? Campbell Law Rev. 2013;35(03):333.
- 21. Keller EJ, Vogelzang RL. Providing context: medical device litigation and inferior vena cava filters. Semin Intervent Radiol. 2016;33(02):132–136. doi: 10.1055/s-0036-1581086.
- 22. Morley J, Machado CCV, Burr C. The ethics of AI in health care: a mapping review. Soc Sci Med. 2020;260:113172. doi: 10.1016/j.socscimed.2020.113172.
- 23. Pesapane F, Volonté C, Codari M, Sardanelli F. Artificial intelligence as a medical device in radiology: ethical and regulatory issues in Europe and the United States. Insights Imaging. 2018;9(05):745–753. doi: 10.1007/s13244-018-0645-y.
- 24. Zhang Z, Citardi D, Wang D, Genc Y, Shan J, Fan X. Patients' perceptions of using artificial intelligence (AI)-based technology to comprehend radiology imaging data. Health Informatics J. 2021;27(02):14604582211011215.
- 25. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: humanism and artificial intelligence. JAMA. 2018;319(01):19–20. doi: 10.1001/jama.2017.19198.
- 26. Tejani A. Understanding bias in AI. ACR Bulletin. Published January 23, 2023. Accessed March 1, 2023 at: https://www.acr.org/Practice-Management-Quality-Informatics/ACR-Bulletin/Articles/February-2023/Understanding-Bias-in-AI
- 27. Trivedi PS, Guerra B, Kumar V. Healthcare disparities in interventional radiology. J Vasc Interv Radiol. 2022;33(12):1459–1467. doi: 10.1016/j.jvir.2022.08.026.
