Short abstract
Considering that artificial intelligence (AI) technologies have the potential to change cancer care, this article discusses the AI features of which oncologists should be most aware.
Artificial intelligence (AI) technologies promise to reshape the management, evaluation, and delivery of cancer care. These are specially designed computer programs that can identify meaningful associations within datasets by mimicking higher functions of human intelligence. A number of AI products have already received clearance from the U.S. Food and Drug Administration (FDA), with hundreds more under active development at research institutions worldwide [1]. Their potential within the oncology space is vast, with performance equaling or even exceeding the acumen of seasoned clinicians.
The Next Frontier of Oncology Care
AI applications span the entirety of the oncological spectrum, from screening to treatment (Table 1). For one, the large volume of information produced by next-generation sequencing presents fertile ground for AI techniques. AI's utility as a precision medicine instrument has also been well established, enabling biomarker identification, tumor subtyping, and treatment response prediction [2, 3]. Similar or superior results have been noted in breast, colonic polyp, and lymph node radiography and microscopy, with attendant prognostic value [4, 5]. Application now extends to procedural care; in April 2021, the FDA cleared a product that enables real-time lesion detection during colonoscopy [6].
Table 1.
Selected examples of U.S. Food and Drug Administration–cleared oncology artificial intelligence devices
| Product | Developer | Features | Pathway | Date of first decision |
| --- | --- | --- | --- | --- |
| GI Genius | Cosmo Artificial Intelligence, Ltd. | Real-time lesion identification during colonoscopy | De novo | April 9, 2021 |
| QUIBIM Precision Prostate (Qp-Prostate) | QUIBIM S.L. | Prostate MRI image processing, segmentation | 510(k) | February 4, 2021 |
| Quantib Prostate | Quantib BV | Abnormality detection, PI-RADS scoring of prostate MRI images | 510(k) | October 11, 2020 |
| Koios DS for Breast | Koios Medical, Inc. | Computer-aided detection of ultrasound-identified breast lesions | 510(k) | July 3, 2019 |
| QuantX | Quantitative Insights, Inc. | Computer-aided detection, MRI-guided biopsy assistance of breast lesions | 510(k) | May 17, 2017 |
All data from the U.S. Food and Drug Administration Center for Devices and Radiological Health database.
Abbreviations: MRI, magnetic resonance imaging; PI‐RADS, Prostate Imaging Reporting and Data System.
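To ground the treatment-response prediction use case described above, the sketch below shows what a supervised machine learning workflow of the kind reported in [2, 3] might look like in practice. The synthetic feature matrix, labels, and choice of a random forest are illustrative assumptions for exposition, not the methods of any cited study.

```python
# Minimal sketch: supervised learning for treatment-response prediction.
# The synthetic feature matrix, labels, and random forest are illustrative
# stand-ins, not the methods of any cited study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))  # 500 patients x 40 tumor features
# Synthetic binary "responded to therapy" label driven by a few features
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Discrimination on held-out patients: a typical headline metric in
# clinical AI validation studies.
probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, probs):.2f}")
```

The essential pattern holds regardless of model choice: the algorithm learns associations between tumor features and outcomes from labeled historical cases, and its discrimination is then judged on patients it has never seen.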
Across each of these domains, the quantity of data and the interlinked nonlinear relationships therein naturally lend themselves to AI's pattern recognition capabilities. These capabilities are further enhanced by improved algorithmic techniques in newer generations of AI, including deep learning. Deployment of such tools stands to sharpen diagnostic efficacy, streamline treatment, facilitate patient-centered care, and reduce oncologist workload [7]. For instance, recent studies have demonstrated that newer deep learning techniques in mammography improved radiologists' cancer detection without prolonging reading time (Fig. 1) [4, 8].
Figure 1.
Graphic representation of a mammography artificial intelligence (AI) tool. First, the target population of women eligible for screening mammography is identified. Mammograms are then conducted on this population and digitally processed. The AI tool, “trained” on a data set of true positives and true negatives confirmed by pathology, subsequently digests billions of data points from each mammogram to identify subtle relationships among them. Using deep learning techniques such as neural networks (which mimic the networked structure of the human brain), the AI tool can identify areas suspicious for malignancy based on density, location, shape, and calcifications, among other characteristics, across multiple images. It can then be leveraged to detect lesions in new mammography images, provide a prognostic score, and inform treatment plans in a clinical setting.
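To make the caption's “training” step concrete, below is a minimal sketch of the kind of deep learning component Figure 1 describes: a small convolutional neural network learning to classify mammogram patches as suspicious or benign. The architecture, patch size, and random stand-in data are assumptions chosen for brevity; commercial tools train far larger networks on large sets of pathology-confirmed images.

```python
# Minimal sketch of the deep learning step in Figure 1: a tiny
# convolutional network classifying mammogram patches as suspicious or
# benign. Architecture and data are illustrative stand-ins only.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 1)  # one logit: suspicious vs. benign

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Stand-in batch: 8 grayscale 64x64 patches with pathology-confirmed labels.
patches = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

model = PatchClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # standard loss for binary classification

for step in range(100):  # tiny training loop over the stand-in batch
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
```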
As AI continues to proliferate, oncologists may find themselves involved not only in the end use of these tools but also in their development. However, AI breaks new legal ground, raising unanswered questions of FDA oversight and malpractice liability. This piece sheds light on these issues by discussing the core concepts that bear upon clinical oncology practice.
Pathways to FDA Review: The Chicken or the Egg?
The FDA follows a functional approach to regulating AI as medical devices based on their (a) risk profile and (b) intended clinical use, rather than any AI-specific framework [9]. Devices are sorted into three classes (class I to class III), stratified by the degree of patient risk exposure, with corresponding review pathways differentiated by testing requirements (Table 2). Most class III devices are subject to the strictest level of device review, known as premarket approval. Class I and II submissions are held to one of two somewhat less onerous standards. Low-risk novel products undergo de novo review, whereas those with previously authorized analogs are subject to the premarket notification [also known as the 510(k)] process; this requires a demonstration of “substantial equivalence” to a device already cleared by the FDA. However, this option may be unavailable to cutting-edge AI technologies which, because of their novelty, cannot draw upon previously approved predicates.
Table 2.
Existing FDA review pathways for artificial intelligence tools
| FDA review pathway | Description |
| --- | --- |
| Premarket approval | For novel devices posing the greatest degree of patient risk exposure. Highest level of regulatory scrutiny; default classification for class III medical devices. |
| 510(k) (premarket notification) | For products sharing “substantial equivalence” with a previously authorized predicate device. Manufacturers bear the burden of substantiating this equivalence through performance testing. For class I and II devices. |
| De novo review | For novel class I and II devices. |
| Exemptions | For nondevice software utilized for administrative support, maintenance of a healthy lifestyle, use in electronic health records, storage or display of clinical data, or otherwise authorized by the FDA as exempt from premarket review. |
Abbreviation: FDA, U.S. Food and Drug Administration.
In recent years, the FDA has issued additional clarification concerning its view of AI-enabled imaging software, distinguishing between computer-assisted detection and computer-assisted diagnosis products. It regards the former, detection technologies that “identify, mark, highlight, or…direct attention” to features, as class II devices, while retaining the class III designation for the latter, diagnostic technologies capable of autonomous functioning [10]. Additionally, the 21st Century Cures Act created exemptions to the legal definition of medical “device,” removing from FDA oversight products deployed (a) for administrative support, (b) for maintenance of a healthy lifestyle, (c) in electronic health records, (d) for the storage or display of clinical data, and (e) “unless the function is intended to acquire, process, or analyze a medical image…for the purpose of (i) displaying, analyzing, or printing medical information…(ii) supporting or providing recommendations to a health care professional…and…(iii) enabling such health care professional to independently review the basis for such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient” [11]. In other words, a physician must retain the ability to “fact-check” the AI output.
It would follow that application of AI within these specified contexts falls outside the ambit of FDA review. Unfortunately, this characterization is deceptively simple. For one, some complex algorithms by their very nature do not permit independent review of their output [12]. More broadly, AI exposes the limitations of existing regulatory frameworks: at what point does a continuously learning algorithm accumulate so many modifications that it ceases to share “substantial equivalence” with its FDA-cleared precursor, exceeding the bounds of its original 510(k) clearance? In light of this and similar issues, the FDA seeks to overhaul its approach by constructing an entirely new paradigm through its purpose-built AI/machine learning (AI/ML) Action Plan (Table 3) [13]. It has indicated its preference for a “good machine learning practices” framework (akin to the “good manufacturing practices” required of pharmaceutical manufacturers), focusing on features of the developer rather than the product itself [14]. Developers are to stipulate the range of permissible algorithmic modifications, set parameters for managing these modifications, and disclose these features to the FDA in their submission for premarket review. Although a pilot program has been launched, the initiative remains in its nascency and will likely require Congressional action to scale.
Table 3.
Key objectives of the FDA's AI/ML action plan
| Goal | Mechanism |
| --- | --- |
| Provide a tailored regulatory framework for AI/ML-based devices | Issue a guidance document outlining the content of a complete AI medical device submission and offering examples of permissible modifications. |
| Encourage harmonization of Good Machine Learning Practice development | Engage in key partnerships with certification agencies, such as the International Medical Device Regulators Forum and the International Organization for Standardization. |
| Adopt a patient-centered approach incorporating transparency to users | Hold a public workshop incorporating broad perspectives on data labeling, disclosures, and use standards. |
| Foster regulatory science methods related to algorithm bias and robustness | Initiate partnerships with key research institutions; promote racial and socioeconomic diversity among training data set and user populations. |
| Support voluntary pilot programs to incorporate real-world performance monitoring | Establish a series of pilot programs incorporating all of the above measures; seek real-time performance data sharing with industry participants. |
Abbreviations: AI, artificial intelligence; ML, machine learning.
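To illustrate what the Action Plan's real-world performance monitoring goal might look like in software, the sketch below tracks a deployed model's rolling sensitivity against a predeclared floor and flags drift for human review. The window size and threshold are hypothetical values chosen for exposition; the FDA has not prescribed specific monitoring parameters.

```python
# Hypothetical sketch of real-world performance monitoring: keep a
# rolling window of (prediction, ground truth) pairs and flag the model
# when its sensitivity falls below a predeclared floor. The window size
# and floor are illustrative assumptions, not FDA-specified values.
from collections import deque

class PerformanceMonitor:
    def __init__(self, window: int = 200, sensitivity_floor: float = 0.85):
        self.pairs = deque(maxlen=window)  # recent (predicted, actual) pairs
        self.floor = sensitivity_floor

    def record(self, predicted_positive: bool, truly_positive: bool) -> None:
        self.pairs.append((predicted_positive, truly_positive))

    def sensitivity(self):
        positives = [pred for pred, actual in self.pairs if actual]
        return sum(positives) / len(positives) if positives else None

    def drifted(self) -> bool:
        s = self.sensitivity()
        return s is not None and s < self.floor

monitor = PerformanceMonitor()
monitor.record(predicted_positive=False, truly_positive=True)  # a missed lesion
if monitor.drifted():
    print("Rolling sensitivity below predeclared floor; escalate for review.")
```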
Navigating Liability
In the same vein, AI's novelty, complexity, and intrinsic characteristics strain conventional theories of tort liability. Suppose an oncologist leverages an imaging AI tool to monitor progression of their patient's small cell lung cancer. They rely upon the algorithm's readout, which fails to detect faint new metastases, and discharge their patient home. Prior to the next scheduled follow‐up, the patient rapidly deteriorates and is readmitted in grave condition. Is the oncologist liable for this untoward outcome?
The answer is less than straightforward. Malpractice suits against physicians typically proceed on a theory of negligence. For a plaintiff to successfully recover damages, they must demonstrate (a) a duty of care, (b) a breach of that duty, (c) resultant damages, and (d) a causal link between the breach and the damages. Courts largely defer to the medical community in analyzing prongs (a) and (b), evaluating standards of care established by professional societies, the practices of peer physicians in the surrounding community (the “locality rule”), and institutional and departmental procedures. It is therefore vital that hospitals, clinics, and oncology practices have robust training programs and thoughtfully designed AI use guidelines in place [15]. When a new AI program is adopted, it is the responsibility of the physician and their institution to recognize both its limitations and its specific utility.
Additionally, the adequacy of informed consent bears on judicial treatment of the “duty” and “breach” prongs. Did the oncologist notify the patient of their use of the AI algorithm? If so, how much detail did they provide? This remains a gray area owing to the lack of relevant caselaw; to date, courts have not articulated whether or to what extent AI demands specific patient disclosures [16]. Again, much would likely depend on protocols established by professional societies. In non-AI informed consent lawsuits, courts have traditionally required “those disclosures which a reasonable medical practitioner would make under the same or similar circumstances,” tying the sufficiency of consent to the standard followed by the medical community at large [17].
The issue of (c) damages likely depends upon the acuity of the patient's condition; if the patient bore a terminal prognosis prior to the missed findings, this point would be more difficult to argue compellingly. (d) Causation presents a more complex question. Here, the oncologist appears to have relied upon the AI output to the exclusion of their own manual overread. Given that physicians wield ultimate clinical decision-making authority, there is a strong case to be made for causation. But what if the oncologist had instead disregarded a true positive result flagged by the AI tool, substituting their own incorrect clinical opinion for its readout? Courts have not yet pronounced upon these unique circumstances, given the scarcity of medical AI-related caselaw. However, if AI works its way into oncology standards of care, such scenarios may well serve as liability triggers in the future [18].
Future Direction
Cancer is the leading cause of death in developed countries, and its prevalence is rising as populations age across the world. Improved research and care management techniques are imperative to address the needs of the coming decades. AI technologies hold revolutionary potential for cancer care, owing as much to their extraordinary technical capabilities as to their unique legal risk profile. This piece surveys those features of which oncologists should be apprised. Courts and regulatory bodies have proven slow to adapt, contributing to an ambiguous environment for the appropriate evaluation, clinical deployment, and risk management of these technologies. Policy changes add further uncertainty to an already dynamic environment [19]. Where the law is silent, professional societies can and should step in to fill the void, thereby fostering specialty-wide uniformity alongside patient safety. Over the coming years, as AI technologies are increasingly approved for clinical use, the American Society of Clinical Oncology should articulate clear AI-specific practice guidelines for its constituent physicians, alongside training programs, continued support for research, and policy initiatives.
Disclosures
The authors indicated no financial relationships.
References
1. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: An online database. NPJ Digit Med 2020;3:118.
2. Abajian A, Murali N, Savic LJ et al. Predicting treatment response to intra-arterial therapies for hepatocellular carcinoma with the use of supervised machine learning - An artificial intelligence concept. J Vasc Interv Radiol 2018;29:850-857.e1.
3. Sakellaropoulos T, Vougas K, Narang S et al. A deep learning framework for predicting response to therapy in cancer. Cell Rep 2019;29:3367-3373.e4.
4. Rodríguez-Ruiz A, Krupinski E, Mordang JJ et al. Detection of breast cancer with mammography: Effect of an artificial intelligence support system. Radiology 2019;290:305-314.
5. Bera K, Schalper KA, Rimm DL et al. Artificial intelligence in digital pathology - New tools for diagnosis and precision oncology. Nat Rev Clin Oncol 2019;16:703-715.
6. U.S. Food and Drug Administration. FDA authorizes marketing of first device that uses artificial intelligence to help detect potential signs of colon cancer. 2021. Available at https://www.fda.gov/news-events/press-announcements/fda-authorizes-marketing-first-device-uses-artificial-intelligence-help-detect-potential-signs-colon. Accessed April 1, 2021.
7. Nagy M, Radakovich N, Nazha A. Machine learning in oncology: What should clinicians know? JCO Clin Cancer Inform 2020;4:799-810.
8. McKinney SM, Sieniek M, Godbole V et al. International evaluation of an AI system for breast cancer screening. Nature 2020;577:89-94.
9. Harvey HB, Gowda V. How the FDA regulates AI. Acad Radiol 2020;27:58-61.
10. Radiology devices; Reclassification of medical image analyzers. Fed Reg 2018;83:25598.
11. Regulation of Medical and Certain Decision Support Software, 21 U.S.C. §360j(o) (2016).
12. Price WN. Big data and black-box medical algorithms. Sci Transl Med 2018;10:eaao5333.
13. U.S. Food and Drug Administration. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. Washington, DC: U.S. Food and Drug Administration, 2021. Available at https://www.fda.gov/media/145022/download. Accessed April 1, 2021.
14. Gerke S, Babic B, Evgeniou T et al. The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. NPJ Digit Med 2020;3:53.
15. Harvey HB, Gowda V. Clinical applications of AI in MSK imaging: A liability perspective. Skeletal Radiol 2021 [Epub ahead of print].
16. Cohen IG. Informed consent and medical artificial intelligence: What to tell the patient? Georgetown Law J 2020;108:1425-1469.
17. Natanson v. Kline, 354 P.2d 670 (Kan. 1960).
18. Price WN 2nd, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA 2019;322:1765-1766.
19. Making permanent regulatory flexibilities provided during the COVID-19 public health emergency by exempting certain medical devices from premarket notification requirements; Withdrawal of proposed exemptions. Fed Reg 2021;86:20174.