Biomedical Instrumentation & Technology. 2024;58(2):39–42. doi: 10.2345/0899-8205-58.2.39

Liability Exposure of Clinicians in Artificial Intelligence–Driven Healthcare

Brian Lee, Rotem Naftalovich, Saad Ali, Faraz A. Chaudhry, George L. Tewfik
PMCID: PMC10987009  PMID: 38564605

In recent years, artificial intelligence (AI)/machine learning (ML)-enabled medical devices have become increasingly common in the healthcare industry. More than 500 medical devices using ML are currently approved for use by the Food and Drug Administration (FDA), and providers inevitably will encounter more of these devices in the clinical setting.1 Each of these medical devices uses its own ML algorithm, a model believed to be clinically useful. Nevertheless, as the aphorism attributed to the statistician George Box reminds us, “all models are wrong,” and these devices are not error proof.2

The ML component of AI uses software to process large data sets from which it draws predictions. Two general categories of techniques are used in ML: supervised learning and unsupervised learning.3 Supervised learning trains the software on known input and output data so that it generates reasonable predictions for the response to new data, while unsupervised learning allows the software to find hidden patterns or intrinsic structure in input data without labeled responses.3

Supervised learning thus can predict discrete responses (classification), such as whether a tumor is cancerous or benign, or continuous responses (regression), such as a patient’s temperature.4 Unsupervised learning allows for identification of hidden patterns or subgroups in data, as defined by multiple characteristics (clustering).4 For example, among patients with depression, subgroups defined by age of onset (early versus late) and symptom severity (mild, moderate, or major depressive disorder) can be identified using this technique.
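As a rough illustration of the distinction, the following Python sketch trains a supervised classifier on labeled examples and then applies an unsupervised clustering algorithm to unlabeled records. It assumes scikit-learn and uses entirely synthetic data; it is not drawn from any device discussed in this article.

# Illustrative sketch (synthetic data, scikit-learn assumed installed):
# supervised classification versus unsupervised clustering.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Supervised learning: features (e.g., tumor measurements) with known labels
# (0 = benign, 1 = cancerous) are used to train a classifier that can then
# predict the label of a new, unseen case.
X_labeled = rng.normal(size=(200, 3))
y_labeled = (X_labeled[:, 0] + X_labeled[:, 1] > 0).astype(int)
classifier = LogisticRegression().fit(X_labeled, y_labeled)
new_case = rng.normal(size=(1, 3))
print("Predicted class for new case:", classifier.predict(new_case))

# Unsupervised learning: no labels are provided; the algorithm groups records
# (e.g., patients described by age of onset and symptom severity scores)
# into clusters based on intrinsic structure in the data.
X_unlabeled = rng.normal(size=(200, 2))
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_unlabeled)
print("Cluster assignments for first five patients:", clusters[:5])

In the supervised case, the labels define what the model should predict; in the unsupervised case, the number and meaning of the clusters must be interpreted after the fact.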

Applications of AI utilizing ML thus range from low-risk adjunct monitoring of patients in a postanesthesia care unit to more complex uses (e.g., a pneumothorax detection tool5). However, it is important to appreciate that although locked software does not evolve on its own, adaptive (continuously learning) software keeps updating through the use of its ML algorithms; in this sense, it is evolving. Accordingly, these devices do not represent static products. Their dynamic nature means that their limitations may not be evident at the time of a single review, and hence a unique element of uncertainty is introduced.

In January 2021, the FDA issued the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, which highlighted strategies to better regulate software that learns over time.6 The action plan called for improvements to the predetermined change control plan (PCCP), which consists of SaMD prespecifications describing which aspects of the software algorithm can learn and change, and an algorithm change protocol describing how the algorithm will learn and change while remaining safe and effective.6,7 The action plan also called for development of good ML practice principles, improvements in transparency to users, better detection of algorithm bias, and improvements to real-world performance monitoring requirements.6,7

In April 2023, the FDA issued Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions.8 This draft guidance provides recommendations on the information to be included in a PCCP for AI/ML-enabled devices: a detailed description of modifications, a modification protocol, and an impact assessment (see sections VI through VIII in the draft guidance).

Although this is a step in the right direction, some ML algorithms, often described as “black box” algorithms, are designed with such complexity that their computational processes are methodologically opaque to humans.9 For example, Corti is software that listens to emergency calls for “verbal and non-verbal patterns of communication” to detect early signs of cardiac arrest over the phone. Its algorithms are a “black box” because even Corti’s software developers do not understand how the software reaches the conclusion that someone is having a cardiac arrest.10

Liability

As more AI/ML-enabled devices enter the market, it is important to examine the potential consequences when these devices make mistakes. At the moment, general tort law principles dictate that clinicians who rely in good faith on AI/ML-enabled devices are liable for medical malpractice when these devices give an incorrect treatment recommendation that is not consistent with the standard of care and results in harm to the patient.11 In addition, many diagnostic AI/ML-enabled devices used in radiology and cardiology (the top two medical specialties utilizing AI/ML-enabled devices) tend to have high false-positive rates in order to avoid missing an important diagnosis.12 With the help of ML algorithms, these devices are likely to become more accurate over time, producing fewer mistakes and lowering false-positive rates. However, until the use of AI/ML for treatment recommendations is recognized as the standard of care, the best way for clinicians to minimize the risk of medical malpractice liability is to use these devices as confirmatory tools to assist with decision making.

Mitigating the Liability of Clinicians

Possibilities exist for mitigating the degree of liability of clinicians, namely through product liability claims, corporate negligence claims, or compensation mechanisms.

Product liability claims against the software developers of AI/ML-enabled devices place liability on the manufacturers for product defects. The categories include manufacturing defects, design defects, marketing defects, and breach of warranty, and claims may be filed by either the clinician or the patient.13 However, courts tend to hesitate to apply traditional product liability theories to software developers in healthcare because AI/ML-enabled devices do not yet dictate the standard of care and, at the moment, are considered simply tools to aid clinicians in making decisions.14

Corporate negligence claims against hospitals that acquire AI/ML-enabled devices allege negligent credentialing of the device before acquisition, similar to how a hospital can be liable if one of its employees was not adequately screened before being hired.15 Such claims may be filed by the patient and provide an opportunity to sue the healthcare organization as a whole rather than a single individual.16

Financial compensation commonly is used by U.S. vaccine manufacturers to pay those who have adverse reactions after receiving vaccines.15 Manufacturers of AI/ML-enabled devices may be able to use a similar approach to incentivize the use of their products. However, such an approach may reduce manufacturers’ incentive to ensure their products’ reliability and safety and would do little to ease clinicians’ wariness of the products.

Advice for Clinicians

Mistakes with AI/ML-enabled medical devices can arise from something as simple as a software glitch. They also can occur because the AI algorithms may be trained on data sets that are not large enough to generalize across a variety of populations, introducing bias that detracts from their accuracy and reliability.17 For example, software for detecting cardiovascular disease risk that is trained predominantly on data from white patients can over- or underestimate the risk when applied to patients of other ethnicities/races. Similarly, if a U.S. military health data source is used, a gender bias in the accuracy of the software’s performance can occur because most service members are men.
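The following Python sketch illustrates this mechanism in miniature. It again assumes scikit-learn and uses entirely synthetic data; the two subgroups and their risk relationships are invented for illustration and do not correspond to any real population or device.

# Illustrative sketch (synthetic data only): a model trained mostly on one
# subgroup can perform noticeably worse on an underrepresented subgroup whose
# feature-to-risk relationship differs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_subgroup(n, risk_slope):
    """Generate synthetic (feature, outcome) pairs for one hypothetical
    subgroup, where the feature-to-risk relationship depends on the slope."""
    x = rng.normal(size=(n, 1))
    p = 1 / (1 + np.exp(-risk_slope * x[:, 0]))
    y = rng.binomial(1, p)
    return x, y

# Training data drawn almost entirely from subgroup A.
x_a, y_a = make_subgroup(950, risk_slope=2.0)
x_b, y_b = make_subgroup(50, risk_slope=-1.0)  # different, underrepresented relationship
model = LogisticRegression().fit(np.vstack([x_a, x_b]), np.concatenate([y_a, y_b]))

# Evaluate separately on held-out data from each subgroup; accuracy is
# typically much lower for the underrepresented subgroup B.
for name, slope in [("subgroup A", 2.0), ("subgroup B", -1.0)]:
    x_test, y_test = make_subgroup(500, risk_slope=slope)
    print(name, "accuracy:", round(model.score(x_test, y_test), 2))

The point of the sketch is only that aggregate performance figures can hide subgroup-specific failures, which is why transparency about the composition of training data matters.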

Because training datasets are never unlimited, all AI/ML-enabled medical devices understandably will have some degree of bias. However, a lack of transparency about such limitations can result in a loss of trust. One well-publicized example is IBM’s Watson for Oncology.18 This software was developed to assess patients’ information from medical records and offer the best cancer treatment options. In 2018, the software was criticized for offering “unsafe and incorrect” treatment options. It later emerged that, instead of being trained on real patient data, the algorithm had been trained on only a few “synthetic” cases devised by physicians at Memorial Sloan Kettering Cancer Center.18

To avoid events like this, product labeling should be improved to clearly delineate the training dataset used and to assess potential biases. That way, clinicians can better understand the intended patient demographics, the exact clinical data captured and assessed for each patient, and any intended or unintended biases, and they can mitigate liability arising from use beyond the intended scope. Trust in IBM’s Watson for Oncology diminished after the publicity described above. Transparency appears to be key to the growth of AI in healthcare, fostering trust among software developers, clinicians, and patients.

Conclusion

AI/ML-enabled medical devices impart many benefits, including earlier detection of diseases and increased access to medical care, especially in underserved populations. At the same time, with so many new devices coming on the market, clinicians who encounter these devices need to be certain that device performance can match the standard of care. Such assurance is needed to prevent fear of malpractice liability from curtailing clinician use of these innovative devices.

Providers and hospitals should independently validate the algorithmic results of AI/ML-enabled medical devices before formally acquiring them for use. In addition, providers should clearly delineate the potential risks and benefits of these devices when obtaining informed consent from patients. In that way, clinicians will not hamper AI/ML development and its integration into healthcare but instead will realize the full benefits of present-day innovations.

References

