Introduction
As artificial intelligence (AI) continues to gain traction in the healthcare industry, physicians must navigate the potential benefits and risks of incorporating AI tools into their practices. These tools hold the promise of reducing administrative burdens, addressing provider burnout, and improving patient outcomes. Yet many questions remain about how healthcare providers ought to use AI to realize these benefits while mitigating potential liability. Specifically, this article will: (1) explore the broad continuum of AI use cases in healthcare, (2) clarify how AI intersects with the standard of care, (3) discuss the current enforcement landscape, and (4) offer strategies to mitigate risks. By addressing these topics, we aim to equip physicians with the knowledge needed to make informed decisions about incorporating AI into their practices.
Broad Continuum of AI Use Cases in Healthcare
AI tools are typically created by training machine learning algorithms on large datasets.1 These datasets are often derived from historical information, such as medical records, imaging studies, or other clinical data.2 During this training process, the AI system learns to identify patterns and make predictions or decisions based on the data it has been exposed to. However, the quality and completeness of the training data are critical factors that influence the performance of the AI system. If the data used to train the AI is incomplete or drawn from skewed sources, the resulting tool may produce inequitable outcomes,3 leading to disparities in diagnosis and treatment and, ultimately, to negative patient outcomes.
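To make this concrete, the following minimal sketch (our own illustration with simulated data, not any vendor's actual pipeline) shows how a model trained on data that under-represents one patient subgroup can perform noticeably worse for that subgroup.

```python
# Hypothetical illustration: a classifier trained mostly on one subgroup
# ("group A") performs worse on an under-represented subgroup ("group B").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, shift):
    """Simulate lab values; `shift` crudely stands in for subgroup differences."""
    x = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    y = (x.sum(axis=1) > shift * 3).astype(int)  # simplified "disease" label
    return x, y

# Training data: 90% group A, only 10% group B.
xa, ya = make_patients(900, shift=0.0)
xb, yb = make_patients(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets: accuracy typically drops for group B.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    xt, yt = make_patients(500, shift)
    print(name, "accuracy:", round(model.score(xt, yt), 2))
```

Because the decision boundary that is correct for group B differs from the one the model learns from group A's dominant share of the data, the model misclassifies far more group B patients; real clinical datasets can encode subtler versions of the same skew.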
The integration of AI tools in healthcare presents a spectrum of risks that varies with the tool's use case. Use cases range from administrative applications, to clinician-facing documentation tools, to AI embedded in Software as a Medical Device (SaMD). SaMD is regulated by the Food and Drug Administration (FDA), but risks also arise at the lower end of the continuum. Administrative tools such as billing and scheduling systems, while primarily operational, can still pose significant risks if AI errors lead to incorrect billing or scheduling conflicts. In more complex applications, AI communication and natural language processing tools for scribes and notetaking may introduce the risk of AI hallucinations: the tool may misinterpret or misrepresent spoken information or generate inaccurate content. This risk underscores the critical need for physician oversight; whatever an AI scribe produces, the physician retains the ultimate responsibility to verify and validate AI-generated outputs.
Standard of Care
In medical malpractice cases, liability often hinges on the standard of care, the benchmark against which a healthcare provider's actions are measured. If a physician acts in line with this standard, she is typically not negligent; failing to meet it may result in liability for negligence. Determining the standard of care often depends on the physician's clinical judgment.4 The standard of care regarding AI is nascent, but as AI is implemented more broadly across healthcare, the standard will evolve.
Industry groups, such as the Coalition for Health Artificial Intelligence (CHAI), are working to build industry consensus and establish a standard for providers using AI.5 For example, CHAI has published an Assurance Standards Guide that addresses use cases such as: (1) diagnostic imaging for mammography with AI, (2) claims-based outpatient care management with AI, and (3) EHR query and extraction with generative AI.6
Additionally, CHAI has developed other tools, such as the Responsible AI Guide, a "playbook for the development and deployment of AI in healthcare, providing actionable guidance on ethics and quality assurance,"7 and the Responsible AI Checklist (RAIC), which aids the "development and evaluation of a complete AI solution and system against CHAI standards for trustworthy AI."8 RAIC translates best-practice considerations that meet core ethical and quality principles into detailed yes/no questions, or evaluation criteria, to determine whether best-practice standards are met.9 In the absence of a recognized standard of care, efforts by industry groups like CHAI to establish standards and evaluation tools for AI-driven products will be important to monitor, and adopting them may help mitigate liability risks.
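For illustration only, the sketch below shows one way a yes/no checklist in the spirit of RAIC could be modeled in software; the two criteria are invented placeholders, not actual RAIC items.

```python
# Hypothetical model of a yes/no evaluation checklist (placeholder criteria).
from dataclasses import dataclass

@dataclass
class Criterion:
    question: str  # a yes/no evaluation question
    met: bool      # the evaluator's answer

checklist = [
    Criterion("Was the training data assessed for representativeness?", True),
    Criterion("Is there a documented human-oversight procedure?", False),
]

unmet = [c.question for c in checklist if not c.met]
print(f"{len(checklist) - len(unmet)}/{len(checklist)} criteria met")
for q in unmet:
    print("Needs attention:", q)
```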
Enforcement Landscape
Enforcement actions against healthcare providers using AI tools remain limited, and medical malpractice cases specifically tied to AI use (or non-use) are still in their infancy.10 As AI adoption grows, questions about potential litigation and penalties continue to surface. Existing laws and doctrines will likely adapt to address the unique challenges AI presents in healthcare.11 While AI tools are advancing in the clinical realm,12 there is a lack of standardized guidance on "objectively evaluating AI systems with regards to clinically meaningful performance metrics."13 Industry groups such as CHAI are working to fill this gap.14 Meanwhile, regulators may need to redefine how medical errors, such as diagnostic mistakes, are classified and addressed in the context of AI-driven care.15
Generally, the healthcare industry relies on regulation, medical malpractice litigation, and malpractice insurance to provide remedies when an adverse event occurs. AI, however, presents unique challenges for enforcement and accountability. One major issue is that some AI tools operate as a "black box," making it difficult for humans to understand the reasoning or processes behind the AI's decisions. Additionally, the algorithms underlying AI tools often function in ways that differ from traditional clinical reasoning, and their outputs are not always reproducible, creating significant barriers for both users and regulators seeking to assess their decision-making. As AI adoption continues to grow, these challenges will need to be addressed to ensure transparency, accountability, and equitable outcomes in healthcare.
Enforcement in the healthcare space has generally focused on AI suppliers and/or vendors. For example, Texas Attorney General (AG) Ken Paxton reached a settlement with an AI healthcare technology company over a series of false and misleading statements about the accuracy and safety of its products.16 The investigation found that the company's metrics were likely inaccurate and may have deceived hospitals about the products' accuracy and safety.17 AG Paxton stated that "healthcare entities must consider whether AI products are appropriate and train their employees accordingly."18
Relatedly, an app where users talk to AI-generated chatbots is being sued by parents whose children used the app.19 In one tragic example, a 14-year-old boy killed himself after frequent conversations with a chatbot on the app.20 In these lawsuits, the parents: (1) allege the company knowingly exposed minors to an unsafe product, and (2) demand that the app be taken offline until stronger guardrails protect children.21 Additionally, in some of these lawsuits, "generative AI chatbots have been labeled as psychotherapists, therapists, and psychologists."22 Such representations are false, deceptive, and could result in public harm.23
Notably, "[i]f a human were making such misrepresentations, the state of Texas, through the Texas Behavioral Health Council and the Texas State Board of Examiners of Psychology, would and should use its enforcement authority to enjoin the individual from unlicensed practice and fraudulent behavior."24 Because the chatbots are AI rather than licensed individuals, however, that typical enforcement path does not apply cleanly. Additionally, the American Psychological Association (APA) has asked the Federal Trade Commission (FTC) to investigate chatbots acting as mental health professionals.25 The FDA has also been called upon to "balance consumer safety and industry stability as it considers regulatory guardrails and the processes of machine learning and AI enabled device approval."26
AI tools used to process medical claims may carry the risk of False Claims Act liability. In a recent case, a company paid $145 million to settle allegations that it received unlawful kickbacks for implementing clinical decision support alerts in its electronic health record (EHR) software that ignored medical standards and increased drug prescriptions.27 This situation highlights the potential for AI-driven tools to improperly influence medical decisions.
The risks of using AI necessitate rigorous oversight and validation processes to: (1) ensure compliance, (2) verify accuracy, and (3) guard against potential legal and financial repercussions. This holds true across the healthcare use case continuum, from administrative support to patient care.
States have begun to enact legislation addressing the use of AI. For example, California has enacted a law governing healthcare providers' use of generative AI in written or verbal patient communications,28 and violations could lead to enforcement against providers.29 Utah enacted the Artificial Intelligence Policy Act, which requires disclosures when consumers are interacting with AI systems in healthcare professions (and other regulated occupations).30 Colorado was the first state to enact comprehensive AI legislation, although the law does not take effect until February 1, 2026.31 Many other states have proposed legislation. These efforts reflect a growing recognition of the need to regulate AI in sensitive industries to ensure transparency, accountability, and the protection of consumer and patient rights.
Risk Mitigation Strategies
If a physician is considering using AI in her practice, she should take three steps to mitigate risk: (1) understand how the AI tool(s) work, including appropriate use cases and the data the tool was trained on, (2) examine her insurance policies to determine the extent of coverage for the use of AI tools, and (3) recognize that AI tools require supervision and close monitoring to ensure they are performing as expected, and proceed with caution accordingly.
First, to avoid being misled, a physician should learn how an AI tool works before using it. There are many types of AI. Because AI tools draw on vast amounts of internet data, it is a common mistake to treat them as experts. In reality, large language models (LLMs) are simply good at predicting what text is likely to come next (i.e., pattern recognition). Additionally, LLMs tend to both: (1) generate factually incorrect outputs, and (2) source information non-transparently.32 To mitigate these issues, a physician should receive training to better understand how AI tools operate. For example, healthcare systems could implement continuous education programs that "evolve with technological advancements" and emphasize "the development of diagnostic and decision-making skills" for physicians.33
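To illustrate the "pattern recognition" point, the toy model below (a deliberately simplified bigram predictor of our own, nothing like a production LLM in scale or architecture) picks the next word purely from co-occurrence counts in its training text; its fluent-looking output reflects statistics, not medical knowledge.

```python
# Toy "language model": predict the next word from bigram counts alone.
from collections import Counter, defaultdict

corpus = ("the patient has a fever . the patient has a rash . "
          "the patient has a cough .").split()

# Count which word follows which (a one-word context; real LLMs use far more).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in the training text."""
    options = follows[word]
    return options.most_common(1)[0][0] if options else "?"

print(predict_next("patient"))  # -> "has": a learned pattern, not a diagnosis
```

The same mechanism, scaled up enormously, is why an LLM can produce authoritative-sounding text that is nonetheless unsupported by any verified source.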
Second, the physician (or someone on her behalf) should examine insurance policies to determine the extent of coverage for the use of AI tools, and should double-check that her medical malpractice insurance carrier will cover the use of a cutting-edge tool. Depending on the coverage, several policies may overlap for AI devices, including, for example, medical device insurance, cyber insurance, and medical malpractice insurance. To work through these policies and discern the level(s) of risk, it may be wise to coordinate with an attorney and/or insurance broker.
Third, a physician should proceed with caution when using AI tools, which carry both benefits and drawbacks. In the proper context, AI can improve patient care and reduce physician burnout; specifically, AI tools offer promise in: (1) charting, (2) administrative functions, (3) diagnostics, (4) personalized medicine, (5) augmentation of clinical practice, and more.34 However, AI may not always deliver benefits. For example, if a physician is not "adequately trained" to leverage AI tools, the increased complexity of medical information and cases "could overwhelm providers" and increase physician burnout.35 Even well-regarded health information technology using AI can create problems for physicians. An EHR vendor previously offered a new AI tool to help detect sepsis.36 However, the tool "missed a higher share of true cases and was less timely than other sepsis tools."37 It was hypothesized that the sepsis model relied on, "perhaps unintentionally, clinician suspicion that the patient has sepsis."38 This illustrates that AI can embed bias and underscores that physicians should not over-rely on it. Ultimately, physicians must balance the "benefits of AI with vigilant oversight of its outputs," fostering "a culture of critical engagement to enhance capabilities without compromising care standards."39

Additionally, AI lacks the nuanced clinical judgment and empathy that human practitioners bring to patient care, making it essential for physicians to critically evaluate AI-generated outputs rather than accepting them at face value. Over-reliance on AI could also inadvertently erode clinical skills and decision-making expertise over time. To ensure safe and effective care, AI should be seen as a complementary tool rather than a replacement for the physician's expertise, intuition, and ethical responsibility.
Conclusion
This article highlights the risks and benefits associated with AI in healthcare. Until there are further developments in this space, a physician faces minimal legal risk for declining to use AI. However, AI tools are increasingly being integrated into healthcare systems to meet regulatory standards, enhance early detection and intervention for disease, and improve compliance. As these technologies become more widespread, physicians who do not adopt AI may face difficulties in meeting these emerging standards. Importantly, if the use of AI becomes part of the established standard of care, a physician's decision to avoid such tools could be viewed as a deviation from that standard, potentially raising concerns about her professional judgment.
AI has the potential to reduce administrative burdens and improve patient outcomes. However, before adopting AI tools, physicians must thoroughly understand their benefits and limitations to minimize risks and ensure patient safety. Given the rapidly evolving nature of AI and its associated standards, it is crucial for physicians to stay informed through industry organizations (e.g., CHAI) and consult regulatory professionals to remain aligned with best practices and legal requirements. Staying connected to these resources can help navigate the shifting landscape of AI in healthcare.
Footnotes
Kimberly Chew, JD, Senior Counsel, Kathleen Snyder, JD, Senior Counsel, and Colleen Pert, JD, Associate, work at Husch Blackwell, which represents a full spectrum of healthcare providers and other businesses in developing compliance strategies, preliminary enforcement measures, employment concerns, and litigation matters. The information contained in this article should not be construed as legal advice or a legal opinion on any specific facts or circumstances. The contents are intended for general information purposes only, and readers are encouraged to consult their attorney concerning specific situations and specific legal questions. NOTE: The AI landscape is moving quickly. This content was correct and current as of April 25, 2025.
References
- 1. https://cloud.google.com/learn/artificial-intelligence-vs-machine-learning
- 2. See Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. 2021 Jul;8(2):e188–e194. doi: 10.7861/fhj.2021-0095.
- 3. Hanna MG, Pantanowitz L, Jackson B, Palmer O, Visweswaran S, Pantanowitz J, Deebajah M, Rashidi HH. Ethical and Bias Considerations in Artificial Intelligence/Machine Learning. Mod Pathol. 2025 Mar;38(3):100686. doi: 10.1016/j.modpat.2024.100686.
- 4. Tobia K, Nielsen A, Stremitzer A. When Does Physician Use of AI Increase Liability? J Nucl Med. Jan 2021. https://pmc.ncbi.nlm.nih.gov/articles/PMC8679587/
- 5. https://chai.org/draft-chai-applied-model-card/
- 6. Assurance Standards Guide. CHAI, Draft 3 released for public comment on June 26, 2024. https://chai.org/wp-content/uploads/2024/07/CHAI-Assurance-Standards-Guide-6-26-2024.pdf
- 7. Responsible AI Guide and Checklists. Coalition for Health Artificial Intelligence (CHAI), edited on Feb 26, 2025. https://chai.org/responsible-ai-guide/
- 8. Id.
- 9. Id.
- 10. Mello MM, Guha N. ChatGPT and Physicians’ Malpractice Risk. JAMA Health Forum. May 18, 2023.
- 11. Payne D. Who Pays When AI Steers Your Doctor Wrong? Politico. Mar 24, 2024. https://www.politico.com/news/2024/03/24/who-pays-when-your-doctors-ai-goes-rogue-00148447
- 12. Evans H, Snead D. Understanding the Errors Made by Artificial Intelligence Algorithms in Histopathology in Terms of Patient Impact. NPJ Digit Med. https://pmc.ncbi.nlm.nih.gov/articles/PMC11006652/
- 13. Id.
- 14. https://chai.org/draft-chai-applied-model-card/
- 15. Evans H, Snead D. Understanding the Errors Made by Artificial Intelligence Algorithms in Histopathology in Terms of Patient Impact. NPJ Digit Med. https://pmc.ncbi.nlm.nih.gov/articles/PMC11006652/
- 16. Attorney General Ken Paxton Reaches Settlement in First-of-its-Kind Healthcare Generative AI Investigation. Sep 18, 2024. https://www.texasattorneygeneral.gov/news/releases/attorneygeneral-ken-paxton-reaches-settlement-first-its-kind-healthcare-generative-ai-investigation
- 17. Id.
- 18. Id.
- 19. Tiku N. An AI Companion Suggested He Kill His Parents. Now His Mom is Suing. The Washington Post. Dec 13, 2024. https://www.washingtonpost.com/technology/2024/12/10/character-ai-lawsuit-teen-kill-parents-texas/
- 20. Id.
- 21. Id.
- 22. Letter to the FTC from the APA (Arthur C. Evans, Ph.D.), dated December 20, 2024. https://www.apaservices.org/advocacy/generative-ai-regulation-concern.pdf
- 23. Id.
- 24. Id.
- 25. Id.
- 26. Letter to the FDA (Dr. Califf) from Congress (Rep. Greg Murphy (R-N.C.), a urologist and co-chair of the GOP Doctors Caucus in the House), dated January 3, 2024. https://murphy.house.gov/sites/evo-subsites/murphy.house.gov/files/evo-media-document/20240103145548837.pdf
- 27. See https://www.justice.gov/archives/opa/pr/electronic-health-records-vendor-pay-145-million-resolve-criminal-and-civil-investigations-0
- 28. Cal. Health & Safety Code § 1339.75.
- 29. Id.
- 30. U.C.A. 1953 § 13-11-4.
- 31. https://leg.colorado.gov/bills/sb24-205
- 32. Mello MM, Guha N. ChatGPT and Physicians’ Malpractice Risk. JAMA Health Forum. 2023 May 18. doi: 10.1001/jamahealthforum.2023.1938.
- 33. Pavuluri S, et al. Balancing Act: The Complex Role of Artificial Intelligence in Addressing Burnout and Healthcare Workforce Dynamics. BMJ Health Care Informatics. Aug 24, 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11344516/
- 34. Letter to the FDA (Dr. Califf) from Congress (Rep. Greg Murphy (R-N.C.), a urologist and co-chair of the GOP Doctors Caucus in the House), dated January 3, 2024. https://murphy.house.gov/sites/evo-subsites/murphy.house.gov/files/evo-media-document/20240103145548837.pdf
- 35. Pavuluri S, et al. Balancing Act: The Complex Role of Artificial Intelligence in Addressing Burnout and Healthcare Workforce Dynamics. BMJ Health Care Informatics. Aug 24, 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11344516/
- 36. Diaz N. Accuracy of Epic’s Sepsis Model Faces Scrutiny. Becker’s Hospital Review. Apr 3, 2024. https://www.beckershospitalreview.com/ehrs/accuracy-of-epics-sepsis-model-faces-scrutiny.html
- 37. Id.
- 38. Id.
- 39. Pavuluri S, et al. Balancing Act: The Complex Role of Artificial Intelligence in Addressing Burnout and Healthcare Workforce Dynamics. BMJ Health Care Informatics. Aug 24, 2024. https://pmc.ncbi.nlm.nih.gov/articles/PMC11344516/

