2023 Jul 6;6:120. doi: 10.1038/s41746-023-00873-0

Table 3.

A list of regulatory challenges related to the rise of LLMs.

Regulatory challenge | Short description
Patient Data Privacy | Ensuring that patient data used for training large language models are fully anonymized and protected from potential breaches. This poses a significant regulatory challenge, as any violation could lead to serious consequences under privacy laws such as HIPAA in the US.
Intellectual Property | If an LLM generates content similar to proprietary medical research or literature, it could raise issues regarding intellectual property rights.
Medical Malpractice Liability | Determining who is responsible when an AI's recommendations lead to patient harm: the AI developers, the healthcare professionals who used it, or the institutions that adopted it?
Quality Control & Standardization | Regulation is required to ensure the reliability and consistency of AI-generated medical advice, which can vary based on the data used to train the AI.
Informed Consent | Patients need to be informed and give consent when AI tools are used in their healthcare management. This is challenging because it can be difficult for patients to fully understand the implications of AI use.
Interpretability & Transparency | Regulations need to ensure transparency about how decisions are made by the AI. This is particularly challenging with AI models that are often termed "black boxes" because of their complex algorithms.
Fairness and Bias | Regulation is needed to prevent biases in AI models, which could be introduced during training on patient data and can lead to disparities in healthcare outcomes.
Data Ownership | It can be challenging to define and regulate who owns the data that large language models learn from, especially when it comes to patient data.
Over-reliance on AI Models | Over-reliance on AI could lead to decreased human expertise and potential errors if the AI malfunctions or provides incorrect information. Regulations are needed to balance the use of AI and human expertise.
Continuous Monitoring & Validation | Ensuring the continuous performance, accuracy, and validity of AI tools over time and across different populations is a critical regulatory challenge.
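The last row, Continuous Monitoring & Validation, is the one challenge in the table that maps directly onto a routine engineering practice: tracking a deployed model's accuracy per population and flagging degradation for regulatory re-review. A minimal sketch of that idea follows; the function names, cohort labels, and the 5% tolerance are illustrative assumptions, not anything specified in the article.

```python
# Illustrative sketch only: monitoring model accuracy across patient
# subgroups and flagging drift from a validated baseline. All names
# and thresholds here are hypothetical, not from the source article.

def subgroup_accuracy(labels, preds, groups):
    """Return accuracy per subgroup (e.g., per demographic cohort)."""
    totals, correct = {}, {}
    for y, p, g in zip(labels, preds, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + int(y == p)
    return {g: correct[g] / totals[g] for g in totals}

def drift_alerts(baseline, current, tolerance=0.05):
    """List subgroups whose accuracy fell more than `tolerance`
    below the validated baseline -- a possible trigger for re-review."""
    return [g for g, acc in current.items()
            if baseline.get(g, 1.0) - acc > tolerance]

# Example: subgroup "B" has dropped well below its validated baseline.
acc = subgroup_accuracy([1, 0, 1, 1], [1, 0, 0, 1], ["A", "A", "B", "B"])
alerts = drift_alerts({"A": 0.95, "B": 0.60}, acc)
```

In practice a regulator-facing version of this check would also need statistical uncertainty on each subgroup estimate and a documented escalation path, but the core loop, measure per population, compare to the validated baseline, alert on degradation, is what the table's final row is asking for.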