EJVES Vascular Forum
Letter. 2023 Oct 31;61:1. doi: 10.1016/j.ejvsvf.2023.10.003

Ethical Concerns Regarding the Use of Large Language Models in Healthcare

Fabien Lareyre 1,2, Juliette Raffort 3,4,5
PMCID: PMC10679759  PMID: 38025830

Large language models (LLMs) have brought new perspectives to healthcare but also present risks and pitfalls. We thank Daungsupawong et al.1 for their comments following the publication of our comprehensive literature review of Natural Language Processing in vascular surgery.2

LLMs offer promising applications in patient care (by providing medical knowledge and patient empowerment, and assistance with writing, translation, and summarisation), education (with interactive learning and opportunities to develop personalised education), and research (by facilitating access to scientific knowledge, science communication, and the production of scientific content).3

Nevertheless, the field is in its infancy, and we completely agree that clinicians, patients, and society should be very cautious and aware of the limitations and risks of LLMs.4 While LLMs reproduce some of the characteristics of human language, it is important to keep in mind that they do not comprehend the language they are dealing with, neither the input data (used for training) nor the output data (responses generated).4,5 As LLMs are dependent on the data used for their training, they can be biased by misinformation, errors, or outdated information in the training dataset.4,5 The models have no self-assessment of the generated content and, therefore, no control over whether the information is true or accurate. There is thus a critical lack of accountability. Finally, as LLMs are probabilistic algorithms, they might not provide the same answer to the same task when the question is repeated multiple times, making it extremely challenging to evaluate their reliability and reproducibility.4,5 Like other AI driven applications, LLMs raise major ethical and legal concerns regarding their use in healthcare. These include questions related to health data protection, equity and fairness, safety and security, transparency, responsibility and accountability, clinical benefits and costs, and acceptability, perception, and integration by patients and health professionals.6

Methods for evaluating LLMs in the real world remain unclear, and there is a critical need to build guidelines and recommendations. Specific standards to assess the accuracy and quality of AI applications in healthcare are currently being developed,7 and it would be of great interest to build specific guidelines for LLMs to help evaluate their potential benefits and risks before their implementation in clinical practice. As highlighted by Shah et al., health professionals cannot step aside but should be proactive in ensuring that AI driven innovations augment human expertise without replacing it, with the aim of improving the care provided to patients.8

Acknowledgements

This work was supported by the French government through the National Research Agency (ANR) under reference number ANR-22-CE45-0023-01 and through the 3IA Côte d’Azur Investments in the Future project under reference number ANR-19-P3IA-0002.

References

1. Daungsupawong H., Wiwanitkit V. Natural Language Processing in vascular surgery. EJVES Vasc Forum. 2023. doi: 10.1016/j.ejvsvf.2023.10.004. [Epub ahead of print]
2. Lareyre F., Nasr B., Chaudhuri A., Di Lorenzo G., Carlier M., Raffort J. Comprehensive review of Natural Language Processing (NLP) in vascular surgery. EJVES Vasc Forum. 2023;60:57–63. doi: 10.1016/j.ejvsvf.2023.09.002.
3. Clusmann J., Kolbinger F.R., Muti H.S., Carrero Z.I., Eckardt J.N., Laleh N.G., et al. The future landscape of large language models in medicine. Commun Med (Lond). 2023;3:141. doi: 10.1038/s43856-023-00370-1.
4. Harrer S. Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. EBioMedicine. 2023;90. doi: 10.1016/j.ebiom.2023.104512.
5. Li H., Moon J.T., Purkayastha S., Celi L.A., Trivedi H., Gichoya J.W. Ethics of large language models in medicine and medical research. Lancet Digit Health. 2023;5:e333. doi: 10.1016/S2589-7500(23)00083-3.
6. Lareyre F., Maresch M., Chaudhuri A., Raffort J. Ethics and legal framework for trustworthy artificial intelligence in vascular surgery. EJVES Vasc Forum. 2023;60:42–44. doi: 10.1016/j.ejvsvf.2023.08.003.
7. Lareyre F., Wanhainen A., Raffort J. Artificial intelligence-powered technologies for the management of vascular diseases: building guidelines and moving forward evidence generation. J Endovasc Ther. 2023. doi: 10.1177/15266028231187599.
8. Shah N.H., Entwistle D., Pfeffer M.A. Creation and adoption of large language models in medicine. JAMA. 2023;330:866–869. doi: 10.1001/jama.2023.14217.
