Experimental Biology and Medicine

Editorial. 2024 Feb 4;248(24):2497–2499. doi: 10.1177/15353702241226801

Creative and generative artificial intelligence for personalized medicine and healthcare: Hype, reality, or hyperreality?

Arash Shaban-Nejad 1, Martin Michalowski 2, Simone Bianco 3
PMCID: PMC10854468  PMID: 38311873

Introduction

Creative artificial intelligence (AI) refers to applications of AI techniques that enable computers to perform tasks usually associated with human cognition, creativity, and imagination, such as creating art and music, storytelling, writing, and interactive decision-making. Creative AI platforms often use generative models, such as neural networks and other machine learning techniques, to produce novel, innovative, and meaningful outputs.

Generative artificial intelligence (GenAI) is a subdomain of creative AI that can generate new content, data, information, or insight based on patterns and structures learned from training on very large data sets. Given a prompt, generative AI creates context-appropriate content on its own, often in a way that mimics human creativity. Large language models (LLMs) are a type of generative AI designed specifically for natural language processing (NLP): understanding and generating sequences of contextually relevant text.

Personalized medicine and care have advanced significantly in recent years through the extensive use of AI. 1 LLMs such as OpenAI’s GPT (Generative Pre-trained Transformer) and the ChatGPT series are powerful tools that show promising results in various applications in medicine and healthcare, 2–5 alongside other innovative technologies in AI and precision medicine. 6,7 Applications of GenAI in healthcare include content creation, conversational agents, and text summarization. 8–11

GenAI, including LLMs, has the potential to play a significant role in advancing personalized medicine by improving data analysis, pattern recognition, and prediction. By analyzing very large amounts of multimodal 5,12 and multidimensional clinical and administrative data, including genetic information, electronic health records, socioeconomic determinants of health, and environmental and lifestyle data, GenAI can detect patterns of interest and generate new clinical insights. By revealing patterns and correlations in this integrated data set, GenAI can support understanding the unique attributes of individuals and predicting or inferring possible health outcomes. It can also assist in the early diagnosis of disease by highlighting specific patterns, enabling personalized and targeted treatment plans based on a patient’s specific risk factors.

GenAI can play a role in pharmaceutical drug discovery and development 4 by predicting potential drug candidates from existing biological, genomic, and biomedical data, thereby empowering personalized and customized treatments and interventions for individuals or communities, enhancing health outcomes, and informing assessments of disease susceptibility. It can also assist physicians in clinical decision-making by recommending personalized treatment options. Moreover, GenAI has shown significant potential in educating patients, enabling them to actively participate in their health-care decisions and adhere to personalized treatment plans.

It is important to note that while GenAI holds great promise in personalized medicine, major challenges and limitations must be considered, such as the need for robust and multimodal data sets, the mitigation of exacerbated biases, ethical and legal considerations regarding patient confidentiality and consent, the lack of explainability and interpretability of the models,13,14 and the need for clinical validation and evaluation of AI-generated recommendations, insights, and actions. 15

GenAI relies profoundly on the data it is trained on. If the training data are biased, non-representative, or incomplete, the generated results may also show biases or inaccuracies. 16 Personalized medicine requires diverse and representative data sets to ensure that AI models can cater to different individuals and communities; a model trained on skewed data may not generalize well to under-represented groups. Class imbalance, when sufficient training data are not available for some classes, can also give rise to biased and unfair results. This is especially prevalent for rare diseases, which often have small patient cohorts. 17

Personalized medicine often involves sensitive health data, so maintaining patient privacy and adhering to ethical standards are crucial. GenAI models may inadvertently reveal private information if not designed and implemented with privacy in mind. 18 Maintaining robust security measures and data anonymization is a constant challenge. Moreover, because the task of a GenAI algorithm is usually to generate the next token (i.e. in the case of LLMs, the next word), these models are prone to hallucination, defined as the generation of fabricated content. 19 As one can imagine, this is of particular concern in the biomedical field, especially when patients are involved and their health is at stake.
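The class-imbalance problem described above can be sketched with a minimal, hypothetical baseline: naive random oversampling, which simply duplicates minority-class records until all classes are the same size. This is not the method of any paper cited here (more principled approaches appear later in this issue), just an illustration of the problem and the simplest remedy.

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Duplicate randomly chosen minority-class samples until every
    class matches the size of the largest class (a naive baseline)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        padded = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(padded)
        out_y.extend([y] * target)
    return out_x, out_y

# An invented cohort with 9 "common" cases for every "rare" one.
X = list(range(10))
y = ["common"] * 9 + ["rare"]
Xb, yb = random_oversample(X, y)
print(Counter(yb))  # both classes now contribute 9 samples
```

Duplication balances the class counts but adds no new information about the rare class, which is why the small cohorts typical of rare diseases remain hard to model even after resampling.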

Therefore, the integration of GenAI models into clinical practice requires rigorous validation to ensure their reliability and effectiveness. Limited clinical validation studies may hinder the adoption of these models in real-world health-care settings, and regulatory bodies and health-care professionals must be cautious about relying on AI without convincing evidence of its clinical utility. 20 Another challenge for GenAI is keeping pace with fast-evolving disciplines such as medicine: GenAI models can become outdated quickly and therefore need to constantly incorporate novel medical findings. In addition, many GenAI models are considered “black boxes” because it can be very hard, or in many cases impossible, to interpret how they arrive at a particular decision or recommendation. 20 This is especially important in mission-critical domains such as personalized medicine, where both clinicians and patients need justifications to understand and trust the underlying reasoning behind AI-generated recommendations.

Despite the aforementioned challenges and limitations, GenAI provides great opportunities to advance personalized medicine. As the field continues to evolve, addressing the challenges will be crucial to achieving ethical, responsible, equitable, value-based, and trustworthy AI models in personalized medicine.

Thematic issue on health AI

This thematic issue on Health AI includes various contributions presenting results on theory, methods, systems, and applications of AI in healthcare and biomedicine.

To identify the subpopulation that can benefit most from statin therapy while avoiding its risks, and to choose the optimal initial treatment, Yew et al. 21 proposed a five-step decision-rule pipeline for personalized statin treatment. Thomas et al. 22 proposed a data-driven parameter estimation approach for wearable medical devices to calculate a trust score, a relative measure of trustworthiness when different devices are evaluated under the same test conditions and with the same Bayesian structure. Werner et al. 23 presented a pipeline in which machine learning techniques, including an explainable hierarchical clustering method, are used for automatic identification and evaluation of subtypes of hospital patients and for improved risk prediction, applied to patients admitted to a large UK teaching hospital during 2017–2021.

Alwuthaynani et al. 24 presented research on predicting which patients will progress from mild cognitive impairment (MCI) to Alzheimer’s disease (AD), using a class decomposition transfer learning (CDTL) model to detect Alzheimer’s progression. This model helps clinicians identify individuals at risk for AD and those with the earliest signs of clinical impairment, so that therapeutic or preventive interventions can be implemented at the early stages of disease. Liu et al. 25 constructed and evaluated a deep learning model, using ultrasound images, to accurately differentiate benign and malignant thyroid nodules. Tang et al. 26 proposed pharmacophore-based virtual screening and machine learning for the discovery of subunit-selective GluN1/GluN2B NMDAR antagonists. Yang et al. 27 proposed a quantum-based oversampling method (QOSM) to address the challenges of highly imbalanced and overlapped data sets, which are prevalent in biology and the medical sciences.

Footnotes

Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

1. Shaban-Nejad A, Michalowski M, Bianco S. Artificial intelligence for personalized care, wellness, and longevity research. In: Shaban-Nejad A, Michalowski M, Bianco S (eds) Artificial intelligence for personalized care, wellness, and longevity research (Studies in computational intelligence), vol. 1106. Cham: Springer, 2023, pp.1–9
2. Mesko B. The ChatGPT (generative artificial intelligence) revolution has made artificial intelligence approachable for medical professionals. J Med Internet Res 2023;25:e48392
3. Karabacak M, Ozkara BB, Margetis K, Wintermark M, Bisdas S. The advent of generative language models in medical education. JMIR Med Educ 2023;9:e48163
4. The Lancet Regional Health-Europe. Embracing generative AI in health care. Lancet Reg Health Eur 2023;30:100677
5. Meskó B. The impact of multimodal large language models on health care’s future. J Med Internet Res 2023;25:e52865
6. Shaban-Nejad A, Michalowski M, Bianco S, Brownstein JS, Buckeridge DL, Davis RL. Applied artificial intelligence in healthcare: listening to the winds of change in a post-COVID-19 world. Exp Biol Med 2022;247:1969–71
7. Shaban-Nejad A, Michalowski M, Peek N, Brownstein JS, Buckeridge DL. Seven pillars of precision digital health and medicine. Artif Intell Med 2020;103:101793
8. Lee P, Bubeck S, Petro J. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. N Engl J Med 2023;388:1233–9
9. Singh N, Lawrence K, Richardson S, Mann DM. Centering health equity in large language model deployment. PLoS Digit Health 2023;2:e0000367
10. Patrinos GP, Sarhangi N, Sarrami B, Khodayari N, Larijani B, Hasanzad M. Using ChatGPT to predict the future of personalized medicine. Pharmacogenomics J 2023;23:178–84
11. Temsah MH, Jamal A, Aljamaan F, Al-Tawfiq JA, Al-Eyadhy A. ChatGPT-4 and the global burden of disease study: advancing personalized healthcare through artificial intelligence in clinical and translational medicine. Cureus 2023;15:e39384
12. Shaban-Nejad A, Michalowski M, Bianco S. Multimodal artificial intelligence: next wave of innovation in healthcare and medicine. In: Shaban-Nejad A, Michalowski M, Bianco S (eds) Multimodal AI in healthcare: a paradigm shift in health intelligence (Studies in computational intelligence), vol. 1060. Cham: Springer, 2023
13. Shaban-Nejad A, Michalowski M, Brownstein JS, Buckeridge DL. Guest editorial explainable AI: towards fairness, accountability, transparency and trust in healthcare. IEEE J Biomed Health Inform 2021;25(7):2374–75
14. Shaban-Nejad A, Michalowski M, Buckeridge DL. Explainability and interpretability: keys to deep medicine. In: Shaban-Nejad A, Michalowski M, Buckeridge DL (eds) Explainable AI in healthcare and medicine (Studies in computational intelligence), vol. 914. Cham: Springer, 2021, pp.1–10
15. Goldsack JC, Coravos A, Bakker JP, Bent B, Dowling AV, Fitzer-Attas C, Godfrey A, Godino JG, Gujar N, Izmailova E, Manta C, Peterson B, Vandendriessche B, Wood WA, Wang KW, Dunn J. Verification, analytical validation, and clinical validation (V3): the foundation of determining fit-for-purpose for Biometric Monitoring Technologies (BioMeTs). npj Digit Med 2020;3:55
16. Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, Baabdullah AM, Koohang A, Raghavan V, Ahuja M, Albanna H, Albashrawi MA, Al-Busaidi AS, Balakrishnan J, Barlette Y, Basu S, Bose I, Brooks L, Buhalis D, Carter L, Wright R. Opinion paper: “So what if ChatGPT wrote it?” multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Inf Manag 2023;71:102642
17. Chen RJ, Lu MY, Chen TY, Williamson DFK, Mahmood F. Synthetic data in machine learning for medicine and healthcare. Nat Biomed Eng 2021;5:493–7
18. Wang C, Liu S, Yang H, Guo J, Wu Y, Liu J. Ethical considerations of using ChatGPT in health care. J Med Internet Res 2023;25:e48009
19. Salvagno M, Taccone FS, Gerli AG. Artificial intelligence hallucinations. Crit Care 2023;27:180
20. Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. npj Digit Med 2023;6:120
21. Yew PY, Liang Y, Adam TJ, Wolfson J, Tonellato PJ, Chi CL. Decision rules for personalized statin treatment prescriptions over multi-objectives. Exp Biol Med 2024;248:2526–37
22. Thomas M, Boursalie O, Samavi R, Doyle TE. Data-driven approach to quantify trust in medical devices using Bayesian networks. Exp Biol Med 2024;248:2578–92
23. Werner E, Clark JN, Hepburn A, Bhamber RS, Ambler M, Bourdeaux CP, McWilliams CJ, Santos-Rodriguez R. Explainable hierarchical clustering for patient subtyping and risk prediction. Exp Biol Med 2024;248:2547–59
24. Alwuthaynani MM, Abdallah ZS, Santos-Rodriguez R. A robust class decomposition-based approach for detecting Alzheimer’s progression. Exp Biol Med 2024;248:2514–25
25. Liu Y, Feng Y, Qiana L, Wang Z, Hu XD. Deep learning diagnostic performance and visual insights in differentiating benign and malignant thyroid nodules on ultrasound images. Exp Biol Med 2024;248:2538–46
26. Tang J, Jin J, Huang Z, An F, Huang C, Jiang W. The discovery of subunit-selective GluN1/GluN2B NMDAR antagonists via pharmacophore-based virtual screening and machine learning. Exp Biol Med 2024;248:2560–77
27. Yang B, Tian G, Luttrell J, Gong P, Zhang C. A quantum-based oversampling method for classification of highly imbalanced and overlapped data. Exp Biol Med 2024;248:2500–13

Articles from Experimental Biology and Medicine are provided here courtesy of Frontiers Media SA
