Abstract
Artificial intelligence (AI) has emerged as a promising tool to enhance medical practice and improve patient outcomes. However, introducing AI into interactions between patients, support persons (SPs) and physicians may create real or perceived information asymmetries and may not always be well accepted by end-users. To ensure that AI contributes to patient empowerment rather than undermining it, we need to better understand how AI-based tools affect communication, trust and decision-making in clinical encounters. Research should focus on identifying how AI can support patients’ autonomy, trust and acceptance, how it may strengthen the role of SPs, and how it can promote transparent and ethically sound care. Building on such findings, human-centered design combined with established technology acceptance frameworks (e.g. TAM, UTAUT) will be crucial to guide evidence-based implementation. Only by involving patients, SPs and physicians in AI development can these technologies unfold their full potential to deliver equitable, interpretable and patient-centered healthcare.
The rise of artificial intelligence in healthcare
Machine learning models and big data are making their way not only into everyday life, entertainment, and commerce, but also into healthcare. There has been much optimism about the potential of AI to improve various aspects of care, from diagnosis to treatment planning and patient monitoring [1]. Evidence suggests that AI algorithms perform as well as, or even better than, humans in several tasks, such as medical image analysis or disease prediction [2, 3]. In addition, methods that use AI-derived data, such as wearable devices that detect arrhythmias or chatbots that suggest diagnoses to patients, as well as decision support systems (DSS) [4, 5], are making significant contributions to medical decision-making processes. AI-based systems may also mitigate physicians’ workload [6] by taking over tasks such as transcribing clinical notes, entering and structuring patient data, and even supporting patient diagnosis. Such advances underscore AI’s potential to enhance medical practice and improve patient outcomes [7].
Common medical AI-based systems used to enhance clinical decision-making include clinical decision support systems (CDSS), which often consist of machine learning systems that simulate human reasoning to some extent. These systems can learn from past human behavior and then make statistically grounded suggestions for new cases. Through its precision-driven approach, AI also has the potential to mitigate the adverse effects of pharmaceutical interventions. For example, a mobile AI-based application that monitored and supported medication adherence in real time increased adherence and changed behavior in stroke patients on anticoagulation therapy, particularly in those on direct oral anticoagulants. Without such monitoring, suboptimal adherence had gone undetected, as routine laboratory tests were not reliable indicators of adherence, placing patients at increased risk of stroke and bleeding [9]. This proficiency extends to potential life-saving capabilities, underscoring the vital role AI can play in medical practice.
However, while technical performance of AI is often well documented, we believe that the social, ethical and interpretative challenges remain underexplored. For instance, algorithmic transparency, data privacy and compliance with data protection regulations such as the General Data Protection Regulation (GDPR) are critical to ensuring that AI systems are trustworthy and legally sound [8]. Moreover, interpretability, the ability of AI systems to make decisions that are understandable to physicians, patients and their support persons (SPs), is essential for responsible clinical use.
Empirical gaps in AI-assisted healthcare
AI and communication in triadic encounters: integrating patients and their SPs
Traditionally, patients’ interactions with healthcare professionals and physicians rely on inquiry-based assessments and standard examinations. In recent decades, researchers, patient advocates, and policymakers around the world have intensified efforts to shift healthcare from a paternalistic to a patient-centered approach that focuses on the patient as a person [9]. High-quality healthcare includes shared decision-making as a collaborative process that integrates medical expertise with patients’ needs and values [4]. Factors such as age, gender, cultural beliefs, or changing life situations shape patients’ communication preferences. In this context, AI holds promise to enhance patient empowerment, defined as enabling patients to make informed decisions, express preferences and actively participate in their care. AI-based tools may help patients access understandable health information, visualize treatment options and manage their conditions more effectively. An often-overlooked participant in this interaction is the SP. SPs are often one of the most important sources of information and advice for patients and have been shown to facilitate patient involvement in healthcare decisions [10]. After consulting their SPs, patients often feel more confident about their decisions [11]. Yet their role in physician-patient interactions remains conspicuously understudied [10]. Although some studies have shown that involving SPs in interventions such as discharge planning and medication management can reduce hospital readmissions and improve medication adherence [12, 13], there is a significant lack of research on the role of SPs and on how the patient-SP relationship can be leveraged to increase patient engagement in healthcare decisions, especially when AI-assisted DSS are added into the equation, which may shift communication patterns and decision hierarchies.
Therefore, future research should explore how AI may influence the roles, responsibilities, and communicative dynamics and preferences of patients and their SPs in medical decision-making. Such research may include, but is not limited to, exploring:
How SPs interpret and communicate AI-generated treatment recommendations to patients, and whether this enhances or hinders patient understanding and engagement.
How AI affects the relational balance, for instance, whether SPs feel empowered to advocate for patients’ preferences when AI recommendations are presented.
How triadic trust is distributed among patients, SPs, physicians, and AI-based systems, and how this influences acceptance and satisfaction with AI-assisted decisions.
Addressing these questions will help ensure that AI systems are designed and implemented in ways that reinforce, rather than erode, supportive communication structures in healthcare. Understanding SPs’ perspectives and needs in the context of AI-assisted care is indispensable for building genuinely human-centered and empowering AI systems in healthcare.
Trust, acceptance, emotional and ethical challenges of end-users in AI-assisted medical interactions
Although initial research on the impact of AI on medical encounters between patients, SPs, and physicians suggests that AI could become a valuable tool to improve communication and decision-making [1], empirical data on its real-world impact remain scarce [14]. Ongoing research has mostly focused on technical performance rather than the interactional dimensions of AI use; comparatively little attention has been paid to the impact on patient-SP-physician interactions and to the regulatory issues arising with the introduction of AI in healthcare [4]. Understanding how AI shapes trust and autonomy, and prioritizing the needs and preferences of all stakeholders interacting with or affected by AI, such as physicians, patients, and SPs, should be a directive for future research [15]. This would help provide evidence-based guidelines on how to use AI within medical encounters, enhance patient-centered communication, and prevent new forms of dependency or exclusion [16].
Moreover, because the acceptance of AI is a decisive factor for successful implementation, understanding the barriers and facilitators that influence patients’ acceptance of AI in medical settings is indispensable [9, 10]. Patients may be reluctant to accept the use of AI due to so-called “uniqueness neglect”, a concern that AI-based providers, or the AI-based devices they apply, may be less able than human providers to account for patients’ unique characteristics [11]. To facilitate the involvement of AI in clinical decision-making, frameworks such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) provide structured approaches to assess end-users’ attitudes, perceived usefulness and trust towards AI in healthcare [12, 13]. Haan et al. have observed that additional patient education regarding AI might be necessary to enhance patients’ willingness to accept AI and to provide valuable input on the optimal utilization of AI systems [14]. Despite the optimism surrounding AI’s potential to improve medical prognosis, diagnosis, and decision-making, more empirical evidence is needed to understand patients’, SPs’ and physicians’ perspectives and expectations regarding the influence of AI on their interactions. Such evidence may also help evaluate and continuously refine existing and innovative AI technologies to advance healthcare. Finally, addressing ethical and legal considerations, including bias mitigation, fairness and data governance, will be critical to building sustainable, patient-trusted AI systems. Building on these considerations, future research should address the following directions to ensure that AI integration aligns with human values and patient-centered principles:
How AI-assisted care influences communication, shared decision-making, and the prioritization of patients’ and SPs’ preferences in medical encounters.
What factors shape patients’, SPs’, and physicians’ trust in, acceptance of, and willingness to use AI in healthcare, including concerns about personalization and “uniqueness neglect”.
How ethical, legal, and practical challenges, including bias, fairness, data governance, and potential dependency, may be addressed to ensure patient-centered, equitable, and sustainable AI integration in medical practice.
Human-centered design and co-development: closing the gap between technology and care
Only a few AI-based systems have successfully transitioned from the research laboratory to practical medical applications. One reason is the lack of human-centered design during development, resulting in tools that do not align with the realities and values of end-users [17]. When focusing on human-centered AI in development processes, the goal should be to design AI systems that align with human values, understand human context, and enhance human experiences [15], preserving the “human touch” [10, 18]. Effective integration of AI requires more than technical accuracy. It demands end-user acceptance, interpretability, ethical integrity and emotional intelligence in human-machine interactions [19]. Privacy concerns, workflow disruption, and legal liability are further barriers to adoption that must be proactively managed [20]. Human-centered design that involves all stakeholders, such as physicians, healthcare workers, patients and their SPs, in the design, pilot testing, and refinement of AI-assisted tools will be crucial to successfully implementing these interventions.
By generating empirical evidence on how AI affects patient-SP-physician communication and decision-making, we may help improve and adapt medical AI systems to the needs of all stakeholders using a human-centered approach. This evidence will be essential for translating the technical potential of AI into daily medical practice to foster the provision of optimal patient-centered healthcare [16]. To achieve this, future efforts should:
Implement participatory design processes that actively involve all stakeholders throughout the stages of AI development.
Integrate interdisciplinary expertise from behavioral science, ethics and informatics to ensure that AI tools are not only technically sound, but also interpretable, transparent and aligned with human values.
Conclusion
In all, this research agenda addresses (1) communication in AI-assisted triadic encounters, (2) trust, acceptance and ethical-legal frameworks, and (3) participatory co-design, all of which will be essential to ensure that AI-based systems enhance, rather than replace, the human relationships at the core of medical care. Future studies should integrate interdisciplinary approaches and perspectives from the social sciences, ethics and human-computer interaction to capture the complexity of AI in clinical use. By systematically investigating how AI shapes communication, decision-making and trust among end-users such as patients, their SPs and physicians, research may help translate the promise of AI into evidence-based, equitable and genuinely patient-centered care.
Acknowledgements
None.
Author contributions
Z. S. wrote the comment. S. E., R. R., A. B., K. B., M. T., D. S., S. M. and A. H. reviewed it and made slight modifications and additions according to their field of expertise.
Funding
Funded by the German Federal Ministry of Education and Research (grant number: 01GP2202A).
Data availability
No datasets were generated or analysed during the current study.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Given.
Competing interests
The authors declare no competing interests.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Niel O, Bastard P. Artificial intelligence in nephrology: core concepts, clinical applications, and perspectives. Am J Kidney Dis. 2019;74(6):803–10.
- 2. Yasaka K, Abe O. Deep learning and artificial intelligence in radiology: current applications and future directions. PLoS Med. 2018;15(11):e1002707.
- 3. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8.
- 4. Char DS, Shah NH, Magnus D. Implementing machine learning in health care - addressing ethical challenges. N Engl J Med. 2018;378(11):981–3.
- 5. Patel VL, Shortliffe EH, Stefanelli M, Szolovits P, Berthold MR, Bellazzi R, et al. The coming of age of artificial intelligence in medicine. Artif Intell Med. 2009;46(1):5–17.
- 6. Basu K, Sinha R, Ong A, Basu T. Artificial intelligence: how is it changing medical sciences and its future? Indian J Dermatol. 2020;65(5):365–70.
- 7. Yu K-H, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018;2(10):719–31.
- 8. Labovitz DL, Shafner L, Reyes Gil M, Virmani D, Hanina A. Using artificial intelligence to reduce the risk of nonadherence in patients on anticoagulation therapy. Stroke. 2017;48(5):1416–9.
- 9. Lambert SI, Madi M, Sopka S, Lenes A, Stange H, Buszello C-P, et al. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digit Med. 2023;6(1):111.
- 10. Sassi Z, Eickmann S, Roller R, Osmanodja B, Spencker JJ, Ömeroğlu ÖE, et al. Because human interaction still needs to be there – expectations and needs of kidney transplant patients and their support persons regarding AI-based DSS: a qualitative study at a tertiary care center (preprint); 2025.
- 11. Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. J Consum Res. 2019;46(4):629–50.
- 12. Yang HJ, Lee J-H, Lee W. Factors influencing health care technology acceptance in older adults based on the technology acceptance model and the unified theory of acceptance and use of technology: meta-analysis. J Med Internet Res. 2025;27:e65269.
- 13. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. 2003;27(3):425–78.
- 14. Haan M, Ongena YP, Hommes S, Kwee TC, Yakar D. A qualitative study to understand patient perspective on the use of artificial intelligence in radiology. J Am Coll Radiol. 2019;16:1416–9.
- 15. Shneiderman B, Plaisant C. Designing the user interface: strategies for effective human-computer interaction. 4th ed. Boston: Pearson/Addison-Wesley; 2004.
- 16. Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):e271–97.
- 17. Abdul A, Vermeulen J, Wang D, Lim BY, Kankanhalli M. Trends and trajectories for explainable, accountable and intelligible systems. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI 2018), April 21–26, 2018, Montréal, QC, Canada. New York: ACM; 2018. p. 1–18.
- 18. Sassi Z, Eickmann S, Roller R, Osmanodja B, Burchardt A, Hahn M, et al. Enhancing human-AI-interaction in medical decision support in nephrology: kidney-transplant-recipients’ experiences and perceptions when using AI in physician-patient-communication; 2025.
- 19. Yang Q, Banovic N, Zimmerman J. Mapping machine learning advances from HCI research to reveal starting places for design innovation. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI 2018), April 21–26, 2018, Montréal, QC, Canada. New York: ACM; 2018. p. 1–11.
- 20. Torous J, Roberts LW. Needed innovation in digital health and smartphone applications for mental health: transparency and trust. JAMA Psychiatry. 2017;74(5):437–8.