Abstract
Background
The integration of artificial intelligence (AI) into medical practice has generated both enthusiasm and hesitation among physicians, yet no comprehensive theoretical appraisal has systematically evaluated the paradigms used to explain this phenomenon. This study aims to critically assess existing theories, conceptual models, and frameworks to determine their capacity to explain AI acceptance and resistance among medical doctors.
Methods
A structured critical examination was conducted using the developed T2P2 criteria, which categorize 12 evaluation dimensions into theoretical, technological, professional, and personal groups. A structured paradigm search of research articles indexed in Scopus and Web of Science identified 28 peer-reviewed studies that applied 21 distinct paradigms to examine AI usage by physicians. Each paradigm was then evaluated using a trichotomous rating scale alongside interpretive analysis to assess both quantitative alignment and conceptual coherence.
Results
Among the evaluated paradigms, the Social Cognitive Theory, Self-Determination Theory, and Dual Factor Model consistently met the T2P2 criteria, demonstrating strong explanatory sufficiency across internal psychological mechanisms and external structural factors. Other paradigms, while useful in specific contexts, often exhibited insufficiency in intelligent systems relevance, complementary duality, profession functionality, specialization applicability, pedagogical usability, and healthcare context specificity.
Conclusion
This study clarifies the theoretical robustness of existing paradigms in explaining AI usage among medical doctors, highlighting the need for integrative paradigms that account for both cognitive-motivational and socio-technical factors. The findings inform IS researchers, clinicians, and policymakers on selecting appropriate explanatory paradigms for responsible AI integration in healthcare.
Keywords: IS theories, technology acceptance, technology resistance, artificial intelligence, medical doctors
Introduction
The medical profession relies on human expertise, refined through years of training and experience. With clinical insight and scientific knowledge, licensed medical doctors, whether residents, general practitioners, specialists, or subspecialists, can diagnose complex conditions, plan treatments, and provide high-quality care. Beyond traditional technologies, recent advances in artificial intelligence (AI) such as surgical robotics, expert systems, clinical decision support, large language models, and generative tools are revolutionizing clinical practice by simulating medical expertise through advanced data-driven systems.1–4 Advances in AI support diagnosis, personalize treatment, enhance imaging, and streamline tasks, improving efficiency and patient outcomes.5–7 Despite its promise, AI uptake among physicians remains slow due to various cognitive, motivational, social, and technical challenges that contribute to resistance.4,8 Like other technologies, AI usage may follow a path of acceptance leading to adoption or of resistance leading to rejection. Medical practice, shaped by collaboration, experience, and judgment, can drive either outcome. Accordingly, Information Systems (IS) research focuses on acceptance and resistance, as these behaviors influence all subsequent responses to technology use.9–12 Over the years, researchers have used various theories, models, and frameworks to study technology acceptance and resistance across users and contexts. Common IS theories often examine the behavioral intent behind technology use in medicine.9,13,14 Conceptual models are likewise developed to explain, to predict, to both explain and predict, or to guide design and action.15–17 The abundance and variety of IS paradigms have left medical decision makers uncertain about which are best suited to explain challenges in technology acceptance and resistance.
With the rise of AI, these paradigms offer a scientific approach to evaluating doctors’ behavioral intentions in healthcare settings. Behavioral informatics researchers have classified IS theories according to their primary aim, whether to analyze, explain, predict, or prescribe technology-related behaviors and actions.15,16 Thus, explanatory paradigms, which include theories, conceptual models, and frameworks, are essential for understanding technology use because they provide structured insights into how and why individuals engage with emerging technologies in complex clinical settings. AI applications in healthcare cover diverse tasks, making acceptance and resistance more difficult to capture within a single paradigm. Despite recent advances, there is still a need to simplify the explanation of AI use so healthcare decision makers can adopt more strategic and proactive approaches.
Modeling usage intentions through various paradigms remains an ongoing focus in behavioral and medical informatics. In response, numerous paradigms such as the classic Technology Acceptance Model (TAM) and the derived Unified Theory of Acceptance and Use of Technology (UTAUT) have emerged, each with different strengths, limitations, and levels of applicability. Without systematic evaluation, these paradigms risk becoming outdated, misused, or inadequate for explaining the complexities of emerging technologies like AI.18–21 Without rigorous assessment, AI in healthcare may be deployed without fully addressing workflow, pedagogical, and ethical factors, risking patient safety and care quality.22–24 In both the IS and medical fields, a holistic approach is essential to explain the complex interplay of cognitive-motivational and socio-technical factors, especially when applying evolving technologies like AI in high-stakes healthcare settings.3,4,18,25 Despite growing interest in AI usage, no thorough appraisal has evaluated theories, conceptual models, and frameworks that explain both acceptance and resistance behaviors among medical doctors across specialties. Prior research has focused more on applying paradigms than on critically assessing their assumptions, structures, and explanatory value.3,4,7,12,14,17 This study addresses this gap by systematically appraising paradigms used in empirical IS research on AI in medicine. Based on existing literature, this study examines how well these paradigms explain physicians’ decisions to accept, adopt, resist, abandon, or reject AI. By clarifying their strengths, limitations, and applicability, the appraisal advances IS theory development in areas where current models may fall short.
Identifying the most robust paradigms supports more evidence-based strategies for effective, responsible, and ethically sound AI integration in clinical practice.14,26,27 This study critically examines AI usage across clinical, surgical, research, and administrative contexts to support technology leaders and healthcare policymakers in recognizing AI as a supportive innovation in medicine. Ultimately, through this contribution to behavioral and medical informatics, the expertise of doctors and the advancements brought by AI can be harmonized to elevate the innovative capacity of the medical profession and, in the end, the quality of care delivered to patients.
Methodology
Paradigm search
Technology usage paradigms help explain structural and behavioral relationships. To facilitate proper appraisal, a paradigm search was conducted to identify relevant theories, conceptual models, and frameworks. Once identified, previously published literature can serve as a foundation for recognizing which paradigms were most commonly used, in what contexts, and how they inform the explanation of AI acceptance and resistance among licensed medical doctors, including residents, general practitioners, specialists, and subspecialists. As shown in Table 1, the keyword search focused on healthcare settings with doctors as the end users of AI.1,18,28,29 The AI technology keyword was designed to encompass expected subclasses such as medical imaging AI, explainable AI, generative AI, large language models, virtual health assistants, medical expert systems, natural language processing, robotics, analytics, clinical decision support systems, and other emerging intelligent systems.
Table 1.
Keywords used in the paradigm search.
| Domain | Query keywords |
|---|---|
| Examined technology | Artificial intelligence, AI |
| Healthcare setting | healthcare, health care, hospital, health services, health facilities, medical care, health |
| User group | doctors, physicians, healthcare professionals, residents |
| Behavioral outcomes | technology adoption, technology acceptance, technology resistance, technology rejection |
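For illustration, the domain keywords above can be assembled into a boolean search string. This is a hypothetical sketch of how the Table 1 domains combine (OR within a domain, AND across domains); the exact Scopus and Web of Science query syntax differs.

```python
# Hypothetical boolean query built from the Table 1 keyword domains.
# Keywords are taken verbatim from the table; the assembly logic is illustrative.
DOMAINS = {
    "technology": ["artificial intelligence", "AI"],
    "setting": ["healthcare", "health care", "hospital", "health services",
                "health facilities", "medical care", "health"],
    "users": ["doctors", "physicians", "healthcare professionals", "residents"],
    "outcomes": ["technology adoption", "technology acceptance",
                 "technology resistance", "technology rejection"],
}

def build_query(domains: dict) -> str:
    """OR the keywords within each domain, then AND the domains together."""
    groups = ["(" + " OR ".join(f'"{k}"' for k in kws) + ")"
              for kws in domains.values()]
    return " AND ".join(groups)

query = build_query(DOMAINS)
print(query)
```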
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines provide a methodological framework for evaluating the relevance of published academic literature, which can be used as a reference for identifying paradigms (Supplemental material).9,29,30 Figure 1 shows how PRISMA was used to guide the paradigm search and identify studies applying explanatory paradigms to technology use. Peer-reviewed articles were retrieved using the keywords listed in the Appendix, developed with assistance from the NCKU Kun-Yen Medical Library. Scopus and Web of Science were chosen for their broad coverage and high-quality journals in medical and behavioral informatics.1,5,9,29 The database search was limited to English-language, peer-reviewed journal articles and excluded conference papers. After removing 544 duplicates, 1378 English-language articles remained for screening. In accordance with PRISMA guidelines, studies were included if they were peer-reviewed journal articles that applied paradigms to explain AI usage involving physicians as end users. This refers to studies where doctors directly interacted with the AI system for clinical tasks such as diagnosis, treatment, administration, or decision support. Studies were excluded if they involved non-doctor end users, lacked a theoretical or conceptual framework, had no accessible full text, or examined AI tools without applying a paradigm. Using EndNote 21, title and abstract screening excluded 911 articles without doctors as end users, 409 lacking theories or frameworks, and 7 without full texts, leaving 51 studies. Full-text screening excluded 20 articles with multi-actor perspectives involving patients, nurses, or other healthcare professionals alongside doctors. Two articles were removed for focusing solely on software development from the developer's perspective, and one was excluded for proposing but not testing explanatory paradigms.
This resulted in 28 articles for appraisal, including 23 empirical and 5 review studies. Review studies were included to identify and assess how paradigms were applied in technology use research.
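The screening arithmetic above can be verified with a short tally; the counts are taken directly from the text, and the variable names are illustrative:

```python
# PRISMA-style screening tally using the counts reported above.
records_after_dedup = 1378          # articles remaining after removing 544 duplicates

# Title/abstract screening exclusions
excluded_no_doctor_users = 911      # no doctors as end users
excluded_no_framework = 409         # no theory, model, or framework applied
excluded_no_full_text = 7           # full text not accessible

full_text_pool = records_after_dedup - (
    excluded_no_doctor_users + excluded_no_framework + excluded_no_full_text
)

# Full-text screening exclusions
excluded_multi_actor = 20           # patients/nurses/others alongside doctors
excluded_developer_view = 2         # software development perspective only
excluded_untested_paradigm = 1      # paradigm proposed but not tested

included = full_text_pool - (
    excluded_multi_actor + excluded_developer_view + excluded_untested_paradigm
)
print(full_text_pool, included)  # 51 articles for full-text review, 28 included
```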
Figure 1.
PRISMA flowchart as applied in the paradigm search.
Paradigm identification
Studying AI usage requires an iterative process to organize the complex factors driving acceptance or resistance. Before appraisal, identifying the paradigms used in the literature is essential, as their core concepts and assumptions form the basis for analysis. Paradigm identification systematically catalogs existing paradigms, laying the foundation for understanding the theoretical landscape.15,16 This step ensures that both explicit and implicit paradigms from the search are identified and included for deeper theoretical appraisal.1,15,18 To identify paradigms, each study was reviewed to extract the named theory, conceptual model, or framework, along with its constructs, assumptions, and application in AI-related healthcare. Variants with different names were grouped, while evolving paradigms with added perspectives were distinguished. Paradigm identification for this study was based on the 28 research articles resulting from the paradigm search on how medical doctors use AI. Table 2 shows that 11 studies used the UTAUT in studying the usage of AI among medical doctors. Other explanatory paradigms employed by two or more studies are the Dual Factor Model (DFM), Grounded Theory (GT), the Non-adoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework, TAM, and Technology Resistance Theory (TRT). Various trust-related theories, used in three studies, were aggregated under the label Technology Trust Theories (TTT) for analytical clarity and comparative evaluation, although this is not their official designation.
Meanwhile, a single study each applied the Diffusion of Innovation (DOI) theory, Extended Technology Acceptance Model (E-TAM), Expectancy-Value Theory (EVT), Fit between Individuals, Task and Technology (FITT) framework, Information System Success Model (ISSM), Innovation Resistance Theory (IRT), Social Cognitive Theory (SCT), Self-Determination Theory (SDT), Stimulus-Organism-Response (SOR) framework, Status Quo Bias Theory (SQBT), Theory of Effective Use (TEU), Technology-Organization-Environment (TOE) framework, Theory of Trust and Acceptance of Artificial Intelligence Technology (TrAAIT), and Extended Unified Theory of Acceptance and Use of Technology (UTAUT-2). Collectively, these paradigms represent the diverse theoretical approaches used to study AI usage among doctors and form the basis for this appraisal.
Table 2.
Theories, conceptual model and frameworks of the paradigm search.
| Explanatory paradigms | Count | Studies | Explanatory paradigms | Count | Studies | Explanatory paradigms | Count | Studies |
|---|---|---|---|---|---|---|---|---|
| UTAUT | 11 | 3,4,7,12,14,17,20,31–34 | DOI | 1 | 35 | SDT | 1 | 36 |
| DFM | 7 | 4,12,20,24,37–39 | E-TAM | 1 | 35 | SOR | 1 | 40 |
| TRT | 4 | 8,12,34,41 | EVT | 1 | 42 | SQBT | 1 | 12 |
| TAM | 3 | 41,43,44 | FITT | 1 | 45 | TEU | 1 | 40 |
| TTT | 3 | 3,12,17 | ISSM | 1 | 17 | TOE | 1 | 14 |
| GT | 2 | 46,47 | IRT | 1 | 8 | TrAAIT | 1 | 17 |
| NASSS | 2 | 45,48 | SCT | 1 | 49 | UTAUT-2 | 1 | 42 |
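As a quick consistency check on Table 2, the study counts can be tabulated. Note that the counts sum to more than the 28 included studies, because several studies applied more than one paradigm; the encoding below is only an illustrative restatement of the table:

```python
# Paradigm usage counts from Table 2 (count = number of studies applying each).
paradigm_counts = {
    "UTAUT": 11, "DFM": 7, "TRT": 4, "TAM": 3, "TTT": 3, "GT": 2, "NASSS": 2,
    "DOI": 1, "E-TAM": 1, "EVT": 1, "FITT": 1, "ISSM": 1, "IRT": 1, "SCT": 1,
    "SDT": 1, "SOR": 1, "SQBT": 1, "TEU": 1, "TOE": 1, "TrAAIT": 1, "UTAUT-2": 1,
}

distinct_paradigms = len(paradigm_counts)          # 21 distinct paradigms
paradigm_applications = sum(paradigm_counts.values())
print(distinct_paradigms, paradigm_applications)   # applications exceed the 28 studies
```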
Understanding the factors driving doctors to accept, adopt, resist, or reject AI is essential in both theory and practice. Notably, the most cited paradigms align with earlier studies on how healthcare consumers and professionals respond to AI.9,28,30 TAM, one of the foundational bases for E-TAM, UTAUT, and UTAUT-2, is still in use but is now outperformed by its derivatives in explaining AI usage among medical professionals. While the majority of the paradigms were originally designed to explain technology acceptance and eventual adoption, paradigms like IRT, NASSS, SQBT, and TRT concentrate on explaining technology resistance, abandonment, and rejection. Moreover, the literature distinguishes IRT from TRT, noting that while IRT emphasizes functional and psychological barriers, TRT addresses a broader set of factors, including perceived threats, user characteristics, and organizational dynamics.8,34,50 Offering a more even-handed stance, the DFM, although not named explicitly in some of the seven studies applying it, emerged as the second most pervasively applied paradigm because it examines technology usage through the dual factors of enablers and inhibitors.4,12,20,24,37–39 Although presented as a straightforward enumeration, this summary of theories, conceptual models, and frameworks offers practical value to both IS researchers and healthcare administrators by clarifying how AI is adopted in medicine. Identifying the main paradigms provides a reference for understanding the theoretical foundations of past research.2,9,18 In doing so, it helps future studies by clarifying the factors that influence doctors’ acceptance or resistance to AI in clinical settings.
Paradigm assessment
Building on the identification of the paradigms, the next step examines how well each one can explain technology usage behaviors within the specific context of AI in healthcare. Since every study used at least one paradigm to examine doctors’ acceptance or resistance, the researchers developed criteria to assess each paradigm's conceptual strength and contextual fit. Paradigm assessment involves systematically analyzing constructs, relationships, and assumptions to examine explanatory coherence, theoretical soundness, and relevance.2,15,21 As an initial step in the paradigm assessment, criteria were formulated to evaluate each paradigm's clarity, coherence, and relevance in explaining physicians’ acceptance or resistance to AI. Grouping these criteria into themes based on coherence, applicability, and purpose, as guided by IS literature, helped assess whether the paradigms offer clear explanations, testable propositions, and practical insights for AI use in healthcare.15,16,51 Drawing on cognitive-motivational and socio-technical perspectives, Table 3 shows that the 12 evaluation criteria were divided into four groups, referred to by the authors of this study as the T2P2 criteria for technology usage paradigm assessment, to strike a balance between theoretical, technological, professional, and personal characterizations. This grouping offers a complementary view of the essential qualities that a strong and adaptable AI usage paradigm for medical doctors should reflect, providing a solid foundation for focused appraisal.51,52 The theoretical criteria assess the core concepts of the 21 paradigms, while the technological criteria ensure relevance to AI usage. The professional criteria address the broader medical community, and the personal criteria focus on individual medical doctors as end users.
Grouping the criteria allows for a systematic and transparent evaluation, helping identify which paradigms best explain the complex nature of AI acceptance and resistance among physicians. This alignment of theory, technology, and user context across the T2P2 groups strengthens the assessment by exposing conceptual gaps, improving causal clarity, and enhancing relevance to real-world AI use in healthcare.
Table 3.
The T2P2 criteria in technology usage paradigm assessment.
| Theoretical criteria | Technological criteria | ||
|---|---|---|---|
| Structural parsimony | The paradigm should be simple yet complete to explain AI usage of medical doctors | Intelligent systems relevance | The paradigm should be relevant in explaining technologies that exhibit smart behavior |
| Complementary duality | The paradigm should have the capability to be used in explaining both acceptance and resistance of AI technologies | Technology scalability | The paradigm should be scalable to accommodate the explanation of AI along with its further developments |
| Holistic integrability | The paradigm should have the capability to accommodate enablers in technology acceptance and inhibitors in technology resistance | Paradigm flexibility | The paradigm should accommodate technology acceptance or resistance of AI under varying contexts or scenarios |
| Professional criteria | Personal criteria | ||
| Healthcare context specificity | The paradigm should be applicable to explain AI usage in the healthcare context | Micro-level applicability | The paradigm should explain technology acceptance or resistance from an individual-centric perspective |
| Context generalizability | The paradigm can be expanded to explain AI usage outside the healthcare context | Specialization applicability | The paradigm should be applicable to explain AI usage of doctors from different medical specialties and subspecialties |
| Profession functionality | The paradigm should be appropriate to explain AI usage of doctors with various responsibility functions | Pedagogical usability | The paradigm should be usable to explain AI usage within the professional learning and training environments of medical doctors |
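For reference, the four T2P2 groups and their 12 criteria from Table 3 can be represented as a simple mapping. The labels follow the table; the data structure itself is only an illustrative encoding, not part of the study's instrument:

```python
# The 12 T2P2 evaluation criteria, grouped as in Table 3.
T2P2_CRITERIA = {
    "theoretical": ["structural parsimony", "complementary duality",
                    "holistic integrability"],
    "technological": ["intelligent systems relevance", "technology scalability",
                      "paradigm flexibility"],
    "professional": ["healthcare context specificity", "context generalizability",
                     "profession functionality"],
    "personal": ["micro-level applicability", "specialization applicability",
                 "pedagogical usability"],
}

total_criteria = sum(len(group) for group in T2P2_CRITERIA.values())
print(total_criteria)  # 12 criteria across 4 groups
```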
Each of the 21 paradigms was evaluated using the 12 criteria to assess how well their structures explain both acceptance and resistance to AI among physicians. A critical review examined each paradigm's foundations, assumptions, and coherence, followed by qualitative interpretation to reveal deeper alignment with clinical AI use. Commonly used in healthcare research and behavioral informatics, a trichotomous or three-point rating scale was applied to determine whether each paradigm met the characteristics defined by the T2P2 criteria.53–55 As shown in Figure 2, a rating of 1 was given if the paradigm was deemed sufficient to meet the requirements of the criterion, while a rating of −1 was assigned if the paradigm was found insufficient. A rating of 0 was given if the paradigm moderately addressed the criterion, meaning it demonstrated only partial alignment and neither fully met nor clearly fell short in terms of theoretical coherence or explanatory sufficiency. Explanatory sufficiency means the paradigm provides a coherent and comprehensive foundation for understanding both acceptance and resistance.10,15,51 Conversely, explanatory insufficiency means the paradigm lacks key constructs or fails to address the complexity of technology use. A moderate rating suggests the paradigm touches relevant points but lacks the depth or clarity for full applicability. This does not imply a strict 50% threshold; rather, the rating of 0 reflects conceptual ambiguity, where alignment with the criteria is partial but not clearly sufficient or insufficient. Though trichotomous scales may lack granularity and are often unsuitable for complex attitudinal analysis, they provide a simple and structured baseline for comparison. In this study, their limitations were addressed by adding qualitative interpretation to contextualize each rating.
The trichotomous scale thus offered a flexible yet clear way to distinguish full alignment, partial alignment, and misalignment with the T2P2 criteria, supporting a structured appraisal of explanatory adequacy in AI use among physicians.54,55 After scoring, each paradigm underwent thematic qualitative analysis, focusing on conceptual alignment with the T2P2 criteria to interpret its rating and clarify conceptual distinctions. This combined method captured both structural sufficiency and contextual depth. Together, structured scoring and qualitative interpretation enabled a more comprehensive evaluation of each explanatory paradigm.
Figure 2.
The trichotomous scale used in paradigm assessment.
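A minimal sketch of the scoring step described above, assuming hypothetical ratings: the paradigm names, criterion subset, and values below are illustrative only and are not the study's actual results.

```python
# Trichotomous ratings: -1 insufficient, 0 moderate, 1 sufficient.
RATINGS = {-1: "insufficient", 0: "moderate", 1: "sufficient"}

def summarize(scores: dict) -> dict:
    """Tally how many criteria a paradigm meets, partially meets, or misses."""
    for criterion, score in scores.items():
        assert score in RATINGS, f"invalid rating for {criterion}"
    return {
        "sufficient": sum(1 for s in scores.values() if s == 1),
        "moderate": sum(1 for s in scores.values() if s == 0),
        "insufficient": sum(1 for s in scores.values() if s == -1),
    }

# Hypothetical scores for one paradigm on three theoretical criteria.
example = {
    "structural parsimony": 1,
    "complementary duality": 0,
    "holistic integrability": -1,
}
print(summarize(example))  # {'sufficient': 1, 'moderate': 1, 'insufficient': 1}
```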
Results
The theoretical appraisal results show how each paradigm scored across the 12 criteria, revealing patterns in clarity, structure, and relevance to doctors’ AI use. The theoretical criteria are central to the appraisal, as they reflect the core assumptions, constructs, and coherence of each paradigm. Structural parsimony requires that a paradigm be concise yet comprehensive.56,57 The researchers evaluated the 21 paradigms by weighing the tradeoff between simplicity and completeness in explaining physicians’ use of AI. Figure 3 shows that E-TAM, TrAAIT, UTAUT, and UTAUT-2, all extended frameworks, were rated insufficient due to limited parsimony. Although these successors offer broader scope, they may reduce clarity in contexts where physicians need streamlined decision making. Similarly, NASSS and ISSM received insufficient parsimony ratings for their complexity in addressing multiple aspects of AI implementation. GT, though labeled a theory, is actually a methodology involving complex procedures for data collection and analysis.46,47,58 GT builds new theories from data through an inductive and iterative process that requires strong theoretical sensitivity. It is less parsimonious because it moves beyond surface data to uncover deeper patterns and concepts.58 FITT received a moderate rating due to its multi-dimensional focus, complex interactions, and context-specific evaluations, which reduce its simplicity. Compared against TAM, widely recognized as one of the most parsimonious paradigms for explaining technology usage in IS, these findings hold in terms of theoretical simplicity.52,57 In healthcare, simplicity in explaining AI use requires balancing explanatory strength with practical relevance, avoiding excess complexity while capturing the key factors that influence physicians’ responses.
Figure 3.
Theoretical criteria results of the paradigm assessment.
Simplicity alone does not define a paradigm with high explanatory sufficiency. What distinguishes an effective paradigm is its ability to integrate the dichotomous views of enablers in technology acceptance and inhibitors in technology resistance. Complementary duality under the theoretical criteria requires that the framework be capable of explaining both acceptance and resistance of AI technologies.59,60 As a supplement, holistic integrability ensures that enablers can be fitted into acceptance and inhibitors into resistance within a single study.15,25,57,59 Although these two criteria are closely related, paradigms like TOE and TTT are strong in explaining technology acceptance but exhibit weaknesses in explaining technology resistance, primarily due to the lack of predefined constructs for understanding why users may resist, reject, delay, or totally abandon a particular technology after initial acceptance. TOE examines adoption through the lens of technological fit, organizational culture and resources, and competitive environmental factors.14,61 TTT, on the other hand, specializes in theorizing AI usage through the lens of trust, but it may overlook key policy-driven elements like healthcare regulations, infrastructure, training, and legal mandates.3,18,62 With these considerations, both were given a moderate rating in complementary duality, which in turn makes it difficult to integrate barrier variables into resistance and to analyze technology usage as a whole, leading to an insufficient rating in holistic integrability. Moreover, the unidimensional priority toward acceptance warranted an insufficient rating on duality and integrability for DOI, E-TAM, EVT, ISSM, SQBT, TAM, TEU, TRT, TrAAIT, UTAUT, and UTAUT-2. In the same way, IRT is insufficient in both duality and integrability since it takes a monolithic view of technology resistance that ignores acceptance determinants.
Studies support using paradigms that address both acceptance and resistance to better explain AI usage in healthcare.59,63 This calls for a dual mechanism where acceptance is understood in relation to resistance, and vice versa. Focusing solely on one limits a paradigm's explanatory capacity, especially in healthcare where both are closely linked. Examining the theoretical foundations of IS paradigms ensures they are not only adaptable for studying both behaviors but also conceptually strong and contextually relevant for guiding research and practice.
The second T2P2 cluster assesses each paradigm's technological value in explaining user responses to AI. After establishing theoretical soundness, the assessment considered how well paradigms address the specific complexities of AI. Unlike ubiquitous technologies, intelligent systems involve evolving usability, risks, and ethical concerns, which are often overlooked in typical usage studies. The intelligent systems relevance criterion ensures that paradigms are practical, context-aware, and able to address real-world AI challenges while balancing methodological rigor with actionable insights.1,51,63 Although AI is just one subcategory, intelligent systems were used as the relevance benchmark to reflect AI's integration with related autonomous technologies. Figure 4 shows that most paradigms applied to intelligent systems are user-centric models. These received moderate ratings as they emphasize user behavior over technological attributes. In contrast, DFM and NASSS were rated sufficient for intelligent systems relevance due to their focus on system constraints, integration, and scalability. Originally developed for healthcare, the NASSS framework includes seven domains that assess complexity, adoption, support, regulation, and sustainability.45,48,64 Although technology-centric, ISSM is insufficient as it addresses only post-adoption, making it unsuitable for analyzing acceptance or resistance, which occur earlier. Effective paradigms must cover both pre- and post-adoption phases, capturing the factors that drive initial responses and long-term use.2,9,11,15,21 For intelligent systems and AI, technology-centric frameworks are essential to explain the unique complexities of smart systems. Paradigms must look beyond user behavior to consider AI's complexity, integration, and scalability. Focusing on these aspects ensures practical relevance while preserving theoretical soundness in explaining AI acceptance and resistance.
Figure 4.
Technological criteria results of the paradigm assessment.
AI's rapid growth demands paradigms that keep pace with its evolving use. The technology scalability criterion considers temporality by assessing whether a paradigm can effectively explain AI usage as the technology advances.2,13,19 This means the paradigm should not only consider acceptance or resistance of AI technologies in their current state but should also facilitate examination of future innovations. A moderate rating was given when long-term applicability was uncertain or context-dependent. Although GT allows flexibility for AI studies, it was rated moderate since it lacks a structured and scalable model for AI advancements. IRT, TRT, and SQBT also received moderate ratings as they focus on resistance and may not support evolving AI use. NASSS, though effective in structured settings, was rated moderate for its limited fit with AI's evolving, self-learning nature and governance challenges. Under technology scalability, only EVT and TEU were rated insufficient. EVT focuses on individual motivation and perceived value but does not explain adoption.42,65 It is not technologically scalable, lacking focus on evolution, integration, and long-term cross-industry use. TEU, in contrast, explains adoption rather than acceptance by focusing on how users engage with technology to optimize benefits.21,40,66 It was rated insufficient for assuming technology remains static, ignoring AI's continuous evolution and adaptability.
For these reasons, EVT and TEU are insufficient in both technology scalability and paradigm flexibility. The paradigm flexibility criterion refers to a paradigm's capacity to explain AI acceptance and resistance across diverse contexts, levels of analysis, and the full spectrum of technology usage without requiring major modifications.15,16,67 This appraisal expects paradigms to explain AI use by doctors across varying contexts and all stages of optimistic use, from pre-adoption to post-adoption. Paradigms were also evaluated on their ability to explain pessimistic stages, including resistance, non-usage, abandonment, and rejection. GT, SOR, UTAUT, and UTAUT-2 were rated moderate due to limited ability to cover the full usage spectrum over time, though they still show strong adaptability and structural flexibility. In contrast, FITT, IRT, NASSS, TRT, and TTT were rated insufficient for flexibility, as they could not adapt across diverse contexts or address both acceptance and resistance. ISSM, TOE, and TrAAIT showed some flexibility but were rated moderate due to unclear scalability and limited coverage of usage stages. SQBT and TRT, rated moderate for technology scalability, were found insufficient for paradigm flexibility due to their narrow focus. SQBT emphasizes resistance through loss aversion and cognitive bias, neglecting acceptance, abandonment, and rejection.60,68 Similarly, TRT focuses solely on resistance driven by perceived threats and discomfort, ignoring transitions to acceptance, abandonment, or rejection across AI usage stages.8,12,34,41 Surprisingly, E-TAM and TAM were also rated insufficient for flexibility due to structural rigidity and their lack of integration of resistance dynamics alongside continuous usage trajectories.
These paradigm adaptability limitations are particularly problematic in medical practice where frontline physicians must continuously evaluate not only whether to adopt AI but also when to abandon or reject its use based on technological shifts, evolving clinical workflows, and changing patient and ethical demands. Paradigms that overlook AI's evolving use across contexts and conditions risk generating inadequate models for sustained and responsible integration in critical medical settings. That is why assessing paradigm flexibility reveals a paradigm's robustness, contextual relevance, and adaptability to technological change. Within the T2P2 criteria, intelligent systems relevance ensures AI applicability, technology scalability tests contextual adaptability, and paradigm flexibility captures acceptance and resistance over time.
The third T2P2 cluster assesses how well each paradigm explains AI use by medical doctors as a distinct user group. While earlier clusters focused on theory and technology, the professional criteria examine whether paradigms account for the unique structure, function, and context of the medical profession. Medicine is a highly stratified profession requiring life-or-death decisions, precision, and strict ethical and legal compliance. In such high-pressure settings, AI must meet rigorous accuracy and reliability standards while addressing physicians' specific concerns to ensure safe, ethical, and effective usage.2,6,12,69 With this, healthcare context specificity refers to a paradigm's capacity to model, analyze, and interpret AI acceptance and resistance within the unique regulatory, ethical, and operational landscape of healthcare.2,25,26 This criterion assessed how well each paradigm aligns with clinical decision making, medical workflows, and the high-stakes nature of medical practice. In this study, AI is positioned as an assistive tool that supports clinical judgment while preserving doctors’ decision-making autonomy and accountability. This aligns with high-stakes, regulated medical settings where both acceptance and resistance depend on how well AI integrates into workflows without reducing doctors’ control or adding legal risk. As shown in Figure 5, most paradigms met these standards, except SQBT and TEU, which were rated moderate, and EVT, which was rated insufficient. EVT tends to overlook high-stakes decisions, regulations, and risk aversion, while SQBT and TEU lack clarity on ethical and procedural complexities in medical settings. To balance healthcare specificity with broader use, context generalizability was added under the professional criteria.
Paradigms vary in their capacity for context generalizability, with some being highly adaptable across multiple domains, others being more domain-specific and limited in applicability, and a few remaining uncertain due to insufficient research on their cross-context expansion.13,16,56 Thus, context generalizability refers to a paradigm's capacity to extend explanatory features beyond healthcare AI usage while preserving its theoretical integrity, adaptability, and analytical power across diverse industries, user groups, and technological settings.5,18,27 In light of this, DOI, GT, and TRT received moderate ratings for context generalizability due to differing limitations. DOI depends on industry-specific dynamics, GT varies by context, and TRT's focus on resistance limits its broader applicability. NASSS and TrAAIT were rated insufficient, as both are confined to healthcare and lack mechanisms for broader extension. TrAAIT, adapted from ISSM, UTAUT, and trust models, focuses on trust in healthcare AI, emphasizing transparency, reliability, ethics, and human-AI collaboration in clinical decisions.17 Appraising paradigms under these contrasting criteria was challenging. A healthcare-focused paradigm captures clinical realities but may lack broader applicability, while a general one may apply widely but miss the depth needed for high-risk, regulated medical contexts. Balancing healthcare context specificity and context generalizability ensures relevance to medical AI while allowing adaptability for broader use, supporting a comprehensive and scalable approach.
Figure 5.
Professional criteria results of the paradigm assessment.
Throughout their careers, physicians take on roles in clinics, wards, operating rooms, research laboratories, and administrative units. As physicians shift roles, the type of AI they use also changes, ranging from diagnostic tools and surgical aids to research analytics and administrative automation. Paradigm assessment under the professional criteria must evaluate whether a framework explains AI acceptance and resistance across these diverse roles. Since this study positions AI as a tool that supports doctors without taking away their control, the criteria must assess both role diversity and how well each paradigm reflects AI's supportive role in different settings. Professional functionality refers to a paradigm's ability to account for AI use in clinical, surgical, research, and administrative duties.2,5,26 The assessment evaluated whether each paradigm could explain AI's role across medical functions, professional duties, and cross-functional impacts. EVT, IRT, SOR, and TOE received moderate ratings for different reasons. EVT tends to overlook how expectancy and value perceptions vary among clinicians, surgeons, researchers, and administrators. IRT may not account for role-specific resistance factors, despite differences in professional concerns such as liability, workflow disruption, and ethical considerations. Similarly, SOR explains how external stimuli influence a doctor's cognitive and emotional state.40,70,71 It could explain how medical roles influence reactions to AI but was rated moderate because it does not clearly differentiate responses across clinical, surgical, research, and administrative settings. TOE also received a moderate rating for focusing on organizational adoption while overlooking AI's impact on individual healthcare professionals. Paradigms must reflect physicians' diverse roles to capture how AI interacts with clinical judgment, surgical accuracy, research innovation, and administrative efficiency.
This appraisal centers on medical doctors as primary users, emphasizing the need to explain varied responses to AI across functional roles. Identifying paradigms that miss these distinctions is essential, as they may overlook how acceptance or resistance differs by function. In contrast, paradigms that capture role diversity provide a fuller view of AI's impact and support better implementation strategies. A complete appraisal must evaluate each paradigm's fit with medical complexity, cross-sector adaptability, and capacity to model acceptance and resistance across functions.
The final T2P2 cluster evaluates whether paradigms explain AI use at the level of the individual doctor. While professional criteria assess system-wide factors, personal criteria focus on how doctors engage with AI in daily practice. This distinction should be emphasized since the professional criteria cover institutional and collective influences, while the personal criteria address individual decisions, cognition, and direct interaction with AI. Ignoring personal aspects risks missing essential cognitive, motivational, and experiential factors that influence individual responses. Assessing both levels is essential, as individual doctors ultimately determine how AI is used in daily work, even within broader systems. Paradigms vary by level of analysis based on design, ontology, and behavioral focus. Some suit individual perspectives by addressing personal decisions, while others focus on group dynamics and peer influence.13,16,52 Some paradigms use an organizational lens to analyze technology use within institutions, while others take a societal view to examine cultural, systemic, and regulatory influences on acceptance and resistance.9,15 The micro-level applicability criterion assesses a paradigm's ability to explain acceptance and resistance from an individual perspective. Figure 6 shows that ISSM and TOE focus on organizational-level use, emphasizing system performance, user satisfaction, and institutional outcomes. Similarly, DOI and NASSS examine technology use at societal and organizational levels, addressing diffusion, large-scale adoption, and systemic influences. DOI, in particular, explains how technologies spread through social influence, communication networks, and adoption patterns influenced by innovation traits and system factors.35,72 Most paradigms include broader perspectives, but DOI, ISSM, NASSS, and TOE exclude micro-level applicability. 
Using paradigms focused on organizational or societal levels to explain individual AI responses can lead to incomplete or oversimplified interpretations. Such approaches may overlook personal factors influencing physicians' decisions. In contrast, paradigms sufficient at the micro level can show how systemic forces influence individual choices in medical practice.
Figure 6.
Personal criteria results of the paradigm assessment.
The personal criteria acknowledge that medical doctors, as AI end users, have varied specialties, subspecialties, and even advanced subspecialties. Practically, specialization enables doctors to deliver focused care and advance knowledge in areas requiring training beyond general practice.12,20,35 A doctor specializing in otorhinolaryngology-head and neck surgery with a subspecialty in craniomaxillofacial surgery may respond differently to AI use compared to those with subspecialties in audiology or rhinology, even though they share the same core specialty. Medical doctors are a distinct user group, with training across general practice, specialties, and subspecialties, each requiring unique knowledge and decision-making, resulting in context-specific AI use. This diversity influences how AI is integrated, as each field demands tailored diagnostic processes, treatment protocols, and data-driven solutions. The specialization applicability criterion assesses whether a paradigm can explain AI acceptance and resistance across diverse medical specialties and subspecialties.5,11,12 Most of the 21 frameworks meet the specialization criterion, but DOI, ISSM, and TOE are insufficient due to their macro-level focus and lack of detail for specialized medical contexts. These paradigms may fall short in explaining AI use in fields like pediatrics, psychiatry, and dermatology, which represent distinct contextual challenges for AI application.73,74 While all specialties require ethical and patient-centered care, these fields are notable for involving vulnerable patients, complex diagnoses, and long-term, individualized treatment. These factors make it harder to use AI safely and effectively. As a result, they may pose unique AI challenges often missed by broad, macro-level paradigms.
These paradigms may also be unsuitable for surgical fields like general, orthopedic, and neurosurgery, where hands-on precision and time-sensitive decisions require context-specific understanding.19,22,23,75 IRT, SQBT, and TRT were rated moderate for focusing on resistance and bias without addressing differences across specialties. SOR, TEU, and TTT received the same rating for being heavily technology-centric, emphasizing systems over user behavior in specialized fields. Such paradigms may not fully explain AI use in theoretical fields like pathology and radiology, which are research-driven and focus on disease mechanisms and treatment development. These fields require deep, data-intensive analysis often beyond the scope of less analytically robust paradigms.5,34,48,76 Moderately rated paradigms may also fall short in practical fields like family and internal medicine, which require direct care, complex case management, and long-term patient relationships. These areas need patient-centered paradigms that reflect the complexity of real-world clinical decisions.5,24 The specialization applicability criterion showed that paradigms designed for broad or technology-focused analyses often struggle to capture the context-specific factors needed to explain AI acceptance and resistance across medical specialties.
The final personal criterion is an essential feature of any paradigm explaining AI use among medical doctors. Lifelong learning must be integrated into the analysis of AI usage dynamics since continuous professional education is a cornerstone of the medical profession, ensuring that physicians remain adept in advancing medical knowledge and evolving clinical practices.6,77 Typically, after obtaining a medical license, doctors progress through residency training to achieve diplomate status, followed by fellowship training to attain subspecialty status. AI acceptance and resistance thus influence their learning and technology engagement over time. The pedagogical usability criterion evaluates whether a paradigm can explain these dynamics within medical education.6,11 This criterion assesses whether a paradigm explains AI integration in structured learning, skill development, and evolving technology usage behaviors during medical training. EVT, IRT, SOR, SQBT, and TRT were rated insufficient, as they focus on attitudes or resistance without addressing formal education settings. IRT, for example, explores resistance but lacks attention to structured learning, limiting its use in studying AI in medical education.8,78 TEU and TOE were also rated insufficient for focusing on technology effectiveness and organizational factors, without addressing AI's role in medical training or skill development. A doctor's education involves ongoing learning through medical school, residency, fellowship, and continuous training, each requiring specialized knowledge and practice.6,11,18,79 Overlooking doctors’ continuous learning leads to incomplete explanations of how they adopt or resist AI throughout their careers. The pedagogical usability criterion ensures paradigms reflect AI use within medical training. In the bigger picture, the T2P2 personal criteria offer a doctor-centered view, capturing AI acceptance and resistance across individual experience, specialty, and lifelong education.
Discussion
To provide a comprehensive basis for the theoretical appraisal and discussion, the individual scores from each of the four T2P2 criteria groups were aggregated for each paradigm. Figure 7 highlights SCT, SDT, and DFM as the most robust paradigms, showing strong alignment across all dimensions and effectively capturing the complex cognitive-motivational and socio-technical dynamics of AI use in clinical contexts. SCT and SDT incorporate cognitive efficacy, autonomy, and motivation, offering a richer understanding of sustained use and resistance. In contrast, paradigms like TAM emphasize usefulness and ease of use, E-TAM adds psychological drivers, and TEU redefines acceptance through actual use influenced by skills, conditions, and support.21,52,66 These paradigms fit task-specific contexts where usability and efficiency matter but often miss the context-driven decisions in clinical practice influenced by professional identity, patient care, and ethics.2,21 This narrow focus limits their capacity to capture the cognitive, motivational, and experiential factors driving long-term technology use, which often depends on broader psychological and professional considerations.5,9,27,28 Paradigms such as TRT, SQBT, FITT, and GT focus on narrow aspects of technology use but often overlook the cognitive and experiential factors that sustain AI engagement among physicians. TRT emphasizes barriers like psychological discomfort, perceived threats, and resistance during transitions.
While useful for identifying challenges, it lacks flexibility to capture acceptance factors such as professional confidence, clinical autonomy, and patient-centered care, which are critical in medical settings.18,28,29 SQBT explains psychological inertia that hinders AI use but struggles to capture adaptive learning and ongoing professional growth, showing limited alignment with personal and professional dimensions.60,68 Despite its strong foundation, FITT often overlooks the cognitive and experiential factors shaping physicians’ AI use, focusing on the fit among individual, task, and technology while missing the adaptive, context-specific learning central to medical practice.28,45 GT as a methodology is useful for exploring context-specific technology use but lacks the conceptual flexibility to capture the evolving nature of medical practice influenced by professional identity and patient-centered care.18,58 Trust-centric paradigms such as TTT and TrAAIT focus heavily on relational dynamics, often overlooking broader psychological and motivational drivers of sustained technology use. While trust is vital for initial acceptance, it alone cannot explain the long-term cognitive and emotional factors influencing AI use in high-stakes clinical settings.28,62,69 Similarly, DOI and EVT emphasize innovation traits and personal utility but overlook the adaptive and experiential aspects of professional practice. This narrow focus limits their relevance in medical contexts, where adoption is driven by ongoing development, patient outcomes, and ethics, requiring greater personal and technological flexibility.18,27 IRT and SOR help explain specific resistance and behavioral responses but often isolate psychological triggers, missing the broader context-dependent dynamics of medical technology use.
IRT focuses on discomfort and cognitive bias but lacks flexibility to capture the positive motivations behind long-term AI engagement in healthcare.9,10,78 Similarly, SOR centers on external stimuli but overlooks the cognitive evaluations, professional judgment, and experiential learning central to medical practice.70,71 While TOE effectively captures organizational contexts, it emphasizes structural factors over individual psychological drivers of long-term technology use. This limits its relevance in healthcare, where personal motivation, clinical experience, and patient-centered care are key.26,61 UTAUT and UTAUT-2, though widely applicable, tend to favor functional explanations over theoretical duality and individual motivations. While useful in specific cases, they often lack the integrated perspective needed to capture the full complexity of AI acceptance and resistance among medical professionals. This highlights the value of cognitive-motivational and socio-technical paradigms that balance internal psychology with external structures to explain long-term AI engagement in clinical settings.
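The aggregation behind these comparisons can be sketched in a few lines. The group labels and the criteria named in this article are taken from the text, but the numeric coding of the trichotomous scale, the theoretical-cluster dimension names, and the example ratings below are illustrative assumptions, not the study's actual scoring data.

```python
# Hypothetical sketch of aggregating trichotomous T2P2 ratings per paradigm.
# Coding insufficient/moderate/sufficient as 0/1/2 is an assumption for
# illustration; the study itself reports ratings, not numeric scores.

RATING_SCORES = {"insufficient": 0, "moderate": 1, "sufficient": 2}

# Four clusters of three dimensions each; the "theoretical" names are
# placeholders, the others follow the criteria discussed in the text.
T2P2_GROUPS = {
    "theoretical": ["dimension_a", "dimension_b", "dimension_c"],
    "technological": ["intelligent systems relevance",
                      "technology scalability", "paradigm flexibility"],
    "professional": ["healthcare context specificity",
                     "context generalizability", "professional functionality"],
    "personal": ["micro-level applicability",
                 "specialization applicability", "pedagogical usability"],
}

def aggregate(ratings: dict) -> dict:
    """Sum rating scores per T2P2 group and overall for one paradigm."""
    result = {group: sum(RATING_SCORES[ratings[d]] for d in dims)
              for group, dims in T2P2_GROUPS.items()}
    result["total"] = sum(result[g] for g in T2P2_GROUPS)
    return result

# Illustrative ratings for one paradigm (not the study's actual assessment):
# sufficient everywhere except a moderate on technology scalability.
example = {d: "sufficient" for dims in T2P2_GROUPS.values() for d in dims}
example["technology scalability"] = "moderate"

scores = aggregate(example)
print(scores["technological"], scores["total"])  # 5 23
```

A coding like this makes cross-paradigm comparisons such as Figure 7 straightforward to reproduce, while the per-group subtotals preserve where a paradigm falls short rather than hiding it in one overall number.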
Figure 7.
T2P2 criteria paradigm assessment.
A key finding is the strong performance of SCT, SDT, and DFM across all T2P2 dimensions, highlighting their ability to explain the complex dynamics of AI acceptance and resistance. Though less common in mainstream IS research, SCT and SDT outperformed more established models by capturing individual-level factors central to medical doctors’ decisions. Both scored highly in personal criteria, aligning closely with the psychological drivers of AI use. While mature IS paradigms are expected to show broad explanatory strength, SCT and SDT stood out for their cognitive and motivational focus. Unlike system-oriented models, they clarify the internal learning and motivational processes shaping physicians’ AI engagement.5,27,80,81 This suggests that IS researchers should prioritize internal factors such as the psychological and cognitive mechanisms emphasized by SCT and SDT, rather than focusing only on external elements. Unlike institutional policies or system design, internal drivers influence how physicians perceive and engage with AI. A combined view of cognitive-motivational and socio-technical perspectives highlights the importance of aligning AI with both technical systems and the cognitive and motivational needs of clinicians. This approach supports complex decision-making and promotes lasting and meaningful integration of AI in clinical practice.
In essence, SCT, which was developed by Albert Bandura in 1986 as an extension of Social Learning Theory, explains human learning and behavior through the interaction of personal, behavioral, and environmental factors.80,82–84 SCT is widely used in technology acceptance and resistance research because it emphasizes self-efficacy, outcome expectations, and social influence, which are key to understanding how individuals adopt emerging technologies. For physicians, these constructs are especially relevant, as they reflect confidence in using AI, perceived clinical value, and the influence of peers and professional norms on technology choices.83,84 On the other hand, SDT was developed by Edward Deci and Richard Ryan in the 1980s to explain human motivation and behavior through the fulfillment of three fundamental psychological needs, namely autonomy, competence, and relatedness.80,81,85,86 As a motivation theory, SDT is often applied in technology acceptance and resistance research for its focus on both intrinsic and extrinsic drivers of sustained use. In medical practice, AI is more likely to be adopted when it supports physicians’ autonomy, reinforces their clinical competence, and strengthens their sense of connection within the medical community.81,86 These internal mechanisms explain the cognitive and emotional factors that influence physicians’ technology use throughout their careers. As doctors gain experience and engage with diverse patients, their views on AI evolve through continuous professional development. Unlike external factors influenced by policy or technical changes, internal cognitive and motivational constructs are more enduring and essential for understanding long-term acceptance and resistance.
This emphasis on internal factors is complemented by the external considerations that the DFM facilitates, as it also demonstrates strong explanatory performance across the T2P2 criteria. DFM states that technology acceptance is driven by enablers, while resistance stems from inhibitors.4,25,37,38 This dual approach is useful for explaining AI use in healthcare, as it captures both supportive and opposing factors. Enablers such as perceived usefulness, ease of use, organizational support, and workflow compatibility significantly boost physicians’ willingness to adopt AI tools.25,30,37 Enablers improve decision-making, reduce errors, enhance outcomes, and increase efficiency, aligning with evidence-based medicine. Inhibitors such as perceived risk, distrust, fear of obsolescence, ethical concerns, and workflow disruption can reduce doctors’ willingness to adopt AI.2,30,38 Concerns about AI reliability, transparency, patient trust, and the devaluation of clinical expertise pose significant barriers to sustained use. Over-reliance, liability risks, and data privacy issues further fuel resistance among healthcare professionals. Unlike SCT and SDT, which focus on internal factors, DFM addresses external influences that are more easily influenced by IS strategies like organizational support, training, and clinical interventions.22,25 DFM is well-suited to explain the systemic, strategic, and operational factors shaping AI use in healthcare. It balances internal user motivations with the need to manage broader technological, institutional, and regulatory contexts.13,51,52 These perspectives highlight the need for a balanced approach that integrates the internal drivers from SCT and SDT with the external forces addressed by DFM. 
The authors of this study strongly believe that effective AI integration in healthcare requires a comprehensive understanding that extends beyond technical alignment to include the deeper cognitive, motivational, and affective factors that guide individual technology choices. SCT and SDT explain core psychological mechanisms like self-efficacy, autonomy, competence, and relatedness, while DFM addresses external influences, including enablers such as usefulness, support, and compatibility, and inhibitors like risk, ethics, and disruption.25,36,49 This integrated view provides a more comprehensive paradigm for explaining AI acceptance and resistance among medical doctors, aligning technical effectiveness with psychological and environmental needs. IS researchers should design and apply paradigms that address both cognitive-motivational drivers and broader contextual influences on physician behavior. From a cognitive-motivational perspective, this promotes AI integration that supports physicians’ confidence, autonomy, competence, and sense of professional connection. From a socio-technical perspective, this supports AI systems that are ethically sound, professionally relevant, and sustainably embedded in clinical practice.
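The DFM's central claim, that enablers and inhibitors are separate dimensions rather than opposite ends of one scale, can be made concrete with a short sketch. The factor names echo those discussed above, but the measurement scale, weights, and ratings are hypothetical examples, not validated constructs.

```python
# Illustrative sketch of the Dual Factor Model's dual structure: enablers
# and inhibitors are scored separately, so a physician can rate high on both.
# All factor lists and ratings here are hypothetical, for illustration only.

ENABLERS = ["perceived usefulness", "ease of use",
            "organizational support", "workflow compatibility"]
INHIBITORS = ["perceived risk", "distrust",
              "fear of obsolescence", "ethical concerns"]

def dfm_profile(ratings: dict) -> dict:
    """Average enabler and inhibitor ratings separately (1-5 scale assumed)."""
    return {
        "enabling": sum(ratings[f] for f in ENABLERS) / len(ENABLERS),
        "inhibiting": sum(ratings[f] for f in INHIBITORS) / len(INHIBITORS),
    }

# A physician who finds AI useful yet also distrusts it: both dimensions
# come out high, which a single acceptance score would collapse into one
# ambiguous middle value.
ratings = {"perceived usefulness": 5, "ease of use": 4,
           "organizational support": 4, "workflow compatibility": 4,
           "perceived risk": 4, "distrust": 5,
           "fear of obsolescence": 3, "ethical concerns": 4}
profile = dfm_profile(ratings)
print(profile)  # {'enabling': 4.25, 'inhibiting': 4.0}
```

Keeping the two averages separate mirrors the DFM argument that interventions should target enablers and inhibitors independently, since lowering an inhibitor does not automatically raise an enabler.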
This theoretical appraisal of explanatory paradigms for AI usage of physicians acknowledges possible shortcomings in terms of scope constraints, methodological limitations, potential biases, and generalizability challenges. By focusing solely on medical doctors as end users, the analysis excludes perspectives from other healthcare professionals and overlooks the broader multi-actor environments in which AI is often deployed. The evaluation may have a limited clinical scope as it concentrates on explanatory paradigms rather than applied evidence, even though paradigms are also intended to analyze, predict, and prescribe technology engagement. Methodologically, limited coverage may have resulted from relying on Scopus and Web of Science and restricting to English-language, peer-reviewed journals. The use of the T2P2 criteria and trichotomous rating scale provided structure but also introduced interpretive judgment, and because this study is a theoretical appraisal rather than an empirical investigation, causal validation of paradigms was not possible. These choices reflect the researchers’ perspectives and assumptions, which may have influenced the results, so the evaluation of 21 paradigms may not capture the full theoretical landscape or extend beyond doctors, who also need to interact with other healthcare professionals in using AI. Recognizing these limitations, future research can enhance practical relevance by applying SCT, SDT, and DFM to measurable technology usage determinants. Explanatory investigations utilizing stand-alone implementations of SCT, SDT, and DFM can respectively guide the use of self-efficacy scales to assess physician confidence in AI usage, validate measures of autonomy and competence, and identify context-specific enablers and inhibitors in clinical workflows. 
To further enhance explanatory sufficiency, a composite paradigm combining SCT, SDT, and DFM into a hybrid framework can better capture the complexities of AI usage in medical settings. Whether applied independently or as a composite, these paradigms offer actionable structures for designing surveys, interviews, and interventions that address internal motivation and external barriers. Their constructs can be empirically tested through methods such as PLS-SEM, usability testing, or action research, enabling the development of implementation strategies tailored to medical domains, whether highly clinical, highly surgical, or general practice.
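Before constructs such as AI self-efficacy enter a PLS-SEM model, their multi-item scales are typically checked for internal consistency. The sketch below shows one standard check, Cronbach's alpha, computed with the Python standard library; the scale items and the six physicians' responses are fabricated purely for illustration.

```python
# Minimal sketch: Cronbach's alpha as an internal-consistency check for a
# hypothetical multi-item AI self-efficacy scale prior to PLS-SEM analysis.
# The Likert responses below are fabricated for illustration only.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of responses per scale item, aligned by respondent."""
    k = len(items)
    respondents = list(zip(*items))              # one tuple per respondent
    total_scores = [sum(r) for r in respondents]
    item_variance = sum(pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - item_variance / pvariance(total_scores))

# Four five-point Likert items answered by six hypothetical physicians.
responses = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
    [3, 4, 3, 4, 2, 4],
]
alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # 0.91, above the conventional 0.7 threshold
```

A value above roughly 0.7 is the conventional benchmark for treating the items as one reliable construct; items dragging alpha down would be revised or dropped before the construct is used in structural modeling.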
Conclusion and recommendations
As AI advances in healthcare, its use by medical doctors remains slow due to a range of cognitive-motivational and socio-technical barriers influencing acceptance and resistance. Although many paradigms have examined technology use, no study has systematically evaluated their conceptual adequacy in explaining AI usage among physicians. To address this gap, this study appraised 21 paradigms using the T2P2 criteria, which assess theoretical, technological, professional, and personal dimensions. Through a rigorous review employing a trichotomous scale and qualitative analysis to evaluate structural, theoretical, and contextual alignment, DFM, SCT, and SDT emerged as the only paradigms that consistently satisfied all criteria. From both cognitive-motivational and socio-technical perspectives, these paradigms emphasize physician confidence, autonomy, competence, and relatedness while supporting ethical, professional, and sustainable AI integration. This appraisal demonstrates that these paradigms provide actionable foundations for empirical tools assessing both internal psychological and external structural factors influencing AI acceptance and resistance.
By clarifying the strengths and gaps of existing frameworks, this study offers a more structured basis for promoting responsible and clinically relevant AI integration. Broader database coverage and the inclusion of non-English studies would improve inclusivity in paradigm search and identification. Empirical testing of how paradigms explain technology use would further enhance paradigm assessment, reduce bias, and strengthen the rigor and generalizability of this study's foundation. Future research should give weight to both internal psychological and external environmental drivers when examining AI use among medical doctors, and may also extend beyond doctors to include nurses, allied health professionals, and patients to better reflect the team-based use of AI in healthcare. A balanced approach combining internal and external determinants is needed to explain both technology acceptance and resistance. IS researchers should move beyond single-factor models and adopt dual-structured paradigms that consider enablers and inhibitors. Given the lack of a unified paradigm that captures this duality, future work should build on integrative approaches like SCT and SDT, which emphasize internal factors while remaining adaptable to cognitive-motivational and socio-technical contexts. To enhance practical relevance, future research can operationalize SCT, SDT, and DFM either independently or as a composite hybrid by translating their constructs into measurable determinants, guiding the design of empirical tools such as surveys and interviews, and applying analytical methods like PLS-SEM, usability testing, and action research suited to various clinical domains. These approaches can also help clarify how psychological and structural factors interact in influencing physician behavior towards AI usage. As a critical appraisal of explanatory paradigms, this study urges IS practitioners and health policymakers to select paradigms carefully. 
For medical doctors to effectively integrate AI, paradigms must account for both individual motivations and the structural realities of clinical practice.
Supplemental Material
Supplemental material, sj-docx-1-dhj-10.1177_20552076251384221 for Theoretical appraisal of explanatory paradigms for artificial intelligence usage by medical doctors by Lemuel Clark Velasco and Wei-Tsong Wang in DIGITAL HEALTH
Acknowledgements
The paradigm search was supported through the invaluable assistance of Ms. Ya Jhen Li of the Kun-Yen Medical Library, National Cheng Kung University.
Appendix
The details of the paradigm search are found here https://github.com/lemuelclarkvelasco/TheoreticalAppraisalParadigmSearch
Footnotes
ORCID iDs: Lemuel Clark Velasco https://orcid.org/0000-0003-3983-702X
Wei-Tsong Wang https://orcid.org/0000-0002-1448-7433
Author contributions: LCV and WTW handled conceptualization, data curation, formal analysis, investigation, methodology, project administration, resources, drafting of the original manuscript, and editing the manuscript. Generative AI was used solely for the purpose of improving the grammar, language and readability of the manuscript. The authors are ultimately responsible and accountable for the contents of the work.
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by the National Science and Technology Council, Taiwan [Grant No.: NSTC 112-2410-H-006-052-MY3 and NSTC 113-2410-H-006-065].
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
All data generated or analyzed during this study are included in this published article and its supplementary information files.
Supplemental material: Supplemental material for this article is available online.
References
- 1.Collins C, Dennehy D, Conboy K, et al. Artificial intelligence in information systems research: a systematic literature review and research agenda. Int J Inf Manage 2021; 60: 102383. [Google Scholar]
- 2.Hassan M, Borycki EM, Kushniruk AW. Artificial intelligence governance framework for healthcare. Healthc Manage Forum 2025; 38: 125–130. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Wang X, Wang Y. Analysis of trust factors for AI-assisted diagnosis in intelligent healthcare: personalized management strategies in chronic disease management. Expert Syst Appl 2024; 255: 124499. [Google Scholar]
- 4.Fujimori R, et al. Acceptance, barriers, and facilitators to implementing artificial intelligence–based decision support systems in emergency departments: quantitative and qualitative evaluation. JMIR Form Res 2022; 6: e36501. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Han R, Acosta JN, Shakeri Z, et al. Randomised controlled trials evaluating artificial intelligence in clinical practice: a scoping review. Lancet Digit Health 2024; 6: e367–e373. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Li Q, Qin Y. AI In medical education: medical student perception, curriculum recommendations and design suggestions. BMC Med Educ 2023; 23: 52. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Eiskjær S, Pedersen CF, Skov ST, et al. Usability and performance expectancy govern spine surgeons’ use of a clinical decision support system for shared decision-making on the choice of treatment of common lumbar degenerative disorders. Front Digit Health 2023; 5: 1225540. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Yang Y, Ngai EWT, Wang L. Resistance to artificial intelligence in health care: literature review, conceptual framework, and research agenda. Inform Manag 2024; 61: 103961. [Google Scholar]
- 9.Anisha SA, Sen A, Ahmad B, et al. Exploring acceptance of digital health technologies for managing non-communicable diseases among older adults: a systematic scoping review. J Med Syst 2025; 49: 35. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Samhan B. Revisiting technology resistance: current insights and future directions. Australasian J Inform Syst 2018; 22: 1–11. [Google Scholar]
- 11.Issa WB, et al. Shaping the future: perspectives on the integration of artificial intelligence in health profession education: a multi-country survey. BMC Med Educ 2024; 24: 1166. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Prakash AV, Das S. Medical practitioner's adoption of intelligent clinical diagnostic decision support systems: a mixed-methods study. Inform Manag 2021; 58: 103524. [Google Scholar]
- 13.Venkatesh V, Morris MG, Davis GB, et al. User acceptance of information technology: toward a unified view. MIS Q 2003; 27: 425–478. [Google Scholar]
- 14.Yu Z, et al. The willingness of doctors to adopt artificial intelligence-driven clinical decision support systems at different hospitals in China: fuzzy set qualitative comparative analysis of survey data. J Med Internet Res 2025; 27: e62768. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Gregor S. The nature of theory in information systems. MIS Q 2006; 30: 611–642. [Google Scholar]
- 16.Weber R. Evaluating and developing theories in the information systems discipline. J Assoc Inform Systems 2012; 13: 1–30. [Google Scholar]
- 17.Stevens AF, Stetson P. Theory of trust and acceptance of artificial intelligence technology (TrAAIT): an instrument to assess clinician trust and acceptance of artificial intelligence. J Biomed Inform 2023; 148: 104550. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Khosravi M, Zare Z, Mojtabaeian SM, et al. Artificial intelligence and decision-making in healthcare: a thematic analysis of a systematic review of reviews. Health Serv Res Manag Epidemiol 2024; 11: 23333928241234863. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Guni A, Varma P, Zhang J, et al. Artificial intelligence in surgery: the future is now. Eur Surg Res 2024; 65: 22–39. [DOI] [PubMed] [Google Scholar]
- 20.Lambert SI, et al. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digit Med 2023; 6: 11. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Burton-Jones A, Straub DW. Reconceptualizing system usage: an approach and empirical test. Inf Syst Res 2006; 17: 228–246. [Google Scholar]
- 22.Constable MD, Shum HPH, Clark S. Enhancing surgical performance in cardiothoracic surgery with innovations from computer vision and artificial intelligence: a narrative review. J Cardiothorac Surg 2024; 19: 94. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Ahuja AS, et al. Applications of artificial intelligence in cataract surgery: a review. Clin Ophthalmol 2024; 18: 2969–2975. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Alanzi T, et al. Barriers and facilitators of artificial intelligence in family medicine: an empirical study with physicians in Saudi Arabia. Cureus 2023; 15: e49419. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Hoffman J, Wenke R, Angus RL, et al. Overcoming barriers and enabling artificial intelligence adoption in allied health clinical practice: a qualitative study. Digit Health 2025; 11: 20552076241311144. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Lampreia F, Madeira C, Dores H. Digital health technologies and artificial intelligence in cardiovascular clinical trials: a landscape of the European space. Digit Health 2024; 10: 20552076241277703. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Panteli D, et al. Artificial intelligence in public health: promises, challenges, and an agenda for policy makers and public health institutions. Lancet Public Health 2025; 10: e428–e432. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Roppelt JS, Kanbach DK, Kraus S. Artificial intelligence in healthcare institutions: a systematic literature review on influencing factors. Technol Soc 2024; 76: 102443. [Google Scholar]
- 29.Ullah W, Ali Q. Role of artificial intelligence in healthcare settings: a systematic review. J Med Artif Intell 2025; 8: 24. https://jmai.amegroups.org/article/view/9683 . [Google Scholar]
- 30.Chomutare T, et al. Artificial intelligence implementation in healthcare: a theory-based scoping review of barriers and facilitators. Int J Environ Res Public Health 2022; 19: 16359. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Burns JL, Gichoya JW, Kohli MD, et al. Theory of radiologist interaction with instant messaging decision support tools: a sequential-explanatory study. PLOS Digit Health 2024; 3: e0000297. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Dahri AS, Massan SUR, Thebo LA. An overview of AI-enabled M-IoT wearable technology and its effects on the conduct of medical professionals in public healthcare in Pakistan. 3C Tecnología 2020; 9: 87–111. [Google Scholar]
- 33.Huang Z, et al. Are physicians ready for precision antibiotic prescribing? A qualitative analysis of the acceptance of artificial intelligence-enabled clinical decision support systems in India and Singapore. J Glob Antimicrob Resist 2023; 35: 76–85. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Zhai H, et al. Radiation oncologists' perceptions of adopting an artificial intelligence-assisted contouring technology: model development and questionnaire study. J Med Internet Res 2021; 23: e27122. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Grunhut J, Wyatt ATM, Marques O. Educating future physicians in artificial intelligence (AI): an integrative review and proposed changes. J Med Educ Curric Dev 2021; 8: 23821205211036836. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Liu H, Perera SC, Wang JJ, et al. Physician engagement in online medical teams: a multilevel investigation. J Bus Res 2023; 157: 113588. [Google Scholar]
- 37.Eltawil FA, Atalla M, Boulos E, et al. Analyzing barriers and enablers for the acceptance of artificial intelligence innovations into radiology practice: a scoping review. Tomography 2023; 9: 1443–1455. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Lokaj B, Pugliese MT, Kinkel K, et al. Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review. Eur Radiol 2024; 34: 2096–2109. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Schouten B, et al. Implementing artificial intelligence in clinical practice: a mixed-method study of barriers and facilitators. J Med Artif Intell 2022; 5: 12. [Google Scholar]
- 40.Dalvi-Esfahani M, Mosharaf-Dehkordi M, Leong LW, et al. Exploring the drivers of XAI-enhanced clinical decision support systems adoption: insights from a stimulus-organism-response perspective. Technol Forecast Soc Change 2023; 195: 122768. [Google Scholar]
- 41.Zhang D, Zhao X. Understanding adoption intention of virtual medical consultation systems: perceptions of ChatGPT and satisfaction with doctors. Comput Human Behav 2024; 159: 108359. [Google Scholar]
- 42.Choudhury A, Asan O, Medow JE. Effect of risk, expectancy, and trust on clinicians’ intent to use an artificial intelligence system – blood utilization calculator. Appl Ergon 2022; 101: 103708. [DOI] [PubMed] [Google Scholar]
- 43.Panagoulias DP, Virvou M, Tsihrintzis GA. A novel framework for artificial intelligence explainability via the technology acceptance model and rapid estimate of adult literacy in medicine using machine learning. Expert Syst Appl 2024; 248: 123375. [Google Scholar]
- 44.Roy M, Jamwal M, Vasudeva S, et al. Physicians behavioural intentions towards AI-based diabetes diagnostic interventions in India. J Public Health (Berl.) 2024. 10.1007/s10389-024-02235-w [DOI] [Google Scholar]
- 45.Hogg HDJ, et al. Intervention design for artificial intelligence-enabled macular service implementation: a primary qualitative study. Implementation Sci Comm 2024; 5: 31. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.Mansour T, Bick M. How can physicians adopt AI-based applications in the United Arab Emirates to improve patient outcomes? Digit Health 2024; 10: 1–20. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Weber S, Wyszynski M, Godefroid M, et al. How do medical professionals make sense (or not) of AI? A social-media-based computational grounded theory study and an online survey. Comput Struct Biotechnol J 2024; 24: 146–159. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Strohm L, Hehakaya C, Ranschaert ER, et al. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur Radiol 2020; 30: 5525–5532. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Shaikh F, Afshan G, Anwar RS, et al. Analyzing the impact of artificial intelligence on employee productivity: the mediating effect of knowledge sharing and well-being. Asia Pacific J Human Resources 2023; 61: 794–820. [Google Scholar]
- 50.Ram S, Sheth JN. Consumer resistance to innovations: the marketing problem and its solutions. J Consum Market 1989; 6: 5–14. [Google Scholar]
- 51.Blut M, Chong AYL, Tsigna Z, et al. Meta-Analysis of the unified theory of acceptance and use of technology (UTAUT): challenging its validity and charting a research agenda in the red ocean. J Assoc Inform Syst 2022; 23: 13–95. [Google Scholar]
- 52.Lee Y, Kozar KA, Larsen KRT. The technology acceptance model: past, present, and future. Commun Assoc Inf Syst 2003; 12: 752–780. [Google Scholar]
- 53.Huang Y, et al. Comparison of three machine learning models to predict suicidal ideation and depression among Chinese adolescents: a cross-sectional study. J Affect Disord 2022; 319: 221–228. [DOI] [PubMed] [Google Scholar]
- 54.Hamard M, Sans Merce M, Gorican K, et al. The role of cone-beam computed tomography CT extremity arthrography in the preoperative assessment of osteoarthritis. Tomography 2023; 9: 2134–2147. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Karakose T, Demirkol M, Aslan N, et al. A conversation with ChatGPT about the impact of the COVID-19 pandemic on education: comparative review based on human–AI collaboration. Educ Process: Int J 2023; 12: 7–25. [Google Scholar]
- 56.Plouffe CR, Hulland JS, Vandenbosch M. Research report: richness versus parsimony in modeling technology adoption decisions—understanding merchant adoption of a smart card-based payment system. Inf Syst Res 2001; 12: 208–222. [Google Scholar]
- 57.Bagozzi RP. The legacy of the technology acceptance model and a proposal for a paradigm shift. J Assoc Inform Syst 2007; 8: 244–254. [Google Scholar]
- 58.Sison R, Lavilles R. Software gigging: a grounded theory of online software development freelancing. In: Human Behavior and IS Track, ICIS 2018, San Francisco, California, USA, 2018, pp.1–17. Available: https://aisel.aisnet.org/icis2018/behavior/Presentations/26/. [Google Scholar]
- 59.Lapointe L, Rivard S. A multilevel model of resistance to information technology implementation. MIS Q 2005; 29: 461–491. [Google Scholar]
- 60.Kim H-W, Kankanhalli A. Investigating user resistance to information systems implementation: a status quo bias perspective. MIS Q 2009; 33: 567–582. [Google Scholar]
- 61.Yang J, Luo B, Zhao C, et al. Artificial intelligence healthcare service resources adoption by medical institutions based on TOE framework. Digit Health 2022; 8: 205520762211260. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 62.Ball R, Talal AH, Dang O, et al. Trust but verify: lessons learned for the application of AI to case-based clinical decision-making from postmarketing drug safety assessment at the US food and drug administration. J Med Internet Res 2024; 26: e50274. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 63.Shrivastava P. Understanding acceptance and resistance toward generative AI technologies: a multi-theoretical framework integrating functional, risk, and sociolegal factors. Front Artificial Intell 2025; 8: 1–10. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64.Greenhalgh T, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res 2017; 19: e367. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Wigfield A, Eccles JS. Expectancy–value theory of achievement motivation. Contemp Educ Psychol 2000; 25: 68–81. [DOI] [PubMed] [Google Scholar]
- 66.Burton-Jones A, Grange C. From use to effective use: a representation theory perspective. Inf Syst Res 2013; 24: 632–658. [Google Scholar]
- 67.Mirza KB, Arif M, Asim M. Investigating the factors influencing the adoption and use of artificial intelligence applications among Pakistani university research scholars: an empirical study. Inf Dev 2025: 02666669251333147. [Google Scholar]
- 68.Samuelson W, Zeckhauser R. Status quo bias in decision making. J Risk Uncertain 1988; 1: 7–59. [Google Scholar]
- 69.Choudhury A. Toward an ecologically valid conceptual framework for the use of artificial intelligence in clinical settings: need for systems thinking, accountability, decision-making, trust, and patient safety considerations in safeguarding the technology and clinicians. JMIR Human Factors 2022; 9: e35421. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 70.Mehrabian A, Russell JA. An approach to environmental psychology. Cambridge, MA: MIT Press, 1974. [Google Scholar]
- 71.Dalvi-Esfahani M, Mosharaf-Dehkordi M, Leong LW, et al. Exploring the drivers of XAI-enhanced clinical decision support systems adoption: insights from a stimulus-organism-response perspective. Technol Forecast Soc Change 2023; 195: 122768. [Google Scholar]
- 72.Rogers EM. Diffusion of Innovations. 5th ed. New York City, New York, United States: The Free Press, 2003. [Google Scholar]
- 73.Di Sarno L, et al. Artificial intelligence in pediatric emergency medicine: applications, challenges, and future perspectives. Biomedicines 2024; 12: 1220. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 74.Gomolin A, Netchiporouk E, Gniadecki R, et al. Artificial intelligence applications in dermatology: where do we stand? Front Med 2020; 7: 100. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 75.Gill B, et al. ChatGPT is a promising tool to increase readability of orthopedic research consents. J Orthopaed Trauma Rehabil 2024; 31: 148–152. [Google Scholar]
- 76.Ullah E, Parwani A, Baig MM, et al. Challenges and barriers of using large language models (LLM) such as ChatGPT for diagnostic medicine with a focus on digital pathology - a recent scoping review. Diagn Pathol 2024; 19: 9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 77.Benboujja F, et al. Overcoming language barriers in pediatric care: a multilingual, AI-driven curriculum for global healthcare education. Front Public Health 2024; 12: 1337395. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 78.Kautish P, Siddiqui M, Siddiqui A, et al. Technology-enabled cure and care: an application of innovation resistance theory to telemedicine apps in an emerging market context. Technol Forecast Soc Change 2023; 192: 122558. [Google Scholar]
- 79.Cohen B, Dubois S, Lynch PA, et al. Use of an artificial intelligence-driven digital platform for reflective learning to support continuing medical and professional education and opportunities for interprofessional education and equitable access. Education Sci 2023; 13: 760. [Google Scholar]
- 80.Wang W-T. Examining the influence of the social cognitive factors and relative autonomous motivations on Employees’ knowledge sharing behaviors. Decis Sci 2016; 47: 404–436. [Google Scholar]
- 81.Bergdahl J, et al. Self-determination and attitudes toward artificial intelligence: cross-national and longitudinal perspectives. Telemat Inform 2023; 82: 102013. [Google Scholar]
- 82.Bandura A. Social foundations of thought and action: a social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall, 1986. [Google Scholar]
- 83.Kim B-J, Lee J. The mental health implications of artificial intelligence adoption: the crucial role of self-efficacy. Humanit Soc Sci Commun 2024; 11: 1561. [Google Scholar]
- 84.Wu D, et al. Individual motivation and social influence: a study of telemedicine adoption in China based on social cognitive theory. Health Policy Technol 2021; 10: 100555. [Google Scholar]
- 85.Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol 2000; 55: 68–78. [DOI] [PubMed] [Google Scholar]
- 86.Huo W, Li Q, Liang B, et al. When healthcare professionals use AI: exploring work well-being through psychological needs satisfaction and job complexity. Behav Sci 15: 88. [DOI] [PMC free article] [PubMed] [Google Scholar]