Abstract
This study investigates how different forms of literacy shape trust in medical AI and its transfer in healthcare contexts. Based on a survey of 1,250 participants, three findings emerge. First, digital literacy and AI literacy exert opposite influences on medical AI trust: while digital literacy enhances trust, higher AI literacy unexpectedly reduces it. This paradox highlights a theoretical puzzle in technology acceptance, suggesting that deeper knowledge can generate informed skepticism rather than blind confidence. Second, trust in medical AI transfers hierarchically, flowing to hospitals only through physician trust as a critical intermediary, underscoring the role of interpersonal trust in institutional trust building. Third, scientific literacy moderates this process, with higher literacy dampening trust transfer, reflecting the impact of cognitive processing differences. These results extend theories of trust and technology acceptance by integrating multiple literacies and uncovering divergent cognitive pathways. Practically, they call for communication strategies and policy designs that calibrate trust—strengthening physicians’ role as trust brokers, balancing education about AI’s capacities and risks, and leveraging explainable AI tools to sustain appropriate confidence in medical AI.
Keywords: Medical artificial intelligence, Trust formation, Digital literacy, AI literacy, Scientific literacy, Technology trust, Healthcare
Introduction
In healthcare settings, patients often face dual challenges: information asymmetry and a lack of professional knowledge. Information asymmetry refers to the imbalance of medical information and expertise between patients and physicians [5]. The lack of professional knowledge reflects the specialized medical background and clinical skills that patients typically do not possess, which creates a cognitive gap between patients and healthcare providers [17]. This gap hinders effective communication and decision-making, making trust a crucial factor in fostering cooperation within doctor–patient relationships [67]. Medical trust is a multidimensional construct that includes both physician trust and hospital trust. High levels of medical trust are associated with greater treatment compliance, more effective clinical communication, and improved therapeutic outcomes [19, 34, 67].
In recent years, Artificial Intelligence (AI) has become an indispensable support tool in healthcare, performing tasks such as detecting breast cancer from mammograms with accuracy comparable to or exceeding that of radiologists and predicting hospital readmission risks to assist treatment planning [44, 56]. By providing decision support in diagnosis, prognosis, and treatment selection, AI has the potential to improve medical processes and outcomes [65]. However, the “black box” nature of AI—namely its lack of transparency and interpretability in complex models such as deep learning—creates new risks for medical trust. Because patients cannot fully understand how decisions are generated, they must rely on systems whose reasoning remains opaque [13, 53].
These challenges involve both building trust in AI systems themselves and understanding how AI affects patients’ existing trust relationships with physicians and hospitals. Medical AI trust can be defined as an individual’s positive expectations about an AI system’s capabilities, reliability, and intentions [3]. Unlike physician trust, which is rooted in interpersonal interaction, or hospital trust, which builds on institutional reputation, trust in medical AI relies more heavily on patients’ ability to understand and accept technological systems. Paradoxically, the high-risk and opaque nature of AI decision-making makes trust formation more complex, requiring individuals to develop initial trust in medical AI based on limited cognitive frameworks [15].
Previous research on medical AI adoption has primarily drawn on technology acceptance perspectives, emphasizing performance and ease of use as drivers of usage intentions [6, 39]. More recently, scholars have begun to explore trust in medical AI, focusing on issues such as transparency, accountability, and ethical implications [11, 25]. While these works provide valuable insights, they largely conceptualize trust as a single-dimensional phenomenon and emphasize system characteristics or ethical considerations, without systematically addressing the multi-level mechanisms of trust formation and transfer. Questions remain regarding how patients establish initial trust in medical AI without direct experience and how this trust influences broader attitudes toward the healthcare system.
Therefore, this study aims to explore the multi-level mechanisms of trust formation and transfer in medical AI environments, with three specific research objectives: First, to investigate how digital literacy and AI literacy influence patients' initial trust formation in medical AI. This is particularly important as most patients lack direct experience with medical AI systems. Second, to examine how medical AI trust affects overall trust in healthcare institutions through the mediating mechanism of physician trust. This objective seeks to reveal the transfer paths between technological, interpersonal, and institutional trust after AI technology introduction. Third, to test the moderating role of scientific literacy in the trust transfer process, understanding how patients with different cognitive ability levels differ in establishing and transferring trust.
To address these aims, we integrate trust transfer theory and cognitive processing theory to propose a Multi-level Trust Transfer Model (MTTM). This model constructs a hierarchical theoretical framework from the cognitive foundation layer (digital literacy, AI literacy), through the trust mediation layer (medical AI trust, physician trust), to the system trust layer (hospital trust), systematically exploring the internal mechanisms of trust formation and transfer in medical AI environments.
Literature review
Theoretical framework
As shown in Fig. 1, this study proposes the MTTM, which integrates Trust Transfer Theory and Cognitive Processing Theory to explain how trust is formed and transferred in AI-enabled healthcare environments.
Fig. 1.
Multi-level Trust Transfer Model (MTTM). Notes: Fig. 1 presents our model framework, illustrating potential paths between variables. The model hypothesizes associations between AI literacy, digital literacy, and medical AI trust. Further, it postulates that digital literacy and AI literacy influence hospital trust through the mediation of medical AI trust and physician trust. Additionally, medical AI trust is directly related to hospital trust. The model includes gender and age as control variables
Trust Transfer Theory posits that trust can move from one entity to another, either within the same level (intrachannel) or across different levels (interchannel) [35, 64]. In medical AI contexts, patients (trustors) may transfer trust initially formed in AI systems (new trustees) toward physicians and hospitals (established trustees). This perspective captures the multi-level and multi-channel nature of trust transfer. Cognitive Processing Theory complements this by explaining the mental mechanisms underlying trust judgments. Individuals often rely on System 1 (intuitive processing) for rapid judgments when direct experience is lacking, while System 2 (analytical processing) allows more deliberate evaluation of information [14, 32]. This dual-process framework clarifies why patients may establish initial trust in AI based on limited cues, and how cognitive capacities influence subsequent trust transfer.
Based on these theories, the MTTM is structured into three layers:

- Cognitive Foundation Layer: This includes digital literacy, AI literacy, and scientific literacy. Digital literacy refers to general competencies in operating and interpreting digital technologies [23, 41], which reduce barriers to interacting with AI. AI literacy represents domain-specific understanding of AI’s mechanisms, applications, and limitations [8, 9, 33, 36]. Scientific literacy, in contrast, reflects a meta-level capacity to evaluate evidence and reason across domains [46, 47]. Unlike digital and AI literacy, which directly shape initial trust, scientific literacy determines how individuals process and weigh information during trust transfer, thereby functioning as a moderator rather than a direct antecedent [66].
- Trust Mediation Layer: At this level, medical AI trust represents the initial trust formed from cognitive foundations. This trust then mediates the transition from technological trust to interpersonal trust, influencing physician trust.
- System Trust Layer: At the institutional level, hospital trust reflects broader organizational confidence, completing the hierarchical chain of trust transmission.
This framework underscores how distinct types of literacy shape trust formation through different mechanisms: digital literacy reduces usability barriers, AI literacy provides domain-specific knowledge that can both increase and decrease trust, and scientific literacy moderates how trust is transferred across levels. Building on these foundations, the MTTM emphasizes how trust flows across technological, interpersonal, and institutional levels, and how different sources of trust interact to foster trust establishment in medical AI environments.
Hypotheses
Digital literacy and medical AI trust
Digital literacy refers to an individual's ability to understand, evaluate, and use digital technologies [23, 41]. It encompasses basic technological and operational skills as well as information evaluation, digital content creation, and critical thinking abilities in digital environments [2].
Cognitive ability theory suggests that individual technological competence influences perceived usefulness and ease of use of technology, which in turn affects technology acceptance [10]. Higher digital literacy can reduce cognitive load when individuals interact with new technologies [48, 50]. Previous studies have demonstrated that higher digital literacy correlates with increased technology acceptance and trust [40, 68, 70]. This relationship has been validated in various contexts, including educational AI usage, online health information-seeking behavior, and medical auxiliary AI applications [40, 68, 70].
Therefore, we propose H1: Digital literacy positively predicts medical AI trust.
AI literacy and medical AI trust
AI literacy refers to an individual’s ability to understand, evaluate, and interact with AI technologies, including knowledge of their decision-making processes, potential biases, and limitations [8, 9, 33, 36].
According to Social Cognitive Theory, domain-specific knowledge should enable individuals to form more accurate evaluations and foster trust in relevant technologies [4, 49]. Previous research has reported positive associations between AI literacy and trust in educational AI systems, medical chatbots, and general AI applications [29, 60, 70]. From this perspective, individuals with higher AI literacy are expected to perceive AI systems as more useful and reliable, thus increasing trust.
However, an alternative perspective suggests the opposite: individuals with higher AI literacy may also become more aware of AI’s “black box” characteristics, algorithmic bias, and potential risks, leading to greater skepticism and more cautious trust judgments [52, 62]. This possibility highlights the complex and context-dependent role of AI literacy.
Accordingly, we put forward a tentative hypothesis (H2): AI literacy positively predicts medical AI trust.
Medical AI trust and physician trust
Medical AI trust refers to an individual's positive expectations regarding an AI system's capabilities, reliability, and intentions [3]. Physician trust is reflected in confidence in doctors' professional competence, ethical standards, and willingness to communicate [57, 71]. Trust Transfer Theory suggests that an individual's trust in one object can transfer to other related objects [64].
Previous research has demonstrated a connection between technology trust and service provider trust. Trust in technology significantly influences users' trust attitudes toward technology providers [42]. This relationship has been validated in healthcare settings, where patients' trust in medical technology correlates with their trust in healthcare professionals [18]. Therefore, we hypothesize that this trust transfer effect also exists in medical AI environments. In AI-assisted healthcare settings, patients' trust in AI systems is expected to transfer to the physicians using these systems through cognitive balance mechanisms.
Therefore, we propose H3: Medical AI trust positively predicts physician trust.
Physician trust and hospital trust
Hospital trust refers to patients' trust attitudes toward healthcare institutions' overall service capability and reliability [7, 71], reflecting patients' recognition of the healthcare system at the organizational level. According to organizational trust theory, individuals' trust in organization members influences their trust attitudes toward the entire organization [69].
This trust transmission mechanism applies equally in healthcare service environments. As direct service providers, physicians are the primary interface between patients and hospitals. Their professional performance and service attitude often become crucial criteria for patients' evaluation of the entire medical institution [22, 45]. When patients develop high trust in physicians, this positive cognitive and emotional experience likely extends to their assessment of the whole hospital [7, 22].
Therefore, we propose H4: Physician trust positively predicts hospital trust.
Medical AI trust and hospital trust
Individual trust in technological systems influences overall evaluation and trust attitudes toward organizations adopting such technology [43]. Research shows that an organization's technological level and application effectiveness are key factors affecting organizational image and reputation [12]. Adopting advanced technology improves efficiency and sends positive signals about commitment to technological innovation and service quality [58]. When patients trust the AI technology adopted by hospitals, this positive cognition likely transfers to their evaluation of the hospital's overall service capability.
Thus, we propose H5: Medical AI trust positively predicts hospital trust.
Mediating and moderating effects
Based on trust transfer theory and our previous hypotheses, we propose that medical AI trust and physician trust play crucial mediating roles in our model. Trust transfer theory suggests that transferring from cognitive foundations to system trust requires specific mediating mechanisms [64]. Cognitive processing theory further reveals that trust formation is a multi-level cognitive process requiring different levels of cognitive assessment and judgment. In the context of medical AI, patients’ digital and AI literacy—as fundamental cognitive abilities—are unlikely to translate directly into trust in hospitals; instead, they must first shape trust in more specific and concrete objects such as AI systems and physicians.
According to organizational trust theory, trust in individual members can be transferred to trust in the broader institution [69]. In healthcare, physicians occupy a particularly central position in this process. Unlike other healthcare professionals, physicians assume primary responsibility for diagnosis, treatment planning, and clinical decision-making, making them the most salient representatives of the hospital’s professional competence and organizational values [22, 45]. Empirically, prior research shows that patients’ trust in physicians strongly influences their overall trust in hospitals, functioning as a bridge between interpersonal and institutional trust [7]. Thus, physicians act as boundary spanners: they not only embody the technical quality of AI-supported care but also legitimize the hospital’s role in integrating such technologies into clinical practice.
Therefore, we propose H6: Medical AI trust and physician trust mediate the relationship between cognitive foundations (digital literacy, AI literacy) and hospital trust.
Unlike digital literacy and AI literacy, which represent domain-specific competencies directly shaping patients’ initial trust in medical AI, scientific literacy functions at a meta-cognitive level. It reflects broader cognitive abilities such as critical reasoning, evidence evaluation, and systematic thinking [24, 47, 61]. From a theoretical perspective, scientific literacy does not necessarily increase or decrease trust directly. Instead, it determines how patients process and interpret information when transferring trust from technological systems to human agents [66]. According to Cognitive Processing Theory, individuals with higher scientific literacy are more likely to engage in System 2 (analytical) processing, carefully weighing AI outputs against physicians’ expertise, while those with lower scientific literacy rely more on System 1 (intuitive) processing, accepting or rejecting trust transfer with less scrutiny [14, 26]. In this sense, scientific literacy serves as a boundary condition that moderates the trust transfer process.
Based on this, we propose H7: Scientific literacy moderates the relationship between medical AI trust and physician trust.
Methods
Measurements
The questionnaire used in this study was adapted from previously published and validated instruments. The questionnaire consisted of two parts. The first part included demographic questions about participants' gender, age, and medical consultation experience in the past year. The second part contained items for each construct in the research model. Unless otherwise specified, all items were measured using a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree).
Digital literacy was measured using the scale from Hargittai [23], comprising five items (DL1–DL5), for example, “I feel confident in my ability to search and evaluate information online.” AI literacy was measured using a five-item scale (AL1–AL5) adapted from Hargittai [23] and Laupichler et al. [33]. The scale specifically emphasizes awareness of AI’s limitations and potential risks, with items such as “I can describe risks that may arise when using artificial intelligence systems.” Medical AI trust was adapted from scales by Gulati et al. [20] and Huo et al. [28], using three items (AT1–AT3), including “Medical AI systems are safe and reliable.” Physician trust was adapted from scales by Calnan & Sanford [7] and Zheng et al. [71], comprising five items (DT1–DT5), such as “My doctor would do everything possible to ensure my health.” Hospital trust was adapted from scales by Calnan & Sanford [7] and Zheng et al. [71], using five items (HT1–HT5), such as “treatment from the Health Care System, no matter what the patient's race or ethnicity.” Science literacy was measured following the approach of Kahan et al. [31], where participants received one point for each correct answer, resulting in a cumulative score across eight questions.
Participants
Undergraduate students from a university in Eastern China were recruited via course chat groups and completed an anonymous online survey (Wenjuanxing) between April and June 2024. Eligibility required being ≥ 18 years old and having had a medical consultation in the past year. Informed consent was obtained, and ethics approval was granted by the Medical Ethics Committee of Shandong Second Medical University (Approval No. 2024YX181).
Of 1,427 responses, 1,250 valid cases remained after excluding failed attention checks and abnormal completion times (effective rate = 87.67%). The final sample (M_age = 19.96, SD = 0.96; 65.2% female) comprised full-time undergraduates. This group was targeted because students are highly responsive to emerging technologies such as AI and share relatively homogeneous socioeconomic backgrounds, as their financial resources mainly come from parental or state support, which reduces SES variability while at the same time limiting broader generalizability.
Data analysis and statistics
Path analysis in this study was conducted using PLS-SEM, with all analyses performed using the cSEM package in R. Descriptive statistics were obtained through R's base functions. After constructing the structural model, significance testing was performed using bootstrapping with 1,000 iterations. Because PLS-SEM is not a covariance-based method, model evaluation relies on validity assessment rather than global fit indices [59]. In PLS-SEM, the HTMT and Fornell-Larcker criteria are commonly recommended methods for validity testing [1]. Mediation effects were calculated through the product of coefficients. Results are presented as standardized coefficients (β) with 95% confidence intervals (CI).
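For concreteness, the estimation workflow described above can be sketched as follows. This is a minimal illustration rather than the authors' actual code: the data-frame name, the single-indicator specification of the science literacy score, and the omission of the gender and age controls are assumptions, while item names follow the Measurements subsection.

```r
## Minimal cSEM sketch of the model estimation (illustrative; data frame name,
## science-literacy indicator, and omitted controls are assumptions).
library(cSEM)

model <- "
# Measurement model (item names as in the Measurements subsection)
DL =~ DL1 + DL2 + DL3 + DL4 + DL5   # digital literacy
AL =~ AL1 + AL2 + AL3 + AL4 + AL5   # AI literacy
AT =~ AT1 + AT2 + AT3               # medical AI trust
DT =~ DT1 + DT2 + DT3 + DT4 + DT5   # physician trust
HT =~ HT1 + HT2 + HT3 + HT4 + HT5   # hospital trust
SL <~ SL_score                      # science literacy (summed 0-8 score)

# Structural model (gender/age controls omitted for brevity)
AT ~ DL + AL
DT ~ AT + SL + AT.SL                # AT.SL denotes the interaction term
HT ~ AT + DT
"

fit <- csem(.data            = survey_data,   # hypothetical data frame
            .model           = model,
            .resample_method = "bootstrap",
            .R               = 1000)          # 1,000 bootstrap iterations

summarize(fit)   # standardized path estimates with bootstrap 95% CIs
```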
Results
Testing of validity and reliability
Reliability assessment revealed satisfactory internal consistency, with all constructs exhibiting Cronbach's α, rho_A, and rho_C values above the 0.7 threshold and AVE values exceeding 0.5 (Table 1). Multicollinearity assessment of the path model using variance inflation factors (VIF) showed all values below 3 (Table 2), indicating the absence of significant multicollinearity in the path analysis.
Table 1.
Reliability test results
| Construct | Cronbach’s α | AVE | rho_A | rho_C |
|---|---|---|---|---|
| Digital literacy | 0.874 | 0.570 | 0.884 | 0.866 |
| AI literacy | 0.874 | 0.577 | 0.877 | 0.871 |
| Medical AI trust | 0.771 | 0.633 | 0.790 | 0.771 |
| Physician trust | 0.880 | 0.555 | 0.890 | 0.880 |
| Hospital trust | 0.824 | 0.545 | 0.835 | 0.826 |
Table 2.
Variance inflation factors
| Predictor | Medical AI trust | Physician trust | Hospital trust |
|---|---|---|---|
| Digital literacy | 1.001 | - | - |
| AI literacy | 1.010 | - | - |
| Medical AI trust | - | 1.007 | 1.130 |
| Physician trust | - | - | 1.133 |
| Hospital trust | - | - | - |
| Science literacy | - | 1.015 | - |
Table 3 demonstrates satisfactory discriminant validity across all constructs, with the square roots of Average Variance Extracted (AVE) values (ranging from 0.533 to 1.000) exceeding their inter-construct correlations. The Heterotrait-Monotrait (HTMT) analysis provided additional support for discriminant validity, with all HTMT ratios falling well below the conservative criterion of 0.85 (Table 4). These findings collectively establish that all constructs in the measurement model are empirically distinguishable and exhibit adequate discriminant validity.
Table 3.
Fornell-Larcker matrix
| Variables | AI literacy | Digital literacy | Physician trust | Medical AI trust | Hospital trust |
|---|---|---|---|---|---|
| AI literacy | 0.577 | | | | |
| Digital literacy | 0.00053 | 0.570 | | | |
| Physician trust | 0.00147 | 0.0072 | 0.555 | | |
| Medical AI trust | 0.0365 | 0.0237 | 0.107 | 0.533 | |
| Hospital trust | 0.00351 | 0.00269 | 0.496 | 0.076 | 0.545 |
Table 4.
HTMT matrix
| Variables | AI literacy | Digital literacy | Physician trust | Medical AI trust | Hospital trust |
|---|---|---|---|---|---|
| AI literacy | 1.000 | | | | |
| Digital literacy | 0.025 | 1.000 | | | |
| Physician trust | 0.038 | 0.089 | 1.000 | | |
| Medical AI trust | 0.185 | 0.154 | 0.326 | 1.000 | |
| Hospital trust | 0.059 | 0.051 | 0.701 | 0.275 | 1.000 |
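The reliability and discriminant-validity statistics reported in Tables 1, 2, 3 and 4 correspond to standard PLS-SEM quality criteria that cSEM can compute from a fitted object such as `fit` in the sketch above; the criterion labels below follow our reading of the cSEM documentation and may differ across package versions.

```r
## Request the quality criteria reported in Tables 1-4 from the fitted object.
## Criterion labels are assumptions based on the cSEM documentation.
assess(fit,
       .quality_criterion = c("cronbachs_alpha", "rho_C", "ave",  # Table 1
                              "vif",                              # Table 2
                              "fl_criterion",                     # Table 3
                              "htmt"))                            # Table 4
```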
Effects of demographics on the variables
As shown in Table 5, gender significantly affects physician trust (β = 0.072, 95% CI [0.013, 0.130]). However, gender does not significantly affect medical AI trust (β = 0.020, 95% CI [−0.039, 0.079]) or hospital trust (β = −0.020, 95% CI [−0.065, 0.025]).
Table 5.
Direct effect
| Path | Estimate | 95% CI (Lower) | 95% CI (Upper) | T-value |
|---|---|---|---|---|
| Gender → Medical AI trust | 0.020 | −0.039 | 0.079 | 0.657 |
| Age → Medical AI trust | −0.056 | −0.119 | 0.010 | −1.744 |
| AI literacy → Medical AI trust | −0.182 | −0.266 | −0.113 | −4.581 |
| Digital literacy → Medical AI trust | 0.150 | 0.094 | 0.236 | 4.229 |
| Gender → Physician trust | 0.072 | 0.013 | 0.130 | 2.391 |
| Age → Physician trust | 0.048 | −0.003 | 0.104 | 1.757 |
| Science literacy → Physician trust | −0.026 | −0.088 | 0.038 | −0.841 |
| Medical AI trust → Physician trust | 0.313 | 0.254 | 0.381 | 9.667 |
| Science literacy*Medical AI trust → Physician trust | −0.118 | −0.167 | −0.039 | −3.611 |
| Gender → Hospital trust | −0.020 | −0.065 | 0.025 | −0.869 |
| Age → Hospital trust | −0.017 | −0.066 | 0.038 | −0.656 |
| Medical AI trust → Hospital trust | 0.049 | −0.016 | 0.115 | 1.447 |
| Physician trust → Hospital trust | 0.690 | 0.625 | 0.755 | 20.646 |
Age has no significant effect on medical AI trust (β = −0.056, 95% CI [−0.119, 0.010]), physician trust (β = 0.048, 95% CI [−0.003, 0.104]), or hospital trust (β = −0.017, 95% CI [−0.066, 0.038]).
Direct effects
As shown in Fig. 2 and Table 5, for medical AI trust, AI literacy has a significant negative effect (β = −0.182, 95% CI [−0.266, −0.113]), while digital literacy has a significant positive effect (β = 0.150, 95% CI [0.094, 0.236]).
Fig. 2.
Path coefficient diagram. Notes: The Figure shows the direct effects between the variables. *** p < 0.001
For physician trust, medical AI trust shows a significant positive effect (β = 0.313, 95% CI [0.254, 0.381]). Science literacy itself does not have a significant effect on physician trust (β = −0.026, 95% CI: [−0.088, 0.038]), but the interaction term between science literacy and medical AI trust significantly negatively affects physician trust (β = −0.118, 95% CI [−0.167, −0.039]).
Further simple effects analysis reveals that at a higher level of science literacy (+1 SD), the positive effect of medical AI trust on physician trust is weaker (β = 0.203, 95% CI [0.090, 0.284]), whereas at a lower level of science literacy (−1 SD) it is stronger (β = 0.425, 95% CI [0.346, 0.525]). This suggests that the lower the level of science literacy, the stronger the effect of medical AI trust on physician trust (0.425 > 0.203).
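These conditional effects follow the usual plug-in logic for a standardized moderator (simple slope = main effect + interaction coefficient × moderator level). A back-of-the-envelope check using the Table 5 coefficients is shown below; the small discrepancies from the reported values presumably reflect the bootstrap-based estimation.

```r
## Rough simple-slope check, assuming standardized variables:
## slope(Medical AI trust -> Physician trust | SL = m) = b_AT + b_int * m
b_AT  <-  0.313   # Medical AI trust -> Physician trust (Table 5)
b_int <- -0.118   # Science literacy x Medical AI trust interaction (Table 5)

b_AT + b_int * ( 1)   # +1 SD science literacy: ~0.195 (reported: 0.203)
b_AT + b_int * (-1)   # -1 SD science literacy: ~0.431 (reported: 0.425)
```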
Physician trust has a positive effect on hospital trust (β = 0.690, 95% CI [0.625, 0.755]). However, medical AI trust does not have a significant effect on hospital trust (β = 0.049, 95% CI [−0.016, 0.115]).
Indirect and total effects
As shown in Table 6, the effect of digital literacy on hospital trust through medical AI trust is insignificant (β = 0.008, 95% CI [−0.003, 0.023]). Similarly, the effect of AI literacy on hospital trust through medical AI trust is also insignificant (β = −0.009, 95% CI [−0.023, 0.003]).
Table 6.
Indirect and total effects
| Predictor and Path | Path Effect | 95% CI Lower | 95% CI Upper |
|---|---|---|---|
| Digital literacy → Medical AI trust → Physician trust → Hospital trust | 0.035 | 0.019 | 0.055 |
| AI literacy → Medical AI trust → Physician trust → Hospital trust | −0.041 | −0.058 | −0.025 |
| Digital literacy → Medical AI trust → Hospital trust | 0.008 | −0.003 | 0.023 |
| AI literacy → Medical AI trust → Hospital trust | −0.009 | −0.023 | 0.003 |
| Medical AI trust → Physician trust → Hospital trust | 0.221 | 0.168 | 0.272 |
| Digital literacy → Hospital trust (total effect) | 0.044 | 0.023 | 0.070 |
| AI literacy → Hospital trust (total effect) | −0.051 | −0.075 | −0.029 |
Digital literacy has a significant positive effect on hospital trust through the chain mediation of “medical AI trust → physician trust” (β = 0.035, 95% CI [0.019, 0.055]). In contrast, AI literacy negatively affects hospital trust through the same chain mediation path (β = −0.041, 95% CI [−0.058, −0.025]).
Medical AI trust significantly affects hospital trust through physician trust (β = 0.221, 95% CI [0.168, 0.272]). This indicates that physician trust plays an important mediating role in the relationship between medical AI trust and hospital trust: physician trust directly affects hospital trust and serves as a significant intermediary through which medical AI trust impacts hospital trust.
The total effect of digital literacy on hospital trust is 0.044 (β = 0.044, 95% CI [0.023, 0.070]), while the total effect of AI literacy on hospital trust is −0.051 (β = −0.051, 95% CI [−0.075, −0.029]).
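As noted in the Methods, these indirect and total effects are products of the direct path coefficients. A rough check against the Table 5 estimates is given below; minor discrepancies with Table 6 reflect the bootstrap-based estimation.

```r
## Product-of-coefficients check of Table 6 using the direct effects in Table 5.
b_DL_AT <-  0.150   # Digital literacy -> Medical AI trust
b_AL_AT <- -0.182   # AI literacy      -> Medical AI trust
b_AT_DT <-  0.313   # Medical AI trust -> Physician trust
b_DT_HT <-  0.690   # Physician trust  -> Hospital trust
b_AT_HT <-  0.049   # Medical AI trust -> Hospital trust (n.s.)

b_AT_DT * b_DT_HT                         # ~0.216  (Table 6: 0.221)
b_DL_AT * b_AT_DT * b_DT_HT               # ~0.032  (Table 6: 0.035)
b_AL_AT * b_AT_DT * b_DT_HT               # ~-0.039 (Table 6: -0.041)
b_DL_AT * (b_AT_HT + b_AT_DT * b_DT_HT)   # total effect: ~0.040  (Table 6: 0.044)
b_AL_AT * (b_AT_HT + b_AT_DT * b_DT_HT)   # total effect: ~-0.048 (Table 6: -0.051)
```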
Multi-group analysis by digital literacy and AI literacy
We conducted multi-group analysis (MGA) with bootstrapping to test path differences by digital and AI literacy (Table 7). For digital literacy, only two paths differed significantly: AI literacy → AI trust (Difference = 0.025, p < 0.001) and digital literacy → AI trust (Difference = –0.019, p < 0.001). For AI literacy, a similar pattern emerged, with significant differences in AI literacy → AI trust (Difference = 1.326, p < 0.001) and digital literacy → AI trust (Difference = 0.547, p < 0.001). No other paths showed significant variation.
Table 7.
MGA results of path coefficient differences
| Path | Digital literacy: Difference (low − high) | Digital literacy: p | AI literacy: Difference (low − high) | AI literacy: p |
|---|---|---|---|---|
| AI literacy → AI trust | 0.025 | < 0.001 | 11.326 | < 0.001 |
| AI trust → Physician trust | 0.063 | 0.475 | 0.069 | 0.434 |
| AI trust → Hospital trust | −0.028 | 0.76 | 0.003 | 0.979 |
| Digital literacy → AI trust | −0.019 | < 0.001 | 0.547 | < 0.001 |
| Physician trust → Hospital trust | −0.084 | 0.209 | −0.06 | 0.509 |
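For reference, a multi-group comparison of this kind can be run in cSEM by estimating the model separately per group and testing coefficient differences. The sketch below is illustrative only: the grouping variable `DL_group` (e.g., a split of the sample by digital literacy) is an assumption, since the grouping procedure is not detailed above.

```r
## Illustrative multi-group estimation; "DL_group" is an assumed grouping
## variable (e.g., low vs. high digital literacy), not named by the authors.
fit_groups <- csem(.data            = survey_data,
                   .model           = model,
                   .id              = "DL_group",   # estimate the model per group
                   .resample_method = "bootstrap",
                   .R               = 1000)

testMGD(fit_groups)   # bootstrap-based tests of path-coefficient differences
```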
Discussion
Differential effects of multi-level cognitive foundations
A key finding of this study is that digital literacy and AI literacy exhibit opposite pathways in trust formation. Unlike traditional models that assume a uniform positive association between cognition and technology acceptance [30, 51], our results reveal a differentiated pattern.
Specifically, digital literacy shows a significant positive effect on trust, consistent with prior evidence that individuals with higher digital literacy typically demonstrate stronger technological self-efficacy, greater openness to innovation, and more favorable perceptions of usefulness and ease of use [60]. These mechanisms jointly promote higher baseline trust in AI systems.
By contrast, AI literacy exerts a significant negative influence on trust. This effect can be interpreted through cognitive processing depth theory: while digital literacy reflects surface-level engagement that facilitates acceptance, AI literacy activates deeper cognitive processes, making individuals more attentive to systemic risks such as “black box” decision-making, algorithmic bias, and limits in handling non-routine medical cases [8, 36, 55]. This mechanism aligns with the AI literacy scale we employed, which explicitly incorporates not only AI’s advantages but also its risks and complexity, thereby increasing awareness of limitations and reducing “blind trust.” MGA further supports this interpretation: the difference values (low – high) for AI literacy → AI trust are significantly positive, indicating that individuals with lower literacy display much stronger trust, whereas those with higher literacy are more cautious. In contrast, the negative difference for digital literacy confirms that higher digital literacy fosters greater trust. Taken together, these results provide converging evidence that digital and AI literacy shape trust through distinct cognitive pathways—one promoting acceptance via generalized competence, the other inducing caution via heightened awareness of risks.
This finding underscores that trust construction depends not only on cognitive content (what to know) but also on cognitive method (how to know) [38]. By differentiating between surface-level and deep-level cognitive pathways, the study contributes to a more fine-grained integration of cognitive theory with trust theory. It highlights that in highly professionalized domains such as medicine, different forms of literacy can influence the trust-building process in fundamentally distinct ways, thereby enriching the theoretical framework of technology trust. Moreover, the negative association of AI literacy with trust suggests a potentially testable mechanism: as AI literacy increases, individuals may experience greater cognitive load when processing complex information about algorithmic limitations, while also exhibiting stronger affective responses to perceived risks. These dual processes—heightened cognitive burden and risk-related emotions—could jointly attenuate trust. Future research should empirically test this mechanism, for instance by manipulating information complexity or measuring emotional reactions to AI risk disclosures.
Medical trust transfer mechanism
This study reveals a unique pathway of medical trust transmission: Medical AI trust significantly impacts physician trust, and physician trust demonstrates a strong positive effect on hospital trust. However, the direct influence of medical AI trust on hospital trust is insignificant. More importantly, the study confirms a considerable mediation effect (medical AI trust → physician trust → hospital trust), suggesting a hierarchical nature of technology trust and carrying critical theoretical implications for understanding trust-building mechanisms in medical settings.
First, this mediation effect supports trust transfer theory [35, 64]. The findings indicate that trust establishment in the medical field is not a simple point-to-point relationship but rather follows a hierarchical structure from concrete to abstract: from specific technological tools (AI) to direct service providers (physicians), and ultimately extending to the abstract organizational level (hospitals). The central mediating role of physicians confirms the crucial function of interpersonal trust in constructing institutional trust [21]. As direct providers of medical services, physicians not only serve as key links connecting AI technology with medical institutions but also act as core mediators ensuring the effective transformation of technology trust into institutional trust.
Second, the non-significant direct path from AI trust to hospital trust can be explained through the concept of institutional boundaries in public trust [27, 54]. Hospitals, as large and abstract organizations, are perceived not only through their technological capabilities but also through broader structural and social factors such as governance, reputation, and patient experiences [16, 63]. As such, patients’ trust in hospitals is less likely to be shaped directly by their perceptions of AI and more likely to be filtered through their interpersonal encounters with physicians who embody both the technical and social dimensions of care. This highlights that institutional trust requires crossing a boundary: technology-related trust must first be legitimized through interpersonal trust before extending to organizational entities.
This provides a theoretical framework for understanding trust reconstruction in digital transformation, and it further suggests that healthcare systems should strategically support physicians as trust brokers who translate patients’ cautious acceptance of AI into broader confidence in medical institutions.
The moderating effect of scientific literacy: cognitive complexity and trust formation
Another significant finding is the moderating role of scientific literacy in the trust transfer process. Results show that individuals with lower scientific literacy exhibit stronger transfer from AI trust to physician trust. This highlights how cognitive ability shapes trust construction and offers a new perspective on medical AI acceptance.
Drawing on Dual-Process Theory, individuals with lower scientific literacy are more likely to rely on intuitive thinking (System 1), making generalized judgments and thus transferring trust in AI directly to physicians [37]. Conversely, those with higher scientific literacy activate analytical thinking (System 2), engaging in systematic evaluation of factors such as AI reliability, application context, and physicians’ proficiency [61]. Their weaker trust transfer does not signal resistance but reflects a more cautious, evidence-based process.
This finding extends the application of Dual-Process Theory in technology trust research. It demonstrates that an individual's level of scientific literacy influences not only their assessment of the technology itself but also the transfer process of trust between different objects. The discovery of this moderating effect emphasizes the importance of considering individual cognitive characteristics in studying technology acceptance. It provides a theoretical foundation for understanding individual differences in medical AI adoption.
Practical implications
The findings of this study provide several actionable implications for the implementation and governance of medical AI.
First, communication and education strategies should account for the dual role of literacy. The results show that digital literacy fosters trust, whereas AI literacy generates a more cautious attitude. This indicates that knowledge does not function in a uniformly positive way; instead, deeper awareness of risks may lead to “informed skepticism.” For practice, this implies that educational efforts should not be reduced to either uncritical promotion or risk amplification. Instead, information materials should be designed to balance awareness of AI’s strengths with clarity about its boundaries. Such calibrated communication can help prevent both overtrust among low-literacy groups and distrust among highly literate audiences.
Second, physicians must be supported as central intermediaries in trust transfer. The analysis demonstrates that trust in AI does not directly translate into trust in hospitals; rather, it must be mediated through physician trust. This suggests that physicians function as crucial interpreters who contextualize AI outputs and reassure patients about its integration into care. Health systems therefore need to invest in training and institutional support that enable physicians to act as trust brokers—equipping them to explain, validate, and appropriately frame AI-assisted decisions within clinical encounters.
Third, policy design should address heterogeneous cognitive profiles. The moderating effect of scientific literacy indicates that individuals differ in how they process and transfer trust. Those with lower scientific literacy may rely more on intuitive reasoning, making them more susceptible to generalized trust transfer. In contrast, those with higher literacy demand systematic evidence before extending trust. This divergence suggests that communication strategies and policy frameworks should be differentiated. For instance, standardized disclosure mechanisms could include layered formats: intuitive explanations for general audiences alongside more detailed performance data, error rates, and safeguards for highly literate users.
Finally, technology design can play a direct role in shaping trust. Tools that make algorithmic processes more transparent and interpretable to end-users—such as interfaces that clarify how AI reaches conclusions and what its limitations are—may help mitigate the decline in trust observed among more knowledgeable users. In parallel, positioning AI as a support system for human professionals, rather than as an autonomous substitute, can reinforce collaborative trust and reduce fears of depersonalization in care.
In sum, these implications emphasize that building sustainable trust in medical AI requires a multi-level approach: designing balanced educational strategies, reinforcing the communicative role of physicians, tailoring information to heterogeneous literacy profiles, and embedding transparency into technological systems. Such measures can help healthcare institutions achieve both effective adoption and responsible oversight of AI in practice.
Limitations and future directions
This study has several limitations. First, the cross-sectional design constrains causal inference. The negative effect of AI literacy—interpreted as “informed skepticism”—requires further testing through longitudinal designs to track trust dynamics over time, or experimental studies to identify cognitive and affective mechanisms (e.g., cognitive load, risk perception) that mediate this effect.
Second, the reliance on young university students limits generalizability. Although this group is highly relevant given their exposure to AI discourse, they differ from older and more frequent healthcare users in literacy levels, trust orientations, and real-world experience with medical AI. Broader and more heterogeneous samples are needed to strengthen external validity.
Third, the regional focus of the sample narrows applicability. Future studies should extend to diverse healthcare contexts and examine whether the identified mechanisms generalize to other professional domains where AI is increasingly embedded, such as law and education.
Conclusions
By exploring the mechanisms through which medical AI trust is formed and transferred, this study yields significant theoretical and practical findings. The results provide new perspectives on understanding trust construction in medical settings. First, digital literacy and AI literacy exert opposite effects: while digital literacy enhances trust, AI literacy reduces it. Rather than an anomaly, the negative effect of AI literacy constitutes a critical theoretical contribution, challenging conventional technology acceptance models and showing that deeper knowledge of AI’s risks promotes calibrated rather than blind trust. This highlights the importance of cognitive methods (how to know) alongside cognitive content (what to know) in shaping trust. Second, the study confirms a chain transmission mechanism of medical AI trust (medical AI trust → physician trust → hospital trust). This indirect transmission characteristic emphasizes the key mediating role of physicians, validating the crucial position of interpersonal trust in building institutional trust. This finding supports hierarchical trust transfer theory and highlights the deep interactive characteristics of socio-technical systems in medical settings. Third, the study reveals that scientific literacy plays an essential moderating role in the trust transfer process. Individuals with higher scientific literacy demonstrate weaker trust transfer effects, reflecting the profound impact of cognitive ability differences on trust construction. This finding extends the application of Dual-Process Theory in technology trust research, emphasizing the importance of individual cognitive characteristics in the technology acceptance process.
Abbreviations
- AI
Artificial intelligence
- MTTM
Multi-level Trust Transfer Model
Authors’ contributions
J.Y. and Z. Z. analyzed the data and prepared the manuscript. W. H. designed the study, collected the data, and prepared the manuscript. Q. C. and Y. O. participated in data collection and analysis.
Funding
This work was supported by Shandong Provincial Natural Science Foundation [Grant Number ZR2024QC330] and Shandong Second Medical University Educational Teaching Reform and Research Project [Grant Number 2025ZNZX017].
Data availability
The datasets generated and analysed during the current study are not publicly available due to confidentiality or privacy concerns, but are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
This study was approved by the Medical Ethics Committee of Shandong Second Medical University (Approval No. 2024YX181). All participants provided written informed consent. The study was conducted in accordance with the Declaration of Helsinki.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Jie Yao and Zhibo Zhou contributed equally to this work.
References
- 1. Afthanorhan A, Ghazali P, Rashid N. Discriminant validity: a comparison of CBSEM and consistent PLS using Fornell & Larcker and HTMT approaches. J Phys: Conf Ser. 2021;1874:012085. 10.1088/1742-6596/1874/1/012085.
- 2. Alkali YE, Amichai-Hamburger Y. Experiments in digital literacy. Cyberpsychol Behav. 2004;7(4):421–9. 10.1089/cpb.2004.7.421.
- 3. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res. 2020;22(6):e15154. 10.2196/15154.
- 4. Bandura A. Social foundations of thought and action. Englewood Cliffs, NJ. 1986;1986(23–28):2.
- 5. Blomqvist Å. The doctor as double agent: information asymmetry, health insurance, and medical care. J Health Econ. 1991;10(4):411–32.
- 6. Cadario R, Longoni C, Morewedge CK. Understanding, explaining, and utilizing medical artificial intelligence. Nat Hum Behav. 2021;5(12):1636–42. 10.1038/s41562-021-01146-0.
- 7. Calnan MW, Sanford E. Public trust in health care: the system or the doctor? Qual Saf Health Care. 2004;13(2):92. 10.1136/qshc.2003.009001.
- 8. Casal-Otero L, Catala A, Fernández-Morante C, Taboada M, Cebreiro B, Barro S. AI literacy in K-12: a systematic literature review. Int J Stem Educ. 2023;10(1):29. 10.1186/s40594-023-00418-7.
- 9. Celik I. Exploring the determinants of artificial intelligence (AI) literacy: digital divide, computational thinking, cognitive absorption. Telemat Inform. 2023;83:102026.
- 10. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319–40. 10.2307/249008.
- 11. DeCamp M, Tilburt JC. Why we cannot trust artificial intelligence in medicine. Lancet Digit Health. 2019;1(8):e390. 10.1016/S2589-7500(19)30197-9.
- 12. Dwivedi YK, Ismagilova E, Rana NP, Raman R. Social media adoption, usage and impact in business-to-business (B2B) context: a state-of-the-art literature review. Inf Syst Front. 2023;25(3):971–93. 10.1007/s10796-021-10106-y.
- 13. Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients’ perceptions toward human-artificial intelligence interaction in health care: experimental study. J Med Internet Res. 2021;23(11):e25856. 10.2196/25856.
- 14. Evans JSBT, Stanovich KE. Dual-process theories of higher cognition: advancing the debate. Perspect Psychol Sci. 2013;8(3):223–41. 10.1177/1745691612460685.
- 15. Ferrario A, Loi M, Viganò E. Trust does not need to be human: it is possible to trust medical AI. J Med Ethics. 2021;47(6):437. 10.1136/medethics-2020-106922.
- 16. Gilson L. Trust and the development of health care as a social institution. Soc Sci Med. 2003;56(7):1453–68. 10.1016/s0277-9536(02)00142-9.
- 17. Gojanovic B, Fourchet F, Gremeaux V. Cognitive biases cloud our clinical decisions and patient expectations: a narrative review to help bridge the gap between evidence-based and personalized medicine. Ann Phys Rehabil Med. 2022;65(4):101551.
- 18. Gong Y, Wang H, Xia Q, Zheng L, Shi Y. Factors determining a Patient’s willingness to choose ph in online healthcare communities: A trust theory perspective. Technol Soc. 2021;64:101510. 10.1016/j.techsoc.2020.101510.
- 19. Gordon HS, Street RL, Sharf BF, Kelly PA, Souchek J. Racial differences in trust and lung cancer patients’ perceptions of physician communication. J Clin Oncol. 2006;24(6):904–9. 10.1200/JCO.2005.03.1955.
- 20. Gulati S, Sousa S, Lamas D. Modelling trust in human-like technologies. In: Proceedings of the 9th Indian Conference on Human-Computer Interaction, Bangalore, India; 2018. 10.1145/3297121.3297124.
- 21. Hall MA, Camacho F, Dugan E, Balkrishnan R. Trust in the medical profession: conceptual and measurement issues. Health Serv Res. 2002;37(5):1419–39. 10.1111/1475-6773.01070.
- 22. Hall MA, Dugan E, Zheng B, Mishra AK. Trust in physicians and medical institutions: what is it, can it be measured, and does it matter? Milbank Q. 2001;79(4):613–39.
- 23. Hargittai E. Survey measures of web-oriented digital literacy. Soc Sci Comput Rev. 2005;23(3):371–9. 10.1177/0894439305275911.
- 24. Hendriks F, Kienhues D. Science understanding between scientific literacy and trust: psychological and educational research contribut. In: Leßmöllmann A, Dascal M, Gloning T, editors. De Gruyter Mouton; 2020. pp. 29–50. 10.1515/9783110255522-002.
- 25. Hengstler M, Enkel E, Duelli S. Applied artificial intelligence and trust—the case of autonomous vehicles and medical assistance devices. Technol Forecast Soc Change. 2016;105:105–20. 10.1016/j.techfore.2015.12.014.
- 26. Ho S, Leong A, Looi J, Chen L, Pang N, Tandoc E. Science literacy or value predisposition? A meta-analysis of factors predicting public perceptions of benefits, risks, and acceptance of nuclear energy. Environ Commun. 2018;13:1–15. 10.1080/17524032.2017.1394891.
- 27. Høyer HC, Mønness E. Trust in public institutions – spillover and bandwidth. J Trust Res. 2016;6(2):151–66. 10.1080/21515581.2016.1156546.
- 28. Huo W, Zheng G, Yan J, Sun L, Han L. Interacting with medical artificial intelligence: integrating self-responsibility attribution, human–computer trust, and personality. Comput Human Behav. 2022;132:107253.
- 29. Jin E, Ryoo Y, Kim W, Song YG. Bridging the health literacy gap through AI chatbot design: the impact of gender and doctor cues on chatbot trust and acceptance. Internet Research. 2024 (ahead of print). 10.1108/INTR-08-2023-0702.
- 30. Kabakus AK, Bahcekapili E, Ayaz A. The effect of digital literacy on technology acceptance: an evaluation on administrative staff in higher education. J Inform Sci. 2023:01655515231160028. 10.1177/01655515231160028.
- 31. Kahan DM, Peters E, Wittlin M, Slovic P, Ouellette LL, Braman D, et al. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nat Clim Chang. 2012;2(10):732–5. 10.1038/nclimate1547.
- 32. Kahneman D, Frederick S. Representativeness revisited: attribute substitution in intuitive judgment. In: Gilovich T, Griffin D, Kahneman D, editors. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press; 2002. pp. 49–81. 10.1017/CBO9780511808098.004.
- 33. Laupichler MC, Aster A, Haverkamp N, Raupach T. Development of the “Scale for the assessment of non-experts’ AI literacy” – an exploratory factor analysis. Comput Human Behav Rep. 2023;12:100338.
- 34. Lee Y-Y, Lin JL. The effects of trust in physician on self-efficacy, adherence and diabetes outcomes. Soc Sci Med. 2009;68(6):1060–8.
- 35. Lim KH, Sia CL, Lee MKO, Benbasat I. Do I trust you online, and if so, will I buy? An empirical study of two trust-building strategies. J Manage Inf Syst. 2006;23(2):233–66. 10.2753/MIS0742-1222230210.
- 36. Lintner T. A systematic review of AI literacy scales. NPJ Sci Learn. 2024;9(1):50. 10.1038/s41539-024-00264-4.
- 37. Listiani I, Susilo H, Sueb S. Relationship between scientific literacy and critical thinking of prospective teachers. AL-ISHLAH: Jurnal Pendidikan. 2022;14:721–30. 10.35445/alishlah.v14i1.1355.
- 38. Liu X, Magjuka RJ, Lee S-H. The effects of cognitive thinking styles, trust, conflict management on online students’ learning and virtual team performance. Br J Educ Technol. 2008;39(5):829–46. 10.1111/j.1467-8535.2007.00775.x.
- 39. Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. J Consum Res. 2019;46(4):629–50. 10.1093/jcr/ucz013.
- 40. Lucas M, Zhang Y, Bem-haja P, Vicente PN. The interplay between teachers’ trust in artificial intelligence and digital competence. Educ Inf Technol. 2024;29(17):22991–3010. 10.1007/s10639-024-12772-2.
- 41. Martin A, Grudziecki J. DigEuLit: concepts and tools for digital literacy development. Innov Teach Learn Inform Comput Sci. 2006;5(4):249–67. 10.11120/ital.2006.05040249.
- 42. Mcknight DH, Carter M, Thatcher JB, Clay PF. Trust in a specific technology: an investigation of its components and measures. ACM Trans Manage Inf Syst. 2011;2(2):12. 10.1145/1985347.1985353.
- 43. McKnight DH, Liu P, Pentland BT. Trust change in information technology products. J Manage Inf Syst. 2020;37(4):1015–46. 10.1080/07421222.2020.1831772.
- 44. McKinney SM, Sieniek M, Godbole V, Godwin J, Antropova N, Ashrafian H, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577(7788):89–94. 10.1038/s41586-019-1799-6.
- 45. Mechanic D. The functions and limitations of trust in the provision of medical care. J Health Polit Policy Law. 1998;23(4):661–86. 10.1215/03616878-23-4-661.
- 46. Miller JD. Scientific literacy: a conceptual and empirical review. Daedalus. 1983;112(2):29–48. http://www.jstor.org/stable/20024852.
- 47. Miller JD. Public understanding of science and technology in the internet era. Public Understand Sci. 2022;31(3):266–72. 10.1177/09636625211073485.
- 48. Mohammadyari S, Singh H. Understanding the effect of e-learning on individual performance: the role of digital literacy. Comput Educ. 2015;82:11–25.
- 49. Newell A, Simon HA. Human problem solving, vol. 104. Englewood Cliffs, NJ: Prentice-Hall; 1972.
- 50. Ng W. Can we teach digital natives digital literacy? Comput Educ. 2012;59(3):1065–78. 10.1016/j.compedu.2012.04.016.
- 51. Panagoulias DP, Virvou M, Tsihrintzis GA. A novel framework for artificial intelligence explainability via the Technology Acceptance Model and Rapid Estimate of Adult Literacy in Medicine using machine learning. Expert Syst Appl. 2024;248:123375.
- 52. Park K, Young Yoon H. AI algorithm transparency, pipelines for trust not prisms: mitigating general negative attitudes and enhancing trust toward AI. Humanit Soc Sci Commun. 2025;12(1):1160. 10.1057/s41599-025-05116-z.
- 53. Payerchin R. Patients don’t understand use of AI in health care, and many don’t trust it. Med Econ. 2023. https://www.medicaleconomics.com/view/patients-don-t-understand-use-of-ai-in-health-care-and-many-don-t-trust-it.
- 54. Platt JE, Jacobson PD, Kardia SLR. Public trust in health information sharing: a measure of system trust. Health Serv Res. 2018;53(2):824–45. 10.1111/1475-6773.12654.
- 55. Radaelli G, Lettieri E, Frattini F, Luzzini D, Boaretto A. Users’ search mechanisms and risks of inappropriateness in healthcare innovations: the role of literacy and trust in professional contexts. Technol Forecast Soc Change. 2017;120:240–51.
- 56. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. 2019;380(14):1347–58. 10.1056/NEJMra1814259.
- 57. Richmond J, Boynton MH, Ozawa S, Muessig KE, Cykert S, Ribisl KM. Development and validation of the trust in my doctor, trust in doctors in general, and trust in the health care team scales. Soc Sci Med. 2022;298:114827. 10.1016/j.socscimed.2022.114827.
- 58. Samad S, Nilashi M, Almulihi A, Alrizq M, Alghamdi A, Mohd S, et al. Green supply chain management practices and impact on firm performance: the moderating effect of collaborative capability. Technol Soc. 2021;67:101766.
- 59. Sarstedt M, Ringle CM, Hair JF. Partial least squares structural equation modeling. In: Homburg C, Klarmann M, Vomberg AE, editors. Handbook of Market Research. Cham: Springer; 2021. 10.1007/978-3-319-05542-8_15-2.
- 60. Schiavo G, Businaro S, Zancanaro M. Comprehension, apprehension, and acceptance: understanding the influence of literacy and anxiety on acceptance of artificial intelligence. Technol Soc. 2024;77:102537.
- 61. Sharon AJ, Baram-Tsabari A. Can science literacy help individuals identify misinformation in everyday life? Sci Educ. 2020;104(5):873–94.
- 62. Shin D. User perceptions of algorithmic decisions in the personalized AI system: perceptual evaluation of fairness, accountability, transparency, and explainability. J Broadcast Electron Media. 2020;64(4):541–65. 10.1080/08838151.2020.1843357.
- 63. Steerling E, Siira E, Nilsen P, Svedberg P, Nygren J. Implementing AI in healthcare—the relevance of trust: a scoping review. Front Health Serv. 2023;3:1211150. 10.3389/frhs.2023.1211150.
- 64. Stewart KJ. Trust transfer on the world wide web. Organ Sci. 2003;14(1):5–17.
- 65. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56. 10.1038/s41591-018-0300-7.
- 66. van Antwerpen N, Green EB, Sturman D, Searston RA. The impacts of expertise, conflict, and scientific literacy on trust and belief in scientific disagreements. Sci Rep. 2025;15(1):11869. 10.1038/s41598-025-96333-8.
- 67. Wu D, Lowry PB, Zhang D, Tao Y. Patient trust in physicians matters—understanding the role of a mobile patient education system and patient-physician communication in improving patient adherence behavior: field study. J Med Internet Res. 2022;24(12):e42941. 10.2196/42941.
- 68. Yi-No Kang E, Chen D-R, Chen Y-Y. Associations between literacy and attitudes toward artificial intelligence–assisted medical consultations: the mediating role of perceived distrust and efficiency of artificial intelligence. Comput Hum Behav. 2023;139:107529.
- 69. Zaheer A, McEvily B, Perrone V. Does trust matter? Exploring the effects of interorganizational and interpersonal trust on performance. Organ Sci. 1998;9(2):141–59. 10.1287/orsc.9.2.141.
- 70. Zhang C, Rice RE, Wang LH. College students’ literacy, ChatGPT activities, educational outcomes, and trust from a digital divide perspective. New Media Soc. 2024:14614448241301741. 10.1177/14614448241301741.
- 71. Zheng S, Hui SF, Yang Z. Hospital trust or doctor trust? A fuzzy analysis of trust in the health care setting. J Bus Res. 2017;78:217–25.