Abstract
Background
Generative artificial intelligence (GenAI) is reshaping higher education by influencing students’ learning, cognition, and academic decision-making. Understanding the factors associated with students’ acceptance of this technology is crucial for its successful integration. This study extends the Unified Theory of Acceptance and Use of Technology (UTAUT) by incorporating attitude as a mediator and AI literacy as an antecedent to investigate psychological factors associated with GenAI adoption among Chinese university students.
Method
A cross-sectional survey design was employed. Data were collected from 1,536 Chinese university students via an online questionnaire. The instrument included validated scales measuring AI literacy (awareness, evaluation, ethics), UTAUT constructs (performance expectancy, effort expectancy, social influence, facilitating conditions), attitude, behavioral intention, and use behavior. Partial least squares structural equation modeling (PLS-SEM) was used to test the hypothesized relationships.
Results
Performance expectancy (β = 0.345, p < 0.001), social influence (β = 0.154, p < 0.001), and facilitating conditions (β = 0.118, p < 0.05) were positively correlated with students’ attitudes toward GenAI. Performance expectancy (β = 0.266, p < 0.001), effort expectancy (β = 0.078, p < 0.05), and social influence (β = 0.283, p < 0.001) were linked to behavioral intention. Attitude was positively associated with behavioral intention (β = 0.192, p < 0.001), whereas its direct association with use behavior was small and non-significant after FDR correction (β = 0.060, p > 0.05); instead, behavioral intention showed a strong positive association with use behavior (β = 0.257, p < 0.001). Among AI literacy dimensions, awareness (β = 0.137, p < 0.001) and evaluation (β = 0.101, p < 0.001) were positively associated with attitudes, while ethics demonstrated a non-significant negative relationship (β = −0.038, p = 0.148). The model explained 50.7% of the variance in attitude and 47.6% in behavioral intention.
Conclusions
These findings highlight the relevance of AI awareness, evaluative competence, performance expectancy, and social endorsement for understanding students’ positive evaluations and intentions regarding GenAI use. Attitude emerges as a central affective correlate that connects these cognitive appraisals with students’ reported behavioral intentions. Taken together, these patterns may inform efforts to design AI literacy initiatives in university curricula and supportive learning environments that emphasize informed awareness, critical evaluation, and ethical reflection, thereby fostering more psychologically informed and responsible engagement with GenAI in higher education.
Supplementary Information
The online version contains supplementary material available at 10.1186/s40359-026-03989-6.
Keywords: Generative artificial intelligence, AI literacy, Technology acceptance, Unified Theory of Acceptance and Use of Technology extension, Higher education transformation, Structural equation modeling
Background
Generative artificial intelligence (GenAI) refers to technologies that autonomously generate text, images, audio, and other forms of content, thereby offering substantial potential to enhance efficiency and creativity in learning environments [1]. In China, the government has promoted AI-related policies to strengthen educational innovation. The New Generation Artificial Intelligence Development Plan of China (2015–2030) outlines national strategies for cultivating an AI-enabled education ecosystem [2]. As a result, GenAI has been widely integrated into schools, and its transformative impact on higher education has become increasingly evident [3].
The rapid integration of GenAI in higher education introduces both opportunities and challenges [4]. GenAI tools can support course learning, academic research, and everyday tasks [5]. For example, common AI-powered applications include adaptive learning platforms, intelligent tutoring systems, transcription services, enhanced assessment tools, and writing assistance [6–8]. In research contexts, GenAI can enable rapid data processing, provide novel perspectives, and predict academic trends [9]. GenAI has also been applied to enhance students’ psychological well-being, with studies demonstrating improvements in mental health and perceived social support after AI-based interventions [10, 11]. These educational and psychosocial benefits may motivate students to explore and adopt other GenAI tools. However, GenAI adoption is accompanied by notable risks. Students may misuse AI for unauthorized academic assistance, rely excessively on automated outputs, or experience diminished critical thinking and breaches of personal privacy [4, 12–16]. These concerns can reduce students’ willingness to engage with GenAI, particularly in contexts that emphasize academic integrity and ethical responsibility.
The coexistence of educational benefits and potential risks, combined with strong policy support and increasing institutional adoption in China, underscores the urgency to examine the factors associated with university students’ acceptance and use of GenAI in higher education.
Theoretical model evolution
Research on technology adoption is predominantly grounded in three influential frameworks: the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and the Unified Theory of Acceptance and Use of Technology (UTAUT). These models are effective for predicting users’ acceptance and adoption of new technologies [17, 18].
TAM posits that an individual’s adoption decisions are shaped by two cognitive evaluations: perceived usefulness (the extent to which a technology enhances task performance) and perceived ease of use (the amount of effort required to operate the system). These perceptions are theorized to shape users’ attitudes, which are directly related to their behavioral intentions (the degree to which an individual plans or intends to engage with a technology in the future [19]) and ultimate usage behaviors (the actual frequency with which individuals employ a technology [20–23]).
TPB extends this perspective by incorporating additional social and control determinants. Within TPB, attitude represents an individual’s positive or negative evaluations of the target behavior, where a greater correspondence between attitude and behavioral elements is associated with stronger predictive validity [24, 25]. Subjective norms refer to perceived social pressures from significant others to perform a behavior, whereas perceived behavioral control reflects users’ beliefs about the available resources and support necessary to complete the behavior [26]. Together, these factors affect an individual’s behavioral intentions, which further relate to their actual behaviors. TPB therefore highlights how personal evaluations, social expectations, and structural resources shape technology adoption decisions, which are particularly salient in educational environments.
UTAUT integrates the core constructs of the two aforementioned models and identifies four predictors of technology acceptance and use: performance expectancy (the degree to which a technology is believed to enhance outcomes), effort expectancy (the perceived ease of using a technology), social influence (the perceived endorsement of technology use by important others), and facilitating conditions (the belief that organizational or technical infrastructure support is available for the technology) [27]. Empirical studies have consistently shown that these factors are associated with behavioral intentions and actual use across various digital systems in higher education [28–30].
Given its ability to assess cognitive beliefs, social expectations, and environmental supports, the present study adopts UTAUT as the theoretical foundation on which to examine generative AI acceptance among Chinese university students. UTAUT is particularly suitable in this context because it incorporates social influence and facilitating conditions, which are important factors in collectivist educational cultures, such as China, where peer endorsement and institutional support often shape students’ technology-related decisions [31, 32]. Moreover, prior research has demonstrated that UTAUT is applicable to a range of digital learning systems in higher education, thus justifying its use in this study [30, 33, 34]. Nevertheless, existing UTAUT-based research on AI adoption remains limited in several important respects. Although attitude is central to both TAM and TPB, the attitudinal pathway has received limited attention in AI research based on UTAUT, and the possibility that students’ cognitive evaluations are reflected in affective appraisals before being associated with behavioral intentions to engage with GenAI has not been examined in detail. To address this gap, the present study extends the UTAUT model by explicitly modeling attitudes as an evaluative link between belief-based predictors and GenAI-related intentions.
AI literacy: AI awareness, AI evaluation and AI ethics
In addition to general technology-acceptance constructs, recent studies have supplemented traditional technology-adoption variables with AI-specific variables to capture the distinctive characteristics of AI-based systems [18, 35]. In this regard, AI literacy has emerged as a key concept and reflects an individual’s ability to comprehend, evaluate, and proficiently use AI technology to accomplish tasks in an increasingly digital society [36]. Theoretically, students must first recognize AI’s capabilities, critically evaluate the system outputs, and consider ethical implications before forming positive attitudes or perceiving performance benefits. Consistent with this view, empirical work has frequently operationalized AI literacy via dimensions such as awareness, evaluation, ethics, and usage [37–39]. The present study conceptualizes AI literacy in three dimensions: AI awareness (AWA), AI evaluation (EVA) and AI ethics (ETH).
Specifically, AWA refers to an individual’s ability to understand AI-related concepts and technical details and to identify AI-powered technology when using certain applications [36]. Higher awareness of GenAI can reduce uncertainty and ambiguity about the technology, making its functions and boundaries more cognitively accessible, thereby facilitating more confident and favorable attitudes. Prior studies have reported that higher levels of awareness of AI applications are positively associated with university students’ attitudes toward generative AI [18]. In higher-education settings, awareness of GenAI capabilities has been linked to generally favorable attitudes toward its educational value, particularly when accompanied by higher levels of trust in AI systems [40]. More broadly, research with K-12 mathematics teachers similarly finds that AI awareness is positively related to attitudes toward using GenAI for instruction, and these attitudes are in turn associated with reported GenAI use [41].
Moving from recognition to assessment, EVA denotes the capacity to critically analyze AI applications, including selecting appropriate AI tools and assessing their capabilities and limitations [36, 42]. This dimension aligns with the evaluation aspect of digital media literacy [43] and resonates with arguments that AI literacy involves not only critical evaluations but also effective communication and collaboration with AI systems [44]. Conceptually, stronger evaluative competence should improve the perceived reliability of AI outputs while reducing the likelihood of negative learning experiences, thereby supporting more positive attitudes toward using GenAI. Building on this perspective, empirical studies on GenAI adoption suggest that more favorable evaluations of GenAI outputs are associated with more positive attitudes and greater confidence in using these tools [35], and that the ability to scrutinize AI-generated content and potential biases is related to more favorable attitudes and, indirectly, stronger intentions to use GenAI [18].
Finally, ETH captures an individual’s ability to recognize and adhere to ethical principles when interacting with AI systems while remaining alert to potential risks [36]. Ethical considerations can shape attitudes in a dual manner: when users’ heightened ethical sensitivity emphasizes responsible use, it can foster more favorable attitudes, whereas when potential harms are overweighted, it may lead to more negative attitudes. This ethical space has been broadened by principles such as sociocultural responsibility, respect, and truthfulness [45] and is echoed in educational frameworks emphasizing fairness, accountability, transparency, and ethics in learning contexts [46]. Empirical findings further clarify how AI ethics shapes users’ evaluations of and attitudes toward AI technologies. Some studies have shown that university students’ ethical awareness of GenAI can simultaneously promote more ethically aligned intentions and heighten perceived ethical risks, which in turn suppress actual use behaviors when anxiety about AI’s moral consequences is strong [47]. Similarly, others found that students’ awareness of negative consequences strengthens personal moral norms yet directly reduces their intention to adopt GenAI-supported learning tools [48]; and positive ethical perceptions of AI such as transparency and data privacy protection can strengthen the role of trust in shaping social acceptance of AI technologies [49].
However, few studies have systematically integrated AI literacy into the UTAUT framework, particularly in the context of GenAI-supported learning, where empirical work remains at a relatively early stage. Moreover, existing research tends to treat AI literacy as a single, undifferentiated construct, leaving unclear which specific literacy facets matter most for adoption decisions and through what psychological mechanisms. To address these limitations, the present study extends the UTAUT model by conceptualizing AI literacy as a three-dimensional construct (AI awareness, AI evaluation, and AI ethics) and positioning these dimensions as proximal antecedents of students’ attitudes toward GenAI in education.
Hypothesis development
The proposed theoretical extension offers an integrated framework for examining how technology acceptance constructs and AI literacy are associated with GenAI-related attitudes and intentions in higher education. Within this framework, the study offers a timely, context-specific examination of GenAI acceptance in Chinese higher education, while providing a more fine-grained test of how distinct AI literacy facets (awareness, evaluation, and ethics) relate to attitudes and intentions within an extended UTAUT model. Accordingly, the purpose of this study is twofold: first, to identify the key factors associated with Chinese university students’ attitudes toward, and behavioral intentions regarding, GenAI adoption; and second, to examine how different dimensions of AI literacy are related to students’ attitudes in ways that may be linked to their intention to use GenAI-based tools in academic settings.
Based on these objectives, the following hypotheses are proposed, and Fig. 1 illustrates the proposed research model encompassing all hypothesis pathways.
Fig. 1.
Research model and hypothesis pathways
H1-3: AI awareness (AWA), AI evaluation (EVA), and AI ethics (ETH) are positively associated with university students’ GenAI use behavior (UB), and these relationships are sequentially mediated by attitude (ATT) and behavioral intention (BI).
H4-7: Performance expectancy (PE), effort expectancy (EE), social influence (SI), and facilitating conditions (FC) are positively associated with university students’ GenAI UB, and these relationships are mediated by ATT and/or BI.
H8-11: PE, EE, SI, and FC are positively associated with university students’ GenAI UB, and these relationships are mediated by BI.
H12: ATT is positively associated with university students’ BI to use GenAI.
H13: ATT is positively associated with university students’ GenAI UB.
H14: BI is positively associated with university students’ GenAI UB.
Methodology
Measurement items
The questionnaire used in this study was designed based on the aforementioned models (TAM, TPB, and UTAUT), adapting items from validated scales to the context of Chinese university students and GenAI. The survey had two parts: the first part gathered demographic information (e.g., gender, age), and the second part contained items measuring the latent variables in the developed model. Table 1 presents the 29 measurement items adjusted to the GenAI context for Chinese students, along with the corresponding sources. All measurement items were originally developed in English and were translated into Chinese by five bilingual researchers. Consistent with recommendations for cross-cultural scale adaptation [50], the forward translations were then reviewed by two domain experts, who reconciled discrepancies and refined wording to ensure semantic accuracy, conceptual equivalence, and cultural appropriateness. A small pilot test confirmed the comprehensibility and contextual suitability of all translated items among Chinese university students. Although a formal back-translation procedure was not conducted, the translation was carefully adapted following standardized cross-cultural scale adaptation guidelines. Moreover, similar translated items have been used in previous research with Chinese university students [35] and have demonstrated satisfactory psychometric properties, further supporting their cultural applicability. Complementing this prior evidence, the present study’s confirmatory factor analysis (CFA) and PLS-SEM measurement model yielded satisfactory reliability and validity indices (e.g., factor loadings, AVE, composite reliability, and discriminant validity), indicating that the adapted Chinese scales exhibited sound structural validity in this sample.
Table 1.
Measurement scale
| Construct | Item | Item wording | Source |
|---|---|---|---|
| AWA | AWA1 | I can distinguish between GenAI systems and non-GenAI systems | [51] |
| | AWA2 | I know how GenAI can help me | |
| | AWA3 | I can identify the GenAI employed in the applications and products | |
| ETH | ETH1 | I always comply with ethical principles when using GenAI or products | |
| | ETH2 | I am always alert to privacy and information security issues when using GenAI or products | |
| | ETH3 | I am always alert to the abuse of GenAI | |
| EVA | EVA1 | I can evaluate the capabilities and limitations of GenAI or product after using it for a while | |
| | EVA2 | I can choose a proper solution from various solutions provided by a smart agent | |
| | EVA3 | I can choose the most appropriate GenAI or product from a variety for a particular task | |
| PE | PE1 | I find GenAI useful in my daily life | [52] |
| | PE2 | Using GenAI helps me accomplish things more quickly | |
| | PE3 | Using GenAI increases my productivity | |
| EE | EE1 | My interaction with GenAI is clear and understandable | |
| | EE2 | Learning how to use GenAI is easy for me | |
| | EE3 | It is easy for me to become skillful at using GenAI | |
| SI | SI1 | My parents support me in learning how to use GenAI | [51, 52] |
| | SI2 | My classmates believe it is necessary to learn to use GenAI | |
| | SI3 | My teachers believe it is necessary to learn to use GenAI | |
| | SI4 | Most people I know believe that I should learn to use GenAI | |
| FC | FC1 | I have the resources necessary to use GenAI | [52] |
| | FC2 | I have the knowledge necessary to use GenAI | |
| | FC3 | GenAI is compatible with other technologies I use | |
| | FC4 | I can get help from others when I have difficulties using GenAI | |
| BI | BI1 | I intend to continue using GenAI in the future | |
| | BI2 | I plan to continue to use GenAI frequently | |
| | BI3 | I will always try to use GenAI in my daily life | |
| ATT | ATT1 | GenAI makes work more interesting | [22, 25] |
| | ATT2 | I like the idea of using GenAI | |
| | ATT3 | Using GenAI is pleasant | |
All items were translated following the procedures described in the Measurement items section
AWA AI Awareness, ETH AI Ethics, EVA AI Evaluation, PE Performance Expectancy, EE Effort Expectancy, SI Social Influence, FC Facilitating Conditions, BI Behavioral Intention, ATT Attitude
Except for actual use behavior, all items were measured using a five-point Likert scale, with scores ranging from 1 (“strongly disagree”) to 5 (“strongly agree”). In line with prior UTAUT-based studies, actual GenAI use was assessed with a behavior-anchored, frequency-based item adapted from the original UTAUT scale [20]. Respondents indicated how often they used GenAI products on a seven-point scale (“never”, “once a month”, “several times a month”, “once a week”, “several times a week”, “once a day”, “several times a day”). For the multi-item constructs, the measurement model demonstrated adequate reliability and validity, including internal consistency reliability, convergent validity (AVE > 0.50, factor loadings > 0.70), and discriminant validity (Fornell-Larcker criterion, heterotrait-monotrait [HTMT] ratios, and cross-loading assessments), thus supporting the validity of the adapted Chinese scale.
Data collection
The study was conducted in April 2025. The online survey was administered via the Questionnaire Star platform (https://www.wjx.cn/), and a brief definition of GenAI was provided at the beginning of the questionnaire to ensure all respondents had a common understanding. The participants included university students across various regions of China. Ethical approval was obtained and informed consent was secured from all participants before data collection. Responses were collected voluntarily over a fixed period. A total of 1855 responses were received; after removing incomplete or invalid entries, 1536 valid responses remained, corresponding to an effective response rate of approximately 83%.
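For transparency, the effective response rate can be recomputed directly from the reported counts; a minimal arithmetic sketch:

```python
# Effective response rate from the reported response counts.
total_responses = 1855
valid_responses = 1536

response_rate = valid_responses / total_responses
print(f"{response_rate:.1%}")  # prints "82.8%", i.e. approximately 83%
```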
Sample characteristics
The demographic characteristics of the sample population (N = 1536) are presented using descriptive statistics in Table 2. Overall, 327 (21.2%) participants were male and 1209 (78.7%) were female. The sample covered a range of academic majors: 71.6% of the students majored in liberal arts (n = 1100), 9.7% majored in science (n = 150), 12.5% majored in engineering (n = 192), and 6.1% had other majors (n = 94). In terms of age, the majority of respondents (approximately 89.3%) were between 18 and 23 years old, consistent with the target demographic of university students. Approximately 10% of participants were older than 23 years, and less than 1% were older than 30 years, reflecting a predominantly young adult cohort. Geographically, students from eastern, central, and western regions of China participated, with the highest representation from the Hubei, Hunan, Henan, Ningxia, Jiangsu, and Guangdong provinces.
Table 2.
Profile and characteristics of Chinese university students (N = 1,536)
| Items | Characteristics | Frequency | Percentage (%) |
|---|---|---|---|
| Gender | Male | 327 | 21.2 |
| | Female | 1209 | 78.7 |
| Major | Liberal arts | 1100 | 71.6 |
| | Science | 150 | 9.7 |
| | Engineering | 192 | 12.5 |
| | Other | 94 | 6.1 |
| Age (years) | 18 to 23 | 1373 | 89.3 |
| | 24 to 29 | 142 | 9.2 |
| | 30 to 35 | 14 | 0.9 |
| | > 35 | 7 | 0.4 |
| Frequency of using GenAI applications | Never | 16 | 1.0 |
| | Once a month | 65 | 4.2 |
| | Several times a month | 285 | 18.5 |
| | Once a week | 143 | 9.3 |
| | Several times a week | 568 | 36.9 |
| | Once a day | 92 | 5.9 |
| | Several times a day | 367 | 23.8 |
Respondents’ GenAI use frequency varied widely. The majority (approximately 76%) were regular users of GenAI applications, reporting use at least once a week. Approximately 23% used GenAI a few times a month or less, and 1% had never used GenAI. These figures indicate that most respondents were familiar with GenAI before the study.
Data analysis
Data cleaning and descriptive analyses were conducted using SPSS version 25, and the Kolmogorov–Smirnov test confirmed that the data did not follow a normal distribution. Before data analysis, common method variance was assessed, and Harman’s one-factor test indicated no severe common method bias; the first factor accounted for 36.7% of variance, well below the 50% threshold [53]. To further examine the structural validity of the adapted scales in the Chinese university student sample, a confirmatory factor analysis (CFA) was conducted in Mplus 8.3 using the robust maximum likelihood estimator (MLR). A nine-factor model was specified, with each item loading on its intended latent construct (AI awareness, AI evaluation, AI ethics, performance expectancy, effort expectancy, social influence, facilitating conditions, attitude, and behavioral intention).
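The Harman's one-factor test mentioned above can be sketched as follows. The data here are synthetic (the loading of 0.5, the seed, and the sample size are illustrative assumptions, not the study's data); the check simply asks whether the first unrotated factor, approximated by the largest eigenvalue of the item correlation matrix, accounts for less than 50% of total variance:

```python
import numpy as np

# Sketch of Harman's one-factor test on synthetic responses to 29 items.
# The study itself reports a first-factor share of 36.7% (< 50%).
rng = np.random.default_rng(0)
n_respondents, n_items = 500, 29
common = rng.normal(size=(n_respondents, 1))            # shared (method) factor
items = 0.5 * common + rng.normal(size=(n_respondents, n_items))

corr = np.corrcoef(items, rowvar=False)                 # 29 x 29 correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)
first_factor_share = eigenvalues.max() / n_items        # eigenvalues of R sum to n_items
print(f"First factor: {first_factor_share:.1%} of variance (threshold: 50%)")
```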
This study then employed a partial least squares structural equation modeling (PLS-SEM) approach using SmartPLS4. PLS-SEM was considered appropriate because of the sample’s non-normal distribution and the exploratory nature of the developed model with potentially formative constructs [54–56].
The PLS-SEM approach adopted a two-step strategy. First, the measurement model was assessed, followed by the structural model. For the measurement model, the indicator loadings and composite reliability (CR) were determined to evaluate reliability, and the average variance extracted (AVE) was examined to assess convergent validity, following established thresholds and methodological guidelines [36, 56]. Discriminant validity was evaluated according to the HTMT ratio and the Fornell-Larcker criterion and by checking that each indicator’s loading on its own construct exceeded cross-loadings on other constructs [57, 58]. For the structural model, multicollinearity was assessed based on variance inflation factors (VIF), and then the pathway coefficients and their significance were evaluated. In addition to examining the direct relationships among constructs, specific indirect effects were explored to examine potential mediation pathways. The R2 values are reported herein for key endogenous constructs as a measure of explained variance [57]. On the basis of established methodological recommendations, 5000 bootstrapping resamples were used to derive stable standard errors and 95% confidence intervals (CIs) for all estimates [59]. All analyses were conducted using the PLS-SEM algorithm and bootstrapping module in SmartPLS4. To address the risk of inflated Type I error due to multiple tests of structural paths and indirect effects, we applied the Benjamini–Hochberg false discovery rate (FDR) procedure to all p-values, and statistical inferences were based on the FDR-adjusted values.
Results
Measurement model
The CFA conducted in Mplus 8.3 indicated that the hypothesized nine-factor measurement model showed good fit to the data, χ2(342) = 1331.87, χ2/df = 3.89, CFI = 0.95, TLI = 0.94, RMSEA = 0.043 (90% CI [0.041, 0.046]), and SRMR = 0.040. All standardized factor loadings were substantial (mostly above 0.70), with only a few items slightly below this threshold but still within an acceptable range. These results support the proposed factor structure and provide additional evidence for the structural validity of the adapted scales in this context.
Consistent with the CFA findings, the reliability and validity of the measurement model are presented in Tables 3, 4, and 5. Item reliability was first assessed based on indicator loadings; a loading greater than 0.7 is regarded as evidence of strong reliability [59]. As shown in Table 3, the item loadings ranged from 0.723 to 0.982, all above the 0.7 threshold, confirming strong reliability. Internal consistency reliability can be measured using Cronbach's alpha and/or CR. Fornell and Larcker reported that if either value is greater than 0.7, this indicates adequate internal consistency [60]. As shown in Table 3, all constructs had Cronbach's α and CR above 0.7, confirming that the measurement items consistently captured the intended corresponding latent construct.
Table 3.
Reliability and validity statistics for constructs
| Construct | Item | Outer loading | Cronbach's alpha (α > 0.7) | Composite reliability (> 0.7) | AVE (> 0.5) |
|---|---|---|---|---|---|
| AWA | AWA1 | 0.723 | 0.751 | 0.827 | 0.658 |
| | AWA2 | 0.878 | | | |
| | AWA3 | 0.825 | | | |
| ETH | ETH1 | 0.908 | 0.859 | 0.872 | 0.779 |
| | ETH2 | 0.889 | | | |
| | ETH3 | 0.850 | | | |
| EVA | EVA1 | 0.834 | 0.799 | 0.804 | 0.714 |
| | EVA2 | 0.877 | | | |
| | EVA3 | 0.823 | | | |
| PE | PE1 | 0.949 | 0.954 | 0.954 | 0.915 |
| | PE2 | 0.939 | | | |
| | PE3 | 0.982 | | | |
| EE | EE1 | 0.858 | 0.805 | 0.819 | 0.717 |
| | EE2 | 0.862 | | | |
| | EE3 | 0.820 | | | |
| SI | SI1 | 0.766 | 0.854 | 0.859 | 0.697 |
| | SI2 | 0.839 | | | |
| | SI3 | 0.872 | | | |
| | SI4 | 0.857 | | | |
| FC | FC1 | 0.786 | 0.798 | 0.800 | 0.622 |
| | FC2 | 0.811 | | | |
| | FC3 | 0.790 | | | |
| | FC4 | 0.767 | | | |
| BI | BI1 | 0.922 | 0.884 | 0.893 | 0.812 |
| | BI2 | 0.917 | | | |
| | BI3 | 0.862 | | | |
| ATT | ATT1 | 0.920 | 0.932 | 0.933 | 0.881 |
| | ATT2 | 0.916 | | | |
| | ATT3 | 0.979 | | | |
AWA AI Awareness, ETH AI Ethics, EVA AI Evaluation, PE Performance Expectancy, EE Effort Expectancy, SI Social Influence, FC Facilitating Conditions, BI Behavioral Intention, ATT Attitude
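The convergent-validity figures in Table 3 can be reproduced from the standardized outer loadings. A minimal sketch using the AWA loadings (note that PLS-SEM software reports more than one composite-reliability variant, so the CR value computed below, the rho_c form, need not match the tabled value exactly):

```python
# AVE and composite reliability (rho_c) from standardized outer loadings,
# illustrated with the AWA loadings from Table 3.
loadings = [0.723, 0.878, 0.825]

ave = sum(l ** 2 for l in loadings) / len(loadings)
cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(1 - l ** 2 for l in loadings))

print(round(ave, 3))  # 0.658, matching the AVE reported for AWA in Table 3
print(round(cr, 3))
```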
Table 4.
Heterotrait-monotrait ratios
| | ATT | AWA | BI | EE | ETH | EVA | FC | PE | SI |
|---|---|---|---|---|---|---|---|---|---|
| ATT | | | | | | | | | |
| AWA | 0.604 | | | | | | | | |
| BI | 0.620 | 0.505 | | | | | | | |
| EE | 0.616 | 0.808 | 0.573 | | | | | | |
| ETH | 0.494 | 0.367 | 0.461 | 0.387 | | | | | |
| EVA | 0.521 | 0.683 | 0.562 | 0.656 | 0.509 | | | | |
| FC | 0.669 | 0.902 | 0.591 | 0.907 | 0.494 | 0.730 | | | |
| PE | 0.685 | 0.599 | 0.652 | 0.700 | 0.537 | 0.602 | 0.692 | | |
| SI | 0.593 | 0.553 | 0.655 | 0.552 | 0.575 | 0.655 | 0.661 | 0.597 | |
AWA AI Awareness, ETH AI Ethics, EVA AI Evaluation, PE Performance Expectancy, EE Effort Expectancy, SI Social Influence, FC Facilitating Conditions, BI Behavioral Intention, ATT Attitude
Table 5.
Fornell-Larcker criterion
| | ATT | AWA | BI | EE | ETH | EVA | FC | PE | SI | UB |
|---|---|---|---|---|---|---|---|---|---|---|
| ATT | 0.939 | | | | | | | | | |
| AWA | 0.538 | 0.811 | | | | | | | | |
| BI | 0.566 | 0.444 | 0.901 | | | | | | | |
| EE | 0.541 | 0.645 | 0.491 | 0.847 | | | | | | |
| ETH | 0.446 | 0.324 | 0.410 | 0.332 | 0.883 | | | | | |
| EVA | 0.451 | 0.546 | 0.477 | 0.530 | 0.424 | 0.845 | | | | |
| FC | 0.579 | 0.719 | 0.501 | 0.730 | 0.414 | 0.586 | 0.789 | | | |
| PE | 0.646 | 0.546 | 0.602 | 0.621 | 0.489 | 0.528 | 0.606 | 0.957 | | |
| SI | 0.530 | 0.467 | 0.574 | 0.464 | 0.496 | 0.545 | 0.547 | 0.540 | 0.835 | |
| UB | 0.205 | 0.112 | 0.291 | 0.131 | 0.109 | 0.117 | 0.132 | 0.201 | 0.210 | 1.000 |
AWA AI Awareness, ETH AI Ethics, EVA AI Evaluation, PE Performance Expectancy, EE Effort Expectancy, SI Social Influence, FC Facilitating Conditions, BI Behavioral Intention, ATT Attitude, UB Use Behavior
Convergent validity reflects whether items converge to measure the same construct, and this factor was evaluated using AVE. All constructs had AVE values greater than the 0.5 threshold requirement (Table 3) [60]. Discriminant validity was examined using three complementary approaches: the HTMT ratio, the Fornell-Larcker criterion, and cross-loading analysis. As shown in Table 4, most HTMT ratios were below the conservative threshold of 0.9 [58], indicating satisfactory discriminant validity. Two construct pairs, AI awareness and facilitating conditions (AWA-FC = 0.902) and effort expectancy and facilitating conditions (EE-FC = 0.907), slightly exceeded this threshold. However, such marginal deviations are acceptable when constructs are conceptually related, and other validity criteria are satisfied [57, 61]. The Fornell-Larcker criterion (Table 5) was also met, i.e., the square roots of the AVE values exceeded the inter-construct correlations, indicating that each construct shared more variance with its own indicators than with other constructs [60]. Finally, the cross-loading results (Appendix A) further supported discriminant validity; all measurement items loaded highest on their intended constructs, with differences of at least 0.10 relative to other constructs. Overall, the results from the HTMT ratio, Fornell-Larcker, and cross-loading analyses confirmed that discriminant validity was established for all constructs in the measurement model.
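The Fornell-Larcker check amounts to a simple comparison per construct; a sketch for AWA, using its AVE from Table 3 and its inter-construct correlations from Table 5:

```python
import math

# Fornell-Larcker criterion for one construct: sqrt(AVE) must exceed the
# construct's correlations with all other constructs.
ave_awa = 0.658                                   # AVE for AWA (Table 3)
corr_awa = [0.538, 0.444, 0.645, 0.324, 0.546,    # AWA's correlations with the
            0.719, 0.546, 0.467, 0.112]           # nine other constructs (Table 5)

sqrt_ave = math.sqrt(ave_awa)
print(round(sqrt_ave, 3))                   # 0.811, the AWA diagonal entry in Table 5
print(all(r < sqrt_ave for r in corr_awa))  # True: the criterion holds for AWA
```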
These results indicate that the measurement model has satisfactory reliability and validity, thus providing a solid foundation for structural model analysis.
Structural model
The structural model underwent systematic validation to ensure statistical robustness. Before examining pathway coefficients, the collinearity among predictors in the structural model was assessed. All VIF values (Table 6) were below the critical threshold of 5 [57], indicating no significant multicollinearity issues in the model.
Table 6.
Variance inflation factors
| ATT | BI | UB | |
|---|---|---|---|
| AWA | 2.238 | ||
| EVA | 1.845 | ||
| ETH | 1.492 | ||
| PE | 1.954 | 2.206 | |
| EE | 2.533 | 1.756 | |
| SI | 1.819 | 1.563 | |
| FC | 2.632 | 2.584 | |
| ATT | 1.933 | 1.471 | |
| BI | 1.471 |
AWA AI Awareness, ETH AI Ethics, EVA AI Evaluation, PE Performance Expectancy, EE Effort Expectancy, SI Social Influence, FC Facilitating Conditions, BI Behavioral Intention, ATT Attitude, UB Use Behavior
Bootstrapping was used to obtain stable estimates of standard errors and t-statistics for each pathway coefficient (Table 7). After applying the FDR correction, most key theoretical pathways remained statistically significant. For example, ATT showed a significant positive association with BI (β = 0.192, p < 0.001), and BI showed a strong positive association with UB (β = 0.257, p < 0.001). In contrast, the direct path from ATT to UB was small and did not reach statistical significance after FDR adjustment (β = 0.060, p > 0.05), suggesting that attitudes relate to usage primarily through behavioral intentions rather than via a direct link. Several notable relationships were also observed among the antecedent variables. Specifically, AWA and EVA were each positively associated with ATT (β = 0.137 and β = 0.101, both p < 0.001); PE was positively correlated with both ATT and BI (β = 0.345 and β = 0.266, both p < 0.001); EE was modestly associated with BI (β = 0.078, p < 0.05) but not with ATT; SI was positively correlated with both ATT and BI (β = 0.154 and β = 0.283, both p < 0.001); and FC was positively associated with ATT (β = 0.118, p < 0.05) but not with BI. Three pathways (ETH → ATT, EE → ATT, and FC → BI) were not statistically significant, with 95% CIs that included 0; with the exception of ATT → UB, all other hypothesized pathways were significant, supporting the proposed model relationships. Overall, these findings validate the extended UTAUT framework. Notably, PE and SI were the strongest correlates of both ATT and BI, consistent with prior empirical findings [20].
Table 7.
Structural model pathway coefficients
| Relationship | Pathway coefficient (β) | t value | p value | Bias-corrected 95% CI | Significance (p < 0.05) |
|---|---|---|---|---|---|
| AWA → ATT | 0.137 | 4.410 | < 0.001 | [0.075, 0.198] | Yes |
| EVA → ATT | 0.101 | 3.832 | < 0.001 | [0.047, 0.151] | Yes |
| ETH → ATT | − 0.038 | 1.447 | 0.174 | [− 0.088, 0.014] | No |
| PE → ATT | 0.345 | 10.519 | < 0.001 | [0.282, 0.409] | Yes |
| PE → BI | 0.266 | 8.923 | < 0.001 | [0.206, 0.323] | Yes |
| EE → ATT | 0.067 | 1.916 | 0.082 | [− 0.003, 0.135] | No |
| EE → BI | 0.078 | 2.561 | 0.019 | [0.018, 0.137] | Yes |
| SI → ATT | 0.154 | 5.474 | < 0.001 | [0.100, 0.211] | Yes |
| SI → BI | 0.283 | 9.956 | < 0.001 | [0.227, 0.340] | Yes |
| FC → ATT | 0.118 | 2.767 | 0.011 | [0.033, 0.199] | Yes |
| FC → BI | 0.018 | 0.505 | 0.615 | [− 0.053, 0.086] | No |
| ATT → BI | 0.192 | 6.821 | < 0.001 | [0.138, 0.248] | Yes |
| ATT → UB | 0.060 | 2.036 | 0.067 | [0.001, 0.118] | No |
| BI → UB | 0.257 | 8.508 | < 0.001 | [0.197, 0.316] | Yes |
Reported p-values are Benjamini–Hochberg FDR-adjusted to control for multiple comparisons, and all statistical inferences in the text are based on these adjusted values
AWA AI Awareness, ETH AI Ethics, EVA AI Evaluation, PE Performance Expectancy, EE Effort Expectancy, SI Social Influence, FC Facilitating Conditions, BI Behavioral Intention, ATT Attitude, UB Use Behavior
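The Benjamini-Hochberg adjustment applied to Table 7 can be sketched in a few lines: sort the raw p-values, scale the i-th smallest by m/i, then enforce monotonicity with a running minimum from the largest rank down. The raw p-values below are hypothetical, for illustration only.

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg (FDR) adjusted p-values via the step-up procedure."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)      # p_(i) * m / i
    adj = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    adj = np.minimum(adj, 1.0)
    out = np.empty(m)
    out[order] = adj
    return out

raw = [0.001, 0.004, 0.020, 0.049, 0.300]  # hypothetical raw p-values
print(bh_adjust(raw))  # [0.005   0.01    0.03333 0.06125 0.3    ] (approx.)
```

Note how an unadjusted p = 0.049 rises to 0.061 after adjustment, which is the same mechanism by which the raw ATT → UB p-value could cross above 0.05 in Table 7.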
In addition to examining direct effects, formal mediation analyses were conducted using 5000 bias-corrected bootstrapping samples to assess the significance of specific indirect pathways. As shown in Table 8, the specific indirect effects analysis further clarified how attitudes and intentions link belief-based predictors to GenAI usage. After applying the FDR correction, several antecedents showed significant sequential indirect associations with UB via the ATT → BI pathway (e.g., AWA, EVA, SI, FC, and PE), indicating complementary partial mediation through an affective-intentional route. The corresponding indirect paths to BI alone were also significant for these predictors, whereas the analogous chains involving ETH were not, consistent with its weak association with ATT. In contrast, none of the simple indirect paths from the antecedents to UB via ATT alone remained significant once FDR adjustment was applied; for example, the indirect effect of PE on UB through ATT was small and non-significant, suggesting that performance expectancy is primarily related to usage through behavioral intention (with attitudes feeding into BI) rather than through attitudes alone. Finally, several predictors showed additional indirect links to UB through behavioral intentions only: ATT, EE, PE, and SI all had significant indirect effects on UB via BI, whereas the corresponding effect for FC was not significant. Taken together, these patterns suggest that attitudes and intentions jointly form a central evaluative-motivational pathway through which key cognitive and social beliefs are connected to GenAI use, whereas purely affective shortcuts from beliefs to usage appear weak or absent.
Table 8.
Structural model specific indirect effects
| Relationship | Indirect effect (β) | t value | p value | Bootstrapped 95% CI | Mediation type | |
|---|---|---|---|---|---|---|
| EVA → ATT → BI → UB | 0.005 | 3.137 | 0.004 | [0.002, 0.008] | Complementary partial | |
| AWA → ATT → BI → UB | 0.007 | 3.339 | 0.002 | [0.013, 0.042] | Complementary partial | |
| ETH → ATT → BI → UB | − 0.002 | 1.398 | 0.180 | [− 0.005, 0.001] | No mediation | |
| SI → ATT → BI → UB | 0.008 | 3.705 | < 0.001 | [0.004, 0.012] | Complementary partial | |
| FC → ATT → BI → UB | 0.006 | 2.458 | 0.023 | [0.002, 0.011] | Complementary partial | |
| EE → ATT → BI → UB | 0.003 | 1.728 | 0.105 | [0.000, 0.007] | Weak partial | |
| PE → ATT → BI → UB | 0.017 | 4.856 | < 0.001 | [0.011, 0.024] | Complementary partial | |
| AWA → ATT → BI | 0.026 | 3.573 | < 0.001 | [0.013, 0.042] | Complementary partial | |
| EVA → ATT → BI | 0.019 | 3.396 | 0.002 | [0.009, 0.031] | Complementary partial | |
| ETH → ATT → BI | − 0.007 | 1.414 | 0.180 | [− 0.018, 0.003] | No mediation | |
| FC → ATT → BI | 0.023 | 2.615 | 0.017 | [0.006, 0.040] | Complementary partial | |
| PE → ATT → BI | 0.066 | 5.766 | < 0.001 | [0.045, 0.090] | Complementary partial | |
| EE → ATT → BI | 0.013 | 1.785 | 0.099 | [0.000, 0.029] | Weak partial | |
| SI → ATT → BI | 0.030 | 4.124 | < 0.001 | [0.017, 0.045] | Complementary partial | |
| AWA → ATT → UB | 0.008 | 1.788 | 0.099 | [0.000, 0.018] | Weak partial | |
| EVA → ATT → UB | 0.006 | 1.748 | 0.104 | [0.000, 0.014] | Weak partial | |
| ETH → ATT → UB | − 0.002 | 1.048 | 0.310 | [− 0.008, 0.001] | No mediation | |
| FC → ATT → UB | 0.007 | 1.587 | 0.136 | [0.000, 0.017] | No mediation | |
| PE → ATT → UB | 0.021 | 1.965 | 0.076 | [0.000, 0.043] | Weak partial | |
| SI → ATT → UB | 0.009 | 1.864 | 0.089 | [0.000, 0.020] | Weak partial | |
| EE → ATT → UB | 0.004 | 1.329 | 0.199 | [0.000, 0.011] | No mediation | |
| ATT → BI → UB | 0.049 | 5.310 | < 0.001 | [0.032, 0.068] | Complementary partial | |
| EE → BI → UB | 0.020 | 2.469 | 0.023 | [0.004, 0.036] | Complementary partial | |
| FC → BI → UB | 0.005 | 0.503 | 0.615 | [− 0.013, 0.023] | No mediation | |
| PE → BI → UB | 0.068 | 6.089 | < 0.001 | [0.047, 0.092] | Complementary partial | |
| SI → BI → UB | 0.073 | 6.212 | < 0.001 | [0.051, 0.096] | Complementary partial | |
Reported p-values are Benjamini–Hochberg FDR-adjusted to control for multiple comparisons, and all statistical inferences in the text are based on these adjusted values
AWA AI Awareness, ETH AI Ethics, EVA AI Evaluation, PE Performance Expectancy, EE Effort Expectancy, SI Social Influence, FC Facilitating Conditions, BI Behavioral Intention, ATT Attitude, UB Use Behavior
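In PLS-SEM, a specific indirect effect is the product of the path coefficients along the chain (with the bootstrap CI obtained by recomputing that product in each resample). As a consistency check, the direct coefficients in Table 7 reproduce two of the indirect estimates in Table 8:

```python
# Direct path coefficients from Table 7.
pe_att, att_bi, bi_ub = 0.345, 0.192, 0.257

pe_att_bi = pe_att * att_bi              # PE -> ATT -> BI
pe_att_bi_ub = pe_att * att_bi * bi_ub   # PE -> ATT -> BI -> UB

print(round(pe_att_bi, 3), round(pe_att_bi_ub, 3))  # 0.066 0.017, matching Table 8
```

Both products agree with the reported indirect effects (β = 0.066 and β = 0.017), confirming the internal consistency of Tables 7 and 8 for these pathways.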
The model’s explanatory power was evaluated based on the coefficient of determination (R2), which is the typical metric for assessing structural model fit in variance-based SEM [57]. According to Hair et al., R2 values of 0.75, 0.50, and 0.25 are considered substantial, moderate, and weak, respectively [57]. The model explained 50.7% of the variance in ATT (R2 = 0.507) and 47.6% in BI (R2 = 0.476), indicating moderate explanatory power. The variance explained by UB was lower (R2 = 0.086), consistent with the idea that actual usage can depend on many external factors. Overall, the structural model demonstrated acceptable explanatory strength, no issues with collinearity, and statistically significant pathways. Thus, it provides a robust foundation for substantive hypothesis interpretations.
The final model is depicted in Fig. 2.
Fig. 2.
Final model with pathway coefficients and p values
Discussion
This study examined the factors associated with Chinese university students’ attitudes toward GenAI and their intentions to adopt such AI tools. The findings suggest that both technology acceptance factors and specific dimensions of AI literacy show significant associations with students’ evaluative judgments and behavioral intentions. Performance expectancy was the strongest predictor of attitudes and intentions, while social influence also showed a notable association with students’ perceptions. Furthermore, attitudes and behavioral intentions exhibited a consistent pattern of associations with self-reported GenAI usage, with behavioral intention serving as the primary link between attitudes and actual use. Overall, the results suggest that students are more inclined to adopt GenAI when they perceive it as useful, socially endorsed, and aligned with their academic needs.
PE and SI as key UTAUT correlates of GenAI intentions
This study revisited the core constructs of the UTAUT and demonstrated that performance expectancy, effort expectancy, and social influence were positively associated with behavioral intention, whereas facilitating conditions were not. This observation suggests that students’ willingness to engage with GenAI is related to their beliefs that such tools can enhance their academic productivity and be operated with minimal cognitive effort. This conclusion is consistent with prior technology acceptance research emphasizing perceived usefulness and ease of use [62–64]. The positive association between social influence and behavioral intention reflects the sociocultural context in which students evaluate emerging technologies. In a collectivist educational environment such as China’s, the approval of peers, family members, and instructors may be linked to students’ motivations to explore GenAI. This finding aligns with reports in the healthcare and e-learning domains that showed a link between social endorsement and technology adoption behaviors [65, 66]. Notably, this result contrasts with studies conducted in similar contexts that reported no significant association [18]. This distinction suggests that social influence may operate differently depending on institutional culture, perceived academic competition, or peer dynamics. Finally, facilitating conditions were not significantly related to behavioral intentions. This is consistent with previous work indicating that access to resources and institutional support may be more relevant for actual usage behaviors than for forming intentions, particularly during the early stages of technology introduction [27, 65, 67]. Together, these results refine expectations within the UTAUT framework by indicating that university students’ intentions to use GenAI are more strongly associated with belief-based evaluations than with the availability of external support structures.
Divergent associations of cognitive and ethical AI literacy with GenAI attitude
This study confirmed that AI literacy is a key concept for understanding how Chinese students evaluate and engage with GenAI. In the developed model, AI literacy comprised three dimensions: AI awareness, AI evaluation, and AI ethics. The results indicated that both AI awareness and AI evaluation were positively associated with Chinese students’ attitudes toward GenAI. Conceptually, this indicates that an individual’s ability to recognize AI’s capabilities and critically assess its academic relevance is related to how students appraise the personal value of these tools. Additionally, it suggests that cognitive assessments cannot be directly translated into usage behaviors. These associations highlight that beliefs regarding usefulness must be internalized as positive affective evaluations before they are linked to behavioral intention. Moreover, this pattern underscores the mediating role of attitude, as observed in this study. Students who are more familiar with the affordances and limitations of GenAI are better positioned to identify meaningful application scenarios, e.g., simplifying complex content, generating comparable examples, producing personalized learning plans, or offering supportive feedback. These perceptions are often accompanied by favorable attitudes because they align with common academic needs, thereby reinforcing the perceived relevance of GenAI in an educational context. Conversely, students who focus more on potential ethical risks, such as privacy leakage or algorithmic bias, tend to report less positive attitudes, suggesting that ethical apprehensions can limit positive attitudes despite acknowledged technical utility.
Interestingly, the findings revealed a weak, negative, but non-significant association between AI ethics and Chinese students’ attitudes toward GenAI. This pattern contrasts with previous reports showing a positive relationship between ethical awareness and attitudes toward GenAI [18]. Although this negative association did not reach statistical significance, the observed tendency may still reflect several contextual features of the Chinese higher-education setting. One possibility is that students who receive extensive ethics instruction may adopt a dual evaluative process, simultaneously recognizing the benefits of AI while maintaining heightened vigilance regarding its societal implications, resulting in a more cautious overall appraisal [68]. In addition, collectivist cultural norms emphasizing social harmony may amplify concerns when ethical risks are perceived as community-level threats rather than individual-level inconveniences [69]. From a broader theoretical perspective, this pattern aligns with risk perception theories, which posit that motivation decreases when perceived risks exceed perceived benefits [70].
This study extends discussions on AI literacy by showing that ethical sensitivity is not necessarily associated with more positive attitudes toward technology; rather, it may generate hesitation or uncertainty in certain contexts. This nuance reinforces the value of distinguishing among the cognitive, evaluative, and ethical dimensions of AI literacy, rather than treating it as a unidimensional construct. The results of this study also suggest that attitudes provide evaluative filters that connect belief structures to behavioral intentions, which in turn are linked to GenAI usage. From a practical perspective, educators and policymakers should consider designing AI ethics curricula that frame ethical reflection as a constructive competency rather than a deterrent. By doing so, students’ ethical awareness can coexist with positive academic attitudes toward GenAI, thereby supporting more balanced and informed engagement.
Attitude as a central evaluative link between beliefs and GenAI intentions
The hypotheses derived from the TAM and TPB examined the relationships among performance expectancy, effort expectancy, social influence, facilitating conditions, and attitudes toward GenAI. Performance expectancy was positively associated with attitudes, supporting the well-established trend that students evaluate GenAI more favorably when they perceive it as academically useful [20, 71]. This highlights the psychological process through which cognitive beliefs form the basis for affective evaluations. In other words, perceptions of usefulness are transformed into personal feelings of approval, which are then associated with stronger intentions to adopt the technology. In contrast, effort expectancy was not directly associated with attitude. This is consistent with prior studies suggesting that ease of use may shape attitudes indirectly, e.g., by strengthening beliefs about usefulness, leading to more positive evaluations of GenAI [72, 73]. In the educational context, where tools such as DeepSeek and Doubao are already familiar and intuitive to many Chinese university students, usability may be taken for granted, thus limiting its role in differentiating attitudes.
Social influence and facilitating conditions were also positively associated with attitudes. The link between social influence and favorable evaluations is consistent with findings in other technology adoption domains, where approval from peers, family members, and instructors is linked to students’ beliefs that GenAI is appropriate and legitimate [18, 74, 75]. This reflects the role of normative cues in shaping students’ evaluative appraisals, particularly in cultures where collective expectations are important. Although facilitating conditions are often examined as predictors of actual usage, their association with attitude herein suggests that access to resources, training, and institutional support may help students feel more comfortable and confident in experimenting with GenAI. From a practical perspective, universities that provide stronger technical and instructional support may be associated with more positive attitudes, which are in turn associated with higher behavioral intentions to use GenAI. This finding underscores the role of institutional environments during the early technology adoption stage, when students are developing personal evaluations rather than relying on established habits.
Taken together, these findings highlight the central role of attitude as a mediating construct within technology acceptance. In general, cognitive beliefs and social cues do not automatically relate to behavioral intentions. Instead, they are closely linked to intention through a personal evaluative appraisal, reflecting how students feel about adopting the technology. This interpretive layer is consistent with foundational assumptions in the TAM, TPB, and UTAUT [22, 26, 27], as well as other recent studies showing that favorable attitudes are linked to higher engagement and increased usage [76–79]. The observed associations suggest that students who feel positively about GenAI are more inclined to incorporate it into their academic work and daily learning activities, particularly when these attitudes are accompanied by strong intentions to use GenAI in such contexts. The present findings indicate that attitudes are most strongly connected to GenAI usage indirectly, by shaping behavioral intentions that, in turn, predict actual use, rather than through a robust direct pathway from attitudes to usage. By clarifying this evaluative-intentional pathway, the present study provides a more nuanced understanding of how belief-based predictors are linked to behavioral intentions and subsequent use, and underscores the relevance of positive attitudes and concrete intentions in efforts to support responsible GenAI use in higher education.
Theoretical and practical implications
Theoretically, this study advances existing technology acceptance research by integrating AI literacy into a UTAUT-based framework and empirically examining its associations with GenAI-related outcomes. Evaluating attitude as a mediating construct further clarifies the evaluative pathway through which students’ beliefs are related to behavioral intentions and actual usage. The differentiated effects across literacy dimensions highlight the need to distinguish among cognitive, evaluative, and ethical components when examining learner responses to emerging AI systems.
Practically, the findings discussed herein offer several actionable implications for policymakers, educators, and technology providers. For policymakers, it is crucial to formulate targeted strategies to enhance students’ AI literacy in this rapidly evolving domain. Notably, the United Nations Educational, Scientific and Cultural Organization (UNESCO) released an AI Competency Framework for Students, emphasizing the importance of continuously cultivating students’ AI competencies [80]. Accordingly, policymakers should integrate AI literacy development into Science, Technology, Engineering, and Mathematics (STEM) curricula across educational levels. This would entail deliberately embedding AI concepts, practical applications, and ethical considerations into core instructional content, thereby fostering a gradual and holistic development of AI literacy from an early age. For educators, it is essential to actively integrate AI technology into daily teaching practices and design instructional plans that deliver personalized content through engaging and interactive learning experiences [7]. Beyond the mere adoption of AI tools, this approach leverages intelligent systems to analyze students’ learning, identify individual needs, and adjust teaching strategies accordingly. Considering that both performance expectancy and social influence have positive relationships with students’ attitudes and behavioral intentions toward AI, teachers have a crucial role in guiding students to use AI tools as supportive resources. For example, they can assist students in using AI-driven tools to diagnose academic weaknesses or explore AI-powered learning platforms. For AI developers and platform providers, it is imperative to improve operability and strengthen privacy protections. This includes enabling resource sharing across different AI platforms and implementing safeguards to prevent data leakage.
This study revealed that concerns about AI ethics are negatively associated with university students’ attitudes toward AI applications, suggesting that developers should take proactive measures to address these risks. To foster trust, it is recommended to minimize algorithmic bias and rigorously protect user data. By enhancing transparency and security, technology providers can help alleviate ethical concerns and encourage more positive attitudes toward AI. Through such efforts, policymakers, educators, and technologists can create a supportive environment for GenAI adoption in education.
Limitations and future research
Although this study provides valuable insights regarding the factors that influence Chinese university students’ behavioral intentions to accept GenAI technology, it has certain limitations that require further investigation. First, the questionnaire design involved modest item reduction to maintain a reasonable survey length. Although this decision was necessary to limit respondent burden, removing or shortening items may have narrowed construct coverage and reduced the comprehensiveness of the measurements. In addition, actual GenAI usage was assessed using a single self-reported frequency item adapted from Venkatesh et al. [52]. While this format is consistent with prior UTAUT-based research and provides a practical indicator of usage, single-item measures do not allow conventional assessment of internal consistency and are more vulnerable to random measurement error. Future studies should address these limitations by including additional items to more fully represent the conceptual domain of GenAI acceptance-related constructs and by employing multi-item behavioral scales to measure GenAI use more robustly. Second, the cross-sectional research design has inherent methodological constraints. The findings reflect a snapshot of students’ perceptions but cannot establish causality among variables. For example, although performance expectancy was associated with positive attitudes, it cannot be concluded that changes in performance expectancy will cause changes in attitude. Moreover, GenAI is a rapidly evolving technology, and students’ literacy, perceptions, and usage patterns are likely to shift over time. Cross-sectional data cannot reflect such dynamic trajectories. Future research should therefore consider longitudinal designs (to track changes over time) or controlled laboratory experiments (to provide stronger causal inference). Third, this study is limited by the composition of the sample population and the lack of subgroup analyses.
Approximately 79% of participants were female and 72% majored in liberal-arts–related disciplines, which may introduce sampling bias and restrict the generalizability of the findings. It is important to note that technology adoption behaviors and GenAI attitudes may vary across genders and between STEM and non-STEM fields. Moreover, no cross-group or demographic comparisons (e.g., gender, age, major, region) were conducted, meaning that the results reflect aggregate trends that may mask important subgroup heterogeneity. Future research efforts should aim to recruit more demographically balanced samples and apply moderation or multi-group analyses to evaluate potential subgroup-specific differences in GenAI acceptance. This would increase the explanatory power and external validity of the results. Finally, this study focused solely on Chinese university students, and therefore, the results may not be generalizable to other cultural or educational contexts. Factors associated with GenAI acceptance likely vary in other countries or in non-student populations. Moreover, recent research has shown that GenAI systems themselves may differ in terms of their underlying cognitive structures and cultural orientations [81–83]. Such differences could shape users’ attitudes and behavioral intentions when interacting with various GenAI platforms. Future studies could therefore compare user acceptance across culturally distinct GenAI systems to elucidate how embedded cultural and psychological characteristics of AI agents relate to technology acceptance outcomes.
Conclusions
As GenAI becomes increasingly embedded in Chinese higher education, it is essential to understand students’ evaluative and behavioral responses. This study employed an extended UTAUT framework (by incorporating attitude as a mediating factor and AI literacy as a conceptual precursor) to examine the structural relationships and pathway patterns through which key technological, social, and literacy-related factors are associated with students’ intentions and self-reported use of GenAI. The results indicate that performance expectancy, effort expectancy, and social influence are positively associated with behavioral intentions, while performance expectancy, social influence, and facilitating conditions correspond to more favorable attitudes toward GenAI. Attitude is positively associated with behavioral intentions, which in turn strongly predict actual usage, suggesting that attitudes are associated with use primarily through behavioral intentions rather than via a direct pathway. The inclusion of AI literacy dimensions further enriches the developed model. AI awareness and evaluation correspond with more positive attitudes toward GenAI, whereas ethical sensitivity may generate caution, highlighting the complexity of moral reflection in technology adoption. These findings emphasize that ethical concerns and technical appreciation can coexist and should be balanced through educational initiatives.
In practical terms, policymakers should consider embedding AI literacy into formal curricula to enhance students’ informed and reflective engagement with emerging technologies. Educators are encouraged to foster learning environments that combine instructional support with opportunities for responsible GenAI use. Meanwhile, developers must enhance transparency and data security to alleviate ethical apprehensions. Collectively, these coordinated efforts can cultivate an informed, confident, and ethically-grounded culture of GenAI utilization in higher education.
Supplementary Information
Acknowledgements
We are grateful to all participants in this study.
Authors’ contributions
Conceptualization, X.Z.; methodology, X.Z. and L.L.; investigation, S.D.; data curation, L.L. and X.H.; formal analysis, Y.S.; writing-original draft preparation, X.Z. and L.L.; writing-review and editing, X.Z., L.L., X.C. and X.H.; visualization, Y.S.; supervision, X.Z.; project administration, X.Z.; manuscript formatting, X.C. and S.D.; language polishing, X.C. and S.D. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by “The National Social Science Fund of China”, grant number CEA230292.
Data availability
The datasets used in this study are available from the corresponding author upon reasonable request.
Declarations
Ethics approval and consent to participate
All participants gave their informed consent for inclusion before they participated in the study. This study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee at Central Normal University (CCNU-IRB-202504002A, approval date: 20 April 2025).
Consent for publication
Informed consent was obtained from all participants involved in the study.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Yinguang Sun, Lu Li, Xiaoxuan Zhang, Xiaowen Chen, Shiyi Deng and Xiaoling Hu contributed equally to this work.
References
- 1. Banh L, Strobel G. Generative artificial intelligence. Electron Markets. 2023;33(1):63.
- 2. Wu F, Lu C, Zhu M, Chen H, Zhu J, Yu K, et al. Towards a new generation of artificial intelligence in China. Nat Mach Intell. 2020;2(6):312–6.
- 3. Extance A. ChatGPT has entered the classroom: how LLMs could transform education. Nature. 2023;623(7987):474–7.
- 4. Michel-Villarreal R, Vilalta-Perdomo E, Salinas-Navarro DE, Thierry-Aguilera R, Gerardou FS. Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Educ Sci. 2023;13(9):856.
- 5. Xu J, Li Y, Shadiev R, Li C. College students’ use behavior of generative AI and its influencing factors under the unified theory of acceptance and use of technology model. Educ Inf Technol. 2025;30(14):19961–84.
- 6. Law L. Application of generative artificial intelligence (GenAI) in language teaching and learning: a scoping literature review. Comput Educ Open. 2024;6:100174.
- 7. Wei L. Artificial intelligence in language instruction: impact on English learning achievement, L2 motivation, and self-regulated learning. Front Psychol. 2023;14:1261955.
- 8. Mao J, Chen B, Liu JC. Generative artificial intelligence in education and its implications for assessment. TechTrends. 2024;68(1):58–66.
- 9. Dogru T, Line N, Hanks L, Acikgoz F, Abbott JA, Bakir S, et al. The implications of generative artificial intelligence in academic research and higher education in tourism and hospitality. Tour Econ. 2024;30(5):1083–94.
- 10. He Y, Yang L, Zhu X, Wu B, Zhang S, Qian C, et al. Mental health chatbot for young adults with depressive symptoms during the COVID-19 pandemic: single-blind, three-arm randomized controlled trial. J Med Internet Res. 2022;24(11):e40719.
- 11. Crawford J, Allen K-A, Pani B, Cowling M. When artificial intelligence substitutes humans in higher education: the cost of loneliness, student success, and retention. Stud High Educ. 2024;49(5):883–97.
- 12. Doğan M, Celik A, Arslan H. AI in higher education: risks and opportunities from the academician perspective. Eur J Educ. 2025;60(1):e12863.
- 13. Vieriu AM, Petrea G. The impact of artificial intelligence (AI) on students’ academic development. Educ Sci. 2025;15(3):343.
- 14. Cengiz S, Peker A. Generative artificial intelligence acceptance and artificial intelligence anxiety among university students: the sequential mediating role of attitudes toward artificial intelligence and literacy. Curr Psychol. 2025;44(9):7991–8000.
- 15. Kasneci E, Seßler K, Küchemann S, Bannert M, Dementieva D, Fischer F, et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individ Differ. 2023;103:102274.
- 16. Lo CK. What is the impact of ChatGPT on education? A rapid review of the literature. Educ Sci. 2023;13(4):410.
- 17. Hazzan-Bishara A, Kol O, Levy S. The factors affecting teachers’ adoption of AI technologies: a unified model of external and internal determinants. Educ Inf Technol. 2025;30(11):15043–69.
- 18. Wang C, Wang H, Li Y, Dai J, Gu X, Yu T. Factors influencing university students’ behavioral intention to use generative artificial intelligence: integrating the theory of planned behavior and AI literacy. Int J Hum Comput Interact. 2025;41(11):6649–71.
- 19. Bagozzi RP, Davis FD, Warshaw PR. Development and test of a theory of technological learning and usage. Hum Relat. 1992;45(7):659–86.
- 20. Venkatesh V, Thong JYL, Xu X. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q. 2012;36(1):157–78.
- 21. Fishbein M, Ajzen I. Predicting and changing behavior: the reasoned action approach. Psychology Press; 2010. 10.4324/9780203838020.
- 22. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319–40.
- 23. Leow LP, Phua LK, Teh SY. Extending the social influence factor: behavioural intention to increase the usage of information and communication technology-enhanced student-centered teaching methods. Educ Technol Res Dev. 2021;69(3):1853–79.
- 24. Ajzen I, Fishbein M. Attitudes and the attitude-behavior relation: reasoned and automatic processes. Eur Rev Soc Psychol. 2000;11(1):1–33.
- 25. Ajzen I, Fishbein M. Attitude-behavior relations: a theoretical analysis and review of empirical research. Psychol Bull. 1977;84(5):888.
- 26. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.
- 27. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. 2003;27(3):425–78.
- 28. Al-Emran M, AlQudah AA, Abbasi GA, Al-Sharafi MA, Iranmanesh M. Determinants of using AI-based chatbots for knowledge sharing: evidence from PLS-SEM and fuzzy sets (fsQCA). IEEE Trans Eng Manage. 2023;71:4985–99.
- 29. Foroughi B, Senali MG, Iranmanesh M, Khanfar A, Ghobakhloo M, Annamalai N, et al. Determinants of intention to use ChatGPT for educational purposes: findings from PLS-SEM and fsQCA. Int J Hum Comput Interact. 2024;40(17):4501–20.
- 30. Strzelecki A. To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interact Learn Environ. 2024;32(9):5142–55.
- 31. Jing M, Guo Z, Wu X, Yang Z, Wang X. Higher education digital academic leadership: perceptions and practices from Chinese university leaders. Educ Sci. 2025;15(5):606.
- 32. Zheng H, Han F, Huang Y, Wu Y, Wu X. Factors influencing behavioral intention to use e-learning in higher education during the COVID-19 pandemic: a meta-analytic review based on the UTAUT2 model. Educ Inf Technol. 2025;30(9):12015–53.
- 33. Zhao L, Rahman MH, Yeoh W, Wang S, Ooi K-B. Examining factors influencing university students’ adoption of generative artificial intelligence: a cross-country study. Stud High Educ. 2025;50(12):2646–68.
- 34. Strzelecki A, ElArabawy S. Investigation of the moderation effect of gender and study level on the acceptance and use of generative AI by higher education students: comparative evidence from Poland and Egypt. Br J Educ Technol. 2024;55(3):1209–30.
- 35. Zhang X, Hu X, Sun Y, Li L, Deng S, Chen X. Integrating AI literacy with the TPB-TAM framework to explore Chinese university students’ adoption of generative AI. Behav Sci (Basel). 2025;15(10):1398.
- 36. Wang B, Rau P-LP, Yuan T. Measuring user competence in using artificial intelligence: validity and reliability of artificial intelligence literacy scale. Behav Inf Technol. 2023;42(9):1324–37.
- 37. Carolus A, Koch MJ, Straka S, Latoschik ME, Wienrich C. MAILS – Meta AI literacy scale: development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies. Computers in Human Behavior: Artificial Humans. 2023;1(2):100014.
- 38. Laupichler MC, Aster A, Haverkamp N, Raupach T. Development of the “Scale for the assessment of non-experts’ AI literacy” – an exploratory factor analysis. Comput Human Behav Rep. 2023;12:100338.
- 39. Laupichler MC, Aster A, Schirch J, Raupach T. Artificial intelligence literacy in higher and adult education: a scoping literature review. Computers and Education: Artificial Intelligence. 2022;3:100101.
- 40. Chiu M-L. Exploring user awareness and perceived usefulness of generative AI in higher education: the moderating role of trust. Educ Inf Technol. 2025;30(15):21317–51.
- 41. Wang Y, Wei Z, Wijaya TT, Cao Y, Ning Y. Awareness, acceptance, and adoption of Gen-AI by K-12 mathematics teachers: an empirical study integrating TAM and TPB. BMC Psychol. 2025;13(1):478.
- 42. Ng DTK, Leung JKL, Chu KWS, Qiao MS. AI literacy: definition, teaching, evaluation and ethical issues. Proc Assoc Inf Sci Technol. 2021;58(1):504–9.
- 43. Hallaq T. Evaluating online media literacy in higher education: validity and reliability of the Digital Online Media Literacy Assessment (DOMLA). J Media Lit Educ. 2016;8(1):62–84.
- 44. Long D, Magerko B. What is AI literacy? Competencies and design considerations. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; Honolulu, HI, USA. Association for Computing Machinery; 2020. p. 1–16.
- 45. Laine J, Minkkinen M, Mäntymäki M. Understanding the ethics of generative AI: established and new ethical principles. Commun Assoc Inf Syst. 2025;56(1):7.
- 46. Yang T, Cheon J, Cho M-H, Huang M, Cusson N. Undergraduate students’ perspectives of generative AI ethics. Int J Educ Technol High Educ. 2025;22(1):35.
- 47. Zhu W, Huang L, Zhou X, Li X, Shi G, Ying J, et al. Could AI ethical anxiety, perceived ethical risks and ethical awareness about AI influence university students’ use of generative AI products? An ethical perspective. Int J Hum Comput Interact. 2025;41(1):742–64.
- 48. Hsiao C-H, Tang K-Y. Beyond acceptance: an empirical investigation of technological, ethical, social, and individual determinants of GenAI-supported learning in higher education. Educ Inf Technol. 2025;30(8):10725–50.
- 49. Lin S-K, Chung H-C. An empirical study of the social development of AI technology and its social acceptance: the mediation of trust and the moderated mediation of ethical perceptions. SAGE Open. 2025;15(3):21582440251377226.
- 50. Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine. 2000;25(24):3186–91.
- 51. Wang C, Wang H, Li Y, Dai J, Gu X, Yu T. Factors influencing university students’ behavioral intention to use generative artificial intelligence: integrating the theory of planned behavior and AI literacy. Int J Hum Comput Interact. 2025;41(11):6649–71.
- 52. Venkatesh V, Thong JYL, Xu X. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q. 2012;36(1):157–78.
- 53. Podsakoff PM, MacKenzie SB, Lee J-Y, Podsakoff NP. Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol. 2003;88(5):879.
- 54. Sarstedt M, Hopkins L, Kuppelwieser VG. Partial least squares structural equation modeling (PLS-SEM): an emerging tool in business research. Eur Bus Rev. 2014;26(2):106–21.
- 55. Hair JF, Hult GTM, Ringle CM, Sarstedt M. A primer on partial least squares structural equation modeling. Sage; 2014.
- 56. Hair JF, Risher JJ, Sarstedt M, Ringle CM. When to use and how to report the results of PLS-SEM. Eur Bus Rev. 2019;31(1):2–24.
- 57. Hair JF Jr, Hult GTM, Ringle CM, Sarstedt M, Danks NP, Ray S. Partial least squares structural equation modeling (PLS-SEM) using R: a workbook. Springer Nature; 2021. 10.1007/978-3-030-80519-7.
- 58. Henseler J, Ringle CM, Sarstedt M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J Acad Mark Sci. 2015;43(1):115–35.
- 59. Hayes AF. Beyond Baron and Kenny: statistical mediation analysis in the new millennium. Commun Monogr. 2009;76(4):408–20.
- 60. Fornell C, Larcker DF. Evaluating structural equation models with unobservable variables and measurement error. J Mark Res. 1981;18(1):39–50.
- 61. Voorhees CM, Brady MK, Calantone R, Ramirez E. Discriminant validity testing in marketing: an analysis, causes for concern, and proposed remedies. J Acad Mark Sci. 2016;44(1):119–34.
- 62. Andrews JE, Ward H, Yoon J. UTAUT as a model for understanding intention to adopt AI and related technologies among librarians. J Acad Librariansh. 2021;47(6):102437.
- 63. Camilleri MA. Factors affecting performance expectancy and intentions to use ChatGPT: using SmartPLS to advance an information technology acceptance framework. Technol Forecast Soc Change. 2024;201:123247.
- 64. Duong CD, Bui DT, Pham HT, Vu AT, Nguyen VH. How effort expectancy and performance expectancy interact to trigger higher education students’ uses of ChatGPT for learning. Interact Technol Smart Educ. 2024;21(3):356–80.
- 65. Ameri A, Khajouei R, Ameri A, Jahani Y. Acceptance of a mobile-based educational application (LabSafety) by pharmacy students: an application of the UTAUT2 model. Educ Inf Technol. 2020;25(1):419–35.
- 66. Chen CH, Lee WI. Exploring nurses’ behavioural intention to adopt AI technology: the perspectives of social influence, perceived job stress and human–machine trust. J Adv Nurs. 2025;81(7):3739–52.
- 67. Arain AA, Hussain Z, Rizvi WH, Vighio MS. Extending UTAUT2 toward acceptance of mobile learning in the context of higher education. Univers Access Inf Soc. 2019;18(3):659–73.
- 68. Zhang B, Dafoe A. Artificial intelligence: American attitudes and trends. SSRN Electron J. 2019. 10.2139/ssrn.3312874.
- 69. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.
- 70. Slovic P. Perception of risk. In: The perception of risk. Routledge; 2016. p. 220–31.
- 71. Balakrishnan J, Abed SS, Jones P. The role of meta-UTAUT factors, perceived anthropomorphism, perceived intelligence, and social self-efficacy in chatbot-based services? Technol Forecast Soc Change. 2022;180:121692.
- 72. Bazelais P, Doleck T, Lemay DJ. Investigating the predictive power of TAM: a case study of CEGEP students’ intentions to use online learning technologies. Educ Inf Technol. 2018;23(1):93–111.
- 73. Liesa-Orús M, Latorre-Cosculluela C, Sierra-Sánchez V, Vázquez-Toledo S. Links between ease of use, perceived usefulness and attitudes towards technology in older people in university: a structural equation modelling approach. Educ Inf Technol. 2023;28(3):2419–36.
- 74. Ho SM, Ocasio-Velázquez M, Booth C. Trust or consequences? Causal effects of perceived risk and subjective norms on cloud technology adoption. Comput Secur. 2017;70:581–95.
- 75. Yilmaz R, Yilmaz FGK. The effect of generative artificial intelligence (AI)-based tool use on students’ computational thinking skills, programming self-efficacy and motivation. Computers and Education: Artificial Intelligence. 2023;4:100147.
- 76. Chu J, Dai Y-Y. Extending the UTAUT model to study the acceptance behavior of MOOCs by university students and the moderating roles of free time management and leisure-study conflict. Int J Technol Hum Interact. 2021;17(4):35–57.
- 77. Dahri NA, Yahaya N, Al-Rahmi WM, Aldraiweesh A, Alturki U, Almutairy S, et al. Extended TAM based acceptance of AI-powered ChatGPT for supporting metacognitive self-regulated learning in education: a mixed-methods study. Heliyon. 2024;10(8). 10.1016/j.heliyon.2024.e29317.
- 78. Pan Y, Huang Y, Kim H, Zheng X. Factors influencing students’ intention to adopt online interactive behaviors: merging the theory of planned behavior with cognitive and motivational factors. Asia Pac Educ Res. 2023;32(1):27–36.
- 79. Zhang Y, Yang X, Tong W. University students’ attitudes toward ChatGPT profiles and their relation to ChatGPT intentions. Int J Hum Comput Interact. 2025;41(5):3199–212.
- 80. Miao F, Shiohira K. AI competency framework for students. UNESCO; 2024. 10.54675/JKJB9835.
- 81. Zhang Y, Li S, Yuan X, Yuan H, Che Z, Luo S. The high-dimensional psychological profile of ChatGPT. Sci China Technol Sci. 2025;68(8):1–16.
- 82. Yuan H, Che Z, Zhang Y, Li S, Yuan X, Huang L, et al. The cultural stereotype and cultural bias of ChatGPT. J Pac Rim Psychol. 2025;19:18344909251355673.
- 83. Lu JG, Song LL, Zhang LD. Cultural tendencies in generative AI. Nat Hum Behav. 2025;9(11):2360–9.
Data Availability Statement
The datasets used in this study are available from the corresponding author upon reasonable request.


