BMC Psychology
2025 Nov 12;13:1257. doi: 10.1186/s40359-025-03559-2

What makes university students accept generative artificial intelligence? A moderated mediation model

Nuri Türk 1,, Barzan Batuk 2, Alican Kaya 3, Oğuzhan Yıldırım 4
PMCID: PMC12613713  PMID: 41225658

Abstract

Developments in Artificial Intelligence (AI) technology are increasing the importance of accepting and utilizing generative AI. Higher education is one of the most common areas where AI tools are used. University students use AI tools such as ChatGPT for various purposes (e.g., homework and projects). However, there is limited research on the factors affecting university students' acceptance of AI. AI-related technology and literacy skills are effective in promoting the acceptance of new technologies. Additionally, attitude towards AI and AI self-efficacy can promote the acceptance of AI. Within this context, this study tested a hypothetical model to examine the relationships among university students' attitude towards AI, AI literacy, AI self-efficacy, AI learning anxiety, and AI acceptance. Data were collected from 356 participants (265 females) with a mean age of 22.71 years (SD = 3.72). Mediation and moderation analyses were used to examine the role of AI literacy, AI self-efficacy, and AI learning anxiety in the relationship between attitude towards AI and AI acceptance among Turkish university students. The results showed that the relationship between attitude towards AI and acceptance of AI can be explained by AI literacy and AI self-efficacy. Moreover, AI learning anxiety can moderate the predictive role of students' attitudes towards AI on AI acceptance. The study broadens and enhances the educational AI literature on the factors that complicate and facilitate AI acceptance.

Keywords: AI acceptance, AI literacy, AI self-efficacy, AI attitude, Educational AI

Introduction

Artificial Intelligence (AI) has provided countless advances in many areas, and these advances are increasing exponentially [15, 115]. In recent years, AI has become a powerful actor transforming many areas, from education to health and from transportation to public services, and it offers important opportunities, especially in education [58, 114]. Artificial Intelligence applications (e.g., AIEd) have made teaching processes more effective and efficient [7, 86]. In content presentation, feedback provision, and progress monitoring, intelligent instructional systems enable teachers to identify student knowledge gaps and provide appropriate support through person-specific and adaptive approaches [32]. Furthermore, by analyzing student engagement, AI systems can identify at-risk individuals in real time, enabling proactive intervention [95].

AI provides a customized learning environment that addresses the specific learning needs of each student [92]. These features of AI enable the development of a positive attitude towards it in higher education. It has become not only a technological innovation [79] but also an element that transforms pedagogical and managerial practices [77], triggering a fundamental paradigm shift [62]. Contributions such as accelerated student enrollment processes and more coherent curricula have increased efficiency in the functioning of higher education institutions [60]. Moreover, the roles of faculty members are undergoing significant changes, and AI-supported tools are becoming central to teaching processes on online learning platforms [70]. This transformation supports autonomous learning by providing student-specific content tailored to learning styles, enabling students to have more in-depth learning experiences [112]. However, despite all these possibilities, human intervention is still needed in complex areas such as language, culture, and ethics [64]. This indicates that AI presents certain pedagogical, cultural, and ethical risks, in addition to its benefits for education. Thus, AI and generative artificial intelligence (GenAI) have transitioned from simple technical tools in higher education to crucial elements transforming the very nature of education, offering opportunities and challenges that require careful consideration.

Recent developments in AI and GenAI systems have further expanded the potential of AI in education [37, 57]. GenAI tools are revolutionizing teaching and learning processes in higher education. Academics and students use AI tools for various purposes, including reading, writing, preparing presentations, and conducting analyses [68, 84]. However, the widespread use of AI tools has brought with it debates on ethical issues, the risk of plagiarism, and pedagogical compatibility. Such debates can directly shape the acceptance of AI in education by influencing users' attitudes and perceptions towards AI technologies [13]. At this point, technology acceptance models (e.g., the Technology Acceptance Model and the Unified Theory of Acceptance and Use of Technology) developed to explain how individuals approach new technologies provide an important framework. These models emphasize that factors such as general attitude towards new technologies, perceived usefulness, and ease of use play a critical role in the acceptance process [25, 33]. As the popularity and public use of AI grow, a clearer understanding of people's attitudes, perceptions, and opinions towards AI is needed to accelerate AI adoption.

Although artificial intelligence has experienced rapid and transformative progress in education, research in higher education has focused on general attitudes, adoption, and behavioral intentions [4, 59, 101]. However, in higher education, a driving force of education, the joint role of AI literacy, self-efficacy, and anxiety remains largely unexplored [16, 49, 50]. Since AI literacy is defined as students' knowledge about AI, it may influence their self-efficacy, which reflects their confidence in using AI. AI literacy and self-efficacy, interacting with anxiety arising from individuals' emotional reactions, can guide students' behavior in accepting and utilizing AI. We infer from the literature that this triad (literacy, self-efficacy, and anxiety), which plays a decisive role in student behavior during the learning and teaching process, can serve as a milestone for the increasing integration of AI into instructional processes and its adoption for learning purposes. In this context, investigating the AI literacy, self-efficacy, and anxiety levels of university students together addresses an important gap.

Another gap that our research fills concerns the contextual and cultural similarity of existing studies. As most existing AI studies have been conducted in Western contexts [23, 105, 113], findings from countries with different demographic, cultural, and structural characteristics, such as Türkiye, are limited. Accordingly, our study's participants consist of university students in Türkiye. Türkiye's rapidly digitalizing higher education system, young population, and the increasing importance given to artificial intelligence-based learning make examining this topic even more meaningful [27]. Therefore, investigating how Turkish university students use artificial intelligence tools, how they manage their confidence and anxiety, and how their attitudes and self-efficacy levels are shaped will make significant contributions not only to the local context but also, through comparisons with different cultures, to the global scale [26].

AI attitude and generative artificial intelligence acceptance

There is a fast-growing interest in AI technologies [98]. AI has transformed daily routines for individuals, professionals, and education settings [61, 71, 94]. The public's increasing concern for this new technology makes it necessary to study the general attitudes of AI users [87]. The attitudes of individuals change the direction of their decision to accept or reject the use of AI technology [44]. People with positive attitudes experience a more suitable technology acceptance process with factors such as perceived usefulness and ease of use [53]. However, risks and barriers such as data security, ethical issues, lack of empathy, perceptions of AI's control over humans, job loss, and accuracy of the information presented also lead to negative attitudes toward AI [107]. Factors like perceived risk, usefulness, ethical concerns, and social diffusion significantly influence negative attitudes toward AI acceptance [2]. This dilemma renders the public's understanding of the AI acceptance process a critical issue. The increasing public ambivalence (e.g., positive vs. negative) towards AI systems makes it necessary to know the acceptance of new technologies (e.g., AI technologies) [53].

AI has many uses in health, education, transportation, and business [6, 19]. The rapidly expanding use of AI technology has been driving people to make various decisions depending on their acceptance and usage attitudes towards AI [80]. Positive attitudes are associated with AI providing high-quality solutions and assistance in response to users' demands. On the other hand, users with negative attitudes reject the use of AI technology for reasons such as insecurity, hallucinated information, and cognitive control anxiety (human-or-machine decision-making) [82]. It is anticipated that a considerable disparity will emerge in the near future between individuals who utilize AI and those who do not [52]. While the benefits and opportunities presented by AI technology encourage greater usage, anxiety and ethical concerns that lead to negative attitudes create a paradoxical situation that distances individuals from using AI technology [53]. Individuals' attitudes toward new technology still exhibit uncertainty, and increasing uncertainty regarding AI usage has intensified the debate, making it a significant agenda item. Since people's general attitudes play a dominant role in AI usage, understanding users' perspectives and introducing the public to AI's advantages, such as its potential benefits and social impact, depend on the close relationship between attitude and acceptance [73, 75].

The role of AI literacy and AI self-efficacy

The increasing number of studies on AI indicates that AI has been rapidly integrated and popularized worldwide in recent years [80]. Effectively integrating AI technologies into daily life requires users' intention and attitude, which reflect an emotional and cognitive behavioral orientation toward conscious use [36]. As a positive attitude increases a person's interest and curiosity, it leads users to develop their cognitive capacity to use and benefit from AI more actively [78]. Competent and responsible use is prominent, as a positive attitude toward new technology requires talent, ability, and deep understanding (Acosta-Enriquez et al., 2024). Thus, it is necessary to develop AI literacy, which refers to understanding, evaluating, and ethically using AI (Xiao et al., 2025) [99]. In addition, the use of AI in many sectors, especially education, to gain an advantage in an increasingly competitive environment brings AI literacy onto the agenda for higher education stakeholders [10, 12]. As AI technologies and social transformations rapidly develop in all areas of life, the increasing integration of AI and other digital technologies into modern life requires sufficient knowledge about how these systems work and how they are employed [69]. Users lacking literacy about the functions of AI may struggle while using it and will benefit from its advantages only to a limited extent [43]. This indicates that AI literacy is a significant factor in AI acceptance [11].

AI attitude directs people's AI self-efficacy, which is vital in shaping users' beliefs and behaviors about AI technology [22] (Ji et al., 2024). Research has shown that self-efficacy is a prominent factor in adopting new technology such as AI [1, 17, 39]. A person with high AI self-efficacy tends to be more open and motivated, guided by beliefs and perceptions about their own capacities and abilities [29]. AI self-efficacy is a milestone that also affects and activates an individual's AI literacy, and people with higher AI literacy show stronger self-efficacy [106]. Thus, users with AI self-efficacy are more proactive in providing creative and adaptive solutions to new technologies or complex situations [109]. AI self-efficacy facilitates AI acceptance by influencing individuals' perceptions of the benefits and usefulness of AI technologies [14]. Individuals with strong AI self-efficacy are more likely to view AI technology as an opportunity or advantage rather than a challenge or threat [9, 103].

The moderator effect of artificial intelligence anxiety

Various ethical and technical issues raised by the rapid development of AI technology have increased anxiety about AI [28, 48, 89, 100]. According to earlier research, autonomous systems like AI possess the potential to drastically alter today's society: they could significantly affect the labor market (Wang & Wang, 2022; Zhan et al., 2023) and raise issues of security, confidentiality, monitoring, disinformation, ethical decision-making, reliability, and transparency [34, 89]. The worry and fear of losing control over AI is defined as AI anxiety [88]. The novelty and complexity of AI, along with the new challenges it presents to society, including an uncertain future, cause anxiety (Wang & Wang, 2022; Zhan et al., 2023). Concerns about AI lead to anxiety about learning AI, which negatively impacts one's perception of one's ability to learn AI technology. The term "AI learning anxiety" describes people's low self-confidence when learning AI, which is seen as challenging [48]. Since AI is a computerized technology, mastering it is considered complicated, and the fact that AI algorithms outperform even the most competent people in some sectors leads people to have less confidence in learning AI [30]. Therefore, it is crucial to consider how someone's readiness for change, that is, their ability to adapt to and accept AI, is affected by their AI anxiety, the affective component of attitude.

Present study

The Technology Acceptance Model (TAM) has been utilized to conceptualize the present research. TAM provides a simple and extensible framework that explains people's behavior towards technology based on cognitive evaluations such as perceived usefulness (PU) and perceived ease of use (PEOU) [25, 42] (Lin & Xu, 2021). While the strong influence of PU on the intention to use technology has been highlighted in many studies [25, 83, 108], the influence of PEOU may be more limited, especially as users' familiarity with technology increases [56, 67]. Despite the existence of alternative conceptualizations such as UTAUT [97] and Artificially Intelligent Device Use Acceptance (AIDUA) [33], this study's focus on the mediating and moderating roles of AI literacy, AI self-efficacy, and AI anxiety in the relationship between students' general attitudes towards AI and their level of acceptance renders the structure of TAM, which centers on user perceptions, particularly suitable for the present context. Furthermore, it has been established that PU and PEOU perceptions are influenced by exogenous variables such as individual technology experience [25]. These influences shape user acceptance of AI, and the model grounds the relationships between the study's variables theoretically.

Users' perceptions of AI acceptance can be influenced by their PU, PEOU, and prior encounters with new technologies. AI literacy enhances perceptions of PU and PEOU, influencing how widely AI-powered technology is accepted [88]. AI literacy can improve self-confidence in using AI tools by increasing AI self-efficacy, and this self-confidence serves as a buffer against worry related to AI learning. Hence, depending on the level of AI literacy, learning anxiety, self-efficacy, and general attitude toward AI, people's adoption of AI may be easier or more difficult. While many studies have examined the relationships among AI acceptance, attitude towards AI, AI anxiety, and intention to use AI in terms of TAM [44, 55, 110], very few have examined how AI learning anxiety, AI literacy, and AI self-efficacy affect the general attitude toward and acceptance of AI. Therefore, there is a need to examine factors associated with the general attitude toward AI in order to develop effective intervention strategies that support AI acceptance. Moreover, university students are thought to have limited knowledge about the variables influencing AI acceptance [90]. This research is expected to contribute to a better understanding of the factors predicting university students' acceptance of AI.

We employed a serial mediation model, which is theoretically grounded in the Technology Acceptance Model (TAM) developed by Davis [25]. TAM emphasizes that individuals’ perceptions of a technology’s usefulness and ease of use are the main determinants that will affect its acceptance and adoption [85]. As individuals learn more about the new technologies, such as AI, their confidence in applying it is more likely to increase [5]. The enhanced confidence helps to reduce psychological barriers, including uncertainty, lack of control, and anxiety. In this context, AI literacy equips students with basic knowledge about how AI systems function, thereby enhancing their self-efficacy [8, 35]. Higher self-efficacy not only alleviates AI-related anxiety but also fosters greater openness to adopting and using AI [16]. Since these mediators are interconnected and operate as a sequential psychological mechanism rather than as separate factors, the application of a serial mediation model is theoretically well justified.

The present study proposed a new model based on the aforementioned empirical and theoretical relationships (see Fig. 1). Specifically, five hypotheses were tested: (i) general attitude toward AI predicts AI acceptance; (ii) AI literacy mediates the relationship between general attitude toward AI and AI acceptance; (iii) AI self-efficacy mediates the relationship between general attitude toward AI and AI acceptance; (iv) AI literacy and AI self-efficacy serially mediate the relationship between general attitude toward AI and AI acceptance; and (v) AI learning anxiety moderates the relationship between general attitude toward AI and AI acceptance.

Fig. 1. Visualizing the proposed model

Method

Participants

A total of 356 university students from Türkiye participated in the study, of whom 265 (74.4%) were female. The participants' ages ranged from 18 to 46 (M = 22.71, SD = 3.72). Regarding grade level, 73 (20.5%) were freshmen, 64 (18.5%) were sophomores, 125 (35.1%) were juniors, and 94 (26.4%) were seniors. In terms of self-reported socio-economic status, 61 (17.1%) of the individuals reported low, 288 (80.9%) moderate, and 7 (2.0%) high status. The participants were from diverse departments, including social science (n = 21, 5.9%), theology (n = 25, 7.0%), English language teaching (n = 26, 7.3%), mathematics education (n = 35, 9.8%), preschool education (n = 13, 3.7%), guidance and psychological counseling (n = 67, 18.8%), elementary teacher education (n = 80, 22.5%), social studies education (n = 52, 14.6%), and Turkish language education (n = 37, 10.4%). Mothers of participants had the following educational levels: 10 were illiterate (2.8%), 88 had primary school degrees (24.7%), 176 had middle school degrees (49.4%), 47 had high school degrees (13.1%), and 35 had university degrees or higher (e.g., master's degree or Ph.D.) (9.8%). The participants' fathers had the following educational levels: 4 were illiterate (1.1%), 66 had primary school degrees (18.5%), 119 had middle school degrees (33.4%), 83 had high school degrees (23.3%), and 84 had university degrees or higher (e.g., master's degree or Ph.D.) (23.6%). Participants took part voluntarily and were recruited through convenience sampling, which led to a gender imbalance (i.e., 74.4% were female). Although convenience sampling offers advantages in accessibility, time, and cost, it limits the generalizability of the sample. We acknowledge the potential impact of this factor on the results.

Power analysis

A power analysis was conducted to calculate the required sample size using the G*Power program (version 3.1.9.7). A medium effect size of f² = 0.15 was assumed at the conventional significance level of 0.05 (α) and a predetermined power of 0.95 (1 − β) [21]. The analysis indicated that a minimum of 119 participants was required. After an adequate sample had been obtained, we conducted a post-hoc power analysis, which yielded a power of 0.99 (1 − β). Based on these results, the sample size was deemed adequate for the analyses.
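For readers without G*Power, the calculation can be sketched in a few lines. The snippet below assumes a linear multiple regression F-test with three predictors (the number of predictors is not stated above) and G*Power's noncentrality convention λ = f²·N; under slightly different conventions the minimum N shifts by a few participants around the reported 119.

```python
from scipy.stats import f, ncf

def power_mlr(n, n_pred=3, f2=0.15, alpha=0.05):
    """Power of the multiple-regression F-test (noncentrality lambda = f2 * N)."""
    u, v = n_pred, n - n_pred - 1            # numerator / denominator df
    lam = f2 * n                             # noncentrality parameter
    f_crit = f.ppf(1 - alpha, u, v)          # critical F under H0
    return 1 - ncf.cdf(f_crit, u, v, lam)    # P(F > f_crit | H1)

# smallest N reaching the predetermined power of 0.95
n = 10
while power_mlr(n) < 0.95:
    n += 1
print(n)                                     # lands near the reported 119
print(round(power_mlr(356), 4))              # post-hoc power for the full sample
```

For the full sample of 356, the post-hoc power is indistinguishable from 1, consistent with the reported 0.99.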

Measures

AI Attitude Scale (AIAS-4)

The 4-item AIAS-4 [31] assessed the general attitude toward AI. Participants rated items such as "I believe that AI will improve my work" on a ten-point Likert scale ranging from 1 (not at all) to 10 (completely agree). The AIAS-4 was translated and adapted into Turkish for this study using a translation-back-translation procedure: four field experts translated the scale into Turkish, four field experts completed the back translation, and the final form was evaluated and finalized by four field experts. In this study, Cronbach's alpha coefficient was 0.89, and McDonald's ω was 0.89. Moreover, confirmatory factor analysis showed that the model fitted the data well: χ²/df = 2.52, RMSEA = 0.06, CFI = 0.99, TLI = 0.99 [45, 91].

Artificial Intelligence Literacy Scale (AILS)

The 12-item AILS ([99]; Turkish version: [24]) was used to assess AI literacy. Items (e.g., "I can use AI applications or products to improve my work efficiency") are rated on a seven-point Likert scale from 1 (strongly disagree) to 7 (strongly agree). The total score ranges between 12 and 84; the higher the score, the higher the AI literacy. In this study, Cronbach's alpha coefficient was 0.76, and McDonald's ω was 0.78. Moreover, confirmatory factor analysis showed that the model fitted the data well: χ²/df = 3.06, RMSEA = 0.07, CFI = 0.95, TLI = 0.93 [45, 91].

Artificial Intelligence Self-Efficacy Scale (AISES)

The 5-item Technology Self-Efficacy Scale (Montag, 2023) was adapted to assess AI self-efficacy by replacing "technology" with "artificial intelligence technologies/products" in each item. Participants rated items such as "I am unsure about my ability to use artificial intelligence technologies/products" on a seven-point Likert scale ranging from 1 (do not agree at all) to 7 (completely agree). The AISES was translated and adapted into Turkish for this study using a translation-back-translation procedure: four field experts translated the scale into Turkish, four field experts completed the back translation, and the final form was evaluated and finalized by four field experts. In this study, Cronbach's alpha coefficient was 0.77, and McDonald's ω was 0.79. Moreover, confirmatory factor analysis showed an acceptable model fit: χ²/df = 3.28, RMSEA = 0.08, CFI = 0.94, TLI = 0.86 [45, 91].

Generative Artificial Intelligence Acceptance Scale (GAIAS)

The 20-item GAIAS [111] assessed AI acceptance. Items (e.g., "Generative AI applications help me get things done faster") are rated on a five-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). The total score ranges between 20 and 100; the higher the score, the higher the AI acceptance. In this study, Cronbach's alpha coefficient was 0.92, and McDonald's ω was 0.92. Moreover, confirmatory factor analysis showed that the model fitted the data well: χ²/df = 3.02, RMSEA = 0.07, CFI = 0.92, TLI = 0.91 [45, 91].

Artificial Intelligence Anxiety Scale (AIAS)

The 21-item AIAS ([104]; Turkish version: [93]) assessed AI anxiety. The scale comprises learning, job replacement, AI configuration, and socio-technical blindness sub-dimensions. The learning sub-dimension, consisting of 9 items, was used in this study. Items (e.g., "Learning to use AI techniques/products makes me anxious") are rated on a seven-point Likert scale from 1 (never) to 7 (completely). The total score ranges between 9 and 63; the higher the score, the higher the AI learning anxiety. In this study, Cronbach's alpha coefficient was 0.91, and McDonald's ω was 0.91. Moreover, confirmatory factor analysis showed that the model fitted the data well: χ²/df = 1.61, RMSEA = 0.04, CFI = 0.99, TLI = 0.98 [45, 91].
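Each scale above reports Cronbach's alpha alongside McDonald's ω. As a reference point, alpha is computed from the item variances and the total-score variance; a minimal sketch on simulated item data (illustrative only, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return k / (k - 1) * (1 - item_var_sum / total_var)

# simulated 9-item scale driven by one latent factor (hypothetical data)
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=0.8, size=(200, 9))
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

McDonald's ω additionally requires factor loadings from a one-factor model, which is why it is typically obtained from CFA software such as JASP rather than from this closed-form expression.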

Procedure

Data were collected via Google Forms, which included the scales and demographic questions. First, an information page described the study's purpose and duration. After this information was presented, participants were asked to confirm that they were participating voluntarily; the survey was not available to those who declined. Furthermore, all participants were permitted to leave the study at any time without giving any explanation. The inclusion criteria were being over 18 years of age and participating voluntarily. All stages of the research were carried out in accordance with the Declaration of Helsinki, and the study was approved by the Siirt University Review Board (reference number 6965). Convenience sampling, in which participants are chosen based on availability and accessibility (e.g., students conveniently located in a specific place), was used to select participants. The study was conducted in collaboration with public institutions, and students enrolled at these institutions were invited to participate.

Data analysis

First, the data were cleaned and checked for validity. Preliminary analyses verified that the assumptions required for regression-based analyses, namely normality, multivariate normality, linearity, and absence of multicollinearity, were met. Descriptive and correlational statistics were computed using SPSS 26.0. Mediation and moderation analyses were conducted with Hayes' PROCESS macro (version 3.0) [81], using 5,000 bootstrap resamples to construct 95% confidence intervals (CIs). In addition, the JASP program (version 0.18.3.0) was used for the confirmatory factor analyses investigating the validity of the scales.
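The percentile-bootstrap procedure that PROCESS applies to indirect effects can be illustrated with a self-contained sketch. The variables and effect sizes below are simulated stand-ins, not the study's data: the indirect effect a·b is re-estimated in each of 5,000 resamples and the 95% CI is read off the 2.5th and 97.5th percentiles.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 356
x = rng.normal(size=n)                        # simulated predictor (e.g., AI attitude)
m = 0.4 * x + rng.normal(size=n)              # simulated mediator
y = 0.5 * m + 0.6 * x + rng.normal(size=n)    # simulated outcome

def indirect_effect(idx):
    a = np.polyfit(x[idx], m[idx], 1)[0]                   # path a: X -> M
    design = np.column_stack([np.ones(len(idx)), x[idx], m[idx]])
    b = np.linalg.lstsq(design, y[idx], rcond=None)[0][2]  # path b: M -> Y given X
    return a * b

boot = [indirect_effect(rng.integers(0, n, n)) for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 3), round(hi, 3))   # a CI excluding zero indicates mediation
```

This percentile approach makes no normality assumption about the sampling distribution of a·b, which is why bootstrapped CIs are the standard inferential criterion in PROCESS.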

Results

Descriptive statistics

Table 1 shows the descriptive statistics and the relationships between variables. Correlations among the variables ranged from low to high. AI attitude had a low positive relationship with AI literacy (r = 0.12, p < 0.05), a low positive relationship with AI self-efficacy (r = 0.23, p < 0.001), and a high positive relationship with generative AI acceptance (r = 0.57, p < 0.001). AI anxiety had a negative relationship with AI attitude (r = −0.18, p < 0.001), AI self-efficacy (r = −0.28, p < 0.001), and generative AI acceptance (r = −0.14, p < 0.05).

Table 1.

Pearson correlations and descriptive statistics among variables

Variable                                        1        2        3        4       5
AI Attitude (1)                                 –
Artificial Intelligence Literacy (2)           .12*      –
Artificial Intelligence Self-Efficacy (3)      .23**   −.16**     –
Generative AI Acceptance (4)                   .57**    .12*     .33**     –
Artificial Intelligence Anxiety (5)           −.18**    .22**   −.28**   −.14*     –
Mean                                          26.64    45.96    21.48    68.58   28.29
Std. Deviation                                 7.81    10.65     5.14    12.04   10.77
Skewness                                       −.49     −.07     −.19     −.36     .42
Kurtosis                                        .07     −.45      .72     1.09    −.11

Strength of correlations: 2 with 1 = low; 3 with 1 and 2 = low; 4 with 1 = high, with 2 = low, with 3 = medium; 5 with 1–4 = low

Correlation strength is classified according to Cohen's [20] criteria: .10–.29 = low, .30–.49 = medium, ≥ .50 = high

** p < .001

* p < .05

Serial multiple mediational analyses

First, the total effect was examined before testing mediation. AI attitude directly affected generative AI acceptance (total effect: β = 0.877, p < 0.001; 95% CI = [0.74, 1.01]). When the two mediators (i.e., AI literacy and AI self-efficacy) were included simultaneously, the coefficient remained significant (direct effect: β = 0.778, p < 0.001; 95% CI = [0.64, 0.91]). AI attitude also positively predicted AI literacy (β = 0.162, p < 0.001; 95% CI = [0.02, 0.30]) and AI self-efficacy (β = 0.168, p < 0.001; 95% CI = [0.10, 0.23]).

AI attitude had a weak but significant indirect effect on AI acceptance via AI literacy (indirect effect = 0.02, SE = 0.01, 95% CI = [0.00, 0.14]). AI attitude also indirectly affected AI acceptance through AI self-efficacy, the strongest of the indirect pathways (indirect effect = 0.09, SE = 0.01, 95% CI = [0.05, 0.14]). Finally, AI attitude indirectly affected AI acceptance through AI literacy and self-efficacy in sequence (serial multiple mediation; indirect effect = −0.01, SE = 0.00, 95% CI = [−0.01, −0.00]). Although the serial pathway was statistically significant, its effect was small. Information regarding the serial mediation analysis is presented in Table 2.

Table 2.

The indirect effect of AI attitude on AI acceptance through AI literacy and self-efficacy

Path                                                                       Coefficient   95% CI (LL, UL)   Decision
AI Attitude ➔ AI Literacy ➔ Generative AI Acceptance                           .02          .00, .14       Supported
AI Attitude ➔ AI Self-Efficacy ➔ Generative AI Acceptance                      .09          .05, .14       Supported
AI Attitude ➔ AI Literacy ➔ AI Self-Efficacy ➔ Generative AI Acceptance       −.01         −.01, −.00      Supported
Total effect                                                                   .88          .50, .81
Direct effect                                                                  .78          .24, .52
Total indirect effect                                                          .10          .18, .37

LL = lower limit; UL = upper limit

We used the PROCESS macro (Model 6) to investigate whether AI attitude affected AI acceptance through AI literacy and self-efficacy. Table 2 indicates that hypotheses H1, H2, H3, and H4 were supported. AI attitude was a significant predictor of AI literacy, AI self-efficacy, and generative AI acceptance (p < 0.05). The findings in Fig. 2 revealed the significant pathways of the model. For H1, the 95% confidence interval was [0.706, 1.048]. The 95% confidence intervals for the indirect effects for H2 and H3 fell between [0.001 and 0.974]. For H4, it was [−0.018, −0.001]; as these intervals did not encompass zero, the results were statistically significant. Finally, the total mediation effect (β = 0.099, SE = 0.02, 95% CI = [0.06, 0.15]) represented 11.32% of the total effect (β = 0.877, SE = 0.07, 95% CI = [0.74, 1.00]) (see Fig. 2).
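As a consistency check, the Model 6 decomposition can be verified directly from the reported coefficients: the total effect should equal the direct effect plus the three indirect paths, and the proportion mediated follows from their ratio (matching the reported 11.32% up to rounding):

```python
# reported coefficients (Table 2 / Fig. 2)
total_effect, direct_effect = 0.877, 0.778
ind_literacy, ind_self_efficacy, ind_serial = 0.02, 0.09, -0.01

total_indirect = ind_literacy + ind_self_efficacy + ind_serial
# in PROCESS Model 6, total effect = direct effect + the three indirect paths
print(round(direct_effect + total_indirect, 2))       # 0.88, i.e. the total effect
print(round(total_indirect / total_effect * 100, 1))  # 11.4, % of total mediated
```

The small discrepancy from the reported 11.32% arises because the individual indirect effects are rounded to two decimals here, whereas the paper's figure uses the unrounded sum (0.099).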

Fig. 2. Modeling serial multiple mediations

Conditional process analysis

Considering the theoretical model, the direct path and the second half-path of the mediation model were specified for moderation. Therefore, conditional process analysis (i.e., moderated mediation) was conducted using Hayes' PROCESS macro [81]. To test H5, the model examined whether AI self-efficacy mediates the effect of AI attitude on generative AI acceptance while AI anxiety moderates the path from AI attitude to acceptance. The analysis showed that AI anxiety moderated the relationship between AI attitude and generative AI acceptance (interaction = −0.02, SE = 0.01, p < 0.05; 95% CI = [−0.03, −0.01]) (see Table 3). This result supported H5. Accordingly, the relationship between AI attitude and AI acceptance varies depending on the level of AI anxiety (see Fig. 3).

Table 3. Conditional process analysis

Outcome M (artificial intelligence self-efficacy):
  X (AI attitude): Coeff = .15, SE = .04, p < .001
  Constant: Coeff = 17.41, SE = .96, p < .001
  R = .23; ΔR2 = .05; F = 19.07, p < .001

Outcome Y (generative artificial intelligence acceptance):
  X (AI attitude): Coeff = 1.24, SE = .25, p < .001
  M (artificial intelligence self-efficacy): Coeff = .48, SE = .12, p < .001
  W (artificial intelligence anxiety): Coeff = .43, SE = .20, p < .05
  X × W: Coeff = −.02, SE = .01, p < .05
  Constant: Coeff = 24.98, SE = 7.56, p < .001
  R = .61; ΔR2 = .38; F = 35.41, p < .001

Conditional indirect effect(s) of the predictor at values of artificial intelligence anxiety:
  −1 SD: bootstrapped indirect effect = .96, Boot SE = .14, Boot LLCI = .69, Boot ULCI = 1.22
  Mean: bootstrapped indirect effect = .78, Boot SE = .09, Boot LLCI = .61, Boot ULCI = .95
  +1 SD: bootstrapped indirect effect = .61, Boot SE = .09, Boot LLCI = .43, Boot ULCI = .79

Fig. 3 Modelling the moderating effect of artificial intelligence anxiety

Finally, the conditional process analysis probed the effect of AI attitude on generative AI acceptance at three levels of AI anxiety (1 SD below the mean, the mean, and 1 SD above). The results indicate that AI anxiety moderated this relationship: the conditional effect of AI attitude was significant at low (β = 0.96, 95% CI = [0.69, 1.22]), moderate (β = 0.78, 95% CI = [0.61, 0.95]), and high (β = 0.61, 95% CI = [0.43, 0.79]) levels of AI anxiety, weakening as anxiety increased (see Fig. 3).
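The simple-slopes probe at ±1 SD can be illustrated with a short simulation. The generating coefficients below are chosen to echo the pattern in Table 3 (a negative attitude × anxiety interaction); the data, scale means, and standard deviations are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 356  # sample size of the study

# Simulated raw-scale scores: attitude (X) and learning anxiety (W).
X = rng.normal(16, 4, n)
W = rng.normal(23, 9, n)
# Generating model mirrors Table 3: a negative X*W interaction means
# higher anxiety weakens the attitude -> acceptance link.
Y = 25 + 1.24 * X + 0.43 * W - 0.02 * X * W + rng.normal(0, 2, n)

# Fit Y ~ X + W + X*W by OLS.
Z = np.column_stack([np.ones(n), X, W, X * W])
b0, b1, b2, b3 = np.linalg.lstsq(Z, Y, rcond=None)[0]

# Simple slopes of attitude at low, mean, and high anxiety.
for label, w in [("-1 SD", W.mean() - W.std()),
                 ("mean ", W.mean()),
                 ("+1 SD", W.mean() + W.std())]:
    print(f"anxiety {label}: effect of attitude = {b1 + b3 * w:.2f}")
```

The recovered slopes decrease as anxiety rises, reproducing the .96 / .78 / .61 ordering reported above.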

Discussion

This study tested the mechanism underlying the link between general attitudes towards AI and acceptance of AI. The results confirm the study's first hypothesis that general attitudes towards AI predict AI acceptance. Consistent with this, negative attitudes towards AI have been found to create barriers to its use, whereas people tend to hold positive attitudes when they perceive benefits from AI applications [53, 78]. University students have been found to admire artificial intelligence while also harboring concerns about it [72]. Given that educational institutions operate at high capacity and advisory services are increasingly strained, AI can offer students opportunities for guidance. In addition, artificial intelligence can make the academic field more personalized, effective, and multicultural [76]. Taken together, these findings suggest that a positive attitude towards the benefits of artificial intelligence increases the acceptance of artificial intelligence in the educational process. It is therefore important for educators to introduce AI to their students; they can also design assignments that support the functional use of AI to strengthen students' positive attitudes toward it.

A key element affecting the acceptance of AI is perceived user-friendliness [65]. University students who recognize benefits associated with AI, such as flexibility and convenience, may be more likely to accept it and use it in their academic work. Attitudes toward AI have been found to predict the desire to use AI and behavioral intention [44], suggesting that a positive attitude is an important prerequisite for the acceptance of AI in general. For students to form a positive attitude towards AI, antecedents such as perceived usefulness, ease of use, and desire appear necessary. On the basis of these antecedents, students can be expected to develop positive attitudes toward AI and, therefore, to be more accepting of it.

Additionally, this study's findings confirm the second and third hypotheses: AI literacy and AI self-efficacy mediate the relationship between general attitude towards AI and AI acceptance. Constructs such as AI literacy and self-efficacy may thus play important roles in AI acceptance. Educators' strategies for course content can be critical in improving university students' AI self-efficacy and literacy, and students with positive attitudes toward AI may in turn show higher acceptance of AI. However, although university students generally have a basic understanding of AI, there are significant differences in their AI literacy [38]. Moreover, AI is expected to test digital literacy in particular [74], and digital literacy has been shown to be important for AI-based interventions [41]. AI self-efficacy predicts behavioral intention [51], indicating that individuals with high self-efficacy are more likely to embrace AI. From this perspective, AI literacy and self-efficacy may be critical for accepting AI. Indeed, according to Kong et al. [46], university students with high AI literacy and self-efficacy are more likely to be empowered, proactive, and ethically informed citizens. In addition, such students can access information quickly, perform statistical analyses, learn foreign languages more easily, carry out projects, and engage in artistic activities; these activities can in turn strengthen students' positive attitudes towards AI and their acceptance of it. In another respect, the current finding relates to the negative correlation between AI literacy and self-efficacy.
Students with high AI literacy may experience low self-efficacy precisely because they recognize the significant potential and scope of AI. Individuals may also feel low self-efficacy if they believe that advances in artificial intelligence will occur across many domains at once, leading them to feel unable to keep pace with these developments. This correlation may further relate to the functional use of AI: individuals who use AI for convenience and utilitarian purposes may come to see themselves as consumers, and the perception of being merely a consumer of AI, rather than a producer, may undermine self-efficacy. However, in contrast to the current finding, Asio [5] found a positive relationship between AI literacy and self-efficacy. This difference may be explained by the differing dynamics of the research groups; indeed, these studies diverge in the number of participants, their places of origin, academic departments, socio-economic status, and parental attributes.

The present study found that AI literacy and AI self-efficacy had a serial multiple mediation effect in the relationship between general attitudes towards AI and generative AI acceptance, confirming the fourth hypothesis. Similarly, digital literacy and self-efficacy have been found to relate to perceptions of AI [54], and attitudes towards AI and self-efficacy influence the use of AI technology [47]. These results, alongside the current findings, indicate that self-efficacy and literacy may play a critical role in AI acceptance. Furthermore, the relationship between technology self-efficacy and attitudes toward AI was mediated by the propensity to trust technology [66], and AI ability was predictive of general self-efficacy and motivation to learn [40]. People with high trust in technology and motivation to learn are more likely to accept artificial intelligence. As a result, constructs such as literacy and self-efficacy play an important role in the relationship between attitudes toward and acceptance of AI. As university students develop their AI literacy and self-efficacy, they can more effectively accept and use AI and improve their academic skills.

This study indicates that AI learning anxiety moderates the relationship between general attitudes toward AI and AI acceptance, confirming the fifth hypothesis. The moderating function of AI learning anxiety can be explained by the emotion of anxiety itself. Anxious individuals have been found to have high extrinsic motivation [63]. Furthermore, while AI learning anxiety negatively affects learning motivation, anxiety has been found to positively affect extrinsic motivation [102]. The moderating role of AI learning anxiety is therefore an expected result. The learning and socio-technical dimensions of AI anxiety act as complementary partial mediators between AI literacy and acceptance, meaning that part of the effect of literacy on acceptance operates through AI anxiety [88]. Additionally, AI anxiety and AI acceptance attitudes have a dual mediating effect on the relationship between AI perceptions and intentions to use AI [18]. University students with low AI anxiety may be more likely to utilize e-learning tools and accessible electronic resources, resulting in higher course participation rates [3]. In addition, perceived anxiety towards AI in university students has been associated with unemployment anxiety [96]. Reducing such anxiety may increase students' willingness to learn and accept AI, thereby easing potential problems in their future career planning. Together, the present results and similar research demonstrate the critical role of anxiety in attitudes towards, acceptance of, and use of AI. The current research also offers practical implications for policymakers and educators. University students can be supported in developing positive attitudes towards AI, which is expected to become increasingly important in the near future, through access to accurate information; in this way, students can keep pace with the developments of the modern age.
Policymakers can develop incentive practices to support university students' AI literacy and self-efficacy, and policies can be developed at the university level to integrate AI into various scientific fields, curricula, and course content. In this way, students' AI acceptance levels and positive AI attitudes can be increased. Additionally, educators can assess their students' AI anxiety levels and take measures to reduce them. This can foster acceptance of AI in education and encourage its functional use; indirectly, students' scientific development can be supported by reducing their anxiety about learning AI.

Limitations

This study has some limitations, as all studies do. First, the sample size can be considered a limitation. Although the sample is sufficient to test the hypotheses in the research model, it is too small to represent all university students; future studies could also be conducted with different age groups, such as high school students and middle-aged and older adults. Convenience sampling may further limit the generalizability of the results, and the sample was gender imbalanced (roughly three-quarters of the participants were women). Future studies could therefore consider random sampling and a more balanced and diverse set of participants. The second limitation concerns data collection and analysis. Although valid and reliable scales were used, multimodal data collection techniques could further explore participants' attitudes, acceptance, anxiety, literacy, and self-efficacy regarding AI. For example, qualitative data from interviews or voice recordings could be analyzed alongside the quantitative data, enriching both the data and the analyses. In addition, because the research was cross-sectional, causal relationships cannot be established; longitudinal and experimental research on AI could address this deficiency. The present study employed Hayes's PROCESS macro to test the hypothesized model. This approach uses ordinary least squares (OLS) regression without assuming or testing measurement models or latent constructs (Hayes & Matthes, 2009). Although the reliability and validity of the measurement tools were evaluated and the model is corroborated by the existing literature, this nonetheless presents a limitation; future research could use approaches that test the measurement model, such as structural equation modeling.
Third, although all hypotheses in the study were confirmed, there may be other determinants related to AI. Therefore, additional constructs from the Unified Theory of Acceptance and Use of Technology (UTAUT) and the Technology Acceptance Model (TAM) could be included in future research.

Acknowledgements

We are grateful to all the individuals who participated in this study.

Abbreviations

AI

Artificial Intelligence

TAM

Technology Acceptance Model

PU

Perceived Usefulness

PEOU

Perceived Ease of Use

UTAUT

Unified Theory of Acceptance and Use of Technology

AIDUA

Artificially Intelligent Device Use Acceptance

Err

Error

Prob

Probability

CI

Confidence Intervals

LL

Lower Limit

UL

Upper Limit

Authors’ contributions

Study conception/design; BB, NT. Data collection; BB. Analysis; AK. Drafting of manuscript; BB, OY, AK, NT. Statistical expertise; AK. Supervisor and Editing; NT, AK. Administrative/technical/material support; BB, OY, AK, NT.

Funding

Not applicable.

Data availability

The data supporting this study's findings are available from the corresponding author upon reasonable request. The data were anonymized, ensuring that there was no breach of privacy. It will be shared in a manner that respects ethical protocols and data protection regulations. The dataset will be accessible only for academic purposes, and any use of the data will recognize the original study and maintain the confidentiality of the participants.

Declarations

Ethics approval and consent to participate

This research was also approved by the Ethics Committee of Siirt University (reference number 6965). Prior to data collection, each participant was asked to read the form containing information about the study. In addition, each participant accepted a written informed consent form.


The procedures followed throughout the study were in accordance with the 1964 Declaration of Helsinki, the ethics guidelines of the APA, and the ethics policy of higher education institutions for scientific research and publication.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Agarwal R, Karahanna E. Time flies when you’re having fun: cognitive absorption and beliefs about information technology usage. MIS Q. 2000;24(4):665–94. 10.2307/3250951. [Google Scholar]
  • 2.Ahmad I. AI awareness, facilitating factors, perceived risk, attitude toward AI, behavior toward AI, AI application and ethical concerns: a study of higher education institutions. J Soc Sci Rev. 2025;5(1):600–16. 10.62843/jssr.v5i1.529. [Google Scholar]
  • 3.Almaiah MA, Alfaisal R, Salloum SA, Hajjej F, Thabit S, El-Qirem FA, et al. Examining the impact of artificial intelligence and social and computer anxiety in e-learning settings: students’ perceptions at the university level. Electronics. 2022;11(22):3662. 10.3390/electronics11223662. [Google Scholar]
  • 4.Al-Mamary YH. A comprehensive model for AI adoption: Analysing key characteristics affecting user attitudes, intentions and use of ChatGPT in education. Human Systems Management. 2024:01672533251340523. 10.1177/01672533251340523.
  • 5.Asio JMR. AI literacy, self-efficacy, and self-competence among college students: variances and interrelationships among variables. MOJES: Malaysian Online Journal of Educational Sciences. 2024;12(3):44–60. 10.22452/aldad.vol12no3.4. [Google Scholar]
  • 6.Batuk B, Aktu Y, Türk N. Yapay Zeka Kabul Ölçeği Kısa Formu’nun Psikometrik Özelliklerinin İncelenmesi. Çukurova Sosyal Bilimler Enstitüsü Dergisi. 2025;34(Uygarlığın Dönüşümü-Sosyal Bilimlerin Bakışıyla Yapay Zekâ):438–451. 10.35379/cusosbil.1695975.
  • 7.Bayne S. Teacherbot: interventions in automated teaching. Teach High Educ. 2015;20(4):455. 10.1080/13562517.2015.1020783. [Google Scholar]
  • 8.Bećirović S, Polz E, Tinkel I. Exploring students’ AI literacy and its effects on their AI output quality, self-efficacy, and academic performance. Smart Learn Environ. 2025;12(1):29. 10.1186/s40561-025-00384-3. [Google Scholar]
  • 9.Bellary S, Bala PK, Chakraborty S. Exploring cognitive-behavioral drivers impacting consumer continuance intention of fitness apps using a hybrid approach of text mining, SEM, and ANN. J Retail Consum Serv. 2024;81:104045. 10.1016/j.jretconser.2024.104045. [Google Scholar]
  • 10.Bozkurt A. Unleashing the potential of generative AI, conversational agents and chatbots in educational praxis: a systematic review and bibliometric analysis of GenAI in education. Open Prax. 2023;15(4):261–70. [Google Scholar]
  • 11.Börekci C, Çelik Ö. Exploring the role of digital literacy in university students’ engagement with AI through the technology acceptance model. Sakarya University Journal of Education. 2024;14(2):228–49. 10.19126/suje.1468866. [Google Scholar]
  • 12.Brew M, Taylor S, Lam R, Havemann L, Nerantzi C. Towards developing AI literacy: three student provocations on AI in higher education. Asian J Distance Educ. 2023;18(2):1–11. 10.5281/zenodo.8032387. [Google Scholar]
  • 13.Chan CKY. Students’ perceptions of “AI-giarism”: investigating changes in understandings of academic misconduct. Educ Inf Technol. 2025;30:8087–108. 10.1007/s10639-024-13151-7. [Google Scholar]
  • 14.Chang PC, Zhang W, Cai Q, Guo H. Does AI-driven technostress promote or hinder employees’ artificial intelligence adoption intention? A moderated mediation model of affective reactions and technical self-efficacy. Psychol Res Behav Manag. 2024:413-427. 10.2147/PRBM.S441444. [DOI] [PMC free article] [PubMed]
  • 15.Changsheng W. Research on prompt engineering for large model art image generation. J Graphics. 2024;45(6):1243–55. 10.11996/JG.j.2095-302X.2024061243. [Google Scholar]
  • 16.Chen C, Hu W, Wei X. From anxiety to action: exploring the impact of artificial intelligence anxiety and artificial intelligence self-efficacy on motivated learning of undergraduate students. Interact Learn Environ. 2025;33(4):3162–77. 10.1080/10494820.2024.2440877. [Google Scholar]
  • 17.Chen K, Chen JV, Yen DC. Dimensions of self-efficacy in the study of smart phone acceptance. Comput Stand Interfaces. 2011;33(4):422–31. 10.1016/j.csi.2011.01.003. [Google Scholar]
  • 18.Cho KA, Seo YH. Dual mediating effects of anxiety to use and acceptance attitude of artificial intelligence technology on the relationship between nursing students’ perception of and intention to use them: a descriptive study. BMC Nurs. 2024;23(1):212. 10.1186/s12912-024-01887-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Chu HC, Hwang GH, Tu YF, Yang KH. Roles and research trends of artificial Intelligence in higher Education: A systematic review of the top 50 most-cited articles. Australasian Journal of Educational Technology. 2022. 10.14742/ajet.7526.
  • 20.Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988. [Google Scholar]
  • 21.Cohen J. Statistical power analysis for the behavioral sciences. Routledge; 2013. [Google Scholar]
  • 22.Compeau DR, Higgins CA. Computer self-efcacy: development of a measure and initial test. MIS Q. 1995;19(2):189–211. 10.2307/249688. [Google Scholar]
  • 23.Crompton H, Burke D. Artificial intelligence in higher education: the state of the field. Int J Educ Technol High Educ. 2023;20(1):22. 10.1186/s41239-023-00392-8. [Google Scholar]
  • 24.Çelebi C, Yılmaz F, Demir U, Karakuş F. Artificial intelligence literacy: An adaptation study. Instructional Technology and Lifelong Learning. 2023;4(2):291–306. 10.52911/itall.1401740. [Google Scholar]
  • 25.Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989;13(3):319–40. 10.2307/249008. [Google Scholar]
  • 26.Derinalp P, Ozyurt M. Adaptation of the student attitudes toward artificial intelligence scale to the Turkish context: validity and reliability study. Int J Human-Computer Interaction. 2025;41(8):4653–67. 10.1080/10447318.2024.2352921. [Google Scholar]
  • 27.Doğan M, Celik A, Arslan H. AI in higher education: risks and opportunities from the academician perspective. Eur J Educ. 2025;60(1):e12863. 10.1111/ejed.12863. [Google Scholar]
  • 28.Floridi L, Chiriatti M. GPT-3: its nature, scope, limits, and consequences. Minds Mach. 2020;30:681–94. 10.1007/s11023-020-09548-1. [Google Scholar]
  • 29.Gerçek M. Serial multiple mediation of career adaptability and self-perceived employability in the relationship between career competencies and job search self-efficacy. Higher Education, Skills and Work-Based Learning. 2024;14(2):461–78. 10.1108/HESWBL-02-2023-0036. [Google Scholar]
  • 30.Granter SR, Beck AH, Jr Papke DJ. AlphaGo, deep learning, and the future of the human microscopist. Arch Pathol Lab Med. 2017;141(5):619–21. 10.5858/arpa.2016-0471-ED. [DOI] [PubMed] [Google Scholar]
  • 31.Grassini S. Development and validation of the AI attitude scale (AIAS-4): a brief measure of general attitude toward artificial intelligence. Front Psychol. 2023;14:1191628. 10.3389/fpsyg.2023.1191628. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Guan C, Mou J, Jiang Z. Artificial intelligence innovation in education: a twenty-year data-driven historical analysis. Int J Innov Stud. 2020;4(4):134–47. 10.1016/j.ijis.2020.09.001. [Google Scholar]
  • 33.Gursoy D, Chi OH, Lu L, Nunkoo R. Consumers acceptance of artificially intelligent (AI) device use in service delivery. Int J Inf Manage. 2019;49:157–69. 10.1016/j.ijinfomgt.2019.03.008. [Google Scholar]
  • 34.Hagendorff T. The ethics of AI ethics: an evaluation of guidelines. Minds Mach. 2020;30(1):99–120. 10.1007/s11023-020-09517-8. [Google Scholar]
  • 35.He T, Huang J, Li Y, Wang L, Liu J, Zhang F, et al. The mediation effect of AI self-efficacy between AI literacy and learning engagement in college nursing students: a cross-sectional study. Nurse Educ Pract. 2025. 10.1016/j.nepr.2025.104499. [DOI] [PubMed] [Google Scholar]
  • 36.Ho MT, Mantello P, Ho MT. An analytical framework for studying attitude towards emotional AI: the three-pronged approach. MethodsX. 2023;10:102149. 10.1016/j.mex.2023.102149. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Hong F, Dou W, Chen S. Research on the impact of artificial Intelligence on government public service quality. In Proceedings of the 2022 2nd International Conference on Public Management and Intelligent Society (PMIS 2022) (pp. 731–740). Atlantis Press. 2022. 10.2991/978-94-6463-016-9_74.
  • 38.Hornberger M, Bewersdorff A, Nerdel C. What do university students know about artificial intelligence? Development and validation of an AI literacy test. Comp Educ: Artificial Intelligence. 2023;5:100165. 10.1016/j.caeai.2023.100165. [Google Scholar]
  • 39.Hsu MH, Chiu CM. Internet self-efcacy and electronic service acceptance. Decis Support Syst. 2004;38(3):369–81. 10.1016/j.dss.2003.08.001. [Google Scholar]
  • 40.Jia XH, Tu JC. Towards a new conceptual model of AI-enhanced learning for college students: the roles of artificial ıntelligence capabilities, general self-efficacy, learning motivation, and critical thinking awareness. Systems. 2024;12(3):74–99. 10.3390/systems12030074. [Google Scholar]
  • 41.Kang EYN, Chen DR, Chen YY. Associations between literacy and attitudes toward artificial intelligence–assisted medical consultations: the mediating role of perceived distrust and efficiency of artificial intelligence. Comput Hum Behav. 2023;139:107529. 10.1016/j.chb.2022.107529. [Google Scholar]
  • 42.Kashive N, Powale L, Kashive K. Understanding user perception toward artificial intelligence (AI) enabled e-learning. Int J Inf Learn Technol. 2020;38(1):1–19. 10.1108/IJILT-05-2020-0090. [Google Scholar]
  • 43.Kaya F, Aydin F, Schepman A, Rodway P, Yetişensoy O, Demir Kaya M. The roles of personality traits, AI anxiety, and demographic factors in attitudes toward artificial intelligence. Int J Human-Comput Interact. 2024;40(2):497–514. 10.1080/10447318.2022.2151730. [Google Scholar]
  • 44.Kelly S, Kaye SA, Oviedo-Trespalacios O. What factors contribute to the acceptance of artificial intelligence? A systematic review. Telemat Inform. 2023;77:101925. 10.1016/j.tele.2022.101925. [Google Scholar]
  • 45.Kline RB. Principles and practices of structural equation modeling. Guilford Press; 2011. [Google Scholar]
  • 46.Kong SC, Cheung WMY, Zhang G. Evaluating an artificial intelligence literacy programme for developing university students' conceptual understanding, literacy, empowerment, and ethical awareness. Educ Technol Society. 2023;26(1):16–30. 10.30191/ETS.202301_26(1).0002.
  • 47.Kwak Y, Ahn JW, Seo YH. Influence of AI ethics awareness, attitude, anxiety, and self-efficacy on nursing students’ behavioral intentions. BMC Nurs. 2022;21(1):267. 10.1186/s12912-022-01048-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Li J, Huang JS. Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory. Technol Soc. 2020;63:101410. 10.1016/j.techsoc.2020.101410. [Google Scholar]
  • 49.Li R, Ouyang J, Lin J, Ouyang S. Mediating effect of AI attitudes and AI literacy on the relationship between career self-efficacy and job-seeking anxiety. BMC psychology. 2025;13(1):454. 10.1186/s40359-025-02757-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Li R, Ouyang S, Lin J. Mediating effect of AI attitudes and AI literacy on the relationship between career self-efficacy and job-seeking anxiety. BMC Psychol. 2025;13:454. 10.1186/s40359-025-02757-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Li X, Jiang MYC, Jong MSY, Zhang X, Chai CS. Understanding medical students’ perceptions of and behavioral intentions toward learning artificial intelligence: a survey study. Int J Environ Res Public Health. 2022;19(14):8733. 10.3390/ijerph19148733. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Lichtenthaler U. Beyond artificial intelligence: why companies need to go the extra step. J Bus Strategy. 2018;41(1):19–26. 10.1108/JBS-05-2018-0086. [Google Scholar]
  • 53.Lichtenthaler U. Extremes of acceptance: employee attitudes toward artificial intelligence. J Bus Strategy. 2020;41(5):39–45. 10.1108/JBS-12-2018-0204. [Google Scholar]
  • 54.Lim EM. The effects of pre-service early childhood teachers’ digital literacy and self-efficacy on their perception of AI education for young children. Educ Inf Technol. 2023;28(10):12969–95. 10.1007/s10639-023-11724-6. [Google Scholar]
  • 55.Lin CY, Xu N. Extended TAM model to explore the factors that affect intention to use AI robotic architects for architectural design. Technology Analysis & Strategic Management. 2022;34(3):349–62. 10.1080/09537325.2021.1900808. [Google Scholar]
  • 56.Liu Z, Shan J, Pigneur Y. The role of personalized services and control: an empirical evaluation of privacy calculus and technology acceptance model in the mobile context. J Inf Priv Secur. 2016;12(3):123–44. 10.1080/15536548.2016.1206757. [Google Scholar]
  • 57.Lucci S, Kopec D, Musa SM. Artificial Intelligence in the 21st century, second ed. In: Mercury Learning and Information, Virginia. 2022. 10.1515/9781683922520.
  • 58.Lund BD, Wang T. Chatting about ChatGPT: how may AI and GPT impact academia and libraries? Libr Hi Tech News. 2023;40(3):26–9. 10.1108/LHTN-01-2023-0009. [Google Scholar]
  • 59.Ma D, Akram H, Chen IH. Artificial intelligence in higher education: a cross-cultural examination of students’ behavioral intentions and attitudes. Int Rev Res Open Distrib Learn. 2024;25(3):134–57. 10.19173/irrodl.v25i3.7703. [Google Scholar]
  • 60.Ma Y, Siau KL. Artificial intelligence impacts on higher education. proceedings of the thirteenth midwest association for information systems conference. 2018;May 17–18(September):1–6.
  • 61.Makridakis S. The forthcoming artificial intelligence (AI) revolution: its impact on society and firms. Futures. 2017;90:46–60. 10.1016/j.futures.2017.03.006. [Google Scholar]
  • 62.McArthur D, Lewis M, Bishary M. The roles of artificial intelligence in education: current progress and future prospects. J Educ Technol. 2005;1(4):42–80. [Google Scholar]
  • 63.Mehta MM, Butler G, Ahn C, Whitaker YI, Bachi K, Jacob Y, et al. Intrinsic and extrinsic control impact motivation and outcome sensitivity: the role of anhedonia, stress, and anxiety. Psychol Med. 2024;54(16):4575–84. 10.1017/S0033291724002022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Mellul C. Emerging technologies in higher Education and the workplace: An assessment. THEFUTURE. 2018;223.https://uniapac.org/wp-content/uploads/2023/01/UNIAPAC_LIVRE_FUTURE-OF-ENTERPRISE_APRIL_2022_WEB.pdf#page=224.
  • 65.Mohr S, Kühl R. Acceptance of artificial Intelligence in German agriculture: an application of the technology acceptance model and the theory of planned behavior. Precis Agric. 2021;22(6):1816–44. 10.1007/s11119-021-09814-x. [Google Scholar]
  • 66.Montag C, Kraus J, Baumann M, Rozgonjuk D. The propensity to trust in (automated) technology mediates the links between technology self-efficacy and fear and acceptance of artificial intelligence. Comput Hum Behav Rep. 2023;11:100315. 10.1016/j.chbr.2023.100315. [Google Scholar]
  • 67.Mun YY, Jackson JD, Park JS, Probst JC. Understanding information technology acceptance by individual professionals: toward an integrative view. Information & Management. 2006;43(3):350–63. 10.1016/j.im.2005.08.006.
  • 68.Nemt-Allah M, Khalifa W, Badawy M, Elbably Y, Ibrahim A. Validating the ChatGPT usage scale: psychometric properties and factor structures among postgraduate students. BMC Psychol. 2024;12(1):497. 10.1186/s40359-024-01983-4.
  • 69.Ng DTK, Leung JKL, Chu SKW, Qiao MS. Conceptualizing AI literacy: an exploratory review. Computers and Education: Artificial Intelligence. 2021;2:100041. 10.1016/j.caeai.2021.100041.
  • 70.Niu SJ, Luo J, Niemi H, Li X, Lu Y. Teachers’ and students’ views of using an AI-aided educational platform for supporting teaching and learning at Chinese schools. Educ Sci. 2022;12(12):858. 10.3390/educsci12120858.
  • 71.Olhede SC, Wolfe PJ. The growing ubiquity of algorithms in society: implications, impacts and innovations. Philos Trans R Soc Lond A Math Phys Eng Sci. 2018;376(2128):20170364. 10.1098/rsta.2017.0364.
  • 72.Oluwadiya KS, Adeoti AO, Agodirin SO, Nottidge TE, Usman MI, Gali MB, et al. Exploring artificial intelligence in the Nigerian medical educational space: an online cross-sectional study of perceptions, risks, and benefits among students and lecturers from ten universities. Niger Postgrad Med J. 2023;30(4):285–92. 10.4103/npmj.npmj_186_23.
  • 73.Oprea SV, Nica I, Bâra A, Georgescu IA. Are skepticism and moderation dominating attitudes toward AI-based technologies? Am J Econ Sociol. 2024;83(3):567–607. 10.1111/ajes.12565.
  • 74.Oran BB. Correlation between artificial intelligence in education and teacher self-efficacy beliefs: a review. RumeliDE Dil ve Edebiyat Araştırmaları Dergisi. 2023;(34):1354–65. 10.29000/rumelide.1316378.
  • 75.Otermans PC, Roberts C, Baines S. Unveiling AI perceptions: how student attitudes towards AI shape AI awareness, usage, and conceptions. Int J Technol Educ. 2025. 10.46328/ijte.995.
  • 76.Owoc ML, Sawicka A, Weichbroth P. Artificial intelligence technologies in education: benefits, challenges and strategies of implementation. IFIP Advances in Information and Communication Technology. 2021;599:37–58. 10.1007/978-3-030-85001-2_4.
  • 77.Paek S, Kim N. Analysis of worldwide research trends on the impact of artificial intelligence in education. Sustainability. 2021;13(14):7941. 10.3390/su13147941.
  • 78.Park J, Woo SE. Who likes artificial intelligence? Personality predictors of attitudes toward artificial intelligence. J Psychol. 2022;156(1):68–94. 10.1080/00223980.2021.2012109.
  • 79.Popenici SA, Kerr S. Exploring the impact of artificial intelligence on teaching and learning in higher education. Res Pract Technol Enhanc Learn. 2017;12(1):22. 10.1186/s41039-017-0062-8.
  • 80.Potter K, Abill R, Louis F. The impact of artificial intelligence on students’ learning experience. SSRN Scholarly Paper No. 4716747. 2024. 10.2139/ssrn.4716747.
  • 81.Preacher KJ, Hayes AF. Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behav Res Methods. 2008;40(3):879–91. 10.3758/BRM.40.3.879.
  • 82.Punzi C, Pellungrini R, Setzu M, Giannotti F, Pedreschi D. AI, meet human: learning paradigms for hybrid decision making systems. arXiv preprint. http://arxiv.org/abs/2402.06287. 10.48550/arXiv.2402.06287.
  • 83.Rafique H, Almagrabi AO, Shamim A, Anwar F, Bashir AK. Investigating the acceptance of mobile library applications with an extended technology acceptance model (TAM). Comput Educ. 2020;145:103732. 10.1016/j.compedu.2019.103732.
  • 84.Rahiman HU, Kodikal R. Revolutionizing education: artificial intelligence empowered learning in higher education. Cogent Education. 2024;11(1):2293431. 10.1080/2331186X.2023.2293431.
  • 85.Rane N, Choudhary SP, Rane J. Acceptance of artificial intelligence: key factors, challenges, and implementation strategies. J Appl Artif Intell. 2024;5(2):50–70. 10.48185/jaai.v5i2.1017.
  • 86.Roll I, Wylie R. Evolution and revolution in artificial intelligence in education. Int J Artif Intell Educ. 2016;26:582–99. 10.1007/s40593-016-0110-3.
  • 87.Schepman A, Rodway P. The General Attitudes towards Artificial Intelligence Scale (GAAIS): confirmatory validation and associations with personality, corporate distrust, and general trust. Int J Human-Computer Interaction. 2023;39(13):2724–41. 10.1080/10447318.2022.2085400.
  • 88.Schiavo G, Businaro S, Zancanaro M. Comprehension, apprehension, and acceptance: understanding the influence of literacy and anxiety on acceptance of artificial intelligence. Technol Soc. 2024;77:102537. 10.1016/j.techsoc.2024.102537.
  • 89.Stahl BC, Wright D. Ethics and privacy in AI and big data: implementing responsible research and innovation. IEEE Secur Priv. 2018;16(3):26–33. 10.1109/MSP.2018.2701164.
  • 90.Strzelecki A. To use or not to use ChatGPT in higher education? A study of students’ acceptance and use of technology. Interact Learn Environ. 2024;32(9):5142–55. 10.1080/10494820.2023.2209881.
  • 91.Tabachnick BG, Fidell LS. Using multivariate statistics. 5th ed. Boston: Pearson Education; 2007.
  • 92.Tapalova O, Zhiyenbayeva N. Artificial intelligence in education: AIEd for personalised learning pathways. Electron J E-Learn. 2022;20(5):639–53.
  • 93.Terzi R. An adaptation of artificial intelligence anxiety scale into Turkish: reliability and validity study. Int Online J Educ Teaching. 2020;7(4):1501–15.
  • 94.Thomson SR, Pickard-Jones BA, Baines S, Otermans PC. The impact of AI on education and careers: what do students think? Front Artif Intell. 2024;7:1457299. 10.3389/frai.2024.1457299.
  • 95.Tsai SC, Chen CH, Shiao YT, Ciou JS, Wu TN. Precision education with statistical learning and deep learning: a case study in Taiwan. Int J Educ Technol High Educ. 2020;17(1):1–13. 10.1186/s41239-020-00186-2.
  • 96.Uçar M, Çapuk H, Yiğit MF. The relationship between artificial intelligence anxiety and unemployment anxiety among university students. Work. 2024. 10.1177/10519815241290648.
  • 97.Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Quarterly. 2003;27(3):425–78.
  • 98.Vesnic-Alujevic L, Nascimento S, Polvora A. Societal and ethical impacts of artificial intelligence: critical notes on European policy frameworks. Telecommun Policy. 2020;44(6):101961. 10.1016/j.telpol.2020.101961.
  • 99.Wang B, Rau P-LP, Yuan T. Measuring user competence in using artificial intelligence: validity and reliability of artificial intelligence literacy scale. Behav Inf Technol. 2022;42(9):1324–37. 10.1080/0144929X.2022.2072768.
  • 100.Wang C, Xiao A. Anxiety induced by artificial intelligence (AI) painting: an investigation based on the fear acquisition theory. Psychol Trauma Theory Res Pract Policy. 2025. 10.1037/tra0001862.
  • 101.Wang C, Wang H, Li Y, Dai J, Gu X, Yu T. Factors influencing university students’ behavioral intention to use generative artificial intelligence: integrating the theory of planned behavior and AI literacy. Int J Human-Computer Interaction. 2025;41(11):6649–71. 10.1080/10447318.2024.2383033.
  • 102.Wang YM, Wei CL, Lin HH, Wang SC, Wang YS. What drives students’ AI learning behavior: a perspective of AI anxiety. Interact Learn Environ. 2024;32(6):2584–600. 10.1080/10494820.2022.2153147.
  • 103.Wang YY, Chuang YW. Artificial intelligence self-efficacy: scale development and validation. Educ Inf Technol. 2024;29(4):4785–808. 10.1007/s10639-023-12015-w.
  • 104.Wang YY, Wang YS. Development and validation of an artificial intelligence anxiety scale: an initial application in predicting motivated learning behavior. Interact Learn Environ. 2019. 10.1080/10494820.2019.1674887.
  • 105.Wang Y, Liu C, Tu YF. Factors affecting the adoption of AI-based applications in higher education. Educ Technol Soc. 2021;24(3):116–29.
  • 106.Washington J. The impact of generative artificial intelligence on writer’s self-efficacy: a critical literature review. 2023. Available at SSRN 4538043. 10.2139/ssrn.4538043.
  • 107.Winick E. Every study we could find on what automation will do to jobs, in one chart. Technology Review. 2018. https://www.technologyreview.com/2018/01/25/146020/every-study-we-could-find-on-what-automation-will-do-to-jobs-in-one-chart/.
  • 108.Wu K, Zhao Y, Zhu Q, Tan X, Zheng H. A meta-analysis of the impact of trust on technology acceptance model: investigation of moderating influence of subject and context type. Int J Inf Manage. 2011;31(6):572–81. 10.1016/j.ijinfomgt.2011.03.004.
  • 109.Xu J, Li J, Yang J. Self-regulated learning strategies, self-efficacy, and learning engagement of EFL students in smart classrooms: a structural equation modeling analysis. System. 2024;125:103451. 10.1016/j.system.2024.103451.
  • 110.Xu N, Wang KJ. Adopting robot lawyer? The extending artificial intelligence robot lawyer technology acceptance model for legal industry by an exploratory study. J Manage Organ. 2021;27(5):867–85. 10.1017/jmo.2018.81.
  • 111.Yilmaz FGK, Yilmaz R, Ceylan M. Generative artificial intelligence acceptance scale: a validity and reliability study. Int J Human-Computer Interaction. 2023. 10.1080/10447318.2023.2288730.
  • 112.Yousif JH, Saini DK, Uraibi HS. Artificial intelligence in e-learning: pedagogical and cognitive aspects. In: Proceedings of the World Congress on Engineering. 2011;2:6–8.
  • 113.Zawacki-Richter O, Marín VI, Bond M, Gouverneur F. Systematic review of research on artificial intelligence applications in higher education: where are the educators? Int J Educ Technol High Educ. 2019;16(1):1–27. 10.1186/s41239-019-0171-0.
  • 114.Zhang S, Shan C, Lee JSY, Che S, Kim JH. Effect of chatbot-assisted language learning: a meta-analysis. Educ Inf Technol. 2023;16(11):1–21. 10.1007/s10639-023-11805-6.
  • 115.Zhang S, Zhao X, Zhou T, Kim JH. Do you have AI dependency? The roles of academic self-efficacy, academic stress, and performance expectations on problematic AI usage behavior. Int J Educ Technol High Educ. 2024;21(1):34. 10.1186/s41239-024-00467-0.

Associated Data
Data Availability Statement

The data supporting this study's findings are available from the corresponding author upon reasonable request. The data were anonymized to prevent any breach of privacy. The data will be shared in accordance with ethical protocols and data protection regulations, will be accessible only for academic purposes, and any use of the data must acknowledge the original study and maintain the confidentiality of the participants.

