Abstract
Recognising a need to investigate the concerns and barriers to the acceptance of artificial intelligence (AI) in education, this study explores the acceptability of different AI applications in education from a multi-stakeholder perspective, including students, teachers, and parents. Acknowledging the transformative potential of AI, it addresses concerns related to data privacy, AI agency, transparency, explainability, and ethical deployment of AI. Using a vignette methodology, participants were presented with four scenarios in which AI agency, transparency, explainability, and privacy were manipulated. After each scenario, participants completed a survey that captured their perceptions of AI’s global utility, individual usefulness, justice, confidence, risk, and intention to use each scenario’s AI if it were available. The data collection, comprising a final sample of 1198 participants, focused on individual responses to four AI use cases. A mediation analysis of the data indicated that acceptance and trust in AI vary significantly across stakeholder groups and AI applications.
Subject terms: Education, Human behaviour
Introduction
Education stands as a cornerstone of society, nurturing the minds that will ultimately shape our future1. As we advance into the twenty-first century, exponentially developing technologies and the convergence of knowledge across disciplines are set to have a significant influence on various aspects of life2, with education a crucial element that is both disrupted by and critical to progress3. The rise of artificial intelligence (AI), notably generative AI and generative pre-trained transformers4 such as ChatGPT, with its new capabilities to generalise, summarise, and provide human-like dialogue across almost every discipline, is set to disrupt the education sector from K-12 through to lifelong learning by challenging traditional systems and pedagogical approaches5,6.
Artificial intelligence can be defined as the simulation of human intelligence and its processes by machines, especially computer systems, which encompasses learning (the acquisition of information and rules for using information), reasoning (using rules to reach approximate or definite conclusions), and flexible adaptation7–9. In education, AI, or AIED, aims to “make computationally precise and explicit forms of educational, psychological and social knowledge which are often left implicit”10. Therefore, the promise of AI to revolutionise education is predicated on its ability to provide adaptive and personalised learning experiences, thereby recognising and nurturing the unique cognitive capabilities of each student11. Furthermore, integrating AI into pedagogical approaches and practice presents unparalleled opportunities for efficiency, global reach, and the potential for the democratisation of education unattainable by traditional approaches.
AIED encompasses a broad spectrum of applications, from adaptive learning platforms that curate customised content to fit individual learning styles and paces12 to AI-driven analytics tools that forecast student performance and provide educators with actionable insights13. Moreover, recent developments in AIED have expanded the educational toolkit to include chatbots for student support, natural language processing for language learning, and machine learning for automating administrative tasks, allowing educators to focus more or exclusively on teaching and mentoring14. These tools have recently converged into multipurpose, generative pre-trained transformers (GPTs). These GPTs are large language models (LLMs) utilizing transformers to combine large language data sets and immense computing power to create an intelligent model that, after training, can generate complex, advanced, human-level output15 in the form of text, images, voice, and video. These models are capable of multi-round human-computer dialogues, continuously responding with novel output each time users input a new prompt due to having been trained with data from the available corpus of human knowledge, ranging from the physical and natural sciences through medicine to psychiatry.
This convergence highlights that a step change has occurred in the capabilities of AI to act not only as a facilitator of educational content but also as a dynamic tool with agentic properties capable of interacting with stakeholders at all levels of the educational ecosystem, enhancing and potentially disrupting the traditional pedagogical process. Recently, the majority of the conversation within the literature concerning AIED has focused on cheating and plagiarism16–18, with some calls to examine the ethics of AI19. This focus falls short of addressing the multidimensional, multi-stakeholder nature of AI-related issues in education. It fails to consider that AI is already here, accessible, and proliferating. It is this accessibility and proliferation that motivates the research presented in this manuscript. The release of generative AI globally and its application within education raise significant ethical concerns regarding data privacy, AI agency, transparency, explainability, and additional psychosocial factors, such as confidence and trust, as well as the acceptance and equitable deployment of the technology in the classroom20.
As education touches upon all members and aspects of society, we therefore seek to investigate and understand the level of acceptability of AI within education for all stakeholders: students, teachers, parents, school staff, and principals. Using factors derived from the explainable AI literature21 and the UNESCO framework for AI in education22, we present research that investigates the role of agency, privacy, explainability, and transparency in shaping the perceptions of global utility (GU), individual usefulness (IU), confidence, justice, and risk toward AI and the eventual acceptance of AI and intention to use (ITU) in the classroom. These factors were chosen as the focus for this study based on feedback from focus groups that identified our four independent variables as the most prominent factors influencing AI acceptability, aligning with prior IS studies21 that have demonstrated their central role in AI adoption decisions. Additionally, these four variables directly influence other AI-related variables, such as fairness (conceptualised in our study as confidence), suggesting a mediating role in shaping intentions to use AI.
In an educational setting, the deployment of AI has the potential to redistribute agency over decision-making between human actors (teachers and students) and algorithmic systems or autonomous agents. As AI systems come to assume roles traditionally reserved for educators, the negotiation of autonomy between educator, student, and this new third party becomes a complex balancing act in many situations, such as personalising learning pathways, curating content, and even evaluating student performance23,24.
Educational professionals face a paradigm shift where the agency afforded to AI systems must be weighed against preserving the educators’ pedagogical authority and expertise25. However, this is predicated on human educators providing additional needs such as guidance, motivation, facilitation, and emotional investment, which may not hold as AI technology develops26. That is not to say that AI will supplant the educator in the short term; rather, it highlights the need to calibrate AI’s role within the pedagogical process carefully.
Student agency, defined as the individual’s ability to act independently and make free choices27, can be compromised or enhanced by AI. While AI can personalise learning experiences, adaptively responding to student needs, thus promoting agency28, it can conversely reduce student agency through over-reliance, whereby AI-generated information may diminish students’ critical thinking and undermine the motivation toward self-regulated learning, leading to a dependency29.
Moreover, in educational settings, the degree of agency afforded to AI systems, i.e., its autonomy and decision-making capability, raises significant ethical considerations at all stakeholder levels. A high degree of AI agency risks producing “automation complacency”30, where stakeholders within the education ecosystem, from parents to teachers, uncritically accept AI guidance due to overestimating its capabilities. In contrast, a low degree of agency essentially hamstrings the capabilities of AI and undermines the reason for its application in education. Therefore, ensuring that AI systems are designed and implemented to support and enhance human agency through human-centred alignment and design rather than replacing it requires thorough attention to the design and deployment of these technologies31.
In conclusion, educational institutions must navigate the complex dynamics of assigned agency when integrating AI into pedagogical frameworks. This will require careful consideration of the balance between AI autonomy and human control to prevent the erosion of stakeholders’ agency at all levels of the education ecosystem and, thus, increase confidence and trust in AI as a tool for education.
Establishing confidence in AI systems is multifaceted, encompassing the ethical aspects of the system, the reliability of AI performance, the validity of its assessments, and the robustness of data-driven decision-making processes32,33. Thus, confidence in AI systems within educational contexts centres on their capacity to operate reliably and contribute meaningfully to educational outcomes.
Building confidence in AI systems is directly linked to the consistency of their performance across diverse pedagogical scenarios34. Consistency and reliability are judged by the AI system’s ability to function without frequent errors and sustain its performance over time35. Thus, inconsistencies in AI performance, such as system downtime or erratic behaviour, may alter perceptions of utility and significantly decrease user confidence.
AI systems are increasingly employed to grade assignments and provide feedback, which are activities historically under the supervision of educators. Confidence in these systems hinges on their ability to deliver feedback that is precise, accurate, and contextually appropriate36. The danger of misjudgment by AI, particularly in subjective assessment areas, can compromise its credibility37, increasing risk perceptions for stakeholders such as parents and teachers and directly affecting learners’ perceptions of how fair and just AI systems are.
AI systems and the foundation models they are built upon are trained over immense curated datasets to drive their capabilities38. The provenance of these data, the views of those who curate the subsequent training data, and how that data is then used within the model (that creates the AI) is of critical importance to ensure bias does not emerge when the model is applied19,39. To build trust in AI, stakeholders at all levels must have confidence in the integrity of the data used to create an AI, the correctness of analyses performed, and any decisions proposed or taken40. Moreover, the confidence-trust relationship in AI-driven decisions requires transparency about data sources, collection methods, and explainable analytical algorithms41.
Therefore, to increase and maintain stakeholder confidence and build trust in AIED, these systems must exhibit reliability, assessment accuracy, and transparent and explainable decision-making. Ensuring these attributes requires robust design, testing, and ongoing monitoring of AI systems, the models they are built upon, and the data used to train them.
Trust in AI is essential to its acceptance and utilisation at all stakeholder levels within education. Confidence and trust are inextricably linked42, representing a feedback loop wherein confidence builds towards trust and trust instils confidence, and the reverse holds that a lack of confidence fails to build trust. Thus, a loss of trust decreases confidence. Trust in AI is engendered by many factors, including but not limited to the transparency of AI processes, the alignment of AI functions with educational ethics, including risk and justice, the explainability of AI decision-making, privacy and the protection of student data, and evidence of AI’s effectiveness in improving learning outcomes33,43,44.
Standing as a proxy for AI, studies of trust toward automation45,46 have identified three main factors that influence trust: performance (how automation performs), process (how it accomplishes its objective), and purpose (why the automation was built originally). Accordingly, educators and students are more likely to trust AI if they can comprehend its decision-making processes and the rationale behind its recommendations or assessments47. Thus, if AI operates opaquely as a “black box”, it can be difficult to accept its recommendations, leading to concerns about its ethical alignment. Therefore, the dynamics of stakeholder trust in AI hinges on the assurance that the technology operates transparently and without bias, respects student diversity, and functions fairly and justly48.
Furthermore, privacy and security feed directly into the trust dynamic in that educational establishments are responsible for the data that AI stores and utilises to form its judgments. Tools for AIED are designed, in large part, to operate at scale, and a key component of scale is cloud computing, which involves sharing both the technology and the data stored on it49. This resource sharing makes the boundary between personal and common data porous; data come to be viewed as a resource that technology companies can use to train new AI models, or as a product50. Thus, while data breaches may erode trust in AIED in an immediate sense, far worse is the hidden assumption that all data is common. However, this issue can be addressed by stakeholders at various levels through ethical alignment negotiations, robust data privacy measures, security protocols, and policy support to enforce them22,51.
Accountability is another important element of the AI trust dynamic, and one inextricably linked to agency and the problem of control. It refers to the mechanisms in place to hold system developers, the institutions that deploy AI, and those that use AI responsible for the functioning and outcomes of AI systems33. The issue of who is responsible for AI’s decisions or mistakes is an open question heavily dependent on deep ethical analysis. However, it is of critical and immediate importance, particularly in education, where the stakes include the quality of teaching and learning, the fairness of assessments, and the well-being of students.
In conclusion, trust in AI is an umbrella construct that relies on many factors interwoven with ethical concerns. The interdependent relationship between confidence and trust suggests that the growth of one promotes the enhancement of the other. At the same time, their decline, through errors in performance, process, or purpose, leads to mutual erosion. The interplay between confidence and trust points towards explainability and transparency as potential moderating factors in the trust equation.
The contribution of explainability and transparency towards trust in AI systems is significant, particularly within the education sector; they enable stakeholders to understand and rationalise the mechanisms that drive AI decisions52. Comprehensibility is essential for educators and students not only to follow but also to assess and accept the judgments made by AI systems critically53,54. Transparency gives users visibility of AI processes, which opens AI actions to scrutiny and validation55.
Calibrating the right balance between explainability and transparency in AI systems is crucial in education, where the rationale behind decisions, such as student assessments and learning path recommendations, must be clear to ensure fairness and accountability32,56. The technology is perceived to be more trustworthy when AI systems articulate, in an accessible manner, their reasoning for decisions and the underlying data from which they are made57. Furthermore, transparency allows educators to align AI-driven interventions with pedagogical objectives, fostering an environment where AI acts as a supportive tool rather than an inscrutable authority58–60.
Moreover, the explainability and transparency of AI algorithms are not simply a technical requirement but also a legal and ethical one, depending on interpretation, particularly in light of regulations such as the General Data Protection Regulation (GDPR), which posits a “right to explanation” for decisions made by automated systems61–63. Thus, educational institutions are obligated to deploy AI systems that perform tasks effectively and provide transparent insights into their decision-making processes64,65.
In sum, explainability and transparency are critical co-factors in the trust dynamic, where trust appears to be the most significant factor toward the acceptance and effective use of AI in education. Systems that employ these methods enable stakeholders to understand, interrogate, and trust AI technologies, ensuring their responsible and ethical use in educational contexts.
When taken together, this discussion points to the acceptance of AI in education as a multifaceted construct, hinging on a harmonious yet precarious balance of agency, confidence, and trust underpinned by the twin pillars of explainability and transparency. Agency requires careful calibration of autonomy between AI, educators, and students to preserve pedagogical integrity and student agency, which is vital for independent decision-making and critical thinking. Accountability, closely tied to agency, strengthens trust by ensuring that AI systems are answerable for their decisions and outcomes, reducing risk perceptions. Trust in AI and its co-factor confidence are fundamental prerequisites for AI acceptance in educational environments. The foundation of this trust is built upon factors such as AI’s performance, the clarity of its processes, its alignment with educational ethics, and the security and privacy of data. Explainability and transparency are critical in strengthening the trust dynamic. They provide stakeholders with insights into AI decision-making processes, enabling understanding and critical assessment of AI-generated outcomes and helping to improve perceptions of how just and fair these systems are.
However, is trust a one-size-fits-all solution to the acceptance of AI within education, or is it more nuanced, where different AI applications require different levels of each factor on a case-by-case basis and for different stakeholders? This research seeks to determine to what extent each factor contributes to the acceptance and intention to use AI in education across four use cases from a multi-stakeholder perspective.
Drawing from this broad interdisciplinary foundation that integrates educational theory, ethics, and human-computer interaction, this study investigates the acceptability of artificial intelligence in education through a multi-stakeholder lens, including students, teachers, and parents. This study employs an experimental vignette approach, incorporating insights from focus groups, expert opinion and literature review to develop four ecologically valid scenarios of AI use in education. Each scenario manipulates four independent variables—agency, transparency, explainability, and privacy—to assess their effects on perceived global utility, individual usefulness, justice, confidence, risk, and intention to use. The vignettes were verified through multiple manipulation checks, and the effects of independent variables were assessed using previously validated psychometric instruments administered via an online survey. Data were analysed using a simple mediation model to determine the direct and indirect effects between the variables under consideration and stakeholder intention to use AI.
Results
A multiple regression analysis was performed as a first step to examine the direct effects of the independent variables (explainability, agency, privacy, and transparency) on the dependent variable intention to use (ITU). The model significantly predicted intention to use, F(4, 1207) = 4.683, p < 0.001, and accounted for ~1.5% of the variance in intention to use (R² = 0.015, Adjusted R² = 0.012). Agency was shown to be a significant predictor, with a higher level of Agency associated with a lower intention to use (β = -0.2519, p < 0.001). However, Transparency (β = 0.0168, p = 0.799), Explainability (β = 0.1189, p = 0.0071), and Privacy (β = 0.0718, p = 0.275) were not significant predictors of intention to use, indicating that other factors may mediate the intention to use AI in the classroom.
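For readers who wish to see the form of this first step, a minimal sketch of the regression is given below in Python with statsmodels; the data frame, file name, and column codings (0 = low, 1 = high for each manipulated factor) are illustrative assumptions, not the authors’ analysis code.

```python
# Illustrative sketch only: OLS regression of intention to use (ITU) on the four
# manipulated factors. Column names and the file "survey_responses.csv" are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical data export, one row per participant

# Factors coded 0 = low, 1 = high; itu is the mean of the intention-to-use items.
model = smf.ols("itu ~ agency + transparency + explainability + privacy", data=df).fit()
print(model.summary())  # reports the F-statistic, R-squared, and per-predictor coefficients and p-values
```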
To determine the next steps in the data analysis, we performed a correlation analysis of all variables (see Fig. 1), which reported moderate correlations between dependent variables. For instance, ‘perceived utility’ (PU) and ‘global utility’ (GU) show a positive correlation, suggesting that as perceptions of (individual) utility increase, perceptions of overall utility also tend to be higher. However, no strong correlations were reported between the independent variables (agency, privacy, explainability, transparency) and the dependent variables, indicating that these factors might indirectly influence the dependent variables.
Fig. 1. Correlation matrix of all variables under investigation.
A correlation analysis reporting moderate correlations between dependent variables. For instance, ‘perceived utility’ (PU) and ‘global utility’ (GU) show a positive correlation, indicating that as perceptions of (individual) utility increase, perceptions of overall utility also tend to be higher. No strong correlations were reported between the independent variables (agency, privacy, explainability, transparency) and the dependent variables, indicating that these factors might indirectly influence the dependent variables.
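A corresponding correlation matrix can be produced in the same environment, as sketched below; the column names are again illustrative assumptions rather than the study’s actual variable labels.

```python
# Illustrative sketch: Pearson correlations among the manipulated factors and the
# perception measures, mirroring the structure of Fig. 1. Column names are assumed.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical data export
cols = ["agency", "privacy", "explainability", "transparency",
        "gu", "iu", "justice", "confidence", "risk", "itu"]
print(df[cols].corr(method="pearson").round(2))
```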
We conducted a mediation analysis (see Fig. 2) to determine these influences through an examination of the direct and indirect effects of the independent variables (explainability, agency, privacy, and transparency) on the dependent variable intention to use through global utility, individual utility, justice, confidence, and risk from a multi-stakeholder perspective, using Hayes66 PROCESS macro (Model 4) in SPSS. For all analyses performed, the confidence level for all confidence intervals in the output is 95.0%, and the number of bootstrap samples for percentile bootstrap confidence intervals was set to 5000. This analysis models the direct effect C’ of each IV X on DV Y and the indirect effect of Xi on Y through mediator(s) Mi (ai*bi); the PROCESS macro allows up to ten mediators to operate in parallel.
Fig. 2. Simple mediation analysis.
The figure shows the mediation path: the direct effect C’ of Xi on Y and the indirect effects of Xi on Y mediated by Mi.
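To make the estimated quantities concrete, the sketch below shows how a single indirect effect (a × b) and its percentile bootstrap confidence interval could be computed for one mediator; it mirrors the logic of PROCESS Model 4 but is an illustrative assumption written in Python, not the SPSS macro itself.

```python
# Illustrative sketch of a simple mediation estimate (one mediator), following the
# logic of PROCESS Model 4: a = effect of X on M, b = effect of M on Y controlling
# for X, indirect effect = a * b, with a 5000-sample percentile bootstrap 95% CI.
import numpy as np
import statsmodels.api as sm

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # X -> M path
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # M -> Y path, controlling for X
    return a * b

def bootstrap_indirect(x, m, y, n_boot=5000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                                          # resample cases with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return indirect_effect(x, m, y), lo, hi                                  # point estimate, BootLLCI, BootULCI
```

As in the results reported below, an indirect effect is treated as statistically significant when its bootstrap confidence interval excludes zero.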
The following sections provide a detailed report of the results for the mediators of intention to use as reported by the simple mediation model analysis by stakeholder group and scenario, starting with student, then parent and finishing with teacher stakeholder groups.
The analysis performed on student data exposed to scenario one, “Correct-AI”, reported no direct effect of any independent variable upon the dependent variable intention to use. However, some mediation effects were reported when considering the effect of agency through perceptions of GU and justice upon ITU.
The analysis revealed that the level of agency significantly negatively impacted GU (β = -0.4261, p = 0.0011), indicating that higher levels of agency were associated with lower levels of perceived GU. Furthermore, GU was found to have a significant positive impact on ITU (β = 0.3402, p = 0.0001), suggesting that higher levels of both personal and overall benefit may lead to greater ITU.
Similarly, agency significantly negatively impacted justice (β = -0.3768, p = 0.0261), indicating that higher levels of agency were associated with a lower perception of how “just” the system would be. Justice was found to significantly impact ITU (β = 0.1760, p = 0.0041), suggesting that students were concerned that AI with high agency would be less just. However, the direct effect of agency on ITU was not statistically significant (β = -0.1056, p = 0.2982), indicating that the level of agency alone did not significantly predict intention to use.
An examination of the indirect effects of agency on ITU through GU and justice showed that the indirect effect through GU was statistically significant (Effect = -0.1449, BootSE = 0.0606, BootLLCI = -0.2805, BootULCI = -0.0441), as was the indirect effect through justice (Effect = -0.0576, BootSE = 0.0343, BootLLCI = -0.1382, BootULCI = -0.0047).
The analysis performed on the data for students exposed to scenario two, “Answer-AI”, reported no direct effect of any independent variable upon the dependent variable intention to use. However, a mediation effect was reported when considering the effect of privacy through the perception of GU upon the ITU.
The analysis reported that privacy had a significant positive impact on GU (β = 0.2987, p = 0.0439), indicating that higher levels of privacy were associated with higher levels of both personal and overall benefit. GU was found to have a significant positive impact on ITU (β = 0.3159, p = 0.0007), indicating that higher levels of both personal and overall benefit may lead to greater ITU.
However, the direct effect of privacy on intention to use was not statistically significant (β = 0.0369, p = 0.7346), indicating that privacy alone did not significantly predict ITU. In contrast, the indirect effect of privacy on intention through GU was found to be statistically significant (Effect = 0.0944, BootSE = 0.0586, BootLLCI = 0.0022, BootULCI = 0.2283).
The mediation analysis reported no direct or indirect effects for the third scenario, “Tutor-AI”.
The analysis performed on the data for students exposed to scenario four, “Emotion-AI”, reported no direct effect of any independent variable upon the dependent variable ITU. However, a mediation effect was reported when considering the effect of explainability through the perception of GU upon ITU: explainability had a significant positive impact on GU (β = 0.2787, p = 0.0259), indicating that higher levels of explainability are associated with higher levels of personal and overall benefit. Moreover, GU was found to have a significant positive impact on ITU (β = 0.3089, p = 0.0039).
In this case, the direct effect of explainability on ITU was not statistically significant (β = -0.0429, p = 0.7187), highlighting that explainability alone did not significantly predict ITU. However, the indirect effect was found to be statistically significant (Effect = 0.0861, BootSE = 0.0513, BootLLCI = 0.0039, BootULCI = 0.2018).
The mediation analysis performed for parents exposed to all four scenarios reported significant results only for scenario one (Correct-AI; see Table 1), concerning the relationship between agency and ITU as mediated by confidence.
Table 1.
Two vignette examples: Correct-AI and Answer-AI scenarios presented to participants in this study
| Scenario—Context 1—Correct-AI | Scenario—Context 2—Answer-AI |
|---|---|
| Imagine a high school with Correct-AI, an artificial intelligence (AI) technology used to mark students’ exams. Correct-AI mobilises artificial intelligence algorithms capable of analysing complex written answers, understanding graphs and equations, and evaluating problem-solving strategies. Correct-AI is then able to automatically assign grades and provide feedback to students. Here is more relevant information about Correct-AI: Grades provided by Correct-AI [Agency-High: are not validated; Agency-Low: are validated] by a teacher. Correct-AI [Privacy-Low: uses learner personal data beyond academic performance (e.g., gender, native language, diagnosis of learning difficulties, etc.); Privacy-High: does not use learner personal data beyond academic performance] in the correction algorithm. Correct-AI provides the [Teacher, Student, Parent] with [Explainability-Low: only the grade at the end of the correction without further explanation; Explainability-High: the grade and a full explanation of the correction process]. It [Transparency-Low: is not communicated; Transparency-High: is clearly communicated] [to the Teacher, Student, Parent] what data is collected and how it is used by Correct-AI. | Imagine a high school using Answer-AI, a chatbot (i.e., a conversational agent based on artificial intelligence) to answer students’ questions in various courses outside class hours. Answer-AI is based on state-of-the-art artificial intelligence algorithms that can automatically maintain a text conversation with students without the help of a human. This chatbot is able to understand the meaning and context of the conversation and is therefore able to provide students with sensible, course-appropriate answers. Here is more relevant information about Answer-AI: Answer-AI formulates its answers to students’ questions [Agency-High: without being supervised; Agency-Low: while being supervised] by the teacher. Answer-AI [Privacy-Low: shares the content of the conversation with the teacher and principals; Privacy-High: does not share the content of the conversation with anyone else]. Answer-AI provides the student with [Explainability-Low: only the answer to his/her question, without further explanation; Explainability-High: the answer to his/her question and the explanation for arriving at this answer]. The student [Transparency-Low: is not informed; Transparency-High: is informed] that he or she is communicating with a chatbot and not a human. |
The results revealed that agency negatively impacted confidence (β = -0.4668, p = 0.0236), indicating that higher levels of agency were associated with lower confidence levels in the context of an AI utilised to correct student work. Furthermore, confidence was found to have a significant positive impact on ITU (β = 0.5200, p < 0.001), indicating that greater confidence was associated with higher ITU. However, the direct effect of agency on ITU was not statistically significant (β = 0.0708, p = 0.5968), suggesting that agency alone did not significantly predict ITU.
Further analysis revealed an indirect effect of agency on ITU through confidence. Specifically, the indirect effect of agency on ITU through confidence was found to be statistically significant (Effect = -0.2428, BootSE = 0.1184, BootLLCI = -0.4956, BootULCI = -0.0347), suggesting that agency’s influence on ITU is partially mediated by confidence.
The analysis for the teacher stakeholder group reported several significant results across multiple scenarios.
For scenario one, the mediation analysis reported that explainability had a significant positive impact on global utility (β = 0.7161, p = 0.0057), indicating that higher levels of explainability were associated with higher levels of perceived personal and overall benefit. Additionally, GU was found to have a significant positive impact on ITU (β = 0.3639, p = 0.0286).
Additionally, explainability had a significant positive impact on perceived usefulness (β = 0.6634, p = 0.0059), indicating that greater levels of explainability were associated with higher perceptions of personal usefulness. Moreover, perceived usefulness significantly impacted ITU (β = 0.6205, p = 0.0087).
However, the direct effect of explainability on ITU was not statistically significant (β = -0.0215, p = 0.9366), indicating that explainability alone did not significantly predict ITU. In contrast, the mediation analysis reported that the indirect effects through GU (Effect = 0.2606, BootSE = 0.1717, BootLLCI = 0.0215, BootULCI = 0.6790) and through perceived usefulness (Effect = 0.4116, BootSE = 0.2456, BootLLCI = 0.0370, BootULCI = 0.9902) were both statistically significant. This result suggests that the influence of explainability on ITU is partially mediated by GU and perceived usefulness.
For scenario two, the mediation analysis reported that both agency and explainability had a significant impact on perceived GU: negative in the case of agency (β = -0.5969, p = 0.0271) and positive in the case of explainability (β = 0.8700, p = 0.0008).
Furthermore, in the case of agency, GU was found to have a significant positive impact on intention to use (β = 0.5590, p = 0.0040). However, the direct effect of agency was not statistically significant (β = -0.2283, p = 0.2847), indicating that agency alone did not significantly predict intention to use. In contrast, the indirect effect of agency on ITU through GU was statistically significant (Effect = -0.3337, BootSE = 0.2091, BootLLCI = -0.8033, BootULCI = -0.0050).
Additionally, in the case of explainability, GU was found to have a significant positive impact on intention to use (β = 0.5970, p = 0.0030). However, the direct effect of explainability on intention to use was not statistically significant (β = -0.1479, p = 0.5028), indicating that explainability alone did not significantly predict intention to use. In contrast, the indirect effect of explainability on intention to use through GU was found to be statistically significant (Effect = 0.5194, BootSE = 0.2455, BootLLCI = 0.1120, BootULCI = 1.0618).
The mediation analyses for this scenario indicate that GU plays a mediating role between agency and explainability and the intention to use AI in this context. While agency and explainability did not directly predict ITU, they influenced ITU indirectly through their effects on GU. These findings highlight the importance of utilitarian value judgements as a mediator in understanding the impact of agency and explainability on teachers’ intention to use AI in this context.
The mediation analysis reported no direct or indirect effects for the “Tutor-AI” scenario.
For scenario four, the mediation analysis reported that privacy had a significant negative impact on confidence (β = -0.6581, p = 0.0012), indicating that higher levels of privacy were associated with lower confidence levels in the context of an AI that supports emotional health. Additionally, confidence significantly impacted ITU (β = 0.4744, p = 0.0447). However, the direct effect of privacy on ITU was not statistically significant (β = 0.1118, p = 0.6144), indicating that privacy alone did not significantly predict ITU or acceptance of AI for this purpose. In contrast, the indirect effect of privacy on intention through confidence was statistically significant (Effect = -0.3122, BootSE = 0.1528, BootLLCI = -0.6559, BootULCI = -0.0511).
The results of this mediation analysis indicate that, for teachers, the level of privacy significantly influences confidence (with lower privacy associated with higher confidence), and confidence in turn has a significant positive impact on ITU. While the level of privacy did not directly predict ITU, it indirectly affects ITU, and acceptance of AI for this purpose, through its impact on confidence.
Discussion
When taken as a whole, the results indicate that different stakeholder groups have varying degrees of acceptance and trust across AI applications. The key mediators influencing the intention to use AI within education in the current study include global utility, justice, and confidence, which vary according to the level of AI agency, transparency, and explainability.
The results indicated that students’ intention to use AI in education was influenced by their perceptions of AI’s global utility and justice. For instance, in the Correct-AI scenario, where AI is employed to correct student work, a higher level of agency in AI was associated with lower perceptions of global utility and justice. This suggests that students were concerned about AI systems that were too autonomous, perceiving them as less just and beneficial. The results indicate that privacy was valued in the second scenario, Answer-AI, a conversational agent that answers students’ questions in various courses outside class hours. However, privacy did not directly influence student/learner intention to use but rather was positively associated with higher perceived global utility. This finding indicates that an AI that keeps student queries private was perceived to be beneficial. For the fourth scenario, Emotion-AI, a conversational agent that provides emotional support to students, explainability positively impacted global utility, indicating that students perceived more explainable AI to be more beneficial, especially in the context of emotional support. However, explainability did not directly predict the intention to use AI, highlighting the importance of perceived benefit.
Moving to the second stakeholder group, parents, the mediation analysis results showed a significant relationship between agency and confidence, specifically in the Correct-AI scenario. Higher levels of AI agency led to lower confidence, negatively impacting their intention to use or allow AI to correct student work. This finding indicates that parents might be more accepting of AI in education if they have greater confidence in the system’s ability to act responsibly and fairly.
Perceptions of the other three scenarios were neutral, showing no significant variance. This may be due to the nature of the AI in question, which works to the benefit of the student alone, whereas Correct-AI would have a direct impact on educational outcomes should autonomous AI judgements prove negative, which in turn would have far-reaching effects upon the future employment or academic prospects of the child in question.
For the third stakeholder group, teachers, the mediation analysis showed that the intention to use was influenced by their perceptions of global utility and perceived usefulness, particularly in the context of explainability. In scenario one, Correct-AI, higher explainability was associated with a greater perception of global utility and usefulness. For scenario two, Answer-AI, both agency and explainability influenced perceived global utility. While agency had a negative influence, explainability positively influenced this perception, indicating that teachers prefer AI systems that are less autonomous but more explainable and understandable in this instance. Moving to scenario four, Emotion-AI, higher privacy levels had a negative impact on confidence, indicating that teachers were concerned about AI systems with high privacy levels handling the emotional needs of students. Confidence, however, positively influenced their intention to use AI, suggesting the need for a balance between privacy and trustworthiness.
These results reflect the complex and varied perceptions of AI among different stakeholder groups in the educational ecosystem. The findings suggest that while there is a general trend toward valuing transparency, explainability, and privacy in AI systems, there are, however, specific concerns and preferences that differ significantly across scenarios and stakeholder groups.
Several key mediators of AI acceptance emerged from the current study: global utility, confidence, and justice. It appears that global utility was a significant factor influencing stakeholders’ intentions to use AI across multiple scenarios, indicating that perceptions of the overall benefit of AI, from a multi-tiered perspective, may have cohered into an aggregative consequentialist assessment, i.e., the net “total” of personal benefits or harms coupled more broadly with benefits or harms to the educational ecosystem and by consequence society67,68. Thus, based on the strength of this study’s results alone, a positive balance of this utilitarian assessment is crucial for the acceptance of AI in educational settings and, perhaps, the acceptance of AI more broadly. However, this generalisation is a matter of public discourse and lies far outside the scope of this article.
Confidence in AI systems also emerged as a significant mediator, especially for parents in the case of Correct-AI and teachers in the case of Emotion-AI. The level of confidence, influenced by factors such as level of agency and privacy, impacted their willingness to use or accept AI in educational contexts. This result underscores the importance of building reliable and trustworthy AI systems. In the case of confidence, the agency level assigned to an AI application is entwined with perceptions of how much data it will share externally and how that data will be used. From a teaching and administrative perspective, such data is valuable for predictive learning analytics to identify pedagogical issues and related needs, ostensibly so that learning content can be adapted within the boundaries of those needs69.
The results further highlighted that in some scenarios, the level of transparency and agency in AI systems was a significant concern. For instance, in the case of parents, low transparency was associated with higher perceived risk, and in the case of students, with perceived injustice. In comparison, AI with greater transparency reduced these perceptions. Additionally, the level of agency of AI systems affected stakeholders’ confidence and perceptions of justice, indicating a need for a balanced approach in AI design and deployment.
Seen through a lens consisting of AI transparency and agency mediated by confidence, justice, privacy, and risk, a question of bias concerning fairness emerges70. Conceivably, AI can serve the needs of students without resorting to sharing private information, reducing the need for classification and labelling and potentially improving the learning experience for the individual. However, education, like any industry, is driven by the economic forces installed at all levels of civil society71, and those “forces”, through the enforcement of informational framing72, dictate where resources are directed, how individuals are classified, and what current and future value they will provide to the state and beyond. This creates tension between the “egalitarian ideal” distribution of educational resources and those allocated to the for-profit organisations that are increasingly integral to the education system73,74. Thus, this tension potentially coalesces into bias; students who do not succeed are classified as bad assets by an AI that is forced to share this information within the education ecosystem in which it is embedded. Unless there are subsidised programmes to provide specialised help, they become marginalised while resources are allocated elsewhere simultaneously75. Within this value-driven system, AI has the potential to champion egalitarian values and provide non-biased education at a pace suitable for all students. In light of the results from the current study, several questions emerge: at what level are teachers placing utilitarian value? Where is the greatest good: with the student, with the school, or with state- and corporate-level stakeholders looking for a return on investment?
The findings of this study, particularly in scenarios involving Correct-AI and Emotion-AI, echo concerns related to the potential of AI to redistribute agency23. In both scenarios, higher levels of AI agency were associated with lower perceptions of justice and general utility among students and decreased confidence among parents and teachers, reflecting the need to calibrate AI’s role in education carefully31.
Within the educational ecosystem, confidence and trust in AI are central to its acceptance34; AI systems must demonstrate consistency, reliability, and accuracy35 while contributing meaningfully to educational outcomes37, transparently47, and without bias48. What emerged from the findings reflected this: confidence in AI, influenced by factors such as agency and privacy, played a significant role in stakeholders’ intention to use AI. Specifically, concerning parents, confidence was shaped by the level of AI agency in Correct-AI, aligning with this emphasis on the reliability and accuracy of AI systems. Furthermore, trust, influenced by factors like explainability and transparency, significantly affected stakeholders’ acceptance of AI. In the case of Emotion-AI, the level of transparency impacted student confidence, which in turn influenced the intention to use AI. This highlights the complex dynamic of confidence and trust in that they are interdependent constructs reliant on many factors and interwoven with ethical concerns. Thus, educational institutions need clear policies to deploy AI that address the problem of agency, build confidence and trust, and insist on explainable AI systems.
Transparency and explainability contribute significantly towards confidence and trust building in AI systems in education and beyond; they open up AI to scrutiny and validation55, allowing stakeholders at various levels of the educational ecosystem to understand and rationalise the decisions taken and judgements made by AI53,54. In line with these assertions, the results showed that explainability significantly influenced perceptions of global utility and intention to use AI among students and teachers. Consequently, educators must be trained not only in how to use AI tools but also in understanding their implications and, of equal importance, any resistance to using AI tools.
The findings from this study indicate a number of implications for the deployment and integration of artificial intelligence (AI) within educational settings; they emphasise a nuanced interplay between stakeholder perceptions and the characteristics of AI, such as agency, transparency, and explainability. Across all stakeholder groups (students, parents, and teachers), global utility emerged as the predominant key mediator, reinforcing the need to articulate clearly the demonstrable benefits of AI applications within the educational ecosystem. This apparently consequentialist perception of stakeholders toward AI applications suggests that framing AI utility explicitly in terms of aggregated individual and societal gains is necessary to gain greater acceptance.
The variance in response to AI agency is of note in that increased AI autonomy consistently correlates with diminished perceptions of justice, confidence, and utility among stakeholders. This indicates that the level of agency granted to AI applications must be carefully adjusted to counterbalance these negative perceptions. Thus, AI applications must have sufficient agency to augment educational endeavours without superseding critical human oversight and ethical accountability, specifically in the context of the autonomous evaluation and correction of student work where erroneous conclusions can have significant impacts and long-term consequences.
Contrary to findings reported in the XAI literature76, when analysed separately, transparency and explainability were not shown to be significant factors toward AI acceptance in the current study. Instead, they were shown to be core elements that facilitate building trust and confidence, leading to the eventual acceptance of AI among stakeholders. These findings highlight that context is key to the effectiveness of these factors regarding AI applications and that transparency and explainability mean different things to different stakeholders. In the current context, the findings further suggest that AI transparency significantly mitigates perceptions of injustice and risk, particularly for students and parents, whereas explainability enhances perceptions of utility among teachers. This would indicate that educational institutions should prioritise transparency in AI operational processes and proactively incorporate explainable AI methodologies to clarify decision-making criteria, thereby cultivating informed confidence and trust among stakeholders.
The findings related to privacy highlighted dynamic considerations: students viewed privacy as indirectly influential, positively correlating with global utility perceptions, whereas teachers expressed concerns when high privacy appeared to reduce transparency, notably in emotionally sensitive contexts. Taking a broad view, educational institutions must thus establish clear privacy protocols that balance confidentiality and transparency, ensuring that privacy does not undermine stakeholders’ confidence or diminish perceived utility.
Overall, the results of the current study demonstrate that stakeholder perceptions of AI technology are critical determinants of its acceptance and eventual use and that this acceptance and use is not uniform across AI applications. It thus becomes essential to understand how stakeholder perceptions can be positively influenced through best-practice AI design and development, information framing, and policy reinforcement. Furthermore, the objective functions of AI applications developed for educational needs must be clearly defined in terms of their utilitarian value and deployed responsibly. Artificial intelligence, as an advanced intelligent tool with ever-increasing agentic properties, has the potential to deliver on the liberal egalitarian ideal of education77, not only in Western culture but worldwide. However, to deliver this potential, we must promote a new cultural imaginary that is neither anti-science and technology nor based on fear, and that clearly demonstrates the benefits of AI within education at an individual, organisational, and global level.
We set out initially to capture as wide an array of perceptions toward multiple AI applications from a multi-stakeholder perspective as possible. While we succeeded in this goal, sample sizes could be increased for some specific stakeholder groups, such as teachers and school directors. Given the small sample size of the school director stakeholder group, these data were excluded from the mediation analysis, thus reducing the breadth of inference concerning perceptions toward AI in the education ecosystem. A study focused solely on this group may be fruitful in adding to the strength of the inferences made within this study. Similarly, increasing the sample size for teachers may provide a more granular appraisal of the mediators of acceptance of AI in education.
A further limitation that may dilute the strength of our conclusions concerns the population sample of respondents; we did not take into consideration respondents’ level of experience with AI. When the study was performed, generative AI had not progressed to its current level of sophistication and ubiquity, and the use of generative AI and other forms of AI in an educational context was, and still is, a matter of considerable debate with few real-world applications. Given that our sample was self-selecting and responded using various internet-based technologies, we assumed a general knowledge of modern technologies, not specifically AI.
Another issue involves the vignette methodology; while this method is instrumental in gathering initial perceptions, providing concrete examples of AI applications after deployment would be beneficial. Once AI and the issues surrounding the technology have breached the public consciousness, this can be accomplished in future work with larger sample sizes. As AI technology appears to be improving exponentially, this research will become essential to the education sector and beyond.
This study showed that stakeholders’ acceptance of AI in education varied across different scenarios. Each AI application (e.g., Correct-AI, Answer-AI, Tutor-AI, Emotion-AI) elicited different reactions regarding global utility, justice, and confidence, reflecting the nuanced nature of trust and acceptance toward AI. Our findings suggest that applying a one-size-fits-all approach to AI integration in education is infeasible. Instead, careful consideration of specific AI applications and their impact on perceptions of global utility, justice, and confidence is needed to enhance stakeholder acceptance and trust in AI education solutions.
Methods
Integrating a multi-stakeholder perspective is essential as it captures the systemic complexity inherent in educational transformations, as highlighted by UNESCO78 in its “Reimagining Our Futures Together” report, which advocates for a participatory approach involving all actors for sustainable education. This holistic perspective, also championed by Fullan and Quinn79 in their “Coherence for Deep Learning” framework, promotes a deeper understanding of the needs, constraints, and opportunities perceived by different stakeholders, thus leading to technological innovations and practice changes that are better adapted to contextual realities and more likely to be sustainably adopted. The resulting systemic approach allows consideration of the interdependence of educational system components and anticipation of the cascade effects of any change, thereby strengthening the legitimacy and sustainability of envisioned transformations80.
For the purpose of this study, which included both manipulation checks to validate the developed vignettes and a broader multi-stakeholder investigation, we utilised an experimental vignette methodology81. The experimental vignette methodology (EVM) shares methodological similarities with scenario-based surveys, as both approaches manipulate independent variables within a contextual framework to examine their effects81. This design allowed the investigation of any causal relationships while maintaining ecological validity. EVM was chosen over direct observation in this study because it is particularly suitable for assessing beliefs about systems that have not yet been widely deployed in the target setting82. When direct observation is not feasible, EVM provides a robust means of gauging user perceptions and behavioural intentions toward emerging technologies. Furthermore, technology acceptance research suggests that individuals form beliefs about future systems based on their anticipated characteristics and functionalities rather than direct experience83. This supports the use of EVM as a valid method for capturing user expectations and decision-making processes regarding AI adoption. The EVM procedure employed in the current study consisted of presenting participants with four carefully developed scenarios anchored in current and future educational reality to assess the effect of four independent variables (Agency, Transparency, Explainability, and Privacy) upon the perceptions of Global Utility (GU), Individual Usefulness (IU), Justice, Confidence, Risk, and the Intention to Use (ITU). The independent variables we chose as a focus for this study cover, non-exhaustively, the breadth of concerns related to modern generative AI.
Vignette Development
Before settling on the specific formulation and wording for the scenarios and the subsequent survey (the heart of the data collection), we conducted a series of small focus groups utilising the “foresight” methodology proposed by ref. 84. The purpose was to gain insight into current perceptions of AI held by students, parents, high school teachers, and school directors. These insights were meant to qualify perceptions of AI in the classroom reported in the education literature43,85 and to inform the research team about the current lived experience of five sixteen-year-old students (3 male), four teaching staff (3 female), two school directors (2 female), and three parents (3 male) of high school students.
The focus groups were structured around four scenarios ranging from probable to futuristic applications of AI in education: an AI that helps with grading, “Correct-AI”; an AI that personalises educational content to student needs, “Tutor-AI”; an AI that monitors student learning, “Emotion-AI”; and finally, an AI that supports and helps boost student performance, “Answer-AI”. The spectrum of probable to futuristic AI applications was based on the prior research and intuitions of the research team. Within this spectrum, Correct-AI appeared to be the most probable application of AI in education in the short term86,87. Similarly, Answer-AI appeared probable, given that developments utilising generative AI are already emerging88. More advanced applications of AI, such as Tutor-AI and Emotion-AI, were placed in the futuristic category, given that fully autonomous and subject-dependent adaptation of educational content and personalised affective state support are vigorous areas of research89 and debate. At the end of each focus group session, a final question was asked: “Where do you set the hard boundary on integrating AI into the classroom?”.
Following each focus group, the wording of the scenarios in development was iterated until the final set of four scenarios and four independent variables was approved (see Table 1), resulting in a 2 × 2 × 2 × 2 design whereby a participant saw only one level of each variable in their scenario, e.g., “Grades provided by Correct-AI are not validated by a teacher” or “Grades provided by Correct-AI are validated by a teacher”, depending on the counterbalanced group. The complete set of vignettes is available in Supplementary Table 2. The content and tone of the focus groups and subsequent surveys were approved by the authors’ institution’s research ethics board (certificate number 2023-5042).
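To make the factorial structure concrete, the sketch below enumerates the sixteen counterbalanced combinations of the four binary factors and assigns each participant to one of them. It is a minimal illustration only; the factor names mirror the manipulated variables, but the assignment scheme and identifiers are assumptions rather than the exact randomisation procedure used in the study.

```python
from itertools import product

# Four binary factors manipulated in the vignettes (2 x 2 x 2 x 2 = 16 conditions).
FACTORS = {
    "agency": ("low", "high"),
    "transparency": ("low", "high"),
    "explainability": ("low", "high"),
    "privacy": ("low", "high"),
}

# Enumerate every counterbalanced combination of factor levels.
CONDITIONS = [dict(zip(FACTORS, levels)) for levels in product(*FACTORS.values())]

def assign_condition(participant_index: int) -> dict:
    """Cycle participants through the 16 conditions so group sizes stay balanced
    (illustrative only; not the study's actual randomisation procedure)."""
    return CONDITIONS[participant_index % len(CONDITIONS)]

print(len(CONDITIONS))      # 16
print(assign_condition(0))  # e.g. {'agency': 'low', 'transparency': 'low', ...}
```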
Survey development
Based on a survey of the literature and the focus group insights, a survey was developed to assess the impact of agency, transparency, explainability, and privacy upon perceptions of: global utility, adapted from the utilitarian perspective67 as “that which provides the greatest good for the greatest number”; perceived individual usefulness, the degree to which a person believes that using a particular system or technology will enhance their job performance or productivity83; justice, taken from the personal relativist standpoint90 as how “just” an AI is perceived to be in its judgements; confidence, the belief that someone or something is honest, reliable, good, and effective, or the desire to depend on someone or something for security91,92; risk, the combination of uncertainty and the seriousness of an outcome in relation to performance, safety, and psychological or social uncertainties92; and intention to use, the likelihood or willingness of individuals to use a particular information system or technology83.
The survey instrument was created using a combination of verified scales adapted from the ethics and information systems literature (see Supplementary Materials). Table 2 shows the number of items and the provenance of the original scales used for adaptation. All scales were translated from English to French using a three-way majority vote among raters (inter-rater reliability = 0.82) and modified for each target stakeholder group.
Table 2.
Dependent variables and corresponding psychometric scales
| Dependent variable | Scale | # Items | Adapted from | Supplement |
|---|---|---|---|---|
| Intention to use | Likert 1-5 | 5 | 83 | Supplementary Table 2 |
| Perceived global utility | Likert 1-5 | 4 | 98–100 | Supplementary Table 3 |
| Justice | Likert 1-5 | 3 | 98–100 | Supplementary Table 4 |
| Confidence (Trust) | Likert 1-5 | 3 | 92 | Supplementary Table 5 |
| Risk | Likert 1-5 | 4 | 92 | Supplementary Table 6 |
| Perceived individual usefulness | Likert 1-5 | 7 | 92 | Supplementary Table 7 |
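As a rough illustration of the translation check described above, the snippet below computes average pairwise agreement among three hypothetical raters and resolves disagreements by majority vote. The ratings, the binary accept/revise coding, and the use of simple percent agreement are all assumptions for illustration; the study does not specify which inter-rater reliability statistic produced the reported 0.82.

```python
from itertools import combinations

# Hypothetical judgements from three raters on eight translated items:
# 1 = translation accepted as-is, 0 = flagged for revision.
ratings = {
    "rater_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "rater_b": [1, 1, 0, 1, 0, 0, 1, 1],
    "rater_c": [1, 0, 0, 1, 1, 0, 1, 1],
}

def pairwise_agreement(r1, r2):
    """Proportion of items on which two raters agree."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

pairs = list(combinations(ratings.values(), 2))
avg_agreement = sum(pairwise_agreement(a, b) for a, b in pairs) / len(pairs)

def majority_vote(item_index):
    """Keep the decision endorsed by at least two of the three raters."""
    votes = [r[item_index] for r in ratings.values()]
    return int(sum(votes) >= 2)

print(round(avg_agreement, 2))               # average pairwise agreement
print([majority_vote(i) for i in range(8)])  # majority decision per item
```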
Validation of vignettes—manipulation checks
The four scenarios were tested for manipulation validity in the study’s second phase, carried out in 2022. The scenarios were administered using Qualtrics and distributed via the Prolific (Prolific Inc., UK) online platform four times; each time, a single scenario was presented with a different combination of independent-variable levels (e.g., low agency, low privacy, high explainability, high transparency), with the aim of capturing views from 500 participants across North America. Target participants were adults older than 18 with at least one child, aged between 14 and 17 years, currently enrolled in compulsory secondary education. All participants provided digital consent and could stop completing the survey at any time. See Supplementary Tables 8–9 for examples of the questions used for the manipulation checks.
In total, 246 participants provided valid data for analysis across the four scenarios (see Table 3). The manipulation check data were analysed using the Wilcoxon rank-sum test, and the results indicated a strong effect of the manipulations (see Fig. 3). That is, there were significant differences between low and high agency (z = 9.08, p < 0.0001), privacy (z = 8.26, p < 0.0001), explainability (z = 8.04, p < 0.0001), and transparency (z = -7.11, p < 0.0001), allowing the next stage of the data collection to proceed.
Table 3.
Participant distribution across scenarios
| Scenario | N | Sex |
|---|---|---|
| Scenario 1—Correct-AI | 62 | 36 Female |
| Scenario 2—Answer-AI | 59 | 33 Female |
| Scenario 3—Tutor-AI | 59 | 31 Male |
| Scenario 4—Emotion-AI | 66 | 35 Female |
Fig. 3. Results from manipulation checks.
The figure displays the results from the manipulation check analysis of each scenario; data were derived from 246 participants across the four scenarios. Data were analysed using the Wilcoxon rank-sum test, and the results indicated a strong effect of each manipulation: significant differences between low and high agency, privacy, explainability, and transparency were reported (*p < 0.05, **p < 0.005, ***p < 0.001).
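For readers who wish to reproduce this type of manipulation check, the sketch below runs a Wilcoxon rank-sum test on synthetic Likert responses comparing a low-agency and a high-agency vignette group. The data are invented and the variable names are assumptions; only the test itself mirrors the analysis reported above.

```python
import numpy as np
from scipy.stats import ranksums

# Synthetic 1-5 Likert responses to a perceived-agency item (data are made up).
rng = np.random.default_rng(42)
low_agency = rng.integers(1, 4, size=120)    # responses skewed toward the low end
high_agency = rng.integers(3, 6, size=126)   # responses skewed toward the high end

# Wilcoxon rank-sum test (normal approximation), as used for the manipulation checks.
z_stat, p_value = ranksums(high_agency, low_agency)
print(f"z = {z_stat:.2f}, p = {p_value:.2e}")
```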
Data collection
The final phase of the study, data collection, was completed in the winter of 2023 and was partially distributed by AlloProf (AlloProf Inc.), a nonprofit education-sector organisation providing academic support services at all levels of the education ecosystem in Quebec, Canada. In addition, we utilised a social media campaign composed of advertisements in schools, private Facebook groups (parents, teachers, and school directors), and LinkedIn to recruit a multi-stakeholder pool of participants from within Quebec, Canada only. Participants therefore self-selected by choosing to undertake the study, resulting in a non-probability sampling approach. This sampling method is commonly employed and accepted for exploratory studies; however, it may limit the generalisability of the findings due to the lack of random selection93,94. The survey was presented in both English and French. Each participant was exposed to only one of the four scenarios and the complete survey. In total, 4073 individuals responded to the survey (see Table 4). This work was approved by the Comité d’éthique de la recherche HEC Montréal, certificate number 2023-5042.
Table 4.
Participant distribution and total respondents
| | Student | Parent | Teacher | Director | Total respondents |
|---|---|---|---|---|---|
| No. of responses | 2635 | 900 | 486 | 52 | 4073 |
| No. of finished responses | 1454 | 433 | 238 | 23 | 2148 |
| No. of valid responses | 608 | 381 | 203 | 20 | 1212 |
The survey was embedded with several attention checks to ensure quality participation and data fidelity. Data were only included if the survey was completed in full and all attention checks were passed. After removal of failed attention checks and incomplete survey data, the sample consisted of 608 students aged 16-18 (μ = 17.4; 357 female, 190 male, 61 other), 381 parents (306 female, 58 male, 17 other) with at least one child in mandatory education, 203 teachers (138 female, 59 male, 6 other; years worked μ = 3.3, σ = 1.16) currently working in primary or secondary education, and 20 directors (12 female, 8 male; years worked μ = 3.15, σ = 3.1) of secondary education institutions. However, due to the low sample size, the data for school directors were omitted from the mediation analysis. For consistency of analysis, data were removed in cases of failed attention checks (713) and incomplete surveys or invalid responses (1925), distributed across stakeholder groups as follows: student (911), parent (467), teacher (248), and director (30); see Table 4.
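A minimal sketch of this exclusion pipeline is shown below, assuming a flat survey export with hypothetical column names (finished, attention_check_1, attention_check_2, role); the real export schema is not documented here.

```python
import pandas as pd

# Load the raw survey export (hypothetical file and column names).
df = pd.read_csv("survey_responses.csv")

# Keep only surveys that were completed in full...
completed = df["finished"].astype(bool)
# ...and in which every embedded attention check was passed.
passed_checks = df[["attention_check_1", "attention_check_2"]].astype(bool).all(axis=1)

valid = df[completed & passed_checks]

# Summarise the remaining valid responses by stakeholder group, as in Table 4.
print(valid["role"].value_counts())
```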
A prize draw for one iPad was offered to encourage participation and boost survey completion among student stakeholders. Research indicates that offering incentives in this way has a negligible effect on survey answer quality while potentially enhancing completion rates95,96. Furthermore, compared to no incentive, prize draws increase completion rates and may also reduce various incomplete participation patterns97.
As previously stated, each participant was exposed to only one scenario and the complete survey instrument; Table 5 shows the distribution of participants across the four scenarios.
Table 5.
Distribution of respondents across the four scenarios
| Role | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Total |
|---|---|---|---|---|---|
| Director | 4 | 7 | 4 | 5 | 20 |
| Parent | 110 | 89 | 82 | 100 | 381 |
| Teacher | 51 | 50 | 52 | 50 | 203 |
| Student | 184 | 135 | 146 | 143 | 608 |
| Total | 349 | 281 | 284 | 298 | 1212 |
The survey instrument used in this study consisted of six subscales (see Table 2); Cronbach’s alpha indicated acceptable to high internal consistency for all scales. The global utility subscale consisted of 5 items (α = 0.81), perceived individual usefulness of 7 items (α = 0.79), justice of 3 items (α = 0.85), confidence of 3 items (α = 0.71), risk of 4 items (α = 0.74), and intention to use of 5 items (α = 0.90).
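For reference, Cronbach’s alpha can be computed directly from an item-response matrix as shown in the sketch below; the toy data are invented and serve only to illustrate the calculation applied to each subscale.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items in the subscale
    item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_var / total_var)

# Toy example: six respondents answering a three-item subscale on a 1-5 scale.
subscale = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 2, 3],
])
print(round(cronbach_alpha(subscale), 2))
```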
Declaration of generative AI and AI-assisted technologies in the writing process
During the preparation of this work, the author(s) used Grammarly to proofread the text. After using this tool, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
Acknowledgements
We would like to thank our industrial research partner, Alloprof, Quebec, CA, for their support of this research. This work was funded by IVADO « L’IA centrée sur l’humain : du développement des algorithmes responsables à l’adoption de l’IA » (Human-centred AI: from the development of responsible algorithms to AI adoption), APOGÉE - IVADO - CCS: 38 153 310 64 R2884.
Author contributions
All authors have read and approved the manuscript content. The following contributions were made by participating authors: AJK, Writing, Editing, Interpretation, Data Analysis, Experimental Design; PC, Interpretation, Experimental Design; JTM, Interpretation, Experimental Design; AO, Interpretation, Editing, Experimental Design; AML, Experimental Design, Data collection; SS, Interpretation, Experimental Design; PML, Interpretation, Experimental Design.
Data availability
The datasets that were generated and/or analysed in the current study are not publicly available but can be made available upon request.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
The online version contains supplementary material available at 10.1038/s41539-025-00333-2.
References
- 1. Baum, S., Ma, J. & Payea, K. Education Pays, 2013: The Benefits of Higher Education for Individuals and Society. Trends in Higher Education Series. College Board (2013).
- 2. Roco, M., Bainbridge, W., Tonn, B. & Whitesides, G. Converging knowledge, technology, and society: Beyond convergence of nano-bio-info-cognitive technologies. Dordrecht, Heidelberg, New York, London, 450 (2013).
- 3. Penprase, B. E. The fourth industrial revolution and higher education. High. Educ. Era Fourth Ind. Revolut. 10, 978–981 (2018).
- 4. Jo, A. The promise and peril of generative AI. Nature 614, 214–216 (2023).
- 5. Akgun, S. & Greenhow, C. Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI Ethics 2, 431–440 (2022).
- 6. Aoun, J. E. Robot-proof: Higher education in the age of artificial intelligence (MIT Press, 2017).
- 7. Kaplan, A. & Haenlein, M. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus. Horiz. 62, 15–25 (2019).
- 8. Davis, E. & Marcus, G. Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM 58, 92–103 (2015).
- 9. Russell, S. J. & Norvig, P. Artificial intelligence: A modern approach (London, 2010).
- 10. Self, J. The defining characteristics of intelligent tutoring systems research: ITSs care, precisely. Int. J. Artif. Intell. Educ. 10, 350–364 (1998).
- 11. Bulger, M. Personalized learning: The conversations we’re not having. Data Soc. 22, 1–29 (2016).
- 12. Kulik, J. A. Effects of using instructional technology in elementary and secondary schools: What controlled evaluation studies say (Citeseer, 2003).
- 13. Picciano, A. G. The evolution of big data and learning analytics in American higher education. J. Asynchronous Learn. Netw. 16, 9–20 (2012).
- 14. Winkler, R. & Söllner, M. in Academy of Management Proceedings 15903 (Academy of Management, Briarcliff Manor, NY).
- 15. Vaswani, A. et al. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).
- 16. Sharples, M. Automated essay writing: An AIED opinion. Int. J. Artif. Intell. Educ. 32, 1119–1126 (2022).
- 17. Abd-Elaal, E.-S., Gamage, S. H. & Mills, J. E. Assisting academics to identify computer generated writing. Eur. J. Eng. Educ. 47, 725–745 (2022).
- 18. Yeo, M. A. Academic integrity in the age of Artificial Intelligence (AI) authoring apps. TESOL Journal, e716 (2023).
- 19. Borenstein, J. & Howard, A. Emerging challenges in AI and the need for AI ethics education. AI Ethics 1, 61–65 (2021).
- 20. Xia, Q. et al. A self-determination theory (SDT) design approach for inclusive and diverse artificial intelligence (AI) education. Computers Educ. 189, 104582 (2022).
- 21. Kelly, S., Kaye, S.-A. & Oviedo-Trespalacios, O. What factors contribute to the acceptance of artificial intelligence? A systematic review. Telemat. Inform. 77, 101925 (2023).
- 22. Chan, C. K. Y. A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 20, 38 (2023).
- 23. Holmes, W., Bialik, M. & Fadel, C. (Globethics Publications, 2023).
- 24. Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D. & Siemens, G. Impact of AI assistance on student agency. Computers Educ. 210, 104967 (2024).
- 25. Selwyn, N. Should robots replace teachers?: AI and the future of education (John Wiley & Sons, 2019).
- 26. Schiff, D. Out of the laboratory and into the classroom: the future of artificial intelligence in education. AI Soc. 36, 331–348 (2021).
- 27. Bandura, A. Social cognitive theory: An agentic perspective. Annu. Rev. Psychol. 52, 1–26 (2001).
- 28. Xie, H., Chu, H.-C., Hwang, G.-J. & Wang, C.-C. Trends and development in technology-enhanced adaptive/personalized learning: A systematic review of journal publications from 2007 to 2017. Computers Educ. 140, 103599 (2019).
- 29. Tsai, Y. S., Poquet, O., Gašević, D., Dawson, S. & Pardo, A. Complexity leadership in learning analytics: Drivers, challenges and opportunities. Br. J. Educ. Technol. 50, 2839–2854 (2019).
- 30. Parasuraman, R. & Manzey, D. H. Complacency and bias in human use of automation: An attentional integration. Hum. Factors 52, 381–410 (2010).
- 31. Buckingham Shum, S., Ferguson, R. & Martinez-Maldonado, R. Human-centred learning analytics. J. Learn. Analytics 6, 1–9 (2019).
- 32. Rader, E., Cotter, K. & Cho, J. in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems 1–13.
- 33. Kroll, J. A. Accountable algorithms. Princeton University (2015).
- 34. Romero, C. & Ventura, S. Educational data mining and learning analytics: An updated survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 10, e1355 (2020).
- 35. Rauber, M. F. et al. Reliability and validity of an automated model for assessing the learning of machine learning in middle and high school: Experiences from the “ML for All!” course. Informatics in Education (2023).
- 36. O’Neil, C. Weapons of math destruction: How big data increases inequality and threatens democracy (Crown, 2017).
- 37. Balfour, S. P. Assessing writing in MOOCs: Automated essay scoring and Calibrated Peer Review™. Res. Pract. Assess. 8, 40–48 (2013).
- 38. Bommasani, R. et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021).
- 39. Holmes, W. et al. Ethics of AI in education: Towards a community-wide framework. Int. J. Artif. Intell. Educ. 1–23 (2021).
- 40. Dietvorst, B. J., Simmons, J. P. & Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 144, 114 (2015).
- 41. Ribeiro, M. T., Singh, S. & Guestrin, C. in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 1135–1144.
- 42. Karran, A. J., Demazure, T., Hudon, A., Senecal, S. & Léger, P.-M. Designing for confidence: The impact of visualizing artificial intelligence decisions. Front. Neurosci. 16, 883385 (2022).
- 43. Kashive, N., Powale, L. & Kashive, K. Understanding user perception toward artificial intelligence (AI) enabled e-learning. Int. J. Inf. Learn. Technol. 38, 1–19 (2020).
- 44. Lee, J., Yamani, Y., Long, S. K., Unverricht, J. & Itoh, M. Revisiting human-machine trust: a replication study of Muir and Moray (1996) using a simulated pasteurizer plant task. Ergonomics 64, 1132–1145 (2021).
- 45. Lee, J. D. & See, K. A. Trust in automation: Designing for appropriate reliance. Hum. Factors 46, 50–80 (2004).
- 46. Hoff, K. A. & Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 57, 407–434 (2015).
- 47. Burrell, J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data Soc. 3, 2053951715622512 (2016).
- 48. Dignum, V. Ethics in artificial intelligence: introduction to the special issue. Ethics Inf. Technol. 20, 1–3 (2018).
- 49. Amo Filvá, D. et al. Local technology to enhance data privacy and security in educational technology. Int. J. Interact. Multimed. Artif. Intell. 7, 262–273 (2021).
- 50. Zuboff, S. in Social Theory Re-Wired 203–213 (Routledge, 2023).
- 51. Nguyen, A., Ngo, H. N., Hong, Y., Dang, B. & Nguyen, B.-P. T. Ethical principles for artificial intelligence in education. Educ. Inf. Technol. 28, 4221–4241 (2023).
- 52. Guidotti, R. et al. A survey of methods for explaining black box models. ACM Comput. Surv. 51, 1–42 (2018).
- 53. Gunning, D. et al. XAI—Explainable artificial intelligence. Sci. Robot. 4, eaay7120 (2019).
- 54. Arrieta, A. B. et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020).
- 55. Lipton, Z. C. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16, 31–57 (2018).
- 56. Khosravi, H. et al. Explainable artificial intelligence in education. Computers Educ. Artif. Intell. 3, 100074 (2022).
- 57. Rosenfeld, A. & Richardson, A. Explainability in human–agent systems. Auton. Agents Multi-Agent Syst. 33, 673–705 (2019).
- 58. Leslie, D. Understanding artificial intelligence ethics and safety. arXiv preprint arXiv:1906.05684 (2019).
- 59. Bearman, M. & Ajjawi, R. Learning to work with the black box: Pedagogy for a world with artificial intelligence. Br. J. Educ. Technol. (2023).
- 60. Niemi, H. AI in learning: Preparing grounds for future learning. J. Pac. Rim Psychol. 15, 18344909211038105 (2021).
- 61. Goodman, B. & Flaxman, S. European Union regulations on algorithmic decision-making and a “right to explanation”. AI Mag. 38, 50–57 (2017).
- 62. Hamon, R., Junklewitz, H., Sanchez, I., Malgieri, G. & De Hert, P. Bridging the gap between AI and explainability in the GDPR: towards trustworthiness-by-design in automated decision-making. IEEE Comput. Intell. Mag. 17, 72–85 (2022).
- 63. Felzmann, H., Villaronga, E. F., Lutz, C. & Tamò-Larrieux, A. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data Soc. 6, 2053951719860542 (2019).
- 64. Wachter, S., Mittelstadt, B. & Floridi, L. Transparent, explainable, and accountable AI for robotics. Sci. Robot. 2, eaan6080 (2017).
- 65. Sovrano, F., Vitali, F. & Palmirani, M. in Electronic Government and the Information Systems Perspective: 9th International Conference, EGOVIS 2020, Bratislava, Slovakia, September 14–17, 2020, Proceedings 9, 219–233 (Springer).
- 66. Hayes, A. F. Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (Guilford Publications, 2017).
- 67. Sinnott-Armstrong, W. Consequentialism (2023).
- 68. Sen, A. Rights and agency. Philos. Public Aff. 3–39 (1982).
- 69. Kizilcec, R. F. To advance AI use in education, focus on understanding educators. Int. J. Artif. Intell. Educ. 1–8 (2023).
- 70. Madaio, M., Blodgett, S. L., Mayfield, E. & Dixon-Román, E. in The Ethics of Artificial Intelligence in Education 203–239 (Routledge, 2022).
- 71. Davies, B. & Bansel, P. Neoliberalism and education. Int. J. Qual. Stud. Educ. 20, 247–259 (2007).
- 72. Kuypers, J. A. Rhetorical criticism: Perspectives in action (Lexington Books, 2009).
- 73. Lobera, J., Rodríguez, C. J. F. & Torres-Albero, C. in Communicating Artificial Intelligence (AI) 80–97 (Routledge, 2020).
- 74. Newfield, C. How to Make “AI” Intelligent; or, The Question of Epistemic Equality. Critical AI 1 (2023).
- 75. Miller, F. A., Katz, J. H. & Gans, R. The OD imperative to add inclusion to the algorithms of artificial intelligence. OD Practitioner 50, 8 (2018).
- 76. Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum. Comput. Stud. 146, 102551 (2021).
- 77. Brighouse, H. Egalitarian liberalism and justice in education. Political Q. 73, 181–190 (2002).
- 78. UNESCO. Reimagining our futures together: A new social contract for education (United Nations Educational, Scientific and Cultural Organization, Paris, 2021).
- 79. Fullan, M. & Quinn, J. Coherence: The right drivers in action for schools, districts, and systems (Corwin Press, 2015).
- 80. Zhao, Y., Wehmeyer, M., Basham, J. & Hansen, D. Tackling the wicked problem of measuring what matters: Framing the questions. ECNU Rev. Educ. 2, 262–278 (2019).
- 81. Aguinis, H. & Bradley, K. J. Best practice recommendations for designing and implementing experimental vignette methodology studies. Organ. Res. Methods 17, 351–371 (2014).
- 82. Venkatesh, V., Morris, M. G., Davis, G. B. & Davis, F. D. User acceptance of information technology: Toward a unified view. MIS Q. 425–478 (2003).
- 83. Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 319–340 (1989).
- 84. Inayatullah, S. Six pillars: futures thinking for transforming. Foresight 10, 4–21 (2008).
- 85. Chounta, I.-A., Bardone, E., Raudsep, A. & Pedaste, M. Exploring teachers’ perceptions of Artificial Intelligence as a tool to support their practice in Estonian K-12 education. Int. J. Artif. Intell. Educ. 32, 725–755 (2022).
- 86. Daly, P. & Deglaire, E. AI-enabled correction: A professor’s journey. Innov. Educ. Teach. Int. 1–17 (2024).
- 87. Luckin, R. & Holmes, W. Intelligence unleashed: An argument for AI in education (2016).
- 88. Chen, Y., Jensen, S., Albert, L. J., Gupta, S. & Lee, T. Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success. Inf. Syst. Front. 25, 161–182 (2023).
- 89. Lin, C.-C., Huang, A. Y. & Lu, O. H. Artificial intelligence in intelligent tutoring systems toward sustainable education: a systematic review. Smart Learn. Environ. 10, 41 (2023).
- 90. Baghramian, M. & Coliva, A. Relativism (Routledge, 2019).
- 91. Safa, N. S. & Von Solms, R. An information security knowledge sharing model in organizations. Comput. Hum. Behav. 57, 442–451 (2016).
- 92. Ye, T. et al. Psychosocial factors affecting artificial intelligence adoption in health care in China: Cross-sectional study. J. Med. Internet Res. 21, e14316 (2019).
- 93. Bornstein, M. H., Jager, J. & Putnick, D. L. Sampling in developmental science: Situations, shortcomings, solutions, and standards. Dev. Rev. 33, 357–370 (2013).
- 94. Etikan, I., Musa, S. A. & Alkassim, R. S. Comparison of convenience sampling and purposive sampling. Am. J. Theor. Appl. Stat. 5, 1–4 (2016).
- 95. Singer, E. & Ye, C. The use and effects of incentives in surveys. Ann. Am. Acad. Political Soc. Sci. 645, 112–141 (2013).
- 96. Singer, E., Van Hoewyk, J., Gebler, N. & McGonagle, K. The effect of incentives on response rates in interviewer-mediated surveys. J. Off. Stat. 15, 217 (1999).
- 97. Bosnjak, M. & Tuten, T. L. Prepaid and promised incentives in web surveys: An experiment. Soc. Sci. Comput. Rev. 21, 208–217 (2003).
- 98. Hyman, M. R. A critique and revision of the multidimensional ethics scale. Journal of Empirical Generalisations in Marketing Science 1 (1996).
- 99. Reidenbach, R. E. & Robin, D. P. Some initial steps toward improving the measurement of ethical evaluations of marketing activities. J. Bus. Ethics 7, 871–879 (1988).
- 100. Reidenbach, R. E. & Robin, D. P. Toward the development of a multidimensional scale for improving evaluations of business ethics. J. Bus. Ethics 9, 639–653 (1990).