Scientific Reports
2026 Feb 11;16:8341. doi: 10.1038/s41598-026-39778-9

AI-enabled learning analytics use relates to physical literacy and engagement in university PE via smart teaching and personalised feedback

Yi Chen 1, Dongjin Xian 2, Yuhu Zhao 1, Yawei Sun 3, Yang Ren 4, Chao Wang 5
PMCID: PMC12966271  PMID: 41667600

Abstract

Digital transformation and AI-enabled learning analytics are reshaping higher education, and wearable-enabled analytics are increasingly used in embodied curricula such as university physical education (PE), but empirical evidence linking these systems to physical literacy remains limited. This study investigates how AI-enabled learning analytics use (wearable-derived dashboards and automated alerts) in smart PE relates to students’ physical literacy and learning engagement, and tests whether perceived smart teaching quality and personalised feedback mediate these associations within an established human-centred learning analytics perspective. An explanatory sequential mixed-methods design combined a survey of 1,182 students from four Chinese universities with semi-structured interviews with 12 students and six staff members. Structural equation modelling showed that analytics use was associated with perceived smart teaching quality (β = 0.47, p < .001) and personalised feedback (β = 0.39, p < .001), which were in turn related to physical literacy (β = 0.28 and β = 0.36, respectively, p < .001) and learning engagement (β = 0.24 and β = 0.31, respectively, p < .001); direct paths from analytics use to physical literacy (β = 0.06, p = .080) and engagement (β = 0.05, p = .110) were small and not statistically significant, while bias-corrected bootstrap mediation estimates (5,000 resamples) indicated that the associations operated primarily through teaching and feedback processes. Thematic analysis showed that students and instructors experienced analytics both as a “mirror and coach” and as a source of pressure, fairness concerns and heightened bodily visibility, with system reliability, assessment regimes and data literacy shaping these interpretations. Overall, the findings suggest that AI-enabled analytics are more consistently linked with physical literacy and engagement through pedagogical and feedback processes rather than through data exposure alone.
By applying and testing established human-centred learning analytics mechanisms in a compulsory university PE setting, the study provides mixed-methods evidence to inform the design of smart PE initiatives that support physical literacy in higher education. Because the survey data are cross-sectional and self-reported, common-method bias cannot be fully ruled out and findings should be interpreted as associations rather than causal effects.

Supplementary Information

The online version contains supplementary material available at 10.1038/s41598-026-39778-9.

Keywords: AI-enabled learning analytics, Higher education, Mixed-methods research, Physical literacy, Smart physical education

Subject terms: Education, Information systems and information technology, Mathematics and computing

Introduction

The rapid digital transformation of higher education has intensified interest in how artificial intelligence (AI) and learning analytics (LA) can enhance teaching, assessment and student success. International policy reports now call for “human-centred” approaches that harness AI to improve quality and equity while protecting learners’ rights1,2. At the same time, an expanding body of work has begun to address multimodal and embodied forms of learning analytics beyond purely screen-based settings; however, empirical evidence and equity-focused debate remain comparatively limited in compulsory, credit-bearing university physical education (PE). University PE—which directly shapes students’ bodily health, social participation and well-being—is still under-represented in learning analytics research and in many data-driven quality assurance agendas.

Recent systematic reviews document a sharp rise in AI-enabled tools for adaptive tutoring, predictive analytics and automated feedback in higher education3. While these technologies promise more personalised learning, scholars highlight concerns about opacity, bias and the erosion of teacher and student agency, and argue for learning analytics that are explicitly human-centred and participatory4,5. Research informed by critical data literacy further shows how datafied infrastructures can reproduce inequalities if educators and students are not supported to question how data are collected, interpreted and used6,7. Although these concerns are widely discussed in higher education learning analytics, they have been less systematically examined in contexts where learners’ bodies and health-related indicators are continuously tracked and made visible to others, such as university PE.

Physical education and sports science scholarship foregrounds the value of movement, play and health for students’ holistic development. Physical literacy has become a key concept for articulating these aims. Building on Whitehead’s foundational work, physical literacy is defined as the motivation, confidence, physical competence, knowledge and understanding that enable individuals to value and take responsibility for physical activity throughout life8,9. Recent studies among university students show robust associations between physical literacy and mental health, resilience and health-related quality of life10,11. In Chinese higher education, emerging research further demonstrates how physical literacy and mindfulness can mediate the impact of psychological distress on well-being and has produced validated instruments tailored to local student populations10,12,13. Despite this progress, most research relies on cross-sectional self-report surveys and provides limited insight into how everyday teaching practices and feedback processes in university PE cultivate—or undermine—students’ physical literacy.

In parallel, multimodal and physical learning analytics research examines how sensors, wearables and computer vision can capture learners’ bodily movement and physiological signals to support co-located, activity-rich learning14,15. These studies indicate that AI-enhanced analytics can generate detailed records of learners’ interactions with physical spaces and provide real-time performance feedback. However, this work has focused mainly on STEM laboratories and collaborative problem-solving tasks rather than on credit-bearing university PE, and it seldom examines students’ or teachers’ perspectives on being monitored, measured and nudged through AI-enabled systems. Moreover, mixed-methods studies in other higher education domains have repeatedly suggested that learning analytics tend to shape outcomes indirectly—through how teachers translate data into pedagogy and feedback—rather than through data exposure alone. The extent to which this established “human-centred” mediation logic holds in compulsory university PE, and whether it connects to physical literacy as a health-related educational outcome, remains unclear.

Taken together, these strands of research reveal three interrelated gaps. First, although human-centred learning analytics research has articulated principles of equity, agency and critical data literacy, comparatively few studies have examined how these principles play out in compulsory university PE where bodily visibility and assessment can heighten perceived fairness risks. Second, studies of AI-enhanced physical or smart learning environments tend to prioritise short-term performance indicators and teacher-centred efficiency, and seldom consider whether data-driven, personalised feedback supports students’ motivation, confidence and sense of fairness. Third, higher education research on physical literacy has not yet engaged systematically with the affordances and risks of AI-enabled analytics, and there is a lack of mixed-methods work that combines large-scale evidence on students’ engagement with analytics features with validated physical literacy measures and in-depth qualitative accounts of student and instructor experience.

Responding to these gaps, this study investigates AI-enabled learning analytics in university physical education in China. Using an explanatory sequential mixed-methods design, we combine a large-scale student survey (N = 1,182) with semi-structured interviews with students and staff to examine how AI-enabled analytics use is associated with smart teaching practices and personalised feedback, how these processes relate to students’ physical literacy and learning engagement, and how participants interpret the affordances and risks of smart PE. Specifically, the study addresses four research questions: (1) How are AI-enabled learning analytics configured in the sampled university PE courses, and how intensively do students report engaging with analytics features? (2) How are students’ reported use of learning analytics, perceived smart teaching quality and personalised feedback associated with their physical literacy and learning engagement, and to what extent are these associations explained by indirect pathways via teaching and feedback? (3) How do students and instructors perceive the benefits, challenges and equity implications of AI-enabled learning analytics in smart PE? and (4) Under what institutional and technological conditions do AI-enabled learning analytics appear most supportive of human-centred, physical-literacy-oriented university PE?

Literature review

Digital transformation, learning analytics and AI in higher education

Across higher education systems, digital transformation has intensified the “datafication” of teaching and learning, with institutions increasingly relying on digital platforms, metrics and dashboards to govern student success. Large-scale reviews of AI in education document rapid growth in systems that process log data, behavioural traces and assessments to support prediction, recommendation and adaptive teaching. They also highlight uneven evidence of impact and substantial ethical concerns3. Within this broader landscape, learning analytics has evolved from early work on descriptive dashboards and risk prediction to more sophisticated, AI-enhanced tools that combine modelling, recommendation and visualisation to inform instructional decision-making and student self-regulation4,16.

Recent human-centred reviews argue that AI and learning analytics should be understood as socio-technical systems embedded in institutional cultures rather than as neutral optimisation tools4 (Buckingham Shum, 2024). This work calls for explicit attention to pedagogical alignment, stakeholder participation and the interpretability of metrics, and for a shift in emphasis from automated decision-making to the augmentation of human judgement. At the same time, a growing body of research on personalised adaptive learning in higher education suggests that well-designed, data-driven personalisation can improve academic performance and engagement, particularly when it is tightly integrated with formative assessment and feedback processes16. However, evidence from learning analytics dashboards and intervention research is mixed: effects are often modest, context-dependent, and in some cases dashboards can be ignored, misunderstood, or experienced as discouraging—especially when metrics are perceived as opaque or deficit-framed (e.g., learning analytics dashboard reviews; motivation- and equity-focused scholarship).

A complementary explanation for such variability is offered by technology acceptance and readiness perspectives. Technology acceptance models, particularly the Unified Theory of Acceptance and Use of Technology (UTAUT) and its extension UTAUT2, propose that students’ technology use is shaped by performance expectancy, effort expectancy, social influence, and facilitating conditions, as well as habit and related motivational factors17,18. Recent studies in higher education further indicate that students’ readiness and acceptance are associated with their utilisation of digital learning resources and AI-enhanced learning tools, albeit in context-specific ways19,20. In learning analytics contexts, these acceptance-related factors may influence not only whether students use dashboards and indicators, but also whether they perceive analytics as supportive, credible and actionable or as opaque, discouraging and controlling—thereby shaping engagement with analytics-enabled learning opportunities.

Importantly, learning analytics is no longer limited to screen-based traces. Multimodal/physical learning analytics research has demonstrated how sensor, location, movement and physiological data can be analysed to support learning in co-located, activity-rich environments (e.g., labs, simulations, dance, healthcare training). Yet, compared with these emerging multimodal contexts, compulsory university PE remains under-represented as a learning-analytics setting, particularly in studies that connect analytics-enabled teaching and feedback processes to broader educational outcomes such as physical literacy.

Equity, agency and data cultures in higher education

Critical scholarship on datafication in higher education highlights how analytics and AI reconfigure relations of power, responsibility and risk between institutions, staff and students21,22. These studies show that student data are increasingly entangled with institutional accountability, market pressures and surveillance logics, raising concerns about normalisation, deficit framings of “at-risk” learners and the reproduction of existing inequities. Conceptualising analytics as part of broader “data cultures” draws attention to how norms, expectations and tacit beliefs about data shape what is collected, how it is interpreted and whose voices are included in sense-making.

Within this landscape, human-centred learning analytics foregrounds issues of fairness, transparency and learner agency. Empirical work on student- and teacher-facing dashboards demonstrates that visualisations can support reflection and self-regulation, but also that perceived opacity, misalignment with learners’ goals, and limited control over data use can undermine trust (Buckingham Shum, 2024; Khalil et al.23). Studies of personalised analytics in higher education similarly caution against assuming that data-rich feedback automatically empowers learners. Instead, agency depends on how indicators are negotiated, explained and embedded in supportive pedagogical relationships5,6.

A consistent implication of this literature is that learning analytics tends to influence student outcomes indirectly, through pedagogical translation: teachers’ orchestration of data, sense-making conversations, and the design of timely, actionable feedback. Accordingly, “pedagogy/feedback as mechanisms” should be treated as an established human-centred learning analytics proposition rather than a novel claim, and the key empirical question becomes whether—and under what conditions—this mediation logic holds in new contexts. This question is particularly salient when analytics extend to bodily and health-related data, because continuous tracking can heighten concerns about privacy, stigma, fairness and “visibility” in ways that differ from academic trace data.

AI-enabled smart physical education

Parallel to debates in higher education more broadly, physical education (PE) research has begun to explore “smart” and AI-enabled approaches that integrate sensors, computer vision and learning analytics. Recent reviews document applications of pose estimation, motion capture, wearable devices and intelligent tutoring systems to analyse technique, monitor workload and provide real-time feedback in PE and sports settings24,25. Systematic reviews of AI in PE conclude that these technologies can enhance individualised instruction, formative assessment and student motivation, but also emphasise that existing studies are recent, small-scale and methodologically heterogeneous26. Related evidence from schools and youth settings further suggests that wearables can support monitoring and goal-setting, while also generating ambivalence about surveillance, data ownership and unintended pressure—highlighting that acceptance and impact are not guaranteed.

Emerging conceptual work argues that AI offers a promising avenue for reimagining PE as a data-informed, student-centred domain in which personalised feedback, interactive simulations and gamified analytics can support diverse learners and broaden participation27,28. In university contexts, AI-based systems have been proposed to optimise training intensity, prevent injury and align PE curricula with public health goals10,29. However, the smart PE literature remains predominantly technical. Many studies prioritise algorithmic accuracy, recognition rates or physiological outcomes, with limited theorisation of teaching quality, student experience or equity. Few investigations conceptualise these systems explicitly as learning analytics that mediate relationships between students, teachers and data, or analyse how students interpret AI-generated feedback in compulsory university PE courses. Moreover, PE contexts can amplify equity risks (e.g., differential device accuracy across bodies, differential access to devices, or differential consequences of being “ranked” or “flagged”), making human-centred design and teacher mediation especially consequential.

Physical literacy and student development in higher education

Physical literacy has gained traction as a holistic construct for understanding how individuals value and sustain engagement in physical activity across the life course, encompassing motivation, confidence, physical competence, knowledge and understanding. Building on this conceptualisation, researchers in China and elsewhere have developed instruments to assess physical literacy among university students, including the College Student Physical Literacy Questionnaire (CSPLQ) and related index systems12,13. These tools provide multidimensional profiles of students’ physical literacy and offer a basis for evaluating PE curricula and targeted interventions in higher education.

Recent quantitative studies consistently show that higher levels of physical literacy are associated with more favourable health behaviours and outcomes in university populations. Cross-sectional work with Chinese university students has found that physical literacy is positively related to adherence to 24-hour movement guidelines and overall physical activity, suggesting that it supports healthier movement patterns in young adulthood30. Other studies report that physical literacy mediates or moderates relationships between psychosocial factors (such as eHealth literacy, grit or resilience) and well-being, highlighting its role as a protective resource20,31. A recent scoping review of tertiary students concludes that perceived physical literacy is robustly associated with better mental health, quality of life and reduced psychological distress, but notes that most available evidence is observational and cross-sectional32.

Taken together, this body of research positions physical literacy as a desirable outcome of university PE and health promotion. However, it rarely considers the learning environments and feedback mechanisms through which physical literacy is cultivated. Existing studies typically treat physical literacy as an individual attribute measured via questionnaires, rather than examining how specific pedagogical practices—such as personalised feedback, smart monitoring or analytics-informed coaching—shape students’ motivation, confidence and engagement in PE. There is also limited integration between physical literacy research and the human-centred learning analytics literature, despite shared concerns with agency, equity and sustainable behaviour change. This gap is consequential because physical literacy is intrinsically motivational and relational: without careful pedagogical framing and credible feedback, measurement-intensive “smart” PE could strengthen competence for some students while undermining confidence or perceived fairness for others.

Synthesis and research gap

Bringing these strands together reveals several gaps. First, while multimodal/embodied learning analytics has expanded beyond screen-based learning, there remains limited mixed-methods evidence in compulsory university PE that connects analytics use to physical literacy and engagement via clearly specified pedagogical mechanisms. Second, studies of AI-enabled smart PE demonstrate technical feasibility and potential benefits for performance, monitoring and health management, but seldom theorise these systems as part of broader data cultures or examine how they affect students’ perceptions of smart teaching quality, personalised feedback, fairness and agency in compulsory university PE settings. Third, higher education research on physical literacy has established its importance for movement behaviours and mental health, particularly among Chinese college students, but has rarely linked physical literacy to concrete instructional designs or to AI-driven analytics that might scaffold more engaging, student-centred PE.

The present study addresses these gaps by examining an AI-enabled learning analytics system implemented in a Chinese university PE course, focusing on how students’ use of the system relates to perceived smart teaching, personalised feedback, physical literacy and learning engagement, and on how students and instructors interpret its affordances and risks. Rather than proposing a wholly new theoretical model, we apply and test an established human-centred learning analytics proposition—namely, that analytics shape learning outcomes primarily through teachers’ pedagogical translation and feedback practices—in a compulsory, embodied university PE setting where bodily visibility and fairness concerns may be heightened. By adopting an explanatory sequential mixed-methods design, the study aims to contextualise and empirically scrutinise a human-centred account of smart PE in higher education that integrates learning analytics with physical literacy and student experience.

Conceptual model and hypotheses

Guided by an established human-centred learning analytics perspective, we conceptualise AI-enabled learning analytics as a socio-technical resource whose educational value depends on pedagogical translation. In this view, analytics features (e.g., wearable-derived dashboards, automated summaries, and alerts) do not improve learning outcomes by “data exposure” alone; rather, they are expected to relate to students’ outcomes primarily through how teachers interpret and enact data-informed instruction and how students experience feedback as timely, actionable, and fair. Accordingly, we propose that students’ reported analytics use (ALU) is associated with physical literacy (PL) and learning engagement in PE (LE) mainly via two process variables: perceived smart teaching quality (PST) and perceived personalised feedback (PPF). Figure 1 presents the conceptual model.

Fig. 1. Conceptual model linking AI-enabled learning analytics use to physical literacy and learning engagement through perceived smart teaching quality and perceived personalised feedback.

H1. AI-enabled learning analytics use (ALU) is positively associated with perceived smart teaching quality (PST).

H2. AI-enabled learning analytics use (ALU) is positively associated with perceived personalised feedback (PPF).

H3. Perceived smart teaching quality (PST) is positively associated with students’ physical literacy (PL).

H4. Perceived personalised feedback (PPF) is positively associated with students’ physical literacy (PL).

H5. Perceived smart teaching quality (PST) is positively associated with learning engagement in PE (LE).

H6. Perceived personalised feedback (PPF) is positively associated with learning engagement in PE (LE).

H7. The associations between ALU and (a) PL and (b) LE are explained primarily by indirect pathways via PST and PPF (i.e., ALU → PST/PPF → PL/LE).

Given prior findings that learning analytics effects can be modest and context-dependent, we treat any remaining direct associations between ALU and PL/LE as exploratory and interpret them as associations rather than causal effects.
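The hypothesised paths (H1–H7) can be summarised in equation form; the path labels a, b and c′ below are standard mediation notation, not estimates from the paper:

```latex
\begin{aligned}
\mathrm{PST} &= a_1\,\mathrm{ALU} + \varepsilon_1, &
\mathrm{PPF} &= a_2\,\mathrm{ALU} + \varepsilon_2,\\
\mathrm{PL}  &= b_1\,\mathrm{PST} + b_2\,\mathrm{PPF} + c'_1\,\mathrm{ALU} + \varepsilon_3, &
\mathrm{LE}  &= b_3\,\mathrm{PST} + b_4\,\mathrm{PPF} + c'_2\,\mathrm{ALU} + \varepsilon_4.
\end{aligned}
```

Under this notation, H7 corresponds to the total indirect effects \(a_1 b_1 + a_2 b_2\) (for PL) and \(a_1 b_3 + a_2 b_4\) (for LE), with \(c'_1\) and \(c'_2\) treated as exploratory direct paths.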

Method

Research design

This study employed an explanatory sequential mixed-methods design to examine how AI-enabled learning analytics relate to students’ physical literacy and learning engagement in university physical education (PE). A large-scale survey (Phase 1) was followed by semi-structured interviews with students and staff (Phase 2). The qualitative strand was designed to elaborate and contextualise the quantitative findings. Consistent with an established human-centred learning analytics perspective, the mixed-methods design focused on pedagogical translation mechanisms (smart teaching and personalised feedback) rather than assuming that “analytics exposure” directly produces learning outcomes. A conceptual model and hypotheses are presented in Fig. 1.

Context and participants

The study was conducted in four comprehensive Chinese universities (two in eastern China and two in western China). University physical education (PE) is a compulsory course in the first two undergraduate years and may be continued as an elective thereafter. For the survey, intact classes that regularly used the smart PE system were invited to participate. A total of 1,182 valid responses were obtained from students across majors (education, engineering, business, humanities and others). Importantly, respondents were enrolled in a range of credit-bearing PE course types/activities, including Basketball, Long-distance Running, Yoga, Badminton, Football, and Table Tennis. Table 1 reports the distribution of the 1,182 students across these PE course types (n and %). Although course activities differed in modality (e.g., team-based vs. individual; endurance-/fitness-oriented vs. skill-oriented), all sampled courses implemented the same AI-enabled smart PE analytics infrastructure (wearable- and app-enabled data capture, student/instructor dashboards, and automated prompts/alerts).

Table 1.

Distribution of survey respondents across PE course types (N = 1,182).

PE course type/sport Activity modality n %
Basketball Team-based; Skill-oriented 245 20.7
Long-distance Running Individual; Endurance-oriented 228 19.3
Yoga Individual; Fitness-oriented 196 16.6
Badminton Dual-based; Skill-oriented 185 15.7
Football Team-based; Skill-oriented 168 14.2
Table Tennis Dual-based; Skill-oriented 160 13.5
Total - 1182 100.0

Note: Percentages are calculated based on N = 1,182 and may not sum to 100 due to rounding.

AI-enabled learning analytics system in smart PE (clarifying AI vs. analytics). Across the participating universities, the “smart PE” system comprised (1) wearable devices and/or mobile apps that captured physical activity and basic physiological indicators during PE sessions (e.g., steps, activity duration, intensity/heart-rate zones, and course-related performance records), and (2) an online platform that aggregated data and presented analytics to students and instructors via dashboards and periodic summaries. In this study, “analytics” refers to descriptive and diagnostic indicators (e.g., weekly summaries, trends, comparisons, and progress visualisations), whereas “AI-enabled” functions refer to automated classification, alerts, or recommendation features that trigger prompts or suggested actions based on detected patterns (e.g., automated reminders/alerts, flagged trends, and system-generated suggestions). Because some algorithmic details are proprietary, we report the observable inputs/outputs and user-facing functions (dashboards, alerts, summaries, feedback prompts) to ensure replicability at the level of pedagogical use. A system-components summary (data sources, indicators, update frequency, recipients, and pedagogical use cases) is provided in Table 2.

Table 2.

Components of the AI-enabled smart PE learning analytics system.

Layer Data source/modality Example indicators (what is measured) Update frequency Who can view AI vs. analytics Pedagogical use case (how it is used)
Data layer Wearable sensor (wrist-worn devices) Real-time heart rate, cumulative steps, exercise duration, movement posture standardisation Real-time (during exercise) + post-exercise summary Students; PE instructors N/A Monitor exercise load, prevent overtraining, evaluate movement standardisation
Data layer Mobile app/check-in (student self-reported + location-based) Daily sign-in, course attendance, extracurricular exercise task completion Daily/weekly (depending on course design) Students; PE instructors N/A Record attendance, monitor task completion, support participation management
Analytics layer Student dashboard Weekly exercise summary, personal progress trend, goal attainment rate Weekly Students Analytics (descriptive/diagnostic) Support self-monitoring and reflection; formulate personal improvement plans
Analytics layer Instructor dashboard Class exercise load distribution, task completion rate ranking, abnormal data flagging (low exercise volume/high heart rate) Daily update (class data) + weekly summary PE instructors Analytics (descriptive/diagnostic) Adjust teaching task difficulty; identify students needing support; provide targeted guidance
AI-enabled layer Automated alert/flagging Heart rate exceeding-threshold alert, exercise duration non-compliance reminder, movement non-standardisation early warning Real-time trigger (when threshold is breached) Students; PE instructors AI-enabled (automated triggers/alerts) Enable timely intervention (e.g., pausing for rest); prompt minimum exercise compliance
AI-enabled layer Recommendation/suggestion Personalised exercise programme recommendation, movement correction tips, progressive training plan Weekly generation (based on weekly data summary) Students AI-enabled (recommendation) Provide personalised guidance; expand exercise options; support independent training
Other/institutional Assessment rules/grading linkage Proportion of exercise data in final grade, compliance score threshold, criteria for “excellent” performance evaluation Set at the beginning of semester + adjusted in mid-semester (if applicable) PE instructors; teaching administrators Not applicable (institutional policy/assessment rule) Standardise course assessment; link data with credit recognition; motivate participation
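The threshold-triggered behaviour of the “AI-enabled layer” in Table 2 can be illustrated with a minimal rule-based sketch; the indicator names and threshold values here are hypothetical, since the deployed system’s parameters are proprietary:

```python
from dataclasses import dataclass

# Hypothetical thresholds; the real system's values are proprietary.
MAX_HEART_RATE = 180     # bpm: "heart rate exceeding-threshold alert"
MIN_DURATION_MIN = 30    # minutes: "exercise duration non-compliance reminder"

@dataclass
class SessionReading:
    student_id: str
    heart_rate: int      # latest wearable reading, bpm
    duration_min: float  # cumulative exercise duration, minutes

def generate_alerts(reading: SessionReading, session_over: bool) -> list[str]:
    """Threshold-triggered alerts, mirroring the 'AI-enabled layer' in Table 2."""
    alerts = []
    if reading.heart_rate > MAX_HEART_RATE:
        alerts.append(f"{reading.student_id}: heart rate {reading.heart_rate} bpm "
                      "exceeds threshold; prompt rest.")
    if session_over and reading.duration_min < MIN_DURATION_MIN:
        alerts.append(f"{reading.student_id}: duration {reading.duration_min:.0f} min "
                      "below minimum; send compliance reminder.")
    return alerts

print(generate_alerts(SessionReading("S001", 188, 22.0), session_over=True))
```

The sketch makes visible the design point discussed above: the trigger itself is a simple detected-pattern rule, and its pedagogical value depends on how instructors and students act on the prompt.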

For the qualitative phase, a purposive maximum-variation strategy was used to capture diverse experiences. Twelve students and six staff members (PE instructors/administrators) were interviewed; their characteristics are summarised in Appendix D.

Instruments

Student survey (Appendix A)

The student questionnaire (Appendix A) comprised six sections.

AI-enabled learning analytics use (ALU; 6 items) assessed the frequency and depth of engagement with dashboard information and platform functions. Items were designed to capture students’ engagement with user-facing analytics features (e.g., checking progress dashboards, reviewing summaries, attending to system prompts/alerts), rather than assuming access to back-end system logs. Where relevant, items explicitly distinguish “viewing analytics” from “receiving AI-enabled prompts/alerts,” aligning measurement with the AI-versus-analytics clarification above.

Perceived smart teaching quality (PST; 8 items) captured data-informed instructional practices and students’ perceptions of technology-supported teaching. Perceived personalised feedback (PPF; 6 items) focused on the timeliness, specificity and actionability of feedback in PE. Physical literacy (PL; 12 items) measured motivation, confidence, physical competence, knowledge and understanding. Learning engagement in PE (LE; 6 items) captured behavioural, emotional and cognitive engagement. Background variables included gender, year, major, self-rated health and weekly physical activity frequency.

All items used a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). Psychometric evaluation in the main sample indicated strong measurement quality.

Interview protocols (Appendix B)

Two semi-structured interview guides were developed (Appendix B), one for students and one for staff. Interviews explored perceived benefits and risks of smart PE analytics, experiences of feedback and teaching changes, perceived fairness/visibility issues, and contextual constraints (e.g., assessment regimes, platform reliability, data literacy). To strengthen transparency and credibility, the interview guides included prompts for both positive and negative experiences (including counterexamples), and for concrete episodes illustrating how analytics were translated (or not translated) into teaching and feedback. Pilot interviews were conducted with one student and one instructor; minor revisions improved clarity and flow.

Data collection

Quantitative data were collected first. PE instructors distributed the survey link during class. Participation was voluntary and anonymous; informed consent was obtained before access to the questionnaire. After preliminary analysis of the survey data, we implemented stratified maximum-variation sampling for interviews to ensure diversity in gender, majors, and levels of analytics engagement.

To reduce common-method bias procedurally, we emphasised anonymity, clarified that responses would not affect grades, and separated measurement blocks with neutral instructions to minimise evaluation apprehension.

Data analysis

Quantitative analysis

Survey data were screened for missingness, normality and outliers. Item-level missingness was low and handled using appropriate robust procedures. To address RQ2, we estimated a structural equation model (SEM) to test the conceptual model.

Indirect effects and total effects. All indirect effects were estimated using bias-corrected bootstrap procedures (5,000 resamples), and we report bootstrapped 95% confidence intervals for each specific indirect pathway as well as the corresponding total effects. This reporting strategy was used to align inference with the study’s mediation logic and to avoid over-interpreting small direct paths.
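
The percentile-bootstrap logic described above can be sketched in miniature. The following is an illustrative computation of a single indirect effect on synthetic data, not the study's actual estimation pipeline (which used bias-corrected intervals within the full SEM); all names (`slope`, `bootstrap_indirect`) are hypothetical, and the b-path regression is simplified by not partialling out the predictor.

```python
import random
import statistics

def slope(x, y):
    """OLS slope of y on x (single predictor)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def bootstrap_indirect(x, m, y, n_boot=5000, seed=1):
    """Percentile-bootstrap CI for the indirect effect a*b,
    where a = slope of M on X and b = slope of Y on M.
    (Simplification: b does not partial X out of the M -> Y regression.)"""
    rng = random.Random(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        # Resample cases with replacement, keeping (x, m, y) triples together.
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [x[i] for i in idx]
        ms = [m[i] for i in idx]
        ys = [y[i] for i in idx]
        estimates.append(slope(xs, ms) * slope(ms, ys))
    estimates.sort()
    lo = estimates[int(0.025 * n_boot)]
    hi = estimates[int(0.975 * n_boot) - 1]
    return slope(x, m) * slope(m, y), (lo, hi)

# Toy data: M depends on X, Y depends on M (plus noise).
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(300)]
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]
y = [0.6 * mi + rng.gauss(0, 1) for mi in m]

est, (lo, hi) = bootstrap_indirect(x, m, y, n_boot=2000)
print(round(est, 2), lo < est < hi)
```

In the study itself, such intervals were obtained for each specific indirect pathway within the SEM; the sketch only conveys why an interval excluding zero supports an indirect association.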

Clustering/nesting and robustness. Because the survey was collected from intact classes, we assessed clustering by estimating intra-class correlations (ICCs) for the focal constructs at the class level. We then implemented a cluster-robust approach (sandwich/Huber–White standard errors) by specifying class as the clustering unit in SEM (e.g., TYPE = COMPLEX in Mplus or an equivalent robust estimator), and we report robustness results comparing key paths with and without clustering adjustments. If ICCs indicated non-trivial between-class variance, we additionally examined a two-level specification as a sensitivity check.

Common method bias checks. In addition to procedural remedies, we conducted statistical checks for common-method bias, including (a) a single-factor model comparison against the hypothesised multi-factor measurement model and (b) a Harman-style exploratory check. Results are reported to acknowledge that cross-sectional self-report data may inflate associations, and all substantive interpretations are framed as associations rather than causal effects.

Model fit was evaluated using standard indices (e.g., CFI, TLI, RMSEA, SRMR). We used robust methods to account for minor deviations from normality.

Qualitative analysis

Interview transcripts were analysed using reflexive thematic analysis (TA). Initial familiarisation was followed by iterative coding, theme development, refinement and narrative construction.

To address transparency and reflexivity expectations for reflexive TA, we documented (1) how themes were generated (from codes to candidate themes to refined themes), (2) how interpretive decisions were discussed among authors, and (3) how positionality and assumptions were actively examined through memoing and team debriefs. Although inter-coder reliability is not a requirement for reflexive TA, we enhanced dependability by independently coding an initial subset of transcripts, comparing interpretations to surface alternative readings, and maintaining an audit trail of codebook evolution and theme refinement decisions.

Integration

Quantitative and qualitative strands were integrated through (1) building (interview sampling informed by survey patterns), (2) connecting (qualitative explanations linked to key SEM paths), and (3) merging using a joint display that maps SEM results (including indirect effects) to qualitative themes and illustrative quotations. This joint display is presented in Table 3 to support a coherent mixed-methods meta-inference rather than a purely parallel presentation of findings.

Table 3.

Joint display integrating SEM results with qualitative themes.

SEM pathway (quant) Quantitative result (standardised β; 95% CI/p) Indirect/total effect (if applicable) Qualitative theme(s) explaining mechanism Illustrative quote (student/staff) Mixed-methods meta-inference (one-sentence integration)
ALU → PST β = 0.47; 95% CI [0.41, 0.53]/p <.001 Teachers as interpreters and gatekeepers of data “Students check the numbers more when we discuss them together and set goals.” (T3) Higher reported analytics use was positively associated with perceived smart teaching quality, consistent with accounts that instructors translate dashboard information into in-class practices (e.g., pacing adjustments and goal-setting).
ALU → PPF β = 0.39; 95% CI [0.31, 0.46]/p <.001 Data as a mirror and coach “The teacher checks our data and then tells me exactly which part of my running posture to adjust.” (S11) Higher reported analytics use was positively associated with perceived personalised feedback, aligning with participants’ descriptions of using analytics outputs to obtain more targeted, actionable guidance.
PST → PL β = 0.28; 95% CI [0.20, 0.36]/p <.001 Data as a mirror and coach “I feel confident to try a bit more each week because the teacher uses data to point out my progress.” (S11) Perceived smart teaching quality was positively associated with physical literacy, echoing students’ reports that structured, data-informed instruction supports confidence and perceived competence.
PPF → PL β = 0.36; 95% CI [0.28, 0.43]/p <.001 Data as a mirror and coach “The specific feedback from the system lets me know how to adjust my exercise intensity to fit my condition.” (S11) Perceived personalised feedback showed a comparatively stronger positive association with physical literacy, consistent with accounts that specific feedback supports knowledge/understanding and competence development.
PST → LE β = 0.24; 95% CI [0.16, 0.32]/p <.001 Teachers as interpreters and gatekeepers of data “The class is more interesting when the teacher uses the dashboard to explain our exercise performance and adjust tasks.” (S3) Perceived smart teaching quality was positively associated with learning engagement, aligning with accounts that interactive, data-informed activities can increase interest and participation in PE.
PPF → LE β = 0.31; 95% CI [0.23, 0.39]/p <.001 Helpful and motivating feedback “The timely feedback makes me want to spend more time practising to achieve the goal.” (S11) Perceived personalised feedback showed a comparatively stronger positive association with learning engagement, consistent with accounts that timely feedback can strengthen motivation and encourage additional practice.
ALU → PL (direct) β = 0.06; 95% CI [− 0.01, 0.12]/p =.080 Indirect via PST: β = 0.132; 95% bootstrap CI [0.082, 0.191] Indirect via PPF: β = 0.140; 95% bootstrap CI [0.087, 0.198] Total effect: β = 0.332; 95% bootstrap CI [0.210, 0.450] Strain of constant measurement “The app just tells me ‘target not reached’… I do just enough to pass.” (S10) The direct association between analytics use and physical literacy was small and non-significant, whereas indirect associations via smart teaching and personalised feedback were larger, suggesting benefits depend on pedagogical translation and feedback rather than data exposure alone.
ALU → LE (direct) β = 0.05; 95% CI [− 0.01, 0.11]/p =.110 Indirect via PST: β = 0.113; 95% bootstrap CI [0.066, 0.170] Indirect via PPF: β = 0.121; 95% bootstrap CI [0.071, 0.179] Total effect: β = 0.284; 95% bootstrap CI [0.180, 0.390] Negotiating fairness, accuracy and visibility “If the bracelet is wrong, the score is also wrong… it feels unfair and makes me not want to try.” (S10) The direct association between analytics use and learning engagement was small and non-significant, while indirect associations via smart teaching and personalised feedback were larger; qualitative accounts indicate that perceived fairness and device accuracy can condition engagement.
Robustness Sensitivity check using cluster-robust standard errors (class as cluster) produced the same pattern of inferences as the main model. Key paths were stable under cluster-robust estimation (see Tables 2 and 5). Institutional and technological conditions “When the platform crashes often, we don’t trust the data and stop using it.” (S4) Sensitivity checks using cluster-robust estimation yielded similar substantive conclusions, suggesting findings are not driven by within-class dependence; reliable system functioning and transparent assessment rules appear to be prerequisites for meaningful analytics engagement.

Ethical considerations and positionality

Ethical approval for this study was obtained from the Changzhou University Huaide College Ethics Review Committee (Huaide College, Changzhou University, Jiangsu, China; Ethical Approval Certificate dated 26 July 2025). This committee reviewed and approved the survey and interview procedures for use in all four participating universities. All methods were performed in accordance with the relevant guidelines and regulations. Participation was entirely voluntary; all student and staff participants received an information sheet describing the aims of the study, the procedures, potential risks and benefits, the intended use of data and the measures taken to protect confidentiality, and provided electronic or written informed consent prior to data collection. Pseudonyms are used throughout the manuscript and all identifying details have been removed. Digital data were stored on encrypted, password-protected drives with access restricted to the research team.

Findings

Survey respondents and descriptive statistics

The final survey sample comprised 1,182 undergraduate students enrolled in AI-enabled smart physical education (PE) courses at four comprehensive Chinese universities. Students were distributed across all four years of study and a range of majors (e.g., engineering, education, business, humanities). Slightly more than half identified as female, with the remainder identifying as male or other/prefer not to say. Mean self-rated health was moderately high (M = 3.78, SD = 0.79 on a 5-point scale).

Table 4 reports the descriptive statistics for the core variables, and Table 5 presents bivariate correlations. Because the survey was collected from intact classes, we also conducted a sensitivity check using cluster-robust estimation; the substantive pattern of results was unchanged (see robustness note in Table 3).

Table 4.

Descriptive statistics and internal consistency of key constructs (N = 1,182).

Construct No. of items Cronbach’s α M SD Min Max
AI-enabled learning analytics use (ALU) 6 0.88 3.21 0.84 1.00 5.00
Perceived smart teaching quality (PST) 8 0.92 3.56 0.75 1.00 5.00
Perceived personalised feedback (PPF) 6 0.90 3.43 0.79 1.00 5.00
Physical literacy (PL) 12 0.94 3.68 0.62 1.17 5.00
Learning engagement in PE (LE) 6 0.91 3.71 0.66 1.00 5.00

Note. Scale scores range from 1 (strongly disagree) to 5 (strongly agree). Items are listed in Appendix A.
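
For readers unfamiliar with the reliability index reported in Table 4, Cronbach’s α can be computed from the item variances and the variance of the total score. The sketch below uses toy data and hypothetical names; it does not reproduce the study’s data or software.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    items: list of lists, one inner list per item (same respondents, same order)."""
    k = len(items)
    item_vars = sum(statistics.variance(it) for it in items)
    totals = [sum(vals) for vals in zip(*items)]
    return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

# Toy example: three items answered by five respondents on a 1-5 scale.
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 2))
```

Values near the 0.88–0.94 range reported in Table 4 indicate that the items within each construct covary strongly relative to their individual variances.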

Table 5.

Correlations among AI-enabled learning analytics use, smart teaching, personalised feedback, physical literacy and learning engagement (N = 1,182).

Variable 1 2 3 4 5 M SD
1. ALU 3.21 0.84
2. PST 0.46*** 3.56 0.75
3. PPF 0.39*** 0.63*** 3.43 0.79
4. PL 0.28*** 0.56*** 0.59*** 3.68 0.62
5. LE 0.25*** 0.53*** 0.58*** 0.61*** 3.71 0.66

Note. ALU = AI-enabled learning analytics use; PST = perceived smart teaching quality; PPF = perceived personalised feedback; PL = physical literacy; LE = learning engagement in PE. Correlations are Pearson’s r. ***p <.001.

Perceived personalised feedback showed greater variability across students, whereas perceived smart teaching quality was more consistently rated. This dispersion in PPF suggests that students’ feedback experiences may be more unevenly distributed than general perceptions of “smart teaching,” and therefore particularly consequential for downstream outcomes.

RQ2: relationships between analytics use, smart teaching, personalised feedback, and outcomes

To address RQ2, we estimated a structural equation model (SEM) linking reported analytics use (ALU) to physical literacy (PL) and learning engagement (LE) through perceived smart teaching quality (PST) and perceived personalised feedback (PPF). The model showed acceptable fit (χ² (df = 225) = 681.40, p <.001; CFI = 0.94; TLI = 0.93; RMSEA = 0.048 (90% CI [0.043, 0.053]); SRMR = 0.045).

As shown in Table 6, ALU was positively associated with PST (β = 0.47, p <.001) and PPF (β = 0.39, p <.001). In turn, PST and PPF were positively associated with both PL (β = 0.28 and β = 0.36, respectively, ps < 0.001) and LE (β = 0.24 and β = 0.31, respectively, ps < 0.001). The direct associations between ALU and PL (β = 0.06, p =.080) and between ALU and LE (β = 0.05, p =.110) were small and not statistically significant. The model explained 22% of the variance in PST, 15% in PPF, 57% in PL, and 49% in LE. Notably, the relatively modest R² values for PST and PPF indicate that additional unmeasured factors likely shape students’ perceptions of teaching and feedback, beyond analytics use alone.

Table 6.

Structural equation model results: standardised path coefficients, model fit and explained variance (N = 1,182).

Panel/Parameter Estimate SE p 95% CI (lower, upper)
Panel A. Structural paths (standardised β)
ALU → PST 0.47 0.03 < 0.001 [0.41, 0.53]
ALU → PPF 0.39 0.04 < 0.001 [0.31, 0.46]
PST → PL 0.28 0.04 < 0.001 [0.20, 0.36]
PPF → PL 0.36 0.04 < 0.001 [0.28, 0.43]
PST → LE 0.24 0.04 < 0.001 [0.16, 0.32]
PPF → LE 0.31 0.04 < 0.001 [0.23, 0.39]
ALU → PL 0.06 0.03 0.080 [−0.01, 0.12]
ALU → LE 0.05 0.03 0.110 [−0.01, 0.11]
Panel B. Model fit indices
Index Value
χ² (df = 225) 681.40 (p <.001)
CFI 0.94
TLI 0.93
RMSEA (90% CI) 0.048 [0.043, 0.053]
SRMR 0.045
Panel C. Explained variance (R²)
Outcome
PST 0.22
PPF 0.15
PL 0.57
LE 0.49

Bootstrapped mediation tests further indicated that ALU was related to PL and LE primarily through indirect pathways via PST and PPF (Table 3). For PL, the indirect association via PST was β = 0.132 (95% bootstrap CI [0.082, 0.191]), and via PPF was β = 0.140 (95% bootstrap CI [0.087, 0.198]); the total association was β = 0.332 (95% bootstrap CI [0.210, 0.450]). For LE, the indirect association via PST was β = 0.113 (95% bootstrap CI [0.066, 0.170]), and via PPF was β = 0.121 (95% bootstrap CI [0.071, 0.179]); the total association was β = 0.284 (95% bootstrap CI [0.180, 0.390]). Across both outcomes, the PPF paths were comparatively stronger than the PST paths, consistent with the interpretation that timely, actionable feedback is a particularly proximal process variable in this context. Given the nested (class-based) sampling, we re-estimated the model with cluster-robust standard errors; key coefficients differed only marginally and substantive inferences were unchanged (see Table 3).
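
As a quick consistency check, the reported total effects decompose into the direct path plus the two specific indirect paths (all figures taken from Tables 3 and 6):

```python
# Reported standardised estimates for the ALU -> PL pathway.
direct_pl = 0.06
indirect_pl = 0.132 + 0.140   # via PST and via PPF
total_pl = direct_pl + indirect_pl

# Reported standardised estimates for the ALU -> LE pathway.
direct_le = 0.05
indirect_le = 0.113 + 0.121
total_le = direct_le + indirect_le

print(round(total_pl, 3), round(total_le, 3))  # matches the reported 0.332 and 0.284
```

The decomposition confirms that roughly four fifths of each total association runs through the two mediators.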

Taken together, these findings suggest that higher reported analytics use is associated with physical literacy and learning engagement mainly through pedagogical translation—how teachers use data to organise instruction (PST) and how students experience feedback as personalised and actionable (PPF)—rather than through “data exposure” alone.

RQ3: students’ and instructors’ experiences of AI-enabled analytics and personalised feedback

Interview data elaborated how analytics became educationally meaningful—or counterproductive—through teaching and feedback practices.

Data as a mirror and coach. Many students described analytics as a “mirror” that made their effort and progress visible and as a “coach” that suggested adjustments. High-ALU/high-PL students reported routinely checking dashboards after classes to review heart rate, step counts and task completion, and using weekly reports to adjust pace or duration. One student noted that “the teacher checks our data and then tells me exactly which part of my running posture to adjust,” which made her “more confident to try a bit more each week” (S11, female, Year 3, high PPF–high PL). These accounts provide a process-level explanation for the positive ALU→PPF and PPF→outcome paths in the SEM, highlighting how “personalised feedback” is enacted through data-informed dialogue and specific guidance.

The strain of constant measurement. At the same time, several participants described feeling pressured by continuous monitoring and quantified targets. Some students reported that they “run for the numbers” or “do just enough to pass,” especially when indicators were tightly coupled to grades or publicly displayed. For students with lower PL, notifications such as “target not reached” without specific guidance were experienced as discouraging rather than developmental: one student commented that such messages “do not really help [him] know how to exercise better” (S10, male, Year 2, low PPF–low PL). This theme helps explain why the direct ALU→outcome paths were weak: analytics can increase visibility and pressure without necessarily improving learning unless translated into supportive teaching and feedback.

Negotiating fairness, accuracy and visibility. Participants frequently questioned whether the system treated different students fairly. Concerns centred on device reliability, differences in body type or baseline fitness, and the visibility of performance indicators. Some students worried that “if the bracelet is wrong, the score is also wrong,” and others felt embarrassed when low scores or rankings were visible to peers, describing this as “exposing your weak body in front of the whole class.” Staff also highlighted challenges in explaining grading rules and managing students’ expectations when technology malfunctioned. These accounts indicate that perceived fairness and technical credibility can condition whether students treat analytics as actionable information or as surveillance/assessment pressure.

Teachers as interpreters and gatekeepers of data. Across accounts, teachers emerged as key intermediaries between analytics and learning. When instructors took time to explain indicators, discuss their limitations and co-construct goals, students reported greater trust in data and stronger motivation: “We check the numbers more when we discuss them together and set goals” (T3, male, PE instructor). Conversely, when teachers relied primarily on automated feedback or used data only for summative grading, students tended to treat analytics as a requirement rather than a resource. This qualitative mechanism aligns directly with the SEM’s mediation structure (ALU→PST/PPF→PL/LE), reinforcing “teacher translation” as the operative pathway. To support transparency and integration, Table 7 summarises the main themes and illustrative quotes.

Table 7.

Themes on experiences of AI-enabled analytics and personalised feedback.

Theme Brief description Illustrative quote (student unless noted)
Data as a mirror and coach Analytics visualise effort and guide self-adjustment “The teacher checks our data and then tells me exactly which part of my running posture to adjust.” (S11)
Strain of constant measurement Continuous monitoring generates pressure and instrumentalism “The app just tells me ‘target not reached’… I do just enough to pass.” (S10)
Negotiating fairness, accuracy, visibility Students question reliability, fairness and public display “If the bracelet is wrong, the score is also wrong… it feels unfair.”
Teachers as interpreters and gatekeepers Teachers translate data into meaningful, negotiated goals “Students check the numbers more when we discuss them together and set goals.” (T3)

RQ4: conditions shaping the educational value of AI-enabled analytics

Integrating quantitative patterns with qualitative themes pointed to several conditions under which AI-enabled analytics appeared more likely to support physical literacy and engagement in university PE.

System reliability and usability. Basic technical reliability emerged as a prerequisite for educational value. Where devices and platforms functioned consistently, students were more inclined to treat indicators as credible and to use them formatively. Frequent technical failures, by contrast, undermined trust, prompted complaints about fairness and led some students and teachers to circumvent or ignore analytics.

Framing and delivery of feedback. The strong statistical links between PPF and both PL and LE were mirrored in participants’ narratives. Feedback that was timely, specific and privately delivered—whether via the system or teacher—was more often experienced as supportive and motivating. Publicly displayed results or generic messages focused on thresholds (“target not reached”) tended to generate anxiety, social comparison and surface-level compliance.

Alignment with assessment regimes. Students’ willingness to use analytics for self-regulation depended on how indicators were embedded in grading. When assessment rules were transparent and allowed for flexible ways of meeting activity requirements, students were more likely to link data to health and long-term activity. When rules were rigid or opaque, many focused on “meeting the numbers” rather than improving technique or enjoyment, despite similar access to feedback.

Institutional and professional support. Instructors’ practices varied according to the training and support they received. Some, often with access to professional development or collaboration with IT staff, used analytics to differentiate tasks, monitor class-level trends and facilitate reflection. Others treated the system primarily as an administrative tool for attendance or grading. Participants highlighted the need for sustained staff development that addresses not only technical functions but also pedagogical and ethical dimensions.

Student characteristics and data literacies. Finally, the impact of analytics was mediated by students’ existing levels of physical literacy and data literacy. High-PL students tended to combine analytics with prior knowledge about physical activity, using indicators to refine exercise habits. Low-PL students more often expressed uncertainty about interpreting metrics and concern about being judged on numbers they did not fully understand. These patterns suggest that building critical data literacies in PE—including understanding how sensors work, what indicators mean and how to interpret them in relation to one’s own body—is a practical condition for realising potential benefits, and a plausible contributor to the modest explained variance in PST/PPF.

Overall, the mixed-methods findings indicate that analytics in university PE are best understood as socio-technical resources whose educational value depends on reliable infrastructure, human-centred feedback practices, transparent assessment and sustained support for both staff and students.

Discussion

This mixed-methods study examined how AI-enabled learning analytics (LA) in university physical education (PE) relate to smart teaching, personalised feedback and students’ physical literacy and engagement. Quantitatively, analytics use was only weakly related to physical literacy and learning engagement when considered directly, but showed stronger indirect links through students’ perceptions of smart teaching and personalised feedback. Qualitatively, students and instructors characterised analytics as both a “mirror and coach” and a source of pressure, fairness concerns and heightened visibility, with system reliability, assessment regimes and institutional support shaping these experiences. Rather than advancing a new theoretical model, the study applies and empirically tests established human-centred learning analytics propositions—especially “pedagogical translation” via teaching and feedback—in a compulsory, embodied PE setting that remains comparatively under-represented in LA evidence.

Pedagogy and physical literacy in data-intensive PE

A first implication is that AI-enabled LA matter less as a technical layer and more as a pedagogical resource. The structural equation model showed that perceived smart teaching and personalised feedback explained a substantial proportion of variance in physical literacy and learning engagement, whereas direct paths from analytics use to these outcomes were small and non-significant. This pattern aligns with human-centred learning analytics research, which argues that dashboards and AI systems gain educational relevance primarily when they are deeply embedded in pedagogical and relational practices, rather than deployed as stand-alone optimisation tools4 (Buckingham Shum, 2024).

The non-significant direct paths from AI-enabled learning analytics use (ALU) to both physical literacy (PL) and learning engagement (LE) are theoretically informative. They suggest that analytics “use” functions more as exposure to performance indicators than as an educational intervention per se. In compulsory, embodied PE contexts, simply checking dashboards or responding to system prompts may not be sufficient to shift the multidimensional dispositions captured by PL (e.g., motivation, confidence, perceived competence, and understanding) or to deepen LE. Instead, analytics become consequential when they are translated into actionable learning opportunities through pedagogy and feedback. This interpretation is consistent with human-centred learning analytics arguments that the educational value of dashboards and AI systems depends on sense-making and relational work—particularly teachers’ ability to contextualise indicators, negotiate goals, and provide timely, specific guidance—rather than on data exposure alone4 (Buckingham Shum, 2024; Jivet et al.5,33).

Our qualitative findings help explain why direct associations may be weak even when indirect pathways are meaningful. First, analytics can increase visibility and accountability pressure without improving learning if messages remain generic or threshold-focused (e.g., “target not reached”), encouraging instrumental compliance (“do just enough to pass”) rather than competence development or sustained interest. Second, the credibility of analytics is not guaranteed: concerns about device accuracy and perceived fairness can reduce trust in indicators, prompting disengagement or resistance even among frequent users. Third, students vary in data literacy and baseline PL; those with lower PL more often reported uncertainty about how to interpret metrics and translate them into self-regulation, which can dilute any direct ALU→outcome relationship. Together, these mechanisms clarify why the “weak direct/stronger indirect” pattern observed in the SEM is plausible in smart-PE settings: analytics are most likely to support PL and LE when embedded in smart teaching practices and personalised feedback that reduce ambiguity, mitigate pressure and fairness concerns, and enable students to connect numbers to meaningful bodily understanding and improvement.

The strong links between perceived teaching quality, personalised feedback and physical literacy extend physical literacy theory into data-rich environments. Existing work has shown that physical literacy—conceived as motivation, confidence, competence and understanding across the life course—is associated with healthier movement behaviours and better mental health among university students8,32. Our findings suggest that in AI-enabled PE, these holistic dispositions are supported when teachers translate analytics into specific, timely and credible feedback that connects data (e.g. heart rate zones, weekly trends) with concrete technical or behavioural adjustments. Students in high–personalised-feedback clusters described becoming more confident and more willing to persist when feedback helped them interpret their own indicators and set realistic goals. By contrast, students who received generic or purely grade-focused feedback tended to “do just enough to meet the numbers” without linking metrics to health or long-term activity. This “weak direct/stronger indirect” pattern is consistent with prior LA dashboard and intervention research showing that effects are often modest and context-dependent, and may dissipate when analytics are not accompanied by structured facilitation, sense-making, and dialogic feedback that supports learner agency (e.g., Tsai et al.5,33).

Importantly, the relatively modest explained variance for PST (22%) and PPF (15%) suggests that analytics use is only one input into students’ perceptions of teaching and feedback. Our qualitative findings point to plausible additional determinants—including teachers’ data pedagogical competence, perceived legitimacy of assessment rules, device reliability, and peer-facing visibility norms—that likely shape whether analytics are experienced as meaningful instructional support or as administrative surveillance. Future models would benefit from explicitly incorporating such contextual and relational factors (e.g., trust, assessment transparency, teacher professional learning, and students’ data literacies) to better account for variability in PST and PPF.

Fairness, visibility and data cultures in embodied learning

A second contribution concerns fairness and visibility in embodied, data-intensive learning contexts. Students and instructors in this study raised concerns about device reliability, the public display of scores and rankings, and the fairness of indicators for students with different body types or baseline fitness levels. Some described feeling “exposed” when low scores were visible to peers, or frustrated when device errors seemed to translate directly into grades. These accounts echo broader critiques of datafication in higher education, which argue that learning analytics can reduce complex lives to auditable traces and intensify surveillance cultures21,34.

In contrast to most LA research, which focuses on clickstreams or assessment logs, smart PE systems collect highly granular bodily and movement data, often in real time. This brings multimodal and physical learning analytics into what Thompson and Prinsloo35 describe as the “data gaze”, in which bodies are rendered as data objects. For some students in our study, this visibility was motivating: data served as evidence of effort and progress, particularly when teachers used it to recognise incremental improvements rather than absolute performance. For others—especially those with lower physical literacy—the same visibility generated anxiety and avoidance, leading them to view analytics as a mechanism of judgement rather than support.

Recent work on fairness, accountability, transparency and ethics (FATE) in learning analytics and multimodal learning analytics cautions that data-rich systems can amplify inequities if issues of representation, interpretability and student voice are not explicitly addressed6,23. Our findings contribute additional empirical evidence from compulsory university PE by showing how fairness concerns are entangled with assessment regimes (e.g., rigid thresholds), display practices (public vs. private data), and limited learner control over how bodily data are interpreted and used. In this sense, AI-enabled analytics in PE do not simply measure physical literacy; they participate in shaping it, by signalling which bodies and behaviours are valued and by structuring the emotional climate of data-rich classes.

Towards human-centred smart PE: implications for practice

Bringing together the quantitative pathways and qualitative themes, our findings suggest that the educational value of AI-enabled analytics in university PE is realised primarily through pedagogical translation and feedback practices rather than through data exposure alone. This has concrete implications for university PE management departments, which typically make decisions about platform procurement, assessment rules, staff development and quality assurance. We outline actionable recommendations that align with the mechanisms and conditions identified in this study.

First, institutionalise a “minimum viable pedagogy” for smart PE rather than simply accumulating technical features. Because perceived smart teaching and personalised feedback were the proximal processes linking analytics use to physical literacy and engagement, management should specify a small set of required teaching routines that translate data into learning. For example, departments can mandate (a) a brief in-class sense-making moment (e.g., 3–5 min) where teachers explain key indicators and their limitations, (b) weekly goal co-construction and progress review using dashboards (individual or small-group), and (c) task adjustment based on class-level distributions (e.g., differentiated intensity targets or technique drills). To support implementation, provide standardised templates (goal-setting sheets, feedback scripts, and reflection prompts) and allocate recognised workload/time for data-informed planning and feedback, ensuring the routines are feasible at scale.

Second, adopt feedback delivery standards that are privacy-by-default, specific, and dialogic. The strong role of personalised feedback implies that management should treat feedback quality as a policy and training issue, not an optional teacher preference. Concretely, departments can (a) require private feedback channels as the default (e.g., in-app messages or one-to-one check-ins) and prohibit public display of individual deficits; (b) set minimum criteria for feedback specificity (e.g., “what to adjust, why it matters, and what to do next”), replacing generic threshold messages with actionable guidance; and (c) embed opportunities for student response and negotiation (e.g., short feedback conferences, question windows, or structured peer discussion) to strengthen agency and reduce compliance-only participation. Where leaderboards or public screens are used, they should be opt-in, de-identified, and framed around personal progress (within-student improvement) rather than rank-order comparison.

Third, decouple analytics from high-stakes grading where possible, and where summative use is unavoidable, formalise transparent and flexible assessment rules. Our evidence indicates that tight coupling of analytics to grades can incentivise “gaming” and increase pressure without deepening learning. Management departments should therefore (a) prioritise formative uses of analytics in routine teaching, and cap the proportion of grades determined by device-derived indicators; (b) publish clear rubrics that explain how indicators are interpreted, including acceptable ranges and how outliers are handled; and (c) allow multiple ways of demonstrating progress (e.g., improvement trajectories, technique checklists, reflective logs, or teacher observation) so that assessment does not rely on a single metric. A formal appeal/override procedure should be established for device malfunction or disputed readings, with documented timelines and responsible roles, to protect perceived fairness and trust.

Fourth, operationalise fairness and reliability through ongoing technical audits and governance. Because perceived fairness and technical credibility conditioned whether analytics were experienced as supportive or as surveillance/pressure, departments should treat reliability and equity as continuous quality assurance tasks. Recommended actions include (a) pre-semester calibration checks and mid-semester reliability audits (including spot checks across different activity types and student profiles), (b) maintenance and replacement protocols with recorded error rates and downtime logs, and (c) accommodations for students whose data are systematically unreliable or who have legitimate constraints. In parallel, establish governance rules for bodily and health-related data: define access rights (who can view what), retention periods, and audit trails; ensure students are informed about what is collected and how it is used; and review vendor–institution arrangements to clarify accountability when system errors affect assessment. Finally, invest in capacity building: provide mandatory professional development on data-informed PE pedagogy (goal negotiation, dialogic feedback, ethical handling of bodily data) and a student orientation module that explains indicator meanings and limitations, reducing misinterpretation and resistance.

Taken together, these measures move beyond generic calls to “use AI in PE” and instead provide an implementation pathway for management departments: establish core pedagogical routines, standardise privacy-protective feedback practices, design fair and flexible assessment rules, and sustain technical and governance infrastructures that support trust, dignity and learning in measurement-intensive university PE.

Limitations and future directions

Several limitations should be noted. First, the study drew on self-report survey data from four Chinese universities and a purposive qualitative sample. Although the sample was diverse in institution type, major and levels of analytics use, it did not include all relevant groups, such as students with disabilities or elite athletes. Future research could adopt participatory and co-design approaches with a wider range of students and staff to reshape analytics features, feedback modes and assessment rules in more inclusive ways.

Second, our design was cross-sectional. While the SEM model is theoretically informed, causal inferences remain tentative. Longitudinal or intervention studies that track changes in analytics use, teaching practices, physical literacy and engagement over multiple semesters would provide stronger evidence about how human-centred smart PE can support sustained behaviour change. Given the reliance on self-reports collected in a single survey wave, common-method bias cannot be fully excluded even with procedural and statistical checks; accordingly, the SEM estimates should be interpreted as associations rather than causal effects. Future work should triangulate with objective indicators (e.g., system logs, observed teaching practices, and/or fitness/performance assessments) to reduce shared-method inflation and strengthen causal inference.
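As an illustration of the bootstrap mediation logic underlying the reported indirect effects, the following minimal sketch estimates an indirect effect (a × b) and a bootstrap confidence interval on synthetic data. The variable names, effect sizes and sample size are illustrative assumptions only, not values from the study’s dataset; the study used bias-corrected intervals with 5,000 resamples, whereas this sketch uses simple percentile intervals with 2,000 resamples for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # illustrative sample size, not the study's n

# Synthetic data: x = analytics use, m = personalised feedback (mediator),
# y = physical literacy. Effect sizes are hypothetical.
x = rng.normal(size=n)
m = 0.40 * x + rng.normal(size=n)              # a-path
y = 0.35 * m + 0.05 * x + rng.normal(size=n)   # b-path plus small direct path

def indirect_effect(idx):
    """Estimate a*b via two OLS regressions on the resampled rows idx."""
    xi, mi, yi = x[idx], m[idx], y[idx]
    a = np.linalg.lstsq(np.column_stack([np.ones_like(xi), xi]),
                        mi, rcond=None)[0][1]           # slope of m on x
    b = np.linalg.lstsq(np.column_stack([np.ones_like(xi), xi, mi]),
                        yi, rcond=None)[0][2]           # slope of y on m | x
    return a * b

point = indirect_effect(np.arange(n))                    # full-sample estimate
boot = np.array([indirect_effect(rng.integers(0, n, n))  # resample with replacement
                 for _ in range(2000)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])          # percentile CI
print(f"indirect effect = {point:.3f}, 95% CI [{ci_lo:.3f}, {ci_hi:.3f}]")
```

Because the confidence interval excludes zero while the direct path coefficient is deliberately small, the sketch reproduces the qualitative pattern reported above: the association runs mainly through the mediator.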

Third, we focused on students’ and teachers’ experiences of an existing commercial platform rather than on the algorithms and governance structures underpinning it. Critical work on datafication in higher education suggests that platform design, vendor–institution contracts and policy frameworks profoundly shape what is measured and how it can be used21,34. Future studies could combine technical audits, policy analysis and organisational ethnography to explore how fairness, transparency and legitimacy are negotiated across technical, institutional and classroom levels36,48.

Finally, the study was situated in a specific national and cultural context where PE is compulsory and smart campus initiatives are strongly promoted. Comparative research across systems with different PE traditions, AI policies and data protection regimes would help clarify which aspects of our findings are context-specific and which may reflect broader dynamics of human-centred LA in embodied learning.

Despite these limitations, the study contributes to emerging conversations at the intersection of AI-enabled learning analytics, university physical education and physical literacy. It suggests that AI systems do not automatically enhance students’ physical literacy or engagement, but can do so when they are embedded in pedagogies that foreground personalised, dialogic feedback, fairness and critical engagement with data.

Conclusion

This mixed-methods study examined how AI-enabled learning analytics in university physical education (PE) relate to smart teaching, personalised feedback and students’ physical literacy and engagement. Drawing on survey data from 1,182 undergraduates and interviews with 12 students and six staff, we found that analytics use alone had limited direct associations with physical literacy and learning engagement. Its influence was largely indirect, operating through students’ perceptions of data-informed teaching and personalised feedback.

Qualitative findings showed that students experienced analytics as both a “mirror and coach” and a source of pressure, fairness concerns and heightened bodily visibility. System reliability, the framing and privacy of feedback, assessment rules and institutional support for teachers emerged as key conditions shaping whether analytics were interpreted as supportive or controlling. These results suggest that AI-enabled systems in PE do not automatically foster physical literacy; their educational value depends on how they are embedded within pedagogical relationships and assessment cultures.

By applying and testing an established human-centred learning analytics mechanism in a compulsory university PE setting, this study adds mixed-methods evidence to an already developing literature on embodied and multimodal analytics, while highlighting context-specific equity and visibility tensions associated with bodily data. Practically, the findings underscore the importance of prioritising pedagogical orchestration over technical feature accumulation, designing feedback that is private, specific and dialogic, aligning analytics transparently with assessment, and building critical data literacies among both staff and students. Given the cross-sectional, self-reported nature of the survey, conclusions should be interpreted as associations; future longitudinal and log-informed studies are needed to establish whether and how human-centred smart PE designs can support sustained gains in physical literacy and engagement.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary Material 1 (27.6KB, docx)

Author contributions

Y.C. conceived the study. Y.C. and D.X. designed the research and coordinated the implementation of the AI-enabled learning analytics system in the participating universities. Y.Z. and Y.S. organised and carried out data collection and preliminary data curation. C.W. and Y.R. performed the statistical analyses and qualitative coding, and C.W. and Y.R. wrote the main manuscript text; C.W. also prepared all figures and tables. All authors contributed to the interpretation of the results, critically reviewed the manuscript, and approved the final version.

Funding

This work was supported by the 2025 General Project of Philosophy and Social Science Research in Jiangsu Universities, entitled “Construction of a Smart Physical Education Teaching System in Universities under the Background of ‘Internet+’” (Grant No. 2025SJYB1735).

Data availability

The datasets used and analysed in the current study are available from the first author on reasonable request.

Declarations

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Holmes, W. & Miao, F. Guidance for Generative AI in Education and Research (UNESCO Publishing, 2023). 10.54675/EWZM9535
2. OECD. OECD Digital Education Outlook 2023: Towards an Effective Digital Education Ecosystem (OECD Publishing, 2023). 10.1787/c74f03de-en
3. Wang, S. et al. Artificial intelligence in education: A systematic literature review. Expert Syst. Appl. 252, 124167. 10.1016/j.eswa.2024.124167 (2024).
4. Alfredo, R. et al. Human-centred learning analytics and AI in education: A systematic literature review. Comput. Educ. Artif. Intell. 6, 100215 (2024). https://arxiv.org/abs/2312.12751
5. Tsai, Y. S., Perrotta, C. & Gašević, D. Empowering learners with personalised learning approaches? Agency, equity and transparency in the context of learning analytics. Assess. Eval. High. Educ. 45(4), 554–567. 10.1080/02602938.2019.1676396 (2020).
6. Raffaghelli, J. E., Manca, S., Stewart, B., Prinsloo, P. & Sangrà, A. Supporting the development of critical data literacies in higher education: Building blocks for fair data cultures in society. Int. J. Educ. Technol. High. Educ. 17(1), 58. 10.1186/s41239-020-00235-w (2020).
7. Szcyrek, S., Stewart, B. & Miklas, E. Educators’ understandings of digital classroom tools and datafication: perceptions from higher education faculty. Res. Learn. Technol. 32, 3040. 10.25304/rlt.v32.3040 (2024).
8. Cairney, J., Dudley, D., Kwan, M., Bulten, R. & Kriellaars, D. Physical literacy, physical activity and health: toward an evidence-informed conceptual model. Sports Med. 49(3), 371–383. 10.1007/s40279-019-01063-3 (2019).
9. Whitehead, M. Physical Literacy: Throughout the Lifecourse (Routledge, 2010).
10. Gao, T. Y. et al. The role of physical literacy and mindfulness on health-related quality of life among college students during the COVID-19 pandemic. Sci. Rep. 14(1), 237. 10.1038/s41598-023-50958-9 (2024).
11. Ma, R. et al. Relationship among physical literacy, mental health, and resilience in college students. Front. Psychiatry 12, 767804. 10.3389/fpsyt.2021.767804 (2021).
12. Luo, L. et al. Validity evaluation of the college student physical literacy questionnaire. Front. Public Health 10, 856659. 10.3389/fpubh.2022.856659 (2022).
13. Wang, H. et al. Constructing and evaluating the physical literacy index for college students in China: a new insight and perspective. Front. Public Health 13, 1612356. 10.3389/fpubh.2025.1612356 (2025).
14. Martinez-Maldonado, R., Echeverria, V., Santos, O. C., Santos, A. D. P. & Yacef, K. Physical learning analytics: A multimodal perspective. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge, 375–379 (ACM, 2018). 10.1145/3170358.3170379
15. Mu, S., Cui, M. & Huang, X. Multimodal data fusion in learning analytics: A systematic review. Sensors 20(23), 6856. 10.3390/s20236856 (2020).
16. Du Plooy, E., Casteleijn, D. & Franzsen, D. Personalized adaptive learning in higher education: A scoping review of key characteristics and impact on academic performance and engagement. Heliyon 10(21), e39630. 10.1016/j.heliyon.2024.e39630 (2024).
17. Venkatesh, V., Morris, M. G., Davis, G. B. & Davis, F. D. User acceptance of information technology: toward a unified view. MIS Q. 27(3), 425–478. 10.2307/30036540 (2003).
18. Venkatesh, V., Thong, J. Y. & Xu, X. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q. 36(1), 157–178. 10.2307/41410412 (2012).
19. Kumar, M., Tyagi, R., Gaumat, A. & Rani, J. Students’ perceptions and readiness for AI-enhanced learning: A UTAUT-based study in Indian higher education institutions. J. Mark. Social Res. 2, 495–500. 10.61336/jmsr/25-03-61 (2025).
20. Liu, X., Wang, J. & Luo, Y. Acceptance and use of technology on digital learning resource utilization and digital literacy among Chinese engineering students: A longitudinal study based on the UTAUT2 model. Behav. Sci. 15(6), 728. 10.3390/bs15060728 (2025).
21. Gourlay, L. Surveillance and datafication in higher education: Documentation of the human. Postdigital Sci. Educ. 6(4), 1039–1048. 10.1007/s42438-022-00352-x (2024).
22. Selwyn, N. & Gašević, D. The datafication of higher education: discussing the promises and problems. Teach. High. Educ. 25(4), 527–540. 10.1080/13562517.2019.1689388 (2020).
23. Khalil, M., Prinsloo, P. & Slade, S. Fairness, trust, transparency, equity, and responsibility in learning analytics. J. Learn. Anal. 10(1), 1–7. 10.18608/jla.2023.7983 (2023).
24. Tohănean, D. I., Vulpe, A. M., Mijaica, R. & Alexe, D. I. Embedding digital technologies (AI and ICT) into physical education: A systematic review of innovations, pedagogical impact, and challenges. Appl. Sci. 15(17), 9826. 10.3390/app15179826 (2025).
25. Wang, Y. & Wang, X. Artificial intelligence in physical education: comprehensive review and future teacher training strategies. Front. Public Health 12, 1484848. 10.3389/fpubh.2024.1484848 (2024).
26. Bofill, J., Pla-Campas, G. & Sebastiani, E. M. Is artificial intelligence an educational resource in physical education? A systematic review. Apunts Educación Física y Deportes 160, 1–9. 10.5672/apunts.2014-0983.es.(2025/2).160.01 (2025).
27. Cui, B., Jiao, W., Gui, S., Li, Y. & Fang, Q. Innovating physical education with artificial intelligence: a potential approach. Front. Psychol. 16, 1490966. 10.3389/fpsyg.2025.1490966 (2025).
28. Zhang, J. Leveraging artificial intelligence for enhanced physical education in universities: A paradigm shift towards data-driven and adaptive learning systems. Int. J. Web-Based Learn. Teach. Technol. 20(1), 1–15. 10.4018/IJWLTT.393622 (2025).
29. An, R. Artificial intelligence in health and sport sciences: Promise, progress, and prudence. J. Sport Health Sci. 14, 101054. 10.1016/j.jshs.2025.101054 (2025).
30. Liu, Y. et al. Associations between levels of physical literacy and adherence to the 24-h movement guidelines among university students: A cross-sectional study. J. Exerc. Sci. Fit. 22(3), 221–226. 10.1016/j.jesf.2024.03.006 (2024).
31. Jiang, S., Ng, J. Y., Choi, S. M. & Ha, A. S. Relationships among eHealth literacy, physical literacy, and physical activity in Chinese university students: Cross-sectional study. J. Med. Internet Res. 26, e56386. 10.2196/56386 (2024).
32. Leung, W. K. C., Sum, R. K. W. & Lam, S. C. Relationships between perceived physical literacy and mental health in tertiary education students: a scoping review. BMC Public Health 25(1), 117. 10.1186/s12889-025-21337-y (2025).
33. Jivet, I. et al. From students with love: an empirical study on learner goals, self-regulated learning and sense-making of learning analytics in higher education. Internet High. Educ. 47, 100758. 10.1016/j.iheduc.2020.100758 (2020).
34. Szcyrek, S. & Stewart, B. Surveillance in the system: data as critical change in higher education. Open/Technology in Education, Society, and Scholarship Association Journal 2(2), 1–20. 10.18357/otessaj.2022.2.2.34 (2022).
35. Thompson, T. L. & Prinsloo, P. Returning the data gaze in higher education. Learn. Media Technol. 48(1), 153–165. 10.1080/17439884.2022.2092130 (2023).
36. Alwahaby, H. & Cukurova, M. Navigating the ethical landscape of multimodal learning analytics: A guiding framework. In Caballé, S., Casas-Roma, J. & Conesa, J. (eds) Ethics in Online AI-Based Systems: Risks and Opportunities in Current Technological Trends, 25–53 (Elsevier, 2024). 10.1016/B978-0-443-18851-0.00014-7
37. Béland, S., Girard, S. & de Guise, A. A. A scoping review of latent moderated structural equations and recommendations. Quant. Methods Psychol. 18(2), 152–167. 10.20982/tqmp.18.2.p152 (2022).
38. Braun, V. & Clarke, V. Thematic Analysis: A Practical Guide (SAGE, 2022).
39. Buckingham Shum, S., Martínez-Maldonado, R., Dimitriadis, Y. & Santos, P. Human-centred learning analytics: 2019–24. Br. J. Educ. Technol. 55(3), 755–768. 10.1111/bjet.13442 (2024).
40. Byrne, D. A worked example of Braun and Clarke’s approach to reflexive thematic analysis. Qual. Quant. 56(3), 1391–1412. 10.1007/s11135-021-01182-y (2022).
41. Forsström, S. et al. The impact of digital technologies on students’ learning: Results from a literature review. OECD Education Working Papers No. 335 (OECD Publishing, 2025). 10.1787/9997e7b3-en
42. Heiser, R., Stritto, M. E. D., Brown, A. & Croft, B. Amplifying student and administrator perspectives on equity and bias in learning analytics: alone together in higher education. J. Learn. Anal. 10(1), 8–23. 10.18608/jla.2023.7775 (2023).
43. Hu, Z., Liu, Z. & Su, Y. AI-driven smart transformation in physical education: current trends and future research directions. Appl. Sci. 14(22), 10616 (2024). https://www.mdpi.com/2076-3417/14/22/10616
44. Li, C., Cao, Y. & Lv, J. Design and implementation of a physical education teaching and training mode management system. Entertain. Comput. 50, 100684 (2024).
45. Liu, X. et al. From grit to flourishing: physical literacy’s mediating role in enhancing well-being among college students with obesity. PeerJ 13, e19382. 10.7717/peerj.19382 (2025).
46. Shi, D., DiStefano, C., Maydeu-Olivares, A. & Lee, T. Evaluating SEM model fit with small degrees of freedom. Multivar. Behav. Res. 57(2–3), 179–207. 10.1080/00273171.2020.1868965 (2022).
47. Subaveerapandiyan, A., Kalbande, D. & Ahmad, N. Perceptions of effectiveness and ethical use of AI tools in academic writing: A study among PhD scholars in India. Inf. Dev. 41(3), 728–746. 10.1177/02666669251314840 (2025).
48. Yuan, N., Yu, Q. & Liu, W. The impact of digital literacy on learning outcomes among college students: the mediating effect of digital atmosphere, self-efficacy for digital technology and digital learning. Front. Educ. 10, 1641687. 10.3389/feduc.2025.1641687 (2025).



Articles from Scientific Reports are provided here courtesy of Nature Publishing Group
