2023 May 4:1–27. Online ahead of print. doi: 10.1007/s10763-023-10376-9

Undergraduate Students’ Profiles of Cognitive Load in Augmented Reality–Assisted Science Learning and Their Relation to Science Learning Self-efficacy and Behavior Patterns

Xiao-Fan Lin 1,2,3, Seng Yue Wong 4, Wei Zhou 1, Weipeng Shen 1, Wenyi Li 1, Chin-Chung Tsai 5,6
PMCID: PMC10157549  PMID: 37363784

Abstract

Research evidence has indicated that a specific type of augmented reality–assisted (AR-assisted) science learning design or support might not suit, or be effective for, all students, because students’ cognitive load might differ according to their experiences and individual characteristics. Thus, this study aimed to identify undergraduate students’ profiles of cognitive load in AR-assisted science learning and to examine the role of these distinct profiles in self-efficacy, together with the associated behavior patterns, in science learning. After the validity and reliability of each measure were established, a latent profile analysis confirmed that 365 Chinese undergraduates carried diverse dimensions of cognitive load simultaneously. The latent profile analysis revealed four fundamental profiles, Low Engagement, Immersive, Dabbling, and Organized, each characterized by a different combination of cognitive load levels. A multivariate analysis of variance revealed different levels of the six AR science learning self-efficacy dimensions across the profiles. Low Engagement students displayed the lowest self-efficacy across all dimensions. Organized students reported higher self-efficacy for conceptual understanding and higher-order cognitive skills than Dabbling students. Students with the Immersive profile had the highest science learning self-efficacy. The lag sequential analysis results showed significant differences in behavior patterns among the profiles: profiles whose patterns combined social interaction, testing, and reviewing feedback scored significantly higher in self-efficacy than those whose patterns relied mainly on testing and resource visits. These findings provide a unified account of students’ diverse profiles and can inform interventions for the effective design of AR-assisted science learning, matching appropriate strategies to facilitate the science learning effect.

Keywords: Augmented reality, Science learning, Cognitive load, Self-efficacy, Behavior

Introduction

Science learning plays a crucial role in education, developing students’ scientific literacy so that they can explore and comprehend the critical laws of scientific phenomena. Students may experience various difficulties in science learning (Yeh et al., 2019), such as a lack of motivation and poor engagement when encountering difficulties (Ho et al., 2022; Hwang et al., 2019; Lin et al., 2022a, 2022b) and a lack of substantial scientific learning experiences in the physical laboratory (Hu-Au & Okita, 2021). Thus, research is increasingly considering the role of technology in science learning to solve these problems. With the advantage of combining virtual and real objects with real-time, interactive information, augmented reality (AR) has been widely implemented in science education and has been demonstrated to contribute to an improved learning experience (Lai et al., 2019; Thees et al., 2020), learning performance (Chang & Hwang, 2018; Lai et al., 2019), science-conceptual understanding (Lin et al., 2020), motivation (Chen, 2019), and learning behavior (Lai et al., 2019; Sahin & Yilmaz, 2020).

Despite the merits of AR for improving students’ learning achievements, some instructional challenges have also emerged, including issues related to cognitive load (Wu et al., 2018), especially lowering students’ extraneous cognitive load (Thees et al., 2020). Therefore, an equally important but less investigated issue is understanding the cognitive load students perceive in relation to their AR-assisted science learning experiences and behavior. In particular, students may encounter different levels of cognitive load when engaging with various AR multimedia approaches in science learning (Lai et al., 2019; Thees et al., 2020). Hence, they may differ in their AR learning experiences in ways worth investigating, such as how their cognitive load can be manipulated to pedagogical benefit via AR, how it influences their motivational state, and how much cognitive load they incur in using AR tools. Thus, the design of AR-assisted science learning experiences should take students’ cognitive load into account, since cognitive load in AR learning is increasingly becoming a vital factor in balancing students’ expected efforts and the mental costs associated with the learning process in the design of learning tasks (Yang et al., 2021).

As implied above, previous researchers have also indicated that a specific type of AR-assisted science learning design or support may not suit or be effective for all students, because students’ cognitive load may differ according to their experiences and individual characteristics (Thees et al., 2020). Prior studies have shown contradictory findings on cognitive load. For example, students experiencing more cognitive load may perform better (Lai et al., 2019; Thees et al., 2020), worse (Sung et al., 2014; Thees et al., 2020), or show no significant difference (Hwang et al., 2018; Wu et al., 2018) between the experimental and control groups. In addition, multitasking is another issue in AR learning systems: it can be a distraction that leads students to ignore instructions during their learning process (Garzón et al., 2019). These issues in AR education may involve the interaction of multiple factors that influence cognitive processes. Therefore, this study aimed to investigate students’ cognitive load in AR-assisted science learning experiences, informing instructors about the current status of learners’ profiles of cognitive load in AR-assisted science learning and their relation to multiple science learning factors, so as to fill the potential inconsistencies between AR-assisted science learning theories and practices for teachers to consider.

Furthermore, students with high self-efficacy may have a powerful sense of competence, which may assist cognitive processes (Lin et al., 2020), academic performance (Wang et al., 2018), and achievement (Chang et al., 2018). It is worth regarding students’ self-efficacy as a key characteristic of students with different levels of cognitive load in science learning (Martin et al., 2021; Plass et al., 2009). For instance, Plass et al. (2009) confirmed that students with different levels of cognitive load varied in self-efficacy when learning science with designed scaffolds. Therefore, this study included self-efficacy as a factor for identifying the underlying subgroups in AR-assisted science learning with regard to students’ cognitive load, unlike previous studies (e.g. Thees et al., 2020), which only demonstrated the effect of an AR-assisted learning system on science learning performance and cognitive load. Learning behavior patterns can be analyzed via behavioral sequential analysis to reveal students’ complex behaviors, supporting a more effective instructional mechanism for an AR learning system (Hwang et al., 2018). Researchers have highlighted the importance of exploring primary school students’ behavior patterns in AR-based scientific inquiry to reveal intrinsic factors and key learning processes, such as self-efficacy and cognitive load (Ho et al., 2022; Lin et al., 2022a, 2022b). However, to the best of our knowledge, exploration of the relationship between students’ multi-dimensional profiles of cognitive load in AR-assisted science learning, self-efficacy, and the associated behavior patterns is still rare in the literature. This research gap may lead to an insufficient understanding of how to better facilitate students’ science learning.
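The behavioral sequential analysis invoked above is conventionally computed as adjusted residuals over a lag-1 transition matrix of coded behaviors, with z > 1.96 marking transitions that occur significantly more often than chance (the Bakeman & Gottman convention). The paper does not publish its analysis code, so the following Python sketch is illustrative only, and the integer behavior codes are hypothetical placeholders.

```python
import numpy as np

def lag1_adjusted_residuals(codes, n_states):
    """Lag-1 sequential analysis: return the matrix of adjusted residuals,
    where z[i, j] > 1.96 marks transition i -> j occurring significantly
    more often than chance (Bakeman & Gottman's procedure)."""
    obs = np.zeros((n_states, n_states))
    for a, b in zip(codes, codes[1:]):   # count observed lag-1 transitions
        obs[a, b] += 1
    n = obs.sum()
    row = obs.sum(axis=1, keepdims=True)      # totals per antecedent behavior
    col = obs.sum(axis=0, keepdims=True)      # totals per subsequent behavior
    exp = row @ col / n                       # expected counts under independence
    denom = np.sqrt(exp * (1 - row / n) * (1 - col / n))
    return (obs - exp) / denom
```

For a strictly alternating sequence of two behaviors, for example, the residuals for the 0→1 and 1→0 transitions are strongly positive while the repetitions 0→0 and 1→1 are strongly negative.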
Accordingly, this study aimed to test hypotheses regarding how these multiple factors interact and identify Chinese undergraduate students’ profiles in terms of their critical perceptions of cognitive load in AR-assisted science learning. This study also compared AR science learning self-efficacy and associated behavior patterns among different learning profiles. The following research questions were therefore addressed:

  1. By latent profile analysis, what are undergraduate students’ profiles in AR-assisted science learning in terms of three critical cognitive load dimensions?

  2. By behavioral sequence analysis, what are the differences in the students’ science learning behavior patterns in terms of the different AR-assisted science learning profiles?

  3. How do the different undergraduate students’ AR-assisted science learning profiles differ in terms of science learning self-efficacy?

Literature Review

Cognitive Load

Considerable information processing is required of students to ensure they achieve their learning outcomes with meaningful learning. However, their capacity for cognitive processing is a critical barrier. Cognitive load is a multi-dimensional cognitive construct representing load when doing a particular task (Paas & van Merriënboer, 1994). However, the vast amount of cognitively relevant information contained in AR materials may lead to students’ overload in science learning activities (Ho et al., 2022; Lin et al., 2022a, 2022b). Sweller (1988) and Paas and van Merriënboer (1994) have identified various categories of cognitive load. First, Sweller (1988) identified three types of cognitive load—intrinsic, extraneous, and germane—focusing on the interpretation of performance and task-based measures. Intrinsic cognitive load is defined as an interaction between the inherent components of the material and the level of the learners’ prior knowledge or experience (Sweller, 1988). Extraneous cognitive load is caused by the poor design of instructional materials, which requires extra cognitive handling (Kirschner et al., 2006). Germane cognitive load is defined as the amount of working memory capacity required for managing intrinsic load. However, these three classical types only give a partial picture of cognitive load (Xie & Salvendy, 2000), and do not properly capture the mental effort-based aspect of cognitive load (Feldon, 2007).

Second, the cognitive load theory proposed by Paas and van Merriënboer (1994) highlighted the factors that demonstrate students’ depletion of working memory when they encounter learning tasks. According to their theory, cognitive load should be assessed along three dimensions: mental load, mental effort, and performance. Mental load is imposed by task or environmental demands, regardless of subject characteristics. As a task-centered factor, it shows how students persist with the assigned task in a certain environment. Mental effort is a human-centered factor that measures the quantity of resources allocated to meet task demands, or the amount of controlled processing required. Mental effort can be examined by subjective techniques, such as rating scales, or by objective techniques, such as physiological parameters. Performance is an indirect indicator of cognitive load, where a higher level of performance implies a greater amount of working memory capacity devoted to managing intrinsic load (Deleeuw & Mayer, 2008; Paas et al., 2003). Although the performance dimension of cognitive load increases the level of total cognitive load, it simultaneously increases students’ motivation, directs attention to the learning materials, and improves schema acquisition. Generally, mental effort and performance together represent (i) the costs of cognitive load for the attained performance and (ii) the relative efficiency of instructional situations. This study takes performance into account, together with mental load and mental effort, as an essential estimator of cognitive load, as suggested in prior studies (Deleeuw & Mayer, 2008; Wang et al., 2018). 
When conducting problem-solving activities in science learning, it is essential to monitor both students’ performance and their invested mental effort so that they can regulate their problem-solving processes, since problem-solving activities consume much of students’ limited cognitive capacity without being directly related to their learning (i.e. extraneous cognitive load), whereas the performance dimension of cognitive load may be constrained by their limited cognitive processing scope (van Gog et al., 2020).
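The "relative efficiency of instructional situations" referred to above is conventionally computed from standardized performance and mental effort scores, E = (z_P − z_E) / √2 (Paas & van Merriënboer, 1993). A minimal illustrative sketch, assuming performance and effort are available as numeric arrays:

```python
import numpy as np

def instructional_efficiency(performance, effort):
    """Relative condition efficiency E = (z_P - z_E) / sqrt(2)
    (Paas & van Merrienboer, 1993): high performance attained with
    low mental effort yields E > 0; low performance at high effort
    yields E < 0. Inputs are numeric numpy arrays over learners."""
    zp = (performance - performance.mean()) / performance.std(ddof=1)
    ze = (effort - effort.mean()) / effort.std(ddof=1)
    return (zp - ze) / np.sqrt(2)
```

Because both inputs are z-standardized over the same sample, efficiency scores are centered on zero by construction.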

Mental load, mental effort, and performance are also conceptualized as assessment factors for cognitive load, and items measuring them are normally extracted or modified from the original cognitive load scales to profile learners’ cognitive load (Wu et al., 2018). Although students’ mental load and mental effort have already received much attention in the literature (Hwang et al., 2018; Sung et al., 2014), the results are contradictory. One reason may be that previous studies tended to explore cognitive load through three separate measurable factors—mental load, mental effort, and performance—each offering only limited information in isolation. It remains unknown whether students with different cognitive load levels can benefit equally from the same AR-assisted science learning system. To fill this gap, educators need to identify students’ learning profiles from a comprehensive perspective.

Cognitive Load in the AR Context

The cognitive processes of learners using technology are crucial in instructional design. AR technology blends real and virtual imagery by integrating digital content into real-world views. The positive contribution of AR to cognitive load has been demonstrated in several studies (Ibili & Billinghurst, 2019). Prior studies have uncovered the effectiveness of AR for understanding complex and abstract learning concepts by lowering intrinsic and extraneous cognitive load (Ibili & Billinghurst, 2019; Keller et al., 2021). In virtual learning environments, cognitive load increases when the difficulty of new learning content is high (Skulmowski & Xu, 2022). The AR approach can reduce learners’ effort in information processing and their difficulty in dealing with science learning content (Fadl & Youssef, 2020). AR directly affects memory resources related to spatial 3D representations, leading to reduced intrinsic load (Ibili & Billinghurst, 2019); students can then invest lower cognitive effort while still successfully understanding the learning topics and comprehending scientific concepts (Keller et al., 2021). There are, however, cognitive load-related challenges for AR learning, such as learners’ cognitive overload due to the large amount of cognitively relevant information contained in learning materials and the complex tasks required in AR environments (Akçayir & Akçayir, 2017). Overloading students with learning materials and tasks in AR simulations can cause confusion and cognitive overload (Dunleavy et al., 2009). Thus, this study focused on undergraduate students’ mental load, mental effort, and performance in AR-based learning to identify students’ profiles of cognitive load in AR-assisted science learning.

Science Learning Self-Efficacy

Self-efficacy can be described as an individual’s sense of their own ability to accomplish given tasks and achieve a designated level of performance (Beehr, 1995). Lin (2021) found that high school students with high self-efficacy scores tended to adopt deep learning strategies and showed high cognitive engagement in science learning. Some researchers have modified an instrument to examine students’ science learning self-efficacy comprising five dimensions: conceptual understanding, higher-order cognitive skills, practical work, everyday application, and science communication (Chang et al., 2018). They explored a more detailed understanding of students’ multi-dimensional learning self-efficacy and indicated that learners with a constructive perception (who regarded learning as understanding knowledge deeply, rather than as testing, calculation, and remembering) could have great confidence in science communication.

Relationship Between Cognitive Load and Science Learning Self-Efficacy

Past achievements can enhance students’ self-efficacy, contributing to higher aspirations and the application of extra effort or initiative, and consequently improving performance (Beehr, 1995). Salomon (1984) stated that an individual’s capability contributes to the level of mental effort by influencing the stability of the performance goal, and that low self-efficacy may cause anxiety. Students with low self-efficacy may shy away from their tasks because they expect themselves to fail (Ho et al., 2022; Lin et al., 2022a, 2022b). When students faced a task beyond their working memory capacities, they automatically adjusted their performance targets and carried out the task with a lower level of mental effort (Yang et al., 2021). Besides, students with a medium level of self-efficacy will invest more mental effort in their learning, as they believe that they will not get better results unless they work diligently (Wang et al., 2018). On the other hand, Feldon et al. (2018) found that mental effort contributed to self-efficacy, indicating that students may be conscious that mental effort is a necessary investment for self-efficacy when learning biology in an instructional condition. The investment of mental effort is one of the indicators of motivation, along with goal selection (where to invest mental effort) and persistence (how to sustain mental effort over time to achieve a goal) (Wigfield & Eccles, 2000). This implies that students may maintain high self-efficacy through the investment of mental effort. However, only a positive linear relationship between self-efficacy and mental effort has been explored (Buchner et al., 2022). Students who believed in their self-perceived skills in science and mathematics were less affected by extraneous load (Glogger-Frey et al., 2017).
The above contrasting findings regarding self-efficacy and mental effort imply that the relationship between cognitive load and self-efficacy in science learning is still unclear and worth investigating further. In addition, most previous researchers have paid attention to the correlation between cognitive load and self-efficacy (Chang & Hwang, 2018; Lai et al., 2019). For example, Lai et al. (2019) found that elementary students learning with an AR-based approach applied lower mental effort and incurred less cognitive load for information processing than those who participated in conventional (non-AR-based) science learning. However, they regarded cognitive load only as one possible indicator for testing the effect of the AR-based learning approach in a quasi-experimental study. To the best of our knowledge, existing studies have not accounted for the complex mechanism whereby students’ behavior sequences and science learning self-efficacy might differ in relation to their latent profiles of perceived cognitive load in AR-assisted science learning. Thus, this study may offer a new direction for better understanding the role of cognitive load in the efficacy of AR-based science learning by exploring students’ behavior patterns.

Methods

Research Design

This study designed a framework to represent profiles of cognitive load in AR-assisted science learning and their relation to science learning self-efficacy and behavior patterns. As shown in Fig. 1, cognitive load in AR-assisted science learning consists of three critical assessment dimensions (mental load, mental effort, and performance) to measure students’ cognitive load. Latent profile analysis was utilized to analyze these students’ cognitive load scales in the AR context for categorizing several distinct profiles.
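Latent profile analysis over continuous indicators is formally a finite Gaussian mixture model. The analysis in this study was presumably run in dedicated software (the authors cite Muthén & Muthén, i.e. Mplus), but a conceptually equivalent sketch can be written with scikit-learn, selecting the number of profiles by BIC. The function name and the BIC-only selection criterion here are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_profiles(scores: np.ndarray, max_profiles: int = 5, seed: int = 0):
    """Choose the number of latent profiles by BIC and return hard assignments.

    `scores` is an (n_students, 3) matrix of mental load, mental effort,
    and performance scale scores.
    """
    models = [
        GaussianMixture(n_components=k, n_init=5, random_state=seed).fit(scores)
        for k in range(1, max_profiles + 1)
    ]
    best = min(models, key=lambda m: m.bic(scores))  # lower BIC is better
    return best.n_components, best.predict(scores)
```

On well-separated data this recovers the true number of clusters; in practice, profile solutions are also judged on entropy, likelihood-ratio tests, and interpretability, not BIC alone.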

Fig. 1 The framework of the relationship between cognitive load in augmented reality–assisted science learning and science learning self-efficacy. Note. Profile “N …” under CL profiles denotes a profile whose name is not yet determined, as the precise names depend on the analysis results

Based on the above profile analysis in AR-assisted science learning, this study identified students’ different science learning behavior patterns according to the classification of each profile. The related behavior patterns were analyzed and described according to these different profiles. Comparisons were made among the distinct profiles of cognitive load and science learning self-efficacy. Furthermore, this study compared the distinct profiles of cognitive load with behavior patterns in AR-assisted science learning. In addition, an AR-assisted science learning system was designed in order to apply the above framework.

Participants

A total of 372 undergraduates were initially involved as candidates. However, several were removed because they did not finish their learning tasks or were absent. Ultimately, 365 participants from the same university in southern China completed the questionnaires; 31.5% were male. Their average age was 22.16 years (SD = 3.47), ranging from 16 to 26. All students had prior experience using smartphones or tablets. The participants were in their second (N = 129) and third (N = 236) years of study. A long-term science learning project in the mobile AR context, supported by the Chinese subject instructional committee in 2020, was conducted for this study to help teachers implement science learning activities with the proposed AR-assisted science learning system. The students had prior experience using AR in education; they had already learned the basic concepts and knowledge about vehicle theory, the vehicle gearbox, and the structure of automobile engines using AR. All the participants were recruited from science-related majors only (e.g. physics, environmental engineering, science, and astronomy).

Context

An AR-assisted science learning system was designed and applied in this study. It played a major role in the entire science learning process, including learning observation, experimental exploration, rectification, and reflection. The learning system consisted of authentic mobile AR-assisted science learning environments, a technology-embedded scientific inquiry system, and a system for tracking students’ learning behavior, which were adopted from past studies (Lin et al., 2020; Yeh et al., 2019). This study was conducted in the university’s Physics and Everyday Life curriculum course. The course was designed for students to learn the principles of the automobile engine. The teaching goals of the course were for the students to learn about engines and their individual systems: the basic structure and working principles of mechanisms such as the idle air control system, air inlet control system, and fuel injection system. The learning environment situated the students’ learning activities in the AR-assisted science learning system. The students spent 2 weeks (140 min each) completing three recommended AR-assisted science learning tasks implemented by the teacher with the purpose of reducing their cognitive load. Figure 2 shows parts of the AR-assisted science learning system designed for this study. Participants used this system for science learning and exhibited different learning pathways and data, which were used for this study’s analysis.

Fig. 2 Demonstration of the usage of the AR-assisted science learning system

The students learned scientific knowledge and concepts with AR in the first activity. The students scanned the QR code on the related content to access science learning materials for expanded reading, review, and consolidation, as demonstrated in Fig. 2. The teachers guided the students to compare the contextual science knowledge with their previous understanding. Students directly comprehended scientific knowledge through the interactive and dynamic 3D object representations provided by AR. In these activities, AR provided a context close to reality for observation and exploration, thus promoting understanding and reducing mental load.

The second activity involved contextual inquiry to solve the students’ scientific questions in the practical work via AR. The teachers provided the students with guidance on issue-based science questions in task-based and context-aware environments with the support of image-based AR. Based on these AR materials and environments, an experimental exploration was designed for two purposes. First, the students’ mental effort was moderately raised by leading them to experience real problem-solving situations presented by AR. Second, students’ extraneous cognitive load was reduced by integrating what they had learned with the learning objectives, allowing them to practice in the simulated experimental process. An example is presented in the following section. When mental effort was moderately increased, students would achieve the best learning outcomes with lower extraneous cognitive load (Lai et al., 2019).

The third activity was designed to enhance students’ scientific methods via AR. The students needed to express their conceptual understanding of scientific knowledge in this activity. Teachers were trained to develop several test modules with timely feedback in the AR-assisted science learning system. In these science learning activities, the AR-based interactive reflective approach has the potential to gradually improve the students’ learning performance and scientific thinking skills. Only self-reflection based on appropriate feedback from the teacher can lead to effective knowledge construction and consolidate the formation of scientific thinking.

This study took the scientific activity involving the “four-stroke cycle of the automobile engine” as an example to explain how the AR-assisted science learning system could facilitate students’ learning. Specifically, the students were provided with AR science learning materials, guidance, and science questions in a task-based and context-aware environment, with appropriate feedback to facilitate their self-reflection. First, the students were asked to watch the interactive videos and interact with the 3D object representations to understand the invisible structure and abstract concepts of the automobile engine’s working cycle (i.e. intake stroke, compression stroke, power stroke, and exhaust stroke). Accordingly, the students compared the presented model with their existing knowledge of how automobile engines work through AR-assisted science learning experiences. Second, the students were asked to solve an immersive automobile engine-related issue, provided through an AR-based semi-real context and situational questions, in order to apply their learned science knowledge in everyday applications. For example, the students needed to answer an everyday situational question about how to address the driving difficulty of failing to accelerate while keeping their foot on the gas. To express their ideas about how to address this difficulty, the students completed a test, writing possible solutions and the related science knowledge on a worksheet within the AR-assisted science learning system. The students might encounter the abstract principle that keeping their foot on the gas only provides more gasoline to the engine, whereas easing off the throttle first lets air in and thus results in faster acceleration. The structural movement of the gas pedal is hard to see but can be visualized by the AR-assisted science learning system. Therefore, the students were able to observe, interact, and perform experiments with the AR-simulated automobile engine. 
After analyzing the data from experiments, they understood the abstract principle regarding how the “automobile engine” applies a four-stroke cycle to convert gasoline into motion. Third, the students were assisted by the in-time feedback from the teacher and suitable guidance from the AR-assisted science learning system to cultivate their self-reflection, including re-examining assumptions, identifying knowledge gaps, and transforming thinking actively into practice. Accordingly, the students could evaluate their science learning performances and behaviors to bridge their science knowledge gaps in understanding the stroke cycle of automobile engines.

Instruments

Cognitive Load Scale in the AR Context

The scale of cognitive load in the AR context was modified from the original cognitive load scales developed by Ho et al. (2022), Lin et al. (2022a, 2022b), Chen et al. (2022), and Paas and van Merriënboer (1994). To adapt the scale to measure the students’ perceived cognitive load when using the AR-assisted science learning system in this study, 21 items were retained and modified into three dimensions: mental load, mental effort, and performance. The participants rated all items on a 5-point Likert-type scale (1—strongly disagree, 2—disagree, 3—neither disagree nor agree, 4—agree, and 5—strongly agree). In this study, the Cronbach’s alpha values of mental load, mental effort, and performance were 0.92, 0.92, and 0.85, respectively, indicating that the scale is reliable. To validate the instrument, three expert teachers from the science and technology area were asked about the implementation appropriateness, content relevance, and writing clarity of the questionnaire. Furthermore, eight students participated in content validation to consider the course relevance of the questionnaire. The following items explain each dimension of the cognitive load scale in the AR context:

  • Mental effort: students’ reactions to the instructions of the learning task to complete the AR-assisted science learning tasks (e.g. I devoted much energy to the AR science learning activities)

  • Mental load: students’ perceived difficulty of the AR-assisted science learning content, learning task, subject characteristics, and materials to understand new information (e.g. It is difficult for me to be attentive in learning AR science materials)

  • Performance: students’ science learning performance in the science learning activities embedded in the AR-assisted learning system (e.g. I can achieve my science learning goals for a higher outcome with the help of the AR learning system)
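Cronbach’s alpha values like those reported for these dimensions can be reproduced from raw item responses with the standard formula α = k/(k−1) · (1 − Σ item variances / variance of the summed score). A minimal illustrative sketch (the paper does not state which tool it used for this computation):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly correlated items give alpha = 1, while mutually independent items give alpha near 0, which is why alpha is read as an index of internal consistency.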

AR Science Learning Self-Efficacy

The AR science learning self-efficacy scale was modified based on the original learning self-efficacy scale (Chang et al., 2018), which includes six subscales: conceptual understanding, higher-order cognitive skills, practical work, everyday application, science communication, and academic efficacy. Similar to the above procedure of validation and modification of the cognitive load scale in the AR context, 32 items originally designed by Chang et al. (2018) were retained for further analysis in this study to fit well with the context of AR science learning. Every subscale has five items, and all items are rated on a 5-point Likert-type scale. The Cronbach’s alpha values for these subscales range from 0.80 to 0.92 in this study; these values indicate that the AR science learning self-efficacy scale is reliable. In addition, expert validation and content validation tests were conducted to ensure the scale’s validity. The items for each AR science learning self-efficacy scale are as follows.

  • Conceptual understanding: students’ confidence in using basic scientific concepts, principles, or theories with the AR-assisted learning system (e.g. I can explain basic scientific concepts to others with the help of the AR system-assisted science learning activities)

  • Higher-order cognitive skills: students’ confidence in using sophisticated cognitive skills, such as problem solving, critical thinking, or self-directed inquiry with the interactive and dynamic 3D object representations by AR (e.g. I was able to observe the detailed processes and think of possible reasons behind specific scientific phenomena and principles (e.g. the automobile engine) when I was inquiring about scientific problems with the help of the AR-assisted science learning system)

  • Practical work: students’ confidence in conducting scientific explorations (e.g. experiments) by using AR as an effective approach to compare the contextual science knowledge with their previous understanding (e.g. After the AR science learning activities, I am confident in analyzing and understanding the meaning of the scientific data)

  • Everyday application: students’ confidence to apply scientific concepts and skills to their daily lives with the support of the AR-assisted learning system (e.g. I could understand many science concepts and phenomena in daily life with the help of AR-assisted science learning experiences)

  • Science communication: students’ confidence in discussing and communicating scientific knowledge with peers or others with the use of AR learning resources (e.g. I could discuss science-related content with my classmates naturally in different authentic AR learning contexts)

  • Academic efficacy: students’ confidence to achieve academic achievement and learning outcomes in the AR-assisted science learning process (e.g. I am able to use the correct science knowledge to answer the designated questions of the test in the AR system; I am confident that I could explain the invisible structure and working processes of an automobile engine to others after I learned and interacted with the AR materials on automobile engines)

Data Analysis

Five steps of data analysis were adopted to identify the students’ profiles of cognitive load in AR-assisted science learning and their relation to science learning self-efficacy and behavior patterns. First, a Kolmogorov–Smirnov test was conducted to examine the data distributions; the results indicated that the data were not normally distributed. Therefore, this study used the robust maximum likelihood estimator to analyze the data (Muthén & Muthén, 2012).
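As a minimal sketch of this first step (the data below are illustrative, not the study’s scores), a Kolmogorov–Smirnov test against a fitted normal distribution can be run with SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical scale scores: skewed, hence non-normal
scores = rng.exponential(scale=1.0, size=365) + 1.0

# K-S test against a normal with parameters estimated from the sample
# (strictly, estimating parameters first calls for the Lilliefors correction)
stat, p = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
non_normal = p < 0.05  # reject normality at the 5% level
```

A small p-value, as here, justifies abandoning normal-theory estimators in favor of a robust alternative such as the robust maximum likelihood estimator used in the study.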

Second, confirmatory factor analysis (CFA) was conducted in Mplus 7.4 to validate the constructs of the two instruments. Three goodness-of-fit indices were used to assess model fit: (a) the root-mean-square error of approximation; (b) the comparative fit index; and (c) the standardized root-mean-square residuals. If the root-mean-square error of approximation ≤ 0.08, the comparative fit index ≥ 0.90, and the standardized root-mean-square residuals ≤ 0.08, the model fits well (Marsh et al., 2004).
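The CFA itself was run in Mplus; the decision rule for the three fit indices can nonetheless be encoded as a simple check, applied here to the fit values reported later in the Results section:

```python
def model_fits_well(rmsea, cfi, srmr):
    """Cutoffs following Marsh et al. (2004):
    RMSEA <= .08, CFI >= .90, SRMR <= .08."""
    return rmsea <= 0.08 and cfi >= 0.90 and srmr <= 0.08

# Fit values reported in the Results section for the two measurement models
cognitive_load_ok = model_fits_well(rmsea=0.06, cfi=0.93, srmr=0.08)
self_efficacy_ok = model_fits_well(rmsea=0.06, cfi=0.90, srmr=0.06)
```

Both reported models satisfy all three cutoffs, which is the basis for retaining the three-factor and six-factor structures.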

Third, in order to identify latent profiles in the students’ AR-assisted science learning data, a latent profile analysis was conducted on the cognitive load data (i.e. mental effort, mental load, and performance), treated as continuous observed variables, to identify unobservable groups among the subjects. The groups of students sorted by the latent profile analysis constituted the profiles of the students’ cognitive load in AR-assisted science learning. A series of latent profile analysis models was constructed in Mplus 7.4 using the scores of the cognitive load scale’s items. Low values of the Akaike information criterion and adjusted Bayesian information criterion, together with higher entropy, indicated good model fit (Burnham & Anderson, 2002). The bootstrap likelihood ratio test and adjusted Lo–Mendell–Rubin test were also considered (Lo et al., 2001).
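The study ran this analysis in Mplus 7.4. As an open-source approximation, latent profile analysis can be sketched with a Gaussian mixture model with diagonal covariances, comparing information criteria across candidate numbers of profiles; the three-dimensional data below are simulated, not the study’s scores:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated stand-in for the (mental effort, mental load, performance) scores
rng = np.random.default_rng(0)
g1 = rng.normal([4.5, 1.5, 3.5], 0.3, size=(180, 3))
g2 = rng.normal([3.0, 3.0, 3.0], 0.3, size=(185, 3))
X = np.vstack([g1, g2])

# Fit mixture models with 1-5 profiles and compare the BIC
bics = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=5, random_state=0).fit(X)
    bics[k] = gm.bic(X)
best_k = min(bics, key=bics.get)

# Entropy-based classification accuracy for the selected model
probs = GaussianMixture(n_components=best_k, covariance_type="diag",
                        n_init=5, random_state=0).fit(X).predict_proba(X)
p = np.clip(probs, 1e-12, 1.0)
relative_entropy = 1.0 + (p * np.log(p)).sum() / (len(X) * np.log(best_k))
```

Because the two simulated clusters are well separated, the criterion recovers two profiles with near-perfect classification accuracy (relative entropy close to 1), mirroring the logic the study applies to its own model comparison in Table 2.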

Fourth, this study compared the distinct cognitive load profiles with behavior patterns. To characterize the learning behavior patterns, this study applied the coding scheme proposed by Hwang et al. (2018), as shown in Table 1. The participants’ learning actions were collected and coded from different data sources (i.e. classroom observations, count = 21,572, 35.6%; digital video camera records, count = 32,237, 53.2%; and AR-assisted science learning system data, count = 6787, 11.2%). The participants’ learning action data were then transcribed into behavior pattern diagrams based on the coding scheme. The categorization was conducted by two experienced science education teachers who coded the learning actions according to the scheme. The raw behavioral data of all participants were first converted into textual content. A combined long short-term memory (LSTM) and convolutional neural network (CNN) architecture was then applied to identify behavioral sequences in these data: the LSTM-CNN framework extracts deep semantic features of the text by exploiting the LSTM’s capacity to learn long-term dependencies and the CNN’s capacity to capture local context. A similar mechanism was described by Can et al. (2018), who found that the combined LSTM-CNN approach outperformed the separate methods.

Table 1.

Coding scheme

Code Learning behavior
C1 Find and read the learning content
C2 Scan the figure label or QR code to access more learning material
Q1 Click on the test module and review the test
Q2 Submit a wrong answer
Q3 Answer correctly
Q4 Conjecture the answer (answer wrong 3 times in 15 s)
Q5 Read the feedback of the answer (for more than 5 s)
Q6 Complete the learning task (e.g. submit the test answer)
Q7 Change to another learning task during an unfinished task
H1 Discuss with others
H2 Leaving messages or comments for help on others’ pages

C capture the information, Q quiz interaction, H human contact
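The coding scheme in Table 1 can be represented as a simple lookup, together with a helper that turns a coded behavior stream into the lag-1 transition pairs later used in the sequential analysis; the sample stream below is hypothetical, not taken from the study’s logs:

```python
# Table 1 coding scheme as a lookup
CODING_SCHEME = {
    "C1": "Find and read the learning content",
    "C2": "Scan the figure label or QR code to access more learning material",
    "Q1": "Click on the test module and review the test",
    "Q2": "Submit a wrong answer",
    "Q3": "Answer correctly",
    "Q4": "Conjecture the answer (answer wrong 3 times in 15 s)",
    "Q5": "Read the feedback of the answer (for more than 5 s)",
    "Q6": "Complete the learning task (e.g. submit the test answer)",
    "Q7": "Change to another learning task during an unfinished task",
    "H1": "Discuss with others",
    "H2": "Leaving messages or comments for help on others' pages",
}

def lag1_transitions(stream):
    """Return consecutive (prior, subsequent) behavior pairs."""
    unknown = [c for c in stream if c not in CODING_SCHEME]
    if unknown:
        raise ValueError(f"uncoded behaviors: {unknown}")
    return list(zip(stream, stream[1:]))

# Hypothetical coded stream for one student session
pairs = lag1_transitions(["C1", "C2", "Q1", "Q2", "Q4", "Q7"])
```

Counting such pairs across all sessions yields the transition frequency table on which the lag sequential analysis operates.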

During the LSTM-CNN analysis, similar features were classified into one category, and the exported categories were matched to the established coding scheme to obtain preliminary behavioral classifications. The two coders then checked and corrected any inconsistencies in these preliminary classifications to prepare the data for the lag sequential analysis. The coders had rich experience with the coding scheme employed in this study; they fully comprehended the scheme and were capable of resolving any ambiguity that arose during coding. When the students’ learning behavior data failed to reach an agreed-upon categorization, the researchers reviewed the transcripts to determine the final categorization. The inter-rater reliability (kappa) of the two teachers’ analytic results was 0.82 (p < 0.001), indicating acceptable agreement. This study then performed a lag sequential analysis to examine the students’ learning behavior patterns in the AR-assisted science learning system. Statistical significance was assessed by calculating Z-scores (adjusted residuals) with a sequential pattern analysis program (GSEQ 5.1). In the behavior sequence analysis, transitions with Z-scores greater than 1.96 were retained as significant (Bakeman & Gottman, 1997).
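The Z-scores that GSEQ computes for a transition table are adjusted residuals in the sense of Bakeman and Gottman (1997). A minimal sketch, applied to a small hypothetical 2 × 2 transition table rather than the study’s data:

```python
import numpy as np

def adjusted_residuals(obs):
    """Adjusted residuals (Z-scores) for a lag-1 transition
    frequency table (Bakeman & Gottman, 1997)."""
    obs = np.asarray(obs, dtype=float)
    n = obs.sum()
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row * col / n                       # expected counts under independence
    denom = np.sqrt(expected * (1 - row / n) * (1 - col / n))
    return (obs - expected) / denom

# Hypothetical transition table: rows = prior behavior, columns = subsequent
z = adjusted_residuals([[30, 5], [5, 30]])
significant = z > 1.96  # the study's significance filter
```

Cells whose Z-score exceeds 1.96 correspond to the significant behavioral transitions reported in Table 4 and Fig. 3.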

Finally, this study conducted a multivariate analysis of variance to compare the identified latent subgroups of students’ cognitive load on the dimensions of AR science learning self-efficacy.
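The multivariate test is typically followed by univariate F-tests per dimension, as reported in Table 5. A sketch of one such follow-up with SciPy; the group data are simulated (the means loosely echo Table 5), not the study’s raw scores:

```python
import numpy as np
from scipy.stats import f_oneway

# Simulated self-efficacy scores for four profiles on one dimension
rng = np.random.default_rng(1)
profiles = [rng.normal(mu, 0.5, size=90) for mu in (3.1, 4.4, 3.9, 4.0)]
f_stat, p_value = f_oneway(*profiles)
```

With group means this far apart relative to their spread, the F statistic is large and the null hypothesis of equal means is rejected, after which post hoc comparisons (e.g. Fisher’s LSD, as in Tables 3 and 5) locate the pairwise differences.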

Results

Confirmatory Factor Analysis

CFA was conducted to examine the reliability and validity of the two instruments. After the CFA process, 17 of the 21 items in the cognitive load scale in the AR context and 30 of the 32 items in the AR science learning self-efficacy scale were retained. According to the criteria proposed by Marsh et al. (2004), the three-factor structure of the cognitive load scale in the AR context fitted well (chi-squared = 407.87, p < 0.001, degrees of freedom = 186, comparative fit index = 0.93, root-mean-square error of approximation = 0.06, and standardized root-mean-square residuals = 0.08). The measurement model of AR science learning self-efficacy, comprising six factors, also had a valid construction, with good fit: chi-squared = 973.60, p < 0.001, degrees of freedom = 390, comparative fit index = 0.90, root-mean-square error of approximation = 0.06, and standardized root-mean-square residuals = 0.06.

Latent Profile Analysis for Classifying Students’ Perceptions of Cognitive Load in the AR Context

As shown in Table 2, the bootstrap likelihood ratio test indicated that adding profiles improved the model for up to five profiles. According to the criterion of Finch and Bolin (2019), every profile should contain more than 10% of the participants; the five-profile model was therefore removed because one of its profiles contained less than 10% of the participants. Excluding the five-profile model, the Akaike information criterion and adjusted Bayesian information criterion were lowest for the four-profile model, so it outperformed the one-, two-, and three-profile models on these criteria. Although the entropy of the four-profile model was not the highest, it exceeded 0.90, indicating very high classification accuracy. Thus, the four-profile model was preferred.

Table 2.

The fit of the compared latent profile models with increasing numbers of profiles (N = 365)

Number of profiles X2 Df AIC ABIC Entropy BLRT ALMRT Group size
1 8651.12 42 17,386.24 17,416.79
2 7848.92 64 15,825.83 16,075.42 .90 8651.12*** 1592.14 0.58/0.42
3 7402.67 86 14,977.33 15,039.88 .95 7848.92*** 885.68 0.15/0.31/0.54
4 7144.25 108 14,504.49 14,583.04 .94 7402.67*** 512.89 0.10/0.27/0.34/0.29
5 7007.98 130 14,275.95 14,370.50 .95 7144.24*** 270.46 0.10/0.32/0.29/0.08/0.21

AIC Akaike information criterion, ABIC adjusted Bayesian information criterion, BLRT bootstrap likelihood ratio test, ALMRT adjusted Lo–Mendell–Rubin test

*** p < .001
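The selection logic just described can be sketched as a small decision function applied to the values in Table 2 (adjusted BIC, entropy, and smallest group share per model); the function name and tuple layout are illustrative:

```python
def choose_profile_model(models, min_share=0.10, min_entropy=0.90):
    """Apply the selection rules described above: drop models whose
    smallest profile holds under `min_share` of participants, require
    high entropy where available, then prefer the lowest adjusted BIC.
    `models` maps k -> (abic, entropy_or_None, min_group_share)."""
    admissible = {k: m for k, m in models.items()
                  if m[2] >= min_share and (m[1] is None or m[1] >= min_entropy)}
    return min(admissible, key=lambda k: admissible[k][0])

# Values taken from Table 2: (ABIC, entropy, smallest group share)
table2 = {
    1: (17416.79, None, 1.00),
    2: (16075.42, 0.90, 0.42),
    3: (15039.88, 0.95, 0.15),
    4: (14583.04, 0.94, 0.10),
    5: (14370.50, 0.95, 0.08),
}
chosen = choose_profile_model(table2)
```

The five-profile model is excluded by the 10% group-size rule despite its lower ABIC, leaving the four-profile model as the preferred solution, consistent with the study’s decision.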

Table 3 presents the means (standard deviations), numbers of participants, profile names, and post hoc tests for the four profiles. The multivariate analysis of variance indicated that the four profiles differed significantly (mental effort: F = 268.66, p < 0.001; mental load: F = 320.47, p < 0.001; performance: F = 121.04, p < 0.001). The 100 participants in profile 2 attained the highest mean values for mental effort, the second-highest for performance, and the lowest for mental load across the four profiles, implying that these students actively devoted effort to the tasks. In contrast, the participants in profile 1 (N = 38) had the lowest mean values for mental effort and performance but the highest mean value for mental load; they appeared to lack competence and focus. Profile 4 (N = 107) and profile 3 (N = 120) held approximately similar mean values for mental load, at a medium level among the four profiles. Their mean values for mental effort were likewise medium, though participants in profile 4 scored higher than those in profile 3, and profile 4 also attained the highest mean values for performance. This study further considered the participants’ learning behaviors to distinguish the characteristics of the four profiles.

Table 3.

Comparisons of different profiles

AR cognitive load group mean (SD)
Mental effort Mental load Performance
Profile 1 (N = 38) 2.94 (0.42) 3.10 (0.55) 2.96 (0.52)
Profile 2 (N = 100) 4.79 (0.30) 1.17 (0.23) 3.48 (0.41)
Profile 3 (N = 120) 3.89 (0.36) 2.09 (0.35) 3.21 (0.31)
Profile 4 (N = 107) 4.29 (0.37) 2.07 (0.33) 3.62 (0.24)
F (ANOVA) 268.66*** 320.47*** 121.04***
Post hoc test (LSD) 1 < 3 < 4 < 2 2 < 4 = 3 < 1 1 < 3 < 2 < 4

ANOVA analysis of variance, LSD Fisher’s least significant difference tests

*** p < .001

Analysis of Science Learning Behavior Patterns

The names of the four profiles were decided according to the definitions of the cognitive load subfactors as well as the students’ specific behavioral characteristics. A total of 60,595 coded behaviors were included in the analysis. The most and second-most frequent behaviors were finding and reading the learning content (C1, count = 18,427; 30.41%) and answering correctly (Q3, count = 13,423; 22.15%). It was also common for students to scan the figure label or QR code to access more learning material (C2, count = 8,433; 13.92%) or to click on the test module and review the test (Q1, count = 6,355; 10.49%). Behaviors such as changing to another learning task during an unfinished task (Q7, count = 1,046; 1.73%), leaving messages or comments for help on others’ pages (H2, count = 1,302; 2.15%), and conjecturing the answer (Q4, count = 1,581; 2.61%) rarely appeared. Other inconspicuous behaviors, such as submitting a wrong answer (Q2, count = 2,805; 4.63%), reading the feedback for the answer (Q5, count = 2,666; 4.40%), discussing with others (H1, count = 2,418; 4.00%), and completing the learning task (Q6, count = 2,139; 3.53%), occurred at similarly low frequencies.
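The reported shares follow directly from the raw counts, so they can be recomputed as a consistency check (the counts are those given above):

```python
# Raw behavior counts as reported in the text
counts = {"C1": 18427, "Q3": 13423, "C2": 8433, "Q1": 6355,
          "Q2": 2805, "Q5": 2666, "H1": 2418, "Q6": 2139,
          "Q4": 1581, "H2": 1302, "Q7": 1046}
total = sum(counts.values())
shares = {code: round(100 * n / total, 2) for code, n in counts.items()}
```

The counts sum to the stated total of 60,595 coded behaviors, and each percentage is the count divided by that total.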

This study compared science learning behavior patterns among the four profiles and named them accordingly, as shown in Table 4. As mentioned above, this study divided students into four profiles based on the latent profile analysis of their cognitive load; the behavior patterns of the students in each profile were then used, together with the three key cognitive load measures, to characterize and name the profiles. In Table 4, the rows indicate the prior behavior of each transition and the columns the subsequent behavior. For instance, the C1 → Q1 sequence revealed that students browsed content and then searched for lesson questions. According to Tables 3 and 4, students in profile 1, with the highest mental load and the lowest mental effort and performance, did not focus on outcomes. Moreover, they did not complete tasks or submit their questions, and they were not interested in the answers to the test questions. Therefore, this study labeled this profile the Low Engagement profile. Compared to profile 1, students in profile 2 focused on the test modules. Their behavior patterns, such as C1 → C2, C2 → Q1, Q1 → H1, H1 → Q3, Q3 → Q5, and Q5 → Q6, showed that they conscientiously regulated their learning process and finished tasks; thus, profile 2 was labeled the Immersive profile, with the highest mental effort and behavior patterns reflecting immersion in the work. Students in profiles 3 and 4 had similar medium-level mental load, but students in profile 4 had relatively higher mental effort and performance. Judging from the behavior patterns, the key difference between the two profiles was that students in profile 4 progressively finished the tasks, whereas those in profile 3 kept repeatedly browsing the learning materials and attempting the tasks with new answers. As C1 → C2, C2 → Q1, and Q1 → C2 show, students in both profiles frequently accessed learning materials.
Nevertheless, behavior patterns such as Q2 → Q4 and Q4 → Q7 showed that students in profile 3 did not make an effort to understand the content; they may have found learning with AR interesting and simply wanted to finish the task. By contrast, profile 4 students’ behavior patterns such as H1 → Q3, Q3 → Q5, Q5 → Q6, Q5 → C1, and Q6 → C1 show a progressive learning process: they followed the organized teaching activities, read materials, completed tasks, and received feedback step by step; that is, they were effectively guided by the teaching design. Students in profile 4 also showed patterns such as Q5 → C1 and Q5 → C2, indicating that they took the initiative to access learning materials to understand the tasks. In other words, they performed self-organized learning, which was not evident even in the behavior patterns of profile 2 students. In addition, Q1 → H2 and Q1 → H1 show that profile 4 students took others’ suggestions and left messages to support their study. In conclusion, profile 4 students followed the science learning activities and balanced self-learning with online discussions, creating an effective, organized learning process. Based on this analysis, this study regarded profile 4 as the Organized profile and profile 3 as the Dabbling profile.

Table 4.

Comparisons of science learning behavior patterns among different profiles

Prior behavior: significant transitions by profile
C1: 2+, 3+, 4+, 1+
C2: 2+, 3+, 4+
Q1: 2+, 4+, 1+, 3+, 3+, 2+, 4+, 4+
Q2: 1+, 3+
Q3: 2+, 4+
Q4: 3+
Q5: 2+, 4+
Q6: 2+, 4+
Q7: —
H1: 3+, 4+
H2: —

1+ = Low Engagement, 2+ = Immersive, 3+ = Dabbling, and 4+ = Organized

In the Low Engagement profile, students lacked behavioral transitions (see Fig. 3-a). Usually, when they logged in to the AR-assisted science learning system, their primary behavior path consisted of browsing the main content and searching for lesson questions (C1 → Q1). Subsequently, they tried to answer test questions (Q1 → Q2). The Q2 ↔ Q4 sequence revealed that they guessed the answer three times within 15 s. The average frequency per student was low (0.93 times per person). Consequently, AR learning was not the main activity for this profile. They lacked motivation and interest, as their overall engagement was low. Thus, this profile needed appropriate AR science learning support, as they may face major academic failures and risk dropping out of regular learning.

Fig. 3.

Fig. 3

Behavioral transition diagrams of the four profiles

Figure 3-b indicates that the average behavioral transition frequency of Immersive profile students in the AR-assisted science learning system was significantly higher than that of Low Engagement and Dabbling profile students. Furthermore, this profile primarily focused on the test modules, and its behavioral transition mode added the sequences Q1 → H1, H1 → Q3, Q3 → Q5, Q5 → Q6, and Q6 → C1. This indicated that this profile participated more completely in the test activities and likely perceived the tests as a way to enhance their knowledge. The Q3 → Q5 sequence revealed that the students read the feedback provided by the AR-assisted science learning system. The Q1 → H1 and H1 → Q3 sequences indicated that the students sought help from partners during the tests and, accordingly, answered correctly more often. The behavior analysis revealed that the Immersive profile students browsed learning content and AR resources and asked for help in the forum of the AR-assisted science learning system. Considering that most of the tests were part of the scoring tasks in the course cases, this profile showed a high participation rate in the test activities.

The Dabbling profile students added test-related sequences to their behavioral transition patterns (see Fig. 3-c); this indicated that these students’ initial test attempts were not random. Nevertheless, the majority of them preferred switching to another learning task over completing the test answers. As a result, they lacked strong performance motivation and demonstrated insufficient self-monitoring of AR learning. Although the Dabbling profile students’ behavioral transition frequency in AR learning was higher than that of the Low Engagement profile students, it was significantly lower than that of the Immersive and Organized profile students. The results demonstrated that students with this profile usually logged in to the AR-assisted science learning system to learn solely through rote behaviors (such as taking tests only for exams). They also lacked behavioral diversity and in-depth AR learning each time they logged in.

The Organized profile students’ behavioral transitions in the AR-assisted science learning system were more complex than those of the Immersive profile students (as shown in Fig. 3-d). Their behavioral transition patterns added the Q5 → C1, Q5 → C2, and Q1 → H2 sequences, indicating that these students tended to browse the course content after the test activities. This kind of learning path is more aligned with cognitive and curricular expectations. Thus, this profile not only attended to their test performance and sought help through discussion but also made more reasonable use of the AR learning resources to facilitate their learning, compared to the other profiles. They followed the course requirements, used the AR learning resources, and learned step by step. Compared to the other profiles, their AR science learning process was more aligned with the cognitive norms required to progressively finish the designed cognitive processes and tasks.

Comparisons of Self-Efficacy Among Different Profiles

As seen in Table 5, the multivariate analysis of variance revealed that the six dimensions of learning self-efficacy differed significantly among the profiles: conceptual understanding, F = 44.37, p < 0.001; higher-order cognitive skills, F = 60.07, p < 0.001; practical work, F = 65.90, p < 0.001; everyday application, F = 62.23, p < 0.001; science communication, F = 56.49, p < 0.001; and academic efficacy, F = 89.77, p < 0.001. Moreover, post hoc tests indicated that profiles 3 and 4 did not differ significantly on any dimension of AR science learning self-efficacy except conceptual understanding and higher-order cognitive skills.

Table 5.

Comparisons of self-efficacy among different profiles

AR learning profiles’ group mean (SD)
Profile 1 Profile 2 Profile 3 Profile 4 F (ANOVA) Post hoc test (LSD)
AR learning self-efficacy
  Conceptual understanding 3.07 (0.96) 4.38 (0.58) 3.86 (0.51) 4.02 (0.55) 44.37*** 1 < 3 < 4 < 2
  Higher-order cognitive skills 3.41 (0.88) 4.69 (0.40) 4.03 (0.51) 4.22 (0.48) 60.07*** 1 < 3 < 4 < 2
  Practical work 3.46 (0.80) 4.69 (0.41) 4.04 (0.46) 4.05 (0.43) 65.90*** 1 < 3 = 4 < 2
  Everyday application 3.21 (0.70) 4.46 (0.52) 3.81 (0.48) 3.90 (0.45) 62.23*** 1 < 3 = 4 < 2
  Science communication 3.18 (0.80) 4.47 (0.57) 3.83 (0.49) 3.90 (0.46) 56.49*** 1 < 3 = 4 < 2
  Academic efficacy 3.34 (0.74) 4.73 (0.38) 3.98 (0.47) 4.07 (0.46) 89.77*** 1 < 3 = 4 < 2

1 = Low Engagement, 2 = Immersive, 3 = Dabbling, and 4 = Organized

ANOVA analysis of variance, LSD Fisher’s least significant difference tests

*** p < .001

The participants in different profiles differed significantly in conceptual understanding and higher-order cognitive skills. For the other dimensions of AR self-efficacy, the profiles also differed significantly, except for the Dabbling and Organized profiles. Furthermore, for conceptual understanding and higher-order cognitive skills, the participants in the Immersive profile attained the highest mean values and the participants in the Low Engagement profile the lowest, while the participants in the Organized profile had higher mean values than those in the Dabbling profile.

Discussion and Conclusion

The AR-assisted science learning system was designed to facilitate students’ science learning. The main novelty of this study is the use of AR in science learning to improve students’ cognitive processes, which in turn affect their self-efficacy in science learning, affording a low-stress and efficient AR-assisted science learning environment. Based on the latent profile analysis of the students’ three key cognitive load measures, four profiles of AR-assisted science learning were identified.

Low Engagement profile students showed the lowest AR science learning self-efficacy in all six dimensions, with science communication the lowest. Since the students in the Low Engagement profile invested less mental effort in AR-assisted science learning, they were less exposed to the learning content and assessment in the AR-assisted science learning system. They perceived the highest mental load and reported the lowest performance and the lowest AR science learning self-efficacy. They may have exhausted their capacity to learn via the AR-assisted science learning system while showing the lowest mental effort, and thus may have found it hard to achieve good performance. This implies that the advantages of the AR-assisted science learning system in enhancing engagement and reducing mental load tend to be hindered for these students (Papanastasiou et al., 2019). Research evidence has shown that adopting AR science learning to facilitate students’ creative thinking and authentic problem-solving confidence can help promote students’ science learning engagement (Yeh et al., 2019). Prior studies have also indicated that students’ perceptions of deep strategies for learning physics positively predict their learning self-efficacy (Chang et al., 2018). Thus, teachers could help Low Engagement profile students adopt deep learning strategies to gain in-depth understanding and self-efficacy in AR-assisted science learning when these students cannot achieve better performance with low mental effort. Although the findings of this study showed significant differences between the Low Engagement and Immersive profiles in all six dimensions of AR science learning self-efficacy, the explanations discussed above warrant confirmation in future studies.

Compared to Low Engagement profile students, Dabbling profile students achieved higher performance with a lower mental load. They may have browsed more learning content via the AR-assisted science learning system; however, they did not practice the assessment activities, since they used a dabbling approach to learning in the system. Their mental effort was higher than that of Low Engagement profile students, and their performance was much better than that of Low Engagement profile students but lower than that of Immersive and Organized profile students. As noted in a previous study (Orru & Longo, 2018), the navigational demands of such activities are not strictly directed at learning, even when effort is exerted to achieve understanding. Accordingly, the Dabbling profile students demonstrated an intermediate level of learning self-efficacy.

Students with the Organized profile showed the highest performance among the profiles, with a high level of mental effort, a medium level of mental load, and more communicative and collaborative behaviors. The condition of the students in the Organized profile is consistent with Yang et al.’s (2021) study, whose research on AR-assisted science learning indicated that students learning in AR environments incorporating an instructional model that empowers collaboration and communication had lower cognitive load than students learning with a traditional instructional model. The students with the Organized profile in this study invested more effort in learning under the teachers’ guidance in AR-assisted science learning. In particular, they tended to discuss with others and leave comments to help others after completing tests on the AR-assisted science learning system, which resulted in more complex learning behavior patterns, with more communicative and collaborative behaviors than the students in the other profiles. Several theories may help explain the complex learning behavior patterns exhibited by the Organized profile. This study noted collaborative cognitive load theory as one possible explanation of how collaborative behaviors in technology-assisted learning are associated with students’ cognitive load and lead to changes in learning efficiency. On the one hand, according to collaborative cognitive load theory, the cognitive load caused by the complex information generated in technology-assisted learning can be reduced by collaboration (Janssen & Kirschner, 2020). Therefore, it can be inferred that communication and collaboration with others in the AR-assisted science learning system allowed students in the Organized profile to invest fewer cognitive resources than students lacking collaboration.
When they discussed scientific issues or assisted each other, they shared the burden of information processing, which decreased their mental load. They also achieved higher performance, since collaboration urged them to put effort into AR-assisted science learning. On the other hand, collaborative processes entail transaction costs that are extraneous to the essential processing of relevant information (Janssen & Kirschner, 2020). When communication and collaboration require students to invest considerable mental effort, high transaction costs in interactions may overload their cognitive processes. Therefore, when designing AR-assisted science learning activities, it is necessary to balance transaction costs against burden-sharing advantages in order to maximize the effect of collaborative learning. Given the dearth of previous studies probing how communication and collaboration lighten cognitive load in AR-assisted science learning, we argue that the findings of this study can advance current understanding in this line of research.

Moreover, students in the Dabbling and Organized profiles did not show significant differences in AR science learning self-efficacy, except for conceptual understanding and higher-order cognitive skills. This result is consistent with Lin et al.’s (2020) finding that hands-on activity using AR boosts students’ understanding of scientific concepts, perhaps because such activity involves investing substantial effort in AR-assisted science learning. Although students with these two profiles may not focus on the other self-efficacy dimensions, teachers could target the cognitive processes involved (e.g. conceptual understanding and higher-order cognitive skills) to facilitate AR science learning self-efficacy. The majority of the participants (62.19%) in this study were categorized under the Dabbling and Organized profiles; thus, it would be efficient for educators and scholars to develop and regulate AR-assisted science learning designs in accordance with the characteristics of these two profiles.

Practical Implications

This study provides practical implications for implementing profile classification to promote students’ AR-assisted science learning practices according to their different profiles and behaviors. For example, the Dabbling profile students showed a lower level of learning self-efficacy than the Organized profile students. At the same time, these Dabbling profile students with lower self-efficacy tended to browse the learning content rather than practice the assessment activities. Lower self-efficacy causes anxiety and draws students away from their tasks (Lin et al., 2020), limiting their achievements. To address this surface learning problem, teachers are recommended to implement more immersive methods and tasks targeting conceptual understanding and higher-order cognitive skills in AR-assisted science learning, rather than relying on testing, calculation, and memorization tasks.

Next, careful consideration is needed for targeted strategies to improve the learning performance of students in the different classifications. The design of AR-assisted science learning needs to stimulate students’ intrinsic motivation to ensure they can participate and engage in various types of learning activities. It is vital to effectively integrate these learning activities and their feedback into the scaffold design for Dabbling and Low Engagement students. Consistent with existing research, multiple forms of feedback and appropriate interaction can produce higher mental effort and better performance in science learning (Wang et al., 2018). Through the design of learning paths and learning scaffolding, Dabbling and Low Engagement students’ cognitive processes in AR-assisted science learning can be facilitated.

Furthermore, another practical implication concerns extending our knowledge of how to use AR resources for remote science education during the COVID-19 pandemic. This study provides further evidence for Chin and Wang (2021), who revealed that AR technology can provide different authentic learning activities for understanding students’ difficulties (e.g. cognitive overload) in learning. To train teachers in effective pedagogy for AR-assisted science learning resources, it is necessary to reduce the barriers students face in using emerging technologies and resources, for example by offering guidance on usage. AR-assisted science learning strategies should also support students with special educational needs in learning new knowledge efficiently, thereby promoting cognitive learning justice. The researchers further aimed to provide a powerful new form of tangible and physical interaction within immersive digital environments by considering students’ different cognitive loads and behavioral learning paths, which may bridge the digital and physical worlds of learning.

Limitations and Future Research

This study has several limitations, which suggest directions for future research. First, the cognitive load profiles were derived from AR-based learning activities and three measures of cognitive load: mental load, mental effort, and performance. Other measures of cognitive processes, such as eye- and hand-movement tracking, should be considered in future studies to elucidate the visual and motor behaviors that give rise to students’ cognitive load in AR-assisted science learning environments. Future research could also integrate new AR tools into different AR-assisted science learning activities, such as fully hands-free AR devices, gesture recognition, and audio and video capture analysis. These tools can collect richer analytical data to explore learning behavior patterns in greater depth, so that the interaction between learners and AR-assisted science learning activities in the four profiles can be better understood.

Acknowledgements

The authors express their gratitude to the anonymous reviewers and editors for their helpful comments about this paper.

Author Contribution

Conceptualization: Xiao-Fan Lin and Chin-Chung Tsai; methodology: Wei Zhou and Seng Yue Wong; investigation: Xiao-Fan Lin; data curation: Xiao-Fan Lin and Seng Yue Wong; writing — original draft: Xiao-Fan Lin and Seng Yue Wong; writing — review and editing: Xiao-Fan Lin, Chin-Chung Tsai, and Weipeng Shen; software: Xiao-Fan Lin; data analysis: Wei Zhou and Xiao-Fan Lin; instruction: Xiao-Fan Lin.

Funding

This work was supported by the National Natural Science Foundation of China under grant number 62007010; the Major Project of the National Social Science Fund of China under grant number 18ZDA334; the Science and Technology Projects in Guangzhou under grant number 202102021217; the Key Project of National Nature Science Fund of China under grant number 62237001; the National Key R&D Program of China under grant number 2022YFC3303605; the Special Funds of Climbing Program regarding the Cultivation of Guangdong College Students’ Scientific and Technological Innovation under grant number pdjh2023a0139; and 2023 annual Guangzhou Youth and the Communist Youth League project of “Guangzhou Youths’ Participation in Rural Revitalization Research: The I-SEED ‘Internet Plus’ Cloud Public Welfare to Empower Rural Education Revitalization” under grant number 2023TSW1.

Data availability

The data that support the findings of this study are available from the corresponding author upon request. The data are not openly available elsewhere.

Declarations

Ethics Approval

This study involving human participants was conducted in accordance with the ethical standards of the institutional and national research committees and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The Institutional Review Board (IRB) of South China Normal University approved this study (IRB-SCNU-EIT-2020-016).

Consent to Participate

Informed consent was obtained from all the individual participants included in the study.

Consent for Publication

Additional informed consent was obtained from all the individual participants for whom identifying information is included in this article.

Competing Interests

The authors declare no competing interests.

Contributor Information

Xiao-Fan Lin, Email: linxiaofan@m.scnu.edu.cn.

Seng Yue Wong, Email: wongsengyue@um.edu.my.

Wenyi Li, Email: liwenyi@pku.edu.cn.

Chin-Chung Tsai, Email: tsaicc@ntnu.edu.tw.

References 

  1. Akçayir M, Akçayir G. Advantages and challenges associated with augmented reality for education: A systematic review of the literature. Educational Research Review. 2017;20:1–11. doi: 10.1016/j.edurev.2016.11.002.
  2. Bakeman R, Gottman JM. Observing interaction: An introduction to sequential analysis. Cambridge University Press; 1997.
  3. Beehr TA. Psychological stress in the workplace. London: Routledge; 1995.
  4. Buchner J, Buntins K, Kerres M. The impact of augmented reality on cognitive load and performance: A systematic review. Journal of Computer Assisted Learning. 2022;38(1):285–303. doi: 10.1111/jcal.12617.
  5. Burnham KP, Anderson DR. Model selection and inference: A practical information theoretic approach. New York, NY: Springer; 2002.
  6. Can D-C, Ho T-N, Chng E-S. A hybrid deep learning architecture for sentence unit detection. 2018 International Conference on Asian Language Processing (IALP). 2018:129–132. doi: 10.1109/IALP.2018.8629178.
  7. Chang S-C, Hwang G-J. Impacts of an augmented reality-based flipped learning guiding approach on students’ scientific project performance and perceptions. Computers & Education. 2018;125:226–239. doi: 10.1016/j.compedu.2018.06.007.
  8. Chang, H.-Y., Hsu, Y.-S., Wu, H.-K., Tsai, C.-C. (2018). Students’ development of socio-scientific reasoning in a mobile augmented reality learning environment. International Journal of Science Education, 40(12), 1410–1431.
  9. Chen Y-C. Effect of mobile augmented reality on learning performance, motivation, and Math anxiety in a Math course. Journal of Educational Computing Research. 2019;57(7):1695–1722. doi: 10.1177/0735633119854036.
  10. Chen CH, Chan WP, Huang K, Liao CW. Supporting informal science learning with metacognitive scaffolding and augmented reality: Effects on science knowledge, intrinsic motivation, and cognitive load. Research in Science & Technological Education. 2022. doi: 10.1080/02635143.2022.2032629.
  11. Chin K-Y, Wang C-S. Effects of augmented reality technology in a mobile touring system on university students’ learning performance and interest. Australasian Journal of Educational Technology. 2021;37(1):27–42.
  12. Deleeuw KE, Mayer RE. A comparison of three measures of cognitive load: Evidence for separable measures of intrinsic, extraneous, and germane load. Journal of Educational Psychology. 2008;100(1):223–234. doi: 10.1037/0022-0663.100.1.223.
  13. Dunleavy M, Dede C, Mitchell R. Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. Journal of Science Education and Technology. 2009;18(1):7–22. doi: 10.1007/s10956-008-9119-1.
  14. Fadl N, Youssef W. Science learning in augmented reality context: An exploration of learners’ cognitive load and attitudes. International Journal of Instructional Technology and Educational Studies. 2020;1(2):2682–3926.
  15. Feldon DF. Cognitive load and classroom teaching: The double-edged sword of automaticity. Educational Psychologist. 2007;42(3):123–137. doi: 10.1080/00461520701416173.
  16. Feldon DF, Franco J, Chao J, Peugh J, Maahs-Fladung C. Self-efficacy change associated with a cognitive load-based intervention in an undergraduate biology course. Learning & Instruction. 2018;56:64–72. doi: 10.1016/j.learninstruc.2018.04.007.
  17. Finch WH, Bolin JE. Multilevel modeling using Mplus. CRC Press; 2019.
  18. Garzón J, Pavón J, Baldiris S. Systematic review and meta-analysis of augmented reality in educational settings. Virtual Reality. 2019;23(4):447–459. doi: 10.1007/s10055-019-00379-9.
  19. Glogger-Frey I, Gaus K, Renkl A. Learning from direct instruction: Best prepared by several self-regulated or guided invention activities? Learning & Instruction. 2017;51:26–35. doi: 10.1016/j.learninstruc.2016.11.002.
  20. Ho, H. N. J., Liang, J. C., & Tsai, C. C. (2022). The interrelationship among high school students’ conceptions of learning science, self-regulated learning science, and science learning self-efficacy. International Journal of Science and Mathematics Education, 20, 943–962.
  21. Hu-Au E, Okita S. Exploring differences in student learning and behavior between real-life and virtual reality chemistry laboratories. Journal of Science Education and Technology. 2021;30(6):862–876. doi: 10.1007/s10956-021-09925-0.
  22. Hwang G-J, Chang S-C, Chen P-Y, Chen X-Y. Effects of integrating an active learning-promoting mechanism into location-based real-world learning environments on students’ learning performances and behaviors. Educational Technology Research & Development. 2018;66(2):451–474. doi: 10.1007/s11423-017-9567-5.
  23. Hwang GJ, Lee HY, Chen CH. Lessons learned from integrating concept mapping and gaming approaches into learning scenarios using mobile devices: Analysis of an activity for a geology course. International Journal of Mobile Learning and Organisation. 2019;13(3):286–308. doi: 10.1504/IJMLO.2019.100412.
  24. Ibili E, Billinghurst M. Assessing the relationship between cognitive load and the usability of a mobile augmented reality tutorial system: A study of gender effects. International Journal of Assessment Tools in Education. 2019;6(3):378–395. doi: 10.21449/ijate.594749.
  25. Janssen J, Kirschner PA. Applying collaborative cognitive load theory to computer-supported collaborative learning: Towards a research agenda. Educational Technology Research and Development. 2020;68(2):783–805. doi: 10.1007/s11423-019-09729-5.
  26. Keller S, Rumann S, Habig S. Cognitive load implications for augmented reality supported chemistry learning. Information. 2021;12(3):96. doi: 10.3390/info12030096.
  27. Kirschner PA, Sweller J, Clark RE. Why minimal guidance does not work: An analysis of the failure of constructivist discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist. 2006;41(2):75–86. doi: 10.1207/s15326985ep4102_1.
  28. Lai A-F, Chen C-H, Lee G-Y. An augmented reality-based learning approach to enhancing students’ science reading performances from the perspective of the cognitive load theory. British Journal of Educational Technology. 2019;50(1):232–247. doi: 10.1111/bjet.12716.
  29. Lin TJ. Multi-dimensional explorations into the relationships between high school students’ science learning self-efficacy and engagement. International Journal of Science Education. 2021;43(8):1193–1207. doi: 10.1080/09500693.2021.1904523.
  30. Lin, X. F., Tang, D., Shen, W., Liang, Z. M., Tang, Y., & Tsai, C. C. (2020). Exploring the relationship between perceived technology-assisted teacher support and technology-embedded scientific inquiry: the mediation effect of hardiness. International Journal of Science Education, 42(8), 1225–1252.
  31. Lin, X.-F., Hwang, G.-J., Wang, J., Zhou, Y., Li, W., Liu, J., & Liang, Z.-M. (2022a). Effects of a contextualised reflective mechanism-based augmented reality learning model on students’ scientific inquiry learning performances, behavioural patterns, and higher order thinking. Interactive Learning Environments, 1–21. 10.1080/10494820.2022.2057546
  32. Lin, X. F., Liang, Z. M., Chan, K. K., Li, W., & Ling, X. (2022b). Effects of contextual interactive healthcare training on caregivers of patients with suspected COVID‐19 infection: Anxiety, learning achievements, perceived support and self‐efficacies during quarantine. Journal of Computer Assisted Learning, 38(3), 731–742.
  33. Lo Y, Mendell NR, Rubin DB. Testing the number of components in a normal mixture. Biometrika. 2001;88(3):767–778. doi: 10.1093/biomet/88.3.767.
  34. Marsh HW, Hau K-T, Wen Z. In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu & Bentler’s (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal. 2004;11(3):320–341. doi: 10.1207/s15328007sem1103_2.
  35. Martin AJ, Ginns P, Burns EC, Kennett R, Munro-Smith V, Collie RJ, Pearson J. Assessing instructional cognitive load in the context of students’ psychological challenge and threat orientations: A multi-level latent profile analysis of students and classrooms. Frontiers in Psychology. 2021;12:656994. doi: 10.3389/fpsyg.2021.656994.
  36. Muthén LK, Muthén BO. Mplus statistical analysis with latent variables: User’s guide. 7th ed. Los Angeles, CA: Muthén & Muthén; 2012.
  37. Orru, G., & Longo, L. (2018, September). The evolution of cognitive load theory and the measurement of its intrinsic, extraneous and germane loads: A review. In L. Longo & M. C. Leva (Eds.), International Symposium on Human Mental Workload: Models and Applications (pp. 23–48). Springer.
  38. Paas FG, van Merriënboer JJ. Instructional control of cognitive load in the training of complex cognitive tasks. Educational Psychology Review. 1994;6(4):351–371. doi: 10.1007/BF02213420.
  39. Paas F, Tuovinen JE, Tabbers H, van Gerven PWM. Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist. 2003;38(1):63–71. doi: 10.1207/S15326985EP3801_8.
  40. Papanastasiou G, Drigas A, Skianis C, Lytras M, Papanastasiou E. Virtual and augmented reality effects on K-12, higher and tertiary education students’ twenty-first century skills. Virtual Reality. 2019;23(4):425–436. doi: 10.1007/s10055-018-0363-2.
  41. Plass JL, Homer BD, Milne C, Jordan T, Kalyuga S, Kim M, Lee H. Design factors for effective science simulations: Representation of information. International Journal of Gaming and Computer-Mediated Simulations. 2009;1(1):16–35. doi: 10.4018/jgcms.2009010102.
  42. Sahin D, Yilmaz RM. The effect of augmented reality technology on middle school students’ achievements and attitudes towards science education. Computers & Education. 2020;144:103710. doi: 10.1016/j.compedu.2019.103710.
  43. Salomon G. Television is “easy” and print is “tough”: The differential investment of mental effort in learning as a function of perceptions and attributions. Journal of Educational Psychology. 1984;76(4):647–658. doi: 10.1037/0022-0663.76.4.647.
  44. Skulmowski A, Xu KM. Understanding cognitive load in digital and online learning: A new perspective on extraneous cognitive load. Educational Psychology Review. 2022;34(1):171–196. doi: 10.1007/s10648-021-09624-7.
  45. Sung HY, Hwang GJ, Liu SY, Chiu IH. A prompt-based annotation approach to conducting mobile learning activities for architecture design courses. Computers & Education. 2014;76:80–90. doi: 10.1016/j.compedu.2014.03.015.
  46. Sweller J. Cognitive load during problem solving: Effects on learning. Cognitive Science. 1988;12(2):257–285. doi: 10.1207/s15516709cog1202_4.
  47. Thees M, Kapp S, Strzys MP, Beil F, Lukowicz P, Kuhn J. Effects of augmented reality on learning and cognitive load in university physics laboratory courses. Computers in Human Behavior. 2020;108:106316. doi: 10.1016/j.chb.2020.106316.
  48. van Gog T, Hoogerheide V, van Harsel M. The role of mental effort in fostering self-regulated learning with problem-solving tasks. Educational Psychology Review. 2020;32(4):1055–1072. doi: 10.1007/s10648-020-09544-y.
  49. Wang C, Fang T, Miao R. Learning performance and cognitive load in mobile learning: Impact of interaction complexity. Journal of Computer Assisted Learning. 2018;34(6):917–927. doi: 10.1111/jcal.12300.
  50. Wigfield A, Eccles JS. Expectancy–value theory of achievement motivation. Contemporary Educational Psychology. 2000;25(1):68–81. doi: 10.1006/ceps.1999.1015.
  51. Wu P-H, Hwang G-J, Yang M-L, Chen C-H. Impacts of integrating the repertory grid into an augmented reality-based learning design on students’ learning achievements, cognitive load, and degree of satisfaction. Interactive Learning Environments. 2018;26(2):221–234. doi: 10.1080/10494820.2017.1294608.
  52. Xie B, Salvendy G. Prediction of mental workload in single and multiple tasks environments. International Journal of Cognitive Ergonomics. 2000;4(3):213–242. doi: 10.1207/S15327566IJCE0403_3.
  53. Yang Y, Cai S, Wen Y, Li J, Jiao X. AR learning environment integrated with EIA inquiry model: Enhancing scientific literacy and reducing cognitive load of students. Sustainability. 2021;13(22):12787. doi: 10.3390/su132212787.
  54. Yeh, H.-Y., Tsai, Y.-H., Tsai, C.-C., & Chang, H.-Y. (2019). Investigating students’ conceptions of technology-assisted science learning: A drawing analysis. Journal of Science Education and Technology, 28(4), 329–340.



Articles from International Journal of Science and Mathematics Education are provided here courtesy of Nature Publishing Group
