Nursing Open. 2022 Oct 17;10(3):1545–1555. doi: 10.1002/nop2.1405

Development of generic student engagement scale in higher education: An application on healthcare students

Shuang Li 1, Stephen Wai Hang Kwok 2, Summer Cho Ngan Siu 3, Jessie Yuk Seng Chung 4, Hemio Chung Yan Lam 5, Eva Yuen Mei Tsang 6, Kam Cheong Li 7, Joanna Wing Yan Yeung 4, Simon Ching Lam 4
PMCID: PMC9912421  PMID: 36250923

Abstract

Aim

Student engagement is an important factor in the success of higher education. This study aimed to develop a Generic Student Engagement Scale (GSES) for face‐to‐face and online learning.

Design

This was a cross‐sectional psychometric study.

Methods

We tested the psychometric properties of the GSES in 451 students at the school of nursing and health studies of a local university in Hong Kong who undertook online and face‐to‐face learning between 2016 and 2018.

Results

Content validity, face validity and test–retest reliability of the GSES were satisfactory. The 29‐item GSES contains five factors, “self‐regulated learning,” “cognitive strategy use,” “experienced emotion,” “teacher–student interaction,” and “enjoyment of school life,” with good model fit. The GSES is a reliable and valid psychometric instrument for measuring student engagement in face‐to‐face and online learning among undergraduate and higher diploma students. Our results imply that student engagement can be assessed in routine practice or research by using our instrument.

Keywords: education and practice development, factor analysis, graduate nurses, information technology, psychometric testing

1. INTRODUCTION

Student engagement is the time and energy students devote to educationally sound activities (Kuh, 2003). It is regarded as a key factor that facilitates school completion and enhances motivation for achievement (Appleton et al., 2008), and as an important lever in educational reform and evaluation, mediating the influence of curricular policy and instructional reforms on students' achievements (Guthrie & Wigfield, 2000). Both Coates (2010) and Kuh (2003) suggested that student engagement should be regarded as an indicator of institutional quality in higher education, achieved by encouraging students to participate in educationally purposeful activities (Groccia, 2018). The measurement of student engagement has been an important research topic because engagement can never be taken for granted and requires assessment and promotion. An increasing number of higher education institutes have emphasized the importance of advocating student engagement, aiming to improve students' achievement across a range of learning environments and to tackle common problems in learning such as boredom, dropout and attrition (Dixson, 2015; Walji et al., 2016).

With the advancement of technology in education, new learning environments have been created that integrate online technology with face‐to‐face components. Blended learning environments incorporating e‐learning (O'Neil et al., 2013), mobile learning (Pimmer et al., 2014) and simulation (Khalaila, 2014) are nowadays common in healthcare education. The online components are flexible and offer open education for students, complementing practical skills training and problem‐solving training based on current workplace scenarios. Such blended learning has therefore been favoured by medical schools and healthcare educational institutes in recent years (Goh & Sandars, 2020): it better meets individual learning needs with timely responses, and it makes it easier to design teaching materials around realistic working environments (McCowan, 2017). The e‐learning environment gives students the flexibility to create a self‐directed learning tool adjustable to their learning pace (Shorey et al., 2018), and it can be integrated into curricula to achieve improved learning outcomes (Kowitlawakul et al., 2017). Previous studies have shown that similar academic performance could be achieved in online and face‐to‐face learning modalities (Kemp & Grieve, 2014; Yen et al., 2018), and that student engagement did not significantly differ between online and face‐to‐face learning (Brielmaier & Kuo, 2016; Butts et al., 2013).

2. BACKGROUND

In the literature, a number of psychometric scales aim to measure student engagement, but their key constructs, theoretical perspectives and educational contexts of concern vary to a certain extent (Sinatra et al., 2015), and most suffer from methodological limitations in scale development. Previous studies proposed a variety of factors or strategies that might raise students' engagement in learning, such as interpersonal relationships, problem‐based learning as a team (Amerstorfer & Freiin von Münster‐Kistner, 2021), inclusiveness, active teaching strategies (Arjomandi et al., 2018), and senses of involvement and expectation (Bowden et al., 2021). As for data analysis, a partially data‐driven approach, that is, only conducting confirmatory factor analysis (CFA; Liem & Martin, 2012; Pintrich et al., 1993) or running exploratory factor analysis (EFA) separately on each predefined group of items (Cho, 2012), might not identify distinct factors that have low correlations with others. Some other scales included items assessing students' appraisal of the school, people and academic activities (Appleton et al., 2006; NSSE, 2021). Although these appraisals might be positively associated with student engagement, such items do not directly measure students' behavioural, emotional or cognitive involvement in educational activities. There has been an attempt to modify a student engagement scale (Handelsman et al., 2005) for measuring student engagement online (Dixson, 2015); however, the factor analysis procedure was unclear, and the factor of academic performance was disputable based on previous studies (Fredricks et al., 2004). Furthermore, no existing instrument is suited to measuring student engagement in both online and face‐to‐face learning.

2.1. Research question

Our study aimed to develop a Generic Student Engagement Scale (GSES) to measure the construct in a contemporary learning environment that leverages both face‐to‐face and online learning, using a sample of students at a school of nursing and health studies. Our results will inform healthcare educators about policy and practice to promote student engagement and, in turn, positive educational outcomes.

3. THE STUDY

3.1. Design

This was a cross‐sectional psychometric study testing the reliability and validity of the GSES questionnaire in a sample of adult students in the first or second year of a higher diploma or bachelor's degree programme at the school of nursing and health studies, undertaking ordinary online and face‐to‐face learning at a local university in Hong Kong. Ordinary learning approaches for nursing students included lectures in lecture halls, tutorials in classrooms, nursing skill practice in laboratories with medical equipment, online lectures, online self‐learning modules, etc. The study contained two phases: the first comprised instrument modification and validity testing; the second examined instrument reliability and factor structure.

3.2. Method

3.2.1. Phase 1 (four steps): Instrument modification and examining validity

Step 1: Item modification and translation

Our study developed from previous work on the Distance Student Engagement Scale (DSES; Li & Yu, 2015), which was formulated after an extensive literature review based on three domains well recognized in the literature (Sinatra et al., 2015): behavioural involvement, emotional states and cognitive strategies (Fredricks et al., 2004). We obtained permission from the original author of the DSES before modification. We carefully modified the items to increase the adaptability, conciseness and comprehensibility (Liu et al., 2020) of the GSES (Appendix S1), and translated the Chinese contents into English by following an adopted protocol (Sousa & Rojjanasrirat, 2011). All 45 items of the DSES were included, of which 38 were modified.

Step 2: Examining content validity

Seven experts, including academics with experience in nursing (n = 3), higher education (n = 3) or social science (n = 1), used a four‐point ordinal scale (1 = not relevant to 4 = highly relevant) to evaluate item relevancy and a dichotomous scale (Yes/No) to rate the scale adequacy. They were also invited to comment on items with unsatisfactory ratings. The cut‐offs for the item‐level content validity index (I‐CVI) and the scale‐level CVI (S‐CVI) were set at 0.8 and 0.9 respectively, above which indicates good content validity (Polit et al., 2007; Portney & Watkins, 2015).
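To make the computation concrete, the following is a minimal sketch in R (the software used in this study) of how the I‐CVI and the averaged S‐CVI can be derived from expert relevance ratings. The `ratings` matrix is hypothetical; only the 3‐or‐4 dichotomization and the cut‐offs come from the text above.

```r
# Minimal sketch: I-CVI and S-CVI/Ave from expert ratings (hypothetical data).
set.seed(1)
ratings <- matrix(sample(2:4, 7 * 45, replace = TRUE),
                  nrow = 7)                 # 7 experts x 45 items, 1-4 relevance scale

i_cvi <- colMeans(ratings >= 3)             # proportion of experts rating an item 3 or 4
s_cvi_ave <- mean(i_cvi)                    # scale-level CVI (averaging method)

which(i_cvi < 0.8)                          # items below the 0.8 cut-off, flagged for comment
s_cvi_ave                                   # compared against the 0.9 benchmark
```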

Step 3: Examining face validity

Twenty undergraduate or higher diploma students from the school of nursing and health studies were randomly recruited to rate item comprehensibility on a dichotomous scale (Yes/No) and to rephrase each item in their own words (i.e. interpretability; Lam, 2015; Lam et al., 2017; Streiner et al., 2014). The researcher rated the accuracy and appropriateness of each rephrased item on an ordinal scale (1 = fully correct to 4 = totally wrong; Portney & Watkins, 2015). Item wordings interpretable to students were finalized after discussion between the student and the researcher.

Step 4: Standardizing term use

The research team rephrased the terms in the GSES in a uniform manner and revised the items into statements with similar usage and word order to facilitate comprehensibility among respondents. The major principles of item construction and design were (1) maintaining clarity, (2) preferring short statements, (3) avoiding double negatives, (4) avoiding double‐barrelled statements and (5) avoiding factual statements (Lam, 2015; Mishel, 1998; Streiner et al., 2014).

3.2.2. Phase 2: Instrument reliability and factor structure

Each GSES item was measured on a five‐point ordinal scale from “1 = not true at all” to “5 = absolutely true.” Scores of negatively keyed items were reversed in analysis, so a higher global score indicated better student engagement. On average, each participant spent 20 min completing the questionnaire at the university campus. The eligibility criteria for student recruitment were (1) full‐time undergraduates or higher diploma students, (2) enrolled in the school of nursing and health studies and (3) having both online and face‐to‐face learning experiences. With convenience sampling, the targeted sample size was 633 according to a similar study (Li & Yu, 2015). Data were collected between 2016 and 2018. The statistical significance level was set at .05. Analyses were done in R 4.0.3 (R Core Team, 2021) and RStudio 1.4.1103 (RStudio, 2021).
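As an illustration of the scoring rule, a minimal R sketch follows. The data frame `responses` and its column names are hypothetical; the negatively keyed items are those asterisked in the final 29‐item scale in Appendix 1.

```r
# Minimal sketch of GSES scoring (hypothetical data frame `responses`, items q1-q29).
neg_items <- c("q16", "q17", "q19")               # negatively keyed items (Appendix 1)
responses[neg_items] <- 6 - responses[neg_items]  # reverse 1-5 coding: 1<->5, 2<->4
responses$gses_total <- rowSums(responses[paste0("q", 1:29)])  # higher = more engaged
```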

Test–Retest reliability

Beyond the sample for factor analysis, another sample of 70 students was randomly recruited to complete the GSES at baseline and 4 weeks later (Lam et al., 2018). A two‐way mixed model was used to compute the intraclass correlation coefficient (ICC 3,1; Koo & Li, 2016), and the cut‐off was set at .75, above which indicates good test–retest reliability (Portney & Watkins, 2015).
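A minimal sketch of this analysis using the psych package, whose ICC function covers the two‐way mixed, single‐measure model, is shown below; the data frame `totals`, holding one column of baseline totals and one of 4‐week totals per student, is hypothetical.

```r
# Minimal sketch: test-retest reliability as ICC(3,1) (hypothetical data frame `totals`
# with columns `baseline` and `week4`, one row per student).
library(psych)

icc_res <- ICC(totals)                   # returns ICC(1,1), ICC(2,1), ICC(3,1), and k-forms
subset(icc_res$results, type == "ICC3")  # two-way mixed model, single measure
# The point estimate is judged against the .75 cut-off for good reliability.
```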

Normality, factorability and parallel analysis

Half of the sample was randomly drawn for EFA of the GSES (n = 225); the other half was used for CFA (n = 226). The normality of the EFA data was checked (RDocumentation, 2021f). A significant Bartlett's test result and a Kaiser–Meyer–Olkin measure of sampling adequacy (KMO's MSA) > 0.6 (RDocumentation, 2021d; Tabachnick & Fidell, 2007) indicate acceptable factorability of the correlation matrix. Parallel analysis was run on the Spearman correlation matrix, with eigenvalues for principal component analysis and principal axis factoring computed from 500 simulated analyses, and a scree plot was created (RDocumentation, 2021b).
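These checks map onto functions from the MVN and psych packages cited by the authors (RDocumentation, 2021b, 2021f). The sketch below shows one plausible way to run them, assuming the EFA half‐sample sits in a hypothetical data frame `efa_data` containing only the item columns.

```r
# Minimal sketch: normality, factorability and parallel analysis (hypothetical `efa_data`).
library(MVN)
library(psych)

mvn(efa_data)$multivariateNormality          # multivariate normality test
r <- cor(efa_data, method = "spearman")      # Spearman correlation matrix
cortest.bartlett(r, n = nrow(efa_data))      # significant result supports factorability
KMO(r)$MSA                                   # sampling adequacy; > 0.6 is acceptable

fa.parallel(r, n.obs = nrow(efa_data),       # scree plot with parallel analysis for
            fm = "pa", fa = "both",          # principal axis factoring and principal
            n.iter = 500)                    # components, 500 simulated analyses
```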

Exploratory factor analysis

Models were fitted with the factoring methods of minimum residual (MINRES), unweighted least squares (ULS), weighted least squares (WLS) and principal axis factoring (PA) respectively (RDocumentation, 2021c). The rotation method was direct oblimin, assuming that factors measuring the same construct could be correlated. Standardized loadings on the pattern matrix were based on the Spearman correlation analysed by ULS. Communalities, uniqueness and Hofmann's index of complexity were computed (Hofmann, 1978; Pettersson & Turkheimer, 2010). Variance explained, factor correlations and factor score adequacy, in terms of the correlation and multiple R² of factor scores with factors and the minimum correlation of possible factor scores, were evaluated.
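A minimal sketch of the ULS solution with the psych fa function (RDocumentation, 2021c) follows; `efa_data` is the hypothetical data frame from the previous sketch, and the five‐factor extraction follows the parallel analysis.

```r
# Minimal sketch: EFA with ULS factoring, direct oblimin rotation, Spearman correlations.
library(psych)
library(GPArotation)  # required for oblimin rotation

efa_fit <- fa(efa_data, nfactors = 5, rotate = "oblimin",
              fm = "uls", cor = "spearman")

print(efa_fit$loadings, cutoff = 0.3)   # pattern matrix; loadings below 0.3 suppressed
efa_fit$communality                     # h2
efa_fit$uniquenesses                    # u2
efa_fit$complexity                      # Hofmann's complexity index per item
efa_fit$Phi                             # factor correlations under oblique rotation
```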

Confirmatory factor analysis

The cut‐off of the complexity index (Hofmann, 1978) was determined empirically: once an item's complexity index reached 2.1 or above, the item loaded above 0.3 on more than one factor or on none, or had multiple loadings between 0.15 and 0.3. On the other hand, at least three items per factor should be maintained (Marsh et al., 1998; Robinson, 2018). Therefore, items with a complexity index larger than 2.1 were excluded from analysis. Each item was assigned to the factor on which its loading was the highest across all factors. The model estimator was ULS (RDocumentation, 2021a, 2021e), and the optimization method was nlminb. The exogenous latent variables were assumed to be correlated. Results were plotted with a circular layout (Figure 1; RDocumentation, 2021h). Factor loadings larger than 0.3 were considered acceptable (Hon et al., 2013; Izquierdo et al., 2014; Maskey et al., 2018; Omondi Aduda et al., 2014; Wu, 2008; Yong & Pearce, 2013). The cut‐offs of the fit indices were as follows: the comparative fit index (CFI) and Tucker–Lewis index (TLI) should be larger than 0.95 (Hu & Bentler, 1999), and the standardized root mean squared residual (SRMR; Cho et al., 2020; Hooper et al., 2008; Hu & Bentler, 1999) and the root mean square error of approximation (RMSEA; Browne & Cudeck, 1993; Lam et al., 2018, 2020; Liu et al., 2020; MacCallum et al., 1996) should both be smaller than 0.08, to indicate good model fit. A Cronbach's alpha above 0.7 (Tavakol & Dennick, 2011) and an average variance extracted (AVE; RDocumentation, 2021g) above 0.4 (Fornell & Larcker, 1981) were considered acceptable.
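The lavaan, semTools and semPlot functions cited above (RDocumentation, 2021a, 2021g, 2021h) support a workflow along the following lines. This is a sketch only: the data frame `cfa_data` is hypothetical, and the item‐to‐factor grouping is illustrative, inferred from the item contents in Appendix 1 rather than taken from the authors' exact model specification.

```r
# Minimal sketch of the CFA (hypothetical `cfa_data` with the 29 retained items q1-q29;
# the item-to-factor grouping below is illustrative, not the published specification).
library(lavaan)
library(semTools)
library(semPlot)

model <- '
  self_regulated =~ q1 + q2 + q3 + q4 + q5 + q6 + q7 + q8
  cognitive_use  =~ q9 + q10 + q11 + q12 + q13
  emotion        =~ q14 + q15 + q16 + q17 + q18 + q19
  interaction    =~ q20 + q21 + q22 + q23 + q24 + q25 + q26
  enjoyment      =~ q27 + q28 + q29
'

fit <- cfa(model, data = cfa_data, estimator = "ULS")  # latent factors correlate by default
fitMeasures(fit, c("cfi", "tli", "srmr", "rmsea"))     # judged against the stated cut-offs
reliability(fit)                                       # Cronbach's alpha and AVE per factor
semPaths(fit, what = "std", layout = "circle")         # circular plot as in Figure 1
```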

FIGURE 1. Circular plot of standardized loadings, residual variances and covariances of exogenous variables, that is, latent factors.

4. RESULTS

4.1. Sample characteristics

Initially, 665 students participated in the study with written informed consent. Then, 214 cases were removed after team discussions because of habitual responses and incomplete data; hence, the attrition rate was 32%, leaving 451 cases for analysis. The sample age ranged between 18 and 33 years (mean = 21). Females accounted for 77% of the cases (n = 348). There were 163 and 91 higher diploma students studying in years 1 and 2 respectively. For the bachelor's degree, 112 and 91 students were studying in years 1 and 2 respectively.

4.2. Scale reliability and validity

Most of the items had an I‐CVI between 0.80 and 1.00, which was satisfactory. Six items scored 0.71 (items 12, 17, 23, 28, 34 and 46), which was considered acceptable (Polit et al., 2007). Overall, the S‐CVI was satisfactory (0.92). All items received 100% ratings of comprehensibility and interpretability, which supported face validity in the targeted population. The test–retest reliability was satisfactory (ICC = .88, 95% CI [0.81–0.92], p < .001).

4.3. Normality, factorability and number of factors

Half of the cases (n = 225) were randomly drawn from the sample for EFA of the GSES, and the remaining cases were analysed in CFA. For the EFA data, the assumption of normality was not met (p < .001). Bartlett's test statistics were significant (p < .001) for the Kendall, Spearman and polychoric correlation matrices, although the polychoric matrix was non‐positive definite (NPD) and smoothing was applied (Lorenzo‐Seva & Ferrando, 2021). The MSA was satisfactory for the Kendall and Spearman correlations (>0.85) but unsatisfactory for the polychoric correlation (0.18). Parallel analysis suggested five factors and four components on the scree plot.

4.4. Exploratory factor analysis

In the EFA of the GSES, the results were close across estimation methods. Overall, the root mean square of residuals, TLI and RMSEA were 0.046, 0.816 and 0.053 (90% CI = [0.048, 0.058]) respectively. The empirical chi‐square of WLS was the highest (945.96), and the values of MINRES and ULS were the lowest (945.46). The loadings are shown in Table 1. When the communalities were higher, the uniqueness was lower; however, the complexity index was correlated with neither communalities nor uniqueness. The higher the complexity index, the more obvious the cross‐loading (Osborne et al., 2008). The GSES contains five factors: 1 = “self‐regulated learning,” 2 = “cognitive strategy use,” 3 = “experienced emotion,” 4 = “teacher–student interaction,” and 5 = “enjoyment of school life.” The proportional variances explained by factors 1 to 5 were 10, 10, 7, 8 and 4 per cent respectively. The factor correlations across factors 1 to 4 ranged between 0.32 and 0.43; however, their correlations with factor 5 were less than 0.3 (0.06–0.22). The factor score adequacy of the first four factors, in terms of the correlation of regression scores with factors (0.90–0.94), the multiple R² of scores with factors (0.80–0.88) and the minimum correlation of possible factor scores (0.61–0.75), was better than that of factor 5, whose corresponding values were 0.88, 0.77 and 0.53.

TABLE 1.

Standardized loadings (pattern matrix) based upon correlation matrix in EFA

Item Factor 1 Factor 2 Factor 3 Factor 4 Factor 5 h2 u2 c
1 .19 .81 2.4
2 .41 .24 .76 2.7
3 .32 .19 .81 1.7
4 .55 .34 .66 1.0
5 .37 .33 .36 .64 2.6
6 .55 .46 .54 1.7
7 .58 .43 .57 1.1
8 .61 .41 .59 1.0
9 .42 .21 .79 1.1
10 .32 .36 .64 2.9
11 .43 .31 .69 1.4
12 .15 .85 2.5
13 .48 .38 .62 1.7
14 .55 .39 .61 1.1
15 .55 .37 .63 1.4
16 .47 .49 .51 1.9
17 .31 .53 .65 .35 2.1
18 .34 .43 .45 .55 2.3
19 .39 .24 .76 2.0
20 .41 .59 3.5
21 .31 .69 3.9
22 .27 .73 3.8
23 .62 .42 .58 1.1
24 .33 .29 .71 2.1
25 .57 .37 .63 1.3
26 .49 −.36 .58 .42 2.8
27 .33 −.33 .33 .67 3.4
28 .68 .48 .52 1.1
29 .34 .35 .65 3.2
30 .50 .51 .49 2.0
31 .54 .49 .51 1.3
32 .75 .64 .36 1.0
33 .68 .50 .50 1.1
34 .42 .34 .66 1.5
35 .25 .75 3.0
36 .39 .61 3.0
37 .35 .33 .67 2.0
38 .40 .32 .41 .59 2.6
39 .37 .39 .61 2.9
40 .47 .49 .51 1.9
41 .60 .50 .50 1.3
42 .73 .59 .41 1.1
43 .59 .48 .52 1.2
44 .63 .53 .47 1.7
45 .65 .45 .55 1.1

Note: Loadings with absolute values smaller than 0.3 were suppressed. Bold indicates the highest loading of an item where c ≤ 2.1. Abbreviations: c, complexity of factor loadings; h2, communalities; u2, uniqueness.

4.5. Confirmatory factor analysis

In the CFA of the 29‐item GSES, parameter estimation converged after 59 iterations. The model chi‐square was 635.45 (df = 367), compared with 9,781.24 (df = 406) for the baseline model, that is, the null model estimating means and variances. The CFI, TLI, Bentler's SRMR and RMSEA were 0.971, 0.968, 0.078 and 0.057 (90% CI = [0.050, 0.064]; p = .062) respectively. Overall, the fit indices of the instrument were satisfactory. From the results, items related to emotion management and metacognition were explained by factor 1 (Figure 1). Items associated with memory and summarization skills were indicators of factor 2. Items on resource management were found in the first two factors. Factor 3 contained items on curiosity and feelings of boredom. Factor 4 comprised items on participation, persistence and interaction. Indicators associated with happiness and sense of belonging were explained by factor 5. The standardized factor loadings in the first three factors ranged from 0.55 to 0.77. The loadings of factor 4 ranged between 0.42 and 0.64. In factor 5, when the loading of the first indicator was constrained to 1, the standardized loadings of the other two items were 0.79 and 0.48 respectively. The loading of item 19 was the smallest. Regarding standardized covariances, the smallest in magnitude was that between factors 2 and 5, and the largest was that between factors 1 and 4. As to variances, item 16 had the smallest (below 0.1), and the second smallest was that of factor 4 (0.35). The variance of factor 5 was the largest (above 0.9), and the second largest came from item 7 (0.83). The reliability measures of the first three factors of the 29‐item GSES were acceptable, with Cronbach's alpha ranging between 0.83 and 0.88 (factors 1 to 3 = 0.88, 0.85, 0.83) and AVE ranging from 0.45 to 0.53 (factors 1 to 3 = 0.48, 0.53, 0.45). The coefficients of factor 5 were good (Cronbach's alpha = 0.80, AVE = 0.61). However, the results for factor 4 were marginal according to the criteria (Cronbach's alpha = 0.76, AVE = 0.31).

5. DISCUSSION

5.1. Evidence in context

Our study aimed to develop the GSES (Appendix 1) to measure student engagement in face‐to‐face and online learning in a sample of students at a school of nursing and health studies. The factor “self‐regulated learning,” or reflection on learning progress, was similar to the “Adaptive Behaviour” factor involving planning, task management and persistence in the Motivation and Engagement Scale (Liem & Martin, 2012). The factor “cognitive strategy use” was related to the educational indicators of “Reflective & Integrative Learning,” “Learning Strategies” and “Quantitative Reasoning” in the US National Survey of Student Engagement (NSSE, 2021). The indicators of “collaborative learning” and “student–faculty interaction” in NSSE (2021) were similar to the GSES factor “teacher–student interaction,” associated with students' interaction with teachers and fellow students; however, we also found items on reserving time for study and participating in optional learning activities in this factor. “Teacher–student interaction” was not explicit in other instruments tested on either adolescents (Appleton et al., 2006) or undergraduates (NSSE, 2021). Regarding the emotional domain, we recognized the factors “experienced emotion” and “enjoyment of school life,” which had the highest covariance among all factors. While these factors were not addressed in NSSE (2021), the Student Engagement Instrument (Appleton et al., 2006) contained some items, such as enjoying talking to teachers and students, which might indicate enjoyment of school life, but these items belonged to the factors “teacher–student relationship” and “peer support for learning.” Liem and Martin (2012) included items assessing students' feelings and emotions towards schoolwork, but these items were categorized into “adaptive cognition” or “impeding cognition,” either facilitating or impeding learning, in their non‐data‐driven theoretical framework of motivation. Compared with other student engagement scales, the GSES provides a more comprehensive assessment of student engagement, addressing the behavioural, cognitive and also emotional domains.

5.2. Added value and implications

In this study, we balanced time and workload in designing the item structure and response modality (Lam, 2015; Mishel, 1998) and obtained satisfactory results. We recognized factors such as “self‐regulated learning” and “cognitive strategy use,” which fall into the behavioural and cognitive engagement domains. In the GSES, the factor “teacher–student interaction” captures not only behaviour but also an attitude of proactiveness, reflected in the time and resources used for study preparation. Previous work highlighted independence, autonomy and freedom of learning in student engagement (Christenson et al., 2012). Self‐regulation and reflection should be focus areas in higher education to improve education quality (Chen et al., 2019; Hew, 2016). Our key findings will undergo careful translation into clear policy, educational practice and course units to improve learning in higher education. Subgroup analyses in terms of age, gender and other important characteristics could be future research directions. Concerning the emotional factors, “experienced emotion” and “enjoyment of school life” are significant to achieving positive learning outcomes. Review papers reported that emotions could affect academic achievement, possibly mediated by cognitive processes and school peer relationships (Carmona‐Halty et al., 2021; MacCann et al., 2020; Valiente et al., 2012). A cross‐sectional study showed that better emotional functioning was associated with higher academic achievement (Sadeghi Bahmani et al., 2018). Teaching in higher education, which incorporates information technology and equipment, has also advanced (Goh & Sandars, 2020). The COVID‐19 pandemic and lockdowns changed the dynamics of the learning environment, in which the integration of online learning and traditional education has become more popular (Basilaia & Kvavadze, 2020). The GSES allows cross‐method comparison between online and face‐to‐face learning, which fits the trend of blended learning in healthcare education in the 21st century. With a new and better understanding of students' engagement, policy should be formulated to seize new opportunities for raising healthcare education quality in the new normal.

5.3. Future directions

In this study, we offered explicit descriptions and justifications of the item modifications, which could facilitate adoption of the scale and multiple‐group comparisons in the future (Lam, 2015; Lam et al., 2017; Portney & Watkins, 2015). Moreover, a larger sample size and a higher number of categories on the ordinal scale would favour the implementation of parametric methods and the interpretation of the commonly used fit measures (Jia & Wu, 2019; Li, 2016; Rhemtulla et al., 2012). If research conditions allow, random sampling could increase the representativeness of the sample and the generalizability of the results in the future.

5.4. Limitations

Thirty‐two per cent of the recruited cases were excluded from analysis due to habitual responses and incomplete data. The reasons for missing data could not be explored because the survey was anonymous. Concerning generalizability, we recognized that the modes of online and face‐to‐face learning, as well as sample characteristics, could vary between institutions and locations. As for the sample size in factor analysis, de Winter et al. (2009) reported that the sample size in EFA could be as small as 50, and Wolf et al. (2013) argued that there is no rule of thumb for sample size in CFA; we nevertheless decided to divide the sample into two halves for possible comparison in terms of sample size and fit measures.

6. CONCLUSION

This study developed the 29‐item GSES to measure student engagement in a sample of students at a school of nursing and health studies. The EFA identified latent factors such as “self‐regulated learning” and “cognitive strategy use,” which are consistent with the literature; the factor “teacher–student interaction,” which is a new finding; and the factors “experienced emotion” and “enjoyment of school life,” which had not been emphasized in relevant instruments previously. The construct validity of the GSES was supported by good fit indices in the CFA. Validity, factor reliability and test–retest reliability were satisfactory. The limitations concerning missing data and generalizability, as well as the implications of tailoring engagement promotion around the factors identified in this study, shed light on future research directions.

AUTHOR CONTRIBUTIONS

SCL, SL, SCNS, JYSC, HCYL, EYMT and KCL conceived the study. SCL, SL, SCNS and SWHK performed data analysis and drafted the manuscript. JYSC, HCYL and EYMT collected the data. SCL, KCL, JWYY and SWHK helped to revise the manuscript. All authors have read and approved the final version of the manuscript.

All authors have agreed on the final version and meet at least one of the following criteria [recommended by the ICMJE (http://www.icmje.org/recommendations/)]:

  • substantial contributions to conception and design, acquisition of data or analysis and interpretation of data;

  • drafting the article or revising it critically for important intellectual content.

FUNDING INFORMATION

This work was supported by the National Natural Science Foundation of China (NSFC) [Grant No. 61977011]. The funding source had no role in the study.

CONFLICT OF INTEREST

No conflict of interest to declare.

ETHICS STATEMENT

All the research meets the ethical guidelines, including adherence to the legal requirements of the study country. Ethical approval was obtained from the Ethical Review Committee regarding Human Research, The Open University of Hong Kong (Ref: HE15Mar2016‐URC201601). The study conforms to the recognized standard of the Declaration of Helsinki.

Supporting information

Appendix S1

APPENDIX 1. 29‐item generic student engagement scale

Please respond to the following questions based on your learning experience.

Note: “Teachers or students” in the following refers to people whom you come into contact with during the learning process.

No. Items (response options: 1 = not true at all, 2 = not exactly true, 3 = maybe, 4 = somewhat true, 5 = absolutely true)
1 I can manage my study time well 1 2 3 4 5
2 When I feel a bit low emotionally during the learning process, I look for ways to regain my interest and enjoyment in learning 1 2 3 4 5
3 I always reflect on what I have learned and how I have grown during a learning process 1 2 3 4 5
4 At the beginning of a learning process, I always make a reasonable study plan 1 2 3 4 5
5 I try to follow my study plans with regular reviews on the progress 1 2 3 4 5
6 When learning does not go well, I reflect repeatedly on my learning targets and strategies to see if any adjustments need to be made 1 2 3 4 5
7 I always deduce conclusions about effective learning strategies 1 2 3 4 5
8 I regularly review my learning outcomes and analyse my learning problems 1 2 3 4 5
9 I always make use of memorization strategies to study during the course and review before exams (e.g. learning by rote, using images and mind maps) 1 2 3 4 5
10 In the learning process, I always try to connect what I have just learned with my existing knowledge 1 2 3 4 5
11 In the learning process, I try to find real‐life examples to enhance my understanding of important concepts 1 2 3 4 5
12 In the learning process, I try to summarize the contents in my own words 1 2 3 4 5
13 I try to arrange for a comfortable environment for my studying 1 2 3 4 5
14 I always feel curious about the course contents that I'm going to learn. 1 2 3 4 5
15 I always look forward to the upcoming course activities 1 2 3 4 5
16 I am not too interested in the course contents* 1 2 3 4 5
17 I always feel bored by the course contents during the learning process* 1 2 3 4 5
18 I like learning through online platforms (e.g. Online Learning Environment) 1 2 3 4 5
19 I feel bored while doing the assignments* 1 2 3 4 5
20 I reserve enough time to complete the course learning tasks 1 2 3 4 5
21 Even when not required, I participate in activities that might be useful for the course (e.g. self quizzes, talks and peer discussions) 1 2 3 4 5
22 I still work hard to make sure I have enough time for my studies even during stressful times of my work and life 1 2 3 4 5
23 I share my views and resources with my teachers or students 1 2 3 4 5
24 I actively respond to questions and calls for help from teachers or students 1 2 3 4 5
25 I always discuss extra‐curricular matters with my teachers or students 1 2 3 4 5
26 I participate actively in group learning activities (e.g. group discussion) 1 2 3 4 5
27 I feel happy when taking part in learning activities 1 2 3 4 5
28 I feel happy when sharing ideas with my classmates 1 2 3 4 5
29 I am willing to take part in student activities organized by the school or student organizations 1 2 3 4 5

*Negatively keyed items. © “REDACTED”, reproduced with permission of the copyright owners. Authors retain the copyright of the GSES, and reproduction of GSES is available with authors' permission only.

Li, S. , Kwok, S. W. H. , Siu, S. C. N. , Chung, J. Y. S. , Lam, H. C. Y. , Tsang, E. Y. M. , Li, K. C. , Yeung, J. W. Y. , & Lam, S. C. (2023). Development of generic student engagement scale in higher education: An application on healthcare students. Nursing Open, 10, 1545–1555. 10.1002/nop2.1405

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available from the corresponding author upon reasonable request.

REFERENCES

  1. Amerstorfer, C. M., & Freiin von Münster‐Kistner, C. (2021). Student perceptions of academic engagement and student‐teacher relationships in problem‐based learning. Frontiers in Psychology, 12, 713057. 10.3389/fpsyg.2021.713057
  2. Appleton, J. J., Christenson, S. L., & Furlong, M. J. (2008). Student engagement with school: Critical conceptual and methodological issues of the construct. Psychology in the Schools, 45(5), 369–386. 10.1002/pits.20303
  3. Appleton, J. J., Christenson, S. L., Kim, D., & Reschly, A. L. (2006). Measuring cognitive and psychological engagement: Validation of the student engagement instrument. Journal of School Psychology, 44(5), 427–445. 10.1016/j.jsp.2006.04.002
  4. Arjomandi, A., Seufert, J., O'Brien, M., & Anwar, S. (2018). Active teaching strategies and student engagement: A comparison of traditional and non‐traditional business students. e‐Journal of Business Education & Scholarship of Teaching, 12(2), 120–140.
  5. Basilaia, G., & Kvavadze, D. (2020). Transition to online education in schools during a SARS‐CoV‐2 coronavirus (COVID‐19) pandemic in Georgia. Pedagogical Research, 5(4), 1–9. 10.29333/pr/7937
  6. Bowden, J. L.‐H., Tickle, L., & Naumann, K. (2021). The four pillars of tertiary student engagement and success: A holistic measurement approach. Studies in Higher Education (Dorchester‐on‐Thames), 46(6), 1207–1224. 10.1080/03075079.2019.1672647
  7. Brielmaier, J., & Kuo, Y.‐Y. (2016). A comparison of student engagement in the online vs. face‐to‐face environment. Cultivating Creative and Reflective Learners, 8. 10.13021/G80S3P
  8. Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In Bollen K. A. & Long J. S. (Eds.), Testing structural equation models (pp. 136–162). Sage.
  9. Butts, F., Heidorn, B., & Mosier, B. (2013). Comparing student engagement in online and face‐to‐face instruction in health and physical education teacher preparation. Journal of Education and Learning, 2(2), 8–13. 10.5539/jel.v2n2p8
  10. Carmona‐Halty, M., Salanova, M., Llorens, S., & Schaufeli, W. B. (2021). Linking positive emotions and academic performance: The mediated role of academic psychological capital and academic engagement. Current Psychology, 40(6), 2938–2947. 10.1007/s12144-019-00227-8
  11. Chen, J. H., Björkman, A., Zou, J. H., & Engström, M. (2019). Self‐regulated learning ability, metacognitive ability, and general self‐efficacy in a sample of nursing students: A cross‐sectional and correlational study. Nurse Education in Practice, 37, 15–21. 10.1016/j.nepr.2019.04.014
  12. Cho, G., Hwang, H., Sarstedt, M., & Ringle, C. M. (2020). Cutoff criteria for overall model fit indexes in generalized structured component analysis. Journal of Marketing Analytics, 8(4), 189–202. 10.1057/s41270-020-00089-1
  13. Cho, M.‐H. (2012). Factor validity of the motivated strategies for learning questionnaire in asynchronous online learning environment. Journal of Interactive Learning Research, 23(1), 5.
  14. Christenson, S. L., Reschly, A. L., & Wylie, C. (2012). Handbook of research on student engagement. Springer Science & Business Media.
  15. Coates, H. (2010). Development of the Australasian survey of student engagement (AUSSE). Higher Education, 60(1), 1–17. 10.1007/s10734-009-9281-2
  16. de Winter, J. C., Dodou, D., & Wieringa, P. A. (2009). Exploratory factor analysis with small sample sizes. Multivariate Behavioral Research, 44(2), 147–181. 10.1080/00273170902794206
  17. Dixson, M. D. (2015). Measuring student engagement in the online course: The online student engagement scale (OSE). Online Learning Journal, 19(4), n4. 10.24059/olj.v19i4.561
  18. Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. 10.2307/3151312
  19. Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59–109. 10.3102/00346543074001059
  20. Goh, P., & Sandars, J. (2020). A vision of the use of technology in medical education after the COVID‐19 pandemic. MedEdPublish, 9(1), 49. 10.15694/mep.2020.000049.1
  21. Groccia, J. E. (2018). What is student engagement? New Directions for Teaching and Learning, 2018(154), 11–20. 10.1002/tl.20287
  22. Guthrie, J. T., & Wigfield, A. (2000). Engagement and motivation in reading. In Kamil M. L., Mosenthal P. B., David Pearson P., & Barr R. (Eds.), Handbook of reading research (Vol. III, pp. 403–422). Taylor & Francis Group.
  23. Handelsman, M. M., Briggs, W. L., Sullivan, N., & Towler, A. (2005). A measure of college student course engagement. The Journal of Educational Research, 98(3), 184–192. 10.3200/JOER.98.3.184-192
  24. Hew, K. F. (2016). Promoting engagement in online courses: What strategies can we learn from three highly rated MOOCS. British Journal of Educational Technology, 47(2), 320–341. 10.1111/bjet.12235
  25. Hofmann, R. J. (1978). Complexity and simplicity as objective indices descriptive of factor solutions. Multivariate Behavioral Research, 13(2), 247–250. 10.1207/s15327906mbr1302_9
  26. Hon, C. K. H., Chan, A. P. C., & Yam, M. C. H. (2013). Determining safety climate factors in the repair, maintenance, minor alteration, and addition sector of Hong Kong. Journal of Construction Engineering and Management, 139(5), 519–528. 10.1061/(ASCE)CO.1943-7862.0000588
  27. Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modelling: Guidelines for determining model fit. Electronic Journal of Business Research Methods, 6(1), 53–60. 10.21427/D7CF7R
  28. Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. 10.1080/10705519909540118
  29. Izquierdo, I., Olea, J., & Abad, F. J. (2014). Exploratory factor analysis in validation studies: Uses and recommendations. Psicothema, 26(3), 395–400. 10.7334/psicothema2013.349
  30. Jia, F., & Wu, W. (2019). Evaluating methods for handling missing ordinal data in structural equation modeling. Behavior Research Methods, 51(5), 2337–2355. 10.3758/s13428-018-1187-4
  31. Kemp, N., & Grieve, R. (2014). Face‐to‐face or face‐to‐screen? Undergraduates' opinions and test performance in classroom vs. online learning. Frontiers in Psychology, 5, 1278. 10.3389/fpsyg.2014.01278
  32. Khalaila, R. (2014). Simulation in nursing education: An evaluation of students' outcomes at their first clinical practice combined with simulations. Nurse Education Today, 34(2), 252–258. 10.1016/j.nedt.2013.08.015
  33. Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163. 10.1016/j.jcm.2016.02.012
  34. Kowitlawakul, Y., Chan, M. F., Tan, S. S., Soong, A. S., & Chan, S. W. (2017). Development of an e‐learning research module using multimedia instruction approach. CIN: Computers, Informatics, Nursing, 35(3), 158–168. 10.1097/cin.0000000000000306
  35. Kuh, G. D. (2003). What we're learning about student engagement from NSSE: Benchmarks for effective educational practices. Change: The Magazine of Higher Learning, 35(2), 24–32. 10.1080/00091380309604090
  36. Lam, C. (2015). Development and validation of a quality of life instrument for older Chinese people in residential care homes (PhD thesis). The Chinese University of Hong Kong.
  37. Lam, S. C., Chan, Z. S., Chong, A. C., Wong, W. W., & Ye, J. (2018). Adaptation and validation of Richmond compulsive buying scale in Chinese population. Journal of Behavioral Addictions, 7(3), 760–769. 10.1556/2006.7.2018.94
  38. Lam, S. C., Chong, A. C., Chung, J. Y. S., Lam, M. Y., Chan, L. M., Shum, C. Y., Wong, E. Y. N., Mok, Y. M., Lam, M. T., Chan, M. M., & Cheung, J. H. M. (2020). Methodological study on the evaluation of face mask use scale among public adult: Cross‐language and psychometric testing. Korean Journal of Adult Nursing, 32(1), 46–56. 10.7475/kjan.2020.32.1.46
  39. Lam, S. C., Yeung, C. C. Y., Chan, J. H. M., Lam, D. W. C., Lam, A. H. Y., Annesi‐Maesano, I., & Bousquet, J. (2017). Adaptation of the score for allergic rhinitis in the Chinese population: Psychometric properties and diagnostic accuracy. International Archives of Allergy and Immunology, 173(4), 213–224. 10.1159/000477727
  40. Li, C.‐H. (2016). Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood and diagonally weighted least squares. Behavior Research Methods, 48(3), 936–949. 10.3758/s13428-015-0619-7
  41. Li, S., & Yu, C. (2015). Development and implementation of distance students engagement scale. Open Education Research, 21(6), 62–70. 10.13966/j.cnki.kfjyyj.2015.06.007
  42. Liem, G. A. D., & Martin, A. J. (2012). The motivation and engagement scale: Theoretical framework, psychometric properties, and applied yields. Australian Psychologist, 47(1), 3–13. 10.1111/j.1742-9544.2011.00049.x
  43. Liu, T. W., Lam, S. C., Chung, M. H., & Ho, K. H. M. (2020). Adaptation and psychometric testing of the hoarding rating scale (HRS): A self‐administered screening scale for epidemiological study in Chinese population. BMC Psychiatry, 20(1), 159. 10.1186/s12888-020-02539-7
  44. Lorenzo‐Seva, U., & Ferrando, P. J. (2021). Not positive definite correlation matrices in exploratory item factor analysis: Causes, consequences and a proposed solution. Structural Equation Modeling, 28(1), 138–147. 10.1080/10705511.2020.1735393
  45. MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130–149. 10.1037/1082-989X.1.2.130
  46. MacCann, C., Jiang, Y., Brown, L. E. R., Double, K. S., Bucich, M., & Minbashian, A. (2020). Emotional intelligence predicts academic performance: A meta‐analysis. Psychological Bulletin, 146(2), 150–186. 10.1037/bul0000219
  47. Marsh, H. W., Hau, K.‐T., Balla, J. R., & Grayson, D. (1998). Is more ever too much? The number of indicators per factor in confirmatory factor analysis. Multivariate Behavioral Research, 33(2), 181–220. 10.1207/s15327906mbr3302_1
  48. Maskey, R., Fei, J., & Nguyen, H.‐O. (2018). Use of exploratory factor analysis in maritime research. The Asian Journal of Shipping and Logistics, 34(2), 91–111. 10.1016/j.ajsl.2018.06.006
  49. McCowan, T. (2017). Higher education, unbundling, and the end of the university as we know it. Oxford Review of Education, 43(6), 733–748. 10.1080/03054985.2017.1343712
  50. Mishel, M. H. (1998). Methodological studies: Instrument development. In Brink P. J. & Wood M. J. (Eds.), Advanced design in nursing research (2nd ed., pp. 235–282). Sage Publications.
  51. NSSE. (2021). Evidence‐based improvement in higher education. https://nsse.indiana.edu/
  52. O'Neil, C. A., Fisher, C. A., & Rietschel, M. J. (2013). Developing online learning environments in nursing education (3rd ed.). Springer US.
  53. Omondi Aduda, D. S., Ouma, C., Onyango, R., Onyango, M., & Bertrand, J. (2014). Systematic monitoring of male circumcision scale‐up in Nyanza, Kenya: Exploratory factor analysis of service quality instrument and performance ranking. PLoS One, 9(7), e101235. 10.1371/journal.pone.0101235
  54. Osborne, J. W., Costello, A. B., & Kellow, J. T. (2008). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. In Osborne J. W. (Ed.), Best practices in quantitative methods (pp. 86–99). SAGE Publications Inc.
  55. Pettersson, E., & Turkheimer, E. (2010). Item selection, evaluation, and simple structure in personality data. Journal of Research in Personality, 44(4), 407–420. 10.1016/j.jrp.2010.03.002
  56. Pimmer, C., Brysiewicz, P., Linxen, S., Walters, F., Chipps, J., & Gröhbiel, U. (2014). Informal mobile learning in nurse education and practice in remote areas: A case study from rural South Africa. Nurse Education Today, 34(11), 1398–1404. 10.1016/j.nedt.2014.03.013
  57. Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801–813. 10.1177/0013164493053003024
  58. Polit, D. F., Beck, C. T., & Owen, S. V. (2007). Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Research in Nursing and Health, 30(4), 459–467. 10.1002/nur.20199
  59. Portney, L. G., & Watkins, M. P. (2015). Foundations of clinical research: Applications to practice (4th ed.). Prentice Hall Health.
  60. R Core Team. (2021). R: A language and environment for statistical computing. The R Development Core Team. https://www.R‐project.org/
  61. RDocumentation. (2021a). cfa: Fit confirmatory factor analysis models. https://www.rdocumentation.org/packages/lavaan/versions/0.6‐9/topics/cfa
  62. RDocumentation. (2021b). fa.parallel: Scree plots of data or correlation matrix compared to random "parallel" matrices. https://www.rdocumentation.org/packages/psych/versions/2.1.6/topics/fa.parallel
  63. RDocumentation. (2021c). fa: Exploratory factor analysis using MinRes (minimum residual) as well as EFA by principal axis, weighted least squares or maximum likelihood. https://www.rdocumentation.org/packages/psych/versions/2.1.6/topics/fa
  64. RDocumentation. (2021d). FACTORABILITY: Factorability of a correlation matrix. https://www.rdocumentation.org/packages/EFA.dimensions/versions/0.1.7.2/topics/FACTORABILITY
  65. RDocumentation. (2021e). lavOptions: Lavaan options. https://www.rdocumentation.org/packages/lavaan/versions/0.6‐9/topics/lavOptions
  66. RDocumentation. (2021f). mvn: Multivariate normality tests. https://www.rdocumentation.org/packages/MVN/versions/5.8/topics/mvn
  67. RDocumentation. (2021g). reliability: Calculate reliability values of factors. https://www.rdocumentation.org/packages/semTools/versions/0.5‐4/topics/reliability
  68. RDocumentation. (2021h). semPaths: Plot path diagram for SEM models. https://www.rdocumentation.org/packages/semPlot/versions/1.1.2/topics/semPaths
  69. Rhemtulla, M., Brosseau‐Liard, P. É., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods, 17(3), 354–373. 10.1037/a0029315
  70. Robinson, M. A. (2018). Using multi‐item psychometric scales for research and practice in human resource management. Human Resource Management, 57(3), 739–750. 10.1002/hrm.21852
  71. RStudio. (2021). RStudio desktop. https://www.rstudio.com/products/rstudio/
  72. Sadeghi Bahmani, D., Faraji, P., Faraji, R., Lang, U. E., Holsboer‐Trachsler, E., & Brand, S. (2018). Is emotional functioning related to academic achievement among university students? Results from a cross‐sectional Iranian sample. Revista Brasileira de Psiquiatria, 40(3), 290–295. 10.1590/1516-4446-2017-2434
  73. Shorey, S., Kowitlawakul, Y., Devi, M. K., Chen, H. C., Soong, S. K. A., & Ang, E. (2018). Blended learning pedagogy designed for communication module among undergraduate nursing students: A quasi‐experimental study. Nurse Education Today, 61, 120–126. 10.1016/j.nedt.2017.11.011
  74. Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges of defining and measuring student engagement in science. Educational Psychologist, 50(1), 1–13. 10.1080/00461520.2014.1002924
  75. Sousa, V. D., & Rojjanasrirat, W. (2011). Translation, adaptation and validation of instruments or scales for use in cross‐cultural health care research: A clear and user‐friendly guideline. Journal of Evaluation in Clinical Practice, 17(2), 268–274. 10.1111/j.1365-2753.2010.01434.x
  76. Streiner, D. L., Norman, G. R., & Cairney, J. (2014). Health measurement scales: A practical guide to their development and use (5th ed.). Oxford University Press.
  77. Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics. Allyn & Bacon, Pearson.
  78. Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach's alpha. International Journal of Medical Education, 2, 53–55. 10.5116/ijme.4dfb.8dfd
  79. Valiente, C., Swanson, J., & Eisenberg, N. (2012). Linking students' emotions and academic achievement: When and why emotions matter. Child Development Perspectives, 6(2), 129–135. 10.1111/j.1750-8606.2011.00192.x
  80. Walji, S., Deacon, A., Small, J., & Czerniewicz, L. (2016). Learning through engagement: MOOCs as an emergent form of provision. Distance Education, 37(2), 208–223. 10.1080/01587919.2016.1184400
  81. Wolf, E. J., Harrington, K. M., Clark, S. L., & Miller, M. W. (2013). Sample size requirements for structural equation models: An evaluation of power, bias, and solution propriety. Educational and Psychological Measurement, 76(6), 913–934. 10.1177/0013164413495237
  82. Wu, A. D. L. (2008). Pratt's importance measures in factor analysis: A new technique for interpreting oblique factor models (PhD thesis). The University of British Columbia, Vancouver.
  83. Yen, S.‐C., Lo, Y., Lee, A., & Enriquez, J. (2018). Learning online, offline, and in‐between: Comparing student academic outcomes and course satisfaction in face‐to‐face, online, and blended teaching modalities. Education and Information Technologies, 23(5), 2141–2153. 10.1007/s10639-018-9707-5
  84. Yong, A. G., & Pearce, S. (2013). A beginner's guide to factor analysis: Focusing on exploratory factor analysis. Tutorial in Quantitative Methods for Psychology, 9(2), 79–94. 10.20982/tqmp.09.2.p079


