Heliyon. 2023 Sep 1;9(9):e19538. doi: 10.1016/j.heliyon.2023.e19538

An empirical study on English preservice teachers’ digital competence regarding ICT self-efficacy, collegial collaboration and infrastructural support

Wuyun Dai 1
PMCID: PMC10558726  PMID: 37809426

Abstract

This study investigates English preservice teachers’ digital competence in relation to their self-efficacy in information and communication technologies (ICT), their collaboration with colleagues, and the infrastructural support they receive. A questionnaire based on the “Digital competence of educators (DigCompEdu)” framework was used in this research. Structural equation modeling (SEM) was applied to verify the hypothesized model using data obtained from 425 graduate students majoring in English pedagogy. The study produced four main findings: (1) English preservice teachers’ ICT self-efficacy has strong or moderate positive associations with their perceptions of collegial collaboration, infrastructural support and digital competence; (2) the association between participants’ perceptions of collegial collaboration and digital competence is statistically equivalent to the association between their ICT self-efficacy and digital competence; (3) English preservice teachers’ perceptions of infrastructural support are positively associated with their digital competence, but this association is considerably weaker than the former two; and (4) further study is needed, as the model explained only 66% of the variance in collegial collaboration, 44% in infrastructural support, and 78% in digital competence.

Keywords: Digital competence, Self-efficacy, English preservice teachers, Structural equation modeling, Collegial collaboration, Infrastructural support

1. Introduction

The unprecedented global pandemic has accelerated the integration of digital technologies into all walks of life [1]. Despite the apparent revolution in information technology in modern society, the general population has not mastered modern technologies as well as one might expect [1]. This is especially true of English preservice teachers, who face growing demands to continuously improve their digital competence in order to meet the challenges posed by the so-called Society of Knowledge [2]. It is therefore timely to investigate English preservice teachers’ digital competence.

In China, the central government and the Ministry of Education have promulgated a range of policies aimed at enhancing teachers’ digital proficiency. The document <Opinions of the CPC central committee and the state council on comprehensively deepening the construction of teaching staff in the new era> explicitly underscores the importance of enhancing teachers’ information literacy and encouraging them to proactively adapt to technological modernizations such as informatization and artificial intelligence [3]. In that spirit, the Chinese Ministry of Education released the <Education informatization action plan 2.0> in 2018, which highlights the significance of fostering information literacy and digital competence among students in normal schools [4]. Clearly, the Chinese government considers it crucial for all teachers to be digitally competent.

Digital technology has become indispensable in language teaching in China [1,5,6]. However, the digital competence of Chinese English teachers, especially preservice teachers, is still underresearched [1,5,6]. There has been some limited research on Chinese English teachers’ digital competence; see, for example, Tang’s analysis of three reports regarding the development of digital competence among vocational teachers issued by UNESCO-UNEVOC (United Nations Educational, Scientific and Cultural Organization-International Centre for Technical and Vocational Education and Training) in the last three years (2020–2022) [7], Li’s empirical research on college teachers’ digital competence development [8], and Zheng’s investigation of the construction and application of a digital competence model for K12 teachers in a Chinese education context [6]. Beyond these few studies, however, published literature addressing this particular topic remains conspicuously scarce.

Tang’s analysis led him to assert that common challenges related to the digital competence of vocational teachers necessitate global attention; examples include negative perceptions of digital teaching, limited enthusiasm for participating in professional training, insufficient qualifications to work as independent vocational teachers, and an uneven distribution of high-quality resources, among other notable issues [7]. Li’s investigation of 277 college teachers revealed that the majority of participants did not demonstrate a satisfactory level of digital competence across various aspects. Specifically, less than half of the teachers achieved a basic level of competence in areas such as employing effective teaching methods, adopting high-tech devices and promoting students’ digital competence in the curriculum-based learning process, the exception being a slight advantage in the number of teachers with basic digital competence in class management [8]. Zheng [6] conducted a large-scale survey based on his model of teachers’ digital competence among 10,054 teachers from two Chinese cities, Shanghai and Shangrao. The survey yielded significant findings, which can be summarized as follows: first, the digital competence level of teachers in China is generally low and requires urgent improvement; second, teachers with better job performance demonstrate higher levels of digital competence; third, digital technology competence can be developed and improved more easily and more quickly than digital teaching capacity, digital learning and innovation, digital values and pursuits, and primary personality traits, which justifies continuous emphasis on these latter four areas. Overall, Zheng’s survey findings shed light on the current state of teachers’ digital competence, highlighting the need for improvement and the specific areas that warrant attention and development.

Previous studies such as those mentioned above make it clear that teachers worldwide require enhanced digital competence. This necessitates addressing teachers’ negative perceptions of digital teaching, fostering enthusiasm for professional training, maintaining a well-structured distribution of high-quality resources at school, and so on. In light of these issues, this research conducts a cross-sectional study of preservice teachers’ perceptions of their comprehensive digital competence. According to Mumtaz [9], multiple factors may exert great influence when teachers perform digitally in pedagogical contexts, such as motivation for professional improvement, acquisition of integral resources, quality of teaching equipment, ease of use, policies inside and outside the school environment, and financial and administrative support for ICT training. To keep the research focused while remaining practical, this study centered on personal factors such as self-efficacy and contextual factors such as collegial collaboration and infrastructural support. A theoretical model was developed, inspired by previous studies [10,11], and structural equation modeling was applied to verify the model using empirical data.

The fundamental rationale of this research is threefold. First, by effectively evaluating the current state of preservice teachers’ digital competence, this study aims to provide valuable references for stakeholders such as teacher educators and policy-makers on issues such as the evaluation of preservice teachers’ digital performance, the construction of digital training courses, and the implementation of digital teaching and learning praxis. Second, this research is expected to enhance preservice teachers’ awareness of how they can strengthen their capacity to meet the demands of a digitally driven educational environment. By understanding their strengths and areas for improvement in digital competence, preservice teachers can better equip themselves for successful digital performance. Third, the long-term goal of this research is to contribute to the overall improvement of teachers’ digital teaching and learning capacity. Well-trained preservice teachers, with a solid foundation in digital competence, will play a crucial role in advancing the effectiveness and innovation of digital teaching practices within the educational system. In short, this research seeks to provide valuable insights for stakeholders, empower preservice teachers in their digital competence development, and ultimately foster the improvement of teachers’ digital teaching and learning capacity.

2. Literature review

2.1. Definition of digital competence

The European Commission defines digital competence as follows: “Digital competence involves the confident and critical use of information society technology for work, leisure and communication” ([12], p.84). Ferrari et al. [12] analyzed fifteen digital competence frameworks from the literature, half of which include a context-specific definition of digital competence. By exhaustively comparing the main components of all the definitions listed in the literature, they arrived at a comprehensive definition of digital competence:

Digital competence is the set of knowledge, skills, attitudes, abilities, strategies, and awareness that is required when using ICT and digital media to perform tasks, solve problems, communicate, manage information, behave in an ethical and responsible way, collaborate, create and share content and knowledge for work, leisure, participation, learning, socializing, empowerment and consumerism ([12], p.84).

As the concept of digital competence is understood as a “multi-faceted moving target” ([12], p.79), it needs to be interpreted pertinently in different domains. In an education-specific context, educators’ digital competence is understood as “a set of capacities, abilities, knowledge or skills that educators possess to solve educational problems by integrating ICT” ([2], p.2). Many theoretical frameworks have been developed for the measurement of teachers’ digital competence [[12], [13], [14], [15]]. Ferrari et al. [12] noted that most digital competence frameworks regard skills as tool-dependent, i.e., they center on the participant’s actual ability to use certain digital devices. Another common feature shared by current digital competence frameworks is their recognition of digital competence as a holistic system: they divide it into multiple areas, each assigned corresponding proficiency levels. For instance, the European framework of the <Digital competence of educators (DigCompEdu)> decomposes digital competence into six areas with twenty-two subsets distributed under them: professional engagement, digital resources, teaching and learning, assessment, empowering learners and facilitating learners’ digital competence [16]. Similarly, the <Common digital competence framework for teachers> [13] divides teachers’ digital competence into five areas comprising twenty-one detailed competences: information and data literacy, communication and collaboration, digital content creation, safety, and problem solving.

With these insights in mind, this research proposes that teachers’ digital competence can be defined as the comprehensive set of capacities, abilities, knowledge, and skills that educators possess to integrate ICT effectively and responsibly into their educational practices. It involves professional engagement, leveraging digital resources, incorporating ICT in teaching and learning processes, empowering learners to develop their own digital competence, and facilitating the acquisition of digital competence by learners. It includes competences in areas such as information and data literacy, communication and collaboration, digital content creation, ethical conduct, problem-solving, etc.

2.2. English preservice teachers’ ICT self-efficacy

Previous studies have stressed that English preservice teachers’ ICT self-efficacy can be shaped by direct experiences [[17], [18], [19]], vicarious experiences [20], forms of practicums [21] and service-learning projects [22]. Bandura ([23], p.3) asserted that self-efficacy concerns “people’s beliefs in their capabilities to produce given attainments”. In the teaching profession, teachers’ self-efficacy can be defined as “teachers’ beliefs about their own capabilities of carrying out professional actions in various situations and relevant arenas” ([24], p.4). Teachers’ self-efficacy is a developing capability in which “cognitive, social-emotional and behavioral sub-skills need to be organized and orchestrated for the individual to fulfill the teacher role” ([24], p.4). One’s self-efficacy is an indicator of one’s capacity to execute certain activities [25]. Outcome expectations also play a significant role in shaping individuals’ self-efficacy and can take various forms, such as positive and negative physical outcomes, social outcomes and self-evaluative outcomes: “Within each form, the positive expectations serve as incentives, the negative ones as disincentives” ([25], p.309). It is therefore of utmost significance to take English preservice teachers’ self-efficacy into consideration when examining their overall digital competence. Bandura [23] claimed that four major sources affect individuals’ self-efficacy: vicarious experiences, verbal persuasions, physiological arousal and mastery experiences. Since contextual factors such as collegial collaboration afford opportunities for vicarious experiences and verbal persuasions [10], and it is reasonable to presume that a lack of infrastructural support constrains individuals’ mastery experiences, this study not only focuses on English preservice teachers’ ICT self-efficacy but also examines how such contextual factors shape their perceptions of digital competence.

2.3. Contextual factors: collegial collaboration and infrastructural support

People are destined to live and work cooperatively; numerous achievements and human causes are the results of interdependent efforts [26]. In the context of blending ICT into pedagogical use, whether teachers and other stakeholders (e.g., education researchers, teacher trainers, policymakers) maintain a well-functioning coordinating relationship can substantially shape overall instructional outcomes. Collegial collaboration provides extra opportunities for teachers to learn about ICT together with their peers [10]. In doing so, they can enhance their professional capabilities in response to the demand to infuse ICT into instructional activities [10]. A previous study [27] concluded that integrating information-technology strategies into instructional sessions is essential for teachers to enrich their professional knowledge. Angeli and Valanides [28] argued that for less experienced teachers, collaborating with peers, receiving feedback from experts, and observing and participating in teaching demonstrations are productive ways of strengthening their ICT capabilities. Informed by previous research [10,29], this study integrated five items into an online questionnaire to measure English preservice teachers’ perceptions of collegial collaboration; the results are shown in the data analysis section.

Apart from collegial collaboration, infrastructural support also serves as a fundamental component in cultivating English preservice teachers’ digital competence. In the field of education, ICT infrastructure includes access to pedagogical equipment, software, the internet and other resources in the same category [30]. As Tearle ([31], p.337) put it, ICT infrastructure refers to “the quantity, type, reliability of computer, access arrangements and location of equipment”. In other words, as the fundamental carrier of technology integration in instructional contexts, ICT infrastructure is viewed as the combination of multiple computing devices and the supporting fittings that accompany them in various forms [32]. The sufficiency of technological resources is one of the most indispensable prerequisites of ICT integration in the school context [30,33], as a paucity of ICT facilities can be deeply frustrating for teachers who yearn to apply high-tech equipment for instructional purposes [9]. As Albirini [33] noted, a scarcity of ICT resources at teachers’ disposal has been widely acknowledged as a major hindrance to technology integration in education-specific contexts. This research attempts to determine the corresponding status quo in a Chinese institution of education.

2.4. Rationale of the theoretical framework

To enhance the structure of this research, it is necessary to construct a theoretical framework based on the current literature. Research on teacher self-efficacy has historically been approached from two different theoretical perspectives [34]: Rotter’s [35] concept of internal and external control and Bandura’s [23] proposal regarding the four sources of self-efficacy formation, namely, vicarious experiences, verbal persuasion, physiological arousal, and mastery experiences. Building on Rotter’s distinction between internal and external control, it has been assumed that teacher self-efficacy increases when teachers themselves are convinced that their professional competence can be enhanced through vocational training and institutional support [19,34,[36], [37], [38]]. Drawing from Bandura’s [23,25] self-efficacy theory, it can be inferred that contextual factors may play a role in preservice teachers’ vocational development. In this context, the present study aims to analyze the evaluation of English preservice teachers’ digital competence from three perspectives: their self-efficacy in ICT use, their perceptions of collegial collaboration, and the infrastructural support they receive through professional training. By considering both internal factors such as ICT self-efficacy and external factors such as collegial collaboration and infrastructural support, the study aims to establish a robust theoretical foundation.

2.5. Aim of the study

Based on these references, the present study is concerned with measuring English preservice teachers’ digital competence in relation to ICT self-efficacy, collegial collaboration and infrastructural support. The main objectives are therefore to administer a specifically tailored digital competence questionnaire, analyze the empirical data, and verify the theoretical model developed from the literature.

2.6. Research questions and hypotheses

This research aims to explore English preservice teachers’ digital competence regarding their self-efficacy in ICT use and their perceptions of collegial collaboration and infrastructural support. Research questions (RQs) that guide the whole study are as follows:

RQ1

Are personal factors such as ICT self-efficacy and contextual factors such as collegial collaboration and infrastructural support sufficient to explain English preservice teachers’ overall digital competence?

RQ2

Among these factors, which one(s) contribute(s) most to English preservice teachers’ digital competence in an educational context?

Five hypotheses are proposed according to the RQs.

H1

English preservice teachers’ ICT self-efficacy has a positive association with their perceptions of collegial collaboration.

H2

English preservice teachers’ ICT self-efficacy has a positive association with their perceptions of infrastructural support.

H3

English preservice teachers’ ICT self-efficacy has a positive association with their overall digital competence.

The next two hypotheses are about contextual factors such as collegial collaboration and infrastructural support that relate to English preservice teachers’ digital competence:

H4

English preservice teachers’ perceptions of collegial collaboration have a positive association with their overall digital competence.

H5

English preservice teachers’ perceptions of infrastructural support have a positive association with their overall digital competence (Fig. 1).

Fig. 1. Theoretical model demonstrating the hypothesized relationships between English preservice teachers’ ICT self-efficacy, collegial collaboration, infrastructural support and their overall digital competence.

3. Methodology

3.1. Sample

This cross-sectional research evaluates the current state of a group of preservice teachers’ digital competence in relation to their ICT self-efficacy, collegial collaboration, and infrastructural support. A purposive sampling approach was applied to a population of 507 student teachers majoring in English pedagogy at Inner Mongolia Normal University. The final sample was n = 425 (response rate: 83.83%), composed of 348 females (81.88%) and 77 males (18.12%). According to the statistical results, the largest share of respondents expected to work in colleges and universities after graduation (37.07% of female students and 42.86% of male students). The descriptive data are shown in Table 1. This research was approved by the ethics committee of Inner Mongolia Normal University, and all participants gave informed consent to their participation in the current research.

Table 1.

Percentage of respondents by age, gender and expectations in school stages.

| Expectations in School Stages | Age 20–25 (60.29%) | Age 26–30 (32.84%) | Age 31–35 (4.41%) | Age ≥36 (2.45%) |
| --- | --- | --- | --- | --- |
| Gender (Female / Male) | 80.65% / 19.35% | 87.84% / 12.16% | 80% / 20% | 80% / 20% |
| Kindergarten | 0.94% | 0.47% | 0 | 0 |
| Primary School | 7.06% | 2.59% | 0.47% | 0 |
| Junior High School | 13.88% | 3.76% | 0.24% | 0.24% |
| Senior High School | 27.29% | 4% | 0.47% | 0.47% |
| College/University | 29.88% | 6.59% | 1.18% | 0.47% |
| Total | 425 (100%) | | | |

3.2. Instruments

Participants were required to answer a twenty-item questionnaire that covers four aspects: ICT self-efficacy, collegial collaboration, infrastructural support and digital competence. All the research items, together with detailed descriptive statistics such as means, standard deviation, skewness and kurtosis, are listed in Table 2.

Table 2.

Means, standard deviations, skewness and kurtosis.

| Item | M (SD) | Skewness | Kurtosis |
| --- | --- | --- | --- |
| Digital Competence (McDonald’s ω = 0.91) | | | |
| When assigning digital tasks, I can consider possible problems students may encounter. | 1.87 (0.67) | 0.39 | 0.09 |
| I can use digital technologies to provide personalized learning opportunities. | 2.01 (0.74) | 0.37 | 0.02 |
| I can teach students how to identify erroneous and/or biased information. | 1.80 (0.65) | 0.48 | 0.82 |
| I can assign tasks that require students to use digital media to communicate and cooperate with each other. | 1.89 (0.68) | 0.55 | 0.84 |
| I can assign tasks that require students to create digital content. | 1.96 (0.69) | 0.32 | −0.07 |
| I can teach students how to behave properly while using ICT. | 1.87 (0.66) | 0.29 | −0.10 |
| I can encourage students to use digital technologies creatively. | 1.88 (0.69) | 0.63 | 1.01 |
| ICT Self-Efficacy (McDonald’s ω = 0.79) | | | |
| I can solve simple technical problems during ICT use. | 1.63 (0.68) | 1.08 | 2.26 |
| I can use correct computer terminology when directing students’ computer use. | 1.81 (0.67) | 0.44 | −0.01 |
| If necessary, I can develop educational software based on teaching objectives. | 1.77 (0.64) | 0.63 | 1.43 |
| If necessary, I can develop online teaching materials and tools. | 1.82 (0.69) | 0.38 | −0.34 |
| Collegial Collaboration (McDonald’s ω = 0.84) | | | |
| I can collaborate with other teachers to develop ICT-based lesson plans. | 1.90 (0.73) | 0.71 | 1.12 |
| I can collaborate with other teachers to select ICT-based instructional methods. | 2.03 (0.73) | 0.42 | 0.46 |
| I can collaborate with other teachers to evaluate curriculum and programs. | 2.39 (0.97) | 0.26 | −0.50 |
| I can collaborate with other teachers to design ICT-based assessments for students. | 2.11 (0.86) | 0.61 | 0.43 |
| I can collaborate with other teachers to give ICT-based assignments. | 2.05 (0.82) | 0.55 | 0.19 |
| Infrastructural Support (McDonald’s ω = 0.80) | | | |
| How much do you agree that your school is conscious of the ICT training of English preservice teachers? | 2.18 (0.86) | 0.35 | −0.16 |
| How much do you agree that your school encourages you to use subject-specific digital teaching aids in teaching? | 2.08 (0.88) | 1.13 | 1.58 |
| How much do you agree that your school keeps a budget for the implementation of ICT? | 1.88 (0.74) | 0.41 | −0.40 |
| How much do you agree that digital teaching equipment at your school is satisfactory? | 2.15 (0.93) | 0.58 | −0.04 |

3.2.1. Digital competence scale

The majority of the questionnaire measured participants’ overall digital competence. Seven statements were extracted and adapted from the DigCompEdu check-in questionnaire [2,39], which originates from the European framework for the digital competence of educators [16]. Several studies have examined the reliability and validity of the DigCompEdu check-in questionnaire. Ghomi and Redecker [40] collected 335 valid responses from teachers in Germany and found that the entire instrument showed excellent internal consistency, with a Cronbach’s alpha of 0.93. In addition, Cabero-Almenara et al. [2] administered the DigCompEdu check-in questionnaire to 2262 professors in Spain and reported an omega coefficient of 0.97. According to Cabero-Almenara et al. [2], McDonald’s omega coefficient provides a more precise estimate of a scale’s reliability than Cronbach’s alpha does, which led them to conclude that both the instrument itself and the competence areas that comprise it showed a high level of trustworthiness. The current study did not adopt all the items from the original DigCompEdu check-in questionnaire [39], as it was originally developed to measure in-service teachers’ digital competence, and preservice teachers differ from in-service teachers in working experience, pedagogical knowledge, and teaching capacity, all of which play major roles in developing one’s digital competence [41]. In light of this, the current study extracted seven items out of the original twenty-two statements and carefully trimmed them for better suitability. Statistical results revealed outstanding internal consistency for the adapted version (Table 2).

3.2.2. ICT self-efficacy scale

For the measurement of English preservice teachers’ self-efficacy in ICT, this study extracted four ICT self-efficacy-related statements from previous research [11,42]. Adaptations were made to meet the specific demands of the current study. A five-point Likert scale was used in the questionnaire, and participants were asked to choose only one suitable answer for each statement based on their own perceptions. Possible answers on the scale were assigned as follows: 1 = strongly agree, 2 = agree, 3 = neutral, 4 = disagree and 5 = strongly disagree. The internal consistency measure (McDonald’s ω) of this dimension was 0.79. The mean, standard deviation, skewness and kurtosis are shown in Table 2.

3.2.3. Collegial collaboration and infrastructural support scale

Informed by Goddard et al. [29] and I. K. R. Hatlevik and Hatlevik [10], this study adopted five statements to measure English preservice teachers’ perceptions of collegial collaboration in ICT training and learning. Infrastructural support is the fourth and last aspect of the examination of English preservice teachers’ overall digital competence. Four statements were included in this dimension, all developed on the basis of previous studies [27,42]. Empirical data on this aspect also revealed acceptable internal consistency (Table 2).

3.3. Data collection

The sole data collection tool within this study was a structured online questionnaire. All 425 anonymous submissions came from graduate students in a normal university in northern China. The sociodemographic characteristics of the participants are shown in Table 1.

3.4. Analytical strategy

Factor analysis (FA) was applied in the data analysis process. It involves a set of procedures employed to examine the interrelations between numerous observed variables [43]. These procedures reduce the data by grouping variables into a smaller set of dimensions or factors that share similar characteristics [43,44]. As a multivariate statistical procedure [43], FA serves several purposes. First, it can condense a vast number of variables into a more concise group of factors. Second, by identifying the fundamental interrelations between observed variables and latent constructs, FA enables the creation and improvement of theoretical frameworks. Furthermore, FA offers evidence for the construct validity of self-report scales [43].

FA mainly comprises two analytical methods: exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) [45]. EFA is employed when the researcher is uncertain about the number of factors required to elucidate the interrelations among a group of items; it allows the researcher to reduce numerous variables to a smaller, more suitable set of underlying constructs [43,[46], [47], [48]]. CFA, on the other hand, is used to evaluate how well a presumed classification of a group of identified factors fits the data. It can also be used to examine the soundness of the underlying dimensions of a construct identified via EFA and to test hypotheses regarding the linear structural relationships among a set of factors related to the model [44].

In the current study, the statistical software SPSS was used to execute the EFA, to describe the sociodemographic characteristics of the participants, and to calculate the observed variables’ means, standard deviations, skewness and kurtosis. All these values are listed in Table 2. In addition, structural equation modeling (SEM) was carried out using the statistical software Amos. SEM can validate the patterns of relationships among latent variables [49] and can integrate the observed variables into a set of latent variables [50,51], which in turn serves to verify the research model. The model proposed in this study was a fully latent model [10] composed of four latent variables and five hypothesized relationships among them (Fig. 1).
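To make the modeling step concrete, the sketch below shows how a fully latent model of this shape could be specified in Python with the open-source semopy package. This is an illustration only: the study itself used SPSS and Amos, and the item labels (DC1–DC7, SE1–SE4, CC1–CC5, IS1–IS4) and file name are hypothetical.

```python
import pandas as pd
import semopy

# Measurement model (four latent variables, twenty indicators) followed by
# the five hypothesized structural paths H1-H5 from Fig. 1.
MODEL_DESC = """
DC =~ DC1 + DC2 + DC3 + DC4 + DC5 + DC6 + DC7
SE =~ SE1 + SE2 + SE3 + SE4
CC =~ CC1 + CC2 + CC3 + CC4 + CC5
IS =~ IS1 + IS2 + IS3 + IS4
CC ~ SE
IS ~ SE
DC ~ SE + CC + IS
"""

data = pd.read_csv("questionnaire_responses.csv")  # hypothetical file name
model = semopy.Model(MODEL_DESC)
model.fit(data)

print(model.inspect())           # loadings and path coefficients
print(semopy.calc_stats(model))  # chi-square, df, CFI, TLI, RMSEA, and more
```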

4. Results

4.1. Attributes of items

Before the analysis commenced, all items were screened for normality using two indices: skewness and kurtosis. Skewness indicates the asymmetry of the data relative to a normal distribution, and kurtosis reflects the sharpness of the peak [49]. For the data in this study, skewness ranges from 0.26 to 1.13 and kurtosis from −0.50 to 2.26 (Table 2). According to Lau and Yuen [52], the data become a serious concern only when the values of skewness and kurtosis exceed 3 and 10, respectively. Therefore, the data in this study can be regarded as univariate normally distributed.
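As a minimal illustration (not the study’s actual SPSS workflow), this screening step can be reproduced in Python with pandas; the file name and item labels are hypothetical.

```python
import pandas as pd

data = pd.read_csv("questionnaire_responses.csv")  # hypothetical file name

# Per-item skewness and (excess) kurtosis, as reported in Table 2.
screen = pd.DataFrame({"skewness": data.skew(), "kurtosis": data.kurtosis()})
print(screen.round(2))

# Flag items exceeding the cited thresholds (|skewness| > 3, |kurtosis| > 10).
flagged = screen[(screen["skewness"].abs() > 3) | (screen["kurtosis"].abs() > 10)]
print("Items of concern:", list(flagged.index))  # expected to be empty here
```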

4.2. Exploratory factor analysis (EFA)

EFA is an indispensable step in factor analysis, as it enables researchers to refine their research items while preserving the integrity of the constructs. In simple terms, it reduces the number of research items so that the remaining ones better elucidate the constructs being studied [46,53].

However, before refining the constructs, sampling adequacy and data suitability must be scrutinized by examining the Kaiser‒Meyer‒Olkin (KMO) measure [54] and Bartlett’s test of sphericity [55]. According to Netemeyer et al. [47], a KMO value of 0.50 or above is acceptable for performing EFA. Hair et al. [56] put it more precisely: the measure of sampling adequacy ranges from 0 to 1 and can be interpreted as follows: 0.80 or above, meritorious; 0.70 or above, middling; 0.60 or above, mediocre; 0.50 or above, miserable; below 0.50, unacceptable. Bartlett’s test of sphericity [55] is a statistical procedure that assesses the overall significance of the correlations in a correlation matrix [56]; its chi-square output should be statistically significant (ρ < 0.001) to ensure the matrix’s suitability for further factor analysis [45]. Based on these references, the sampling adequacy and data suitability of the scale used in this study were strictly assessed (Table 3).

Table 3.

KMO and Bartlett’s test of sphericity.

| Indicator Area* | Number of Items | KMO | Chi-square | df | Sig. |
| --- | --- | --- | --- | --- | --- |
| WS | 20 | 0.95 | 4462.51 | 190 | 0.000 |
| DC | 7 | 0.92 | 1598.64 | 21 | 0.000 |
| SE | 4 | 0.79 | 483.10 | 6 | 0.000 |
| CC | 5 | 0.84 | 793.55 | 10 | 0.000 |
| IS | 4 | 0.78 | 495.52 | 6 | 0.000 |

Note: Chi-square, df and Sig. are from Bartlett’s test of sphericity. *WS = Whole Scale; DC = Digital Competence; SE = ICT Self-Efficacy; CC = Collegial Collaboration; IS = Infrastructural Support.

4.2.1. KMO and Bartlett’s test of sphericity

As shown in Table 3, the KMO value of the whole scale was 0.95, the chi-square in Bartlett’s test of sphericity was 4462.51, and the significance level (ρ) was 0.000 (<0.001), indicating that the data were statistically significant and suitable for further validity analysis [45,46].
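For illustration, both checks can also be computed outside SPSS; a minimal sketch using the open-source factor_analyzer package (the file name is hypothetical):

```python
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

data = pd.read_csv("questionnaire_responses.csv")  # hypothetical file name

# Bartlett's test of sphericity: the correlation matrix should differ
# significantly from an identity matrix (want p < 0.001).
chi_square, p_value = calculate_bartlett_sphericity(data)
print(f"chi-square = {chi_square:.2f}, p = {p_value:.4f}")

# Kaiser-Meyer-Olkin measure of sampling adequacy: 0.80 or above is
# "meritorious" under Hair et al.'s labels; this study reports 0.95 overall.
kmo_per_item, kmo_overall = calculate_kmo(data)
print(f"overall KMO = {kmo_overall:.2f}")
```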

4.2.2. Principal components extraction

The step that follows the verification of KMO and Bartlett’s test of sphericity is the extraction of principal components [56]. Since the four areas within the current scale are theoretically correlated [11,57,58], principal component extraction with orthogonal rotation was applied for the factor analysis. With the number of components fixed at four, the statistical results revealed that the extracted principal components explain 20.67%, 15.87%, 13.63% and 13.36% of the variance, respectively, with a cumulative percentage of 63.53% (Table 4; an illustrative code sketch follows the table). According to these results, it is justifiable to extract four components [59,60].

Table 4.

Total variance explained.

| Indicator Area* | Initial: Total | Initial: Variance % | Initial: Cumulative % | Rotation: Total | Rotation: Variance % | Rotation: Cumulative % |
| --- | --- | --- | --- | --- | --- | --- |
| DC | 9.15 | 45.72 | 45.72 | 4.13 | 20.67 | 20.67 |
| SE | 1.45 | 7.26 | 52.99 | 3.17 | 15.87 | 36.53 |
| CC | 1.12 | 5.61 | 58.59 | 2.73 | 13.63 | 50.16 |
| IS | 0.99 | 4.93 | 63.52 | 2.67 | 13.36 | 63.53 |

Note: “Initial” = initial eigenvalues; “Rotation” = rotation sums of squared loadings. *DC = Digital Competence; SE = ICT Self-Efficacy; CC = Collegial Collaboration; IS = Infrastructural Support.
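The extraction step itself can be sketched the same way, again assuming the open-source factor_analyzer package rather than the SPSS procedure the study actually used:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

data = pd.read_csv("questionnaire_responses.csv")  # hypothetical file name

# Principal-component extraction with an orthogonal (varimax) rotation and
# the number of components fixed at four, mirroring Table 4.
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
fa.fit(data)

print(fa.loadings_.round(2))  # rotated loadings, one row per item

# get_factor_variance() returns absolute, proportional and cumulative variance.
variance, proportion, cumulative = fa.get_factor_variance()
print((proportion * 100).round(2))   # per-component % of variance
print((cumulative * 100).round(2))   # cumulative %; ~63.5% for four components
```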

4.3. Confirmatory factor analysis (CFA)

EFA validated the sampling adequacy and data suitability of the scale and certified the extracted principal components [56]: four constructs, namely, digital competence (7 items), ICT self-efficacy (4 items), collegial collaboration (5 items) and infrastructural support (4 items). The next step is confirmatory factor analysis (CFA), which is used to evaluate the extent to which the indicators accurately reflect the latent constructs they are designed to measure and to determine whether the latent constructs are distinct from each other [61]. A series of construct validity tests can be used for the justification of the study, namely, content validity, predictive validity, convergent validity and discriminant validity [61]. Content validity relies on a visual inspection of whether the indicators, on their face, make a reasonable attempt to measure the unobserved construct; the twenty items distributed across the four constructs above support the content validity of the study [61]. Predictive validity concerns a “prediction” made in advance, which is then confirmed or refuted after all the tests are completed. Convergent validity and discriminant validity are comparatively more involved, requiring a series of calculations that are carried out in the next section.

4.3.1. Model fitness

A model consisting of four latent variables, namely, digital competence, ICT self-efficacy, collegial collaboration and infrastructural support, was examined. While running the software Amos, common fit indices were referenced: the chi-square (χ2) value, χ2/df, the comparative fit index (CFI), the Tucker‒Lewis index (TLI), the root mean square error of approximation (RMSEA) and the standardized root mean square residual (SRMR). Established rules of thumb were strictly followed: CFI and TLI should be close to or higher than 0.95 [62,63], and RMSEA should be close to or lower than 0.06 [62,63]. For the SRMR of the model, Hu and Bentler [62], together with Brown [64], agreed that the value should be close to or lower than 0.08 and the 90% confidence interval (90% CI) should be close to or lower than 0.06. In this study, the chi-square value of the model was significant (ρ = 0.000); however, the chi-square test is extremely sensitive to sample size [62,63], and the current sample of 425 is close to the medium benchmark (N = 500) [62,63]. The other indices showed an acceptable model fit: CFI = 0.94, TLI = 0.93, RMSEA = 0.06 (LO 90 = 0.06, HI 90 = 0.07) and SRMR = 0.05. The standardized factor loadings ranged from 0.59 to 0.79, and all were statistically significant (ρ < 0.001) (Table 5; a small helper encoding these rule-of-thumb checks follows the table).

Table 5.

Composite reliability and average variance extracted.

| Construct* | Item | Std. factor loading | Unstd. factor loading | S.E. | t-value | ρ | SMC | C.R. | AVE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DC | DC1 | 0.74 | 1.00 | \ | \ | \ | 0.54 | 0.90 | 0.58 |
| | DC2 | 0.77 | 1.16 | 0.07 | 16.04 | *** | 0.60 | | |
| | DC3 | 0.74 | 0.96 | 0.06 | 15.04 | *** | 0.54 | | |
| | DC4 | 0.78 | 1.08 | 0.07 | 15.91 | *** | 0.60 | | |
| | DC5 | 0.76 | 1.07 | 0.07 | 15.68 | *** | 0.58 | | |
| | DC6 | 0.77 | 1.02 | 0.07 | 15.68 | *** | 0.59 | | |
| | DC7 | 0.76 | 1.06 | 0.07 | 15.39 | *** | 0.57 | | |
| SE | SE1 | 0.61 | 1.00 | \ | \ | \ | 0.37 | 0.78 | 0.47 |
| | SE2 | 0.76 | 1.24 | 0.10 | 12.25 | *** | 0.58 | | |
| | SE3 | 0.74 | 1.15 | 0.10 | 11.83 | *** | 0.54 | | |
| | SE4 | 0.63 | 1.05 | 0.10 | 10.51 | *** | 0.40 | | |
| CC | CC1 | 0.74 | 1.00 | \ | \ | \ | 0.55 | 0.84 | 0.51 |
| | CC2 | 0.79 | 1.07 | 0.07 | 16.06 | *** | 0.62 | | |
| | CC3 | 0.70 | 1.26 | 0.09 | 13.92 | *** | 0.49 | | |
| | CC4 | 0.66 | 1.04 | 0.08 | 12.70 | *** | 0.43 | | |
| | CC5 | 0.70 | 1.06 | 0.08 | 13.59 | *** | 0.49 | | |
| IS | IS1 | 0.77 | 1.00 | \ | \ | \ | 0.59 | 0.80 | 0.50 |
| | IS2 | 0.70 | 0.93 | 0.07 | 13.42 | *** | 0.49 | | |
| | IS3 | 0.74 | 0.83 | 0.06 | 14.07 | *** | 0.55 | | |
| | IS4 | 0.59 | 0.83 | 0.08 | 10.98 | *** | 0.35 | | |

Model fit statistics: χ2 = 449.28, df = 165; CFI = 0.94, TLI = 0.93, RMSEA = 0.06, SRMR = 0.05.

*DC = Digital Competence; SE = ICT Self-Efficacy; CC = Collegial Collaboration; IS = Infrastructural Support; C.R. = Composite Reliability; ***ρ < 0.001; \ = item constrained for identification purposes.
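These rules of thumb can be collected into a small reporting helper; a minimal sketch, with the cited cutoffs plus the common χ2/df < 3 heuristic and a 0.90 “acceptable” floor for CFI/TLI added as assumptions:

```python
def summarize_fit(chi2, df, cfi, tli, rmsea, srmr):
    """Map each fit index to (value, within rule-of-thumb range)."""
    return {
        "chi2/df": (round(chi2 / df, 2), chi2 / df < 3),  # common heuristic (assumed)
        "CFI": (cfi, cfi >= 0.90),    # ideal close to/above 0.95; >= 0.90 often read as acceptable
        "TLI": (tli, tli >= 0.90),
        "RMSEA": (rmsea, rmsea <= 0.06),
        "SRMR": (srmr, srmr <= 0.08),
    }

# Values reported for the current model (Table 5 footnote).
print(summarize_fit(449.28, 165, 0.94, 0.93, 0.06, 0.05))
```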

4.3.2. Convergent and discriminant validity

After the initial factor analysis confirmed the loading of each indicator on its respective construct and the model fit was deemed acceptable, the next step was to evaluate the constructs’ convergent and discriminant validity. The basic difference between the two lies in their purposes: convergent validity examines whether indicators measuring the same construct converge or agree with each other, while discriminant validity assesses whether a construct is distinct from, or unrelated to, other constructs [61]. Convergent validity is usually evaluated by computing the composite reliability (CR) and the average variance extracted (AVE) of the constructs [65]. CR is based on the factor loadings from CFA and applies the same 0.70 threshold as Cronbach’s alpha [61]. The CRs for all constructs in this study are presented in Table 5 and range from 0.78 to 0.90, indicating good composite reliability overall. For AVE, the recommended cutoff is no less than 0.50 [61]. All constructs in this study meet this cutoff except ICT self-efficacy, whose AVE (0.47) falls marginally short (Table 5), denoting that the indicators exhibit broadly acceptable convergent validity with respect to the constructs.

Discriminant validity evaluates whether a construct is unique, i.e., not excessively correlated with other constructs. This can be examined by computing the shared variance [61] between each pair of constructs, which requires the correlation analysis reported in Table 6. To establish discriminant validity, the shared variance between a construct and every other construct is calculated by squaring the correlation between them and comparing the result to their AVE scores [61]. Discriminant validity is supported only if the shared variance for each pair of constructs is lower than the AVE score of the construct being examined [61]. Taking digital competence and infrastructural support as an example, the shared variance between them is (0.62)² = 0.38, which is lower than both AVEs, 0.58 for digital competence and 0.50 for infrastructural support. Therefore, these two constructs are distinct from each other (Table 6; a worked sketch follows the table).

Table 6.

Correlation matrix for all constructs.

| Construct* | AVE | DC | SE | CC | IS |
| --- | --- | --- | --- | --- | --- |
| DC | 0.58 | \ | | | |
| SE | 0.47 | 0.84*** | \ | | |
| CC | 0.52 | 0.83*** | 0.81*** | \ | |
| IS | 0.50 | 0.62*** | 0.67*** | 0.54*** | \ |

***ρ < 0.001. *DC = Digital Competence; SE = ICT Self-Efficacy; CC = Collegial Collaboration; IS = Infrastructural Support.
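Both validity checks reduce to simple arithmetic over the reported values; a worked sketch using the standardized DC loadings from Table 5 and the DC–IS correlation from Table 6:

```python
# Convergent validity: composite reliability (CR) and average variance
# extracted (AVE), computed from the standardized DC loadings in Table 5.
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances);
# AVE = mean of the squared loadings.
dc_loadings = [0.74, 0.77, 0.74, 0.78, 0.76, 0.77, 0.76]

sum_loadings = sum(dc_loadings)
sum_errors = sum(1 - l ** 2 for l in dc_loadings)  # error variance per item
cr = sum_loadings ** 2 / (sum_loadings ** 2 + sum_errors)
ave = sum(l ** 2 for l in dc_loadings) / len(dc_loadings)
print(f"DC: CR = {cr:.2f}, AVE = {ave:.2f}")  # ~0.90 and ~0.58, matching Table 5

# Discriminant validity: the shared variance (squared correlation) between two
# constructs must fall below each construct's AVE. Example: DC vs. IS.
shared_variance = 0.62 ** 2                               # correlation from Table 6
print(shared_variance < 0.58 and shared_variance < 0.50)  # True: constructs are distinct
```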

4.4. Psychometric properties: testing the model

All constructs in this study are statistically significant and positively correlated with one another, with correlations ranging from 0.54 to 0.84 (Table 6). The model revealed the outcomes of the regression paths relating all the factors (Fig. 2), and all the research hypotheses proposed in this study were corroborated as statistically significant. English preservice teachers’ self-efficacy is positively associated with their perceptions of collegial collaboration (H1: β = 0.81, ρ < 0.001), infrastructural support (H2: β = 0.57, ρ < 0.001), and digital competence (H3: β = 0.43, ρ < 0.001). In addition, respondents’ perceptions of collegial collaboration and infrastructural support showed different degrees of positive association with their digital competence (H4: β = 0.43, ρ < 0.001; H5: β = 0.09, ρ < 0.05).

Fig. 2. Standardized estimates for confirmatory factor analysis of the model. Model fit indices: χ2 = 449.28, df = 165 (ρ = 0.000); CFI = 0.94; TLI = 0.93; RMSEA = 0.06 (LO 90 = 0.06, HI 90 = 0.07); SRMR = 0.05. *ρ < 0.05, ***ρ < 0.001.

The model explained 66% of the variance in collegial collaboration, 44% in infrastructural support and 78% in digital competence.

5. Discussion

The objective of this study was to examine the interrelations among English preservice teachers’ self-efficacy in ICT, collegial collaboration, infrastructural support, and their overall digital competence. This research was guided by two research questions. The first research question aimed to verify the associations among the respondents’ digital competence, ICT self-efficacy, collegial collaboration, and infrastructural support. The second research question was proposed to address the primary variable(s) with respect to the respondents’ overall digital competence within the instructional context. Building on prior research [10,11,58], this study developed a conceptual model comprising four latent variables and five hypotheses that were proposed to operationalize the research questions.

The analysis of the model suggests that all the research hypotheses were statistically corroborated by the empirical data. First, English preservice teachers’ self-efficacy in ICT has a strong positive association with their perceptions of collegial collaboration (H1: β = 0.81, ρ < 0.001). This result aligns with previous research, which has consistently shown that individuals’ self-efficacy is shaped by their perceptions of collegial collaboration [10,28,29,66]. It is widely acknowledged that in the Society of Knowledge [2], the use of ICT has become one of the main skills for teachers to master [67]. Teachers’ ICT self-efficacy is generally decisive in pedagogical settings, as it relates both to their awareness of and willingness to incorporate ICT into their teaching procedures and to their professional readiness to instruct students in using the ICT at their disposal for educational purposes. Teacher educators are therefore strongly encouraged to develop more applicable measures to boost teachers’ ICT-oriented competences [67,68]. In addition, Goddard et al. [29] noted that effective pedagogical activities rely heavily on collegial collaboration, as collaboration among colleagues helps teachers converse knowledgeably about theories, approaches, and teaching and learning procedures, which can consequently raise confidence in their professional competencies.

Second, the statistical results suggest that English preservice teachers’ self-efficacy in ICT has a strong positive association with their understanding of infrastructural support (H2: β = 0.57, ρ < 0.001). This result aligns with previous research findings [30,42,69,70]. Kundu et al. [70] adopted the Unified Theory of Acceptance and Use of Technology (UTAUT) model [71] to examine how teachers’ self-efficacy and perceived infrastructure impact their ease of ICT use. Their findings revealed that in terms of facilitating ICT use, perceived infrastructure had a more pronounced individual effect than teachers’ self-efficacy. Amhag et al. ([72], p.4) directly pointed out that “the most encouraging factor for implementation of ICT was technological and pedagogical support”. In the same vein, Gil-Flores et al. [69] noted that teachers’ self-efficacy, together with the accessibility of educational equipment, ICT training procedures, peer collaboration, and teachers’ comprehension of teaching principles, all contribute to the integration of ICT into classroom practices. Therefore, it is suggested that authorities, such as school committees and the Ministry of Education, provide more technical and administrative support by taking action to equip teachers with state-of-the-art pedagogical theories and approaches and putting emphasis on introducing more applicable teaching resources into the educational environment [73].

Third, regarding the examination of the relationship between English preservice teachers’ self-efficacy in ICT and their overall digital competence, the model indicated that respondents’ ICT self-efficacy has a moderate positive association with their digital competence (H3: β = 0.43, ρ < 0.001). Similar conclusions can be found in previous research [[74], [75], [76]]. Compeau et al. [74] noted that self-efficacy significantly influenced individuals’ computer competency, both affectively and behaviorally. Eastin and LaRose [75] asserted that self-efficacy was positively related to one’s internet experience. In addition, Hasan [76] suggested that the effects of different types of computer experience vary in regard to people’s digital competence self-efficacy.

Fourth, English preservice teachers’ perceptions of collegial collaboration have a moderate positive association with their digital competence (H4: β = 0.43, ρ < 0.001). This finding is supported by prior studies, indicating that there is a need for teacher training institutions to foster a shared willingness to enhance teachers’ digital competence and to challenge conventional ways of teaching and learning [66,[75], [76], [77], [78], [79]]. In the same vein, leaders of teacher education organizations must take the initiative to effectively integrate digital technology into didactical practices and should be capable of providing adequate administrative resources for teaching staff to engage in continuous professional development. Similar statements were found in previous studies, emphasizing positive outcomes of peer collaboration when learning about putting ICT into educational use [66,80,81].

Fifth, the association between English preservice teachers’ perceptions of infrastructural support and their digital competence is rather low compared to the others (H5: β = 0.09, ρ < 0.05). One possible inference is that while the participants presumably acknowledge the importance of infrastructural assistance, they have not yet been able to test their perceptions in real-life teaching practices, which could be misleading to some extent. Previous research [5] echoed this presumption by stating that even when teachers are equipped with adequate “physical support”, there is no guarantee that they also receive the necessary nonphysical support. In addition, Tearle and Golder [82] made a similar point, claiming that “‘watching’ technology being used could not substitute for ‘doing’”. This conclusion was based on one of their trainees’ comments about how to improve training: “Actually letting us experience using more ICT, more hands-on experience” ([82], p.63). Moreover, Barton and Haydn’s [83] findings revealed that preservice teachers demonstrated a stronger sense of achievement when given opportunities to apply their knowledge to actual practice. Another inference is that the current study assigned only four items to measure participants’ perceptions of infrastructural support during their ICT-related training. This may be deemed oversimplified given Hew and Brush’s [32] arguments that infrastructural support encompasses multiple dimensions, such as ICT development plans, ease of use of ICT resources, and provision of continuous professional instruction, among others. Further research is needed on this topic.

6. Conclusions

6.1. Theoretical and practical implications

The findings of this study have implications for both theory and practice. From a theoretical perspective, the results corroborate positive associations among English preservice teachers’ digital competence, ICT self-efficacy, collegial collaboration, and infrastructural support. This contributes to a more robust understanding of this field of study. In addition, these findings enhance our comprehension of the role of each construct studied, forming the foundation for the development of more sophisticated assessment models in the future [84].

Moreover, the practical implications of these findings for the training of preservice teachers are noteworthy. Feng et al. [1] highlighted that Chinese primary and secondary school teachers demonstrate a significantly lower level of digital competence than their international counterparts. This observation emphasizes the need to urgently tackle the obstacles that hinder the development of teachers’ digital competence. In this regard, the findings of this research shed light on the interconnectedness of English preservice teachers’ digital competence, ICT self-efficacy, collegial collaboration, and infrastructural support. By recognizing these associations, we can improve the design and implementation of effective training programs to enhance preservice teachers’ digital competence.

Ultimately, the implications of this research extend beyond the confines of theory and academia. They offer instructional guidance for educational institutions and policy-makers for the enhancement of preservice teachers’ digital competence in the domain of education.

6.2. Limitations

The limitations of the present study must be acknowledged. First, the data were gathered using a cross-sectional design, in which all the research factors are typically tested simultaneously [85]. Consequently, it is challenging to ascertain cause-and-effect relationships [10,85]. Cultural-historical activity theory posits interrelated relationships among the research variables [86]; therefore, longitudinal qualitative studies are needed to thoroughly explore the intrinsic reciprocal relationships among them.

In addition, considering that the data used in this research are self-reported data from a specific normal university, it is likely that the research findings may not be applicable to all educational settings. Therefore, it is worth advocating that future research take different educational settings and school levels into consideration [73].

Furthermore, this study proposed only one hypothesized model and did not compare it with other possible models. Although the common fit indices were statistically acceptable and demonstrated the effectiveness and justification of the current model, the research design would have been more complete had alternative models been compared.

Finally, the present study chose English preservice teachers as the sole research subject, and one of the most outstanding characteristics of this group was that most of them lacked real-life teaching experience. According to Bandura’s theory [26], significant and meaningful experiences in specific domains can lead to significant changes in efficacy, which can manifest in various ways. Since preservice teachers currently lack opportunities to demonstrate their presumed capacities, it is possible that their self-efficacy may not fully align with their actual digital competence. This misalignment could potentially create gaps between their perceptions and their actual capabilities.

6.3. Concluding remarks

The conclusions outlined below may have relevance and applicability for teacher educators worldwide.

First, it is necessary to implement effective measures to address the lack of ICT-related instruction in preservice teachers’ training courses.

Second, teacher educators should develop a set of well-structured digital competence self-evaluation procedures that are tailored to preservice teachers, as they can provide guidance for their professional development in the future.

Third, policy-makers and school leaders should take steps to inspire and mobilize preservice teachers regarding digital collaboration. By doing so, both less experienced teachers and those who are digitally competent shall be well-equipped with digital pedagogy skills, which will serve as a common good for future generations in the long run.

Author contribution statement

Wuyun Dai: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.

Data availability statement

Data included in article/supplementary material/referenced in article.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

The author would like to express her heartfelt gratitude and appreciation to the reviewers and editor(s) for their invaluable insights and guidance during the revision process. She also wishes to thank all the participants who voluntarily took part in the data collection process. This research could not have been done without their participation.


Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.heliyon.2023.e19538. The following is the Supplementary data to this article:

Multimedia component 1
mmc1.docx (231.7KB, docx)

References

  • 1.Feng J., Wang Y., Bai Y. The research outline and prospect of teachers’digital competence: an analysis based on international literature. Teach. Educ. Res. 2022;34(2):118–128. doi: 10.13445/j.cnki.t.e.r.2022.02.010. [DOI] [Google Scholar]
  • 2.Cabero-Almenara J., Gutiérrez-Castillo J.-J., Palacios-Rodríguez A., Barroso-Osuna J. Development of the teacher digital cometence validation of DigCompEdu Check-In questionnaire in the university context of Andalusia (Spain) Sustainability. 2020;12(15):1–14. doi: 10.3390/su12156094. [DOI] [Google Scholar]
  • 3.Xinhua News Agency. 2018. Opinions of the CPC Central Committee and the State Council on Comprehensively Deepening the Reform of the Construction of Teachers in the New Era. (Report No. 2018-05) [Google Scholar]
  • 4.Ministry of Education of China. 2018. Education Informatization Action Plan 2.0. (Report No. ET-201806) [Google Scholar]
  • 5.Qiu C. The professional development of teacher educators in Shanghai [Doctoral dissertation]. University of Glasgow; 2015. Retrieved from http://theses.gla.ac.uk/6798/. [Google Scholar]
  • 6.Zheng X. Research on the Construction and Application of Digital Competence Model for K12 Teachers in China [Doctoral dissertation]. East China Normal University; Shanghai: 2019. [Google Scholar]
  • 7.Tang L. Development of digital competence for vocational education teachers: international perspectives and the Chinese path—an analysis and reference based on three reports from UNESCO-UNEVOC. Chinese Vocat. Tech. Educ. 2023;(4):27–35. [Google Scholar]
  • 8.Li J. An empirical study on digital competence of open university teachers from the perspective of curriculum ideological and political education-based on the survey data of 277 teachers in the system of Guangdong open university. J. Nanjing Open Univ. 2022;(4):28–35. [Google Scholar]
  • 9.Mumtaz S. Factors affecting teachers' use of information and communications technology: a review of the literature. J. Inf. Technol. Teach. Educ. 2000;9(3):319–342. doi: 10.1080/14759390000200096. [DOI] [Google Scholar]
  • 10.Hatlevik I.K.R., Hatlevik O.E. Examining the relationship between teachers’ ICT self-efficacy for educational purposes, collegial collaboration, lack of facilitation and the use of ICT in teaching practice. Front. Psychol. 2018;9:1–8. doi: 10.3389/fpsyg.2018.00935. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Hatlevik O.E. Examining the relationship between teachers’ self-efficacy, their digital competence, strategies to evaluate information, and use of ICT at school. Scand. J. Educ. Res. 2016;61(5):555–567. doi: 10.1080/00313831.2016.1172501. [DOI] [Google Scholar]
  • 12.Ferrari A., Punie Y., Redecker C. In: 21st Century Learning for 21st Century Skills-Proceedings of 7th European Conference on Technology Enhanced Learning. Ravenscroft A., Lindstaedt S., Kloos C.D., Hernández-Leo D., editors. Springer; 2012. Understanding digital competence in the 21st century: an analysis of current frameworks; pp. 79–92. [Google Scholar]
  • 13.INTEF. National Institute of Educational Technologies and Teacher Training; 2017. Common Digital Competence Framework for Teachers. [Google Scholar]
  • 14.Lázaro-Cantabrana J.L., Usart-Rodríguez M., Gisbert-Cervera M. Assessing teacher digital competence: the construction of an instrument for measuring the knowledge of preservice teachers. J. N. Approaches Educ. Res. 2019;8(1):73–78. doi: 10.7821/naer.2019.1.370. [DOI] [Google Scholar]
  • 15.Pérez-Navío E., Ocaña-Moral M.T., Martínez-Serrano M.d.C. University graduate students and digital competence: are future secondary school teachers digitally competent? Sustainability. 2021;13(15):1–14. doi: 10.3390/su13158519. [DOI] [Google Scholar]
  • 16.Redecker C. Joint Research Centre; 2017. European framework for the digital competence of educators: DigCompEdu (Report No. EUR-28775-EN) [Google Scholar]
  • 17.Abbitt J.T. An investigation of the relationship between self-efficacy beliefs about technology integration and technological pedagogical content knowledge (TPACK) among preservice teachers. J. Digital Learn. Teacher Educ. 2011;27(4):134–143. doi: 10.1080/21532974.2011.10784670. [DOI] [Google Scholar]
  • 18.Lee Y., Lee J. Enhancing pre-service teachers’ self-efficacy beliefs for technology integration through lesson planning practice. Comput. Educ. 2014;73:121–128. doi: 10.1016/j.compedu.2014.01.001. [DOI] [Google Scholar]
  • 19.Peebles J.L., Mendaglio S. The impact of direct experience on preservice teachers’ self-efficacy for teaching in inclusive classrooms. Int. J. Incl. Educ. 2014;18(12):1321–1336. doi: 10.1080/13603116.2014.899635. [DOI] [Google Scholar]
  • 20.Wang L., Ertmer P.A., Newby T.J. Increasing preservice teachers’ self-efficacy beliefs for technology integration. J. Res. Technol. Educ. 2004;36(3):231–250. doi: 10.1080/15391523.2004.10782414. [DOI] [Google Scholar]
  • 21.Gurvitch R., Metzler M.W. The effects of laboratory-based and field-based practicum experience on pre-service teachers’ self-efficacy. Teach. Teach. Educ. 2009;25(3):437–443. doi: 10.1016/j.tate.2008.08.006. [DOI] [Google Scholar]
  • 22.Bernadowski C., Perry R., Greco R.D. Improving preservice teachers’ self-efficacy through service learning: lessons learned. Int. J. InStruct. 2013;6(2):67–86. [Google Scholar]
  • 23.Bandura A. W. H. Freeman and Company; 1997. Self-efficacy: the Exercise of Control. [Google Scholar]
  • 24.Christophersen K.A., Elstad E., Turmo A., Solhaug T. Teacher education programmes and their contribution to student teacher efficacy in classroom management and pupil engagement. Scand. J. Educ. Res. 2015;60(2):240–254. doi: 10.1080/00313831.2015.1024162. [DOI] [Google Scholar]
  • 25.Bandura A. Information Age Publishing; 2006. Guide for Constructing Self-Efficacy Scales. Self-Efficacy Beliefs of Adolescents; pp. 307–337. [Google Scholar]
  • 26.Bandura A. Exercise of human agency through collective efficacy. Curr. Dir. Psychol. Sci. 2000;9(3):75–78. http://cdp.sagepub.com/content/9/3/75 [Google Scholar]
  • 27.Vanderlinde R., van Braak J. The e-capacity of primary schools: development of a conceptual model and scale construction from a school improvement perspective. Comput. Educ. 2010;55(2):541–553. doi: 10.1016/j.compedu.2010.02.016. [DOI] [Google Scholar]
  • 28.Angeli C., Valanides N. Epistemological and methodological issues for the conceptualization, development, and assessment of ICT–TPCK: advances in technological pedagogical content knowledge (TPCK). Comput. Educ. 2009;52(1):154–168. doi: 10.1016/j.compedu.2008.07.006. [DOI] [Google Scholar]
  • 29.Goddard Y.L., Goddard R.D., Tschannen-Moran M. A theoretical and empirical investigation of teacher collaboration for school improvement and student achievement in public elementary schools. Teach. Coll. Rec. 2007;109(4):877–896. [Google Scholar]
  • 30.Pelgrum W.J. Obstacles to the integration of ICT in education: results from a worldwide educational assessment. Comput. Educ. 2001;37(2):163–178. doi: 10.1016/S0360-1315(01)00045-8. [DOI] [Google Scholar]
  • 31.Tearle P. A theoretical and instrumental framework for implementing change in ICT in education. Camb. J. Educ. 2004;34(3):331–351. doi: 10.1080/0305764042000289956. [DOI] [Google Scholar]
  • 32.Hew K.F., Brush T. Integrating technology into K-12 teaching and learning: current knowledge gaps and recommendations for future research. Educ. Technol. Res. Dev. 2006;55(3):223–252. doi: 10.1007/s11423-006-9022-5. [DOI] [Google Scholar]
  • 33.Albirini A. Teachers’ attitudes toward information and communication technologies: the case of Syrian EFL teachers. Comput. Educ. 2006;47(4):373–398. doi: 10.1016/j.compedu.2004.10.013. [DOI] [Google Scholar]
  • 34.Skaalvik E.M., Skaalvik S. Dimensions of teacher self-efficacy and relations with strain factors, perceived collective teacher efficacy, and teacher burnout. J. Educ. Psychol. 2007;99(3):611–625. doi: 10.1037/0022-0663.99.3.611. [DOI] [Google Scholar]
  • 35.Rotter J.B. Generalized expectancies for internal versus external control of reinforcement. Psychol. Monogr.: General Appl. 1966;80(1):1–28. doi: 10.1037/h0092976. [DOI] [PubMed] [Google Scholar]
  • 36.Ogodo J.A., Simon M., Morris D., Akubo M. Examining K-12 teachers’ digital competency and technology self-efficacy during COVID-19 pandemic. J. Higher Educ. Theory Prac. 2021;21(11):13–27. doi: 10.33423/jhetp.v21i11.4660. [DOI] [Google Scholar]
  • 37.Mannila L., Nordén L.Å., Pears A. ICER 2018-Proceedings of the 2018 ACM Conference on International Computing Education Research. 2018. Digital competence, teacher self-efficacy and training needs; pp. 78–85. [DOI] [Google Scholar]
  • 38.Nordén L.-Å., Mannila L., Pears A. 2017 IEEE Frontiers in Education Conference, Indianapolis. 2017. Development of a self-efficacy scale for digital competences in schools. [Google Scholar]
  • 39.Cabero-Almenara J., Palacios-Rodríguez A. Marco europeo de competencia digital docente «DigCompEdu». Traducción y adaptación del cuestionario «DigCompEdu Check-In». Edmetic. 2020;9(1):213–234. doi: 10.21071/edmetic.v9i1.12462. [DOI] [Google Scholar]
  • 40.Ghomi M., Redecker C. Proceedings of the 11th International Conference on Computer Supported Education. Science and Technology Publications; 2019. Digital competence of educators (DigCompEdu): development and evaluation of a self-assessment instrument for teachers' digital competence. [Google Scholar]
  • 41.Kervinen A., Portaankorva-Koivisto P., Kesler M., Kaasinen A., Juuti K., Uitto A. From pre- and in-service teachers’ asymmetric backgrounds to equal co-teaching: investigation of a professional learning model. Front. Educ. 2022;7:1–13. doi: 10.3389/feduc.2022.919332. [DOI] [Google Scholar]
  • 42.Kundu A., Bej T., Dey K.N. An empirical study on the correlation between teacher efficacy and ICT infrastructure. Int. J. Inform. Learn. Technol. 2020;37(4):213–238. doi: 10.1108/IJILT-04-2020-0050. [DOI] [Google Scholar]
  • 43.Williams B., Onsman A., Brown T. Exploratory factor analysis: a five-step guide for novices. JEPHC. 2010;8(3):1–13. [Google Scholar]
  • 44.Pett M.A., Lackey N.R., Sullivan J.J. An overview of factor analysis. In: Pett M.A., Lackey N.R., Sullivan J.J., editors. Making Sense of Factor Analysis. Sage Publications, Inc.; 2003. pp. 2–12. [Google Scholar]
  • 45.Taherdoost H., Sahibuddin S., Jalaliyoon N. Exploratory factor analysis: concepts and theory. Adv. Appl. Pure Math. 2020:375–382. https://hal.science/hal-02557344 [Google Scholar]
  • 46.Henson R.K., Roberts J.K. Use of exploratory factor analysis in published research. Educ. Psychol. Measur. 2006;66(3):393–416. http://epm.sagepub.com/content/66/3/393 [Google Scholar]
  • 47.Netemeyer R.G., Bearden W.O., Sharma S. Scaling Procedures. Sage Publications, Inc.; 2003. [Google Scholar]
  • 48.Swisher L.L., Beckstead J.W., Bebeau M.J. Factor analysis as a tool for survey analysis using a professional role orientation inventory as an example. Phys. Ther. 2004;84(9):784–799. doi: 10.1093/ptj/84.9.784. [DOI] [PubMed] [Google Scholar]
  • 49.Kline R.B. The Guilford Press; 2016. Principles and Practice of Structural Equation Modeling. [Google Scholar]
  • 50.Byrne B.M. Routledge; 2016. Structural Equation Modeling with Amos. [Google Scholar]
  • 51.Thakkar J.J. Springer; 2020. Structural Equation Modelling: Application for Research and Practice (With AMOS and R) [Google Scholar]
  • 52.Lau W.W.F., Yuen A.H.K. Factorial invariance across gender of a perceived ICT literacy scale. Learn. Indiv Differ. 2015;41:79–85. doi: 10.1016/j.lindif.2015.06.001. [DOI] [Google Scholar]
  • 53.Burton L.J., Mazerolle S.M. Survey instrument validity part I-Principles of survey instrument development and validation in athletic training education research. Athl. Train. Educ. J. 2011;6(1):27–35. doi: 10.4085/1947-380X-6.1.27. [DOI] [Google Scholar]
  • 54.Kaiser H.F. A second generation little Jiffy. Psychometrika. 1970;35(4):401–415. doi: 10.1007/BF02291817. [DOI] [Google Scholar]
  • 55.Burt C. Tests of significance in factor analysis. British J. Psychol., Statist. Sec. 1952;5(2):109–133. [Google Scholar]
  • 56.Hair J.F. Jr., Black W.C., Babin B.J., Anderson R.E. Multivariate Data Analysis. Cengage Learning; 2019. [Google Scholar]
  • 57.Elstad E., Christophersen K.A. Perceptions of digital competency among student teachers: contributing to the development of student teachers’ instructional self-efficacy in technology-rich classrooms. Educ. Sci. 2017;7(1):1–15. doi: 10.3390/educsci7010027. [DOI] [Google Scholar]
  • 58.Gudmundsdottir G.B., Hatlevik O.E. Newly qualified teachers’ professional digital competence: implications for teacher education. Eur. J. Teach. Educ. 2018;41(2):1–30. doi: 10.1080/02619768.2017.1416085. [DOI] [Google Scholar]
  • 59.Andersson C.A. Direct orthogonalization. Chemometr. Intellig. Lab. Syst. 1999;47:51–63. doi: 10.1016/S0169-7439(98)00158-0. [DOI] [Google Scholar]
  • 60.de Noord O.E. Multivariate calibration standardization. Chemometr. Intell. Lab. Syst. 1994;25:85–97. doi: 10.1016/0169-7439(94)85037-2. [DOI] [Google Scholar]
  • 61.Collier J.E. Routledge; 2020. Applied Structural Equation Modeling Using AMOS. [Google Scholar]
  • 62.Hu L., Bentler P.M. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equ. Model.: A Multidiscip. J. 1999;6(1):1–55. doi: 10.1080/10705519909540118. [DOI] [Google Scholar]
  • 63.Marsh H.W., Hau K.T., Wen Z. In search of golden rules: comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) findings. Struct. Equ. Model.: A Multidiscip. J. 2004;11(3):320–341. doi: 10.1207/s15328007sem1103_2. [DOI] [Google Scholar]
  • 64.Brown T.A. The Guilford Press; 2015. Confirmatory Factor Analysis for Applied Research. [Google Scholar]
  • 65.Fornell C., Larcker D.F. Evaluating structural equation models and unobservable variables and measurement error. J. Market. Res. 1981;18(1):39–50. doi: 10.1177/002224378101800104. [DOI] [Google Scholar]
  • 66.Tondeur J., van Braak J., Sang G., Voogt J., Fisser P., Ottenbreit-Leftwich A. Preparing pre-service teachers to integrate technology in education: a synthesis of qualitative evidence. Comput. Educ. 2012;59(1):134–144. doi: 10.1016/j.compedu.2011.10.009. [DOI] [Google Scholar]
  • 67.Fanni F., Rega I., Cantoni L. Using self-efficacy to measure primary school teachers' perception of ICT: results from two studies. Int. J. Educ. Dev. using Inf. Commun. Technol. (IJEDICT) 2013;9(1):100–111. Retrieved July 25, 2023 from https://www.learntechlib.org/p/111898/. [Google Scholar]
  • 68.Tschannen-Moran M., Hoy W.A. Teacher efficacy: capturing an elusive construct. Teach. Teach. Educ. 2001;17:783–805. doi: 10.1016/S0742-051X(01)00036-1. [DOI] [Google Scholar]
  • 69.Gil-Flores J., Rodríguez-Santero J., Torres-Gordillo J. Factors that explain the use of ICT in secondary-education classrooms: the role of teacher characteristics and school infrastructure. Comput. Hum. Behav. 2017;68:441–449. doi: 10.1016/j.chb.2016.11.057. [DOI] [Google Scholar]
  • 70.Kundu A., Bej T., Dey K. Investigating effects of self-efficacy and infrastructure on teachers’ ICT use, an extension of UTAUT. Int. J. Web Base. Learn. Teach. Technol. 2021;16(6):1–21. doi: 10.4018/IJWLTT.20211101.oa10. [DOI] [Google Scholar]
  • 71.Venkatesh V., Morris M., Davis G., Davis F. User acceptance of information technology: toward a unified view. MIS Q. 2003;27(3):425–478. doi: 10.2307/30036540. [DOI] [Google Scholar]
  • 72.Amhag L., Hellström L., Stigmar M. Teacher educators’ use of digital tools and needs for digital competence in higher education. J. Digital Learn. Teacher Educ. 2019;35(4):1–18. doi: 10.1080/21532974.2019.1646169. [DOI] [Google Scholar]
  • 73.Moses P., Bakar K.A., Mahmud R., Wong S.L. ICT infrastructure, technical and administrative support as correlates of teachers' laptop use. Procedia-Soc. Behav. Sci. 2012;59:709–714. doi: 10.1016/j.sbspro.2012.09.335. [DOI] [Google Scholar]
  • 74.Compeau D., Higgins C.A., Huff S. Social cognitive theory and individual reactions to computing technology: a longitudinal study. MIS Q. 1999;23(2):145–158. doi: 10.2307/249749. [DOI] [Google Scholar]
  • 75.Eastin M.S., LaRose R. Internet self-efficacy and the psychology of the digital divide. J. Comput.-Mediat. Commun. 2000;6(1):1–22. doi: 10.1111/j.1083-6101.2000.tb00110.x. [DOI] [Google Scholar]
  • 76.Hasan B. The influence of specific computer experiences on computer self-efficacy beliefs. Comput. Hum. Behav. 2003;19(4):443–450. doi: 10.1016/S0747-5632(02)00079-1. [DOI] [Google Scholar]
  • 77.Lindfors M., Pettersson F., Olofsson A.D. Conditions for professional digital competence: the teacher educators’ view. Educ. Inquiry. 2021;12(4):390–409. doi: 10.1080/20004508.2021.1890936. [DOI] [Google Scholar]
  • 78.Lucas M., Bem-Haja P., Siddiq F., Moreira A., Redecker C. The relation between in-service teachers’ digital competence and personal and contextual factors: what matters most? Comput. Educ. 2021;160 doi: 10.1016/j.compedu.2020.104052. [DOI] [Google Scholar]
  • 79.Røkenes F.M., Grüters R., Skaalvik C., Lie T.G., Østerlie O., Järnerot A., Humphrey K., Gjøvik Ø., Letnes M.A. Teacher educators’ professional digital competence in primary and lower secondary school teacher education. Nordic J. Digital Literacy. 2022;17(1):46–60. doi: 10.18261/njdl.17.1.4. [DOI] [Google Scholar]
  • 80.Brush T., Glazewski K., Rutowski K., et al. Integrating technology in a field-based teacher training program. The PT3@ASU project. ETR&D. 2003;51(1):57–72. doi: 10.1007/BF02504518. [DOI] [Google Scholar]
  • 81.Thompson A.D., Schmidt D.A., Davis N.E. Technology collaboratives for simultaneous renewal in teacher education. ETR&D. 2003;51(1):73–89. doi: 10.1007/BF02504519. [DOI] [Google Scholar]
  • 82.Tearle P., Golder G. The use of ICT in the teaching and learning of physical education in compulsory education: how do we prepare the workforce of the future? Eur. J. Teach. Educ. 2008;31(1):55–72. doi: 10.1080/02619760701845016. [DOI] [Google Scholar]
  • 83.Barton R., Haydn T. Trainee teachers’ views on what helps them to use information and communication technology effectively in their subject teaching. J. Comput. Assist. Learn. 2006;22(4):257–272. doi: 10.1111/j.1365-2729.2006.00175.x. [DOI] [Google Scholar]
  • 84.Galindo-Domínguez H., Bezanilla M.J. Promoting time management and self-efficacy through digital competence in university students: a mediational model. Contempor. Educ. Technol. 2021;13(2):1–14. doi: 10.30935/cedtech/9607. [DOI] [Google Scholar]
  • 85.Wang X., Cheng Z. Cross-sectional studies: strengths, weaknesses, and recommendations. Chest. 2020;158(1):S61–S71. doi: 10.1016/j.chest.2020.03.012. [DOI] [PubMed] [Google Scholar]
  • 86.Roth W.M., Lee Y.J. “Vygotsky’s neglected legacy”: cultural-historical activity theory. Rev. Educ. Res. 2007;77(2):186–232. doi: 10.3102/0034654306298273. [DOI] [Google Scholar]
