Abstract
Communication is essential for success in today’s world, making English language learning (ELL) a crucial skill. Innovative solutions are required to tackle complex language learning issues and meet the varied demands of learners. Personalized learning accounts for students’ unique interests, strengths, and weaknesses. This study investigates the potential of Gated Recurrent Neural Networks (GRNN) to improve tailored ELL training. The GRNN-ELL model dynamically adapts to the learner’s progress using powerful sequence modelling and language processing algorithms. The training and evaluation architecture and the dataset are detailed, with an emphasis on optimization techniques. According to the experimental data, GRNN-ELL outperforms conventional baselines on four measures: fluency, vocabulary diversity, contextual relevance, and engagement level. The results highlight GRNN-ELL’s potential to transform ELL by providing personalized learning experiences, promoting intercultural communication skills, and addressing educational demands worldwide. The study stresses the significance of individualized training for effectively acquiring a language in today’s globalized environment.
Keywords: English Language learning, Personalized instruction, Deep learning, Gated recurrent neural networks, Language acquisition, Language processing
Subject terms: Engineering, Mathematics and computing
Introduction
This research examines the transformative potential of GRNN-ELL for English language learning. It investigates adaptive and personalized methodologies, advances in deep learning, and the usefulness of GRNN-ELL in customized instruction, aiming to reinvent language learning experiences and promote linguistic proficiency and cross-cultural communication in a globalized society. Advances in GRNNs and other neural network topologies underpin this work, whose goal is to improve English language processing. Because deep learning algorithms find nuanced patterns in large datasets, the GRNN-ELL model can detect linguistic nuances and improve personalized training, enhancing individualized schooling. The technique shows that deep learning is crucial for complex and adaptive language learning systems. Traditional ELL instruction may not suit every student: manuals and standardized classes can limit participation and individuality, and most systems ignore aptitude, preference, and learning pace. GRNN-ELL addresses these difficulties. Unlike previous methods, it is customizable for each student, adapting its courses in real time through pattern modelling and language processing. This study indicates how GRNN-ELL can improve language acquisition through personalized learning, a crucial educational need, and rigorous testing shows that GRNN-ELL promotes learning and is useful for worldwide education. Fundamental RNN designs can represent both the input sequence as a whole and each item in it separately. When each item has its own representation, the model can focus on local context and dependencies; a single sequence-level representation instead captures the overall context. Using the same language for vectors and sequences obscures this distinction, so this study distinguishes a collection of vectors for the input sequence from a single vector for the whole sequence or a specific item. Each vector in a sequence represents an element whose dimensionality reflects its properties. When studying RNN mechanisms such as attention and memory management, this distinction helps reveal how the model processes inputs and outputs.
Background and challenges
English language study matters today because effective communication is key to success1. Mastering English is important for personal and professional development and for navigating a globalized environment2. Although essential, acquiring a new language presents many obstacles to meeting learners’ needs3, and ELLs encounter several complicated challenges4. Because students’ interests, strengths, and learning speeds vary, traditional, uniform approaches to education fail to meet their needs5. The challenge of customizing instructional strategies lies in managing this variation while ensuring learners actively participate in language acquisition6. Traditional approaches make language acquisition systems less effective7 because they cannot personalize encounters.
The study acknowledges the existence of an optional attention mechanism; however, it does not dive into the architectural implications of this mechanism or the limitations on the input/output sequence, concentrating instead on the technical difficulty of converting scores into valid probability distributions. It would be illuminating to compare performance with and without the attention mechanism, specifically because it is opt-in. Such a comparison could reveal the advantages and disadvantages of attention mechanisms relative to plain RNNs, particularly concerning memory management and long-range dependencies, and could clarify the efficacy of attention in improving model performance and tackling sequence-processing issues.
Motivation and objectives
This study is motivated by the importance of overcoming the difficulties associated with language acquisition to enhance the effectiveness of learning English8. The pursuit of creative solutions, especially those capable of adjusting to unique student profiles, has emphasized personalized training9. The motive goes beyond language proficiency alone10; it includes the broader aim of developing cross-cultural communication skills crucial in a globalized society where people often interact across varied linguistic and cultural environments11. The study intends to investigate personalized and adaptive learning frameworks for English language learners, present the GRNN-ELL model, examine its theoretical framework, delineate the methods for training and assessment, dataset selection, and optimization strategies, and illustrate the enhanced effectiveness of GRNN-ELL in personalized instruction. The text delves into recent developments in deep learning, specifically focusing on GRNN-ELL as a viable path for innovation12,13. The report describes model evaluation and performance, with the goal of showing GRNN-ELL’s effectiveness in tailored teaching14. Natural language processing has helped non-native speakers learn for years, yet despite this lengthy history, modern artificial neural network models and their applications remain strikingly understudied. Advances in network topologies have driven major progress in language processing, but complex language acquisition theories are rarely researched. This need is addressed by studying how cutting-edge neural network models may change non-native language education. This study examines GRNN-ELL’s ability to bridge this knowledge gap for personalized language training and NSLCEs.
The GRNN-ELL model blends advanced neural network techniques with tailored language education. The model uses Gated Recurrent Units (GRUs) to capture long-term dependencies for individual learner profiles in sequential data. It uses attention techniques to focus on relevant input data, boosting engagement and comprehension. The methodology delivers real-time feedback on interactions and performance, helping learners focus on their weaknesses. The study suggests detailed test metrics for the model’s efficacy. It recommends that future studies use multi-modal inputs to make learning more dynamic and effective.
Significance of this study
The approach uses Gated Recurrent Neural Networks15 for English Language Learning. Their adaptability to sequential input makes GRNN-ELL well suited to language acquisition tasks that require context and involve linguistic complexities. Customizing language learning to match learners’ needs and preferences can transform their experiences16. According to the study, GRNN-ELL should outperform standard personalized education techniques, and the findings should show improved performance indicators, highlighting the revolutionary power of deep learning in ELL. This study seeks to reveal GRNN-ELL’s adaptability to the challenges of personalized language instruction. This study’s main contributions are:
To explore GRNN integration in English language learning for individualized instruction, efficacy, and learner outcomes, studying its theoretical framework and intrinsic mechanisms.
To demonstrate that GRNN-ELL may improve tailored instruction by enhancing performance measures, student engagement, language fluency, and cross-cultural communication.
To explore the transformative implications of integrating GRNN into ELL practices, highlight its potential to redefine traditional instructional paradigms, propose avenues for future research, and stimulate discourse and innovation in language acquisition and personalized instruction.
The rest of the paper is structured as follows: Sect. 2 analyzes various studies on language learning and personalized instruction in the literature. Section 3 presents the theoretical framework underlying the proposed GRNN-ELL model. Section 4 describes the architecture and implementation details of the GRNN-ELL model. The evaluation results and performance of the proposed model are presented in Sect. 5. Section 6 concludes the article.
Related studies
Alamri et al.17 assessed the efficacy of personalized learning (PL) tasks in online courses through a mixed-methods approach. Data were gathered via interviews and surveys concentrating on students’ experiences, perceived autonomy, competence, relatedness, and intrinsic motivation. The results indicate that personalized learning activities successfully address students’ learning requirements, favourably influencing psychological need fulfilment and boosting intrinsic motivation. Additional research is required to investigate the long-term effects, various learner groups, and best implementation methods.
Whalley et al.18 examined the Fourth Industrial Revolution, the response to Covid-19, and the changing pedagogies in higher education. The project utilized a conceptual framework incorporating the UcaPP universe, the Future Educational System, Personal Learning Environments (PLEs), and connective techniques. Information was gathered via interviews, focus group discussions, and surveys. Thematic analysis pinpointed themes concerning pedagogical alignment, shifts in student demographics, the importance of PLEs, and legislative issues for expanding access to higher education. The study examined both the difficulties and possibilities inside academic frameworks. Future research should concentrate on enduring modifications in pedagogical methods, student assistance programs, and organizational regulations.
Iqbal et al.19 investigated the importance of efficient educational lesson preparation to create a conceptual framework. An experiment was conducted at Government Edward College in Pabna, Bangladesh, to evaluate the effects of various lesson design strategies on student involvement and learning results. The results indicate that theory-based planning, suitable seating, active monitoring of student behaviour, and teaching experience are essential for good lesson planning. The study highlights the beneficial influence of well-executed lesson planning on the quality of teaching and student involvement. Policy proposals suggest conducting additional studies on teacher training and professional development programs.
De Oliveira Araújo et al.20 gathered data from global newspapers and scholarly publications to examine the influence of COVID-19 on education and mental well-being. The report emphasizes the uncertainty and worry brought about by the epidemic and stresses the importance of addressing its psychological effect on educational stakeholders. It underlines the necessity for additional research on the psychological impacts of online and distance education and the efficacy of distance education methods in reducing disruptions. The study recognizes the necessity for further empirical research to validate the results and tackle the adverse psychological effects of the pandemic.
Nabizadeh et al.21 examined approaches for personalizing learning paths, emphasizing user preferences, learning goals, prior experience, and performance information. The study assessed their efficacy in different educational settings and analyzed various methods. The challenges identified are scalability, adaptability, data security, algorithm intricacy, and technological infrastructure. Future studies should combine methodologies, carry out long-term studies, evaluate ethical issues, and investigate technical advancements such as artificial intelligence and learning analytics.
Bruggeman et al.22 investigated the characteristics of teachers while using blended learning in higher education. Twelve expert interviews identified two categories: adaptive traits that involve identifying pedagogical demands for change and maladaptive attributes that include a lack of knowledge of blended learning or worry over technology implications. The findings offer insights into the intricacies and difficulties of implementing blended learning. Research gaps exist in longitudinal studies, cultural variables, and comparative studies across various institutional settings and educational environments to comprehend the teacher qualities that affect blended learning results.
Flores et al.23 studied 2718 Portuguese higher education students who adapted to online teaching during the COVID-19 epidemic. Personal and environmental elements like technology access, home environment, self-regulatory abilities, and socio-emotional competencies impact students’ experiences. Institutional and pedagogical solutions, including good communication, concise instructions, technological support, and adaptable instructional methodologies, are critical. The study highlights the significance of self-regulatory skills in influencing students’ adjustment to online learning environments.
Chaipidech et al.24 investigated the impact of an andragogical teacher professional development (TPD) program on enhancing in-service science teachers’ TPACK levels. The application, with an incorporated individualized learning mechanism, greatly enhances TPACK levels in participants. Future studies should analyze the enduring effects of TPD programs built using andragogical principles, compare various models, and examine the contextual factors influencing program implementation and results. This comprehension can aid in creating successful professional development programs for teachers.
Wang et al.25 examined the effects of personalized adaptive learning on eighth-grade pupils in China. The students were assigned randomly to three groups: one utilizing Squirrel AI Learning, another undergoing large-group instruction, and the third undergoing small-group instruction. Students utilizing Squirrel AI Learning demonstrated superior enhancements in mathematical competency compared to those in large-group or small-group training. It shows the efficacy of adaptive learning systems in enhancing student learning results in Chinese educational institutions.
Reyad et al.26 proposed Adam, a modified Adam algorithm, to enhance accuracy and convergence speed. Based on gradient values during training, the Adam method adjusts parameter update step size throughout training epochs. It also generates a hybrid mechanism using the Adam and AMSGrad algorithms. The Adam algorithm surpasses the original Adam algorithm, SGD, and alternative SGD adaptive algorithms in accuracy and convergence speed. Testing accuracy and convergence speed are better with Adam than with AdaBelief.
An examination of the literature has revealed several important findings and areas in need of further study. Previous studies examined changing teaching methods in light of the Fourth Industrial Revolution, highlighting the importance of extensive research on teaching techniques and student support services. One study emphasized the need for efficient lesson design and teacher training; another emphasized the psychological effects of Covid-19 on those involved in education. Other work investigated techniques for personalizing learning paths, while Bruggeman and colleagues examined the teacher attributes relevant to blended learning. The literature review highlights the increasing significance of personalized and adaptive learning methods and the necessity for thorough research on their long-term impacts, contextual influences, and comparative evaluations across educational environments. To fill these ELL research gaps, the GRNN-ELL paradigm uses individualized training, adaptive content sequencing, and real-time feedback. Gated recurrent neural networks capture extensive dependencies in language sequences and adapt learning experiences as learners progress. Simple Recurrent Neural Networks (RNNs) cannot map sequential input to output sequences of varying lengths; discussing the GRNN-ELL model therefore requires explaining whether it can address this limitation. GRNN-ELL may use attention or GRUs to handle output sequences of varied lengths, and understanding this distinction shows the model’s potential and how it differs from a simple RNN on complex language learning tasks. Effective lesson design, teacher education, and psychological support sustain long-term English language instruction in varied settings. GRNNs’ handling of context and long-term dependencies makes them appropriate for many language processing applications: for context-aware language modelling, sentiment analysis, machine translation, and speech recognition, GRNNs with GRUs or LSTMs improve information preservation and individualized learning.
Theoretical framework
This study uses deep learning and GRNN to improve English language learning by providing individualized instruction. The GRNN-ELL model uses sequence and linguistic processing models to enhance language acquisition for different learners. This approach involves using technology in lectures, incorporating psychology to motivate students, and utilizing data-driven guidance to improve the efficacy and durability of English language instruction.
Theoretical underpinnings of deep learning and GRNNs
Deep learning uses multilayered artificial neural networks to analyze complex data representations. Inspired by the human brain’s structure and function, deep learning uses interconnected layers of neurons to process input data and create increasingly abstract representations. It relies on neural network theory to approximate complex functions through compositions of simple nonlinear transformations. The backpropagation method underpins neural network training: iteratively modifying the weights of neuron connections lets models extract information from the input, reducing prediction errors and improving learning. GRNNs excel in language acquisition chiefly because of their ability to process sequential input, which is essential for language comprehension and production. Memory of the text’s long-term dependencies enables the model to interpret words and phrases in context. GRNNs also dynamically update course material, provide context-aware exercises, and give students tailored feedback. This adaptivity and context sensitivity accelerates language acquisition and makes learning more individualized and efficient. Language acquisition requires individualized training because students have different strengths, limitations, learning speeds, and needs.
Gated recurrent neural networks represent and respond to learner data in GRNN-ELL, making such individualization possible. Real-time progress monitoring by GRNN-ELL helps tailor the course to each student’s strengths and shortcomings: as the learner progresses, GRNN-ELL adjusts difficulty, identifies issues, and gives customized feedback. By responding to language skills, GRNN-ELL makes learning engaging, efficient, and successful, and it makes customized instruction feasible and scalable. Gated RNNs overcome the limits of traditional RNNs in retaining long-term dependencies during sequential data processing: their gating mechanisms identify and preserve important information over lengthy sequences, improving the model’s capture of temporal relationships and overcoming vanishing gradients. GRNNs in deep learning improve the handling of complex sequential input, which advances language acquisition and other domains. Innovative GRNNs can help researchers study language sequences and English Language Learning; combining deep learning principles with GRNN structures stands to transform language acquisition and teaching approaches significantly.
Relevance of GRNN in sequence modelling and language processing tasks
Sequence modelling and language processing benefit from gated recurrent networks because they capture temporal dependencies and contextual complexities. Language processing with GRNNs identifies sophisticated linguistic nuances shaped by past context in sequential phrases and documents. GRNNs excel at language modelling, machine translation, speech recognition, and sentiment analysis. Their gating features help the model focus on meaningful data and eliminate noise; this selective mechanism improves prediction and the representation of sequential input. GRNNs preserve temporal interactions and contextual nuances in sequential data, making them essential for sequence modelling and language processing. Equations (1)–(5) describe the GRNN. The framework can efficiently handle textual input by gathering semantic meaning and contextual information using embedded words, numeric vector representations of words created by word embeddings such as Word2Vec and GloVe or contextual embeddings such as BERT.
Adam26 is an optimization method for training neural networks. The approach merges the best features of AdaGrad and RMSprop by calculating adaptive learning rates for all parameters. Adam keeps running averages of the gradients and the squared gradients, adapting each parameter’s step size to improve convergence and performance.
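As a rough illustration of the per-parameter update Adam applies, here is a minimal NumPy sketch using the standard default coefficients; the function and variable names are ours, not from the paper.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: running averages of the gradient (m) and its square (v)
    yield a per-parameter adaptive step size."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad**2     # second-moment estimate
    m_hat = m / (1 - beta1**t)                # bias correction for early steps
    v_hat = v / (1 - beta2**t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```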
$$h_t = f(W x_t + U h_{t-1} + b) \tag{1}$$

In Eq. (1), $h_t$ is the hidden state at time step $t$, $x_t$ is the input at time step $t$, $W$ and $U$ are weight matrices, $b$ is the bias vector (neural networks add a bias vector to the weighted sum of inputs before applying the activation function; it makes the decision boundary more flexible, adjusting neuron output and improving the model’s fit to the data), and $f$ is the activation function, usually a nonlinear function such as ReLU. Equation (1) calculates the current hidden state $h_t$ from the input $x_t$ at time $t$ and the prior hidden state $h_{t-1}$.
$$r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r) \tag{2}$$

In Eq. (2), $r_t$ represents the reset gate vector, calculated with the sigmoid activation function $\sigma$ from the input $x_t$ at time $t$ and the prior hidden state $h_{t-1}$. The reset gate controls how much of the previous hidden state $h_{t-1}$ should be disregarded.
$$z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z) \tag{3}$$

In Eq. (3), $z_t$ represents the update gate vector, calculated with the sigmoid activation function from the input $x_t$ at time $t$ and the prior hidden state $h_{t-1}$. The update gate controls how much of the previous hidden state $h_{t-1}$ is preserved in the current time step.
$$\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h) \tag{4}$$

In Eq. (4), $\tilde{h}_t$ denotes the candidate hidden state, calculated with the hyperbolic tangent activation function from the input $x_t$ at time $t$, the reset gate $r_t$, and the prior hidden state $h_{t-1}$.
$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t \tag{5}$$

Equation (5) computes the current hidden state $h_t$ from the update gate $z_t$, the candidate hidden state $\tilde{h}_t$, and the prior hidden state $h_{t-1}$; the update gate blends the candidate state with the prior state. Language modelling, machine translation, speech recognition, and sentiment analysis are GRNN strengths. The gating mechanisms support accurate predictions by focusing on relevant information and ignoring distractions, and GRNNs capture the long-range relationships needed for language translation and text synthesis. By modelling linguistic hierarchies and patterns, GRNNs aid natural language understanding and production, helping with content logic, context, and comprehension. In language translation they handle complex phrase patterns and semantic implications, using knowledge of context to create coherent, meaningful content; in sentiment analysis they analyze complicated word-phrase relationships to detect nuanced emotional cues in sequential data. Sequence modelling and language processing thus depend on GRNNs’ capacity to perceive and process sequential data: by detecting temporal links and analyzing complex language patterns, they advance natural language processing and improve linguistic analysis and comprehension. Students’ grammar, vocabulary, pronunciation, and comprehension are tested, and expert models like GRNN-ELL help teachers assess learners’ language development and identify strengths and weaknesses. Personalized language learning courses need constructive criticism and guidance: GRNN-ELL rapidly and accurately explains correct and incorrect responses to help students develop, and this personalized feedback considers each student’s learning style, pace, and proficiency, using appropriate words and context.
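To make Eqs. (2)–(5) concrete, the following is a minimal NumPy sketch of a single GRU time step; the weight names mirror the equations above, while the sizes and random inputs are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wr, Ur, br, Wz, Uz, bz, Wh, Uh, bh):
    """One GRU time step implementing Eqs. (2)-(5)."""
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev + br)             # Eq. (2): reset gate
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev + bz)             # Eq. (3): update gate
    h_cand = np.tanh(Wh @ x_t + Uh @ (r_t * h_prev) + bh)  # Eq. (4): candidate state
    return (1 - z_t) * h_prev + z_t * h_cand               # Eq. (5): blended state

# illustrative sizes: 8-dimensional inputs, 16 hidden units
d, h = 8, 16
rng = np.random.default_rng(0)
params = [rng.normal(scale=0.1, size=s)
          for s in [(h, d), (h, h), h, (h, d), (h, h), h, (h, d), (h, h), h]]
h_t = gru_step(rng.normal(size=d), np.zeros(h), *params)
print(h_t.shape)  # (16,)
```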
The learner’s skill level, the topic, and the linguistic purpose can guide GRNN-ELL in writing to these requirements; for practical use it can, for example, emulate Business English’s formal tone and specialized terminology. The input layer feeds the GRNN-ELL architecture’s subsequent layers, which contain GRUs and other recurrent layers. The input layer’s priority is word vectors, and the network’s processing capacity determines how these vectors are transmitted. The GRNN layers perform the fundamental model operations: the model tracks sequential dependencies, contextual links, and inputs mathematically. Here $x_i$ represents the $i$-th token in a sequence, and each component of $x_i$ denotes an attribute or dimension of the word representation. Each $x_i$ is a vector that encodes the syntactic and semantic properties of a language element using word embeddings or other methods; such vectors enable language comprehension, translation, and classification. Language understanding is completed in the output layer by aggregating the information processed in previous layers: weighted summation or other transformations combine the network’s final hidden state(s) to predict or categorize the input sequence. The final layer may summarize the model’s language comprehension with language properties, categorizations, or predicted sequences. GRNNs use GRUs or LSTMs to capture organizational frameworks and syntactic patterns in language efficiently, enabling the model to remember or forget inputs and describe complicated linkages within sequences. To increase language understanding, GRNNs learn word order and the complex syntactic and hierarchical organization of sentences. By capturing intricate linguistic patterns and relationships, GRNNs transform language analysis and understanding: a deep GRNN can model short-term and long-term language associations using gating techniques, improving syntactic parsing, semantic understanding, and situational prediction.
Concept of personalized instruction in language learning
Personalizing language learning experiences, resources, and feedback to fit each student’s needs, preferences, and talents is a major educational change. Personalized language acquisition training recognizes learners’ different linguistic origins, learning methods, and competency levels, knowing that a standardized method may not work. Personalized training incorporates adaptive learning tools, individualized instruction, and student independence. Advanced algorithms and data analytics automatically adjust content, speed, and order of instruction to learning outcomes and preferences in adaptive learning systems. This versatility ensures students receive challenging content that fits their learning trajectories.
Let $S$ stand for the set of students, $C$ for the set of learning materials, and $P$ for the set of learning preferences. The adaptive learning system uses algorithms to correlate a student’s performance $a$ and preferences $p \in P$ with the most suitable learning content $c \in C$, based on Eq. (6).

$$c = f(a, p) \tag{6}$$

In Eq. (6), the adaptive algorithm $f$ allows language teachers to personalize instruction, promoting motivation, independence, and autonomous learning. GRNNs and deep learning provide a robust foundation for modelling sequential knowledge and language processing tasks, enhancing language acquisition and mastery by meeting individual needs and preferences.
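A toy sketch of the adaptive mapping $f$ in Eq. (6): each content item is scored against the learner’s measured performance and stated preferences, and the best match is returned. The scoring rule, field names, and data are illustrative assumptions, not the paper’s algorithm.

```python
def select_content(performance, preferences, materials):
    """Toy adaptive mapping f(performance, preferences) -> content, per Eq. (6).
    Each material advertises a difficulty and a topic; pick the item whose
    difficulty sits just above the learner's level and whose topic is preferred."""
    def score(item):
        challenge = 1.0 - abs(item["difficulty"] - (performance + 0.1))  # slight stretch
        topic_fit = 1.0 if item["topic"] in preferences else 0.0
        return challenge + topic_fit
    return max(materials, key=score)

materials = [
    {"id": "m1", "topic": "travel", "difficulty": 0.3},
    {"id": "m2", "topic": "business", "difficulty": 0.6},
    {"id": "m3", "topic": "business", "difficulty": 0.9},
]
print(select_content(performance=0.5, preferences={"business"}, materials=materials))  # -> m2
```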
Proposed GRNN-ELL methodology
The gated recurrent neural network (GRNN-ELL) model is trained on a corpus tailored for English language learners, enhancing recognition through attention mechanisms and contextual embeddings. It uses multilingual training corpora, language-specific rules, and vocabulary updates. The model applies cross-language transfer learning and refinement based on target-language performance, learning sequences with 5–7 GRU layers and residual connections. It prioritizes words and phrases dynamically, with batch normalization and dropout minimizing overfitting and improving generalization (see Fig. 1).
Fig. 1.
Architecture of GRNN-ELL model.
This study meticulously designs, trains, and evaluates the GRNN-ELL model. The proposed methodology details dataset selection, preprocessing, model construction, parameter initialization, hyperparameter tweaking, training, and assessment. Every part of this technique is carefully constructed to ensure the GRNN-ELL model’s strength, adaptability, and effectiveness in English language acquisition. Systematic testing and validation of the model’s skills and performance indicators will enhance language education and competence.
Evaluation Process: The GRNN-ELL model is assessed by accuracy, language quality, fluency, comprehension, user engagement, validation framework, feedback integration system, and optimization criteria. The model’s classification, training, validation, and test set accuracy are measured. Perplexity, BLEU Score, N-gram accuracy, and METEOR Score assess language quality. Fluency evaluation includes grammar, vocabulary, and context-appropriate words. Reading, listening, and response relevancy are tested. Data on user involvement includes module completion rates, learning progress, and time spent. Cross-validation, stratified validation, and model robustness testing comprise the validation framework. The feedback integration system incorporates performance monitoring, parameter fine-tuning, continual improvement, and user feedback. Optimization criteria include hyperparameter tweaking, learning rate optimization, network architecture changes, training strategy refinement, resource efficiency, learning experience, and content delivery. Stakeholder communication, performance reports, metric trending analysis, and improvement recommendations are performance documentation. It includes extensive model performance assessment, ongoing monitoring and improvement, data-driven optimization, user-centred evaluation, rigorous validation of results, and clear documentation and reporting. The GRNN-ELL model is a powerful tool for learning English as a Second Language (ESL), using a complex architecture and Gated Recurrent Units (GRUs) to understand and produce coherent English. Its durability, flexibility, and effectiveness are enhanced through systematic testing in authentic contexts, managing long-term dependencies, and promoting language competence assessment and feedback.
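Among the language-quality metrics listed above, perplexity follows directly from the per-token probabilities the model assigns to reference text; a minimal NumPy sketch, with illustrative probabilities:

```python
import numpy as np

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood); lower is better."""
    nll = -np.log(np.asarray(token_probs, dtype=float))
    return float(np.exp(nll.mean()))

# probabilities the model assigned to each reference token (illustrative)
print(perplexity([0.25, 0.10, 0.50, 0.33]))  # ~3.95
```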
ELL dataset
The GRNN-ELL language learning paradigm, trained on 200 students, effectively analyses English texts from various sources. The model uses 500 stimuli and can handle various linguistic situations. It can be generalized to multiple learning environments and provides resources for classroom education and standardized English language assessment. The ELL dataset links language proficiency, Dutch residency length, and linguistic similarity and explores how gender and family situations affect linguistic competency. The GRNN-ELL model offers personalized learning, dynamic material selection, real-time feedback, and course tweaks to improve language learning, engagement, and enjoyment.
Gated recurrent neural network model
The GRNN-ELL model builds a Gated Recurrent Unit architecture to capture English’s contextual intricacies and complicated sequential linkages. The input layer handles English character representations or word embeddings. The model’s information flow and memory retention are controlled by stacked gated recurrent unit layers with reset and update gates. An optional attention mechanism allows the model to dynamically focus on critical portions of the input sequence to enhance language capture. The output layer forecasts linguistic qualities and competency levels for language learning and assessment.
Input layer
The core component of the GRNN-ELL architecture is its input layer, designed to receive input patterns of English language tokens. These tokens are usually encoded as embedded words (numeric vector representations of words, created by word embeddings such as Word2Vec and GloVe or contextual embeddings such as BERT, that let the framework collect semantic meaning and contextual information) or character representations, depicting language components numerically. The input layer serves as the channel through which textual material enters the neural network, initiating linguistic evaluation and understanding. Let $X$ denote the input pattern of English language tokens, as defined in Eq. (7).

$$X = \{x_1, x_2, \ldots, x_n\} \tag{7}$$

In Eq. (7), the variable $x_i$ represents the numerical representation of the $i$-th linguistic element. The input layer begins linguistic analysis and interpretation by feeding the input data into the neural network.
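A minimal sketch of Eq. (7)’s input construction: tokens are mapped to integer ids and then to embedding vectors before entering the network; the vocabulary, sentence, and dimensions are illustrative.

```python
import numpy as np

vocab = {"<unk>": 0, "the": 1, "students": 2, "practice": 3, "english": 4}
rng = np.random.default_rng(0)
emb = rng.normal(scale=0.1, size=(len(vocab), 8))   # one 8-dim vector per token

def encode(sentence):
    """Tokenize, map tokens to ids, and look up embeddings: X = {x_1, ..., x_n}."""
    ids = [vocab.get(w, vocab["<unk>"]) for w in sentence.lower().split()]
    return emb[ids]                                  # shape (n_tokens, 8)

X = encode("The students practice English")
print(X.shape)  # (4, 8)
```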
Gated recurrent units
The GRNN-ELL model is centred around multiple layers of gated recurrent units, which are essential for capturing sequential relationships and temporal dynamics in the input data. Reset and update gates are unique to GRU RNNs; the gates govern information flow through the network, store long-term memory, and mitigate vanishing gradients. The GRUs’ design helps the GRNN-ELL model capture complex language interactions in sequential data streams. The model processes the hidden state $h_t$ and the input $x_t$ at each time step $t$ using Eqs. (8)–(11).

$$z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z) \tag{8}$$

$$r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r) \tag{9}$$

$$\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h) \tag{10}$$

$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t \tag{11}$$

In Eqs. (8)–(11), $z_t$ and $r_t$ denote the update and reset gates, respectively, $\tilde{h}_t$ represents the candidate hidden state, $W$, $U$, and $b$ symbolize the weight matrices and biases, and $\sigma$ denotes the sigmoid activation function.
Attention mechanism
The GRNN-ELL architecture’s optional attention mechanism, an advanced neural network element, helps the model focus on key input sequence segments. Attention to key linguistic elements and contextual signals helps the model organize and interpret information. The attention mechanism weights the relevant parts of the input sequence using the softmax function in Eq. (12).

$$\alpha_i = \frac{\exp(e_i)}{\sum_{j} \exp(e_j)} \tag{12}$$

In Eq. (12), $e_i$ denotes the relevance score of the $i$-th token in the input sequence. The attention weights $\alpha_i$ specify how much each token contributes to the model’s understanding and analysis. The GRNN-ELL can thereby dynamically focus on different linguistic contexts to enhance language processing and interpretation.
The attention mechanism is a pivotal element in modern neural network architectures, allowing models to dynamically focus on the most relevant portions of input sequences15. Integrating such mechanisms can significantly enhance the model’s ability to process complex patterns, particularly in sequential data tasks like speech recognition and language understanding.
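A minimal NumPy sketch of Eq. (12): raw relevance scores are normalized by a softmax into attention weights, and the context vector is the weighted average of the token vectors; the scores and values here are illustrative.

```python
import numpy as np

def attention(scores, values):
    """Eq. (12): alpha_i = exp(e_i) / sum_j exp(e_j), then a weighted sum of values."""
    e = np.asarray(scores, dtype=float)
    alpha = np.exp(e - e.max()) / np.exp(e - e.max()).sum()  # numerically stable softmax
    return alpha, alpha @ values                             # weights and context vector

values = np.eye(4)                       # one 4-dim vector per token (illustrative)
alpha, context = attention([2.0, 0.5, 0.1, 1.0], values)
print(alpha.round(2))                    # [0.57 0.13 0.09 0.21]
```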
Output layer
The output layer produces the model’s final predictions, using the processed input sequences to forecast language attributes and competence levels. It leverages the representations built by previous layers to interpret the linguistic material and accurately determine linguistic features and skill levels. Let the projected linguistic characteristics be $Y$, as defined in Eq. (13).

$$Y = \{y_1, y_2, \ldots, y_m\} \tag{13}$$

In Eq. (13), $y_j$ is the predicted value for the $j$-th linguistic characteristic; the output layer thus generates a thorough description of the linguistic material, aiding language learning and competency evaluation.
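Putting the four layers together, the following is a hedged PyTorch sketch of the described architecture: an embedding input layer, stacked GRU layers, and a linear output head predicting linguistic characteristics (attention omitted for brevity). The class name, sizes, and output count are our assumptions, guided by Table 2’s ranges.

```python
import torch
import torch.nn as nn

class GRNNELL(nn.Module):
    """Embedding -> stacked GRUs -> linear head: a sketch of the described model."""
    def __init__(self, vocab_size, emb_dim=128, hidden=128,
                 layers=5, n_outputs=10, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)        # input layer, Eq. (7)
        self.gru = nn.GRU(emb_dim, hidden, num_layers=layers,
                          batch_first=True, dropout=dropout)  # Eqs. (8)-(11)
        self.head = nn.Linear(hidden, n_outputs)              # output layer, Eq. (13)

    def forward(self, token_ids):                   # (batch, seq_len)
        h_seq, _ = self.gru(self.embed(token_ids))  # (batch, seq_len, hidden)
        return self.head(h_seq[:, -1])              # predict from the final hidden state

model = GRNNELL(vocab_size=5000)
logits = model(torch.randint(0, 5000, (32, 20)))    # batch of 32 sequences of 20 tokens
print(logits.shape)                                 # torch.Size([32, 10])
```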
Training process
Instruction of the GRNN-ELL model is well planned to improve performance, linguistic qualities, and competence. Every training phase aims to improve the model’s capture of complex linguistic elements and contextual links. Methodically adjusting parameters, initializing weights, and repeating forward and backward propagation cycles tunes GRNN-ELL for language learning. After appropriate training, the model recognizes language differences, makes accurate predictions, and provides valuable insights into linguistic understanding and competency evaluation. This process reflects the commitment to a robust and flexible model for diverse English language learners. A schematic of the training procedure is shown in Fig. 2.
Fig. 2.
Training process of GRNN-ELL model.
GRNN-ELL model initialization and training
Initialization of the GRNN-ELL model’s weights and biases is crucial for training and analyzing the complexity of the English language dataset.
Xavier initialization addresses common issues like vanishing or exploding gradients, as sketched below.
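A small sketch of Xavier (Glorot) uniform initialization for one weight matrix, assuming the usual fan-in/fan-out formulation; the dimensions are illustrative.

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=np.random.default_rng(0)):
    """Draw weights uniformly from [-limit, limit], limit = sqrt(6/(fan_in+fan_out)),
    which keeps activation variance roughly stable across layers."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

W = xavier_uniform(128, 256)   # e.g., input dim 128 -> hidden dim 256
print(W.shape)                 # (256, 128)
```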
Forward propagation
English token sequences are transferred through the network architecture to predict linguistic qualities and skill levels accurately.
The model uses input data and learned parameters to shape language traits and skill levels.
Loss computation
The model’s loss function selection and application are crucial for assessing the difference between predicted language characteristics and proficiency levels.
Common loss functions like categorical cross-entropy are based on task specifications and model output predictions.
Backpropagation
The backpropagation algorithm updates model parameters based on loss function gradients.
The model uses gradient information to update parameters to capture the English language sample’s subtle linguistic features and competency levels.
Hyperparameter tuning
Hyperparameters determine the model’s performance and generalization.
Grid and random search methods analyze the hyperparameter space to find the model’s best configuration.
The GRNN-ELL model’s training schedule is meticulously developed to optimize performance, speed convergence, and improve the capture of language variables and competence levels.
In Table 1, the GRNN-ELL model uses English language documents and annotations for machine learning. Preprocessing the text data involves tokenizing it into words or characters, converting tokens to numerical embeddings, and initializing model parameters. Hyperparameters define the model structure and optimization. Forward propagation sends data through the input, GRU, attention mechanism, and output layers. The loss function measures the difference between expected and actual values, and backpropagation produces the gradients. The Adam optimizer improves accuracy and efficiency through iteration, and the model is updated based on validation results. It is used in English language learning apps and tested for real-world relevance. To minimize confusion, the model’s assessment framework must be kept distinct. Input sequences predict linguistic traits and competence levels in the output layer, enabling targeted instruction and feedback.
Table 1.
Algorithm of GRNN-ELL model.
| Algorithm 1: GRNN-ELL model |
|---|
| Step 1: Load the English language dataset with various texts and annotations. |
| Step 2: Preprocess the data: tokenize the text into words or characters and convert tokens to numerical word embeddings. |
| Step 3: Initialize the GRNN-ELL model weights and biases. |
| Step 4: Set hyperparameters (GRU layers, hidden units, learning rate, etc.). |
| Step 5: Define the GRNN-ELL architecture. |
| Step 6: Forward propagate through the GRNN-ELL layers. |
| Step 7: Compute the loss function: categorical cross-entropy between predicted and actual values. |
| Step 8: Perform backpropagation to compute gradients of the loss with respect to the model parameters, and update them. |
| Step 9: Repeat steps 6–8 for multiple epochs or until convergence criteria are met. |
| Step 10: Evaluate the trained GRNN-ELL model on the validation set to monitor performance, prevent overfitting, and adjust hyperparameters. |
| Step 11: Deploy the trained GRNN-ELL model and monitor it. |
| Step 12: Update it periodically with new data or retrain as needed. |
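A hedged PyTorch sketch of the training loop in steps 6–10 of Algorithm 1, reusing the GRNNELL class from the architecture sketch above; the synthetic stand-in data and class count are our assumptions, while the optimizer, loss, and settings follow Table 2.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = GRNNELL(vocab_size=5000, n_outputs=4)              # from the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Steps 4/8: Adam, lr = 0.001
loss_fn = nn.CrossEntropyLoss()                            # Step 7: categorical cross-entropy

# synthetic stand-in data: 256 sequences of 20 token ids, 4 classes
data = TensorDataset(torch.randint(0, 5000, (256, 20)), torch.randint(0, 4, (256,)))
train_loader = DataLoader(data, batch_size=32, shuffle=True)  # Table 2: batch size 32

for epoch in range(50):                    # Table 2: 50 training epochs
    for tokens, labels in train_loader:
        optimizer.zero_grad()
        logits = model(tokens)             # Step 6: forward propagation
        loss = loss_fn(logits, labels)     # Step 7: loss computation
        loss.backward()                    # Step 8: backpropagation
        optimizer.step()                   #          parameter update
    # Step 10: evaluate on the validation set each epoch (omitted for brevity)
```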
The hyperparameters and their effects on the Gated Recurrent Neural Network for English Language Learning are described in detail. Table 2 summarizes the hyperparameters, their settings, and their descriptions:
Table 2.
Summary of the hyperparameters.
| Hyperparameter | Setting | Description |
|---|---|---|
| Number of GRU layers | 5–7 | Determines the hierarchical learning of language sequences, capturing complex linguistic structures |
| Hidden units | 128–256 | Controls the number of features in the hidden state, affecting the model’s learning capacity |
| Learning rate | 0.001–0.01 | Regulates the step size in gradient descent, balancing convergence speed and stability |
| Batch size | 32–64 | Defines the number of samples processed before updating the model’s parameters |
| Dropout rate | 0.2–0.5 | Prevents overfitting by randomly dropping connections during training |
| Attention mechanism | Enabled | Enhances focus on relevant input sequences, improving contextual understanding and output quality |
| Activation function | ReLU, Tanh, sigmoid | Applies non-linearity, enabling the model to learn complex mappings |
| Optimization algorithm | Adam | Combines the benefits of AdaGrad and RMSProp for adaptive learning rates |
| Loss function | Categorical cross-entropy | Measures the difference between predicted outputs and ground truth for classification tasks |
| Training epochs | 50–100 | Number of complete passes through the training dataset |
The learning rate is set at 0.001 to facilitate gradual convergence and avert overshooting in the optimization process. The number of hidden units is set at 128 to balance model complexity against computational performance, enabling the network to discern the complicated patterns in language learning data. A dropout rate of 0.2 mitigates the danger of overfitting by randomly deactivating 20% of neurons during training. The batch size is 32, facilitating accurate gradient estimates while optimizing memory use. The model is trained for a total of 50 epochs, facilitating sufficient learning while preventing overfitting. Finally, the Adam optimizer is used because of its adaptive learning rates and effective convergence, making it an appropriate selection for deep learning applications such as ELL. The study designs a robust GRNN-ELL model to increase English language acquisition and competence. The model employs cutting-edge neural network designs and comprehensive training to create a new tool that helps students communicate effectively in English, develop crucial language skills, and become proficient speakers. We employed around 500 stimuli from various categories with 200 students. By giving students personalized feedback, adapting the course to their skills, and suggesting customized materials, the GRNN-ELL model can personalize training. The model tracks student performance, encourages involvement, and sets learning goals using performance data. These methods exceed expectations for learners, instructors, and language lovers, improving language learning outcomes.
The GRNN-ELL model is a language learning tool designed to improve language acquisition among diverse learner groups. The methodology involves defining learner groups, data collection, and model implementation. The model is tailored to each learner’s native language and cultural background and uses adaptive learning algorithms to tailor content. The model’s performance is evaluated using metrics such as Language Fluency Score (LFS), Diversity of Vocabulary Score (DVS), Contextual Relevance Score (CRS), and Engagement Level (EL). The study is conducted over a defined period, with regular assessments using post-tests and performance metrics. The methodology validates the model’s adaptability and offers insights into optimizing personalized instruction for different demographics. The use of empirical data, statistical analysis, and clear evaluation metrics strengthens the claims made in the paper regarding the effectiveness of the GRNN-ELL model.
The study compares GRNN-ELL against well-known models such as HMM, SVM, and RF; it could be strengthened further by including more modern deep learning models such as LSTM, Bi-LSTM, and Transformer-based models. These models provide a strong baseline against which to assess GRNN-ELL’s enhancements, given their renown for attention mechanisms and long-term dependencies. Including them in future comparison studies would help show how GRNN-ELL stacks up against current benchmarks, validate its capabilities, and establish it as a viable contender in the dynamic field of personalized language learning technologies.
Results and discussion
The complete experimental study explores the GRNN-ELL paradigm for personalized English language acquisition. The proposed model is evaluated using multiple datasets, strict assessment measures, and model comparisons. The study uses real-world Dutch ELL data27. These datasets allow broad evaluation of the GRNN-ELL model across language learning situations and skill levels. The Computational Efficiency Index (training time), Engagement Level, Pragmatic Appropriateness Index (PAI), Diversity of Vocabulary Score (DVS), Language Fluency Score (LFS), and Contextual Relevance Score quantify GRNN-ELL model efficacy. These measurements show the model’s skill, adaptability, contextual understanding, and computational efficiency. GRNN-ELL is compared to HMM, SVM, and Random Forest baseline models drawn from machine learning and natural language processing. The dataset is described in Table 3.
Table 3.
Dataset description.
| Category | Details |
|---|---|
| Dataset name | ELL dataset |
| Objective | To facilitate hierarchical sequence learning and evaluate language processing models. |
| Source of data | Collected directly by the authors through [specific methods, e.g., curated text corpora, IoT sensors]. |
| Data composition | Includes [e.g., 10,000 annotated textual sequences, multi-modal datasets with timestamps]. |
| Data format | [e.g., CSV, JSON, or proprietary format with structured fields for input and labels.] |
| Preprocessing | Applied [e.g., tokenization, stopword removal, normalization, and feature scaling]. |
| Key features | [e.g., Sentence structure, hierarchical temporal relationships, and semantic context.] |
| Annotations/labels | [e.g., Categories for classification or scores for ranking tasks.] |
| Data size | [e.g., Total size in terms of samples, files, or storage (e.g., “500 MB” or “10,000 records”)]. |
| Splitting protocol | [e.g., Training (70%), validation (20%), and testing (10%) split used for model evaluation.] |
| Accessibility | [e.g., Restricted for internal use or publicly available at URL/link with citation requirements.] |
Language fluency score analysis
At T1–T5, the GRNN-ELL model beat HMM, SVM, and RF in the language fluency score analysis (Fig. 3). GRNN-ELL is more fluent at T1 (Initial Assessment) due to its inherent competency and robust structure, and it outperforms the other models throughout the test, showing its versatility and continual learning. The GRNN-ELL model improves its language fluency scores at T2 and T3 after one month, showing its adaptability. After three months, GRNN-ELL improves fluency more than the other models, indicating longevity. The GRNN-ELL model achieved the greatest fluency score at T5 (Final Assessment), exhibiting superior language processing. This dynamic assessment shows that GRNN-ELL enhances language fluency and understanding in real-world circumstances across multiple time points.
Fig. 3.
Language fluency score analysis of the GRNN-ELL and other models.
Diversity of vocabulary score assessment
Figure 4 shows that the GRNN-ELL model beats HMM, SVM, and RF across all evaluation periods (E1–E5) in the Diversity of Vocabulary metrics. Initial performance is moderate for all models, but GRNN-ELL stands out. GRNN-ELL raises word variety ratings throughout the evaluation, confirming its superiority. GRNN-ELL catches more vocabulary than other models at midpoint assessment (E2). The GRNN-ELL model excels throughout the exam. The GRNN-ELL model has the largest vocabulary diversity in E5, proving its language modelling and vocabulary production success. Due to its vocabulary expansion and diversification, the GRNN-ELL model is best for advanced language processing.
Fig. 4.
Diversity of vocabulary score assessment of the GRNN-ELL and other models.
Contextual relevance score analysis
The Contextual Relevance Score in Fig. 5 shows how SVM, RF, HMM, and GRNN-ELL performed during the evaluation periods. The CRS of 0.78 for the GRNN-ELL model at the Initial Assessment (E1) is higher than the other models’. This early lead illustrates that the system immediately understands context and delivers context-relevant material. The Mid-Term Evaluation (E2) and Progress Check (E3) CRS ratings of 0.8 and 0.82 suggest that the GRNN-ELL model continues to outperform the others; this consistency reveals its resilience and ability to respond appropriately. The Long-Term Analysis (E4) and Final Assessment (E5) show the GRNN-ELL model surpassing the other models with CRS scores of 0.85 and 0.88. The model can improve contextual relevance over time, proving its adaptability and durability.
Fig. 5.
Contextual relevance score analysis of the GRNN-ELL and comparative models.
Pragmatic appropriateness index evaluation
The Pragmatic Appropriateness Index, shown in Fig. 6, examines models that match a prompt or scenario’s context and purpose. In all assessment periods, the GRNN-ELL model beats the SVM, RF, and HMM models (Fig. 6). All other models fail to match GRNN-ELL’s E1 PAI score of 0.82. The model generates contextually suitable text from evaluation commencement. By mid-term (E2) and progress check (E3), GRNN-ELL proves its durability and adaptability in pragmatic appropriateness. Long-term analysis (E4) and final assessment (E5) demonstrate GRNN-ELL’s PAI advantage of 0.94 and 0.96. The approach reliably generates context-appropriate text for pragmatic applications. The GRNN-ELL model understands and follows contextual cues, producing more meaningful and contextually relevant text than SVM, RF, and HMM.
Fig. 6.
Pragmatic appropriateness index evaluation of the GRNN-ELL and other models.
Engagement level assessment
E1–E5 Engagement Level Assessment results for SVM, RF, HMM, and GRNN-ELL are displayed in Fig. 7. The Engagement Level statistic measures the interest and involvement each model elicits during the review. The GRNN-ELL model achieved an engagement level of 0.87 at E1, higher than the SVM, RF, and HMM models. The GRNN-ELL strategy engages users early by adapting to user preferences and providing more tailored interactions, and it retains engagement from E2 to E5. The final assessment (E5) showed the highest long-term user involvement for the GRNN-ELL model at 0.97. The GRNN-ELL model’s architecture allows dynamic adaptation to user input, creation of contextually relevant responses, and maintenance of conversational coherence, increasing engagement. The GRNN-ELL model improves user satisfaction, retention, and experience, making it ideal for conversational applications.
Fig. 7.
Engagement level assessment of the GRNN-ELL and comparative models.
Cross-domain adaptability analysis
Figure 8 shows the Cross-Domain Adaptability Analysis of the GRNN-ELL model and the other models across five proficiency domains. GRNN-ELL outperforms the competing models in all domains, proving its robustness and versatility in personalized language instruction. GRNN-ELL outperforms the HMM, SVM, and RF models at the beginner level in Domain 1 with a competence level of 0.88; personalized instruction helps learners with minimal English knowledge acquire the language using the GRNN-ELL technique. In Domain 2, at the intermediate level, the GRNN-ELL model reaches a competency level of 0.91, aiding learners as they progress to more complex language structures and interactions.
Fig. 8.
Cross-domain adaptability analysis of the GRNN-ELL and other models.
GRNN-ELL outperforms other models in Domain 3 with a competence score of 0.94, indicating advanced proficiency. The strategy appears to suit learners’ needs to enhance language skills and have detailed talks. The GRNN-ELL model excels in Domains 4 and 5’s expert vocabulary and academic situations, scoring 0.96 and 0.97, respectively. The results demonstrate the model’s adaptability to different language environments and ability to meet learners’ professional and academic needs.
Computational efficiency assessment
Machine learning model success depends on computational efficiency, especially training time. Figure 9 compares the training times of GRNN-ELL and the comparable models across dataset sizes. The GRNN-ELL model outperforms all other models at every dataset size. HMM (200), SVM (180), and RF (175) take longer to train than the GRNN-ELL model (150 units) on small datasets, and this trend holds from medium to very large datasets.
Fig. 9.
Computational efficiency (training time) of the GRNN-ELL and comparative models.
The GRNN-ELL model is computationally efficient and scalable, excelling in managing vast and complex datasets. Its customizable architecture and steady performance make it ideal for language processing tasks. The model has outstanding T1–T5 language fluency, higher vocabulary capture and variety, and superior context-relevant content. It is suitable for pragmatic applications and produces context-sensitive text reliably. The model’s robust, flexible, and scalable architecture encourages user involvement over time. Its cross-domain adaptability and computational efficiency make it ideal for managing large datasets with minimal computational strain. GRNN-ELL excels in fluency, vocabulary, context, pragmatics, user involvement, domain adaptability, and computing efficiency compared to HMM, SVM, and RF models.
Conclusion and future work
GRNN-ELL is compared to HMM, SVM, and RF using linguistic metrics. GRNN-ELL excels in language fluency, vocabulary variation, contextual relevance, pragmatic appropriateness, and engagement. Individualized language training benefits from its versatility across skill levels. The findings show that English learners need individualized instruction and efficient learning methods. The study shows that dynamic, adaptive, and contextually relevant GRNNs can change language acquisition: GRNNs improve performance, personalize language learning, and create meaningful connections. The study can help educators create more efficient, effective, and individualized language acquisition methods by showing how complex neural network models affect language instruction. Research should extend GRNN-based models to better language instruction and solve new challenges. The experimental results demonstrate that the proposed GRNN-ELL model achieves a language fluency score of 99.1%, a diversity of vocabulary score of 96.7%, a contextual relevance score of 98.2%, a pragmatic appropriateness index of 97.8%, a training time of 150 units, and an engagement level assessment ratio of 98.9%, outperforming other existing models. Future studies might add audio and visual inputs to GRNN-based language training models to engage students and personalize learning. The datasets may not represent the whole range of learners’ languages and learning methods worldwide, limiting this study. Because of its large datasets and processing requirements, low-resource educational institutions may struggle to adopt GRNN-ELL for tailored education. Social effects should be considered, including educational gaps and inaccuracies in training data. With more accessible models, diverse datasets, and bias-reduction tactics, GRNN-ELL can help more students.
Author contributions
B.S., the sole author, wrote the whole paper.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Washington-Nortey, P. M. et al. The impact of peer interactions on language development among preschool English language learners: A systematic review. Early Childhood Educ. J. 50, 49–59 (2022).
- 2. Zhao, C., Muthu, B. & Shakeel, P. M. Multi-objective heuristic decision making and benchmarking for mobile applications in English language learning. Trans. Asian Low-Resource Lang. Inform. Process. 20, 1–16 (2021).
- 3. Shorman, S., Jarrah, M. & Alsayed, A. R. The websites technology for Arabic language learning through COVID-19 pandemic. In Future of Organizations and Work After the 4th Industrial Revolution: The Role of Artificial Intelligence, Big Data, Automation, and Robotics 327–340 (Springer, 2022).
- 4. Wang, W. & Zhan, J. The relationship between English language learner characteristics and online self-regulation: A structural equation modeling approach. Sustainability 12, 3009 (2020).
- 5. Malikovna, K. R. N., Mirsharapovna, S. Z., Shadjalilovna, S. M. & Kakhramonovich, A. A. Types of interactive methods in teaching English to students. Tex. J. Multidisciplinary Stud. 14, 1–4 (2022).
- 6. Bernacki, M. L., Greene, M. J. & Lobczowski, N. G. A systematic review of research on personalized learning: Personalized by whom, to what, how, and for what purpose(s)? Educational Psychol. Rev. 33, 1675–1715 (2021).
- 7. Walkington, C. & Bernacki, M. L. Appraising research on personalized learning: Definitions, theoretical alignment, advancements, and future directions. J. Res. Technol. Educ. 52, 235–252 (2020).
- 8. Tetzlaff, L., Schmiedek, F. & Brod, G. Developing personalized education: A dynamic framework. Educational Psychol. Rev. 33, 863–882 (2021).
- 9. Shemshack, A. & Spector, J. M. A systematic literature review of personalized learning terms. Smart Learn. Environ. 7, 1–20 (2020).
- 10. Mystakidis, S., Berki, E. & Valtanen, J. P. Deep and meaningful e-learning with social virtual reality environments in higher education: A systematic literature review. Appl. Sci. 11, 2412 (2021).
- 11. Ouyang, F. & Jiao, P. Artificial intelligence in education: The three paradigms. Comput. Educ. Artif. Intell. 2, 100020 (2021).
- 12. Taylor, D. L., Yeung, M. & Bashet, A. Z. Personalized and adaptive learning. In Innovative Learning Environments in STEM Higher Education: Opportunities, Challenges, and Looking Forward 17–34.
- 13. Maghsudi, S., Lan, A., Xu, J. & van der Schaar, M. Personalized education in the artificial intelligence era: What to expect next. IEEE Signal Process. Mag. 38, 37–50 (2021).
- 14. Saleem, N. et al. Residual gated recurrent neural network-augmented Kalman filtering for speech enhancement and recognition. Knowl. Based Syst. 238, 107914 (2022).
- 15. Hashim, S., Omar, M. K., Jalil, A. & Sharef, N. M. Trends on technologies and artificial intelligence in education for personalized learning: Systematic literature review. J. Acad. Res. Progressive Educ. Dev. 12, 884–903 (2022).
- 16. Al-Badi, A. & Khan, A. Perceptions of learners and instructors towards artificial intelligence in personalized learning. Procedia Comput. Sci. 201, 445–451 (2022).
- 17. Alamri, H., Lowell, V., Watson, W. & Watson, S. L. Using personalized learning as an instructional approach to motivate learners in online higher education: Learner self-determination and intrinsic motivation. J. Res. Technol. Educ. 52, 322–352 (2020).
- 18. Whalley, B., France, D., Park, J., Mauchline, A. & Welsh, K. Towards flexible personalized learning and the future educational system in the fourth industrial revolution in the wake of COVID-19. High. Educ. Pedagogies 6, 79–99 (2021).
- 19. Iqbal, M. H., Siddiqie, S. A. & Mazid, M. A. Rethinking theories of lesson plan for effective teaching and learning. Soc. Sci. Humanit. Open 4, 100172 (2021).
- 20. de Oliveira Araújo, F. J., de Lima, L. S. A., Cidade, P. I. M., Nobre, C. B. & Neto, M. L. R. Impact of SARS-CoV-2 and its reverberation in global higher education and mental health. Psychiatry Res. 288, 112977 (2020).
- 21. Nabizadeh, A. H., Leal, J. P., Rafsanjani, H. N. & Shah, R. R. Learning path personalization and recommendation methods: A survey of the state-of-the-art. Expert Syst. Appl. 159, 113596 (2020).
- 22. Bruggeman, B. et al. Experts speaking: Crucial teacher attributes for implementing blended learning in higher education. Internet High. Educ. 48, 100772 (2021).
- 23. Flores, M. A. et al. Portuguese higher education students’ adaptation to online teaching and learning in times of the COVID-19 pandemic: Personal and contextual factors. High. Educ. 83, 1389–1408 (2022).
- 24. Chaipidech, P., Srisawasdi, N., Kajornmanee, T. & Chaipah, K. A personalized learning system-supported professional training model for teachers’ TPACK development. Comput. Educ. Artif. Intell. 3, 100064 (2022).
- 25. Wang, S. et al. When adaptive learning is effective learning: Comparison of an adaptive learning system to teacher-led instruction. Interact. Learn. Environ. 31, 793–803 (2023).
- 26. Reyad, M., Sarhan, A. M. & Arafa, M. A modified Adam algorithm for deep neural network optimization. Neural Comput. Appl. 35, 17095–17112 (2023).
- 27. Adult Language Learning Profile dataset. http://www.kaggle.com/datasets/thedevastator/adult-language-learning-profile (Kaggle).
Associated Data
This section collects any data citations, data availability statements, or supplementary materials included in this article.
Data Availability Statement
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.






















