Abstract
In recent years, microlearning has gained significant attention due to technological advancements such as generative AI (GenAI), diverse learner needs, and a growing emphasis on improving learning outcomes. However, designing effective microlearning content remains challenging, as existing instructional design frameworks are inadequate for optimising learning outcomes. Therefore, this study developed a novel Microlearning AI-Integrated Instructional Design (MIND) Model to support effective instructional design and enhance learning outcomes. The model is grounded in multiple theoretical foundations, primarily the Technological Pedagogical Content Knowledge (TPACK) framework and the Factors Influencing Learning Outcomes (FIL) framework. To validate the MIND model, a mixed-methods approach was used with an intervention consisting of microlearning modules, comparing the MIND model (experimental group) with the ADDIE model (control group). Quantitative analysis using ANCOVA revealed that the MIND model significantly outperformed the ADDIE model in supporting effective instructional design to enhance learning outcomes, including increased knowledge acquisition, understanding of module content, and improved knowledge application. Furthermore, t-test results indicated that learning outcomes within the MIND model group were consistent across gender, employment status, and locality, demonstrating inclusivity and accessibility. A Kruskal–Wallis test showed a difference in learning outcomes across age groups, with younger learners achieving higher outcomes. Thematic analysis of qualitative data revealed that the modules developed using the MIND model featured media richness, interaction, engagement, self-concept, motivation, and satisfaction, contributing to learning outcomes. The MIND model serves as an innovative instructional design model that guides stakeholders in designing micro modules while remaining adaptable to conventional approaches, supporting flexible integration of cutting-edge technologies across formal, non-formal, and informal learning.
Keywords: AI, Microlearning, Instruction design framework, Learning outcomes, Knowledge and skills improvement, Lifelong learning
Subject terms: Computer science, Information technology
Introduction
Microlearning has increasingly attracted the attention of researchers in recent years1–3. It is an instructional approach that delivers targeted, action-oriented, bite-sized content to achieve specific objectives within a short period4. Typically, this approach combines concise textual content with visuals, such as diagrams, images, or tables, and interactive micro-activities that promote learner engagement5. By presenting information in manageable segments6, microlearning facilitates more efficient and practical knowledge transformation7. This approach aligns with learners’ growing desire for autonomy, enabling them to decide what they learn, when they learn, and how they learn. The prevalence of short attention spans, together with the widespread use of smart devices and social media, further underscores the relevance of microlearning8.
Its rising adoption is largely attributed to the changing needs of learners, who increasingly demand flexible, accessible learning9. Furthermore, the increasing interest in microlearning is closely linked to the rise of social media, which, according to Taylor and Hung1, has transformed people’s information-consumption habits towards a single, discrete topic presented in a short duration. Platforms like X (Twitter), Instagram, TikTok, and YouTube Shorts have popularised the use of short-form content. For example, short tutorial videos have become a prevalent method of informal learning for numerous individuals1. Microlearning is applicable across various settings3, including workplace training and self-directed learning10 as well as formal education contexts7. Its concise nature allows content to be easily distributed across diverse media11.
Moreover, the increasing integration of AI in educational platforms has reinforced the significance of microlearning in delivering concise, learner-centred content. AI technologies can personalise learning experiences, automate feedback, and adapt content based on learner progress. However, research on how to systematically design and implement microlearning remains limited, even as studies increasingly emphasise AI integration into microlearning instructional design4. Systematic guidance for designing microlearning while leveraging AI has therefore received inadequate attention. Challenges persist in its design12 and form13, such as balancing conciseness with comprehensiveness14,15, avoiding content fragmentation12, and maintaining relevance across diverse topics. There is currently no widely accepted model integrating AI into microlearning instructional design, highlighting the need for a more structured approach. Therefore, this study aimed to develop the MIND Model to support effective instructional design and enhance learning outcomes.
Literature review
Microlearning
Prior studies have examined various aspects of microlearning, including user perceptions, instructional effectiveness, and design strategies. For example, Iqbal et al.16 explored postgraduate residents’ perceptions of microlearning environments, reporting overall satisfaction, with higher satisfaction among female residents, participants aged 25–30, and those in internal medicine. Similarly, Choo and Rahim17 demonstrated that microlearning can achieve outcomes comparable to face-to-face active learning in pharmacy education, while also offering flexibility, cost-effectiveness, and suitability for distance learning. This study examines key factors, including gender, age, locality, and employment status, in AI-enhanced microlearning, with the goal of determining whether they influence learning outcomes.
Other studies have focused on instructional design and efficacy. For example, Lee et al.14 evaluated a mobile microlearning course in journalism education, showing increased knowledge, decision-making confidence, and skill performance. The study highlighted areas for improvement, such as automated feedback, gamified exercises, and interactive content. Likewise, Dolowitz et al.18 found that mobile microlearning applications, such as NeNA, can enhance employee performance when iterative user feedback is incorporated. Moreover, Sankaranarayanan and Mithun19 examined AI-enabled microlearning in a database programming course, finding that students valued immediate feedback and simplified concepts but faced challenges with inconsistent or inaccurate responses for complex tasks. AI has become essential, transforming how individuals learn, work, and interact20,21. Despite these advancements, designing effective microlearning experiences remains challenging12. Therefore, integrating AI in instructional design is critical for addressing these challenges and enhancing instructional effectiveness, particularly in diverse learner contexts.
AI in instructional design
The integration of AI into instructional design is essential for effective instruction. It can be used to enhance the analysis of different learning needs, content design, and development. Traditional instructional designs often assume homogeneity, neglecting differences in prior knowledge, pace, or motivation. AI, such as GenAI, can be used to tailor microlearning content and feedback to individual learners, resulting in improved engagement and performance. It can help in generating content and selecting assessment strategies aligned with established taxonomies such as Bloom's taxonomy22,23. Combined with close human oversight, this streamlines the design process and saves time. AI also supports resource scalability by producing diverse instructional materials in various media formats, including audio, video, and presentations. Evidence from multiple studies highlights these benefits. For instance, Kohnke et al.24 demonstrated that GenAI-focused microlearning modules enabled pre-service teachers to increase their confidence in lesson planning and their ability to adapt AI tools for differentiated instruction, formative assessment, and culturally responsive teaching. Similarly, Willenborg and Withorn25 demonstrated that a six-lesson microcourse on GenAI in college classrooms enabled students to efficiently apply AI in research and writing while accommodating instructors' time constraints.
Moreover, Baillifard et al.26 illustrated the benefits of a personal AI tutor for psychology students: by modelling each student's understanding of key neuroscience concepts and generating adaptive retrieval practice, the tutor helped students achieve up to 15 percentile points higher than peers without AI support. In a meta-analysis, Sun and Zhou27 found that GenAI interventions significantly improved college students' academic achievement, with medium effect sizes, particularly when AI-supported tasks involved independent learning, text-based content, or smaller cohorts. Practical applications illustrate the importance of thoughtful instructional design when integrating AI. Ahlgren et al.28 demonstrated that AI effectiveness depends on structured guidance, tailored tasks, and contextualised application, principles directly transferable to instructional design in education.
GenAI can enhance microlearning design by enabling the creation of concise content in the form of videos, podcasts, and images, allowing learners to engage more deeply with material. However, challenges remain, including platform compatibility issues, fragmented learning for complex topics, and reduced social interaction, which necessitate careful educator strategies and human facilitation to maintain the collaborative and social aspects of learning29. There is limited work on developing an instructional design framework that provides educators with detailed guidance for the practical development of microlearning instruction, particularly when integrating AI to enhance learning outcomes. Therefore, this study proposes a model that supports interactive microlearning instructional design responsive to diverse learners' needs.
Microlearning instructional design
As education continues to evolve, novel instructional design models are essential for effective instructional design. Traditional instructional design models are increasingly proving ineffective and less adaptable in this rapidly changing world30. Factors such as shortened attention spans, widespread smartphone usage, and easy access to information through search engines have contributed to the diminishing relevance of these models31.
There are existing models, such as the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) model and the microlearning model. However, while these models are valuable, they are subject to several limitations. The ADDIE model, a colloquial term for a systematic approach to instructional development, lacks academic rigour and a single author, having evolved informally through oral tradition32. The same source further notes that anyone is free to attribute their own interpretations and characteristics to the model. Traditional models, such as ADDIE, are challenging to apply in dynamic and adaptive educational contexts and are increasingly considered inapplicable in the modern technology-based era33,34. In addition, ADDIE is costly and requires adequate funding for implementation33,35. It is also criticised for being linear, hierarchical, and time-intensive36, and it faces challenges in adapting to rapidly changing educational contexts and integrating emerging technologies. Another model, proposed by Dolasinski and Reynolds31, integrates microlearning in four phases: (1) predevelopment of the learning, (2) development and delivery of learning content, (3) learner participation, practice, and demonstration of activity, and (4) evaluation. However, this model also has significant limitations. First, it lacks empirical validation; that is, the model has not been tested for its effectiveness in real-world settings, so its practical applicability and impact remain uncertain. Second, its generalisability is limited, as the inquiry is based on past research confined to workplace learning and theories31. As a result, its applicability to broader educational domains may be limited. Challenges in designing microlearning experiences persist, as highlighted by recent studies37–39. Therefore, to overcome these challenges, this study developed the MIND model to support effective instructional design.
MIND model development
The novel MIND Model is developed to support effective instructional design through the flexible integration of cutting-edge technologies, including but not limited to AI, to enhance learning outcomes. The novelty of the Model can be identified across several dimensions. First, it focuses on educators' and learners' needs, explicitly emphasised during the analysis stage. Other needs can be considered depending on the context, such as institutional requirements, technological infrastructure, and curriculum standards. Second, the model is based on robust theoretical foundations, particularly the TPACK and the FIL frameworks. TPACK, introduced by Mishra and Koehler40, emphasises the interconnectedness of content knowledge (CK), pedagogical knowledge (PK), and technological knowledge (TK) in teaching. This is particularly important in microlearning, where content must be concise, requiring careful design considerations41. For example, educators need to understand influential factors, including PK and CK, and know what content to modularise, how much to modularise, and how to adjust delivery strategies. The study incorporates Situational Awareness (SA) into the TPACK framework on the educators' side, resulting in a conceptual extension of TPACK as SATPACK. This extension emphasises the importance of being aware of ongoing dynamics within the learning environment. Furthermore, the model integrates the FIL framework to guide content design, development, and delivery. The FIL framework, developed by Monib et al.41, emphasises the importance of contextual (media richness), behavioural (interaction and engagement), cognitive (comprehension), and affective (motivation, self-concept, and satisfaction) factors influencing learning outcomes, drawing on multiple theories such as Expectancy-Disconfirmation, Constructivism, Self-Determination, Situational Awareness, Cognitive Multimedia Learning, and Andragogy. Together, these frameworks shape a comprehensive, theory-informed foundation for the MIND Model. Third, it is learner-centred, prioritising learners' needs and enabling them to take responsibility for self-paced learning. Fourth, it uniquely separates the delivery stage (where the educator presents content) from the receiving stage (where learners receive, practice, interact, and engage with content, peers, and educators), emphasising active learning, unlike ADDIE, which treats instruction as a single, passive implementation stage. This distinction is essential in the current MIND Model, which highly emphasises factors such as interactivity and engagement. The ADDIE model pays limited attention to interaction, such as learner-educator, learner-content, and learner-learner interaction, in content delivery and has been criticised by constructivists. Bates42 states:
ADDIE model is what might be called ‘front-end loaded’ in that it focuses heavily on content design and development, but does not pay as much attention to the interaction between instructors and students during course delivery. It has been criticised by constructivists for not paying enough attention to learner-instructor interaction, and for privileging more behaviourist approaches to teaching.
In contrast, the MIND model emphasises meaningful interaction with content, educators, and peers, considering the TPACK40 and FIL41 frameworks. This interaction is critical, as delivery without reception, such as learners merely receiving content without interaction, may not produce meaningful learning outcomes. Fifth, the MIND Model is outcome-oriented, with all stages and sub-elements systematically aligned to achieve specific learning outcomes. The MIND model integrates both assessment, which measures learning outcomes, and evaluation, which appraises the course's overall effectiveness and quality, including teaching, content, delivery, and learning experience. In the ADDIE model, evaluation tests the curriculum rather than individual learning, reflecting its curriculum-oriented nature. Evaluation does not explicitly measure learning outcomes and operates at the macro level, judging the quality or effectiveness of a programme, course, or curriculum (e.g., a committee reviewing a new curriculum after its first year of implementation, or an educator receiving feedback on a course at the end of the semester). Peeters and Schmude43 clarify the difference:
Learning assessments from students are different from programmatic or curricular evaluation. While education disciplines…use assessment to mean assessments of students’ learning, it seems that many in American pharmacy education…use assessment to refer to program evaluation. This misuse of the term assessment for program evaluation seems unfortunate. There are textbooks and internet-based resources on program evaluation.
Their observation underscores the importance of maintaining conceptual clarity between assessment and evaluation to avoid conflating learner-level outcomes with programme-level judgments. This distinction is reinforced in institutional practices. For example, Colorado College differentiates the two in its guidelines, defining assessment as focused on student learning outcomes and evaluation as addressing broader departmental, programmatic, or administrative concerns44. The distinction is also recognised in scholarly publishing. The Springer journal Educational Assessment, Evaluation and Accountability45 differentiates the two, both in its title and in its aims and scope, stating:
The main objective of this international journal is to advance knowledge and dissemination of research on and about assessment, evaluation and accountability of all kinds and on various levels as well as in all fields of education.
Sixth, it is cost-effective and time-efficient, unlike other models that require extensive resources and multiple-stakeholder involvement. Seventh, the model is grounded in microlearning principles, with AI integration supporting all stages—from needs analysis to content creation, content delivery, content receiving, and assessment & evaluation—reducing resource demands while ensuring quality instructional design. Eighth, the MIND model is non-linear, with each stage interconnected; if there is a need for improvement, educators can return to and address earlier stages. Lastly, it can be adopted across formal, non-formal, and informal learning settings. The model consists of six stages: the analysis, design, development, delivery, receiving, and assessment & evaluation stages. Certain stages of the MIND Model can be adapted to suit different learning settings—formal, non-formal, or informal—where, for example, assessment may remain essential in formal contexts but could be optional or modified in non-formal and informal learning (see Fig. 1).
Fig. 1.
The MIND model.
Analysis stage
The analysis stage involves examining the needs of both educators and learners. For educators, this includes assessing their preparedness to design microlearning experiences and identifying any required professional development. AI can support the identification of educators' and learners' needs, while human oversight and interpretation remain crucial. AI supports educators in assessing learners' needs through multiple strategies, including learner profiling, needs mining, surveys, and diagnostic assessments to uncover knowledge gaps, skills, preferences, and challenges. For example, AI can analyse large volumes of data from students' interactions, assessments, and learning preferences to determine existing knowledge and knowledge gaps46. Similarly, machine learning algorithms can cluster learners by proficiency or learning style and predict skills or concepts they may struggle with, as sketched below. Moreover, GenAI can assist in creating surveys to identify the needs of educators and/or learners. All identified needs should remain within the course scope and align with institutional policies. The principle of andragogy, which emphasises learners' 'need to know', should guide educators from the needs analysis stage onwards, ensuring that they address the why, what, and how questions47. Additionally, individual, situational, and subject differences, as well as other factors, should be considered. Overall, learners' needs can then be translated into specific learning outcomes in the subsequent stage.
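As a concrete illustration, the following minimal sketch clusters learners into provisional profiles from diagnostic data. The feature set, data values, and cluster count are hypothetical assumptions for demonstration; they are not prescribed by the MIND Model.

```python
# Hedged sketch: clustering learners for needs analysis.
# Features (diagnostic quiz %, minutes per module, video-use share)
# are illustrative assumptions, not part of the MIND Model itself.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical learner records: [quiz %, minutes per module, video-use share]
learners = np.array([
    [45, 12, 0.8],
    [88,  6, 0.3],
    [52, 15, 0.9],
    [91,  5, 0.2],
    [60, 10, 0.7],
    [85,  7, 0.4],
])

# Standardise features so no single one dominates the distance metric
features = StandardScaler().fit_transform(learners)

# Group learners into two provisional proficiency/preference profiles
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)  # e.g., array([0, 1, 0, 1, 0, 1])
```

An educator would then inspect each cluster (for example, lower quiz scores combined with a strong video preference) and translate those profiles into the learning outcomes defined at the design stage.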
Design stage
The design stage involves designing microlearning content for formal, non-formal, or informal learning, following the analysis stage. Clear, specific, and measurable learning outcomes should be defined in alignment with Bloom's Taxonomy22,23 or another taxonomy, ensuring that both cognitive depth and skill application are systematically assessed. These outcomes guide the selection of essential content, ensuring both relevance and clarity. Appropriate assessment strategies should be planned in alignment with the learning outcomes and may include formative and summative assessments. GenAI can aid the design stage by outlining the module, including learning outcomes, instructional content, assessment & evaluation strategies, and other related elements, while ensuring alignment across all components. These elements can be revisited in subsequent stages as needed for refinement. Aligned with this, the mediality can be selected, including the delivery mode (e.g., online, face-to-face, or blended)48,49, as well as the media formats, such as videos, podcasts, presentation slides, and infographics. In mediality, selecting an appropriate microlearning platform is particularly crucial, as it facilitates effective content design and enables learners to navigate and engage with the modules meaningfully in later stages. In addition, it is crucial to consider other factors, such as time constraints, institutional policies, and factors identified at the analysis stage. Overall, a well-designed module ensures alignment across all components.
Development stage
The development stage involves creating and reviewing microlearning content, following the design stage. Content can be created in different media formats, such as short videos, handouts, or presentation slides, ensuring alignment with the intended learning outcomes and assessment strategies. Educators, having a clear understanding of the content and learning outcomes, can efficiently develop appropriate assessment & evaluation strategies, which can be revisited and refined in later stages if needed. Educators should consider various factors, including but not limited to content-specific factors such as media richness, interactivity, engagement, comprehension, self-concept, motivation, and satisfaction41. Instructional content should include interactive elements, e.g., quizzes and simulations, to foster active learning and provide opportunities for practical application6,50,51. To support diverse learners' needs, content should be multi-sensory and multi-modal31,52, incorporating rich media such as visuals, graphs, charts, audio, and video to enhance engagement and comprehension. Modules should be presented in bite-sized segments to prevent cognitive overload and support sustained engagement3. Module duration is relatively short and subjective49, ranging from a few seconds to a few minutes, offering just-in-time access4,53. When creating bite-sized content, such as a video, ensure the content is scripted, produced, edited, and reviewed to maximise clarity and learning impact. Zhang and West75 state that content refinement often requires iterative revision to eliminate non-essential information. Therefore, creating a micro module remains particularly challenging in microlearning instructional design. In this context, GenAI can be used to mitigate this challenge by generating content, refining content, and efficiently condensing materials into digestible microlearning units3,54 (see the sketch below), as well as by producing assessment strategies that align with learning outcomes. It further enables tailoring of content to individual learners' needs, while considering the FIL framework. At this stage, if modules are designed for formats other than microlearning, the content size can be adjusted accordingly to suit the intended learning context. Overall, the development stage ensures that instructional materials are carefully designed, resulting in modules that support effective learning experiences.
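To illustrate one way GenAI could condense source material into a micro module, the sketch below calls a chat-completion API. The prompt wording, model name, and word limit are illustrative assumptions; the study itself used tools such as ChatGPT, Gamma, and NoteBookLM rather than this exact script.

```python
# Hedged sketch: condensing source material into a microlearning script.
# Model name, prompt, and word limit are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

def condense_to_micro_module(source_text: str, outcome: str, max_words: int = 150) -> str:
    """Ask the model to keep only content aligned with one learning outcome."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any comparable chat model would serve
        messages=[
            {"role": "system",
             "content": "You are an instructional designer writing microlearning scripts."},
            {"role": "user",
             "content": (f"Learning outcome: {outcome}\n"
                         f"Condense the following into a script of at most "
                         f"{max_words} words, removing non-essential detail:\n\n{source_text}")},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage:
# script = condense_to_micro_module(chapter_text,
#     "Explain how bias can arise in AI-assisted literature searches")
```

The key design point is that the learning outcome, not the source text, drives what survives the condensation, keeping each generated unit aligned with the design stage.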
Delivery stage
The delivery stage involves delivering the created content periodically, following the development stage. Microlearning content can be delivered through face-to-face48,49, online12,48, and/or blended modes12,48,55, aligned with the mediality specified during the earlier stage. In practice, microlearning is most often delivered online, given its flexibility and scalability. Regardless of the delivery mode, AI-enabled platforms and formats play a crucial role in ensuring flexibility and accessibility for diverse learners. At this stage, AI can enhance microlearning by personalising content to individual needs, preferences, and pace, and by recommending adaptive learning paths based on performance. It can also automate notifications and reminders to keep learners on track, while providing accessibility features such as text-to-speech, translation, and alternative formats. Other learning management systems (LMS), educational apps, digital tools, and printed materials can also be used to deliver microlearning12,14,55–57. Based on learners' needs, content can be delivered through diverse media such as bite-sized videos12,18, infographics, podcasts12,18, and so on. The timing and frequency of delivery remain flexible, ensuring that learning materials are accessible in a timely manner and aligned with learners' pace. Overall, this stage ensures content is delivered in a timely manner while maintaining flexibility to accommodate individual schedules and learning paces.
Receiving stage
In the receiving stage, learners receive and practice the content. This stage emphasises active learning, engaging learners as they interact with the educator, peers, and content to develop understanding and skills. Learners engage with multiple forms of content—such as learning materials, scheduled quizzes, and interactive activities that encourage practice and reinforce learning. For example, while interacting with microlearning content, learners receive concise instructions from the educators, followed by a practical task such as using an AI tool to summarise a passage, enabling them to practice and reinforce their understanding. Learners interact with AI-driven exercises, quizzes, and simulations, receiving personalised, adaptive feedback and guidance on applying concepts, fostering autonomous, self-directed learning. AI analyses learners' behaviour patterns, predicts potential academic issues, and facilitates timely interventions to enhance performance58. A key feature of this stage is learner autonomy, where learners select from the available modules and engage with them at any time, from any location, and via any device, such as a laptop or mobile phone, promoting flexible, accessible, and inclusive learning3. Overall, this stage ensures that learners can engage with and practice the content in a meaningful and flexible manner.
Assessment & evaluation stage
The assessment & evaluation stage involves assessing learning outcomes and the effectiveness of the course, following the receiving stage. Educators assess learning outcomes through the assessment strategies selected at the earlier stage, such as formative and summative assessments. These assessments serve the purpose of assessing learners' knowledge, skills, and competencies. Formative assessment aims to provide ongoing feedback59, enabling learners to monitor their progress, identify areas for improvement, and refine their work for better learning outcomes60. In contrast, summative assessment, being graded, assesses overall achievement at the end of a learning period61. In this way, formative assessment supports growth and preparation, while summative assessment validates attainment. GenAI can assist educators by recommending personalised feedback that supports improvement during formative assessment. AI can simultaneously enhance automated grading, ensuring greater efficiency, consistency, and objectivity. For instance, a study by Gao et al.62 found that DeepSeek, an AI tool, achieved higher reliability, provided more relevant feedback on content, language use, organisation, and coherence, and was useful for enhancing English as a Foreign Language (EFL) writing assessment. Assessment measures learners' learning outcomes, while evaluation considers evaluators' perspectives on a course's overall effectiveness and quality. For example, learners reflect on the effectiveness of teaching and the quality of the content during the course or at the end. Educators use this feedback to identify areas for improvement. This iterative process enhances instructional quality, promotes effective learning, and increases learner satisfaction. Overall, the MIND model stages are interconnected, allowing continuous refinement, flexibility, and non-linear progression throughout the instructional process.
Methodology
Research design and materials
This study employed a mixed-methods approach to validate the MIND Model through an intervention in which the educator developed microlearning modules, delivered them to learners, and assessed subsequent learning outcomes. The research was conducted at the Centre for Lifelong Learning (C3L), Universiti Brunei Darussalam (UBD), with ethical approval granted by the C3L Ethics Committee on 21 December 2023, and all procedures followed relevant guidelines and regulations. Participants, comprising educators and learners, provided informed consent, confirming their voluntary participation and acknowledging their right to withdraw at any point without consequence. Learners were randomly selected, and only those who consented to participate in both the pretest and posttest were included in the study. They were granted free access to the course, which comprised a series of micro-modules. Confidentiality was maintained throughout the study.
The intervention used microlearning modules, with the experimental group applying the MIND model and the control group the ADDIE model (Table 1). For the experimental condition, educators first participated in workshops to address their needs identified during the MIND Model’s needs analysis stage, enhancing their skills in designing, developing, and delivering microlearning modules. Subsequently, one educator voluntarily designed and developed a course, Mastering AI-Powered Research Skills, consisting of 12 micro-modules, following the MIND Model design and development stages. The course was selected to reflect the evolving learning paradigm63,64 and to address learners’ needs identified at the analysis stage.
Table 1.
Mastering AI-powered research skills.
| No | Microlearning Modules |
|---|---|
| 1. | Overview: What is AI, and Importance for Research |
| 2. | Core AI Concepts for Research |
| 3. | AI-assisted Research Skills |
| 4. | Integrating AI into Research |
| 5. | Search Optimisation with ChatGPT and Google Scholar |
| 6. | Leveraging ChatPDF and Wordtune for Enhanced Reading |
| 7. | Unlocking Research Potential with Perplexity |
| 8. | ChatGPT Channels for Data Analysis Insights |
| 9. | Creating Visual Figures with Napkin |
| 10. | Understanding Ethical Principles in AI Research |
| 11. | Bias and Fairness in AI |
| 12. | Privacy and Data Protection |
A range of AI-powered tools was employed by the educator to develop the microlearning modules, guided by the FIL framework, including media richness, interaction, engagement, comprehension, motivation, self-concept, and satisfaction41. The tools included ChatGPT, Napkin AI, Gamma, NoteBookLM, and CapCut; other emerging GenAI tools may be used alongside these for designing microlearning modules in the future. The modules were delivered via the AI-enabled OpenLearning platform in multiple formats—videos, podcasts, presentation slides, and PDFs—to accommodate diverse learning needs. A registration link was made available to learners, enabling them to sign up and access the modules.
In the receiving stage, learners received a link to the platform, through which they signed up and enrolled in the microlearning modules. Afterward, they accessed the modules, actively engaged with the content, and practiced the activities, while platform analytics tracked their progress and facilitated interactions with the educator. Following the MIND Model’s assessment & evaluation stage, pre- and post-tests were administered, and learners’ reflections were collected to evaluate the effectiveness of the intervention in improving learning outcomes.
In the control condition, the educator developed the same course following the ADDIE model without using AI tools. During the analysis stage, learners’ needs were identified. In the design and development stages, instructional materials—including videos, handouts, and supplementary resources—were created to address the same content areas as the experimental condition. The implementation stage consisted of blended delivery, combining face-to-face sessions and access through Canvas LMS. Consistent with ADDIE, the evaluation stage primarily focused on course quality. However, to allow a direct comparison with the experimental condition, pre- and post-tests were administered in the control condition. This enabled assessment of learners’ improvement in learning outcomes.
The pre-test was administered to both the experimental and control groups to assess baseline knowledge prior to accessing the microlearning modules. After five weeks, the post-test was distributed to the same learners to measure learning outcomes. The pre- and post-tests consisted of 12 multiple-choice questions. These questions were categorised into three domains: knowledge of GenAI (Q1–Q3), applications of AI in research (Q4–Q8), and ethical considerations related to AI (Q9–Q12).
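For illustration, the sketch below shows how such a 12-item test could be scored automatically with a per-domain breakdown. The answer key and sample responses are hypothetical; only the three-domain structure (Q1–Q3, Q4–Q8, Q9–Q12) comes from the study.

```python
# Hedged sketch: scoring a 12-item multiple-choice test by domain.
# The answer key and learner responses are hypothetical placeholders.
ANSWER_KEY = list("badcabdacbad")            # Q1-Q12, hypothetical correct options
DOMAINS = {
    "Knowledge of GenAI": range(0, 3),       # Q1-Q3
    "AI in research": range(3, 8),           # Q4-Q8
    "AI ethics": range(8, 12),               # Q9-Q12
}

def score(responses: list[str]) -> tuple[float, dict[str, float]]:
    """Return the overall percentage score and per-domain percentages."""
    marks = [r == k for r, k in zip(responses, ANSWER_KEY)]
    total = 100 * sum(marks) / len(marks)
    by_domain = {name: 100 * sum(marks[i] for i in idx) / len(idx)
                 for name, idx in DOMAINS.items()}
    return total, by_domain

total, by_domain = score(list("badcabdacbbc"))  # hypothetical learner: 10/12 correct
print(round(total, 1), by_domain)               # 83.3, with 'AI ethics' at 50.0
```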
For qualitative insights, participants were asked to provide written reflections on their experience with the microlearning modules. They were specifically prompted to reflect on key content features such as media richness, interactivity, engagement, satisfaction, and comprehension, as well as to evaluate the impact of the modules on their knowledge and skills development. The reflection prompt read: ‘Write your reflection about your experience of microlearning modules. Reflect on content factors such as media richness, interactivity, engagement, satisfaction, and comprehension. Consider whether it helped improve your knowledge and skills.’ To ensure content validity, two academic experts from Universiti Brunei Darussalam independently reviewed and validated the microlearning modules, the pre- and post-tests, and the reflection prompts.
A total of 64 learners completed both the pre-test and post-test, with 32 in the experimental group and 32 in the control group. The experimental group comprised 17 females and 15 males. Age distribution showed that 15 participants were aged 35–44, 10 were 25–34, and 7 were 18–24 years old. Twenty-five participants were employed, while 7 were non-working. In terms of residence, 28 participants lived in urban areas, and 4 resided in rural areas. The control group comprised 13 females and 19 males. Age distribution comprised 10 participants aged 18–24 years, 14 aged 25–34 years, and 8 aged 35–44 years. Ten participants were working, and 22 were not. Regarding residence, 30 participants were from urban areas and 2 from rural areas.
Analysis
This study analysed learners’ quantitative data from pre- and post-tests alongside qualitative data from their reflections to validate the MIND model (experimental group) and enable comparison with the ADDIE model (control group). Quantitative data were analysed using SPSS. An ANCOVA was conducted to compare post-test scores between the experimental and control conditions while controlling for pre-test scores. Prior to analysis, the assumptions for ANCOVA and the t-tests were checked and satisfied. For the experimental group, further analysis included independent-samples t-tests to examine differences across participants’ gender, employment status, and locality, where the normality assumption was met. For differences across age groups, the normality assumption required for ANOVA was not met; therefore, a Kruskal–Wallis test was conducted.
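The sketch below reproduces this analysis pipeline in Python on simulated data (the study itself used SPSS). Column names, group sizes, and score distributions are assumptions for demonstration and do not reproduce the study's dataset.

```python
# Hedged sketch of the analysis pipeline on synthetic data (the study used SPSS).
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 64
df = pd.DataFrame({
    "pre": rng.integers(55, 80, n).astype(float),
    "group": ["MIND"] * 32 + ["ADDIE"] * 32,
    "gender": rng.choice(["male", "female"], n),
    "age_band": rng.choice(["18-24", "25-34", "35-44"], n),
})
# Simulated post-test scores: a larger gain for the MIND condition
df["post"] = df["pre"] + np.where(df["group"] == "MIND", 13, 7) + rng.normal(0, 4, n)

# ANCOVA: post-test by condition, controlling for pre-test scores
ancova = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=3))

# Change scores for demographic comparisons within the experimental group
exp = df[df["group"] == "MIND"].copy()
exp["change"] = exp["post"] - exp["pre"]

# Independent-samples t-test, e.g., across gender
print(stats.ttest_ind(exp.loc[exp["gender"] == "male", "change"],
                      exp.loc[exp["gender"] == "female", "change"]))

# Kruskal-Wallis across age bands (normality not met), with a post hoc
# Mann-Whitney U comparison for one pair of groups
by_age = [g["change"].to_numpy() for _, g in exp.groupby("age_band")]
print(stats.kruskal(*by_age))
print(stats.mannwhitneyu(exp.loc[exp["age_band"] == "18-24", "change"],
                         exp.loc[exp["age_band"] == "35-44", "change"]))
```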
For qualitative data analysis, ATLAS.ti was used to facilitate thematic coding. Participant reflections were first cleaned and reviewed multiple times to ensure data familiarity, then imported into ATLAS.ti for systematic coding following a structured process65,66. Initial codes were generated to capture key features relevant to the research questions, considering both explicit and implicit meanings. Codes were examined for patterns, and similar ones were grouped into potential themes, which were then refined for internal consistency and distinction, and cross-checked against the data to ensure accuracy. Each theme was clearly defined and named to maintain conceptual clarity and alignment with the research focus. The analysis was finally written up using illustrative quotes.
Several measures were implemented to enhance the reliability and validity of the qualitative analysis. To establish intercoder reliability, a second coder independently analysed a portion of the dataset using the coding framework developed by the primary researcher. The two sets of codes were then compared, and the level of agreement was measured using Cohen’s kappa, yielding a value of κ = 0.689, p = .01, indicating substantial agreement67 and supporting the reliability of the thematic analysis68.
Discrepancies were resolved through discussion and consensus, with refinements made to code definitions as necessary, ensuring rigour across the full dataset. To enhance credibility, thick descriptions and direct participant quotes were incorporated to provide transparency and contextual depth. The quantitative findings were complemented and explained by qualitative evidence and further validated through comparison with existing literature.
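As a small illustration of this agreement check, the sketch below computes Cohen's kappa for two coders' labels over the same excerpts; the labels shown are hypothetical.

```python
# Hedged sketch: intercoder agreement via Cohen's kappa (labels are hypothetical).
from sklearn.metrics import cohen_kappa_score

coder_1 = ["engagement", "media richness", "media richness",
           "satisfaction", "engagement", "comprehension"]
coder_2 = ["engagement", "media richness", "satisfaction",
           "satisfaction", "engagement", "comprehension"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(round(kappa, 3))  # values above ~0.6 are conventionally read as substantial agreement
```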
Results
This section presents findings, including quantitative results from inferential statistical analysis and qualitative insights from thematic analysis.
Inferential statistics
Learners’ performance was assessed before and after the intervention to evaluate the effectiveness of the MIND model and compare it with the ADDIE model. An ANCOVA was conducted to compare post-test scores between the MIND and ADDIE groups, controlling for pre-test scores. Descriptive statistics indicated that participants in the experimental condition who learned through the MIND model (M = 86.000, SD = 6.932) scored higher on the post-test than those in the control condition who learned through the ADDIE model (M = 79.312, SD = 5.038). Table 2 indicates the pre-test covariate was a significant predictor of post-test scores, F(1, 61) = 104.519, p < .001, partial η2 = .631. After adjusting for pre-test scores, the group effect remained significant, F(1, 61) = 45.976, p < .001, partial η2 = .430. This indicates that participants in the MIND model group outperformed those in the ADDIE model group, supporting effective instructional design to enhance learning outcomes. The overall model explained a substantial proportion of variance in post-test scores, R2 = .720, adjusted R2 = .710.
Table 2.
ANCOVA comparing MIND and ADDIE instructional design effectiveness.
| Source | SS | df | MS | F | p | Partial η2 |
|---|---|---|---|---|---|---|
| Pretest (covariate) | 1437.762 | 1 | 1437.762 | 104.519 | *** | .631 |
| Intervention (MIND vs ADDIE) | 632.445 | 1 | 632.445 | 45.976 | *** | .430 |
| Error | 839.113 | 61 | 13.756 | | | |
***p < .001.
For the experimental condition, to examine differences in learning outcomes based on gender, age, employment status, and locality, independent samples t-tests and a Kruskal–Wallis test were conducted. First, change scores were calculated for each learner by subtracting their pre-test score from their post-test score. The mean change scores were then compared across demographic groups. An independent samples t-test revealed no significant difference in learning outcomes between males (M = 12.400) and females (M = 13.117), t(30) = − 0.497, p > .05 (see Table 3). Similarly, no significant differences were found between urban (M = 12.607) and rural (M = 14.000) participants, t(30) = − 0.640, p > .05, or between working (M = 12.240) and non-working participants (M = 14.714), t(30) = − 1.462, p > .05.
Table 3.
T-test across gender, locality, and employment status.
| | Group | N | M | t | p | df |
|---|---|---|---|---|---|---|
| Learning outcomes | Male | 15 | 12.400 | − 0.497 | ns | 30 |
| | Female | 17 | 13.117 | | | |
| | Urban | 28 | 12.607 | − 0.640 | ns | 30 |
| | Rural | 4 | 14.000 | | | |
| | Working | 25 | 12.240 | − 1.462 | ns | 30 |
| | Not working | 7 | 14.714 | | | |
ns = not significant.
To compare differences among age groups, a Kruskal–Wallis test was used since the assumption of normality was not met. The results indicate a significant difference in learning outcomes across age groups, χ2(2) = 8.495, p < .05 (see Table 4). Following this significant result, post hoc Mann–Whitney U tests were conducted, indicating that participants aged 18–24 scored significantly higher than those aged 25–34 and those aged 35–44 (p < .05), while the difference between the 25–34 and 35–44 groups was not significant (p > .05). This difference may be attributed to younger learners being more technologically savvy, enabling them to engage more effectively with microlearning environments.
Table 4.
Kruskal–Wallis and post hoc Mann–Whitney U tests across age groups.
| | N | Age | Mean Rank | Kruskal–Wallis χ2 (df) | p | Mann–Whitney comparison | U | p |
|---|---|---|---|---|---|---|---|---|
| Learning outcomes | 7 | 18–24 | 24.360 | 8.495(2) | * | 18–24 versus 35–44 | 18.000 | * |
| | 10 | 25–34 | 17.600 | | | 18–24 versus 25–34 | 14.500 | * |
| | 15 | 35–44 | 12.100 | | | 25–34 versus 35–44 | 43.500 | ns |
*p < .05; ns = not significant.
Thematic analysis
Thematic analysis of reflections from the experimental group corroborated the quantitative findings, showing improvements in key learning outcomes—knowledge acquisition, comprehension, research skills, and problem-solving—while also identifying influential factors (see Table 5). Learners reported that microlearning was effective in enhancing their understanding and use of AI tools in academic contexts. Many participants expressed increased confidence in applying the knowledge and skills gained to real-life situations. For example, one participant reflected:
The module content was quite clear and easy to comprehend in a short time. I have gained knowledge that made me feel confident about GenAI and scientific resources. I can say that now I feel that I can easily refine the query, develop a clear understanding, and then apply it practically. [It] will surely help me while making my final year project reports, etc. Although I need further practice to master the skills, I feel that I learned more from it.
Table 5.
Themes and codes.
| Theme | Codes | |
|---|---|---|
| Learning outcomes | Improved knowledge acquisition | Improved problem-solving skills |
| | Enhanced comprehension | Knowledge application |
| | Improved research skills | |
| Media richness | Well-organised content | Concise text |
| | Coherent media | Enough visuals |
| | Informative graphs and tables | Clear visuals |
| Interaction | Interaction with educators | Exercises |
| | Asking questions | Quizzes |
| | Visual aids | Games |
| Engagement | Various learning styles | Simple |
| | Variety of media | Relevance |
| | Short | Reflections |
| | Interesting | |
| Comprehension | Easy to understand | Well-organised |
| | Effective presentation | Well-structured |
| | Clear examples | Concise information |
| Self-concept of the learner | Self-directed learning | Autonomy |
| | Plan own learning | Self-reflection |
| Motivation to learn | Goal achievement | Personal growth |
| | Rewarding | Curiosity |
| | Task completion | Accessible |
| Satisfaction | Less cognitive overload | Interesting |
| | Easy to access | Informative |
| | Easy to follow | Compact |
| | Easy to recap | Media-rich content |
| | Learn own pace | Unique |
| | Easy to understand | Enjoyable |
Another participant reflected positively on the overall learning experience, stating, ‘Overall, the module was comprehensive and enhanced my knowledge and skills in AI-assisted research.’ Across reflections, learners reported enriched experiences in several areas, including media richness, interactivity, engagement, comprehension, satisfaction, motivation to learn, self-concept, and accessibility. Learners responded positively to the media richness, highlighting the use of clear visuals, informative graphs and tables, well-organised content, and concise text.
In terms of interactivity, participant feedback generally aligned with the quantitative findings—learners considered the content to be interactive. Interactive elements included interaction with the educator, verbal components (Q&A), non-verbal elements (games), and activity-based tasks (such as short exercises and quizzes). Visual aids further enhanced interactivity, while accessible communication channels contributed to a supportive learning environment. However, some learners noted limited peer interaction, particularly during online sessions. As one learner shared, ‘In the online class, I couldn’t interact with my peer.’
Moreover, many participants found the modules engaging, citing the relevance and interest of the content, the use of a variety of media, the support for effective learning, the accommodation of diverse learning styles, and simple and short content. One learner remarked, ‘I felt engaged due to the variety of media usage, and I found the short content relevant.’
Regarding comprehension, learners reported that the modules were easy to understand, attributing this to the inclusion of clear examples, effective presentation, a well-organised, structured format, and concise information. Furthermore, learners' self-concept was reflected in exercising autonomy and self-directed learning. Several participants noted that microlearning encouraged independent learning and peer socialisation, as illustrated by the comment: ‘Microlearning is short and is standalone to learn independently.’
Learners’ motivation to learn was driven by a sense of goal achievement, personal growth, curiosity, accessible learning materials, and successful task completion. Learners were satisfied, reporting that the microlearning module was easy to follow, access, recap, and understand. They were satisfied because they found the approach unique, enjoyable, and interesting. They also reported less cognitive overload and appreciated the short, informative, and media-rich content. For instance, one participant stated, ‘I appreciate the short video and am happy to have gained useful insights.’ Similarly, another participant reflected, ‘I like the short and informative nature of the video, and I feel it [was] effective. I am quite satisfied with the provided practical examples, and the module met my content and delivery format expectations.’
In addition, they acknowledged the feasibility of the microlearning module, highlighting its flexibility and accessibility anytime and anywhere through any device. One participant mentioned, ‘[In] terms of convenience, I can easily access it through my mobile or laptop and watch it multiple times, and, side by side, I can practice it too.’ Suggestions included more collaboration opportunities and interactive activities simulating real scenarios.
Discussion
The findings of this study revealed the effectiveness of the MIND Model in instructional design to enhance learning outcomes. The educator followed all stages of the model, from needs analysis to the design and delivery of microlearning modules, through the receiving and assessment & evaluation stages. The ANCOVA results indicated that the instructional design developed using the MIND model led to significantly higher post-test scores than the instructional design developed using the ADDIE model. This suggests that the MIND model is significantly effective in improving learning outcomes, including knowledge, comprehension, and application. Based on Bloom’s taxonomy, this indicates improvements not only in lower-order outcomes but also in higher-order outcomes.
The overall model explained 72% of the variance in post-test scores, demonstrating a large and practically meaningful effect. Similarly, studies comparing ADDIE with other models, such as SAM, have reported lower performance for ADDIE. For example, the study by Ali et al.69 highlights the limitations of ADDIE-based instruction. Their comparison of SAM and ADDIE in STEM teaching showed higher post-test gains and better conceptual understanding for SAM, reflecting ADDIE’s lower engagement and limited interactivity.
Consistent with prior research on microlearning, which has reported increased knowledge, retention, higher-order thinking, professional competencies, and/or learning performance70–74, this study extends the literature by demonstrating the effectiveness of AI-enhanced microlearning designed according to the MIND Model. In a systematic literature review on microlearning, Monib et al.3 identified a range of learning outcomes categorised according to Bloom’s Taxonomy, such as knowledge acquisition, retention, recall, improvement, transfer, and application; higher-order skills such as critical thinking, problem-solving, feedback literacy, and self-regulation; professional competencies including digital and pedagogical competence; and performance outcomes such as test results. This study advances the literature by focusing on AI-enhanced microlearning and leveraging GenAI tools within instruction design based on the MIND model to improve learning outcomes, consistent with prior research highlighting the effectiveness of AI tools in enhancing learning gains70.
Furthermore, demographic analysis revealed no significant differences in learning outcomes by gender, employment status, or locality, suggesting the broad inclusivity and accessibility of the instruction design developed based on the MIND model. All age groups demonstrated improvement, with younger learners showing particularly significant gains, which may reflect their digital fluency and familiarity with AI-based platforms. Therefore, although microlearning can be broadly inclusive, targeted digital scaffolding is necessary to support less digitally savvy learners while considering their age. Despite the overall success, learners identified areas for improvement, particularly related to peer interaction and video pacing. As Zhang and West75 note, designing microlearning environments that balance efficiency with meaningful engagement remains a challenge.
The integration of AI tools such as ChatGPT, Gamma, Napkin AI, and NoteBookLM reflects TK, while the learner-centred, scaffolded presentation of the modules demonstrates PK. This suggests that, following the training, the educator was aware of emerging technologies and chose a GenAI topic that learners need more than ever. The careful design, development, and delivery of the module demonstrate the educator’s CK, TK, and PK. Considering SA alongside TPACK is therefore instrumental in meeting learners’ expectations. This confirms that effective technology integration must go beyond mere adoption and consider the situational context.
Microlearning was found to be flexible and accessible. The feasibility of microlearning is also reported by Gross et al.76, who conducted a study on delivering Crew Resource Management training in 15-min segments. This may be one of the reasons for its growing adoption in various contexts, particularly mobile-based microlearning, which is increasingly integrated into various instructional contexts77.
Similarly, the findings from qualitative reflections supported the quantitative findings, with learners reporting improvements in areas such as knowledge acquisition, comprehension and application, and research and problem-solving skills. The microlearning modules were considered to address contextual (media richness), cognitive (comprehension), behavioural (interaction and engagement), and affective (motivation, self-concept, and satisfaction) factors. This indicates that effective instructional designs should consider these crucial factors. The authors concluded that, when guided by a robust instructional design such as the MIND Model, microlearning offers scalable and inclusive education, reaching learners across various demographics, including gender, employment status, and location, at minimal cost. The findings support SDG 4 by promoting equitable and quality education, support SDG 5 by ensuring gender-inclusive learning opportunities, and contribute to Wawasan Brunei 2035 Goal 1, which aims to have the people of Brunei Darussalam educated, highly skilled, and accomplished. The MIND Model is particularly important as it provides a detailed description of each stage to design microlearning that maximises learning outcomes while addressing diverse learners’ needs. Furthermore, the MIND model integrates AI and microlearning but is flexible enough to accommodate conventional learning; regardless of the format, the stages—from analysis to outcomes—remain consistent.
The ADDIE model is predominantly curriculum-oriented and does not explicitly focus on monitoring learners’ progress, limiting educators’ ability to optimise learning outcomes. Shakeel et al.35 argue that the ADDIE model requires adequate funding and a longer implementation timeframe. It cannot meet the demands of modern technology-based education36,78 and faces challenges in rapidly changing digital learning environments and in integrating cutting-edge technology, such as AI. Using ADDIE for microlearning is particularly challenging, as microlearning instructional content needs to be concise, immediately applicable, and responsive to diverse learners’ needs. Furthermore, involving multiple stakeholders can prolong the development process, increasing the risk that the content becomes outdated before it is used and undermining instructional relevance. In contrast, the MIND model leverages AI to be cost- and time-efficient, responsive to immediate learner needs, and both learner-centred and outcomes-oriented.
Following the ADDIE model for instructional design was time-consuming. Its rigid, linear structure limited flexibility in developing and delivering materials to cater to diverse learner needs and hindered the integration of additional resources, such as AI tools. ADDIE, like other traditional models, has limitations such as restricted customisation, long design cycles, and limited flexibility34. This shows that the ADDIE model is problematic for both microlearning and traditional contexts due to the limitations discussed. In contrast, the MIND model required less time for instructional design, enabled faster implementation at lower cost, and allowed greater flexibility across stages, supporting effective instructional design and improved learning outcomes.
Overall, the findings provide compelling evidence for the effectiveness of the MIND model relative to the ADDIE model. Systematically implemented, the MIND model supports learner-centred, outcomes-oriented instructional design that aligns curriculum with intended learning outcomes. In direct comparison, the MIND model outperformed the ADDIE model, highlighting its practical value for effective and responsive instructional design.
Implications, limitations, and future research
The study has several crucial implications. Theoretically, the MIND Model extends instructional design theory by embedding situational awareness within TPACK (SATPACK), contributing to a learner-centred and outcome-oriented approach. This provides an in-depth understanding of how contextual factors, such as learners’ needs, the learning environment, and individual and situational differences, influence learning outcomes. In addition, the model is learner-centred and outcome-oriented, prioritising diverse learners’ needs and ensuring that all instructional decisions—from content design to assessment—are aligned with achieving specific outcomes. It divides the implementation stage into delivery and receiving stages, unlike ADDIE, which treats implementation as a single passive process. Practically, the model provides educators with structured guidance to design instructional content. For institutions, it offers a cost-effective approach that enables the implementation of microlearning without excessive resource investment and is applicable across blended, online, and face-to-face modes in formal, non-formal, and informal learning settings.
Attention should be paid to design quality, particularly content clarity and appropriate pacing. Educators should focus on human-centred AI integration that supports rather than overwhelms learners, and designers should explore collaborative activities and peer-interaction features to address the lack of social engagement reported by some participants. While the study offers valuable insights, several limitations must be acknowledged. The study did not investigate long-term influences on learning outcomes; future research should therefore examine how well the improved outcomes, such as acquired knowledge and skills, are retained over time, providing deeper insight into the model's effectiveness. In addition, while the current findings show that microlearning increases motivation to learn, further research is needed to understand the mechanisms through which it does so.
Conclusion
The findings of this study indicate that the MIND model supports effective instructional design and enhances learning outcomes, outperforming the ADDIE model. Within the MIND model group, learning outcomes were consistent across gender, employment status, and locality, highlighting the inclusivity and accessibility of the instructional design and the microlearning approach. All age groups improved, with younger learners showing particularly significant gains, reflecting their digital fluency. Qualitative reflections complemented the quantitative findings, revealing contributing factors that include media richness, interaction, engagement, self-concept, motivation, and satisfaction. These findings underscore the effectiveness of the MIND model in enhancing learning outcomes and carry practical implications for instructional designers, educators, and policymakers seeking to implement microlearning in lifelong learning and beyond. Overall, the MIND model is an innovative instructional design model that integrates cutting-edge technologies, remains adaptable to conventional approaches, and ensures consistency across stages, from analysis through assessment and evaluation, for continuous improvement.
Author contributions
Wali Khan Monib: Conceptualization, Methodology, Formal Analysis, Investigation, Resources, Data Curation, Visualization, Writing—Original Draft; Atika Qazi: Supervision; Rosyzie Anna Apong: Supervision; Jose H. Santos: Writing—Review & Editing; Malissa Maria Mahmud: Writing—Review & Editing.
Funding
This work was supported by Universiti Brunei Darussalam under research grant UBD/RSCH/URG/2024/012.
Data availability
Data will be made available upon reasonable request from the corresponding author.
Declarations
Competing interests
The authors declare no competing interests.
Ethics, consent to participate, and consent to publish declarations
Ethical approval was received from the Ethics Committee of the Centre for Lifelong Learning on December 21, 2023.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Wali Khan Monib, Email: walikhan.szu@gmail.com.
Atika Qazi, Email: atika.qazi@ubd.edu.bn.
References
1. Taylor, A. D. & Hung, W. The effects of microlearning: A scoping review. Educ. Technol. Res. Dev. 70, 363–395. 10.1007/s11423-022-10084-1 (2022).
2. Leong, K., Sung, A., Au, D. & Blanchard, C. A review of the trend of microlearning. J. Work-Appl. Manag. 13, 88–102. 10.1108/JWAM-10-2020-0044 (2021).
3. Monib, W. K., Qazi, A. & Apong, R. A. Mapping microlearning development and trends across diverse contexts: A bibliometric analysis (2007–2023). Interact. Learn. Environ. 33, 1865–1910. 10.1080/10494820.2024.2402556 (2025).
4. Monib, W. K., Qazi, A. & Apong, R. A. Microlearning beyond boundaries: A systematic review and a novel framework for improving learning outcomes. Heliyon. 10.1016/j.heliyon.2024.e41413 (2025).
5. Skalka, J. et al. Conceptual framework for programming skills development based on microlearning and automated source code evaluation in virtual learning environment. Sustainability (Switzerland). 10.3390/su13063293 (2021).
6. Robles, H. et al. Design of a micro-learning framework and mobile application using design-based research. PeerJ Comput. Sci. 9, 1–31. 10.7717/PEERJ-CS.1223 (2023).
7. Beste, T. Knowledge transfer in a project-based organization through microlearning on cost-efficiency. J. Appl. Behav. Sci. 59, 288–313. 10.1177/00218863211033096 (2021).
8. Garshasbi, S., Yecies, B. & Shen, J. Microlearning and computer-supported collaborative learning: An agenda towards a comprehensive online learning system. STEM Educ. 1, 225–255. 10.3934/steme.2021016 (2021).
9. Corbeil, J. R. & Corbeil, M. E. Editorial note: Designing microlearning for how people learn. Educ. Technol. Soc. 27, 134–136. 10.30191/ETS.202401_27(1).SP01 (2024).
10. Ghasia, M. A. & Rutatola, E. P. Contextualizing micro-learning deployment: An evaluation report of platforms for the higher education institutions in Tanzania. Int. J. Educ. Dev. Using Inf. Commun. Technol. 17, 65–81 (2021).
11. Veletsianos, G., Houlden, S., Reid, D., Hodson, J. & Thompson, C. P. Design principles for an educational intervention into online vaccine misinformation. TechTrends 66, 748–759. 10.1007/s11528-022-00755-4 (2022).
12. Fidan, M. The effects of microlearning-supported flipped classroom on pre-service teachers' learning performance, motivation and engagement. Educ. Inf. Technol. 10.1007/s10639-023-11639-2 (2023).
13. Şahin, Z. G. & Kırmızıgül, H. G. Experiences of a mathematics teacher implementing micro learning during emergency distance teaching. Kastamonu Educ. J. 31, 218–229. 10.24106/kefdergi.1271489 (2023).
14. Lee, Y. M., Jahnke, I. & Austin, L. Mobile microlearning design and effects on learning efficacy and learner experience. Educ. Technol. Res. Dev. 69, 885–915. 10.1007/s11423-020-09931-w (2021).
15. Salleh, D., Khairudin, N. & Ibrahim, M. Micro learning: Motivating students' learning interests. Jurnal Psikologi Malaysia 36, 153–162 (2022).
16. Iqbal, M. Z., Alaskar, M., Alahmadi, Y., Alhwiesh, B. A. & Mahrous, A. A. Perceptions of residents on the microlearning environment in postgraduate clinical training. Educ. Res. Int. 2021, 1–6. 10.1155/2021/9882120 (2021).
17. Choo, C. Y. & Rahim, A. S. A. Pharmacy students' perceptions and performance from a microlearning-based virtual practical on the elucidation of absolute configuration of drugs. Asian J. Univ. Educ. 10.24191/ajue.v17i4.16187 (2021).
18. Dolowitz, A., Collier, J., Hayes, A. & Kumsal, C. Iterative design and integration of a microlearning mobile app for performance improvement and support for NATO employees. TechTrends 67, 143–149. 10.1007/s11528-022-00781-2 (2023).
19. Sankaranarayanan, R. & Mithun, S. Exploring the effectiveness of AI-enabled microlearning in database design and programming course. In 2024 IEEE Frontiers in Education Conference (FIE), 1–7. 10.1109/FIE61694.2024.10892916 (2024).
20. Monib, W. K. et al. Generative AI and future education: A review, theoretical validation, and authors' perspective on challenges and solutions. PeerJ Comput. Sci. 10, e2105. 10.7717/peerj-cs.2105 (2024).
21. Niazai, H. & Monib, W. K. In Teaching in the Age of Medical Technology (ed. Martínez Asanza, D.) 91–122 (IGI Global Scientific Publishing, 2025).
22. Krathwohl, D. R. A revision of Bloom's taxonomy: An overview. Theory Into Practice 41, 212–218. 10.1207/s15430421tip4104_2 (2002).
23. Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H. & Krathwohl, D. R. Taxonomy of Educational Objectives: The Classification of Educational Goals (David McKay Company, 1956).
24. Kohnke, L., Zou, D. & Xie, H. Microlearning and generative AI for pre-service teacher education: A qualitative case study. Educ. Inf. Technol. 10.1007/s10639-025-13606-5 (2025).
25. Willenborg, A. & Withorn, T. Generative AI for college students: A collaboratively developed online microcourse on GenAI in the college classroom. Commun. Inf. Lit. 19, 113–130. 10.15760/comminfolit.2025.19.1.7 (2025).
26. Baillifard, A., Gabella, M., Lavenex, P. B. & Martarelli, C. S. Effective learning with a personal AI tutor: A case study. Educ. Inf. Technol. 30, 297–312. 10.1007/s10639-024-12888-5 (2025).
27. Sun, L. & Zhou, L. Does generative artificial intelligence improve the academic achievement of college students? A meta-analysis. J. Educ. Comput. Res. 62, 1676–1713. 10.1177/07356331241277937 (2024).
28. Ahlgren, T. L., Sunde, H. F., Kemell, K.-K. & Nguyen-Duc, A. Assisting early-stage software startups with LLMs: Effective prompt engineering and system instruction design. Inf. Softw. Technol. 187, 107832. 10.1016/j.infsof.2025.107832 (2025).
29. Sankaranarayanan, R., Yang, M. & Kwon, K. Exploring the role of a microlearning instructional approach in an introductory database programming course: An exploratory case study. J. Comput. High. Educ. 10.1007/s12528-024-09408-2 (2025).
30. Lau, K. W., Lee, P. Y. & Chung, Y. Y. A collective organizational learning model for organizational development. Leadersh. Organ. Dev. J. 40, 107–123. 10.1108/LODJ-06-2018-0228 (2019).
31. Dolasinski, M. J. & Reynolds, J. Microlearning: A new learning model. J. Hosp. Tour. Res. 44, 551–561. 10.1177/1096348020901579 (2020).
32. Molenda, M. In search of the elusive ADDIE model. Perform. Improv. 54, 40–42. 10.1002/pfi.21461 (2015).
33. Santally, M. I., Rajabalee, Y. & Cooshna-Naik, D. Learning design implementation for distance e-learning: Blending rapid e-learning techniques with activity-based pedagogies to design and implement a socio-constructivist environment. Eur. J. Open Distance E-Learn. (2012).
34. Khadija, H. & Chergui, M. Toward a new instructional design methodology in the era of generative AI. In International Symposium on Generative AI and Education, 3–15 (Springer, Cham, 2025).
35. Shakeel, S. I., Al Mamun, M. A. & Haolader, M. F. A. Instructional design with ADDIE and rapid prototyping for blended learning: Validation and its acceptance in the context of TVET Bangladesh. Educ. Inf. Technol. 28, 7601–7630. 10.1007/s10639-022-11471-0 (2023).
36. Spatioti, A. G., Kazanidis, I. & Pange, J. A comparative study of the ADDIE instructional design model in distance education. Information 13, 402 (2022).
37. Shabadurai, Y., Chua, F. F. & Lim, T. Y. Investigating the employees' perspectives and experiences of microlearning content design for online training. Int. J. Inf. Educ. Technol. 12, 786–793. 10.18178/ijiet.2022.12.8.1685 (2022).
38. Ilic, P. Micro-lessons as a response to emergency remote teaching. In Proceedings of 2022 IEEE Learning with MOOCS (LWMOOCS), 155–160. 10.1109/LWMOOCS53067.2022.9927755 (2022).
39. Busse, J. & Schumann, M. Towards a pedagogical pattern language for micro learning in enterprises. In ACM International Conference Proceeding Series, 1–9. 10.1145/3489449.3489973 (2021).
40. Mishra, P. & Koehler, M. J. Technological pedagogical content knowledge: A framework for teacher knowledge. Teach. Coll. Rec. 108, 1017–1054. 10.1111/j.1467-9620.2006.00684.x (2006).
41. Monib, W. K., Qazi, A., Apong, R. A. & Mahmud, M. M. Investigating learners' perceptions of microlearning: Factors influencing learning outcomes. IEEE Access, 1–17. 10.1109/ACCESS.2024.3472113 (2024).
42. Bates, T. Is the ADDIE model appropriate for teaching in a digital age? https://www.tonybates.ca/2014/09/09/is-the-addie-model-appropriate-for-teaching-in-a-digital-age/? (2014).
43. Peeters, M. J. & Schmude, K. A. Learning assessment vs program evaluation. Am. J. Pharm. Educ. 84, ajpe7938. 10.5688/ajpe7938 (2020).
44. Colorado College. Assessment vs. evaluation. https://www.coloradocollege.edu/other/assessment/what-is-assessment/assessment-vs-evaluation.html (2022).
45. Educational Assessment, Evaluation and Accountability: Aims and scope. https://link.springer.com/journal/11092/aims-and-scope (2025).
46. Khanipoor, F. & Karimian, Z. Unleashing the power of data: The promising future of learning analytics in medical education: A commentary. Educ. Inf. Technol. 30, 10373–10380. 10.1007/s10639-024-13273-y (2025).
47. Holton, E. F., Swanson, R. A. & Naquin, S. S. Andragogy in practice: Clarifying the andragogical model of adult learning. Perform. Improv. Q. 14, 118–143. 10.1111/j.1937-8327.2001.tb00204.x (2001).
48. Chisega-Negrila, A.-M. Microlearning for professional development. J. Defense Resources Manag. 13, 79–87 (2022).
49. Hug, T. Microlearning: A new pedagogical challenge (introductory note). In Microlearning: Emerging Concepts, Practices and Technologies after E-Learning: Proceedings of Microlearning Conference 2005: Learning & Working in New Media, 8–11 (2005).
50. Ghafar, Z. et al. Microlearning as a learning tool for teaching and learning in acquiring language: Applications, advantages, and influences on the language. Can. J. Educ. Soc. Stud. 3, 45–62. 10.53103/cjess.v3i2.127 (2023).
51. Tabares, M. S., Vallejo, P., Montoya, A. & Correa, D. A feedback model applied in a ubiquitous microlearning environment using SECA rules. J. Comput. High. Educ. 34, 462–488. 10.1007/s12528-021-09306-x (2022).
52. Prior Filipe, H., Paton, M., Tipping, J., Schneeweiss, S. & Mack, H. G. Microlearning to improve CPD learning objectives. Clin. Teach. 17, 695–699. 10.1111/tct.13208 (2020).
53. Roskowski, S. M., Wolcott, M. D., Persky, A. M., Rhoney, D. H. & Williams, C. R. Assessing the use of microlearning for preceptor development. Pharmacy 11, 102. 10.3390/pharmacy11030102 (2023).
54. Boumalek, K., El Mezouary, A., Hmedna, B. & Bakki, A. In General Aspects of Applying Generative AI in Higher Education: Opportunities and Challenges (eds Lahby, M., Maleh, Y., Bucchiarone, A. & Elisa Schaeffer, S.) 241–262 (Springer Nature Switzerland, 2024).
55. Bannister, J., Neve, M. & Kolanko, C. Increased educational reach through a microlearning approach: Can higher participation translate to improved outcomes? J. Eur. CME 9, 1834761. 10.1080/21614083.2020.1834761 (2020).
56. Ho, Y. Y., Yeo, E. Y. & Wijaya, D. S. B. M. Turning coffee time into teaching moments through bite-sized learning for adult learners. J. Contin. High. Educ. 71, 183–198. 10.1080/07377363.2021.2024000 (2022).
57. Carter, J. W. & Youssef-Morgan, C. Psychological capital development effectiveness of face-to-face, online, and micro-learning interventions. Educ. Inf. Technol. 27, 6553–6575. 10.1007/s10639-021-10824-5 (2022).
58. Recalde Drouet, E. M., Tello Salazar, D. M., Charro Domínguez, T. L. & Catota Pinthsa, P. J. Analysis of the repercussions of artificial intelligence in the personalization of the virtual educational process in higher education programs. Data Metadata 3, 1. 10.56294/dm2024386 (2024).
59. Cizek, G. J., Andrade, H. L. & Bennett, R. E. Handbook of Formative Assessment in the Disciplines 3–19 (Taylor and Francis, 2019).
60. Buyukkarci, K. & Sahinkarakas, S. The impact of formative assessment on students' assessment preferences. Read. Matrix Int. Online J. 21, 142–161 (2021).
61. Tienson-Tseng, H. L. Best practices in summative assessment. ACS Symp. Ser. 1337, 219–243. 10.1021/bk-2019-1337.ch010 (2019).
62. Gao, H., Hashim, H. & Md Yunus, M. Assessing the reliability and relevance of DeepSeek in EFL writing evaluation: A generalizability theory approach. Lang. Test. Asia 15, 33. 10.1186/s40468-025-00369-6 (2025).
63. Monib, W. K., Qazi, A. & Mahmud, M. M. Exploring learners' experiences and perceptions of ChatGPT as a learning tool in higher education. Educ. Inf. Technol. 30, 917–939. 10.1007/s10639-024-13065-4 (2025).
64. Monib, W. K., Qazi, A., Silva, L. D. & Mahmud, M. M. Exploring learners' experiences with ChatGPT in personalized learning. In 2024 6th International Workshop on Artificial Intelligence and Education (WAIE), 66–70. 10.1109/WAIE63876.2024.00019 (2024).
65. Braun, V. & Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. 10.1191/1478088706qp063oa (2006).
66. Creswell, J. W. & Creswell, J. D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches 6th edn (Sage Publications, 2022).
67. Landis, J. R. & Koch, G. G. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics 33, 363–374. 10.2307/2529786 (1977).
68. Zhao, X., Feng, G. C., Ao, S. H. & Liu, P. L. Interrater reliability estimators tested against true interrater reliabilities. BMC Med. Res. Methodol. 10.1186/s12874-022-01707-5 (2022).
69. Ali, C., Acquah, S. & Esia-Donkoh, K. A comparative study of SAM and ADDIE models in simulating STEM instruction. Afr. Educ. Res. J. 9, 852–859. 10.30918/aerj.94.21.125 (2021).
70. Alshammari, M. T. Design and evaluation of online microlearning tailored to learning styles. Int. J. Adv. Appl. Sci. 12, 213–224. 10.21833/ijaas.2025.04.023 (2025).
71. Chisholm, B. S., Wallace, M. L., Blockman, M. & Orrell, C. "WhatsApp is best!" Acceptability and feasibility of WhatsApp-based HIV microlearning for healthcare workers in remote South African clinics: A pragmatic, mixed-methods, cluster-randomised trial. Nurse Educ. Pract. 10.1016/j.nepr.2025.104407 (2025).
72. Frosch, K. & Lindauer, F. Learning with short bursts: How effectively can we build competencies in climate change-related areas based on microlearning? Eur. J. Educ. 10.1111/ejed.70088 (2025).
73. Hidayati, S. N. et al. Microlearning-oriented ISOCC learning model framed student worksheets for improving science argumentation skills. In AIP Conference Proceedings 3116. 10.1063/5.0210440 (2024).
74. Sathiyaseelan, B., Mathew, J. & Nair, S. Microlearning and learning performance in higher education: A post-test control group study. J. Learn. Dev. 11, 1–14. 10.56059/jl4d.v11i1.752 (2024).
75. Zhang, J. & West, R. E. Designing microlearning instruction for professional development through a competency-based approach. TechTrends 64, 310–318. 10.1007/s11528-019-00449-4 (2020).
76. Gross, B. et al. Microlearning for patient safety: Crew resource management training in 15-minutes. PLoS ONE. 10.1371/journal.pone.0213178 (2019).
77. Moore, R. L., Hwang, W. & Moses, J. D. A systematic review of mobile-based microlearning in adult learner contexts. Educ. Technol. Soc. 27, 137–146. 10.30191/ETS.202401_27(1).SP02 (2024).
78. Adnan, N. H. & Ritzhaupt, A. D. Software engineering design principles applied to instructional design: What can we learn from our sister discipline? TechTrends 62, 77–94. 10.1007/s11528-017-0238-5 (2018).