Scientific Reports
. 2025 Nov 19;15:40807. doi: 10.1038/s41598-025-24650-z

Development of a digital peer-feedback tool for clinical skills training: a pilot study with second-year medical students

Mustafa Onur Yurdal 1, Remzi Y Kıncal 2
PMCID: PMC12630813  PMID: 41258309

Abstract

Providing structured feedback in medical education is essential for skill development. However, limitations such as increasing student numbers, reduced trainer availability, and a lack of systematic feedback hinder effective learning. Digital peer-feedback tools offer a potential solution by enabling interactive and scalable assessment. This study introduces Medifeeding, a video-based digital peer-feedback tool designed to enhance clinical skills training among medical students. The study employed an exploratory convergent mixed-methods pilot feasibility design. Medifeeding was developed following Moonen’s 3-Space model and tested with 31 second-year medical students at Çanakkale Onsekiz Mart University. Data sources were a post-use online form (educational benefit yes/no, a forced ranking of 10 attributes, and a 5-point overall rating), semistructured interviews (n = 24), and one focus group (n = 5). Quantitative analyses included frequency analysis, the Friedman test with Kendall’s W, and Wilcoxon signed-rank tests (Holm-adjusted). Qualitative data were analyzed thematically (Strauss’s framework). Strands were integrated in a joint display to generate meta-inferences on acceptability and usability. Ninety percent of participants (28/31) reported educational benefit. The overall rating averaged 4.13/5 (90% rated 4–5). For the feature ranking, the Friedman test showed an overall difference (χ2(9) = 86.162, p < 0.001; Kendall’s W = 0.309). The highest-priority attribute was ‘educationally beneficial’ (mean rank ≈ 2.13; median 1), followed by ‘offering a different experience’ (≈ 4.35) and ‘digitally user-friendly’ (≈ 4.84); post-hoc Wilcoxon tests (Holm) confirmed multiple pairwise differences. Qualitatively, six themes emerged: (1) educational benefits, (2) peer interaction, (3) moderation and trust, (4) new usage scenarios, (5) awareness and motivation, and (6) technical infrastructure. Students highlighted self-assessment and peer learning; concerns included misinformation and technical limitations.
Gamification and AI-assisted moderation were suggested as potential improvements. This study highlights the potential of Medifeeding as a digital peer-feedback system to support medical students’ mastery of clinical skills. In this pilot with second-year students, participants reported educational benefit and positive usability while also noting concerns about the validity of peer reviews, the risk of misinformation, and variable engagement. Overall, our findings indicate that Medifeeding is a promising tool to support structured peer feedback in clinical skills training; however, larger and longer-term studies are needed to evaluate its impact on performance and learning outcomes, usability, and feedback validation.

Supplementary Information

The online version contains supplementary material available at 10.1038/s41598-025-24650-z.

Keywords: Peer feedback, Medical education, Digital learning, Clinical skills, Student engagement, Video-based assessment

Subject terms: Education, Health care

Background

The development of clinical skills in modern medical education is critical for future physicians to deliver safe and effective patient care. However, some problems remain, such as limited practical training opportunities, a lack of clear criteria for assessing clinical skills, and inadequacies in integrating new technologies into traditional curricula13. There is a practical gap in delivering timely, criterion-referenced feedback at scale for second-year clinical skills. Existing digital peer-feedback tools seldom couple short, student-generated skill videos with structured, rubric-based comments aligned to course objectives. Medifeeding addresses this gap by enabling video-based peer feedback anchored in explicit criteria and near-peer learning.

Feedback matters for awareness, goal-setting, and improvement48, yet timely, individualized input is often difficult to provide at scale.

However, in traditional clinical training settings, providing timely and detailed feedback to each student presents practical challenges. Owing to increasing student numbers, the limited number of trainers, intense clinical workloads, and constraints in the educational environment, many students do not receive sufficient feedback during clinical practice, and providing individualized, high-quality feedback is becoming increasingly difficult9,10. This issue is particularly important for the Gen Z medical students currently studying in medical schools, who have high expectations for detailed, timely, and constructive feedback11.

In this context, peer feedback and peer-supported learning methods stand out as increasingly important approaches in medical education5,1214. Current research shows that, when structured, peer feedback can be comparable to instructor-centered approaches4,1517.

The advantages of peer feedback are also supported by established learning theories. Drawing on Bandura’s social learning theory18, Medifeeding enables students to observe peers’ procedural videos, compare performances, and internalize modeled techniques—processes that can strengthen self-efficacy through vicarious learning and feedback. Consistent with Vygotsky’s (socio)constructivist theory19, the tool’s dialogic, criteria-based comments support the co-construction of understanding and provide scaffolded guidance within the zone of proximal development. In line with Zimmerman’s self-regulated learning theory20, structured prompts and rubrics encourage forethought (goal setting before viewing), performance (strategy use during practice), and self-reflection (adapting approaches based on received and provided feedback). Together, these mechanisms constitute the conceptual framework guiding our evaluation of Medifeeding’s acceptability and perceived educational value in second-year clinical skills training.

Problems related to the lack of feedback and trainers in clinical education, like other problems, became more evident with the COVID-19 pandemic, and the sudden shift to digital and hybrid learning methods severely reduced the opportunities for face-to-face clinical practice2124.

Thus, the restrictions associated with the COVID-19 pandemic accelerated the move to digital solutions and highlighted the need for creative approaches to feedback in medical education that embrace digitalization and can facilitate distance education.

One of the most innovative of these approaches is video-based peer feedback (VBPF). Compared with conventional face-to-face feedback, VBPF enables self-review and comparison with peers2527. Evidence suggests video-supported feedback can aid recall and self-evaluation and is generally well received by students2830. However, concerns remain about feedback quality and consistency, underscoring the need for clear criteria and rater preparation30,31.

In addition, student-centered, scalable feedback methods are needed in medical education in Türkiye, aligned with the National Core Curriculum; similar needs were identified at our institution32.

As a result, we target a structured, course-aligned peer-video workflow that reduces trainer burden and increases timely, criterion-referenced input for Gen Z learners. Medifeeding differs from generic platforms by (i) pairing short student videos with rubric-based ratings/comments, (ii) enabling near-peer learning, and (iii) organizing feedback around explicit clinical-skill criteria. The Medifeeding digital feedback tool aims to create a learning culture that integrates theoretical knowledge and practical skills by enabling students to record their clinical presentations and receive audio or written feedback from their peers.

Accordingly, this study pilot-tests Medifeeding’s acceptability and usability with second-year students and explores perceived educational value to inform future controlled evaluations.

Research aim and questions

This research explores how medical students from Generation Z engage with and view peer feedback during clinical skills training via Medifeeding. It aims to assess the impact of peer feedback by analyzing user participation levels and learning patterns while also examining the dynamics of feedback interaction to uncover its advantages and disadvantages. The research investigates the following questions.

  1. What specific improvements can be made to the Medifeeding digital tool on the basis of user feedback?

  2. What are the perceived educational benefits and limitations of the Medifeeding digital tool from the students’ perspective?

  3. How do medical students utilize the digital peer-feedback tool Medifeeding to enhance their clinical skills?

Methods

Study design

This research includes the design and pilot implementation stages of a digital peer-feedback tool (Medifeeding) developed for use by medical school students during their clinical skills training. Moonen’s 3-Space digital learning tool model was adopted in the development phase of the digital tool33. To appraise this early prototype, we adopted an exploratory convergent mixed-methods pilot feasibility design following the five-step planning guidance of Aschbrenner et al.34. The primary feasibility domains were the acceptability and usability of Medifeeding. Quantitative acceptability and priority ranking scores (online form) were integrated with qualitative themes from interviews and a focus group through a joint display to generate meta-inferences for future trials.

Ethical approval

This study was approved by the Scientific Research Ethics Committee of Canakkale Onsekiz Mart University on October 26, 2023 (Decision No. 13/58) and subsequently authorized by the Faculty of Medicine at Canakkale Onsekiz Mart University on November 8, 2023 (Document No. E-30566003-604.02-2300274597).

Participants

This study focused on second-year students (Term 2) of the Çanakkale Onsekiz Mart University (ÇOMÜ) Faculty of Medicine in the 2024–2025 academic year. The research specifically examined three clinical skills, namely, intravenous (IV) injection, suturing and endotracheal intubation, which are part of the Term 2 curriculum.

The participants were selected via criterion sampling, a type of purposive sampling strategy35 commonly used in qualitative and implementation research to identify information-rich cases while optimizing resources3538,3942. The inclusion criteria required the enrolled students to download the Medifeeding app; to upload performance videos for IV injection, suturing, and intubation; to provide feedback via the app; and to consent to audio and video recording. Criterion sampling was applied to ensure that the participants met these prerequisites; students who did not meet these criteria were excluded from the study. In total, 31 students voluntarily participated in the study by using Medifeeding during clinical skills training.

No incentives or rewards were provided (no cash, vouchers, course credit, or grade adjustments). Participation was entirely voluntary, and non-participation had no consequences for course standing. Recruitment and consent were handled by a researcher with no grading responsibilities. Students could withdraw at any time without penalty. Identifiable information was not shared with teaching staff; data were de-identified prior to analysis. Table 1 summarizes the sex distribution (female/male) across each data-collection component of the pilot.

Table 1.

Demographic breakdown of pilot participants by data-collection component (sex: female/male). Values are n (%); row percentages.

Component Female Male Total
Prototype users 17 (54.8%) 14 (45.2%) 31
Post-use quantitative feedback 17 (54.8%) 14 (45.2%) 31
Individual interviews 14 (58.3%) 10 (41.7%) 24
Focus group 4 (80.0%) 1 (20.0%) 5

Data collection tools

  1. Online evaluation form (first round) (this form is given in the supplementary file)

This form was designed by the researchers to quantitatively evaluate users’ perspectives on Medifeeding after their first contact with the tool. It consisted of three simple questions and was modeled on the evaluation tools of common software marketplaces (Apple App Store, Google Play Store, etc.).

  2. Individual interview form (second round) (this form is given in the supplementary file)

Interviewing willing participants one-on-one was the first stage of qualitative data collection. One-on-one interviews provide deeper insight into people’s ideas and experiences, clarifying how the students used Medifeeding. The students answered a semistructured interview form to provide feedback on Medifeeding. The interview questions addressed technical qualities, instructional value, ease of use, and areas for development, and the form draws on the relevant literature4146. The validity and reliability of the form were examined by experts in clinical skills training, medical education, instructional design, educational technology, and software development. Two groups of experts were formed: those who assessed the items and those who judged the sufficiency of the review. Of the thirty experts contacted, four clinical skills instructors, one peer assessment specialist in higher education, one expert in medical education and peer assessment, one expert in medical informatics and peer assessment, and one expert in medical education and peer support participated in the review. The final form, derived from the literature, was reviewed by these domain experts via a 2–1–0 relevance scale (2 = appropriate; 1 = appropriate with revision; 0 = remove). Fleiss’ kappa, an extension of Cohen’s kappa suitable for multiple raters, was used to evaluate interexpert agreement; the coefficient of 0.734 indicates substantial expert agreement4749. On the basis of relevant research and validated by expert review, protocols for both individual and focus group interviews were developed4146.
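The agreement statistic used above can be computed in a few lines. The following is a minimal Python sketch of Fleiss’ kappa for an items × categories count matrix; the example matrices below are hypothetical toy data, not the study’s actual expert ratings:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for agreement among multiple raters.

    counts: 2-D array with one row per rated item and one column per
    category; counts[i, j] is the number of raters who assigned item i
    to category j. Every item must be rated by the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_items = counts.shape[0]
    n_raters = counts[0].sum()

    # Observed agreement: mean per-item proportion of agreeing rater pairs.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Expected agreement from the overall category proportions.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.square(p_j).sum()

    return (p_bar - p_e) / (1.0 - p_e)
```

With perfect agreement the statistic equals 1; values in the range of the 0.734 reported above are conventionally read as substantial agreement.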

  3. Focus group discussion form (third round) (this form is given in the supplementary file)

A focus group discussion formed the third round of qualitative data collection. In qualitative research, focus groups are particularly valuable because they draw on participant interaction and can yield insights that individual interviews cannot50,51. Participants can voice their opinions, discuss those of others, and revise or challenge their own answers52. This participatory format therefore gave the students a more comprehensive and richer account of their Medifeeding experiences. As with the individual interview form, the semistructured focus group form was developed from the related literature4146 and reviewed by professionals in medical education, instructional design, peer evaluation, and educational technology, using the same 2–1–0 relevance scale (2 = appropriate; 1 = appropriate with revision; 0 = remove). The form consisted of seven main questions and related subquestions covering ease of use, educational benefits, technical issues, comparisons with similar applications, perspectives from different stakeholders, alternative usage scenarios, digital integration suggestions, and ideal design improvements.

Process

The study proceeded in four phases, summarized in Fig. 1.

Fig. 1.


Schematic overview of the four-phase research process of Medifeeding.

  1. Development of Medifeeding

Medifeeding is an iOS- and Android-based digital peer-feedback tool designed to allow medical students to upload clinical skill performance videos; exchange feedback in audio, video, and text formats; and evaluate the performance of their peers. Its design was guided by Moonen’s33 3-Space digital learning tool model, and the procedural steps carried out under this model were as follows:

Pedagogical content area: In the first space of Moonen’s33 model, Falchikov’s peer feedback marking (PFM) model, whose effectiveness has been demonstrated, was adopted as the pedagogical basis53. The learning guides used within the faculty were also restructured according to this model.

Interaction area: In this space, gamification elements were investigated, and both classical and innovative interaction types were adopted. Classical features such as emojis, likes, text comments, and voice comments were added to posts, while the rarely used innovative feature of adding video comments to posts was designed to increase digital interaction.

Technological infrastructure area: For the third and final space of the model, we adopted a rapid-prototyping, iterative software-development approach5456. In this phase, the research team first designed the underlying algorithms and implemented the application, then carried out internal testing and debugging to ensure stability, and ultimately released Medifeeding on both the Apple App Store (link) and Google Play Store (link).

  2. Pilot testing of the Medifeeding beta version

On November 29, 2024, a briefing session was conducted with voluntary participants to introduce Medifeeding and explain the study objectives. (Illustrations are given in the supplementary file.) The participants downloaded and registered for the application via the App Store or Google Play Store. Students used Medifeeding during IV injection, suturing, and intubation training and recorded and shared their skill performance videos. (Illustrations are given in the supplementary file.) After completing their clinical training on December 5, 2024, the students were encouraged to use the peer assessment features (ratings, text comments, emoji reactions, likes/dislikes, and video comments) for one week. At this stage, the online form, the first data collection tool, was presented to the students via Medifeeding after their first contact with the tool to capture their initial perspectives quantitatively.

  3. Individual interviews (second round of data collection)

After one week of active usage, the students were invited to participate in semistructured individual interviews. All interviews were audio-recorded with participants’ informed consent and conducted in compliance with ethical guidelines. The shortest interview lasted 21 min 35 s, and the longest 34 min 19 s. On the basis of the interview findings, five students (with both positive and negative perspectives on Medifeeding) were selected for further exploration in the next round of data collection, the focus group discussion. (Illustrations are given in the supplementary file.)

  4. Focus group discussion (third round of data collection)

The third round of data collection was conducted through a focus group discussion to obtain more in-depth data on students’ experiences using Medifeeding. The participants were told that the discussion would focus on their views on the usability of the application, the effectiveness of the educational content, the technical issues encountered, and possible enhancements (illustrations are given in the supplementary file).

Data analysis

Quantitative data analysis

In this study, the responses to the online form collected within the scope of quantitative data collection were examined via frequency analysis and Friedman and Wilcoxon tests. Specifically, we used the nonparametric Friedman test to assess overall differences across attributes for repeated-measures ordinal data. When the omnibus test was significant, we conducted pairwise two-sided Wilcoxon signed-rank tests between attributes and controlled the familywise error rate with the Holm adjustment (α = 0.05). As an omnibus effect size, we reported Kendall’s W, computed from the Friedman statistic as W = χ2/[n(k−1)], where n is the number of participants and k the number of attributes.
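As an illustration of this analysis pipeline (a minimal sketch, not the authors’ actual scripts), the code below runs the omnibus Friedman test, derives Kendall’s W from the formula above, and applies Holm-corrected pairwise Wilcoxon signed-rank tests to a hypothetical participants × attributes matrix of forced rankings:

```python
import numpy as np
from scipy import stats

def rank_analysis(ranks):
    """Friedman omnibus test, Kendall's W, and Holm-adjusted pairwise
    Wilcoxon signed-rank tests for a participants x attributes matrix of
    forced rankings (smaller rank = higher priority)."""
    ranks = np.asarray(ranks, dtype=float)
    n, k = ranks.shape

    # Omnibus test across the k attributes (repeated-measures, ordinal).
    chi2, p = stats.friedmanchisquare(*[ranks[:, j] for j in range(k)])

    # Effect size: Kendall's W = chi2 / [n (k - 1)].
    W = chi2 / (n * (k - 1))

    # All pairwise two-sided Wilcoxon signed-rank tests.
    pairs, raw = [], []
    for i in range(k):
        for j in range(i + 1, k):
            pairs.append((i, j))
            raw.append(stats.wilcoxon(ranks[:, i], ranks[:, j]).pvalue)

    # Holm step-down adjustment to control the familywise error rate.
    m = len(raw)
    order = np.argsort(raw)
    adjusted = np.empty(m)
    running = 0.0
    for pos, idx in enumerate(order):
        running = max(running, (m - pos) * raw[idx])
        adjusted[idx] = min(1.0, running)

    return chi2, p, W, dict(zip(pairs, adjusted))
```

Applied to a 31 × 10 ranking matrix like the one collected here, this pipeline produces statistics of the form reported in the Results (χ2(9), Kendall’s W, and Holm-adjusted pairwise p-values).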

Qualitative data analysis

Content analysis was used to examine the interviews and focus groups that were conducted for this research. Inductive and deductive methods have been identified as primary approaches to content analysis in the literature57,58. We conducted an inductive thematic/content analysis of the transcripts. After each interview, recordings were transcribed and coded by two researchers, who mutually confirmed codes before proceeding to the next interview; the individual-interview phase was stopped at Interview 24 when repetition of codes indicated thematic saturation. Focus-group data were analyzed with the same procedures. Following Strauss’s59 three-stage approach, we applied open, axial, and selective coding.

First comes open coding, an open-ended review of the data in search of initial codes and themes. Axial coding then follows, strengthening connections between themes and establishing new ones. The last step, selective coding, is to zero in on a major theme and build the other results around it. In this study, the interview and focus group data were initially subjected to open coding, which yielded six major themes; we then double-checked and refined the codes. At the axial coding stage we classified and related the results, and during selective coding we identified the overarching theme and built the analysis around it. Inter-theme connections were interpreted using axial memos and code co-occurrence across transcripts and are visualized in the Results. Open coding produced a broader pool of labels that we iteratively consolidated, using constant comparison and axial memos, into a consensus structure of 29 codes, 17 subthemes, and 6 themes. Two coders reconciled differences by discussion to consensus. (The codebook is provided as a supplementary file.)

Results

This section presents the quantitative and qualitative data collected during the pilot implementation in the order in which they were analyzed. The findings are presented chronologically with respect to data collection and analysis and are mapped to the research questions. In this context, quantitative data are presented first, followed by qualitative data.

Quantitative findings

The first of the three questions in the online form was “Do you think that using the Medifeeding application in clinical skills training would be educationally beneficial? (yes/no)”. We observed a female/male distribution of 17/14 among app users and survey respondents (N = 31). Ninety percent (n = 28) of the students answered “yes” to this question, whereas 10% (n = 3) answered “no”. According to these results, most of the participants reported that the Medifeeding application was "educationally beneficial”.

The second question in the online form asked students to rank the prominent features of Medifeeding from most to least prominent, with 10 options. Fifty-two percent of the students placed “Educational benefit” first in this ranking. The Friedman test showed a clear overall difference among the ten attributes, χ2(9) = 86.162, p = 9.56 × 10−15, with Kendall’s W = 0.309, indicating moderate agreement in priority rankings across participants. In terms of rank tendency (smaller rank = higher priority), “Educationally beneficial” had the lowest mean rank (≈ 2.13; median 1) and thus the highest priority, followed by “Offering a different experience” (≈ 4.35), “Digitally user-friendly” (≈ 4.84), “Suitability for intended use” (≈ 5.06), and “Functional (digital and educational)” (≈ 5.19). At the lower-priority end were “Increasing participation” (≈ 6.74), “Aesthetics (appearance and usability)” (≈ 7.77), and “Competitors (already existing applications)” (≈ 7.77). Post-hoc Wilcoxon tests with Holm correction showed that “Educationally beneficial” was prioritized significantly more highly than each of the other nine attributes (all Holm-adjusted p < 0.05); in total, 18 of the 45 pairwise comparisons remained significant after correction. Figure 2 graphically illustrates how participants ranked the prominent features of Medifeeding.

Fig. 2.


Participants’ responses to the question of ranking the prominent features of Medifeeding.

The last question in the online form was structured like the application ratings of classic app marketplaces: students were asked to give a holistic evaluation of the Medifeeding application out of 5 points. The participants gave Medifeeding an average of 4.13 points out of 5 (N = 31). While 90% of the participants gave 4 or 5 points, the two students who rated the application 1 and 2 points had also stated, in the first question of the online form, that they did not find Medifeeding educationally beneficial. The outcomes of this holistic evaluation are visually represented in Fig. 3.

Fig. 3.


Overall evaluation of the Medifeeding application. Distribution of ratings from Level 1 (lowest) to Level 5 (highest); values indicate counts (N = 31, mean = 4.13).

According to the data obtained from the online form, the Medifeeding digital feedback tool is educationally useful, offers a different educational experience, is original, and is digitally user-friendly.

Qualitative findings

Main themes and subthemes

Among interviewees (n = 24) the female/male distribution was 14/10, and in the focus group (n = 5) 4/1. The analysis of individual interviews and focus group discussions revealed six main themes:

(1) Educational benefits, (2) peer interaction, (3) moderation and trust, (4) new usage scenarios, (5) awareness and motivation, and (6) technical infrastructure.

These themes are interconnected; however, Educational Benefit is recognized as the central concept to which other factors are related (Fig. 4). The subsequent sections discuss the key findings pertaining to each theme.

Fig. 4.


A conceptual map illustrating the core themes of the study.

  1. Educational benefits

Participants widely described Medifeeding as helpful for skill acquisition and knowledge retention. A primary benefit was self-review and error recognition: “Being able to review my own videos helps me detect and correct errors” (P2). The tool also fostered peer learning through observation and adoption of good practices, especially from more experienced peers: “Watching upper-year students’ videos provides valuable insights into procedural skills” (P4). This collaborative aspect was referenced more frequently in the focus group than in individual interviews (This figure is given as supplementary file 11).

Students further noted that revisiting videos supported retention of previously learned procedures: “We tend to forget certain medical procedures throughout the year, but reviewing these videos helps reinforce them” (P2). Related facilitators are summarized in Fig. 5.

Fig. 5.


Comparison of facilitating and hindering factors identified in the study.

  2. Peer interaction

Participants described peer interaction as important to learning, with both opportunities and risks. Students valued learning from more experienced peers, noting better preparation through vertical learning: “Seeing upper-year students’ videos helps me come more prepared” (P11).

At the same time, views on peer feedback and ratings were mixed. Some found comments helpful for gauging performance, while others questioned reliability due to bias and potential misinformation: “If a poorly executed procedure receives high ratings, students might assume it is correct” (P5). Related facilitators and risks are shown in Fig. 5.

  3. Moderation and trust

Participants raised concerns about the reliability of peer assessments and the risk of misinformation—that flawed demonstrations might be taken as correct if highly rated: “If bad performance receives high ratings, students might falsely believe it’s correct” (P5).

To reduce this risk and improve content credibility, students suggested oversight mechanisms, including faculty review or AI-assisted moderation: “A teacher or AI-based system should monitor video content to prevent misleading information” (P7).

Students also highlighted safety features to support participation, such as reporting/flagging and privacy options (e.g., face blurring, anonymity): “Some students hesitate to upload videos because they don’t want their faces visible” (P2). Related facilitators and barriers appear in Fig. 5.

  4. New usage scenarios

Participants often looked beyond Medifeeding’s current skill-based use and proposed several extensions. One recurring idea was clinical case sharing with diagnostic discussion and imaging review: “…uploading real clinical scenarios or radiological images could enhance learning” (P3). Suggestions also covered basic sciences, using the tool for anatomy models and histology slides with threaded comments: “we could use it for anatomy models or histology slides, discussing what we see in the comment section” (P14). Several participants further proposed emergency simulations to practice decision-making in critical situations.

  5. Awareness and motivation

Low awareness and limited incentives appeared to hinder broader adoption and sustained use of Medifeeding. Several students noted a need for stronger institutional promotion: “Many students still don’t know about this tool” (P16). To encourage engagement, participants proposed gamification/incentives (e.g., points, badges, recognition): “We need a reason to use it, like points or some form of recognition” (P21).

Motivation was also linked to privacy. Some students hesitated to share performance videos and preferred options such as face blurring or anonymity: “I would be more comfortable if my face was blurred for privacy reasons” (P10).

  6. Technical infrastructure

Participants frequently mentioned technical issues. The most common were slow loading and limited search/filtering, which made specific videos hard to find: “The videos take too long to load, and it’s tough to find the specific one I need” (P1). Several students also noted the absence of personal video management: “I wish there was a profile section for my videos” (P4).

Students offered usability suggestions, including video speed control, bookmarks, and enhanced filtering; one also proposed a peer-tutor system: “A fast-forward option and a peer tutor system would be useful” (P9).

This theme appeared in both facilitator and barrier categories: while students valued accessibility and flexibility, performance and navigation limits constrained use.

These findings suggest that Medifeeding contributes meaningfully to skill acquisition and peer learning, but its impact is constrained by technical limitations and the need for improved content moderation. Educational benefits were widely recognized, with self-review and peer learning the most valued features. Peer interaction was perceived as beneficial but also raised concerns about misinformation. A moderation system (AI or faculty) could increase the reliability of peer evaluations. Improvements in the technical infrastructure are needed, especially in video filtering, bookmarking, and loading times. Adding incentive mechanisms (badges, leaderboards) may enhance user engagement and adoption of the tool.

Integration of findings

To synthesise the quantitative and qualitative strands, we constructed a joint display (Table 2) that aligns descriptive statistics for each feasibility domain with representative quotations from interviews and focus-group discussions. The display reveals strong convergence in the acceptability domain—90% of students rated the tool educationally beneficial, echoed by comments emphasising error awareness—while partial divergence appears in the usability domain, where a high mean usability score (4.13/5) contrasts with concerns about slow video uploads. These integrated meta-inferences informed immediate design refinements (clearer feedback prompts, faster upload speeds) and serve as progression criteria for a subsequent full-scale trial. Table 2 presents a joint display that maps each research question to feasibility domains, side-by-side quantitative and qualitative results, and resulting design decisions (adapted from Aschbrenner et al.34).

Table 2.

Joint display integrating quantitative and qualitative pilot-feasibility findings for Medifeeding (structure adapted from Aschbrenner et al.34).

RQ* | Feasibility domain(s)** | Quant (N = 31) | Qual (n = 24 int.; n = 5 FG)*** | Meta‑inference → Design decision
RQ1—Improvements | Impl; Proc; Dem

Friedman χ²(9) = 86.162, p = 9.56 × 10⁻¹⁵; W = 0.309

Top priority: “Educationally beneficial” (MR≈2.13; med = 1)

Wilcoxon (Holm): 18/45 sig

“Uploads slow; hard to find.” (Int.)

“Hybrid moderation needed.” (FG)

Optimize technical infrastructure (upload/streaming/search); add search and bookmarks; adopt hybrid moderation
RQ2—Benefits and limitations | Acc; Usa; Impl

Benefit: 28/31 (90%)

Overall rating: 4.13/5; 90% = 4–5

“I noticed my errors.” (Int.)

“Recall before OSCE.” (FG)

Keep formative focus; improve usability performance
RQ3—Utilization patterns | Proc; Intg | No usage logs (pilot)

“Pre‑class viewing helps.” (FG)

“Criteria‑based fixes.” (Int.)

Embed pre‑class tasks; provide exemplar clips

*RQ, research question.

**Acc, acceptability; Usa, usability; Dem, demand; Impl, implementation/practicality; Proc, process; Intg, integration.

***FG, focus group; Int., interviews.

Both quantitative and qualitative data indicate that Medifeeding is a promising tool for assessing students’ clinical skills, detecting errors, and improving skills through peer learning. The quantitative data demonstrate the educational benefit of the app, while the qualitative data support this benefit and also highlight areas requiring technical and moderation improvements, allowing a balanced analysis of Medifeeding’s strengths and limitations. Together, the two strands show that users’ expectations rest not only on current functionality and educational benefit but also on improvements to the app’s technical performance, moderation mechanisms, and range of usage scenarios, clarifying where development effort should focus to enhance the user experience.

Overall, Medifeeding shows considerable potential as an educational resource; realizing that potential will require improvements to the technical infrastructure, moderation, and engagement mechanisms.
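The rank statistics reported in the joint display (Friedman χ²(9) = 86.162, Kendall’s W = 0.309, Holm-adjusted pairwise Wilcoxon tests) fit together in a simple pipeline. The sketch below, using SciPy, illustrates that pipeline on simulated rankings (the individual-level data are not public); only the derivation of W from the reported χ² reproduces the paper’s value exactly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data: 31 students each force-rank 10 attributes (1 = highest priority).
n_students, n_attrs = 31, 10
ranks = np.array([rng.permutation(n_attrs) + 1 for _ in range(n_students)])

# Friedman test across the 10 attribute columns (omnibus test of rank differences).
chi2_sim, p_sim = stats.friedmanchisquare(*ranks.T)

# Kendall's W follows directly from the Friedman chi-square: W = chi2 / (N * (k - 1)).
def kendalls_w(chi2, n, k):
    return chi2 / (n * (k - 1))

# With the reported chi2(9) = 86.162 and N = 31, k = 10, this reproduces W = 0.309.
print(round(kendalls_w(86.162, 31, 10), 3))  # → 0.309

# Holm step-down adjustment for the 45 pairwise Wilcoxon comparisons.
def holm(pvals):
    m = len(pvals)
    order = np.argsort(pvals)
    adj, running_max = np.empty(m), 0.0
    for step, idx in enumerate(order):
        running_max = max(running_max, (m - step) * pvals[idx])
        adj[idx] = min(1.0, running_max)
    return adj

pairs = [(i, j) for i in range(n_attrs) for j in range(i + 1, n_attrs)]
raw_p = [stats.wilcoxon(ranks[:, i], ranks[:, j]).pvalue for i, j in pairs]
adj_p = holm(np.array(raw_p))
print(f"{(adj_p < 0.05).sum()} of {len(pairs)} pairwise comparisons significant (simulated data)")
```

Because Kendall’s W equals χ²/(N(k − 1)), the reported χ²(9) = 86.162 with N = 31 raters and k = 10 attributes gives 86.162/279 ≈ 0.309, matching Table 2; a W in this range indicates modest but non-trivial agreement among students.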

Discussion

This study explored medical students’ perceptions of the Medifeeding digital peer-feedback tool. Quantitative data from the online form indicated that Medifeeding is educationally useful, provides a novel learning experience, is perceived as original, and is digitally user-friendly. Qualitative analysis yielded six core themes—educational benefit, peer interaction, moderation and trust, new usage scenarios, awareness and motivation, and technical infrastructure—which largely paralleled the survey trends. The joint display that integrated both strands (Table 2) confirmed this strong convergence on overall acceptability while exposing a partial divergence on usability: students valued the concept yet voiced frustration with slow video upload and limited navigation. In other words, Medifeeding appears to foster greater learning autonomy, skill performance and peer interaction, but the tool still needs tighter moderation, smoother operation and stronger incentives. Ensuring consistent evaluation criteria within the tool will therefore make the learning process not only systematic but also sufficiently reflective to help students enhance their clinical competence.

These findings are consistent with earlier studies on digital learning in medical education, which showed that video-based collaborative learning improves the acquisition and retention of procedural skills as well as self-directed learning8,30,60–65.

Unlike earlier studies, our findings raise questions about the validity of peer feedback and the role of artificial intelligence in moderation, thereby extending the current debate on trust and credibility in digital learning environments. Furthermore, since Medifeeding’s target population consists of Generation Z students, its efficacy depends on its fit with the pedagogical traits of this generation, including the use of mobile devices, game elements, and instantaneous feedback11,66,67.

Educational benefit and skill retention

The use of videos in self-training and clinical training is well documented. Kononowicz et al.61 argued that video recordings can help students evaluate their performance and enhance their metacognition. Similarly, McGee et al.65 reported that medical students who learned clinical skills from digital tools performed as well as or better than those who learned in traditional face-to-face groups. Our results are in line with these findings: Medifeeding users cited self-review and error recognition as the main benefits. Additionally, Burgess et al.63,64 noted that peer learning, particularly from senior students, enhances clinical skills; accordingly, students in our study reported that procedural videos made by upper-year students were very useful for skill acquisition. Nevertheless, dependence on peer rather than expert feedback remains debatable. Bokken et al.68 stressed the need to ensure that peer assessments are of high quality, a concern participants in this study also raised about Medifeeding, namely the risk of misinformation.

Peer interaction and collaborative learning

Peer-assisted learning (PAL) is widely considered an affordable and engaging way of delivering medical education63,64,69,70. Our study revealed that peer videos increase clinical confidence and procedural knowledge. However, problems related to the reliability of peer assessment remain unresolved: the literature shows that students may overrate or inconsistently rate their peers4, a problem observed in Medifeeding as well. Prados-Carmona et al.71 explained that, without a faculty member monitoring the cooperative learning environment, the quality of feedback may vary. Although Medifeeding did not employ artificial intelligence, these issues could be addressed through more stringent validation processes—such as AI-assisted content assessment and faculty moderation—to enhance the reliability of peer learning environments.

Moderation, trust, and the risk of misinformation

One of the biggest issues in using digital learning tools is ensuring the accuracy of content and trust72. Our findings highlight the need for effective moderation; however, the pilot prototype evaluated here did not implement any AI functionality. While recent literature discusses AI-assisted curation and feedback to enhance credibility72, others caution that overreliance on AI may undermine learner autonomy and engagement73. Accordingly, we position AI as a potential direction rather than a feature tested in this study. A cautious hybrid model evaluating AI-assisted moderation alongside faculty validation may improve reliability without jeopardizing student participation.

Technical infrastructure: strengths and limitations

Technical usability remains a critical factor in the effectiveness of digital learning tools. Cook and Ellaway74 stressed the importance of user-friendly interfaces for the adoption of e-learning. Consistent with this, the major barriers our students identified were slow video loading, a lack of filtering options, and difficulty accessing personal content. Participants also requested personalization features such as playback-speed control and bookmarking, in line with Merrill’s75 instructional design principles for adaptive learning environments tailored to user preferences. Research on post-COVID-19 digital education has likewise highlighted the impact of technical constraints on learning efficiency in medical e-learning tools76–78. The usability and accessibility of Medifeeding are therefore crucial to meeting Generation Z’s digital learning expectations: according to Seemiller and Grace11, Twenge66 and Shorey et al.67, Gen Z learners expect highly interactive, mobile-friendly and gamified learning experiences.

Implications for medical education

Structured peer feedback: Medifeeding is based on the peer-learning concept; however, unverified information presented by peers is a major issue. Mixed models (faculty and AI) can help enhance the quality of content while leaving students in charge of when and how they learn.

AI-assisted moderation: Some studies suggest that the use of AI in providing feedback may enhance the credibility of the information provided in the feedback. However, there is a need to consider ethical issues such as student autonomy and trust in the system73. Since the target population of Medifeeding comprises Generation Z students, the effectiveness of the digital tool can be enhanced by aligning the features of the tool to meet their learning needs. Research has shown that Gen Z learners prefer practical, mobile, and game-based learning11,66,67. Real-time AI feedback, a better user experience and reward-based navigation and interaction may also enhance the achievement of learning objectives.

Gamification and incentives: The results of the study show that, for example, competitive and reward-based systems (such as leaderboards and badges) have the potential to enhance students’ engagement, as suggested by Ryan and Deci79 in their self-determination theory.

Usability enhancements: Technical improvements such as faster video loading, better search functions, and improved accessibility would support students’ learning. Generation Z learners are known to prefer easy, simple, and enjoyable learning processes11,66,67. Medifeeding should therefore include playback-speed control, search filters, and gamified incentives to enhance the user experience.

Limitations and future research

This pilot study investigated how medical students perceive the Medifeeding digital tool and how digital peer feedback influences their learning of clinical skills. The results indicate that Medifeeding is useful for peer interaction, self-assessment, and recognition of errors. As with any research, this study has limitations. First, it was conducted within a single institution with a self-selected volunteer sample and self-reported outcomes, so the findings may have limited generalizability; expanding future research to multiple institutions would enhance external validity. Second, the study captures only short-term perceptions, leaving the long-term effects of Medifeeding on clinical performance unknown; future research should examine skill retention over time and the sustained impact of digital peer feedback. Third, the study focuses solely on students’ perceptions; incorporating faculty viewpoints in future studies could provide a more comprehensive evaluation of Medifeeding’s potential. Building on the present pilot dataset, we will refine Medifeeding and conduct a pre-registered, assessor-blinded cluster randomized controlled trial directly comparing Medifeeding with face-to-face peer support. The primary outcome will be OSCE performance, with secondary outcomes of skill retention (8–12-week follow-up) and reflective capacity; we will also prospectively quantify student workload and core resource requirements.

This pilot did not implement any AI-based feedback or moderation; references to AI reflect participant suggestions and directions for subsequent work. In future iterations that prototype AI-assisted tools, potential risks—accuracy, reliability, bias, transparency, and impacts on learner autonomy—should be evaluated prospectively. We recommend comparative trials of AI-assisted versus faculty-only moderation under pre-specified governance (human oversight, audit logs, data protection, and fairness checks), with outcomes reported for trust and learning.

Conclusion

This study underscores the potential of Medifeeding as a digital peer-feedback system that can support medical students’ clinical skill mastery. By enabling reflective evaluation and video feedback, Medifeeding helps bridge the gap between theoretical knowledge and actual practice. To fully leverage the tool, however, issues such as the validity of peer reviews, the risk of misinformation, and low student engagement must be addressed. Future iterations of Medifeeding or similar applications may employ a hybrid model combining faculty oversight with AI-assisted tools to mitigate misinformation and enhance trust (AI was not used in the present pilot), alongside gamification techniques to increase student motivation and engagement. In addition, technical modifications to the system’s navigation and graphical user interface are needed to enhance the overall user experience.

Although Medifeeding shows favorable prospects, particularly for Gen Z medical students who appreciate hands-on, contemporary learning, additional investigation of its longitudinal effects on clinical performance, usability, and feedback validation is warranted. If Medifeeding proves to be a step toward the digitization of medical education, further work on content validation, engagement mechanisms, and technical infrastructure will be needed for it to succeed.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary Material 2 (19.9KB, docx)
Supplementary Material 3 (215.5KB, jpeg)
Supplementary Material 4 (369.1KB, png)
Supplementary Material 5 (676.2KB, png)
Supplementary Material 6 (834.1KB, png)
Supplementary Material 7 (808.6KB, jpg)
Supplementary Material 8 (186.2KB, png)
Supplementary Material 9 (24.2KB, docx)
Supplementary Material 10 (492.2KB, png)
Supplementary Material 11 (154.8KB, pdf)

Acknowledgements

The authors would like to thank all students who participated in the study.

Abbreviations

AI

Artificial intelligence

ÇOMÜ

Çanakkale Onsekiz Mart University

FG

Focus group discussion

INT

Interviews

IV

Intravenous

OSCE

Objective structured clinical examination

PAL

Peer-assisted learning

PFM

Peer feedback marking

UÇEP

Ulusal Çekirdek Eğitim Programı (National Core Curriculum—Türkiye)

VBPF

Video-based peer feedback

Author contributions

Mustafa Onur Yurdal and Remzi Y. Kıncal developed the digital feedback tool and collected the data, MOY&RYK analyzed the data, and each author took equal responsibility for writing, revising and editing the manuscript. All the authors read and approved the final manuscript.

Funding

This work was supported by the Çanakkale Onsekiz Mart University Scientific Research Coordination Unit (Project number: SDK-2024-4838).

Data availability

The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Declarations

Competing interests

The authors declare no competing interests.

Ethical approval and consent to participate

Written informed consent to participate in the study was obtained from all participants in accordance with the Declaration of Helsinki. The study was conducted in full compliance with applicable ethical guidelines and was approved by the Scientific Research Ethics Committee of Canakkale Onsekiz Mart University on October 26, 2023 (Decision No. 13/58) and subsequently authorized by the Faculty of Medicine at Canakkale Onsekiz Mart University on November 8, 2023 (Document No. E-30566003-604.02-2300274597).

Consent for publication

All participants provided written informed consent for visual, video, and audio recordings as well as for the publication of their personal and clinical details, including any identifying images. The signed consent forms are securely archived and available upon request.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Grantcharov, T. P. & Reznick, R. K. Teaching procedural skills. BMJ336(7653), 1129–1131. 10.1136/bmj.39517.686956.47 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Ramani, S. & Leinster, S. AMEE Guide no. 34: Teaching in the clinical environment. Med. Teach.30(4), 347–364. 10.1080/01421590802061613 (2008). [DOI] [PubMed] [Google Scholar]
  • 3.Norcini, J. et al. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Med. Teach.33(3), 206–214. 10.3109/0142159X.2011.551559 (2011). [DOI] [PubMed] [Google Scholar]
  • 4.Lerchenfeldt, S. & Taylor, T. A. H. Best practices in peer assessment: Training tomorrow’s physicians to obtain and provide quality feedback. Adv. Med. Educ. Pract.11, 571–578. 10.2147/AMEP.S250761 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Henderson, M., Ryan, T. & Phillips, M. The challenges of feedback in higher education. Assess Eval. High Educ.10.1080/02602938.2019.1599815 (2019). [Google Scholar]
  • 6.Killingback, C., Drury, D., Mahato, P. & Williams, J. Student feedback delivery modes: A qualitative study of student and lecturer views. Nurse Educ. Today.84, 104237. 10.1016/j.nedt.2019.104237 (2020). [DOI] [PubMed] [Google Scholar]
  • 7.Paterson, C., Paterson, N., Jackson, W. & Work, F. What are students’ needs and preferences for academic feedback in higher education? A systematic review. Nurse Educ. Today.85, 104236. 10.1016/j.nedt.2019.104236 (2020). [DOI] [PubMed] [Google Scholar]
  • 8.Yoong, S. Q. et al. Using peer feedback to enhance nursing students’ reflective abilities, clinical competencies, and sense of empowerment: A mixed-methods study. Nurse Educ Pract.69, 103623. 10.1016/j.nepr.2023.103623 (2023). [DOI] [PubMed] [Google Scholar]
  • 9.Kelly, E. & Richards, J. B. Medical education: Giving feedback to doctors in training. BMJ366, l4523. 10.1136/bmj.l4523 (2019). [DOI] [PubMed] [Google Scholar]
  • 10.Shafian, S. et al. The feedback dilemma in medical education: Insights from medical residents’ perspectives. BMC Med. Educ.24(1), 424. 10.1186/s12909-024-05398 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Seemiller, C. & Grace, M. Generation Z Goes to College (Jossey-Bass, 2016). [Google Scholar]
  • 12.Field, M., Burke, J. M., McAllister, D. & Lloyd, D. M. Peer-assisted learning: A novel approach to clinical skills learning for medical students. Med. Educ.41(4), 411–418. 10.1111/j.1365-2929.2007.02713.x (2007). [DOI] [PubMed] [Google Scholar]
  • 13.Tripodi, N., Feehan, J., Wospil, R. & Vaughan, B. Twelve tips for developing feedback literacy in health professions learners. Med. Teach.43(8), 960–965. 10.1080/0142159X.2020.1839035 (2021). [DOI] [PubMed] [Google Scholar]
  • 14.Grau, T. C. Enhancing Feedback by Fostering Resident Feedback Literacy [dissertation]. (University of Pittsburgh, Pittsburgh (PA), 2022). https://www.proquest.com/dissertations-theses/enhancing-feedback-fostering-resident-literacy/docview/2714870243/se-2?accountid=15572.
  • 15.Hoo, H. T., Tan, K. & Deneen, C. Negotiating self-and peer-feedback with the use of reflective journals: An analysis of undergraduates’ engagement with feedback. Assess Eval. High. Educ.45(3), 431–446. 10.1080/02602938.2019.1665166 (2020). [Google Scholar]
  • 16.Sopka, S. et al. Peer video feedback builds basic life support skills: A randomized controlled noninferiority trial. PLoS ONE16(7), e0254923. 10.1371/journal.pone.0254923 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Tanveer, M. A. et al. Peer teaching in undergraduate medical education: What are the learning outputs for the student-teachers? A systematic review. Adv. Med. Educ. Pract.14, 723–739. 10.2147/AMEP.S401766 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Bandura, A. & Walters, R. H. Social Learning Theory Vol. 1, 141–154 (Prentice Hall, Englewood Cliffs, 1977). [Google Scholar]
  • 19.Vygotsky, L. S. & Cole, M. Mind in Society: Development of Higher Psychological Processes (Harvard University Press, 1978). [Google Scholar]
  • 20.Zimmerman, B. J. Becoming a self-regulated learner: An overview. Theory Pract.41(2), 64–70. 10.1207/s15430421tip4102_2 (2002). [Google Scholar]
  • 21.Chick, R. C. et al. Using technology to maintain the education of residents during the COVID-19 pandemic. J. Surg. Educ.77(4), 729–732. 10.1016/j.jsurg.2020.03.018 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Torda, A. J., Velan, G. & Perkovic, V. The impact of the COVID-19 pandemic on medical education. Med. J. Aust.213(4), 188–188. 10.5694/mja2.50705 (2020). [DOI] [PubMed] [Google Scholar]
  • 23.Yurdal, M. O., Sahin, E. M., Aytug Kosan, A. M. & Toraman, C. Development of medical school students’ attitudes toward online learning scale and its relationship with e-learning styles. Turk. Online J. Distance Educ.22(3), 310–325. 10.17718/tojde.961855 (2021). [Google Scholar]
  • 24.Papapanou, M. et al. Medical education challenges and innovations during COVID-19 pandemic. Postgrad Med. J.98(1159), 321–327. 10.1136/postgradmedj-2021-140032 (2022). [DOI] [PubMed] [Google Scholar]
  • 25.Vaughn, C. J. et al. Peer video review and feedback improve performance in basic surgical skills. Am. J. Surg.211(2), 355–360. 10.1016/j.amjsurg.2015.08.034 (2016). [DOI] [PubMed] [Google Scholar]
  • 26.Lehmann, M. et al. Influence of expert video feedback, peer video feedback, standard video feedback and oral feedback on undergraduate medical students’ performance of basic surgical skills. Creat. Educ.9(8), 1221. 10.4236/ce.2018.98091 (2018). [Google Scholar]
  • 27.Herrmann-Werner, A. et al. Face yourself! Learning progress and shame in different approaches of video feedback: A comparative study. BMC Med. Educ.19(1), 1–8. 10.1186/s12909-019-1519-9 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Makrides, A. & Yeates, P. Memory, credibility and insight: How video-based feedback promotes deeper reflection and learning in objective structured clinical exams. Med. Teach.44(6), 664–671. 10.1080/0142159X.2021.2020232 (2022). [DOI] [PubMed] [Google Scholar]
  • 29.Mitchell, O., Cotton, N., Leedham-Green, K., Elias, S. & Bartholomew, B. Video-assisted reflection: Improving OSCE feedback. Clin. Teach.18(4), 409–416. 10.1111/tct.13354 (2021). [DOI] [PubMed] [Google Scholar]
  • 30.Zhang, H. et al. Effectiveness and quality of peer video feedback in health professions education: A systematic review. Nurse Educ. Today.109, 105203. 10.1016/j.nedt.2021.105203 (2022). [DOI] [PubMed] [Google Scholar]
  • 31.Gamboa, O. A., Agudelo, S. I., Maldonado, M. J., Leguizamón, D. C. & Cala, S. M. Evaluation of two strategies for debriefing simulation in the development of skills for neonatal resuscitation: A randomized clinical trial. BMC Res. Notes.11(1), 1–5. 10.1186/s13104-018-3831-6 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.UÇEP-2020 UCG, Ulusal Cep-2020 UYVYCG, Ulusal Cep-2020 DSBBCG. Medical faculty—national core curriculum 2020. TED. 2020;19(57–1), 1–146. 10.25282/ted.716873.
  • 33.Moonen, J. A three-space design strategy for digital learning material. Educ. Technol.40(2), 26–32 (2000). [Google Scholar]
  • 34.Aschbrenner, K. A. et al. Applying mixed methods to pilot feasibility studies to inform intervention trials. Pilot. Feasibility Stud.8, 217. 10.1186/s40814-022-01178-x (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Robinson, R. S. Purposive sampling. In Encyclopedia of Quality of Life and Well-Being Research. (Springer International Publishing, Cham, 2024) 5645–5647. 10.1007/978-3-031-17299-1_2337.
  • 36.Campbell, S. et al. Purposive sampling: Complex or simple? Research case examples. J. Res. Nurs.25(8), 652–661. 10.1177/1744987120927206 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Etikan, I., Musa, S. A. & Alkassim, R. S. Comparison of convenience sampling and purposive sampling. Am. J. Theor. Appl. Stat.5(1), 1–4. 10.11648/j.ajtas.20160501.11 (2016). [Google Scholar]
  • 38.Suen, L. J. W., Huang, H. M. & Lee, H. H. A comparison of convenience sampling and purposive sampling. Hu Li Za Zhi61(3), 105. 10.6224/JN.61.3.105 (2014). [DOI] [PubMed] [Google Scholar]
  • 39.Palinkas, L. A. et al. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm. Policy Ment. Health.42, 533–544. 10.1007/s10488-013-0528-y (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Miles, M. B., Huberman, A. M. & Saldaña, J. Qualitative data analysis: A methods sourcebook (SAGE Publications, Inc, 2014). [Google Scholar]
  • 41.Merriam, S. B. & Tisdell, E. J. Qualitative Research: A Guide to Design and Implementation (Jossey-Bass, 2016). [Google Scholar]
  • 42.Frey, B. B. The SAGE Encyclopedia of Research Design (Sage Publications, Inc, 2022). [Google Scholar]
  • 43.O’Connor, S. & Andrews, T. Smartphones and mobile applications (apps) in clinical nursing education: A student perspective. Nurse Educ. Today.69, 172–178. 10.1016/j.nedt.2018.07.013 (2018). [DOI] [PubMed] [Google Scholar]
  • 44.Cook, A. Using Interactive Learning Activities to Address Challenges of Peer Feedback Systems [dissertation]. (Carnegie Mellon University, Pittsburgh (PA), 2019). Retrieved from https://www.proquest.com/dissertations-theses/using-interactive-learning-activities-address/docview/2241621187/se-2.
  • 45.Chandran, V. P. et al. Mobile applications in medical education: A systematic review and meta-analysis. PLoS ONE17(3), e0265927. 10.1371/journal.pone.0265927 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Gladman, T. et al. A tool for rating the value of health education mobile apps to enhance student learning (MARuL): Development and usability study. JMIR Mhealth Uhealth.8(7), e18015. 10.2196/18015 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Landis, J. R. & Koch, G. G. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics33(2), 363–374. 10.2307/2529786 (1977). [PubMed] [Google Scholar]
  • 48.Şencan, H. Faktör analizi ve geçerlilik. Geçerlilik ve Güvenilirlik.1, 355–414 (2005). [Google Scholar]
  • 49.Bilgen, Ö. B. & Doğan, N. Puanlayıcılar arası güvenirlik belirleme tekniklerinin karşılaştırılması. J. Meas. Eval. Educ. Psychol.8(1), 63–78. 10.21031/epod.294847 (2017). [Google Scholar]
  • 50.Fraenkel, J. R., Wallen, N. E. & Hyun, H. H. How to Design and Evaluate Research In Education (McGraw Hill, 2012). [Google Scholar]
  • 51.Hennink, M. Focus Group Discussions: Understanding Qualitative Research. (Oxford University Press, New York, 2014). 10.1093/acprof:osobl/9780199856169.001.0001.
  • 52.Ary, D., Jacobs, L. C. & Sorensen, C. K. Introduction to Research in Education (Wadsworth Cengage Learning, California, 2010). [Google Scholar]
  • 53.Falchikov, N. Peer feedback marking: Developing peer assessment. Innov. Educ. Train. Int.32(2), 175–187. 10.1080/1355800950320212 (1995). [Google Scholar]
  • 54.Tripp, S. D. & Bichelmeyer, B. Rapid prototyping: An alternative instructional design strategy. Educ. Technol. Res. Dev.38(1), 31–44. 10.1007/BF02298246 (1990). [Google Scholar]
  • 55.Jones, T. & Richey, R. C. Rapid prototyping methodology in action: A development study. Educ. Technol. Res. Dev.48(2), 63–80. 10.1007/BF02313401 (2000). [Google Scholar]
  • 56.Hung, W. C., Smith, T. J., Harris, M. S. & Lockard, J. Development research of a teachers’ educational performance support system: The practices of design, development, and evaluation. Educ. Technol. Res. Dev.58, 61–80. 10.1007/s11423-007-9080-3 (2010). [Google Scholar]
  • 57.Merriam, S. B. & Tisdell, E. J. Qualitative Research: A Guide to Design and Implementation. (Jossey-Bass, A Wiley Brand, California, 2016).
  • 58.Mayring, P. Qualitative content analysis: Theoretical background and procedures. In Bikner-Ahsbahs, A., Knipping, C. & Presmeg, N (eds). Approaches to Qualitative Research in Mathematics Education—Examples of Methodology and Methods 365–380 (Springer, 2015). 10.1007/978-94-017-9181-6.
  • 59.Strauss AL. Qualitative Analysis for Social Scientists. (Cambridge University Press, New York, 1987). 10.1017/CBO9780511557842.

Associated Data


Supplementary Materials

Supplementary Material 2 (19.9KB, docx)
Supplementary Material 3 (215.5KB, jpeg)
Supplementary Material 4 (369.1KB, png)
Supplementary Material 5 (676.2KB, png)
Supplementary Material 6 (834.1KB, png)
Supplementary Material 7 (808.6KB, jpg)
Supplementary Material 8 (186.2KB, png)
Supplementary Material 9 (24.2KB, docx)
Supplementary Material 10 (492.2KB, png)
Supplementary Material 11 (154.8KB, pdf)

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.


Articles from Scientific Reports are provided here courtesy of Nature Publishing Group