Abstract
Introduction
Online multiple-choice question (MCQ) quizzes are popular in medical education because they are easy to access and lend themselves to test-enhanced learning. However, a general lack of motivation among students often results in decreasing usage over time. We aim to address this limitation by developing Telegram Education for Surgical Learning and Application Gamified (TESLA-G), an online platform for surgical education that incorporates game elements into conventional MCQ quizzes.
Methods and analysis
This online, pilot randomised controlled trial will be conducted over 2 weeks. Fifty full-time undergraduate medical students from a medical school in Singapore will be recruited and randomised into an intervention group (TESLA-G) and an active control group (non-gamified quizzing platform) with a 1:1 allocation ratio, stratified by year of study.
We will evaluate TESLA-G in the area of endocrine surgery education. Our platform is designed based on Bloom’s taxonomy of learning domains: questions are created in blocks of five questions per endocrine surgery topic, with each question corresponding to one level on Bloom’s taxonomy. This structure promotes mastery while boosting student engagement and motivation. All questions are created by two board-certified general surgeons and one endocrinologist, and validated by the research team. The feasibility of this pilot study will be determined quantitatively by participant enrolment, participant retention and degree of completion of the quizzes. The acceptability of the intervention will be assessed quantitatively by a postintervention learner satisfaction survey consisting of a system satisfaction questionnaire and a content satisfaction questionnaire. The improvement of surgical knowledge will be assessed by comparing the scores of preintervention and postintervention knowledge tests, which consist of separately created questions on endocrine surgery. Retention of surgical knowledge will be measured using a follow-up knowledge test administered 2 weeks postintervention. Finally, qualitative feedback from participants regarding their experience will be obtained and thematically analysed.
Ethics and dissemination
This research is approved by Singapore Nanyang Technological University (NTU) Institutional Review Boards (Reference Number: IRB-2021-732). All participants will be expected to read and sign a letter of informed consent before they are considered as recruited into the study. This study poses minimal risk to participants. Study results will be published in peer-reviewed open-access journals and presented in conference presentations.
Trial registration number
Keywords: medical education & training, education & training (see medical education & training), surgery, general endocrinology
Strengths and limitations of this study.
This study contributes to the growing body of literature evaluating the use of test-based learning, messaging apps and gamification in medical education.
The gamified, Telegram-delivered, surgical education-focused, quizzing intervention in this study will be structured in line with Bloom’s taxonomy.
We will use quantitative and qualitative approaches to assess our intervention with the aim of informing a future randomised controlled trial.
A potential limitation of this study is that 14 days of intervention may be insufficient to observe improvements in surgical knowledge.
The intervention will focus on endocrine surgery and the findings may not be generalisable to other surgical or medical subspecialties.
Introduction
Background and rationale
Multiple-choice question (MCQ) quizzes are a well-known and widely used medium for summative assessment, especially in medical education.1 2 Their ability to provide objective grading and immediate feedback also makes them popular tools for formative assessment.3 4 While MCQ quizzes tend to be associated with diagnosis or assessment, Roediger and Karpicke (2006) propose that such tests can be used to improve learning via test-enhanced learning.5 This effect has been increasingly explored in medical education.6 Randomised controlled trials (RCTs) have demonstrated how test-enhanced learning can increase acquisition and retention of new medical knowledge among medical students7 and healthcare professionals.8 A recent systematic review by Green et al (2018) also supports these findings.9
With the progressive use of technology in medical education, online learning is becoming increasingly common. Online learning has been found to be as effective as offline learning in medical education based on meta-analyses by Pei and Wu (2019) and Vaona et al (2018).10 11 Brame (2017) suggests that making MCQ quizzes available online would allow easy access to frequent practice, which could work synergistically with test-enhanced learning to promote student learning and long-term knowledge retention.12 This is further supported by studies that demonstrated increased examination performance after administering pre-exam online quizzes to undergraduate university students.13 14 Kibble (2007) also established a positive relationship between unsupervised formative online quizzes and academic performance among medical students.15
However, online MCQ quizzes are limited by a decline in quiz participation over time and an increase in drop-out rates.13 14 16 This was observed in several studies despite the initial enthusiasm and high take-up rate among the student body. Mitra and Barua (2015) and Johnson (2006) specifically attributed this high attrition to a general lack of motivation among the students to use the MCQ quizzes.13 16 We aim to address this limitation of low motivation among students using online MCQ quizzes by developing Telegram Education for Surgical Learning and Application-Gamified (TESLA-G). TESLA-G is an online platform for surgical education that incorporates game elements into conventional online MCQ quizzes.
Gamification can be defined as the use of game elements such as point systems, leaderboards and incentives in non-gaming contexts.17 An RCT by Barrio et al (2016) showed that the use of gamification in undergraduate education can boost motivation and interest.18 A recent review by Sandrone and Carlson (2021) further suggests that the use of gamification in medical education can promote engagement and motivation among learners.19 Additionally, systematic reviews20 21 suggest that the use of gamification for learning among healthcare professionals is as effective as other educational methods in promoting knowledge and expertise.
Nevin et al (2014) successfully incorporated gamification into medical education through Kaizen-IM, an online learning platform developed by the authors for internal medicine residents. They demonstrated that gamification elements such as a team-based scoring system, a leaderboard showing rankings and badges as incentives improved medical knowledge acquisition and retention. Participants also mentioned that the game elements, especially the leaderboard, were significant motivators in their usage of the platform.22 Notably, the use of mobile technology in gamified learning has been shown to be a motivating factor among students. Licorish et al (2018) found that the use of the gamified mobile platform Kahoot! among undergraduate students promoted engagement and motivation.23 This was attributed to students being highly proficient in mobile technology, which contributed to their enjoyment of applications and games built specifically for mobile platforms.23
Instant messaging applications have been used as supplementary tools in medical education,24 and several studies have shown an improvement in knowledge level through the use of these platforms,25–28 most notably WhatsApp. More recently, however, the increasingly popular messaging application Telegram has been trialled for medical education in the context of the COVID-19 pandemic.29 30 Telegram was the most downloaded app in the world in January 2021,31 and remained among the five most popular messaging apps globally in 2022.32 Not only is the app accessible on almost all computer and mobile platforms, but its growing use also makes it very likely that participants will already have it installed on their devices before the study commences. By tapping existing Telegram features and its well-documented Application Programming Interface, we will implement TESLA-G to provide flexible and convenient surgical education anytime and anywhere. Hence, our online, gamified quizzing platform TESLA-G will be delivered using Telegram with the aim of easier delivery of the intervention and greater uptake. We intend to evaluate our intervention in a stepwise manner in line with the UK Medical Research Council’s guidance for developing and evaluating complex interventions.33 In this pilot study, we will evaluate the feasibility and acceptability of delivering our intervention with the aim of informing a future RCT.
Objectives
The main objective of the pilot RCT is to evaluate the feasibility and acceptability of a gamified, online, Telegram-delivered quizzing platform TESLA-G compared with conventional MCQ quizzes for surgical education among medical students.
More specifically, we will investigate the following:
The feasibility of recruitment, specifically the duration required to complete the recruitment process, the recruitment strategy, randomisation and stratification strategy and participants’ retention rate throughout the proposed intervention period.
The acceptability of the intervention to medical students, in terms of its technical, pedagogical and educational content (surgical content).
The participants’ adherence to the intervention, in terms of the number of completed quiz questions, frequency of use, and average daily quiz completion rate.
The fidelity of the intervention protocol, in terms of whether the assessment materials, technical implementation of the intervention and study procedures are delivered successfully.
The participants’ experience of the intervention, by inviting a purposive sample of the medical students to share their views via semistructured interviews after the intervention.
Our secondary objectives are as follows:
To evaluate the effectiveness of TESLA-G compared with conventional MCQ quizzes in improving surgical knowledge by comparing the change in scores between the preintervention and postintervention tests.
To compare students’ satisfaction with TESLA-G compared with conventional MCQ quizzes using a postintervention satisfaction survey.
To evaluate the effectiveness of TESLA-G compared with conventional MCQ quizzes in the retention of surgical knowledge, determined by comparing the scores of a follow-up knowledge test administered 2 weeks after the postintervention test.
Methods and analysis
Trial design
We report this protocol in line with the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) recommendations.34 This is an online, pilot RCT with two parallel active groups: an intervention group and an active control group. Participants will be randomised into these two groups with a 1:1 allocation ratio stratified by year of study.
Study setting
This is an online study conducted on Telegram. We will recruit first-year to fifth-year medical students from a medical school in Singapore.
Eligibility criteria
Eligible participants must meet the following criteria:
At least 18 years of age.
Currently enrolled in a full-time 5-year undergraduate programme at the medical school leading to the Bachelor of Medicine and Bachelor of Surgery (MBBS).
Willing and able to provide consent (online supplemental file 1) for participating in the entire duration of the study including all prestudy and poststudy assessments.
Intervention
TESLA-G is a novel gamified quizzing platform aimed at improving surgical knowledge among undergraduate medical students. For the purpose of this study, we will evaluate the use of TESLA-G in the learning of endocrine surgery among medical students. Our platform is designed based on Bloom’s taxonomy of learning domains, a widely applied and researched framework for constructing learning objectives.35 Questions will be created in blocks, where each block will test a specific topic in endocrine surgery. Each block has five questions, and each question corresponds to one of the first five levels of Bloom’s taxonomy (Remember, Understand, Apply, Analyse and Evaluate) and to one level in the game (table 1). The questions are structured in this way to promote mastery of endocrine surgery while boosting student engagement and motivation.
Table 1.
Sample block of five questions corresponding to different Bloom’s taxonomy levels
Game level | Bloom’s taxonomy level | Description | Sample question |
Level 1 | Remember | Questions here encourage students to recognise and recall facts. | FNAC of the thyroid reveals a benign follicular nodule. What Bethesda category would this be in? |
Level 2 | Understand | Questions here motivate students to understand the meaning behind the memorised facts. | FNAC of the thyroid reveals a Bethesda category II follicular nodule. What is the probability of malignancy? |
Level 3 | Apply | Questions here allow students to apply their knowledge in a clinical setting. | A 31-year-old female was referred by a GP who noticed a goitre incidentally. Which of the following clinical features would be most suspicious of malignancy? |
Level 4 | Analyse | Questions here expect students to analyse the clinical presentation and use logical deduction to work out the differential diagnoses. | A 31-year-old female was referred by a GP who noticed a goitre incidentally. Further workup shows a solitary 2.5 cm nodule in the right thyroid. FNAC reveals a Bethesda category II follicular nodule. The patient’s thyroid function is normal. What is the most likely differential diagnosis? |
Level 5 | Evaluate | Based on a constellation of clinical findings, students are expected to critically examine and select the most appropriate investigations or management options. | A 31-year-old female was referred by a GP who noticed a goitre incidentally. Further workup shows a solitary 2.5 cm nodule in the right thyroid. FNAC reveals a Bethesda category II follicular nodule. The patient’s thyroid function is normal. What is the most appropriate management? |
FNAC, fine-needle aspiration cytology; GP, general practitioner.
For this study, we aim to create 56 blocks of five questions, totalling 280 questions. Each question in a five-question block will correspond to one level of Bloom’s taxonomy; hence, there will be an equal number of questions per level. All questions will be created by two board-certified general surgeons and one endocrinologist, and validated by the research team. A sketch of how such a block could be represented is shown below.
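To make the block structure concrete, the following is a minimal, illustrative sketch of one way a five-question block could be represented in code; the class and field names are our own assumptions, not the actual TESLA-G implementation.

```python
from dataclasses import dataclass
from enum import IntEnum

class BloomLevel(IntEnum):
    """The first five levels of Bloom's taxonomy, one per question in a block."""
    REMEMBER = 1
    UNDERSTAND = 2
    APPLY = 3
    ANALYSE = 4
    EVALUATE = 5

@dataclass
class Question:
    stem: str
    options: list          # answer options shown to the participant
    correct_index: int     # index of the correct option
    explanation: str       # explanation displayed after the question is answered
    level: BloomLevel

@dataclass
class QuestionBlock:
    """One endocrine surgery topic: exactly five questions, one per Bloom level."""
    topic: str
    questions: list        # five Question objects, levels 1-5

    def __post_init__(self):
        levels = sorted(q.level for q in self.questions)
        if levels != list(BloomLevel):
            raise ValueError("A block must contain exactly one question per Bloom level 1-5.")
```

With 56 such blocks, the equal number of questions per Bloom level (56 per level, 280 in total) follows directly from this one-question-per-level constraint.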
The aim of the game is for players to get as many points as they can before the timer runs out. Each game will feature only one block of five questions, allowing students to focus on a specific topic in endocrine surgery per game (figure 1).
Figure 1.
Flowchart showing the run-through of the game.
Below is a run-through of the game:
The game starts at level 1 with a 3 min countdown timer and three lives.
For every question answered correctly, students will be awarded points based on their speed and the question level. An explanation for the question will also be shown. They will then progress to the next level, up to level 5.
Consecutive correct answers will be rewarded via a point multiplier system. The longer their ‘streak’ of correct answers, the higher the multiplier. It should be noted that levelling up does not require consecutive correct answers.
If they get a question wrong, they lose one life, and the point multiplier system resets. They must then answer that question again until they get it correct.
The game ends if the level 5 question is answered correctly, if all three lives have been used, or if the timer runs out. The total score is then tabulated.
To ensure proper mastery of surgical concepts, the countdown timer will pause in between questions for students to refer to the explanation pop-up. Students will be given as much time as they need to read the explanation and understand the information presented before they move on to the next question. There is no score deduction for spending too much time reading the explanation after each question.
To promote competition-based learning, all students are ranked against each other based on their points, and a leaderboard that displays the top 10 students is sent to all students daily. We also allow students to track their own learning by providing a personalised dashboard for students to see their score history and progress. Finally, to promote consistent usage of TESLA-G, a bonus point multiplier is awarded for every consecutive day the game is played.
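As an illustration of the mechanics described above, the sketch below shows a per-question scoring loop. The base point value, speed bonus and streak multiplier increment are assumptions for illustration only; the protocol does not specify these parameters, and the `questions` argument is assumed to be the block's five questions ordered by Bloom level.

```python
LIVES_AT_START = 3
GAME_SECONDS = 3 * 60       # 3 min countdown; paused while explanations are displayed
BASE_POINTS = 100           # assumed base points per question level (not specified in the protocol)
STREAK_STEP = 0.5           # assumed multiplier increment per consecutive correct answer

def question_points(level, seconds_taken, streak):
    """Points for one correct answer, scaled by question level, answer speed and current streak."""
    speed_bonus = max(0.0, 1.0 - seconds_taken / GAME_SECONDS)   # faster answers earn more
    multiplier = 1.0 + STREAK_STEP * streak                      # resets after a wrong answer
    return round(BASE_POINTS * level * (1.0 + speed_bonus) * multiplier)

def play_block(questions, ask):
    """Play one block of five questions ordered by Bloom level 1-5.

    `ask(question)` presents the question and returns (is_correct, seconds_taken).
    The game ends when the level 5 question is answered correctly, all three
    lives are used, or the timer runs out; the total score is then returned.
    """
    score, lives, streak = 0, LIVES_AT_START, 0
    time_left = GAME_SECONDS
    for question in questions:                       # levels 1 to 5, in order
        answered = False
        while not answered:
            correct, seconds_taken = ask(question)
            time_left -= seconds_taken
            if time_left <= 0:
                return score                         # timer ran out
            if correct:
                score += question_points(question.level, seconds_taken, streak)
                streak += 1
                answered = True                      # advance to the next level
            else:
                lives -= 1
                streak = 0                           # wrong answer: lose a life, reset multiplier
                if lives == 0:
                    return score                     # all lives used
    return score                                     # level 5 answered correctly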
Timeline
The study flow is summarised in figure 2 and the logic model is shown in figure 3. Before any participants are recruited, all tests and surveys will first be piloted by all researchers involved in this study. Qualitative feedback will be independently obtained from at least three researchers, and tests and surveys will be amended accordingly.
Figure 2.
Study flow diagram. MCQ, multiple choice questions; TESLA-G, Telegram Education for Surgical Learning and Application Gamified.
Figure 3.
Logic model of the study. TESLA-G, Telegram Education for Surgical Learning and Application Gamified.
Preintervention
Fifty medical students (n=50) will be recruited for the study based on the eligibility criteria. After obtaining informed consent, all participants will be expected to complete a demographics survey. Basic information about name, gender, age and year of study in medical school will be collected.
The Telegram username and email address of each participant will also be collected and verified. This is mainly for disseminating information about the study, including important deadlines, instructions on how to access the quizzing platforms and links to the other surveys required for the study.
All participants will then complete a preintervention knowledge test comprising 20 MCQs on endocrine surgery over a 30 min period. This will determine the baseline surgical knowledge level among all participants.
Intervention
Participants will be verified to have completed all the above before being randomised into either the intervention group or the control group. Participants will then be provided with detailed instructions on using either TESLA-G for the intervention group or the conventional quizzing platform for the control group.
Participants in the intervention group will be provided access to TESLA-G as previously described. A link to access TESLA-G will be sent to participants from an automated Telegram bot; this access will be provided for 14 days.
Participants in the control group will be given access to a conventional quizzing platform, which will be a non-gamified version of TESLA-G. Whenever a participant enters the platform, a question stem and five options will be displayed. Once the participant selects an option, the correct answer will be shown together with an explanation. The platform will then send the next question, and the process continues until the participant exits the platform or all the questions have been answered. Similar to the intervention group, the questions will be queued in blocks, where each block corresponds to a specific topic in endocrine surgery. Within each block, questions will be randomised, that is, they will not be queued in order of increasing Bloom’s taxonomy level. Participants will also not be told which level of Bloom’s taxonomy each question is classified under. As in the intervention group, participants will be sent a link to access the platform from an automated Telegram bot, and they will be able to use the platform for 14 days.
For both the intervention and the control groups, all questions will be made available to all participants from the beginning of the study. If any participant does not use their quizzing platform for more than 48 hours, a Telegram message will be sent as a gentle reminder to continue participation. This would promote continuous usage of the quizzing platforms throughout the entire duration of the study.
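The 48-hour inactivity reminder could, for example, be implemented with the Telegram Bot API's sendMessage method, as in the sketch below; the bot token, the reminder wording and the way last-activity timestamps are stored are assumptions for illustration, not the study's actual implementation.

```python
import time
import requests

BOT_TOKEN = "<bot-token>"                     # placeholder; issued by Telegram's BotFather
API_URL = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
INACTIVITY_LIMIT = 48 * 60 * 60               # 48 hours, in seconds

REMINDER = ("Hi! You haven't attempted any quiz questions in the last 48 hours. "
            "Do continue with your assigned quizzing platform when you can.")

def send_reminders(last_activity):
    """Send a gentle reminder to every chat_id inactive for more than 48 hours.

    `last_activity` maps a participant's Telegram chat_id to the Unix timestamp
    of their most recent quiz interaction (assumed to be logged by the platform).
    """
    now = time.time()
    for chat_id, last_seen in last_activity.items():
        if now - last_seen > INACTIVITY_LIMIT:
            requests.post(API_URL, json={"chat_id": chat_id, "text": REMINDER}, timeout=10)
```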
Of note, throughout the usage of either quizzing platform, participants will be shown new and previously completed questions, for the purpose of mastering new knowledge and reinforcing past concepts, respectively.
Participants will use their assigned platform at their own pace and time. During this period, user metrics will be collected from both the intervention and the control groups including overall platform, per-participant and per question metrics as presented in table 2.
Table 2.
Game metrics to be collected from both groups
Overall platform metrics | Per-participant metrics | Per-question/block metrics |
Game performance | Participant engagement; participant performance | Question engagement; question performance |
A block refers to a question block of five questions, as previously described.
Finally, throughout the duration of the intervention, participants will be able to contact a designated study team member via Telegram or via email for any assistance with potential technical difficulties with using their quizzing platform.
Postintervention
After 14 days, all participants will complete a postintervention knowledge test which again comprises 20 MCQs on endocrine surgery over a 30 min period. This test will be of a similar standard to the preintervention test as described earlier. Improvement in surgical knowledge will be measured as the difference in knowledge test scores preintervention and postintervention.
All participants will also complete a postintervention learner satisfaction survey. This will be in the form of a Likert scale adapted from the System Usability Scale (SUS)36 and the Student Evaluation of Educational Quality (SEEQ) Questionnaire.37 The survey will determine whether students in the intervention group are more satisfied than those in the control group. To incentivise participants, the top three participants on the leaderboard will receive small tokens of appreciation in the form of a certificate and a custom badge reel with the TESLA-G logo.
After another 14 days, participants will be asked to do a follow-up knowledge test which once again comprises 20 MCQs on endocrine surgery over a 30 min period. At this point, participants in both the control and the intervention groups will not have access to their respective online quizzing platforms. This test will be of a similar standard to the postintervention test as described earlier. Any retention in surgical knowledge will be measured as the difference in knowledge test scores postintervention and follow-up.
After the follow-up knowledge test, 10 participants from the intervention group will be purposively selected for individual interviews. Two participants will be selected from each of the five academic years of the medical school: one who has completed at least 80% of the quizzes and one who has not. We aim to obtain qualitative feedback regarding the overall experience of TESLA-G, along with the benefits, drawbacks, receptiveness and usefulness of the platform as a supplement to surgical education. The interviews will be conducted based on the interview guide (online supplemental file 2), which has been adapted from a previous exploratory study on using Telegram for surgical education.38 The interviews will also be piloted on up to five students and/or research team members prior to the start of the study.
The interviews will be conducted individually, either online or in person, and will last 60 min. Written and verbal consent will be obtained before the interviews are recorded and transcribed into text data for qualitative analysis. The audio recordings will be transcribed by automated software and proofread to ensure that the transcripts are accurate. The completed transcripts will not contain any identifiers.
Outcome measures
A mixed-method approach will be used to establish the feasibility and acceptability of the intervention as primary outcomes. Our secondary outcomes will be improvement of surgical knowledge between the control and intervention group and potential adverse effects. Finally, qualitative feedback from the participants regarding their experience of the intervention will be thematically analysed.
Feasibility
The feasibility of the intervention will be assessed quantitatively as shown below. These are the goals for our intervention:
Enrolment of 50 participants in a month.
Retention of at least 75% of participants who are enrolled.
Completion of at least 80% of the quizzes.
Retention of participants will be quantified by the number of participants who complete both the preintervention knowledge test and the postintervention knowledge test. Success in achieving all three goals will indicate that it is definitely feasible to conduct a full-scale RCT while achieving two out of three goals will indicate that it is probably feasible. Achieving less than two goals will suggest that a full-scale RCT is not feasible with the current procedure.
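The two-out-of-three decision rule above can be expressed as a short check, as sketched below; the thresholds come directly from the three goals, while the counts in the example call are placeholders.

```python
def feasibility_verdict(enrolled, retained, quizzes_done, quizzes_total):
    """Apply the three pilot feasibility goals and the two-out-of-three decision rule."""
    goals_met = sum([
        enrolled >= 50,                                        # goal 1: 50 participants enrolled within a month
        (retained / enrolled >= 0.75) if enrolled else False,  # goal 2: at least 75% retention
        quizzes_done / quizzes_total >= 0.80,                  # goal 3: at least 80% of quizzes completed
    ])
    if goals_met == 3:
        return "definitely feasible"
    if goals_met == 2:
        return "probably feasible"
    return "not feasible with the current procedure"

# Example with placeholder counts:
print(feasibility_verdict(enrolled=50, retained=40, quizzes_done=230, quizzes_total=280))
```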
In addition to the above outcomes, we will also determine:
The feasibility of the randomisation procedure, to ensure an equal number of participants from each university year in each stratum of the intervention and control groups.
Amount of time on app and on task
Feasibility of the delivery method for the preassessment, postassessment and follow-up assessments
Number of app crashes
Number of app exits during task
Acceptability
The acceptability of the intervention and the study procedure will be assessed both quantitatively and qualitatively. Quantitative data will be collected via a postintervention learner satisfaction survey for both the control and intervention groups. This survey will consist of two parts—a system satisfaction questionnaire and a content satisfaction questionnaire.
The system satisfaction questionnaire measures the system acceptability score via the SUS.36 It is a 10-item questionnaire with a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). This widely used usability scale will be used to compare the relative usability of TESLA-G and our conventional quizzing platform based on a normalised score. For the SUS, we aim for an average score of at least 70, which corresponds to a grade B on the Sauro and Lewis39 curved grading scale.
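For reference, the sketch below applies the standard SUS scoring procedure (odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5 to give a 0–100 score); the example responses are invented.

```python
def sus_score(responses):
    """Standard SUS score for one respondent: 10 items rated 1 (strongly disagree) to 5 (strongly agree)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale.")
    odd = sum(r - 1 for r in responses[0::2])     # items 1, 3, 5, 7, 9 (positively worded)
    even = sum(5 - r for r in responses[1::2])    # items 2, 4, 6, 8, 10 (negatively worded)
    return (odd + even) * 2.5                     # rescale to 0-100

# The platform-level score is the mean across respondents; the target is an average of at least 70.
example_responses = [[4, 2, 5, 1, 4, 2, 4, 2, 5, 1], [3, 3, 4, 2, 4, 3, 3, 2, 4, 2]]
scores = [sus_score(r) for r in example_responses]
print(sum(scores) / len(scores))
```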
The content satisfaction questionnaire measures the content satisfaction score and is loosely adapted from the SEEQ Questionnaire.37 The SEEQ evaluates nine distinct components of teaching effectiveness with a 5-point Likert scale ranging from 1 (very poor) to 5 (very good). This validated instrument has been rigorously evaluated in higher education40 41 and has been used in clinical education.42 43 Reliability analysis will be conducted, and users’ scores will be triangulated with their input from the qualitative interviews to determine the reliability and validity of the questionnaire, respectively. This questionnaire will compare the relative benefit of content delivery between TESLA-G and the conventional quizzing platform. For this, we aim for a mean score of at least 6.0 for TESLA-G, with a significantly higher score when compared with the conventional quizzing platform.
Improvement of surgical knowledge
The difference in the improvement of surgical knowledge between the control and intervention groups will be determined by comparing the scores of the preintervention and postintervention knowledge tests. These tests consist of questions on endocrine surgery which will be created separately by two board-certified general surgeons and one endocrinologist. All questions will also be validated by the research team. It should be noted, however, that the study will not be sufficiently powered to identify the comparative effectiveness of the control and intervention groups in terms of improvement of surgical knowledge over time. Hence, the analysis will be conducted primarily to determine any potential adverse effects and any increase in surgical knowledge within each group, and secondarily between groups. This will be done using 95% CIs, assuming a small effect size of 0.2.44
Qualitative feedback
Data will be analysed thematically using the six-step approach outlined by Braun and Clarke.45 The interview transcripts obtained from the participants who used TESLA-G will be coded into various categories, guided by the overall question of how TESLA-G has been useful in improving surgical knowledge. Recurring themes will then be identified and substantiated with illustrative quotes. These themes, together with verbatim quotes, will be iteratively reviewed and incorporated into the discussion of the findings, as is typical of qualitative studies.
Sample size and recruitment
Fifty participants (n=50) will be recruited, that is, 25 participants per arm. This is based on a study by Whitehead et al (2015), which suggested that 25 participants per arm is optimal for studies with a small effect size (between 0.1 and 0.3) at 90% power.46 Additionally, a purposive sample of at least 10 medical students (n=10) will be invited to share their views of the intervention via semistructured interviews. This is based on consideration of the resources available.
Our study advertisement will be disseminated to all medical students from a medical school in Singapore via the respective Telegram group channels for each cohort. We will also advertise through advertisement posters and recruitment calls during regular lectures/seminars as well as through personal contacts. The study advertisement has a link to a secure registration form hosted by the research IT department. Every participant will be expected to fill in this registration form. This registration form will collect the name and email address of each participant. A letter of informed consent will be emailed to each participant, and he/she will be expected to provide consent in order to be considered as recruited for the study.
Blinding and randomisation
Participants will be partially blinded. They will not know if they have been given the conventional quizzing platform or TESLA-G. Instead, they will be told that they have been randomly allocated to one of two different quizzing platforms. All researchers involved in statistical analysis will be blinded as far as possible. They will not know which group is the control and which group is the intervention until the analysis is complete.
Only one researcher will be involved in dissemination of information to participants, and s/he will not be blinded. This is necessary because both quizzing platforms will have different instructions on accessing and using the platforms. It is important that the correct instructions are sent to each participant. This researcher will also be involved in answering any platform-related queries from participants throughout the study.
Participants will be stratified by their year of study. Following this, participants will be randomly allocated into either the intervention or the control group. Permuted block randomisation will be conducted for each stratum using a computerised random number generator to ensure a 1:1 allocation ratio and equal group sizes. To ensure allocation concealment, this randomisation process will be conducted and kept confidential by a trusted individual outside of the research team.
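A stratified permuted-block allocation of this kind could be generated as sketched below; the block size of four and the use of Python's random module are illustrative assumptions, and in this study the procedure will be carried out and kept confidential by an individual outside the research team.

```python
import random

def permuted_block_allocation(n_participants, block_size=4, seed=None):
    """1:1 allocation within permuted blocks for a single stratum (one year of study)."""
    assert block_size % 2 == 0, "block size must be even to keep a 1:1 ratio"
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        block = ["TESLA-G"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)                 # random order of assignments within the block
        allocations.extend(block)
    return allocations[:n_participants]

# Example: five strata (years 1-5) with 10 participants each, for 50 participants in total.
strata_allocations = {year: permuted_block_allocation(10, seed=year) for year in range(1, 6)}
```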
Statistical analysis
Analysis of quantitative data comparing TESLA-G with the conventional quizzing platform will be performed using commercial statistical software (SPSS for Windows, V.22.0, Chicago, IL, USA). All categorical variables will be described as percentages and compared by χ2 test. The primary outcomes will be analysed descriptively, reporting the means, SDs and interval estimates of variables relating to the feasibility and acceptability of the study. The secondary outcomes relating to the effectiveness of TESLA-G, such as the knowledge test scores and learner satisfaction scores for each group, will also be presented using descriptive statistics. Comparison of scores will be done with paired t-test analysis and repeated-measures analysis of variance, using 95% CIs.44
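As an illustration of the paired pre/post comparison, the sketch below computes the mean change in knowledge test scores with a t-based 95% CI and a paired t-test p value using SciPy; the score arrays are placeholders, and SPSS (not Python) will be used for the actual analysis.

```python
import numpy as np
from scipy import stats

def paired_change(pre, post, confidence=0.95):
    """Mean pre-to-post change with a t-based CI and a paired t-test p value."""
    pre, post = np.asarray(pre, dtype=float), np.asarray(post, dtype=float)
    diff = post - pre
    mean_change = diff.mean()
    sem = stats.sem(diff)
    half_width = stats.t.ppf((1 + confidence) / 2, df=diff.size - 1) * sem
    _, p_value = stats.ttest_rel(post, pre)
    return mean_change, (mean_change - half_width, mean_change + half_width), p_value

# Placeholder scores out of 20 for one group of five participants:
pre_scores = [10, 12, 9, 14, 11]
post_scores = [13, 14, 12, 15, 13]
print(paired_change(pre_scores, post_scores))
```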
Analysis of the qualitative feedback from the interviews will be performed using a parallel thematic analysis combining inductive and deductive approaches, continued until saturation is reached. Open coding of the transcripts will be performed by at least two independent coders. Qualitative data will then be thematically analysed using a selected qualitative data analysis software package. The codes identified will subsequently be categorised into themes. The number and frequency of responses will also be tabulated.
Data monitoring and harms
Participants will be encouraged to comply with the protocol as far as possible. Researchers will check that participants meet the submission deadlines for all preintervention and postintervention tests and surveys. Researchers will also ensure that all tests and surveys are completed in full.
Participants who do not comply with the study protocol (such as submitting tests late or submitting surveys with all items answered with the same response) will not be removed from the study as analysis will be done with an intention-to-treat approach.
This is a very low-risk study. Throughout this study, students will be encouraged to inform the research team and their university’s student support services if they experience any issues, harm or psychological distress. Such incidents will also be recorded and reviewed for future improvement. A data monitoring committee will not be needed.
Patient and public involvement
None.
Ethics and dissemination
Research ethics approval
This research has been approved by NTU Institutional Review Boards (Reference Number: IRB-2021-732).
Consent
All participants will be expected to read and sign a letter of informed consent (online supplemental file 1) before they are considered recruited into the study. Parental consent for participants below the age of 21 has been waived by the Institutional Review Board. The letter includes the study information, participant rights and the contact details of the project investigators.
Participants will be reminded that they can withdraw from the study at any time, without giving a reason, by informing the principal investigator; all data collected from a withdrawing participant will be discarded.
Data management and confidentiality
All data collected during the study will be kept confidential. Any identifiable information will only be stored on the university’s secure storage folders. Only the principal investigators will have access to these data for the sole purposes of verification and participant follow-up, if needed. Data stored in Telegram servers and TESLA-G servers will be indexed according to the participants’ computer-generated unique ID to maintain anonymity, and at no point in time will these servers store identifiable information from any participant. At the end of the study, only anonymised data indexed by unique IDs will be made available by the principal investigators to the research team for data analysis.
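One possible way to index records by a computer-generated unique ID while keeping the identifier mapping on secure university storage is sketched below; the use of uuid4 and the CSV layout are assumptions for illustration, not the study's actual implementation.

```python
import csv
import uuid

def assign_study_ids(participants):
    """Map each participant's Telegram username to a random, computer-generated study ID."""
    return {p["telegram_username"]: str(uuid.uuid4()) for p in participants}

def write_mapping(mapping, path):
    """Persist the identifier-to-ID mapping separately from the anonymised study data.

    Only the study IDs would be sent to the Telegram/TESLA-G servers; this mapping file
    would live on the university's secure storage, accessible to the principal
    investigators only.
    """
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["telegram_username", "study_id"])
        writer.writerows(mapping.items())
```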
Data will be backed up regularly. Personal data will never be used in a publication or presentation. All data collected will be kept in accordance with the University’s Research Data Management Policy. Research data used in any publication will be kept for a minimum of 10 years before being discarded.
Dissemination plan
Study results will be published in peer-reviewed open-access journals and presented in conference presentations.
Study status
Recruitment to the study occurred between August 2022 and September 2022. The pilot trial commenced in October 2022, and results are expected to be available in 2023.
Discussion
Implications
The primary objective of this pilot trial is to provide important information regarding the feasibility and acceptability of TESLA-G among medical students, with the aim of informing a future full-scale RCT. As a secondary objective, a demonstration of statistically significant improvements—in surgical knowledge, learner satisfaction and knowledge retention—by TESLA-G in this pilot trial may suggest a greater effect in a full-scale RCT with a larger sample size.
We aim to contribute to the growing body of literature evaluating the use of test-based learning, messaging apps and gamification in medical education. One key strength of TESLA-G is that it is structured in line with Bloom’s taxonomy,35 which has been recently applied in clinical learning, such as in clinical simulation47 and in clinical-oriented surgical education.48 Medical students in our study will attempt blocks of five questions that are specifically designed according to the increasing difficulty levels of Bloom’s taxonomy.
Limitations
First, the effectiveness of participant blinding is limited due to the difficulty in preventing communication among participants. It would be unfeasible to physically or digitally isolate the intervention and control groups from each other for the entire study duration. Participants will however be blinded as to whether their online quizzing platform belongs to the intervention group or the control group, and constant reminders will be given to participants before and during the study to ensure that they do not communicate with other study participants or compare the test and control platforms.
Second, the intervention duration of 14 days may be too short a time to allow the beneficial effects of TESLA-G (eg, improvement in surgical knowledge) to manifest. If this is so, a full-scale RCT with a larger sample size may demonstrate a greater effect.
Third, endocrine surgery is the only surgical topic evaluated in the study, hence results showing the effectiveness of TESLA-G may not be generalisable to other surgical subspecialties, or other medical specialties. Future work would involve expanding TESLA-G to encompass other surgical subspecialties and using TESLA-G as a template to develop more gamified online platforms that would better cater to the other medical specialties.
Finally, while all undergraduate medical students in the medical school we are recruiting from already have smartphones compatible with the Telegram application, the students will have to incur the cost of cellular data for the study. This could be a deterrent to students enrolling in the study and be a potential source of selection bias.
Footnotes
Contributors: CLKC and LTC conceived the study concept. MSPN, AIJ, TDRN, Y-IA, JLC, CLKC and LTC obtained funding. MSPN and AIJ developed the study design and statistical analysis plan. MSPN, AIJ, JLC and DNHT were involved with the software aspects of TESLA-G. MSPN, TDRN and Y-IA wrote the manuscript. AIJ, JLC, DNHT, JL, DCJM, CLKC and LTC provided critical revision to the manuscript. MSPN, AIJ, TDRN, Y-IA, JLC, DNHT, JL, DCJM, CLKC and LTC have approved the final version of the manuscript.
Funding: As of August 2021, this study has been funded by the Games for Health Innovations Centre (ALIVE) Serious Games Grant (Grant Number: SGG20/SN02).
Competing interests: None declared.
Patient and public involvement: Patients and/or the public were not involved in the design, or conduct, or reporting or dissemination plans of this research.
Provenance and peer review: Not commissioned; externally peer reviewed.
Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Ethics statements
Patient consent for publication
Not required.
References
1. Douthit NT, Norcini J, Mazuz K, et al. Assessment of global health education: the role of multiple-choice questions. Front Public Health 2021;9:640204. doi:10.3389/fpubh.2021.640204
2. St-Onge C, Young M, Renaud J-S, et al. Sound practices: an exploratory study of building and monitoring multiple-choice exams at Canadian undergraduate medical education programs. Acad Med 2021;96:271–7. doi:10.1097/ACM.0000000000003659
3. Ryan A, Judd T, Swanson D, et al. Beyond right or wrong: more effective feedback for formative multiple-choice tests. Perspect Med Educ 2020;9:307–13. doi:10.1007/s40037-020-00606-z
4. Yang BW, Razo J, Persky AM. Using testing as a learning tool. Am J Pharm Educ 2019;83:7324. doi:10.5688/ajpe7324
5. Roediger HL, Karpicke JD. Test-enhanced learning: taking memory tests improves long-term retention. Psychol Sci 2006;17:249–55. doi:10.1111/j.1467-9280.2006.01693.x
6. Larsen DP, Butler AC, Roediger HL. Test-enhanced learning in medical education. Med Educ 2008;42:959–66. doi:10.1111/j.1365-2923.2008.03124.x
7. Jud SM, Cupisti S, Frobenius W, et al. Introducing multiple-choice questions to promote learning for medical students: effect on exam performance in obstetrics and gynecology. Arch Gynecol Obstet 2020;302:1401–6. doi:10.1007/s00404-020-05758-1
8. Ayyub A, Mahboob U. Effectiveness of test-enhanced learning (TEL) in lectures for undergraduate medical students. Pak J Med Sci 2017;33:1339–43. doi:10.12669/pjms.336.13358
9. Green ML, Moeller JJ, Spak JM. Test-enhanced learning in health professions education: a systematic review: BEME Guide No. 48. Med Teach 2018;40:337–50. doi:10.1080/0142159X.2018.1430354
10. Pei L, Wu H. Does online learning work better than offline learning in undergraduate medical education? A systematic review and meta-analysis. Med Educ Online 2019;24:1666538. doi:10.1080/10872981.2019.1666538
11. Vaona A, Banzi R, Kwag KH, et al. E-learning for health professionals. Cochrane Database Syst Rev 2018;1:CD011736. doi:10.1002/14651858.CD011736.pub2
12. Brame CJ, Biel R. Test-enhanced learning: the potential for testing to promote greater learning in undergraduate science courses. CBE Life Sci Educ 2015;14:es4. doi:10.1187/cbe.14-11-0208
13. Mitra NK, Barua A. Effect of online formative assessment on summative performance in integrated musculoskeletal system module. BMC Med Educ 2015;15:29. doi:10.1186/s12909-015-0318-1
14. Orr R, Foster S. Increasing student success using online quizzing in introductory (majors) biology. CBE Life Sci Educ 2013;12:509–14. doi:10.1187/cbe.12-10-0183
15. Kibble J. Use of unsupervised online quizzes as formative assessment in a medical physiology course: effects of incentives on student participation and performance. Adv Physiol Educ 2007;31:253–60. doi:10.1152/advan.00027.2007
16. Johnson GM. Optional online quizzes: college student use and relationship to achievement. CJLT/RCAT 2006;32. doi:10.21432/T2J300
17. Deterding S, Khaled R, Nacke LE. Gamification: toward a definition. n.d. Available: https://scholar.google.ca/scholar?cluster=9364920636896300709,2586529636814645634,10275417439548738291,9081268206236386263,2642809066015279740,15822582342785988266,11509333419200940454,10318309385766568850,892090103379131544&hl=en&as_sdt=0,5&sciodt=0,5
18. Morillas Barrio C, Munoz-Organero M, Sanchez Soriano J. Can gamification improve the benefits of student response systems in learning? An experimental study. IEEE Trans Emerg Topics Comput 2016;4:429–38. doi:10.1109/TETC.2015.2497459
19. Sandrone S, Carlson C. Gamification and game-based education in neurology and neuroscience: applications, challenges, and opportunities. Brain Disorders 2021;1:100008. doi:10.1016/j.dscb.2021.100008
20. van Gaalen AEJ, Brouwer J, Schönrock-Adema J, et al. Gamification of health professions education: a systematic review. Adv Health Sci Educ Theory Pract 2021;26:683–711. doi:10.1007/s10459-020-10000-3
21. Gentry SV, Gauthier A, L’Estrade Ehrstrom B, et al. Serious gaming and gamification education in health professions: systematic review. J Med Internet Res 2019;21:e12994. doi:10.2196/12994
22. Nevin CR, Westfall AO, Rodriguez JM, et al. Gamification as a tool for enhancing graduate medical education. Postgrad Med J 2014;90:685–93. doi:10.1136/postgradmedj-2013-132486
23. Licorish SA, Owen HE, Daniel B, et al. Students’ perception of Kahoot!’s influence on teaching and learning. RPTEL 2018;13:1–23. doi:10.1186/s41039-018-0078-8
24. Coleman E, O’Connor E. The role of WhatsApp® in medical education; a scoping review and instructional design model. BMC Med Educ 2019;19:279. doi:10.1186/s12909-019-1706-8
25. Bakshi SG, Bhawalkar P. Role of WhatsApp-based discussions in improving residents’ knowledge of post-operative pain management: a pilot study. Korean J Anesthesiol 2017;70:542–9. doi:10.4097/kjae.2017.70.5.542
26. Gon S, Rawekar A. Effectivity of e-learning through WhatsApp as a teaching learning tool. MVP J Med Sci 2017;4:19. doi:10.18311/mvpjms/0/v0/i0/8454
27. Blumenfeld O, Brand R. Real time medical learning using the WhatsApp cellular network: a cross sectional study following the experience of a division’s medical officers in the Israel defense forces. Disaster Mil Med 2016;2:12. doi:10.1186/s40696-016-0022-7
28. Alhalabi N, Salloum R, Aless A, et al. Messaging apps use in undergraduate medical education: the case of national medical unified examination. Annals of Medicine & Surgery 2021;66. doi:10.1016/j.amsu.2021.102465
29. Iqbal MZ, Alradhi HI, Alhumaidi AA, et al. Telegram as a tool to supplement online medical education during the COVID-19 crisis. Acta Inform Med 2020;28:94–7. doi:10.5455/aim.2020.28.94-97
30. Gönüllü E, Soysal A, Can İ, et al. The use of social network in daily pediatric practice and education: Turkish pediatric atelier. Int J Pediatr 2020;2020:7301309. doi:10.1155/2020/7301309
31. Top apps worldwide for January 2021 by downloads. Available: https://sensortower.com/blog/top-apps-worldwide-january-2021-by-downloads [Accessed 23 May 2022].
32. Statista. Most popular messaging apps. Available: https://www.statista.com/statistics/258749/most-popular-global-mobile-messenger-apps/ [Accessed 23 May 2022].
33. Skivington K, Matthews L, Simpson SA, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ 2021;374:n2061. doi:10.1136/bmj.n2061
34. Chan A-W, Tetzlaff JM, Gøtzsche PC, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ 2013;346:e7586. doi:10.1136/bmj.e7586
35. Krathwohl DR. A revision of Bloom’s taxonomy: an overview. Theory Into Practice 2002;41:212–8. doi:10.1207/s15430421tip4104_2
36. Brooke J. SUS: a ‘quick and dirty’ usability scale. 1996. Available: https://books.google.com/books?hl=en&lr=&id=IfUsRmzAqvEC&oi=fnd&pg=PA189&dq=A+quick+and+dirty+usability+scale&ots=GbmBBblq3k&sig=D-O8vXda80cZ2WF1LPpXG7weZTo
37. Marsh HW. SEEQ: a reliable, valid, and useful instrument for collecting students’ evaluations of university teaching. Br J Educ Psychol 1982;52:77–95. doi:10.1111/j.2044-8279.1982.tb02505.x
38. Soon MKS, Martinengo L, Lu J, et al. Telegram education for surgical learning and application (TESLA): an exploratory study. JMIR Medical Education [Preprint]. doi:10.2196/preprints.35983
39. Sauro J, Lewis JR. Quantifying the User Experience: Practical Statistics for User Research. Morgan Kaufmann, 2016.
40. Coffey M, Gibbs G. The evaluation of the Student Evaluation of Educational Quality questionnaire (SEEQ) in UK higher education. Assessment & Evaluation in Higher Education 2001;26:89–93. doi:10.1080/02602930020022318
41. Grammatikopoulos V, Linardakis M, Gregoriadis A, et al. Assessing the Students’ Evaluations of Educational Quality (SEEQ) questionnaire in Greek higher education. High Educ 2015;70:395–408. doi:10.1007/s10734-014-9837-7
42. Blanié A, Amorim M-A, Benhamou D. Comparative value of a simulation by gaming and a traditional teaching method to improve clinical reasoning skills necessary to detect patient deterioration: a randomized study in nursing students. BMC Med Educ 2020;20:53. doi:10.1186/s12909-020-1939-6
43. Plana NM, Rifkin WJ, Kantar RS, et al. A prospective, randomized, blinded trial comparing digital simulation to textbook for cleft surgery education. Plast Reconstr Surg 2019;143:202–9. doi:10.1097/PRS.0000000000005093
44. Bell ML, Whitehead AL, Julious SA. Guidance for using pilot studies to inform the design of intervention trials with continuous outcomes. Clin Epidemiol 2018;10:153–7. doi:10.2147/CLEP.S146397
45. Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Research in Psychology 2006;3:77–101. doi:10.1191/1478088706qp063oa
46. Whitehead AL, Julious SA, Cooper CL, et al. Estimating the sample size for a pilot randomised trial to minimise the overall trial sample size for the external pilot and main trial for a continuous outcome variable. Stat Methods Med Res 2016;25:1057–73. doi:10.1177/0962280215588241
47. Orgill BD, Nolin J. Learning taxonomies in medical simulation. In: StatPearls. Treasure Island (FL): StatPearls Publishing, 2022.
48. Tuma F, Nassar AK. Applying Bloom’s taxonomy in clinical surgery: practical examples. Ann Med Surg (Lond) 2021;69:102656. doi:10.1016/j.amsu.2021.102656