BMJ Simulation & Technology Enhanced Learning. 2020 Sep 1;7(3):146–153. doi: 10.1136/bmjstel-2019-000576

Knowledge transfer and retention of simulation-based learning for neurosurgical instruments: a randomised trial of perioperative nurses

David B Clarke 1,2, Alena I Galilee 1, Nelofar Kureshi 1, Murray Hong 1, Lynne Fenerty 1, Ryan C N D’Arcy 3,4
PMCID: PMC8936786  PMID: 35518567

Abstract

Introduction

Previous studies have shown that simulation is an acceptable method of training in nursing education. The objectives of this study were to determine the effectiveness of tablet-based simulation in learning neurosurgical instruments and to assess whether skills learnt in the simulation environment are transferred to a real clinical task and retained over time.

Methods

A randomised controlled trial was conducted. Perioperative nurses completed three consecutive sessions of a simulation. Group A performed simulation tasks prior to identifying real instruments, whereas Group B (control group) was asked to identify real instruments prior to the simulation tasks. Both groups were reassessed for knowledge recall after 1 week.

Results

Ninety-three nurses completed the study. Participants in Group A, who had received tablet-based simulation, were 23% quicker in identifying real instruments and did so with better accuracy (93.2% vs 80.6%, p<0.0001) than Group B. Furthermore, the simulation-based learning was retained at 7 days with 97.8% correct instrument recognition in Group A and 96.2% in Group B while maintaining both speed and accuracy.

Conclusion

This is the first study to assess the effectiveness of tablet-based simulation training for instrument recognition by perioperative nurses. Our results demonstrate that instrument knowledge acquired through tablet-based simulation training results in improved identification and retained recognition of real instruments.

Keywords: Education, Knowledge transfer, Nursing, Simulation based learning

INTRODUCTION

Simulation-based learning is designed to engage, challenge and enrich the learner’s knowledge base and skill set. It provides the opportunity to experience a variety of clinical scenarios, both common and uncommon, in a safe environment that allows repeated skills training. It also facilitates the transfer of clinical knowledge to real clinical scenarios. Research demonstrates that simulation can improve learning of various clinical skills and is being increasingly used as an educational strategy in clinical care to support staff training and enhance the quality of patient care. 1–3

Background

Simulation training is being increasingly used to prepare nursing graduates for real-world scenarios and to provide continuing education and professional development to staff nurses. An essential component of nursing education is practical clinical experience, which allows for the development of competencies across disciplines. However, clinical experiences in hospitals are constrained by a limited number of preceptors, high-acuity patients and patient safety considerations. 3 4 To overcome these challenges, many hospitals are embracing simulation-based learning as an alternative or adjunct to clinical placements. A landmark national simulation study from the USA provides substantial evidence that simulation can be effectively substituted for up to 50% of traditional clinical experience in all pre-licensure core nursing courses. 5 The findings from the National Simulation Study demonstrate that skills for entry-level clinicians and nurses can be learnt, honed and evaluated in a safe learning environment using simulation before students move to complex clinical environments, where system failures and complexity add risks to the processes of care. 6

Simulation-based learning is especially useful for training nurses who work in a dynamic operating room (OR) environment, dominated by high patient acuity and heavy workloads. Traditionally, perioperative nurses have learnt and practised in real clinical settings, which include making mistakes on real patients. A simulated environment allows perioperative nurses to learn a diverse set of skills, be challenged, make mistakes without the possibility of patient harm and, critically, be exposed to conditions which otherwise can be missed during usual clinical scenarios. 7

To date, several studies have established that simulation is an effective method of training in nursing education. 3 8–11 However, results from a recent meta-analysis suggest that the quality of evidence from these studies is relatively low because of inconsistent designs and relatively small numbers of participants, varying between 28 and 103 with only one study having more than 90 participants. 3 Furthermore, no study to date has examined simulation-based learning of surgical instruments in a cohort of perioperative nurses, which is a critical competency for efficient performance in the OR. Finally, there remains a research gap in the perioperative nursing literature demonstrating that surgical knowledge and skills acquired through simulation training are both transferable to real-world tasks and are retained over time.

This study reports on a prospective randomised controlled trial (RCT), which was undertaken with the following objectives:

  1. To investigate the effectiveness of simulation-based training in perioperative nurses, measured as nurses’ learning progress in the simulation environment.

  2. To determine whether the learning acquired through this training is transferable to recognising real surgical instruments.

  3. To evaluate whether simulation-based learning is retained for at least 1 week.

  4. To explore nurses’ satisfaction with simulation-based training.

Hypothesis

We hypothesised that perioperative nurses trained on a tablet-based simulation platform would have improved recognition of real instruments compared with those who did not have prior simulation-based training. In addition, we hypothesised that knowledge transfer of instrument learning would be retained in both groups 1 week after the simulation training.

METHODOLOGY

Design

The study was conducted as a parallel-group RCT between October 2015 and May 2016, with a retention test completed 1 week after the initial session. This study is registered as an RCT at ClinicalTrials.gov, Identifier: NCT03894644.

Description of simulation intervention

Participant orientation

One research coordinator provided an introduction to the study, which included an orientation to the iPad technology, study design, sequence of study tasks and average duration of time required to complete each task. Participants read through the research consent form to understand study details and were given the opportunity to ask questions during orientation.

Simulator type

PeriopSim is an iPad-based medical simulation application developed by Conquer Experience for training surgical staff. PeriopSim Instrument Trainer is a game-based educational iPad app for learning 15 common instruments used in burr hole surgery 12 (see figure 1). Using the Instrument Trainer, participants learnt the following instruments: stapler, scalpels with #10 and #15 blades, Frazier suction, Jefferson fine-toothed forceps, Dewar elevator, self-retaining retractor, irrigation, Hudson brace, #2 forward angled cup curette, Penfield #1, bayonet bipolar forceps, needle driver, curved Metzenbaum scissors and Adson forceps. The user interface allows participants to scroll and select an instrument when prompted. If no response is given within a few seconds, a photo of the instrument is shown so that the participant can find the correct instrument. Feedback is provided by verbal cueing (‘correct’, ‘well done’ or ‘incorrect’) and a digital scorecard during the simulation; at the end of the simulation, a summary scorecard is presented showing the number of correct instruments requested, total time taken to complete the training, time saved (time remaining before the prompt appeared on screen for a correctly submitted instrument) and overall points awarded for performance.

Figure 1.

Figure 1

PeriopSim Instrument Trainer. Image attribution: Conquer Experience.

PeriopSim for Burr Hole Surgery is an iPad app presenting a condensed version of a real burr hole surgery, including OR video in which a surgeon requests specific instruments at critical stages during the procedure. Participants use skills acquired from the PeriopSim Instrument Trainer to provide the correct instruments as requested by the surgeon during the surgery. In addition to learning the instruments used in the procedure, participants also learn the order in which the instruments are used, anticipating which instruments are needed at key procedural steps. Each session required 5–8 min for completion. The instruments used for PeriopSim for Burr Hole Surgery included the same instruments learnt during PeriopSim Instrument Trainer plus a few additional requests, such as dressing (Band-Aid), Gelfoam and razor.

Simulation environment

Simulation and real instrument testing on Day 1 were conducted in a dedicated classroom at the health sciences centre. Participants were provided with headphones to block out any background noise while performing simulation study tasks.

Simulation scenario

Both simulation tasks were preprogrammed into the PeriopSim app. PeriopSim Instrument Trainer asked participants to identify 15 surgical instruments. Similarly, in PeriopSim for Burr Hole Surgery, a pre-recorded video clip of a burr hole surgery was shown, and participants were asked to pass instruments to the virtual surgical lead during the procedure. Simulation scenarios in the Instrument Trainer and surgery had the same instruments and sequence for all participants. To assess learning objective 1 (learning progress in the simulation environment), the outcomes of total score, time saved and accuracy were compared between all testing sessions.

Simulation was conducted in groups of participants. Facilitators during simulation scenarios were registered nurses and/or qualified graduate research coordinators.

Instructional design

The estimated duration to complete one session of PeriopSim Instrument Trainer was 4 min and was performed three times by each participant on Day 1 and once during recall testing. The estimated duration to complete one session of PeriopSim for Burr Hole Surgery was 7 min and was completed twice by each participant on Day 1 and once during recall testing. The predefined standards for participant performance were time saved, number of errors and total score as described in the section for outcome measures. The nonsimulation intervention was real instrument testing, which was performed after simulation by Group A and before simulation by Group B on Day 1.

Feedback

At the end of each simulation session, participants were presented with the same quantitative feedback (total score) by the PeriopSim module (figure 1).

Participants and sample

Study participants were perioperative nurses employed at a health centre in Canada. Assignment into study arms was achieved by simple randomisation using QR barcodes, which maintained complete randomness of the assignment. 13 One research coordinator was responsible for assigning participants into study arms. Although the coordinator was aware of the allocated arm, the principal investigator and data analysts were blinded to the allocation. Those assigned to Group A performed the simulation training before the recognition of real instruments. Those assigned to Group B performed the recognition of real instruments without prior simulation training (figure 2). The unique QR barcode was used to access the simulation applications for data collection. A moderate to high effect size was estimated in the current study because a small effect size has been previously identified as a potential confounder. 3 Assuming a moderate to high effect size (η2=0.6–0.8), a sample size of 45 participants per group was sufficient (with an alpha level of 0.05 and power of 80%) to detect a significant difference between the groups for real instrument recognition testing. In total, we aimed to recruit 100 nurses because we anticipated that some participants would not be able to return a week later for retained knowledge testing and because nurses with neurosurgery as a speciality would be excluded.
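As an illustration of this power estimate, the sketch below (not the authors’ actual calculation) shows how a per-group sample size of roughly 45 follows from an independent-samples comparison when the anticipated effect is interpreted as a standardised difference (Cohen’s d) of about 0.6 with an alpha of 0.05 and 80% power; the effect-size convention and the statsmodels library are assumptions made for illustration.

```python
# Minimal sketch (not the authors' calculation): per-group sample size for an
# independent-samples t-test, assuming the anticipated effect corresponds to
# Cohen's d ~= 0.6, alpha = 0.05 and power = 0.80.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
n_per_group = power_analysis.solve_power(effect_size=0.6, alpha=0.05,
                                         power=0.80, ratio=1.0,
                                         alternative='two-sided')
print(f"Required participants per group: {n_per_group:.1f}")  # ~44.6, i.e. 45 per group
```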

Figure 2.

Figure 2

Experimental design.

Data collection

Data collection took place at the study site. Nurses completed the study during scheduled academic mornings, when they were on break, or at the beginning or end of their shift. Both simulation training and real instrument recognition tasks for Day 1 were completed on the same day, with no break between real instrument testing and simulation training. The knowledge recall task was completed 7 days later (figure 2). Seven participants failed to complete the full three sessions of PeriopSim Instrument Trainer due to technical errors (three participants) or lack of availability to complete the Day 1 portion of the study (four participants); data from these participants were excluded from further analysis. Complete data for Day 1 were available for 93 participants. After removing neurosurgery nurses from the final analysis, there were 49 participants in Group A and 40 participants in Group B (figure 3).

Figure 3.

Figure 3

Participant flow chart diagram. ANOVA, analysis of variance.

Ethical considerations

Written informed consent was obtained from all participants, and the study was approved by the hospital’s Research Ethics Board. Participants received oral and written information about the purpose of the study and the option to withdraw at any point. Access to PeriopSim on an iPad was password protected to guarantee confidentiality. Research data did not contain any personally identifiable information and were stored on an online server. In addition, signed consent forms and papers containing research data were stored in locked cabinets, with access restricted to study personnel. All aspects of the study, including conception, design, acquisition of data, statistical analysis, interpretation and drafting of the manuscript, were performed independently by the research team without participation from Conquer Experience.

EXPERIMENTAL DESIGN

Objective 1: measurement of learning progress in simulated environment

The study design involved three consecutive sessions of the PeriopSim Instrument Trainer, followed by two consecutive sessions of the PeriopSim for Burr Hole Surgery (https://periopsim.com). The total time to complete these simulations was between 30 and 40 min. Participants in Group A (experimental group) performed all PeriopSim sessions before being tested on real instruments, while participants in Group B (control group) were first tested on identifying real instruments followed by PeriopSim sessions. This design allowed us to determine whether exposure to simulation-based learning at baseline (Group A) or after real instrument learning (Group B) affected the progress of simulation-based learning through consecutive sessions using PeriopSim.

Objective 2: transferability of simulation-based knowledge to real instrument recognition

To assess knowledge transfer from simulated tasks to the real world, participants were allotted a maximum of 45 s to correctly identify nine real surgical instruments from a tray of 15 instruments commonly used in burr hole surgery. Nine empty bins with instrument labels were placed on a table. Instruments were laid on a table in a different order from that shown in both PeriopSim simulations. Participants were instructed to identify and place corresponding instruments into the correct bin as quickly as possible.

Objective 3: knowledge recall of simulation-based learning and real instruments

To evaluate delayed recall following simulation-based training, all participants were asked to return after 7 days to perform real instrument testing. The instruments used in this task were the same instruments used in the knowledge transfer task. Participants were also given a short refresher on the PeriopSim platforms, either before (Group A) or after (Group B) the recall session using real instruments. Specifically, participants in Group A performed one session of PeriopSim Instrument Trainer, followed by a single session of PeriopSim for Burr Hole Surgery, and were then tested for real instrument recognition. Participants in Group B performed real instrument recognition, followed by one session of simulation training on both the Instrument Trainer and the burr hole surgery procedure.

Objective 4: acceptability of simulation-based learning

All participants who completed both the first and the recall sessions filled out a postsimulation evaluation survey. The survey assessed prior neurosurgery clinical experience as well as the relevance and quality of the PeriopSim platforms using a 5-point Likert scale.

Outcome measurements

Study outcomes for simulation tasks were based on metrics that measure current and desired performance of participants: amount of time saved, number of errors and total score. Time saved was defined as the number of seconds between the participant’s correct submission of an instrument and the maximum allotted time. The number of errors was counted as the number of incorrectly selected instruments on the first attempt. Total score was a gamification-based algorithm dependent upon the number of correct responses on the first attempt and time saved. In the knowledge transfer task, the ability of participants to correctly identify real instruments was assessed using score and time. Score was defined as the number of correctly identified instruments from the instrument tray during the instrument recognition task. Time was the number of seconds (to a maximum of 45) taken to complete the real instrument task.
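To make these definitions concrete, the following sketch computes the three simulation metrics for a set of instrument prompts; the scoring weights are hypothetical, since PeriopSim’s gamification algorithm is not specified in this paper.

```python
# Illustrative sketch only: PeriopSim's actual scoring algorithm is not
# described here, so the point values below are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Prompt:
    correct_first_try: bool   # instrument selected correctly on the first attempt
    response_time_s: float    # seconds taken to submit the instrument
    allotted_time_s: float    # maximum time allowed for this prompt

def summarise(prompts: List[Prompt]) -> dict:
    """Compute time saved, number of errors and a gamified total score."""
    # Time saved: seconds remaining before the allotted time on correct submissions.
    time_saved = sum(max(p.allotted_time_s - p.response_time_s, 0.0)
                     for p in prompts if p.correct_first_try)
    # Errors: instruments selected incorrectly on the first attempt.
    errors = sum(1 for p in prompts if not p.correct_first_try)
    # Hypothetical score: points per first-try correct response plus a time bonus.
    total_score = 200 * sum(p.correct_first_try for p in prompts) + 10 * time_saved
    return {"time_saved_s": time_saved, "errors": errors, "total_score": total_score}

print(summarise([Prompt(True, 3.2, 10.0), Prompt(False, 8.0, 10.0), Prompt(True, 5.5, 10.0)]))
```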

Data analysis

Statistical analyses were performed in SPSS (IBM version 23). Associations between explanatory variables were assessed using analysis of variance (ANOVA). A 3×3×2 ANOVA with a within-subject factor of Session (Session 1, Session 2 and Session 3), a within-subject factor of Measure (total score, number of errors, time saved) and a between-subject factor of Group (Group A, Group B) was performed to assess participants’ performance during PeriopSim Instrument Trainer. A 2×3×2 ANOVA with a within-subject factor of Session (Session 1, Session 2), a within-subject factor of Measure (total score, number of errors, time saved) and a between-subject factor of Group (Group A, Group B) was performed to assess participants’ performance during PeriopSim for Burr Hole Surgery. Greenhouse-Geisser corrections were employed when the sphericity assumption was violated.

Significant main effects and interactions found in ANOVA were followed by paired sample t-tests for post hoc testing. Bonferroni corrections were employed for multiple comparisons to follow up significant main effects and interactions (p<0.05). Between-group differences in real instrument recognition at Day 1 and at the recall session (Day 7) were assessed by independent sample t-tests. Descriptive statistics are reported as mean±SEM and mean differences (MD) between the explanatory variables.
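A simplified sketch of this analysis pipeline is shown below using the open-source pingouin package rather than SPSS; it models a single measure (total score) with Session as the within-subject factor and Group as the between-subject factor, with a sphericity correction and Bonferroni-adjusted post hoc tests. The long-format column names and file name are assumptions made for illustration, not the study’s actual data structure.

```python
# Simplified sketch of the mixed-design analysis (the study used SPSS and a
# fuller Session x Measure x Group design); shown here for one measure only.
import pandas as pd
import pingouin as pg

# Assumed long-format data: one row per participant per session.
df = pd.read_csv("instrument_trainer_long.csv")  # columns: subject, group, session, total_score

# Mixed ANOVA: Session (within) x Group (between), with sphericity correction.
aov = pg.mixed_anova(data=df, dv="total_score", within="session",
                     between="group", subject="subject", correction=True)
print(aov)

# Post hoc pairwise comparisons with Bonferroni adjustment.
posthoc = pg.pairwise_tests(data=df, dv="total_score", within="session",
                            between="group", subject="subject", padjust="bonf")
print(posthoc)

# Between-group comparison of real instrument recognition on Day 1
# (independent-samples t-test), assuming a 'day1_real_score' column exists.
day1 = df.drop_duplicates("subject")
print(pg.ttest(day1.loc[day1.group == "A", "day1_real_score"],
               day1.loc[day1.group == "B", "day1_real_score"]))
```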

Validity and reliability

The study was conducted by strictly following the study protocol. Eligible participants were randomly allocated by a research coordinator after they provided informed consent. Participants in both groups, as well as study coordinators conducting this research, received training in PeriopSim Instrument Trainer and PeriopSim for Burr Hole Surgery. Since the efficacy of PeriopSim has been reported in a previous study of neurosurgery residents, all instruments and outcomes used in the current study were deemed to be valid and reliable. 12 The results of the study are reported in compliance with the Consolidated Standards of Reporting Trials 2010 Statement. 14

RESULTS/FINDINGS

The most common main speciality for perioperative nurses was general surgery (42%) followed by orthopaedic surgery (20%) and cardiac surgery (12%). Five per cent of participants were identified as neurosurgery nurses and were excluded from further analysis. Nursing experience varied between the nurses, with 37% of nurses having more than 10 years of experience, while only 14% had <1 year of experience. There were no significant differences in years of OR nursing experience or nursing specialities between Group A and Group B (table 1).

Table 1.

Participants’ years of nursing experience and clinical speciality

| Characteristic | Group A, n (%) | Group B, n (%) | P value |
| Nursing experience (years) | | | |
| <1 | 6 (14) | 5 (14) | 0.99 |
| 1–4 | 16 (36) | 10 (27) | 0.37 |
| 5–10 | 5 (11) | 9 (24) | 0.12 |
| >10 | 17 (39) | 13 (35) | 0.75 |
| Nursing specialty | | | |
| General surgery | 16 (47) | 12 (36) | 0.93 |
| Orthopedic surgery | 8 (26) | 5 (16) | 0.68 |
| Cardiac surgery | 2 (6) | 6 (19) | 0.07 |

PeriopSim Instrument Trainer

ANOVA performed for the Instrument Trainer revealed a significant factor of Session (F(2,174)=499, η2=0.86, observed power=1, p<0.0001) and Performance (F(1,174)=16 332, η2=0.99, observed power=1, p<0.0001); significant two-way interactions of Session×Performance (F(4,348)=499, η2=0.85, observed power=1, p<0.0001) and Session×Group (F(2,174)=74, η2=0.48, observed power=1, p<0.0001); a significant three-way interaction of Session×Performance×Group (F(4,348)=75, η2=0.46, observed power=1, p<0.0001); and a significant between-subject factor of Group (F(1,86)=15 661, η2=0.99, observed power=1, p<0.0001). Follow-up post hoc paired sample comparisons were employed for the three-way interaction, and the following effects were found.

For Group A, total score on the PeriopSim Instrument Trainer increased significantly from session 1 to session 2 (2829 (51) vs 3611 (39), p<0.0001) and from session 2 to session 3 (3611 (39) vs 3824 (33), p<0.0001, table 2). Compared with session 1 (1.6 (0.3)), the number of errors in identifying surgical instruments had decreased by session 3 to 0.2 (0.08) (p<0.0001). In addition to making fewer errors, Group A saved time with each PeriopSim Instrument Trainer repetition: between session 1 and session 2 (MD=32 (1.4), p<0.0001) as well as between session 2 and session 3 (MD=9.4 (0.9), p<0.0001).

Table 2.

Simulation Day 1: Instrument Trainer results across the three sessions for Group A and Group B

| Group | Measure | Session 1 | Session 2 | Session 3 | ANOVA p value |
| Group A | Score | 2829 (51) | 3611 (39) | 3824 (33) | <0.001 |
| Group A | Time saved | 61 (2.1) | 93 (1.7) | 102 (1.5) | <0.001 |
| Group A | Errors | 1.6 (0.3) | 0.4 (0.1) | 0.2 (0.08) | <0.001 |
| Group B | Score | 3475 (57) | 3805 (44) | 3914 (37) | <0.001 |
| Group B | Time saved | 88 (2.3) | 102 (1.9) | 107 (1.6) | <0.001 |
| Group B | Errors | 0.6 (0.3) | 0.2 (0.1) | 0.2 (0.1) | 0.54 |

ANOVA, analysis of variance.

Group B demonstrated similar improvements across the sessions (table 2). By session 3, participants achieved a mean score of 3914 (36), which was significantly higher than in session 1 (3475 (57), p<0.0001) and session 2 (3806 (43), p<0.0001). Although there were no significant differences in the number of errors for Group B between sessions (MD<0.4 (0.3), p>0.1), time was saved with each repetition of the simulation by anticipating instruments: between session 1 and session 2 (88 (2.3) vs 101.9 (1.9), MD=13.9 (1.5), p<0.0001) and between session 2 and session 3 (108 (1.5), MD=5.5 (0.9), p<0.0001).

We also compared the performance of both groups on the PeriopSim Instrument Trainer. Group B outperformed Group A in session 1 on all three measures of performance (total score: 3475 (56) vs 2829 (51), p<0.001; time saved: 87.8 (2.3) vs 61.1 (2.1), p<0.001; number of errors: 0.6 (0.3) vs 1.6 (0.3), p<0.05) and in session 2 on score (3805 (43.1) vs 3619 (37), p<0.01) and time saved (101.8 (1.9) vs 93.8 (1.6), p=0.001); by session 3, however, the performance of the two groups did not differ in total score, number of errors or time saved (p>0.05).

PeriopSim for Burr Hole Surgery

ANOVA performed for burr hole surgery revealed significant factors of Session (F(1,90)=420, η2=0.82, observed power=1, p<0.0001) and Performance (F(2,180)=9939, η2=0.99, observed power=1, p<0.0001); two-way interactions of Session×Performance (F(2,180)=569, η2=0.86, observed power=1, p<0.0001) and Performance×Group (F(2,180)=11.9, η2=0.12, observed power=0.99, p=0.01); a three-way interaction of Session×Performance×Group (F(2,90)=40.8, η2=0.31, observed power=1, p<0.0001); and a significant between-subject factor of Group (F(1,90)=8.8, η2=0.1, observed power=0.83, p<0.05). Follow-up post hoc comparisons were employed for the three-way interaction to identify significant differences.

In Group A, total scores on PeriopSim for Burr Hole Surgery increased significantly from session 1 to session 2 (2750 (46) vs 3205 (41), p<0.001); at the same time, the number of errors decreased significantly (2.1 (0.6) vs 0.8 (0.2), p<0.001; table 3). Group A showed a decrease in time saved from session 1 to session 2 (61 (2.1) vs 15.8 (1.5), p<0.0001).

Table 3.

Simulation Day 1: burr hole surgery results across the two sessions for Group A and Group B

| Group | Measure | Session 1 | Session 2 | ANOVA p value |
| Group A | Score | 2750 (46) | 3205 (41) | <0.001 |
| Group A | Time saved | 61 (2.1) | 15.8 (1.5) | <0.001 |
| Group A | Errors | 2.1 (0.6) | 0.8 (0.2) | <0.001 |
| Group B | Score | 2391 (49) | 3183 (44) | <0.001 |
| Group B | Time saved | 87.6 (2.2) | 16.6 (1.6) | <0.001 |
| Group B | Errors | 4.1 (0.68) | 0.98 (0.2) | <0.001 |

ANOVA, analysis of variance.

Group B’s use of PeriopSim for Burr Hole Surgery showed results similar to those of Group A (table 3). There was a significant increase in total scores between session 1 and session 2 (2391 (49) vs 3183 (44), p<0.0001). The decreases in the number of errors (4.1 (0.68) vs 0.98 (0.2), p<0.0001) and in time saved (87.6 (2.2) vs 16.6 (1.6), p<0.0001) were also significant over the two sessions.

We also compared the performance of both groups on PeriopSim for Burr Hole Surgery: although Group A outperformed Group B in session 1 on all three measures of performance (total score: 2750 (46) vs 2391 (49), p<0.05; time saved: 5.3 (0.7) vs 2.5 (0.7), p<0.05; number of errors: 2.1 (0.6) vs 4.1 (1.7), p<0.05), the performance of both groups did not differ in total score, number of errors or time saved by session 2 (p>0.1).

Day 1: knowledge transfer task: recognising real instruments

The results of the independent sample t-tests revealed that on Day 1, Group A identified 93.2% of real instruments correctly versus 80.6% for Group B (p<0.0001). Group A’s mean time to complete the task was also quicker: 31.4 s compared with Group B’s 40.1 s. The differences between Group A and Group B were significant for both score (MD=12.6 (5), p<0.0001) and time (MD=8.7 (1.5), p<0.0001) (figure 4). Penfield #1 and Angled Cup Curette #2 were the most frequently misidentified instruments.

Figure 4.

Figure 4

Knowledge transfer and retention. The bar graphs represent percentage of correctly recognised instruments (per cent score) and time taken (seconds) to complete real instrument recognition on Day 1 and Day 7.

Day 7: knowledge transfer task: recognising real instruments

On retesting at Day 7, Group A’s total scores for PeriopSim Instrument Trainer (3620±45) and PeriopSim for Burr Hole Surgery (2996±33) were not different from those of Group B (3547±59, p>0.5; 3014±42, p>0.5). For Group B, whose Day 1 testing was done prior to PeriopSim Instrument training, there was a significant increase in the accuracy of identifying real instruments between Day 7 and Day 1 (93.2 (10) vs 80.6 (15), p<0.0001), as well as a decrease in the time taken to complete the task (31.3 (8.6) vs 40.1 (7.4), p<0.0001; see figure 4). One week after PeriopSim Instrument training, both groups scored highly on correctly identifying real instruments (Group A, 97.8%; Group B, 96.2%; see figure 4), with no significant differences in score or time between Group A and Group B (p>0.1). Within Group A, accuracy in identifying real instruments increased between Day 1 and Day 7 (92.4 (10) vs 97.8 (5), p<0.01), while the absolute time taken to identify instruments remained the same (p>0.05).

Participants’ feedback

Nurses’ feedback on their experience with the simulation platforms was very positive (table 4). The results of the postevaluation survey demonstrated that 78% rated PeriopSim as ‘extremely easy to use’ while 21% considered it ‘very easy to use’. In addition, participants indicated that it was ‘extremely likely’ (98%) or ‘very likely’ (2%) that they would use such platforms again in the future. All participants rated the PeriopSim interface as ‘very good’ (94%, the highest rating) or ‘good’ (6%).

Table 4.

Participants’ satisfaction with PeriopSim interface

| Question | Percentage of ratings |
| How easy was it to use this app? | Extremely easy: 78.0%; Very easy: 21.0% |
| Rate the overall interface | Very good: 94.0%; Good: 6.0% |
| Did the app help you learn new instruments? | Strongly agree: 81.0%; Agree: 19.0% |
| How likely would you be to use this kind of app for learning? | Extremely likely: 98.0%; Very likely: 2.0% |

Respondents were offered a choice of five precoded response options, with the midpoint being a neutral response. None of the respondents reported a negative or neutral attitude to the survey questions.

DISCUSSION

This is the first and largest study to date to investigate the effectiveness of tablet-based simulation for training perioperative nurses in the recognition of surgical instruments, as well as nurses’ retention of learning acquired during the simulation. Consistent with our hypotheses, the results are sufficiently powered to provide evidence for the effectiveness of tablet-based simulation as an educational tool for perioperative nurses learning surgical instruments. We have shown that both groups of nurses demonstrated improved performance with repetition of the simulation when using PeriopSim Instrument Trainer and PeriopSim for Burr Hole Surgery. This dose–response relationship is in line with our previous research involving a small number of PGY-1 (Postgraduate Year-1) neurosurgery residents 12 and with earlier findings for high-fidelity simulation-based education, where repetition results in progressively superior performance until a performance plateau is reached. 15 Studies on surgical skill acquisition suggest that performance learning curves plateau after 2–75 repetitions, depending on the complexity of the skill, the learner’s experience and the desired level of performance. 9 10 Although a performance plateau may not reliably define the training endpoint, the current data suggest that, while participants continued to improve across three repetitions of PeriopSim, this number was sufficient to produce improvement in real instrument testing. Furthermore, by the third session of PeriopSim Instrument Trainer and the second session of PeriopSim for Burr Hole Surgery, Groups A and B performed equally well in the simulations.

We hypothesised that participants in Group A, with simulation-based training, would outperform those in Group B, without simulation-based training, during real instrument recognition. Indeed, our findings identified a significant difference between the two groups in overall performance in real instrument testing. Specifically, those in Group A identified real instruments with 16% greater accuracy and did so in less time (23% quicker) than those in Group B. These results provide evidence that participants who performed the simulation training before identifying real instruments were able to transfer simulation-based learning to a real clinical task. We also examined which instruments were most often identified incorrectly; during real instrument recognition, the highest error rates were for instruments specific to the burr hole neurosurgery procedure, such as Penfield #1 and Angled Cup Curette #2. This finding is consistent with the nurses’ demographics: since nurses with a neurosurgery speciality were excluded from the sample, participants were less familiar with instruments specific to neurosurgical procedures.

Despite extensive research on the use of simulation-based approaches for learning surgical skills in surgery residents, 16 17 only a few studies to date have examined simulation-based training in perioperative nurses. Previous research concerning perioperative management team development suggests that incorporating longitudinal simulation methods improves patient safety and satisfaction. 18 19 It is worth noting that although clinical outcomes were not directly examined in previous research, nurses positively commented on a simulation-based approach, with improvement in their technical and communication skills, as well as awareness concerning medical safety in the OR. 18 Our results add to these previous findings by demonstrating nurses’ improvement in one of the main components critical to their perioperative technical performance: knowledge of surgical instruments. In addition, we found that the nurses were very enthusiastic in using tablet-based simulation, with most finding the technology ‘extremely easy’ to use and indicating that they would be ‘extremely likely’ to use similar platforms in the future.

Several studies have examined the retention of knowledge and skills learnt with simulation experiences for nursing students. 11 20 21 For example, Ackermann found that retention of cardiopulmonary resuscitation knowledge at 3 months was stronger for participants who underwent high-fidelity simulation-based training compared with controls. 11 Despite an abundance of existing literature on simulation-based education for nurses, surprisingly, only a few studies have examined both the transfer and retention of learning after simulation-based training. In our study, we were able to show that the simulation-based learning was retained when retested 1 week later, and we attribute this retention in the control group (Group B) to having had simulation training after real instrument testing on Day 1. Taken together with previous research, the current findings further highlight that repetitive simulation-based training not only improves participants’ performance on a tablet but also results in retained knowledge that is transferable to the real world. These results have important implications for simulation-based surgical instrument training for nurses, nursing students and other clinical staff. Specifically, our findings further highlight that by promoting learning of surgical instruments by all members of the operating team in a safe environment, simulation-based perioperative training may serve as a vehicle promoting OR competency and safety. 7 18

Our study adds novel elements to the existing literature on simulation-based training for neurosurgical applications. The PeriopSim platform is unique in neurosurgery for its ability to gamify the user’s learning experience using techniques such as scoring and timed challenges, which motivate users to improve their individual performance while competing with peers. The current study focuses on knowledge acquisition of simulated instruments for burr hole surgery, a common neurosurgical procedure. Finally, the results demonstrate that low-fidelity simulation-based training is an effective tool to transfer simulation-based learning to real instrument recognition, a finding that is relevant to nurse educators and employers seeking a cost-effective method to enhance the readiness of perioperative nurses in the OR.

Limitations

Despite a well-controlled design, there are several limitations that should be acknowledged. First, we did not compare tablet-based simulation training with conventional instructor-led training; our suspicion, admittedly not examined, is that tablet-based learning is more engaging, easier to schedule and more cost-effective than instructor-led training. The lack of an active control group also reflects the practical reality that in our institution, as in many, instrument learning is an on-the-job exercise, likely owing to the organisational and financial challenges of instructor-led training.

Potentially confounding variables such as age, gender, academic background and familiarity with technology were not collected and may have affected the results. Despite random allocation, the sample was drawn from a single institution, which may limit the generalisability of our results. Knowledge retention was measured over a period of 7 days and does not reflect sustained knowledge (more than a few months). It remains to be determined how long this learning is retained, and whether there is value in a brief ‘refresher’ simulation after a more prolonged period (weeks to months) following initial training. Finally, the use of a single simulation session as a ‘refresher’ may have affected the retention results. Despite these caveats, this is the first study to date evaluating simulation-based knowledge acquisition of surgical instruments in perioperative nurses.

CONCLUSION

Our study demonstrates that tablet-based simulation training for neurosurgical instrument identification leads to transferability of skills immediately following the simulation and contributes to retained (at least 1 week) knowledge for perioperative nurses. In addition, our results indicate a high level of nurses’ satisfaction with the simulation platforms. Taken together, our study shows that learning surgical instruments using tablet-based simulation training is effective and may serve as an efficient educational tool to promote OR competency and safety.

What is already known on this subject.

  • Rapid technological advances in the last 20 years have led to the exponential adoption of simulation in nursing education.

  • There is a paucity of evidence that neurosurgical knowledge and skills acquired through simulation training are retained over time and transferable to real-world tasks.

What this study adds.

  • Our findings suggest that instrument learning acquired through simulation training is transferred to knowledge of real surgical instruments, thus presenting a compelling model for surgical skills training.

  • The results have implications for simulation-based surgical instrument training for nurses, nursing students and other clinical staff.

Acknowledgments

We would like to acknowledge the following individuals for their support: Ron Hill, for the technical support for simulation platforms used in this study; Sandra Newton and Ginette Thibault-Halman (research nurses), for help with study implementation and data collection; Cindy Fulmore and Mary Cromwell (OR nurse educators) and Deborah Garnier (OR manager), for the coordination of the nurses’ schedules and overall support of this research study.

Footnotes

Contributors: DBC, AIG, NK, LF, MH and RCND contributed to study conception and design, acquisition of data and analysis of data. DBC, AIG, NK, MH and RCND were involved in drafting the manuscript. All authors approved the final version of the manuscript to be published. DBC and RCND are guarantors of the work.

Funding: This study was supported by Dalhousie University’s Brain Repair Center Knowledge Translation Grant (2015) awarded to Dr David Clarke.

Competing interests: DBC and RCND are members of the medical advisory board and stock option holders with Conquer Experience. The authors have no financial relationships with Conquer Experience that may have influenced the submitted work; specifically, the authors did not receive research funding or financial compensation from Conquer Experience for this study. Furthermore, study conception, study design, acquisition of data, statistical analysis, interpretation and drafting of the manuscript were performed independently by the research team without participation from Conquer Experience. No one from Conquer Experience has seen, or participated in, any part of the writing of the manuscript.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data availability statement: Data are available upon reasonable request.

REFERENCES

  • 1. McFetrich J, Price C. Simulators and scenarios: training nurses in emergency care. Med Educ 2006;40:1139. doi:10.1111/j.1365-2929.2006.02591.x
  • 2. Bland AJ, Topping A, Wood B. A concept analysis of simulation as a learning strategy in the education of undergraduate nursing students. Nurse Educ Today 2011;31:664–70. doi:10.1016/j.nedt.2010.10.013
  • 3. Hegland PA, Aarlie H, Strømme H, et al. Simulation-based training for nurses: systematic review and meta-analysis. Nurse Educ Today 2017;54:6–20. doi:10.1016/j.nedt.2017.04.004
  • 4. Barnett T, Cross M, Jacob E, et al. Building capacity for the clinical placement of nursing students. Collegian 2008;15:55–61. doi:10.1016/j.colegn.2008.02.002
  • 5. Hayden JK, Smiley RA, Alexander M, et al. The NCSBN national simulation study: a longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. J Nurs Regul 2014;5:S3–S40. doi:10.1016/S2155-8256(15)30062-4
  • 6. Kolawole B. International nurse migration to Canada: are we missing the bigger picture? Nurs Leadersh 2010;23:16–20. doi:10.12927/cjnl.2010.21829
  • 7. Fort C, Fitzgerald B. How simulation improves perioperative nursing. OR Nurse 2011;5:36–42. doi:10.1097/01.ORN.0000394311.52907.28
  • 8. Kim J, Park JH, Shin S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Educ 2016;16:152. doi:10.1186/s12909-016-0672-7
  • 9. Brunner WC, Korndorffer JR, Sierra R, et al. Laparoscopic virtual reality training: are 30 repetitions enough? J Surg Res 2004;122:150–6. doi:10.1016/j.jss.2004.08.006
  • 10. Scott DJ, Young WN, Tesfay ST, et al. Laparoscopic skills training. Am J Surg 2001;182:137–42. doi:10.1016/S0002-9610(01)00669-9
  • 11. Ackermann AD. Investigation of learning outcomes for the acquisition and retention of CPR knowledge and skills learned with the use of high-fidelity simulation. Clin Simul Nurs 2009;5:e213–22. doi:10.1016/j.ecns.2009.05.002
  • 12. Clarke DB, Kureshi N, Hong M, et al. Simulation-based training for burr hole surgery instrument recognition. BMC Med Educ 2016;16:153. doi:10.1186/s12909-016-0669-2
  • 13. Suresh KP. An overview of randomization techniques: an unbiased assessment of outcome in clinical research. J Hum Reprod Sci 2011;4:8. doi:10.4103/0974-1208.82352 [Retracted]
  • 14. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Int J Surg 2012;10:28–55. doi:10.1016/j.ijsu.2011.10.001
  • 15. McGaghie WC, Issenberg SB, Petrusa ER, et al. Effect of practice on standardised learning outcomes in simulation-based medical education. Med Educ 2006;40:792–7. doi:10.1111/j.1365-2929.2006.02528.x
  • 16. Miyasaka KW, Buchholz J, LaMarra D, et al. Development and implementation of a clinical pathway approach to simulation-based training for foregut surgery. J Surg Educ 2015;72:625–35. doi:10.1016/j.jsurg.2015.01.017
  • 17. Dawe SR, Pena GN, Windsor JA, et al. Systematic review of skills transfer after surgical simulation-based training. Br J Surg 2014;101:1063–76. doi:10.1002/bjs.9482
  • 18. Fujiwara S, Komasawa N, Okada D, et al. Simulation-based perioperative team training in the operating room. Jpn J Anesthesiol 2015;64:768–71.
  • 19. Komasawa N, Berg BW. Interprofessional simulation training for perioperative management team development and patient safety. J Perioper Pract 2016;26:250–3. doi:10.1177/175045891602601103
  • 20. Tubaishat A, Tawalbeh LI. Effect of cardiac arrhythmia simulation on nursing students’ knowledge acquisition and retention. West J Nurs Res 2015;37:1160–74. doi:10.1177/0193945914545134
  • 21. Aqel AA, Ahmad MM. High-fidelity simulation effects on CPR knowledge, skills, acquisition, and retention in nursing students. Worldviews Evid Based Nurs 2014;11:394–400. doi:10.1111/wvn.12063
