BMJ Simulation & Technology Enhanced Learning. 2017 Jul 6;3(3):111–115. doi: 10.1136/bmjstel-2016-000137

The Eyesi simulator in training ophthalmology residents: results of a pilot study on self-efficacy, motivation and performance

Robert PL Wisse 1, Tessa Coster 1, Marieke Van der Schaaf 2, Olle Ten Cate 3
PMCID: PMC8990180  PMID: 35518903

Abstract

Purpose

To describe ophthalmology residents’ motivation and self-efficacy during cataract surgery training and to assess how self-efficacy and motivation relate to both simulator (Eyesi) and real-life surgical performance.

Methods

Prospective cohort study using a within-subject design. Eight residents were asked to fill out questionnaires on self-efficacy and motivation towards the Eyesi simulator and real-life cataract surgery at three different moments. Simulator performance was derived from the instrument’s output. Patient charts were reviewed to assess real-life surgical performance.

Results

Comparative analysis, using a paired samples t-test, showed a significant increase in self-efficacy towards real-life cataract surgery after completing the cataract training on the simulator (p=0.005). Furthermore, we found a significant correlation between the total number of tasks needed to complete the cataract training and self-efficacy scored after working with the simulator (p=0.038). Motivation towards the simulator remained stable over time and did not appear to be influenced by simulator or real-life performance.

Conclusions

We found evidence that performance on the simulator correlated with residents’ self-efficacy scored after the simulator training, supporting the theory that self-efficacy is determined by prior performance. Self-efficacy seemed inversely related to the ease of completing a task: delivering greater effort appears to lead to more satisfaction and higher perceived self-efficacy towards that particular task. Future studies should include more subjects to provide more accurate insight into the role of self-efficacy and motivation in training complex surgical skills.

Keywords: Eyesi®, virtual reality, cataract training, self-efficacy, motivation

Introduction

Simulation plays an increasingly important role in preparing medical students and residents for professional practice.1 2 As technology moves forward, medical simulators improve and have become indispensable tools in the curricula of medical students and residents in particular. Simulators fill a gap between theory and performance on real patients.1 An important finding is that training on medical simulators appears to increase the self-efficacy3–6 and performance of residents.1 3 6 Moreover, patients seem more willing to let students and residents perform medical procedures on them after simulator training.7

The Eyesi (VRmagic GmbH, Tübingen, Germany) is a virtual reality platform used in the training of ophthalmology residents. By combining a binocular surgical microscope, bimanual instruments and a sophisticated artificial eye, the simulator constructs a virtual environment resembling real-life surgery. Several studies have investigated its construct validity8 9 and its beneficial effects on surgical performance, such as shorter operation duration,10 shorter phaco time, fewer intraoperative complications11 and better capsulorhexis.12 The Eyesi records a myriad of parameters, calculates a score and comes with a structured set of courses encompassing all relevant tasks in cataract surgery (Eyesi Courseware). All recorded data are stored on the device and can be extracted at any time for further analysis or for insight into the residents’ learning curve.13 The Eyesi simulator can be equipped with training modules for cataract and vitreoretinal surgery. In this study we focus on cataract surgery.

The processes of learning and performing complex skills in cataract surgery are influenced by many factors; two crucial ones are motivation and self-efficacy towards the task at hand. Motivation plays an important role in acquiring complex skills and affects learning, as shown by previous work linking motivation and cognitive performance.14–17 Another important factor is self-efficacy, defined by Bandura as ‘people’s beliefs in their capabilities to produce given attainments’.18 Bandura states that prior performances play a major role in developing self-efficacy. Building on the work of Bandura, Bernacki et al19 found evidence that an increase in self-efficacy is associated with better performance and learning. Furthermore, they reported that self-efficacy is initially based on recent improvements in performance, although over time more on the fluency of performing a task. In the specific case of surgical simulators, it is important to identify whether a device actually prepares learners for real-life surgery; a poor relationship between simulator performance and real-life clinical outcomes could be potentially harmful, especially if the learner has developed an unwarranted perception of competence.

Currently, the literature on simulator performance in ophthalmology largely focuses on clinical endpoints rather than on associations with the aforementioned psychological constructs. In this study, the relationship between self-efficacy, motivation and performance is investigated in both a simulator environment and in real-life surgery. Self-efficacy and motivation were measured with questionnaires, and the Eyesi simulator enables the study of various quantitative variables. Our interest focused on three determinants of performance: the final examination score, the total number of tasks and the total time needed to complete the training.

Research questions

  1. Does self-efficacy towards cataract surgery in ophthalmology residents increase after training on the Eyesi simulator?

  2. To what extent do self-efficacy and motivation relate to Eyesi simulator performance?

  3. Does self-efficacy towards cataract surgery predict simulator or real-life surgery performance?

Materials and methods

Subjects

From February 2014 onwards, completing the Eyesi training programme has been obligatory for ophthalmology residents at the University Medical Center Utrecht (The Netherlands) before they are allowed to perform live surgery. Since then, all eight residents entering cataract training have been included in this study. In the rotational residency programme, a new resident enters cataract training every 3 months; cataract surgical training commences in the third year of residency. All residents were naïve towards cataract surgery, meaning they had never practised on the Eyesi nor performed any cataract surgery. All residents were requested to fill out a set of questionnaires just prior to starting the cataract training on the simulator (T1), after completing the simulator training but prior to their first real-life surgeries (T2), and after they had performed 3 months of real-life cataract surgeries (T3). The questionnaires on self-efficacy and motivation were filled out at all three time points.

Instrumentation

Questionnaires

The self-efficacy and motivation measures were based on Bandura’s guide for constructing self-efficacy scales.18 Both scales used an 11-point response format ranging from 0 (not at all convinced of the statement) to 100 (completely convinced). Supervisors were blinded to the outcomes of the questionnaires to reduce observer bias. The questionnaires are available as online supplementary appendices.

Supplementary data

bmjstel-2016-000137.supp1.pdf (516.1KB, pdf)

Eyesi surgical simulator

The Eyesi surgical simulator was equipped with a training programme designed by the manufacturer. This Eyesi Courseware V.2.2 is a standardised training schedule containing four tiers of increasing difficulty: A, B, C and D. Every tier consists of several courses, and every course is divided into several tasks. Typically, a task has a set minimum score (eg, 80/100) which has to be achieved three times consecutively in order to pass to the next task. For our study, the residents had to complete the first three tiers of the training prior to starting their first real-life cataract surgeries. Data from tier D were not considered relevant for this study, as it contains courses in advanced surgical techniques of high difficulty. After completing the training, a standardised virtual surgery examination was taken, supervised by one staff member (RW). This examination represents the complete simulator experience and typically takes 20 min. The examination was subdivided into five tasks: (1) capsulorhexis, (2) hydrodissection, (3) sculpting, cracking and quadrant removal, (4) cortex removal and (5) lens placement. A minimum score of 60/100 per task was required to proceed; the total examination score therefore ranged from 300 to 500.
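For illustration, the Courseware pass rule described above (a set minimum score that must be achieved three times consecutively) can be sketched as follows. This is a simplified, hypothetical reconstruction, not the simulator’s actual software; the threshold and score list are illustrative only.

```python
# A minimal sketch (not VRmagic's software) of the Courseware pass rule described
# above: a task is passed once its minimum score has been reached three times in a row.
def passes_task(scores: list[int], minimum: int = 80, streak_needed: int = 3) -> bool:
    streak = 0
    for score in scores:
        streak = streak + 1 if score >= minimum else 0  # reset on any failed attempt
        if streak == streak_needed:
            return True
    return False

# Seven attempts at a hypothetical task with a minimum score of 80/100:
print(passes_task([70, 82, 85, 79, 81, 88, 90]))  # True: 81, 88 and 90 in a row
```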

Data were collected from the personal training reports and data output created by the simulator. The primary determinants were total surgical time, total number of attempts, the score of each course and the total score of the virtual surgery examination. In addition, the calculated scores on ‘tissue treatment’, ‘target achievement’, ‘efficiency’ and ‘instrument handling’ were collected, as the feedback system of the Eyesi is based on these four items.13

We used the following variables to indicate performance on the Eyesi: (1) the virtual surgery examination score at the end of the training, (2) the number of tasks needed to complete the training and (3) the total time needed to complete the training.

We argued that the examination score is a relevant variable, although it is measured only at one point in the cataract training. The two other variables represent performance throughout the course. Completion of one course is a prerequisite to open the next, and because a set minimum score needs to be achieved three times, the number of attempts was regarded as a proxy of simulator performance: the able resident will only need three tries, whereas the less-proficient colleague potentially needs many more tries to pass the bar (the average score of all tries is not reported by the Eyesi Courseware). Variables (2) and (3) therefore increase for the less able residents, as they need more tries to achieve acceptable scores. Residents are not aware of the performance outcomes of their peers, as there is no benchmark or insight into overall results. On the basis of the simulator data, we assumed that residents did not retake courses.
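To make the relation between the raw simulator output and the three performance variables concrete, the sketch below derives them from hypothetical per-attempt records. The data structure, field names and example values are assumptions for illustration, not the Eyesi’s actual export format.

```python
# A minimal sketch, assuming a hypothetical per-attempt export format, of how the
# three performance variables are derived: (1) examination score, (2) total number
# of task attempts in tiers A-C and (3) total training time.
from dataclasses import dataclass

@dataclass
class TaskAttempt:
    tier: str       # 'A', 'B', 'C' or 'D'
    task: str
    score: int      # 0-100
    minutes: float

def performance_summary(attempts: list[TaskAttempt], exam_scores: list[int]) -> dict:
    relevant = [a for a in attempts if a.tier in ("A", "B", "C")]  # tier D excluded
    return {
        "exam_score": sum(exam_scores),                  # five exam tasks, 60-100 each
        "total_tasks": len(relevant),                    # every attempt counts
        "total_time_min": sum(a.minutes for a in relevant),
    }

print(performance_summary(
    [TaskAttempt("A", "capsulorhexis", 85, 4.0),
     TaskAttempt("A", "capsulorhexis", 92, 3.5),
     TaskAttempt("A", "capsulorhexis", 88, 3.25)],
    exam_scores=[80, 75, 90, 85, 70]))
# {'exam_score': 400, 'total_tasks': 3, 'total_time_min': 10.75}
```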

Real-life cataract surgery data gathering

To determine the residents’ performance in real-life surgery, the following data were extracted from the electronic patient charts: duration of the surgery (min), complications during the surgery, complications within 1 month after the surgery and target refraction error. Complications recorded during the surgery were posterior capsule rupture (with or without corpus vitreous loss), a dropped nucleus, zonulolysis and rhexis deviation. The target refraction error was calculated as the difference between the target refraction and the spherical equivalent 1 month after surgery.

Data analysis

Questionnaires

Missing items in the questionnaires were imputed with the mean of that questionnaire’s known items so that these questionnaires could be included in the statistical analyses. Items were only imputed if less than 20% of the items of that questionnaire were missing; questionnaires with more than 20% missing items were excluded from the analyses. We used Cronbach's coefficient α to estimate the reliability of the questionnaires; a value >0.70 was considered acceptable.20 21 Changes in the residents’ self-efficacy and motivation scores between the measurement moments were tested with paired samples t-tests.
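A minimal sketch of this questionnaire analysis pipeline is shown below, assuming hypothetical CSV files with one row per resident and one column per item; it is not the authors’ actual code, and the file and column names are invented for illustration.

```python
# A minimal sketch (not the authors' code) of the questionnaire analysis: mean
# imputation for sparsely missing items, Cronbach's alpha and a paired samples
# t-test between two measurement moments. File and column layout are hypothetical.
import pandas as pd
from scipy import stats

def impute_with_row_mean(items: pd.DataFrame, max_missing: float = 0.20) -> pd.DataFrame:
    """Replace missing items with the respondent's own mean, provided less than
    `max_missing` of the items are missing; otherwise drop the questionnaire."""
    frac_missing = items.isna().mean(axis=1)
    kept = items[frac_missing < max_missing]
    row_means = kept.mean(axis=1)
    return kept.apply(lambda col: col.fillna(row_means))

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's coefficient alpha for one questionnaire."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Hypothetical files: one row per resident, one column per self-efficacy item.
t1 = impute_with_row_mean(pd.read_csv("self_efficacy_t1.csv", index_col="resident"))
t2 = impute_with_row_mean(pd.read_csv("self_efficacy_t2.csv", index_col="resident"))
print(f"Cronbach's alpha at T1: {cronbach_alpha(t1):.2f}")

# Paired samples t-test on residents with a valid questionnaire at both moments.
paired = pd.concat([t1.mean(axis=1), t2.mean(axis=1)], axis=1, join="inner").dropna()
t_stat, p_val = stats.ttest_rel(paired.iloc[:, 0], paired.iloc[:, 1])
print(f"T1 vs T2: t = {t_stat:.2f}, p = {p_val:.3f}")
```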

Combined analysis

To investigate potential correlations between self-efficacy, motivation towards the simulator, performance on the simulator and performance in real-life surgery, linear stepwise regression analyses were conducted. In these analyses the questionnaire outcomes were treated as independent predictors, and the determinants of surgical proficiency (examination score, number of tasks, total time) in both the simulator and the real-life environment were chosen as dependent variables. Because of the small sample size, the data were bootstrapped (1000 resamples) in all regression analyses. The p values reported in this article are those obtained after bootstrapping.
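The sketch below illustrates the bootstrapping step for a single predictor and a single outcome; it is a simplification of the stepwise procedure described above, and the example numbers are hypothetical rather than the study data.

```python
# A minimal sketch (not the authors' code) of bootstrapping a simple linear
# regression: the slope of one simulator outcome on one questionnaire score is
# re-estimated on 1000 resamples drawn with replacement. Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

def bootstrap_slope(x: np.ndarray, y: np.ndarray, n_boot: int = 1000):
    """Ordinary least squares slope with a 95% percentile bootstrap interval."""
    slope = stats.linregress(x, y).slope
    n = len(x)
    boot_slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)                # resample residents with replacement
        while len(np.unique(x[idx])) < 2:               # guard against a degenerate resample
            idx = rng.integers(0, n, size=n)
        boot_slopes[b] = stats.linregress(x[idx], y[idx]).slope
    lower, upper = np.percentile(boot_slopes, [2.5, 97.5])
    return slope, (lower, upper)

# Hypothetical example: self-efficacy at T2 (predictor) vs total tasks to complete the training.
self_efficacy_t2 = np.array([55.0, 62.0, 70.0, 74.0, 78.0, 81.0, 85.0, 90.0])
total_tasks = np.array([340.0, 390.0, 450.0, 470.0, 520.0, 560.0, 640.0, 740.0])
slope, ci = bootstrap_slope(self_efficacy_t2, total_tasks)
print(f"slope = {slope:.1f} tasks per point, 95% bootstrap CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```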

Results

Questionnaire outcome

Questionnaire quality assessment

The Cronbach's coefficients α at T1, T2 and T3 were between 0.75 and 0.90 for the self-efficacy questionnaire and between 0.73 and 0.90 for the motivation questionnaire. All eight eligible residents participated in this study, and we received 77 of the expected 88 questionnaires. A total of 10 items in 8 of those questionnaires were imputed (10% of all data points). Five of the received questionnaires were missing too many items and were therefore excluded from the analyses.

Self-efficacy

Self-efficacy towards cataract surgery increased after training with the Eyesi and was significantly higher than before training with the simulator (T1–T2: t(3)=7.398, p=0.005). Self-efficacy levels increased further after performing 3 months of real-life cataract surgery (T2–T3: t(4)=6.090, p=0.004) (figure 1).

Figure 1. Mean self-efficacy scores of each resident at the three different moments of measuring.

Motivation

Analysis of the motivation questionnaires showed little change over time in the residents’ motivation towards the simulator (mean: 8, range: 6.5–9 on an 11-point scale). No significant differences were identified (T1–T2: t(3)=1.547, p=0.220; T2–T3: t(5)=0.489, p=0.646).

Simulator performance

Outcomes of primary and secondary determinants are given in table 1. The secondary determinants were assessed in every single task.

Table 1.

Descriptive statistics of the variables from the Eyesi simulator (n=8)

Variable | Minimum | Maximum | Range | Mean | SD
Final examination score* | 365 | 477 | 112 | 423 | 36
Total time (min) | 343 | 1304 | 961 | 744 | 317
Total tasks | 323 | 745 | 422 | 527 | 172
Sum course scores† | 18 131 | 22 555 | 4424 | 21 286 | 1544
Tissue treatment† | 1.33 | 2.57 | 1.24 | 1.71 | 0.40
Target achievement† | 1.55 | 2.22 | 0.66 | 1.84 | 0.21
Efficiency† | 1.42 | 1.85 | 0.43 | 1.62 | 0.16
Instrument handling† | 1.67 | 2.90 | 1.23 | 2.90 | 0.42

*Score derived from the cataract examination, taken under supervision after completing the Eyesi Courseware.

†Scores derived from the cataract training on the Eyesi (Eyesi Courseware).

Variables associated with self-efficacy

A strong correlation was found between the total number of tasks needed to complete the Eyesi training programme and self-efficacy at T2 (r(5)=0.891, p=0.038). Remarkably, the residents who needed more attempts to complete the training on the simulator rated their self-efficacy higher. Other relationships between self-efficacy and simulator outcomes were assessed, but no relevant trends were identified. Correction for baseline self-efficacy (T1) did not materially alter these findings.

Variables associated with motivation

We identified a trend that higher motivation at T1 might be associated with a higher examination score (r(3)=0.936, p=0.052), although there is a gap of 3–6 months between these two measurements. Motivation at T1 did not correlate with total tasks or total time (r(3)=0.424, p=0.499 and r(3)=0.028, p=0.815, respectively), nor did performance on the simulator correlate with motivation scored at T2.

Real-life surgery performance

Variables associated with self-efficacy and motivation

For the analyses in this study, we used the mean surgery time as the primary determinant of real-life surgical performance. No analyses were performed with the other variables, as these all showed little variation and thus little discriminative power (see table 2). No relevant correlations with any primary or secondary outcome could be identified. Notably, the resident with the most complications (three during surgery and four within a month after surgery) reported the lowest self-efficacy at T3.

Table 2.

Descriptive statistics of the variables from real-life cataract surgery (n=6)

Variable | Minimum | Maximum | Range | Mean | SD
Mean surgery time (min) | 28.75 | 42.83 | 14.08 | 36.23 | 5.37
Surgery complications (n) | 0 | 3 | 3 | 1.3 | 1.2
Complications <1 month (n) | 0 | 4 | 4 | 0.8 | 1.6
Target refraction error | 0.30 | 0.57 | 0.27 | 0.37 | 0.11

Discussion

Our study showed that the self-efficacy of ophthalmology residents increases over the various stages of their cataract training. Self-efficacy significantly improved after training on the Eyesi (p=0.005). Furthermore, we found emerging evidence for an inverse relationship between the total number of tasks needed to complete the training and self-efficacy, where a more arduous simulator training corresponded with higher self-efficacy outcomes. Unfortunately, the predictive role of self-efficacy towards real-life surgical performance could not be assessed, mainly because the determinants of surgical performance showed hardly any variance.

Our findings regarding self-efficacy align with previous evidence that simulator training increases self-efficacy, extending the evidence for the benefits of using medical simulators in the training of residents.3–6 We hypothesised that prior performance influences self-efficacy,18 although our analysis identified an unexpected inverse correlation between simulator performance (ie, total number of tasks) and self-efficacy scored after training with the simulator: the residents who needed more attempts to complete the training on the simulator (ie, the poorer performing ones) showed a relatively larger increase in self-reported self-efficacy. One might conclude that this discrepancy between actual and perceived performance results from the experienced difficulty of the task rather than from the actual examination scores. In our study we could not establish that higher self-efficacy improves performance on the simulator or in real-life surgery. This is unfortunate, as precisely this aspect of emerging self-efficacy can be harmful if the resident has developed an unjustified perception of performance.22 These findings contrast with the study of Ahlborg et al,6 who found evidence for a correlation between higher self-efficacy and shorter laparoscopic operation duration in gynaecology residents.

Each resident’s motivation towards the simulator was rather high (mean of 8 on an 11-point scale) and remained stable over time. We therefore postulate that performance on the simulator or in real-life surgery did not influence their attitude towards the simulator or towards simulator training. On the basis of previous literature, we expected that a more positive motivation towards the simulator would correlate with better performance on the simulator.14 15 We did not see convincing signs of this relationship.

Our study has strengths and limitations. The questionnaires used in this study were based on published scales for assessing self-efficacy and motivation and seemed to be reliable instruments, based on the high Cronbach’s α coefficients obtained. The ophthalmologists were blinded to the outcomes in order to prevent prejudice arising from the questionnaires or the residents’ performance. This created a safe environment in which residents were free to report on their motivation and attitudes. The within-subject design enabled us to investigate the development of these attributes during the course of the cataract training programme. All residents were genuinely naïve towards cataract surgery and all were obliged to complete the Eyesi training. Finally, the Eyesi provides extensive reports on training outcomes and feedback, based on hundreds of tasks, enabling a solid judgement of performance.

A major limitation of our study is the low number of residents included (8), which impedes solid statistical analyses and conclusions. Residents enter our programme at a fixed pace, and increasing the study size would either take considerable time or necessitate collaboration with other training centres. Questionnaires were not always filled out completely, which left us with missing items. Some residents commented on why they did not fill out certain items, indicating that the instructions for filling out the questionnaires can be improved. A conservative imputation technique increased the number of valid questionnaires, but the regression analyses were nevertheless conducted on small samples. Bootstrapping was used to mitigate the statistical consequences of this, but our analyses could certainly have been distorted by outliers. Given the exploratory nature of this study, no correction for multiple testing was applied, as we aimed at identifying potential relationships. Including more residents in the future is needed to solidify our results. Finally, we did not include a control group for comparing the outcomes of the questionnaires; we used the residents as their own controls by comparing pretraining and post-training outcomes. The lack of correlation between simulator and real-life surgical performance hampers the generalisability of our results to the important domain of real-life surgical performance. This is in line with Pokroy et al,10 who reported no difference in surgical complications and overall surgical time with Eyesi training. Our selected determinants may be too coarse and of little discriminative power, so if true surgical performance is to be assessed, dedicated tools are required to measure the quality of surgery.12

Conclusion

In conclusion, training on the Eyesi cataract simulator increased the self-efficacy of ophthalmology residents towards real-life cataract surgery. Motivation towards the simulator remained stable over time and was not influenced by performance. The relationship between the total number of tasks, as a proxy of simulator performance, and higher perceived self-efficacy was remarkable. Follow-up of these psychological constructs in relation to performance in learning cataract surgery in a larger sample might solidify our initial findings and provide insights to further improve the cataract training curriculum.

Footnotes

Contributors: TC collected all data, conducted the study and wrote the manuscript.

RPLW planned the study, made the questionnaires and received them (blinded) after the participants filled them in, reviewed the manuscript and submitted the manuscript.

MVDS reviewed the manuscript.

OTC reviewed the manuscript.

Competing interests: None declared.

Ethics approval: An explicit informed consent of study participation was not deemed necessary by the Ethical Review Board of the UMC Utrecht since the Medical Research Involving Human Subjects Act (WMO) does not apply.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

  • 1. Okuda Y, Bryson EO, DeMaria S, et al. The utility of simulation in medical education: what is the evidence? Mt Sinai J Med 2009;76:330–43. 10.1002/msj.20127 [DOI] [PubMed] [Google Scholar]
  • 2. McGaghie WC, Issenberg SB, Barsuk JH, et al. A critical review of simulation-based mastery learning with translational outcomes. Med Educ 2014;48:375–85. 10.1111/medu.12391 [DOI] [PubMed] [Google Scholar]
  • 3. Surcouf JW, Chauvin SW, Ferry J, et al. Enhancing residents' neonatal resuscitation competency through unannounced simulation-based training. Med Educ Online 2013;18:18726–7. 10.3402/meo.v18i0.18726 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Dayal AK, Fisher N, Magrane D, et al. Simulation training improves medical students' learning experiences when performing real vaginal deliveries. Simul Healthc 2009;4:155–9. 10.1097/SIH.0b013e3181b3e4ab [DOI] [PubMed] [Google Scholar]
  • 5. Paskins Z, Peile E. Final year medical students' views on simulation-based teaching: a comparison with the best evidence medical education systematic review. Med Teach 2010;32:569–77. 10.3109/01421590903544710 [DOI] [PubMed] [Google Scholar]
  • 6. Ahlborg L, Hedman L, Nisell H, et al. Simulator training and non-technical factors improve laparoscopic performance among OBGYN trainees. Acta Obstet Gynecol Scand 2013;92:1194–201. 10.1111/aogs.12218 [DOI] [PubMed] [Google Scholar]
  • 7. Graber MA, Wyatt C, Kasparek L, et al. Does simulator training for medical students change patient opinions and attitudes toward medical student procedures in the emergency department? Acad Emerg Med 2005;12:635–9. 10.1111/j.1553-2712.2005.tb00920.x [DOI] [PubMed] [Google Scholar]
  • 8. Privett B, Greenlee E, Rogers G, et al. Construct validity of a surgical simulator as a valid model for capsulorhexis training. J Cataract Refract Surg 2010;36:1835–8. 10.1016/j.jcrs.2010.05.020 [DOI] [PubMed] [Google Scholar]
  • 9. Selvander M, Asman P. Cataract surgeons outperform medical students in eyesi virtual reality cataract surgery: evidence for construct validity. Acta Ophthalmol 2013;91:469–74. 10.1111/j.1755-3768.2012.02440.x [DOI] [PubMed] [Google Scholar]
  • 10. Pokroy R, Du E, Alzaga A, et al. Impact of simulator training on resident cataract surgery. Graefes Arch Clin Exp Ophthalmol 2013;251:777–81. 10.1007/s00417-012-2160-z [DOI] [PubMed] [Google Scholar]
  • 11. Belyea DA, Brown SE, Rajjoub LZ. Influence of surgery simulator training on ophthalmology resident phacoemulsification performance. J Cataract Refract Surg 2011;37:1756–61. 10.1016/j.jcrs.2011.04.032 [DOI] [PubMed] [Google Scholar]
  • 12. McCannel CA, Reed DC, Goldman DR. Ophthalmic surgery simulator training improves resident performance of capsulorhexis in the operating room. Ophthalmology 2013;120:1–6. 10.1016/j.ophtha.2013.05.003 [DOI] [PubMed] [Google Scholar]
  • 13. VRmagic. Eyesi surgical. https://www.vrmagic.com/simulators/eyesi-surgical/.
  • 14. Miller EM, Walton GM, Dweck CS, et al. Theories of willpower affect sustained learning. PLoS One 2012;7:e38680. 10.1371/journal.pone.0038680 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15. Colquitt JA, LePine JA, Noe RA. Toward an integrative theory of training motivation: a meta-analytic path analysis of 20 years of research. J Appl Psychol 2000;85:678–707. 10.1037/0021-9010.85.5.678 [DOI] [PubMed] [Google Scholar]
  • 16. Deci EL. Intrinsic motivation. New York: Plenum Press, 1975. [Google Scholar]
  • 17. Bandura A. Self-efficacy mechanism in human agency. American Psychologist 1982;37:122–47. 10.1037/0003-066X.37.2.122 [DOI] [Google Scholar]
  • 18. Bandura A. Self-Efficacy: the exercise of control. 10th edn. New York: W.H. Freeman and Company, 1997. [Google Scholar]
  • 19. Bernacki ML, Nokes-Malach TJ, Aleven V. Examining self-efficacy during learning: variability and relations to behavior, performance, and learning. Metacogn Learn 2015;10:99–117. 10.1007/s11409-014-9127-x [DOI] [Google Scholar]
  • 20. Pajares F, Urdan T. Self-Efficacy beliefs of adolescents. Greenwich, Connecticut: IAP - Information Age Publishing, 2006. [Google Scholar]
  • 21. Tavakol M, Dennick R. Making sense of cronbach's alpha. Int J Med Educ 2011;2:53–5. 10.5116/ijme.4dfb.8dfd [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Nishisaki A, Keren R, Nadkarni V. Does simulation improve patient safety? Self-efficacy, competence, operational performance, and patient safety. Anesthesiol Clin 2007;25:225–36. 10.1016/j.anclin.2007.03.009 [DOI] [PubMed] [Google Scholar]
