Author manuscript; available in PMC: 2022 Feb 1.
Published in final edited form as: Surg Endosc. 2020 Feb 18;35(2):779–786. doi: 10.1007/s00464-020-07447-1

Virtual Reality Operating Room with AI Guidance: Design and Validation of a Fire Scenario

Di Qi 1, Adam Ryason 1, Nicholas Milef 1, Samuel Alfred 1, Mohamad Rassoul Abu-Nuwar 2, Mojdeh Kappus 2, Suvranu De 1, Daniel B Jones 2
PMCID: PMC7431365  NIHMSID: NIHMS1563489  PMID: 32072293

Abstract

Background:

Operating room (OR) fires are uncommon but disastrous events. Inappropriate handling of an OR fire can result in injuries, even death. Aiming to simulate OR fire emergencies and effectively train clinicians to react appropriately, we have developed an artificial intelligence (AI)-based OR fire virtual trainer grounded in the principle of the “fire triangle” and the SAGES FUSE curriculum. The simulator can predict the user’s actions in the virtual OR and provide timely feedback to assist with training. We conducted a study investigating the validity of the AI-assisted OR fire trainer at the 2019 SAGES Learning Center.

Methods:

53 participants with varying medical experience were voluntarily recruited for our Institutional Review Board-approved study. All participants were asked to contain a fire within the virtual OR. Participants were then asked to fill out a 7-point Likert questionnaire of ten questions regarding the face validity of the AI-assisted OR fire simulator. Shapiro-Wilk tests were conducted to assess the normality of the scores for each trial. A Friedman’s ANOVA with post-hoc tests was used to evaluate the effect of multiple trials on performance.

Results:

On a 7-point scale, eight of the ten questions were rated a mean of 6 or greater (72.73%), especially those relating to the usefulness of the simulator for OR fire-containment training. 79.25% of the participants rated the usefulness of AI guidance at least 6 out of 7. Individual performance improved significantly over the 5 trials, χ²(4) = 119.89, p < .001, with a significant linear trend, r = .97, p = .006. A pairwise analysis showed that performance improved significantly only after the introduction of AI.

Conclusions:

The AI-guided OR fire trainer offers the potential to assess OR personnel and teach the proper response to an iatrogenic fire scenario in a safe, repeatable, immersive environment.

Keywords: OR fire, virtual reality, artificial intelligence, validation, medical training

Introduction

Operating room (OR) fires are devastating and can result in patient burns and other severe injuries, or even death. Most OR fires occur in environments where flammable inhalational anesthetics are used in large amounts or the oxygen concentration exceeds 30 percent [1]. The Emergency Care Research Institute (ECRI) estimates that approximately 550 to 650 OR fires occur each year in the United States, making them as frequent as other surgical mishaps, including wrong-site surgery [2]. The importance of preventing OR fires has been recognized by national organizations, which have issued responses and strategies for fire management. The American Society of Anesthesiologists (ASA) has issued an updated practice advisory for the prevention and management of OR fires [3]. The Association of periOperative Registered Nurses (AORN) “Guideline for a safe environment of care” recommends that all OR personnel receive education about the safety protocols for various fire scenarios [4]. The Joint Commission and the Food and Drug Administration (FDA) have initiated a fire prevention task force to reduce surgical fires [1]. The Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) has established the Fundamental Use of Surgical Energy (FUSE) curriculum to promote OR fire safety, including recommendations for safely using medical devices during surgery [5]. One major component of OR fire management is to train OR personnel effectively and efficiently to respond properly to such hazardous, high-stress situations and contain the fire.

Due to the relatively low frequency of OR fire events, simulation-based training offers an ideal educational platform, allowing trainees to acquire knowledge and engage in sustained deliberate practice in a low-risk environment. Conducting fire drills is one method to train OR teams to respond appropriately to an emergency fire. However, regulations about the use of open flame or smoke can make it challenging to recreate realistic scenarios. Moreover, fire drills usually require the mobilization of resources and may disrupt the normal functioning of the drill locations, leading to financial costs. Virtual reality (VR) technology has rapidly evolved, bringing widespread applications due to its adaptability to different problems and domains. VR simulation has become an important component of medical training curricula, as it allows clinicians to practice their professional skills in a safe, repeatable, and fully immersive virtual environment. More importantly, trainees can make mistakes with no risk to patients and learn from those mistakes, making VR ideal for medical training, especially for emergency situations. VR simulation training has also demonstrated skill transfer to various real-world surgical scenarios [6–8]. A recent study found that training in an immersive VR environment improves performance when tested in a mock OR fire situation [9].

However, simply providing trainees with access to simulators does not guarantee effective learning. A study of a colonoscopy simulator examined the impact of feedback on the learning curve and found no improvement during training in the absence of feedback [10]. To be effective, simulation should offer constructive and timely feedback and assistance to trainees to facilitate the acquisition of proper knowledge and skill, as one might receive from a human preceptor [11]. The simplest way to provide training guidance is through a rule-based approach [12], such as the “follow-me” method (a ghost operation recorded by an expert) [13] or step-by-step instruction [14] in the context of surgical simulation. However, for those who are unfamiliar with the procedure, following the pace of the ghost expert can be difficult. Most importantly, guidance should be adapted to the trainee’s performance and delivered in a timely manner, as delayed feedback can lead to confusion or even failure. This requires the simulator to adapt to the trainee’s needs, specifically to predict their next move and prompt timely feedback before they commit an error. This is especially important for training in emergency cases such as OR fire crises, where fast and appropriate decision-making and responses are a necessity. With the advancement of artificial intelligence (AI) technology in recent years, intelligent agents play an increasingly important role in offering trainee-oriented, individually adaptive training content [15–17]. There is rapidly growing interest in the use of intelligent agents in virtual simulation across applications and training purposes [17–19]. However, none of these address clinical emergency scenarios.

Aiming to simulate OR fire emergencies and effectively train clinicians to react properly, we have developed a new AI-assisted OR fire virtual simulator based on the principle of the “fire triangle” and the SAGES FUSE curriculum. The intelligent agents, collectively named the Virtual Intelligent Preceptor (VIP), can make early predictions about the user’s intention and provide real-time feedback to assist with training. The present study aims to investigate evidence of validity of the AI-guided OR fire simulator while acquiring meaningful feedback regarding its potential use to assess OR personnel and teach the proper responses to an iatrogenic fire scenario.

Material and Methods

Participants

We conducted this study at the Learning Center of the 2019 annual conference of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES). The study was approved by the Institutional Review Board at Rensselaer Polytechnic Institute. 53 participants with varied clinical and FUSE training experience were voluntarily recruited to perform the fire-containment task in a virtual OR using our AI-based OR fire simulator. All participants were informed about the purpose of the study and gave informed consent prior to participating. The demographic data are shown in Table 1.

Table 1.

Demographic data

Age, average (range) 38.5 (23–75)
Sex, Female: Male 14:39
Corrected vision, Yes: No 30:23
Medical experience 18 Attendings, 3 Fellows, 20 Residents, 12 Others
FUSE experience, Yes: No 20:30 (3 missing responses)
VR simulation, Yes: No 13:36 (4 missing responses)
Video game experience, Yes: No 17:33 (3 missing responses)

The AI-assisted OR fire trainer

The AI-assisted OR fire trainer was developed on an open-source platform, the Interactive Medical Simulation Toolkit (iMSTK) [20]. We utilized iMSTK’s Vulkan renderer [21] to render the VR environment with fire and smoke effects. The simulator consists of a desktop computer (Intel Core i7-6850K, 3.6 GHz CPU, 16 GB RAM) and an HTC Vive VR system [22], which includes a head-mounted display (HMD) and a pair of hand-held controllers. To allow the user to move freely in the virtual OR, a wireless adapter [23] was installed on the HMD to wirelessly transmit the user’s motion data to the computer. The 6-degree-of-freedom (DOF) position/orientation data of the user’s HMD and hand-held controller are tracked by the VR system and transferred to the simulator to support interactions. As illustrated in Figure 1, each participant wore an HMD and held a Vive controller to interact with the virtual equipment in the OR.

Figure 1:


The AI-guided OR fire trainer equipped with wireless VR headset and controller; a participant performing the OR fire-containment task at the 2019 SAGES Learning Center.

• Virtual operating room with fire scenario

The AI-assisted OR fire training system is a fully immersive interactive VR simulator designed to train clinicians to contain an OR fire by following the principle of the fire triangle and the FUSE curriculum. The virtual OR (see Figure 1) contains an anesthesia unit, an electrosurgery unit, a laparoscopic tower, a surgical cabinet, a ceiling-mounted surgical lamp, a carbon dioxide (CO2) fire extinguisher, and a patient lying on an operating table with a face mask connected to the ventilator. When the simulator starts, the surgical drape on the patient’s body has already caught fire, and fire alarms sound to increase the user’s psychological stress. The simulation task is to put out the fire as quickly as possible by following the correct sequence of critical steps determined by the FUSE guideline. To interact with a virtual object in the OR, for example the fire extinguisher, the user needs to walk near it, use the controller to pick it up, and hold the trigger button on the controller to release the extinguishing agent.

• Virtual Intelligent Preceptor (VIP) for virtual training

To provide timely guidance that adapts to the user’s performance and need, a new AI paradigm for virtual medical training, namely the Virtual Intelligent Preceptor (VIP), has been developed and integrated into the OR fire simulator to provide timely warning to the user if a wrong action is predicted (see the red cross sign in Figure 1). VIP consists of three collaborative intelligent agents to evaluate and train the learner using VR simulation:

  • An expert agent: Presents the correct action based on the current simulator status. For example, when the user has turned off the gas from the anesthesia workstation, the expert agent indicates that the face mask should be removed as well in order to completely terminate the gas flow to the patient’s airway. Since the domain knowledge for this task has been clearly identified by the FUSE guideline [24] and is relatively straightforward, the expert agent is implemented as a rule-based system [12] in the current simulator.

  • A learner agent: Represents the learner in the virtual simulator and can make an early prediction about their intention (i.e., the next object to select) based on historical simulation data. The learner agent in the current simulator is based on Hidden Markov Models trained on real human body motion data captured by the same VR simulator in a pilot study conducted at Rensselaer Polytechnic Institute, in which 58 undergraduate and graduate students were enrolled.

  • A tutor agent: Provides timely guidance to the user to improve their performance. Instead of constantly prompting guidance to the user regardless of their needs for help, the tutor agent determines whether to provide feedback to the user based on the input of both the expert agent and learner agent. If the user’s action predicted by the learner agent conflicts with the action suggested by the expert agent, the tutor agent will warn the user by projecting a warning sign (see the red cross sign in Figure 1) on top of the object that the user intends to select.
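The interplay of the three agents can be sketched as follows. This is a minimal illustration, not the actual implementation: the action names, the simplified step ordering, and the naive stand-in for the HMM-based learner agent are all assumptions.

```python
# Hypothetical action identifiers for the fire-containment task.
CORRECT_SEQUENCE = ["turn_off_anesthesia", "remove_mask",
                    "remove_drape", "extinguish_fire"]

def expert_agent(actions_done):
    """Rule-based expert: the next correct step given the steps already done.
    (For simplicity this ignores that the first two steps are interchangeable.)"""
    remaining = [a for a in CORRECT_SEQUENCE if a not in actions_done]
    return remaining[0] if remaining else None

def learner_agent(actions_done):
    """Stand-in for the HMM-based predictor of the user's next action.
    Here it naively assumes a novice reaches for the extinguisher first."""
    return "extinguish_fire" if not actions_done else expert_agent(actions_done)

def tutor_agent(actions_done):
    """Warn only when the predicted action conflicts with the expert's action."""
    predicted, correct = learner_agent(actions_done), expert_agent(actions_done)
    if predicted is not None and predicted != correct:
        return f"warning: predicted '{predicted}', expected '{correct}'"
    return None  # no feedback needed
```

In this sketch, `tutor_agent([])` flags the predicted premature reach for the extinguisher, while `tutor_agent(["turn_off_anesthesia"])` stays silent because prediction and expert advice agree.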

• Simulation task

OR fires can occur at any time when all three elements of the fire triangle are present: 1) Oxidizer (e.g., oxygen), 2) Ignition source (e.g., electrosurgical units), and 3) Fuel source (e.g., surgical drape, patient’s skin or hair). Elimination of these fire triangle elements by following the correct sequence is the major training task in the simulator. In order to successfully extinguish the fire, the participant should perform the following steps in sequence, as depicted in Figure 2. Based on the currently established FUSE protocol, when a fire occurs in an OR, the OR team should first terminate all gas flow to disconnect the patient from the anesthesia workstation, which comprises two steps: turning off the anesthesia machine and removing the face mask from the patient’s face; the order of these two steps can be exchanged in the simulator. After the gas flow to the patient’s airway is terminated, the next step is to remove all the burning material on the patient’s body and then use the fire extinguisher to put out the fire on the burning surgical drape.

Figure 2:


Workflow of the fire containing task.

• Automated performance assessment

As the participants performed a task, the simulator automatically calculated their task performance, which was evaluated based on the sequence of steps taken to contain the fire, as depicted in Figure 2. Participants who conducted all the steps in the correct sequence without any additional actions passed the task. Missing any of the steps or mistaking the order led to task failure. To analyze the simulation performance of the participants with respect to the number of trials they conducted, each participant’s performance was evaluated with a score ranging from 0 to 4 based on the fire-containment sequence. Any score below 4 was considered failing. The score was calculated based on the following algorithm:

  • If the participant turned off the anesthesia unit or removed the mask first (1 point)

  • If the participant removed the mask following turning off the anesthesia unit or vice versa (1 point)

  • If the participant removed the burning surgical drape following removing the mask or turning off the anesthesia unit (1 point)

  • If the participant extinguished the fire following removing the burning surgical drape (1 point)
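Expressed as code, the rubric above might look like the following sketch; the action-name strings are hypothetical encodings rather than the simulator’s actual identifiers:

```python
GAS_STEPS = {"turn_off_anesthesia", "remove_mask"}  # interchangeable first pair

def score_trial(actions):
    """Score one trial from 0 to 4; any score below 4 is a failure."""
    score = 0
    if actions and actions[0] in GAS_STEPS:
        score += 1  # either gas-flow step performed first
        if len(actions) > 1 and actions[1] in GAS_STEPS - {actions[0]}:
            score += 1  # the other gas-flow step performed second
            if len(actions) > 2 and actions[2] == "remove_drape":
                score += 1  # burning drape removed after gas flow stopped
                if len(actions) > 3 and actions[3] == "extinguish_fire":
                    score += 1  # fire extinguished last
    return score
```

A fully correct sequence, in either order of the two gas-flow steps, scores 4, whereas grabbing the extinguisher first scores 0.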

Procedure

Prior to conducting the simulation task, participants were asked to fill out a questionnaire about their demographics, medical experience, FUSE experience, and VR simulation experience, as shown in Table 1. The participants were then given a verbal description of the goal of the task (i.e., extinguish the OR fire by following the correct sequence) and of how to operate the VR simulator. They could perform one practice trial to become familiar with the setup of the simulator. Once the practice was over, each participant was asked to contain the OR fire within a minimum of 3 and a maximum of 5 trials as a formal test. AI guidance was turned on starting from the third trial; no guidance was provided in the first two trials. Participants were then asked to fill out a questionnaire consisting of ten questions, rated from 1 (not realistic/useful) to 7 (very realistic/useful), regarding the face validity of the AI-assisted OR fire simulator.

Statistical analysis

All statistics were generated using IBM SPSS, and plots were generated using the Python libraries Seaborn and Matplotlib [25]. The face validity questions were analyzed by calculating the mean, standard deviation (SD), and percentage of responses greater than or equal to 6 on the 7-point scale.

The performance score data were checked for normality by performing a Shapiro-Wilk test on each trial. Given the nature of our study, the data were unlikely to be normally distributed, therefore requiring a non-parametric test. The main effect of trial number on performance was measured with a Friedman test. After testing the main effect, pairwise comparisons were conducted between adjacent trials to determine at which point the participants’ scores changed significantly. The pairwise comparisons were adjusted with a Bonferroni correction due to the multiple tests conducted.
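The analysis pipeline above can be sketched with SciPy. The scores below are fabricated for illustration (the study analyzed 44 participants over 5 trials; 8 are shown here), and the exact SPSS options may differ from these defaults.

```python
import numpy as np
from scipy import stats

# Fabricated 0-4 scores: rows = 8 participants, columns = trials 1-5.
scores = np.array([
    [1, 2, 3, 4, 4],
    [0, 1, 2, 4, 4],
    [2, 2, 3, 3, 4],
    [1, 1, 4, 4, 4],
    [0, 2, 2, 3, 4],
    [1, 2, 4, 4, 3],
    [2, 1, 3, 4, 4],
    [0, 0, 2, 3, 4],
])

# 1) Shapiro-Wilk normality check for each trial.
normal = [stats.shapiro(scores[:, t]).pvalue > .05 for t in range(5)]

# 2) Friedman test: non-parametric main effect of trial number
#    (repeated-measures analogue of a one-way ANOVA).
chi2, p_main = stats.friedmanchisquare(*[scores[:, t] for t in range(5)])

# 3) Wilcoxon signed-rank tests between adjacent trials,
#    Bonferroni-corrected for the 4 comparisons.
pairwise = []
for t in range(4):
    _, p = stats.wilcoxon(scores[:, t], scores[:, t + 1])
    pairwise.append(min(p * 4, 1.0))
```

With ordinal 0-4 scores and small per-trial samples, the non-parametric Friedman/Wilcoxon route is the standard choice once Shapiro-Wilk rejects normality.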

Results

Face validation

Post-task questionnaire results are shown in Table 2. On a 7-point scale, eight of the ten questions were rated a mean of 6 or greater (72.73%), especially those relating to the usefulness and visual/motion comfort of the simulator for OR fire-containment training. 84.91% of participants reported a better understanding of OR fire containment through the use of the simulator. The highest rating (mean: 6.78) was assigned to the preference for using the simulator to learn about the OR fire-containment procedure, and 92.45% of the participants chose to learn about this topic from the VR simulator over textbooks. 84.91% of the participants rated the degree of overall realism (looks and feels) of the simulator at least 5 out of 7, of whom 64.44% rated this question at least 6. Some participants offered suggestions about adding more complex or difficult OR fire scenarios and further improving the realism of the simulator, for example by adding cinematic elements such as yelling, as stress can change reactions in such an emergency scenario.

Table 2.

Post-task questionnaire

Questions Rating from 1 – 7
Mean SD >=6
1. I feel I have a better understanding of OR fire training through the simulator (1: Don’t agree ~ 7: Agree) 6.44 0.97 84.91%
2. Using the simulator to learn about this topic is more enjoyable than just using textbooks (1: Don’t agree ~ 7: Agree) 6.78 0.63 92.45%
3. If the simulator was available to me in my skills lab, I would use it (1: Don’t agree ~ 7: Agree) 6.40 1.35 84.91%
4. Please rate the degree of overall realism of the simulation (how it looks and feels), compared to the corresponding real-life scenario (1: Not realistic ~ 7: Very realistic) 5.83 1.18 54.72%
5. Please rate the degree of overall usefulness of AI-generated guidance of the simulation (1: Not useful ~ 7: Very useful) 6.31 0.98 79.25%
6. I feel I expect more guidance from the simulator (1: Don’t agree ~ 7: Agree) 4.94 1.99 45.28%
7. How much did the visual aspects of the environment involve you (1: Low ~ 7: High) 6.20 0.99 77.36%
8. How compelling was your sense of moving around inside the virtual environment (1: Low ~ 7: High) 6.44 0.98 84.91%
9. How quickly did you adjust to the virtual environment experience (1: Low ~ 7: High) 6.11 0.97 80.00%
10. How proficient in moving and interacting with the virtual environment did you feel at the end of the experience (1: Low ~ 7: High) 6.20 0.92 76.00%

79.25% of the participants rated the degree of overall usefulness of AI guidance at least 6 out of 7. On the question of whether participants expected more AI guidance in the simulator to assist with training, 45.28% agreed that they needed more assistance (mean: 4.94). Some participants commented that guidance indicating correct actions would be more helpful for training than warnings about wrong actions.

Figure 3 shows the ratings related to the realism and usefulness of the simulator across attendings (18) and non-attendings (35). We combined the questions (7–10) related to visual appeal and motion comfort of the simulator into a single item in the radar plot. Attendings generally rated the simulator higher than non-attendings for all the questions in the plot, especially the usefulness of AI guidance, for which the mean score of attendings was 6.65 versus 5.97 for non-attendings. Regarding the overall realism of the simulator, attendings and non-attendings gave similar ratings, 6.03 and 5.97 respectively.

Figure 3:


Questionnaire responses across attendings and non-attendings.

Simulator performance for repeated trials

To investigate the training effectiveness of the AI-guided VR simulator, we analyzed the simulation scores with respect to repeated trials. 44 of the 53 participants were included in this portion of the analysis, as we only included participants who completed at least 3 trials. Participants were not required to do a 4th or 5th trial if they had received a perfect score on a previous trial. Of the 44 participants, only 3 (7%, Median=1) were able to put the fire out following the correct procedure (Score=4) in the first trial, and 9 (20%, Median=2) in the second trial. With the introduction of AI guidance from Trial 3 onward, the number of successful trials rose to 18 (41%, Median=2.5) and 26 (59%, Median=4) for Trials 3 and 4 respectively. By the fifth trial, 28 (64%, Median=4) participants received full points for extinguishing the fire. Figure 4 shows a boxplot of the participant scores for each trial.

Figure 4:


Boxplot of participant score for each OR Fire trial. Trials 1 & 2 were completed without AI-guided assistance while Trials 3, 4 and 5 had AI-guidance. The simulator score for each participant improved significantly in Trial 3 when AI-guidance was introduced for the first time.

Shapiro-Wilk tests for each of the trials indicate that the participant scores do not follow a normal distribution, p<.05. Using a Friedman’s ANOVA, we found that the simulator score changed significantly over the 5 trials, χ²(4)=119.89, p<.001. Specifically, a pairwise analysis showed that the score only changed significantly when AI guidance was introduced for the first time in Trial 3, compared to the previous trial when no AI guidance was provided, T=−1.068, r=−.34, p=.015. The simulation score did not change significantly between the first two trials without AI, or between the third, fourth, and fifth trials, all of which had AI guidance. There was a significant linear trend, r=.97, p=.006, indicating that as an individual performed more trials, their score on the simulator increased proportionally.

Discussion

The major goal of this study was to investigate the validity of an AI-guidance-enabled VR simulator for an OR fire scenario. Based on our results, face validity was established for many aspects of the simulator, and the AI guidance demonstrated the potential to further enhance the training effectiveness of the VR simulator and improved users’ performance on the simulation.

Since this study focused on the design and face validation of an AI-assisted OR fire simulator, all participants underwent the same procedure, i.e., containing the OR fire within a minimum of 3 and a maximum of 5 trials, with AI guidance enabled starting from the third trial. We started AI guidance at Trial 3 because we wanted to test how many participants could complete the task with no simulation training (i.e., pass at Trial 1) and how many could complete the task with the simulator alone, without any AI guidance (i.e., pass at Trial 2). Future studies will further investigate the training effectiveness of AI guidance by adding a control group trained without AI guidance for the entire training and comparing their performance with participants trained with AI guidance. Moreover, it would be helpful to compare the performance and error rates between OR personnel of different surgical roles, such as surgeons, anesthesiologists, and nurses.

In the current simulator, we simulated a straightforward OR fire scenario that follows the SAGES FUSE curriculum. A next step will be to increase the complexity, for example by introducing different patient-dependent and non-patient-dependent factors as described by Jones et al. [26]. Part of increasing the complexity will be to introduce items that could negatively affect the outcome, such as fire blankets, which can concentrate both heat and oxygen on the patient [27]. Our simulator also provides only a fire extinguisher as the means of extinguishing the fire, which is exceptionally rare in practice; other means, such as saline, should therefore be provided in the simulator. We also plan to increase the fidelity of the simulator with improved visual and sound effects to properly replicate the operating room environment.

An additional limitation is that the simulator is a single-user platform. A further technical advancement currently being investigated by our group is to allow multiple users in different OR roles, such as surgeon, anesthesiologist, and nurse, to be immersed in the same virtual environment and interact with one another simultaneously to perform the fire-containment task collaboratively.

Conclusion

Simulators with AI assistance offer the potential to assess knowledge and augment learning in a safe, repeatable, immersive environment. The validity of our AI-guided OR fire simulator was established especially on its usefulness and effectiveness for OR fire training. We plan to use the simulator to assess OR personnel and teach the proper response to an iatrogenic fire scenario.

Acknowledgments

The research reported in this article was supported by the NIH/NIBIB under Award Numbers 2R01EB005807, 5R01EB010037, 1R01EB009362, 1R01EB014305, and 1R01EB025241; NIH/NHLBI under Award Number 5R01HL119248; NIH/NCI under Award Number 1R01CA197491; and NIH under Award Number R44OD018334.

Footnotes

Publisher's Disclaimer: This Author Accepted Manuscript is a PDF file of an unedited peer-reviewed manuscript that has been accepted for publication but has not been copyedited or corrected. The official version of record published in the journal is kept up to date and may therefore differ from this version.

Disclosures

Drs. Di Qi, Adam Ryason, Nicholas Milef, Samuel Alfred, Mohamad Rassoul Abu-Nuwar, Mojdeh Kappus, and Suvranu De have no conflicts of interest or financial ties to disclose. Dr. Daniel B. Jones has no relevant conflicts related to this manuscript and is on the advisory board of Allurion Technologies Inc.

References

  • 1. Joint Commission on Accreditation of Healthcare Organizations (2003) Preventing surgical fires. Sentinel Event Alert 29:1.
  • 2. Connor MA, Menke AM, Vrcek I, Shore JW (2018) Operating room fires in periocular surgery. Int Ophthalmol. doi: 10.1007/s10792-017-0564-9
  • 3. Apfelbaum JL, Caplan RA, Barker SJ, Connis RT, Cowles C, de Richemond AL, Ehrenwerth J, Nickinovich DG, Pritchard D, Roberson DW, Wolf GL (2013) Practice advisory for the prevention and management of operating room fires. Anesthesiology. doi: 10.1097/aln.0b013e31827773d2
  • 4. Hauk L (2018) Guideline for a safe environment of care. AORN J 108:P10–P12. doi: 10.1002/aorn.12380
  • 5. Madani A, Jones DB, Fuchshuber P, Robinson TN, Feldman LS (2014) Fundamental Use of Surgical Energy™ (FUSE): a curriculum on surgical energy-based devices. Surg Endosc 28:2509–2512. doi: 10.1007/s00464-014-3623-6
  • 6. Seymour NE, Gallagher AG, Roman SA, O’Brien MK, Bansal VK, Andersen DK, Satava RM, Pellegrini CA, Sachdeva AK, Meakins JL, Blumgart LH (2002) Virtual reality training improves operating room performance: results of a randomized, double-blinded study. Ann Surg. doi: 10.1097/00000658-200210000-00008
  • 7. Hashimoto DA, Sirimanna P, Gomez ED, Beyer-Berjot L, Ericsson KA, Williams NN, Darzi A, Aggarwal R (2015) Deliberate practice enhances quality of laparoscopic surgical performance in a randomized controlled trial: from arrested development to expert performance. Surg Endosc. doi: 10.1007/s00464-014-4042-4
  • 8. Fu Y, Cavuoto L, Qi D, Panneerselvam K, Arikatla VS, Enquobahrie A, De S, Schwaitzberg SD (2019) Characterizing the learning curve of a virtual intracorporeal suturing simulator (VBLaST-SS©). Surg Endosc. doi: 10.1007/s00464-019-06703-3
  • 9. Sankaranarayanan G, Wooley L, Hogg D, Dorozhkin D, Olasky J, Chauhan S, Fleshman JW, De S, Scott D, Jones DB (2018) Immersive virtual reality-based training improves response in a simulated operating room fire scenario. Surg Endosc. doi: 10.1007/s00464-018-6063-x
  • 10. Mahmood T, Darzi A (2004) The learning curve for a colonoscopy simulator in the absence of any feedback: no feedback, no learning. Surg Endosc Other Interv Tech. doi: 10.1007/s00464-003-9143-4
  • 11. Khan R, Scaffidi MA, Grover SC, Gimpaya N, Walsh CM (2019) Simulation in endoscopy: practical educational strategies to improve learning. World J Gastrointest Endosc 11:209–218. doi: 10.4253/wjge.v11.i3.209
  • 12. Buchanan BG, Shortliffe EH, Davis R, King JJ (1984) The origin of rule-based systems in AI. In: Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project
  • 13. Rhienmora P, Haddawy P, Suebnukarn S, Dailey MN (2011) Intelligent dental training simulator with objective skill assessment and feedback. Artif Intell Med. doi: 10.1016/j.artmed.2011.04.003
  • 14. Wijewickrema S, Ma X, Piromchai P, Briggs R, Bailey J, Kennedy G, O’Leary S (2018) Providing automated real-time technical feedback for virtual reality based surgical training: is the simpler the better? In: Artificial Intelligence in Education. London, UK
  • 15. Laukkanen S, Karanta I, Kotovirta V, Markkanen J, Rönkkö J (2004) Adding intelligence to virtual reality. In: Frontiers in Artificial Intelligence and Applications
  • 16. Yilmaz L, Ören T, Aghaee NG (2006) Intelligent agents, simulation, and gaming. Simul Gaming. doi: 10.1177/1046878106289089
  • 17. Vaughan N, Gabrys B, Dubey VN (2016) An overview of self-adaptive technologies within virtual reality training. Comput Sci Rev
  • 18. Yu JQ, Brown DJ, Billett EE (2007) Design of virtual tutoring agents for a virtual biology experiment. Eur J Open Distance Learn
  • 19. Heuvelink A, van den Bosch K, van Doesburg WA, Harbers M (2009) Intelligent agent supported training in virtual simulations. In: Proceedings of the NATO HFM-169 Workshop on Human Dimensions in Embedded Virtual Simulation. NATO Human Factors and Medicine Panel
  • 20. Interactive Medical Simulation Toolkit (iMSTK). https://www.imstk.org/. Accessed 17 Oct 2019
  • 21. Milef N, Qi D, De S (2019) Rendering surgery simulation with Vulkan. In: GPU Zen 2. Black Cat Publishing
  • 22. HTC VIVE VR System. https://www.vive.com/us/product/vive-virtual-reality-system/. Accessed 17 Oct 2019
  • 23. VIVE Wireless Adapter. https://www.vive.com/us/wireless-adapter/. Accessed 17 Oct 2019
  • 24. FUSE Manual. https://www.fuseprogram.org. Accessed 17 Oct 2019
  • 25. Hunter JD (2007) Matplotlib: a 2D graphics environment. Comput Sci Eng. doi: 10.1109/MCSE.2007.55
  • 26. Jones TS, Black IH, Robinson TN, Jones EL (2019) Operating room fires. Anesthesiology
  • 27. ECRI Institute (2009) New clinical guide to surgical fire prevention. Patients can catch fire--here's how to keep them safer. Health Devices
