Author manuscript; available in PMC 2014 Oct 1.
Published in final edited form as: Neurosurgery. 2013 Oct;73(Suppl 1):74–80. doi: 10.1227/NEU.0000000000000092

Translating the Simulation of Procedural Drilling Techniques for Interactive Neurosurgical Training

Don Stredney 2,4, Ali R Rezai 1, Daniel M Prevedello 1, J Bradley Elder 1, Thomas Kerwin 4, Bradley Hittle 4, Gregory J Wiet 2,3
PMCID: PMC4117341  NIHMSID: NIHMS606260  PMID: 24051887

Abstract

Background

Through previous and concurrent efforts, we have developed a fully virtual environment for procedural training of otologic surgical technique. The environment is based on high-resolution volumetric data of the regional anatomy, which drive an interactive, multisensory simulation with visual (stereo), aural (stereo), and tactile feedback. Subsequently, we have extended these efforts to support the training of neurosurgical procedural technique as part of the CNS simulation initiative.

Objective

The goal of this multi-level development is to deliberately study the integration of simulation technologies into the neurosurgical curriculum and to determine their efficacy in teaching minimally invasive cranial and skull base approaches.

Methods

We discuss issues of biofidelity as well as our methods for providing objective, quantitative, automated assessment of resident performance.

Results

We conclude with a discussion of our experience, reporting on preliminary formative pilot studies and proposing approaches for taking the simulation to the next level through additional validation studies.

Conclusion

We have presented our efforts to translate an otologic simulation environment for use in the neurosurgical curriculum. We have demonstrated the initial proof of principle and define the steps needed to integrate and validate the system as an adjuvant to the neurosurgical curriculum.

Keywords: skull base, simulation, virtual reality, neurosurgery, training

Introduction/Problem Statement

Whether one is defining safe, minimally invasive trajectories to access neoplasms or training in techniques for debulking a lesion, a consummate understanding of the regional anatomy is essential. Gaining such understanding requires a safe and effective environment that accurately emulates the anatomy and supports deliberate practice for learning and maintaining psychomotor skills.

To provide this environment, we explored several approaches: physical modeling with material tissue correlates, virtual representation, and a hybrid of the two. Each approach has advantages and disadvantages, and both physical and virtual models raise issues of biofidelity, particularly with respect to visual accuracy and tissue interaction. Physical models can be elegantly constructed, especially with the advent of 3-D printing, and they allow the trainee to use actual instruments. However, cost and repetitive use can be limiting, and physical models often provide no anatomical or pathological variance. Virtual models permit relatively low-cost simulations. Computational technology and interface equipment have stabilized in utility and declined in cost, and with open-source software and GPU developments driven by the gaming industry, these costs continue to fall. Because the model is digital, repetition is not an issue. Anatomical and pathological variance can be captured at acquisition, i.e., by imaging a specimen with the variance, or the digital model can often be modified to emulate it. However, the field is relatively new, and few interfaces can accurately emulate hand-instrument and instrument-tissue interaction.

Nevertheless, we assert that a virtual approach offers distinct advantages. Below, we present our methods and results from the initial deployment of a fully virtual, interactive environment for learning neurosurgical techniques, together with user assessment. The deployment took place within CNS simulation courses and was based specifically on drilling procedures, i.e., pterional burr hole and pre-sigmoid and retro-sigmoid craniotomies.

Background

The application of simulation to neurosurgery has evolved over several decades. Kelly introduced 3-D reconstructions for the preoperative assessment and planning of volumetric stereotactic procedures.1 In 2006, Wang et al introduced realistic deformable models depicting prodding, pulling, and cutting of simulated soft tissues.2 In 2007, Lemole et al demonstrated a system for ventriculostomy training that employed haptic feedback.3 Concomitantly, Acosta et al presented a haptic approach for burr hole simulation.4 Both of these approaches combine natural viewing of the hands with synthesized visuals in an augmented reality approach, a hybrid of real and virtual components. Hofer et al presented the use of navigated control to avoid critical structures during surgical intervention.5 These approaches rely on a virtual model for accurate and precise planning and execution. More recently, Delorme and others presented NeuroTouch®, an integrated system including stereo graphics and haptic manual interfaces for microneurosurgical training.6 Through funding from the National Research Council Canada, that effort includes 20 sites participating in beta testing and validation.

Our early studies related to this effort correlated structural information from volumetric magnetic resonance data with functional data from electroencephalograms in integrated displays used to investigate drug and alcohol addiction and sleep disorders.7, 8 Subsequent work involved the development and evaluation of three-dimensional volumetric displays of patient-specific data, compared with traditional methods, in the study of brain and cranial base tumors.9–15 Concurrent work involved simulations for training anesthesia residents in the delivery of an epidural.16, 17 The epidural anesthesia simulations were our first investigations into integrating volume graphics with haptics (force-reflecting technology). Using volumetric techniques, we also simulated pelvic compression neuropathies associated with birthing.18 Subsequently, we were part of a multi-institutional effort to develop and evaluate a functional endoscopic sinus surgery simulator that integrated visual and haptic interfaces. This involved two parallel developments, one focusing on surface-based representations19, 20 and the second on volumetric representations.21–26 These studies showed that although surface-based representations were expedient and could provide interactive rates, they lacked the complexity and realism of volumetric displays.27 The ENT Surgical Trainer, as it has come to be known, has been identified as the first true procedural surgical simulation environment to undergo rigorous validation.28

We have developed a virtual simulation for laboratory training of temporal bone dissection that combines multimodal representations: stereoscopic volume rendering with haptic and aural (stereo) feedback.29 We have disseminated this temporal bone dissection simulator to ten additional institutions to obtain formative and preliminary summative evaluations.30 These studies demonstrated that virtual representations were capable of providing introductory training equal to cadaveric models.31, 32 The simulator is currently being employed in a multi-institutional randomized controlled trial to evaluate its efficacy for training, specifically in the integration of standardized metrics and automated assessment of performance.

Recently, we demonstrated translation of the otological technique simulator for the emulation of skull base techniques used in neurosurgery.33 This simulation is completely virtual, providing visual, aural, and haptic (tactile) forces in an interactive, multisensory interface.

Methods

Our methods can be categorized into the following steps: Data Acquisition, Pre-Processing, Segmentation, Systems Integration, and Rendering and Use (Figure 1).

Figure 1.

Diagram of data workflow.

Data Acquisition

To obtain digital models, specimens were imaged on a Siemens 64-detector Somatom computed tomography scanner using a modified inner ear protocol with a field of view (FOV) of 119 mm, an in-plane resolution of 0.232 mm, and a slice thickness of 0.6 mm.34 Imaging was conducted at the Ross Heart Hospital at The Ohio State University Wexner Medical Center (OSUWMC). Digital data were then transferred to the Ohio Supercomputer Center for pre-processing.
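To make these acquisition parameters concrete, the sketch below relates the reported field of view and in-plane resolution to the voxel grid that ultimately drives the simulation. It is a minimal illustration, not code from our system, and it assumes a standard 512 × 512 in-plane reconstruction matrix, which is not stated above.

```python
# Minimal sketch (assumes a standard 512 x 512 reconstruction matrix, which is
# not stated in the text): relating the reported acquisition parameters to the
# voxel grid used by the simulation.
fov_mm = 119.0                      # reported field of view
matrix = 512                        # assumed in-plane reconstruction matrix
in_plane_res_mm = fov_mm / matrix   # ~0.232 mm/pixel, matching the reported value
slice_thickness_mm = 0.6            # reported slice thickness

voxel_size_mm = (in_plane_res_mm, in_plane_res_mm, slice_thickness_mm)
print(f"in-plane resolution: {in_plane_res_mm:.3f} mm/pixel")  # 0.232
print(f"voxel size (x, y, z) in mm: {voxel_size_mm}")
```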

Pre-Processing and Segmentation

Pre-processing includes rescaling and filtering to clean and prepare the data for segmentation. Segmentation involves interactive, user-mediated demarcation of critical structures, surfaces, and regions of surgical significance. The segments are used to localize instruments, to determine violation of key and critical structures, and to assess the contextual use of force; these metrics are employed in quantitative automated assessment. In the neurosurgical application of our system, we have processed the data sets to emulate pterional and pre- and retro-sigmoid approaches (Figure 2) to the lateral skull base.
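As an illustration of how segmented labels can support such metrics, the sketch below counts removed voxels that fall inside labeled critical structures. The label names, label values, and the deliberately tiny volume are hypothetical; this is not our implementation.

```python
# Illustrative sketch only (label names and values are assumptions, not the
# authors' implementation): using a segmentation label volume to flag
# violation of critical structures as voxels are drilled away.
import numpy as np

CRITICAL_LABELS = {2: "sigmoid sinus", 3: "facial nerve", 4: "cochlea"}

def check_violations(labels: np.ndarray, removed_mask: np.ndarray) -> dict:
    """Count removed voxels that fall inside each critical structure.

    labels       -- integer label volume from user-mediated segmentation
    removed_mask -- boolean volume marking voxels removed by the virtual drill
    """
    violations = {}
    for label_id, name in CRITICAL_LABELS.items():
        n = int(np.count_nonzero(removed_mask & (labels == label_id)))
        if n > 0:
            violations[name] = n
    return violations

# Example: a tiny synthetic volume where two facial-nerve voxels were drilled.
labels = np.zeros((4, 4, 4), dtype=np.int32)
labels[1, 1, 1:3] = 3
removed = np.zeros_like(labels, dtype=bool)
removed[1, 1, 1:3] = True
print(check_violations(labels, removed))   # {'facial nerve': 2}
```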

Figure 2.

Use of the simulation at CNS Chicago 2012. Left, a resident providing summative feedback. Right, experts providing formative feedback.

Systems Integration, Rendering and Use

The system comprises a desk-side computer (Intel quad-core 3.6 GHz processor with 16 GB RAM running Windows 7) with an NVIDIA Quadro 5000 graphics processing unit (GPU). Processed and segmented data are loaded onto the GPU for interactive stereo visuals at 20–30 frames per second. We employ custom rendering software built on open-source methods (i.e., OpenGL) for direct volume rendering, including interactive drilling and removal of bone. A Geomagic Sensable® Omni® serves as the dexterous device, providing direct emulation of the user’s manipulation of instruments, specifically the drill burr used to remove bone and gain access to underlying structures. Forces are calculated from regional voxel intensities, and resistance to the applied force is used to modulate stored sounds of drilling. Stereo glasses, a stereo-ready monitor, and headphones complete the interface (Figure 3), providing a rich, multisensory environment based entirely on the user’s interaction with a virtual model.
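The sketch below is a simplified illustration of this idea; the function name, neighborhood size, and gain are assumptions rather than our rendering or haptics code. A scalar resistance derived from voxel intensities around the burr tip could scale both the reflected force and the playback level of the stored drilling sounds.

```python
# Simplified illustration (not the authors' code): deriving a resistance value
# from voxel intensities in the neighborhood of the burr tip; this value can
# modulate both the haptic force and the drilling-sound playback.
import numpy as np

def burr_resistance(volume: np.ndarray, tip_index: tuple, radius: int = 2,
                    gain: float = 0.01) -> float:
    """Return a scalar resistance from the mean intensity around the burr tip."""
    z, y, x = tip_index
    neighborhood = volume[max(z - radius, 0):z + radius + 1,
                          max(y - radius, 0):y + radius + 1,
                          max(x - radius, 0):x + radius + 1]
    return gain * float(neighborhood.mean())

# Denser (brighter) bone yields a larger opposing force and a louder drill sound.
bone = np.full((32, 32, 32), 1200.0)        # synthetic CT-like intensities
print(burr_resistance(bone, (16, 16, 16)))  # 12.0
```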

Figure 3.

Left, pre-sigmoid approach in the simulator. Right, retro-sigmoid approach. Note: the view is arbitrarily established and modified interactively by the user.

Table 1 lists the directives and the methods of scoring used in the study.

Table 1.

Directives and methods of scoring used in the study.

Directive: Drill a single burr hole on the pterional point.
  • Positive points: 10 if parts of each of the sphenosquamosal, squamosal, and sphenoparietal sutures are removed.
  • Total points possible: 10

Directive: Drill a single burr hole near the pterion that exposes only the anterior fossa.
  • Negative points: 5 if the middle fossa is entered; 5 if the orbit is entered.
  • Positive points: 10 if the anterior cranial fossa is entered.
  • Total points possible: 10

Directive: Drill a single burr hole near the pterion that exposes the orbit and the anterior cranial fossa simultaneously.
  • Negative points: 5 if the middle fossa is entered.
  • Positive points: 5 if the orbit is entered; 5 if the anterior cranial fossa is entered.
  • Total points possible: 10

Directive: Perform a pterional approach and reduce the greater sphenoid wing to the superior orbital fissure without entering the orbit.
  • Negative points: 3 if less than 5% or more than 40% of the greater sphenoid wing is removed; 3 if less than 5% or more than 40% of the lesser sphenoid wing is removed; 5 if the orbit is entered.
  • Positive points: 10 by default.
  • Total points possible: 10

Directive: Identify the digastric sulcus by drilling part of it away while minimizing removal of nearby structures.
  • Positive points: 10 if more than 30% of the digastric sulcus is removed.
  • Total points possible: 10

Directive: Drill a retro-sigmoid craniotomy at the edges of the transverse and sigmoid sinuses; the goal is to expose a slight amount of sinus (shown in blue).
  • Negative points: 5 if none or more than 20% of the sigmoid sulcus is removed; 5 if none or more than 20% of the transverse sinus is removed; 10 if none or more than 50% of the posterior fossa dural plate is removed; 5 if less than 20% of the digastric ridge is removed.
  • Positive points: 10 by default.
  • Total points possible: 10

Directive: Drill the mastoid anterior to the edge of the sigmoid sinus, exposing the superior petrosal sinus and the internal auditory canal (IAC), while avoiding the facial nerve and the middle and inner ear.
  • Negative points: 10 if more than 10% of the sigmoid sulcus is removed; 20 if part of the facial nerve is drilled; 10 if part of a semicircular canal is drilled; 20 if the cochlea is drilled; 20 if bone of the middle ear is drilled.
  • Positive points: 10 for exposing the sigmoid sulcus; 20 for exposing the IAC; 10 for exposing the superior petrosal sinus.
  • Total points possible: 40

Quantitative Automated Assessment

Our current approach is based on a set of rules defined by expert surgeons. These rules rely on the volume data comprising defined structures and regions (e.g., sutures, sinuses, semicircular canals). Based on (virtual) procedures performed on the simulator by experts, we determine logical cutoff points for the amount of bone removal required for a given level of expertise. For example, in the retro-sigmoid craniotomy task, between 1% and 50% of the posterior fossa dural plate must be removed during the procedure to earn credit (Figure 4). Refer to Table 1 for our complete rule-based grading system; penalties are given if excessive or insufficient amounts of a specified structure are removed. A breakdown of the score is provided to residents upon completion of the procedure, and results are saved for further analysis. In a classroom deployment of this system, residents (and their attendings) will be able to see their progress over time. Faculty time investment is optimized in that both formative and summative feedback are provided to the trainee independently of the faculty’s physical presence.
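The following sketch illustrates the rule-based idea. The thresholds follow the dural plate example above, but the function names and the toy volume are illustrative only, not our grading code.

```python
# Minimal sketch of the rule-based scoring idea (thresholds follow the dural
# plate example in the text; names and the toy volume are illustrative only).
import numpy as np

def percent_removed(structure_mask: np.ndarray, removed_mask: np.ndarray) -> float:
    """Percent of a labeled structure's voxels removed during the procedure."""
    total = int(np.count_nonzero(structure_mask))
    if total == 0:
        return 0.0
    return 100.0 * np.count_nonzero(structure_mask & removed_mask) / total

def dural_plate_credit(pct: float) -> int:
    """Credit is earned only if removal falls inside the expert-derived window."""
    return 10 if 1.0 <= pct <= 50.0 else 0

structure = np.zeros((8, 8, 8), dtype=bool)
structure[2:6, 2:6, 2:6] = True      # 64 voxels of 'posterior fossa dural plate'
removed = np.zeros_like(structure)
removed[2:4, 2:6, 2:6] = True        # 32 of them drilled away (50%)
pct = percent_removed(structure, removed)
print(pct, dural_plate_credit(pct))  # 50.0 10
```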

Figure 4.

Left, view during mastoidectomy. Right, end result scored against an expert composite score: green = all experts removed this bone; cyan = some experts removed this region; red (arrow) = no experts removed this bone.

For the pterional and pre- and retro-sigmoid approaches, we assign quantitative automated scores to the final products, the virtual models that result from applying the procedural techniques.35, 36 We asked each of three experts to perform the three techniques as many times as possible; this information is used to construct a composite score capturing expert variance in technique. By comparing matching regions of voxels between high-quality expert products (i.e., drilled virtual bones) and those of residents using various distance metrics, we obtain a feature vector that is used in a range of clustering and classification algorithms. Given a set of final products previously graded by an expert, we can then extract features from the volumes and construct a decision tree.37 Preliminary results using this method have, for some metrics, yielded assessments more informative than hand motion analysis, obtaining kappa scores above 0.6 when comparing expert and automated grades for metrics such as “Complete Saucerization” and “Antrum Entered”. This approach provides scoring for metrics that are not easily defined in terms of strict structural boundaries and can be straightforwardly extended to different procedures.
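The sketch below illustrates this end-product comparison in simplified form. The Dice overlap, random toy masks, and two-class grades are stand-ins for the distance metrics, expert products, and expert-assigned grades described in references 35–37, not the actual features used.

```python
# Illustrative sketch of end-product comparison (the actual distance metrics and
# features differ; see references 35-37). A resident's removed-bone mask is
# compared with expert examples, per-expert agreement forms a feature vector,
# and a decision tree maps feature vectors to previously assigned grades.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two boolean removed-bone masks."""
    inter = np.count_nonzero(a & b)
    return 2.0 * inter / max(np.count_nonzero(a) + np.count_nonzero(b), 1)

def features(trainee_mask, expert_masks):
    """Feature vector: agreement of the trainee's product with each expert's."""
    return np.array([dice(trainee_mask, e) for e in expert_masks])

rng = np.random.default_rng(0)
expert_masks = [rng.random((16, 16, 16)) > 0.5 for _ in range(3)]

# Toy training set: feature vectors of previously graded products and grades.
X = np.array([features(rng.random((16, 16, 16)) > 0.5, expert_masks) for _ in range(20)])
y = rng.integers(0, 2, size=20)      # 0 = unsatisfactory, 1 = satisfactory (toy grades)
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Grade a new product (here, one of the expert masks) from its feature vector.
print(clf.predict([features(expert_masks[0], expert_masks)]))
```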

Results

We have presented our methods for constructing a virtual environment for procedural drilling techniques and their application to neurosurgery. We conducted a small pilot study to gather feasibility data during practical courses held at the CNS 2011 annual meeting in Washington, DC, and the CNS 2012 annual meeting in Chicago, Illinois. A short didactic review was provided before the simulation study. Pre-assessment of skill was limited to participants reporting their rank. All participants were asked to perform the three techniques to the best of their ability.

At this time, our data are not generalizable, and we have seen wide variance in user performance. Our initial data include 17 participants (N = 17), not all of whom finished every task listed above. A summary of the results is given in Table 2. The total possible score is 90, and the subjects were early-year residents; the generally low scores may indicate that the tasks are too difficult for early-year residents.

Table 2.

Results from pilot study: N=17

           Time (minutes)   Total score
Mean       12.8             24.1
Std. dev.  7.4              12.6
Median     11.2             22.0

These preliminary pilot studies have, however, provided valuable formative evaluation that will be integrated into the simulation as we pursue the randomized controlled validation studies required to further evaluate the efficacy of our designs in the neurosurgical curriculum. These studies will include pre- and post-test assessments in a two-arm comparison of simulation versus traditional training techniques.

The automated assessment provides a means of learning procedural techniques with feedback that does not require supervision by an attending surgeon. The immediate score breakdown gives a more concrete indication of where to improve than hand motion analysis (e.g., economy of movement). Most importantly, all cases used for assessment are exactly equal in difficulty. This approach provides a more quantified and objective assessment of procedural skills.

Discussion

We have presented the adaptation of virtual simulation techniques developed for otologic surgery to neurosurgical techniques. Essential to this adaptation is the development of automated assessment techniques and their use in analyzing resident performance. These assessments are congruent with the CNS simulation initiative to provide bold and innovative methods for training and assessment. More specifically, this effort addresses the call for the formal integration of simulation as a training modality in the curriculum.38 It seeks to address the “technological gaps and limitations” of current simulators and to help balance “the cost and time needed for development of new simulations.”

Through the adoption and adaptation of virtual techniques, we gain the advantages of cost efficiency, cleanliness, standardization and repeatability, and the provision of anatomical and, eventually, pathological variance. The current limitations of a virtual approach include the lack of soft tissue emulation and of representation of fluid management, such as bleeding and irrigation, although we are actively pursuing these capabilities.39 In summary, we contend not only that a virtual approach can more rapidly capitalize on new technological advances, but that the virtual representation can itself iteratively drive innovation in other technological domains, such as imaging and robotics, as well as tool design and testing.

Conclusion

We have presented our efforts to translate an otologic simulation environment for use in neurosurgical training. We have demonstrated the initial proof of principle and now consider the steps needed to integrate and validate the system as an adjuvant to the neurosurgical curriculum as part of the CNS simulation initiative.

Acknowledgments

The authors would like to acknowledge Kimerly Powell, PhD, Director of the Small Animal Imaging Shared Resource at The Ohio State University, for her expertise, guidance, and assistance in acquiring and pre-processing the data sets used in this research.

Footnotes

Disclosure of Funding: This research was supported in part by the Congress of Neurological Surgeons. Additional development was supported through funding from grant R01 DC011321-01A1 from the National Institute on Deafness and Other Communication Disorders (NIDCD) of the National Institutes of Health.

Financial support and industry affiliations: Dr. Ali Rezai is currently the President of the Congress of Neurological Surgeons. Stredney and Wiet have received material support from Medtronic and Stryker (instruments) and Cochlear Americas (specimens). None of the other authors has any personal or institutional financial interest in the drugs, materials, or devices described in this submission.

References

  • 1. Kelly PJ. Quantitative virtual reality enhances stereotactic neurosurgery. Bull Am Coll Surg. 1995 Nov;80(11):13–20.
  • 2. Wang PBA, Jones IA, Glover AT, Benford SD, Greenhalgh CM, Vloeberghs M. A virtual reality surgery simulation of cutting and retraction in neurosurgery with force-feedback. Comput Methods Programs Biomed. 2006;84(1):11–18. doi: 10.1016/j.cmpb.2006.07.006.
  • 3. Lemole GM Jr, Banerjee PP, Luciano C, Neckrysh S, Charbel FT. Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback. Neurosurgery. 2007 Jul;61(1):142–148; discussion 148–149. doi: 10.1227/01.neu.0000279734.22931.21.
  • 4. Acosta E, Liu A, Armonda R, et al. Burrhole simulation for an intracranial hematoma simulator. Stud Health Technol Inform. 2007;125:1–6.
  • 5. Hofer M, Dittrich E, Scholl C, et al. First clinical evaluation of the navigated controlled drill at the lateral skull base. Stud Health Technol Inform. 2008;132:171–173.
  • 6. Delorme S, Laroche D, DiRaddo R, Del Maestro RF. NeuroTouch: a physics-based virtual simulator for cranial microneurosurgery training. Neurosurgery. 2012 Sep;71(1 Suppl Operative):32–42. doi: 10.1227/NEU.0b013e318249c744.
  • 7. Lukas SESM, Stredney D, Torello MW, May SF, Scheepers F. Integration of P300 evoked potentials with magnetic resonance images (MRI) to identify dipole sources in human brain. Washington, DC: Society for Neuroscience; 1993.
  • 8. Lukas SESM, Stredney D, Torello MW, May SF, Scheepers F. Apparent source of EEG sleep spindles and K-complexes: correlations with anatomical sites using magnetic resonance imaging (MRI). APSS 8th Annual Meeting; Boston, MA: American Sleep Disorders Association; 1994.
  • 9. Stredney DYR, May SF, Torello M. Supercomputer assisted brain visualization with an extended ray tracer. Proceedings of the 1992 Workshop on Volume Visualization; December 1992; ACM, VVA; pp. 33–38.
  • 10. Stredney D, Crawfis R, Wiet GJ, Sessanna D, Shareef N, Bryan J. Interactive volume visualizations for synchronous and asynchronous remote collaboration. Stud Health Technol Inform. 1999;62:344–350.
  • 11. Stredney D, Agrawal A, Barber D, et al. Interactive medical data on demand: a high-performance image-based approach across heterogeneous environments. Stud Health Technol Inform. 2000;70:327–333.
  • 12. Wiet GJSD, Goodman J, Stredney D, Bender CF, Yagel R, Swan JE, Schmalbrock P. Virtual simulations of brain and cranial base tumors. Proc Ann Mtg Am Acad Otolaryngology-Head and Neck Surgery; San Diego, CA; Sep 1994.
  • 13. Wiet GJ, Stredney D, Yagel R, et al. Cranial base tumor visualization through high-performance computing. Stud Health Technol Inform. 1996;29:43–59.
  • 14. Wiet GJ, Stredney DL, Yagel R, Sessanna DJ. Using advanced simulation technology for cranial base tumor evaluation. Otolaryngol Clin North Am. 1998 Apr;31(2):341–356. doi: 10.1016/s0030-6665(05)70053-0.
  • 15. Wiet GJSD, Schmalbrock P. Tumor Visualization.
  • 16. Stredney D, Sessanna D, McDonald JS, Hiemenz L, Rosenberg LB. A virtual simulation environment for learning epidural anesthesia. Stud Health Technol Inform. 1996;29:164–175.
  • 17. Hiemenz L, Stredney D, Schmalbrock P. Development of the force-feedback model for an epidural needle insertion simulator. Stud Health Technol Inform. 1998;50:272–277.
  • 18. McDonald JS, Yagel R, Schmalbrock P, Stredney D, Reed DM, Sessanna D. Visualization of compression neuropathies through volume deformation. Stud Health Technol Inform. 1997;39:99–106.
  • 19. Edmond CV Jr, Heskamp D, Sluis D, et al. ENT endoscopic surgical training simulator. Stud Health Technol Inform. 1997;39:518–528.
  • 20. Weghorst DAC, Openheimer P, Edmond CV, Patience T, Heskamp D, Miller J. Validation of the Madigan ESS simulator. In: JDW, editor. Proc MMVR6. Amsterdam: IOS Press; 1998. pp. 399–405.
  • 21. Rudman DT, Stredney D, Sessanna D, et al. Functional endoscopic sinus surgery training simulator. Laryngoscope. 1998 Nov;108(11 Pt 1):1643–1647. doi: 10.1097/00005537-199811000-00010.
  • 22. Wiet GJ, Yagel R, Stredney D, et al. A volumetric approach to virtual simulation of functional endoscopic sinus surgery. Stud Health Technol Inform. 1997;39:167–179.
  • 23. Yagel RSD, Wiet GJ, Schmalbrock P, Rosenberg R, Sessanna DJ, Kurzion Y, King S. Multisensory platform for surgical simulation. Santa Clara, CA: IEEE VRAIS; 1996. pp. 72–78.
  • 24. Rosenberg LB, SD. A haptic interface for virtual simulation of endoscopic surgery. In: Weal, editor. Proc MMVR4. Amsterdam: IOS Press; 1996.
  • 25. Yagel RSD, Wiet GJ, Schmalbrock P, Rosenberg L, Sessanna D, Kurzion Y. Towards real-time multisensory virtual surgery. IEEE Multimedia. 1996.
  • 26. Yagel RSD, Wiet GJ, Schmalbrock P, Rosenberg L, Sessanna DJ, Kurzion Y. Building a virtual environment for endoscopic sinus surgery simulation. Comput Graph. 1996 Dec;20(6):813–823.
  • 27. Stredney D, Wiet GJ, Yagel R, et al. A comparative analysis of integrating visual representations with haptic displays. Stud Health Technol Inform. 1998;50:20–26.
  • 28. Gallagher AG, Ritter EM, Satava RM. Fundamental principles of validation, and reliability: rigorous science for the assessment of surgical education and training. Surg Endosc. 2003 Oct;17(10):1525–1529. doi: 10.1007/s00464-003-0035-4.
  • 29. Bryan JSD, Wiet GJ, Sessanna D.
  • 30. Wan D, Wiet GJ, Welling DB, Kerwin T, Stredney D. Creating a cross-institutional grading scale for temporal bone dissection. Laryngoscope. 2010 Jul;120(7):1422–1427. doi: 10.1002/lary.20957.
  • 31. Wiet GJ, Rastatter JC, Bapna S, Packer M, Stredney D, Welling DB. Training otologic surgical skills through simulation-moving toward validation: a pilot study and lessons learned. J Grad Med Educ. 2009 Sep;1(1):61–66. doi: 10.4300/01.01.0010.
  • 32. Wiet GJ, Stredney D, Kerwin T, et al. Virtual temporal bone dissection system: OSU virtual temporal bone system: development and testing. Laryngoscope. 2012 Mar;122(Suppl 1):S1–S12. doi: 10.1002/lary.22499.
  • 33. Prevedello DMSD, Rezai A. Skull base simulators: the evolution of neurosurgical training. CNSQ. 2011 Summer;12(3):7–8.
  • 34. Wiet GJ, Schmalbrock P, Powell K, Stredney D. Use of ultra-high-resolution data for temporal bone dissection simulation. Otolaryngol Head Neck Surg. 2005 Dec;133(6):911–915. doi: 10.1016/j.otohns.2005.05.655.
  • 35. Kerwin T, Wiet G, Stredney D, Shen HW. Automatic scoring of virtual mastoidectomies using expert examples. Int J Comput Assist Radiol Surg. 2012 Jan;7(1):1–11. doi: 10.1007/s11548-011-0566-4.
  • 36. Kerwin TSH, Stredney D. Capture and review of interactive volumetric manipulations for surgical training. Volume Graphics. 2006;106.
  • 37. Kerwin T, Stredney D, Wiet G, Shen HW. Virtual mastoidectomy performance evaluation through multi-volume analysis. Int J Comput Assist Radiol Surg. 2012 Jan;8(1):51–61. doi: 10.1007/s11548-012-0687-4.
  • 38. Lobel DARA. Frontiers in neurosurgery: simulation in resident education. CNSQ. 2011 Spring:4–6.
  • 39. Kerwin T, Shen HW, Stredney D. Enhancing realism of wet surfaces in temporal bone surgical simulation. IEEE Trans Vis Comput Graph. 2009 Sep-Oct;15(5):747–758. doi: 10.1109/TVCG.2009.31.
