Abstract
Objective:
To design and validate an augmented reality application for identification of temporal bone anatomy.
Background:
The anatomy of the temporal bone is highly complex and can present challenges for operative planning and for education of both patients and medical trainees.
Methods:
An augmented reality application for visualization and identification of temporal bone anatomy in 3D was developed using the Slicer, OpenGL, and ANGLE libraries and deployed as an augmented reality application on the Microsoft HoloLens (AR-MH). A total of 14 physicians, including 7 otolaryngologists (4 trainees and 3 attendings) and 7 radiologists (4 trainees and 3 attendings), participated in this study, visualizing temporal bone structures using 2D CT imaging, 3D CT model visualization on a monitor, and the AR-MH. Quantitative metrics used to compare participants’ performance between modalities included time taken to identify structures, accuracy of identification, and the NASA Task Load Index.
Results:
The rendering rate for individual models was 60 fps, excluding the temporal bone model. The mean time for participants to identify 16 structures was 3:04 minutes on 2D, 2:02 minutes on 3D, and 2:09 minutes on AR-MH. The adjusted accuracy of identifying structures was 89.0% on 2D, 93.2% on 3D, and 91.6% on AR-MH. Mean NASA-TLX values showed no significant difference in workload metrics between modalities. Visualization of anatomy in 3D (either on a monitor or via AR-MH) resulted in greater speed and accuracy of anatomy identification for trainees but not attendings.
Conclusion:
Augmented reality provides a means of intuitively visualizing temporal bone anatomy which may function as an effective tool for surgical planning and education, particularly for novices.
Keywords: augmented reality, skull base surgery, surgical navigation, temporal bone CT, medical education, patient education material, surgical education, 3D visualization, otology
Introduction
Pathologies of the lateral skull base are often deeply embedded within the temporal bone and intimately associated with delicate anatomic structures. Establishing safe surgical access to areas of disease can be challenging, even for the experienced surgeon. A wide degree of variation in temporal bone anatomy exists among patients,1,2 and landmarks can be especially challenging to identify when obscured by disease or distorted by prior operations or radiation. Injury during dissection to skull base structures such as the inner ear, facial nerve, dura, and major vessels carries significant morbidity risk and can dramatically affect a patient’s postoperative quality of life. The incidence of breaching inner ear structures has been reported as high as 20% to 30% in vestibular schwannoma resections, illustrating the potential for a high rate of morbidity during temporal bone drilling.3–6 In addition, the relationship of disease to important anatomy may not be intuitive to patients and clinical trainees, which can present barriers to informed consent and medical training, respectively.
The standard of care for visualizing temporal bone anatomy is high resolution computed tomography (CT) imaging, which clearly shows the bony structures, combined with magnetic resonance imaging (MRI) which highlights the soft tissue structures and tumor. However, experience is required to develop an intuitive 3-dimensional spatial understanding of the anatomy when reviewing imaging in 2D. Imaging review in 3D is also possible, with segmentation of anatomic structures and pathology, but requires additional image processing and a specialized user interface.
Augmented reality (AR), the overlay of computer-generated objects onto the real world, goes 1 step beyond traditional imaging modalities by overlaying 3-dimensional models of anatomy on the user’s visual field. AR applications in otolaryngology have been developed for anatomic and surgical education,7–9 as well as for pre- and intra-operative guidance for surgeons.8,10–12 There is a shortage of quantitative assessments evaluating the utility of these applications,13 and only recently have they begun to be reported in the literature.7,10 AR simulation has been shown to improve the mastoid drilling proficiency of medical students7 and the NASA Task Load Index performance metrics of surgeons performing cochlear implantation.10 In this paper, we contribute a quantitative evaluation of the utility of AR for understanding temporal bone anatomy, for both trainees and experienced physicians. We have developed an augmented reality application on the Microsoft HoloLens, a wearable headset that uses a combination of depth sensors, cameras, and artificial intelligence algorithms to generate AR experiences. In this application, a digital 3D hologram of temporal bone anatomy is overlaid on the user’s real-world environment and can be interacted with using hand gestures. The utility of this AR application for identification of temporal bone structures was evaluated using experiments and questionnaires from 14 subjects, with comparison to conventional CT imaging on a monitor (in either 2D or 3D). To the best of our knowledge, this is the first in-depth validation study quantifying the utility of augmented reality for identification of temporal bone anatomy.
Methods
System Architecture
Typical augmented reality applications on the AR head-mounted display, the Microsoft HoloLens, are developed on the Unity game development platform.14 The workflow has multiple steps, including model generation in image-processing software, model import and decimation in Unity, and application deployment to the HoloLens. Any change in model appearance or addition of new models requires the user to redo the entire workflow. The workflow proposed in this paper solves this problem by bypassing the requirement to import the models into Unity and instead directly connecting the HoloLens to the image-processing software. This, in turn, makes it suitable for intraoperative applications, wherein models generated on the fly can be visualized directly on the HoloLens without requiring significant user interaction.
The system architecture can be divided into 3 parts: the server (a desktop computer), the client (the Microsoft HoloLens), and the communication network protocol between them. The details are given below.
Server.
The workflow on the server side consists of the following steps (Figure 1).
Figure 1. Server-client architecture for the augmented reality application on the HoloLens.
DICOM data loading and segmentation.
The diagnostic or intraoperative images (MRI or CTA) are directly transferred to Slicer,15 an open-source image-processing program, via a DICOM listener. Once the DICOM data has been loaded, the anatomical structures are segmented or outlined on the 2D DICOM images using semi-automatic intensity-based segmentation algorithms.
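As a hedged illustration of this step, the sketch below shows how a CT volume can be loaded and a structure segmented by intensity thresholding using the 3D Slicer Python API; the file path, segment name, and threshold values are illustrative assumptions, and the exact segmentation workflow used in the study may differ.

```python
# Minimal sketch (3D Slicer >= 4.11 Python console): load a CT volume and create one
# intensity-thresholded segment. Paths, names, and thresholds are illustrative.
import slicer

ctVolume = slicer.util.loadVolume("/data/temporal_bone_ct.nrrd")  # or a DICOM series

segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(ctVolume)
segmentationNode.GetSegmentation().AddEmptySegment("TemporalBone")

# Drive the Segment Editor's Threshold effect from Python
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setMasterVolumeNode(ctVolume)  # setSourceVolumeNode in recent Slicer

segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "300")   # illustrative HU range for bone
effect.setParameter("MaximumThreshold", "3000")
effect.self().onApply()
```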
3D mesh model generation.
Using the ModelMaker module in 3D Slicer, which implements the Marching Cubes algorithm, 3D anatomical models of the segmented tissue are created. Marching Cubes is an image-processing algorithm that converts the 2D segmented images of the anatomical structures into a 3D model by generating a dense surface mesh.
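For illustration, the sketch below reproduces the same idea with plain VTK, extracting a surface from a binary labelmap with discrete marching cubes and then smoothing and decimating it; the reader, filter parameters, and file name are assumptions rather than the study's actual ModelMaker settings.

```python
# Sketch: labelmap -> surface mesh with VTK, approximating what ModelMaker does.
import vtk

reader = vtk.vtkNrrdReader()
reader.SetFileName("/data/temporal_bone_labelmap.nrrd")  # hypothetical exported labelmap

mc = vtk.vtkDiscreteMarchingCubes()       # marching cubes for labelled (discrete) data
mc.SetInputConnection(reader.GetOutputPort())
mc.SetValue(0, 1)                         # extract the surface of label value 1

smoother = vtk.vtkWindowedSincPolyDataFilter()   # soften the voxelized staircase look
smoother.SetInputConnection(mc.GetOutputPort())
smoother.SetNumberOfIterations(20)
smoother.SetPassBand(0.1)

decimator = vtk.vtkDecimatePro()          # reduce triangle count for real-time rendering
decimator.SetInputConnection(smoother.GetOutputPort())
decimator.SetTargetReduction(0.5)
decimator.PreserveTopologyOn()
decimator.Update()

surface = decimator.GetOutput()           # vtkPolyData mesh of the segmented structure
```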
Initial data communication.
A Slicer loadable module called “SlicerVRHoloLens” with a user-friendly interface was developed to push data from the desktop computer (server) to the HoloLens (client). The anatomical surface models are converted from the Visualization Toolkit (VTK) format16 into OpenGL-compatible mesh models.17 The generated mesh models are then transferred to the HoloLens for display to the user.
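The SlicerVRHoloLens conversion code is not published; as a rough sketch of the idea, the helper below flattens a vtkPolyData surface into the contiguous vertex and triangle-index arrays an OpenGL renderer expects (the function name and the VTK 9 API usage are assumptions).

```python
# Sketch: vtkPolyData -> flat float32 vertex buffer and uint32 index buffer (VTK 9+).
import numpy as np
import vtk
from vtk.util.numpy_support import vtk_to_numpy

def polydata_to_gl_buffers(polydata):
    tri = vtk.vtkTriangleFilter()                 # OpenGL wants triangles only
    tri.SetInputData(polydata)
    tri.Update()
    mesh = tri.GetOutput()

    vertices = vtk_to_numpy(mesh.GetPoints().GetData()).astype(np.float32)   # (N, 3)
    # VTK 9 stores cell connectivity as a flat array of point ids: i0, i1, i2, i0, ...
    indices = vtk_to_numpy(mesh.GetPolys().GetConnectivityArray()).astype(np.uint32)
    return vertices, indices
```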
WiFi connection setup.
We integrated the OpenIGTLink protocol18 into the SlicerVRHoloLens module. Using the OpenIGTLink protocol, the server connects to the HoloLens via WiFi and transfers data through the User Datagram Protocol (UDP). The packets transferred over the OpenIGTLink protocol include the OpenGL mesh, the OpenGL transform, and the model color and opacity, allowing us to transfer data and control the display on the HoloLens.
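The SlicerVRHoloLens module itself is custom and not publicly documented; as a hedged illustration of pushing a model over OpenIGTLink from Slicer, the snippet below uses the standard OpenIGTLinkIF connector node from the SlicerOpenIGTLink extension (note that this stock connector uses TCP on a default port of 18944, whereas the paper describes UDP transport in its custom module).

```python
# Sketch: stream a model node out of 3D Slicer with the stock OpenIGTLinkIF connector.
# Requires the SlicerOpenIGTLink extension; port and node name are illustrative.
import slicer

connector = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connector.SetTypeServer(18944)                 # listen for the HoloLens client
connector.Start()

modelNode = slicer.util.getNode("TemporalBone")     # hypothetical segmented model
connector.RegisterOutgoingMRMLNode(modelNode)       # mark the node for streaming
connector.PushNode(modelNode)                       # send the mesh to connected clients
```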
Color and opacity modification.
Different structures can be labeled with distinct colors to meet the clinical or educational objective, and the opacity of the models can be adjusted for clearer viewing. The opacity and color of the models are packed into “Transform” data messages and sent to the HoloLens.
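As a small sketch under the same assumptions, color and opacity can be set on the model's display node before the module packs them into the outgoing message; the RGB values and opacity below are illustrative.

```python
# Sketch: tag a model with a display color and opacity in Slicer before transmission.
import slicer

modelNode = slicer.util.getNode("TemporalBone")     # hypothetical model node
displayNode = modelNode.GetDisplayNode()
displayNode.SetColor(0.9, 0.8, 0.7)   # illustrative RGB, matching a color-sheet entry
displayNode.SetOpacity(0.4)           # semi-transparent so inner structures stay visible
```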
Client.
The Holographic DirectX 11 (Windows Universal) framework and the ANGLE engine are used as the underlying programs for the client. The client application is first initialized to activate the main window and handle user interaction events. After the main window is set, the application calls a method to load the graphics resources and initialize the state machine. The application then loads the 3D models and renders them with OpenGL, continuously updating the model information until the window is closed. Further, a dedicated branch of ANGLE specialized for the Microsoft HoloLens was used to recognize a holographic space, which replaces the native CoreWindow application, and a spatial coordinate system used to create the stereo view and projection matrices provided to the shader pipeline. The ANGLE library then draws in stereo using the instanced rendering technique supported by the HoloLens to seamlessly render the patient anatomic models.
Patient Data
We used the developed AR application on the Microsoft HoloLens (AR-MH) to visualize temporal bone anatomy in 3 dimensions and at real size by placing the anatomic holograms in space. The objective of the study was to compare users’ performance in identifying anatomic structures with the AR device against their performance with standard-of-care visualization on a computer monitor. The study was an observational study approved by the Mass General Brigham Institutional Review Board.
Study Modules
Prior to the study, temporal bone structures were segmented on 10 CT scans of different patients (5 left side and 5 right side) and 3D surface mesh models were generated on 3D Slicer. Unique colors were assigned to the segmented models, which were transmitted to the AR-MH over OpenIGTLink using the custom-built SlicerVRHoloLens module in Slicer.
Study Design
The study consisted of observational timed clinical trials. A total of 14 physicians participated in the experiment, including 7 otolaryngologists (4 trainees and 3 attendings) and 7 radiologists (4 trainees and 3 attendings). Trainee subjects were participants who had practiced medicine for less than 5 years and were still undergoing residency training, whereas attending subjects had completed residency training and had practiced for 5 or more years.
Subjects were asked to identify 16 structures in each of 9 cases using 1 of 3 modalities: 2D CT imaging, 3D model visualization on a monitor, or AR-MH (see Figures 2 and 3). While the modality for each case was randomly chosen, it was ensured that each subject would use each modality 3 times. Thus, out of the 126 total trials run, there were 42 in the 2D modality, 42 in 3D, and 42 in AR-MH. Following each case, participants were asked to complete a NASA Task Load Index questionnaire19,20 to assess the perceived workload of that approach on 6 scales: mental demand, physical demand, temporal demand, performance, effort, and frustration. Before the start of the trial, there was a training period where the subject could familiarize themselves with the 3 modalities on a test case.
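The exact randomization procedure is not described beyond this balance constraint; a minimal sketch of one way to assign the 9 cases so that each modality appears exactly 3 times per subject is shown below (case labels and the seeding scheme are assumptions).

```python
# Sketch: balanced random assignment of 9 cases to 3 modalities, 3 cases per modality.
import random

MODALITIES = ["2D", "3D", "AR-MH"]

def assign_modalities(cases, seed=None):
    rng = random.Random(seed)
    schedule = MODALITIES * (len(cases) // len(MODALITIES))  # 3 of each for 9 cases
    rng.shuffle(schedule)
    return dict(zip(cases, schedule))

print(assign_modalities([f"case_{i}" for i in range(1, 10)], seed=42))
```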
Figure 2. (A) 2D axial non-contrast CT scan in bone windows of a left temporal bone displayed in Slicer. (B) 3D image of a right ear segmentation in Slicer from a superior view. (C) AR-MH view of a right ear from a superior view. (D) Provided color sheet.
Figure 3. (A) Subject marking a fiducial on a 2D CT scan. (B) Subject identifying anatomic structures on a 3D segmented CT model. (C) Subject identifying anatomic structures on an AR-MH model.
List of 16 Temporal Bone Structures
For each case, participants were asked to identify the following structures, called out in random order: cochlea, malleus, incus, stapes, labyrinthine segment of the facial nerve, tympanic segment of the facial nerve, descending segment of the facial nerve, vestibule, lateral semicircular canal, posterior semicircular canal, superior semicircular canal, fundus of the internal auditory canal (IAC), midsection of the IAC, porus acusticus of the IAC, and the vestibular aqueduct.
2D Modality Using Digital Imaging and Communications in Medicine (DICOM) Files
The 2D DICOM files were provided to the subjects, who could view different slices of the axial temporal bone CT scan (0.6 mm slice thickness) by hovering the mouse over the scan and scrolling up or down. Subjects were asked to identify each structure by placing a fiducial, or marker, on the structure they believed to be correct, as shown in Figure 2(A).
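For illustration, a response of this kind can be recorded programmatically as a markups fiducial in Slicer; the coordinates and node name below are placeholders, not the study's logging code.

```python
# Sketch: record an answer as a markups fiducial in 3D Slicer (Slicer 5.x API).
import slicer

answers = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsFiducialNode", "Answers")
answers.AddControlPoint([12.4, -31.7, 8.9], "cochlea")  # hypothetical RAS position + label
```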
3D Modality Using 3D Anatomical Models
The segmented 3D CT model was generated from the 2D temporal bone CT DICOMs in Slicer as described above. The 3D images were provided to the subjects on the computer monitor through Slicer, as shown in Figure 2(B). The structures were set at a superior view and subjects were allowed to adjust the angle for their convenience. Subjects were asked to identify the anatomic structures one by one. Unlike the 2D modality, subjects were asked to respond by calling out the color of the structure according to the color sheet in Figure 2(D).
The AR Microsoft HoloLens (AR-MH)
The Microsoft HoloLens was provided to subjects. When they put on the HoloLens, they were able to see the 3D models of the structures projected into space from a superior view, as shown in Figure 2(C). Subjects were taught how to use the device to get a closer look at structures as well as view them from different angles. Again, the subjects were asked to call out the color of the structure according to the color sheet.
Data Collection and Analysis
Data recorded for each case included time to completion, accuracy, and the NASA Task Load Index. The mean time and mean accuracy were calculated per modality. Analysis of variance (ANOVA) was used to test for NASA Task Load Index differences between the modalities with statistical significance set at P < .05.
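For clarity, the ANOVA comparison can be expressed as in the short sketch below; the scores shown are placeholders, not study data.

```python
# Sketch: one-way ANOVA across the 3 modalities with SciPy (placeholder scores).
from scipy import stats

tlx_2d = [4.5, 3.8, 5.0, 4.2]   # hypothetical per-participant NASA-TLX scores
tlx_3d = [3.8, 3.5, 4.0, 3.6]
tlx_ar = [4.1, 3.3, 3.9, 4.4]

f_stat, p_value = stats.f_oneway(tlx_2d, tlx_3d, tlx_ar)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}, significant = {p_value < 0.05}")
```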
Results
Technical Evaluation of the AR-MH
First, we tested the performance of the system by measuring the update rate of the rendered models. The temporal bone anatomic models were pushed from Slicer to the HoloLens using the developed architecture to test the system’s rendering performance. The test results are given in Table 1. The target rendering rate of the HoloLens is 60 frames per second (fps). As the table shows, the rendering rate remains at approximately 60 fps for all models except the temporal bone model. Because the temporal bone model is very large, containing tens of thousands of vertices, the rendering rate of the HoloLens drops to 30 fps when this model is pushed.
Table 1.
Technical Results of the Rendering of the Models in the HoloLens.
| Model name | Vertices | Strips | Rendering time (ms) | FPS | CPU utilization (%) | GPU utilization (%) |
|---|---|---|---|---|---|---|
| External-auditory-canal | 7809 | 2625 | 16.7 | 60.0 | 21 | 37.04 |
| Internal-auditory-canal | 4308 | 1759 | 16.7 | 60.0 | 17 | 32.77 |
| Malleus-incus-complex | 446 | 170 | 16.6 | 60.0 | 18 | 26.73 |
| Labyrinthine-FN | 436 | 191 | 16.7 | 60.0 | 23 | 26.62 |
| Geniculate-ganglion-FN | 445 | 168 | 16.7 | 60.0 | 22 | 26.97 |
| Tympanic-FN | 1086 | 420 | 16.7 | 60.0 | 25 | 27.64 |
| Posterior-Crus-SC | 518 | 221 | 16.7 | 60.0 | 22 | 26.72 |
| Anterior-Crus-SC | 317 | 143 | 16.7 | 60.0 | 26 | 26.44 |
| Lateral-Canal | 1248 | 508 | 16.7 | 60.0 | 29 | 28.22 |
| Posterior-Canal | 1245 | 522 | 16.7 | 60.0 | 28 | 28.02 |
| Porous | 2147 | 847 | 16.7 | 60.0 | 25 | 29.15 |
| Fundus | 1705 | 699 | 16.7 | 60.0 | 23 | 29.04 |
| Temporal Bone | 58867 | 23939 | 33.3 | 30.0 | 14 | 56.19 |
| Inner-ear-structures | 21710 | 8273 | 16.7 | 60.0 | 26 | 58.35 |
| All | 80577 | 32212 | 33.3 | 30.0 | 17 | 71.71 |
Abbreviations: FN, facial nerve; SC, semicircular canal.
Time and Accuracy Required to Identify All Structures
The mean time for participants to identify 16 structures was 3:04 minutes on 2D, 2:02 minutes on 3D, and 2:09 minutes on AR-MH. The mean accuracy was 89.0% on 2D, 91.9% on 3D, and 87.9% on AR-MH. However, to account for confusion between similar colors, errors between the following color pairs were disregarded: red/maroon, pink/white, yellow/light green, blue/light blue/teal, and olive/brown. Removing these errors yielded mean adjusted accuracies of 93.2% on 3D and 91.6% on AR-MH; accuracy on 2D was unaffected because responses in that modality were marked with fiducials rather than called out by color. Subgroup analyses comparing performance between otolaryngologists and radiologists, and between attendings and trainees, are graphed in Figure 4.
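As an illustration of this adjustment, the sketch below counts a response as correct when it matches the true color or falls within the same similar-color group; the response data are placeholders.

```python
# Sketch: adjusted accuracy that forgives confusions within similar-color groups.
SIMILAR = [{"red", "maroon"}, {"pink", "white"}, {"yellow", "light green"},
           {"blue", "light blue", "teal"}, {"olive", "brown"}]

def is_adjusted_correct(answer, truth):
    if answer == truth:
        return True
    return any(answer in group and truth in group for group in SIMILAR)

def adjusted_accuracy(answers, truths):
    hits = sum(is_adjusted_correct(a, t) for a, t in zip(answers, truths))
    return hits / len(truths)

print(adjusted_accuracy(["maroon", "teal", "pink"], ["red", "blue", "yellow"]))  # 2/3
```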
Figure 4. (A) Mean anatomy identification time for each modality between otolaryngologists versus radiologists. (B) Mean anatomy identification time for each modality between attendings versus trainees. (C) Mean adjusted accuracy for each modality between otolaryngologists versus radiologists. (D) Mean adjusted accuracy for each modality between trainees versus attendings.
Assessment of Perceived Workload by Each Modality
The NASA Task Load Index (NASA-TLX)19,20 was used as a multidimensional subjective assessment tool to rate the workload, time, performance, and effort required to identify the structures with each modality. A difference of 1 point (on the simplified 10-point scale used in our study) is typically considered a clinically significant difference in workload.19,20 Mean NASA-TLX values showed no significant difference in mental demand (2D: 4.5; 3D: 3.8; AR-MH: 4.1; P = .53), temporal demand (2D: 3.8; 3D: 3.5; AR-MH: 3.3; P = .65), performance (2D: 3.5; 3D: 4.0; AR-MH: 3.6; P = .68), effort (2D: 4.5; 3D: 3.5; AR-MH: 3.4; P = .39), or frustration (2D: 4.1; 3D: 3.1; AR-MH: 3.9; P = .49). However, for physical demand (2D: 2.6; 3D: 1.6; AR-MH: 2.7; P = .063), lower scores were consistently selected for the 3D modality, indicating a non-significant trend toward lower physical demand.
NASA-TLX values between otolaryngologists and radiologists and between attendings and trainees are graphed in Figure 5. There was no notable difference in perceived workload between otolaryngologists and radiologists. However, attending physicians experienced less perceived workload than resident physicians for all 3 modalities, with the greatest discrepancy seen in the 2D modality.
Figure 5. (A) Mean NASA-TLX score for each modality between otolaryngologists versus radiologists. (B) Mean NASA-TLX score for each modality between attendings versus trainees.
Discussion
We have presented an easy-to-use augmented reality application providing clear visualization of temporal bone anatomy, permitting fast and accurate identification of anatomic structures when compared to conventional CT imaging visualization. While other groups have developed AR platforms for temporal bone anatomic evaluation,8,9,11,21,22 our report is the first to provide a quantitative evaluation of the users’ experience with comparison to other modalities. Amongst evaluating physicians, 3D visualization modalities (on a monitor and via AR-MH) permitted faster identification of anatomy, with identification of the 16 structures requiring, on average, approximately two-thirds of the time in 3D compared with 2D. When analyzing all users together, accuracy and perceived workload of anatomy identification were approximately similar across modalities. Subgroup analyses showed approximately similar performance and perceived workload between otolaryngologists and radiologists across modalities. However, when comparing trainees to attendings, the discrepancy in speed of anatomy identification between 3D and 2D visualization modalities was even greater for trainees. Trainees also had lower anatomy identification accuracy when using 2D, a difference not seen when the groups were combined. Unsurprisingly, attendings had lower perceived workload than trainees across all 3 modalities, likely granted by their experience, with the discrepancy being greatest for 2D.
The default visuospatial reference frame by which we interact with the world is 3D, so it is unsurprising that users were able to identify anatomy more efficiently using 3D modalities. Attending users, having significant experience with the anatomy and imaging modality, were able to compensate for the visuospatial reference frame difference and maintain task accuracy in 2D, though they were still slowed down. Hence, it seems reasonable to speculate that the benefits of 3D visualization for understanding temporal bone anatomy, whether for operative planning or the education of patients or medical trainees, would be most pronounced for those with the least familiarity with the material. Therefore, augmented reality anatomy applications may be most useful in the education of patients and medical students. While the literature suggests the utility of AR in anatomy and medical education,23 definitive quantitative outcomes supporting this are insufficient given limitations in published studies.24,25 AR is also a promising application for the field of surgical navigation, on which most published reports of AR in otology focus.21 However, given its simultaneous potential and limited study, education-focused AR applications in otology warrant further research.
There were several limitations of this study. Subjects were limited to physicians and did not include medical students or patients, both important demographics as discussed above. The sample size was also limited, as this study was designed as a pilot trial with the primary goal of evaluating the feasibility and preliminary impact of different visualization modalities on user performance. No formal power calculation was performed prior to the study, as there is a lack of existing data on effect sizes for performance differences across these specific modalities in this context. Moving forward, a larger-scale study including a formal power analysis based on the preliminary effect sizes observed in this pilot will allow for more rigorous statistical evaluation and validation. It should also be noted that because the experiment took place in controlled laboratory conditions, the results may not be representative of environments such as the clinic, classroom, or operating room. In addition, as noted in the Methods, the study involved confusion between similar colors, which were difficult to distinguish on the AR-MH. Future versions of the application can mitigate these color errors by using distinct model textures in addition to colors; however, this is not expected to significantly affect our conclusions, since the errors were consistent among trials for individual participants and were controlled for in the analysis. Lastly, significant computational time and manual input were required for the semi-automatic segmentation of the anatomic models used in the AR-MH application. However, we anticipate that machine learning approaches could enable complete automation of this step.26–28
Conclusions
We have presented an AR application which allows users to quickly and accurately identify temporal bone anatomy without requiring increased effort compared to conventional modalities. AR provides a means of intuitively visualizing temporal bone anatomy which may function as an effective tool for surgical planning and education, particularly for novices.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This project was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health through Grant Numbers P41EB015898 and R01EB025964. Brigham and Women’s Intramural funding was provided to CEC, AZ, and NBS. JPG has received unrelated research support from the American Society of Head and Neck Radiology through the 2017 William N. Hanafee Research Grant.
Declaration of Conflicting Interests
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Unrelated to this publication, Jayender Jagadeesan owns equity in Navigation Sciences, Inc. He is a co-inventor of a navigation device to assist surgeons in tumor excision that is licensed to Navigation Sciences. None of the other authors have any conflict of interest.
Consent to Participate
Experiment participants provided verbal informed consent for their participation in the study. Informed consent from the de-identified patients from whom the CT images were obtained was waived given the retrospective nature of the study.
Consent for Publication
Informed consent was obtained for publication of data from participating individual persons.
Ethical Considerations
The study was an observational study approved by the Mass General Brigham institutional review board, and was Health Insurance Portability and Accountability Act (HIPAA) compliant.
Data Availability Statement
The authors invite requests for review of research data use in this study by contacting the corresponding author.
References
1. Krajewski R, Kukwa A. Infratentorial approach to internal acoustic meatus. Skull Base Surg. 1999;9(2):81–85.
2. Day JD, Kellogg JX, Fukushima T, Giannotta SL. Microsurgical anatomy of the inner surface of the petrous bone: neuroradiological and morphometric analysis as an adjunct to the retrosigmoid transmeatal approach. Neurosurgery. 1994;34(6):1003–1008.
3. Ben-Shlomo N, Rahimi A, Abunimer AM, et al. Inner ear breaches from vestibular schwannoma surgery: revisiting the incidence of otologic injury from retrosigmoid and middle cranial fossa approaches. Otol Neurotol. 2024;45(3):311–318.
4. Tatagiba M, Samii M, Matthies C, el Azm M, Schonmayr R. The significance for postoperative hearing of preserving the labyrinth in acoustic neurinoma surgery. J Neurosurg. 1992;77(5):677–684.
5. Matthies C, Samii M, Krebs S. Management of vestibular schwannomas (acoustic neuromas): radiological features in 202 cases—their value for diagnosis and their predictive importance. Neurosurgery. 1997;40(3):469–481; discussion 481–482.
6. Yokoyama T, Uemura K, Ryu H, et al. Surgical approach to the internal auditory meatus in acoustic neuroma surgery: significance of preoperative high-resolution computed tomography. Neurosurgery. 1996;39(5):965–969; discussion 969–970.
7. Hadida Barzilai D, Tejman-Yarden S, Yogev D, et al. Augmented reality-guided mastoidectomy simulation: a randomized controlled trial assessing surgical proficiency. Laryngoscope. 2025;135(2):894–900.
8. Yamazaki A, Ito T, Sugimoto M, et al. Patient-specific virtual and mixed reality for immersive, experiential anatomy education and for surgical planning in temporal bone surgery. Auris Nasus Larynx. 2021;48(6):1081–1091.
9. Maniam P, Schnell P, Dan L, et al. Exploration of temporal bone anatomy using mixed reality (HoloLens): development of a mixed reality anatomy teaching resource prototype. J Vis Commun Med. 2020;43(1):17–26.
10. Ito T, Fujikawa T, Takeda T, et al. Integration of augmented reality in temporal bone and skull base surgeries. Sensors. 2024;24(21):7063.
11. McJunkin JL, Jiramongkolchai P, Chung W, et al. Development of a mixed reality platform for lateral skull base anatomy. Otol Neurotol. 2018;39(10):e1137–e1142.
12. Bartholomew RA, Zhou H, Boreel M, et al. Surgical navigation in the anterior skull base using 3-dimensional endoscopy and surface reconstruction. JAMA Otolaryngol Head Neck Surg. 2024;150(4):318–326.
13. Zagury-Orly I, Solinski MA, Nguyen LH, et al. What is the current state of extended reality use in otolaryngology training? A scoping review. Laryngoscope. 2023;133(2):227–234.
14. Pratt P, Ives M, Lawton G, et al. Through the HoloLens looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels. Eur Radiol Exp. 2018;2(1):2.
15. Fedorov A, Beichel R, Kalpathy-Cramer J, et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn Reson Imaging. 2012;30(9):1323–1341.
16. Bartholomew RA, Poe D, Dunn IF, Smith TR, Corrales CE. Iatrogenic inner ear dehiscence after lateral skull base surgery: therapeutic dilemma and treatment options. Otol Neurotol. 2019;40(4):e399–e404.
17. Spasic M, Trang A, Chung LK, et al. Clinical characteristics of posterior and lateral semicircular canal dehiscence. J Neurol Surg B Skull Base. 2015;76(6):421–425.
18. Tokuda J, Fischer GS, Papademetris X, et al. OpenIGTLink: an open network protocol for image-guided therapy environment. Int J Med Robot. 2009;5(4):423–434.
19. Hart SG. NASA-Task Load Index (NASA-TLX); 20 years later. Proc Hum Factors Ergon Soc Annu Meet. 2006;50(9):904–908.
20. Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Adv Psychol. 1988;52:139–183.
21. Chen JX, Yu SE, Ding AS, et al. Augmented reality in otology/neurotology: a scoping review with implications for practice and education. Laryngoscope. 2023;133(8):1786–1795.
22. Creighton FX, Unberath M, Song T, Zhao Z, Armand M, Carey J. Early feasibility studies of augmented reality navigation for lateral skull base surgery. Otol Neurotol. 2020;41(7):883–888.
23. Barsom EZ, Graafland M, Schijven MP. Systematic review on the effectiveness of augmented reality applications in medical training. Surg Endosc. 2016;30(10):4174–4183.
24. Szentirmai AB, Murano P. Enhancing learning through universally designed augmented reality: a comparative study of augmented and traditional learning materials. Stud Health Technol Inform. 2024;320:477–484.
25. Williams A, Sun Z, Vaccarezza M. Comparison of augmented reality with other teaching methods in learning anatomy: a systematic review. Clin Anat. 2025;38(2):168–185.
26. Neves CA, Tran ED, Kessler IM, Blevins NH. Fully automated preoperative segmentation of temporal bone structures from clinical CT scans. Sci Rep. 2021;11(1):116.
27. Zhou L, Li Z. Automatic multi-label temporal bone computed tomography segmentation with deep learning. Int J Med Robot. 2023;19(5):e2536.
28. Wang J, Lv Y, Wang J, et al. Fully automated segmentation in temporal bone CT with neural network: a preliminary assessment study. BMC Med Imaging. 2021;21(1):166.