Journal of Neurological Surgery Part B: Skull Base
. 2021 Sep 10;83(Suppl 2):e564–e573. doi: 10.1055/s-0041-1735509

Augmented Reality for Retrosigmoid Craniotomy Planning

Christoph Leuze 1 , Caio A Neves 2,3, Alejandro M Gomez 4,5, Nassir Navab 4,5, Nikolas Blevins 2, Yona Vaisbuch 2, Jennifer A McNab 1
PMCID: PMC9272246  PMID: 35832997

Abstract

While medical imaging data have traditionally been viewed on two-dimensional (2D) displays, augmented reality (AR) allows physicians to project the medical imaging data onto patients' bodies to locate important anatomy. We present a surgical AR application to plan the retrosigmoid craniotomy, a standard approach to access the posterior fossa and the internal auditory canal. As a simple and accurate alternative to surface landmarks and conventional surgical navigation systems, our AR application augments the surgeon's vision to guide the optimal location of cortical bone removal. In this work, two surgeons performed a retrosigmoid approach 14 times on eight cadaver heads. In each case, the surgeon manually aligned a computed tomography (CT)-derived virtual rendering of the sigmoid sinus on the real cadaveric head using a see-through AR display, allowing the surgeon to plan and perform the craniotomy accordingly. Postprocedure CT scans were acquired to assess the accuracy of the retrosigmoid craniotomies with respect to their intended location relative to the dural sinuses. Over all their cases, the two surgeons had mean margins of d_avg = 0.6 ± 4.7 mm and d_avg = 3.7 ± 2.3 mm, respectively, between the osteotomy border and the dural sinuses, with exclusively positive margins in 12 of the 14 cases. The intended surgical approach to the internal auditory canal was successfully achieved in all cases using the proposed method, and the relatively small and consistent margins suggest that our system has the potential to be a valuable tool for planning a variety of similar skull-base procedures.

Keywords: augmented reality, craniotomy, retrosigmoid approach, acoustic neuroma, schwannoma, Magic Leap

Introduction

Surgical augmented reality (AR) applications provide an innovative way for clinicians to quickly assess and accurately locate critical anatomy during medical procedures. 1 2 3 By projecting patient-specific anatomic imaging data directly on the patient via an optical see-through AR head-mounted display (HMD), the surgeon has the potential to “look through” superficial structures to see internal anatomy at its actual location. 4 5 This is especially helpful in procedures involving complex interrelated anatomy and those with considerable naturally occurring or pathologic variation. 6

The retrosigmoid craniotomy is a commonly used approach to access the neurovascular contents of the posterior cranial fossa. 7 It is frequently used during the resection of tumors involving the cerebellopontine angle (CPA), such as vestibular schwannomas or meningiomas. The retrosigmoid craniotomy involves the removal of a square-shaped segment of occipital bone, abutting the sigmoid and transverse sinuses ( Fig. 1 ). 8 Optimal craniotomy placement involves the removal of bone immediately adjacent to these sinuses while maintaining their integrity. Given the variable nature of the dural sinuses and the lack of reliable surface landmarks to indicate their position, a means of definitively locating them when planning the craniotomy would make bone removal safer and more efficient.

Fig. 1.

Fig. 1

A schematic of the craniotomy window placement behind the sigmoid sinus (blue structure) during the retrosigmoid approach. Picture taken from Jackler. 7 The copyright holders (Jackler and Gralapp) grant permission for the publisher to use the illustrations for this paper, in both its printed and digital version, but reserve copyright.

The traditional use of surface landmarks for planning can be refined using neuronavigation devices and external marker-based trackers. 9 Many surgical navigation techniques rely on optical or electromagnetic trackers to relate medical imaging data, such as magnetic resonance imaging (MRI) or computed tomography (CT), to their respective location inside the body. 10 11 12 Existing medical navigation systems are capable of calculating the spatial relation between the imaging data and the real-world location of the patient's anatomy. To do so, they frequently require attaching fiducial markers to the patient or a head frame, or tracking superficial anatomical landmarks. 13 The use of these image-guidance systems can be expensive, time consuming, and cumbersome, especially if they are used solely for craniotomy planning.

Existing surgical navigation techniques can be combined with an AR HMD which may offer a more intuitive and immersive experience for the surgeon. However, the use of these systems can still be limited if the rendering of the virtual anatomy model is not perceived to be accurately aligned with the patient. 14 This can happen either due to limitations of the computer vision tracking hardware and software or due to limitations of the optical system, for example, miscalibration of the interpupillary distance (IPD) or the vergence-accommodation conflict for displays with a fixed focal plane. 15

For cases when the patient's head is fixed and no continuous tracking is necessary, manual alignment of the imaging data to the patient by the surgeon is another important option. 16 Manual alignment is effectively a three-dimensional (3D) shape-matching task where a virtual object (the virtual rendering of the medical imaging data) is manually aligned to the real anatomy of the patient. The accuracy of such an approach is highly dependent on the ability of the user to identify and match the virtual with its physical equivalent. Few tools are currently available to assist with this critical process.

Alternatives, such as AR-guided solutions, have been proposed as novel means to assist in craniotomy planning and skull base surgery. 17 18 19 Such procedures are particularly suited for this approach given the high density of vital neurovascular structures in the area, the high degree of variation in anatomy, and the fact that much of the anatomy is rigidly constrained within bone. 20 21

In this work, we present a cadaver-based study of the feasibility of AR guidance with which a surgeon manually aligns a virtual rendering of patient-specific imaging data with the real-world patient to guide a retrosigmoid craniotomy. By testing the procedure and measuring the accuracy of such an AR-guided application on multiple anatomic specimens, we aim to determine whether AR can be a safe and valuable solution for guiding skull base procedures.

Method

Two surgeons performed a retrosigmoid approach to the CPA on a total of 14 sides (eight left and six right) in eight cadaveric heads. One surgeon (surgeon A), who had completed an otology–neurotology fellowship and was quite familiar with the retrosigmoid approach, performed the procedure on seven cases. The second surgeon (surgeon B), who was an otolaryngologist and had postresidency training in anterior cranial base surgery but was not experienced with the retrosigmoid approach, performed the procedure on the other seven cases. Both surgeons had had prior experience using the HMD and alignment software: surgeon A through a training session on mannequins and surgeon B from prior cadaver dissections.

Three-Dimensional Model Preparation

First, the cadaver heads were scanned using a cone-beam CT scanner (J. Morita USA Inc., Irvine, California, United States) at a resolution of 0.33 mm. The CT data were loaded into 3D Slicer ( http://www.slicer.org/ ) 22 where we segmented key structures, including the skin surface, bone surface, sigmoid sinus, and internal auditory canal, using a semiautomatic segmentation in 3D Slicer ( Fig. 2 ). For cases 8 to 14, surgeon B furthermore planned the craniotomy window on the bone segmentation prior to the surgery using the CardinalSim software. 23 24 We developed a custom AR application using Unity3D (Unity Technologies, San Francisco, California, United States) to render the skin surface, bone surface, and sigmoid sinus on the Magic Leap One (Magic Leap, Inc., Plantation, Florida, United States) HMD.

Fig. 2.

Fig. 2

(Left) The segmentation of the skin and pinna with the sigmoid and transverse sinuses in blue. (Right) The segmentation of the skull, including the dural sinuses, mastoid process and associated air cells, the internal auditory canal (IAC), and the planned craniotomy.

Alignment

The cadaver heads were rigidly fixed in the surgical position prior to the alignment procedure ( Fig. 3A ). Using the HMD, the surgeon performed the standard visual calibration task and then proceeded with fiducial registration of the specimen. The surgeon placed four virtual fiducials around the pinna of the cadaver head to perform an initial landmark-based rigid registration of the virtual model to the specimen, according to the method described in a previous publication. 25 After this initial alignment, the surgeon could freely translate and rotate the virtual model with the MagicLeap controller for fine adjustment. To help with the alignment task, the surgeon could furthermore choose between five different shaders. The shaders were adapted from Gomez et al 26 and included an opaque shader, a transparent shader, a wireframe shader, a Fresnel shader that highlighted areas of strong curvature on the object, and an outline shader that rendered the object's outline ( Fig. 4 ). The surgeon could change the shader separately for the skin and bone structures and could also completely enable or disable either the skin or the bone during the alignment task.
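
At its core, the initial landmark-based registration is a least-squares rigid fit of paired fiducial points. The sketch below illustrates this with the standard Kabsch algorithm; it is not the authors' implementation, and the four fiducial coordinates are hypothetical.

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid transform (rotation R, translation t) that maps
    the moving point set onto the fixed point set (Kabsch algorithm)."""
    fixed = np.asarray(fixed, float)
    moving = np.asarray(moving, float)
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cf - R @ cm
    return R, t

# Four hypothetical fiducials around the pinna in CT model coordinates (mm)
ct_points = np.array([[0.0, 0, 0], [30, 0, 0], [0, 25, 0], [0, 0, 20]])

# Simulate their real-world positions: rotate 15 degrees and translate
theta = np.deg2rad(15)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
world_points = ct_points @ R_true.T + np.array([5.0, -3, 12])

R, t = rigid_register(world_points, ct_points)
aligned = ct_points @ R.T + t
print(np.abs(aligned - world_points).max())  # residual near zero for exact fiducials
```

With real fiducials the residual (the fiducial registration error) is nonzero, which is why the surgeons refined the alignment manually afterward.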

Fig. 3.

Fig. 3

A timeline of the complete AR-guided skull base procedure. ( A ) A cadaver head is placed inside a head holder with the ear pointing up. A view through the MagicLeap see-through display shows that the virtual rendering of the skin (white area at the top of the picture) is not yet aligned. ( B ) Using the MagicLeap controller, the surgeon aligns the virtual rendering of the anatomy with the real-world head, using the pinna and underlying bone such as the mastoid tip as landmarks. The view through the MagicLeap shows a rendering of the bone and the sigmoid sinus. ( C ) Once the virtual rendering and the real head are aligned, the surgeon marks the location of the planned craniotomy window. ( D ) After cutting the skin flap, the surgeon adjusts the alignment of the virtual rendering by visualizing the bone. ( E ) The drilling to create the craniotomy is performed with a microscope, without the AR display. ( F ) Following bone flap removal, a view through the MagicLeap shows the rendering of the dural sinuses adjacent to the craniotomy. A video of the complete procedure for a single case can be found in the supplemental material ( Video 1 ). AR, augmented reality.

Fig. 4.

Fig. 4

The visualization techniques used to display the anatomy. ( A ) The skin with the opaque shader. ( B ) The transparent skin shader with the underlying skull bone. ( C ) The wireframe shader, ( D ) the Fresnel shader, and ( E ) the outline shader with the underlying sigmoid sinus. ( F ) The skull bone with the Fresnel shader with the underlying sigmoid sinus and the planned craniotomy window (red).

The surgeon could freely switch between these visualization methods and move the virtual model to align the virtual anatomy with the physical anatomy of the specimen. The MagicLeap controller used for manual alignment was covered to prevent contamination during the experiment.
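
The usefulness of the Fresnel shader comes from a simple rim term that brightens silhouettes and regions of strong curvature while leaving surfaces that face the viewer nearly transparent. Below is a minimal, illustrative Schlick-style sketch of such a rim term, not the shader used in the study.

```python
import numpy as np

def fresnel_rim(normal, view_dir, power=3.0):
    """Rim term: 0 where the surface faces the viewer (overlay nearly
    invisible), 1 at grazing angles (silhouettes and strong curvature)."""
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    facing = np.clip(np.dot(n, v), 0.0, 1.0)
    return (1.0 - facing) ** power

# Patch facing the viewer head-on: overlay almost disappears
print(fresnel_rim(np.array([0.0, 0, 1]), np.array([0.0, 0, 1])))  # 0.0
# Grazing patch on the silhouette: fully highlighted
print(fresnel_rim(np.array([1.0, 0, 0]), np.array([0.0, 0, 1])))  # 1.0
```

Because only edges and curved regions light up, the real anatomy underneath remains visible, which matches the surgeons' stated preference for low-occlusion visualizations.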

Surgical Dissection

Video 1 Complete procedure for a single case.


Once the surgeon perceived the virtual model to be accurately aligned with the real head ( Fig. 3B ), the skin incision was marked ( Fig. 3C ). The surgeon then incised the skin and elevated the muscle flap around the markings to expose the occipital cortex. With the bone exposed, the surgeon could again adjust the alignment with the help of the rendering of the bone segmentation. After any additional adjustment of alignment, the boundary of the planned craniotomy could be marked directly on the skull surface ( Fig. 3D ), using the location of the virtual dural sinuses as a guide.

After this final planning stage, the surgeon removed the AR headset and performed the craniotomy using a microscope and an otologic drill to remove bone along the edges of the previously planned craniotomy ( Fig. 3E ). Once the bone window was completely incised, the bone flap was removed. The surgeons did not adjust their craniotomy based on anatomic findings exposed during drilling but attempted to remove bone as marked during the planning phase. Following bone flap removal, the surgeon visually assessed the integrity of the dural sinuses and whether the exposure of the internal auditory canal was as expected during actual surgery. A video of the complete procedure for a single case can be found in the supplemental material ( Video 1 ).

Accuracy Evaluation

Using the same acquisition protocol, a repeat CT scan was performed on each head following the craniotomy. On this second CT scan, we evaluated the distance between the margin of the craniotomy and the sigmoid and transverse sinuses. In 3D Slicer, we used a multiplanar reconstruction technique to measure the distance between the posterior and inferior margins of the sigmoid and transverse sinuses and the border of the craniotomy. The CT scan was resampled to produce a plane running tangential to the skull surface adjacent to the sinuses. The distance between the intersection of the projection of the sinuses to the skull surface and the craniotomy boundary was then measured to find the margin of the craniotomy.

We performed these measurements of the margin at six locations, three superiorly and three anteriorly ( Fig. 5 ). The ideal craniotomy would be immediately adjacent to the sinuses, resulting in small positive margins close to zero. Larger positive margins reflect unnecessary skull bone remaining between the sinuses and the craniotomy that would need to be removed later to optimize exposure. A negative margin means the craniotomy exposed the sinus, potentially subjecting the structure to injury during drilling. We calculated the mean and standard deviation of all margins for each individual experiment, as well as over all experiments. Finally, we performed two-sample t -tests between all combinations of the three margins on the superior side of the craniotomy and between the three margins on the anterior side.
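
The per-experiment statistics and the two-sample t-tests described above can be reproduced with standard tools. The margin values below are hypothetical and for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical margin measurements (mm) at two superior locations (d1, d2)
# across seven cases; negative values would indicate an exposed sinus.
d1 = np.array([1.2, -0.5, 0.8, 2.1, 0.3, 1.7, 0.9])
d2 = np.array([2.0,  0.1, 1.5, 2.8, 1.1, 2.3, 1.6])

# Mean and (sample) standard deviation of the margins, as reported per case
print(f"d1: {d1.mean():.1f} ± {d1.std(ddof=1):.1f} mm")
print(f"d2: {d2.mean():.1f} ± {d2.std(ddof=1):.1f} mm")

# Two-sample t-test between two margin locations on the same side
t_stat, p_val = stats.ttest_ind(d1, d2)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```

The same call would be repeated for each pair among d1–d3 and among d4–d6 to mirror the comparisons performed in the study.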

Fig. 5.

Fig. 5

A rendering of the inside of the skull showing the craniotomy, the transverse and sigmoid sinuses, and the internal auditory canal (IAC). We measured the margins d1 to d6 as a measure of how close the surgeons were able to place the craniotomy to the transverse and sigmoid sinuses without overlapping them.

Surgeon Feedback

After finishing each experiment, we asked the surgeons to state the visualization methods they relied on during the alignment task and why they chose those methods. We furthermore asked the surgeons for feedback on the techniques they used to achieve optimal alignment. Once all the experiments were finished, the surgeons were asked to name their preferred visualization method.

Results

Accuracy Evaluation

No dural sinuses were visibly damaged during any of the dissections, and the bone window allowed access to the CPA as expected for both surgeons.

The measured margins between the craniotomy and the dural sinuses for both surgeons can be seen in Fig. 6 . Negative margins mean that the transverse sinus (d1, Fig. 5 ) or the sigmoid sinus (d4–d6, Fig. 5 ) was exposed, which the surgeons tried to avoid during craniotomy planning. Surgeon A placed his first craniotomy very accurately, close to the sinuses ( Fig. 6A, C ). In his second and third surgeries, surgeon A placed the virtual rendering too far above the sinuses, exposing both the transverse and sigmoid sinuses in case 2 and only slightly exposing part (d1 and d2) of the transverse sinus in case 3. After these first experiments, surgeon A successfully placed all other osteotomies very close to the sinuses without exposing them.

Fig. 6.

Fig. 6

Box plots depicting the measured margins between the craniotomy and the transverse and sigmoid sinus. ( A, B ) The margins measured for the seven cases performed by both surgeons. The symbols o represent the six individual margin measurements, the x is the mean margin and the center line of the box plot the median margin. ( C ) An overview of the mean margins for all cases performed by surgeon A (cases 1–7) and surgeon B (cases 8–14; error bars indicate ± one standard deviation). ( D ) The mean margins over all cases at each anatomical location d1 to d6 for both surgeons (error bars indicate ± one standard deviation).

The median margin for surgeon A was d_med = 0.55 mm, and on average surgeon A had a margin of d_avg = 0.6 ± 4.7 mm.

Surgeon B had a positive margin between the craniotomy and the sinuses in all cases, meaning that he never exposed a sinus ( Fig. 6B and C ). The median margin of surgeon B was slightly higher, d_med = 2.8 mm, with an average of d_avg = 3.7 ± 2.3 mm.

Considering the margin location, there was also a trend for both surgeons that margin d3 at the transverse sinus ( Fig. 6D ) was slightly higher than margins d1 and d2, and margin d6 at the sigmoid sinus was slightly higher than margins d4 and d5. A statistically significant difference could only be observed between d4 and d6 ( p < 0.04) for surgeon A, and between d4 and d6 ( p < 0.005) and d5 and d6 ( p < 0.006) for surgeon B.

The MagicLeap headset did not obstruct the surgeon's ability to mark the craniotomy. The cable connecting the headset to the computing unit did not interfere with the procedure, since the computing unit was stowed safely on the surgeon's back and the cable was kept away from arms and equipment.

The manual 3D alignment of the virtual rendering with the skin of the cadaver head, including setting the virtual landmarks and manually aligning the object, took about two minutes per case.

Surgeon Feedback

The structures used by the surgeons for visual alignment were the pinna and the skin over the occiput. After exposing the cortex, the surgeons readjusted the alignment using the virtual rendering of the bone for each cadaver, switching between renderings of the skin and of the bone. The surgeons relied on characteristic landmarks on the skull, such as the mastoid tip, for alignment. The surgeons deployed several techniques to overcome occlusion issues and to change focus between the virtual and real models during the manual alignment task.

To “turn off” the virtual rendering and quickly switch between a view of the real head and the virtual rendering, the surgeons preferred not to disable the rendering via the controller but rather to move their head toward the object until the near clipping plane of the AR HMD automatically removed the virtual rendering. This allowed gradual removal of the virtual rendering based on the user's head position. Another option the surgeons used was to vary the intensity of illumination on the cadaver. By sufficiently increasing the lighting intensity, the virtual rendering overlay became almost invisible to the surgeon.

Both surgeons preferred visualization methods with lower occlusion of the real anatomy and clear edges that help with alignment of the virtual rendering. The preferred visualization methods were the Fresnel and outline shaders. The Fresnel shader was used for both skin and bone alignment, since it provides clear details for large structures, such as the pinna, as well as for fine structures such as skull crevices and surface irregularities. The outline shader was mainly used for large structures like the pinna, since small details of the skull anatomy were not displayed well with that shader. The wireframe shader contained a very fine mesh, leading to strong occlusion of the real anatomy, which obstructed alignment. The edges of the transparent shader were not as clear as those of the Fresnel and outline shaders, which complicated alignment.

The surgeons also used instruments to point at recognizable bone structures, especially the mastoid tip, but also small crevices from emissary veins if present, and then moved between the virtual rendering and the real-world view to test alignment of the structures. To estimate the accurate depth of the virtual rendering, the surgeons initially aligned the model, then moved their head to the side of the virtual model to perceive both the virtual model and the real cadaver head from a different angle. While surgeon A stated that this method helped with alignment in some cases, both surgeons stated that this method was unreliable due to “swimming” of the virtual model to a slightly different location after head movement.

Discussion

We have presented a manual AR alignment method for medical imaging data to guide placement of the retrosigmoid craniotomy during cadaver dissections. The ability to see the location of the sigmoid and transverse sinuses inside the skull gave the surgeons confidence to place the craniotomy in an optimal location. When using conventional surgical navigation devices, the real patient and the medical imaging data are brought into the same coordinate system on an external 2D screen. These systems require the surgeon to rely completely on the hardware and software of the navigation system. A recent survey showed that both otolaryngologists and neurosurgeons often omit navigation systems because they are “complicated” and because they “perceived the devices to be inaccurate.” 27

The AR guidance presented here augmented the surgeon's abilities by allowing manual alignment of the anatomy until the surgeon was comfortable with the perceived location of the internal anatomy. This 3D view of the anatomy superimposed on the patient's skull provided the surgeons with the ability to plan the procedure and make adjustments to the planned craniotomy.

The preparation and alignment of the AR-based guidance system took less than 2 minutes for the initial alignment to the skin and another 2 minutes for the alignment once the skull bone was exposed. This is considerably faster than what is needed to set up traditional surgical navigation systems, including hardware setup, marker attachment, and calibration. The average planning time on a conventional workstation was, in comparison, reported to be 12 minutes (range: 8–20 minutes). 28

The measured margins compare favorably against landmark-based techniques for locating the transverse-sigmoid sinus junction (TSSJ), which corresponds to the corner of the craniotomy window between d3 and d4. Landmark-based techniques estimate this junction with accuracies between 6 and 18 mm, 29 considerably worse than the average of 1 to 4 mm (maximum of 8 mm, in case 2) achieved by our AR-guided technique. Da Silva et al 28 have shown that the asterion was located above the TSSJ in only 7 out of 30 cases, with the asterion varying from 13 mm medial to 15 mm inferior to the virtual position of the TSSJ.

The fiducial registration error (FRE) of commercial navigation devices for skull-base surgery is reported to be between 0.6 and 2.7 mm, 30 0.7 to 0.9 mm, 31 and up to 2 mm, 28 and depends on the fiducial localization error (FLE), which is lowest for skull-implanted fiducials 32 and slightly higher for anatomical landmarks and adhesive fiducials. 33 Since we relied on manual alignment, we did not measure the FRE but only the final target registration error (TRE), in the form of the distance between the craniotomy window and the sigmoid and transverse sinuses. We thus expect our error to be slightly higher than the FRE, since it depends not only on the registration of the medical imaging data but also on the accuracy of the medical procedure, consisting of marking the tissue and drilling. To optimally compare our technique against other approaches, future work should have the same surgeons perform the retrosigmoid approach in three groups: (1) one group where targeting is performed with a conventional neuronavigation system, (2) one group where the location is marked based on anatomical landmarks, and (3) one group where targeting is performed with our AR approach.

Several factors may have contributed to suboptimal craniotomy margins:

Craniotomy Planning

Surgeon A only used the virtual rendering of the sigmoid sinus to plan the craniotomy. However, depending on the viewing angle, the sigmoid sinus appears at a slightly different position underneath the skull, and the planned location might not be optimal. For this reason, it may be beneficial to plan the craniotomy placement beforehand using surgical rehearsal software such as CardinalSim, 23 24 which allows the surgeon to plan the retrosigmoid approach on a PC and to select and test various craniotomy sizes and locations before the actual surgery. This is supported by the finding that we did not observe any negative margins for surgeon B. Since surgeon B had no experience with the retrosigmoid approach, he planned the craniotomy placement prior to surgery with CardinalSim. While the results of surgeons A and B are not directly comparable due to the different planning procedures and different levels of experience with AR and with the retrosigmoid approach, the entirely positive margins for surgeon B suggest that his AR experience and the use of planning software may have been beneficial in the planning process.

Learning

Surgeon A became familiar with the MagicLeap HMD and user interface during a short training procedure beforehand on a mannequin head but did not have other prior experience with AR HMDs or AR-guided surgery. While his first case was highly accurate, he struggled with accurate alignment in the second case, improved in the third case, and aligned cases 4 to 7 with very high accuracy, close to the transverse and sigmoid sinuses. With increasing case numbers, surgeon A became more confident with the usage of the AR system and developed his own techniques for accurate alignment, such as pointing a tool at the mastoid tip and moving toward and away from the cadaver head to quickly switch between views of the virtual rendering and the cadaver head. This suggests that a learning effect in the usage of the AR device translated into the higher accuracy of his last four cases.

Surgeon B was already very familiar with the MagicLeap AR HMD and AR-guided surgery. Even though he had not done any lateral skull base procedures prior to these experiments, surgeon B achieved consistent results with margins between 1 and 6 mm for all of his surgeries. Only his first case had slightly higher margins, especially at the transverse sinus (d1) but otherwise no clear trend in the margins could be observed.

This supports the hypothesis that learning effects and familiarity with the AR HMD play an important role in accurate alignment and a safe procedure.

Tissue Deformation

Despite careful positioning, it is possible that anatomy relied on for alignment, such as the pinna, underwent deformation after the CT scan acquisition. This would cause the virtual rendering to differ from the real head and thus could lead to inaccurate alignment of the virtual rendering with the real head.

Perception

The manual alignment task depended on the perception of the surgeon and effectively translated to a 3D shape-matching task for the AR user. Shape matching in 2D is a trivial task with 3 degrees of freedom (DoF). An online search for shape-matching games shows a myriad of games recommended for 3- to 5-year-old children in which the goal is to align a pair of 2D shapes. Ideally, 3D shape matching would be as simple as these 2D games. Unfortunately, besides having 6 DoF instead of only 3, limitations of the optical system make this procedure more difficult for the presented AR application. 3D shape matching has been tested in a virtual reality (VR) environment, where the researchers showed that matching accuracy depends on the shader of the virtual rendering. 26 Depth perception is important for accurate 3D shape matching, and the screens used in VR displays provide perfect occlusion, one of the strongest near-field depth cues. 34 While there is work to improve occlusion for see-through AR HMDs, 35 this occlusion depth cue is not available on the MagicLeap. Fixed focal planes, as in waveguide displays, lead to a conflict in which the user's eyes can accommodate either on the real object at approximately 40-cm distance or on the focal plane of the waveguide display, which lies at 1 and 3 m for the MagicLeap. 36 Moving the head to the side to check the depth dimension of the virtual rendering also did not solve this problem, as the virtual rendering tended to slightly swim away during the surgeon's head movement. While a visual calibration was performed before every experiment, the observed swim suggests that the IPD might have been measured incorrectly for the surgeon. Other depth cues, like ocular parallax, 37 are also limited in AR HMDs, complicating the manual alignment task in AR. A reflective AR setup that shows the augmentations from multiple viewpoints 38 may be an important approach to facilitate this alignment task.

Properties of the virtual object, such as the shader or texture, can influence accurate depth perception. 39 The Fresnel and outline shaders facilitated the alignment task by allowing the surgeon to align with the help of characteristic borders and corners of the anatomy and the size of structures such as the pinna. 26 If the IPD was calibrated incorrectly, an alignment based on the size of a structure would lead to an error in the perceived depth of the virtual rendering. This is not necessarily a problem in our experiments if the AR user perceives the virtual rendering as accurately aligned. However, if the AR user moves their head to improve their depth perception, the virtual rendering will move and the alignment will become inaccurate.

Tissue Segmentation

Accurate tissue segmentation was time consuming and is also a possible source of error. In the CT data of the cadaver head, the sigmoid sinus has very low contrast with the surrounding fluid, and during segmentation it is possible to over- or underestimate the shape of the sigmoid sinus in some places. We took care to segment the sigmoid sinus in detail, and the process was undertaken by a highly experienced user. In living patients, this segmentation may become easier through the use of intravenous contrast agents or the concurrent use of MRI. The segmentation workload can potentially be reduced by deploying neural networks trained on manually segmented datasets to support the segmentation, such as the NVIDIA Clara SDK. 40

Motion

The cadaver head was positioned and fixed using a head holder to avoid movement of the head during the procedures. While movement was possible, it likely did not contribute to misalignment during our experiments, and the realignment step performed after exposure of the skull reduced the impact of any inadvertent movement during skin flap elevation.

Anatomy

Anatomical variations are common and expected in surgical practice, and differences in the position and caliber of the venous sinuses can influence the surgeon's judgment and lead to an increase in the measured margins, especially with respect to the outermost margins d1 and d6. The observed difference in margin size by location can also likely be attributed to anatomical variations, such as the shape of the sinuses, which may explain the significantly higher average margin for d6.

The surface anatomy of the temporal bone has few details to ensure identification of the sigmoid and transverse sinuses. The emissary vein foramina (natural bone openings for veins derived from the sigmoid sinus) may provide additional information about the position of the venous sinuses. In some cases, those foramina helped the surgeons match the anatomy and identify the best location for the craniotomy.

For the sake of assessing the use of the AR system in making the craniotomy, the surgeons performed the craniotomy solely on the basis of the planning process. They specifically avoided any subsequent bone removal at the margins based on the real-world identification of the venous sinuses. The lack of such corrections to the initial craniotomy resulted in the rather smooth margins shown in Fig. 5. Margins greater than 4 to 5 mm would still have required additional bone removal to optimize exposure and reduce the need for cerebellar retraction. However, the closer the planned craniotomy was to the ideal location, the less time and effort would have been needed to make this correction.

Conclusion

In this study, we demonstrated how the retrosigmoid approach can be planned during cadaver dissection using AR guidance. We have shown that this technique can help surgeons intuitively navigate the complex anatomy of the posterolateral skull base adjacent to the transverse and sigmoid sinuses.

On the one hand, we observed how an experienced surgeon with no prior AR experience quickly learned to effectively use the AR guidance interface to accurately plan and perform the retrosigmoid approach. On the other hand, we observed how a surgeon with AR experience but no prior experience in lateral skull base surgery could use the AR guidance to quickly and safely adapt his skills to the retrosigmoid approach.

There are still limitations with respect to the optical system for aligning virtual renderings with the real world, and further validation of safety and efficacy compared with standard surgical navigation systems is needed. Despite these limitations, the encouraging results compared with common surgical skull-base targeting techniques make this study a step toward the use of these tools to reduce operative time and improve safety and efficacy for patients. Furthermore, this system can be a useful tool for enhancing the training experience of students by visualizing internal anatomy on cadavers, which can lead to a better understanding of the surgical procedure.

Conflict of Interest None declared.

* These authors contributed equally.

References

1. Vávra P, Roman J, Zonča P, et al. Recent development of augmented reality in surgery: a review. J Healthc Eng. 2017;2017:4574172. doi: 10.1155/2017/4574172.
2. Rolland JP, Fuchs H. Optical versus video see-through head-mounted displays in medical visualization. Presence Teleoperators Virtual Environ. 2000;9(03):287–309.
3. Cho J, Rahimpour S, Cutler A, Goodwin CR, Lad SP, Codd P. Enhancing reality: a systematic review of augmented reality in neuronavigation and education. World Neurosurg. 2020;139:186–195. doi: 10.1016/j.wneu.2020.04.043.
4. Perkins S, Lin M, Srinivasan S, Wheeler A, Hargreaves B, Daniel B. A mixed-reality system for breast surgical planning. Presented at: IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct); October 30, 2017; Nantes, France.
5. Fuchs H, Livingston MA, Raskar R, et al. Augmented reality visualization for laparoscopic surgery. In: Proceedings of the First International Conference on Medical Image Computing and Computer-Assisted Intervention; 1998; Cambridge, MA. Lecture Notes in Computer Science, vol 1496:934–943.
6. Bajura M, Fuchs H, Ohbuchi R. Merging virtual objects with the real world: seeing ultrasound imagery within the patient. Comput Graph. 1992;26(02):203–210.
7. Jackler R. Atlas of Skull Base Surgery and Neurotology. New York: Thieme; 2008:2.
8. Arnold H, Schulze M, Wolpert S, et al. Positioning a novel transcutaneous bone conduction hearing implant: a systematic anatomical and radiological study to standardize the retrosigmoid approach, correlating navigation-guided, and landmark-based surgery. Otol Neurotol. 2018;39(04):458–466. doi: 10.1097/MAO.0000000000001734.
9. Heermann R, Schwab B, Issing PR, Haupt C, Lenarz T. Navigation with the StealthStation™ in skull base surgery: an otolaryngological perspective. Skull Base. 2001;11(04):277–285. doi: 10.1055/s-2001-18634.
10. Ledderose GJ, Hagedorn H, Spiegl K, Leunig A, Stelter K. Image guided surgery of the lateral skull base: testing a new dental splint registration device. Comput Aided Surg. 2012;17(01):13–20. doi: 10.3109/10929088.2011.632783.
11. Birkfellner W, Watzinger F, Wanschitz F, Ewers R, Bergmann H. Calibration of tracking systems in a surgical environment. IEEE Trans Med Imaging. 1998;17(05):737–742. doi: 10.1109/42.736028.
12. Fischer GS. Electromagnetic tracker characterization and optimal tool design [dissertation]. Baltimore, MD: Johns Hopkins University; 2005:233.
13. Eggers G, Mühling J, Marmulla R. Image-to-patient registration techniques in head surgery. Int J Oral Maxillofac Surg. 2006;35(12):1081–1095. doi: 10.1016/j.ijom.2006.09.015.
14. Azimi E, Qian L, Navab N, Kazanzides P. Alignment of the virtual scene to the 3D display space of a mixed reality head-mounted display. Accessed August 13, 2021 at: https://arxiv.org/pdf/1703.05834.pdf
15. Kruijff E, Swan JE, Feiner S. Perceptual issues in augmented reality revisited. Presented at: IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct); November 22, 2010:3–12; Seoul, Korea (South).
16. Mitsuno D, Ueda K, Hirota Y, Ogino M. Effective application of mixed reality device HoloLens: simple manual alignment of surgical field and holograms. Plast Reconstr Surg. 2019;143(02):647–651. doi: 10.1097/PRS.0000000000005215.
17. Li Y, Chen X, Wang N, et al. A wearable mixed-reality holographic computer for guiding external ventricular drain insertion at the bedside. J Neurosurg. 2018;131(05):1–8. doi: 10.3171/2018.4.JNS18124.
18. Besharati Tabrizi L, Mahvash M. Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique. J Neurosurg. 2015;123(01):206–211. doi: 10.3171/2014.9.JNS141001.
19. Incekara F, Smits M, Dirven C, Vincent A. Clinical feasibility of a wearable mixed-reality device in neurosurgery. World Neurosurg. 2018;118:e422–e427. doi: 10.1016/j.wneu.2018.06.208.
20. McJunkin JL, Jiramongkolchai P, Chung W, et al. Development of a mixed reality platform for lateral skull base anatomy. Otol Neurotol. 2018;39(10):e1137–e1142. doi: 10.1097/MAO.0000000000001995.
21. Neves CA, Vaisbuch Y, Leuze C, et al. Application of holographic augmented reality for external approaches to the frontal sinus. Int Forum Allergy Rhinol. 2020;10(07):920–925. doi: 10.1002/alr.22546.
22. Fedorov A, Beichel R, Kalpathy-Cramer J, et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn Reson Imaging. 2012;30(09):1323–1341. doi: 10.1016/j.mri.2012.05.001.
23. Monfared A, Mitteramskogler G, Gruber S, Salisbury JK Jr, Stampfl J, Blevins NH. High-fidelity, inexpensive surgical middle ear simulator. Otol Neurotol. 2012;33(09):1573–1577. doi: 10.1097/MAO.0b013e31826dbca5.
24. Chan S, Li P, Locketz G, Salisbury K, Blevins NH. High-fidelity haptic and visual rendering for patient-specific simulation of temporal bone surgery. Comput Assist Surg (Abingdon). 2016;21(01):85–101. doi: 10.1080/24699322.2016.1189966.
25. Leuze C, Sathyanarayana S, Daniel BL, McNab JA. Landmark-based mixed-reality perceptual alignment of medical imaging data and accuracy validation in living subjects. Presented at: IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct); December 14, 2020:1333; Porto de Galinhas, Brazil.
26. Martin-Gomez A, Eck U, Navab N. Visualization techniques for precise alignment in VR: a comparative study. Presented at: IEEE Conference on Virtual Reality and 3D User Interfaces (VR); August 15, 2019:735–741; Osaka, Japan.
27. Jödicke A, Ottenhausen M, Lenarz T. Clinical use of navigation in lateral skull base surgery: results of a multispecialty national survey among skull base surgeons in Germany. J Neurol Surg B Skull Base. 2018;79(06):545–553. doi: 10.1055/s-0038-1635258.
28. da Silva EB Jr, Leal AG, Milano JB, da Silva LFM Jr, Clemente RS, Ramina R. Image-guided surgical planning using anatomical landmarks in the retrosigmoid approach. Acta Neurochir (Wien). 2010;152(05):905–910. doi: 10.1007/s00701-009-0553-5.
29. Hall S, Peter Gan YC. Anatomical localization of the transverse-sigmoid sinus junction: comparison of existing techniques. Surg Neurol Int. 2019;10:186. doi: 10.25259/SNI_366_2019.
30. Wiltfang J, Rupprecht S, Ganslandt O, et al. Intraoperative image-guided surgery of the lateral and anterior skull base in patients with tumors or trauma. Skull Base. 2003;13(01):21–29. doi: 10.1055/s-2003-820554.
31. Grauvogel TD, Engelskirchen P, Semper-Hogg W, Grauvogel J, Laszig R. Navigation accuracy after automatic- and hybrid-surface registration in sinus and skull base surgery. PLoS One. 2017;12(07):e0180975. doi: 10.1371/journal.pone.0180975.
32. Mascott CR, Sol JC, Bousquet P, Lagarrigue J, Lazorthes Y, Lauwers-Cances V. Quantification of true in vivo (application) accuracy in cranial image-guided surgery: influence of mode of patient registration. Neurosurgery. 2006;59(1, suppl 1):ONS146–ONS156.
33. Wolfsberger S, Rössler K, Regatschnig R, Ungersböck K. Anatomical landmarks for image registration in frameless stereotactic neuronavigation. Neurosurg Rev. 2002;25(1–2):68–72.
34. Cutting J, Vishton P. Perceiving layout and knowing distances: the integration, relative potency, and contextual use of different information about depth. Percept Sp Motion. 1995;22(05):69–117.
35. Krajancich B, Padmanaban N, Wetzstein G. Factored occlusion: single spatial light modulator occlusion-capable optical see-through augmented reality display. IEEE Trans Vis Comput Graph. 2020;26(05):1871–1879. doi: 10.1109/TVCG.2020.2973443.
36. Schowengerdt B, Lin D, St Hilaire P. Multi-layer diffractive eyepiece. Accessed August 13, 2021 at: https://patentscope.wipo.int/search/en/detail.jsf?docId=EP321535057
37. Konrad R, Angelopoulos A, Wetzstein G. Gaze-contingent ocular parallax rendering for virtual reality. ACM Trans Graph. 2020;39(02):10.
38. Fotouhi J, Unberath M, Navab N, et al. Reflective-AR display: an interaction methodology for virtual-to-real alignment in medical robotics. IEEE Robot Autom Lett. 2020;5(02):2722–2729.
39. Diaz C, Walker M, Szafir DA, Szafir D. Designing for depth perceptions in augmented reality. Presented at: IEEE International Symposium on Mixed and Augmented Reality (ISMAR); October 9–13, 2017:111–122; Nantes, France.
40. NVIDIA Clara documentation. Accessed August 13, 2021 at: https://docs.nvidia.com/clara/tlt-mi/clara-train-sdk-v2.0/index.html

Articles from Journal of Neurological Surgery. Part B, Skull Base are provided here courtesy of Thieme Medical Publishers
