Neurologia medico-chirurgica. 2015 Jul 28;55(8):674–679. doi: 10.2176/nmc.tn.2014-0278

A Microscopic Optically Tracking Navigation System That Uses High-resolution 3D Computer Graphics

Masanori YOSHINO,1 Toki SAITO,2 Taichi KIN,1 Daichi NAKAGAWA,1 Hirofumi NAKATOMI,1 Hiroshi OYAMA,2 Nobuhito SAITO1
PMCID: PMC4628159; PMID: 26226982

Abstract

Three-dimensional (3D) computer graphics (CG) are useful for preoperative planning of neurosurgical operations. However, application of 3D CG to intraoperative navigation is not widespread because existing commercial operative navigation systems do not show 3D CG in sufficient detail. We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG. This article presents the technical details of our microscopic optically tracking navigation system, which consists of three components: the operating microscope, the registration system, and the image display system. An optical tracker attached to the microscope monitors the position and attitude of the microscope in real time; point-pair registration aligns the operating room coordinate system with the image coordinate system; and the image display system shows the 3D CG image in the field-of-view of the microscope. Nine neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment to assess the accuracy of this system using a phantom model, and the accuracy of our system was compared with that of a commercial system. The 3D CG provided by the navigation system coincided well with the operative scene under the microscope. Target registration error for our system was 2.9 ± 1.9 mm. Our navigation system provides a clear image of the operation position and the surrounding structures. Systems like this may reduce intraoperative complications.

Keywords: computer graphics, microscope, neuronavigation

Introduction

The brain contains many important structures, and precise knowledge of three-dimensional (3D) anatomy is needed during brain surgery to avoid intraoperative complications. Although two-dimensional medical images such as magnetic resonance images (MRIs) and computed tomography (CT) images have long been used to inform surgeons of anatomical structure, multi-modality fused 3D computer graphics (CG) images are becoming more common as image processing technology develops.1–5) Recent studies have shown that multi-modality fused 3D CG images are useful for depicting complex brain anatomy, such as that involved in skull base disease,3,6,7) and we recently reported that high-resolution multi-modality fused 3D CG images were useful for preoperative planning.8–10) Our multi-modality fused 3D CG images are created by registering all the necessary structures, including perforators and stretched cranial nerves, and constructing a 3D model using a surface-rendering method and a volume-rendering method. In the surface-rendering method, the model employs multiple modalities and multiple thresholds for one tissue. As a result, we are able to create high-resolution 3D CG images that depict brain tissue with the same accuracy as the actual operative field.

Despite these advances, 3D CG images are not commonly used to provide intraoperative assistance because existing commercial operative navigation systems can show only simple 3D CG images that originate from a medical image, and are not able to show 3D CG images that are sufficiently detailed to serve as a source of intraoperative information.11–14) To address this issue, we aimed to create an operative navigation system that is capable of using high-resolution 3D CG images. This article presents the technical details of our microscopic optically tracking navigation system.

Materials and Methods

I. Navigation system

We incorporated information on the position, focal length, and magnification of the operating microscope into our navigation system, creating a microscopic optically tracking navigation system15,16) (Fig. 1a). The navigation system consists of three components: (1) the operating microscope, (2) the registration system, and (3) the image display system. The operating microscope (OME-9000; Olympus, Tokyo) is used together with optical trackers (Rigid Body; Northern Digital Inc., Ontario, Canada) and a communications unit. Optical trackers are attached to both the patient’s body and the operating microscope (Fig. 1b, arrow), and their real-time positions are monitored using an optical tracking device (Polaris; Northern Digital Inc., Ontario, Canada). The communications unit transmits the focal length and magnification of the microscope to the registration system. The registration system calculates the matrices needed for navigation: the registration matrix between the patient tracker coordinate system and the patient image coordinate system, the relative matrices between trackers, the camera calibration matrix between the microscope camera and the microscope tracker, and the projection matrix corresponding to the microscope camera. The image display system creates a real-time navigation view based on the matrices from the registration system (Fig. 1b, arrowhead). In this view, the 3D CG images are displayed in the same field of view as the operative scene under the microscope (Figs. 3, 4).
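In effect, the navigation view is produced by chaining these matrices to carry a point from the patient image coordinate system into the microscope camera image. The following is a minimal sketch of that composition, not the authors' code: the transforms (identities here) and the pinhole intrinsics matrix are invented placeholders, since the actual calibration values are not published in this article.

```python
# Illustrative sketch (not the authors' code): chaining the coordinate
# transforms described above as homogeneous 4x4 matrices.
import numpy as np

def to_homogeneous(p):
    """Append 1 to a 3D point so 4x4 transforms can act on it."""
    return np.append(p, 1.0)

# Hypothetical matrices; in the real system these come from registration,
# the optical tracking device, and prior camera calibration.
T_image_to_patient_tracker = np.eye(4)   # registration matrix
T_patient_to_scope_tracker = np.eye(4)   # relative matrix between trackers
T_scope_tracker_to_camera = np.eye(4)    # camera calibration matrix
K = np.array([[1000.0,    0.0, 640.0],   # invented pinhole intrinsics for the
              [   0.0, 1000.0, 512.0],   # current magnification/focal length
              [   0.0,    0.0,   1.0]])

def project(point_image_mm):
    """Map a point from patient image space to microscope camera pixels."""
    p = to_homogeneous(np.asarray(point_image_mm, dtype=float))
    p_cam = (T_scope_tracker_to_camera @
             T_patient_to_scope_tracker @
             T_image_to_patient_tracker @ p)
    uvw = K @ p_cam[:3]                  # pinhole projection
    return uvw[:2] / uvw[2]              # pixel coordinates (u, v)

print(project([10.0, -5.0, 120.0]))
```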

Fig. 1.

a: A conceptual diagram of the navigation system. b: An image of the navigation system. The arrow indicates the optical tracker, and the arrowhead indicates the cable that transmits the real-time position, focal length, and magnification of the operating microscope to the image display system. *: monitor of the navigation system, **: monitor of the operative microscope.

Fig. 3.

The 3D CG image displayed on the monitor of the navigation system corresponded with the operative scene under the microscope before (a) and after (b) the microscope was tilted to the right, and before (c) and after (d) the magnification of the microscope was increased. *: monitor of the navigation system, **: monitor of the operative microscope, CG: computer graphics, 3D: three-dimensional.

Fig. 4.

The opacity of each structure in the 3D CG image could be changed on the monitor of the navigation system, revealing anatomical structures that were obstructed from view in the phantom model. a: Image before opacity was manipulated. b: Image after the opacity was changed to make the brain translucent. c: Image after the opacity was changed to make the brain transparent. CG: computer graphics, 3D: three-dimensional.

Point-pair registration is used to create the registration matrix between the patient tracker coordinate system and the patient image coordinate system. To reduce measurement errors, fiducial markers are attached to the skin of the patient (two on the bilateral roots of the zygoma and two on the forehead) and are used for registration17) (Fig. 2a). Camera calibration data are acquired in advance for each magnification and focal length of the microscope.
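This article does not specify which solver it uses for point-pair registration; a standard closed-form choice is the SVD-based Kabsch/Horn method, sketched below with invented fiducial coordinates purely for illustration.

```python
# Minimal sketch of paired-point rigid registration (Kabsch/Horn method).
# One standard approach; not necessarily the solver used by the authors.
import numpy as np

def point_pair_registration(fiducials_image, fiducials_tracker):
    """Return the 4x4 rigid transform mapping image points onto tracker points."""
    A = np.asarray(fiducials_image, dtype=float)    # Nx3, image space (mm)
    B = np.asarray(fiducials_tracker, dtype=float)  # Nx3, tracker space (mm)
    ca, cb = A.mean(axis=0), B.mean(axis=0)         # centroids
    H = (A - ca).T @ (B - cb)                       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T         # optimal rotation
    t = cb - R @ ca                                 # optimal translation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Four skin fiducials, as in the system described above (coordinates invented;
# this toy example is a pure translation of +10, +5, +3 mm).
img = [[0, 0, 0], [80, 0, 0], [40, 60, 20], [40, 60, -20]]
trk = [[10, 5, 3], [90, 5, 3], [50, 65, 23], [50, 65, -17]]
print(point_pair_registration(img, trk))
```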

Fig. 2.

a: A 3D model constructed from magnetic resonance images obtained from a healthy male volunteer. b: A phantom model constructed from the 3D model using a 3D printer. 3D: three-dimensional.

All processing is performed on a personal computer (Precision M6800; Dell, Plano, TX, USA; CPU: Intel Core i7-4930MX, 3.0 GHz; RAM: 16.0 GB; graphics card: NVIDIA Quadro K5100M).

II. Assessment

To assess the accuracy of this system, nine neurosurgeons (seven males, two females; mean age 32.9 years) participated in an experiment using a phantom model. Five were board-certified specialists in neurosurgery and four were non-board-certified residents. Accuracy was quantified using target registration error (TRE),18) the distance between the true position of a target in the phantom model and its measured position in the imaging space after registration. TRE was calculated for the following targets: the tragus, the nasion, the posterior limit of the splenium of the corpus callosum, and the fastigium.
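For reference, TRE as defined above reduces to a Euclidean distance per target; the snippet below computes per-target errors and their summary statistics, with all coordinates invented for illustration.

```python
# Worked example of the TRE metric: distance between each target's true
# position and its registered position. All values are invented.
import numpy as np

def tre(true_points_mm, mapped_points_mm):
    """Euclidean distance per target between true and registered positions."""
    diff = np.asarray(true_points_mm) - np.asarray(mapped_points_mm)
    return np.linalg.norm(diff, axis=1)

true_pts = [[62.1, 10.4, -33.0], [0.0, 81.5, -12.2]]  # e.g., tragus, nasion
mapped   = [[63.5, 11.9, -31.1], [1.2, 83.0, -10.5]]  # after registration
errors = tre(true_pts, mapped)
print(errors, errors.mean(), errors.std())  # per-target TRE, mean, SD (mm)
```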

The phantom model was created from T1- and T2-weighted MRI and time-of-flight magnetic resonance angiography (TOF-MRA) images that were obtained from a healthy male volunteer using a 3.0-T system (Signa 3.0T; GE Healthcare, Milwaukee, WI, USA) equipped with an eight-channel phased-array head coil. The imaging parameters for T1-weighted images were: repetition time (TR), 5.9 ms; echo time (TE), 2.4 ms; slice thickness, 1.5 mm; field of view (FOV), 24 cm; matrix size, 256 × 256; flip angle, 15°. The imaging parameters for T2-weighted images were: TR, 4,200 ms; TE, 85.4 ms; slice thickness, 1.5 mm; FOV, 24 cm; matrix size, 256 × 256; flip angle, 90°. The imaging parameters for TOF-MRA were: TR, 22 ms; TE, 2.2 ms; slice thickness, 1.0 mm; FOV, 24 cm; matrix size, 256 × 256; flip angle, 20°. Data were provided as image stacks coded in Digital Imaging and Communications in Medicine (DICOM) format and were processed with Avizo 6.3 software (Visualization Science Group, Bordeaux, France) on a personal computer (Precision T7500; Dell, Plano, TX, USA; CPU: Intel Xeon X5550, 2.67 GHz; RAM: 8.00 GB; graphics card: NVIDIA Quadro FX5800) for 3D image reconstruction using a previously reported method.8–10) The 3D CG image was stored in Standard Triangulated Language19) (STL) format and the phantom model was created from these data using a 3D printer (Connex 500; Stratasys Ltd., Eden Prairie, MN, USA).
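The reconstruction above was performed in Avizo; purely as an illustration, a rough open-source analogue of the same DICOM-to-STL pipeline could be written with pydicom, scikit-image, and numpy-stl as sketched below. The file paths and the single intensity threshold are invented; real work would use per-tissue, multi-modality thresholds as described in the Introduction.

```python
# Illustrative open-source analogue of the DICOM-to-STL pipeline described
# above (the authors used Avizo). Requires pydicom, scikit-image, numpy-stl.
import glob
import numpy as np
import pydicom
from skimage import measure
from stl import mesh

# Load a DICOM series into a 3D volume, sorted by slice position.
slices = [pydicom.dcmread(f) for f in glob.glob("series/*.dcm")]  # path invented
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)

# Extract an isosurface at a chosen intensity threshold (value invented).
verts, faces, _, _ = measure.marching_cubes(volume, level=300.0)

# Write the triangle mesh as binary STL for 3D printing.
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
surface.vectors[:] = verts[faces]        # one 3x3 vertex block per triangle
surface.save("phantom_surface.stl")
```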

Results

The setup and the final configuration of the navigation system are shown in Fig. 1. We were able to create a high-resolution 3D polygon model from the T1- and T2-weighted MRI and TOF-MRA data (Fig. 2a) and created a phantom model from this 3D CG model (Fig. 2b). The image on the monitor of the navigation system corresponded well with the operative scene under the microscope when the microscope was tilted (Fig. 3a, b) and when the magnification and focal length of the microscope were changed (Fig. 3c, d). The image on the monitor of the navigation system changed as quickly as the operative scene under the microscope when the axis of the microscope was systematically tilted forward, backward, and side-to-side, and when the magnification of the microscope was systematically changed. It was possible to change the opacity of each structure in the 3D CG image on the monitor of the navigation system, so that the image showed the anatomical structures that were obstructed from view in the phantom model (Fig. 4).

The mean ± standard deviation TRE for our navigation system for all four locations and all nine neurosurgeons was 2.9 ± 1.9 mm.

Discussion

We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG images. In this study we showed that the 3D CG image displayed on the monitor of this navigation system corresponded well with the operative scene under the microscope.

One of the biggest merits of this system is that it enables neurosurgeons to intuitively comprehend the 3D anatomical structure peripheral to the operation position while using the microscope. Existing commercial operative navigation systems are able to show only simple 3D images that originate from a medical image, and are not able to show small anatomical structures such as perforators and cranial nerves that have been three-dimensionally stretched by a tumor.20–22) As a result, conventional navigation systems do not show high-resolution 3D CG images of sufficient detail to be used for intraoperative assistance, and the operation position has to be confirmed on two-dimensional DICOM images when using these systems.12) Surgeons can estimate the 3D anatomical structure peripheral to the operation position from two-dimensional images during surgery, but doing so significantly increases the load placed on the surgeon. The navigation system we have developed allows surgeons to intuitively comprehend the 3D operative anatomy peripheral to the operation position because the 3D image displayed on the monitor of the navigation system is as accurate as the actual operative field8–10) and is aligned with the field-of-view of the microscope.

Other merits of our microscopic optically tracking navigation system include the sharing of operation position information between the operator and the assistant, and the ability to measure the operation position without the surgeon having to move the microscope away from the operative field, which increases safety. The sharing of operation position information may be useful for avoiding intraoperative complications, especially for novice surgeons, because it enables the assistant to guide the surgeon and correct the operative plan if the surgeon becomes disorientated during surgery.

Disadvantages of this system at present are that the surgeon has to look away from the operative field to see the navigation display, and the 3D models used in the navigation system cannot be updated as the brain structure is deformed. It is possible to solve the first issue by using augmented reality, i.e., by overlaying the 3D CG image on the microscope view. At present it is difficult to give a 3D appearance to the overlaid 3D CG images, because the overlaid 3D CG images are usually deep-seated structures such as tumors or cerebral arteries, so they look as if they are floating when displayed in the microscopic view.11,13,15,16,23) As such, we have not yet tried using augmented reality in this system, and development of a method to view the navigation display without looking away from the operative field is a challenge that remains. Though it is not possible to solve the second issue at the present time, development of a method to update the image on the navigation system as the brain structure is altered would be useful, and is a challenge for future consideration.
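To make the first limitation concrete, the simplest form of such an augmented reality overlay is an alpha blend of a registered CG rendering onto the camera frame, as sketched below with OpenCV. This is purely illustrative, since we have not implemented augmented reality in the present system; the file names and blend weights are invented.

```python
# Hedged illustration of the augmented-reality overlay discussed above:
# alpha-blending a rendered 3D CG frame onto the microscope camera image.
# File names and weights are invented; both images must share one size.
import cv2

microscope_frame = cv2.imread("microscope_view.png")  # live camera frame
cg_render = cv2.imread("cg_render.png")               # registered 3D CG frame

# Blend: 70% microscope view, 30% CG overlay. A semi-transparent overlay of
# deep-seated structures is exactly what produces the "floating" appearance
# noted above, since it carries no depth cues.
overlay = cv2.addWeighted(microscope_frame, 0.7, cg_render, 0.3, 0.0)
cv2.imwrite("overlay.png", overlay)
```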

The TRE for our system was as good as that for other reported systems.24–28) Nevertheless, a TRE of 2.9 ± 1.9 mm is still too large for the navigation system to resolve small anatomical structures, even if the 3D CG is as accurate as the actual operative field. Improving the accuracy of the system is therefore a future task.

Conclusion

We have developed a microscopic optically tracking navigation system that uses high-resolution 3D CG images. In this study we showed that this system provided a 3D CG image that coincided with the operative scene under the microscope. This system may reduce intraoperative complications because it will enable surgeons to intuitively comprehend the 3D operative anatomy.

Acknowledgments

This work was supported in part by a Grant-in-Aid for Challenging Exploratory Research (No. 25670618).

References

1) Kockro RA, Serra L, Tseng-Tsai Y, Chan C, Yih-Yian S, Gim-Guan C, Lee E, Hoe LY, Hern N, Nowinski WL: Planning and simulation of neurosurgery in a virtual reality environment. Neurosurgery 46: 118–135; discussion 135–137, 2000
2) Elder JB, Hoh DJ, Oh BC, Heller AC, Liu CY, Apuzzo ML: The future of cerebral surgery: a kaleidoscope of opportunities. Neurosurgery 62 (6 Suppl 3): 1555–1579; discussion 1579–1582, 2008
3) Stadie AT, Kockro RA, Reisch R, Tropine A, Boor S, Stoeter P, Perneczky A: Virtual reality system for planning minimally invasive neurosurgery. Technical note. J Neurosurg 108: 382–394, 2008
4) Du ZY, Gao X, Zhang XL, Wang ZQ, Tang WJ: Preoperative evaluation of neurovascular relationships for microvascular decompression in the cerebellopontine angle in a virtual reality environment. J Neurosurg 113: 479–485, 2010
5) Satoh T, Onoda K, Date I: Fusion imaging of three-dimensional magnetic resonance cisternograms and angiograms for the assessment of microvascular decompression in patients with hemifacial spasms. J Neurosurg 106: 82–89, 2007
6) Gandhe AJ, Hill DL, Studholme C, Hawkes DJ, Ruff CF, Cox TC, Gleeson MJ, Strong AJ: Combined and three-dimensional rendered multimodal data for planning cranial base surgery: a prospective evaluation. Neurosurgery 35: 463–470; discussion 471, 1994
7) Oishi M, Fukuda M, Ishida G, Saito A, Hiraishi T, Fujii Y: Presurgical simulation with advanced 3-dimensional multifusion volumetric imaging in patients with skull base tumors. Neurosurgery 68 (1 Suppl Operative): 188–199; discussion 199, 2011
8) Kin T, Oyama H, Kamada K, Aoki S, Ohtomo K, Saito N: Prediction of surgical view of neurovascular decompression using interactive computer graphics. Neurosurgery 65: 121–128; discussion 128–129, 2009
9) Kin T, Nakatomi H, Shojima M, Tanaka M, Ino K, Mori H, Kunimatsu A, Oyama H, Saito N: A new strategic neurosurgical planning tool for brainstem cavernous malformations using interactive computer graphics with multimodal fusion images. J Neurosurg 117: 78–88, 2012
10) Yoshino M, Kin T, Nakatomi H, Oyama H, Saito N: Presurgical planning of feeder resection with realistic three-dimensional virtual operation field in patient with cerebellopontine angle meningioma. Acta Neurochir (Wien) 155: 1391–1399, 2013
11) Kockro RA, Tsai YT, Ng I, Hwang P, Zhu C, Agusanto K, Hong LX, Serra L: Dex-ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery 65: 795–807; discussion 807–808, 2009
12) Gildenberg PL, Labuz J: Use of a volumetric target for image-guided surgery. Neurosurgery 59: 651–659; discussion 651–659, 2006
13) Rosahl SK, Gharabaghi A, Hubbe U, Shahidi R, Samii M: Virtual reality augmentation in skull base surgery. Skull Base 16: 59–66, 2006
14) Rohde V, Hans FJ, Mayfrank L, Dammert S, Gilsbach JM, Coenen VA: How useful is the 3-dimensional, surgeon’s perspective-adjusted visualisation of the vessel anatomy during aneurysm surgery? A prospective clinical trial. Neurosurg Rev 30: 209–216; discussion 216–217, 2007
15) King AP, Edwards PJ, Maurer CR Jr, de Cunha DA, Hawkes DJ, Hill DL, Gaston RP, Fenlon MR, Strong AJ, Chandler CL, Richards A, Gleeson MJ: A system for microscope-assisted guided interventions. Stereotact Funct Neurosurg 72: 107–111, 1999
16) Edwards PJ, King AP, Maurer CR, de Cunha DA, Hawkes DJ, Hill DL, Gaston RP, Fenlon MR, Jusczyzck A, Strong AJ, Chandler CL, Gleeson MJ: Design and evaluation of a system for microscope-assisted guided interventions (MAGI). IEEE Trans Med Imaging 19: 1082–1093, 2000
17) Mandava VR, Fitzpatrick JM, Maurer CR Jr, Maciunas RJ, Allen GS: Registration of multimodal volume head images via attached markers. Medical Imaging VI, International Society for Optics and Photonics, 1992, pp 271–282
18) Maurer CR, Maciunas RJ, Fitzpatrick JM: Registration of head CT images to physical space using a weighted combination of points and surfaces. IEEE Trans Med Imaging 17: 753–761, 1998
19) Hope T, Westlye LT, Bjørnerud A: The effect of gradient sampling schemes on diffusion metrics derived from probabilistic analysis and tract-based spatial statistics. Magn Reson Imaging 30: 402–412, 2012
20) Orringer DA, Golby A, Jolesz F: Neuronavigation in the surgical management of brain tumors: current and future trends. Expert Rev Med Devices 9: 491–500, 2012
21) Mezger U, Jendrewski C, Bartels M: Navigation in surgery. Langenbecks Arch Surg 398: 501–514, 2013
22) Gumprecht HK, Widenka DC, Lumenta CB: BrainLab VectorVision Neuronavigation System: technology and clinical experiences in 131 cases. Neurosurgery 44: 97–104; discussion 104–105, 1999
23) Iseki H, Masutani Y, Iwahara M, Tanikawa T, Muragaki Y, Taira T, Dohi T, Takakura K: Volumegraph (overlaid three-dimensional image-guided navigation). Clinical application of augmented reality in neurosurgery. Stereotact Funct Neurosurg 68 (1–4 Pt 1): 18–24, 1997
24) Shamir RR, Joskowicz L, Spektor S, Shoshan Y: Localization and registration accuracy in image guided neurosurgery: a clinical study. Int J Comput Assist Radiol Surg 4: 45–52, 2009
25) Shamir RR, Joskowicz L, Shoshan Y: Fiducial optimization for minimal target registration error in image-guided neurosurgery. IEEE Trans Med Imaging 31: 725–737, 2012
26) Roessler K, Ungersboeck K, Dietrich W, Aichholzer M, Hittmeir K, Matula C, Czech T, Koos WT: Frameless stereotactic guided neurosurgery: clinical experience with an infrared based pointer device navigation system. Acta Neurochir (Wien) 139: 551–559, 1997
27) Zheng G, Caversaccio M, Bächler R, Langlotz F, Nolte LP, Häusler R: Frameless optical computer-aided tracking of a microscope for otorhinology and skull base surgery. Arch Otolaryngol Head Neck Surg 127: 1233–1238, 2001
28) Suess O, Kombos T, Kurth R, Suess S, Mularski S, Hammersen S, Brock M: Intracranial image-guided neurosurgery: experience with a new electromagnetic navigation system. Acta Neurochir (Wien) 143: 927–934, 2001
