World Neurosurgery: X. 2021 Mar 13;11:100102. doi: 10.1016/j.wnsx.2021.100102

Development of Innovative Neurosurgical Operation Support Method Using Mixed-Reality Computer Graphics

Tsukasa Koike 1, Taichi Kin 1,, Shota Tanaka 1, Yasuhiro Takeda 1, Hiroki Uchikawa 1, Taketo Shiode 1, Toki Saito 2, Hirokazu Takami 1, Shunsaku Takayanagi 1, Akitake Mukasa 3, Hiroshi Oyama 2, Nobuhito Saito 1
PMCID: PMC8059082  PMID: 33898969

Abstract

Background

In neurosurgery, it is important to inspect the spatial correspondence between the preoperative medical image (virtual space) and the intraoperative findings (real space) to improve the safety of the surgery. Navigation systems and related modalities have been reported as methods for establishing this correspondence. However, because of brain shift accompanying craniotomy, registration accuracy is reduced. In the present study, to overcome these issues, we developed a spatially accurate method for registering medical fusion 3-dimensional computer graphics with the intraoperative brain surface photograph and measured its registration accuracy.

Methods

The subjects included 16 patients with glioma. Nonrigid registration using the landmark and thin-plate spline methods was performed between the fusion 3-dimensional computer graphics and the intraoperative brain surface photograph; the result was termed mixed-reality computer graphics. For the registration accuracy measurement, the target registration error was measured by two neurosurgeons at 10 points per case, located at the midpoints between landmarks.

Results

The number of target registration error measurement points was 160 in the 16 cases. The target registration error was 0.72 ± 0.04 mm. Aligning the intraoperative brain surface photograph and the fusion 3-dimensional computer graphics required ∼10 minutes on average. The average number of landmarks used for alignment was 24.6.

Conclusions

Mixed-reality computer graphics enabled highly precise spatial alignment between the real space and virtual space. Mixed-reality computer graphics have the potential to improve the safety of the surgery by allowing complementary observation of brain surface photographs and fusion 3-dimensional computer graphics.

Key words: Brain shift, Computer graphics, Glioma, Landmark, Mixed-reality, Target registration error, Thin-plate spline

Abbreviations and Acronyms: 2D, 2-Dimensional; 3D, 3-Dimensional; 3DCG, 3-Dimensional computer graphics; AR, Augmented reality; CT, Computed tomography; FOV, Field of view; MRCG, Mixed-reality computer graphics; MRI, Magnetic resonance imaging; TE, Echo time; TR, Repetition time

Introduction

In neurosurgery, it is important for the surgeon to compare the virtual space (preoperative clinical image) with the real space (intraoperative findings) during surgery to improve safety.1 At present, navigation systems are used clinically to collate medical images with the coordinate information of the intraoperative findings.2,3 A navigation system performs rigid registration using multiple preplanned points in the coordinate systems of real space and the medical image and thereby collates the coordinate information. Registration errors inherent to the navigation system are therefore problematic.4,5
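For illustration, the paired-point rigid registration that such navigation systems perform can be sketched with the standard Kabsch (singular value decomposition) solution. This is a generic textbook formulation, not any vendor's implementation, and the function and variable names are our own:

```python
import numpy as np

def rigid_register(p_image: np.ndarray, p_real: np.ndarray):
    """Least-squares rigid fit (rotation R, translation t) between
    paired fiducials, p_image and p_real of shape (N, 3), N >= 3,
    using the Kabsch/SVD method."""
    ci, cr = p_image.mean(axis=0), p_real.mean(axis=0)
    H = (p_image - ci).T @ (p_real - cr)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # maps image -> real space
    t = cr - R @ ci
    return R, t

# Fiducial registration error over the points used for the fit:
# np.linalg.norm((p_image @ R.T + t) - p_real, axis=1).mean()
```

Because the fitted transform is rigid, it cannot absorb the nonrigid brain deformation discussed next, which is the central limitation the proposed method addresses.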

In addition, brain shift, a nonrigid deformation caused by craniotomy, reduces alignment accuracy.6, 7, 8 Attempts have been made to correct this error; however, they remain at the research stage.9, 10, 11 Additionally, augmented reality (AR) alignment methods that project virtual information onto real space have been reported.12,13 However, because the medical image is superimposed directly on the surgical field, the angles from which it can be observed are limited.14, 15, 16 Furthermore, surface registration has been used to add brain surface information to the virtual space.17,18 Various feature point-extraction and transformation methods have been reported; however, alignment accuracy decreases owing to outliers among the feature points.19, 20, 21, 22

In the present study, to overcome these issues, we developed a highly accurate spatial registration method for medical fusion 3-dimensional (3D) computer graphics (3DCG) and cerebral surface photographs of the surgical field and subsequently measured its alignment accuracy.

Methods

Participants

Sixteen patients with glioma were included. The patient details are presented in Table 1. Our institutional review board approved the present study, and all participants provided written informed consent.

Table 1.

Clinical Patient Characteristics

Pt. No. Age (Years) Sex Pathology Tumor Location Tumor Volume (cm3)
1 31 Male OD Left SFG 70.2
2 36 Male GBM-O Left SFG 67.0
3 40 Male AOA Left SFG 17.0
4 28 Male OD Left SFG 13.7
5 30 Male OA Left insula 30.4
6 33 Male GBM Left SFG 5.1
7 40 Female OD Left MFG 25.6
8 35 Male OD Left SFG 24.2
9 69 Female Infiltrating astrocytoma Left insula 7.7
10 40 Male GBM Left SFG 33.5
11 30 Female GBM Left MFG 21.9
12 50 Female AO Left temporal lobe 135.5
13 56 Female AOA Left SFG 5.1
14 41 Female AO Left temporal lobe 120.4
15 56 Male DA Left IFG 12.1
16 23 Male DA Left MFG 71.0

Pt. No., patient number; OD, oligodendroglioma; SFG, superior frontal gyrus; GBM-O, glioblastoma, oligodendroglial component; AOA, anaplastic oligoastrocytoma; OA, oligoastrocytoma; GBM, glioblastoma; DA, diffuse astrocytoma; MFG, middle frontal gyrus; IFG, inferior frontal gyrus; TL, temporal lobe.

Proposed Methods

Fusion 3DCG

We obtained the computed tomography (CT), magnetic resonance imaging (MRI), and 3-dimensional rotational angiography (3D-RA) datasets required to create the fusion 3DCG. The details were as follows:

  • 1. MRI: a 3.0-T MRI scanner (SIGNA 3.0T; GE Yokogawa Medical System, Tokyo, Japan) was used with an 8-channel head coil (ASSET).
    • A. Contrast-enhanced MRI FIESTA (fast imaging employing steady-state acquisition)
      • For the source images of the 3D model of the brainstem and cranial nerves, the imaging parameters were repetition time (TR), 5 ms; echo time (TE), 1.9 ms; slice thickness, 0.4 mm; field of view (FOV), 20 cm; matrix size, 512 × 512; flip angle, 45°.
      • For the source images of the 3D model of the cerebral cortex, the imaging parameters were TR, 4.2 ms; TE, 1.6 ms; slice thickness, 1 mm; FOV, 24 cm; matrix size, 512 × 512; flip angle, 45°.
    • B. Time-of-flight magnetic resonance angiography
      • TR, 26 ms; TE, 3 ms; slice thickness, 0.8 mm; FOV, 24 cm; matrix size, 512 × 512; flip angle, 24°.
    • C. Time-resolved imaging of contrast kinetics
      • TR, 3.6 ms; TE, 1.4 ms; slice thickness, 1 mm; FOV, 24 cm; matrix size, 512 × 512; flip angle, 20°.
    • D. Fluid-attenuated inversion recovery
      • TR, 3.7 ms; TE, 1.9 ms; slice thickness, 2 mm; FOV, 24 cm; matrix size, 512 × 512; flip angle, 20°.

  • 2. CT: a 64-row CT scanner (Aquilion; Toshiba Medical Systems, Tokyo, Japan) was used. The imaging parameters were as follows: collimation, 0.8 mm; tube voltage, 120 kV; tube current, 250 mA; rotation time, 0.6 seconds; reconstruction section width, 0.8 mm; reconstruction interval, 0.8 mm; voxel size, 0.43 × 0.43 × 0.43 mm.

  • 3. 3D-RA: the Allura Xper FD20/10 (Philips Medical Systems, Best, The Netherlands) was used for 3D cerebral angiography. During contrast administration, the C-arm was rotated 240° at 55°/second, and 120 images were obtained with a 17-inch FOV. For arterial phase imaging, a total of 18.5 mL of contrast medium was administered from the internal carotid artery or vertebral artery at 3.5 mL/second, starting 1.5 seconds before C-arm rotation. For venous phase imaging, the timing of the venous phase was confirmed in advance using 2-dimensional (2D) cerebrovascular imaging, and a total of 20 mL of contrast medium was administered at 4.0 mL/second. The acquired source images were output as 3D volume data with a matrix size of 512 × 512 × 512 and an isotropic voxel size of 0.28 mm in Digital Imaging and Communications in Medicine format using the Integris 3D-RA workstation (Philips Medical Systems) of the cerebrovascular apparatus.

The morphological data of the fusion 3DCG were created based on previous reports from our group.23,24 In summary, the CT, MRI, and 3D-RA datasets were automatically registered using the normalized mutual information method.23,24 The 3DCG for each target tissue or organ was created independently using the threshold that best visualized it in each image dataset.24 By superimposing the 3D image and the 2D source image, the threshold was adjusted to the boundary of each structure in the 2D cross-sectional image, and the result was visualized using the surface rendering method.25 These processes were performed on an MP-i1620 computer (MouseComputer Co., Tokyo, Japan; central processing unit, Intel Core i7-7700K at 4.2 GHz [Intel Corp., Santa Clara, California, USA]; random access memory, 32.0 GB; graphics processing unit, NVIDIA GeForce GTX 1080 Ti [Nvidia Corp., Santa Clara, California, USA]) using the image processing software Avizo Lite, version 9.3 (Thermo Fisher Scientific, Waltham, Massachusetts, USA).
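As an illustration of this multimodal fusion step, the following is a minimal sketch using SimpleITK, which is an assumption on our part (the authors used Avizo Lite), with Mattes mutual information standing in for the normalized mutual information metric and hypothetical file names and threshold values:

```python
import SimpleITK as sitk

# Load two of the modalities to be fused (file names are hypothetical).
fixed = sitk.ReadImage("mri_fiesta.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("ct.nii.gz", sitk.sitkFloat32)

# Rigid registration driven by a mutual-information metric
# (SimpleITK ships Mattes MI; the paper uses normalized MI).
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                  numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)
transform = reg.Execute(fixed, moving)

# Resample the moving image into the fixed frame, then extract a
# structure at a manually tuned threshold (cf. the surface rendering
# step described above; the threshold here is a placeholder).
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
structure_mask = sitk.BinaryThreshold(aligned, lowerThreshold=300.0,
                                      upperThreshold=10000.0)
```

In practice, each target organ would be segmented with its own per-dataset threshold and rendered as a surface, as described above.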

2D- and 3D-Registration: Mixed-Reality Computer Graphics

We aligned the fusion 3DCG with the 2D brain surface photograph. The brain surface photograph was obtained using a digital camera just after the craniotomy (512 × 512 pixels; JPEG [Joint Photographic Experts Group] format). Nonrigid registration using the thin-plate spline method based on paired landmarks was performed with the fusion 3DCG as the reference. The landmarks were feature points, such as blood vessels, sulci, and gyri, common to the fusion 3DCG and the brain surface photograph; 20–30 landmarks were placed so as to be evenly distributed. The thin-plate spline method enables nonrigid deformation by decomposing the transformation into affine and nonaffine parts.25,26 The formula is as follows:

f(x) = x·A + ϕ(x)·ω

where A is an affine transformation matrix and ω is a nonaffine transformation coefficient matrix. The vector ϕ(x) contains the values of the thin-plate spline kernel evaluated between x and each landmark. The brain surface photograph is deformed based on the landmarks and mapped onto the fusion 3DCG. This operation was performed using Avizo Lite, version 9.3 (Thermo Fisher Scientific). We termed the result mixed-reality computer graphics (MRCG; Figure 1).
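A minimal sketch of this landmark-driven thin-plate spline warp follows, assuming SciPy (version ≥ 1.7, whose RBFInterpolator implements the thin-plate spline kernel plus an affine term, matching the f(x) = x·A + ϕ(x)·ω decomposition above); the landmark coordinates are hypothetical:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Paired landmarks (pixel coordinates; values here are hypothetical).
# src: points picked on the brain surface photograph,
# dst: the corresponding points on the rendered fusion 3DCG view.
src = np.array([[120.0, 88.0], [240.0, 150.0], [310.0, 60.0],
                [180.0, 300.0], [400.0, 260.0]])
dst = np.array([[118.0, 92.0], [244.0, 147.0], [305.0, 65.0],
                [185.0, 296.0], [396.0, 263.0]])

# Thin-plate spline interpolation of landmark correspondences.
# In 2D the TPS kernel is phi(r) = r^2 * log(r); RBFInterpolator adds
# a degree-1 (affine) polynomial, i.e. the A term of the formula.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Warp the whole photograph grid into the 3DCG frame.
h, w = 512, 512
grid = np.stack(np.meshgrid(np.arange(w), np.arange(h)),
                axis=-1).reshape(-1, 2).astype(float)
warped = tps(grid).reshape(h, w, 2)  # target coordinate of each pixel

# With zero smoothing the spline passes exactly through the landmarks:
assert np.allclose(tps(src), dst)
```

The exact-interpolation property is why the largest residual error is expected between landmarks rather than at them, which motivates the assessment described next.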

Figure 1.

Workflow of the proposed method. (A) Fusion 3-dimensional (3D) computer graphics (3DCG) were prepared before surgery. Multimodal volume data from computed tomography, magnetic resonance imaging, and 3D-rotational angiography were co-registered using the normalized mutual information (NMI) method, and each target organ or tissue was segmented with its own threshold. (B) The photograph of the operative field was acquired just after opening of the dura and output in JPEG (Joint Photographic Experts Group) format. (C) The proposed method to align the fusion 3DCG and the intraoperative brain surface photograph. (Left) Pairs of landmarks, such as bifurcations of cortical vessels, sulci, and gyri, were manually set in both the fusion 3DCG and the operative photograph; the number of landmark pairs was ∼20–30. (Right) Registration was performed using the thin-plate spline method; the result is termed mixed-reality computer graphics.

Assessment

The registration error between the fusion 3DCG and the photograph of the operative field was measured on the MRCG. The target registration error was measured at 10 locations per case on the MRCG, at the midpoints between landmarks, which are expected to show the largest alignment error given the interpolating nature of the proposed method. The error was measured by two neurosurgeons, and the average and standard error were obtained (Figure 2).
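The summary statistics reported in the Results can be reproduced from the 160 pooled measurements as in the sketch below; the data here are random placeholders, because the individual per-rater measurements are not published:

```python
import numpy as np

# Target registration errors (mm): 2 raters x 16 patients x 10 points.
# Real values would come from the manual measurements; random data is
# used here purely as a placeholder.
rng = np.random.default_rng(0)
tre_by_rater = rng.uniform(0.05, 3.38, size=(2, 16, 10))

# Average the two raters per point, then summarize all 160 points.
tre = tre_by_rater.mean(axis=0).ravel()
mean = tre.mean()
sem = tre.std(ddof=1) / np.sqrt(tre.size)  # standard error of the mean
print(f"TRE = {mean:.2f} +/- {sem:.2f} mm "
      f"(min {tre.min():.2f}, max {tre.max():.2f}, n={tre.size})")
```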

Figure 2.

Evaluation of spatial registration accuracy of the proposed method. The area between the landmarks with the largest error was selected as the measurement point (yellow square).

Results

MRCG could be created in all cases. Approximately 3–8 hours were required to create the fusion 3DCG before surgery. The procedure to register the fusion 3DCG and the photograph of the operative field was completed within ∼10 minutes. The average number of landmarks used for alignment was 24.6. The total number of measurement points was 160, and the target registration error was 0.72 ± 0.04 mm (minimum, 0.05 mm; maximum, 3.38 mm; Table 2).

Table 2.

Summary of Target Registration Errors (mm)

Pt. No. Landmarks (n) TRE1 TRE2 TRE3 TRE4 TRE5 TRE6 TRE7 TRE8 TRE9 TRE10
1 22 1.75 0.29 0.32 0.37 1.33 0.41 0.42 0.17 0.70 0.66
2 30 1.41 1.00 0.55 0.15 0.66 0.97 0.26 0.48 0.41 0.69
3 28 0.84 0.17 0.50 0.37 0.64 1.60 0.37 0.05 0.68 0.97
4 24 0.59 0.46 0.54 0.73 0.37 0.57 0.28 0.42 0.65 0.67
5 24 0.41 0.74 1.01 0.85 0.28 0.82 0.96 1.01 0.78 2.07
6 21 0.24 0.87 0.70 0.22 0.32 0.46 0.40 0.92 0.84 0.88
7 30 1.30 1.83 1.15 0.58 0.15 0.21 0.69 1.65 2.06 1.29
8 20 0.34 1.29 0.91 0.25 0.78 0.08 0.44 1.00 0.50 0.85
9 20 1.93 3.38 1.68 0.49 1.39 0.31 3.35 0.24 1.16 0.28
10 28 0.74 0.89 0.20 1.21 0.49 0.58 0.48 0.59 1.03 0.23
11 23 0.57 1.16 0.35 0.48 0.42 0.54 0.05 0.26 0.11 0.86
12 25 1.20 0.89 0.27 0.28 0.63 0.55 0.42 0.66 0.37 0.46
13 21 0.16 0.19 1.04 0.46 0.33 0.31 0.24 0.45 0.36 0.52
14 30 0.77 0.80 2.84 0.90 1.08 0.32 0.46 2.32 0.23 0.92
15 21 0.13 0.09 0.38 0.49 0.93 0.54 0.51 0.42 0.71 0.31
16 26 0.60 0.82 1.71 0.77 0.19 0.28 0.25 0.74 1.11 1.41

Pt. No., patient number; TRE, target registration error.

Mean ± standard error, 0.72 ± 0.04 mm; maximum, 3.38 mm; minimum, 0.05 mm; total number of landmarks, 393; mean number of landmarks, 24.6.

The correspondence between the medical image information and physical space could be confirmed with high accuracy. Even in the region with the maximum error, this posed no hindrance, because the detailed blood vessel structures in the surrounding area could be confirmed during surgery, and we had no difficulty in deciding whether the information was correct. For blood vessels whose courses were interrupted and could not be visualized on the fusion 3DCG, a more accurate course could be confirmed by adding the blood vessel information from the brain surface photograph.

Illustrative Case 1: Patient 1

Patient 1 was a 31-year-old man with oligodendroglioma. It was challenging to identify the extent of the tumor using only the operative field image (Figure 3A). Moreover, it was difficult to match the spatial positional relationship in the operative field using only the fusion 3DCG (Figure 3B). MRCG enabled clear determination of the area of the tumor (Figure 3C).

Figure 3.

Illustrative case 1 (patient 1): a 31-year-old man with oligodendroglioma. (A) Intraoperative brain surface photograph in JPEG (Joint Photographic Experts Group) format. (B) Fusion 3-dimensional computer graphics (3DCG) created from preoperative imaging studies. The purple highlight indicates the tumor area. (C) Mixed-reality computer graphics created by aligning the intraoperative brain surface photograph and fusion 3DCG. The purple highlight indicates the tumor area.

Illustrative Case 2: Patient 3

Patient 3 was a 40-year-old man with anaplastic oligoastrocytoma. Surgical excision was planned preoperatively with reference to the arcuate fasciculus visualized using diffusion tensor tractography. MRCG allowed us to reproduce the cortical language function mapping sites on the medical images. From the mapping results and tumor area, tumor resection near the language area could be extended while preserving function (Figure 4).

Figure 4.

Illustrative case 2 (patient 3): a 40-year-old man with anaplastic oligoastrocytoma who underwent awake surgery. The black spheres indicate the sites where speech arrest, dysarthria, or paraphasia occurred, and the white spheres denote the asymptomatic sites. The purple highlight shows the extent of the tumor. The area surrounded by the red dotted line shows the resection range, where it was judged that tumor resection could safely be extended using mixed-reality computer graphics.

Illustrative Case 3: Patient 17

Patient 17 was a 23-year-old man with oligodendroglioma. A comparison of the registration accuracy between the navigation system and MRCG is shown in Figure 5. The bifurcation of the cortical vessel in the surgical field is shown in Figure 5A. The coordinate shown on the navigation system fell within the brain parenchyma (Figure 5B). The distance between the coordinates of the site in Figure 5A reproduced using MRCG and the corresponding bifurcation of the cortical vessel on the fusion 3DCG was 0.05 mm (Figure 5C). The alignment performed with MRCG thus achieved greater spatial accuracy than that of the navigation system.

Figure 5.

Illustrative case 3 (patient 17): a 23-year-old man with oligodendroglioma. (A) The bifurcation of a cortical vessel indicated by the surgeon. (B) The axial view of the magnetic resonance image displayed on the navigation system. The intersection of the extended orthogonal green lines marks the coordinate, displayed on the navigation system, of the site shown in A. (C) The blue sphere shows the coordinates reproduced on the mixed-reality computer graphics at the site indicated in A.

Discussion

Mixed-Reality Computer Graphics

We developed MRCG, which fuses real space (the brain surface photograph) and virtual space (the fusion 3DCG) by nonrigid registration based on the landmark and thin-plate spline methods. In MRCG, the coordinate information of the real and virtual spaces was fused, the position could be aligned with a high spatial accuracy of 0.72 mm, and the relationship between the medical image information and the real space could be observed from any angle. The brain surface photograph was taken immediately after the craniotomy, and the MRCG was created before the microscopic part of the procedure started; therefore, it did not interfere with the progress of the surgery.

The conditions for taking the brain surface photograph were that the craniotomy should be centered in the photograph and that the photograph should be taken from a distance of 60–80 cm along the direction normal to the craniotomy. All photographs were taken by the same photographer belonging to the operating room using a commercially available digital camera and lens. Because the photographs were taken under these fixed conditions, a consistent quality was ensured in all cases, and no practical difficulty was encountered.

MRCG did not contribute directly to planning the size of the skin incision or craniotomy because it was not used for these purposes. In contrast, the fusion 3DCG used to create the MRCG has high spatial resolution and can display functional information such as the language area; hence, the fusion 3DCG was used for planning the skin incision and craniotomy size.

Mixed reality is a general term for technologies that present a sense of fusion between the real world and a computer-generated virtual world.27,28 Intraoperative brain surface photographs were used to represent the real space in the present study, which, strictly speaking, is not real space itself. However, this system cannot be considered pure virtual reality either. Mixed reality is a term created to encompass technologies that fit neither virtual reality nor AR, and the proposed method falls within the scope of mixed reality technology.

In previous reports of surface registration, methods using the sulci and blood vessels as landmarks have been described.18,21,22 Because the sulci and cerebral surface blood vessels are the most recognizable features of the cerebral surface, they are considered suitable for identifying brain surface information, but these methods cannot manage outliers arising from nonrigid deformation.21,22 Studies have also reported surface registration using the thin-plate spline method.17,19 Cao et al.17 proposed nonrigid registration using 3D blood vessel information with robust point matching; however, the registration error increased when the feature points were far apart.17

In the present study, we used the thin-plate spline method, which provides a nonlinear interpolation passing through multiple landmarks set on blood vessels, gyri, and sulci visible in both the brain surface photograph and the fusion 3DCG.25,26 This method requires a certain number of landmarks (≥3), and the tissues used as landmarks on the brain surface photograph, such as cerebral surface blood vessels and sulci, must also be visualized on the fusion 3DCG. Furthermore, to complete the alignment during surgery, the total number of landmarks set in the craniotomy area was 20–30 (1–15 in the peripheral area and 10–15 in the central area). The brain surface photograph and fusion 3DCG can be aligned regardless of the craniotomy size if corresponding characteristic anatomical structures are present in the craniotomy field; therefore, a large craniotomy is not required to create the MRCG. A limitation is that landmarks must be placed on brain surface structures such as blood vessel bifurcations or sulci. Consequently, if these surface structures are lost during surgery, a sufficient number of landmarks cannot be placed, and alignment accuracy cannot be expected. Moreover, MRCG cannot track changes over time, because a brain surface photograph captures the surgical field at a single point in time.

Comparison with Navigation System

Surgical navigation systems are widely used clinically to match coordinate information between medical images and real space.8,9 These systems map the coordinate information of real space onto the medical image, and the surgeon must ultimately match that image information back to the real space of the operative field. Furthermore, with the navigation systems currently on the market, it is difficult to display a 3D image with high spatial resolution owing to the limited sequences and modalities that can be displayed and the limited functions of the image processing software, and it is difficult to confirm or correct the registration error intraoperatively.29, 30, 31 MRCG allowed us to overcome these problems. However, for deep areas of the operative field and complicated spaces with severe irregularities, the navigation system will, in principle, be superior to MRCG, which relies on surface information only. Thus, it is necessary to understand the characteristics of both and use them appropriately. With the advent of surgical navigation systems that can incorporate MRCG, better intraoperative support might be possible.

Comparison with AR

Many surgical AR systems that project virtual information onto real space have been reported.14,16,32 AR can be more useful than MRCG in that it projects virtual information onto the true physical space. However, AR directly projects the image onto the surgical field, which offers little freedom of viewpoint, and must maintain registration in real time.15 It is therefore difficult to observe the projected medical image from arbitrary angles and to superimpose a high-definition fusion 3D image while maintaining registration accuracy. In these respects, MRCG can be considered superior to AR.

Conclusions

The use of MRCG enables highly precise spatial alignment between real space (the brain surface photograph) and virtual space (the fusion 3DCG) by combining the landmark and thin-plate spline methods. MRCG made it possible to observe the brain surface photograph and the fusion 3DCG complementarily from any angle, which could improve the safety of the surgery.

Declaration of Competing Interest

The present study was supported by Japan Science and Technology Agency CREST (grant JPMJCR17A1).

CRediT authorship contribution statement

Tsukasa Koike: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Visualization, Writing - original draft, Writing - review & editing. Taichi Kin: Conceptualization, Funding acquisition, Methodology, Supervision. Shota Tanaka: Resources, Methodology. Yasuhiro Takeda: Resources. Hiroki Uchikawa: Resources. Taketo Shiode: Resources. Toki Saito: Methodology. Hirokazu Takami: Resources. Shunsaku Takayanagi: Resources. Akitake Mukasa: Resources. Hiroshi Oyama: Methodology. Nobuhito Saito: Funding acquisition, Supervision.

Supplementary Data

Data Profile
mmc1.xml (300B, xml)

References

  • 1. Garrett M., Spetzler R.F. Surgical treatment of brainstem cavernous malformations. Surg Neurol. 2009;72(suppl 2):S3–S9 [discussion: S9–S10]. doi: 10.1016/j.surneu.2009.05.031.
  • 2. Coenen V.A., Krings T., Mayfrank L. Three-dimensional visualization of the pyramidal tract in a neuronavigation system during brain tumor surgery: first experiences and technical note. Neurosurgery. 2001;49:86–93. doi: 10.1097/00006123-200107000-00013.
  • 3. Bohinski R.J., Kokkino A.K., Warnick R.E. Glioma resection in a shared-resource magnetic resonance operating room after optimal image-guided frameless stereotactic resection. Neurosurgery. 2001;48:731–742 [discussion: 742, 734]. doi: 10.1097/00006123-200104000-00007.
  • 4. Maurer C.R., Maciunas R.J., Fitzpatrick J.M. Registration of head CT images to physical space using a weighted combination of points and surfaces. IEEE Trans Med Imaging. 1998;17:753–761. doi: 10.1109/42.736031.
  • 5. Yoshino M., Saito T., Kin T. A microscopic optically tracking navigation system that uses high-resolution 3D computer graphics. Neurol Med Chir (Tokyo). 2015;55:674–679. doi: 10.2176/nmc.tn.2014-0278.
  • 6. Ohue S., Kumon Y., Nagato S. Evaluation of intraoperative brain shift using an ultrasound-linked navigation system for brain tumor surgery. Neurol Med Chir (Tokyo). 2010;50:291–300. doi: 10.2176/nmc.50.291.
  • 7. Nimsky C., Ganslandt O., Cerny S., Hastreiter P., Greiner G., Fahlbusch R. Quantification of, visualization of, and compensation for brain shift using intraoperative magnetic resonance imaging. Neurosurgery. 2000;47:1070–1079 [discussion: 1079–1080]. doi: 10.1097/00006123-200011000-00008.
  • 8. Berman J.I., Berger M.S., Chung S.W., Nagarajan S.S., Henry R.G. Accuracy of diffusion tensor magnetic resonance imaging tractography assessed using intraoperative subcortical stimulation mapping and magnetic source imaging. J Neurosurg. 2007;107:488–494. doi: 10.3171/JNS-07/09/0488.
  • 9. Reyns N., Leroy H.A., Delmaire C., Derre B., Le-Rhun E., Lejeune J.P. Intraoperative MRI for the management of brain lesions adjacent to eloquent areas. Neurochirurgie. 2017;63:181–188. doi: 10.1016/j.neuchi.2016.12.006.
  • 10. Roberts D.W., Miga M.I., Hartov A. Intraoperatively updated neuroimaging using brain modeling and sparse data. Neurosurgery. 1999;45:1199–1206 [discussion: 1206, 1197].
  • 11. Fan X., Roberts D.W., Schaewe T.J. Intraoperative image updating for brain shift following dural opening. J Neurosurg. 2017;126:1924–1933. doi: 10.3171/2016.6.JNS152953.
  • 12. Lee C., Wong G.K.C. Virtual reality and augmented reality in the management of intracranial tumors: a review. J Clin Neurosci. 2019;62:14–20. doi: 10.1016/j.jocn.2018.12.036.
  • 13. Meola A., Cutolo F., Carbone M., Cagnazzo F., Ferrari M., Ferrari V. Augmented reality in neurosurgery: a systematic review. Neurosurg Rev. 2017;40:537–548. doi: 10.1007/s10143-016-0732-9.
  • 14. Inoue D., Cho B., Mori M. Preliminary study on the clinical application of augmented reality neuronavigation. J Neurol Surg A Cent Eur Neurosurg. 2013;74:71–76. doi: 10.1055/s-0032-1333415.
  • 15. Tagaytayan R., Kelemen A., Sik-Lanyi C. Augmented reality in neurosurgery. Arch Med Sci. 2018;14:572–578. doi: 10.5114/aoms.2016.58690.
  • 16. Shuhaiber J.H. Augmented reality in surgery. Arch Surg. 2004;139:170–174. doi: 10.1001/archsurg.139.2.170.
  • 17. Cao A., Dumpuri P., Miga M.I. Tracking cortical surface deformations based on vessel structure using a laser range scanner. In: 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro. 2006:522–525. doi: 10.1109/ISBI.2006.1624968.
  • 18. Jiang J., Nakajima Y., Sohma Y. Marker-less tracking of brain surface deformations by non-rigid registration integrating surface and vessel/sulci features. Int J Comput Assist Radiol Surg. 2016;11:1687–1701. doi: 10.1007/s11548-016-1358-7.
  • 19. Marreiros F.M., Rossitti S., Wang C., Smedby Ö. Non-rigid deformation pipeline for compensation of superficial brain shift. Med Image Comput Comput Assist Interv. 2013;16(Pt 2):141–148. doi: 10.1007/978-3-642-40763-5_18.
  • 20. DeLorenzo C., Papademetris X., Staib L.H., Vives K.P., Spencer D.D., Duncan J.S. Volumetric intraoperative brain deformation compensation: model development and phantom validation. IEEE Trans Med Imaging. 2012;31:1607–1619. doi: 10.1109/TMI.2012.2197407.
  • 21. Nakajima S., Atsumi H., Kikinis R. Use of cortical surface vessel registration for image-guided neurosurgery. Neurosurgery. 1997;40:1201–1208 [discussion: 1208–1210]. doi: 10.1097/00006123-199706000-00018.
  • 22. Sun H., Roberts D.W., Farid H., Wu Z., Hartov A., Paulsen K.D. Cortical surface tracking using a stereoscopic operating microscope. Oper Neurosurg. 2005;56:86–97. doi: 10.1227/01.neu.0000146263.98583.cc.
  • 23. Kin T., Oyama H., Kamada K., Aoki S., Ohtomo K., Saito N. Prediction of surgical view of neurovascular decompression using interactive computer graphics. Neurosurgery. 2009;65:121–128 [discussion: 128–129]. doi: 10.1227/01.NEU.0000347890.19718.0A.
  • 24. Kin T., Shin M., Oyama H. Impact of multiorgan fusion imaging and interactive 3-dimensional visualization for intraventricular neuroendoscopic surgery. Neurosurgery. 2011;69(suppl oper):ons40–ons48 [discussion: ons48]. doi: 10.1227/NEU.0b013e318211019a.
  • 25. Kin T., Toki S. JPN Patent No. 6178615. 2016.
  • 26. Bookstein F.L. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans Pattern Anal Mach Intell. 1989;11:567–585.
  • 27. Milgram P., Kishino F. A taxonomy of mixed reality visual displays. IEICE Trans Inform Syst. 1994;77:1321–1329.
  • 28. Kersten-Oertel M., Jannin P., Collins D.L. The state of the art of visualization in mixed reality image guided surgery. Comput Med Imaging Graph. 2013;37:98–112. doi: 10.1016/j.compmedimag.2013.01.009.
  • 29. Kantelhardt S.R., Gutenberg A., Neulen A., Keric N., Renovanz M., Giese A. Video-assisted navigation for adjustment of image-guidance accuracy to slight brain shift. Oper Neurosurg (Hagerstown). 2015;11:504–511. doi: 10.1227/NEU.0000000000000921.
  • 30. Frisken S., Luo M., Juvekar P. A comparison of thin-plate spline deformation and finite element modeling to compensate for brain shift during tumor resection. Int J Comput Assist Radiol Surg. 2020;15:75–85. doi: 10.1007/s11548-019-02057-2.
  • 31. Reinertsen I., Lindseth F., Unsgaard G., Collins D.L. Clinical validation of vessel-based registration for correction of brain-shift. Med Image Anal. 2007;11:673–684. doi: 10.1016/j.media.2007.06.008.
  • 32. Mahvash M., Tabrizi L.B. A novel augmented reality system of image projection for image-guided neurosurgery. Acta Neurochir (Wien). 2013;155:943–947. doi: 10.1007/s00701-013-1668-2.
