Abstract.
We present our work investigating the feasibility of combining intraoperative ultrasound for brain shift correction and augmented reality (AR) visualization for intraoperative interpretation of patient-specific models in image-guided neurosurgery (IGNS) of brain tumors. Throughout surgical interventions, AR was used to assess different surgical strategies using three-dimensional (3-D) patient-specific models of the patient’s cortex, vasculature, and lesion. Ultrasound imaging was acquired intraoperatively, and preoperative images and models were registered to the intraoperative data. The quality and reliability of the AR views were evaluated with both qualitative and quantitative metrics. A pilot study of eight patients demonstrates the feasibility of combining these two technologies and highlights their complementary features. In each case, the AR visualizations enabled the surgeon to accurately visualize the anatomy and pathology of interest for an extended period of the intervention. Inaccuracies associated with misregistration, brain shift, and the AR overlay were reduced in all cases. These results demonstrate the potential of combining ultrasound-based registration with AR to provide neurosurgeons with a useful tool for intraoperative patient-specific planning by improving the understanding of complex 3-D medical imaging data and prolonging the reliable use of IGNS.
Keywords: image-guided neurosurgery, brain shift, augmented reality, registration, brain tumor
1. Introduction
Each year, thousands of Canadians undergo neurosurgery for resection of lesions in close proximity to areas of the brain that are critical to movement, vision, sensation, or language. There is strong support in the literature demonstrating a significantly increased survival benefit with complete resection of primary and secondary brain tumors,1 which creates competing constraints that must be balanced during surgery for each patient: “achieving maximal resection of the lesions while causing minimal neurological deficit.”
Since the introduction of the first intraoperative frameless stereotactic navigation device by Roberts et al.2 in 1986, image-guided neurosurgery (IGNS), or “neuronavigation,” has become an essential tool for many neurosurgical procedures due to its ability to minimize surgical trauma by allowing for the precise localization of surgical targets. For many of these interventions, preoperative planning is done on these IGNS systems, which provide the surgeon with tools to visualize, interpret, and navigate through patient-specific volumes of anatomical, vascular, and functional information while investigating their interrelationships. Over the past 30 years, the growth of this technology has enabled its application to increasingly complicated interventions, including the surgical treatment of malignant tumors, neurovascular disorders, epilepsy, and deep brain stimulation. The integration of preoperative image information into a comprehensive patient-specific model enables surgeons to preoperatively evaluate the risks involved and define the most appropriate surgical strategy. Perhaps more importantly, such systems enable surgery of previously inoperable cases by facilitating safe surgical corridors through IGNS-identified noncritical areas.
For intraoperative use, IGNS systems must relate the physical location of a patient with the preoperative models by means of a transformation that relates the two through a patient-to-image mapping (Fig. 1). By tracking the patient and a set of specialized surgical tools, this mapping allows a surgeon to point to a specific location on the patient and see the corresponding anatomy in the preoperative images and the patient-specific models. However, throughout the intervention, hardware movement, an imperfect patient–image mapping, and movement of brain tissue during surgery invalidate the patient-to-image mapping.3 These sources of inaccuracy, collectively described as “brain shift,” reduce the effectiveness of using preoperative patient-specific models intraoperatively. Unsurprisingly, most surgeons use IGNS systems to plan an approach to a surgical target but no longer rely on the system throughout the entirety of an operation once accuracy is compromised and medical image interpretation is encumbered. Recent advances in IGNS technology have resulted in intraoperative imaging and registration techniques4,5 to help update preoperative images and maintain accuracy. Advances in visualization have introduced augmented reality (AR) techniques6–9 at different time points and for different tasks to help improve the understanding and visualization of complex medical imaging data and models and to help with intraoperative planning. We present a pilot study of eight cases combining the use of intraoperative ultrasound (iUS), for brain shift correction, and intraoperative AR visualization with traditional IGNS tools to improve intraoperative accuracy and interpretation of patient-specific neurosurgical models in the context of IGNS of tumors. While other groups have investigated iUS and AR independently, there are very few reports10–14 of using both technologies to overcome the visualization issues related to iUS and the accuracy issues related to AR. The goal of this pilot study is to investigate the feasibility of combining iUS-based brain shift correction and AR visualizations to improve both the accuracy and interpretation of complex intraoperative data. Our work aims to improve on some of the limitations of previous work by being a prospective instead of retrospective clinical pilot study,12 being focused on evaluation in clinical scenarios as opposed to phantoms or animal cadavers,11 deriving segmentations from high-quality MRI images instead of from difficult-to-interpret US images,12,14 using patient-specific data instead of atlas-based data for greater registration accuracy,14 and, finally, using a fast US-MRI registration that allows for an efficient workflow incorporating AR in the operating room (OR), consuming less time and providing more information than previous reports.12,13
Fig. 1.
The patient’s head is positioned, immobilized, and a tracked reference frame is attached. The patient’s preoperative images and physical space are registered by using eight corresponding landmarks on the head and face to create a correspondence between the two spaces.
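For illustration, a least-squares landmark (fiducial) registration of this kind can be computed with a singular value decomposition. The following is a minimal NumPy sketch under our own naming conventions, using hypothetical landmark coordinates; it is not the IBIS or StealthStation implementation.

```python
import numpy as np

def rigid_landmark_registration(image_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping image-space landmarks
    onto patient-space landmarks (Horn/Kabsch SVD solution)."""
    src = np.asarray(image_pts, dtype=float)
    dst = np.asarray(patient_pts, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def fiducial_registration_error(image_pts, patient_pts, R, t):
    """RMS distance (mm) between mapped image landmarks and patient landmarks."""
    mapped = (R @ np.asarray(image_pts, float).T).T + t
    resid = mapped - np.asarray(patient_pts, float)
    return float(np.sqrt(np.mean(np.sum(resid ** 2, axis=1))))

# Hypothetical coordinates (mm) of eight landmark pairs, for illustration only.
rng = np.random.default_rng(0)
img_pts = rng.uniform(-80, 80, size=(8, 3))
pat_pts = img_pts + np.array([5.0, -3.0, 10.0]) + rng.normal(0, 1.5, size=(8, 3))
R, t = rigid_landmark_registration(img_pts, pat_pts)
print("FRE (mm):", fiducial_registration_error(img_pts, pat_pts, R, t))
```

The root-mean-square residual of such a fit is the fiducial registration error that navigation systems report after the landmark step.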
1.1. iUS in Neurosurgery
Intraoperative imaging has seen a wide range of use in neurosurgery over the last two decades. Its main benefit is the ability to visualize the up-to-date anatomy of a patient during an intervention. iUS has been proposed and used as an alternative to intraoperative MRI due to its ease of use, low cost, and widespread availability.5 iUS is relatively inexpensive and noninvasive and does not require many changes to the operating room or surgical procedures. However, its main challenge is relating its information to preoperative images, which are generally of a different modality. The alignment of iUS to MRI images is a challenging task due to the widely different nature and quality of the two modalities. While the voxel intensity of both modalities is directly dependent on tissue type, US has an additional dependence on probe orientation and depth that can lead to intensity nonuniformity due to the presence of acoustic impedance transitions. Preoperative MR images allow for identification of tissue types, anatomical structures, and a variety of pathologies, such as cancerous tumors. iUS images are generally limited to displaying lesion tissue with an associated uncertainty regarding its boundary, along with a few coarsely depicted structure boundaries. Early reports using iUS in neurosurgery, such as Bucholz and Greco,15 show success with this technique using brightness mode (B-mode) information. B-mode US has been used to obtain anatomical information,4,5,16 while Doppler US yields flow information for cerebral vasculature.17,18 The interested reader is directed to Ref. 3 for a history and overview of iUS in neurosurgery in the context of brain shift correction.
1.2. Augmented Reality in Neurosurgery
AR visualizations have become increasingly popular in medical research to help understand and visualize complex medical imaging data. AR is defined as “the merging of virtual objects with the real world (i.e., the surgical field of view).”19 The motivation for these visualizations comes from the desire to merge preoperative images, models, and plans with the real physical space of the patient in a comprehensible fashion. These augmented views have been proposed to better understand the topology and interrelationships of structures of interest that are not directly visible in the surgical field of view. AR has been explored for neurosurgery in the context of skull base surgery,20 transsphenoidal neurosurgery (i.e., for pituitary tumors),21 microscope-assisted neurosurgery,22 endoscopic neurosurgery,23,24 neurovascular surgery,25–27 and primary brain tumor resection planning.28,29 This list is far from comprehensive, so the interested reader is referred to Meola et al.13 and Kersten-Oertel et al.19 for detailed reviews of the use of AR in IGNS. In all recently published studies, AR visualizations are described as enhancing the minimal invasiveness of a procedure through more tailored, patient-specific approaches. A recent study by Kersten-Oertel et al.30 evaluating the benefit of AR for specific neurosurgical tasks demonstrated that a major pitfall of these types of visualization is the lack of an accurate overlay throughout an intervention, making them useful only during the early parts of an intervention. Recent literature has tried to address this issue through interactive overlay realignment31 or through manipulation of visualization parameters32,33 with some success. In this work, we aim to address this major issue using iUS imaging.
2. Materials and Methods
2.1. Ethics
The Montreal Neurological Institute and Hospital (MNI/H) Ethics Board approved the study, and all patients signed informed consent prior to data collection.
2.2. System Description
All data were collected and analyzed on a custom-built prototype IGNS system, the Intraoperative Brain Imaging System (IBIS).34 This system has previously been described in Refs. 25, 34, and 35 for use with iUS and AR as independent technologies. The Linux workstation is equipped with an Intel Core i7-3820 @ 3.60 GHz processor with 32-GB RAM, a GeForce GTX 670 graphics card, and a Conexant cx23800 video capture card. Tracking is performed using a Polaris N4 infrared optical system (Northern Digital, Waterloo, Canada). The Polaris infrared camera uses stereo triangulation to locate the passive reflective spheres on both the reference and pointing tools with an accuracy of 0.5 mm.36 The US scanner, an HDI 5000 (ATL/Philips, Bothell, Washington) equipped with a P7-4 phased array transducer, enables intraoperative imaging during the surgical intervention. Video capture of the live surgical scene is achieved with a Sony HDR XR150 camera. Both the camera and US system transmit images to the Linux workstation over S-video cables. The camera and US transducer probe are outfitted with a spatial tracking device with attached passive reflective spheres (Traxtal Technologies Inc., Toronto, Canada) and are tracked in the surgical environment. Figure 2 shows the main components of the iUS-AR IGNS system.
Fig. 2.
The different components in an iUS-AR IGNS intervention and their relationship with the surgical and neuronavigation setup. Once the US and live video images are captured from the external devices, they are imported into the neuronavigation system, and all US and AR visualizations are displayed on the neuronavigation monitor (adapted with permission from Ref. 34).
2.3. Patient-Specific Neurosurgical Models
The patient-specific neurosurgical models refer to all preoperative data—images, surfaces, and segmented anatomical structures—for an individual patient. All patients involved in this study followed a basic tumor imaging protocol at the MNI/H, with a gadolinium-enhanced T1-weighted MRI obtained on a 1.5-T MRI scanner (Ingenia, Philips Medical Systems). All images were processed in a custom image processing pipeline as follows:37 first, the MRI is denoised after estimating the standard deviation of the MRI Rician noise.38 Next, intensity nonuniformity correction and normalization are done by estimating the nonuniformity field,39 followed by histogram matching with a reference image to normalize the intensities [Fig. 3(a)]. Within this pipeline, the FACE method40 is used to obtain a three-dimensional (3-D) model of the patient’s cortex [Fig. 3(b)]. After processing, the tumor is manually segmented using ITK-Snap,41 and a vessel model is created using a combination of a semiautomatic intensity thresholding segmentation, also in ITK-Snap, and a Frangi vesselness filter42 [Figs. 3(c) and 3(d)]. The processing is done on a local computing cluster at the MNI, and the combined time for the processing pipeline and segmentations is on the order of 2 h. A model of the skin surface was also generated from the processed images by ray tracing in IBIS, using a transfer function to control the transparency of the volume so that all segmented structures can be viewed. The processed images and patient-specific models are then imported into IBIS [Fig. 3(e)].
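As a rough illustration of two of the preprocessing steps above (intensity nonuniformity correction and histogram-based intensity normalization), the sketch below uses SimpleITK as a convenient stand-in; the actual pipeline uses the cited tools (nonlocal-means denoising,38 the N3 correction,39 FACE,40 ITK-Snap,41 and the Frangi filter42), and the file names here are hypothetical.

```python
import SimpleITK as sitk

# Hypothetical input file names; the real pipeline runs on the MNI cluster.
t1 = sitk.Cast(sitk.ReadImage("patient_t1_gado.nii.gz"), sitk.sitkFloat32)
ref = sitk.Cast(sitk.ReadImage("reference_t1.nii.gz"), sitk.sitkFloat32)

# Bias-field (intensity nonuniformity) correction; N4 is used here as a
# readily available stand-in for the N3 method cited in the text.
head_mask = sitk.OtsuThreshold(t1, 0, 1, 200)      # rough head/background mask
t1_nuc = sitk.N4BiasFieldCorrection(t1, head_mask)

# Intensity normalization by histogram matching against a reference scan.
matcher = sitk.HistogramMatchingImageFilter()
matcher.SetNumberOfHistogramLevels(256)
matcher.SetNumberOfMatchPoints(15)
matcher.ThresholdAtMeanIntensityOn()
t1_norm = matcher.Execute(t1_nuc, ref)

sitk.WriteImage(t1_norm, "patient_t1_preprocessed.nii.gz")
```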
Fig. 3.
Flowchart showing the preoperative steps for creating a patient-specific model and for iUS probe calibration and camera calibration for AR. (a) Preoperative MRI image after denoising, intensity nonuniformity correction, and intensity normalization. (b) The cortical surface is extracted from the MRI image using the FACE algorithm.40 (c) Vessels are extracted using an ITK-Snap thresholding segmentation and a Frangi vesselness filter.42 (d) The tumor is manually segmented using ITK-Snap. (e) All preoperative models are combined into a patient-specific model that is imported into the IGNS system. (f) Camera calibration is performed by serial imaging of a checkerboard pattern with an attached tracker in different positions in the tracked camera’s field of view, allowing for simultaneous extraction of the intrinsic calibration matrix ($K$) and the extrinsic calibration matrix ($T_{\text{ext}}$); the tracking and checkerboard transformations of Eq. (1) are the transformation matrices used to determine $T_{\text{ext}}$.34 (g) Tracked US calibration is performed using an N-wire calibration phantom and a custom IGNS calibration plugin that allows the virtual N-shaped wires to be aligned with the intersecting US images.34
2.4. Tracked Camera Calibration and Creating the Augmented Reality View
To create AR visualizations from images captured by a tracked camera, prior calibration of the camera-tracker apparatus must be performed. The intrinsic and extrinsic calibration parameters are determined simultaneously. We determine the intrinsic calibration of the camera using a printed checkerboard pattern fixed on a flat surface with a rigidly attached tracker tool, using the method described in Ref. 34. The different components and transformation matrix relationships are shown in Fig. 3(f). Multiple images are taken while displacing the pattern in the camera’s field of view. The intrinsic calibration matrix, $K$, is obtained through automatic detection of the checkerboard corners and feeding their image coordinates and tracked 3-D positions through an implementation of Zhang’s method.43 This also creates a mapping, $T_{\text{grid}}^{\text{cam}}$, between the space of the calibration grid and the optical center of the tracked camera. The extrinsic calibration matrix ($T_{\text{ext}}$), which relates the camera’s tracker tool to its optical center, is estimated by minimizing the standard deviation of the grid points transformed by the right side of the following equation:

$$T_{\text{grid}}^{\text{cam}} = T_{\text{ext}} \, \left(T_{\text{cam}}^{\text{ref}}\right)^{-1} \, T_{\text{grid}}^{\text{ref}}, \tag{1}$$

where $T_{\text{cam}}^{\text{ref}}$ represents the rigid transformation matrix between the tracking reference and the tracked camera and $T_{\text{grid}}^{\text{ref}}$ is the transformation between the checkerboard tool and its attached tracker, which acts as the tracking reference during calibration. For a more detailed discussion of this procedure, the interested reader is directed to Drouin et al.44 The calibration error is measured using a leave-one-out cross-validation procedure with the calibration images to obtain the reprojection error. This is an estimate of the reprojection error that is expected to be obtained if the patient is perfectly registered with the system in the OR; however, this error is compounded by other registration errors, which can lead to larger discrepancies. For the cases in this pilot study, the average calibration error was 0.89 mm (range 0.60 to 1.12 mm). Once the camera has been calibrated and is being tracked, the AR view is created by merging virtual objects, such as the segmented tumor, segmented blood vessels, segmented cortex, and iUS images, with the live view captured from the video camera. To create a perception such that the tumor and other virtual objects appear under the visible surface of the patient, edges are extracted and retained from the live camera view. Furthermore, the transparency of the live image is selectively modulated such that the image is more transparent around the tumor and opaque elsewhere [Fig. 4(d)]. For more details on these visualization procedures, the reader is directed to Refs. 33 and 45.
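To make the variance-minimization idea behind Eq. (1) concrete, the sketch below estimates the extrinsic matrix from the per-image Zhang poses and the corresponding tracked camera poses by minimizing the spread, across images, of the checkerboard corners mapped into the tracking-reference space. The function names, parametrization, and optimizer choice are ours and the inputs are placeholders; this is only a sketch of the idea, not the IBIS implementation described in Refs. 34 and 44.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def to_matrix(params):
    """6-vector (rotation vector, translation in mm) -> 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(params[:3]).as_matrix()
    T[:3, 3] = params[3:]
    return T

def extrinsic_cost(params, grid_pts_h, T_grid_to_cam, T_cam_to_ref):
    """Spread of the checkerboard corners mapped into tracking-reference space.

    For each calibration image i, corners are sent grid -> optical center
    (Zhang pose) -> camera tracker (inverse extrinsic) -> reference (tracked
    camera pose). With the correct extrinsic, every image maps the corners to
    the same reference-space positions, so their standard deviation is minimal.
    """
    T_ext_inv = np.linalg.inv(to_matrix(params))
    mapped = np.stack([T_ref @ T_ext_inv @ T_zhang @ grid_pts_h
                       for T_zhang, T_ref in zip(T_grid_to_cam, T_cam_to_ref)])
    return mapped[:, :3, :].std(axis=0).mean()

def calibrate_extrinsic(grid_pts_h, T_grid_to_cam, T_cam_to_ref):
    """grid_pts_h: 4 x N homogeneous checkerboard corners (known geometry);
    T_grid_to_cam: per-image Zhang poses; T_cam_to_ref: per-image tracked poses."""
    res = minimize(extrinsic_cost, x0=np.zeros(6),
                   args=(grid_pts_h, T_grid_to_cam, T_cam_to_ref),
                   method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-9})
    return to_matrix(res.x)
```

In such a setup, the per-image checkerboard poses would come from the same corner detections used for Zhang's intrinsic calibration, and the tracked poses from the optical tracking system.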
Fig. 4.
Flowchart of the intraoperative workflow and how surgical tasks are related to IGNS tasks. (a) Patient-to-image registration. After the patient’s head is immobilized, a tracking reference is attached to the clamp, and eight facial landmarks are chosen that correspond to identical landmarks on the preoperative images to create a mapping between the two spaces. (b) The AR visualization on the skull is qualitatively assessed by comparing the tumor contour as defined by the preoperative guidance images with the overlay of the augmented image. (c) Once the craniotomy has been performed, a series of US images is acquired on the dura and then reconstructed and registered with the preoperative MRI images using the gradient orientation alignment algorithm. (d) AR visualization on the cortex showing the location of the tumor (green) and a vessel of interest (blue). (e) The AR accuracy is quantitatively evaluated by having the surgeon choose an identifiable landmark on the physical patient, recording the coordinates, and then choosing the corresponding landmark on the augmented image, recording the coordinates and measuring the two-dimensional (2-D) distance between the coordinates.
2.5. Tracked US Probe Calibration
When using US guidance for neurosurgery, a correspondence between the physical location of the images and the physical space of the patient must be established. The accuracy of these procedures is closely related to that of device tracking, which is on the order of 0.5 to 1.0 mm for optical tracking systems,36 but it is often categorized separately since specific phantoms are needed to perform the calibration. Among the various calibration techniques, the N-wire phantom has been the most widely accepted in the literature46 due to its robustness, simplicity, and ability to be used by inexperienced users and is, therefore, the technique employed here. Before each case, the US probe was calibrated using a custom-built N-wire phantom and calibration plugin following the guidelines described in Ref. 46. The US probe calibration is performed by rigidly attaching a tracker to the US probe and filling the phantom containing the N-wires with water. The entire phantom is then registered to an identical virtual model in the IGNS system using fiducial markers on the phantom, followed by imaging of the N-wire patterns at different US positions and probe depth settings [Fig. 3(g)]. The intersection of the N-wire patterns with the US image slice defines the 3-D position of a point in the US image, and three or more patterns together define the calibration transform for the registered phantom. Within IBIS is a custom manual calibration plugin that allows users to manually identify the intersection points of the wires within a sequence of US images, and the calibration is automatically recomputed after each interaction.34 Following the manual calibration, the calibration is compared with five other N-wire images, and the accuracy is reported as the root mean square of the difference between the world coordinates and the transformed US image coordinates of the intersection point of the N-wire and the US image plane. The accuracy for each of the cases in this study was on the order of 1.0 mm, which is consistent with reported and accepted values in the literature.47
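The geometric core of the N-wire method is the similar-triangle relation that converts the in-image position of the diagonal-wire cross-section into a 3-D phantom coordinate. The short sketch below illustrates that relation under our own naming conventions and an assumed wire layout; it is not the IBIS calibration plugin.

```python
import numpy as np

def nwire_world_point(p_a, p_b, p_c, diag_start, diag_end):
    """3-D phantom coordinates of the middle-wire intersection of one 'N'.

    p_a, p_b, p_c : 2-D pixel coordinates of the three collinear wire
        cross-sections seen in the US image (side wire, diagonal wire, side wire).
    diag_start, diag_end : known 3-D phantom coordinates of the diagonal wire's
        endpoints, with diag_start adjacent to the side wire seen at p_a.

    Because the three points are collinear, the in-image ratio |ab|/|ac| equals
    the fractional position of the intersection along the diagonal wire, which
    pins down its 3-D location in the (registered) phantom frame.
    """
    p_a, p_b, p_c = (np.asarray(p, dtype=float) for p in (p_a, p_b, p_c))
    r = np.linalg.norm(p_b - p_a) / np.linalg.norm(p_c - p_a)
    d0 = np.asarray(diag_start, dtype=float)
    d1 = np.asarray(diag_end, dtype=float)
    return d0 + r * (d1 - d0)
```

Repeating this for several N patterns and probe poses yields corresponding image/phantom point pairs from which the calibration transform can be solved with the same SVD-based least-squares fit shown earlier for landmark registration.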
2.6. US-MRI Registration
MR-US registration techniques to correct for brain shift have recently been developed, based on gradient orientation alignment, to reduce the effect of the nonhomogeneous intensity response found in iUS images.48 Once an iUS acquisition has been performed, the collected slices are reconstructed into a 3-D volume, resliced in the axial, coronal, and sagittal views, and overlaid on the existing preoperative images. The current simple volume reconstruction works with a raster scan strategy: for every voxel within the volume to be reconstructed, all pixels of all iUS images are evaluated in terms of their distance to that voxel. If a pixel is within a user-specified distance (e.g., between 1 and 3 mm), the intensity of the voxel is increased by the intensity of the iUS pixel, modulated by a Gaussian weighting function. The registration algorithm is based on gradient orientation alignment48 and focuses on maximizing the overlap of gradients with minimal uncertainty of the orientation estimates (i.e., locations with high gradient magnitude) within the set of images. This can be described mathematically as

$$\hat{T} = \arg\max_{T} \frac{1}{\left|\Omega_{T}\right|} \sum_{x \in \Omega_{T}} \cos^{2}\left[\Delta\theta(x;T)\right], \tag{2}$$

where $T$ is the transformation being determined, $\Omega_{T}$ is the overlap domain, and $\Delta\theta(x;T)$ is the inner angle between the fixed image gradient, $\nabla F(x)$, and the transformed moving image gradient, $\nabla M\left[T(x)\right]$,

$$\Delta\theta(x;T) = \arccos\left\{\frac{\nabla F(x) \cdot \nabla M\left[T(x)\right]}{\left\|\nabla F(x)\right\| \, \left\|\nabla M\left[T(x)\right]\right\|}\right\}. \tag{3}$$

The registration is characterized by three major components: (1) a local similarity metric based on gradient orientation alignment [the $\cos^{2}(\Delta\theta)$ term in Eq. (2)], (2) a multiscale selection strategy that identifies locations of interest with gradient orientations of low uncertainty, and (3) a computationally efficient technique for computing gradient orientations of the transformed moving images.48 The registration pipeline consists of two stages. During the initial, preprocessing stage, the image derivatives are computed and areas of low-uncertainty gradient orientations are identified. The second stage consists of an optimization strategy that maximizes the average value of the local similarity metric evaluated at the locations of interest using a covariance matrix adaptation evolution strategy.49 For an in-depth discussion and more details on this procedure, the interested reader is directed to Ref. 48.
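To make the similarity term concrete, the sketch below evaluates a gradient-orientation-alignment score of the form in Eq. (2) on two already-resampled volumes; it omits the multiscale point selection and the CMA-ES optimization of Ref. 48, and the function name, threshold, and masking strategy are our own illustrative choices.

```python
import numpy as np

def gradient_orientation_alignment(fixed, moving_resampled, mask=None, eps=1e-6):
    """Mean cos^2 of the angle between the gradients of two co-registered 3-D
    volumes, evaluated where both gradients are reliably nonzero."""
    gf = np.stack(np.gradient(np.asarray(fixed, dtype=float)))            # (3, X, Y, Z)
    gm = np.stack(np.gradient(np.asarray(moving_resampled, dtype=float)))
    nf, nm = np.linalg.norm(gf, axis=0), np.linalg.norm(gm, axis=0)
    valid = (nf > eps) & (nm > eps)
    if mask is not None:                 # e.g., preselected high-gradient voxels
        valid &= mask
    cos_theta = (gf * gm).sum(axis=0)[valid] / (nf[valid] * nm[valid])
    return float(np.mean(cos_theta ** 2))   # squaring ignores gradient sign flips
```

In the full registration, such a score would be evaluated only at the preselected low-uncertainty locations and maximized over candidate rigid transforms of the US volume.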
This specific framework was chosen for use in the pilot study due to its validation with clinical US-MRI data.48 This registration framework has been shown to provide substantially improved robustness and computational performance in the context of IGNS motivated by the fact that gradient orientations are considered to characterize the underlying anatomical boundaries found in MRI images and are more robust to the effect of nonhomogeneous intensity response found in US images. For this pilot study, only rigid registration transformations were investigated.
Both the volume reconstruction and registration techniques are incorporated into IBIS using a graphics processing unit implementation that allows for high-speed results (on the order of seconds) for reconstruction and rigid registration. This process is briefly summarized in Fig. 4(c).
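The following is a simplified CPU sketch of the Gaussian-weighted reconstruction described above, written as a forward "splatting" pass (each tracked US pixel deposits a weighted contribution into nearby voxels) rather than the voxel-centered search, and with an added weight normalization; parameter names, the isotropic voxel size, and the distance threshold are illustrative, and the IBIS version runs on the GPU.

```python
import numpy as np

def reconstruct_us_volume(slices, poses, vol_shape, vox_mm, pix_mm,
                          sigma_mm=1.0, radius_mm=2.0):
    """Gaussian-weighted freehand-US reconstruction (illustrative, slow).

    slices : list of 2-D US images (pixel intensities).
    poses  : list of 4x4 tracked-and-calibrated matrices mapping homogeneous
             in-plane positions (col*pix_mm, row*pix_mm, 0, 1) into volume mm.
    """
    vol = np.zeros(vol_shape)
    wsum = np.zeros(vol_shape)
    r_vox = int(np.ceil(radius_mm / vox_mm))
    offsets = np.mgrid[-r_vox:r_vox + 1, -r_vox:r_vox + 1,
                       -r_vox:r_vox + 1].reshape(3, -1).T
    for img, T in zip(slices, poses):
        rr, cc = np.indices(img.shape)
        pts = np.stack([cc.ravel() * pix_mm, rr.ravel() * pix_mm,
                        np.zeros(img.size), np.ones(img.size)])
        xyz = (T @ pts)[:3].T                         # pixel centers in volume mm
        for p, v in zip(xyz, img.ravel().astype(float)):
            base = np.round(p / vox_mm).astype(int)
            for off in offsets:
                idx = base + off
                if np.any(idx < 0) or np.any(idx >= np.array(vol_shape)):
                    continue
                d = np.linalg.norm(idx * vox_mm - p)  # voxel-to-pixel distance (mm)
                if d <= radius_mm:
                    w = np.exp(-0.5 * (d / sigma_mm) ** 2)
                    vol[tuple(idx)] += w * v
                    wsum[tuple(idx)] += w
    out = np.zeros_like(vol)
    np.divide(vol, wsum, out=out, where=wsum > 0)     # weight-normalized intensities
    return out
```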
2.7. Operating Room Procedure
All image processing for each patient-specific model is done prior to the surgical case and imported into the IBIS IGNS console. Once the patient has been brought into the operating room and anesthetized, the patient-to-image registration for IBIS is done simultaneously with the commercial IGNS system, a Medtronic StealthStation (Medtronic, Dublin, Ireland), using eight corresponding anatomical landmark pairs50 [Fig. 4(a)]. The quality of this registration is evaluated by the clinical team in the OR. Quantitatively, the fiducial landmark registration error must be below a 5.0-mm threshold and is typically much lower. Qualitatively, the neurosurgeon evaluates the appearance of the tracked probe position on the skin of the patient to ensure an appropriate quality of registration has been achieved. AR is then used at different time points during the intervention, and accuracy is qualitatively assessed on the scalp to verify the skin incision, on the craniotomy [Fig. 4(b)] to verify the craniotomy extent, and on the cortex throughout resection [Fig. 4(d)]. Once the dura has been exposed, tracked intraoperative US images are acquired and used to reregister the preoperative images to the patient. The accuracy of the alignment of the AR view on the cortex is then re-evaluated based on these updated views using both qualitative and quantitative criteria. Comments from the surgeon and visual inspection are used as qualitative criteria, while the pixel misalignment error51 and the target registration error (TRE) of a set of US-MRI landmark pairs are used as quantitative criteria. For each patient, a set of five corresponding landmarks was chosen on both the preoperative MRI and iUS volumes to calculate the TRE—as the Euclidean distance between pairs of landmarks—before and after registration. Landmarks were chosen in areas of hyperechoic–hypoechoic transition (US) near tumor boundaries, ventricles, and sulci when the corresponding well-defined features on the MRI were also identifiable. Pixel misalignment error, as the name suggests, is calculated as the distance in pixels between where the augmented virtual model is displayed on the live view and where its true location is in the live view, as determined through identification of a single pair of corresponding landmarks identified by the surgeon [Fig. 4(e)]. The error is converted to a distance in mm based on the pixel dimensions determined through the camera calibration.51 The precision of the pixel misalignment error measurements depends on the distance between the camera and the patient, but care was taken to keep the AR camera at the edge of the sterile field so that this distance remained relatively constant between cases and calibration. An example calculation is shown in Fig. 5.
Fig. 5.
The surgeon was asked to place the tip of the tracked pointer at the closest edge of the visible tumor in the surgical field ($p_{\text{s}}$) and to select the corresponding location on the preoperative images/virtual model ($p_{\text{v}}$). The AR view was then initialized, and the pixel misalignment error was measured as the 2-D distance between $p_{\text{s}}$ and $p_{\text{v}}$ on the camera image, obtained by multiplying the number of pixels between the two points of interest by the pixel size (as determined from the camera calibration), before registration ($e_{\text{pre}}$) and after registration ($e_{\text{post}}$).
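For completeness, the two quantitative measures defined above reduce to a few lines of arithmetic; the sketch below is illustrative (the point names, pixel size, and example values are hypothetical) and is not the IBIS code.

```python
import numpy as np

def target_registration_error(mri_pts, us_pts):
    """Mean Euclidean distance (mm) over the corresponding MRI/iUS landmark pairs."""
    diffs = np.asarray(mri_pts, float) - np.asarray(us_pts, float)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

def pixel_misalignment_error(p_surgical_px, p_virtual_px, pixel_size_mm):
    """2-D distance between the landmark identified in the live view and the
    corresponding point of the projected virtual model, converted to mm."""
    d_px = np.linalg.norm(np.asarray(p_surgical_px, float) -
                          np.asarray(p_virtual_px, float))
    return float(d_px * pixel_size_mm)

# Hypothetical example: a ~33-pixel offset at 0.2 mm/pixel gives a ~6.7-mm error.
print(pixel_misalignment_error((512, 300), (540, 318), pixel_size_mm=0.2))
```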
2.8. Study Design
Two neurosurgeons at the MNI/H were involved in this study. Neither surgeon had experience with the AR system before this study, but both had been involved in the development of the custom neuronavigation system as well as in its use for intraoperative US brain shift correction. Neither participant was trained in interpreting AR images before the study beyond an explanation of what the AR images represented and how they would be displayed in the OR.
3. Results
We present the results of our experience in eight iUS-AR IGNS cases. Relevant patient information, including age, sex, and tumor type, is summarized in Table 1.
Table 1.
Summary of patient information.
| Patient | Sex | Age | Tumor type | Lobe |
|---|---|---|---|---|
| 1 | F | 56 | Meningioma | L-O/P |
| 2 | M | 49 | Glioma | L-F/T |
| 3 | F | 72 | Metastases | L-O/P |
| 4 | M | 63 | Glioma | R-F |
| 5 | F | 77 | Meningioma | R-F |
| 6 | M | 24 | Glioma | L-F |
| 7 | F | 62 | Glioma | L-O/P |
| 8 | F | 55 | Metastases | R-F |
Note: F, frontal; O, occipital; P, parietal; L, left; and R, right.
3.1. Quantitative Results
For all but one case, the iUS-MRI registration improved to under 3 mm, and the pixel misalignment error was on the order of 1 to 3 mm. The average improvement was 68%. In case 2, the camera calibration data were corrupted, so the pixel misalignment error could not be calculated. Table 2 summarizes all iUS-MRI registration and pixel misalignment errors. Columns 2 and 3 give the fiducial registration error of the initial patient-to-image landmark registration for IBIS and the commercial Medtronic system, respectively. Columns 4 and 5 give the registration errors (TRE) between the US and MRI volumes before and after registration, respectively. The final two columns pertain to the virtual model-to-video registration (pixel misalignment error) before and after US-MRI registration.
Table 2.
Summary of registration and pixel misalignment errors.
| Patient | Patient-to-image registration, IBIS (mm) | Patient-to-image registration, Medtronic (mm) | Pre iUS-MRI registration TRE (mm) | Post iUS-MRI registration TRE (mm) | Prereg pixel misalignment error (mm) | Postreg pixel misalignment error (mm) |
|---|---|---|---|---|---|---|
| 1 | 3.23 | 3.07 | | | N/Aa | N/Aa |
| 2 | 2.88 | 3.22 | | | 5.39 | 1.19 |
| 3 | 3.96 | 3.54 | | | 6.46 | 1.06 |
| 4 | 4.20 | 3.66 | | | 6.88 | 1.80 |
| 5 | 2.77 | 3.12 | | | 7.20 | 2.35 |
| 6 | 2.33 | 3.20 | | | 3.57 | 1.32 |
| 7 | 4.35 | 2.98 | | | 5.55 | 3.27 |
| 8 | 3.85 | 3.15 | | | 4.32 | 1.22 |
aFor this case, the camera calibration data were corrupted, and we were unable to extract the necessary parameters to measure the misalignment error. The pre- to postregistration improvement of the mean TRE was statistically significant across cases (group t-test).
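As an aside, the per-case relative improvement and the pre/post significance test quoted above amount to the following computation; the arrays below hold hypothetical error values (not the study data), and a paired t-test is used here as one common reading of a "group" pre/post comparison.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post errors (mm), for illustration only.
pre = np.array([5.0, 6.2, 7.1, 4.4, 3.9, 6.6, 4.8])
post = np.array([1.4, 1.7, 2.2, 1.1, 2.6, 1.9, 1.5])

improvement = 100.0 * (pre - post) / pre          # per-case % improvement
t_stat, p_value = stats.ttest_rel(pre, post)      # paired pre/post t-test

print(f"mean improvement: {improvement.mean():.0f}%, p = {p_value:.4f}")
```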
3.2. Qualitative Results
Qualitative comments from the surgeons largely reflected the benefit of not having to look at or interpret US information while still having an accurately overlaid virtual model that could be used to verify the surgical plan and to visualize the surgical target. Their main concerns were the limitation of camera maneuverability due to the size of the tracking volume, difficulty in comparing the AR visualization with the preoperative navigation images, and a learning curve associated with the technology. Table 3 summarizes the different tasks where AR was used throughout these interventions and how the surgeons considered it to be useful. Figure 6 is an image summary of four illustrative cases showing a surgical view, a preregistration AR view, and a postregistration AR view to qualitatively show the improvement that iUS registration brings to the AR visualizations.
Table 3.
Summary of AR tasks, benefits, and concerns, as per surgeons using the technology.
| Task | Use of AR | Benefits | Concerns |
|---|---|---|---|
| Preoperative planning | • Show location of tumor and other anatomy of interest after patient positioning and registration | • Share surgical plan with other assisting physicians | • Limitation of extent of camera maneuverability due to tracking volume |
| Craniotomy | • Visualize tumor location in relation to drawn craniotomy borders below bone | • Assess if there is a loss of accuracy from skin landmark registration | • Difficult to verify with navigation images at the same time |
| •Minimize craniotomy size. Added comfort in verification that virtual tumor location is within drawn boundaries before removing bone | |||
| Cortex pre-iUS Registration | • AR in this context is primarily for research purposes and to determine the loss of accuracy since the beginning of surgery and the initial skin landmark registration | • Despite being used for research, surgeons still found the visualizations useful for gaining an understanding of the directions of the deviations from the initial registration | • Difficulty in understanding the data on first use |
| •Limitation of extent of camera maneuverability due to tracking volume | |||
| Cortex post-iUS Registration | •Intraoperative planning | • Surgeons found it helpful to compare their physical interpretation of tumor borders with the virtual borders | • Limitation of extent of camera maneuverability due to tracking volume |
| •Tumor and vessel identification | • Surgeons commented on the benefit of seeing a virtual model of vessels when they were in close proximity of the tumor and deep to the area of resection | ||
| •Assessment of AR/IGNS accuracy | • AR helped confirm their surgical plan |
Fig. 6.
Four illustrative examples of the qualitative improvement of AR for tumor visualization. The left column is the surgical view, the middle column is the initial AR view before iUS registration, and the right column is the iUS brain shift corrected AR view. (a), (b), (c), and (d) are cases 1, 3, 6, and 7, respectively.
4. Discussion and Conclusions
In this study, we successfully combined two important technologies in the context of IGNS of brain tumors. The combination contributed to a high level of accuracy during AR visualizations and obviated the need to directly interpret the iUS images throughout the intervention. The fact that the pre-iUS registration error was greater than the registration error following the initial patient-to-image registration highlights the gradual loss of accuracy throughout the intervention.52 In each case, we improved the patient–image misalignment by registration with iUS data. This resulted in several advantages, including more accurate intraoperative navigation and more reliable AR visualizations, as shown quantitatively by the improved TRE and pixel misalignment error measurements, respectively.
The improved accuracy of the system was evaluated by two metrics: the TRE within a series of target points to assess the registration quality and the pixel misalignment error to assess the improved AR quality. A limitation of measuring the accuracy of AR overlays stems from the lack of a standardized and universal metric with which the error in AR can be quantified. Some authors use pixel misalignment error,51 while others use pixel reprojection error,26 and other more complex metrics are also described.6,53 The pixel misalignment error carries the implicit assumption that the registration with iUS creates a perfectly aligned image. This assumption is inevitably violated, and thus pixel misalignment error is not a perfect measure of accuracy and is only an indication of the relative error between the two AR views rather than an absolute error for either view. Despite this limitation, it was deemed here to be the most appropriate for quantitative evaluation of the AR images. Another consideration regarding the accuracy of the registration procedure is the effect of heart rate and blood pulsation during iUS acquisition. This effect was not considered in this pilot project and will be investigated in future work. This work is intended to serve as a pilot study to assess the feasibility of combining US and AR technologies to improve on each of their shortcomings. For this reason, a US-MRI registration algorithm that has been reported in the literature to work well with this type of clinical data was chosen, as well as an AR evaluation metric that was considered the most appropriate for evaluating the quality of the virtual overlay for the data presented in this study. Future work will require more extensive validation against other US-MR registration frameworks to draw stronger conclusions about the quality of the accuracy improvement, as well as other AR evaluation metrics to better describe the quality of the virtual overlay improvement. Finally, registration errors on the order of 1.0 mm are ideally desired for neuronavigation-assisted tasks; however, this level of accuracy is rarely achievable, and a registration error on the order of 2.0 to 3.0 mm is sufficient to perform the intended tasks appropriately for this pilot study. In future work, with the use of nonlinear registration, we hope to further improve the level of registration accuracy to be closer to ideal conditions.
In this study, AR views were acquired with the use of an external camera to capture images of the surgical scene and render the AR view on the computer workstation. This strategy was employed since one of the surgeons involved in the project does not generally use a microscope while performing tumor resections. Augmenting microscope images is also possible within IBIS and may facilitate integration of AR in the operating room for navigation.34 While the justification for AR in some of the cases presented here may not be strong due to the tumor’s proximity to the cortex, the potential of AR in more complicated scenarios should not be understated. For smaller tumors located much deeper within the brain or for tumors near eloquent brain areas, the ability to see below the surface with the accurate visualizations offered by AR creates the possibility of tailoring resection corridors to minimize the invasiveness of the surgery. This benefit can be realized only if a high level of patient-to-image registration accuracy is maintained throughout the procedure. Combining iUS registration and AR with more accurate tumor segmentations, such as the process described in Ref. 54, would assist a surgeon in resecting as much tumorous tissue as possible with minimal resection of healthy tissue, without having to rely solely on a mental map of the patient’s anatomy and the surgeon’s ability to discriminate tissue types.
It is clear from the qualitative comments of the surgeons involved in this work that there is a learning curve associated with AR in the context of IGNS. In the first several cases, AR was employed simply as a tool to verify the positions of anatomy of interest and to assess the accuracy of the AR image alignment with the tracked preoperative images. As the surgeons became comfortable with the system, the length of time AR was used increased and the number of tasks for which it was deemed useful also increased. The surgeons commented on the usefulness of AR to assess and minimize the extent of the craniotomy and to assess the location of the anatomy of interest (i.e., tumors and vessels) once the cortex had been exposed. Additionally, the surgeons commented on the benefit of using AR to share the surgical plan with assisting residents and physicians by being able to show the vessels and tumor location before making an incision. The surgeons also commented that having a colored AR image for assessing the anatomy was more pleasant than interpreting a grayscale US image. The primary concern of the surgeons using the system was the limitation of camera maneuverability due to the size of the tracking volume, which also led to difficulty in comparing the AR visualization with the preoperative navigation images. Nevertheless, comfort with the system grew quickly over the first few cases, and the amount of time it was used and the amount of information requested increased. With continued use, the surgeons found the information increasingly useful as they incorporated it into their intraoperative planning, suggesting that, with reliable accuracy and training, this technology could help make surgery less invasive and aid in the interpretation of patient-specific models.
In conclusion, this pilot study highlights the feasibility of combining iUS registration and AR visualization in the context of IGNS for tumor resections and some of the advantages this combination can offer. While many authors have investigated these techniques separately for brain tumor neurosurgery, few have looked at the benefits of combining the two technologies. Our pilot study of eight surgical cases suggests that their combined use has the potential to improve on traditional IGNS systems. By improving the visualization of the anatomy and pathology of interest while simultaneously correcting for patient–image misalignment, reliable use of IGNS can be extended throughout the intervention, which will hopefully lead to more efficient and minimally invasive surgical interventions. In addition, with accurate AR visualizations, the neurosurgeon is not required to interpret the iUS images, which can be confusing to a nonexpert. With continued development and integration of the two techniques, the proposed iUS-AR system has potential for improving tasks such as tailoring craniotomies, planning resection corridors, and localizing tumor tissue while simultaneously correcting for brain shift.
Acknowledgments
This work was funded in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) (No. 238739), the Canadian Institutes of Health Research (CIHR) (No. MOP-97820), and the NSERC CHRP (No. 385864-10).
Biographies
Ian J. Gerard is a PhD candidate of biomedical engineering at McGill University. His current research involves improving the accuracy of neuronavigation tools for image-guided neurosurgery (IGNS) of brain tumors with focus on intraoperative imaging for brain shift management and enhanced visualization techniques for understanding complex medical imaging data.
Marta Kersten-Oertel is an assistant professor at Concordia University specializing in medical image visualization and IGNS. Her current research involves the development of an augmented reality (AR) neuronavigation system and the combination of advanced visualization techniques with psychophysics to improve the understanding of complex medical imaging data.
Simon Drouin is a PhD candidate in biomedical engineering at McGill University. His current research involves user interaction in AR with a focus on depth perception and interpretation of visual cues in augmented environments.
Jeffery A. Hall is a neurosurgeon and an assistant professor of neurology and neurosurgery at McGill University’s Montreal Neurological Institute and Hospital (MNI/H), specializing in the surgical treatment of epilepsy and cancer. His current research, in collaboration with other MNI clinician–scientists, includes developing noninvasive means of delineating epileptic foci, intraoperative imaging in combination with neuronavigation systems, and the application of image-guided neuronavigation to epilepsy surgery.
Kevin Petrecca is a neurosurgeon, an assistant professor of neurology and neurosurgery at McGill University, and the head of neurosurgery at the Montreal Neurological Hospital, specializing in neurosurgical oncology. His research at the MNI/H Brain Tumour Research Centre focuses on understanding fundamental molecular mechanisms that regulate cell motility with a focus on malignant glial cell invasion.
Dante De Nigris was formerly a PhD student in the Department of Electrical and Computer Engineering, McGill University. His research interests focus on analyzing and developing techniques for multimodal image registration, specifically on similarity metrics for challenging multimodal image registration contexts.
Daniel A. Di Giovanni is a PhD student in an integrated program in neuroscience at McGill University. His research interests focus on the analysis of functional MRI data in the context of brain tumors.
Tal Arbel is an associate professor in the Department of Electrical and Computer Engineering and a member of the McGill Centre for Intelligent Machines. Her research goals focus on the development of modern probabilistic techniques in computer vision and their application to problems in the medical imaging domain.
D. Louis Collins is a professor in neurology and neurosurgery, biomedical engineering and an associate member of the Center for Intelligent Machines, McGill University. His laboratory develops and uses computerized image processing techniques, such as nonlinear image registration and model-based segmentation to automatically identify structures within the brain. His other research focuses on applying these techniques to IGNS to provide surgeons with computerized tools to assist in interpreting complex medical imaging data.
Disclosures
All authors declare they have no conflicts of interest.
References
- 1.Sanai N., Berger M. S., “Operative techniques for gliomas and the value of extent of resection,” Neurotherapeutics 6(3), 478–486 (2009). 10.1016/j.nurt.2009.04.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Roberts D. W., et al. , “A frameless stereotaxic integration of computerized tomographic imaging and the operating microscope,” J. Neurosurg. 65(4), 545–549 (1986). 10.3171/jns.1986.65.4.0545 [DOI] [PubMed] [Google Scholar]
- 3.Gerard I. J., et al. , “Brain shift in neuronavigation of brain tumors: a review,” Med. Image Anal. 35, 403–420 (2017). 10.1016/j.media.2016.08.007 [DOI] [PubMed] [Google Scholar]
- 4.Comeau R. M., et al. , “Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery,” Med. Phys. 27(4), 787–800 (2000). 10.1118/1.598942 [DOI] [PubMed] [Google Scholar]
- 5.Mercier L., et al. , “Registering pre- and postresection 3-dimensional ultrasound for improved visualization of residual brain tumor,” Ultrasound Med. Biol. 39(1), 16–29 (2013). 10.1016/j.ultrasmedbio.2012.08.004 [DOI] [PubMed] [Google Scholar]
- 6.Azuma R., et al. , “Recent advances in augmented reality,” IEEE Comput. Graphics Appl. 21(6), 34–47 (2001). 10.1109/38.963459 [DOI] [Google Scholar]
- 7.Liao H., et al. , “3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay,” IEEE Trans. Biomed. Eng. 57(6), 1476–1486 (2010). 10.1109/TBME.2010.2040278 [DOI] [PubMed] [Google Scholar]
- 8.Tabrizi L. B., Mahvash M., “Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique,” J. Neurosurg. 123(1), 206–211 (2015). 10.3171/2014.9.JNS141001 [DOI] [PubMed] [Google Scholar]
- 9.Liao H., et al. , “An integrated diagnosis and therapeutic system using intra-operative 5-aminolevulinic-acid-induced fluorescence guided robotic laser ablation for precision neurosurgery,” Med. Image Anal. 16(3), 754–766 (2012). 10.1016/j.media.2010.11.004 [DOI] [PubMed] [Google Scholar]
- 10.Gerard I. J., et al. , “Improving patient specific neurosurgical models with intraoperative ultrasound and augmented reality visualizations in a neuronavigation environment,” Lect. Notes Comput. Sci. 9401, 28–35 (2015). 10.1007/978-3-319-31808-0_4 [DOI] [Google Scholar]
- 11.Ma L., et al. , “Augmented reality surgical navigation with ultrasound-assisted registration for pedicle screw placement: a pilot study,” Int. J. Comput. Assist. Radiol. Surg. 12, 2205–2215 (2017). 10.1007/s11548-017-1652-z [DOI] [PubMed] [Google Scholar]
- 12.Sato Y., et al. , “Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization,” IEEE Trans. Med. Imaging 17(5), 681–693 (1998). 10.1109/42.736019 [DOI] [PubMed] [Google Scholar]
- 13.Meola A., et al. , “Augmented reality in neurosurgery: a systematic review,” Neurosurg. Rev. 40, 1–12 (2016). 10.1007/s10143-016-0732-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Xiao Y., et al. , “Atlas-guided transcranial Doppler ultrasound examination with a neuro-surgical navigation system: case study,” Lect. Notes Comput. Sci. 9401, 19–27 (2016). 10.1007/978-3-319-31808-0_3 [DOI] [Google Scholar]
- 15.Bucholz R. D., Greco D. J., “Image-guided surgical techniques for infections and trauma of the central nervous system,” Neurosurg. Clin. N. Am. 7(2), 187–200 (1996). [PubMed] [Google Scholar]
- 16.Keles G. E., Lamborn K. R., Berger M. S., “Coregistration accuracy and detection of brain shift using intraoperative sononavigation during resection of hemispheric tumors,” Neurosurgery 53(3), 556–564, discussion 562–564 (2003). 10.1227/01.NEU.0000080949.44837.4C [DOI] [PubMed] [Google Scholar]
- 17.Reinertsen I., et al. , “Intra-operative correction of brain-shift,” Acta Neurochir. 156(7), 1301–1310 (2014). 10.1007/s00701-014-2052-6 [DOI] [PubMed] [Google Scholar]
- 18.Reinertsen I., et al. , “Clinical validation of vessel-based registration for correction of brain-shift,” Med. Image Anal. 11(6), 673–684 (2007). 10.1016/j.media.2007.06.008 [DOI] [PubMed] [Google Scholar]
- 19.Kersten-Oertel M., Jannin P., Collins D. L., “DVV: a taxonomy for mixed reality visualization in image guided surgery,” IEEE Trans. Visual Comput. Graphics 18(2), 332–352 (2012). 10.1109/TVCG.2011.50 [DOI] [PubMed] [Google Scholar]
- 20.Cabrilo I., et al. , “Augmented reality-assisted skull base surgery,” Neurochirurgie 60(6), 304–306 (2014). 10.1016/j.neuchi.2014.07.001 [DOI] [PubMed] [Google Scholar]
- 21.Kawamata T., et al. , “Endoscopic augmented reality navigation system for endonasal transsphenoidal surgery to treat pituitary tumors: technical note,” Neurosurgery 50(6), 1393–1397 (2002). 10.1097/00006123-200206000-00038 [DOI] [PubMed] [Google Scholar]
- 22.Paul P., Fleig O., Jannin P., “Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: methods and performance evaluation,” IEEE Trans. Med. Imaging 24(11), 1500–1511 (2005). 10.1109/TMI.2005.857029 [DOI] [PubMed] [Google Scholar]
- 23.Rosahl S. K., et al. , “Virtual reality augmentation in skull base surgery,” Skull Base 16(2), 59–66 (2006). 10.1055/s-2006-931620 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Shahidi R., et al. , “Implementation, calibration and accuracy testing of an image-enhanced endoscopy system,” IEEE Trans. Med. Imaging 21(12), 1524–1535 (2002). 10.1109/TMI.2002.806597 [DOI] [PubMed] [Google Scholar]
- 25.Kersten-Oertel M., et al. , “Augmented reality in neurovascular surgery: feasibility and first uses in the operating room,” Int. J. Comput. Assist Radiol. Surg. 10(11), 1823–1836 (2015). 10.1007/s11548-015-1163-8 [DOI] [PubMed] [Google Scholar]
- 26.Cabrilo I., Bijlenga P., Schaller K., “Augmented reality in the surgery of cerebral arteriovenous malformations: technique assessment and considerations,” Acta Neurochir. 156(9), 1769–1774 (2014). 10.1007/s00701-014-2183-9 [DOI] [PubMed] [Google Scholar]
- 27.Cabrilo I., Bijlenga P., Schaller K., “Augmented reality in the surgery of cerebral aneurysms: a technical report,” Neurosurgery 10(Suppl. 2), 252–261, discussion 260–261 (2014). 10.1227/NEU.0000000000000328 [DOI] [PubMed] [Google Scholar]
- 28.Low D., et al. , “Augmented reality neurosurgical planning and navigation for surgical excision of parasagittal, falcine and convexity meningiomas,” Br. J. Neurosurg. 24(1), 69–74 (2010). 10.3109/02688690903506093 [DOI] [PubMed] [Google Scholar]
- 29.Stadie A. T., et al. , “Virtual reality system for planning minimally invasive neurosurgery: technical note,” J. Neurosurg. 108(2), 382–394 (2008). 10.3171/JNS/2008/108/2/0382 [DOI] [PubMed] [Google Scholar]
- 30.Kersten-Oertel M., et al. , “Augmented reality for specific neurovascular surgical tasks,” Lect. Notes Comput. Sci. 9365, 92–103 (2015). 10.1007/978-3-319-24601-7_10 [DOI] [Google Scholar]
- 31.Drouin S., Kersten-Oertel M., Collins D. L., “Interaction-based registration correction for improved augmented reality overlay in neurosurgery,” Lect. Notes Comput. Sci. 9365, 21–29 (2015). 10.1007/978-3-319-24601-7_3 [DOI] [Google Scholar]
- 32.Kersten-Oertel M., Chen S. J., Collins D. L., “An evaluation of depth enhancing perceptual cues for vascular volume visualization in neurosurgery,” IEEE Trans. Visual Comput. Graphics 20, 391–403 (2013). 10.1109/TVCG.2013.240 [DOI] [PubMed] [Google Scholar]
- 33.Kersten-Oertel M., et al. , “Augmented reality visualization for guidance in neurovascular surgery,” Stud. Health Technol. Inf. 173, 225–259 (2012). 10.3233/978-1-61499-022-2-225 [DOI] [PubMed] [Google Scholar]
- 34.Drouin S., et al. , “IBIS: an OR ready open-source platform for image-guided neurosurgery,” Int. J. Comput. Assist. Radiol. Surg. 12, 363–378 (2017). 10.1007/s11548-016-1478-0 [DOI] [PubMed] [Google Scholar]
- 35.Mercier L., et al. , “New prototype neuronavigation system based on preoperative imaging and intraoperative freehand ultrasound: system description and validation,” Int. J. Comput. Assist. Radiol. Surg. 6(4), 507–522 (2011). 10.1007/s11548-010-0535-3 [DOI] [PubMed] [Google Scholar]
- 36.Gerard I. J., Collins D. L., “An analysis of tracking error in image-guided neurosurgery,” Int. J. Comput. Assist. Radiol. Surg. 10, 1579–1588 (2015). 10.1007/s11548-014-1145-2 [DOI] [PubMed] [Google Scholar]
- 37.Guizard N., et al. , “Robust individual template pipeline for longitudinal MR images,” in MICCAI Workshop on Novel Biomarkers for Alzheimer’s Disease and Related Disorders (2012). [Google Scholar]
- 38.Coupe P., et al. , “An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images,” IEEE Trans. Med. Imaging 27(4), 425–441 (2008). 10.1109/TMI.2007.906087 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Sled J. G., Zijdenbos A. P., Evans A. C., “A nonparametric method for automatic correction of intensity nonuniformity in MRI data,” IEEE Trans. Med. Imaging 17(1), 87–97 (1998). 10.1109/42.668698 [DOI] [PubMed] [Google Scholar]
- 40.Eskildsen S. F., Ostergaard L. R., “Active surface approach for extraction of the human cerebral cortex from MRI,” Lect. Notes Comput. Sci. 4191, 823–830 (2006). 10.1007/11866763_101 [DOI] [PubMed] [Google Scholar]
- 41.Yushkevich P., et al. , “User-guided level set segmentation of anatomical structures with ITK-SNAP,” in Insight Journal, Special Issue on ISC, NA-MIC/MICCAI Workshop on Open-Source Software (2005). [Google Scholar]
- 42.Frangi A. F., et al. , “Multiscale vessel enhancement filtering,” Lect. Notes Comput. Sci. 1496, 130–137 (1998). 10.1007/BFb0056195 [DOI] [Google Scholar]
- 43.Zhang Z., “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 892–899 (2004). 10.1109/TPAMI.2004.21 [DOI] [PubMed] [Google Scholar]
- 44.Drouin S., et al. , “A realistic test and development environment for mixed reality in neurosurgery,” Lect. Notes Comput. Sci. 7264, 13–23 (2012). 10.1007/978-3-642-32630-1_2 [DOI] [Google Scholar]
- 45.Kersten-Oertel M., et al. , “Augmented reality in neurovascular surgery: feasibility and first uses in the operating room,” Int. J. Comput. Assist. Radiol. Surg. 10, 1823–1836 (2015). 10.1007/s11548-015-1163-8 [DOI] [PubMed] [Google Scholar]
- 46.Mercier L., et al. , “A review of calibration techniques for freehand 3-D ultrasound systems,” Ultrasound Med. Biol. 31(4), 449–471 (2005). 10.1016/j.ultrasmedbio.2004.11.015 [DOI] [PubMed] [Google Scholar]
- 47.Carbajal G., et al. , “Improving N-wire phantom-based freehand ultrasound calibration,” Int. J. Comput. Assist. Radiol. Surg. 8(6), 1063–1072 (2013). 10.1007/s11548-013-0904-9 [DOI] [PubMed] [Google Scholar]
- 48.De Nigris D., Collins D. L., Arbel T., “Fast rigid registration of pre-operative magnetic resonance images to intra-operative ultrasound for neurosurgery based on high confidence gradient orientations,” Int. J. Comput. Assist. Radiol. Surg. 8(4), 649–661 (2013). 10.1007/s11548-013-0826-6 [DOI] [PubMed] [Google Scholar]
- 49.Hansen N., Ostermeier A., “Completely derandomized self-adaptation in evolution strategies,” Evol. Comput. 9(2), 159–195 (2001). 10.1162/106365601750190398 [DOI] [PubMed] [Google Scholar]
- 50.Gerard I. J., et al. , “New protocol for skin landmark registration in image-guided neurosurgery: technical note,” Neurosurgery 11(Suppl. 3), 376–381, discussion 380–381 (2015). 10.1227/NEU.0000000000000868 [DOI] [PubMed] [Google Scholar]
- 51.Caversaccio M., et al. , “Augmented reality endoscopic system (ARES): preliminary results,” Rhinology 46(2), 156–158 (2008). [PubMed] [Google Scholar]
- 52.Nabavi A., et al. , “Serial intraoperative magnetic resonance imaging of brain shift,” Neurosurgery 48(4), 787–797, discussion 797–798 (2001). 10.1097/0006123-200104000-00019 [DOI] [PubMed] [Google Scholar]
- 53.Holloway R. L., “Registration error analysis for augmented reality,” Presence 6(4), 413–432 (1997). 10.1162/pres.1997.6.4.413 [DOI] [Google Scholar]
- 54.Subbanna N. K., et al. , “Hierarchical probabilistic Gabor and MRF segmentation of brain tumours in MRI volumes,” Lect. Notes Comput. Sci. 8149, 751–758 (2013). 10.1007/978-3-642-40811-3_94 [DOI] [PubMed] [Google Scholar]