Author manuscript; available in PMC 2020 Nov 3.
Published in final edited form as: Med Image Anal. 2016 Jun 14;33:56–63. doi: 10.1016/j.media.2016.06.004

Image-guided interventions and computer-integrated therapy: Quo vadis?

Terry M Peters a,*, Cristian A Linte b
PMCID: PMC7609169  NIHMSID: NIHMS1640116  PMID: 27373146

Abstract

Significant efforts have been dedicated to minimizing the invasiveness associated with surgical interventions, most of which have been possible thanks to developments in medical imaging, surgical navigation, visualization and display technologies. Image-guided interventions have promised to dramatically change the way therapies are delivered to many organs. However, in spite of the development of many sophisticated technologies over the past two decades, and beyond some isolated examples of successful implementation, minimally invasive therapy is far from enjoying the wide acceptance once envisioned. This paper provides a broad overview of state-of-the-art developments, identifies several barriers thought to have hampered the wider adoption of image-guided navigation, and suggests areas of research that may advance the field.

Keywords: Image-guided interventions, Navigation, Tracking, Visualization, Virtual reality, Surgical workflow

1. Introduction

Modern image-guided interventions (IGI) have now been in use for over 25 years. All image-guidance platforms employ pre-operative, and often intra-operative, image data; spatial localization systems that track the surgical instruments; and sophisticated software that registers the imaging data to the patient and links it to the visualization and navigation systems.

IGI has actually been around for well over 100 years, emerging as one of the first uses of X-rays a mere eight days after the publication of Roentgen’s first paper on the topic. Several years later, Horsley and Clarke (1908) reported on the concept later referred to as stereotaxy, which allowed a coordinate system to be associated with a monkey’s head using external markers (aligned with the auditory canals and orbital rims) as reference points. This device, dubbed the “stereotactic frame”, was useful both for rigidly holding and guiding probes to selected targets in the brain and for providing a convenient frame of reference to establish a spatial relationship between the patient and the images. The concept was adopted for human use many years later and is still in use to this day. While stereotactic frames are still employed in some neurosurgical procedures, there are now many other ways of registering images to the patient, including identification of homologous landmarks on the patient and in the images, surface matching, and image-feature matching via intra-operative imaging.
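
To make the landmark approach concrete: the image-to-patient mapping is typically computed as a least-squares rigid fit between homologous fiducials localized in the images and touched with a tracked pointer on the patient. The following minimal Python sketch (coordinates are illustrative and not drawn from any system discussed here) uses the classic SVD-based closed-form solution:

```python
import numpy as np

def rigid_landmark_registration(image_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping image-space fiducials
    onto patient-space fiducials via the SVD-based closed-form solution.
    Both inputs are (N, 3) arrays of homologous landmark positions."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                  # reflection-safe rotation
    return R, cp - R @ ci

# Illustrative example: four fiducials; the "patient" positions are the image
# positions rotated 90 degrees about z and translated.
image_pts = np.array([[10.0, 0, 0], [0, 12.0, 0], [0, 0, 15.0], [8.0, 8.0, 8.0]])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
patient_pts = image_pts @ Rz.T + np.array([5.0, -2.0, 1.0])

R, t = rigid_landmark_registration(image_pts, patient_pts)
residual = image_pts @ R.T + t - patient_pts
print("max residual (mm):", np.abs(residual).max())    # ~0 in this noise-free case
```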

In addition, supporting technologies have made tremendous strides over the past 25 years. Image acquisition speed and resolution have improved by orders of magnitude; real-time intraoperative imaging has become routine; tracking techniques have become clinically accepted, and sophisticated visualization systems have moved from being expensive research tools to being commodity items.

For most of history, surgery has involved exposure of the affected organ via typically large incisions. Image guidance has, for the most part, introduced less invasive alternatives for performing traditional interventions. Concurrent with technological and computational advances, two notable changes occurred in medicine during the past half century: the introduction of endoscopic imaging (Litynski, 1999), which considerably reduced the incision size needed for access, and the establishment of interventional radiology as a surgical sub-specialty (Soares and Murphy, 2005). The latter enabled treatment via percutaneous approaches under X-ray fluoroscopy or ultrasound (US) image guidance. Both developments reflect a shift from direct visual feedback to feedback from medical imaging. X-rays are used to reconstruct three-dimensional (3D) anatomical representations intra-operatively using cone-beam CT, or to guide catheters and other devices through the vasculature under continuous two-dimensional (2D) or 3D fluoroscopic imaging. US provides 3D and four-dimensional (3D + time) imaging of dynamic structures and has become the standard of care for monitoring and guiding several complex cardiac interventions, some of which rely solely on this modality for intra-cardiac navigation.

As a result, surgeons use diagnostic imaging scans to plan the optimal therapy for each individual patient, and then access internal organs through small incisions while relying on video imaging acquired via miniature endo- or laparoscopic cameras as a surrogate for a direct view. They also employ real-time US, fluoroscopy or MRI to guide catheters and needles during percutaneous interventions. Such approaches have helped minimize therapy invasiveness by employing medical imaging as an alternative to direct vision.

A cardiac surgeon colleague maintains that “surgery is a side-effect of therapy”. This statement certainly applies to conventional open-chest, open-heart surgery, during which most patient trauma is caused not by the repair itself but by the process of reaching the target, which includes a median sternotomy, cardiopulmonary bypass, and a cardiac incision. While this approach affords the surgeon a bloodless environment within which to perform the repair, its efficacy remains undetermined until the patient is “put back together”, the cardiopulmonary bypass disconnected, and heart-lung function restored.

Many of these procedures are now being attempted in a minimally invasive fashion. Entry into the cardiac chambers is achieved either via the vascular system, with instruments introduced through the femoral artery in the leg, or via the heart wall, typically through the apex. It is not surprising that a multi-billion-dollar industry is determined to provide minimally invasive solutions for cardiac interventions, which allow the heart to remain beating during the procedure and remove the need for major incisions and cardiopulmonary bypass.

2. Toward less invasive IGI

2.1. Mitral valve repair: NeoChord augmented virtuality navigation

A particular example of a minimally invasive cardiac intervention is illustrated in Fig. 1: the repair of a flail mitral valve leaflet by introducing a rigid instrument into the heart chamber via the apex, capturing the leaflet, attaching an artificial chord to replace the damaged native chord, and adjusting its tension under Doppler trans-esophageal US to minimize mitral regurgitation.

Fig. 1.

Example of navigation assistance using “augmented virtuality” with ultrasound. Using the standard-of-care imaging (a), the instrument trajectory when navigating from the apex of the heart to the mitral-valve annulus is as shown in (b). When using the augmented-virtuality display, which shows the tracked tool and the approximate location of the target, the navigation task is accomplished four times faster, with far fewer excursions into potential danger zones.

This procedure is one of many introduced over the past five years to perform minimally invasive repair of mitral valves. As the standard of care, the procedure is performed while monitoring the progression of the probe using bi-plane trans-esophageal ultrasound. However, workflow analysis has revealed potential limitations: it is not possible to visualize the probe tip and the desired target (the flail mitral valve leaflet) in the same US image at all times during the procedure, nor is there any alignment between the image and the surgeon’s motor field map. While grasping the leaflet with the device can be effected under US control once the instrument is correctly placed, anecdotal evidence indicated that the task of quickly placing the probe at the center of the mitral annulus, prior to performing the leaflet capture, was not straightforward.

This observation led to the development of a guidance system that attempted to address both challenges: permitting the surgeon to see, in an intuitive fashion, the instrument and the target at all times during the procedure, and constructing the display in such a manner that it also intuitively presented the guidance task to the surgeon. When the system was validated on porcine models, the surgeons completed the navigation of the repair tool to the center of the mitral valve annulus an average of four times faster using the augmented-virtuality guidance system than under US imaging alone. In addition, since a more direct route was always identified under augmented-virtuality guidance, potential damage by the probe to sensitive intra-cardiac structures was also mitigated.

Two strong messages emerged from this study: the tool-to-target navigation task is as important as the on-target positioning task, and in many image-guidance scenarios different imaging technologies need to be employed for each phase. Just as a GPS or SatNav device cannot guide a driver from the starting point into the space between two parked cars at the destination, neither can a registered pre-operative image guide the placement of an instrument onto an intra-corporeal target with sufficient precision. In addition, the user interface can have a dramatic effect on the result: if well designed, an intuitive user interface can decrease the cognitive load experienced by the surgeon and, as a result, positively affect the outcome of the procedure. We believe that these observations suggest a new way of thinking about image-guided platforms, one that may encourage more rapid adoption by the surgical community. Perhaps many of the procedures for which we employ the paradigm of “operating on the model” can be simplified significantly by breaking the workflow down into navigation and positioning tasks. This approach certainly relaxes the accuracy requirements of a pre-operative model, especially since the model is employed primarily for context, and shifts the responsibility for precision to the intra-operative imaging modality.

2.2. State of the art in IGI

A recent book by the late Ferenc Jolesz (Jolesz, 2014), one of the pioneers of image-guided interventions, provides a comprehensive survey of state-of-the-art image-guidance techniques in various organ systems. Many of the research projects described rely on sophisticated intra-operative imaging such as MRI, CT and PET, with many examples pertaining to the Advanced Multi-modality Image-guided Operating (AMIGO) suite at Brigham and Women’s Hospital in Boston (Fig. 2).

Fig. 2.

The Advanced Multi-modality Image Guided Operating (AMIGO) suite at Brigham and Women’s Hospital – a first step into the operating room of the future.

The AMIGO suite was conceived and implemented by Dr. Jolesz and his colleagues, with support from NIH and Brigham and Women’s Hospital, as part of the National Center for Image Guided Therapy (NCIGT). After Dr. Jolesz’s untimely passing, NCIGT and AMIGO have been led by Clare Tempany, MD. The Center has been funded by NIH since 2005, and as research with clinical impact is integral to NCIGT’s mission of “turning discovery into health”, AMIGO serves as the clinical test-bed for NCIGT (Tempany et al., 2015). Further details are available at www.ncigt.org.

While such intra-operative imaging facilities represent an expensive proposition for general use, AMIGO nevertheless serves as a “gold standard” against which lower-footprint technologies may be compared.

2.3. Key issues related to IGI

The development of image-guided procedures has relied on technological advances in several key areas: identifying the therapeutic target in images; registering images and image-derived models to the patient; tracking instruments with respect to the patient and the inherently registered images and models; accounting for discrepancies between the images and the patient; validating the accuracy of the intervention; and displaying the information to the surgeon in an intuitive manner. Each of these tasks represents a research area in its own right that has provided countless challenges to the research community, and solutions have been reported in this journal among many others. However, despite all these efforts, there is still little evidence of successful translation and widespread use of image guidance in clinical applications.
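
Of these tasks, validation merits particular care, because a small fiducial registration error (FRE) reported by a navigation system does not guarantee a small error at the surgical target (the target registration error, TRE). The brief Monte-Carlo sketch below, with illustrative geometry rather than data from any cited study, makes the distinction explicit by perturbing the fiducials with localization noise and measuring the resulting error at a target remote from the fiducial centroid:

```python
import numpy as np

rng = np.random.default_rng(0)

def register(src, dst):
    """Rigid (R, t) least-squares fit mapping src points onto dst (SVD method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
    return R, cd - R @ cs

# Four skin fiducials clustered near the top of the head (mm, illustrative)
# and a deep target well away from the fiducial centroid.
fiducials = np.array([[60.0, 0, 90], [-60.0, 0, 90], [0, 70.0, 80], [0, -70.0, 80]])
target = np.array([0.0, 0.0, -40.0])

fre, tre = [], []
for _ in range(2000):
    # simulate 1 mm RMS fiducial localization error in "patient" space
    measured = fiducials + rng.normal(scale=1.0, size=fiducials.shape)
    R, t = register(fiducials, measured)
    fre.append(np.sqrt(np.mean(np.sum((fiducials @ R.T + t - measured) ** 2, axis=1))))
    tre.append(np.linalg.norm(R @ target + t - target))

# TRE grows with distance from the fiducial centroid; FRE alone is not a safety metric.
print(f"mean FRE: {np.mean(fre):.2f} mm, mean TRE at target: {np.mean(tre):.2f} mm")
```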

Some of the lessons learned by the authors over the past decade or so of developing tools for image-guided intervention, particularly in the fields of valve repair/replacement, ablation, and atrial-septal defect repair, are illustrated in Fig. 3. These examples serve as a tour through several systems developed in support of minimally invasive, image-guided intervention in the beating heart.

Fig. 3.

Examples of cardiac IGI applications: (a) real-time visualization for image-guided cardiac ablation monitoring; (b) image-guided mitral valve targeting under US imaging enhanced with pre-operative anatomy; (c) heart migration during coronary artery bypass grafting (CABG) procedures; (d) augmentation of a pre-operative electroanatomic model with real-time US imaging for cardiac ablation interventions; (e) augmented reality platform for optimizing port placement and identifying the target for robot-assisted CABG; (f) surgical suite during minimally invasive mitral valve repair; (g, h) real and virtual environments mimicking an epicardial image-guided cardiac procedure in a beating-heart phantom; (i) pre-operative identification of the mitral valve using model-based registration; (j) intra-procedural lesion map showing the ablation patterns delivered during image-guided cardiac ablation, extracted from pre-procedural MRI; (k) ultrasound-guided aortic valve replacement procedure.

3. Making an impact through IGI: what does it take?

Although medical imaging has enabled a variety of minimally invasive procedures, this path has not been free of bumps along the way. The primary challenge arises from the fact that the procedure outcome depends on the physician’s ability to mentally recreate the underlying ‘surgical scene’ from the intra-operative images. This task is not trivial, given that intra-operative images typically feature lower quality and a smaller field of view than pre-operative images. The goal of image-guidance systems is to provide accurate guidance to the target while avoiding critical anatomical structures. Most often this is achieved by mapping information obtained pre-operatively – diagnostic images and/or models derived from them – to the intra-operative setting (Cleary et al., 2010).

For minimally invasive image-guidance techniques to become routinely employed in patient care, low-cost intra-operative imaging modalities such as endoscopy, ultrasound, optical spectroscopy, and optical coherence tomography (OCT) need to be exploited fully. While these modalities, when registered with pre-operative imaging, may not always deliver performance equivalent to that of, for example, an intra-operative MRI guidance system, their potential for widespread availability and affordable healthcare makes their overall benefit to the patient population likely to be much more significant. In many cases, moreover, because of their higher resolution and specificity, they may well add significantly more information than is available with MRI, as demonstrated by Jermyn et al. (2015) in the case of Raman spectroscopy.

Image-guided intervention technology continues to develop, stimulated not only by improvements in computing and image-processing capabilities and in hardware to support tracking and visualization, but also by the increasing number of surgeons who are willing to embrace new, non-traditional technologies and therapy paradigms. Nevertheless, widespread application in areas of the body beyond neurosurgery, orthopedic surgery and prostate therapy still remains elusive. So what are the major challenges that the scientific community must address to offer improved routes to commercialization and clinical adoption?

While many procedures can clearly benefit from well-designed image-guidance platforms, are they likely to have significant effects on the patient’s long-term outcome? Will they significantly reduce the time in the operating room? Can the procedures be performed with the same (or fewer) personnel as the conventional approach, and will the overall cost of the procedure, including hardware, consumables, and operating room time, be reduced? An additional factor justifying the development and integration of new therapeutic technology is its potential impact on a population and the inherent demand for the technology, although the latter may sometimes be a perceived, rather than a real, advantage. A negative response to any of these questions can seriously impair the ability to move an image-guided procedure to clinical acceptance. However, even when a proposed surgical procedure meets the above criteria, there are still technical challenges that must be addressed.

3.1. Addressing an unmet clinical need

The primary measure of treatment success is the extent to which it impacts the patient’s quality of life. This may be assessed in terms of reduced post-surgical complications as well as increased safety during the procedure. While a less invasive therapy for a particular condition may be desirable, unless the proposed approach maintains or improves outcome and quality of life relative to conventional methods, its adoption is difficult to justify. Unfortunately, long-term outcome data for new technologies with a statistically significant number of subjects are often difficult or impossible to obtain.

3.2. Faithful surgical target identification

In many instances, the surgical target manifests as a well-defined area of contrast enhancement on a standard radiological image (MRI, CT, PET, X-ray). However, many lesions remain hidden in standard images, even though the underlying numerical data may be representative of a lesion. A typical example is seen in some MR-negative images of epilepsy patients. While such scans are reported lesion-free by radiologists, Goubran et al. (2015) showed that analysis of the underlying data in quantitative images (relaxation and diffusion maps, for example) can demonstrate close correlations between changes in the numerical values of some of these parameters and the underlying cellular structure of the lesion. This study suggests that in many medical images there is more than meets the eye, and that in many instances image-guided interventions could benefit from quantitative analysis of pre-operative images.
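
As a hedged illustration of the kind of quantitative analysis referred to above (a simplified stand-in, not the pipeline of Goubran et al.), a patient’s relaxation or diffusion map can be compared voxel-wise against co-registered normative maps, flagging deviations that a visual read may miss; all array names, values and thresholds below are illustrative:

```python
import numpy as np

def quantitative_outlier_map(patient_map, control_mean, control_std, z_thresh=3.0):
    """Voxel-wise z-score of a quantitative map (e.g., a T2 relaxation or
    diffusion map) against co-registered normative mean/std maps; returns a
    binary mask of voxels deviating beyond z_thresh standard deviations."""
    z = (patient_map - control_mean) / np.maximum(control_std, 1e-6)
    return np.abs(z) > z_thresh

# Illustrative 3D volumes: a normative T2 map with a subtle focal elevation.
shape = (64, 64, 32)
control_mean = np.full(shape, 80.0)   # ms; a plausible cortical T2, for illustration
control_std = np.full(shape, 4.0)
patient = control_mean + np.random.default_rng(1).normal(0.0, 4.0, shape)
patient[30:34, 30:34, 14:18] += 20.0  # subtle lesion: +20 ms, hard to see visually

# In practice, cluster-size filtering would further suppress isolated false positives.
mask = quantitative_outlier_map(patient, control_mean, control_std, z_thresh=4.0)
print("suspicious voxels flagged:", int(mask.sum()))
```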

Another example, this time cardiac-related, is the identification of arrhythmic foci for cardiac ablation, as well as the assessment of induced tissue injury. US imaging on its own cannot identify the foci, nor can it differentiate between viable and irreversibly ablated tissue. However, augmentation of the US imaging with electroanatomic maps and real-time physiological models for lesion prediction can significantly facilitate detection of the target site and enhance navigation and guidance.

3.3. Accurate and precise tracking of surgical instruments

Tracking the instruments, and thereby relating the patient to the pre-operative images, is key to the success of any image-guided procedure. To date, we have relied on two dominant technologies: (electro)magnetic and optical surgical instrument tracking. Although they have served the community well, in addition to the inherent drawbacks outlined below, both are limited by the need for expensive sensors that must be retrofitted to each entity to be tracked, together with precision infrared (IR) optical cameras or magnetic field generators. Optical systems may suffer from loss of the direct line of sight required between the IR camera and the instrument-mounted fiducials, while electromagnetic tracking is compromised by the presence of conductive and ferromagnetic materials within the tracking volume or close to the field generator. There is therefore a clear need for a technology that could provide ubiquitous tracking of arbitrary instruments, similar to the inexpensive radiofrequency identification (RFID) or near-field communication (NFC) devices that are becoming common in many consumer items.
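
Whatever the tracking technology, the computation underlying a tracked-tool display is a chain of coordinate transforms: the tracker reports the pose of a sensor mounted on the instrument, a one-time pivot calibration gives the tool-tip offset in the sensor’s frame, and the image-to-patient registration carries the result into image coordinates. A minimal sketch with homogeneous matrices follows; all numerical values are illustrative:

```python
import numpy as np

def homogeneous(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Tracker-reported pose of the instrument sensor in tracker coordinates
# (in practice, streamed continuously from the optical/EM tracker).
T_tracker_sensor = homogeneous(np.eye(3), [100.0, 50.0, -200.0])

# One-time pivot calibration: tool-tip position in the sensor's own frame.
tip_in_sensor = np.array([0.0, 0.0, 120.0, 1.0])   # 120 mm along the shaft

# Image-to-tracker registration (e.g., from landmark registration), inverted
# here so that it maps tracker coordinates into image coordinates.
T_image_tracker = np.linalg.inv(homogeneous(np.eye(3), [10.0, -5.0, 2.0]))

# Compose the chain: tip -> sensor -> tracker -> image.
tip_in_image = T_image_tracker @ T_tracker_sensor @ tip_in_sensor
print("tool tip in image coordinates (mm):", tip_in_image[:3])
```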

3.4. Growing the open source community

The availability of open-source software, particularly 3D Slicer, along with VTK and ITK and extensions such as SimpleITK, SlicerIGT and PLUS (Public Library for Ultrasound), has greatly facilitated the development of research platforms for image-guided visualization (Kikinis et al., 2014). These systems are often highly sophisticated, provide much more functionality than their commercial predicates, and in a clinical study could provide significant information affecting the outcome of a procedure. However, because of regulatory hurdles, the only means of comparing these systems with the standard of care is to employ them in an operating room in such a manner that they do not inform the procedure, which presents a conundrum. In the current climate, the only means of obtaining regulatory approval for a new system is for a commercial entity to re-engineer it and embark on a lengthy and expensive regulatory journey before the system can be effectively evaluated in a clinically relevant environment. This process severely retards the uptake of new technology and, in many cases, kills it. It is therefore imperative to work with regulatory agencies to seek a route for “fast-track” approval of Slicer-based systems, enabling rapid demonstration of their clinical efficacy.
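
As an illustration of how these open-source components are typically assembled (a sketch, not a validated clinical configuration), the following performs a mutual-information rigid registration of an intra-operative to a pre-operative volume with the SimpleITK API; the file names are placeholders and the parameter values are common starting points:

```python
import SimpleITK as sitk

# Pre-operative (fixed) and intra-operative (moving) volumes; paths are placeholders.
fixed = sitk.ReadImage("preop_mri.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("intraop_volume.nii.gz", sitk.sitkFloat32)

# Initialize a rigid transform by aligning the geometric centers of the volumes.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multi-modal metric
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
print("final metric value:", reg.GetMetricValue())
sitk.WriteTransform(transform, "intraop_to_preop.tfm")
```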

3.5. Designing intuitive and workflow-compatible displays

Most image-guidance platforms integrate data and signals from several sources, including pre- and intra-operative images, functional (e.g., electrophysiology) data and surgical tracking information, all incorporated into a common coordinate frame and display. The operators’ performance is thus dependent on their perception and interpretation of this information: even a technically optimal system remains dependent on the human observer’s perception of the presented information.

Perhaps the greatest challenge to successful adoption of an IGI system in the operating room is the interface between the user and the technology. Immense effort has been put into the development of visualization hardware and software, with high-resolution 3D stereoscopic displays that just a few years ago were expensive research systems but have now become commodity items. Nevertheless, relatively little effort has been directed toward studies of human factors, information perception and interpretation. A requirement for successful adoption of IGI in a clinical setting is that it must be simple and intuitive to operate and intrude minimally on workflow. The key components of image fusion, feature identification, and image-to-patient registration must occur with minimal intervention by the surgeon and, preferably, via an interface that is accessed without physical keyboards or switches, all of which intrude on limited operating room space and potentially lengthen procedure time.

One of the first publications to identify perception-related issues with medical AR systems was by Johnson and her colleagues (Johnson et al., 2003). That work identified depth-perception issues when a semi-transparent structure was overlaid onto the visible surface in a stereoscopic setup. A later study by Sielhorst et al. (2006) evaluated the effect of seven rendering methods on stereoscopic depth perception; the best performance was achieved with semi-transparent surface rendering and with a virtual window overlaid onto the skin surface. Finally, even a system deemed ideal from the technical and perceptual standpoints may present new challenges. Dixon et al. (2013), for example, concluded that “advanced navigational displays may increase precision, but strategies to mitigate attentional costs need further investigation to allow safe implementation”. In a randomized study of endoscopic augmented reality (AR), with standard endoscopy as the control, they showed that the AR system led to more accurate results but significantly reduced the operator’s ability to identify a complication or a foreign body in close proximity to the target. Thus, by providing a compelling guidance environment, we increase the clinician’s focus on specific regions while reducing their ability to detect unexpected findings nearby. This is a critical issue in systems where the operator is the only individual viewing the scene and no other member of the clinical staff can alert them to such issues.

3.6. Minimizing technology footprint and mitigating clinicians’ resistance to change

While newly developed technology is intended to address current clinical challenges, new ways of conducting routine procedures are often more challenging for clinicians. As a consequence, developers of navigation environments should strive to simplify the workflows and the interaction with their systems. Currently, the use of these navigation aids is often not intuitive and involves a steep learning curve, so training is conducted in dedicated high-end facilities such as the IRCAD center described by Cleary et al. (2010). While high-quality training for clinicians requires considerable financial investment, the fundamentals of navigation can be taught using very low-cost simulations (Novak et al., 2007).

3.7. Managing data at the right time and right place

To ensure optimal integration into procedure workflows, guidance environments must feature accurate identification and targeting of the surgical sites, near-real-time performance, and display of only the relevant information. To achieve these goals, real- and near-real-time non-rigid registration and organ tracking are currently active areas of research.

Seamless synchronization of all signals, images and other data is also paramount, ensuring that all components of the image-guidance environment (i.e., pre- and intra-operative images, virtual representations of the tracked instruments, and so on) are integrated into a common coordinate system and accurately registered to the patient. Temporal synchronization between the real and virtual/augmented intra-operative environments and the patient must also be achieved and maintained at all times during the procedure. While instantaneous updating is desirable, image acquisition, tracking, registration and visualization all take time, resulting in inherent latency in the displayed information (Linte et al., 2013).
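
A common way to manage this latency and asynchrony is to timestamp every tracking sample and image frame against a shared clock, buffer the tracking stream, and interpolate the tool pose at each frame’s acquisition time. The sketch below covers positions only (orientations would require quaternion interpolation); the class, names and rates are illustrative, not any particular toolkit’s API:

```python
import numpy as np

class TrackingBuffer:
    """Buffer of timestamped tool positions; a pose is looked up by
    interpolating to an image frame's acquisition timestamp, so that the
    rendered tool and the image refer to the same instant."""

    def __init__(self):
        self.times, self.positions = [], []

    def add(self, t, position):
        self.times.append(t)
        self.positions.append(np.asarray(position, dtype=float))

    def position_at(self, t_image):
        """Linearly interpolate the tool position at the image timestamp."""
        times, pos = np.asarray(self.times), np.vstack(self.positions)
        return np.array([np.interp(t_image, times, pos[:, k]) for k in range(3)])

# Tracker samples at 40 Hz; the tool moves along x at 40 mm/s.
buf = TrackingBuffer()
for i in range(8):
    buf.add(t=i * 0.025, position=[i * 1.0, 0.0, 0.0])

# An image frame arrives "now" but was acquired 12 ms earlier (measured latency),
# so the pose is interpolated at the acquisition time, not the arrival time.
frame_arrival, measured_latency = 0.175, 0.012
print("tool position at frame time:", buf.position_at(frame_arrival - measured_latency))
```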

3.8. Designing cost-effective solutions

The perceived cost/benefit ratio remains a major barrier to the adoption of new methodologies. Based on past performance, the introduction of new technologies into the operating room has more often increased the cost of providing healthcare (Bodenheimer, 2005). From a financial perspective, only a few studies have evaluated the cost-effectiveness of image-guided and robotic systems (Desai et al., 2011; Margier et al., 2015; Novak et al., 2007), and unfortunately none reported a clear financial benefit for the proposed navigation systems. One system worth mentioning in the context of cost-effectiveness evaluation is the RIO augmented haptic surgery system studied by Swank and Alkire (2009). Not only was this system demonstrated to be cost-effective, but the analysis also showed an increased number of patients undergoing the procedure, possibly attracted by the novelty of the technology.

Thus, successfully transitioning from laboratory implementation and testing to clinical care is not only a matter of providing improved healthcare, but also of being cost-effective. If the intent is to develop systems that will be clinically adopted, then cost should be considered during the research phase rather than deferred to the clinical implementation phase, as appears to have been the case for most recently developed systems, which have consequently gained little traction in terms of clinical translation.

4. Closing remarks

While there is a multitude of areas in which IGI techniques have obvious application, demand must be motivated by real need or by the limitations of existing approaches. Unless the technology is truly disruptive, it should not make the task of surgeons or support staff more difficult, lengthen procedure time, or change surgical workflow and training.

As the thrust towards greater use of minimally invasive interventions continues, image guidance will continue to be an integral component of such systems. However, wide acceptance will occur through close partnerships between scientists and surgeons, compelling studies that conclusively demonstrate major benefits in terms of patient outcome and cost, and a commitment from the surgical device and imaging industries to support these concepts.

While many challenges remain, continued multidisciplinary and multi-institutional research efforts, along with ongoing improvements to imaging and computer technology, will finally enable most interventions to be accomplished safely without the need to subject the patient to “surgery”.

Acknowledgments

The work reported in this paper was funded in part by the following sources: the Canadian Institutes of Health Research (MOP 119447), the Natural Sciences and Engineering Research Council of Canada (RGPIN 2014-04504), the Canada Foundation for Innovation (20094), two doctoral research awards and two post-doctoral fellowships from the Natural Sciences and Engineering Research Council of Canada (NSERC-CGSD-2008-2010, PDF-2010-2012) and the Heart & Stroke Foundation of Canada (HSFC-DRA-2008-2010 and HSFC-RF-2012-2014), and the National Institutes of Health (R01EB002834 – Biomedical Imaging Resource, Mayo Clinic, and P41EB015898 – National Center for Image-guided Therapy at Brigham & Women’s Hospital and Harvard University). In addition, the assistance of students and staff in the VASST lab (Robarts Research Institute, London, Ontario) and the Biomedical Imaging Resource (Mayo Clinic, Rochester, MN) is gratefully acknowledged.

Biography

Terry M. Peters is a Scientist at the Robarts Research Institute at Western University, London, Canada and a Professor in Medical Imaging, Medical Biophysics and Biomedical Engineering at Western. Throughout his career, he has been working on problems in medical imaging in general and image-guided intervention in particular. He graduated with a PhD from the University of Canterbury in NZ, and after some time as a Medical Physicist at Christchurch Hospital, was recruited to the Montreal Neurological Institute at McGill University. After 19 years at the MNI, he moved to join the Medical Imaging group at Robarts in 1997.

Cristian A. Linte completed a BASc in Mechanical and Materials Engineering at the University of Windsor in Windsor, Ontario, Canada in 2004, followed by an MESc and a PhD in Biomedical Engineering from Western University in London, Canada in 2006 and 2010, respectively. In 2011, he joined the Biomedical Imaging Resource at Mayo Clinic in Rochester, MN – a group with a long-standing tradition in the development of medical image analysis and image-guided intervention technology – for a post-doctoral fellowship. For the past three years he has been an Assistant Professor in Biomedical Engineering at the Rochester Institute of Technology, Rochester, NY, where his research focuses on image-guided interventions.

References

1. Bodenheimer T, 2005. High and rising health care costs. Part 1: seeking an explanation. Ann. Intern. Med. 142, 847–854.
2. Cleary K, Peters TM, Yarmush M, Duncan J, Gray M, 2010. Image-guided interventions: technology review and clinical applications. Annu. Rev. Biomed. Eng. 12, 119–142.
3. Desai AS, Dramis A, Kendoff D, Board TN, 2011. Critical review of the current practice for computer-assisted navigation in total knee replacement surgery: cost-effectiveness and clinical outcome. Curr. Rev. Musculoskelet. Med. 4, 11–15.
4. Dixon BJ, Daly MJ, Chan H, Vescan AD, Witterick IJ, Irish JC, 2013. Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. Surg. Endosc. 27, 454–461.
5. Goubran M, Hammond RR, de Ribaupierre S, Burneo JG, Mirsattari S, Steven DA, Parrent AG, Peters TM, Khan AR, 2015. Magnetic resonance imaging and histology correlation in the neocortex in temporal lobe epilepsy. Ann. Neurol. 77, 237–250.
6. Horsley V, Clarke RH, 1908. The structure and functions of the cerebellum examined by a new method. Brain 31, 45–124.
7. Jermyn M, Mok K, Mercier J, Desroches J, Pichette J, Saint-Arnaud K, Bernstein L, Guiot MC, Petrecca K, Leblond F, 2015. Intraoperative brain cancer detection with Raman spectroscopy in humans. Sci. Transl. Med. 7, 274ra19.
8. Johnson LG, Edwards P, Hawkes D, 2003. Surface transparency makes stereo overlays unpredictable: the implications for augmented reality. Stud. Health Technol. Inform. 94, 131–136.
9. Jolesz FA, 2014. Intraoperative Imaging and Image-Guided Therapy. Springer, New York.
10. Kikinis R, Pieper SD, Vosburgh KG, 2014. 3D Slicer: a platform for subject-specific image analysis, visualization and clinical support. In: Jolesz FA (Ed.), Intraoperative Imaging and Image-Guided Therapy. Springer, New York.
11. Linte CA, Camp JJ, Holmes DR 3rd, Rettmann ME, Robb RA, 2013. Toward online modeling for lesion visualization and monitoring in cardiac ablation therapy. Med. Image Comput. Comput. Assist. Interv. 16, 9–17.
12. Litynski GS, 1999. Endoscopic surgery: the history, the pioneers. World J. Surg. 23, 745–753.
13. Margier J, Tchouda SD, Banihachemi JJ, Bosson JL, Plaweski S, 2015. Computer-assisted navigation in ACL reconstruction is attractive but not yet cost efficient. Knee Surg. Sports Traumatol. Arthrosc. 23, 1026–1034.
14. Novak EJ, Silverstein MD, Bozic KJ, 2007. The cost-effectiveness of computer-assisted navigation in total knee arthroplasty. J. Bone Joint Surg. Am. 89, 2389–2397.
15. Sielhorst T, Bichlmeier C, Heining SM, Navab N, 2006. Depth perception – a major issue in medical AR: evaluation study by twenty surgeons. Med. Image Comput. Comput. Assist. Interv. 9, 364–372.
16. Soares GM, Murphy TP, 2005. Clinical interventional radiology: parallels with the evolution of general surgery. Semin. Intervent. Radiol. 22, 10–14.
17. Swank ML, Alkire MR, 2009. Minimally invasive hip resurfacing compared to minimally invasive total hip arthroplasty. Bull. NYU Hosp. Jt. Dis. 67, 113–115.
18. Tempany CMC, Jayender J, Kapur T, Bueno R, Golby A, Agar N, Jolesz FA, 2015. Multimodal imaging for improved diagnosis and treatment of cancers. Cancer 121, 817–827.
