TY - JOUR AU - Abdollahi, Farnaz AU - Case Lazarro, Emily D. AU - Listenberger, Molly AU - Kenyon, Robert V. AU - Kovic, Mark AU - Bogey, Ross A. AU - Hedeker, Donald AU - Jovanovic, Borko D. AU - Patton, James L. T1 - Error Augmentation Enhancing Arm Recovery in Individuals With Chronic Stroke JO - Neurorehabilitation and Neural Repair Y1 - 2013/august VL - 28 IS - 2 SP - 120 EP - 128 ER - TY - JOUR AU - Abe, Yuichiro AU - Sato, Shigenobu AU - Kato, Koji AU - Hyakumachi, Takahiko AU - Yanagibashi, Yasushi AU - Ito, Manabu AU - Abumi, Kuniyoshi T1 - A novel 3D guidance system using augmented reality for percutaneous vertebroplasty: technical note. JO - Journal of neurosurgery. Spine Y1 - 2013 VL - 19 SP - 492 EP - 501 KW - Aged; Aged KW - 80 and over; Computer Simulation; Humans; Osteoporotic Fractures KW - surgery; Phantoms KW - Imaging; Spinal Fractures KW - surgery; Surgery KW - Computer-Assisted KW - methods; User-Computer Interface; Vertebroplasty KW - methods N1 - 1547-5646 Owner: NLM N2 - Augmented reality (AR) is an imaging technology by which virtual objects are overlaid onto images of real objects captured in real time by a tracking camera. This study aimed to introduce a novel AR guidance system called virtual protractor with augmented reality (VIPAR) to visualize a needle trajectory in 3D space during percutaneous vertebroplasty (PVP). The AR system used for this study comprised a head-mounted display (HMD) with a tracking camera and a marker sheet. An augmented scene was created by overlaying the preoperatively generated needle trajectory path onto a marker detected on the patient using AR software, thereby providing the surgeon with augmented views in real time through the HMD. The accuracy of the system was evaluated by using a computer-generated simulation model in a spine phantom and also evaluated clinically in 5 patients. In the 40 spine phantom trials, the error of the insertion angle (EIA), defined as the difference between the attempted angle and the insertion angle, was evaluated using 3D CT scanning. Computed tomography analysis of the 40 spine phantom trials showed that the EIA in the axial plane significantly improved when VIPAR was used compared with when it was not used (0.96° ± 0.61° vs 4.34° ± 2.36°, respectively). The same held true for EIA in the sagittal plane (0.61° ± 0.70° vs 2.55° ± 1.93°, respectively). In the clinical evaluation of the AR system, 5 patients with osteoporotic vertebral fractures underwent VIPAR-guided PVP from October 2011 to May 2012. The postoperative EIA was evaluated using CT. The clinical results of the 5 patients showed that the EIA in all 10 needle insertions was 2.09° ± 1.3° in the axial plane and 1.98° ± 1.8° in the sagittal plane. There was no pedicle breach or leakage of polymethylmethacrylate. VIPAR was successfully used to assist in needle insertion during PVP by providing the surgeon with an ideal insertion point and needle trajectory through the HMD. The findings indicate that AR guidance technology can become a useful assistive device during spine surgeries requiring percutaneous procedures. ER - TY - JOUR AU - Abhari, Kamyar AU - Baxter, John S. H. AU - Chen, Elvis C. S. AU - Khan, Ali R. AU - Peters, Terry M. AU - de Ribaupierre, Sandrine AU - Eagleson, Roy T1 - Training for planning tumour resection: augmented reality and human factors.
JO - IEEE transactions on bio-medical engineering Y1 - 2015 VL - 62 SP - 1466 EP - 1477 KW - Ergonomics; Female; Head KW - surgery; Humans; Imaging KW - Three-Dimensional KW - methods; Male; Neurosurgical Procedures KW - education; Phantoms KW - Imaging; Surgery KW - Computer-Assisted KW - methods; User-Computer Interface N1 - 1558-2531 Owner: NLM N2 - Planning surgical interventions is a complex task, demanding a high degree of perceptual, cognitive, and sensorimotor skills to reduce intra- and post-operative complications. This process requires spatial reasoning to coordinate between the preoperatively acquired medical images and patient reference frames. In the case of neurosurgical interventions, traditional approaches to planning tend to focus on providing a means for visualizing medical images, but rarely support transformation between different spatial reference frames. Thus, surgeons often rely on their previous experience and intuition as their sole guide to performing mental transformations. In the case of junior residents, this may lead to longer operation times or an increased chance of error under additional cognitive demands. In this paper, we introduce a mixed augmented-/virtual-reality system to facilitate training for planning a common neurosurgical procedure, brain tumour resection. The proposed system is designed and evaluated with human factors explicitly in mind, alleviating the difficulty of mental transformation. Our results indicate that, compared to conventional planning environments, the proposed system greatly improves the nonclinicians' performance, independent of the sensorimotor tasks performed. Furthermore, the use of the proposed system by clinicians resulted in a significant reduction in time to perform clinically relevant tasks. These results demonstrate the role of mixed-reality systems in assisting residents to develop the spatial reasoning skills needed for planning brain tumour resection, improving patient outcomes. ER - TY - JOUR AU - Ai, Danni AU - Yang, Jian AU - Fan, Jingfan AU - Zhao, Yitian AU - Song, Xianzheng AU - Shen, Jianbing AU - Shao, Ling AU - Wang, Yongtian T1 - Augmented reality based real-time subcutaneous vein imaging system. JO - Biomedical optics express Y1 - 2016 VL - 7 SP - 2565 EP - 2585 KW - (100.0100) Image processing; (100.3010) Image reconstruction techniques; (110.0110) Imaging systems; (110.3080) Infrared imaging; (170.0110) Imaging systems N1 - 2156-7085 Owner: NLM N2 - A novel 3D reconstruction and fast imaging system for subcutaneous veins by augmented reality is presented. The study was performed to reduce the failure rate and time required in intravenous injection by providing augmented vein structures that back-project superimposed veins on the skin surface of the hand. Images of the subcutaneous vein are captured by two industrial cameras with extra reflective near-infrared lights. The veins are then segmented by a multiple-feature clustering method. Vein structures captured by the two cameras are matched and reconstructed based on the epipolar constraint and homographic property. The skin surface is reconstructed by active structured light with spatial encoding values and fusion displayed with the reconstructed vein. The vein and skin surface are both reconstructed in the 3D space. Results show that the structures can be precisely back-projected to the back of the hand for further augmented display and visualization.
The overall system performance is evaluated in terms of vein segmentation, accuracy of vein matching, feature points distance error, duration times, accuracy of skin reconstruction, and augmented display. All experiments are validated with sets of real vein data. The imaging and augmentation system produces good imaging and augmented reality results at high speed. ER - TY - JOUR AU - Alaraj, Ali AU - Charbel, Fady T. AU - Birk, Daniel AU - Tobin, Matthew AU - Luciano, Cristian AU - Banerjee, Pat P. AU - Rizzi, Silvio AU - Sorenson, Jeff AU - Foley, Kevin AU - Slavin, Konstantin AU - Roitberg, Ben T1 - Role of cranial and spinal virtual and augmented reality simulation using immersive touch modules in neurosurgical training. JO - Neurosurgery Y1 - 2013 VL - 72 Suppl 1 SP - 115 EP - 123 KW - Central Nervous System Diseases KW - surgery; Competency-Based Education KW - methods; Computer Simulation; Craniotomy KW - education KW - methods; Education KW - Medical KW - Graduate KW - methods; Feedback; Humans; Imaging KW - Three-Dimensional KW - methods; Internship and Residency KW - methods; Medical Errors KW - prevention & control; Neurosurgical Procedures KW - education; Rhizotomy KW - methods; Spinal Fusion KW - methods; Spinal Puncture KW - methods; Touch; Trigeminal Neuralgia KW - surgery; User-Computer Interface; Ventriculostomy KW - methods; Vertebroplasty KW - methods N1 - 1524-4040 Owner: NLM N2 - Recent studies have shown that mental script-based rehearsal and simulation-based training improve the transfer of surgical skills in various medical disciplines. Despite significant advances in technology and intraoperative techniques over the last several decades, surgical skills training on neurosurgical operations still carries significant risk of serious morbidity or mortality. Potentially avoidable technical errors are well recognized as contributing to poor surgical outcome. Surgical education is undergoing overwhelming change, as a result of the reduction of work hours and current trends focusing on patient safety and linking reimbursement with clinical outcomes. Thus, there is a need for adjunctive means of neurosurgical training, a need that recent advancements in simulation technology can address. ImmersiveTouch is an augmented reality system that integrates a haptic device and a high-resolution stereoscopic display. This simulation platform uses multiple sensory modalities, re-creating many of the environmental cues experienced during an actual procedure. Modules available include ventriculostomy, bone drilling, percutaneous trigeminal rhizotomy, and simulated spinal modules such as pedicle screw placement, vertebroplasty, and lumbar puncture. We present our experience with the development of such augmented reality neurosurgical modules and the feedback from neurosurgical residents. ER - TY - CONF AU - Al-Ataby, Ali AU - Younis, Ola AU - Al-Nuaimy, Waleed AU - Al-Taee, Majid AU - Sharaf, Zain AU - Al-Bander, Baidaa T1 - Visual Augmentation Glasses for People with Impaired Vision PB - IEEE Y1 - 2016/august ER - TY - JOUR AU - Albrecht, Urs-Vito AU - Folta-Schoofs, Kristian AU - Behrends, Marianne AU - von Jan, Ute T1 - Effects of mobile augmented reality learning compared to textbook learning on medical students: randomized controlled pilot study.
JO - Journal of medical Internet research Y1 - 2013 VL - 15 SP - e182 EP - e182 KW - Education KW - Medical KW - methods; Humans; Learning; Pilot Projects; Students KW - Medical; Surveys and Questionnaires; cellular phone; education; emotions; medical; problem-based learning N1 - 1438-8871 Owner: NLM N2 - By adding new levels of experience, mobile Augmented Reality (mAR) can significantly increase the attractiveness of mobile learning applications in medical education. To compare the impact on learners of the heightened realism of a self-developed mAR blended learning environment (mARble) with that of textbook material, especially for ethically sensitive subjects such as forensic medicine, while taking into account basic psychological aspects (usability and a higher level of emotional involvement) as well as learning outcomes (increased learning efficiency). A prestudy was conducted based on a convenience sample of 10 third-year medical students. The initial emotional status was captured using the "Profile of Mood States" questionnaire (POMS, German variation); previous knowledge about forensic medicine was determined using a 10-item single-choice (SC) test. During the 30-minute learning period, the students were randomized into two groups: the first group consisted of pairs of students, each equipped with one iPhone with a preinstalled copy of mARble, while the second group was provided with textbook material. Subsequently, both groups were asked to once again complete the POMS questionnaire and SC test to measure changes in emotional state and knowledge gain. Usability as well as pragmatic and hedonic qualities of the learning material was captured using AttrakDiff2 questionnaires. Data evaluation was conducted anonymously. Descriptive statistics for the score in total and the subgroups were calculated before and after the intervention. The scores of both groups were tested against each other using paired and unpaired signed-rank tests. An item analysis was performed for the SC test to objectify difficulty and selectivity. The mARble group (6/10) showed statistically significantly greater knowledge gain than the control group (4/10) (Wilcoxon z=2.232, P=.03). The item analysis of the SC test showed a difficulty of P=0.768 (s=0.09) and a selectivity of RPB=0.2. For mARble, fatigue (z=2.214, P=.03) and numbness (z=2.07, P=.04) decreased with statistical significance when comparing pre- and post-tests. Vigor rose slightly, while irritability did not increase significantly. Changes in the control group were insignificant. Regarding hedonic quality (identification, stimulation, attractiveness), there were significant differences between mARble (mean 1.179, CI -0.440 to 0.440) and the book chapter (mean -0.982, CI -0.959 to 0.959); the pragmatic quality mean only differed slightly. The mARble group performed considerably better regarding learning efficiency; there are hints that activating components of the mAR concept may serve to fascinate the participants and possibly boost interest in the topic for the remainder of the class. While the small sample size reduces our study's conclusiveness, its design seems appropriate for determining the effects of interactive eLearning material with respect to emotions, learning efficiency, and hedonic and pragmatic qualities using a larger group. German Clinical Trial Register (DRKS), DRKS-ID: DRKS00004685; https://drks-neu.uniklinik-freiburg.de/drks_web/navigate.do?navigationId=trial.HTML&TRIAL_ID=DRKS00004685.
ER - TY - JOUR AU - Albrecht, Urs-Vito AU - Noll, Christoph AU - von Jan, Ute T1 - Explore and experience: mobile augmented reality for medical training. JO - Studies in health technology and informatics Y1 - 2013 VL - 192 SP - 382 EP - 386 KW - Computer-Assisted Instruction KW - methods; Computers KW - Handheld; Curriculum; Education KW - Medical KW - methods; Forensic Medicine KW - education; Germany; Internet; Mobile Applications; Teaching; User-Computer Interface N1 - 1879-8365 Owner: NLM N2 - In medicine, especially in basic education, it may sometimes be inappropriate to integrate real patients into classes due to ethical issues. Nevertheless, the quality of medical education may suffer without the use of real cases. This is especially true of medical specialties such as legal medicine: survivors of a crime are already subjected to procedures that constitute a severe emotional burden and may cause additional distress even without the added presence of students. Using augmented reality-based applications may alleviate this ethical dilemma by giving students the opportunity to practice the necessary skills based on virtual but nevertheless almost realistic cases. The app "mARble®" that is presented in this paper follows this approach. The currently available learning module for legal medicine gives users an opportunity to learn about various wound patterns by virtually overlaying them on their own skin and is applicable in different learning settings. Preliminary evaluation results covering learning efficiency and emotional components of the learning process are promising. Content modules for other medical specialties are currently under construction. ER - TY - JOUR AU - Al-Deen Ashab, Hussam AU - Lessoway, Victoria A. AU - Khallaghi, Siavash AU - Cheng, Alexis AU - Rohling, Robert AU - Abolmaesumi, Purang T1 - An augmented reality system for epidural anesthesia (AREA): prepuncture identification of vertebrae. JO - IEEE transactions on bio-medical engineering Y1 - 2013 VL - 60 SP - 2636 EP - 2644 KW - Algorithms; Anesthesia KW - Epidural KW - methods; Feasibility Studies; Humans; Image Processing KW - Computer-Assisted KW - methods; Lumbar Vertebrae KW - anatomy & histology KW - diagnostic imaging; Reproducibility of Results; Ultrasonography KW - Interventional KW - methods; User-Computer Interface N1 - 1558-2531 Owner: NLM N2 - We propose an augmented reality system to identify lumbar vertebral levels to assist in spinal needle insertion for epidural anesthesia. These procedures require careful placement of a needle to ensure effective delivery of anesthetics and to avoid damaging sensitive tissue such as nerves. In this system, a trinocular camera tracks an ultrasound transducer during the acquisition of a sequence of B-mode images. The system generates an ultrasound panorama image of the lumbar spine, automatically identifies the lumbar levels in the panorama image, and overlays the identified levels on a live camera view of the patient's back. Validation is performed to test the accuracy of panorama generation, lumbar level identification, overall system accuracy, and the effect of changes in the curvature of the spine during the examination. The results from 17 subjects demonstrate the feasibility and capability of achieving an error within a clinically acceptable range for epidural anaesthesia.
ER - TY - JOUR AU - Andersen, Daniel AU - Popescu, Voicu AU - Cabrera, Maria Eugenia AU - Shanghavi, Aditya AU - Gomez, Gerardo AU - Marley, Sherri AU - Mullis, Brian AU - Wachs, Juan P. T1 - Medical telementoring using an augmented reality transparent display. JO - Surgery Y1 - 2016 VL - 159 SP - 1646 EP - 1653 KW - Adolescent; Adult; Clinical Competence; Computer Terminals; Female; Humans; Laparoscopy KW - education; Male; Motor Skills; Telemedicine KW - instrumentation; User-Computer Interface; Young Adult N1 - 1532-7361 Owner: NLM N2 - The goal of this study was to design and implement a novel surgical telementoring system called the System for Telementoring with Augmented Reality (STAR) that uses a virtual transparent display to convey precise locations in the operating field to a trainee surgeon. This system was compared with a conventional system based on a telestrator for surgical instruction. A telementoring system was developed and evaluated in a study which used a 1 × 2 between-subjects design with telementoring system, that is, STAR or conventional, as the independent variable. The participants in the study were 20 premedical or medical students who had no prior experience with telementoring. Each participant completed a task of port placement and a task of abdominal incision under telementoring using either the STAR or the conventional system. The metrics used to test performance when using the system were placement error, number of focus shifts, and time to task completion. When compared with the conventional system, participants using STAR completed the 2 tasks with less placement error (45% and 68%) and with fewer focus shifts (86% and 44%), but more slowly (19% for each task). Using STAR resulted in decreased annotation placement error, fewer focus shifts, but greater times to task completion. STAR placed virtual annotations directly onto the trainee surgeon's field of view of the operating field by conveying location with great accuracy; this technology helped to avoid shifts in focus, decreased depth perception, and enabled fine-tuning execution of the task to match telementored instruction, but led to greater times to task completion. ER - TY - JOUR AU - Andersen, Dan AU - Popescu, Voicu AU - Cabrera, Maria Eugenia AU - Shanghavi, Aditya AU - Mullis, Brian AU - Marley, Sherri AU - Gomez, Gerardo AU - Wachs, Juan P. T1 - An Augmented Reality-Based Approach for Surgical Telementoring in Austere Environments. JO - Military medicine Y1 - 2017 VL - 182 SP - 310 EP - 315 KW - Clinical Competence KW - standards; Humans; Mentoring KW - methods KW - standards; Patient Simulation; Remote Consultation KW - standards; Surgeons KW - standards; Telemedicine KW - standards; Warfare N1 - 1930-613X Owner: NLM N2 - Telementoring can improve treatment of combat trauma injuries by connecting remote experienced surgeons with local less-experienced surgeons in an austere environment. Current surgical telementoring systems force the local surgeon to regularly shift focus away from the operating field to receive expert guidance, which can lead to surgery delays or even errors. The System for Telementoring with Augmented Reality (STAR) integrates expert-created annotations directly into the local surgeon's field of view. The local surgeon views the operating field by looking at a tablet display suspended between the patient and the surgeon that captures video of the surgical field. The remote surgeon remotely adds graphical annotations to the video. 
The annotations are sent back and displayed to the local surgeon while being automatically anchored to the operating field elements they describe. A technical evaluation demonstrates that STAR robustly anchors annotations despite tablet repositioning and occlusions. In a user study, participants used either STAR or a conventional telementoring system to precisely mark locations on a surgical simulator under a remote surgeon's guidance. Participants who used STAR completed the task with fewer focus shifts and with greater accuracy. The STAR reduces the local surgeon's need to shift attention during surgery, allowing him or her to continuously work while looking "through" the tablet screen. ER - TY - JOUR AU - Anderson, Fraser AU - Bischof, Walter F. T1 - Augmented reality improves myoelectric prosthesis training JO - International Journal on Disability and Human Development Y1 - 2014/january VL - 13 IS - 3 ER - TY - JOUR AU - Armstrong, David G. AU - Rankin, Timothy M. AU - Giovinco, Nicholas A. AU - Mills, Joseph L. AU - Matsuoka, Yoky T1 - A heads-up display for diabetic limb salvage surgery: a view through the Google Looking Glass. JO - Journal of diabetes science and technology Y1 - 2014 VL - 8 SP - 951 EP - 956 KW - Diabetes Complications; Diabetes Mellitus; Humans; Limb Salvage KW - methods; Surgery KW - Computer-Assisted KW - instrumentation; Telemedicine KW - instrumentation; User-Computer Interface; augmented reality; diabetes; heads-up display; limb salvage; telemedicine N1 - 1932-2968 Owner: NLM N2 - Although the use of augmented reality has been well described over the past several years, available devices suffer from high cost, an uncomfortable form factor, and suboptimal battery life, and lack an app-based developer ecosystem. This article describes the potential use of a novel, consumer-based, wearable device to assist surgeons in real time during limb preservation surgery and clinical consultation. Using routine intraoperative, clinical, and educational case examples, we describe the use of a wearable augmented reality device (Google Glass; Google, Mountain View, CA). The device facilitated hands-free, rapid communication, documentation, and consultation. An eyeglass-mounted screen form factor has the potential to improve communication, safety, and efficiency of intraoperative and clinical care. We believe this represents a natural progression toward union of medical devices with consumer technology. ER - TY - CONF AU - Ashab, H. A. AU - Lessoway, V. A. AU - Khallaghi, S. AU - Cheng, A. AU - Rohling, R. AU - Abolmaesumi, P. T1 - AREA: An augmented reality system for epidural anaesthesia PB - IEEE Y1 - 2012/august ER - TY - JOUR AU - Assis, Gilda Aparecida de AU - Corrêa, Ana Grasielle Dionísio AU - Martins, Maria Bernardete Rodrigues AU - Pedrozo, Wendel Goes AU - Lopes, Roseli de Deus T1 - An augmented reality system for upper-limb post-stroke motor rehabilitation: a feasibility study. JO - Disability and rehabilitation. Assistive technology Y1 - 2016 VL - 11 SP - 521 EP - 528 KW - Aged; Exercise Therapy KW - instrumentation KW - methods; Feasibility Studies; Female; Humans; Male; Middle Aged; Range of Motion KW - Articular; Recovery of Function; Shoulder KW - physiology; Stroke Rehabilitation KW - methods; Virtual Reality; Augmented reality; motor rehabilitation; myoelectric control; stroke; upper-limb N1 - 1748-3115 Owner: NLM N2 - To determine the clinical feasibility of a system based on augmented reality for upper-limb (UL) motor rehabilitation of stroke participants.
A physiotherapist instructed the participants to accomplish tasks in an augmented reality environment, where they could see themselves and their surroundings, as in a mirror. Two case studies were conducted. Participants were evaluated pre- and post-intervention. The first study evaluated UL motor function using the Fugl-Meyer scale. Data were compared using non-parametric sign tests and effect size. The second study evaluated the gain in range of motion of shoulder flexion and abduction, assessed by computerized biophotogrammetry. At a significance level of 5%, Fugl-Meyer scores suggested a trend for greater UL motor improvement in the augmented reality group than in the other group. Moreover, an effect size of 0.86 suggested high practical significance for UL motor rehabilitation using the augmented reality system. The system provided promising results for UL motor rehabilitation, since enhancements were observed in shoulder range of motion and speed. Implications for Rehabilitation: Gain of range of motion of flexion and abduction of the shoulder of post-stroke patients can be achieved through an augmented reality system containing exercises to promote mental practice. The NeuroR system provides a mental practice method combined with visual feedback for motor rehabilitation of chronic stroke patients, giving the illusion of injured upper-limb (UL) movements while the affected UL is resting. Its application is feasible and safe. This system can be used to improve UL rehabilitation as an additional treatment beyond the traditional period of stroke patient hospitalization and rehabilitation. ER - TY - JOUR AU - Atallah, Sam AU - Larach, Sergio W. AU - Monson, John R. T. T1 - Stereotactic navigation for TAMIS-TME. JO - Minimally invasive therapy & allied technologies : MITAT : official journal of the Society for Minimally Invasive Therapy Y1 - 2016 VL - 25 SP - 271 EP - 277 KW - Humans; Laparoscopy KW - methods; Rectal Neoplasms KW - surgery; Stereotaxic Techniques; Surgery KW - Computer-Assisted KW - methods; Transanal Endoscopic Surgery KW - methods; TAMIS; augmented reality; image-guided surgery; navigation; rectal cancer; taTME N1 - 1365-2931 Owner: NLM N2 - Stereotactic navigation allows for real-time, image-guided surgery, thus providing an augmented working environment for the operator. This technique can be applied to complex minimally invasive surgery for fixed anatomic targets. Transanal minimally invasive surgery represents a new approach to rectal cancer surgery that is technically demanding and introduces the potential for procedure-specific morbidity. Feasibility of stereotactic navigation for TAMIS-TME has been demonstrated, and this could theoretically translate into improved resection quality by improving the surgeon's spatial awareness. The future of minimally invasive surgery as it relates to augmented reality and image-guided surgery is discussed. ER - TY - JOUR AU - Aung, Yee Mon AU - Al-Jumaily, Adel AU - Anam, Khairul T1 - A novel upper limb rehabilitation system with self-driven virtual arm illusion. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society.
Annual Conference Y1 - 2014 VL - 2014 SP - 3614 EP - 3617 KW - Arm KW - physiology; Humans; Illusions; Recovery of Function; Rehabilitation KW - methods; Stroke KW - physiopathology; Stroke Rehabilitation; User-Computer Interface N1 - 1557-170X Owner: NLM N2 - This paper proposes a novel upper extremity rehabilitation system with virtual arm illusion. It aims to provide paralyzed patients with a novel rehabilitation system that promotes fast recovery of upper-limb functions lost as a result of stroke. The system is integrated with a number of technologies that include Augmented Reality (AR) technology to develop game-like exercises, computer vision technology to create the illusion scene, 3D modeling and model simulation, and signal processing to detect user intention via EMG signal. The effectiveness of the developed system was evaluated via a usability study and questionnaires, with results presented using graphical and analytical methods. The evaluation yielded positive results, indicating that the developed system has potential as an effective rehabilitation system for upper-limb impairment. ER - TY - JOUR AU - Azagury, D. E. AU - Ryou, M. AU - Shaikh, S. N. AU - San José Estépar, R. AU - Lengyel, B. I. AU - Jagadeesan, J. AU - Vosburgh, K. G. AU - Thompson, C. C. T1 - Real-time computed tomography-based augmented reality for natural orifice transluminal endoscopic surgery navigation. JO - The British journal of surgery Y1 - 2012 VL - 99 SP - 1246 EP - 1253 KW - Abdominal Wall KW - anatomy & histology; Adult; Cadaver; Computer Simulation; Computer Systems; Digestive System KW - anatomy & histology; Female; Humans; Male; Natural Orifice Endoscopic Surgery KW - methods KW - standards; Pelvic Floor KW - anatomy & histology; Tomography KW - X-Ray Computed KW - standards N1 - 1365-2168 Owner: NLM N2 - Natural orifice transluminal endoscopic surgery (NOTES) is technically challenging owing to short-sighted endoscopic visualization, excessive scope flexibility and lack of adequate instrumentation. Augmented reality may overcome these difficulties. This study tested whether an image registration system for NOTES procedures (IR-NOTES) can facilitate navigation. In three human cadavers 15 intra-abdominal organs were targeted endoscopically with and without IR-NOTES via both transgastric and transcolonic routes, by three endoscopists with different levels of expertise. Ease of navigation was evaluated objectively by kinematic analysis, and navigation complexity was determined by creating an organ access complexity score based on the same data. Without IR-NOTES, 21 (11·7 per cent) of 180 targets were not reached (expert endoscopist 3, advanced 7, intermediate 11), compared with one (1 per cent) of 90 with IR-NOTES (intermediate endoscopist) (P = 0·002). Endoscope movements were significantly less complex in eight of the 15 listed organs when using IR-NOTES. The most complex areas to access were the pelvis and left upper quadrant, independently of the access route. The most difficult organs to access were the spleen (5 failed attempts; 3 of 7 kinematic variables significantly improved) and rectum (4 failed attempts; 5 of 7 kinematic variables significantly improved). The time needed to access the rectum through a transgastric approach was 206·3 s without and 54·9 s with IR-NOTES (P = 0·027). The IR-NOTES system enhanced both navigation efficacy and ease of intra-abdominal NOTES exploration for operators of all levels.
The system rendered some organs accessible to non-expert operators, thereby reducing one impediment to NOTES procedures. ER - TY - JOUR AU - Badiali, Giovanni AU - Ferrari, Vincenzo AU - Cutolo, Fabrizio AU - Freschi, Cinzia AU - Caramella, Davide AU - Bianchi, Alberto AU - Marchetti, Claudio T1 - Augmented reality as an aid in maxillofacial surgery: validation of a wearable system allowing maxillary repositioning. JO - Journal of cranio-maxillo-facial surgery : official publication of the European Association for Cranio-Maxillo-Facial Surgery Y1 - 2014 VL - 42 SP - 1970 EP - 1976 KW - Cadaver; Clinical Competence; Data Display; Equipment Design; Feasibility Studies; Feedback; Humans; Image Processing KW - Computer-Assisted KW - methods; Maxilla KW - surgery; Models KW - Anatomic; Observer Variation; Osteotomy KW - Le Fort KW - methods; Patient Care Planning; Photography KW - instrumentation; Printing KW - Three-Dimensional; Surgery KW - instrumentation KW - methods; Tomography KW - X-Ray Computed KW - methods; Treatment Outcome; User-Computer Interface; Video Recording KW - instrumentation; Augmented reality; Computer-assisted surgery; Image-guided surgery; Maxillofacial abnormalities; Maxillofacial orthognathic surgery N1 - 1878-4119 Owner: NLM N2 - We present a newly designed, localiser-free, head-mounted system featuring augmented reality as an aid to maxillofacial bone surgery, and assess the potential utility of the device by conducting a feasibility study and validation. Our head-mounted wearable system facilitating augmented surgery was developed as a stand-alone, video-based, see-through device in which the visual features were adapted to facilitate maxillofacial bone surgery. We implemented a strategy designed to present augmented reality information to the operating surgeon. LeFort1 osteotomy was chosen as the test procedure. The system is designed to display the virtual surgical plan overlaid on the real patient. We implemented a method allowing performance of waferless, augmented-reality-assisted bone repositioning. In vitro testing was conducted on a physical replica of a human skull, and the augmented reality system was used to perform LeFort1 maxillary repositioning. Surgical accuracy was measured with the aid of an optical navigation system that recorded the coordinates of three reference points (located in anterior, posterior right, and posterior left positions) on the repositioned maxilla. The outcomes were compared with those expected to be achievable in a three-dimensional environment. Data were derived using three levels of surgical planning, of increasing complexity, and for nine different operators with varying levels of surgical skill. The mean error was 1.70 ± 0.51 mm. The axial errors were 0.89 ± 0.54 mm on the sagittal axis, 0.60 ± 0.20 mm on the frontal axis, and 1.06 ± 0.40 mm on the craniocaudal axis. The simplest plan was associated with a slightly lower mean error (1.58 ± 0.37 mm) compared with the more complex plans (medium: 1.82 ± 0.71 mm; difficult: 1.70 ± 0.45 mm). The mean error for the anterior reference point was lower (1.33 ± 0.58 mm) than those for both the posterior right (1.72 ± 0.24 mm) and posterior left points (2.05 ± 0.47 mm). No significant difference in terms of error was noticed among operators, despite variations in surgical experience. Feedback from surgeons was acceptable; all tests were completed within 15 min and the tool was considered to be both comfortable and usable in practice.
We used a new localiser-free, head-mounted, wearable, stereoscopic, video see-through display to develop a useful strategy affording surgeons access to augmented reality information. Our device appears to be accurate when used to assist in waferless maxillary repositioning. Our results suggest that the method can potentially be extended for use with many surgical procedures on the facial skeleton. Further, our positive results suggest that it would be appropriate to proceed to in vivo testing to assess surgical accuracy under real clinical conditions. ER - TY - JOUR AU - Bai, Zhen AU - Blackwell, Alan F. AU - Coulouris, George T1 - Using Augmented Reality to Elicit Pretend Play for Children with Autism. JO - IEEE transactions on visualization and computer graphics Y1 - 2015/05 VL - 21 SP - 598 EP - 610 KW - Asperger Syndrome KW - therapy; Autistic Disorder KW - therapy; Child; Child KW - Preschool; Computer Graphics; Female; Humans; Male; Play Therapy KW - methods; Virtual Reality Exposure Therapy KW - methods N1 - 1941-0506 Owner: NLM N2 - Children with autism spectrum condition (ASC) suffer from deficits or developmental delays in symbolic thinking. In particular, they are often found lacking in pretend play during early childhood. Researchers believe that they encounter difficulty in generating and maintaining mental representation of pretense coupled with the immediate reality. We have developed an interactive system that explores the potential of Augmented Reality (AR) technology to visually conceptualize the representation of pretense within an open-ended play environment. Results from an empirical study involving children with ASC aged 4 to 7 demonstrated a significant improvement of pretend play in terms of frequency, duration and relevance using the AR system in comparison to a non computer-assisted situation. We investigated individual differences, skill transfer, system usability and limitations of the proposed AR system. We discuss design guidelines for future AR systems for children with ASC and other pervasive developmental disorders. ER - TY - CONF AU - Baum, Zachary AU - Ungi, Tamas AU - Lasso, Andras AU - Fichtinger, Gabor AU - Webster, Robert J. AU - Fei, Baowei T1 - Usability of a real-time tracked augmented reality display system in musculoskeletal injections PB - SPIE Y1 - 2017/march ER - TY - JOUR AU - Belhaj Soulami, Réda AU - Verhoye, Jean-Philippe AU - Nguyen Duc, Hung AU - Castro, Miguel AU - Auffret, Vincent AU - Anselmi, Amedeo AU - Haigron, Pascal AU - Ruggieri, Vito Giovanni T1 - Computer-Assisted Transcatheter Heart Valve Implantation in Valve-in-Valve Procedures. JO - Innovations (Philadelphia, Pa.) Y1 - 2016 VL - 11 SP - 193 EP - 200 KW - Adult; Aged; Aged KW - 80 and over; Bioprosthesis; Electrocardiography; Feasibility Studies; Female; Heart Valve Diseases KW - diagnostic imaging KW - surgery; Heart Valve Prosthesis; Humans; Male; Middle Aged; Prosthesis Design; Reproducibility of Results; Retrospective Studies; Surgery KW - Computer-Assisted KW - adverse effects KW - methods; Tomography KW - X-Ray Computed KW - methods; Transcatheter Aortic Valve Replacement KW - instrumentation KW - methods N1 - 1559-0879 Owner: NLM N2 - Valve-in-valve (ViV) procedures are increasingly being considered as an alternative to redo surgery for the treatment of degenerated bioprosthetic heart valves in patients with excessive reoperative risk. 
The objective of our study was to evaluate the feasibility of computer guidance in transcatheter heart valve (THV) implantation during ViV procedures. Preprocedural electrocardiogram-gated computed tomography-scan images were processed using semiautomatic segmentation of the degenerated bioprosthesis' radiopaque landmarks and of the ascending aorta. Virtual three-dimensional (3D) reconstructions were created. A virtual plane was subsequently added to the 3D reconstructions, indicating the optimal landing plane of the THV inside the tissue valve. Within a hybrid operating theater, a 3D/2D registration was used to superimpose the 3D reconstructions, while dynamic tracking was allowed to maintain the superimposition onto the fluoroscopic images. The THV was afterward implanted according to the optimal landing plane. Projection of the ascending aorta and the coronary arteries was used to assess the risk of coronary ostia obstruction. Between January 2014 and October 2014, nine patients underwent aortic ViV procedures in our institution. Among those nine patients, five procedures were retrospectively evaluated as a validation step using the proposed method. The mean (SD) superimposition error was 1.1 (0.75) mm. Subsequently, two live cases were prospectively carried out using our approach, successfully implanting the THV inside the degenerated tissue valve. Our study demonstrates the feasibility of a computer-guided implantation of THV in ViV procedures. Moreover, it suggests that augmented reality may increase the reliability of THV implantation inside degenerated bioprostheses through better reproducibility. ER - TY - JOUR AU - Bennour, Sami AU - Ulrich, Baptiste AU - Legrand, Thomas AU - Jolles, Brigitte M. AU - Favre, Julien T1 - A gait retraining system using augmented-reality to modify footprint parameters: Effects on lower-limb sagittal-plane kinematics JO - Journal of Biomechanics Y1 - 2018/january VL - 66 SP - 26 EP - 35 ER - TY - JOUR AU - van den Berg, Nynke S. AU - Engelen, Thijs AU - Brouwer, Oscar R. AU - Mathéron, Hanna M. AU - Valdés-Olmos, Renato A. AU - Nieweg, Omgo E. AU - van Leeuwen, Fijs W. B. T1 - A pilot study of SPECT/CT-based mixed-reality navigation towards the sentinel node in patients with melanoma or Merkel cell carcinoma of a lower extremity JO - Nuclear Medicine Communications Y1 - 2016/august VL - 37 IS - 8 SP - 812 EP - 817 ER - TY - CONF AU - Bergmeier, Jan AU - Kundrat, Dennis AU - Schoob, Andreas AU - Kahrs, Lüder A. AU - Ortmaier, Tobias AU - Webster, Robert J. AU - Yaniv, Ziv R. T1 - Methods for a fusion of optical coherence tomography and stereo camera image data PB - SPIE Y1 - 2015/march ER - TY - JOUR AU - Bernhardt, Sylvain AU - Nicolau, Stéphane A. AU - Agnus, Vincent AU - Soler, Luc AU - Doignon, Christophe AU - Marescaux, Jacques T1 - Automatic localization of endoscope in intraoperative CT image: A simple approach to augmented reality guidance in laparoscopic surgery. 
JO - Medical image analysis Y1 - 2016/05 VL - 30 SP - 130 EP - 143 KW - Endoscopes; Equipment Design; Equipment Failure Analysis; Humans; Intraoperative Care KW - instrumentation KW - methods; Laparoscopy KW - methods; Phantoms KW - Imaging; Reproducibility of Results; Sensitivity and Specificity; Surgery KW - Computer-Assisted KW - methods; Tomography KW - X-Ray Computed KW - methods; User-Computer Interface; Augmented reality; C-arm; Intraoperative CT; Registration N1 - 1361-8423 Owner: NLM N2 - The use of augmented reality in minimally invasive surgery has been the subject of much research for more than a decade. The endoscopic view of the surgical scene is typically augmented with a 3D model extracted from a preoperative acquisition. However, the organs of interest often present major changes in shape and location because of the pneumoperitoneum and patient displacement. There have been numerous attempts to compensate for this distortion between the pre- and intraoperative states. Some have attempted to recover the visible surface of the organ through image analysis and register it to the preoperative data, but this has proven insufficiently robust and may be problematic with large organs. A second approach is to introduce an intraoperative 3D imaging system as a transition. Hybrid operating rooms are becoming more and more popular, so this seems to be a viable solution, but current techniques require yet another external and constraining piece of apparatus such as an optical tracking system to determine the relationship between the intraoperative images and the endoscopic view. In this article, we propose a new approach to automatically register the reconstruction from an intraoperative CT acquisition with the static endoscopic view, by locating the endoscope tip in the volume data. We first describe our method to localize the endoscope orientation in the intraoperative image using standard image processing algorithms. Secondly, we highlight that the axis of the endoscope needs a specific calibration process to ensure proper registration accuracy. In the last section, we present quantitative and qualitative results proving the feasibility and the clinical potential of our approach. ER - TY - JOUR AU - Bifulco, Paolo AU - Narducci, Fabio AU - Vertucci, Raffaele AU - Ambruosi, Pasquale AU - Cesarelli, Mario AU - Romano, Maria T1 - Telemedicine supported by Augmented Reality: an interactive guide for untrained people in performing an ECG test. JO - Biomedical engineering online Y1 - 2014 VL - 13 SP - 153 EP - 153 KW - Adult; Calibration; Cell Phone; Computer Simulation; Electrocardiography KW - methods; Electrodes; Equipment Design; Female; Humans; Imaging KW - Three-Dimensional; Male; Reproducibility of Results; Signal Processing KW - Computer-Assisted; Software; Telemedicine KW - instrumentation KW - methods N1 - 1475-925X Owner: NLM N2 - In many telemedicine applications, the correct use of medical devices at the point of need is essential to provide an appropriate service. Some applications may require untrained people to interact with medical devices and patients: care delivery in transportation, military actions, home care and telemedicine training. Appropriate operation of medical devices and correct connection with the patient's body are crucial. In these scenarios, tailored applications of Augmented Reality can offer valuable support by guiding untrained people at the point of need.
This study aims to explore the feasibility of using Augmented Reality in telemedicine applications, by facilitating acceptable use of biomedical equipment by unskilled persons. In particular, a prototype system was built in order to estimate how untrained users, with limited or no knowledge, can effectively interact with an ECG device and properly place ECG electrodes on a patient's chest. An Augmented Reality application was built to support untrained users in performing an ECG test. Simple markers attached to the ECG device and to the patient's thorax allow camera calibration. Once objects and their pose in the space are recognized, the video of the current scene is enriched, in real-time, with additional pointers, text boxes and audio that help the untrained operator to perform the appropriate sequence of operations. All the buttons, switches, ports of the ECG device together with the location of precordial leads were coded and indicated. Voice commands were also included to improve usability. Ten untrained volunteers, supported by the augmented reality application, were able to carry out a complete ECG test first on a mannequin and then on a real patient in a reasonable time (about 8 minutes on average). Average positioning errors of precordial electrodes were less than 3 mm for the mannequin and less than 7 mm for the real patient. These preliminary findings suggest the effectiveness of the developed application and the validity of clinical ECG recordings. This application can be adapted to support the use of other medical equipment as well as other telemedicine tasks, and it could be run on a tablet or a smartphone. ER - TY - JOUR AU - Birnbaum, Faith A. AU - Hackley, Steven A. AU - Johnson, Lenworth N. T1 - Enhancing visual performance in individuals with cortical visual impairment (homonymous hemianopsia): Tapping into blindsight JO - Journal of Medical Hypotheses and Ideas Y1 - 2015/december VL - 9 IS - 2 SP - S8 EP - S13 ER - TY - CONF AU - Bodenstedt, S. AU - Reichard, D. AU - Suwelack, S. AU - Wagner, M. AU - Kenngott, H. AU - Müller-Stich, B. AU - Dillmann, R. AU - Speidel, S. AU - Webster, Robert J. AU - Yaniv, Ziv R. T1 - Intraoperative on-the-fly organ-mosaicking for laparoscopic surgery PB - SPIE Y1 - 2015/march ER - TY - JOUR AU - Borgmann, H. AU - Rodríguez Socarrás, M. AU - Salem, J. AU - Tsaur, I. AU - Gomez Rivas, J. AU - Barret, E. AU - Tortolero, L. T1 - Feasibility and safety of augmented reality-assisted urological surgery using smartglass. JO - World journal of urology Y1 - 2017 VL - 35 SP - 967 EP - 972 KW - Clinical Competence; Feasibility Studies; Humans; Internship and Residency; Optical Devices KW - statistics & numerical data; Patient Safety; Treatment Outcome; Urologic Surgical Procedures KW - methods; Urologists; Urology KW - education; Video Recording; Virtual Reality Exposure Therapy KW - instrumentation KW - methods; Google Glass; Surgical training; Technology; Urology; Wearables N1 - 1433-8726 Owner: NLM N2 - To assess the feasibility, safety and usefulness of augmented reality-assisted urological surgery using smartglass (SG). Seven urological surgeons (3 board urologists and 4 urology residents) performed augmented reality-assisted urological surgery using SG for 10 different types of operations and a total of 31 urological operations. Feasibility was assessed using technical metadata (number of photographs taken/number of videos recorded/video time recorded) and structured interviews with the urologists on their use of SG.
Safety was evaluated by recording complications and grading according to the Clavien-Dindo classification. Usefulness of SG for urological surgery was queried in structured interviews and in a survey. The implementation of SG use during urological surgery was feasible with no intrinsic (technical defect) or extrinsic (inability to control the SG function) obstacles being observed. SG use was safe as no grade 3-5 complications occurred for the series of 31 urological surgeries of different complexities. Technical applications of SG included taking photographs/recording videos for teaching and documentation, hands-free teleconsultation, reviewing patients' medical records and images and searching the internet for health information. Overall usefulness of SG for urological surgery was rated as very high by 43 % and high by 29 % of surgeons. Augmented reality-assisted urological surgery using SG is both feasible and safe and also provides several useful functions for urological surgeons. Further developments and investigations are required in the near future to harness the great potential of this exciting technology for urological surgery. ER - TY - CONF AU - Boschmann, Alexander AU - Dosen, Strahinja AU - Werner, Andreas AU - Raies, Ali AU - Farina, Dario T1 - A novel immersive augmented reality system for prosthesis training and assessment PB - IEEE Y1 - 2016/february ER - TY - JOUR AU - Botella, Cristina AU - Pérez-Ara, M. Ángeles AU - Bretón-López, Juana AU - Quero, Soledad AU - García-Palacios, Azucena AU - Baños, Rosa María T1 - In Vivo versus Augmented Reality Exposure in the Treatment of Small Animal Phobia: A Randomized Controlled Trial. JO - PloS one Y1 - 2016 VL - 11 SP - e0148237 EP - e0148237 KW - Adult; Aged; Animals; Cockroaches; Female; Follow-Up Studies; Humans; Intention to Treat Analysis; Male; Middle Aged; Phobic Disorders KW - diagnosis KW - therapy; Spiders; Treatment Outcome; Virtual Reality Exposure Therapy; Young Adult N1 - 1932-6203 Owner: NLM N2 - Although in vivo exposure is the treatment of choice for specific phobias, some acceptability problems have been associated with it. Virtual Reality exposure has been shown to be as effective as in vivo exposure, and it is widely accepted for the treatment of specific phobias, but only preliminary data are available in the literature about the efficacy of Augmented Reality. The purpose of the present study was to examine the efficacy and acceptance of two treatment conditions for specific phobias in which the exposure component was applied in different ways: In vivo exposure (N = 31) versus an Augmented Reality system (N = 32) in a randomized controlled trial. "One-session treatment" guidelines were followed. Participants in the Augmented Reality condition significantly improved on all the outcome measures at post-treatment and follow-ups. When the two treatment conditions were compared, some differences were found at post-treatment, favoring the participants who received in vivo exposure. However, these differences disappeared at the 3- and 6-month follow-ups. Regarding participants' expectations and satisfaction with the treatment, very positive ratings were reported in both conditions. In addition, participants from the in vivo exposure condition considered the treatment more useful for their problem, whereas participants from the Augmented Reality exposure condition considered the treatment less aversive.
Results obtained in this study indicate that Augmented Reality exposure is an effective treatment for specific phobias and is well accepted by the participants. ER - TY - JOUR AU - Bourdel, Nicolas AU - Collins, Toby AU - Pizarro, Daniel AU - Bartoli, Adrien AU - Da Ines, David AU - Perreira, Bruno AU - Canis, Michel T1 - Augmented reality in gynecologic surgery: evaluation of potential benefits for myomectomy in an experimental uterine model. JO - Surgical endoscopy Y1 - 2017 VL - 31 SP - 456 EP - 461 KW - Female; Gynecologic Surgical Procedures KW - methods; Gynecology KW - education; Humans; Internship and Residency; Laparoscopy KW - methods; Leiomyoma KW - diagnostic imaging KW - surgery; Magnetic Resonance Imaging; Models KW - Anatomic; Software; Surgery KW - Computer-Assisted KW - methods; User-Computer Interface; Uterine Myomectomy KW - methods; Uterine Neoplasms KW - surgery; Augmented Reality; Gynecologic surgery; Laparoscopy; MRI; Myomectomy N1 - 1432-2218 Owner: NLM N2 - Augmented Reality (AR) is a technology that can allow a surgeon to see subsurface structures. This works by overlaying information from another modality, such as MRI, and fusing it in real time with the endoscopic images. AR has never been developed for a very mobile organ like the uterus and has never been used in gynecology. Myomas are not always easy to localize in laparoscopic surgery when they do not significantly change the surface of the uterus, or are at multiple locations. To study the accuracy of myoma localization using a new AR system compared to MRI-only localization. Ten residents were asked to localize six myomas (on a uterine model in a laparoscopic box) when either using AR or in conditions that simulate a standard method (only the MRI was available). Myomas were randomly divided into two groups: the control group (MRI only, AR not activated) and the AR group (AR activated). Software was used to automatically measure the distance between the point of contact on the uterine surface and the myoma. We compared these distances to the true shortest distance to obtain accuracy measures. The time taken to perform the task was measured, and an assessment of the complexity was performed. The mean accuracy in the control group was 16.80 mm [0.1-52.2] versus 0.64 mm [0.01-4.71] with AR. In the control group, the mean time to perform the task was 18.68 [6.4-47.1] s compared to 19.6 [3.9-77.5] s with AR. The mean score of difficulty (evaluated for each myoma) was 2.36 [1-4] versus 0.87 [0-4], respectively, for the control and the AR group. We developed an AR system for a very mobile organ. This is the first user study to quantitatively evaluate an AR system for improving a surgical task. In our model, AR improves localization accuracy.
JO - Fertility and sterility Y1 - 2017 VL - 107 SP - 737 EP - 739 KW - Adult; Female; Humans; Image Interpretation KW - Computer-Assisted; Imaging KW - Three-Dimensional; Laparoscopes; Laparoscopy KW - adverse effects KW - instrumentation; Leiomyoma KW - pathology KW - surgery; Leiomyomatosis KW - surgery; Magnetic Resonance Imaging; Predictive Value of Tests; Surgery KW - Computer-Assisted KW - instrumentation; Tumor Burden; Uterine Myomectomy KW - instrumentation KW - methods; Uterine Neoplasms KW - surgery; Gynecologic surgery; MRI; augmented reality; laparoscopy; myomectomy N1 - 1556-5653 Owner: NLM N2 - To report the use of augmented reality (AR) in gynecology. AR is a surgical guidance technology that enables important hidden surface structures to be visualized in endoscopic images. AR has been used for other organs, but never in gynecology and never with a very mobile organ like the uterus. We have developed a new AR approach specifically for uterine surgery and demonstrated its use for myomectomy. Tertiary university hospital. Three patients with one, two, and multiple myomas, respectively. AR was used during laparoscopy to localize the myomas. Three-dimensional (3D) models of the patient's uterus and myomas were constructed before surgery from T2-weighted magnetic resonance imaging. The intraoperative 3D shape of the uterus was determined. These models were automatically aligned and "fused" with the laparoscopic video in real time. The live fused video made the uterus appear semitransparent, and the surgeon can see the location of the myoma in real time while moving the laparoscope and the uterus. With this information, the surgeon can easily and quickly decide on how best to access the myoma. We developed an AR system for gynecologic surgery and have used it to improve laparoscopic myomectomy. Technically, the software we developed is very different to approaches tried for other organs, and it can handle significant challenges, including image blur, fast motion, and partial views of the organ. ER - TY - JOUR AU - Bruellmann, D. D. AU - Tjaden, H. AU - Schwanecke, U. AU - Barth, P. T1 - An optimized video system for augmented reality in endodontics: a feasibility study. JO - Clinical oral investigations Y1 - 2013 VL - 17 SP - 441 EP - 448 KW - Algorithms; Bicuspid KW - anatomy & histology; Color; Computer Graphics; Dental Pulp Cavity KW - anatomy & histology; Endodontics; False Negative Reactions; False Positive Reactions; Feasibility Studies; Humans; Image Processing KW - Computer-Assisted KW - methods; Incisor KW - anatomy & histology; Information Storage and Retrieval; Knowledge Bases; Molar KW - anatomy & histology; Sensitivity and Specificity; Software; User-Computer Interface; Video Recording KW - methods N1 - 1436-3771 Owner: NLM N2 - We propose an augmented reality system for the reliable detection of root canals in video sequences based on a k-nearest neighbor color classification and introduce a simple geometric criterion for teeth. The new software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices by using a k-nearest neighbor algorithm. The location of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. 
The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected. The overall sensitivity was about 94 %. Classification accuracy for molars ranged from 65.0 to 81.2 % and from 85.7 to 96.7 % for premolars. The software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification. Automatic storage of the location, size, and orientation of the found structures with this software can be used for future anatomical studies. Thus, statistical tables with canal locations can be derived, which may improve anatomical knowledge of the teeth and facilitate root canal detection in the future. For this purpose, the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/. ER - TY - JOUR AU - Buchs, Nicolas C. AU - Volonte, Francesco AU - Pugin, François AU - Toso, Christian AU - Fusaglia, Matteo AU - Gavaghan, Kate AU - Majno, Pietro E. AU - Peterhans, Matthias AU - Weber, Stefan AU - Morel, Philippe T1 - Augmented environments for the targeting of hepatic lesions during image-guided robotic liver surgery. JO - The Journal of surgical research Y1 - 2013 VL - 184 SP - 825 EP - 831 KW - Aged; Aged KW - 80 and over; Carcinoma KW - Hepatocellular KW - etiology KW - surgery; Endoscopy; Female; Humans; Imaging KW - Three-Dimensional KW - methods; Liver KW - surgery; Liver Cirrhosis KW - complications; Liver Neoplasms KW - surgery; Male; Minimally Invasive Surgical Procedures; Pilot Projects; Robotics; Stereotaxic Techniques; Surgery KW - Computer-Assisted KW - instrumentation KW - methods; Treatment Outcome; Computer surgery; Hepatic mass; Liver resection; Robotic; Virtual reality N1 - 1095-8673 Owner: NLM N2 - Stereotactic navigation technology can enhance guidance during surgery and enable the precise reproduction of planned surgical strategies. Currently, specific systems (such as the CAS-One system) are available for instrument guidance in open liver surgery. This study aims to evaluate the implementation of such a system for the targeting of hepatic tumors during robotic liver surgery. Optical tracking references were attached to one of the robotic instruments and to the robotic endoscopic camera. After instrument and video calibration and patient-to-image registration, a virtual model of the tracked instrument and the available three-dimensional images of the liver were displayed directly within the robotic console, superimposed onto the endoscopic video image. An additional superimposed targeting viewer allowed for the visualization of the target tumor relative to the tip of the instrument, for an assessment of the distance between the tumor and the tool to achieve safe resection margins. Two cirrhotic patients underwent robotic navigated atypical hepatic resections for hepatocellular carcinoma. The augmented endoscopic view allowed for the definition of an accurate resection margin around the tumor. The overlay of reconstructed three-dimensional models was also used during parenchymal transection for the identification of vascular and biliary structures. Operative times were 240 min in the first case and 300 min in the second. There were no intraoperative complications. The da Vinci Surgical System provided an excellent platform for image-guided liver surgery with a stable optic and instrumentation.
Robotic image guidance might improve the surgeon's orientation during the operation and increase accuracy in tumor resection. Further developments of this technological combination are needed to deal with organ deformation during surgery. ER - TY - CONF AU - Byrnes, Patrick D. AU - Higgins, William E. AU - Yaniv, Ziv R. AU - Holmes, David R. T1 - Construction of a multimodal CT-video chest model PB - SPIE Y1 - 2014/march ER - TY - JOUR AU - Cabrilo, Ivan AU - Bijlenga, Philippe AU - Schaller, Karl T1 - Augmented reality in the surgery of cerebral aneurysms: a technical report. JO - Neurosurgery Y1 - 2014 VL - 10 Suppl 2 SP - 252 EP - 260 KW - Adult; Aged; Angiography KW - Digital Subtraction; Female; Humans; Imaging KW - Three-Dimensional; Intracranial Aneurysm KW - radiotherapy KW - surgery; Magnetic Resonance Imaging; Male; Middle Aged; Neuronavigation KW - methods; Neurosurgical Procedures KW - methods; Retrospective Studies; Tomography Scanners KW - X-Ray Computed; User-Computer Interface; Vascular Surgical Procedures N1 - 1524-4040 Owner: NLM N2 - Augmented reality is the overlay of computer-generated images on real-world structures. It has previously been used for image guidance during surgical procedures, but it has never been used in the surgery of cerebral aneurysms. To report our experience of cerebral aneurysm surgery aided by augmented reality. Twenty-eight patients with 39 unruptured aneurysms were operated on in a prospective manner with augmented reality. Preoperative 3-dimensional image data sets (angio-magnetic resonance imaging, angio-computed tomography, and 3-dimensional digital subtraction angiography) were used to create virtual segmentations of patients' vessels, aneurysms, aneurysm necks, skulls, and heads. These images were injected intraoperatively into the eyepiece of the operating microscope. An example case of an unruptured posterior communicating artery aneurysm clipping is illustrated in a video. The described operating procedure allowed continuous monitoring of the accuracy of patient registration with neuronavigation data and assisted in the performance of tailored surgical approaches and optimal clipping with minimized exposure. Augmented reality may add to the performance of a minimally invasive approach, although further studies need to be performed to evaluate whether certain groups of aneurysms are more likely to benefit from it. Further technological development is required to improve its user friendliness. ER - TY - JOUR AU - Cabrilo, Ivan AU - Bijlenga, Philippe AU - Schaller, Karl T1 - Augmented reality in the surgery of cerebral arteriovenous malformations: technique assessment and considerations. JO - Acta neurochirurgica Y1 - 2014 VL - 156 SP - 1769 EP - 1774 KW - Adult; Angiography KW - Digital Subtraction KW - instrumentation; Cerebral Angiography KW - instrumentation; Craniotomy KW - instrumentation; Female; Humans; Imaging KW - Three-Dimensional KW - instrumentation; Intracranial Arteriovenous Malformations KW - diagnosis KW - surgery; Male; Microsurgery KW - instrumentation; Middle Aged; Minimally Invasive Surgical Procedures KW - instrumentation; Neuronavigation KW - instrumentation; User-Computer Interface; Young Adult N1 - 0942-0940 Owner: NLM N2 - Augmented reality technology has been used for intraoperative image guidance through the overlay of virtual images, from preoperative imaging studies, onto the real-world surgical field.
Although setups based on augmented reality have been used for various neurosurgical pathologies, very few cases have been reported for the surgery of arteriovenous malformations (AVM). We present our experience with AVM surgery using a system designed for image injection of virtual images into the operating microscope's eyepiece, and discuss why augmented reality may be less appealing in this form of surgery. Five patients (N = 5) underwent AVM resection assisted by augmented reality. Virtual three-dimensional models of patients' heads, skulls, AVM nidi, and feeder and drainage vessels were selectively segmented and injected into the microscope's eyepiece for intraoperative image guidance, and their usefulness was assessed in each case. Although the setup helped in performing tailored craniotomies, in guiding dissection and in localizing drainage veins, it did not provide the surgeon with useful information concerning feeder arteries, due to the complexity of AVM angioarchitecture. The difficulty in intraoperatively conveying useful information on feeder vessels may make augmented reality a less engaging tool in this form of surgery, and might explain its underrepresentation in the literature. Integrating an AVM's hemodynamic characteristics into the augmented rendering could make it more suited to AVM surgery. ER - TY - JOUR AU - Cabrilo, I. AU - Sarrafzadeh, A. AU - Bijlenga, P. AU - Landis, B. N. AU - Schaller, K. T1 - Augmented reality-assisted skull base surgery. JO - Neuro-Chirurgie Y1 - 2014 VL - 60 SP - 304 EP - 306 KW - Chordoma KW - surgery; Humans; Male; Middle Aged; Neuronavigation; Skull Base KW - surgery; Skull Base Neoplasms KW - surgery; Surgery KW - Computer-Assisted; Augmented reality; Chirurgie de la base du crâne; Chirurgie guidée par images; Image-guided surgery; Neuronavigation; Réalité augmentée; Skull base surgery N1 - 1773-0619 Owner: NLM N2 - Neuronavigation is widely considered a valuable tool during skull base surgery. Advances in neuronavigation technology, with the integration of augmented reality, present advantages over traditional point-based neuronavigation. However, this development has not yet made its way into routine surgical practice, possibly due to a lack of acquaintance with these systems. In this report, we illustrate the usefulness and easy application of augmented reality-based neuronavigation through a case example of a patient with a clivus chordoma. We also demonstrate how augmented reality can help throughout all phases of a skull base procedure, from the verification of neuronavigation accuracy to intraoperative image guidance. ER - TY - JOUR AU - Chaballout, Basil AU - Molloy, Margory AU - Vaughn, Jacqueline AU - Brisson III, Raymond AU - Shaw, Ryan T1 - Feasibility of Augmented Reality in Clinical Simulations: Using Google Glass With Manikins. JO - JMIR medical education Y1 - 2016 VL - 2 SP - e2 EP - e2 KW - Google Glass; augmented reality; clinical simulation; feasibility; student learning N1 - 2369-3762 Owner: NLM N2 - Studies show that students who use fidelity-based simulation technology perform better and have higher retention rates than peers who learn in traditional paper-based training. Augmented reality is increasingly being used as a teaching and learning tool in a continual effort to make simulations more realistic for students. The aim of this project was to assess the feasibility and acceptability of using augmented reality via Google Glass during clinical simulation scenarios for training health science students.
Students performed a clinical simulation while watching a video through Google Glass of a patient actor simulating respiratory distress. Following participation in the scenarios, students completed two surveys and were asked whether they would recommend continued use of this technology in clinical simulation experiences. Students were able to watch, in their field of vision, a video of a patient who mimicked the simulated manikin. Students were positive overall about being able to view a patient during the simulations, and most students recommended using the technology in the future. Overall, students reported perceived realism with augmented reality using Google Glass. However, there were technical and usability challenges with the device. As newer portable and consumer-focused technologies become available, augmented reality is increasingly being used as a teaching and learning tool to make clinical simulations more realistic for health science students. We found Google Glass feasible and acceptable as a tool for augmented reality in clinical simulations. ER - TY - JOUR AU - Chandrasekera, Tilanka AU - Kang, Mihyun AU - Hebert, Paulette AU - Choo, Phil T1 - Augmenting space: Enhancing health, safety, and well-being of older adults through hybrid spaces JO - Technology and Disability Y1 - 2017/august VL - 29 IS - 3 SP - 141 EP - 151 N1 - 1055-4181 ER - TY - JOUR AU - Chang, Yao-Jen AU - Kang, Ya-Shu AU - Huang, Po-Chiao T1 - An augmented reality (AR)-based vocational task prompting system for people with cognitive impairments. JO - Research in developmental disabilities Y1 - 2013 VL - 34 SP - 3049 EP - 3056 KW - Adult; Cognition Disorders KW - rehabilitation; Cues; Data Display; Education of Intellectually Disabled KW - methods; Female; Food Industry; Humans; Male; Rehabilitation KW - Vocational KW - methods; Residence Characteristics; Treatment Outcome; Young Adult; Augmented reality; Cognitive impairments; Community-based rehabilitation; Task prompting N1 - 1873-3379 Owner: NLM N2 - This study assessed the possibility of training three people with cognitive impairments using an augmented reality (AR)-based task prompting system. Using AR technology, the system provided picture cues, identified incorrect task steps on the fly, and helped users make corrections. Based on a multiple baseline design, the data showed that the three participants considerably increased their target response, which improved their vocational job skills during the intervention phases and enabled them to maintain the acquired job skills after intervention. The practical and developmental implications of the results are discussed. ER - TY - JOUR AU - Chen, Chien-Hsu AU - Lee, I.-Jui AU - Lin, Ling-Yi T1 - Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders. JO - Research in developmental disabilities Y1 - 2015 VL - 36C SP - 396 EP - 403 KW - 3-D facial animation; Augmented reality (AR); Emotions; Self-facial modeling; Three-dimensional (3-D) facial expressions N1 - 1873-3379 Owner: NLM N2 - Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotions of other people; this ability involves recognizing facial expressions. This study assessed the possibility of enabling three adolescents with ASD to become aware of facial expressions observed in situations in a school setting simulated using augmented reality (AR) technology.
The AR system provided three-dimensional (3-D) animations of six basic facial expressions overlaid on participant faces to facilitate practicing emotional judgments and social skills. Based on the multiple baseline design across subjects, the data indicated that the AR intervention can improve the appropriate recognition of, and response to, facial emotional expressions seen in the situational task. ER - TY - JOUR AU - Chen, Ji-Gang AU - Han, Kai-Wei AU - Zhang, Dan-Feng AU - Li, Zhen-Xing AU - Li, Yi-Ming AU - Hou, Li-Jun T1 - Presurgical Planning for Supratentorial Lesions with Free Slicer Software and Sina App. JO - World neurosurgery Y1 - 2017 VL - 106 SP - 193 EP - 197 KW - Adult; Aged; Female; Fiducial Markers; Glioma KW - diagnostic imaging KW - surgery; Humans; Imaging KW - Three-Dimensional; Magnetic Resonance Imaging; Male; Meningeal Neoplasms KW - surgery; Meningioma KW - surgery; Middle Aged; Mobile Applications; Neuronavigation KW - methods; Neurosurgical Procedures KW - methods; Software; Supratentorial Neoplasms KW - surgery; 3D Slicer; Augmented reality; Mobile application; Neuronavigation N1 - 1878-8769 Owner: NLM N2 - Neuronavigation systems are used widely in the localization of intracranial lesions with satisfactory accuracy. However, they are expensive and difficult to learn. Therefore, a simple and practical augmented reality (AR) system using mobile devices might be an alternative technique. We introduce a mobile AR system for the localization of supratentorial lesions. Its practicability and accuracy were examined by clinical application in patients and comparison with a standard neuronavigation system. A 3-dimensional (3D) model including lesions was created with 3D Slicer. A 2-dimensional image of this 3D model was obtained and overlaid on the patient's head with the Sina app. Registration was conducted with the assistance of anatomical landmarks and fiducial markers. The center of the lesion projected on the scalp was identified with our mobile AR system and a standard neuronavigation system, respectively. The difference in distance between the centers identified by these 2 systems was measured. Our mobile AR system was simple and accurate in the localization of supratentorial lesions, with a mean distance difference of 4.4 ± 1.1 mm. Registration added an average of 141.7 ± 39 seconds to the operation time. There was no statistically significant difference in the required time among the 3 registrations (P = 0.646). The mobile AR system presents an alternative technology for image-guided neurosurgery and proves to be practical and reliable. The technique contributes to optimal presurgical planning for supratentorial lesions, especially in the absence of a neuronavigation system. ER - TY - JOUR AU - Chen, Xin AU - Wang, Lejing AU - Fallavollita, Pascal AU - Navab, Nassir T1 - Precise X-ray and video overlay for augmented reality fluoroscopy. JO - International journal of computer assisted radiology and surgery Y1 - 2013 VL - 8 SP - 29 EP - 38 KW - Algorithms; Calibration; Equipment Design; Fluoroscopy KW - instrumentation; Humans; Models KW - Theoretical; Phantoms KW - Imaging; Radiographic Image Interpretation KW - Computer-Assisted KW - methods; Tomography KW - X-Ray Computed KW - instrumentation; User-Computer Interface; Video Recording KW - methods; X-Rays N1 - 1861-6429 Owner: NLM N2 - The camera-augmented mobile C-arm (CamC) augments any mobile C-arm with a video camera and mirror construction and provides a co-registration of X-ray with video images.
The accurate overlay between these images is crucial to high-quality surgical outcomes. In this work, we propose a practical solution that improves the overlay accuracy for any C-arm orientation by: (i) improving the existing CamC calibration, (ii) removing distortion effects, and (iii) accounting for the mechanical sagging of the C-arm gantry due to gravity. A planar phantom is constructed and placed at different distances from the image intensifier in order to obtain the optimal homography that co-registers X-ray and video with a minimum error. To alleviate distortion, both X-ray calibration based on an equidistant grid model and Zhang's camera calibration method are implemented for distortion correction. Lastly, the virtual detector plane (VDP) method is adapted and integrated to reduce errors due to the mechanical sagging of the C-arm gantry. The overlay errors are 0.38 ± 0.06 mm when not correcting for distortion, 0.27 ± 0.06 mm when applying Zhang's camera calibration, and 0.27 ± 0.05 mm when applying X-ray calibration. Lastly, when taking into account all angular and orbital rotations of the C-arm, as well as correcting for distortion, the overlay errors are 0.53 ± 0.24 mm using VDP and 1.67 ± 1.25 mm excluding VDP. The augmented reality fluoroscope achieves an accurate video and X-ray overlay when applying the optimal homography calculated from distortion correction using X-ray calibration together with the VDP. ER - TY - JOUR AU - Chen, Xiaojun AU - Xu, Lu AU - Wang, Yiping AU - Wang, Huixiang AU - Wang, Fang AU - Zeng, Xiangsen AU - Wang, Qiugen AU - Egger, Jan T1 - Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display. JO - Journal of biomedical informatics Y1 - 2015 VL - 55 SP - 124 EP - 131 KW - Computer Graphics KW - instrumentation; Equipment Design; Equipment Failure Analysis; Head; Head Protective Devices; Humans; Image Enhancement KW - instrumentation; Imaging KW - Three-Dimensional KW - instrumentation; Reproducibility of Results; Sensitivity and Specificity; Surgery KW - Computer-Assisted KW - instrumentation; User-Computer Interface; Augmented reality; Intra-operative motion tracking; Optical see-through HMD; Surgical navigation N1 - 1532-0480 Owner: NLM N2 - Surgical navigation systems have experienced tremendous development over the past decades, minimizing the risks and improving the precision of surgery. Nowadays, Augmented Reality (AR)-based surgical navigation is a promising technology for clinical applications. In the AR system, virtual and actual reality are mixed, offering real-time, high-quality visualization of an extensive variety of information to the users (Moussa et al., 2012) [1]. For example, virtual anatomical structures such as soft tissues, blood vessels and nerves can be integrated with the real-world scenario in real time. In this study, an AR-based surgical navigation system (AR-SNS) is developed using an optical see-through HMD (head-mounted display), aiming at improving the safety and reliability of the surgery. With the use of this system, including the calibration of instruments, registration, and the calibration of the HMD, the 3D virtual critical anatomical structures in the head-mounted display are aligned with the actual structures of the patient in the real-world scenario during the intra-operative motion tracking process.
The accuracy verification experiment demonstrated that the mean distance and angular errors were 0.809 ± 0.05 mm and 1.038° ± 0.05°, respectively, which is sufficient to meet the clinical requirements. ER - TY - JOUR AU - Chen, Yue AU - Kwok, Ka-Wai AU - Ge, Jia AU - Hu, Yang AU - Fok, Mable AU - Nilsson, Kent Ronald AU - Tse, Zion Tsz Ho T1 - Augmented Reality for Improving Catheterization in Magnetic Resonance Imaging-Guided Cardiac Electrophysiology Therapy JO - Journal of Medical Devices Y1 - 2014/april VL - 8 IS - 2 SP - 020917 EP - 020917 ER - TY - CONF AU - Cheng, Irene AU - Shen, Rui AU - Moreau, Richard AU - Brizzi, Vicenzo AU - Rossol, Nathaniel AU - Basu, Anup T1 - An augmented reality framework for optimization of computer assisted navigation in endovascular surgery PB - IEEE Y1 - 2014/august ER - TY - JOUR AU - Chinthammit, Winyu AU - Merritt, Troy AU - Pedersen, Scott AU - Williams, Andrew AU - Visentin, Denis AU - Rowe, Robert AU - Furness, Thomas T1 - Ghostman: augmented reality application for telerehabilitation and remote instruction of a novel motor skill. JO - BioMed research international Y1 - 2014 VL - 2014 SP - 646347 EP - 646347 KW - Adult; Demography; Female; Humans; Male; Motor Skills KW - physiology; Remote Consultation; Surveys and Questionnaires; Task Performance and Analysis; Telemedicine KW - methods; Time Factors; Virtual Reality Exposure Therapy KW - methods N1 - 2314-6141 Owner: NLM N2 - This paper describes a pilot study using a prototype telerehabilitation system (Ghostman). Ghostman is a visual augmentation system designed to allow a physical therapist and patient to inhabit each other's viewpoint in an augmented real-world environment. This allows the therapist to deliver instruction remotely and observe performance of a motor skill through the patient's point of view. In a pilot study, we investigated the efficacy of Ghostman by using it to teach participants to use chopsticks. Participants were randomized to a single training session, receiving either Ghostman or face-to-face instruction by the same skilled instructor. Learning was assessed by measuring retention of skills at 24 hours and 7 days post-instruction. As hypothesised, there were no differences in reduction of error or time to completion between participants using Ghostman and those receiving face-to-face instruction. These initial results in a healthy population are promising and demonstrate the potential application of this technology to patients requiring learning or relearning of motor skills as may be required following a stroke or brain injury. ER - TY - JOUR AU - Cho, H. S. AU - Park, Y. K. AU - Gupta, S. AU - Yoon, C. AU - Han, I. AU - Kim, H.-S. AU - Choi, H. AU - Hong, J. T1 - Augmented reality in bone tumour resection: An experimental study. JO - Bone & joint research Y1 - 2017 VL - 6 SP - 137 EP - 143 KW - Augmented reality; Bone tumour; Navigation N1 - 2046-3758 Owner: NLM N2 - We evaluated the accuracy of augmented reality (AR)-based navigation assistance through simulation of bone tumours in a pig femur model. We developed an AR-based navigation system for bone tumour resection, which could be used on a tablet PC. To simulate a bone tumour in the pig femur, a cortical window was made in the diaphysis and bone cement was inserted.
A total of 133 pig femurs were used and tumour resection was simulated with AR-assisted resection (164 resections in 82 femurs, half by an orthopaedic oncology expert and half by an orthopaedic resident) and resection with the conventional method (82 resections in 41 femurs). In the conventional group, resection was performed after measuring the distance from the edge of the condyle to the expected resection margin with a ruler, as per routine clinical practice. The mean error of the 164 resections in 82 femurs in the AR group was 1.71 mm (0 to 6). The mean error of the 82 resections in 41 femurs in the conventional resection group was 2.64 mm (0 to 11) (p < 0.05, one-way analysis of variance). The probabilities of a surgeon obtaining a 10 mm surgical margin with a 3 mm tolerance were 90.2% in AR-assisted resections and 70.7% in conventional resections. We demonstrated that the accuracy of tumour resection was satisfactory with the help of the AR navigation system, with the tumour shown as a virtual template. In addition, this concept made the navigation system simple and available without additional cost or time. ER - TY - JOUR AU - Cho, Nam Hyun AU - Jang, Jeong Hun AU - Jung, Woonggyu AU - Kim, Jeehyun T1 - In vivo imaging of middle-ear and inner-ear microstructures of a mouse guided by SD-OCT combined with a surgical microscope. JO - Optics express Y1 - 2014 VL - 22 SP - 8985 EP - 8995 KW - Animals; Diagnostic Imaging; Disease Models KW - Animal; Ear Diseases KW - diagnosis KW - surgery; Ear KW - Inner KW - pathology KW - Middle KW - surgery; Intraoperative Period; Mice; Microscopy KW - instrumentation; Otologic Surgical Procedures; Tomography KW - Optical Coherence KW - instrumentation N1 - 1094-4087 Owner: NLM N2 - We developed an augmented-reality system that combines optical coherence tomography (OCT) with a surgical microscope. By sharing the common optical path in the microscope and OCT, we could simultaneously acquire OCT and microscope views. The system was tested to identify the middle-ear and inner-ear microstructures of a mouse. Considering the potential for clinical application, including otorhinolaryngology, diseases such as middle-ear effusion were visualized in the in vivo mouse using OCT images simultaneously acquired through the eyepiece of the surgical microscope during surgical manipulation with the proposed system. This system is expected to open a new practical area of OCT application. ER - TY - JOUR AU - Choi, Hyunseok AU - Park, Yeongkyoon AU - Lee, Seongpung AU - Ha, Hogun AU - Kim, Sungmin AU - Cho, Hwan Seong AU - Hong, Jaesung T1 - A portable surgical navigation device to display resection planes for bone tumor surgery. JO - Minimally invasive therapy & allied technologies : MITAT : official journal of the Society for Minimally Invasive Therapy Y1 - 2017 VL - 26 SP - 144 EP - 150 KW - Animals; Bone Neoplasms KW - surgery; Computers KW - Handheld; Humans; Imaging KW - Three-Dimensional; Pelvic Bones KW - surgery; Surgery KW - Computer-Assisted KW - methods; Swine; Augmented reality; bone tumor surgery; resection margin; surgical navigation N1 - 1365-2931 Owner: NLM N2 - Surgical navigation has been used in musculoskeletal tumor surgical procedures to improve the precision of tumor resection.
Despite the favorable attributes of navigation-assisted surgery, conventional systems do not display the resection margin in real time, and preoperative manual input is required. In addition, navigation systems are often expensive and complex, and this has limited their widespread use. In this study, we propose an augmented reality surgical navigation system that uses a tablet personal computer with no external tracking system. We realized a real-time safety margin display based on three-dimensional dilation. The resection plane induced by the safety margin is updated in real time according to the direction of sawing. The minimum separation between the saw and the resection plane is also calculated and displayed. The surgeon can resect bone tumors accurately by referring to the resection plane and the minimum separation updated in real time. The effectiveness of the system was demonstrated with experiments on pig pelvises. When the desired resection margin was 10 mm, the measured resection margin was 9.85 ± 1.02 mm. The proposed method exhibits sufficient accuracy and convenience for use in bone tumor resection. It also has favorable practical applicability due to its low cost and portability. ER - TY - JOUR AU - Chowriappa, Ashirwad AU - Raza, Syed Johar AU - Fazili, Anees AU - Field, Erinn AU - Malito, Chelsea AU - Samarasekera, Dinesh AU - Shi, Yi AU - Ahmed, Kamran AU - Wilding, Gregory AU - Kaouk, Jihad AU - Eun, Daniel D. AU - Ghazi, Ahmed AU - Peabody, James O. AU - Kesavadas, Thenkurussi AU - Mohler, James L. AU - Guru, Khurshid A. T1 - Augmented-reality-based skills training for robot-assisted urethrovesical anastomosis: a multi-institutional randomised controlled trial. JO - BJU international Y1 - 2015 VL - 115 SP - 336 EP - 345 KW - Anastomosis KW - Surgical KW - education KW - methods KW - standards; Clinical Competence; Computer Simulation; Humans; Laparoscopy KW - standards; Robotic Surgical Procedures KW - standards; Surveys and Questionnaires; Task Performance and Analysis; Urethra KW - surgery; anastomosis; augmented reality; robot-assisted; robotic; skills training; urethrovesical anastomosis N1 - 1464-410X Owner: NLM N2 - To validate robot-assisted surgery skills acquisition using an augmented reality (AR)-based module for urethrovesical anastomosis (UVA). Participants at three institutions were randomised to a Hands-on Surgical Training (HoST) technology group or a control group. The HoST group was given procedure-based training for UVA within the haptic-enabled AR-based HoST environment. The control group did not receive any training. After completing the task, the control group was offered to cross over to the HoST group (cross-over group). A questionnaire administered after HoST determined the feasibility and acceptability of the technology. Performance of UVA using an inanimate model on the daVinci Surgical System (Intuitive Surgical Inc., Sunnyvale, CA, USA) was assessed using a UVA evaluation score and a Global Evaluative Assessment of Robotic Skills (GEARS) score. Participants completed the National Aeronautics and Space Administration Task Load Index (NASA TLX) questionnaire for cognitive assessment, as outcome measures. A Wilcoxon rank-sum test was used to compare outcomes among the groups (HoST group vs control group and control group vs cross-over group). A total of 52 individuals participated in the study. 
UVA evaluation scores showed significant differences in needle driving (3.0 vs 2.3; P = 0.042), needle positioning (3.0 vs 2.4; P = 0.033) and suture placement (3.4 vs 2.6; P = 0.014) in the HoST vs the control group. The HoST group obtained significantly higher scores (14.4 vs 11.9; P = 0.012) on the GEARS. The NASA TLX indicated lower temporal demand and effort in the HoST group (5.9 vs 9.3; P = 0.001 and 5.8 vs 11.9; P = 0.035, respectively). In all, 70% of participants found that HoST was similar to the real surgical procedure, and 75% believed that HoST could improve confidence for carrying out the real intervention. Training in UVA in an AR environment improves technical skill acquisition with minimal cognitive demand. ER - TY - JOUR AU - Chu, Michael W. A. AU - Moore, John AU - Peters, Terry AU - Bainbridge, Daniel AU - McCarty, David AU - Guiraudon, Gerard M. AU - Wedlake, Chris AU - Lang, Pencilla AU - Rajchl, Martin AU - Currie, Maria E. AU - Daly, Richard C. AU - Kiaii, Bob T1 - Augmented reality image guidance improves navigation for beating heart mitral valve repair. JO - Innovations (Philadelphia, Pa.) Y1 - 2012 VL - 7 SP - 274 EP - 281 KW - Animals; Cardiac Surgical Procedures KW - methods; Coronary Artery Bypass KW - Off-Pump; Disease Models KW - Animal; Echocardiography KW - Transesophageal KW - instrumentation KW - methods; Equipment Design; Heart Valve Prosthesis Implantation KW - methods; Image Processing KW - Computer-Assisted; Mitral Valve KW - diagnostic imaging KW - surgery; Sus scrofa; Treatment Outcome; Ultrasonography KW - Interventional KW - methods N1 - 1559-0879 Owner: NLM N2 - Emerging off-pump beating heart valve repair techniques offer patients less invasive alternatives for mitral valve (MV) repair. However, most of these techniques rely on the limited spatial and temporal resolution of transesophageal echocardiography (TEE) alone, which can make tool visualization and guidance challenging. Using a magnetic tracking system and integrated sensors, we created an augmented reality (AR) environment displaying virtual representations of important intracardiac landmarks registered to biplane TEE imaging. In a porcine model, we evaluated the AR guidance system versus TEE alone using the transapically delivered NeoChord DS1000 system to perform MV repair with chordal reconstruction. Successful tool navigation from left ventricular apex to MV leaflet was achieved in 12 of 12 and 9 of 12 (P = 0.2) attempts with AR imaging and TEE alone, respectively. The distance errors of the tracked tool tip from the intended midline trajectory (5.2 ± 2.4 mm vs 16.8 ± 10.9 mm, P = 0.003), navigation times (16.7 ± 8.0 seconds vs 92.0 ± 84.5 seconds, P = 0.004), and total path lengths (225.2 ± 120.3 mm vs 1128.9 ± 931.1 mm, P = 0.003) were significantly shorter in the AR-guided trials compared with navigation with TEE alone. Furthermore, the potential for injury to other intracardiac structures was nearly 40-fold lower when using AR imaging for tool navigation. The AR guidance also seemed to shorten the learning curve for novice surgeons. Augmented reality-enhanced TEE facilitates more direct and safe intracardiac navigation of the NeoChord DS tool from left ventricular apex to MV leaflet. Tracked tool path results demonstrate fourfold improved accuracy, fivefold shorter navigation times, and overall improved safety with AR imaging guidance. ER - TY - JOUR AU - Chung, Peter J. AU - Vanderbilt, Douglas L. AU - Soares, Neelkamal S.
T1 - Social Behaviors and Active Videogame Play in Children with Autism Spectrum Disorder. JO - Games for health journal Y1 - 2015 VL - 4 SP - 225 EP - 234 KW - Aggression KW - psychology; Autism Spectrum Disorder KW - psychology; Child; Child Behavior; Communication; Exercise KW - psychology; Female; Humans; Male; Social Behavior; Social Skills; User-Computer Interface; Video Games KW - psychology N1 - 2161-7856 Owner: NLM N2 - Children with autism spectrum disorder (ASD) often display problematic and excessive videogame play. Using active videogames (AVGs) may have physical benefits, but its effects on socialization are unknown. We conducted an A-B-A' experiment comparing sedentary videogames and AVGs for three dyads of a child with ASD and his sibling. An augmented reality (AR) game was used to introduce AVGs. Sessions were coded for communication, positive affect, and aggression. One dyad had increases in positive affect with AVGs. Otherwise, social behaviors were unchanged or worse. The AR game demonstrated consistent elevations in social behaviors. Use of AVGs has inconsistent effects on social behavior for children with ASD. Further research is needed to understand mediators of response to AVGs. AR games should be evaluated for potential benefits on socialization and positive affect. ER - TY - CONF AU - Cicció, José AU - Quesada, Luis T1 - A Model Proposal for Augmented Reality Game Creation to Incentivize Physical Activity Y1 - 2017/january SP - 252 EP - 259 ER - TY - JOUR AU - Citardi, Martin J. AU - Agbetoba, Abib AU - Bigcas, Jo-Lawrence AU - Luong, Amber T1 - Augmented reality for endoscopic sinus surgery with surgical navigation: a cadaver study. JO - International forum of allergy & rhinology Y1 - 2016/05 VL - 6 SP - 523 EP - 528 KW - Cadaver; Endoscopy KW - methods; Frontal Sinus KW - surgery; Humans; Image Processing KW - Computer-Assisted; Software; Surgery KW - Computer-Assisted; Tomography KW - X-Ray Computed; augmented reality; cadaveric model; computer-aided surgery; endoscopic sinus surgery; image-guided surgery; visualization technology N1 - 2042-6984 Owner: NLM N2 - Augmented reality (AR) fuses computer-generated images of preoperative imaging data with real-time views of the surgical field. Scopis Hybrid Navigation (Scopis GmbH, Berlin, Germany) is a surgical navigation system with AR capabilities for endoscopic sinus surgery (ESS). Predissection planning was performed with Scopis Hybrid Navigation software followed by ESS dissection on 2 human specimens using conventional ESS instruments. Predissection planning included creating models of relevant frontal recess structures and the frontal sinus outflow pathway on orthogonal computed tomography (CT) images. Positions of the optic nerve and internal carotid artery were marked on the CT images. Models and annotations were displayed as an overlay on the endoscopic images during the dissection, which was performed with electromagnetic surgical navigation. The accuracy of the AR images relative to underlying anatomy was better than 1.5 mm. The software's trajectory targeting tool was used to guide instrument placement along the frontal sinus outflow pathway. AR imaging of the optic nerve and internal carotid artery served to mark the positions of these structures during the dissection. Surgical navigation with AR was easily deployed in this cadaveric model of ESS. This technology builds upon the positive impact of surgical navigation during ESS, particularly during frontal recess surgery. 
Instrument tracking with this technology facilitates identification and cannulation of the frontal sinus outflow pathway without dissection of the frontal recess anatomy. AR can also highlight "anti-targets" (ie, structures to be avoided), such as the optic nerve and internal carotid artery, and thus reduce surgical complications and morbidity. ER - TY - JOUR AU - Clarkson, Matthew J. AU - Zombori, Gergely AU - Thompson, Steve AU - Totz, Johannes AU - Song, Yi AU - Espak, Miklos AU - Johnsen, Stian AU - Hawkes, David AU - Ourselin, Sébastien T1 - The NifTK software platform for image-guided interventions: platform overview and NiftyLink messaging JO - International Journal of Computer Assisted Radiology and Surgery Y1 - 2014/november VL - 10 IS - 3 SP - 301 EP - 316 ER - TY - JOUR AU - Colomer, Carolina AU - Llorens, Roberto AU - Noé, Enrique AU - Alcañiz, Mariano T1 - Effect of a mixed reality-based intervention on arm, hand, and finger function on chronic stroke JO - Journal of NeuroEngineering and Rehabilitation Y1 - 2016/05 VL - 13 IS - 1 ER - TY - JOUR AU - Conrad, Claudius AU - Fusaglia, Matteo AU - Peterhans, Matthias AU - Lu, Huanxiang AU - Weber, Stefan AU - Gayet, Brice T1 - Augmented Reality Navigation Surgery Facilitates Laparoscopic Rescue of Failed Portal Vein Embolization. JO - Journal of the American College of Surgeons Y1 - 2016 VL - 223 SP - e31 EP - e34 KW - Embolization KW - Therapeutic; Hepatectomy KW - methods; Humans; Laparoscopy KW - methods; Liver Neoplasms KW - surgery; Male; Middle Aged; Portal Vein; Surgery KW - Computer-Assisted KW - methods; Virtual Reality N1 - 1879-1190 Owner: NLM ER - TY - JOUR AU - Corrêa, Ana Grasielle Dionísio AU - de Assis, Gilda Aparecida AU - do Nascimento, Marilena AU - de Deus Lopes, Roseli T1 - Perceptions of clinical utility of an Augmented Reality musical software among health care professionals. JO - Disability and rehabilitation. Assistive technology Y1 - 2017 VL - 12 SP - 205 EP - 216 KW - Adult; Allied Health Personnel KW - psychology; Brazil; Cognition; Disabled Persons KW - rehabilitation; Female; Focus Groups; Humans; Male; Movement; Music Therapy KW - methods; Perception; Video Games; Virtual Reality; Augmented Reality; Music Therapy; motor and cognitive rehabilitation; musical games N1 - 1748-3115 Owner: NLM N2 - Augmented Reality musical software (GenVirtual) is a technology that primarily allows users to develop music activities for rehabilitation. This study aimed to analyse the perceptions of health care professionals regarding the clinical utility of GenVirtual. A second objective was to identify improvements to GenVirtual software and similar technologies. Music therapists, occupational therapists, physiotherapists and speech and language therapists who assist people with physical and cognitive disabilities were enrolled in three focus groups. The quantitative and qualitative data were collected through inductive thematic analysis. Three main themes were identified: the use of GenVirtual in health care areas; opportunities for realistic application of GenVirtual; and limitations in the use of GenVirtual. The registration units identified were: motor stimulation, cognitive stimulation, verbal learning, recreation activity, musicality, accessibility, motivation, sonic accuracy, interference of lighting, poor sound, children and adults.
This research suggested that GenVirtual is a complementary tool to conventional clinical practice and has great potential for the motor and cognitive rehabilitation of children and adults. Implications for Rehabilitation Gaining health professionals' perceptions of the Augmented Reality musical game (GenVirtual) gives valuable information as to the clinical utility of the software. GenVirtual was perceived as a tool that could enhance the motor and cognitive rehabilitation process. GenVirtual was viewed as a tool that could enhance clinical practice and communication among various agencies, but it was suggested that it should be used with caution to avoid confusion and replacement of important services. ER - TY - JOUR AU - Cruz, Telmo AU - Brás, Susana AU - Soares, Sandra C. AU - Fernandes, José Maria T1 - Monitoring physiology and behavior using Android in phobias. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2015 VL - 2015 SP - 3739 EP - 3742 KW - Animals; Heart Rate; Humans; Monitoring KW - Physiologic; Phobic Disorders KW - diagnosis KW - physiopathology KW - psychology; Photic Stimulation N1 - 1557-170X Owner: NLM N2 - In this paper, we present an Android-based application - AWARE - for the assessment of a person's physiology and behavior outside the laboratory. To accomplish this purpose, AWARE delivers context-dependent audio-visual stimuli, embedded into the subject's real-world perception, via marker/vision-based augmented reality (AR) technology. In addition, it employs external measuring resources connected via Bluetooth, as well as the smartphone's integrated resources. It synchronously acquires the experiment's video (camera input with AR overlay), physiologic responses (with a dedicated ECG measuring device) and behavior (through movement and location, with accelerometer/gyroscope and GPS, respectively). Psychological assessment is heavily based on laboratory procedures, even though it is known that these settings disturb the subjects' natural reactions and condition. The major idea of this application is to evaluate the participant's condition, mimicking his/her real-life conditions. Given that phobias are rather context-specific, they represent the ideal candidate for assessing the feasibility of a mobile system application. AWARE allowed presenting AR stimuli (e.g., 3D spiders) and quantifying the subjects' reactions non-intrusively (e.g., heart rate variation) - reactions that were more pronounced in the phobic volunteer when presented with the spider stimulus than with a non-phobic stimulus. Although still a proof of concept, AWARE proved to be flexible and straightforward to set up, with the potential to support ecologically valid monitoring experiments. ER - TY - CONF AU - Cui, Nan AU - Kharel, Pradosh AU - Gruev, Viktor AU - Pogue, Brian W. AU - Gioux, Sylvain T1 - Augmented reality with Microsoft HoloLens holograms for near infrared fluorescence based image guided surgery PB - SPIE Y1 - 2017/february ER - TY - JOUR AU - Currie, Maria E. AU - McLeod, A. Jonathan AU - Moore, John T. AU - Chu, Michael W. A. AU - Patel, Rajni AU - Kiaii, Bob AU - Peters, Terry M. T1 - Augmented Reality System for Ultrasound Guidance of Transcatheter Aortic Valve Implantation. JO - Innovations (Philadelphia, Pa.)
Y1 - 2016 VL - 11 SP - 31 EP - 39 KW - Aortic Valve Stenosis KW - diagnostic imaging KW - surgery; Echocardiography KW - Transesophageal KW - instrumentation KW - methods; Humans; Image Processing KW - Computer-Assisted KW - instrumentation; Monitoring KW - Intraoperative KW - methods; Observer Variation; Transcatheter Aortic Valve Replacement KW - methods; Treatment Outcome; Ultrasonography KW - Interventional KW - methods N1 - 1559-0879 Owner: NLM N2 - Transcatheter aortic valve implantation (TAVI) relies on fluoroscopy and nephrotoxic contrast medium for valve deployment. We propose an alternative guidance system using augmented reality (AR) and transesophageal echocardiography (TEE) to guide TAVI deployment. The goals of this study were to determine how consistently the aortic valve annulus is defined from TEE using different aortic valve landmarks and to compare AR guidance with fluoroscopic guidance of TAVI deployment in an aortic root model. Magnetic tracking sensors were integrated into the TAVI catheter and TEE probe, allowing these tools to be displayed in an AR environment. Variability in identifying aortic valve commissures and cuspal nadirs was assessed using TEE aortic root images. To compare AR guidance of TAVI deployment with fluoroscopic guidance, a TAVI stent was deployed 10 times in the aortic root model using each of the two guidance systems. Commissures and nadirs were both investigated as features for defining the valve annulus in the AR guidance system. The commissures were identified more consistently than the nadirs, with intraobserver variability of 2.2 and 3.8 mm, respectively, and interobserver variability of 3.3 and 4.7 mm, respectively. The precision of TAVI deployment using fluoroscopic guidance was 3.4 mm, whereas the precision of AR guidance was 2.9 mm, and its overall accuracy was 3.4 mm. This indicates that both have similar performance. Aortic valve commissures can be identified more reliably than cuspal nadirs from TEE. The AR guidance system achieved similar deployment accuracy to that of fluoroscopy while eliminating the use and consequences of nephrotoxic contrast and radiation. ER - TY - JOUR AU - Cutolo, Fabrizio AU - Meola, Antonio AU - Carbone, Marina AU - Sinceri, Sara AU - Cagnazzo, Federico AU - Denaro, Ennio AU - Esposito, Nicola AU - Ferrari, Mauro AU - Ferrari, Vincenzo T1 - A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom JO - Computer Assisted Surgery Y1 - 2017/january VL - 22 IS - 1 SP - 39 EP - 53 ER - TY - JOUR AU - Da Gama, Alana Elza Fontes AU - Chaves, Thiago Menezes AU - Figueiredo, Lucas Silva AU - Baltar, Adriana AU - Meng, Ma AU - Navab, Nassir AU - Teichrieb, Veronica AU - Fallavollita, Pascal T1 - MirrARbilitation: A clinically-related gesture recognition interactive tool for an AR rehabilitation system. JO - Computer methods and programs in biomedicine Y1 - 2016 VL - 135 SP - 105 EP - 114 KW - Adolescent; Adult; Biomechanical Phenomena; Exercise Therapy KW - methods; Female; Gestures; Humans; Male; Middle Aged; Young Adult; Augmented reality; Biomechanics; Interaction; Kinect(TM); Movement analysis; Rehabilitation N1 - 1872-7565 Owner: NLM N2 - Interactive systems for rehabilitation have been widely investigated for motivational purposes. However, more attention should be given to the manner in which user movements are recognized and categorized.
This paper aims to evaluate the efficacy of using a clinically-related gesture recognition tool, based on the international biomechanical standards (ISB) for the reporting of human joint motion, for the development of an interactive augmented reality (AR) rehabilitation system - mirrARbilitation. This work presents an AR rehabilitation system based on ISB standards, which enables the system to interact and to be configured according to therapeutic needs. The Kinect(TM) skeleton tracking technology was exploited and a new movement recognition method was developed to recognize and classify biomechanical movements. Further, our mirrARbilitation system provides exercise instructions while simultaneously motivating the patient. The system was evaluated on a cohort of 33 patients, physiotherapists, and software developers when performing shoulder abduction therapy exercises. Tests were performed at three points: (i) users performed the exercise until they felt tired, without the help of the system; (ii) the same, but using the mirrARbilitation for motivation and guidance; and (iii) users performed the exercise again without the system. Users performing the movement without the help of the system served as the baseline reference. We demonstrated that the percentage of correct exercises, measured by the movement analysis method we developed, improved from 69.02% to 93.73% when users interacted with the mirrARbilitation. The number of exercise repetitions also improved from 34.06 to 66.09, signifying that our system increased the users' motivation. The system also prevented the users from performing the exercises in a completely wrong manner. Finally, with the help of our system the users' worst result was performing 73.68% of the rehabilitation movements correctly. Besides the engagement, these results suggest that the use of biomechanical standards to recognize movements is valuable in guiding users during rehabilitation exercises. The proposed system proved to be efficient by improving user engagement and exercise performance outcomes. ER - TY - JOUR AU - Dauwe, D. F. AU - Nuyens, D. AU - Buck, S. De AU - Claus, P. AU - Gheysens, O. AU - Koole, M. AU - Coudyzer, W. AU - Driessche, N. Vanden AU - Janssens, L. AU - Ector, J. AU - Dymarkowski, S. AU - Bogaert, J. AU - Heidbuchel, H. AU - Janssens, S. T1 - Three-dimensional rotational angiography fused with multimodal imaging modalities for targeted endomyocardial injections in the ischaemic heart JO - European Heart Journal - Cardiovascular Imaging Y1 - 2014/march VL - 15 IS - 8 SP - 900 EP - 907 ER - TY - JOUR AU - Davis, Matthew Christopher AU - Can, Dang D. AU - Pindrik, Jonathan AU - Rocque, Brandon G. AU - Johnston, James M. T1 - Virtual Interactive Presence in Global Surgical Education: International Collaboration Through Augmented Reality. JO - World neurosurgery Y1 - 2016 VL - 86 SP - 103 EP - 111 KW - Child KW - Preschool; Female; Humans; Hydrocephalus KW - surgery; Infant; International Cooperation; Male; Neuroendoscopy; Remote Consultation; United States; User-Computer Interface; Ventriculostomy; Vietnam; Global Health; Neurosurgery; Pediatrics; Telecommunications N1 - 1878-8769 Owner: NLM N2 - Technology allowing a remote, experienced surgeon to provide real-time guidance to local surgeons has great potential for training and capacity building in medical centers worldwide.
Virtual interactive presence and augmented reality (VIPAR), an iPad-based tool, allows surgeons to provide long-distance, virtual assistance wherever a wireless internet connection is available. Local and remote surgeons view a composite image of video feeds at each station, allowing for intraoperative telecollaboration in real time. Local and remote stations were established in Ho Chi Minh City, Vietnam, and Birmingham, Alabama, as part of ongoing neurosurgical collaboration. Endoscopic third ventriculostomy with choroid plexus coagulation with VIPAR was used for subjective and objective evaluation of system performance. VIPAR allowed both surgeons to engage in complex visual and verbal communication during the procedure. Analysis of 5 video clips revealed video delay of 237 milliseconds (range, 93-391 milliseconds) relative to the audio signal. Excellent image resolution allowed the remote neurosurgeon to visualize all critical anatomy. The remote neurosurgeon could gesture to structures with no detectable difference in accuracy between stations, allowing for submillimeter precision. Fifteen endoscopic third ventriculostomy with choroid plexus coagulation procedures have been performed with the use of VIPAR between Vietnam and the United States, with no significant complications. Eighty percent of these patients remain shunt-free. Evolving technologies that allow long-distance intraoperative guidance and knowledge transfer hold great potential for highly efficient international neurosurgical education. VIPAR is one example of an inexpensive, scalable platform for increasing global neurosurgical capacity. Efforts to create a network of Vietnamese neurosurgeons who use VIPAR for collaboration are underway. ER - TY - JOUR AU - Debarba, Henrique Galvan AU - Grandi, Jerônimo AU - Maciel, Anderson AU - Zanchet, Dinamar T1 - Anatomic hepatectomy planning through mobile display visualization and interaction. JO - Studies in health technology and informatics Y1 - 2012 VL - 173 SP - 111 EP - 115 KW - Adult; Aged; Female; Hepatectomy KW - methods; Humans; Liver KW - anatomy & histology; Male; Middle Aged; Tomography KW - X-Ray Computed KW - methods; User-Computer Interface N1 - 0926-9630 Owner: NLM N2 - Hepatectomies are resections in which segments of the liver are extracted. While medical images are fundamental in the surgery planning procedure, the process of analysis of such images slice-by-slice is still tedious and inefficient. In this work we propose a strategy to efficiently and semi-automatically segment and classify patient-specific liver models in 3D through a mobile display device. The method is based on volume visualization of standard CT datasets and allows accurate estimation of functional remaining liver volume. Experiments showing the effectiveness of the method are presented, and quantitative and qualitative results are discussed. ER - TY - JOUR AU - DeLisi, Michael P. AU - Mawn, Louise A. AU - Galloway, Robert L. T1 - Image-guided transorbital procedures with endoscopic video augmentation JO - Medical Physics Y1 - 2014/august VL - 41 IS - 9 SP - 091901 EP - 091901 ER - TY - JOUR AU - Deng, Weiwei AU - Li, Fang AU - Wang, Manning AU - Song, Zhijian T1 - Easy-to-use augmented reality neuronavigation using a wireless tablet PC.
JO - Stereotactic and functional neurosurgery Y1 - 2014 VL - 92 SP - 17 EP - 24 KW - Equipment Design; Feasibility Studies; Humans; Image Processing KW - Computer-Assisted KW - instrumentation KW - methods; Microcomputers; Neuronavigation KW - methods; Neurosurgery KW - methods; Reproducibility of Results; Skull; Surgery KW - methods; Wireless Technology N1 - 1423-0372 Owner: NLM N2 - Augmented reality (AR) technology solves the problem of view switching in traditional image-guided neurosurgery systems by integrating computer-generated objects into the actual scene. However, the state-of-the-art AR solution using head-mounted displays has not been widely accepted in clinical applications because it causes some inconvenience for the surgeon during surgery. In this paper, we present a Tablet-AR system that transmits navigation information to a movable tablet PC via a wireless local area network and overlays this information on the tablet screen, which simultaneously displays the actual scene captured by its back-facing camera. With this system, the surgeon can directly observe the intracranial anatomical structure of the patient with the overlaid virtual projection images to guide the surgery. The alignment errors in the skull specimen study and clinical experiment were 4.6 pixels (approx. 1.6 mm) and 6 pixels (approx. 2.1 mm), respectively. The system was also used for navigation in 2 actual clinical cases of neurosurgery, which demonstrated its feasibility in a clinical application. The easy-to-use Tablet-AR system presented in this study is accurate and feasible in clinical applications and has the potential to become a routine device in AR neuronavigation. ER - TY - JOUR AU - Diana, Michele AU - Halvax, Peter AU - Dallemagne, Bernard AU - Nagao, Yoshihiro AU - Diemunsch, Pierre AU - Charles, Anne-Laure AU - Agnus, Vincent AU - Soler, Luc AU - Demartines, Nicolas AU - Lindner, Veronique AU - Geny, Bernard AU - Marescaux, Jacques T1 - Real-time navigation by fluorescence-based enhanced reality for precise estimation of future anastomotic site in digestive surgery JO - Surgical Endoscopy Y1 - 2014/june VL - 28 IS - 11 SP - 3108 EP - 3118 ER - TY - JOUR AU - Diana, Michele AU - Noll, Eric AU - Diemunsch, Pierre AU - Dallemagne, Bernard AU - Benahmed, Malika A. AU - Agnus, Vincent AU - Soler, Luc AU - Barry, Brian AU - Namer, Izzie Jacques AU - Demartines, Nicolas AU - Charles, Anne-Laure AU - Geny, Bernard AU - Marescaux, Jacques T1 - Enhanced-reality video fluorescence: a real-time assessment of intestinal viability. JO - Annals of surgery Y1 - 2014 VL - 259 SP - 700 EP - 707 KW - Animals; Biomarkers KW - metabolism; Female; Fluorescent Dyes; Image Interpretation KW - Computer-Assisted; Indocyanine Green; Intestine KW - Small KW - blood supply KW - metabolism KW - pathology; Ischemia KW - pathology; Lactic Acid KW - metabolism; Laparoscopy; Magnetic Resonance Spectroscopy; Male; Mesenteric Arteries KW - surgery; Mesentery; Metabolome; Mitochondria KW - metabolism; Spectrometry KW - Fluorescence KW - methods; Swine; Video Recording N1 - 1528-1140 Owner: NLM N2 - Our aim was to evaluate a fluorescence-based enhanced-reality system to assess intestinal viability in a laparoscopic mesenteric ischemia model. A small bowel loop was exposed, and 3 to 4 mesenteric vessels were clipped in 6 pigs. Indocyanine green (ICG) was administered intravenously 15 minutes later. The bowel was illuminated with an incoherent light source laparoscope (D-light-P, KarlStorz). 
The ICG fluorescence signal was analyzed with ad hoc imaging software (VR-RENDER), which provides a digital perfusion cartography that was superimposed onto the intraoperative laparoscopic image [augmented reality (AR) synthesis]. Five regions of interest (ROIs) were marked under AR guidance (1, 2a-2b, 3a-3b corresponding to the ischemic, marginal, and vascularized zones, respectively). One hour later, capillary blood samples were obtained by puncturing the bowel serosa at the identified ROIs, and lactate levels were measured using the EDGE analyzer. A surgical biopsy of each intestinal ROI was sent for mitochondrial respiratory rate assessment and for metabolite quantification. Mean capillary lactate levels were 3.98 (SD = 1.91) versus 1.05 (SD = 0.46) versus 0.74 (SD = 0.34) mmol/L at ROI 1 versus 2a-2b (P = 0.0001) versus 3a-3b (P = 0.0001), respectively. Mean maximal mitochondrial respiratory rate was 104.4 (±21.58) pmol O2/second/mg at ROI 1 versus 191.1 ± 14.48 (2b, P = 0.03) versus 180.4 ± 16.71 (3a, P = 0.02) versus 199.2 ± 25.21 (3b, P = 0.02). Alanine, choline, ethanolamine, glucose, lactate, myoinositol, phosphocholine, scyllo-inositol, and valine showed statistically significantly different concentrations between ischemic and nonischemic segments. Fluorescence-based AR may effectively detect the boundary between the ischemic and the vascularized zones in this experimental model. ER - TY - JOUR AU - Dickey, Ryan M. AU - Srikishen, Neel AU - Lipshultz, Larry I. AU - Spiess, Philippe E. AU - Carrion, Rafael E. AU - Hakky, Tariq S. T1 - Augmented reality assisted surgery: a urologic training tool. JO - Asian journal of andrology Y1 - 2016 VL - 18 SP - 732 EP - 734 KW - Equipment Design; Humans; Internship and Residency; Male; Penile Prosthesis; Penis KW - surgery; Pilot Projects; Urologic Surgical Procedures KW - Male KW - education; User-Computer Interface N1 - 1745-7262 Owner: NLM N2 - Augmented reality is widely used in aeronautics and is a developing concept within surgery. In this pilot study, we developed an application for use on the Google Glass® optical head-mounted display to train urology residents in how to place an inflatable penile prosthesis. We use the phrase Augmented Reality Assisted Surgery to describe this novel application of augmented reality in the setting of surgery. The application demonstrates the steps of the surgical procedure of inflatable penile prosthesis placement. It also contains software that allows for detection of interest points using a camera feed from the optical head-mounted display to enable faculty to interact with residents during placement of the penile prosthesis. Urology trainees and faculty who volunteered to take part in the study were given time to experience the technology in the operative or perioperative setting and asked to complete a feedback survey. From 30 total participants using a 10-point scale, educational usefulness was rated 8.6, ease of navigation was rated 7.6, likelihood to use was rated 7.4, and distraction in the operating room was rated 4.9. When stratified between trainees and faculty, trainees found the technology more educationally useful and less distracting. Overall, 81% of the participants want this technology in their residency program, and 93% see this technology in the operating room in the future. Further development of this technology is warranted before full release, and further studies are necessary to better characterize the effectiveness of Augmented Reality Assisted Surgery in urologic surgical training.
ER - TY - JOUR AU - Diotte, Benoit AU - Fallavollita, Pascal AU - Wang, Lejing AU - Weidert, Simon AU - Euler, Ekkehard AU - Thaller, Peter AU - Navab, Nassir T1 - Multi-modal intra-operative navigation during distal locking of intramedullary nails. JO - IEEE transactions on medical imaging Y1 - 2015 VL - 34 SP - 487 EP - 495 KW - Algorithms; Bone Nails; Fluoroscopy KW - methods; Fracture Fixation KW - Intramedullary KW - methods; Humans; Multimodal Imaging; Phantoms KW - Imaging; Surgery KW - Computer-Assisted KW - education KW - methods N1 - 1558-254X Owner: NLM N2 - The interlocking of intramedullary nails is a technically demanding procedure which involves a considerable number of X-ray acquisitions; one study lists as many as 48 acquisitions to successfully complete the procedure and fix screws into the 4-6 mm distal holes of the nail. We propose to design an augmented radiolucent drill to assist surgeons in completing the distal locking procedure without any additional X-ray acquisitions. Using an augmented reality fluoroscope that coregisters optical and X-ray images, we exploit solely the optical images to detect the augmented radiolucent drill and estimate its tip position in real time. Consequently, the surgeons will be able to maintain the down-the-beam positioning required to drill the screws into the nail holes successfully. To evaluate the accuracy of the proposed augmented drill, we perform a preclinical study involving six surgeons and ask them to perform distal locking on dry bone phantoms. Surgeons completed distal locking 98.3% of the time using only a single X-ray image, with an average navigation time of 1.4 ± 0.9 min per hole. ER - TY - JOUR AU - Dixon, Benjamin J. AU - Chan, Harley AU - Daly, Michael J. AU - Vescan, Allan D. AU - Witterick, Ian J. AU - Irish, Jonathan C. T1 - The effect of augmented real-time image guidance on task workload during endoscopic sinus surgery JO - International Forum of Allergy & Rhinology Y1 - 2012/05 VL - 2 IS - 5 SP - 405 EP - 410 ER - TY - JOUR AU - Dixon, Benjamin J. AU - Daly, Michael J. AU - Chan, Harley AU - Vescan, Allan D. AU - Witterick, Ian J. AU - Irish, Jonathan C. T1 - Surgeons blinded by enhanced navigation: the effect of augmented reality on attention. JO - Surgical endoscopy Y1 - 2013 VL - 27 SP - 454 EP - 461 KW - Attention; Cadaver; Endoscopy KW - standards; Humans; Surgery KW - Computer-Assisted KW - methods; User-Computer Interface N1 - 1432-2218 Owner: NLM N2 - Advanced image-guidance systems allowing presentation of three-dimensional navigational data in real time are being developed enthusiastically for many medical procedures. Other industries, including aviation and the military, have noted that shifting attention toward such compelling assistance has detrimental effects. Using the detection rate of unexpected findings, we assess whether inattentional blindness is significant in a surgical context and evaluate the impact of on-screen navigational cuing with augmented reality (AR). Surgeons and trainees performed an endoscopic navigation exercise on a cadaveric specimen. The subjects were randomized to either a standard endoscopic view (control) or an AR view consisting of endoscopic video fused with anatomic contours. Two unexpected findings were presented in close proximity to the target point: one critical complication and one foreign body (screw). Task completion time, accuracy, and recognition of findings were recorded. Detection of the complication was 0/15 in the AR group versus 7/17 in the control group (p = 0.008).
Detection of the screw was 1/15 (AR) and 7/17 (control) (p = 0.041). Recognition of either finding was 12/17 for the control group and 1/15 for the AR group (p < 0.001). Accuracy was greater for the AR group than for the control group, with the median distance from the target point measuring 2.10 mm (interquartile range [IQR], 1.29-2.37) and 4.13 mm (IQR, 3.11-7.39), respectively (p < 0.001). Inattentional blindness was evident in both groups. Although more accurate, the AR group was less likely to identify significant unexpected findings clearly within view. Advanced navigational displays may increase precision, but strategies to mitigate attentional costs need further investigation to allow safe implementation. ER - TY - JOUR AU - Dixon, Benjamin J. AU - Daly, Michael J. AU - Chan, Harley H. L. AU - Vescan, Allan AU - Witterick, Ian J. AU - Irish, Jonathan C. T1 - Inattentional blindness increased with augmented reality surgical navigation. JO - American journal of rhinology & allergy Y1 - 2014 VL - 28 SP - 433 EP - 437 KW - Endoscopy; Humans; Otolaryngology KW - education; Surgery KW - Computer-Assisted KW - education; Task Performance and Analysis; Tomography KW - X-Ray Computed N1 - 1945-8932 Owner: NLM N2 - Augmented reality (AR) surgical navigation systems, designed to increase accuracy and efficiency, have been shown to negatively impact attention. We wished to assess the effect "head-up" AR displays have on attention, efficiency, and accuracy while performing a surgical task, compared with the same information being presented on a submonitor (SM). Fifty subjects, experienced otolaryngology surgeons (n = 42) and senior otolaryngology trainees (n = 8), performed an endoscopic surgical navigation exercise on a predissected cadaveric model. Computed tomography-generated anatomic contours were fused with the endoscopic image to provide an AR view. Subjects were randomized to perform the task with a standard endoscopic monitor with the AR navigation displayed on an SM or with AR as a single display. Accuracy, task completion time, and the recognition of unexpected findings (a foreign body and a critical complication) were recorded. Recognition of the foreign body was significantly better in the SM group (15/25 [60%]) compared with the AR-alone group (8/25 [32%]; p = 0.02). There was no significant difference in task completion time (p = 0.83) or accuracy (p = 0.78) between the two groups. Providing identical surgical navigation on an SM, rather than on a single head-up display, reduced the level of inattentional blindness as measured by detection of unexpected findings. These gains were achieved without any measurable impact on efficiency or accuracy. AR displays may distract the user, and we caution against injudicious adoption of this technology for medical procedures. ER - TY - JOUR AU - Domhardt, Michael AU - Tiefengrabner, Martin AU - Dinic, Radomir AU - Fötschl, Ulrike AU - Oostingh, Gertie J. AU - Stütz, Thomas AU - Stechemesser, Lars AU - Weitgasser, Raimund AU - Ginzinger, Simon W. T1 - Training of carbohydrate estimation for people with diabetes using mobile augmented reality.
JO - Journal of diabetes science and technology Y1 - 2015/05 VL - 9 SP - 516 EP - 524 KW - Adolescent; Adult; Aged; Blood Glucose; Cell Phone; Diabetes Mellitus KW - Type 1 KW - diet therapy; Diet KW - Diabetic; Dietary Carbohydrates; Eating; Female; Humans; Male; Middle Aged; Mobile Applications; Pilot Projects; Reproducibility of Results; Treatment Outcome; User-Computer Interface; Visual Perception; Young Adult; augmented reality; carbohydrate counting; diabetes education; mHealth N1 - 1932-2968 Owner: NLM N2 - Imprecise carbohydrate counting as a measure to guide the treatment of diabetes may be a source of errors resulting in problems in glycemic control. Exact measurements can be tedious, leading most patients to estimate their carbohydrate intake. In the presented pilot study, a smartphone application (BE(AR)) that guides the estimation of carbohydrate amounts was used by a group of diabetic patients. Eight adult patients with diabetes mellitus type 1 were recruited for the study. At the beginning of the study, patients were introduced to BE(AR) in sessions lasting 45 minutes per patient. Using BE(AR), patients redraw the real food in 3D on the smartphone screen. Based on a selected food type and the 3D form created using BE(AR), an estimation of carbohydrate content is calculated. Patients were supplied with the application on their personal smartphone or a loaner device and were instructed to use the application in a real-world context during the study period. For evaluation purposes, a test measuring carbohydrate estimation quality was designed and performed at the beginning and the end of the study. In 44% of the estimations performed at the end of the study, the error was reduced by at least 6 grams of carbohydrate. This improvement occurred although several problems with the usage of BE(AR) were reported. Despite user interaction problems in this group of patients, the intervention resulted in a reduction in the absolute error of carbohydrate estimation. Intervention with smartphone applications to assist carbohydrate counting apparently results in more accurate estimations. ER - TY - JOUR AU - Douglas, David B. AU - Boone, John M. AU - Petricoin, Emanuel AU - Liotta, Lance AU - Wilson, Eugene T1 - Augmented Reality Imaging System: 3D Viewing of a Breast Cancer. JO - Journal of nature and science Y1 - 2016 VL - 2 N1 - 2377-2700 Owner: NLM N2 - To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, the use of a 3D cursor, and a joystick-enabled fly-through with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice. ER - TY - JOUR AU - Douglas, David B. AU - Petricoin, Emanuel F. AU - Liotta, Lance AU - Wilson, Eugene T1 - D3D augmented reality imaging system: proof of concept in mammography. JO - Medical devices (Auckland, N.Z.)
Y1 - 2016 VL - 9 SP - 277 EP - 283 KW - 3D medical imaging; augmented reality; depth perception; radiology N1 - 1179-1470 Owner: NLM N2 - The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. ER - TY - JOUR AU - Draga, Ronald O. P. AU - Noordmans, Herke Jan AU - Lock, Tycho M. T. W. AU - Jaspers, Joris AU - van Rhijn, Arjen AU - Bosch, J. L. H. Ruud T1 - The Feasibility of Navigation-Assisted Mapping of Bladder Tumors During Transurethral Resection JO - UroToday International Journal Y1 - 2013 VL - 06 IS - 03 ER - TY - JOUR AU - Drouin, Simon AU - Kochanowska, Anna AU - Kersten-Oertel, Marta AU - Gerard, Ian J. AU - Zelmann, Rina AU - De Nigris, Dante AU - Bériault, Silvain AU - Arbel, Tal AU - Sirhan, Denis AU - Sadikot, Abbas F. AU - Hall, Jeffery A. AU - Sinclair, David S. AU - Petrecca, Kevin AU - DelMaestro, Rolando F. AU - Collins, D. Louis T1 - IBIS: an OR ready open-source platform for image-guided neurosurgery. JO - International journal of computer assisted radiology and surgery Y1 - 2017 VL - 12 SP - 363 EP - 378 KW - Brain KW - diagnostic imaging KW - surgery; Brain Neoplasms KW - surgery; Deep Brain Stimulation; Humans; Microsurgery; Neuronavigation KW - methods; Neurosurgical Procedures KW - methods; Operating Rooms; Prosthesis Implantation; Surgery KW - Computer-Assisted KW - methods; Ultrasonography; User-Computer Interface; Vascular Surgical Procedures KW - methods; Workflow; Augmented reality; Brain shift correction; Image-guided surgery; Ultrasound N1 - 1861-6429 Owner: NLM N2 - Navigation systems commonly used in neurosurgery suffer from two main drawbacks: (1) their accuracy degrades over the course of the operation and (2) they require the surgeon to mentally map images from the monitor to the patient. In this paper, we introduce the Intraoperative Brain Imaging System (IBIS), an open-source image-guided neurosurgery research platform that implements a novel workflow where navigation accuracy is improved using tracked intraoperative ultrasound (iUS) and the visualization of navigation information is facilitated through the use of augmented reality (AR). The IBIS platform allows a surgeon to capture tracked iUS images and use them to automatically update preoperative patient models and plans through fast GPU-based reconstruction and registration methods. Navigation, resection and iUS-based brain shift correction can all be performed using an AR view. IBIS has an intuitive graphical user interface for the calibration of a US probe, a surgical pointer as well as video devices used for AR (e.g., a surgical microscope). The components of IBIS have been validated in the laboratory and evaluated in the operating room. 
Image-to-patient registration accuracy is on the order of [Formula: see text] and can be improved with iUS to a median target registration error of 2.54 mm. The accuracy of the US probe calibration is between 0.49 and 0.82 mm. The average reprojection error of the AR system is [Formula: see text]. The system has been used in the operating room for various types of surgery, including brain tumor resection, vascular neurosurgery, spine surgery and DBS electrode implantation. The IBIS platform is a validated system that allows researchers to quickly bring the results of their work into the operating room for evaluation. It is the first open-source navigation system to provide a complete solution for AR visualization. ER - TY - JOUR AU - Duménil, A. AU - Kaladji, A. AU - Castro, M. AU - Göksu, C. AU - Lucas, A. AU - Haigron, P. T1 - A versatile intensity-based 3D/2D rigid registration compatible with mobile C-arm for endovascular treatment of abdominal aortic aneurysm. JO - International journal of computer assisted radiology and surgery Y1 - 2016 VL - 11 SP - 1713 EP - 1729 KW - Aortic Aneurysm KW - Abdominal KW - diagnosis KW - surgery; Endovascular Procedures KW - methods; Fluoroscopy KW - methods; Humans; Imaging KW - Three-Dimensional KW - methods; Surgery KW - Computer-Assisted KW - methods; Tomography KW - X-Ray Computed KW - methods; 3D/2D registration; Augmented reality; Computer-assisted surgery; Endovascular aneurysm repair N1 - 1861-6429 Owner: NLM N2 - Augmented reality-assisted surgery requires prior registration between preoperative and intraoperative data. In the context of endovascular aneurysm repair (EVAR) of abdominal aortic aneurysm, no satisfactory solution exists at present for clinical use, in particular for use with a mobile C-arm. The difficulties stem in particular from the diversity of intraoperative images, table movements and changes of C-arm pose. We propose a fast and versatile 3D/2D registration method compatible with a mobile C-arm that can be easily repeated during an EVAR procedure. Applicable to both vascular and bone structures, our approach is based on optimization by reduced exhaustive search involving a multi-resolution scheme and a decomposition of the transformation to reduce calculation time. Registration was performed between the preoperative CT scan and fluoroscopic images for a group of 26 patients in order to test our method under real conditions of use. The evaluation was completed by also performing registration between an intraoperative CBCT volume and fluoroscopic images for a group of 6 patients to compare registration results with reference transformations. The experimental results show that our approach achieves accuracy on the order of 0.5 mm, a computation time of [Formula: see text] and a higher rate of success in comparison with a classical optimization method. When integrated into an augmented reality navigation system, our approach shows that it is compatible with the clinical workflow. We presented a versatile 3D/2D rigid registration method applicable to all intraoperative scenes and usable to guide an EVAR procedure by augmented reality. ER - TY - JOUR AU - Edgcumbe, Philip AU - Pratt, Philip AU - Yang, Guang-Zhong AU - Nguan, Christopher AU - Rohling, Robert T1 - Pico Lantern: Surface reconstruction and augmented reality in laparoscopic surgery using a pick-up laser projector.
JO - Medical image analysis Y1 - 2015 VL - 25 SP - 95 EP - 102 KW - Animals; Equipment Design; Image Enhancement KW - instrumentation; Kidney KW - blood supply; Laparoscopes; Laparoscopy KW - instrumentation; Lasers; Lighting KW - instrumentation; Phantoms KW - Imaging; Reproducibility of Results; Sensitivity and Specificity; Surgery KW - Computer-Assisted KW - instrumentation; Swine; User-Computer Interface; Augmented reality; Laparoscopic surgery; Pico Lantern; Pico projector N1 - 1361-8423 Owner: NLM N2 - The Pico Lantern is a miniature projector developed for structured light surface reconstruction, augmented reality and guidance in laparoscopic surgery. During surgery it will be dropped into the patient and picked up by a laparoscopic tool. While inside the patient it projects a known coded pattern and images onto the surface of the tissue. The Pico Lantern is visually tracked in the laparoscope's field of view for the purpose of stereo triangulation between it and the laparoscope. In this paper, the first application is surface reconstruction. Using a stereo laparoscope and an untracked Pico Lantern, the absolute error for surface reconstruction of a plane, cylinder and ex vivo kidney is 2.0 mm, 3.0 mm and 5.6 mm, respectively. Using a mono laparoscope and a tracked Pico Lantern for the same plane, cylinder and kidney, the absolute error is 1.4 mm, 1.5 mm and 1.5 mm, respectively. These results confirm the benefit of the wider baseline produced by tracking the Pico Lantern. Virtual viewpoint images are generated from the kidney surface data and an in vivo proof-of-concept porcine trial is reported. Surface reconstruction of the neck of a volunteer shows that the pulsatile motion of the tissue overlying a major blood vessel can be detected and displayed in vivo. Future work will integrate the Pico Lantern into standard and robot-assisted laparoscopic surgery. ER - TY - JOUR AU - Eftekhar, Behzad T1 - A Smartphone App to Assist Scalp Localization of Superficial Supratentorial Lesions--Technical Note. JO - World neurosurgery Y1 - 2016 VL - 85 SP - 359 EP - 363 KW - Adult; Aged; Brain Neoplasms KW - diagnosis KW - diagnostic imaging KW - pathology KW - surgery; Cerebral Cortex KW - pathology; Female; Head; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Mobile Applications KW - statistics & numerical data KW - trends; Neuronavigation KW - methods; Neurosurgical Procedures KW - methods; Scalp; Smartphone; Tomography KW - X-Ray Computed; Augmented reality; Image-guided surgery; Mobile applications; Neuronavigation N1 - 1878-8769 Owner: NLM N2 - Neuronavigation is an established technology in neurosurgery. In parts of the world and in certain circumstances in which neuronavigation is not easily available or affordable, alternative techniques may be considered. An app to assist scalp localization of superficial supratentorial lesions has been introduced, and its accuracy has been compared with established neuronavigation systems. Sina is a simple smartphone app that overlays the patient's transparent computed tomography/magnetic resonance images on the background camera view. How to use Sina intraoperatively is described. The app was used for scalp localization of the center of the lesion in 11 patients with supratentorial pathologies <3 cm in longest diameter and <2 cm from the cortex. After localization of the lesion using Sina, the center of the lesion was marked on the scalp using standard neuronavigation systems and the deviations were measured.
Implementation of Sina for intraoperative scalp localization is simple and practical. The center of the lesions localized by Sina deviated by 10.2 ± 2 mm from that determined by standard neuronavigation systems. When neuronavigation is not easily available or affordable, Sina can be helpful for scalp localization and preoperative planning of the incision for selected supratentorial pathologies. ER - TY - JOUR AU - Eftekhar, Behzad T1 - App-assisted external ventricular drain insertion. JO - Journal of neurosurgery Y1 - 2016 VL - 125 SP - 754 EP - 758 KW - Catheters; Cerebral Ventricles KW - surgery; Drainage KW - instrumentation KW - methods; Humans; Mobile Applications; Neuronavigation KW - methods; Neurosurgical Procedures KW - methods; Surgery KW - Computer-Assisted; Android app; EVD = external ventricular drain; OR = operating room; app = mobile device application; augmented reality; diagnostic and operative techniques; external ventricular drain; image-guided surgery; neuronavigation N1 - 1933-0693 Owner: NLM N2 - The freehand technique for insertion of an external ventricular drain (EVD) is based on fixed anatomical landmarks and does not take individual variations into consideration. A patient-tailored approach based on augmented-reality techniques using devices such as smartphones can address this shortcoming. The Sina neurosurgical assist (Sina) is an Android mobile device application (app) that was designed and developed to be used as a simple intraoperative neurosurgical planning aid. It overlays the patient's images from previously performed CT or MRI studies on the image seen through the device camera. The device is held by an assistant who aligns the images and provides information about the relative position of the target and EVD to the surgeon performing the EVD insertion. This app can be used to provide guidance and continuous monitoring during EVD placement. The author describes the technique of Sina-assisted EVD insertion into the frontal horn of the lateral ventricle and reports on its clinical application in 5 cases as well as the results of ex vivo studies of ease of use and precision. The technique has potential for further development and use with other augmented-reality devices. ER - TY - JOUR AU - Elmi-Terander, Adrian AU - Skulason, Halldor AU - Söderman, Michael AU - Racadio, John AU - Homan, Robert AU - Babic, Drazenko AU - van der Vaart, Nijs AU - Nachabe, Rami T1 - Surgical Navigation Technology Based on Augmented Reality and Integrated 3D Intraoperative Imaging: A Spine Cadaveric Feasibility and Accuracy Study. JO - Spine Y1 - 2016 VL - 41 SP - E1303 EP - E1311 KW - Feasibility Studies; Humans; Imaging KW - Three-Dimensional KW - methods; Lumbar Vertebrae KW - surgery; Orthopedic Procedures KW - methods; Pedicle Screws; Surgery KW - Computer-Assisted KW - methods; Thoracic Vertebrae KW - surgery N1 - 1528-1159 Owner: NLM N2 - A cadaveric laboratory study. The aim of this study was to assess the feasibility and accuracy of thoracic pedicle screw placement using augmented reality surgical navigation (ARSN). Recent advances in spinal navigation have shown improved accuracy in lumbosacral pedicle screw placement but limited benefits in the thoracic spine. 3D intraoperative imaging and instrument navigation may allow improved accuracy in pedicle screw placement, without the use of x-ray fluoroscopy, and thus open the route to image-guided minimally invasive therapy in the thoracic spine.
ARSN encompasses a surgical table, a motorized flat detector C-arm with intraoperative 2D/3D capabilities, integrated optical cameras for augmented reality navigation, and noninvasive patient motion tracking. Two neurosurgeons placed 94 pedicle screws in the thoracic spine of four cadavers using ARSN on one side of the spine (47 screws) and the free-hand technique on the contralateral side. X-ray fluoroscopy was not used for either technique. Four independent reviewers assessed the postoperative scans using the Gertzbein grading scale. Morphometric measurements of the pedicles' axial and sagittal widths and angles, as well as the vertebrae's axial and sagittal rotations, were performed to identify risk factors for breaches. ARSN was feasible and superior to the free-hand technique with respect to overall accuracy (85% vs. 64%, P < 0.05), specifically a significant increase in perfectly placed screws (51% vs. 30%, P < 0.05) and a reduction in breaches beyond 4 mm (2% vs. 25%, P < 0.05). All morphometric dimensions, except for vertebral body axial rotation, were risk factors for larger breaches when screws were placed with the free-hand method. ARSN without fluoroscopy was feasible and demonstrated higher accuracy than the free-hand technique for thoracic pedicle screw placement. ER - TY - JOUR AU - Engelen, Thijs AU - Winkel, Beatrice Mf AU - Rietbergen, Daphne Dd AU - KleinJan, Gijs H. AU - Vidal-Sicart, Sergi AU - Olmos, Renato A. Valdés AU - van den Berg, Nynke S. AU - van Leeuwen, Fijs Wb T1 - The next evolution in radioguided surgery: breast cancer related sentinel node localization using a freehandSPECT-mobile gamma camera combination. JO - American journal of nuclear medicine and molecular imaging Y1 - 2015 VL - 5 SP - 233 EP - 245 KW - Sentinel node; breast cancer; freehandSPECT; mobile gamma camera; navigation; nuclear medicine; radioguided surgery N1 - 2160-8407 Owner: NLM N2 - Accurate pre- and intraoperative identification of the sentinel node (SN) forms the basis of the SN biopsy procedure. Gamma tracing technologies such as a gamma probe (GP), a 2D mobile gamma camera (MGC) or 3D freehandSPECT (FHS) can be used to provide the surgeon with radioguidance to the SN(s). We reasoned that integrated use of these technologies results in the generation of a "hybrid" modality that combines the best that the individual radioguidance technologies have to offer. The sensitivity and resolvability of both 2D-MGC and 3D-FHS-MGC were studied in a phantom setup (at various source-detector depths and using varying injection site-to-SN distances), and in ten breast cancer patients scheduled for SN biopsy. Acquired 3D-FHS-MGC images were overlaid with the position of the phantom/patient. This augmented-reality overview image was then used for navigation to the hotspot/SN in virtual reality using the GP. Obtained results were compared to conventional gamma camera lymphoscintigrams. The resolution of 3D-FHS-MGC allowed identification of the SNs at a minimum injection site (100 MBq)-to-node (1 MBq; 1%) distance of 20 mm, up to a source-detector depth of 36 mm in 2D-MGC and up to 24 mm in 3D-FHS-MGC. A clinically relevant dose of approximately 1 MBq was clearly detectable up to a depth of 60 mm in 2D-MGC and 48 mm in 3D-FHS-MGC. In all ten patients at least one SN was visualized on the lymphoscintigrams, with a total of 12 SNs visualized.
3D-FHS-MGC identified 11 of 12 SNs and allowed navigation to all these visualized SNs; in one patient with two axillary SNs located close to each other (11 mm), 3D-FHS-MGC was not able to distinguish the two SNs. In conclusion, high-sensitivity detection of SNs at injection site-to-node distances of 20 mm and greater was possible using 3D-FHS-MGC. In patients, 3D-FHS-MGC showed highly reproducible images as compared to the conventional lymphoscintigrams. ER - TY - CONF AU - Engelhardt, Sandy AU - Kolb, Silvio AU - Simone, Raffaele De AU - Karck, Matthias AU - Meinzer, Hans-Peter AU - Wolf, Ivo AU - Webster, Robert J. AU - Yaniv, Ziv R. T1 - Endoscopic feature tracking for augmented-reality assisted prosthesis selection in mitral valve repair PB - SPIE Y1 - 2016/march ER - TY - JOUR AU - Espejo-Trung, Luciana Cardoso AU - Elian, Silvia Nagib AU - Luz, Maria Aparecia Alves De Cerqueira T1 - Development and Application of a New Learning Object for Teaching Operative Dentistry Using Augmented Reality. JO - Journal of dental education Y1 - 2015 VL - 79 SP - 1356 EP - 1362 KW - Brazil; Computer Literacy; Computer Systems; Computer-Aided Design; Computer-Assisted Instruction; Dental Models; Dentistry KW - Operative KW - education; Dentists; Education KW - Dental; Education KW - Dental KW - Continuing; Education KW - Graduate; Educational Technology; Faculty KW - Dental; Female; Gold Alloys KW - chemistry; Humans; Imaging KW - Three-Dimensional; Inlays; Learning; Male; Program Development; Prosthodontics KW - education; Students KW - Dental; Teaching KW - methods; Tooth Preparation KW - Prosthodontic; User-Computer Interface; computer-assisted instruction; continuing education; dental education; operative dentistry N1 - 1930-7837 Owner: NLM N2 - Learning objects (LOs) associated with augmented reality have been used as attractive new technologic tools in the educational process. However, the acceptance of new LOs must be verified if these innovations are to be used in the learning process in general. The aim of this study was to develop a new LO and investigate its acceptance for teaching gold onlay preparation design at a dental school in Brazil. Questionnaires were designed to assess, first, the users' computational ability and knowledge of computers (Q1) and, second, the users' acceptance of the new LO (Q2). For both questionnaires, the internal consistency index was calculated to determine whether the questions were measuring the same construct. The reliability of Q2 was measured with a retest procedure. The LO was tested by dental students (n=28), professors and postgraduate students in dentistry and prosthetics (n=30), and dentists participating in a continuing education or remedial course in dentistry and/or prosthetics (n=19). Analyses of internal consistency (Kappa coefficient and Cronbach's alpha) demonstrated a high degree of confidence in the questionnaires. Simple linear regression tests were conducted between the response variable (Q2) and the following explanatory variables: the Q1 score, age, gender, and group. The results showed wide acceptance regardless of the subjects' computational ability (p=0.99; R2=0), gender (p=0.27; R2=1.6%), age (p=0.27; R2=0.1%), or group (p=0.53; R2=1.9%). The methodology used enabled the development of an LO with a high index of acceptance in all groups.
ER - TY - JOUR AU - Fallavollita, Pascal AU - Brand, Alexander AU - Wang, Lejing AU - Euler, Ekkehard AU - Thaller, Peter AU - Navab, Nassir AU - Weidert, Simon T1 - An augmented reality C-arm for intraoperative assessment of the mechanical axis: a preclinical study. JO - International journal of computer assisted radiology and surgery Y1 - 2016 VL - 11 SP - 2111 EP - 2117 KW - Aged; Aged KW - 80 and over; Cadaver; Female; Humans; Imaging KW - Three-Dimensional; Knee Joint KW - diagnostic imaging KW - surgery; Male; Monitoring KW - Intraoperative; Orthopedic Procedures; Tibia KW - surgery; Tomography KW - X-Ray Computed; Augmented reality; C-arm fluoroscopy; Intraoperative navigation; Lower limb alignment; Mechanical axis deviation; Tibial osteotomy N1 - 1861-6429 Owner: NLM N2 - Determination of lower limb alignment is a prerequisite for successful orthopedic surgical treatment. Traditional methods include the electrocautery cord, alignment rod, or axis board, which rely solely on C-arm fluoroscopy navigation and are radiation intensive. To assess a new augmented reality technology in determining lower limb alignment. A camera-augmented mobile C-arm (CamC) technology was used to create a panorama image consisting of hip, knee, and ankle X-rays. Twenty-five human cadaver legs with random varus or valgus deformations were used for validation. Five clinicians performed experiments that consisted of achieving an acceptable mechanical axis deviation. The applicability of the CamC technology was assessed by direct comparison to ground-truth CT. A t test, Pearson's correlation, and ANOVA were used to determine statistical significance. The value of Pearson's correlation coefficient R was 0.979, which demonstrates a strong positive correlation between the CamC and ground-truth CT data. The analysis of variance produced a p value equal to 0.911, signifying that clinician expertise differences were not significant with regard to the type of system used to assess mechanical axis deviation. All described measurements demonstrated valid assessment of lower limb alignment. With minimal effort, clinicians required only 3 X-ray image acquisitions using the augmented reality technology to achieve reliable mechanical axis deviation measurements. ER - TY - JOUR AU - Ferrari, Vincenzo AU - Viglialoro, Rosanna Maria AU - Nicoli, Paola AU - Cutolo, Fabrizio AU - Condino, Sara AU - Carbone, Marina AU - Siesto, Mentore AU - Ferrari, Mauro T1 - Augmented reality visualization of deformable tubular structures for surgical simulation. JO - The international journal of medical robotics + computer assisted surgery : MRCAS Y1 - 2016 VL - 12 SP - 231 EP - 240 KW - Calibration; Cholecystectomy KW - methods; Electromagnetic Radiation; General Surgery KW - education KW - methods; Humans; Laparoscopy KW - instrumentation KW - methods; Reproducibility of Results; Simulation Training; Software; Surgery KW - Computer-Assisted KW - methods; User-Computer Interface; augmented reality; soft tissue; surgical simulation N1 - 1478-596X Owner: NLM N2 - Surgical simulation based on augmented reality (AR), mixing the benefits of physical and virtual simulation, represents a step forward in surgical training. However, available systems are unable to update the virtual anatomy following deformations imposed on the actual anatomy. A proof-of-concept solution is described providing AR visualization of hidden deformable tubular structures using nitinol tubes sensorized with electromagnetic sensors.
This system was tested in vitro on a setup comprising sensorized cystic, left and right hepatic, and proper hepatic arteries. In the trial session, the surgeon deformed the tubular structures with surgical forceps in 10 positions. The mean, standard deviation, and maximum misalignment between virtual and real arteries were 0.35, 0.22, and 0.99 mm, respectively. The alignment accuracy obtained demonstrates the feasibility of the approach, which can be adopted in advanced AR simulations, in particular as an aid to the identification and isolation of tubular structures. Copyright © 2015 John Wiley & Sons, Ltd. ER - TY - JOUR AU - Ferrer-Torregrosa, Javier AU - Jiménez-Rodríguez, Miguel Ángel AU - Torralba-Estelles, Javier AU - Garzón-Farinós, Fernanda AU - Pérez-Bermejo, Marcelo AU - Fernández-Ehrling, Nadia T1 - Distance learning ects and flipped classroom in the anatomy learning: comparative study of the use of augmented reality, video and notes. JO - BMC medical education Y1 - 2016 VL - 16 SP - 230 EP - 230 KW - Anatomy KW - education; Attitude of Health Personnel; Computer-Assisted Instruction; Education KW - Distance KW - organization & administration KW - standards; Education KW - Graduate KW - methods KW - organization & administration; Educational Measurement; Humans; Learning; Medical Writing; Models KW - Educational; Problem-Based Learning KW - methods; Program Evaluation; Spain; User-Computer Interface; Video Recording; Anatomy; Augmented reality; Autonomous learning; ECTS; Flipped classroom; Metacognition N1 - 1472-6920 Owner: NLM N2 - The establishment of the ECTS (European Credit Transfer System) is one of the pillars of the European Higher Education Area. This way of accounting for the time spent in training has two essential parts: classroom teaching (work with the professor) and distance learning (work without the professor, whether individual or collective). Much has been published on the distance learning part, but less on the classroom teaching part. In this work, the authors investigate didactic strategies and associated aids for distance learning within a flipped-classroom concept: information is transmitted through aids that the professor prepares, so that students work independently before class and classroom time can be dedicated to more complex learning with the professor's help. Three teaching aids applied to the study of anatomy have been compared: notes with images, videos, and augmented reality. Four dimensions have been compared: the time spent, the knowledge acquired, the metacognitive perception, and the prospects of using augmented reality for study. The results show the effectiveness, in all aspects, of augmented reality compared with the other aids. Acquired knowledge was assessed through a course exam, in which the notes group obtained 5.60 points, the video group 6.54, and the augmented reality group 7.19. That is 0.94 more points for the video group and 1.59 more points for the augmented reality group, each compared with the notes group. This research demonstrates that, although this technology has not yet been sufficiently developed for education, it is expected to improve both the autonomous work of students and the academic training of health science students, and to help teach how to learn.
Moreover, the grades of the students who studied with augmented reality are more tightly clustered, with less dispersion in the marks than with the other materials. ER - TY - JOUR AU - Finger, T. AU - Schaumann, A. AU - Schulz, M. AU - Thomale, Ulrich-W. T1 - Augmented reality in intraventricular neuroendoscopy. JO - Acta neurochirurgica Y1 - 2017 VL - 159 SP - 1033 EP - 1041 KW - Adolescent; Adult; Aged; Brain Neoplasms KW - surgery; Child; Child KW - Preschool; Female; Humans; Hydrocephalus KW - surgery; Infant; Male; Middle Aged; Neuroendoscopy KW - adverse effects KW - instrumentation KW - methods; Postoperative Complications KW - epidemiology; Reoperation KW - statistics & numerical data; Ventriculostomy KW - methods; Augmented reality; Brain tumors; Hydrocephalus; Neuroendoscopy; Neuronavigation N1 - 0942-0940 Owner: NLM N2 - Individual planning of the entry point and the use of navigation have become more relevant in intraventricular neuroendoscopy. Navigated neuroendoscopic solutions are continuously improving. We describe experimentally measured accuracy and our first experience with augmented reality-enhanced navigated neuroendoscopy for intraventricular pathologies. Augmented reality-enhanced navigated endoscopy was tested for accuracy in an experimental setting. For this purpose, a 3D-printed head model with a right parietal lesion was scanned with thin-slice computed tomography. Segmentation of the tumor lesion was performed using Scopis NovaPlan navigation software. An optical reference matrix is used to register the neuroendoscope's geometry and its field of view. The pre-planned ROI and trajectory are superimposed on the endoscopic image. The accuracy of the superimposed contour's fit to the endoscopically visualized lesion was determined by measuring the deviation of the two midpoints from one another. The technique was subsequently used in 29 cases with CSF circulation pathologies. Navigation planning included defining the entry points, regions of interest, and trajectories, which were superimposed as augmented reality on the endoscopic video screen during the intervention. Patients were evaluated for postoperative imaging, reoperations, and possible complications. The experimental setup revealed a deviation of the ROI's midpoint from the real target by 1.2 ± 0.4 mm. The clinical study included 18 cyst fenestrations, ten biopsies, seven endoscopic third ventriculostomies, six stent placements, and two shunt implantations, with some procedures combined in individual patients. Postoperatively, in cases of cyst fenestration the cyst volume was significantly reduced in all patients, by a mean of 47%. In biopsies, the diagnostic yield was 100%. Reoperations during a follow-up period of 11.4 ± 10.2 months were necessary in two cases. Complications included one postoperative hygroma and one insufficient fenestration. Augmented reality-navigated neuroendoscopy is accurate and feasible in clinical application. By integrating relevant planning information directly into the endoscope's field of view, safety and efficacy of intraventricular neuroendoscopic surgery may be improved. ER - TY - JOUR AU - Fischer, Marius AU - Fuerst, Bernhard AU - Lee, Sing Chun AU - Fotouhi, Javad AU - Habert, Severine AU - Weidert, Simon AU - Euler, Ekkehard AU - Osgood, Greg AU - Navab, Nassir T1 - Preclinical usability study of multiple augmented reality concepts for K-wire placement.
JO - International journal of computer assisted radiology and surgery Y1 - 2016 VL - 11 SP - 1007 EP - 1014 KW - Bone Wires; Fracture Fixation KW - Internal KW - methods; Fractures KW - Bone KW - diagnosis KW - surgery; Humans; Imaging KW - Three-Dimensional KW - methods; Pelvic Bones KW - diagnostic imaging KW - injuries KW - surgery; Phantoms KW - Imaging; Radiography KW - Interventional KW - methods; Tomography KW - X-Ray Computed KW - methods; Interventional imaging; Orthopedic and Trauma surgery; Usability study N1 - 1861-6429 Owner: NLM N2 - In many orthopedic surgeries, there is a demand for correctly placing medical instruments (e.g., K-wire or drill) to perform bone fracture repairs. The main challenge is the mental alignment of X-ray images acquired using a C-arm, the medical instruments, and the patient, which dramatically increases in complexity during pelvic surgeries. Current solutions include the continuous acquisition of many intra-operative X-ray images from various views, which results in high radiation exposure, long surgical durations, and significant effort and frustration for the surgical staff. This work conducts a preclinical usability study to test and evaluate mixed reality visualization techniques using intra-operative X-ray, optical, and RGBD imaging to augment the surgeon's view to assist accurate placement of tools. We design and perform a usability study to compare the performance of surgeons and their task load using three different mixed reality systems during K-wire placements. The three systems are interventional X-ray imaging, X-ray augmentation on 2D video, and 3D surface reconstruction augmented by digitally reconstructed radiographs and live tool visualization. The evaluation criteria include duration, number of X-ray images acquired, placement accuracy, and the surgical task load, which are observed during 21 clinically relevant interventions performed by surgeons on phantoms. Finally, we test for statistically significant improvements and show that the mixed reality visualization leads to significantly improved efficiency. The 3D visualization of patient, tool, and DRR shows clear advantages over conventional X-ray imaging and provides intuitive feedback to place the medical tools correctly and efficiently. ER - TY - CONF AU - Foo, Edwin AU - Rajaratnam, Bala S. AU - others T1 - Semi portable rehabilitation system for upper limb disability Y1 - 2012 SP - 4 EP - 4 ER - TY - JOUR AU - Fritz, Jan AU - U-Thainual, Paweena AU - Ungi, Tamas AU - Flammang, Aaron J. AU - Fichtinger, Gabor AU - Iordachita, Iulian I. AU - Carrino, John A. T1 - Augmented reality visualisation using an image overlay system for MR-guided interventions: technical performance of spine injection procedures in human cadavers at 1.5 Tesla. JO - European radiology Y1 - 2013 VL - 23 SP - 235 EP - 245 KW - Aged; Aged KW - 80 and over; Cadaver; Contrast Media; Female; Gadolinium DTPA; Humans; Image Enhancement KW - methods; Image Processing KW - Computer-Assisted KW - methods; Injections KW - Spinal KW - methods; Magnetic Resonance Imaging KW - Interventional KW - methods; Male; Middle Aged; Needles; Prospective Studies; Software N1 - 1432-1084 Owner: NLM N2 - To prospectively assess the technical performance of an augmented reality system for MR-guided spinal injection procedures. The augmented reality system was used with a clinical 1.5-T MRI system.
A total of 187 lumbosacral spinal injection procedures (epidural injection, spinal nerve root injection, facet joint injection, medial branch block, discography) were performed in 12 human cadavers. Needle paths were planned with the Perk Station module of 3D Slicer software on high-resolution MR images. Needles were placed under augmented reality MRI navigation. MRI was used to confirm needle locations. T1-weighted fat-suppressed MRI was used to visualise the injectant. Outcome variables assessed were needle adjustment rate, inadvertent puncture of non-targeted structures, successful injection rate and procedure time. Needle access was achieved in 176/187 (94.1 %) targets, whereas 11/187 (5.9 %) were inaccessible. Six of 11 (54.5 %) L5-S1 disks were inaccessible because of an axial obliquity of 30° (27°-34°); 5/11 (45.5 %) facet joints were inaccessible because of osteoarthritis or fusion. All accessible targets (176/187, 94.1 %) were successfully injected, requiring 47/176 (26.7 %) needle adjustments. There were no inadvertent punctures of vulnerable structures. Median procedure time was 10.2 min (5-19 min). Image overlay-navigated MR-guided spinal injections were technically accurate. Disks with an obliquity ≥27° may be inaccessible. ER - TY - JOUR AU - Fritz, Jan AU - U-Thainual, Paweena AU - Ungi, Tamas AU - Flammang, Aaron J. AU - Fichtinger, Gabor AU - Iordachita, Iulian I. AU - Carrino, John A. T1 - Augmented reality visualization with use of image overlay technology for MR imaging-guided interventions: assessment of performance in cadaveric shoulder and hip arthrography at 1.5 T. JO - Radiology Y1 - 2012 VL - 265 SP - 254 EP - 259 KW - Aged; Aged KW - 80 and over; Cadaver; Contrast Media KW - administration & dosage; Female; Hip Joint; Humans; Image Enhancement KW - methods; Injections KW - Intra-Articular KW - methods; Magnetic Resonance Imaging KW - Interventional KW - methods; Male; Middle Aged; Prospective Studies; Shoulder Joint; Statistics KW - Nonparametric N1 - 1527-1315 Owner: NLM N2 - To prospectively assess overlay technology in providing accurate and efficient targeting for magnetic resonance (MR) imaging-guided shoulder and hip joint arthrography. A prototype augmented reality image overlay system was used in conjunction with a clinical 1.5-T MR imager. A total of 24 shoulder joint and 24 hip joint injections were planned in 12 human cadavers. Two operators (A and B) participated, each performing procedures on different cadavers using image overlay guidance. MR imaging was used to confirm needle positions, monitor injections, and perform MR arthrography. Accuracy was assessed according to the rate of needle adjustment, target error, and whether the injection was intraarticular. Efficiency was assessed according to arthrography procedural time. Operator differences were assessed by comparing accuracy and procedure times between the operators. The Mann-Whitney U test and Fisher exact test were used to assess group differences. Forty-five arthrography procedures (23 shoulders, 22 hips) were performed. Three joints had prostheses and were excluded. Operator A performed 12 shoulder and 12 hip injections. Operator B performed 11 shoulder and 10 hip injections. The needle adjustment rate was 13% (six of 45; one for operator A and five for operator B). Target error was 3.1 mm ± 1.2 (standard deviation) (operator A, 2.9 mm ± 1.4; operator B, 3.5 mm ± 0.9). The intraarticular injection rate was 100% (45 of 45).
The average arthrography time was 14 minutes (range, 6-27 minutes; 12 minutes [range, 6-25 minutes] for operator A and 16 minutes [range, 6-27 minutes] for operator B). Operator differences were not significant with regard to needle adjustment rate (P=.08), target error (P=.07), intraarticular injection rate (P>.99), and arthrography time (P=.22). Image overlay technology provides accurate and efficient MR guidance for successful shoulder and hip arthrography in human cadavers. ER - TY - JOUR AU - Fritz, Jan AU - U-Thainual, Paweena AU - Ungi, Tamas AU - Flammang, Aaron J. AU - Kathuria, Sudhir AU - Fichtinger, Gabor AU - Iordachita, Iulian I. AU - Carrino, John A. T1 - MR-guided vertebroplasty with augmented reality image overlay navigation. JO - Cardiovascular and interventional radiology Y1 - 2014 VL - 37 SP - 1589 EP - 1596 KW - Aged; Aged KW - 80 and over; Bone Cements; Cadaver; Female; Humans; Magnetic Resonance Imaging KW - Interventional KW - methods; Male; Polymethyl Methacrylate; Prospective Studies; Software; Vertebroplasty KW - methods N1 - 1432-086X Owner: NLM N2 - To evaluate the feasibility of magnetic resonance imaging (MRI)-guided vertebroplasty at 1.5 Tesla using augmented reality image overlay navigation. Twenty-five unilateral vertebroplasties [5 of 25 (20%) thoracic, 20 of 25 (80%) lumbar] were prospectively planned in 5 human cadavers. A clinical 1.5-Tesla MRI system was used. An augmented reality image overlay navigation system and 3D Slicer visualization software were used for MRI display, planning, and needle navigation. Intermittent MRI was used to monitor placement of the MRI-compatible vertebroplasty needle. Cement injections (3 ml of polymethylmethacrylate) were performed outside the bore. The cement deposits were assessed on intermediate-weighted MR images. Outcome variables included type of vertebral body access, number of required intermittent MRI control steps, location of final needle tip position, cement deposit location, and vertebroplasty time. All planned procedures (25 of 25, 100%) were performed. Sixteen of 25 (64%) transpedicular and 9 of 25 (36%) parapedicular access routes were used. Six (range, 3-9) MRI control steps were required for needle placement. No inadvertent punctures were visualized. Final needle tip position and cement location were adequate in all cases (25 of 25, 100%), with a target error of the final needle tip position of 6.1 ± 1.9 mm (range, 0.3-8.7 mm) and a distance between the planned needle tip position and the center of the cement deposit of 4.3 mm (range, 0.8-6.8 mm). The time requirement for one level was 16 minutes (range, 11-21 minutes). MRI-guided vertebroplasty using image overlay navigation is feasible, allowing for accurate vertebral body access and cement deposition in cadaveric thoracic and lumbar vertebral bodies. ER - TY - JOUR AU - Fritz, Jan AU - U-Thainual, Paweena AU - Ungi, Tamas AU - Flammang, Aaron J. AU - McCarthy, Edward F. AU - Fichtinger, Gabor AU - Iordachita, Iulian I. AU - Carrino, John A. T1 - Augmented reality visualization using image overlay technology for MR-guided interventions: cadaveric bone biopsy at 1.5 T.
JO - Investigative radiology Y1 - 2013 VL - 48 SP - 464 EP - 470 KW - Aged; Bone Neoplasms KW - pathology KW - secondary; Cadaver; Equipment Design; Equipment Failure Analysis; Female; Humans; Image Enhancement KW - instrumentation; Image Interpretation KW - Computer-Assisted KW - instrumentation; Image-Guided Biopsy KW - instrumentation; Magnetic Resonance Imaging KW - Interventional KW - instrumentation; Male; Middle Aged; Reproducibility of Results; Sensitivity and Specificity; User-Computer Interface N1 - 1536-0210 Owner: NLM N2 - The purpose of this study was to prospectively test the hypothesis that image overlay technology facilitates accurate navigation for magnetic resonance (MR)-guided osseous biopsy. A prototype augmented reality image overlay system was used in conjunction with a clinical 1.5-T MR imaging system. Osseous biopsy of a total of 16 lesions was planned in 4 human cadavers with osseous metastases. A loadable module of 3D Slicer open-source medical image analysis and visualization software was developed and used for display of MR images, lesion identification, planning of virtual biopsy paths, and navigation of drill placement. The osseous drill biopsy was performed by maneuvering the drill into the target along the displayed MR image containing the virtual biopsy path. The drill placement and the final drill position were monitored by intermittent MR imaging. Outcome variables included successful drill placement, number of intermittent MR imaging control steps, target error, number of performed passes and tissue samplings, time requirements, and pathological analysis of the obtained osseous core specimens, including adequacy of specimens, presence of tumor cells, and degree of necrosis. A total of 16 osseous lesions were sampled with percutaneous osseous drill biopsy. Eight lesions were located in the osseous pelvis (8/16, 50%) and 8 (8/16, 50%) lesions were located in the thoracic and lumbar spine. Lesion size was 2.2 cm (1.1-3.5 cm). Four (range, 2-8) MR imaging control steps were required. MR imaging demonstrated successful drill placement inside 16 of the 16 target lesions (100%). One needle pass was sufficient for accurate targeting of all lesions. One tissue sample was obtained in 8 of the 16 lesions (50%); 2, in 6 of the 16 lesions (38%); and 3, in 2 of the 16 lesions (12%). The target error was 4.3 mm (0.8-6.8 mm). The length of time required for biopsy of a single lesion was 38 minutes (20-55 minutes). Specimens of 15 of the 16 lesions (94%) were sufficient for pathological evaluation. Of those 15 diagnostic specimens, 14 (93%) contained neoplastic cells, whereas 1 (7%) specimen demonstrated bone marrow without evidence of neoplastic cells. Of those 14 specimens containing neoplastic cells, 11 (79%) were diagnostic for carcinoma or adenocarcinoma, which was concordant with the primary neoplasm, whereas in 3 of the 14 (21%) the neoplastic cells were indeterminate. Image overlay technology provided accurate navigation for the MR-guided biopsy of osseous lesions of the spine and the pelvis in human cadavers at 1.5 T. The high technical and diagnostic yield supports further evaluation with clinical trials. ER - TY - JOUR AU - Fuerst, Bernhard AU - Sprung, Julian AU - Pinto, Francisco AU - Frisch, Benjamin AU - Wendler, Thomas AU - Simon, Hervé AU - Mengus, Laurent AU - van den Berg, Nynke S. AU - van der Poel, Henk G. AU - van Leeuwen, Fijs W. B. AU - Navab, Nassir T1 - First Robotic SPECT for Minimally Invasive Sentinel Lymph Node Mapping.
JO - IEEE transactions on medical imaging Y1 - 2016 VL - 35 SP - 830 EP - 838 KW - Equipment Design; Humans; Lymph Nodes KW - diagnostic imaging KW - surgery; Minimally Invasive Surgical Procedures; Phantoms KW - Imaging; Robotic Surgical Procedures KW - instrumentation KW - methods; Sentinel Lymph Node Biopsy KW - methods; Tomography KW - Emission-Computed KW - Single-Photon KW - methods N1 - 1558-254X Owner: NLM N2 - In this paper we present the use of a drop-in gamma probe for intra-operative Single-Photon Emission Computed Tomography (SPECT) imaging in the scope of minimally invasive robot-assisted interventions. The probe is designed to be inserted and reside inside the abdominal cavity during the intervention. It is grasped during the procedure using a robotic laparoscopic gripper, enabling full six degrees of freedom handling by the surgeon. We demonstrate the first deployment of the tracked probe for intra-operative in-patient robotic SPECT, enabling augmented-reality image guidance. The hybrid mechanical- and image-based in-patient probe tracking is shown to have an accuracy of 0.2 mm. The overall system performance is evaluated and tested with a phantom for gynecological sentinel lymph node interventions and compared to ground-truth data, yielding a mean reconstruction accuracy of 0.67 mm. ER - TY - JOUR AU - Fumagalli, Stefano AU - Torricelli, Gionatan AU - Massi, Marta AU - Calvani, Silvia AU - Boni, Serena AU - Roberts, Anna T. AU - Accarigi, Elisabetta AU - Manetti, Stefania AU - Marchionni, Niccolò T1 - Effects of a new device to guide venous puncture in elderly critically ill patients: results of a pilot randomized study. JO - Aging clinical and experimental research Y1 - 2017 VL - 29 SP - 335 EP - 339 KW - Aged; Anxiety KW - etiology KW - prevention & control; Comparative Effectiveness Research; Critical Illness KW - psychology KW - therapy; Equipment Design; Equipment Safety; Female; Hematoma KW - prevention & control; Humans; Infrared Rays KW - therapeutic use; Intensive Care Units; Male; Materials Testing; Phlebotomy KW - adverse effects KW - instrumentation KW - methods KW - psychology; Pilot Projects; Elderly; Hematoma; Intensive care unit; Near-infrared electromagnetic radiation; Nursing; Venous puncture N1 - 1720-8319 Owner: NLM N2 - Novel devices based on the emission of near-infrared electromagnetic radiation (NIR) have been developed to minimize venous puncture failures. These instruments produce an "augmented reality" image, in which subcutaneous veins are depicted on an LCD display. We compared the new technique with standard venipuncture in a population of elderly patients. Patients admitted to the Intensive Care Unit were randomized to the standard or the NIR-assisted procedure. In the 103 enrolled patients (age 74 ± 12 years; standard venipuncture-N = 56; NIR-N = 47), no differences were found in procedure length, number of attempts, and referred pain. With NIR there was a lower incidence of hematomas and fewer anxiety and depressive symptoms. The use of the novel NIR-based device is safer and more psychologically tolerable, and it is not associated with an increase in procedure length or number of attempts. ER - TY - JOUR AU - Gao, Ming-ke AU - Chen, Yi-min AU - Liu, Quan AU - Huang, Chen AU - Li, Ze-yu AU - Zhang, Dian-hua T1 - Three-Dimensional Path Planning and Guidance of Leg Vascular Based on Improved Ant Colony Algorithm in Augmented Reality.
JO - Journal of medical systems Y1 - 2015 VL - 39 SP - 133 EP - 133 KW - Algorithms; Computer Simulation; Critical Pathways KW - organization & administration; Humans; Imaging KW - Three-Dimensional KW - methods; Preoperative Period; Radiation Exposure; Reproducibility of Results; Time Factors; Vascular Access Devices; Vascular Surgical Procedures KW - methods N1 - 1573-689X Owner: NLM N2 - Preoperative path planning plays a critical role in vascular access surgery. Vascular access surgery is particularly difficult and requires long training periods as well as precise operation. Because doctors are at different skill levels, large blood vessels are usually chosen for surgery and other potentially optimal paths are not considered. Moreover, patients and surgeons suffer from X-ray radiation during the surgical procedure. The study proposed an improved ant colony algorithm to plan an optimal three-dimensional vascular path, with overall consideration of factors such as catheter diameter, vascular length and diameter, and vascular curvature and torsion. To protect the doctor and patient from long-term X-ray exposure, the paper adopted augmented reality technology to register the reconstructed vascular model with the physical model, located the catheter with an electromagnetic tracking system, and used a head-mounted display to show the planned path in real time and monitor the catheter insertion procedure. The experiment demonstrates the soundness of the preoperative path planning and the reliability of the algorithm. The augmented reality experiment accurately displays, in real time, the vascular phantom model, the planned path, and the catheter trajectory, proving the feasibility of this method. The paper presented a useful and feasible surgical scheme based on the improved ant colony algorithm to plan three-dimensional vascular paths in augmented reality. The study has practical guiding significance for preoperative path planning, intraoperative catheter guidance, and surgical training, and provides a theoretical method of path planning for vascular access surgery. It is a safe and reliable path planning approach with practical reference value. ER - TY - CONF AU - Garcia, Jaime Andres AU - Navarro, Karla Felix T1 - The Mobile RehApp™: an AR-based mobile game for ankle sprain rehabilitation PB - IEEE Y1 - 2014/05 ER - TY - JOUR AU - Garcia-Martinez, Alvaro AU - Vicente-Samper, Jose María AU - Sabater-Navarro, José María T1 - Automatic detection of surgical haemorrhage using computer vision. JO - Artificial intelligence in medicine Y1 - 2017/05 VL - 78 SP - 55 EP - 60 KW - Algorithms; Blood Loss KW - Surgical; Hemorrhage KW - diagnosis; Humans; Image Processing KW - Computer-Assisted; Laparoscopy KW - adverse effects; Computer vision; Laparoscopic surgery; Massive bleeding N1 - 1873-2860 Owner: NLM N2 - On occasions, a surgical intervention can be associated with serious, potentially life-threatening complications. One of these complications is a haemorrhage during the operation, an unsolved issue that could delay the intervention or even cause the patient's death. In laparoscopic surgery this complication is even more dangerous, due to the limited vision and mobility imposed by the minimally invasive techniques. This paper describes a computer vision algorithm designed to analyse the images captured by a laparoscopic camera, classifying the pixels of each frame into blood pixels and background pixels and finally detecting a massive haemorrhage.
The pixel classification is carried out by comparing the B/R and G/R parameters of each pixel's RGB colour with a threshold obtained from the global average of these parameters over the whole frame. The detection of a starting haemorrhage is achieved by analysing the variation of these parameters and the number of pixels classified as blood. When classifying in vitro images, the proposed algorithm obtains accuracy over 96%, but during the analysis of in vivo images obtained from real operations, the results worsen slightly due to poor illumination, visual interference or sudden camera movements, with accuracy over 88%. The detection of haemorrhages depends directly on the correct classification of blood pixels, so this analysis achieves an accuracy of 78%. The proposed algorithm turns out to be a good starting point for automatic detection of blood and bleeding in the surgical environment, which can be applied to enhance the surgeon's vision, for example by showing the last frame before a massive haemorrhage, where the incision could be seen using augmented reality capabilities. ER - TY - JOUR AU - Gavaghan, K. AU - Oliveira-Santos, T. AU - Peterhans, M. AU - Reyes, M. AU - Kim, H. AU - Anderegg, S. AU - Weber, S. T1 - Evaluation of a portable image overlay projector for the visualisation of surgical navigation data: phantom studies. JO - International journal of computer assisted radiology and surgery Y1 - 2012 VL - 7 SP - 547 EP - 556 KW - Biopsy KW - instrumentation; Equipment Design; Feasibility Studies; Humans; Imaging KW - Three-Dimensional KW - instrumentation; Lasers; Phantoms KW - Imaging; Surgery KW - Computer-Assisted KW - instrumentation; User-Computer Interface N1 - 1861-6429 Owner: NLM N2 - Presenting visual feedback for image-guided surgery on a monitor requires the surgeon to perform time-consuming comparisons and diversion of sight and attention away from the patient. Deficiencies in previously developed augmented reality systems for image-guided surgery have, however, prevented the general acceptance of any one technique as a viable alternative to monitor displays. This work presents an evaluation of the feasibility and versatility of a novel augmented reality approach for the visualisation of surgical planning and navigation data. The approach, which utilises a portable image overlay device, was evaluated during integration into existing surgical navigation systems and during application within simulated navigated surgery scenarios. A range of anatomical models, surgical planning data and guidance information taken from liver surgery, cranio-maxillofacial surgery, orthopaedic surgery and biopsy were displayed on patient-specific phantoms, directly on to the patient's skin and on to cadaver tissue. The feasibility of employing the proposed augmented reality visualisation approach in each of the four tested clinical applications was qualitatively assessed for usability, visibility, workspace, line of sight and obtrusiveness. The visualisation approach was found to assist in spatial understanding and reduced the need for sight diversion throughout the simulated surgical procedures. The approach enabled structures to be identified and targeted quickly and intuitively. All validated augmented reality scenes were easily visible and were implemented with minimal overhead. The device showed sufficient workspace for each of the presented applications, and the approach was minimally intrusive to the surgical scene.
The presented visualisation approach proved to be versatile and applicable to a range of image-guided surgery applications, overcoming many of the deficiencies of previously described AR approaches. The approach presents an initial step towards a widely accepted alternative to monitor displays for the visualisation of surgical navigation data. ER - TY - JOUR AU - Ghaderi, Mohammad Ali AU - Heydarzadeh, Mehrdad AU - Nourani, Mehrdad AU - Gupta, Gopal AU - Tamil, Lakshman T1 - Augmented reality for breast tumors visualization. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2016 VL - 2016 SP - 4391 EP - 4394 KW - Breast Neoplasms KW - diagnostic imaging; Humans; Imaging KW - Three-Dimensional KW - instrumentation KW - methods; Mammography; User-Computer Interface N1 - 1557-170X Owner: NLM N2 - 3D visualization of breast tumors has been shown to be effective by previous studies. In this paper, we introduce a new augmented reality application that can help doctors and surgeons to have a more accurate visualization of breast tumors; this system uses a marker-based image-processing technique to render a 3D model of the tumors on the body. The model can be created by experts using a combination of 3D breast mammography images. We have tested the system using an Android smartphone and a head-mounted device. This proof of concept can be useful for oncologists in performing more effective screening, and for surgeons in planning surgery. ER - TY - CONF AU - Grandi, Jeronimo AU - Maciel, Anderson AU - Debarba, Henrique AU - Zanchet, Dinamar T1 - Spatially aware mobile interface for 3D visualization and interactive surgery planning PB - IEEE Y1 - 2014/05 ER - TY - JOUR AU - Grasso, R. F. AU - Faiella, E. AU - Luppi, G. AU - Schena, E. AU - Giurazza, F. AU - Del Vescovo, R. AU - D'Agostino, F. AU - Cazzato, R. L. AU - Beomonte Zobel, B. T1 - Percutaneous lung biopsy: comparison between an augmented reality CT navigation system and standard CT-guided technique. JO - International journal of computer assisted radiology and surgery Y1 - 2013 VL - 8 SP - 837 EP - 848 KW - Aged; Equipment Design; Female; Fluoroscopy KW - methods; Humans; Image-Guided Biopsy KW - methods; Lung KW - diagnostic imaging KW - pathology; Lung Diseases KW - diagnosis; Lung Neoplasms KW - diagnosis; Male; Middle Aged; ROC Curve; Radiography KW - Interventional KW - instrumentation; Reproducibility of Results; Retrospective Studies; Tomography KW - X-Ray Computed KW - instrumentation; User-Computer Interface N1 - 1861-6429 Owner: NLM N2 - Percutaneous lung biopsies (PLBs) performed for the evaluation of pulmonary masses require image guidance to avoid critical structures. A new CT navigation system (SIRIO, "Sistema robotizzato assistito per il puntamento intraoperatorio") for PLBs was validated. The local Institutional Review Board approved this retrospective study. Image-guided PLBs in 197 patients were performed with a CT navigation system (SIRIO). The procedures were reviewed based on the number of CT scans, patients' radiation exposure and procedural time recorded. Comparison was performed with a group of 72 patients undergoing standard CT-guided PLBs. Sensitivity, specificity and overall diagnostic accuracy were assessed in both groups. SIRIO-guided PLBs showed a significant reduction in procedure time, number of required CT scans and the radiation dose administered to patients ([Formula: see text]).
In terms of diagnostic accuracy, SIRIO proved to be more accurate for small-sized lesions ([Formula: see text]20 mm) than standard CT-guidance. SIRIO proved to be a reliable and effective tool when performing CT-guided PLBs and was especially useful for sampling small ([Formula: see text]20 mm) lesions. ER - TY - JOUR AU - Hallet, Julie AU - Soler, Luc AU - Diana, Michele AU - Mutter, Didier AU - Baumert, Thomas F. AU - Habersetzer, François AU - Marescaux, Jacques AU - Pessaux, Patrick T1 - Trans-thoracic minimally invasive liver resection guided by augmented reality. JO - Journal of the American College of Surgeons Y1 - 2015/05 VL - 220 SP - e55 EP - e60 KW - Carcinoma KW - Hepatocellular KW - surgery; Hepatectomy KW - methods; Humans; Imaging KW - Three-Dimensional; Liver Neoplasms KW - surgery; Male; Middle Aged; Preoperative Care; Surgery KW - Computer-Assisted KW - methods; Thoracic Surgery KW - Video-Assisted KW - methods; Tomography KW - X-Ray Computed N1 - 1879-1190 Owner: NLM ER - TY - JOUR AU - Haneishi, Hideaki AU - Yamaguchi, Tadashi AU - Nakamura, Ryoichi AU - Nakaguchi, Toshiya AU - Suga, Mikio AU - Kawahira, Hiroshi T1 - Research Status in the Fusion and Enrichment of Medical Imaging for High Quality Diagnosis and Treatment (FERMI) Project JO - Journal of Medical Imaging and Health Informatics Y1 - 2013/march VL - 3 IS - 1 SP - 51 EP - 58 ER - TY - JOUR AU - Haouchine, Nazim AU - Cotin, Stephane AU - Peterlik, Igor AU - Dequidt, Jeremie AU - Lopez, Mario Sanz AU - Kerrien, Erwan AU - Berger, Marie-Odile T1 - Impact of Soft Tissue Heterogeneity on Augmented Reality for Liver Surgery. JO - IEEE transactions on visualization and computer graphics Y1 - 2015/05 VL - 21 SP - 584 EP - 597 KW - Computer Graphics; Computer Simulation; Humans; Liver KW - surgery; Liver Neoplasms KW - pathology KW - surgery; Phantoms KW - Imaging; Surgery KW - Computer-Assisted KW - education; User-Computer Interface N1 - 1941-0506 Owner: NLM N2 - This paper presents a method for real-time augmented reality of internal liver structures during minimally invasive hepatic surgery. Vessels and tumors computed from pre-operative CT scans can be overlaid onto the laparoscopic view for surgery guidance. Compared to current methods, our method is able to locate the in-depth positions of the tumors based on partial three-dimensional liver tissue motion using a real-time biomechanical model. This model makes it possible to properly handle the motion of internal structures even in the case of anisotropic or heterogeneous tissues, as is the case for the liver and many anatomical structures. Experiments conducted on a liver phantom allow the accuracy of the augmentation to be measured, while real-time augmentation on an in vivo human liver during real surgery shows the benefits of such an approach for minimally invasive surgery. ER - TY - JOUR AU - Haouchine, Nazim AU - Dequidt, Jérémie AU - Berger, Marie-Odile AU - Cotin, Stéphane T1 - Deformation-based augmented reality for hepatic surgery.
JO - Studies in health technology and informatics Y1 - 2013 VL - 184 SP - 182 EP - 188 KW - Computer Simulation; Computer-Assisted Instruction KW - methods; Humans; Imaging KW - Three-Dimensional KW - methods; Laparoscopy KW - education KW - methods; Liver Neoplasms KW - pathology KW - physiopathology KW - surgery; Models KW - Biological; Surgery KW - Computer-Assisted KW - methods; User-Computer Interface N1 - 0926-9630 Owner: NLM N2 - In this paper we introduce a method for augmenting the laparoscopic view during hepatic tumor resection. Using augmented reality techniques, vessels, tumors and cutting planes computed from pre-operative data can be overlaid onto the laparoscopic video. Compared to current techniques, which are limited to a rigid registration of the pre-operative liver anatomy with the intra-operative image, we propose a real-time, physics-based, non-rigid registration. The main strength of our approach is that the deformable model can also be used to regularize the data extracted from the computer vision algorithms. We show preliminary results on a video sequence that clearly highlights the interest of using a physics-based model for elastic registration. ER - TY - JOUR AU - He, Changyu AU - Kazanzides, Peter AU - Sen, Hasan Tutkun AU - Kim, Sungmin AU - Liu, Yue T1 - An Inertial and Optical Sensor Fusion Approach for Six Degree-of-Freedom Pose Estimation. JO - Sensors (Basel, Switzerland) Y1 - 2015 VL - 15 SP - 16448 EP - 16465 KW - Cadaver; Equipment Design; Humans; Optics and Photonics KW - instrumentation; Extended Kalman Filter; hybrid tracking; inertial tracking; optical tracking N1 - 1424-8220 Owner: NLM N2 - Optical tracking provides relatively high accuracy over a large workspace but requires line-of-sight between the camera and the markers, which may be difficult to maintain in actual applications. In contrast, inertial sensing does not require line-of-sight but is subject to drift, which may cause large cumulative errors, especially during the measurement of position. To handle cases where some or all of the markers are occluded, this paper proposes an inertial and optical sensor fusion approach in which the bias of the inertial sensors is estimated when the optical tracker provides full six degree-of-freedom (6-DOF) pose information. As long as the position of at least one marker can be tracked by the optical system, the 3-DOF position can be combined with the orientation estimated from the inertial measurements to recover the full 6-DOF pose information. When all the markers are occluded, the position tracking relies on the inertial sensors that are bias-corrected by the optical tracking system. Experiments are performed with an augmented reality head-mounted display (ARHMD) that integrates an optical tracking system (OTS) and inertial measurement unit (IMU). Experimental results show that under partial occlusion conditions, the root mean square errors (RMSE) of orientation and position are 0.04° and 0.134 mm, and under total occlusion conditions for 1 s, the orientation and position RMSE are 0.022° and 0.22 mm, respectively. Thus, the proposed sensor fusion approach can provide reliable 6-DOF pose under long-term partial occlusion and short-term total occlusion conditions. ER - TY - JOUR AU - He, Kunshan AU - Mao, Yamin AU - Ye, Jinzuo AU - An, Yu AU - Jiang, Shixin AU - Chi, Chongwei AU - Tian, Jie T1 - A novel wireless wearable fluorescence image-guided surgery system. JO - Conference proceedings : ...
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2016 VL - 2016 SP - 5208 EP - 5211 KW - Animals; Fluorescence; Humans; Indocyanine Green KW - administration & dosage; Injections KW - Intravenous; Lung KW - surgery; Software; Surgery KW - Computer-Assisted KW - methods; Swine; Video Recording; Wireless Technology N1 - 1557-170X Owner: NLM N2 - Segmentectomy using indocyanine green (ICG) has become a primary treatment option to achieve a complete resection and preserve lung function in early-stage lung cancer. However, owing to a lack of appropriate intraoperative imaging systems, it is a huge challenge for surgeons to identify the intersegmental plane during the operation, leading to poor prognosis. Thus, we developed a novel wireless wearable fluorescence image-guided surgery system (LIGHTEN) for fast and accurate identification of intersegmental planes in human patients. The system consists of a handle, a light source, Google Glass, and a laptop. Application software was written to capture clear real-time images, and Google Glass is used for augmented reality display. Twelve in vivo studies of pulmonary segmentectomy in swine by intravenous injection of ICG were conducted to test the performance of the system. A distinct black-and-white transition zone image was observed and displayed simultaneously on the Google Glass in all swine. The results demonstrated that surgeons using LIGHTEN can effortlessly and quickly discern intersegmental planes during the operation. Our system has enormous potential in helping surgeons to precisely identify intersegmental planes with mobility and high sensitivity. ER - TY - JOUR AU - Hervás, Ramón AU - Bravo, José AU - Fontecha, Jesús T1 - An assistive navigation system based on augmented reality and context awareness for people with mild cognitive impairments. JO - IEEE journal of biomedical and health informatics Y1 - 2014 VL - 18 SP - 368 EP - 374 KW - Algorithms; Cognitive Dysfunction KW - rehabilitation; Geographic Information Systems; Humans; Medical Informatics KW - methods; Monitoring KW - Ambulatory KW - methods; Self-Help Devices; User-Computer Interface N1 - 2168-2208 Owner: NLM N2 - This paper presents a system for supplying spatial orientation and support to cognitively impaired people in their daily activities. The system is a technological solution based on external aid at a practical level (substitution-based rehabilitation). In particular, we propose a model focused on points of interest or well-known places, in which user-friendly routes to a destination are generated based on the user context rather than the conventional street names and quantitative distances. Moreover, the system offers augmented reality views that include contextual information. This philosophy of navigation more closely matches the needs of the user than do conventional navigation systems; the proposal is especially useful for users who are not accustomed to using new technologies (e.g., elderly people), people experiencing disorientation and, more generally, individuals with a slight cognitive deficit. The system also includes an application that allows the relatives of the user to establish tasks that must be performed at a specific location and to monitor the activities of the user to detect potentially risky situations.
ER - TY - JOUR AU - Hetherington, Jorden AU - Lessoway, Victoria AU - Gunka, Vit AU - Abolmaesumi, Purang AU - Rohling, Robert T1 - SLIDE: automatic spine level identification system using a deep convolutional neural network. JO - International journal of computer assisted radiology and surgery Y1 - 2017 VL - 12 SP - 1189 EP - 1198 KW - Algorithms; Anesthesia KW - Epidural KW - methods; Humans; Image Processing KW - Computer-Assisted; Lumbar Vertebrae KW - diagnostic imaging; Neural Networks (Computer); Spine KW - diagnostic imaging; Machine learning; Needle guidance; Ultrasound; Vertebral level N1 - 1861-6429 Owner: NLM N2 - Percutaneous spinal needle insertion procedures often require proper identification of the vertebral level to effectively and safely deliver analgesic agents. The current clinical method involves "blind" identification of the vertebral level through manual palpation of the spine, which has only 30% reported accuracy. Therefore, there is a need for better anatomical identification prior to needle insertion. A real-time system was developed to identify the vertebral level from a sequence of ultrasound images, following a clinical imaging protocol. The system uses a deep convolutional neural network (CNN) to classify transverse images of the lower spine. Several existing CNN architectures were implemented, utilizing transfer learning, and compared for adequacy in a real-time system. In the system, the CNN output is processed, using a novel state machine, to automatically identify vertebral levels as the transducer moves up the spine. Additionally, a graphical display was developed and integrated within 3D Slicer. Finally, an augmented reality display, projecting the level onto the patient's back, was also designed. A small feasibility study [Formula: see text] evaluated performance. The proposed CNN successfully discriminates ultrasound images of the sacrum, intervertebral gaps, and vertebral bones, achieving 88% 20-fold cross-validation accuracy. Seventeen of 20 test ultrasound scans had successful identification of all vertebral levels, processed at real-time speed (40 frames/s). A machine learning system is presented that successfully identifies lumbar vertebral levels. The small study on human subjects demonstrated real-time performance. A projection-based augmented reality display was used to show the vertebral level directly on the subject adjacent to the puncture site. ER - TY - JOUR AU - Heyn, Patricia C. AU - Baumgardner, Chad A. AU - McLachlan, Leslie AU - Bodine, Cathy T1 - Mixed-reality exercise effects on participation of individuals with spinal cord injuries and developmental disabilities: a pilot study. JO - Topics in spinal cord injury rehabilitation Y1 - 2014 VL - 20 SP - 338 EP - 345 KW - developmental disability; exercise; mixed-reality; rehabilitation; spinal cord injury N1 - 1082-0744 Owner: NLM N2 - The purpose of this pilot study was to investigate the effectiveness of a mixed-reality (MR) exercise environment on engagement and enjoyment levels of individuals with spinal cord injury (SCI) and intellectual and developmental disabilities (IDD). Six people participated in this cross-sectional, observational pilot study involving one MR exercise trial. The augmented reality environment was based on a first-person perspective video of a scenic biking/walking trail in Colorado. Males and females (mean age, 43.3 ± 13.7 years) were recruited from a research database for their participation in previous clinical studies. 
Of the 6 participants, 2 had SCI, 2 had IDD, and 2 were without disability. The primary outcome measurement of this pilot study was the self-reported engagement and enjoyment level of each participant after the exercise trial. All participants reported increased levels of engagement, enjoyment, and immersion involving the MR exercise environment as well as positive feedback recommending this type of exercise approach to peers with similar disabilities. All the participants reported higher than normal levels of enjoyment, and 66.7% reported a higher than normal sense of being on a real trail. Participants' feedback suggested that the MR environment could be entertaining, motivating, and engaging for users with disabilities, providing a foundation for further development of this technology for use in individuals with cognitive and physical disabilities. ER - TY - JOUR AU - Hoermann, Simon AU - Ferreira Dos Santos, Luara AU - Morkisch, Nadine AU - Jettkowski, Katrin AU - Sillis, Moran AU - Devan, Hemakumar AU - Kanagasabai, Parimala S. AU - Schmidt, Henning AU - Krüger, Jörg AU - Dohle, Christian AU - Regenbrecht, Holger AU - Hale, Leigh AU - Cutfield, Nicholas J. T1 - Computerised mirror therapy with Augmented Reflection Technology for early stroke rehabilitation: clinical feasibility and integration as an adjunct therapy. JO - Disability and rehabilitation Y1 - 2017 VL - 39 SP - 1503 EP - 1514 KW - Adult; Aged; Aged KW - 80 and over; Combined Modality Therapy; Feasibility Studies; Female; Germany; Humans; Male; Middle Aged; Stroke KW - physiopathology; Stroke Rehabilitation KW - methods; Technology; Therapy KW - Computer-Assisted KW - methods; Upper Extremity KW - physiopathology; User-Computer Interface; Augmented reality; upper limb; usability; user experience; virtual reality; visual illusion N1 - 1464-5165 Owner: NLM N2 - New rehabilitation strategies for post-stroke upper limb rehabilitation employing visual stimulation show promising results; however, cost-efficient and clinically feasible ways to provide these interventions are still lacking. An integral step is to translate recent technological advances, such as in virtual and augmented reality, into therapeutic practice to improve outcomes for patients. This requires research on the adaptation of the technology for clinical use as well as on the appropriate guidelines and protocols for sustainable integration into therapeutic routines. Here, we present and evaluate a novel and affordable augmented reality system (Augmented Reflection Technology, ART) in combination with a validated mirror therapy protocol for upper limb rehabilitation after stroke. We evaluated components of the therapeutic intervention from the patients' and the therapists' points of view in a clinical feasibility study at a rehabilitation centre. We also assessed the integration of ART as an adjunct therapy for the clinical rehabilitation of subacute patients at two different hospitals. The results showed that the combination and application of the Berlin Protocol for Mirror Therapy together with ART was feasible for clinical use. This combination was integrated into the therapeutic plan of subacute stroke patients at the two clinical locations where the second part of this research was conducted. Our findings pave the way for using technology to provide mirror therapy in clinical settings and show potential for the more effective use of inpatient time and enhanced recoveries for patients.
Implications for Rehabilitation: Computerised Mirror Therapy is feasible for clinical use. Augmented Reflection Technology can be integrated as an adjunctive therapeutic intervention for subacute stroke patients in an inpatient setting. Virtual Rehabilitation devices such as Augmented Reflection Technology have considerable potential to enhance stroke rehabilitation. ER - TY - JOUR AU - Hoermann, Simon AU - Hale, Leigh AU - Winser, Stanley J. AU - Regenbrecht, Holger T1 - Patient engagement and clinical feasibility of Augmented Reflection Technology for stroke rehabilitation JO - International Journal on Disability and Human Development Y1 - 2014/january VL - 13 IS - 3 ER - TY - JOUR AU - Hollensteiner, Marianne AU - Fuerst, David AU - Schrempf, Andreas T1 - Artificial muscles for a novel simulator in minimally invasive spine surgery. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2014 VL - 2014 SP - 506 EP - 509 KW - Fractures KW - Compression KW - surgery; Humans; Minimally Invasive Surgical Procedures KW - instrumentation KW - methods; Muscles; Simulation Training KW - methods; Spinal Fractures KW - surgery; Spine KW - surgery; Tissue Engineering; Vertebroplasty KW - methods N1 - 1557-170X Owner: NLM N2 - Vertebroplasty and kyphoplasty are commonly used minimally invasive methods to treat vertebral compression fractures. Novice surgeons gather surgical skills in different ways, mainly by "learning by doing" or training on models, specimens or simulators. Currently, a new training modality, an augmented reality simulator for minimally invasive spine surgeries, is being developed. An important step in developing this simulator is the accurate fabrication of artificial tissues. Artificial vertebrae and muscles that reproduce comparable haptic feedback during tool insertion are especially necessary. Two artificial tissues were developed to imitate natural muscle tissue. The axial insertion force was used as the validation parameter, as it reflects the mechanical properties of artificial and natural muscles. Validation was performed by comparing insertion measurement data from fifteen artificial muscle tissues with measurement data from human muscles. Based on the resulting forces during needle insertion into human muscles, a suitable material composition for manufacturing artificial muscles was found. ER - TY - JOUR AU - Hooten, Kristopher G. AU - Lister, J. Richard AU - Lombard, Gwen AU - Lizdas, David E. AU - Lampotang, Samsun AU - Rajon, Didier A. AU - Bova, Frank AU - Murad, Gregory J. A. T1 - Mixed reality ventriculostomy simulation: experience in neurosurgical residency. JO - Neurosurgery Y1 - 2014 VL - 10 Suppl 4 SP - 576 EP - 581 KW - Clinical Competence; Computer Simulation; Feedback; Humans; Internship and Residency; Models KW - Neurological; Neurosurgery KW - education; Practice (Psychology); User-Computer Interface; Ventriculostomy KW - education N1 - 1524-4040 Owner: NLM N2 - Medicine and surgery are turning toward simulation to improve on limited patient interaction during residency training. Many simulators today use virtual reality with augmented haptic feedback with little to no physical elements.
In a collaborative effort, the University of Florida Department of Neurosurgery and the Center for Safety, Simulation & Advanced Learning Technologies created a novel "mixed" physical and virtual simulator to mimic the ventriculostomy procedure. The simulator contains all the physical components encountered for the procedure with superimposed 3-D virtual elements for the neuroanatomical structures. To introduce the ventriculostomy simulator and its validation as a necessary training tool in neurosurgical residency. We tested the simulator in more than 260 residents. An algorithm combining time and accuracy was used to grade performance. Voluntary postperformance surveys were used to evaluate the experience. Results demonstrate that more experienced residents had significantly better scores and completed the procedure in less time than inexperienced residents. Survey results revealed that most residents agreed that practice on the simulator would help with future ventriculostomies. This mixed reality simulator provides a real-life experience, and will be an instrumental tool in training the next generation of neurosurgeons. We have now implemented a standard where incoming residents must prove efficiency and skill on the simulator before their first interaction with a patient. ER - TY - JOUR AU - Horeman, Tim AU - van Delft, Freek AU - Blikkendaal, Mathijs D. AU - Dankelman, Jenny AU - van den Dobbelsteen, John J. AU - Jansen, Frank-Willem T1 - Learning from visual force feedback in box trainers: tissue manipulation in laparoscopic surgery JO - Surgical Endoscopy Y1 - 2014/february ER - TY - JOUR AU - Hou, YuanZheng AU - Ma, LiChao AU - Zhu, RuYuan AU - Chen, XiaoLei T1 - iPhone-Assisted Augmented Reality Localization of Basal Ganglia Hypertensive Hematoma. JO - World neurosurgery Y1 - 2016 VL - 94 SP - 480 EP - 492 KW - Basal Ganglia Diseases KW - diagnostic imaging; Hematoma KW - Epidural KW - Cranial KW - diagnostic imaging; Humans; Image Interpretation KW - Computer-Assisted KW - methods; Intracranial Hemorrhage KW - Hypertensive KW - diagnostic imaging; Mobile Applications; Observer Variation; Photography KW - methods; Reproducibility of Results; Sensitivity and Specificity; Smartphone; Subtraction Technique; Tomography KW - X-Ray Computed KW - methods; User-Computer Interface; Hypertensive hematoma; Image-guided surgery; Smartphone N1 - 1878-8769 Owner: NLM N2 - A low-cost, time-efficient technique that could localize hypertensive hematomas in the basal ganglia would be beneficial for minimally invasive hematoma evacuation surgery. We used an iPhone to achieve this goal and evaluated its accuracy and feasibility. We located basal ganglia hematomas in 26 patients and depicted the boundaries of the hematomas on the skin. To verify the accuracy of the drawn boundaries, computed tomography (CT) markers surrounding the depicted boundaries were attached to 10 patients. The deviation between the CT markers and the actual hematoma boundaries was then measured. In the other 16 patients, minimally invasive endoscopic hematoma evacuation surgery was performed according to the depicted hematoma boundary. The deflection angle of the actual trajectory and deviation in the hematoma center were measured according to the preoperative and postoperative CT data. There were 40 CT markers placed on 10 patients. The mean deviation of these markers was 3.1 ± 2.4 mm. In the 16 patients who received surgery, the deflection angle of the actual trajectory was 4.3° ± 2.1°.
The deviation in the hematoma center was 5.2 ± 2.6 mm. This new method can locate basal ganglia hematomas with a sufficient level of accuracy and is helpful for minimally invasive endoscopic hematoma evacuation surgery. ER - TY - JOUR AU - Hou, YuanZheng AU - Ma, LiChao AU - Zhu, RuYuan AU - Chen, XiaoLei AU - Zhang, Jun T1 - A Low-Cost iPhone-Assisted Augmented Reality Solution for the Localization of Intracranial Lesions. JO - PloS one Y1 - 2016 VL - 11 SP - e0159185 EP - e0159185 KW - Adolescent; Adult; Aged; Algorithms; Brain KW - pathology; Child; Child KW - Preschool; Female; Humans; Image Processing KW - Computer-Assisted; Magnetic Resonance Imaging; Male; Middle Aged; Models KW - Statistical; Neuronavigation KW - methods; Neurosurgical Procedures; Reproducibility of Results; Smartphone; Tomography KW - X-Ray Computed; Young Adult N1 - 1932-6203 Owner: NLM N2 - Precise location of intracranial lesions before surgery is important, but occasionally difficult. Modern navigation systems are very helpful, but expensive. A low-cost solution that could locate brain lesions and their surface projections in augmented reality would be beneficial. We used an iPhone to partially achieve this goal, and evaluated its accuracy and feasibility in a clinical neurosurgery setting. We located brain lesions in 35 patients, and using an iPhone, we depicted the lesion's surface projection onto the skin of the head. To assess the accuracy of this method, we pasted computed tomography (CT) markers on the skin surrounding the depicted lesion boundaries in 15 patients. CT scans were then performed with or without contrast enhancement. The deviations (D) between the CT markers and the actual lesion boundaries were measured. We found that 97.7% of the markers displayed a high accuracy level (D ≤ 5 mm). In the remaining 20 patients, we compared our iPhone-based method with a frameless neuronavigation system. Four check points were chosen on the skin surrounding the depicted lesion boundaries, to assess the deviations between the two methods. The integrated offset was calculated according to the deviations at the four check points. We found that for the supratentorial lesions, the median offset between these two methods was 2.90 mm and the maximum offset was 4.2 mm. This low-cost, image-based, iPhone-assisted, augmented reality solution is technically feasible, and helpful for the localization of some intracranial lesions, especially shallow supratentorial intracranial lesions of moderate size. ER - TY - JOUR AU - Hung, Andrew J. AU - Shah, Swar H. AU - Dalag, Leonard AU - Shin, Daniel AU - Gill, Inderbir S. T1 - Development and Validation of a Novel Robotic Procedure Specific Simulation Platform: Partial Nephrectomy. JO - The Journal of urology Y1 - 2015 VL - 194 SP - 520 EP - 526 KW - Animals; Clinical Competence; Computer Simulation; Education KW - Medical KW - Continuing KW - methods; Equipment Design; Female; Humans; Imaging KW - Three-Dimensional; Internship and Residency; Male; Nephrectomy KW - education KW - methods; Robotics KW - instrumentation; Surveys and Questionnaires; Swine; Urology KW - education; User-Computer Interface; nephrectomy; robotics N1 - 1527-3792 Owner: NLM N2 - We developed a novel procedure specific simulation platform for robotic partial nephrectomy. In this study we prospectively evaluate its face, content, construct and concurrent validity. This hybrid platform features augmented reality and virtual reality.
Augmented reality involves 3-dimensional robotic partial nephrectomy surgical videos overlaid with virtual instruments to teach surgical anatomy, technical skills and operative steps. Advanced technical skills are assessed with an embedded full virtual reality renorrhaphy task. Participants were classified as novice (no surgical training, 15), intermediate (less than 100 robotic cases, 13) or expert (100 or more robotic cases, 14) and prospectively assessed. Cohort performance was compared with the Kruskal-Wallis test (construct validity). A post-study questionnaire was used to assess the realism of simulation (face validity) and usefulness for training (content validity). Concurrent validity evaluated the correlation between the virtual reality renorrhaphy task and live porcine robotic partial nephrectomy performance (Spearman's analysis). Experts rated the augmented reality content as realistic (median 8/10) and helpful for resident/fellow training (8.0-8.2/10). Experts rated the platform highly for teaching anatomy (9/10) and operative steps (8.5/10) but moderately for technical skills (7.5/10). Experts and intermediates outperformed novices (construct validity) in efficiency (p=0.0002) and accuracy (p=0.002). For virtual reality renorrhaphy, experts outperformed intermediates on GEARS metrics (p=0.002). Virtual reality renorrhaphy and in vivo porcine robotic partial nephrectomy performance correlated significantly (r=0.8, p <0.0001) (concurrent validity). This augmented reality simulation platform demonstrated face, content and construct validity. Performance in the procedure specific virtual reality task correlated highly with a porcine model (concurrent validity). Future efforts will integrate procedure specific virtual reality tasks and their global assessment. ER - TY - JOUR AU - Hwang, Alex D. AU - Peli, Eli T1 - An Augmented-Reality Edge Enhancement Application for Google Glass JO - Optometry and Vision Science Y1 - 2014/august VL - 91 IS - 8 SP - 1021 EP - 1030 ER - TY - JOUR AU - Ieiri, Satoshi AU - Uemura, Munenori AU - Konishi, Kouzou AU - Souzaki, Ryota AU - Nagao, Yoshihiro AU - Tsutsumi, Norifumi AU - Akahoshi, Tomohiko AU - Ohuchida, Kenoki AU - Ohdaira, Takeshi AU - Tomikawa, Morimasa AU - Tanoue, Kazuo AU - Hashizume, Makoto AU - Taguchi, Tomoaki T1 - Augmented reality navigation system for laparoscopic splenectomy in children based on preoperative CT image using optical tracking device. JO - Pediatric surgery international Y1 - 2012 VL - 28 SP - 341 EP - 346 KW - Adolescent; Child; Feasibility Studies; Female; Humans; Image Processing KW - Computer-Assisted; Laparoscopy KW - methods; Male; Purpura KW - Thrombocytopenic KW - Idiopathic KW - surgery; Spherocytosis KW - Hereditary KW - surgery; Splenectomy KW - methods; Surgery KW - Computer-Assisted; Tomography KW - X-Ray Computed N1 - 1437-9813 Owner: NLM N2 - In endoscopic surgery, limited views and lack of tactile sensation restrict the surgeon's abilities and cause stress. Therefore, an intra-operative navigation system is strongly recommended. We developed an augmented reality (AR) navigation system based on preoperative CT imaging. The purpose of this study is to evaluate the usefulness, feasibility, and accuracy of this system using laparoscopic splenectomy in children. Volume images were reconstructed by a three-dimensional (3D) viewer application. We used an optical tracking system for registration between the volume image and body surface markers.
The AR visualization superimposed preoperative 3D CT images onto captured live laparoscopic images. This system was applied to six cases of laparoscopic splenectomy in children. To evaluate registration accuracy, distances from the marker position to the volume data were calculated. The operator recognized the hidden vascular variation of the splenic artery and vein, accessory spleen, and pancreatic tail by overlaying an image onto a laparoscopic live image. The registration accuracy of the six cases was 5.30 ± 0.08, 5.71 ± 1.70, 10.1 ± 0.60, 18.8 ± 3.56, 4.06 ± 1.71, and 7.05 ± 4.71 mm, respectively. This navigation system provides real-time anatomical information, which cannot otherwise be visualized without navigation. The registration accuracy was acceptable in clinical operation. ER - TY - JOUR AU - Im, Dal Jae AU - Ku, Jeunghun AU - Kim, Yeun Joon AU - Cho, Sangwoo AU - Cho, Yun Kyung AU - Lim, Teo AU - Lee, Hye Sun AU - Kim, Hyun Jung AU - Kang, Youn Joo T1 - Utility of a Three-Dimensional Interactive Augmented Reality Program for Balance and Mobility Rehabilitation in the Elderly: A Feasibility Study. JO - Annals of rehabilitation medicine Y1 - 2015 VL - 39 SP - 462 EP - 472 KW - Aged; Augmented reality; Balance; Exercise; Virtual reality N1 - 2234-0645 Owner: NLM N2 - To improve lower extremity function and balance in elderly persons, we developed a novel, three-dimensional interactive augmented reality system (3D ARS). In this feasibility study, we assessed clinical and kinematic improvements, user participation, and the side effects of our system. Eighteen participants (age, 56-76 years) capable of walking independently and standing on one leg were recruited. The participants received 3D ARS training during 10 sessions (30-minute duration each) for 4 weeks. Berg Balance Scale (BBS) and Timed Up and Go (TUG) scores were obtained before and after the exercises. Outcome performance variables, including response time and success rate, and kinematic variables, such as hip and knee joint angle, were evaluated after each session. Participants exhibited significant clinical improvements in lower extremity balance and mobility following the intervention, as shown by improved BBS and TUG scores (p<0.001). Consistent kinematic improvements in the maximum joint angles of the hip and knee were observed across sessions. Outcome performance variables, such as success rate and response time, improved gradually across sessions for each exercise. The level of participant interest also increased across sessions (p<0.001). All participants completed the program without experiencing any adverse effects. Substantial clinical and kinematic improvements were observed after applying a novel 3D ARS training program, suggesting that this system can enhance lower extremity function and facilitate assessments of lower extremity kinematic capacity. ER - TY - JOUR AU - Inoue, D. AU - Cho, B. AU - Mori, M. AU - Kikkawa, Y. AU - Amano, T. AU - Nakamizo, A. AU - Yoshimoto, K. AU - Mizoguchi, M. AU - Tomikawa, M. AU - Hong, J. AU - Hashizume, M. AU - Sasaki, T. T1 - Preliminary study on the clinical application of augmented reality neuronavigation. JO - Journal of neurological surgery.
Part A, Central European neurosurgery Y1 - 2013 VL - 74 SP - 71 EP - 76 KW - Brain Neoplasms KW - pathology KW - surgery; Feasibility Studies; Female; Glioblastoma KW - surgery; Humans; Meningioma KW - surgery; Middle Aged; Neuroimaging KW - instrumentation KW - methods; Neuronavigation KW - methods; Neurosurgical Procedures KW - methods; Surgery KW - Computer-Assisted KW - methods; Treatment Outcome N1 - 2193-6323 Owner: NLM N2 - To develop an augmented reality (AR) neuronavigation system with Web cameras and examine its clinical utility. The utility of the system was evaluated in three patients with brain tumors. One patient had a glioblastoma and two patients had convexity meningiomas. Our navigation system comprised the open-source software 3D Slicer (Brigham and Women's Hospital, Boston, Massachusetts, USA), the infrared optical tracking sensor Polaris (Northern Digital Inc., Waterloo, Canada), and Web cameras. We prepared two different types of Web cameras: a handheld type and a headband type. Optical markers were attached to each Web camera. We used this system for skin incision planning before the operation, during craniotomy, and after dural incision. We were able to overlay these images in all cases. In Case 1, accuracy could not be evaluated because the tumor was not on the surface, though the overlay generally matched the outline of the external ear and the skin. In Cases 2 and 3, the augmented reality error was ∼2 to 3 mm. AR technology was examined with Web cameras in neurosurgical operations. Our results suggest that this technology is clinically useful in neurosurgical procedures, particularly for brain tumors close to the brain surface. ER - TY - JOUR AU - Janssen, Sabine AU - Bolte, Benjamin AU - Nonnekes, Jorik AU - Bittner, Marian AU - Bloem, Bastiaan R. AU - Heida, Tjitske AU - Zhao, Yan AU - van Wezel, Richard J. A. T1 - Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson's Disease. JO - Frontiers in neurology Y1 - 2017 VL - 8 SP - 279 EP - 279 KW - Parkinson's disease; augmented reality; external cueing; freezing of gait; smart glasses; visual cues; wearables N1 - 1664-2295 Owner: NLM N2 - External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses in comparison to conventional 3D transverse bars on the floor and auditory cueing by a metronome in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end of dose. For two conditions, augmented visual cues (bars/staircase) were displayed by the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing by a metronome, and no cueing. The number of FOG episodes and percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants.
There were no statistically significant differences in number of FOG episodes and percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability. ER - TY - JOUR AU - Jarc, Anthony M. AU - Stanley, Andrew A. AU - Clifford, Thomas AU - Gill, Inderbir S. AU - Hung, Andrew J. T1 - Proctors exploit three-dimensional ghost tools during clinical-like training scenarios: a preliminary study. JO - World journal of urology Y1 - 2017 VL - 35 SP - 957 EP - 965 KW - Animals; Clinical Competence; Education KW - Medical KW - Graduate KW - methods; Imaging KW - Three-Dimensional; Internship and Residency KW - methods; Mentoring KW - methods; Minimally Invasive Surgical Procedures KW - education; Models KW - Animal; Robotic Surgical Procedures KW - education KW - instrumentation; Simulation Training KW - methods; Swine; Augmented reality; Ghost tools; Performance metrics; Proctor; Surgeon training; Telementoring N1 - 1433-8726 Owner: NLM N2 - In this study, we examine three-dimensional (3D) proctoring tools (i.e., semitransparent ghost tools overlaid on the surgeon's field of view) during realistic surgical tasks. Additionally, we develop novel, quantitative measures of whether proctors exploit the additional capabilities offered by ghost tools. Seven proctor-trainee pairs completed realistic surgical tasks such as tissue dissection and suturing in a live porcine model using 3D ghost tools on the da Vinci Xi Surgical System. The usability and effectiveness of 3D ghost tools were evaluated using objective measures of proctor performance based on proctor hand movements and button presses, as well as post-study questionnaires. Proctors exploited the capabilities of ghost tools, such as 3D hand movement (p < 0.001), wristedness (p < 0.001), finger pinch gestures (p < 0.001), and bimanual hand motions (p < 0.001). The median ghost tool excursion distances across proctors in the x-, y-, and z-directions were 57.6, 31.9, and 50.7, respectively. Proctors and trainees consistently evaluated the ghost tools as effective across multiple categories of mentoring. Trainees found ghost tools more helpful than proctors across all categories (p < 0.05). Proctors exploit the augmented capabilities of 3D ghost tools during clinical-like training scenarios. Additionally, both proctors and trainees evaluated ghost tools as effective mentoring tools, thereby confirming previous studies on simple, inanimate tasks. Based on this preliminary work, advanced mentoring technologies, such as 3D ghost tools, stand to improve current telementoring and training technologies in robot-assisted minimally invasive surgery.
ER - TY - JOUR AU - Jeon, Yunseok AU - Choi, Seungpyo AU - Kim, Heechan T1 - Evaluation of a simplified augmented reality device for ultrasound-guided vascular access in a vascular phantom. JO - Journal of clinical anesthesia Y1 - 2014 VL - 26 SP - 485 EP - 489 KW - Anesthesiology KW - education; Catheterization KW - Central Venous KW - methods; Catheterization KW - Peripheral KW - methods; Clinical Competence; Education KW - Medical KW - Graduate KW - methods; Equipment Design; Humans; Needles; Phantoms KW - Imaging; Prospective Studies; Time Factors; Ultrasonography KW - Interventional KW - instrumentation KW - methods; User-Computer Interface; Augmented reality; Ultrasonography; Vascular access; Virtual reality N1 - 1873-4529 Owner: NLM N2 - To investigate whether a novel ultrasound device may be used with a simplified augmented reality technique, and to compare this device with conventional techniques during vascular access using a vascular phantom. Prospective, randomized study. Anesthesiology and Pain Medicine departments of a university-affiliated hospital. 20 physicians with no experience with ultrasound-guided techniques. All participants performed the vascular access technique on the vascular phantom model using both a conventional device and the new ultrasound device. Time and the number of redirections of the needle until aspiration of dye into a vessel of the vascular phantom were measured. The median/interquartile range of time was 39.5/41.7 seconds versus 18.6/10.0 seconds (P < 0.001) and number of redirections was 3/3.5 versus 1/0 (P < 0.001) for the conventional and novel ultrasound devices, respectively. During vascular access in a vascular phantom model, the novel device decreased the time and the number of redirections significantly. The device successfully improved the efficiency of the ultrasound-guided vascular access technique. ER - TY - JOUR AU - Juan, M.-Carmen AU - Mendez-Lopez, Magdalena AU - Perez-Hernandez, Elena AU - Albiol-Perez, Sergio T1 - Augmented reality for the assessment of children's spatial memory in real settings. JO - PloS one Y1 - 2014 VL - 9 SP - e113751 EP - e113751 KW - Child; Child KW - Preschool; Female; Humans; Male; Memory KW - Short-Term; Neuropsychological Tests; Software; Spatial Memory; Task Performance and Analysis N1 - 1932-6203 Owner: NLM N2 - Short-term memory can be defined as the capacity for holding a small amount of information in mind in an active state for a short period of time. Although some instruments have been developed to study spatial short-term memory in real environments, there are no instruments that are specifically designed to assess visuospatial short-term memory in a way that is attractive to children. In this paper, we present the ARSM (Augmented Reality Spatial Memory) task, the first Augmented Reality task that involves a user's movement to assess spatial short-term memory in healthy children. The experimental procedure of the ARSM task was designed to assess the children's ability to retain visuospatial information. They were individually asked to remember the real place where augmented reality objects were located. The children (N = 76) were divided into two groups: preschool (5-6 year olds) and primary school (7-8 year olds). We found a significant improvement in ARSM task performance in the older group. The correlations between scores for the ARSM task and traditional procedures were significant.
These traditional procedures were the Dot Matrix subtest of the computerized AWMA-2 battery, which assesses visuospatial short-term memory, and a parent questionnaire about the child's everyday spatial memory. Hence, we suggest that the ARSM task has high verisimilitude with spatial short-term memory skills in real life. In addition, we evaluated the ARSM task's usability and perceived satisfaction. The study revealed that the younger children were more satisfied with the ARSM task. This novel instrument could be useful in detecting visuospatial short-term memory difficulties that affect specific developmental navigational disorders and/or school academic achievement. ER - TY - JOUR AU - Juanes, Juan A. AU - Gómez, Juan J. AU - Peguero, Pedro D. AU - Ruisoto, Pablo T1 - Digital Environment for Movement Control in Surgical Skill Training. JO - Journal of medical systems Y1 - 2016 VL - 40 SP - 133 EP - 133 KW - Clinical Competence; General Surgery KW - education; Humans; Movement KW - physiology; Simulation Training; Touch KW - physiology; Frames; Infrared light; Leap motion; Motion capture; Surgical simulation; Technology N1 - 1573-689X Owner: NLM N2 - Intelligent environments are increasingly becoming useful scenarios for handling computers. Technological devices are practical tools for learning and acquiring clinical skills as part of the medical training process. Within the framework of advanced user interfaces, we present a technological application that uses Leap Motion to enhance interaction with the user during a laparoscopic surgical intervention and to integrate navigation through augmented reality images using hand gestures. The aim is to achieve a more natural interaction with the objects involved in a surgical intervention, which are augmented and linked to the user's hand movements. ER - TY - JOUR AU - Kaladji, Adrien AU - Dumenil, Aurélien AU - Castro, Miguel AU - Cardon, Alain AU - Becquemin, Jean-Pierre AU - Bou-Saïd, Benyebka AU - Lucas, Antoine AU - Haigron, Pascal T1 - Prediction of deformations during endovascular aortic aneurysm repair using finite element simulation JO - Computerized Medical Imaging and Graphics Y1 - 2013/march VL - 37 IS - 2 SP - 142 EP - 149 ER - TY - JOUR AU - Kang, Xin AU - Azizian, Mahdi AU - Wilson, Emmanuel AU - Wu, Kyle AU - Martin, Aaron D. AU - Kane, Timothy D. AU - Peters, Craig A. AU - Cleary, Kevin AU - Shekhar, Raj T1 - Stereoscopic augmented reality for laparoscopic surgery. JO - Surgical endoscopy Y1 - 2014 VL - 28 SP - 2227 EP - 2235 KW - Animals; Depth Perception; Imaging KW - Three-Dimensional; Laparoscopes; Laparoscopy KW - methods; Lighting; Models KW - Animal; Phantoms KW - Imaging; Surgery KW - Computer-Assisted KW - methods; Swine; Ultrasonography KW - Interventional; Video Recording N1 - 1432-2218 Owner: NLM N2 - Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video.
The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers them in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited no perceptible latency and acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and expand the capacity of minimally invasive laparoscopic surgeries. ER - TY - JOUR AU - Kantelhardt, Sven R. AU - Gutenberg, Angelika AU - Neulen, Axel AU - Keric, Naureen AU - Renovanz, Mirjam AU - Giese, Alf T1 - Video-Assisted Navigation for Adjustment of Image-Guidance Accuracy to Slight Brain Shift JO - Operative Neurosurgery Y1 - 2015/december VL - 11 IS - 4 SP - 504 EP - 511 ER - TY - JOUR AU - Karimpoor, Mahta AU - Tam, Fred AU - Strother, Stephen C. AU - Fischer, Corinne E. AU - Schweizer, Tom A. AU - Graham, Simon J. T1 - A computerized tablet with visual feedback of hand position for functional magnetic resonance imaging. JO - Frontiers in human neuroscience Y1 - 2015 VL - 9 SP - 150 EP - 150 KW - computerized tablet; ecological validity; fMRI; handwriting; human factors; neuropsychological tests; proprioception; visual feedback N1 - 1662-5161 Owner: NLM N2 - Neuropsychological tests (behavioral tasks that very commonly involve handwriting and drawing) are widely used in the clinic to detect abnormal brain function. Functional magnetic resonance imaging (fMRI) may be useful in increasing the specificity of such tests. However, performing complex pen-and-paper tests during fMRI involves engineering challenges. Previously, we developed an fMRI-compatible, computerized tablet system to address this issue. However, the tablet did not include visual feedback of hand position (VFHP), a human factors component that may be important for fMRI of certain patient populations. A real-time system was thus developed to provide VFHP and was integrated with the tablet in an augmented reality display. The effectiveness of the system was initially tested in young healthy adults who performed various handwriting tasks in front of a computer display with and without VFHP. Pilot fMRI of writing tasks was performed by two representative individuals with and without VFHP. Quantitative analysis of the behavioral results indicated improved writing performance with VFHP.
The pilot fMRI results suggest that writing with VFHP requires fewer neural resources than writing without VFHP to maintain similar behavior. Thus, the tablet system with VFHP is recommended for future fMRI studies involving patients with impaired brain function and where ecologically valid behavior is important. ER - TY - JOUR AU - Katić, Darko AU - Spengler, Patrick AU - Bodenstedt, Sebastian AU - Castrillon-Oberndorfer, Gregor AU - Seeberger, Robin AU - Hoffmann, Juergen AU - Dillmann, Ruediger AU - Speidel, Stefanie T1 - A system for context-aware intraoperative augmented reality in dental implant surgery. JO - International journal of computer assisted radiology and surgery Y1 - 2015 VL - 10 SP - 101 EP - 108 KW - Animals; Calibration; Dental Implantation KW - methods; Dental Implants; Swine; User-Computer Interface N1 - 1861-6429 Owner: NLM N2 - Large volumes of information in the OR are ignored by surgeons when the amount outpaces human mental processing abilities. We developed an augmented reality (AR) system for dental implant surgery that acts as an automatic information filter, selectively displaying only relevant information. The purpose is to reduce information overflow and offer intuitive image guidance. The system was evaluated in a pig cadaver experiment. Information filtering is implemented via rule-based situation interpretation with description logics. The interpretation is based on intraoperative distance measurements between anatomical structures and the dental drill using optical tracking. For AR, a head-mounted display is used, which was calibrated with a novel method based on SPAAM. To adapt to surgeon-specific preferences, we offer two alternative display formats: one with static and another with contact analog AR. The system made the surgery easier and showed ergonomic benefits, as assessed by a questionnaire. All relevant phases were recognized reliably. The new calibration showed significant improvements, while the deviation of the realized implants was <2.5 mm. The system allowed the surgeon to fully concentrate on the surgery itself. It offered greater flexibility since the surgeon received all relevant information, but was free to deviate from it. Accuracy of the realized implants remains an open issue and part of future work. ER - TY - JOUR AU - Katić, Darko AU - Wekerle, Anna-Laura AU - Görtler, Jochen AU - Spengler, Patrick AU - Bodenstedt, Sebastian AU - Röhl, Sebastian AU - Suwelack, Stefan AU - Kenngott, Hannes Götz AU - Wagner, Martin AU - Müller-Stich, Beat Peter AU - Dillmann, Rüdiger AU - Speidel, Stefanie T1 - Context-aware Augmented Reality in laparoscopic surgery. JO - Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society Y1 - 2013 VL - 37 SP - 174 EP - 182 KW - Algorithms; Animals; Artificial Intelligence; Hepatectomy KW - methods; Humans; Laparoscopy KW - methods; Liver KW - anatomy & histology KW - surgery; Surgery KW - Computer-Assisted KW - methods; Swine; User-Computer Interface N1 - 1879-0771 Owner: NLM N2 - Augmented Reality is a promising paradigm for intraoperative assistance. Yet, apart from technical issues, a major obstacle to its clinical application is the man-machine interaction. Visualization of unnecessary, obsolete or redundant information may cause confusion and distraction, reducing usefulness and acceptance of the assistance system. We propose a system capable of automatically filtering available information based on recognized phases in the operating room.
Our system offers a specific selection of available visualizations that best suit the surgeon's needs. The system was implemented for use in laparoscopic liver and gallbladder surgery and evaluated in phantom experiments in conjunction with expert interviews. ER - TY - JOUR AU - Katz, Brian AU - Dramas, Florian AU - Parseihian, Gaëtan AU - Gutierrez, Olivier AU - Kammoun, Slim AU - Brilhault, Adrien AU - Brunet, Lucie AU - Gallay, Mathieu AU - Oriola, Bernard AU - Auvray, Malika AU - Truillet, Philippe AU - Denis, Michel AU - Thorpe, Simon AU - Jouffrais, Christophe T1 - NAVIG: augmented reality guidance system for the visually impaired JO - Technology and Disability Y1 - 2012/november VL - 24 SP - 163 EP - 178 ER - TY - JOUR AU - Kenngott, Hannes G. AU - Wagner, Martin AU - Gondan, Matthias AU - Nickel, Felix AU - Nolden, Marco AU - Fetzer, Andreas AU - Weitz, Jürgen AU - Fischer, Lars AU - Speidel, Stefanie AU - Meinzer, Hans-Peter AU - Böckler, Dittmar AU - Büchler, Markus W. AU - Müller-Stich, Beat P. T1 - Real-time image guidance in laparoscopic liver surgery: first clinical experience with a guidance system based on intraoperative CT imaging. JO - Surgical endoscopy Y1 - 2014 VL - 28 SP - 933 EP - 940 KW - Carcinoma KW - Hepatocellular KW - diagnostic imaging KW - surgery; Cone-Beam Computed Tomography KW - methods; Equipment Design; Fiducial Markers; Hepatectomy KW - methods; Humans; Laparoscopy KW - methods; Liver Neoplasms KW - surgery; Male; Middle Aged; Phantoms KW - Imaging; Reproducibility of Results; Surgery KW - Computer-Assisted KW - instrumentation; Time Factors N1 - 1432-2218 Owner: NLM N2 - Laparoscopic liver surgery is particularly challenging owing to restricted access, risk of bleeding, and lack of haptic feedback. Navigation systems have the potential to improve information on the exact position of intrahepatic tumors, and thus facilitate oncological resection. This study aims to evaluate the feasibility of a commercially available augmented reality (AR) guidance system employing intraoperative robotic C-arm cone-beam computed tomography (CBCT) for laparoscopic liver surgery. A human liver-like phantom with 16 target fiducials was used to evaluate the Syngo iPilot® AR system. Subsequently, the system was used for the laparoscopic resection of a hepatocellular carcinoma in segment 7 of a 50-year-old male patient. In the phantom experiment, the AR system showed a mean target registration error of 0.96 ± 0.52 mm, with a maximum error of 2.49 mm. The patient successfully underwent the operation and showed no postoperative complications. The use of intraoperative CBCT and AR for laparoscopic liver resection is feasible and could be considered an option for future liver surgery in complex cases. ER - TY - JOUR AU - Keri, Zsuzsanna AU - Sydor, Devin AU - Ungi, Tamas AU - Holden, Matthew S. AU - McGraw, Robert AU - Mousavi, Parvin AU - Borschneck, Daniel P. AU - Fichtinger, Gabor AU - Jaeger, Melanie T1 - Computerized training system for ultrasound-guided lumbar puncture on abnormal spine models: a randomized controlled trial.
JO - Canadian journal of anaesthesia = Journal canadien d'anesthesie Y1 - 2015 VL - 62 SP - 777 EP - 784 KW - Adult; Computer-Assisted Instruction KW - methods; Female; Humans; Internship and Residency KW - methods; Male; Models KW - Anatomic; Needles; Phantoms KW - Imaging; Spinal Puncture KW - methods; Spine KW - abnormalities KW - diagnostic imaging; Ultrasonography KW - Interventional KW - methods N1 - 1496-8975 Owner: NLM N2 - A randomized controlled trial was carried out to determine whether Perk Tutor, a computerized training platform that displays an ultrasound image and real-time needle position in a three-dimensional (3D) anatomical model, would benefit residents learning ultrasound-guided lumbar puncture (LP) in simulation phantoms with abnormal spinal anatomy. Twenty-four residents were randomly assigned to either the Perk Tutor (P) or the Control (C) group and asked to perform an LP with ultrasound guidance on part-task trainers with spinal pathology. Group P was trained with the 3D display along with conventional ultrasound imaging, while Group C used conventional ultrasound only. Both groups were then tested solely with conventional ultrasound guidance on an abnormal spinal model not previously seen. We measured potential tissue damage, needle path in tissue, total procedure time, and needle insertion time. Procedural success rate was a secondary outcome. The needle tracking measurements (expressed as median [interquartile range; IQR]) in Group P vs Group C revealed less potential tissue damage (39.7 [21.3-42.7] cm² vs 128.3 [50.3-208.2] cm², respectively; difference 88.6; 95% confidence intervals [CI] 24.8 to 193.5; P = 0.01), a shorter needle path inside the tissue (426.0 [164.9-571.6] mm vs 629.7 [306.4-2,879.1] mm, respectively; difference 223.7; 95% CI 76.3 to 1,859.9; P = 0.02), and lower needle insertion time (30.3 [14.0-51.0] sec vs 59.1 [26.0-136.2] sec, respectively; difference 28.8; 95% CI 2.2 to 134.0; P = 0.05). Total procedure time and overall success rates between groups did not differ. Residents trained with augmented reality 3D visualization had better performance metrics on ultrasound-guided LP in pathological spine models. ER - TY - JOUR AU - Kersten-Oertel, Marta AU - Gerard, Ian AU - Drouin, Simon AU - Mok, Kelvin AU - Sirhan, Denis AU - Sinclair, David S. AU - Collins, D. Louis T1 - Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. JO - International journal of computer assisted radiology and surgery Y1 - 2015 VL - 10 SP - 1823 EP - 1836 KW - Adolescent; Arteriovenous Fistula KW - surgery; Cerebral Angiography; Craniotomy; Feasibility Studies; Female; Humans; Intracranial Aneurysm KW - surgery; Intracranial Arteriovenous Malformations KW - surgery; Male; Middle Aged; Neuronavigation KW - methods; Neurosurgical Procedures KW - methods; Operating Rooms; Tomography KW - X-Ray Computed; Vascular Surgical Procedures KW - methods; Aneurysm; Arteriovenous fistulae; Arteriovenous malformations; Augmented reality; Image-guided surgery; Neurovascular surgery; Visualization N1 - 1861-6429 Owner: NLM N2 - The aim of this report is to present a prototype augmented reality (AR) intra-operative brain imaging system. We present our experience of using this new neuronavigation system in neurovascular surgery and discuss the feasibility of this technology for aneurysms, arteriovenous malformations (AVMs), and arteriovenous fistulae (AVFs).
We developed an augmented reality system that uses an external camera to capture the live view of the patient on the operating room table and to merge this view with pre-operative volume-rendered vessels. We have extensively tested the system in the laboratory and have used the system in four surgical cases: one aneurysm, two AVMs and one AVF case. The developed AR neuronavigation system allows for precise patient-to-image registration and calibration of the camera, resulting in a well-aligned augmented reality view. Initial results suggest that augmented reality is useful for tailoring craniotomies, localizing vessels of interest, and planning resection corridors. Augmented reality is a promising technology for neurovascular surgery. However, for more complex anomalies such as AVMs and AVFs, better visualization techniques that allow one to distinguish between arteries and veins and determine the absolute depth of a vessel of interest are needed. ER - TY - JOUR AU - Khademi, Maryam AU - Hondori, Hossein Mousavi AU - Dodakian, Lucy AU - Cramer, Steve AU - Lopes, Cristina V. T1 - Comparing "pick and place" task in spatial Augmented Reality versus non-immersive Virtual Reality for rehabilitation setting. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2013 VL - 2013 SP - 4613 EP - 4616 KW - Activities of Daily Living; Adult; Algorithms; Exercise; Female; Hand KW - physiology; Humans; Male; Middle Aged; Movement; Ocular Physiological Phenomena; Stroke Rehabilitation; User-Computer Interface; Video Games; Virtual Reality Exposure Therapy KW - instrumentation KW - methods N1 - 1557-170X Owner: NLM N2 - Introducing computer games to the rehabilitation market led to the development of numerous Virtual Reality (VR) training applications. Although VR has provided tremendous benefit to patients and caregivers, it has inherent limitations, some of which might be solved by replacing it with Augmented Reality (AR). The task of pick-and-place, which is part of many activities of daily living (ADLs), is among the functions most affected in stroke patients and one they most expect to recover. We developed an exercise consisting of moving an object between various points, following a flash light that indicates the next target. The results show superior performance of subjects in the spatial AR versus the non-immersive VR setting. This could be due to the extraneous hand-eye coordination required in VR, which is eliminated in spatial AR. ER - TY - CONF AU - Kilgus, T. AU - Bux, R. AU - Franz, A. M. AU - Johnen, W. AU - Heim, E. AU - Fangerau, M. AU - Müller, M. AU - Yen, K. AU - Maier-Hein, L. AU - Webster, Robert J. AU - Yaniv, Ziv R. T1 - Structure Sensor for mobile markerless augmented reality PB - SPIE Y1 - 2016/march ER - TY - JOUR AU - Kilgus, Thomas AU - Heim, Eric AU - Haase, Sven AU - Prüfer, Sabine AU - Müller, Michael AU - Seitel, Alexander AU - Fangerau, Markus AU - Wiebe, Tamara AU - Iszatt, Justin AU - Schlemmer, Heinz-Peter AU - Hornegger, Joachim AU - Yen, Kathrin AU - Maier-Hein, Lena T1 - Mobile markerless augmented reality and its application in forensic medicine.
JO - International journal of computer assisted radiology and surgery Y1 - 2015/may VL - 10 SP - 573 EP - 586 KW - Autopsy KW - methods; Feasibility Studies; Fiducial Markers; Forensic Medicine; Humans; Pilot Projects; Software; Tomography KW - X-Ray Computed KW - methods N1 - 1861-6429 Owner: NLM N2 - During autopsy, forensic pathologists today mostly rely on visible indication, tactile perception and experience to determine the cause of death. Although computed tomography (CT) data are often available for the bodies under examination, these data are rarely used due to the lack of radiological workstations in the pathological suite. The data may prevent the forensic pathologist from damaging evidence by allowing them to associate, for example, external wounds with internal injuries. To facilitate this, we propose a new multimodal approach for intuitive visualization of forensic data and evaluate its feasibility. A range camera is mounted on a tablet computer and positioned such that the camera simultaneously captures depth and color information of the body. A server estimates the camera pose based on surface registration of CT and depth data to allow for augmented reality visualization of the internal anatomy directly on the tablet. Additionally, projection of color information onto the CT surface is implemented. We validated the system in a postmortem pilot study using fiducials attached to the skin for quantification of a mean target registration error of [Formula: see text] mm. The system is mobile, markerless, intuitive and real-time capable with sufficient accuracy. It can support the forensic pathologist during autopsy with augmented reality and textured surfaces. Furthermore, the system enables multimodal documentation for presentation in court. Despite its preliminary prototype status, it has high potential due to its low price and simplicity. ER - TY - JOUR AU - Kim, Duk Nyeon AU - Chae, You Seong AU - Kim, Min Young T1 - X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery. JO - International journal of computer assisted radiology and surgery Y1 - 2016 VL - 11 SP - 529 EP - 541 KW - Calibration; Equipment Design; Fluoroscopy KW - instrumentation; Humans; Imaging KW - Three-Dimensional KW - instrumentation; Neurosurgical Procedures KW - methods; Optical Devices; Phantoms KW - Imaging; Tomography KW - X-Ray Computed KW - instrumentation; X-Rays; Augmented reality; Navigation; Stereo images; Three-dimensional coordinates; X-ray fluoroscopy N1 - 1861-6429 Owner: NLM N2 - In neurosurgery, an image-guided operation is performed to confirm that the surgical instruments reach the exact lesion position. Among the multiple imaging modalities, an X-ray fluoroscope mounted on a C- or O-arm is widely used for monitoring the position of surgical instruments and the target position in the patient. However, frequently used fluoroscopy can result in relatively high radiation doses, particularly for complex interventional procedures. The proposed system can reduce radiation exposure and provide accurate three-dimensional (3D) position information for surgical instruments and the target position. X-ray and optical stereo vision systems have been proposed for the C- or O-arm. The two subsystems share the same optical axis and are calibrated simultaneously. This provides easy augmentation of the camera image and the X-ray image. Further, the 3D measurements of both systems can be defined in a common coordinate space.
The proposed dual stereoscopic imaging system is designed and implemented for mounting on an O-arm. The calibration error of the 3D coordinates of the optical stereo and X-ray stereo is within 0.1 mm in terms of the mean and the standard deviation. Further, image augmentation with the camera image and the X-ray image using an artificial skull phantom is achieved. As the developed dual stereoscopic imaging system provides 3D coordinates of the point of interest in both optical images and fluoroscopic images, it can be used by surgeons to confirm the position of surgical instruments in 3D space with minimal radiation exposure and to verify whether the instruments reach the surgical target observed in fluoroscopic images. ER - TY - JOUR AU - KleinJan, Gijs H. AU - van den Berg, Nynke S. AU - van Oosterom, Matthias N. AU - Wendler, Thomas AU - Miwa, Mitsuharu AU - Bex, Axel AU - Hendricksen, Kees AU - Horenblas, Simon AU - van Leeuwen, Fijs W. B. T1 - Toward (Hybrid) Navigation of a Fluorescence Camera in an Open Surgery Setting. JO - Journal of nuclear medicine : official publication, Society of Nuclear Medicine Y1 - 2016 VL - 57 SP - 1650 EP - 1653 KW - Humans; Indocyanine Green KW - chemistry; Multimodal Imaging KW - instrumentation; Optical Imaging KW - instrumentation; Preoperative Period; Sentinel Lymph Node Biopsy; Single Photon Emission Computed Tomography Computed Tomography; Surgery KW - Computer-Assisted KW - instrumentation; Technetium Tc 99m Aggregated Albumin KW - chemistry; SPECT/CT; augmented reality; fluorescence imaging; navigation; sentinel node biopsy N1 - 1535-5667 Owner: NLM N2 - With the introduction of the hybrid tracer indocyanine green (ICG)-(99m)Tc-nanocolloid, a direct relation between preoperative imaging and intraoperative fluorescence guidance was established. However, fluorescence guidance remains limited by its superficial nature. This study evaluated the feasibility of a nuclear medicine-based navigation concept that allowed intraoperative positioning of a fluorescence camera (FC) in the vicinity of preoperatively defined ICG-(99m)Tc-nanocolloid-containing sentinel nodes (SNs). Five patients with penile cancer scheduled for SN biopsy were injected with ICG-(99m)Tc-nanocolloid, followed by preoperative SPECT/CT imaging. The navigation device was used to provide a real-time augmented reality overlay of the SPECT/CT images and the video output of the FC. This overlay was then used for FC navigation. SPECT/CT identified 13 SNs in 9 groins. FC navigation was successful for all 12 intraoperatively evaluated SNs (average error, 8.8 mm; range, 0-20 mm). This study reveals the potential benefits of FC navigation during open surgery procedures. ER - TY - JOUR AU - KleinJan, Gijs H. AU - Karakullukçu, Baris AU - Klop, W. Martin C. AU - Engelen, Thijs AU - van den Berg, Nynke S. AU - van Leeuwen, Fijs W. B. T1 - Introducing navigation during melanoma-related sentinel lymph node procedures in the head-and-neck region JO - EJNMMI Research Y1 - 2017/august VL - 7 IS - 1 ER - TY - JOUR AU - Kocev, Bojan AU - Ritter, Felix AU - Linsen, Lars T1 - Projector-based surgeon-computer interaction on deformable surfaces.
JO - International journal of computer assisted radiology and surgery Y1 - 2014 VL - 9 SP - 301 EP - 312 KW - Calibration; Computer Simulation; Gestures; Humans; Male; Operating Rooms; Surgery KW - Computer-Assisted KW - methods; User-Computer Interface N1 - 1861-6429 Owner: NLM N2 - Providing intuitive and easy-to-operate interaction for medical augmented reality is essential for use in the operating room. Commonly, intra-operative navigation information is displayed on an installed monitor, requiring the operating surgeon to change focus from the monitor to the surgical site and vice versa during navigation. Projector-based augmented reality has the potential to alleviate this problem. The aim of our work is to use a projector for visualization and to provide intuitive means for direct interaction with the projected information. A consumer-grade projector is used to visualize preoperatively defined surgical planning data. The projection of the virtual information is possible on any deformable surface, and the surgeon can interact with the presented virtual information. A Microsoft Kinect camera is used to capture both the surgeon interactions and the deformations of the surface over time. After calibration of the projector and Kinect camera, the fingertips are localized automatically. A point cloud surface representation is used to determine the surgeon's interaction with the projected virtual information. Interaction is detected by estimating the proximity of the surgeon's fingertips to the interaction zone and applying projector-Kinect calibration information. Interaction is performed using multi-touch gestures. In our experimental surgical scenario, the surgeon stands in front of the Microsoft Kinect camera, while relevant medical information is projected on the interaction zone. A hand wave gesture initiates the tracking of the hand. The user can then interact with the projected virtual information according to the defined multi-touch-based gestures. Thus, all information such as preoperative planning data is provided to the surgeon and his/her team intra-operatively in a familiar context. We enabled the projection of the virtual information on an arbitrarily shaped surface and used a Microsoft Kinect camera to capture the interaction zone and the surgeon's actions. The system eliminates the need for the surgeon to alternately view the surgical site and the monitor, removes unnecessary distractions, and may enhance the surgeon's performance. ER - TY - JOUR AU - Kong, Seong-Ho AU - Haouchine, Nazim AU - Soares, Renato AU - Klymchenko, Andrey AU - Andreiuk, Bohdan AU - Marques, Bruno AU - Shabat, Galyna AU - Piechaud, Thierry AU - Diana, Michele AU - Cotin, Stéphane AU - Marescaux, Jacques T1 - Robust augmented reality registration method for localization of solid organs' tumors using CT-derived virtual biomechanical model and fluorescent fiducials.
JO - Surgical endoscopy Y1 - 2017 VL - 31 SP - 2863 EP - 2871 KW - Animals; Biomechanical Phenomena; Fiducial Markers; Finite Element Analysis; Fluorescent Dyes; Imaging KW - Three-Dimensional KW - methods; In Vitro Techniques; Kidney KW - diagnostic imaging KW - surgery; Laparoscopy; Models KW - Anatomic; Neoplasms KW - surgery; Surgery KW - Computer-Assisted KW - methods; Swine; Tomography KW - X-Ray Computed; Virtual Reality; Augmented reality; Automatic registration; Fiducials; Finite element modeling; Fluorescence-guided surgery; Optical imaging; Solid organ tumor N1 - 1432-2218 Owner: NLM N2 - Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed onto real-time images, enabling transparency visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account deformations of inner structures. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scans. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of the pseudo-tumors' location was evaluated with a CT scan in the deformed state (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the tumor estimated by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in the in vivo kidney and well visualized in near-infrared mode, enabling accurate automatic registration of the virtual model on the laparoscopic images. Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate the deformation of solid organs' surfaces to their inner structures, including tumors, with good accuracy and automated, robust tracking. ER - TY - JOUR AU - Koreeda, Yuta AU - Kobayashi, Yo AU - Ieiri, Satoshi AU - Nishio, Yuya AU - Kawamura, Kazuya AU - Obata, Satoshi AU - Souzaki, Ryota AU - Hashizume, Makoto AU - Fujie, Masakatsu G. T1 - Virtually transparent surgical instruments in endoscopic surgery with augmentation of obscured regions.
JO - International journal of computer assisted radiology and surgery Y1 - 2016 VL - 11 SP - 1927 EP - 1936 KW - Animals; Endoscopes; Endoscopy KW - methods; Equipment Design; Humans; Image Processing KW - Computer-Assisted KW - methods; Surgery KW - methods; Surgical Instruments; Suture Techniques; Swine; User-Computer Interface; Augmented reality; Computer-assisted surgery; Endoscopic surgery; Laparoscopic surgery; Medical image processing; Visualization N1 - 1861-6429 Owner: NLM N2 - We developed and evaluated a visual compensation system that allows surgeons to visualize obscured regions in real time, such that the surgical instrument appears virtually transparent. The system consists of two endoscopes: a main endoscope to observe the surgical environment, and a supporting endoscope to render the region hidden from view by surgical instruments. The view captured by the supporting endoscope is transformed to simulate the view from the main endoscope, segmented to the shape of the hidden regions, and superimposed on the main endoscope image so that the surgical instruments look transparent. A prototype device was benchmarked for processing time and superimposition rendering error. Then, it was evaluated in a training environment with 22 participants performing a backhand needle driving task with needle exit point error as the criterion. Lastly, we conducted an in vivo study. In the benchmark, the mean processing time was 62.4 ms, which was lower than the processing time accepted in remote surgeries. The mean superimposition error of the superimposed image was 1.4 mm. In the training environment, needle exit point error with the system decreased significantly for experts compared with the condition without the system. This change was not significant for novices. In the in vivo study, our prototype enabled visualization of needle exit points during anastomosis. The benchmark suggests that the implemented system had acceptable performance, and evaluation in the training environment demonstrated improved surgical task outcomes in expert surgeons. We will conduct a more comprehensive in vivo study in the future. ER - TY - JOUR AU - Kosterhon, Michael AU - Gutenberg, Angelika AU - Kantelhardt, Sven Rainer AU - Archavlis, Elefterios AU - Giese, Alf T1 - Navigation and Image Injection for Control of Bone Removal and Osteotomy Planes in Spine Surgery JO - Operative Neurosurgery Y1 - 2017/january VL - 13 IS - 2 SP - 297 EP - 304 ER - TY - JOUR AU - Kramers, Matthew AU - Armstrong, Ryan AU - Bakhshmand, Saeed M. AU - Fenster, Aaron AU - de Ribaupierre, Sandrine AU - Eagleson, Roy T1 - Evaluation of a mobile augmented reality application for image guidance of neurosurgical interventions.
JO - Studies in health technology and informatics Y1 - 2014 VL - 196 SP - 204 EP - 208 KW - Drainage KW - methods; Eyeglasses; Humans; Hydrocephalus KW - surgery; Neurosurgical Procedures KW - methods; Spatial Processing; Surgery KW - Computer-Assisted KW - standards; User-Computer Interface N1 - 1879-8365 Owner: NLM N2 - Image guidance can provide surgeons with valuable contextual information during a medical intervention. Often, image guidance systems require considerable infrastructure, setup time, and operator experience to be utilized. Certain procedures performed at the bedside are susceptible to navigational errors that can lead to complications. We present an application for mobile devices that can provide image guidance using augmented reality to assist in performing neurosurgical tasks. A methodology is outlined that evaluates this mode of visualization from the standpoint of perceptual localization, depth estimation, and pointing performance, in scenarios derived from a neurosurgical targeting task. By measuring user variability and speed, we can report objective metrics of performance for our augmented reality guidance system. ER - TY - JOUR AU - Kranzfelder, Michael AU - Wilhelm, Dirk AU - Doundoulakis, Manos AU - Schneider, Armin AU - Bauer, Margit AU - Reiser, Silvano AU - Meining, Alexander AU - Feussner, Hubertus T1 - A probe-based electromagnetic navigation system to integrate computed tomography during upper gastrointestinal endoscopy. JO - Endoscopy Y1 - 2014 VL - 46 SP - 302 EP - 305 KW - Aged; Cohort Studies; Electromagnetic Phenomena; Endoscopy KW - Gastrointestinal KW - methods; Female; Gastrointestinal Neoplasms KW - diagnosis KW - diagnostic imaging KW - surgery; Humans; Male; Middle Aged; Multimodal Imaging KW - instrumentation; Preoperative Care KW - methods; Sensitivity and Specificity; Tomography KW - X-Ray Computed KW - methods N1 - 1438-8812 Owner: NLM N2 - For preoperative work-up, an examination tool that visualizes separately compiled diagnostics in augmented reality would be desirable. We developed a probe-based electromagnetic navigation system, which can be passed through the working channel of an endoscope, to integrate computed tomography (CT) information during upper gastrointestinal endoscopy. The target registration error (TRE) of the system was evaluated experimentally and clinically. A total of 24 study patients with upper gastrointestinal cancer were included in the study. The cancerous lesion was endoscopically located (mean duration 8.4 minutes, range 7.1 - 23.2) and the TRE (coronal, transverse, sagittal layer) was measured by comparing the distance between the navigation probe (at the tip of the endoscope) and the target lesion shown on the corresponding CT cross section. Experimental evaluations showed an accuracy in line with the system's inherent failure rate, with a median TRE of 2.8 mm (IQR 1.8 - 4.3), 2.2 mm (0.4 - 3.7), and 2.8 mm (1.1 - 5.9) in the coronal, transverse, and sagittal planes, respectively. Clinical evaluation revealed a median TRE of 4.8 mm (1.9 - 10.1), 3.9 mm (0.7 - 7.1), and 4.2 mm (0.9 - 8.9), respectively. No complications occurred during navigated endoscopy. The probe-based electromagnetic navigation system revealed high accuracy (TRE < 5 mm), facilitating improved interpretation of endoluminal imaging. ER - TY - CONF AU - Krueger, Evan AU - Messier, Erik AU - Linte, Cristian A. AU - Diaz, Gabriel AU - Kupinski, Matthew A. AU - Nishikawa, Robert M.
T1 - An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training PB - SPIE Y1 - 2017/march ER - TY - JOUR AU - Küçük, Sevda AU - Kapakin, Samet AU - Göktaş, Yüksel T1 - Learning anatomy via mobile augmented reality: Effects on achievement and cognitive load. JO - Anatomical sciences education Y1 - 2016 VL - 9 SP - 411 EP - 421 KW - Adolescent; Anatomy KW - education; Cognition; Educational Measurement; Female; Humans; Male; User-Computer Interface; Young Adult; MagicBook; gross anatomy education; learning anatomy; medical education; mobile augmented reality; mobile learning; neuroanatomy education; undergraduate education N1 - 1935-9780 Owner: NLM N2 - Augmented reality (AR), a new generation of technology, has attracted the attention of educators in recent years. In this study, a MagicBook was developed for a neuroanatomy topic by using mobile augmented reality (mAR) technology. This technology integrates virtual learning objects into the real world and allows users to interact with the environment using mobile devices. The purpose of this study was to determine the effects of learning anatomy via mAR on medical students' academic achievement and cognitive load. A mixed-method design was applied in the study. The random sample consisted of 70 second-year undergraduate medical students: 34 in an experimental group and 36 in a control group. An academic achievement test and a cognitive load scale were used as data collection tools. A one-way MANOVA test was used for analysis. The experimental group, which used mAR applications, reported higher achievement and lower cognitive load. The use of mAR applications in anatomy education contributed to the formation of an effective and productive learning environment. Student cognitive load decreased as abstract information became concrete in printed books via multimedia materials in mAR applications. Additionally, students were able to access the materials in the MagicBook anytime and anywhere they wanted. The mobile learning approach helped students learn better by exerting less cognitive effort. Moreover, the sensory experience and real-time interaction with the environment may provide learning satisfaction and enable students to structure their knowledge to complete the learning tasks. Anat Sci Educ 9: 411-421. © 2016 American Association of Anatomists. ER - TY - JOUR AU - Kurti, Arianit AU - Chow, Joyce A. AU - Törnros, Martin E. AU - Waltersson, Marie AU - Richard, Helen AU - Kusoffsky, Madeleine AU - Lundström, Claes F. T1 - A design study investigating augmented reality and photograph annotation in a digitalized grossing workstation JO - Journal of Pathology Informatics Y1 - 2017 VL - 8 IS - 1 SP - 31 EP - 31 ER - TY - JOUR AU - Lahanas, Vasileios AU - Loukas, Constantinos AU - Smailis, Nikolaos AU - Georgiou, Evangelos T1 - A novel augmented reality simulator for skills assessment in minimal invasive surgery. JO - Surgical endoscopy Y1 - 2015 VL - 29 SP - 2224 EP - 2234 KW - Computer Simulation; Humans; Laparoscopy KW - education; User-Computer Interface N1 - 1432-2218 Owner: NLM N2 - Over the past decade, simulation-based training has come to the foreground as an efficient method for training and assessment of surgical skills in minimal invasive surgery. Box-trainers and virtual reality (VR) simulators have been introduced into the teaching curricula and have to some extent replaced the traditional model of training based on animals or cadavers.
Augmented reality (AR) is a new technology that allows blending of VR elements and real objects within a real-world scene. In this paper, we present a novel AR simulator for the assessment of basic laparoscopic skills. The components of the proposed system include a box-trainer, a camera, and a set of laparoscopic tools equipped with custom-made sensors that allow interaction with VR training elements. Three AR tasks were developed, focusing on basic skills such as perception of depth of field, hand-eye coordination and bimanual operation. The construct validity of the system was evaluated via a comparison between two experience groups: novices with no experience in laparoscopic surgery and experienced surgeons. The observed metrics included task execution time, tool path length and two task-specific errors. The study also included a feedback questionnaire requiring participants to evaluate the face-validity of the system. Between-group comparison demonstrated highly significant differences (p < 0.01) in all performance metrics and tasks, denoting the simulator's construct validity. Qualitative analysis of the instruments' trajectories highlighted differences between novices and experts regarding smoothness and economy of motion. Subjects' ratings on the feedback questionnaire highlighted the face-validity of the training system. The results highlight the potential of the proposed simulator to discriminate groups with different expertise, providing a proof of concept for the potential use of AR as a core technology for laparoscopic simulation training. ER - TY - JOUR AU - Lam, Chee Kiang AU - Sundaraj, Kenneth AU - Sulaiman, Mohd Nazri AU - Qamarruddin, Fazilawati A. T1 - Virtual phacoemulsification surgical simulation using visual guidance and performance parameters as a feasible proficiency assessment tool. JO - BMC ophthalmology Y1 - 2016 VL - 16 SP - 88 EP - 88 KW - Adult; Capsulorhexis KW - education; Clinical Competence KW - standards; Computer Simulation; Education KW - Medical KW - methods; Educational Measurement KW - methods; Feasibility Studies; Female; Humans; Lens Implantation KW - Intraocular KW - education; Male; Middle Aged; Phacoemulsification KW - education; Cataract surgery; Performance assessment; Simulation; Surgical training; Virtual reality N1 - 1471-2415 Owner: NLM N2 - Computer-based surgical training is believed to be capable of providing a controlled virtual environment for medical professionals to conduct standardized training or new experimental procedures on virtual human body parts, which are generated and visualised three-dimensionally on a digital display unit. The main objective of this study was to conduct virtual phacoemulsification cataract surgery to compare performance by users with different proficiency on a virtual reality platform equipped with a visual guidance system and a set of performance parameters. Ten experienced ophthalmologists and six medical residents were invited to perform the virtual surgery of the four main phacoemulsification cataract surgery procedures - 1) corneal incision (CI), 2) capsulorhexis (C), 3) phacoemulsification (P), and 4) intraocular lens implantation (IOL). Each participant was required to perform the complete phacoemulsification cataract surgery using the simulator for three consecutive trials (a standardized 30-min session).
The participants' performance during the three trials was supported by a visual guidance system and evaluated against a set of parameters implemented in the simulator's performance evaluation system. Subjects with greater experience obtained significantly higher scores in all four main procedures - CI1 (p = 0.038), CI2 (p = 0.041), C1 (p = 0.032), P2 (p = 0.035) and IOL1 (p = 0.011). It was also found that experience improved the completion times in all modules - CI4 (p = 0.026), C4 (p = 0.018), P6 (p = 0.028) and IOL4 (p = 0.029). Positive correlation was observed between experience and anti-tremor - C2 (p = 0.026), P3 (p = 0.015), P4 (p = 0.042) and IOL2 (p = 0.048), and similarly with anti-rupture - CI3 (p = 0.013), C3 (p = 0.027), P5 (p = 0.021) and IOL3 (p = 0.041). No significant difference was observed between the groups with regard to P1 (p = 0.077). Statistical analysis of the results obtained from repetitive trials between two groups of users reveals that augmented virtual reality (VR) simulators have the potential and capability to be used as a feasible proficiency assessment tool for the complete four main procedures of phacoemulsification cataract surgery (p < 0.05), indicating the construct validity of the modules simulated with augmented visual guidance and assessed through performance parameters. ER - TY - JOUR AU - Lanchon, Cecilia AU - Custillon, Guillaume AU - Moreau-Gaudry, Alexandre AU - Descotes, Jean-Luc AU - Long, Jean-Alexandre AU - Fiard, Gaelle AU - Voros, Sandrine T1 - Augmented Reality Using Transurethral Ultrasound for Laparoscopic Radical Prostatectomy: Preclinical Evaluation. JO - The Journal of urology Y1 - 2016 VL - 196 SP - 244 EP - 250 KW - Feasibility Studies; Humans; Imaging KW - Three-Dimensional; Laparoscopy KW - methods; Male; Models KW - Anatomic; Prostate KW - diagnostic imaging KW - surgery; Prostatectomy KW - methods; Surgery KW - Computer-Assisted KW - methods; Ultrasonography KW - Interventional KW - methods; diagnostic imaging; laparoscopy; prostatic neoplasms; ultrasonography; urethra N1 - 1527-3792 Owner: NLM N2 - To guide the surgeon during laparoscopic or robot-assisted radical prostatectomy, an innovative laparoscopic/ultrasound fusion platform was developed using a motorized 3-dimensional transurethral ultrasound probe. We present what is to our knowledge the first preclinical evaluation of 3-dimensional prostate visualization using transurethral ultrasound and the preliminary results of this new augmented reality approach. The transurethral probe and laparoscopic/ultrasound registration were tested on realistic prostate phantoms made of standard polyvinyl chloride. The quality of transurethral ultrasound images and the detection of passive markers placed on the prostate surface were evaluated on 2-dimensional dynamic views and 3-dimensional reconstructions. The feasibility, precision and reproducibility of laparoscopic/transurethral ultrasound registration were then determined using 4, 5, 6 and 7 markers to assess the optimal number needed. The root mean square error was calculated for each registration, and the median root mean square error and IQR were calculated according to the number of markers. The transurethral ultrasound probe was easy to manipulate and the prostatic capsule was well visualized in 2 and 3 dimensions. Passive markers could be precisely localized in the volume. Laparoscopic/transurethral ultrasound registration procedures were performed on 74 phantoms of various sizes and shapes.
All were successful. The median root mean square error of 1.1 mm (IQR 0.8-1.4) was significantly associated with the number of landmarks (p = 0.001). The highest accuracy was achieved using 6 markers. However, prostate volume did not affect registration precision. Transurethral ultrasound provided high-quality prostate reconstruction and easy marker detection. Laparoscopic/ultrasound registration was successful with acceptable millimeter precision. Further investigations are necessary to achieve submillimeter accuracy and assess feasibility in a human model. ER - TY - JOUR AU - Lee, Byoung-Hee T1 - Clinical usefulness of augmented reality using infrared camera based real-time feedback on gait function in cerebral palsy: a case study. JO - Journal of physical therapy science Y1 - 2016 VL - 28 SP - 1387 EP - 1391 KW - Augmented reality; Cerebral palsy; Real-time feedback N1 - 0915-5287 Owner: NLM N2 - [Purpose] This study investigated the effects of real-time feedback using infrared camera recognition technology-based augmented reality in gait training for children with cerebral palsy. [Subjects] Two subjects with cerebral palsy were recruited. [Methods] In this study, augmented reality-based real-time feedback training was conducted for the subjects in two 30-minute sessions per week for four weeks. Spatiotemporal gait parameters were used to measure the effect of augmented reality-based real-time feedback training. [Results] Velocity, cadence, bilateral step and stride length, and functional ambulation improved after the intervention in both cases. [Conclusion] Although additional follow-up studies of the augmented reality-based real-time feedback training are required, the results of this study demonstrate that it improved the gait ability of two children with cerebral palsy. These findings suggest a variety of applications of conservative therapeutic methods which require future clinical trials. ER - TY - CONF AU - Lee, Changho AU - Han, Seunghoon AU - Kim, Sehui AU - Jeon, Mansik AU - Kim, Jeehyun AU - Kim, Chulhong AU - Oraevsky, Alexander A. AU - Wang, Lihong V. T1 - Intraoperative surgical photoacoustic microscopy (IS-PAM) using augmented reality PB - SPIE Y1 - 2014/march ER - TY - JOUR AU - Lee, Changho AU - Lee, Donghyun AU - Zhou, Qifa AU - Kim, Jeehyun AU - Kim, Chulhong T1 - Real-time Near-infrared Virtual Intraoperative Surgical Photoacoustic Microscopy. JO - Photoacoustics Y1 - 2015 VL - 3 SP - 100 EP - 106 N1 - 2213-5979 Owner: NLM N2 - We developed a near-infrared (NIR) virtual intraoperative surgical photoacoustic microscopy (NIR-VISPAM) system that combines a conventional surgical microscope and an NIR light photoacoustic microscopy (PAM) system. NIR-VISPAM can simultaneously visualize PA B-scan images at a maximum display rate of 45 Hz and display enlarged microscopic images on the surgeon's view plane through the ocular lenses of the surgical microscope as augmented reality. The use of invisible NIR light eliminated the disturbance to the surgeon's vision caused by the visible PAM excitation laser used in a previous report. Further, the maximum permissible laser pulse energy at this wavelength is approximately 5 times higher than that in the visible spectral range. The use of a needle-type ultrasound transducer without any water bath for acoustic coupling can enhance convenience in an intraoperative environment. We successfully guided needle insertion and injected carbon particles in biological tissues ex vivo and in melanoma-bearing mice in vivo.
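Registration accuracy in the Lanchon et al. entry above is reported as a root mean square error over matched fiducial markers. A minimal sketch of the standard least-squares rigid alignment (the Kabsch/SVD method) with an RMSE readout, assuming matched 3D marker coordinates in the two frames; this is the generic textbook procedure, not necessarily the authors' implementation, and all names and values are illustrative:

    import numpy as np

    def rigid_register(src, dst):
        # Least-squares rigid transform (R, t) mapping src (N, 3) onto dst (N, 3).
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)                          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        R = Vt.T @ D @ U.T
        return R, dst_c - R @ src_c

    def rms_error(src, dst, R, t):
        residuals = (R @ src.T).T + t - dst
        return np.sqrt((residuals ** 2).sum(axis=1).mean())

    # Hypothetical marker coordinates (mm) in the ultrasound and laparoscope frames.
    us = np.random.rand(6, 3) * 50.0
    lap = us + np.array([5.0, -3.0, 10.0]) + np.random.normal(0.0, 0.5, (6, 3))
    R, t = rigid_register(us, lap)
    print(round(rms_error(us, lap, R, t), 2), "mm RMSE")

With noise-free correspondences the RMSE approaches zero; with measurement noise it reflects the residual fiducial registration error.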
ER - TY - CONF AU - Lee, Changho AU - Lee, Donghyun AU - Zhou, Qifa AU - Kim, Jeehyun AU - Kim, Chulhong AU - Ntziachristos, Vasilis AU - Zemp, Roger T1 - Virtual intraoperative surgical photoacoustic microscopy PB - SPIE Y1 - 2015/july ER - TY - JOUR AU - Lee, O. AU - Lee, K. AU - Oh, C. AU - Kim, K. AU - Kim, M. T1 - Prototype tactile feedback system for examination by skin touch. JO - Skin research and technology : official journal of International Society for Bioengineering and the Skin (ISBS) [and] International Society for Digital Imaging of Skin (ISDIS) [and] International Society for Skin Imaging (ISSI) Y1 - 2014 VL - 20 SP - 307 EP - 314 KW - Dermoscopy KW - instrumentation KW - methods; Diagnosis KW - Computer-Assisted KW - methods; Equipment Design; Equipment Failure Analysis; Feasibility Studies; Feedback; Humans; Palpation KW - methods; Physical Stimulation KW - methods; Pilot Projects; Psoriasis KW - diagnosis KW - physiopathology; Reproducibility of Results; Robotics KW - methods; Sensitivity and Specificity; Skin Physiological Phenomena; Systems Integration; Touch; User-Computer Interface; disparity; haptics; psoriasis; skin; tactile feedback N1 - 1600-0846 Owner: NLM N2 - Diagnosis of conditions such as psoriasis and atopic dermatitis, in the case of induration, involves palpating the affected area by hand and then selecting a rating score. However, the score is determined based on the tester's experience and standards, making it subjective. To provide tactile feedback on the skin, we developed a prototype tactile feedback system that simulates skin wrinkles with a PHANToM OMNI haptic device. To provide the user with tactile feedback on skin wrinkles, a visual and haptic Augmented Reality system was developed. First, a pair of stereo skin images obtained by a stereo camera generates a disparity map of skin wrinkles. Second, the generated disparity map is sent to an implemented tactile rendering algorithm that computes a reaction force according to the user's interaction with the skin image. We first obtained a stereo image of skin wrinkles from the in vivo stereo imaging system, which has a baseline of 50.8 μm, and obtained the disparity map with a graph cuts algorithm. The left image is displayed on the monitor to enable the user to recognize the location visually. The disparity map of the skin wrinkle image sends skin wrinkle information as a tactile response to the user through a haptic device. We successfully developed a tactile feedback system for virtual skin wrinkle simulation by means of a commercialized haptic device that provides the user with a single point of contact to feel the surface roughness of a virtual skin sample. ER - TY - JOUR AU - Lee, Sing Chun AU - Fuerst, Bernhard AU - Fotouhi, Javad AU - Fischer, Marius AU - Osgood, Greg AU - Navab, Nassir T1 - Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization. JO - International journal of computer assisted radiology and surgery Y1 - 2016 VL - 11 SP - 967 EP - 975 KW - Algorithms; Calibration; Cone-Beam Computed Tomography KW - methods; Humans; Imaging KW - Three-Dimensional; Monitoring KW - Intraoperative KW - methods; Phantoms KW - Imaging; Reproducibility of Results; 3D-3D calibration; Augmented reality; C-arm; Cone-beam CT; Intra-operative imaging N1 - 1861-6429 Owner: NLM N2 - This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views.
The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method that uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles. ER - TY - JOUR AU - Leitritz, Martin A. AU - Ziemssen, Focke AU - Suesskind, Daniela AU - Partsch, Michael AU - Voykov, Bogomil AU - Bartz-Schmidt, Karl U. AU - Szurman, Gesine B. T1 - Critical evaluation of the usability of augmented reality ophthalmoscopy for the training of inexperienced examiners. JO - Retina (Philadelphia, Pa.) Y1 - 2014 VL - 34 SP - 785 EP - 791 KW - Clinical Competence KW - standards; Education KW - Medical KW - Undergraduate KW - standards; Educational Measurement; Female; Humans; Male; Ophthalmology KW - education; Ophthalmoscopy; Students KW - Medical; Surveys and Questionnaires; User-Computer Interface N1 - 1539-2864 Owner: NLM N2 - To measure the value of augmented reality technology for teaching medical students to perform binocular indirect ophthalmoscopy. Thirty-seven medical students were randomly assigned to the training of binocular indirect ophthalmoscopy either in the conventional way or with augmented reality ophthalmoscopy (ARO). To test the students' skills, they had to examine a real person using a conventional ophthalmoscopy system and draw the optic disk. They also had to fill out a questionnaire. Subjective and objective evaluations were performed. Thirty-seven students were randomly assigned to two groups. Eighteen students were trained with conventional ophthalmoscopy and 19 students with ARO. The questionnaires showed no differences. Performing an objective analysis, the median ophthalmoscopy training score for the conventional ophthalmoscopy group was 1.2 (range, 0.67-2) and showed a significant difference (P < 0.0033) from the ARO group (median 2; range, 0.67-2). The study provides evidence that a single ARO training session is effective in improving ophthalmoscopy skills. As the objective analysis showed, the ARO group had a significantly superior performance. Our study also indicates that subjective evaluation of the fundus drawings without systematic analysis is prone to errors.
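The Lee et al. calibration record above recovers a rigid transformation between the CBCT and RGBD coordinate spaces using FPFH descriptors followed by the Iterative Closest Point algorithm. As a hedged sketch of the closed-form step that each ICP iteration solves once correspondences are fixed (the Kabsch/SVD alignment; this is not the authors' implementation, and the correspondences are assumed given):

import numpy as np

def kabsch(source, target):
    # Best-fit rotation R and translation t mapping source onto target
    # (N x 3 arrays of corresponded points), minimizing squared distances.
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    H = (source - src_centroid).T @ (target - tgt_centroid)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

A full ICP loop would alternate this solve with a nearest-neighbor correspondence search, which is where a feature-based initialization such as FPFH matters in practice.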
ER - TY - JOUR AU - Li, Liang AU - Yang, Jian AU - Chu, Yakui AU - Wu, Wenbo AU - Xue, Jin AU - Liang, Ping AU - Chen, Lei T1 - A Novel Augmented Reality Navigation System for Endoscopic Sinus and Skull Base Surgery: A Feasibility Study. JO - PloS one Y1 - 2016 VL - 11 SP - e0146996 EP - e0146996 KW - Cadaver; Computer Simulation; Endoscopy KW - methods; Feasibility Studies; Head KW - surgery; Humans; Imaging KW - Three-Dimensional; Neurosurgical Procedures KW - methods; Nose KW - surgery; Operative Time; Paranasal Sinuses KW - surgery; Phantoms KW - Imaging; Reproducibility of Results; Skull Base KW - surgery; Surgery KW - Computer-Assisted KW - methods N1 - 1932-6203 Owner: NLM N2 - To verify the reliability and clinical feasibility of a self-developed navigation system based on an augmented reality technique for endoscopic sinus and skull base surgery. In this study, we performed head phantom and cadaver experiments to determine the display effect and accuracy of our navigational system. In cadaver head-based simulated operations, we compared the target registration error, operation time, and National Aeronautics and Space Administration Task Load Index scores of our navigation system with those of conventional navigation systems. The navigation system developed in this study has a novel display mode capable of fusing endoscopic images to three-dimensional (3-D) virtual images. In the cadaver head experiment, the target registration error was 1.28 ± 0.45 mm, which met the accepted standards of a navigation system used for nasal endoscopic surgery. Compared with conventional navigation systems, the new system was more effective in terms of operation time and the mental workload of surgeons, which is especially important for less experienced surgeons. The self-developed augmented reality navigation system for endoscopic sinus and skull base surgery appears to offer advantages over conventional navigation systems. We conclude that this navigational system will provide rhinologists with more intuitive and more detailed imaging information, thus reducing the judgment time and mental workload of surgeons when performing complex sinus and skull base surgeries. Ultimately, this new navigational system has the potential to increase the quality of surgeries. In addition, the augmented reality navigational system could be of interest to junior doctors being trained in endoscopic techniques because it could speed up their learning. However, it should be noted that the navigation system serves as an adjunct to a surgeon's skills and knowledge, not as a substitute. ER - TY - JOUR AU - Liang, Jack T. AU - Doke, Takehito AU - Onogi, Shinya AU - Ohashi, Satoru AU - Ohnishi, Isao AU - Sakuma, Ichiro AU - Nakajima, Yoshikazu T1 - A fluorolaser navigation system to guide linear surgical tool insertion. JO - International journal of computer assisted radiology and surgery Y1 - 2012 VL - 7 SP - 931 EP - 939 KW - Calibration; Feasibility Studies; Fluoroscopy; Humans; Imaging KW - Three-Dimensional; Lasers; Orthopedic Procedures KW - methods; Phantoms KW - Imaging; Software; Surgery KW - Computer-Assisted KW - methods N1 - 1861-6429 Owner: NLM N2 - Conventional navigation systems for minimally invasive orthopedic surgery require a secondary monitor to display guidance information generated with CT or MRI images. Newer systems use augmented reality to project surgical plans into binocular glasses. These surgical procedures are often mentally challenging and cumbersome to perform.
A comprehensive surgical navigation system for direct guidance while minimizing radiation exposure was designed and built. System accuracy was evaluated using in vitro needle insertion experiments. The fluoroscopy-based navigation technique is combined with an existing laser guidance technique. As a result, the combined system is capable of surgical planning using two or more X-ray images rather than CT or MRI scans. Guidance information is directly projected onto the patient using two laser beams and not via a secondary monitor. We performed 15 in vitro needle insertion experiments as well as 6 phantom pedicle screw insertion experiments to validate navigation system accuracy. The planning accuracy of the system was found to be 2.32 mm and 2.28°, while its overall guidance accuracy was found to be 2.40 mm and 2.39°. System feasibility was demonstrated by successfully performing percutaneous pin insertion on phantoms. Quantitative and qualitative evaluations of the fluorolaser navigation system show that it can support accurate guidance and intuitive surgical tool insertion procedures without preoperative 3D image volumes and registration processes. ER - TY - JOUR AU - Lin, Chien-Yu AU - Chang, Yu-Ming T1 - Interactive augmented reality using Scratch 2.0 to improve physical activities for children with developmental disabilities. JO - Research in developmental disabilities Y1 - 2015 VL - 37 SP - 1 EP - 8 KW - Cerebral Palsy KW - rehabilitation; Child; Child KW - Preschool; Developmental Disabilities KW - rehabilitation; Female; Humans; Male; Motivation; Motor Activity; Physical Therapy Modalities; User-Computer Interface; Video Games; Augmented-reality; Disabilities; Scratch 2.0; Webcam N1 - 1873-3379 Owner: NLM N2 - This study uses a body motion interactive game developed in Scratch 2.0 to enhance the body strength of children with disabilities. Scratch 2.0, using an augmented-reality function on a program platform, creates real-world and virtual reality displays at the same time. This study uses a webcam integration that tracks movements and allows participants to interact physically with the project, to enhance the motivation of children with developmental disabilities to perform physical activities. This study follows a single-case research design with an ABAB structure, in which A is the baseline and B is the intervention. The experimental period was 2 months. The experimental results demonstrated that the scores for 3 children with developmental disabilities increased considerably during the intervention phases. The developmental applications of these results are also discussed. ER - TY - JOUR AU - Lin, Li AU - Shi, Yunyong AU - Tan, Andy AU - Bogari, Melia AU - Zhu, Ming AU - Xin, Yu AU - Xu, Haisong AU - Zhang, Yan AU - Xie, Le AU - Chai, Gang T1 - Mandibular angle split osteotomy based on a novel augmented reality navigation using specialized robot-assisted arms--A feasibility study.
JO - Journal of cranio-maxillo-facial surgery : official publication of the European Association for Cranio-Maxillo-Facial Surgery Y1 - 2016 VL - 44 SP - 215 EP - 223 KW - Feasibility Studies; Humans; Mandible; Osteotomy KW - methods; Robotic Surgical Procedures KW - instrumentation KW - methods; Surgery KW - Computer-Assisted; User-Computer Interface; Augmented reality; Computer-assisted surgery; Mandibular angle split osteotomy; Robot-assisted surgery N1 - 1878-4119 Owner: NLM N2 - Augmented reality (AR) navigation is a visual 3-dimensional display technology that, when combined with robot-assisted surgery (RAS), allows precision and automation in operative procedures. In this study, we used an innovative, minimally invasive, simplified operative method to position the landmarks and specialized robot-assisted arms to apply in a rapid prototyping (RP) model. This is the first report of the use of AR and RAS technology in craniomaxillofacial surgery. Five patients with prominent mandibular angle were randomly chosen for this feasibility study. We reconstructed the mandibular modules and created preoperational plans as semi-embedded and nail-fixation modules for an easy registration procedure. The left side of the mandibular modules comprised the experimental groups with use of a robot, and the right sides comprised the control groups without a robot. With the ARToolKit tracking and display system applied, we carried out the operative plans and measured the error. Both groups were successfully treated in this study, but the RAS was more accurate and stable. The differences in average position and angle between the 2 groups were significant (p < 0.01). This study reports a novel augmented reality navigation with specialized robot-assisted arms for mandibular angle split osteotomy. AR and RAS can be helpful for patients undergoing craniomaxillofacial surgery. ER - TY - JOUR AU - Lin, Yen-Kun AU - Yau, Hong-Tzong AU - Wang, I.-Chung AU - Zheng, Cheng AU - Chung, Kwok-Hung T1 - A novel dental implant guided surgery based on integration of surgical template and augmented reality. JO - Clinical implant dentistry and related research Y1 - 2015 VL - 17 SP - 543 EP - 553 KW - Clinical Competence; Computer-Aided Design; Dental Implantation KW - Endosseous KW - methods; Dental Implants; Dental Models; Humans; In Vitro Techniques; Mouth KW - Edentulous KW - surgery; Osteotomy; Surgery KW - Computer-Assisted KW - methods; Tomography KW - X-Ray Computed; User-Computer Interface; accuracy; augmented reality; dental implants; guided implant surgery; surgical template N1 - 1708-8208 Owner: NLM N2 - The stereoscopic visualization concept combined with head-mounted displays may increase the accuracy of computer-aided implant surgery. The aim of this study was to develop an augmented reality-based dental implant placement system and evaluate the accuracy of the virtually planned versus the actual prepared implant site created in vitro. Four fully edentulous mandibular and four partially edentulous maxillary duplicated casts were used. Six implants were planned in the mandibular and four in the maxillary casts. A total of 40 osteotomy sites were prepared in the casts using a stereolithographic template integrated with augmented reality-based surgical simulation. During the surgery, the dentist could be guided accurately through a head-mounted display by superimposing the virtual auxiliary line and the drill stop.
The deviation between planned and prepared positions of the implants was measured via postoperative computed tomography scan images. Mean and standard deviation of the discrepancy between planned and prepared sites at the entry point, apex, angle, depth, and lateral locations were 0.50 ± 0.33 mm, 0.96 ± 0.36 mm, 2.70 ± 1.55°, 0.33 ± 0.27 mm, and 0.86 ± 0.34 mm, respectively, for the fully edentulous mandible, and 0.46 ± 0.20 mm, 1.23 ± 0.42 mm, 3.33 ± 1.42°, 0.48 ± 0.37 mm, and 1.1 ± 0.39 mm, respectively, for the partially edentulous maxilla. There was a statistically significant difference in the apical deviation between maxilla and mandible in this surgical simulation (p < .05). Deviation of implant placement from planned position was significantly reduced by integrating surgical template and augmented reality technology. ER - TY - JOUR AU - Liu, Miao AU - Yang, Shourui AU - Wang, Zhangying AU - Huang, Shujun AU - Liu, Yue AU - Niu, Zhenqi AU - Zhang, Xiaoxuan AU - Zhu, Jigui AU - Zhang, Zonghua T1 - Generic precise augmented reality guiding system and its calibration method based on 3D virtual model. JO - Optics express Y1 - 2016/05 VL - 24 SP - 12026 EP - 12042 N1 - 1094-4087 Owner: NLM N2 - Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. The corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back-projection model, the coordinate data collected by 3D positioning equipment is used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern projection technique. ER - TY - JOUR AU - Liu, Runpeng AU - Salisbury, Joseph P. AU - Vahabzadeh, Arshya AU - Sahin, Ned T. T1 - Feasibility of an Autism-Focused Augmented Reality Smartglasses System for Social Communication and Behavioral Coaching. JO - Frontiers in pediatrics Y1 - 2017 VL - 5 SP - 145 EP - 145 KW - augmented reality; autism spectrum disorder; education; feasibility; smartglasses; stimulant; tolerability; virtual reality N1 - 2296-2360 Owner: NLM N2 - Autism spectrum disorder (ASD) is a childhood-onset neurodevelopmental disorder with a rapidly rising prevalence, currently affecting 1 in 68 children, and over 3.5 million people in the United States. Current ASD interventions are primarily based on in-person behavioral therapies that are both costly and difficult to access. These interventions aim to address some of the fundamental deficits that clinically characterize ASD, including deficits in social communication and the presence of stereotypies and other autism-related behaviors. Current diagnostic and therapeutic approaches seldom rely on quantitative data measures of symptomatology, severity, or condition trajectory.
Given the current situation, we report on the Brain Power System (BPS), a digital behavioral aid with quantitative data gathering and reporting features. The BPS includes customized smartglasses, providing targeted personalized coaching experiences through a family of gamified augmented-reality applications utilizing artificial intelligence. These applications provide children and adults with coaching for emotion recognition, face-directed gaze, eye contact, and behavioral self-regulation. This preliminary case report, part of a larger set of upcoming research reports, explores the feasibility of the BPS for providing coaching to two boys with clinically diagnosed ASD, aged 8 and 9 years. The coaching intervention was found to be well tolerated and rated as being both engaging and fun. Both boys could easily use the system, and no technical problems were noted. During the intervention, caregivers reported improved non-verbal communication, eye contact, and social engagement. Both boys demonstrated decreased symptoms of ASD, as measured by the Aberrant Behavior Checklist at 24 h post-intervention. Specifically, both cases demonstrated improvements in irritability, lethargy, stereotypy, hyperactivity/non-compliance, and inappropriate speech. Smartglasses using augmented reality may have an important future role in helping address the therapeutic needs of children with ASD. Quantitative data gathering from such sensor-rich systems may allow for digital phenotyping and the refinement of social communication constructs of the research domain criteria. This report provides evidence for the feasibility, usability, and tolerability of one such specialized smartglasses system. ER - TY - JOUR AU - Liu, Wen P. AU - Azizian, Mahdi AU - Sorger, Jonathan AU - Taylor, Russell H. AU - Reilly, Brian K. AU - Cleary, Kevin AU - Preciado, Diego T1 - Cadaveric feasibility study of da Vinci Si-assisted cochlear implant with augmented visual navigation for otologic surgery. JO - JAMA otolaryngology-- head & neck surgery Y1 - 2014 VL - 140 SP - 208 EP - 214 KW - Cadaver; Cochlear Implants; Cone-Beam Computed Tomography; Feasibility Studies; Hearing Loss KW - diagnostic imaging KW - surgery; Humans; Otologic Surgical Procedures KW - methods; Robotics KW - instrumentation; Surgery KW - Computer-Assisted KW - methods; Temporal Bone KW - surgery N1 - 2168-619X Owner: NLM N2 - To our knowledge, this is the first reported cadaveric feasibility study of a master-slave-assisted cochlear implant procedure in the otolaryngology-head and neck surgery field using the da Vinci Si system (da Vinci Surgical System; Intuitive Surgical, Inc). We describe the surgical workflow adaptations using a minimally invasive system and image guidance integrating intraoperative cone beam computed tomography through augmented reality. To test the feasibility of da Vinci Si-assisted cochlear implant surgery with augmented reality, with visualization of critical structures and facilitation of precise cochleostomy for electrode insertion. Cadaveric case study of bilateral cochlear implant approaches conducted at Intuitive Surgical Inc, Sunnyvale, California. Bilateral cadaveric mastoidectomies, posterior tympanostomies, and cochleostomies were performed using the da Vinci Si system on a single adult human donor cadaveric specimen.
Radiographic confirmation of successful cochleostomies, placement of a phantom cochlear implant wire, and visual confirmation of critical anatomic structures (facial nerve, cochlea, and round window) in augmented stereoendoscopy. With a mean surgical time of 160 minutes per side, complete bilateral cochlear implant procedures were successfully performed with no violation of critical structures, notably the facial nerve, chorda tympani, sigmoid sinus, dura, or ossicles. Augmented reality image overlay of the facial nerve, round window position, and basal turn of the cochlea was precise. Postoperative cone beam computed tomography scans confirmed successful placement of the phantom implant electrode array into the basal turn of the cochlea. To our knowledge, this is the first study in the otolaryngology-head and neck surgery literature examining the use of master-slave-assisted cochleostomy with augmented reality for cochlear implants using the da Vinci Si system. The described system for cochleostomy has the potential to improve the surgeon's confidence, as well as surgical safety, efficiency, and precision by filtering tremor. The integration of augmented reality may be valuable for surgeons dealing with complex cases of congenital anatomic abnormality, for revision cochlear implant with distorted anatomy and poorly pneumatized mastoids, and as a method of interactive teaching. Further research into the cost-benefit ratio of da Vinci Si-assisted otologic surgery, as well as refinements of the proposed workflow, is required before considering clinical studies. ER - TY - JOUR AU - Liu, Wen P. AU - Richmon, Jeremy D. AU - Sorger, Jonathan M. AU - Azizian, Mahdi AU - Taylor, Russell H. T1 - Augmented reality and cone beam CT guidance for transoral robotic surgery. JO - Journal of robotic surgery Y1 - 2015 VL - 9 SP - 223 EP - 233 KW - Animals; Cone-Beam Computed Tomography KW - methods; Feasibility Studies; Oral Surgical Procedures KW - methods; Phantoms KW - Imaging; Robotic Surgical Procedures KW - methods; Swine; Tongue KW - surgery; User-Computer Interface; Cone beam computed tomography; Image-guided robotic surgery; Transoral robotic surgery; Video augmentation; da Vinci N1 - 1863-2491 Owner: NLM N2 - In transoral robotic surgery, preoperative image data do not reflect large deformations of the operative workspace from perioperative setup. To address this challenge, in this study we explore image guidance with cone beam computed tomographic angiography to guide the dissection of critical vascular landmarks and resection of base-of-tongue neoplasms with adequate margins for transoral robotic surgery. We identify critical vascular landmarks from perioperative C-arm imaging to augment the stereoscopic view of a da Vinci Si robot in addition to incorporating visual feedback from relative tool positions. Experiments resecting base-of-tongue mock tumors were conducted on a series of ex vivo and in vivo animal models comparing the proposed workflow for video augmentation to standard non-augmented practice and alternative, fluoroscopy-based image guidance. Accurate identification of registered augmented critical anatomy during controlled arterial dissection and en bloc mock tumor resection was possible with the augmented reality system. The proposed image-guided robotic system also achieved improved resection ratios of mock tumor margins (1.00) when compared to control scenarios (0.0) and alternative methods of image guidance (0.58).
The experimental results show the feasibility of the proposed workflow and advantages of cone beam computed tomography image guidance through video augmentation of the primary stereo endoscopy as compared to control and alternative navigation methods. ER - TY - JOUR AU - Liu, Xinyang AU - Kang, Sukryool AU - Plishker, William AU - Zaki, George AU - Kane, Timothy D. AU - Shekhar, Raj T1 - Laparoscopic stereoscopic augmented reality: toward a clinically viable electromagnetic tracking solution. JO - Journal of medical imaging (Bellingham, Wash.) Y1 - 2016 VL - 3 SP - 045001 EP - 045001 KW - augmented reality; camera calibration; electromagnetic tracking; stereoscopic laparoscopy; ultrasound calibration N1 - 2329-4302 Owner: NLM N2 - The purpose of this work was to develop a clinically viable laparoscopic augmented reality (AR) system employing stereoscopic (3-D) vision, laparoscopic ultrasound (LUS), and electromagnetic (EM) tracking to achieve image registration. We investigated clinically feasible solutions to mount the EM sensors on the 3-D laparoscope and the LUS probe. This led to a solution of integrating an externally attached EM sensor near the imaging tip of the LUS probe, only slightly increasing the overall diameter of the probe. Likewise, a solution for mounting an EM sensor on the handle of the 3-D laparoscope was proposed. The spatial image-to-video registration accuracy of the AR system was measured to be [Formula: see text] and [Formula: see text] for the left- and right-eye channels, respectively. The AR system contributed 58 ms of latency to stereoscopic visualization. We further performed an animal experiment to demonstrate the use of the system as a visualization approach for laparoscopic procedures. In conclusion, we have developed an integrated, compact, and EM tracking-based stereoscopic AR visualization system, which has the potential for clinical use. The system has been demonstrated to achieve clinically acceptable accuracy and latency. This work is a critical step toward clinical translation of AR visualization for laparoscopic procedures. ER - TY - JOUR AU - Liu, Xinyang AU - Plishker, William AU - Zaki, George AU - Kang, Sukryool AU - Kane, Timothy D. AU - Shekhar, Raj T1 - On-demand calibration and evaluation for electromagnetically tracked laparoscope in augmented reality visualization. JO - International journal of computer assisted radiology and surgery Y1 - 2016 VL - 11 SP - 1163 EP - 1171 KW - Calibration; Electromagnetic Phenomena; Equipment Design; Humans; Laparoscopes; Laparoscopy; Phantoms KW - Imaging; User-Computer Interface; Augmented reality; Camera calibration; Electromagnetic tracking; Laparoscopic procedure; Laparoscopic visualization N1 - 1861-6429 Owner: NLM N2 - Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that the calibration can be performed in the OR on demand. We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope.
A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with the fCalib prototype and overlaid a virtual representation of the tube on the live video scene. We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, might affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s-22.7 s). We developed and validated a prototype for fast calibration and evaluation of EM-tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand. ER - TY - JOUR AU - Llena, C. AU - Folguera, S. AU - Forner, L. AU - Rodríguez-Lozano, F. J. T1 - Implementation of augmented reality in operative dentistry learning. JO - European journal of dental education : official journal of the Association for Dental Education in Europe Y1 - 2018 VL - 22 SP - e122 EP - e130 KW - Adult; Clinical Competence; Dental Cavity Preparation; Dentistry KW - Operative KW - education; Education KW - Dental KW - methods; Female; Humans; Job Satisfaction; Learning; Male; Models KW - Educational; Virtual Reality; Young Adult; augmented reality; cavity design; teaching innovation N1 - 1600-0579 Owner: NLM N2 - To evaluate the efficacy of augmented reality (AR) in the gaining of knowledge and skills amongst dental students in the design of cavity preparations and analyse their degree of satisfaction. AR cavity models were prepared for use with computers and mobile devices. Forty-one students were divided into two groups (traditional teaching methods vs AR). Questionnaires were designed to evaluate knowledge and skills, with the administration of a satisfaction questionnaire for those using AR. The degree of compliance with the standards in cavity design was assessed. The Mann-Whitney U-test was used to compare knowledge and skills between the two groups, and the Wilcoxon test was applied to compare intragroup differences. The chi-square test in turn was used to compare the qualitative parameters of the cavity designs between the groups. Statistical significance was considered for P<.05 in all cases. No significant differences were observed in level of knowledge before, immediately after, or 6 months after teaching between the two groups (P>.05). Although the results corresponding to most of the studied skills parameters were better in the experimental group, significant differences (P<.05) were only found for cavity depth and extent for Class I and divergence of the buccal and lingual walls for Class II. The experience was rated as favourable or very favourable by 100% of the participants. The students showed a preference for computers (60%) vs mobile devices (10%).
The AR techniques favoured the gaining of knowledge and skills and were regarded as a useful tool by the students. ER - TY - JOUR AU - Londei, Roberto AU - Esposito, Marco AU - Diotte, Benoit AU - Weidert, Simon AU - Euler, Ekkehard AU - Thaller, Peter AU - Navab, Nassir AU - Fallavollita, Pascal T1 - Intra-operative augmented reality in distal locking. JO - International journal of computer assisted radiology and surgery Y1 - 2015 VL - 10 SP - 1395 EP - 1403 KW - Algorithms; Bone Nails; Bone and Bones KW - pathology; Computer Graphics; Computer Simulation; Equipment Design; Fluoroscopy KW - instrumentation KW - methods; Fracture Fixation KW - Intramedullary KW - methods; Humans; Image Processing KW - Computer-Assisted; Intraoperative Period; Orthopedic Procedures; Phantoms KW - Imaging; Reproducibility of Results; Video Recording; X-Rays N1 - 1861-6429 Owner: NLM N2 - To design an augmented reality solution that assists surgeons during the distal locking of intramedullary nailing procedures. Traditionally, the procedure is performed under X-ray guidance and requires a significant amount of time and radiation exposure. To avoid these complications, we propose video guidance that allows surgeons to achieve both the down-the-beam position of the intramedullary nail and its subsequent locking. For the down-the-beam position, the IM nail pose in X-ray is calculated using a 2D/3D registration scheme and later related to the patient's leg pose, which is calculated using video-tracked AR markers. For the distal locking, surgeons use an augmented radiolucent drill whose tip position is detected and tracked in real-time under video guidance. To evaluate the feasibility of our solution, we performed a preclinical study on a dry bone phantom with the participation of four clinicians. Participants achieved a 100% success rate in down-the-beam positioning and a 93% success rate in distal locking using only two X-ray images in 100 s. We confirmed that intra-operative navigation using augmented reality provides an alternative way to perform distal locking in a safe and timely manner. ER - TY - JOUR AU - López-Mir, F. AU - Naranjo, V. AU - Fuertes, J. J. AU - Alcañiz, M. AU - Bueno, J. AU - Pareja, E. T1 - Design and validation of an augmented reality system for laparoscopic surgery in a real environment. JO - BioMed research international Y1 - 2013 VL - 2013 SP - 758491 EP - 758491 KW - Computer Systems; Humans; Laparoscopy KW - instrumentation KW - methods; Video-Assisted Surgery KW - methods N1 - 2314-6141 Owner: NLM N2 - This work presents the protocol carried out in the development and validation of an augmented reality system which was installed in an operating theatre to help surgeons with trocar placement during laparoscopic surgery. The purpose of this validation is to demonstrate the improvements that this system can provide to the field of medicine, particularly surgery. Two experiments that were noninvasive for both the patient and the surgeon were designed. In one of these experiments the augmented reality system was used; the other was the control experiment, in which the system was not used. The type of operation selected for all cases was a cholecystectomy due to the low degree of complexity and complications before, during, and after the surgery. The technique used in the placement of trocars was the French technique, but the results can be extrapolated to any other technique and operation.
Four clinicians and ninety-six measurements obtained from twenty-four patients (randomly assigned to each experiment) were involved in these experiments. The final results show an improvement in accuracy and variability of 33% and 63%, respectively, in comparison to traditional methods, demonstrating that the use of an augmented reality system offers advantages for trocar placement in laparoscopic surgery. ER - TY - JOUR AU - Loukas, Constantinos AU - Lahanas, Vasileios AU - Georgiou, Evangelos T1 - An integrated approach to endoscopic instrument tracking for augmented reality applications in surgical simulation training. JO - The international journal of medical robotics + computer assisted surgery : MRCAS Y1 - 2013 VL - 9 SP - e34 EP - e51 KW - Algorithms; Computer-Assisted Instruction KW - instrumentation; Endoscopes; Endoscopy KW - education KW - instrumentation; Equipment Design; Equipment Failure Analysis; Humans; Phantoms KW - Imaging; Reproducibility of Results; Robotics KW - instrumentation; Sensitivity and Specificity; Surgery KW - Computer-Assisted KW - instrumentation KW - methods; Systems Integration; User-Computer Interface; augmented reality; instrument tracking; laparoscopic training; occlusion handling; surgical simulation N1 - 1478-596X Owner: NLM N2 - Despite the popular use of virtual and physical reality simulators in laparoscopic training, the educational potential of augmented reality (AR) has not received much attention. A major challenge is the robust tracking and three-dimensional (3D) pose estimation of the endoscopic instrument, which are essential for achieving interaction with the virtual world and for realistic rendering when the virtual scene is occluded by the instrument. In this paper we propose a method that addresses these issues, based solely on visual information obtained from the endoscopic camera. Two different tracking algorithms are combined for estimating the 3D pose of the surgical instrument with respect to the camera. The first tracker creates an adaptive model of a colour strip attached to the distal part of the tool (close to the tip). The second algorithm tracks the endoscopic shaft, using a combined Hough-Kalman approach. The 3D pose is estimated with perspective geometry, using appropriate measurements extracted by the two trackers. The method has been validated on several complex image sequences for its tracking efficiency, pose estimation accuracy and applicability in AR-based training. Using a standard endoscopic camera, the absolute average error of the tip position was 2.5 mm for working distances commonly found in laparoscopic training. The average error of the instrument's angle with respect to the camera plane was approximately 2°. The results are also supplemented by video segments of laparoscopic training tasks performed in a physical and an AR environment. The experiments yielded promising results regarding the potential of applying AR technologies for laparoscopic skills training, based on a computer vision framework. The issue of occlusion handling was adequately addressed. The estimated trajectory of the instruments may also be used for surgical gesture interpretation and assessment. ER - TY - JOUR AU - Loy Rodas, Nicolas AU - Barrera, Fernando AU - Padoy, Nicolas T1 - See It With Your Own Eyes: Markerless Mobile Augmented Reality for Radiation Awareness in the Hybrid Room.
JO - IEEE transactions on bio-medical engineering Y1 - 2017 VL - 64 SP - 429 EP - 440 KW - Calibration; Computer Graphics; Environmental Monitoring KW - methods; Humans; Image Processing KW - Computer-Assisted KW - methods; Radiation Dosage; Radiography; Safety; Surgery KW - Computer-Assisted; User-Computer Interface; Video Recording N1 - 1558-2531 Owner: NLM N2 - We present an approach to provide awareness of the harmful ionizing radiation generated during X-ray-guided minimally invasive procedures. A hand-held screen is used to display directly in the user's view information related to radiation safety in a mobile augmented reality (AR) manner. Instead of using markers, we propose a method to track the observer's viewpoint, which relies on the use of multiple RGB-D sensors and combines equipment detection for tracking initialization with a KinectFusion-like approach for frame-to-frame tracking. Two of the sensors are ceiling-mounted and a third one is attached to the hand-held screen. The ceiling cameras keep an updated model of the room's layout, which is used to exploit context information and improve the relocalization procedure. The system is evaluated on a multicamera dataset generated inside an operating room (OR) and containing ground-truth poses of the AR display. This dataset includes a wide variety of sequences with different scene configurations, occlusions, motion in the scene, and abrupt viewpoint changes. Qualitative results illustrating the different AR visualization modes for radiation awareness provided by the system are also presented. Our approach allows the user to benefit from a large AR visualization area and to recover from tracking failure caused by large motion or changes in the scene simply by looking at a piece of equipment. The system enables the user to see the 3-D propagation of radiation, the medical staff's exposure, and/or the doses deposited on the patient's surface as seen through their own eyes. ER - TY - JOUR AU - Loy Rodas, Nicolas AU - Padoy, Nicolas T1 - Seeing is believing: increasing intraoperative awareness to scattered radiation in interventional procedures by combining augmented reality, Monte Carlo simulations and wireless dosimeters. JO - International journal of computer assisted radiology and surgery Y1 - 2015 VL - 10 SP - 1181 EP - 1191 KW - Computer Simulation; Humans; Imaging KW - Three-Dimensional KW - instrumentation KW - methods; Intraoperative Awareness; Minimally Invasive Surgical Procedures KW - methods; Models KW - Theoretical; Monte Carlo Method; Radiometry KW - instrumentation; Surgery KW - Computer-Assisted KW - methods N1 - 1861-6429 Owner: NLM N2 - Surgical staff performing image-guided minimally invasive surgical procedures are chronically exposed to harmful ionizing radiation. Currently, no means exist to intraoperatively depict the 3D shape and intensity of scattered radiation fields or to assess the body-part exposure of clinicians. We propose a system for simulating and visualizing intraoperative scattered radiation using augmented reality. We use a multi-camera RGBD system to obtain a 3D point cloud reconstruction of the current room layout. The positions of the clinicians, patient, table and C-arm are used to build a radiation propagation simulation model and compute the deposited dose distribution in the room. We use wireless dosimeters to calibrate the simulation and to evaluate its accuracy at each time step.
The computed 3D risk map is shown in an augmented reality manner by overlaying the simulation results onto the 3D model. Several 3D visualizations showing scattered radiation propagation, clinicians' body-part exposure and radiation risk maps under different irradiation conditions are proposed. The system is evaluated in an operating room equipped with a robotized X-ray imaging device by comparing the radiation simulation results to experimental measurements under several X-ray acquisition setups and room configurations. The proposed system is capable of displaying intraoperative scattered radiation intuitively in 3D by using augmented reality. This can have a strong impact on improving clinicians' awareness of their exposure to ionizing radiation and on reducing overexposure risks. ER - TY - JOUR AU - Lozano-Quilis, Jose-Antonio AU - Gil-Gómez, Hermenegildo AU - Gil-Gómez, Jose-Antonio AU - Albiol-Pérez, Sergio AU - Palacios-Navarro, Guillermo AU - Fardoun, Habib M. AU - Mashat, Abdulfattah S. T1 - Virtual rehabilitation for multiple sclerosis using a Kinect-based system: randomized controlled trial. JO - JMIR serious games Y1 - 2014 VL - 2 SP - e12 EP - e12 KW - augmented reality; motor rehabilitation; multiple sclerosis; natural interfaces; virtual reality N1 - 2291-9279 Owner: NLM N2 - The methods used for the motor rehabilitation of patients with neurological disorders include a number of different rehabilitation exercises. For patients who have been diagnosed with multiple sclerosis (MS), the performance of motor rehabilitation exercises is essential. Nevertheless, this rehabilitation may be tedious, negatively influencing patients' motivation and adherence to treatment. We present RemoviEM, a system based on Kinect that uses virtual reality (VR) and natural user interfaces (NUI) to offer patients with MS an intuitive and motivating way to perform several motor rehabilitation exercises. It offers therapists a new motor rehabilitation tool for the rehabilitation process, providing feedback on the patient's progress. Moreover, it is a low-cost system, a feature that can facilitate its integration in clinical rehabilitation centers. A randomized, controlled, single-blinded study was carried out to assess the influence of a Kinect-based virtual rehabilitation system on the balance rehabilitation of patients with MS. This study describes RemoviEM and evaluates its effectiveness compared to standard rehabilitation. To achieve this objective, a clinical trial was carried out. Eleven patients from an MS association participated in the clinical trial. The mean age was 44.82 (SD 10.44) and the mean time from diagnosis (years) was 9.77 (SD 10.40). Clinical effectiveness was evaluated using clinical balance scales. Significant group-by-time interaction was detected in the scores of the Berg Balance Scale (P=.011) and the Anterior Reach Test in standing position (P=.011). Post-hoc analysis showed greater improvement in the experimental group for these variables than in the control group. The Suitability Evaluation Questionnaire (SEQ) showed good results in usability, acceptance, security, and safety for the evaluated system. The results obtained suggest that RemoviEM represents a motivational and effective alternative to traditional motor rehabilitation for MS patients. These results have encouraged us to improve the system with new exercises, which are currently being developed.
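The two Loy Rodas records above build a Monte Carlo model of scattered radiation and render the resulting dose map in AR. As a deliberately simplified toy sketch of the dose-accumulation idea only (a flat 2D grid, isotropic scatter, exponential free paths, and inverse-square weighting are all assumptions of this example and do not reflect the papers' actual physics), one might write:

import numpy as np

rng = np.random.default_rng(0)
grid = np.zeros((50, 50))            # coarse 2D map of relative dose on the floor
source = np.array([25.0, 25.0])      # assumed scatter origin (e.g., the patient table)

for _ in range(100_000):
    theta = rng.uniform(0.0, 2.0 * np.pi)        # isotropic scatter direction
    r = rng.exponential(scale=10.0)              # sampled travel distance in grid units
    x, y = source + r * np.array([np.cos(theta), np.sin(theta)])
    i, j = int(x), int(y)
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] += 1.0 / max(r * r, 1.0)      # inverse-square dose weighting

risk_map = grid / grid.max()         # normalized map, ready for color-mapped AR overlay

The real systems additionally calibrate such a simulation against wireless dosimeter readings at each time step, which this toy omits.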
ER - TY - JOUR AU - Luó, Xióngbiāo AU - Feuerstein, Marco AU - Kitasaka, Takayuki AU - Mori, Kensaku T1 - Robust bronchoscope motion tracking using sequential Monte Carlo methods in navigated bronchoscopy: dynamic phantom and patient validation. JO - International journal of computer assisted radiology and surgery Y1 - 2012/05 VL - 7 SP - 371 EP - 387 KW - Algorithms; Artificial Intelligence; Bronchoscopes; Bronchoscopy KW - methods; Electromagnetic Phenomena; Equipment Design; Humans; Image Interpretation KW - Computer-Assisted; Imaging KW - Three-Dimensional KW - methods; Monte Carlo Method; Motion; Pattern Recognition KW - Automated KW - methods; Phantoms KW - Imaging; Reproducibility of Results; Video Recording N1 - 1861-6429 Owner: NLM N2 - Accurate and robust estimates of camera position and orientation in a bronchoscope are required for navigation. Fusion of pre-interventional information (e.g., CT, MRI, or US) and intra-interventional information (e.g., bronchoscopic video) was incorporated into a navigation system to provide physicians with an augmented reality environment for bronchoscopic interventions. Two approaches were used to predict bronchoscope movements by incorporating sequential Monte Carlo (SMC) simulation: (1) image-based tracking techniques and (2) electromagnetic tracking (EMT) methods. SMC simulation was introduced to model ambiguities or uncertainties that occurred in image- and EMT-based bronchoscope tracking. Scale-invariant feature transform (SIFT) features were employed to overcome the limitations of image-based motion tracking methods. Validation was performed on five phantom and ten human case datasets acquired in the supine position. For dynamic phantom validation, the EMT-SMC simulation method improved the tracking performance of the successfully registered bronchoscopic video frames by 12.7% compared with a hybrid-based method. In comparisons between tracking results and ground truth, the accuracy of the EMT-SMC simulation method was 1.51 mm (positional error) and 5.44° (orientation error). During patient assessment, the SIFT-SMC simulation scheme was more stable and robust than a previous image-based approach for bronchoscope motion estimation, showing a 23.6% improvement in successfully tracked frames. Comparing the estimates of our method to ground truth, the position and orientation errors are 3.72 mm and 10.2°, while those of our previous image-based method were at least 7.77 mm and 19.3°. The computational times of our EMT- and SIFT-SMC simulation methods were 0.9 and 1.2 s per frame, respectively. The SMC simulation method was developed to model ambiguities that occur in bronchoscope tracking. This method more stably and accurately predicts the bronchoscope camera position and orientation parameters, reducing uncertainties due to problematic bronchoscopic video frames and airway deformation during intra-bronchoscopy navigation. ER - TY - JOUR AU - Ma, Meng AU - Fallavollita, Pascal AU - Seelbach, Ina AU - Von Der Heide, Anna Maria AU - Euler, Ekkehard AU - Waschke, Jens AU - Navab, Nassir T1 - Personalized augmented reality for anatomy education. JO - Clinical anatomy (New York, N.Y.)
Y1 - 2016/05 VL - 29 SP - 446 EP - 453 KW - Anatomy KW - education; Computer Simulation; Education KW - Medical KW - Undergraduate KW - methods; Humans; Imaging KW - Three-Dimensional KW - methods; Tomography KW - X-Ray Computed; User-Computer Interface; Video Games; anatomy; augmented reality; curriculum; education; students N1 - 1098-2353 Owner: NLM N2 - Anatomy education is a challenging but vital element in forming future medical professionals. In this work, a personalized and interactive augmented reality system is developed to facilitate education. This system behaves as a "magic mirror" which allows personalized in-situ visualization of anatomy on the user's body. Real-time volume visualization of a CT dataset creates the illusion that the user can look inside their body. The system comprises an RGB-D sensor as a real-time tracking device to detect the user moving in front of a display. In addition, the magic mirror system shows text information, medical images, and 3D models of organs that the user can interact with. Through the participation of 7 clinicians and 72 students, two user studies were designed to respectively assess the precision and acceptability of the magic mirror system for education. The results of the first study demonstrated that the average precision of the augmented reality overlay on the user's body was 0.96 cm, while the results of the second study indicate 86.1% approval for the educational value of the magic mirror, and 91.7% approval for the augmented reality capability of displaying organs in three dimensions. The usefulness of this unique type of personalized augmented reality technology has been demonstrated in this paper. ER - TY - JOUR AU - Mahmoud, Nader AU - Grasa, Óscar G. AU - Nicolau, Stéphane A. AU - Doignon, Christophe AU - Soler, Luc AU - Marescaux, Jacques AU - Montiel, J. M. M. T1 - On-patient see-through augmented reality based on visual SLAM. JO - International journal of computer assisted radiology and surgery Y1 - 2017 VL - 12 SP - 1 EP - 11 KW - Anatomic Landmarks; Animals; Computers KW - Handheld; Humans; Imaging KW - Three-Dimensional KW - methods; Models KW - Anatomic; Phantoms KW - Imaging; Surgery KW - Computer-Assisted KW - methods; Swine; Augmented reality; Operating room; Registration; Surface meshes; Visual SLAM N1 - 1861-6429 Owner: NLM N2 - An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The hardware requirement is a commercial tablet-PC equipped with a camera. Thus, neither an external tracking device nor artificial landmarks on the patient are required. We resort to visual SLAM to provide markerless real-time tablet-PC camera location with respect to the patient. The preoperative model is registered with respect to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. Accurate and real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs, and a phantom. The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with a camera), (3) is robust to occlusion, (4) requires minimal interaction from the medical staff.
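Once a system like the Mahmoud et al. record above has a SLAM estimate of the tablet camera pose and an anchored registration of the preoperative model, the see-through overlay reduces to projecting model points into the current camera frame. A minimal pinhole-projection sketch follows; the pose (R, t), the intrinsic matrix K, and its focal-length/principal-point values are illustrative assumptions, not the paper's calibration:

import numpy as np

def project_points(model_pts, R, t, K):
    # Project registered 3D model points (N x 3) into pixel coordinates,
    # given a camera pose (R, t) mapping model/world space to camera space
    # and an intrinsic matrix K.
    cam = model_pts @ R.T + t        # world -> camera coordinates
    uv = cam @ K.T                   # apply pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide -> (N x 2) pixels

# Assumed intrinsics for a 640x480 tablet camera
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])   # toy pose: model 0.5 m ahead
print(project_points(np.array([[0.0, 0.0, 0.0]]), R, t, K))  # lands on the principal point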
ER - TY - JOUR AU - Mahvash, Mehran AU - Besharati Tabrizi, Leila T1 - A novel augmented reality system of image projection for image-guided neurosurgery. JO - Acta neurochirurgica Y1 - 2013/05 VL - 155 SP - 943 EP - 947 KW - Head KW - surgery; Humans; Imaging KW - Three-Dimensional KW - methods; Neurosurgery KW - instrumentation KW - methods; Phantoms KW - Imaging; Surgery KW - Computer-Assisted KW - methods; User-Computer Interface; Video Recording N1 - 0942-0940 Owner: NLM N2 - Augmented reality systems combine virtual images with a real environment. To design and develop an augmented reality system for image-guided surgery of brain tumors using image projection. A virtual image was created in two ways: (1) an MRI-based 3D model of the head matched with the segmented lesion of a patient using MRIcro software (version 1.4, freeware, Chris Rorden) and (2) a digital photograph-based model in which the tumor region was drawn using image-editing software. The real environment was simulated with a head phantom. For direct projection of the virtual image to the head phantom, a commercially available video projector (PicoPix 1020, Philips) was used. The position and size of the virtual image were adjusted manually for registration, which was performed using anatomical landmarks and fiducial marker positions. An augmented reality system for image-guided neurosurgery using direct image projection has been designed successfully and implemented in a first evaluation with promising results. The virtual image could be projected to the head phantom and was registered manually. Accurate registration (mean projection error: 0.3 mm) was performed using anatomical landmarks and fiducial marker positions. The direct projection of a virtual image to the patient's head, skull, or brain surface in real time is an augmented reality system that can be used for image-guided neurosurgery. In this paper, the first evaluation of the system is presented. The encouraging first visualization results indicate that the presented augmented reality system might be an important enhancement of image-guided neurosurgery. ER - TY - JOUR AU - Marcus, Hani J. AU - Pratt, Philip AU - Hughes-Hallett, Archie AU - Cundy, Thomas P. AU - Marcus, Adam P. AU - Yang, Guang-Zhong AU - Darzi, Ara AU - Nandi, Dipankar T1 - Comparative effectiveness and safety of image guidance systems in neurosurgery: a preclinical randomized study. JO - Journal of neurosurgery Y1 - 2015 VL - 123 SP - 307 EP - 313 KW - Brain KW - surgery; Humans; Imaging KW - Three-Dimensional KW - instrumentation KW - methods; Neurosurgical Procedures KW - methods; Surgery KW - Computer-Assisted KW - methods; MARTYN = Modelled Anatomical Replica for Training Young Neurosurgeons; augmented reality; diagnostic and operative techniques; image guidance; minimally invasive surgery; neurosurgery N1 - 1933-0693 Owner: NLM N2 - Over the last decade, image guidance systems have been widely adopted in neurosurgery. Nonetheless, the evidence supporting the use of these systems in surgery remains limited. The aim of this study was to compare simultaneously the effectiveness and safety of various image guidance systems against that of standard surgery.
In this preclinical, randomized study, 50 novice surgeons were allocated to one of the following groups: 1) no image guidance, 2) triplanar display, 3) always-on solid overlay, 4) always-on wire mesh overlay, and 5) on-demand inverse realism overlay. Each participant was asked to identify a basilar tip aneurysm in a validated model head. The primary outcomes were time to task completion (in seconds) and tool path length (in mm). The secondary outcomes were recognition of an unexpected finding (i.e., a surgical clip) and subjective depth perception using a Likert scale. The time to task completion and tool path length were significantly lower when using any form of image guidance compared with no image guidance (p < 0.001 and p = 0.003, respectively). The tool path distance was also lower in groups using augmented reality compared with triplanar display (p = 0.010). Always-on solid overlay resulted in the greatest inattentional blindness (20% recognition of unexpected finding). Wire mesh and on-demand overlays mitigated, but did not negate, inattentional blindness and were comparable to triplanar display (40% recognition of unexpected finding in all groups). Wire mesh and inverse realism overlays also resulted in better subjective depth perception than always-on solid overlay (p = 0.031 and p = 0.008, respectively). New augmented reality platforms may improve performance in less-experienced surgeons. However, all image display modalities, including existing triplanar displays, carry a risk of inattentional blindness. ER - TY - JOUR AU - Marker, David R. AU - U Thainual, Paweena AU - Ungi, Tamas AU - Flammang, Aaron J. AU - Fichtinger, Gabor AU - Iordachita, Iulian I. AU - Carrino, John A. AU - Fritz, Jan T1 - 1.5 T augmented reality navigated interventional MRI: paravertebral sympathetic plexus injections. JO - Diagnostic and interventional radiology (Ankara, Turkey) Y1 - 2017 VL - 23 SP - 227 EP - 232 KW - Aged; Aged KW - 80 and over; Cadaver; Contrast Media; Female; Gadolinium DTPA KW - administration & dosage; Humans; Image Enhancement KW - methods; Injections KW - Spinal KW - methods; Magnetic Resonance Imaging KW - Interventional KW - methods; Male; Middle Aged; Needles KW - statistics & numerical data; Prospective Studies; Sympathetic Nervous System KW - anatomy & histology N1 - 1305-3612 Owner: NLM N2 - The high contrast resolution and absent ionizing radiation of interventional magnetic resonance imaging (MRI) can be advantageous for paravertebral sympathetic nerve plexus injections. We assessed the feasibility and technical performance of MRI-guided paravertebral sympathetic injections utilizing augmented reality navigation and a 1.5 T MRI scanner. A total of 23 bilateral injections of the thoracic (8/23, 35%), lumbar (8/23, 35%), and hypogastric (7/23, 30%) paravertebral sympathetic plexus were prospectively planned in twelve human cadavers using a 1.5 Tesla (T) MRI scanner and an augmented reality navigation system. MRI-conditional needles were used. Gadolinium-DTPA-enhanced saline was injected. Outcome variables included the number of control magnetic resonance images, target error of the needle tip, punctures of critical nontarget structures, distribution of the injected fluid, and procedure length. Augmented-reality navigated MRI guidance at 1.5 T provided detailed anatomical visualization for successful targeting of the paravertebral space, needle placement, and perineural paravertebral injections in 46 of 46 targets (100%).
A mean of 2 images (range, 1-5 images) was required to control needle placement. Changes of the needle trajectory occurred in 9 of 46 targets (20%) and changes of needle advancement occurred in 6 of 46 targets (13%), which were not statistically related to spinal regions (P = 0.728 and P = 0.86, respectively) or cadaver sizes (P = 0.893 and P = 0.859, respectively). The mean error of the needle tip was 3.9±1.7 mm. There were no punctures of critical nontarget structures. The mean procedure length was 33±12 min. 1.5 T augmented reality-navigated interventional MRI can provide accurate imaging guidance for perineural injections of the thoracic, lumbar, and hypogastric sympathetic plexus. ER - TY - JOUR AU - Markovic, Marko AU - Dosen, Strahinja AU - Cipriani, Christian AU - Popovic, Dejan AU - Farina, Dario T1 - Stereovision and augmented reality for closed-loop control of grasping in hand prostheses. JO - Journal of neural engineering Y1 - 2014 VL - 11 SP - 046001 EP - 046001 KW - Algorithms; Data Interpretation KW - Statistical; Electromyography; Feedback KW - Sensory; Hand; Hand Strength KW - physiology; Humans; Prostheses and Implants; Prosthesis Design; Robotics; Vision KW - Binocular KW - physiology N1 - 1741-2552 Owner: NLM N2 - Technologically advanced assistive devices are nowadays available to restore grasping, but effective and effortless control integrating both feed-forward (commands) and feedback (sensory information) is still missing. The goal of this work was to develop a user-friendly interface for the semi-automatic and closed-loop control of grasping and to test its feasibility. We developed a controller based on stereovision to automatically select grasp type and size and augmented reality (AR) to provide artificial proprioceptive feedback. The system was experimentally tested in healthy subjects using a dexterous hand prosthesis to grasp a set of daily objects. The subjects wore AR glasses with an integrated stereo-camera pair, and triggered the system via a simple myoelectric interface. The results demonstrated that the subjects became easily acquainted with the semi-autonomous control. The stereovision grasp decoder successfully estimated the grasp type and size in realistic, cluttered environments. When allowed (forced) to correct the automatic system decisions, the subjects successfully utilized the AR feedback and achieved close to ideal system performance. The new method implements a high-level, low-effort control of complex functions in addition to the low-level closed-loop control. The latter is achieved by providing rich visual feedback, which is integrated into the real-life environment. The proposed system is an effective interface applicable with small alterations for many advanced prosthetic and orthotic/therapeutic rehabilitation devices. ER - TY - JOUR AU - Markovic, Marko AU - Karnal, Hemanth AU - Graimann, Bernhard AU - Farina, Dario AU - Dosen, Strahinja T1 - GLIMPSE: Google Glass interface for sensory feedback in myoelectric hand prostheses.
JO - Journal of neural engineering Y1 - 2017 VL - 14 SP - 036007 EP - 036007 KW - Adult; Artificial Limbs; Biofeedback KW - Psychology KW - instrumentation KW - methods; Electromyography KW - methods; Equipment Design; Equipment Failure Analysis; Feedback KW - Sensory KW - physiology; Female; Hand KW - innervation KW - physiology; Humans; Male; Reproducibility of Results; Sensitivity and Specificity; Telemetry KW - methods; User-Computer Interface; Virtual Reality N1 - 1741-2552 Owner: NLM N2 - Providing sensory feedback to the user of the prosthesis is an important challenge. The common approach is to use tactile stimulation, which is easy to implement but requires training and has limited information bandwidth. In this study, we propose an alternative approach based on augmented reality. We have developed the GLIMPSE, a Google Glass application which connects to the prosthesis via a Bluetooth interface and renders the prosthesis states (EMG signals, aperture, force, and contact) using augmented reality (see-through display) and sound (bone conduction transducer). The interface was tested in healthy subjects who used the prosthesis with (FB group) and without (NFB group) feedback during a modified clothespins test that allowed us to vary the difficulty of the task. The outcome measures were the number of unsuccessful trials, the time to accomplish the task, and the subjective ratings of the relevance of the feedback. There was no difference in performance between FB and NFB groups in the case of a simple task (basic, same-color clothespins test), but the feedback significantly improved the performance in a more complex task (pins of different resistances). Importantly, the GLIMPSE feedback did not increase the time to accomplish the task. Therefore, the supplemental feedback might be useful in tasks that are more demanding, and thereby less likely to benefit from learning and feedforward control. The subjects integrated the supplemental feedback with the intrinsic sources (vision and muscle proprioception), developing their own idiosyncratic strategies to accomplish the task. The present study demonstrates a novel self-contained, ready-to-deploy, wearable feedback interface. The interface was successfully tested and was proven to be feasible and functionally beneficial. The GLIMPSE can be used as a practical solution but also as a general and flexible instrument to investigate closed-loop prosthesis control. ER - TY - CONF AU - Martins, Silvino AU - Vairinhos, Mario AU - Eliseu, Sergio AU - Borgerson, Janet T1 - Input system interface for image-guided surgery based on augmented reality PB - IEEE Y1 - 2016/december ER - TY - JOUR AU - Martirosyan, Nikolay L. AU - Skoch, Jesse AU - Watson, Jeffrey R. AU - Lemole, G. Michael AU - Romanowski, Marek AU - Anton, Rein T1 - Integration of indocyanine green videoangiography with operative microscope: augmented reality for interactive assessment of vascular structures and blood flow. JO - Neurosurgery Y1 - 2015 VL - 11 Suppl 2 SP - 252 EP - 257 KW - Angiography KW - Digital Subtraction KW - methods; Animals; Blood Circulation KW - physiology; Coloring Agents; Fluorescein Angiography KW - methods; Indocyanine Green; Male; Neurosurgical Procedures KW - methods; Rats; Vascular Surgical Procedures KW - methods; Video Recording KW - methods N1 - 1524-4040 Owner: NLM N2 - Preservation of adequate blood flow and exclusion of flow from lesions are key concepts of vascular neurosurgery.
Indocyanine green (ICG) fluorescence videoangiography is now widely used for the intraoperative assessment of vessel patency. Here, we present a proof-of-concept investigation of fluorescence angiography with augmented microscopy enhancement: real-time overlay of fluorescence videoangiography within the white light field of view of conventional operative microscopy. The femoral artery was exposed in 7 anesthetized rats. The dissection microscope was augmented to integrate real-time electronically processed near-infrared filtered images with conventional white light images seen through the standard oculars. This was accomplished by using an integrated organic light-emitting diode display to yield superimposition of white light and processed near-infrared images. ICG solution was injected into the jugular vein, and fluorescent femoral artery flow was observed. Fluorescence angiography with augmented microscopy enhancement was able to detect ICG fluorescence in a small artery of interest. Fluorescence appeared as a bright-green signal in the ocular overlaid with the anatomic image and limited to the anatomic borders of the femoral artery and its branches. Surrounding anatomic structures were clearly visualized. Observation of ICG within the vessel lumens permitted visualization of the blood flow. Recorded video loops could be reviewed in an offline mode for more detailed assessment of the vasculature. The overlay of fluorescence videoangiography within the field of view of the white light operative microscope allows real-time assessment of the blood flow within vessels during simultaneous surgical manipulation. This technique could improve intraoperative decision making during complex neurovascular procedures. ER - TY - JOUR AU - Marzano, Ettore AU - Piardi, Tullio AU - Soler, Luc AU - Diana, Michele AU - Mutter, Didier AU - Marescaux, Jacques AU - Pessaux, Patrick T1 - Augmented reality-guided artery-first pancreatico-duodenectomy. JO - Journal of gastrointestinal surgery : official journal of the Society for Surgery of the Alimentary Tract Y1 - 2013 VL - 17 SP - 1980 EP - 1983 KW - Adenocarcinoma KW - surgery; Aged; Common Bile Duct Neoplasms KW - surgery; Humans; Male; Mesenteric Arteries KW - surgery; Pancreaticoduodenectomy KW - methods; Surgery KW - Computer-Assisted KW - methods N1 - 1873-4626 Owner: NLM N2 - Augmented Reality (AR) in surgery consists of the fusion of synthetic computer-generated images (a 3D virtual model) obtained from the preoperative medical imaging work-up and real-time patient images, with the aim of visualizing unapparent anatomical details. The potential of AR navigation as a tool to improve the safety of surgical dissection is presented in a case of pancreatico-duodenectomy (PD). A 77-year-old male patient underwent an AR-assisted PD. The 3D virtual anatomical model was obtained from a thoraco-abdominal CT scan using custom software (VR-RENDER®, IRCAD). The virtual model was superimposed onto the operative field using an Exoscope (VITOM®, Karl Storz, Tuttlingen, Germany) as well as different visible landmarks (inferior vena cava, left renal vein, aorta, superior mesenteric vein, inferior margin of the pancreas). A computer scientist manually registered virtual and real images in real time using a video mixer (MX 70; Panasonic, Secaucus, NJ). Dissection of the superior mesenteric artery and the hanging maneuver were performed under AR guidance along the hanging plane. AR allowed for precise and safe recognition of all the important vascular structures.
Operative time was 360 min. AR display and fine registration were performed within 6 min. The postoperative course was uneventful. The pathology was positive for ampullary adenocarcinoma; the final stage was pT1N0 (0/43 retrieved lymph nodes) with clear surgical margins. AR is a valuable navigation tool that can enhance the ability to achieve a safe surgical resection during PD. ER - TY - JOUR AU - Mather, Carey AU - Barnett, Tony AU - Broucek, Vlasti AU - Saunders, Annette AU - Grattidge, Darren AU - Huang, Weidong T1 - Helping Hands: Using Augmented Reality to Provide Remote Guidance to Health Professionals. JO - Studies in health technology and informatics Y1 - 2017 VL - 241 SP - 57 EP - 62 KW - Education KW - Distance; Health Personnel; Humans; Learning; Augmented reality; health professional; learning and teaching; procedure; rural and remote; student; usability N1 - 1879-8365 Owner: NLM N2 - Limited access to expert practitioners and geographic distance can constrain the capacity for appropriate supervision of health professionals in the workplace. Guidance and support of clinicians and students to undertake new or infrequent procedures can be resource intensive. The Helping Hands remote augmented reality system is an innovation to support the development, and oversee the acquisition, of procedural skills through remote learning and teaching supervision while in clinical practice. Helping Hands is a wearable, portable, hands-free, low-cost system composed of two networked laptops, a head-mounted display worn by the recipient, and a display screen used remotely by the instructor. Hand hygiene was used as the test procedure as it is a foundation skill learned by all health profession students. The technology supports unmediated remote gesture guidance by augmenting the object with the Helping Hands of a health professional. A laboratory-based study and field trial tested the usability and feasibility of the remote guidance system. The study found the Helping Hands system did not compromise learning outcomes. This innovation has the potential to transform remote learning and teaching supervision by giving health professionals and students opportunities to develop and improve their procedural performance at the workplace. ER - TY - CONF AU - McLeod, A. Jonathan AU - Baxter, John S. H. AU - de Ribaupierre, Sandrine AU - Peters, Terry M. AU - Yaniv, Ziv R. AU - Holmes, David R. T1 - Motion magnification for endoscopic surgery PB - SPIE Y1 - 2014/march ER - TY - JOUR AU - McMullen, David P. AU - Hotson, Guy AU - Katyal, Kapil D. AU - Wester, Brock A. AU - Fifer, Matthew S. AU - McGee, Timothy G. AU - Harris, Andrew AU - Johannes, Matthew S. AU - Vogelstein, R. Jacob AU - Ravitz, Alan D. AU - Anderson, William S. AU - Thakor, Nitish V. AU - Crone, Nathan E. T1 - Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.
JO - IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society Y1 - 2014 VL - 22 SP - 784 EP - 796 KW - Adult; Artificial Intelligence; Artificial Limbs; Brain-Computer Interfaces; Electroencephalography KW - instrumentation KW - methods; Equipment Failure Analysis; Eye Movements; Female; Humans; Male; Man-Machine Systems; Pilot Projects; Prosthesis Design; Robotics KW - methods; Therapy KW - Computer-Assisted KW - methods N1 - 1558-0210 Owner: NLM N2 - To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. ER - TY - JOUR AU - Melillo, Paolo AU - Riccio, Daniel AU - Di Perna, Luigi AU - Sanniti Di Baja, Gabriella AU - De Nino, Maurizio AU - Rossi, Settimio AU - Testa, Francesco AU - Simonelli, Francesca AU - Frucci, Maria T1 - Wearable Improved Vision System for Color Vision Deficiency Correction. JO - IEEE journal of translational engineering in health and medicine Y1 - 2017 VL - 5 SP - 3800107 EP - 3800107 KW - Augmented reality; color vision deficiency; medical device; wearable device N1 - 2168-2372 Owner: NLM N2 - Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in a subject with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved ([Formula: see text]) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the color vision test as normal-vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. ER - TY - JOUR AU - Michmizos, Konstantinos P.
AU - Krebs, Hermano Igo T1 - Pediatric robotic rehabilitation: Current knowledge and future trends in treating children with sensorimotor impairments. JO - NeuroRehabilitation Y1 - 2017 VL - 41 SP - 69 EP - 76 KW - Ankle KW - physiopathology; Ankle Joint KW - physiopathology; Child; Gait Disorders KW - Neurologic KW - rehabilitation; Humans; Movement; Neurological Rehabilitation KW - instrumentation KW - methods KW - trends; Robotics KW - trends; Software; Rehabilitation robotics; adaptive robotic therapy; cerebral palsy; pediatric; robot-aided neurorehabilitation; robot-aided therapy N1 - 1878-6448 Owner: NLM N2 - Robot-aided sensorimotor therapy imposes highly repetitive tasks that can translate to substantial improvement when patients remain cognitively engaged in the clinical procedure, a goal that most children find hard to pursue. Knowing that the child's brain is much more plastic than an adult's, it is reasonable to expect that the clinical gains observed in the adult population during the last two decades would be followed up by even greater gains in children. Nonetheless, and despite the multitude of adult studies, in children we are just getting started: there is a scarcity of pediatric robotic rehabilitation devices currently available, and the number of clinical studies that employ them is also very limited. We have recently developed MIT's pedi-Anklebot, an adaptive habilitation robotic device that continuously motivates physically impaired children to do their best by tracking the child's performance and modifying their therapy accordingly. The robot's design is based on a multitude of studies we conducted focusing on ankle sensorimotor control. In this paper, we briefly describe the device and the adaptive environment we built around the impaired children, present the initial clinical results, and discuss how they could steer future trends in pediatric robotic therapy. The results support the potential for future interventions to account for the differences in the sensorimotor control of the targeted limbs and their functional use (rhythmic vs. discrete movements and mechanical impedance training) and explore how new technological advancements such as augmented reality could employ new knowledge from neuroscience. ER - TY - JOUR AU - Miyake, R. K. T1 - Cryo-laser and cryo-sclerotherapy guided by Augmented Reality JO - Phlebologie Y1 - 2014/05 VL - 43 IS - 05 SP - 257 EP - 261 ER - TY - JOUR AU - Moro, Christian AU - Štromberga, Zane AU - Raikos, Athanasios AU - Stirling, Allan T1 - The effectiveness of virtual and augmented reality in health sciences and medical anatomy. JO - Anatomical sciences education Y1 - 2017 VL - 10 SP - 549 EP - 559 KW - Adolescent; Adult; Anatomy KW - education; Computer-Assisted Instruction KW - methods; Educational Measurement; Female; Health Occupations KW - education; Humans; Learning; Male; Middle Aged; Models KW - Anatomic; Perception; Program Evaluation; Software; Students KW - Health Occupations; Virtual Reality; Young Adult; augmented reality; computer-aided instruction; gross anatomy education; health sciences education; medical education; mixed reality; oculus rift; tablet applications; undergraduate education; virtual reality N1 - 1935-9780 Owner: NLM N2 - Although cadavers constitute the gold standard for teaching anatomy to medical and health science students, there are substantial financial, ethical, and supervisory constraints on their use.
In addition, although anatomy remains one of the fundamental areas of medical education, universities have decreased the hours allocated to teaching gross anatomy in favor of applied clinical work. The release of virtual reality (VR) and augmented reality (AR) devices allows learning to occur through hands-on immersive experiences. The aim of this research was to assess whether learning structural anatomy utilizing VR or AR is as effective as tablet-based (TB) applications, and whether these modes allowed enhanced student learning, engagement, and performance. Participants (n = 59) were randomly allocated to one of three learning modes (VR, AR, or TB) and completed a lesson on skull anatomy, after which they completed an anatomical knowledge assessment. Student perceptions of each learning mode and any adverse effects experienced were recorded. No significant differences were found between mean assessment scores in VR, AR, or TB. During the lessons, however, VR participants were more likely to exhibit adverse effects such as headaches (25% in VR, P < 0.05), dizziness (40% in VR, P < 0.001), or blurred vision (35% in VR, P < 0.01). Both VR and AR are as valuable for teaching anatomy as tablet devices, but also promote intrinsic benefits such as increased learner immersion and engagement. These outcomes show great promise for the effective use of virtual and augmented reality as means to supplement lesson content in anatomical education. ER - TY - JOUR AU - Moult, E. AU - Ungi, T. AU - Welch, M. AU - Lu, J. AU - McGraw, R. C. AU - Fichtinger, G. T1 - Ultrasound-guided facet joint injection training using Perk Tutor. JO - International journal of computer assisted radiology and surgery Y1 - 2013 VL - 8 SP - 831 EP - 836 KW - Education KW - Medical KW - methods; Equipment Design; Humans; Injections KW - Intra-Articular KW - instrumentation; Low Back Pain KW - diagnostic imaging KW - drug therapy; Needles; Reproducibility of Results; Ultrasonography; Zygapophyseal Joint N1 - 1861-6429 Owner: NLM N2 - Facet syndrome is a condition that may cause 15-45 % of chronic lower back pain. It is commonly diagnosed and treated using facet joint injections. This needle technique demands high accuracy, and ultrasound (US) is a potentially useful modality to guide the needle. US-guided injections, however, require physicians to interpret 2-D sonographic images while simultaneously manipulating a US probe and needle. Therefore, US guidance for facet joint injections needs advanced training methodologies that will equip physicians with the requisite skills. We used Perk Tutor, an augmented reality training system for US-guided needle insertions, in a configuration for percutaneous procedures of the lumbar spine. In a pilot study of 26 pre-medical undergraduate students, we evaluated the efficacy of Perk Tutor training compared to traditional training. The Perk Tutor Trained group, which had access to Perk Tutor during training, had a mean success rate of 61.5 %, while the Control group, which received traditional training, had a mean success rate of 38.5 % ([Formula: see text]). No significant differences in procedure times or needle path lengths were observed between the two groups. The results of this pilot study suggest that Perk Tutor provides an improved training environment for US-guided facet joint injections on a synthetic model.
ER - TY - JOUR AU - Mouraux, Dominique AU - Brassinne, Eric AU - Sobczak, Stéphane AU - Nonclercq, Antoine AU - Warzée, Nadine AU - Sizer, Phillip S. AU - Tuna, Turgay AU - Penelle, Benoît T1 - 3D augmented reality mirror visual feedback therapy applied to the treatment of persistent, unilateral upper extremity neuropathic pain: a preliminary study. JO - The Journal of manual & manipulative therapy Y1 - 2017 VL - 25 SP - 137 EP - 143 KW - Augmented reality; CRPS; Mirror therapy; Mirror visual feedback; Neuropathic pain; Phantom limb pain; Physical therapy N1 - 1066-9817 Owner: NLM N2 - We assessed whether or not pain relief could be achieved with a new system that combines a 3D augmented reality system (3DARS) with the principles of mirror visual feedback. Twenty-two patients between 18 and 75 years of age who suffered from chronic neuropathic pain were included. Each patient performed five 20-minute 3DARS treatment sessions spread over a period of one week. The following pain parameters were assessed: (1) a visual analog scale (VAS) after each treatment session; (2) the McGill pain scale and DN4 questionnaire, completed before the first session and 24 h after the last session. The mean improvement of VAS per session was 29% (P < 0.001). There was an immediate session effect demonstrating a systematic improvement in pain between the beginning and the end of each session. We noted that this pain reduction was partially preserved until the next session. If we compare the pain level at baseline and 24 h after the last session, there was a significant decrease in pain of 37% (P < 0.001). There was a significant decrease on the McGill Pain Questionnaire (P < 0.001) and DN4 questionnaire (P < 0.01). Our results indicate that 3DARS induced a significant pain decrease for patients who presented with chronic neuropathic pain in a unilateral upper extremity. While further research is necessary before definitive conclusions can be drawn, clinicians could implement the approach as a preparatory adjunct for providing temporary pain relief aimed at enhancing chronic pain patients' tolerance of manual therapy and exercise intervention. ER - TY - JOUR AU - Mousavi Hondori, Hossein AU - Khademi, Maryam AU - Dodakian, Lucy AU - Cramer, Steven C. AU - Lopes, Cristina Videira T1 - A Spatial Augmented Reality rehab system for post-stroke hand rehabilitation. JO - Studies in health technology and informatics Y1 - 2013 VL - 184 SP - 279 EP - 285 KW - Biofeedback KW - Psychology KW - methods; Exercise Therapy KW - methods; Hand; Humans; Imaging KW - Three-Dimensional KW - methods; Movement Disorders KW - etiology KW - rehabilitation; Stroke KW - complications; Stroke Rehabilitation; Therapy KW - Computer-Assisted KW - methods; Treatment Outcome; User-Computer Interface N1 - 0926-9630 Owner: NLM N2 - This paper features a Spatial Augmented Reality system for rehabilitation of hand and arm movement. The table-top home-based system tracks a subject's hand and creates a virtual audio-visual interface for performing rehabilitation-related tasks that involve wrist, elbow, and shoulder movements. It measures range, speed, and smoothness of movements locally and can send the real-time photos and data to the clinic for further assessment. To evaluate the system, it was tested on two normal subjects and proved functional. ER - TY - JOUR AU - Mousavi Hondori, Hossein AU - Khademi, Maryam AU - Dodakian, Lucy AU - McKenzie, Alison AU - Lopes, Cristina V. AU - Cramer, Steven C. T1 - Choice of Human-Computer Interaction Mode in Stroke Rehabilitation.
JO - Neurorehabilitation and neural repair Y1 - 2016 VL - 30 SP - 258 EP - 265 KW - Arm KW - physiopathology; Biomechanical Phenomena; Chronic Disease; Cognition; Female; Humans; Male; Middle Aged; Motor Activity KW - physiology; Musculoskeletal Manipulations KW - instrumentation KW - methods; Paresis KW - physiopathology KW - rehabilitation; Reaction Time; Stroke KW - physiopathology; Stroke Rehabilitation; Treatment Outcome; User-Computer Interface; Video Games; augmented reality; direct interaction; indirect interaction; recovery; stroke N1 - 1552-6844 Owner: NLM N2 - Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse. Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor. Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version. Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients. ER - TY - JOUR AU - Müller, Michael AU - Rassweiler, Marie-Claire AU - Klein, Jan AU - Seitel, Alexander AU - Gondan, Matthias AU - Baumhauer, Matthias AU - Teber, Dogu AU - Rassweiler, Jens J. AU - Meinzer, Hans-Peter AU - Maier-Hein, Lena T1 - Mobile augmented reality for computer-assisted percutaneous nephrolithotomy. JO - International journal of computer assisted radiology and surgery Y1 - 2013 VL - 8 SP - 663 EP - 675 KW - Fiducial Markers; Fluoroscopy; Humans; Image Processing KW - Computer-Assisted KW - methods; Kidney KW - diagnostic imaging KW - surgery; Kidney Calculi KW - surgery; Nephrostomy KW - Percutaneous KW - methods; Phantoms KW - Imaging; Surgery KW - methods N1 - 1861-6429 Owner: NLM N2 - Percutaneous nephrolithotomy (PCNL) plays an integral role in the treatment of renal stones. Creating percutaneous renal access is the most important and challenging step in the procedure. To facilitate this step, we evaluated our novel mobile augmented reality (AR) system for its feasibility of use for PCNL. A tablet computer, such as an iPad®, is positioned above the patient with its camera pointing toward the field of intervention. The images of the tablet camera are registered with the CT image by means of fiducial markers. Structures of interest can be superimposed semi-transparently on the video images.
We present a systematic evaluation by means of a phantom study. A urological trainee and two experts conducted 53 punctures on kidney phantoms. The trainee performed best with the proposed AR system in terms of puncturing time (mean: 99 s), whereas the experts performed best with fluoroscopy (mean: 59 s). iPad assistance lowered radiation exposure by a factor of 3 for the inexperienced physician and by a factor of 1.8 for the experts in comparison with fluoroscopy usage. We achieved a mean visualization accuracy of 2.5 mm. The proposed tablet computer-based AR system has proven helpful in assisting percutaneous interventions such as PCNL and shows benefits compared to other state-of-the-art assistance systems. A drawback of the system in its current state is the lack of depth information. Despite that, the simple integration into the clinical workflow highlights the potential impact of this approach on such interventions. ER - TY - CONF AU - Nakao, Megumi AU - Imanishi, K. AU - Kioka, M. AU - Yoshida, M. AU - Minato, Kotaro AU - Matsuda, Tetsuya T1 - Synchronized Visualization of Bone Cutting to Support Microendoscopic Discectomy Y1 - 2012/december VL - 3 ER - TY - JOUR AU - Nakata, Norio AU - Suzuki, Naoki AU - Hattori, Asaki AU - Hirai, Naoya AU - Miyamoto, Yukio AU - Fukuda, Kunihiko T1 - Informatics in radiology: Intuitive user interface for 3D image manipulation using augmented reality and a smartphone as a remote control. JO - Radiographics : a review publication of the Radiological Society of North America, Inc Y1 - 2012 VL - 32 SP - E169 EP - E174 KW - Cell Phone; Computer Peripherals; Computers KW - Handheld; Equipment Design; Equipment Failure Analysis; Image Enhancement KW - instrumentation; Image Interpretation KW - Computer-Assisted KW - instrumentation; Imaging KW - Three-Dimensional KW - instrumentation; Medical Informatics Applications; Telemetry KW - instrumentation; User-Computer Interface N1 - 1527-1323 Owner: NLM N2 - Although widely used as a pointing device on personal computers (PCs), the mouse was originally designed for control of two-dimensional (2D) cursor movement and is not suited to complex three-dimensional (3D) image manipulation. Augmented reality (AR) is a field of computer science that involves combining the physical world and an interactive 3D virtual world; it represents a new 3D user interface (UI) paradigm. A system for 3D and four-dimensional (4D) image manipulation has been developed that uses optical tracking AR integrated with a smartphone remote control. The smartphone is placed in a hard case (jacket) with a 2D printed fiducial marker for AR on the back. It is connected to a conventional PC with an embedded Web camera by means of WiFi. The touch screen UI of the smartphone is then used as a remote control for 3D and 4D image manipulation. Using this system, the radiologist can easily manipulate 3D and 4D images from computed tomography and magnetic resonance imaging in an AR environment with high-quality image resolution. Pilot assessment of this system suggests that radiologists will be able to manipulate 3D and 4D images in the reading room in the near future. Supplemental material available at http://radiographics.rsna.org/lookup/suppl/doi:10.1148/rg.324115086/-/DC1. ER - TY - JOUR AU - Nifakos, Sokratis AU - Tomson, Tanja AU - Zary, Nabil T1 - Combining physical and virtual contexts through augmented reality: design and evaluation of a prototype using a drug box as a marker for antibiotic training.
JO - PeerJ Y1 - 2014 VL - 2 SP - e697 EP - e697 KW - Antibiotics; Antimicrobial resistance; Augmented reality; Mobile learning N1 - 2167-8359 Owner: NLM N2 - Introduction. Antimicrobial resistance is a global health issue. Studies have shown that improved antibiotic prescription education among healthcare professionals reduces mistakes during the antibiotic prescription process. The aim of this study was to investigate novel educational approaches that, through the use of Augmented Reality technology, could make use of the real physical context and thereby enrich the educational process of antibiotic prescription. The objective is to investigate which type of information related to antibiotics could be used in an augmented reality application for antibiotic education. Methods. This study followed the Design-Based Research Methodology, composed of the following main steps: problem analysis, investigation of information that should be visualized for the training session, and finally the involvement of the end users in the development and evaluation processes of the prototype. Results. Two of the most important aspects in the antibiotic prescription process, to represent in an augmented reality application, are the antibiotic guidelines and the side effects. Moreover, this study showed how this information could be visualized from a mobile device using an Augmented Reality scanner and antibiotic drug boxes as markers. Discussion. In this study we investigated the usage of objects from a real physical context, such as drug boxes, and how they could be used as educational resources. The logical next steps are to examine how this approach of combining physical and virtual contexts through Augmented Reality applications could contribute to the improvement of competencies among healthcare professionals and its impact on the decrease of antibiotic resistance. ER - TY - JOUR AU - Nifakos, Sokratis AU - Zary, Nabil T1 - Virtual patients in a real clinical context using augmented reality: impact on antibiotics prescription behaviors. JO - Studies in health technology and informatics Y1 - 2014 VL - 205 SP - 707 EP - 711 KW - Anti-Bacterial Agents KW - therapeutic use; Bacterial Infections KW - drug therapy; Computer-Assisted Instruction KW - methods; Drug Prescriptions KW - statistics & numerical data; Education KW - Distance KW - methods; Humans; Inappropriate Prescribing KW - prevention & control KW - statistics & numerical data; Patient Simulation; Practice Patterns KW - Physicians' KW - statistics & numerical data; User-Computer Interface N1 - 1879-8365 Owner: NLM N2 - The research community has called for the development of effective educational interventions for addressing prescription behaviour, since antimicrobial resistance remains a global health issue. Examining the potential to shift the educational process from personal computers to mobile devices, in this paper we investigated a new method of integrating Virtual Patients into mobile devices with augmented reality technology, enriching the practitioner's education in prescription behavior. Moreover, we also explored which information is critical during prescription behavior education, and we visualized this information in a real context with augmented reality technology, simultaneously with a running Virtual Patient scenario. Following this process, we set the educational frame of experiential knowledge in a mixed (virtual and real) environment.
ER - TY - JOUR AU - Nishimoto, Soh AU - Tonooka, Maki AU - Fujita, Kazutoshi AU - Sotsuka, Yohei AU - Fujiwara, Toshihiro AU - Kawai, Kenichiro AU - Kakibuchi, Masao T1 - An augmented reality system in lymphatico-venous anastomosis surgery. JO - Journal of surgical case reports Y1 - 2016/05 VL - 2016 N1 - 2042-8812 Owner: NLM N2 - Indocyanine green lymphography, displayed as an infrared image, is very useful in identifying lymphatic vessels during surgeries. Surgeons refer to the infrared image on the displays as they proceed with the operation. Those displays are usually placed on the walls or beside the operating tables. The surgeons cannot watch the infrared image and the operative field simultaneously. They have to move their heads and shift their lines of sight. An augmented reality system was developed for simultaneous reference to the infrared image, overlaid on the real view of the operative field. A surgeon wore a see-through, eyeglass-type display during lymphatico-venous anastomosis surgery. The infrared image was transferred wirelessly to the display. The surgeon was able to recognize fluorescently shining lymphatic vessels projected on the glasses and dissect them out. ER - TY - JOUR AU - Nissler, Christian AU - Mouriki, Nikoleta AU - Castellini, Claudio T1 - Optical Myography: Detecting Finger Movements by Looking at the Forearm. JO - Frontiers in neurorobotics Y1 - 2016 VL - 10 SP - 3 EP - 3 KW - computer vision; hand prostheses; human-machine interface; myography; rehabilitation robotics N1 - 1662-5218 Owner: NLM N2 - One of the crucial problems found in the scientific community of assistive/rehabilitation robotics nowadays is that of automatically detecting what a disabled subject (for instance, a hand amputee) wants to do, exactly when she wants to do it, and strictly for the time she wants to do it. This problem, commonly called "intent detection," has traditionally been tackled using surface electromyography, a technique which suffers from a number of drawbacks, including the changes in the signal induced by sweat and muscle fatigue. With the advent of realistic, physically plausible augmented- and virtual-reality environments for rehabilitation, this approach does not suffice anymore. In this paper, we explore a novel method to solve the problem, which we call Optical Myography (OMG). The idea is to visually inspect the human forearm (or stump) to reconstruct what fingers are moving and to what extent. In a psychophysical experiment involving ten intact subjects, we used visual fiducial markers (AprilTags) and a standard web camera to visualize the deformations of the surface of the forearm, which then were mapped to the intended finger motions. As ground truth, a visual stimulus was used, avoiding the need for finger sensors (force/position sensors, datagloves, etc.). Two machine-learning approaches, a linear and a non-linear one, were comparatively tested in settings of increasing realism. The results indicate an average error in the range of 0.05-0.22 (root mean square error normalized over the signal range), in line with similar results obtained with more mature techniques such as electromyography. If further successfully tested on a larger scale, this approach could lead to vision-based intent detection for amputees, with the main application of letting such disabled persons dexterously and reliably interact in an augmented-/virtual-reality setup.
ER - TY - JOUR AU - Nomura, Tsutomu AU - Mamada, Yasuhiro AU - Nakamura, Yoshiharu AU - Matsutani, Takeshi AU - Hagiwara, Nobutoshi AU - Fujita, Isturo AU - Mizuguchi, Yoshiaki AU - Fujikura, Terumichi AU - Miyashita, Masao AU - Uchida, Eiji T1 - Laparoscopic skill improvement after virtual reality simulator training in medical students as assessed by augmented reality simulator. JO - Asian journal of endoscopic surgery Y1 - 2015 VL - 8 SP - 408 EP - 412 KW - Cholecystectomy KW - Laparoscopic KW - education; Clinical Competence; Education KW - Medical KW - Undergraduate KW - methods; Humans; Japan; Simulation Training KW - methods; User-Computer Interface; Augmented reality simulator; medical students; virtual reality simulator N1 - 1758-5910 Owner: NLM N2 - Definitive assessment of laparoscopic skill improvement after virtual reality simulator training is best obtained during an actual operation. However, this is impossible in medical students. Therefore, we developed an alternative assessment technique using an augmented reality simulator. Nineteen medical students completed a 6-week training program using a virtual reality simulator (LapSim). The pretest and post-test were performed using an object-positioning module and cholecystectomy on an augmented reality simulator (ProMIS). The mean performance measures between pre- and post-training on the LapSim were compared with a paired t-test. In the object-positioning module, the execution time of the task (P < 0.001) and the left and right instrument path lengths (P = 0.001) were significantly shorter, and the left and right instrument economy of movement (P < 0.001) significantly better, after the LapSim training than before it. With respect to improvement in laparoscopic cholecystectomy using a gallbladder model, the execution time to identify, clip, and cut the cystic duct and cystic artery as well as the execution time to dissect the gallbladder away from the liver bed were both significantly shorter after than before the LapSim training (P = 0.01). Our training curriculum using a virtual reality simulator improved the operative skills of medical students as objectively evaluated by assessment using an augmented reality simulator instead of an actual operation. We hope that these findings help to establish an effective training program for medical students. ER - TY - JOUR AU - Nomura, Tsutomu AU - Matsutani, Takeshi AU - Hagiwara, Nobutoshi AU - Fujita, Itsuo AU - Nakamura, Yoshiharu AU - Kanazawa, Yoshikazu AU - Makino, Hiroshi AU - Mamada, Yasuhiro AU - Fujikura, Terumichi AU - Miyashita, Masao AU - Uchida, Eiji T1 - Characteristics predicting laparoscopic skill in medical students: nine years' experience in a single center. JO - Surgical endoscopy Y1 - 2018 VL - 32 SP - 96 EP - 104 KW - Augmented reality simulator; Medical students; Virtual reality simulator N1 - 1432-2218 Owner: NLM N2 - We introduced laparoscopic simulator training for medical students in 2007. This study was designed to identify factors that predict the laparoscopic skill of medical students, to identify intergenerational differences in abilities, and to estimate the variability of results in each training group. Our ultimate goal was to determine the optimal educational program for teaching laparoscopic surgery to medical students. Between 2007 and 2015, a total of 270 fifth-year medical students were enrolled in this observational study.
Before training, the participants were asked questions about their interest in laparoscopic surgery, experience with playing video games, confidence about driving, and manual dexterity. After the training, aspects of their competence (execution time, instrument path length, and economy of instrument movement) were assessed. Multiple regression analysis identified significant effects of manual dexterity, gender, and confidence about driving on the results of the training. The training results have significantly improved over recent years. The variability among the results in each training group was relatively small. We identified the characteristics of medical students with excellent laparoscopic skills. We observed educational benefits from interactions between medical students within each training group. Our study suggests that selection and grouping are important to the success of modern programs designed to train medical students in laparoscopic surgery. ER - TY - JOUR AU - Nosrati, Masoud S. AU - Abugharbieh, Rafeef AU - Peyrat, Jean-Marc AU - Abinahed, Julien AU - Al-Alao, Osama AU - Al-Ansari, Abdulla AU - Hamarneh, Ghassan T1 - Simultaneous Multi-Structure Segmentation and 3D Nonrigid Pose Estimation in Image-Guided Robotic Surgery. JO - IEEE transactions on medical imaging Y1 - 2016 VL - 35 SP - 1 EP - 12 KW - Algorithms; Animals; Humans; Imaging KW - Three-Dimensional KW - methods; Kidney KW - pathology KW - surgery; Kidney Neoplasms KW - surgery; Nephrectomy; Robotic Surgical Procedures KW - methods; Sheep N1 - 1558-254X Owner: NLM N2 - In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries on the projected camera views. In this paper, we propose a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters (an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom), our method embeds these parameters into the optimization, thereby correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery, with results demonstrating high accuracy and robustness. ER - TY - JOUR AU - Nosrati, Masoud S.
AU - Amir-Khalili, Alborz AU - Peyrat, Jean-Marc AU - Abinahed, Julien AU - Al-Alao, Osama AU - Al-Ansari, Abdulla AU - Abugharbieh, Rafeef AU - Hamarneh, Ghassan T1 - Endoscopic scene labelling and augmentation using intraoperative pulsatile motion and colour appearance cues with preoperative anatomical priors. JO - International journal of computer assisted radiology and surgery Y1 - 2016 VL - 11 SP - 1409 EP - 1418 KW - Color; Endoscopy KW - methods; Humans; Imaging KW - Three-Dimensional KW - methods; Nephrectomy KW - methods; 3D pose estimation; Endoscopy; Image-guided surgery; Kidney; Occluded vessels; Partial nephrectomy; Patient-specific model; Robotic surgery; Segmentation N1 - 1861-6429 Owner: NLM N2 - Despite great advances in medical image segmentation, the accurate and automatic segmentation of endoscopic scenes remains a challenging problem. Two important aspects have to be considered in segmenting an endoscopic scene: (1) noise and clutter due to light reflection and smoke from cutting tissue, and (2) structure occlusion (e.g. vessels occluded by fat, or endophytic tumours occluded by healthy kidney tissue). In this paper, we propose a variational technique to augment a surgeon's endoscopic view by segmenting visible as well as occluded structures in the intraoperative endoscopic view. Our method estimates the 3D pose and deformation of anatomical structures segmented from 3D preoperative data in order to align to and segment corresponding structures in 2D intraoperative endoscopic views. Our preoperative to intraoperative alignment is driven by, first, spatio-temporal, signal processing based vessel pulsation cues and, second, machine learning based analysis of colour and textural visual cues. To our knowledge, this is the first work that utilizes vascular pulsation cues for guiding preoperative to intraoperative registration. In addition, we incorporate a tissue-specific (i.e. heterogeneous) physically based deformation model into our framework to cope with the non-rigid deformation of structures that occurs during the intervention. We validated the utility of our technique on fifteen challenging clinical cases with 45 % improvements in accuracy compared to the state-of-the-art method. A new technique for localizing both visible and occluded structures in an endoscopic view was proposed and tested. This method leverages both preoperative data, as a source of patient-specific prior knowledge, as well as vasculature pulsation and endoscopic visual cues in order to accurately segment the highly noisy and cluttered environment of an endoscopic video. Our results on in vivo clinical cases of partial nephrectomy illustrate the potential of the proposed framework for augmented reality applications in minimally invasive surgeries. ER - TY - JOUR AU - Novak, Domen AU - Nagle, Aniket AU - Keller, Urs AU - Riener, Robert T1 - Increasing motivation in robot-aided arm rehabilitation with competitive and cooperative gameplay JO - Journal of NeuroEngineering and Rehabilitation Y1 - 2014 VL - 11 IS - 1 SP - 64 EP - 64 ER - TY - JOUR AU - Ntourakis, Dimitrios AU - Memeo, Ricardo AU - Soler, Luc AU - Marescaux, Jacques AU - Mutter, Didier AU - Pessaux, Patrick T1 - Augmented Reality Guidance for the Resection of Missing Colorectal Liver Metastases: An Initial Experience. 
JO - World journal of surgery Y1 - 2016 VL - 40 SP - 419 EP - 426 KW - Aged; Anatomic Landmarks; Antineoplastic Combined Chemotherapy Protocols KW - therapeutic use; Chemotherapy KW - Adjuvant; Colorectal Neoplasms KW - pathology; Female; Hepatectomy KW - methods; Humans; Imaging KW - Three-Dimensional; Liver Neoplasms KW - drug therapy KW - secondary KW - surgery; Male; Middle Aged; Neoadjuvant Therapy; Neoplasm Recurrence KW - Local KW - surgery; Neoplasm KW - Residual; Organoplatinum Compounds KW - administration & dosage; Oxaliplatin; Pilot Projects; Prospective Studies; Surgery KW - Computer-Assisted; Tomography KW - X-Ray Computed N1 - 1432-2323 Owner: NLM N2 - Modern chemotherapy achieves the shrinking of colorectal cancer liver metastases (CRLM) to such an extent that they may disappear from radiological imaging. Disappearing CRLM rarely represent a complete pathological remission and carry an important risk of recurrence. Augmented reality (AR) consists of the fusion of real-time patient images with a computer-generated 3D virtual patient model created from pre-operative medical imaging. The aim of this prospective pilot study is to investigate the potential of AR navigation as a tool to help locate and surgically resect missing CRLM. A 3D virtual anatomical model was created from thoracoabdominal CT scans using custom software (VR-RENDER®, IRCAD). The virtual model was superimposed onto the operative field using an Exoscope (VITOM®, Karl Storz, Tuttlingen, Germany). Virtual and real images were manually registered in real time using a video mixer, based on external anatomical landmarks, with an estimated accuracy of 5 mm. This modality was tested in three patients, with four missing CRLM that had sizes from 12 to 24 mm, undergoing laparotomy after receiving pre-operative oxaliplatin-based chemotherapy. AR display and fine registration were performed within 6 min. AR helped detect all four missing CRLM, and guided their resection. In all cases the planned safety margin of 1 cm was clear and resections were confirmed to be R0 by pathology. There was no major postoperative morbidity or mortality. No local recurrence occurred in the follow-up period of 6-22 months. This initial experience suggests that AR may be a helpful navigation tool for the resection of missing CRLM. ER - TY - JOUR AU - Nugent, Emmeline AU - Shirilla, Nicole AU - Hafeez, Adnan AU - O'Riordain, Diarmuid S. AU - Traynor, Oscar AU - Harrison, Anthony M. AU - Neary, Paul T1 - Development and evaluation of a simulator-based laparoscopic training program for surgical novices. JO - Surgical endoscopy Y1 - 2013 VL - 27 SP - 214 EP - 221 KW - Adolescent; Adult; Clinical Competence KW - standards; Competency-Based Education KW - methods; Computer Simulation; Education KW - Medical KW - Graduate KW - methods; Female; Functional Laterality; General Surgery KW - education; Humans; Ireland; Laparoscopy KW - education KW - standards; Learning Curve; Male; Manikins; Young Adult N1 - 1432-2218 Owner: NLM N2 - The use of simulation to train novice surgeons in laparoscopic skills is becoming increasingly popular. To maximize benefit from simulation, training needs to be delivered and assessed in a structured manner. This study aimed to define performance goals, demonstrate construct validity of the training program, and evaluate whether novice surgeons could reach the preset performance goals. Nine expert laparoscopic surgeons established performance goals for three basic modules of an augmented-reality laparoscopic simulator.
The three laparoscopic modules were used by 40 novice surgeons and 40 surgical trainees (postgraduate years [PGYs] 1-4). The performance outcomes were analyzed across the different groups (novice, PGYs 1 and 2, PGYs 3 and 4, expert) to determine construct validity. Then 26 recruited novices trained on the three modules with the aim of reaching the performance goals. The results demonstrated a significant difference in performance between all levels of experience for time (p < 0.001), motion analysis (p < 0.001), and error score (p < 0.001), thus demonstrating construct validity. All 26 novice surgeons significantly improved in performance with repetition for the metrics of time (p < 0.001) and motion analysis (p < 0.001). For two of the modules, the proficiency goals were reached in fewer than 10 trials by 80% of the study participants. Basic skills in laparoscopic surgery can be learned and improved using proficiency-based simulation training. It is possible for novice surgeons to achieve predefined performance goals in a reasonable time frame. ER - TY - CONF AU - Oh, Jihun AU - Kang, Xin AU - Wilson, Emmanuel AU - Peters, Craig A. AU - Kane, Timothy D. AU - Shekhar, Raj AU - Yaniv, Ziv R. AU - Holmes, David R. T1 - Stereoscopic augmented reality using ultrasound volume rendering for laparoscopic surgery in children PB - SPIE Y1 - 2014/march ER - TY - JOUR AU - Okamoto, Tomoyoshi AU - Onda, Shinji AU - Matsumoto, Michinori AU - Gocho, Takeshi AU - Futagawa, Yasuro AU - Fujioka, Shuichi AU - Yanaga, Katsuhiko AU - Suzuki, Naoki AU - Hattori, Asaki T1 - Utility of augmented reality system in hepatobiliary surgery. JO - Journal of hepato-biliary-pancreatic sciences Y1 - 2013 VL - 20 SP - 249 EP - 253 KW - Aged; Aged KW - 80 and over; Cholecystectomy KW - methods; Female; Gallbladder Neoplasms KW - diagnostic imaging KW - surgery; Hepatectomy KW - methods; Humans; Image Processing KW - Computer-Assisted KW - methods; Jaundice KW - Obstructive KW - surgery; Liver Neoplasms KW - surgery; Middle Aged; Tomography KW - X-Ray Computed; User-Computer Interface N1 - 1868-6982 Owner: NLM N2 - The aim of this study was to evaluate the utility of an image display system for augmented reality in hepatobiliary surgery under laparotomy. An overlay display of organs, vessels, or tumor was obtained using a video see-through system as a display system developed at our institute. Registration between visceral organs and the surface-rendering image reconstructed by preoperative computed tomography (CT) was carried out with an optical location sensor. Using this system, we performed laparotomy for a patient with benign biliary stricture, a patient with gallbladder carcinoma, and a patient with hepatocellular carcinoma. The operative procedures performed consisted of choledochojejunostomy, right hepatectomy, and microwave coagulation therapy. All the operations were carried out safely using images of the tumor site, preserved organs, and resection plane overlaid onto the operative field images observed on the monitors. The position of each organ in the overlaid image closely corresponded with that of the actual organ. Intraoperative information generated from this system provided us with useful navigation. However, several problems such as registration error and lack of depth knowledge were noted. The image display system appeared to be useful in performing hepatobiliary surgery under laparotomy.
Further improvement of the system, with functions individualized for each operation and feedback from future clinical trials, will be essential. ER - TY - JOUR AU - Okamoto, Tomoyoshi AU - Onda, Shinji AU - Yasuda, Jungo AU - Yanaga, Katsuhiko AU - Suzuki, Naoki AU - Hattori, Asaki T1 - Navigation surgery using an augmented reality for pancreatectomy. JO - Digestive surgery Y1 - 2015 VL - 32 SP - 117 EP - 123 KW - Aged; Carcinoma KW - Pancreatic Ductal KW - surgery; Cystadenoma KW - surgery; Female; Humans; Imaging KW - Three-Dimensional KW - methods; Male; Middle Aged; Pancreatectomy KW - methods; Pancreatic Neoplasms KW - surgery; Surgery KW - Computer-Assisted KW - methods; Tomography KW - X-Ray Computed; Treatment Outcome; User-Computer Interface N1 - 1421-9883 Owner: NLM N2 - The aim of this study was to evaluate the utility of navigation surgery using augmented reality technology (AR-based NS) for pancreatectomy. The 3D reconstructed images from CT were created by segmentation. The initial registration was performed using an optical location sensor. The reconstructed images were superimposed onto the real organs in the monitor display. Of the 19 patients who had undergone hepatobiliary and pancreatic surgery using AR-based NS, the accuracy, visualization ability, and utility of our system were assessed in five cases of pancreatectomy. The position of each organ in the surface-rendering image corresponded closely to that of the actual organ. Reference to the display image allowed for safe dissection while preserving the adjacent vessels or organs. The locations of the lesions and the resection line on the targeted organ were overlaid on the operating field. The initial mean registration error was improved to approximately 5 mm by our refinements. However, several problems such as registration accuracy, portability, and cost still remain. AR-based NS contributed to accurate and effective surgical resection in pancreatectomy. The pancreas appears to be a suitable organ for further investigations. This technology is promising for improving surgical quality, training, and education. ER - TY - CONF AU - de Oliveira, Luciene Chagas AU - Andrade, Adriano O. AU - de Oliveira, Eduardo Chagas AU - Soares, Alcimar AU - Cardoso, Alexandre AU - Lamounier, Edgard T1 - Indoor navigation with mobile augmented reality and beacon technology for wheelchair users PB - IEEE Y1 - 2017 ER - TY - JOUR AU - Olivieri, Emidio AU - Barresi, Giacinto AU - Mattos, Leonardo S. T1 - BCI-based user training in surgical robotics. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2015 VL - 2015 SP - 4918 EP - 4921 KW - Adult; Brain-Computer Interfaces; Computer User Training KW - methods; Equipment Design; Feedback KW - Psychological; Humans; Male; Phantoms KW - Imaging; Robotic Surgical Procedures KW - education KW - instrumentation KW - methods; Software; User-Computer Interface N1 - 1557-170X Owner: NLM N2 - Human error is a critical risk in surgery, so an important aim of surgical robotic systems is to improve the performance and the safety of surgical operations. Such systems can be potentially enhanced by a brain-computer interface (BCI) able to monitor the user's mental focus and use this information to improve the level of safety of the procedures.
In order to evaluate this potential use of BCIs, this paper describes a novel framework for training the user to regulate his/her own mental state while performing surgery-like tasks using a robotic system. This self-regulation is based on augmented reality (AR) feedback representing the BCI-monitored mental state, which supports the user's effort to maintain a high level of mental focus during the task. A comparison between BCI-based training and training without a BCI highlighted a reduction of post-training trial times as a result of the enhanced training setup, without any loss in performance or in user experience. This finding allows the identification of further improvements and novel potential applications of this training and interaction paradigm. ER - TY - JOUR AU - Onda, Shinji AU - Okamoto, Tomoyoshi AU - Kanehira, Masaru AU - Fujioka, Shuichi AU - Suzuki, Naoki AU - Hattori, Asaki AU - Yanaga, Katsuhiko T1 - Short rigid scope and stereo-scope designed specifically for open abdominal navigation surgery: clinical application for hepatobiliary and pancreatic surgery. JO - Journal of hepato-biliary-pancreatic sciences Y1 - 2013 VL - 20 SP - 448 EP - 453 KW - Aged; Cohort Studies; Equipment Design; Equipment Safety; Female; Hepatectomy KW - methods; Humans; Imaging KW - Three-Dimensional; Laparoscopes; Laparotomy KW - instrumentation KW - methods; Liver Neoplasms KW - pathology KW - surgery; Male; Middle Aged; Pancreatectomy KW - methods; Pancreatic Neoplasms KW - surgery; Pancreaticoduodenectomy KW - methods; Prognosis; Retrospective Studies; Surgery KW - Computer-Assisted KW - methods; Tomography KW - X-Ray Computed KW - methods; Treatment Outcome N1 - 1868-6982 Owner: NLM N2 - We have previously reported the utility of an image display system using augmented reality (AR) technology in hepatobiliary surgery under laparotomy. Building on that work, we herein report a system using a novel short rigid scope and stereo-scope, both designed specifically for open abdominal navigation surgery, and their clinical application in hepatobiliary and pancreatic surgery. The 3D reconstructed images were obtained from preoperative computed tomography data. In our specialized operating room, after paired-point matching registration, the reconstructed images are overlaid onto the operative field images captured by the short rigid scopes. The scopes, which are compact and sterilizable, can be used in the operative field. The stereo-scope provides depth information. Eight patients underwent operations using this system, including hepatectomy in two, distal pancreatectomy in three, and pancreaticoduodenectomy in three patients. The stereo-scope was used in five patients. All eight operations were performed safely using the novel short rigid scopes, and stereo images were acquired in all five patients for whom the stereo-scope was used. The scopes were user friendly, and the intraoperative time requirement for our system was reduced compared with the conventional method. The novel short rigid scope and stereo-scope seem to be suitable for clinical use in open abdominal navigation surgery. In hepatobiliary and pancreatic surgery, our novel system may improve the safety, accuracy and efficiency of operations.
ER - TY - JOUR AU - Onda, Shinji AU - Okamoto, Tomoyoshi AU - Kanehira, Masaru AU - Suzuki, Fumitake AU - Ito, Ryusuke AU - Fujioka, Shuichi AU - Suzuki, Naoki AU - Hattori, Asaki AU - Yanaga, Katsuhiko T1 - Identification of inferior pancreaticoduodenal artery during pancreaticoduodenectomy using augmented reality-based navigation system. JO - Journal of hepato-biliary-pancreatic sciences Y1 - 2014 VL - 21 SP - 281 EP - 287 KW - Aged; Aged KW - 80 and over; Anastomosis KW - Surgical KW - methods; Bile Duct Neoplasms KW - blood supply KW - diagnostic imaging KW - surgery; Blood Loss KW - prevention & control; Duodenum KW - blood supply; Female; Follow-Up Studies; Humans; Imaging KW - Three-Dimensional; Male; Mesenteric Artery KW - Superior KW - surgery; Middle Aged; Operative Time; Pancreas KW - blood supply; Pancreatic Neoplasms KW - surgery; Pancreaticoduodenectomy KW - methods; Retrospective Studies; Surgery KW - Computer-Assisted KW - instrumentation; Tomography KW - X-Ray Computed KW - methods; Image-guided surgery; Superior mesenteric artery; Surgical navigation system N1 - 1868-6982 Owner: NLM N2 - In pancreaticoduodenectomy (PD), early ligation of the inferior pancreaticoduodenal artery (IPDA) before division of the efferent veins has been advocated to decrease blood loss caused by congestion of the pancreatic head to be resected. We herein report the utility of early identification of the IPDA using an augmented reality (AR)-based navigation system (NS). Seven nonconsecutive patients underwent PD using AR-based NS. After paired-point matching registration, the reconstructed image obtained from preoperative computed tomography (CT) was fused with a real-time operative field image and displayed on 3D monitors. The vascular reconstructed images, including the superior mesenteric artery, jejunal artery, and IPDA, were visualized to facilitate image-guided surgical procedures. We compared the operating time and intraoperative blood loss of six patients who successfully underwent identification of the IPDA using AR-based NS (group A) with those of nine patients who underwent early ligation of the IPDA without AR (group B) and 18 patients who underwent conventional PD (group C). The IPDA or the jejunal artery was rapidly identified and ligated in six patients. The mean operating time and intraoperative blood loss in group A were 415 min and 901 ml, respectively. There was no significant difference in operating time or intraoperative blood loss among the groups. The AR-based NS provided precise anatomical information, which allowed the surgeons to rapidly identify and perform early ligation of the IPDA in PD. ER - TY - JOUR AU - Ong, Ee Ping AU - Lee, Jimmy Addison AU - Cheng, Jun AU - Lee, Beng Hai AU - Xu, Guozhen AU - Laude, Augustinus AU - Teoh, Stephen AU - Lim, Tock Han AU - Wong, Damon W. K. AU - Liu, Jiang T1 - An augmented reality assistance platform for eye laser surgery. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2015 VL - 2015 SP - 4326 EP - 4329 KW - Algorithms; Humans; Laser Therapy; Macula Lutea; Ophthalmologic Surgical Procedures; Optic Disk; Surgery KW - Computer-Assisted N1 - 1557-170X Owner: NLM N2 - This paper presents a novel augmented reality assistance platform for eye laser surgery.
The aims of the proposed system are to assist eye doctors in pre-planning as well as to provide guidance and protection during laser surgery. We developed algorithms to automatically register multi-modal images, detect the macula and optic disc regions, and demarcate these as areas protected from laser treatment. The doctor will then be able to plan the laser treatment pre-surgery using the registered images and segmented regions. Thereafter, during live surgery, the system will automatically register and track the slit lamp video frames on the registered retina images, issue appropriate warnings when the laser is near protected areas, and disable the laser function when it points into the protected areas. The proposed system prototype can help doctors speed up laser surgery with confidence, without fear of unintentionally firing the laser into the protected areas. ER - TY - CONF AU - Ortega-Palacios, R. AU - Salgado-Ramirez, J. C. AU - Valdez-Hernandez, J. A. T1 - Gait Analysis System by Augmented Reality PB - IEEE Y1 - 2015/march ER - TY - JOUR AU - Ortiz-Catalan, Max AU - Guðmundsdóttir, Rannveig A. AU - Kristoffersen, Morten B. AU - Zepeda-Echavarria, Alejandra AU - Caine-Winterberger, Kerstin AU - Kulbacka-Ortiz, Katarzyna AU - Widehammar, Cathrine AU - Eriksson, Karin AU - Stockselius, Anita AU - Ragnö, Christina AU - Pihlar, Zdenka AU - Burger, Helena AU - Hermansson, Liselotte T1 - Phantom motor execution facilitated by machine learning and augmented reality as treatment for phantom limb pain: a single group, clinical trial in patients with chronic intractable phantom limb pain. JO - Lancet (London, England) Y1 - 2016 VL - 388 SP - 2885 EP - 2894 KW - Adult; Aged; Amines; Amputation KW - rehabilitation; Chronic Pain KW - drug therapy KW - therapy; Cyclohexanecarboxylic Acids; Exercise Therapy KW - methods; Gabapentin; Games KW - Recreational; Humans; Machine Learning; Middle Aged; Pain Measurement KW - statistics & numerical data; Phantom Limb KW - therapy; Slovenia; Sweden; Treatment Outcome; Upper Extremity KW - physiopathology KW - surgery; Virtual Reality Exposure Therapy; gamma-Aminobutyric Acid N1 - 1474-547X Owner: NLM N2 - Phantom limb pain is a debilitating condition for which no effective treatment has been found. We hypothesised that re-engagement of central and peripheral circuitry involved in motor execution could reduce phantom limb pain via competitive plasticity and reversal of cortical reorganisation. Patients with upper limb amputation and known chronic intractable phantom limb pain were recruited at three clinics in Sweden and one in Slovenia. Patients received 12 sessions of phantom motor execution using machine learning, augmented and virtual reality, and serious gaming. Changes in intensity, frequency, duration, quality, and intrusion of phantom limb pain were assessed by the use of the numeric rating scale, the pain rating index, the weighted pain distribution scale, and a study-specific frequency scale before each session and at follow-up interviews 1, 3, and 6 months after the last session. Changes in medication and prostheses were also monitored. Results are reported using descriptive statistics and analysed by non-parametric tests. The trial is registered at ClinicalTrials.gov, number NCT02281539. Between Sept 15, 2014, and April 10, 2015, 14 patients with intractable chronic phantom limb pain, for whom conventional treatments failed, were enrolled.
After 12 sessions, patients showed statistically and clinically significant improvements in all metrics of phantom limb pain. Phantom limb pain decreased from pre-treatment to the last treatment session by 47% (SD 39; absolute mean change 1·0 [0·8]; p=0·001) for weighted pain distribution, 32% (38; absolute mean change 1·6 [1·8]; p=0·007) for the numeric rating scale, and 51% (33; absolute mean change 9·6 [8·1]; p=0·0001) for the pain rating index. The numeric rating scale score for intrusion of phantom limb pain in activities of daily living and sleep was reduced by 43% (SD 37; absolute mean change 2·4 [2·3]; p=0·004) and 61% (39; absolute mean change 2·3 [1·8]; p=0·001), respectively. Two of four patients who were on medication reduced their intake by 81% (absolute reduction 1300 mg, gabapentin) and 33% (absolute reduction 75 mg, pregabalin). Improvements remained 6 months after the last treatment. Our findings suggest potential value in motor execution of the phantom limb as a treatment for phantom limb pain. Promotion of phantom motor execution aided by machine learning, augmented and virtual reality, and gaming is a non-invasive, non-pharmacological, and engaging treatment with no identified side-effects at present. Promobilia Foundation, VINNOVA, Jimmy Dahlstens Fond, PicoSolve, and Innovationskontor Väst. ER - TY - JOUR AU - Ortiz-Catalan, Max AU - Sander, Nichlas AU - Kristoffersen, Morten B. AU - Håkansson, Bo AU - Brånemark, Rickard T1 - Treatment of phantom limb pain (PLP) based on augmented reality and gaming controlled by myoelectric pattern recognition: a case study of a chronic PLP patient. JO - Frontiers in neuroscience Y1 - 2014 VL - 8 SP - 24 EP - 24 KW - augmented reality; electromyography; myoelectric control; neurorehabilitation; pattern recognition; phantom limb pain; virtual reality N1 - 1662-4548 Owner: NLM N2 - A variety of treatments have been historically used to alleviate phantom limb pain (PLP) with varying efficacy. Recently, virtual reality (VR) has been employed as a more sophisticated mirror therapy. Despite the advantages of VR over a conventional mirror, this approach has retained the use of the contralateral limb and is therefore restricted to unilateral amputees. Moreover, this strategy disregards the actual effort made by the patient to produce phantom motions. In this work, we investigate a treatment in which the virtual limb responds directly to myoelectric activity at the stump, while the illusion of a restored limb is enhanced through augmented reality (AR). Further, phantom motions are facilitated and encouraged through gaming. The proposed set of technologies was administered to a chronic PLP patient who has shown resistance to a variety of treatments (including mirror therapy) for 48 years. Individual and simultaneous phantom movements were predicted using myoelectric pattern recognition and were then used as input for VR and AR environments, as well as for a racing game. The sustained level of pain reported by the patient was gradually reduced to complete pain-free periods. The phantom posture initially reported as a strongly closed fist was gradually relaxed, interestingly resembling the neutral posture displayed by the virtual limb. The patient acquired the ability to freely move his phantom limb, and a telescopic effect was observed where the position of the phantom hand was restored to the anatomically correct distance. 
More importantly, the effect of the interventions was positively and noticeably perceived by the patient and his relatives. Despite the limitation of a single case study, the successful results of the proposed system in a patient for whom other medical and non-medical treatments had been ineffective justify and motivate further investigation in a wider study. ER - TY - JOUR AU - Pachoulakis, Ioannis AU - Xilourgos, Nikolaos AU - Papadopoulos, Nikolaos AU - Analyti, Anastasia T1 - A Kinect-Based Physiotherapy and Assessment Platform for Parkinson's Disease Patients. JO - Journal of medical engineering Y1 - 2016 VL - 2016 SP - 9413642 EP - 9413642 N1 - 2314-5137 Owner: NLM N2 - We report on a Kinect-based, augmented reality, real-time physiotherapy platform tailored to Parkinson's disease (PD) patients. The platform employs a Kinect sensor to extract real-time 3D skeletal data (joint information) from a patient facing the sensor (at 30 frames per second). In addition, a small collection of exercises practiced in traditional physiotherapy for PD patients has been implemented in the Unity 3D game engine. Each exercise employs linear or circular movement patterns and places only lightweight processing demands on real-time computation. During an exercise, trainer instruction demonstrates correct execution, and Kinect-provided 3D joint data are fed to the game engine and compared to exercise-specific control routines to assess proper posture and body control in real time. When an exercise is complete, performance metrics appropriate for that exercise are computed and displayed on screen to enable the attending physiotherapist to fine-tune the exercise to the abilities/needs of an individual patient as well as to provide performance feedback to the patient. The platform can operate in a physiotherapist's office and, following appropriate validation, in a home environment. Finally, exercises can be parameterized meaningfully, depending on the intended purpose (motor assessment versus plain exercise at home). ER - TY - JOUR AU - Pallavicini, Federica AU - Serino, Silvia AU - Cipresso, Pietro AU - Pedroli, Elisa AU - Chicchi Giglioli, Irene Alice AU - Chirico, Alice AU - Manzoni, Gian Mauro AU - Castelnuovo, Gianluca AU - Molinari, Enrico AU - Riva, Giuseppe T1 - Testing Augmented Reality for Cue Exposure in Obese Patients: An Exploratory Study. JO - Cyberpsychology, behavior and social networking Y1 - 2016 VL - 19 SP - 107 EP - 114 KW - Adult; Arousal KW - physiology; Body Mass Index; Bulimia KW - psychology; Case-Control Studies; Cues; Emotions; Energy Intake; Female; Food; Heart Rate; Humans; Male; Middle Aged; Obesity KW - psychology KW - therapy; Photic Stimulation KW - methods; Reaction Time; Reality Testing; Virtual Reality Exposure Therapy KW - methods N1 - 2152-2723 Owner: NLM N2 - Binge eating is one of the key behaviors in relation to the etiology and severity of obesity. Cue exposure with response prevention consists of exposing patients to binge foods while actual eating is not allowed. Augmented reality (AR) has the potential to change the way cue exposure is administered, but very few prior studies have been conducted so far. Starting from these premises, this study aimed to (a) investigate whether AR foods elicit emotional responses comparable to those produced by the real stimuli, (b) study differences between obese and control participants in terms of emotional responses to food, and (c) compare emotional responses to different categories of foods.
To reach these goals, we assessed the emotional responses to high-calorie (savory and sweet) and low-calorie food stimuli, presented through different exposure conditions (real, photographic, and AR), in 15 obese participants (age, 44.6 ± 13 years; body mass index [BMI], 44.2 ± 8.1) and 15 control participants (age, 43.7 ± 12.8 years; BMI, 21.2 ± 1.4). The State-Trait Anxiety Inventory was used for the assessment of state anxiety, and it was administered at the beginning and after the exposure to foods, along with the Visual Analog Scale (VAS) for Hunger and Happiness. To assess the perceived pleasantness, the VAS for Palatability was administered after the exposure to food stimuli. Heart rate, skin conductance response, and facial corrugator supercilii muscle activation were recorded. Although preliminary, the results showed that (a) AR food stimuli were perceived to be as palatable as real stimuli, and they also triggered a similar arousal response; (b) obese individuals showed lower happiness after the exposure to food compared to control participants, with regard to both psychological and physiological responses; and (c) high-calorie savory (vs. low-calorie) food stimuli were perceived by all the participants to be more palatable, and they triggered a greater arousal response. ER - TY - CONF AU - Palma, Santiago Rodriguez AU - Becker, B. C. AU - Lobes, L. A. AU - Riviere, C. N. T1 - Comparative evaluation of monocular augmented-reality display for surgical microscopes PB - IEEE Y1 - 2012/august ER - TY - JOUR AU - Parrini, S. AU - Cutolo, F. AU - Freschi, C. AU - Ferrari, M. AU - Ferrari, V. T1 - Augmented reality system for freehand guide of magnetic endovascular devices. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2014 VL - 2014 SP - 490 EP - 493 KW - Angioscopes; Blood Vessels; Humans; Magnets; Robotics KW - instrumentation KW - methods; Video-Assisted Surgery KW - methods N1 - 1557-170X Owner: NLM N2 - Magnetic guidance of endovascular devices or magnetized therapeutic microparticles to a specific target in the arterial tree is increasingly studied, since it could improve treatment efficacy and reduce side effects. Most proposed systems use external permanent magnets attached to robotic manipulators or magnetic resonance imaging (MRI) systems to guide internal carriers to the region of treatment. We aim to simplify this type of procedure, avoiding or reducing the need for robotic arms and MRI systems in the surgical scenario. For this reason, we investigated the use of a wearable stereoscopic video see-through augmented reality system to show the hidden vessel to the surgeon; in this way, the surgeon is able to freely move the external magnet, following the displayed path, to lead the endovascular magnetic device towards the desired position. In this preliminary study, we investigated the feasibility of such an approach by attempting to guide a magnetic capsule inside a vascular mannequin. The high rate of success and the positive evaluation provided by the operators represent a good starting point for further development of the system. ER - TY - JOUR AU - Pauly, Olivier AU - Diotte, Benoit AU - Fallavollita, Pascal AU - Weidert, Simon AU - Euler, Ekkehard AU - Navab, Nassir T1 - Machine learning-based augmented reality for improved surgical scene understanding.
JO - Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society Y1 - 2015 VL - 41 SP - 55 EP - 60 KW - Algorithms; Equipment Design; Equipment Failure Analysis; Humans; Image Enhancement KW - instrumentation KW - methods; Image Interpretation KW - Computer-Assisted KW - methods; Imaging KW - Three-Dimensional KW - methods; Machine Learning; Multimodal Imaging KW - methods; Pattern Recognition KW - Automated KW - methods; Reproducibility of Results; Sensitivity and Specificity; Surgery KW - methods; Tomography KW - X-Ray Computed KW - methods; User-Computer Interface; Video Recording KW - methods N1 - 1879-0771 Owner: NLM N2 - In orthopedic and trauma surgery, AR technology can support surgeons in the challenging task of understanding the spatial relationships between the anatomy, the implants and their tools. In this context, we propose a novel augmented visualization of the surgical scene that intelligently mixes the different sources of information provided by a mobile C-arm combined with a Kinect RGB-Depth sensor. To this end, we introduce a learning-based paradigm that aims at (1) identifying the relevant objects or anatomy in both Kinect and X-ray data, and (2) creating an object-specific pixel-wise alpha map that permits relevance-based fusion of the video and the X-ray images within one single view. In 12 simulated surgeries, we show very promising results aiming at providing surgeons with a better surgical scene understanding as well as improved depth perception. ER - TY - JOUR AU - Pessaux, Patrick AU - Diana, Michele AU - Soler, Luc AU - Piardi, Tullio AU - Mutter, Didier AU - Marescaux, Jacques T1 - Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy. JO - Langenbeck's archives of surgery Y1 - 2015 VL - 400 SP - 381 EP - 385 KW - Cybernetics; Female; Hepatectomy KW - methods; Humans; Imaging KW - Three-Dimensional; Liver Neoplasms KW - pathology KW - surgery; Male; Radiographic Image Interpretation KW - Computer-Assisted; Robotics; Software; Surgery KW - Computer-Assisted KW - methods; Tomography KW - X-Ray Computed N1 - 1435-2451 Owner: NLM N2 - Augmented reality (AR) in surgery consists of the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative workup and real-time patient images in order to visualize unapparent anatomical details. The 3D model could be used for preoperative planning of the procedure. The potential of AR navigation as a tool to improve the safety of surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained from a thoracoabdominal CT scan using custom software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed onto the operative field. A computer scientist manually registered virtual and real images in real time using a video mixer (MX 70; Panasonic, Secaucus, NJ). Two fully robotic AR-assisted segment V segmentectomies and one segment VI segmentectomy were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. The total time required to obtain AR was 8 min (range 6-10 min).
Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful, without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy. ER - TY - JOUR AU - Pessaux, Patrick AU - Diana, Michele AU - Soler, Luc AU - Piardi, Tullio AU - Mutter, Didier AU - Marescaux, Jacques T1 - Robotic duodenopancreatectomy assisted with augmented reality and real-time fluorescence guidance. JO - Surgical endoscopy Y1 - 2014 VL - 28 SP - 2493 EP - 2498 KW - Aged; Coloring Agents; Feedback; Female; Fluorescence; Humans; Imaging KW - Three-Dimensional; Indocyanine Green; Pancreaticoduodenectomy KW - methods; Robotic Surgical Procedures; Surgery KW - Computer-Assisted KW - methods; Video-Assisted Surgery N1 - 1432-2218 Owner: NLM N2 - The minimally invasive surgeon cannot use the sense of touch to orient surgical resection or identify important structures (vessels, tumors, etc.) by manual palpation. Robotic research has provided technology to facilitate laparoscopic surgery; however, robotics has yet to solve the lack of tactile feedback inherent to keyhole surgery. Misinterpretation of the vascular supply and tumor location may increase the risk of intraoperative bleeding and lead to dissection with positive resection margins. Augmented reality (AR) consists of the fusion of synthetic computer-generated images (three-dimensional virtual model) obtained from medical imaging preoperative work-up and real-time patient images, with the aim of visualizing unapparent anatomical details. In this article, we review the most common modalities used to achieve surgical navigation through AR, along with a report of a case of robotic duodenopancreatectomy performed with AR guidance complemented by fluorescence guidance. The presentation of this complex and high-technology case of robotic duodenopancreatectomy, and the overview of the current technology that has made it possible to use AR in the operating room, highlight the need for further evolution and the windows of opportunity to create a new paradigm in surgical practice. ER - TY - JOUR AU - Platts, David Gerard AU - Humphries, Julie AU - Burstow, Darryl John AU - Anderson, Bonita AU - Forshaw, Tony AU - Scalia, Gregory M. T1 - The use of computerised simulators for training of transthoracic and transoesophageal echocardiography. The future of echocardiographic training? JO - Heart, lung & circulation Y1 - 2012/05 VL - 21 SP - 267 EP - 274 KW - Australia; Clinical Competence; Computer Simulation; Computer-Assisted Instruction KW - instrumentation KW - methods; Echocardiography KW - methods; Education; Educational Status; Health Knowledge KW - Attitudes KW - Practice; Humans; Manikins; Vena Cava KW - Superior KW - diagnostic imaging N1 - 1444-2892 Owner: NLM N2 - Echocardiography is the commonest form of non-invasive cardiac imaging but, due to its methodology, it is operator dependent. Numerous advances in technology have resulted in the development of interactive programs and simulators to teach trainees the skills to perform particular procedures, including transthoracic and transoesophageal echocardiography.
Forty trainee sonographers assessed a computerised mannequin echocardiographic simulator and were taught how to obtain an apical two-chamber (A2C) view and image the superior vena cava (SVC). Forty-two attendees at a TOE simulator workshop assessed its utility and commented on perceived future use, using defined criteria. One hundred percent and 88% of sonographers found the simulator useful for obtaining the SVC and A2C views, respectively. All users found it easy to use, and the majority found it helped with image acquisition and interpretation. All attendees of the TOE training day found the simulator easy to use and reported that the augmented reality graphics benefited image acquisition. Ninety percent felt that it was realistic. This study revealed that both trainee sonographers and TOE proceduralists found the simulation process realistic, helpful for image acquisition, and beneficial for the assessment of spatial relationships. Echocardiographic simulators may play an important role in the future training of echocardiographic skills. ER - TY - JOUR AU - Ponce, Brent A. AU - Brabston, Eugene W. AU - Zu, Shin AU - Watson, Shawna L. AU - Baker, Dustin AU - Winn, Dennis AU - Guthrie, Barton L. AU - Shenai, Mahesh B. T1 - Telemedicine with mobile devices and augmented reality for early postoperative care. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2016 VL - 2016 SP - 4411 EP - 4414 KW - Adult; Aged; Aged KW - 80 and over; Cell Phone; Female; Humans; Male; Middle Aged; Mobile Applications; Neurosurgery; Orthopedics; Postoperative Care KW - instrumentation; Surveys and Questionnaires; Telemedicine KW - instrumentation; User-Computer Interface; Young Adult N1 - 1557-170X Owner: NLM N2 - Advanced features are being added to telemedicine paradigms to enhance usability and usefulness. Virtual Interactive Presence (VIP) is a technology that allows a surgeon and patient to interact in a "merged reality" space, to facilitate verbal, visual, and manual interaction. In this clinical study, a mobile VIP iOS application was introduced into routine post-operative orthopedic and neurosurgical care. Survey responses endorse the usefulness of this tool: the virtual interaction provides needed follow-up in instances where in-person follow-up may be limited, and enhances the subjective patient experience. ER - TY - JOUR AU - Ponce, Brent A. AU - Jennings, Jonathan K. AU - Clay, Terry B. AU - May, Mathew B. AU - Huisingh, Carrie AU - Sheppard, Evan D. T1 - Telementoring: use of augmented reality in orthopaedic education: AAOS exhibit selection. JO - The Journal of bone and joint surgery. American volume Y1 - 2014/05 VL - 96 SP - e84 EP - e84 KW - Adult; Attitude of Health Personnel; Education KW - Distance KW - methods; Education KW - Medical KW - Graduate KW - methods; Equipment Design; Female; Humans; Internet; Joint Instability KW - surgery; Male; Medical Staff KW - Hospital; Mentors; Middle Aged; Operative Time; Orthopedics KW - education; Pilot Projects; Rotator Cuff KW - surgery; Surveys and Questionnaires; Teaching KW - methods; User-Computer Interface; Young Adult N1 - 1535-1386 Owner: NLM N2 - Virtual interactive presence (VIP) is a new technology that allows an individual to deliver real-time virtual assistance to another geographically remote individual via a standard Internet connection.
The objectives of this pilot study were to evaluate the efficiency and performance of a VIP system implemented in an operating room setting, determine the potential utility of the system for guidance of surgical procedures, and assess the safety of the system. Following institutional review board approval, fifteen patients underwent arthroscopic shoulder procedures. Two VIP stations were used, one in the operating room and the other in an adjoining dictation room. The attending surgeon proctored operating resident surgeons from the dictation room until his physical presence was required in the operating room. Following each procedure, the attending surgeon, resident surgeons, and three surgical staff members completed a Likert-scale questionnaire regarding the educational utility, efficiency of use, and safety of the system. The operative time was also compared with historical data. Both attending and resident surgeons assigned a favorable rating to the utility of the VIP to highlight anatomy and provide feedback to the resident (p > 0.05 for the difference). Both groups agreed that the system was easy to use and that safety was not compromised (p > 0.05). The majority of resident and attending surgeon responses indicated no perceptible lag between motions (95% and 100%, respectively; p > 0.99) and no interference of the VIP system with the surgical procedure (85% and 100%, respectively; p = 0.24). The mean operative times with and without VIP use did not differ significantly for rotator cuff repair (p = 0.90) or for treatment of instability (p = 0.57). This pilot study revealed that the VIP technology was efficient, safe, and effective as a teaching tool. The attending and resident surgeons agreed that training was enhanced, and this occurred without increasing operative times. Furthermore, the attending surgeon believed that this technology improved teaching effectiveness. These results are promising, and further objective quantification is warranted. ER - TY - JOUR AU - Ponce, Brent A. AU - Menendez, Mariano E. AU - Oladeji, Lasun O. AU - Fryberger, Charles T. AU - Dantuluri, Phani K. T1 - Emerging technology in surgical education: combining real-time augmented reality and wearable computing devices. JO - Orthopedics Y1 - 2014 VL - 37 SP - 751 EP - 757 KW - Aged; Cooperative Behavior; Humans; Male; Microsurgery KW - methods; Radiography; Shoulder Impingement Syndrome KW - diagnostic imaging KW - surgery; Telemedicine; User-Computer Interface N1 - 1938-2367 Owner: NLM N2 - The authors describe the first surgical case adopting the combination of real-time augmented reality and wearable computing devices such as Google Glass (Google Inc, Mountain View, California). A 66-year-old man presented to their institution for a total shoulder replacement after 5 years of progressive right shoulder pain and decreased range of motion. Throughout the surgical procedure, Google Glass was integrated with the Virtual Interactive Presence and Augmented Reality system (University of Alabama at Birmingham, Birmingham, Alabama), enabling the local surgeon to interact with the remote surgeon within the local surgical field. Surgery was well tolerated by the patient and early surgical results were encouraging, with an improvement of shoulder pain and greater range of motion. The combination of real-time augmented reality and wearable computing devices such as Google Glass holds much promise in the field of surgery. ER - TY - JOUR AU - Pothier, David D. AU - Hughes, Cian AU - Dillon, Wanda AU - Ranalli, Paul J. 
AU - Rutka, John A. T1 - The use of real-time image stabilization and augmented reality eyewear in the treatment of oscillopsia. JO - Otolaryngology--head and neck surgery : official journal of American Academy of Otolaryngology-Head and Neck Surgery Y1 - 2012 VL - 146 SP - 966 EP - 971 KW - Aged; Cohort Studies; Eyeglasses; Female; Head Movements KW - physiology; Humans; Image Processing KW - Computer-Assisted KW - instrumentation; Male; Middle Aged; Reflex KW - Vestibulo-Ocular KW - physiology; User-Computer Interface; Vestibular Diseases KW - complications KW - physiopathology KW - therapy; Vision Disorders KW - etiology KW - therapy; Visual Acuity KW - physiology N1 - 1097-6817 Owner: NLM N2 - To establish whether the symptom of oscillopsia in patients with bilateral vestibular loss (BVL) can be reduced as dynamic visual acuity (DVA), the reduction in visual acuity during head movement, is improved by real-time image stabilization delivered by augmented reality eyewear. Tertiary multidisciplinary neurotology clinic. Prospective experimental study. Immersive virtual reality glasses were used in combination with a compact digital video camera. A software algorithm was developed that used a center-weighted Lucas-Kanade optical flow method to stabilize video in real time. Six patients with BVL were tested for changes in DVA using the eyewear. The ability to read a Snellen chart during a 2-Hz oscillating head rotation DVA test was measured. For combined scores of vertical and horizontal head rotations, the mean number of lines readable at rest was 7.86, which dropped to 2.77 with head movement (a combination of vertical and horizontal perturbations). This increased to a mean of 6.14 lines when the image stabilization software was activated. This difference was statistically significant (P < .001). This is the first successful attempt to improve dynamic visual acuity in patients with bilateral vestibular loss. Recent hardware upgrades are promising in improving these results even further. ER - TY - CONF AU - Potter, Michael AU - Bensch, Alexander AU - Dawson-Elli, Alexander AU - Linte, Cristian A. AU - Mello-Thoms, Claudia R. AU - Kupinski, Matthew A. T1 - Augmenting real-time video with virtual models for enhanced visualization for simulation, teaching, training and guidance PB - SPIE Y1 - 2015/march ER - TY - JOUR AU - Profeta, Andrea Corrado AU - Schilling, Clare AU - McGurk, Mark T1 - Augmented reality visualization in head and neck surgery: an overview of recent findings in sentinel node biopsy and future perspectives. JO - The British journal of oral & maxillofacial surgery Y1 - 2016 VL - 54 SP - 694 EP - 696 KW - Head; Head and Neck Neoplasms KW - surgery; Humans; Lymph Nodes; Melanoma; Neck; Sentinel Lymph Node Biopsy; Skin Neoplasms; Virtual Reality; Augmented reality; fhSPECT; head and neck surgery N1 - 1532-1940 Owner: NLM N2 - "Augmented reality visualisation", in which the site of an operation is merged with computer-generated graphics, provides a way to view the relevant part of the patient's body in better detail. We describe its role in relation to sentinel lymph node biopsy (SLNB), current advancements, and future directions in the excision of tumours in early-stage cancers of the head and neck. ER - TY - JOUR AU - Puerto-Souza, Gustavo A. AU - Cadeddu, Jeffrey A. AU - Mariottini, Gian-Luca T1 - Toward long-term and accurate augmented-reality for monocular endoscopic videos.
JO - IEEE transactions on bio-medical engineering Y1 - 2014 VL - 61 SP - 2609 EP - 2620 KW - Humans; Imaging KW - Three-Dimensional KW - methods; Kidney KW - surgery; Kidney Neoplasms KW - surgery; Surgery KW - Computer-Assisted KW - methods; User-Computer Interface; Video-Assisted Surgery KW - methods N1 - 1558-2531 Owner: NLM N2 - By overlaying preoperative radiological 3-D models onto the intraoperative laparoscopic video, augmented-reality (AR) displays promise to increase surgeons' visual awareness of high-risk surgical targets (e.g., the location of a tumor). Existing AR surgical systems lack robustness and accuracy because of the many challenges in endoscopic imagery, such as frequent changes in illumination, rapid camera motions, prolonged organ occlusions, and tissue deformations. The frequent occurrence of these events can cause the loss of image (anchor) points, and thus, the loss of the AR display after a few frames. In this paper, we present the design of a new AR system that represents a first step toward a long-term and accurate augmented surgical display for monocular (calibrated and uncalibrated) endoscopic videos. Our system uses correspondence-search methods, and a new weighted sliding-window registration approach, to automatically and accurately recover the overlay by predicting the image locations of a high number of anchor points that were lost after a sudden image change. The effectiveness of the proposed system in maintaining a long-term (over 2 min) and accurate (less than 1 mm) augmentation has been documented over a set of real partial-nephrectomy laparoscopic videos. ER - TY - JOUR AU - Qu, Miao AU - Hou, Yikang AU - Xu, Yourong AU - Shen, Congcong AU - Zhu, Ming AU - Xie, Le AU - Wang, Hao AU - Zhang, Yan AU - Chai, Gang T1 - Precise positioning of an intraoral distractor using augmented reality in patients with hemifacial microsomia. JO - Journal of cranio-maxillo-facial surgery : official publication of the European Association for Cranio-Maxillo-Facial Surgery Y1 - 2015 VL - 43 SP - 106 EP - 112 KW - Anatomic Landmarks KW - diagnostic imaging; Cephalometry KW - methods; Computer Graphics; Computer-Aided Design; Fiducial Markers; Goldenhar Syndrome KW - surgery; Humans; Image Processing KW - Computer-Assisted KW - methods; Imaging KW - Three-Dimensional KW - methods; Infant; Internal Fixators; Mandible KW - diagnostic imaging KW - surgery; Mandibular Condyle KW - diagnostic imaging; Occlusal Splints; Osteogenesis KW - Distraction KW - instrumentation; Osteotomy KW - instrumentation; Random Allocation; Surgery KW - methods; Tomography KW - X-Ray Computed KW - methods; User-Computer Interface; Augmented reality; Hemifacial microsomia; Intraoral distractor; Mandibular distraction osteogenesis; Three-dimensional real time imaging N1 - 1878-4119 Owner: NLM N2 - Through three-dimensional real-time imaging, augmented reality (AR) can provide an overlay of the anatomical structure, or visual cues for specific landmarks. In this study, an AR Toolkit was used in distraction osteogenesis for hemifacial microsomia to define the mandibular osteotomy line and assist with intraoral distractor placement. Twenty patients with hemifacial microsomia were studied and randomly assigned to experimental and control groups. Pre-operative computed tomography was used in both groups, whereas AR was used in the experimental group. Afterwards, pre- and post-operative computed tomographic scans of both groups were superimposed, and several measurements were made and analysed.
Both the conventional method and the AR technique achieved proper positioning of the osteotomy planes, although AR was more accurate. The difference in average vertical distance from the coronoid and condyle process to the pre- and post-operative cutting planes was significant (p < 0.01) between the two groups, whereas no significant difference (p > 0.05) was observed in the average angle between the two planes. The difference in deviations between the intersection points of the overlaid mandible across two cutting planes was also significant (p < 0.01). This study reports on an efficient approach for intraoperative guidance of distraction osteogenesis. Augmented reality tools such as the AR Toolkit may be helpful for precise positioning of intraoral distractors in patients with hemifacial microsomia in craniofacial surgery. ER - TY - JOUR AU - Racadio, John M. AU - Nachabe, Rami AU - Homan, Robert AU - Schierling, Ross AU - Racadio, Judy M. AU - Babić, Draženko T1 - Augmented Reality on a C-Arm System: A Preclinical Assessment for Percutaneous Needle Localization. JO - Radiology Y1 - 2016 VL - 281 SP - 249 EP - 255 KW - Animals; Cone-Beam Computed Tomography; Fluoroscopy; Models KW - Animal; Needles; Paraspinal Muscles KW - diagnostic imaging; Radiation Dosage; Radiography KW - Interventional; Swine N1 - 1527-1315 Owner: NLM N2 - Purpose To compare the navigational accuracy and radiation dose during needle localization of targets for augmented reality (AR) with and without motion compensation (MC) versus those for cone-beam computed tomography (CT) with real-time fluoroscopy navigation in a pig model. Materials and Methods This study was approved by the Institutional Animal Care and Use Committee. Three operators each localized 15 targets (bone fragments) approximately 7 cm deep in the paraspinal muscles of nine Yorkshire pigs by using each of the three modalities (AR with and without MC and cone-beam CT with fluoroscopy). Target depth, accuracy (distance between needle tip and target), and radiation dose (dose-area product [DAP]) were recorded for each procedure. Correlation between accuracy and depth of target was assessed by using the Pearson correlation coefficient. Two-way analysis of variance was used for differentiating accuracy and DAPs across navigation techniques and operator backgrounds. Results There was no correlation between depth of target and accuracy. There was no significant difference in accuracy between modalities (mean distance, 3.0 mm ± 1.9 [standard deviation] for cone-beam CT with fluoroscopy, 2.5 mm ± 2.0 for AR, and 3.2 mm ± 2.7 for AR with MC [P = .33]). There was, however, a significant difference in fluoroscopy radiation dose (10.4 Gy·cm² ± 10.6 for cone-beam CT fluoroscopy, 2.3 Gy·cm² ± 2.4 for AR, and 3.3 Gy·cm² ± 4.6 for AR with MC [P < .05]) and therefore in total procedural radiation dose (20.5 Gy·cm² ± 13.4 for cone-beam CT fluoroscopy, 12.6 Gy·cm² ± 5.3 for AR, 13.6 Gy·cm² ± 7.4 for AR with MC [P < .05]). Conclusion Use of an AR C-arm system reduces radiation dose while maintaining navigational accuracy compared with cone-beam CT fluoroscopy during image-guided percutaneous needle placement in a pig model. ER - TY - JOUR AU - Rashid, Zulqarnain AU - Pous, Rafael AU - Norrie, Christopher S. T1 - An independent shopping experience for wheelchair users through augmented reality and RFID.
JO - Assistive technology : the official journal of RESNA Y1 - 2017 SP - 1 EP - 10 KW - accessibility; augmented reality (AR); human-centered computing; radio frequency identification (RFID); wheelchair users N1 - 1949-3614 Owner: NLM N2 - People with physical and mobility impairments continue to struggle to attain independence in the performance of routine activities and tasks. For example, browsing in a store and interacting with products located beyond an arm's length may be impossible without the enabling intervention of a human assistant. This research article describes a study undertaken to design, develop, and evaluate potential interaction methods for motor-impaired individuals, specifically those who use wheelchairs. Our study includes a user-centered approach, and a categorization of wheelchair users based upon the severity of their disability and their individual needs. We designed and developed access solutions that utilize radio frequency identification (RFID), augmented reality (AR), and touchscreen technologies in order to help people who use wheelchairs to carry out certain tasks autonomously. In this way, they have been empowered to go shopping independently, free from reliance upon the assistance of others. A total of 18 wheelchair users participated in the completed study. ER - TY - JOUR AU - Reichard, Daniel AU - Bodenstedt, Sebastian AU - Suwelack, Stefan AU - Mayer, Benjamin AU - Preukschas, Anas AU - Wagner, Martin AU - Kenngott, Hannes AU - Müller-Stich, Beat AU - Dillmann, Rüdiger AU - Speidel, Stefanie T1 - Intraoperative on-the-fly organ-mosaicking for laparoscopic surgery. JO - Journal of medical imaging (Bellingham, Wash.) Y1 - 2015 VL - 2 SP - 045001 EP - 045001 KW - endoscopic image processing; quantitative endoscopy; simultaneous localization and mapping; stitching; surgical vision; visualization N1 - 2329-4302 Owner: NLM N2 - The goal of computer-assisted surgery is to provide the surgeon with guidance during an intervention, e.g., using augmented reality. To display preoperative data, soft tissue deformations that occur during surgery have to be taken into consideration. Laparoscopic sensors, such as stereo endoscopes, can be used to create a three-dimensional reconstruction of stereo frames for registration. Due to the small field of view and the homogeneous structure of tissue, reconstructing just one frame, in general, will not provide enough detail to register preoperative data, since every frame only contains a part of an organ surface. A correct assignment to the preoperative model is possible only if the patch geometry can be unambiguously matched to a part of the preoperative surface. We propose and evaluate a system that combines multiple smaller reconstructions from different viewpoints to segment and reconstruct a large model of an organ. Using graphics processing unit-based methods, we achieved four frames per second. We evaluated the system with in silico, phantom, ex vivo, and in vivo (porcine) data, using different methods for estimating the camera pose (optical tracking, iterative closest point, and a combination). The results indicate that the proposed method is promising for on-the-fly organ reconstruction and registration. ER - TY - JOUR AU - Robb, Andrew AU - Kopper, Regis AU - Ambani, Ravi AU - Qayyum, Farda AU - Lind, David AU - Su, Li-Ming AU - Lok, Benjamin T1 - Leveraging virtual humans to effectively prepare learners for stressful interpersonal experiences. 
JO - IEEE transactions on visualization and computer graphics Y1 - 2013 VL - 19 SP - 662 EP - 670 KW - Adaptation KW - Psychological; Adult; Computer Graphics; Computer Simulation; Computer-Assisted Instruction KW - methods; Digital Rectal Examination KW - methods KW - psychology; Female; Humans; Imaging KW - Three-Dimensional KW - methods; Male; Models KW - Biological; Stress KW - Psychological KW - diagnosis KW - prevention & control KW - psychology; Students KW - Medical; User-Computer Interface; Virtual Reality Exposure Therapy N1 - 1941-0506 Owner: NLM N2 - Stressful interpersonal experiences can be difficult to prepare for. Virtual humans may be leveraged to allow learners to safely gain exposure to stressful interpersonal experiences. In this paper we present a between-subjects study exploring how the presence of a virtual human affected learners while practicing a stressful interpersonal experience. Twenty-six fourth-year medical students practiced performing a prostate exam on a prostate exam simulator. Participants in the experimental condition examined a simulator augmented with a virtual human. Other participants examined a standard unaugmented simulator. Participants' reactions were assessed using self-reported, behavioral, and physiological metrics. Participants who examined the virtual human experienced significantly more stress, measured via skin conductance. Participants' stress was correlated with previous experience performing real prostate exams; participants who had performed more real prostate exams were more likely to experience stress while examining the virtual human. Participants who examined the virtual human showed signs of greater engagement; non-stressed participants performed better prostate exams, while stressed participants treated the virtual human more realistically. Results indicated that stress evoked by virtual humans is linked to similar previous real-world stressful experiences, implying that learners' real-world experience must be taken into account when using virtual humans to prepare them for stressful interpersonal experiences. ER - TY - JOUR AU - Robu, Maria R. AU - Edwards, Philip AU - Ramalhinho, João AU - Thompson, Stephen AU - Davidson, Brian AU - Hawkes, David AU - Stoyanov, Danail AU - Clarkson, Matthew J. T1 - Intelligent viewpoint selection for efficient CT to video registration in laparoscopic liver surgery. JO - International journal of computer assisted radiology and surgery Y1 - 2017 VL - 12 SP - 1079 EP - 1088 KW - Artificial Intelligence; Hepatectomy KW - methods; Humans; Image Enhancement; Image Processing KW - Computer-Assisted KW - methods; Laparoscopy KW - methods; Liver Neoplasms KW - diagnostic imaging KW - surgery; Minimally Invasive Surgical Procedures KW - methods; Tomography KW - X-Ray Computed KW - methods; Gaussian curvature; Image guidance; Laparoscopic liver surgery; Rigid registration; View planning N1 - 1861-6429 Owner: NLM N2 - Minimally invasive surgery offers advantages over open surgery due to a shorter recovery time, less pain and trauma for the patient. However, inherent challenges such as lack of tactile feedback and difficulty in controlling bleeding lower the percentage of suitable cases. Augmented reality can show a better visualisation of sub-surface structures and tumour locations by fusing pre-operative CT data with real-time laparoscopic video. Such augmented reality visualisation requires a fast and robust video to CT registration that minimises interruption to the surgical procedure.
We propose to use view planning for efficient rigid registration. Given the trocar position, a set of camera positions is sampled and scored based on the corresponding liver surface properties. We implement a simulation framework to validate the proof of concept using a segmented CT model from a human patient. Furthermore, we apply the proposed method to clinical data acquired during a human liver resection. The first experiment motivates the viewpoint scoring strategy and identifies, in an intuitive visualisation, liver regions reliable for accurate registration. The second experiment shows wider basins of convergence for higher-scoring viewpoints. The third experiment shows that comparable registration performance can be achieved by merging at least two high-scoring views or four low-scoring views. Hence, the focus could change from the acquisition of a large liver surface to a small number of distinctive patches, thereby giving a more explicit protocol for surface reconstruction. We discuss the application of the proposed method to clinical data and show initial results. The proposed simulation framework shows promising results to motivate more research into a comprehensive view planning method for efficient registration in laparoscopic liver surgery. ER - TY - JOUR AU - Rochlen, Lauryn R. AU - Levine, Robert AU - Tait, Alan R. T1 - First-Person Point-of-View-Augmented Reality for Central Line Insertion Training: A Usability and Feasibility Study. JO - Simulation in healthcare : journal of the Society for Simulation in Healthcare Y1 - 2017 VL - 12 SP - 57 EP - 62 KW - Catheterization KW - Central Venous KW - standards; Clinical Competence KW - standards; Computer Simulation; Education KW - Medical KW - Graduate; Feasibility Studies; Humans; Internship and Residency; Manikins; Pilot Projects; Students N1 - 1559-713X Owner: NLM N2 - The value of simulation in medical education and procedural skills training is well recognized. Despite this, many mannequin-based trainers are limited by the inability of the trainee to view the internal anatomical structures. This study evaluates the usability and feasibility of a first-person point-of-view-augmented reality (AR) trainer for needle insertion as a component of central venous catheter placement. Forty subjects, including medical students and anesthesiology residents and faculty, participated. Augmented reality glasses were provided through which the relevant internal anatomical landmarks were projected. After a practice period, participants were asked to place the needle in the mannequin without the benefit of the AR-projected internal anatomy. The ability of the trainees to correctly place the needle was documented. Participants also completed a short survey describing their perceptions of the AR technology. Participants reported that the AR technology was realistic (77.5%) and that the ability to view the internal anatomy was helpful (92.5%). Furthermore, 85% and 82.1%, respectively, believed that the AR technology promoted learning and should be incorporated into medical training. The ability to successfully place the needle was similar between experienced and nonexperienced participants; however, less experienced participants were more likely to inadvertently puncture the carotid artery. Results of this pilot study demonstrated the usability and feasibility of AR technology as a potentially important adjunct to simulated medical skills training.
Further development and evaluation of this innovative technology under a variety of simulated medical training settings would be an important next step. ER - TY - JOUR AU - Rodas, Nicolas Loy AU - Padoy, Nicolas T1 - 3D global estimation and augmented reality visualization of intra-operative X-ray dose. JO - Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention Y1 - 2014 VL - 17 SP - 415 EP - 422 KW - Equipment Design; Equipment Failure Analysis; Humans; Image Interpretation KW - Computer-Assisted KW - instrumentation KW - methods; Imaging KW - Three-Dimensional KW - methods; Monitoring KW - Intraoperative KW - methods; Radiation Dosage; Radiometry KW - methods; Reproducibility of Results; Sensitivity and Specificity; Surgery KW - methods; Tomography KW - X-Ray Computed KW - methods; User-Computer Interface; Wireless Technology N1 - Owner: NLM N2 - The growing use of image-guided minimally-invasive surgical procedures is confronting clinicians and surgical staff with new radiation exposure risks from X-ray imaging devices. The accurate estimation of intra-operative radiation exposure can increase staff awareness of radiation exposure risks and enable the implementation of well-adapted safety measures. The current surgical practice of wearing a single dosimeter at chest level to measure radiation exposure does not provide a sufficiently accurate estimation of radiation absorption throughout the body. In this paper, we propose an approach that combines data from wireless dosimeters with the simulation of radiation propagation in order to provide a global radiation risk map in the area near the X-ray device. We use a multi-camera RGBD system to obtain a 3D point cloud reconstruction of the room. The positions of the table, C-arm and clinician are then used 1) to simulate the propagation of radiation in a real-world setup and 2) to overlay the resulting 3D risk-map onto the scene in an augmented reality manner. By using real-time wireless dosimeters in our system, we can both calibrate the simulation and validate its accuracy at specific locations in real-time. We demonstrate our system in an operating room equipped with a robotised X-ray imaging device and validate the radiation simulation on several X-ray acquisition setups. ER - TY - JOUR AU - Rose, Kelsey AU - Pedowitz, Robert T1 - Fundamental Arthroscopic Skill Differentiation With Virtual Reality Simulation JO - Arthroscopy: The Journal of Arthroscopic & Related Surgery Y1 - 2015/february VL - 31 IS - 2 SP - 299 EP - 305 ER - TY - JOUR AU - Rossano, Cathia AU - Terrier, Philippe T1 - Visually-guided gait training in paretic patients during the first rehabilitation phase: study protocol for a randomized controlled trial. 
JO - Trials Y1 - 2016 VL - 17 SP - 523 EP - 523 KW - Adaptation KW - Psychological; Brain Injuries KW - Traumatic KW - diagnosis KW - physiopathology KW - psychology KW - rehabilitation; Clinical Protocols; Cues; Disability Evaluation; Exercise Therapy KW - methods; Feedback KW - Psychological; Gait; Humans; Motor Activity; Paresis KW - rehabilitation; Postural Balance; Recovery of Function; Research Design; Spinal Cord Injuries KW - rehabilitation; Stroke KW - therapy; Stroke Rehabilitation KW - methods; Switzerland; Time Factors; Treatment Outcome; Visual Perception; Walk Test; Walking; Augmented reality; Randomized controlled trial; Rehabilitation; Spinal cord injury; Stroke; Traumatic brain injury N1 - 1745-6215 Owner: NLM N2 - After a lesion to the central nervous system, many patients suffer from reduced walking capability. In the first rehabilitation phase, repeated walking exercises facilitate muscular strength and stimulate brain plasticity and motor relearning. However, marked limping, an unsteady gait, and poor management of obstacle clearance may persist, which increases a patient's risk of falling. Gait training with augmented reality has been recommended to improve gait coordination. The objective of this study is to test whether a gait rehabilitation program using augmented reality is superior to a conventional treadmill training program of equivalent intensity. The GASPAR trial (Gait Adaptation for Stroke Patients with Augmented Reality) is a pragmatic, parallel-arm, single-center, nonblind, superiority randomized controlled trial in neurorehabilitation. The setting is a rehabilitation clinic in Switzerland. The planned number of participants is 70-100. The intervention uses instrumented treadmills equipped with projectors that display shapes on the walking surface. The principle is that patients must adapt their gait to the image that unfolds in front of them. Specific exercises for gait symmetry, coordination enhancement, and gait agility are provided. The program includes twenty 30-min sessions spanning 4 weeks. The comparator group receives standard treadmill training of a similar frequency and intensity. The main outcome to be measured in the trial is walking speed, which is assessed with the 2-min Walk Test. Moreover, gait parameters are recorded during the gait training sessions. Other outcomes are balance control (Berg Balance Scale) and the fear of falling (Falls Efficacy Scale). The statistical analyses will compare the baseline assessment for each participant (before the intervention) with a post-intervention assessment (taken a few days after the end of the program). Furthermore, a follow-up assessment will take place 3 months after discharge. The study results will provide new knowledge about recovery in neurological patients and will contribute to the design of better rehabilitation programs to accompany this process. The findings will also help health care funders to decide whether treadmills equipped with augmented reality capabilities are a worthwhile investment. ClinicalTrials.gov ID: NCT02808078, registered on 16 June 2016. ER - TY - JOUR AU - Rouzé, Simon AU - de Latour, Bertrand AU - Flécher, Erwan AU - Guihaire, Julien AU - Castro, Miguel AU - Corre, Romain AU - Haigron, Pascal AU - Verhoye, Jean-Philippe T1 - Small pulmonary nodule localization with cone beam computed tomography during video-assisted thoracic surgery: a feasibility study.
JO - Interactive cardiovascular and thoracic surgery Y1 - 2016 VL - 22 SP - 705 EP - 711 KW - Adult; Aged; Cone-Beam Computed Tomography KW - methods; Feasibility Studies; Female; Humans; Intraoperative Period; Lung Neoplasms KW - diagnosis KW - surgery; Male; Middle Aged; Multiple Pulmonary Nodules KW - surgery; Operative Time; Pneumonectomy KW - methods; Thoracic Surgery KW - Video-Assisted KW - methods; Cone beam computed tomography; Localization; Lung cancer; Thoracoscopy; Video-assisted thoracic surgery; Wedge N1 - 1569-9285 Owner: NLM N2 - To describe a non-invasive guidance procedure, using intraoperative cone beam computed tomography (CBCT) and augmented fluoroscopy to guide lung resection during video-assisted thoracic surgery (VATS). Patients with solitary or multiple lung nodules between 5 and 20 mm in size were included. Under general anaesthesia, a moderate pneumothorax allowing the CBCT acquisition was first performed. Then a segmentation of the lesion was performed on a 3D reconstruction. A projection of this 3D reconstruction was then integrated into the digital workspace and automatically registered into the fluoroscopic images, creating an augmented fluoroscopy. The procedure was continued under classic video-thoracoscopic vision taking account of the augmented fluoroscopy to locate the targeted nodule. Eight patients were included (mean age 61 ± 11.7 years): 7 patients had an isolated lesion and 1 patient had two lesions (mean size 13.2 ± 5.1 mm). Their mean depth to the pleura was 21.4 ± 10.7 mm. Four patients underwent a wedge resection associated with lymph node resection. Two patients had an initial wedge resection followed by a complementary lobectomy associated with lymph node resection (primary lung tumour). One patient had a wedge resection in the upper lobe and a lobectomy of the inferior lobe associated with lymph node resection. One patient underwent a conversion and a bilobectomy due to vascular injury. The mean global operating time was 100.6 ± 36.7 min. All the nodules were identified on the CBCT acquisitions. The segmentation of the lesion was performed in all cases. We were able to detect all the nodules and to successfully perform the resection in all cases owing to the augmented fluoroscopy. The mean fluoroscopic time was 134.2 ± 55.0 s. The mean imaging time, between the incision and the final nodule localization, was 11.8 ± 3.8 min. This paper is the first to describe a clinical application of CBCT performed during thoracic surgery. Associated with augmented reality, it offers significant progress in VATS resection of subpalpable lung nodules. This preliminary experience highlights the potential of the proposed CBCT approach to improve the perception of targeted small tumours during VATS. ER - TY - JOUR AU - Russell, Steven M. AU - Doménech-Sánchez, Antonio AU - de la Rica, Roberto T1 - Augmented Reality for Real-Time Detection and Interpretation of Colorimetric Signals Generated by Paper-Based Biosensors. JO - ACS sensors Y1 - 2017 VL - 2 SP - 848 EP - 853 KW - augmented reality; bacteria; gold; immunosensor; nanoparticles; pathogens; smartphone; water N1 - 2379-3694 Owner: NLM N2 - Colorimetric tests are becoming increasingly popular in point-of-need analyses due to the possibility of detecting the signal with the naked eye, which eliminates the utilization of bulky and costly instruments only available in laboratories.
However, colorimetric tests may be interpreted incorrectly by nonspecialists due to disparities in color perception or a lack of training. Here we solve this issue with a method that not only detects colorimetric signals but also interprets them so that the test outcome is understandable for anyone. It consists of an augmented reality (AR) app that uses a camera to detect the colored signals generated by a nanoparticle-based immunoassay, and that yields a warning symbol or message when the concentration of analyte is higher than a certain threshold. The proposed method detected the model analyte mouse IgG with a limit of detection of 0.3 μg mL⁻¹, which was comparable to the limit of detection afforded by classical densitometry performed with a nonportable device. When adapted to the detection of E. coli, the app always yielded a "hazard" warning symbol when the concentration of E. coli in the sample was above the infective dose (10 cfu mL⁻¹ or higher). The proposed method could help nonspecialists make a decision about drinking from a potentially contaminated water source by yielding an unambiguous message that is easily understood by anyone. The widespread availability of smartphones along with the inexpensive paper test that requires no enzymes to generate the signal makes the proposed assay promising for analyses in remote locations and developing countries. ER - TY - JOUR AU - Sadda, Praneeth AU - Azimi, Ehsan AU - Jallo, George AU - Doswell, Jayfus AU - Kazanzides, Peter T1 - Surgical navigation with a head-mounted tracking system and display. JO - Studies in health technology and informatics Y1 - 2013 VL - 184 SP - 363 EP - 369 KW - Equipment Design; Equipment Failure Analysis; Head Movements; Head Protective Devices; Humans; Man-Machine Systems; Surgery KW - Computer-Assisted KW - instrumentation; User-Computer Interface N1 - 0926-9630 Owner: NLM N2 - We present the design of a self-contained head-mounted surgical navigation system, which consists of an optical tracking system and an optical see-through head-mounted display (HMD). While the current prototype is bulky, we envision a more compact solution via the eventual integration of the tracking camera(s) into the HMD goggles. Rather than attempting to accurately overlay preoperative models onto the field of view, we adopted a simpler approach of displaying a small "picture-in-picture" virtual view in the HMD. We believe this approach will provide suitable assistance for some image-guided procedures, such as tumor resection, while improving the ergonomics by reducing the need for the surgeon to look away from the patient to view an external monitor. We report the results of initial experiments performed with this system, while preparing for a more clinically realistic study. ER - TY - JOUR AU - Schneider, Adrian AU - Pezold, Simon AU - Saner, Andreas AU - Ebbing, Jan AU - Wyler, Stephen AU - Rosenthal, Rachel AU - Cattin, Philippe C. T1 - Augmented reality assisted laparoscopic partial nephrectomy. JO - Medical image computing and computer-assisted intervention : MICCAI ...
International Conference on Medical Image Computing and Computer-Assisted Intervention Y1 - 2014 VL - 17 SP - 357 EP - 364 KW - Animals; Equipment Design; Equipment Failure Analysis; Image Interpretation KW - Computer-Assisted KW - instrumentation KW - methods; Kidney KW - pathology KW - surgery; Laparoscopes; Magnetics KW - instrumentation; Nephrectomy KW - instrumentation; Reproducibility of Results; Sensitivity and Specificity; Surgery KW - instrumentation; Swine; User-Computer Interface N1 - Owner: NLM N2 - Computer assisted navigation is a widely adopted technique in neurosurgery and orthopedics. However, it is rarely used for surgeries on abdominal organs. In this paper, we propose a novel, noninvasive method based on electromagnetic tracking to determine the pose of the kidney. As a clinical use case, we show a complete surgical navigation system for augmented reality assisted laparoscopic partial nephrectomy. Experiments were performed ex vivo on pig kidneys and the evaluation showed an excellent augmented reality alignment error of 2.1 mm ± 1.2 mm. ER - TY - JOUR AU - Schoob, Andreas AU - Kundrat, Dennis AU - Kleingrothe, Lukas AU - Kahrs, Lüder A. AU - Andreff, Nicolas AU - Ortmaier, Tobias T1 - Tissue surface information for intraoperative incision planning and focus adjustment in laser surgery. JO - International journal of computer assisted radiology and surgery Y1 - 2015 VL - 10 SP - 171 EP - 181 KW - Depth Perception; Endoscopy KW - methods; Humans; Laser Therapy KW - methods; Lasers; Surgery KW - Computer-Assisted KW - methods N1 - 1861-6429 Owner: NLM N2 - Introducing computational methods to laser surgery is an emerging field. Focusing on endoscopic laser interventions, a novel approach is presented to enhance intraoperative incision planning and laser focusing by means of tissue surface information obtained by stereoscopic vision. Tissue surface is estimated with stereo-based methods using nonparametric image transforms. Subsequently, laser-to-camera registration is obtained by ablating a pattern on tissue substitutes and performing a principal component analysis for precise laser axis estimation. Furthermore, a virtual laser view is computed utilizing trifocal transfer. Depth-based laser focus adaptation is integrated into a custom experimental laser setup in order to achieve optimal ablation morphology. Experimental validation is conducted on tissue substitutes and ex vivo animal tissue. Laser-to-camera registration gives an error between planning and ablation of less than 0.2 mm. As a result, the laser workspace can accurately be highlighted within the live views and incision planning can directly be performed. Experiments related to laser focus adaptation demonstrate that ablation geometry can be kept almost uniform within a depth range of 7.9 mm, whereas cutting quality significantly decreases when the laser is defocused. An automatic laser focus adjustment on tissue surfaces based on stereoscopic scene information is feasible and has the potential to become an effective methodology for optimal ablation. Laser-to-camera registration facilitates advanced surgical planning for prospective user interfaces and augmented reality extensions. ER - TY - JOUR AU - Shakur, Sophia F. AU - Luciano, Cristian J. AU - Kania, Patrick AU - Roitberg, Ben Z. AU - Banerjee, P. Pat AU - Slavin, Konstantin V. AU - Sorenson, Jeffrey AU - Charbel, Fady AU - Alaraj, Ali T1 - Usefulness of a Virtual Reality Percutaneous Trigeminal Rhizotomy Simulator in Neurosurgical Training.
JO - Neurosurgery Y1 - 2015 VL - 11 Suppl 3 SP - 420 EP - 425 KW - Clinical Competence; Computer Graphics; Computer Simulation; Contrast Media KW - administration & dosage; Fluoroscopy; Humans; Imaging KW - Three-Dimensional; Internship and Residency; Neurosurgery KW - education; Neurosurgical Procedures KW - education; Rhizotomy KW - education KW - methods; Trigeminal Nerve KW - surgery; User-Computer Interface N1 - 1524-4040 Owner: NLM N2 - Simulation-based training may be incorporated into neurosurgery in the future. To assess the usefulness of a novel haptics-based virtual reality percutaneous trigeminal rhizotomy simulator. A real-time augmented reality simulator for percutaneous trigeminal rhizotomy was developed using the ImmersiveTouch platform. Ninety-two neurosurgery residents tested the simulator at the American Association of Neurological Surgeons Top Gun 2014. Postgraduate year (PGY), number of fluoroscopy shots, the distance from the ideal entry point, and the distance from the ideal target were recorded by the system during each simulation session. Final performance score was calculated considering the number of fluoroscopy shots and distances from entry and target points (a lower score is better). The impact of PGY level on residents' performance was analyzed. Seventy-one residents provided their PGY-level and simulator performance data; 38% were senior residents and 62% were junior residents. The mean distance from the entry point (9.4 mm vs 12.6 mm, P = .01), the distance from the target (12.0 mm vs 15.2 mm, P = .16), and final score (31.1 vs 37.7, P = .02) were lower in senior than in junior residents. The mean number of fluoroscopy shots (9.8 vs 10.0, P = .88) was similar in these 2 groups. Linear regression analysis showed that increasing PGY level is significantly associated with a decreased distance from the ideal entry point (P = .001), a shorter distance from target (P = .05), a better final score (P = .007), but not number of fluoroscopy shots (P = .52). Because technical performance of percutaneous rhizotomy increases with training, we proposed that the skills in performing the procedure in our virtual reality model would also increase with PGY level, if our simulator models the actual procedure. Our results confirm this hypothesis and demonstrate construct validity. ER - TY - JOUR AU - Shao, Pengfei AU - Ding, Houzhu AU - Wang, Jinkun AU - Liu, Peng AU - Ling, Qiang AU - Chen, Jiayu AU - Xu, Junbin AU - Zhang, Shiwu AU - Xu, Ronald T1 - Designing a wearable navigation system for image-guided cancer resection surgery. JO - Annals of biomedical engineering Y1 - 2014 VL - 42 SP - 2228 EP - 2237 KW - Equipment Design; Image Processing KW - Computer-Assisted KW - instrumentation; Neoplasms KW - surgery; Software; Surgery KW - instrumentation; Wireless Technology N1 - 1573-9686 Owner: NLM N2 - A wearable surgical navigation system is developed for intraoperative imaging of surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of the following two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real-time and transmit the image to the host computer through a USB port.
In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor-simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are coincident with those acquired by a standard small animal imaging system, indicating the technical feasibility for intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system and the mobility of a wearable goggle. It can be potentially used by a surgeon to identify the residual tumor foci and reduce the risk of recurrent diseases without interfering with the regular resection procedure. ER - TY - JOUR AU - Sheehan, Florence H. AU - Otto, Catherine M. AU - Freeman, Rosario V. T1 - Echo simulator with novel training and competency testing tools. JO - Studies in health technology and informatics Y1 - 2013 VL - 184 SP - 397 EP - 403 KW - Cardiology KW - education; Computer-Assisted Instruction KW - methods; Echocardiography KW - instrumentation KW - methods; Educational Measurement KW - methods; Equipment Design; Equipment Failure Analysis; Humans; Manikins; Professional Competence; User-Computer Interface N1 - 0926-9630 Owner: NLM N2 - We developed and validated an echo simulator with three novel tools that facilitate training and enable quantitative and objective measurement of psychomotor as well as cognitive skill. First, the trainee can see original patient images - not synthetic or simulated images - that morph in real time as the mock transducer is manipulated on the mannequin. Second, augmented reality is used for Visual Guidance, a tool that assists the trainee in scanning by displaying the target organ in 3 dimensions (3D) together with the location of the current view plane and the plane of the anatomically correct view. Third, we introduce Image Matching, a tool that leverages the aptitude of the human brain for recognizing similarities and differences to help trainees learn to perform visual assessment of ultrasound images. Psychomotor competence is measured in terms of the view plane angle error. The construct validity of the simulator for competency testing was established by demonstrating its ability to discriminate novices vs. experts. ER - TY - JOUR AU - Shen, Fangyang AU - Chen, Bailiang AU - Guo, Qingshan AU - Qi, Yue AU - Shen, Yue T1 - Augmented reality patient-specific reconstruction plate design for pelvic and acetabular fracture surgery.
JO - International journal of computer assisted radiology and surgery Y1 - 2013 VL - 8 SP - 169 EP - 179 KW - Acetabulum KW - diagnostic imaging KW - surgery; Adult; Computer Graphics; Female; Fracture Fixation KW - Internal KW - instrumentation KW - methods; Fractures KW - Bone KW - surgery; Humans; Imaging KW - Three-Dimensional; Male; Pelvic Bones KW - surgery; Prostheses and Implants; Radiographic Image Interpretation KW - Computer-Assisted KW - methods; Reproducibility of Results; Software; Surgery KW - methods; Tomography KW - X-Ray Computed; Treatment Outcome; User-Computer Interface N1 - 1861-6429 Owner: NLM N2 - The objective of this work is to develop a preoperative reconstruction plate design system for unilateral pelvic and acetabular fracture reduction and internal fixation surgery, using computer graphics and augmented reality (AR) techniques, in order to respect the patient-specific morphology and to reduce surgical invasiveness, as well as to simplify the surgical procedure. Our AR-aided implant design and contouring system is composed of two subsystems: a semi-automatic 3D virtual fracture reduction system to establish the patient-specific anatomical model and a preoperative templating system to create the virtual and real surgical implants. Preoperative 3D CT data are taken as input. The virtual fracture reduction system exploits the symmetric nature of the skeletal system to build a "repaired" pelvis model, on which reconstruction plates are planned interactively. A lightweight AR environment is set up to allow surgeons to match the actual implants to the digital ones intuitively. The effectiveness of this system is qualitatively demonstrated with 6 clinical cases. Its reliability was assessed based on the inter-observer reproducibility of the resulting virtual implants. The implants designed with the proposed system were successfully applied to all cases through minimally invasive surgeries. After the treatments, no further complications were reported. The inter-observer variability of the virtual implant geometry is 0.63 mm on average with a standard deviation of 0.49 mm. The time required for implant creation with our system is 10 min on average. It is feasible to apply the proposed AR-aided design system for noninvasive implant contouring for unilateral fracture reduction and internal fixation surgery. It also enables a patient-specific surgical planning procedure with potentially improved efficiency. ER - TY - JOUR AU - Shen, Yunhe AU - Hananel, David AU - Zhao, Zichen AU - Burke, Daniel AU - Ballas, Crist AU - Norfleet, Jack AU - Reihsen, Troy AU - Sweet, Robert T1 - A New Design for Airway Management Training with Mixed Reality and High Fidelity Modeling. JO - Studies in health technology and informatics Y1 - 2016 VL - 220 SP - 359 EP - 362 KW - Airway Management; Computer Simulation; Computer-Assisted Instruction KW - instrumentation KW - methods; Equipment Design; Equipment Failure Analysis; High Fidelity Simulation Training KW - methods; Humans; Imaging KW - Three-Dimensional KW - methods; Models KW - Anatomic; Models KW - Biological; Patient-Specific Modeling; User-Computer Interface N1 - 1879-8365 Owner: NLM N2 - Restoring airway function is a vital task in many medical scenarios. Although various simulation tools have been available for learning such skills, recent research indicated that fidelity in simulating airway management deserves further improvement.
In this study, we designed and implemented a new prototype for practicing relevant tasks including laryngoscopy, intubation and cricothyrotomy. A large number of anatomical details and landmarks were meticulously selected and reconstructed from medical scans, and 3D-printed or molded to the airway intervention model. This training model was augmented by virtually and physically presented interactive modules, which are interoperable with motion tracking and sensor data feedback. Implementation results showed that this design is a feasible approach to develop higher fidelity airway models that can be integrated with mixed reality interfaces. ER - TY - JOUR AU - Shen, Yunhe AU - Norfleet, Jack AU - Zhao, Zichen AU - Hananel, David AU - Burke, Daniel AU - Reihsen, Troy AU - Sweet, Robert T1 - High-Fidelity Medical Training Model Augmented With Virtual Reality and Conformable Sensors JO - Journal of Medical Devices Y1 - 2016/august VL - 10 IS - 3 SP - 030915 EP - 030915 ER - TY - CONF AU - Shi, Chen AU - Becker, Brian C. AU - Riviere, Cameron N. T1 - Inexpensive monocular pico-projector-based augmented reality display for surgical microscope PB - IEEE Y1 - 2012/june ER - TY - JOUR AU - Shi, Chaoyang AU - Tercero, Carlos AU - Ikeda, Seiichi AU - Ooe, Katsutoshi AU - Fukuda, Toshio AU - Komori, Kimihiro AU - Yamamoto, Kiyohito T1 - In vitro three-dimensional aortic vasculature modeling based on sensor fusion between intravascular ultrasound and magnetic tracker. JO - The international journal of medical robotics + computer assisted surgery : MRCAS Y1 - 2012 VL - 8 SP - 291 EP - 299 KW - Algorithms; Aorta KW - Thoracic KW - anatomy & histology KW - diagnostic imaging KW - surgery; Computer Simulation; Humans; Imaging KW - Three-Dimensional; Magnetics; Models KW - Anatomic; Models KW - Cardiovascular; Stents; Ultrasonography N1 - 1478-596X Owner: NLM N2 - It is desirable to reduce aortic stent graft installation time and the amount of contrast media used for this process. Guidance with augmented reality can achieve this by facilitating alignment of the stent graft with the renal and mesenteric arteries. For this purpose, a sensor fusion is proposed between intravascular ultrasound (IVUS) and magnetic trackers to construct three-dimensional virtual reality models of the blood vessels, as well as improvements to the gradient vector flow snake for boundary detection in ultrasound images. In vitro vasculature imaging experiments were done with a hybrid probe and silicone models of the vasculature. The dispersion of samples for the magnetic tracker in the hybrid probe increased less than 1 mm when the IVUS was activated. Three-dimensional models of the descending thoracic aorta, with an average cross-section radius error of 0.94 mm, were built from the data fusion. The development of this technology will enable reduction in the amount of contrast media required for in vivo and real-time three-dimensional blood vessel imaging. ER - TY - JOUR AU - Shi, Yunyong AU - Lin, Li AU - Zhou, Chaozheng AU - Zhu, Ming AU - Xie, Le AU - Chai, Gang T1 - A study of an assisting robot for mandible plastic surgery based on augmented reality.
JO - Minimally invasive therapy & allied technologies : MITAT : official journal of the Society for Minimally Invasive Therapy Y1 - 2017 VL - 26 SP - 23 EP - 30 KW - Animals; Dogs; Fuzzy Logic; Mandible KW - surgery; Minimally Invasive Surgical Procedures KW - instrumentation; Robotic Surgical Procedures KW - instrumentation; Surgery KW - Plastic KW - instrumentation; Swine; Mandible plastic surgery; augmented reality; bone drilling; fuzzy control N1 - 1365-2931 Owner: NLM N2 - Mandible plastic surgery plays an important role in conventional plastic surgery. However, its success depends on the experience of the surgeons. In order to improve the effectiveness of the surgery and relieve the burden on surgeons, a mandible plastic surgery assisting robot, based on an augmented reality technique, was developed. Augmented reality assists surgeons with positioning. Fuzzy control theory was used for the control of the motor. During the process of bone drilling, both the drill bit position and the force were measured by a force sensor, which was used to estimate the position of the drilling procedure. An animal experiment was performed to verify the effectiveness of the robotic system. The position error was 1.07 ± 0.27 mm and the angle error was 5.59 ± 3.15°. The results show that the system provides sufficient accuracy with which a precise drilling procedure can be performed. In addition, with the supervisory feedback of the sensor, an adequate safety level can be achieved for the robotic system. The system realizes accurate positioning and automatic drilling to solve the problems encountered in the drilling procedure, providing a method for future plastic surgery. ER - TY - JOUR AU - Siebert, Johan N. AU - Ehrler, Frederic AU - Gervaix, Alain AU - Haddad, Kevin AU - Lacroix, Laurence AU - Schrurs, Philippe AU - Sahin, Ayhan AU - Lovis, Christian AU - Manzano, Sergio T1 - Adherence to AHA Guidelines When Adapted for Augmented Reality Glasses for Assisted Pediatric Cardiopulmonary Resuscitation: A Randomized Controlled Trial. JO - Journal of medical Internet research Y1 - 2017/05 VL - 19 SP - e183 EP - e183 KW - Cardiopulmonary Resuscitation KW - methods KW - standards; Child; Female; Guideline Adherence; Hospitals KW - Pediatric KW - standards; Humans; Male; Prospective Studies; biomedical technologies; emergency medicine; equipment and supplies; eyeglasses; pediatrics; resuscitation N1 - 1438-8871 Owner: NLM N2 - The American Heart Association (AHA) guidelines for cardiopulmonary resuscitation (CPR) are nowadays recognized as the world's most authoritative resuscitation guidelines. Adherence to these guidelines optimizes the management of critically ill patients and increases their chances of survival after cardiac arrest. Despite their availability, suboptimal quality of CPR is still common. Currently, the median hospital survival rate after pediatric in-hospital cardiac arrest is 36%, whereas it falls below 10% for out-of-hospital cardiac arrest. Among emerging information technologies and devices able to support caregivers during resuscitation and increase adherence to AHA guidelines, augmented reality (AR) glasses have not yet been assessed. In order to assess their potential, we adapted AHA Pediatric Advanced Life Support (PALS) guidelines for AR glasses.
The study aimed to determine whether adapting AHA guidelines for AR glasses increased adherence by reducing deviation and time to initiation of critical life-saving maneuvers during pediatric CPR when compared with the use of PALS pocket reference cards. We conducted a randomized controlled trial with two parallel groups of voluntary pediatric residents, comparing AR glasses to PALS pocket reference cards during a simulation-based pediatric cardiac arrest scenario of pulseless ventricular tachycardia (pVT). The primary outcome was the elapsed time in seconds in each allocation group, from onset of pVT to the first defibrillation attempt. Secondary outcomes were time elapsed to (1) initiation of chest compression, (2) subsequent defibrillation attempts, and (3) administration of drugs, as well as the time intervals between defibrillation attempts and drug doses, shock doses, and number of shocks. All these outcomes were assessed for deviation from AHA guidelines. Twenty residents were randomized into 2 groups. Time to first defibrillation attempt (mean: 146 s) and adherence to AHA guidelines in terms of time to other critical resuscitation endpoints and drug dose delivery were not improved using AR glasses. However, errors and deviations were significantly reduced in terms of defibrillation doses when compared with the use of the PALS pocket reference cards. In a total of 40 defibrillation attempts, residents not wearing AR glasses used wrong doses in 65% (26/40) of cases, including 21 shock overdoses >100 J, for a cumulative defibrillation dose of 18.7 Joules per kg. These errors were reduced by 53% (21/40, P<.001) and cumulative defibrillation dose by 37% (5.14/14, P=.001) with AR glasses. AR glasses did not decrease time to first defibrillation attempt and other critical resuscitation endpoints when compared with PALS pocket cards. However, they improved adherence and performance among residents in terms of administering the defibrillation doses set by AHA. ER - TY - JOUR AU - Simpfendörfer, Tobias AU - Gasch, Claudia AU - Hatiboglu, Gencay AU - Müller, Michael AU - Maier-Hein, Lena AU - Hohenfellner, Markus AU - Teber, Dogu T1 - Intraoperative Computed Tomography Imaging for Navigated Laparoscopic Renal Surgery: First Clinical Experience. JO - Journal of endourology Y1 - 2016 VL - 30 SP - 1105 EP - 1111 KW - Adult; Aged; Animals; Cone-Beam Computed Tomography KW - methods; Female; Fluoroscopy; Humans; Kidney KW - surgery; Kidney Neoplasms KW - surgery; Laparoscopy KW - methods; Male; Middle Aged; Monitoring KW - Intraoperative KW - methods; Nephrectomy KW - methods; Patient Safety; Renal Artery; Swine; Tomography KW - X-Ray Computed; cone-beam computed tomography; fluoroscopy; image guided surgery; laparoscopy; nephrectomy N1 - 1557-900X Owner: NLM N2 - Laparoscopic partial nephrectomy (LPN) remains challenging in endophytic and complex kidney tumors as the clear understanding of tumor location and spreading depends on a precise analysis of available imaging. The purpose of this study was to investigate navigated kidney surgery using intraoperative cone-beam computed tomography (CBCT) images in conjunction with a previously proposed method for augmented reality (AR) guidance for safe LPN. The concept proposed is based on using an intraoperative CBCT scan for (1) marker-based AR guidance for fast and reliable tumor access and (2) enhancement of real-time fluoroscopy images for accurate tumor resection. Workflow and accuracy of the system were assessed using a porcine kidney model.
Ten patients with complex or endophytic tumor localization and a R.E.N.A.L. Nephrometry Score of at least nine who were scheduled for LPN were included in this study. Patients received an intraoperative CBCT after marker placement. Defining the resection line was assisted by AR. In addition, fluoroscopy imaging for depth perception was used for assistance during dissection. Feasibility and performance were assessed by histopathological results and peri- and postoperative data. Surgery was performed successfully and negative margins were found in all cases. Segmental branches of the renal artery shifted as much as 10 mm in the vertical and 11 mm in the sagittal axis intraoperatively compared to preoperative imaging. Fluoroscopy to intraoperative computed tomography image fusion enabled enhanced depth perception during dissection in all cases. Radiation dose area product was 4.8 mGy·m². The application of the navigation system is feasible and allows for safe and direct access to complex or endophytic renal masses. Radiation limits the application to selected indications. ER - TY - JOUR AU - Soeiro, José AU - Cláudio, Ana Paula AU - Carmo, Maria Beatriz AU - Ferreira, Hugo Alexandre T1 - Visualizing the brain on a mixed reality smartphone application. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2015 VL - 2015 SP - 5090 EP - 5093 KW - Brain KW - anatomy & histology; Humans; Imaging KW - Three-Dimensional KW - methods; Smartphone; Software N1 - 1557-170X Owner: NLM N2 - Augmented and Virtual Reality approaches are getting more and more advanced and consequently their use in various real-world areas is increasing. Medicine is one of the fields in which more practical applications are surfacing, mainly approaches that enable new forms of visualization of data obtained from real patients. Our work focuses on providing a new simple, practical and efficient way to visualize the brain of a patient, both in an Augmented Reality and in a Virtual Reality approach, through a smartphone application. ER - TY - JOUR AU - Nifakos, Sokratis AU - Zary, Nabil T1 - Virtual Patients in a Real Clinical Context using Augmented Reality: Impact on Antibiotics Prescription Behaviors JO - Studies in Health Technology and Informatics Y1 - 2014 VL - 205 IS - e-Health – For Continuity of Care SP - 707 EP - 711 N1 - 0926-9630 ER - TY - JOUR AU - Soler, Luc AU - Nicolau, Stephane AU - Pessaux, Patrick AU - Mutter, Didier AU - Marescaux, Jacques T1 - Real-time 3D image reconstruction guidance in liver resection surgery. JO - Hepatobiliary surgery and nutrition Y1 - 2014 VL - 3 SP - 73 EP - 81 KW - Augmented reality (AR); computer-assisted surgery; liver surgery; virtual reality N1 - 2304-3881 Owner: NLM N2 - Minimally invasive surgery represents one of the main evolutions of surgical techniques. However, minimally invasive surgery adds difficulty that can be reduced through computer technology. From a patient's medical image [US, computed tomography (CT) or MRI], we have developed an Augmented Reality (AR) system that increases the surgeon's intraoperative vision by providing a virtual transparency of the patient. AR is based on two major processes: 3D modeling and visualization of anatomical or pathological structures appearing in the medical image, and the registration of this visualization onto the real patient.
We have thus developed a new online service, named Visible Patient, providing efficient 3D modeling of patients. We have then developed several 3D visualization and surgical planning software tools to combine direct volume rendering and surface rendering. Finally, we have developed two registration techniques, one interactive and one automatic, providing an intraoperative augmented reality view. From January 2009 to June 2013, 769 clinical cases have been modeled by the Visible Patient service. Moreover, three clinical validations have been performed, demonstrating the accuracy of 3D models and their great benefit, potentially increasing surgical eligibility in liver surgery (20% of cases). From these 3D models, more than 50 interactive AR-assisted surgical procedures have been performed, illustrating the potential clinical benefit of such assistance in improving safety, but also the current limits that automatic augmented reality will have to overcome. Virtual patient modeling should be mandatory for certain interventions that now have to be defined, such as liver surgery. Augmented reality is clearly the next step in surgical instrumentation but remains currently limited due to the complexity of organ deformations during surgery. Intraoperative medical imaging used in a new generation of automated augmented reality should solve this issue thanks to the development of the hybrid OR. ER - TY - JOUR AU - Souzaki, Ryota AU - Ieiri, Satoshi AU - Uemura, Munenori AU - Ohuchida, Kenoki AU - Tomikawa, Morimasa AU - Kinoshita, Yoshiaki AU - Koga, Yuhki AU - Suminoe, Aiko AU - Kohashi, Kenichi AU - Oda, Yoshinao AU - Hara, Toshiro AU - Hashizume, Makoto AU - Taguchi, Tomoaki T1 - An augmented reality navigation system for pediatric oncologic surgery based on preoperative CT and MRI images. JO - Journal of pediatric surgery Y1 - 2013 VL - 48 SP - 2479 EP - 2483 KW - Bronchogenic Cyst KW - diagnostic imaging KW - surgery; Child; Child KW - Preschool; Hepatoblastoma KW - surgery; Humans; Image Processing KW - Computer-Assisted; Imaging KW - Three-Dimensional; Infant; Laparoscopy KW - methods; Laparotomy KW - methods; Liver Neoplasms KW - surgery; Magnetic Resonance Imaging; Neoplasm Recurrence KW - Local KW - surgery; Neoplasms KW - surgery; Preoperative Care; Rhabdomyosarcoma KW - secondary KW - surgery; Sarcoma KW - surgery; Surgery KW - Computer-Assisted KW - methods; Thoracoscopy KW - methods; Tomography KW - X-Ray Computed; Treatment Outcome; Wilms Tumor KW - surgery; Augmented reality; Image-guided surgery; Laparoscopic surgery N1 - 1531-5037 Owner: NLM N2 - In pediatric endoscopic surgery, a limited view and lack of tactile sensation restrict the surgeon's abilities. Moreover, in pediatric oncology, it is sometimes difficult to detect and resect tumors due to the adhesion and degeneration of tumors treated with multimodality therapies. We developed an augmented reality (AR) navigation system based on preoperative CT and MRI imaging for use in endoscopic surgery for pediatric tumors. The patients preoperatively underwent either CT or MRI with body surface markers. We used an optical tracking system to register the reconstructed 3D images obtained from the CT and MRI data and body surface markers during surgery. AR visualization was superimposed with the 3D images projected onto captured live images. Six patients underwent surgery using this system. The median age of the patients was 3.5 years.
Two of the six patients underwent laparoscopic surgery, two patients underwent thoracoscopic surgery, and two patients underwent laparotomy using this system. The indications for surgery were local recurrence of a Wilms tumor in one case, metastasis of rhabdomyosarcoma in one case, undifferentiated sarcoma in one case, bronchogenic cysts in two cases, and hepatoblastoma in one case. The average tumor size was 22.0±14.2 mm. Four patients were treated with chemotherapy, three patients were treated with radiotherapy before surgery, and four patients underwent reoperation. All six tumors were detected using the AR navigation system and successfully resected without any complications. The AR navigation system is very useful for detecting the tumor location during pediatric surgery, especially for endoscopic surgery. ER - TY - JOUR AU - Stone, Scott A. AU - Tata, Matthew S. T1 - Rendering visual events as sounds: Spatial attention capture by auditory augmented reality. JO - PloS one Y1 - 2017 VL - 12 SP - e0182635 EP - e0182635 KW - Acoustic Stimulation; Algorithms; Analysis of Variance; Attention; Auditory Perception; Discrimination (Psychology); Humans; Motion Perception; Psychophysics; Reaction Time; Software; Space Perception; Time Factors; User-Computer Interface N1 - 1932-6203 Owner: NLM N2 - Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller, making this system much more feasible. ER - TY - JOUR AU - Suenaga, Hideyuki AU - Hoang Tran, Huy AU - Liao, Hongen AU - Masamune, Ken AU - Dohi, Takeyoshi AU - Hoshi, Kazuto AU - Mori, Yoshiyuki AU - Takato, Tsuyoshi T1 - Real-time in situ three-dimensional integral videography and surgical navigation using augmented reality: a pilot study.
JO - International journal of oral science Y1 - 2013 VL - 5 SP - 98 EP - 102 KW - Calibration; Data Display; Feasibility Studies; Humans; Image Processing KW - Computer-Assisted KW - instrumentation KW - methods; Imaging KW - Three-Dimensional KW - methods; Mandible KW - anatomy & histology; Maxilla KW - anatomy & histology; Models KW - Anatomic; Optical Devices; Oral Surgical Procedures KW - methods; Pilot Projects; Stereotaxic Techniques KW - instrumentation; Surgery KW - methods; Tomography KW - X-Ray Computed KW - methods; Tooth KW - anatomy & histology; User-Computer Interface; Video Recording KW - methods N1 - 1674-2818 Owner: NLM N2 - To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in the three-dimensional position of each measuring point between the solid model and the augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye. ER - TY - JOUR AU - Suenaga, Hideyuki AU - Tran, Huy Hoang AU - Liao, Hongen AU - Masamune, Ken AU - Dohi, Takeyoshi AU - Hoshi, Kazuto AU - Takato, Tsuyoshi T1 - Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study. JO - BMC medical imaging Y1 - 2015 VL - 15 SP - 51 EP - 51 KW - Calibration; Feasibility Studies; Humans; Imaging KW - Three-Dimensional; Oral Surgical Procedures KW - instrumentation; Phantoms KW - Imaging; Pilot Projects; Surgery KW - Computer-Assisted; Tomography KW - X-Ray Computed; User-Computer Interface; Video Recording N1 - 1471-2342 Owner: NLM N2 - This study evaluated the use of an augmented reality navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer was used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw.
The overlay of the subject's anatomic site and its 3D-IV image was displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer's anatomy with IV stereoscopic images via image matching was done using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. ER - TY - JOUR AU - Sun, Guo-Chen AU - Chen, Xiao-Lei AU - Hou, Yuan-Zheng AU - Yu, Xin-Guang AU - Ma, Xiao-Dong AU - Liu, Gang AU - Liu, Lei AU - Zhang, Jia-Shu AU - Tang, Hao AU - Zhu, Ru-Yuan AU - Zhou, Ding-Biao AU - Xu, Bai-Nan T1 - Image-guided endoscopic surgery for spontaneous supratentorial intracerebral hematoma. JO - Journal of neurosurgery Y1 - 2017 VL - 127 SP - 537 EP - 542 KW - GCS = Glasgow Coma Scale; PACS = picture archiving and communication system; diagnostic and operative techniques; endoscopy; evacuation; hematoma N1 - 1933-0693 Owner: NLM N2 - OBJECTIVE Endoscopic removal of intracerebral hematomas is becoming increasingly common, but there is no standard technique. The authors explored the use of a simple image-guided endoscopic method for removal of spontaneous supratentorial hematomas. METHODS Virtual reality technology based on a hospital picture archiving and communication system (PACS) was used in 3D hematoma visualization and surgical planning. Augmented reality based on an Android smartphone app, Sina neurosurgical assist, allowed a projection of the hematoma to be seen on the patient's scalp to facilitate selection of the best trajectory to the center of the hematoma. An obturator and a transparent sheath were used to establish a working channel, and an endoscope and a metal suction apparatus were used to remove the hematoma. RESULTS A total of 25 patients were included in the study, including 18 with putamen hemorrhages and 7 with lobar cerebral hemorrhages. Virtual reality combined with augmented reality helped in achieving the desired position with the obturator and sheath. The median time from the initial surgical incision to completion of closure was 50 minutes (range 40-70 minutes). The actual endoscopic operating time was 30 (range 15-50) minutes. The median blood loss was 80 (range 40-150) ml. No patient experienced postoperative rebleeding. The average hematoma evacuation rate was 97%. The mean (± SD) preoperative Glasgow Coma Scale (GCS) score was 6.7 ± 3.2; 1 week after hematoma evacuation the mean GCS score had improved to 11.9 ± 3.1 (p < 0.01). CONCLUSIONS Virtual reality using hospital PACS and augmented reality with a smartphone app helped precisely localize hematomas and plan the appropriate endoscopic approach.
A transparent sheath helped establish a surgical channel, and an endoscope enabled observation of the hematoma's location to achieve satisfactory hematoma removal. ER - TY - JOUR AU - Sun, Guo-Chen AU - Wang, Fei AU - Chen, Xiao-Lei AU - Yu, Xin-Guang AU - Ma, Xiao-Dong AU - Zhou, Ding-Biao AU - Zhu, Ru-Yuan AU - Xu, Bai-Nan T1 - Impact of Virtual and Augmented Reality Based on Intraoperative Magnetic Resonance Imaging and Functional Neuronavigation in Glioma Surgery Involving Eloquent Areas. JO - World neurosurgery Y1 - 2016 VL - 96 SP - 375 EP - 382 KW - Adult; Aged; Brain Neoplasms KW - diagnostic imaging KW - surgery; Female; Glioma KW - surgery; Humans; Imaging KW - Three-Dimensional; Magnetic Resonance Imaging; Male; Middle Aged; Monitoring KW - Intraoperative KW - methods; Neuronavigation KW - methods; Retrospective Studies; Statistics KW - Nonparametric; User-Computer Interface; Augmented reality; Diffusion tensor imaging; Functional neuronavigation; Intraoperative MRI; Virtual reality N1 - 1878-8769 Owner: NLM N2 - The utility of virtual and augmented reality based on functional neuronavigation and intraoperative magnetic resonance imaging (MRI) for glioma surgery has not been previously investigated. The study population consisted of 79 glioma patients and 55 control subjects. Preoperatively, the lesion and related eloquent structures were visualized by diffusion tensor tractography and blood oxygen level-dependent functional MRI. Intraoperatively, microscope-based functional neuronavigation was used to integrate the reconstructed eloquent structure and the real head and brain, which enabled safe resection of the lesion. Intraoperative MRI was used to verify brain shift during the surgical process and provided quality control during surgery. The control group underwent surgery guided by anatomic neuronavigation. Virtual and augmented reality protocols based on functional neuronavigation and intraoperative MRI provided useful information for performing tailored and optimized surgery. Complete resection was achieved in 55 of 79 (69.6%) glioma patients and 20 of 55 (36.4%) control subjects, with average resection rates of 95.2% ± 8.5% and 84.9% ± 15.7%, respectively. Both the complete resection rate and average extent of resection differed significantly between the 2 groups (P < 0.01). Postoperatively, the rate of preservation of neural functions (motor, visual field, and language) was lower in controls than in glioma patients at 2 weeks and 3 months (P < 0.01). Combining virtual and augmented reality based on functional neuronavigation and intraoperative MRI can facilitate resection of gliomas involving eloquent areas. ER - TY - JOUR AU - Sutherland, Colin AU - Hashtrudi-Zaad, Keyvan AU - Sellens, Rick AU - Abolmaesumi, Purang AU - Mousavi, Parvin T1 - An augmented reality haptic training simulator for spinal needle procedures. JO - IEEE transactions on bio-medical engineering Y1 - 2013 VL - 60 SP - 3009 EP - 3018 KW - Computer Simulation; Computer-Assisted Instruction KW - instrumentation KW - methods; Education KW - Medical KW - methods; Finite Element Analysis; Humans; Injections KW - Spinal KW - methods; Models KW - Anatomic; Pilot Projects; Spine KW - anatomy & histology; Torso KW - anatomy & histology; User-Computer Interface N1 - 1558-2531 Owner: NLM N2 - This paper presents the prototype for an augmented reality haptic simulation system with potential for spinal needle insertion training. 
The proposed system is composed of a torso mannequin, a MicronTracker2 optical tracking system, a PHANToM haptic device, and a graphical user interface to provide visual feedback. The system allows users to perform simulated needle insertions on a physical mannequin overlaid with an augmented reality cutaway of patient anatomy. A tissue model based on a finite-element model provides force feedback during the insertion. The system allows for training without the need for the presence of a trained clinician or access to live patients or cadavers. A pilot user study demonstrates the potential and functionality of the system. ER - TY - JOUR AU - Suzuki, Keisuke AU - Garfinkel, Sarah N. AU - Critchley, Hugo D. AU - Seth, Anil K. T1 - Multisensory integration across exteroceptive and interoceptive domains modulates self-experience in the rubber-hand illusion. JO - Neuropsychologia Y1 - 2013 VL - 51 SP - 2909 EP - 2917 KW - Adolescent; Adult; Analysis of Variance; Body Image; Feedback KW - Physiological KW - physiology; Female; Functional Laterality; Heart Rate KW - physiology; Humans; Illusions KW - physiology KW - psychology; Male; Proprioception; Self Concept; Surveys and Questionnaires; Touch Perception KW - physiology; Visual Perception KW - physiology; Young Adult; Augmented reality; Experience of body ownership; Interoception; Multisensory integration; Predictive coding; Rubber hand illusion N1 - 1873-3514 Owner: NLM N2 - Identifying with a body is central to being a conscious self. The now classic "rubber hand illusion" demonstrates that the experience of body-ownership can be modulated by manipulating the timing of exteroceptive (visual and tactile) body-related feedback. Moreover, the strength of this modulation is related to individual differences in sensitivity to internal bodily signals (interoception). However, the interaction of exteroceptive and interoceptive signals in determining the experience of body-ownership within an individual remains poorly understood. Here, we demonstrate that this depends on the online integration of exteroceptive and interoceptive signals by implementing an innovative "cardiac rubber hand illusion" that combined computer-generated augmented-reality with feedback of interoceptive (cardiac) information. We show that both subjective and objective measures of virtual-hand ownership are enhanced by cardio-visual feedback in-time with the actual heartbeat, as compared to asynchronous feedback. We further show that these measures correlate with individual differences in interoceptive sensitivity, and are also modulated by the integration of proprioceptive signals instantiated using real-time visual remapping of finger movements to the virtual hand. Our results demonstrate that interoceptive signals directly influence the experience of body ownership via multisensory integration, and they lend support to models of conscious selfhood based on interoceptive predictive coding. ER - TY - JOUR AU - Suzuki, Naoki AU - Hattori, Asaki AU - Iimura, Jiro AU - Otori, Nobuyoshi AU - Onda, Shinji AU - Okamoto, Tomoyoshi AU - Yanaga, Katsuhiko T1 - Development of AR Surgical Navigation Systems for Multiple Surgical Regions.
JO - Studies in health technology and informatics Y1 - 2014 VL - 196 SP - 404 EP - 408 KW - Digestive System Surgical Procedures KW - instrumentation KW - methods; Humans; Paranasal Sinuses KW - surgery; Surgery KW - Computer-Assisted KW - methods N1 - 1879-8365 Owner: NLM N2 - The purpose of our research is to develop surgical navigation systems to enhance surgical safety. Our systems use augmented reality technology to superimpose, on the surgical screen in real time, patients' organ models reconstructed in 3D from X-ray CT data acquired before surgery. By doing so, the systems display anatomical risk structures, tumors, and blood vessels that surgeons cannot see with the naked eye. This in turn helps surgeons intuitively grasp the inner structures of the operative field. We have so far developed navigation systems for surgeries in various fields. The basic structure of the navigation systems is the same; each system uses different peripheral equipment and display methods to best meet the demands of each type of surgery. In this paper, we report on our navigation systems for 2 types of surgery: endoscopic sinus surgery and hepatobiliary-pancreatic surgery. ER - TY - JOUR AU - Sweet, Robert M. T1 - The CREST Simulation Development Process: Training the Next Generation. JO - Journal of endourology Y1 - 2017 VL - 31 SP - S69 EP - S75 KW - Anastomosis KW - Surgical KW - education; Clinical Competence; Clinical Decision-Making; Communication; Computer Simulation; Cystoscopy KW - education; Endoscopy; Humans; Kidney Pelvis KW - surgery; Laparoscopy KW - education; Nephrostomy KW - Percutaneous; Printing KW - Three-Dimensional; Psychomotor Performance; Simulation Training KW - methods; Task Performance and Analysis; Ureteroscopy KW - education; Urethra KW - surgery; Urinary Bladder KW - surgery; Urologic Surgical Procedures KW - education; Urology KW - education; User-Computer Interface; benign prostatic hyperplasia; education; instrumentation; laparoscopy approach; laser; percutaneous nephrolithotomy; renal stone; simulation; ureteral stones; ureteroscopy N1 - 1557-900X Owner: NLM N2 - The challenges of training and assessing endourologic skill have driven the development of new training systems. The Center for Research in Education and Simulation Technologies (CREST) has developed a team and a methodology to facilitate this development process. Backwards design principles were applied. A panel of experts first defined desired clinical and educational outcomes. Outcomes were subsequently linked to learning objectives. Gross task deconstruction was performed, and the primary domain was classified as primarily involving decision-making, psychomotor skill, or communication. A more detailed cognitive task analysis was performed to elicit and prioritize relevant anatomy/tissues, metrics, and errors. Reference anatomy was created using a digital anatomist and a clinician working from a clinical data set. Three-dimensional printing can facilitate this process. When possible, synthetic or virtual tissue behavior and textures were recreated using data derived from human tissue. Embedded sensors/markers and/or computer-based systems were used to facilitate the collection of objective metrics. Verification and validation occurred throughout the engineering development process. Nine endourology-relevant training systems were created by CREST with this approach.
Systems include basic laparoscopic skills (BLUS), vesicourethral anastomosis, pyeloplasty, cystoscopic procedures, stent placement, rigid and flexible ureteroscopy, GreenLight PVP (GL Sim), percutaneous access with C-arm (CAT), nephrolithotomy (NLM), and a vascular injury model. Mixed modalities have been used, including "smart" physical models, virtual reality, augmented reality, and video. Substantial validity evidence for training and assessment has been collected on these systems. An open-source manikin-based modular platform is under development by CREST with the Department of Defense that will unify these and other commercial task trainers through a common physiology engine, learning management system, standard data connectors, and standards. The CREST process has ensured, and will continue to ensure, that the systems we create meet the needs of training and assessing endourologic skills. ER - TY - JOUR AU - Szabó, Zoltán AU - Berg, Sören AU - Sjökvist, Stefan AU - Gustafsson, Torbjörn AU - Carleberg, Per AU - Uppsäll, Magnus AU - Wren, Joakim AU - Ahn, Henrik AU - Smedby, Örjan T1 - Real-time intraoperative visualization of myocardial circulation using augmented reality temperature display. JO - The international journal of cardiovascular imaging Y1 - 2013 VL - 29 SP - 521 EP - 528 KW - Animals; Body Temperature; Cardiac Surgical Procedures; Coronary Circulation; Disease Models KW - Animal; Echocardiography KW - Doppler; Feasibility Studies; Hemodynamics; Infrared Rays; Monitoring KW - Intraoperative KW - methods; Myocardial Ischemia KW - diagnosis KW - diagnostic imaging KW - physiopathology; Myocardial Perfusion Imaging KW - methods; Swine; Thermography; Time Factors N1 - 1875-8312 Owner: NLM N2 - For direct visualization of myocardial ischemia during cardiac surgery, we tested the feasibility of presenting infrared (IR) tissue temperature maps in situ during surgery. A new augmented reality (AR) system, consisting of an IR camera and an integrated projector having identical optical axes, was used, with a high-resolution IR camera as control. The hearts of five pigs were exposed and an elastic band placed around the middle of the left anterior descending coronary artery to induce ischemia. A proximally placed ultrasound Doppler probe confirmed reduction of flow. Two periods of complete ischemia and reperfusion were studied in each heart. There was a significant decrease in IR-measured temperature distal to the occlusion, with subsequent return to baseline temperatures after reperfusion (baseline 36.9 ± 0.60 (mean ± SD) versus ischemia 34.1 ± 1.66 versus reperfusion 37.4 ± 0.48; p < 0.001), with no differences occurring in the non-occluded area. The AR presentation was clear and dynamic without delay, visualizing the temperature changes produced by manipulation of the coronary blood flow, and showed concentrically arranged penumbra zones during ischemia. Surface myocardial temperature changes could be assessed quantitatively and visualized in situ during ischemia and subsequent reperfusion. This method shows potential as a rapid and simple way of following myocardial perfusion during cardiac surgery. The dynamics in the penumbra zone could potentially be used for visualizing the effect of therapy on intraoperative ischemia during cardiac surgery.
ER - TY - JOUR AU - Taati, Babak AU - Campos, Jennifer AU - Griffiths, Jeremy AU - Gridseth, Mona AU - Mihailidis, Alex T1 - Vision-based categorization of upper body motion impairments and post-stroke motion synergies JO - International Journal on Disability and Human Development Y1 - 2014/january VL - 13 IS - 3 ER - TY - JOUR AU - Tabrizi, Leila Besharati AU - Mahvash, Mehran T1 - Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique JO - Journal of Neurosurgery Y1 - 2015/july SP - 206 EP - 211 ER - TY - CONF AU - Tang, Qiang AU - Chen, Yan AU - Gale, Alastair G. AU - Kupinski, Matthew A. AU - Nishikawa, Robert M. T1 - The implementation of an AR (augmented reality) approach to support mammographic interpretation training: an initial feasibility study PB - SPIE Y1 - 2017/march ER - TY - CONF AU - Tano, S. AU - Suzuki, K. AU - Miki, K. AU - Watanabe, N. AU - Iwata, M. AU - Hashiyama, T. AU - Ichino, J. AU - Nakayama, K. T1 - Simple augmented reality system for 3D ultrasonic image by see-through HMD and single camera and marker combination PB - IEEE Y1 - 2012/january ER - TY - JOUR AU - Terrier, Philippe T1 - Fractal Fluctuations in Human Walking: Comparison Between Auditory and Visually Guided Stepping. JO - Annals of biomedical engineering Y1 - 2016 VL - 44 SP - 2785 EP - 2793 KW - Adult; Auditory Perception KW - physiology; Female; Gait KW - physiology; Humans; Male; Motion Perception KW - physiology; Sensory Gating KW - physiology; Auditory cueing; Gait variability; Human locomotion; Long-range correlations; Motor control; Sensorimotor synchronization; Visual cueing N1 - 1573-9686 Owner: NLM N2 - In human locomotion, sensorimotor synchronization of gait consists of the coordination of stepping with rhythmic auditory cues (auditory cueing, AC). AC changes the long-range correlations among consecutive strides (fractal dynamics) into anti-correlations. Visual cueing (VC) is the alignment of step lengths with marks on the floor. The effects of VC on the fluctuation structure of walking have not been investigated. Therefore, the objective was to compare the effects of AC and VC on the fluctuation pattern of basic spatiotemporal gait parameters. Thirty-six healthy individuals walked 3 × 500 strides on an instrumented treadmill with augmented reality capabilities. The conditions were no cueing (NC), AC, and VC. AC included an isochronous metronome. For VC, projected stepping stones were synchronized with the treadmill speed. Detrended fluctuation analysis assessed the correlation structure. The coefficient of variation (CV) was also assessed. The results showed that AC and VC similarly induced a strong anti-correlated pattern in the gait parameters. The CVs were similar between the NC and AC conditions but substantially higher in the VC condition. AC and VC probably mobilize similar motor control pathways and can be used alternatively in gait rehabilitation. However, the increased gait variability induced by VC should be considered. ER - TY - JOUR AU - Timmermans, Celine AU - Roerdink, Melvyn AU - van Ooijen, Marielle W. AU - Meskers, Carel G. AU - Janssen, Thomas W. AU - Beek, Peter J. T1 - Walking adaptability therapy after stroke: study protocol for a randomized controlled trial. 
JO - Trials Y1 - 2016 VL - 17 SP - 425 EP - 425 KW - Adaptation KW - Physiological; Clinical Protocols; Disability Evaluation; Exercise Test; Exercise Therapy KW - methods; Gait; Humans; Netherlands; Postural Balance; Recovery of Function; Research Design; Stroke KW - diagnosis KW - physiopathology KW - therapy; Stroke Rehabilitation KW - methods; Time Factors; Treatment Outcome; Walking; Exercise; Rehabilitation; Stroke; Therapy; Walking adaptability; Walking speed N1 - 1745-6215 Owner: NLM N2 - Walking in everyday life requires the ability to adapt walking to the environment. This adaptability is often impaired after stroke, and this might contribute to the increased fall risk after stroke. To improve safe community ambulation, walking adaptability training might be beneficial after stroke. This study is designed to compare the effects of two interventions for improving walking speed and walking adaptability: treadmill-based C-Mill therapy (therapy with augmented reality) and the overground FALLS program (a conventional therapy program). We hypothesize that C-Mill therapy will result in better outcomes than the FALLS program, owing to its expected greater amount of walking practice. This is a single-center parallel group randomized controlled trial with pre-intervention, post-intervention, retention, and follow-up tests. Forty persons after stroke (≥3 months) with deficits in walking or balance will be included. Participants will be randomly allocated to either C-Mill therapy or the overground FALLS program for 5 weeks. Both interventions will incorporate practice of walking adaptability and will be matched in terms of frequency, duration, and therapist attention. Walking speed, as determined by the 10 Meter Walking Test, will be the primary outcome measure. Secondary outcome measures will pertain to walking adaptability (10 Meter Walking Test with context or cognitive dual-task and Interactive Walkway assessments). Furthermore, commonly used clinical measures to determine walking ability (Timed Up-and-Go test), walking independence (Functional Ambulation Category), balance (Berg Balance Scale), and balance confidence (Activities-specific Balance Confidence scale) will be used, as well as a complementary set of walking-related assessments. The amount of walking practice (the number of steps taken per session) will be registered using the treadmill's inbuilt step counter (C-Mill therapy) and video recordings (FALLS program). This process measure will be compared between the two interventions. This study will assess the effects of treadmill-based C-Mill therapy compared with the overground FALLS program and thereby the relative importance of the amount of walking practice as a key aspect of effective intervention programs directed at improving walking speed and walking adaptability after stroke. Netherlands Trial Register NTR4030. Registered on 11 June 2013, amendment filed on 17 June 2016. ER - TY - JOUR AU - Trojan, Jörg AU - Diers, Martin AU - Fuchs, Xaver AU - Bach, Felix AU - Bekrater-Bodmann, Robin AU - Foell, Jens AU - Kamping, Sandra AU - Rance, Mariela AU - Maaß, Heiko AU - Flor, Herta T1 - An augmented reality home-training system based on the mirror training and imagery approach.
JO - Behavior research methods Y1 - 2014 VL - 46 SP - 634 EP - 640 KW - Adult; Complex Regional Pain Syndromes KW - rehabilitation; Equipment Design; Female; Fingers; Hand KW - physiology; Hand Strength; Humans; Imagery (Psychotherapy); Male; Middle Aged; Movement; Paresis KW - rehabilitation; Phantom Limb KW - rehabilitation; Reproducibility of Results; Stroke; Video Games; Young Adult N1 - 1554-3528 Owner: NLM N2 - Mirror training and movement imagery have been demonstrated to be effective in treating several clinical conditions, such as phantom limb pain, stroke-induced hemiparesis, and complex regional pain syndrome. This article presents an augmented reality home-training system based on the mirror and imagery treatment approaches for hand training. A head-mounted display equipped with cameras captures one hand held in front of the body, mirrors this hand, and displays it in real time in a set of four different training tasks: (1) flexing fingers in a predefined sequence, (2) moving the hand into a posture fitting into a silhouette template, (3) driving a "Snake" video game with the index finger, and (4) grasping and moving a virtual ball. The system records task performance and transfers these data to a central server via the Internet, allowing monitoring of training progress. We evaluated the system by having 7 healthy participants train with it over the course of ten sessions of 15-min duration. No technical problems emerged during this time. Performance indicators showed that the system achieves a good balance between relatively easy and more challenging tasks and that participants improved significantly over the training sessions. This suggests that the system is well suited to maintain motivation in patients, especially when it is used for a prolonged period of time. ER - TY - JOUR AU - Tsutsumi, Norifumi AU - Tomikawa, Morimasa AU - Uemura, Munenori AU - Akahoshi, Tomohiko AU - Nagao, Yoshihiro AU - Konishi, Kozo AU - Ieiri, Satoshi AU - Hong, Jaesung AU - Maehara, Yoshihiko AU - Hashizume, Makoto T1 - Image-guided laparoscopic surgery in an open MRI operating theater. JO - Surgical endoscopy Y1 - 2013 VL - 27 SP - 2178 EP - 2184 KW - Adenomyosis KW - surgery; Aged; Cholecystectomy KW - Laparoscopic KW - methods; Cholecystolithiasis KW - surgery; Feasibility Studies; Female; Hernia KW - Ventral KW - surgery; Herniorrhaphy KW - methods; Humans; Laparoscopy KW - methods; Magnetic Resonance Imaging KW - Interventional KW - methods; Male; Middle Aged; Operating Rooms; Operative Time; Pneumoperitoneum KW - Artificial; Surgery KW - Computer-Assisted KW - methods; Treatment Outcome N1 - 1432-2218 Owner: NLM N2 - The recent development of open magnetic resonance imaging (MRI) has provided an opportunity for the next stage of image-guided surgical and interventional procedures. The purpose of this study was to evaluate the feasibility of laparoscopic surgery under pneumoperitoneum in an open MRI operating theater. Five patients underwent laparoscopic surgery with a real-time augmented reality navigation system that we previously developed in a horizontal-type 0.4-T open MRI operating theater. All procedures were performed in an open MRI operating theater. During the operations, the laparoscopic monitor clearly showed the augmented reality models of the intraperitoneal structures, such as the common bile ducts and the urinary bladder, as well as the proper positions of the prosthesis. The navigation frame rate was 8 frames per min.
The mean fiducial registration error was 6.88 ± 6.18 mm in navigated cases. Using the real-time augmented reality navigation system we developed for open MRI, we were able to use magnetic resonance-incompatible surgical instruments outside the 5-gauss restriction area, as in conventional laparoscopic surgery. Laparoscopic surgery with our real-time augmented reality navigation system in the open MRI operating theater is a feasible option. ER - TY - JOUR AU - U-Thainual, Paweena AU - Fritz, Jan AU - Moonjaita, Choladawan AU - Ungi, Tamas AU - Flammang, Aaron AU - Carrino, John A. AU - Fichtinger, Gabor AU - Iordachita, Iulian T1 - MR image overlay guidance: system evaluation for preclinical use. JO - International journal of computer assisted radiology and surgery Y1 - 2013/may VL - 8 SP - 365 EP - 378 KW - Humans; Image Enhancement KW - instrumentation; Injections KW - Spinal KW - instrumentation; Magnetic Resonance Imaging KW - Interventional KW - instrumentation; Models KW - Biological; Orthopedic Procedures KW - instrumentation; Phantoms KW - Imaging; Reproducibility of Results; Signal-To-Noise Ratio N1 - 1861-6429 Owner: NLM N2 - A clinical augmented reality guidance system was developed for MRI-guided musculoskeletal interventions: the Magnetic Resonance Image Overlay System (MR-IOS). The purpose of this study was to assess MRI compatibility, system accuracy, technical efficacy, and operator performance of the MR-IOS. The impact of the MR-IOS on the MR environment was assessed by measuring image quality with signal-to-noise ratio (SNR) and signal intensity uniformity with the system in various on/off states. The system accuracy was assessed with an in-room preclinical experiment by performing 62 needle insertions on a spine phantom by an expert operator measuring entry, depth, angle, and target errors. Technical efficacy and operator performance were tested in the laboratory by running an experiment with 40 novice operators (20 using freehand technique versus 20 MR-IOS-guided) with each operator inserting 10 needles into a geometric phantom. Technical efficacy was measured by comparing the success rates of needle insertions between the two operator groups. Operator performance was assessed by comparing total procedure times, total needle path distance, presumed tissue damage, and speed of individual insertions between the two operator groups. The MR-IOS maximally altered SNR by 2% with no perceptible change in image quality or uniformity. Accuracy assessment showed mean entry error of 1.6 ± 0.6 mm, depth error of 0.7 ± 0.5 mm, angle error of 1.5 ± 1.1°, and target error of 1.9 ± 0.8 mm. Technical efficacy showed a statistically significant difference (p = 0.031) between success rates (freehand 35.0% vs. MR-IOS 80.95%). Operator performance showed mean total procedure time of 40.3 ± 4.4 s for freehand and 37.0 ± 3.7 s for MR-IOS (p = 0.584), needle path distances of 152.6 ± 15.0 mm for freehand and 116.9 ± 8.7 mm for MR-IOS (p = 0.074), presumed tissue damage of 7417.2 ± 955.6 mm² for freehand and 6062.2 ± 678.5 mm² for MR-IOS (p = 0.347), and speed of insertion of 5.9 ± 0.4 mm/s for freehand and 4.3 ± 0.3 mm/s for MR-IOS (p = 0.003). The MR-IOS is compatible within a clinical MR imaging environment, accurate for needle placement, technically efficacious, and improves operator performance over the unassisted insertion technique. The MR-IOS was found to be suitable for further testing in a clinical setting. ER - TY - JOUR AU - Valdés Olmos, R. A. AU - Vidal-Sicart, S. AU - Giammarile, F.
AU - Zaknun, J. J. AU - Van Leeuwen, F. W. AU - Mariani, G. T1 - The GOSTT concept and hybrid mixed/virtual/augmented reality environment radioguided surgery. JO - The quarterly journal of nuclear medicine and molecular imaging : official publication of the Italian Association of Nuclear Medicine (AIMN) [and] the International Association of Radiopharmacology (IAR), [and] Section of the Society of... Y1 - 2014 VL - 58 SP - 207 EP - 215 KW - Evidence-Based Medicine; Female; Humans; Image-Guided Biopsy KW - methods; Lymph Nodes KW - pathology KW - surgery; Lymphatic Metastasis; Male; Multimodal Imaging KW - methods; Neoplasms KW - diagnosis KW - surgery; Reproducibility of Results; Sensitivity and Specificity; Sentinel Lymph Node Biopsy KW - methods; Surgery KW - Computer-Assisted KW - methods; Tomography KW - Emission-Computed KW - Single-Photon KW - methods; User-Computer Interface N1 - 1824-4785 Owner: NLM N2 - The popularity gained by the sentinel lymph node (SLN) procedure over the last two decades has increased the interest of the surgical disciplines in other applications of radioguided surgery. An example is the gamma-probe-guided localization of occult or difficult-to-locate neoplastic lesions. Such guidance can be achieved by intralesional delivery (under ultrasound, stereotactic, or CT guidance) of a radiolabelled agent that remains accumulated at the injection site. Another possibility rests on the systemic administration of a tumour-seeking radiopharmaceutical with favourable tumour accumulation and retention. On the other hand, new intraoperative imaging devices for radioguided surgery in complex anatomical areas became available. A few years ago, all of this led to the delineation of the Guided intraOperative Scintigraphic Tumour Targeting (GOSTT) concept, which includes the whole spectrum of basic and advanced nuclear medicine procedures required to provide a roadmap that optimises surgery. The introduction of allied signatures using, e.g., hybrid tracers for simultaneous detection of radioactive and fluorescent signals has amplified the GOSTT concept. It is now possible to combine perioperative nuclear medicine imaging with the superior resolution of additional optical guidance in the operating room. This hybrid approach is currently in progress and will probably become an important model to follow in the coming years. A cornerstone of the GOSTT concept is constituted by diagnostic imaging technologies such as SPECT/CT. SPECT/CT was introduced halfway through the past decade and was immediately incorporated into the SLN procedure. Important reasons contributing to the success of SPECT/CT were its combination with lymphoscintigraphy and its ability to display SLNs in an anatomical environment. This latter aspect has been significantly improved in the new generation of SPECT/CT cameras and provides the basis for the novel mixed-reality protocols of image-guided surgery. In these protocols the generated virtual SPECT/CT elements are visually superimposed on the body of the patient in the operating room to directly facilitate, by means of visualization on screen or using head-mounted devices, the localization of radioactive and/or fluorescent targets through minimally invasive approaches in areas of complex anatomy. All these technological advances will play an increasing role in the future extension and clinical impact of the GOSTT concept. ER - TY - CONF AU - Vannelli, Claire AU - Moore, John AU - McLeod, Jonathan AU - Ceh, Dennis AU - Peters, Terry AU - Webster, Robert J. AU - Yaniv, Ziv R.
T1 - Dynamic heart phantom with functional mitral and aortic valves PB - SPIE Y1 - 2015/march ER - TY - JOUR AU - Vera, Angelina M. AU - Russo, Michael AU - Mohsin, Adnan AU - Tsuda, Shawn T1 - Augmented reality telementoring (ART) platform: a randomized controlled trial to assess the efficacy of a new surgical education technology. JO - Surgical endoscopy Y1 - 2014 VL - 28 SP - 3467 EP - 3472 KW - Education KW - Medical KW - methods; Equipment Design; General Surgery KW - education; Humans; Laparoscopy KW - education KW - methods; Learning Curve; Mentors; Reproducibility of Results; Students KW - Medical; Suture Techniques KW - instrumentation; Telemetry KW - instrumentation N1 - 1432-2218 Owner: NLM N2 - Laparoscopic skills training has evolved over recent years. However, conveying a mentor's directions using conventional methods, without realistic on-screen visual cues, can be difficult and confusing. To facilitate laparoscopic skill transference, an augmented reality telementoring (ART) platform was designed to overlay the instruments of a mentor onto the trainee's laparoscopic monitor. The aim of this study was to compare the effectiveness of this new teaching modality to traditional methods in novices performing an intracorporeal suturing task. Nineteen pre-medical and medical students were randomized into traditional mentoring (n = 9) and ART (n = 10) groups for a laparoscopic suturing and knot-tying task. Subjects received either traditional mentoring or ART for 1 h on the validated fundamentals of laparoscopic surgery intracorporeal suturing task. Tasks for suturing were recorded and scored for time and errors. Results were analyzed using means, standard deviation, power regression analysis, correlation coefficient, analysis of variance, and Student's t test. Using Wright's cumulative average model (Y = aX^b), the learning curve slope was significantly steeper, demonstrating faster skill acquisition, for the ART group (b = -0.567, r² = 0.92) than the control group (b = -0.453, r² = 0.74). At the end of 10 repetitions or 1 h of practice, the ART group was faster than the traditional group (mean 167.4 vs. 242.4 s, p = 0.014). The ART group also had fewer failed attempts (8) than the traditional group (13). The ART platform may be a more effective training technique in teaching laparoscopic skills to novices compared to traditional methods. ART conferred a shorter learning curve, which was more pronounced in the first 4 trials. ART reduced the number of failed attempts and resulted in faster suture times by the end of the training session. ART may be a more effective training tool in laparoscopic surgical training for complex tasks than traditional methods. ER - TY - JOUR AU - Vidrios-Serrano, Carlos AU - Bonilla, Isela AU - Vigueras-Gomez, Flavio AU - Mendoza, Marco T1 - Development of a haptic interface for motor rehabilitation therapy using augmented reality. JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Y1 - 2015 VL - 2015 SP - 1156 EP - 1159 KW - Exercise; Humans; Movement; Occupational Therapy; Upper Extremity; User-Computer Interface N1 - 1557-170X Owner: NLM N2 - In this paper, a robot-assisted therapy system is presented, mainly focused on improving fine movements in patients with upper-limb motor deficits.
This system combines the use of a haptic device with an augmented reality environment, where a set of occupational therapy exercises is implemented. The main goal of the system is to provide extra motivation to patients, who are stimulated visually and tactilely in a scene that mixes elements of real and virtual worlds. Additionally, using the norm of the tracking error, it is possible to quantitatively measure the performance of the patient during a therapy session; likewise, it is possible to obtain information such as run time and the path followed. ER - TY - JOUR AU - Vigh, Balázs AU - Müller, Steffen AU - Ristow, Oliver AU - Deppe, Herbert AU - Holdstock, Stuart AU - den Hollander, Jürgen AU - Navab, Nassir AU - Steiner, Timm AU - Hohlweg-Majert, Bettina T1 - The use of a head-mounted display in oral implantology: a feasibility study JO - International Journal of Computer Assisted Radiology and Surgery Y1 - 2013/june VL - 9 IS - 1 SP - 71 EP - 78 ER - TY - JOUR AU - Volonté, Francesco AU - Buchs, Nicolas C. AU - Pugin, François AU - Spaltenstein, Joël AU - Jung, Minoa AU - Ratib, Osman AU - Morel, Philippe T1 - Stereoscopic augmented reality for da Vinci™ robotic biliary surgery. JO - International journal of surgery case reports Y1 - 2013 VL - 4 SP - 365 EP - 367 N1 - 2210-2612 Owner: NLM N2 - New laparoscopic techniques put distance between the surgeon and his patient. 3D volume-rendered images directly displayed in the da Vinci surgeon's console fill this gap by allowing the surgeon to become fully immersed in the intervention. During the robotic operation the surgeon has greater control over the procedure because he can stay more focused, not being obliged to turn his sight away from the operative field. Moreover, thanks to the depth perception of the rendered images, he has a precise view of important anatomical structures. We describe our preliminary experience in the quest for computer-assisted robotic surgery. ER - TY - JOUR AU - Volonté, Francesco AU - Buchs, Nicolas C. AU - Pugin, François AU - Spaltenstein, Joël AU - Schiltz, Boris AU - Jung, Minoa AU - Hagen, Monika AU - Ratib, Osman AU - Morel, Philippe T1 - Augmented reality to the rescue of the minimally invasive surgeon. The usefulness of the interposition of stereoscopic images in the Da Vinci™ robotic console. JO - The international journal of medical robotics + computer assisted surgery : MRCAS Y1 - 2013 VL - 9 SP - e34 EP - e38 KW - Depth Perception; Humans; Image Processing KW - Computer-Assisted; Imaging KW - Three-Dimensional; Minimally Invasive Surgical Procedures KW - instrumentation KW - statistics & numerical data; Models KW - Anatomic; Robotics KW - statistics & numerical data; Surgery KW - Computer-Assisted KW - statistics & numerical data; Tomography KW - X-Ray Computed KW - statistics & numerical data; User-Computer Interface; augmented reality; mixed reality; osiriX; robotic surgery N1 - 1478-596X Owner: NLM N2 - Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation.
A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console, and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered to be an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology was considered very useful by the surgeon. This technique is a useful step toward computer-aided surgery, a field that will progress very quickly over the next few years. ER - TY - JOUR AU - Volonté, Francesco AU - Pugin, Francois AU - Buchs, Nicolas Christian AU - Spaltenstein, Joël AU - Hagen, Monika AU - Ratib, Osman AU - Morel, Philippe T1 - Console-integrated stereoscopic OsiriX 3D volume-rendered images for da Vinci colorectal robotic surgery. JO - Surgical innovation Y1 - 2013 VL - 20 SP - 158 EP - 163 KW - Adenocarcinoma KW - surgery; Aged; Colonic Neoplasms KW - surgery; Digestive System Surgical Procedures KW - instrumentation KW - methods; Female; Humans; Image Processing KW - Computer-Assisted; Imaging KW - Three-Dimensional KW - methods; Laparoscopy KW - methods; Robotics KW - methods; Surgery KW - Computer-Assisted KW - methods N1 - 1553-3514 Owner: NLM N2 - The increased distance between surgeon and surgical field is a significant problem in laparoscopic surgery. Robotic surgery, although providing advantages for the operator, increases this gap by completely removing force feedback. Enhancement with visual tools can therefore be beneficial. The goal of this preliminary work was to create a custom plugin for OsiriX to display volume-rendered images in the da Vinci surgeon's console. The TilePro multi-input display made the generated stereoscopic pairs appear to have depth. Tumor position, vascular supply, spatial location, and relationship between organs appear directly within the surgeon's field of view. This study presents a case of totally robotic right colectomy for cancer using this new technology. Sight diversion was no longer necessary. Depth perception was subjectively perceived as beneficial. Total immersion in the operative field helped compensate for the lack of tactile feedback specific to robotic intervention. This innovative tool is a step toward augmented-reality robot-assisted surgery. ER - TY - JOUR AU - Wang, Huixiang AU - Wang, Fang AU - Leong, Anthony Peng Yew AU - Xu, Lu AU - Chen, Xiaojun AU - Wang, Qiugen T1 - Precision insertion of percutaneous sacroiliac screws using a novel augmented reality-based navigation system: a pilot study.
JO - International orthopaedics Y1 - 2016 VL - 40 SP - 1941 EP - 1947 KW - Bone Screws; Cadaver; Humans; Imaging KW - Three-Dimensional; Pilot Projects; Sacroiliac Joint KW - surgery; Surgery KW - Computer-Assisted; Tomography KW - X-Ray Computed; Augmented reality; Computer assisted surgery; Head mounted display; Sacroiliac screw insertion; Three-dimensional navigation N1 - 1432-5195 Owner: NLM N2 - Augmented reality (AR) enables superimposition of virtual images onto the real world. The aim of this study is to present a novel AR-based navigation system for sacroiliac screw insertion and to evaluate its feasibility and accuracy in cadaveric experiments. Six cadavers with intact pelvises were employed in our study. They were CT scanned and the pelvis and vessels were segmented into 3D models. The ideal trajectory of the sacroiliac screw was planned and represented visually as a cylinder. For the intervention, the head-mounted display created a real-time AR environment by superimposing the virtual 3D models onto the surgeon's field of view. The screws were drilled into the pelvis as guided by the trajectory represented by the cylinder. Following the intervention, a repeat CT scan was performed to evaluate the accuracy of the system, by assessing the screw positions and the deviations between the planned trajectories and inserted screws. Post-operative CT images showed that all 12 screws were correctly placed with no perforation. The mean deviation between the planned trajectories and the inserted screws was 2.7 ± 1.2 mm at the bony entry point, 3.7 ± 1.1 mm at the screw tip, and the mean angular deviation between the two trajectories was 2.9° ± 1.1°. The mean deviation at the nerve root tunnels region on the sagittal plane was 3.6 ± 1.0 mm. This study suggests an intuitive approach for guiding screw placement by way of AR-based navigation. This approach was feasible and accurate. It may serve as a valuable tool for assisting percutaneous sacroiliac screw insertion in live surgery. ER - TY - JOUR AU - Wang, Junchen AU - Suenaga, Hideyuki AU - Hoshi, Kazuto AU - Yang, Liangjing AU - Kobayashi, Etsuko AU - Sakuma, Ichiro AU - Liao, Hongen T1 - Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery. JO - IEEE transactions on bio-medical engineering Y1 - 2014 VL - 61 SP - 1295 EP - 1304 KW - Dentistry KW - Operative KW - methods; Head KW - anatomy & histology; Humans; Imaging KW - Three-Dimensional KW - instrumentation KW - methods; Models KW - Biological; Phantoms KW - Imaging; Photography KW - Dental; User-Computer Interface N1 - 1558-2531 Owner: NLM N2 - Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving over the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial or reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit.
The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid on the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm. ER - TY - JOUR AU - Wang, Junchen AU - Suenaga, Hideyuki AU - Liao, Hongen AU - Hoshi, Kazuto AU - Yang, Liangjing AU - Kobayashi, Etsuko AU - Sakuma, Ichiro T1 - Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation. JO - Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society Y1 - 2015 VL - 40 SP - 147 EP - 159 KW - Algorithms; Calibration KW - standards; Computer Systems; Humans; Image Enhancement KW - methods KW - standards; Image Interpretation KW - Computer-Assisted KW - standards; Imaging KW - Three-Dimensional KW - standards; Oral Surgical Procedures KW - standards; Pattern Recognition KW - Automated KW - standards; Reproducibility of Results; Sensitivity and Specificity; Surgery KW - standards; User-Computer Interface; 3D image calibration; Augmented reality; Integral imaging; Stereo tracking; Surgical navigation N1 - 1879-0771 Owner: NLM N2 - Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and can be transformed to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate the 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability. ER - TY - JOUR AU - Wang, Junchen AU - Suenaga, Hideyuki AU - Yang, Liangjing AU - Kobayashi, Etsuko AU - Sakuma, Ichiro T1 - Video see-through augmented reality for oral and maxillofacial surgery.
JO - The international journal of medical robotics + computer assisted surgery : MRCAS Y1 - 2017 VL - 13 KW - Humans; Jaw KW - diagnostic imaging; Oral Surgical Procedures KW - methods; Orthognathic Surgical Procedures; Phantoms KW - Imaging; Radiographic Image Enhancement KW - methods; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique; Surgery KW - Computer-Assisted KW - methods; Tomography KW - X-Ray Computed KW - methods; User-Computer Interface; Video Recording KW - methods; 3D-2D image registration; augmented reality; object tracking; oral and maxillofacial surgery N1 - 1478-596X Owner: NLM N2 - Oral and maxillofacial surgery has not benefited from image guidance techniques, owing to limitations in image registration. A real-time markerless image registration method is proposed by integrating a shape matching method into a 2D tracking framework. The image registration is performed by matching the patient's teeth model with intraoperative video to obtain its pose. The resulting pose is used to overlay relevant models from the same CT space on the camera video for augmented reality. The proposed system was evaluated on mandible/maxilla phantoms, a volunteer, and clinical data. Experimental results show that the target overlay error is about 1 mm, and the frame rate of registration update yields 3-5 frames per second with a 4K camera. The significance of this work lies in its simplicity in the clinical setting and the seamless integration into the current medical procedure with satisfactory response time and overlay accuracy. ER - TY - JOUR AU - Wang, Xiang AU - Habert, Severine AU - Zu Berge, Christian Schulte AU - Fallavollita, Pascal AU - Navab, Nassir T1 - Inverse visualization concept for RGB-D augmented C-arms. JO - Computers in biology and medicine Y1 - 2016 VL - 77 SP - 135 EP - 147 KW - Color; Equipment Design; Hand KW - diagnostic imaging; Humans; Imaging KW - Three-Dimensional KW - instrumentation KW - methods; Phantoms KW - Imaging; Surgery KW - Computer-Assisted KW - methods; Video Recording; Augmented reality; C-arm fluoroscopy; RGB-D cameras; Video; Visualization; X-ray N1 - 1879-0534 Owner: NLM N2 - X-ray is still the essential imaging for many minimally-invasive interventions. Overlaying X-ray images with an optical view of the surgery scene has been demonstrated to be an efficient way to reduce radiation exposure and surgery time. However, clinicians are recommended to place the X-ray source under the patient table while the optical view of the real scene must be captured from the top in order to see the patient, surgical tools, and the surgical site. With the help of an RGB-D (red-green-blue-depth) camera, which can measure depth in addition to color, the 3D model of the real scene is registered to the X-ray image. However, fusing two opposing viewpoints and visualizing them in the context of medical applications has never been attempted. In this paper, we present first experiences with a novel inverse visualization technique for RGB-D augmented C-arms. A user study consisting of 16 participants demonstrated that our method shows a meaningful visualization with potential for providing clinicians with multi-modal fused data in real time during surgery. ER - TY - JOUR AU - Watanabe, Eiju AU - Satoh, Makoto AU - Konno, Takehiko AU - Hirai, Masahiro AU - Yamaguchi, Takashi T1 - The Trans-Visible Navigator: A See-Through Neuronavigation System Using Augmented Reality.
JO - World neurosurgery Y1 - 2016 VL - 87 SP - 399 EP - 405 KW - Aged; Brain Neoplasms KW - pathology KW - surgery; Cerebellar Neoplasms KW - surgery; Combined Modality Therapy; Computer Graphics; Female; Hemangioblastoma KW - surgery; Humans; Image Processing KW - Computer-Assisted; Imaging KW - Three-Dimensional; Magnetic Resonance Imaging; Meningioma KW - surgery; Middle Aged; Neuronavigation KW - instrumentation KW - methods; Neurosurgical Procedures KW - methods; Phantoms KW - Imaging; Tomography KW - X-Ray Computed; User-Computer Interface; Augmented reality; Motion capture; Navigator; Tablet PC N1 - 1878-8769 Owner: NLM N2 - The neuronavigator has become indispensable for brain surgery and works in the manner of point-to-point navigation. Because the positional information is indicated on a personal computer (PC) monitor, surgeons are required to rotate the orientation of the magnetic resonance imaging/computed tomography scans to match the surgical field. In addition, they must frequently alternate their gaze between the surgical field and the PC monitor. To overcome these difficulties, we developed an augmented reality-based navigation system with whole-operation-room tracking. A tablet PC is used for visualization. The patient's head is captured by the back-face camera of the tablet. Three-dimensional images of intracranial structures are extracted from magnetic resonance imaging/computed tomography and are superimposed on the video image of the head. When viewed from various directions around the head, intracranial structures are displayed with corresponding angles as viewed from the camera direction, thus giving the surgeon the sensation of seeing through the head. Whole-operation-room tracking is realized using a VICON tracking system with 6 cameras. A phantom study showed a spatial resolution of about 1 mm. The present system was evaluated in 6 patients who underwent tumor resection surgery, and we showed that the system is useful for planning skin incisions as well as craniotomy and the localization of superficial tumors. The main advantage of the present system is that it achieves volumetric navigation in contrast to conventional point-to-point navigation. It overlays augmented reality images directly onto real surgical images, thus helping the surgeon to integrate these 2 dimensions intuitively. ER - TY - CONF AU - Watson, Jeffrey R. AU - Martirosyan, Nikolay AU - Skoch, Jesse AU - Lemole, G. Michael AU - Anton, Rein AU - Romanowski, Marek AU - Pogue, Brian W. AU - Gioux, Sylvain T1 - Augmented microscopy with near-infrared fluorescence detection PB - SPIE Y1 - 2015/march ER - TY - JOUR AU - Wen, Rong AU - Chui, Chee-Kong AU - Ong, Sim-Heng AU - Lim, Kah-Bin AU - Chang, Stephen Kin-Yong T1 - Projection-based visual guidance for robot-aided RF needle insertion. JO - International journal of computer assisted radiology and surgery Y1 - 2013 VL - 8 SP - 1015 EP - 1025 KW - Catheter Ablation KW - instrumentation KW - methods; Humans; Models KW - Anatomic; Needles; Robotics KW - instrumentation; Software; Surgery KW - Computer-Assisted KW - methods N1 - 1861-6429 Owner: NLM N2 - The use of projector-based augmented reality (AR) in surgery may enable surgeons to directly view anatomical models and surgical data from the patient's surface (skin). It has the advantages of a consistent viewing focus on the patient, an extended field of view, and augmented interaction.
This paper presents an AR guidance mechanism with a projector-camera system to provide the surgeon with direct visual feedback for supervision of robotic needle insertion in radiofrequency (RF) ablation treatment. The registration of target organ models to specific positions on the patient body is performed using a surface-matching algorithm and point-based registration. An algorithm based on the extended Kalman filter and spatial transformation is used to intraoperatively compute the virtual needle's depth in the patient's body for AR display. Experiments with this AR system on a mannequin were conducted to evaluate AR visualization and the accuracy of virtual RF needle insertion. The average accuracy of 1.86 mm for virtual needle insertion met the clinical requirement of 2 mm or better. The feasibility of augmented interaction with a surgical robot using the proposed open AR interface with active visual feedback was demonstrated. The experimental results demonstrate that this guidance system is effective in assisting a surgeon to perform a robot-assisted radiofrequency ablation procedure. The novelty of the work lies in establishing a navigational procedure for percutaneous surgical augmented intervention integrating projection-based AR guidance and a robotic implementation for surgical needle insertion. ER - TY - JOUR AU - Wen, Rong AU - Tay, Wei-Liang AU - Nguyen, Binh P. AU - Chng, Chin-Boon AU - Chui, Chee-Kong T1 - Hand gesture guided robot-assisted surgery based on a direct augmented reality interface. JO - Computer methods and programs in biomedicine Y1 - 2014 VL - 116 SP - 68 EP - 80 KW - Catheter Ablation KW - instrumentation; Gestures; Humans; Liver Neoplasms KW - surgery; Manikins; Needles; Phantoms KW - Imaging; Robotic Surgical Procedures KW - instrumentation; Robotics KW - instrumentation; Surgery KW - Computer-Assisted KW - instrumentation; User-Computer Interface; Augmented interaction; Augmented reality; Human-robot cooperation; Image-guided surgery; Projector-camera system; Visual guidance N1 - 1872-7565 Owner: NLM N2 - Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of three aspects: AR visual guidance information, the surgeon's experience, and the accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human-robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy. ER - TY - JOUR AU - Wilson, Kenneth L. AU - Doswell, Jayfus AU - Fashola, Olatokunbo S.
AU - Debeatham, Wayne AU - Darko, Nii AU - Walker, Travelyan M. AU - Danner, Omar K. AU - Matthews, Leslie R. AU - Weaver, William L. T1 - Using augmented reality as a clinical support tool to assist combat medics in the treatment of tension pneumothoraces. JO - Military medicine Y1 - 2013 VL - 178 SP - 981 EP - 985 KW - Cadaver; Decompression KW - Surgical KW - education KW - instrumentation; Female; First Aid KW - instrumentation; Health Personnel KW - education; Humans; Male; Military Personnel KW - education; Pneumothorax KW - surgery; Students KW - Medical; Surgery KW - Computer-Assisted; United States N1 - 1930-613X Owner: NLM N2 - This study aimed to extrapolate potential roles of augmented reality goggles as a clinical support tool assisting in the reduction of preventable causes of death on the battlefield. Our pilot study was designed to improve medic performance in accurately placing a large-bore catheter to release tension pneumothorax (prehospital setting) while using augmented reality goggles. Thirty-four preclinical medical students recruited from Morehouse School of Medicine performed needle decompressions on human cadaver models after hearing a brief training lecture on tension pneumothorax management. Clinical vignettes identifying cadavers as having life-threatening tension pneumothoraces as a consequence of improvised explosive device attacks were used. The study group (n = 13) performed needle decompression using augmented reality goggles, whereas the control group (n = 21) relied solely on memory from the lecture. The two groups were compared according to their ability to accurately complete the steps required to decompress a tension pneumothorax. The medical students using augmented reality goggle support were able to treat the tension pneumothorax on the human cadaver models more accurately than the students relying on their memory (p < 0.008). Although the augmented reality group required more time to complete the needle decompression intervention (p = 0.0684), this difference did not reach statistical significance. ER - TY - JOUR AU - Wrzesien, Maja AU - Alcañiz, Mariano AU - Botella, Cristina AU - Burkhardt, Jean-Marie AU - Bretón-López, Juana AU - Ortega, Mario AU - Brotons, Daniel Beneito T1 - The therapeutic lamp: treating small-animal phobias. JO - IEEE computer graphics and applications Y1 - 2013 VL - 33 SP - 80 EP - 86 KW - Adult; Animals; Cockroaches; Computer Graphics; Female; Humans; Imaging KW - Three-Dimensional; Male; Phobic Disorders KW - therapy; Self Efficacy; Spiders; User-Computer Interface; Virtual Reality Exposure Therapy KW - instrumentation KW - methods; Young Adult N1 - 1558-1756 Owner: NLM N2 - We all have an irrational fear or two. Some of us get scared by an unexpected visit from a spider in our house; others get nervous when they look down from a high building. Fear is an evolutionary and adaptive function that can promote self-preservation and help us deal with the feared object or situation. However, when this state becomes excessive, it might develop into psychological disorders such as phobias, producing high anxiety and affecting everyday life. The Therapeutic Lamp is an interactive projection-based augmented-reality system for treating small-animal phobias. It aims to increase patient-therapist communication, promote more natural interaction, and improve the patient's engagement in the therapy.
ER - TY - JOUR AU - Wu, Jing-Ren AU - Wang, Min-Liang AU - Liu, Kai-Che AU - Hu, Ming-Hsien AU - Lee, Pei-Yuan T1 - Real-time advanced spinal surgery via visible patient model and augmented reality system. JO - Computer methods and programs in biomedicine Y1 - 2014 VL - 113 SP - 869 EP - 881 KW - Animals; Computational Biology; Computer Systems; Humans; Imaging KW - Three-Dimensional KW - statistics & numerical data; Models KW - Anatomic; Models KW - Animal; Phantoms KW - Imaging; Spinal Diseases KW - diagnostic imaging KW - surgery; Surgery KW - Computer-Assisted KW - statistics & numerical data; Sus scrofa; Tomography KW - X-Ray Computed; Vertebroplasty KW - methods KW - statistics & numerical data; Visible Human Projects; Augmented reality; Camera-projector system; Digital spinal surgery; Visible patient N1 - 1872-7565 Owner: NLM N2 - This paper presents an advanced augmented reality system for spinal surgery assistance, and develops entry-point guidance prior to vertebroplasty spinal surgery. Based on image-based marker detection and tracking, the proposed camera-projector system superimposes pre-operative 3-D images onto patients. The patient's preoperative 3-D image model is registered by projecting it onto the patient such that the synthetic 3-D model merges with the real patient image, enabling the surgeon to see through the patient's anatomy. The proposed method is much simpler than heavy and computationally challenging navigation systems, and also reduces radiation exposure. The system was experimentally tested on a preoperative 3D model, a dummy patient model, and an animal cadaver model. The feasibility and accuracy of the proposed system were verified on three patients undergoing spinal surgery in the operating theater. The results of these clinical trials are extremely promising, with surgeons reporting favorably on the reduced time needed to find a suitable entry point and the reduced radiation dose to patients. ER - TY - JOUR AU - Wucherer, Patrick AU - Stefan, Philipp AU - Weidert, Simon AU - Fallavollita, Pascal AU - Navab, Nassir T1 - Task and crisis analysis during surgical training JO - International Journal of Computer Assisted Radiology and Surgery Y1 - 2014/january VL - 9 IS - 5 SP - 785 EP - 794 ER - TY - JOUR AU - Yamauchi, Yasushi T1 - Smart Dry Lab: An Augmented Reality (AR) Based Surgical Training Box. JO - Studies in health technology and informatics Y1 - 2014 VL - 196 SP - 476 EP - 478 KW - Humans; Models KW - Anatomic; Surgical Instruments; Surgical Procedures KW - Operative KW - education; Virtual Reality N1 - 1879-8365 Owner: NLM N2 - Smart Dry Lab (SDL) is a low-cost, AR-based surgical training box, using the ARToolKit API. It is an add-on for an existing surgical training box that enables quantitative evaluation of forceps maneuvers and simulation of soft-tissue deformation. In this paper we demonstrate the functionality of our prototype system. ER - TY - JOUR AU - Yang, L. AU - Wang, J. AU - Ando, T. AU - Kubota, A. AU - Yamashita, H. AU - Sakuma, I. AU - Chiba, T. AU - Kobayashi, E. T1 - Vision-based endoscope tracking for 3D ultrasound image-guided surgical navigation.
JO - Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society
Y1 - 2015
VL - 40
SP - 205
EP - 216
KW - Algorithms; Artificial Intelligence; Fetoscopy
KW - methods; Humans; Image Enhancement
KW - methods; Image Interpretation
KW - Computer-Assisted
KW - methods; Imaging
KW - Three-Dimensional
KW - methods; Pattern Recognition
KW - Automated
KW - methods; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique; Surgery
KW - methods; Ultrasonography
KW - Prenatal
KW - methods; User-Computer Interface; Endoscopy; Minimally invasive fetal surgery; Surgical navigation; Ultrasound imaging; Vision-based tracking
N1 - 1879-0771 Owner: NLM
N2 - This work introduces a self-contained framework for endoscopic camera tracking that combines 3D ultrasonography with endoscopy. The approach can be readily incorporated into surgical workflows without installing external tracking devices. By fusing the ultrasound-constructed scene geometry with endoscopic vision, this integrated approach addresses the issues of initialization, scale ambiguity, and interest-point inadequacy that conventional vision-based approaches may face when applied to fetoscopic procedures. Vision-based pose estimation was demonstrated by phantom and ex vivo monkey placenta imaging. The potential contribution of this method may extend beyond fetoscopic procedures to general augmented reality applications in minimally invasive procedures.
ER -
TY - JOUR
AU - Yoo, Ha-Na
AU - Chung, Eunjung
AU - Lee, Byoung-Hee
T1 - The Effects of Augmented Reality-based Otago Exercise on Balance, Gait, and Falls Efficacy of Elderly Women.
JO - Journal of physical therapy science
Y1 - 2013
VL - 25
SP - 797
EP - 801
KW - Augmented reality; Falls efficacy; Otago exercise
N1 - 0915-5287 Owner: NLM
N2 - [Purpose] The purpose of this study was to determine the effects of augmented reality-based Otago exercise on the balance, gait, and falls efficacy of elderly women. [Subjects] The subjects were 21 elderly women, randomly divided into two groups: an augmented reality-based Otago exercise group of 10 subjects and an Otago exercise group of 11 subjects. [Methods] All subjects were evaluated for balance (Berg Balance Scale, BBS), gait parameters (velocity, cadence, step length, and stride length), and falls efficacy. Within the 12-week period, the Otago exercise for muscle strengthening and balance training was conducted three times, for 60 minutes each session, and subjects in the experimental group performed the augmented reality-based Otago exercise. [Results] Following the intervention, the augmented reality-based Otago exercise group showed significant increases in BBS, velocity, cadence, step length (right side), stride length (right and left sides), and falls efficacy. [Conclusion] The results of this study suggest the feasibility and suitability of this augmented reality-based Otago exercise for elderly women.
ER -
TY - JOUR
AU - Yoon, Jang W.
AU - Chen, Robert E.
AU - ReFaey, Karim
AU - Diaz, Roberto J.
AU - Reimer, Ronald
AU - Komotar, Ricardo J.
AU - Quinones-Hinojosa, Alfredo
AU - Brown, Benjamin L.
AU - Wharen, Robert E.
T1 - Technical feasibility and safety of image-guided parieto-occipital ventricular catheter placement with the assistance of a wearable head-up display.
JO - The international journal of medical robotics + computer assisted surgery : MRCAS
Y1 - 2017
VL - 13
KW - Brain
KW - diagnostic imaging; Catheterization
KW - methods; Catheters; Cohort Studies; Computers; Equipment Design; Eyeglasses; Humans; Image Processing
KW - Computer-Assisted; Imaging
KW - Three-Dimensional; Internet; Movement; Neuronavigation
KW - instrumentation
KW - methods; Phantoms
KW - Imaging; Prospective Studies; Software; Tomography
KW - X-Ray Computed; User-Computer Interface; Video Recording; Wearable Electronic Devices; 3D imaging; Google Glass; augmented reality; brain; computer-assisted surgery; craniofacial; facial; image-guided surgery; navigation; neurology; stealth; stereotaxy; ventriculoperitoneal shunt; video-streaming; virtual reality; wearable technology
N1 - 1478-596X Owner: NLM
N2 - Wearable technology is growing in popularity as a result of its ability to interface with normal human movement and function. Neuronavigation images were captured with proprietary hardware and software and transferred wirelessly, via a password-encrypted network, to the head-up display. The operating surgeon wore a loupe-mounted wearable head-up display during image-guided parieto-occipital ventriculoperitoneal shunt placement in two patients. The shunt placement was completed successfully and without complications. The tip of the catheter ended well within the ventricles, away from the ventricular wall. The wearable device allowed continuous monitoring of neuronavigation images in the upper right corner of the surgeon's visual field, without the surgeon needing to turn his head to view the monitors. The adaptable nature of the proposed system permits the display of video data to the operating surgeon without diverting attention from the operative task. This technology has the potential to enhance image-guided procedures.
ER -
TY - JOUR
AU - Yudkowsky, Rachel
AU - Luciano, Cristian
AU - Banerjee, Pat
AU - Schwartz, Alan
AU - Alaraj, Ali
AU - Lemole, G. Michael
AU - Charbel, Fady
AU - Smith, Kelly
AU - Rizzi, Silvio
AU - Byrne, Richard
AU - Bendok, Bernard
AU - Frim, David
T1 - Practice on an augmented reality/haptic simulator and library of virtual brains improves residents' ability to perform a ventriculostomy.
JO - Simulation in healthcare : journal of the Society for Simulation in Healthcare
Y1 - 2013
VL - 8
SP - 25
EP - 31
KW - Brain
KW - anatomy & histology; Chicago; Computer Simulation; Humans; Medical Staff
KW - Hospital
KW - education; Neurosurgery
KW - education; Surveys and Questionnaires; Touch Perception; User-Computer Interface; Ventriculostomy
KW - education
N1 - 1559-713X Owner: NLM
N2 - Ventriculostomy is a neurosurgical procedure for providing therapeutic cerebrospinal fluid drainage. Complications may arise during repeated attempts at placing the catheter in the ventricle. We studied the impact of simulation-based practice with a library of virtual brains on neurosurgery residents' performance in simulated and live surgical ventriculostomies. Using computed tomographic scans of actual patients, we developed a library of 15 virtual brains for the ImmersiveTouch system, a head- and hand-tracked augmented reality and haptic simulator. The virtual brains represent a range of anatomies, including normal, shifted, and compressed ventricles. Neurosurgery residents participated in individual simulator practice on the library of brains, including visualizing the 3-dimensional location of the catheter within the brain immediately after each insertion.
Performance of participants on novel brains in the simulator and during actual surgery before and after the intervention was analyzed using generalized linear mixed models. Simulator cannulation success rates increased after the intervention, and live procedure outcomes showed improvement in the rate of successful cannulation on the first pass. However, the incidence of deeper, contralateral (simulator) and third-ventricle (live) placements increased after the intervention. Residents reported that the simulations were realistic and helpful in improving procedural skills such as aiming the probe, sensing the pressure change when entering the ventricle, and estimating how far the catheter should be advanced within the ventricle. Simulator practice with a library of virtual brains representing a range of anatomies and difficulty levels may improve performance, potentially decreasing complications due to inexpert technique.
ER -
TY - JOUR
AU - Zemirline, Ahmed
AU - Agnus, Vincent
AU - Soler, Luc
AU - Mathoulin, Christophe L.
AU - Obdeijn, Miryam
AU - Liverneaux, Philippe A.
T1 - Augmented reality-based navigation system for wrist arthroscopy: feasibility.
JO - Journal of wrist surgery
Y1 - 2013
VL - 2
SP - 294
EP - 298
KW - arthroscopy; augmented reality; computer-guided surgery; wrist
N1 - 2163-3916 Owner: NLM
N2 - In video surgery, and more specifically in arthroscopy, one of the major problems is positioning the camera and instruments within the anatomic environment. The concept of computer-guided video surgery has already been used in ear, nose, and throat (ENT) surgery, in gynecology, and even in hip arthroscopy. These systems, however, rely on optical or mechanical sensors, which turn out to be restrictive and cumbersome. The aim of our study was to develop and evaluate the accuracy of a navigation system based on electromagnetic sensors for video surgery. We used an electromagnetic localization device (Aurora, Northern Digital Inc., Ontario, Canada) to track the movements in space of both the camera and the instruments. We developed a dedicated application in the Python language, using the VTK library for the graphic display and the OpenCV library for camera calibration. A prototype was designed and evaluated for wrist arthroscopy. It allows the theoretical position of the instruments to be displayed on the arthroscopic view with useful accuracy. The augmented reality view provides valuable assistance when surgeons want to position the arthroscope or locate their instruments. It makes the maneuver more intuitive, increases comfort, saves time, and enhances concentration.
ER -
TY - JOUR
AU - Zeng, Bowei
AU - Meng, Fanle
AU - Ding, Hui
AU - Wang, Guangzhi
T1 - A surgical robot with augmented reality visualization for stereoelectroencephalography electrode implantation.
JO - International journal of computer assisted radiology and surgery
Y1 - 2017
VL - 12
SP - 1355
EP - 1368
KW - Electrodes
KW - Implanted; Electroencephalography; Equipment Design; Humans; Imaging
KW - Three-Dimensional
KW - instrumentation
KW - methods; Neurosurgery
KW - methods; Phantoms
KW - Imaging; Robotic Surgical Procedures
KW - methods; Stereotaxic Techniques; Surgery
KW - Computer-Assisted
KW - methods; User-Computer Interface; Video Recording; Augmented reality; Projector-camera system; Stereoelectroencephalography; Surgical robot; Viewpoint deviation
N1 - 1861-6429 Owner: NLM
N2 - With existing stereoelectroencephalography (SEEG) electrode implantation surgical robot systems, it is difficult to intuitively validate registration accuracy and to display to the surgeon, in the patient space, the electrode entry points (EPs) and the anatomical structure around the electrode trajectories. This paper proposes a prototype system that can realize video see-through augmented reality (VAR) and spatial augmented reality (SAR) for SEEG implantation. The system helps the surgeon quickly and intuitively confirm the registration accuracy, locate EPs, and visualize the internal anatomical structure in the image space and patient space. We designed and developed a projector-camera system (PCS) attached to the distal flange of a robot arm. First, system calibration is performed. Second, the PCS is used to obtain point clouds of the surface of the patient's head, which are utilized for patient-to-image registration. Finally, VAR is produced by merging the real-time video of the patient and the preoperative three-dimensional (3D) operational planning model. In addition, SAR is implemented by projecting the planned electrode trajectories and the local anatomical structure onto the patient's scalp. The registration error and the errors at the electrode EPs and target points (TPs) are evaluated on a phantom. The fiducial registration error is [Formula: see text] mm (max 1.22 mm), and the target registration error is [Formula: see text] mm (max 1.18 mm). The projection overlay error is [Formula: see text] mm, and the TP error after the pre-warped projection is [Formula: see text] mm. The TP error caused by a surgeon's viewpoint deviation is also evaluated. The presented system can help surgeons quickly verify registration accuracy during SEEG procedures and can provide accurate EP locations and internal structural information to the surgeon. With more intuitive surgical information, the surgeon may have more confidence and be able to perform surgeries with better outcomes.
ER -
TY - JOUR
AU - Zhang, Xinran
AU - Chen, Guowen
AU - Liao, Hongen
T1 - High-Quality See-Through Surgical Guidance System Using Enhanced 3-D Autostereoscopic Augmented Reality.
JO - IEEE transactions on bio-medical engineering
Y1 - 2017
VL - 64
SP - 1815
EP - 1825
KW - Equipment Design; Equipment Failure Analysis; Microsurgery
KW - instrumentation; Minimally Invasive Surgical Procedures
KW - instrumentation; Neuronavigation
KW - instrumentation; Reproducibility of Results; Sensitivity and Specificity; User-Computer Interface
N1 - 1558-2531 Owner: NLM
N2 - Precise minimally invasive surgery (MIS) has significant clinical advantages over traditional open surgery. Although pre- and intraoperative diagnostic images can provide necessary guidance for therapy, hand-eye discoordination occurs when guidance information is displayed away from the surgical area.
In this study, we introduce a real three-dimensional (3-D) see-through guidance system for precision surgery. To address the resolution and viewing-angle limitations as well as the accuracy degradation problems of autostereoscopic 3-D displays, we design a high-quality, high-accuracy 3-D integral videography (IV) medical image display method. Furthermore, a novel see-through microscopic device is proposed to assist surgeons with the superimposition of real 3-D guidance onto the surgical target, which is magnified by an optical visual magnifier module. Spatial resolutions of the 3-D IV image at different depths have been increased by 50%-70%, and viewing angles for different image sizes have been increased by 9%-19% compared with conventional IV display methods. The average accuracy of real 3-D guidance superimposed on the surgical target was 0.93 mm ± 0.41 mm. Preclinical studies demonstrated that our system could provide real 3-D perception of anatomic structures inside the patient's body. The system showed potential clinical feasibility to provide intuitive and accurate in situ see-through guidance for microsurgery without restriction on the observer's viewing position. Our system can effectively improve the precision and reliability of surgical guidance. It will have wider applicability in surgical planning, microscopy, and other fields.
ER -
TY - JOUR
AU - Zhang, Xinran
AU - Chen, Guowen
AU - Liao, Hongen
T1 - A high-accuracy surgical augmented reality system using enhanced integral videography image overlay.
JO - Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference
Y1 - 2015
VL - 2015
SP - 4210
EP - 4213
KW - Algorithms; Humans; Image Enhancement
KW - methods; Imaging
KW - Three-Dimensional
KW - methods; Surgery
KW - Computer-Assisted
KW - instrumentation
KW - methods; Video-Assisted Surgery
KW - methods
N1 - 1557-170X Owner: NLM
N2 - Image-guided surgery has been used in the clinic to improve surgical safety and accuracy. Augmented reality (AR) techniques, which can provide intuitive image guidance, have evolved considerably in recent years. As one promising approach to surgical AR systems, integral videography (IV) autostereoscopic image overlay has achieved accurate fusion of full-parallax guidance into the surgical scene. This paper describes an image-enhanced, high-accuracy IV overlay system. A flexible optical image enhancement system (IES) is designed to increase the resolution and quality of the IV image. Furthermore, we introduce a novel IV rendering algorithm that improves spatial accuracy by accounting for the distortion introduced by the micro lens array. Preliminary experiments validated that image accuracy and resolution are improved with the proposed methods. The resolution of the IV image could be improved to 1 mm for a micro lens array with a pitch of 2.32 mm and an IES magnification value of 0.5. The relative deviations of accuracy in the depth and lateral directions are -4.68 ± 0.83% and -9.01 ± 0.42%, respectively.
ER -
TY - JOUR
AU - Zhu, Ming
AU - Chai, Gang
AU - Lin, Li
AU - Xin, Yu
AU - Tan, Andy
AU - Bogari, Melia
AU - Zhang, Yan
AU - Li, Qingfeng
T1 - Effectiveness of a Novel Augmented Reality-Based Navigation System in Treatment of Orbital Hypertelorism.
JO - Annals of plastic surgery
Y1 - 2016
VL - 77
SP - 662
EP - 668
KW - Child; Humans; Hypertelorism
KW - diagnostic imaging
KW - surgery; Imaging
KW - Three-Dimensional; Orthopedic Procedures
KW - methods; Osteotomy
KW - methods; Preoperative Care; Surgery
KW - Computer-Assisted
KW - methods; Tomography
KW - X-Ray Computed; Treatment Outcome; User-Computer Interface
N1 - 1536-3708 Owner: NLM
N2 - Augmented reality (AR) technology can superimpose a computer-generated virtual image onto the real operating field to present an integrated image and enhance surgical safety. The purpose of our study was to develop a novel AR-based navigation system for craniofacial surgery. We focused on orbital hypertelorism correction because the surgery requires high precision and is considered difficult even for senior craniofacial surgeons. Twelve patients with orbital hypertelorism were selected. The preoperative computed tomography data were imported into a 3-dimensional platform for preoperative design. The position and orientation of the virtual information and the real world were aligned by an image registration process, and the AR toolkits were used to realize the integrated image. Computed tomography was also performed after the operation to compare the preoperative plan with the actual operative outcome. Our AR-based navigation system was successfully used in these patients, directly displaying 3-dimensional navigational information on the surgical field. All patients achieved a better appearance under the guidance of the navigation image. The differences in interdacryon distance and in the dacryon point on each side between the preoperative plan and the actual surgical outcome were not significant (P > 0.05). This study reports an effective visualized approach for guiding orbital hypertelorism correction. Our AR-based navigation system may lay a foundation for craniofacial surgical navigation, and AR technology can be considered a helpful tool for precise osteotomy in craniofacial surgery.
ER -
TY - JOUR
AU - Zhu, Ming
AU - Liu, Fei
AU - Chai, Gang
AU - Pan, Jun J.
AU - Jiang, Taoran
AU - Lin, Li
AU - Xin, Yu
AU - Zhang, Yan
AU - Li, Qingfeng
T1 - A novel augmented reality system for displaying inferior alveolar nerve bundles in maxillofacial surgery.
JO - Scientific reports
Y1 - 2017
VL - 7
SP - 42365
EP - 42365
KW - Adult; Anatomic Landmarks; Demography; Female; Humans; Hypertrophy; Imaging
KW - Three-Dimensional; Male; Mandible
KW - diagnostic imaging
KW - surgery; Mandibular Nerve
KW - surgery; Surgery
KW - Oral; Tomography
KW - X-Ray Computed; Treatment Outcome; Virtual Reality
N1 - 2045-2322 Owner: NLM
N2 - Augmented reality systems can combine virtual images with a real environment to enable accurate surgery with lower risk. This study aimed to develop a novel registration and tracking technique to establish a navigation system based on augmented reality for maxillofacial surgery. Specifically, a virtual image is reconstructed from CT data using 3D software, and the real environment is tracked by the augmented reality (AR) software. The novel registration strategy that we created uses an occlusal splint compounded with a fiducial marker (OSM) to establish a relationship between the virtual image and the real object. After the fiducial marker is recognized, the virtual image is superimposed onto the real environment, forming the "integrated image" on semi-transparent glass.
Via the registration process, the integral image, which combines the virtual image with the real scene, is successfully presented on the semi-transparent helmet. The position error of this navigation system is 0.96 ± 0.51 mm. This augmented reality system was applied in the clinic, and good surgical outcomes were obtained. The augmented reality system that we established for maxillofacial surgery has the advantages of easy manipulation and high accuracy, which can improve surgical outcomes; thus, it exhibits significant potential for clinical applications.
ER -
TY - JOUR
AU - Zinser, Max J.
AU - Mischkowski, Robert A.
AU - Dreiseidler, Timo
AU - Thamm, Oliver C.
AU - Rothamel, Daniel
AU - Zöller, Joachim E.
T1 - Computer-assisted orthognathic surgery: waferless maxillary positioning, versatility, and accuracy of an image-guided visualisation display.
JO - The British journal of oral & maxillofacial surgery
Y1 - 2013
VL - 51
SP - 827
EP - 833
KW - Adult; Anatomic Landmarks
KW - pathology; Cephalometry
KW - methods; Computer Graphics; Data Display; Facial Asymmetry
KW - surgery; Female; Frontal Bone
KW - pathology; Humans; Imaging
KW - Three-Dimensional
KW - methods; Jaw Relation Record
KW - methods; Male; Malocclusion
KW - Angle Class III
KW - surgery; Maxilla
KW - pathology
KW - surgery; Open Bite
KW - surgery; Operative Time; Orthognathic Surgical Procedures
KW - methods; Osteotomy
KW - methods; Patient Care Planning; Stereotaxic Techniques; Surgery
KW - Computer-Assisted
KW - methods; Treatment Outcome; User-Computer Interface; Video Recording; Visual Perception; Young Adult; Zygoma
KW - pathology; 3D-cephalometry; Interactive image-guided-visualization-display; Orthognathic surgery; Surgical navigation; Virtual orthognathic planning
N1 - 1532-1940 Owner: NLM
N2 - There may well be a shift towards 3-dimensional orthognathic surgery once virtual surgical planning can be applied clinically. We present a computer-assisted protocol that uses surgical navigation supplemented by an interactive image-guided visualisation display (IGVD) to transfer virtual maxillary planning precisely. The aim of this study was to analyse its accuracy and versatility in vivo. The protocol consists of maxillofacial imaging, diagnosis, virtual treatment planning, and intraoperative surgical transfer using an IGV display. The advantage of the interactive IGV display is that the virtually planned maxilla and its real position can be completely superimposed during the operation through a video graphics array (VGA) camera, thereby augmenting the surgeon's 3-dimensional perception. Sixteen adult class III patients were treated by bimaxillary osteotomy. Seven hard tissue variables were chosen to compare (ΔT1-T0) the virtual maxillary planning (T0) with the postoperative result (T1) using 3-dimensional cephalometry. Clinically acceptable precision for the surgical planning transfer of the maxilla (<0.35 mm) was seen in the anteroposterior and mediolateral dimensions, and in relation to the skull base (<0.35°), with marginal precision in the orthogonal dimension (<0.64 mm). An interactive IGV display complemented surgical navigation, augmented virtual and real-time reality, and provided a precise technique for waferless stereotactic maxillary positioning, which may offer an alternative to arbitrary splints and 2-dimensional orthognathic planning.
ER -