Abstract
Background
Mixed reality (MR), the computer-supported augmentation of a real environment with virtual elements, is becoming ever more relevant in the medical domain, especially in urology, with applications ranging from education and training to surgery. We aimed to review existing MR technologies and their applications in urology.
Methods
A non-systematic review of the current literature was performed using the PubMed-Medline database with the medical subject headings (MeSH) term "mixed reality", combined with one of the following terms: "virtual reality", "augmented reality", "urology" and "augmented virtuality". Relevant studies were included.
Results
MR applications such as MR-guided systems, immersive VR headsets, AR models, MR-simulated ureteroscopy and smart glasses have enormous potential in education, training and surgical interventions in urology. Medical students, urology residents and inexperienced urologists can gain experience thanks to MR technologies. MR applications are also used in patient education before interventions.
Conclusions
For surgical support, the achievable accuracy is often not sufficient. The main challenges are the non-rigid nature of the genitourinary organs, intraoperative data acquisition, online and multimodal registration and calibration of devices. However, the progress made in recent years is tremendous in all respects and the gap is constantly shrinking.
Keywords: Augmented reality, Mixed reality, Virtual reality, Augmented virtuality, Urology
Highlights
∙ MR, including AV and AR, is an intriguing technology with tremendous potential in the field of urology.
∙ The main challenges lie in intraoperative data acquisition, online and multimodal registration and calibration of devices and data, appropriate display hardware, as well as cooperative devices and tools in the operation theatres.
∙ Medical experts should feel encouraged to experience MR solutions and to communicate their specific needs and the effects they aim at.
1. Introduction
The transition between reality, augmented reality (AR), augmented virtuality (AV), and virtual reality (VR) is continuous. These terms were first described by Paul Milgram in 1994 who consequently coined the notion of mixed reality (MR), to subsume all possible applications in the reality-virtuality (RV) continuum [1].
In this study, we describe the current capabilities and future challenges of MR. The analysis aims at raising awareness concerning the potential and, more importantly, the demands of using MR in a clinical setting. A wealth of reports in the literature encompass the results of applying existing MR solutions in the field of urology. However, we intend to foster creativity in the specification of MR applications from a realistic perspective on the underlying technical challenges. We will, therefore, first detail the technological aspects of MR in general. We will then address existing MR technologies with emphasis on applications in urology. The examples in the respective sections are arranged according to increasing complexity and technical requirements.
2. Materials and methods
2.1. Evidence acquisition
A non-systematic review of the current literature was performed using the PubMed-Medline database with the medical subject headings (MeSH) term "mixed reality", combined with one of the following terms: "virtual reality", "augmented reality", "urology" and "augmented virtuality". The search was limited to articles and abstracts published within the last 5 years, originally published in English. Publications relevant to the subject and their cited references were retrieved and appraised independently by two authors (G.R. and A.P.). After full-text evaluation, data were independently extracted by the authors for further assessment of qualitative and quantitative evidence synthesis.
3. Results
3.1. MR technologies
The main requirements for any AR/VR system are a tracking system that estimates the device location within the application environment and a display system that uses this location to register realistic virtual content on the real environment in the case of AR, or to navigate a virtual world in the case of VR.
3.2. Tracking
Tracking technologies use diverse sensing modalities such as cameras, inertial sensors, and mechanical systems. Typically, the choice of a tracking system is a trade-off between localization accuracy and cost/complexity. For AR, visual tracking systems are widely used since it is possible to achieve highly accurate tracking using low-cost commercial cameras such as the ones found in modern mobile phones. Visual tracking systems are further sub-categorized into outside-in and inside-out tracking systems.
Outside-in tracking systems consist of several cameras that are statically placed in an environment and offer high-accuracy tracking coverage of this area using multi-view localization of visual features (e.g. the OptiTrack system uses reflective markers). Such systems can provide reliable tracking for a specific application environment of limited size but have an increased cost and low mobility. On the other hand, due to the cameras' static placement, outside-in systems do not suffer as much from image quality degradation and can track fast-moving targets.
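To make the multi-view localization concrete, the sketch below triangulates the 3D position of a single reflective marker seen by two statically mounted, pre-calibrated cameras, using OpenCV in Python. All numbers (projection matrices, pixel detections) are illustrative placeholders, not values from any cited system.

```python
# Minimal outside-in localization sketch: two static, pre-calibrated
# cameras triangulate one reflective marker detected in both images.
import numpy as np
import cv2

# Shared intrinsics (focal length 800 px, principal point 320/240);
# purely illustrative values.
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])

# Static extrinsics from prior calibration: camera 1 at the origin,
# camera 2 offset by a 0.5 m baseline along x.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.], [0.]])])

# Pixel positions of the same marker in both images (e.g. from a
# simple blob detector on the infrared images).
x1 = np.array([[320.], [240.]])
x2 = np.array([[280.], [240.]])

# Linear multi-view triangulation; returns a homogeneous 4-vector.
X_h = cv2.triangulatePoints(P1, P2, x1, x2)
X = (X_h[:3] / X_h[3]).ravel()
print("marker position in the reference camera frame (m):", X)  # ~ (0, 0, 10)
```

With more than two cameras, the same linear system simply gains rows, which is what gives outside-in setups their accuracy and robustness to occlusion.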
Inside-out tracking uses camera inputs attached to the user, for example, the cameras placed on a smartphone, tablet/PC, or a head-mounted display. Most AR applications are based on such tracking systems due to their low-cost and flexibility. Since these trackers rely on visual features, a decrease in the image quality due to e.g. motion blur, lighting changes, or occlusions can lead to loss of tracking. For these reasons, it is common to fuse the camera localization with other sensors that have complementary properties, e.g. inertial sensors [2].
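The following toy example illustrates the principle of such visual-inertial fusion with a one-dimensional complementary filter that blends a fast but drifting gyroscope rate with a slower, drift-free camera orientation estimate. Real systems such as [2] fuse full 6DOF states with more elaborate filters; this is only a conceptual sketch.

```python
# Toy complementary filter fusing a gyroscope rate (fast, drifting)
# with a camera-derived orientation (slow, drift-free) for one axis.

def complementary_filter(angle, gyro_rate, vision_angle, dt, alpha=0.98):
    """Blend integrated gyro motion with the absolute camera estimate.

    alpha close to 1 trusts the smooth, high-rate gyro short-term,
    while the camera measurement slowly corrects accumulated drift.
    """
    predicted = angle + gyro_rate * dt  # high-frequency inertial update
    return alpha * predicted + (1.0 - alpha) * vision_angle

# Example: the camera reports 10.0 deg while the gyro-integrated
# estimate has drifted to 12.0 deg during a motion-blurred interval.
angle = 12.0
angle = complementary_filter(angle, gyro_rate=0.5, vision_angle=10.0, dt=0.01)
print(f"fused orientation: {angle:.3f} deg")
```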
Camera-based tracking requires knowledge of some 3D information of the environment to establish correspondences with 2D image features and to perform localization of the device by computing 6 degree-of-freedom (6DOF) poses. Traditionally, this 3D information is obtained by placing special visual markers with distinguishable patterns for detection. The most recent tracking methods use 3D object features directly as tracking targets. Such approaches rely on 3D-reconstructed objects or respective CAD1 models and use them as a template for matching with the real object, through texture features or line matching [3,4]. Machine learning systems with CNNs2 have lately been successful in this area [5].
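As a minimal illustration of computing a 6DOF pose from 2D-3D correspondences, the sketch below recovers a camera pose from the four corners of a square visual marker using OpenCV's solvePnP. The marker size, camera intrinsics and corner detections are hypothetical values.

```python
# Marker-based 6DOF pose estimation: known 3D corner positions of a
# square marker plus their detected 2D image locations yield the pose.
import numpy as np
import cv2

marker_size = 0.05  # 5 cm square marker, corners in the marker's own frame
object_points = np.array([
    [0, 0, 0], [marker_size, 0, 0],
    [marker_size, marker_size, 0], [0, marker_size, 0]], dtype=np.float64)

# 2D corner detections in the image (e.g. from an ArUco-style detector).
image_points = np.array(
    [[310, 230], [370, 232], [368, 290], [308, 288]], dtype=np.float64)

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)  # assume an undistorted image for simplicity

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation: marker frame -> camera frame
    print("6DOF pose: rotation\n", R, "\ntranslation (m):", tvec.ravel())
```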
In most cases, no prior 3D information on the environment is available. Simultaneous Localization and Mapping (SLAM) systems can then be used to track the device's position within an unknown area while creating a 3D map of that area. SLAM is a challenging problem and a key topic for robotics as well as for AR. Although several monocular SLAM systems have been introduced so far, important disadvantages are the inability to retrieve the real scale of the reconstructed map and the inability to function in dynamic or featureless environments [6,7]. Some of these issues can be mitigated with the use of binocular camera systems or additional inertial or depth sensors [2,8].
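The scale ambiguity of monocular systems can be seen directly in the basic two-frame visual odometry pipeline sketched below (ORB feature matching, essential-matrix estimation, pose recovery), the kind of building block that feature-based SLAM systems such as ORB-SLAM [7] extend: the recovered translation comes back as a unit vector, i.e. known only up to scale. Frame file names and parameters are placeholders.

```python
# Two-frame monocular visual odometry: match ORB features, estimate
# the essential matrix, and recover the relative camera motion.
import numpy as np
import cv2

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching of the binary ORB descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])

# RANSAC-robust essential matrix, decomposed into rotation R and a
# unit-length translation t: the real metric scale is unobservable.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R)
print("translation direction (unit norm):", t.ravel())
```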
3.3. Display/visualization
MR display technologies can be split into two main sub-types according to the hardware used to fuse virtual and real content: optical see-through (OS) and video see-through (VS). For OS, transparent screens allow the user to perceive reality directly, while for VS, reality is captured by the camera(s) and the resulting video stream is augmented and displayed. Examples of OS displays are the HoloLens, Magic Leap and DAQRI smart glasses, amongst others. Examples of VS displays are the HTC VIVE Pro, Oculus Rift S/Quest and VRgineers XTAL, amongst many others.
There are several options for the display of visualizations in AR, with the final choice depending on the application at hand. Handheld devices such as tablet-PCs or smartphones are a popular choice in several fields due to their availability and low cost. Virtual augmentations are not displayed directly in the real world; instead, they are placed over live video captured by the device camera, leading to an inherent delay and loss of immersion. Additionally, such devices require the users to point them at the area of interest, which prevents users from performing complex tasks at the same time. In the medical domain, such devices are most useful when communicating information, e.g. for patient education but also for discussion between medical experts.
Head-mounted displays (HMDs) provide an alternative to mobile display devices, especially the optical see-through (OS) versions that project augmentations directly over the user's real-world view. Until recently, HMDs such as Google Glass did not provide an embedded tracking system. Therefore, their use in AR was extremely challenging, especially when taking into account the comparatively limited computational resources provided by such devices and the requirements of tracking algorithms. The field was revolutionized by the introduction of the Microsoft HoloLens, the Oculus Rift S and Oculus Quest, and many other systems supporting the Windows Mixed Reality or SteamVR frameworks. All these systems use visual and inertial sensors to provide an embedded SLAM tracking system, paving the way for the development of many AR applications.
An alternative to such devices is spatial AR, where a projector is used to display AR content. Spatial AR is in general confined to the specific area of projection but has the advantage of not burdening the user with wearing any type of device. A limitation of spatial AR is that no 3D data can be displayed.
3.4. Technological challenges for AR/MR
Current AR/MR systems have been shown to perform well in static scenarios and specifically prepared environments. An operating room (OR) can, in some cases, be an example of such a controlled environment. A large number of applications, however, need robust tracking functionality in highly dynamic scenes, requiring SLAM systems. Similarly, object tracking technologies are close to maturity when it comes to rigid objects but cannot yet handle articulated and non-rigid objects. The human body as a whole can be seen as an articulated object, while organs are in general highly non-rigid. As both the body and individual organs have only a few features, they represent a great challenge for any tracking system.
Scene understanding is a key challenge for tracking systems. Most existing SLAM systems create a sparse map of their environment. Such a geometric model representation is useful for localization. However, for AR/MR, a dense scene mapping is preferred as it allows for the full interaction of virtual and real objects. Further capabilities will be unlocked when a semantic understanding of scenes is achieved in addition to the pure geometric one. The fusion of SLAM system maps with the output of deep neural networks providing segmentation and labeling is an active research topic in computer vision [9,10].
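As a simple illustration of attaching semantics to geometry, the sketch below back-projects a depth map into a 3D point cloud and tags every point with a per-pixel class label, as a segmentation network would provide. It is a schematic stand-in for the far more involved fusion performed in systems like [9,10]; all inputs are synthetic placeholders.

```python
# Fusing geometry with semantics: back-project a depth map into a 3D
# point cloud and attach per-pixel class labels from a segmentation.
import numpy as np

fx = fy = 800.0
cx, cy = 320.0, 240.0
H, W = 480, 640

depth = np.full((H, W), 2.0)               # depth in meters (placeholder)
labels = np.zeros((H, W), dtype=np.int32)  # e.g. 0 = background, 1 = organ
labels[200:300, 250:400] = 1               # pretend segmentation output

# Pinhole back-projection of every pixel (u, v, depth) into camera space.
u, v = np.meshgrid(np.arange(W), np.arange(H))
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Each 3D point now carries a semantic class next to its geometry.
semantic_cloud = np.concatenate(
    [points, labels.reshape(-1, 1).astype(np.float64)], axis=1)
print("labeled points of class 'organ':",
      int((semantic_cloud[:, 3] == 1).sum()))
```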
Apart from the tracking technologies, scalability is another major issue for AR. Systems are required that function in any environment, without extensive preparatory steps (e.g. calibration) or manual setup.
3.5. MR applications in urology
In the following section, we provide examples of AR/MR applications in the field of urology that we consider relevant for clarifying the available options and the technology's potential. The topics to be discussed are education, training and surgery. For education, we give examples of patient and medical staff education. Training is mainly concerned with learning how to perform a particular operation. For surgery, we focus on pre-operative planning and MR-supported conduction of operations.
3.6. Education
Education might well be the most intuitive and direct approach to MR, posing the least constraints on technology. The main aspect of education is sharing knowledge. This can be done in various ways using text, images, and other media. Since we are mainly dealing with 3D objects in medicine, i.e. the human body and its organs, VR offers several advantages. In contrast to images and videos with a fixed or predefined viewpoint, VR allows for free inspection of 3D scenarios. Manipulation and enhancement of the virtual scene with auxiliary information is relatively straightforward. Furthermore, certain aspects of the presented data can be emphasized so that users perceive crucial content intuitively. Overall, VR offers great benefit when teaching complex 3D anatomical structures. VR can compete with traditional methods, such as textbooks and anatomical dissection [11]. Lorenzo-Alvarez et al. report that 3D virtual classrooms have the potential to replace traditional classrooms [12].
A potential VR drawback is that it "decouples" users from reality. When studying genitourinary anatomy, VR hinders the ability to relate virtual content to the real world. Especially when educating patients, this might be a significant drawback. MR can improve the situation dramatically. A Magic Mirror device, which displays virtual information on top of the user's mirror image, can greatly enhance the learning experience [13].
When using head-mounted devices, the intuitive perception of spatial information is similar to that found with VR. Yet, MR can provide significant benefits regarding the 3D perception of anatomy compared to VR, as reported by Moro et al. [14]. The authors found that participants reported fewer adverse effects like nausea, headache, dizziness, or disorientation when using MR compared to VR. They also found beneficial side effects like increased student engagement, interactivity, and enjoyment. In a randomized, prospective, single-blinded study, Schoeb et al. evaluated educating medical students in bladder catheter placement through an MR-guided system (HoloLens). It was shown that MR is an effective tool for teaching bladder catheter placement to medical students [15]. Parkhomenko et al. reported that, through immersive VR (iVR) headsets, patient-specific anatomical models could convey the patient's renal anatomy and help determine the optimal location for percutaneous access before percutaneous nephrolithotomy [16]. The study also noted that the educational use of iVR technology can help alleviate patients' anxiety.
Wake et al. investigated the impact of 3D printed and AR models3 on patient education in the context of renal and prostate cancer [17]. The authors found great benefit for both technologies, with slight advantages for the 3D printed models. The haptic feedback and direct manipulation capabilities provided by a printed model cannot be matched by current AR/MR systems at the quality level given for a physical object.
3.7. Training
Acquiring surgical experience is challenging for young surgeons. The training options are mainly restricted to using surgical simulation trainers, animal cadavers and, at a later stage, actual patients. These resources often do not resemble human anatomy and are unable to meet the high demand from urologists/surgeons seeking training. VR and especially MR can provide desirable alternatives. Al Janabi et al. studied MR-simulated ureteroscopy using two different training systems in a full-immersion simulation [18]. A head-mounted display was used as an endoscopic screen. The study included 72 participants comprising novices, intermediates, and experts. The evaluation results showed that MR for surgical training is effective in this scenario and, in particular, that using head-mounted devices as endoscopic screens can improve surgical performance significantly for all participant groups.
Similar training simulators can be set up for most endoscopic interventions. In 2015, Hung et al. developed a new simulation platform for robotic partial nephrectomy. This platform, where AR and VR are used together, demonstrated its utility for training residents, fellows and inexperienced surgeons in robotic partial nephrectomy [19]. In another study using a hybrid augmented reality simulator, it was shown that the simulator can be used in a urology resident laparoscopy training program [20]. Kuronen-Stewart et al. assessed the face, content and construct validity of a VR simulator for Holmium laser enucleation of the prostate (HoLEP), which has become a new gold standard for treating benign prostatic enlargement [21]. It was shown to be a useful and beneficial simulator system for HoLEP training.
In case a surgeon is decoupled from reality by means of intermediate hardware, training scenarios can be set up with relative ease. The necessary visualizations can be generated on demand using a virtual setup. A physical simulation can be used to compute the deformations induced by incisions. Since most robotic platforms do not currently include haptic feedback, the only difference to a real surgery should be the quality of simulation and visualization. The significance of haptic feedback in robot-assisted surgeries is debatable [22]. Våpenstad et al., however, conclude that haptic feedback is not helpful in transferring surgical skills [23], while Overtoom et al. determined that it may have adverse effects if the haptic simulation is only nearly perfect [24].
3.8. Surgery
MR can provide significant support at various levels when performing surgery. For example, VR can be effectively used in surgical planning [25,26]. Urologists were presented with a 3D model of the anatomy in scope along with pathological annotations and additional images like CT, MRI, or PET. The purpose of using VR technology in this setting was to present three-dimensional data using a dedicated system, instead of projecting it onto a two-dimensional display. Since patient-specific planning can be performed exploiting only pre-acquired data, a registration between surgeon, patient and tools is not necessary. The main aspects for accepting such technology are ease of use, quality of presentation, and interactive capabilities to support the planning process. Antonelli et al. used holographic reconstructions and report on the usefulness of three-dimensional preoperative planning before partial nephrectomy [27]. In another study, it was shown that three-dimensional holograms in MR can be used for preoperative planning before nephron-sparing surgery [28]. In a recent study, interactive VR renal models were used in preoperative planning before laparoscopic donor nephrectomy. The authors observed that the operative time was reduced, the donors' outcomes improved, and the patients' preoperative anxiety decreased [29].
Moving on from pure VR, AR can be used to replace traditional display technologies. Al Janabi et al. [18] describe a system to replace the monitors for endoscopic surgery with a head-mounted display (HoloLens). They report significant benefits when applied in a synthetic training scenario with novice, intermediate and expert surgeons. The main advantage of the AR approach is that endoscopic images and auxiliary information can be displayed on a virtual monitor. Such a virtual display can be arbitrarily positioned and hence alleviates the disturbance of the surgeon's visual-motor axis often experienced with real monitors. Since the virtual display remains fixed in the real world, stable tracking of the surgeon's head movement is needed. Such tracking is provided by modern mixed reality HMD hardware. Additional co-registration with the patient or surgical tools is not needed. In that regard, it is also questionable whether a device like the HoloLens [30] is really needed. In contrast to tracker-less HMDs like Google Glass, the only difference in data display is that the virtual monitor stays in a fixed position for the HoloLens (i.e. it can move out of sight), while it would remain fixed in the view of the smart glasses. Borgmann et al. demonstrated the safety and feasibility of smart glasses in 31 AR-assisted urological surgeries [31].
Surgical robots can take MR a huge step further. Taking the da Vinci Surgical System as an example, the aforementioned VS technology is used. The surgeon uses a special operating console featuring a fixed stereoscopic view as well as in-hand manipulators to operate surgical tools. All surgical tools are rigidly attached to the robotic arms on the patient cart. MR approaches benefit tremendously from such a setup. Since the surgeon's view and the means of interaction are fixed, and the position of cameras and tools is known in advance, precise and stable calibration/registration is possible. This allows for accurate visualization of stereoscopic information, even for multiple surgeons at the same time. Due to the strictly controlled scenario, surgical robots can provide MR solutions of the highest quality. Schiavina et al. reported that AR-3D guided surgery can be used to improve intraoperative "real-time navigation" to identify the index lesion in robot-assisted radical prostatectomy. Moreover, the nerve-sparing approach during robot-assisted radical prostatectomy can be tailored through AR-3D guidance [32]. Additionally, surgical robots have the potential to provide haptic feedback to the surgeon [33].
Unfortunately, robot-assisted surgery has not proven beneficial under all circumstances. In most cases, traditional surgery (including minimally invasive surgery) by an expert is still preferred. In traditional surgery, the application of AR/MR is very involved, since the surgeon, the patient and consequently the surgical field, the surgical tools, and the displays (HMDs, monitors, projectors, etc.) need to be calibrated and tracked. Especially challenging in urology is the fact that most anatomy is comprised of soft tissue, which easily deforms due to incisions, pressure, or just under the influence of gravity.
Tracking of surgical instruments is in most cases (i.e. whenever the tool is rigid or articulated) well understood and can be reliably computed [2,3,4]. Technical issues like occlusion of the tool by the surgeon or by the patient's anatomy can be solved by attaching markers or by changing the geometry of the tool. Similarly, most of the devices in the operation theatre can be tracked reliably using existing technology. The newest approaches no longer rely on special markers attached to the objects but use object features like edges and textures directly as tracking information [3].
Besides the OR equipment, the surgeon along with the relevant MR hardware (e.g. an HMD) needs to be tracked with high precision. Modern HMD hardware already offers proper tracking capabilities. A current drawback is the variability in wearing such devices. The eye-screen configuration changes from user to user, which can cause slight de-calibration. As a result, nausea, eye fatigue, shifted display of information, and other adverse effects can occur. HoloLens and VIVE hardware allow for a manual adaptation of the lens system, while newer hardware like the XTAL adapts automatically to the user and additionally incorporates eye tracking.
Once surgical devices, instruments and the surgeon can be tracked and co-registered [34], it is rather straightforward to spatially localize and display arbitrary information in the operation theatre with very high precision. If the intent of MR is to focus on the operation of equipment, recent technology could be considered mature for practical application.
The ultimate challenge of AR/MR in the surgical setting is introduced by the patients, or more technically, by the non-rigidity of patients' anatomy. Technically, we distinguish between articulated, i.e. partially rigid, and the more general non-rigid deformations. For example, head movements move the brain rigidly but do not change its internal state. In such situations it is possible to use pre-operatively acquired information with relative ease. Due to the comparatively small differences, optimization-based registration techniques can converge fast and reliably.
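For the rigid or nearly rigid case, the core of such an optimization-based registration can even be written in closed form. The sketch below aligns pre-operative fiducial positions with their intraoperative counterparts via the Kabsch/Procrustes solution, the step that iterative schemes such as ICP repeat; the point sets are synthetic examples, not clinical data.

```python
# Closed-form rigid registration (Kabsch): find R, t minimizing
# sum ||R @ src_i + t - dst_i||^2 between corresponding fiducials.
import numpy as np

def rigid_register(src, dst):
    """Return rotation R (3x3) and translation t (3,) aligning src to dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Fiducials in CT coordinates and their (noisy) intraoperative positions.
ct_points = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
rng = np.random.default_rng(0)
intraop_points = (ct_points @ R_true.T + np.array([5., 2., 1.])
                  + rng.normal(0, 1e-3, ct_points.shape))

R, t = rigid_register(ct_points, intraop_points)
residual = np.linalg.norm((ct_points @ R.T + t) - intraop_points, axis=1).max()
print("max registration residual (m):", residual)
```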
However, just breathing results in significant deformation of internal organs. This is also the case for open and minimally invasive surgeries in urology, e.g. when inflating the abdomen. Assuming tissue models correctly representing volumetric organ deformation are available [30,35], the missing link is reliable markers related to the organs. For example, the shape of the cerebral cortex is such a fiducial. It is unique to a patient and has in general no repetitive structures. It is well suited for registration of pre- and perioperative data. However, in the case of the kidney or liver, the situation is much worse. In a close-up view, both organs look very homogeneous with very few optically distinguishable features. Especially with a restricted view, as found in minimally invasive surgery (MIS), there is little chance of finding sufficient features for online registration. In fact, most automatically detected features4 are based on illumination artefacts like specular reflections. Such features are not suited to compute a registration. Obviously, features induced by a pathologic change of the organ should also not be used for registration.
In case no fiducials can be found for an anatomy, artificial fiducials and markers can be used. Which marker is best suited depends on the respective application. Larger markers allow for better localization and orientation estimation at the cost of higher space consumption. They appear to be best suited in a rather rigid regime, e.g. in bone proximity. Smaller markers allow a much higher coverage of the anatomy in scope and are often better suited in a non-rigid case. Using multiple small markers at the same time can lead to reliable registration and a reasonably good estimation of position and orientation.
Kong et al. [36] reported on the use of fluorescent gold fiducials for ex vivo and in vivo experiments in a pig. The fiducials have a helix shape, preventing migration once inserted into the kidney. Their fluorescent coating can be detected in the near-infrared spectrum during surgery, and the markers can be reliably identified in CT images. Due to their specific shape and optical properties, the fiducials can be recognized in pre- and perioperative images, maximizing computer-based support during kidney surgery. The reported accuracy under deformation was below 1 mm. The procedure can be beneficial for verifying real-time visual information regarding the location and borders of small solid renal tumors. Further research needs to clarify whether adverse effects are to be expected using the markers. Inserting the markers into the kidney clearly has the technical advantage that internal deformations can be captured.
In some cases, custom markers, and therewith also AR, can be used in elegant ways, as reported by Yu et al. [37]. Their main goal is to protect the urethra from damage during MIS. The challenge is to provide reliable information about the urethra's position during MIS through augmentation of endoscopic images. Their solution is to use a surface-lighting plastic optical fiber and to illuminate it using coded light5. The fiber is inserted into the urethra and thus encodes its position over its complete length. An automatic process can then extract and mark the urethra's position in the endoscopic images.
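A simplified stand-in for decoding such a temporally coded light source is sketched below: pixels belonging to the illuminated fiber blink with a known on/off code, so correlating each pixel's intensity profile over time with that code separates the fiber from static scene content. The code pattern, frame stack and threshold are illustrative and not taken from Yu et al. [37].

```python
# Temporal decoding of a coded light source in a video stream: pixels
# whose intensity profile correlates with the known code are the fiber.
import numpy as np

code = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.float64)  # known pattern
T, H, W = len(code), 480, 640

# Synthetic grayscale frame stack: background plus a blinking fiber region.
rng = np.random.default_rng(1)
frames = rng.normal(0.5, 0.02, (T, H, W))
frames[:, 240:244, 100:500] += 0.4 * code[:, None, None]  # fiber pixels blink

# Zero-mean normalized cross-correlation of each pixel's time profile
# with the zero-mean code; a high response marks coded-light pixels.
code_zm = code - code.mean()
profile = frames - frames.mean(axis=0)
response = np.tensordot(code_zm, profile, axes=(0, 0)) / (
    np.linalg.norm(code_zm) * np.linalg.norm(profile, axis=0) + 1e-9)

fiber_mask = response > 0.8   # threshold picks out the urethra fiber
print("detected fiber pixels:", int(fiber_mask.sum()))
```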
4. Conclusions
MR, including AV and AR, is an intriguing technology with tremendous potential in many application domains. Its benefit is the ability to communicate information without changing reality, by seamlessly combining real and virtual content. In the medical domain, this could have far-reaching impact, ranging from education and surgical planning to the conduct of surgery, treatment and rehabilitation. Since arbitrary information can be communicated, MR applications are especially interesting when the surgeon's information is severely restricted, as in the case of MIS. Although it seems self-evident that physicians want some kind of "multimodal X-ray vision", it is not straightforward to design an optimal system that fits all needs. Information suited to support the novice might distract the expert.
Both technology providers and medical experts are putting significant effort into the development of suitable hardware, software and application procedures. In scenarios with moderate technological demands6, like patient education, MR can be used effectively. For surgical support, the achievable accuracy is often not sufficient. Especially in urology, the non-rigid nature of the organs poses a major challenge. Pre-operatively acquired data needs to be deformed to fit the current organ shape. Deformable models, often taking even internal organ deformation into account, are under development. Furthermore, special markers and enhanced optical tracking capabilities are being devised to support the acquisition of dynamics during surgery.
Overall, one can state that the underlying workflow and hardware to use AR/MR in urology are established, although many performance requirements are not yet met. The main challenges lie in intraoperative data acquisition, online and multimodal registration and calibration of devices and data, appropriate display hardware, as well as cooperative devices and tools in the operation theatres. On the other hand, the progress made in recent years is tremendous in all respects and the gap is constantly shrinking.
Medical experts should feel encouraged to experience MR solutions and to communicate their specific needs and the effects they aim at. At the same time, prospective end-users should be aware of the intricate technical challenges conditioned by their specific application. This will in turn empower AR experts to devise solutions that are effective and compatible with the overall goal of better patient care.
Author contribution
Study concept, literature search and collection of articles: G.R., J.R. Writing: G.R., M.Y., J.R., P.L., N.M. Reviewing: A.P., R.I.S., A.M.
Funding
None.
Research involving human participants and/or animals
This article does not contain any studies with human participants or animals performed by any of the authors.
Data availability statement
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Provenance and peer review
Not commissioned, externally peer reviewed.
Ethical approval
As this study is a narrative review, the ethical approval is not required.
Consent
This article does not contain any studies with human participants or animals performed by any of the authors.
Registration of research studies
1. Name of the registry:
2. Unique Identifying number or registration ID:
3. Hyperlink to your specific registration (must be publicly accessible and will be checked):
Guarantor
Mehmet Yilmaz, 08.04.2021
Declaration of competing interest
The authors declare no conflict of interest.
Footnotes
1. Computer Aided Design.
2. Convolutional Neural Networks.
3. Here, AR refers to pure AR without AV aspects, i.e. no haptic feedback is available.
4. By means of an automated point of interest (POI) detector.
5. Coded light is essentially a sequence of light patterns/pulses where frequency, amplitude or color is modulated.
6. This refers, for example, to the accuracy needed to guide a surgery versus to communicate where an organ is situated within the body.
References
1. Milgram P., Takemura H., Utsumi A., Kishino F. Augmented reality: a class of displays on the reality-virtuality continuum. Telemanipulator and Telepresence Technologies. 1994;2351.
2. Li P., Qin T., Hu B., Zhu F., Shen S. Monocular visual-inertial state estimation for mobile augmented reality. 2017. pp. 11–21.
3. Rambach J., Pagani A., Stricker D. [POSTER] Augmented things: enhancing AR applications leveraging the internet of things and universal 3D object tracking. 2017.
4. Wuest H., Vial F., Stricker D. Adaptive line tracking with multiple hypotheses for augmented reality. 2005. pp. 62–69.
5. Rambach J., Deng C., Pagani A., Stricker D. Learning 6DoF object poses from synthetic single channel images. 2018.
6. Forster C., Pizzoli M., Scaramuzza D. SVO: fast semi-direct monocular visual odometry. 2014.
7. Mur-Artal R., Montiel J., Tardós J. ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015;31:1147–1163.
8. Newcombe R., Davison A., Izadi S., et al. KinectFusion: real-time dense surface mapping and tracking. 2011. pp. 127–136.
9. Rambach J., Lesur P., Pagani A., Stricker D. SlamCraft: dense planar RGB monocular SLAM. 2019.
10. Zhi S., Bloesch M., Leutenegger S., Davison A. SceneCode: monocular dense semantic reconstruction using learned encoded scene representations. 2019. pp. 11768–11777.
11. Codd A., Choudhury B. Virtual reality anatomy: is it comparable with traditional methods in the teaching of human forearm musculoskeletal anatomy? Anat. Sci. Educ. 2011;4:119–125. doi: 10.1002/ase.214.
12. Lorenzo-Alvarez R., Rudolphi-Solero T., Ruiz-Gómez M.J., Sendra Portero F. Medical student education for abdominal radiographs in a 3D virtual classroom versus traditional classroom: a randomized controlled trial. Am. J. Roentgenol. 2019;213:1–7. doi: 10.2214/AJR.19.21131.
13. Kugelmann D., Stratmann L., Nühlen N. An augmented reality magic mirror as additive teaching device for gross anatomy. Ann. Anat. 2017;215. doi: 10.1016/j.aanat.2017.09.011.
14. Moro C., Štromberga Z., Raikos A., Stirling A. The effectiveness of virtual and augmented reality in health sciences and medical anatomy. Anat. Sci. Educ. 2017;10. doi: 10.1002/ase.1696.
15. Schoeb D.S., Schwarz J., Hein S. Mixed reality for teaching catheter placement to medical students: a randomized single-blinded, prospective trial. BMC Med. Educ. 2020;20:510. doi: 10.1186/s12909-020-02450-5.
16. Parkhomenko E., O'Leary M., Safiullah S. Pilot assessment of immersive virtual reality renal models as an educational and preoperative planning tool for percutaneous nephrolithotomy. J. Endourol. 2019;33:283–288. doi: 10.1089/end.2018.0626.
17. Wake N., Rosenkrantz A., Huang R. Patient-specific 3D printed and augmented reality kidney and prostate cancer models: impact on patient education. 3D Print. Med. 2019;5. doi: 10.1186/s41205-019-0041-3.
18. Al Janabi H., Aydın A., Palaneer S. Effectiveness of the HoloLens mixed reality headset in minimally invasive surgery: a simulation-based feasibility study. Surg. Endosc. 2020;34. doi: 10.1007/s00464-019-06862-3.
19. Hung A.J., Shah S.H., Dalag L., Shin D., Gill I.S. Development and validation of a novel robotic procedure specific simulation platform: partial nephrectomy. J. Urol. 2015;194:520–526. doi: 10.1016/j.juro.2015.02.2949.
20. Feifer A., Delisle J., Anidjar M. Hybrid augmented reality simulator: preliminary construct validation of laparoscopic smoothness in a urology residency program. J. Urol. 2008;180:1455–1459. doi: 10.1016/j.juro.2008.06.042.
21. Kuronen-Stewart C., Ahmed K., Aydin A. Holmium laser enucleation of the prostate: simulation-based training curriculum and validation. Urology. 2015;86:639–646. doi: 10.1016/j.urology.2015.06.008.
22. Meccariello G., Faedi F., AlGhamdi S. An experimental study about haptic feedback in robotic surgery: may visual feedback substitute tactile feedback? J. Robot. Surg. 2016;10:57–61. doi: 10.1007/s11701-015-0541-0.
23. Våpenstad C., Hofstad E., Bø L.E. Lack of transfer of skills after virtual reality simulator training with haptic feedback. Minim. Invasive Ther. Allied Technol. 2017;26:1–9. doi: 10.1080/13645706.2017.1319866.
24. Overtoom E., Horeman T., Jansen F.-W., Dankelman J., Schreuder H.W.R. Haptic feedback, force feedback, and force-sensing in simulation training for laparoscopy: a systematic overview. J. Surg. Educ. 2018;76. doi: 10.1016/j.jsurg.2018.06.008.
25. Yamada Y., Inoue Y., Kaneko M., Fujihara A., Hongo F., Ukimura O. Virtual reality of three-dimensional surgical field for surgical planning and intraoperative management. Int. J. Urol. 2019;26. doi: 10.1111/iju.14047.
26. Porpiglia F., Amparore D., Checcucci E. Current use of three-dimensional model technology in urology: a road map for personalised surgical planning. Eur. Urol. Focus. 2018;4. doi: 10.1016/j.euf.2018.09.012.
27. Antonelli A., Veccia A., Palumbo C. Holographic reconstructions for preoperative planning before partial nephrectomy: a head-to-head comparison with standard CT scan. Urol. Int. 2018;102:1–6. doi: 10.1159/000495618.
28. Checcucci E., Amparore D., Pecoraro A. 3D mixed reality holograms for preoperative surgical planning of nephron-sparing surgery: evaluation of surgeons' perception. Minerva Urol. Nefrol. 2019. doi: 10.23736/S0393-2249.19.03610-5.
29. Xie L., O'Leary M., Jefferson F.A. Interactive virtual reality renal models as an educational and preoperative planning tool for laparoscopic donor nephrectomy. Urology. 2021. doi: 10.1016/j.urology.2020.12.046. In press.
30. Ma Y. A review of virtual cutting methods and technology in deformable objects. Int. J. Med. Robot. Comput. Assist. Surg. 2018;14. doi: 10.1002/rcs.1923.
31. Borgmann H., Rodríguez Socarrás M., Salem J. Feasibility and safety of augmented reality-assisted urological surgery using smartglass. World J. Urol. 2017;35:967–972. doi: 10.1007/s00345-016-1956-6.
32. Schiavina R., Bianchi L., Lodi S. Real-time augmented reality three-dimensional guided robotic radical prostatectomy: preliminary experience and evaluation of the impact on surgical planning. Eur. Urol. Focus. 2020. doi: 10.1016/j.euf.2020.08.004.
33. Amirabdollahian F., Livatino S., Vahedi B. Prevalence of haptic feedback in robot-mediated surgery: a systematic review of literature. J. Robot. Surg. 2018;12. doi: 10.1007/s11701-017-0763-4.
34. Bouget D., Allan M., Stoyanov D., Jannin P. Vision-based and marker-less surgical tool detection and tracking: a review of the literature. Med. Image Anal. 2016;35. doi: 10.1016/j.media.2016.09.003.
35. Zhang J., Zhong Y., Gu C. Soft tissue deformation modelling through neural dynamics-based reaction-diffusion mechanics. Med. Biol. Eng. Comput. 2018;56:2163–2176. doi: 10.1007/s11517-018-1849-5.
36. Kong S.-H., Haouchine N., Soares R. Robust augmented reality registration method for localization of solid organs' tumors using CT-derived virtual biomechanical model and fluorescent fiducials. Surg. Endosc. 2017;31. doi: 10.1007/s00464-016-5297-8.
37. Yu F., Song E., Liu H., Li Y., Zhu J., Hung C.-C. An augmented reality endoscope system for ureter position detection. J. Med. Syst. 2018;42. doi: 10.1007/s10916-018-0992-8.