Author manuscript; available in PMC: 2024 Jan 1.
Published in final edited form as: Comput Methods Biomech Biomed Eng Imaging Vis. 2022 Dec 7;11(4):1130–1135. doi: 10.1080/21681163.2022.2154272

Mixed Reality Interfaces for Achieving Desired Views with Robotic X-ray Systems

Benjamin D Killeen a, Jonas Winter a, Wenhao Gu a, Alejandro Martin-Gomez a, Russell H Taylor a, Greg Osgood b, Mathias Unberath a
PMCID: PMC10406465  NIHMSID: NIHMS1857601  PMID: 37555199

Abstract

Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. Informing the system of the exact pose that corresponds to a desired view, however, is challenging. Currently, these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces for the surgeon to command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy, (2) the same pointer, but combined with a mixed reality environment to synchronously render digitally reconstructed radiographs from the tool’s pose, and (3) the same mixed reality environment but with a virtual X-ray source instead of the pointer. Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may substantially reduce the number of X-ray images acquired solely during “fluoro hunting” for the desired view or standard plane.

Keywords: C-arm positioning, mixed reality, X-ray

1. Introduction

Fully robotic X-ray systems can precisely orient and reposition themselves to align with any viewing direction relative to the patient. However, this does not guarantee that the desired view onto the anatomy is achieved easily or quickly, because commanding the system effectively remains challenging. In an interventional radiology (IR) suite, where floor- or ceiling-mounted robotic C-arm systems are prevalent, the interventionalist manipulates the C-arm pose through joysticks, which can be ineffective: adjusting multiple axes in tandem results in complex movements, but moving along each axis independently is time-consuming. In other scenarios, including those that rely on non- or partially-robotic X-ray imaging systems, the provider interfaces with the X-ray system indirectly through an operator, giving vocal commands such as “more AP” or “roll that way.” The introduction of mobile robotic X-ray systems brings further challenges. The Brainlab Loop-X, for example, despite being fully robotized, requires a non-sterile control panel for manual adjustments to the viewing angle (Keil and Trapp 2022). Additionally, the Loop-X can actuate its source and detector independently, allowing for more sophisticated configurations that potentially complicate communication between the surgeon and the operator. Here, we consider alternative interfaces for the surgeon to control robotic X-ray devices more directly, using an optical see-through head-mounted display (OST HMD) that delivers a mixed reality (MR) environment spatially calibrated to the X-ray space. This allows for interactive, on-demand rendering of digitally reconstructed radiographs (DRRs) as a live preview of candidate viewpoints before the X-ray imaging system is moved. In this initial investigation of the usefulness of such an approach, we consider the use of a pointer tool to specify the principal ray of the viewing frustum, or a virtual “AR Handle” representing the X-ray source, rendered in mixed reality along with the DRR.

Improving the surgeon-C-arm interface has the potential to reduce the number of X-ray acquisitions needed. During surgery, it is common practice to acquire multiple images while navigating to a final desired image, a trial-and-error process referred to as fluoro-hunting. In addition to being time-consuming, fluoro-hunting contributes to the radiation dose for patients and clinicians. In a cadaveric study with a non-robotic C-arm, for example, Mandelka et al. (2022) find that an average of 7.1 acquisitions was needed to obtain AP and lateral views of vertebral bodies, exposing the examiners to a median dose of 34.5 μGy·cm² per level. For the pelvis, De Silva et al. (2018) find that an average of 6.4 ± 4.8 acquisitions was needed to obtain the desired view, measured across five radiologists for six views pertinent to pelvic screw placement. Reducing these non-clinical acquisitions is pertinent under the as-low-as-reasonably-achievable (ALARA) standard (Hansson 2013).

2. Related Work

Mixed Reality (MR) is an emerging technology that integrates computer-generated content with real-world objects. Introducing this technology into medical settings has equipped surgeons with a new set of capabilities that promote improved workflow and outcomes of surgical procedures (Fida et al. 2018; Jud et al. 2020; Elmi-Terander et al. 2019). In this context, the introduction of AR Head-Mounted Displays (HMDs) into surgical suites enables visual guidance and navigation capabilities, facilitates communication, and allows for the visualization of multiple imaging modalities in situ, promoting the understanding of complex anatomical structures using two- and three-dimensional images (Rahman et al. 2020). These devices have proven particularly valuable in assisting surgeons during orthopedic procedures (Fotouhi et al. 2020; Deib et al. 2018; Casari et al. 2021; Teatini et al. 2021; Gu et al. 2022). Furthermore, the wide variety of sensors integrated into AR HMDs has enabled the tracking and localization of surgical tools and markers commonly used in surgical settings (Kunz et al. 2020; Gsaxner et al. 2021).

Recent work has explored the use of AR specifically to reduce fluoro-hunting. Unberath et al. (2018) use an HMD to show a virtual indicator of the target pose for a manually operated C-arm, assisting the technician, while Andress et al. (2018) demonstrate the utility of an HMD calibrated with a C-arm system for triangulating and visualizing anatomical structures in 3D. Likewise, the utility of DRRs for obtaining standard views for pelvic screw placement has been explored by De Silva et al. (2018), who simulate fluoroscopic images from the current pose of a calibrated C-arm manipulated by a radiologist. By contrast, we simulate the planned pose of a robotic X-ray device as one aspect of a surgeon-in-the-loop control interface. A marker-free calibration of the C-arm with the patient and the surgeon’s HMD is achieved in Hajek et al. (2018), with applications in spatially aware visualization of fluoroscopic data (Fotouhi et al. 2019). Finally, Gong et al. (2013) propose a human interface to guide DRR generation in order to initialize a 2D/3D registration, relying on (1) a tracked pointer or (2) Microsoft Kinect hand gestures to manipulate the patient model pose. We use a similar control scheme, namely (1) a tracked pointer and (2) HoloLens hand gestures, but as an interface to the next robotic pose, rather than an already acquired image, anticipating and adjusting the planned X-ray projection in MR.

3. Methods

Fig. 1 shows a high-level overview of the proposed interfaces for controlling a robotic X-ray system, which we refer to as “Tool + Preview” and “AR Handle + Preview.” In the Tool + Preview interface, the surgeon positions a pointer tool in line with the principal ray of the desired view. As the surgeon positions the tool in space, they observe a DRR in real time on the HMD. The placement of the DRR is fixed in the surgeon’s field of view, on the side opposite their dominant hand. This ensures that the surgeon can observe the pointer tool and the live preview simultaneously, i.e. without turning their head to look at a physical display or virtual window. When the surgeon is satisfied with the virtual view they have achieved, they note its position for the robotic X-ray system.

Figure 1.

Overview of the proposed interfaces to control robotic gantries. Both solutions use a live preview of the currently specified view, approximating how the real X-ray based on this specification will appear. “Tool + Preview” (left) utilizes the pointer tool to specify this view, whereas “AR Handle + Preview” (right) uses a virtual handle that the surgeon manipulates with hand gestures.

There are several drawbacks to the Tool + Preview approach that the proposed AR Handle, shown in Fig. 3, is designed to address. The first is the reliance on optical tracking of the pointer tool, which can be obscured by the detector, together with the need to physically place the tool near the patient. In certain views, the placement of the X-ray arm may obstruct tool placement, requiring repositioning. More importantly, once the surgeon withdraws the tool, there is no persistent visualization of the pose they just specified, which may be useful for further iteration. The AR Handle addresses this by remaining fixed in space, as a virtual object, until manipulated with tracked hand gestures. Since virtual objects can be manipulated from afar, the surgeon can position this tool without extending their arm over the patient for long periods. As in the first interface, once the surgeon is satisfied with the DRR they have acquired, they begin robotic motion.

Figure 3.

An overview of the “AR Handle + Preview” interface. In this interaction, the surgeon manipulates a virtual handle from afar using a pinch-and-grab interaction, specifying the principal ray of the desired view.

3.1. Experimental Setup

We test each interface by evaluating the ability of an expert user to obtain standard clinical views onto a pelvis phantom. The phantom is fixed in the head-first, supine position and obscured from the surgeon’s view. The robotic X-ray system consists of a Brainlab Loop-X in conjunction with the Brainlab Curve navigation platform. We are interested in the number of real X-ray shots needed to obtain the desired view, namely the anteroposterior (AP), inlet, outlet, (right) obturator oblique, (right) iliac oblique, and teardrop views. After an initial shot using the interface in question, the surgeon evaluated each image and, if needed, used joystick adjustments to obtain the final view. Secondary shots were acquired in this manner for efficiency’s sake, since automated movements first require the Loop-X to return to an upright position before converging on a specified view, rather than moving along the shortest path. Iterative adjustments are more practical in the clinical setting if the first shot is reasonably close. Naturally, a better initial shot results in fewer adjustments.

To provide a live preview of the specified viewpoint, we use a virtual patient model to continuously render digitally reconstructed radiographs (DRRs). Fig. 2 provides an overview of the experimental setup for the Tool + Preview interface. For this study, the patient model consists of a CT scan of the pelvis phantom, and we rely on optical tracking to obtain the pose of the pelvis $T_{H,P}$ and tool $T_{H,T}$ with respect to the HMD frame $H$. The anatomical pose could likewise be obtained via automatic 2D/3D registration, as in De Silva et al. (2018), and tool tracking is unnecessary when using a fully virtual gantry interface, such as our AR Handle. In our case, the pelvis frame $P$ is registered to the virtual patient model by including the fiducial markers in the CT field of view, thus establishing a point correspondence for $T_{P,M}$. For the Tool + Preview interface, this provides the pose of the pointer tool $T_{M,T} = T_{M,H} T_{H,T}$ with respect to the patient model, where

$T_{M,H} = T_{P,M}^{-1} \, T_{H,P}^{-1}$.  (1)
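
As a concrete illustration of this transform chain, the sketch below composes Eq. (1) from 4 × 4 homogeneous transforms with NumPy. The pose values and the helper `invert_pose` are illustrative placeholders, not outputs of the actual tracking pipeline.

```python
import numpy as np

def invert_pose(T):
    """Invert a rigid 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv

# Placeholder poses (identity rotations, arbitrary translations in mm), standing
# in for the tracked pelvis pose T_{H,P} and the CT-based registration T_{P,M}.
T_H_P = np.eye(4); T_H_P[:3, 3] = [100.0, -50.0, 600.0]   # pelvis in HMD frame
T_P_M = np.eye(4); T_P_M[:3, 3] = [20.0, 10.0, -30.0]     # model in pelvis frame

# Eq. (1): pose of the HMD in the patient-model frame.
T_M_H = invert_pose(T_P_M) @ invert_pose(T_H_P)

# A tracked tool pose T_{H,T} is then expressed in the model frame as T_{M,T}.
T_H_T = np.eye(4); T_H_T[:3, 3] = [0.0, 150.0, 450.0]
T_M_T = T_M_H @ T_H_T
```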

Following two pivot calibrations, we obtain the tool tip $a_T$ and back $b_T$ in frame $T$, which are used to specify the projection matrix of a DRR along the principal ray given by the pointer tool:

$P = K[R \mid t] = K\big[\, \mathrm{Rot}_\theta(\hat{r}_M \times \hat{z}) \;\big|\; a_M - d\, \hat{r}_M \,\big]$  (2)

where $a_M = T_{M,H} T_{H,T} a_T$, $b_M = T_{M,H} T_{H,T} b_T$, $\hat{r}_M = (b_M - a_M)/\lVert b_M - a_M \rVert$ is the principal ray direction, $\cos\theta = \hat{r}_M \cdot \hat{z}$, and $K$ is the camera intrinsic matrix. $d$ is the distance from the virtual camera center to the tool tip, which we set to 650 mm. Note that we do not constrain rotation in the detector plane, since most digital X-ray devices, including the Loop-X, allow for image rotation after acquisition. The virtual camera approximates an X-ray device with a source-to-detector distance of 1020 mm, a 0.194 mm pixel size, and a 1536 × 1536 detector. For the sake of performance, the DRR is rendered with 4 × 4 binning.
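
The following sketch shows one way Eq. (2) could be assembled with NumPy and SciPy, assuming the standard pinhole convention $P = K[R \mid -RC]$ with the virtual source (camera centre) at $C = a_M - d\,\hat{r}_M$. The helper `look_at_rotation` and the tool-point values are illustrative assumptions, not the actual implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def look_at_rotation(r_hat, z_hat=np.array([0.0, 0.0, 1.0])):
    """Rotation mapping the principal-ray direction r_hat onto the camera z-axis,
    i.e. Rot_theta about the axis r_hat x z_hat with cos(theta) = r_hat . z_hat."""
    axis = np.cross(r_hat, z_hat)
    norm = np.linalg.norm(axis)
    if norm < 1e-9:  # ray already (anti-)parallel to the z-axis
        return np.eye(3) if r_hat @ z_hat > 0 else Rotation.from_euler("x", np.pi).as_matrix()
    theta = np.arccos(np.clip(r_hat @ z_hat, -1.0, 1.0))
    return Rotation.from_rotvec(axis / norm * theta).as_matrix()

# Virtual C-arm parameters from the text, with 4 x 4 binning applied.
sdd_mm, pixel_mm, n_px = 1020.0, 0.194 * 4, 1536 // 4
f_px = sdd_mm / pixel_mm                            # focal length in pixels
K = np.array([[f_px, 0.0, n_px / 2.0],
              [0.0, f_px, n_px / 2.0],
              [0.0, 0.0, 1.0]])

# Pivot-calibrated tool tip a_M and back b_M in the model frame (placeholder values, mm).
a_M = np.array([10.0, -5.0, 200.0])
b_M = np.array([10.0, -5.0, 350.0])
r_hat_M = (b_M - a_M) / np.linalg.norm(b_M - a_M)   # principal-ray direction

d = 650.0                                           # camera centre to tool tip (mm)
C = a_M - d * r_hat_M                               # assumed virtual source position
R = look_at_rotation(r_hat_M)
P = K @ np.hstack([R, (-R @ C).reshape(3, 1)])      # 3 x 4 projection matrix
```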

Figure 2.

Our experimental setup, with the “Tool + Preview” interface shown. The patient model $M$ is registered to a radiopaque pelvic phantom $P$ (visually obscured). In our experiments, optical tracking by the HMD provides the kinematic chain to obtain $T_{M,T}$, while the Brainlab Curve tracker $L$ has line of sight to both the tool $T$ and gantry $G$, enabling positioning of the Loop-X. A DRR (green) is shown in the surgeon’s line of sight, corresponding to a real X-ray along the same principal ray direction, between the X-ray source (below table) and detector (top). DRRs are flipped left-to-right for viewing because the source is below the patient.

For the AR Handle + Preview interface, the AR Handle consists of a grabbable virtual object with pose $T_{H,ARH}$ with respect to the HMD. The handle indicates a center point $c_{ARH}$, analogous to the physical tool tip $a_T$, as well as the principal ray direction $\hat{r}_{ARH}$ in the handle frame. The projection matrix is then calculated as above, substituting $a_M \leftarrow T_{M,H} T_{H,ARH} c_{ARH}$ and $\hat{r}_M \leftarrow T_{M,H} T_{H,ARH} \hat{r}_{ARH}$. The AR Handle can be manipulated in 6 DOF by pinching the virtual object itself or from afar.
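
A short sketch of that substitution, reusing the notation above; the pose and direction values are placeholders, and the homogeneous-coordinate handling is one reasonable way to apply the transforms.

```python
import numpy as np

# Placeholder poses: HMD in the model frame and AR Handle in the HMD frame (mm).
T_M_H = np.eye(4)
T_H_ARH = np.eye(4); T_H_ARH[:3, 3] = [0.0, 100.0, 500.0]

c_ARH = np.array([0.0, 0.0, 0.0, 1.0])   # handle centre point (homogeneous, w = 1)
r_ARH = np.array([0.0, 0.0, 1.0, 0.0])   # handle ray direction (homogeneous, w = 0)

# Substitute the handle centre for the tool tip a_M and the handle ray for r_M;
# the projection matrix then follows from Eq. (2) exactly as before.
a_M = (T_M_H @ T_H_ARH @ c_ARH)[:3]
r_hat_M = (T_M_H @ T_H_ARH @ r_ARH)[:3]
r_hat_M = r_hat_M / np.linalg.norm(r_hat_M)
```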

Our prototype uses a Microsoft HoloLens 2 in communication with a server. For all optical tracking from the HoloLens, we adapt the algorithm proposed by Martin-Gomez et al. (2022) to stream infrared and short-throw depth sensing from the HoloLens to the server, which computes the pointer tool and pelvis poses.1 At the same time, the server renders DRRs using a Titan 2080 Ti graphics card and open source software tools modified to support dynamic updates to the viewpoint (Unberath et al. 2018, 2019). The workstation communicates the tool poses and updated DRRs back to the HoloLens at a rate of one update per second, as seen in the surgeon’s field of view. The live preview frame rate and resolution are constrained primarily by the wireless connection to the HMD, rather than by DRR render time. The HoloLens also displays holographic reconstructions of the optical markers, confirming that tracking is up to date. This is necessary because the Loop-X gantry occasionally obscures the line of sight between the HoloLens and the pointer tool. Additionally, the short-throw depth sensing of the HoloLens, which is intended to facilitate accurate hand tracking, is limited to 1 m.
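
The prototype’s control flow can be summarized by the schematic loop below. This is a sketch only: `pose_stream`, `compute_projection`, `drr_renderer`, and `hmd_link` are hypothetical stand-ins for the HoloLens tracking stream, the Eq. (2) computation, the GPU DRR renderer, and the wireless link back to the HMD; they do not name components of the actual implementation.

```python
import time

TARGET_PERIOD_S = 1.0  # the prototype streams roughly one preview per second

def preview_loop(pose_stream, compute_projection, drr_renderer, hmd_link):
    """Schematic server-side loop: consume tracked poses, render a DRR for the
    currently specified view, and push the poses and image back to the HMD."""
    for tool_pose, pelvis_pose in pose_stream:
        t0 = time.monotonic()
        projection = compute_projection(tool_pose, pelvis_pose)  # Eq. (2)
        drr = drr_renderer.render(projection)                    # 4 x 4 binned preview
        hmd_link.send(poses=(tool_pose, pelvis_pose), image=drr)
        # The wireless connection, not DRR render time, bounds the update rate,
        # so throttle rather than flood the HMD with stale frames.
        time.sleep(max(0.0, TARGET_PERIOD_S - (time.monotonic() - t0)))
```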

4. Experiment

As a baseline, we compare our proposed interfaces to the existing Loop-X feature that enables a clinician to specify the viewing direction with a tracked tool. The Loop-X device repositions itself to obtain this viewing direction as closely as possible, adjusting the source angle, detector angle, lateral movement, longitudinal movement, traction yaw (rotation on the floor), and gantry tilt. For the Tool + Preview interface, the same tracked tool is used to specify the viewing direction of both the DRR and the Loop-X, ensuring the two poses align up to the tracking error of both systems. For AR Handle + Preview, we direct the Loop-X to the indicated pose, after the handle has been specified, by holding the pointer tool in the same position in space. This procedural step works around the current implementation challenges of controlling the Loop-X and is not considered part of the AR Handle interface.
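
For bookkeeping, the degrees of freedom reported for each final pose can be collected in a small structure like the one below. This is a hypothetical convenience for organizing the values in Table 1, not part of the Loop-X device interface.

```python
from dataclasses import dataclass

@dataclass
class LoopXPose:
    """Degrees of freedom the Loop-X adjusts to reach a specified viewing
    direction, matching the columns reported in Table 1."""
    source_deg: float         # source angle
    detector_deg: float       # detector angle
    lateral_cm: float         # lateral movement
    longitudinal_cm: float    # longitudinal movement
    traction_yaw_deg: float   # rotation on the floor
    gantry_tilt_deg: float    # gantry tilt

# Example: the final "Tool + Preview" AP pose from Table 1.
ap_tool_preview = LoopXPose(178.91, 2.93, -0.02, 13.27, -0.67, -0.48)
```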

4.1. Results

Our results show the potential for the proposed gantry interfaces to reduce the number of real X-rays needed. “Tool + Preview” facilitated single-shot acquisition of the desired views, compared to 2–4 shots using the baseline “Tool” interface with no mixed reality component. “AR Handle + Preview” performed similarly, with 4/6 standard views acquired on the first try. In the AP and Inlet views, only minor lateral movements were needed to correct the initial shot and bring the relevant anatomy fully in view. The baseline “Tool” interface, by comparison, required up to 3 manual adjustments to the viewing angle, where the surgeon directed a non-expert operator with instructions such as “roll it up just a little more,” and “tilt back,” in addition to lateral and longitudinal movements.

The final images, shown in Fig. 4, demonstrate that nearly identical views onto the anatomy were achieved, with the primary differences arising from rotation about the principal ray. For completeness, Table 1 indicates the corresponding pose acquired by the Loop-X for each image, relative to an initial “home” position in line with the patient table. Different poses are not necessarily an indication of significantly different viewing directions, because only the viewing direction is specified by the tool, not the rotation about it. The Loop-X attains this viewing direction as closely as possible while avoiding collisions with the table.

Figure 4.

The final X-ray images obtained for each view under the three gantry interfaces. 8 out of 10 images using a live preview were evaluated as “perfect” on the first shot, whereas the existing “Tool” interface required at least 2 and up to 4 shots per view. Image artifacts are a result of CLAHE histogram equalization, applied by default.

Table 1.

The final Loop-X pose for each view, using the baseline Brainlab pointer tool interface (“Tool”), our “Tool + Preview” interface, and our “AR Handle + Preview” interface. Note the number of shots required to obtain each view.

View | Interface | Source (°) | Detector (°) | Lateral (cm) | Longitudinal (cm) | Traction Yaw (°) | Gantry Tilt (°) | Number of Shots
AP | Tool | 179.4 | 0.47 | −0.02 | 9.3 | 0.99 | −1.92 | 4
AP | Tool + Preview | 178.91 | 2.93 | −0.02 | 13.27 | −0.67 | −0.48 | 1
AP | AR Handle + Preview | 173.31 | −2.86 | −0.32 | 14.98 | 4.64 | 4.5 | 2
Inlet | Tool | 175.6 | −3.29 | −0.07 | 25.62 | 3.62 | 29.39 | 2
Inlet | Tool + Preview | 177.98 | 3.46 | −0.18 | 22.98 | −0.23 | 21.34 | 1
Inlet | AR Handle + Preview | 177.24 | 6.67 | −2.99 | 23.79 | −1.02 | 24.35 | 2
Outlet | Tool | 178.63 | 4.59 | 2.59 | −14.31 | 9.13 | −28.89 | 4
Outlet | Tool + Preview | 177.1 | −11.42 | −0.01 | −11.01 | −5.55 | −25.01 | 1
Outlet | AR Handle + Preview | 181.11 | 18.61 | −2.29 | −16.37 | 7.54 | −24.52 | 1
Obturator Oblique | Tool | 222.25 | 42.94 | 0.05 | 9.72 | 0.93 | −4.01 | 2
Obturator Oblique | Tool + Preview | 223.85 | 53.5 | −0.11 | 5.82 | 4.24 | −5.68 | 1
Obturator Oblique | AR Handle + Preview | 214.74 | 45.19 | 0.04 | 15.19 | −6.23 | −0.47 | 1
Iliac Oblique | Tool | 138.95 | −22.59 | −3.22 | 11.69 | 6.04 | 0.18 | 2
Iliac Oblique | Tool + Preview | 127.03 | −41.12 | −5.42 | 19.12 | 9.97 | 10.26 | 1
Iliac Oblique | AR Handle + Preview | 132.93 | −25.49 | −4.63 | 11.08 | 4.14 | −1.35 | 1
Teardrop | Tool | 226.41 | 41.14 | 8.57 | −15.47 | 9.96 | −24.46 | 4
Teardrop | Tool + Preview | 198.49 | 30.71 | −0.17 | −14.89 | 9.86 | −22.41 | 1
Teardrop | AR Handle + Preview | 206.23 | 37.4 | −0.06 | −15.99 | 9.95 | −24.26 | 1

5. Discussion

A mixed reality interface for controlling robotic X-ray devices presents numerous opportunities for future work. For instance, our prototype implementation relies on optical tracking of the anatomy, but the anatomical model for rendering DRRs could likewise consist of a preoperative CT or statistical shape model registered to a tracked C-arm via automatic 2D/3D registration (Grupp et al. 2020; Gao et al. 2020; De Silva et al. 2018). Additional work may focus on the human-centered design of the mixed reality interface, which has been shown to affect surgical accuracy (Gu et al. 2022), for example by leveraging the depth sensing of the HoloLens to enrich the information available when planning the next robotic movement. The motion of the AR Handle might be constrained to reflect physically achievable views with the robotic X-ray system, avoiding collisions with the patient and surgical instruments. Moreover, fine control over the rotational degree of freedom in mixed reality may be integrated with the post-acquisition rotation of the image by the X-ray device. Finally, future work will provide higher resolution DRRs at a faster frame rate by leveraging more efficient data transfer protocols.

6. Conclusion

We have considered the question of how best to interface with robotic X-ray devices, in order to investigate more effective interfaces that reduce the time needed to reposition the gantry and the number of images acquired solely for the purpose of view finding. With the introduction of mobile robotic arms with independent source and detector movements, this question becomes even more pertinent because the added complexity may result in miscommunication between the surgeon and the radiological technician. We studied the use of mixed reality to show a live preview of the next X-ray, increasing the likelihood that the first shot is as close as possible to the desired viewing plane. Separately, the use of an AR Handle allows the surgeon to specify the next view even when the arm or detector would physically obstruct placement of a pointer tool, as well as to maintain visualization of previously acquired views without holding a tool in place. Based on our evaluation, either of these paradigms can enable single-shot acquisition of the desired view, reducing the overall number of acquisitions and consequently the radiation exposure for the patient and clinicians.

Acknowledgments

This work was supported by the NIH under Grant No. R21EB028505.

Footnotes

1. The version of the tracking algorithm used in this work is currently under review and is therefore anonymized.

References

1. Andress S, Johnson A, Unberath M, Winkler AF, Yu K, Fotouhi J, Weidert S, Osgood G, and Navab N (2018, January). On-the-fly augmented reality for orthopedic surgery using a multimodal fiducial. J. Med. Imaging 5(2), 021209.
2. Casari FA, Navab N, Hruby LA, Kriechling P, Nakamura R, Tori R, de Lourdes dos Santos Nunes F, Queiroz MC, Fürnstahl P, and Farshad M (2021). Augmented reality in orthopedic surgery is emerging from proof of concept towards clinical studies: a literature review explaining the technology and current state of the art. Current Reviews in Musculoskeletal Medicine 14(2), 192–203.
3. De Silva T, Punnoose J, Uneri A, Mahesh M, Goerres J, Jacobson M, Ketcha MD, Manbachi A, Vogt S, Kleinszig G, Khanna AJ, Wolinksy J-P, Siewerdsen JH, and Osgood G (2018, January). Virtual fluoroscopy for intraoperative C-arm positioning and radiation dose reduction. J. Med. Imaging 5(1).
4. Deib G, Johnson A, Unberath M, Yu K, Andress S, Qian L, Osgood G, Navab N, Hui F, and Gailloud P (2018). Image guided percutaneous spine procedures using an optical see-through head mounted display: proof of concept and rationale. Journal of NeuroInterventional Surgery 10(12), 1187–1191.
5. Elmi-Terander A, Burström G, Nachabe R, Skulason H, Pedersen K, Fagerlund M, Ståhl F, Charalampidis A, Söderman M, Holmin S, Babic D, Jenniskens I, Edström E, and Gerdhem P (2019, April). Pedicle screw placement using augmented reality surgical navigation with intraoperative 3D imaging: a first in-human prospective cohort study. Spine 44(7), 517.
6. Fida B, Cutolo F, di Franco G, Ferrari M, and Ferrari V (2018). Augmented reality in open surgery. Updates in Surgery 70(3), 389–400.
7. Fotouhi J, Mehrfard A, Song T, Johnson A, Osgood G, Unberath M, Armand M, and Navab N (2020). Development and pre-clinical analysis of spatiotemporal-aware augmented reality in orthopedic interventions. IEEE Transactions on Medical Imaging 40(2), 765–778.
8. Fotouhi J, Unberath M, Song T, Gu W, Johnson A, Osgood G, Armand M, and Navab N (2019, June). Interactive Flying Frustums (IFFs): spatially aware surgical data visualization. Int. J. CARS 14(6), 913–922.
9. Gao C, Liu X, Gu W, Killeen B, Armand M, Taylor R, and Unberath M (2020, September). Generalizing spatial transformers to projective geometry with applications to 2D/3D registration. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, pp. 329–339. Cham, Switzerland: Springer.
10. Gong RH, Güler Ö, Kürklüoglu M, Lovejoy J, and Yaniv Z (2013, December). Interactive initialization of 2D/3D rigid registration. Med. Phys. 40(12), 121911.
11. Grupp RB, Unberath M, Gao C, Hegeman RA, Murphy RJ, Alexander CP, Otake Y, McArthur BA, Armand M, and Taylor RH (2020, May). Automatic annotation of hip anatomy in fluoroscopy for robust and efficient 2D/3D registration. Int. J. CARS 15(5), 759–769.
12. Gsaxner C, Li J, Pepe A, Schmalstieg D, and Egger J (2021). Inside-out instrument tracking for surgical navigation in augmented reality. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, pp. 1–11.
13. Gu W, Shah K, Knopf J, Josewski C, and Unberath M (2022, May). A calibration-free workflow for image-based mixed reality navigation of total shoulder arthroplasty. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 10(3), 243–251.
14. Hajek J, Unberath M, Fotouhi J, Bier B, Lee SC, Osgood G, Maier A, Armand M, and Navab N (2018, September). Closing the calibration loop: an inside-out-tracking paradigm for augmented reality in orthopedic surgery. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, pp. 299–306. Cham, Switzerland: Springer.
15. Hansson SO (2013, January). ALARA: what is reasonably achievable? In Radioactivity in the Environment, Volume 19, pp. 143–155. Waltham, MA, USA: Elsevier.
16. Jud L, Fotouhi J, Andronic O, Aichmair A, Osgood G, Navab N, and Farshad M (2020). Applicability of augmented reality in orthopedic surgery – a systematic review. BMC Musculoskeletal Disorders 21(1), 1–13.
17. Keil H and Trapp O (2022, May). Fluoroscopic imaging: new advances. Injury.
18. Kunz C, Maurer P, Kees F, Henrich P, Marzi C, Hlaváč M, Schneider M, and Mathis-Ullrich F (2020). Infrared marker tracking with the HoloLens for neurosurgical interventions. Current Directions in Biomedical Engineering 6(1).
19. Mandelka E, Barbari JE, Kausch L, Privalov M, Grützner PA, Vetter SY, and Franke J (2022, February). Intraoperative adjustment of radiographic standard projections of the spine: interrater- and intrarater variance and consequences of ‘fluoro-hunting’ considering time and radiation exposure – a cadaveric study. medRxiv, 2022.02.12.22270884.
20. Martin-Gomez A, Li H, Song T, Yang S, Wang G, Ding H, Navab N, Zhao Z, and Armand M (2022). STTAR: surgical tool tracking using off-the-shelf augmented reality head-mounted displays. arXiv.
21. Rahman R, Wood ME, Qian L, Price CL, Johnson AA, and Osgood GM (2020). Head-mounted display use in surgery: a systematic review. Surgical Innovation 27(1), 88–100.
22. Teatini A, Kumar RP, Elle OJ, and Wiig O (2021, March). Mixed reality as a novel tool for diagnostic and surgical navigation in orthopaedics. Int. J. CARS 16(3), 407–414.
23. Unberath M, Fotouhi J, Hajek J, Maier A, Osgood G, Taylor R, Armand M, and Navab N (2018, October). Augmented reality-based feedback for technician-in-the-loop C-arm repositioning. Healthcare Technol. Lett. 5(5), 143–147.
24. Unberath M, Zaech J-N, Gao C, Bier B, Goldmann F, Lee SC, Fotouhi J, Taylor R, Armand M, and Navab N (2019). Enabling machine learning in X-ray-based procedures via realistic simulation of image formation. Int. J. CARS.
25. Unberath M, Zaech J-N, Lee SC, Bier B, Fotouhi J, Armand M, and Navab N (2018). DeepDRR – a catalyst for machine learning in fluoroscopy-guided procedures. In Proc. Medical Image Computing and Computer Assisted Intervention (MICCAI). Springer.
