Abstract
Augmented reality (AR) surgery navigation systems display the preoperatively planned virtual model at the correct position in the real surgical scene to assist the operation. Accurate calibration of the mapping between the virtual coordinate system and the real world is the key to the virtual-real fusion effect. Previous calibration methods require the surgeon to perform complex manual procedures before use. This paper introduces a novel motionless virtual-real calibration method. The method only requires taking a mixed-reality image containing both virtual and real marker balls with the built-in forward camera of the AR glasses. The mapping between the virtual and real spaces is then calculated using the camera coordinate system as a transformation medium. The composition and working process of the AR navigation system are introduced, and the mathematical principle of the calibration is derived. The feasibility of the proposed calibration scheme is verified in a validation experiment, and the average registration accuracy is around 5.80 mm, which is on the same level as previously reported methods. The proposed method is convenient and rapid to implement, and the calibration accuracy does not depend on user experience. Furthermore, it can potentially enable real-time updating of the registration transformation matrix, which can improve the AR fusion accuracy when the AR glasses move. This motionless calibration method has great potential for future clinical navigation research.
Keywords: Augmented reality, Optical tracker, Surgical navigation, Virtual and real calibration, PNP
1. Introduction
With the rapid development of computer-aided surgery, surgical navigation systems have been extensively used in surgery [1]. Current digital navigation technology can precisely locate the anatomical structures and pathological tissues of the operation area and can control the resection range to protect important anatomical structures [2]. Taking oral and craniofacial surgery as an example, the complexity of this type of surgery demands high precision [3, 4], so the surgical navigation system is indispensable. However, traditional surgical navigation is based on a two-dimensional (2D) display screen, which brings the following problems. Firstly, the image displayed on the screen does not establish a direct spatial relationship with the patient's anatomy [5], so it cannot fully play the role of navigation. Secondly, the 2D screen cannot display the true three-dimensional (3D) surgical planning model and lacks depth information. Thirdly, the doctor needs to repeatedly switch his or her gaze between the display screen and the operation area during the operation [6], which leads to hand-eye asynchrony and visual fatigue.
Augmented reality (AR) technology has become one of the hotspots in the field of computer applications. AR superimposes computer-generated virtual content on the real world to achieve a 3D virtual-real fusion display and enhances users' perception of the real world [7, 8, 9]. Therefore, when AR technology is applied to surgical navigation, the 3D model reconstructed from the CT data of the patient's lesions can be presented in front of the surgeon, integrated with the patient's actual anatomy [2, 10]. This extends the surgeon's field of vision, enhances the visual system, and reveals the internal structure of organs invisible to the naked eye [2], which may effectively solve the above problems of surgical navigation [10, 11, 12].
At present, there has been extensive research on AR-based surgical navigation. Fei B et al. [13] applied an AR navigation tool in irreversible electroporation ablation of the pancreas, in which the real-time 2D ultrasound image is rendered in the target area and the angle between the actual trajectory and the planned trajectory is displayed by HoloLens (Microsoft). The virtual and real space registration is carried out by hand-eye calibration. In orthopedic surgery, Gibby et al. [14] guided the placement of pedicle screws through the 3D surgical path displayed in AR glasses, but the alignment between the virtual path and the spinal model was achieved manually, and the registration accuracy was not guaranteed. In mandibular surgery, Lin et al. [15, 16] applied AR technology to robot-assisted surgery (RAS) to allow precision and automation in operational procedures. The intraoperative registration system tracks the geometric center of the corresponding marker complex via the optical camera, and the workstation performs registration calculations and tracking in real time with an automatic registration program. Zhu et al. [17, 18] used HoloLens to track a picture marker to realize AR navigation. An occlusal splint was tailored and cemented for the patient; the picture marker attached to the splint is recognized and tracked by HoloLens, so that the virtual model can overlap with the real mandible in real time. This scheme requires the additional manufacture of an accurate occlusal splint, which is rather troublesome, and the viewing angle of HoloLens is greatly limited by the 2D planar structure of the picture marker. Moreover, large splints can also be an obstacle during facial surgery [19].
Recently, Sun et al. [20] put forward an optical tracker based AR navigation scheme and designed a fast calibration algorithm to solve the problem of virtual-real fusion. By aligning the tracker probe tip to the virtual marker points shown in the AR glasses, the mapping between the virtual space and the optical tracking space can be estimated from two sets of three-dimensional points in their respective coordinate systems. This calibration method is relatively easy to implement; however, the calibration accuracy is still severely affected by hand shaking during the probing process. The deficiencies of the SLAM (Simultaneous Localization and Mapping) function of the HoloLens can also have a great impact on the calibration stability of the method.
In AR surgical navigation studies, the AR calibration is the key step that connects the virtual space with the real world. As shown above, the calibration process usually involves complex manual operations. In this study, a new motionless virtual-real calibration method is developed, which directly solves the mapping between the 3D virtual space and the 3D real world from a single snapshot taken by the built-in forward camera of the AR glasses. The conversion between the coordinate system of the virtual space and that of the tracker is then calculated automatically, which greatly lessens the burden on users in terms of time and workload. Since the method is automatic, it does not rely on user experience; furthermore, the calibration can potentially be refreshed in real time, which mitigates the influence of user movement.
The paper is structured as follows. In Section 2, the first half introduces the design and workflow of the AR surgical navigation system, and the second half provides an overview of the proposed calibration procedure and its mathematical model. In Section 3, experiments are designed to validate the feasibility of this calibration method and its registration accuracy. Section 4 then discusses the factors that contribute to the registration error. Finally, Section 5 draws conclusions and outlines future research.
2. Materials and methods
2.1. Configuration of AR surgical navigation system
The AR surgical navigation system consists of an optical tracking system, AR glasses and a data processing unit, as shown in Figure 1. The optical tracking system consists of the stereo vision tracker Polaris Vicra (Northern Digital, Canada) and the matching optical probe and bracket. The optical probe carries several marker balls, and its tip coordinates can be measured in real time by the optical tracker with a volumetric accuracy of 0.25 mm and a maximum frame rate of 20 Hz. The optical tracker coordinate system serves as the real-world coordinate system. The optical bracket is rigidly fixed to the patient, so that the tracker can detect patient movement by monitoring the positions of the bracket marker balls. Microsoft HoloLens 2 (Microsoft, Redmond, USA) is selected as the AR display device. The HoloLens 2 glasses are based on the optical waveguide principle, offering a 30° field-of-view color display, and their SLAM capability allows the virtual model to remain stably fused with the real world when the operator changes the view direction. The workstation computer is the data processing unit, which controls the optical tracker and also communicates with the AR glasses. A photograph of the whole system is shown in Figure 2.
Figure 1.
Composition diagram of AR surgical navigation system.
Figure 2.
AR surgical navigation system.
The function of the AR surgical navigation system is to realize the spatial fusion between the computer-generated 3D elements reconstructed from CT data and the patient's surgical area, so that doctors can observe the virtual-real superposition from any viewing angle. During the operation, the optical bracket is attached to the patient's surgical site, and the tracker obtains the coordinates of the bracket marker balls in real time, and thus the real-time coordinates of the patient. The computer then calculates the spatial transformation of the virtual model from the change of the patient's coordinates. After that, the computer transmits the result to HoloLens, which updates the displayed content by rotating and translating the virtual model. The communication between the computer and HoloLens is realized over the Transmission Control Protocol (TCP): HoloLens connects to a server running on the computer as a client to realize real-time data interaction [21].
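As an illustration of this data path, the following is a minimal sketch of a workstation-side TCP server that streams a 4×4 model pose to the HoloLens client. The port number, the message layout (16 little-endian floats) and the function names are assumptions for illustration, not the exact protocol used in the system.

```python
import socket
import struct
import numpy as np

HOST, PORT = "0.0.0.0", 8080   # hypothetical address and port

def serve_pose_updates(get_model_pose):
    """Accept the HoloLens client and stream 4x4 model poses to it over TCP."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()                 # HoloLens connects as a TCP client
        with conn:
            while True:
                # get_model_pose() is assumed to return the current 4x4 pose of the
                # virtual model computed from the latest tracker reading.
                T = np.asarray(get_model_pose(), dtype=np.float32)
                conn.sendall(struct.pack("<16f", *T.flatten()))  # 16 little-endian floats
```

On the HoloLens side, the Unity client would read the same 16 floats and apply them to the transform of the virtual model.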
2.2. Operation steps of AR surgical navigation system
1. CT scanning and 3D model reconstruction.
The patient undergoes a thin-slice spiral CT scan before the operation. The CT data are saved in DICOM format and imported into ProPlan CMF 3.0 (Materialise, Leuven, Belgium), in which the CT data are carefully reconstructed into a target 3D model for preoperative planning [22]. The 3D virtual model will later be displayed in the AR glasses.
2. AR application development.
The virtual scene is designed with Unity 2019.4.13f1c1 (Unity Technologies, San Francisco, USA), including UI buttons for virtual model control, the virtual marker balls and the planned surgical model, as shown in Figure 3. The scene is packaged into a UWP (Universal Windows Platform) application and deployed to HoloLens from the Visual Studio development environment.
3. Surgical model registration.
Figure 3.
Developing the virtual scene with Unity. The virtual scene displayed on HoloLens includes the surgical planned model, virtual marker balls, and UI buttons.
The planned surgical model is not yet linked to the patient in either the tracker coordinate system or the HoloLens virtual display. Therefore, the first step is to register the surgical model to the patient in the tracker system. Six registration points with clear positioning features, for example tooth tips, are selected in the patient's surgical area. Their real-world coordinates are collected with the tracker optical probe, while the coordinates of their corresponding points on the surgical model are already known. The transformation matrix between the surgical model and the patient in the real-world (tracker) coordinate system can then be calculated from the two sets of coordinates.
4. Virtual-real transformation calibration.
The transformation relationship between the HoloLens virtual display system and the tracker system determines the virtual-real fusion effect. Therefore, accurate calibration of this transformation matrix is a key step of AR navigation. The principle of the proposed motionless calibration is described in detail in the next section.
5. Real-time update of the virtual display.
After the virtual-real calibration, the surgical 3D model planned before the operation is transferred to the target area, achieving the virtual-real fusion display. When the operation area moves, the computer obtains the position of the target area in real time by using the tracker to track the reference bracket, and the translation and rotation of the model displayed in the HoloLens virtual space are updated accordingly (a sketch of one possible update scheme is given below).
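The following is a minimal sketch of one plausible implementation of this update step, assuming the reference bracket is rigidly attached to the patient and that all poses are expressed as 4×4 homogeneous matrices; the variable names are illustrative, and the exact update rule used by the system is not specified in the text.

```python
import numpy as np

def updated_model_pose(T_virtual_tracker, T_tracker_ct_initial,
                       T_tracker_bracket_initial, T_tracker_bracket_now):
    """Re-pose the CT model in the virtual space after the patient (bracket) moves.

    T_virtual_tracker:      tracker -> virtual space (from the virtual-real calibration)
    T_tracker_ct_initial:   CT model -> tracker (from the surgical model registration)
    T_tracker_bracket_*:    bracket -> tracker poses reported by the optical tracker
    """
    # Rigid patient motion measured by the tracker, expressed in the tracker frame.
    motion = T_tracker_bracket_now @ np.linalg.inv(T_tracker_bracket_initial)
    # Updated CT-model-to-tracker registration, mapped into the virtual space.
    return T_virtual_tracker @ motion @ T_tracker_ct_initial
```

The resulting matrix can then be sent to HoloLens over the TCP link described in Section 2.1.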
2.3. Virtual and real calibration method design
In order to make the preoperative planning model displayed on HoloLens coincide with the surgical site in the real-world scene, the mapping between the HoloLens virtual space and the real world must be established. In other words, the calibration solves the transformation between the coordinate system of the virtual space, denoted $\{V\}$, and that of the tracker, denoted $\{N\}$, which represents the real world [20]. In this paper, we use the coordinate system of the HoloLens forward camera, denoted $\{C\}$, as the conversion medium to achieve a motionless calibration, as shown in Figure 4.
Figure 4.
Transformation diagram of the proposed virtual-real calibration. The schematic diagram describes the conversion relationships between the three coordinate systems.
For the proposed calibration, we only need to take a calibration picture with the HoloLens forward camera, and the coordinate conversion is then solved automatically. The calibration process is depicted in Figure 5.
Figure 5.
The virtual-real calibration flow chart.
Before starting the operation, it is necessary to use the HoloLens forward camera to take several pictures of a checkerboard at different poses, from which the intrinsic parameters of the camera are calculated with the camera calibration toolbox of MATLAB 2018b (MathWorks, Natick, USA). This step is done only once and the intrinsic matrix is stored for subsequent use. The calibration then includes the following steps.
1. The user wears the HoloLens glasses and uses the forward camera to take a mixed-reality photo. The photo contains the four real marker balls on the bracket fixed to the patient and the six virtual balls designed in advance in the HoloLens application. In the experiment, a 3D-printed plastic skull model is used to represent the surgical target. The captured picture is shown in Figure 6, which includes the four real marker balls fixed on the skull model and the virtual marker balls suspended in the air.
2. Through image processing, the calibration software detects the center pixel coordinates of the four real marker balls and the six virtual marker balls in the picture (a minimal detection sketch is given after Figure 6).
3. The coordinates of the four real marker balls in the tracker coordinate system are obtained by the tracker.
4. The coordinates of the six virtual marker balls in the virtual space coordinate system are already known from the development of the HoloLens application. Therefore, using the camera coordinate system as a medium, the transformation matrix from the tracker coordinate system to the virtual space can be solved.
Figure 6.
AR virtual-real calibration image. The mixed-reality image captured by HoloLens forward camera, which is used for the calibration.
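A minimal sketch of the ball-center detection mentioned in step 2 is given below, assuming the balls appear as roughly circular blobs in the mixed-reality photo. The preprocessing, the Hough-transform parameters, and the way real and virtual balls are separated (e.g., by color) are assumptions, since the actual image-processing pipeline is not detailed here.

```python
import cv2
import numpy as np

def detect_ball_centers(image_path, expected):
    """Return up to `expected` (u, v) circle centers found in the calibration photo."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                       # suppress speckle noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                               param1=120, param2=30, minRadius=8, maxRadius=60)
    if circles is None:
        return np.empty((0, 2))
    return circles[0, :expected, :2]                     # (u, v) per detected ball
```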
2.4. Mathematical model of calibration and AR fusion display
2.4.1. The mathematical model of virtual-real calibration
The transformation from a world coordinate system to the pixel coordinate system can be expressed by the camera projection model, defined as Eq. (1):

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathbf{K}\,[\mathbf{R}\;\;\mathbf{t}]\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{1} $$

where $(X_w, Y_w, Z_w)$ are the coordinates of point $P$ in the world coordinate system; $(u, v)$ are the image coordinates of point $P$ in the pixel coordinate system; $s$ is the scale factor; $\mathbf{K}$ is the camera intrinsic matrix and $[\mathbf{R}\;\;\mathbf{t}]$ is the extrinsic matrix of the camera with respect to the world coordinate system.
The pixel coordinates and the tracker coordinates of the four real marker balls are denoted as $p_{r,i}$ and $P^{N}_{r,i}$ $(i = 1,\dots,4)$ respectively. The extrinsic parameters $[\mathbf{R}^{C}_{N}\;\;\mathbf{t}^{C}_{N}]$, and hence the homogeneous transformation $\mathbf{T}^{C}_{N}$ from the tracker coordinate system $\{N\}$ to the forward camera coordinate system $\{C\}$, can be calculated from Eq. (2):

$$ s_i\begin{bmatrix} p_{r,i} \\ 1 \end{bmatrix} = \mathbf{K}\,[\mathbf{R}^{C}_{N}\;\;\mathbf{t}^{C}_{N}]\begin{bmatrix} P^{N}_{r,i} \\ 1 \end{bmatrix}, \quad i = 1,\dots,4 \tag{2} $$
Similarly, $\mathbf{T}^{C}_{V}$, the transformation from the HoloLens virtual coordinate system $\{V\}$ to the forward camera coordinate system, can be determined from Eq. (3), where $p_{v,j}$ are the pixel coordinates of the six virtual marker balls and $P^{V}_{v,j}$ $(j = 1,\dots,6)$ are their virtual space coordinates:

$$ s_j\begin{bmatrix} p_{v,j} \\ 1 \end{bmatrix} = \mathbf{K}\,[\mathbf{R}^{C}_{V}\;\;\mathbf{t}^{C}_{V}]\begin{bmatrix} P^{V}_{v,j} \\ 1 \end{bmatrix}, \quad j = 1,\dots,6 \tag{3} $$
Then, with Eq. (4), we can obtain $\mathbf{T}^{V}_{N}$, the core transformation matrix from the coordinate system of the tracker to the HoloLens virtual space:

$$ \mathbf{T}^{V}_{N} = \left(\mathbf{T}^{C}_{V}\right)^{-1}\mathbf{T}^{C}_{N} \tag{4} $$
The calibration model is simple and convenient, since the user does not need to perform the complex probing operations of former reports [20]. Since the virtual marker balls can be designed around the target area and viewed from any angle, this method also overcomes the accuracy and viewing-angle limitations of the previously reported picture-marker method [17, 18].
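A minimal sketch of Eqs. (2), (3) and (4) is given below, using OpenCV's PnP solver to recover the two extrinsic transformations and compose them; the function and variable names are illustrative, and zero lens distortion is assumed for brevity.

```python
import cv2
import numpy as np

def extrinsic_from_pnp(pts_3d, pts_uv, K):
    """Solve Eq. (2)/(3): pose of a 3D point set in the camera frame from its pixel projections."""
    ok, rvec, tvec = cv2.solvePnP(np.asarray(pts_3d, np.float64),
                                  np.asarray(pts_uv, np.float64),
                                  K, distCoeffs=None, flags=cv2.SOLVEPNP_EPNP)
    assert ok, "PnP failed"
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)        # rotation part of the extrinsics
    T[:3, 3] = tvec.ravel()                   # translation part
    return T                                   # maps the point-set frame into the camera frame

def calibrate_virtual_real(real_xyz_tracker, real_uv, virt_xyz_virtual, virt_uv, K):
    """Eq. (4): compose (virtual->camera)^-1 with (tracker->camera) to get tracker->virtual."""
    T_cam_tracker = extrinsic_from_pnp(real_xyz_tracker, real_uv, K)    # Eq. (2), 4 real balls
    T_cam_virtual = extrinsic_from_pnp(virt_xyz_virtual, virt_uv, K)    # Eq. (3), 6 virtual balls
    return np.linalg.inv(T_cam_virtual) @ T_cam_tracker
```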
2.4.2. The mathematical model of virtual-real fusion display
$P^{N}$ and $P^{V}$ are the coordinates of the patient's surgical site in the tracker coordinate system and the virtual coordinate system respectively. Eq. (5) expresses their conversion:

$$ P^{V} = \mathbf{T}^{V}_{N}\,P^{N} \tag{5} $$
$P^{CT}$ denotes the planned 3D model based on the CT scan data, which is to be displayed by the AR glasses. $\mathbf{T}^{V}_{CT}$ is the transformation matrix between the CT model data in its original coordinate system $\{CT\}$ and the virtual space of the AR glasses, as shown in Eq. (6):

$$ P^{V} = \mathbf{T}^{V}_{CT}\,P^{CT} \tag{6} $$
Eq. (7) is obtained by combining Eqs. (5) and (6):

$$ \mathbf{T}^{V}_{CT}\,P^{CT} = \mathbf{T}^{V}_{N}\,P^{N} \tag{7} $$
In the step of surgical model registration, the Singular Value Decomposition (SVD) algorithm can be used to find the transformation matrix $\mathbf{T}^{N}_{CT}$ from the registration point pairs, based on Eq. (8):

$$ P^{N} = \mathbf{T}^{N}_{CT}\,P^{CT} \tag{8} $$
Combining Eqs. (7) and (8), the transformation matrix $\mathbf{T}^{V}_{CT}$ can be obtained using Eq. (9):

$$ \mathbf{T}^{V}_{CT} = \mathbf{T}^{V}_{N}\,\mathbf{T}^{N}_{CT} \tag{9} $$

Using $\mathbf{T}^{V}_{CT}$, the virtual model can be transformed from its original position to the patient's surgical site in the virtual space to realize the virtual-real fusion. The complete coordinate transformation of the AR surgical navigation is summarized in Figure 7.
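A minimal sketch of the SVD-based rigid registration of Eq. (8) (the standard Kabsch scheme without scaling) and the composition of Eq. (9) is given below; the point arrays are assumed to be N×3 and in one-to-one correspondence, and the function names are illustrative.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Eq. (8): 4x4 rigid transform T such that dst ~ R @ src + t, solved by SVD."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, c_dst - R @ c_src
    return T

def model_to_virtual(reg_pts_ct, reg_pts_tracker, T_virtual_tracker):
    """Eq. (9): compose T(CT->tracker) from the registration points with T(tracker->virtual)."""
    T_tracker_ct = rigid_transform_svd(reg_pts_ct, reg_pts_tracker)     # Eq. (8)
    return T_virtual_tracker @ T_tracker_ct                             # CT model -> virtual space
```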
Figure 7.
Coordinate transformation of virtual-real fusion display of the surgical model.
3. Results
3.1. Calibration and validation experiment procedures
To verify the feasibility of the proposed calibration method, the calibration process is conducted with the AR surgical navigation system shown in Figure 2. Firstly, the intrinsic parameters of the HoloLens forward camera are obtained. A HoloLens calibration application has also been developed and deployed to the HoloLens. The user only needs to run the application on HoloLens and successively perform the surgical model registration and the virtual-real calibration according to the operation steps in Section 2. When these two steps are finished, the virtual model with the designed marker points is aligned with the real skull model automatically, as shown in Figure 8.
Figure 8.
The finally achieved superposition effect of real and virtual model. (a) and (b) show the fusion display effect of the virtual and real model in two different viewing directions, and (c) shows only the 10 virtual marker points fused with the real model by hiding the virtual model.
We evaluate the virtual-real registration accuracy by aligning the tip of the tracker probe with ten marker points designed on the virtual and real skull models, as shown in Figure 9 (a, b). $P^{N}_{r}$ and $P^{N}_{v}$ are the tracker coordinates of the marker points on the real skull model and on the virtual model, respectively. The Euclidean distance between the two sets of points after superposition represents the registration error, as shown in Eq. (10):

$$ error = \left\| P^{N}_{r} - P^{N}_{v} \right\| \tag{10} $$
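The error metric of Eq. (10) amounts to a per-point Euclidean distance; a minimal sketch is given below, assuming the probed real and virtual marker points are stored as two 10×3 arrays in the tracker coordinate system (in metres).

```python
import numpy as np

def registration_errors(real_pts, virtual_pts):
    """Per-point Euclidean distances of Eq. (10) and their mean, in millimetres."""
    d = np.linalg.norm(np.asarray(real_pts) - np.asarray(virtual_pts), axis=1) * 1000.0
    return d, float(d.mean())
```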
Figure 9.
Measuring the marker points with the probe for error validation. (a) Using the probe to measure the virtual marker points $P^{N}_{v}$. (b) Using the probe to measure the real marker points $P^{N}_{r}$.
The distribution of the ten evaluation marker points is shown in Figure 10, and points 1 to 6 are also the registration points used in surgical model registration.
Figure 10.
The distribution of 10 experimental collected points on 3D printed skull model. (a) shows the marker points on the left side of the 3D printed skull model. (b) shows the marker points on the right side of the 3D printed skull model.
3.2. Registration and calibration process
The HoloLens forward camera is calibrated by taking 15 pictures of a checkerboard from different viewpoints while wearing the HoloLens [23]. The intrinsic matrix is obtained, as shown in Eq. (11).
(11)
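For reference, the following is a sketch of an equivalent intrinsic calibration using OpenCV instead of the MATLAB toolbox used in this work; the checkerboard geometry (9×6 inner corners, 25 mm squares) and the file-name pattern are assumptions.

```python
import glob
import cv2
import numpy as np

board, square = (9, 6), 0.025                       # inner-corner count and square size (m), assumed
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, image_size = [], [], None
for fname in glob.glob("checkerboard_*.jpg"):       # photos taken with the HoloLens forward camera
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the 3x3 intrinsic matrix of Eq. (11); dist holds the lens-distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, image_size, None, None)
```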
During the surgical model registration, the coordinates of the six registration points on the real skull model, denoted as $P^{N}_{reg}$, and the coordinates of the corresponding points on the virtual CT model, denoted as $P^{CT}_{reg}$, are listed in Tables 1 and 2 respectively. The transformation matrix $\mathbf{T}^{N}_{CT}$ can thus be calculated using Eq. (8); the result is shown in Eq. (12).

(12)
Table 1.
The coordinates of real registration points in the tracker coordinate system (unit: m).
|   | Point 1 | Point 2 | Point 3 | Point 4 | Point 5 | Point 6 |
|---|---|---|---|---|---|---|
| x | 0.2049 | 0.1845 | 0.1766 | 0.1774 | 0.1877 | 0.2104 |
| y | 0.1091 | 0.0988 | 0.0869 | 0.0791 | 0.0873 | 0.1006 |
| z | -0.8585 | -0.8436 | -0.8228 | -0.8029 | -0.7899 | -0.7818 |
Table 2.
The coordinates of virtual registration points in the virtual coordinate system (unit: m).
|   | Point 1 | Point 2 | Point 3 | Point 4 | Point 5 | Point 6 |
|---|---|---|---|---|---|---|
| x | 0.0404 | 0.0324 | 0.0155 | -0.0001 | -0.0166 | -0.0346 |
| y | -0.0554 | -0.0431 | 0.0397 | -0.0436 | -0.0402 | -0.0419 |
| z | -0.0023 | -0.0272 | -0.0466 | -0.0556 | -0.0468 | -0.0261 |
For the virtual-real calibration step, we extract the pixel coordinates of both the virtual and the real marker balls and calculate the two sets of camera extrinsic parameters through the Perspective-n-Point (PnP) algorithm. $p_{r}$ and $p_{v}$ are the pixel coordinates of the four real marker balls and the six virtual marker balls respectively, as listed in Table 3. $P^{N}_{r}$ and $P^{V}_{v}$ are the 3D coordinates of the real and virtual marker balls in their respective coordinate systems, shown in Tables 4 and 5. The transformation matrix from the tracker coordinate system to the virtual space, $\mathbf{T}^{V}_{N}$, can then be derived, as shown in Eq. (13) (see Figure 11 for the extraction of the pixel coordinates).
(13)
Table 3.
The center pixel coordinates of marker balls (unit: pixel).
|   | Ball 1 | Ball 2 | Ball 3 | Ball 4 | Ball 5 | Ball 6 | Ball 7 | Ball 8 | Ball 9 | Ball 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| u | 1675 | 1422 | 1466 | 1739 | 2271 | 2999 | 2305 | 2248 | 1545 | 2244 |
| v | 1093 | 1068 | 1389 | 1291 | 1116 | 1130 | 369 | 903 | 1102 | 1744 |
Table 4.
The coordinates of real marker balls in the tracker coordinate system (unit: m).
|   | Ball 1 | Ball 2 | Ball 3 | Ball 4 |
|---|---|---|---|---|
| x | 0.1392 | 0.1299 | 0.1905 | 0.1788 |
| y | 0.1057 | 0.0574 | 0.0348 | 0.0964 |
| z | -0.7713 | -0.7965 | -0.7559 | -0.7424 |
Table 5.
The coordinates of virtual marker balls in the virtual coordinate system (unit: m).
|   | Ball 1 | Ball 2 | Ball 3 | Ball 4 | Ball 5 | Ball 6 |
|---|---|---|---|---|---|---|
| x | -0.1000 | 0 | -0.1000 | -0.1000 | -0.2000 | -0.1000 |
| y | -0.1000 | -0.1000 | 0 | -0.1000 | -0.1000 | -0.2000 |
| z | -0.5000 | -0.5000 | -0.5000 | -0.6000 | -0.5000 | -0.5000 |
Figure 11.
The extraction of center pixel coordinates of the real and virtual marker balls.
The homogeneous matrix $\mathbf{T}^{V}_{CT}$, which defines the spatial transformation of the virtual model, can thus be derived using Eq. (9); the result is shown in Eq. (14).
(14)
3.3. Registration accuracy validation
Next, we apply the calibrated transformation matrix in the virtual-real fusion experiment and validate the registration accuracy. In the fused state, we measure the coordinates of the virtual and real marker points with the optical probe eight times and calculate their registration errors. Each time, the registration errors of the 10 pairs of real and virtual marker points are recorded, as shown in Table 6. The distribution of the error data is shown in Figure 12.
Table 6.
Registration error data (unit: mm).
|   | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Group 6 | Group 7 | Group 8 | Average |
|---|---|---|---|---|---|---|---|---|---|
| Point 1 | 4.78 | 6.80 | 5.03 | 6.33 | 4.71 | 6.40 | 7.32 | 3.76 | 5.64 |
| Point 2 | 5.35 | 6.60 | 3.81 | 5.39 | 5.27 | 6.88 | 7.15 | 5.92 | 5.80 |
| Point 3 | 3.80 | 5.26 | 7.36 | 7.11 | 6.48 | 4.69 | 5.24 | 4.47 | 5.55 |
| Point 4 | 5.66 | 5.64 | 4.61 | 6.18 | 5.33 | 7.43 | 6.62 | 6.50 | 6.00 |
| Point 5 | 8.01 | 6.29 | 5.45 | 6.21 | 7.80 | 6.08 | 4.30 | 5.08 | 6.15 |
| Point 6 | 6.30 | 5.07 | 6.33 | 5.28 | 3.12 | 5.11 | 6.75 | 7.89 | 5.73 |
| Point 7 | 3.95 | 8.61 | 3.49 | 5.31 | 6.25 | 4.51 | 4.39 | 6.93 | 5.43 |
| Point 8 | 4.72 | 2.78 | 5.55 | 8.02 | 8.36 | 4.86 | 7.40 | 4.88 | 5.82 |
| Point 9 | 7.43 | 5.23 | 5.67 | 4.77 | 5.61 | 5.92 | 3.83 | 4.51 | 5.37 |
| Point 10 | 6.57 | 7.12 | 6.74 | 3.83 | 6.84 | 5.25 | 7.90 | 8.14 | 6.55 |
| Average | 5.66 | 5.94 | 5.40 | 5.84 | 5.98 | 5.71 | 6.09 | 5.81 | 5.80 |
| SD | 1.34 | 1.47 | 1.16 | 1.13 | 1.45 | 0.94 | 1.42 | 1.43 | 0.34 |
Figure 12.
Boxplot of registration error of eight repeated tests. The errors are mostly concentrated in 4–7mm and the median error is 5.8mm.
The final average registration error using the proposed motionless calibration method is around 5.8 mm, which is larger than the result reported in Sun's paper [20]. However, the evaluation method of the registration error in this paper differs from that in Sun's paper. We have tested the calibration method proposed in Sun's paper and found that the AR registration accuracy was actually very hard to control: the user needs to manually probe the virtual cross points, which is hard to judge and is easily influenced by hand shake and movement of the AR glasses. We could only achieve an AR display registration error of about 4–5 mm at best. We believe the reported accuracy of ∼1 mm in Sun's paper is the mathematical residual error of the spatial coordinate transformation, not the practical AR registration error as tested in our paper.
In the surgical AR navigation process, head rotation can easily lead to inaccurate spatial calculation and positioning by HoloLens, followed by a decrease in registration accuracy. Therefore, a single calibration matrix is not enough for practical clinical applications. The proposed motionless calibration method can potentially update the calibration matrix in real time from images of the real and virtual balls captured in real time, and can therefore keep the same level of AR fusion quality when the user changes the viewing direction. This is a great advantage of the proposed method. Currently, programmatic access to the HoloLens forward camera and the built-in processing hardware is not available, so the real-time calibration update cannot be demonstrated here.
4. Discussion
The current registration error of the proposed method is 5.80 mm, which can offer an effective fusion of the virtual model with the real world for guidance. However, the error can still be easily recognized by human eyes. The potential error sources are analyzed below.
1. The error caused by surgical model registration.
When the user measures the feature points on the real skull model, it is hard to make the probe tip coincide with them precisely. This causes calculation error in $\mathbf{T}^{N}_{CT}$. The resulting registration error is about 2 mm based on the probing tests.
2. The error caused by AR virtual-real calibration.
After capturing mixed-reality pictures of the balls many times with the HoloLens main camera, we found that the position of the virtual balls is not static. When the HoloLens camera moves, virtual objects show a certain degree of spatial deviation from their initial positions; when the camera movement stops, the virtual objects stabilize to a fixed position after several seconds. When capturing the calibration image, we therefore expect the user's head to be relatively stationary. Furthermore, we found a way to reduce the spatial deviation of virtual objects in mixed-reality photos: when developing the HoloLens application in Unity, we enabled the "Rendered from PV camera" option in MRTK (Mixed Reality Toolkit). With this setting, the accuracy of virtual marker-ball extraction is effectively improved.
In addition, the angle at which the built-in camera images the virtual objects differs from the viewing angle of the human eyes observing them through HoloLens. Therefore, there is a slight deviation between the position of the virtual objects in the captured image and their actual position in the virtual space rendered in HoloLens.
All these errors related to the captured virtual marker balls result in errors in $\mathbf{T}^{C}_{V}$, the extrinsic transformation between the camera and the virtual space.
3. HoloLens display error.
The SLAM and eye-tracking technology of HoloLens is not accurate enough for applications in which accurate alignment between virtual content and perceived reality is of the utmost importance [24]. HoloLens is more suitable for large-scene applications, because the accuracy of the virtual-to-real spatial alignment can hardly reach the millimeter level. When it is used as an aid in high-precision tasks whose operating area occupies only a small range, such as a surgical operation, the virtual model cannot be stably located. If the user moves his eyes and head while wearing HoloLens, the virtual objects move slightly, which increases the uncertainty in estimating the position of the virtual objects rendered in HoloLens.
It is worth mentioning that, in Sun's method, the overlapping error accumulates as the viewing angle changes during use. Our proposed method has the potential to solve this issue, since it is a motionless calibration and the calibration could, in theory, be repeated in real time to maintain the accuracy of the initial alignment.
5. Conclusion
The combination of AR and digital navigation technology can realize three-dimensional visual navigation for high-accuracy operations. In this paper, a novel motionless AR virtual-real calibration method is proposed for AR navigation systems. In this method, the user only needs to capture one mixed-reality photo; the transformation matrix is then solved by the computer with no need for manual probing operations. The calibration principle and operation process are designed, the feasibility of the method is proven by the calibration experiment, and the registration error of the AR surgical navigation system using this method is 5.80 mm. The method is convenient to operate, and the calibration accuracy does not depend on the user's experience. More importantly, it allows the calibration matrix to be updated in real time to maintain the registration accuracy during the whole operation. We are working with a local AR glasses hardware developer to realize this function, which will be reported in future publications.
There is still great room to improve the accuracy of this AR virtual-real calibration method. Increasing the number of virtual and real marker balls and optimizing the algorithm for marker-ball center extraction can improve the calibration accuracy. The advancement of the display technology used in AR glasses is also important: if HoloLens can capture more accurate virtual-real fusion images, the accuracy of this calibration method will be greatly improved. The inaccuracy of the SLAM and eye tracking alters the stereoscopic perception of the virtual content, causing instability of the virtual object location, so improving these two technologies will also be of great help. All these improvements will pave the way for the application of augmented reality technology in modern medical operations.
Declarations
Author contribution statement
Xinjun Wan, Dr: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper.
Lizhengyi Shen: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.
Zhiqiang Fang: Performed the experiments.
Shao Dong; Shilei Zhang; Chengzhong Lin: Contributed reagents, materials, analysis tools or data.
Funding statement
Xinjun Wan was supported by National Natural Science Foundation of China [61505107], Shanghai Science and Technology Innovation Action Plan [19511104600].
Data availability statement
Data included in article/supp. material/referenced in article.
Declaration of interests statement
The authors declare no conflict of interest.
Additional information
No additional information is available for this paper.
References
- 1.Kim Y., Lee B.H., Mekuria K., Cho H., Park S., Wang J.H., Lee D. Registration accuracy enhancement of a surgical navigation system for anterior cruciate ligament reconstruction: a phantom and cadaveric study. Knee. 2017;24:329–339. doi: 10.1016/j.knee.2016.12.007. [DOI] [PubMed] [Google Scholar]
- 2.Yang R., Li C., Tu P., Ahmed A., Ji T., Chen X. Development and application of digital maxillofacial surgery system based on mixed reality technology. Front. Surg. 2021;8:719985. doi: 10.3389/fsurg.2021.719985. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Schramm A., Suarez-Cunqueiro M.M., Barth E.L., Essig H., Bormann K.H., Kokemueller H., Rucker M., Gellrich N.C. Computer-assisted navigation in craniomaxillofacial tumors. J. Craniofac. Surg. 2008;19:1067–1074. doi: 10.1097/SCS.0b013e3181760fc0. [DOI] [PubMed] [Google Scholar]
- 4.Juergens P., Beinemann J., Zandbergen M., Raith S., Kunz C., Zeilhofer H.F. A computer-assisted diagnostic and treatment concept to increase accuracy and safety in the extracranial correction of cranial vault asymmetries. J. Oral Maxillofac. Surg. 2012;70:677–684. doi: 10.1016/j.joms.2011.02.046. [DOI] [PubMed] [Google Scholar]
- 5.Blackwell M., Morgan F., DiGioia A.M. Augmented reality and its future in orthopaedics. Clin. Orthop. Relat. Res. 1998:111–122. doi: 10.1097/00003086-199809000-00014. [DOI] [PubMed] [Google Scholar]
- 6.Ma L., Zhao Z., Chen F., Zhang B., Fu L., Liao H. Augmented reality surgical navigation with ultrasound-assisted registration for pedicle screw placement: a pilot study. Int. J. Comput. Assist. Radiol. Surg. 2017;12:2205–2215. doi: 10.1007/s11548-017-1652-z. [DOI] [PubMed] [Google Scholar]
- 7.Sielhorst T., Feuerstein M., Navab N. Advanced medical displays: a literature review of augmented reality. J. Disp. Technol. 2008;4:451–467. [Google Scholar]
- 8.Koyachi M., Sugahara K., Odaka K., Matsunaga S., Abe S., Sugimoto M., Katakura A. Accuracy of Le Fort I osteotomy with combined computer-aided design/computer-aided manufacturing technology and mixed reality. Int. J. Oral Maxillofac. Surg. 2021;50:782–790. doi: 10.1016/j.ijom.2020.09.026. [DOI] [PubMed] [Google Scholar]
- 9.Okamoto T., Onda S., Yanaga K., Suzuki N., Hattori A. Clinical application of navigation surgery using augmented reality in the abdominal field. Surg. Today. 2015;45:397–406. doi: 10.1007/s00595-014-0946-9. [DOI] [PubMed] [Google Scholar]
- 10.Meng F.H., Zhu Z.H., Lei Z.H., Zhang X.H., Shao L., Zhang H.Z., Zhang T. Feasibility of the application of mixed reality in mandible reconstruction with fibula flap: a cadaveric specimen study. J. Stomatol. Oral Maxillofac. Surg. 2021;122:e45–e49. doi: 10.1016/j.jormas.2021.01.005. [DOI] [PubMed] [Google Scholar]
- 11.Proniewska K., Dolega-Dolegowski D., Dudek D. A holographic doctors' assistant on the example of a wireless heart rate monitor. Bio. Algorithm Med. Syst. 2018:14. [Google Scholar]
- 12.Frantz T., Jansen B., Duerinck J., Vandemeulebroucke J. Augmenting Microsoft's HoloLens with vuforia tracking for neuronavigation. Healthc. Technol. Lett. 2018;5:221–225. doi: 10.1049/htl.2018.5079. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Fei B., Webster R.J., Tchaka K., Stoyanov D., Davidson B., Gurusamy K., Hawkes D.J., Clarkson M.J., Clancy N.T., Vasconcelos F., et al. Proceedings of the Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling. 2018. Augmented reality needle ablation guidance tool for irreversible electroporation in the pancreas. [Google Scholar]
- 14.Gibby J.T., Swenson S.A., Cvetko S., Rao R., Javan R. Head-mounted display augmented reality to guide pedicle screw placement utilizing computed tomography. Int. J. Comput. Assist. Radiol. Surg. 2019;14:525–535. doi: 10.1007/s11548-018-1814-7. [DOI] [PubMed] [Google Scholar]
- 15.Lin L., Shi Y., Tan A., Bogari M., Zhu M., Xin Y., Xu H., Zhang Y., Xie L., Chai G. Mandibular angle split osteotomy based on a novel augmented reality navigation using specialized robot-assisted arms–A feasibility study. J. Cranio-Maxillo-Fac. Surg. 2016;44:215–223. doi: 10.1016/j.jcms.2015.10.024. [DOI] [PubMed] [Google Scholar]
- 16.Lin L., Xu C., Shi Y., Zhou C., Zhu M., Chai G., Xie L. Preliminary clinical experience of robot-assisted surgery in treatment with genioplasty. Sci. Rep. 2021;11:6365. doi: 10.1038/s41598-021-85889-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Zhu M., Liu F., Chai G., Pan J.J., Jiang T., Lin L., Xin Y., Zhang Y., Li Q. A novel augmented reality system for displaying inferior alveolar nerve bundles in maxillofacial surgery. Sci. Rep. 2017;7:42365. doi: 10.1038/srep42365. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Jiang T.R., Zhu M., Chai G., Li Q.F. Precision of a novel craniofacial surgical navigation system based on augmented reality using an occlusal splint as a registration strategy. Sci. Rep. 2019:9. doi: 10.1038/s41598-018-36457-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Venosta D., Sun Y., Matthews F., Kruse A.L., Lanzer M., Gander T., Gratz K.W., Lubbers H.-T. Evaluation of two dental registration-splint techniques for surgical navigation in cranio-maxillofacial surgery. J. Cranio-Maxillofacial Surg. 2014;42:448–453. doi: 10.1016/j.jcms.2013.05.040. [DOI] [PubMed] [Google Scholar]
- 20.Sun Q., Mai Y., Yang R., Ji T., Jiang X., Chen X. Fast and accurate online calibration of optical see-through head-mounted display for AR-based surgical navigation using Microsoft HoloLens. Int. J. Comput. Assist. Radiol. Surg. 2020;15:1907–1919. doi: 10.1007/s11548-020-02246-4. [DOI] [PubMed] [Google Scholar]
- 21.Neves J., Serrario D., Pires J.N. Application of mixed reality in robot manipulator programming. Ind. Robot Int. J. Robot. Res. Appl. 2018;45:784–793. [Google Scholar]
- 22.Marmulla R., Muhling J. The influence of computed tomography motion artifacts on computer-assisted surgery. J. Oral Maxillofac. Surg. 2006;64:466–470. doi: 10.1016/j.joms.2005.11.019. [DOI] [PubMed] [Google Scholar]
- 23.Zhang Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000;22:1330–1334. [Google Scholar]
- 24.Cutolo F., Fontana U., Cattari N., Ferrari V. Off-line camera-based calibration for optical see-through head-mounted displays. Appl. Sci. 2019:10. [Google Scholar]