Abstract
Background:
An augmented reality tool can visually track real anatomical structures while superimposing virtual images on them, so it can be used to navigate around important structures during surgery.
Objectives:
The authors have developed a new occlusal splint-based optical navigation system for craniomaxillofacial surgery. In this study, the authors aim to measure the accuracy of the system and further analyze the main factors influencing precision.
Methods:
Ten beagle dogs were selected, and a three-dimensional model was established for each through computed tomography scanning, dental model making, and laser scanning; registration was then performed according to tooth marking points. Bilateral mandibular osteotomies were performed on the beagles under the occlusal splint-based navigation system. The left side was used to compare deviations between the preoperative plan and the surgical result, and the distance accuracy, angular accuracy, and stability of the system were analyzed.
Results:
The average position deviation between the preoperative design and intraoperative navigation was 0.01 ± 0.73 mm on the lateral height of the mandibular ramus, 0.26 ± 0.57 mm on the inner height of the mandibular ramus, and 0.20 ± 0.51 mm on the osteotomy length. The average angle deviation was 0.94° ± 1.38° on the angle between the mandibular osteotomy plane and the ramus plane and 0.66° ± 0.97° on the retained mandibular angle. Most of the data showed good consistency.
Conclusions:
In summary, the accuracy of the system meets clinical requirements, and the system can serve as a useful tool to improve the precision of craniomaxillofacial surgery.
Keywords: Cranio-maxillofacial surgery, image-guided surgery, occlusal splint, optical navigation system
In Asia, mandibular angle split osteotomy (MASO) is one of the most common aesthetic procedures in craniomaxillofacial surgery. However, the craniomaxillofacial region contains many important nerves and blood vessels.1 To avoid injury, osteotomies in craniomaxillofacial surgery require high precision and controllable error. An intuitive display of the complex anatomical structures and of a safe, effective osteotomy plan is therefore the basis on which a navigation system (NS) can accurately guide craniomaxillofacial surgery.1,2
The key to the success of MASO is the selection of the osteotomy site.3 However, during the actual operation there is only a narrow field of vision between the oral mucosa and the mandible, so a rough design or misjudgment of the osteotomy protocol can easily cause nerve damage.4 Injury to the inferior alveolar nerves (IANs) is one of the common complications of MASO. With traditional surgical methods, the outcome of the operation depends largely on the surgeon's clinical experience: surgeons operate based only on their reading and understanding of preoperative computed tomography (CT) images and cannot see the important tissue structures and adjacent relationships of the mandible in real-time, especially the position and shape of the IANs.5 This increases the risk of surgery.
In recent years, the emergence and development of augmented reality (AR) technology has provided a more intuitive and safe way of presenting surgery. The technology is characterized by a combination of real-time interaction, virtual reality, and three-dimensional (3D) display.6 It can provide virtual information and improve visual understanding in real-world environments, which has been used in abdominal surgery,7 orthopedics,8 urology,9 neurosurgery,10 and nasal endoscopic surgery.11
This research establishes an AR optical NS that guides MASO in real-time. The system simultaneously displays the preoperatively designed cutting plane (CP) and 3D images of the mandible based on AR technology and accurately superimposes them onto the surgical area, generating a real-time "see-through" effect.5 The system can be used for treating hemifacial microsomia, correcting orbital hypertelorism, drilling holes in the mandible, and other operations.4,12,13 This paper introduces a registration method based on occlusal splints: animal tooth models are collected before surgery and marker brackets are made according to the occlusal relationship; the 3D data of the dental model are then scanned and registered with the CT data of the mandible. Finally, the navigation information is fused with the virtual image and displayed over the actual object during the operation. Because the accuracy of the system has a significant impact on surgical outcomes, we designed this study to measure the accuracy of the system and evaluate its main influencing factors. In future clinical operations, real-time registration results from video acquisition will be superimposed on the surgical site, providing a reference for surgeons and safeguarding the effectiveness of the operation.
MATERIALS AND METHODS
In this study, 10 beagle dogs were randomly selected as our model. According to the designed CP, MASO was carried out in animal experiments. Based on AR technology, the mandibular section and virtual images were superimposed on the beagle's mandible for navigated surgery. Finally, we measured the relevant indicators in the 3D image of the left mandible and performed statistical analysis on the preoperative and postoperative data. All animal experiments were approved by the Institutional Animal Care and Use Committee and complied with relevant guidelines and regulations.
Three-Dimensional Data Acquisition and Preoperative Planning
Under natural occlusion, all selected animals underwent a 3D spiral CT scan of the skull. Preoperative thin-slice (0.5 mm) CT scans (GE LightSpeed 16, Milwaukee, WI) were acquired, and the DICOM data were imported into Mimics 18.0 (Materialise, Leuven, Belgium) to create a 3D image of the mandible through region growing (Fig. 1). Exploiting the different thresholds of skeletal tissue and nerves, the IANs were separated layer by layer and reconstructed in 3D. We thus obtained 3D digital models of the mandible and of the IANs from the mental foramen to the mandibular foramen, and designed the CPs according to the exact position and shape of the nerves. The 3D information of the mandibular CPs and the IANs was exported as an STL file (Fig. 2).
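The region growing performed in Mimics is proprietary, but it builds on the idea stated above: bone and nerve occupy different Hounsfield-unit ranges. As a rough, hypothetical illustration only (the threshold and the toy volume are assumptions, not values from this study), the sketch below thresholds a CT volume and keeps the largest connected component, discarding isolated noise voxels:

```python
import numpy as np
from scipy import ndimage

def segment_bone(ct_volume, threshold_hu=400):
    """Binary mask of voxels above a bone-like HU threshold,
    keeping only the largest connected component (a crude
    stand-in for interactive region growing)."""
    mask = ct_volume >= threshold_hu
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Synthetic demo volume: a "bone" block embedded in soft tissue.
vol = np.full((40, 40, 40), 50.0)   # soft tissue, ~50 HU
vol[10:30, 10:30, 10:30] = 1200.0   # bone block, ~1200 HU
vol[2, 2, 2] = 1200.0               # isolated noise voxel
bone = segment_bone(vol)
print(bone.sum())  # 8000 voxels: the 20x20x20 block, noise removed
```

In practice the nerve canal is segmented with a different threshold window and then reconstructed slice by slice, which is what the layer-by-layer separation above refers to.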
FIGURE 1.
The three-dimensional image of the mandible.
FIGURE 2.
Preoperative design. (A) Raw 3D data of mandible. (B) Marking the inferior alveolar nerves (IANs) and cutting planes. (C) Front perspective view of the osteotomy plan. (D) Simulated 3D image after osteotomy. 3D, three-dimensional.
Fabrication, Scanning, and Registration of Occlusal Splints
After planning the navigation information, we need to register the virtual image in the real world. The fiducial marker must serve as a common reference between the virtual image data and the real world and maintain a stable relationship with the mandible. For an AR system, tracking and registration are most conveniently achieved with a planar marker board. We therefore use an occlusal splint to connect the marker to the jaw and stabilize the relationship between them. Specifically, using a mold of the beagle's mandibular teeth as a reference, a self-curing plastic material was used to make occlusal splints over both canines and the middle 6 teeth. In a stable occlusal relationship, the plaster model and the splint fit together firmly. The tracking marker and occlusal splint were then connected in a customized posture; together they are referred to as the marker complex (MC) (Fig. 3).
FIGURE 3.
Fabrication of marker complex. (A) Dental mold. (B) Occlusal splint. (C) Marker complex (MC). (D) Firm combination of dental mold and MC.
The MC and the dental model were laser-scanned in 3D; these data were then input into the graphics workstation and fitted to the CT data of the mandibular model through at least 3 landmarks (usually prominent sharp points or tooth pits). At the same time, the 3D information of the mandibular CPs and IANs was combined with the MC and the mandible and imported together into 3ds Max software (Autodesk, San Rafael, CA) in STL format. The position and orientation of the virtual image were adjusted so that the center of the marker coincided with the origin of the coordinate system, and all the information for the different materials was then exported in WRL file format (Fig. 4). The 3D images of the preoperative mandible and the osteotomy plan were imported into ARToolKit software (ARToolworks, Seattle, WA) to verify the registration effect.
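The landmark-based fitting described above is, in essence, a rigid point-set registration: three or more paired landmarks determine a rotation and translation. A minimal sketch of the standard least-squares (Kabsch/SVD) solution follows, with the fiducial registration error (FRE) as a quality check; this illustrates the principle, not the exact algorithm of the workstation software, and the landmark coordinates are invented for the demo:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t)
    mapping src landmarks onto dst (Kabsch method, no scaling)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error."""
    residual = np.asarray(dst) - (np.asarray(src) @ R.T + t)
    return np.sqrt((residual ** 2).sum(1).mean())

# Hypothetical landmarks (cusp tips / tooth pits) in scanner coordinates...
laser_pts = np.array([[0, 0, 0], [10, 0, 0], [0, 12, 0], [0, 0, 8.]])
# ...and the same points in CT coordinates (rotated 30 deg + shifted).
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1.]])
ct_pts = laser_pts @ R_true.T + np.array([5., -3., 2.])
R, t = rigid_register(laser_pts, ct_pts)
print(round(fre(laser_pts, ct_pts, R, t), 6))  # ~0 for noise-free points
```

With real landmarks the FRE is nonzero, and choosing well-separated, sharply defined points (as the text recommends) keeps it small.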
FIGURE 4.
Scanning and registration of occlusal splints. (A) 3D laser scanning of the marker complex. (B) Calibrating the center point in the 3ds Max software. 3D, three-dimensional.
Intraoperative Navigation
After intravenous anesthesia, skin preparation, disinfection, and draping, the beagle's skin and tissue layers were separated until the mandibular angle was completely exposed. The sterilized MC was worn firmly on the corresponding teeth, and the image was then adjusted in ARToolKit software (ARToolworks) to achieve an accurate position and appropriate color contrast and transparency. Finally, all surgical plans were displayed over the surgical area in real-time through a helmet-mounted display (nVisor ST60, NVIS, Sunnyvale, CA). Under the guidance of the predesigned CP displayed in the AR-based optical NS, MASO was performed (Fig. 5). The cut bone pieces were collected and kept, and the incision was sutured layer by layer after hemostasis (Fig. 6). One week later, a CT scan of the postoperative beagle was performed, and the image data were imported into Mimics 18.0 software to create a 3D model of the mandible.
FIGURE 5.
Registration and surgery. (A) Intraoperative registration. (B) Operating the mandibular angle split osteotomy (MASO) under the guidance of the predesigned cutting plane displayed in the optical navigation system (NS) based on AR technology. AR, augmented reality.
FIGURE 6.
The cut bone blocks. (A) Front view of the bilateral cut bones. (B) Side view of the bilateral cut bones.
Statistical Analysis
For data analysis, in the preoperative and postoperative measurements of the 10 mandibular models, the left-side data were selected; several anatomical landmarks were marked, and statistical analysis was performed by defining different line segments, planes, and angles (Fig. 7). Specifically, we marked the condyle (A1), coracoid (A2), osteotomy point of the mandibular trailing edge (B1), osteotomy point of the mandibular front edge (B2), medial mandibular osteotomy point (B3), and gonion (C). We then marked the mandibular ramus plane (A1A2C) in yellow and the osteotomy plane (B1B2B3) in red. Finally, we defined the line segment A1B3 as the inner plate height, the line segment A1B1 as the outer plate height, the line segment B1B2 as the osteotomy length, ∠A1B1B2 as the retained mandibular angle, and the angle between plane A1A2C and plane B1B2B3 as the osteotomy angle. The inner plate height, outer plate height, osteotomy length, retained mandibular angle, and osteotomy angle were measured for the designed osteotomy (preoperative) (Fig. 7) and the actual osteotomy (postoperative); the results are shown in Supplementary Digital Content, Table 1. Statistical software (SPSS 26; IBM Corp, Armonk, NY) was used to perform paired t-tests on the mandibular measurement data, and all measurement data are expressed as mean ± standard deviation. Finally, we made a scatter plot of the preoperative design versus the postoperative result for each index and analyzed the correlation coefficient and consistency of the data (Fig. 8, Supplementary Digital Content, Table 2).
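The five indices defined above follow directly from the six landmark coordinates: three are segment lengths, one is the angle at a vertex, and one is the angle between two plane normals. The sketch below computes them with NumPy and runs the paired t-test and Pearson correlation with SciPy, mirroring the SPSS analysis; all coordinates and paired values here are hypothetical illustration data, not the study's measurements:

```python
import numpy as np
from scipy import stats

def angle_deg(u, v):
    """Angle between two vectors, in degrees."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def plane_normal(p, q, r):
    return np.cross(q - p, r - p)

def indices(A1, A2, B1, B2, B3, C):
    """The five measurements defined in the text, from landmark coords."""
    return {
        "inner_plate_height": np.linalg.norm(B3 - A1),      # A1B3
        "outer_plate_height": np.linalg.norm(B1 - A1),      # A1B1
        "osteotomy_length":   np.linalg.norm(B2 - B1),      # B1B2
        "retained_angle":     angle_deg(A1 - B1, B2 - B1),  # angle A1B1B2
        "osteotomy_angle":    angle_deg(plane_normal(A1, A2, C),
                                        plane_normal(B1, B2, B3)),
    }

# Toy landmarks chosen so both angles are exactly 90 degrees:
A1, A2 = np.array([0., 0., 10.]), np.array([0., 5., 10.])
B1, B2 = np.array([0., 0., 0.]), np.array([5., 0., 0.])
B3, C = np.array([0., 3., 0.]), np.array([0., 5., 0.])
m = indices(A1, A2, B1, B2, B3, C)
print(m["retained_angle"], m["osteotomy_angle"])

# Hypothetical paired pre/post values of one index (mm), 10 animals:
pre = np.array([20.1, 19.8, 21.0, 20.5, 19.9, 20.7, 20.2, 19.5, 20.8, 20.3])
post = pre + np.array([0.3, -0.2, 0.1, -0.4, 0.2, 0.0, -0.1, 0.3, -0.3, 0.1])
t_stat, p_val = stats.ttest_rel(pre, post)  # paired t-test (SPSS equivalent)
r_val, _ = stats.pearsonr(pre, post)        # consistency (correlation)
print(p_val > 0.05)
```

A P value above 0.05 in the paired test means no systematic pre/post difference was detected, while the Pearson coefficient quantifies the consistency plotted in Figure 8.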
FIGURE 7.
Measurement of different line segments, planes, and angles. (A1) Condyle. (A2) Coracoid. (B1) Osteotomy point of the mandibular trailing edge. (B2) Osteotomy point of the mandibular front edge. (B3) Medial mandibular osteotomy point. (C) Gonion. (A1A2C) Mandibular ramus plane (yellow). (B1B2B3) Osteotomy plane (red). (A1B3) Inner plate height. (A1B1) Outer plate height. (B1B2) Osteotomy length. (∠A1B1B2) Retained mandibular angle. (Angle between A1A2C and B1B2B3) Osteotomy angle.
FIGURE 8.
The scatter plot of the preoperative design and the postoperative results. (A) Inner plate height. (B) Outer plate height. (C) Osteotomy length. (D) Retained mandibular angle. (E) Osteotomy angle.
RESULTS
The mean distance deviation between the preoperative design and postoperative result was 0.26 ± 0.57 mm on the inner plate height, 0.01 ± 0.73 mm on the outer plate height, and 0.20 ± 0.51 mm on the osteotomy length. The angular deviation was 0.94° ± 1.38° on the retained mandibular angle and 0.66° ± 0.97° on the osteotomy angle (the angle between the mandibular ramus plane and the osteotomy plane) (Fig. 7, Supplementary Digital Content, Table 1). As Supplementary Digital Content, Table 1 shows, every data set had a P value > 0.05, so the deviations were not statistically significant, indicating that the system's accuracy meets the requirements of the surgery.
Judging from the scatter plots of the preoperative design and postoperative results for each index, most of the data showed good consistency, with high correlation coefficients: 0.933 (A), 0.927 (B), 0.961 (C), and 0.913 (E) (Fig. 8, Supplementary Digital Content, Table 2). However, the retained mandibular angle did not show satisfactory consistency, with a correlation coefficient of 0.283 (D) (Fig. 8, Supplementary Digital Content, Table 2).
DISCUSSION
Navigation
Surgical navigation, also known as computer-assisted surgery (CAS), has rapidly emerged in several surgical disciplines and is one of the most impressive surgical advances of the past 30 years, bringing real-time 3D image guidance into the operating room. Navigation has become not only a tool to improve medical care14 but also a research tool.15 The use of CAS has expanded the surgeon's limited visual field and updated the concepts of surgery and surgical instruments. By introducing guided images during surgery, accuracy can be effectively improved, operative time can be shortened, and complications can be reduced. Owing to the complex anatomy of the craniomaxillofacial region, CAS seems particularly applicable to craniomaxillofacial surgery and has gradually gained recognition through attempts and exploration.16,17 In craniomaxillofacial contour reconstruction, tumor resection and reconstruction, trauma surgery, and orthognathic surgery, it is necessary to reconstruct the functional and aesthetic anatomy by resetting displaced bone pieces, cutting abnormal bone contours, or placing various implants.18–21 Accurately determining the expected result of a surgical correction, or the intended position of an implant, before surgery is very important. However, in many cases it is difficult to transfer complex visual planning information to the actual surgical site by imagination alone.19,20,22–24 Cranio-maxillofacial surgery requires reliable protection of key anatomical structures and accurate osteotomy,23,25–27 which makes it one of the most promising areas for image-guided surgery.21,28 However, the actual use of CAS is subject to many restrictions and is still at the experimental stage.
Augmented Reality
Traditional optical navigation uses surgical probes to track external markers and present images point-to-point on the screen.29 However, this navigation technology has the following shortcomings:
(1) All navigation information is displayed on a separate screen, so the doctor has to constantly switch the field of view between the virtual image and the patient's anatomy during the operation;
(2) Because the display is flat, depth cues in the two-dimensional image must be used to infer 3D spatial position, which increases the difficulty of operating under navigation;
(3) All surgical instruments used for navigation must be equipped with positioning and tracking devices, and more than 3 such devices are needed to recover their spatial position. However, not all surgical instruments can meet these installation requirements: in mandibular osteotomy, the narrow surgical field easily causes the tracking devices to be blocked, which affects the localization of the equipment, so this approach can only be used for specific operations and specific instruments.
In the AR NS, by contrast, virtual images and real objects are combined through visual fusion.30 When the surgeon wears a head-mounted display, the virtual image covers the wearer's field of vision, so the wearer does not need to master hand-eye coordination skills. And by registering the 3D model, the actual position of the patient, and the real-time position of the surgical instrument in one common space, the NS can collect and display the position of the surgical instrument in real-time through its positioning function. Under the guidance of an AR-based NS, bone resection and reconstruction in complex anatomical areas (such as the head and neck) may be more accurate and effective while reducing the risk of nerve and blood vessel damage.31 Because of these advantages, AR-based NSs seem better suited to craniomaxillofacial surgery.
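The visual fusion described above ultimately reduces to projecting the registered virtual model into the camera view using the tracked marker pose. A minimal pinhole-camera sketch follows; the intrinsics, pose, and cutting-plane corner coordinates are illustrative assumptions, not values from ARToolKit or from this system:

```python
import numpy as np

def project_points(pts_world, R, t, K):
    """Project 3D points (world/marker frame) into pixel coordinates
    with a pinhole camera model: x = K [R|t] X. R and t are the pose
    a marker tracker estimates for the marker complex each frame."""
    pts_cam = pts_world @ R.T + t    # world -> camera frame
    uvw = pts_cam @ K.T              # camera frame -> image plane
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

# Hypothetical intrinsics (focal 800 px, principal point 320,240).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
R = np.eye(3)                  # demo assumption: camera faces the marker
t = np.array([0., 0., 500.])   # marker 500 mm in front of the camera

# Corners of a 40 x 20 mm virtual cutting-plane patch in the marker frame.
cut_plane_corners = np.array([[-20., -10., 0.],
                              [ 20., -10., 0.],
                              [ 20.,  10., 0.],
                              [-20.,  10., 0.]])
print(project_points(cut_plane_corners, R, t, K))
```

Re-projecting every frame with the freshly tracked pose is what keeps the virtual CP locked onto the mandible as the head-mounted display moves.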
Registration and Accuracy
For NSs, accuracy is the most important metric, because it directly affects the result of the operation. The key to precise navigation is how to accurately match the marker with the body during the operation and how to accurately reproduce the positional relationship established during preoperative registration. Image registration based on external features is the most commonly used method in surgical NSs; it requires the markers to be visible in preoperative images so that they are easily detected during intraoperative registration.32 Registration methods can be divided into 2 types: invasive and noninvasive. Invasive markers include 3D frame-based markers and titanium screw implant-based markers. Noninvasive fiducial markers can be divided into 3 types: adhesive markers, dental instruments, and anatomical markers (such as skin33 and bone). Most of the literature supports that the gold standard for high-precision registration is a stable marker (such as a titanium screw) fixed directly in the bone tissue.34,35 This is because the possibility of relative movement between the skull and the screw is small, so it offers high application accuracy.
The parameters of registration methods based on external features can be solved without complicated optimization algorithms, the accuracy is reliable, and fast, fully automatic registration is relatively easy to achieve.36 But screw-based registration is invasive, brings great discomfort to the patient, and limits the surgeon's movements during the operation. The craniofacial skeleton comprises the facial and cranial bones; except for the mandible, the skull and maxilla are seamlessly connected, and the teeth and maxilla are also firmly integrated. Moreover, the irregular shape of the teeth lets them interlock closely with other materials. An occlusal splint can therefore provide a noninvasive, high-precision registration method, because it can be stably mounted on the teeth, which are anchored in bone.32,37,38 We therefore use the occlusal relationship of the teeth to integrate the fiducial markers tightly with the skull. Cutting et al39 studied the use of sharp bony landmarks for registration in AR systems, locating markers through a bite plate; the final control error was less than 1.5 mm, and the rotational error about any axis was <3°. Tsuji et al40 reported using tooth model data for registration, with the front teeth and the left and right second molars as references; the registration error was controlled within 1 mm, with a maximum of 1.02 mm. Zhu et al6 applied an early occlusal bracket of this navigation mode for registration, and the position error was 0.96 ± 0.51 mm. Recently, Wang et al41 used an intraoral 3D scanner to obtain a patient's tooth model and registered it into a customized stereo camera system; in a simulated mandibular experiment, the average target overlay registration error was less than 0.50 mm and the registration time was under 0.5 seconds.
In this study, we simulated the tooth structure of the experimental animals with dental molds, and brackets over the front 6 jaw teeth made of self-curing plastic acted as splints that could be accurately fixed to the solid model of the beagle's teeth with good reproducibility. The mean distance deviation between the preoperative design and intraoperative navigation was 0.26 ± 0.57 mm on the inner plate height, 0.01 ± 0.73 mm on the outer plate height, and 0.20 ± 0.51 mm on the osteotomy length. The angular deviation was 0.94° ± 1.38° on the retained mandibular angle and 0.66° ± 0.97° on the osteotomy angle (the angle between the mandibular ramus plane and the osteotomy plane) (Supplementary Digital Content, Table 1). Each data set had a P value > 0.05, so the deviations were not statistically significant, indicating that the system's accuracy meets the requirements of the surgery. Nevertheless, the positions of the osteotomy point of the mandibular trailing edge (B1), the osteotomy point of the mandibular front edge (B2), and the medial mandibular osteotomy point (B3) varied considerably, because these points have no clear anatomical landmarks on the beagle mandible. The distance deviations of the inner and outer plate heights therefore showed poor stability in the animal experiments; we expect the system to perform better in clinical application, because human maxillofacial bones have clear anatomical landmarks and aesthetic standards, and the surgeon has a definite plan for the length and angle of the osteotomy.
Consistency
We then analyzed the consistency between the preoperative design and postoperative results and found that all indicators showed high correlation coefficients (0.933, 0.927, 0.961, and 0.913) except for the retained mandibular angle (0.283) (Supplementary Digital Content, Table 2). This is because the retained mandibular angle, that is, the angle between A1B1 and B1B2 (∠A1B1B2), is determined only by the positions of points A1, B1, and B2. The anatomical positions of point A1 (condylar point) and point B2 (posterior point of the dog's fourth premolar, corresponding to the posterior point of the human second premolar) are relatively easy to determine. The position of point B1 (the new mandibular angle) is relatively subjective in animal experiments, so the between-group differences were large and the correlation coefficient was significantly lower than that of the other measurement indices. However, this did not significantly affect the error of each measurement index, nor did it reduce the overall accuracy of the NS.
Moreover, point B1 corresponds in humans to the intersection of the depression groove plane and the ascending ramus of the mandible, a position that is relatively fixed and easier to determine. We therefore believe that the between-group consistency of the retained mandibular angle will be good in humans, that the consistency of the other measurement indices affected by point B1 (such as the length of A1B1, the outer plate height) will be better, and that the overall stability of the system will improve.
Limitation and Prospects
A limitation of this study is that only the preoperative designs and postoperative results of beagle mandibular osteotomy were compared; there was no randomized controlled experiment, such as a comparative study against traditional surgical methods. This is mainly because the traditional method relies only on the operator's experience, with no quantitative design plan before surgery. Moreover, the only postoperative evaluation criteria for the traditional method are symmetry and aesthetic appearance, so its errors cannot be evaluated objectively.
Also, because of the lack of randomized controlled trials, there are no time records for traditional surgical methods, no time-cost comparison with navigation, and no statistical analysis of the related data. However, as familiarity with the preoperative design software and workflow improves, the overall time of each navigated osteotomy is gradually shortened. We speculate that the magnitude of this time reduction may gradually diminish, or even disappear, with large-scale future use, and that the preoperative preparation time will stabilize at a level higher than that of traditional methods. Nevertheless, the precise, real-time intraoperative guidance of the NS will largely avoid the time and economic costs of the traditional method's visual evaluation and repeated correction.
The video detection method adopted in this paper uses the tooth model to simulate the real canine teeth during the operation, uses the marker bracket to fix the spatial relationship between the marker and the model, and performs registration before the operation (Fig. 8). When an experimental animal wears the marker bracket, the NS recognizes the markers through the related programs and overlaps the virtual image with the mandible, just as the tooth model overlaps the preoperative plan. The registration process is completed before the operation, with no waiting for intraoperative registration, which greatly reduces the operative time.
In this study, an AR NS based on an occlusal splint was established in animal experiments. The system has good accuracy: the distance error is less than 1.5 mm, and the angle error is less than 3°. During the operation, experimenters can view information about important nerves and blood vessels beneath the animal's skin and bone surfaces in real-time. This information is a computer-generated virtual 3D graphic. By observing virtual information combined with real images, experimenters can better understand the tissues near the surgical site and reduce surgical risks. This research provides guidance and experience for clinical surgical practice, safeguards the safety of surgery, and offers a way to improve surgical results. After the osteotomy, a symmetrical mandible and roughly identical bilateral mandibular angles can be seen, as shown in Figure 5. In addition, there is still much room to improve the accuracy of the NS: reducing some registration and docking steps would further reduce data loss and error, thereby improving the overall accuracy of the surgical NS.
CONCLUSIONS
This article uses an optical NS to simulate MASO in experimental animals and compares the results of the mandibular osteotomy guided by the osteotomy plane in the AR NS with the preoperative design. The multi-line and multi-angle measurement data are consistent, with no statistically significant differences. The use of this registration method can therefore ensure the accuracy of craniofacial surgery, and operating according to the augmented display of the bone CP can ensure the safety of the osteotomy, laying the foundation for future applications in craniofacial surgery. Although some data show unsatisfactory consistency, this is due to the large differences in mandibular contour and internal structure between dogs and humans. In summary, the AR NS established in this experiment explores a simple and practical registration method, and its high-precision navigation performance can meet the actual needs of craniomaxillofacial surgery. This moves the application of the optical surgical NS in real operations a step forward.
Supplementary Material
Footnotes
The authors report no conflicts of interest.
Supplemental digital contents are available for this article.
REFERENCES
1. Ghai S, Sharma Y, Jain N, et al. Use of 3-D printing technologies in craniomaxillofacial surgery: a review. Oral Maxillofac Surg 2018;22:249–259.
2. Widmann G, Stoffner R, Bale R. Errors and error management in image-guided craniomaxillofacial surgery. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2009;107:701–715.
3. Liu D, Huang J, Shan L, et al. Intraoral curved ostectomy for prominent mandibular angle by grinding, contiguous drilling, and chiseling. J Craniofac Surg 2011;22:2109–2113.
4. Qu M, Hou Y, Xu Y, et al. Precise positioning of an intraoral distractor using augmented reality in patients with hemifacial microsomia. J Craniomaxillofac Surg 2015;43:106–112.
5. Gao Y, Lin L, Chai G, et al. A feasibility study of a new method to enhance the augmented reality navigation effect in mandibular angle split osteotomy. J Craniomaxillofac Surg 2019;47:1242–1248.
6. Zhu M, Liu F, Chai G, et al. A novel augmented reality system for displaying inferior alveolar nerve bundles in maxillofacial surgery. Sci Rep 2017;7:42365.
7. Bernhardt S, Nicolau SA, Soler L, et al. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017;37:66–90.
8. Elmi-Terander A, Nachabe R, Skulason H, et al. Feasibility and accuracy of thoracolumbar minimally invasive pedicle screw placement with augmented reality navigation technology. Spine (Phila Pa 1976) 2018;43:1018–1023.
9. Porpiglia F, Fiori C, Checcucci E, et al. Augmented reality robot-assisted radical prostatectomy: preliminary experience. Urology 2018;115:184.
10. Li Z, Butler E, Li K, et al. Large-scale exploration of neuronal morphologies using deep learning and augmented reality. Neuroinformatics 2018;16:339–349.
11. Chu Y, Yang J, Ma S, et al. Registration and fusion quantification of augmented reality based nasal endoscopic surgery. Med Image Anal 2017;42:241–256.
12. Zhu M, Chai G, Lin L, et al. Effectiveness of a novel augmented reality-based navigation system in treatment of orbital hypertelorism. Ann Plast Surg 2016;77:662–668.
13. Jiang T, Zhu M, Chai G, et al. Precision of a novel craniofacial surgical navigation system based on augmented reality using an occlusal splint as a registration strategy. Sci Rep 2019;9:501.
14. Grunert P, Darabi K, Espinosa J, et al. Computer-aided navigation in neurosurgery. Neurosurg Rev 2003;26:73–99.
15. Korb W, Marmulla R, Raczkowsky J, et al. Robots in the operating theatre: chances and challenges. Int J Oral Maxillofac Surg 2004;33:721–732.
16. Schmelzeisen R, Gellrich NC, Schramm A, et al. Navigation-guided resection of temporomandibular joint ankylosis promotes safety in skull base surgery. J Oral Maxillofac Surg 2002;60:1275–1283.
17. Gellrich NC, Schramm A, Hammer B, et al. Computer-assisted secondary reconstruction of unilateral posttraumatic orbital deformity. Plast Reconstr Surg 2002;110:1417–1429.
18. Hassfeld S, Brief J, Krempien R, et al. Computer-assisted oral, maxillary and facial surgery. Radiologe 2000;40:218–226.
19. Xia J, Samman N, Yeung RW, et al. Computer-assisted three-dimensional surgical planning and simulation. 3D soft tissue planning and prediction. Int J Oral Maxillofac Surg 2000;29:250–258.
20. Troulis MJ, Everett P, Seldin EB, et al. Development of a three-dimensional treatment planning system based on computed tomographic data. Int J Oral Maxillofac Surg 2002;31:349–357.
21. Nijmeh AD, Goodger NM, Hawkes D, et al. Image-guided navigation in oral and maxillofacial surgery. Br J Oral Maxillofac Surg 2005;43:294–302.
22. Hassfeld S, Mühling J. Computer assisted oral and maxillofacial surgery: a review and an assessment of technology. Int J Oral Maxillofac Surg 2001;30:2–13.
23. Xia J, Ip HH, Samman N, et al. Three-dimensional virtual-reality surgical planning and soft-tissue prediction for orthognathic surgery. IEEE Trans Inf Technol Biomed 2001;5:97–107.
24. Girod S, Teschner M, Schrell U, et al. Computer-aided 3-D simulation and prediction of craniofacial surgery: a new approach. J Craniomaxillofac Surg 2001;29:156–158.
25. Hassfeld S, Mühling J, Zöller J. Intraoperative navigation in oral and maxillofacial surgery. Int J Oral Maxillofac Surg 1995;24(1 Pt 2):111–119.
26. Gaggl A, Schultes G, Kärcher H. Navigational precision of drilling tools preventing damage to the mandibular canal. J Craniomaxillofac Surg 2001;29:271–275.
27. Siessegger M, Schneider BT, Mischkowski RA, et al. Use of an image-guided navigation system in dental implant surgery in anatomically complex operation sites. J Craniomaxillofac Surg 2001;29:276–281.
- 28.Ewers R, Schicho K, Undt G, et al. Basic research and 12 years of clinical experience in computer-assisted navigation technology: a review. Int J Oral Maxillofac Surg 2005; 34:1–8. [DOI] [PubMed] [Google Scholar]
- 29.Zavattero E, Ramieri G, Roccia F, et al. Comparison of the outcomes of complex orbital fracture repair with and without a surgical navigation system: a prospective cohort study with historical controls. Plast Reconstr Surg 2017; 139:957–965. [DOI] [PubMed] [Google Scholar]
- 30.Murugesan YP, Alsadoon A, Manoranjan P, et al. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries. Int J Med Robot 2018; 14:e1889. [DOI] [PubMed] [Google Scholar]
- 31.Pietruski P, Majak M, Swiatek-Najwer E, et al. Accuracy of experimental mandibular osteotomy using the image-guided sagittal saw. Int J Oral Maxillofac Surg 2016; 45:793–800. [DOI] [PubMed] [Google Scholar]
- 32.Luebbers HT, Messmer P, Obwegeser JA, et al. Comparison of different registration methods for surgical navigation in cranio-maxillofacial surgery. J Craniomaxillofac Surg 2008; 36:109–116. [DOI] [PubMed] [Google Scholar]
- 33.Metzger MC, Rafii A, Holhweg-Majert B, et al. Comparison of 4 registration strategies for computer-aided maxillofacial surgery. Otolaryngol Head Neck Surg 2007; 137:93–99. [DOI] [PubMed] [Google Scholar]
- 34.Eggers G, Mühling J, Marmulla R. Image-to-patient registration techniques in head surgery. Int J Oral Maxillofac Surg 2006; 35:1081–1095. [DOI] [PubMed] [Google Scholar]
- 35.Mascott CR, Sol JC, Bousquet P, et al. Quantification of true in vivo (application) accuracy in cranial image-guided surgery: influence of mode of patient registration. Neurosurgery 2006; 59: (1 Suppl 1): ONS146–ONS156. [DOI] [PubMed] [Google Scholar]
- 36.Herring JL, Dawant BM, Maurer CR, Jr, et al. Surface-based registration of CT images to physical space for image-guided surgery of the spine: a sensitivity study. IEEE Trans Med Imaging 1998; 17:743–752. [DOI] [PubMed] [Google Scholar]
- 37.Iwai T, Mikami T, Yasumura K, et al. Use of occlusal splint for noninvasive fixation of a reference frame in orbital navigation surgery. J Maxillofac Oral Surg 2016; 15:410–412. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Opdenakker Y, Swennen G, Abeloos J. Application of a non-invasive reference headband and a surgical splint for intraoperative paediatric navigation. Int J Oral Maxillofac Surg 2017; 46:360–362. [DOI] [PubMed] [Google Scholar]
- 39.Cutting C, Grayson B, McCarthy JG, et al. A virtual reality system for bone fragment positioning in multisegment craniofacial surgical procedures. Plast Reconstr Surg 1998; 102:2436–2443. [DOI] [PubMed] [Google Scholar]
- 40.Tsuji M, Noguchi N, Shigematsu M, et al. A new navigation system based on cephalograms and dental casts for oral and maxillofacial surgery. Int J Oral Maxillofac Surg 2006; 35:828–836. [DOI] [PubMed] [Google Scholar]
- 41.Wang J, Shen Y, Yang S. A practical marker-less image registration method for augmented reality oral and maxillofacial surgery. Int J Comput Assist Radiol Surg 2019; 14:763–773. [DOI] [PubMed] [Google Scholar]