Healthcare. 2023 Oct 18;11(20):2760. doi: 10.3390/healthcare11202760

Application of Artificial Intelligence in Orthodontics: Current State and Future Perspectives

Junqi Liu 1, Chengfei Zhang 2, Zhiyi Shan 1,*
Editor: Daisuke Ekuni
PMCID: PMC10606213  PMID: 37893833

Abstract

In recent years, artificial intelligence (AI) has emerged as a transformative force in multiple domains, including orthodontics. This review aims to provide a comprehensive overview of the present state of AI applications in orthodontics, which can be categorized into the following domains: (1) diagnosis, including cephalometric analysis, dental analysis, facial analysis, skeletal-maturation-stage determination and upper-airway obstruction assessment; (2) treatment planning, including decision making for extractions and orthognathic surgery, and treatment outcome prediction; and (3) clinical practice, including practice guidance, remote care, and clinical documentation. We have witnessed a broadening of the application of AI in orthodontics, accompanied by advancements in its performance. Additionally, this review outlines the existing limitations within the field and offers future perspectives.

Keywords: artificial intelligence, orthodontics, machine learning, deep learning

1. Introduction

AI is a subfield of computer science that refers to the ability of a machine to imitate the cognitive functions of human intelligence [1]. Over the last decade, the field of AI has shown great potential across a wide variety of tasks. Expert systems and machine learning are two important branches of AI. Unlike a knowledge-based expert system, which is built on predetermined rules and knowledge, machine learning focuses on “learning” from training data to improve its capability [2,3]. In addition to its strong adaptability and generalization capabilities, machine learning can process large-scale data and benefits from a large body of open-source algorithms, making it one of the most promising technologies in AI.
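To make this distinction concrete, the toy sketch below contrasts a hand-coded rule of the kind an expert system encodes with a model that learns a comparable rule from labeled examples. The feature names, thresholds, and data are hypothetical illustrations, not drawn from any cited study.

```python
# A toy contrast between the two branches: a rule-based expert system
# encodes knowledge up front, while a machine-learning model infers its
# own rules from labeled examples. All values here are hypothetical.
from sklearn.tree import DecisionTreeClassifier

def expert_system_needs_treatment(overjet_mm: float, crowding_mm: float) -> bool:
    """Expert system: the decision rules are fixed by the designer."""
    return overjet_mm > 6.0 or crowding_mm > 4.0

# Machine learning: the same kind of rule is *learned* from training data.
X_train = [[2.0, 1.0], [7.5, 0.5], [3.0, 6.0], [1.5, 2.0]]  # [overjet, crowding]
y_train = [0, 1, 1, 0]                                      # 1 = needs treatment

model = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)
print(expert_system_needs_treatment(7.5, 0.5))  # True (rule fired)
print(model.predict([[7.5, 0.5]]))              # [1] (rule inferred)
```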

Artificial neural networks (ANNs), a sub-domain of machine learning, draw inspiration from the biological neural system of the human brain [4]. ANNs have been notably employed to model intricate relationships within massive datasets [5]. An ANN typically has a minimum of three layers, namely, an input layer, an output layer, and at least one hidden layer [6]. Neurons within each layer are interconnected to establish a network of processors. ANNs encompassing multiple hidden layers are commonly referred to as deep learning, which has demonstrated exceptional performance in computer vision tasks such as classification and segmentation [7]. Deep learning is becoming increasingly popular due to its high feasibility and growing computing performance, as well as advanced model training algorithms [8]. In addition, one notable advantage of deep learning over traditional machine learning is that it allows automated feature extraction without manual intervention, enabling the better harnessing of the information within the data [9]. Convolutional neural networks (CNNs), one of the most widely used deep learning algorithms, exhibit particularly remarkable performance in handling high-resolution images [10,11,12]. In a CNN, the hidden layers are replaced by three distinct functional layers, namely, convolutional layers, pooling layers, and fully connected layers. The convolutional layers employ convolutional kernels as filters to generate feature maps. The convolution process effectively reduces image complexity, making CNNs highly suitable for tasks like recognizing objects, shapes, and patterns. The pooling layers are commonly employed after convolutional layers to decrease the dimension of the feature maps while retaining essential information. Following several iterations of convolutional and pooling layers, the outputs are integrated in the fully connected layers for further decision making. Consequently, thanks to these three layer types, CNNs outperform algorithms such as plain ANNs in image-related tasks [6,11].
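The layer structure described above can be summarized in a few lines of code. The following is a minimal sketch, not any specific published model: two convolution–pooling stages followed by fully connected layers that map the pooled feature maps to class scores for a grayscale, radiograph-sized input.

```python
# A minimal sketch of the CNN layout described above: convolution ->
# pooling repeated, then fully connected layers that integrate the
# feature maps into a decision. Sizes are illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(                 # fully connected layers
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One grayscale 224x224 radiograph-sized input -> class scores.
logits = TinyCNN()(torch.randn(1, 1, 224, 224))
print(logits.shape)  # torch.Size([1, 3])
```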

Malocclusion is distinguished by an anomaly in teeth alignment, occlusion and/or craniofacial relationships [13]. It is a deviation from the norm and a manifestation of normal biological variability [14]. Numerous studies have indicated that the presence of malocclusion not only affects oral health and dental aesthetics but also has a negative impact on psychological well-being and social interactions [15,16,17]. Malocclusion is considered the world’s third most prevalent oral disease, and nearly 30% of the population present with a great need for orthodontic treatment [18,19]. Clinical orthodontic practice often requires a significant amount of time to conduct various analyses that demand extensive clinical experience from orthodontists. This workload reduces the efficiency of clinical orthodontic practice, and the reliance on clinical experience also makes orthodontic treatment less accessible through non-specialists.

A series of studies have shown that AI can significantly enhance the efficiency of clinical orthodontic practice [20,21]. Several commercially available AI-driven software programs (3Shape Dental System 2.22.0.0, Uceph 4.2.1, Mastro 3D V6.0, etc.) have found widespread applications in orthodontic care. With the ongoing advancement of AI algorithms and computing capabilities, and the growing availability of datasets, the scope of AI applications in orthodontics is expanding, accompanied by continuous performance improvement. Staying updated on the latest developments of AI applications in orthodontics through timely summaries helps researchers gain a rapid and accurate understanding of this field. In addition, despite encouraging results, there is still significant room for progress in the application of AI in orthodontics. Therefore, this review provides a comprehensive summary of the current state of AI applications in orthodontics, encompassing diagnosis, treatment planning, and clinical practice. Additionally, the review discusses the current limitations of AI and offers future perspectives, aiming to provide valuable insights for the integration of AI into orthodontic practice.

2. Application of AI in Orthodontics

2.1. Diagnosis

A satisfactory orthodontic diagnosis relies on a series of analyses, such as cephalometric analysis, dental analysis, facial analysis, skeletal maturation determination and upper-airway obstruction assessment, to comprehensively evaluate a patient’s overall profile, including the facial profile, dental and skeletal relationships, skeletal maturation stage and upper-airway patency [22].

2.1.1. Cephalometric Analysis

Cephalometric analysis, especially landmarking on lateral cephalograms, serves as the foundation of orthodontic diagnosis, treatment planning and treatment outcome assessment. Conventional manual landmarking is time-consuming, experience-dependent and can be inconsistent within and across orthodontists, significantly affecting the efficiency and accuracy of clinical practice [23,24,25,26]. Automated landmark detection was reported as early as the mid-1980s, but the error margin was too high for clinical use [27]. In recent years, with the advancement of AI, numerous studies on automated cephalometric analysis have been conducted, and its reproducibility, efficiency, and accuracy are continuously being enhanced [24,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65]. Notably, cephalometric analysis has emerged as the most extensively explored domain of AI applications in orthodontics. Given the vast amount of literature available, it is impractical to list every study on automated cephalometric analysis. Consequently, this review summarizes only the pertinent literature published within the past five years, as depicted in Table 1, in order to present the latest advancements in this field [24,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65].

Table 1. The application of AI in cephalometric analysis in the past 5 years.

| Author (Year) | Data Type | Dataset Size (Training/Test) | No. of Landmarks/Measurements | Algorithm | Performance |
|---|---|---|---|---|---|
| Payer et al. (2019) [28] | Lateral cephalograms | 150/250 | 19/0 | CNN | Error radii: 26.67% (2 mm), 21.24% (2.5 mm), 16.76% (3 mm), and 10.25% (4 mm). |
| Nishimoto et al. (2019) [29] | Lateral cephalograms | 153/66 | 10/12 | CNN | Average prediction error: 17.02 pixels; median prediction error: 16.22 pixels. |
| Zhong et al. (2019) [30] | Lateral cephalograms | 150/100 (an additional 150 images as a validation set) | 19/0 | U-Net | Test 1: MRE 1.12 ± 0.88 mm; SDR within 2/2.5/3/4 mm: 86.91%/91.82%/94.88%/97.90%. Test 2: MRE 1.42 ± 0.84 mm; SDR within 2/2.5/3/4 mm: 76.00%/82.90%/88.74%/94.32%. |
| Park et al. (2019) [31] | Lateral cephalograms | 1028/283 | 80/0 | YOLOv3, SSD | YOLOv3 demonstrated overall superiority over SSD in accuracy and computational performance. For YOLOv3, SDR within 2/2.5/3/4 mm: 80.40%/87.4%/92.00%/96.2%. |
| Moon et al. (2020) [32] | Lateral cephalograms | Training: 50, 100, 200, 400, 800, 1200, 1600, 2000; test: 200 | 19, 40, 80 | CNN (YOLOv3) | Accuracy was positively correlated with the number of training images and negatively correlated with the number of detection targets. |
| Hwang et al. (2020) [33] | Lateral cephalograms | 1028/283 | 80 | CNN (YOLOv3) | Mean detection error: 1.46 ± 2.97 mm. |
| Oh et al. (2020) [34] | Lateral cephalograms | 150/100 (an additional 150 images as a validation set) | 19/8 | CNN (DACFL) | MRE: 14.55 ± 8.22 pixels; SDR within 2/2.5/3/4 mm: 75.9%/83.4%/89.3%/94.7%; classification accuracy: 83.94%. |
| Kim et al. (2020) [35] | Lateral cephalograms | 1675/400 | 23/8 | Stacked hourglass deep learning model | Point-to-point error: 1.37 ± 1.79 mm; SCR: 88.43%. |
| Kunz et al. (2020) [36] | Lateral cephalograms | 1792/50 | 18/12 | CNN | The CNN models showed almost no statistically significant differences from the humans’ gold standard. |
| Alqahtani et al. (2020) [37] | Lateral cephalograms | –/30 | 16/16 | Commercially available web-based platform (CephX, https://www.orca-ai.com/, accessed on 23 August 2023) | The results obtained from CephX and manual landmarking did not exhibit clinically significant differences. |
| Lee et al. (2020) [38] | Lateral cephalograms | 150/250 | 19/8 | Bayesian CNN | Mean landmark error: 1.53 ± 1.74 mm; SDR within 2/3/4 mm: 82.11%/92.28%/95.95%; classification accuracy: 72.69~84.74%. |
| Yu et al. (2020) [39] | Lateral cephalograms | A total of 5890 | Four skeletal classification indicators | Multimodal CNN | Sensitivity, specificity, and accuracy for vertical and sagittal skeletal classification: >90%. |
| Li et al. (2020) [40] | Lateral cephalograms | 150/100 (an additional 150 images as a validation set) | 19/0 | GCN | MRE: 1.43 mm; SDR within 2/2.5/3/4 mm: 76.57%/83.68%/88.21%/94.31%. |
| Tanikawa et al. (2021) [41] | Lateral cephalograms | 1755/30 for each subgroup | 26/0 | CNN | Mean success rate: 85~91%; mean identification error: 1.32~1.50 mm. |
| Zeng et al. (2021) [42] | Lateral cephalograms | 150/100 (an additional 150 images as a validation set) | 19/8 | CNN | MRE: 1.64 ± 0.91 mm; SDR within 2/2.5/3/4 mm: 70.58%/79.53%/86.05%/93.32%; SCR: 79.27%. |
| Kim et al. (2021) [24] | Lateral cephalograms | 2610/100 (an additional 440 images as a validation set) | 20/0 | Cascade CNN | Overall detection error: 1.36 ± 0.98 mm. |
| Hwang et al. (2021) [43] | Lateral cephalograms | 1983/200 | 19/8 | CNN (YOLOv3) | SDR within 2/2.5/3/4 mm: 75.45%/83.66%/88.92%/94.24%; SCR: 81.53%. |
| Bulatova et al. (2021) [44] | Lateral cephalograms | –/110 | 16/0 | CNN (YOLOv3) (Ceppro software) | 12 of 16 points showed no statistical difference in absolute differences between AI and manual landmarking. |
| Jeon et al. (2021) [45] | Lateral cephalograms | –/35 | 16/26 | CNN | No statistically significant differences except the saddle angle and the linear measurements of the maxillary incisor to the NA line and the mandibular incisor to the NB line. |
| Hong et al. (2022) [46] | Lateral cephalograms | 3004/184 | 20/ | Cascade CNN | Total mean error: 1.17 mm; accuracy percentage: 74.2%. |
| Le et al. (2022) [47] | Lateral cephalograms | 1193/100 | 41/8 | CNN (DACFL) | MRE: 1.87 ± 2.04 mm; SDR within 2/2.5/3/4 mm: 73.32%/80.39%/85.61%/91.68%; average SCR: 83.75%. |
| Mahto et al. (2022) [48] | Lateral cephalograms | –/30 | 18/12 | Commercially available web-based platform (WebCeph, https://webceph.com, accessed on 23 August 2023) | Intraclass correlation coefficient: 7 parameters >0.9 (excellent agreement); 5 parameters 0.75~0.9 (good agreement). |
| Uğurlu et al. (2022) [49] | Lateral cephalograms | 1360/180 (an additional 140 images as a validation set) | 21/0 | CNN (FARNet) | MRE: 3.4 ± 1.57 mm; SDR within 2/2.5/3/4 mm: 76.2%/83.5%/88.2%/93.4%. |
| Yao et al. (2022) [50] | Lateral cephalograms | 312/100 (an additional 100 images as a validation set) | 37/0 | CNN | MRE: 1.038 ± 0.893 mm; SDR within 1, 1.5, 2, 2.5, 3, 3.5, 4 mm: 54.05%, 91.89%, 97.30%, 100%, 100%, 100%, respectively. |
| Lu et al. (2022) [51] | Lateral cephalograms | 150/250 | 19/0 | GCN | MRE: 1.19 mm; SDR within 2/2.5/3/4 mm: 83.20%/88.93%/92.88%/97.07%. |
| Tsolakis et al. (2022) [52] | Lateral cephalograms | –/100 | 16/18 | CNN (commercially available software: CS Imaging V8) | Differences between the AI software (CS Imaging V8) and manual landmarking were not clinically significant. |
| Duran et al. (2023) [53] | Lateral cephalograms | –/50 | 32/18 | Commercially available web-based platforms (OrthoDx, https://orthodx.phimentum.com; WebCeph, https://webceph.com, accessed on 23 August 2023) | Consistency between AI software and manual landmarking: a statistically significant good level for angular measurements; a weak level for linear measurements and soft tissue parameters. |
| Ye et al. (2023) [54] | Lateral cephalograms | –/43 | 32/0 | Commercially available software (MyOrthoX, Angelalign, and Digident) | MRE: MyOrthoX 0.97 ± 0.51 mm; Angelalign 0.80 ± 0.26 mm; Digident 1.11 ± 0.48 mm. SDR (%) within 1/1.5/2 mm: MyOrthoX 67.02 ± 10.23/82.80 ± 7.36/89.99 ± 5.17; Angelalign 78.08 ± 14.23/89.29 ± 14.02/93.09 ± 13.64; Digident 59.13 ± 10.36/78.72 ± 5.97/87.53 ± 4.84. |
| Ueda et al. (2023) [55] | Lateral cephalometric data | A total of 220 | 0/8 | RF | Overall accuracy: 0.823 ± 0.060. |
| Bao et al. (2023) [56] | Reconstructed lateral cephalograms from CBCT | –/85 | 19/23 | Commercially available software (Planmeca Romexis 6.2) | Landmarks: MRE 2.07 ± 1.35 mm; SDR within 1/2/2.5/3/4 mm: 18.82%/58.58%/71.70%/82.04%/91.39%. Measurements: rates of consistency within the 95% limits of agreement: 91.76~98.82%. |
| Kim et al. (2021) [57] | Reconstructed posteroanterior cephalograms from CBCT | 345/85 | 23/0 | Multi-stage CNN | MRE: 2.23 ± 2.02 mm; SDR within 2 mm: 60.88%. |
| Takeda et al. (2021) [58] | Posteroanterior cephalograms | 320/80 | 4/1 | CNN, RF | The CNN showed a higher coefficient of determination and lower mean absolute error than RF for the distance from the vertical reference line to menton; the CNN with a stochastic gradient descent optimizer performed best. |
| Lee et al. (2019) [59] | CBCT | 20/7 | 7 | Deep learning | Average point-to-point error: 1.5 mm. |
| Torosdagli et al. (2019) [60] | CBCT | A total of 50 | 9/0 | Deep geodesic learning | Errors in pixel space: <3 pixels for all landmarks. |
| Yun et al. (2020) [61] | CBCT | 230/25 | 93/0 | CNN | Average point-to-point error: 3.63 mm. |
| Kang et al. (2021) [62] | CT | 20/8 | 16/0 | Multi-stage DRL | Mean detection error: 1.96 ± 0.78 mm; SDR within 2/2.5/3/4 mm: 58.99%/75.39%/86.52%/95.70%. |
| Ghowsi et al. (2022) [63] | CBCT | –/100 | 53/0 | Commercially available software (Stratovan Corporation) | Mean absolute error: 1.57 mm; mean error distance: 3.19 ± 2.6 mm; SDR within 2/2.5/3/4 mm: 35%/48%/59%/75%. |
| Dot et al. (2022) [64] | CT | 128/38 (an additional 32 images as a validation set) | 33/15 | SCN | Landmarks: MRE 1.0 ± 1.3 mm; SDR within 2/2.5/3 mm: 90.4%/93.6%/95.4%. Measurements: mean errors −0.3 ± 1.3° (angular), −0.1 ± 0.7 mm (linear). |
| Blum et al. (2023) [65] | CBCT | 931/114 | 35/0 | CNN | Mean error: 2.73 mm. |

MRE, mean radial error; SDR, success detection rate; YOLOv3, You-Only-Look-Once version 3; SSD, Single-Shot Multibox Detector; SCR, success classification rate; DACFL, deep anatomical context feature learning; CBCT, cone-beam computed tomography; GCN, graph convolutional network; FARNet, feature aggregation and refinement network; DRL, deep reinforcement learning; CT, computerized tomography; SCN, SpatialConfiguration-Net.

In general, acceptable errors for linear and angular measurements are less than 2 mm and 2°, respectively [23,36,38,43,44,47,54,66,67,68,69,70]. By this criterion, although some commercially available software can achieve high overall accuracy in automated landmarking on lateral cephalograms, manual supervision is still recommended [47,48,53,54,56].
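As a concrete reference for the metrics used above and throughout Table 1, the sketch below computes the mean radial error (MRE) and success detection rate (SDR) from predicted versus ground-truth landmark positions; the coordinates are illustrative values, not data from any cited study.

```python
# A small sketch of the two metrics reported throughout Table 1: mean
# radial error (MRE) and success detection rate (SDR), computed from
# predicted vs. ground-truth landmark coordinates (in mm).
import numpy as np

def mre_and_sdr(pred: np.ndarray, truth: np.ndarray, thresholds=(2.0, 2.5, 3.0, 4.0)):
    """pred, truth: (n_landmarks, 2) arrays of (x, y) positions in mm."""
    radial_errors = np.linalg.norm(pred - truth, axis=1)  # Euclidean distance per landmark
    mre = radial_errors.mean()
    sdr = {t: float((radial_errors <= t).mean()) for t in thresholds}
    return mre, sdr

pred = np.array([[10.2, 20.1], [35.5, 44.0], [60.3, 12.9]])   # illustrative only
truth = np.array([[10.0, 20.0], [37.0, 44.5], [60.0, 13.0]])
mre, sdr = mre_and_sdr(pred, truth)
print(f"MRE = {mre:.2f} mm; SDR@2mm = {sdr[2.0]:.0%}")
```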

Compared to classical machine learning, deep learning, and especially CNNs, demonstrates superior performance and has been investigated more extensively (Table 1). Several studies have shown that You-Only-Look-Once version 3 (YOLOv3), a popular CNN algorithm, yields remarkable results in automated landmarking [32,33,43,44]. Park et al. compared the accuracy and computational efficiency of two CNN algorithms, YOLOv3 and the Single-Shot Multibox Detector (SSD), in identifying 80 landmarks in lateral cephalometric radiographs. The results indicated that YOLOv3 exhibited superior accuracy and computational performance compared to SSD [31]. To mitigate the risk of overfitting and enhance generalizability, Kim et al. collected 3150 lateral cephalograms taken by nine different cephalography machines from multiple centers nationwide. The researchers utilized a cascade CNN algorithm and achieved an overall automated detection error of 1.36 ± 0.98 mm [24]. The same team developed a CNN algorithm that achieved a total mean error of 1.17 mm in lateral cephalogram landmark identification despite the presence of genioplasty, bone remodeling, and orthodontic and orthognathic appliances, paving the way for its further use in orthognathic surgical patients [46]. Yao et al. utilized a CNN-based model to identify 37 landmarks in lateral cephalograms, achieving an MRE of 1.038 ± 0.893 mm and an SDR within 2 mm of 97.30% [50]. To the best of our knowledge, this model achieved the best performance in automated landmarking. Existing CNN models do have some drawbacks, such as down-sampling quantization errors and the need for preprocessing or postprocessing to improve accuracy, which may increase computational cost and time. To address these issues, Lu et al. proposed a three-layer graph convolutional network (GCN), obtaining an MRE of 1.19 mm and an SDR within 2 mm of 83.20% [51].
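Many of the CNN approaches in Table 1 (e.g., U-Net-style models) regress one heatmap per landmark and take the hottest pixel as the prediction. The following is a minimal sketch of that decoding step, assuming a (landmarks × height × width) array of model outputs; the synthetic peaks are for illustration only.

```python
# Decode landmark coordinates from per-landmark heatmaps by taking the
# location of the maximum response in each map.
import numpy as np

def decode_heatmaps(heatmaps: np.ndarray) -> np.ndarray:
    """Return an (n_landmarks, 2) array of (row, col) peak coordinates."""
    n, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(n, -1).argmax(axis=1)  # peak per heatmap
    return np.stack([flat_idx // w, flat_idx % w], axis=1)

demo = np.zeros((2, 64, 64))
demo[0, 10, 20] = 1.0  # synthetic peaks standing in for model output
demo[1, 40, 5] = 1.0
print(decode_heatmaps(demo))  # [[10 20] [40  5]]
```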

At the same time, research has reported the use of CNNs for automated landmarking on posteroanterior cephalograms to assess mandibular deviation, which can aid in evaluating facial symmetry [57,58]. Thanks to the advancement of computational power, AI has also made progress in three-dimensional (3D) cephalometric landmark detection, with deep learning, and CNNs in particular, being the most efficient methods [59,60,71,72,73]. Blum et al. utilized a CNN-based model to conduct 3D cephalometric analysis, which yielded a mean error of 2.73 mm and exhibited a 95% reduction in processing time compared with manual annotation [65]. Dot et al. proposed a fully convolutional network, SpatialConfiguration-Net, for the 3D automated detection of 33 landmarks and 15 measurements, achieving superior outcomes. Specifically, the MRE for landmarks was only 1.0 ± 1.3 mm, and the SDR within 2 mm reached 90.4%. Regarding the measurements, the mean errors were −0.3 ± 1.3° and −0.1 ± 0.7 mm for the angular and linear variables, respectively [64]. Deep reinforcement learning (DRL), an algorithm that merges the advantages of deep learning (perception ability) and reinforcement learning (decision-making ability), has also garnered attention for its performance in 3D localization [74,75,76]. Kang et al. utilized multi-stage DRL for 3D automated landmark detection. The DRL algorithm achieved a mean detection error of 1.96 ± 0.78 mm for landmarks, with SDRs of 58.99% and 95.70% within the 2 mm and 4 mm ranges, respectively [62]. Nevertheless, the current progress in automated 3D cephalometric analysis is predominantly concentrated on landmark detection, with limited emphasis on linear and angular measurements. It is anticipated that future advancements will address this limitation.

2.1.2. Dental Analysis

In orthodontic clinical practice, the utilization of intraoral photographs and orthodontic study models is imperative for dental analysis. These examinations provide clinicians with comprehensive information regarding various aspects, including molar relationships, tooth crowding, dental arch width, overjet and overbite, and oral health status. However, the manual analysis of these examinations is both time-consuming and labor-intensive. Consequently, there is potential for AI to replace human involvement in this analysis. Talaat et al. utilized the YOLO algorithm to detect malocclusion (specifically tooth crowding or spacing, abnormal overjet or overbite, and crossbite) from intraoral photographs. The results showed an exceptional accuracy rate of 99.99% [77]. Similarly, using intraoral imaging as training data, Ryu et al. utilized four CNN algorithms to assess tooth crowding; the results showed that VGG19 had the minimum mean errors in the maxilla (0.84 mm) and mandible (1.06 mm) [78].

The development of digital technology has significantly facilitated the adoption of 3D intraoral scanner images and digital dental models in clinical practice. Some companies, such as Invisalign (Align Technology, Santa Clara, CA, USA), have effectively utilized 3D oral scan data and digital models for automated measurement and analysis. In addition, Im et al. proposed a dynamic-graph convolutional neural network (DGCNN) to automate tooth segmentation in digital models, achieving superior accuracy and reduced computational time compared to two commercially available software programs, OrthoAnalyzer (ver.1.7.1.3) and Autolign (ver.1.6.2.1) [79]. Beyond that, the accurate segmentation of teeth and the recognition of landmarks on teeth are crucial for automated dental analysis, and significant advancements have been consistently achieved in this domain, which is expected to pave the way for further clinical applications [80,81,82,83,84].

2.1.3. Facial Analysis

Facial photographs play a pivotal role in evaluating facial asymmetry and proportions. To our knowledge, only three articles have so far reported on automated facial analysis, and all of them used 2D frontal photos as training data.

Rao et al. utilized an active shape model algorithm for automated landmarking and measurement on facial images, but only just over 50% of the landmark measures had an error within 3 mm [85]. Yurdakurban et al. compared machine-learning-based software with researchers in detecting the facial midline and evaluating asymmetry, and the differences in most measurements between the two methods were not statistically significant [86]. Rousseau et al. employed a CNN to analyze the vertical dimension of patients. The results showed higher precision and efficiency than manual measurements, with the 95% limits of agreement between the manual and automated methods within 10% [87]. Overall, automated facial analysis is still in its early stages and requires further research to improve its accuracy and applications.

2.1.4. Skeletal Maturation Determination

Determining a patient’s growth spurt is critical for orthodontic treatment, especially for those who need functional and orthopedic treatment. Hand–wrist X-rays have been regarded as the most conventional and accurate way to determine skeletal age. In recent years, several studies have reported combining AI with hand–wrist radiographs to predict skeletal age [88,89,90]. A number of research studies have revealed that the cervical vertebral maturation (CVM) method is also effective for growth estimation and highly correlates with the hand–wrist radiograph method [91,92,93,94,95,96]. Therefore, to minimize unnecessary radiation exposure, hand–wrist X-rays are not routinely used in clinical orthodontic practice [97,98]. Instead, the CVM method, which evaluates the size and shape of the cervical vertebrae on lateral cephalograms, has become increasingly popular in predicting skeletal maturation [91,92,93,94,95,96]. The application of AI in skeletal maturation assessment using lateral cephalograms is summarized in Table 2 [91,98,99,100,101,102,103,104,105,106,107].

Table 2. The application of AI in skeletal maturation assessment using lateral cephalograms.

| Author (Year) | Data Type | Dataset Size (Training/Test) | Algorithm | Performance |
|---|---|---|---|---|
| Kök et al. (2019) [99] | Lateral cephalograms | 240/60 | k-NN, NB, DT, ANN, SVM, RF, LR | Mean rank of accuracy: k-NN 4.67, NB 4.50, DT 3.67, ANN 2.17, SVM 2.50, RF 4.33, LR 5.83. |
| Makaremi et al. (2019) [100] | Lateral cephalograms | Training: 360/600/900/1870; evaluation: 300; testing: 300 | CNN | Performance varied depending on the number of images and the pre-processing method. |
| Amasya et al. (2020) [101] | Lateral cephalograms | 498/149 | LR, SVM, RF, ANN, DT | Accuracy: LR 78.69%, SVM 81.08%, RF 82.38%, ANN 86.93%, DT 85.89%. |
| Amasya et al. (2020) [102] | Lateral cephalograms | –/72 | ANN | Average of 58.3% agreement with four human observers. |
| Kök et al. (2021) [91] | Lateral cephalograms | A total of 419 | A total of 24 different ANN models | The highest accuracy was 0.9427. |
| Seo et al. (2021) [103] | Lateral cephalograms | A total of 600 | ResNet-18, MobileNet-v2, ResNet-50, ResNet-101, Inception-v3, Inception-ResNet-v2 | Accuracy/precision/recall/F1 score: ResNet-18 0.927 ± 0.025/0.808 ± 0.094/0.808 ± 0.065/0.807 ± 0.074; MobileNet-v2 0.912 ± 0.022/0.775 ± 0.111/0.773 ± 0.040/0.772 ± 0.070; ResNet-50 0.927 ± 0.025/0.807 ± 0.096/0.808 ± 0.068/0.806 ± 0.075; ResNet-101 0.934 ± 0.020/0.823 ± 0.113/0.837 ± 0.096/0.822 ± 0.054; Inception-v3 0.933 ± 0.027/0.822 ± 0.119/0.833 ± 0.100/0.821 ± 0.082; Inception-ResNet-v2 0.941 ± 0.018/0.840 ± 0.064/0.843 ± 0.061/0.840 ± 0.051. |
| Zhou et al. (2021) [104] | Lateral cephalograms | 980/100 | CNN | Mean labeling error: 0.36 ± 0.09 mm; accuracy: 71%. |
| Kim et al. (2021) [105] | Lateral cephalograms | 480/120 | CNN | The three-step model obtained the highest accuracy at 62.5%. |
| Rahimi et al. (2022) [106] | Lateral cephalograms | 692/99 (an additional 99 images as a validation set) | ResNet-18, ResNet-50, ResNet-101, ResNet-152, VGG19, DenseNet, ResNeXt-50, ResNeXt-101, MobileNetV2, InceptionV3 | ResNeXt-101 showed the best test accuracy: six-class 61.62%; three-class 82.83%. |
| Radwan et al. (2023) [107] | Lateral cephalograms | 1201/150 (an additional 150 images as a validation set) | U-Net, AlexNet | Segmentation network: global accuracy 0.99; average Dice score 0.93. Classification network: accuracy 0.802; sensitivity (pre-pubertal/pubertal/post-pubertal) 0.78/0.45/0.98; specificity 0.94/0.94/0.75; F1 score 0.76/0.57/0.90. |
| Akay et al. (2023) [98] | Lateral cephalograms | 352/141 (an additional 94 images as a validation set) | CNN | Classification accuracy: 58.66%; precision (stage 1/2/3/4/5/6): 0.82/0.47/0.64/0.52/0.55/0.52; recall: 0.70/0.74/0.58/0.54/0.37/0.60; F1 score: 0.76/0.57/0.61/0.53/0.44/0.56. |

k-NN, k-nearest neighbors; NB, Naive Bayes; LR, logistic regression; CNN, convolutional neural network; SVM, support vector machine; RF, random forest; ANN, artificial neural network; DT, decision tree.

Kök et al. utilized seven different machine-learning algorithms to determine the CVM stages [99]. The results showed that these algorithms exhibited varying levels of accuracy in predicting different CVM stages, but the ANN was considered the most stable algorithm, with an average rank of 2.17 in determining all the CVM stages [99]. Similarly, Amasya et al. developed and compared the performance of five machine-learning algorithms in CVM analysis, and the ANN model proved to be better at classification than the other four algorithms (decision tree, random forest, logistic regression, and support vector machine) [101]. The same team further compared this ANN model with four independent human observers in automatically staging cervical vertebral maturation but only reached an average of 58.3% agreement with the observers [102]. Several studies have employed CNN models for CVM prediction, with differing degrees of accuracy [98,105,106]. Makaremi et al. pointed out that an equal distribution of images across all CVM stages is beneficial for improving CNN accuracy [100]. Zhou et al. increased the sample size to enhance the reliability of the results [104]. Seo et al. were the first to compare six CNN models and utilized gradient-weighted class activation maps (Grad-CAMs) to visualize the models [103]. The results indicated that all the algorithms achieved an accuracy rate of over 90%, with Inception-ResNet-v2 showing the best performance at an accuracy of 0.941 ± 0.018. In addition, the Grad-CAMs showed that Inception-ResNet-v2 focused on several cervical vertebrae, unlike most of the other algorithms, which mainly focused on the third cervical vertebra. Radwan et al. used a segmentation network (U-Net) followed by a classification network (AlexNet) to predict CVM stages, with a larger sample size and a validation dataset to tune the algorithm; however, the classification network only obtained an accuracy of 0.802 [107]. In summary, ANNs received much attention and recognition in the early years, but more recently, CNNs have become more prominent in image-related tasks. With continuous improvements in algorithms, more encouraging results are expected in the future.
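Grad-CAM, the visualization technique Seo et al. used, weights the last convolutional feature maps by the pooled gradients of the target class score and sums them into a coarse attention map. Below is a condensed sketch of that recipe; the ResNet-18 backbone, layer choice, and random input are illustrative assumptions, not the cited implementation.

```python
# Condensed Grad-CAM: capture feature maps and their gradients via
# hooks, weight the maps by pooled gradients, and upsample the result.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4  # last convolutional block

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in cephalogram
score = model(x)[0].max()  # score of the predicted class (e.g., a CVM stage)
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted sum of maps
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
print(cam.shape)  # torch.Size([1, 1, 224, 224]) attention map
```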

2.1.5. Upper-Airway Obstruction Assessment

Skeletal deformity and airway obstruction mutually influence each other. Upper-airway obstruction can alter breathing, which can affect the normal development of craniofacial structures and potentially lead to malocclusion and other craniofacial abnormalities [108]. Screening for upper-airway obstruction, especially adenoid hypertrophy, is critical for orthodontic diagnosis and treatment planning. The application of AI in upper-airway obstruction assessment is summarized in Table 3 [109,110,111,112,113,114,115,116,117].

Table 3. The application of AI in upper-airway obstruction assessment.

| Author (Year) | Data Type | Dataset Size (Training/Test) | Algorithm | Purpose | Performance |
|---|---|---|---|---|---|
| Shen et al. (2020) [109] | Lateral cephalograms | 488/116 (an additional 64 images as a validation set) | CNN | Adenoid hypertrophy detection | Classification accuracy: 95.6%; average AN ratio error: 0.026; macro F1 score: 0.957. |
| Zhao et al. (2021) [110] | Lateral cephalograms | 581/160 | CNN | Adenoid hypertrophy detection | Accuracy: 0.919; sensitivity: 0.906; specificity: 0.938; ROC: 0.987. |
| Liu et al. (2021) [111] | Lateral cephalograms | 923/100 | VGG-Lite | Adenoid hypertrophy detection | Sensitivity: 0.898; specificity: 0.882; positive predictive value: 0.880; negative predictive value: 0.900; F1 score: 0.889. |
| Sin et al. (2021) [112] | CBCT | 214/46 (an additional 46 images as a validation set) | CNN | Pharyngeal airway segmentation | Dice ratio: 0.919; weighted IoU: 0.993. |
| Leonardi et al. (2021) [113] | CBCT | 20/20 | CNN | Sinonasal cavity and pharyngeal airway segmentation | Mean matching percentage (tolerance 0.5 mm/1.0 mm): 85.35 ± 2.59/93.44 ± 2.54. |
| Shujaat et al. (2021) [114] | CBCT | 48/25 (an additional 30 images as a validation set) | 3D U-Net | Pharyngeal airway segmentation | Accuracy: 100%; Dice score: 0.97 ± 0.02; IoU: 0.93 ± 0.03. |
| Jeong et al. (2023) [115] | Lateral cephalograms | 1099/120 | CNN | Upper-airway obstruction evaluation | Sensitivity: 0.86; specificity: 0.89; positive predictive value: 0.90; negative predictive value: 0.85; F1 score: 0.88. |
| Dong et al. (2023) [116] | CBCT | A total of 87 | HMSAU-Net and 3D-ResNet | Upper-airway segmentation and adenoid hypertrophy detection | Segmentation: Dice value 0.96. Diagnosis: accuracy 0.912; sensitivity 0.976; specificity 0.867; positive predictive value 0.837; negative predictive value 0.981; F1 score 0.901. |
| Jin et al. (2023) [117] | CBCT | A total of 50 | Transformer and U-Net | Nasal and pharyngeal airway segmentation | Precision: 85.88~94.25%; recall: 93.74~98.44%; Dice similarity coefficient: 90.95~96.29%; IoU: 83.68~92.85%. |

ROC, receiver operating characteristic; CBCT, cone-beam computed tomography; CNN, convolutional neural network; AN, adenoid–nasopharynx; IoU, Intersection over Union; HMSAU-Net, hierarchical masks self-attention U-net; 3D, three-dimensional.

Detecting adenoid hypertrophy based on lateral cephalograms has been proven to be highly accurate and reliable [118,119]. The adenoid–nasopharyngeal (AN) ratio proposed by Fujioka is the most notable method [120]. Both Shen et al. and Zhao et al. employed a CNN model to locate the four key points of Fujioka’s method on lateral cephalograms and subsequently calculated the AN ratio [109,110]. The model proposed by Shen et al. obtained a classification accuracy of 95.6% and a mean AN ratio error of 0.026 [109]. The model of Zhao et al. also showed favorable performance, with high accuracy (0.919), sensitivity (0.906) and specificity (0.938) [110]. Liu et al. utilized VGG-Lite to directly detect adenoid hypertrophy on lateral cephalograms without automated landmarking, and the model achieved a positive predictive value of 0.880 and a negative predictive value of 0.900 [111]. Dong et al. proposed two deep learning algorithms, the hierarchical masks self-attention U-net (HMSAU-Net) and 3D-ResNet, to automatically segment upper airways and diagnose adenoid hypertrophy, respectively, from CBCT. Of note, a high accuracy of 0.912 was achieved by the adenoid hypertrophy diagnosis model [116].
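Once the key points are localized, the AN ratio itself is simple geometry. The sketch below is a hedged rendering of Fujioka’s measurement under the assumption that adenoid depth A is the distance from the adenoid convexity to the basiocciput landmark and nasopharyngeal depth N is the distance between the hard-palate and synchondrosis landmarks; the coordinates are invented for illustration.

```python
# AN ratio = adenoid depth A divided by nasopharyngeal depth N,
# computed from four landmark coordinates (a simplifying assumption
# about Fujioka's construction; higher ratios indicate more obstruction).
import numpy as np

def an_ratio(adenoid_pt, basiocciput_pt, palate_pt, synchondrosis_pt) -> float:
    a = np.linalg.norm(np.asarray(adenoid_pt) - np.asarray(basiocciput_pt))
    n = np.linalg.norm(np.asarray(palate_pt) - np.asarray(synchondrosis_pt))
    return a / n

# Illustrative coordinates in mm, not from any cited study.
print(round(an_ratio((12, 30), (12, 22), (5, 40), (18, 36)), 2))  # ~0.59
```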

In addition to adenoid hypertrophy, the morphology and volume of the upper airway are also important indicators for assessing upper-airway obstruction. Using a CNN model, Jeong et al. obtained promising results in automated upper-airway obstruction evaluation based on lateral cephalograms, with a positive predictive value of 0.90 and a negative predictive value of 0.85 [115]. The segmentation of the airway from CBCT can provide a 3D view, enabling the more accurate detection of airway obstruction. Recent studies have shown continuous progress in airway segmentation, with deep learning, and especially CNN algorithms, being the most commonly used. Sin et al. developed a CNN algorithm to automatically segment and calculate the volume of the pharyngeal airway from CBCT images, achieving a Dice ratio of 0.919 and a weighted Intersection over Union (IoU) of 0.993 [112]. Shujaat et al. employed the 3D U-Net and obtained an accuracy of 100% in segmenting the pharyngeal airway [114]. Jin et al. utilized a transformer- and U-Net-based model and segmented the entire upper airway, including the nasal cavity and hypopharynx [117].
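The Dice score and IoU reported by these segmentation studies both compare a predicted binary airway mask with a manually segmented reference. The sketch below computes them on small synthetic volumes for illustration.

```python
# Dice and IoU between a predicted and a reference binary mask.
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return dice, iou

# Two overlapping synthetic "airway" volumes (64^3 voxels).
pred = np.zeros((64, 64, 64), dtype=bool); pred[20:40, 20:40, 20:40] = True
truth = np.zeros((64, 64, 64), dtype=bool); truth[22:40, 20:40, 20:40] = True
dice, iou = dice_and_iou(pred, truth)
print(f"Dice = {dice:.3f}, IoU = {iou:.3f}")
```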

2.2. Treatment Planning

Orthodontic treatment requires cautious decision-making processes that are the cornerstone of a satisfactory treatment outcome, such as the tooth extraction plan and the possibility of surgical intervention. AI is expected to assist orthodontists, especially inexperienced ones, in making correct decisions. The application of AI in treatment planning is summarized in Table 4 [78,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141].

Table 4. The application of AI in treatment planning.

| Author (Year) | Data Type | Dataset Size (Training/Test) | Algorithms | Purpose | Performance |
|---|---|---|---|---|---|
| Xie et al. (2010) [121] | Cephalometric variables, cast measurements | 180/20 | ANN | To predict tooth extraction diagnosis. | Accuracy: 80%. |
| Jung et al. (2016) [122] | Cephalometric, dental, and profile variables, and chief complaint for protrusion | 64/60 (an additional 32 samples as a validation set) | ANN | To predict tooth extraction diagnosis and extraction patterns. | Success rate: tooth extraction diagnosis 93%; extraction patterns 84%. |
| Li et al. (2019) [123] | Demographic, cephalometric, dental, and soft tissue data | A total of 302 samples | MLP (ANN) | To predict tooth extraction diagnosis, extraction patterns and anchorage patterns. | Accuracy: extraction diagnosis 94%; extraction patterns 84.2%; anchorage patterns 92.8%. |
| Suhail et al. (2020) [124] | Diagnosis; feature identification of photos, models and X-rays | A total of 287 samples | ANN, LR, RF | To predict tooth extraction diagnosis and extraction patterns. | Extraction diagnosis: LR outperformed the ANN. Extraction patterns: RF outperformed the ANN. |
| Etemad et al. (2021) [125] | Demographic, cephalometric, dental, and soft tissue data | A total of 838 samples | RF, MLP (ANN) | To predict tooth extraction diagnosis. | Accuracy of RF with 22/117/all inputs: 0.75/0.76/0.75. Accuracy of MLP with 22/117/all inputs: 0.79/0.75/0.79. |
| Shojaei et al. (2022) [126] | Medical records, extra- and intraoral photos, dental model records, and radiographic images | A total of 126 samples | LR, SVM, DT, RF, Gaussian NB, KNN classifier, ANN | To predict tooth extraction diagnosis, extraction patterns and anchorage patterns. | Accuracy for extraction decision: ANN 93%, LR 86%, SVM 83%, DT 76%, RF 83%, Gaussian NB 72%, KNN classifier 72%. Accuracy for extraction pattern: ANN 89%, RF 40%. Accuracy for extraction and anchorage pattern: ANN 81%, RF 23%. |
| Real et al. (2022) [127] | Sex, model variables, cephalometric variables, outcome variables | –/214 | Commercially available software (Auto-WEKA) | To predict tooth extraction diagnosis. | Accuracy: 93.9% with model and cephalometric data; 87.4% with model data only; 72.7% with cephalometric data only. |
| Leavitt et al. (2023) [128] | Cephalometric variables, dental variables, demographic characteristics | 256/110 | RF, LR, SVM | To predict tooth extraction patterns. | Overall accuracy: RF 54.55%, SVM 52.73%, LR 49.09%. |
| Ryu et al. (2023) [78] | Intraoral photographs, extraction decisions | 2736/400 | ResNet (ResNet50, ResNet101), VggNet (VGG16 and VGG19) | To predict tooth extraction diagnosis. | Accuracy: maxilla VGG19 (0.922) > ResNet101 (0.915) > VGG16 (0.910) > ResNet50 (0.909); mandible VGG19 (0.898) = VGG16 (0.898) > ResNet50 (0.895) > ResNet101 (0.890). |
| Prasad et al. (2022) [129] | Clinical data, cephalometric data, cast and photographic data | A total of 700 samples | RF, XGB, LR, DT, K-Neighbors, linear SVM, NB | To predict skeletal jaw base, extraction diagnosis for Class 1 jaw base, and functional/camouflage/surgical strategies for Class 2/3 jaw base. | Different algorithms showed different accuracies in different layers; RF performed best in 3 out of 4 layers. |
| Knoops et al. (2019) [130] | 3D face scans | A total of 4261 | SVM for classification; LR, RR, LARS, and LASSO for regression | To predict the surgery/non-surgery decision and surgical outcomes. | Surgery/non-surgery decision: accuracy 95.4%; sensitivity 95.5%; specificity 95.2%. Surgical outcome simulation, average error: LARS and RR 1.1 ± 0.3 mm; LASSO 1.3 ± 0.3 mm; LR 3.0 ± 1.2 mm. |
| Choi et al. (2019) [131] | Lateral cephalometric, dental, and profile variables, and chief complaint for protrusion | 136/112 (an additional 68 samples as a validation set) | ANN | To predict the surgery/non-surgery decision and extraction/non-extraction for surgical treatment. | Accuracy for all datasets: surgery/non-surgery diagnosis 96%; extraction/non-extraction for Class II surgery 97%; for Class III surgery 88%; for surgery overall 91%. |
| Lee et al. (2020) [132] | Lateral cephalograms | 220/40 (an additional 73 samples as a validation set) | CNN (Modified-Alexnet, MobileNet, and Resnet50) | To predict the need for orthognathic surgery. | Average accuracy for all datasets: Modified-Alexnet 96.4%; MobileNet 95.4%; Resnet50 95.6%. |
| Jeong et al. (2020) [133] | Facial photos (front and right) | A total of 822 samples (Group 1: 207/204; Group 2: 205/206) | CNN | To predict the need for orthognathic surgery. | Accuracy: 0.893; precision: 0.912; recall: 0.867; F1 score: 0.889. |
| Shin et al. (2021) [134] | Lateral cephalograms and posteroanterior cephalograms | A total of 840 samples (Group 1: 273/304, an additional 30 samples as a validation set; Group 2: 98/109, an additional 11 samples as a validation set) | CNN | To predict the diagnosis of orthognathic surgery. | Accuracy: 0.954; sensitivity: 0.844; specificity: 0.993. |
| Kim et al. (2021) [135] | Lateral cephalograms | 810/150 | CNN (ResNet-18, 34, 50, 101) | To predict the diagnosis of orthognathic surgery. | Accuracy for the test dataset: ResNet-18/34/50/101: 93.80%/93.60%/91.13%/91.33%. |
| Lee et al. (2022) [136] | Cephalometric measurements, demographic characteristics, dental analysis, and chief complaint | 136/60 | RF, LR | To predict the diagnosis of orthognathic surgery. | Accuracy (RF/LR): 90%/78%; sensitivity (RF/LR): 84%/89%; specificity (RF/LR): 93%/73%. |
| Woo et al. (2023) [137] | Intraoral scan data | –/30 | Three commercially available software packages (Autolign, Outcome Simulator Pro, Ortho Simulation) | To evaluate the accuracy of automated digital setups. | Mean error of the three software packages: linear movement 0.39~1.40 mm; angular movement 3.25~7.80°. |
| Park et al. (2021) [138] | Lateral cephalograms | A total of 284 cases | CNN (U-Net) | To predict the cephalometric changes of Class II patients after using modified C-palatal plates. | Total mean error: 1.79 ± 1.77 mm. |
| Tanikawa et al. (2021) [139] | 3D facial images | A total of 72 cases in the surgery group and 65 cases in the extraction group | Deep learning | To predict facial morphology change after orthodontic or orthognathic surgical treatment. | Average system errors: surgery group 0.94 ± 0.43 mm; orthodontic group 0.69 ± 0.28 mm. Success rates (<1 mm): surgery group 54%; orthodontic group 98%. Success rates (<2 mm): 100% in both groups. |
| Park et al. (2022) [140] | CBCT | 268/44 | cGAN | To predict post-orthodontic facial changes. | Mean prediction error: 1.2 ± 1.01 mm; accuracy within 2 mm: 80.8%. |
| Xu et al. (2022) [141] | A total of 17 clinical features | A total of 196 cases | ANN | To predict patient experience of Invisalign treatment. | Predictive success rate: pain 87.7%; anxiety 93.4%; quality of life 92.4%. |

ANN, artificial neural network; DT, decision tree; RF, random forest; LR, logistic regression; SVM, support vector machine; NB, naive Bayes; KNN, k-nearest neighbors; MLP, multilayer perceptron; XGB, eXtreme Gradient Boosting; RR, ridge regression; LARS, least-angle regression; LASSO, least absolute shrinkage and selection operator regression; CNN, convolutional neural network; CBCT, cone-beam computed tomography; cGAN, conditional generative adversarial network.

2.2.1. Decision Making for Extractions

Currently, there is no absolute standardized formula for extraction diagnosis and patterns, and the decision depends, to some extent, on the orthodontists’ experience [142]. A wrong decision about extraction could cause a series of irreversible problems like an unfavorable profile, improper occlusion and extraction-space closure difficulties. AI can contribute to reducing the likelihood of incorrect tooth extraction protocols.

ANNs are the most utilized method to predict extraction diagnosis and patterns [121,122,123,124,125,126]. Jung et al. built an AI expert system with neural-network machine learning based on 12 cephalometric variables and 6 additional indices, reaching success rates of 93% and 84% in deciding extraction/non-extraction and detailed extraction patterns, respectively. In this study, one-third of the learning dataset was set aside as a validation set to prevent overfitting [122]. Li et al. adopted a multilayer perceptron ANN and obtained similar results, with accuracies of 94% and 84.2% in the determination of extraction diagnosis and patterns. In addition, the proposed algorithm achieved an accuracy of 92.8% in predicting anchorage patterns [123].
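As a schematic of this tabular approach, the sketch below trains a small MLP on synthetic “cephalometric” features to output a binary extraction decision, holding out one-third of the data as a validation split as Jung et al. did. The features, labels, and network size are invented for illustration.

```python
# An MLP (ANN) on tabular inputs with a held-out validation split to
# monitor overfitting. All data here are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                  # 12 "cephalometric variables"
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic extraction label

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=1/3, random_state=0)        # one-third as validation set
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"validation accuracy: {clf.score(X_val, y_val):.2f}")
```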

Different machine-learning algorithms have their own strengths and weaknesses. For example, random forest (RF) and support vector machine (SVM) are often used for classification and regression tasks, while logistic regression (LR) is more suitable for binary classification tasks [143,144,145]. Several studies have used different machine-learning algorithms to determine tooth extraction plans [124,125,126,128]. The results of Shojaei et al. indicated that, compared to some traditional machine-learning algorithms, ANNs demonstrated significant advantages in decision making for extraction and anchorage patterns [126]. Leavitt et al. compared three machine-learning algorithms (RF, LR, and SVM) for predicting extraction patterns, but the overall accuracies were not very satisfactory, with RF achieving the highest accuracy at 54.55% (Table 4) [128]. Although RF can act as an ensemble method to prevent overfitting and performed well in some studies, more research is still needed to further substantiate its effectiveness [124,125]. The abovementioned models used manual measurement values rather than images as the input data. Recently, Ryu et al. employed intraoral photographs and extraction decisions as the input data and utilized four CNN algorithms to build a tooth extraction prediction model. The results indicated that VGG19 had the highest prediction accuracy in both the maxilla (0.922) and mandible (0.898) [78]. In summary, several studies have used AI for extraction decision making. Most of these studies used patients’ extracted measurements as input data; however, the varying number of measurements used across studies results in relatively low comparability between their results. Overall, ANNs have shown the best performance in decision making for extractions, although with changes in the input data type, such as radiographic images, other algorithms like CNNs may perform better.

2.2.2. Decision Making for Orthognathic Surgery

For adult patients with severe dentofacial deformities, combined orthodontic and orthognathic surgical treatment is usually required to reposition the jaws. Currently, there is no absolute criterion for determining surgical cases, especially in borderline situations where the dilemma between camouflage orthodontic treatment and surgical treatment often confuses inexperienced orthodontists [146,147,148].

Lateral cephalograms are the most commonly used method in clinical practice to assess sagittal skeletal deformities. Several studies have used lateral cephalograms as the input data, whether with an ANN or a CNN, and all achieved accuracy rates exceeding 90% [131,132,135]. Shin et al. adopted both lateral and posteroanterior cephalograms as their training data to take both the sagittal and transverse relationships of the jaws into consideration [134]. The proposed CNN model reached an accuracy of 95.4% in predicting the orthognathic surgery diagnosis.

Facial appearance is also a crucial factor when making the surgical/non-surgical decision. Knoops et al. utilized an SVM to predict the surgery/non-surgery decision based on 3D facial images and reached an accuracy of 95.4% [130]. Trained on front and right facial photos, the CNN model proposed by Jeong et al. only showed an accuracy of 89.3% [133]. Choi et al. utilized a variety of factors as the training data, including lateral cephalometric variables, dental variables, profile variables, and the chief complaint for protrusion [131]. The proposed ANN model not only predicted the surgery/non-surgery decision but also anticipated the tooth extraction plan for the surgical cases, obtaining accuracies ranging from 88% to 97%. Nevertheless, it is worth noting that this study did not encompass Class I surgical cases, which may have influenced the generalizability of the model [131]. Using similar types of input data, Lee et al. investigated the abilities of RF and LR to predict the surgery decision for Class III patients, but they only obtained accuracies of 90% (RF) and 78% (LR) [136]. Overall, AI has made some progress in decision making for orthognathic surgery. However, there is still a need to incorporate a more comprehensive range of cases, especially borderline cases, which holds the promise of enhancing AI’s diagnostic capabilities.

2.2.3. Treatment Outcome Prediction

For some cases, especially borderline ones, orthodontists may develop more than one treatment plan. However, deciding on the most suitable plan can be challenging for inexperienced orthodontists. In addition, the treatment outcome for cases involving extraction and interproximal enamel reduction is often irreversible, and suboptimal plans may result in patient dissatisfaction. Predicting treatment outcomes can help orthodontists analyze and treat malocclusions more scientifically, reducing potential risks and complications during and after clinical treatment. Currently, AI can aid in predicting dental, skeletal and facial changes, as well as patients’ experience of clear aligners, thereby guiding treatment planning [137,138,139,140,141].

The orthodontic tooth setup, initially proposed by Kesling, enables the visualization of treatment progress and the final occlusion, but manual tasks like tooth segmentation and repositioning are labor-intensive. With the continuous advancement of digital orthodontics and artificial intelligence, automated virtual setups have been widely applied, especially in the field of clear aligners [137]. Woo et al. compared the accuracy of three automated digital-setup software packages with that of a manual setup across six directions of tooth movement [137]. The results indicated that the automated virtual-setup software packages were effective overall, but further manual adjustment may still be required in clinical practice. It is also important to note that the study only included cases in which no extractions were performed.

In addition to dental changes, several studies have used AI to predict skeletal and facial changes after orthodontic treatment. Park et al. applied a CNN model to predict the cephalometric changes of Class II patients after using modified C-palatal plates, and obtained a total mean error of 1.79 ± 1.77 mm [138]. Tanikawa et al. combined geometric morphometric methods and deep learning to predict 3D facial-morphology change after orthodontic (with four premolars extracted) or orthognathic surgical treatment [139]. The proposed system showed an average error of 0.94 ± 0.43 mm and 0.69 ± 0.28 mm in the surgery and orthodontic groups, respectively. In another study, a conditional generative adversarial network (cGAN), a type of deep learning algorithm, was used to predict 3D facial changes after orthodontic treatment based on patients’ gender, age and incisor movement [140]. Thanks to the conditions applied to the generator and discriminator, a cGAN can generate high-quality image samples and excels at image-to-image translation tasks [149,150]. As a result, 3D facial images and color distance maps were generated, and the distances of six perioral landmarks between the real and predicted models were calculated, with the cGAN achieving a mean prediction error of 1.2 ± 1.01 mm and an accuracy (within 2 mm) of 80.8% [140].
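The cGAN idea can be condensed to a few lines: both the generator and the discriminator receive the conditioning vector (e.g., age, sex, incisor movement), so generated samples are steered by those conditions. The dimensions and layer sizes below are illustrative assumptions, not those of Park et al.’s model.

```python
# A condensed cGAN sketch: the condition vector is concatenated to the
# inputs of both the generator and the discriminator.
import torch
import torch.nn as nn

COND, NOISE, OUT = 4, 32, 128  # condition size, latent size, output size

G = nn.Sequential(nn.Linear(NOISE + COND, 64), nn.ReLU(), nn.Linear(64, OUT))
D = nn.Sequential(nn.Linear(OUT + COND, 64), nn.ReLU(), nn.Linear(64, 1))

def generate(cond: torch.Tensor) -> torch.Tensor:
    z = torch.randn(cond.shape[0], NOISE)      # random latent code
    return G(torch.cat([z, cond], dim=1))      # condition the generator

def discriminate(sample: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
    return D(torch.cat([sample, cond], dim=1)) # condition the critic too

cond = torch.tensor([[1.0, 25.0, -2.5, 0.0]])  # e.g., sex, age, incisor movement
fake = generate(cond)
print(discriminate(fake, cond).shape)          # torch.Size([1, 1]) real/fake score
```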

The selection of the treatment appliance is a crucial aspect of orthodontic treatment planning. Particularly for patients using clear aligners, a poor wearing experience can impact patients’ compliance and consequently affect the treatment outcomes. Xu et al. utilized 17 clinical features as the training data and employed an ANN model to predict patients’ experiences of Invisalign treatment [141]. The proposed model achieved high prediction accuracies of 87.7% for pain, 93.4% for anxiety, and 92.4% for quality of life. To the best of our knowledge, this was the first and currently the only study that utilized AI to predict patient experience of orthodontic treatment, laying the foundation for further research in this area. However, a limitation of this study is that it only included patients’ clinical features as input data and did not incorporate other potential influencing factors such as gender and education level, which could potentially affect the predictive ability of the model [151].

2.3. Clinical Practice

During orthodontic treatment, orthodontists often come across various challenges involving clinical expertise, patient communication and patient management. The application of AI can help facilitate efficient and effective orthodontic treatment with regard to practice guidance, remote care and clinical documentation [152,153,154,155,156,157,158,159,160].

2.3.1. Practice Guidance

A deep overbite is one of the most common and challenging malocclusions to correct in orthodontic treatment [161]. El-Dawlatly et al. proposed a computer-based decision support system for deep-overbite treatment guidance, trained on actual treatment changes [152]. Instead of answering binary questions, the model can provide a detailed treatment protocol for deep-overbite correction covering seven aspects, such as the intrusion or proclination of incisors and leveling of the curve of Spee. With a high success rate of 94.40%, this model is expected to aid orthodontists in correcting deep overbites in the future.

The 3D U-Net, a deep learning algorithm, is widely used in 3D image segmentation. As a modified version of the 3D U-Net, the 3D U-Net with squeeze-and-excitation modules (3D-UnetSE) has achieved better performance in capturing high-level features [153,162]. The stability of palatal mini-implants is associated with both hard and soft tissues [163,164]. Tao et al. successfully used 3D-UnetSE to accomplish the automated segmentation and thickness measurement of palatal bone and soft tissue from CBCT. Furthermore, ideal sites for palatal miniscrews were predicted based on the bone and soft tissue thickness [153].

Monitoring the tooth root position throughout the orthodontic treatment is essential to better prevent adverse outcomes and assess treatment effectiveness. However, conventional methods, whether CBCT or panoramic films, increase radiation exposure. Hu et al. and Lee et al. used deep learning to accurately segment teeth in CBCT scans and merged the segmented teeth with intraoral scanned dental crowns to construct integrated tooth models [154,155]. In this way, orthodontists can determine the position of tooth roots solely based on intraoral scans. These two studies showed the promising performance of tooth position prediction; with continuous improvement in the accuracy of tooth segmentation, integrated models are expected to be widely applied in clinical practice.
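The crown-root fusion in these studies hinges on rigidly registering the crown surface from the intraoral scan to the same crown segmented from CBCT. As a toy illustration of that registration step, the sketch below aligns corresponding 3D points with the Kabsch algorithm; real pipelines operate on dense meshes with correspondence estimation (e.g., ICP), which is omitted here, and the point sets are synthetic.

```python
# Rigid alignment of corresponding 3D point sets (Kabsch algorithm):
# find rotation R and translation t minimizing ||R @ src + t - dst||.
import numpy as np

def kabsch_align(src: np.ndarray, dst: np.ndarray):
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)  # center both sets
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # avoid reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

crown_ios = np.random.default_rng(1).normal(size=(5, 3))  # intraoral scan points
theta = np.deg2rad(10)                                    # synthetic 10° rotation
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
crown_cbct = crown_ios @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = kabsch_align(crown_ios, crown_cbct)
print(np.allclose(crown_ios @ R.T + t, crown_cbct))  # True: points aligned
```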

2.3.2. Remote Care

Remote monitoring allows orthodontists to remotely track treatment progress and provide timely feedback based on photos or oral scans of the dentition, avoiding unnecessary visits, and bringing flexibility and convenience to patients [156,157,158].

AI has enhanced the applications and effectiveness of remote monitoring software [156]. Dental Monitoring (DM, Paris, France), one of the leading AI-driven remote monitoring software products, has gained widespread popularity and research attention [157,165]. DM is user-friendly, allowing patients to scan their dentition using a smartphone. Studies have indicated that DM can not only reduce chairside time but also improve patients’ compliance [156,157]. DM can be applied to both conventional fixed appliances and clear aligners, automatically detecting numerous events, such as ill-fitting clear aligners, loss of attachments, archwire passivity, bracket breakage and relapse [158,166,167]. In addition, DM’s detections demonstrate a high level of precision. Homsi et al. reported that the remotely reconstructed digital models generated by DM were as accurate as intraoral scans [168]. Moylan et al. reached similar conclusions by measuring intercanine and intermolar widths on DM-generated models and plaster models [169]. However, a recent prospective study found that there are still problems with the consistency of DM instructions, especially in determining which teeth have tracking issues. At the same time, the rationale for DM’s instructions for clear aligner replacement is difficult to explain [170]. Therefore, orthodontists may adopt a cautious approach towards the widespread use of AI-driven remote monitoring tools.

2.3.3. Clinical Documentation

Clinical photos and radiographs are routinely taken for diagnosis and treatment monitoring. AI can aid in classifying and categorizing these images, thereby enhancing the efficiency of clinical practice. Ryu et al. applied CNNs to automatically classify facial and intraoral photographs into four facial and five intraoral categories; the CNN model obtained an overall valid prediction rate of 98% [159]. Li et al. employed a Deep hidden IDentity (DeepID)-based deep learning model and expanded the categories of orthodontic images to a total of 14, comprising 6 facial images, 6 intraoral images, 1 panoramic film and 1 lateral cephalogram. The proposed model used deep convolutional networks for feature extraction and joint Bayesian modeling for verification. As a result, the DeepID model not only reached a high accuracy of 99.4% but also significantly improved computational speed [160].
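
As a rough illustration of this type of image-classification setup, the following PyTorch sketch repurposes a generic pretrained CNN to score 14 image categories (assuming torchvision ≥ 0.13); it is a stand-in for the general approach, not the architecture of either study.

```python
import torch
import torch.nn as nn
from torchvision import models

# A CNN pretrained on natural images, with its final layer replaced to score
# the 14 orthodontic photo/radiograph categories described above.
NUM_CLASSES = 14
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Inference on one (dummy) preprocessed image tensor; in practice the model
# would first be fine-tuned on labeled clinical photographs.
image = torch.randn(1, 3, 224, 224)
logits = model(image)
predicted_category = logits.argmax(dim=1).item()
print(predicted_category)
```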

3. Limitations and Future Perspectives

The continuous evolution of AI has brought significant advancements to its application in orthodontics. In this review, we comprehensively introduced recent advances in the application of AI in orthodontics, covering diagnosis, treatment planning, and clinical practice. These studies suggest that the application of AI in orthodontics has made promising progress and has great potential for wider clinical application in the near future. However, several limitations may still preclude the envisioned application of AI in orthodontics.

Firstly, the scarcity and low generalizability of training data render current research less reliable. Taking the studies included in this review as an example, some AI models used to assist decision making did not include a diverse range of representative case types in their training data; although they achieved promising accuracy, their predictions for rare deformity types are questionable. Obtaining a large amount of high-quality data remains challenging, especially data that require manual annotation by experienced experts. A series of measures, such as transfer learning, data augmentation, semi-supervised learning and few-shot learning, are expected to alleviate data insufficiency, although their effectiveness remains limited [171,172]. Transfer learning refers to applying pre-trained models to a different but related domain, thereby reducing the dependence on extensive training data; however, this approach may exhibit low generalization capability in the new domain [173]. Data augmentation can increase sample size by altering the characteristics of existing data or generating synthetic images, but it cannot add true biologic variability [171,174]. Semi-supervised learning suits settings where annotated data are limited, but it still requires high-quality annotations and a sufficient pool of unannotated data [171]. Few-shot learning, in turn, lacks specialized datasets and standard evaluation frameworks, which may hinder its further application [172]. Moreover, owing to ethical concerns and data protection issues, data sharing remains challenging, and AI models trained on poorly generalizable data will be biased. Federated learning is a distributed and decentralized machine-learning approach that allows cross-site collaboration without sharing data directly [175,176]. Blockchain, as a transparent, secure and immutable distributed shared database, provides a secure platform for data sharing and storage [177,178]. Combining blockchain technology with federated learning is expected to facilitate multisite data sharing without compromising data privacy, thereby providing larger and more diverse datasets [179,180]; a minimal sketch of the federated-averaging idea is given below.
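
The snippet below sketches federated averaging (FedAvg) in its simplest form, with three hypothetical clinics; the model, the equal weighting of sites, and the omitted local training step are placeholder assumptions for illustration.

```python
import copy
import torch
import torch.nn as nn

def federated_average(site_models):
    """FedAvg in its simplest form: each site trains on its own private
    data, and only the model weights (never the data) are pooled and
    averaged by a coordinator. Equal weighting assumes the sites hold
    similar amounts of data."""
    global_state = copy.deepcopy(site_models[0].state_dict())
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in site_models]
        ).mean(dim=0)
    return global_state

def make_model():
    return nn.Linear(10, 2)  # placeholder for a diagnostic model

# Three hypothetical clinics, each with a locally trained copy of the model.
site_models = [make_model() for _ in range(3)]
# ... each site would run local training here on its own patient data ...
shared_model = make_model()
shared_model.load_state_dict(federated_average(site_models))
```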

Secondly, while a considerable amount of literature has explored the application of AI in orthodontics, directly comparing studies remains challenging owing to variations in study designs, dataset sizes, and evaluation metrics. To address this issue, Norgeot et al. proposed the minimum information about clinical artificial intelligence modeling (MI-CLAIM) checklist to bring comparable degrees of transparency and effectiveness to clinical AI modeling [181]. The MI-CLAIM checklist not only facilitates the assessment of the clinical impact of an AI study but also enables researchers to replicate the technical design process rapidly.

In addition, despite the impressive performance of AI algorithms, particularly deep learning, their lack of interpretability has raised concerns. The inherent black-box nature of AI makes it challenging for human experts to interpret AI predictions and to detect cases in which a model reaches a decision through erroneous reasoning [182]. Explainable AI (XAI) techniques aim to demystify the underlying logic and make AI algorithms more transparent, explainable and trustworthy [182,183]. Many XAI approaches, such as gradient-weighted class activation mapping (Grad-CAM) and DeConvNet, have been proposed. These methods can reveal the features that contribute to the decision-making process; for example, Grad-CAM and DeConvNet can generate heatmaps that highlight the contributing regions of the input images [184,185,186]. Hopefully, such methods will be more extensively applied to orthodontics-related AI models in the future [182].
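
As an illustration of how such heatmaps arise, the following is a minimal, self-contained Grad-CAM sketch in PyTorch; the backbone, the input tensor and the choice of layer are placeholders rather than any of the cited models.

```python
import torch
from torchvision import models

# Minimal Grad-CAM sketch: activations and gradients of the last convolutional
# block are captured with hooks, then combined into a coarse relevance map.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
captured = {}

def save_activation(module, inputs, output):
    captured["act"] = output
    # Also capture the gradient flowing back into this activation.
    output.register_hook(lambda grad: captured.update(grad=grad))

model.layer4.register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)  # stands in for a preprocessed radiograph
score = model(image)[0].max()        # score of the predicted class
score.backward()

weights = captured["grad"].mean(dim=(2, 3), keepdim=True)  # channel importance
cam = torch.relu((weights * captured["act"]).sum(dim=1))   # weighted sum of maps
cam = cam / cam.max()                # normalized heatmap, here of shape (1, 7, 7)
print(cam.shape)
```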

Last but not least, overfitting is a common issue across the whole field of AI: a model performs exceedingly well on its training data but unsatisfactorily on unseen testing data [187]. Factors such as data insufficiency, low data heterogeneity and excessive variables can all lead to overfitting [188]. Measures such as improving data samples, data augmentation, regularization, cross-validation and dedicated algorithms have all been reported to prevent overfitting [171,189,190,191]. However, not all the studies covered in this review took such measures to avoid overfitting.
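
Two of these countermeasures, L2 regularization and validation-based early stopping, can be sketched in a few lines; the data and model below are synthetic placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Synthetic stand-ins for a small clinical dataset split into train/validation.
X_train, y_train = torch.randn(80, 10), torch.randint(0, 2, (80,))
X_val, y_val = torch.randn(20, 10), torch.randint(0, 2, (20,))

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
# weight_decay adds an L2 penalty on the weights (regularization).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(200):
    optimizer.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    optimizer.step()
    with torch.no_grad():
        val_loss = loss_fn(model(X_val), y_val).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # validation loss stopped improving
            print(f"early stop at epoch {epoch}")
            break
```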

Although AI has been extensively explored in orthodontic treatment, several areas remain open to further investigation, for example, the automated assessment of orthodontic treatment need using indices such as the index of orthodontic treatment need (IOTN) and the index of orthognathic functional treatment need (IOFTN) [192,193]. Currently, AI excels mostly in orthodontic diagnosis, yet it offers limited guidance during the treatment process. Orthodontists may encounter various challenges throughout orthodontic treatment, including correcting deep overbites and avoiding bone dehiscence or fenestration; using AI to help prevent or address these issues is another potential area for future development. As clinical data continue to grow and AI computing power improves, there is no doubt that AI will significantly advance the field of orthodontics.

4. Conclusions

AI has shown manifold applications in orthodontics, contributing to diagnosis, treatment planning and clinical practice. At present, AI cannot fully replace human experts, but it can serve as a quality-assuring component in clinical routine. With improvements in data availability, computing power and analytic methods, it is believed that AI will better assist clinical orthodontic care.

Author Contributions

Conceptualization, J.L. and Z.S.; writing—original draft preparation, J.L.; writing—review and editing, Z.S. and C.Z.; supervision, Z.S. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This research received no external funding.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1.Kulikowski C.A. An Opening Chapter of the First Generation of Artificial Intelligence in Medicine: The First Rutgers AIM Workshop, June 1975. Yearb. Med. Inform. 2015;10:227–233. doi: 10.15265/IY-2015-016. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Moravčík M., Schmid M., Burch N., Lisý V., Morrill D., Bard N., Davis T., Waugh K., Johanson M., Bowling M. DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science. 2017;356:508–513. doi: 10.1126/science.aam6960. (In English) [DOI] [PubMed] [Google Scholar]
  • 3.Wang X.-L., Liu J., Li Z.-Q., Luan Z.-L. Application of physical examination data on health analysis and intelligent diagnosis. BioMed Res. Int. 2021;2021:8828677. doi: 10.1155/2021/8828677. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Sharif M.S., Abbod M., Amira A., Zaidi H. Artificial Neural Network-Based System for PET Volume Segmentation. Int. J. Biomed. Imaging. 2010;2010:105610. doi: 10.1155/2010/105610. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Wang D., Yang J.S. Analysis of Sports Injury Estimation Model Based on Mutation Fuzzy Neural Network. Comput. Intell. Neurosci. 2021;2021:3056428. doi: 10.1155/2021/3056428. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar] [Retracted]
  • 6.Ding H., Wu J., Zhao W., Matinlinna J.P., Burrow M.F., Tsoi J.K. Artificial intelligence in dentistry—A review. Front. Dent. Med. 2023;4:1085251. doi: 10.3389/fdmed.2023.1085251. [DOI] [Google Scholar]
  • 7.Chiu Y.C., Chen H.H., Gorthi A., Mostavi M., Zheng S., Huang Y., Chen Y. Deep learning of pharmacogenomics resources: Moving towards precision oncology. Brief. Bioinform. 2020;21:2066–2083. doi: 10.1093/bib/bbz144. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Taye M.M. Understanding of Machine Learning with Deep Learning: Architectures, Workflow, Applications and Future Directions. Computers. 2023;12:91. doi: 10.3390/computers12050091. [DOI] [Google Scholar]
  • 9.Mohammad-Rahimi H., Nadimi M., Rohban M.H., Shamsoddin E., Lee V.Y., Motamedian S.R. Machine learning and orthodontics, current trends and the future opportunities: A scoping review. Am. J. Orthod. Dentofac. Orthop. 2021;160:170–192.e174. doi: 10.1016/j.ajodo.2021.02.013. (In English) [DOI] [PubMed] [Google Scholar]
  • 10.Krizhevsky A., Sutskever I., Hinton G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM. 2017;60:84–90. doi: 10.1145/3065386. [DOI] [Google Scholar]
  • 11.Li Z., Liu F., Yang W., Peng S., Zhou J. A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects. IEEE Trans. Neural Netw. Learn. Syst. 2022;33:6999–7019. doi: 10.1109/TNNLS.2021.3084827. (In English) [DOI] [PubMed] [Google Scholar]
  • 12.Tomè D., Monti F., Baroffio L., Bondi L., Tagliasacchi M., Tubaro S. Deep convolutional neural networks for pedestrian detection. Signal Process. Image Commun. 2016;47:482–489. doi: 10.1016/j.image.2016.05.007. [DOI] [Google Scholar]
  • 13.Zou J., Meng M., Law C.S., Rao Y., Zhou X. Common dental diseases in children and malocclusion. Int. J. Oral Sci. 2018;10:7. doi: 10.1038/s41368-018-0012-3. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Borzabadi-Farahani A., Borzabadi-Farahani A., Eslamipour F. Malocclusion and occlusal traits in an urban Iranian population. An epidemiological study of 11- to 14-year-old children. Eur. J. Orthod. 2009;31:477–484. doi: 10.1093/ejo/cjp031. (In English) [DOI] [PubMed] [Google Scholar]
  • 15.Peter E., Monisha J., Edward Benson P., Ani George S. Does orthodontic treatment improve the Oral Health-Related Quality of Life when assessed using the Malocclusion Impact Questionnaire-a 3-year prospective longitudinal cohort study. Eur. J. Orthod. 2023 doi: 10.1093/ejo/cjad040. (In English) [DOI] [PubMed] [Google Scholar]
  • 16.Ribeiro L.G., Antunes L.S., Küchler E.C., Baratto-Filho F., Kirschneck C., Guimarães L.S., Antunes L.A.A. Impact of malocclusion treatments on Oral Health-Related Quality of Life: An overview of systematic reviews. Clin. Oral Investig. 2023;27:907–932. doi: 10.1007/s00784-022-04837-8. (In English) [DOI] [PubMed] [Google Scholar]
  • 17.Silva T.P.D., Lemos Y.R., Filho M.V., Carneiro D.P.A., Vedovello S.A.S. Psychosocial impact of malocclusion in the school performance. A Hierarchical Analysis. Community Dent. Health. 2022;39:211–216. doi: 10.1922/CDH_00061Silva006. (In English) [DOI] [PubMed] [Google Scholar]
  • 18.Cenzato N., Nobili A., Maspero C. Prevalence of Dental Malocclusions in Different Geographical Areas: Scoping Review. Dent. J. 2021;9:117. doi: 10.3390/dj9100117. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Borzabadi-Farahani A., Borzabadi-Farahani A., Eslamipour F. The relationship between the ICON index and the dental and aesthetic components of the IOTN index. World J. Orthod. 2010;11:43–48. (In English) [PubMed] [Google Scholar]
  • 20.Monill-González A., Rovira-Calatayud L., d’Oliveira N.G., Ustrell-Torrent J.M. Artificial intelligence in orthodontics: Where are we now? A scoping review. Orthod. Craniofacial Res. 2021;24:6–15. doi: 10.1111/ocr.12517. [DOI] [PubMed] [Google Scholar]
  • 21.Albalawi F., Alamoud K.A. Trends and Application of Artificial Intelligence Technology in Orthodontic Diagnosis and Treatment Planning—A Review. Appl. Sci. 2022;12:11864. doi: 10.3390/app122211864. [DOI] [Google Scholar]
  • 22.Proffit W.R., Fields H.W., Larson B., Sarver D.M. Contemporary Orthodontics-e-Book. Elsevier Health Sciences; Amsterdam, The Netherlands: 2018. [Google Scholar]
  • 23.Yue W., Yin D., Li C., Wang G., Xu T. Automated 2-D cephalometric analysis on X-ray images by a model-based approach. IEEE Trans. Biomed. Eng. 2006;53:1615–1623. doi: 10.1109/TBME.2006.876638. [DOI] [PubMed] [Google Scholar]
  • 24.Kim J., Kim I., Kim Y.J., Kim M., Cho J.H., Hong M., Kang K.H., Lim S.H., Kim S.J., Kim Y.H. Accuracy of automated identification of lateral cephalometric landmarks using cascade convolutional neural networks on lateral cephalograms from nationwide multi-centres. Orthod. Craniofacial Res. 2021;24:59–67. doi: 10.1111/ocr.12493. [DOI] [PubMed] [Google Scholar]
  • 25.Baumrind S., Frantz R.C. The reliability of head film measurements: 1. Landmark identification. Am. J. Orthod. 1971;60:111–127. doi: 10.1016/0002-9416(71)90028-5. [DOI] [PubMed] [Google Scholar]
  • 26.Durão A.P.R., Morosolli A., Pittayapat P., Bolstad N., Ferreira A.P., Jacobs R. Cephalometric landmark variability among orthodontists and dentomaxillofacial radiologists: A comparative study. Imaging Sci. Dent. 2015;45:213–220. doi: 10.5624/isd.2015.45.4.213. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Cohen A.M., Ip H.H., Linney A.D. A preliminary study of computer recognition and identification of skeletal landmarks as a new method of cephalometric analysis. Br. J. Orthod. 1984;11:143–154. doi: 10.1179/bjo.11.3.143. (In English) [DOI] [PubMed] [Google Scholar]
  • 28.Payer C., Štern D., Bischof H., Urschler M. Integrating spatial configuration into heatmap regression based CNNs for landmark localization. Med. Image Anal. 2019;54:207–219. doi: 10.1016/j.media.2019.03.007. (In English) [DOI] [PubMed] [Google Scholar]
  • 29.Nishimoto S., Sotsuka Y., Kawai K., Ishise H., Kakibuchi M. Personal Computer-Based Cephalometric Landmark Detection With Deep Learning, Using Cephalograms on the Internet. J. Craniofacial Surg. 2019;30:91–95. doi: 10.1097/SCS.0000000000004901. (In English) [DOI] [PubMed] [Google Scholar]
  • 30.Zhong Z., Li J., Zhang Z., Jiao Z., Gao X. An attention-guided deep regression model for landmark detection in cephalograms; Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference; Shenzhen, China. 13–17 October 2019; Proceedings, Part VI 22. [Google Scholar]
  • 31.Park J.H., Hwang H.W., Moon J.H., Yu Y., Kim H., Her S.B., Srinivasan G., Aljanabi M.N.A., Donatelli R.E., Lee S.J. Automated identification of cephalometric landmarks: Part 1-Comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthod. 2019;89:903–909. doi: 10.2319/022019-127.1. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Moon J.H., Hwang H.W., Yu Y., Kim M.G., Donatelli R.E., Lee S.J. How much deep learning is enough for automatic identification to be reliable? Angle Orthod. 2020;90:823–830. doi: 10.2319/021920-116.1. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Hwang H.W., Park J.H., Moon J.H., Yu Y., Kim H., Her S.B., Srinivasan G., Aljanabi M.N.A., Donatelli R.E., Lee S.J. Automated identification of cephalometric landmarks: Part 2-Might it be better than human? Angle Orthod. 2020;90:69–76. doi: 10.2319/022019-129.1. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Oh K., Oh I.S., Le V.N.T., Lee D.W. Deep Anatomical Context Feature Learning for Cephalometric Landmark Detection. IEEE J. Biomed. Health Inform. 2021;25:806–817. doi: 10.1109/JBHI.2020.3002582. (In English) [DOI] [PubMed] [Google Scholar]
  • 35.Kim H., Shim E., Park J., Kim Y.J., Lee U., Kim Y. Web-based fully automated cephalometric analysis by deep learning. Comput. Methods Programs Biomed. 2020;194:105513. doi: 10.1016/j.cmpb.2020.105513. (In English) [DOI] [PubMed] [Google Scholar]
  • 36.Kunz F., Stellzig-Eisenhauer A., Zeman F., Boldt J. Artificial intelligence in orthodontics: Evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J. Orofac. Orthop. 2020;81:52–68. doi: 10.1007/s00056-019-00203-8. (In English) [DOI] [PubMed] [Google Scholar]
  • 37.Alqahtani H. Evaluation of an online website-based platform for cephalometric analysis. J. Stomatol. Oral Maxillofac. Surg. 2020;121:53–57. doi: 10.1016/j.jormas.2019.04.017. (In English) [DOI] [PubMed] [Google Scholar]
  • 38.Lee J.H., Yu H.J., Kim M.J., Kim J.W., Choi J. Automated cephalometric landmark detection with confidence regions using Bayesian convolutional neural networks. BMC Oral Health. 2020;20:270. doi: 10.1186/s12903-020-01256-7. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Yu H.J., Cho S.R., Kim M.J., Kim W.H., Kim J.W., Choi J. Automated Skeletal Classification with Lateral Cephalometry Based on Artificial Intelligence. J. Dent. Res. 2020;99:249–256. doi: 10.1177/0022034520901715. (In English) [DOI] [PubMed] [Google Scholar]
  • 40.Li W., Lu Y., Zheng K., Liao H., Lin C., Luo J., Cheng C.-T., Xiao J., Lu L., Kuo C.-F. Structured landmark detection via topology-adapting deep graph learning; Proceedings of the Computer Vision–ECCV 2020: 16th European Conference; Glasgow, UK. 23–28 August 2020; Proceedings, Part IX 16. [Google Scholar]
  • 41.Tanikawa C., Lee C., Lim J., Oka A., Yamashiro T. Clinical applicability of automated cephalometric landmark identification: Part I-Patient-related identification errors. Orthod. Craniofacial Res. 2021;24((Suppl. S2)):43–52. doi: 10.1111/ocr.12501. (In English) [DOI] [PubMed] [Google Scholar]
  • 42.Zeng M., Yan Z., Liu S., Zhou Y., Qiu L. Cascaded convolutional networks for automatic cephalometric landmark detection. Med. Image Anal. 2021;68:101904. doi: 10.1016/j.media.2020.101904. (In English) [DOI] [PubMed] [Google Scholar]
  • 43.Hwang H.W., Moon J.H., Kim M.G., Donatelli R.E., Lee S.J. Evaluation of automated cephalometric analysis based on the latest deep learning method. Angle Orthod. 2021;91:329–335. doi: 10.2319/021220-100.1. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Bulatova G., Kusnoto B., Grace V., Tsay T.P., Avenetti D.M., Sanchez F.J.C. Assessment of automatic cephalometric landmark identification using artificial intelligence. Orthod. Craniofacial Res. 2021;24((Suppl. S2)):37–42. doi: 10.1111/ocr.12542. (In English) [DOI] [PubMed] [Google Scholar]
  • 45.Jeon S., Lee K.C. Comparison of cephalometric measurements between conventional and automatic cephalometric analysis using convolutional neural network. Prog. Orthod. 2021;22:14. doi: 10.1186/s40510-021-00358-4. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Hong M., Kim I., Cho J.H., Kang K.H., Kim M., Kim S.J., Kim Y.J., Sung S.J., Kim Y.H., Lim S.H., et al. Accuracy of artificial intelligence-assisted landmark identification in serial lateral cephalograms of Class III patients who underwent orthodontic treatment and two-jaw orthognathic surgery. Korean J. Orthod. 2022;52:287–297. doi: 10.4041/kjod21.248. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Le V.N.T., Kang J., Oh I.S., Kim J.G., Yang Y.M., Lee D.W. Effectiveness of Human-Artificial Intelligence Collaboration in Cephalometric Landmark Detection. J. Pers. Med. 2022;12:387. doi: 10.3390/jpm12030387. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Mahto R.K., Kafle D., Giri A., Luintel S., Karki A. Evaluation of fully automated cephalometric measurements obtained from web-based artificial intelligence driven platform. BMC Oral Health. 2022;22:132. doi: 10.1186/s12903-022-02170-w. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Uğurlu M. Performance of a Convolutional Neural Network- Based Artificial Intelligence Algorithm for Automatic Cephalometric Landmark Detection. Turk. J. Orthod. 2022;35:94–100. doi: 10.5152/TurkJOrthod.2022.22026. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Yao J., Zeng W., He T., Zhou S., Zhang Y., Guo J., Tang W. Automatic localization of cephalometric landmarks based on convolutional neural network. Am. J. Orthod. Dentofac. Orthop. 2022;161:e250–e259. doi: 10.1016/j.ajodo.2021.09.012. (In English) [DOI] [PubMed] [Google Scholar]
  • 51.Lu G., Zhang Y., Kong Y., Zhang C., Coatrieux J.L., Shu H. Landmark Localization for Cephalometric Analysis Using Multiscale Image Patch-Based Graph Convolutional Networks. IEEE J. Biomed. Health Inform. 2022;26:3015–3024. doi: 10.1109/JBHI.2022.3157722. (In English) [DOI] [PubMed] [Google Scholar]
  • 52.Tsolakis I.A., Tsolakis A.I., Elshebiny T., Matthaios S., Palomo J.M. Comparing a Fully Automated Cephalometric Tracing Method to a Manual Tracing Method for Orthodontic Diagnosis. J. Clin. Med. 2022;11:6854. doi: 10.3390/jcm11226854. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Duran G.S., Gökmen Ş., Topsakal K.G., Görgülü S. Evaluation of the accuracy of fully automatic cephalometric analysis software with artificial intelligence algorithm. Orthod. Craniofacial Res. 2023;26:481–490. doi: 10.1111/ocr.12633. (In English) [DOI] [PubMed] [Google Scholar]
  • 54.Ye H., Cheng Z., Ungvijanpunya N., Chen W., Cao L., Gou Y. Is automatic cephalometric software using artificial intelligence better than orthodontist experts in landmark identification? BMC Oral Health. 2023;23:467. doi: 10.1186/s12903-023-03188-4. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Ueda A., Tussie C., Kim S., Kuwajima Y., Matsumoto S., Kim G., Satoh K., Nagai S. Classification of Maxillofacial Morphology by Artificial Intelligence Using Cephalometric Analysis Measurements. Diagnostics. 2023;13:2134. doi: 10.3390/diagnostics13132134. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Bao H., Zhang K., Yu C., Li H., Cao D., Shu H., Liu L., Yan B. Evaluating the accuracy of automated cephalometric analysis based on artificial intelligence. BMC Oral Health. 2023;23:191. doi: 10.1186/s12903-023-02881-8. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Kim M.J., Liu Y., Oh S.H., Ahn H.W., Kim S.H., Nelson G. Evaluation of a multi-stage convolutional neural network-based fully automated landmark identification system using cone-beam computed tomography-synthesized posteroanterior cephalometric images. Korean J. Orthod. 2021;51:77–85. doi: 10.4041/kjod.2021.51.2.77. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Takeda S., Mine Y., Yoshimi Y., Ito S., Tanimoto K., Murayama T. Landmark annotation and mandibular lateral deviation analysis of posteroanterior cephalograms using a convolutional neural network. J. Dent. Sci. 2021;16:957–963. doi: 10.1016/j.jds.2020.10.012. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Lee S.M., Kim H.P., Jeon K., Lee S.H., Seo J.K. Automatic 3D cephalometric annotation system using shadowed 2D image-based machine learning. Phys. Med. Biol. 2019;64:055002. doi: 10.1088/1361-6560/ab00c9. (In English) [DOI] [PubMed] [Google Scholar]
  • 60.Torosdagli N., Liberton D.K., Verma P., Sincan M., Lee J.S., Bagci U. Deep Geodesic Learning for Segmentation and Anatomical Landmarking. IEEE Trans. Med. Imaging. 2019;38:919–931. doi: 10.1109/TMI.2018.2875814. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Yun H.S., Jang T.J., Lee S.M., Lee S.H., Seo J.K. Learning-based local-to-global landmark annotation for automatic 3D cephalometry. Phys. Med. Biol. 2020;65:085018. doi: 10.1088/1361-6560/ab7a71. (In English) [DOI] [PubMed] [Google Scholar]
  • 62.Kang S.H., Jeon K., Kang S.H., Lee S.H. 3D cephalometric landmark detection by multiple stage deep reinforcement learning. Sci. Rep. 2021;11:17509. doi: 10.1038/s41598-021-97116-7. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Ghowsi A., Hatcher D., Suh H., Wile D., Castro W., Krueger J., Park J., Oh H. Automated landmark identification on cone-beam computed tomography: Accuracy and reliability. Angle Orthod. 2022;92:642–654. doi: 10.2319/122121-928.1. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Dot G., Schouman T., Chang S., Rafflenbeul F., Kerbrat A., Rouch P., Gajny L. Automatic 3-Dimensional Cephalometric Landmarking via Deep Learning. J. Dent. Res. 2022;101:1380–1387. doi: 10.1177/00220345221112333. (In English) [DOI] [PubMed] [Google Scholar]
  • 65.Blum F.M.S., Möhlhenrich S.C., Raith S., Pankert T., Peters F., Wolf M., Hölzle F., Modabber A. Evaluation of an artificial intelligence-based algorithm for automated localization of craniofacial landmarks. Clin. Oral Investig. 2023;27:2255–2265. doi: 10.1007/s00784-023-04978-4. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Yang J., Ling X., Lu Y., Wei M., Ding G. Cephalometric image analysis and measurement for orthognathic surgery. Med. Biol. Eng. Comput. 2001;39:279–284. doi: 10.1007/BF02345280. (In English) [DOI] [PubMed] [Google Scholar]
  • 67.Schwendicke F., Chaurasia A., Arsiwala L., Lee J.H., Elhennawy K., Jost-Brinkmann P.G., Demarco F., Krois J. Deep learning for cephalometric landmark detection: Systematic review and meta-analysis. Clin. Oral Investig. 2021;25:4299–4309. doi: 10.1007/s00784-021-03990-w. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Arık S., Ibragimov B., Xing L. Fully automated quantitative cephalometry using convolutional neural networks. J. Med. Imaging. 2017;4:014501. doi: 10.1117/1.JMI.4.1.014501. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69.Cao L., He H., Hua F. Deep Learning Algorithms Have High Accuracy for Automated Landmark Detection on 2D Lateral Cephalograms. J. Evid. Based Dent. Pract. 2022;22:101798. doi: 10.1016/j.jebdp.2022.101798. (In English) [DOI] [PubMed] [Google Scholar]
  • 70.Meriç P., Naoumova J. Web-based Fully Automated Cephalometric Analysis: Comparisons between App-aided, Computerized, and Manual Tracings. Turk. J. Orthod. 2020;33:142–149. doi: 10.5152/TurkJOrthod.2020.20062. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Montúfar J., Romero M., Scougall-Vilchis R.J. Automatic 3-dimensional cephalometric landmarking based on active shape models in related projections. Am. J. Orthod. Dentofac. Orthop. 2018;153:449–458. doi: 10.1016/j.ajodo.2017.06.028. (In English) [DOI] [PubMed] [Google Scholar]
  • 72.Zhang J., Liu M., Wang L., Chen S., Yuan P., Li J., Shen S.G., Tang Z., Chen K.C., Xia J.J., et al. Context-guided fully convolutional networks for joint craniomaxillofacial bone segmentation and landmark digitization. Med. Image Anal. 2020;60:101621. doi: 10.1016/j.media.2019.101621. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Dot G., Rafflenbeul F., Arbotto M., Gajny L., Rouch P., Schouman T. Accuracy and reliability of automatic three-dimensional cephalometric landmarking. Int. J. Oral Maxillofac. Surg. 2020;49:1367–1378. doi: 10.1016/j.ijom.2020.02.015. (In English) [DOI] [PubMed] [Google Scholar]
  • 74.Ghesu F.C., Georgescu B., Mansi T., Neumann D., Hornegger J., Comaniciu D. An artificial agent for anatomical landmark detection in medical images; Proceedings of the Medical Image Computing and Computer-Assisted Intervention-MICCAI 2016: 19th International Conference; Athens, Greece. 17–21 October 2016; Proceedings, Part III 19. [Google Scholar]
  • 75.Ghesu F.C., Georgescu B., Grbic S., Maier A., Hornegger J., Comaniciu D. Towards intelligent robust detection of anatomical structures in incomplete volumetric data. Med. Image Anal. 2018;48:203–213. doi: 10.1016/j.media.2018.06.007. (In English) [DOI] [PubMed] [Google Scholar]
  • 76.Chen S., Wu S. Deep Q-networks with web-based survey data for simulating lung cancer intervention prediction and assessment in the elderly: A quantitative study. BMC Med. Inform. Decis. Mak. 2022;22:1. doi: 10.1186/s12911-021-01695-4. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Talaat S., Kaboudan A., Talaat W., Kusnoto B., Sanchez F., Elnagar M.H., Bourauel C., Ghoneima A. The validity of an artificial intelligence application for assessment of orthodontic treatment need from clinical images. Semin. Orthod. 2021;27:164–171. doi: 10.1053/j.sodo.2021.05.012. [DOI] [Google Scholar]
  • 78.Ryu J., Kim Y.H., Kim T.W., Jung S.K. Evaluation of artificial intelligence model for crowding categorization and extraction diagnosis using intraoral photographs. Sci. Rep. 2023;13:5177. doi: 10.1038/s41598-023-32514-7. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79.Im J., Kim J.Y., Yu H.S., Lee K.J., Choi S.H., Kim J.H., Ahn H.K., Cha J.Y. Accuracy and efficiency of automatic tooth segmentation in digital dental models using deep learning. Sci. Rep. 2022;12:9429. doi: 10.1038/s41598-022-13595-2. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Woodsend B., Koufoudaki E., Lin P., McIntyre G., El-Angbawi A., Aziz A., Shaw W., Semb G., Reesu G.V., Mossey P.A. Development of intra-oral automated landmark recognition (ALR) for dental and occlusal outcome measurements. Eur. J. Orthod. 2022;44:43–50. doi: 10.1093/ejo/cjab012. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81.Woodsend B., Koufoudaki E., Mossey P.A., Lin P. Automatic recognition of landmarks on digital dental models. Comput. Biol. Med. 2021;137:104819. doi: 10.1016/j.compbiomed.2021.104819. (In English) [DOI] [PubMed] [Google Scholar]
  • 82.Zhao Y., Zhang L., Liu Y., Meng D., Cui Z., Gao C., Gao X., Lian C., Shen D. Two-Stream Graph Convolutional Network for Intra-Oral Scanner Image Segmentation. IEEE Trans. Med. Imaging. 2022;41:826–835. doi: 10.1109/TMI.2021.3124217. (In English) [DOI] [PubMed] [Google Scholar]
  • 83.Wu T.H., Lian C., Lee S., Pastewait M., Piers C., Liu J., Wang F., Wang L., Chiu C.Y., Wang W., et al. Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans. IEEE Trans. Med. Imaging. 2022;41:3158–3166. doi: 10.1109/TMI.2022.3180343. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Liu Z., He X., Wang H., Xiong H., Zhang Y., Wang G., Hao J., Feng Y., Zhu F., Hu H. Hierarchical Self-Supervised Learning for 3D Tooth Segmentation in Intra-Oral Mesh Scans. IEEE Trans. Med. Imaging. 2023;42:467–480. doi: 10.1109/TMI.2022.3222388. (In English) [DOI] [PubMed] [Google Scholar]
  • 85.Rao G.K.L., Srinivasa A.C., Iskandar Y.H.P., Mokhtar N. Identification and analysis of photometric points on 2D facial images: A machine learning approach in orthodontics. Health Technol. 2019;9:715–724. doi: 10.1007/s12553-019-00313-8. [DOI] [Google Scholar]
  • 86.Yurdakurban E., Duran G.S., Görgülü S. Evaluation of an automated approach for facial midline detection and asymmetry assessment: A preliminary study. Orthod. Craniofacial Res. 2021;24((Suppl. S2)):84–91. doi: 10.1111/ocr.12539. (In English) [DOI] [PubMed] [Google Scholar]
  • 87.Rousseau M., Retrouvey J.M. Machine learning in orthodontics: Automated facial analysis of vertical dimension for increased precision and efficiency. Am. J. Orthod. Dentofac. Orthop. 2022;161:445–450. doi: 10.1016/j.ajodo.2021.03.017. (In English) [DOI] [PubMed] [Google Scholar]
  • 88.Kim H., Kim C.S., Lee J.M., Lee J.J., Lee J., Kim J.S., Choi S.H. Prediction of Fishman’s skeletal maturity indicators using artificial intelligence. Sci. Rep. 2023;13:5870. doi: 10.1038/s41598-023-33058-6. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Lee H., Tajmir S., Lee J., Zissen M., Yeshiwas B.A., Alkasab T.K., Choy G., Do S. Fully Automated Deep Learning System for Bone Age Assessment. J. Digit. Imaging. 2017;30:427–441. doi: 10.1007/s10278-017-9955-8. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 90.Kim J.R., Shim W.H., Yoon H.M., Hong S.H., Lee J.S., Cho Y.A., Kim S. Computerized Bone Age Estimation Using Deep Learning Based Program: Evaluation of the Accuracy and Efficiency. AJR Am. J. Roentgenol. 2017;209:1374–1380. doi: 10.2214/AJR.17.18224. (In English) [DOI] [PubMed] [Google Scholar]
  • 91.Kök H., Izgi M.S., Acilar A.M. Determination of growth and development periods in orthodontics with artificial neural network. Orthod. Craniofacial Res. 2021;24((Suppl. S2)):76–83. doi: 10.1111/ocr.12443. (In English) [DOI] [PubMed] [Google Scholar]
  • 92.Franchi L., Baccetti T., McNamara J.A., Jr. Mandibular growth as related to cervical vertebral maturation and body height. Am. J. Orthod. Dentofac. Orthop. 2000;118:335–340. doi: 10.1067/mod.2000.107009. (In English) [DOI] [PubMed] [Google Scholar]
  • 93.Flores-Mir C., Burgess C.A., Champney M., Jensen R.J., Pitcher M.R., Major P.W. Correlation of skeletal maturation stages determined by cervical vertebrae and hand-wrist evaluations. Angle Orthod. 2006;76:1–5. doi: 10.1043/0003-3219(2006)076[0001:COSMSD]2.0.CO;2. (In English) [DOI] [PubMed] [Google Scholar]
  • 94.Kucukkeles N., Acar A., Biren S., Arun T. Comparisons between cervical vertebrae and hand-wrist maturation for the assessment of skeletal maturity. J. Clin. Pediatr. Dent. 1999;24:47–52. (In English) [PubMed] [Google Scholar]
  • 95.McNamara J.A., Jr., Franchi L. The cervical vertebral maturation method: A user’s guide. Angle Orthod. 2018;88:133–143. doi: 10.2319/111517-787.1. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 96.Kim D.W., Kim J., Kim T., Kim T., Kim Y.J., Song I.S., Ahn B., Choo J., Lee D.Y. Prediction of hand-wrist maturation stages based on cervical vertebrae images using artificial intelligence. Orthod. Craniofacial Res. 2021;24((Suppl. S2)):68–75. doi: 10.1111/ocr.12514. (In English) [DOI] [PubMed] [Google Scholar]
  • 97.Gandini P., Mancini M., Andreani F. A comparison of hand-wrist bone and cervical vertebral analyses in measuring skeletal maturation. Angle Orthod. 2006;76:984–989. doi: 10.2319/070605-217. [DOI] [PubMed] [Google Scholar]
  • 98.Akay G., Akcayol M.A., Özdem K., Güngör K. Deep convolutional neural network—The evaluation of cervical vertebrae maturation. Oral Radiol. 2023;39:629–638. doi: 10.1007/s11282-023-00678-7. [DOI] [PubMed] [Google Scholar]
  • 99.Kök H., Acilar A.M., İzgi M.S. Usage and comparison of artificial intelligence algorithms for determination of growth and development by cervical vertebrae stages in orthodontics. Prog. Orthod. 2019;20:41. doi: 10.1186/s40510-019-0295-8. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 100.Makaremi M., Lacaule C., Mohammad-Djafari A. Deep Learning and Artificial Intelligence for the Determination of the Cervical Vertebra Maturation Degree from Lateral Radiography. Entropy. 2019;21:1222. doi: 10.3390/e21121222. (In English) [DOI] [Google Scholar]
  • 101.Amasya H., Yildirim D., Aydogan T., Kemaloglu N., Orhan K. Cervical vertebral maturation assessment on lateral cephalometric radiographs using artificial intelligence: Comparison of machine learning classifier models. Dentomaxillofac. Radiol. 2020;49:20190441. doi: 10.1259/dmfr.20190441. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 102.Amasya H., Cesur E., Yıldırım D., Orhan K. Validation of cervical vertebral maturation stages: Artificial intelligence vs human observer visual analysis. Am. J. Orthod. Dentofac. Orthop. 2020;158:e173–e179. doi: 10.1016/j.ajodo.2020.08.014. (In English) [DOI] [PubMed] [Google Scholar]
  • 103.Seo H., Hwang J., Jeong T., Shin J. Comparison of Deep Learning Models for Cervical Vertebral Maturation Stage Classification on Lateral Cephalometric Radiographs. J. Clin. Med. 2021;10:3591. doi: 10.3390/jcm10163591. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 104.Zhou J., Zhou H., Pu L., Gao Y., Tang Z., Yang Y., You M., Yang Z., Lai W., Long H. Development of an Artificial Intelligence System for the Automatic Evaluation of Cervical Vertebral Maturation Status. Diagnostics. 2021;11:2200. doi: 10.3390/diagnostics11122200. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 105.Kim E.G., Oh I.S., So J.E., Kang J., Le V.N.T., Tak M.K., Lee D.W. Estimating Cervical Vertebral Maturation with a Lateral Cephalogram Using the Convolutional Neural Network. J. Clin. Med. 2021;10:5400. doi: 10.3390/jcm10225400. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 106.Mohammad-Rahimi H., Motamadian S.R., Nadimi M., Hassanzadeh-Samani S., Minabi M.A.S., Mahmoudinia E., Lee V.Y., Rohban M.H. Deep learning for the classification of cervical maturation degree and pubertal growth spurts: A pilot study. Korean J. Orthod. 2022;52:112–122. doi: 10.4041/kjod.2022.52.2.112. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 107.Radwan M.T., Sin Ç., Akkaya N., Vahdettin L. Artificial intelligence-based algorithm for cervical vertebrae maturation stage assessment. Orthod. Craniofacial Res. 2023;26:349–355. doi: 10.1111/ocr.12615. (In English) [DOI] [PubMed] [Google Scholar]
  • 108.Rojas E., Corvalán R., Messen E., Sandoval P. Upper airway assessment in Orthodontics: A review. Odontoestomatologia. 2017;19:40–51. doi: 10.22592/ode2017n30a5. [DOI] [Google Scholar]
  • 109.Shen Y., Li X., Liang X., Xu H., Li C., Yu Y., Qiu B. A deep-learning-based approach for adenoid hypertrophy diagnosis. Med. Phys. 2020;47:2171–2181. doi: 10.1002/mp.14063. (In English) [DOI] [PubMed] [Google Scholar]
  • 110.Zhao T., Zhou J., Yan J., Cao L., Cao Y., Hua F., He H. Automated Adenoid Hypertrophy Assessment with Lateral Cephalometry in Children Based on Artificial Intelligence. Diagnostics. 2021;11:1386. doi: 10.3390/diagnostics11081386. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 111.Liu J.L., Li S.H., Cai Y.M., Lan D.P., Lu Y.F., Liao W., Ying S.C., Zhao Z.H. Automated Radiographic Evaluation of Adenoid Hypertrophy Based on VGG-Lite. J. Dent. Res. 2021;100:1337–1343. doi: 10.1177/00220345211009474. (In English) [DOI] [PubMed] [Google Scholar]
  • 112.Sin Ç., Akkaya N., Aksoy S., Orhan K., Öz U. A deep learning algorithm proposal to automatic pharyngeal airway detection and segmentation on CBCT images. Orthod. Craniofacial Res. 2021;24((Suppl. S2)):117–123. doi: 10.1111/ocr.12480. (In English) [DOI] [PubMed] [Google Scholar]
  • 113.Leonardi R., Lo Giudice A., Farronato M., Ronsivalle V., Allegrini S., Musumeci G., Spampinato C. Fully automatic segmentation of sinonasal cavity and pharyngeal airway based on convolutional neural networks. Am. J. Orthod. Dentofac. Orthop. 2021;159:824–835.e821. doi: 10.1016/j.ajodo.2020.05.017. (In English) [DOI] [PubMed] [Google Scholar]
  • 114.Shujaat S., Jazil O., Willems H., Van Gerven A., Shaheen E., Politis C., Jacobs R. Automatic segmentation of the pharyngeal airway space with convolutional neural network. J. Dent. 2021;111:103705. doi: 10.1016/j.jdent.2021.103705. (In English) [DOI] [PubMed] [Google Scholar]
  • 115.Jeong Y., Nang Y., Zhao Z. Automated Evaluation of Upper Airway Obstruction Based on Deep Learning. BioMed Res. Int. 2023;2023:8231425. doi: 10.1155/2023/8231425. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 116.Dong W., Chen Y., Li A., Mei X., Yang Y. Automatic detection of adenoid hypertrophy on cone-beam computed tomography based on deep learning. Am. J. Orthod. Dentofac. Orthop. 2023;163:553–560.e553. doi: 10.1016/j.ajodo.2022.11.011. (In English) [DOI] [PubMed] [Google Scholar]
  • 117.Jin S., Han H., Huang Z., Xiang Y., Du M., Hua F., Guan X., Liu J., Chen F., He H. Automatic three-dimensional nasal and pharyngeal airway subregions identification via Vision Transformer. J. Dent. 2023;136:104595. doi: 10.1016/j.jdent.2023.104595. (In English) [DOI] [PubMed] [Google Scholar]
  • 118.Soldatova L., Otero H.J., Saul D.A., Barrera C.A., Elden L. Lateral Neck Radiography in Preoperative Evaluation of Adenoid Hypertrophy. Ann. Otol. Rhinol. Laryngol. 2020;129:482–488. doi: 10.1177/0003489419895035. (In English) [DOI] [PubMed] [Google Scholar]
  • 119.Duan H., Xia L., He W., Lin Y., Lu Z., Lan Q. Accuracy of lateral cephalogram for diagnosis of adenoid hypertrophy and posterior upper airway obstruction: A meta-analysis. Int. J. Pediatr. Otorhinolaryngol. 2019;119:1–9. doi: 10.1016/j.ijporl.2019.01.011. (In English) [DOI] [PubMed] [Google Scholar]
  • 120.Fujioka M., Young L.W., Girdany B.R. Radiographic evaluation of adenoidal size in children: Adenoidal-nasopharyngeal ratio. AJR Am. J. Roentgenol. 1979;133:401–404. doi: 10.2214/ajr.133.3.401. (In English) [DOI] [PubMed] [Google Scholar]
  • 121.Xie X., Wang L., Wang A. Artificial neural network modeling for deciding if extractions are necessary prior to orthodontic treatment. Angle Orthod. 2010;80:262–266. doi: 10.2319/111608-588.1. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 122.Jung S.K., Kim T.W. New approach for the diagnosis of extractions with neural network machine learning. Am. J. Orthod. Dentofac. Orthop. 2016;149:127–133. doi: 10.1016/j.ajodo.2015.07.030. (In English) [DOI] [PubMed] [Google Scholar]
  • 123.Li P., Kong D., Tang T., Su D., Yang P., Wang H., Zhao Z., Liu Y. Orthodontic treatment planning based on artificial neural networks. Sci. Rep. 2019;9:2037. doi: 10.1038/s41598-018-38439-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 124.Suhail Y., Upadhyay M., Chhibber A., Kshitiz. Machine learning for the diagnosis of orthodontic extractions: A computational analysis using ensemble learning. Bioengineering. 2020;7:55. doi: 10.3390/bioengineering7020055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 125.Etemad L., Wu T.H., Heiner P., Liu J., Lee S., Chao W.L., Zaytoun M.L., Guez C., Lin F.C., Jackson C.B., et al. Machine learning from clinical data sets of a contemporary decision for orthodontic tooth extraction. Orthod. Craniofacial Res. 2021;24((Suppl. S2)):193–200. doi: 10.1111/ocr.12502. (In English) [DOI] [PubMed] [Google Scholar]
  • 126.Shojaei H., Augusto V. Constructing Machine Learning models for Orthodontic Treatment Planning: A comparison of different methods; Proceedings of the 2022 IEEE International Conference on Big Data (Big Data); Osaka, Japan. 17–20 December 2022. [Google Scholar]
  • 127.Real A.D., Real O.D., Sardina S., Oyonarte R. Use of automated artificial intelligence to predict the need for orthodontic extractions. Korean J. Orthod. 2022;52:102–111. doi: 10.4041/kjod.2022.52.2.102. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 128.Leavitt L., Volovic J., Steinhauer L., Mason T., Eckert G., Dean J.A., Dundar M.M., Turkkahraman H. Can we predict orthodontic extraction patterns by using machine learning? Orthod. Craniofacial Res. 2023;26:552–559. doi: 10.1111/ocr.12641. (In English) [DOI] [PubMed] [Google Scholar]
  • 129.Prasad J., Mallikarjunaiah D.R., Shetty A., Gandedkar N., Chikkamuniswamy A.B., Shivashankar P.C. Machine Learning Predictive Model as Clinical Decision Support System in Orthodontic Treatment Planning. Dent. J. 2022;11:1. doi: 10.3390/dj11010001. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 130.Knoops P.G.M., Papaioannou A., Borghi A., Breakey R.W.F., Wilson A.T., Jeelani O., Zafeiriou S., Steinbacher D., Padwa B.L., Dunaway D.J., et al. A machine learning framework for automated diagnosis and computer-assisted planning in plastic and reconstructive surgery. Sci. Rep. 2019;9:13597. doi: 10.1038/s41598-019-49506-1. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 131.Choi H.-I., Jung S.-K., Baek S.-H., Lim W.H., Ahn S.-J., Yang I.-H., Kim T.-W. Artificial intelligent model with neural network machine learning for the diagnosis of orthognathic surgery. J. Craniofacial Surg. 2019;30:1986–1989. doi: 10.1097/SCS.0000000000005650. [DOI] [PubMed] [Google Scholar]
  • 132.Lee K.-S., Ryu J.-J., Jang H.S., Lee D.-Y., Jung S.-K. Deep convolutional neural networks based analysis of cephalometric radiographs for differential diagnosis of orthognathic surgery indications. Appl. Sci. 2020;10:2124. doi: 10.3390/app10062124. [DOI] [Google Scholar]
  • 133.Jeong S.H., Yun J.P., Yeom H.G., Lim H.J., Lee J., Kim B.C. Deep learning based discrimination of soft tissue profiles requiring orthognathic surgery by facial photographs. Sci. Rep. 2020;10:16235. doi: 10.1038/s41598-020-73287-7. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 134.Shin W., Yeom H.G., Lee G.H., Yun J.P., Jeong S.H., Lee J.H., Kim H.K., Kim B.C. Deep learning based prediction of necessity for orthognathic surgery of skeletal malocclusion using cephalogram in Korean individuals. BMC Oral Health. 2021;21:130. doi: 10.1186/s12903-021-01513-3. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 135.Kim Y.H., Park J.B., Chang M.S., Ryu J.J., Lim W.H., Jung S.K. Influence of the Depth of the Convolutional Neural Networks on an Artificial Intelligence Model for Diagnosis of Orthognathic Surgery. J. Pers. Med. 2021;11:356. doi: 10.3390/jpm11050356. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 136.Lee H., Ahmad S., Frazier M., Dundar M.M., Turkkahraman H. A novel machine learning model for class III surgery decision. J. Orofac. Orthop. 2022 doi: 10.1007/s00056-022-00421-7. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 137.Woo H., Jha N., Kim Y.-J., Sung S.-J. Evaluating the accuracy of automated orthodontic digital setup models. Semin. Orthod. 2023;29:60–67. [Google Scholar]
  • 138.Park J.H., Kim Y.-J., Kim J., Kim J., Kim I.-H., Kim N., Vaid N.R., Kook Y.-A. Use of artificial intelligence to predict outcomes of nonextraction treatment of Class II malocclusions. Semin. Orthod. 2021;27:87–95. [Google Scholar]
  • 139.Tanikawa C., Yamashiro T. Development of novel artificial intelligence systems to predict facial morphology after orthognathic surgery and orthodontic treatment in Japanese patients. Sci. Rep. 2021;11:15853. doi: 10.1038/s41598-021-95002-w. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 140.Park Y.S., Choi J.H., Kim Y., Choi S.H., Lee J.H., Kim K.H., Chung C.J. Deep Learning-Based Prediction of the 3D Postorthodontic Facial Changes. J. Dent. Res. 2022;101:1372–1379. doi: 10.1177/00220345221106676. (In English) [DOI] [PubMed] [Google Scholar]
  • 141.Xu L., Mei L., Lu R., Li Y., Li H., Li Y. Predicting patient experience of Invisalign treatment: An analysis using artificial neural network. Korean J. Orthod. 2022;52:268–277. doi: 10.4041/kjod21.255. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 142.Ribarevski R., Vig P., Vig K.D., Weyant R., O’Brien K. Consistency of orthodontic extraction decisions. Eur. J. Orthod. 1996;18:77–80. doi: 10.1093/ejo/18.1.77. [DOI] [PubMed] [Google Scholar]
  • 143.Drucker H., Wu D., Vapnik V.N. Support vector machines for spam categorization. IEEE Trans. Neural Netw. 1999;10:1048–1054. doi: 10.1109/72.788645. (In English) [DOI] [PubMed] [Google Scholar]
  • 144.Khozeimeh F., Sharifrazi D., Izadi N.H., Joloudari J.H., Shoeibi A., Alizadehsani R., Tartibi M., Hussain S., Sani Z.A., Khodatars M., et al. RF-CNN-F: Random forest with convolutional neural network features for coronary artery disease diagnosis based on cardiac magnetic resonance. Sci. Rep. 2022;12:11178. doi: 10.1038/s41598-022-15374-5. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 145.Ahsan M.M., Luna S.A., Siddique Z. Machine-Learning-Based Disease Diagnosis: A Comprehensive Review. Healthcare. 2022;10:541. doi: 10.3390/healthcare10030541. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 146.Rabie A.B., Wong R.W., Min G.U. Treatment in Borderline Class III Malocclusion: Orthodontic Camouflage (Extraction) Versus Orthognathic Surgery. Open Dent. J. 2008;2:38–48. doi: 10.2174/1874210600802010038. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 147.Alhammadi M.S., Almashraqi A.A., Khadhi A.H., Arishi K.A., Alamir A.A., Beleges E.M., Halboub E. Orthodontic camouflage versus orthodontic-orthognathic surgical treatment in borderline class III malocclusion: A systematic review. Clin. Oral Investig. 2022;26:6443–6455. doi: 10.1007/s00784-022-04685-6. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 148.Eslami S., Faber J., Fateh A., Sheikholaemmeh F., Grassia V., Jamilian A. Treatment decision in adult patients with class III malocclusion: Surgery versus orthodontics. Prog. Orthod. 2018;19:28. doi: 10.1186/s40510-018-0218-0. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 149.Mirza M., Osindero S. Conditional generative adversarial nets. arXiv. 2014. arXiv:1411.1784. [Google Scholar]
  • 150.Isola P., Zhu J.-Y., Zhou T., Efros A.A. Image-to-image translation with conditional adversarial networks; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA. 21–26 July 2017. [Google Scholar]
  • 151.Vu H., Vo P.T., Kim H.D. Gender modified association of oral health indicators with oral health-related quality of life among Korean elders. BMC Oral Health. 2022;22:168. doi: 10.1186/s12903-022-02104-6. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 152.El-Dawlatly M.M., Abdelmaksoud A.R., Amer O.M., El-Dakroury A.E., Mostafa Y.A. Evaluation of the efficiency of computerized algorithms to formulate a decision support system for deepbite treatment planning. Am. J. Orthod. Dentofac. Orthop. 2021;159:512–521. doi: 10.1016/j.ajodo.2020.05.014. (In English) [DOI] [PubMed] [Google Scholar]
  • 153.Tao T., Zou K., Jiang R., He K., He X., Zhang M., Wu Z., Shen X., Yuan X., Lai W., et al. Artificial intelligence-assisted determination of available sites for palatal orthodontic mini implants based on palatal thickness through CBCT. Orthod. Craniofacial Res. 2023;26:491–499. doi: 10.1111/ocr.12634. (In English) [DOI] [PubMed] [Google Scholar]
  • 154.Hu X., Zhao Y., Yang C. Evaluation of root position during orthodontic treatment via multiple intraoral scans with automated registration technology. Am. J. Orthod. Dentofac. Orthop. 2023;164:285–292. doi: 10.1016/j.ajodo.2023.04.012. (In English) [DOI] [PubMed] [Google Scholar]
  • 155.Lee S.C., Hwang H.S., Lee K.C. Accuracy of deep learning-based integrated tooth models by merging intraoral scans and CBCT scans for 3D evaluation of root position during orthodontic treatment. Prog. Orthod. 2022;23:15. doi: 10.1186/s40510-022-00410-x. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 156.Hansa I., Semaan S.J., Vaid N.R. Clinical outcomes and patient perspectives of Dental Monitoring® GoLive® with Invisalign®-a retrospective cohort study. Prog. Orthod. 2020;21:16. doi: 10.1186/s40510-020-00316-6. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 157.Strunga M., Urban R., Surovková J., Thurzo A. Artificial Intelligence Systems Assisting in the Assessment of the Course and Retention of Orthodontic Treatment. Healthcare. 2023;11:683. doi: 10.3390/healthcare11050683. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 158.Hansa I., Katyal V., Semaan S.J., Coyne R., Vaid N.R. Artificial Intelligence Driven Remote Monitoring of orthodontic patients: Clinical applicability and rationale. Semin. Orthod. 2021;27:138–156. doi: 10.1053/j.sodo.2021.05.010. [DOI] [Google Scholar]
  • 159.Ryu J., Lee Y.S., Mo S.P., Lim K., Jung S.K., Kim T.W. Application of deep learning artificial intelligence technique to the classification of clinical orthodontic photos. BMC Oral Health. 2022;22:454. doi: 10.1186/s12903-022-02466-x. (In English) [DOI] [PMC free article] [PubMed] [Google Scholar]