Abstract
Artificial intelligence (AI) has recently gained significant interest in orthodontics due to its ability to enhance diagnostic accuracy, guide treatment planning, and improve therapeutic outcomes. This review aimed to explore the relevance and applications of AI across various aspects of orthodontics. A comprehensive literature search was performed for the period from January 2010 to 1 March 2025 in databases including PubMed, EMBASE, Web of Science, Scopus, and Cochrane. Letters to the editor, case reports, systematic reviews, and animal studies were excluded. Artificial intelligence models, especially those using deep learning, have been integrated into multiple orthodontic fields, including landmark identification, malocclusion classification, treatment planning, growth prediction, and risk assessment. They have also achieved notable success in segmenting two-dimensional and three-dimensional anatomical structures, supporting aligner therapy, evaluating facial asymmetry, localizing impacted canines, and identifying clefts. While several investigations highlight the high accuracy of AI models, others emphasize the need for clinician oversight, recommending that these tools serve as supportive aids rather than replacements for clinical judgment. AI-based algorithms may enhance treatment quality, decrease procedural time and operator variability, and reduce human error. However, further clinical trials are needed to validate and optimize the accuracy and reliability of these models in orthodontics.
Keywords: Artificial Intelligence, Machine Learning, Deep Learning, Convolutional Neural Networks, Orthodontics
Introduction
Artificial intelligence (AI), a branch of computer science, focuses on creating systems capable of emulating human learning, reasoning, and decision-making (Russell and Norvig 2021). By replicating human cognitive processes, AI enables machines to perform tasks typically requiring human intelligence (Shan et al. 2021). Various algorithms have been developed to facilitate AI, including the following:
- Machine learning (ML): ML enables machines to learn and improve their performance through experience rather than explicit programming (J.-H. Park et al. 2019a, b). ML is generally categorized into four types:
Supervised ML algorithms (SMLA): In this type, labeled external data is used to develop algorithms that identify patterns for future instances. SMLA is applied in diagnosing dental deformities through cephalometric imaging, as well as evaluating disease progression, dental anxiety, and behavioral management in children.
Unsupervised ML algorithms: They do not label or structure data; they autonomously explore and interpret unclassified data to reveal hidden structures.
Semi-supervised ML algorithms: These lie between supervised and unsupervised learning: a small amount of the data is labeled while the rest remains unlabeled, allowing the system to enhance its learning capabilities.
Reinforcement ML algorithms: Reinforcement ML employs algorithms that allow the machine to interact with its environment, taking actions and receiving error signals or rewards. This approach operates on the trial-and-error principle and has been utilized, for example, in treatment strategies for septic patients, with an emphasis on offline learning (Butul and Sharab 2024).
- Neural networks: An artificial neural network (ANN) imitates the human nervous system using units similar to neurons. These units, activated by data, process information across multiple layers, memorizing features and interpreting new data based on previous learning (Jung and Kim 2016).
- Deep learning: This is an advanced form of ANN where a substantial amount of data is analyzed and classified by the computer using binary (True/False) questions. This technology requires complex mathematical calculations to generate predictions (Montúfar et al. 2018). Convolutional neural networks (CNNs), a type of deep learning (DL) method, have been widely adopted in recent years for medical image analysis due to their superior performance in detection, diagnosis, and classification (Khan et al. 2018).
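The distinction between supervised and unsupervised learning described above can be illustrated with a minimal Python sketch. The data here are hypothetical toy values (imagined two-dimensional measurements, not from any of the clinical datasets reviewed): a nearest-neighbour rule predicts from labeled examples (supervised), while a simple 2-means clustering groups the same points without ever seeing the labels (unsupervised).

```python
import math

# Hypothetical toy data: two measurements per case; the class labels
# are invented purely for illustration.
labeled = [((1.0, 1.2), "class_a"), ((0.8, 1.0), "class_a"),
           ((3.0, 3.1), "class_b"), ((3.2, 2.9), "class_b")]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Supervised learning (1-nearest neighbour): predict the label of a new
# point from the closest labeled example.
def predict(point):
    return min(labeled, key=lambda ex: dist(point, ex[0]))[1]

# Unsupervised learning (2-means): group the same points with no labels.
def kmeans2(points, iters=10):
    centers = [points[0], points[-1]]  # naive initialisation
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            # assign each point to its nearest center (bool indexes 0 or 1)
            groups[dist(p, centers[0]) > dist(p, centers[1])].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

points = [ex[0] for ex in labeled]
print(predict((0.9, 1.1)))  # nearest labeled neighbours belong to class_a
print(kmeans2(points))      # the two clusters are recovered without labels
```

The supervised predictor needs the labels to exist at training time, whereas the clustering step recovers the same grouping purely from the geometry of the data — the same contrast as between cephalometric classification with expert-annotated images and pattern discovery in unannotated records.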
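The convolution operation at the heart of a CNN can likewise be sketched in a few lines of plain Python. In this illustrative example (the image patch and kernel are invented, not taken from any study reviewed here), a tiny 5×5 "image" containing a vertical edge is convolved with a Sobel-like edge kernel — the same low-level feature extraction that, stacked over many learned kernels and layers, allows CNNs to detect anatomical structures in medical images.

```python
# Minimal 2D convolution (valid padding, stride 1) -- the core operation
# of a convolutional layer in a CNN.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# Toy 5x5 "image": dark left columns, bright right columns (a vertical edge).
patch = [[0, 0, 1, 1, 1]] * 5

# Sobel-like kernel that responds where intensity changes left-to-right.
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

for row in conv2d(patch, kernel):
    print(row)  # strongest responses sit over the edge between the halves
```

In a real CNN the kernel weights are not hand-crafted as here but learned from labeled data, and many such filter responses are passed through nonlinearities and pooling before a final classification layer.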
A proper hard- and soft-tissue diagnostic examination is crucial in orthodontic and orthognathic treatment (Choi 2015). Clinicians have used several assessment methods, such as clinical evaluation, 2D and 3D radiographic techniques, and photographic examination (Berssenbrügge et al. 2014). Nowadays, AI-based programs and algorithms can evaluate images and perform cephalometric and photographic analyses quickly. AI-enhanced diagnostic tools process large-scale patient datasets, enable automated and precise malocclusion assessments, and reduce inter-examiner variability (Gaonkar et al. 2024). AI tools can also assist orthodontists in decision-making and improve therapeutic outcomes and efficiency (Ryu et al. 2023). Although several reviews have previously assessed AI applications in dentistry and in certain areas of orthodontics, such as growth prediction, cervical vertebral maturation assessment, and cephalometric landmark identification (Dashti et al. 2024; de Queiroz Tavares Borges Mesquita et al. 2023; Hendrickx et al. 2024; Kazimierczak et al. 2024c), we identified numerous new studies exploring the applications of AI across various orthodontic domains (Fig. 1). As research interest in AI within orthodontics continues to grow, there remains a need for an updated and comprehensive review of the literature that evaluates its broader applications. The present study provides a narrative review of current artificial intelligence applications within multiple domains of orthodontics.
Fig. 1.
Applications of artificial intelligence across various aspects of orthodontics
Materials & methods
Information sources and search strategy
A comprehensive literature search was conducted for the period from January 2010 to 1 March 2025 in PubMed, EMBASE, Web of Science, Scopus, and Cochrane. The search strategy consisted of both Medical Subject Headings (MeSH) and non-MeSH terms. Boolean operators (AND, OR) were employed to systematically combine keywords, and the search strategies were adapted for each database to ensure sensitivity and specificity (Table 1). Google Scholar and the reference lists of the retrieved articles were also examined to identify additional relevant papers. Duplicate records were removed before screening.
Table 1.
Databases, applied search strategy, and the number of retrieved studies
| Database of published trials | Search strategy used | Hits |
|---|---|---|
| MEDLINE searched via PubMed on 1 March 2025 | ("orthodontics"[MeSH Terms] OR "Malocclusion"[MeSH Terms] OR "Cephalometry"[MeSH Terms] OR "Diagnosis"[MeSH Terms] OR "Diagnostic imaging"[MeSH Terms] OR "orthodontics"[Title/Abstract] OR "Malocclusion"[Title/Abstract] OR "Cephalometry"[Title/Abstract] OR "Diagnosis"[Title/Abstract] OR "Diagnostic imaging"[Title/Abstract] OR "treatment planning"[Title/Abstract]) AND ("Artificial Intelligence"[MeSH Terms] OR "Artificial Intelligence"[Title/Abstract] OR "machine learning"[MeSH Terms] OR "deep learning"[MeSH Terms] OR "neural network"[Title/Abstract] OR "machine learning"[Title/Abstract] OR "deep learning"[Title/Abstract] OR "Convolutional Neural Networks"[MeSH Terms] OR "Convolutional Neural Networks"[Title/Abstract]) | 136984 |
| ISI Web of Science Core Collection searched via Web of Knowledge on 1 March 2025, via apps.webofknowledge.com | ("orthodontics" OR "Malocclusion" OR "Cephalometry" OR "Diagnosis" OR "Diagnostic imaging" OR "treatment planning") (All Fields) AND ("Artificial Intelligence" OR "machine learning" OR "deep learning" OR "neural network" OR "Convolutional Neural Networks") (All Fields) | 120915 |
| Embase searched via Embase on 1 March 2025, via www.embase.com | ('orthodontics'/exp OR 'orthodontics' OR 'malocclusion'/exp OR 'malocclusion' OR 'cephalometry'/exp OR 'cephalometry' OR 'diagnosis'/exp OR 'diagnosis' OR 'diagnostic imaging'/exp OR 'diagnostic imaging' OR 'treatment planning'/exp OR 'treatment planning') AND ('artificial intelligence' OR 'machine learning' OR 'deep learning' OR 'neural network' OR 'convolutional neural networks') | 219359 |
| Scopus searched on 1 March 2025, via https://www.scopus.com | TITLE-ABS-KEY ("orthodontics" OR "Malocclusion" OR "Cephalometry" OR "Diagnosis" OR "Diagnostic imaging" OR "treatment planning") AND TITLE-ABS-KEY ("Artificial Intelligence" OR "machine learning" OR "deep learning" OR "neural network" OR "Convolutional Neural Networks") | 225589 |
| Cochrane Central Register of Controlled Trials searched via the Cochrane Library on 1 March 2025, via www.thecochranelibrary.com | "orthodontics" OR "Malocclusion" OR "Cephalometry" OR "Diagnosis" OR "Diagnostic imaging" OR "treatment planning" in Title Abstract Keyword AND "Artificial Intelligence" OR "machine learning" OR "deep learning" OR "neural network" OR "Convolutional Neural Networks" in Title Abstract Keyword | 3066 |
| Total | | 705913 |
Eligibility criteria
Studies reporting applications of AI across various areas of orthodontics were considered for this review. Original peer-reviewed research articles and conference papers were included. Only studies published in English were included.
Animal studies, systematic reviews, case reports, and letters to the editor were excluded.
Study selection and data extraction
Two reviewers (S.A. and Sh.T.) independently screened titles and abstracts, after which the full texts of potentially eligible publications were evaluated. Any disagreements were resolved through discussion with a third reviewer (S.H.). The selection process followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Fig. 2). In total, 286 studies met the inclusion criteria. Data extraction was performed independently by two review authors (Sh.T. and S.A.), with disagreements discussed with a third author (S.H.). The extracted data comprised study design, field of application, sample size, type of AI implemented, main results, and reliability (Table 2).
Fig. 2.
PRISMA flow diagram
Table 2.
Baseline characteristics of the included studies
| Author/year/country | Study design | Area of application | AI model | Sample size | Outcomes | Acceptable reliability (+) or needs further refinement (?) |
|---|---|---|---|---|---|---|
| Facial asymmetry | | | | | | |
| Kazimierczak et al./2024 | Observational, single center | Reliability of AI-based analysis of facial asymmetry | CephX (ORCA Dental AI, Las Vegas, NV, United States) | CT scans of 115 patients | High rate of inaccuracies; poor agreement with manual asymmetry analysis; inability to assess the degree of asymmetry | ? |
| Jeon et al./2024 Korea (S. M. Jeon et al. 2024) | Observational, single center | Diagnostic performance in detection of facial asymmetry | Deep convolutional neural network (DCNN)-based computer-assisted diagnosis (CAD) system with U-Net | PA cephalograms of 1020 orthodontic patients | No significant differences between the DCNN-based CAD system and conventional analysis; potential of the DCNN-based CAD system for diagnosis of facial asymmetry | + |
| Yurdakurban et al./2021 Turkey (Yurdakurban et al. 2021) | Observational, single center | Level of agreement between the conventional method and a machine-learning approach to facial midline determination and asymmetry assessment | Emotrics (Massachusetts, USA) | Facial frontal photographs of 90 samples (53 females, 37 males) | Potential clinical use of machine-learning algorithms in asymmetry assessment and midline determination | + |
| Takeda et al./2021 Japan (Takeda et al. 2021) | Observational, single center | Analysis of mandibular deviation and facial asymmetry | A deep learning-based CNN and a random forest algorithm (a robust decision tree-based machine learning algorithm) | 400 PA cephalograms (255 female and 145 male patients) | Prompt assessment of facial asymmetry and reduced effort in orthodontic diagnosis using deep CNN | + |
| Facial attractiveness | | | | | | |
| Grillo et al./2024 Brazil (Grillo et al. 2024) | Observational (retrospective cohort), single center | Assessment of facial proportions, angles, and symmetry among celebrities | A Python-based algorithm; TensorFlow (an open-source machine-learning framework) | Images of 115 female celebrities | Importance of specific ratios and angles in an attractive female face; facial symmetry, though preferred, is not essential | + |
| Cai et al./2024 China (Cai et al. 2024) | Observational, single center | Quantification of profile improvements through occlusal plane (OP) rotation in orthodontic treatment | Uceph; back-propagation artificial neural network (BP-ANN) models | Cephalometric radiographs of 903 patients | Facial aesthetics improve with rotation of the OP; more efficient vertically in hypodivergent groups and sagittally in Class III groups | + |
| Peck et al./2022 USA (Peck et al. 2022) | Observational, single center | Assessment of apparent age and attractiveness | "Haystack" (Haystack AI, New York, NY) | Photographs of 65 orthognathic surgery patients | Supportive tool; serves as an adjunct, not a replacement, for rater groups; particularly helpful for subjective assessments such as attractiveness | ? |
| Obwegeser et al./2022 Switzerland (Obwegeser et al. 2022) | Observational, single center | Influence of tooth alignment on facial attractiveness | An existing face detector and deep CNNs employing a DenseNet-201 architecture | 960 frontal facial images of 40 female participants | Tooth alignment improves facial attractiveness; effect comparable to wearing lipstick; no impact on perceived age | + |
| Patcas et al./2019 | Observational, single center | Facial attractiveness of treated cleft patients and controls | A face detector and trained CNNs | Frontal and profile images of 20 treated left-sided cleft patients (10 males, mean age 20.5 years) and 10 controls (5 males, mean age 22.1 years) | AI-based results were comparable to the average scores of cleft patients; strong agreement with both professional panels; overall lower scores for control cases | + |
| Patcas et al./2019 | Observational (retrospective longitudinal cohort), single center | Impact of orthognathic treatment on facial attractiveness and age appearance | Computational algorithm comprising a face detector and convolutional neural networks (CNNs) | Pre- and post-treatment photographs (n = 2164) of 146 consecutive orthognathic patients | A tool to score facial attractiveness and apparent age in orthognathic patients | + |
| Yu et al./2014 China (X. Yu et al. 2014) | Observational, multi center | Evaluation of facial attractiveness from a set of orthodontic photographs | Procrustes superimposition algorithms; a support vector regression (SVR) function | Photographs of 108 patients (30 male, 78 female; age range 10–29 years) | Geometric morphometrics combined with SVR is a potential method for objective evaluation of facial attractiveness | + |
| Cleft | | | | | | |
| Kamei et al./2024 India (Kamei et al. 2024) | Observational, multi center | Comparison of skeletal maturation in cleft and non-cleft groups | Two convolutional neural network (CNN)-based AI models developed with the MobileNet architecture using the TensorFlow framework | 960 cephalograms of patients with and without unilateral cleft lip and palate (UCLP), aged 6 to 18 years | An aid for detection of skeletal maturity in UCLP patients; evidence of delayed skeletal growth in UCLP patients compared with non-UCLP patients | + |
| Arslan et al./2024 Turkey (Arslan et al. 2024) | Observational, single center | Detection and numbering of teeth in cleft patients | Diagnocat (based on deep learning methods, specifically CNNs) | 100 panoramic x-rays (52 males, 48 females) | Effective in detecting and numbering teeth in cleft patients; accuracy needs improvement in the cleft region | ? |
| Lin et al./2021 Korea (G. Lin et al. 2021) | Observational, single center | Determination of cephalometric predictors of future surgical need in patients with repaired unilateral cleft lip and palate (UCLP) | The Boruta method and XGBoost algorithm | Lateral cephalograms of 56 Korean patients with UCLP | Predicts future surgical need for sagittal skeletal discrepancy in UCLP patients at the age of 6 | + |
| Alam et al./2021 Saudi Arabia (Alam et al. 2021) | Observational, single center | Comparison of the sagittal jaw relationship in non-syndromic cleft and non-cleft (NC) individuals | WebCeph software | Lateral cephalograms of 123 patients (31 NC, 92 cleft): bilateral cleft lip and palate (BCLP) 29; unilateral cleft lip and palate (UCLP) 41; unilateral cleft lip and alveolus (UCLA) 9; unilateral cleft lip (UCL) 13 | Decreased sagittal development in different types of cleft patients compared with NCs | + |
| Wang et al./2021 | Observational, single center | Quantification of 3D maxillary asymmetry in unilateral cleft lip and palate (UCP) patients | 3D U-Net | CBCT images of 60 patients with UCP (39 males, 21 females; mean age 11.52 years) | Significant maxillary hypoplasia on the cleft side (in the pyriform aperture and alveolar crest near the defect); contribution of defect structures to the variability of the maxilla on the cleft side | + |
| Alam et al./2020 Saudi Arabia (Alam & Alfawzan 2020b) | Observational, single center | Evaluation of sella turcica (ST) bridging, associated abnormalities, and morphology in subjects with four different types of clefts | WebCeph software | Lateral cephalograms of 123 patients (31 NC, 92 cleft): BCLP 29; UCLP 41; UCLA 9; UCL 13 | Higher occurrence of ST bridging, Class III malocclusion, and associated dental anomalies in cleft subjects; significant differences in ST morphometry between cleft and NC subjects | + |
| Alam et al./2020 Saudi Arabia (Alam & Alfawzan 2020a) | Observational, single center | Comparison of dental characteristics (DC) among different types of cleft lip and palate (CLP) and non-cleft (NC) individuals | WebCeph software | Lateral cephalograms of 123 patients (31 NC, 92 cleft): BCLP 29; UCLP 41; UCLA 9; UCL 13 | Association of CLP type with significant alterations in DC compared with NC; among CLP types, BCLP showed the greatest alterations in DC | + |
| Shafi et al./2020 Pakistan (Shafi et al. 2020) | Observational, multi center | Machine learning-based prediction of cleft before birth | MLP model (a feedforward artificial neural network classifier) | Questionnaire data of 1000 mothers: 500 whose children had cleft lip/palate and 500 whose children were healthy | CLP-MLP is a better model for cleft prediction before birth; 92.6% accuracy on unseen test data | + |
| Omar et al./2018 Malaysia (Omar et al. 2018) | Observational, single center | Exploration of features contributing to the success of pre-graft orthodontic treatment for cleft patients | Random forests and cforest (machine learning algorithms) | Dental records of 18 CLP patients: nine successful and nine unsuccessful cases | Top four variables: (1) cleft palate affecting the soft palate, hard palate, or both; (2) ethnicity; (3) referral age; (4) age at treatment; only the affected cleft palate was identified as an important variable | + |
| Impacted canine | | | | | | |
| Minhas et al./2024 China (Minhas et al. 2024) | Observational, single center | 3D reconstruction from 2D panoramic x-rays to assess the position of maxillary impacted canines | Deep-learning reconstruction algorithm | Pre-treatment CBCT data from 123 patients (50 male, 73 female; mean age 14.59 years): 74 with impacted canines, 49 without | 70% similarity; improved precision and predictability in locating maxillary impacted canines is required | ? |
| Abdulkreem et al./2024 United Arab Emirates (Abdulkreem et al. 2024) | Observational, single center | Classification of impacted and non-impacted canines | A convolutional neural network with SqueezeNet architecture | 182 panoramic radiographs (91 with and 91 without impacted canines) | Automated AI algorithms can preprocess panoramic radiographs and enhance the identification of impacted canines | + |
| Ozcan et al./2024 Turkey (Özcan et al. 2024) | Observational, single center | Identification of the bucco-palatal position of the maxillary impacted canine | A convolutional neural network (CNN) | 810 panoramic x-rays | Further refinement is required for the model's clinical application | ? |
| de Araujo et al./2024 Brazil (Cristiano Miranda de Araujo et al. 2025) | Observational, single center | Prediction of palatally impacted maxillary canines using maxilla measurements | AdaBoost classifier, decision tree, gradient boosting classifier, k-nearest neighbours (KNN), logistic regression, multilayer perceptron classifier (MLP), random forest classifier, and support vector machine (SVM) | CBCT scans of 138 patients | A promising method for predicting palatally impacted maxillary canines; the gradient boosting and random forest classifiers demonstrated the best performance | + |
| Deepa et al./2024 India (Deepa et al. 2024) | Observational, single center | Diagnosis of canine impaction in orthodontics | Fast region-based convolutional neural network (Fast R-CNN) | 1973 digital panoramic x-rays | High performance with an accuracy of 98.3% | + |
| Aljabri et al./2022 Saudi Arabia (Aljabri et al. 2022) | Observational, single center | Classification of impacted canines using the Yamamoto classification | Four deep CNN models: DenseNet-121, VGG-16, Inception V3, and ResNet-50 | 2D panoramic radiographic images of 268 patients (9–12 years) | Inception V3 outperformed the other classifiers, with an accuracy of 0.9259 | + |
| Chen et al./2020 China (S. Chen et al. 2020) | Observational, single center | Assessment of maxillary structure variation in unilateral canine impaction | A machine learning algorithm utilizing the Learning-based multi-source Integration framework for Segmentation (LINKS) | Cone-beam computed tomography (CBCT) images of 30 study-group patients with unilaterally impacted maxillary canines and 30 healthy control subjects | Palatal expansion could benefit patients with unilateral canine impaction; fast and efficient CBCT image segmentation enables effective analysis of large clinical datasets | + |
| Dental model analysis | | | | | | |
| Tamayo-Quintero et al./2024 Colombia (Tamayo-Quintero et al. 2024) | Observational, single center | Classification of orthodontic arch shapes | DentalArch software | 450 digital dental models, refined to a dataset of 50 models | Particularly useful for arch shape classification; promotes a data-driven approach and reduces subjectivity in arch shape determination | ? |
| Hack et al./2024 Romania (Hack et al. 2024) | Observational, single center | Detection of the position and shape of teeth in different orthodontic anomalies | Deep learning | 45 patients with dentomaxillary anomalies, Angle Class I (with crowding and deviation of the superior inter-incisive line) | Advancements in technology and machine learning are transforming orthodontics through artificial intelligence (AI) | + |
| Jae-Hun Yu et al./2023 Korea (J. H. Yu et al. 2023) | Observational, single center | Comparison of automatic digital (AD) and manual digital (MD) model analyses | Automatic model analysis software | 26 intraoral scanner records | The AD method enables reproducible analysis in a significantly shorter time; AD and MD methods yield notably different measurement outcomes and should not be used interchangeably | ? |
| Nauwelaers et al./2021 Belgium (Nauwelaers et al. 2021) | Observational, single center | Construction of statistical shape models (SSMs) for the human palate | Singular autoencoder (SAE) | Digitized 3D maxillary dental casts from 1,324 individuals (535 men, 789 women) | Promising tool for 3D palatal shape analysis; combines principal component analysis (PCA) with deep learning | + |
| Segmentation of anatomic structures in 3D and 2D images | | | | | | |
| Yurdakurban et al./2025 Turkey (Yurdakurban et al. 2025) | Observational, single center | Automated tooth segmentation | dentOne software (DIORCO Co., Ltd, Yongin, South Korea) and Medit Ortho Simulation software (Medit Corp, Seoul, South Korea) | Twelve sets of pretreatment maxillary and mandibular dental scans, comprising 286 teeth | Automated system appropriate for clinical settings; strong alignment with the standard manual technique; healthcare professionals can utilize open-source applications to create personalized automated segmentation models customized to specific clinical requirements | ? |
| Yacout et al./2024 Egypt (Yacout et al. 2024) | Observational, single center | Automated tooth segmentation | The MONAI (Medical Open Network for Artificial Intelligence) Label active-learning tool extension | 55 CBCT scans | AI segmentation software shows acceptable accuracy; improvement needed for incisor lingual surfaces in dentOne; overestimation of mesiodistal widths, especially in premolars; requires manual adjustments for accuracy | + |
| Wang et al./2024 Belgium | Observational, single center | Automatic tooth segmentation | A multi-step 3D U-Net pipeline | 761 intraoral scanner (IOS) images (380 upper jaws, 381 lower jaws) | The 3D U-Net pipeline is precise, efficient, and consistent for automatic tooth segmentation of IOS images; an online cloud-based platform is a viable alternative for IOS segmentation | + |
| Krenmayr et al./2024 Germany (Krenmayr et al. 2024) | Observational, single center | Improvement of 3D tooth segmentation | DilatedToothSegNet, a graph neural network | 1800 unique raw maxillary and mandibular dental surfaces captured directly with an intraoral scanner (IOS) from 900 different patients | Superior accuracy compared with existing methods; a promising tool for automated dental model analysis; improves computer-aided treatment planning through more precise and reliable tooth segmentation | + |
| Hu et al./2024 China (Y. Hu et al. 2024) | Observational, single center | Classification and 3D segmentation of mixed dentition | Automated deep learning model based on modified nnU-Net and U-Net networks | 120 mixed-dentition CBCT scans from three centers and 143 permanent-dentition CBCT scans from a public dataset (male-to-female ratio 233:223); 5050 deciduous teeth, 17,294 permanent teeth (including 143 third-molar germs), and 78 supernumerary teeth | High clinical applicability, robustness, and generalizability for mixed and permanent dentition | + |
| Su et al./2024 China (Su et al. 2024) | Observational, single center | PDL segmentation | Mask R-CNN network | 389 patients and 1734 axial CBCT images | AI-driven segmentation of PDLs on CBCT imaging enables chair-side PDL measurements; advantageous for periodontists, orthodontists, prosthodontists, and implantologists; enhances diagnostic efficiency and treatment-planning accuracy | + |
| Peng et al./2024 China (Peng et al. 2024) | Observational, single center | Maximum cross-sectional area (MCSA) of the masseter muscle | Deep learning-based model | CBCT scans of 197 patients with skeletal Class III malocclusion (67 male, 130 female) | MCSA indicates masseter muscle size; relevant for skeletal Class III malocclusion; CSA measured 5–10 mm above the mandibular foramen, parallel to the Frankfort plane | ? |
| Ni et al./2024 China (Ni et al. 2024) | Observational, multi center | Automated segmentation of the mandibular canal | 2D U-Net and 3D U-Net | A total of 625 CBCT scans from 4 centers | Possible use in clinical settings; AI technology enhances clinical workflows focused on mandibular canal localization | + |
| Vinayahalingam et al./2023 | Observational, single center | Automated tooth segmentation and labeling system | Two different CNNs: PointCNN and 3D U-Net | 1750 3D scans (875 maxilla, 875 mandible) from 875 patients | Time-effective tooth segmentation and labeling on intraoral scans, independent of observer variation | + |
| Vinayahalingam et al./2023 The Netherlands | Observational, single center | An automated segmentation tool for 3D reconstruction of the TMJ | Three different 3D U-Nets | 162 CBCT scans of 81 patients who had orthognathic surgery | High precision, rapidity, and reliability in segmenting mandibular condyles and glenoid fossae; potential risks include limited robustness and generalizability | + |
| Tao et al./2023 | Observational, single center | Automatic segmentation method for zygomatic bones | A deep learning-based model combining VGG-16 and 3D U-Net | 130 CBCT scans | High accuracy in segmentation of zygomatic bones; efficiency superior to dentists | + |
|
Nogueira-Reis et al./2023 Belgium (Nogueira-Reis et al. 2023) |
Observational Single center |
creation of a maxillary virtual patient (MVP) | Three previously validated individual CNN models |
40 CBCT scans of two devices (20 Accuitomo 3D; 20 Newtom VGi evo) consisting of 560 teeth, 80 sinuses, and 40 maxillofacial complexes |
- Integrated CNN models: fast, precise, and consistent - High interobserver agreement in developing the MVP |
+ |
|
Liu et al./2023 China (Z. Liu et al. 2023) |
Observational Single center |
unsupervised pre-training for 3D tooth segmentation |
- The Dynamic Graph CNN (DGCNN) [as backbone] - The STSNet |
A substantial dataset of 3D intraoral scans (IOS) comprising 12,000 unlabeled meshes and 1,000 labeled ones |
- The efficacy of the suggested unsupervised pre-training approach - Decreases the requirement for extensive labeled training data - Improves accuracy in 3D tooth segmentation |
+ |
|
Hu et al./2023 China (H. Q. Hu et al. 2023) |
Observational Single center |
a novel end-to-end tooth segmentation method | MPCNet | 100 lower dental models | - MPCNet outperforms existing techniques for tooth segmentation | + |
|
Homsi et al./2023 USA (Homsi et al. 2023) |
Observational (prospective clinical study) Single center |
comparison of 3D models from the dental monitoring (DM) application and the iTero Element 5D scanner | DM artificial intelligence tracking algorithm | 24 patients (aged 14–55 years) |
- DM artificial intelligence tracking algorithm tracks tooth movement - Reconstructs 3D digital models - Achieves clinically acceptable degree |
+ |
|
Deng et al./2023 USA (Deng et al. 2023) |
Observational (ambispective cross-sectional) Single center |
automatic segmentation and landmark detection | SkullEngine framework | Sixty-one sets of Cone- beam computed tomography (CBCT) images |
- The SkullEngine framework is highly efficient, integrating segmentation and landmark detection - The accuracy of automatic landmark digitization needs improvement |
? |
|
Chen et al./2023 |
Observational Single center |
proposed a CNN-Transformer Architecture UNet network | A U-shaped CTA-UNet with CNN-Transformer architecture | CBCT scans of 200 patients |
- CTA-UNet pre-trained by CTAMIM Outperforms traditional models - Focus on dental CBCT image segmentation tasks - Practical significance in orthodontics and teeth implants |
+ |
|
Alqahtani et al./2023 Belgium (Alqahtani et al. 2023) |
Observational Single center |
segmentation and classification of teeth |
- The CNN framework - multiple U-Net models |
215 CBCT scans (1780 teeth) |
- Proposed CNN model outperformed other state-of-the-art algorithms in accuracy and efficiency - Viable alternative for: - Automatic segmentation - Classification of teeth with brackets |
+ |
|
Al-Ubaydi et al./2023 Iraq (Al-Ubaydi & Al-Groosh 2023) |
A part of a prospective ongoing clinical trial | Segmentation of tooth model (STM) produced by the artificial intelligence (AI) program (CephX®) | artificial intelligence (AI) program (CephX) | intraoral scan (IOS) and CBCT scans of 10 patients with Class I malocclusion (mild-to-moderate crowding) |
- Automatic AI approach (CephX) - Recommended for clinical practice - Suitable for patients with mild crowding - No teeth restorations needed - Notable speed - Effective results |
+ |
|
Zhao et al./2023 |
Observational Single center |
Design of a two-stream graph convolutional network (i.e., TSGCN) | TSGCN | 80 intra-oral scanner images |
- TSGCN considerably outperformed conventional techniques - Segmentation of 3D tooth surfaces |
+ |
|
Wu et al./2022 USA (Wu et al. 2022) |
Observational Single center |
two-stage framework for combined tooth labeling and landmark identification |
TS-MDL: -An end-to-end iMeshSegNet method - PointNet-Reg |
136 patients’ raw upper intraoral scans | - Potential for orthodontic applications | + |
|
Wei et al./2022 China (Wei et al. 2022) |
Observational Single center |
tooth landmark/axis detection on tooth model |
Neural networks (ToothNet and TSegNet) designed to accurately segment teeth and tooth crowns |
100 CBCT scans and corresponding dental crown models |
- The method attained top-tier performance - Potential for real-world clinic application |
+ |
|
Observational Single center |
Segmentation of the masseter muscle (MM) | A 3D U-shape network | 50 independent CBCT and 42 paired CT and CBCT |
- Automatic model for MM structure segmentation - Compatible with CBCT and CT images - Accurate and efficient - Enhances treatment efficacy - Supports personalized care and follow-up |
+ |
|
Preda et al./2022 Belgium (Preda et al. 2022) |
Observational Single center |
automated maxillofacial bone segmentation | (3D) U-Net (CNN) model | 144 scans |
- Suggested CNN architecture: - Time-efficient - Precise - Consistent - CBCT-based - Emphasizes automated segmentation of the maxillofacial region |
+ |
|
Lin et al./2022 China (B. Lin et al. 2022) |
Observational Single center |
The identification of anterior disc displacement (ADD) of the temporomandibular joint (TMJ) using MRI scans | Deep learning models were created utilizing a convolutional neural network (CNN) that is based on the ResNet architecture and the “Imagenet” database | 9009 sagittal MRI of the TMJ |
- CNN model effectively detects ADD - Useful for clinicians before orthodontic treatment - Enhances treatment outcomes |
+ |
|
Lee et al./2022 Korea (S. C. Lee et al. 2022) |
Observational Single center |
integrated tooth models (ITMs) for 3D assessment of root position during orthodontic treatment | 3D tooth modeling using convolutional neural network (CephX) | Pre-treatment and post-treatment intraoral scans accompanied by related CBCT scans from 15 patients |
- The accuracy of deep learning and manual techniques is comparable - Integrates intraoral scans and CBCT images - Deep learning method preferred for efficiency - Recommended for clinical practice |
+ |
|
Im et al./2022 Korea (Im et al. 2022) |
Observational Single center |
automatic tooth segmentation in digital dental models | a dynamic graph convolutional neural network (DGCNN)-based algorithm | 546 digital dental models |
- Automatic tooth segmentation deep learning-based - High success rate and precision - Efficient for orthodontic diagnosis - Aids in appliance fabrication |
+ |
|
El Bsat et al./2022 Lebanon (El Bsat et al. 2022) |
Observational Single center |
tooth segmentation toward an autonomous system for assessing the dentition |
Four architectures: -The MobileUnet network -The AdapNet network -The DenseNet network -The SegNet network |
A dataset of 797 occlusal views of the maxillary arch |
- Four architectures tested - Automated tooth segmentation and detection in a two-dimensional photo application - No postprocessing needed - Best results with SegNet |
+ |
|
Ahn et al./2022 Korea (J. Ahn et al. 2022) |
Observational Single center |
Facial profile analysis algorithm exploring the feasibility of automated measurement of 13 geometric parameters | Mask-RCNN, a trained decentralized convolutional neural network |
200 CBCT scans of 200 orthodontic patients (69 males, 131 females); 170 cases were used to construct the dataset |
- High consistency with expert manual measurements - Applicable in real-world use - Saves time and effort for experts in analyzing 3D CBCT images |
+ |
|
Wang et al./2021 |
Observational Single center |
segmentation of the jaw, the teeth, and the background in CBCT scans | MS-D network |
Thirty dental CBCT scans (mean age ± SD, 14.2 ± 3.4 y; 19 females and 11 males) |
- Accurate multiclass segmentation of jaw and teeth - Comparable performance to binary segmentation - MS-D network improves patient-specific orthodontic treatment - Reduces segmentation time for multiple anatomic structures in CBCT scans |
+ |
|
Verhelst et al./2021 Belgium (Verhelst et al. 2021) |
Observational Single center |
Automatic generation of three-dimensional (3D) surface models of the human mandible | Two convolutional neural networks utilizing a 3D U-Net framework were integrated | 160 anonymized full skull CBCT scans of orthognathic surgery patients (70 preoperative scans and 90 postoperative scans) |
- Deep learning algorithm utilizing a layered 3D U-Net structure - Improved time-efficiency - Minimized operator error - Remarkable precision - Compared against established clinical standards |
+ |
|
Lo Giudice et al./2021 Italy (Leonardi et al. 2021) |
Observational Single center |
fully automatic segmentation of the mandible |
- A fully convolutional deep encoder-decoder network built on the Tiramisu model, enhanced with ‘squeeze-and-excitation’ blocks - Bidirectional convolutional LSTMs |
Forty CBCT scans from healthy patients (20 females/20 males, mean age 23.37 ± 3.34) |
- Deep learning CNN technology - Accuracy comparable to experienced image readers - Significantly faster - Clinically relevant |
+ |
|
Lim et al./2021 Korea (Lim et al. 2021) |
Observational Single center |
Image and position tracking of the inferior alveolar nerve (IAN) | A customized 3D nnU-Net | 138 CBCT datasets |
- Deep active learning framework - Fast - Precise - Robust clinical tool - Demarcates IAN location |
+ |
|
Cui et al./2021 China (Cui et al. 2021) |
Observational Single center |
tooth segmentation on 3D scanned point cloud data of dental models | TSegNet | 2000 dental models (1000 upper jaws and 1000 lower jaws) |
- Produces superior results - Significantly outperforms other methods |
+ |
|
Li et al./2020 China (Q. Li et al. 2020) |
Observational Single center |
automatic tooth root segmentation algorithm |
Attention U-net (AttU-Net) Recurrent neural network (RNN) |
24 patient cases containing 1160 images |
- Method: Attention U-net + RNN - Focus: Tooth root segmentation - Results: Promising outcomes - Benefits: - Enhanced efficiency - Improved accuracy - Potential for clinical practice |
+ |
|
Chung et al./2020 Korea (M. Chung et al. 2020) |
Observational Single center |
Pixel-wise labeling via an instance segmentation framework | pose regression convolutional neural network (CNN) | 175 CBCT images |
- Key implications of the method: 1) Pose-aware VOI realignment 2) Robust tooth detection 3) Metal-robust CNN for segmentation |
+ |
|
Zhang et al. 2019 China (Y. Zhang et al. 2019) |
Observational Single center |
noise reduction and masseter muscle segmentation | generative adversarial network (GAN)-based framework CycleGAN | 40 CBCT images and 30 CT images |
- Addressed shape distortion problem - Focused on unsupervised GAN-based transfer learning - Proposed method outperforms the current leading techniques - Achievements in noise reduction - Success in masseter muscle segmentation tasks |
+ |
|
Pei et al./2017 China (Pei et al. 2017) |
Observational Single center |
fully automatic segmentation of the anterior cranial base (ACB) | volumetric convolutional network with nested residual connections (NRN) | 120 clinically captured CBCT images |
- NRN showed faster convergence - Outperformed other CNN variants - Improved traditional segmentation methods |
+ |
| Cephalometric landmarks’ detection and cephalometric analysis | ||||||
|
Wang et al./2024 |
Observational Single center |
Identification of three-dimensional (3D) anatomical landmarks from cone-beam computed tomography (CBCT) images | PointRend algorithm | CBCTs of four hundred subjects aged 18 to 45 years |
-Deep learning enables automatic segmentation of the mandible - Effectively identifies anatomic landmarks - Addresses clinical needs in individuals without mandibular deformities |
+ |
|
Kazimierczak et al./2024 Poland (Kazimierczak et al. 2024b) |
Observational Single center |
comparison of cephalometric analysis results among three commercial AI-driven programs | commercial AI-driven programs: CephX, WebCeph, and AudaxCeph | One hundred twenty-four cephalograms |
-Discrepancies and high variability in certain analyses -Standardization across AI platforms is required -Clinicians must critically evaluate automated results |
? |
|
Sahlsten et al./2024 Finland (Sahlsten et al. 2024) |
Observational Multi center |
clinical applicability of a light-weight deep learning neural network for fast localization of 46 clinically significant cephalometric landmarks | deep learning model based on the stacked hourglass network | 309 CBCT scans from Finnish and Thai patients |
-Clinically sufficient accuracy in most of the predicted cases -Reliable landmarking and cephalometric analysis |
+ |
|
Bor et al./2024 Turkey (Bor et al. 2024) |
Observational Single center |
Comparison of three AI-based cephalometric analysis platforms with the traditional digital tracing method |
Three AI-based cephalometric analysis platforms: CephX, WeDoCeph, and WebCeph |
120 lateral cephalometric radiographs |
- AI platforms save time and minimize human errors - Human supervision by an orthodontist remains essential |
? |
|
Zecca et al./2024 Italy (Zecca et al. 2024) |
Observational Single center |
Improvement of the extraction of cephalometric values from limited field of view (FOV) images | The GridSearchCV algorithm and other algorithms | 1255 individuals as a training population |
-Effective in accurately predicting key cephalometric measurements - A reliable tool for clinical use |
+ |
|
Zaheer et al./2024 Pakistan (Zaheer et al. 2024) |
Observational Single center |
Comparison of AI-driven tools and manual method | WebCeph and CephX software programs | 54 lateral cephalometric radiographs |
- Manual analysis showed the highest mean detection errors - Fully automatic approach achieved the lowest errors - Fully automatic AI software has the most reproducible results |
+ |
|
Baig et al./2024 India (Baig et al. 2024) |
Observational Single center |
Comparison of three AI-based lateral cephalometric tracing software | AI-based software programs (WebCeph™, Cephio, and Ceppro DDH Inc.) | Sixty-three lateral cephalometric radiographs |
-All three tracing programs showed measurement inaccuracies -They were inconsistent in assessing important cephalometric parameters -Human validation remains essential |
? |
|
Khabadze et al./2024 Russia (Khabadze et al. 2024) |
Observational Single center |
Comparison of 3D cephalometric analysis performed using AI with that conducted manually by a specialist orthodontist | Diagnocat system | 30 CBCT scans |
- An effective diagnostic tool - Assesses the mandibular growth direction - Defines the skeletal class - Estimates the overbite, overjet, and Wits parameter |
+ |
|
Chuchra et al./2024 India (Chuchra et al. 2024) |
Observational Single center |
Comparison of digital tools with manual tracings in doing cephalometric analysis | OneCeph, WebCeph, and NemoCeph | 130 cephalometric radiographs |
-Promising reliability of OneCeph, WebCeph, and NemoCeph for cephalometric analysis -Human validation remains essential |
? |
|
O’Friel et al./2024 USA (O'Friel et al. 2024) |
Observational Single center |
Identification of cephalometric landmarks on lateral cephalograms of Class II and Class III skeletal relationships | AudaxCeph® | Sixty cephalograms depicting severe Class II or Class III skeletal discrepancies |
-Few discrepancies exceeding 2 mm compared to manual operators - Notable variations at specific landmarks -Practitioner verification remains essential |
? |
|
Gonca et al./2024 |
Observational Single center |
Identification of posteroanterior (PA) cephalometric landmarks | the Feature Aggregation and Refinement Network (FARNet) based artificial intelligence (AI) algorithm | 1431 cephalograms |
-The cascade CNN algorithm auto-identified 47 PA cephalometric landmarks -Clinically acceptable point-to-point error (mean 1.84 mm) -An effective alternative to manual identification |
+ |
|
Strunga et al./2024 Slovakia (Strunga et al. 2024) |
Observational Single center |
Comparison of the AI-automated cephalometric tracing with human-operated digital tracing | AI-auto-tracing feature software (Invivo v 7.1.2, Anatomage – 3D analysis module) | 120 CBCT scans | -Less accurate and slower than trained human tracing | ? |
|
Saifeldin et al./2024 Egypt (Saifeldin et al. 2024) |
Observational prospective study Single center |
Comparison of cephalometric analyses acquired through manual tracing and the Eyes of AI-driven web-based program | Eyes of AI (a cutting-edge web-based platform driven by fully automated AI), utilizing PyTorch as the primary deep learning framework | The lateral cephalometric radiographs of 150 cases | -Accurate when compared to manual measurements | + |
|
Muñoz et al./2024 Chile (Muñoz et al. 2024) |
Observational Single center |
Comparison of cephalometric tracing performed by AI and 2 human operators | AI in the WebCeph software | 30 lateral cephalograms of individuals with orthodontic treatment indications |
- Notable differences between AI results and those produced by humans - Human validation remains essential |
? |
|
Kang et al./2024 Korea (Kang et al. 2024) |
Observational Single center |
the error range of cephalometric measurements based on the landmarks detected |
Cascaded-CNN models - RetinaNet - U-net18 |
120 lateral cephalograms of patients (mean age: 32.5 ± 11.6) |
-Automated lateral cephalometric analysis systems remain prone to errors - Caution is advised |
? |
|
Han et al./2024 Korea (Han et al. 2024) |
Observational Multi center |
Quantification of the effects of midline-related landmark identification on midline deviation measurements |
Cascaded-CNN models - RetinaNet - U-net18 |
2,903 PA cephalogram images | - An effective tool for auto-identification of midline landmarks and quantification of midline deviation in PA cephalograms of adult patients | + |
|
Guinot-Barona et al./2024 Spain (Guinot-Barona et al. 2024) |
Observational Single center |
Comparison of the Ricketts and Steiner cephalometric analysis obtained by two experienced orthodontists and artificial intelligence (AI)-based software program | AI software program (CephX; ORCA Dental AI) | 50 lateral cephalometric radiographs from 50 patients | -Significant discrepancies in seven of the 24 cephalometric measurements between the orthodontists and the AI-based program assessed | ? |
|
El-Dawlatly et al./2024 Egypt (El-Dawlatly et al. 2024) |
Observational Multi center |
Accuracy and efficiency in performing lateral cephalometric radiographic measurements | WebCeph software (Assemble Circle Corp., Gyeonggi-do, Republic of Korea) | 200 lateral cephalometric radiographs of growing and adolescent female patients | - Not fully reliable at locating landmarks | ? |
|
Ahn et al./2024 Korea (H. J. Ahn et al. 2024) |
Observational Single center |
Identification of anatomical landmarks using artificial intelligence (AI) and manual identification | ON3D software | three-dimensional radiologic scans of 30 patients over 19 years old |
- Few significant differences between AI and manual tracing -A reliable method for planning orthognathic surgery |
+ |
|
Ahn et al./2024 Korea (H. Ahn & Yang 2024) |
Observational Single center |
Comparison of the measured values by artificial intelligence (AI) and the manual measurement | AI | Pre-orthodontic computed tomography scans of ten males and ten females |
-No significant difference between AI and manual tracing - Efficient to measure anatomical landmarks |
+ |
|
Zhao et al./2023 |
Observational Single center |
Detection of cephalometric landmarks | multi-scale YOLOV3 (MS-YOLOV3) |
400 2D X-ray digital lateral cephalograms and 163 2D X-ray digital AP cephalograms |
-Robust performance in detection of cephalometric landmarks on both lateral and AP cephalograms | + |
|
Blum et al./2023 Germany (Blum et al. 2023) |
Observational Single center |
Fully automated detection of craniofacial landmarks in cone-beam computed tomography (CBCT) | 3D U-Net algorithm |
931 CBCTs (620 female, 425 male) |
-Clinically acceptable accuracy in landmark detection -Comparable to manual landmark determination - Requires less time |
+ |
|
Zese et al./2023 Italy (Zese et al. 2023) |
Observational Single center |
Automatic detection of the landmarks |
Four networks: - The backbone of each network is an EfficientNetB7 pretrained on ImageNet |
1732 images of anonymized lateral teleradiographs | - Tend to learn the average position of each landmark instead of focusing on the image itself | + |
|
Ye et al./2023 China (Ye et al. 2023) |
Observational Single center |
Evaluation of the automatic digitization of cephalograms | AI-based machine learning programs: MyOrthoX, Angelalign, and Digident | Lateral cephalograms of 43 patients | -Increased efficiency without compromising accuracy | + |
|
Prince et al./2023 India (Prince et al. 2023) |
Observational Single center |
Comparison of cephalometric measurements obtained by an artificial-intelligence assisted software (WebCeph) with digital software (AutoCEPH) and manual tracing method | WebCeph | Fifty pre-treatment lateral cephalograms | - Good agreement with AutoCEPH and manual tracing for all the cephalometric measurements | +
|
Popova et al./2023 Germany (Popova et al. 2023) |
Observational Single center |
The effects of developmental stages of a dentition, fixed orthodontic appliances or other dental appliances on detection of cephalometric landmarks | Convolutional Neural Network (CNN) | 430 Cephalometric radiographs in the training dataset and 460 images as the test dataset | Growth structures and developmental stages of dentition affect model performance | + |
|
Panesar et al./2023 USA (Panesar et al. 2023) |
Observational Single center |
Assessment of cephalometric analyses performed by artificial intelligence (AI) with and without human augmentation |
RadioCef software with CEFBOT module: With four subsystems (a convolutional neural network (CNN)) |
30 cephalometric radiographs of patients with a class I skeletal relationship |
-The AI/human augmentation method significantly improved the precision and accuracy of less experienced dental professionals - Comparable to the level of an experienced orthodontist |
? |
|
Lee et al./2023 |
Observational Single center |
Comparison of AI cephalometric analysis with manual analysis by human examiners | CellmatIQ, CephX, AudaxCeph Automatic Tracing Mode and WebCeph |
Eighty-four pre-treatment lateral cephalograms (46 males, 38 females) |
-Difficult to fully replace manual landmarking by human experts | ? |
|
Lee et al./2023 |
Observational Single center |
Comparison of a fully automatic posteroanterior (PA) cephalometric landmark identification model with those of human examiners |
ResNet 18 ResNet 50 |
1032 PA cephalometric images |
-Comparable to human examiners -Promising accuracy and reliability in landmark detection - As an aid for clinicians to perform cephalometric analysis more efficiently - Saves time and effort |
+ |
|
Kunz et al./2023 Germany (Kunz et al. 2023) |
Observational Single center |
Evaluation of the accuracy of various cephalometric parameters as produced by artificial intelligence (AI)-assisted automated cephalometric analysis | Automated cephalometric analyses (DentaliQ.ortho, WebCeph, AudaxCeph, CephX) | 50 cephalometric X-rays |
-Fully automated cephalometric analyses offer timesaving advantages -Reduce individual human errors -Supervision of experienced clinicians remains essential |
? |
|
Kazimierczak et al./2023 Poland (Kazimierczak et al. 2023) |
Observational Single center |
Prevalence of nasal septum deviation (NSD) and its association with abnormalities detected through cephalometric analysis | AI web-based software, CephX | CT scans of 120 consecutive, post-traumatic patients aged 18–30 years |
-High repeatability in automatic cephalometric analyses -Reliability of the AI model for most cephalometric analyses |
+ |
|
Jiang et al./2023 China (Jiang et al. 2023) |
Observational Multi center |
Automated landmark localization utilizing artificial intelligence (AI) |
A cascade framework “CephNet”: RegionNet and the LocationNet |
9870 cephalograms | - High accuracy and applicability | + |
|
Indermun et al./2023 South Africa (Indermun et al. 2023) |
Observational Single center |
Comparison of two cephalometric landmark identification methods, a computer-assisted human examination software and an artificial intelligence program | BoneFinder: the AI software used | 409 cephalograms | -No significant difference between the two programs | + |
|
Duran et al./2023 Turkey (Duran et al. 2023) |
Observational Single center |
Comparison of fully automatic cephalometric analysis software with non-automated cephalometric analysis software | Automatic digital cephalometric analysis software OrthoDx™ (AI-Powered Orthodontic Imaging System, Phimentum) and WebCeph (Assemblecircle, Seoul, Korea) with artificial intelligence algorithm | 336 lateral cephalometric radiographs |
- Fully automatic cephalometric analysis software produced results similar to those of non-automated software - Differences in specific parameters - Manual intervention by the clinician is needed |
? |
|
Danisman et al./2023 Turkey (Danisman 2023) |
Observational Single center |
Comparison of cephalometric measurements of the web-based artificial intelligence cephalometric analysis platform with the computer assisted cephalometric analysis method | AI based platform WebCeph® | 60 patients’ pretreatment lateral cephalograms |
- Not sufficiently accurate in determining soft tissue landmarks - More suitable when combined with manual correction by observers |
? |
|
Chen et al./2023 |
Observational Single center |
landmark detection |
-a U-shaped CNN with Monte Carlo dropout -the heatmap generation network is a U-shaped CNN modified based on V-Net |
400 cephalometric X-ray images 108 cephalometric images |
- An assistant tool in clinical practice | ? |
|
Bao et al./2023 China (Bao et al. 2023) |
Observational Single center |
Comparison of automatic cephalometric landmark localization and measurements with computer assisted manual analysis | AI automatic analysis (Planmeca Romexis 6.2) | Reconstructed lateral cephalograms (RLCs) from cone-beam computed tomography (CBCT) in 85 patients |
- Not capable of fully replacing manual tracing -Manual supervision remains essential |
? |
|
Alessandri-Bonetti et al./2023 Italy (Alessandri-Bonetti et al. 2023) |
Observational Single center Pilot study |
Comparison of a fully automated AI-assisted cephalometric analysis (with the one obtained by a computerized digital software) with manual landmark identification | digital cephalometric software Openceph (Openceph, v4.1.0, developed by Dr Bruno Oliva) | 13 lateral cephalograms |
- Reliable and accurate - It cannot replace the expertise of the orthodontist |
? |
|
Tao et al./2023 |
Observational Single center |
Automatic detection of landmarks in CT images | A two-stage deep learning model | 80 sets of CT data of patients with dento-maxillofacial deformities | -Excellent performance in craniomaxillofacial landmarks detection | + |
|
Dot et al./2022 France (Dot et al. 2022) |
Observational Single center |
Automatic localization of 3D cephalometric landmarks on computed tomography (CT) scans | deep learning (DL) pipeline based on Spatial Configuration-Net |
Two hundred presurgical CT scans (mean age 27 ± 11 years; 58.6% females, n = 116) |
- DL method requires further refinement -Highly accurate 3D landmark localization -Reliable for skeletal evaluation, comparable to that of clinicians |
? |
|
Yao et al./2022 China (Yao et al. 2022) |
Observational Single center |
automatic landmark location | The ResNet 18 as the network backbone |
512 lateral cephalograms (247 males and 265 females) |
-Detects 37 landmarks with high accuracy | + |
|
Chung et al./2022 Korea (E. J. Chung et al. 2022) |
Observational Single center |
Applicability of lateral cephalograms generated by CBCT | AI-based landmark measurement program WebCeph |
CBCT and lateral cephalograms of 30 patients (15 male/15 female, mean age of 16.57 years) Group I: conventional lateral cephalograms Group II: cephalometric radiographs generated from CBCTs using OnDemand 3D Group III: cephalometric radiographs generated from CBCTs using Invivo5 |
-Obvious inaccuracies in landmark detection with the automatic WebCeph -Points were frequently identified at the wrong location |
? |
|
Duman et al./2022 Turkey (Duman Ş et al. 2022) |
Observational Single center |
Diagnostic performance of an artificial intelligence system for the morphological classification of Sella turcica | PyTorch supported by U-Net and TensorFlow 1, and the GoogleNet Inception V3 algorithm | 188 CBCT images | -Saves time and facilitates diagnosis | +
|
Uğurlu et al./2022 Turkey (Ugurlu 2022) |
Observational Single center |
Detection of cephalometric landmarks automatically and enabling the automatic analysis of cephalometric radiographs | Feature aggregation and refinement network (FARNet) | 1620 lateral cephalograms |
-Adequate success in automatic landmark detection -Further refinement is necessary |
? |
|
Tsolakis et al./2022 Greece (Tsolakis et al. 2022) |
Observational Single center |
Comparison of an automated cephalometric analysis of automatically identifying cephalometric landmarks with a manual tracing method | Artificial Intelligence CS imaging V8 software |
100 cephalometric X-rays 43 males, 57 females, mean age: 15.9 ± 4.8 years |
- Reliable and accurate for all cephalometric measurements | + |
|
Silva et al./2022 Brazil (Silva et al. 2022) |
Observational Single center |
The reliability of CEFBOT, an artificial intelligence (AI)-based cephalometry software, for cephalometric landmark annotation and measurements according to Arnett’s analysis | CEFBOT AI software | Thirty lateral cephalometric radiographs | -It still relies on human input and guidance for training and refinement | ? |
|
Ristau et al./2022 USA (Ristau et al. 2022) |
Observational Single center |
Comparison of the cephalometric landmark identification by an automated tracing software to human tracers | AudaxCeph®'s artificial intelligence software | Sixty cephalograms | -A good adjunctive tool | + |
|
Ramadan et al./2022 Saudi Arabia (Ramadan et al. 2022) |
Observational Single center |
Detection of cephalometric landmarks | A deep learning architecture ResNet50 | 400 images (mean age: 27.0 years; 235 females, 165 males) | -Very promising and helpful | + |
|
Mahto et al./2022 Nepal (Mahto et al. 2022) |
Observational Single center |
Comparison of the cephalometric measurements obtained from web-based fully automated Artificial Intelligence (AI) driven platform “WebCeph” with that from manual tracing | WebCeph |
Thirty pre-treatment lateral cephalograms (8 males, 22 females, mean age: 20.17 ± 6.72 years) |
-A good agreement between “WebCeph” and manual tracing -Fairly accurate compared to manual method |
+ |
|
Le et al./2022 Korea (Le et al. 2022) |
Observational Single center |
Identification of cephalometric landmarks |
deep anatomical context feature learning (DACFL) model - based on the attention U-Net |
1293 cephalograms | -Effective in cephalometric landmarks detection | ? |
|
Kılınç et al./2022 Turkey (Kılınç et al. 2022) |
Observational Single center |
Comparison of three different cephalometric assessment methods: Smartphone Application Tracing Method CephNinja (SATM), Web Based Artificial Intelligence (AI) Driven Tracing Method WebCeph (WATM) and Conventional Hand Tracing Method (CHTM) | WebCeph |
110 lateral cephalometric radiographs (44 [40%] males and 66 [60%] females, mean age of 15.83 ± 2.85 years) |
-Statistically and clinically significant differences in various measurements among groups | ? |
|
Katyal et al./2022 India (Katyal & Balakrishnan 2022) |
Observational Single center |
Evaluation of tracing cephalometric radiographs with WebCeph, in comparison to digital tracing with FACAD and manual tracing | WebCeph | Pre-treatment cephalometric radiographs of 25 patients (14 males and 11 females, mean age of 18 ± 3.2 years) | - A reliable, faster and practical tool for cephalometric analysis in comparison to digital tracing using FACAD and manual tracing | + |
|
Jiang et al./2022 |
Observational Multi center |
Automated linear measurements | a cascade neural network ‘ScaleNet’: composed of ScaleNet_a and ScaleNet_b |
2860 cephalograms (average age = 18.3 years, 1340 females, 1520 males) |
-High accuracy -High potential for clinical application in cephalometric analysis |
+ |
|
Hong et al./2022 Korea (Hong et al. 2022) |
Observational Multi center |
landmark identification (LI) in serial lateral cephalograms (Lat-cephs) of Class III patients who underwent two jaw orthognathic surgery |
RetinaNet U-Net |
3188 Lat-cephs of C-III patients | -Useful in serial Lat-cephs | + |
|
Gil et al./2022 Korea (Gil et al. 2022) |
Observational Multi center |
auto-identification of the posteroanterior (PA) cephalometric landmarks |
Cascade CNN: RetinaNet U-Net |
2798 PA cephalograms | -An effective alternative to manual identification | + |
|
Davidovitch et al./2022 Israel (Davidovitch et al. 2022) |
Observational Single center |
Comparison of artificial intelligence derived cephalometric landmark identification with that of human observers |
a commercially available convolutional neural network artificial intelligence platform (CephX, Orca Dental, Hertzylia, Israel) | Ten digital lateral cephalometric radiographs |
- Highly accurate - An aid in orthodontic diagnosis |
+ |
|
Çoban et al./2022 Turkey (Çoban et al. 2022) |
Observational Single center |
Comparison of the measurements performed with digital manual (DM) cephalometric analysis and automatic cephalometric analysis obtained from an online artificial intelligence (AI) platform | WebCeph platform |
105 Cephalometric radiographs (mean age: 17.25 ± 1.87 years) 49 females, 56 males |
- Significant differences in some measurements between the two cephalometric analysis methods - Not all differences were clinically significant -Requires further refinement |
? |
|
King et al./2022 Taiwan (King et al. 2022) |
Observational Single center |
cephalometric landmark detection | YOLOv3 as the modified backbone model | 400 lateral cephalograms of 400 patients aged 6 to 60 years | - Accurate in detecting small landmarks defined by only one single pixel location | + |
|
Zeng et al./2021 China (Zeng et al. 2021) |
Observational Single center |
Prediction of cephalometric landmarks automatically | Align-Net, a Proposal-Net and 19 Refine-Nets | 400 cephalometric X-ray images | - Competitive performance compared with the other methods | + |
|
Tanikawa et al./2021 Japan (Tanikawa et al. 2021a) |
Observational Single center |
Determination of whether AI systems that recognize cephalometric landmarks can be applied to various patient groups | two deep convolutional neural networks for landmark patch classification (CNN-PC) and landmark point estimation (CNN-PE) |
1785 digital lateral cephalograms (828 male and 957 female patients; mean age, 12.2 ± 5.4 years) |
-They could be applied to various patient groups -Patient-oriented errors were found in cleft patients |
+ |
|
Kim et al./2021 Korea (M. J. Kim et al. 2021b) |
Observational Single center |
Identification system for posteroanterior (PA) cephalometric landmarks | The multi-stage CNN model | 430 PA-cephalograms synthesized from cone-beam computed tomography scans (CBCT-PA) |
-Automated identification for CBCT-synthesized PA cephalometric landmarks did not meet the clinically favorable error range - AI landmark identification on PA cephalograms showed better consistency than manual methods |
? |
|
Kim et al./2021 Korea (M. J. Kim et al. 2021a) |
Observational Single center |
A fully automated identification of cephalometric landmarks | The multi-stage CNNs | 430 lateral and 430 MIP lateral cephalograms synthesized by cone-beam computed tomography (CBCT) | Variation in image data types could influence the performance of landmark identification systems | ? |
|
Kim et al./2021 |
Observational Multi center |
Automated identification of cephalometric landmarks |
a fully automated landmark prediction algorithm, a cascade network based on U-Net |
3150 lateral cephalograms | -It can assist in preliminary patient screening and mid-treatment assessment | ? |
|
Jeon et al./2021 Korea (S. Jeon & Lee 2021) |
Observational Single center |
Comparison of an automatic cephalometric analysis using convolutional neural network with a conventional cephalometric approach | CephX | 35 lateral cephalograms |
-Clinically acceptable diagnostic performance - Additional manual adjustments are needed |
? |
|
Kwon et al./2021 South Korea (Kwon et al. 2021) |
Observational Single center |
localization of cephalometric landmarks | CNN | 400 lateral cephalograms |
-Successful detection rate -Anatomic facial type classification accuracy |
+ |
|
Bulatova et al./2021 USA (Bulatova et al. 2021) |
Observational Single center |
Comparison of cephalometric landmark identification between artificial intelligence (AI) deep learning convolutional neural networks (CNN) and the manually traced (MT) group | AI program (Ceppro software, DDH Inc) | 110 cephalometric images | - Increased efficiency without compromising accuracy | + |
|
Hwang et al./2021 Korea (Hwang et al. 2021) |
Observational Single center |
Comparison of an automated cephalometric analysis based on the latest deep learning method with previously published AI | a modification of a contemporary deep learning method, YOLO version 3 algorithm | 1983 cephalograms |
-Superior performance compared to previous AI methods -Comparable to human examiners |
+ |
|
Muraev et al./2020 Russia (Muraev et al. 2020) |
Observational Single center |
The performance of placing cephalometric points (CPs) on frontal cephs | Two artificial neural networks (ANN) | 30 depersonalized frontal cephs |
-Accurate - Comparable to humans in placing CPs -Surpasses the accuracy of inexperienced doctors |
+ |
|
Song et al./2020 Japan (Song et al. 2020) |
Observational Single center |
Automatic identification of cephalometric landmarks | pre-trained networks with a backbone of ResNet50, which is a state-of-the-art convolutional neural network | 350 lateral cephalograms |
-Satisfying results on both SDR (Successful Detection Rate) and SCR (Successful Classification Rate) -Computational time issue needs refinement |
+ |
Moon et al./2020 (J.-H. Moon et al. 2020) |
Observational Single center |
Automatic identification of cephalometric landmarks | a deep-learning algorithm (a modified You-Only-Look-Once version 3 algorithm) | 2400 cephalograms | -A considerably large learning dataset was required | ?
|
Wirtz et al./2020 Germany (Wirtz et al. 2020) |
Observational Single center |
Localization of landmarks | 2-D coupled shape model with a Hough Forest | 400 cephalometric images |
- Competitive performance with a successful detection rate of 70.24% on 250 images -Clinically relevant 2 mm accuracy range |
? |
|
Meriç et al./2020 Turkey (Meriç & Naoumova 2020) |
Observational Single center |
Comparison of the cephalometric analyses made with fully automated tracings, computerized tracing, and app-aided tracings with equivalent hand-traced measurements | An online automatic cephalometric tracing and analysis service named CephX (ORCA Dental AI, Las Vegas, NV) |
Pre-treatment lateral cephalometric radiographs of 40 patients (7 males, 33 females, mean age: 16.0 ± 4.6 years) |
-Fully automatic analysis with CephX needs refinement -CephX analysis with manual correction is promising in clinical practice - Shorter analyzing time |
? |
|
Lee et al./2020 Korea (J. H. Lee et al. 2020) |
Observational Single center |
Localization of cephalometric landmarks | BCNN | 400 lateral cephalograms |
-Identification of cephalometric landmarks and their confidence regions -It could be used as a computer-aided diagnosis tool |
? |
|
Kunz et al./2020 Germany (Kunz et al. 2020) |
Observational Single center |
an automated cephalometric analysis | a customized open-source CNN deep learning algorithm | 1792 cephalometric X-rays | -Analyzes unknown cephalometric X-rays at almost the same quality level as experienced human examiners | + |
|
Kim et al./2020 |
Observational Multi center |
a fully automated cephalometric analysis method | SHG model | 2075 lateral cephalograms | - Save time and effort | + |
|
Hwang et al./2020 Korea (Hwang et al. 2020) |
Observational Single center |
Comparison of cephalometric landmarks identification by an automated identification system (AI) with those identified by human examiners | The YOLOv3 algorithm | 1028 cephalograms |
-Accurate in landmarks identification -Comparable to human examiners |
+ |
|
Park et al./2019 |
Observational Single center |
Comparison of two of the latest deep learning algorithms for automatic identification of cephalometric landmarks | You-Only-Look-Once version 3 (YOLOv3) and Single Shot Multibox Detector (SSD) methods | 1028 cephalometric radiographic images | -YOLOv3 seemed to be more promising | + |
|
Nishimoto et al./2019 Japan (Nishimoto et al. 2019) |
Observational Single center |
automated landmark predicting system | a convolutional neural network | 219 lateral cephalograms | -Not statistically different from those calculated from manually plotted points | + |
|
Dai et al./2019 China (Dai et al. 2019) |
Observational Single center |
an automated cephalometric landmark localization method | adversarial encoder-decoder network | 300 lateral cephalograms | -Good performance in localization of most cephalometric landmarks | ? |
|
Banumathi et al./2011 India (Banumathi et al. 2011) |
Observational Single center |
an automated landmark identification | Support Vector machine | 100 cephalometric radiographic images |
-Accurate location estimation of the landmark - As good as the performance of the expert dentists |
+ |
|
Mario et al./2010 Brazil (Mario et al. 2010) |
Observational Single center |
Analysis of cephalometric variables | PANN |
120 orthodontic patients (44.17% males and 55.83% females) |
-Equivalent agreement between the model’s and specialist’s performance - It identified contradictions in the data that were overlooked by the orthodontists |
+ |
|
Abe et al./2005 Brazil (Abe et al. 2005) |
Observational Single center |
Analyzing cephalometric measurements | PANN | 40 patients | Refinement is required | ? |
| Cervical vertebral maturation (CVM) stage | ||||||
|
Shoari et al./2024 Iran (Shoari et al. 2024) |
Observational Single center |
mandibular growth stage based on cervical vertebral maturation (CVM) | A lightweight convolutional neural network (CNN) | Lateral cephalograms of 200 people, 108 women and 92 men |
-Diagnostic aid to estimate mandible growth status - Highest accuracy in estimation of pre-pubertal and pubertal stages |
+ |
|
Nogueira-Reis et al./2024 Brazil (Nogueira-Reis et al. 2023) |
Observational Single center |
Detection of the pubertal growth spurt by analyzing cervical vertebrae maturation (CVM) | Inception-v3 model | 600 LCRs of patients from 6 to 17 years old | - Potential for determining the maturation status | + |
|
Kamei et al./2024 India (Kamei et al. 2024) |
Observational Multi center |
Comparison of skeletal maturation in cleft and non-cleft groups | The MobileNet architecture using TensorFlow framework employed to develop 2 convolutional neural network (CNN)-based AI models | 960 cephalograms of patients with and without unilateral cleft lip and palate (UCLP) aged 6 to 18 years |
-Objectively detect skeletal maturity in UCLP patients -Evidence of delayed skeletal growth compared to non- UCLP patients |
+ |
|
Radwan et al./2023 Turkey (Radwan et al. 2023) |
Observational Single center |
Determination of the stage of cervical vertebra maturation (CVM) | the Python implementation of UNet for segmentation and Alex-Net | 1501 Lateral cephalograms |
-Reliable in determining the pre-pubertal and post-pubertal growth stages -Highly accurate |
+ |
|
Moztarzadeh et al./2023 Czech Republic (Moztarzadeh et al. 2023) |
Observational Single center |
Automatic detection and classification of cervical vertebrae | a new convolutional neural network architecture (MobileNetV2) | 319 lateral cephalometric radiographs | -High performance on a collected small dataset | + |
|
Akay et al./2023 Turkey (Akay et al. 2023) |
Observational Single center |
Automatic determination of the cervical vertebral maturation (CVM) processes | deep learning-based convolutional neural network (CNN) model | 588 digital lateral cephalometric radiographs of patients with a chronological age between 8 and 22 years |
-Moderate success -Classification accuracy of 58.66% |
? |
|
Li et al./2023 China (H. Li et al. 2023) |
Observational Single center |
A fully automated, CVM assessment system called the psc-CVM assessment system |
-Position Network -Shape Recognition Network -CVM Assessment Network |
10,200 lateral cephalograms |
-High accuracy -Significantly consistent with expert panels -An efficient and accurate diagnostic aid |
+ |
|
Khazaei et al./2023 Iran (Khazaei et al. 2023) |
Observational Single center |
Automatic classification of pubertal growth spurts using cervical vertebral maturation staging |
The CNN models: ImageNet, ResNet-101, ResNet-50, VGG16, VGG-19, DenseNet-121, DenseNet-169, ConvNeXtBase-296, and EfficientNetB3-386 |
Cephalometric radiographs of 1846 patients (aged 5–18 years) |
- Potential as a diagnostic tool -Highly accurate even with a small number of images |
+ |
|
Atici et al./2023 USA (Atici et al. 2023) |
Observational Single center |
An automated classification of the cervical vertebrae maturation (CVM) stages | AggregateNet with a set of tunable directional edge enhancers | 1018 cephalometric radiographs | - Higher accuracy than the other models | + |
|
Şatir et al./2023 Turkey (Şatir et al. 2023) |
Observational Single center |
Automatic determination of the growth period of the hand and wrist | ImageJ program | Hand-wrist radiographs of 270 individuals aged 8–17 years (135 female and 135 male cases) | -Peak growth period can be distinguished using the pure density values obtained from all phalanges of the third finger | +
|
Ameli et al./2023 Canada (Ameli et al. 2023) |
Observational Single center |
Automatic prediction of the CVM stages | Convolutional neural network (CNN) | 30,016 slices obtained from 56 patients with the age range of 7–16 years |
-Automatically detects the C2-C4 vertebrae - Accurately classifies slices into 3 growth phases without the need for annotating the shape and configuration of vertebrae |
+ |
|
Seo et al./2023 South Korea (Hyejun Seo et al. 2023) |
Observational Single center |
Accurate bone-age estimation by focusing on the cervical vertebrae |
1. Med-Bone Age version 1.0.3 software program 2. DeepLabv3+, the semantic segmentation network for the delineated cervical vertebral region, and Inception-ResNet-v2, a classification network modified to a regression model for age estimation |
hand–wrist radiographs and lateral cephalograms of 900 participants, aged 4–18 years | -Estimates bone age with sufficiently high accuracy | + |
|
Mohammad Rahimi et al./2022 Iran (Mohammad-Rahimi et al. 2022) |
Observational Single center |
Determination of cervical vertebral maturation (CVM) stage | Two transfer learning models based on ResNet-101 | 890 cephalograms |
- Reasonable accuracy - High reliability in detecting the pubertal stage - Accuracy was still less than that of observers |
? |
|
Liao et al./2022 China (Liao et al. 2022) |
Observational Single center |
Automatic CVM assessment | ResNet-50 | 900 lateral cephalograms | - Superior performance on CVM-900 | + |
|
Li et al./2022 |
Observational Single center |
A fully automated artificial intelligence-aided cervical vertebral maturation (CVM) classification method | Four CNN models namely, VGG16, GoogLeNet, DenseNet161, and ResNet152 | 6079 cephalometric images | - Convenient, fast and reliable method | + |
|
Atici et al./2022 USA (Atici et al. 2022) |
Observational Single center |
A fully automated detection and classification of the Cervical Vertebrae Maturation (CVM) stages | an innovative custom-designed deep Convolutional Neural Network (CNN) with a built-in set of novel directional filters | 1018 Cephalometric radiographs |
-Best performance compared with the pre-trained MobileNetV2, ResNet101, and Xception models, with or without the directional filters - An effective tool for determining the skeletal maturity stage and treatment timing |
+ |
|
Agarwal et al./2022 India (Agarwal & Agarwal 2022) |
Observational Single center |
Determination of the degree of maturation (i.e. cervical vertebral maturation [CVM]) | Convolutional neural network (CNN) | 383 individuals aged between 10 and 36 years | High accuracy to classify the majority classes | + |
|
Zhou et al./2021 China (Zhou et al. 2021) |
Observational Single center |
Automatic determination of the CVM status |
a convolutional neural network (CNN) with DetNet architecture |
1080 cephalometric radiographs |
-Good agreement with human examiners -Useful and reliable tool |
+ |
|
Seo et al./2021 Korea (H. Seo et al. 2021) |
Observational Single center |
Performance of six state-of-the-art convolutional neural network (CNN)-based deep learning models for cervical vertebral maturation (CVM) | ResNet-18, MobileNet-v2, ResNet-50, ResNet-101, Inception-v3, and Inception-ResNet-v2 | 600 lateral cephalometric radiographs |
-All models had more than 90% accuracy -Best performance of Inception-ResNet-v2 |
+ |
|
Kim et al./2021 |
Observational Multi center |
Prediction of the hand-wrist maturation stages |
an ensemble of machine learning models: Bayesian Ridge, Ridge, Linear Regression, Huber Regressor, SGD Regressor, Random Forest Regressor, TheilSen Regressor, AdaBoost Regressor, and Linear SVR |
499 pairs of hand-wrist radiographs and lateral cephalograms of 455 orthodontic patients |
-Hand-wrist skeletal maturation index (SMI) prediction -Chronological age and sex increased the prediction accuracy |
+ |
|
Kök et al./2021 Turkey (Kök et al. 2021) |
Observational Single center |
Determination of the growth-development periods and gender from the cervical vertebrae | Artificial Neural Network (ANN) | 419 patients aged between 8 and 17 years |
-Determination of growth-development periods and gender by using ANN -Satisfactory success |
+ |
|
Kok et al./2021 Turkey (Kok et al. 2021) |
Observational Single center |
Determination of the growth and development | artificial neural network models (NNMs) and naive Bayes models (NBMs) |
360 individuals between the ages of 8 and 17 years (180 females and 180 males) |
-The NNMs were more successful than the NBMs | + |
|
Amasya et al./2020 |
Observational Single center |
Cervical vertebral maturation (CVM) analysis | artificial neural network (ANN) model |
647 lateral cephalograms (mean age: 15.36 ± 4.13 years) |
Performance of the ANN model close to, if not better than, that of human observers in CVM analysis | ?
|
Amasya et al./2020 |
Observational Single center |
Comparison of the performance of five different supervised machine learning (ML) classifier models for cervical vertebral maturation (CVM) analysis | logistic regression (LR), support vector machine, random forest, artificial neural network (ANN) and decision tree (DT) models | 647 digital lateral cephalometric radiographs | -Best performance of the ANN model | + |
|
Kök et al./2019 Turkey (Kök et al. 2019) |
Observational Single center |
Determination of cervical vertebrae stages (CVS) for growth and development periods | k-nearest neighbors (k-NN), Naive Bayes (NB), decision tree (Tree), artificial neural networks (ANN), support vector machine (SVM), random forest (RF), and logistic regression (Log.Regr.) algorithms |
300 individuals aged between 8 and 17 years (150 females and 150 males) |
-Lowest accuracy of kNN and Log.Regr. algorithms -Varying accuracy of SVM-RF-Tree and NB algorithms -ANN could be the preferred method for determining CVS |
+ |
|
Makaremi et al./2019 France (Makaremi et al. 2019) |
Observational Single center |
Determination of the degree of maturation of CVM | Resnet, Alexnet, VGG, Squeezenet, Densenet, Inception v3 | 2000 X-ray radiographic images |
- Validated by cross-validation -Almost ready for use by orthodontists |
? |
| Maturation stage of the mid-palatal suture | ||||||
|
Zhu et al./2024 China (Zhu et al. 2024) |
Observational Single center |
Maturation stage of the mid-palatal suture | CNN models: ResNet50, ResNet18, ResNet101, Inception-v3, and EfficientNetV2-S |
785 CBCT scans with 2371 images; all patients aged 4–20 years |
-Enhanced diagnostic and treatment efficiency -Enhanced accuracy |
+ |
|
Tang et al./2024 China (Tang et al. 2024) |
Observational Single center |
Automatic classification of midpalatal sutures with different maturation stages | Self-Attention of ViT (vision transformer) | 2518 CBCT images of the palate plane: 1259 images as the training set, 506 as the verification set, and 753 as the test set |
-Effectively completes classification of midpalatal suture maturation stages on CBCT images - Performance better than that of a clinician |
+ |
| Growth prediction | ||||||
|
Moon et al./2024 Korea (J. H. Moon et al. 2024) |
Observational Single center |
Facial growth prediction models | multivariate partial least squares regression (PLS) and a deep learning method | lateral cephalograms of 410 patients (236 female and 174 male adult cases) |
-Valuable tools for growth prediction -More advantageous when uncertainty is considerable |
+ |
|
Larkin et al./2024 Japan (Larkin et al. 2024) |
Observational Single center |
Artificial intelligence-assisted growth prediction | CNN (GP-GCNN) | lateral cephalograms of 198 preadolescent children (62 female and 136 male cases) | Further refinement is required to improve the accuracy in the chin area | ? |
|
Gonca et al./2024 |
Observational Single center |
Classification of maturation stage | multilayer perceptron (MLP), support vector machine (SVM), gradient boosting machine (GBM), and C5.0 decision tree | Hand–wrist radiographs of 1067 individuals aged between 7 and 18 years (633 female and 434 male cases) | -Not sufficient to predict maturation stage in growing patients | ?
|
Zakhar et al./2023 USA (Zakhar et al. 2023) |
Observational Multi center |
Forecasting the magnitude and direction of mandibular growth | XGBoost, Random Forest, Lasso, Ridge, Linear Regression, and Support Vector Regression (SVR), along with a Multilayer Perceptron (MLP) regressor | Lateral cephalometric radiographs of 123 males at three time points (T1: 12; T2: 14; T3: 16 years old) |
-Successful prediction of the post-pubertal mandibular length -More substantial sample sizes are required |
? |
|
Wood et al./2023 USA (Wood et al. 2023) |
Observational Multi center |
Prediction of the post-pubertal mandibular length and Y axis of growth in males | least squares (linear), ridge, lasso, elastic net, XGBoost, random forest, and a neural network |
cephalometric radiographs of 63 males with Class I Angle malocclusion: T1, the pre-pubertal stage (mean age ± SD: 11.85 ± 0.46 yrs); T2, the pubertal stage (13.82 ± 0.49 yrs); T3, the post-pubertal stage (15.80 ± 0.57 yrs) |
-No significant difference among the techniques, except that the least squares method had a significantly larger error than all others in predicting the Y axis of growth |
+ |
|
Kim et al./2023 |
Observational Single center |
Performance of automated skeletal maturation assessment system for Fishman’s skeletal maturity indicators (SMI) | Deep convolutional neural network (CNN) | 2593 hand-wrist radiographs | - Clinically reliable prediction of SMI with a very low prediction error | + |
|
Kim et al./2023 |
Observational Single center |
Prediction of the longitudinal craniofacial growth in a Japanese population | Multiple regression analysis, least absolute shrinkage and selection operator (LASSO), radial basis function network, multilayer perceptron, and gradient-boosted decision tree |
Longitudinal lateral cephalometric radiographs of 59 children (27 males and 32 females) with complete longitudinal records for ages 6, 12, and 13 |
-High prediction accuracy of LASSO for all linear and angular skeletal parameters | + |
|
Parrish et al./2023 USA (Parrish et al. 2023) |
Observational Single center |
Prediction of the post-pubertal mandibular length and Y-axis in females | six traditional regression algorithms and a small neural network (NN) model: XGBoost regression, Random Forest regressor, Lasso, Ridge, Linear Regression, support vector regression (SVR), and multilayer perceptron (MLP) regressor | Cephalometric data of 176 females with Angle Class I occlusion |
- Capable of predicting mandibular length within 3 mm and Y-axis within 1 degree - All of the ML algorithms were similarly accurate, with the exception of multilayer perceptron regressor |
+ |
|
Zhang et al./2023 China (D. Zhang et al. 2023) |
Observational Single center |
Representation of the ageing-related dynamic attention (ARDA) | EfficientNet-B0 | 14,142 lateral cephalogram radiographs (LCR) images from 4 to 40 years old | - Accurate reflection of the development and degeneration patterns | + |
|
Perillo et al./2021 Italy (Perillo et al. 2021) |
Observational Single center |
Simplification of the information from different types of pathogenic processes leading to the worsening of skeletal Class III malocclusion | LASSO networks (Ln), and Boruta selection (Ba) |
Cephalometric analyses of 144 Class III untreated subjects followed longitudinally during the growth process (4–19 years) were performed (79 female and 65 male cases) |
- Removing redundant cephalometric features from the dataset improved data clarity | + |
| Photographic analysis and video assessment | ||||||
|
Zhang et al./2025 China (H. Zhang et al. 2024) |
Observational Single center |
Self-health management of skeletal malocclusion with lateral photos | ResNet50 convolutional neural network |
A total of 2109 newly diagnosed patients (536 males, 1573 females); after excluding 69 poor-quality photos, 2040 images remained in the lateral photo database |
- A promising tool for screening, self-monitoring and early detection of skeletal malocclusion | + |
|
Kılıç et al./2024 Turkey (Kılıç et al. 2024) |
Observational Single center |
Preliminary diagnosis of skeletal malocclusion using just one photograph | Machine learning |
524 pre-pubertal children, aged between 5 and 12 years (273 females and 251 males) |
-Information about the orthodontic problem, age of treatment, and various treatment options | + |
|
Chang et al./2024 China (Q. Chang et al. 2024) |
Observational Single center |
Automatic soft-tissue analysis model that performs landmark detection and measurement calculations | an automatic soft-tissue analysis model based on deep learning | 578 frontal photographs and 450 lateral photographs |
- No statistically significant difference between the model prediction and manual annotation measurements except for the mid facial-lower facial height index - High consistency in a total of 14 measurements |
? |
|
Kocakaya et al./2024 Turkey (Kocakaya et al. 2024) |
Observational Single center |
Prediction of the cephalometric classifications from profile photographs |
WebCeph; MobileNet V2, Inception V3, DenseNet 121, DenseNet 169, DenseNet 201, EfficientNet B0, Xception, VGG16, VGG19, NasNetMobile, ResNet101, ResNet 152, ResNet 50, EfficientNet V2 |
Cephalometric radiographs and profile photographs of 990 patients | -The most successful deep learning models were DenseNet 201 and EfficientNet V2 | +
|
Ito et al./2024 Japan (Ito et al. 2024) |
Observational Single center |
Prediction of a cephalometric skeletal parameter directly from lateral profile photographs |
Seven regression convolutional neural network (CNN) models: VGG16, VGG19, InceptionV3, Inception-ResNetV2, DenseNet-121, EfficientNetB7, and EfficientNetV2 |
lateral profile photographs of 1600 subjects |
-Predicts a cephalometric skeletal parameter directly from lateral profile photographs, with 71% of predictions within 2° - A non-invasive preliminary screening tool |
? |
|
Mezerji et al./2023 Iran (Soleiman Mezerji et al. 2023) |
Observational Single center |
Automatic analysis of facial photographs | A two-stage fully convolutional network architecture | 395 profile photographs, 271 frontal photographs in smile and 346 frontal photographs at rest | -Similar accuracy of the automatic and manual method for most of the measured variables | ? |
|
Cai et al./2023 China (Cai et al. 2023) |
Observational Single center |
The intricate relationship between facial soft tissue and skeletal types | Resnet 50 as Resnet-Concat model (Res model), Shuffle-Attention model (SfA model) |
1044 3-side-photographs (18–30 years old patients) |
- Accurate automatic differentiation of diverse sagittal skeletal classes based on facial traits | + |
|
Ali et al./2022 Iraq (Ali et al. 2022) |
Observational Single center Prospective |
Prediction of cephalometric variables via a lateral photograph in skeletal Class I, II, and III patterns | artificial neural network |
Digital lateral cephalometric radiographs of 94 patients (41 boys and 53 girls): Thirty skeletal Class I (14 boys and 16 girls), 34 skeletal Class II (14 boys and 20 girls), and 30 skeletal Class III malocclusion (13 boys and 17 girls) |
- An accurate method for the prediction of cephalometric variables | ?
|
Mohammed et al./2022 New Zealand (Mohammed et al. 2022) |
Observational Single center |
Automated analysis of smiles | OpenFace2.2.0 |
Smile videos of a convenience sample of 30 participants (16 females, 14 males; mean age 18.9 ± 2.2 years; range 16–22 years) |
- Acceptable level of accuracy in assessment of smile features such as frequency, duration, genuineness, and intensity | + |
|
Tanikawa et al./2021 Japan (Tanikawa and Chonho 2021) |
Observational Single center |
Automatic Clinical Descriptions of Facial Images | AI/CNN (named CNN-PC) | Lateral and frontal facial images of patients who visited the orthodontic department (1000 patients; male = 397; female = 603; age range 5–68 years old) | Further refinement is required before clinical use | ? |
|
Baksi et al./2021 South Australia (Baksi et al. 2021) |
Observational Single center |
Identification of anatomical landmarks on three-dimensional soft tissue images | an automated algorithm | 30 Three-dimensional soft tissue images of Caucasian adolescents | - Clinically relevant in detection of midsagittal landmarks | + |
|
Demircan et al./2021 Turkey (Demircan, Kılıç, and Önal-Süzek 2021) |
Observational Single center |
A mobile app to help parents determine whether they should consult an orthodontist early for potential Class III malocclusion risk |
1) python Dlib facial landmark detector implementation 2) Python Multi-task Cascade Convolutional Neural Network (MTCNN) face detection implementation, and 3) python face alignment library implementation |
A dataset consisting of 60 profile images of patients | - The third heuristic method performed best, with an accuracy of 81.66% (49/60) | ?
| Orthodontic treatment planning, diagnostics, prediction, and risk monitoring | ||||||
|
Chang et al./2025 China (Q. Chang et al. 2025) |
Observational Single center |
Pre-training approach using multi-center lateral cephalograms for self-supervised learning | multi-attribute classification network | 3310 lateral cephalograms |
- Advances automated diagnostic tools in orthodontics - Addresses need for accurate malocclusion diagnosis - Improves diagnostic efficiency and accuracy - May reduce healthcare costs for orthodontic treatments |
+ |
|
Zheng et al./2025 China (Zheng et al. 2025) |
Observational Single center (cross-sectional, retrospective study) |
Fully automatic extraction of root volume information | Dynamic Graph Convolutional Neural Network | 4534 teeth from 105 patients |
- Methodology for OIRR assessment - Automated and reliable tools - Improved orthodontic treatment planning - Better monitoring capabilities |
+ |
|
Gong et al./2025 China (Gong et al. 2025) |
Observational Single center |
Prediction of changes in lateral appearance | a new conditional generative adversarial network (CGAN) model | 511 participants (218 male and 293 female individuals) |
- Soft-P-CGAN predicts post-treatment lateral changes - Analyzes soft and hard tissue changes in cephalograms - Predictions mostly clinically acceptable - Assists in setting orthodontic treatment goals - Validates image generation networks for lateral profile prediction - Proposes methods to improve accuracy and interpretability |
+ |
|
Gaonkar et al./2024 India (Gaonkar et al. 2024) |
Randomized Clinical Trial | AI-enhanced diagnostic tools for orthodontic treatment planning | AI-powered diagnostic software | 100 orthodontic cases |
- AI tools boost treatment planning accuracy - Reduces treatment time - Fewer appointments - Higher patient satisfaction |
+ |
|
Tomášik et al./2024 Slovakia (Tomášik et al. 2024) |
Observational Single center |
The possibilities of AI-enhanced face improvement technologies in planning orthodontic treatments | FaceApp | 50 participants (25 male and 25 female) |
- AI face enhancement aids in facial aesthetics - Useful in orthodontic treatment planning - Key changes: - Lip fullness - Eye size - Lower face height - More appealing AI-enhanced photos - Supports individualized, soft-tissue-focused orthodontics |
+ |
|
Etemad et al./2024 USA (L. E. Etemad et al. 2024) |
Observational Multi center |
Prediction of whether orthodontic patients will need extraction or non-extraction treatment | Machine Learning Algorithm—Random Forest |
1135 patients Male: University 1: 131 (44.11%)/University 2: 341 (40.69%) Female: University 1: 166 (55.89%)/University 2: 497 (59.31%) Age (Mean ± SD): University 1: 17.15 ± 8.67/University 2: 18.37 ± 10.69 |
- Maxillary and mandibular crowding are key aspects - Affect extraction decisions at both institutions - Utilizes datasets from two U.S. institutions - Aims to develop an AI model for orthodontic support |
+ |
|
Noeldeke et al./2024 Germany (Noeldeke et al. 2024) |
Observational Single center |
Crossbites detection and classification of non-crossbite, frontal, and lateral crossbites | DenseNet, EfficientNet, MobileNet, ResNet18, ResNet50, and Xception | 676 photographs from 311 orthodontic patients |
- CNNs effectively process clinical photographs - Potential for detecting crossbites - Focus on deep learning for orthodontic diagnosis - Uses intraoral 2D photographs - Initial insights into deep learning in orthodontics |
+ |
|
Wang et al./2024 China |
Observational Single center |
Relationships of hard and soft tissues in the lower third of the face for skeletal Class II hyperdivergent patients compared to Class I normodivergent patients | Network Analysis | 52 adult patients (42 females, 10 males) |
- Class II hyperdivergent patients show more soft tissue variations than Class I normodivergent patients - Variations are mainly in the sagittal direction - Relevant hard tissue landmarks for predictions are positioned more forward |
+ |
|
Alam et al./2024 China (Alam et al. 2024) |
Observational Single center |
Prediction of treatment outcomes in orthodontics | A machine‑learning‑based AI model |
30 patients (60% females and 40% males) |
- AI models have potential in orthodontic treatment predictions - High accuracy generally, but struggles with complex and nonstandard cases |
? |
|
Cho et al./2024 Korea (Cho et al. 2024) |
Observational Single center |
Prediction of soft tissue and alveolar bone changes following orthodontic treatment | AI prediction model was based on the TabNet deep neural network | A total of 1774 lateral cephalograms of 887 adult patients (604 females and 283 males) |
- AI less effective than conventional methods for predicting orthodontic changes - AI excels in predicting variable soft tissue landmarks with high variability - Potential need for a hybrid model combining both approaches |
? |
|
Ramasubbu et al./ 2024 India (Ramasubbu et al. 2024) |
Observational Single center |
Prediction of facial and dental outcomes following orthodontic treatment | Style-based Generative Adversarial Network-2 (StyleGAN-2) |
50 bimaxillary patients (18 males and 32 females) AI-predicted outcomes analyzed by four groups of 140 evaluators (35 orthodontists, 35 oral maxillofacial surgeons, 35 other specialty dentists, and 35 laypersons) |
- Method shows potential - Most evaluators found AI predictions reliable |
? |
|
Tanikawa et al./2024 Japan (Tanikawa et al. 2024) |
Observational Single center |
Patterns of Facial Soft Tissue Shape in Orthodontic Premolar Extraction Cases | AI clustering method | 152 patients (All female) |
- Facial form changes vary by AI-classified pre-treatment profile patterns - Pre-treatment profiles guide soft to hard tissue movement ratios - Aids in predicting the facial profile after treatment with moderate to high accuracy |
+ |
|
Salmanpour et al./2024 Turkey (Salmanpour and Camci 2024) |
Observational Single center |
Comparison of the predictive abilities of different convolutional neural network (CNN) models and machine learning algorithms |
convolutional neural network (CNN) models: VGG16, ResNet50 V2, ResNet101 V2, ResNet152 V2, InceptionResnetV2, Xception, MobileNet V2, NasNetMobile, and DenseNet |
237 orthodontic patients (147 women, 90 men) |
- CNN models with photographs predict cooperation successfully - Voice data is less effective than photographs for prediction |
? |
|
Alsubhi et al./2024 Saudi Arabia (Alsubhi et al. 2024) |
Observational Single center |
Dental crowding prediction from occlusal view images | ResNet50, MobileNetV3 Small, and a customized CNN architecture | 256 images |
- Dental crowding prediction model developed - Used deep learning (DL) - Tested with four experiments - Best model: MobileNetV3 Small with CLAHE - Accuracy: 0.907 |
+ |
|
Chen et al./2024 China (H. Chen et al. 2024) |
Observational Single center |
Orthodontists' ability enhancement to monitor root and jawbone information | Deep learning-based cross-temporal multimodal image fusion system | 1,283 orthodontic patients treated, aged 12–49 years, comprising 486 males and 797 females |
- Developed deep learning-based multimodal fusion system - Facilitates ongoing risk assessment during orthodontic procedures - No additional radiation exposure - Aims to advance risk management and treatment strategies - Seeks to reshape orthodontic practice |
+ |
|
Xu et al./2024 China (S. Xu et al. 2024) |
Observational Single center |
Grading Orthodontically Induced External Root Resorption (OIERR) on tooth slices | CNNs (EfficientNet-B1, EfficientNet-B2, EfficientNet-B3, EfficientNet-B4, EfficientNet-B5, and MobileNet-V3) |
A total of 2146 tooth slices of various OIERR grades A total of 400 pairs of CBCT scans (before and after treatment) 123 (30.75%) were men (mean age 21.54 ± 5.86 years) and 277 (69.25%) were women (mean age 24.26 ± 6.47 years) |
- Six CNNs excelled in OIERR grading - Surpassed orthodontists' performance - Reliable diagnostic support for orthodontists |
+ |
|
Prasad et al./2023 India (Prasad et al. 2023) |
Observational Single center |
Broad-outline prediction of orthodontic diagnosis and treatment plan |
Random Forest Classifier, XGB * Classifier, Logistic Regression, Decision Tree Classifier, K-Neighbors Classifier, Linear SVM, Naïve Bayes Classifier |
700 case records of orthodontically treated patients in the past ten years |
- ML-based AI model: 84% accuracy in treatment plan prediction - Compared to orthodontist expert decisions - Lacks subtle decision-making ability - Quality of expertise and training data are limiting factors |
? |
|
Mason et al./2023 India (Mason et al. 2023) |
Observational Single center |
Prediction of extraction/non-extraction decision | two-stage mesh deep learning |
393 patients (200 non-extraction and 193 extraction) (143 females and 250 males) |
- ML models predict extraction decisions - Effective for diverse patient populations - High accuracy and precision - Influential components: - Crowding - Sagittal and vertical characteristics |
+ |
|
Trehan et al./2023 India (Trehan et al. 2023) |
Randomized Clinical Trial | Prediction of extraction in orthodontic treatment plan | convolutional neural network (ResNet-50) | 700 patients |
- Suggestion to develop the current AI model - Aim: improve prediction accuracy |
? |
|
Ryu et al./2023 Korea (Ryu et al. 2023) |
Observational Single center |
tooth landmark detection and tooth extraction diagnosis | ResNet50, ResNet101, VGG16, and VGG19 |
3,136 orthodontic occlusal photographs (1500 maxillary and 1636 mandibular individual intraoral photos of 1636 patients (786 males and 850 females)) |
- Deep learning applied to orthodontic photos - Dental crowding classification achieved - Diagnosis of orthodontic extraction confirmed - AI aids clinicians in diagnosis and treatment planning |
+ |
|
Leavitt et al./2023 USA (Leavitt et al. 2023) |
Observational Single center |
Predicting orthodontic extraction patterns | Random Forest (RF), Logistic Regression (LR), and Support Vector Machine (SVM) algorithms |
366 patients (240 females and 126 males) 0–40 years |
- Evaluated supervised machine learning methods demonstrated strong accuracy for U/L4s and U4s extraction patterns - Poor predictions for U4/L5s, U5/L4s, and U/L5s extraction patterns - Key predictive indicators identified: - Molar relationship - Mandibular crowding - Overjet |
+ |
|
Senirkentli et al./2023 Turkey (Senirkentli et al. 2023) |
Observational Single center |
Deciding between serial extraction or expansion | Multilayer Perceptron, Linear Logistic Regression, k-nearest Neighbors, Naïve Bayes, and Random Forest |
116 patients 6–9 years old |
- Utilizing machine learning for early treatment decisions in mixed dentition patients - Helpful for pediatric and general dentists |
+ |
|
Taraji et al./2023 USA (Taraji et al. 2023) |
Observational Single center |
Predictive power of machine learning algorithms for planning adult Class III malocclusion treatment | Logistic Regression, Support Vector Machine, Multilayer Perceptron (MLP), k-Nearest Neighbor, Random Forest, Convolutional Neural Network (CNN), and Extreme Gradient Boosting (XGBoost) |
182 participants (91 females and 91 males) |
- Notable cephalometric differences in Class III adults between two groups: orthodontic camouflage vs. surgery - 93% accurate AI model developed - Highlights AI and machine learning's role in orthodontics - Aims to improve diagnosis and treatment planning - Reduces clinician subjectivity in borderline cases |
+ |
|
Makaremi et al./2023 France (Makaremi et al. 2023) |
Observational Single center |
knowledge relating to the shape of the skull | Interpretable convolutional neural networks (MIE-ICNN) |
2694 orthodontic patients (1/3 males and 2/3 females) |
- Discuss retrognathia-impacted structures from literature - Identify new medically relevant structures - Highlight evolution of impacted structures by C2Rm severity - Provide insights into human anatomy evolution |
+ |
|
Volovic et al./2023 USA (Volovic et al. 2023) |
Observational Single center |
Prediction of the orthodontic treatment duration | Linear Regression, Lasso, Ridge, and Elastic Net, XGBoost and Random Forest, Support Vector Regression (SVR) and Gaussian Process Regression |
478 patients who received orthodontic treatment (315 females and 163 males) |
- Study on orthodontic treatment duration - Uses machine learning (ML) for predictions based on variables measured before treatment |
+ |
|
L. Xing et al./2023 Korea (Xing et al. 2023) |
Observational Single center |
Prediction of lip prominence based on hard-tissue measurements | eXtreme Gradient Boosting (XGBoost), support-vector machines (SVM), linear regression, neural network, random forest, K nearest neighbors (KNN) algorithm, and decision trees |
1549 patients aged ≥ 12 years (1117 females and 432 males) |
- Effective XGBoost model - High accuracy and practicability in predicting upper and lower lip prominence - AI-assisted predictor for orthodontic treatment planning |
+ |
|
Guo et al./2023 China (Guo et al. 2023) |
Observational Single center |
Prediction of posttreatment outcomes for skeletal Class II extraction patients | multiple stepwise regression (MSR), support vector machine (SVM) and random forest (RF) |
124 skeletal Class II extraction patients (76 female and 48 male) 12–30 years old |
- Maxillary incisor and lower lip protrusion are key indicators for facial profile aesthetics in skeletal Class II extraction patients - New method for predicting posttreatment aesthetics |
+ |
|
Rauf et al./2023 Iraq (Rauf et al. 2023) |
Observational Single center |
Arch width prediction | linear regression and k-nearest neighbor | 450 intraoral scan (IOS) images of orthodontic patients |
- Use machine learning to improve orthodontic diagnosis - Predict dental arch measurements - Prevent anterior segment malocclusion |
+ |
|
Shimizu et al./2022 Japan (Shimizu et al. 2022) |
Observational Single center |
The development of a prioritized problem list and treatment plan | support vector machine (SVM), self-attention network |
967 consecutive cases 60.3% were female |
- Development of two AI systems - Functionality: - Outputs a prioritized problem list - Creates a treatment plan - Clinical system performance: - Previous system: average ranking among its peers - Current system: nearly equal to the worst orthodontist |
+ |
|
Shojaei et al./2022 Iran (Shojaei & Augusto 2022) |
Observational Single center |
Clinical decision-making on orthodontic treatment planning | Logistic Regression, SVM, Decision Tree, Random Forest, Gaussian NB, KNN Classifier and Neural Network |
216 patients (156 extraction and 60 non-extraction) (173 female and 43 male) 9–34 years |
- Neural networks achieve high accuracy in medical diagnosis - Logistic regression and random forest are good for simpler tasks like extraction |
+ |
|
Real et al./2022 Chile (Real et al. 2022) |
Observational Single center |
Prediction of the requirement for tooth extractions during orthodontic treatments | automated machine learning software (Auto-WEKA) | 214 patients (120 female, 94 male) |
- The precision of optimal extraction prediction models improves when incorporating: - Model data - Cephalometric data - Benefits the analytical process |
+ |
|
Khosravi-Kamrani et al./2022 Brazil (Khosravi-Kamrani et al. 2022) |
Observational Single center |
Prediction model derived from previous cephalometric data on 5 predominant subtypes of skeletal Class III malocclusion | distance-weighted discrimination (DWD) | Pretreatment lateral cephalometric records of 148 patients (68 female, 80 male) |
- Evaluation of a systematic method to characterize patients with Class III malocclusion into subtypes - Subtype 1: mandibular prognathic - Increased probability of requiring orthognathic surgery - Subtypes 2/3: - Noticeably reduced treatment failure - In response to orthodontics alone |
+ |
|
Tian et al./2022 China (Tian et al. 2022) |
Observational Single center |
Prediction of the face of patients after orthodontic treatment | network based on Encoder-Decoder architecture (OrthodNet) |
1,687 patients 12–18 years |
- Prediction of post-orthodontic facial appearance - Promotes innovation in orthodontic, orthognathic, and cosmetic dentistry - Reduces doctor-patient disputes - Improves consultation and treatment effectiveness in the field |
+ |
|
Park et al./2022 Korea (Y. S. Park et al. 2022) |
Observational Single center |
A 3D post-orthodontic face prediction method |
- A cGAN - The U-net structure - The discriminator PatchGAN |
Paired T1 and T2 CBCT data sets (n = 312, 96 men, 216 women) of adult patients |
- Clinically acceptable accuracy - Good usability |
+ |
|
Etemad et al./2021 USA (L. Etemad et al. 2021) |
Observational Single center |
Prediction of extraction vs non-extraction | Random forest (RF) and multilayer perceptron (MLP) models | 838 patients (341 female, 497 male) |
- Incongruent data pattern identified in patient population - Need for future work on incongruent data segregation - Goal: Enhance generalization and support clinical decisions |
+ |
|
Park et al./2021 Korea (J. H. Park et al. 2021) |
Observational Single center Retrospective |
Prediction of the dental, skeletal, and soft tissue changes after non-extraction treatment | U-Net-based deep learning model | 284 patients |
- Predicted changes visualized through lateral cephalometric images - Useful for clinicians - Helpful for patients - Aids in considering non-extraction treatment |
+ |
|
Ali et al./2021 Iraq (Ali et al. 2021) |
Observational Single center |
Prediction of the size of unerupted premolars and canines in the Iraqi population | Bayesian Regularization Neural Network | 94 adult patients (41 males and 53 females) seeking orthodontic treatment |
- Study focus: Artificial intelligence systems in orthodontics - Key technology: Neural network machine learning - Purpose: Accurate prediction of unerupted teeth - Performance factors: - Careful choice of input data - Favorable generalization - Appropriate arrangement |
+ |
|
Yu et al./2020 Korea (H. J. Yu et al. 2020) |
Observational Single center |
skeletal diagnostic system | A multimodal convolutional neural network architecture with a modified DenseNet |
5890 lateral cephalograms (mean age = 25.4 y, 2,643 females, 3,247 males) |
- CNN-based system for skeletal orthodontic diagnosis - No intermediary steps needed - Simplifies diagnostic procedures |
+ |
|
Li et al./2019 China (P. Li et al. 2019) |
Observational Single center |
Prediction of orthodontic treatment plans | All three neural networks used in this work are three-layer MLPs | 302 patients |
- Proposed method: artificial neural networks - Offers direction for planning orthodontic treatments - Target audience: less-experienced orthodontists - Aims to improve treatment outcomes |
+ |
|
Jung et al./2016 South Korea (Jung and Kim 2016) |
Observational Single center |
diagnosis of extractions | neural network machine learning | 156 patients (62 males and 94 females) |
- AI expert systems in orthodontics - Enhanced performance through: - Input data selection - Modeling organization - Generalization |
+ |
|
Auconi et al./2015 Italy (Auconi et al. 2015) |
Observational Single center |
Prediction of Class III treatment outcomes | fuzzy clustering repartition and network analysis | Cephalometric data of 54 Class III patients (32 females, 22 males) taken before (T1, mean age 8.2 ± 1.6 years) and after (T2, mean age 14.6 ± 1.8 years) |
- Fuzzy clustering - Estimates individualized risk - Focuses on treatment failure - Applicable to Class III patients |
+ |
|
Xie et al./2010 China (Xie et al. 2010) |
Observational Single center |
The determination of whether extraction is needed | artificial neural network | 200 subjects; 120 for extraction treatments, and 80 for non-extraction treatments |
- Artificial neural network effective in determining extraction vs. non-extraction treatment - Malocclusion patients: 11–15 years - 80% accuracy |
+ |
| Need for orthodontic treatment: | ||||||
|
Stetzel et al./2024 China (Stetzel et al. 2024) |
Observational Single center |
The automation of the aesthetic component (AC) of the Index of Orthodontic Treatment Need (IOTN) | IOTN network | 1009 pre-treatment frontal intraoral photos |
- Promising results from AI system - Notable accuracy, but not ready for clinical use - Potential for additional refinement identified - Eliminating overjet improved accuracy and clinical applicability |
? |
|
Thanathornwong et al./2018 Thailand (Thanathornwong 2018) |
Observational Single center |
General practitioners evaluate the necessity for orthodontic treatment | Bayesian network |
1000 participants 14 to 19 years old |
- Promising results - High accuracy - Classifies patients: - Needing treatment - Not needing treatment |
+ |
| Malocclusion classification: | ||||||
|
Bardideh et al./2024 Iran (Bardideh et al. 2024) |
Observational Single center |
dental occlusion classification using intraoral photographs | multistage neural network system | 948 patients, including 2844 photographs |
- AI excelled in detecting malocclusion classes - Outperformed orthodontists in angle classification - Poor performance in overjet and overbite measurements compared to experts |
? |
|
Zhao et al./2024 China (L. Zhao et al. 2024) |
Observational Single center |
Automatic classification of dental, skeletal, and functional Class III malocclusions |
1.K-Nearest Neighbor (KNN) 2.Logistic Regression (LR) 3.Support Vector Machine (SVM) (Linear & RBF) 4.Gaussian Process Regression (GPR) 5.Decision Tree (DT) 6.Multilayer Perceptron (MLP) 7.Random Forest (RF) 8.Quadratic Discriminant Analysis (QDA) 9.Extreme Gradient Boosting (XGBoost) |
The collected data related to 46 cephalometric feature measurements from 4–14-year-old children (n = 666) (357 males and 309 females) |
- ML models utilizing cephalometric data - Effective in classifying dental, functional, and skeletal Class III malocclusions in children - Key features: - SN-GoMe - U1_NA - Overjet - Key indicators for predicting severity of Class III malocclusions |
+ |
|
Ueda et al./2023 Japan (Ueda et al. 2023) |
Observational Single center |
maxillofacial morphology classification | three distinct ML models within the scikit-learn (sklearn) package in Python (random forest classifier (RF), logistic regression (LR), and support vector classification (SVC)) | Initial cephalograms of 220 patients aged 18 years or older |
- AI model developed in the study - It classifies maxillofacial shapes accurately - Potential for improvement with additional learning data |
+ |
|
Johannes et al./2023 Germany (Johannes et al. 2023) |
Observational Single center |
sagittal skeletal without cephalometric landmarks classification | The model algorithm of CNN (ResNet101, pretrained on ImageNet) | A total of 800 lateral cephalograms |
- Decision-making tool for malocclusion treatment - Based on lateral cephalograms - No cephalometric landmarks needed - Comparable with orthodontic diagnostic criteria |
+ |
|
Kim et al./2022 Korea (Kim et al. 2022) |
Observational Single center |
classification of sagittal skeletal relationships | DCNN-based AI model compared with automated-tracing AI software (V-ceph, version 8.3, Osstem, Seoul, Korea); the software was developed using a dense convolutional network (DenseNet)-based deep learning algorithm and the edge AI concept |
1574 cephalometric images (745 males and 829 females) |
- Sagittal skeletal classification based on cephalometric images - DCNN-based AI model surpassed the performance of automated-tracing AI software |
+ |
|
Li et al./2022 |
Observational Single center |
Classification of sagittal skeletal patterns | Four different CNNs, namely VGG16, GoogLeNet, ResNet152, and DenseNet161 | 2432 lateral cephalometric radiographs |
- Fast and effective support for orthodontists - Aids in diagnosis of sagittal skeletal classification patterns |
+ |
|
Tafala et al./2022 Morocco (Tafala et al. 2022) |
Observational Single center |
Integration of artificial intelligence and dentistry-orthodontics expertise | convolutional neural networks | 339 cephalometric images of patients |
- Created a system for automatic classification of dental malocclusion - Initial stage in orthodontic diagnosis - Important for aesthetic dentistry |
+ |
|
Harun et al./2022 Malaysia (Harun et al. 2022) |
Observational Single center |
malocclusion classification model | Convolutional Neural Network (CNN) | 454 images |
- Cut-out method increases accuracy - Effective for all malocclusion classes - Better than non-implementation |
+ |
|
Ryu et al./2022 Korea (Ryu et al. 2022) |
Observational Single center |
orthodontic clinical photos classification | convolutional neural networks (CNNs) deep learning | 4448 clinical photos from 491 patients (213 female and 278 male) |
- AI-based system - Needs trained models - Classifies orthodontic facial and intraoral photos automatically |
+ |
|
Cejudo Grano de Oro et al./2022 Germany (Cejudo Grano de Oro et al. 2022) |
Observational Single center |
automatic image augmentation for deep learning-based classification of orthodontic photographs | deep learning ResNet-18 |
2051 clinical RGB images 42.1% male and 57.9% female patients |
- Efficient hyperparameter tuning - Identifies optimal values - Maximizes model performance - Avoids overhead of brute force search - Eliminates need for manual hyperparameter parametrization |
+ |
|
Talaat et al./2021 Germany (Talaat et al. 2021) |
Observational Single center |
Detection and localization of orthodontic malocclusions | The CNN model used for this research study was the “You Only Look Once” model | Intraoral images of 700 Subjects |
- Deep convolutional neural networks used to identify and localize orthodontic issues - Validated with clinical images - AI engine accurately detects malocclusion - Works with different intra-oral image views |
+ |
|
Kim et al./2020 |
Observational Single center |
Automatic identification and classification of skeletal malocclusions | two different architectures of multi-channel deep learning (DL) models: “Ensemble” and “Synchronized multi-channel” |
A total of 218 DICOM-formatted CBCTs: 95 Class I, 72 Class II, and 51 Class III |
- Multi-channel models significantly outperformed single-channel models - Achieved accuracy over 93% - Single-channel models use one directional view of 2D images - Class-selective Relevance Mapping (CRM) shows sagittal-left view model outperformed others |
+ |
| Clear Aligner: | ||||||
|
Feng et al./2025 China (Feng et al. 2025) |
Observational Single center (retrospective) |
development of a fully integrated, flexible, transparent aligner through the embedding of high-performance piezoelectric sensors into the occlusal surfaces | Extreme Gradient Boosting (XGBoost) boosted decision tree model, Back Propagation (BP) Neural Networks, Support Vector Machines (SVM), Linear Regression, Random Forest, K-Nearest Neighbors (KNN), and Decision Trees |
1467 patients were screened for oral habits; 5 volunteers were selected among these (3 female, 2 male; mean age 25.3 ± 1.7 years) |
- Fully-integrated sensing system - Wireless monitoring capabilities - Significant advancement in intraoral wearable sensors - Facilitates remote orthodontic monitoring and evaluation - Provides new avenue for effective orthodontic care |
+ |
|
Liu et al./2025 China (C. Liu et al. 2025) |
Observational Single center |
Assessment of the 3D effects of clear aligners on tooth movement | 3D fusion model based on artificial intelligence software | 136 patients who completed clear aligner (aged 13–35 years); 90 (66%) were female, 46 (34%) were male |
- 3D fusion model provides a clinical reference - Assists in designing targets and assessing treatment outcomes - Advances clear aligner orthodontics - Offers clinical benefits and precision - Enhances personalization and evidence-based planning |
+ |
|
Wolf et al./2024 Germany (Wolf et al. 2024) |
Observational Single center |
determinants of successful tooth movement in adult clear aligner therapy | Three different ML methods were utilized: (1) logistic regression with L1 regularization, (2) extreme gradient boosting (XGBoost), and (3) support vector classification using a radial basis function kernel | 9942 CAT patients (70.6% females, 29.4% males, age range 18–64 years, median 30.5 years) |
- Moderate, well-calibrated predictive accuracy observed - Effective methods: regularized logistic regression and XGBoost - Factors identified that significantly impact the risk of refinement in CAT - Emphasizes their significance in orthodontic treatment planning - Possible advantages include: - Reduced treatment durations - Decreased discomfort for patients - Fewer clinic visits |
? |
|
Abu Arqub et al./2024 USA (Abu Arqub et al. 2024) |
Observational Single center |
The precision of ChatGPT's responses regarding orthodontic clear aligners | ChatGPT | 111 questions were generated by three orthodontists |
- The general accuracy of ChatGPT's responses to CAT questions was not satisfactory - Lack of citations to relevant literature - Limited ability to provide current and precise information |
? |
|
Dursun et al./2024 Turkey (Dursun & Bilici Geçer 2024) |
Observational Single center |
accuracy of responses generated by different AIs in relation to orthodontic clear aligners | The chatbots tested were: (i) ChatGPT model GPT-3.5, which is currently available for free. (ii) ChatGPT model GPT-4, which is available as part of a subscription in ChatGPT Plus. (iii) Gemini. (iv) Copilot | 20 most frequently asked questions were included |
- Chatbots provide accurate answers on clear aligners - Responses are moderately reliable and difficult to read - ChatGPT, Gemini, and Copilot have potential in orthodontics - Need for more evidence-based information and improved readability |
? |
|
Adel et al./2024 Egypt (Adel et al. 2024) |
Observational Single center |
prediction of Invisalign SmileView for digital AI smile simulation | Invisalign SmileView | 24 adult subjects (12 females and 12 males; mean age 22 ± 5.2 years) |
- Invisalign SmileView tool: pre-treatment simulation - Limited predictability - Reliable parameters: - Philtrum height - Commissure height - Smile width - Buccal corridor - Smile index - Unreliable factors: - Smile arc - Most posterior tooth display - Lower incisor exposure - Actual outcomes: better aesthetics than simulations |
? |
|
Mourgues et al./2024 Spain (Mourgues et al. 2024) |
Observational Single center |
The predictability of orthodontic results simulation by SmileView™ (SV) and its impact on anatomical modifications to the teeth | Invisalign® SmileView™ (SV) | 51 subjects |
- SV simulates broader smiles - Achievable via aligner treatments - Predicts vertical movements of incisors - Adjusts upper incisor mesiodistal size - Corrects dental midline deviations |
+ |
|
Tanaka et al./2023 Brazil (Tanaka et al. 2023) |
Observational Single center |
the accuracy of ChatGPT in information provision on clear aligners, temporary anchorage devices, and digital imaging | ChatGPT 4.0 | 45 questions and answers |
- ChatGPT effectiveness in providing quality answers - Topics: - Clear aligners - Temporary anchorage devices - Digital imaging - Context of interests of Orthodontics |
+ |
|
Murphy et al./2023 USA (Murphy et al. 2023) |
Observational Single center |
Comparison of maxillary incisor and canine movement between Invisalign and fixed orthodontic appliances | two-stage mesh deep learning |
60 patients (Invisalign, n = 30; braces, n = 30) Aged ≥ 16 years when treatment was begun |
- Comparison: Fixed orthodontic appliances vs. Invisalign - Finding: Patients with fixed appliances experience: - More maxillary tooth movement - Significant movement in all directions - Especially notable rotation of maxillary canine - Tipping of maxillary canine also significant |
+ |
|
Lee et al./2023 |
Observational Single center |
Identification of predictors regarding the type and severity of malocclusion that affect total Invisalign treatment duration | Two-Stage Mesh Deep Learning (TS-MDL) |
116 consecutive patients (88 females and 28 males) |
- Specific tooth movement types do not conclusively affect aligner treatment time - Composite score better predicts treatment duration than individual malalignment factors - Anterior malalignment factors significantly impact treatment time |
? |
|
Ferlito et al./2023 USA (Ferlito et al. 2023) |
Observational Single center |
The assessment of the repeatability of the Go or No-Go instructions provided by the application | AI-based GO/NO-GO instructions |
30 patients (18 female and 12 male) |
- Large discrepancies in tooth position - Patients receiving GO and NO-GO instructions - Inconsistency with quantitative findings |
? |
|
Xu et al./2022 China (L. Xu et al. 2022) |
Observational Single center |
Invisalign treatment patient experience prediction | The ANNs were four-layer fully connected MLPs (multilayer perceptrons) | 196 patients |
- Constructed ANNs show good accuracy in predicting patient experience - Areas of prediction include: - Pain - Anxiety - Quality of Life (QoL) - Artificial intelligence system developed for predicting patient comfort - Potential for clinical application - Aims to enhance patient compliance |
+ |
|
Thurzo et al./2021 Slovakia (Thurzo et al. 2021) |
Observational Single center |
the impact of computerized personalized decision algorithms responding to observed and anticipated patient behavior | Artificial intelligence system (AI) system—Dental monitoring | 86 patients (54 female, 32 male) |
- Implementation of application updates - Incorporate computerized decision processes - Enhance clinical performance - Improve patient compliance - Use algorithms with decision tree architecture - Create a baseline for machine learning optimization |
+ |
| Orthodontic appliances: | ||||||
|
Li et al./2024 China (R. Li et al. 2024) |
Observational Single center |
Virtual orthodontic bracket removal | A segmentation network based on pointnet neural network (PNN) |
49 orthodontists, Dataset A: 978 bonded teeth, 20 original teeth, and 20 brackets generated by scanners Dataset B: an additional 118 bonded teeth |
- Tool efficiency and precision are satisfactory - Can operate without original tooth data - Displays bonding deviation - Useful in bracket position assessment scenario |
+ |
|
ElShebiny et al./2024 USA (ElShebiny et al. 2024) |
Observational Single center |
Comparison of the accuracy of a workflow involving virtual bracket removal (VBR) by AI to traditional bracket removal | Pre-debond scan was then uploaded to the uDesign by uLab Systems software (version 7.0; uLab Systems, Inc, Memphis, Tenn), | 30 dental arches (17 maxillary and 13 mandibular) of 17 patients; 11 females and 6 males |
- VBR by AI is considered accurate - Suitable for fabrication of thermoplastic orthodontic retainers - Clinically acceptable quality |
+ |
|
Aldabbagh et al./2019 Saudi Arabia (Aldabbagh et al. 2019) |
Observational Single center |
Mobile application design for orthodontic treatment assistance in accurate wired bracket placement | python code and java code | 13 male and female participants, |
- Mobile application evaluated - Achieved desired goals - Focus on speed - Focus on accuracy |
+ |
|
Akçam et al./2002 Turkey (Akçam & Takada 2002) |
Observational Single center |
Development of a computer-assisted inference model for the selection of appropriate types of headgear appliances | Fuzzy logic | 85 cases (52 females, 33 males) |
- Majority of examiners: six or more out of eight - Satisfaction with system recommendations - Confirmation of usefulness of proposed inference logic |
+ |
| Oral hygiene, and Periodontal health: | ||||||
|
Ozsunkar et al./2024 Turkey (Ozsunkar et al. 2024) |
Observational Single center |
Detection of white spot lesions in post-orthodontic oral photographs | YOLOv5x algorithm |
435 images patients aged 16 to 62 |
- Model's accuracy in detecting white spot lesions - Sensitivity of the model remains lower than expected - Challenges for practical application - Promising detection rate - Acceptance compared to previous studies |
? |
|
Gurgel et al./2024 USA (Gurgel et al. 2024) |
Observational Single center (retrospective secondary data analysis from a clinical trial performed) |
Influence of the piezocision surgery in the orthodontic biomechanics | Novel artificial intelligence (AI)-automated tools |
38 patients aged from 18 to 40 years: control (n = 19) and piezocision (n = 19) (10 females and 28 males) |
- Open-source automated dental tools used - Facilitated clinicians' assessment of piezocision treatment outcomes - Piezocision surgery performed before orthodontic treatment - No decrease in treatment time observed - No influence on orthodontic biomechanics - Similar tooth movements compared to conventional treatment |
+ |
|
Snider et al./2023 USA (Snider et al. 2023) |
Observational Single center (prospective clinical study) |
Effectiveness of Dental Monitoring™ (DM™) Artificial Intelligence Driven Remote Monitoring Technology (AIDRM) | Dental Monitoring™ (DM™) Artificial Intelligence Driven Remote Monitoring Technology (AIDRM) technology |
DM Group: (n = 24) Control Group (n = 25) not monitored by DM |
- AIDRM by weekly DM scans and personalized active notifications - improve oral hygiene over time in orthodontic patients |
+ |
|
Andrade et al./2023 Brazil (Andrade et al. 2023) |
Observational Single center (retrospective feasibility study) |
Automated biofilm detection capacity | U-Net neural network | 176 patients (The first dataset consisted of 96 photographs from 16 patients; The second dataset comprised 480 intra-oral photos from 160 patients) |
- Segmenting dental biofilm visually - Using U-Net for segmentation - Assists professionals and patients in identifying dental biofilm - Improves oral hygiene and health |
+ |
|
Machoy et al./2021 Poland (Machoy et al. 2021) |
Observational Single center |
Temperature changes and automatic classifiers as determiners of measurement sensitivity | Computational Formulation of Orthodontic referral Decisions system (CFOD) |
120 patients 18–35 years |
- Class I elastics in orthodontics - No significant change in periodontal temperature - Evidence of safe orthodontic forces - AI used for gingival temperature assessment - Excludes factors affecting thermographic analysis |
+ |
|
Alalharith et al./2020 Saudi Arabia (Alalharith et al. 2020) |
Observational Single center |
Automatic detection of periodontal disease in orthodontic patients | Faster Region-based Convolutional Neural Network (Faster R-CNN) | 134 intraoral images (47 males and 47 females) |
- Viability of deep learning models - Focus on detection and diagnosis of gingivitis in intraoral images - Highlights potential usability in dentistry - Aims to reduce severity of periodontal disease globally - Supports preemptive, non-invasive diagnosis |
+ |
| Sex determination | ||||||
|
de Araujo et al./2024 Brazil (C. M. de Araujo et al. 2024) |
Observational Single center |
Measurements of dental arch and maxillary skeletal base for sex determination | Logistic Regression, Gradient Boosting Classifier, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Multi-Layer Perceptron Classifier (MLP), Decision Tree, and Random Forest Classifier | 100 patients (50 males, 50 females) with an age range from 10 to 88 years and a mean age of 38.5 ± 20.6 years (38.2 ± 19 for men and 38.8 ± 22.3 for women) |
- Transverse dental arch and maxillary skeletal base measurements - Strong predictive capability and high accuracy - Best performance by SVM algorithm - Useful for forensic sex determination |
+ |
|
Küchler et al./2024 Germany (Küchler et al. 2024) |
Observational Single center (cross-sectional study) |
combination of mandibular and dental dimensions for sex determination | logistic regression, gradient boosting classifier, k-nearest neighbors, support vector machine, multilayer perceptron classifier, decision tree, and random forest classifier | 108 individuals (51% females and 49% males); age ranging from 9 to 40 years old (15.7 ± 7.9 years) |
- The integration of mandibular and dental dimensions for sex prediction would be a promising approach - The potential of machine learning techniques as valuable tools |
+ |
|
Hase et al./2024 Japan (Hase et al. 2024) |
Observational Single center (cross-sectional study) |
sex estimation | VGG16 and DenseNet-121 | 600 lateral cephalograms of patients aged 4 to 63 years (300 female patients and 300 male patients) |
- Analysis of the system conducted sex estimation from lateral cephalograms - High accuracy achieved |
+ |
|
Kök et al./2021 Turkey (Kök et al. 2021) |
Observational Single center |
Determination of the growth-development periods and gender from the cervical vertebrae | Artificial Neural Network (ANN) | 419 patients aged between 8 and 17 years |
- Growth-development periods and gender identified via cervical vertebrae - ANN algorithm showed satisfactory success |
+ |
Results
AI applications in orthodontics
Facial asymmetry
Facial symmetry is one of the key aspects of facial attractiveness (N. Kazimierczak et al. 2024a, b, c). Several factors, such as masticatory and temporomandibular joint (TMJ) disorders, condylar trauma, condylar hyperplasia, craniofacial microsomia, craniosynostosis, and plagiocephaly, can contribute to facial asymmetry (Cheong and Lo 2011). Four studies evaluated the diagnostic performance of AI in the detection of facial asymmetry and compared AI-based diagnosis with conventional analysis performed by clinicians (S. M. Jeon et al. 2024; N. Kazimierczak et al. 2024a, b, c; Takeda et al. 2021; Yurdakurban et al. 2021). Two of these studies used the CephX and Emotrics programs (N. Kazimierczak et al. 2024a, b, c; Yurdakurban et al. 2021), while the others trained a CNN model (S. M. Jeon et al. 2024; Takeda et al. 2021). Three studies concluded that AI-based software and machine-learning algorithms may offer a clinically acceptable diagnostic evaluation of facial asymmetry on PA cephalograms and facial photographs (S. M. Jeon et al. 2024; Takeda et al. 2021; Yurdakurban et al. 2021). However, Kazimierczak et al. reported that a high rate of tracing inaccuracies and poor agreement with manual assessments preclude the use of the AI-driven web-based platform CephX to assess facial asymmetry on CT scans (N. Kazimierczak et al. 2024a, b, c).
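Many of these asymmetry assessments reduce to measuring how far lower-face landmarks deviate from a midsagittal reference line. The sketch below is only an illustration of that geometry, not the method of any cited study; the landmark choice (crista galli and anterior nasal spine defining the midline, menton as the test point) and all coordinates are hypothetical.

```python
import math

def deviation_from_midline(point, mid_a, mid_b):
    """Perpendicular distance from `point` to the line through mid_a and mid_b."""
    (px, py), (ax, ay), (bx, by) = point, mid_a, mid_b
    # 2D cross product of (mid_b - mid_a) and (mid_a - point).
    cross = (bx - ax) * (ay - py) - (by - ay) * (ax - px)
    return abs(cross) / math.hypot(bx - ax, by - ay)

# Hypothetical PA-cephalogram coordinates in mm: a vertical midline
# and a menton deviated 3.5 mm to one side.
crista_galli = (0.0, 0.0)
ans = (0.0, 40.0)
menton = (3.5, 95.0)

menton_deviation = deviation_from_midline(menton, crista_galli, ans)
```

A clinician-facing tool would typically compare such a deviation against a clinical threshold rather than report the raw distance.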
Facial attractiveness
Facial attractiveness affects one’s quality of life and social interactions (Westfall et al. 2019). Earlier investigations have systematically examined the association between dentoskeletal malocclusions and perceived facial attractiveness (Olsen and Inglehart 2011; Papio et al. 2019; Richards et al. 2015). Orthodontic and orthognathic treatments can improve facial attractiveness by correcting dentoskeletal problems. Four studies utilized AI to evaluate how such treatments influence facial attractiveness (Cai et al. 2024; Obwegeser et al. 2022; Patcas et al. 2019a, b). AI-based programs and algorithms demonstrated the positive impact of these treatments on facial attractiveness. Also, Patcas et al. found that AI-based results were comparable with panel ratings (Patcas et al. 2019a, b).
Three studies employed AI to assess facial attractiveness from facial photographs (Grillo et al. 2024; Peck et al. 2022; X. Yu et al. 2014). Two of these found that AI is quite reliable and can enhance the speed and efficiency of assessment (Grillo et al. 2024; X. Yu et al. 2014). Peck et al. found that AI is most effective when supplementing standard rater groups in facial attractiveness assessments (Peck et al. 2022). Their study utilized Haystack software to evaluate preoperative and postoperative photographs of 65 patients who underwent orthognathic surgery.
Cleft
Cleft lip and palate (CLP) is a prevalent craniofacial anomaly that clinicians frequently observe in practice (Manna et al. 2009). Both genetic and environmental factors are associated with CLP (Kohli and Kohli 2012; Murray 2002). Evaluating craniofacial morphology and dental occlusion is critical in CLP diagnosis and management, and AI has gained interest for radiographic and morphologic analysis. Eight studies investigated the craniofacial characteristics, dental relationships, early prediction of surgical needs, and facial attractiveness of cleft patients using AI-based software (Webceph) and AI algorithms (Alam and Alfawzan 2020a, 2020b; Alam et al. 2021; Arslan et al. 2024; Kamei et al. 2024; G. Lin et al. 2021; Patcas et al. 2019a, b; X. Wang et al. 2021a, b). In three studies, Alam et al. used Webceph software to compare the dentoskeletal relationships and sella turcica bridging of cleft patients with those of non-cleft individuals (Alam and Alfawzan 2020a, 2020b; Alam et al. 2021). Patcas et al. employed a CNN to compare the facial attractiveness of treated cleft patients with that of non-cleft individuals and found results comparable to the scores of professional raters (Patcas et al. 2019a, b). Wang et al. utilized deep learning-driven CBCT auto-segmentation to quantify three-dimensional maxillary asymmetry in cleft patients (X. Wang et al. 2021a, b). Kamei et al. compared skeletal maturation and the presence of cervical vertebral anomalies between cleft and non-cleft groups using convolutional neural network (CNN)-based AI models (Kamei et al. 2024). Arslan et al. found that AI was effective in detecting and numbering teeth in cleft patients, although further refinement is required (Arslan et al. 2024). Lin et al. demonstrated that AI could predict, at the age of 6 years, the future need for surgery in cleft patients (G. Lin et al. 2021). One study proposed a machine learning method to predict cleft before birth (Shafi et al. 2020).
They achieved 92.6% accuracy on test data with a multilayer perceptron model. Omar et al. used a machine learning algorithm to evaluate the variables that contribute to the success of pre-graft orthodontic treatment of CLP (Omar et al. 2018). They identified four variables that determine the success of pre-graft orthodontic treatment.
Impacted canine
Maxillary canines are the most commonly impacted teeth after the third molars (Manne et al. 2012). Surgical exposure and orthodontic treatment are considered in some cases. Precise localization of maxillary canines on 2D and 3D radiographs is necessary to expose them and apply an appropriate vector of force (Amintavakoli and Spivakovsky 2018). Impacted canines can also be classified based on their location and angulation in the radiographs (Ericson and Kurol 1988; Yamamoto et al. 2003). AI can now be used to predict the occurrence and localize the position of impacted maxillary canines. Deepa et al. evaluated the performance of a CNN model in predicting the occurrence of maxillary canine impaction and demonstrated promising performance with high accuracy (Deepa et al. 2024). Ozcan et al. used a CNN model to predict the buccopalatal position of impacted canine crowns from panoramic X-rays and found that further refinement is required before clinical adoption (Özcan et al. 2024). de Araujo et al. demonstrated that palatally impacted maxillary canines could be predicted from maxillary measurements analyzed through machine learning approaches (Cristiano Miranda de Araujo et al. 2025). Two studies used a CNN model to classify impacted canines on panoramic radiographs (Abdulkreem et al. 2024; Aljabri et al. 2022). Both found that AI algorithms could improve the identification of impacted canines. Minhas et al. implemented a CNN to reconstruct 3D images from 2D panoramic X-rays in patients with impacted maxillary canines (Minhas et al. 2024). They found that the 50% accuracy was insufficient for clinical application. Chen et al. used a machine learning-based method with cone-beam computed tomography (CBCT) data to assess maxillary structural differences in unilateral canine impaction (S. Chen et al. 2020).
They demonstrated that the automatic segmentation method based on this machine learning algorithm was fast and efficient, applicable to both normal and pathological subjects, and adaptable to CBCT images of different quality levels.
Cephalometric landmarks’ detection and cephalometric analysis
Cephalometric analysis serves as a fundamental method in orthodontic diagnosis and treatment planning. It consists of identifying anatomical landmarks and performing measurements on them. AI-based algorithms can now assist orthodontists in automatic landmark identification and cephalometric analysis. Thirty-eight studies investigated the performance of AI systems in the detection of cephalometric landmarks and compared it with manual landmark identification (H. J. Ahn et al. 2024; Alessandri-Bonetti et al. 2023; Banumathi et al. 2011; Bao et al. 2023; Bulatova et al. 2021; S. Chang et al. 2023; J. Chen et al. 2023a, b; Dai et al. 2019; Danisman 2023; Davidovitch et al. 2022; El-Dawlatly et al. 2024; Hwang et al. 2020; Indermun et al. 2023; Jiang et al. 2023; Kang et al. 2024; J. Kim et al. 2021a, b, c, d; M. J. Kim et al. 2021a; King et al. 2022; Kwon et al. 2021; Le et al. 2022; J. Lee et al. 2023a, b, c; J. H. Lee et al. 2020; J.-H. Moon et al. 2020; Nishimoto et al. 2019; O'Friel et al. 2024; Ramadan et al. 2022; Ristau et al. 2022; Silva et al. 2022; Song et al. 2020; Tanikawa et al. 2021b; Tsolakis et al. 2022; Ugurlu 2022; Wirtz et al. 2020; Yao et al. 2022; Ye et al. 2023; Zeng et al. 2021; Zese et al. 2023; C. Y. Zhao et al. 2023a, b). Twenty-four studies concluded that AI alone is not yet fully reliable for detecting cephalometric landmarks and that clinicians should review AI-identified landmark positions before using them (Bao et al. 2023; Bulatova et al. 2021; S. Chang et al. 2023; J. Chen et al. 2023a, b; Dai et al. 2019; Danisman 2023; Davidovitch et al. 2022; El-Dawlatly et al. 2024; Kang et al. 2024; J. Kim et al. 2021a, b, c, d; M. J. Kim et al. 2021a; Le et al. 2022; J. Lee et al. 2023a, b, c; J. H. Lee et al. 2020; J.-H. Moon et al. 2020; O'Friel et al. 2024; Ramadan et al. 2022; Ristau et al. 2022; Silva et al. 2022; Tanikawa et al. 2021b; Ugurlu 2022; Wirtz et al. 2020; Zeng et al. 2021; Zese et al. 2023). Fourteen studies demonstrated that AI showed high accuracy and could be an efficient tool for the automatic detection of cephalometric landmarks (H. J. Ahn et al. 2024; Alessandri-Bonetti et al. 2023; Banumathi et al. 2011; Hwang et al. 2020; Indermun et al. 2023; Jiang et al. 2023; King et al. 2022; Kwon et al. 2021; Nishimoto et al. 2019; Song et al. 2020; Tsolakis et al. 2022; Yao et al. 2022; Ye et al. 2023; C. Y. Zhao et al. 2023a, b).
Nine studies investigated the performance of AI in landmark detection and measurements from 3D CT and CBCT images (H. Ahn & Yang 2024; J. Ahn et al. 2022; Blum et al. 2023; Dot et al. 2022; Duman Ş et al. 2022; Khabadze et al. 2024; Sahlsten et al. 2024; Strunga et al. 2024; L. Tao et al. 2023a, b; Y. Wang et al. 2024a, b, c, d). All studies except Strunga et al. (Strunga et al. 2024) demonstrated that accuracy was within an acceptable range and that AI could be an efficient tool. Strunga et al. found that automated cephalometric analysis with AI auto-tracing software was less accurate and slower than human-operated digital tracing (Strunga et al. 2024).
Seven studies evaluated the performance of AI in landmark identification from PA cephalograms (Gil et al. 2022; Gonca et al. 2024a, b; Han et al. 2024; M. J. Kim et al. 2021b; H. Lee et al. 2023a, b, c; Muraev et al. 2020; C. Y. Zhao et al. 2023a, b). All studies, except Kim et al. (M. J. Kim et al. 2021b), found promising accuracy while saving time and effort. Kim et al. demonstrated that automated landmark identification failed to achieve the clinically acceptable error limit of 2 mm (M. J. Kim et al. 2021b).
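The 2 mm limit cited above is the benchmark commonly used in this literature: a predicted landmark is counted as successful when its radial (Euclidean) distance from the reference annotation is within 2 mm. A minimal sketch of that evaluation, using invented predicted and reference coordinates (the landmark names and values below are hypothetical, not from any cited study):

```python
import math

# Hypothetical (x, y) coordinates in mm: reference annotations vs. model output.
truth = {"sella": (10.0, 12.0), "nasion": (30.0, 8.0), "menton": (25.0, 60.0)}
pred  = {"sella": (10.5, 12.4), "nasion": (31.0, 8.0), "menton": (25.0, 62.5)}

# Radial error per landmark, then the two summary metrics usually reported:
# mean radial error (MRE) and the success detection rate (SDR) at 2 mm.
errors = {name: math.dist(truth[name], pred[name]) for name in truth}
mean_radial_error = sum(errors.values()) / len(errors)
sdr_2mm = sum(e <= 2.0 for e in errors.values()) / len(errors)
```

Here the menton prediction (2.5 mm off) would count as a failure at the 2 mm threshold even though the mean error stays under 2 mm, which is why both metrics are usually reported together.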
Park et al. evaluated and compared the accuracy of two deep learning architectures for detecting anatomical landmarks (J. H. Park et al. 2019a, b). Hong et al. used a CNN model for landmark detection in serial cephalograms of orthognathic patients, which found that clinicians should re-evaluate the landmarks due to errors of automatic identification (Hong et al. 2022). Popova et al. investigated how different developmental stages of dentition and the presence of fixed orthodontic or other dental appliances affect the accuracy of cephalometric landmark detection using a CNN model (Popova et al. 2023). The study demonstrated that variations in growth structures and dentition stages significantly influenced the accuracy of the customized CNN approach.
Thirty-six studies investigated the performance of AI in cephalometric analysis and compared it with human-operated analysis (H. Ahn & Yang 2024; Alessandri-Bonetti et al. 2023; Baig et al. 2024; Bao et al. 2023; Bor et al. 2024; Chuchra et al. 2024; Çoban et al. 2022; Danisman 2023; Duran et al. 2023; El-Dawlatly et al. 2024; Guinot-Barona et al. 2024; Hwang et al. 2021; S. Jeon & Lee 2021; F. Jiang et al. 2022a, b, c; Kang et al. 2024; Katyal & Balakrishnan 2022; Kazimierczak et al. 2023; Kazimierczak et al. 2024b; Kılınç et al. 2022; H. Kim et al. 2020a, b; Kunz et al. 2023; Kunz et al. 2020; J. Lee et al. 2023a, b, c; Mahto et al. 2022; Mario et al. 2010; Meriç & Naoumova 2020; Muñoz et al. 2024; Nishimoto et al. 2019; Panesar et al. 2023; Prince et al. 2023; Saifeldin et al. 2024; Silva et al. 2022; Tsolakis et al. 2022; Ugurlu 2022; Zaheer et al. 2024; Zecca et al. 2024). Twenty-three studies demonstrated that AI-based automatic analysis should be used with caution because of the possibility of errors (Alessandri-Bonetti et al. 2023; Baig et al. 2024; Bao et al. 2023; Bor et al. 2024; Chuchra et al. 2024; Çoban et al. 2022; Danisman 2023; Duran et al. 2023; El-Dawlatly et al. 2024; Guinot-Barona et al. 2024; S. Jeon and Lee 2021; Kang et al. 2024; Kazimierczak et al. 2023; Kazimierczak et al. 2024b; Kılınç et al. 2022; H. Kim et al. 2020a, b; Kunz et al. 2023; J. Lee et al. 2023a, b, c; Meriç & Naoumova 2020; Muñoz et al. 2024; Panesar et al. 2023; Silva et al. 2022; Ugurlu 2022). Thirteen studies found that AI-based analysis was reliable and accurate (H. Ahn & Yang 2024; Hwang et al. 2021; F. Jiang et al. 2022a, b, c; Katyal & Balakrishnan 2022; Kunz et al. 2020; Mahto et al. 2022; Mario et al. 2010; Nishimoto et al. 2019; Prince et al. 2023; Saifeldin et al. 2024; Tsolakis et al. 2022; Zaheer et al. 2024; Zecca et al. 2024).
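Once landmarks are placed, whether by AI or by hand, the cephalometric measurements themselves are simple geometry. As an illustration only (the coordinates below are invented, not from any cited study), an angle such as SNA is the angle at nasion between the rays toward sella and A-point:

```python
import math

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex` formed by the rays toward p1 and p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Hypothetical landmark coordinates (in pixels) as an AI model might output.
S, N, A = (100.0, 100.0), (180.0, 90.0), (170.0, 160.0)
sna = angle_at(N, S, A)
```

Because each measurement is a function of several landmarks, a small localization error can propagate into the reported angle, which is one reason the studies above recommend clinician review.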
Cervical vertebra maturation stage
Cervical vertebra maturation (CVM) stage is routinely used to determine skeletal maturity which is an important factor in timing of growth modification treatments (Cericato et al. 2015). AI models can help clinicians to determine CVM stage. Twenty-six studies investigated the performance of AI systems in identifying growth and development periods as well as the stage of CVM (Agarwal and Agarwal 2022; Akay et al. 2023; H. Amasya et al. 2020a, b; Hakan Amasya et al. 2020a, b; Ameli et al. 2023; Atici et al. 2022, 2023; Kamei et al. 2024; Khazaei et al. 2023; H. Kim et al. 2023a, b; Kök et al. 2019; Kok et al. 2021; Kök et al. 2021; H. Li et al. 2022a, b; H. Li et al. 2023; Liao et al. 2022; Makaremi et al. 2019; Mohammad-Rahimi et al. 2022; Moztarzadeh et al. 2023; Nogueira-Reis et al. 2024; Radwan et al. 2023; Şatir et al. 2023; H. Seo et al. 2021; Hyejun Seo et al. 2023; Shoari et al. 2024; Zhou et al. 2021). Twenty studies found that AI was highly accurate and reliable (Agarwal & Agarwal 2022; Hakan Amasya et al. 2020a, b; Ameli et al. 2023; Atici et al. 2022, 2023; Kamei et al. 2024; Khazaei et al. 2023; H. Kim et al. 2023a, b; Kök et al. 2019; Kök et al. 2021; H. Li et al. 2022a, b; H. Li et al. 2023; Liao et al. 2022; Moztarzadeh et al. 2023; Nogueira-Reis et al. 2024; Radwan et al. 2023; Şatir et al. 2023; H. Seo et al. 2021; Hyejun Seo et al. 2023; Shoari et al. 2024; Zhou et al. 2021). Four studies demonstrated that the accuracy of AI needs further improvement (Akay et al. 2023; H. Amasya et al. 2020a, b; Makaremi et al. 2019; Mohammad-Rahimi et al. 2022).
Seven studies also compared the performance of different AI algorithms in determining CVM stage and growth and development periods (Hakan Amasya et al. 2020a, b; Atici et al. 2022, 2023; Kök et al. 2019; Kok et al. 2021; Nogueira-Reis et al. 2024; H. Seo et al. 2021). Atici et al. found that the AggregateNet model with tunable directional edge filters determined CVM stage better than other models (ResNet20, MobileNetV2, Xception, and CNNDF) (Atici et al. 2023). Seo et al. found that Inception-ResNet-v2 performed best compared with ResNet-18, MobileNet-v2, ResNet-50, ResNet-101, and Inception-v3 (H. Seo et al. 2021). Kok et al. found that the k-nearest neighbors (k-NN) and logistic regression (Log. Regr.) algorithms exhibited the lowest accuracy in identifying cervical vertebrae stages (CVS) during growth and development periods; by contrast, artificial neural networks (ANN) demonstrated markedly superior performance and may be preferable for CVS determination (Kök et al. 2019). Kok et al. reported that neural network models (NNMs) outperformed naive Bayes models (NBMs) in accurately predicting growth and development outcomes (Kok et al. 2021). Amasya et al. demonstrated that the ANN model outperformed four alternative models: logistic regression (LR), support vector machine, random forest, and decision tree (DT) (Hakan Amasya et al. 2020a, b). Atici et al. reported that a custom-designed convolutional neural network (CNN) combined with tunable directional filters (CNNDF) outperformed other models, including pre-trained MobileNetV2, ResNet101, and Xception, regardless of whether directional filters were used (Atici et al. 2022).
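Several of the algorithms compared above (k-NN, logistic regression, ANN) are standard supervised classifiers applied to features extracted from the cervical vertebrae. As a toy sketch only — the two features and every numeric value below are hypothetical, chosen to show the mechanics rather than to reproduce any cited study — k-nearest neighbors assigns a CVM stage by majority vote among the closest training cases in feature space:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, stage) pairs; classify `query`
    by majority vote among the k nearest neighbors (Euclidean distance)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(stage for _, stage in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical features: (vertebral concavity depth, body height/width ratio).
train = [
    ((0.1, 0.6), "CS1"), ((0.2, 0.7), "CS1"), ((0.2, 0.6), "CS1"),
    ((0.9, 1.0), "CS4"), ((1.0, 1.1), "CS4"), ((1.1, 1.0), "CS4"),
]
stage = knn_predict(train, (0.15, 0.65))
```

The deep learning models in the cited studies learn such features directly from the radiograph instead of relying on hand-measured inputs, which is largely why they compare favorably here.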
Maturation stage of the mid- palatal suture
Two studies assessed mid-palatal suture maturation stages with AI. Both effectively classified the stages, and their performance was better than that of a clinician (Tang et al. 2024; Zhu et al. 2024). Tang et al. applied an enhanced vision transformer (ViT), and Zhu et al. used CNN models: ResNet50, ResNet18, ResNet101, Inception-v3, and EfficientNetV2-s (Tang et al. 2024; Zhu et al. 2024).
Growth prediction
Growth prediction of craniofacial structures in growing patients can help clinicians in treatment planning and provide a better prognosis. Eight studies evaluated the performance of AI-based growth prediction models using lateral cephalograms and hand-wrist radiographs (Gonca et al. 2024a, b; E. Kim et al. 2023a, b; Larkin et al. 2024; J. H. Moon et al. 2024; Parrish et al. 2023; Perillo et al. 2021; Wood et al. 2023; Zakhar et al. 2023; Z. Zhang et al. 2022). These studies identified prediction errors in specific landmarks. While the authors concluded that artificial intelligence offers potential for growth prediction, they emphasized that prediction accuracy in certain craniofacial regions requires further enhancement.
Four studies also compared the performance of different AI algorithms in growth prediction (Gonca et al. 2024a, b; E. Kim et al. 2023a, b; Parrish et al. 2023; Wood et al. 2023). Gonca et al. demonstrated that all AI algorithms (multilayer perceptron (MLP), support vector machine (SVM), and gradient boosting machine (GBM)), except for C 5.0 decision tree classifier had similar overall predictive accuracies (Gonca et al. 2024a, b). Wood et al. reported that the accuracies of various techniques, including ridge, lasso, elastic net, XGBoost, random forest, and neural networks, did not differ significantly. However, the least squares method demonstrated notably greater errors when estimating growth along the Y-axis (Wood et al. 2023). Kim et al. reported that the least absolute shrinkage and selection operator (LASSO) demonstrated the highest prediction accuracy among several machine learning methods, including multiple regression analysis, radial basis function network, multilayer perceptron, and gradient-boosted decision tree, for predicting longitudinal craniofacial growth in the Japanese population (E. Kim et al. 2023a, b). Parrish et al. found that all of the algorithms (XGBoost regression, Random Forest regressor, Lasso, Ridge, Linear Regression, support vector regression (SVR)), except multilayer perceptron regressor, were similarly accurate in predicting the post-pubertal mandibular length and Y-axis in females (Parrish et al. 2023).
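The least-squares baseline mentioned by Wood et al. reduces, in the single-predictor case, to ordinary linear regression. The sketch below is a self-contained illustration with invented numbers (a hypothetical pre-peak mandibular length predicting post-pubertal length), not data from any cited study:

```python
def fit_least_squares(xs, ys):
    """Ordinary least-squares fit y = a + b*x, via the closed-form solution."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical (pre-peak length, post-pubertal length) pairs in mm;
# the toy data lie exactly on y = x + 18 so the fit is easy to check.
xs = [100.0, 105.0, 110.0, 115.0]
ys = [118.0, 123.0, 128.0, 133.0]
a, b = fit_least_squares(xs, ys)
predicted = a + b * 108.0
```

Regularized variants such as ridge and LASSO, favored in the studies above, add a penalty on the coefficients to reduce overfitting when many predictors are used.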
Photographic analysis, and video assessment
Facial photographs provide valuable information regarding the facial form and characteristics of individuals. AI-based models have been investigated for their capability to predict cephalometric information from facial photographs. Eight studies evaluated the performance of deep learning models in predicting cephalometric variables (Ali et al. 2022; Baksi et al. 2021; Cai et al. 2023; Q. Chang et al. 2024; Ito et al. 2024; Kocakaya et al. 2024; Soleiman Mezerji et al. 2023; Tanikawa and Chonho 2021). They used frontal photographs (Cai et al. 2023; Q. Chang et al. 2024; Tanikawa and Chonho 2021), profile photographs (Ali et al. 2022; Cai et al. 2023; Q. Chang et al. 2024; Ito et al. 2024; Kocakaya et al. 2024; Soleiman Mezerji et al. 2023; Tanikawa and Chonho 2021), oblique photographs (Cai et al. 2023), and 3D soft tissue images (Baksi et al. 2021). All of them, except Tanikawa et al., concluded that deep learning models showed promising ability to predict most cephalometric variables; Tanikawa et al. found that further improvement is needed (Tanikawa and Chonho 2021). Kocakaya et al. systematically evaluated multiple deep learning architectures for predicting cephalometric classifications from profile photographs and reported DenseNet 201 and EfficientNet V2 as the top-performing systems (Kocakaya et al. 2024). Ito et al. reported that InceptionResNetV2 outperformed all other evaluated models across all assessed performance metrics (Ito et al. 2024).
Zhang et al. used a deep learning model with lateral photographs for self-health management of skeletal malocclusion. They found it effective for preliminary classification of skeletal malocclusion, offering valuable insights into AI's potential for self-monitoring and early detection of skeletal malocclusion at home (H. Zhang et al. 2024). Kılıç et al. developed a mobile app encouraging parents to pursue orthodontic evaluations for their children prior to the recommended treatment age. The app uses machine learning to diagnose skeletal malocclusion from a single photograph, providing essential information on orthodontic problems, treatment age, and options (Kılıç et al. 2024).
Mohammed et al. developed and validated open-access software for automated analysis of smile videos, assessing features such as frequency, genuineness, duration, and intensity with acceptable accuracy. This software enables systematic analysis of how specific oral health conditions and their corresponding rehabilitative interventions influence smile aesthetics (Mohammed et al. 2022).
Dental model analysis
Jae-Hun Yu et al. carried out a comparative study examining the reliability and efficiency of automatic digital (AD) versus manual digital (MD) model analyses using intraoral scans. The findings indicate that the AD method produced more reproducible results in less time, but also revealed significant measurement differences compared with the MD method. Therefore, AD and MD analyses are not interchangeable and should be applied distinctly in clinical practice (J. H. Yu et al. 2023). Tamayo-Quintero et al. evaluated the performance of an AI-driven tool, DentalArch, in categorizing orthodontic arch forms as square, ovoid, or tapered. This software enhances orthodontic diagnostics by providing a data-driven method for arch shape classification (Tamayo-Quintero et al. 2024). Hack et al. compared complete AI algorithms with traditional medical software for detecting tooth position and shape in orthodontic anomalies. They found that AI algorithms provided better detection and stability than conventional methods (Hack et al. 2024). Nauwelaers et al. assessed different methods for constructing the human palate's statistical shape models (SSMs). The SAE framework represents an advanced tool for 3D palatal shape analysis, combining principal component analysis (PCA) with the adaptive capabilities of deep learning (Nauwelaers et al. 2021).
Segmentation of anatomic structures in 3D, and 2D images
Su et al. used a deep learning (DL) algorithm to automatically segment the PDL in cone-beam computed tomography (CBCT) images. Chair-side measurements of the PDL may improve both the efficiency and accuracy of diagnosis and treatment planning (Su et al. 2024). Four studies used deep learning models for tooth segmentation and labeling on CBCTs, and all found that artificial neural networks provide a reliable alternative for automatic tooth segmentation and classification (Alqahtani et al. 2023; Z. Chen et al. 2023a, b; M. Chung et al. 2020; Q. Li et al. 2020). Twelve studies used deep learning for tooth segmentation from 3D scans, and all reported effective performance (Cui et al. 2021; El Bsat et al. 2022; H. Q. Hu et al. 2023; Im et al. 2022; Krenmayr et al. 2024; Z. Liu et al. 2023; Vinayahalingam et al. 2023a, b; X. Wang et al. 2024a, b, c, d; Wu et al. 2022; X. Xu et al. 2019; Yacout et al. 2024; Y. Zhao et al. 2022). Yacout et al. compared dentOne and Medit Ortho Simulation software for automated tooth segmentation, finding both to be accurate; however, dentOne overestimated mesiodistal widths, particularly for premolars, so manual adjustment was needed for accuracy, though not for proximal surface reconstruction (Yacout et al. 2024). Hu et al. developed a high-precision, automated deep learning model for the classification and 3D segmentation of mixed dentition in cone-beam computed tomography (CBCT) images, demonstrating strong clinical applicability and robustness for both mixed and permanent dentition (Y. Hu et al. 2024).
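Segmentation accuracy in this literature is typically reported with overlap metrics such as the Dice similarity coefficient: twice the intersection of the predicted and reference masks, divided by their total size. A minimal sketch on flattened binary masks (the tiny masks below are placeholders, not real CBCT data):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two equal-length binary masks."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks are conventionally treated as a perfect match.
    return 2 * inter / total if total else 1.0

# Placeholder 1-D masks standing in for flattened voxel labels
# (1 = structure, 0 = background).
ground_truth = [0, 1, 1, 1, 0, 0, 1, 0]
prediction   = [0, 1, 1, 0, 0, 0, 1, 1]
score = dice(ground_truth, prediction)
```

A score of 1.0 means perfect overlap and 0.0 means none; the studies above generally report mean Dice values close to 1 for well-performing models.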
Three studies used CBCT and 3D scans for tooth segmentation with artificial intelligence (Al-Ubaydi & Al-Groosh 2023; Lee et al. 2022; Wei et al. 2022). Al-Ubaydi et al. used an artificial intelligence (AI) program (CephX®) with intraoral and CBCT scans for tooth segmentation, which may be advised in clinical practice for patients with mild crowding and no dental restorations (Al-Ubaydi & Al-Groosh 2023).
Two studies used convolutional neural network-based models for automatically segmenting cone-beam computed tomography scans of mandibles. Both of them showed accurate results (Lo Giudice et al. 2021; Yurdakurban et al. 2025). Ni et al. developed a deep learning-based system for segmenting the mandibular canal in cone beam computed tomography (CBCT) images. The results demonstrated that this artificial intelligence system improves clinical workflow efficiency in localizing the mandibular canal (Ni et al. 2024).
Lim et al. employed automated tracking of the inferior alveolar nerve (IAN) to enhance surgical speed and safety. Their findings indicate that a deep learning framework serves as a reliable, rapid, and accurate clinical tool for IAN localization (Lim et al. 2021). Verhelst et al. developed a multi-layered deep learning algorithm that automatically generates 3D surface models of the human mandible using CBCT data. The 3D U-Net architecture enhances processing speed, reduces operator-induced errors, and demonstrates superior accuracy compared to established clinical standards (Verhelst et al. 2021). Vinayahalingam et al. developed a deep learning-based automated segmentation tool for 3D reconstruction of the temporomandibular joint (TMJ). The AI tool effectively segmented mandibular condyles and glenoid fossae, but its robustness and generalizability may be limited due to its training on only one CBCT scanner type (Vinayahalingam et al. 2023a, b). Tao et al. developed a deep learning-based method for automatically segmenting zygomatic bones from CBCT scans; their model achieved high accuracy and efficiency in segmenting these bones (B. Tao et al. 2023a, b). Preda et al. examined a deep convolutional neural network (CNN) model for automated maxillofacial bone segmentation from CBCT images. The CNN demonstrated time-efficiency and accuracy in segmenting the maxillofacial complex (Preda et al. 2022). Pei et al. introduced a fully automated method for segmenting the anterior cranial base (ACB) in CBCT images by employing a volumetric convolutional neural network that incorporates nested residual connections (NRN). The NRN demonstrated faster convergence and significant improvements over traditional segmentation methods (Pei et al. 2017).
Nogueira-Reis et al. evaluated the combined segmentation performance of three convolutional neural network (CNN) models for constructing a maxillary virtual patient (MVP) from CBCT images. These models exhibited rapid processing times, high segmentation accuracy, and strong interobserver agreement in generating the MVP (Nogueira-Reis et al. 2023). Deng et al. studied a deep learning framework called SkullEngine for automatic segmentation and landmark detection of the midface, mandible, and teeth aimed at orthognathic surgical planning. The results showed that SkullEngine efficiently integrates segmentation and landmark detection in a multi-stage, coarse-to-fine approach (Deng et al. 2023). Wang et al. developed a mixed-scale dense (MS-D) convolutional neural network designed to improve multiclass segmentation accuracy for the jaw, teeth, and background in CBCT scans. Their results indicated high accuracy in multiclass segmentation, comparable to binary segmentation. This approach has the potential to substantially decrease the time required for anatomical structure segmentation, thereby enhancing the feasibility of patient-specific orthodontic treatment (H. Wang et al. 2021a, b).
The volume of the masseter muscle can change during orthodontic-orthognathic treatment, influencing facial appearance and the stability of the treatment (Coclici et al. 2019). Three-dimensional models of the masseter muscle can provide information about its size, shape, and 3D orientation in orthodontic and orthodontic-orthognathic treatments. Three studies used deep learning-based models for automatic segmentation of the masseter muscle from CBCT and CT images (Y. Jiang et al. 2022a, b, c; Peng et al. 2024; Y. Zhang et al. 2019). They found that these models could accurately segment the masseter muscle and improve clinical efficiency.
Therefore, AI could enhance our efficiency and accuracy in segmentation, labeling, image classification, and object detection.
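The segmentation studies summarized above typically quantify agreement with expert-drawn masks using the Dice similarity coefficient. As a purely illustrative sketch (the masks and values below are hypothetical and not drawn from any cited study), the metric can be computed as:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    pred, truth: equal-length iterables of 0/1 values (flattened masks).
    Returns a float in [0, 1]; 1.0 means perfect overlap.
    """
    pred = list(pred)
    truth = list(truth)
    assert len(pred) == len(truth), "masks must have the same shape"
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * intersection / total

# Hypothetical 1D "slice" of a predicted vs. ground-truth tooth mask
pred = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
print(dice_coefficient(pred, truth))  # → 0.8
```

In practice the same formula is applied to full 3D voxel volumes; reported Dice scores above roughly 0.9 are usually taken to indicate clinically acceptable segmentation.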
Orthodontic treatment planning, diagnostics, prediction, and risk monitoring
Five studies showed that machine learning models improve orthodontic diagnostics and treatment planning, leading to greater accuracy, shorter treatment times, fewer appointments, and higher patient satisfaction (Q. Chang et al. 2025; Gaonkar et al. 2024; Prasad et al. 2023; Shimizu et al. 2022; Tomášik et al. 2024). Successful treatment of some malocclusions requires careful decisions about tooth extractions, considering both aesthetics and dental health. Factors like crowding and tooth positioning guide these choices. Ten studies examined machine learning (ML) algorithms that predict extraction or non-extraction decisions. The ML models demonstrated high accuracy and precision in predicting extraction decisions among a racially and ethnically diverse patient population (L. Etemad et al. 2021; L. E. Etemad et al. 2024; Jung and Kim 2016; P. Li et al. 2019; Mason et al. 2023; Real et al. 2022; Shojaei & Augusto 2022; Takada 2016; Trehan et al. 2023; Xie et al. 2010). Ryu et al. developed artificial intelligence models for tooth landmark detection and extraction diagnosis. Using deep learning with orthodontic photographs, dental crowding categorization and diagnosis of orthodontic extraction were successfully determined (Ryu et al. 2023). Leavitt et al. studied how machine learning (ML) predicts orthodontic extraction patterns in a diverse population. They found that ML effectively predicted extraction patterns for upper and lower first premolars (U/L4s) and upper first premolars only (U4s) but was less accurate for patterns involving the upper first and lower second premolars (U4/L5s), upper second and lower first premolars (U5/L4s), and upper and lower second premolars (U/L5s). Key factors for these patterns included molar relationships, mandibular crowding, and overjet (Leavitt et al. 2023). Senirkentli et al. investigated the application of machine learning algorithms to support clinical decision-making between serial extraction and expansion of the maxillary and mandibular dental arches in the early treatment of borderline patients presenting with moderate to severe dental crowding (Senirkentli et al. 2023). Taraji et al. investigated morphological characteristics in post-circumpubertal Class III malocclusion cases and evaluated the effectiveness of machine learning algorithms for treatment planning in adults. They noted significant cephalometric differences in patients needing orthodontic camouflage versus surgery and created a 93% accurate AI model, highlighting AI's potential to reduce clinician subjectivity in borderline cases (Taraji et al. 2023). Also, Khosravi-Kamrani et al. used a statistical prediction model from previous cephalometric data to examine the relationship between Class III subtypes and treatment methods (surgical vs nonsurgical) and outcomes. Their findings indicated that subtype 1 (mandibular prognathic) was more likely to require orthognathic surgery, while subtypes 2 and 3 had lower treatment failure rates with orthodontics alone (Khosravi-Kamrani et al. 2022).
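The extraction-decision models described above are, at their core, classifiers trained on pretreatment measurements. The following is an illustrative sketch only: the features (crowding and overjet, in mm), labels, and fitted model are invented for demonstration and are far simpler than the models used in the cited studies.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.2, epochs=5000):
    """Fit a logistic-regression classifier by batch gradient descent."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_feat
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # derivative of log-loss w.r.t. the logit
            for j in range(n_feat):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    """1 = extraction recommended, 0 = non-extraction."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Hypothetical cases: [crowding_mm, overjet_mm]; label 1 = extraction
X = [[8.0, 6.0], [7.5, 5.0], [6.0, 7.0], [1.0, 2.0], [0.5, 3.0], [2.0, 1.5]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
print(predict(w, b, [7.0, 6.0]))  # severe crowding: classifier suggests extraction
```

Real systems in the cited studies use many more predictors (molar relationship, cephalometric values, arch dimensions) and stronger learners, but the underlying decision-boundary idea is the same.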
Yu et al. developed a sophisticated skeletal diagnostic system using a convolutional neural network (CNN) for an end-to-end diagnosis from lateral cephalograms, streamlining skeletal orthodontic diagnosis without complex procedures (H. J. Yu et al. 2020). Noeldeke et al. trained six convolutional neural networks to diagnose crossbites and classify them into non-crossbite, frontal, and lateral types using 2D intraoral photographs. The study emphasizes the potential of deep learning models in orthodontic diagnosis based on these images (Noeldeke et al. 2024). Makaremi et al. developed a methodological pipeline that combines CNN and interpretability techniques to analyze skull shape and the effects of Class II by mandibular retrognathia (C2Rm). This approach helped identify known and newly impacted structures, assessed their changes based on the severity of C2Rm, and offered insights into the evolution of human anatomy (Makaremi et al. 2023). Wang et al. investigated the relationships between hard and soft tissues in the lower third of the face in Class II hyperdivergent and Class I normodivergent patients using network analysis. Their results indicated that Class II hyperdivergent patients with a convex profile exhibited greater variations in soft tissue morphology along the sagittal axis. Additionally, the hard tissue landmarks relevant for soft tissue prediction were located more anteriorly in the vertical dimension (T. Wang et al. 2024a, b, c, d).
Two studies assessed the performance of artificial intelligence models in predicting orthodontic treatment outcomes. Both studies reported high predictive accuracy in most evaluated cases (Alam et al. 2024; Auconi et al. 2015). Volovic et al. created an innovative machine-learning model to anticipate the duration of orthodontic treatment by utilizing crucial pre-treatment variables. The model effectively estimates treatment times within a clinically acceptable range and recognizes several significant predictors, such as extraction choices, the impact of the COVID-19 pandemic, intermaxillary relationships, positioning of the lower incisors, and the incorporation of additional appliances (Volovic et al. 2023). Nine studies used artificial neural networks to predict facial profiles after orthodontic treatment. All but one study showed clinically acceptable accuracy and usability (Cho et al. 2024; Gong et al. 2025; Guo et al. 2023; J. H. Park et al. 2021; Y. S. Park et al. 2022; Ramasubbu et al. 2024; Tanikawa et al. 2024; Tian et al. 2022; Xing et al. 2023). Cho et al. assessed an AI model for predicting soft tissue and alveolar bone changes after orthodontic treatment and compared it to conventional methods. While AI was less effective than conventional statistical methods, it excelled in predicting variable soft tissue landmarks (Cho et al. 2024). Two studies investigated facial soft tissue shape patterns in extraction cases using AI. They found that identifying pretreatment profile patterns aids in selecting soft tissue to hard tissue movement ratios, which can reliably estimate post-treatment facial profiles (Guo et al. 2023; Tanikawa et al. 2024).
Salmanpour et al. compared various CNN models and machine learning algorithms in predicting patient cooperation from frontal photographs and voice recordings. They found that while some CNN models could predict cooperation from photographs, voice data was less effective (Salmanpour & Camci 2024).
Ali et al. established a neural network to predict the size of unerupted premolars and canines in the Iraqi population. Their study showed that machine learning in orthodontics can accurately predict the size of unerupted teeth (Ali et al. 2021). Rauf et al. used artificial intelligence to predict arch width, helping prevent future crowding in both growing and adult patients seeking orthodontic treatment. They found that machine learning can improve orthodontic diagnosis and treatment planning by predicting dental arch measurements and preventing anterior segment malocclusion (Rauf et al. 2023). Also, Alsubhi et al. created a deep learning (DL) model for predicting dental crowding, with MobileNetV3 Small and CLAHE achieving the highest accuracy at 0.907. Limitations included a small dataset and the possible merging of DL architectures (Alsubhi et al. 2024).
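Predicting the size of unerupted teeth from erupted ones is, in its simplest form, a regression problem. The sketch below is purely illustrative: the measurement pairs and fitted coefficients are invented, not taken from Ali et al., and a single-predictor least-squares fit stands in for their neural network.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y ≈ a*x + b with a single predictor."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical training pairs: sum of lower incisor widths (mm) ->
# combined width of the unerupted canine + premolar segment (mm)
incisors = [21.0, 22.0, 23.0, 24.0, 25.0]
segment = [20.5, 21.0, 21.6, 22.1, 22.6]
a, b = fit_linear(incisors, segment)
predicted = a * 23.5 + b  # estimate for a new patient's incisor sum of 23.5 mm
print(round(predicted, 2))
```

This mirrors classical mixed-dentition analyses (e.g., regression-based space prediction); ML models generalize the idea to many predictors and nonlinear fits.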
Chen et al. developed the first deep learning-based cross-temporal multimodal image fusion system for acquiring root and jawbone information without the need for additional radiation. This innovation enables orthodontists to continuously monitor risk during treatment while minimizing patient exposure to radiation (H. Chen et al. 2024). Orthodontically induced root resorption (OIRR) is a prevalent side effect of orthodontic treatment. Traditionally, manual methods have been employed for 3D quantitative analysis of OIRR using CBCT, but these approaches can often be subjective and time-consuming. However, recent advancements in computer technology have enabled the application of deep-learning techniques to medical image processing. Zheng et al. developed a deep-learning model that extracts root volume and identifies root resorption from CBCT images. This model provides reliable methods for assessing OIRR and supports more effective treatment planning (Zheng et al. 2025). Xu et al. evaluated six deep CNNs for grading orthodontically induced external root resorption (OIERR) on tooth slices. The results demonstrated that CNNs performed better than orthodontists, indicating that the proposed grading system offers reliable diagnostic support in clinical practice (S. Xu et al. 2024).
Need for orthodontic treatment
Thanathornwong et al. developed a clinical decision support system that enables general practitioners to assess the necessity of orthodontic treatment in patients with permanent teeth, thereby improving the accuracy of patient classification (Thanathornwong 2018). Stetzel et al. used artificial intelligence to automate the aesthetic assessment of the Index of Orthodontic Treatment Need (IOTN), showing promising accuracy. However, further improvements are needed for clinical use. Excluding overjet from the analysis improved outcomes, indicating the potential for better accuracy in practice (Stetzel et al. 2024).
Malocclusion classification
Ueda et al. developed an artificial intelligence (AI) model that utilizes cephalometric measurements to classify maxillofacial morphology accurately. While the AI model in this study demonstrated effective classification of maxillofacial morphology, there is potential for further enhancement with additional learning data (Ueda et al. 2023).
Five studies developed a malocclusion classification model using Convolutional Neural Network (CNN) to classify dental images, showing that CNNs can efficiently aid orthodontists in classifying malocclusions (Harun et al. 2022; Johannes et al. 2023; Kim et al. 2022; H. Li et al. 2022a, b; Tafala et al. 2022). Kim et al. introduced two multi-channel deep learning architectures, "Ensemble" and "Synchronized multi-channel," for classifying skeletal malocclusions from 3D CBCT images, achieving over 93% accuracy—superior to single-channel models using 2D images (I. Kim et al. 2020a, b). Four studies used an artificial intelligence (AI) system to classify dental occlusion and improve orthodontic diagnosis using intraoral photographs (Bardideh et al. 2024; Cejudo Grano de Oro et al. 2022; Ryu et al. 2022; Talaat et al. 2021). All of them demonstrated accurate detection. However, in the Bardideh et al. study, AI performance was not acceptable in measuring overjet and overbite compared with expert orthodontists (Bardideh et al. 2024). Also, Zhao et al. developed a machine learning model for classifying dental, skeletal, and functional Class III malocclusions. Their research indicated that ML models using cephalometric data can aid dentists in this classification for children. Key features like SN_GoMe, U1_NA, and Overjet are important for predicting the severity of Class III malocclusions (L. Zhao et al. 2024).
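The accuracy figures quoted for these classifiers are conventionally derived from a confusion matrix over the malocclusion classes. A minimal, hypothetical illustration (the labels below are invented and do not come from any cited study):

```python
from collections import Counter

def confusion_matrix(true_labels, pred_labels, classes):
    """Rows = true class, columns = predicted class."""
    counts = Counter(zip(true_labels, pred_labels))
    return [[counts[(t, p)] for p in classes] for t in classes]

def accuracy(true_labels, pred_labels):
    correct = sum(t == p for t, p in zip(true_labels, pred_labels))
    return correct / len(true_labels)

# Hypothetical model output on 10 cases (Angle Class I/II/III)
truth = ["I", "I", "I", "II", "II", "II", "II", "III", "III", "III"]
pred = ["I", "I", "II", "II", "II", "II", "I", "III", "III", "II"]
classes = ["I", "II", "III"]
print(confusion_matrix(truth, pred, classes))
print(accuracy(truth, pred))  # → 0.7
```

Per-class rows of the matrix also yield sensitivity and precision, which is why studies report more than a single overall accuracy number.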
Clear Aligner
Clear aligner therapy (CAT) is an increasingly favored approach for the treatment of malocclusions, yet predicting outcomes and determining necessary adjustments remains challenging. Accurate predictions not only improve decision-making but also help shorten treatment durations (Wolf et al. 2024).
Wolf et al. studied factors impacting the success of clear aligner therapy (CAT) in adults. Their findings showed moderate predictive accuracy using logistic regression and XGBoost, emphasizing the significance of these factors in reducing refinements and potentially resulting in shorter treatment durations, reduced discomfort, and fewer visits to the clinic (Wolf et al. 2024).
Two research studies assessed how accurate ChatGPT's responses were regarding orthodontic clear aligners. Abu Arqub et al. found that the responses were suboptimal and lacked citations, indicating limited accuracy. In contrast, Tanaka et al. indicated that ChatGPT effectively provided high-quality responses concerning clear aligners, temporary anchorage devices, and digital imaging (Abu Arqub et al. 2024; Tanaka et al. 2023). Dursun et al. assessed the precision, reliability, quality, and comprehensibility of the answers provided by ChatGPT-3.5, ChatGPT-4, Gemini, and Copilot about clear aligners. All chatbots provided generally accurate and moderately reliable answers, but readability was challenging. Although ChatGPT, Gemini, and Copilot appear to be promising tools for patient information in orthodontics, they require more evidence-based content and improved readability to be fully effective (Dursun and Bilici Geçer 2024).
Murphy et al. evaluated the movement of maxillary incisors and canines between Invisalign and traditional fixed orthodontic appliances by utilizing AI. Their research indicated that individuals using fixed appliances experienced notably greater movement of maxillary teeth in every direction, especially in the rotation and tipping of the maxillary canine (Murphy et al. 2023). Lee et al. identified specific predictors related to malocclusion type and severity that affect the overall duration of Invisalign treatment by analyzing intraoral digital scans. The study utilized a deep learning approach for automated tooth segmentation and precise landmark identification. There was insufficient evidence to show that specific tooth movements impact aligner treatment time. However, a composite score predicts treatment duration more accurately than individual malalignment factors (S. Lee et al. 2023a, b, c).
Xu et al. developed ANNs to forecast patient experiences in the initial stages of treatment. These ANNs effectively predicted aspects like discomfort, anxiety, and quality of life, indicating that AI systems could enhance patient comfort and compliance in clinical practice (L. Xu et al. 2022). Thurzo et al. investigated the impact of customized computerized decision algorithms in a clinical orthodontic application. They evaluated factors like patient-app engagement, discipline, and clinical aligner tracking using an AI system (AI system-dental monitoring®). All factors demonstrated considerable enhancement following the update, with the exception of clinical non-tracking in males as evaluated through AI video scans. These updates can enhance clinical performance and improve patient compliance (Thurzo et al. 2021). Ferlito et al. employed smartphone deep learning algorithms to assess patients' readiness to move on to the next aligner and determine areas where the teeth are not tracking with the clear aligners. This study assessed the consistency of the application's Go or No-Go instructions and examined 3D discrepancies causing unseating. Their findings raise concerns about the reliability of remote monitoring because of compatibility problems with gauges. Additionally, discrepancies in tooth positioning suggest that AI decisions were inconsistent with quantitative data (Ferlito et al. 2023). Liu et al. created a 3D fusion model using AI that combines CBCT and intraoral scanning data based on Andrews' Six Element standard. This model evaluates the impact of clear aligners on tooth movement and aids in planning treatment by providing reliable pretreatment target positions, representing a significant advancement in orthodontic precision and personalization (C. Liu et al. 2025).
Feng et al. developed a transparent aligner with embedded piezoelectric sensors using flexible printed circuits. This fully integrated system features wireless monitoring and machine learning, enabling advanced intraoral sensing and facilitating remote orthodontic care (Feng et al. 2025).
Two studies used SmileView in clear aligner therapy (Adel et al. 2024; Mourgues et al. 2024). Mourgues et al. examined whether SmileView™ (SV) can accurately simulate orthodontic results and modify tooth anatomy. They found that SV generally produces simulations of broader smiles, particularly with aligners, and shows high predictability for vertical movements of incisors. Additionally, it adjusts the mesiodistal size of upper incisors and can correct deviations between dental and facial midlines (Mourgues et al. 2024). Also, Adel et al. assessed the predictability of the Invisalign SmileView tool for digital smile simulations compared to actual treatment outcomes. They found that while it can simulate smiles for pre-treatment with some reliability—using parameters like philtrum height and smile width—other factors such as smile arc and tooth display were not accurately predicted. Overall, actual post-treatment results showed better aesthetics than the AI-generated simulations (Adel et al. 2024).
Orthodontic appliances
Akçam et al. created a computer-assisted model to select headgear appliances for orthodontic patients. They found that most of the examiners expressed satisfaction with the recommendations, validating the model’s effectiveness (Akçam and Takada 2002). Aldabbagh et al. created a mobile application to improve the automatic and accurate placement of wired brackets in orthodontics. The assessment results indicated that the application could achieve its goals for speed and accuracy (Aldabbagh et al. 2019).
Three studies developed a virtual orthodontic bracket removal tool using deep learning. All studies demonstrated that AI-based virtual bracket removal (VBR) is accurate enough for practical use (ElShebiny et al. 2024; R. Li et al. 2024).
Oral hygiene and periodontal health
Snider et al. assessed Dental Monitoring's (DM) artificial intelligence (AI) in enhancing oral hygiene during orthodontic treatment, and they reported that weekly DM scans and providing tailored notifications could lead to better oral hygiene outcomes over time in individuals undergoing orthodontic care (Snider et al. 2023). Ozsunkar et al. assessed the YOLOv5x algorithm's performance in detecting white spot lesions using oral photographs taken after orthodontic treatment. The model's accuracy and sensitivity were not as high as anticipated for practical applications (Ozsunkar et al. 2024). Andrade et al. evaluated the U-Net neural network for automated dental biofilm detection in tooth images. They found that utilizing U-Net for the visual segmentation of dental biofilm is practical, aiding professionals and patients in improving oral hygiene (Andrade et al. 2023). Machoy et al. found that using class I elastics in orthodontic treatment for individuals with healthy periodontal conditions does not notably affect periodontal temperature, indicating safe orthodontic forces for clinical use. Integrating artificial intelligence in evaluating gingival temperature aids in removing variables that could interfere with unbiased thermographic evaluation (Machoy et al. 2021). Alalharith et al. developed advanced object detection and deep learning algorithms to detect periodontal disease in orthodontic patients using intraoral images automatically. Their study demonstrated the effectiveness of these models in diagnosing gingivitis, highlighting their potential for non-invasive, preemptive diagnosis in dentistry to reduce the severity of periodontal disease globally (Alalharith et al. 2020). Gurgel et al. studied the effects of piezocision surgery on orthodontic biomechanics and movement of teeth in the mandibular arch utilizing AI tools. 
They found that piezocision before treatment did not reduce treatment time or affect biomechanics, resulting in tooth movements comparable to conventional methods (Gurgel et al. 2024).
Sex determination
Human sexual dimorphism is a well-researched area that examines various psychological and biological traits, notably in the craniofacial complex. While the face is a prominent indicator of identity, differences between sexes can also be seen in teeth dimensions and mandible size. Machine learning—an artificial intelligence subset—has recently enhanced sex determination by analyzing craniofacial structures using predictive models based on training data (Küchler et al. 2024). We discovered four studies in the databases that created deep-learning models for more effective and reliable sex estimation. All studies achieved high accuracy by employing deep learning methods and AI tools (C. M. de Araujo et al. 2024; Hase et al. 2024; Kök et al. 2021; Küchler et al. 2024). Küchler et al. examined the combination of mandibular and dental dimensions for sex determination using machine learning methods (Küchler et al. 2024). De Araujo et al. used dental arch and maxillary skeletal base measurements to determine sex with supervised machine learning (C. M. de Araujo et al. 2024). Also, Kök et al. used ANN to determine the growth periods and gender of cervical vertebrae, achieving satisfactory results (Kök et al. 2021).
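The sex-estimation models above classify cases from craniofacial measurements. As a deliberately simplified, hypothetical sketch (a nearest-centroid rule on invented mandibular measurements, far simpler than the ANN and deep-learning models actually used in the cited studies):

```python
import math

def centroid(points):
    """Component-wise mean of a list of measurement vectors."""
    dims = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dims)]

def nearest_centroid_predict(x, centroids):
    """Return the label of the closest class centroid (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Hypothetical training measurements: [ramus height, bigonial width] (mm)
female = [[58.0, 92.0], [60.0, 94.0], [59.0, 90.0]]
male = [[66.0, 100.0], [68.0, 103.0], [65.0, 99.0]]
centroids = {"F": centroid(female), "M": centroid(male)}
print(nearest_centroid_predict([67.0, 101.0], centroids))  # → M
```

The principle — learning class prototypes from measured dimorphic structures and assigning new cases by similarity — underlies the more sophisticated supervised models reported in the literature.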
Discussion
This review highlighted the potential of AI in orthodontic diagnosis and clinical decision making. AI could enhance the accuracy of orthodontic assessments in diverse conditions such as facial asymmetry, cleft lip and palate, impacted canines, landmark detection, cephalometric, photographic, and dental model analysis, as well as malocclusion classification. In addition, AI supports clinicians in treatment planning and improving treatment effectiveness by determining orthodontic need, predicting growth and CVM stage to optimize treatment timing, guiding decision making and appliance selection, and promoting oral hygiene. Given the wide applications of AI across multiple aspects of orthodontics, the following sections discuss its use in specific domains.
Diagnostic applications
Facial asymmetry
AI demonstrated acceptable performance in assessing facial asymmetry using facial photographs and PA cephalograms, with accuracy levels comparable to experts’ evaluations in 2D imaging modalities (S. M. Jeon et al. 2024; Takeda et al. 2021; Yurdakurban et al. 2021). However, its application to 3D imaging modalities such as CT scans remains limited, due to the current discrepancies observed when compared to manual analysis (N. Kazimierczak et al. 2024a, b, c). These findings underscore the need for further improvement of AI models, including larger and more diverse training datasets, improved algorithms, and validation against manual methods.
Facial attractiveness
The results provided evidence of AI’s promising performance in demonstrating the positive impacts of orthodontic treatment on facial attractiveness (Patcas et al. 2019a, b). Most studies confirmed that AI-based models are reliable and accurate for assessing facial attractiveness using facial photographs (Grillo et al. 2024; S. Yu et al. 2024). However, one study suggested that it should be considered as an adjunctive tool rather than a replacement for clinician assessment (Peck et al. 2022). Such discrepancies may be attributed to methodological differences across the studies, such as variations in AI-based software or models, differences in training datasets, or the criteria applied to evaluate attractiveness. Further refinement is required to improve its efficiency.
Cleft
AI demonstrated accurate and efficient performance in prenatal prediction of clefts, as well as in the diagnostic analysis and estimation of future surgical need of cleft patients (Alam and Alfawzan 2020a, 2020b; Alam et al. 2021; Arslan et al. 2024; Kamei et al. 2024; G. Lin et al. 2021; Omar et al. 2018; Shafi et al. 2020). AI was able to compare cleft and non-cleft patients across multiple parameters, including craniofacial morphology, dental characteristics, and skeletal maturation. These findings suggest that AI could be a useful adjunct for the early detection and diagnosis of cleft patients, complementing traditional clinical assessments.
Impacted canine
AI has recently been employed to predict the occurrence of maxillary impacted canines. While initial results have demonstrated promising performance, further refinement is required to enhance predictive accuracy (Minhas et al. 2024). Also, AI has shown considerable success in the classification of impacted canines (Ericson and Kurol 1988; Yamamoto et al. 2003), suggesting its potential role in improving early diagnosis and treatment planning.
Cephalometric landmarks’ detection and cephalometric analysis
Based on the results, several studies have reported that AI is not fully reliable or precise in identifying cephalometric landmarks and performing cephalometric analysis, highlighting the need for clinicians’ supervision to verify landmark placement (Bao et al. 2023; Bulatova et al. 2021; S. Chang et al. 2023; J. Chen et al. 2023a, b; Dai et al. 2019; Danisman 2023; Davidovitch et al. 2022; El-Dawlatly et al. 2024; Kang et al. 2024; J. Kim et al. 2021a, b, c, d; M. J. Kim et al. 2021a; Le et al. 2022; J. Lee et al. 2023a, b, c; J. H. Lee et al. 2020; J.-H. Moon et al. 2020; O'Friel et al. 2024; Ramadan et al. 2022; Ristau et al. 2022; Silva et al. 2022; Tanikawa et al. 2021b; Ugurlu 2022; Wirtz et al. 2020; Zeng et al. 2021; Zese et al. 2023). In contrast, other studies have demonstrated that AI can achieve acceptable accuracy in both landmark detection and cephalometric analysis (H. J. Ahn et al. 2024; Alessandri-Bonetti et al. 2023; Banumathi et al. 2011; Hwang et al. 2020; Indermun et al. 2023; Jiang et al. 2023; King et al. 2022; Kwon et al. 2021; Nishimoto et al. 2019; Song et al. 2020; Tsolakis et al. 2022; Yao et al. 2022; Ye et al. 2023; C. Y. Zhao et al. 2023a, b). These discrepancies may be attributed to variations in AI models, training datasets, types and definitions of landmarks, and sample size. Notably, AI has shown promising accuracy in identifying landmarks on PA cephalograms and 3D images, suggesting potential for broader clinical application with further refinement.
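Studies of AI landmark detection typically report the mean radial error (MRE) and the success detection rate (SDR) within a clinical threshold such as 2 mm, which is how the accuracy claims above are compared. A minimal sketch with hypothetical coordinates (not taken from any cited study):

```python
import math

def radial_errors(predicted, truth):
    """Euclidean distances (e.g., in mm) between paired 2D landmark points."""
    return [math.dist(p, t) for p, t in zip(predicted, truth)]

def mean_radial_error(predicted, truth):
    errs = radial_errors(predicted, truth)
    return sum(errs) / len(errs)

def success_detection_rate(predicted, truth, threshold_mm=2.0):
    """Fraction of landmarks localized within the clinical threshold."""
    errs = radial_errors(predicted, truth)
    return sum(e <= threshold_mm for e in errs) / len(errs)

# Hypothetical AI vs. manual landmark coordinates (mm)
ai = [(10.0, 20.0), (31.0, 40.0), (55.0, 61.0)]
manual = [(10.0, 21.0), (30.0, 40.0), (52.0, 61.0)]
print(round(mean_radial_error(ai, manual), 3))  # → 1.667
print(success_detection_rate(ai, manual))  # 2 of 3 landmarks within 2 mm
```

Because a single large outlier inflates the MRE while barely moving the SDR, papers usually report both, which partly explains the conflicting conclusions across studies.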
Cervical vertebra maturation stage, mid-palatal suture maturation stage, and growth prediction
The majority of studies indicate that AI is capable of identifying the CVM stage with high accuracy and reliability, supporting its potential as a valuable adjunct in growth assessment. However, some have emphasized the need for further refinements to enhance diagnostic precision (Akay et al. 2023; H. Amasya et al. 2020a, b; Makaremi et al. 2019; Mohammad-Rahimi et al. 2022).
Only limited evidence has addressed the potential of AI in determining the maturation stage of mid-palatal suture, reporting promising performance (Tang et al. 2024; Zhu et al. 2024). Nevertheless, further trials are required to validate these results and establish the clinical application of AI in this domain.
Based on the results, AI exhibited some inaccuracies in growth prediction, suggesting that its application remains limited. Further refinements are required to enhance prediction accuracy in specific areas (Gonca et al. 2024a, b; E. Kim et al. 2023a, b; Larkin et al. 2024; J. H. Moon et al. 2024; Parrish et al. 2023; Perillo et al. 2021; Wood et al. 2023; Zakhar et al. 2023; Z. Zhang et al. 2022).
Photographic analysis and dental model analysis
Most studies have shown that AI performs well in predicting cephalometric variables from facial photographs, highlighting its potential as a diagnostic adjunct (Ali et al. 2022; Baksi et al. 2021; Cai et al. 2023; Q. Chang et al. 2024; Ito et al. 2024; Kocakaya et al. 2024; Soleiman Mezerji et al. 2023). Furthermore, AI models performed accurately in dental model analysis, helping clinicians efficiently evaluate arch form and detect tooth positions (Hack et al. 2024; Tamayo-Quintero et al. 2024).
Segmentation of anatomic structures in 3D and 2D images
A wide range of studies have indicated that AI can accurately and efficiently segment, label, and classify anatomic structures such as teeth, PDL, mandible, mandibular canal, inferior alveolar nerve, TMJ, zygomatic bones, maxillofacial bone, anterior cranial base, and masseter muscle in CBCT and 3D scans. Convolutional neural networks and 3D U-Net models can enhance time efficiency, reduce operator-related errors, and provide outstanding accuracy in comparison to the clinical standard (Verhelst et al. 2021). Furthermore, CNN models can create a maxillary virtual patient from CBCT images (Nogueira-Reis et al. 2023). This advancement points to a future where personalized orthodontic and orthognathic care will become more accessible and reliable.
Malocclusion classification
AI models have shown strong capabilities in classifying malocclusions from dental images such as lateral cephalograms, CBCT scans, and intraoral photographs. Numerous models have demonstrated high accuracy in identifying skeletal and dental patterns, although they show some limitations in measuring overjet and overbite compared with expert orthodontists (Bardideh et al. 2024). Overall, integrating AI into malocclusion classification can improve the diagnostic accuracy and efficiency of orthodontic treatment.
Sex determination
Recent studies have demonstrated that artificial intelligence can analyze craniofacial structures to assess human sexual dimorphism. By focusing on differences in dental and mandibular structures, AI models can estimate sex with high accuracy. Whether based on measurements of the dental arch and maxillary skeletal base or on evaluation of the cervical vertebrae, these approaches offer faster and more reliable tools for sex estimation (C. M. de Araujo et al. 2024; Kök et al. 2021; Küchler et al. 2024).
Treatment-related applications
Orthodontic treatment planning, diagnostics, prediction, and risk monitoring
AI is utilized in diagnosing orthodontic issues, planning treatments, and monitoring risks, resulting in enhanced accuracy, shortened treatment times, fewer visits, and greater patient satisfaction (Q. Chang et al. 2025; Gaonkar et al. 2024; Prasad et al. 2023; Shimizu et al. 2022; Tomášik et al. 2024). These models can predict the need for tooth extractions, treatment duration, and post-treatment facial profiles more accurately and efficiently, helping orthodontists reduce clinician subjectivity in borderline cases (Taraji et al. 2023). Furthermore, CNN models offer solutions for assessing patient cooperation (Salmanpour and Camci 2024), estimating the size of unerupted teeth (Ali et al. 2021), and detecting orthodontically induced root resorption (OIRR) (Zheng et al. 2025) by analyzing large datasets faster and more reliably than human clinicians alone. However, given factors such as limited datasets and a lack of generalizability, more research is needed; AI should therefore be regarded not as a replacement for clinical skill but as a powerful extension of it. These technologies can enhance the quality of orthodontic treatment by making procedures more efficient, personalized, and predictable.
Need for orthodontic treatment
Artificial intelligence can be used to automate the aesthetic component of the Index of Orthodontic Treatment Need (IOTN) and assess whether orthodontic care is needed. Studies have demonstrated encouraging accuracy, particularly when evaluation criteria were refined, such as by excluding overjet. However, further enhancements are required before practical clinical application (Stetzel et al. 2024; Thanathornwong 2018).
Clear aligner
The incorporation of AI into clear aligner therapy (CAT) is enhancing the accuracy, personalization, and efficiency of orthodontic treatment. AI models can improve decision-making, decrease treatment duration, reduce the frequency of clinic visits, enhance patient comfort and compliance, enable remote monitoring, and improve tooth movement (Wolf et al. 2024). Nonetheless, AI models have certain limitations, such as inconsistent predictions and poor readability of chatbot-generated responses (Abu Arqub et al. 2024; Dursun and Bilici Geçer 2024). Moreover, tools such as SmileView can improve patient engagement and expectation management, although actual post-treatment results showed better aesthetics than the AI-generated simulations (Adel et al. 2024; Mourgues et al. 2024). Together, these findings underscore the growing utility of AI in clear aligner therapy within orthodontic practice.
Orthodontic appliances
Recent studies indicate that AI technologies can improve accuracy, efficiency, and clinical decision-making in procedures such as headgear selection (Akçam and Takada 2002), bracket placement (Aldabbagh et al. 2019), and virtual bracket removal (ElShebiny et al. 2024). These findings highlight the potential of AI to support clinical decision-making, underscoring its value for incorporation into orthodontic practice.
Oral hygiene, and periodontal health
Studies have demonstrated the effectiveness of AI models in diagnosing gingivitis and promoting oral hygiene during orthodontic treatment. Tools such as dental monitoring systems and AI-based biofilm detection have the potential to improve orthodontic patients’ oral hygiene over time (Snider et al. 2023). While algorithms like YOLOv5x showed promising performance in detecting white spot lesions, further refinement is still required (Ozsunkar et al. 2024). In addition, AI has been applied to evaluate gingival temperature (Machoy et al. 2021), detect periodontal disease (Alalharith et al. 2020), and assess the impact of piezocision surgery on orthodontic biomechanics (Gurgel et al. 2024).
Conclusions
In summary, this narrative review highlights the considerable advantages of AI-based algorithms and deep learning techniques in orthodontic diagnosis and treatment planning. They enhance clinical decision-making, improve treatment planning, and boost efficiency by reducing human error and operator variability. While the results are promising, clinicians must continue to supervise and validate AI-driven recommendations: AI should complement, not replace, clinical expertise, supporting orthodontists in providing high-quality care. Continued research and rigorous trials are necessary to refine these models and ensure their safe and effective integration into practice.
Abbreviations
- AC
Aesthetic Component
- ACB
Anterior Cranial Base
- AD
Automatic Digital
- ADD
Anterior Disc Displacement
- AI
Artificial Intelligence
- AIDRM
Artificial Intelligence Driven Remote Monitoring
- ANN
Artificial Neural Network
- ARDA
Ageing-Related Dynamic Attention
- AttU-Net
Attention U-net
- BCLP
Bilateral Cleft Lip and Palate
- CAT
Clear aligner therapy
- CBCT
Cone-Beam Computed Tomography
- CFOD
Computational Formulation of Orthodontic referral Decisions system
- Cforest
Conditional Forest
- CG
Control Group
- CGAN
Conditional Generative Adversarial Network
- CHTM
Conventional Hand Tracing Method
- CLP
Cleft Lip and Palate
- CNN
Convolutional Neural Network
- CNN-PC
Convolutional Neural Networks for Landmark Patch Classification
- CNN-PE
Convolutional Neural Networks for Point Estimation
- C2RM
Class II by Mandibular Retrognathia
- CT
Computed Tomography
- CVM
Cervical Vertebral Maturation
- DACFL
Deep Anatomical Context Feature Learning
- DC
Dental Characteristics
- DenseNet
Dense Convolutional Network
- DGCNN
Dynamic Graph CNN
- DL
Deep Learning
- DM
Dental Monitoring
- DM
Digital Manual
- DT
Decision Tree
- DTM
Driven Tracing Method
- DWD
Distance-Weighted Discrimination
- FARNet
Feature Aggregation and Refinement Network
- FOV
Field Of View
- GAN
Generative Adversarial Network
- GBM
Gradient Boosting Machine
- GPR
Gaussian Process Regression
- IAN
Inferior Alveolar Nerve
- IOS
Intraoral Scanner
- IOTN
Index of Orthodontic Treatment Need
- ITMs
Integrated Tooth Models
- KNN
K-Nearest Neighbours
- LASSO
Least Absolute Shrinkage and Selection Operator
- LI
Landmark Identification
- LINKS
Learning-based multi-source Integration framework for Segmentation
- LR
Logistic Regression
- MA
Model Analyses
- MCSA
Maximum Cross-Sectional Area
- ML
Machine Learning
- MLP
Multilayer Perceptron
- MLPs
Machine Learning Plannings
- MM
Masseter Muscle
- MONAI
Medical Open Network for Artificial Intelligence Label
- MPL
Multilayer Perceptron Classifier
- MRI
Magnetic Resonance Imaging
- MS-D
Mixed-Scale Dense
- MSR
Multiple Stepwise Regression
- MT
Manually Traced
- MTCNN
Multi-task Cascade Convolutional Neural Network
- MVP
Maxillary Virtual Patient
- NB
Naive Bayes
- NC
Non-Cleft
- NN
Neural Network
- NNMs
Neural Network Models
- NRC
Nested Residual Connections
- NSD
Nasal Septum Deviation
- OIERR
Orthodontically Induced External Root Resorption
- OIRR
Orthodontically induced root resorption
- PA
Posteroanterior
- PCA
Principal Component Analysis
- PLS
Partial Least Squares Regression
- PNN
Pointnet Neural Network
- QDA
Quadratic Discriminant Analysis
- Res model
Resnet-Concat model
- RF
Random Forest
- RNN
Recurrent Neural Network
- SAE
Singular Autoencoder
- SATM
Smartphone Application Tracing Method
- SfA model
Shuffle-Attention model
- SG
Study Group
- sklearn
Scikit-Learn
- SMI
Skeletal Maturity Indicators
- SMLA
Supervised Machine Learning Algorithms
- SSD
Single Shot Multibox Detector
- SSMs
Statistical Shape Models
- ST
Sella Turcica
- STM
Segmentation of Tooth Model
- SV
SmileView™
- SVC
Support Vector Classification
- SVM
Support Vector Machine
- SVR
Support Vector Regression
- 3D
Three-Dimensional
- TSGCN
Two-Stream Graph Convolutional Network
- TS-MDL
Two-Stage Mesh Deep Learning
- UCLA
Unilateral Cleft Lip and Alveolus
- UCLP
Unilateral Cleft Lip and Palate
- U4s
Upper First Premolars Only
- U/L4s
Upper and Lower First Premolars
- U4/L5s
Upper First and Lower Second Premolars
- U/L5s
Upper and Lower Second Premolars
- U5/L4s
Upper Second and Lower First Premolars
- VBR
Virtual Bracket Removal
- ViT
Vision Transformer
- XGBoost
Extreme Gradient Boosting
- YOLO
You Only Look Once
Authors contributions
Sania Azizi (SA): Conceptualization, Methodology, Investigation, Writing-Original draft.
Sepehr Hatampoor (SH): Investigation, Writing-Review and Editing, Supervision.
Shabnam Tahamtan (SHT): Conceptualization, Methodology, Investigation, Writing-Review and Editing, Supervision.
All authors read and approved the final version of the paper.
Funding
None.
Data availability
The dataset(s) supporting the conclusions of this article is(are) included within the manuscript.
Declarations
Ethics approval
Not applicable. This study is a narrative review.
Informed consent
Not applicable. This study is a narrative review.
Competing interest
The authors declare no conflict of interest.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Abdulkreem A, Bhattacharjee T, Alzaabi H, Alali K, Gonzalez A, Chaudhry J, Prasad S (2024) Artificial intelligence-based automated preprocessing and classification of impacted maxillary canines in panoramic radiographs. Dentomaxillofac Radiol 53(3):173–177. 10.1093/dmfr/twae005
- Abe JM, Ortega NRS, Mário MC, Del Santo M (2005) Paraconsistent artificial neural network: an application in cephalometric analysis. In: Khosla R, Howlett RJ, Jain LC (eds) KES J, Pt 2, Proceedings Vol 3682, pp 716–723
- Abu Arqub S, Al-Moghrabi D, Allareddy V, Upadhyay M, Vaid N, Yadav S (2024) Content analysis of AI-generated (ChatGPT) responses concerning orthodontic clear aligners. Angle Orthod 94(3):263–272. 10.2319/071123-484.1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Adel SM, Bichu YM, Pandian SM, Sabouni W, Shah C, Vaiid N (2024) Clinical audit of an artificial intelligence (AI) empowered smile simulation system: a prospective clinical trial. Sci Rep 14(1):19385. 10.1038/s41598-024-69314-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Agarwal S, Agarwal S (2022) Bone age assessment from lateral cephalograms using deep learning algorithms in the Indian population. Indian J Dent Res 33(4):402–407. 10.4103/ijdr.ijdr_955_21 [DOI] [PubMed] [Google Scholar]
- Ahn H, Yang B (2024) A comparative study of measurements using manual and artificial intelligence for anatomical landmarks in three-dimensional cone-beam computed tomography images. Eur J Orthod 52:200. 10.1016/j.ijom.2023.10.555 [Google Scholar]
- Ahn J, Nguyen TP, Kim YJ, Kim T, Yoon J (2022) Automated analysis of three-dimensional CBCT images taken in natural head position that combines facial profile processing and multiple deep-learning models. Comput Methods Programs Biomed 226:107123. 10.1016/j.cmpb.2022.107123 [DOI] [PubMed] [Google Scholar]
- Ahn HJ, Byun SH, Baek SH, Park SY, Yi SM, Park IY, On S-W, Kim J-C, Yang BE (2024) A comparative analysis of artificial intelligence and manual methods for three-dimensional anatomical landmark identification in dentofacial treatment planning. Bioengineering. 10.3390/bioengineering11040318 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Akay G, Akcayol MA, Özdem K, Güngör K (2023) Deep convolutional neural network-the evaluation of cervical vertebrae maturation. Oral Radiol 39(4):629–638. 10.1007/s11282-023-00678-7 [DOI] [PubMed] [Google Scholar]
- Akçam MO, Takada K (2002) Fuzzy modelling for selecting headgear types. Eur J Orthod 24(1):99–106. 10.1093/ejo/24.1.99 [DOI] [PubMed] [Google Scholar]
- Alalharith DM, Alharthi HM, Alghamdi WM, Alsenbel YM, Aslam N, Khan IU, Shahin SY, Dianišková S, Alhareky MS, Barouch KK (2020) A deep learning-based approach for the detection of early signs of gingivitis in orthodontic patients using faster region-based convolutional neural networks. Int J Environ Res Public Health. 10.3390/ijerph17228447 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Alam MK, Alfawzan AA (2020a) Dental characteristics of different types of cleft and non-cleft individuals. Front Cell Dev Biol. 10.3389/fcell.2020.00789 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Alam MK, Alfawzan AA (2020b) Evaluation of Sella turcica bridging and morphology in different types of cleft patients. Front Cell Dev Biol. 10.3389/fcell.2020.00656 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Alam MK, Alfawzan AA, Haque S, Mok PL, Marya A, Venugopal A, Jamayet NB, Siddiqui AA (2021) Sagittal jaw relationship of different types of cleft and non-cleft individuals. Front Pediatr. 10.3389/fped.2021.651951 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Alam MK, Alanazi DSA, Alruwaili SRF, Alderaan RAI (2024) Assessment of AI models in predicting treatment outcomes in orthodontics. J Pharm Bioallied Sci 16(Suppl 1):S540-s542. 10.4103/jpbs.jpbs_852_23 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aldabbagh G, Omar M, Tayeb H, Alafeef R, Aldabbagh R (2019) Simplyortho: a software for orthodontics that automates bracket placement. Int J Adv Sci Technol 28(15):98–112 [Google Scholar]
- Alessandri-Bonetti A, Sangalli L, Salerno M, Gallenzi P (2023) Reliability of artificial intelligence-assisted cephalometric analysis. A pilot study. BioMedInformatics 3(1):44–53. 10.3390/biomedinformatics3010003 [Google Scholar]
- Ali SM, Saloom HF, Tawfeeq MA (2021) Artificial neural network for prediction of unerupted premolars and canines. Int Med 28:74–79 [Google Scholar]
- Ali SM, Saloom HF, Tawfeeq MA (2022) Cephalometric variables prediction from lateral photographs between different skeletal patterns using regression artificial neural networks. Turk J Orthod 35(2):101–111. 10.5152/TurkJOrthod.2022.21087 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aljabri M, Aljameel SS, Min-Allah N, Alhuthayfi J, Alghamdi L, Alduhailan N, Alfehaid R, Alqarawi R, Alhareky M, Shahin SY, Al Turki W (2022) Canine impaction classification from panoramic dental radiographic images using deep learning models. Inform Med Unlocked. 10.1016/j.imu.2022.100918 [Google Scholar]
- Alqahtani KA, Jacobs R, Smolders A, Van Gerven A, Willems H, Shujaat S, Shaheen E (2023) Deep convolutional neural network-based automated segmentation and classification of teeth with orthodontic brackets on cone-beam computed-tomographic images: a validation study. Eur J Orthod 45(2):169–174. 10.1093/ejo/cjac047 [DOI] [PubMed] [Google Scholar]
- Alsubhi R, Alsharif H, Kadi H, Barashi M (2024) Dental crowding prediction from occlusal view images using deep learning. Paper presented at the 2024 IEEE International Conference on Automatic Control and Intelligent Systems, I2CACIS 2024 - Proceedings
- Al-Ubaydi AS, Al-Groosh D (2023) The validity and reliability of automatic tooth segmentation generated using artificial intelligence. Sci World J 2023:5933003. 10.1155/2023/5933003 [Google Scholar]
- Amasya H, Cesur E, Yıldırım D, Orhan K (2020a) Validation of cervical vertebral maturation stages: artificial intelligence vs human observer visual analysis. Am J Orthod Dentofacial Orthop 158(6):e173–e179. 10.1016/j.ajodo.2020.08.014 [DOI] [PubMed] [Google Scholar]
- Amasya H, Yildirim D, Aydogan T, Kemaloglu N, Orhan K (2020b) Cervical vertebral maturation assessment on lateral cephalometric radiographs using artificial intelligence: comparison of machine learning classifier models. Dentomaxillofac Radiol. 10.1259/dmfr.20190441 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ameli N, Lagravere M, Lai H (2023) Application of deep learning to classify skeletal growth phase on 3D radiographs. Indian J Dent Res 2022(33):402–407 [Google Scholar]
- Amintavakoli N, Spivakovsky S (2018) Cone-beam computed tomography or conventional radiography for localising of maxillary impacted canines? Evid Based Dent 19(1):22–23. 10.1038/sj.ebd.6401291 [DOI] [PubMed] [Google Scholar]
- Andrade KM, Silva BPM, de Oliveira LR, Cury PR (2023) Automatic dental biofilm detection based on deep learning. J Clin Periodontol 50(5):571–581. 10.1111/jcpe.13774 [DOI] [PubMed] [Google Scholar]
- Arslan C, Yucel NO, Kahya K, Sunal Akturk E, Germec Cakan D (2024) Artificial intelligence for tooth detection in cleft lip and palate patients. Diagnostics. 10.3390/diagnostics14242849 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Atici SF, Ansari R, Allareddy V, Suhaym O, Cetin AE, Elnagar MH (2022) Fully automated determination of the cervical vertebrae maturation stages using deep learning with directional filters. PLoS ONE 17(7):e0269198. 10.1371/journal.pone.0269198 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Atici SF, Ansari R, Allareddy V, Suhaym O, Cetin AE, Elnagar MH (2023) Aggregatenet: a deep learning model for automated classification of cervical vertebrae maturation stages. Orthod Craniofac Res 26(S1):111–117. 10.1111/ocr.12644 [DOI] [PubMed] [Google Scholar]
- Auconi P, Scazzocchio M, Cozza P, McNamara JA Jr., Franchi L (2015) Prediction of class III treatment outcomes through orthodontic data mining. Eur J Orthod 37(3):257–267. 10.1093/ejo/cju038 [DOI] [PubMed] [Google Scholar]
- Baig N, Gyasudeen KS, Bhattacharjee T, Chaudhry J, Prasad S (2024) Comparative evaluation of commercially available AI-based cephalometric tracing programs. BMC Oral Health 24(1):1241. 10.1186/s12903-024-05032-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Baksi S, Freezer S, Matsumoto T, Dreyer C (2021) Accuracy of an automated method of 3D soft tissue landmark detection. Eur J Orthod 43(6):622–630. 10.1093/ejo/cjaa069 [DOI] [PubMed] [Google Scholar]
- Banumathi A, Raju S, Abhaikumar V (2011) Diagnosis of dental deformities in cephalometry images using support vector machine. J Med Syst 35(1):113–119. 10.1007/s10916-009-9347-9 [DOI] [PubMed] [Google Scholar]
- Bao H, Zhang K, Yu C, Li H, Cao D, Shu H, Liu L, Yan B (2023) Evaluating the accuracy of automated cephalometric analysis based on artificial intelligence. BMC Oral Health 23(1):191. 10.1186/s12903-023-02881-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bardideh E, Lal Alizadeh F, Amiri M, Ghorbani M (2024) Designing an artificial intelligence system for dental occlusion classification using intraoral photographs: a comparative analysis between artificial intelligence-based and clinical diagnoses. Am J Orthod Dentofacial Orthop 166(2):125–137. 10.1016/j.ajodo.2024.03.012 [DOI] [PubMed] [Google Scholar]
- Berssenbrügge P, Berlin NF, Kebeck G, Runte C, Jung S, Kleinheinz J, Dirksen D (2014) 2D and 3D analysis methods of facial asymmetry in comparison. J Craniomaxillofac Surg 42(6):e327-334. 10.1016/j.jcms.2014.01.028 [DOI] [PubMed] [Google Scholar]
- Blum FMS, Möhlhenrich SC, Raith S, Pankert T, Peters F, Wolf M, Hölzle F, Modabber A (2023) Evaluation of an artificial intelligence-based algorithm for automated localization of craniofacial landmarks. Clin Oral Investig 27(5):2255–2265. 10.1007/s00784-023-04978-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bor S, Ciğerim S, Kotan S (2024) Comparison of AI-assisted cephalometric analysis and orthodontist-performed digital tracing analysis. Prog Orthod 25(1):41. 10.1186/s40510-024-00539-x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bulatova G, Kusnoto B, Grace V, Tsay TP, Avenetti DM, Sanchez FJC (2021) Assessment of automatic cephalometric landmark identification using artificial intelligence. Orthod Craniofac Res 24(Suppl 2):37–42. 10.1111/ocr.12542 [DOI] [PubMed] [Google Scholar]
- Butul B, Sharab L (2024) Obstacles behind the innovation- a peek into Artificial intelligence in the field of orthodontics – a literature review. Saudi Dent J 36(6):830–834. 10.1016/j.sdentj.2024.03.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cai J, Deng Y, Min Z, Zhang Y, Zhao Z, Jing D (2023) Revealing the representative facial traits of different sagittal skeletal types: decipher what artificial intelligence can see by Grad-CAM. J Dent 138:104701. 10.1016/j.jdent.2023.104701 [DOI] [PubMed] [Google Scholar]
- Cai J, Min Z, Deng Y, Jing D, Zhao Z (2024) Assessing the impact of occlusal plane rotation on facial aesthetics in orthodontic treatment: a machine learning approach. BMC Oral Health 24(1):30. 10.1186/s12903-023-03817-y [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cejudo Grano de Oro JE, Koch PJ, Krois J, Garcia Cantu Ros A, Patel J, Meyer-Lueckel H, Schwendicke F (2022) Hyperparameter tuning and automatic image augmentation for deep learning-based angle classification on intraoral photographs-a retrospective study. Diagnostics (Basel) 12(7). 10.3390/diagnostics12071526
- Cericato GO, Bittencourt MA, Paranhos LR (2015) Validity of the assessment method of skeletal maturation by cervical vertebrae: a systematic review and meta-analysis. Dentomaxillofac Radiol 44(4):20140270. 10.1259/dmfr.20140270 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chang S, Wang SF, Zuo FF, Wang F, Gong BW, Wang YJ, Xie XJ (2023) Automated diagnostic classification with lateral cephalograms based on deep learning network model. Zhonghua Kou Qiang Yi Xue Za Zhi 58(6):547–553. 10.3760/cma.j.cn112144-20230305-00072 [DOI] [PubMed] [Google Scholar]
- Chang Q, Bai Y, Wang S, Wang F, Wang Y, Zuo F, Xie X (2024) Automatic soft-tissue analysis on orthodontic frontal and lateral facial photographs based on deep learning. Orthod Craniofac Res 27(6):893–902. 10.1111/ocr.12830 [DOI] [PubMed] [Google Scholar]
- Chang Q, Bai Y, Wang S, Wang F, Liang S, Xie X (2025) Automated orthodontic diagnosis via self-supervised learning and multi-attribute classification using lateral cephalograms. Biomed Eng Online 24(1):9. 10.1186/s12938-025-01345-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen S, Wang L, Li G, Wu TH, Diachina S, Tejera B, Kwon JJ, Lin F-C, Lee Y-T, Xu T, Shen D, Ko CC (2020) Machine learning in orthodontics: introducing a 3D auto-segmentation and auto-landmark finder of CBCT images to assess maxillary constriction in unilateral impacted canine patients. Angle Orthod 90(1):77–84. 10.2319/012919-59.1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen J, Che H, Sun J, Rao Y, Wu J (2023a) An automatic cephalometric landmark detection method based on heatmap regression and Monte Carlo dropout. Annu Int Conf IEEE Eng Med Biol Soc 2023:1–4. 10.1109/embc40787.2023.10341102 [DOI] [PubMed] [Google Scholar]
- Chen Z, Chen S, Hu F (2023b) CTA-UNet: CNN-transformer architecture UNet for dental CBCT images segmentation. Phys Med Biol. 10.1088/1361-6560/acf026 [DOI] [PubMed] [Google Scholar]
- Chen H, Qu Z, Tian Y, Jiang N, Qin Y, Gao J, Zhang R, Ma Y, Jin Z, Zhai G (2024) A cross-temporal multimodal fusion system based on deep learning for orthodontic monitoring. Comput Biol Med 180:109025. 10.1016/j.compbiomed.2024.109025 [DOI] [PubMed] [Google Scholar]
- Cheong YW, Lo LJ (2011) Facial asymmetry: etiology, evaluation, and management. Chang Gung Med J 34(4):341–351 [PubMed] [Google Scholar]
- Cho SJ, Moon JH, Ko DY, Lee JM, Park JA, Donatelli RE, Lee SJ (2024) Orthodontic treatment outcome predictive performance differences between artificial intelligence and conventional methods. Angle Orthod 94(5):557–565. 10.2319/111823-767.1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Choi KY (2015) Analysis of facial asymmetry. Arch Craniofac Surg 16(1):1–10. 10.7181/acfs.2015.16.1.1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chuchra A, Gupta K, Arora R, Bindra S, Hingad N, Babbar A (2024) Digital cephalometric analysis: unveiling the role and reliability of semi-automated OneCeph, artificial intelligence-powered WebCeph mobile app, and semi-automated computer-aided NemoCeph software in orthodontic practice. Cureus 16(11):e72948. 10.7759/cureus.72948 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chung M, Lee M, Hong J, Park S, Lee J, Lee J, Yang I-H, Shin YG (2020) Pose-aware instance segmentation framework from cone beam CT images for tooth segmentation. Comput Biol Med 120:103720. 10.1016/j.compbiomed.2020.103720 [DOI] [PubMed] [Google Scholar]
- Chung EJ, Yang BE, Park IY, Yi S, On SW, Kim YH, Kang S-H, Byun SH (2022) Effectiveness of cone-beam computed tomography-generated cephalograms using artificial intelligence cephalometric analysis. Sci Rep 12(1):20585. 10.1038/s41598-022-25215-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Çoban G, Öztürk T, Hashimli N, Yağci A (2022) Comparison between cephalometric measurements using digital manual and web-based artificial intelligence cephalometric tracing software. Dent Press J Orthod. 10.1590/2177-6709.27.4.e222112.oar [Google Scholar]
- Coclici A, Hedeşiu M, Bran S, Băciuţ M, Dinu C, Rotaru H, Roman R (2019) Early and long-term changes in the muscles of the mandible following orthognathic surgery. Clin Oral Investig 23(9):3437–3444. 10.1007/s00784-019-03019-3 [DOI] [PubMed] [Google Scholar]
- Cui Z, Li C, Chen N, Wei G, Chen R, Zhou Y, Shen D, Wang W (2021) Tsegnet: an efficient and accurate tooth segmentation network on 3d dental model. Med Image Anal. 10.1016/j.media.2020.101949 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dai X, Zhao H, Liu T, Cao D, Xie L (2019) Locating anatomical landmarks on 2D lateral cephalograms through adversarial encoder-decoder networks. IEEE Access 7:132738–132747. 10.1109/ACCESS.2019.2940623 [Google Scholar]
- Danisman H (2023) Artificial intelligence web-based cephalometric analysis platform: comparison with the computer assisted cephalometric method. Clin Investig Orthod 82(4):194–203. 10.1080/27705781.2023.2254537 [Google Scholar]
- Dashti M, Khosraviani F, Azimi T, Sehat MS, Alekajbaf E, Fahimipour A, Zare N (2024) Predicting mandibular bone growth using artificial intelligence and machine learning: a systematic review. Adv Artif Intell Mach Learn 4(3):2731–2745 [Google Scholar]
- Davidovitch M, Sella-Tunis T, Abramovicz L, Reiter S, Matalon S, Shpack N (2022) Verification of convolutional neural network cephalometric landmark identification. Appl Sci. 10.3390/app122412784 [Google Scholar]
- de Araujo CM, de Jesus Freitas PF, Ferraz AX, Quadras ICC, Zeigelboim BS, Priolo Filho S, Küchler EC (2024) Sex determination through maxillary dental arch and skeletal base measurements using machine learning. Head Face Med. 10.1186/s13005-024-00446-w [DOI] [PMC free article] [PubMed] [Google Scholar]
- de Araujo CM, Freitas PFdJ, Ferraz AX, Andreis PKDS, Meger MN, Baratto-Filho F, Schroder AGD (2025) Predicting the risk of Maxillary Canine Impaction based on Maxillary measurements using supervised machine learning. Orthod Craniofac Res 28(1):207–215 [DOI] [PubMed] [Google Scholar]
- de Queiroz Tavares Borges Mesquita G, Vieira WA, Vidigal MTC, Travençolo BAN, Beaini TL, Spin-Neto R, Paranhos LR, de Brito Júnior RB (2023) Artificial intelligence for detecting cephalometric landmarks: a systematic review and meta-analysis. J Digit Imaging 36(3):1158–1179. 10.1007/s10278-022-00766-w [DOI] [PMC free article] [PubMed] [Google Scholar]
- Deepa S, Umamageswari A, Sherinbeevi L, Sangari A (2024) Unleashing hidden canines: a novel fast R-CNN based technique for automatic auxiliary canine impaction. Int J Adv Technol Eng Explor 11(115):916 [Google Scholar]
- Demircan GS, Kılıç B, Önal-Süzek T (2021) Early Diagnosis and prediction of skeletal class III malocclusion from profile photos using artificial intelligence. Paper presented at the IFMBE Proceedings
- Deng HH, Liu Q, Chen A, Kuang T, Yuan P, Gateno J, Kim D, Barber JC, Xiong KG, Yu P, Gu KJ, Xu X, Yan P, Shen D, Xia JJ (2023) Clinical feasibility of deep learning-based automatic head CBCT image segmentation and landmark detection in computer-aided surgical simulation for orthognathic surgery. Int J Oral Maxillofac Surg 52(7):793–800. 10.1016/j.ijom.2022.10.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dot G, Schouman T, Chang S, Rafflenbeul F, Kerbrat A, Rouch P, Gajny L (2022) Automatic 3-dimensional cephalometric landmarking via deep learning. J Dent Res 101(11):1380–1387. 10.1177/00220345221112333 [DOI] [PubMed] [Google Scholar]
- Duman ŞB, Syed AZ, Celik Ozen D, Bayrakdar İ, Salehi HS, Abdelkarim A, . . . Orhan K (2022) Convolutional neural network performance for sella turcica segmentation and classification using CBCT Images. Diagnostics (Basel) 12(9). 10.3390/diagnostics12092244
- Duran GS, Gökmen Ş, Topsakal KG, Görgülü S (2023) Evaluation of the accuracy of fully automatic cephalometric analysis software with artificial intelligence algorithm. Orthod Craniofac Res 26(3):481–490. 10.1111/ocr.12633 [DOI] [PubMed] [Google Scholar]
- Dursun D, Bilici Geçer R (2024) Can artificial intelligence models serve as patient information consultants in orthodontics? BMC Med Inform Decis Mak 24(1):211. 10.1186/s12911-024-02619-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- El Bsat AR, Shammas E, Asmar D, Sakr GE, Zeno KG, Macari AT, Ghafari JG (2022) Semantic segmentation of Maxillary teeth and palatal rugae in two-dimensional images. Diagnostics. 10.3390/diagnostics12092176 [DOI] [PMC free article] [PubMed] [Google Scholar]
- El-Dawlatly M, Attia KH, Abdelghaffar AY, Mostafa YA, Abd El-Ghafour M (2024) Preciseness of artificial intelligence for lateral cephalometric measurements. J Orofacial Orthopedics / Fortschritte der Kieferorthopädie 85(Suppl 1):27–33. 10.1007/s00056-023-00459-1 [Google Scholar]
- ElShebiny T, Paradis AE, Kasper FK, Palomo JM (2024) Assessment of virtual bracket removal by artificial intelligence and thermoplastic retainer fit. J Orofac Orthop 166(6):608–615. 10.1016/j.ajodo.2024.07.020 [Google Scholar]
- Ericson S, Kurol J (1988) Early treatment of palatally erupting maxillary canines by extraction of the primary canines. Eur J Orthod 10(4):283–295. 10.1093/ejo/10.4.283 [DOI] [PubMed] [Google Scholar]
- Etemad LE, Heiner JP, Amin AA, Wu TH, Chao WL, Hsieh SJ, Ko CC (2024) Effectiveness of machine learning in predicting orthodontic tooth extractions: a multi-institutional study. Bioengineering. 10.3390/bioengineering11090888 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Etemad L, Wu TH, Heiner P, Liu J, Lee S, Chao WL, . . . Ko CC (2021) Machine learning from clinical data sets of a contemporary decision for orthodontic tooth extraction. Orthod Craniofac Res 24 Suppl 2:193–200. 10.1111/ocr.12502
- Feng H, Song W, Li R, Yang L, Chen X, Guo J, . . . Wang J (2025) A fully integrated orthodontic aligner with force sensing ability for machine learning-assisted diagnosis. Adv Sci (Weinh) 12(2):e2411187. 10.1002/advs.202411187
- Ferlito T, Hsiou D, Hargett K, Herzog C, Bachour P, Katebi N, Tokede O, Larson B, Masoud MI (2023) Assessment of artificial intelligence-based remote monitoring of clear aligner therapy: a prospective study. Am J Orthod Dentofacial Orthop 164(2):194–200. 10.1016/j.ajodo.2022.11.020
- Gaonkar P, Mohammed I, Ribin M, Kumar CD, Thomas PA, Saini R (2024) Assessing the impact of AI-enhanced diagnostic tools on the treatment planning of orthodontic cases: an RCT. J Pharm Bioallied Sci 16(Suppl 2):S1798–S1800. 10.4103/jpbs.jpbs_1147_23
- Gil SM, Kim I, Cho JH, Hong M, Kim M, Kim SJ, Kang KH (2022) Accuracy of auto-identification of the posteroanterior cephalometric landmarks using cascade convolution neural network algorithm and cephalometric images of different quality from nationwide multiple centers. Am J Orthod Dentofacial Orthop 161(4):e361. 10.1016/j.ajodo.2021.11.011
- Gonca M, Bayrakdar İ, Çelik Ö (2024a) Does the FARNet neural network algorithm accurately identify posteroanterior cephalometric landmarks? BMC Med Imaging 24(1):294. 10.1186/s12880-024-01478-z
- Gonca M, Sert MF, Gunacar DN, Kose TE, Beser B (2024b) Determination of growth and developmental stages in hand–wrist radiographs: can fractal analysis in combination with artificial intelligence be used? J Orofac Orthop. 10.1007/s00056-023-00510-1
- Gong B, Chang Q, Shi T, Wang S, Wang Y, Zuo F, Xie X, Bai Y (2025) Research of orthodontic soft tissue profile prediction based on conditional generative adversarial networks. J Dent 154:105570. 10.1016/j.jdent.2025.105570
- Grillo R, Quinta Reis BA, Lima BC, Peral Ferreira Pinto LA, Melhem-Elias F (2024) Frontal facial analysis of female celebrity attractiveness standards through artificial intelligence. J Craniomaxillofac Surg. 10.1016/j.jcms.2024.03.023
- Guinot-Barona C, Alonso Pérez-Barquero J, Galán López L, Barmak AB, Att W, Kois JC, Revilla-León M (2024) Cephalometric analysis performance discrepancy between orthodontists and an artificial intelligence model using lateral cephalometric radiographs. J Esthet Restor Dent 36(4):555–565. 10.1111/jerd.13156
- Guo RZ, Tian Y, Li XB, Li WR, He DQ, Sun YN (2023) Facial profile evaluation and prediction of skeletal class II patients during camouflage extraction treatment: a pilot study. Head Face Med 19(1):11. 10.1186/s13005-023-00397-8
- Gurgel M, Alvarez MA, Aristizabal JF, Baquero B, Gillot M, Al Turkestani N, Cevidanes L (2024) Automated artificial intelligence-based three-dimensional comparison of orthodontic treatment outcomes with and without piezocision surgery. Orthod Craniofac Res 27(2):321–331. 10.1111/ocr.12737
- Hack M, Drăgulin B, Hack L, ElSaafin M, Dumitrescu I, Stan D, Păcurar M (2024) Comparative study on the results of orthodontic diagnostics by using algorithms generated by artificial intelligence and simple algorithms. Med Pharm Rep 97(2):215–221. 10.15386/mpr-2702
- Han SH, Lim J, Kim JS, Cho JH, Hong M, Kim M, Kim S-J, Kim Y-J, Kim YH, Lim S-H, Sung SJ, Kang K-H, Baek S-H, Choi S-K, Kim N (2024) Accuracy of posteroanterior cephalogram landmarks and measurements identification using a cascaded convolutional neural network algorithm: a multicenter study. Korean J Orthod 54(1):48–58. 10.4041/kjod23.075
- Harun MF, Samah AA, Shabuli MIA, Majid HA, Hashim H, Ismail NA, . . . Alias A (2022) Incisor malocclusion using cut-out method and convolutional neural network. Prog Microbe Molecul Biol 5(1). 10.36877/pmmb.a0000279
- Hase H, Mine Y, Okazaki S, Yoshimi Y, Ito S, Peng TY, Sano M, Koizumi Y, Kakimoto N, Tanimoto K, Murayama T (2024) Sex estimation from maxillofacial radiographs using a deep learning approach. Dent Mater J 43(3):394–399. 10.4012/dmj.2023-253
- Hendrickx J, Gracea RS, Vanheers M, Winderickx N, Preda F, Shujaat S, Jacobs R (2024) Can artificial intelligence-driven cephalometric analysis replace manual tracing? A systematic review and meta-analysis. Eur J Orthod. 10.1093/ejo/cjae029
- Homsi K, Snider V, Kusnoto B, Atsawasuwan P, Viana G, Allareddy V, Gajendrareddy P, Elnagar MH (2023) In-vivo evaluation of artificial intelligence driven remote monitoring technology for tracking tooth movement and reconstruction of 3-dimensional digital models during orthodontic treatment. Am J Orthod Dentofacial Orthop 164(5):690–699. 10.1016/j.ajodo.2023.04.019
- Hong M, Kim I, Cho JH, Kang KH, Kim M, Kim SJ, Kim Y-J, Sung S-J, Kim YH, Lim S-H, Kim N, Baek SH (2022) Accuracy of artificial intelligence-assisted landmark identification in serial lateral cephalograms of Class III patients who underwent orthodontic treatment and two-jaw orthognathic surgery. Korean J Orthod 52(4):287–297. 10.4041/kjod21.248
- Hu HQ, Li ZX, Gao WC (2023) MPCNet: improved MeshSegNet based on position encoding and channel attention. IEEE Access 11:23326–23334. 10.1109/access.2023.3254206
- Hu Y, Liu C, Liu W, Xiong Y, Zeng W, Chen J, Li X, Guo J, Tang W (2024) Fully automated method for three-dimensional segmentation and fine classification of mixed dentition in cone-beam computed tomography using deep learning. J Dent 151:105398. 10.1016/j.jdent.2024.105398
- Hwang HW, Park JH, Moon JH, Yu Y, Kim H, Her SB, Lee SJ (2020) Automated identification of cephalometric landmarks: Part 2: might it be better than human? Angle Orthod 90(1):69–76. 10.2319/022019-129.1
- Hwang HW, Moon JH, Kim MG, Donatelli RE, Lee SJ (2021) Evaluation of automated cephalometric analysis based on the latest deep learning method. Angle Orthod 91(3):329–335. 10.2319/021220-100.1
- Im J, Kim JY, Yu HS, Lee KJ, Choi SH, Kim JH, Cha JY (2022) Accuracy and efficiency of automatic tooth segmentation in digital dental models using deep learning. Sci Rep 12(1):9429. 10.1038/s41598-022-13595-2
- Indermun S, Shaik S, Nyirenda C, Johannes K, Mulder R (2023) Human examination and artificial intelligence in cephalometric landmark detection-is AI ready to take over? Dentomaxillofac Radiol. 10.1259/dmfr.20220362
- Ito S, Mine Y, Urabe S, Yoshimi Y, Okazaki S, Sano M, Koizumi Y, Peng T-Y, Kakimoto N, Murayama T, Tanimoto K (2024) Prediction of a cephalometric parameter and skeletal patterns from lateral profile photographs: a retrospective comparative analysis of regression convolutional neural networks. J Clin Med. 10.3390/jcm13216346
- Jeon S, Lee KC (2021) Comparison of cephalometric measurements between conventional and automatic cephalometric analysis using convolutional neural network. Prog Orthod 22(1):14. 10.1186/s40510-021-00358-4
- Jeon SM, Kim S, Lee KC (2024) Deep learning-based assessment of facial asymmetry using U-net deep convolutional neural network algorithm. J Craniofac Surg 35(1):133–136. 10.1097/SCS.0000000000009862
- Jiang F, Guo Y, Zhou Y, Yang C, Xing K, Zhou J, Lin Y, Cheng F, Li J (2022a) Automated calibration system for length measurement of lateral cephalometry based on deep learning. Phys Med Biol. 10.1088/1361-6560/ac9880
- Jiang Y, Shang F, Peng J, Liang J, Fan Y, Yang Z, Qi Y, Yang Y, Xu T, Jiang R (2022b) Automatic masseter muscle accurate segmentation from CBCT using deep learning-based model. J Clin Med 12(1):55. 10.3390/jcm12010055
- Jiang F, Guo Y, Yang C, Zhou Y, Lin Y, Cheng F, Quan S, Feng Q, Li J (2023) Artificial intelligence system for automated landmark localization and analysis of cephalometry. Dentomaxillofac Radiol 52(1):20220081. 10.1259/dmfr.20220081
- Johannes T, Akhilanand C, Joachim K, Shankeeth V, Anahita H, Saeed Reza M, Hossein MR (2023) Evaluation of AI model for Cephalometric landmark classification (TG Dental). J Med Syst 47(1):92. 10.1007/s10916-023-01977-6
- Jung S-K, Kim T-W (2016) New approach for the diagnosis of extractions with neural network machine learning. Am J Orthod Dentofacial Orthop 149(1):127–133. 10.1016/j.ajodo.2015.07.030
- Kamei G, Batra P, Singh AK, Arora G, Kaushik S (2024) Development of an artificial intelligence-based algorithm for the assessment of skeletal age and detection of cervical vertebral anomalies in patients with cleft lip and palate. Cleft Palate Craniofac J. 10.1177/10556656241299890
- Kang S, Kim I, Kim YJ, Kim N, Baek SH, Sung SJ (2024) Accuracy and clinical validity of automated cephalometric analysis using convolutional neural networks. Orthod Craniofac Res 27(1):64–77. 10.1111/ocr.12683
- Katyal D, Balakrishnan N (2022) Evaluation of the accuracy and reliability of WebCeph – an artificial intelligence-based online software. APOS Trends Orthod 12(4):271–276. 10.25259/APOS_138_2021
- Kazimierczak N, Kazimierczak W, Serafin Z, Nowicki P, Lemanowicz A, Nadolska K, Janiszewska-Olszowska J (2023) Correlation analysis of nasal septum deviation and results of AI-driven automated 3D cephalometric analysis. J Clin Med. 10.3390/jcm12206621
- Kazimierczak N, Kazimierczak W, Serafin Z, Nowicki P, Jankowski T, Jankowska A, Janiszewska-Olszowska J (2024a) Skeletal facial asymmetry: reliability of manual and artificial intelligence-driven analysis. Dentomaxillofac Radiol 53(1):52–59. 10.1093/dmfr/twad006
- Kazimierczak W, Gawin G, Janiszewska-Olszowska J, Dyszkiewicz-Konwińska M, Nowicki P, Kazimierczak N, Serafin Z, Orhan K (2024b) Comparison of three commercially available, AI-driven cephalometric analysis tools in orthodontics. J Clin Med. 10.3390/jcm13133733
- Kazimierczak W, Jedliński M, Issa J, Kazimierczak N, Janiszewska-Olszowska J, Dyszkiewicz-Konwińska M, Orhan K (2024c) Accuracy of artificial intelligence for cervical vertebral maturation assessment-a systematic review. J Clin Med. 10.3390/jcm13144047
- Khabadze Z, Mordanov O, Shilyaeva E (2024) Comparative analysis of 3D cephalometry provided with artificial intelligence and manual tracing. Diagnostics. 10.3390/diagnostics14222524
- Khan S, Rahmani H, Shah SAA, Bennamoun M, Medioni G, Dickinson S (2018) A guide to convolutional neural networks for computer vision. Synthesis Lectures on Computer Vision. Morgan & Claypool Publishers
- Khazaei M, Mollabashi V, Khotanlou H, Farhadian M (2023) Automatic determination of pubertal growth spurts based on the cervical vertebral maturation staging using deep convolutional neural networks. J World Fed Orthod 12(2):56–63. 10.1016/j.ejwf.2023.02.003
- Khosravi-Kamrani P, Qiao X, Zanardi G, Wiesen CA, Slade G, Frazier-Bowers SA (2022) A machine learning approach to determine the prognosis of patients with class III malocclusion. Am J Orthod Dentofacial Orthop 161(1):e1. 10.1016/j.ajodo.2021.06.012
- Kılıç B, İbrahim AH, Aksoy S, Sakman MC, Demircan GS, Önal-Süzek T (2024) A family-centered orthodontic screening approach using a machine learning-based mobile application. J Dent Sci 19(1):186–195. 10.1016/j.jds.2023.05.001
- Kılınç DD, Kırcelli BH, Sadry S, Karaman A (2022) Evaluation and comparison of smartphone application tracing, web based artificial intelligence tracing and conventional hand tracing methods. J Stomatol Oral Maxillofac Surg 123(6):e906–e915. 10.1016/j.jormas.2022.07.017
- Kim H, Shim E, Park J, Kim YJ, Lee U, Kim Y (2020a) Web-based fully automated cephalometric analysis by deep learning. Comput Methods Programs Biomed 194:105513. 10.1016/j.cmpb.2020.105513
- Kim DW, Kim J, Kim T, Kim T, Kim YJ, Song IS, Ahn B, Choo J, Lee DY (2021a) Prediction of hand-wrist maturation stages based on cervical vertebrae images using artificial intelligence. Orthod Craniofac Res 24(Suppl 2):68–75. 10.1111/ocr.12514
- Kim J, Kim I, Kim YJ, Kim M, Cho JH, Hong M, Kang K-H, Lim S-H, Kim S-J, Kim YH, Kim N, Sung S-J, Baek SH (2021b) Accuracy of automated identification of lateral cephalometric landmarks using cascade convolutional neural networks on lateral cephalograms from nationwide multi-centres. Orthod Craniofac Res 24(S2):59–67. 10.1111/ocr.12493
- Kim MJ, Liu Y, Oh SH, Ahn HW, Kim SH, Nelson G (2021c) Automatic cephalometric landmark identification system based on the multi-stage convolutional neural networks with CBCT combination images. Sensors. 10.3390/s21020505
- Kim MJ, Liu Y, Oh SH, Ahn HW, Kim SH, Nelson G (2021d) Evaluation of a multi-stage convolutional neural network-based fully automated landmark identification system using cone-beam computed tomography-synthesized posteroanterior cephalometric images. Korean J Orthod 51(2):77–85. 10.4041/kjod.2021.51.2.77
- Kim HJ, Kim KD, Kim DH (2022) Deep convolutional neural network-based skeletal classification of cephalometric image compared with automated-tracing software. Sci Rep 12(1):11659. 10.1038/s41598-022-15856-6
- Kim E, Kuroda Y, Soeda Y, Koizumi S, Yamaguchi T (2023a) Validation of machine learning models for craniofacial growth prediction. Diagnostics. 10.3390/diagnostics13213369
- Kim H, Kim CS, Lee JM, Lee JJ, Lee J, Kim JS, Choi SH (2023b) Prediction of Fishman’s skeletal maturity indicators using artificial intelligence. Sci Rep. 10.1038/s41598-023-33058-6
- Kim I, Misra D, Rodriguez L, Gill M, Liberton DK, Almpani K, . . . Antani S (2020) Malocclusion classification on 3D cone-beam CT craniofacial images using multi-channel deep learning models. Annu Int Conf IEEE Eng Med Biol Soc 2020:1294–1298. 10.1109/embc44109.2020.9176672
- King CH, Wang YL, Lin WY, Tsai CL (2022) Automatic cephalometric landmark detection on X-ray images using object detection. Paper presented at the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), 28–31 March 2022
- Kocakaya DNC, Özel MB, Kartbak SBA, Çakmak M, Sinanoğlu EA (2024) Profile photograph classification performance of deep learning algorithms trained using cephalometric measurements: a preliminary study. Diagnostics. 10.3390/diagnostics14171916
- Kohli SS, Kohli VS (2012) A comprehensive review of the genetic basis of cleft lip and palate. J Oral Maxillofac Pathol 16(1):64–72. 10.4103/0973-029x.92976
- Kök H, Acilar AM, İzgi MS (2019) Usage and comparison of artificial intelligence algorithms for determination of growth and development by cervical vertebrae stages in orthodontics. Prog Orthod 20(1):41. 10.1186/s40510-019-0295-8
- Kök H, Izgi MS, Acilar AM (2021) Determination of growth and development periods in orthodontics with artificial neural network. Orthod Craniofac Res 24(Suppl 2):76–83. 10.1111/ocr.12443
- Kök H, İzgi MS, Acilar AM (2021) Evaluation of the artificial neural network and naive Bayes models trained with vertebra ratios for growth and development determination. Turk J Orthod 34(1):2–9. 10.5152/TurkJOrthod.2020.20059
- Krenmayr L, von Schwerin R, Schaudt D, Riedel P, Hafner A (2024) DilatedToothSegNet: tooth segmentation network on 3D dental meshes through increasing receptive vision. J Imaging Inform Med 37(4):1846–1862. 10.1007/s10278-024-01061-6
- Küchler EC, Kirschneck C, Marañón-Vásquez GA, Schroder ÂD, Baratto-Filho F, Romano FL, . . . de Araujo CM (2024) Mandibular and dental measurements for sex determination using machine learning. Sci Rep 14(1):9587. 10.1038/s41598-024-59556-9
- Kunz F, Stellzig-Eisenhauer A, Zeman F, Boldt J (2020) Artificial intelligence in orthodontics: evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J Orofac Orthop 81(1):52–68. 10.1007/s00056-019-00203-8
- Kunz F, Stellzig-Eisenhauer A, Widmaier LM, Zeman F, Boldt J (2023) Assessment of the quality of different commercial providers using artificial intelligence for automated cephalometric analysis compared to human orthodontic experts. J Orofac Orthop. 10.1007/s00056-023-00491-1
- Kwon HJ, Koo HI, Park J, Cho NI (2021) Multistage probabilistic approach for the localization of cephalometric landmarks. IEEE Access 9:21306–21314. 10.1109/ACCESS.2021.3052460
- Larkin A, Kim JS, Kim N, Baek SH, Yamada S, Park K, . . . Park JH (2024) Accuracy of artificial intelligence-assisted growth prediction in skeletal Class I preadolescent patients using serial lateral cephalograms for a 2-year growth interval. Orthod Craniofac Res. 10.1111/ocr.12764
- Le VNT, Kang J, Oh IS, Kim JG, Yang YM, Lee DW (2022) Effectiveness of human-artificial intelligence collaboration in cephalometric landmark detection. J Pers Med 12(3):387. 10.3390/jpm12030387
- Leavitt L, Volovic J, Steinhauer L, Mason T, Eckert G, Dean JA, Dundar MM, Turkkahraman H (2023) Can we predict orthodontic extraction patterns by using machine learning? Orthod Craniofac Res 26(4):552–559. 10.1111/ocr.12641
- Lee JH, Yu HJ, Kim MJ, Kim JW, Choi J (2020) Automated cephalometric landmark detection with confidence regions using Bayesian convolutional neural networks. BMC Oral Health 20(1):270. 10.1186/s12903-020-01256-7
- Lee SC, Hwang HS, Lee KC (2022) Accuracy of deep learning-based integrated tooth models by merging intraoral scans and CBCT scans for 3D evaluation of root position during orthodontic treatment. Prog Orthod 23(1):15. 10.1186/s40510-022-00410-x
- Lee H, Cho JM, Ryu S, Ryu S, Chang E, Jung YS, Kim JY (2023a) Automatic identification of posteroanterior cephalometric landmarks using a novel deep learning algorithm: a comparative study with human experts. Sci Rep 13(1):15506. 10.1038/s41598-023-42870-z
- Lee J, Bae SR, Noh HK (2023b) Commercial artificial intelligence lateral cephalometric analysis: part 1-the possibility of replacing manual landmarking with artificial intelligence service. J Clin Pediatr Dent 47(6):106–118. 10.22514/jocpd.2023.085
- Lee S, Wu TH, Deguchi T, Ni A, Lu WE, Minhas S, Murphy S, Ko CC (2023c) Assessment of malalignment factors related to Invisalign treatment time aided by automated imaging processes. Angle Orthod 93(2):144–150. 10.2319/031622-225.1
- Leonardi R, Lo Giudice A, Isola G, Spampinato C (2021) Deep learning and computer vision: two promising pillars, powering the future in orthodontics. Semin Orthod 27(2):62–68. 10.1053/j.sodo.2021.05.002
- Li P, Kong D, Tang T, Su D, Yang P, Wang H, Zhao Z, Liu Y (2019) Orthodontic treatment planning based on artificial neural networks. Sci Rep 9(1):2037. 10.1038/s41598-018-38439-w
- Li Q, Chen K, Han L, Zhuang Y, Li J, Lin J (2020) Automatic tooth roots segmentation of cone beam computed tomography image sequences using U-net and RNN. J Xray Sci Technol 28(5):905–922. 10.3233/xst-200678
- Li H, Chen Y, Wang Q, Gong X, Lei Y, Tian J, Gao X (2022a) Convolutional neural network-based automatic cervical vertebral maturation classification method. Dentomaxillofac Radiol 51(6):20220070. 10.1259/dmfr.20220070
- Li H, Xu Y, Lei Y, Wang Q, Gao X (2022b) Automatic classification for sagittal craniofacial patterns based on different convolutional neural networks. Diagnostics. 10.3390/diagnostics12061359
- Li H, Li H, Yuan L, Liu C, Xiao S, Liu Z, Zhou G, Dong T, Ouyang N, Liu Lu, Ma C, Feng Y, Zheng Y, Xia L, Fang B (2023) The psc-CVM assessment system: a three-stage type system for CVM assessment based on deep learning. BMC Oral Health 23(1):557. 10.1186/s12903-023-03266-7
- Li R, Zhu C, Chu F, Yu Q, Fan D, Ouyang N, Fang B (2024) Deep learning for virtual orthodontic bracket removal: tool establishment and application. Clin Oral Investig 28(1):121. 10.1007/s00784-023-05440-1
- Liao N, Dai J, Tang Y, Zhong Q, Mo S (2022) ICVM: an interpretable deep learning model for CVM assessment under label uncertainty. IEEE J Biomed Health Inform 26(8):4325–4334. 10.1109/jbhi.2022.3179619
- Lim HK, Jung SK, Kim SH, Cho Y, Song IS (2021) Deep semi-supervised learning for automatic segmentation of inferior alveolar nerve using a convolutional neural network. BMC Oral Health 21(1):630. 10.1186/s12903-021-01983-5
- Lin G, Kim PJ, Baek SH, Kim HG, Kim SW, Chung JH (2021) Early prediction of the need for orthognathic surgery in patients with repaired unilateral cleft lip and palate using machine learning and longitudinal lateral cephalometric analysis data. J Craniofac Surg 32(2):616–620. 10.1097/scs.0000000000006943
- Lin B, Cheng M, Wang S, Li F, Zhou Q (2022) Automatic detection of anteriorly displaced temporomandibular joint discs on magnetic resonance images using a deep learning algorithm. Dentomaxillofac Radiol 51(3):20210341. 10.1259/dmfr.20210341
- Liu Z, He X, Wang H, Xiong H, Zhang Y, Wang G, Hao J, Feng Y, Zhu F, Hu H (2023) Hierarchical self-supervised learning for 3D tooth segmentation in intra-oral mesh scans. IEEE Trans Med Imaging 42(2):467–480. 10.1109/tmi.2022.3222388
- Liu C, Liu Y, Yi C, Xie T, Tian J, Deng P, Shan Y, Dong H, Xu Y (2025) Application of a 3D fusion model to evaluate the efficacy of clear aligner therapy in malocclusion patients: prospective observational study. J Med Internet Res 27:e67378. 10.2196/67378
- Lo Giudice A, Ronsivalle V, Spampinato C, Leonardi R (2021) Fully automatic segmentation of the mandible based on convolutional neural networks (CNNs). Orthod Craniofac Res 24:100–107. 10.1111/ocr.12536
- Machoy M, Szyszka-Sommerfeld L, Koprowski R, Wawrzyk A, Woźniak K, Wilczyński S (2021) Assessment of periodontium temperature changes under orthodontic force by using objective and automatic classifier. Appl Sci. 10.3390/app11062634
- Mahto RK, Kafle D, Giri A, Luintel S, Karki A (2022) Evaluation of fully automated cephalometric measurements obtained from web-based artificial intelligence driven platform. BMC Oral Health 22(1):132. 10.1186/s12903-022-02170-w
- Makaremi M, Lacaule C, Mohammad-Djafari A (2019) Deep learning and artificial intelligence for the determination of the cervical vertebra maturation degree from lateral radiography. Entropy. 10.3390/e21121222
- Makaremi M, Vafaei Sadr A, Marcy B, Chraibi Kaadoud I, Mohammad-Djafari A, Sadoun S, . . . N'Kaoua B (2023) An interpretable machine learning approach to study the relationship between retrognathia and skull anatomy. Sci Rep 13(1):18130. 10.1038/s41598-023-45314-w
- Manna F, Pensiero S, Clarich G, Guarneri GF, Parodi PC (2009) Cleft lip and palate: current status from the literature and our experience. J Craniofac Surg 20(5):1383–1387. 10.1097/SCS.0b013e3181b0daa3
- Manne R, Gandikota C, Juvvadi SR, Rama HR, Anche S (2012) Impacted canines: etiology, diagnosis, and orthodontic management. J Pharm Bioallied Sci 4(Suppl 2):S234-238. 10.4103/0975-7406.100216
- Mario MC, Abe JM, Ortega NR, Del Santo M Jr (2010) Paraconsistent artificial neural network as auxiliary in cephalometric diagnosis. Artif Organs 34(7):E215–221. 10.1111/j.1525-1594.2010.00994.x
- Mason T, Kelly KM, Eckert G, Dean JA, Dundar MM, Turkkahraman H (2023) A machine learning model for orthodontic extraction/non-extraction decision in a racially and ethnically diverse patient population. Int Orthod 21(3):100759. 10.1016/j.ortho.2023.100759
- Meriç P, Naoumova J (2020) Web-based fully automated cephalometric analysis: comparisons between App-aided, computerized, and manual tracings. Turk J Orthod 33(3):142–149. 10.5152/TurkJOrthod.2020.20062
- Minhas S, Wu TH, Kim DG, Chen S, Wu YC, Ko CC (2024) Artificial intelligence for 3D reconstruction from 2D panoramic X-rays to assess maxillary impacted canines. Diagnostics. 10.3390/diagnostics14020196
- Mohammad-Rahimi H, Motamadian SR, Nadimi M, Hassanzadeh-Samani S, Minabi MAS, Mahmoudinia E, Lee VY, Rohban MH (2022) Deep learning for the classification of cervical maturation degree and pubertal growth spurts: a pilot study. Korean J Orthod 52(2):112–122. 10.4041/kjod.2022.52.2.112
- Mohammed H, Kumar R Jr., Bennani H, Halberstadt JB, Farella M (2022) Automated detection of smiles as discrete episodes. J Oral Rehabil 49(12):1173–1180. 10.1111/joor.13378
- Montúfar J, Romero M, Scougall-Vilchis RJ (2018) Automatic 3-dimensional cephalometric landmarking based on active shape models in related projections. Am J Orthod Dentofacial Orthop 153(3):449–458
- Moon J-H, Hwang H-W, Yu Y, Kim M-G, Donatelli RE, Lee S-J (2020) How much deep learning is enough for automatic identification to be reliable?: A cephalometric example. Angle Orthod 90(6):823–830. 10.2319/021920-116.1
- Moon JH, Shin HK, Lee JM, Cho SJ, Park JA, Donatelli RE, Lee SJ (2024) Comparison of individualized facial growth prediction models based on the partial least squares and artificial intelligence. Angle Orthod 94(2):207–215. 10.2319/031723-181.1
- Mourgues T, González-Olmo MJ, Huanca Ghislanzoni L, Peñacoba C, Romero-Maroto M (2024) Artificial intelligence in aesthetic dentistry: is treatment with aligners clinically realistic? J Clin Med. 10.3390/jcm13206074
- Moztarzadeh O, Jamshidi M, Sargolzaei S, Keikhaee F, Jamshidi A, Shadroo S, Hauer L (2023) Metaverse and medical diagnosis: a blockchain-based digital twinning approach based on MobileNetV2 algorithm for cervical vertebral maturation. Diagnostics. 10.3390/diagnostics13081485
- Muñoz G, Zamora D, Brito L, Ravelo V, de Moraes M, Olate S (2024) Comparison between an expert operator, an inexperienced operator, and artificial intelligence software: a brief clinical study of cephalometric diagnostic. J Craniofac Surg. 10.1097/scs.0000000000010346
- Muraev AA, Tsai P, Kibardin I, Oborotistov N, Shirayeva T, Ivanov S, . . . Tuturov N (2020) Frontal cephalometric landmarking: humans vs artificial neural networks. Int J Comput Dent 23(2):139–148.
- Murphy SJ, Lee S, Scharm JC, Kim S, Amin AA, Wu TH, Deguchi T (2023) Comparison of maxillary anterior tooth movement between Invisalign and fixed appliances. Am J Orthod Dentofacial Orthop 164(1):24–33. 10.1016/j.ajodo.2022.10.024
- Murray JC (2002) Gene/environment causes of cleft lip and/or palate. Clin Genet 61(4):248–256. 10.1034/j.1399-0004.2002.610402.x
- Nauwelaers N, Matthews H, Fan Y, Croquet B, Hoskens H, Mahdi S, El Sergani A, Gong S, Xu T, Bronstein M, Marazita M, Weinberg S, Claes P (2021) Exploring palatal and dental shape variation with 3D shape analysis and geometric deep learning. Orthod Craniofac Res 24(Suppl 2):134–143. 10.1111/ocr.12521
- Ni FD, Xu ZN, Liu MQ, Zhang MJ, Li S, Bai HL, Ding P, Fu KY (2024) Towards clinically applicable automated mandibular canal segmentation on CBCT. J Dent 144:104931. 10.1016/j.jdent.2024.104931
- Nishimoto S, Sotsuka Y, Kawai K, Ishise H, Kakibuchi M (2019) Personal computer-based cephalometric landmark detection with deep learning, using cephalograms on the internet. J Craniofac Surg 30(1):91–95. 10.1097/scs.0000000000004901
- Noeldeke B, Vassis S, Sefidroodi M, Pauwels R, Stoustrup P (2024) Comparison of deep learning models to detect crossbites on 2D intraoral photographs. Head Face Med 20(1):45. 10.1186/s13005-024-00448-8
- Nogueira-Reis F, Morgan N, Nomidis S, Van Gerven A, Oliveira-Santos N, Jacobs R, Tabchoury CPM (2023) Three-dimensional maxillary virtual patient creation by convolutional neural network-based segmentation on cone-beam computed tomography images. Clin Oral Investig 27(3):1133–1141. 10.1007/s00784-022-04708-2
- Nogueira-Reis F, Cascante-Sequeira D, Farias-Gomes A, de Macedo MMG, Watanabe RN, Santiago AG, Tabchoury CPM, Freitas DQ (2024) Determination of the pubertal growth spurt by artificial intelligence analysis of cervical vertebrae maturation in lateral cephalometric radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol 138(2):306–315. 10.1016/j.oooo.2024.02.017
- Obwegeser D, Timofte R, Mayer C, Eliades T, Bornstein MM, Schätzle MA, Patcas R (2022) Using artificial intelligence to determine the influence of dental aesthetics on facial attractiveness in comparison to other facial modifications. Eur J Orthod 44(4):445–451. 10.1093/ejo/cjac016
- O’Friel K, Chapple A, Ballard R, Armbruster P (2024) Assessing AudaxCeph®’s cephalometric tracing technology versus a semi-automated approach for analyzing severe Class II and Class III skeletons. Int Orthod 22(4):100926. 10.1016/j.ortho.2024.100926
- Olsen JA, Inglehart MR (2011) Malocclusions and perceptions of attractiveness, intelligence, and personality, and behavioral intentions. Am J Orthod Dentofacial Orthop 140(5):669–679. 10.1016/j.ajodo.2011.02.025
- Omar ZA, Chin SN, Sentian A, Hamzah N, Yassin F (2018) Exploring contributing features of pre-graft orthodontic treatment of cleft lip and palate patients using random forests. Transac Sci Technol 5(1):5–11
- Özcan M, Erdem B, Turan B, Tokatlı N, Şar Ç, Özdemir F (2024) Deep-learning model for assessing difficulty in localizing impacted canines. Int Dent J 74:S3. 10.1016/j.identj.2024.07.578
- Ozsunkar PS, Özen D, Abdelkarim AZ, Duman S, Uğurlu M, Demİr MR, Özen DÇelİk, Kuleli B, Çelİk Ö, Imamoglu BS, Bayrakdar IS, Duman SB (2024) Detecting white spot lesions on post-orthodontic oral photographs using deep learning based on the YOLOv5x algorithm: a pilot study. BMC Oral Health 24(1):490. 10.1186/s12903-024-04262-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Panesar S, Zhao A, Hollensbe E, Wong A, Bhamidipalli SS, Eckert G, Dutra V, Turkkahraman H (2023) Precision and accuracy assessment of cephalometric analyses performed by deep learning artificial intelligence with and without human augmentation. Appl Sci. 10.3390/app13126921 [Google Scholar]
- Papio MA, Fields HW Jr, Beck FM, Firestone AR, Rosenstiel SF (2019) The effect of dental and background facial attractiveness on facial attractiveness and perceived integrity and social and intellectual qualities. Am J Orthod Dentofacial Orthop 156(4):464-474.e461. 10.1016/j.ajodo.2018.10.021 [DOI] [PubMed] [Google Scholar]
- Park J-H, Hwang H-W, Moon J-H, Yu Y, Kim H, Her S-B, Srinivasan G, Aljanabi MNA, Donatelli RE, Lee S-J (2019a) Automated identification of cephalometric landmarks: Part 1—comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthod 89(6):903–909
- Park JH, Hwang HW, Moon JH, Yu Y, Kim H, Her SB, Srinivasan G, Aljanabi MNA, Donatelli RE, Lee SJ (2019b) Automated identification of cephalometric landmarks: Part 1: comparisons between the latest deep learning methods YOLOV3 and SSD. Angle Orthod 89(6):903–909. 10.2319/022019-127.1
- Park JH, Kim YJ, Kim J, Kim J, Kim IH, Kim N, Vaid NR, Kook YA (2021) Use of artificial intelligence to predict outcomes of nonextraction treatment of class II malocclusions. Semin Orthod 27(2):87–95. 10.1053/j.sodo.2021.05.005
- Park YS, Choi JH, Kim Y, Choi SH, Lee JH, Kim KH, Chung CJ (2022) Deep learning-based prediction of the 3D postorthodontic facial changes. J Dent Res 101(11):1372–1379. 10.1177/00220345221106676
- Parrish M, O’Connell E, Eckert G, Hughes J, Badirli S, Turkkahraman H (2023) Short- and long-term prediction of the post-pubertal mandibular length and Y-axis in females utilizing machine learning. Diagnostics. 10.3390/diagnostics13172729
- Patcas R, Bernini DAJ, Volokitin A, Agustsson E, Rothe R, Timofte R (2019a) Applying artificial intelligence to assess the impact of orthognathic treatment on facial attractiveness and estimated age. Int J Oral Maxillofac Surg 48(1):77–83. 10.1016/j.ijom.2018.07.010
- Patcas R, Timofte R, Volokitin A, Agustsson E, Eliades T, Eichenberger M, Bornstein MM (2019b) Facial attractiveness of cleft patients: a direct comparison between artificial-intelligence-based scoring and conventional rater groups. Eur J Orthod 41(4):428–433. 10.1093/ejo/cjz007
- Peck CJ, Patel VK, Parsaei Y, Pourtaheri N, Allam O, Lopez J, Steinbacher D (2022) Commercial artificial intelligence software as a tool for assessing facial attractiveness: a proof-of-concept study in an orthognathic surgery cohort. Aesthet Plast Surg 46(2):1013–1016. 10.1007/s00266-021-02537-4
- Pei Y, Qin H, Ma G, Guo Y, Chen G, Xu T, Zha H (2017) Multi-scale volumetric convnet with nested residual connections for segmentation of anterior cranial base. Paper presented at the Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
- Peng J, Chen S, Shang F, Yang Y, Jiang R (2024) Measurement plane of the cross-sectional area of the masseter muscle in patients with skeletal class III malocclusion: an artificial intelligence model. Am J Orthod Dentofacial Orthop. 10.1016/j.ajodo.2024.03.011
- Perillo L, Auconi P, d’Apuzzo F, Grassia V, Scazzocchio M, Nucci L, Franchi L (2021) Machine learning in the prognostic appraisal of class III growth. Semin Orthod 27(2):96–108. 10.1053/j.sodo.2021.05.006
- Popova T, Stocker T, Khazaei Y, Malenova Y, Wichelhaus A, Sabbagh H (2023) Influence of growth structures and fixed appliances on automated cephalometric landmark recognition with a customized convolutional neural network. BMC Oral Health 23(1):274. 10.1186/s12903-023-02984-2
- Prasad J, Mallikarjunaiah DR, Shetty A, Gandedkar N, Chikkamuniswamy AB, Shivashankar PC (2023) Machine learning predictive model as clinical decision support system in orthodontic treatment planning. Dent J 11(1). 10.3390/dj11010001
- Preda F, Morgan N, Van Gerven A, Nogueira-Reis F, Smolders A, Wang X, Nomidis S, Shaheen E, Willems H, Jacobs R (2022) Deep convolutional neural network-based automated segmentation of the maxillofacial complex from cone-beam computed tomography: a validation study. J Dent 124:104238. 10.1016/j.jdent.2022.104238
- Prince STT, Srinivasan D, Duraisamy S, Kannan R, Rajaram K (2023) Reproducibility of linear and angular cephalometric measurements obtained by an artificial-intelligence assisted software (WebCeph) in comparison with digital software (AutoCEPH) and manual tracing method. Dental Press J Orthod 28(1):e2321214. 10.1590/2177-6709.28.1.e2321214.oar
- Radwan MT, Sin Ç, Akkaya N, Vahdettin L (2023) Artificial intelligence-based algorithm for cervical vertebrae maturation stage assessment. Orthod Craniofac Res 26(3):349–355. 10.1111/ocr.12615
- Ramadan RA, Khedr AY, Yadav K, Alreshidi EJ, Sharif MH, Azar AT, Kamberaj H (2022) Convolution neural network based automatic localization of landmarks on lateral x-ray images. Multimed Tools Appl 81(26):37403–37415. 10.1007/s11042-021-11596-3
- Ramasubbu N, Valai Kasim SA, Thavarajah R, Nathamuni Rengarajan K (2024) Applying artificial intelligence to predict the outcome of orthodontic treatment. APOS Trends Orthod 14(4):264–272. 10.25259/APOS_270_2023
- Rauf AM, Mahmood TMA, Mohammed MH, Omer ZQ, Kareem FA (2023) Orthodontic implementation of machine learning algorithms for predicting some linear dental arch measurements and preventing anterior segment malocclusion: a prospective study. Medicina (Kaunas). 10.3390/medicina59111973
- Real AD, Real OD, Sardina S, Oyonarte R (2022) Use of automated artificial intelligence to predict the need for orthodontic extractions. Korean J Orthod 52(2):102–111. 10.4041/kjod.2022.52.2.102
- Richards MR, Fields HW Jr, Beck FM, Firestone AR, Walther DB, Rosenstiel S, Sacksteder JM (2015) Contribution of malocclusion and female facial attractiveness to smile esthetics evaluated by eye tracking. Am J Orthod Dentofacial Orthop 147(4):472–482. 10.1016/j.ajodo.2014.12.016
- Ristau B, Coreil M, Chapple A, Armbruster P, Ballard R (2022) Comparison of AudaxCeph®’s fully automated cephalometric tracing technology to a semi-automated approach by human examiners. Int Orthod 20(4):100691. 10.1016/j.ortho.2022.100691
- Russell SJ, Norvig P (2021) Artificial intelligence: a modern approach, 4th edn, global edition. Pearson
- Ryu J, Lee YS, Mo SP, Lim K, Jung SK, Kim TW (2022) Application of deep learning artificial intelligence technique to the classification of clinical orthodontic photos. BMC Oral Health 22(1):454. 10.1186/s12903-022-02466-x
- Ryu J, Kim YH, Kim TW, Jung SK (2023) Evaluation of artificial intelligence model for crowding categorization and extraction diagnosis using intraoral photographs. Sci Rep 13(1):5177. 10.1038/s41598-023-32514-7
- Sahlsten J, Järnstedt J, Jaskari J, Naukkarinen H, Mahasantipiya P, Charuakkra A, Vasankari K, Hietanen A, Sundqvist O, Lehtinen A, Kaski K (2024) Deep learning for 3D cephalometric landmarking with heterogeneous multi-center CBCT dataset. PLoS ONE 19(6):e0305947. 10.1371/journal.pone.0305947
- Saifeldin H, Osorio J, Xi M, Safwat B, Khokher MR, Li S, . . . Wang D (2024) Accuracy of Eyes of AI™ artificial intelligence-driven platform for lateral cephalometric analysis. Ain Shams Dent J (Egypt) 33(1):1–10. 10.21608/asdj.2024.277176.1229
- Salmanpour F, Camci H (2024) Artificial intelligence for predicting orthodontic patient cooperation: voice records versus frontal photographs. APOS Trends Orthod 14(4):255–263. 10.25259/APOS_221_2023
- Şatir S, Büyükçavuş MH, Sari Ö, Çimen T (2023) A novel approach to radiographic detection of growth development period with hand-wrist radiographs: a preliminary study with ImageJ imaging software. Orthod Craniofac Res 26(1):100–106. 10.1111/ocr.12584
- Senirkentli GB, İnce Bingöl S, Ünal M, Bostancı E, Güzel MS, Açıcı K (2023) Machine learning based orthodontic treatment planning for mixed dentition borderline cases suffering from moderate to severe crowding: an experimental research study. Technol Health Care 31(5):1723–1735. 10.3233/thc-220563
- Seo H, Hwang J, Jeong T, Shin J (2021) Comparison of deep learning models for cervical vertebral maturation stage classification on lateral cephalometric radiographs. J Clin Med. 10.3390/jcm10163591
- Seo H, Hwang J, Jung Y-H, Lee E, Nam OH, Shin J (2023) Deep focus approach for accurate bone age estimation from lateral cephalogram. J Dent Sci 18(1):34–43. 10.1016/j.jds.2022.07.018
- Shafi N, Bukhari F, Iqbal W, Almustafa KM, Asif M, Nawaz Z (2020) Cleft prediction before birth using deep neural network. Health Informatics J 26(4):2568–2585. 10.1177/1460458220911789
- Shan T, Tay FR, Gu L (2021) Application of artificial intelligence in dentistry. J Dent Res 100(3):232–244. 10.1177/0022034520969115
- Shimizu Y, Tanikawa C, Kajiwara T, Nagahara H, Yamashiro T (2022) The validation of orthodontic artificial intelligence systems that perform orthodontic diagnoses and treatment planning. Eur J Orthod 44(4):436–444. 10.1093/ejo/cjab083
- Shoari SA, Sadrolashrafi SV, Sohrabi A, Afrouzian R, Ebrahimi P, Kouhsoltani M, Soltani MK (2024) Estimating mandibular growth stage based on cervical vertebral maturation in lateral cephalometric radiographs using artificial intelligence. Prog Orthod 25(1):28. 10.1186/s40510-024-00527-1
- Shojaei H, Augusto V (2022) Constructing machine learning models for orthodontic treatment planning: a comparison of different methods. Paper presented at the Proceedings - 2022 IEEE International Conference on Big Data, Big Data 2022
- Silva TP, Hughes MM, dos Santos Menezes L, de Melo MDFB, de Freitas PHL, Takeshita WM (2022) Artificial intelligence-based cephalometric landmark annotation and measurements according to Arnett’s analysis: can we trust a bot to do that? Dentomaxillofac Radiol. 10.1259/DMFR.20200548
- Snider V, Homsi K, Kusnoto B, Atsawasuwan P, Viana G, Allareddy V, Gajendrareddy P, Elnagar MH (2023) Effectiveness of AI-driven remote monitoring technology in improving oral hygiene during orthodontic treatment. Orthod Craniofac Res 26(Suppl 1):102–110. 10.1111/ocr.12666
- Soleiman Mezerji M, Sheikhzadeh S, Mirzaie M, Gholinia H (2023) Fully automated orthodontic photograph analysis by machine learning. Caspian J Dent Res 12(2):70–81
- Song Y, Qiao X, Iwamoto Y, Chen Y-W (2020) Automatic cephalometric landmark detection on X-ray images using a deep-learning method. Appl Sci 10(7):2547
- Stetzel L, Foucher F, Jang SJ, Wu TH, Fields H, Schumacher F, Richmond S, Ko CC (2024) Artificial intelligence for predicting the aesthetic component of the index of orthodontic treatment need. Bioengineering. 10.3390/bioengineering11090861
- Strunga M, Ballova DS, Tomasik J, Oravcova L, Danisovic L, Thurzo A (2024) AI-automated cephalometric tracing: a new normal in orthodontics? Paper presented at the International Conference on Artificial Intelligence, Computer, Data Sciences, and Applications, ACDSA 2024
- Su S, Jia X, Zhan L, Gao S, Zhang Q, Huang X (2024) Automatic tooth periodontal ligament segmentation of cone beam computed tomography based on instance segmentation network. Heliyon 10(2):e24097. 10.1016/j.heliyon.2024.e24097
- Tafala I, Bourzgui F, Othmani MB, Azmi M (2022) Automatic classification of malocclusion. Paper presented at the Procedia Computer Science
- Takada K (2016) Artificial intelligence expert systems with neural network machine learning may assist decision-making for extractions in orthodontic treatment planning. J Evid Based Dent Pract 16(3):190–192. 10.1016/j.jebdp.2016.07.002
- Takeda S, Mine Y, Yoshimi Y, Ito S, Tanimoto K, Murayama T (2021) Landmark annotation and mandibular lateral deviation analysis of posteroanterior cephalograms using a convolutional neural network. J Dent Sci 16(3):957–963. 10.1016/j.jds.2020.10.012
- Talaat S, Kaboudan A, Talaat W, Kusnoto B, Sanchez F, Elnagar MH, Bourauel C, Ghoneima A (2021) The validity of an artificial intelligence application for assessment of orthodontic treatment need from clinical images. Semin Orthod 27(2):164–171. 10.1053/j.sodo.2021.05.012
- Tamayo-Quintero JD, Gómez-Mendoza JB, Guevara-Pérez SV (2024) DentalArch: AI-based arch shape detection in orthodontics. Appl Sci. 10.3390/app14062567
- Tanaka OM, Gasparello GG, Hartmann GC, Casagrande FA, Pithon MM (2023) Assessing the reliability of ChatGPT: a content analysis of self-generated and self-answered questions on clear aligners, TADs and digital imaging. Dental Press J Orthod 28(5):e2323183. 10.1590/2177-6709.28.5.e2323183.oar
- Tang H, Liu S, Tan W, Fu L, Yan M, Feng H (2024) Prediction of midpalatal suture maturation stage based on transfer learning and enhanced vision transformer. BMC Med Inform Decis Mak 24(1):232. 10.1186/s12911-024-02598-w
- Tanikawa C, Lee C, Lim J, Oka A, Yamashiro T (2021a) Clinical applicability of automated cephalometric landmark identification: Part I—patient-related identification errors. Orthod Craniofac Res 24(Suppl 2):43–52. 10.1111/ocr.12501
- Tanikawa C, Lee C, Lim J, Oka A, Yamashiro T (2021b) Clinical applicability of automated cephalometric landmark identification: Part I—patient-related identification errors. Orthod Craniofac Res 24(S2):43–52. 10.1111/ocr.12501
- Tanikawa C, Tan TJ, Takada K (2024) Facial soft-tissue shape changes after fixed edgewise treatment with premolar extraction in individual artificial-intelligence-classified facial profile patterns. BMC Oral Health 24(1):740. 10.1186/s12903-024-04512-2
- Tanikawa C, Chonho L (2021) Machine learning for facial recognition in orthodontics. In: Machine learning in dentistry, pp 55–65
- Tao B, Yu X, Wang W, Wang H, Chen X, Wang F, Wu Y (2023a) A deep learning-based automatic segmentation of zygomatic bones from cone-beam computed tomography images: a proof of concept. J Dent. 10.1016/j.jdent.2023.104582
- Tao L, Li M, Zhang X, Cheng M, Yang Y, Fu Y, Zhang R, Qian D, Yu H (2023b) Automatic craniomaxillofacial landmarks detection in CT images of individuals with dentomaxillofacial deformities by a two-stage deep learning model. BMC Oral Health 23(1):876. 10.1186/s12903-023-03446-5
- Taraji S, Atici SF, Viana G, Kusnoto B, Allareddy V, Miloro M, Elnagar MH (2023) Novel machine learning algorithms for prediction of treatment decisions in adult patients with class III malocclusion. J Oral Maxillofac Surg 81(11):1391–1402. 10.1016/j.joms.2023.07.137
- Thanathornwong B (2018) Bayesian-based decision support system for assessing the needs for orthodontic treatment. Healthc Inform Res 24(1):22–28. 10.4258/hir.2018.24.1.22
- Thurzo A, Kurilová V, Varga I (2021) Artificial intelligence in orthodontic smart application for treatment coaching and its impact on clinical performance of patients monitored with AI-telehealth system. Healthcare. 10.3390/healthcare9121695
- Tian JL, Zhang QY, Li HZ, Wang Q, Lei Y, Zang L, . . . Yang JJ (2022) Study of facial generation methods after orthodontic treatment. Paper presented at the Proceedings - 2022 IEEE 46th Annual Computers, Software, and Applications Conference, COMPSAC 2022
- Tomášik J, Zsoldos M, Majdáková K, Fleischmann A, Oravcová Ľ, Sónak Ballová D, Thurzo A (2024) The potential of AI-powered face enhancement technologies in face-driven orthodontic treatment planning. Appl Sci. 10.3390/app14177837
- Trehan M, Bhanotia D, Shaikh TA, Sharma S, Sharma S (2023) Artificial intelligence-based automated model for prediction of extraction using neural network machine learning: a scope and performance analysis. J Contemp Orthod 7(4):281–286. 10.18231/j.jco.2023.048
- Tsolakis IA, Tsolakis AI, Elshebiny T, Matthaios S, Palomo JM (2022) Comparing a fully automated cephalometric tracing method to a manual tracing method for orthodontic diagnosis. J Clin Med. 10.3390/jcm11226854
- Ueda A, Tussie C, Kim S, Kuwajima Y, Matsumoto S, Kim G, Nagai S (2023) Classification of maxillofacial morphology by artificial intelligence using cephalometric analysis measurements. Diagnostics. 10.3390/diagnostics13132134
- Ugurlu M (2022) Performance of a convolutional neural network-based artificial intelligence algorithm for automatic cephalometric landmark detection. Turk J Orthod 35(2):94–100. 10.5152/TurkJOrthod.2022.22026
- Verhelst PJ, Smolders A, Beznik T, Meewis J, Vandemeulebroucke A, Shaheen E, Jacobs R (2021) Layered deep learning for automatic mandibular segmentation in cone-beam computed tomography. J Dent 114:103786. 10.1016/j.jdent.2021.103786
- Vinayahalingam S, Berends B, Baan F, Moin DA, van Luijn R, Bergé S, Xi T (2023a) Deep learning for automated segmentation of the temporomandibular joint. J Dent. 10.1016/j.jdent.2023.104475
- Vinayahalingam S, Kempers S, Schoep J, Hsu TMH, Moin DA, van Ginneken B, Xi T (2023b) Intra-oral scan segmentation using deep learning. BMC Oral Health. 10.1186/s12903-023-03362-8
- Volovic J, Badirli S, Ahmad S, Leavitt L, Mason T, Bhamidipalli SS, Eckert G, Albright D, Turkkahraman H (2023) A novel machine learning model for predicting orthodontic treatment duration. Diagnostics. 10.3390/diagnostics13172740
- Wang H, Minnema J, Batenburg KJ, Forouzanfar T, Hu FJ, Wu G (2021a) Multiclass CBCT image segmentation for orthodontics with deep learning. J Dent Res 100(9):943–949. 10.1177/00220345211005338
- Wang X, Pastewait M, Wu TH, Lian C, Tejera B, Lee YT, Ko CC (2021b) 3D morphometric quantification of maxillae and defects for patients with unilateral cleft palate via deep learning-based CBCT image auto-segmentation. Orthod Craniofac Res 24:108–116. 10.1111/ocr.12482
- Wang T, Nie K, Fan Y, Chen G, Xu K, Han B, Pei Y, Song G, Xu T (2024b) Network analysis of three-dimensional hard-soft tissue relationships in the lower 1/3 of the face: skeletal class I-normodivergent malocclusion versus class II-hyperdivergent malocclusion. BMC Oral Health 24(1):996. 10.1186/s12903-024-04752-2
- Wang X, Alqahtani KA, Van den Bogaert T, Shujaat S, Jacobs R, Shaheen E (2024c) Convolutional neural network for automated tooth segmentation on intraoral scans. BMC Oral Health 24(1):804. 10.1186/s12903-024-04582-2
- Wang Y, Wu W, Christelle M, Sun M, Wen Z, Lin Y, Zhang H, Xu J (2024d) Automated localization of mandibular landmarks in the construction of mandibular median sagittal plane. Eur J Med Res. 10.1186/s40001-024-01681-2
- Wang HY, Li KH, Zhu JH, Wang F, Lian CF, Ma JH (2024) Weakly supervised tooth instance segmentation on 3D dental models with multi-label learning. Paper presented at the 27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Marrakesh, Morocco
- Wei G, Cui Z, Zhu J, Yang L, Zhou Y, Singh P, Gu M, Wang W (2022) Dense representative tooth landmark/axis detection network on 3D model. Comput Aided Geom Des. 10.1016/j.cagd.2022.102077
- Westfall RS, Millar MG, Lovitt A (2019) The influence of physical attractiveness on belief in a just world. Psychol Rep 122(2):536–549. 10.1177/0033294118763172
- Wirtz A, Lam J, Wesarg S (2020) Automated cephalometric landmark localization using a coupled shape model. Curr Dir Biomed Eng 6(3):56–59. 10.1515/cdbme-2020-3015
- Wolf D, Farrag G, Flügge T, Timm LH (2024) Predicting outcome in clear aligner treatment: a machine learning analysis. J Clin Med. 10.3390/jcm13133672
- Wood T, Anigbo JO, Eckert G, Stewart KT, Dundar MM, Turkkahraman H (2023) Prediction of the post-pubertal mandibular length and Y axis of growth by using various machine learning techniques: a retrospective longitudinal study. Diagnostics. 10.3390/diagnostics13091553
- Wu TH, Lian C, Lee S, Pastewait M, Piers C, Liu J, Wang F, Wang L, Chiu C-Y, Wang W, Jackson C, Chao W-L, Shen D, Ko CC (2022) Two-stage mesh deep learning for automated tooth segmentation and landmark localization on 3D intraoral scans. IEEE Trans Med Imaging 41(11):3158–3166. 10.1109/tmi.2022.3180343
- Xie X, Wang L, Wang A (2010) Artificial neural network modeling for deciding if extractions are necessary prior to orthodontic treatment. Angle Orthod 80(2):262–266. 10.2319/111608-588.1
- Xing L, Zhang X, Guo Y, Bai D, Xu H (2023) XGBoost-aided prediction of lip prominence based on hard-tissue measurements and demographic characteristics in an Asian population. Am J Orthod Dentofacial Orthop 164(3):357–367. 10.1016/j.ajodo.2023.01.017
- Xu X, Liu C, Zheng Y (2019) 3D tooth segmentation and labeling using deep convolutional neural networks. IEEE Trans Vis Comput Graph 25(7):2336–2348. 10.1109/tvcg.2018.2839685
- Xu L, Mei L, Lu RQ, Li Y, Li HS, Li Y (2022) Predicting patient experience of Invisalign treatment: an analysis using artificial neural network. Korean J Orthod 52(4):268–277. 10.4041/kjod21.255
- Xu S, Peng H, Yang L, Zhong W, Gao X, Song J (2024) An automatic grading system for orthodontically induced external root resorption based on deep convolutional neural network. J Imaging Inform Med. 10.1007/s10278-024-01045-6
- Yacout YM, Eid FY, Tageldin MA, Kassem HE (2024) Evaluation of the accuracy of automated tooth segmentation of intraoral scans using artificial intelligence-based software packages. Am J Orthod Dentofacial Orthop 166(3):282-291.e281. 10.1016/j.ajodo.2024.05.015
- Yamamoto G, Ohta Y, Tsuda Y, Tanaka A, Nishikawa M, Inoda H (2003) A new classification of impacted canines and second premolars using orthopantomography. Asian J Oral Maxillofac Surg 15(1):31–37. 10.1016/S0915-6992(03)80029-8
- Yao J, Zeng W, He T, Zhou S, Zhang Y, Guo J, Tang W (2022) Automatic localization of cephalometric landmarks based on convolutional neural network. Am J Orthod Dentofacial Orthop 161(3):e250–e259. 10.1016/j.ajodo.2021.09.012
- Ye H, Cheng Z, Ungvijanpunya N, Chen W, Cao L, Gou Y (2023) Is automatic cephalometric software using artificial intelligence better than orthodontist experts in landmark identification? BMC Oral Health. 10.1186/s12903-023-03188-4
- Yu X, Liu B, Pei Y, Xu T (2014) Evaluation of facial attractiveness for patients with malocclusion: a machine-learning technique employing Procrustes. Angle Orthod 84(3):410–416. 10.2319/071513-516.1
- Yu HJ, Cho SR, Kim MJ, Kim WH, Kim JW, Choi J (2020) Automated skeletal classification with lateral cephalometry based on artificial intelligence. J Dent Res 99(3):249–256. 10.1177/0022034520901715
- Yu JH, Kim JH, Liu J, Mangal U, Ahn HK, Cha JY (2023) Reliability and time-based efficiency of artificial intelligence-based automatic digital model analysis system. Eur J Orthod 45(6):712–721. 10.1093/ejo/cjad032
- Yu S, Zheng Y, Dong L, Huang W, Wu H, Zhang Q, Yan X, Wu W, Lv T, Yuan X (2024) The accuracy and reliability of different midsagittal planes in the symmetry assessment using cone-beam computed tomography. Clin Anat 37(2):218–226. 10.1002/ca.24133
- Yurdakurban E, Duran GS, Görgülü S (2021) Evaluation of an automated approach for facial midline detection and asymmetry assessment: a preliminary study. Orthod Craniofac Res 24:84–91. 10.1111/ocr.12539
- Yurdakurban E, Süküt Y, Duran GS (2025) Assessment of deep learning technique for fully automated mandibular segmentation. Am J Orthod Dentofacial Orthop 167(2):242–249. 10.1016/j.ajodo.2024.09.006
- Zaheer R, Shafique HZ, Khalid Z, Shahid R, Jan A, Zahoor T, Nawaz R, Ul Hassan M (2024) Comparison of semi and fully automated artificial intelligence driven softwares and manual system for cephalometric analysis. BMC Med Inform Decis Mak 24(1):271. 10.1186/s12911-024-02664-3
- Zakhar G, Hazime S, Eckert G, Wong A, Badirli S, Turkkahraman H (2023) Prediction of pubertal mandibular growth in males with class II malocclusion by utilizing machine learning. Diagnostics. 10.3390/diagnostics13162713
- Zecca PA, Caccia M, Levrini L, Carganico A, Reguzzoni M, Donadio D, Tosi D, Protasoni M (2024) AI-based open-source software for cephalometric analysis from limited FOV radiographs. J Dent 151:105412. 10.1016/j.jdent.2024.105412
- Zeng M, Yan Z, Liu S, Zhou Y, Qiu L (2021) Cascaded convolutional networks for automatic cephalometric landmark detection. Med Image Anal 68:101904. 10.1016/j.media.2020.101904
- Zese R, Lombardo L, De Maio M, Tamascelli M, Cremonini F (2023) A novel cephalometric tool enhanced by AI assistance. Paper presented at the CEUR Workshop Proceedings
- Zhang Z, Liu N, Guo Z, Jiao L, Fenster A, Jin W, Zhang Y, Chen J, Yan C, Gou S (2022) Ageing and degeneration analysis using ageing-related dynamic attention on lateral cephalometric radiographs. NPJ Digit Med 5(1):151. 10.1038/s41746-022-00681-y
- Zhang D, Yang J, Du S, Bu W, Guo YC (2023) An uncertainty-aware and sex-prior guided biological age estimation from orthopantomogram images. IEEE J Biomed Health Inform 27(10):4926–4937. 10.1109/JBHI.2023.3297610
- Zhang H, Liu C, Yang P, Yang S, Yu Q, Liu R (2024) The concept of AI-assisted self-monitoring for skeletal malocclusion. Health Informatics J 30(3):14604582241274511. 10.1177/14604582241274511
- Zhang Y, Pei Y, Qin H, Guo Y, Ma G, Xu T, Zha H (2019) Masseter muscle segmentation from cone-beam CT images using generative adversarial network. Paper presented at the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)
- Zhao Y, Zhang L, Liu Y, Meng D, Cui Z, Gao C, Gao X, Lian C, Shen D (2022) Two-stream graph convolutional network for intra-oral scanner image segmentation. IEEE Trans Med Imaging 41(4):826–835. 10.1109/tmi.2021.3124217
- Zhao CY, Yuan ZB, Luo SC, Wang WJ, Ren Z, Yao XF, Wu T (2023a) Automatic recognition of cephalometric landmarks via multi-scale sampling strategy. Heliyon. 10.1016/j.heliyon.2023.e17459
- Zhao Y, Gao L, Wang Y (2023b) Advances in algorithms for three-dimensional craniomaxillofacial features construction based on point clouds. Chin J Stomatol 58(6):519–526. 10.3760/cma.j.cn112144-20230218-00049
- Zhao L, Chen X, Huang J, Mo S, Gu M, Kang N, Song S, Zhang X, Liang B, Tang M (2024) Machine learning algorithms for the diagnosis of class III malocclusions in children. Children (Basel) 11(7):762. 10.3390/children11070762
- Zheng Q, Ma L, Wu Y, Gao Y, Li H, Lin J, Qing S, Long D, Chen X, Zhang W (2025) Automatic 3-dimensional quantification of orthodontically induced root resorption in cone-beam computed tomography images based on deep learning. Am J Orthod Dentofacial Orthop 167(2):188–201. 10.1016/j.ajodo.2024.09.009
- Zhou J, Zhou H, Pu L, Gao Y, Tang Z, Yang Y, You M, Yang Z, Lai W, Long H (2021) Development of an artificial intelligence system for the automatic evaluation of cervical vertebral maturation status. Diagnostics. 10.3390/diagnostics11122200
- Zhu M, Yang P, Bian C, Zuo F, Guo Z, Wang Y, Bai Y, Zhang N (2024) Convolutional neural network-assisted diagnosis of midpalatal suture maturation stage in cone-beam computed tomography. J Dent 141:104808. 10.1016/j.jdent.2023.104808
Data Availability Statement
The datasets supporting the conclusions of this article are included within the manuscript.


