Abstract
Objectives
The objective of the present study was to determine the accuracy of machine learning (ML) models in the detection of second mesiobuccal (MB2) canals in axial cone-beam computed tomography (CBCT) sections.
Methods
A total of 2500 CBCT scans from the oral radiology department of University Dental Hospital, Sharjah were screened to obtain 277 high-resolution, small field-of-view CBCT scans with maxillary molars. Of these 277 scans, 160 showed the presence of an MB2 orifice and the remaining 117 did not. Two-dimensional axial images of these scans were then cropped. The images were classified and labelled as N (absence of MB2) or M (presence of MB2) by 2 examiners. The images were embedded using Google's Inception V3 and transferred to the ML classification model. Six different ML models (logistic regression [LR], naïve Bayes [NB], support vector machine [SVM], K-nearest neighbours [kNN], random forest [RF], neural network [NN]) were then tested on their ability to classify the images into M and N. The classification metrics (area under the curve [AUC], accuracy, F1-score, precision, recall) of the models were assessed in 3 steps.
Results
NN (0.896), LR (0.893), and SVM (0.886) showed the highest AUC values with specified target variables (steps 2 and 3). The highest classification accuracy was exhibited by LR (0.841) and NN (0.838). The highest precision (86.8%) and recall (92.5%) were observed with the SVM model.
Conclusion
The success rates (AUC, precision, recall) of the ML algorithms in the detection of MB2 were remarkable in our study. When the target variable was specified, high success rates such as 86.8% in precision and 92.5% in recall were achieved. The present study showed promising results in the ML-based detection of the MB2 canal using axial CBCT slices.
Key words: Artificial intelligence, Root canal, Cone-beam computed tomography, Machine learning
Introduction
One of the main factors for the success of any endodontic treatment is the accuracy with which the root canals are identified during the treatment procedure.1 The chances of failure of endodontic treatment increase approximately 4-fold when canals are missed during root canal treatment.2 The maxillary first molar exhibits many variations of root canal anatomy.3 The presence of a second mesiobuccal canal (MB2) is one of these variations, with an incidence ranging from 33% to 96% as reported by several studies.4, 5, 6, 7 The mesiopalatine tilt of the MB2 canal orifice has been found to cause difficulty in locating the canal during endodontic treatment.8 Conventionally, intraoperative examination and periapical radiographs were used to detect the presence of MB2. However, several recent studies have shown that cone-beam computed tomography (CBCT) is more effective in the detection of MB2 than conventional radiographs.9,10
There has been a recent surge in research related to the application of artificial intelligence (AI) in dentistry.11 Machine learning (ML), a subgroup of AI, is being explored as an adjunct in maxillofacial and dental imaging.12 Further, unsupervised ML is a subset of ML that learns from data sets with no human supervision, labelling, or structuring.13 This subcategory of ML works on specific data without needing instructions.13 In this study, we applied unsupervised ML algorithms and measured their success in classifying the obtained images without any human intervention.
AI-based detection of MB2 in the maxillary first molar remains largely unexplored.14 To the best of our knowledge, only one paper has been published in this specific area, and it focuses on applying a deep learning model to detect and then segment unobturated (missed) MB2 canals in maxillary molars that have been endodontically obturated.14 To explore this area further, we conducted a study to determine the accuracy of ML models in the detection of MB2 canals in axial CBCT sections.
Methodology
The study protocol was approved by the research ethics committee of the University of Sharjah (REC-23-10-12-01-F). Two examiners obtained 2500 CBCT scans, acquired with a Planmeca Viso 7 CBCT machine between June 2022 and June 2024, from the dental radiology files of the University Dental Hospital, Sharjah. The scans belonged to male and female patients aged 18 to 60 years.
After screening the CBCT scans, 277 scans with maxillary first molars were eligible for the study. Only scans obtained using high-resolution settings (150 μm voxel size, 100 kVp, 12.5 mA, 5-second exposure time) with a small field of view (Ø 6.0 × 6.0 cm) were included. The included scans showed complete coverage of the crown and root apex of the maxillary first molar.
Scans not covering the region of interest (ROI) (mesial side of the maxillary second premolar to the distal side of the maxillary second molar) were excluded. Scans with caries, restorations, fractures, or root canal treatment of the maxillary first molar were also excluded.
Two examiners with 10 years of clinical experience screened the high-resolution CBCT scans for the presence/absence of MB2 canals. In case of disagreement between the 2 examiners, a third examiner with similar experience was called in to assist with determining the classification.
Later, 10% of the scans were reevaluated by each examiner after 2 weeks to determine intrarater reliability.
Once confirmed, the images were cropped uniformly (200 × 400 pixels), labelled (N = MB2 not present; M = MB2 present), and saved in Joint Photographic Experts Group (JPEG) format. The first axial section at the level of the canal orifice was used, as described by Normando et al.15 (Figures 1A and B). The images were preprocessed using ‘ImageFilter’, a module of the Python Imaging Library (PIL) (Figures 2A–C).
Fig. 1.
A, Sagittal cone-beam computed tomography (CBCT) section showing the level at which axial images were obtained, using the technique of Normando et al. B, Cropped axial CBCT section of a maxillary first molar with the mesiobuccal orifice (red arrow).
Fig. 2.
Original image (A) and preprocessed image (B) with increased sharpness. (C) Screenshot of the code for the image filter in the Python Imaging Library.
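The cropping, sharpening, and JPEG export described above can be sketched as follows. This is a minimal illustration only: the exact crop coordinates and filter settings used in the study are not reported beyond "sharpen", so the `box` default and file paths here are assumptions.

```python
from PIL import Image, ImageFilter


def preprocess_axial_slice(src_path, dst_path, box=(0, 0, 200, 400)):
    """Crop an exported axial CBCT slice to 200 x 400 px and sharpen it.

    The crop box is illustrative; in practice it would be centred on the
    mesiobuccal root region of the maxillary first molar.
    """
    img = Image.open(src_path).convert("L")   # work in grayscale
    img = img.crop(box)                       # uniform 200 x 400 px region
    img = img.filter(ImageFilter.SHARPEN)     # PIL ImageFilter, as in Fig. 2
    img.save(dst_path, "JPEG")                # labelled files saved as JPEG
    return img
```

Each output file would then be stored in a folder named for its class label (M or N), which is how the downstream classification step reads the labels.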
The examiners then used a pretrained deep learning model (Google's Inception v3) for image embedding (image reading and uploading to the server). The activations of the penultimate layer of the neural network (NN), which form vector representations of the images, were used for embedding. The feature vector of each image was then subjected to classification by ML models (logistic regression [LR], naïve Bayes [NB], support vector machine [SVM], K-nearest neighbours [kNN], random forest [RF], NN) using Orange Data Mining version 3.38.1 (Figures 3A and B). The 6 ML models were selected based on their performance in recently published studies on medical image classification.16,17 The image descriptors were returned as an advanced data table.
Fig. 3.
A, Workflow of the model analysis. B, Screenshot of the program used in the study showing details of the classification model (Orange Data Mining version 3.38.1).
A set of images in classified folders was selected for image clustering, and the folder names were taken as class names. Cross-validation was applied (number of folds: 10), and the accuracy metrics of each ML algorithm were examined.
The study consisted of 3 stages. In the first stage, the classification success of the ML algorithms was examined without selecting a target variable. The target variable is the outcome that the model aims to predict; in the present study, M (presence of MB2) and N (absence of MB2) were the target variables. In the second and third stages, the success rates of the ML algorithms were analysed with the target variables M and N specified, respectively.
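The reason steps 2 and 3 give different numbers for the same predictions is that precision and recall are computed per class: changing the target (positive) class from M to N changes which errors count as false positives versus false negatives. A plain-Python illustration, with invented labels:

```python
# Why the choice of target class matters: the same predictions yield
# different precision/recall for target M vs target N. Labels below are
# illustrative, not study data.
def precision_recall(y_true, y_pred, positive):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


y_true = ["M", "M", "M", "N", "N", "M"]
y_pred = ["M", "M", "N", "N", "M", "M"]
# target M -> precision 0.75, recall 0.75; target N -> precision 0.5, recall 0.5
```

This mirrors Table 2, where the identical models show different precision/recall under target M (step 2) and target N (step 3) although AUC and CA are unchanged.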
Results
Among the 277 scans, 160 showed the presence of an MB2 orifice (M) and the remaining 117 did not (N). The interrater reliability between the 2 examiners for the detection of MB2 was 0.85, indicating almost perfect agreement. The intrarater reliability for examiners 1 and 2 was 0.92 and 0.96, respectively. Classification success rates of the ML algorithms were evaluated under 5 performance criteria: area under the curve (AUC), classification accuracy (CA), F1 score, precision, and recall.
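The interrater figure above is an agreement coefficient of the Cohen's kappa type, which corrects raw agreement for chance. A plain-Python sketch follows; the study reports 0.85 but not the underlying agreement table, so the example ratings are invented for illustration.

```python
# Cohen's kappa for two examiners' M/N calls (illustrative ratings only;
# the study's raw counts are not published).
def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    labels = set(rater1) | set(rater2)
    # chance agreement from each rater's marginal label frequencies
    expected = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                   for c in labels)
    return (observed - expected) / (1 - expected)


r1 = ["M", "M", "N", "N", "M", "N"]
r2 = ["M", "M", "N", "M", "M", "N"]
# agreement on 5 of 6 scans -> kappa = 2/3 after chance correction
```

Values above 0.81 are conventionally read as "almost perfect" agreement, which is the interpretation applied to the 0.85 reported here.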
Table 1 presents the classification metrics of the ML algorithms without specifying any target variable (step 1). NN showed the highest AUC value (0.903), whereas kNN showed the lowest (0.649). When classification accuracy was evaluated, LR showed the highest value (0.841) (Figure 4). Similarly, the highest F1-score (0.841), precision (0.841), and recall (0.841) were also shown by LR. kNN showed the lowest performance metrics among the 6 ML models.
Table 1.
Performance metrics without specifying target variable (cross-validation number of folds: 10).
| Data set | Classifier | AUC | CA | F1 | Precision | Recall |
|---|---|---|---|---|---|---|
| Image data set (277) | LR | 0.885 | 0.841 | 0.841 | 0.841 | 0.841 |
| | NN | 0.903 | 0.838 | 0.837 | 0.837 | 0.838 |
| | SVM | 0.886 | 0.819 | 0.815 | 0.826 | 0.819 |
| | RF | 0.812 | 0.744 | 0.736 | 0.746 | 0.744 |
| | NB | 0.800 | 0.726 | 0.727 | 0.735 | 0.726 |
| | kNN | 0.649 | 0.599 | 0.601 | 0.607 | 0.599 |
AUC, area under the curve; CA, classification accuracy; kNN, K-nearest neighbours; LR, logistic regression; NB, Naïve Bayes; NN, neural network; RF, random forest; SVM, support vector machine.
Fig. 4.
Confusion matrix for logistic regression (showing number of instances)
In the second step, the M category was selected as the target variable and the classification success of the ML algorithms was evaluated (Table 2). When the AUC metric was evaluated, NN (0.896) showed the best performance, closely followed by LR (0.893) and SVM (0.886) (Figure 5). SVM showed the best recall (0.925).
Table 2.
Performance metrics for the image data set (277) with specification of target variable M (step 2) and target variable N (step 3) (cross-validation, number of folds: 10).

| Target | Classifier | AUC | CA | F1 | Precision | Recall |
|---|---|---|---|---|---|---|
| M (step 2) | LR | 0.893 | 0.841 | 0.865 | 0.849 | 0.881 |
| | NN | 0.896 | 0.838 | 0.862 | 0.848 | 0.875 |
| | SVM | 0.886 | 0.819 | 0.855 | 0.796 | 0.925 |
| | RF | 0.809 | 0.744 | 0.797 | 0.735 | 0.869 |
| | NB | 0.810 | 0.726 | 0.748 | 0.796 | 0.706 |
| | kNN | 0.649 | 0.599 | 0.636 | 0.669 | 0.606 |
| N (step 3) | LR | 0.893 | 0.841 | 0.807 | 0.829 | 0.786 |
| | NN | 0.896 | 0.838 | 0.803 | 0.821 | 0.786 |
| | SVM | 0.886 | 0.819 | 0.760 | 0.868 | 0.675 |
| | RF | 0.808 | 0.744 | 0.654 | 0.761 | 0.573 |
| | NB | 0.798 | 0.726 | 0.698 | 0.652 | 0.752 |
| | kNN | 0.649 | 0.599 | 0.554 | 0.523 | 0.590 |
AUC, area under the curve; CA, classification accuracy; kNN, K-nearest neighbours; LR, logistic regression; NB, naïve Bayes; NN, neural network; RF, random forest; SVM, support vector machine.
Fig. 5.
A, Area under the curve–receiver operating characteristic (AUC-ROC) curves of the 6 machine learning (ML) models with specified target variable M. B, AUC-ROC curves of the 6 ML models with specified target variable N.
In the last step, the N category was selected as the target variable and the classification success rates of the ML algorithms are given in Table 2. The highest AUC value was observed with NN (0.896), followed by LR (0.893) and SVM (0.886) (Figure 5). SVM showed the best precision (0.868).
When the AUC values (without specifying target variable) of the ML models were compared using the vassarstats.net program, NN, LR, and SVM showed significantly higher AUC values compared to RF, NB, and kNN (Table 3).
Table 3.
Significance of difference among the AUC values (without specifying target variable) of the ML models used in the study.
| Model 1 | Model 2 | Standard error of difference | z value | P value |
|---|---|---|---|---|
| LR | NN | 0.0268 | -0.6714 | .250983 (NS) |
| LR | SVM | 0.0279 | -0.0359 | .485681 (NS) |
| LR | RF | 0.0321 | 2.2769 | .011396* |
| LR | NB | 0.0327 | 2.6029 | .004622* |
| LR | kNN | 0.0382 | 6.1741 | <.000001* |
| NN | SVM | 0.0267 | 0.6357 | .262486 (NS) |
| NN | RF | 0.0311 | 2.929 | .0017* |
| NN | NB | 0.0317 | 3.2511 | .000575* |
| NN | kNN | 0.0374 | 6.7923 | <.000001* |
| SVM | RF | 0.032 | 2.312 | .010389* |
| SVM | NB | 0.0326 | 2.6379 | .004171* |
| SVM | kNN | 0.0382 | 6.2076 | <.000001* |
| RF | NB | 0.0362 | 0.3312 | .370247 (NS) |
| RF | kNN | 0.0413 | 3.9445 | .00004* |
| NB | kNN | 0.0418 | 3.6136 | .000151* |
AUC, area under the curve; kNN, K-nearest neighbours; LR, logistic regression; ML, machine learning; NB, naïve Bayes; NN, neural network; RF, random forest; SVM, support vector machine.
NS, not significant. *Significant (level of significance P < .05).
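The z statistics in Table 3 follow the usual form for comparing two AUCs: the difference in AUC divided by the standard error of that difference. The sketch below reproduces one row under the assumption that the SE of the difference has already been estimated (vassarstats presumably uses a Hanley–McNeil-style estimate from the underlying data); here the SE is simply taken from Table 3.

```python
# Pairwise AUC comparison as in Table 3, given a precomputed standard
# error of the difference (taken from the table; the underlying
# Hanley-McNeil SE estimation is not reproduced here).
from math import erf, sqrt


def auc_difference_test(auc1, auc2, se_diff):
    z = (auc1 - auc2) / se_diff
    # one-tailed p from the standard normal CDF
    p_one_tailed = 1 - 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return z, p_one_tailed


# NN (0.903) vs RF (0.812), SE 0.0311 -> z ~ 2.93, p ~ .0017 (cf. Table 3)
z, p = auc_difference_test(0.903, 0.812, 0.0311)
```

Matching the tabulated p of .0017 for this pair confirms that the reported P values are one-tailed.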
Discussion
There has been a recent surge in AI-based research and publication in dentistry.18 Accordingly, focus groups have developed guidelines for the reporting of AI studies in dentistry.19,20 Some of these papers have focused on the application of AI in endodontics.21 A wide array of applications has been explored in endodontics, including tooth segmentation and the detection of caries, periapical pathologies, obturation, and C-shaped canals.22,23 AI-based detection of C-shaped canals in mandibular molars is a good comparator for our study; C-shaped canals have been detected in periapical radiographs, panoramic radiographs, and CBCT.24,25
A large percentage of root canal failures related to maxillary first molars have been attributed to undetected MB2 canals.26,27 Studies have revealed that conventional 2-dimensional radiographs are not reliable in detecting multiple canals such as MB2, and thus additional diagnostic modalities are required.28 Over the past few years, the detection of MB2 canals has been aided to a significant extent by CBCT imaging.29
Recently, various ML models have been employed in the identification and classification of dental anatomical variations and pathologies.30,31 In the present study, 6 different ML algorithms were used to classify the images.
Numerical imbalance between the data sets could have influenced the performance metrics in the present study. Therefore, both variables (M and N) were defined as target variables in our model classification. The performance of the models was evaluated without specifying a target variable in the first step, followed by evaluation with the target variables (M and N) specified in the second and third steps. There was no major difference in the success rates of the models across the 3 steps. The impact of data set size on model performance, and possible solutions, has been discussed by Althnian et al.32
Prior to ML-based classification, image embedding was carried out using Inception-v3, the third generation of Google's Inception convolutional neural network (CNN).33,34 Inception-v3 can be used as a base architecture for transfer learning.35,36 The activations of the penultimate layer of the network, which form vector representations of the images, are used for embedding.37 The Image Embedding tool reads images and either uploads them to a remote server or evaluates them locally; deep learning/ML models calculate a feature vector for each image, and the image descriptors are returned as an advanced data table. Image Embedding offers several embedders, each trained for a specific task. When images are evaluated locally, no internet connection is required, and images sent to the server are not stored.38 Inception V3 is an advanced deep architecture widely used in medical image processing. It introduced an innovative block that performs convolution and pooling operations in parallel, enabling multilevel feature extraction for medical image processing.39,40 A recent study using a transfer learning approach with Inception V3 and LR and SVM classifiers showed high specificity and sensitivity for the classification of chest X-rays.41
In our study, NN (0.896) and LR (0.893) showed the highest AUC values with specified target variables. A study reporting CNN-based detection of C-shaped canals in mandibular second molars on panoramic radiographs reported an AUC of 0.98.24 Another study by Yang et al.25 also reported an AUC of 0.98 in periapical radiographs for detecting C-shaped canals. A recent study revealed that LR and NN showed high training and testing accuracy when used for the classification of binary images.41 Another study using LR for the histopathological detection of breast cancer revealed an AUC value of 0.83, which is marginally lower than the AUC values observed in the present study.42 The same study also used SVM for histopathological classification and achieved an AUC of 0.84, which was also marginally lower than the AUC of the SVM model in the present study.
In the present study, the highest precision (86.8%) and recall (92.5%) were observed with the SVM model. A recent study employed an AI architecture similar to ours for the classification of lung ultrasound images for the detection of coronavirus disease 2019 (COVID-19).43 The authors used a combination of pretrained deep learning CNN models with SVM as a supervised classifier.44 Higher precision (99%) and recall (99%) were observed in their study compared to the present study. Similar outcomes were observed in studies using SVM to detect dental caries on radiographs.45 In one such study, the SVM classifier model showed an accuracy of 95.7%, and the researchers also used another classifier, kNN, which showed accuracy comparable to that in our study. Another study using SVM for binary classification of 105 dental radiographic images into carious and noncarious teeth showed an accuracy of 97.3%.45
In the present study, the SVM model showed a precision of 86.8%. In comparison, an SVM model showed a precision of 90% when used for the classification and numbering of posterior teeth on dental radiographs.46 Similar accuracy outcomes were observed in the detection of caries on panoramic radiographs using a multiclass SVM.47
In the first stage of the present study, the classification success of the ML algorithms was examined without selecting a target variable. In the second and third stages, the success rates were analysed, without any changes to the classification rules, by specifying the target variable. The target variable is the variable that the researcher seeks to predict during the classification process.48 In the present study, the presence or absence of MB2 was the target variable.
RF and NB were the other ML models used for the binary classification of axial CBCT images of canal orifices in the present study. The classification accuracies of RF and NB were 74.4% and 72.6%, respectively. When RF was used for tooth identification on panoramic radiographs, the accuracy was 98%.49 Another study using RF for the detection of periodontal bone loss in dental radiographs of 116 patients reported an accuracy of 83%.50 That study concluded that RF showed higher accuracy with a small sample size, a situation similar to the present study. Modified RF models have also been used for dental radiographic image segmentation and cephalometric evaluation.51
In the present study, the classification of images was performed with unsupervised ML algorithms. To alleviate the difficulty caused by the scarcity of annotated examples and the laborious annotation process, a large number of studies based on unsupervised learning have been developed. Unsupervised classification aims to create groups or clusters with similar values in large data sets and then determine class membership by taking samples from these clusters.52,53
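The cluster-then-label idea described above can be illustrated in a few lines: cluster unlabelled feature vectors, then name each cluster from one or two labelled samples. The data below are synthetic stand-ins for image embeddings; no study data are reproduced.

```python
# Illustrative cluster-then-label sketch on synthetic "embedding" clouds
# standing in for M-type and N-type images (assumption: real embeddings
# would come from the Inception V3 step).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
m_like = rng.normal(loc=0.0, scale=0.5, size=(60, 8))   # one class cloud
n_like = rng.normal(loc=3.0, scale=0.5, size=(40, 8))   # the other cloud
X = np.vstack([m_like, n_like])

# cluster without labels, then name clusters from one known sample each
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
m_cluster = km.predict(m_like[:1])[0]
n_cluster = km.predict(n_like[:1])[0]
```

With well-separated embeddings, a handful of labelled samples suffices to name the clusters, which is the appeal of this approach when annotation is scarce.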
One of the unique features of our study is that we have used axial CBCT images for ML-based classification for the presence of MB2. Most of the previous dental radiographic studies were based on intraoral and panoramic radiographs for detection of C-shaped canals.24,25
One of the limitations of our study is the small data set of high-resolution, small field-of-view CBCT scans. A future study involving multiple teaching hospitals could be planned to obtain a larger data set of high-resolution, small field-of-view CBCT scans with and without the MB2 canal orifice. The present study can provide a base model for developing deep learning models that detect MB2 canals in 3D CBCT scans.
Conclusion
The success rates (AUC, precision, recall) of the ML algorithms in the detection of MB2 were remarkable in our study. When the target variable was specified, high success rates such as 86.8% in precision and 92.5% in recall were achieved. The present study showed promising results in the ML-based detection of MB2 canals using axial CBCT slices. The methodology of the study can serve as a foundation for creating an ML model for the detection of MB2 in 3D CBCT scans, thereby aiding the dental practitioner in the accurate identification of MB2 canals and improving treatment outcomes.
Conflict of interest
None disclosed.
Acknowledgments
Author contributions
Shetty: Protocol/project development, Data analysis, Manuscript writing/editing. Yuvali: data collection or management. Ozsahin: data collection or management. Al-Bayatti: data collection or management. Narasimhan: data collection or management. Mohammed Alsaegh: data collection or management. Al-Daghestani: data collection or management. Shetty: data collection or management. Castelino: data collection or management. Ozsahin: Protocol/project development, Data analysis, Manuscript writing/editing. David: Protocol/project development, Data analysis, Manuscript writing/editing.
Ethics statement
All methods were carried out in accordance with relevant guidelines and regulations. All experimental protocols were approved by the Research Ethics Committee Ref. no. REC-23-10-12-01-F (University of Sharjah).
Consent to participate
Informed written consent was obtained from all subjects involved in the study.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Availability of data and materials
References
- 1.Tabassum S., Khan FR. Failure of endodontic treatment: the usual suspects. Eur J Dent. 2016;10(1):144–147. doi: 10.4103/1305-7456.175682. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Baruwa A.O., Martins J.N.R., Meirinhos J., et al. The influence of missed canals on the prevalence of periapical lesions in endodontically treated teeth: a cross-sectional study. J Endod. 2020;46(1) doi: 10.1016/j.joen.2019. 34-39. e1. [DOI] [PubMed] [Google Scholar]
- 3.Shetty H., Sontakke S., Karjodkar F., et al. A Cone Beam Computed Tomography (CBCT) evaluation of MB2 canals in endodontically treated permanent maxillary molars. A retrospective study in Indian population. J Clin Exp Dent. 2017;9(1) doi: 10.4317/jced.52716. e51-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Pattanshetti N., Gaidhane M., Al Kandari A.M. Root and canal morphology of the mesiobuccal and distal roots of permanent first molars in a Kuwait population–a clinical study. Int Endod J. 2008;41(9):755–762. doi: 10.1111/j.1365-2591.2008.01427.x. [DOI] [PubMed] [Google Scholar]
- 5.Weine F.S., Healey H.J., Gerstein H., et al. Canal configuration in the mesiobuccal root of the maxillary first molar and its endodontic significance. Oral Surg Oral Med Oral Pathol. 1969;28(3):419–425. doi: 10.1016/0030-4220(69)90237-0. [DOI] [PubMed] [Google Scholar]
- 6.Alnowailaty Y., Alghamdi F. The prevalence and location of the second mesiobuccal canals in maxillary first and second molars assessed by cone-beam computed tomography. Cureus. 2022;14(5):e24900. doi: 10.7759/cureus.24900. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Shen Y., Gu Y. Assessment of the presence of a second mesiobuccal canal in maxillary first molars according to the location of the main mesiobuccal canal-a micro-computed tomographic study. Clin Oral Investig. 2021;25(6):3937–3944. doi: 10.1007/s00784-020-03723-5. [DOI] [PubMed] [Google Scholar]
- 8.Vertucci FJ. Root canal morphology and its relationship to endodontic procedures. Endod Top. 2005; 10:3–29. 10.1111/j.1601-1546.2005.00129.x [DOI]
- 9.Onn H.Y., Sikun MSYA, Abdul Rahman H., et al. Prevalence of mesiobuccal-2 canals in maxillary first and second molars among the Bruneian population-CBCT analysis. BDJ Open. 2022;8(1):32. doi: 10.1038/s41405-022-00125-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Lee S.J., Lee E.H., Park S.H., et al. A cone-beam computed tomography study of the prevalence and location of the second mesiobuccal root canal in maxillary molars. Restor Dent Endod. 2020;45:1–8. doi: 10.5395/rde.2020.45.e46. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Khurshid Z., Waqas M., Hasan S., et al. Deep learning architecture to infer Kennedy Classification of partially edentulous arches using object detection techniques and piecewise annotations. Int Dent J. 2025;75(1):223–235. doi: 10.1016/j.identj.2024.11.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Shetty S.R., Reddy S., Shetty R.M., et al. Machine learning in maxillofacial radiology: a review. J Datta Meghe Institute Med Sci Univ. 2021;16:794–796. doi: 10.4103/jdmimsu.jdmimsu_303_20. [DOI] [Google Scholar]
- 13.Chippalakatti S, Renumadhavi CH, Pallavi A. Comparison of Unsupervised Machine Learning Algorithm F or Dimensionality Reduction. 2022 International Conference on Knowledge Engineering and Communication Systems (ICKES), Chickballapur, India, 2022, pp. 1-7, 10.1109/ICKECS56523.2022.10060625.
- 14.Albitar L., Zhao T., Huang C., et al. Artificial intelligence (AI) for detection and localization of unobturated second Mesial Buccal (MB2) canals in Cone-Beam Computed Tomography (CBCT) Diagnostics (Basel) 2022;12(12):3214. doi: 10.3390/diagnostics12123214. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Normando P.C., Santos J.D., Akisue E., et al. Location of the second mesiobuccal canal of maxillary molars in a Brazilian subpopulation: analyzing using Cone-Beam Computed Tomography. J Contemp Dent Pract. 2022;23(10):979–983. doi: 10.5005/jp-journals-10024-3422. [DOI] [PubMed] [Google Scholar]
- 16.Atban F., Ekinci E., Garip Z. Traditional machine learning algorithms for breast cancer image classification with optimized deep features. Biomed Sig Proc Cntrl. 2023;81 doi: 10.1016/j.bspc.2022.104534. [DOI] [Google Scholar]
- 17.Ramesh Kumar P, Vijaya A. Naïve Bayes Machine Learning Model for image classification to assess the level of deformation of thin components. Mater Today: Proc. 2022;68:2265–2274. doi: 10.1016/j.matpr.2022.08.489. [DOI] [Google Scholar]
- 18.Dashti M., Ghasemi S., Khurshid Z. Role of artificial intelligence in oral diagnosis and dental treatment. Eur J Gen Dent. 2023;12(03):135–137. doi: 10.1055/s-0043-1772565. [DOI] [Google Scholar]
- 19.Schwendicke F., Singh T., Lee J.H., et al. Artificial intelligence in dental research: checklist for authors, reviewers, readers. J Dent. 2021;107 doi: 10.1016/j.jdent.2021.103610. [DOI] [Google Scholar]
- 20.Tejani A.S., Klontzas M.E., Gatti A.A., et al. CLAIM 2024 Update Panel. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 update. Radiol Artif Intell. 2024;6(4) doi: 10.1148/ryai.240300. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Setzer F.C., Li J., Khan A.A. The use of artificial intelligence in endodontics. J Dent Res. 2024;103(9):853–862. doi: 10.1177/00220345241255593. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Asgary S. Artificial intelligence in endodontics: a scoping review. Iran Endod J. 2024;19(2):85–98. doi: 10.22037/iej.v19i2.44842. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Ourang S.A., Sohrabniya F., Mohammad-Rahimi H., et al. Artificial intelligence in endodontics: fundamental principles, workflow, and tasks. Int Endod J. 2024;57(11):1546–1565. doi: 10.1111/iej.14127. [DOI] [PubMed] [Google Scholar]
- 24.Jeon S.J., Yun J.P., Yeom H.G., et al. Deep-learning for predicting C-shaped canals in mandibular second molars on panoramic radiographs. Dentomaxillofac Radiol. 2021;50(5) doi: 10.1259/dmfr.20200513. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Yang S., Lee H., Jang B., et al. Development and validation of a visually explainable deep learning model for classification of C-shaped canals of the mandibular second molars in periapical and panoramic dental radiographs. J Endod. 2022;48(7):914–921. doi: 10.1016/j.joen.2022.04.007. [DOI] [PubMed] [Google Scholar]
- 26.Betancourt P., Navarro P., Muñoz G., et al. Prevalence and location of the secondary mesiobuccal canal in 1,100 maxillary molars using cone beam computed tomography. BMC Med Imaging. 2016;16(1):66. doi: 10.1186/s12880-016-0168-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Blattner T.C., George N., Lee C.C., et al. Efficacy of cone-beam computed tomography as a modality to accurately identify the presence of second mesiobuccal canals in maxillary first and second molars: a pilot study. J Endod. 2010;36(5):867–870. doi: 10.1016/j.joen.2009.12.023. [DOI] [PubMed] [Google Scholar]
- 28.Nattress B.R., Martin DM. Predictability of radiographic diagnosis of variations in root canal anatomy in mandibular incisor and premolar teeth. Int Endod J. 1991;24(2):58–62. doi: 10.1111/j.1365-2591.1991.tb00808.x. [DOI] [PubMed] [Google Scholar]
- 29.Ríos-Osorio N., Quijano-Guauque S., Briñez-Rodríguez S., et al. Cone-beam computed tomography in endodontics: from the specific technical considerations of acquisition parameters and interpretation to advanced clinical applications. Restor Dent Endod. 2023;49(1):e1. doi: 10.5395/rde.2024.49.e1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Arsiwala-Scheppach L.T., Chaurasia A., Müller A., et al. Machine learning in dentistry: a scoping review. J Clin Med. 2023;12(3):937. doi: 10.3390/jcm12030937. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Lin H., Chen J., Hu Y., et al. Embracing technological revolution: a panorama of machine learning in dentistry. Med Oral Patol Oral Cir Bucal. 2024;29(6):e742–e749. doi: 10.4317/medoral.26679. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Althnian A., AlSaeed D., Al-Baity H., et al. Impact of dataset size on classification performance: an empirical evaluation in the medical domain. Appl Sci. 2021;11(2):796. doi: 10.3390/app11020796. [DOI] [Google Scholar]
- 33.Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV. 2016. p. 2818–26. 10.1109/CVPR.2016.308. [DOI]
- 34.Lee D.W., Kim S.Y., Jeong S.N., et al. Artificial intelligence in fractured dental implant detection and classification: evaluation using dataset from two dental hospitals. Diagnostics (Basel) 2021;11(2):233. doi: 10.3390/diagnostics11020233. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Egevad L., Swanberg D., Delahunt B., et al. Identification of areas of grading difficulties in prostate cancer and comparison with artificial intelligence assisted grading. Virchows Arch. 2020;477(6):777–786. doi: 10.1007/s00428-020-02858-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Tiwari RG, Misra A, Ujjwal N. Image embedding and classification using Pre-Trained Deep Learning Architectures. 2022 8th International Conference on Signal Processing and Communication (ICSC), 2022 pp. 125–130. 10.1109/icsc56524.2022.10009560.
- 37.Szegedy C., Vanhoucke V., Ioffe S., et al. Rethinking the inception architecture for computer vision. arXiv.org. 2015 https://arxiv.org/abs/1512.00567 Available at: Accessed: 28 December, 2024. [Google Scholar]
- 38.Udendhran R., Balamurugan M., Suresh A., et al. Enhancing image processing architecture using deep learning for embedded vision systems. Microprocess Microsyst. 2020;76. doi: 10.1016/j.micpro.2020.103094.
- 39.Shah S.R., Qadri S., Bibi H., et al. Comparing Inception V3, VGG 16, VGG 19, CNN, and ResNet 50: a case study on early detection of a rice disease. Agronomy. 2023;13(6):1633. doi: 10.3390/agronomy13061633.
- 40.Wang C., Chen D., Hao L., et al. Pulmonary image classification based on Inception-v3 transfer learning model. IEEE Access. 2019;7:146533–146541. doi: 10.1109/ACCESS.2019.2946000.
- 41.Vanlalhruaia, Singh Y.K., Singh N.D. Binary face image recognition using logistic regression and neural network. 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), Chennai, India, 2017, pp. 3883–3888. doi: 10.1109/ICECDS.2017.8390191.
- 42.Al-Jumaili S., Duru A.D., Uçan O.N. COVID-19 ultrasound image classification using SVM based on kernels deduced from convolutional neural network. 2021 5th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Ankara, Turkey, 2021, pp. 429–433. doi: 10.1109/ISMSIT52890.2021.9604551.
- 43.Jusman Y., Anam M.K., Puspita S., et al. Comparison of dental caries level images classification performance using KNN and SVM methods. 2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Kuala Terengganu, Malaysia, 2021, pp. 167–172. doi: 10.1109/ICSIPA52582.2021.9576774.
- 44.Virupaiah G., Sathyanarayana A.K. Analysis of image enhancement techniques for dental caries detection using texture analysis and support vector machine. Int J Appl Sci Eng. 2020;17(1):75–86. doi: 10.6703/IJASE.202003_17(1).075.
- 45.Arifin A.Z., Hadi M., Yuniarti A., et al. Classification and numbering on posterior dental radiography using support vector machine with mesiodistal neck detection. The 6th International Conference on Soft Computing and Intelligent Systems, and the 13th International Symposium on Advanced Intelligence Systems, Kobe, 2012, pp. 432–435. doi: 10.1109/SCIS-ISIS.2012.6505362.
- 46.Putra R.P., Rahman A.Y. Classifying dental caries types using panoramic dental images using watershed method and multiclass support vector machine. Appl Technol Comp Sci J. 2023;6(2):143–151. doi: 10.33086/atcsj.v6i2.5910.
- 47.Tal E. Target specification bias, counterfactual prediction, and algorithmic fairness in healthcare. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES '23), August 8–10, 2023, Montreal, Canada. ACM, New York, NY, USA. doi: 10.1145/3600211.3604678.
- 48.Singh S.B., Laishram A., Thongam K., et al. A random forest-based automatic classification of dental types and pathologies using panoramic radiography images. J Sci Ind Res. 2025;83:531–543. doi: 10.56042/jsir.v83i5.3994.
- 49.Sameer S., Karthik J., Teja M.S., et al. Estimation of periodontal bone loss using SVM and random forest. 2023 2nd International Conference for Innovation in Technology (INOCON), Bangalore, India, 2023, pp. 1–7. doi: 10.1109/INOCON57975.2023.10101155.
- 50.Wang C.W., Huang C.T., Lee J.H., et al. A benchmark for comparison of dental radiography analysis algorithms. Med Image Anal. 2016;31:63–76. doi: 10.1016/j.media.2016.02.004.
- 51.Mei S., Ji J., Geng Y., et al. Unsupervised spatial–spectral feature learning by 3D convolutional autoencoder for hyperspectral classification. IEEE Trans Geosci Remote Sens. 2019;57(9):6808–6820.
- 52.Zhang S., Xu M., Zhou J., et al. Unsupervised spatial–spectral CNN-based feature learning for hyperspectral image classification. IEEE Trans Geosci Remote Sens. 2022;60:1–17. doi: 10.1109/TGRS.2022.3153673.
- 53.Lichy A., Bader O., Dubin R., et al. When a RF beats a CNN and GRU, together—a comparison of deep learning and classical machine learning approaches for encrypted malware traffic classification. Comput Secur. 2023;124. doi: 10.1016/j.cose.2022.103000.