Table 3.
| Study factor | Author/year/ML class | Image type | Feature extractor/Pre-processing method | Algorithm architecture | Evaluation | Findings |
|---|---|---|---|---|---|---|
| Human bite marks | Mahasantipiya et al. (2012)** | Bite marks obtained from dental casts and captured by a digital camera | Manual measurements on the binary image | Multi-layer feed-forward NN | Mean Squared Error (MSE), Accuracy | The average accuracy was approximately 82%. |
| | Molina et al. (2022)**** | Bite marks obtained from dental casts and scanned by a 3D scanner | Manual measurements by human experts using Blueprint© software | N/A | ROC, AUC, ICC, Sensitivity, Specificity | Excellent inter-rater reliability (ICC > 0.95); the highest area under the ROC curve was obtained for the Euclidean distance of lower teeth rotation (AUC = 0.73). |
| Sex determination | Akkoç et al. (2017)* | Maxillary tooth plaster images | Gray Level Co-occurrence Matrix (GLCM) | RF Algorithm, SVM, ANN, Naive Bayesian, kNN | Classification Accuracy, Sensitivity, Specificity, ROC, AUC | The RF algorithm outperformed the other ML algorithms with a 90% success rate. |
| | Akkoç et al. (2016)* | Maxillary tooth plaster images | Discrete Cosine Transform (DCT) | RF Algorithm | Classification Accuracy, Sensitivity, Specificity, AUC | The average classification accuracy was 85.166%, while the area under the ROC curve was 91.75%. |
| | Patil et al. (2018)** | Panoramic radiographs | Manual morphometric measurement by human expert using Digimizer image analysis software | Feed-forward NN with backpropagation, Logistic regression | MSE, Mean Absolute Error (MAE), Coefficient of Determination (R2), R, Least Mean Square Error (LMSE), ROC, Sensitivity, Specificity | The overall accuracy of discriminant analysis was 69.1%, of logistic regression 69.9%, and of the ANN 75%. |
| | Ortiz et al. (2020)** | Panoramic radiographs | Manual morphometric measurement by human expert | Logistic regression, ANN, Naive Bayesian, kNN | Discriminant Analysis, Training and Testing Accuracy | Based on the discriminant function, accuracy was 68.00% for females and 74% for males. Based on predictive analysis, the kNN model (0.937) and the ANN (0.992) exhibited the best accuracy during the training phase, while during testing the ANN (0.891) outperformed the others. |
| | Esmaeilyfard et al. (2021)** | First molar teeth in Cone Beam Computed Tomography images | Manual morphometric measurement by human expert | RF Algorithm, SVM, Naive Bayesian | Accuracy, Sensitivity, Precision, Specificity, ROC, AUC | Naive Bayesian was the best tool for sex classification, with an accuracy of 92.31%. |
| | Liang et al. (2021)*** | Panoramic radiographs | Mask R-CNN | ResNet34, Inception-ResNet | Mean Average Precision (mAP) | The proposed method surpassed all existing approaches, obtaining up to 59.62% mAP and 50.57% rank-1 accuracy. |
| | Milošević et al. (2021)*** | Panoramic radiographs | DenseNet201, InceptionResNetV2, ResNet50, VGG16, VGG19, and Xception | A customized model consisting of a single 1x1 convolutional layer after feature extraction, followed by a fully connected layer | Model Accuracy (two models were built: a family of models specialized for certain tooth types and a general model that assesses sex from any tooth type) | The general model achieved an overall accuracy of 72.68%, while the specialized models achieved an overall accuracy of 72.4%. |
| | Nithya and Sornam (2022)*** | Panoramic radiographs | N/A | Five convolutional layers, including a fully connected layer in the final layer | Training Accuracy | The proposed CNN model exhibited better training accuracy (95%) than the pre-trained VGG16 model. |
| | Franco et al. (2022)*** | Panoramic radiographs | ROI extracted by human experts using the Darwin V7 software package | DenseNet121 with two learning approaches: from scratch and transfer learning | Model Accuracy, Classification Accuracy, ROC, AUC | Transfer learning (82%) outperformed the from-scratch architecture (71%). Females and males aged ≥15 years were correctly classified at 87% and 84%, respectively, while females and males aged <15 years were classified at 80% and 83%, respectively. |
| Age Estimation | De Tobel et al. (2017)*** | Panoramic radiographs | Linear and Quadratic Discriminant Analysis, Decision Trees, SVM, kNN, Ensemble Classifiers | AlexNet | Rank-N RR, Mean Absolute Difference (MAD), Mean Linearly Weighted Kappa, ICC | The mean accuracy (Rank-1 RR) was 0.51, the mean absolute difference was 0.6 stages, the mean linearly weighted kappa was 0.82, and the mean ICC was 0.95. The method appears effective, as the automated pilot approach used to stage the development of the lower third molar on panoramic radiographs resembled staging performed by human observers. |
| | Merdietio Boedi et al. (2020)*** | Panoramic radiographs | ROIs were cropped using Adobe Photoshop CC 2018 and segmented using built-in tools; images were then grouped into three types: bounding box (BB), rough segmentation (RS), and full tooth segmentation (FS) | DenseNet201 | Accuracy, MAD, Cohen's Kappa | The FS dataset increased stage-allocation accuracy by 7% compared with BB. DenseNet201 was superior to AlexNet, improving stage-allocation accuracy by 3%. |
| | Banar et al. (2020)*** | Panoramic radiographs | Object detection: YOLO-like CNN architecture; object segmentation: U-Net-like CNN architecture | DenseNet201 | Accuracy, MAE, Dice, Linearly Weighted Kappa | The fully automated method for stage classification performed worse than the semi-automatic approach proposed by Merdietio Boedi et al. (2020), with a stage-classification accuracy of 54%, an MAE of 0.69 stages, and a linearly weighted kappa of 0.79. |
| | Fan et al. (2020)*** | Panoramic radiographs | ROIs consisting of five landmarks selected according to forensic experience | Customized CNN model: DENT-net | Recognition accuracy, false match rate (FMR), equal error rate (EER), AUC | Rank-1 and Rank-5 accuracies of 85.16% and 97.74% were achieved, respectively. The AUC of DENT-net was 0.996. |
| | Matsuda et al. (2020)*** | Panoramic radiographs | N/A | VGG16, ResNet50, Inception V3, InceptionResNet-V2, Xception, and MobileNet-V2 | Accuracy | The VGG16 model achieved the highest accuracy (100.0%) with pretraining and fine-tuning. |
| | Lai et al. (2020)*** | Panoramic radiographs | A histogram equalization algorithm was adopted to adjust the brightness of the images | Customized CNN model: LCANet | Recognition accuracy | Rank-1 and Rank-5 accuracies of 87.21% and 95.34% were achieved, respectively. |
| | Kim et al. (2021)*** | Panoramic radiographs | ROIs consisting of the maxillary and mandibular first molars of the right and left sides, manually extracted by a human observer | ResNet152 | Accuracy, AUC | The accuracy of tooth-wise estimation was 89.05–90.27%. AUC scores ranged between 0.94 and 0.98 for all age groups, indicating excellent discriminative ability. |
| | Upalananda et al. (2021)*** | Panoramic radiographs | Manual cropping was done by an expert on each developmental-stage image of the mandibular third molar | GoogLeNet | Accuracy, Sensitivity, Specificity | The overall accuracy of the method was 82.5%, and the accuracy at each developmental stage ranged from 87.5% to 97.5%. The proposed approach, which used GoogLeNet to classify developmental stages, is similar to an earlier study on dental caries detection. |
| | Lee et al. (2020)*** | Panoramic radiographs | Annotation of each tooth in the maxillae and mandibles was performed manually by an expert | Mask R-CNN | F1-Score, Mean Intersection over Union (IoU) | The proposed method achieved a mean IoU of 0.877 and an F1-score of 0.875 (precision: 0.858, recall: 0.893). In addition, visual examination of the segmentation results showed that they closely matched the ground truth. |
| | Kahaki et al. (2020)*** | Panoramic radiographs | Projection-based transformation | Deep CNN with 5 convolutional layers and 2 fully connected layers | Model Accuracy | The analysis showed that the method identifies images reliably, enabling automated age estimation with an accuracy of 81.83%. |
| | Mohammad et al. (2021)*** | Panoramic radiographs | Dynamic Programming-Active Contour | AlexNet | Dice, Jaccard, ME, F-Score | The overall performance of the proposed classification approach for staging premolar development on panoramic radiographs was superior to the conventional method. |
| | Mohammad et al. (2022)*** | Panoramic radiographs | Dynamic Programming-Active Contour | From scratch | Accuracy; Training, Validation, and Testing Accuracy; Kappa Value | On the training, validation, and testing sets, the accuracy of the proposed model was 97.74%, 96.63%, and 78.13%, respectively. Although only moderate agreement (Kappa = 0.58) was achieved, no sign of over- or under-fitting was observed during the learning process. |
| | Milošević et al. (2021)*** | Panoramic radiographs | DenseNet201, InceptionResNetV2, ResNet50, VGG16, VGG19, and Xception | A customized model consisting of a single 1x1 convolutional layer after feature extraction, followed by a fully connected layer | R2, MAE, Model Accuracy | The fully automated DL model for complete panoramic radiographs achieved a mean absolute error of 3.96 years, a median absolute error of 2.95 years, and an R2 of 0.8439. |
| Dental comparison | Mahdi et al. (2020)*** | Panoramic radiographs | Manual annotation by expert dentist | Transfer learning with ResNet50 and ResNet101 | F1-score, Accuracy, Precision, Recall | The average F1-score obtained was greater than 0.97; the authors therefore suggested that the proposed model could be a useful and reliable tool to assist dentists in their work. |
| | Chen et al. (2019)*** | Digital dental periapical films | Manual annotation by expert dentist | Faster R-CNN with Inception ResNet v2 | Mean Average Precision (mAP), IoU, Precision, Recall | Precision and recall were both greater than 90%, and the mean IoU between detected boxes and ground truth was greater than 91%. |
| | Miki et al. (2017)*** | Dental Cone-Beam CT images | Manual cropping by experts | CNN with AlexNet | Classification accuracy, Detection rate | The accuracy of tooth detection was 77.4%, with an average of 5.8 false detections per image. According to the authors, the results show the potential utility of the proposed method for automatic recording of dental information. |
| | Choi et al. (2022)*** | Panoramic radiographs | Manual annotation by an oral and maxillofacial radiologist using a fully web-browser-based labeling system developed by Digital Dental Hub (Seoul, Korea) | EfficientDet-D3, EfficientNet-B3 | IoU, Precision, Recall | Natural teeth had an average precision of 99.1%, prostheses 80.6%, treated root canals 81.2%, and implants 96.8%. |
* ML algorithm, ** ANN, *** Deep Neural Network, **** Computational Technology.
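
The studies marked * and ** in Table 3 share a classical pattern: hand-crafted texture or morphometric features are fed into a conventional classifier. The snippet below is a minimal, purely illustrative sketch of that pattern (GLCM texture features with a Random Forest, in the spirit of Akkoç et al.); the synthetic images and labels, the GLCM distances and angles, and the forest size are placeholder assumptions, not any study's published configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def glcm_features(image_u8: np.ndarray) -> np.ndarray:
    """Texture descriptors from an 8-bit grayscale image via a Gray Level Co-occurrence Matrix."""
    glcm = graycomatrix(
        image_u8,
        distances=[1, 2],                                   # illustrative pixel offsets
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],    # four standard directions
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Placeholder data standing in for cropped plaster-model images and sex labels.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)  # 0 = female, 1 = male (hypothetical coding)

X = np.vstack([glcm_features(img) for img in images])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5, scoring="accuracy").mean())
```

In a real workflow, the synthetic arrays would be replaced by cropped, grayscale-quantized images of the dental casts or radiographs and their corresponding labels.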
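
The studies marked *** typically fine-tune an ImageNet-pretrained CNN on radiograph crops. The sketch below illustrates that transfer-learning setup with a DenseNet121 backbone, one of the architectures listed in Table 3; the two-class head, frozen feature layers, learning rate, batch, and input size are illustrative assumptions rather than the settings reported in any of the cited papers.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g., female/male; a stage-classification model would use more classes

# ImageNet-pretrained backbone with a new classification head
# (the weights API requires torchvision >= 0.13).
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

# Freeze the convolutional features so only the new head is trained
# (one common choice; several cited studies instead fine-tune the whole network).
for param in model.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Placeholder batch standing in for preprocessed radiograph crops (N, 3, 224, 224).
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```

Freezing the feature extractor is only one option; full fine-tuning, as compared against training from scratch by Franco et al., simply requires passing model.parameters() to the optimizer instead.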