Table 5. Summary of deep learning methods and reported results for dental disease detection and segmentation.
Authors and Year | Methods | Results | Authors' Suggestions/Conclusions |
---|---|---|---|
Prajapati et al., (2017) [16] | Transfer learning with VGG16 pre-trained model | Accuracy = 88.46% | Transfer learning with the VGG16 pre-trained model achieved better accuracy. |
Lee et al., (2018) [56] | Pre-trained GoogLeNet Inception v3 network | Accuracy = 89% (premolar), 88% (molar), and 82% (combined premolar-molar region) | Deep CNN algorithms are anticipated to be among the most effective and productive techniques for diagnosing dental caries. |
Vinayahalingam et al., (2021) [57] | CNN MobileNet V2 | Accuracy = 0.87, sensitivity = 0.86, specificity = 0.88, AUC = 0.90 | This method forms a promising foundation for the further development of automatic third molar removal assessment. |
Choi et al., (2018) [63] | Customized CNN | F1max = 0.74, FPs = 0.88 | This system can be used to detect proximal dental caries in periapical radiographs. |
Lee et al., (2021) [65] | Deep CNN (U-Net) | Precision = 63.29%, recall = 65.02%, F1-score = 64.14% | Clinicians should not rely wholly on AI-based dental caries detection results, but should instead use them only for reference. |
Yang et al., (2018) [67] | Customized CNN | F1-score = 0.749 | The method does not perform reliably on images of molars. |
Lee et al., (2018) [68] | Pre-trained deep CNN (VGG-19) and self-trained network | Premolars (accuracy = 82.8%), molars (accuracy = 73.4%) | Using a low-resolution dataset can reduce the accuracy of the diagnosis and prediction of PCT. |
Al Kheraif et al., (2019) [69] | Hybrid graph-cut technique and CNN | Accuracy = 97.07% | The deep learning system with a convolutional neural network effectively recognizes dental disease. |
Murata et al., (2019) [70] | Customized AlexNet CNN | Accuracy = 87.5%, sensitivity = 86.7%, specificity = 88.3%, AUC = 0.875 | The AI model can be a supporting tool for inexperienced dentists. |
Krois et al., (2019) [72] | Custom-made CNN | Accuracy = 0.81, sensitivity = 0.81, specificity = 0.81 | ML-based models could reduce the diagnostic effort required of clinicians. |
Zhao et al., (2020) [77] | Customized two-stage attention segmentation network | Accuracy = 96.94%, Dice = 92.72%, recall = 93.77% | Inaccurate pixel segmentation can cause the foreground image to be improperly divided into tooth regions. |
Fariza et al., (2020) [78] | U-Net convolution network | Accuracy = 97.61% | Segmentation with the proposed U-Net convolution network results in fast segmentation and smooth image edges. |
Lakshmi and Chitra, (2020) [79] | Sobel edge detection with deep CNN | Accuracy = 96.08% | Sobel edge detection with deep CNN is efficient for cavity prediction compared to other methods. |
Khan et al., (2021) [80] | U-Net + DenseNet121 | mIoU = 0.501, Dice coefficient = 0.569 | DL can be a viable option for segmentation of caries, ABR, and IRR in dental radiographs. |
Moran et al., (2020) [81] | Pre-trained ResNet and an Inception model | Accuracy = 0.817, precision = 0.762, recall = 0.923, specificity = 0.711, negative predictive value = 0.902 | Clinically, the examined CNN model can aid in the diagnosis of periodontal bone deterioration during periapical examinations. |
Chen et al., (2021) [82] | Customized Faster R-CNN | Precision = 0.5, recall = 0.6 | Very small disease lesions may not be detectable by Faster R-CNN. |
Lin and Chang, (2021) [84] | ResNet | Accuracy = 93.33% | In the second stage, endodontic therapy is the most vulnerable to incorrect labeling. |
Zhang et al., (2022) [85] | Customized multi-task CNN | Precision = 0.951, recall = 0.955, F-score = 0.953 | The method can provide reliable and comprehensive diagnostic support for dentists. |
Yu et al., (2020) [91] | Customized ResNet50-FPN | Accuracy = 95.25%, sensitivity = 89.83%, specificity = 96.10% | The system implements caries detection only for the first permanent molar, not for all teeth. |
Rana et al., (2017) [92] | Customized CNN | AUC = 0.746, precision = 0.347, recall = 0.621 | Dental professionals and patients can benefit from automated point-of-care early diagnosis of periodontal diseases. |
Tanriver et al., (2021) [94] | Multiple pre-trained neural networks; EfficientNet-b4 architecture | Sensitivity = 89.3%, precision = 86.2%, F1 = 85.7% | The suggested model shows significant promise as a low-cost, noninvasive tool to aid in screening procedures and enhance OPMD identification. |
Schlickenrieder et al., (2021) [95] | Pre-trained ResNeXt-101 (32x8d) | Accuracy = 98.7%, AUC = 0.996 | More training is needed in AI-based detection, classification of common and uncommon dental disorders, and all types of restorations. |
Takahashi et al., (2021) [96] | YOLO v3 and SSD | mAP = 0.80, mIoU = 0.76 | This method showed limited accuracy in identifying tooth-colored prostheses. |