Saudi Dent J. 2022 Apr 25;34(4):270–281. doi: 10.1016/j.sdentj.2022.04.004

Table 2.

Characteristics of studies included in the systematic review.

Each study entry below lists: authors (year); country; sample size and characteristics; outcome evaluated; study objective; methodology; type of AI network; study parameter(s); main findings; study limitations; and future directions.

Saghiri et al. (2012a), Iran
Sample size and characteristics: 50 single-rooted extracted teeth (mandibular incisors and second premolars) placed within extraction sockets of a dried skull.
Outcome evaluated: Locating the anatomic position of the minor apical foramen.
Study objective: "To develop a new approach for locating the minor apical foramen using feature-extracting procedures from radiographs and then processing data using ANN as a decision making system."
Methodology: Following access cavity preparation, a file was placed and a radiograph was taken to evaluate the location of the file in relation to the minor apical foramen; the location was checked again after retrieving the tooth from the alveolar socket. Two endodontists assessed the position of the files on the radiographs and then visualized the root apices under a stereomicroscope, which was considered the gold standard.
Type of AI network: ANN (multilayer perceptron model).
Study parameter(s): Accuracy of identifying the minor apical foramen.
Main findings: ANN analysis of the radiographic images located the foramen correctly in 93% of the samples, based on false rejection and false acceptance error methods. Data obtained from the endodontists and the ANN differed significantly (p < 0.01). Compared with stereomicroscopic examination for determining the anatomic position of the minor apical foramen, the ANN was more accurate (96%) than the endodontists (76%).
Study limitations: Small sample size.
Future directions: Larger studies with bigger sample sizes.

Saghiri et al. (2012b), Iran
Study objective: "To evaluate the accuracy of the ANN to simulate the clinical situation of working length determination."

Miki et al. (2017a), Japan
Sample size and characteristics: 52 CBCT volumes divided into training (n = 42) and test (n = 10) cases.
Outcome evaluated: Identification and classification of teeth from CBCT for forensic applications.
Study objective: "To investigate the application of a DCNN for classifying tooth types on dental CBCT images and to automate the dental filing process using dental x-ray images."
Methodology: 52 CBCT datasets with a field of view ranging from 51 to 200 mm and voxel resolution in the range of 0.1–0.39 mm were used in a DCNN for identifying and classifying 7 types of teeth (central incisors, lateral incisors, canines, first premolars, second premolars, first molars and second molars). The 52 volumes were randomly divided into training (n = 42) and test (n = 10) datasets and evaluated for possible application in automated filing of post-mortem dental charts.
Type of AI network: DCNN, as a component of automated dental charting.
Study parameter(s): Tooth classification accuracy.
Main findings: The average classification accuracy using training data augmented by CBCT image rotation and intensity transformation was 88.8%.
Study limitations: Limited number of CBCT datasets.
Future directions: Classification accuracy could be improved by combining the results for a tooth or by applying 3D convolution.

Lee et al. (2018a), South Korea
Sample size and characteristics: 3000 dental periapical radiographs including maxillary premolars (n = 778) and molars (n = 769), and mandibular premolars (n = 722) and molars (n = 731).
Outcome evaluated: Detection of dental caries from periapical radiographic images.
Study objective: "To evaluate the efficacy of DCNN algorithms for detection and diagnosis of dental caries on periapical radiographs."
Methodology: 3000 periapical radiographic images were divided into a training and validation dataset (n = 2400) and a test dataset (n = 600).
Type of AI network: DCNN.
Study parameter(s): Diagnostic accuracy, sensitivity, specificity, positive/negative predictive values, ROC curve, and AUC for detection of dental caries by the DCNN.
Main findings: The DCNN-based premolar model provided the best diagnostic accuracy (89%) and AUC (0.917), significantly greater (p < 0.001) than the molar model (accuracy 88%, AUC 0.890) and the combined premolar-molar model (accuracy 82%, AUC 0.845).
Study limitations: Only permanent teeth were included; the numbers of periapical radiographs with and without dental caries were too small for optimal DCNN learning.
Future directions: Enhance diagnostic accuracy by including history, clinical examination, percussion and tactile evaluation in the DCNN algorithm.

Bouchahma et al. (2019), Tunisia
Sample size and characteristics: 200 dental periapical radiographs.
Outcome evaluated: Predicting treatment options for dental decay.
Study objective: "To propose an automatic method using DCNN to detect the decay from dental X-Ray images and to predict the needed treatment."
Methodology: A DCNN model was designed based on dental periapical radiographs obtained from patients. The model was evaluated on a subset of 200 radiographs to identify dental decay and predict treatment as fluoride application, root canal treatment or simple dental restoration.
Type of AI network: DCNN.
Study parameter(s): Treatment prediction accuracy.
Main findings: The DCNN-based dental decay treatment prediction model had an overall accuracy of 87% (fluoride application 98%, root canal treatment 88%, simple dental restoration 77%).
Study limitations: Small sample size.
Future directions: Further studies with a larger sample size.

Ekert et al. (2019b), Germany
Sample size and characteristics: Dental panoramic radiographs from 85 patients (median age 51 years) including 2001 teeth.
Outcome evaluated: Detection of apical lesions.
Study objective: "To apply DCNN to detect apical lesions on panoramic dental radiographs."
Methodology: The DCNN model was tested against an ordinal reference scale inferred from the majority values assigned by 6 independent examiners who assessed the panoramic dental radiographs. Reference scale values were: no apical lesion (0), widened PDL space or uncertain apical lesion (1), and certain apical lesion (2).
Type of AI network: DCNN.
Study parameter(s): Sensitivity, specificity, positive and negative predictive values, and AUC of the ROC curve.
Main findings: The DCNN had an overall AUC of 0.85 ± 0.04 for detection of apical lesions. The model showed greater specificity (0.87 ± 0.04) than sensitivity (0.65 ± 0.12), corresponding to a greater negative predictive value (0.93 ± 0.03) than positive predictive value (0.49 ± 0.10). Sensitivity was significantly higher for molars than for other tooth types.
Study limitations: Small training dataset.
Future directions: Further studies to consider the impact of factors such as image projection and contrast quality on the discrimination ability of the DCNN.

Hu et al. (2019), USA
Sample size and characteristics: 21 patients (mean age 27.6 ± 3.5 years).
Outcome evaluated: AI-based real-time pain detection and localization using neuroimaging.
Study objective: "To test the feasibility of a mobile neuroimaging-based clinical augmented reality and AI framework for objective pain detection and also localization direct from the patient's brain in real time."
Methodology: Cortical brain activity during acute pain (cold stimulation of hypersensitive teeth) was recorded in real time using a portable optical neuroimaging technology (functional near-infrared spectroscopy). An ANN- and DCNN-based AI algorithm was used to classify the hemodynamic data into pain and non-pain brain states.
Type of AI network: ANN- and DCNN-based AI algorithm.
Study parameter(s): Classification accuracy for pain and non-pain states.
Main findings: For pain/non-pain discrimination, the AI algorithm achieved 80.37% classification accuracy and a positive likelihood ratio of 2.35. For a left/right localization task, the same algorithm achieved 74.23% accuracy and a positive likelihood ratio of 2.02.
Study limitations: Small sample size.
Future directions: Extensive validation is still required for clinical translation.

Tuzoff et al. (2019), Russia
Sample size and characteristics: 1574 dental panoramic radiographs.
Outcome evaluated: Tooth detection and numbering.
Study objective: "To propose and evaluate a novel solution based on CNN for automatically performing the tasks of teeth detection and numbering from dental panoramic radiographs."
Methodology: 1574 randomly selected dental panoramic radiographs were divided into training (n = 1352) and testing (n = 222) subsets. The sensitivity, specificity and precision of tooth detection and numbering by the CNN-based algorithm were compared with the assessments of five experts in oral radiology.
Type of AI network: CNN.
Study parameter(s): Sensitivity, specificity and precision of tooth detection and numbering.
Main findings: The CNN-based algorithm achieved a sensitivity of 0.9941 and precision of 0.9945 for tooth detection, and a sensitivity of 0.9800 and specificity of 0.9994 for tooth numbering. These results were comparable to those of the experts (tooth detection: sensitivity 0.9980, precision 0.9998; tooth numbering: sensitivity 0.9893, specificity 0.9997).
Study limitations: The majority of misclassifications occurred in teeth neighboring edentulous spaces.
Future directions: Further scope for improvement in terms of advanced augmentation techniques, extended datasets and more recent CNN architectures.

Fukuda et al. (2020), Japan
Sample size and characteristics: 300 dental panoramic radiographs, including 330 teeth with VRF.
Outcome evaluated: Detecting vertical root fracture in teeth.
Study objective: "To evaluate the use of a CNN system for detecting VRF on panoramic radiography."
Methodology: 300 dental panoramic radiographs were randomly divided into training (n = 240) and testing (n = 60) images. For comparison, the presence of a VRF line was confirmed by 3 experts (2 radiologists and 1 endodontist).
Type of AI network: CNN.
Study parameter(s): Recall, precision and F-measure for diagnostic performance.
Main findings: 267 teeth with VRF (80.9%) were accurately detected by the CNN algorithm, while 20 teeth without VRF were falsely diagnosed. Recall, precision and F-measure were 0.75, 0.93 and 0.83, respectively.
Study limitations: Inadequate training items, inclusion of only radiographs with clear VRF lines, and single-center design.
Future directions: Further large-scale studies to address these limitations.

Mallishery et al. (2020), India
Sample size and characteristics: 500 root canal treatment cases.
Outcome evaluated: Difficulty level and decision for referral in root canal treatment cases.
Study objective: "To generate a machine learning algorithm which can help predict the difficulty level of the root canal treatment case and decide about a referral, with the help of the standard AAE endodontic case difficulty assessment form."
Methodology: 500 root canal treatment cases recorded using the AAE endodontic case difficulty assessment form were assessed by 2 pre-calibrated endodontists (a 3rd endodontist was consulted for conflicting opinions). Their assessments were compared with the algorithm generated through the ANN.
Type of AI network: ANN.
Study parameter(s): Sensitivity of the ANN algorithm for identifying difficulty level and deciding on referral.
Main findings: The ANN algorithm achieved a sensitivity of 94.96%.
Study limitations: No analysis of alternative sources of data.
Future directions: Further ANN algorithms should include alternative sources of data, such as radiographs and clinical findings, in conjunction with the AAE case difficulty assessment form.

Orhan et al. (2020), Turkey
Sample size and characteristics: 153 CBCT images of periapical lesions acquired from 109 patients.
Outcome evaluated: Detection, localization and volume determination of periapical lesions.
Study objective: "To verify the diagnostic performance of an artificial intelligence system based on the DCNN method to detect periapical pathosis on CBCT images."
Methodology: 153 CBCT images showing periapical lesions were evaluated by an expert human observer using manual segmentation in medical imaging software. The DCNN was trained and evaluated for detecting, localizing and determining the volume of the periapical lesions from CBCT images.
Type of AI network: DCNN.
Study parameter(s): Reliability of the DCNN for detection, localization and volume determination of periapical lesions.
Main findings: The DCNN detected and localized teeth with a reliability of 92.8%; 142 of 153 periapical lesions were correctly detected and only one tooth was wrongly localized. Volume determination of the lesions by the DCNN and by manual segmentation did not differ significantly (p > 0.05).
Study limitations: DCNN-based measurements may have been altered by the presence of perio-endo lesions, PDL tissue loss and alveolar bone defects.
Future directions: Further studies on algorithms that account for variation in normal dentoalveolar anatomy.

ANN – Artificial neural network, DCNN – Deep convolutional neural network, CBCT – Cone beam computed tomography, ROC – Receiver operating characteristic, AUC – Area under the curve, PDL – Periodontal ligament, AI – Artificial intelligence, CNN – Convolutional neural network, VRF – Vertical root fracture, AAE – American Association of Endodontists.
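
Several of the studies summarized above report diagnostic performance using the same family of metrics: accuracy, sensitivity, specificity, positive/negative predictive values, precision, recall, F-measure and the positive likelihood ratio. As an illustrative aid only (not drawn from any of the reviewed studies), the minimal Python sketch below shows how these quantities are computed from a binary confusion matrix; the counts in the example are hypothetical.

# Minimal sketch (illustrative, hypothetical counts): diagnostic-performance
# metrics of the kind reported in Table 2, computed from a binary confusion matrix.

def diagnostic_metrics(tp, fp, tn, fn):
    """Compute the metrics most frequently reported by the included studies."""
    sensitivity = tp / (tp + fn)                    # recall / true-positive rate
    specificity = tn / (tn + fp)                    # true-negative rate
    ppv = tp / (tp + fp)                            # positive predictive value (precision)
    npv = tn / (tn + fn)                            # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_measure = 2 * ppv * sensitivity / (ppv + sensitivity)
    lr_positive = sensitivity / (1 - specificity)   # positive likelihood ratio
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": ppv,
        "NPV": npv,
        "accuracy": accuracy,
        "F-measure": f_measure,
        "LR+": lr_positive,
    }

if __name__ == "__main__":
    # Hypothetical counts for a 600-image test set (e.g., lesion present vs. absent).
    for name, value in diagnostic_metrics(tp=250, fp=40, tn=280, fn=30).items():
        print(f"{name}: {value:.3f}")

The AUC values quoted in the table summarize how the sensitivity/specificity trade-off varies across classification thresholds and therefore cannot be recovered from a single confusion matrix.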