Abstract
Objectives:
The present study aimed to evaluate the performance of a Faster Region-based Convolutional Neural Network (R-CNN) algorithm for tooth detection and numbering on periapical images.
Methods:
The data set comprised 1686 randomly selected periapical radiographs of patients, collected retrospectively. A pre-trained model (GoogLeNet Inception v3 CNN) was employed for pre-processing, and transfer learning techniques were applied for data set training. The algorithm consisted of (1) a jaw classification model, (2) region detection models, and (3) a final algorithm combining all models. True-positive, false-positive, and false-negative counts were computed from a confusion matrix, and from these the sensitivity, precision, and F1 score were calculated to analyze the performance of the algorithm.
Results:
An artificial intelligence algorithm (CranioCatch, Eskisehir-Turkey) based on the Faster R-CNN Inception architecture was designed to automatically detect and number teeth on periapical images. Of the 864 teeth in the 156 periapical radiographs of the test data set, 668 were numbered correctly. The F1 score, precision, and sensitivity were 0.8720, 0.7812, and 0.9867, respectively.
Conclusion:
The study demonstrated the potential accuracy and efficiency of the CNN algorithm for detecting and numbering teeth. Deep learning-based methods can help clinicians reduce workloads, improve dental records, and shorten turnaround times for urgent cases. This architecture might also contribute to forensic science.
Keywords: Artificial Intelligence, Deep Learning, Tooth, Classification, Dental Radiography
Introduction
Dental radiographs are commonly used tools in routine clinical practice.1,2 Imaging and analysis of dental structures are important for diagnosing diseases, monitoring oral health, and planning treatments.2–4 One common technique is periapical radiography, which produces highly detailed intraoral images of the oral cavity.5 Periapical radiographs are used extensively in clinical practice since they show the crown and the root of a tooth with relatively high resolution and minimal radiation exposure. However, clinicians can misdiagnose radiographic findings owing to factors such as inexperience, knowledge gaps, and fatigue.6 These common problems in interpreting dental images can be mitigated by tools based on artificial intelligence (AI) technologies. AI and deep learning (DL) tools such as convolutional neural networks (CNNs) have gathered momentum in medical and dental research, promising to improve and facilitate radiologic diagnosis, especially in forensics.7,8 Since teeth are the hardest tissue in humans and survive long after a person's death, efficient and innovative tools for tooth detection and numbering can provide crucial assistance to medicolegal experts in identifying individuals and furnishing other statistical data.9,10 Many research projects have already applied AI to tooth detection and numbering in the evaluation of dental radiographs.11 However, only a limited number of studies have used CNN-based DL algorithms for detecting and numbering teeth in periapical images. This study aimed to develop automatic detection and numbering of teeth in periapical images using a Faster Region-based Convolutional Neural Network (R-CNN) method, which reduces the number of calculations performed and improves accuracy and speed.
Methods and materials
Radiographic data set
Periapical radiographs used for various diagnostic purposes were randomly selected from the digital archive of the Eskisehir Osmangazi University Faculty of Dentistry, Department of Dentomaxillofacial Radiology. The data set was composed of 1686 anonymized periapical images of adults obtained from January 2018 to January 2020. Only periapical radiographs of good diagnostic quality were included. Radiographs with metal superimpositions, cone-cuts, position or motion artifacts, deciduous teeth, root fragments, or implants were excluded from the study. The protocol of the study was approved by the ethical committee of the institution [decision date and number: 06.08.2019/14]. All periapical radiographs were obtained using intraoral phosphor storage plates (Planmeca, Helsinki, Finland). The plates were exposed for 0.05 s at 60 kVp and 2 mA with the bisecting angle technique, using a ProX periapical X-ray unit (Planmeca, Helsinki, Finland), and then scanned with a ProScanner phosphor plate scanning device (Planmeca, Helsinki, Finland).
Image evaluation
For ground-truth labeling, each tooth was manually annotated by two dentomaxillofacial radiologists (CG and ISB) using Colabeler software (MacGenius, Blaze Software, Sacramento, CA); the radiologists' interpretations were in agreement for all images. The experts were asked to draw a minimal-sized bounding box around each tooth (entire crown and root) and to label each box using the FDI tooth numbering system (18–11 for the right maxilla, 21–28 for the left maxilla, 31–38 for the left mandible, and 48–41 for the right mandible).
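For illustration, the sketch below shows how one such annotation might be represented in code; the class and field names are hypothetical and do not reflect the actual Colabeler export schema.

```python
from dataclasses import dataclass

@dataclass
class ToothLabel:
    """One ground-truth annotation: a minimal bounding box plus an FDI code."""
    fdi_number: int  # FDI two-digit code: 11-18, 21-28, 31-38, or 41-48
    x_min: int       # box enclosing the entire crown and root
    y_min: int
    x_max: int
    y_max: int

# All valid permanent-dentition FDI codes (quadrant digit 1-4, tooth digit 1-8).
VALID_FDI = {10 * quadrant + tooth for quadrant in (1, 2, 3, 4) for tooth in range(1, 9)}

def validate(label: ToothLabel) -> None:
    """Reject labels outside the FDI range or with degenerate boxes."""
    if label.fdi_number not in VALID_FDI:
        raise ValueError(f"{label.fdi_number} is not a valid FDI code")
    if label.x_min >= label.x_max or label.y_min >= label.y_max:
        raise ValueError("degenerate bounding box")
```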
Deep convolutional neural network
A pre-trained architecture (GoogLeNet Inception v3 CNN) was employed for preprocessing, and transfer learning techniques from the TensorFlow library were applied for data set training. The GoogLeNet Inception v3 network has been used extensively in medicine and has achieved strong results in image detection, classification, and segmentation.12 A Faster R-CNN, developed from the Fast R-CNN architecture, was applied for tooth detection and numbering (Figure 1). This model extracts feature maps from the input image using a ConvNet and passes them through a Region Proposal Network, which returns object proposals; the proposals are then classified and their bounding boxes are predicted.
Figure 1.
The system architecture of the Faster R-CNN algorithm. CNN, convolutional neural network.
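To make the data flow concrete, the following conceptual sketch mirrors the three stages just described (backbone feature extraction, region proposals, classification). The backbone, RPN, and detection head are reduced to placeholder computations; this illustrates the pipeline only and is not the trained CranioCatch model.

```python
import numpy as np

def backbone(image: np.ndarray) -> np.ndarray:
    """Stand-in for the Inception feature extractor (image -> feature map)."""
    return image[::16, ::16]  # mimic a stride-16 feature map

def region_proposal_network(features: np.ndarray) -> list:
    """Stand-in RPN: return candidate boxes (x0, y0, x1, y1) on the input image."""
    h, w = features.shape[0] * 16, features.shape[1] * 16
    return [(0, 0, w // 2, h), (w // 2, 0, w, h)]  # two dummy proposals

def classify_and_refine(features: np.ndarray, proposals: list) -> list:
    """Stand-in detection head: score each proposal; a real head would also
    regress refined box coordinates and predict a tooth class."""
    return [(box, 0.9) for box in proposals]

image = np.zeros((886, 886), dtype=np.float32)  # a radiograph resized as in Methods
features = backbone(image)
proposals = region_proposal_network(features)
detections = classify_and_refine(features, proposals)  # [(box, confidence), ...]
```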
Model construction, training, and validation
In the present study, an AI model (CranioCatch, Eskisehir-Turkey) was designed using the Faster R-CNN Inception architecture to automatically detect and number teeth in periapical radiographs. Training was carried out in the Eskisehir Osmangazi University Faculty of Dentistry Dental-AI Laboratory on a Dell PowerEdge T640 calculation server (Dell Inc., Texas), a Dell PowerEdge T640 GPU calculation server with 32 GB RDIMM and an NVIDIA Tesla V100 16 GB passive GPU card (Dell Inc., Texas), and a Dell PowerEdge R540 storage server (Dell Inc., Texas).
Before training, each radiograph was resized from 1171 × 886 pixels to 886 × 886 pixels. The algorithm consisted of (1) a jaw classification model, (2) region detection models, and (3) a final algorithm using all models. Before describing these models, some technical terms need to be elucidated, since these DL tools require several hyperparameters to be set before the learning process begins. The first, the learning rate, determines how fast the network updates its parameters. The second, the number of epochs, is the number of times the entire training set passes through the network during training. Lastly, the batch size is the number of training samples processed in a single batch (see the sketch below).
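The toy loop below illustrates how these three hyperparameters interact; the values mirror those reported for the jaw classification model in the next paragraph, and `step_fn` is a hypothetical stand-in for the actual TensorFlow update step.

```python
import random

LEARNING_RATE = 0.01   # how strongly each update changes the parameters
EPOCHS = 20_000        # one epoch = one full pass over the training set
BATCH_SIZE = 100       # samples consumed per parameter update

def train(dataset: list, step_fn) -> None:
    for _ in range(EPOCHS):
        random.shuffle(dataset)
        for i in range(0, len(dataset), BATCH_SIZE):
            batch = dataset[i:i + BATCH_SIZE]
            step_fn(batch, LEARNING_RATE)  # placeholder for the gradient update
```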
(1) Jaw classification model: after labeling, an upper-jaw and lower-jaw classification model was developed. The 1686 annotated radiographs were randomly divided into a training group of 1366 images (540 lower jaws, 826 upper jaws), a validation group of 160 images (60 lower jaws, 100 upper jaws), and a test group of 160 images (60 lower jaws, 100 upper jaws). The training set was divided randomly into batches of 100 images for every epoch, and 20,000 epochs were run at a learning rate of 0.01 (Figure 2, Table 1).
Figure 2.
Diagram of the jaw classification algorithm on periapical radiographs.
Table 1.
Predictive performance measurements using the jaw classification model
| Metric | Number (Upper Jaw) | Number (Lower Jaw) |
|---|---|---|
| TP | 506 | 287 |
| FP | 429 | 144 |
| FN | 8 | 6 |
| Sensitivity [TPR] | 0.9844 | 0.9795 |
| Precision [PPV] | 0.5412 | 0.6659 |
| F1 score | 0.6984 | 0.7928 |
FN, false-negative; FP, false-positive; PPV, positive predictive value; TP, true-positive; TPR, true positive rate.
(2) Region detection models
Dental object detection model: after the classification of jaws, dental object detection models with 16 classes each for the maxilla and the mandible were developed. The 660 annotated mandibular periapical radiographs were randomly grouped into a training set of 540 images (2373 tooth labels), a validation set of 60 images (300 tooth labels), and a test set of 60 images (286 tooth labels). The 1026 annotated maxillary periapical radiographs were randomly grouped into a training set of 840 images (3809 tooth labels), a validation set of 90 images (500 tooth labels), and a test set of 96 images (478 tooth labels). The mandibular and maxillary training sets were separated randomly into batches of 76 and 119 images, respectively, for every epoch, and 500,000 epochs were run at a learning rate of 0.002. After detecting teeth, the algorithms were trained to number individual teeth according to the FDI two-digit system (Figure 3).
Tooth numbering model: for tooth numbering, the algorithm uses the output of the tooth detection model. First, the model crops each tooth to the extent of its detected bounding box. Then, a CNN classifies each cropped image to estimate the tooth number. To obtain better results in right, middle, and left classification, a direct numbering model was developed and used to support the classification by examining the directions in which the numbers increase or decrease. In images with missing teeth, the corresponding areas are identified and accounted for in the numbering, but no marking is made. Missing teeth were detected as follows (a sketch follows below): the mean and deviation of the distance between consecutive teeth were calculated, and a missing tooth was inferred at any position where the gap exceeded this deviation. Missing teeth therefore do not affect the correct numbering. By evaluating whether the last digits of the numbered teeth increase or decrease, the method can determine whether the image shows right-sided or left-sided teeth or the anterior region that includes both. The training set was divided into batches of two images for every epoch, and 200,000 epochs were run at a learning rate of 0.002 (Figures 4 and 5).
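A minimal sketch of the missing-tooth heuristic just described: a gap between consecutive detected teeth is flagged when it exceeds the mean inter-tooth distance by more than the observed deviation. The exact threshold used by the authors is not specified, so the mean-plus-one-standard-deviation cutoff here is an assumption.

```python
import statistics

def find_missing_gaps(centers_x: list) -> list:
    """centers_x: x-coordinates of detected tooth centers, sorted along the jaw.
    Returns indices i where a missing tooth is inferred between teeth i and i+1.
    Requires at least three detected teeth (two gaps) for a deviation estimate."""
    gaps = [b - a for a, b in zip(centers_x, centers_x[1:])]
    mean, sd = statistics.mean(gaps), statistics.stdev(gaps)
    return [i for i, gap in enumerate(gaps) if gap > mean + sd]  # assumed cutoff

# Example: centers at 40, 80, 120, 200, 240 -> the 80-pixel gap after the third
# tooth is flagged, so the numbering can skip one position there.
print(find_missing_gaps([40, 80, 120, 200, 240]))  # -> [2]
```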
Specific models for each region: six region detection models (anterior, left, and right maxilla; anterior, left, and right mandible) were trained using the same parameters as their corresponding object detection models. From the estimation results of the trained models, the tooth row with the highest predicted probability was selected, and the remaining teeth were enumerated from it. The training sets were divided into batches of two images for every epoch, and 200,000 epochs were run at a learning rate of 0.002 (Table 2, Figures 4 and 5).
Figure 3.
Diagram of the dental object detection and tooth numbering algorithm on periapical radiographs.
Figure 4.
Diagram of the specific tooth numbering model for lower jaw on periapical radiographs.
Figure 5.
Diagram of the specific tooth numbering model for upper jaw on periapical radiographs.
Table 2.
Predictive performance measurements using the region detection models
| Metric | Number (Maxilla-Left region) | Number (Maxilla-Anterior region) | Number (Maxilla-Right region) | Number (Mandible-Left region) | Number (Mandible-Anterior region) | Number (Mandible-Right region) |
|---|---|---|---|---|---|---|
| TP | 126 | 159 | 221 | 130 | 40 | 117 |
| FP | 68 | 240 | 121 | 32 | 40 | 72 |
| FN | 1 | 5 | 2 | 2 | 1 | 3 |
| Sensitivity [TPR] | 0.9921 | 0.9695 | 0.9910 | 0.9849 | 0.9756 | 0.9750 |
| Precision [PPV] | 0.6495 | 0.3985 | 0.6462 | 0.8025 | 0.5000 | 0.6190 |
| F1 score | 0.7850 | 0.5648 | 0.7823 | 0.8844 | 0.6612 | 0.7573 |
FN, false-negative; FP, false-positive; PPV, positive predictive value; TP, true-positive; TPR, true positive rate.
(3) Final algorithm using all models: this last model was created by combining all the other models. Teeth identified in their proper rows by the test results were accepted as correct, while teeth that were out of order were renumbered according to the tooth sequence (a simplified sketch of this step follows Figure 6). In this way, the most accurate results were obtained (Table 3, Figure 6).
Table 3.
Predictive performance measurements in the test group using the final algorithm combining all developed AI models (CranioCatch, Eskisehir-Turkey)
| Metric | Number |
|---|---|
| TP | 668 |
| FP | 187 |
| FN | 9 |
| Sensitivity [TPR] | 0.9867 |
| Precision [PPV] | 0.7812 |
| F1 score | 0.8720 |
AI, artificial intelligence; FN, false-negative; FP, false-positive; PPV, positive predictive value; TP, true-positive; TPR, true positive rate.
Figure 6.
Diagram of the final artificial intelligence algorithm (CranioCatch, Eskisehir-Turkey) using all developed models for tooth detection and numbering on periapical radiographs.
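A simplified sketch of the renumbering step mentioned above. The actual combination logic of the CranioCatch models is not published, so this only illustrates the idea: predictions that continue the expected sequence are kept, and out-of-order predictions are rewritten from their neighbours.

```python
def renumber(predicted: list) -> list:
    """predicted: FDI last digits of detected teeth sorted along the jaw.
    Assumes numbers increase along the row and no tooth is missing in between;
    the real algorithm also handles gaps and direction (see the numbering model)."""
    fixed = [predicted[0]]
    for p in predicted[1:]:
        expected = fixed[-1] + 1
        fixed.append(p if p == expected else expected)  # renumber by sequence
    return fixed

print(renumber([4, 5, 8, 7]))  # the mispredicted 8 is corrected -> [4, 5, 6, 7]
```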
Performance analysis
After training and validation, the test data set, consisting of 156 periapical radiographs (764 tooth labels), was employed to assess the performance of the model against the ground truth. Precision, sensitivity, and F1 score are the common metrics in tooth detection and numbering studies. Before calculating these metrics, the true-positive (TP), false-positive (FP), and false-negative (FN) counts were determined. Detections were matched to the ground truth using Intersection over Union (IoU), the ratio of the area of overlap to the area of union between the ground-truth box and the detected box. A match was counted as a TP if the IoU exceeded a threshold of 0.7 and as an FP otherwise. After computing TP, FP, and FN, the following metrics were calculated: sensitivity = TP/(TP + FN), precision = TP/(TP + FP), and F1 score = 2TP/(2TP + FP + FN).
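A minimal sketch of this evaluation, assuming boxes are (x0, y0, x1, y1) tuples; the matching of detections to ground truth is reduced to a single pairwise check for brevity.

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(gt_box, det_box, threshold=0.7):
    """A detection counts as a TP when IoU with its ground-truth box exceeds 0.7."""
    return iou(gt_box, det_box) > threshold

def metrics(tp, fp, fn):
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, precision, f1

# Reproduces Table 3 from the reported counts:
print(metrics(668, 187, 9))  # -> (0.9867..., 0.7812..., 0.8720...)
```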
Results
Among the 156 test-set periapical images with 864 tooth labels, there were 668 TPs, 187 FPs, and 9 FNs (Table 3). The model achieved a high sensitivity of 0.9867 for the detection and numbering process, a precision of 0.7812, and an F1 score of 0.8720 (Table 3). The AI algorithm based on the Faster R-CNN Inception v3 architecture thus performed well in detecting and numbering teeth in periapical images (Figure 7).
Figure 7.
Tooth detection and numbering on periapical radiographs using the artificial intelligence algorithm (CranioCatch, Eskisehir-Turkey).
Discussion
Computer-aided diagnosis (CAD) has rapidly entered the medical research field in the last two decades. CAD frameworks to aid clinicians and radiologists have been employed for several medical problems, such as the detection of cancers, disease classification, and localization of pathologies.13,14 CAD systems can simplify the tooth detection and numbering process for dental recording while saving time and minimizing human error.3,15 These systems can also help clinicians with decision making, treatment planning, forensic investigations, and detecting pathologies.3,16
Several studies have been conducted on the detection and numbering of teeth using Bayesian techniques,17 linear models,18 or binary support vector machines (SVMs).19,20 Nonetheless, the image features are often manually curated, and most of these systems perform an image enhancement step before segmentation and feature extraction. This creates a large workload, and image recognition performance is significantly affected by the quality of the extracted features.6
In the study by Mahoor et al,17 a dental classification method for bitewing images using Bayesian classification was presented. The authors reported wide variation in performance for upper premolars (from 72.0 to 94.2%) and concluded that, in both jaws, the classification of premolars is more robust than that of molars.
Lin et al19 and Hosntalab et al21 proposed pixel-level segmentation techniques based on classical computer vision methods for tooth detection, reporting sensitivities of 0.9419 and 0.88,21 respectively. These studies used feature extraction and classification techniques for tooth numbering: Lin et al19 used width/height tooth ratios and crown-size parameters, while Hosntalab et al21 applied a wavelet-Fourier descriptor to define tooth shape. A sequence alignment algorithm with SVM19 and feedforward neural networks21 were applied for tooth classification, achieving accuracies of 0.984 and above 0.945, respectively.
Recently, AI and DL tools, specifically CNNs, have achieved strong results in recognizing objects, faces, and activities, and have also performed well in tracking, three-dimensional mapping, and localization.22 DL algorithms that automatically extract image features from the raw pixel information have improved rapidly. These systems greatly decrease the workload and can capture features that are difficult even for the human eye to recognize. In 2012, a deep CNN achieved breakthrough performance in the ImageNet classification challenge. Later, R-CNN, Fast R-CNN with spatial pyramid pooling, and Faster R-CNN with a region proposal network were applied in medically relevant fields, and successful results were achieved in object detection tasks. CNN-based DL algorithms have since become an important method in medical image classification.
In dentistry, CNNs have been applied to the detection of caries,23,24 apical pathologies,25,26 periodontal diseases,27,28 and root fractures,29 as well as to jaw lesion diagnosis,30,31 maxillary sinusitis classification,32 lower third molar development staging,33 root morphology evaluation,34 cephalometric landmark localization,35 and detection of atherosclerotic carotid plaques.36 Deep learning systems have thus been applied across dentomaxillofacial radiology, including the detection, classification, and diagnosis of diseases and anatomical structures.15
Zhang et al37 detected and classified teeth in periapical radiographs using deep learning methods. Applying these methods to 1000 images, the authors obtained high precision and recall of 0.958 and 0.961, respectively. However, radiographs with only one or two teeth were misdiagnosed because the proposed algorithm could not discriminate between the right and left sides. To address this limitation, our study used region detection models. Similar to our study, Chen et al6 employed Faster R-CNN in the TensorFlow framework for tooth detection and numbering, collecting a total of 1250 periapical images on which the algorithm was tested. The outcome metrics of that study were deficient until post-processing techniques were employed, and the authors concluded that the algorithm had reached the performance level of a student dentist who contributed to that study.
The performance of CNN-based deep learning algorithms has varied widely among studies, tasks, and metrics. Studies on tooth segmentation and tooth detection using CNNs began in 2015 and 2017, respectively. Tooth classification and detection tasks have been performed most frequently, with reported mean accuracies of 0.77–0.98 (Table 4).
Table 4.
Comparison of performance for tooth detection and classification presented by previous reports
| Name of author | Year | Image type | Data set size | CNN architecture | Outcomes |
|---|---|---|---|---|---|
| Miki et al.38 | 2017 | CBCT | 52 | AlexNet | High accuracy of 91.0% |
| Miki et al.39 | 2017 | CBCT | 52 | AlexNet | Detection rate was 77.4%; classification accuracy was 77.1% |
| Oktay40 | 2017 | Panoramic | 100 | AlexNet | Classification accuracy of 94.32% for molars, 91.74% for premolars, and 92.47% for canines and incisors |
| Zhang et al.37 | 2018 | Periapical | 1000 | VGG16 | High precision and recall of 95.8% and 96.1% in total |
| Chen et al.6 | 2019 | Periapical | 1250 | ResNet | Precision and recall exceeded 90%; mean IoU was 91% |
| Tuzoff et al.3 | 2018 | Panoramic | 1574 | VGG16 | Tooth detection sensitivity of 0.9941 and precision of 0.9945; tooth numbering sensitivity of 0.9800 and specificity of 0.9994 |
| Muramatsu et al.41 | 2019 | Panoramic | 100 | DetectNet, GoogLeNet, ResNet | Tooth detection sensitivity was 96.4%; classification accuracies for tooth types and tooth conditions were 93.2% and 98.0%, respectively |
| Kim et al.11 | 2020 | Panoramic | 303 | R-CNN + heuristics | mAP of tooth detection was 96.7%; sensitivity, specificity, and accuracy of tooth numbering were 84.2%, 75.5%, and 84.5%, respectively |
| Yasa et al.16 | 2020 | Bitewing | 1125 | GoogLeNet Inception v2 | Sensitivity, precision, and F-measure were 0.9748, 0.9293, and 0.9515, respectively |
| Kılıc et al.42 | 2021 | Panoramic | 421 | GoogLeNet Inception v2 | Detecting and numbering deciduous teeth: sensitivity, precision, and F1 score were 0.9804, 0.9571, and 0.9686, respectively |
| Present study | 2021 | Periapical | 1686 | GoogLeNet Inception v3 | Sensitivity of 0.9867 and precision of 0.7812; F1 score was 0.8720 |
CBCT, cone-beam computed tomography.
This study demonstrated the potential of Faster R-CNN models for automated dental image interpretation and diagnostic tasks on periapical images. The architecture showed high performance in tooth detection and numbering. However, the heterogeneity of algorithms, imaging protocols, evaluation parameters, and data set sizes made comparisons among different studies difficult.
The most common dental imaging types are panoramic, periapical, and bitewing. Panoramic scans capture the mandible, maxilla, and teeth in a single radiographic image, but the roots of the teeth within the bone are not well visualized. Bitewing scans, examined in previous studies,6 can capture only the crowns of the posterior teeth. Periapical radiographs visualize the entire crown and root of the teeth in selected areas, which can assist clinicians in diagnosing dental diseases. Since CNNs do not require hand-crafted feature extraction, unlike classical machine learning (ML) methods, they can also be applied to other tasks.3 In future studies, our algorithm can be applied to other imaging modalities.
The current study had several limitations. Our data set consisted of periapical images without implants, root fragments, extensive crown destruction, or deciduous teeth; future work should include such complex images. A further limitation was that the performance of the experts was not evaluated. The process needs further work and investigation before it can be used to assist clinicians in the real world and translate into meaningful clinical impact. Furthermore, learning from scratch was used rather than pre-trained transfer learning in this study. We followed this path because our data set was not large enough and because radiographic images differ considerably from the images used in generic image recognition tasks; thus, we preferred learning from scratch. However, comparing these modelling and learning methods remains important, and future studies should plan to perform the classification on a large data set.
Conclusions
The study demonstrated the potential accuracy and efficiency of a CNN algorithm for the detection and numbering of teeth. Deep learning-based methods can help clinicians reduce workloads and improve dental records. This architecture might also contribute to forensic science.
Footnotes
Acknowledgment: This study was presented as an oral presentation at the 23rd International Congress of Dental and MaxilloFacial Radiology, Gwangju, South Korea.
Conflict of Interest: The authors declare that they have no conflict of interest.
Funding: This work has been supported by Eskisehir Osmangazi University Scientific Research Projects Coordination Unit under grant number 202045E06.
Ethical approval: The Institutional Ethical Review Board of Eskisehir Osmangazi University, Faculty of Dentistry approved the study with decision date/number: 06.08.2019/14 and the study followed the Declaration of Helsinki on medical protocol and ethics.
Informed consent: Additional informed consent was obtained from all individual participants included in the study.
Contributor Information
Cansu Görürgöz, Email: cansu92009@hotmail.com.
Kaan Orhan, Email: call53@yahoo.com.
Ibrahim Sevki Bayrakdar, Email: ibrahimsevkibayrakdar@gmail.com.
Özer Çelik, Email: ozercelik05@gmail.com.
Elif Bilgir, Email: bilgirelif04@hotmail.com.
Alper Odabaş, Email: aodabas@ogu.edu.tr.
Ahmet Faruk Aslan, Email: afaslan@ogu.edu.tr.
Rohan Jagtap, Email: drrohanjagtap@gmail.com.
REFERENCES
- 1. Chan M, Dadul T, Langlais R, Russell D, Ahmad M. Accuracy of extraoral bite-wing radiography in detecting proximal caries and crestal bone loss. J Am Dent Assoc 2018; 149: 51–8. doi: 10.1016/j.adaj.2017.08.032
- 2. Vandenberghe B, Jacobs R, Bosmans H. Modern dental imaging: a review of the current technology and clinical applications in dental practice. Eur Radiol 2010; 20: 2637–55. doi: 10.1007/s00330-010-1836-1
- 3. Tuzoff DV, Tuzova LN, Bornstein MM, Krasnov AS, Kharchenko MA, Nikolenko SI, et al. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac Radiol 2019; 48: 20180051. doi: 10.1259/dmfr.20180051
- 4. Schwendicke F, Samek W, Krois J. Artificial intelligence in dentistry: chances and challenges. J Dent Res 2020; 99: 769–74. doi: 10.1177/0022034520915714
- 5. Wang C-W, Huang C-T, Lee J-H, Li C-H, Chang S-W, Siao M-J, et al. A benchmark for comparison of dental radiography analysis algorithms. Med Image Anal 2016; 31: 63–76. doi: 10.1016/j.media.2016.02.004
- 6. Chen H, Zhang K, Lyu P, Li H, Zhang L, Wu J, et al. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci Rep 2019; 9: 3840. doi: 10.1038/s41598-019-40414-y
- 7. Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol 2017; 10: 257–73. doi: 10.1007/s12194-017-0406-5
- 8. Hung K, Montalvao C, Tanaka R, Kawai T, Bornstein MM. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: a systematic review. Dentomaxillofac Radiol 2020; 49: 20190107. doi: 10.1259/dmfr.20190107
- 9. Nomir O, Abdel-Mottaleb M. Human identification from dental X-ray images based on the shape and appearance of the teeth. IEEE Trans Inform Forensic Secur 2007; 2: 188–97. doi: 10.1109/TIFS.2007.897245
- 10. Tohnak S, Mehnert AJH, Mahoney M, Crozier S. Synthesizing dental radiographs for human identification. J Dent Res 2007; 86: 1057–62. doi: 10.1177/154405910708601107
- 11. Kim C, Kim D, Jeong H, Yoon S-J, Youm S. Automatic tooth detection and numbering using a combination of a CNN and heuristic algorithm. Applied Sciences 2020; 10: 5624. doi: 10.3390/app10165624
- 12. Lee J-H, Kim D-H, Jeong S-N, Choi S-H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J Dent 2018; 77: 106–11. doi: 10.1016/j.jdent.2018.07.015
- 13. Doi K. Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Comput Med Imaging Graph 2007; 31(4-5): 198–211. doi: 10.1016/j.compmedimag.2007.02.002
- 14. Rezaei M, Yang H, Meinel C. Deep neural network with l2-norm unit for brain lesions detection. In: International Conference on Neural Information Processing; 2017. pp. 798–807.
- 15. Leite AF, Gerven AV, Willems H, Beznik T, Lahoud P, Gaêta-Araujo H, et al. Artificial intelligence-driven novel tool for tooth detection and segmentation on panoramic radiographs. Clin Oral Investig 2021; 25: 2257–67. doi: 10.1007/s00784-020-03544-6
- 16. Yasa Y, Çelik Ö, Bayrakdar IS, Pekince A, Orhan K, Akarsu S, et al. An artificial intelligence proposal to automatic teeth detection and numbering in dental bite-wing radiographs. Acta Odontol Scand 2021; 79: 275–81. doi: 10.1080/00016357.2020.1840624
- 17. Mahoor MH, Abdel-Mottaleb M. Classification and numbering of teeth in dental bitewing images. Pattern Recognit 2005; 38: 577–86. doi: 10.1016/j.patcog.2004.08.012
- 18. Aeini F, Mahmoudi F. Classification and numbering of posterior teeth in bitewing dental images. In: 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE). IEEE; 2010.
- 19. Lin PL, Lai YH, Huang PW. An effective classification and numbering system for dental bitewing radiographs using teeth region and contour information. Pattern Recognit 2010; 43: 1380–92. doi: 10.1016/j.patcog.2009.10.005
- 20. Yuniarti A, Nugroho AS, Amaliah B, Arifin AZ. Classification and numbering of dental radiographs for an automated human identification system. TELKOMNIKA 2012; 10: 137–46. doi: 10.12928/telkomnika.v10i1.771
- 21. Hosntalab M, Aghaeizadeh Zoroofi R, Abbaspour Tehrani-Fard A, Shirani G. Classification and numbering of teeth in multi-slice CT images using wavelet-Fourier descriptor. Int J Comput Assist Radiol Surg 2010; 5: 237–49. doi: 10.1007/s11548-009-0389-8
- 22. Sklan JES, Plassard AJ, Fabbri D, Landman BA. Toward content-based image retrieval with deep convolutional neural networks. Proc SPIE Int Soc Opt Eng 2015; 9417. doi: 10.1117/12.2081551
- 23. Devito KL, de Souza Barbosa F, Felippe Filho WN. An artificial multilayer perceptron neural network for diagnosis of proximal dental caries. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2008; 106: 879–84. doi: 10.1016/j.tripleo.2008.03.002
- 24. Valizadeh S, Goodini M, Ehsani S, Mohseni H, Azimi F, Bakhshandeh H. Designing of a computer software for detection of approximal caries in posterior teeth. Iran J Radiol 2015; 12: e16242. doi: 10.5812/iranjradiol.12(2)2015.16242
- 25. Ekert T, Krois J, Meinhold L, Elhennawy K, Emara R, Golla T, et al. Deep learning for the radiographic detection of apical lesions. J Endod 2019; 45: 917–22. doi: 10.1016/j.joen.2019.03.016
- 26. Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int Endod J 2020; 53: 680–9. doi: 10.1111/iej.13265
- 27. Lee J-H, Kim D-H, Jeong S-N, Choi S-H. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J Periodontal Implant Sci 2018; 48: 114–23. doi: 10.5051/jpis.2018.48.2.114
- 28. Krois J, Ekert T, Meinhold L, Golla T, Kharbot B, Wittemeier A, et al. Deep learning for the radiographic detection of periodontal bone loss. Sci Rep 2019; 9: 8495. doi: 10.1038/s41598-019-44839-3
- 29. Fukuda M, Inamoto K, Shibata N, Ariji Y, Yanashita Y, Kutsuna S, et al. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiol 2020; 36: 337–43. doi: 10.1007/s11282-019-00409-x
- 30. Ariji Y, Yanashita Y, Kutsuna S, Muramatsu C, Fukuda M, Kise Y, et al. Automatic detection and classification of radiolucent lesions in the mandible on panoramic radiographs using a deep learning object detection technique. Oral Surg Oral Med Oral Pathol Oral Radiol 2019; 128: 424–30. doi: 10.1016/j.oooo.2019.05.014
- 31. Lee J-H, Kim D-H, Jeong S-N. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis 2020; 26: 152–8. doi: 10.1111/odi.13223
- 32. Murata M, Ariji Y, Ohashi Y, Kawai T, Fukuda M, Funakoshi T, et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography. Oral Radiol 2019; 35: 301–7. doi: 10.1007/s11282-018-0363-7
- 33. De Tobel J, Radesh P, Vandermeulen D, Thevissen PW. An automated technique to stage lower third molar development on panoramic radiographs for age estimation: a pilot study. J Forensic Odontostomatol 2017; 35: 42–54.
- 34. Hiraiwa T, Ariji Y, Fukuda M, Kise Y, Nakata K, Katsumata A, et al. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac Radiol 2019; 48: 20180218. doi: 10.1259/dmfr.20180218
- 35. Kunz F, Stellzig-Eisenhauer A, Zeman F, Boldt J. Artificial intelligence in orthodontics: evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J Orofac Orthop 2020; 81: 52–68. doi: 10.1007/s00056-019-00203-8
- 36. Kats L, Vered M, Zlotogorski-Hurvitz A, Harpaz I. Atherosclerotic carotid plaque on panoramic radiographs: neural network detection. Int J Comput Dent 2019; 22: 163–9.
- 37. Zhang K, Wu J, Chen H, Lyu P. An effective teeth recognition method using label tree with cascade network structure. Comput Med Imaging Graph 2018; 68: 61–70. doi: 10.1016/j.compmedimag.2018.07.001
- 38. Miki Y, Muramatsu C, Hayashi T, Zhou X, Hara T, Katsumata A, et al. Classification of teeth in cone-beam CT using deep convolutional neural network. Comput Biol Med 2017; 80: 24–9. doi: 10.1016/j.compbiomed.2016.11.003
- 39. Miki Y, Muramatsu C, Hayashi T, Zhou X, Hara T, Katsumata A. Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification. In: Medical Imaging 2017: Computer-Aided Diagnosis. International Society for Optics and Photonics; 2017.
- 40. Oktay AB. Tooth detection with convolutional neural networks. In: Medical Technologies National Congress (TIPTEKNO). IEEE; 2017. pp. 1–4.
- 41. Muramatsu C, Morishita T, Takahashi R, Hayashi T, Nishiyama W, Ariji Y, et al. Tooth detection and classification on panoramic radiographs for automatic dental chart filing: improved classification by multi-sized input data. Oral Radiol 2021; 37: 13–19. doi: 10.1007/s11282-019-00418-w
- 42. Kılıc MC, Bayrakdar IS, Çelik Ö, Bilgir E, Orhan K, Aydın OB, et al. Artificial intelligence system for automatic deciduous tooth detection and numbering in panoramic radiographs. Dentomaxillofac Radiol 2021; 50: 20200172. doi: 10.1259/dmfr.20200172