Abstract
Objectives
This study aimed to investigate the accuracy of deep learning algorithms in diagnosing dental caries and classifying the extension and location of carious lesions in cone beam computed tomography (CBCT) images. To the best of our knowledge, this is the first study to evaluate the application of deep learning to dental caries in CBCT images.
Methods
The CBCT image dataset comprised 382 carious and 403 noncarious molar teeth. The dataset was divided into a development set, used for training and validation, and a test set. Three images were obtained for each case: axial, sagittal, and coronal views. The test dataset was provided to a multiple-input convolutional neural network (CNN). The network made predictions regarding the presence or absence of dental caries and classified the lesions according to their depths and types. Accuracy, sensitivity, specificity, and F1 score were measured for dental caries detection and classification.
Results
The diagnostic accuracy, sensitivity, specificity, and F1 score for caries detection in carious molar teeth were 95.3%, 92.1%, 96.3%, and 93.2%, respectively; for noncarious molar teeth, they were 94.8%, 94.3%, 95.8%, and 94.6%. The CNN showed high sensitivity, specificity, and accuracy in classifying caries extensions and locations.
Conclusions
This research demonstrates that deep learning models can identify dental caries and classify lesion depth and type with high accuracy, sensitivity, and specificity. The successful application of deep learning in this field will assist dental practitioners and patients by improving diagnosis and treatment planning in dentistry.
Clinical significance
This study showed that deep learning can accurately detect and classify dental caries. Considering the shortage of dentists in certain areas, CNNs could broaden geographic coverage in detecting dental caries.
Key words: Dental caries, Cone beam computed tomography, Artificial intelligence, Deep learning
Introduction
Dental caries is the most common dental disease and, if left uncontrolled, can have serious consequences.1 Various methods are available for diagnosing dental caries. Sometimes the extension of a carious lesion is so small that it cannot be visualised or diagnosed without the assistance of radiographic images. Radiography is one of the best methods for diagnosing dental caries and damage to the tooth root.2 Cone beam computed tomography (CBCT) plays a significant role in diagnosing dental conditions such as dental caries.3 CBCT offers numerous advantages, including multiplanar reconstruction, high accuracy, and low exposure time.4,5 Nevertheless, detecting caries lesions on conventional radiographs remains an elusive process. The limitations inherent in traditional radiography stem mainly from its 2D representation of caries lesions, which are 3D structures in reality; this can lead to the loss of valuable information.
Moreover, the radiographic appearance of a lesion can change dramatically as a function of the chosen projection geometry, and replacing film with digital detectors does not address these fundamental limitations.6 Previous studies have shown that CBCT is superior to intraoral digital imaging systems.7, 8, 9
Convolutional neural networks (CNNs) are a type of deep learning model based on neural networks in artificial intelligence that are used for image classification and object detection in images.10, 11, 12, 13 CNNs can detect dental caries and analyse dental images to identify caries regions.14 They can identify dental caries signs in images, enabling early caries detection using trained neural networks.15, 16, 17, 18, 19
Deep learning has been widely used in CBCT images.20,21 This technique is used in a variety of fields, including segmentation of the upper airway,22, 23, 24 segmentation of the inferior alveolar nerve,25,26 bone-related disease,27,28 tooth segmentation and endodontics,29 temporomandibular joint and sinus disease,30,31 dental implants,32,33 and landmark localisation.34,35 Previous studies have evaluated the caries detection performance of deep learning methods. However, there are few studies on neural networks' performance on caries lesions with different radiographic depths and locations. This is important from a health economics perspective and for treatment decision-making, since dental caries treatments, such as remineralisation, cavity filling, root canal therapy, and tooth extraction, vary with lesion depth and location.14,36, 37, 38
To the best of our knowledge, no study has investigated caries lesion segmentation and classification in CBCT images. Therefore, this study aimed to investigate the accuracy of deep learning algorithms in diagnosing tooth decay and classifying it according to extension and location in CBCT images.
Material and methods
The workflow of this study, comprising training the deep learning model and then testing it, is shown in Figure 1.
Fig. 1.
Training and test steps.
Dataset collection
The institutional ethics committee approved the present research. An anonymised dataset of CBCT images collected between January 2020 and December 2023 was used in this study. The CBCT images had been taken for purposes such as implant placement or evaluation of impacted teeth or paranasal sinuses. The exclusion criterion was poor radiographic image quality, such as severe distortion, blur, or noise. CBCT images were acquired with a NewTom VGi scanner (VGi EVO, NewTom, Italy) using the following parameters: scanning time 8.9 s, 5 mA, 19 mAs, and 120 kV, and were reviewed with NNT viewer software. All images were de-identified before use.
Data preparation
In total, 819 CBCT images were labelled by 2 oral and maxillofacial radiologists with 5 years of experience to create the reference standard. The radiologists evaluated all samples and segmented the images by cropping the full images to a rectangular box fitting the teeth. For each tooth, 3 images were evaluated: axial, sagittal, and coronal views. These images were cropped to depict only a single tooth per image. The images were categorised as carious or noncarious cases. For a single image, a consensus of the expert radiologists was required to determine the final label. In cases of disagreement, the relevant images were excluded from the study. The final dataset consisted of 785 molar teeth images, of which 382 were carious and 403 were noncarious. The carious teeth were divided into 4 types based on the location of caries: occlusal caries detected on the CBCT images were labelled type I, proximal caries type II, caries in the cervical region type III, and teeth with more than 1 carious lesion type IV. As a result, there were 99 occlusal caries (type I), 107 proximal caries (type II), 57 cervical caries (type III), and 119 cases with more than 1 carious lesion (type IV). Caries extension was classified into the following 3 classes: "D1" caries had radiolucency in enamel or the outer third of dentin; "D2" caries had radiolucency in the middle third of dentin; and "D3" caries had radiolucency in the inner third of dentin with or without apparent pulp involvement. All annotated areas ultimately constituted the reference dataset, which consists of 110 D1 lesions, 140 D2 lesions, and 132 D3 lesions.
Subsequently, data augmentation was applied to increase the training data volume. All tooth images were flipped vertically and horizontally, the dataset was augmented through random rotations (20°), and the images were magnified up to a maximum of 2 times.39 In the present study, data augmentation increased the number of images about 10-fold. The dataset ultimately consisted of 7850 cases, including 3820 carious and 4030 noncarious cases, all converted to PNG format. The images were resized to 96 × 160 pixels. Seventy percent of the data were randomly selected as the training dataset, and the remaining 30% were allocated to the test dataset. The training set thus comprised 5495 samples: 2674 carious and 2821 noncarious images. The flowchart of the data extraction procedure is shown in Figure A1 in the Appendix. Finally, the data were provided as input to the deep convolutional neural network, and the network was trained.
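As an illustration, the flips, zoom, and resizing described above can be sketched with plain PyTorch tensor operations. This is a minimal sketch under stated assumptions: the exact transform implementation used in the study is not reported, the 96 × 160 size is taken as (height, width), and the random 20° rotation is only noted in a comment (it would typically use a library transform such as `torchvision.transforms.RandomRotation`).

```python
import torch
import torch.nn.functional as F

def augment(img: torch.Tensor) -> list:
    """Return flipped and zoomed variants of a (C, H, W) tooth image.

    The paper also reports random rotations up to 20 degrees, which a
    library transform would normally provide; omitted here to keep the
    sketch dependency-free.
    """
    variants = [
        torch.flip(img, dims=[-1]),   # horizontal flip
        torch.flip(img, dims=[-2]),   # vertical flip
    ]
    # 2x centre zoom: upscale, then crop back to the original size
    _, h, w = img.shape
    zoomed = F.interpolate(img.unsqueeze(0), scale_factor=2.0,
                           mode="bilinear", align_corners=False)[0]
    top, left = (zoomed.shape[-2] - h) // 2, (zoomed.shape[-1] - w) // 2
    variants.append(zoomed[:, top:top + h, left:left + w])
    return variants

def preprocess(img: torch.Tensor) -> torch.Tensor:
    """Resize a (C, H, W) image to the 96 x 160 input size used in the study."""
    return F.interpolate(img.unsqueeze(0), size=(96, 160),
                         mode="bilinear", align_corners=False)[0]
```

Applying `augment` plus the identity image already quadruples the data; combined with rotations and zoom factors, an approximately 10-fold increase as reported is plausible.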
The multiple-input deep convolutional neural network architecture
Figure 2 shows the proposed multiple-input deep convolutional neural network architecture. The network has 3 inputs, one for each of the axial, sagittal, and coronal images. Each input is processed through several convolution and pooling layers; the 3 resulting feature maps are then concatenated and processed through further convolution steps before being flattened into a vector. Finally, this vector is fed into a fully connected deep network that performs the classification.
Fig. 2.
The multiple-input deep convolutional neural network architecture.
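A PyTorch sketch of such a three-branch network follows. The channel counts, number of stages, and head sizes are illustrative assumptions, not the authors' exact configuration; only the overall pattern (per-view convolution and pooling, concatenation, further convolution, flattening, fully connected head emitting logits) follows the description above.

```python
import torch
import torch.nn as nn

def branch() -> nn.Sequential:
    # Per-view feature extractor: two conv + pool stages (sizes assumed)
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    )

class MultiInputCNN(nn.Module):
    """Three-branch CNN: one branch per CBCT view, fused by concatenation."""
    def __init__(self, n_classes: int = 1):
        super().__init__()
        self.axial, self.sagittal, self.coronal = branch(), branch(), branch()
        self.fuse = nn.Sequential(
            nn.Conv2d(48, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # After three 2x poolings, a 96 x 160 input becomes 12 x 20
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 12 * 20, 64), nn.ReLU(),
            nn.Linear(64, n_classes),   # raw logits, e.g. for BCEWithLogitsLoss
        )

    def forward(self, ax, sg, co):
        feats = torch.cat([self.axial(ax), self.sagittal(sg), self.coronal(co)], dim=1)
        return self.head(self.fuse(feats))
```

Concatenating along the channel dimension lets the fully connected head weigh evidence from all 3 views jointly, rather than averaging per-view predictions.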
Based on the presented architecture, classification was done in 3 steps. First, the images were presented as input to the network and categorised as carious or noncarious. Second, the cases identified as carious were presented to the architecture in 2 separate steps and were classified once by type (location) and once by extension (depth).
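This staged procedure can be sketched as a small inference pipeline. The model handles (`detector`, `type_model`, `ext_model`) and the 0.5 decision threshold are hypothetical names introduced for illustration; each stands for a separately trained instance of the architecture above.

```python
import torch

@torch.no_grad()
def classify_tooth(views, detector, type_model, ext_model, threshold=0.5):
    """Staged inference sketch: (1) binary caries detection; (2) if carious,
    predict type (I-IV) and extension (D1-D3) with two further networks.
    All three models take the axial, sagittal, and coronal views as input."""
    ax, sg, co = views
    if torch.sigmoid(detector(ax, sg, co)).item() < threshold:
        return {"caries": False}
    type_idx = type_model(ax, sg, co).argmax(dim=1).item()
    ext_idx = ext_model(ax, sg, co).argmax(dim=1).item()
    return {"caries": True,
            "type": ["I", "II", "III", "IV"][type_idx],
            "extension": ["D1", "D2", "D3"][ext_idx]}
```

Running the type and extension classifiers only on detected carious teeth mirrors the two separate classification steps described above.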
Configuration and evaluation metrics
The model was implemented using the PyTorch library in Python, and the implementation was carried out in the Google Colab environment.36 The training dataset was randomly divided into batches of 32 each epoch, and the model was trained for 100 epochs with a learning rate of 0.001. Parameters were fine-tuned using the BCEWithLogitsLoss cost function, the Adam optimisation algorithm, and the CosineAnnealingLR learning rate scheduler to improve the detection of dental caries. The model weights from the epoch with the highest validation accuracy were retained when validation accuracy stopped improving.
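A minimal training-loop sketch mirroring this reported configuration (batch size 32, 100 epochs, learning rate 0.001, BCEWithLogitsLoss, Adam, CosineAnnealingLR) might look as follows. The tiny linear model and synthetic data are placeholders standing in for the CNN and the CBCT image batches.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 1)                      # placeholder for the multiple-input CNN
criterion = nn.BCEWithLogitsLoss()            # binary loss on raw logits
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

X = torch.randn(64, 16)                       # synthetic features
y = (X.sum(dim=1, keepdim=True) > 0).float()  # synthetic binary labels

with torch.no_grad():
    init_loss = criterion(model(X), y).item()

for epoch in range(100):
    for xb, yb in zip(X.split(32), y.split(32)):  # mini-batches of 32
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()                              # cosine-annealed learning rate

with torch.no_grad():
    final_loss = criterion(model(X), y).item()
```

In practice a validation pass would run after each epoch, and the weights from the best-validation epoch would be checkpointed, as described above.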
After training, the test dataset was provided to the neural network, which made predictions regarding the presence or absence of dental caries for the provided samples. These predictions were compared and evaluated against the labels of the test dataset. Accuracy, sensitivity, specificity, and F1 score were examined during the evaluation.
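The four reported metrics can all be derived from the binary confusion matrix, treating "caries" as the positive class. This is a self-contained sketch of those standard definitions, not the authors' evaluation code:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and F1 from binary labels
    (1 = caries, 0 = no caries), computed via the confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn) if tp + fn else 0.0   # recall on carious teeth
    spec = tn / (tn + fp) if tn + fp else 0.0   # recall on sound teeth
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    acc = (tp + tn) / len(y_true)
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec, "f1": f1}
```

For the multiclass type and extension results, the same formulas apply one-vs-rest per class.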
Results
Interexaminer reliability was assessed using κ statistics. The images were evaluated a second time to check the consistency between the first and second sets of records. Acceptable interexaminer agreement (κ values >0.75) was detected.
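For the two-category (carious vs noncarious) case, Cohen's κ compares the observed agreement between two raters with the agreement expected by chance. A minimal pure-Python sketch of this computation:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' binary labels (0/1)."""
    n = len(r1)
    observed = sum(1 for a, b in zip(r1, r2) if a == b) / n
    # Chance agreement, assuming the two raters label independently
    p1_yes = sum(r1) / n
    p2_yes = sum(r2) / n
    expected = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (observed - expected) / (1 - expected) if expected != 1 else 1.0
```

κ = 1 indicates perfect agreement and κ = 0 agreement no better than chance; the >0.75 threshold used above is a common cut-off for "excellent" agreement.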
The accuracy, sensitivity, specificity, and F1 score on the test dataset were 94.8%, 94.3%, 95.8%, and 94.6%, respectively, for noncarious cases and 95.3%, 92.1%, 96.3%, and 93.2% for carious cases. The results of classification by type and extension are presented in Tables 1 and 2. The method yielded accuracy, sensitivity, specificity, and F1 scores above 80% for all types of dental caries; type III showed lower results than the other types.
Table 1.
Result of dental caries classification for different types.
| Type | Accuracy | Sensitivity | Specificity | F1-Score |
|---|---|---|---|---|
| I (Occlusal) | 93.3% | 92.3% | 94.6% | 92.8% |
| II (Proximal) | 93.7% | 95.2% | 94.2% | 94.4% |
| III (Cervical) | 91.6% | 88.7% | 91.3% | 89.4% |
| IV (More than 1 caries) | 97.2% | 95.1% | 96.4% | 94.0% |
Table 2.
Result of dental caries classification for different extensions.
| Extension | Accuracy | Sensitivity | Specificity | F1-Score |
|---|---|---|---|---|
| D1 | 89.7% | 87.7% | 90.6% | 89.0% |
| D2 | 93.3% | 95.0% | 94.2% | 93.6% |
| D3 | 96.2% | 96.5% | 97.5% | 97.3% |
The results given in Figure 3 show that the proposed model produced effective results for type I, type II, and type IV caries detection. However, performance for type III caries remained weaker, possibly due to the insufficient number of samples of this type. The model showed better results in detecting D3 lesions. The accuracy of the predicted labels for detection and for classification of types and extensions on the test dataset is visualised as confusion matrices in Figure 3.
Fig. 3.
Confusion matrices.
Discussion
This research investigated the accuracy of deep learning algorithms for detecting and classifying the extension and location of dental caries in CBCT images. The results demonstrated that deep learning is a precise and highly efficient tool for detecting dental caries. Similarly, most previous studies reported relatively high precision, sensitivity, and specificity (usually more than 80%) for deep learning caries detection. In this study, the annotations of oral and maxillofacial radiologists were considered the reference standard.
Various studies have compared the accuracy, sensitivity, and specificity of CNNs and deep learning for detecting dental caries in dental radiographs (Table A1, Appendix). Previous studies have reported CNN accuracies for caries detection ranging from 82% to 97%. The differences in results can be attributed to various factors, such as the use of different datasets, which can vary in size, diversity, data quality, and distribution. The selection of layers, types of layers, number of neurons, learning rate, and other model parameters can also significantly impact the results, as can the machine learning techniques, optimisation algorithms, and preprocessing procedures employed. Our study achieved an accuracy of more than 94.6%; the accuracy of our model was higher than that of models in previous studies, yielding a score of 0.986. The study by Geetha et al40 shows the highest accuracy of all studies. However, that study is a special case because it uses a very shallow neural network with only one hidden layer as its feature extractor, which is now rare. Its authors used only 105 images with 10-fold cross-validation, so their model was not evaluated on a held-out test set.
The sensitivity and specificity of previous studies ranged from 63.29% to 92.3% and from 60.71% to 100%, respectively. Our study's sensitivity was 94.5% and specificity 91.8%, which compare favourably with previous studies (Table A1, Appendix). The higher results of this study can be examined from 2 perspectives. First, this study used 3D CBCT images of higher quality, whereas previous studies used lower-quality panoramic and periapical images. In addition, the multiple-input architecture and data augmentation contributed to the high accuracy. Differences in measurement could also be due to the techniques used.41
In the present study, the accuracy of classifying the location of dental caries was higher than in the study of Dayı et al,42 especially for cervical caries. A possible reason is the use of panoramic radiographs in that study. Panoramic radiographs are 2D images, and cervical caries is usually located on the buccal and lingual surfaces of the teeth. Hence, it overlaps with the denser healthy tooth tissue and, at the same time, can be challenging to detect because it overlaps with the radiolucent shadow of the pulp chamber on the radiograph.
The results of this study indicated that deep learning can automatically learn the image features that distinguish caries depths and achieve effective interpretations, especially for D3 lesions, similar to the study of Lian et al.43 A possible reason is that D3 lesions present a more extensive radiolucent area and are easier to detect with the naked eye, whereas lesions in the D1 and D2 stages are more likely to be missed or to have boundaries that are difficult to determine.
Prados-Privado et al44 presented a systematic review of caries diagnosis and detection using neural networks. They concluded that the best results were obtained in a study using only 1 examiner, so that the same criteria were applied consistently in caries detection, followed by a study in which 4 experts analysed the images; the worst results in terms of accuracy were obtained by a study with 2 examiners. Therefore, only a single examiner was used in this study.
CBCT has many advantages that aid dental caries detection. Its high accuracy allows the detection of small cavities. In theory, CBCT imaging suffers fewer geometric distortion problems than conventional methods; it thus provides true multiplanar reformation of caries lesions, which was previously impossible. Many incipient carious lesions remain undetected on intraoral radiographs due to inappropriate image angulation or overlap of contact areas.45 CBCT systems have proved superior to conventional methods in detecting occlusal caries, and the superior detection of recurrent or residual caries with CBCT may be due to the multiplanar images it produces (axial, coronal, and sagittal planes).46,47
Recent studies have shown that CNNs can accurately detect dental caries. CNNs can provide dental caries detection at much lower costs than dental examinations. Considering the shortage of dentists in certain areas, using CNNs can lead to broader geographic coverage in detecting dental caries. However, to ensure the accuracy of dental caries diagnosis, it may still be necessary to have a dental professional review the results. Furthermore, using CNNs for dental caries detection requires accurate translation of radiographic data and training the networks with sufficient and diverse datasets.
This method still requires further refinement and improvement. The approach demands more complex equipment and powerful processing capabilities, which may not be readily available in some dental centres. In addition, the images used during training must be labelled by experts; when artificial intelligence is trained on the scores of human observers, the system cannot exceed its trainers, so performance depends on the quality of the input. Expert judgments are usually used as the standard guideline, but manual marking by experts, while providing the reference needed to train and assess models, does not necessarily represent the truth. A histologic gold standard is essential for validating a method of caries diagnosis, and we applied no such gold standard in this study, such as microcomputed tomography or histology of extracted teeth. Comparisons involving dentists with different levels of experience and professional backgrounds may provide more valuable information. Moreover, labelling in the constructed reference set was not sufficiently precise, as it was not triangulated with a gold standard (histology). Future studies will likely pave the way for applications of such AI systems in dental caries detection and, more broadly, in the prevention and control of dental caries in communities.
Conclusion
This study demonstrated that deep learning models can identify dental caries with high accuracy, sensitivity, and specificity, highlighting the potential of deep learning for detecting dental caries in CBCT images. The successful application of deep learning models in this field will assist dental practitioners and patients by improving diagnosis and treatment planning in dentistry.
Conflict of interest
None disclosed.
Footnotes
Supplementary material associated with this article can be found in the online version at doi:10.1016/j.identj.2023.10.003.
Appendix. Supplementary materials
REFERENCES
- 1.Selwitz RH, Ismail AI, Pitts NB. Dental caries. Lancet. 2007;369(9555):51–59. doi: 10.1016/S0140-6736(07)60031-2. [DOI] [PubMed] [Google Scholar]
- 2.Van Gorp G, Maes A, Lambrechts M, Jacobs R, Declerck D. Is use of CBCT without proper training justified in paediatric dental traumatology? An exploratory study. BMC Oral Health. 2023;23(1):270. doi: 10.1186/s12903-023-03013-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Du M, Wu X, Ye Y, Fang S, Zhang H, Chen M. A combined approach for accurate and accelerated teeth detection on cone beam CT images. Diagnostics (Basel) 2022;12(7) doi: 10.3390/diagnostics12071679. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Paknahad M, Dokohaki S, Khojastepour L, Shahidi S, Haghnegahdar A. A radio-odontometric analysis of sexual dimorphism in first molars using cone-beam computed tomography. Am J Forensic Med Pathol. 2022;43(1):46–51. doi: 10.1097/paf.0000000000000735. [DOI] [PubMed] [Google Scholar]
- 5.Paknahad M, Pourzal A, Mahjoori-Ghasrodashti M, Khojastepour L. Evaluation of maxillary sinus characteristics in patients with cleft lip and palate using cone beam computed tomography. Cleft Palate Craniofac J. 2022;59(5):589–594. doi: 10.1177/10556656211023239. [DOI] [PubMed] [Google Scholar]
- 6.Park YS, Ahn JS, Kwon HB, Lee SP. Current status of dental caries diagnosis using cone beam computed tomography. Imaging Sci Dent. 2011;41(2):43–51. doi: 10.5624/isd.2011.41.2.43. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Young S, Lee J, Hodges R, Chang T-L, Elashoff D, White S. A comparative study of high-resolution cone beam computed tomography and charge-coupled device sensors for detecting caries. Dentomaxillofac Radiol. 2009;38(7):445–451. doi: 10.1259/dmfr/88765582. [DOI] [PubMed] [Google Scholar]
- 8.Akdeniz BG, Gröndahl H-G, Magnusson B. Accuracy of proximal caries depth measurements: comparison between limited cone beam computed tomography, storage phosphor and film radiography. Caries Res. 2006;40(3):202–207. doi: 10.1159/000092226. [DOI] [PubMed] [Google Scholar]
- 9.Haiter-Neto F, Wenzel A, Gotfredsen E. Diagnostic accuracy of cone beam computed tomography scans compared with intraoral image modalities for detection of caries lesions. Dentomaxillofac Radiol. 2008;37(1):18–22. doi: 10.1259/dmfr/87103878. [DOI] [PubMed] [Google Scholar]
- 10.Hung KF, Ai QYH, Wong LM, Yeung AWK, Li DTS, Leung YY. Current applications of deep learning and radiomics on CT and CBCT for maxillofacial diseases. Diagnostics. 2023;13(1):110. doi: 10.3390/diagnostics13010110. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Tsoromokos N, Parinussa S, Claessen F, Moin DA, Loos BG. Estimation of alveolar bone loss in periodontitis using machine learning. Int Dent J. 2022;72(5):621–627. doi: 10.1016/j.identj.2022.02.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Al-Rawi N, Sultan A, Rajai B, Shuaeeb H, Alnajjar M, Alketbi M, et al. The effectiveness of artificial intelligence in detection of oral cancer. Int Dent J. 2022;72(4):436–447. doi: 10.1016/j.identj.2022.03.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Lam DW, Chau DR. Biomimetic dental prostheses designed by artificial intelligence versus CAD software. Int Dent J. 2023;73::S32–S33. doi: 10.1016/j.identj.2023.07.298. [DOI] [Google Scholar]
- 14.Lee JH, Kim DH, Jeong SN, Choi SH. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J Dent. 2018;77:106–111. doi: 10.1016/j.jdent.2018.07.015. [DOI] [PubMed] [Google Scholar]
- 15.Gao H, Xiao J, Yin Y, Liu T, Shi J. A mutually supervised graph attention network for few-shot segmentation: the perspective of fully utilizing limited samples. IEEE Trans Neural Netw Learn Syst. 2022:1–13. doi: 10.1109/TNNLS.2022.3155486. [DOI] [PubMed] [Google Scholar]
- 16.Cao Z, Xu L, Chen DZ, Gao H, Wu J. A robust shape-aware rib fracture detection and segmentation framework with contrastive learning. IEEE Trans Multimedia. 2023;25:1584–1591. doi: 10.1109/TMM.2023.3263074. [DOI] [Google Scholar]
- 17.Chen T, Zheng W, Hu H, Luo C, Chen J, Yuan C, et al. A corresponding region fusion framework for multi-modal cervical lesion detection. IEEE/ACM Trans Comput Biol Bioinform. 2022 doi: 10.1109/tcbb.2022.3178725. [DOI] [PubMed] [Google Scholar]
- 18.Chen M, Luo X, Shen H, Huang Z, Peng Q, Yuan Y. A Chinese nested named entity recognition approach using sequence labeling. Int J Web Inf Syst. 2023;19(1):42–60. doi: 10.1108/IJWIS-04-2023-0070. [DOI] [Google Scholar]
- 19.Payghode V, Goyal A, Bhan A, Iyer SS, Dubey AK. Object detection and activity recognition in video surveillance using neural networks. Int J Web Inf Syst. 2023 doi: 10.1108/IJWIS-01-2023-0006. Epub ahead of print. [DOI] [Google Scholar]
- 20.Fan W, Zhang J, Wang N, Li J, Hu L. The application of deep learning on CBCT in dentistry. Diagnostics. 2023;13(12):2056. doi: 10.3390/diagnostics13122056. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Obrubov A, Solovyh E, Arranz I, Pérez F, Tejedor M, Saakyan E. Artificial intelligence DENTOMO: opportunities and prospects for analysis of CBCT in dentistry. Int Dent J. 2021;71:S35–S36. doi: 10.1016/j.identj.2021.08.009. [DOI] [PubMed] [Google Scholar]
- 22.Shujaat S, Jazil O, Willems H, Van Gerven A, Shaheen E, Politis C. Automatic segmentation of the pharyngeal airway space with convolutional neural network. J Dent. 2021;111:103705. doi: 10.1016/j.jdent.2021.103705. [DOI] [PubMed] [Google Scholar]
- 23.Ryu S, Kim JH, Yu H, Jung H-D, Chang SW, Park JJ, et al. Diagnosis of obstructive sleep apnea with prediction of flow characteristics according to airway morphology automatically extracted from medical images: computational fluid dynamics and artificial intelligence approach. Comput Methods Programs Biomed. 2021;208:106243. doi: 10.1016/j.cmpb.2021.106243. [DOI] [PubMed] [Google Scholar]
- 24.Wu W, Yu Y, Wang Q, Liu D, Yuan X. Upper airway segmentation based on the attention mechanism of weak feature regions. IEEE Access. 2021;9:95372–95381. doi: 10.1109/ACCESS.2021.3094032. [DOI] [Google Scholar]
- 25.Cipriano M, Allegretti S, Bolelli F, Bartolomeo MD, Pollastri F, Pellacani A, et al. Deep segmentation of the mandibular canal: a new 3D annotated dataset of CBCT volumes. IEEE Access. 2022;10:11500–11510. doi: 10.1109/ACCESS.2022.3144840. [DOI] [Google Scholar]
- 26.Jaskari J, Sahlsten J, Järnstedt J, Mehtonen H, Karhu K, Sundqvist O, et al. Deep learning method for mandibular canal segmentation in dental cone beam computed tomography volumes. Sci Rep. 2020;10(1):5842. doi: 10.1038/s41598-020-62321-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Son DM, Yoon YA, Kwon HJ, An CH, Lee SH. Automatic detection of mandibular fractures in panoramic radiographs using deep learning. Diagnostics (Basel) 2021;11(6) doi: 10.3390/diagnostics11060933. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Yilmaz E, Kayikcioglu T, Kayipmaz S. Computer-aided diagnosis of periapical cyst and keratocystic odontogenic tumor on cone beam computed tomography. Comput Methods Programs Biomed. 2017;146:91–100. doi: 10.1016/j.cmpb.2017.05.012. [DOI] [PubMed] [Google Scholar]
- 29.Jang TJ, Kim KC, Cho HC, Seo JK. A fully automated method for 3D individual tooth identification and segmentation in dental CBCT. IEEE Trans Pattern Anal Mach Intell. 2022;44(10):6562–6568. doi: 10.1109/tpami.2021.3086072. [DOI] [PubMed] [Google Scholar]
- 30.Le C, Deleat-Besson R, Prieto J, Brosset S, Dumont M, Zhang W, et al. Automatic segmentation of mandibular ramus and condyles. Annu Int Conf IEEE Eng Med Biol Soc. 2021:2952–2955. doi: 10.1109/embc46164.2021.9630727. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.de Dumast P, Mirabel C, Cevidanes L, Ruellas A, Yatabe M, Ioshida M, et al. A web-based system for neural network based classification in temporomandibular joint osteoarthritis. Comput Med Imaging Graph. 2018;67:45–54. doi: 10.1016/j.compmedimag.2018.04.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Xiao Y, Liang Q, Zhou L, He X, Lv L, Chen J, et al. Construction of a new automatic grading system for jaw bone mineral density level based on deep learning using cone beam computed tomography. Sci Rep. 2022;12(1):12841. doi: 10.1038/s41598-022-16074-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Sorkhabi MM, Saadat Khajeh M. Classification of alveolar bone density using 3-D deep convolutional neural network in the cone-beam CT images: a 6-month clinical study. Measurement. 2019;148 doi: 10.1016/j.measurement.2019.106945. [DOI] [Google Scholar]
- 34.Torosdagli N, Liberton DK, Verma P, Sincan M, Lee JS, Bagci U. Deep geodesic learning for segmentation and anatomical landmarking. IEEE Trans Med Imaging. 2019;38(4):919–931. doi: 10.1109/tmi.2018.2875814. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Lian C, Wang F, Deng HH, Wang L, Xiao D, Kuang T, et al. Multi-task dynamic transformer network for concurrent bone segmentation and large-scale landmark localization with dental CBCT. In: Cham AL, Martel, et al. editors, Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Springer International Publishing; 2020. p. 807–816. [DOI] [PMC free article] [PubMed]
- 36.Mohammad-Rahimi H, Motamedian SR, Rohban MH, Krois J, Uribe SE, Mahmoudinia E, et al. Deep learning for caries detection: a systematic review. J Dent. 2022;122:104115. doi: 10.1016/j.jdent.2022.104115. [DOI] [PubMed] [Google Scholar]
- 37.Vinayahalingam S, Kempers S, Limon L, Deibel D, Maal T, Hanisch M, et al. Classification of caries in third molars on panoramic radiographs using deep learning. Sci Rep. 2021;11(1):12609. doi: 10.1038/s41598-021-92121-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Cantu AG, Gehrung S, Krois J, Chaurasia A, Rossi JG, Gaudin R, et al. Detecting caries lesions of different radiographic extension on bitewings using deep learning. J Dent. 2020;100:103425. doi: 10.1016/j.jdent.2020.103425. [DOI] [PubMed] [Google Scholar]
- 39.Islam MM, Hossain MB, Akhtar MN, Moni MA, Hasan KF. CNN based on transfer learning models using data augmentation and transformation for detection of concrete crack. Algorithms. 2022;15(8):287. [Google Scholar]
- 40.Geetha V, Aprameya KS, Hinduja DM. Dental caries diagnosis in digital radiographs using back-propagation neural network. Health Inf Sci Syst. 2020;8(1):8. doi: 10.1007/s13755-019-0096-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, et al. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digital Medicine. 2021;4(1):65. doi: 10.1038/s41746-021-00438-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Dayı B, Üzen H, Çiçek IB, Duman ŞB. A novel deep learning-based approach for segmentation of different type caries lesions on panoramic radiographs. Diagnostics. 2023;13(2):202. doi: 10.3390/diagnostics13020202. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Lian L, Zhu T, Zhu F, Zhu H. Deep learning for caries detection and classification. Diagnostics (Basel) 2021;11(9) doi: 10.3390/diagnostics11091672. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Prados-Privado M, García Villalón J, Martínez-Martínez CH, Ivorra C, Prados-Frutos JC. Dental caries diagnosis and detection using neural networks: a systematic review. J Clin Med. 2020;9(11):3579. doi: 10.3390/jcm9113579. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Mosavat F, Ahmadi E, Amirfarhangi S, Rafeie N. Evaluation of diagnostic accuracy of CBCT and intraoral radiography for proximal caries detection in the presence of different dental restoration materials. BMC Oral Health. 2023;23(1):419. doi: 10.1186/s12903-023-02954-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.Salem W. Role of CBCT in dental caries detection; a systematic review. Oral Research & Reviews. 2017;1. [Google Scholar]
- 47.Nandita R, Arthanari A. OPG and CBCT-a more reliable source for human identification? Perception based web survey among dentist. J Adv Med Dent Sci Res. 2022;10(1):120–128. [Google Scholar]