Imaging Science in Dentistry. 2023 Sep 4;53(4):271–281. doi: 10.5624/isd.20230058

Convolutional neural networks for automated tooth numbering on panoramic radiographs: A scoping review

Ramadhan Hardani Putra 1, Eha Renwi Astuti 1, Aga Satria Nurrachman 1, Dina Karimah Putri 1,2, Ahmad Badruddin Ghazali 3, Tjio Andrinanti Pradini 4, Dhinda Tiara Prabaningtyas 4
PMCID: PMC10761295  PMID: 38174035

Abstract

Purpose

The objective of this scoping review was to investigate the applicability and performance of various convolutional neural network (CNN) models in tooth numbering on panoramic radiographs, achieved through classification, detection, and segmentation tasks.

Materials and Methods

An online search was performed of the PubMed, Science Direct, and Scopus databases. Based on the selection process, 12 studies were included in this review.

Results

Eleven studies utilized a CNN model for detection tasks, 5 for classification tasks, and 3 for segmentation tasks in the context of tooth numbering on panoramic radiographs. Most of these studies revealed high performance of various CNN models in automating tooth numbering. However, several studies also highlighted limitations of CNNs, such as the presence of false positives and false negatives in identifying decayed teeth, teeth with crown prosthetics, teeth adjacent to edentulous areas, dental implants, root remnants, wisdom teeth, and root canal-treated teeth. These limitations can be overcome by ensuring both the quality and quantity of datasets, as well as optimizing the CNN architecture.

Conclusion

CNNs have demonstrated high performance in automated tooth numbering on panoramic radiographs. Future development of CNN-based models for this purpose should also consider different stages of dentition, such as the primary and mixed dentition stages, as well as the presence of various tooth conditions. Ultimately, an optimized CNN architecture can serve as the foundation for an automated tooth numbering system and for further artificial intelligence research on panoramic radiographs for a variety of purposes.

Keywords: Artificial Intelligence; Technology Transfer; Deep Learning; Dentition; Radiography, Panoramic

Introduction

In the field of dentistry, panoramic radiography is a standard examination procedure used in clinical practice to capture an image of the dental and maxillofacial region. This technique is instrumental in making diagnoses, planning treatments, and evaluating treatment outcomes. Panoramic radiography allows the assessment of various factors, including the condition of the teeth, the presence of lesions and trauma, the structure of the jawbone, the status of edentulous patients, and the location of the third molar.1,2 One advantage of panoramic techniques is that they provide an image of the maxillary and mandibular regions with a relatively low radiation dose. Specifically, the radiation dose a patient receives from 1 panoramic radiograph is equivalent to the dose from 4 intraoral radiographs.3

Prior to creating a radiodiagnostic report based on panoramic radiography, tooth numbering serves as a guide for dental and periodontal charting, based on tooth anatomy and location. A numbering system is employed in dental radiological reports, particularly those involving panoramic radiographs.4 Various tooth numbering methods are routinely implemented in dental practice, with popular systems including the Fédération Dentaire Internationale (FDI) system, the Universal Numbering System, and the Zsigmondy-Palmer system.5 The FDI system is the most commonly utilized and is widely used in diagnostic reports. Tooth numbering is also essential for interpreting radiographs, recording patient dental medical history, and performing forensic tasks.6 As an important diagnostic imaging tool, panoramic radiographs are taken in large numbers every day. However, the interpretation of these radiographs can be subjective, relying heavily on the dentist’s experience and knowledge. This can potentially lead to misinterpretation, especially in the context of a heavy daily workload.7,8 The application of an artificial intelligence (AI)-based radiographic diagnostic tool is anticipated to address these issues by reducing errors, shortening the overall treatment duration, and enhancing the quality of patient dental care.9

Over the past decade, convolutional neural networks (CNNs) have garnered considerable attention due to their high performance in image recognition and computer vision.10 In 2012, a CNN outperformed traditional machine learning techniques in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).11 Essentially, a CNN employs a deep learning approach to independently determine the most effective features for image representation. A CNN comprises several layers, including convolution, pooling, nonlinearity, and fully connected layers. These allow the CNN to perform feature extraction and selection for classification within these hidden layers, thereby automatically discerning the relevant features of an image.12 Due to these advantages, numerous studies have sought to apply CNNs in the realm of digital radiography.13 To recognize an object on a digital radiograph, a CNN performs 3 primary tasks: classification, detection, and segmentation. Classification involves categorizing or labeling an image into a specific class. Detection facilitates the identification of the location of lesions, organs, or other objects of interest. Finally, segmentation is employed to delineate the precise pixel-wise boundaries of an organ or pathological feature.12,14 In dentomaxillofacial radiology, CNNs have been utilized for a variety of purposes, including the identification and recognition of dental caries, periodontal bone loss, intraosseous lesions, dental implants, and osteoporosis.14,15,16

One potential application of this technology is in the numbering of teeth on panoramic radiographs. The task of tooth numbering presents a challenge, as the CNN must learn to identify and classify multiple objects, in this case teeth, within a single radiographic image. CNN-based architectures have demonstrated the capacity to recognize and classify types of teeth based on dental anatomy and location.17,18 The use of an automatic CNN approach for tooth numbering can reduce the time spent by dentists on identifying teeth shown on panoramic radiographs. The automation of tooth numbering is crucial, as the system must be capable of identifying the number of the tooth for subsequent analysis during the initial stage of interpretation. For instance, this functionality is important in identifying and diagnosing radiopathology in a specific tooth. The findings from these workflows are then consolidated in the diagnostic report or dental filling chart of the panoramic radiographs.

This scoping review was designed to gather and examine published studies regarding the performance of various CNN architectures in the context of tooth numbering systems for panoramic radiographs. The review incorporated studies employing a structured exploration to address a central question: how can CNNs be utilized to automate tooth numbering on panoramic radiographs? The applicability, performance, and limitations of the various CNN architectures reported in existing studies were analyzed and discussed. Ultimately, the findings of this review may serve as a basis for future advancements in AI research, with the aim of enhancing the overall oral health care system.

Materials and Methods

Search strategy

This study was designed as a scoping review, which is used to map the literature on a specific topic or research area. This approach provides an opportunity to identify key concepts, detect gaps in the research, and pinpoint sources of evidence.19,20 A systematic literature search was performed for studies published between January 2012 and September 2022, with the process conducted by 2 reviewers. The publications were limited to those published in 2012 or later because CNNs have demonstrated significant performance improvements since that time.11 A systematic search was conducted of 3 databases: PubMed, Science Direct, and Scopus. The article search strategy employed Boolean operators, including AND, OR, and NOT. As each database has a unique procedure for article searches, the search terms were adjusted in accordance with the guidelines of each respective database (Table 1).

Table 1. Article search.


Study selection

Articles from 3 online databases were screened based on the pertinence of their titles and abstracts to the topic of the review. Two reviewers independently screened these titles and abstracts for potential eligibility, then conducted a full-text review in accordance with the selection criteria. All studies demonstrating the use of CNNs for tooth numbering on panoramic radiographs, including those utilizing classification, detection, and segmentation, were included. The inclusion criteria for this study were articles or journal publications that used a CNN approach for tooth numbering, were written in English, and provided full-text access. The exclusion criteria were articles that employed machine learning methods other than CNNs, as well as those performing tooth numbering on periapical, bitewing, or 3-dimensional imaging. This review focused on the applicability of CNNs for tooth numbering, given the increased popularity of CNNs over the past decade due to their superior performance in computer vision tasks.

Data extraction

One reviewer performed the data extraction, which was subsequently discussed in depth with a second reviewer. The data extracted in this study included the author, year of publication, CNN architecture, application of the transfer learning method, tooth numbering method (classification, detection, or segmentation), number of datasets, sources of errors in tooth numbering, performance, and primary findings. The tooth numbering methods examined in this study included classification, detection, and segmentation. The performance metrics extracted for this review were accuracy, precision, sensitivity, specificity, and F1 score.
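The extracted performance metrics follow their standard confusion-matrix definitions; the helper functions below are an illustrative sketch of how each is computed (the example counts are hypothetical, not taken from any included study).

```python
# Standard definitions of the performance metrics extracted in this review,
# computed from confusion-matrix counts (tp/fp/fn/tn).

def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

def precision(tp, fp):
    return tp / (tp + fp)

def sensitivity(tp, fn):  # also called recall
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and sensitivity."""
    p, r = precision(tp, fp), sensitivity(tp, fn)
    return 2 * p * r / (p + r)

# Hypothetical example: 90 teeth correctly numbered, 5 false positives,
# 10 missed teeth, 95 true negatives
print(round(f1_score(tp=90, fp=5, fn=10), 4))  # 0.9231
```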

Results

A total of 540 studies were retrieved from 3 online databases for screening. Following an initial review based on the titles and abstracts, 28 studies were identified as potentially suitable for further consideration. Upon full-text evaluation, 12 studies fulfilled the inclusion criteria and were incorporated into this review (Fig. 1). The results derived from the data extraction of all included studies are displayed in Table 2.

Fig. 1. Flowchart illustrating the article search process. CNN: convolutional neural network.


Table 2. Results of data extraction from the included studies.


CNN: convolutional neural network, mAP: mean average precision.

Datasets

The sizes of the datasets ranged from 160 to 8,000 panoramic images. Ten studies utilized a distinct test dataset. In 2 studies,26,27 the total number of images was the only information provided, with no details regarding the partitioning of the dataset. In 6 of the 10 studies with a distinct test set,24,25,28,30,31,32 data were divided into 3 datasets (training, validation, and testing), while 4 studies21,22,23,29 reported division into 2 datasets (training and testing).

CNN architecture

The CNN architecture employed varied considerably among the included studies. Eight studies21,22,23,24,25,27,28,29,30 reported using a combination of 2 to 4 different CNN architectures, while 4 studies26,29,31,32 utilized individually constructed CNN architectures for the enumeration of teeth on panoramic radiographs. The most frequently used architecture was Faster R-CNN, as reported in 7 studies, followed by ResNet in 4 studies. Inception and Mask R-CNN were each used in 3 studies, VGG in 2, and AlexNet, Xception, GoogLeNet, DENTECT, YOLO-v5, U-Net, and EfficientDet in 1 study each.

Regarding changes over the years studied, Faster R-CNN was initially introduced in 2019 for the purpose of tooth detection, and this was followed by the use of VGG-16 for numbering teeth on panoramic radiographs.21 In the following year, modifications were made to Faster R-CNN using other CNN architectures such as ResNet23 and Inception-v224,25 to evaluate the performance of tooth numbering systems. Starting in 2021, a number of studies began developing tooth segmentation-based architectures using Mask R-CNN, which was also modified with other CNN architectures like ResNet27,28 and Faster R-CNN.30 In the same year, Yüksel et al.29 developed a CNN-based architecture, termed DENTECT, that involved segmentation into 4 different quadrants before performing tooth detection and numbering on panoramic radiographs. Most recently, a collaborative learning approach was introduced that combined Faster R-CNN and YOLO-v5 for tooth detection and Mask R-CNN and U-Net for tooth segmentation. This approach has been demonstrated to be effective in a variety of complex situations.30

Types of tooth numbering methods

The primary functions of CNNs in recognizing dental objects on panoramic radiographs include 3 tasks: classification, detection, and segmentation. A schematic representation of these tasks is provided in Figure 2. For tooth numbering, 11 studies employed the detection method, 5 utilized the classification method, and 3 adopted the segmentation method. Table 3 summarizes a comparative analysis of these 3 distinct approaches to tooth numbering on panoramic radiographs.

Fig. 2. Illustration of the various tasks involved in automated tooth numbering on panoramic radiographs. A. The pre-processed panoramic image. B. The classification task, which requires a labeled dataset or cropped tooth images from panoramic images, is employed for tooth numbering on each image. C. The detection task, which requires a panoramic radiograph with a marked region of interest, facilitates the localization and identification of the tooth object by drawing a bounding box around it. D. The segmentation task is carried out to delineate precise boundaries and identify the tooth objects on the panoramic radiograph.


Table 3. Comparison of the use of convolutional neural networks (CNNs) for classification, detection, and segmentation for tooth numbering on panoramic radiographs.


Detection enables the localization and numerical identification of a specific tooth by drawing a bounding box around it. Related studies have most commonly applied Faster R-CNN for tooth detection, occasionally modifying it with other CNN architectures to enhance the classification of tooth numbers. The references and/or data annotations for this task were supplied by various professionals, including radiologists,21,24,31 dentists,22,23 pedodontists,25 endodontists,29 and experts whose qualifications were not described.27,28,32 For detection tasks, the models were assessed using several metrics: precision (n=7), F1 score (n=5), sensitivity (n=5), specificity (n=1), accuracy (n=3), and mean average precision (n=1). In a comparative test, 1 study contrasted the performance of tooth detection using Faster R-CNN and YOLO-v5 architectures within a collaborative learning framework.30

Classification is employed to categorize and enumerate isolated or cropped tooth images derived from panoramic radiographs. ResNet architectures are predominantly utilized for this task and demonstrate high performance. One study involved a comparison of 5 distinct CNN architectures, namely AlexNet, VGGNet, ResNet, Xception, and GoogLeNet, to determine the most effective option for classification. The reference standard for data annotation and performance evaluation was supplied by a radiologist,21 a dentist,22 a medical expert,26 and an expert whose qualifications were not described.27,28 The performance of the CNNs was assessed using several metrics, including accuracy (n=3), F1 score (n=1), and specificity and sensitivity (n=2).

Segmentation is carried out to delineate the pixel-wise boundaries of the tooth object on panoramic radiographs. Mask R-CNN was primarily employed for tooth segmentation, followed by other CNN architectures to identify pathologies or specific conditions affecting each tooth. This method can also be utilized to facilitate automated dental chart filling. The standard references for the segmentation task were an endodontist and a medical expert.28,29 The performance of the segmentation task was evaluated using F1 score (n=2), accuracy (n=1), and pixel accuracy (n=1). One study compared the performance of tooth segmentation using Mask R-CNN and U-Net architectures in a collaborative learning setting.30
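The pixel accuracy metric mentioned above has a simple definition: the fraction of pixels whose predicted label matches the ground-truth mask. A minimal sketch, with a hypothetical 2x3 binary mask for illustration:

```python
# Sketch of the pixel accuracy metric used to evaluate segmentation:
# the fraction of pixels whose predicted label equals the ground truth.

def pixel_accuracy(pred, truth):
    total = correct = 0
    for p_row, t_row in zip(pred, truth):
        for p, t in zip(p_row, t_row):
            total += 1
            correct += (p == t)
    return correct / total

# Hypothetical 2x3 binary masks (1 = tooth pixel, 0 = background)
pred  = [[1, 1, 0],
         [0, 1, 0]]
truth = [[1, 0, 0],
         [0, 1, 0]]
print(pixel_accuracy(pred, truth))  # 5/6, one pixel is mislabeled
```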

Findings

Assessing the sources of errors in tooth numbering is key for the advancement of this system. As indicated in Table 2, only 5 studies reported an evaluation of the mistakes made by the system in tooth numbering. The presence of severely decayed teeth, crown prosthetics (including metal restorations, dental fillings, and pontics), missing adjacent teeth, dental implants, root remnants, wisdom teeth, and root canal-treated teeth can impair the performance of CNNs in tooth numbering. This is due to the potential for these factors to generate false positives or false negatives. The analysis of error sources is vital to address the limitations of CNNs in tooth numbering. Consequently, studies should include error analysis reports to aid in the development of optimized CNN architecture and dataset preparation for future research.

Although the aforementioned tooth conditions can decrease the performance of CNNs in tooth numbering, numerous studies have demonstrated that CNNs can extend beyond this purpose. They can identify the presence of dental implants, missing teeth, teeth treated with root canals, root remnants, and teeth with crown restorations and fillings. This capability can be further refined for automated dental chart filling in panoramic radiographs.28,29,30,31 CNNs can also be utilized for numbering deciduous teeth on panoramic radiographs.25 However, employing CNNs for numbering in other stages of dentition presents a challenge. This is due to the more intricate process required to perform tooth recognition tasks, particularly in the mixed dentition stage, because of the presence of tooth germs.

Discussion

All of the included studies reported that CNNs, irrespective of the specific architecture employed, demonstrated good performance in the automatic numbering of teeth on panoramic radiographs. Vinayahalingam et al.28 applied the CNN methodology to the automation of dental chart filling on panoramic radiographs. The objective of automating dental chart filling extends beyond merely facilitating tooth numbering; it can also be utilized to automatically identify the presence of crowns, fillings, root canals, implants, and residual roots on panoramic radiographs. The findings of this study indicated that CNNs achieved an F1 score exceeding 95% for tasks related to classification, detection, and segmentation. CNNs have been shown to be effective for automatic tooth numbering, and their application can be further expanded to automate dental chart filling using panoramic radiographs.

Faster R-CNN is the architecture most commonly employed for tooth numbering on panoramic radiographs.33 This model is an enhancement of R-CNN and Fast R-CNN.34 Faster R-CNN utilizes a region proposal network (RPN), a type of neural network that supplants the function of selective search. This substitution reduces the excessive computational demands on the computer, thereby accelerating object detection based on deep learning.35,36 The RPN shares full-image convolutional features with the detection network, which allows for nearly cost-free region proposals. It is a fully convolutional network that concurrently predicts object boundaries and objectness scores at each position.37 One benefit of using an RPN is the increase in speed during both training and prediction. Given that an RPN is a straightforward network that only employs convolutional layers, the prediction time can be faster than when using the classification base network.38,39
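The anchor mechanism at the heart of an RPN can be sketched as generating candidate boxes of several scales and aspect ratios at every feature-map position; the detection head then scores and refines these proposals. The scales, ratios, and stride below are arbitrary illustrative choices, not settings from any included study.

```python
# Sketch of RPN-style anchor-box generation: at each feature-map cell,
# candidate boxes of several scales and aspect ratios are proposed.
# All numeric settings here are illustrative assumptions.

def generate_anchors(fmap_h, fmap_w, stride=16,
                     scales=(64, 128), ratios=(0.5, 1.0, 2.0)):
    """Return (cx, cy, w, h) anchors for every feature-map cell."""
    anchors = []
    for i in range(fmap_h):
        for j in range(fmap_w):
            # Center of this cell in input-image coordinates
            cx, cy = j * stride + stride / 2, i * stride + stride / 2
            for s in scales:
                for r in ratios:
                    w = s * r ** 0.5   # wider for larger aspect ratios
                    h = s / r ** 0.5   # taller for smaller aspect ratios
                    anchors.append((cx, cy, w, h))
    return anchors

# A 2x3 feature map with 2 scales x 3 ratios = 6 anchors per position
anchors = generate_anchors(2, 3)
print(len(anchors))  # 36 anchors: 6 positions x 6 anchors each
```

Because this proposal step shares convolutional features with the detection network, as described above, it avoids the per-image cost of selective search.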

The primary functions of a CNN in identifying objects, including tooth numbers, on digital radiography images are classification, detection, and segmentation. Most of the studies examined21,22,23,24,25,27,28,29,30,31,32 involved the use of detection as a numeration method. Detection is an important step in pinpointing the location of an object. The most frequently used CNN architecture for performing detection tasks is Faster R-CNN. This architecture was employed in 7 studies for detection purposes. Bilgir et al.24 and Kılıc et al.25 chose Faster R-CNN for tooth numbering due to its design, which simplifies the complexity of CNNs with an architecture constructed to be comparatively deep and wide. CNNs can also be employed to number primary teeth using the detection task. Kılıc et al.25 utilized a dataset comprised of 421 anonymous panoramic radiographic images of pediatric patients aged between 5 and 7 years. The performance achieved included a precision of 95.71%, a sensitivity of 98.04%, and an F1 score of 96.86%. This demonstrates that CNNs can be used for numbering of both permanent and primary teeth. Further advancements can be made in the use of CNNs for primary tooth numbering, such as the application of tooth germ numeration, age estimation, and digital forensic purposes. Following successful detection, the identified tooth can undergo further analysis using other machine learning methods to detect pathologies or anomalies. The architecture of the CNN can be modified or expanded to detect various pathologies in the dental area of panoramic radiographs.

Five studies employed a classification method for numbering teeth on panoramic radiographs.21,22,26,27,28 This task is essential for categorizing tooth numbers based on their features. ResNet, one of the most common CNN architectures, is frequently chosen for this classification task. The popularity of ResNet (short for Residual Network) is largely due to its residual connection mechanism, which addresses the vanishing gradient problem. This problem arises when the gradient fails to propagate back to the first layer, thereby preventing the network from learning from the calculated errors.40,41 The operational principle of ResNet involves constructing a network that is relatively complex, while concurrently determining the optimal number of layers to overcome the vanishing gradient problem.42 In 2015, ResNet emerged as the winner of the ILSVRC and the Common Objects in Context (COCO) competitions. Specifically, it won in the categories of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.43 Regarding CNN architecture, depth is an important element in developing high-performing CNN models. ResNet offers several variations of layer types, including 18, 34, 50, 101, and 152 layers.44 Mahdi et al. utilized ResNet-50, comprising 50 layers, and ResNet-101, comprising 101 layers, to compare the precision and F1 score performance for permanent tooth numbering on panoramic radiographs.23 ResNet-101, being a deeper network than ResNet-50, delivered superior performance. However, from a computational perspective, ResNet-101 demands greater computing power than ResNet-50. Both architectures are applicable in the field of dentistry.45,46 Vinayahalingam et al.28 employed Mask R-CNN with ResNet-50 to automate dental chart filling based on panoramic radiographs. The F1 score performance achieved for detection was 99.3%.
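The residual connection that gives ResNet its name can be sketched as y = F(x) + x: the block adds its input back to the learned transform's output, so the identity path lets gradients flow even through very deep stacks. A minimal plain-Python illustration (the linear transform stands in for a full convolutional block):

```python
# Sketch of ResNet's residual (skip) connection: the block outputs
# relu(F(x) + x), so even if the learned transform F contributes little,
# the input still passes through via the identity path.

def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, weights):
    """y = relu(F(x) + x), where F is a simple linear transform
    standing in for a convolutional sub-network."""
    fx = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return relu([f + xi for f, xi in zip(fx, x)])  # the skip connection

# With an all-zero learned transform, the block reduces to the identity,
# which is why stacking many such blocks does not degrade the signal:
x = [1.0, 2.0, 3.0]
zero_w = [[0.0] * 3 for _ in range(3)]
print(residual_block(x, zero_w))  # [1.0, 2.0, 3.0]
```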
This indicates that the detection method is not solely useful for tooth numbering, but can also be applied to automated dental chart filling, a feature that is beneficial for forensic identification.

Only 3 studies employed segmentation for tooth numbering on panoramic radiographs.28,29,30 While these studies mostly utilized Mask R-CNN, U-Net is among the most frequently used architectures in medical imaging segmentation due to its effectiveness and superior performance. Image segmentation is a semantic process, dividing images into objects and non-objects.47,48 Chandrashekar et al.30 implemented a collaborative technique, combining 2 architectures for segmentation and detection tasks in panoramic radiography to improve results. The collaborative CNN architecture used was a combination of Mask R-CNN and U-Net for segmentation, which was then modified into a detection CNN architecture model, namely Faster R-CNN and YOLO-v5. The accuracy achieved for segmentation was 98.44%, while the detection performance was 98.77%. The findings of this study suggest that the collaborative model is significantly more effective than the individual learning model. CNNs can also be utilized for automated dental chart filling on panoramic radiographs using segmentation methods.28 Further advancements are possible, particularly in using the segmentation method for tooth numbering on panoramic radiographs, as only 3 articles have utilized this method to date. Segmentation plays a key role in computer vision and image processing, providing an effective process to facilitate analysis by dividing the image into 2 parts: the object and the background.49,50,51

The studies included in this review utilized a range of dataset sizes to develop their neural network models. The smallest dataset, used by Mima et al., contained 160 images,32 while the largest, used by Prados-Privado et al., contained 8,000.27 Eight of 10 studies included in this review used datasets of at least 1,000 images. A substantial quantity of data is necessary for the development of a CNN model. Studies involving fewer than 1,000 images can be considered to include a small dataset, which may result in a less accurate output model.52 Since gathering, processing, and labeling data is labor-intensive, several methods have been employed to enhance output model performance. These methods include transfer learning and data augmentation. Transfer learning is a technique that can expedite the learning process by transferring the pre-trained base layers of the CNN model using readily available datasets, then training the remaining layers on a smaller, local dataset. Seven of the studies included in this review (as shown in Table 2) demonstrated that this technique can be effective for tooth recognition, although the transfer learning was performed using datasets other than panoramic radiographs, such as ImageNet and COCO. Data augmentation is another technique that can expand the training dataset through the application of image transformations such as flipping, color manipulation, cropping, magnification, rotation, and translation.53
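Two of the augmentation transforms listed above can be sketched on an image represented as a nested list of pixel values; each transform yields a new labeled training sample at essentially no annotation cost (bounding boxes and masks must, of course, be transformed consistently in a real pipeline).

```python
# Sketch of two data augmentation transforms mentioned above, applied
# to an image represented as a nested list of pixel values.

def horizontal_flip(image):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in image]

def rotate_90(image):
    """Rotate the image 90 degrees clockwise."""
    return [list(col) for col in zip(*image[::-1])]

image = [[1, 2],
         [3, 4]]
print(horizontal_flip(image))  # [[2, 1], [4, 3]]
print(rotate_90(image))        # [[3, 1], [4, 2]]
```

Note that for tooth numbering specifically, a horizontal flip swaps left and right quadrants, so flipped samples need their tooth-number labels remapped accordingly.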

The studies included in this review highlighted the limitations of CNNs in tooth numbering for panoramic radiographs. Numerous studies reported various errors made by the system during the testing phase. The system may fail to identify a tooth due to the presence of severely decayed teeth, crown prosthetics (including metal restorations, dental fillings, and pontics), missing adjacent teeth, dental implants, root remnants, wisdom teeth, and root canal treatments on panoramic images. The most frequent errors, resulting in either false negatives or false positives, were associated with the presence of root remnants. The system also demonstrated errors in detecting dental implants and teeth with restoration materials.21,22,27,28 For the further advancement of CNNs in tooth numbering, it is crucial to include a variety of potential tooth forms in the training process. These forms may include primary teeth, tooth germs, residual roots, tooth anomalies, or teeth with crown restorations. The presence of an edentulous area should also be considered in the training dataset, as it can disrupt the learning process of the CNN model, leading to false positives on adjacent teeth.21,27 These considerations should be addressed during the dataset preparation stage of CNN system development for tooth numbering on panoramic radiographs.28

In future studies, CNN performance could be optimized by considering dataset quality as well as quantity, based on the requirements of the development process.31 Merely increasing dataset size may not necessarily improve performance, as the diversity of samples and the quality of the dataset are also crucial for optimizing results. For an effective learning system, it is essential that the sample is diverse and evenly distributed. This is particularly important given the challenging task of identifying multiple tooth objects with similar anatomical features in various locations. Once an appropriate dataset is assembled, the selection of appropriate CNN architecture is another important step prior to architecture modification. Collaborative or hybrid learning can be employed by integrating the CNN architecture based on the study’s objectives, as each architecture has specific advantages for certain tasks. This optimization depends on the computational power and the researcher’s knowledge, as optimal performance can be achieved with smaller datasets and lower computational power when the study is well-designed. The optimal CNN architectures for tooth numeration on panoramic radiographs can be utilized in various AI research projects for further advancements in the field of dentistry.

In conclusion, CNNs can be employed to automate the process of tooth numbering on panoramic radiographs through classification, detection, and segmentation. Enhancing and optimizing the CNN architecture based on the task at hand can improve the performance and results of the automated tooth numbering system. Future developments in CNN-based models should focus on various stages of dentition, such as the primary and mixed dentition stages. These models should also consider a range of tooth conditions, including teeth with restorations, dental anomalies, residual teeth, and edentulous areas, to enhance the recognition of tooth number. An optimized CNN architecture is anticipated to be useful for automated tooth numbering on panoramic radiographs, irrespective of the tooth condition. Ultimately, it could serve as a foundational element for AI research on panoramic radiography for a variety of purposes.

Footnotes

Conflicts of Interest: None


Articles from Imaging Science in Dentistry are provided here courtesy of Korean Academy of Oral and Maxillofacial Radiology
