Dentomaxillofacial Radiology
. 2022 Mar 2;51(5):20210504. doi: 10.1259/dmfr.20210504

Application of deep learning in teeth identification tasks on panoramic radiographs

Fahad Umer 1, Saqib Habib 1, Niha Adnan 1
PMCID: PMC10043607  PMID: 35143260

Abstract

Objectives:

To investigate the current developments of Artificial Intelligence (AI) in teeth identification on Panoramic Radiographs (PR). Our aim was to evaluate and compare the performances of Deep Learning (DL) models that have been employed in the execution of this task.

Methods:

The systematic review was registered on PROSPERO. All recent studies that utilized DL models for identifying teeth on PRs were included in this review. An extensive search of medical electronic databases, including PubMed NLM, EBSCO Dentistry & Oral Sciences Source, and the Wiley Cochrane Library, was conducted, followed by a hand search of the IEEE Xplore database. The diagnostic performance of DL models in teeth identification tasks on PRs was the primary outcome assessed in this review. The risk of bias of the included studies was evaluated via the modified QUADAS-2 tool. Owing to the heterogeneity of the reported performance metrics, a meta-analysis was not possible.

Results:

The search yielded a total of 282 articles, out of which 13 relevant ones were included in this review. These studies utilized a diverse range of DL models for teeth identification tasks on PRs and reported their performances using a variety of metrics.

Conclusion:

The results of teeth identification tasks carried out by DL models are encouraging; however, the shortcomings identified in our review need to be addressed by future researchers.

Keywords: Orthopantomography, Panoramic radiography, Neural networks, Deep learning, Artificial intelligence, Dentistry

Introduction

Dental panoramic radiographs are an essential adjunct for diagnosis, treatment planning and assessment in clinical dentistry. 1 Diagnostic accuracy depends on the expertise of the dental healthcare provider, who would benefit greatly from the automation of this process, effectuating a higher diagnostic yield. 2

Machine Learning (ML) and Deep Learning (DL) algorithms are computer-assisted frameworks that have been a focus of research for radiographic analysis and interpretation in medicine and dentistry. 3 Recently, however, interest in DL has overtaken ML owing to the inherent nature of DL algorithms, which makes their results superior. 4 ML algorithms require a pre-processing step of ‘feature extraction’, whereby a human annotator manually labels the key features in the input image for the algorithm to learn. 5 In contrast, DL algorithms do not require constant manual identification of such features; they learn from initial examples and map new inputs to outputs independently. 6 Because medical image pre-processing is a complex task requiring domain experts for continuous manual identification of important features, leveraging ML for the automation of radiograph interpretation is time consuming and expensive. DL therefore currently dominates research in medical and dental image processing for the automation of this process. 6

In dentistry, PRs (orthopantomograms) are a commonly used imaging modality for diagnosing oral and maxillofacial diseases. PRs are being used to train DL models and have been employed for various diagnostic tasks thus far. 7 These tasks include tooth classification, diagnosis of dental caries, determination of working length, and detection of periapical lesions, vertical root fractures, maxillary sinus pathologies, ameloblastomas, odontogenic lesions and periodontal bone loss. 5,8 The results from these studies hold great promise for the clinical implementation of their diagnostic abilities. Applying these results, however, necessitates the accurate association of pathology with the causative anatomic structure. 9 Since dental diseases are either directly associated with, or lie in close proximity to, teeth, their identification is one of the most important pieces of the dental image automation puzzle. Both ML and DL models have been utilized for teeth identification in the past. 5 Due to the complex anatomy and overlap of anatomical boundaries on PRs, this remains a particularly challenging task for an AI model. 2

The detection of teeth in PR will be referred to as ‘teeth identification’ in this review and the techniques are further classified into object detection, semantic segmentation and instance segmentation as shown in Figure 1. 10,11 Our aim is to explore the current status of scientific evidence available for the identification of teeth on PR, to determine the techniques being employed (object detection, semantic and instance segmentation techniques) and the specific DL models utilized for teeth identification. Furthermore, this review also identifies gaps in research and discusses the future prospects of DL in teeth identification.

Figure 1.


Teeth detection tasks: (i) unannotated orthopantomogram. (ii) object detection of all teeth via bounding boxes. (iii) semantic segmentation of all teeth as ‘tooth’ via fluid outlines. (iv) instance segmentation of all teeth as each ‘individual tooth’ via fluid margins.

Methods and materials

This systematic review was conducted using a predetermined protocol in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The following PICO model was employed:


Population: Panoramic radiographs (orthopantomograms)

Intervention: DL models employing teeth identification tasks

Comparison: Expert opinion, reference standard and/or ground truth

Outcome: Diagnostic performance of DL models

The study protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) and a registration number was obtained (Reg. No.: CRD42021249627) prior to commencing the review. This was done to ensure transparency in research and to avoid any unplanned duplication of the review questions. The authors designed focused and targeted review questions which are as follows:

  1. What are the developments of DL models in teeth identification tasks on PRs?

  2. What are the types of DL models that have been employed for teeth identification tasks on PRs?

  3. What is the diagnostic performance of the DL models in teeth identification tasks on PRs?

Inclusion criteria

  1. Studies that evaluated a DL model for teeth identification on PRs.

  2. Studies that evaluated teeth detection/segmentation of all teeth on PRs.

  3. Studies that reported the performance metrics of the DL model utilized.

  4. Studies that explicitly mention the type of DL model used for teeth identification.

Exclusion criteria

  1. Studies not in English language.

  2. Studies only available as abstracts.

  3. Studies registered as protocols only.

Literature search

A comprehensive literature search was conducted in May 2021 to identify relevant literature in three major health sciences databases, i.e., PubMed NLM, EBSCO Dentistry & Oral Sciences Source, and the Wiley Cochrane Library. Furthermore, a manual search was performed by the authors in the Google Scholar and IEEE Xplore databases to identify relevant literature not present in the aforementioned databases.

Search strategy

Database-specific keywords and Medical Subject Headings (MeSH) terms were predefined and a comprehensive search strategy was implemented. The authors used various combinations of these search terms and conducted a pilot literature search to formulate the final search strategy. Furthermore, a hand search was carried out after reviewing the reference lists of all included articles, including those from the IEEE Xplore database. Article citations were exported to the EndNote reference manager, where duplicate references were removed. Three independent reviewers performed a detailed screening of the studies. Any disagreement between the reviewers was resolved through discussion until a consensus was reached.

Search terms

‘Teeth segmentation’, ‘Image segmentation’, ‘Orthopantomography’, ‘Tooth classification’, ‘Teeth detection and numbering’, ‘Panoramic radiography’, ‘Radiographic image interpretation’, ‘Dental X-ray image’, ‘Computer assisted diagnosis’, ‘Convolutional neural network’, ‘Deep learning’, ‘U-Net’, ‘CNN’, ‘Artificial neural network’, ‘R-CNN’.

Data extraction

A customized proforma was designed by the authors to extract the following data from included studies:

  1. Study details (authors, journal, country and year of publication).

  2. Type of DL model used (U-Net, R-CNN, ResNet, etc).

  3. Diagnostic performance parameters (precision, recall, Intersection over Union (IoU), F1-Score, DICE index).

  4. Study characteristics (sample size, annotators if specified, datasets studied, reference standard).
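The performance parameters extracted above are all derived from the same pixel-wise confusion counts. A minimal sketch in plain Python (illustrative only, not taken from any included study; `pred` and `truth` are hypothetical flattened binary masks):

```python
def confusion_counts(pred, truth):
    """Count pixel-wise true positives, false positives and false negatives
    between two equal-length binary masks (1 = tooth, 0 = background)."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    return tp, fp, fn

def metrics(pred, truth):
    tp, fp, fn = confusion_counts(pred, truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                    # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)                  # Intersection over Union (Jaccard index)
    dice = 2 * tp / (2 * tp + fp + fn)         # DICE index; equals F1 for binary masks
    return precision, recall, f1, iou, dice
```

The identity between the DICE index and the F1-score for binary masks is one reason studies reporting either metric can be read side by side, even when they label it differently.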

Risk of bias assessment

The risk of bias and methodological quality of the included studies were assessed using the modified Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. 12,13 All authors (FUMR, SQHB, NHAD) of this review were involved in this process. The modified QUADAS-2 tool derived from an earlier study was further adapted to ensure better quality assessment of the included studies. 14 To the best of our knowledge, a specifically tailored tool for assessing methodological quality in AI-related research in dentistry does not exist. 15 Therefore, questions related to performance metrics, use of validation and data augmentation techniques, information for future replicability, and the reference standard were added to formulate a tool better suited to our review. 14 This tool was then piloted on three studies to ensure its applicability. The satisfied criteria were combined into a cumulative numerical score depicting the methodological quality of each included study: high quality (score 6–8), moderate quality (score 4–5) or low quality (score 0–3). The outcomes of the risk of bias assessment are presented in Table 1.

Table 1.

Quality assessment of the selected studies using the modified QUADAS-2 tool. The overall score reflects the quality and risk of bias for each study.

Quality assessment questions:

  1. Does the sample size consist of >100 images?
  2. Was there an explicit inclusion/exclusion criterion defined for dataset selection?
  3. Did the investigators mention a sample size calculation?
  4. Is the dataset multicentric?
  5. Was adequate information provided for replicating the study?
  6. Did the investigators employ any validation techniques on the test data to reduce bias?
  7. Does the study mention the type of specialist involved in annotation?
  8. Did the investigators report more than one performance metric?
  9. Did the investigators employ any data augmentation techniques?

Overall scores: Nishitani et al 2021: 3; Leite et al 2021: 6; Lee et al 2020: 5; Zhao et al 2020: 5; Mahdi et al 2020: 3; Muramatsu et al 2020: 4; Silva et al 2020: 6; Muresan et al 2020: 5; Chung et al 2020: 4; Tuzoff et al 2019: 4; Koch et al 2019: 4; Wirtz et al 2018: 2; Jader et al 2018: 4.

Quality legend: high, score 6–8; moderate, score 4–5; low, score 0–3.

Data synthesis

Since high heterogeneity in outcome measures and DL models was observed amongst the included studies, a quantitative analysis of the results, i.e., a meta-analysis, was not possible. Hence, the findings of the included studies are presented narratively.

Results

A total of 282 studies were identified following a detailed electronic and manual literature search. After removing duplicates and performing a detailed screening of the studies, 13 articles fulfilled our inclusion criteria. Hence, these 13 articles were subjected to a final analysis. The screening process is shown as PRISMA flowchart in Figure 2. A graphical summary of the methodology and results is illustrated in Figure 3.

Figure 2.


PRISMA flowchart

Figure 3.


Graphical summary of the review

Description of included studies

The main findings of the included studies are summarized in Table 2. All included studies were published between 2018 and 2021. The results are described based on the type of teeth identification methods i.e., object detection, semantic segmentation and instance segmentation techniques. This is followed by the identification of the type of DL model that was used as well as their diagnostic performances.

Table 2.

Main findings of the included studies

Author, Year,
(Country)
AI Model used Sample size Annotator(s)/
Ground truth
Teeth Identification Task(s) Performance metrics Conclusions
Nishitani et al24 2021
(Japan)
U-Net 162 images
Training set: 82
Validation set: 20
Test set: 60
Not mentioned Semantic segmentation Jaccard index (0.809)
Dice index (0.894)
U-Net with the new loss function exhibited
a higher segmentation accuracy of teeth in panoramic dental X-ray images than that obtained by U-Net with the conventional loss function
Leite et al1 2021
(Belgium)
 ResNet 101 153 images
Training set: 70
Validation set: 18
Test set: 65
Oral radiologist with 20 years’ experience Instance segmentation,
Time Analysis
Sensitivity (0.98)
Precision (0.99)
IoU (0.95)
Recall (0.98)
F1 score (0.97)
Hausdorff distance (7.9)
The AI tool yielded a highly accurate and fast performance for detecting and segmenting teeth, faster than the ground truth alone. Also, the time needed to perform fully manual segmentation of teeth on a panoramic radiograph may be reduced by 67% when using a segmentation method based on deep learning algorithms.
Lee et al2
2020
(Korea)
R-CNN 30 images
846 teeth
Oral radiologist with 5 years’ experience Instance segmentation Precision (0.85)
Recall (0.89)
IoU (0.87)
F1 score (0.87)
Visual analysis
The method achieved high performance for automation of tooth segmentation on dental panoramic images. The proposed method might be applied in the first step of diagnosis automation and in forensic identification, which involves similar segmentation tasks.
Zhao et al21 2020
(China)
TSASNet 1500 images
Training set: 1200
Validation set: 150
Test set: 150
Not mentioned Semantic segmentation Accuracy (0.96)
Recall (0.93)
Specificity (0.97)
Precision (0.94)
Dice index (0.92)
Results showed that TSASNet can obtain superior segmentation performance on dental panoramic images over other state-of-the-art methods and has highly competitive performance compared with the current medical image segmentation methods.
Mahdi et al6 2020
(Japan)
Faster R-CNN
(ResNet 101 and ResNet 50)
1000 images
Training set: 900
Test set: 100
Not mentioned Object detection ResNet 50
Precision (0.97)
ResNet 101
Precision (0.98)
F1 score (0.98)
The proposed model can be used as a useful and reliable tool to assist dental care professionals in dentistry.
Muramatsu et al15 2020
(Japan)
ResNet 50 100 images Dental radiologist Object detection Sensitivity (0.96) High tooth detection sensitivity and classification accuracies were obtained using a limited dataset, suggesting the potential utility of the proposed method in the automatic filing of dental records.
Silva et al17
2020
(Brazil)
Mask R-CNN
PANet
HTC
ResNet
543 images Not mentioned Semantic segmentation,
Instance segmentation,
Teeth numbering
Mask R-CNN
Accuracy (0.96)
F1 score (0.90)
PANet
Accuracy (0.96)
F1 score (0.91)
HTC
Accuracy (0.96)
F1 score (0.89)
ResNet
Accuracy (0.96)
F1 score (0.91)
Results showed that instance segmentation and numbering are feasible to accomplish with an end-to-end deep network. In our experiments, PANet achieved the best results, i.e., an F1 score of 91.65% on semantic segmentation.
Muresan et al25 2020
(Romania)
ERFNet 1000 images
Training set: 700
Validation set: 100
Test set: 200
Not mentioned Semantic segmentation Accuracy (0.89)
Precision (0.98)
Recall (0.91)
F1 score (0.93)
The proposed solution is able to accurately segment the teeth and correctly identify problems, unlike other methods that could not identify all the problems correctly.

Four studies included in our review performed the teeth identification task through DL-based object detection. 6,16–18 Others leveraged DL for either semantic segmentation or instance segmentation methods of teeth identification. Furthermore, two studies also performed a supplementary task of tooth numbering in addition to teeth identification. 18,19

Datasets studied

We noticed a large variation in the datasets utilized by the included studies, ranging from 14 radiographs to 1500 radiographs. 20–23 Only four studies mentioned the details of annotators (oral radiologists/experts). 1,2,17,18 The remaining studies did not mention any details of the annotators and/or annotation methods involved in the training process. However, these studies were still included in our review in an attempt to assess as many DL models as possible.

DL models studied

ResNet is an advanced CNN with many deep layers incorporated into its framework and the ability to mitigate the vanishing gradient problem via skip connections. It is a deeper neural network than its predecessors AlexNet and GoogLeNet, which comprise 8 and 22 layers, respectively. Investigators have since experimented with the number of deep layers in the ResNet architecture, as in ResNet 101 and ResNet 50. 1,17 Four studies employed the base ResNet algorithm: two utilized it for instance segmentation and the remaining two employed it for object detection tasks. 1,16,17,19 For instance segmentation, two studies exhibited good overall performance of the ResNet architecture (0.97 and 0.91 F1-scores, respectively). 1,19 For object detection, ResNet-based algorithms achieved high sensitivity (91%), precision (0.99) and recall scores (0.97). 16,17
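The skip connection that distinguishes ResNet can be reduced to one line: the block learns a correction F(x) and outputs F(x) + x, so gradients can flow around the transform. A minimal, framework-free sketch (illustrative only; `transform` stands in for the block's convolutional layers):

```python
def residual_block(x, transform):
    """Residual (skip) connection: output = F(x) + x, element-wise.
    Here x is a plain list of activations and transform is any function
    returning a list of the same length (a stand-in for conv layers)."""
    return [f + xi for f, xi in zip(transform(x), x)]

# A transform that outputs an all-zero correction leaves the input
# unchanged, i.e., the block defaults to the identity mapping. This is
# why very deep stacks of residual blocks remain easy to optimise.
out = residual_block([1.0, 2.0], lambda v: [0.0 for _ in v])
```

Because the identity path needs no learning, adding more such blocks cannot easily make the network worse, which is what permits the 50- and 101-layer variants discussed above.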

R-CNN was developed to increase the computational efficiency of computer vision tasks: it proposes around 2000 regions of interest in a given image, known as ‘region proposals’, which are then fed through the CNN for processing. Fast and Faster R-CNN are further developments of the R-CNN model that facilitate real-time image processing. 24,25

Three studies in our review leveraged R-CNN and its advanced version, Faster R-CNN, for teeth identification tasks. 2,6,18 Only one of these utilized R-CNN specifically for an instance segmentation task, with a moderate precision score (0.85). 2 The others used Faster R-CNN for object detection to identify teeth on PRs, 6,18 achieving a higher precision score (0.99). 18 One study, which utilized both ResNet 50 and ResNet 101 as backbones for its Faster R-CNN, also reported high precision scores (0.97 and 0.98, respectively). 6
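The precision scores above hinge on when a proposed box counts as a correct detection, which is conventionally decided by the Intersection over Union between the predicted and ground-truth boxes. A minimal sketch (illustrative; box format `(x1, y1, x2, y2)` is an assumption, not taken from the included studies):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A detection is then typically counted as a true positive when its IoU with a ground-truth tooth box exceeds a threshold such as 0.5, and the precision/recall figures follow from those counts.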

Mask R-CNN is a further upgrade on the previous R-CNN and Faster R-CNN and is being employed particularly for instance segmentation tasks. 20 Two studies utilized Mask R-CNN on PRs and reported acceptable performance (0.88 and 0.90 F1-scores, respectively). 19,20

U-Net is a relatively new DL algorithm composed of a fully convolutional network, i.e., it consists of convolutional layers with no fully connected layers. The architecture consists of encoder–decoder layers which, when visualized, resemble the letter ‘U’. This technique is used to overcome the ‘bottleneck problem’ encountered in linear CNN models, which results in the loss of important features from the original images. Three studies utilized U-Net-based algorithms and demonstrated satisfactory performance with acceptable DICE index scores (0.7, 0.9 and 0.927, respectively). 21,22,26

ERFNet, a linear encoder–decoder convolutional network with a total of 23 layers, was utilized in one study. 27 The first 16 layers encode the image and the remaining seven decode it, allowing for segmentation of teeth with excellent results (F1 score 0.93). 27

An end-to-end model TSASNet was employed by one study for segmentation of PR, demonstrating high-grade performance (DICE index 0.92). 23

Risk of bias assessment

The modified QUADAS-2 tool was utilized to assess the quality of the included studies. 14 By combining the scores of the satisfied criteria, an overall score for each study was determined and is presented in Table 1. Two studies achieved the highest score of 6, 1,19 whereas one received the lowest score of 2. 22

Discussion

To reliably utilize DL for dental image processing and PR automation, there is a need to substantiate the usefulness of the perpetually evolving DL algorithms. This can only be done by testing these algorithms under solid experimental conditions representing real-world scenarios.

As the focus of this review was mainly DL, all the included studies were published after 2018. The algorithms which have been utilized during this time represent the state-of-the-art Convolutional Neural Networks (CNNs) available to the research community. In this review, we noted that DL models have been appropriately harnessed to perform teeth identification tasks. Investigators have used multifarious performance metrics (sensitivity, precision, accuracy, F1-score, IoU, DICE index, etc) and have reported optimal performance. Moreover, two studies also performed teeth numbering in addition to teeth identification, with encouraging results. 18,27

As we have already described, tooth identification tasks may be classified into three categories: object detection, semantic segmentation and instance segmentation. In object detection, teeth in the PRs were identified and localized using bounding boxes; four of the included studies adopted this method with good results. 6,16–18

Teeth segmentation is a further variation of teeth identification in which the teeth are detected and segregated from the background of the image. 28 This segmentation task can be further classified into semantic and instance segmentation. 29 In semantic segmentation, multiple items of the same class are identified as a single entity, 28,29 whereas in instance segmentation, multiple similar items are considered individual entities. In semantic segmentation, all teeth are identified as ‘tooth’ and not as individual tooth classes. In contrast, in instance segmentation, each tooth can be individually identified according to its tooth class (for example: molar, premolar, canine, etc).
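The distinction is visible in how pixel labels are assigned. A toy example (illustrative values only, not from the included studies) using a one-dimensional strip of pixels:

```python
# A toy 1-D strip of a radiograph with two teeth separated by background.
# Semantic segmentation: every tooth pixel gets the same class label.
semantic_mask = [0, 1, 1, 0, 1, 0]   # 1 = 'tooth', 0 = background

# Instance segmentation: each tooth gets its own identifier (and could
# additionally carry a class such as 'molar' or 'premolar').
instance_mask = [0, 1, 1, 0, 2, 0]   # 1 = first tooth, 2 = second tooth

# The semantic mask is recoverable from the instance mask by collapsing
# all instance ids to one class, but not the other way round.
num_instances = len(set(instance_mask) - {0})
```

This asymmetry is why datasets annotated only for semantic segmentation (such as the original UFBA-UESC set discussed later) must be re-annotated before they can support instance segmentation.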

Overall, four studies performed object detection, 6,16–18 with two studies performing teeth numbering task in addition to object detection. 18,19 Four studies performed only semantic segmentation, 21,23,26,27 and a further four employed purely instance segmentation for teeth identification. 1,2,20,22 One study performed three tasks, namely, teeth numbering via object detection, semantic segmentation and instance segmentation on PRs. 19

For the automation of radiographic interpretation, the ideal dataset should be large enough to encompass a wide range of clinical scenarios. Additionally, the data should include variables such as different radiography machines and patients of diverse ethnicities/geographical backgrounds; this will help mitigate the phenomenon of ‘overfitting’ associated with DL algorithms. 30 Moreover, annotation of the ground truth needs to be done by multiple radiologists with varying levels of experience in order to account for interexaminer variation. In our review, only four studies gave information regarding annotation of the ground truth. 1,2,17,18 Only two of these gave details regarding the experience of the annotators involved, 1,2 and only one study used more than one annotator for annotation of datasets. 18 Two studies employed multicentric data; 16,27 all others utilized data from a single center, which may have made their results overfitted, resulting in limited generalizability on unseen data from another cohort.

DL is data intensive and requires large datasets to make accurate predictions. 30 In our study, we noted heterogeneity in the datasets being employed, the largest comprising 1500 PRs. Four out of thirteen studies utilized the same dataset for training DL algorithms. 19–21,23 This dataset is popularly known as the UFBA-UESC Dental Images Dataset. 5,19 The PRs in this dataset are grouped into ten separate categories including teeth, missing teeth, dental appliances, restorations, implants, etc. The original dataset was annotated for the semantic segmentation task. Later, Jader et al upgraded the same dataset with annotations that allow for instance segmentation; this dataset was named the UFBA-UESC Dental Images Dataset Deep. 19–21

For the training and evaluation of a DL model, datasets should be divided into three groups: training, validation and testing datasets. Whilst most studies used this template to evaluate their models, four studies did not follow this protocol, 2,17,19,22 thereby incorporating a high risk of bias into their results, which could easily have been avoided by stratifying their datasets into the predetermined groups stated above.
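The stratification described above can be sketched in a few lines. The 70/15/15 split ratio and fixed seed below are illustrative assumptions, not values prescribed by the review or the included studies:

```python
import random

def split_dataset(items, train=0.7, val=0.15, seed=42):
    """Shuffle and partition a dataset into train/validation/test subsets.
    The remainder after the train and validation fractions becomes the
    held-out test set, which the model must never see during training."""
    items = list(items)
    random.Random(seed).shuffle(items)       # fixed seed -> reproducible split
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# e.g., 100 radiograph ids split 70/15/15
train_set, val_set, test_set = split_dataset(range(100))
```

Keeping the test partition untouched until the final evaluation is what the protocol guards against; reporting metrics on data the model was tuned on inflates performance and introduces exactly the bias noted above.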

Like any other project, our review suffers from certain limitations. Although we had a predefined PICO question, our review outcomes are rather broad. Some of the findings were included during the synthesis phase of the literature search, since we decided to include conference proceedings as well. Because DL is a largely unexplored domain, we wanted to include the maximum information, aiming to benefit the readers of this paper. The literature search was done by only one author with the help of a librarian, albeit the extraction was cross-checked by all three authors. The included studies were diverse, with heterogeneous algorithms and performance metrics; hence, no direct comparison amongst the studies and their performance was possible.

Our review also has one distinct advantage: our search included the IEEE Xplore database, covering a large body of engineering publications on the same topic which might otherwise have been missed. It can be argued that the inclusion of these publications adds grey literature to our review, since conference proceedings are not peer-reviewed. However, they were included in an attempt to incorporate all recent evidence available on teeth identification.

Conclusions and future applications

For future research, there is a need to develop a standardized methodology and predetermined performance metrics to increase the overall robustness and generalizability of DL algorithms for diagnostic purposes. 4 The EQUATOR network is available for such guidance in AI-based intervention studies. 31 A counterpart specific to dentistry is still needed and is at present available only in the form of a checklist. 31 However, the said checklist does not extensively cover all important aspects of reporting AI-based interventional studies and needs to be improved further for widespread applicability of DL models in dentistry. 31

Strategies should be devised to generate AI datasets which are properly collected, curated and made available to researchers globally in a systematic, secure and ethical way. The data annotation process needs to be regulated through the development of standardized software and tools, so that dataset attributes are consistent and can be utilized by researchers all around the globe.

According to our review, teeth identification with the help of CNNs has shown promising results; however, the included studies suffered from certain limitations such as overfitting, high risk of bias and high heterogeneity. For these reasons, the utilization of DL models for performing teeth identification tasks in clinical practice remains questionable. The evidence for routine use of DL models as an adjunct diagnostic aid is sparse but developing steadily. 4

The throughput, cost-effectiveness and efficiency of CNN-based models are still not well understood, and their effect on overall acceptability in clinical practice needs further exploration. Nonetheless, as the preliminary results are encouraging, it is hoped that the shortcomings of DL identified in our review will be addressed by future researchers.

Footnotes

First Floor Dental Clinics, Department of Surgery, Jenabai Hussainali Shariff (JHS) building, Aga Khan University Hospital, Karachi, Pakistan

Contributor Information

Fahad Umer, Email: fahad.umer@aku.edu.

Saqib Habib, Email: saqib.habib@aku.edu.

Niha Adnan, Email: nihasuriya@gmail.com.

REFERENCES

  • 1. Leite AF, Gerven AV, Willems H, Beznik T, Lahoud P, et al. Artificial intelligence-driven novel tool for tooth detection and segmentation on panoramic radiographs. Clin Oral Investig 2021; 25: 2257–67. doi: 10.1007/s00784-020-03544-6 [DOI] [PubMed] [Google Scholar]
  • 2. Lee JH, Han SS, Kim YH, Lee C, Kim I. Application of a fully deep convolutional neural network to the automation of tooth segmentation on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol 2020; 129: 635–42: S2212-4403(19)31581-0. doi: 10.1016/j.oooo.2019.11.007 [DOI] [PubMed] [Google Scholar]
  • 3. Howard J. Artificial intelligence: implications for the future of work. Am J Ind Med 2019; 62: 917–26. doi: 10.1002/ajim.23037 [DOI] [PubMed] [Google Scholar]
  • 4. Shan T, Tay FR, Gu L. Application of artificial intelligence in dentistry. J Dent Res 2021; 100: 232–44. doi: 10.1177/0022034520969115 [DOI] [PubMed] [Google Scholar]
  • 5. Silva G, Oliveira L, Pithon M. Automatic segmenting teeth in x-ray images: trends, a novel data set, benchmarking and future perspectives. Expert Systems with Applications 2018; 107: 15–31. doi: 10.1016/j.eswa.2018.04.001 [DOI] [Google Scholar]
  • 6. Mahdi FP, Motoki K, Kobashi S. Optimization technique combined with deep learning method for teeth recognition in dental panoramic radiographs. Sci Rep 2020; 10(1): 19261. doi: 10.1038/s41598-020-75887-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Chen YW, Stanley K, Att W. Artificial intelligence in dentistry: current applications and future perspectives. Quintessence Int 2020; 51: 248–57. doi: 10.3290/j.qi.a43952 [DOI] [PubMed] [Google Scholar]
  • 8. Umer F, Habib S. Critical analysis of artificial intelligence in :endodontics: A scoping review. J Endod 2022; 48: 152–60: S0099-2399(21)00802-5. doi: 10.1016/j.joen.2021.11.007 [DOI] [PubMed] [Google Scholar]
  • 9. Kuwada C, Ariji Y, Fukuda M, Kise Y, Fujita H, et al. Deep learning systems for detecting and classifying the presence of impacted supernumerary teeth in the maxillary incisor region on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol 2020; 130: 464–69: S2212-4403(20)30969-X. doi: 10.1016/j.oooo.2020.04.813 [DOI] [PubMed] [Google Scholar]
  • 10. Wagner FH, Dalagnol R, Tarabalka Y, Segantine TY, Thomé R, et al. U-net-id, an instance segmentation model for building extraction from satellite images—case study in the joanópolis city, brazil. Remote Sensing 2020; 12: 1544. doi: 10.3390/rs12101544 [DOI] [Google Scholar]
  • 11. Adnan N, Umer F. Understanding deep learning—challenges and prospects. J Pak Med Assoc 2022; 72: 59–63. doi: 10.47391/JPMA.AKU-12 [DOI] [PubMed] [Google Scholar]
  • 12. Whiting PF, Weswood ME, Rutjes AWS, Reitsma JB, Bossuyt PNM, et al. Evaluation of QUADAS, a tool for the quality assessment of diagnostic accuracy studies. BMC Med Res Methodol 2006; 6: 9. doi: 10.1186/1471-2288-6-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Qu YJ, Yang ZR, Sun F, Zhan SY. Risk of bias assessment: (6) a revised tool for the quality assessment of diagnostic accuracy studies (QUADAS-2). Zhonghua Liu Xing Bing Xue Za Zhi 2018; 39: 524–31. doi: 10.3760/cma.j.issn.0254-6450.2018.04.028 [DOI] [PubMed] [Google Scholar]
  • 14. Mahmood H, Shaban M, Indave BI, Santos-Silva AR, Rajpoot N, et al. Use of artificial intelligence in diagnosis of head and neck precancerous and cancerous lesions: a systematic review. Oral Oncol 2020; 110: 104885: S1368-8375(20)30321-3. doi: 10.1016/j.oraloncology.2020.104885 [DOI] [PubMed] [Google Scholar]
  • 15. Habib S, Umer F. Comments on “artificial intelligence applications in restorative dentistry: a systematic review.” J Prosthet Dent 2022; 127: 196–97: S0022-3913(21)00424-8. doi: 10.1016/j.prosdent.2021.08.003 [DOI] [PubMed] [Google Scholar]
  • 16. Chung M, Lee J, Park S, Lee M, Lee CE, et al. Individual tooth detection and identification from dental panoramic x-ray images via point-wise localization and distance regularization. Artif Intell Med 2021; 111: 101996: S0933-3657(20)31261-6. doi: 10.1016/j.artmed.2020.101996 [DOI] [PubMed] [Google Scholar]
  • 17. Muramatsu C, Morishita T, Takahashi R, Hayashi T, Nishiyama W, et al. Tooth detection and classification on panoramic radiographs for automatic dental chart filing: improved classification by multi-sized input data. Oral Radiol 2021; 37: 13–19. doi: 10.1007/s11282-019-00418-w [DOI] [PubMed] [Google Scholar]
  • 18. Tuzoff DV, Tuzova LN, Bornstein MM, Krasnov AS, Kharchenko MA, et al. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac Radiol 2019; 48(4): 20180051. doi: 10.1259/dmfr.20180051 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Silva B, Pinheiro L, Oliveira L, Pithon M. A study on tooth segmentation and numbering using end-to-end deep neural networks. 2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI); Recife/Porto de Galinhas, Brazil; November 2020. doi: 10.1109/SIBGRAPI51738.2020.00030 [DOI] [Google Scholar]
  • 20. Jader G, Fontineli J, Ruiz M, Abdalla K, Pithon M, et al. Deep instance segmentation of teeth in panoramic X-ray images. 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI); Paraná, Brazil; October 2018. doi: 10.1109/SIBGRAPI.2018.00058 [DOI] [Google Scholar]
  • 21. Koch TL, Perslev M, Igel C, Brandt SS. Accurate segmentation of dental panoramic radiographs with U-Nets. 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI); Venice, Italy; April 2019. doi: 10.1109/ISBI.2019.8759563 [DOI] [Google Scholar]
  • 22. Wirtz A, Mirashi SG, Wesarg S. Automatic teeth segmentation in panoramic X-ray images using a coupled shape model in combination with a neural network. Springer; 2018. [Google Scholar]
  • 23. Zhao Y, Li P, Gao C, Liu Y, Chen Q, et al. TSASNet: tooth segmentation on dental panoramic X-ray images by two-stage attention segmentation network. Knowledge-Based Systems 2020; 206: 106338. doi: 10.1016/j.knosys.2020.106338 [DOI] [Google Scholar]
  • 24. Chu W-S, Zeng J, De la Torre F, Cohn JF, Messinger DS. Unsupervised synchrony discovery in human interaction. Proc IEEE Int Conf Comput Vis 2015; 2015: 3146–54. doi: 10.1109/ICCV.2015.360 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. Adv Neural Inf Process Syst 2015; 28: 91–99. [DOI] [PubMed] [Google Scholar]
  • 26. Nishitani Y, Nakayama R, Hayashi D, Hizukuri A, Murata K. Segmentation of teeth in panoramic dental X-ray images using U-Net with a loss function weighted on the tooth edge. Radiol Phys Technol 2021; 14: 64–69. doi: 10.1007/s12194-020-00603-1 [DOI] [PubMed] [Google Scholar]
  • 27. Muresan MP, Barbura AR, Nedevschi S. Teeth detection and dental problem classification in panoramic X-ray images using deep learning and image processing techniques. 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP); Cluj-Napoca, Romania; 3 September 2020. doi: 10.1109/ICCP51029.2020.9266244 [DOI] [Google Scholar]
  • 28. Corbella S, Srinivas S, Cabitza F. Applications of deep learning in dentistry. Oral Surg Oral Med Oral Pathol Oral Radiol 2021; 132: 225–38: S2212-4403(20)31321-3. doi: 10.1016/j.oooo.2020.11.003 [DOI] [PubMed] [Google Scholar]
  • 29. Rodrigues JA, Krois J, Schwendicke F. Demystifying artificial intelligence and deep learning in dentistry. Braz Oral Res 2021; 35: e094: S1806-83242021000100604. doi: 10.1590/1807-3107bor-2021.vol35.0094 [DOI] [PubMed] [Google Scholar]
  • 30. Umer F, Khan M. A call to action: concerns related to artificial intelligence. Oral Surg Oral Med Oral Pathol Oral Radiol 2021; 132: 255: S2212-4403(21)00425-9. doi: 10.1016/j.oooo.2021.04.056 [DOI] [PubMed] [Google Scholar]
  • 31. Schwendicke F, Singh T, Lee J-H, Gaudin R, Chaurasia A, et al. Artificial intelligence in dental research: checklist for authors, reviewers, readers. J Dent 2021; 107: 103610: S0300-5712(21)00031-2. doi: 10.1016/j.jdent.2021.103610 [DOI] [Google Scholar]

Articles from Dentomaxillofacial Radiology are provided here courtesy of Oxford University Press