Imaging Science in Dentistry. 2022 Oct 12;52(4):383–391. doi: 10.5624/isd.20220105

Comparison of Multi-Label U-Net and Mask R-CNN for panoramic radiograph segmentation to detect periodontitis

Rini Widyaningrum 1,, Ika Candradewi 2, Nur Rahman Ahmad Seno Aji 3, Rona Aulianisa 4
PMCID: PMC9807794  PMID: 36605859

Abstract

Purpose

Periodontitis, the most prevalent chronic inflammatory condition affecting teeth-supporting tissues, is diagnosed and classified through clinical and radiographic examinations. The staging of periodontitis using panoramic radiographs provides information for designing computer-assisted diagnostic systems. Performing image segmentation in periodontitis is required for image processing in diagnostic applications. This study evaluated image segmentation for periodontitis staging based on deep learning approaches.

Materials and Methods

Multi-Label U-Net and Mask R-CNN models were compared for image segmentation to detect periodontitis using 100 digital panoramic radiographs. Normal conditions and 4 stages of periodontitis were annotated on these panoramic radiographs. A total of 1100 original and augmented images were then randomly divided into a training (75%) dataset to produce segmentation models and a testing (25%) dataset to determine the evaluation metrics of the segmentation models.

Results

The performance of the segmentation models against the radiographic diagnosis of periodontitis conducted by a dentist was described by evaluation metrics (i.e., the dice coefficient and intersection-over-union [IoU] score). Multi-Label U-Net achieved a dice coefficient of 0.96 and an IoU score of 0.97. Meanwhile, Mask R-CNN attained a dice coefficient of 0.87 and an IoU score of 0.74. Multi-Label U-Net performed semantic segmentation, whereas Mask R-CNN performed instance segmentation, with accuracy, precision, recall, and F1-score values of 95%, 85.6%, 88.2%, and 86.6%, respectively.

Conclusion

Multi-Label U-Net produced superior image segmentation to that of Mask R-CNN. The authors recommend integrating it with other techniques to develop hybrid models for automatic periodontitis detection.

Keywords: Radiography, Panoramic; Deep Learning; Periodontitis; Tooth

Introduction

Periodontitis, the most prevalent chronic inflammatory disease, is characterized by the progressive destruction of tooth-supporting structures. The loss of periodontal tissues contributes to tooth mobility and eventual tooth loss,1,2,3 which impairs mastication, swallowing, and digestion and thus permanently diminishes quality of life.4 Bone remodeling, inflammation, and periodontal regeneration can be associated with systemic diseases, medications,5 serum lipid impairment, and cardiovascular health.6 Thus, periodontitis has been linked to several systemic health problems. An accurate periodontitis diagnosis is crucial for effective treatment planning. The categorization of periodontitis has recently been updated into a multidimensional staging and grading system, and the previous "chronic" and "aggressive" forms are now merged into the single category of "periodontitis." According to the 2017 World Workshop on Classification of Periodontal and Peri-Implant Diseases and Conditions,1 periodontitis stages are determined by severity (based on the level of interdental clinical attachment loss, the degree of radiographic bone loss [RBL], and the number of teeth lost due to periodontitis), complexity, extent, and distribution. RBL related to periodontitis is commonly assessed on panoramic radiographs. Periodontitis staging indicates the severity of the condition and is used to assess the complexity of disease management. In addition to facilitating communication with patients and dental clinicians, periodontitis staging on panoramic radiographs based on RBL contributes to prognostication and treatment planning.

Digital radiography is advancing and driving emerging studies on pattern recognition and artificial intelligence to assist oral radiologists in delivering accurate and reproducible assessments.7 Image segmentation, which refers to the process of identifying essential image components, is a fundamental task in biomedical image processing. High-accuracy biomedical image segmentation using computer vision has become a substantial challenge, as it provides the basis for further image processing in numerous clinical applications.8 The U-Net architecture has been extensively studied for the segmentation of biomedical images due to its ability to generate precisely segmented images from a small amount of training data,8,9 and its popularity is shown by its widespread use with all major imaging modalities, such as computed tomography, magnetic resonance imaging, X-rays, and microscopy.9 However, research on U-Net models for the segmentation of panoramic radiographs for periodontitis staging, in comparison with other deep learning methods, remains limited. Meanwhile, several investigations have aimed to detect periodontitis on radiographic images using deep learning, including Faster R-CNN for digital panoramic radiographs,10 a deep convolutional neural network (CNN) algorithm for detecting alveolar bone loss in periapical radiographs,11 and deep learning hybrid methods for panoramic radiographs.12,13

Deep learning techniques are currently being widely applied to aid dentists and oral radiologists in assessing diseases with higher accuracy of radiographic observations, while saving time and preventing fatigue-related misdiagnoses. The image segmentation method proposed in this work is anticipated to contribute to the future development of these techniques. This study aimed to compare the performance of 2 computational vision models, namely Multi-Label U-Net and Mask R-CNN, in segmenting panoramic radiographs to detect and stage periodontitis according to RBL, and to determine the better segmentation method. These results would be helpful in developing further computer-assisted periodontitis diagnosis techniques based on radiographic findings.

Materials and Methods

Dataset

Digital panoramic radiographs were retrospectively collected from 100 patients who underwent panoramic radiographic examinations to support their oral treatments at the Dental Hospital of Universitas Gadjah Mada between May and June 2017. All the images were taken at this dental hospital for diagnostic and treatment planning purposes. No panoramic radiographs were primarily taken for this study, and only panoramic radiographs of acceptable diagnostic quality were included. Radiographs with poor quality due to an unusual head position, metal or motion artifacts, or the presence of deciduous teeth or severe tooth crowding were excluded from this study. This work was approved by the Committee of Health Research, Faculty of Dentistry, Universitas Gadjah Mada, Indonesia (Ref. 00482/KKEP/FKG-UGM/EC/2020). The panoramic radiographs were taken using Vatech Pax-I PCH-2500 (Vatech Global, Seoul, Korea) and exported in BMP format with dimensions of 2868×1504 pixels.

Figure 1 displays the flowchart of the study. In this study, image segmentation was performed in 2 steps: training and testing. As illustrated in Figure 1, training and testing were conducted sequentially. Training was performed as the initial step, starting with generating ground truth data. To obtain the ground truth data, a dentist and a periodontist collaborated to manually annotate all the digital panoramic radiographs. Before annotating the panoramic radiographs, they calibrated the staging of periodontitis by consensus to classify RBL according to the new (2017) classification of periodontal and peri-implant diseases1 (Table 1). This consensus served as a reference for radiographic annotations.

Fig. 1. Flowchart of training and testing in this study. IoU: intersection over union.


Table 1. Staging of periodontitis adapted from the periodontitis classification1.


Figure 2 presents the results of image annotation in this study. As shown in Figure 2, annotations were made by marking the alveolar crest and alveolar bone surrounding each tooth (square box on the panoramic radiograph). The staging of periodontitis was determined by a dentist (shown as a number in the upper left corner of the annotated box) based on the RBL1 (Table 1) and served as the ground truth for periodontitis classification during image segmentation. The alveolar bone surrounding severely crowded teeth was left unannotated in Figure 2. Annotations were made in the same way for the anterior and posterior regions and for horizontal and vertical bone loss. As long as the alveolar bone and alveolar crest could be observed, annotation was performed despite the presence of retained roots, tooth malposition, or mild tooth crowding.

Fig. 2. Annotated panoramic radiograph for image segmentation. The alveolar crest and alveolar bone of each tooth are annotated by drawing a square box around them. A number in the upper left corner of the annotated box indicates the RBL-based periodontitis staging. Annotations are not made on the alveolar bone surrounding severely crowded teeth. RBL: radiographic bone loss.


All annotated panoramic radiographs underwent image pre-processing through data augmentation and image resizing. Through data augmentation, 1000 images were produced from the 100 original digital panoramic radiographs. As a result, a total of 1100 images were used in this study, consisting of 100 original images and 1000 augmented images. For the purpose of image segmentation, datasets were obtained from the regions of interest (ROIs) of annotated areas on panoramic radiographs depicting RBL with various stages of periodontitis in the alveolar bone and interdental alveolar crest. The datasets in the study therefore comprised 5 classes (normal, stage 1 periodontitis, stage 2 periodontitis, stage 3 periodontitis, and stage 4 periodontitis), and each radiograph could contribute data for several stages of periodontitis. The 1100 images yielded 9907 annotated ROIs, which were used as the data in this study. The distribution of the data is presented in Figure 3. The numbers of ROIs for normal conditions, stage 1 periodontitis, stage 2 periodontitis, stage 3 periodontitis, and stage 4 periodontitis were 2233, 3118, 1538, 2461, and 557, respectively.

Fig. 3. Distribution of data based on the staging of periodontitis determined by radiographic bone loss on panoramic radiographs.


The 9907 ROIs from the original and augmented panoramic radiographs that had been annotated were further randomly divided into training (75%) and testing (25%) datasets. The training dataset was divided into training and validation datasets (Fig. 1) and then fed into deep learning algorithms to generate segmentation models. Two algorithm models were used for image segmentation: Multi-Label U-Net described by Dev et al.14 and Mask R-CNN from another previous study.15

The final step was testing the segmentation models produced in the training phase (Fig. 1). The testing dataset, which comprised 25% of the total sample, contained separate radiographs that were not used in the training phase. Both Multi-Label U-Net and Mask R-CNN classified periodontitis automatically based on the segmentation models generated during training. Annotated panoramic radiographs were used as the ground truth. The accuracy of Multi-Label U-Net and Mask R-CNN for detecting and classifying periodontitis was determined by comparing the segmentation results from the testing phase with the ground truth, quantified using the dice coefficient and intersection-over-union (IoU) score as segmentation metrics. If either metric fell below a threshold of 0.5, parameter modification or tuning was conducted. The best model produced during training was then employed for segmentation during testing to detect and classify the stages of periodontitis on panoramic radiographs.
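To make this workflow concrete, the sketch below performs the 75%/25% split and repeats parameter tuning while either metric remains below the 0.5 threshold. It is a minimal illustration assuming scikit-learn; build_model, train_fn, and evaluate_fn are hypothetical callables standing in for the training and evaluation steps described above, not the authors' code.

```python
# Minimal sketch of the workflow described above: a 75%/25% split of annotated
# ROIs, followed by tuning whenever either segmentation metric falls below 0.5.
# build_model, train_fn, and evaluate_fn are hypothetical callables standing in
# for the training and evaluation steps of Multi-Label U-Net or Mask R-CNN.
from sklearn.model_selection import train_test_split

def run_experiment(rois, labels, param_grid, build_model, train_fn, evaluate_fn):
    x_train, x_test, y_train, y_test = train_test_split(
        rois, labels, test_size=0.25, random_state=42, shuffle=True)

    best_model, best_scores = None, {"dice": 0.0, "iou": 0.0}
    for params in param_grid:                        # e.g. optimizer, learning rate
        model = build_model(**params)
        train_fn(model, x_train, y_train)            # training/validation split inside
        scores = evaluate_fn(model, x_test, y_test)  # returns {"dice": ..., "iou": ...}
        if scores["dice"] > best_scores["dice"]:
            best_model, best_scores = model, scores
        if scores["dice"] >= 0.5 and scores["iou"] >= 0.5:
            break                                    # acceptable model found; stop tuning
    return best_model, best_scores
```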

Pre-processing and image augmentation

Image rotation from −5° to 5° was applied as an augmentation technique to compensate for the unbalanced sample groups. For comparison, the same algorithm models were also trained without data augmentation and compared with their augmented counterparts. Pre-processing was conducted by resizing the annotated images to meet the input size specification of each architecture. An image size of 128×128 pixels is required as input for Multi-Label U-Net.14 Meanwhile, given that image size has no significant influence on Mask R-CNN, the original, unresized images were used as input for this model.
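A minimal sketch of this pre-processing, assuming Pillow, is shown below: each original radiograph is rotated by random angles between −5° and 5° to create 10 augmented variants (matching the 100 to 1100 expansion described above) and resized to 128×128 for Multi-Label U-Net. The exact pipeline used by the authors is not specified beyond these two operations.

```python
# Sketch of rotation-based augmentation (−5° to 5°) and resizing to 128×128,
# assuming Pillow; not the authors' exact pre-processing pipeline.
import random
from PIL import Image

def augment_and_resize(path, n_augmented=10, unet_size=(128, 128)):
    original = Image.open(path).convert("L")        # grayscale panoramic radiograph
    variants = [original]
    for _ in range(n_augmented):
        angle = random.uniform(-5.0, 5.0)           # small in-plane rotation
        variants.append(original.rotate(angle, resample=Image.BILINEAR))
    # Resizing applies only to Multi-Label U-Net input;
    # Mask R-CNN receives the unresized images.
    return [img.resize(unet_size, resample=Image.BILINEAR) for img in variants]
```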

Multi-Label U-Net

The Multi-Label U-Net model used in this work was previously developed for analyzing sky and cloud images.14 This U-Net model contains several convolution layers with down-sampling, followed by up-sampling and further convolution layers until the output is the same size as the input (128×128), giving the network its characteristic U shape. U-Net was first proposed for the segmentation of electron microscopy images in biomedical applications.16 In this study, similar biomedical images, in the form of plain radiographs, were segmented to detect 5 classes (i.e., normal conditions and 4 periodontitis stages). Multi-Label U-Net, which was originally used to produce a ternary mask for segmenting multiple types of cloud images, was therefore adapted to this task. The segmentation output uses distinct grayscale values (0, 64, 128, 192, and 255), which were used in this study to represent the background and the classes of normal conditions and stage 1 to stage 4 periodontitis.
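For illustration, the following Keras sketch shows a small U-Net of the kind described here: a contracting (down-sampling) path, an expanding (up-sampling) path with skip connections, a 128×128 input, and a softmax output over 6 labels (background plus the 5 periodontal classes). The depth and filter counts are assumptions for illustration, not the configuration reported by Dev et al.14

```python
# Illustrative Keras U-Net with a 128x128 input and a 6-label softmax output
# (background + normal + 4 periodontitis stages). Depth and filter counts are
# assumptions, not the exact Multi-Label U-Net configuration of Dev et al.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(128, 128, 1), n_classes=6):
    inputs = layers.Input(input_shape)
    # Contracting path (encoder)
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 128)                        # bottleneck
    # Expanding path (decoder) with skip connections to the encoder
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.concatenate([u1, c1]), 32)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c5)
    return Model(inputs, outputs)
```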

Mask R-CNN

The Mask R-CNN algorithm builds on the earlier Faster R-CNN model.15 Mask R-CNN first uses a CNN backbone to extract feature maps, and a region proposal network then generates candidate regions from these maps. Next, the RoIAlign operation maps each proposed region onto the feature maps and passes the result to fully connected (FC) and convolutional layers. The FC layer is split into 2 branches: a softmax layer, which provides the class label, and a regression layer, which refines the bounding box. Finally, a convolutional branch generates a binary mask for each class. Mask R-CNN also uses transfer learning from large-scale data (i.e., training weights from the COCO dataset).17
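The configuration below sketches how such a model might be set up with a ResNet-101 backbone and COCO transfer weights, assuming the widely used Matterport Keras implementation of Mask R-CNN; the paper does not specify which implementation was used, so this is illustrative only, and the dataset names are hypothetical.

```python
# Sketch of a Mask R-CNN configuration with a ResNet-101 backbone and COCO
# transfer weights, assuming the Matterport Keras implementation
# (github.com/matterport/Mask_RCNN); illustrative only, not the authors' code.
from mrcnn.config import Config
from mrcnn import model as modellib

class PerioConfig(Config):
    NAME = "periodontitis"
    NUM_CLASSES = 1 + 5          # background + normal + 4 periodontitis stages
    BACKBONE = "resnet101"       # the backbone that performed best in this study
    LEARNING_RATE = 1e-3         # optimal learning rate reported for Mask R-CNN
    IMAGES_PER_GPU = 1

config = PerioConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs")

# Transfer learning: start from COCO weights, excluding the class-specific heads.
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])

# train_dataset and val_dataset would be mrcnn.utils.Dataset subclasses holding
# the annotated ROIs (hypothetical names, not from the paper):
# model.train(train_dataset, val_dataset,
#             learning_rate=config.LEARNING_RATE, epochs=30, layers="heads")
```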

Evaluation methods for image segmentation

The output of the Multi-Label U-Net and Mask R-CNN models was evaluated to validate the data and determine the reliability of image segmentation for periodontitis detection and classification. Two related metrics were used to measure performance in terms of the similarity between predicted segmentation and ground truth images (i.e., the annotated area on a panoramic radiograph).

Two metrics were used to evaluate image segmentation: the Sørensen dice coefficient and the IoU score (Jaccard similarity index). In the image dataset, several ground truth regions were annotated. The model developed during training was validated by computing the dice score, which quantifies object similarity and is defined as the ratio of twice the overlap of two segmentations to the combined size of the two objects. IoU is a metric for assessing the accuracy of object detection on a given dataset by measuring the overlap between 2 bounding boxes or masks. If the predicted and ground truth bounding boxes overlap perfectly, then IoU=1. The formulas for the dice coefficient and IoU score (Jaccard similarity index) are shown in Equations (1) and (2):

Dice(A, B) = 2|A ∩ B| / (|A| + |B|)   (1)
Jaccard(A, B) = |A ∩ B| / |A ∪ B|   (2)

where A is the predicted segmentation mask and B is the ground truth mask. The symbol ∩ represents intersection, while ∪ represents union. Equations 1 and 2 were applied to Multi-Label U-Net and Mask R-CNN to produce the dice coefficient and IoU score as evaluation metrics. In this study, these 2 metrics were then used to evaluate the results of image segmentation of panoramic radiographs for periodontitis detection.
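A minimal NumPy sketch of Equations (1) and (2) applied to binary masks is given below; it is for illustration and is not the evaluation code used in the study.

```python
# Dice coefficient and IoU (Jaccard) for binary masks, following Eqs. (1) and (2).
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def iou_score(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return (intersection + eps) / (union + eps)

# Example: two 4x4 masks whose foregrounds overlap in 2 of 3 pixels each.
a = np.zeros((4, 4)); a[0, :3] = 1
b = np.zeros((4, 4)); b[0, 1:4] = 1
print(round(dice_coefficient(a, b), 3), round(iou_score(a, b), 3))  # ~0.667, 0.5
```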

Results

Image segmentation by Multi-Label U-Net

The default parameters for training the ternary mask of Multi-Label U-Net were based on a previous study.14 The default parameters used for training are listed in Table 2 and served as the baseline for the subsequent parameter analyses. During training, the model loss was calculated to evaluate the learning process.

Table 2. Multi-Label U-Net parameters used in the training process.


The effect of image augmentation on the evaluation metrics for Multi-Label U-Net was investigated, and the results are shown in Figure 4. Data augmentation was found to affect the evaluation metrics for Multi-Label U-Net. Without augmentation, the dataset would have been constrained by its small size and lack of variation, leading to model underfitting. Image augmentation added new image data for Multi-Label U-Net and improved its performance by enabling it to learn from greater data variation.

Fig. 4. Effects of data augmentation on the evaluation metrics of Multi-Label U-Net and Mask R-CNN. IoU: intersection over union.


The effect of tuning parameters, such as the optimizer and learning rate, on the evaluation metrics was also investigated, and the results are shown in Figure 5. The learning rate was found to be critical for both evaluation metrics. The optimal learning rate was 1×10⁻⁴, and values less than or greater than 1×10⁻⁴ tended to produce a suboptimal model.
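For reference, configuring this optimizer and learning-rate grid in Keras might look like the sketch below; the candidate learning rates listed are assumptions for illustration.

```python
# Sketch of an optimizer / learning-rate grid of the kind examined in Figure 5,
# assuming a Keras model; the candidate learning rates are illustrative, and
# RMSprop with 1e-4 gave the best metrics in this study.
from tensorflow.keras.optimizers import Adam, RMSprop

def compile_for_trial(model, optimizer_name, learning_rate):
    optimizers = {"rmsprop": RMSprop, "adam": Adam}
    model.compile(optimizer=optimizers[optimizer_name](learning_rate=learning_rate),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

trial_grid = [(name, lr) for name in ("rmsprop", "adam")
              for lr in (1e-3, 1e-4, 1e-5)]
```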

Fig. 5. Effects of the optimizer and learning rate on the evaluation metrics of Multi-Label U-Net.


The best segmentation model on panoramic radiographs was identified for the classification of periodontitis staging. The model architecture was stored in a .json file, and the trained weights were stored in an .h5 file. A snippet of the periodontitis segmentation ternary mask output is displayed in Figure 6. As illustrated in Figure 5, the dice coefficient and IoU score of Multi-Label U-Net were 0.97 and 0.98, respectively. The best evaluation metrics of Multi-Label U-Net were obtained from the combination of the RMSprop optimizer with a 1×10⁻⁴ learning rate.

Fig. 6. Segmentation result of the Multi-Label U-Net model.

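For reference, a model saved as a .json architecture file with .h5 weights, as described above, can typically be reloaded in Keras as sketched below; the file names are placeholders, not those used by the authors.

```python
# Reloading a segmentation model saved as a .json architecture file plus .h5
# weights in Keras; the file names are placeholders.
from tensorflow.keras.models import model_from_json

with open("multilabel_unet.json") as f:
    model = model_from_json(f.read())
model.load_weights("multilabel_unet_weights.h5")

# The reloaded model can then segment resized (128x128) panoramic ROIs, e.g.:
# masks = model.predict(test_images)   # test_images: (N, 128, 128, 1) array
```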

Image segmentation by Mask R-CNN

Similar to Multi-Label U-Net training, Mask R-CNN also used augmented images divided into training and testing datasets at a proportion of 0.75 : 0.25. The parameters of the training process in Mask R-CNN are listed in Table 3.

Table 3. Mask R-CNN parameters used in the training process16.


The effect of data augmentation on the evaluation metrics for Mask R-CNN was investigated, and the results are presented in Figure 4. Both evaluation metrics were improved by using data augmentation in the Mask R-CNN algorithm. Nevertheless, Figure 4 reveals that the evaluation metrics of Mask R-CNN were not as good as those of Multi-Label U-Net.

The influence of parameter tuning (i.e., the backbone and learning rate) on the evaluation metrics of Mask R-CNN is shown in Figure 7. The learning rate also played a significant role in both evaluation metrics. The optimal learning rate value for Mask R-CNN was 1×10⁻³.

Fig. 7. Effects of the Mask R-CNN backbone and learning rate on the evaluation metrics.


A comparison of an output snippet between a periodontitis segmentation mask and a ground truth mask is shown in Figure 8. The results of image segmentation using Mask R-CNN, as presented in Figure 7, revealed that the dice coefficient and IoU score of Mask R-CNN were 0.87 and 0.74, respectively. The best evaluation metrics for Mask R-CNN were achieved using the ResNet-101 backbone with a 1×10⁻³ learning rate.

Fig. 8. Segmentation result of the Mask R-CNN model. The ground truth is represented by a green bounding box, and the detection result is represented by a red bounding box. The dice coefficient and IoU score are presented in the caption above the bounding box. IoU: intersection over union.


U-Net has the characteristic of semantic segmentation, whereas Mask R-CNN performs instance segmentation. The latter allows the classification performance for each individual object to be measured using classification metrics. The performance of Mask R-CNN based on evaluation metrics for each stage of periodontitis is summarized in Figure 9. When Mask R-CNN was used on the testing data, the detection accuracy was 95%, with an average precision of 0.86, recall (sensitivity) of 0.88, and F1-score of 0.87 (Fig. 9). As shown in Figure 9, using Mask R-CNN as the segmentation method provided the best detection results for stage 4 periodontitis, with a precision of 0.97, recall of 0.95, and F1-score of 0.96. These classification metrics indicate that Mask R-CNN performed well in diagnosing periodontitis when its segmentation results were compared with the ground truth images.

Fig. 9. The performance of Mask R-CNN for periodontitis staging.


Discussion

Periodontitis staging is critical for determining disease severity and carefully planning comprehensive treatment strategies. RBL is among the variables that can be used to identify periodontitis stage. RBL less than 15% in the coronal third of the tooth root indicates stage I periodontitis, RBL ranging between 15% and 33% indicates stage II, and RBL extending to the middle third of the root and beyond indicates stages III and IV (Table 1).1 Panoramic radiographs are versatile images that can support dental and periodontal diagnoses and could be used to harvest information to build artificial intelligence models based on computer vision through several processes, such as image segmentation. The automatic detection and classification of periodontitis and other diseases can save time, reduce human error due to fatigue and an excessive workload,18 and aid oral radiologists in performing accurate and reproducible assessments.7
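To make these RBL thresholds concrete, the sketch below maps a bone loss percentage to a provisional stage; note that stages III and IV cannot be separated by RBL alone, since the full classification also considers tooth loss and case complexity.1

```python
# Provisional periodontitis staging from radiographic bone loss (RBL) alone,
# following the thresholds cited above. Stages III and IV cannot be separated
# by RBL alone; additional clinical criteria are required.
def provisional_stage_from_rbl(rbl_percent):
    if rbl_percent < 15:
        return "Stage I (RBL in coronal third, <15%)"
    if rbl_percent <= 33:
        return "Stage II (RBL 15-33%)"
    return "Stage III/IV (RBL extending to middle third of root and beyond)"

print(provisional_stage_from_rbl(20))  # Stage II (RBL 15-33%)
```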

Image segmentation is commonly conducted by identifying important image components and then breaking down or partitioning the image into separate homogeneous regions. In most cases, segmentation is followed by classification.8,19 Owing to shape irregularities and natural anatomical variations, the segmentation of medical and dental radiographs has become a considerable challenge. Inadequate imaging modalities can lead to low contrast, imbalanced exposure, noise, and a variety of image artifacts that complicate medical image segmentation.20 High-accuracy medical image segmentation must use computer vision to provide a foundation for further image processing in a range of diagnostic applications.8

A lack of variation in the dataset frequently leads to model underfitting.21 In this work, data augmentation was used to overcome the problem of an unbalanced dataset distribution (Fig. 3). Image augmentation produces a large number of image variations and increases data diversity. In line with previous studies,12,22,23 image rotation was used for data augmentation in this study. Several other techniques, such as horizontal inversion10,12,23 and shifts of vertical alignment, brightness, sharpness, or contrast, can also be applied for data augmentation in the image segmentation of panoramic radiographs.22 As presented in Figure 4, augmentation was performed for both Multi-Label U-Net and Mask R-CNN and substantially improved the dice coefficient and IoU score of both deep learning models. The evaluation metrics of Multi-Label U-Net tended to be higher than those of Mask R-CNN (Fig. 4).

The dice coefficient and IoU score were used to evaluate the image segmentation methods. As shown in Figures 5 and 7, both methods were effective in segmenting periodontitis on panoramic radiographs. In previous studies, the dice coefficient has also been utilized to evaluate the machine learning-based automatic segmentation of teeth on panoramic radiographs,24 in a deep learning hybrid method for diagnosing periodontal bone loss and stage periodontitis,12 and in panoramic radiographs using Mask R-CNN and a novel calibration method for diagnosing periodontitis.13

Because it divides the intersection by the union of the two areas, IoU has been widely used for quantifying the similarity between predicted and ground truth areas.25,26 This metric has also been applied to assess a complete deep learning Mask R-CNN,27 compare 4 segmentation algorithms (U-Net, DCU-Net, DoubleU-Net, and Nano-Net) for dental segmentation in panoramic radiographs,23 and evaluate the ability of U-Net to detect caries lesions on bitewing radiographs.25

The use of deep learning in computer vision for medical image analysis has increased in recent decades. U-Net is a semantic segmentation model that employs convolutional layers using a symmetrical network architecture.23 Its architecture is defined by the pattern of convolutional network layers aligned in a U shape and the use of skip connections between them. The encoder is the left part of the “U”, and the decoder is the right part. The model condenses the input in the encoder section, thereby increasing contextual information but decreasing precise positional information about objects. Through skip connections between the encoder and decoder layers, the decoder layer expands and combines contextual information with precise information about object locations.25

The U-Net architecture has been successfully implemented with CNNs in a variety of vision tasks involving medical image segmentation and is capable of segmenting images with sparse training data.8,9 This semantic segmentation technique has also been applied in the segmentation of panoramic radiographs. Various U-Net models are effective for tooth segmentation in panoramic radiographs.23 U-Net also shows acceptable to high accuracy for the detection of proximal caries on bitewing radiographs.25 Although U-Net is a method for semantic segmentation, it has the potential for automatic periodontitis detection when combined with other classification techniques.

The effect of parameter tuning on the evaluation metrics of Multi-Label U-Net is shown in Figure 5. The learning rate was found to be important for both evaluation metrics, and 1×10⁻⁴ was the optimal value. A model with a learning rate less than or greater than 1×10⁻⁴ might be suboptimal. In some cases, Multi-Label U-Net models trained with excessively high learning rates were incapable of learning anything and produced a value of 0 for both evaluation metrics. During training, the learning rate defines how the model's error weights are updated. With too low a learning rate, gradient descent does not reach the loss minimum, whereas too high a learning rate overshoots it. The other parameter, the optimizer, played a minor role compared with the learning rate. With the optimal learning rate of 1×10⁻⁴, RMSprop produced a better model than the Adam optimizer.

The effects of tuning the backbone and learning rate of Mask R-CNN on the evaluation metrics are shown in Figure 7. The learning rate had a significant impact on both evaluation metrics. For Mask R-CNN, the optimal learning rate was 1×10⁻³. The highest learning rate tested, 1×10⁻², generated values of 0 for the dice coefficient and IoU score of Mask R-CNN. Meanwhile, the other parameter (namely, the CNN backbone) had a minor role compared with the learning rate. The ResNet-101 backbone has more parameters and deeper layers than ResNet-50; therefore, ResNet-101 yielded a better model than ResNet-50 at the optimal learning rate of 1×10⁻³. However, both backbones produced values of 0 for the dice coefficient and IoU score when the learning rate was 1×10⁻².

Image segmentation is used to identify, localize, and classify each individual object in an image. Semantic segmentation categorizes each pixel of an object without classifying objects individually. Instance segmentation combines these 2 common machine learning tasks (detection and semantic segmentation) by classifying, localizing, and segmenting each detected object. Mask R-CNN is an extension of Faster R-CNN that adds a branch of convolutional networks to perform instance segmentation.17 In this study, Mask R-CNN showed lower performance in image segmentation than Multi-Label U-Net. However, as an instance segmentation method, Mask R-CNN can be further developed for the segmentation and classification of periodontitis. Since Mask R-CNN performs instance segmentation, it can distinguish each interdental region individually and assign it a periodontitis stage on panoramic radiographs. This technique has recently been applied to automate tooth segmentation27 and for the instance segmentation of non-X-ray images, such as dental hyperspectral images.26

The results of this study indicated that Multi-Label U-Net outperformed Mask R-CNN in image segmentation for periodontitis detection on panoramic radiographs. However, Multi-Label U-Net performed semantic segmentation. As a result, the identification of periodontitis was limited to blocks of teeth that were not specific to the individual level of RBL in the interdental alveolar crest.

In conclusion, compared with Mask R-CNN, Multi-Label U-Net generated better segmentation for panoramic radiographs to support periodontal disease diagnosis. Since Multi-Label U-Net is a semantic segmentation method, the authors recommend integrating it with other techniques to develop hybrid computational vision models for automatic periodontitis detection on panoramic radiographs.

Footnotes

This work was funded by the Faculty of Dentistry Universitas Gadjah Mada under the grant “Penelitian Dana Masyarakat FKG UGM 2020” (Ref No. 3651/UN1/FKG1/Set.KG1/LT/2020).

Conflicts of Interest: None

References

1. Papapanou PN, Sanz M, Buduneli N, Dietrich T, Feres M, Fine DH, et al. Periodontitis: consensus report of workgroup 2 of the 2017 World Workshop on the Classification of Periodontal and Peri-Implant Diseases and Conditions. J Periodontol. 2018;89 Suppl 1:S173–S182. doi: 10.1002/JPER.17-0721.
2. Teeuw WJ, Coelho L, Silva A, van der Palen CJ, Lessmann FG, van der Velden U, et al. Validation of a dental image analyzer tool to measure alveolar bone loss in periodontitis patients. J Periodontal Res. 2009;44:94–102. doi: 10.1111/j.1600-0765.2008.01111.x.
3. Preshaw PM. Detection and diagnosis of periodontal conditions amenable to prevention. BMC Oral Health. 2015;15 Suppl 1:S5. doi: 10.1186/1472-6831-15-S1-S5.
4. Hanindriyo L, Widita E, Widyaningrum R, Priyono B, Agustina D. Influence of residential characteristics on the association between the oral health status and BMI of older adults in Indonesia. Gerodontology. 2018;35:268–275. doi: 10.1111/ger.12352.
5. Zia A, Hakim S, Khan AU, Bey A, Ateeq H, Parveen S, et al. Bone markers and bone mineral density associates with periodontitis in females with poly-cystic ovarian syndrome. J Bone Miner Metab. 2022;40:487–497. doi: 10.1007/s00774-021-01302-6.
6. Widita E, Hanindriyo L, Widyaningrum R, Priyono B, Agustina D. The association between periodontal conditions and serum lipids among elderly participants in Gadjah Mada medical centre, Yogyakarta. J Dent Indones. 2017;24:63–69.
7. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJ. Artificial intelligence in radiology. Nat Rev Cancer. 2018;18:500–510. doi: 10.1038/s41568-018-0016-5.
8. Gadosey PK, Li Y, Adjei Agyekum E, Zhang T, Liu Z, Yamak PT, et al. SD-UNet: stripping down U-Net for segmentation of biomedical images on platforms with low computational budgets. Diagnostics (Basel). 2020;10:110. doi: 10.3390/diagnostics10020110.
9. Zhou Z, Siddiquee MM, Tajbakhsh N, Liang J. UNet++: redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans Med Imaging. 2020;39:1856–1867. doi: 10.1109/TMI.2019.2959609.
10. Thanathornwong B, Suebnukarn S. Automatic detection of periodontal compromised teeth in digital panoramic radiographs using faster regional convolutional neural networks. Imaging Sci Dent. 2020;50:169–174. doi: 10.5624/isd.2020.50.2.169.
11. Alotaibi G, Awawdeh M, Farook FF, Aljohani M, Aldhafiri RM, Aldhoayan M. Artificial intelligence (AI) diagnostic tools: utilizing a convolutional neural network (CNN) to assess periodontal bone level radiographically-a retrospective study. BMC Oral Health. 2022;22:399. doi: 10.1186/s12903-022-02436-3.
12. Chang HJ, Lee SJ, Yong TH, Shin NY, Jang BG, Kim JE, et al. Deep learning hybrid method to automatically diagnose periodontal bone loss and stage periodontitis. Sci Rep. 2020;10:7531. doi: 10.1038/s41598-020-64509-z.
13. Jiang L, Chen D, Cao Z, Wu F, Zhu H, Zhu F. A two-stage deep learning architecture for radiographic staging of periodontal bone loss. BMC Oral Health. 2022;22:106. doi: 10.1186/s12903-022-02119-z.
14. Dev S, Manandhar S, Lee YH, Winkler S. Multi-label cloud segmentation using a deep network. In: 2019 USNC-URSI Radio Science Meeting (Joint with AP-S Symposium); 2019 Jul 7-12; Atlanta. IEEE; 2019. p. 113–114.
15. He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. IEEE Trans Pattern Anal Mach Intell. 2020;42:386–397. doi: 10.1109/TPAMI.2018.2844175.
16. Falk T, Mai D, Bensch R, Çiçek Ö, Abdulkadir A, Marrakchi Y, et al. U-Net: deep learning for cell counting, detection, and morphometry. Nat Methods. 2019;16:67–70. doi: 10.1038/s41592-018-0261-2.
17. Loh R, Yong WX, Yapeter J, Subburaj K, Chandramohanadas R. A deep learning approach to the screening of malaria infection: automated and rapid cell counting, object detection and instance segmentation using Mask R-CNN. Comput Med Imaging Graph. 2021;88:101845. doi: 10.1016/j.compmedimag.2020.101845.
18. Zhou LQ, Wang JY, Yu SY, Wu GG, Wei Q, Deng YB, et al. Artificial intelligence in medical imaging of the liver. World J Gastroenterol. 2019;25:672–682. doi: 10.3748/wjg.v25.i6.672.
19. Schwendicke F, Golla T, Dreher M, Krois J. Convolutional neural networks for dental image diagnostics: a scoping review. J Dent. 2019;91:103226. doi: 10.1016/j.jdent.2019.103226.
20. Abdi AH, Kasaei S, Mehdizadeh M. Automatic segmentation of mandible in panoramic X-ray. J Med Imaging (Bellingham). 2015;2:044003. doi: 10.1117/1.JMI.2.4.044003.
21. Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J Big Data. 2019;6:60. doi: 10.1186/s40537-021-00492-0.
22. Kim J, Lee HS, Song IS, Jung KH. DeNTNet: Deep Neural Transfer Network for the detection of periodontal bone loss using panoramic dental radiographs. Sci Rep. 2019;9:17615. doi: 10.1038/s41598-019-53758-2.
23. da Silva Rocha É, Endo PT. A comparative study of deep learning models for dental segmentation in panoramic radiograph. Appl Sci (Basel). 2022;12:3103.
24. Kanuri N, Abdelkarim AZ, Rathore SA. Trainable WEKA (Waikato Environment for Knowledge Analysis) segmentation tool: machine-learning-enabled segmentation on features of panoramic radiographs. Cureus. 2022;14:e21777. doi: 10.7759/cureus.21777.
25. Cantu AG, Gehrung S, Krois J, Chaurasia A, Rossi JG, Gaudin R, et al. Detecting caries lesions of different radiographic extension on bitewings using deep learning. J Dent. 2020;100:103425. doi: 10.1016/j.jdent.2020.103425.
26. Lian L, Zhu T, Zhu F, Zhu H. Deep learning for caries detection and classification. Diagnostics (Basel). 2021;11:1672. doi: 10.3390/diagnostics11091672.
27. Lee JH, Han SS, Kim YH, Lee C, Kim I. Application of a fully deep convolutional neural network to the automation of tooth segmentation on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol. 2019;129:635–642. doi: 10.1016/j.oooo.2019.11.007.
