Abstract
Background/purpose
Artificial intelligence (AI) can optimize treatment approaches in dental healthcare owing to its high accuracy and wide range of applications. This study proposes a new deep learning (DL) ensemble model based on deep convolutional neural network (CNN) algorithms to predict tooth position, detect tooth shape, detect remaining interproximal bone level, and detect radiographic bone loss (RBL) on periapical and bitewing radiographs.
Materials and methods
Radiographs were collected from 270 patients treated between January 2015 and December 2020, and all images were deidentified for this study. A total of 8000 periapical radiographs containing 27,964 teeth were included for our model. A novel ensemble model was built from AI algorithms using the YOLOv5 model and the VIA labeling platform, together with VGG-16 and U-Net architectures. Results of the AI analysis were compared with clinicians' assessments.
Results
The DL-trained ensemble model reached approximately 90% accuracy on periapical radiographs: 88.8% for tooth position detection, 86.3% for tooth shape detection, 92.61% for periodontal bone level detection, and 97.0% for radiographic bone loss detection. The AI model outperformed dentists, whose mean detection accuracy ranged from 76% to 78%.
Conclusion
The proposed DL-trained ensemble model provides a critical cornerstone for radiographic detection and a valuable adjunct to periodontal diagnosis. Its high accuracy and reliability indicate the model's strong potential to enhance clinical professional performance and build more efficient dental health services.
Keywords: Convolutional neural networks (CNN), YOLO, Tooth position, Tooth shape, Bone level
Introduction
Artificial intelligence (AI) seeks to create human-like intelligence that can be applied in medical fields to increase the quality of patient care. AI can perform repetitive work usually tasked to humans, simplify complicated processes, and has helped solve healthcare problems and improve patient outcomes around the world.1 Some examples of the nearly infinite possibilities for AI use in precision medicine include systems to diagnose disease and surgical robotics to assist during operations.2,3 More recently, AI-based infrastructure has increased efficiency in health services and could potentially be used in other medical-related business processes such as automated dental insurance approval.
Deep learning (DL) AI-assisted dental image processing tools are under development and have been garnering increasing attention from researchers. These tools incorporate advanced image processing and machine learning algorithms that process image datasets collected by dentists and can analyze enormous amounts of varied information.4 Automatic detection of the most common dental diseases has already been implemented, including bone loss due to periodontal disease and dental caries.5, 6, 7, 8 The ability to decrease human error while improving diagnostic efficiency and accuracy has made AI attractive to clinicians seeking better-quality diagnosis and treatment support.
DL is a form of machine learning in which computers are trained to automatically extract image attributes. DL-trained AI uses convolutional neural networks (CNNs) to label dental image datasets and has been shown to be suitable for use in medical and dental applications.9,10 CNNs are the backbone of deep learning models that use computer vision in training work, though actual implementations involve a variety of methods and framework systems. For example, ResNet, Inception, and plain CNNs have been used for classification, while the YOLO and Detectron2 frameworks have been used for detection and segmentation tasks that increase accuracy by predicting an object's bounding box. Recent studies have investigated DL's application across a wide variety of medical conditions and clinical situations, such as tooth detection, localization, and oral cancer identification.11,12 In short, accuracy and breadth of use make AI a promising tool for optimizing treatment approaches and shaping future dental healthcare trends.
In clinical settings, periodontal conditions can be evaluated using visual and tactile examinations. Measurements of periodontal pocket depth (PPD), bleeding on probing (BOP), and clinical attachment loss (CAL) remain the gold standard for examination, while radiographs are used to confirm diagnosis and treatment plans. However, discrepancies related to probe tip diameter, angulation, probing force, and intra-examiner differences can lead to different outcomes.13,14 Moreover, in cases with mild attachment loss or subgingival cementoenamel junction (CEJ) localization, accurate CAL determination is challenging because the CEJ location is difficult to identify. In such cases, precise and reliable assessment depends on the interpretation of the interproximal radiographic bone level, since buccal and lingual bone cannot be assessed on radiographs. Nevertheless, interpretations of radiographs may vary with dentists' expertise and experience. Thus, an automated assistant system that evaluates the remaining interproximal bone level from intraoral bitewing and periapical radiographs would help to obtain an accurate and reliable periodontal diagnosis.15 Although many complicated problems have been solved by DL-based computer-aided diagnosis using dental radiographs, few studies have used AI algorithms to perform comprehensive diagnosis and evaluation of interproximal bone level. More specifically, previous research applying deep CNN architectures to identify and detect periodontal bone level relied on limited sample images and follow-up time, so the resulting accuracy did not approach dentists' diagnoses.6,7,16,17 Because staging frameworks for clinical practice differ, these models' value for clinical application and decision-making remains limited. Under ideal conditions, a well-trained CNN model should achieve approximately 90% accuracy across various detection tasks on dental images.
The aim of this study was to propose a novel objective method for automatic feature detection on periapical radiographs based on CNN-model AI algorithms and to compare it with conventional examiner assessments. AI algorithms using VGG-16 and U-Net architectures were developed to train deep learning models for four categories of periodontal circumstances: tooth position detection, tooth shape detection, remaining interproximal bone level detection, and radiographic bone loss detection. DL-based CNN algorithms applied to periapical radiographic images are expected to provide a reliable reference for diagnosis, prediction, and interpretation.
Materials and methods
Input data gathering and general preprocessing
This study was approved by the Institutional Review Board (IRB) of Taipei Medical University (approval No. N202008018) and carried out at the Dental Department of Taipei Medical University's Shuang-Ho Hospital, Taiwan. The study protocol followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement guidelines. Periapical radiographic datasets were collected from 2015 to 2020, and all X-ray images were anonymized to remove any private information. A total of 270 subjects from Taipei Medical University (TMU-DATA) were included in this study. Study population characteristics are summarized in Table 1. A total of 8000 periapical radiographs of 27,964 teeth were collected to analyze: (1) tooth position detection, (2) tooth shape detection and segmentation, (3) bone level detection and segmentation, and (4) radiographic bone loss detection. This study used a convolutional ensemble model with 16 convolutional layers (Fig. 1). The model was trained with the training dataset and then validated with the validation image dataset. During general preparation, two labeling steps were implemented: class-type data labeling and annotation labeling. Class-type labeling names each image by tooth position in accordance with its tooth category. Annotation labeling was performed by dentists, who labeled periapical radiographs in the VGG Image Annotator (VIA) with polylines and/or polygons to determine the semantic border of each tooth, the bone level, and other information.18 To reduce inter-observer variability and ensure the accuracy and consistency of the annotations, we recruited five senior clinical dentists with periodontal training and radiology backgrounds to establish annotation standards for teeth and related information according to the annotation guidelines. After annotation labeling was completed, each assignment was reviewed by the other senior dentists for quality control, ensuring the consistency and accuracy of the annotations. The completed annotation data (including x-y coordinates of the instances' semantic boundaries) were then stored in JSON format and imported into the training process to train the segmentation and detection models (a minimal loading sketch is shown below).
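The annotation files exported from VIA can be read back with a few lines of Python. The sketch below assumes the VIA 2.x JSON export format (records keyed by file, regions carrying `shape_attributes` with `all_points_x`/`all_points_y`); the file path is a placeholder.

```python
import json

# Load a VIA project export (placeholder path)
with open("via_annotations.json") as f:
    project = json.load(f)

for key, record in project.items():
    filename = record["filename"]
    for region in record["regions"]:
        shape = region["shape_attributes"]            # "polygon" or "polyline"
        xs, ys = shape["all_points_x"], shape["all_points_y"]
        labels = region.get("region_attributes", {})  # e.g., tooth number
        print(filename, shape["name"], len(xs), labels)
```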
Table 1.
Baseline demographic characteristics of the 270 subjects included in this study.
| Characteristic | Total |
|---|---|
| Number of patients | 270 |
| Female/male | 153/117 |
| Age (years) | 59.81 ± 10.53 |
| <60 yrs | 129 (47.7%) |
| ≥60 yrs | 141 (52.3%) |
| History of periodontitis, n (%) | 173 (64%) |
| Bone height condition | |
| Mild (≤3 mm) | 74 (27.4%) |
| Moderate (3–5 mm) | 124 (45.9%) |
| Severe (≥5 mm) | 72 (26.6%) |
Figure 1.
Overall architecture of the Mask R-CNN model. From left to right: X-ray image (640 × 640 pixels) input, an ensemble CNN model with convolutional layers to extract representative features, and training variables and an error function set using a fully connected network with a softmax function to classify the output.
Tooth position detection
A transfer learning algorithm using the YOLOv5 (https://github.com/ultralytics/yolov5) pre-trained model was used for real-time object detection of tooth position. The Fédération Dentaire Internationale (FDI) tooth numbering system was used, which labels upper teeth 18–11 and 21–28 from right to left, and lower teeth 48–41 and 31–38 from right to left. Bounding box annotation data are the same as the tooth segmentation data obtained from the VIA labeling platform. Approximately 1600 images were selected for annotation by tooth number and checked by dentists. Because the number of wisdom tooth labels was much lower than that of other tooth positions, wisdom teeth were excluded to prevent data imbalance. Input images were 640 × 640 pixels for YOLOv5m, as sketched below.
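As a rough illustration of this step, the sketch below loads the pre-trained YOLOv5m model from the Ultralytics hub and runs inference at the 640 × 640 input size; the image path is a placeholder, and fine-tuning on the FDI-labeled set would be done separately with the repository's train.py script.

```python
import torch

# Load YOLOv5m with pre-trained weights from the Ultralytics hub
model = torch.hub.load("ultralytics/yolov5", "yolov5m", pretrained=True)

# Run inference on a periapical radiograph at 640 x 640
results = model("periapical_example.png", size=640)
results.print()           # per-detection class, confidence, and bounding box
boxes = results.xyxy[0]   # tensor of (x1, y1, x2, y2, confidence, class)
```

A model fine-tuned on the annotated radiographs would map the class indices to FDI tooth numbers.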
Tooth shape detection and segmentation
For tooth shape detection and segmentation, teeth were located with both bounding boxes and semantic boundaries. The overall flow diagram of the proposed tooth shape detection is shown in Fig. 2. Tooth information obtained from VIA was used directly, without additional preprocessing before training. Semantic boundaries were determined using x-y coordinates obtained from VIA, and bounding boxes were determined by the leftmost, rightmost, top, and bottom coordinates (straight lines extended from these four points form a rectangular box, as in the sketch below).
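A minimal sketch of this bounding box derivation; the function name is chosen for illustration.

```python
def polygon_to_bbox(xs, ys):
    """Return (x_min, y_min, x_max, y_max) from a VIA tooth polygon,
    i.e., the box formed by its leftmost, rightmost, top, and bottom points."""
    return min(xs), min(ys), max(xs), max(ys)

# Example with a rough tooth outline in pixel coordinates
print(polygon_to_bbox([120, 180, 190, 130], [40, 35, 210, 215]))
# -> (120, 35, 190, 215)
```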
Figure 2.
Flow diagram of the Mask R-CNN method for tooth shape detection. Large regions of interest are scanned, and anchor box edges are refined when an object is found. Each region is run through a classification algorithm to classify the object. The segmentation model's ResNet encoder generates tooth masks for the objects based on the anchor boxes. Finally, masks are processed and overlaid on the original image.
Bone level detection
Demarcation lines between the bone level and the gingival area were zoomed and highlighted with bounding boxes and semantic boundaries using basic information obtained from VIA and pre-processing algorithms. These include Black Pixel Calculation (the percentage of black area helps determine the gingival area) and Polyline Length Detection (only labeled polylines with a horizontal span greater than 80% of the image width were accepted); both are sketched below.
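Both pre-processing rules are simple enough to sketch directly; the function names and the binarization threshold below are illustrative assumptions, not the study's exact values.

```python
import numpy as np

def black_pixel_ratio(gray_image, threshold=60):
    """Black Pixel Calculation: the fraction of dark pixels,
    used to help locate the gingival area (threshold assumed)."""
    return float(np.mean(gray_image < threshold))

def accept_bone_level_polyline(xs, image_width):
    """Polyline Length Detection: keep only labeled polylines whose
    horizontal span exceeds 80% of the image width."""
    return (max(xs) - min(xs)) > 0.8 * image_width
```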
The workable system for automatic feature detection using the DL-trained AI model (including levels of bone loss and tooth shape detection) was obtained using the previous features, computer vision contour modeling, and geometry. This implementation was based on the Deep Learning Hybrid Method for automatically diagnosing periodontal bone loss, with slight modifications. Predicted outputs from bone level segmentation and tooth shape segmentation were presented as masks. Each predicted instance has a corresponding mask, which can be used to calculate the contour coordinates of the predicted instance in the image. After tooth contour coordinates are obtained, each tooth is fitted with an ellipse to determine the major axis, which helps determine the distance between the anatomical/clinical root and the bone level (calculated using the intersection points of the major axis of a tooth's fitting ellipse and the bone level instance). Intersection points are detected using Shapely (https://pypi.org/project/Shapely/), as sketched below.
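A sketch of this geometry step, assuming OpenCV contours extracted from the predicted masks; the helper names are illustrative. The tooth contour is fitted with an ellipse, its major axis is extended into a line, and Shapely locates the crossing with the bone-level contour.

```python
import cv2
import numpy as np
from shapely.geometry import LineString

def major_axis(tooth_contour, scale=1.5):
    """Fit an ellipse to a tooth contour and return its extended major axis."""
    (cx, cy), (w, h), angle = cv2.fitEllipse(tooth_contour)
    # OpenCV's angle rotates the width axis; the longer axis is the major one
    theta = np.deg2rad(angle + 90.0) if h >= w else np.deg2rad(angle)
    half = scale * max(w, h) / 2.0   # extend past the ellipse ends
    dx, dy = half * np.cos(theta), half * np.sin(theta)
    return LineString([(cx - dx, cy - dy), (cx + dx, cy + dy)])

def axis_bone_intersection(tooth_contour, bone_level_points):
    """Point(s) where the tooth long axis crosses the bone-level polyline."""
    return major_axis(tooth_contour).intersection(LineString(bone_level_points))
```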
Radiographic bone loss detection
The deep learning hybrid method is also used to label the CEJ level and detect radiographic bone loss (RBL) from X-ray images to improve interpretation reliability. Fig. 3 shows the integrated segmentation networks, including tooth shape, CEJ level, and bone level. After contour coordinates are obtained from tooth shape detection, the tooth contours are fitted with an ellipse to determine the major principal axis of inertia of the tooth, which helps determine the distance between the root and the bone level and the intersection points. First, the segmentation model is generated and the long axes of the teeth are found. Then, the intersection points at the root apex, the lower point of the crown, and the bone level are taken as the root, CEJ level, and bone level points. Finally, the RBL percentage is calculated from the tooth long axis, periodontal bone level, and CEJ level. Assessments of the RBL percentage can also assist in evaluating periodontal disease severity according to the 2017 World Workshop on periodontal diseases and conditions.19
Figure 3.
DL model assessment of radiographic bone loss (RBL). The RBL percentage is calculated as the ratio of the distance between the bone level and the lowest point of the clinical root to the distance between the CEJ level and the lowest point of the clinical root. This can be used to assist with the detection of radiographic periodontal alveolar bone loss.
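Taking the caption's definition literally, the RBL percentage can be sketched as below; the point arguments are the (x, y) intersections on the tooth long axis, and the names are illustrative.

```python
import math

def rbl_percentage(cej_point, bone_point, apex_point):
    """Distance from the bone-level point to the lowest point of the
    clinical root, relative to the distance from the CEJ point to the
    same root point, expressed as a percentage (per Fig. 3)."""
    return 100.0 * math.dist(bone_point, apex_point) / math.dist(cej_point, apex_point)
```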
Results
Tooth position detection
Digital periapical radiographs are generally employed for tooth position detection. Fig. 4 shows an example image from the AI-assisted tool for tooth position (tooth localization) produced by our YOLOv5m model. With accuracy measured at 88.8%, this example shows that AI can efficiently label teeth and recognize full-mouth periapical dental X-rays.
Figure 4.
An example periapical radiograph from the dataset for tooth position detection. (A) Ground-truth labeling annotation. (B) Prediction output from YOLOv5.
Tooth shape detection and segmentation
Semantic boundaries were determined from x-y coordinates obtained from VIA, and bounding boxes were determined by the leftmost, rightmost, top, and bottom coordinates (with straight lines extending from these four points forming a rectangular box). Fig. 5 depicts a mosaic with tooth instance segmentation, with accuracy measured at 86.3%.
Figure 5.
The original X-ray image for tooth shape detection.
The Detectron2 framework was also employed for training, with final results evaluated using COCO object detection and segmentation metrics (i.e., average precision scores); a minimal evaluation sketch follows Table 2. Table 2 shows the average precision (AP) at various Intersection over Union (IoU) thresholds for tooth detection and segmentation.
Table 2.
Average precision (AP) for tooth detection and segmentation.
| Method | AP | AP50 | AP75 |
|---|---|---|---|
| Bounding Box | 70.29 | 90.73 | 85.80 |
| Segmentation | 73.39 | 90.65 | 87.06 |
Abbreviation: AP, Average precision.
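A minimal sketch of producing the COCO metrics in Tables 2–4 with Detectron2, assuming a registered validation set and fine-tuned weights; the dataset name, paths, and single-class head are illustrative placeholders, not the study's configuration.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.data import build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))  # Mask R-CNN + FPN
cfg.MODEL.WEIGHTS = "output/model_final.pth"   # fine-tuned weights (placeholder)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1            # single "tooth" class (assumption)

predictor = DefaultPredictor(cfg)
evaluator = COCOEvaluator("teeth_val", output_dir="./eval")   # registered val set
loader = build_detection_test_loader(cfg, "teeth_val")
print(inference_on_dataset(predictor.model, loader, evaluator))  # AP, AP50, AP75
```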
Periodontal bone level detection and segmentation
Semantic boundaries were defined using x-y coordinates obtained from VIA, with bounding boxes determined by the leftmost and rightmost coordinates together with the upper image corners (if the gingival area lies in the upper part of the image) or the lower image corners (if it lies in the lower part). To increase accuracy, each image was segmented into three regions (top, middle, and lower) instead of being analyzed whole. Periodontal bone level, which is essential for periodontal diagnosis, is shown in Fig. 6, with accuracy measured at 92.61%.
Figure 6.
Image analysis of periodontal bone level (PBL) for each tooth.
Periodontal bone level (PBL) was determined using the CNN ensemble method (AP50) for periapical radiographs. The Detectron2 framework developed by Facebook (Mask R-CNN with a Feature Pyramid Network backbone) was used for training, with final results evaluated using COCO object detection and segmentation metrics (i.e., AP scores). AP at various IoU thresholds for tooth detection and segmentation and for bone level detection and segmentation in the detection of PBL is shown in Table 3.
Table 3.
Average precision (AP) for bone level detection and segmentation.
| Method | AP | AP50 | AP75 |
|---|---|---|---|
| Bounding Box | 75.38 | 97.25 | 91.81 |
| Segmentation | 75.86 | 96.99 | 92.61 |
Abbreviation: AP, Average precision.
Radiographic bone loss detection
AI was used to calculate tooth position, the long-axis orientation of the teeth, bone level, and CEJ level. All data were combined to calculate the RBL percentages, computed from the bone level and CEJ level along the tooth long axis and expressed as a percentage (Fig. 7). Table 4 shows RBL segmentation results with and without data augmentation; detection accuracy was approximately 97.0%. Detection results can be further classified as mild, moderate, and severe.
Figure 7.
An example periapical radiograph from the dataset for radiographic bone loss detection.
Table 4.
Radiographic bone loss (RBL) detection from X-ray images with and without data augmentation. Performance was measured using the single metric of average precision (AP).
| Method | AP | AP50 | AP75 |
|---|---|---|---|
| Segmentation without data augmentation | 77.98 | 92.98 | 89.94 |
| Segmentation with data augmentation | 76.10 | 90.67 | 87.42 |
Abbreviation: AP, Average precision.
Discussion
Periodontal disease is one of the most prevalent dental diseases suffered by adults. Radiographs assist in the diagnosis of periodontal disease, estimation of its severity, determination of prognosis, and evaluation of treatment outcomes.20, 21, 22 Evaluation of bone level in radiographs is based mainly on the appearance of interdental alveolar bone, because the relatively dense root structure obscures the facial and lingual bony plates. However, several factors may affect the interpretation of radiographs, and different interpretations might result in different diagnoses. Therefore, the AI model in this study was developed to provide an adjunct for accurate detection and efficient interpretation of interdental bone level and radiographic bone loss. The Kappa coefficient showed substantial agreement between dentists' and automatic diagnoses. The tool used in this study can assist dentists in identifying teeth by providing precise, reliable, and uniform interpretation of the current interdental bone level and RBL. Several architectures and layers are involved in this automatic detection system to provide adequate information for radiographic image interpretation. Thus, this AI-assisted model can offer reliable periodontal diagnosis and treatment planning in combination with periodontal examination and clinical findings.
While a convolutional ensemble model (comprising VGG-16, Inception, ResNet, and EfficientNet architectures) was used for image classification, a U-Net architecture was used for segmentation. The DL-trained AI model was developed to identify teeth, evaluate remaining periodontal bone height, and compare results with clinician assessments. Previously, Chen et al. used Faster R-CNN (faster regions with CNN) in the TensorFlow tool package to detect and label each tooth in only 1250 dental periapical films, resulting in a precision of 77.1%.23 Lee and coworkers also developed DL-trained architectures, including GoogLeNet Inception, to classify periodontally compromised teeth (PCT) from periapical radiographs.16 However, such applications are still in the preliminary development stage. In our model, the deep CNN architecture is characterized by hierarchical feature learning and expressive capability drawn from image datasets across multiple convolutional, connected, and hidden layers. We are currently planning to refine the VGG-16 network architecture by adjusting the number of convolutional layers, hidden layers, and hyperparameters to improve deep learning efficiency.
Similarly, previous studies used DL-trained models to segment tooth data on dental periapical radiographs and concluded that DL-trained AI model accuracy was higher than that of less experienced dentists.24 A previous study by Lee et al. also employed a DL-based CNN for diagnosis and prediction of periodontally compromised teeth, demonstrating diagnostic accuracies of 81.0% and 76.7% for premolars and molars, respectively.5 More recently, another study by Lee et al. utilized DL to measure alveolar bone level and obtained an accuracy of 0.87 for periodontal staging using the Computer Vision Annotation Tool (CVAT).25 The present study employed the convolutional ensemble model as a DL algorithm, trained on approximately 8000 annotated dental images, reaching an accuracy of approximately 0.9, a value remarkably higher than the 76%–78% previously reported for radiographic detection performed by dentists.5,17 Furthermore, the sensitivity and specificity of the present DL-trained model were above 80%, indicating that the model is a good classifier. Put simply, the CNN-derived results were more accurate and reliable than those of individual dentists.
Kim et al. and Krois et al. reported automatic detection of periodontal bone loss from panoramic X-rays using the DeNTNet model and a TensorFlow-framework DL model.6,17 Chang et al. also studied the use of a hybrid DL method to detect RBL levels from panoramic X-rays.7 Their results illustrate that detecting RBL from panoramic radiographs can be both faster and more comfortable for patients, but image distortion, less image detail for individual teeth, and lower pixel quality can occur.26, 27, 28 In our study, detection of interdental bone level and RBL was conducted using periapical and bitewing radiographs, which offer more detail and higher-quality images of periodontal alveolar bone levels. We successfully created this AI model and implemented a training process for the novel DL ensemble model, a crucial cornerstone for future applications including determining the morphology of bony defects, precise classification of staging and grading, and treatment planning for dental implants. More specifically, tooth position labeling and tooth shape identification could provide digital coordinate information on an object's location for AI digital treatment applications such as robotic dental implant surgery. Periodontal bone loss evaluation would assist dentists in making diagnoses with greater efficiency and reliability, helping general dental practitioners and specialists speed up the treatment process and obtain excellent outcomes.
In addition, some limitations of radiographs remain, such as underestimation of the extent of bone loss in width, and the fact that the information provided reflects the effects of past disease on bone and roots rather than current disease activity. Thus, the assessment of periodontal interproximal bone level should be based on a combined clinical-radiographic evaluation approach. Room also remains to refine the input data, for example by increasing image input and annotation quality, raising the ease of use of computer vision deep learning, and developing more effective algorithms. If significant improvements occur in these three aspects, the system's overall capabilities will be enhanced. Overall, this study used a unique deep learning ensemble model to comprehensively detect tooth position, tooth shape, remaining interproximal bone level, and radiographic bone loss from periapical and bitewing radiographs. This model offers several advantages and novelties, including reduced overfitting, improved generalization, and enhanced overall model accuracy. The ensemble approach explored multiple parameters, unlike previous studies that focused on only one or two.
To date, the current convolutional ensemble model based on deep CNN architecture has been developed with remarkable accuracy and efficiency. The present AI tool is useful not only for reviewing large numbers of images but also as an indication of quality artificial intelligence-based infrastructure in the digital dentistry field. This propels efficient and reliable dental services and ushers in a new era of future AI applications using two- and three-dimensional imaging modalities to improve the field of digital dentistry.
In conclusion, the developed AI-based detection model can assist dental professionals with diagnosis and treatment planning through radiographic estimation of interproximal alveolar bone level and RBL. This state-of-the-art convolutional ensemble model based on CNNs performs its tasks accurately and reliably by processing dental images and radiographic data. The DL-based model can also help dentists interpret images in clinical practice that may otherwise be misread owing to factors such as differences in experience, tiredness, and inattentiveness. This AI detection will therefore empower dental professionals to improve substantially and lead to potential future applications in the field of digital dentistry.
Declaration of competing interest
The authors have no conflicts of interest relevant to this article.
Acknowledgments
This study was supported by grants from Dentall Co., Ltd. and Taipei Medical University-industry collaboration, Southern Taiwan Science Park Bureau, Ministry of Science and Technology (MOST) (grant number EX-03-06-08-111).
Contributor Information
Yuan-Min Lin, Email: ymlin@nycu.edu.tw.
Wei-Jen Chang, Email: cweijen1@tmu.edu.tw.
References
- 1. Ahmed N., Abbasi M.S., Zuberi F., et al. Artificial intelligence techniques: analysis, application, and outcome in dentistry-a systematic review. BioMed Res Int. 2021. doi:10.1155/2021/9751564.
- 2. Sun M., Chai Y., Chai G., et al. Fully automatic robot-assisted surgery for mandibular angle split osteotomy. J Craniofac Surg. 2020;31:336–339. doi:10.1097/SCS.0000000000005587.
- 3. Yamada M., Saito Y., Imaoka H., et al. Development of a real-time endoscopic image diagnosis support system using deep learning technology in colonoscopy. Sci Rep. 2019;9:1–9. doi:10.1038/s41598-019-50567-5.
- 4. Barr A., Feigenbaum E.A., editors. The Handbook of Artificial Intelligence, vol. 1. William Kaufmann; 1981.
- 5. Lee J.H., Kim D.H., Jeong S.N., et al. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J Periodontal Implant Sci. 2018;48:114–123. doi:10.5051/jpis.2018.48.2.114.
- 6. Kim J., Lee H.S., Song I.S., et al. DeNTNet: deep neural transfer network for the detection of periodontal bone loss using panoramic dental radiographs. Sci Rep. 2019;9:1–9. doi:10.1038/s41598-019-53758-2.
- 7. Chang H.J., Lee S.J., Yong T.H., et al. Deep learning hybrid method to automatically diagnose periodontal bone loss and stage periodontitis. Sci Rep. 2020;10:1–8. doi:10.1038/s41598-020-64509-z.
- 8. Lee J.H., Kim D.H., Jeong S.N., et al. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J Dent. 2018;77:106–111. doi:10.1016/j.jdent.2018.07.015.
- 9. White S.C., Pharoah M.J., editors. Oral Radiology: Principles and Interpretation. Elsevier Health Sciences; 2014.
- 10. Fukuda M., Inamoto K., Shibata N., et al. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiol. 2020;36:337–343. doi:10.1007/s11282-019-00409-x.
- 11. Aubreville M., Knipfer C., Oetter N., et al. Automatic classification of cancerous tissue in laserendomicroscopy images of the oral cavity using deep learning. Sci Rep. 2017;7:1–10. doi:10.1038/s41598-017-12320-8.
- 12. Tuzoff D.V., Tuzova L.N., Bornstein M.M., et al. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofacial Radiol. 2019;48. doi:10.1259/dmfr.20180051.
- 13. Garnick J.J., Silverstein L. Periodontal probing: probe tip diameter. J Periodontol. 2000;71:96–103. doi:10.1902/jop.2000.71.1.96.
- 14. Trombelli L., Farina R., Silva C.O., et al. Plaque-induced gingivitis: case definition and diagnostic considerations. J Clin Periodontol. 2018;45:S44–S67. doi:10.1111/jcpe.12939.
- 15. Lin P., Huang P., Huang P. Automatic methods for alveolar bone loss degree measurement in periodontitis periapical radiographs. Comput Methods Progr Biomed. 2017;148:1–11. doi:10.1016/j.cmpb.2017.06.012.
- 16. Lee J.H., Kim D.H., Jeong S.N., et al. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J Periodontal Implant Sci. 2018;48:114–123. doi:10.5051/jpis.2018.48.2.114.
- 17. Krois J., Ekert T., Meinhold L., et al. Deep learning for the radiographic detection of periodontal bone loss. Sci Rep. 2019;9:1–6. doi:10.1038/s41598-019-44839-3.
- 18. VGG Image Annotator (VIA). Available at: https://www.robots.ox.ac.uk/~vgg/software/via/ [Date accessed: March 13, 2023].
- 19. Caton J.G., Armitage G., Berglundh T., et al. A new classification scheme for periodontal and peri-implant diseases and conditions – introduction and key changes from the 1999 classification. J Periodontol. 2018;89:S1–S8. doi:10.1002/JPER.18-0157.
- 20. Mol A. Imaging methods in periodontology. Periodontol 2000. 2004;34:34–48. doi:10.1046/j.0906-6713.2003.003423.x.
- 21. Tugnait A., Hirschmann P.N., Clerehugh V. Validation of a model to evaluate the role of radiographs in the diagnosis and treatment planning of periodontal diseases. J Dent. 2006;34:509–515. doi:10.1016/j.jdent.2005.12.002.
- 22. Corbet E.F., Ho D.K., Lai S.M. Radiographs in periodontal disease diagnosis and management. Aust Dent J. 2009:S27–S43. doi:10.1111/j.1834-7819.2009.01141.x.
- 23. Chen H., Zhang K., Lyu P., et al. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci Rep. 2019;9:1–11. doi:10.1038/s41598-019-40414-y.
- 24. Cantu A.G., Gehrung S., Krois J., et al. Detecting caries lesions of different radiographic extension on bitewings using deep learning. J Dent. 2020;100. doi:10.1016/j.jdent.2020.103425.
- 25. Lee C.T., Kabir T., Nelson J., et al. Use of the deep learning approach to measure alveolar bone level. J Clin Periodontol. 2022;49:260–269. doi:10.1111/jcpe.13574.
- 26. Åkesson L., Håkansson J., Rohlin M. Comparison of panoramic and intraoral radiography and pocket probing for the measurement of the marginal bone level. J Clin Periodontol. 1992;19:326–332. doi:10.1111/j.1600-051x.1992.tb00654.x.
- 27. Pepelassi E.A., Diamanti-Kipioti A. Selection of the most accurate method of conventional radiography for the assessment of periodontal osseous destruction. J Clin Periodontol. 1997;24:557–567. doi:10.1111/j.1600-051x.1997.tb00229.x.
- 28. Hellén-Halme K., Lith A., Shi X.-Q. Reliability of marginal bone level measurements on digital panoramic and digital intraoral radiographs. Oral Radiol. 2020;36:135–140. doi:10.1007/s11282-019-00387-0.