Dentomaxillofacial Radiology
2024 Jan 25;53(3):165–172. doi: 10.1093/dmfr/twae002

Panoramic imaging errors in machine learning model development: a systematic review

Eduardo Delamare 1,2, Xingyue Fu 3, Zimo Huang 4, Jinman Kim 5
PMCID: PMC11003661  PMID: 38273661

Abstract

Objectives

To investigate the management of imaging errors from panoramic radiography (PAN) datasets used in the development of machine learning (ML) models.

Methods

This systematic literature review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and used three databases. Keywords were selected from relevant literature.

Eligibility criteria

PAN studies that used ML models and mentioned image quality concerns.

Results

Out of 400 articles, 41 papers satisfied the inclusion criteria. All the studies used ML models, with 35 papers using deep learning (DL) models. PAN quality assessment was approached in 3 ways: acknowledgement and acceptance of imaging errors in the ML model, removal of low-quality radiographs from the dataset before building the model, and application of image enhancement methods prior to model development. The criteria for determining PAN image quality varied widely across studies and were prone to bias.

Conclusions

This study revealed significant inconsistencies in the management of PAN imaging errors in ML research. However, most studies agree that such errors are detrimental when building ML models. More research is needed to understand the impact of low-quality inputs on model performance. Prospective studies may streamline image quality assessment by leveraging DL models, which excel at pattern recognition tasks.

Keywords: dental panoramic radiograph, quality assessment, artificial intelligence, systematic review, imaging errors

Introduction

Panoramic radiography (PAN) is an established imaging modality in dentistry. It allows clinicians to diagnose dental diseases and obtain an overview of the anatomy of the teeth and surrounding structures.1–3 The quality of PAN depends on multiple factors, including the type of equipment, the patient's anatomical features, and operator-related factors. Therefore, PAN is susceptible to several imaging errors.4,5 Low-quality radiographs may lead to incorrect interpretation and diagnosis, thus potentially resulting in inaccurate treatment planning.4 Therefore, the clinical evaluation of imaging errors in PAN is an essential step in its quality assurance process and must be performed continuously.5

Many recent studies have used PANs in computer-assisted diagnosis (CAD) systems with the potential to benefit clinical decision-making.6–13 Among these CAD systems, the state-of-the-art relies extensively on machine learning (ML) models. More specifically, deep learning (DL) algorithms, including convolutional neural network (CNN) models, are frequently used in these studies. Some CNN models, such as VGG-16, VGGNet-19, and GoogLeNet Inception-v3, have been used on PAN for purposes such as classification,7–13 disease detection,8,9 identification,11,14 and segmentation.13

Previous studies indicated that PAN imaging errors impact ML model performance, and authors suggested different criteria to handle this limitation.7,8,15–17 Several studies, for example, outright excluded radiographs from the dataset when faced with issues such as severe noise, haziness, and distortion,8 heavily overlapped structures, or blurred radiographs.15–17 Other studies only included radiographs that satisfied multivariable criteria, such as proper positioning.7

Notably, low-quality radiographs affected by common imaging errors are mentioned in several studies as one of the potential explanations for decreased AI model performance.7,14–17 Therefore, this systematic review aims to investigate how imaging errors are managed in research using ML models developed on PAN datasets, and to examine whether quality assessment strategies should be observed in this type of research.

Methods

Paper selection process

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline was used to summarize the paper search process.18 The database search was undertaken in November 2022 using three electronic databases: PubMed, Scopus, and Google Scholar. The search terms used in this systematic review were based on the keywords of relevant literature. These were: (1) ((((((((Artificial intelligence) OR (deep learning)) OR (CNN)) OR (Convolutional neural network)) OR (neural network)) OR (CNNs)) OR (Convolutional Neural Networks)) OR (Neural networks)) AND (((((panoramic image*) OR (orthopantomogram)) OR (orthopantomography)) OR (orthopantomographic image)) OR (OPG)); (2) limit to studies published in English; (3) limit to peer-reviewed articles published in the past ten years; and (4) limit to studies where full text is accessible.

Eligibility criteria

Inclusion criteria

Full manuscripts, published in English in the past 10 years, that:

  1. Reported development of ML models on PAN datasets.

  2. Acknowledged any type of imaging error and/or considered any form of PAN quality assessment strategy, whether adopted or not.

Exclusion criteria

  1. Systematic or literature reviews.

  2. Editorials, commentaries, and letters to the editor.

  3. Studies that did not assess the quality of the radiographs before the development of the ML model.

Study selection

Three databases were used to cross-reference and locate the targeted papers. The retrieved records were then screened for relevance by title and abstract by 3 researchers—a coursework master's student in data science, a lecturer in dentomaxillofacial radiology, and a professor in computer science. The master's student and the lecturer screened the papers and ensured all included articles met the eligibility criteria.

Results

Description of included papers

A total of 400 articles were identified from the database search. A flowchart of the paper search, selection process, and outputs is shown in Figure 1. From the 400 papers, 97 duplicates were removed. After title and abstract screening, 161 articles that did not meet inclusion criterion 1 were excluded (Section “Inclusion Criteria”). From the remaining 142 articles, 4 were removed because their full texts could not be retrieved. A full-text screening of all 138 remaining articles for mention of quality assessment strategies as defined in criterion 2 resulted in 41 articles.7–12,14–48 The following subsections refer to these 41 articles, and a summary of their findings is presented in Table 1.

Figure 1.

Preferred Reporting Items for Systematic Reviews and Meta-Analyses flowchart illustrating the search and selection process for studies on ML and quality assessment of PANs.

Table 1.

Overview of the models in reviewed articles.

Reference Model Purpose Imaging errors and concerns Management of quality concerns
Lee et al7 DCNN Classification Positioning Quality assessment as part of inclusion/exclusion criteria
Lee et al8 DCNN Detection and classification Distortion, artificial noise, and blur Quality assessment as part of inclusion/exclusion criteria
Lee et al9 DCNN Detection and classification Low resolution Quality assessment as part of inclusion/exclusion criteria
Poedjiastoeti et al10 CNN Classification Contrast Quality concern acknowledged and image enhancement performed
Lee and Jeong11 DCNN Classification and identification Noise, haziness, and distortion Quality assessment as part of inclusion/exclusion criteria
Abdalla-Aslan et al12 ML model Classification Contrast, positioning Imaging errors were acknowledged, but no action was taken
Lee et al13 R-CNN Classification and segmentation Overlapped anatomy Imaging errors were acknowledged, but no action was taken
Fan et al14 CNN Identification Brightness, overlapped teeth, and positioning Quality concern acknowledged and image enhancement performed
Ekert et al15 DCNN Detection Overlapped anatomy, contrast, hazing, positioning Quality assessment as part of inclusion/exclusion criteria
Krois et al16 CNN Detection Overlapped teeth Imaging errors were acknowledged, but no action was taken
Tuzoff et al17 R-CNN Detection and classification Overlapped teeth, and blur Imaging errors were acknowledged, but no action was taken
Aliaga et al19 ML model Detection Blur, positioning Quality assessment as part of inclusion/exclusion criteria
Alzubaidi and Otoom20 ML model Classification Artificial noise Quality concern acknowledged and image enhancement performed
Endres et al21 CNN Detection Positioning, artefacts, density, contrast Imaging errors were acknowledged, but no action was taken
Hiraiwa et al22 CNN Classification Non-specific Imaging errors were acknowledged, but no action was taken
Khan and Mukati23 CNN Augmentation Overlapped teeth Imaging errors were acknowledged, but no action was taken
Lee et al24 DCNN Classification Noise, blur, and distortion Quality assessment as part of inclusion/exclusion criteria
Lee et al25 CNN Classification Blur, contrast, and ghost images Imaging errors were acknowledged, but no action was taken
Leite et al26 CNN Detection and segmentation Density variations, positioning, artifacts Imaging errors were acknowledged, but no action was taken
Takahashi et al27 DCNN Detection and segmentation Non-specific Imaging errors were acknowledged, but no action was taken
Vigil and Bharathi28 MLNN Classification and segmentation Blur, variation in illumination, intensity orientation, positioning Quality concern acknowledged and image enhancement performed
Zhu et al29 DCNN Segmentation Non-specific Quality assessment as part of inclusion/exclusion criteria
Benakatti et al30 ML model Identification Haziness, distortion, blur Quality assessment as part of inclusion/exclusion criteria
Cha et al31 DCNN Segmentation Blur, overlapped anatomy Quality assessment as part of inclusion/exclusion criteria
Shen et al32 ML model Classification Non-specific Quality assessment as part of inclusion/exclusion criteria
Lee et al33 RNN Detection Non-specific Imaging errors were acknowledged, but no action was taken
Sukegawa et al34 CNN Classification Non-specific Imaging errors were acknowledged, but no action was taken
Liu et al35 CNN Detection Blur, distortion Quality assessment as part of inclusion/exclusion criteria
Yu et al36 CNN Classification and segmentation Distortion, artificial noise, blur Quality assessment as part of inclusion/exclusion criteria
Bunyarit et al37 ANN-MLP Classification Non-specific Quality assessment as part of inclusion/exclusion criteria
Santosh et al38 ML model Classification Brightness, noise Quality concern acknowledged and image enhancement performed
Zhu et al39 CNN Detection and segmentation Intensity variations, artificial noise, contrast Quality assessment as part of inclusion/exclusion criteria
Bonfanti-Gris et al40 CNN Detection and classification Overlapped teeth, blur Imaging errors were acknowledged, but no action was taken
Wang et al41 CNN Classification Distortion, positioning Quality assessment as part of inclusion/exclusion criteria
Lin and Chang42 CNN Detection and classification Non-specific Quality assessment as part of inclusion/exclusion criteria
Warin et al43 CNN Detection and classification Overlapped anatomy Imaging errors were acknowledged, but no action was taken
Aljabri et al44 DCNN Classification Low resolution Quality concern acknowledged and image enhancement performed
Cejudo et al45 CNN Classification Artefacts, positioning Imaging errors were acknowledged, but no action was taken
Estai et al46 CNN Detection Contrast, orientation Quality assessment as part of inclusion/exclusion criteria
Lee et al47 CNN Classification Non-specific Imaging errors were acknowledged, but no action was taken
Li et al48 RNN Detection and classification and segmentation Contrast Imaging errors were acknowledged, but no action was taken

Model architecture

From the 41 selected studies, only 6 articles used ML models not based on neural networks.12,19,20,30,32,38 The remaining 35 articles used deep learning models: 33 articles indicated the use of Artificial Neural Network (ANN) models,7–11,13–17,21–27,29,31,33–36,39–48 and 2 articles specifically stated the use of Multilayer Neural Network (MLNN) models.28,37 Within the 33 ANN articles, 29 studies referred to the use of CNN models7–11,14–16,21–27,29,31,34–36,39–47 and 4 studies used Region-based Convolutional Neural Network (R-CNN) models.13,17,33,48

Type of ML model task

In Table 1, we present the different image analysis tasks on PAN. Seventeen studies reported on detection tasks, where bounding-box-based annotations were performed within an image.8,9,15–17,19,21,26,27,33,35,39,40,42,43,46,48 Twenty-six studies focused on classification tasks, assigning a label—either binary or multi-class—to the entire image.7–13,17,20,22,24,25,28,32,34,36–38,40–45,47,48 Three studies conducted identification tasks: also bounding-box-based discrimination of an object, but among other objects within the same class (e.g., facial recognition); these studies identified forensic features (as in post-mortem human identification) and types of dental implant systems.11,14,30 Nine studies reported on segmentation tasks,13,26–29,31,36,39,48 and one study on dataset augmentation.23

Among the studies, 14 used multi-stage architectures combining different tasks in a pipeline to produce an output.8,9,11,13,17,26–28,36,39,40,42,43,48 In contrast, 27 studies used a single model to carry out distinct tasks.7,10,12,14–16,19–25,29–35,37,38,41,44–47

A summary of the descriptive characteristics of the imaging errors highlighted in studies conducted on distinct types of ML model tasks and the respective management of quality concerns are presented in Table 2.

Table 2.

Descriptive characteristics of imaging errors and respective management relative to the type of machine learning task.

Task Imaging errors Management of quality concerns
Detection8,9,15–17,19,21,26,27,33,35,39,40,42,43,46,48 Majority of papers highlighted pre- and post-exposure issues, such as contrast, density, noise, and resolution Studies in this category were evenly distributed between the decision to remove affected images from the dataset or take no action. Several studies cited imaging errors as a reason for decreased model performance
Classification7–13,17,20,22,24,25,28,32,34,36–38,40–45,47,48 Distortions, positioning, overlapped teeth, and anatomy were cited by the majority of the papers in this category The majority of studies (5/7) that opted to use image enhancement methods were aimed at classification tasks
Segmentation13,26–29,31,36,39,48 No particular pattern observed. Uniform distribution of different types of imaging errors and concerns Half of the studies on this type of task opted not to take action when faced with quality concerns. The other half removed severely affected samples from their datasets
Identification11,14,30 Most imaging concerns linked to poor patient positioning 2/3 studies used imaging errors as a determinant of inclusion and exclusion criteria
Augmentation23 Overlapped teeth No action taken

Imaging errors

Of the 41 papers included, 19 articles mentioned several imaging errors associated with low-quality radiographs.7–11,13–16,20,25,27,28,35,39–41,44,45 These errors include distortion, artificial noise, and blur, as indicated by 9 studies.8,11,17,20,25,28,35,39,40 Additionally, 7 articles identified that brightness, resolution, and contrast can affect image quality.9,10,14,25,38,39,44 Furthermore, Lee et al25 and Cejudo et al45 pointed out that ghost images and artefacts can contribute to poor image quality. Five articles identified overlapped teeth,14–17,40 and another 5 articles indicated that patient malpositioning7,14,21,28,41 could negatively influence image quality. Lastly, some studies found that the appearance of borders,7 variation in illumination,28 intensity, and orientation28,39 can be used as indicators to define image quality.

Based on the descriptions above, these imaging errors were grouped into 2 main categories: positioning errors/anatomical challenges and pre-/post-exposure issues. A summary of the management of quality concerns relative to these imaging error categories is presented in Table 3.

Table 3.

Summary of quality concerns relative to different types of imaging error.

Imaging error Management of quality concerns
Positioning error/anatomical challenges7,13,16,17,19,23,31,40,41,43 Most studies highlighted the exclusion of low-quality or distorted images, implying that good-quality PANs lead to better outcomes. It is apparent that issues with positioning can create substantial challenges for ML-based image analysis and diagnostics
Positioning error and pre-/post-exposure issues8,11,12,14,15,21,24–26,28,30,35,36,45 The main concerns were severe distortions, artificial noise, blur, and poor-quality PANs. The role of preprocessing to improve contrast was underlined, as well as dealing with teeth overlap, malpositioning, and image intensity variations
Pre-/post-exposure issues9,10,20,38,39,44,46,48 The common concern among the studies was that poor image quality and low contrast hinder goal attainment. Histogram equalization and other image pre-processing techniques were used to improve image quality. Collecting a larger and higher-quality dataset was deemed necessary

Management of image quality concerns

The management of quality concerns was grouped into 3 categories:

  1. Studies that acknowledged the influence of low quality on performance but took no action.12,13,16,17,21–23,25–27,33,34,40,43,45,47,48

  2. Studies that used quality assessment as part of exclusion/inclusion criteria.7–9,11,15,19,23,24,29–32,35–37,39,41,42,46

  3. Studies that applied image enhancement methods.10,14,20,28,38,44
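The three strategies above can be expressed as a simple dataset-triage step applied before model development. The sketch below is illustrative only—no reviewed study prescribes this implementation—and the error labels and severity sets are hypothetical stand-ins for the varied, often subjective criteria reported across studies:

```python
from enum import Enum

class Management(Enum):
    """The three management strategies identified in this review."""
    NO_ACTION = "errors acknowledged, no action taken"
    EXCLUDE = "quality used as inclusion/exclusion criterion"
    ENHANCE = "image enhancement applied before training"

# Hypothetical error labels and groupings; real studies used their
# own (frequently subjective) criteria such as "severe noise".
SEVERE = {"severe distortion", "heavy noise", "undiagnostic"}
ENHANCEABLE = {"low contrast", "low brightness"}

def triage(error_labels):
    """Map a radiograph's labelled imaging errors to a strategy."""
    labels = set(error_labels)
    if labels & SEVERE:
        return Management.EXCLUDE   # drop the image from the dataset
    if labels & ENHANCEABLE:
        return Management.ENHANCE   # e.g. histogram equalization
    return Management.NO_ACTION     # accept the image as-is
```

Making such a rule explicit—whatever the labels chosen—would address the review's central finding that exclusion decisions are rarely documented in reproducible terms.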

In the first category (quality concern acknowledged—no action), several authors mentioned that the quality and quantity of training data would heavily impact ML model performance.12,13,16,17,21–23,25–27,33,34,40,43,45,47,48 Some articles pointed out that the most significant challenge for artificial intelligence models to achieve super-human levels was to obtain extensive and high-quality training data.12,21 Blurring, low contrast, and ghost images would decrease image quality and may lead to misidentification.25,27 Similarly, Warin et al43 indicated that low-quality PAN may reduce the accuracy of mandibular fracture detection. Studies such as Hiraiwa et al22 demonstrated that using high-quality images would improve AI model inference results.22,23,26,33,40,43,45,47,48 One exceptional case mentioned that their model achieved excellent performance on low-quality PAN, such as images containing teeth with blurred contours or decoronated teeth.17 Overall, most of the papers in this category highlighted concerns relative to positioning errors.

In the second category, 14 studies decided to exclude heavily overlapped, noisy, blurred, distorted, or low-contrast images.7,8,11,15,19,29,32,35–37,39,41,42,46 It was also mentioned in this category that high-quality datasets would help avoid overfitting.8

In the third category, 6 studies applied image enhancement methods to overcome the impact of imaging errors.10,14,20,28,38,44 Three studies used histogram equalization.10,20,28 Fan et al14 used standardization on PAN to eliminate noise and focus on the region of interest. The authors mentioned that low-quality PAN, even when only somewhat affected by imaging errors, would negatively influence model performance.14 Santosh et al38 suggested that image quality and brightness can be improved by image pre-processing.
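Histogram equalization redistributes pixel intensities so that a narrow intensity band is stretched across the full dynamic range. As a minimal illustration of the technique itself (not any reviewed study's actual pipeline), a pure-Python version for an 8-bit grayscale image might look like:

```python
def equalize_histogram(image, levels=256):
    """Global histogram equalization for a grayscale image given as
    a list of rows of integer pixel values in [0, levels)."""
    # Build the intensity histogram.
    hist = [0] * levels
    for row in image:
        for px in row:
            hist[px] += 1
    total = sum(hist)
    # Cumulative distribution function of the histogram.
    cdf, running = [0] * levels, 0
    for i, count in enumerate(hist):
        running += count
        cdf[i] = running
    cdf_min = next(c for c in cdf if c > 0)
    # Remap each level so output intensities span the full range;
    # a constant image falls back to the identity mapping.
    lut = [round((cdf[i] - cdf_min) / (total - cdf_min) * (levels - 1))
           if total > cdf_min else i
           for i in range(levels)]
    return [[lut[px] for px in row] for row in image]

# A low-contrast 2x4 "radiograph": values cluster in a narrow band.
low_contrast = [[100, 101, 102, 103],
                [104, 105, 106, 107]]
enhanced = equalize_histogram(low_contrast)  # now spans 0..255
```

In practice, studies would rely on library implementations, often adaptive variants such as CLAHE that limit the noise amplification a global remapping can introduce—which is one reason enhancement alone cannot correct positioning errors.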

Overview of suggestions for future studies

A total of 8 of the 41 included studies provided suggestions regarding image quality in future research.11,15,16,25,26,34,43,44 Some articles emphasized that the quality of images is an essential factor in improving the performance of artificial intelligence models.11,15,16,26,44 Three studies suggested exploring the influence of image projection and quality contrast on ML model discrimination performance.15,16,26 Sukegawa et al34 recommended developing CNN models to identify image quality and patient covariate ensembles. Warin et al43 and Aljabri et al44 suggested using a greater quantity and better quality of images for real scenarios in future studies.

Discussion

This review synthesized evidence from 41 papers on the quality assessment strategies and management of imaging errors from PAN datasets used in the development of ML models. From our comprehensive analysis of the literature, this systematic review is the first to focus on this problem.

A group of authors used image enhancement techniques during data pre-processing stages.10,14,20,28,38,44 The utilization of image enhancement methods is commonly observed across studies applying ML models in several imaging modalities in healthcare.49 It does not necessarily reflect specific quality concerns as seen on PAN, such as positioning errors. Even though image enhancement methods were largely applied across studies to improve performance, these authors did not address low-quality images due to positioning or other technical errors.10,13,14,20,28,38,44

A few studies acknowledged the influence of training data quality on AI model performance and opted to either eliminate low-quality images during model development or only include high-quality ones.7–9,11,15,19,23,24,29,31,32,35–37,39,41,42,46 Such elimination is based on subjective criteria such as “low quality” and “high quality.” The type and magnitude of the imaging errors detected on these images are not described, implying the reader should use common sense to understand the issues that affected the dataset. This reliance on subjective criteria may introduce bias into the model once development is complete. Concurrently, several authors agree that image quality affects model performance.9,22,23 These statements expose the need for higher methodological consistency in quality assessment of PAN. Even though the term AI implies intelligence, ML models used in computer vision tasks are only trained to represent intrinsic statistical patterns of the provided data.50 If pixels from specific regions of interest are obscured due to imaging errors, this can substantially alter such statistical patterns, impairing the performance of DL algorithms. While clinicians may easily identify these errors, they present a considerable challenge for AI until a model is specifically developed to detect them. This highlights a fundamental limitation of AI systems that has so far been overlooked in the literature. Plausibly, it may be hypothesized that for ML algorithms, the differences between error-free and low-quality PAN might introduce changes as significant as those observed between different classes of images—such as those between PAN of adult and primary dentition. Accordingly, our study reveals a gap in knowledge between the well-established literature on known quality assessment challenges with PAN and current practices in research of ML applications using this imaging modality.

Quality assessment of PAN has the purpose of attributing at least three different subclasses to radiographs before diagnostic interpretation: undiagnostic, diagnostic with imaging errors, and diagnostic without imaging errors.51 Imaging errors are widely variable and present distinct features depending on the nature of the problem. Issues such as the presence of metallic artefacts or patient positioning are reflected on the radiograph in varying degrees of severity, which may ultimately make the PAN undiagnostic.51 Given the high prevalence and wide variability of these errors, previous studies have proposed quality assessment strategies to deal with this issue.7,19,41,52,53 The authors of these studies suggest that, based on quality assessment analysis, radiographs can exhibit satisfactory diagnostic quality even in the presence of errors. Nevertheless, these errors must be identified, and the extent of their influence needs to be gauged before diagnostic interpretations can be made.16,17,45 For example, identifying the palatoglossal airway space, the most prevalent positioning error in PAN,53 informs clinicians that abnormalities around the apices of maxillary teeth may be obscured. Comparable to diagnostic interpretation, quality assessment analysis tasks, such as detecting the palatoglossal airway space, also rely on recognizing well-documented visual patterns. Therefore, future AI studies may focus on identifying such features, as these are likely to benefit from computer vision models.

A few authors claimed their models should perform adequately regardless of the quality of the radiograph, as suboptimal images are likely to occur in real-world scenarios.16,17 This claim may also explain why 91 studies assessed for eligibility in this review did not mention quality assessment strategies or concerns. Even though this rationale may seem sensible, this review revealed a few issues with this reasoning. As an instrument of clinical decision support, AI models (whether applied to PAN or not) must achieve excellent prediction performance while accounting for as many sources of systematic and random error as possible.54 Imaging errors, as observed on PAN, are a well-documented source of systematic errors and are highly likely to introduce bias and decrease model accuracy, even in cases of satisfactory model performance. Accounting for systematic errors before model development can additionally benefit clinical decision support, as the expectations upon the diagnostic potential of each radiograph can be adjusted, and images may be subclassified relative to the nature and severity of imaging errors present. Therefore, a consistent quality assessment strategy for PAN may also address some well-documented shortcomings of AI tools in healthcare applications, such as robustness and generalizability,50,55 as differences between radiographs may be adequately accounted for.

Systematic errors (such as imaging errors) are also likely to undermine the explainability of AI systems54—another aspect that may benefit from future development of an AI-based quality assessment tool for PAN. Such a tool may inform subsequent models in a pipeline how challenging any given radiograph is and the extent to which imaging errors have compromised different regions of interest. This may be used as an explainability resource by offering clinicians advice on why a prediction may or may not be made.

Furthermore, failing to adopt an objective quality assessment strategy to classify images as high or low quality exposes a few issues: obtaining a large dataset free from imaging errors is unrealistic; regardless of the type of ML task, using only high-quality images for training may lead to bias; and, albeit useful, standard image enhancement techniques rarely address the specific quality assessment concerns observed on PAN. These points further reinforce that adopting an objective strategy to gauge the extent of technique-related faults before building an AI model may lead to more reliable and reproducible performance and improve methodological consistency.

There are limitations to this systematic review. Only three databases were used to identify relevant articles, which may have resulted in missing some relevant studies. The sample size is also relatively low for a growing body of literature on this subject.

Conclusion

In conclusion, this review revealed significant inconsistencies in the management of PAN imaging errors in ML research. Paradoxically, most studies agree that such errors are detrimental when building ML models. More research is needed to gauge how low-quality inputs impact model performance in different tasks. For this purpose, consistent and objective quality assessment criteria should be adopted in the methodology of similar studies in the future. DL models have the potential to streamline the development of these criteria, as the determination of image quality also relies on pattern recognition tasks.

Author contributions

E. Delamare and X. Fu are joint first and corresponding authors of this article.

Contributor Information

Eduardo Delamare, Sydney Dental School, Faculty of Medicine and Health, The University of Sydney, Camperdown, NSW, 2050, Australia; Digital Health and Data Science, Faculty of Medicine and Health, The University of Sydney, Camperdown, NSW, 2050, Australia.

Xingyue Fu, School of Computer Science, Faculty of Engineering, The University of Sydney, Camperdown, NSW, 2050, Australia.

Zimo Huang, School of Computer Science, Faculty of Engineering, The University of Sydney, Camperdown, NSW, 2050, Australia.

Jinman Kim, School of Computer Science, Faculty of Engineering, The University of Sydney, Camperdown, NSW, 2050, Australia.

Funding

None declared.

Conflicts of interest

None declared.

References

  • 1. Izzetti R, Nisi M, Aringhieri G, Crocetti L, Graziani F, Nardi C.. Basic knowledge and new advances in panoramic radiography imaging techniques: a narrative review on what dentists and radiologists should know. Appl Sci. 2021;11(17):7858. [Google Scholar]
  • 2. Shahidi S, Zamiri B, Abolvardi M, Akhlaghian M, Paknahad M.. Comparison of dental panoramic radiography and CBCT for measuring vertical bone height in different horizontal locations of posterior mandibular alveolar process. J Dent (Shiraz). 2018;19(2):83-91. [PMC free article] [PubMed] [Google Scholar]
  • 3. Mahdi FP, Motoki K, Kobashi S.. Optimization technique combined with deep learning method for teeth recognition in dental panoramic radiographs. Sci Rep. 2020;10(1):19261. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Dhillon M, Raju SM, Verma S, et al. Positioning errors and quality assessment in panoramic radiography. Imaging Sci Dent. 2012;42(4):207-212. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Choi BR, Choi DH, Huh KH, et al. Clinical image quality evaluation for panoramic radiography in Korean dental clinics. Imaging Sci Dent. 2012;42(3):183-190.
  • 6. Fukuda M, Ariji Y, Kise Y, et al. Comparison of 3 deep learning neural networks for classifying the relationship between the mandibular third molar and the mandibular canal on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol. 2020;130(3):336-343.
  • 7. Lee JS, Adhikari S, Liu L, Jeong HG, Kim H, Yoon SJ. Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system: a preliminary study. Dentomaxillofac Radiol. 2019;48(1):20170344.
  • 8. Lee JH, Kim DH, Jeong SN. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020;26(1):152-158.
  • 9. Lee DW, Kim SY, Jeong SN, Lee JH. Artificial intelligence in fractured dental implant detection and classification: evaluation using dataset from two dental hospitals. Diagnostics (Basel). 2021;11(2):233. doi: 10.3390/diagnostics11020233.
  • 10. Poedjiastoeti W, Suebnukarn S. Application of convolutional neural network in the diagnosis of jaw tumors. Healthc Inform Res. 2018;24(3):236-241.
  • 11. Lee JH, Jeong SN. Efficacy of deep convolutional neural network algorithm for the identification and classification of dental implant systems, using panoramic and periapical radiographs: a pilot study. Medicine (Baltimore). 2020;99(26):e20787.
  • 12. Abdalla-Aslan R, Yeshua T, Kabla D, Leichter I, Nadler C. An artificial intelligence system using machine-learning for automatic detection and classification of dental restorations in panoramic radiography. Oral Surg Oral Med Oral Pathol Oral Radiol. 2020;130(5):593-602.
  • 13. Lee JH, Han SS, Kim YH, Lee C, Kim I. Application of a fully deep convolutional neural network to the automation of tooth segmentation on panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol. 2020;129(6):635-642.
  • 14. Fan F, Ke W, Wu W, et al. Automatic human identification from panoramic dental radiographs using the convolutional neural network. Forensic Sci Int. 2020;314:110416.
  • 15. Ekert T, Krois J, Meinhold L, et al. Deep learning for the radiographic detection of apical lesions. J Endod. 2019;45(7):917-922.e5.
  • 16. Krois J, Ekert T, Meinhold L, et al. Deep learning for the radiographic detection of periodontal bone loss. Sci Rep. 2019;9(1):8495.
  • 17. Tuzoff DV, Tuzova LN, Bornstein MM, et al. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac Radiol. 2019;48(4):20180051.
  • 18. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.
  • 19. Aliaga I, Vera V, Vera M, García E, Pedrera M, Pajares G. Automatic computation of mandibular indices in dental panoramic radiographs for early osteoporosis detection. Artif Intell Med. 2020;103:101816.
  • 20. Alzubaidi MA, Otoom M. A comprehensive study on feature types for osteoporosis classification in dental panoramic radiographs. Comput Methods Programs Biomed. 2020;188:105301.
  • 21. Endres MG, Hillen F, Salloumis M, et al. Development of a deep learning algorithm for periapical disease detection in dental radiographs. Diagnostics (Basel). 2020;10(6).
  • 22. Hiraiwa T, Ariji Y, Fukuda M, et al. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac Radiol. 2019;48(3):20180218.
  • 23. Khan S, Mukati A. Dataset augmentation for machine learning applications of dental radiography. Int J Adv Comput Sci Appl. 2020;2:453-456.
  • 24. Lee JH, Kim YT, Lee JB, Jeong SN. A performance comparison between automated deep learning and dental professionals in classification of dental implant systems from dental imaging: a multi-center study. Diagnostics (Basel). 2020;10(11).
  • 25. Lee KS, Jung SK, Ryu JJ, Shin SW, Choi J. Evaluation of transfer learning with deep convolutional neural networks for screening osteoporosis in dental panoramic radiographs. J Clin Med. 2020;9(2):392. doi: 10.3390/jcm9020392.
  • 26. Leite AF, Gerven AV, Willems H, et al. Artificial intelligence-driven novel tool for tooth detection and segmentation on panoramic radiographs. Clin Oral Investig. 2021;25(4):2257-2267.
  • 27. Takahashi T, Nozaki K, Gonda T, Mameno T, Wada M, Ikebe K. Identification of dental implants using deep learning-pilot study. Int J Implant Dent. 2020;6(1):53.
  • 28. Vigil A, Bharathi S. Diagnosis of pulpitis from dental panoramic radiograph using histogram of gradients with discrete wavelet transform and multilevel neural network techniques. TS. 2021;38(5):1549-1555.
  • 29. Zhu H, Cao Z, Lian L, Ye G, Gao H, Wu J. CariesNet: a deep learning approach for segmentation of multi-stage caries lesion from oral panoramic X-ray image. Neural Comput Appl. 2022:1-9. doi: 10.1007/s00521-021-06684-2.
  • 30. Benakatti V, Nayakar R, Anandhalli M. Machine learning for identification of dental implant systems based on shape—a descriptive study. J Indian Prosthodont Soc. 2021;21(4):405-411.
  • 31. Cha J-Y, Yoon H-I, Yeo I-S, Huh K-H, Han J-S. Panoptic segmentation on panoramic radiographs: deep learning-based segmentation of various structures including maxillary sinus and mandibular canal. J Clin Med. 2021;10(12):2577.
  • 32. Shen S, Liu Z, Wang J, Fan L, Ji F, Tao J. Machine learning assisted Cameriere method for dental age estimation. BMC Oral Health. 2021;21(1):641.
  • 33. Lee S, Kim D, Jeong H-G. Detecting 17 fine-grained dental anomalies from panoramic dental radiography using artificial intelligence. Sci Rep. 2022;12(1):5172.
  • 34. Sukegawa S, Fujimura A, Taguchi A, et al. Identification of osteoporosis using ensemble deep learning model with panoramic radiographs and clinical covariates. Sci Rep. 2022;12(1):6088.
  • 35. Liu J, Liu Y, Li S, Ying S, Zheng L, Zhao Z. Artificial intelligence-aided detection of ectopic eruption of maxillary first molars based on panoramic radiographs. J Dent. 2022;125:104239. doi: 10.1016/j.jdent.2022.104239.
  • 36. Yu D, Hu J, Feng Z, Song M, Zhu H. Deep learning based diagnosis for cysts and tumors of jaw with massive healthy samples. Sci Rep. 2022;12(1):1855.
  • 37. Bunyarit SS, Nambiar P, Naidu M, Asif MK, Poh RY. Dental age estimation of Malaysian Indian children and adolescents: applicability of Chaillet and Demirjian's modified method using artificial neural network. Ann Hum Biol. 2022;49(3-4):192-199.
  • 38. Santosh KC, Pradeep N, Goel V, et al. Machine learning techniques for human age and gender identification based on teeth X-ray images. J Healthc Eng. 2022;2022:8302674. doi: 10.1155/2022/8302674. Retraction in: J Healthc Eng. 2023;2023:9812937. [Retracted]
  • 39. Zhu H, Cao Z, Lian L, Ye G, Gao H, Wu J. CariesNet: a deep learning approach for segmentation of multi-stage caries lesion from oral panoramic X-ray image. Neural Comput Appl. 2022:1-9. doi: 10.1007/s00521-021-06684-2.
  • 40. Bonfanti-Gris M, Garcia-Cañas A, Alonso-Calvo R, Salido Rodriguez-Manzaneque MP, Pradies Ramiro G. Evaluation of an artificial intelligence web-based software to detect and classify dental structures and treatments in panoramic radiographs. J Dent. 2022;126:104301. doi: 10.1016/j.jdent.2022.104301.
  • 41. Wang X, Liu Y, Miao X, et al. Densen: a convolutional neural network for estimating chronological ages from panoramic radiographs. BMC Bioinformatics. 2022;23(Suppl 3):426.
  • 42. Lin S-Y, Chang H-Y. Tooth numbering and condition recognition on dental panoramic radiograph images using CNNs. IEEE Access. 2021;9:166008-166026.
  • 43. Warin K, Limprasert W, Suebnukarn S, Inglam S, Jantana P, Vicharueang S. Assessment of deep convolutional neural network models for mandibular fracture detection in panoramic radiographs. Int J Oral Maxillofac Surg. 2022;51(11):1488-1494.
  • 44. Aljabri M, Aljameel SS, Min-Allah N, et al. Canine impaction classification from panoramic dental radiographic images using deep learning models. Inform Med Unlocked. 2022;30:100918. doi: 10.1016/j.imu.2022.100918.
  • 45. Cejudo JE, Chaurasia A, Feldberg B, Krois J, Schwendicke F. Classification of dental radiographs using deep learning. J Clin Med. 2021;10(7):1496.
  • 46. Estai M, Tennant M, Gebauer D, et al. Deep learning for automated detection and numbering of permanent teeth on panoramic images. Dentomaxillofac Radiol. 2022;51(2):20210296. doi: 10.1259/dmfr.20210296.
  • 47. Lee J-H, Kim Y-T, Lee J-B, Jeong S-N. Deep learning improves implant classification by dental professionals: a multi-center evaluation of accuracy and efficiency. J Periodontal Implant Sci. 2022;52(3):220-229.
  • 48. Li H, Zhou J, Zhou Y, et al. An interpretable computer-aided diagnosis method for periodontitis from panoramic radiographs. Front Physiol. 2021;12:655556. doi: 10.3389/fphys.2021.655556.
  • 49. Advances in Deep Learning Techniques for Medical Image Analysis. 2018 Fifth International Conference on Parallel, Distributed and Grid Computing (PDGC), Solan, India; 2018:271-277.
  • 50. Schwendicke FA, Samek W, Krois J. Artificial intelligence in dentistry: chances and challenges. J Dent Res. 2020;99(7):769-774.
  • 51. Ramesh A. Panoramic imaging. In: Mallya S, Lam E, eds. White and Pharoah's Oral Radiology: Principles and Interpretation. Second South Asia ed. Elsevier India; 2019:131.
  • 52. Rondon RH, Pereira YC, do Nascimento GC. Common positioning errors in panoramic radiography: a review. Imaging Sci Dent. 2014;44(1):1-6.
  • 53. Granlund CM, Lith A, Molander B, Gröndahl K, Hansen K, Ekestubbe A. Frequency of errors and pathology in panoramic images of young orthodontic patients. Eur J Orthod. 2012;34(4):452-457.
  • 54. Amann J, Blasimme A, Vayena E, Frey D, Madai VI. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. 2020;20(1):310-319.
  • 55. Schwendicke F, Singh T, Lee JH, et al. Artificial intelligence in dental research: checklist for authors, reviewers, readers. J Dent. 2021;107:103610.

Articles from Dentomaxillofacial Radiology are provided here courtesy of Oxford University Press