Imaging Science in Dentistry. 2025 Jul 1;55(3):271–279. doi: 10.5624/isd.20250023

Deep learning for dentomaxillofacial cone-beam computed tomography image quality enhancement: A pilot study

Ali Nazari 1,2, Seyed Mohammad Yousef Najafi 1,2, Reza Abbasi 1, Hossein Mohammad-Rahimi 3,4, Parisa Motie 5, Mina Iranparvar Alamdari 2,6, Mehdi Hosseinzadeh 6, Ruben Pauwels 3, Falk Schwendicke 4
PMCID: PMC12505439  PMID: 41070253

Abstract

Purpose

This study was conducted to develop and evaluate a deep learning-based super-resolution approach for enhancing the quality of cone-beam computed tomography (CBCT) images in dentomaxillofacial imaging.

Materials and Methods

A deep learning-based super-resolution method using the MIRNet-v2 model was developed to enhance CBCT image quality. The study used a dataset comprising 6,961 anonymized axial slices from 15 CBCT scans. High-resolution images served as ground truth, while low-resolution versions were created through artificial degradation, including downscaling, blurring, and noise addition. The model was evaluated using a 5-fold cross-validation strategy, employing peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) as metrics. Qualitative assessments conducted by 2 experienced radiologists involved criteria such as noise, sharpness, spatial resolution, and diagnostic quality, scored using a CBCT evaluation chart.

Results

The model significantly improved degraded CBCT images across all evaluation metrics. Enhanced images demonstrated mean PSNR values exceeding 35 dB and SSIM values over 0.85, with the highest performance achieved for blurred images (PSNR: 43.86±1.61, SSIM: 0.98±0.01). Subjective assessments indicated improvements in diagnostic quality, noise reduction, and spatial resolution, with outputs comparable to the original images in several degradation scenarios. Interobserver reliability was fair (Cohen kappa: 0.335). Notable improvements were observed for noise and artifact reduction in specific degradation groups, suggesting improved diagnostic utility.

Conclusion

Deep learning-based super-resolution demonstrates considerable potential for enhancing CBCT image quality, especially in scenarios involving blur and downscaling. These results suggest possible applications in low-dose imaging protocols and improved clinical decision-making.

Keywords: Cone-Beam Computed Tomography; Artificial Intelligence; Image Processing, Computer-Assisted; Image Enhancement

Introduction

Cone-beam computed tomography (CBCT) is widely used in dentistry to provide volumetric data on teeth and the maxillofacial complex.1,2 The quality of CBCT images is crucial for accurate diagnosis of pathologies, identification of structures, and assessment of functional attributes,3,4,5 thus directly influencing treatment planning.6,7

The voxel size for dental CBCT scans typically ranges from 75 µm to 500 µm.8 However, the actual spatial resolution of CBCT images generally falls between 0.6 and 2.8 line pairs per mm,9 influenced by factors such as focal spot size, detector quality, partial volume effects, contrast-to-noise ratio, patient movement, and reconstruction algorithms.8 Many CBCT scanners offer a “high-resolution” mode, which produces smaller voxel sizes. Achieving higher resolution often necessitates increased milliamperage (mA), additional projections (base images), extended rotation arcs, and longer exposure times, thereby increasing patient radiation dose.

Deep learning, a particularly promising branch of artificial intelligence, is significantly advancing the fields of medicine and dentistry.10 Currently, deep learning applications in dentistry include the diagnosis of maxillofacial pathologies,11 dental caries,12 periodontitis,13 and periapical lesions.14 Image-to-image translation, a specific deep learning application within image processing, has been employed in dentistry for tasks such as image super-resolution (SR), denoising, and modality conversion.15,16,17

SR generates high-resolution images from low-resolution inputs by removing noise and degradation associated with lower-resolution acquisition methods.18 Recent years have seen marked advancements in SR methods, particularly those utilizing deep learning, to enhance the quality of both 2-dimensional (2D) and 3-dimensional dental images.15,19,20,21 Hwang et al.21 developed a deep learning-based SR model for restoring high-resolution CBCT images from low-resolution compressed images, outperforming conventional bicubic interpolation. Furthermore, Li et al.22 demonstrated that deep learning-based SR could increase image quality and improve clinical decision-making compared to non-SR images or traditional upscaling.

This study aimed to train and test a deep learning-based SR model to generate high-resolution, high-quality CBCT images from downscaled, noisy, and blurred CBCT inputs. Additionally, the subjective diagnostic value of enhanced CBCT images was assessed.

Materials and Methods

Study design

This study employed a deep learning-based SR approach to improve the resolution, technical performance metrics, and subjective quality of CBCT images. Given the retrospective nature of image collection and anonymization, informed consent was waived by the Institutional Review Board (IRB) of Shahid Beheshti University of Medical Sciences (IRB number: IR.SBMU.DRC.REC.1402.129). The study outcomes were reported in accordance with the Artificial Intelligence in Dental Research guidelines.23

Dataset and data preparation

The dataset included 6,961 2D axial slices derived from 15 CBCT scans collected in 2023 at the Department of Oral and Maxillofacial Radiology, Shahid Beheshti University of Medical Sciences. Images were captured using a NewTom VGi CBCT device (QR s.r.l., Verona, Italy), with parameters of 110 kV, 6 to 13 mA, and 5.4 s exposure time. The inclusion criteria were: 1) high-resolution images with a voxel size of 150 µm, 2) a field of view measuring 8×8 cm, and 3) no significant noise or artifacts (including motion and metal artifacts).

The CBCT images were exported in Digital Imaging and Communications in Medicine (DICOM) format and anonymized. For model training and testing, each 2D axial slice of the CBCT images was used as ground truth. At this stage, the images were converted into 8-bit RGB format. To simulate low-dose CBCT images, the original scans underwent a series of artificial degradations. Training deep learning models for medical image enhancement on artificially degraded images is a widely used approach in the literature, as it enables controlled and reproducible simulation of low-quality data.24,25 The degradations were as follows: 1) downscaling: images were downscaled by a factor of 2; 2) blurring: Gaussian blur was applied with a sigma value of 2.0; and 3) noise addition: Gaussian noise was introduced with a standard deviation of 20.

These degradations were applied individually and in combination, generating a diverse set totaling 7 groups of images.
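The degradation pipeline itself was not published with the study; the following is a minimal sketch, assuming NumPy and SciPy, of how the 3 degradations and their 7 combinations could be reproduced. The order of operations, the interpolation order, and the random seed are assumptions.

```python
from itertools import combinations

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def degrade(img, downscale=False, blur=False, noise=False, rng=None):
    """Apply the degradations described above to one 8-bit axial slice.

    Illustrative only: the study's preprocessing code was not released,
    and the ordering of the operations here is an assumption.
    """
    rng = rng or np.random.default_rng(0)
    out = img.astype(np.float64)
    if blur:
        out = gaussian_filter(out, sigma=2.0)         # Gaussian blur, sigma = 2.0
    if downscale:
        out = zoom(out, 0.5, order=3)                 # downscale by a factor of 2
    if noise:
        out = out + rng.normal(0.0, 20.0, out.shape)  # Gaussian noise, SD = 20
    return np.clip(out, 0, 255).astype(np.uint8)

# The 7 groups correspond to every non-empty combination of the 3 degradations.
flags = ("downscale", "blur", "noise")
groups = [dict.fromkeys(c, True) for r in (1, 2, 3) for c in combinations(flags, r)]
assert len(groups) == 7  # e.g., degrade(slice_8bit, **groups[0])
```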

Model development and training

The MIRNet-v2 model26 was employed for this task. This model consists of multiple multi-scale residual blocks, each containing parallel multi-resolution convolution streams for feature extraction at different scales. It employs a selective kernel feature fusion mechanism for dynamic feature aggregation. This design maintains high-resolution representations throughout the network while simultaneously capturing contextual information from lower-resolution streams. The model balances the preservation of fine spatial details with the requirement to encode broader contextual information, a feature critical for enhancing CBCT image quality.
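To make the fusion mechanism concrete, below is a simplified PyTorch sketch of a selective kernel feature fusion step, based on the published MIRNet-v2 description.26 Layer sizes and the reduction factor are assumptions, and the official implementation differs in detail.

```python
import torch
import torch.nn as nn

class SKFF(nn.Module):
    """Selective kernel feature fusion (sketch): aggregates parallel
    multi-resolution streams via softmax attention across branches."""
    def __init__(self, channels, n_branches, reduction=8):
        super().__init__()
        hidden = max(channels // reduction, 4)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.squeeze = nn.Sequential(nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True))
        self.attend = nn.ModuleList(nn.Conv2d(hidden, channels, 1) for _ in range(n_branches))

    def forward(self, streams):  # streams: list of (B, C, H, W) tensors
        fused = torch.stack(streams).sum(dim=0)            # element-wise sum of branches
        z = self.squeeze(self.pool(fused))                 # compact global descriptor
        logits = torch.stack([a(z) for a in self.attend])  # per-branch attention logits
        weights = torch.softmax(logits, dim=0)             # softmax across branches
        return (torch.stack(streams) * weights).sum(dim=0) # dynamically weighted fusion
```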

The MIRNet-v2 model was implemented in Python using the PyTorch library. Training used the Charbonnier loss function and ran for up to 20 epochs, with early stopping based on validation performance to prevent overfitting. Validation data were separated from training data at the patient level, ensuring no overlap between patient sets. Five-fold cross-validation was performed: in each fold, 3 patients were held out for validation and the remaining 12 were used for training. The number of 2D images per fold was 1,391 for fold 1, 1,529 for fold 2, 1,393 for fold 3, 1,391 for fold 4, and 1,257 for fold 5.
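The Charbonnier loss is a smooth, robust variant of the L1 loss, L(x, y) = √((x − y)² + ε²). A minimal PyTorch sketch of the loss and an early-stopping loop follows; the ε value, optimizer, learning rate, and patience are assumptions, as these details were not reported.

```python
import torch
import torch.nn as nn

class CharbonnierLoss(nn.Module):
    """Charbonnier loss: a differentiable, robust variant of L1."""
    def __init__(self, eps=1e-3):  # eps is an assumed value
        super().__init__()
        self.eps = eps

    def forward(self, pred, target):
        return torch.sqrt((pred - target) ** 2 + self.eps ** 2).mean()

def train(model, train_loader, val_loader, epochs=20, patience=3):
    """Training sketch: 20 epochs with early stopping on validation loss."""
    opt = torch.optim.Adam(model.parameters(), lr=2e-4)  # optimizer settings assumed
    loss_fn = CharbonnierLoss()
    best_val, stale = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for lr_img, hr_img in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(lr_img), hr_img)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader) / len(val_loader)
        if val < best_val:
            best_val, stale = val, 0
        else:
            stale += 1
            if stale >= patience:  # stop when validation loss no longer improves
                break
```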

Evaluation

The efficacy of the model was assessed on the validation dataset using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), quantitatively comparing degraded and SR images (model outputs) against the original high-resolution images. PSNR and SSIM were defined as follows:

PSNR = 10 · log₁₀(255² / MSE)

Here, MSE (mean squared error) is calculated between pairs of images (degraded vs. original and SR vs. original), representing the average squared difference in pixel intensities. Higher PSNR values indicate reconstructed images closer to the original high-resolution reference.

SSIM(f, f′) = l(f, f′) · c(f, f′) · s(f, f′)

Here, f and f′ represent the 2 images compared (degraded/SR and original, respectively). l(f, f′) measures luminance similarity, assessing the closeness of brightness levels between images. c(f, f′) measures contrast similarity. s(f, f′) quantifies structural similarity, evaluating the alignment of structures and patterns. SSIM values closer to 1 indicate higher structural and visual similarity to the reference image.
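As a reproducibility aid, both metrics can be computed directly from these definitions; a short sketch using NumPy and scikit-image follows. scikit-image's structural_similarity combines the luminance, contrast, and structure terms internally, and the data_range argument reflects the 8-bit images used here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference, test):
    """PSNR in dB for 8-bit images, following the formula above."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def ssim(reference, test):
    """SSIM for 8-bit grayscale images; the l, c, and s terms defined
    above are computed and multiplied inside scikit-image."""
    return structural_similarity(reference, test, data_range=255)
```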

Two oral and maxillofacial radiologists (M.I. and M.H.) with over 3 years of experience independently conducted the subjective evaluations. The evaluation criteria were based on a modified version of the clinical CBCT image evaluation chart from the Korean Academy of Oral and Maxillofacial Radiology.27,28 Each image was scored based on 7 criteria: overall diagnostic quality; noise (random fluctuations in grey values in anatomical structures); background noise; artifact (bright or dark lines radiating from high-density objects, disrupting clear visualization); sharpness (the clarity of boundaries between areas of differing radiodensity, or the ability to define an edge); spatial resolution (the ability to image small, high-contrast objects, determined by blurring) and contrast resolution (the ability to reveal subtle differences in radiodensity, limited by noise) of dental structures; and spatial and contrast resolution of periodontal structures. Each criterion was scored as 0=poor, 1=moderate, or 2=good quality.

Statistical analysis

Statistical analyses were performed using SPSS version 15 (SPSS Inc., Chicago, IL, USA). Paired t-tests were used to compare PSNR and SSIM values between predicted and low-resolution images. Analysis of variance (ANOVA) or the Kruskal-Wallis H test (if variances were heterogeneous) was used to compare PSNR and SSIM values among predicted images according to degradation group. For qualitative evaluations, interobserver reliability was assessed using the Cohen kappa coefficient. Kappa statistics were interpreted according to Landis and Koch,29 with values below 0.20 indicating slight agreement; 0.21–0.40, fair agreement; 0.41–0.60, moderate agreement; 0.61–0.80, substantial agreement; and above 0.80, almost perfect agreement. Scores were compared among the original, degraded, and predicted images using the Friedman test, with post hoc comparisons conducted using Friedman 2-way ANOVA by ranks. Statistical significance was set at α=0.05.
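For illustration, the core tests are also available in SciPy and scikit-learn; the sketch below uses synthetic placeholder arrays standing in for the study's per-image metric values and per-image quality scores.

```python
import numpy as np
from scipy.stats import ttest_rel, kruskal
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Placeholder data standing in for the per-image PSNR values.
psnr_degraded = rng.normal(30, 2, size=100)
psnr_predicted = psnr_degraded + rng.normal(5, 1, size=100)

# Paired t-test: predicted vs. degraded PSNR within one degradation group.
t_stat, p_val = ttest_rel(psnr_predicted, psnr_degraded)

# Kruskal-Wallis H test comparing predicted-image PSNR across degradation groups.
h_stat, p_kw = kruskal(rng.normal(44, 1.6, 100),   # blur
                       rng.normal(40, 0.8, 100),   # downscale
                       rng.normal(36, 1.2, 100))   # noise

# Interobserver reliability of the ordinal quality scores (0/1/2).
kappa = cohen_kappa_score(rng.integers(0, 3, 50), rng.integers(0, 3, 50))
```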

Results

The performance of the MIRNet-v2 model under different artificial degradation conditions is detailed in Figure 1 and Table 1. Examples of various image degradations and corresponding model outputs are shown in Figure 2. The model significantly improved the quality of all degraded images, as measured by PSNR and SSIM metrics, when comparing predicted images to their low-resolution counterparts (P<0.05). All average PSNR values exceeded 35 dB, while the SSIM values were greater than 0.85.

Fig. 1. Bar graphs presenting peak signal-to-noise ratio (PSNR) (A) and structural similarity index measure (SSIM) (B) values of degraded low-resolution images and the corresponding outputs generated by the MIRNet-v2 super-resolution model under various degradation conditions.


Table 1. Peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) of the MIRNet-v2 super-resolution model.


Fig. 2. Comparison of original, degraded, and predicted cone-beam computed tomography images under various degradation conditions. The central image represents the original high-quality image. Surrounding it are multiple columns showing degraded images (left) and the corresponding predicted images (right) across 7 degradation scenarios.


Based on the Levene test, the criterion for homogeneity of variances was not met (P<0.05). Therefore, the non-parametric Kruskal-Wallis H test was employed to compare the PSNR and SSIM values across degradation procedures. The analysis revealed statistically significant differences among degradation groups for both PSNR (χ2(2)=92.021, P<0.05) and SSIM (χ2(2)=83.132, P<0.05). The model performed best with blurred images (PSNR: 43.86±1.61, SSIM: 0.98±0.01), followed by downscaled images (PSNR: 40.14±0.84, SSIM: 0.97±0.01), with no significant difference between these groups (P>0.05). Noise addition resulted in significantly lower PSNR and SSIM values in predicted images compared to blurring (PSNR: P<0.05, SSIM: P<0.05) and downscaling (PSNR: P<0.05, SSIM: P<0.05). The smallest improvement was detected for images that had been subjected to noise combined with downscaling (PSNR: 35.08±1.11, SSIM: 0.84±0.07).

Subjective evaluation indicated fair interobserver agreement (κ: 0.335; 95% CI: 0.170–0.653; P<0.05), according to the Cohen kappa coefficient.30 The results of this evaluation are presented in Figure 3. Moreover, the outcomes of statistical comparisons between the predicted images (model outputs) and the degraded and original images are detailed in Tables 2 and 3, respectively. Comparisons between degraded and predicted images revealed that the model improved the overall quality of all degraded images except those subjected to blurring alone and a combination of noise and blurring.

Fig. 3. Subjective evaluation of image quality under various degradation conditions and model outputs. A. Overall diagnostic quality. B. Noise in anatomical structures. C. Background noise. D. Artifacts. E. Sharpness. F. Resolution in dental structures. G. Resolution in periodontal structures. Responses are categorized as poor (red), moderate (yellow), and good (green). The stacked bars illustrate percentage distributions for each criterion, highlighting the impact of degradations and the improvements achieved through restoration. *P<0.05, with output images outperforming degraded images. **P<0.05, with output images underperforming degraded images. ΔP<0.05, with the original high-resolution images outperforming the output images. ΘP<0.05, with the original high-resolution images underperforming the output images.


Table 2. Subjective outcomes of the deep learning model compared with the degraded images.

+: output image has higher mean rank, −: output image has lower mean rank

Table 3. Subjective outcomes of the deep learning model compared with the original images.

+: output image has higher mean rank, −: output image has lower mean rank

A comparison between original and predicted images revealed that the overall diagnostic quality of predicted images was comparable to the original images for the downscaled and blurred+downscaled groups. The quality of the predicted images exceeded that of the original images regarding background noise in downscaled images (P<0.05) and artifacts in images subjected to noise, blur, and downscaling (P<0.05).

Discussion

CBCT provides essential volumetric data for dental diagnosis and treatment planning.2 However, spatial resolution and overall image quality are often limited, particularly by metal and patient-related artifacts.31 This complicates evaluations of dental conditions such as vertical root fractures32 and apical root isthmuses.33 The present study aimed to enhance CBCT image resolution through a deep learning-based SR approach, thereby improving overall image quality.

Objective measurements demonstrated that MIRNet-v2 can improve the quality of CBCT images across various degradation scenarios. The model showed particularly strong performance on blurred images, achieving PSNR values exceeding 43 dB and SSIM values nearing 0.98. However, images degraded by a combination of noise, blurring, and downscaling posed greater challenges, suggesting that the presence of multiple degradation factors can adversely impact model performance. Nevertheless, the model achieved some level of improvement across all degradation scenarios. MIRNet-v2 was chosen for this study because its architecture effectively preserves high-resolution spatial features while incorporating multi-scale contextual information, a property crucial for CBCT image enhancement. Its established state-of-the-art performance in various image restoration tasks, including simultaneous SR and denoising, further underscores its suitability for medical imaging applications.26

Subjective evaluation by radiologists indicated that the model outputs improved overall diagnostic quality, noise reduction, and spatial resolution compared to degraded images. Notably, in scenarios involving downscaling and combined blurring with downscaling, model outputs were comparable to the original high-resolution images. However, discrepancies arose between objective and subjective evaluations. Specifically, the SSIM and PSNR measurements for the enhanced images did not consistently correlate with the image quality perceived by the radiologists, a finding aligned with previous research.15,34 This discrepancy was further emphasized by the fair interobserver agreement observed among radiologists. These findings highlight the limitations of objective metrics in capturing the complexity of human visual assessment. Consequently, future studies in medical image enhancement should incorporate subjective or clinical evaluations to ensure clinical relevance. Nonetheless, these metrics provide a standardized, fully objective framework for systematically benchmarking the performance of various image enhancement methods.

The present results align with growing evidence that deep learning-based SR can enhance various dental images, including periapical radiographs,20 panoramic radiographs,15 and head and neck CBCT images.8 Hwang et al.21 and Rytky et al.24 previously demonstrated that deep learning-based SR effectively restores high-resolution CBCT images from low-resolution ones, outperforming conventional upscaling methods. The present study builds upon this prior research, providing further evidence that deep learning-based SR is a viable approach for improving the quality of CBCT images, addressing not only resolution but also noise and blurring. An additional advantage of this study, compared to previous works, is its evaluation of oral and maxillofacial radiologists' perceptions of diagnostic image quality.

The improved image quality achieved through deep learning-based SR could meaningfully impact clinical practice. Enhanced CBCT images may enable more accurate diagnoses of pathological conditions, better visualization of anatomical structures, and more informed treatment planning. For instance, Ponder et al.35 reported superior assessments of external root resorption using high-resolution CBCT scans compared to low-resolution scans. Another study36 demonstrated that high-resolution CBCT images had increased sensitivity and specificity for the detection of vertical root fractures compared to their lower-resolution counterparts. Moreover, deep learning-based SR could enable the use of lower-dose CBCT protocols without compromising image quality, thereby reducing patient radiation exposure consistent with the “as low as reasonably achievable” (ALARA) principle in medical imaging.37 Additionally, these models could extend the lifespan of older CBCT devices by improving output quality without necessitating hardware upgrades.

The primary limitation of this study was its relatively small dataset, consisting of only 15 CBCT scans from a single imaging device; this may limit the generalizability of the findings to other CBCT systems. Furthermore, although the artificial degradation methods used here aimed to mimic real-world scenarios, they may not fully capture the complexity and variety of image quality issues encountered in clinical practice. Notably, the artificial noise introduced exhibited characteristics somewhat distinct from typical clinical noise patterns. Despite this simplified noise simulation, the deep learning model demonstrated remarkable effectiveness in correcting these degradations, suggesting the potential robustness of the approach. However, future research should employ more comprehensive and systematic degradation simulations and validate the clinical impact of deep learning-based SR by assessing its effects on clinicians' diagnostic performance and decision-making. Additionally, future studies could assess the potential of this approach in low-dose imaging scenarios.

In conclusion, this pilot study demonstrated the feasibility and potential of deep learning-based super-resolution for enhancing CBCT image quality. The MIRNet-v2 model successfully improved technical and subjective image quality across scenarios involving noise, blurring, and downscaling. These findings suggest that this model could be valuable for improving the diagnostic capabilities of CBCT imaging in dentistry, with possible applications in low-dose imaging. Further research is needed to address the limitations of the present study and establish clinical utility through larger-scale validation studies.

Footnotes

Conflicts of Interest: None

References

1. Baccher S, Gowdar IM, Guruprasad Y, Solanki RN, Medhi R, Shah MJ, et al. CBCT: a comprehensive overview of its applications and clinical significance in dentistry. J Pharm Bioallied Sci. 2024;16(Suppl 3):S1923–S1925. doi: 10.4103/jpbs.jpbs_19_24.
2. Pauwels R, Araki K, Siewerdsen J, Thongvigitmanee SS. Technical aspects of dental CBCT: state of the art. Dentomaxillofac Radiol. 2015;44:20140224. doi: 10.1259/dmfr.20140224.
3. Pinto JC, de Faria Vasconcelos K, Leite AF, Wanderley VA, Pauwels R, Oliveira ML, et al. Image quality for visualization of cracks and fine endodontic structures using 10 CBCT devices with various scanning protocols and artefact conditions. Sci Rep. 2023;13:4001. doi: 10.1038/s41598-023-31099-5.
4. Rinne CA, Dagassan-Berndt DC, Connert T, Müller-Gerbl M, Weiger R, Walter C. Impact of CBCT image quality on the confidence of furcation measurements. J Clin Periodontol. 2020;47:816–824. doi: 10.1111/jcpe.13298.
5. Lagos de Melo LP, Queiroz PM, Moreira-Souza L, Nadaes MR, Santaella GM, Oliveira ML, et al. Influence of CBCT parameters on image quality and the diagnosis of vertical root fractures in teeth with metallic posts: an ex vivo study. Restor Dent Endod. 2023;48:e16. doi: 10.5395/rde.2023.48.e16.
6. Park HN, Min CK, Kim KA, Koh KJ. Optimization of exposure parameters and relationship between subjective and technical image quality in cone-beam computed tomography. Imaging Sci Dent. 2019;49:139–151. doi: 10.5624/isd.2019.49.2.139.
7. Bamba J, Araki K, Endo A, Okano T. Image quality assessment of three cone beam CT machines using the SEDENTEXCT CT phantom. Dentomaxillofac Radiol. 2013;42:20120445. doi: 10.1259/dmfr.20120445.
8. Hatvani J, Horváth A, Michetti J, Basarab A, Kouamé D, Gyöngy M. Deep learning-based super-resolution applied to dental computed tomography. IEEE Trans Radiat Plasma Med Sci. 2019;3:120–128.
9. Brüllmann D, Schulze RK. Spatial resolution in CBCT machines for dental/maxillofacial applications - what do we know today? Dentomaxillofac Radiol. 2015;44:20140204. doi: 10.1259/dmfr.20140204.
10. Mohammad-Rahimi H, Rokhshad R, Bencharit S, Krois J, Schwendicke F. Deep learning: a primer for dentists and dental researchers. J Dent. 2023;130:104430. doi: 10.1016/j.jdent.2023.104430.
11. Motie P, Hemmati G, Hazrati P, Lazar M, Varzaneh FA, Mohammad-Rahimi H, et al. Application of artificial intelligence in diagnosing oral and maxillofacial lesions, facial corrective surgeries, and maxillofacial reconstructive procedures. In: Khojasteh A, Ayoub AF, Nadjmi N, editors. Emerging technologies in oral and maxillofacial surgery. Singapore: Springer; 2023. p. 287–328.
12. Mohammad-Rahimi H, Motamedian SR, Rohban MH, Krois J, Uribe SE, Mahmoudinia E, et al. Deep learning for caries detection: a systematic review. J Dent. 2022;122:104115. doi: 10.1016/j.jdent.2022.104115.
13. Mohammad-Rahimi H, Motamedian SR, Pirayesh Z, Haiat A, Zahedrozegar S, Mahmoudinia E, et al. Deep learning in periodontology and oral implantology: a scoping review. J Periodontal Res. 2022;57:942–951. doi: 10.1111/jre.13037.
14. Sadr S, Mohammad-Rahimi H, Motamedian SR, Zahedrozegar S, Motie P, Vinayahalingam S, et al. Deep learning for detection of periapical radiolucent lesions: a systematic review and meta-analysis of diagnostic test accuracy. J Endod. 2023;49:248–261.e3. doi: 10.1016/j.joen.2022.12.007.
15. Mohammad-Rahimi H, Vinayahalingam S, Mahmoudinia E, Soltani P, Bergé SJ, Krois J, et al. Super-resolution of dental panoramic radiographs using deep learning: a pilot study. Diagnostics (Basel). 2023;13:996. doi: 10.3390/diagnostics13050996.
16. Yang X, Chen Y, Yue X, Lin X, Zhang Q. Variational synthesis network for generating micro computed tomography from cone beam computed tomography. In: 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); Houston, TX, USA. IEEE; 2021. p. 1611–1614.
17. Nakao M, Imanishi K, Ueda N, Imai Y, Kirita T, Matsuda T. Regularized three-dimensional generative adversarial nets for unsupervised metal artifact reduction in head and neck CT images. IEEE Access. 2020;8:109453–109465.
18. Yang J, Huang T. Image super-resolution: historical overview and future challenges. In: Milanfar P, editor. Super-resolution imaging. Boca Raton: CRC Press; 2017. p. 1–34.
19. Moran MB, Faria MD, Giraldi GA, Bastos LF, Conci A. Using super-resolution generative adversarial network models and transfer learning to obtain high resolution digital periapical radiographs. Comput Biol Med. 2021;129:104139. doi: 10.1016/j.compbiomed.2020.104139.
20. Moran M, Faria M, Giraldi G, Bastos L, Conci A. Do radiographic assessments of periodontal bone loss improve with deep learning methods for enhanced image resolution? Sensors. 2021;21:2013. doi: 10.3390/s21062013.
21. Hwang JJ, Jung YH, Cho BH, Heo MS. Very deep super-resolution for efficient cone-beam computed tomographic image restoration. Imaging Sci Dent. 2020;50:331–337. doi: 10.5624/isd.2020.50.4.331.
22. Li W, Li Y, Liu X, Zheng XL, Gao SY, Huangfu HM, et al. Transfer learning-based super-resolution in panoramic models for predicting mandibular third molar extraction difficulty: a multi-center study. Med Data Min. 2023;6:20.
23. Schwendicke F, Singh T, Lee JH, Gaudin R, Chaurasia A, Wiegand T, et al. Artificial intelligence in dental research: checklist for authors, reviewers, readers. J Dent. 2021;107:103610.
24. Rytky SJ, Tiulpin A, Finnilä MA, Karhula SS, Sipola A, Kurttila V, et al. Clinical super-resolution computed tomography of bone microstructure: application in musculoskeletal and dental imaging. Ann Biomed Eng. 2024;52:1255–1269. doi: 10.1007/s10439-024-03450-y.
25. Choi K. Self-supervised projection denoising for low-dose cone-beam CT. Annu Int Conf IEEE Eng Med Biol Soc. 2021;2021:3459–3462. doi: 10.1109/EMBC46164.2021.9629859.
26. Zamir SW, Arora A, Khan S, Hayat M, Khan FS, Yang MH, et al. Learning enriched features for fast image restoration and enhancement. IEEE Trans Pattern Anal Mach Intell. 2023;45:1934–1948. doi: 10.1109/TPAMI.2022.3167175.
27. Choi H, Yun JP, Lee A, Han SS, Kim SW, Lee C. Deep learning synthesis of cone-beam computed tomography from zero echo time magnetic resonance imaging. Sci Rep. 2023;13:6031. doi: 10.1038/s41598-023-33288-8.
28. Ryu K, Lee C, Han Y, Pang S, Kim YH, Choi C, et al. Multi-planar 2.5D U-Net for image quality enhancement of dental cone-beam CT. PLoS One. 2023;18:e0285608. doi: 10.1371/journal.pone.0285608.
29. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–174.
30. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15:155–163. doi: 10.1016/j.jcm.2016.02.012.
31. Gaêta-Araujo H, Leite AF, Vasconcelos KF, Jacobs R. Two decades of research on CBCT imaging in DMFR - an appraisal of scientific evidence. Dentomaxillofac Radiol. 2021;50:20200367. doi: 10.1259/dmfr.20200367.
32. Zhang L, Wang T, Cao Y, Wang C, Tan B, Tang X, et al. In vivo detection of subtle vertical root fracture in endodontically treated teeth by cone-beam computed tomography. J Endod. 2019;45:856–862. doi: 10.1016/j.joen.2019.03.006.
33. Tolentino ES, Amoroso-Silva PA, Alcalde MP, Honório HM, Iwaki LC, Rubira-Bullen IR, et al. Limitation of diagnostic value of cone-beam CT in detecting apical root isthmuses. J Appl Oral Sci. 2020;28:e20190168. doi: 10.1590/1678-7757-2019-0168.
34. Ledig C, Theis L, Huszár F, Caballero J, Cunningham A, Acosta A, et al. Photo-realistic single image super-resolution using a generative adversarial network. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA. IEEE; 2017. p. 105–114.
35. Ponder SN, Benavides E, Kapila S, Hatch NE. Quantification of external root resorption by low- vs high-resolution cone-beam computed tomography and periapical radiography: a volumetric and linear analysis. Am J Orthod Dentofacial Orthop. 2013;143:77–91. doi: 10.1016/j.ajodo.2012.08.023.
36. Uysal S, Akcicek G, Yalcin ED, Tuncel B, Dural S. The influence of voxel size and artifact reduction on the detection of vertical root fracture in endodontically treated teeth. Acta Odontol Scand. 2021;79:354–358. doi: 10.1080/00016357.2020.1859611.
37. Farman AG. ALARA still applies. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 2005;100:395–397. doi: 10.1016/j.tripleo.2005.05.055.
