Journal of Dental Research. 2024 May 31;103(9):853–862. doi: 10.1177/00220345241255593

The Use of Artificial Intelligence in Endodontics

FC Setzer, J Li, AA Khan

Abstract

Endodontics is the dental specialty foremost concerned with diseases of the pulp and periradicular tissues. Clinicians often face patients with varying symptoms and must critically assess radiographic images in 2 and 3 dimensions, derive complex diagnoses, make treatment decisions, and deliver sophisticated treatment. Paired with low intra- and interobserver agreement for radiographic interpretation and variations in treatment outcome resulting from nonstandardized clinical techniques, there exists an unmet need for support in the form of artificial intelligence (AI), providing automated biomedical image analysis, decision support, and assistance during treatment. In the past decade, there has been a steady increase in AI studies in endodontics but limited clinical application. This review focuses on critically assessing the recent advancements in endodontic AI research for clinical applications, including the detection and diagnosis of endodontic pathologies such as periapical lesions, fractures, and resorptions, as well as clinical treatment outcome predictions. It discusses the benefits of AI-assisted diagnosis, treatment planning and execution, and future directions, including augmented reality and robotics. It critically reviews the limitations and challenges imposed by the nature of endodontic data sets, AI transparency and generalization, and potential ethical dilemmas. In the near future, AI will significantly affect the everyday endodontic workflow, education, and continuous learning.

Keywords: deep learning/machine learning, treatment planning, computer vision/convolutional neural networks, decision making, diagnostic systems, cracked teeth

Introduction

Artificial intelligence (AI), a term coined by John McCarthy, was originally defined as “the science and engineering of making intelligent machines.” In 1955, McCarthy and colleagues proposed a 2-mo, 10-man study based on the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (McCarthy et al. 2006). This landmark project evolved rapidly and continues to affect multiple aspects of our lives today. AI is now a powerful tool that is used to solve complex problems.

Machine learning (ML), a subset of AI, uses algorithms instead of explicit programming to analyze and learn from data and then make informed decisions (Choi et al. 2020). The approach of ML is to let computers, in effect, program themselves through experience. It starts with gathering and preparing the data on which the ML model is trained to find patterns or make predictions. A human programmer may tweak this model to yield more accurate results. New evaluation data are then used to test the accuracy of the ML model.
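To make this workflow concrete, the following minimal sketch illustrates the gather/train/tune/evaluate cycle using scikit-learn on synthetic data; the features, labels, and model choice are hypothetical and not drawn from any endodontic study.

```python
# Minimal sketch of the ML train/tune/evaluate workflow described above,
# using scikit-learn on synthetic data. All names and parameters are
# illustrative, not taken from any cited study.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                   # 500 cases, 8 hypothetical imaging features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hypothetical "lesion present" label

# 1. Gather and prepare data: hold out an evaluation set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 2. Train a model to find patterns in the training data.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# 3. Test the model's accuracy on data it has never seen.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```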

Deep learning (DL), itself a subdomain of ML, involves constructing neural networks with multiple layers to learn from data sets and make predictions (Choi et al. 2020). Artificial neural networks may contain thousands or millions of interconnected processing nodes organized into layers, loosely resembling the human brain. DL has become particularly valuable for the analysis of complex imagery data, such as biomedical images.
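As a minimal illustration of this layered structure, the following PyTorch sketch stacks three fully connected layers; the layer sizes and the 2-class output are arbitrary stand-ins, not the architecture of any cited study.

```python
# Minimal sketch of a multilayer ("deep") neural network in PyTorch.
# Layer sizes are arbitrary; this only illustrates the idea of stacked,
# interconnected layers of processing nodes described above.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # input layer -> first hidden layer
    nn.Linear(128, 128), nn.ReLU(),  # second hidden layer
    nn.Linear(128, 2),               # output layer: e.g., lesion vs. no lesion
)

x = torch.randn(1, 64)               # one input with 64 features
logits = model(x)                    # forward pass through all layers
print(logits.shape)                  # torch.Size([1, 2])
```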

AI has enabled breakthroughs in diverse health care fields, including radiology, cardiovascular surgery, and neurology (Zhu et al. 2022). The Food and Drug Administration (FDA) has approved 171 AI-based applications (Food and Drug Administration 2023). In endodontics, AI has the potential to improve the diagnosis, treatment, and prevention of pulpal and periapical disease. While AI in endodontics is focused mainly on radiologic diagnosis, AI-based applications are also being developed for clinical treatment and prediction of outcomes. Several reviews have described the application of AI in endodontics (Boreak 2020; Aminoshariae et al. 2021; Das 2022; Karobari et al. 2023; Khanagar et al. 2023; Patel et al. 2023; Ramezanzade et al. 2023; Sudeep et al. 2023). While some of these reviews addressed particular limitations and challenges (Aminoshariae et al. 2021; Das 2022; Patel et al. 2023), for example, regarding the need to validate the reliability, applicability, and cost-effectiveness of AI models for clinical implementation, the steady increase in endodontic AI publications warrants a more thorough critical review. This review will highlight the different AI-based technologies for endodontics and discuss some of the challenges, limitations, and ethical concerns.

Advancements in endodontic AI research for clinical applications include the detection and diagnosis of periapical lesions (PLs), fractures, resorptions, and root canal anatomy, as well as clinical treatment outcome prediction. Representative studies with methods and outcomes are presented in Table 1.

Table 1.

Representative Studies of Clinical AI Applications Detailing AI Methods and Outcomes.

Reference Application AI Method Outcome
Lesion detection
Ekert et al. 2019 PL detection Applied deep convolutional neural networks on 2D panoramic radiographs Sensitivity = 0.65 (0.03)
Specificity = 0.87 (0.04)
PPV = 0.49 (0.10)
NPV = 0.93 (0.03)
Setzer et al. 2020 PL detection Developed a DL algorithm for automated PL detection and simultaneous multilabel segmentation of 5 categories—lesion, tooth structure, bone, restorative materials, and background—from CBCTs Sensitivity = 0.93
Specificity = 0.88
PPV = 0.87
NPV = 0.93
Cumulative Dice index for all lesions = 0.67
Orhan et al. 2020 PL detection and volume calculation Applied deep convolutional neural networks on 3D (CBCT) images Sensitivity = 0.95
Precision = 0.89
Reliability = 0.928
Kirnbauer et al. 2022 PL detection Used a 2-step approach to CBCTs of the whole dental arch, first identifying a periradicular ROI and then segmenting and classifying periapical lesions Sensitivity = 0.97 ± 0.03
Specificity = 0.88 ± 0.04
Crack detection
Shah et al. 2018
Vicory et al. 2021
Sahu et al. 2023
Crack detection ML and imaging features extracted from 3D wavelets to detect cracks using high-resolution CBCT; studies were conducted using synthetic and in vitro data Simulated fracture data in in vivo CBCT scans via ROC analysis:
AUC = 0.97 (Shah et al.)
Simulated fracture data in in vivo CBCT scans via iterative 2D ML + DWD:
Separability P value = 0.026 (Vicory et al.)
Fracture data in ex vivo CBCT scans via iterative 2D ML + DWD: separability P value = 0.74 (Vicory et al.)
Fracture data in ex vivo scans via human observer: accuracy (micro-CT scans) = 0.55 to 0.70 (Sahu et al.), accuracy (CBCT scans) = 0.38 to 0.41 (Sahu et al.)
Fracture data in ex vivo scans via 3D ML: accuracy (micro-CT scans) = 0.88 (Sahu et al.), accuracy (CBCT scans) = 0.56 (Sahu et al.)
Resorption detection
Mohammad-Rahimi et al. 2024 ECR detection and differentiation from caries SSL models (SimCLR v2, MoCo v2, BYOL, DINO, NNCLR, SwAV, MSN, Barlow Twins, and SimSiam) trained on 2D radiographs For ECR, DINO achieved the highest mean accuracy (85.64 ± 4.56)
MoCo v2 exhibited the highest recall and F1 score (77.37% and 82.93%, respectively)
Assessment of anatomy
Chen et al. 2019 Detect and number teeth Faster R-CNN applied to periapical radiographs Precision >90%
Recall >90%
Intersection-over-union value between detected and ground truth boxes = 91%
Hiraiwa et al. 2019 Assess the root morphology of mandibular first molars DL performed using AlexNet and GoogleNet architectures implemented with the DIGITS library on the Caffe framework AlexNet
Accuracy = 87.4%; sensitivity = 77.3%; specificity = 97.1%; PPV = 96.3%; NPV = 81.8%
GoogleNet
Accuracy = 85.3%; sensitivity = 74.2%; specificity = 95.9%; PPV = 94.7%; NPV = 80.0%
Lin et al. 2021 Automated segmentation of teeth and pulp chambers U-Net applied to CBCT images after training on 30 samples of micro-CT data Dice index = 96.20% ± 0.58%
Precision rate = 97.31% ± 0.38%
Recall rate = 95.11% ± 0.97%
Sherwood et al. 2021 Segment and classify C-shape canal anatomy in second mandibular molars U-Net, residual U-Net, and Xception U-Net architectures were used for image segmentation and classification of C-shaped anatomies using CBCT Xception U-Net
Dice coefficients = 0.768 ± 0.0349
Sensitivity = 0.786 ± 0.0378
PPV = 80.0% ± 0.1098%
Residual U-Net
Dice coefficients = 0.736 ± 0.0297
Sensitivity = 0.746 ± 0.0391
PPV = 78.2% ± 0.1971%
U-Net
Dice coefficients = 0.660 ± 0.0354
Sensitivity = 0.720 ± 0.0495
PPV = 77.6% ± 0.1998%
Yuce et al. 2023 Detection of pulp chambers and calcifications Applied DL on bite-wing radiographs Detection of pulp chambers
Recall = 86.98%; precision = 98.94%; F1 score = 91.60%; accuracy = 86.18%
Detection of pulpal calcification
Recall = 86.39%; precision = 85.23%; specificity = 97.94%
F1 score = 85.49%; accuracy = 96.54%
Duman et al. 2023 Detection of MB2 YOLOv5 DL architecture Sensitivity = 0.92
Precision = 0.83
F1 score = 0.87
Treatment forecasting
Qu et al. 2022 Prognosis prediction for endodontic microsurgery Gradient boosting machine and random forest models Gradient boosting machine model:
Predictive accuracy = 0.80; sensitivity = 0.92; specificity = 0.71; PPV = 0.71; NPV = 0.92; F1 = 0.8; AUC = 0.88
Random forest model:
Accuracy = 0.80; sensitivity = 0.85; specificity = 0.76, PPV = 0.73; NPV = 0.87; F1 = 0.79; AUC = 0.83
Lee et al. 2023 Predict the outcome of endodontic treatment based on pretreatment radiographs 17-layered DCNN with a self-attention layer (Periapical Radiograph Explanatory System with Self-Attention Network [PRESSAN-17]) compared with a conventional DCNN without a self-attention layer (residual neural network [RESNET]-18) Comparing the mean accuracy of 5-fold validation of 2 models, PRESSAN-17 (67.0%) showed a significant difference from RESNET-18 (63.4%, P < .05)

2D, 2-dimensional; 3D, 3-dimensional; AI, artificial intelligence; AUC, area under the curve; CBCT, cone-beam computed tomography; CNN, convolutional neural network; DCNN, deep convolutional neural network; DL, deep learning; DWD, ; ECR, external cervical resorption; MB2, second mesiobuccal canal; ML, machine learning; NPV, negative predictive value; PL, periapical lesion; PPV, positive predictive value; ROC, receiver-operating characteristic; ROI, region of interest; SSL, self-supervised learning.
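As a point of reference for the metrics reported throughout Table 1, the following generic sketch shows how sensitivity, specificity, PPV, NPV, and F1 derive from a binary confusion matrix; the counts in the example are made up and not taken from any cited study.

```python
# How the diagnostic metrics in Table 1 relate to a binary confusion matrix.
# Generic sketch with made-up counts; not code from any cited study.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)          # recall: diseased sites correctly flagged
    specificity = tn / (tn + fp)          # healthy sites correctly cleared
    ppv = tp / (tp + fp)                  # precision: flagged sites truly diseased
    npv = tn / (tn + fn)                  # cleared sites truly healthy
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "PPV": ppv, "NPV": npv, "F1": f1}

print(diagnostic_metrics(tp=93, fp=14, tn=88, fn=7))  # hypothetical counts
```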

Significant work on AI applications in endodontics, especially for lesion detection, has been based on 2-dimensional (2D) radiography, such as periapical and panoramic radiographs. The introduction of 3-dimensional (3D) radiography in the form of cone-beam computed tomography (CBCT) has significantly improved the detection of PLs compared with 2D radiography. Nevertheless, CBCT interpretation by clinicians suffers from low inter- and intraobserver agreement, and sensitivity and specificity for PL detection in endodontically treated teeth are low (Parker et al. 2017). Thus, AI-based CBCT applications have become critical to eliminating observer bias. A systematic review and meta-analysis of the diagnostic test accuracy of DL algorithms on pooled data from 12 studies reported the sensitivity range for radiographic detection of PLs to be 0.65 to 0.96 (Sadr et al. 2023), comparable with lesion detection accuracy by human clinicians using CBCT. Setzer et al. (2020) used DL algorithms for automated PL detection and simultaneous multilabel segmentation of 5 categories—lesion, tooth structure, bone, restorative materials, and background—from CBCTs (Fig. 1). Multilabel segmentation approaches require significant time and labor to provide training and validation data sets based on expert clinician annotation.
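The Dice index reported for such segmentation studies (e.g., in Table 1) measures the overlap between predicted and ground truth label maps. A minimal per-label sketch follows, using arbitrary label IDs and toy volumes as stand-ins for real CBCT segmentations:

```python
# Sketch of the per-label Dice index used to score multilabel segmentations
# such as those in Setzer et al. (2020). Label IDs 0-4 are arbitrary stand-ins
# for the 5 categories (lesion, tooth structure, bone, restorative materials,
# background); the random volumes below are toy data only.
import numpy as np

def dice_per_label(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    """Dice = 2|P intersect T| / (|P| + |T|) for one label in two label maps."""
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

pred = np.random.default_rng(1).integers(0, 5, size=(64, 64, 64))
truth = np.random.default_rng(2).integers(0, 5, size=(64, 64, 64))
print([round(dice_per_label(pred, truth, k), 3) for k in range(5)])
```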

Figure 1.

Full 3-dimensional multilabel segmentation with periapical lesion detection of dental limited field-of-view cone-beam computed tomography (CBCT) of the maxillary left quadrant (canine through wisdom tooth). Periapical lesion on the first maxillary left molar. Comparison of clinician-labeled ground truth segmentation (Clinician) with fully automated segmentation of the identical areas with the AI platform (AI). Ground truth labeling techniques as described by Setzer et al. (2020). Labels: lesion, red; tooth structure, yellow; bone, blue; restorative materials, green; background, black. (A) Original CBCT slice, sagittal view. (B) Original CBCT slice, coronal view. (C) Three-dimensional rendering of the entire limited field-of-view volume, ground truth labeling (clinician). (D) Three-dimensional rendering of the entire limited field-of-view volume, fully automated labeling (AI). (E) Ground truth labeling of (A) sagittal view (clinician). (F) Fully automated labeling of (A) sagittal view (AI). (G) Ground truth labeling of (B) coronal view (clinician). (H) Fully automated labeling of (B) coronal view (AI). Unpublished case example, courtesy of Rui Qi Chen, Center for Machine Learning, Georgia Institute of Technology, Atlanta, Georgia.

Cracked teeth are the third most common cause of tooth loss in industrialized countries. The early detection of cracks, followed by appropriate interventions to prevent crack propagation, is an effective strategy to avert tooth loss. The development of objective and reliable AI-based methods to detect cracks is therefore imperative. Early attempts used convolutional neural network–based segmentation for individual teeth before applying fracture detection algorithms. However, this approach was not optimal for clinical data, as clinical scans are often acquired using different acquisition parameters, a problem well known in the ML community as domain shift. To address this problem in an unsupervised manner, Sahu et al. (2023) developed a novel 3D Fourier domain adaptation model for tooth segmentation from a source domain to an adapted target domain (i.e., 2 different CBCT scanners and acquisition protocols). Their experiments demonstrated that the proposed domain adaptation method can significantly improve segmentation performance in the target domain. This technology is currently being refined further to increase the predictive validity of CBCT in detecting cracks (Fig. 2; Table 1).
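To illustrate the general Fourier domain adaptation idea (here in 2D for brevity; Sahu et al. applied it in 3D, and this sketch is not their implementation), one can swap the low-frequency amplitude spectrum of a source image with that of a target image while keeping the source phase, so that source data adopt the target scanner's appearance:

```python
# Illustrative 2D Fourier domain adaptation: transplant the low-frequency
# amplitude spectrum of a target-domain scan into a source-domain scan while
# keeping the source phase. Sahu et al. (2023) work in 3D; this is a generic
# sketch on random stand-in arrays, not their code.
import numpy as np

def fda_transfer(source: np.ndarray, target: np.ndarray, beta: float = 0.05) -> np.ndarray:
    fs, ft = np.fft.fft2(source), np.fft.fft2(target)
    amp_s, phase_s = np.abs(fs), np.angle(fs)
    # Center the spectra so the low frequencies sit in the middle.
    amp_s, amp_t = np.fft.fftshift(amp_s), np.fft.fftshift(np.abs(ft))
    h, w = source.shape
    bh, bw = int(beta * h), int(beta * w)
    ch, cw = h // 2, w // 2
    # Swap a small low-frequency window of the amplitude spectrum.
    amp_s[ch - bh:ch + bh, cw - bw:cw + bw] = amp_t[ch - bh:ch + bh, cw - bw:cw + bw]
    amp_s = np.fft.ifftshift(amp_s)
    # Recombine swapped amplitude with the original source phase.
    return np.real(np.fft.ifft2(amp_s * np.exp(1j * phase_s)))

src = np.random.rand(128, 128)   # stand-ins for slices from two CBCT protocols
tgt = np.random.rand(128, 128)
print(fda_transfer(src, tgt).shape)  # (128, 128)
```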

Figure 2.

Crack detection. (A) Original tooth volume. A strong crack (left) and a subtle crack (right) are indicated by 2 arrows. (B) Probability map overlay. Values are interpolated from 0 (red) to 1 (purple). The larger crack shown in purple indicates a strong probability (value = 1), while the subtle crack is shown in green (value = 0.6). Modified from Sahu et al. 2023.

An innovative AI application developed by Lee et al. (2023) analyzed 2D preoperative radiographs to predict the outcome of endodontic treatment after 3 y (Table 1). Success was defined as a periapical index (PAI) score of 1 and failure as a PAI score of 4 or 5, or radiographic evidence of extraction (Fig. 3). A limitation of the study was that it was based solely on radiographs. Advanced algorithms should aim to include patient factors such as medical history or the quality of restorations to achieve higher accuracy.

Figure 3.

Visualization of 2 “test set” case examples. Each example shows the gray-scale preoperative periapical radiographs and corresponding Grad-CAM heat map of each feature or endodontic prediction. The red region represents a larger weight, which can be decoded by the color bar on the right. In this figure, 4 clinical features and the endodontic outcome prediction Grad-CAM heat map were superimposed on a preoperative preprocessed image. (A) Example of a mandibular right first premolar (failure). COD, coronal defect; FCR, full coverage restoration; PAR, periapical radiolucency; PRF, previous root filling. (B) Example of a mandibular left second premolar (success). Previously unpublished case examples from Lee et al. (2023). Courtesy Dr. Junghoon Lee, DDS, PhD, Microscope Center, Yonsei University College of Dentistry, Seoul, South Korea.

The review of text records by AI models has been applied for training purposes and to improve the detection and classification of pathologies. In endodontics, text recognition has not yet been harnessed. However, future applications would likely benefit from its incorporation, particularly as an AI review of patient records would be objective rather than subjective, as clinician review can be (Karobari et al. 2023). Text record implementation might benefit training (e.g., learning the differential diagnosis of lesions) and also improve outcome prediction. However, there are selection bias and privacy concerns with assessing large quantities of original patient data for AI training (Das 2022). Synthetic data, that is, artificially created patient records modeled on real patients, may provide an alternative solution.

Potential Benefits of Using AI in Endodontic Diagnosis and Treatment Planning

Integrating AI collaboratively for endodontic diagnosis and treatment planning may offer multiple benefits, including increased accuracy and efficiency (Aminoshariae et al. 2024). AI algorithms can analyze large amounts of patient data, including radiographic and clinical images, medical records, and clinical symptoms. Integrating more patient data points with knowledge gained from AI-driven prognostication and outcome studies will significantly improve clinical decision making and treatment planning using AI support. AI models will continue to learn and adapt as new information and feedback from clinicians become available, providing clinicians with recommendations and probabilities for different endodontic pathologies and aiding in accurate diagnosis. Over time, the algorithms will refine their diagnostic and treatment-planning capabilities, improving accuracy and efficiency. This may contribute to reduced costs and burdens on the health care system and improved integration of all stakeholders, including patients, providers, and insurance carriers (Schwendicke and Büttner 2023).

AI can provide objectivity in diagnosing endodontic pathologies, evaluating features and patterns that a human observer may not easily detect and reducing the subjective bias clinicians add to medical image analysis (Hosny et al. 2018). In situations of failed root canal therapy, a decision between nonsurgical and surgical retreatment has to be made if a patient opts for tooth retention. Histologically, odontogenic PLs may be granulomas, cysts, or abscesses. While abscesses can be diagnosed clinically by the presence of a sinus tract or clinical symptoms such as swelling, redness, or pain, cysts cannot be distinguished from granulomas clinically or radiographically. This bears on the potential outcome of a retreatment procedure, as there is general consensus that granulomas heal after adequate nonsurgical retreatment. However, this may be different for epithelial-lined cysts, which may require surgical excision (Setzer and Kratchman 2022). AI-supported differential diagnosis based on radiographic imaging to distinguish apical granulomas from cysts or other types of lesions may provide practitioners with valuable decision support to favor either nonsurgical or surgical retreatment options.

An earlier study by Simon et al. (2006) aimed to identify cysts versus granulomas based on gray-scale values in CBCT images and achieved 76.5% accuracy compared with the histology results. Building on this work, Okada et al. (2015) used a semiautomatic combination of graph-based random-walks segmentation with ML-based boosted classifiers and evaluated 28 CBCT data sets, including the original 17 CBCT data sets of Simon et al. (2006), to attempt automated differential diagnosis of periapical granulomas from cystic lesions. The results of Okada et al. (2015) coincided 94.1% with those obtained by Simon et al. (2006). Ver Berne et al. (2023) published a 2-step DL approach for the radiologic detection and classification of radicular cysts versus periapical granulomas and achieved a sensitivity of 1.00 (0.63 to 1.00), specificity of 0.95 (0.86 to 0.99), and area under the receiver-operating characteristic curve (AUC) of 0.97 for radicular cysts and a sensitivity of 0.77 (0.46 to 0.95), specificity of 1.00 (0.93 to 1.00), and AUC of 0.88 for periapical granulomas. Moreover, several studies investigated AI-based approaches to detect and classify ameloblastomas and odontogenic keratocysts, lesions that are important in the differential diagnosis of lesions of endodontic origin; these studies reported excellent results for lesions exceeding 10 mm in diameter (Chai et al. 2022).

AI could help to implement greater consistency and standardization, aid in lowering interobserver variability, and ensure that diagnostic and treatment decisions align with best practices, improving the quality and consistency of endodontic care across different settings and clinicians (Aminoshariae et al. 2024). AI decision support and expertise augmentation may provide clinicians with evidence-based recommendations, second opinions, and "red flag" warnings based on probabilities acquired from large data sets and information gained from clinical studies (Setzer et al. 2020). The use of AI could result in a reduction in unnecessary medical imaging or consultations. The predictive capabilities of AI can further provide risk assessment, helping practitioners identify cases with a high risk of failure and possibly avoid future complications by either modifying the overall treatment plan or suggesting different time points for needed interventions (Mallishery et al. 2020).

AI-Assisted Endodontic Treatment

AI-based applications may aid clinical root canal therapy in various ways. Automated radiographic image analysis, including CBCT imaging, should render the root canal system precisely, including the number and complexity of root canals as well as lengths, curvatures, and general morphology, such as canal fusions, divergences, or anatomical variations. As outlined above, AI-based technologies have already been developed for automated segmentation of the pulp cavity (Lin et al. 2021) and the root canal system (Wang et al. 2023). Specific root canal anatomies, such as the second mesiobuccal canal in maxillary molars (Duman et al. 2023) or C-shaped configurations in mandibular molars (Sherwood et al. 2021), have been successfully detected and classified using AI-based models. Similar advances have been made to detect pulp calcifications. For surgical endodontics, the automated detection and segmentation of tissues surrounding the operating site can reduce the risk of iatrogenic errors and injury to anatomical structures (e.g., the inferior alveolar or mental nerves) (Oliveira-Santos et al. 2023).

However, there is still a need to develop clinically approved applications, either as stand-alone programs or incorporated into existing image analysis applications, such as proprietary CBCT software. These programs may be cloud based or run on existing hardware. At the time of this report, only 2 dental AI companies had received FDA clearance to detect clinical conditions of specific endodontic interest in 2D bitewing and periapical radiographs, including tooth and root canal segmentation and PL detection. However, it is unlikely that AI support will stop at applications that aid the detection or classification of pathologies or raise red flags if a clinician is at risk of overlooking specific entities. AI support will include office management applications, review of drug regimens for patients, and treatment planning and decision support (Patel et al. 2023).

AI-based real-time support techniques are being developed for clinicians. Two cadaver studies evaluated the ability of an AI model to identify the minor apical foramen and determine the working length (Saghiri et al. 2012), the latter with 96% accuracy compared with experienced endodontists. These tools can aid in identifying the precise location of the anatomical constriction, allowing for more effective and efficient root canal treatment.

Augmented Reality and Robotics

Augmented reality (AR) is an interactive experience that combines and enhances the real world with computer-generated content. AR and AI are related in endodontics via dynamic guided navigation techniques. Imaging sensors for dynamic navigation or AR headset techniques feed data into computer vision and ML algorithms to track the position of a tool or its user and provide real-time orientation feedback. Three-dimensional positional tracking allowing for dynamic navigation has been used for locating calcified canals (Jain et al. 2020), for the retreatment of fiber posts, and for guided root-end resection in endodontic microsurgery.

Farronato et al. (2023) implemented a markerless AR system to drill preplanned, virtually guided access cavities and assessed the system's accuracy and efficiency using pre- and postoperative high-resolution CBCT scans. A similar in vitro study using AR for endodontic access cavity preparation was published by Faus-Matoses et al. (2022). Several studies have explored the use of AR technologies for surgical endodontics, evaluating its use for osteotomy and apicoectomy and comparing its accuracy to that of template-guided approaches in in vitro models (Remschmidt et al. 2023). Ultimately, this will also result in the development and application of robotic endodontic procedures. The first descriptions of robotic dental procedures are all related to autonomous implant placement (Cheng et al. 2021). Artificial intelligence is closely linked to robotics (Karobari et al. 2023). Endodontic robotic systems will support automated root-end surgery or root canal instrumentation by providing precise, controlled movements and haptic feedback based on real-time data from intraoral sensors.

To train these robots, the growing adoption of AR systems in endodontics lays a solid foundation by accumulating a wealth of data about surgical planning and real-time execution for individual patients, together with CBCT scans characterizing their 3D anatomy. This data-rich environment will pave the way for training endodontic robots to learn from human execution with supervised learning algorithms and to develop autonomy. To enhance the training process, generative AI can be employed to augment training data sets beyond real patient data, exposing robots to a wide range of scenarios and challenges (Bandi et al. 2023). In addition, reinforcement learning (RL) can be leveraged to enable robots to learn from their actions and improve over time (Hu et al. 2023). Integrating human oversight and feedback into the RL process can ensure a more robust learning experience. It is worth noting that robot training and development in endodontics is an emerging field, with the first case reports recently published (Isufi et al. 2024; Liu et al. 2024), and may borrow experience from other successful fields such as autonomous vehicles.
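For readers unfamiliar with RL, the following generic tabular Q-learning sketch shows how an agent improves from the outcomes of its own actions; the toy environment is purely didactic and unrelated to any published endodontic robot.

```python
# Generic tabular Q-learning, illustrating how an RL agent learns from the
# outcomes of its own actions. The toy states, actions, and rewards below
# are didactic stand-ins; no endodontic robot uses this code.
import numpy as np

n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(state: int, action: int):
    """Hypothetical environment: returns (next_state, reward)."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
    action = int(rng.integers(n_actions)) if rng.random() < epsilon else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning update: move the estimate toward reward + discounted future value.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))   # learned action values per state
```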

Challenges and Limitations of Current AI Applications in Endodontics

A variety of pitfalls exist for AI in health care (Schwendicke and Büttner 2023). AI-based applications (e.g., DL models) typically rely on large data sets for training, validation, and testing of the algorithms. Annotation and labeling of the training data are time and labor intensive, particularly if CBCT data are assessed. To date, there are no image repositories for AI training in dentistry, unlike in the medical field, where such repositories advance AI in medicine through transparent and reproducible collaborative research. Creating medical image repositories for dentistry and pooling resources from research institutions should be undertaken to gain larger sample sizes for AI training and validation (Huang et al. 2024).

To cope with limited sample sizes, approaches such as transfer learning, in which AI networks are pretrained on other data sets with larger data availability (Kora et al. 2022), and self-supervised learning, which allows a model to pretrain on and learn from unlabeled data (Shurrab and Duwairi 2022), have become commonplace in AI. These models may then be fine-tuned on different target data sets for specific tasks (Caron et al. 2021). Initially designed for natural language processing, transformer-based models have been adapted for various vision tasks. Transformers offer greater flexibility than architectures based on convolutional neural networks, as they use patches of input images and self-attention mechanisms to identify dependencies across the entire image (Dosovitskiy et al. 2020). However, larger data sets are required. To date, only 1 study in endodontics has attempted to use a transformer-based architecture, describing and classifying radiolucent lesions in panoramic radiographs (Silva et al. 2024). Another technique for overcoming small data sets is active learning, in which training data are assessed using uncertainty quantification and the samples with the highest uncertainty scores are labeled to train the AI (Huang et al. 2024).
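A minimal sketch of the transfer learning recipe described above follows, using torchvision's pretrained ResNet-18 as the backbone; the 2-class head and frozen-feature strategy are illustrative assumptions, not any cited study's configuration.

```python
# Sketch of transfer learning: start from a network pretrained on a large
# natural-image data set, replace its final layer, and fine-tune on a small
# task-specific data set. Layer names follow torchvision's ResNet-18; the
# 2-class task is hypothetical.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone
for param in model.parameters():
    param.requires_grad = False              # freeze pretrained feature extractor

model.fc = nn.Linear(model.fc.in_features, 2)  # new head: e.g., lesion vs. no lesion
# Only model.fc's parameters are now trainable; fine-tune with a standard
# optimizer and loss on the (small) target data set.
```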

If data availability is restricted to 1 or a few institutions, obtaining diverse and representative data from different populations can be challenging, potentially introducing biases and limiting the generalizability of AI models to other patient groups or settings (Khanagar et al. 2023). Variations in patient demographics, treatment protocols, and equipment may affect the applicability and performance of AI models; AI models should therefore be validated across diverse populations and settings. Overfitting may occur when an AI architecture gives accurate predictions for the training set but not for new data (Ramezanzade et al. 2023). In addition, domain shift may complicate matters for imaging applications (Sahu et al. 2023): different machines may use different parameters and may present challenges to AI algorithms. Therefore, it has been suggested that AI applications must be tested on different machines to verify whether prediction results achieved with different devices are reproducible. Challenges in interpreting radiographic images can also relate to the variations and anatomical complexities of the human dentition, noise and artifacts, image quality and variability, lack of standardization, limited training data, issues with ground truth labeling and interobserver variability, or the lack of clinical validity due to variations between different environments.

In particular, the lack of standardization in endodontics may affect AI studies. Practice philosophies vary among clinicians, depending on education and training, variations in treatment guidelines, and a general lack of standardized protocols, complicating the development of universally applicable and accepted AI models. AI-based applications should consider diverse treatment approaches, anatomical variations, and cultural factors. AI models, especially those developed using DL approaches, also raise concerns regarding interpretability and explainability. DL models often operate as black boxes, making it difficult to comprehend how the AI arrived at specific conclusions or recommendations, which may hinder acceptance of and trust in AI-based applications. AI-generated results must be accompanied by explanations and justifications so that clinicians can understand and validate the outcomes. Approaches such as transparency in algorithms (Shah 2018), interpretable visualization (Schwendicke et al. 2020), and explainable AI (Kundu 2021) were developed to ensure that AI's decisions and reasoning are understandable by health care professionals and patients. The human-in-the-loop approach (Uegami et al. 2022), in which AI recommendations are reviewed by health care professionals before being finalized, maintains the benefits of AI while ensuring human oversight. Legislation such as the Algorithmic Accountability Act can encourage responsible AI use. In addition, the medical and dental fields need to develop standards and certification processes specific to AI in health care.
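One widely used interpretable-visualization technique is Grad-CAM, which produced the heat maps shown in Figure 3. The following generic PyTorch sketch (not the cited authors' code) weights a convolutional layer's activations by the pooled gradients of the predicted class:

```python
# Minimal Grad-CAM sketch: capture a convolutional block's activations and
# gradients with hooks, then weight the activations by the gradients pooled
# over space. Generic PyTorch on a stand-in input; not any cited study's code.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()
acts, grads = {}, {}
layer = model.layer4  # last convolutional block

layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed radiograph
score = model(x)[0].max()              # score of the top predicted class
score.backward()                       # populate gradients at the hooked layer

weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # global-average-pool gradients
cam = torch.relu((weights * acts["a"]).sum(dim=1))   # weighted activation map
print(cam.shape)  # coarse heat map; upsample to image size for an overlay
```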

To date, AI in endodontics has been explored mostly for research purposes and lacks clinical validation. Next steps in AI research must ensure that AI algorithms can be safely employed, confirming that AI-based assessments and recommendations align with clinical reality and demonstrate consistent and verifiable results. For example, studies should compare clinician-based evaluations with results from automated applications in retrospective cohort studies or prospective clinical trials. Moreover, as ethical considerations arise with the introduction and adoption of new technologies, AI must follow the ethical principles of nonmaleficence, beneficence, justice, autonomy, and veracity. Challenges that arise need to be recognized and addressed by all stakeholders, including clinicians, patients, developers, and health care insurers, as the integration of AI may disrupt existing workflows (Rokhshad et al. 2023). Practitioners, researchers, and developers must ensure that governing ethical principles remain guaranteed, especially if AI-based decision support is to be followed.

Finally, we would like to point out that AI in endodontics shares broader issues with other AI applications in health care, such as data quality, biases, annotation accuracy, and keeping pace with technological advancements (Schwendicke et al. 2020). The effectiveness of AI models heavily depends on the quality of the data used for training and validation (Whang et al. 2023). High-quality, diverse data sets are essential to develop generalizable AI models. In addition, inherent biases in data can lead to skewed AI predictions, which may mislead clinical decision making. Addressing data biases and ensuring diverse representation are important for developing AI models that are fair and effective for all patient groups. Also, the accuracy of annotations in training data sets directly influences the performance of AI models. This is particularly challenging in complex medical imaging such as the CBCT used in endodontics. Reliable annotations require expert input, which is not always available. Robust or data-efficient AI algorithms that are less dependent on high-quality or high-volume annotations are desirable (Azizi et al. 2023). Furthermore, recent technological advancements such as transformer models have led to breakthroughs in natural language processing and are affecting image-based AI applications (Zhang et al. 2023). These models, through pretraining on large, multisource public data sets, have a powerful capability to discover complex patterns in medical imaging such as CBCT, which could enhance the capabilities of AI in endodontics.

AI in Endodontic Education

AI will impact endodontic education in multiple ways. A recent scoping review on the impact of AI on endodontic education (Aminoshariae et al. 2024) identified 10 areas of potential impact. Students will benefit from AI-assisted training, including radiographic interpretation, differential diagnoses and treatment options, evaluating risks and benefits, and making recommendations for endodontic referrals. AI can help calibrate researchers and educators based on existing standardized criteria, aid with administrative tasks, monitor student progress, or facilitate personalized education. As AI algorithms can continuously learn and adapt based on new data and feedback from clinicians, AI is ideally suited to contribute to continuous learning and improvement, helping clinicians enhance their diagnostic skills, refine treatment-planning abilities, and stay updated with the latest advancements.

Conclusions

AI applications will have a significant impact on the everyday endodontic practice of the future, transforming various aspects of patient care and practice management and potentially providing increased precision and efficiency. Integrating AI into daily endodontic practice for diagnosis and treatment planning involves data-driven decision making and interdisciplinary collaboration. This includes biomedical image analysis and interpretation and the development of personalized treatment plans incorporating patient data, historical outcomes, and evidence-based guidelines. In addition, it may encompass real-time guided procedures using augmented reality, improve workflow efficiency and quality assurance, foster research and knowledge synthesis, and aid continuing education for professional development.

Author Contributions

F.C. Setzer, J. Li, and A.A. Khan contributed to conception, design, literature search, and data interpretation and drafted and critically revised the manuscript. All authors gave their final approval and agree to be accountable for all aspects of the work.

Footnotes

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Dr. Asma Khan acknowledges funding through the National Institutes of Health (NIH) 5R44DE027574-03. Dr. Jing Li and Dr. Frank Setzer acknowledge funding through NIH 1R41DE031485-01.

References

1. Aminoshariae A, Kulild J, Nagendrababu V. 2021. Artificial intelligence in endodontics: current applications and future directions. J Endod. 47(9):1352–1357. doi: 10.1016/j.joen.2021.06.003
2. Aminoshariae A, Nosrat A, Nagendrababu V, Dianat O, Mohammad-Rahimi H, O'Keefe AW, Setzer FC. 2024. Artificial intelligence in endodontic education. J Endod. 50(5):562–578. doi: 10.1016/j.joen.2024.02.011
3. Azizi S, Culp L, Freyberg J, Mustafa B, Baur S, Kornblith S, Chen T, Tomasev N, Mitrović J, Strachan P, et al. 2023. Robust and data-efficient generalization of self-supervised machine learning for diagnostic imaging. Nat Biomed Eng. 7(6):756–779. doi: 10.1038/s41551-023-01049-7
4. Bandi A, Adapa PVSR, Kuchi YEVPK. 2023. The power of generative AI: a review of requirements, models, input–output formats, evaluation metrics, and challenges. Future Internet. 15(8):260. doi: 10.3390/fi15080260
5. Boreak N. 2020. Effectiveness of artificial intelligence applications designed for endodontic diagnosis, decision-making, and prediction of prognosis: a systematic review. J Contemp Dent Pract. 21(8):926–934. doi: 10.5005/jp-journals-10024-2894
6. Caron M, Touvron H, Misra I, Jégou H, Mairal J, Bojanowski P, Joulin A. 2021. Emerging properties in self-supervised vision transformers. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. p. 9650–9660. doi: 10.48550/arXiv.2104.14294
7. Chai ZK, Mao L, Chen H, Sun TG, Shen XM, Liu J, Sun ZJ. 2022. Improved diagnostic accuracy of ameloblastoma and odontogenic keratocyst on cone-beam CT by artificial intelligence. Front Oncol. 11:793417. doi: 10.3389/fonc.2021.793417
8. Chen Y, Liu L, Qiu S, Hu C, Wang L, Li Y, Tan X, Gao Y, Huang D. 2023. Application of real-time augmented reality-guided osteotomy and apex location in endodontic microsurgery: a surgical simulation study based on 3D-printed alveolar bone model. J Endod. 49(7):880–888. doi: 10.1016/j.joen.2023.05.011
9. Chen H, Zhang K, Lyu P, Li H, Zhang L, Wu J, Lee CH. 2019. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci Rep. 9(1):3840. doi: 10.1038/s41598-019-40414-y
10. Cheng KJ, Kan TS, Liu YF, Zhu WD, Zhu FD, Wang WB, Jiang XF, Dong XT. 2021. Accuracy of dental implant surgery with robotic position feedback and registration algorithm: an in-vitro study. Comput Biol Med. 129:104153. doi: 10.1016/j.compbiomed.2020.104153
11. Choi RY, Coyner AS, Kalpathy-Cramer J, Chiang MF, Campbell JP. 2020. Introduction to machine learning, neural networks, and deep learning. Transl Vis Sci Technol. 9(2):14. doi: 10.1167/tvst.9.2.14
12. Das S. 2022. Artificial intelligence in endodontics: a peek into the future. RGUHS J Dent Sci. 14(3):35–37.
13. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, et al. 2020. An image is worth 16x16 words: transformers for image recognition at scale. ArXiv. abs/2010.11929.
14. Duman ŞB, Çelik Özen D, Bayrakdar IŞ, Baydar O, Alhaija ESA, Helvacioğlu Yiğit D, Çelik Ö, Jagtap R, Pileggi R, Orhan K. 2023. Second mesiobuccal canal segmentation with YOLOv5 architecture using cone beam computed tomography images. Odontology. 112(2):552–561.
15. Ekert T, Krois J, Meinhold L, Elhennawy K, Emara R, Golla T, Schwendicke F. 2019. Deep learning for the radiographic detection of apical lesions. J Endod. 45(7):917–922.e5. doi: 10.1016/j.joen.2019.03.016
16. Farronato M, Torres A, Pedano MS, Jacobs R. 2023. Novel method for augmented reality guided endodontics: an in vitro study. J Dent. 132:104476. doi: 10.1016/j.jdent.2023.104476
17. Faus-Matoses V, Faus-Llácer V, Moradian T, Riad Deglow E, Ruiz-Sánchez C, Hamoud-Kharrat N, Zubizarreta-Macho Á, Faus-Matoses I. 2022. Accuracy of endodontic access cavities performed using an augmented reality appliance: an in vitro study. Int J Environ Res Public Health. 19(18):11167. doi: 10.3390/ijerph191811167
18. Food and Drug Administration. 2023. Artificial intelligence and machine learning (AI/ML)-enabled medical devices [accessed 2023 Nov 24]. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
19. Gao X, Xin X, Li Z, Zhang W. 2021. Predicting postoperative pain following root canal treatment by using artificial neural network evaluation. Sci Rep. 11(1):17243. doi: 10.1038/s41598-021-96777-8
20. Gerlach NL, Meijer GJ, Kroon DJ, Bronkhorst EM, Bergé SJ, Maal TJ. 2014. Evaluation of the potential of automatic segmentation of the mandibular canal using cone-beam computed tomography. Br J Oral Maxillofac Surg. 52(9):838–844. doi: 10.1016/j.bjoms.2014.07.253
21. Hiraiwa T, Ariji Y, Fukuda M, Kise Y, Nakata K, Katsumata A, Fujita H, Ariji E. 2019. A deep-learning artificial intelligence system for assessment of root morphology of the mandibular first molar on panoramic radiography. Dentomaxillofac Radiol. 48(3):20180218. doi: 10.1259/dmfr.20180218
22. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. 2018. Artificial intelligence in radiology. Nat Rev Cancer. 18(8):500–510. doi: 10.1038/s41568-018-0016-5
23. Hu M, Zhang J, Matkovic L, Liu T, Yang X. 2023. Reinforcement learning in medical image analysis: concepts, applications, challenges, and future directions. J Appl Clin Med Phys. 24(2):e13898. doi: 10.1002/acm2.13898
24. Huang J, Farpour N, Yang BJ, Mupparapu M, Lure F, Li J, Yan H, Setzer FC. 2024. Uncertainty-based active learning by Bayesian U-net for multi-label cone-beam CT segmentation. J Endod. 50(2):220–228. doi: 10.1016/j.joen.2023.11.002
25. Isufi A, Hsu TY, Chogle S. 2024. Robot-assisted and haptic-guided endodontic surgery: a case report. J Endod. 50(4):533–539.e1. doi: 10.1016/j.joen.2024.01.012
26. Jain SD, Carrico CK, Bermanis I. 2020. 3-dimensional accuracy of dynamic navigation technology in locating calcified canals. J Endod. 46(6):839–845. doi: 10.1016/j.joen.2020.03.014
27. Karobari MI, Adil AH, Basheer SN, Murugesan S, Savadamoorthi KS, Mustafa M, Abdulwahed A, Almokhatieb AA. 2023. Evaluation of the diagnostic and prognostic accuracy of artificial intelligence in endodontic dentistry: a comprehensive review of literature. Comput Math Methods Med. 2023:7049360. doi: 10.1155/2023/7049360
28. Khanagar SB, Alfadley A, Alfouzan K, Awawdeh M, Alaqla A, Jamleh A. 2023. Developments and performance of artificial intelligence models designed for application in endodontics: a systematic review. Diagnostics (Basel). 13(3):414. doi: 10.3390/diagnostics13030414
29. Kirnbauer B, Hadzic A, Jakse N, Bischof H, Stern D. 2022. Automatic detection of periapical osteolytic lesions on cone-beam computed tomography using deep convolutional neuronal networks. J Endod. 48(11):1434–1440. doi: 10.1016/j.joen.2022.07.013
30. Kora P, Ooi CP, Faust O, Raghavendra U, Gudigar A, Chan WY, Meenakshi K, Swaraja K, Plawiak P, Acharya UR. 2022. Transfer learning techniques for medical image analysis: a review. Biocybern Biomed Eng. 42(1):79–107. doi: 10.1016/j.bbe.2021.11.004
31. Kundu S. 2021. AI in medicine must be explainable. Nat Med. 27(8):1328. doi: 10.1038/s41591-021-01461-z
32. Lee J, Seo H, Choi YJ, Lee C, Kim S, Lee YS, Lee S, Kim E. 2023. An endodontic forecasting model based on the analysis of preoperative dental radiographs: a pilot study on an endodontic predictive deep neural network. J Endod. 49(6):710–719. doi: 10.1016/j.joen.2023.03.015
33. Lin X, Fu Y, Ren G, Yang X, Duan W, Chen Y, Zhang Q. 2021. Micro-computed tomography-guided artificial intelligence for pulp cavity and tooth segmentation on cone-beam computed tomography. J Endod. 47(12):1933–1941. doi: 10.1016/j.joen.2021.09.001
34. Liu C, Liu X, Wang X, Liu Y, Bai Y, Bai S, Zhao Y. 2024. Endodontic microsurgery with an autonomous robotic system: a clinical report. J Endod. Epub ahead of print 17 Feb 2024. doi: 10.1016/j.joen.2024.02.005
35. Mallishery S, Chhatpar P, Banga KS, Shah T, Gupta P. 2020. The precision of case difficulty and referral decisions: an innovative automated approach. Clin Oral Investig. 24(6):1909–1915. doi: 10.1007/s00784-019-03050-4
36. McCarthy J, Minsky ML, Rochester N, Shannon CE. 2006. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag. 27(4):12 [accessed 2023 Nov 3]. https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1904
37. Mohammad-Rahimi H, Dianat O, Abbasi R, Zahedrozegar S, Ashkan A, Motamedian SR, Rohban MH, Nosrat A. 2024. Artificial intelligence for detection of external cervical resorption using label-efficient self-supervised learning method. J Endod. 50(2):144–153.e2. doi: 10.1016/j.joen.2023.11.004
38. Okada K, Rysavy S, Flores A, Linguraru MG. 2015. Noninvasive differential diagnosis of dental periapical lesions in cone-beam CT scans. Med Phys. 42(4):1653–1665. doi: 10.1118/1.4914418
39. Oliveira-Santos N, Jacobs R, Picoli FF, Lahoud P, Niclaes L, Groppo FC. 2023. Automated segmentation of the mandibular canal and its anterior loop by deep learning. Sci Rep. 13(1):10819. doi: 10.1038/s41598-023-37798-3
40. Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. 2020. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int Endod J. 53(5):680–689. doi: 10.1111/iej.13265
41. Parker JM, Mol A, Rivera EM, Tawil PZ. 2017. Cone-beam computed tomography uses in clinical endodontics: observer variability in detecting periapical lesions. J Endod. 43(2):184–187. doi: 10.1016/j.joen.2016.10.007
42. Patel A, Shah D, Patel Z, Makwana HS, Rajput A. 2023. A comprehensive review on AI in endodontics. Int J Res Appl Sci Eng Technol. 11(7):165–170. doi: 10.22214/ijraset.2023.54565
43. Qu Y, Lin Z, Yang Z, Lin H, Huang X, Gu L. 2022. Machine learning models for prognosis prediction in endodontic microsurgery. J Dent. 118:103947. doi: 10.1016/j.jdent.2022.103947
44. Ramezanzade S, Laurentiu T, Bakhshandah A, Ibragimov B, Kvist T, Bjørndal L. 2023. The efficiency of artificial intelligence methods for finding radiographic features in different endodontic treatments—a systematic review. Acta Odontol Scand. 81(6):422–435. doi: 10.1080/00016357.2022.2158929
45. Remschmidt B, Rieder M, Gsaxner C, Gaessler J, Payer M, Wallner J. 2023. Augmented reality-guided apicoectomy based on maxillofacial CBCT scans. Diagnostics (Basel). 13(19):3037. doi: 10.3390/diagnostics13193037
46. Rokhshad R, Ducret M, Chaurasia A, Karteva T, Radenkovic M, Roganovic J, Hamdan M, Mohammad-Rahimi H, Krois J, Lahoud P, et al. 2023. Ethical considerations on artificial intelligence in dentistry: a framework and checklist. J Dent. 135:104593. doi: 10.1016/j.jdent.2023.104593
47. Sadr S, Mohammad-Rahimi H, Motamedian SR, Zahedrozegar S, Motie P, Vinayahalingam S, Dianat O, Nosrat A. 2023. Deep learning for detection of periapical radiolucent lesions: a systematic review and meta-analysis of diagnostic test accuracy. J Endod. 49(3):248–261.e3. doi: 10.1016/j.joen.2022.12.007
48. Saghiri MA, Garcia-Godoy F, Gutmann JL, Lotfi M, Asgar K. 2012. The reliability of artificial neural network in locating minor apical foramen: a cadaver study. J Endod. 38(8):1130–1134. doi: 10.1016/j.joen.2012.05.004
49. Sahu P, Fishbaugh J, Vicory J, Khan A, Paniagua B. 2023. 3D Fourier domain adaptation for improving CBCT tooth segmentation under scanner parameter shift. In: 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), Cartagena, Colombia. p. 1–5. doi: 10.1109/ISBI53787.2023.10230669
50. Schwendicke F, Büttner M. 2023. Artificial intelligence: advances and pitfalls. Br Dent J. 234(10):749–750. doi: 10.1038/s41415-023-5855-0
51. Schwendicke F, Samek W, Krois J. 2020. Artificial intelligence in dentistry: chances and challenges. J Dent Res. 99(7):769–774. doi: 10.1177/0022034520915714
52. Setzer FC, Kratchman SI. 2022. Present status and future directions: surgical endodontics. Int Endod J. 55(suppl 4):1020–1058. doi: 10.1111/iej.13783
53. Setzer FC, Shi KJ, Zhang Z, Yan H, Yoon H, Mupparapu M, Li J. 2020. Artificial intelligence for the computer-aided detection of periapical lesions in cone-beam computed tomographic images. J Endod. 46(7):987–993. doi: 10.1016/j.joen.2020.03.025
54. Shah H. 2018. Algorithmic accountability. Philos Trans A Math Phys Eng Sci. 376(2128):20170362. doi: 10.1098/rsta.2017.0362
55. Sherwood AA, Sherwood AI, Setzer FC, K SD, Shamili JV, John C, Schwendicke F. 2021. A deep learning approach to segment and classify C-shaped canal morphologies in mandibular second molars using cone-beam computed tomography. J Endod. 47(12):1907–1916. doi: 10.1016/j.joen.2021.09.009
56. Shurrab S, Duwairi R. 2022. Self-supervised learning methods and applications in medical imaging analysis: a survey. PeerJ Comput Sci. 8:e1045. doi: 10.7717/peerj-cs.1045
57. Silva TP, Andrade-Bortoletto MFS, Ocampo TSC, Alencar-Palha C, Bornstein MM, Oliveira-Santos C, Oliveira ML. 2024. Performance of a commercially available generative pre-trained transformer (GPT) in describing radiolucent lesions in panoramic radiographs and establishing differential diagnoses. Clin Oral Investig. 28(3):204. doi: 10.1007/s00784-024-05587-5
58. Simon JH, Enciso R, Malfaz JM, Roges R, Bailey-Perry M, Patel A. 2006. Differential diagnosis of large periapical lesions using cone-beam computed tomography measurements and biopsy. J Endod. 32(9):833–837. doi: 10.1016/j.joen.2006.03.008
59. Sudeep P, Gehlot PM, Murali B, Ballagere Mariswamy A. 2023. Artificial intelligence in endodontics: a narrative review. J Int Oral Health. 15(2):134–141. doi: 10.4103/jioh.jioh_257_22
60. Uegami W, Bychkov A, Ozasa M, Uehara K, Kataoka K, Johkoh T, Kondoh Y, Sakanashi H, Fukuoka J. 2022. MIXTURE of human expertise and deep learning-developing an explainable model for predicting pathological diagnosis and survival in patients with interstitial lung disease. Mod Pathol. 35(8):1083–1091. doi: 10.1038/s41379-022-01025-7
61. Ver Berne J, Saadi SB, Politis C, Jacobs R. 2023. A deep learning approach for radiological detection and classification of radicular cysts and periapical granulomas. J Dent. 135:104581. doi: 10.1016/j.jdent.2023.104581
62. Vicory J, Chandradevan R, Hernandez-Cerdan P, Huang WA, Fox D, Qdais LA, McCormick M, Mol A, Walters R, Marron JS, Geha H, Khan A, Paniagua B. 2021. Dental microfracture detection using wavelet features and machine learning. Proc SPIE Int Soc Opt Eng. 11596:115961R. doi: 10.1117/12.2580744
63. Wang Y, Xia W, Yan Z, Zhao L, Bian X, Liu C, Qi Z, Zhang S, Tang Z. 2023. Root canal treatment planning by automatic tooth and root canal segmentation in dental CBCT with deep multi-task feature learning. Med Image Anal. 85:102750. doi: 10.1016/j.media.2023.102750
64. Whang SE, Roh Y, Song H, Lee JG. 2023. Data collection and quality challenges in deep learning: a data-centric AI perspective. VLDB J. 32:791–813. doi: 10.48550/arXiv.2112.06409
65. Yuce F, Öziç MÜ, Tassoker M. 2023. Detection of pulpal calcifications on bite-wing radiographs using deep learning. Clin Oral Investig. 27(6):2679–2689. doi: 10.1007/s00784-022-04839-6
66. Zhang J, Li C, Yin Y, Zhang J, Grzegorzek M. 2023. Applications of artificial neural networks in microorganism image analysis: a comprehensive review from conventional multilayer perceptron to popular convolutional neural network and potential visual transformer. Artif Intell Rev. 56(2):1013–1070. doi: 10.1007/s10462-022-10192-7
67. Zhu S, Gilbert M, Chetty I, Siddiqui F. 2022. The 2021 landscape of FDA-approved artificial intelligence/machine learning-enabled medical devices: an analysis of the characteristics and intended use. Int J Med Inform. 165:104828. doi: 10.1016/j.ijmedinf.2022.104828
