Abstract
Background
Deep-learning networks are promising techniques in dentistry. This study developed and validated a deep-learning network, You Only Look Once (YOLO) v5, for the automatic evaluation of root-canal filling quality on periapical radiographs.
Methods
YOLOv5 was developed using 1,008 periapical radiographs (training set: 806; validation set: 101; testing set: 101) from one center and validated on an external data set of 500 periapical radiographs from another center. We compared the network’s performance with that of an inexperienced endodontist in terms of recall, precision, F1 scores, and Kappa values, using the results from specialists as the gold standard. We also compared the evaluation durations of the manual method and the network.
Results
On the external test data set, the YOLOv5 network performed better than the inexperienced endodontist in terms of overall comprehensive performance. The F1 scores of the network for correct and incorrect filling were 92.05% and 82.93%, respectively. The network outperformed the inexperienced endodontist in all tooth regions, especially the more difficult-to-assess upper molar regions. Notably, the YOLOv5 network evaluated images 150–220 times faster than manual evaluation.
Conclusions
The YOLOv5 deep learning network provided clinicians with a new, relatively accurate and efficient auxiliary tool for assessing the radiological quality of root canal fillings, enhancing work efficiency with large sample sizes. However, its use should be complemented by clinical expertise for accurate evaluations.
Keywords: Deep learning, Periapical disease, Root canal filling
Background
Root canal therapy is used to relieve toothache and restore dental function in millions of people annually [1]. Filling quality is important for root canal success [2]. Dentists assess this quality based on filling parameters in radiographs; these parameters include (1) the distance between the end of the root canal filling material and the radiographic apex in the radiograph and (2) the density of the filling material (i.e., no voids remain) [3–7].
Despite the significance of accurately assessing root-canal filling quality in predicting treatment outcomes, current radiographic evaluation methods based on periapical radiographs have several limitations. First, the current assessment relies heavily on the individual experience of clinicians to classify fillings as adequate, short, or over-extended [8]. Consequently, variations in personal experience introduce inevitable inter-observer variability, rendering the process highly subjective. Second, with complex tooth structures, periapical radiographs may suffer from image overlap and distortion, which pose significant challenges for clinicians interpreting the radiographs [9] and prolong the assessment time. Third, when evaluating a large sample of periapical radiographic images, relying solely on manual assessment is both time-consuming and labor-intensive.
Therefore, there is a clinical need for a tool that can assist doctors in assessing periapical radiographic images. Such a tool should reduce inter-observer variability, improving the consistency of assessment results among different doctors, and should improve the accuracy of assessments of complex images and large sample sizes, ultimately leading to more accurate and consistent determination of patient outcomes. With the accelerated iteration of computer processors and artificial intelligence, deep-learning networks have achieved remarkable results in many areas of dentistry [10–12], providing algorithms for tooth segmentation [13], jaw segmentation [14], dental recognition [15], and dental symptom detection [16]. However, almost no algorithms have been proposed for the automatic evaluation of root canal treatment quality. The present study therefore explored the application of deep-learning networks to the automatic evaluation of the radiographic quality of root canal fillings, based on the characteristics of periapical radiographs, to provide a reference for clinical application. We hypothesized that YOLOv5, a deep learning model, can provide clinicians with an accurate and consistent auxiliary tool for the radiological assessment of root canal fillings, enhancing work efficiency, especially when managing large sample sizes.
Methods
Data sets
The study protocol was reviewed by the appropriate Institutional Review Board and complied with the principles of the Declaration of Helsinki. Each participant signed a detailed informed consent form. In addition, this study conformed to the 2021 Minimum Information about Clinical Artificial Intelligence Modelling protocol.
We randomly collected 1,508 apical peripheral radiographs from two scanning centers (Centers 1 and 2). The detailed characteristics of this data set, including patient age and sex, and the positioning and equipment used for radiographing periapical areas, are provided in Table 1. The collected periapical radiographic data were anonymized to ensure patient privacy.
Table 1.
Basic information of the periapical radiograph data set
| Center | Male / Female | Age (years) | Anterior teeth: maxillary/mandibular | Premolars: maxillary/mandibular | Molars: maxillary/mandibular |
|---|---|---|---|---|---|
| Center 1 (Internal set) | 421/587 | 15–64 | 157/71 | 245/126 | 171/238 |
| Center 2 (External set) | 236/264 | 16–56 | 65/34 | 106/77 | 77/141 |
| Total | 657/851 | 15–64 | 222/105 | 351/203 | 248/379 |
Manufacturer's information for the periapical radiographs: (1) Center 1: Sirona Dental Systems GmbH; (2) Center 2: Planmeca ProX™ (Finland)
The inclusion criteria were: (1) fully formed apical image on periapical radiograph (adults/children); (2) parallel projection method used (film parallel to tooth long axis); (3) clear imaging on periapical radiograph; (4) no post-and-core image on periapical radiograph; (5) periapical radiograph showed full apex; (6) adult or pediatric patients had no systemic disease prior to treatment; and (7) no mishaps occurred in the apical region, including fractured files, canal deviation, or zipping.
The exclusion criteria were: (1) open apical image on the periapical radiograph (adults/children); (2) non-parallel projection method used (film not parallel to tooth long axis); (3) blurry imaging on periapical radiograph (overexposed or underexposed); (4) post-and-core image present on periapical radiograph; (5) periapical radiograph lacked full apex; (6) adult or pediatric patients had systemic disease prior to treatment; and (7) mishaps occurred in the apical region, including fractured files, canal deviation, or zipping.
Data distribution and annotation
The data set was divided into internal and external data sets. The 1,008 periapical radiographs obtained from Center 1 constituted the internal data set used to develop our deep-learning network; these images were randomly divided into training, validation, and internal test sets (806, 101, and 101 periapical radiographs, respectively; Fig. 1). The external data set was used to evaluate the generalizability of the network on data not seen during training: the 500 periapical radiographs collected from Center 2 were used to test the network independently.
Fig. 1.
Source databases of periapical radiographs
To establish accurate labels for quality assessment of the internal periapical radiographs, two endodontists, each with 10 years of experience, manually annotated the data set from Center 1 and then reviewed and revised the annotations to obtain high-quality final labels. Specifically, each tooth apex with a root-canal filling was first marked with a rectangular box on the periapical radiograph, and the filling in each box was then classified into one of two categories: correct or incorrect filling. We defined correct filling as a distance of 0–2 mm between the end of the root canal filling material and the radiographic apex, together with sufficient filling material density. Incorrect filling included both short and over-extended fillings, defined as filling deficiency and excess, respectively (Fig. 2).
Fig. 2.

Software annotation of radiographs
During the training sessions, we only focused on the apical portion of the dental filling material, an area that is essential for evaluating the quality of the root canal filling. The training did not address the middle or coronal portions of the canal filling or procedural errors such as stripping or perforation, which were beyond the scope of our training at this stage.
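In YOLO-family detectors, annotations of this kind are typically stored as one text line per box: a class index followed by the box center and size, normalized to the image dimensions. As an illustrative sketch (the class indices 0 = correct and 1 = incorrect, and all pixel coordinates, are assumptions, not the authors' published format):

```python
def to_yolo_line(cls_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box (x1, y1, x2, y2) into a YOLO-format label
    line: 'class x_center y_center width height', all values normalized."""
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Hypothetical 'correct filling' box (class 0) on a 640x640 radiograph
print(to_yolo_line(0, 100, 200, 300, 400, 640, 640))
# → 0 0.312500 0.468750 0.312500 0.312500
```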
Workflow of the you only look once (YOLO) v5 network method
To achieve an automatic and accurate evaluation of root canal treatment, we used the single-stage target detection model YOLO [17]. YOLOv5 is a single-stage target detection model that localizes and classifies target regions simultaneously. The YOLOv5 network, used here as the evaluation model for root-canal filling quality, is divided into four modules: input, skeleton network (backbone), feature fusion (neck), and output (prediction) (Fig. 3).
Fig. 3.
YOLOv5 network structure for radiograph evaluation
Design of the filling quality assessment network
After a radiograph was input, it passed through these four modules; the output comprised the classification results for the filling quality detected in the apical area.
The input module used Mosaic data augmentation and adaptive anchor box computation. Mosaic data augmentation concatenates images by random scaling, cropping, and arrangement to expand the data set and improve the robustness of the network model. The adaptive anchor box computation automatically calculates the optimal anchor sizes based on the image sizes in the training data.
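The core of Mosaic augmentation is tiling four training images into one composite, so each training sample contains context from several radiographs. A minimal toy sketch (real Mosaic in YOLOv5 also randomly scales and crops each tile and remaps the box labels accordingly):

```python
def mosaic_2x2(imgs):
    """Tile four equally sized images (2-D lists of pixel values) into one
    composite: imgs[0..3] become the top-left, top-right, bottom-left,
    and bottom-right quadrants."""
    top = [a + b for a, b in zip(imgs[0], imgs[1])]
    bottom = [a + b for a, b in zip(imgs[2], imgs[3])]
    return top + bottom

# Four toy 2x2 "images" filled with their index value
tiles = [[[i] * 2 for _ in range(2)] for i in range(4)]
m = mosaic_2x2(tiles)  # a single 4x4 composite
```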
The backbone extracts features from the input image and mainly includes two important structures: Focus and the cross-stage partial network (CSPNet). Focus reduces the number of downsampling parameters by introducing slice operations, improving the speed of forward and backward propagation. In CSPNet [18], the feature map of the base layer is divided into two parts that are then merged through a cross-stage hierarchy, reducing computation while maintaining accuracy.
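The Focus slice operation samples the image at its four even/odd pixel phases, halving the spatial resolution while quadrupling the channel count, so no pixel information is lost before the first convolution. A stdlib-only sketch on a single-channel image (YOLOv5 implements this on batched tensors):

```python
def focus_slice(img):
    """Split a 2-D image into four half-resolution phase channels,
    mimicking the YOLOv5 Focus slice-and-concatenate step."""
    return [
        [row[0::2] for row in img[0::2]],  # even rows, even cols
        [row[0::2] for row in img[1::2]],  # odd rows, even cols
        [row[1::2] for row in img[0::2]],  # even rows, odd cols
        [row[1::2] for row in img[1::2]],  # odd rows, odd cols
    ]

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy image
ch = focus_slice(img)  # four 2x2 channels; ch[0] == [[0, 2], [8, 10]]
```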
The neck performs multiscale feature fusion on the feature maps and mainly includes two structures: the feature pyramid network (FPN) and the path aggregation network (PAN). The FPN transmits semantic features from top to bottom, while the PAN [19] transmits localization features from bottom to top; with the introduction of a CSP structure, the feature-fusion ability of the network is strengthened.
The prediction (output) module included the loss function and non-maximum suppression. CIOU_Loss was used as the loss function, comprising a classification loss and a bounding box regression loss. The bounding box regression loss considers both the distance between bounding box center points and the width–height ratio of the boxes, yielding faster and more accurate box regression. Non-maximum suppression was used to screen the predicted boxes and filter out highly overlapping detections.
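Non-maximum suppression can be sketched as a greedy filter: keep the highest-scoring box, then discard any remaining box that overlaps it too strongly. This is a generic NMS sketch (the 0.5 threshold is an assumption, not the study's setting; YOLOv5's own implementation is tensorized):

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def nms(boxes, scores, thr=0.5):
    """Greedy non-maximum suppression: visit boxes by descending score,
    keeping a box only if it overlaps no already-kept box by more than thr."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thr for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of one apex, plus one distinct detection
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # → [0, 2]
```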
Model training
The network model for assessing periapical radiograph filling quality was implemented on the PyTorch YOLOv5 platform [20]. The Adam optimizer was used to minimize the loss function, with an initial learning rate of 0.001, a batch size of 32, and an image size of 640 pixels. At the end of each training cycle, we computed the loss function on the validation data set to monitor convergence. If the performance on the validation data set did not improve within 30 training cycles (epochs), we considered the training process to have converged and stopped it. The models were trained on a single Nvidia RTX3090 GPU. For data augmentation, we employed horizontal flipping, vertical flipping, random cropping, Mosaic splicing, random rotation, and random scale transformation to improve the generalization performance and robustness of model inference.
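The 30-epoch patience rule can be sketched as a small early-stopping check (the function name and list-based interface are illustrative, not from the study's code):

```python
def should_stop(val_losses, patience=30):
    """Early-stopping rule: stop once the best validation loss has not
    improved within the last `patience` epochs (patience=30 per the text)."""
    if len(val_losses) <= patience:
        return False
    best = min(val_losses)
    return best not in val_losses[-patience:]

# Validation loss plateaus after epoch 0; training stops once 30 stale
# epochs have accumulated.
print(should_stop([1.0] + [2.0] * 30))  # → True
```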
Evaluation of the internal test set model
The evaluation indicators used in this study included recall, precision, and F1 score. We defined a correct detection as an overlap between the predicted rectangular box and the labeled rectangular box (intersection over union) of > 50% together with a correct classification; otherwise, we considered the determination a misdetection. Recall, precision, and F1 were calculated as follows:

Recall = TP / (TP + FN)

Precision = TP / (TP + FP)

F1 = (2 × Precision × Recall) / (Precision + Recall)
where TP represents the positive target correctly detected as positive, such as accurately identifying a correct filling. FP indicates a negative target incorrectly detected as positive, such as a tooth root that is not properly filled but incorrectly detected as properly filled. FN indicates a positive target falsely detected as negative, such as a tooth root that is properly filled but not detected. As the final statistics revealed scant data pertaining to under- or overfilling, we combined the sample sizes of these data into a single “incorrect filling” category.
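These definitions can be computed directly from the TP/FP/FN counts. A minimal sketch (the counts below are hypothetical, chosen only to illustrate the arithmetic):

```python
def detection_metrics(tp, fp, fn):
    """Recall, precision, and F1 from detection counts, following the
    definitions in the text (TP requires IoU > 50% and a correct class)."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

# Hypothetical counts for illustration
r, p, f1 = detection_metrics(tp=80, fp=10, fn=20)
print(f"recall={r:.2%} precision={p:.2%} F1={f1:.2%}")
# → recall=80.00% precision=88.89% F1=84.21%
```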
Evaluation of the external test set model
To assess the clinical applicability of the YOLOv5 network, we conducted a pilot study comparing its performance with that of an inexperienced endodontist, based on the external test data set obtained from Center 2. First, we invited two endodontists (a professional with 10 years of experience and a relatively inexperienced one with less than 3 years of experience) to annotate the Center 2 data set together; neither had been involved in annotating the internal data set. During the labeling process, we recorded the marking times of the two endodontists and the YOLOv5 network; every 100 pictures were timed once, for a total of five timings, to compare the durations of manual and YOLOv5 network evaluations. In addition, using the labeling results of the professional endodontist as the gold standard, we compared the recall, precision, and F1 scores of the YOLOv5 network and of the inexperienced endodontist against this standard. Furthermore, we performed these assessments separately for each dental region.
Statistical analysis
We compared the recall, precision, F1 scores, and Kappa values of the YOLOv5 network and the inexperienced endodontist using cross tables, with the professional endodontist's results as the rows and those of the YOLOv5 network or the inexperienced endodontist as the columns. We used the Kruskal–Wallis test to compare evaluation times among the YOLOv5 network and the inexperienced and professional endodontists. All statistical analyses were performed using IBM SPSS Statistics for Windows, version 26.0 (IBM Corp., Armonk, NY, USA). Statistical significance was set at P < 0.05.
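Cohen's kappa, reported alongside the cross tables, measures agreement beyond chance between the gold standard and a rater. A minimal sketch from a 2 × 2 cross table (the counts below are hypothetical):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square cross table (rows: gold standard,
    columns: rater): (observed agreement - chance agreement) / (1 - chance)."""
    n = sum(sum(row) for row in confusion)
    po = sum(confusion[i][i] for i in range(len(confusion))) / n
    pe = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical cross table: 40 agreements per class, 10 + 10 disagreements
print(round(cohens_kappa([[40, 10], [10, 40]]), 2))  # → 0.6
```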
Results
Comparison between the YOLOv5 network and professional endodontists
The YOLOv5 network demonstrated a recall of 91.02% for correct fillings on the internal test set, indicating a strong ability to identify correct fillings with a relatively low omission rate (Table 2).
Table 2.
Performance of YOLOv5 network on the internal test set
| Root canal filling state | Recall | Precision | F1-score |
|---|---|---|---|
| Correct filling | 91.02% | 86.86% | 88.89% |
| Incorrect filling | 78.38% | 85.29% | 81.69% |
Comparisons between the YOLOv5 network and inexperienced endodontist
Based on the comparison results from the external test set, the inexperienced endodontist's F1 score for assessing correct fillings was 87.80%, but their comprehensive performance in assessing incorrect fillings left room for improvement. By contrast, the F1 scores of the YOLOv5 network for correct and incorrect fillings were 92.05% and 82.93%, respectively, both higher than those of the inexperienced endodontist. Thus, in this pilot study, the comprehensive performance of the YOLOv5 network in evaluating correct and incorrect filling states was better than that of the inexperienced endodontist (Table 3).
Table 3.
Performance of YOLOv5 network and inexperienced endodontist on the external test set
| | Root canal filling state | Recall (%) | Precision (%) | F1-score (%) | Kappa |
|---|---|---|---|---|---|
| YOLOv5 network | Correct filling | 99.51 | 85.63 | 92.05 | 0.75 |
| | Incorrect filling | 71.43 | 98.84 | 82.93 | |
| Inexperienced endodontist | Correct filling | 99.51 | 78.55 | 87.80 | 0.59 |
| | Incorrect filling | 53.50 | 98.45 | 69.33 | |
Comparisons of evaluation ability between the YOLOv5 network and inexperienced endodontist according to dental region
In this pilot study, the inexperienced endodontist showed a low ability to identify incorrect fillings, especially in the upper molars, where the F1 score was 58.49%. In contrast, the YOLOv5 network identified incorrect fillings more accurately across regions, with F1 scores of 71.79% for upper molars, 93.75% for lower anterior teeth, 93.20% for lower premolars, and 84.21% for lower molars. Nevertheless, the network's F1 score for incorrect filling remained relatively low in certain areas, particularly the upper molar region, while being higher for the lower anterior teeth and lower premolars (Table 4).
Table 4.
Performance of YOLOv5 network and inexperienced endodontist in each tooth position region on the external test set
| | Upper anterior teeth (%) | Upper premolars (%) | Upper molars (%) | Lower anterior teeth (%) | Lower premolars (%) | Lower molars (%) |
|---|---|---|---|---|---|---|
| YOLOv5 network | | | | | | |
| Recall of correct filling | 100.00 | 100.00 | 100.00 | 100.00 | 97.87 | 99.09 |
| Recall of incorrect filling | 66.04 | 76.19 | 56.00 | 88.23 | 90.00 | 73.95 |
| Precision of correct filling | 77.50 | 87.70 | 82.72 | 89.47 | 93.87 | 87.55 |
| Precision of incorrect filling | 100.00 | 100.00 | 100.00 | 100.00 | 96.42 | 97.78 |
| F1-score of correct filling | 87.32 | 93.45 | 90.54 | 94.44 | 92.83 | 92.96 |
| F1-score of incorrect filling | 79.55 | 86.49 | 71.79 | 93.75 | 93.20 | 84.21 |
| Inexperienced endodontist | | | | | | |
| Recall of correct filling | 100.00 | 100.00 | 100.00 | 100.00 | 92.59 | 98.64 |
| Recall of incorrect filling | 64.15 | 52.38 | 41.33 | 47.05 | 63.33 | 55.46 |
| Precision of correct filling | 76.54 | 78.10 | 78.22 | 65.38 | 81.03 | 80.37 |
| Precision of incorrect filling | 100.00 | 100.00 | 100.00 | 100.00 | 88.24 | 95.65 |
| F1-score of correct filling | 86.71 | 87.70 | 87.78 | 79.07 | 86.42 | 88.57 |
| F1-score of incorrect filling | 78.16 | 68.75 | 58.49 | 63.99 | 73.74 | 70.21 |
Label time comparison
The results of the Kruskal–Wallis test showed a significant difference in the duration of evaluations between the YOLOv5 network and professional and inexperienced endodontists (p = 0.002). Specifically, the YOLOv5 network required an average of 0.16 min to mark 100 periapical radiographs, whereas inexperienced and professional endodontists required averages of 33.23 min and 26.39 min, respectively (Fig. 4).
Fig. 4.

Comparison of annotation durations between YOLOv5 network and dentists
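The 150–220× speed-up quoted earlier follows directly from these mean times per 100 radiographs; a quick arithmetic check:

```python
# Mean annotation times per 100 periapical radiographs, from the text
network_min = 0.16
professional_min = 26.39
inexperienced_min = 33.23

# Speed-up factors of the network over each endodontist
print(round(professional_min / network_min))   # → 165
print(round(inexperienced_min / network_min))  # → 208
```

Both ratios fall within the 150–220× range reported in the Abstract.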
Discussion
Root canal filling is a pivotal aspect of endodontic treatment, and its quality directly influences treatment outcomes [21]. Proper filling can significantly reduce the incidence of periapical lesions [22]; conversely, inadequate filling increases the risk of developing periapical periodontitis [23]. Inadequate fillings include both short and over-extended fillings, and the likelihood of treatment failure due to over-extended filling is four times higher than when the root canal is underfilled [24].
Periapical radiographs were used for imaging in this study. At present, the main imaging methods for evaluating root-canal filling are periapical radiography and cone beam computed tomography (CBCT). The main limitation of periapical radiographs is that they are two-dimensional images, whereas root-canal filling is a three-dimensional process; consequently, image overlap and distortion are likely to occur [9], leading to inaccurate assessment. However, compared with CBCT, periapical radiographs have advantages that cannot be ignored: for patients, they are cheaper and deliver a lower radiation dose [25]; for doctors, they are simpler and faster to take, and the images of the periapical area are clearer. Periapical radiographs therefore remain the most widely used tool for clinicians to evaluate root canal filling. Although CBCT is affected by beam hardening and scattering, which can degrade image clarity [26], it offers high-spatial-resolution imaging in the sagittal, coronal, and axial planes, effectively mitigating the image overlap and distortion commonly seen in periapical radiographs. In future research, we will further explore the role of deep learning in CBCT periapical imaging.
In the present study, the YOLOv5 network demonstrated three notable advantages. First, it effectively detected both correct and incorrect fillings on the external test set, achieving F1 scores—the harmonic mean of recall and precision—of 92.05% and 82.93%, respectively. Specifically, in detecting correct fillings, the network achieved a recall of 99.51%, indicating that it detected almost all instances of correct fillings with very few misses; its precision showed that 85.63% of its positive detections were true positives, with some false positives. The resulting F1 score of 92.05% reflects the excellent performance of the YOLOv5 network in detecting correct fillings. In detecting incorrect fillings, the network's recall was 71.43%: it identified a substantial portion of incorrect fillings, but some actual incorrect fillings were missed. However, its precision was as high as 98.84%, indicating that the majority of its positive detections were true positives, with only a few false positives. Despite the relatively lower recall, the F1 score of 82.93% demonstrates the good performance of the YOLOv5 network in detecting incorrect fillings. Compared with previously reported deep learning tools in radiology, such as the convolutional neural networks ResNet 50 [27], ResNeXt 50 [28], and GCNet 50 [29], among which ResNet 50 performed best with an F1 score of 53.34% on its test data set, the YOLOv5 network in this study performed markedly better. Transformer networks such as the Anatomy-Guided Multi-Branch Transformer (AGMB-Transformer) [30] exhibited even stronger performance, achieving an F1 score of 90.48%.
Compared to the AGMB-Transformer, the YOLOv5 network requires further enhancement in detecting incorrect fillings, while its performance in detecting correct fillings is comparable to that of the AGMB-Transformer network.
Second, in this study, the YOLOv5 network surpassed the inexperienced endodontist in overall comprehensive performance. Based on the results for each tooth position region, the performance of the YOLOv5 network was better than that of the inexperienced endodontist, especially in the more challenging upper premolar and upper molar regions. Notably, the comparison between the YOLOv5 network and an inexperienced endodontist in this article was a pilot study, so the validity of the results needs strengthening. In the future, we will recruit 2–3 additional inexperienced endodontists and optimize the capabilities of the YOLOv5 network, which will enhance the reliability of the comparison results.
Furthermore, the YOLOv5 network is highly efficient and is not limited by factors such as differences in individual experience or fatigue, requiring an average of < 0.1 s to process a periapical radiograph, whereas the professional and inexperienced endodontists required 150–220 times longer to complete the assessment task. These three advantages demonstrate that the YOLOv5 network can help clinicians assess the radiological quality of root canal fillings relatively accurately and rapidly across large samples, greatly improving clinical efficiency.
To understand the limitations of our model (Fig. 5), we collated two typical errors. The first occurred when the YOLOv5 network failed to identify an incorrect filling in a root canal that was not compactly filled in the non-apical part of the canal; our analysis suggests this happened because the network tended to rely on local features. The second type of error concerned the regional assessment results in the external test set: when evaluating root-canal filling quality, particularly in the maxillary posterior teeth, the F1 score of the YOLOv5 network was low compared with the anterior and premolar regions. The reasons are as follows: first, the anatomical structure of the posterior maxillary teeth is more complex than that of the anterior and premolar regions; in addition, capturing high-quality periapical radiographs of the maxillary posterior teeth is challenging, leading to variable image quality.
Fig. 5.
Limitations of model
Therefore, the YOLOv5 network still holds significant potential for improvement. In the future, we will incorporate samples from a broader range of centers to enhance data diversity and improve model stability; previous research has indicated that models trained on data from one institution show a notable decline in classification performance when applied to data from a different institution, making multicenter studies imperative [31, 32]. In addition, to overcome the YOLOv5 network's tendency to focus on local features, we can utilize multiscale feature-fusion techniques such as the FPN [33] to bolster its ability to detect objects of various sizes. To address the model's inadequate detection in the posterior tooth region, we can adopt lighter networks; for instance, ShuffleNetV2 is a lightweight convolutional neural network that maintains high performance while reducing model complexity through channel shuffle operations and bottleneck structures [34]. We can also optimize YOLOv5's loss function to measure more accurately the differences between predicted and ground-truth bounding boxes, for example by replacing traditional localization losses such as CIOU with the Normalized Wasserstein Distance (NWD) [35]. Incorporating attention mechanisms such as SimAM [35] can also enhance YOLOv5's feature extraction. We believe that these optimizations will equip us to tackle more complex future scenarios, not only overcoming current limitations but also addressing more intricate cases, such as apical areas with file separation, canal deflection, or zipping of the canal wall, as well as situations where the periapical involvement is obscure or post-and-core structures are present.
Conclusions
We applied a deep learning network, YOLOv5, to assess the radiological quality of root canal fillings on periapical radiographs. The YOLOv5 network achieved F1 scores—the harmonic mean of recall and precision—of 92.05% for correctly filled cases and 82.93% for incorrectly filled cases, demonstrating its capability to differentiate between correct and incorrect filling states. In addition, the network required an average of only 0.16 min to annotate 100 periapical radiographs. These findings suggest that YOLOv5 could serve as a useful and relatively efficient auxiliary tool for assessing the radiological quality of root canal fillings, particularly with large sample sizes. However, clinicians should use it as a supplementary aid, as knowledge and experience remain vital to ensure accurate evaluations.
Acknowledgements
Not applicable.
Abbreviations
- YOLO
You only look once
- CSPNet
Cross-stage partial network
- PAN
Path aggregation network
- FPN
Feature pyramid network
- NWD
Normalized Wasserstein Distance
- CIOU
Complete-intersection-over-union
Author contributions
L.J. and B.D. contributed to validation and writing – original draft. Z.X., H.B., P.D., and X.Z. contributed to resources. Z.X., H.B., and P.D. contributed to software. Z.Z. and Y.P. contributed to data curation. Z.Z. and Y.L. contributed to investigation. Y.P. and Z.L. contributed to formal analysis. Y.L. and Z.L. contributed to methodology. X.F. contributed to project administration. X.F. and F.H. contributed to supervision. F.H. and X.Z. contributed to conceptualization, funding acquisition, and writing – review & editing.
Funding
This work was supported by the Key Fields Special Project of Guangdong Universities (2021ZDZX1024), the Guangdong Province Health Appropriate Technology Promotion Project: Science and Education Letter [2023] No. 10–24, the Guangdong Province Health Appropriate Technology Promotion Project: Science and Education Letter [2023] No. 10–25, the Research and Cultivation Program of Stomatological Hospital, Southern Medical University (PY2021017) and the Foshan Engineering Technology Research Center for Oral Functional Occlusion Reconstruction of Periodontal Disease (GCJSYJZX2024-008).
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
The study protocol underwent review by both the Ethics Committee of Shunde Hospital, Southern Medical University (KYLS20240404), and the Ethics Committee of Stomatological Hospital, Southern Medical University (NYKQ-EC-[2024]12), ensuring adherence to the principles of the Declaration of Helsinki. Furthermore, each participant provided their informed consent by signing a comprehensive consent form.
Consent for publication
Not applicable.
Competing interests
Zineng Xu, Hailong Bai, and Peng Ding are associated with DeepCare, a company that focuses on artificial intelligence applications in dentistry. This study and its findings are not related to their affiliations. The authors are solely responsible for the contents of this study.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Liuli Jin and Bingran Du contributed equally to this work as first authors.
Contributor Information
Fei Hu, Email: 1106978372@qq.com.
Xueyang Zhang, Email: zxy123@smu.edu.cn.
References
- 1.Fransson H, Dawson V. Tooth survival after endodontic treatment. Int Endodontic J. 2023;56(2):140–53. [DOI] [PubMed] [Google Scholar]
- 2.Saunders WP, Saunders EM, Sadiq J, Cruickshank E. Technical standard of root canal treatment in an adult Scottish sub-population. Br Dent J. 1997;182:382–6. [DOI] [PubMed] [Google Scholar]
- 3.Burke FM, Lynch CD, Ní Ríordáin R, Hannigan A. Technical quality of root canal fillings performed in a dental school and the associated retention of root-filled teeth: a clinical follow-up study over a 5-year period. J Oral Rehabil. 2009;36:508–15. [DOI] [PubMed] [Google Scholar]
- 4.Balto H, Al Khalifah Sh, Al Mugairin S, Al Deeb M, Al-Madi E. Technical quality of root fillings performed by undergraduate students in Saudi Arabia. Int Endod J. 2010;43:292–300. [DOI] [PubMed] [Google Scholar]
- 5.Barrieshi-Nusair KM, Al-Omari MA, Al-Hiyasat AS. Radiographic technical quality of root canal treatment performed by dental students at the dental teaching center in Jordan. J Dent. 2004;32:301–7. [DOI] [PubMed] [Google Scholar]
- 6.Dugas NN, Lawrence HP, Teplitsky PE, Pharoah MJ, Friedman S. Periapical health and treatment quality assessment of root-filled teeth in two Canadian populations. Int Endod J. 2003;36:181–92. [DOI] [PubMed] [Google Scholar]
- 7.Lupi-Pegurier L, Bertrand MF, Muller-Bolla M, Rocca JP, Bolla M. Periapical status, prevalence and quality of endodontic treatment in an adult French population. Int Endod J. 2002;35:690–7. [DOI] [PubMed] [Google Scholar]
- 8.Field J, Gutmann JL, Solomon ES, Rakusin H. A clinical radiographic retrospective assessment of the success rate of single-visit root canal treatment. Int Endod J. 2004;37:70–82. [DOI] [PubMed] [Google Scholar]
- 9.Yapp KE, Brennan P, Ekpo E. Endodontic disease detection: digital periapical radiography versus cone-beam computed tomography-a systematic review. J Med Imaging (Bellingham). 2021;8: 041205. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Schwendicke F, Samek W, Krois J. Artificial intelligence in dentistry: chances and challenges. J Dent Res. 2020;99:769–74. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Schwendicke F, Golla T, Dreher M, Krois J. Convolutional neural networks for dental image diagnostics: a scoping review. J Dent. 2019;91: 103226. [DOI] [PubMed] [Google Scholar]
- 12.LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–44. [DOI] [PubMed] [Google Scholar]
- 13.Duan W, Chen Y, Zhang Q, Lin X, Yang X. Refined tooth and pulp segmentation using U-Net in CBCT image. Dentomaxillofac Radiol. 2021;50:20200251. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Machado LF, Watanabe PC, Rodrigues GA, Junior LO. Deep learning for automatic mandible segmentation on dental panoramic x-ray images. Biomed Phys Eng Express. 2023. 10.1088/2057-1976/acb7f6. [DOI] [PubMed] [Google Scholar]
- 15.Chandrashekar G, AlQarni S, Bumann EE, Lee Y. Collaborative deep learning model for tooth segmentation and identification using panoramic radiographs. Comput Biol Med. 2022;148: 105829. [DOI] [PubMed] [Google Scholar]
- 16.Çelik B, Savaştaer EF, Kaya HI, Çelik ME. The role of deep learning for periapical lesion detection on panoramic radiographs. Dento Maxillo Fac Radiol. 2023;52:20230118. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. p. 779-88.
- 18.Wang CY, Liao HY, Wu YH, Chen PY, Hsieh JW, Yeh IH. CSPNet: a new backbone that can enhance learning capability of CNN. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops; 2020. p. 390–1.
- 19.Liu S, Qi L, Qin H, Shi J, Jia J. Path aggregation network for instance segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2018. p. 8759-68.
- 20.Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, et al. Pytorch: an imperative style, high-performance deep learning library. Adv Neural Inf Process Syst. 2019;32:1. [Google Scholar]
- 21.Balto K. Root-filled teeth with adequate restorations and root canal treatment have better treatment outcomes. Evid Based Dent. 2011;12:72–3. [DOI] [PubMed] [Google Scholar]
- 22.Alves Dos Santos GN, Faria-E-Silva AL, Ribeiro VL, Pelozo LL, Candemil AP, Oliveira ML, et al. Is the quality of root canal filling obtained by cone-beam computed tomography associated with periapical lesions? A systematic review and meta-analysis. Clin Oral Investig. 2022;26:5105–16. [DOI] [PubMed] [Google Scholar]
- 23.Segura-Egea JJ, Jiménez-Pinzón A, Poyato-Ferrera M, Velasco-Ortega E, Ríos-Santos JV. Periapical status and quality of root fillings and coronal restorations in an adult Spanish population. Int Endod J. 2004;37:525–30. [DOI] [PubMed] [Google Scholar]
- 24.Swartz DB, Skidmore AE Jr, Griffin JA. Twenty years of endodontic success and failure. J Endod. 1983;9:198–202. [DOI] [PubMed] [Google Scholar]
- 25.Meirinhos J, Martins JNR, Pereira B, Baruwa A, Gouveia J, Quaresma SA, et al. Prevalence of apical periodontitis and its association with previous root canal treatment, root canal filling length and type of coronal restoration–a cross-sectional study. Int Endod J. 2020;53:573–84. [DOI] [PubMed] [Google Scholar]
- 26.Quaresma SA, Da Costa RP, Ferreira Petean IB, Silva-Sousa AC, Mazzi-Chaves JF, Ginjeira A, et al. Root canal treatment of severely calcified teeth with use of cone-beam computed tomography as an intraoperative resource. Iran Endod J. 2022;17:39–47. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.He K, et al. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. p. 770–8.
- 28.Xie S, et al. Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2017. p. 1492–500.
- 29.Cao Y, et al. GCNet: non-local networks meet squeeze-excitation networks and beyond. In: Proceedings of the IEEE/CVF international conference on computer vision workshops; 2019. p. 1971–80.
- 30.Li Y, Zeng G, Zhang Y, Wang J, Jin Q, Sun L, Zhang Q, Lian Q, Qian G, Xia N, Peng R, Tang K, Wang S, Wang Y. AGMB-transformer: anatomy-guided multi-branch transformer network for automated evaluation of root canal therapy. IEEE J Biomed Health Inform. 2022;26(4):1684–95. 10.1109/JBHI.2021.3129245. [DOI] [PubMed] [Google Scholar]
- 31.Albadawy EA, Saha A, Mazurowski MA. Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing. Med Phys. 2018;45:1150–8. [DOI] [PubMed] [Google Scholar]
- 32.Balachandar N, Chang K, Kalpathy-Cramer J, Rubin DL. Accounting for data variability in multi-institutional distributed deep learning for medical imaging. J Am Med Inform Assoc. 2020;27:700–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Li X, Wang Q, Yang X, Wang K, Zhang H. Track fastener defect detection model based on improved yolov5s. Sensors (Basel). 2023;23:6457. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Zhao J, Du C, Li Y, Mudhsh M, Guo D, Fan Y, et al. YOLO-Granada: a lightweight attentioned Yolo for pomegranates fruit detection. Sci Rep. 2024;14:16848. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Yu H, Chen J, Yu P, Feng D. A lightweight defect detection algorithm for escalator steps. Sci Rep. 2024;14:23830. [DOI] [PMC free article] [PubMed] [Google Scholar]