Abstract
Background
Intraoperative preservation of parathyroid glands (PGs) remains a significant challenge in thyroidectomy. Recently, deep learning has demonstrated considerable potential in medical applications. We propose a novel intraoperative method for PG identification.
Methods
We developed a localization subnet based on YOLOX and a novel semantic segmentation model, Trans-U-HRNet; the combined framework was termed PG-AI. The dataset comprised 976 images from 121 patients undergoing open thyroidectomy, with images from 101 patients randomly split 8:2 for training and internal validation. PG detection was quantified using PG-AI, and its performance was compared visually with near-infrared autofluorescence (NIRAF) imaging and with assessments by surgeons of varying experience levels.
Results
PG-AI achieved a precision of 91.1% and a recall of 86.5% on the internal validation set. In the visualization analysis, its recognition rates were 88.7% and 85.0% on the internal and external validation sets, respectively. PG-AI showed 72.1% agreement with NIRAF imaging, and the two methods combined identified all PGs. In external validation, PG-AI significantly outperformed junior surgeons in recognition rate (p = 0.004).
Conclusion
PG-AI generated accurate segmentation masks of PGs in real-time intraoperative images, providing reliable visual guidance to surgeons during identification.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12893-026-03590-z.
Keywords: Artificial intelligence, Deep learning, Near-infrared autofluorescence imaging, Parathyroid gland, Segment, Thyroidectomy
Introduction
The global incidence of thyroid cancer has risen steadily [1]. In China, the incidence reached 46.61 per 100,000 people in 2022, making it the third most common malignancy nationwide [2]. While conventional open thyroidectomy remains the primary treatment for thyroid carcinoma [3], preservation of the parathyroid glands (PGs) is a critical challenge. The procedure carries a risk of PG injury, potentially leading to postoperative hypocalcemia and related complications. Reported incidences of temporary and permanent postoperative hypoparathyroidism range from 5.94% to 67.69% and from 0% to 20%, respectively [4].
The small size, anatomical variability, and visual similarity of PGs to surrounding tissues make intraoperative identification particularly difficult. Visual inspection remains the primary method for PG identification, although its accuracy correlates strongly with surgical experience [5]. Recently, several technologies have emerged for PG identification, including nano-carbon negative imaging, the immune colloidal gold technique (ICGT), and near-infrared autofluorescence (NIRAF) imaging [6–8]. However, these approaches increase patient costs and have limitations, such as prolonged operative times and occasional false-positive results.
Advances in deep learning facilitated novel approaches for PG identification and segmentation. Wang et al. proposed a deep learning method for automatic PG identification during endoscopic thyroid surgery [9]. Avci et al. applied object detection to identify PGs in NIRAF images [10], while Liu et al. introduced a dual-branch network for PG localization and segmentation in ultrasound images [11]. These developments highlighted the potential of artificial intelligence (AI) to overcome current limitations in PG identification.
Despite these advances, basic research on AI applications for tissue identification and segmentation in open neck surgery remains limited. Distinguishing critical structures from complex visual backgrounds, such as PGs in thyroidectomy or rare tissues in submandibular surgery [12, 13], remains a key challenge. PG identification is especially difficult during open thyroidectomy, where adjacent tissues such as lymph nodes and adipose tissue bear greater visual similarity to PGs than in magnified endoscopic views. In this study, we developed a novel deep learning method comprising a localization subnet based on YOLOX and a semantic segmentation model, Trans-U-HRNet, which together achieve precise PG boundary delineation and robust small-object segmentation in surgical images from open thyroidectomy. The model was designed to enhance intraoperative recognition accuracy and provide real-time visual guidance to support surgical decision-making.
Methods
Data collection
The study cohort included a consecutive series of 121 patients who underwent thyroidectomy at Shanghai Sixth People’s Hospital from August 2022 to December 2023. The study was approved by the Human Subjects Ethics Committee of Shanghai Sixth People’s Hospital (Approval No: 2022-KY-178 (K)) and conducted in accordance with the Helsinki Declaration of 1975, as revised in 2013. Written informed consent was obtained from all patients.
Surgical procedure
During thyroidectomy, intraoperative images containing identified PGs were captured with a high-resolution digital camera positioned perpendicular to the surgical field at a distance of 15 cm. The assistant captured 5–10 photos per case (Fig. 1) after thyroid capsule dissection and gland removal. These images were used for training and testing. All PGs were labeled in Labelme by a senior surgeon with more than 20 years of experience, following the ICGT methodology, which has demonstrated diagnostic accuracy comparable to frozen-section examination [7].
Fig. 1.

The data acquisition process during surgery. The assistant used a smartphone to capture images of PGs
Data analysis
A total of 976 images with identified PGs from 121 patients were used to train and test the instance segmentation network. Of these, 816 images from 101 patients were randomly split 8:2 into a training set and an internal validation set. Additionally, 160 images from 20 patients were collected as an independent external validation set.
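The 8:2 random split can be sketched as below. This is illustrative only: the text specifies the ratio but not the splitting procedure, and the function name is hypothetical.

```python
import random

def split_images(image_paths, train_frac=0.8, seed=0):
    """Shuffle and split a list of image paths into training and
    internal-validation subsets at the given ratio."""
    paths = sorted(image_paths)   # deterministic starting order
    rng = random.Random(seed)     # fixed seed for reproducibility
    rng.shuffle(paths)
    n_train = round(train_frac * len(paths))
    return paths[:n_train], paths[n_train:]
```

With 816 images this yields 653 training and 163 validation images. In practice, a patient-level split (keeping all images of one patient in the same subset) would additionally guard against information leakage between subsets.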
AI model
The two-stage architecture was designed to address PG identification challenges by enabling efficient localization in large, high-resolution images and precise segmentation of small targets. YOLOX was selected as the localization subnet for its balance between inference speed and model complexity, which was critical for processing high-resolution surgical images and rapidly localizing PGs. For fine-grained segmentation, we developed Trans-U-HRNet, which combined HRNet’s preservation of high-resolution spatial detail and gland boundaries with Transformer-based global context modeling to capture long-range dependencies in the surgical field. YOLOX and Trans-U-HRNet were integrated into a unified framework, termed PG-AI, designed for real-time intraoperative assistance in open thyroidectomy. The localization-based workflow for PG segmentation is shown in Fig. 2.
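The detect-then-segment workflow can be illustrated with the following sketch, in which `detector` and `segmenter` stand in for the YOLOX and Trans-U-HRNet models; both are placeholders, and the box margin and confidence threshold are assumed values, not taken from this study.

```python
import numpy as np

def two_stage_segment(image, detector, segmenter, margin=0.15, conf_thresh=0.5):
    """Illustrative detect-then-segment pipeline: a detector proposes PG
    boxes in the full-resolution frame; each crop is passed to a
    fine-grained segmentation model, and the local mask is pasted back."""
    H, W = image.shape[:2]
    full_mask = np.zeros((H, W), dtype=np.uint8)
    for (x1, y1, x2, y2, score) in detector(image):
        if score < conf_thresh:
            continue
        # expand the box by a relative margin so boundaries are not clipped
        mx, my = int((x2 - x1) * margin), int((y2 - y1) * margin)
        x1, y1 = max(0, x1 - mx), max(0, y1 - my)
        x2, y2 = min(W, x2 + mx), min(H, y2 + my)
        crop_mask = segmenter(image[y1:y2, x1:x2])  # binary mask of the crop
        full_mask[y1:y2, x1:x2] |= crop_mask.astype(np.uint8)
    return full_mask
```

Restricting segmentation to detector crops keeps the fine-grained model’s input small even when the source frames are 3840 × 2160 pixels, which is what makes real-time inference feasible.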
Fig. 2.
The overall workflow of the localization-based method for PG segmentation (a). Data transfer process and diagrams of the two models used (b). Composition of the intraoperative dataset and partitioning of internal and external cohorts (c). Legends used in this figure (d)
PG-AI was trained and validated on annotated PG images that included gland positions and contours. Performance was assessed using standard segmentation metrics and clinical criteria, covering instance segmentation, semantic segmentation, and PG identification. For instance segmentation, COCO metrics [14] were applied with an IoU threshold of 50% and a confidence score > 0.5, reporting average precision (AP50) and recall (AR100). For PG identification, the same IoU threshold was used to ensure anatomical accuracy of detected glands, consistent with object detection and instance segmentation standards. Precision (PRE), recall (REC), accuracy (ACC), and F1-score (the harmonic mean of PRE and REC) were also calculated to assess model performance.
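The box-level IoU criterion and the identification metrics can be written out explicitly. This is a generic sketch of the standard definitions, not the authors’ evaluation code.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

def identification_metrics(tp, fp, fn, tn=0):
    """Precision, recall, accuracy, and F1 from detection counts.
    A prediction counts as a true positive when IoU >= 0.5."""
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    acc = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * pre * rec / (pre + rec)  # harmonic mean of precision and recall
    return pre, rec, acc, f1
```

For example, two boxes of area 100 overlapping on a 5 × 5 region have IoU 25/175, below the 0.5 threshold, so the detection would not count as a true positive.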
PG-AI model was trained and evaluated on a workstation equipped with an NVIDIA Tesla V100 GPU (32GB memory). Model inputs were high-resolution surgical images (3840 × 2160 pixels). Training was performed for 100 epochs with a learning rate of 0.0001 and a batch size of 4, requiring approximately 33 h. Detailed experimental configurations and hyperparameters are provided in Table S1.
Image verification
The results of PG-AI recognition were visualized and analyzed to calculate the PG recognition rate. When multiple images captured the same PG, one was selected for visual analysis. PG-AI performance was compared with NIRAF imaging (ARGOS 300PT, Microscopic Intelligence Co., China), in which excitation light at 785 nm is emitted and PG autofluorescence at 820 nm is detected and displayed as a bright grayscale image [15]. Images with identified PGs from the validation sets were assessed by a senior surgeon (> 20 years of thyroid surgery experience), an intermediate surgeon (> 10 years), and a junior surgeon (2 years). Visualization of the two methods is shown in Fig. 3.
Fig. 3.
Comparison of PG identification using PG-AI segmentation and NIRAF imaging in conventional thyroidectomy. PGs are marked with a white arrow in the original surgical image (A), the PG-AI segmentation mask (B), and the NIRAF image (C)
Statistical analysis
Statistical analyses were performed using SPSS 26.0. McNemar’s test for paired nominal data was applied to compare PG recognition rates between PG-AI and NIRAF imaging, as well as between PG-AI and surgeon groups. Statistical significance was defined as p < 0.05. Continuous data are expressed as mean ± standard deviation (SD), and categorical data as frequencies or percentages.
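McNemar’s test on paired recognition outcomes depends only on the discordant pairs. A minimal implementation using the chi-square approximation with continuity correction (the variant commonly applied for larger discordant counts) might look like this; it is a sketch, not the SPSS procedure used in the study.

```python
import math

def mcnemar(b, c, correction=True):
    """McNemar's chi-square test for paired nominal data.
    b and c are the discordant counts (cases where exactly one of the
    two methods identified the PG). Returns (chi2, p) with df = 1."""
    num = (abs(b - c) - 1) ** 2 if correction else (b - c) ** 2
    chi2 = num / (b + c)
    # survival function of the chi-square distribution with 1 df
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p
```

For small discordant counts an exact binomial version is preferable; `statsmodels` provides both variants in its `mcnemar` function.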
Results
Model performance
The YOLOX-based localization subnet achieved an AP50 of 88.9% and localized 99.1% of PGs. PG-AI’s average inference time was 135 ms per image. PG-AI achieved superior DICE, JAC, and F1 scores for PG segmentation and identification, outperforming other models on both validation sets (Table 1) [16–20]. Visual comparisons confirmed that its segmentation results better aligned with medical definitions and surgeon requirements (Fig. 4).
Table 1.
Segmentation results in internal and external validation sets.a
| Dataset | Model | DICE | ACC | PRE | REC | F1 | hd95 | JAC |
|---|---|---|---|---|---|---|---|---|
| Internal validation set | YOLOX-GFL + MedT | 0.351 | 43.1% | 69.3% | 53.2% | 0.602 | 154.12 | 0.218 |
| | YOLOX-GFL + U-net | 0.175 | 18.9% | 79.1% | 20.1% | 0.321 | 275.71 | 0.125 |
| | YOLOX-GFL + Swin-Unet | 0.637 | 70.7% | 87.1% | 79.3% | 0.830 | 113.24 | 0.505 |
| | YOLOX-GFL + U-net 3+ | 0.532 | 53.1% | 70.1% | 68.7% | 0.694 | 133.79 | 0.431 |
| | PG-AI | 0.693 | 79.8% | 91.1% | 86.5% | 0.887 | 104.57 | 0.588 |
| External validation set | YOLOX-GFL + MedT | 0.254 | 25.7% | 43.1% | 38.7% | 0.408 | 350.79 | 0.180 |
| | YOLOX-GFL + U-net | 0.098 | 13.3% | 50.7% | 15.1% | 0.233 | 407.98 | 0.027 |
| | YOLOX-GFL + Swin-Unet | 0.487 | 54.3% | 69.7% | 70.3% | 0.699 | 271.45 | 0.326 |
| | YOLOX-GFL + U-net 3+ | 0.433 | 46.1% | 70.1% | 56.4% | 0.625 | 297.46 | 0.339 |
| | PG-AI | 0.531 | 59.5% | 76.0% | 73.4% | 0.747 | 271.84 | 0.368 |

a DICE: Dice similarity coefficient; ACC: accuracy; PRE: precision; REC: recall; F1: F1-score; hd95: 95th percentile of the Hausdorff distance; JAC: Jaccard index
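The mask-level metrics in the footnote follow standard definitions; a minimal sketch for Dice and Jaccard on binary masks (not the authors’ evaluation code) is:

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice similarity coefficient and Jaccard index for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    jac = inter / np.logical_or(pred, gt).sum()
    return float(dice), float(jac)
```

The two are monotonically related (JAC = DICE / (2 - DICE)), so a ranking of models by one metric generally matches the other, as seen in Table 1.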
Fig. 4.
Visualization of PG segmentation results. The source image illustrates the prediction results of the localization subnet, reflecting its high localization performance, with YOLOX (with GFL) successfully locating nearly all PGs (99.5%)
Statistical results
The evaluation was performed on the internal and external validation sets. The clinical characteristics of 121 patients in internal and external datasets are presented in Table 2. The validation tests included 49 patients with a total of 111 PGs identified by ICGT. PG-AI demonstrated the highest recognition rate at 87.4% (97/111, 95% confidence interval [CI]: 79.8–92.5%). NIRAF imaging achieved a recognition rate of 84.7% (94/111, 95% CI: 76.8–90.3%), and conventional visual inspection achieved 80.2% (89/111, 95% CI: 71.8–86.7%). There was no significant difference between PG-AI and NIRAF recognition rates (χ2 = 0.11, p = 0.736). PG-AI produced eight false positives, misclassifying adipose tissue (n = 6), a thyroid nodule (n = 1), and muscle (n = 1). Importantly, three of these false-positive PGs were correctly identified. PG-AI also yielded 10 false negatives (undetected PGs). In contrast, NIRAF imaging resulted in 17 false negatives and 1 false positive in the same cases (Tables 3 and 4). PG-AI demonstrated a concordance rate of 72.1% (80/111, 95% CI: 62.8–80.2%) with NIRAF assessments. All PGs were identified by either PG-AI or NIRAF, with no PGs missed by both methods.
Table 2.
Clinical characteristics of patients in the internal and external datasets in the study
| Characteristics | Internal dataset, N (%) | External dataset, N (%) |
|---|---|---|
| Sex, n (%) | | |
| Male | 36 (35.6%) | 3 (15.0%) |
| Female | 65 (64.4%) | 17 (85.0%) |
| Age (years, mean ± SD) | 47.24 ± 13.84 | 49.79 ± 13.56 |
| Hospitalization (days, mean ± SD) | 5.74 ± 2.37 | 5.72 ± 2.03 |
| Operation time (minutes, mean ± SD) | 80.05 ± 28.81 | 76.03 ± 24.94 |
| Scope of surgery, n (%) | | |
| Unilateral | 60 (59.4%) | 13 (65.0%) |
| Bilateral | 41 (40.6%) | 7 (35.0%) |
| Procedure, n (%) | | |
| Thyroid lobectomy | 9 (8.9%) | 1 (5.0%) |
| Thyroid lobectomy with CND* | 56 (55.4%) | 12 (60.0%) |
| Total thyroidectomy | 10 (9.9%) | 3 (15.0%) |
| Total thyroidectomy with CND | 26 (25.7%) | 4 (20.0%) |
* CND: central neck dissection
Table 3.
The diagnostic performance of PG-AI, NIRAF imaging and 3 surgeons
| | PG-AI | NIRAF imaging | Junior surgeon | Intermediate surgeon | Senior surgeon |
|---|---|---|---|---|---|
| True positive | 97 | 94 | 84 | 92 | 93 |
| False negative | 10 | 17 | 22 | 16 | 15 |
| False positive | 8 | 1 | 14 | 9 | 13 |
Table 4.
The false positive tissues identified by different groups for the cohort
| False-positive tissues | PG-AI | NIRAF imaging | Junior surgeon | Intermediate surgeon | Senior surgeon |
|---|---|---|---|---|---|
| Adipose tissue | 6 | 1 | 12 | 8 | 8 |
| Thyroid nodules | 1 | 0 | 0 | 1 | 5 |
| Muscle | 1 | 0 | 2 | 0 | 0 |
PG-AI was further compared with three surgeons of varying experience levels. After removing duplicate images of the same PG, 54 images from 29 patients with 71 PGs were included in the internal validation set. PG-AI achieved a recognition rate of 88.7% (63/71, 95% CI: 79.5–94.2%). For the same images, recognition rates were 78.9% (56/71, 95% CI: 68.1–86.9%) for the junior surgeon, 83.1% (59/71, 95% CI: 72.7–90.2%) for the intermediate surgeon, and 85.9% (61/71, 95% CI: 75.9–92.4%) for the senior surgeon. No significant differences were observed between PG-AI and the junior surgeon (χ2 = 2.4, p = 0.118), intermediate surgeon (χ2 = 0.75, p = 0.388), or senior surgeon (χ2 = 0.1, p = 0.754). In the external validation set, 33 images from 20 patients with 40 PGs were analyzed. PG-AI achieved a recognition rate of 85.0% (34/40, 95% CI: 70.4–93.3%). Recognition rates for the same images were 70.0% (28/40, 95% CI: 54.5–82.1%) for the junior surgeon, 75.0% (30/40, 95% CI: 59.5–86.0%) for the intermediate surgeon, and 82.5% (33/40, 95% CI: 67.7–91.5%) for the senior surgeon. PG-AI significantly outperformed the junior surgeon (χ2 = 7.56, p = 0.004), whereas no significant differences were observed compared with the intermediate surgeon (χ2 = 0.00, p = 1.00) or senior surgeon (χ2 = 0.75, p = 0.388). Among the seven external validation patients who underwent bilateral thyroidectomy, serum calcium and parathyroid hormone (PTH) levels remained within the normal clinical range throughout the 6-month postoperative period (Table S2).
Discussion
Preservation of PGs remains a critical task in thyroidectomy. Their morphological and topographical heterogeneity, combined with close similarity to adjacent tissues, renders them highly susceptible to iatrogenic injury; reported incidences of incidental PG removal during surgery range from 3.7% to 24.9% [21]. Accurate intraoperative localization of PGs is therefore essential to ensure their preservation and reduce the risk of postoperative hypocalcemia.
In recent years, deep learning has demonstrated increasing potential for PG recognition and segmentation. The task remains challenging, however, because PGs are small relative to the surgical field, which often contains morphologically similar tissues such as adipose tissue and lymph nodes. Existing AI studies have focused primarily on PG identification in NIRAF images [10, 22–24], while segmentation of real surgical images remains limited. Wang et al. applied the Faster R-CNN model to segment PGs in laparoscopic surgery, reporting a precision of 88.7%, recall of 92.3%, and F1-score of 90.5% [9]. However, that approach relied on magnified laparoscopic views, which inherently facilitate PG identification through tissue magnification and enhanced visualization of subtle anatomical features. In contrast, our PG-AI framework operated directly on unprocessed white-light images from open surgery, where PGs exhibit greater morphological similarity to surrounding adipose tissue or lymph nodes and occupy a small fraction of the surgical field.
In this study, we developed an automated method for joint detection and segmentation of PGs in high-resolution intraoperative images. The method comprised two subnets: YOLOX with GFL and Trans-U-HRNet. The architecture prioritized localization before segmentation, efficiently narrowing the search space in large surgical images and enabling detailed contour delineation through mask-based output. This approach was critical for distinguishing PGs from morphologically similar surrounding tissues. On the internal validation set, the model achieved a precision of 91.1% and a recall of 86.5%, demonstrating superior performance in segmenting small targets within high-resolution surgical scenes.
Despite encouraging internal validation results, performance declined on the independent external validation set (DICE: 0.531). This reduction most likely reflects uncontrolled variability in the surgical environment: despite a standardized imaging protocol, surgical lighting, degree of tissue exposure, presence of blood, and subtle differences in camera angle introduced by the assistant all altered the visual characteristics of PGs and surrounding tissues. Labeling inconsistency is an unlikely contributor, as all datasets were annotated by the same senior surgeon using ICGT as a reference.
Visualization analysis of validation set images was crucial for evaluating the clinical efficacy of the AI model. PG-AI demonstrated consistently high recognition rates across both internal and external validation sets. Analysis of misidentified cases revealed that recognition errors were predominantly concentrated in tissues with morphological features resembling PGs, such as adipose and thyroid tissues. Additionally, 10 small, irregularly shaped PGs were not detected by PG-AI. Future research will focus on enhancing the model’s ability to recognize irregularly shaped PGs through targeted training, thereby improving overall recognition rates.
Multiple studies have established NIRAF imaging as a reliable modality for intraoperative PG identification, with high detection accuracy and a significant reduction in inadvertent gland resection [25–27]. To objectively evaluate PG-AI performance, pairwise comparisons were conducted between PG-AI, NIRAF imaging, and visual inspection by surgeons. Among the methods evaluated, NIRAF imaging exhibited the lowest false-positive rate, with only one case observed. PG-AI showed a higher false-positive rate than NIRAF, whereas surgeons were most prone to misidentification. In previous studies, false positives in NIRAF imaging were reported for tissues such as thyroid nodules, central lymph nodes, and adipose tissue [15, 23]. In the current study, the limited sample size of the validation dataset likely explains the absence of some of these false-positive patterns, and detection of PGs in isolated tissues was not included, which may have further reduced the incidence of false positives. Certain adipose tissues and thyroid nodules in our surgical imaging data closely resembled PGs, increasing the likelihood of misidentification by surgeons and contributing to false-positive diagnoses.
The limited tissue penetration of NIRAF imaging, combined with susceptibility to blood interference and variable autofluorescence intensity among PGs [28, 29], contributed to a higher incidence of false-negative results in our study. Notably, PG-AI successfully identified all NIRAF false-negative cases. There were no cases in which both PG-AI and NIRAF simultaneously failed to detect PGs. This differential performance suggested that, while PG-AI may struggle with irregularly shaped or partially exposed PGs, NIRAF imaging maintained reliable detection via intrinsic autofluorescence signals. Importantly, the complementary strengths of both methods demonstrated higher efficiency in PG identification than either method alone. The synergistic application of PG-AI and NIRAF imaging may offer a promising approach to optimize PG preservation during surgery and reduce postoperative complications.
All three surgeons, regardless of experience level, made false-positive identifications, primarily by misclassifying adipose tissue or thyroid nodules that resemble PGs in shape and color. In both internal and external validation sets, PG-AI achieved a higher recognition rate than all three surgeons. In the external validation set, PG-AI significantly outperformed the junior surgeon, while no statistically significant differences were observed compared with the intermediate or senior surgeons, indicating recognition capability comparable to that of experienced surgeons. PG-AI therefore shows potential to assist inexperienced surgeons by providing real-time intraoperative PG recognition and segmentation.
To our knowledge, this is the first study to employ an AI model for segmentation and identification of PGs in conventional surgical images. This work extends our previous research on video-based PG segmentation and underscores the translational value of AI-assisted identification in challenging surgical environments [30]. While the findings demonstrate promising accuracy in PG detection during conventional surgery, several limitations should be acknowledged. First, the dataset was derived from a single institution and was relatively small, which may limit generalizability to populations with different anatomical variations or surgical practices. Future studies should incorporate data augmentation and multi-center cross-validation to further assess robustness. Second, although this study established the technical viability of PG-AI, its clinical impact remains to be quantified. A large-scale study, particularly in patients undergoing bilateral thyroidectomy, is needed to directly evaluate the effect of PG-AI on postoperative outcomes such as hypocalcemia.
Conclusions
PG-AI accurately and reliably segmented PGs in real-time during conventional open thyroidectomy, demonstrating high precision and recall on internal validation. Its recognition performance was comparable to senior surgeons and significantly exceeded that of junior surgeons, providing reliable visual guidance during identification. Integration of PG-AI with NIRAF imaging holds promise for improving intraoperative PG identification and preserving gland function, thereby reducing postoperative complications.
Supplementary Information
Acknowledgements
This work is supported by the Fundamental Research Funds for the Central Universities (Project Number. YG2023LC10).
Authors’ contributions
FY and XY designed the study and drafted the manuscript. ZL, HC, YW and JK collected the data and performed the data analysis. XD, QL and BW supervised the study and revised the manuscript. All authors reviewed the manuscript.
Data availability
The datasets generated and analyzed during the current study are not publicly available due to institutional regulations and ethical restrictions related to patient confidentiality but are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
This study was conducted in accordance with the Declaration of Helsinki and approved by the Human Subjects Ethics Committee of Shanghai Sixth People’s Hospital (Approval No: 2022-KY-178 (K)). The written informed consent to participate was obtained from all patients in this study.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Fan Yu and Xiaolei Yi contributed equally to this work.
Contributor Information
Xuehai Ding, Email: dinghai@shu.edu.cn.
Quanyong Luo, Email: luoqy@sjtu.edu.cn.
Bo Wu, Email: wubo7421@sohu.com.
References
- 1.Pizzato M, Li M, Vignat J, et al. The epidemiological landscape of thyroid cancer worldwide: GLOBOCAN estimates for incidence and mortality rates in 2020. Lancet Diabetes Endocrinol. 2022;10(4):264–72. 10.1016/S2213-8587(22)00035-3. [DOI] [PubMed] [Google Scholar]
- 2.Han B, Zheng R, Zeng H, et al. Cancer incidence and mortality in China, 2022. J Natl Cancer Cent. 2024;4(1):47–53. 10.1016/j.jncc.2024.01.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Rossi L, Materazzi G, Bakkar S, Miccoli P. Recent trends in surgical approach to thyroid cancer. Front Endocrinol (Lausanne). 2021;12:699805. 10.3389/fendo.2021.699805. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Rao SS, Rao H, Moinuddin Z, Rozario AP, Augustine T. Preservation of parathyroid glands during thyroid and neck surgery. Front Endocrinol (Lausanne). 2023;14:1173950. 10.3389/fendo.2023.1173950. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Abood A, Ovesen T, Rolighed L, Triponez F, Vestergaard P. Hypoparathyroidism following total thyroidectomy: high rates at a low-volume, non-parathyroid institution. Front Endocrinol (Lausanne). 2024;15:1330524. 10.3389/fendo.2024.1330524. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Lin DX, Zhuo XB, Lin Y, et al. Enhancing parathyroid preservation in papillary thyroid carcinoma surgery using nano-carbon suspension. Sci Rep. 2024;14(1):24680. 10.1038/s41598-024-76126-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Xia W, Zhang J, Shen W, Zhu Z, Yang Z, Li X. A rapid intraoperative parathyroid hormone assay based on the immune colloidal gold technique for parathyroid identification in thyroid surgery. Front Endocrinol (Lausanne). 2020;11:594745. 10.3389/fendo.2020.594745. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Canali L, Russell MD, Sistovaris A, et al. Camera-based near-infrared autofluorescence versus visual identification in total thyroidectomy for parathyroid function preservation: systematic review and meta-analysis of randomized clinical trials. Head Neck. 2025;47(1):225–34. 10.1002/hed.27900. [DOI] [PubMed] [Google Scholar]
- 9.Wang B, Zheng J, Yu JF, et al. Development of artificial intelligence for parathyroid recognition during endoscopic thyroid surgery. Laryngoscope. 2022;132(12):2516–23. 10.1002/lary.30173. [DOI] [PubMed] [Google Scholar]
- 10.Avci SN, Isiktas G, Ergun O, Berber E. A visual deep learning model to predict abnormal versus normal parathyroid glands using intraoperative autofluorescence signals. J Surg Oncol. 2022;126(2):263–7. 10.1002/jso.26884. [DOI] [PubMed] [Google Scholar]
- 11.Liu Q, Ding F, Li J, et al. DCA-Net: Dual-branch contextual-aware network for auxiliary localization and segmentation of parathyroid glands. Biomed Signal Process Control. 2023;84:104856. 10.1016/j.bspc.2023.104856.
- 12.Aboh IV, Chisci G, Salini C, et al. Submandibular ossifying lipoma. J Craniofac Surg. 2015;26(3):973–4. 10.1097/SCS.0000000000001489. [DOI] [PubMed] [Google Scholar]
- 13.Akrish S, Leiser Y, Shamira D, Peled M. Sialolipoma of the salivary gland: two new cases, literature review, and histogenetic hypothesis. J Oral Maxillofac Surg. 2011;69(5):1380–4. 10.1016/j.joms.2010.05.010. [DOI] [PubMed] [Google Scholar]
- 14.Lin TY, Maire M, Belongie S, Hays J, Zitnick CL. Microsoft COCO: common objects in context. Cham: Springer International Publishing; 2014. 10.1007/978-3-319-10602-1_48.
- 15.Yu F, Yi X, Lin Z, Wu Y, Luo Q, Wu B. Fluorescence intensity of parathyroid glands in thyroid and parathyroid surgery: a near-infrared autofluorescence study. Front Surg. 2025;12:1559274. 10.3389/fsurg.2025.1559274. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. Cham: Springer; 2015. [Google Scholar]
- 17.Valanarasu JMJ, Oza P, Hacihaliloglu I, Patel VM. Medical Transformer: Gated Axial-Attention for Medical Image Segmentation. 2021. 10.48550/arXiv.2102.10662.
- 18.Cao H, Wang Y, Chen J, et al. Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation. 2021. 10.48550/arXiv.2105.05537.
- 19.Huang H, Lin L, Tong R, Hu H, Wu J. UNet 3+: a full-scale connected UNet for medical image segmentation. In: Proc IEEE ICASSP. 2020. 10.1109/ICASSP40776.2020.9053405.
- 20.Ruan J, Xie M, Gao J, Liu T, Fu Y. EGE-UNet: an efficient group enhanced UNet for skin lesion segmentation. Cham: Springer; 2023. [Google Scholar]
- 21.Barbieri D, Indelicato P, De Leo S, et al. Will the autofluorescence take over inadvertent parathyroidectomy? Results from a multicentre cohort study. Updates Surg. 2025;77(2):369–80. 10.1007/s13304-025-02083-7. [DOI] [PubMed] [Google Scholar]
- 22.Avci SN, Isiktas G, Berber E. A visual deep learning model to localize parathyroid-Specific autofluorescence on Near-Infrared imaging: localization of parathyroid autofluorescence with deep learning. Ann Surg Oncol. 2022. 10.1245/s10434-022-11632-y. [DOI] [PubMed] [Google Scholar]
- 23.Yu F, Sang T, Kang J, et al. An automatic parathyroid recognition and segmentation model based on deep learning of near-infrared autofluorescence imaging. Cancer Med. 2024;13(4):e7065. 10.1002/cam4.7065. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Akgun E, Uysal M, Avci SN, Berber E. The use of artificial intelligence to detect parathyroid tissue on ex vivo specimens during thyroidectomy and parathyroidectomy procedures using near-infrared autofluorescence signals. Surgery. 2024;176(5):1396–401. 10.1016/j.surg.2024.07.015. [DOI] [PubMed] [Google Scholar]
- 25.Takahashi T, Yamazaki K, Ota H, Shodo R, Ueki Y, Horii A. Near-Infrared fluorescence imaging in the identification of parathyroid glands in thyroidectomy. Laryngoscope. 2021;131(5):1188–93. 10.1002/lary.29163. [DOI] [PubMed] [Google Scholar]
- 26.Kuo TC, Chen KY, Lai CW, Lin MT, Chang CH, Wu MH. Analysis of near-infrared autofluorescence imaging for detection of inadvertently resected parathyroid glands after endoscopic thyroidectomy. Eur J Surg Oncol. 2024;50(11):108648. 10.1016/j.ejso.2024.108648. [DOI] [PubMed] [Google Scholar]
- 27.Bakkar S, Allan M, Halaseh B, et al. An outcome analysis of utilizing contrast-free near-infrared autofluorescence imaging in thyroid cancer surgery: a retrospective study. Updates Surg. 2025. 10.1007/s13304-025-02123-2. [DOI] [PubMed] [Google Scholar]
- 28.Han YS, Kim Y, Lee HS, Kim Y, Ahn YC, Lee KD. Detectable depth of unexposed parathyroid glands using near-infrared autofluorescence imaging in thyroid surgery. Front Endocrinol (Lausanne). 2023;14:1170751. 10.3389/fendo.2023.1170751. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Idogawa H, Sakashita T, Homma A. A novel study for fluorescence patterns of the parathyroid glands during surgery using a fluorescence spectroscopy system. Eur Arch Otorhinolaryngol. 2020;277(5):1525–9. 10.1007/s00405-020-05849-4. [DOI] [PubMed] [Google Scholar]
- 30.Sang T, Yu F, Zhao J, Wu B, Ding X, Shen C. A novel deep learning method to segment parathyroid glands on intraoperative videos of thyroid surgery. Front Surg. 2024;11:1370017. 10.3389/fsurg.2024.1370017. [DOI] [PMC free article] [PubMed] [Google Scholar]