Author manuscript; available in PMC: 2023 Aug 1.
Published in final edited form as: J Biophotonics. 2022 Apr 20;15(8):e202200008. doi: 10.1002/jbio.202200008

A co-axial excitation, dual-RGB/NIR paired imaging system toward computer-aided detection (CAD) of parathyroid glands in situ and ex vivo

Yoseph Kim 1,2,9, Hun Chan Lee 3,9, Jongchan Kim 4,9, Eugene Oh 1,2, Jennifer Yoo 4, Bo Ning 4, Seung Yup Lee 5,6, Khalid Mohamed Ali 7, Ralph P Tufano 7, Jonathon O Russell 7, Jaepyeong Cha 2,4,8,*
PMCID: PMC9357067  NIHMSID: NIHMS1793195  PMID: 35340114

Abstract

Early and precise detection of parathyroid glands (PGs) is a challenging problem in thyroidectomy due to their small size and similar appearance to surrounding tissues. Near-infrared autofluorescence (NIRAF) has stimulated interest as a method to localize PGs; however, a high incidence of false positives has been reported with this technique. We introduce a prototype equipped with a coaxial excitation light (785 nm) and a dual sensor to address the issue of false positives with the NIRAF technique. We tested the clinical feasibility of our prototype in situ and ex vivo, using sterile drapes, on 10 human subjects. Video data (1,287 images) of detected PGs were collected to train, validate, and compare the performance of PG detection models. We achieved a mean average precision of 94.7% and a 19.5-millisecond processing time per detection. This feasibility study supports the effectiveness of the optical design and may open new doors for deep learning-based PG detection.

Keywords: deep-learning, hypocalcemia, near-infrared autofluorescence, parathyroid glands, thyroid surgery

Graphical Abstract:


This paper shows the preliminary feasibility of a co-axial excitation, dual-red-green-blue (RGB)/near-infrared (NIR) paired imaging system that detects autofluorescence signals from parathyroid glands intraoperatively and exploits computer-aided algorithms to localize them post-hoc. The aim of the study was to explore the potential of addressing false negative/positive issues from current NIR technology. Our machine learning algorithm was tested on real-time data from 6 thyroid/parathyroidectomy patients and achieved a mean average precision of 94.7% and a 19.5 millisecond processing time per detection.

1. INTRODUCTION

Postoperative hypocalcemia (low calcium levels in the blood plasma) is a major hypoparathyroidism (hypo-PT) complication after thyroidectomy [1]. In the US, approximately 20 million people are diagnosed with thyroid diseases annually and around 150,000 thyroidectomies are performed per year. Reportedly, approximately 27% of the patients who undergo thyroidectomies suffer from transient or permanent hypocalcemia, which can lead to lifelong deleterious consequences along with economic burden [2]. One study found that hospitalization costs after neck drain removal were higher in the post-operative hypocalcemic group compared to the non-hypocalcemic group (P<0.001) [3, 4]. Direct damage to or accidental removal of parathyroid glands (PGs) during surgery is one of the main causes of these adverse outcomes. Challenges in identifying PGs are due to their small sizes (3 – 7 mm) and similar appearances to lymph nodes, fat, and thyroid tissue [5, 6]. Permanent hypocalcemia, defined as hypocalcemia present for more than 6 months after thyroidectomy, is reported in 1 – 10% of patients [7]. Minimizing hypoparathyroidism is important for quality of life as postoperative hypocalcemia may result in prolonged hospital stays and clinic visits and also require lifelong calcium and vitamin D supplements, causing patients to incur significant additional health care costs [3, 8, 9].

The current gold standard for identifying PGs is the surgeon's visual inspection [10], but post-operative complication rates can be inconsistent depending on the surgeon's experience and surgical volume [11–13]. Alternatively, the identity of PG tissue during thyroidectomy can be confirmed directly by frozen section analysis or indirectly by intraoperative parathyroid hormone (PTH) assays; both are invasive, requiring repeated sampling and a wait time of 20 – 60 minutes per sample. As a noninvasive alternative, Paras et al. introduced an optical method to identify PGs using near-infrared autofluorescence (NIRAF) for the first time in 2011 [14], and the NIRAF technique has since been reported to be effective for intraoperative real-time localization of PGs [14–23]. The NIRAF technology has thus gained much attention for non-invasive, label-free, and safe localization of PGs by producing high-contrast images in real time. Consequently, two commercial products recently received Food and Drug Administration (FDA) clearance for NIRAF detection [24]: a non-imaging spot probe (PTeye, Medtronic) and an imaging-based localization device (Fluobeam LX, Fluoptics).

Other optical technologies have been recently studied to help localize PGs. Kim et al. developed a probe-type parathyroid autofluorescence detector using a phase-sensitive process and optical filtering [25]. Their goal was to overcome the limitations of conventional NIRAF devices, which include difficulty matching the autofluorescence screen with the actual surgical field of view, having to turn off or turn away the surgical lights, and not being able to use the device for remote access. They successfully demonstrated the efficacy of their device in a preliminary clinical trial but acknowledged that their system has limitations, including discomfort in clinical applications and variable signal intensity. Wang et al. studied the use of laser-induced breakdown spectroscopy (LIBS) to identify PGs [26]. They looked at smear samples from rabbits and tested three machine learning algorithms as classifiers to distinguish PGs from non-PGs. Their results demonstrated the feasibility of LIBS to characterize the elemental composition of parathyroid glands ex vivo. Ignat et al. used a probe-based confocal laser endomicroscopy (pCLE) to confirm histology of PGs in the surgical field prior to tissue resection and differentiate thyroid, parathyroid, and lymph nodes [27]. This approach suggested the possibility of obviating the need for frozen sections, which can be time-consuming and costly, but also required an intravenous injection of 2.5 mL of 10% fluorescein sodium. Meanwhile, Barberio et al. showed the feasibility of hyperspectral imaging (HSI) to discriminate the thyroid from parathyroid glands through spectral signatures [28]. This group also tested an automatic classification algorithm on one patient using their spectral data. Although their method was novel and may prove its clinical potential with further studies, the group identified limitations, including that the data acquisition had to be performed with the lights off and under specific camera positions. These optical techniques are novel and pave a new way to identify parathyroid glands intraoperatively; however, they are pre-clinical in nature and in development. Similarly, our device is an early-stage prototype, but overcomes several of these limitations. As our device has both RGB and NIRAF screens and does not require the surgical lights to be altered, it has the potential to fit into current surgical workflows. Moreover, our technique is non-invasive and does not require a dye-injection.

While the NIRAF technology may improve a surgeon's confidence and reduce the need for frozen section or PTH analysis as an adjunct to the standard of care of visual inspection, false negatives (no fluorescence signal in a PG) can occur with current NIRAF devices in secondary hyperparathyroidism cases [29]. Moreover, false positives (fluorescence signal in non-PG tissue) have been reported with brown fat, colloidal thyroid nodules, burn tissue, thyroid lobes/cysts, vicryl suture, surgical ink, and, most importantly, metastatic lymph nodes [6, 24]. These potential false negative and false positive results can be mitigated by the surgeon's experience and judgment, but autotransplanting a metastatic lymph node presumed to be a PG can have serious consequences, such as the need for re-operation [6]. Thus, relying only on the NIRAF technology is still suboptimal, and it remains common practice to perform frozen section during thyroidectomies for malignant thyroid disease to distinguish between a metastatic lymph node that needs to be removed and a devascularized PG that requires autotransplantation [30].

In recent years, deep convolutional neural networks (CNNs) have shown state-of-the-art results in many computer vision tasks [31] and have achieved remarkable results in image classification. Despite this breakthrough, CNNs usually require a massive amount of training data. Furthermore, in medical imaging, image acquisition involves many layers of protocols and privacy issues, and ground truth annotation is hampered by the requirement for expert input (e.g., histopathological examination), which may introduce human error and inconsistency. Despite the complexity and computing-time limitations of CNNs, the you-only-look-once (YOLO) model [32] has been well studied in the field of colonoscopy [33, 34]. YOLO employs a single CNN to predict the bounding boxes of detected regions. Because YOLO limits the number of bounding boxes, it avoids repeated detection of the same object and thus greatly improves detection speed, making it suitable for real-world applications [32].

In the present study, we introduce a new method to improve PG detection by exploiting a co-registered dual-red-green-blue (RGB)/near-infrared (NIR) paired imaging device and YOLOv5-based deep learning. Although a number of efforts have been made to quantify PG autofluorescence signals using NIRAF technology alone [35, 36], our study is the first report of paired RGB/NIR imaging-based PG detection that incorporates multi-modal data (RGB light imaging and NIRAF ground-truth imaging) into a deep learning algorithm for PG identification. The primary and secondary aims of this study were to demonstrate the novelty of our device and method: 1) the first development of a co-registered dual-color RGB/NIR paired optical imaging system with a coaxial, collimated excitation light that can access small-incision surgical fields, and 2) the first feasibility demonstration of computer-aided PG detection using paired RGB/NIR imaging and deep learning applied to real human thyroid and parathyroid specimen data from 10 parathyroid gland samples.

2. MATERIALS AND METHODS

2.1. Hardware design: Dual-RGB/NIR paired imaging with a coaxial, collimated illumination

As illustrated in Figure 1, our clinical imaging prototype consists of a coaxial, collimated 785 nm fiber laser module and a co-registered dual-RGB/NIR camera module combined with a fixed c-mount lens. The excitation light was delivered coaxially with the image sensors to provide a consistent level of light to the field of view. The dual-RGB/NIR camera module allowed co-registration of the RGB and NIR images by optically aligning the image sensors. To provide uniform illumination, a collimating lens (RMX 4X objective lens, Olympus) was attached to the 785 nm excitation laser (fiber laser, Civil Laser Inc.) via a 400 µm core diameter multimode fiber (OceanOptics). The excitation beam was steered by a kinematic mirror and reflected by a notch beam splitter (NFD01-785-25×36, Semrock, Inc.). The collimated beam was approximately 5 cm in diameter, which covered the camera fields of view at working distances of 20 to 30 cm. A bumped angle adapter was implemented to avoid direct specular reflection from the sterile drape plastic window (04-CC900 closed camera covers, The Medical Supply Group Inc.). An additional single notch filter (NF03-785E-25, Semrock, Inc.) was used to mitigate spectral crosstalk from the incident light. An electrically tunable lens (Optotune, Switzerland) was used to control the focal plane. A 30 mm fixed back-focal-length achromatic c-mount lens (AC254-030-AB-ML, Thorlabs) was connected to a commercially available, FDA-listed dual-RGB/NIR camera system (ITSLC1711, InTheSmart Co. Ltd., Seoul, South Korea). Each RGB and NIR image sensor provided 1920×1080 resolution images, and each image sensor was 6.45 mm diagonally. The camera module was connected to the camera control unit to process the co-registered, real-time dual-RGB/NIR images, which were displayed in split view on a 17″ UHD monitor.
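As a rough, back-of-the-envelope check on the achievable field of view (added here for illustration; the tunable lens and the full optical train are not modeled), a thin-lens estimate relates the sensor size, the fixed focal length, and the working distance:

$$\mathrm{FOV} \approx d_\mathrm{sensor}\,\frac{z - f}{f},$$

with sensor diagonal $d_\mathrm{sensor} \approx 6.45$ mm, focal length $f = 30$ mm, and working distance $z$. For $z = 10$ cm this gives roughly $6.45 \times 70/30 \approx 15$ mm, and for $z = 60$ cm roughly $6.45 \times 570/30 \approx 123$ mm, consistent with the 1 cm × 1 cm to 15 cm × 15 cm range reported in Table 1.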

FIGURE 1.

System assembly. (a) Schematic of the dual-red-green-blue (RGB)/near-infrared (NIR) imaging system with a coaxial, collimated excitation light (cross-sectional view) and (b) sterile-drape-attachable 3-degree angle coupler.

2.2. Bench-top study protocols and system characterization

To characterize the system specifications, we first conducted benchtop testing. A Thorlabs USAF 1951 target was used to characterize the fields of view and spatial resolutions over different working distances. The laser power at the exit pupil was measured at working distances of 10 and 30 cm using a power meter (PM100D, Thorlabs). Fluorescence imaging was tested, and a positive signal was confirmed using a diluted indocyanine green (ICG) solution in an Eppendorf tube. The device was tested with the operating room (OR) surgical lights on to check for interference; no difference in signal intensity was observed with the OR lights on. Video frame rates and image sizes were also tested.

Table 1 summarizes the hardware specifications. The average power density at the exit pupil at working distances of 10 and 60 cm was 5 mW/cm² with a beam diameter of 5 cm. The spatial resolution was measured between 50 and 200 μm. The system was able to detect as little as 10 pmol in a 1-ml vial. The contrast-to-noise ratio (CNR) was measured at 4.25 under surgical lights.

TABLE 1.

Device characteristics.

Excitation wavelength: 785 nm
Emission wavelength: >810 nm
Working distance: 10 – 60 cm
Spatial resolution: 50 – 200 μm
Field of view: 1 cm × 1 cm (minimum) to 15 cm × 15 cm (maximum)
Camera frame rate (both RGB/NIR): 60 frames per second
Image format: PNG
Video format: MP4
Laser power density: 5 mW/cm²
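The contrast-to-noise ratio quoted above (4.25 under surgical lights) is not defined in the text; one common definition, which we assume here for illustration, compares the mean fluorescence signal in a region of interest against the background:

$$\mathrm{CNR} = \frac{\left|\mu_\mathrm{signal} - \mu_\mathrm{background}\right|}{\sigma_\mathrm{background}},$$

where $\mu$ denotes the mean pixel intensity of a region and $\sigma_\mathrm{background}$ the standard deviation of the background intensity.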

2.3. Ex vivo and in vivo data collection

The clinical study protocol was approved by the Johns Hopkins Medicine Institutional Review Board (IRB00224302-CIR00059986). The feasibility of the device was determined by assessing its ability to detect autofluorescence signals from PGs before tissue exposure or dissection and to distinguish those PGs from surrounding anatomical structures during routine parathyroid and thyroid surgeries. Real-time video data were collected from 6 patients, including both in situ views and excised ex vivo specimens, and were used post-operatively to train, validate, and compare the detection models. The resected specimens were sent for frozen section analysis for histopathological confirmation as part of routine practice and when sampling was indicated.

Prior to surgery, a nurse placed a sterile sheath cover over the imager, and the imager was placed on a sterile surface near the patient's bed. When the surgeon suspected a tissue to be a PG during parathyroid or thyroid surgery, they picked up the imager to examine the surgical field. The monitor was positioned such that the surgeons could easily view the RGB/NIR split screen, as shown in Figure 2a. Real-time images and video data of the surgical field were then collected with the imager for approximately 10 seconds in situ after the surgeon exposed the thyroid tissue bed. After the tissue was resected, the imager was used within 1 minute to confirm whether any PG autofluorescence was detected from the specimen before it was sent for frozen section analysis for histopathological confirmation as part of routine practice (Figure 2b). All suspected PGs were confirmed by the surgeon's visual judgment and/or frozen section analysis.

FIGURE 2.

Data collection in the operating room; a surgeon holds the device while viewing the red-green-blue (RGB)/near-infrared (NIR) screen. Example of specimen data collection (a) in situ (yellow arrow: parathyroid gland) and (b) ex vivo.

2.4. Data preparation and image pre-processing for a deep learning application

An FDA-registered dual-RGB/NIR camera system (ITSLC1711, InTheSmart Co. Ltd., Seoul, South Korea) was used under approval of the Johns Hopkins Institutional Review Board (IRB00224302-CIR00059986) and the Imaging and Recording Oversight Committee, and the entire dataset was collected with this system. The collected dataset was trained and tested on a separate computer (i7-1165G7, 2.8 GHz, Turbo up to 4.7 GHz, 12 MB L3 cache, 16 GB RAM). The labelImg software (https://github.com/tzutalin/labelImg.wiki.git) was used to draw bounding boxes around objects of interest and to save the location and label of each box; RGB-PG and NIR-PG were used as labels for PGs in the respective modalities. We tried to ensure that the edges of the bounding boxes touched the outermost pixels of the parathyroid glands. The entire dataset consisted of 1,287 images per modality (true positive data only) from 6 patients. The video data comprised dual-RGB/NIR images at 60 frames per second (FPS) with a 1920 × 1080 pixel size; we divided them into an RGB data set, an NIR data set, and masks for the PG ground-truth data set. The videos had different fluorescence patterns and durations. Of the 10 patients (see Table 2), images collected from 4 patients were used to train the models and images collected from 2 patients were used to evaluate the models' performance; patients were assigned to the training and test sets at random. Example images are shown in Figure 3.
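For readers reproducing the data preparation, the sketch below shows one way the labelImg annotations could be read back and paired with the corresponding RGB and NIR frames. It assumes the labels were exported in YOLO text format (class, normalized center, width, and height per line); the directory layout and file names are illustrative only, not taken from the paper.

```python
# Hypothetical sketch: convert labelImg/YOLO-format annotations to pixel boxes
# and pair RGB and NIR frames that share a frame index (naming is assumed).
from pathlib import Path

IMG_W, IMG_H = 1920, 1080  # frame size reported for each sensor

def load_yolo_boxes(label_file: Path, img_w: int = IMG_W, img_h: int = IMG_H):
    """Convert one annotation file to pixel-coordinate bounding boxes."""
    boxes = []
    for line in label_file.read_text().splitlines():
        cls, xc, yc, w, h = line.split()
        xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
        boxes.append({
            "label": cls,  # e.g., the RGB-PG or NIR-PG class index
            "bbox": ((xc - w / 2) * img_w, (yc - h / 2) * img_h,
                     (xc + w / 2) * img_w, (yc + h / 2) * img_h),
        })
    return boxes

if __name__ == "__main__":
    for rgb_label in sorted(Path("labels/rgb").glob("*.txt")):
        nir_label = Path("labels/nir") / rgb_label.name
        if nir_label.exists():
            print(rgb_label.stem,
                  load_yolo_boxes(rgb_label),
                  load_yolo_boxes(nir_label))
```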

TABLE 2.

Description of patients enrolled in this study.

Patient | Age | Sex | Surgery type | Primary diagnosis | Number of PGs detected

Deep learning algorithm applied:
1 | 40 | F | Open completion thyroidectomy | Thyroid malignant neoplasm | 4
2 | 71 | F | Open parathyroidectomy | Hypercalcemia | 4
3 | 53 | F | Open total thyroidectomy | Thyroid nodule | 4
4 | 75 | F | Open total thyroidectomy | Thyroid tumor | 4
5 | 75 | F | Open thyroid lobectomy and isthmusectomy, right | Thyroid malignant neoplasm | 2
6 | 76 | M | Open parathyroidectomy | Hyperparathyroidism | 4

Deep learning algorithm not applied:
7 | 33 | F | Open total thyroidectomy | Thyroid nodule | 4
8 | 26 | F | Open total thyroidectomy | Thyroid malignant neoplasm | 4
9 | 54 | F | Open total thyroidectomy | Thyroid malignant neoplasm | 4
10 | 49 | F | Open total thyroidectomy | Thyroid tumor, benign | 4

FIGURE 3.

Example of a dual-red-green-blue (RGB)/near-infrared (NIR) image from excised thyroid tissue. The left panel shows the RGB (color) view and the right panel shows the NIR view. The parathyroid gland (circled in yellow) is brightly fluorescing.

2.5. Training protocol and convolutional neural network architecture

YOLO, well known for real-time object detection, was used as a baseline model that could potentially aid surgeons in localizing PGs intraoperatively. As depicted in Figure 4, the YOLOv5 network architecture consists of three parts: the backbone (CSPDarknet), the neck (PANet), and the head (YOLO layer). First, the input data is passed to CSPDarknet for feature extraction; the features are then fed to PANet for feature fusion; finally, the YOLO layer outputs the detection results, such as class, score, location, and size. Faster R-CNN was also trained to compare its performance with YOLOv5. The Faster R-CNN object detection network is composed of a feature extraction network, followed by a region proposal network that generates object proposals and a subnetwork that predicts the class of each proposal [37]; the feature extraction network is typically a pretrained CNN such as ResNet-50 or Inception v3. We used the PyTorch framework to train all models for 1500 epochs, which was sufficient to observe convergence.

FIGURE 4.

YOLOv5 architecture used in this study.

To address the limited size of the training data, we used transfer learning [38], in which a model's structure and parameters trained for one task are reused as the starting point for another task. The neural networks were initialized with weights pretrained on the COCO dataset [39]. We collected not only clean images but also low-quality, blurry images from the videos to make our model robust to motion artifacts during surgery. In addition, data augmentation was used to improve network accuracy by randomly transforming the original data during training; data augmentation was not applied to the validation and test data. To account for low-resolution images in real-time use, the training data were augmented by blurring each image with a probability of 0.1, as sketched below.
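The blur augmentation described above can be expressed with standard torchvision transforms; this is a minimal sketch in which the kernel size and sigma range are our assumptions, since the text specifies only the 0.1 blur probability.

```python
# Training-time augmentation sketch: blur an image with probability 0.1.
# Kernel size and sigma range are assumed values, not taken from the paper.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomApply(
        [transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))],
        p=0.1,  # blur probability reported in the text
    ),
    transforms.ToTensor(),
])

# Validation and test data are converted without augmentation, as noted above.
eval_transform = transforms.ToTensor()

# Example usage (illustrative file name):
#   from PIL import Image
#   x = train_transform(Image.open("frame_0001.png").convert("RGB"))
```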

To evaluate the trained object detectors, mean average precision (mAP) at different intersection-over-union (IoU) thresholds (mAP.5:.95), a standard object detection metric, was used. The average precision (AP) is the area under the precision-recall curve, which is constructed by computing the precision at each confidence threshold. The AP therefore provides a single number that incorporates the ability of the detector to make correct classifications (precision) and to find all relevant objects (recall). mAP.5:.95 denotes the AP averaged over IoU thresholds from 0.5 to 0.95 in steps of 0.05.
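In symbols, using the standard COCO-style definition (stated here for clarity):

$$\mathrm{AP}_t = \int_0^1 p_t(r)\,dr, \qquad \mathrm{mAP}_{0.5:0.95} = \frac{1}{10}\sum_{t \in \{0.50,\,0.55,\,\ldots,\,0.95\}} \mathrm{AP}_t,$$

where $p_t(r)$ is the precision at recall $r$ when a detection is counted as correct only if its IoU with a ground-truth box is at least $t$.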

3. RESULTS AND DISCUSSION

3.1. Ex-vivo and in situ PG autofluorescence imaging

The imager was successfully used intraoperatively by a surgeon to assist in identifying PGs in the 10 patients of this early feasibility study. Because the collimated excitation light was delivered coaxially with the image sensors, the imager provided a consistent level of light to the target PG regardless of camera orientation. The degree of autofluorescence signal varied with surgery type, disease, and specimen. All resected tissues were confirmed by frozen section analysis.

The imager was able to confirm the surgeon's judgment in real time during parathyroidectomy cases after the specimen was resected. Specimens were resected as indicated by the surgeon, and all resected specimens were sent for frozen section. During one total thyroidectomy, the imager detected an accidentally removed PG on a resected pathological thyroid gland, as shown in Figure 5. In this case, the PG was autotransplanted into the patient, potentially mitigating the risk of postoperative hypocalcemia [40]. The device clearly showed the PG autofluorescence signals with high specificity (Figure 6a and 6b). Even deep-seated tissues could be imaged without light loss, and the electrically tunable lens enabled high-resolution imaging. In several cases, PG autofluorescence was detected even before the surgeon dissected the PG, while it was still covered by fat and/or fascia. The imager was also tested on tissues confirmed to be lymph nodes by the surgeon and by frozen section analysis, and, as expected, no autofluorescence signal was detected.

FIGURE 5.

Example of an ex vivo parathyroid gland (PG) specimen showing autofluorescence with its prediction value. The orange box indicates that our YOLOv5 algorithm detected a PG with a prediction value of 0.92; the prediction value ranges from 0 to 1 and reflects how confident the model is in that prediction.

FIGURE 6.

In situ imaging. (a) Dual red-green-blue (RGB)/near-infrared (NIR) screen showing parathyroid gland (PG) autofluorescence detected before tissue exposure during parathyroidectomy. (b) Forceps holding the identified PG in the RGB view (left) and the bright PG autofluorescence signal compared with surrounding tissues (right). Yellow circles indicate the PG.

Currently available commercial devices have been limited to NIRAF-only detection, which is qualitative and error-prone depending on the device used, the focus, and whether PGs are embedded in subcapsular thyroid tissue, resulting in false negative detections [6]. Compared with the single-mode NIR imaging devices already on the market, the advantages of our optical design include 1) coaxial illumination well suited to small-incision field imaging, with wide-field imaging also possible by increasing the working distance; 2) an electrically tunable lens that can shift the focal plane between a small field of view at higher resolution and a large field of view at reduced resolution; and 3) dual-RGB/NIR imaging, which could permit co-localization or even a fused imaging mode that highlights PGs with a pseudocolor overlay on the color screen (see the sketch below).
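The fused imaging mode mentioned in point 3 is a potential extension rather than a feature of the current prototype; the sketch below illustrates, under that assumption, how a pseudocolor NIR overlay could be composited onto the co-registered RGB frame using OpenCV (the threshold, blending weight, and colormap are arbitrary choices for illustration).

```python
# Illustrative pseudocolor overlay of an NIR autofluorescence frame onto the RGB frame.
# Assumes the two frames are already co-registered and have the same 1920 x 1080 size.
import cv2
import numpy as np

def overlay_nir(rgb_bgr: np.ndarray, nir_gray: np.ndarray,
                threshold: int = 50, alpha: float = 0.6) -> np.ndarray:
    """Blend a colormapped NIR signal onto the color image where NIR exceeds a threshold."""
    nir_color = cv2.applyColorMap(nir_gray, cv2.COLORMAP_JET)   # pseudocolor rendering of NIR
    blended = cv2.addWeighted(rgb_bgr, 1.0 - alpha, nir_color, alpha, 0.0)
    out = rgb_bgr.copy()
    out[nir_gray > threshold] = blended[nir_gray > threshold]   # overlay only where signal is present
    return out

if __name__ == "__main__":
    rgb = cv2.imread("rgb_frame.png")                           # illustrative file names
    nir = cv2.imread("nir_frame.png", cv2.IMREAD_GRAYSCALE)
    if rgb is not None and nir is not None:
        cv2.imwrite("fused_frame.png", overlay_nir(rgb, nir))
```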

3.2. Deep learning-based PG detection in situ

To compare detection performance, we trained YOLOv5 and Faster R-CNN independently on NIR, RGB, and dual-RGB/NIR images (Figure 7). The networks trained with dual-RGB/NIR images had the best performance, while those trained with RGB images alone had the worst. This result is expected because more information can be extracted from dual-RGB/NIR images, and it also suggests that autofluorescence imaging outperforms RGB imaging for PG detection. Finally, the best result used transfer learning of YOLOv5 together with data augmentation of the dual-RGB/NIR images, yielding a 5-fold cross-validation mean average precision (mAP.5:.95) of 94.7% with a 19.5-millisecond inference time (51.28 FPS), which exceeds the common real-time standard of 30 FPS.
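The benchmarking protocol behind the reported 19.5 ms per frame is not described in the paper; the generic sketch below shows one way per-frame latency (and the corresponding FPS, here 1/0.0195 s ≈ 51.3) could be measured for any PyTorch detector.

```python
# Generic latency benchmark for a PyTorch detector (illustrative, not the authors' script).
import time
import torch

def mean_latency_ms(model, example_input, n_warmup: int = 10, n_runs: int = 100) -> float:
    """Return the mean inference time per frame in milliseconds."""
    model.eval()
    with torch.no_grad():
        for _ in range(n_warmup):            # warm-up iterations are excluded from timing
            model(example_input)
        start = time.perf_counter()
        for _ in range(n_runs):
            model(example_input)
        elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / n_runs

# Example with the public YOLOv5s weights (the trained PG weights are not public):
#   model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
#   print(mean_latency_ms(model, torch.zeros(1, 3, 640, 640)), 'ms/frame')
```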

FIGURE 7.

Computer-aided detection of the parathyroid gland (PG) in situ using the YOLOv5 deep learning technique. The red and orange boxes contain prediction values of 0.81 for the red-green-blue (RGB) view and 0.96 for the near-infrared (NIR) view, respectively (confidence between 0 and 1).

In this study, we employed state-of-the-art CNN-based detection methods to locate PGs and systematically evaluated them on our newly collected dataset. We established benchmarks for this dataset, which can benefit other researchers working in the same area. Across our experiments, YOLOv5 achieved the best mean average precision when trained with both RGB and NIR images. Furthermore, YOLOv5 outperformed Faster R-CNN in both mean average precision and FPS (Table 3). We also compared different combinations of image modalities; overall, dual RGB/NIR performed better than either single modality. However, an overfitting problem remains because of the variety of PG shapes and the lack of data covering this diversity. In further studies, we will add more data and apply regularization methods, such as batch normalization and an increased dropout rate on the RGB image data, to mitigate overfitting. CNN-based detection models that can distinguish parathyroid glands from fat and lymph nodes will also be studied further.

TABLE 3.

The mean average precision (mAP) of the trained models on each image dataset using YOLOv5 and Faster R-CNN. The highest value within each model is the dual-modality result; note also that YOLOv5 trained on NIR alone achieves a higher mAP than Faster R-CNN with any image type.

Model | Inference time, ms (1/FPS) | Image type | mAP.5:.95
YOLOv5 | 19.5 ms (1/51.28 FPS) | Dual | 0.947
YOLOv5 | 19.5 ms (1/51.28 FPS) | RGB | 0.842
YOLOv5 | 19.5 ms (1/51.28 FPS) | NIR | 0.913
Faster R-CNN | 71.4 ms (1/13.99 FPS) | Dual | 0.898
Faster R-CNN | 71.4 ms (1/13.99 FPS) | RGB | 0.742
Faster R-CNN | 71.4 ms (1/13.99 FPS) | NIR | 0.856

3.3. Limitations of the current study and future direction

The current study is limited by its small sample size, a consequence of the difficulty of acquiring a large PG medical image dataset. Because of the limited training data, our model may not perform as well on false positive and false negative cases as it does on true positive cases; thus, caution must be applied. However, the dataset can be expanded over time, and more data will further improve the accuracy and specificity of PG classification. To assess the robustness of the models, we applied 5-fold cross-validation during the experiments; generalization can be further enhanced by enlarging the dataset in future studies. The current multi-modality framework reached the limits of our computational resources, but we plan to combine all three modalities for model training in future work. Nonetheless, our research should be constructive for combining and selecting appropriate image modalities for PG image segmentation.
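The paper does not describe how frames were assigned to the 5 folds; because many frames come from the same patient, one reasonable (assumed) strategy is to group frames by patient so that no patient contributes to both the training and validation folds, as sketched below with scikit-learn's GroupKFold.

```python
# Patient-grouped 5-fold split so that frames from one patient never appear in both
# the training and validation folds (illustrative; frame and patient IDs are dummies).
from sklearn.model_selection import GroupKFold

frame_ids = [f"frame_{i:04d}" for i in range(60)]    # stand-in for the 1,287 frames
patient_ids = [i % 6 + 1 for i in range(60)]         # stand-in for patients 1-6

gkf = GroupKFold(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(gkf.split(frame_ids, groups=patient_ids)):
    train_patients = sorted({patient_ids[i] for i in train_idx})
    val_patients = sorted({patient_ids[i] for i in val_idx})
    print(f"fold {fold}: train patients {train_patients}, validation patients {val_patients}")
```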

Future studies will focus on integrating inference and real-time operation with improved PG detection and classification accuracy. We anticipate implementing the improved neural network model on the dual-RGB/NIR camera system for real-time, deep learning-assisted PG identification. Additional modifications to the network can be implemented by extending the use of the co-learn module at each skip-connection layer so that it can determine the importance to be placed on each modality at those layers. Because PG sizes vary, leveraging designs such as the extended inception modules proposed by Dolz et al. [41] may boost performance, as they can learn from different receptive fields. Lastly, applying color augmentation is recommended in future studies, particularly on the RGB images, as that modality shows larger variance in brightness and contrast.

4. CONCLUSION

In this work, we have developed a clinically viable prototype system that combines co-axial excitation (785 nm) with dual red-green-blue (RGB)/near-infrared (NIR) paired imaging. The system permits a broad range of fields of view (1 to 15 cm) depending on the working distance (10 to 60 cm). In this early feasibility study on human patients (N=10), the system was capable of localizing PGs both in situ and ex vivo intraoperatively. Computer-aided deep learning algorithms were trained on 1,287 images from 6 patients and achieved a mean average precision of 94.7% and a 19.5-millisecond processing time per PG detection. Our results support the effectiveness of our approach despite the small sample size. This novel approach has the potential to improve specificity in the identification of PGs during parathyroid and thyroid surgeries. Additional data are needed to verify whether there may be an impact on clinical patient outcomes.

Supplementary Material

Video S1: Example of the ex vivo parathyroid gland specimen showing autofluorescence (2.9 MB, mp4).

Video S2: In situ imaging of the parathyroid gland autofluorescence detection before tissue exposure during parathyroidectomy (2.8 MB, mp4).

Video S3: Computer-aided detection of a parathyroid gland in situ using the YOLOv5 deep learning technique (2.7 MB, mp4).

Video S4: Computer-aided detection of a parathyroid gland in situ using the Faster R-CNN deep learning technique (2.2 MB, mp4).

Supporting figure: (a) the system and (b) the data collection and deep learning process.

ACKNOWLEDGMENTS

Research reported in this publication was supported by the National Institute of Biomedical Imaging and Bioengineering and the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health under Award Numbers R43EB030874, R41EB032284, and R41DK131650, the Johns Hopkins University FastForward U 2021 Summer MedTech Award, and Children's National Hospital SPF44215PID30005967. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors gratefully acknowledge Jae H. Choi, B.S. (Neuroscience and English, Johns Hopkins University) for his support in proofreading and editing this manuscript. The authors also thank InTheSmart Co. Ltd. for the device loan for this work. Yoseph Kim, Hun Chan Lee, and Jongchan Kim contributed equally to this work.

Abbreviations:

RGB: red-green-blue
NIR: near-infrared
NIRAF: near-infrared autofluorescence
PG: parathyroid gland
PTH: parathyroid hormone
ICG: indocyanine green
YOLO: you-only-look-once
CNN: convolutional neural network
hypo-PT: hypoparathyroidism
FDA: Food and Drug Administration
OR: operating room
CNR: contrast-to-noise ratio
mAP: mean average precision
IoU: intersection over union
AP: average precision
FPS: frames per second

Footnotes

FINANCIAL DISCLOSURE

E. Oh and J. Cha have ownership interests in Optosurgical, LLC.

CONFLICT OF INTEREST

Y. Kim and E. Oh were employees when the study was carried out.

J. Cha owns shares in Optosurgical, LLC.

SUPPORTING INFORMATION

Additional supporting information is available in the online version of this article at the publisher’s website or from the author.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available from the corresponding author upon reasonable request.

REFERENCES

[1] Bhattacharyya N, Fried MP, Assessment of the morbidity and complications of total thyroidectomy, Arch Otolaryngol Head Neck Surg 128 (2002) 389–392.
[2] Herranz-Gonzalez J, Gavilan J, Matinez-Vidal J, Gavilan C, Complications following thyroid surgery, Arch Otolaryngol Head Neck Surg 117 (1991) 516–518.
[3] Zahedi Niaki N, Singh H, Moubayed SP, Leboeuf R, Tabet J-C, Christopoulos A, Ayad T, Olivier M-J, Guertin L, Bissada E, The Cost of Prolonged Hospitalization due to Postthyroidectomy Hypocalcemia: A Case-Control Study, Advances in Endocrinology 2014 (2014) 954194.
[4] Wang TS, Cheung K, Roman SA, Sosa JA, To supplement or not to supplement: a cost-utility analysis of calcium and vitamin D repletion in patients after thyroidectomy, Ann Surg Oncol 18 (2011) 1293–1299.
[5] Shinden Y, Nakajo A, Arima H, Tanoue K, Hirata M, Kijima Y, Maemura K, Natsugoe S, Intraoperative Identification of the Parathyroid Gland with a Fluorescence Detection System, World J Surg 41 (2017) 1506–1512.
[6] Hartl D, Raïs O, Guerlain J, Breuskin I, Abbaci M, Laplace-Builhé C, Intraoperative Parathyroid Gland Identification Using Autofluorescence: Pearls and Pitfalls, World Journal of Surgery and Surgical Research 2 (2019).
[7] Edafe O, Antakia R, Laskar N, Uttley L, Balasubramanian SP, Systematic review and meta-analysis of predictors of post-thyroidectomy hypocalcaemia, Br J Surg 101 (2014) 307–320.
[8] Al-Qurayshi Z, Robins R, Hauch A, Randolph GW, Kandil E, Association of Surgeon Volume With Outcomes and Cost Savings Following Thyroidectomy: A National Forecast, JAMA Otolaryngol Head Neck Surg 142 (2016) 32–39.
[9] Fanget F, Demarchi MS, Maillard L, El Boukili I, Gerard M, Decaussin M, Borson-Chazot F, Lifante JC, Hypoparathyroidism: Consequences, economic impact, and perspectives. A case series and systematic review, Ann Endocrinol (Paris) (2021).
[10] Mittendorf EA, McHenry CR, Complications and sequelae of thyroidectomy and an analysis of surgeon experience and outcome, Surg Technol Int 12 (2004) 152–157.
[11] Gourin CG, Tufano RP, Forastiere AA, Koch WM, Pawlik TM, Bristow RE, Volume-based trends in thyroid surgery, Arch Otolaryngol Head Neck Surg 136 (2010) 1191–1198.
[12] Kandil E, Noureldine SI, Abbas A, Tufano RP, The impact of surgical volume on patient outcomes following thyroid surgery, Surgery 154 (2013) 1346–1352; discussion 1352–1353.
[13] Meltzer C, Klau M, Gurushanthaiah D, Tsai J, Meng D, Radler L, Sundang A, Surgeon volume in thyroid surgery: Surgical efficiency, outcomes, and utilization, Laryngoscope 126 (2016) 2630–2639.
[14] Paras C, Keller M, White L, Phay J, Mahadevan-Jansen A, Near-infrared autofluorescence for the detection of parathyroid glands, J Biomed Opt 16 (2011) 067012.
[15] McWade MA, Paras C, White LM, Phay JE, Solorzano CC, Broome JT, Mahadevan-Jansen A, Label-free intraoperative parathyroid localization with near-infrared autofluorescence imaging, J Clin Endocrinol Metab 99 (2014) 4574–4580.
[16] Yu HW, Chung JW, Yi JW, Song RY, Lee JH, Kwon H, Kim SJ, Chai YJ, Choi JY, Lee KE, Intraoperative localization of the parathyroid glands with indocyanine green and Firefly(R) technology during BABA robotic thyroidectomy, Surg Endosc 31 (2017) 3020–3027.
[17] Abbaci M, De Leeuw F, Breuskin I, Casiraghi O, Lakhdar AB, Ghanem W, Laplace-Builhe C, Hartl D, Parathyroid gland management using optical technologies during thyroidectomy or parathyroidectomy: A systematic review, Oral Oncol 87 (2018) 186–196.
[18] Squires MH, Jarvis R, Shirley LA, Phay JE, Intraoperative Parathyroid Autofluorescence Detection in Patients with Primary Hyperparathyroidism, Ann Surg Oncol 26 (2019) 1142–1148.
[19] Kose E, Rudin AV, Kahramangil B, Moore E, Aydin H, Donmez M, Krishnamurthy V, Siperstein A, Berber E, Autofluorescence imaging of parathyroid glands: An assessment of potential indications, Surgery 167 (2020) 173–179.
[20] Thomas G, McWade MA, Nguyen JQ, Sanders ME, Broome JT, Baregamian N, Solorzano CC, Mahadevan-Jansen A, Innovative surgical guidance for label-free real-time parathyroid identification, Surgery 165 (2019) 114–123.
[21] Kim SW, Lee HS, Ahn YC, Park CW, Jeon SW, Kim CH, Ko JB, Oak C, Kim Y, Lee KD, Near-Infrared Autofluorescence Image-Guided Parathyroid Gland Mapping in Thyroidectomy, J Am Coll Surg 226 (2018) 165–172.
[22] Oh E, Lee HC, Kim Y, Ning B, Lee SY, Cha J, Kim WW, A pilot feasibility study to assess vascularity and perfusion of parathyroid glands using a portable hand-held imager, Lasers in Surgery and Medicine 54 (2022) 399–406.
[23] Oh E, Kim Y, Ning B, Lee SY, Kim WW, Cha J, Development of a non-invasive, dual-sensor handheld imager for intraoperative preservation of parathyroid glands, Annu Int Conf IEEE Eng Med Biol Soc 2021 (2021) 7408–7411.
[24] Solorzano CC, Thomas G, Berber E, Wang TS, Randolph GW, Duh QY, Triponez F, Current state of intraoperative use of near infrared fluorescence for parathyroid identification and preservation, Surgery (2020).
[25] Kim Y, Kim SW, Lee KD, Ahn YC, Phase-sensitive fluorescence detector for parathyroid glands during thyroidectomy: A preliminary report, J Biophotonics 13 (2020) e201960078.
[26] Wang Q, Xiangli W, Chen X, Zhang J, Teng G, Cui X, Idrees BS, Wei K, Primary study of identification of parathyroid gland based on laser-induced breakdown spectroscopy, Biomed Opt Express 12 (2021) 1999–2014.
[27] Ignat M, Lindner V, Vix M, Marescaux J, Mutter D, Intraoperative Probe-Based Confocal Endomicroscopy to Histologically Differentiate Thyroid From Parathyroid Tissue Before Resection, Surg Innov 26 (2019) 141–148.
[28] Barberio M, Maktabi M, Gockel I, Rayes N, Jansen-Winkeln B, Köhler H, Rabe SM, Seidemann L, Takoh JP, Diana M, Neumuth T, Chalopin C, Hyperspectral based discrimination of thyroid and parathyroid during surgery, Current Directions in Biomedical Engineering 4 (2018) 399–402.
[29] Squires MH, Shirley LA, Shen C, Jarvis R, Phay JE, Intraoperative Autofluorescence Parathyroid Identification in Patients With Multiple Endocrine Neoplasia Type 1, JAMA Otolaryngol Head Neck Surg 145 (2019) 897–902.
[30] Thomas G, McWade MA, Paras C, Mannoh EA, Sanders ME, White LM, Broome JT, Phay JE, Baregamian N, Solorzano CC, Mahadevan-Jansen A, Developing a Clinical Prototype to Guide Surgeons for Intraoperative Label-Free Identification of Parathyroid Glands in Real Time, Thyroid 28 (2018) 1517–1531.
[31] Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM, Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning, IEEE Trans Med Imaging 35 (2016) 1285–1298.
[32] Redmon J, Divvala S, Girshick R, Farhadi A, You Only Look Once: Unified, Real-Time Object Detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788.
[33] Mori Y, Kudo SE, Berzin TM, Misawa M, Takeda K, Computer-aided diagnosis for colonoscopy, Endoscopy 49 (2017) 813–819.
[34] Repici A, Badalamenti M, Maselli R, Correale L, Radaelli F, Rondonotti E, Ferrara E, Spadaccini M, Alkandari A, Fugazza A, Anderloni A, Galtieri PA, Pellegatta G, Carrara S, Di Leo M, Craviotto V, Lamonaca L, Lorenzetti R, Andrealli A, Antonelli G, Wallace M, Sharma P, Rosch T, Hassan C, Efficacy of Real-Time Computer-Aided Detection of Colorectal Neoplasia in a Randomized Trial, Gastroenterology 159 (2020) 512–520 e517.
[35] Demarchi MS, Karenovics W, Bedat B, De Vito C, Triponez F, Autofluorescence pattern of parathyroid adenomas, BJS Open 5 (2021).
[36] Akbulut S, Erten O, Kim YS, Gokceimam M, Berber E, Development of an algorithm for intraoperative autofluorescence assessment of parathyroid glands in primary hyperparathyroidism using artificial intelligence, Surgery 170 (2021) 454–461.
[37] Ren S, He K, Girshick R, Sun J, Faster R-CNN: towards real-time object detection with region proposal networks, Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, MIT Press, Montreal, Canada, 2015, pp. 91–99.
[38] Torrey L, Shavlik J, Walker T, Maclin R, Transfer Learning via Advice Taking, Advances in Machine Learning I, Studies in Computational Intelligence, Springer, Berlin, Heidelberg, 2010.
[39] Lin T-Y, Maire M, Belongie SJ, Hays J, Perona P, Ramanan D, Dollár P, Zitnick CL, Microsoft COCO: Common Objects in Context, ECCV, 2014.
[40] Oran E, Yetkin G, Mihmanli M, Celayir F, Aygun N, Coruh B, Peker E, Uludag M, The risk of hypocalcemia in patients with parathyroid autotransplantation during thyroidectomy, Ulus Cerrahi Derg 32 (2016) 6–10.
[41] Dolz J, Desrosiers C, Ayed IB, IVD-Net: Intervertebral disc localization and segmentation in MRI with a multi-modal UNet, arXiv:1811.08305 (2018).
