Journal of Imaging Informatics in Medicine. 2025 Apr 30;39(1):242–249. doi: 10.1007/s10278-025-01473-y

Landmark-Based Pancreas Sub-region Segmentation in CT

Yan Zhuang 1,2,#, Abhinav Suri 3,#, Tejas Sudharshan Mathai 3, Brandon Khoury 4, Ronald M Summers 3
PMCID: PMC12921005  PMID: 40307593

Abstract

CT-based imaging biomarkers can be derived from the pancreas for detecting pancreatic pathologies. However, current approaches using full pancreas segmentations are unable to provide region-specific biomarkers that are crucial in predicting disease severity for many conditions, such as pancreatic adenocarcinomas. This study aims to develop an automated 3D tool to detect and segment the pancreatic sub-regions (the head, body, and tail) on CT volumes. This retrospective study used a subset of 549 CT volumes from the publicly available TotalSegmentator (TS) dataset. The dataset was randomly split into training (n = 440) and testing (n = 109) subsets. Additionally, 30 CT volumes from the TCIA NIH Pancreas-CT dataset were used for external validation. A 3D full-resolution nnUNet model was trained with a custom loss function to detect the landmarks corresponding to the pancreas’s head, body, and tail. Based on the detected landmarks, a post-processing algorithm generated the sub-region segmentations. We evaluated the predicted segmentation against the ground truth masks using the Dice similarity coefficient (DSC) and Normalized Surface Distance (NSD). The mean±std of DSC (%) and NSD (%) for the head, body, and tail were 90.8±4.1 and 94.5±4.6, 83.3±7.6 and 87.2±7.4, and 85.1±9.8 and 89.7±8.8, respectively. On the external dataset, the mean±std of DSC and NSD for the head, body, and tail were 83.4±2.6 and 89.7±4.1, 79.4±5.9 and 88.5±6.0, and 81.2±5.5 and 91.4±5.3, respectively. The proposed model can accurately segment three pancreas sub-regions and enables imaging biomarkers to be derived from each sub-region and the pancreas as a whole.

Keywords: CT, Pancreas, Sub-region, Segmentation

Introduction

Pancreatic pathologies, such as diabetes, pancreatitis, and pancreatic cancer, are leading causes of morbidity and mortality worldwide [1–3]. While whole-organ segmentation of the pancreas can provide information about disease progression [4], morphological and appearance changes in individual regions of the pancreas may provide more detailed information related to disease severity and prognosis [5–7]. For example, patients with pancreatic cancers localized to the body or tail of the pancreas have better survival than those with cancers of the head [8]. Pancreatic ductal adenocarcinoma, one of the deadliest cancers, has more often metastasized when located in the body and tail [9]. In another study, the surface nodularity scores of the head, body, and tail differed significantly across gender and age groups [5]. The significance of pancreas sub-region segmentation has been increasingly recognized for monitoring pancreatic conditions [10]. Thus, automated segmentation of pancreatic sub-regions, such as the head, body, and tail, can enable further in-depth analysis.

Manually delineating pancreas sub-regions is time-consuming and laborious [5], yet, to the best of our knowledge, only a handful of prior works have explored automatic approaches to segment pancreatic sub-regions on imaging studies such as MRI [6] and CT [11]. Bagur et al. developed a template-based registration method to segment pancreas sub-regions on MRI Dixon sequences [6]. Javed et al. measured length and volumetric features of the pancreas to build a naive Bayes model that created pseudo-labels, which were then used to train a UNet for segmenting the pancreatic head, body, and tail [11]. However, their approach was evaluated only on a small dataset, and an open-source version of their models for testing on external datasets has not been released.

In this work, we propose a simple yet effective landmark-based method to automatically delineate the head, body, and tail sub-regions of the pancreas (Fig. 1). A 3D nnUNet model trained with a customized focal loss function detected three key landmarks in the head, body, and tail by framing the landmark detection problem as a proxy segmentation task. Based on these detected landmarks, a nearest-neighbor (NN) post-processing algorithm assigned each voxel in the pancreas mask the sub-region label of its closest landmark (Fig. 1). The segmentation masks for the head, body, and tail are the final outputs of the model. A comprehensive evaluation of the algorithm was performed internally and externally, and experimental results showed that the proposed method accurately segmented pancreatic sub-regions. Lastly, the model weights and annotations are publicly available.1 To the best of our knowledge, this is the first publicly available pancreas sub-region segmentation model for CT.

Fig. 1.

The proposed pipeline takes abdominal CT volumes as input. The first step is to detect three key landmarks via a 3D nnUNet model and to derive the full pancreas segmentation via TotalSegmentator. Then, a post-processing algorithm assigns each voxel in the full pancreas segmentation to the closest landmark to obtain sub-region segmentations for the head, body, and tail. The figure is best viewed in color

Materials and Methods

Patient Sample

The public TotalSegmentator (TS) dataset [12], consisting of 1230 patients, was used to train and evaluate the proposed model. Of these, 681 subjects were excluded because part or all of the pancreas was not included in the CT volume. The final dataset consisted of 549 patients (549 CT scans), randomly split into training (n = 440) and testing (n = 109) subsets. Figure 2 shows the inclusion and exclusion criteria used in the study. An additional 30 CT volumes from the TCIA NIH Pancreas-CT dataset were randomly selected for external validation [13].

Fig. 2.

Standards for Reporting of Diagnostic Accuracy (STARD) chart shows the inclusion and exclusion criteria for the TotalSegmentator dataset used in this work

Annotation

The pancreas consists of five main anatomical regions: the uncinate process, head, neck, body, and tail [14, 15]. However, it is difficult to accurately distinguish the neck from the body and the uncinate process from the head on CT imaging. After consultation with a senior board-certified radiologist with 30+ years of experience, and following prior literature [6, 11], the uncinate process and head were combined into a single class, and the neck was combined with the body into a unified class. To separate the head and body, the superior mesenteric artery (SMA) and superior mesenteric vein (SMV) were used as reference points to aid annotation: the pancreatic head lies to the right of the SMA and SMV, and the body lies to the left [15]. Then, per prior literature, we separated the body and tail regions as follows [16]. A vertical line was drawn through the rightmost tip of the left kidney; where that line intersects the posterior margin of the pancreas, the body-tail boundary is drawn perpendicular to the axis of the pancreas. Figure 3a and b show the guidelines for separating the head, body, and tail.
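As a toy illustration of the head/body guideline (not the annotation software actually used), the split can be expressed as a left/right test of each pancreas voxel against the SMA/SMV center; the coordinate convention and the example center coordinate below are assumptions:

```python
import numpy as np

def split_head_from_body_tail(pancreas_coords, sma_smv_center_x):
    """Toy head vs. body+tail split: voxels to the patient's right of the
    SMA/SMV center belong to the head. Assumes the x axis increases toward
    the patient's left (a common radiological convention)."""
    pancreas_coords = np.asarray(pancreas_coords)
    is_head = pancreas_coords[:, 0] < sma_smv_center_x
    return pancreas_coords[is_head], pancreas_coords[~is_head]

# Hypothetical (x, y, z) voxel coordinates and SMA/SMV center
head, body_tail = split_head_from_body_tail([(10, 5, 5), (40, 5, 5)],
                                            sma_smv_center_x=25)
```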

Fig. 3.

a The annotation guideline to separate the head and body. The superior mesenteric artery (SMA) and vein (SMV) were used to divide the head and the body; namely, the pancreatic head lies to the right of the center of the SMA and SMV, and the body lies to the left. b The annotation guideline to separate the body and tail. The rightmost tip of the left kidney was used as a reference to divide the body and the tail. Right (R), left (L), posterior (P), and anterior (A) directions are labeled. c An example of the pancreas head, body, and tail landmarks and their 3D locations in the pancreas. The figure is best viewed in color

One postdoctoral research fellow with two years of experience in abdominal radiology and one senior medical student labeled the three pancreatic landmarks for all 549 scans. Figure 3c shows an example of the pancreas head, body, and tail landmarks and their 3D visualization. The annotators located the approximate centroid of each sub-region and marked those locations with a sphere using the public ITK-SNAP tool [17]; specifically, a sphere with a radius of 6 voxels was used to indicate each landmark. The aforementioned senior radiologist verified the quality of the annotations in 50 randomly selected volumes. Then, the head, body, and tail sub-region annotations for the 109 testing volumes and 30 external validation volumes were marked to evaluate the final segmentation results. Likewise, the sub-region annotations for 50 random volumes were verified by two experts (a senior board-certified radiologist with 30+ years of experience and a second-year radiology resident).
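A landmark "sphere" of this kind can also be rasterized programmatically; the following NumPy sketch (an illustration, not the ITK-SNAP internals) paints a radius-6 sphere of a given label around a centroid:

```python
import numpy as np

def add_landmark_sphere(mask, center, label, radius=6):
    """Paint a sphere of `radius` voxels around a landmark centroid
    (z, y, x) into an integer label volume."""
    zz, yy, xx = np.ogrid[:mask.shape[0], :mask.shape[1], :mask.shape[2]]
    dist2 = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
    mask[dist2 <= radius ** 2] = label
    return mask

# Hypothetical centroids for two of the three landmarks
vol = np.zeros((64, 64, 64), dtype=np.uint8)
vol = add_landmark_sphere(vol, (32, 20, 20), label=1)  # head
vol = add_landmark_sphere(vol, (32, 32, 32), label=2)  # body
```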

Methods

The pancreatic sub-region segmentation model takes a 3D CT volume as the input and predicts the corresponding segmentation masks for the pancreas's head, body, and tail (Fig. 1). Specifically, the pancreatic landmarks were detected via a proxy segmentation task: a standard 3D full-resolution nnUNet [18] was trained with a customized loss function [19] to segment the three landmark "spheres". In our preliminary experiments, the nnUNet was unable to learn with the standard (default) Dice and cross-entropy loss due to the small size of the landmarks. To address the class imbalance between the foreground (landmarks) and the background (everything else), a focal loss term was added, which penalized the model when it misclassified the foreground landmarks. The final loss was a weighted combination of the Dice and focal losses: $L = w_0 L_{\mathrm{Dice}} + w_1 L_{\mathrm{Focal}}$, where $L_{\mathrm{Dice}} = 1 - \frac{2\, y \cdot \hat{y}}{y + \hat{y}}$ is the Dice term and $L_{\mathrm{Focal}} = -\alpha (1 - y \cdot \hat{y})^{\gamma} \log(y \cdot \hat{y})$ is the focal term; $y$ is the ground truth, $\hat{y}$ is the prediction, and $w_0$ and $w_1$ are the corresponding weighting factors. In parallel, the whole-pancreas segmentation was obtained with the TotalSegmentator tool. Finally, a nearest-neighbor algorithm assigned each voxel in the whole-pancreas segmentation to the closest landmark, yielding the sub-region segmentations for the head, body, and tail.
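The nearest-neighbor assignment step can be sketched as follows. This is an illustrative NumPy version, not the released post-processing code; voxel coordinates are assumed isotropic (with anisotropic spacing, coordinates should first be scaled to mm):

```python
import numpy as np

def assign_subregions(pancreas_mask, landmarks):
    """Assign every pancreas voxel the label of its nearest landmark
    (1 = head, 2 = body, 3 = tail). `landmarks` holds three (z, y, x)
    landmark centroids, in the same voxel coordinate system as the mask."""
    coords = np.argwhere(pancreas_mask)                  # (N, 3) voxel coords
    landmarks = np.asarray(landmarks, dtype=float)       # (3, 3)
    # Euclidean distance from each voxel to each landmark, then argmin.
    d = np.linalg.norm(coords[:, None, :] - landmarks[None, :, :], axis=2)
    labels = d.argmin(axis=1) + 1
    out = np.zeros_like(pancreas_mask, dtype=np.uint8)
    out[tuple(coords.T)] = labels
    return out

# Toy 1D "pancreas" with three hypothetical landmarks along x
mask = np.ones((1, 1, 10), dtype=np.uint8)
sub = assign_subregions(mask, [(0, 0, 1), (0, 0, 5), (0, 0, 9)])
```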

In our experiments, w0 and w1 were set to 1 and 5, respectively, and the focal loss parameters α and γ were set to 0.25 and 15, respectively. All other hyperparameters were set to the default nnUNet configuration. The model was trained with five-fold cross-validation for 500 epochs, and an ensemble of the 5 models was used at test time. The checkpoints that achieved the highest validation Dice were used for inference.
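Under these settings, the combined objective might be sketched as below. The focal term here follows the standard binary focal-loss formulation (probability of the true class), which the compact notation in the text appears to abbreviate; this NumPy transcription is an illustration, not the authors' nnUNet trainer code:

```python
import numpy as np

def dice_focal_loss(y, p, w0=1.0, w1=5.0, alpha=0.25, gamma=15.0, eps=1e-7):
    """Weighted Dice + focal objective with the paper's hyperparameters.
    `y` is the binary ground truth, `p` the predicted foreground probability."""
    y = y.astype(float).ravel()
    p = np.clip(p.astype(float).ravel(), eps, 1 - eps)
    # Soft Dice loss over the flattened volume
    dice = 1.0 - 2.0 * (y * p).sum() / (y.sum() + p.sum() + eps)
    # Standard binary focal loss: p_t is the probability of the true class
    p_t = np.where(y == 1, p, 1 - p)
    focal = (-alpha * (1 - p_t) ** gamma * np.log(p_t)).mean()
    return w0 * dice + w1 * focal
```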

Metrics

Precision, sensitivity, and F1-score were calculated to evaluate the pancreatic landmark detection performance. The segmentation results were then evaluated against the ground truth masks using the Dice similarity coefficient (DSC) and Normalized Surface Distance (NSD) [20]. Additionally, an nnU-Net model was trained using 30 randomly selected patients with full sub-region annotations from the training set. Two-sided paired t-tests were performed to compare the DSC and NSD scores between the proposed and nnU-Net-based models.
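For reference, the per-case DSC used in these comparisons can be computed as in the sketch below (the NSD additionally requires surface extraction and a tolerance, and is omitted here):

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks, in percent."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 100.0 * 2.0 * inter / (pred.sum() + gt.sum() + eps)
```

The paired comparison between the two models can then be run with `scipy.stats.ttest_rel` on the per-case DSC (or NSD) vectors.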

To calculate precision, sensitivity, and F1-score, the following criteria determined a true positive (TP): (1) the landmark was predicted, and (2) the predicted and true landmarks were within 25 mm of each other (a threshold chosen based on the average thickness of the pancreatic head and body) [16]. The centroid of each landmark mask was used in all metrics. Undetected landmarks were counted as false negatives (FN), and any detected landmarks that failed to meet criteria (1) and (2) were false positives (FP). Additionally, the Euclidean (L2) distance between the predicted and true landmarks was reported in mm.
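These criteria amount to a simple per-landmark matching rule; a minimal sketch, assuming the centroids are already expressed in mm:

```python
import numpy as np

def match_landmarks(pred_centroids, true_centroids, thresh_mm=25.0):
    """Count TP/FP/FN for one landmark class across cases: a prediction is
    a true positive if its centroid lies within `thresh_mm` of the reference
    centroid; `None` marks an undetected landmark (a false negative)."""
    tp = fp = fn = 0
    for pred, true in zip(pred_centroids, true_centroids):
        if pred is None:
            fn += 1          # criterion (1) failed: landmark not predicted
        elif np.linalg.norm(np.asarray(pred) - np.asarray(true)) <= thresh_mm:
            tp += 1          # criteria (1) and (2) met
        else:
            fp += 1          # detected, but beyond the 25 mm threshold
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return precision, sensitivity, f1
```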

Results

Evaluation of Landmark Detection

Table 1 shows the pancreatic landmark detection results. The proposed model demonstrated higher sensitivity than precision for the body and tail because of false positive detections; severe pathological conditions and poor image quality contributed to these detection errors. The L2 distance error was highest for the pancreatic body and lowest for the head. The proposed algorithm successfully detected all head landmarks; however, 2 body and 4 tail landmarks went undetected, and 5 detected landmarks each for the body and tail failed to meet criteria (1) and (2) and were counted as false positives. Overall, the nnUNet reliably detected most of the landmarks of interest. Figure 4 shows detection results and ground truth annotations for the head, body, and tail landmarks, as well as their 3D visualizations.

Table 1.

Results of detecting pancreatic landmarks in the head, body, and tail

Total landmarks TP FP FN Precision Sensitivity F1-score L2 (mm)
Head 109 109 0 0 1.0 1.0 1.0 4.2 ± 2.4
Body 109 102 5 2 0.953 0.980 0.966 7.1 ± 4.3
Tail 109 100 5 4 0.952 0.961 0.956 6.5 ± 4.2

Fig. 4.

The detected pancreas head, body, and tail landmarks and the corresponding reference annotations. The full pancreas segmentation is shown in yellow. The bottom row shows the 3D visualization of the pancreas (yellow) and landmarks (colored dots). The figure is best viewed in color

Evaluation of Pancreatic Sub-Region Segmentation

On the TS testing set, we excluded six cases with landmark detection failures: two with undetected body landmarks and four with undetected tail landmarks (Table 1). The mean ± std of DSC (%) and NSD (%) of the proposed model for the head, body, and tail sub-region segmentations were 90.8±4.1 and 94.5±4.6, 83.3±7.6 and 87.2±7.4, and 85.1±9.8 and 89.7±8.8, respectively. For the nnU-Net-based model, they were 89.3±5.5 and 92.7±7.3, 82.4±9.1 and 86.2±8.7, and 83.0±8.7 and 87.9±8.6, respectively. The segmentation of the head achieved a higher DSC than that of the body and tail because delineating the boundary between body and tail is more challenging. Overall, the mean DSC and NSD of the proposed method significantly outperformed the nnU-Net-based model (p < 0.001). In terms of sub-region performance, there was a significant difference for the head (DSC: p = 0.001; NSD: p = 0.001), no significant difference for the body (DSC: p = 0.243; NSD: p = 0.223), and a mixed result for the tail (DSC: p = 0.032; NSD: p = 0.061). Figure 5 shows example segmentations for the head, body, and tail.

Fig. 5.

Examples of predicted segmentations for the head (red), body (green), and tail (blue), as well as the corresponding reference annotation. The bottom row shows the 3D visualization of the predicted and reference masks. The figure is best viewed in color

On the external validation dataset (TCIA NIH Pancreas-CT), the proposed model successfully detected all three landmarks in 28 of 30 CT volumes. Two cases, one with an undetected body landmark and one with an undetected tail landmark, were excluded from the sub-region segmentation evaluation. The mean±std of DSC and NSD for the head, body, and tail were 83.4±2.6 and 89.7±4.1, 79.4±5.9 and 88.5±6.0, and 81.2±5.5 and 91.4±5.3, respectively. For the nnU-Net-based model, they were 91.0±2.9 and 94.9±4.2, 86.2±7.0 and 90.3±8.0, and 84.2±10.8 and 88.4±13.0, respectively. On this dataset, the overall mean DSC and NSD of the nnU-Net-based model significantly outperformed the proposed method (p < 0.001). In terms of sub-region performance, there was a significant difference for the head and body (p < 0.001), while there was no significant difference for the tail (DSC: p = 0.104; NSD: p = 0.153). Figure 6 shows example segmentations for the head, body, and tail from the TCIA NIH Pancreas-CT dataset.

Fig. 6.

Examples of predicted segmentations for the head (red), body (green), and tail (blue), as well as the corresponding reference annotation for the external TCIA NIH Pancreas-CT dataset. The bottom row shows the 3D visualization of the predicted and reference masks. The figure is best viewed in color

Discussion

The proposed landmark-based pancreas sub-region segmentation model is a simple and effective method that can accurately segment the head, body, and tail sub-regions of the pancreas. To reduce the annotation burden, three key landmarks were marked by research fellows and verified by radiologists. As discussed in other work on the same topic [6, 11], annotating landmarks was likely effective because it encoded the relative anatomical distances between the head, body, and tail sub-regions. A customized 3D nnUNet model with a specifically designed loss function (to address the class imbalance problem) detected these three landmarks, and a nearest-neighbor post-processing algorithm based on the detected landmarks produced the full pancreatic sub-region segmentation. The proposed model demonstrated a reasonable mean DSC of 86.4% and NSD of 90.4% on a subset of a large public CT dataset. External validation on the TCIA NIH Pancreas-CT dataset showed robust performance, with a mean DSC of 81.3% and NSD of 89.8%.

Several landmark detection failures were due to deteriorated image quality and severe pathology. For example, the TS dataset contained low-dose chest CTs in which soft tissues had poor visibility or low resolution, making landmark detection very challenging and leading to a few undetected landmarks. Despite the varying image quality and acquisition types, we chose the TS dataset because it is one of the largest public CT datasets (n > 500) with a diverse distribution of image acquisition protocols and pathologies; this diversity helped ensure the generalizability of the proposed model. Additionally, segmentation of the head achieved the highest DSC because all head landmarks were detected accurately. The body and tail segmentations, however, had inferior results compared to the head. Localizing the body and tail landmarks proved challenging, likely because there are no salient anatomical reference points, such as the SMA and SMV for the head, to help define the boundary between the body and tail. Consequently, a detected tail landmark with a high L2 error could lead to under-segmentation of the tail and over-segmentation of the body (and vice versa), lowering the DSC and NSD for both parts.

In terms of annotation, we noticed that the two prior works [6, 11] contained limited information about sub-region annotation for the pancreas; thus, there was a need to create our own dataset with evidence-based guidelines for sub-region annotation. While reviewing the literature for optimal reference points to distinguish between regions, we found that the splenorenal ligament could be used to localize the tail of the pancreas [15]. However, the splenorenal ligament provides only a rough location of the boundary between the body and tail, potentially leading to unreliable sub-region segmentation. We determined that using the medial aspect of the left kidney as a reference point to separate the body and tail was a more reliable annotation guideline. Although this reference point could be problematic if the left kidney is displaced by pathology or entirely resected, it served as an effective and robust reference in most scenarios. Another limitation is that the proposed approach does not segment pancreas sub-regions in an end-to-end fashion. Since it is challenging and cumbersome to annotate every voxel in the pancreas [21], the proposed method detects pancreatic sub-region landmarks first and then assigns each pancreas voxel the label of its closest landmark. The derived sub-region segmentations could also be used to train a 3D nnUNet in an end-to-end manner.

In conclusion, we created and evaluated a deep learning-based tool to segment 3D pancreatic sub-regions (the head, body, and tail) in CT volumes. We make both the sub-region annotations and model publicly accessible. This tool can extract imaging biomarkers for each pancreatic sub-region, facilitating the measurement of regional appearance and morphological changes in pancreatic pathologies.

Funding

This work was supported by the Intramural Research Program of the National Institutes of Health (NIH) Clinical Center (project number 1Z01 CL040004), and the Medical Research Scholars Program at the National Institutes of Health. Y.Z. is supported in part by the Eric and Wendy Schmidt AI in Human Health Fellowship Program at Icahn School of Medicine at Mount Sinai. This work utilized the computational resources of the NIH HPC Biowulf cluster. This work was supported in part through the computational and data resources and staff expertise provided by Scientific Computing and Data at the Icahn School of Medicine at Mount Sinai and supported by the Clinical and Translational Science Awards (CTSA) grant UL1TR004419 from the National Center for Advancing Translational Sciences.

Data Availability

The datasets used in this study are publicly available. The annotations and model weights can be accessed at: https://github.com/rsummers11/Pancreas_Landmarks_Segmentation.

Declarations

Competing Interests

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Ronald M. Summers reports a relationship with Ping An (CRADA) that includes: funding grants. Co-author RMS receives royalties from iCAD, Philips, ScanMed, PingAn, MGB, and Translation Holdings.

Footnotes

1

The model and annotation will be publicly available upon the acceptance of the paper.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Yan Zhuang and Abhinav Suri contributed equally to this work.

References

  • 1.Cao, K., Xia, Y., Yao, J., Han, X., Lambert, L., Zhang, T., Tang, W., Jin, G., Jiang, H., Fang, X., et al. Large-scale pancreatic cancer detection via non-contrast ct and deep learning. Nature medicine 29(12), 3033–3043 (2023) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Ambrosetti, M.C., Grecchi, A., Ambrosetti, A., Amodio, A., Mansueto, G., Montemezzi, S., Zamboni, G.A.: Quantitative edge analysis of pancreatic margins in patients with chronic pancreatitis: a correlation with exocrine function. Diagnostics 13(13), 2272 (2023) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Cao, K., Xia, Y., Yao, J., Han, X., Lambert, L., Zhang, T., Tang, W., Jin, G., Jiang, H., Fang, X., et al. Large-scale pancreatic cancer detection via non-contrast ct and deep learning. Nature medicine 29(12), 3033–3043 (2023) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Tallam, H., Elton, D.C., Lee, S., Wakim, P., Pickhardt, P.J., Summers, R.M.: Fully automated abdominal ct biomarkers for type 2 diabetes using deep learning. Radiology 304(1), 85–95 (2022) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Sartoris, R., Calandra, A., Lee, K.J., Gauss, T., Vilgrain, V., Ronot, M.: Quantification of pancreas surface lobularity on ct: A feasibility study in the normal pancreas. Korean Journal of Radiology 22(8), 1300 (2021) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Triay Bagur, A., Aljabar, P., Ridgway, G.R., Brady, M., Bulte, D.P.: Pancreas mri segmentation into head, body, and tail enables regional quantitative analysis of heterogeneous disease. Journal of Magnetic Resonance Imaging 56(4), 997–1008 (2022) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Kawamoto, S., Siegelman, S.S., Bluemke, D.A., Hruban, R.H., Fishman, E.K.: Focal fatty infiltration in the head of the pancreas: evaluation with multidetector computed tomography with multiplanar reformation imaging. Journal of computer assisted tomography 33(1), 90–95 (2009) [DOI] [PubMed] [Google Scholar]
  • 8.Artinyan, A., Soriano, P.A., Prendergast, C., Low, T., Ellenhorn, J.D.I., Kim, J.: The anatomic location of pancreatic cancer is a prognostic factor for survival. HPB (Oxford) 10(5), 371–376 (2008) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Erning, F.N., Mackay, T.M., Geest, L.G., Groot Koerkamp, B., Laarhoven, H.W., Bonsing, B.A., Wilmink, J.W., Santvoort, H.C., Vos-Geelen, J., Eijck, C.H., et al. Association of the location of pancreatic ductal adenocarcinoma (head, body, tail) with tumor stage, treatment, and survival: a population-based analysis. Acta oncologica 57(12), 1655–1662 (2018) [DOI] [PubMed] [Google Scholar]
  • 10.Javed, S., Qureshi, T.A., Gaddam, S., Wang, L., Azab, L., Wachsman, A.M., Chen, W., Asadpour, V., Jeon, C.Y., Wu, B., et al. Risk prediction of pancreatic cancer using ai analysis of pancreatic subregions in computed tomography images. Frontiers in Oncology 12, 1007990 (2022) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Javed, S., Qureshi, T.A., Deng, Z., Wachsman, A., Raphael, Y., Gaddam, S., Xie, Y., Pandol, S.J., Li, D.: Segmentation of pancreatic subregions in computed tomography images. Journal of Imaging 8(7), 195 (2022) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Wasserthal, J., Breit, H.-C., Meyer, M.T., Pradella, M., Hinck, D., Sauter, A.W., Heye, T., Boll, D.T., Cyriac, J., Yang, S., et al.: Totalsegmentator: Robust segmentation of 104 anatomic structures in ct images. Radiology: Artificial Intelligence 5(5) (2023) [DOI] [PMC free article] [PubMed]
  • 13.Roth, H.R., Lu, L., Farag, A., Shin, H.-C., Liu, J., Turkbey, E.B., Summers, R.M.: Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part I 18, pp. 556–564 (2015). Springer
  • 14.Horan, F.: Gray’s anatomy: the anatomical basis of clinical practice: Edited by susan standring pp. 1551. illinois: Churchill livingstone elsevier, 2008. isbn: 978-0-443-06684-9. The Journal of Bone & Joint Surgery British Volume 91(7), 983–983 (2009)
  • 15.Radiopaedia: Pancreas. https://radiopaedia.org/articles/pancreas?lang=us. Accessed: 03-27-2024 (2022)
  • 16.Syed, A.-B., Mahal, R.S., Schumm, L.P., Dachman, A.H.: Pancreas size and volume on computed tomography in normal adults. Pancreas 41(4), 589–595 (2012) [DOI] [PubMed] [Google Scholar]
  • 17.Yushkevich, P.A., Gao, Y., Gerig, G.: Itk-snap: An interactive tool for semi-automatic segmentation of multi-modality biomedical images. In: 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 3342–3345 (2016). IEEE [DOI] [PMC free article] [PubMed]
  • 18.Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods 18(2), 203–211 (2021) [DOI] [PubMed] [Google Scholar]
  • 19.Ma, J., Chen, J., Ng, M., Huang, R., Li, Y., Li, C., Yang, X., Martel, A.L.: Loss odyssey in medical image segmentation. Medical Image Analysis 71, 102035 (2021) [DOI] [PubMed] [Google Scholar]
  • 20.Nikolov, S., Blackwell, S., Zverovitch, A., Mendes, R., Livne, M., De Fauw, J., Patel, Y., Meyer, C., Askham, H., Romera-Paredes, B., et al. Clinically applicable segmentation of head and neck anatomy for radiotherapy: deep learning algorithm development and validation study. Journal of medical Internet research 23(7), 26151 (2021) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Park, S., Chu, L., Fishman, E., Yuille, A., Vogelstein, B., Kinzler, K., Horton, K., Hruban, R., Zinreich, E., Fouladi, D.F., et al. Annotated normal ct data of the abdomen for deep learning: Challenges and strategies for implementation. Diagnostic and interventional imaging 101(1), 35–44 (2020) [DOI] [PubMed] [Google Scholar]
