Author manuscript; available in PMC: 2019 Aug 29.
Published in final edited form as: Proc SPIE Int Soc Opt Eng. 2016 Mar 21;9784:97842F. doi: 10.1117/12.2216396

3D Transrectal Ultrasound (TRUS) Prostate Segmentation Based on Optimal Feature Learning Framework

Xiaofeng Yang 1,*, Peter J Rossi 1, Ashesh B Jani 1, Hui Mao 2, Walter J Curran 1, Tian Liu 1,*
PMCID: PMC6715140  NIHMSID: NIHMS1001858  PMID: 31467459

Abstract

We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by a feature selection process and used to train a kernel support vector machine (KSVM). The trained KSVM is then used to localize the prostate in a new patient's images. Our segmentation technique was validated in a clinical study of 10 patients, and its accuracy was assessed against manual segmentations (gold standard). The mean volume Dice overlap coefficient was 89.7%. In this study, we have developed a new prostate segmentation approach based on an optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.

Keywords: Prostate segmentation, ultrasound, anatomical feature, machine learning

1. INTRODUCTION

Prostate cancer is the second leading cause of cancer death among U.S. men [1]. Transrectal ultrasound (TRUS) is the standard imaging modality for image-guided prostate-cancer interventions (e.g., biopsy and brachytherapy) due to its versatility and real-time capability. Accurate segmentation of the prostate plays a key role in biopsy needle placement [2], treatment planning [3], and motion monitoring [4]. For example, prostate segmentation helps physicians measure the volume of the prostate gland and generate high-dose-rate (HDR) and low-dose-rate (LDR) brachytherapy plans [5]. Ultrasound segmentation is very challenging due to inherent speckle and artifacts such as shadows, attenuation, and signal dropout.

Many methods have been proposed to automatically segment the prostate in TRUS images [2, 6-15]. A semiautomatic method that warps an ellipse to fit the prostate on TRUS images was presented in [9]. Level-set-based methods [12, 16] were used to detect the prostate surface from 3D TRUS images. A Gabor support vector machine (G-SVM) and a statistical shape model were used to extract the prostate boundary [13, 17]. A 2D semiautomatic discrete dynamic contour model was used to segment the prostate [10]. Hodge et al. [18] described 2D active shape models for semiautomatic segmentation of the prostate and extended the algorithm to 3D segmentation using rotational-based slicing. Ding et al. [19] described a slice-based 3D prostate segmentation method based on a continuity constraint, implemented as an autoregressive model. Tutar et al. [20] proposed an optimization framework in which segmentation fits the best surface to the underlying images under shape constraints. Hu et al. [21] used model-based initialization and mesh refinement for prostate segmentation. Zhang et al. [22] improved a prostate boundary detection system with a tree-structured nonlinear filter, directional wavelet transforms, and a tree-structured wavelet transform. Chiu et al. [23] introduced a semiautomatic segmentation algorithm based on the dyadic wavelet transform and the discrete dynamic contour. Even though advanced segmentation methods have been proposed for prostate ultrasound images, manual segmentation remains the gold standard and is widely used in most clinical applications because of its reliability. However, manual segmentation is time consuming, highly subjective, and often irreproducible in procedures such as biopsy and treatment. Therefore, given the low contrast between prostate and non-prostate tissue and the low signal-to-noise ratio of TRUS images, there is still an unmet clinical need for reliable, automatic 3D prostate segmentation methods [6, 8, 24-26].

In this paper, we propose a new segmentation method for 3D prostate TRUS images. We integrate multi-atlas registration and anatomical signatures into a machine learning framework to perform prostate segmentation. This approach has two distinctive strengths: 1) instead of using voxel intensity information alone, a patch-based representation in a discriminative feature space is used as the anatomical signature, addressing the low-contrast and low-SNR problems of TRUS images; 2) to improve KSVM training efficiency, a feature selection mechanism identifies the most informative and salient features in each voxel's anatomical signature by minimizing a logistic sparse LASSO energy function. Finally, the selected features with higher discriminative power are used to train the KSVM. In summary, the proposed method allows many-to-one correspondences to identify a set of good candidate voxels in the atlases for machine learning.

2. METHODS

Figure 1 shows the schematic flow chart of the proposed segmentation method, which consists of four major steps. First, pre-processing is performed on the training and new TRUS images. Second, patch-based features are extracted from the registered training images with patient-specific information, because these training images have been mapped to the new patient's images. Third, the most informative anatomical features are selected to train the kernel support vector machine (KSVM). Finally, the selected informative anatomical features are extracted from the newly acquired images as input to the trained KSVM, whose output is the segmented prostate of the new patient. These major steps are briefly described below, and a high-level sketch follows.
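To make the data flow concrete, the following is a minimal Python sketch of the four steps. Every helper name here (`preprocess`, `deformable_register`, `extract_patch_features`, `select_features_lasso`, `train_kernel_svm`, `morphological_cleanup`) is a placeholder for the stages described in Sections 2.1-2.4, several of which are sketched in those sections; this is not the authors' actual implementation.

```python
import numpy as np

def segment_prostate(new_image, training_images, training_masks):
    """Hypothetical four-step pipeline; helpers are placeholders for the
    stages of Sections 2.1-2.4 (sketches for most of them appear below)."""
    # Step 1: identical pre-processing for the new image and the atlas set.
    new_image = preprocess(new_image)
    training_images = [preprocess(img) for img in training_images]

    # Step 2: deformably register each aligned training image (and its
    # prostate mask) onto the new image, then build per-voxel signatures.
    warped = [deformable_register(img, msk, new_image)
              for img, msk in zip(training_images, training_masks)]
    features = np.vstack([extract_patch_features(img) for img, _ in warped])
    labels = np.concatenate([np.where(msk.ravel() > 0, 1, -1)
                             for _, msk in warped])

    # Step 3: logistic sparse LASSO feature selection, then RBF-kernel SVM.
    selected = select_features_lasso(features, labels)
    svm = train_kernel_svm(features[:, selected], labels)

    # Step 4: classify every voxel of the new image and post-process the
    # resulting binary volume into the final 3D prostate segmentation.
    new_features = extract_patch_features(new_image)[:, selected]
    binary = (svm.predict(new_features) > 0).reshape(new_image.shape)
    return morphological_cleanup(binary)
```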

Figure 1. Schematic flow chart of 3D prostate segmentation.

2.1. Pre-processing

Pre-processing is performed on the training TRUS dataset, including speckle-noise reduction, bias correction, and grayscale normalization. The same processing is performed on the new patient's images to be segmented. These pre-processing steps improve the accuracy of the prostate registrations. During alignment of the training set, we first select one TRUS image as the template, detect the probe center position and radius, and align the other TRUS images to the template image. We then use the corresponding transformation obtained from the training-image alignment to align the segmented prostates (binary masks) to the template prostate. Since the segmented prostate of each training image is available, we further optimize the alignment of the training set by registering each training image's binary prostate segmentation to the template prostate. A Gaussian filter is applied to the segmented binary prostates before registration to ensure proper optimization of the registration cost function [27]. Because the segmented prostates are binary images with relatively simple shapes, the optimal deformable transformations warping them to the template prostate can be robustly estimated. When a newly acquired TRUS image arrives, all aligned images in the training set are registered to it. Deformable registration methods [6, 24, 28-30] are used to obtain the spatial deformation field between the new TRUS image and the training images, and the same transformations are applied to the segmented prostates in the training set. A minimal sketch of this stage is shown below.
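A minimal sketch of the pre-processing steps, assuming median filtering for speckle reduction and min-max grayscale normalization (the paper does not name specific filters; bias correction, e.g. N4, is omitted here and would be applied with a dedicated tool):

```python
import numpy as np
from scipy import ndimage

def preprocess(volume, median_size=3):
    """Speckle reduction + grayscale normalization (illustrative choices)."""
    # A small median filter is one common, simple speckle-reduction option.
    smoothed = ndimage.median_filter(volume.astype(np.float32),
                                     size=median_size)
    # Min-max normalization so intensities are comparable across patients.
    lo, hi = float(smoothed.min()), float(smoothed.max())
    return (smoothed - lo) / max(hi - lo, 1e-8)

def soften_mask(binary_mask, sigma=2.0):
    """Gaussian-smooth a binary prostate mask before mask-to-mask
    registration, giving the registration cost function usable gradients."""
    return ndimage.gaussian_filter(binary_mask.astype(np.float32),
                                   sigma=sigma)
```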

2.2. Patch-based feature extraction

Patch-based representation has been widely used as a voxel's anatomical signature in computer vision and medical image analysis. The principle of the conventional patch-based representation is to define a small image patch centered at each voxel and use the voxel intensities of that patch as the voxel's anatomical signature. However, due to the noise and anatomical complexity of prostate ultrasound images, a patch-based representation using voxel intensities alone may not effectively distinguish prostate from non-prostate voxels. Hence, we propose to use patch-based anatomical features as per-voxel signatures to characterize the image appearance. Three types of image features, namely the Gabor wavelet feature, the histogram of oriented gradients (HOG) feature, and the local binary pattern (LBP) feature, are extracted from a small image patch centered at each voxel of each aligned training image. Gabor and HOG features provide complementary anatomical information, and LBP captures texture information from the input image. A 16-dimensional Gabor feature is used in this study. The HOG feature is a 3×3 gradient-orientation histogram, resulting in a 9-dimensional feature vector. The LBP feature is extracted at three resolution levels and has a dimension of 30. Therefore, each voxel is represented by a 55-dimensional feature signature. Although the features are extracted from the 2D slice at each voxel, owing to the larger voxel size along the sagittal direction (1 mm) than along the axial and coronal directions (0.12×0.12 mm²), the proposed framework operates on 3D prostate TRUS images. A sketch of the per-patch signature follows.
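The per-patch signature could be assembled as below with scikit-image, for, say, a 16×16 patch. The dimensionalities (16 + 9 + 30 = 55) follow the paper, but the specific Gabor frequencies and orientations, the single-cell HOG configuration, and the uniform-LBP radii are our assumptions:

```python
import numpy as np
from skimage.filters import gabor
from skimage.feature import hog, local_binary_pattern

def patch_signature(patch):
    """55-D signature of one 2D patch: 16 Gabor + 9 HOG + 30 LBP features."""
    feats = []

    # 16 Gabor features: 4 orientations x 2 frequencies, keeping the mean
    # and standard deviation of the real response (4 * 2 * 2 = 16).
    for theta in np.arange(4) * np.pi / 4:
        for freq in (0.1, 0.3):
            real, _ = gabor(patch, frequency=freq, theta=theta)
            feats += [real.mean(), real.std()]

    # 9 HOG features: one 9-bin gradient-orientation histogram computed
    # over the whole patch (a single cell and a single block).
    feats += list(hog(patch, orientations=9,
                      pixels_per_cell=patch.shape, cells_per_block=(1, 1)))

    # 30 LBP features: uniform LBP with P=8 neighbors (10 histogram bins)
    # at three radii, i.e. three resolution levels (3 * 10 = 30).
    for radius in (1, 2, 3):
        lbp = local_binary_pattern(patch, P=8, R=radius, method='uniform')
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats += list(hist)

    return np.asarray(feats)  # shape: (55,)
```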

2.3. Feature selection

Based on the above features, we obtain a patch-based representation of each voxel. It should be noted that the patch-based anatomical signature may contain noisy and redundant features that could degrade segmentation accuracy. Therefore, feature selection is needed to identify the most informative and salient features in each voxel's anatomical signature. This feature selection can be cast as a binary-variable regression problem with respect to each dimension of the original feature, so a logistic function is used as the regression function. The logistic function [31] represents a conditional probability model defined by

$$P(y \mid \beta, b, f(x)) = \frac{1}{1 + \exp\left(-y\left(\beta^{T} f(x) + b\right)\right)} \tag{1}$$

where f(x) denotes the original feature signature of voxel x, and y is a binary variable with y = +1 denoting that x belongs to the prostate region and y = −1 otherwise. β and b are parameters of the model.

Moreover, the aim of feature selection is to select a small subset of the most informative features as the anatomical signature, which can be accomplished by enforcing a sparsity constraint during the logistic regression. The feature selection problem can therefore be formulated as a logistic sparse LASSO problem [32], defined as

$$J(\beta, b) = \sum_{c=1}^{P} \log\left(1 + \exp\left(-L_c\left(\beta^{T} f(x_c) + b\right)\right)\right) + \lambda \lVert \beta \rVert_1 \tag{2}$$

where f(x_c) denotes the original feature signature of voxel x_c, and the label L_c = +1 denotes that x_c belongs to the prostate region, L_c = −1 otherwise. β is the sparse coefficient vector, ‖β‖₁ is its L1 norm, b is the intercept scalar, and λ is the regularization parameter. The first term of (2) is obtained by substituting the labels of the drawn samples and their original feature signatures into the logistic function in (1) and taking the negative logarithm, i.e., it is the negative log-likelihood used for maximum-likelihood estimation. The second term is the L1 penalty that enforces the sparsity constraint of the LASSO. By minimizing the logistic sparse LASSO energy (2), the features with superior discriminant power are selected. Based on the selected features, we can quantitatively measure their power to separate prostate from non-prostate voxels using the Fisher score. A sketch of both steps follows.
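A minimal sketch of both steps, substituting scikit-learn's L1-penalized logistic regression for the Eq. (2) solver (its C parameter plays the role of 1/λ; the value below is an assumption):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_features_lasso(features, labels, lam=0.1):
    """Approximate Eq. (2): L1-penalized logistic regression, keeping the
    features whose coefficients are non-zero."""
    clf = LogisticRegression(penalty='l1', solver='liblinear', C=1.0 / lam)
    clf.fit(features, labels)      # labels in {-1, +1}
    beta = clf.coef_.ravel()       # sparse coefficient vector beta
    return np.flatnonzero(beta)    # indices of the selected features

def fisher_score(features, labels):
    """Per-feature Fisher score: how well each selected feature separates
    prostate (+1) from non-prostate (-1) voxels."""
    pos, neg = features[labels == 1], features[labels == -1]
    between = (pos.mean(axis=0) - neg.mean(axis=0)) ** 2
    within = pos.var(axis=0) + neg.var(axis=0)
    return between / (within + 1e-12)
```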

2.4. Support vector machine (SVM) training and segmentation

The SVM is a popular supervised machine learning model with associated learning algorithms that analyze data and recognize patterns for classification and regression. The idea behind SVMs is to map the original data points from the input space to a high-dimensional feature space in which the classification problem becomes simpler. The training phase seeks a linear optimal separating hyperplane, i.e., a maximum-margin classifier, with respect to the training data [28]. Since our training data are not linearly separable, kernel-based SVM methods are employed. In this study, a kernel SVM is used to identify the features of prostate glandular tissue: we use the selected features together with the transformed prostate volumes to train an RBF-kernel SVM [33]. To segment a newly acquired TRUS image, we extract the same selected features from the new image and feed them to the trained kernel SVM, which adaptively labels the prostate tissue based on its texture and location. The output of the trained SVM is a binary volume of "0" (non-prostate tissue) and "1" (prostate tissue) voxels; after morphological processing, we obtain the 3D segmented prostate, as sketched below.
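A sketch of the training and post-processing stages, assuming scikit-learn's RBF-kernel SVC and a simple opening-plus-largest-component cleanup (the paper does not specify SVM hyperparameters or the exact morphological operations used):

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def train_kernel_svm(features, labels):
    """Train an RBF-kernel SVM on the selected feature signatures."""
    svm = SVC(kernel='rbf', C=1.0, gamma='scale')  # hyperparameters assumed
    svm.fit(features, labels)
    return svm

def morphological_cleanup(binary_volume):
    """Clean the SVM's 0/1 output: remove small speckle-induced islands
    and keep the largest connected component as the prostate."""
    opened = ndimage.binary_opening(binary_volume, iterations=2)
    labeled, n = ndimage.label(opened)
    if n == 0:
        return opened
    sizes = ndimage.sum(opened, labeled, index=np.arange(1, n + 1))
    return labeled == (int(np.argmax(sizes)) + 1)
```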

3. EXPERIMENTS AND RESULTS

The proposed prostate segmentation method was tested on TRUS images of 10 prostate-cancer patients. All TRUS images were acquired using a Hitachi ultrasound scanner and a 7.5 MHz bi-plane probe. Each 3D B-mode TRUS data set consisted of 1024×768×75 voxels with a voxel size of 0.12×0.12×1.00 mm³. All prostate glands were contoured in the TRUS images by an experienced physician. A leave-one-out cross-validation was used to evaluate the performance of the proposed segmentation algorithm: for each subject, the other 9 images and their segmented prostates served as the training set, and the proposed method was applied to the remaining subject. The resulting segmentations were compared with the manual results using the Dice volume overlap. As demonstrated in Figure 2, the proposed segmentation method works well for 3D TRUS images of the prostate and achieved results similar to manual segmentation. The segmentation method was performed successfully for all enrolled patients. Figure 3 shows the Dice volume overlaps between our segmentations and the manual segmentations for each patient. Overall, the prostate volume Dice overlap coefficient was 89.7±2.3%, demonstrating the accuracy of the proposed segmentation method. The evaluation protocol can be summarized as follows.
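For reference, the Dice volume overlap and the leave-one-out loop amount to the following (a sketch; `images` and `masks` stand for the 10 pre-loaded TRUS volumes and manual contours, and `segment_prostate` is the hypothetical pipeline from Section 2):

```python
import numpy as np

def dice(a, b):
    """Dice volume overlap 2|A ∩ B| / (|A| + |B|) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Leave-one-out cross-validation over the 10 patients.
scores = []
for i in range(len(images)):
    train_imgs = images[:i] + images[i + 1:]   # the 9 training subjects
    train_msks = masks[:i] + masks[i + 1:]
    auto = segment_prostate(images[i], train_imgs, train_msks)
    scores.append(dice(auto, masks[i]))
print(f"Dice overlap: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```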

Figure 2. Comparison of the proposed method and manual segmentations: (a) axial, (b) coronal, and (c) sagittal TRUS images. The manual prostate segmentation is shown as a yellow dotted line and the automated segmentation as a red dotted line.

Figure 3. Dice volume overlaps between the automated and manual segmentations.

4. DISCUSSION AND CONCLUSION

We report a novel 3D TRUS prostate segmentation method based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as signatures for each voxel. The most robust and informative features are then identified by the feature selection process to train the KSVM, and the trained KSVM is used to localize the prostate of a new patient. In this study, we have demonstrated the method's clinical feasibility and validated its accuracy against manual segmentations (gold standard). This segmentation technique could be a useful tool in image-guided interventions for prostate-cancer diagnosis and treatment.

ACKNOWLEDGEMENTS

This research is supported in part by the Department of Defense (DoD) Prostate Cancer Research Program (PCRP) Award W81XWH-13-1-0269 and Winship Cancer Institute.

REFERENCES

[1] Prostate Cancer Foundation, http://www.prostatecancerfoundation.org (2008).
[2] Yan P, Xu S, Turkbey B, et al., "Discrete deformable model guided by partial active shape model for TRUS image segmentation," IEEE Transactions on Biomedical Engineering, 99, 1-9 (2010).
[3] Hodge KK, McNeal JE, Terris MK, et al., "Random systematic versus directed ultrasound guided transrectal core biopsies of the prostate," J. Urol., 142, 71-74 (1989).
[4] Shen D, Lao Z, Zeng J, et al., "Optimized prostate biopsy via a statistical atlas of cancer spatial distribution," Med. Image Anal., 8(2), 139-150 (2004).
[5] Yang X, Rossi P, Mao H, et al., "A MR-TRUS registration method for ultrasound-guided prostate interventions," Proc. SPIE, 9415, 94151Y (2015).
[6] Yang X, Schuster D, Master V, et al., "Automatic 3D segmentation of ultrasound images using atlas registration and statistical texture prior," Proc. SPIE, 7964, 796432 (2011).
[7] Noble JA and Boukerroui D, "Ultrasound image segmentation: a survey," IEEE Trans. Med. Imaging, 25, 987-1010 (2006).
[8] Yang F, Suri J and Fenster A, "Segmentation of prostate from 3-D ultrasound volumes using shape and intensity priors in level set framework," Conf. Proc. IEEE Eng. Med. Biol. Soc., 1, 2341-2344 (2006).
[9] Badiei S, Salcudean SE, Varah J, et al., "Prostate segmentation in 2D ultrasound images using image warping and ellipse fitting," MICCAI, 4191, 17-24 (2006).
[10] Ladak HM, Mao F, Wang Y, et al., "Prostate boundary segmentation from 2D ultrasound images," Med. Phys., 27(8), 1777-1788 (2000).
[11] Pathak SD, Chalana V, Haynor DR, et al., "Edge-guided boundary delineation in prostate ultrasound images," IEEE Trans. Med. Imaging, 19(12), 1211-1219 (2000).
[12] Shao F, Ling KV and Ng WS, "3D prostate surface detection from ultrasound images based on level set method," MICCAI, 4191, 389-396 (2003).
[13] Shen D, Zhan Y and Davatzikos C, "Segmentation of prostate boundaries from ultrasound images using statistical shape model," IEEE Trans. Med. Imaging, 22(4), 539-551 (2003).
[14] Gong L, Pathak SD, Haynor DR, et al., "Parametric shape modeling using deformable superellipses for prostate segmentation," IEEE Transactions on Medical Imaging, 23(3), 340-349 (2004).
[15] Yang X, Rossi P, Jani A, et al., "BEST IN PHYSICS (IMAGING): 3D prostate segmentation in ultrasound images using patch-based anatomical feature," Medical Physics, 42(6), 3685 (2015).
[16] Zhang H, Bian Z, Guo Y, et al., "An efficient multiscale approach to level set evolution," Proc. 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 1, 17-21 (2003).
[17] Zhan Y and Shen D, "Deformable segmentation of 3-D ultrasound prostate images using statistical texture matching method," IEEE Trans. Med. Imaging, 25(3), 256-272 (2006).
[18] Hodge AC, Fenster A, Downey DB, et al., "Prostate boundary segmentation from ultrasound images using 2D active shape models: optimisation and extension to 3D," Computer Methods and Programs in Biomedicine, 84(2-3), 99-113 (2006).
[19] Ding M, Chiu B, Gyacskov I, et al., "Fast prostate segmentation in 3D TRUS images based on continuity constraint using an autoregressive model," Medical Physics, 34(11), 4109-4125 (2007).
[20] Tutar IB, Pathak SD, Gong LX, et al., "Semiautomatic 3-D prostate segmentation from TRUS images using spherical harmonics," IEEE Transactions on Medical Imaging, 25(12), 1645-1654 (2006).
[21] Hu N, Downey DB, Fenster A, et al., "Prostate boundary segmentation from 3D ultrasound images," Medical Physics, 30(7), 1648-1659 (2003).
[22] Zhang Y, Sankar R and Qian W, "Boundary delineation in transrectal ultrasound image for prostate cancer," Computers in Biology and Medicine, 37(11), 1591-1599 (2007).
[23] Chiu B, Freeman GH, Salama MMA, et al., "Prostate segmentation algorithm using dyadic wavelet transform and discrete dynamic contour," Physics in Medicine and Biology, 49(21), 4943-4960 (2004).
[24] Yang X, Akbari H, Halig L, et al., "3D non-rigid registration using surface and local salient features for transrectal ultrasound image-guided prostate biopsy," Proc. SPIE, 7964, 79642V (2011).
[25] Yan P, Xu S, Turkbey B, et al., "Adaptively learning local shape statistics for prostate segmentation in ultrasound," IEEE Trans. Biomed. Eng., 58(3), 633-641 (2011).
[26] Akbari H, Yang X, Halig L, et al., "3D segmentation of prostate ultrasound images using wavelet transform," Proc. SPIE, 7962, 79622K (2011).
[27] Yang X, Rossi P, Mao H, et al., "A new MR-TRUS registration for ultrasound-guided prostate interventions," Proc. SPIE, 94151Y (2015).
[28] Yang X and Fei B, "3D prostate segmentation of ultrasound images combining longitudinal image registration and machine learning," Proc. SPIE, 8316, 83162O (2012).
[29] Rueckert D, Sonoda LI, Hayes C, et al., "Nonrigid registration using free-form deformations: application to breast MR images," IEEE Transactions on Medical Imaging, 18(8), 712-721 (1999).
[30] Yang X, Torres M, Kirkpatrick S, et al., "Ultrasound 2D strain estimator based on image registration for ultrasound elastography," Proc. SPIE, 9040, 904018 (2014).
[31] Liao S, Gao YZ, Lian J, et al., "Sparse patch-based label propagation for accurate prostate localization in CT images," IEEE Transactions on Medical Imaging, 32(2), 419-434 (2013).
[32] Aseervatham S, Antoniadis A, Gaussier E, et al., "A sparse version of the ridge logistic regression for large-scale text categorization," Pattern Recognition Letters, 32(2), 101-106 (2011).
[33] Yang XF, Wu N, Cheng GH, et al., "Automated segmentation of the parotid gland based on atlas registration and machine learning: a longitudinal MRI study in head-and-neck radiation therapy," International Journal of Radiation Oncology Biology Physics, 90(5), 1225-1233 (2014).
