Published in final edited form as: Med Image Anal. 2012 Feb 11;16(4):904–914. doi: 10.1016/j.media.2012.02.001. Author manuscript; available in PMC 2013 May 1.

Statistical 4D Graphs for Multi-Organ Abdominal Segmentation from Multiphase CT

Marius George Linguraru 1,2, John A Pura 1, Vivek Pamulapati 1, Ronald M Summers 1
PMCID: PMC3322299  NIHMSID: NIHMS356727  PMID: 22377657

Abstract

The interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Diagnosis also relies on the comprehensive analysis of multiple organs and quantitative measures of soft tissue. An automated method optimized for medical image data is presented for the simultaneous segmentation of four abdominal organs from 4D CT data using graph cuts. Contrast-enhanced CT scans were obtained at two phases: non-contrast and portal venous. Intra-patient data were spatially normalized by non-linear registration. Then 4D convolution using population training information of contrast-enhanced liver, spleen and kidneys was applied to multiphase data to initialize the 4D graph and adapt to patient-specific data. CT enhancement information and constraints on shape, from Parzen windows, and location, from a probabilistic atlas, were input into a new formulation of a 4D graph. Comparative results demonstrate the effects of appearance, enhancement, shape and location on organ segmentation. All four abdominal organs were segmented robustly and accurately with volume overlaps over 93.6% and average surface distances below 1.1 mm.

Keywords: multiphase CT, 4D graph, multi-organ segmentation, enhancement, shape

1. Introduction

In CT-based clinical abdominal diagnosis, radiologists rely on analyzing multiphase computed tomography (CT) data, as soft tissue enhancement can be an indicator of abnormality. Contrast-enhanced CT has proven exceptionally useful for improving diagnosis due to its ability to differentiate tumors from healthy tissue. For instance, the level of enhancement in a tumor is an important indicator of malignancy and can be used to better classify abdominal abnormalities (Fritz et al., 2006; Voci et al., 2000). This routine clinical acquisition protocol makes multiphase data (with/without contrast) readily available.

Diagnosis also relies on the comprehensive analysis of groups of organs and quantitative measures of soft tissue, as the volumes and shapes of organs can be indicators of disorders. When presented with three-dimensional (3D) patient data, such as CT, radiologists typically analyze them organ-by-organ and slice-by-slice until the entire image data are covered. This allows the detection of multiple diseases across multiple organs.

Computer-aided diagnosis (CAD) and medical image analysis traditionally focus on organ- or disease-based applications. However, there is a strong incentive to migrate toward the automated simultaneous segmentation and analysis of multiple organs for comprehensive diagnosis or pre-operative planning and guidance. Additionally, the interpretation of medical images should benefit from anatomical and physiological priors, such as shape and appearance; synergistic combinations of such priors have seldom been incorporated into the optimization of CAD.

1.1 CT-based Abdominal Organ Segmentation

A variety of methods have been proposed for the segmentation of individual abdominal organs from CT images, in particular CT after contrast enhancement. The liver enjoyed special attention in recent literature (Delingette and Ayache, 2005; Heimann et al., 2009; Linguraru et al., 2010; Okada et al., 2008a; Soler et al., 2001; Song et al., 2009; Wimmer et al., 2009), kidneys were analyzed sporadically (Ali et al., 2007; Shim et al., 2009; So and Chung, 2009), while the spleen (Danelson and Stitzel, 2008; Linguraru et al., 2010) and pancreas (Shimizu et al., 2010a) were segmented less frequently. Model-driven approaches have been both popular and successful (Soler et al., 2001; Song et al., 2009), including active and statistical shape models (Okada et al., 2008a; So and Chung, 2009; Wimmer et al., 2009) and atlas-based segmentation (Linguraru et al., 2010; Okada et al., 2008a; Shimizu et al., 2010a). Level sets and geodesic active contours were frequently involved in these techniques (Heimann et al., 2009; Linguraru et al., 2010; Wimmer et al., 2009). Occasionally, graph cuts were employed (Ali et al., 2007; Shim et al., 2009).

Recently, the simultaneous segmentation of multiple abdominal organs has been addressed in publications (Linguraru and Summers, 2008; Okada et al., 2008b; Park et al., 2003; Seifert et al., 2009; Shimizu et al., 2007; Shimizu et al., 2010a). Most of these methods rely on some form of prior knowledge of the organs, for example probabilistic atlases (Park et al., 2003; Reyes et al., 2009; Shimizu et al., 2007) and statistical models (Okada et al., 2008b), which are sensitive to initialization/registration. An initial segmentation is typically achieved and subsequently refined. The relation between organs and manual landmarks was used in (Park et al., 2003). An efficient optimization of level set techniques for general multi-class segmentation was proposed in (Bae and Tai, 2009), paving the way for the discrete optimization of graph cuts with nonsubmodular functions in (El-Zehiry and Grady, 2011).

Notably, a hierarchical multi-organ statistical atlas was developed by Okada et al. (2008b). The analysis was restricted to the liver area due to the large variations to be statistically modeled for inter-organ relationships. Also recently, Seifert et al. (2009) proposed a semantic navigation for fast multi-organ segmentation from CT data. The method estimated the organ location, orientation and size using automatically detected anatomical landmarks and machine learning techniques. Decision forests were additionally proposed in (Montillo et al., 2011) to classify multiple organs from CT volumes. The method achieved high prediction accuracy and was fast, but its segmentation overlap was low. Another interesting concept was presented in (Zhan et al., 2008) for the scheduling problem of multi-organ segmentation to maximize the performance of CAD systems designed to analyze the whole human body.

In addition, multiphase contrast-enhanced CT data were employed in abdominal multi-organ analysis (Hu et al., 2004; Linguraru and Summers, 2008; Sakashita et al., 2007). In (Hu et al., 2004), the segmentation was based on independent component analysis in a variational Bayesian mixture, while in (Sakashita et al., 2007), expectation-maximization and principal component analysis were combined. A 4D convolution was proposed in (Linguraru and Summers, 2008) constrained by a training model of abdominal soft tissue enhancement. These intensity-based methods are hampered by the high variability of abdominal intensity and texture.

1.2 Graph Cuts

Graph cuts (Boykov and Jolly, 2001) have become popular for image segmentation due to their ability to handle highly textured data via a numerically robust global optimization. The segmentation uses hard constraints from user defined areas of “object” and “background” and additional soft constraints from boundaries and region information. The value of the method was immediate for medical data (the segmentation of bone from CT and kidney from magnetic resonance imaging - MRI) and video sequences (2D+time; Boykov and Jolly, 2001). Graph cuts were also used to track objects from occluding scenes in (Khan and Shah, 2009).

To reduce the sensitivity to initialization, global geodesics were computed via graph cuts (Boykov and Kolmogorov, 2003) and used to segment the liver, lung and heart. This method imposed length/area constraints for object boundaries and relied on consistent edge weights to obtain geometric properties. While introducing the theoretical advantages of graph cuts, there was no validation of medical data segmentation provided in (Boykov and Jolly, 2001; Boykov and Kolmogorov, 2003).

Combining the length/area concept with the computation of flux, the geometric interpretation can be seen as a shape prior in the construction of the graph (Kolmogorov and Boykov, 2005; Vu and Manjunath, 2008). Multiple objects can be consecutively segmented (Vu and Manjunath, 2008). The shape model was implemented as a density estimation for shape priors initially proposed for level sets in (Cremers et al., 2006), but a symmetric shape distance can be biased if the initialization is poor. A multi-region segmentation via graph-cuts was recently proposed by (Delong and Boykov, 2009) with separate appearance models for each region. Their approach uses distance priors between regions, but no explicit shape priors, and was not quantitatively validated.

The introduction of shape into graph cuts has been an area of active research. Compact shape priors were used in (Das et al., 2008), but medical data often involve complex shapes. A star shape descriptor was introduced in (Veksler, 2008), but only shapes complying with a generic star shape were extracted. Shape priors were embedded into the weights on the edges of the graph by using a level-set formulation in (Freedman and Zhang, 2005), but this interactive method was robust only to small shape variations. Finally, a kernel principal component analysis was used to learn a statistical model of relevant shapes in (Malcolm et al., 2007) in a Bayesian formulation to perform segmentation via graph cuts in four natural images.

All the above graph cut techniques require manual initialization of the segmentation.

1.3 Graph Cuts for Biomedical Data

Following the theoretical advances of graph cuts techniques, several medical image analysis applications have been proposed. The validation of these applications is generally more thorough.

An automated graph cut technique was used in (García-Lorenzo et al., 2009) to segment multiple sclerosis lesions from MRI of the brain using an expectation-maximization initialization. Statistical models of intensity and spatial distribution from MRI data were registered and used to construct a graph for the segmentation of the hippocampus in (van der Lijn et al., 2008; Lötjönen et al., 2010). An interesting example of using graph cuts to solve non-rigid registration for brain MRI was presented in (So and Chung, 2009). In general, registration methods are more robust on brain MRI data than on abdominal data, which show higher shape and appearance variability.

Brain tumors were automatically segmented from MRI using integrated probabilistic boosting trees into graph cuts to handle intra-patient intensity heterogeneity (Wels et al., 2008). Graph cuts were also used to refine the manual segmentation of breast tumors from MRI data (Zheng et al., 2007). The skull was accurately removed from MRI images in (Sadananthan et al., 2010) using intensity thresholding for initialization. Then foot bones were segmented interactively from CT in (Liu et al., 2008). Using an acquisition protocol for plaque reconstruction, carotid plaques were segmented semi-automatically from ultrasound images in (Seabra et al., 2009).

In (Ali et al., 2007; Ben Ayed et al., 2009; Chen and Shapiro, 2008; Lin et al., 2005) model-based information was included for the segmentation of heart, spleen and kidneys. The models were aligned using markers in (Ali et al., 2007; Lin et al., 2005), manual placements in axial slices in (Chen and Shapiro, 2008) and intra-model constraints given in the first frame of the cardiac cycle in (Ben Ayed et al., 2009). Shape priors were employed in (Bauer et al., 2010; Esneault et al., 2010) to reconstruct the liver vasculature and lung airways; the cuts in the graph were constrained by a tubular filter. Probabilistic shape-based energies for graph-cuts were combined with image intensity in a non-parametric iterative model in (Freiman et al., 2010) for the segmentation of the kidneys. Also, in (Shimizu et al., 2010b), shape priors and neighboring constraints were incorporated using signed distances from boundaries to segment the liver.

In other types of biomedical applications, a multi-level automated graph-cut algorithm was used in (Al-Kofahi et al., 2009) to segment cell nuclei; the seed points were detected by a Laplacian-of-Gaussian filter in a method designed for histopathology data. A graph cuts optimization was presented in (Deleus and van Hulle, 2009) for the parcellation of the brain from functional MRI. In (Gramfort et al., 2010), a data-driven graph approach was implemented to estimate the variability of neural responses on magnetoencephalography or electroencephalography data. Finally, a study of the effect of weights and topology on the construction of graphs can be found in (Grady and Jolly, 2008).

1.4 Motivation and Approach

Abdominal multi-organ segmentation remains a challenging task because the sizes, shapes and locations of the organs vary significantly in different subjects. Moreover, these organs have similar appearance in CT images, even in contrast-enhanced data, and are in close proximity to each other.

An advantage when handling medical data is the available prior information regarding organ location, shape and appearance. Although highly variable between patients and in the presence of disease, abdominal organs satisfy basic rules of anatomy and physiology. Hence, the incorporation of statistical models into algorithms for medical data analysis greatly benefits the segmentation of abdominal images. For example, the enhancement of soft tissue in CT images is not only a marker of disease, but also an indicator of tissue or organ type, as contrast agent uptake is tissue specific. As presented in the previous sections, certain levels of model-based information have been included in abdominal segmentation and, to a lesser extent, in graph cuts. These methods generally require manual initialization and do not address multi-organ segmentation.

An integrated statistical model for medical data is introduced in this paper and incorporated into a graph-based approach. We propose a new formulation of a 4D directional graph to automatically segment abdominal organs (at this stage the liver, spleen and left and right kidneys) using graph cuts. The statistical priors comprise location probabilities that are intrinsic to medical data, an enhancement constraint characteristic of the clinical protocols using abdominal CT, and an unbiased asymmetric shape measure. The method is optimized globally and starts from training 4D intensity data (from the entire patient population) to automatically initialize the graph, then migrates to patient-specific information for better specificity. Comparative results at different stages of the algorithm show the effects of appearance, shape and location on the accuracy of organ segmentation.

2. Methods and Materials

2.1 Data

A schematic of the segmentation algorithm is illustrated in Figure 1. Data in this study were declared exempt from IRB review by the National Institutes of Health’s Office of Human Subjects Research. Images were collected with LightSpeed Ultra and QX/I [GE Healthcare], Brilliance64 and Mx8000 IDT 16 [Philips Healthcare] and Definition [Siemens Healthcare] scanners.

Figure 1. A schematic of the graph-based segmentation algorithm. NCP – non-contrast phase; PVP – portal venous phase.

Twenty-eight random abdominal CT studies with or without contrast enhancement from healthy subjects were used to create statistical models. Data were collected at high resolution (1mm slice thickness) with in-slice resolution from 0.54 mm to 0.91 mm. The liver, spleen and left and right kidneys were manually segmented by two research fellows supervised by a board-certified radiologist (one segmentation for each organ). The tip of the xiphoid process (an ossified cartilaginous extension below the sternal notch) was marked manually in these data and used in the location, appearance and shape models.

For testing the algorithm, 20 random abdominal CT studies (normal and abnormal) were obtained with two temporal acquisitions (40 CT scans). The first image was obtained at non-contrast phase (NCP) and a second at portal venous phase (PVP) using fixed delays. Image resolution ranged from 0.62 to 0.82 mm in the axial view. Ten images were of low resolution (5 mm slice thickness) and were used for training and testing the algorithm using a leave-one-out strategy. Ten images were of high resolution (1 mm slice thickness) and were used only for testing. The liver, spleen and left and right kidneys were manually segmented (by two research fellows supervised by a board-certified radiologist) in the 20 CT cases using the PVP CT volumes to provide a reference standard for testing the method.

2.2 Model Initialization

The statistical models of location and appearance were built from the 28 CT cases described in the previous section (10 NCP and 18 PVP cases). The 28 CT data were further used to build shape constraints via a Parzen window distribution, as explained in the construction of the 4D graph.

A probabilistic atlas (PA) was constructed for each organ: liver, spleen, left kidney and right kidney (Reyes et al., 2009). Organ locations were normalized to an anatomical landmark (xiphoid process) to preserve spatial relationships and model organs in the anatomical space. A random image set was used as reference and the remaining images registered to it. The registration was performed for each organ separately. Structural variability, including the size of organs, was conserved by a size-preserving affine registration adapted from (Studholme et al., 1999). The location bias was minimized by the normalization by the tip of the xiphoid process. The PA was constructed independently from the segmentation algorithm.
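For illustration only, the sketch below (Python/NumPy, with hypothetical names) shows the final averaging step of such an atlas, assuming the per-subject binary organ masks have already been normalized to the xiphoid landmark and registered with the size-preserving affine transform; the registration step itself is not shown.

    import numpy as np

    def build_probabilistic_atlas(aligned_masks):
        """Average binary organ masks (already landmark-normalized and affinely
        registered to the reference space) into a voxel-wise probability map."""
        masks = np.stack([m.astype(np.float32) for m in aligned_masks], axis=0)
        return masks.mean(axis=0)  # probability of each voxel belonging to the organ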

Appearance statistics were computed from the training data (the 28 cases used in the model). Histograms of the segmented organs (objects) and background at NCP and PVP were computed and modeled as sums of Gaussians, as in Figure 2.
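As an illustrative sketch of this appearance model, the snippet below fits a sum of two Gaussians to the organ intensities of one phase using scikit-learn; the number of components and all names are assumptions for illustration, not the exact fitting procedure used here.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_intensity_model(organ_voxels, n_components=2):
        """Fit a sum of Gaussians to organ intensities (e.g., a liver/spleen peak
        and a kidney peak) for one CT phase; organ_voxels is a 1D array of HU values."""
        samples = np.asarray(organ_voxels, dtype=float).reshape(-1, 1)
        gmm = GaussianMixture(n_components=n_components).fit(samples)
        return gmm  # gmm.means_, gmm.covariances_ and gmm.weights_ describe the fitted sum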

Figure 2. Fitted sums of Gaussians to training data of organs/objects (a and b) and background (c and d). NCP intensity models are shown in (a and c) and PVP data in (b and d). Here, training data refers to the training cases in the leave-one-out strategy. The histogram peaks related to the liver/spleen and kidneys are marked.

2.3 Preprocessing

Although the multiphase images (for both training and testing cases) were acquired intra-patient during the same session, there was small but noticeable abdominal interphase motion, especially associated with breathing. The preprocessing follows our work in (Linguraru and Summers, 2008).

Data were smoothed using anisotropic diffusion (Perona and Malik, 1990). NCP data were registered to the PVP images. The demons non-linear registration algorithm was employed (Thirion, 1998), as the limited range of motion ensures partial overlaps between organs over multiple phases. The deformation field F of image I to match image J was governed by the optical flow equation

F = \frac{(I - J)\,\nabla J}{\|\nabla J\|^2 + (I - J)^2}.   (1)
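A minimal voxel-wise sketch of this force, assuming NumPy arrays for the moving image I and the fixed image J; it computes only the demons update force of equation (1), not the full iterative registration with regularization of the deformation field.

    import numpy as np

    def demons_force(I, J, eps=1e-8):
        """Optical-flow force of eq. (1): F = (I - J) grad(J) / (|grad(J)|^2 + (I - J)^2)."""
        grad_J = np.array(np.gradient(J))            # one gradient component per axis
        diff = I - J
        denom = (grad_J ** 2).sum(axis=0) + diff ** 2 + eps
        return diff * grad_J / denom                 # force components along each axis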

2.4 4D Convolution

From smoothed training data of multiphase CT, the min and max intensities of organs were estimated as min_{i,t} = \mu_{i,t} - 3\sigma_{i,t} and max_{i,t} = \mu_{i,t} + 3\sigma_{i,t}, where i = 1..3 indexes liver, spleen and kidneys, \mu_{i,t} and \sigma_{i,t} represent the mean and standard deviation, and t = 1, 2 indexes NCP and PVP. As in (Linguraru and Summers, 2008), a 4D array K(x,y,z,t) = I_t(x,y,z) was created from the multiphase data. A convolution with a 4D filter f labeled only regions for which all voxels in the convolution kernel satisfied the intensity constraints

L(x,y,z) = (K \circ f)(x,y,z) = \begin{cases} l_j, & \text{if } \forall t\; (\min_{j,t} \le K(x,y,z,t) \le \max_{j,t}) \\ 0, & \text{otherwise.} \end{cases}   (2)

L represents the labeled image and l_j the labels (j = 1..4 for liver, spleen, left kidney and right kidney). The labeled organs in L appear eroded as a result of the 4D convolution. In our method, L provided seeds for objects (I_o) in the 4D graph and was used to estimate the patient-specific histograms. The eroded inverted L provided the background (I_b) seeds and the related histograms. To report the segmentation results by 4D convolution (see Results), L was dilated to compensate for the undersegmentation of organs.
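The labeling of equation (2) can be sketched as a binary erosion of per-phase intensity-window masks; the snippet below (SciPy, with an assumed cubic kernel size) keeps a label only where the whole neighborhood satisfies the trained bounds at every phase.

    import numpy as np
    from scipy.ndimage import binary_erosion

    def convolve_4d_labels(volumes, bounds, labels, kernel_size=3):
        """volumes: dict phase -> 3D array, e.g. {'ncp': ..., 'pvp': ...}.
        bounds: dict organ -> dict phase -> (min, max) trained intensity windows.
        labels: dict organ -> integer label l_j. Returns the labeled image L."""
        L = np.zeros(next(iter(volumes.values())).shape, dtype=np.uint8)
        kernel = np.ones((kernel_size,) * 3, dtype=bool)
        for organ, lab in labels.items():
            mask = np.ones_like(L, dtype=bool)
            for phase, vol in volumes.items():
                lo, hi = bounds[organ][phase]
                mask &= (vol >= lo) & (vol <= hi)
            # erosion: keep a voxel only if its whole neighborhood satisfies the constraints
            L[binary_erosion(mask, structure=kernel)] = lab
        return L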

2.5 4D Graph

Graph cuts (GC) were chosen for their inherent capability to provide a globally optimal solution for segmentation (Boykov and Jolly, 2001). Let A = (A_1, A_2, …, A_p, …) be a vector that defines the segmentation. The component A_p associated with the voxel p in an image can be assigned a label of either object of interest/organ O_i (with i = 1..4, for liver, spleen, left kidney and right kidney) or background B, where B ∩ O = Ø and O_i ∩ O_j = Ø for i ≠ j. In the classical graph cut algorithm, A_p takes binary values for O and B. In our application, A_p can have a value from 0 to 4, where 0 denotes the background, 1 the liver, 2 the spleen, 3 the right kidney and 4 the left kidney.

The inputs to our problem are two sets of registered abdominal CT scans per patient: the NCP and PVP sequences. Hence every voxel p in the graph has two intensity values: I_{ncp}^{p} and I_{pvp}^{p}. A simplified schematic representation of the 4D graph is shown in Figure 3. Every voxel is connected to both O_i (sources) and B (sink) via t-links and to its neighbors via n-links (which can be directional). Source and sink are terminologies used in (Boykov and Jolly, 2001). The costs of the connections determine the segmentation, and weak links are good candidates for the cut. Typical graph cuts perform data labeling (t-links) via log-likelihoods based solely on 2D or 3D interactive histogram fitting. They penalize neighborhood discontinuities (n-links) through likelihoods from the image contrast/gradients (Boykov and Jolly, 2001).

Figure 3. A simplified schematic of the multi-object multi-phase graph. 4D information is input from the NCP and PVP data. T-links are connected to the objects (O1 to On) and background (B) terminals. Directional n-links connect neighboring nodes (the image shows only two neighbors for each voxel). The width of a line in the graph reflects the strength of the connection.

We first extend the formulation to analyze 4D data and then incorporate penalties from the contrast enhancement of CT soft tissue, Parzen shape windows and location from a priori probabilities. While location knowledge is incorporated in the labeling of objects, shape information penalizes boundaries not resembling the references. The energy E to minimize can be written generically as

E(A) = E_{data}(A) + E_{enhance}(A) + E_{location}(A) + \sum_{i=1}^{4} \bigl( E_{boundary}(A) + E_{shape}(A) \bigr),   (3)

with i = 1..4 for liver, spleen, left kidney and right kidney. The components of this cost function are described below.

2.5.1 T-links

In this application, Edata is a regional term that computes penalties based on 4D histograms of O and B. The probabilities P of a voxel to belong to O or B are computed from patient specific histograms of NCP and PVP data, as described in the previous section.

E_{data}(A) = \lambda \sum_{p \in O} R_p(O) + (1 - \lambda) \sum_{p \in B} R_p(B);   (4)
R_p(O_i) = -\ln\left( \frac{P_{ncp}(I_{ncp}^{p} \mid O_i)\, P_{pvp}(I_{pvp}^{p} \mid O_i)}{\sum_i P_{ncp}(I_{ncp}^{p} \mid O_i)\, P_{pvp}(I_{pvp}^{p} \mid O_i) + P_{ncp}(I_{ncp}^{p} \mid B)\, P_{pvp}(I_{pvp}^{p} \mid B)} \right);   (5)
R_p(B) = -\ln\left( \frac{P_{ncp}(I_{ncp}^{p} \mid B)\, P_{pvp}(I_{pvp}^{p} \mid B)}{\sum_i P_{ncp}(I_{ncp}^{p} \mid O_i)\, P_{pvp}(I_{pvp}^{p} \mid O_i) + P_{ncp}(I_{ncp}^{p} \mid B)\, P_{pvp}(I_{pvp}^{p} \mid B)} \right).   (6)
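A sketch of the regional penalties in equations (5) and (6), assuming the patient-specific histograms have already been converted into per-phase likelihood maps P(I^p | O_i) and P(I^p | B); all names are illustrative.

    import numpy as np

    def regional_penalties(P_ncp_obj, P_pvp_obj, P_ncp_bkg, P_pvp_bkg, eps=1e-12):
        """P_ncp_obj, P_pvp_obj: lists over organs i of voxel-wise likelihood arrays
        P(I^p | O_i); P_ncp_bkg, P_pvp_bkg: likelihood arrays P(I^p | B).
        Returns the per-voxel penalties R_p(O_i) (one array per organ) and R_p(B)."""
        obj_joint = [pn * pv for pn, pv in zip(P_ncp_obj, P_pvp_obj)]
        bkg_joint = P_ncp_bkg * P_pvp_bkg
        denom = sum(obj_joint) + bkg_joint + eps
        R_obj = [-np.log(oj / denom + eps) for oj in obj_joint]   # equation (5)
        R_bkg = -np.log(bkg_joint / denom + eps)                  # equation (6)
        return R_obj, R_bkg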

Eenhance penalizes regions that do not enhance rapidly during the acquisition of NCP-PVP CT data (i.e. muscles, ligaments and marrow). Liver, spleen and kidneys are expected to enhance faster. Eenhance can be seen as a gradient in the 4th dimension of the multiphase data. σncp and σpvp are the standard deviations of noise associated with NCP and PVP.

E_{enhance}(A) = \sum_{p \in P} \frac{1}{1 + E_p^2}, \quad \text{with } E_p = \frac{(I_{pvp}^{p} - I_{ncp}^{p})^2}{2\sigma_{ncp}\sigma_{pvp}}.   (7)

Due to the different enhancement patterns of abdominal organs, the peaks in the organs’ histograms in Figure 2 are distinguishable between liver/spleen (high peaks in Figure 2.a and Figure 2.b) and kidneys (low peaks in Figure 2.a and Figure 2.b). However, the image intensity used in Edata is insufficient to separate the liver from the spleen, and the left and right kidneys. Therefore, we fitted the same intensity and enhancement models to the liver and spleen and similarly, we analyzed the intensities of the kidneys together. However, the probabilistic atlas used in Elocation allows separating the liver and spleen and the two kidneys. Location constraints from the normalized probabilistic atlas (PA) can be seen as

E_{location}(A) = -\sum_{p \in P} \ln\bigl( PA_p(p \in O) \bigr).   (8)

PA_p represents the probability that p belongs to O. PA_p was obtained by registering the PA to the test images through a sequence of coarse-to-fine affine registrations. In the current version of the algorithm, the xiphoid process is not detected in the test cases and the registration of the atlas to a test case is based on image intensity.

The individual energies or costs of the t-links of p to the graph terminals can be written as below, where the enhancement is used as a penalty term.

E_{O_i}^{p} = \frac{E_p^2 \bigl( (1 - \lambda) R_p(B) - \ln(1 - PA_p(p \in O_i)) \bigr)}{1 + E_p^2};   (9)
E_{B}^{p} = \frac{\lambda R_p(O) - \ln\bigl( PA_p(p \in O) \bigr)}{1 + E_p^2}.   (10)
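Combining equations (7)-(10), the sketch below computes the two t-link costs per voxel from the regional penalties, the registered atlas probabilities and the enhancement measure; the negative-log penalty convention follows equation (8), and all names are illustrative.

    import numpy as np

    def tlink_costs(R_bkg, R_obj_i, PA_i, PA_union, I_ncp, I_pvp,
                    sigma_ncp, sigma_pvp, lam=0.5, eps=1e-12):
        """T-link costs of each voxel to the organ terminal O_i and the background B.
        R_bkg, R_obj_i: regional penalties of eqs. (5)-(6); PA_i: atlas probability of O_i;
        PA_union: atlas probability of belonging to any organ O."""
        E_p = (I_pvp - I_ncp) ** 2 / (2.0 * sigma_ncp * sigma_pvp)                        # eq. (7)
        w_obj = E_p ** 2 * ((1 - lam) * R_bkg - np.log(1 - PA_i + eps)) / (1 + E_p ** 2)  # eq. (9)
        w_bkg = (lam * R_obj_i - np.log(PA_union + eps)) / (1 + E_p ** 2)                 # eq. (10)
        return w_obj, w_bkg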

2.5.2 N-links

E_boundary assigns penalties for 4D heterogeneity between two voxels p and q, with q ∈ N_p, a small neighborhood of p, and dist(p, q) the Euclidean distance between p and q.

E_{boundary}(A) = \mu \sum_{\{p,q\} \in N_p} w_{\{p \to q\}} + (1 - \mu) \sum_{\{p,q\} \in N_p} w_{\{q \to p\}}.   (11)

The directional penalties in Eboundary are initialized symmetrically as

w_{\{p \to q\}} = w_{\{q \to p\}} = \begin{cases} 0, & \text{if } A_p = A_q \\ \exp\left( -\frac{\lvert I_{ncp}^{p} - I_{ncp}^{q} \rvert \, \lvert I_{pvp}^{p} - I_{pvp}^{q} \rvert}{2\sigma_{ncp}\sigma_{pvp}} \right) \frac{1}{dist(p,q)}, & \text{otherwise.} \end{cases}   (12)

Then the condition in (13) penalizes transitions from dark (less enhanced) to bright (more enhanced) regions to correct the edges of O, considering image noise. This is an intrinsic attribute of medical data (e.g. the visceral fluids and fat are darker than O).

\text{IF } \bigl( (I_{pvp}^{p} - I_{pvp}^{q}) > \sigma_{pvp} \text{ OR } (I_{ncp}^{p} - I_{ncp}^{q}) > \sigma_{ncp} \bigr) \text{ THEN } w_{\{q \to p\}} = 1, \text{ ELSE } w_{\{p \to q\}} = 1.   (13)
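A sketch of the symmetric initialization in equation (12) and the directional correction in equation (13) for one neighboring pair (p, q); the absolute differences in the exponential are an assumption about the intended formula, and the helper itself is illustrative.

    import numpy as np

    def boundary_weights(Incp_p, Incp_q, Ipvp_p, Ipvp_q, dist_pq,
                         sigma_ncp, sigma_pvp, same_label=False):
        """Directional n-link weights (w_{p->q}, w_{q->p}) for one neighboring pair."""
        if same_label:
            return 0.0, 0.0
        w = np.exp(-abs(Incp_p - Incp_q) * abs(Ipvp_p - Ipvp_q)
                   / (2.0 * sigma_ncp * sigma_pvp)) / dist_pq          # eq. (12), symmetric
        w_pq, w_qp = w, w
        # eq. (13): penalize dark-to-bright transitions that exceed the noise level
        if (Ipvp_p - Ipvp_q) > sigma_pvp or (Incp_p - Incp_q) > sigma_ncp:
            w_qp = 1.0
        else:
            w_pq = 1.0
        return w_pq, w_qp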

Shape constraints were introduced using Parzen shape (PS) windows (Parzen, 1962) estimated from the reference organ shapes from the 28 CT data used for modeling. First, the result of the 4D convolution (L) was used to align the shape references using scaling, rotation and the location of the centroids. An asymmetric normalized dissimilarity measure D (equation (16)) between two shapes (si and sj) was used to avoid the bias introduced by L, which is an approximation of the shape of the object/organ s. H is the Heaviside step function, s refers to the binary segmentation of an organ and x to the integration over the image domain.

PS(s) = \frac{1}{n} \sum_{i=1}^{n} \exp\left( -\frac{D(s, s_i)}{2\sigma^2} \right)   (14)
\text{with } \sigma^2 = \frac{1}{n} \sum_{i=1}^{n} \min_{j \ne i} D(s_j, s_i)   (15)
\text{and } D(s_j, s_i) = \frac{\int \bigl( H(s_j) - H(s_i) \bigr)^2 H(s_j)\, dx}{\int H(s_j)\, dx}.   (16)
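A sketch of the asymmetric dissimilarity in equation (16) and the Parzen shape window of equations (14)-(15), assuming binary NumPy masks that have already been aligned by scale, rotation and centroid.

    import numpy as np

    def shape_dissimilarity(s_j, s_i):
        """Asymmetric, normalized dissimilarity of eq. (16) between binary shapes."""
        s_j, s_i = s_j.astype(bool), s_i.astype(bool)
        return ((s_j ^ s_i) & s_j).sum() / float(s_j.sum())

    def parzen_shape_prob(s, references):
        """Parzen shape window PS(s) of eq. (14) with the bandwidth of eq. (15)."""
        n = len(references)
        sigma2 = np.mean([min(shape_dissimilarity(references[j], references[i])
                              for j in range(n) if j != i) for i in range(n)])
        return np.mean([np.exp(-shape_dissimilarity(s, s_i) / (2.0 * sigma2))
                        for s_i in references])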

The penalties used in Eshape are initialized symmetrically from PS.

\nu_{\{p \to q\}} = \nu_{\{q \to p\}} = \begin{cases} 0, & \text{if } A_p = A_q \\ \frac{\max\bigl( PS(s)_p, PS(s)_q \bigr)}{dist(p,q)}, & \text{otherwise} \end{cases}   (17)

and

\text{IF } \bigl( PS(s)_p > PS(s)_q \bigr) \text{ THEN } \nu_{\{q \to p\}} = 1, \text{ ELSE } \nu_{\{p \to q\}} = 1.   (18)

The directionality of the n-link in (18) penalizes transitions from lower to higher shape probabilities to encourage cuts where there is a strong prior shape resemblance. The shape energy becomes

E_{shape}(A) = \delta \sum_{\{p,q\} \in N_p} \nu_{\{p \to q\}} + (1 - \delta) \sum_{\{p,q\} \in N_p} \nu_{\{q \to p\}}.   (19)

Parameters λ, μ and δ are constants that weigh the contributions of object versus background and the directionality of the graph at boundaries and shapes, respectively (all set to 0.5 for equal contributions).

To address the NP-hard problem for the segmentation of more than two labels via graph cuts, we adopted the α-expansion move proposed in (Boykov et al., 2001). The algorithm breaks the multi-label cut into a sequence of binary source-sink cuts. With each expansion, a given label takes space from the other labels. Moves are allowed only if there is a decrease in energy. The segmentation is thus reduced to a binary optimization problem. For additional details, please consult http://www.csd.uoc.gr/~komod/ICCV07_tutorial/.
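A schematic sketch of the expansion-move loop is given below; binary_expansion_cut and energy_of are hypothetical placeholders for a binary s-t min-cut solver and for the full energy of equation (3), respectively.

    def alpha_expansion(labels, all_labels, energy_of, binary_expansion_cut, max_sweeps=5):
        """Approximate multi-label minimization (Boykov et al., 2001) via binary expansion moves.
        labels: current label image; all_labels: e.g. [0, 1, 2, 3, 4];
        energy_of(labels): evaluates eq. (3); binary_expansion_cut(labels, alpha):
        placeholder solver returning the best labeling where voxels may switch to alpha."""
        best_energy = energy_of(labels)
        for _ in range(max_sweeps):
            improved = False
            for alpha in all_labels:
                candidate = binary_expansion_cut(labels, alpha)   # binary source/sink cut
                e = energy_of(candidate)
                if e < best_energy:                               # accept only energy-decreasing moves
                    labels, best_energy, improved = candidate, e, True
            if not improved:
                break
        return labels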

2.6 Analysis

We compared results obtained after the 4D convolution to those achieved using intensity-based 4D GC (without shape and location constraints) and after including shape and location correction. We computed the Dice coefficient (symmetric volume overlap), volume error (volume difference over the volume of the reference), root mean square error and average surface distance from comparison with the manual segmentations. Non-parametric statistical tests (Mann-Whitney U tests) were performed to assess the significance of segmentation improvement at different steps of the algorithm using the overlap measure at a 95% confidence level.
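For reference, the overlap and volume measures above can be computed from binary masks as sketched below; the surface distances additionally require surface extraction and are omitted, and the helper names are illustrative.

    import numpy as np

    def dice_coefficient(seg, ref):
        """Symmetric volume overlap (Dice) between binary masks."""
        seg, ref = seg.astype(bool), ref.astype(bool)
        return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

    def volume_error(seg, ref):
        """Volume difference over the volume of the reference."""
        return abs(int(seg.sum()) - int(ref.sum())) / float(ref.sum())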

The influence of patient specific (from the patient CT) versus population (training data) statistics on the accuracy of organ segmentation was analyzed. We also investigated the role of image resolution in the quality of results.

2.7 General Parzen Model

Generating shape models via Parzen windows, as described above, requires repeating a spatial normalization (by location, orientation and scale) up to the number of reference shapes (in our application, 28). For computational reasons, we created a general Parzen model (GPM), which can be adapted to the analyzed data through a single spatial normalization. The creation of the GPM follows the method described in equation (12), but instead of requiring L as normalizing reference, it uses a random reference shape from the set of 28. For consistency, we employed the same case used in the creation of the PA. The GPM is created off-line and aligned to L by a single spatial normalization, as in Figure 4. Results with and without GPM were compared.

Figure 4. The construction of the general Parzen model (right) requires one on-line registration, while a typical Parzen model (left) performs N on-line registrations, with N the number of training shapes.

For additional computational optimization, a multiscale multithreading approach was implemented.

3. Results

Quantitative results from applying our method to the segmentation of liver, spleen and kidneys are shown in Table I at different stages of the algorithm. The use of 4D intensity-based graph-cuts improved the results significantly over those of the 4D convolution for all organs (p<0.05 for all). Employing shape and location information brought a further significant improvement for the segmentation of the spleen and liver (p<0.05 for both). Significantly better segmentations by using patient specific data over training data were noted for both kidneys (p<0.03 for both).

Comparative results in Table II illustrate the effects of image resolution, use of multiphase data and GPM on the accuracy of segmentations. At low spatial resolution (5 mm slice thickness), overlaps were above 91.8% and ASD was below 1.1 mm for all organs. A significant improvement was obtained for the segmentation of left and right kidneys and spleen (p<0.05 for all) when data of high resolution were involved (1 mm slice thickness), leading to overlaps over 93.6% and ASD under 1.1 mm for the four abdominal organs. Only the segmentation of the liver benefitted significantly from the use of multiphase data (p<0.04) when compared to using only singlephase images at PVP enhancement. There was no significant difference between the segmentations obtained employing a patient specific shape model and results using GPM.

Table II.

The effects of image resolution (low versus high) and number of CT phases (single versus multiple) on the accuracy of segmentation (mean±std) for the liver, spleen, left kidney and right kidney. Columns present the Dice coefficient (DC), volume estimation error (VER), root mean square (RMSE) error and average surface distance (ASD). GPM refers to the use of a general Parzen model in the computation of the shape energy. Highlighted cells mark the organs where a significant change (p<0.05) was noted between results on data of low and high resolution, results on multiphase versus singlephase data, and results using a patient specific shape model versus GPM. The metric used to test the significance of results was DC.

METHOD                                 ORGAN     DC (%)      VER (%)     RMS (mm)   ASD (mm)
4D GCSL (Low Res – Multiphase)         LKidney   91.9±3.0    6.7±5.2     1.8±0.8    0.8±0.3
                                       RKidney   93.2±1.5    5.5±4.5     1.8±0.8    0.8±0.4
                                       Spleen    91.8±1.5    6.6±5.7     2.1±0.9    1.0±0.5
                                       Liver     95.6±0.6    2.4±1.1     3.0±1.3    1.1±0.4
4D GCSL (High Res – Multiphase)        LKidney   94.0±1.0    4.6±3.5     1.2±0.2    0.8±0.2
                                       RKidney   93.9±0.8    4.7±3.6     1.3±0.2    0.9±0.1
                                       Spleen    93.6±1.8    5.0±3.7     1.5±1.2    0.9±0.6
                                       Liver     96.4±0.7    2.6±1.2     2.8±1.3    1.1±0.6
4D GCSL (Low Res – Singlephase)        LKidney   91.8±3.0    6.8±6.1     1.8±1.0    0.8±0.4
                                       RKidney   92.9±1.6    6.2±4.6     2.0±1.0    0.9±0.5
                                       Spleen    90.9±1.9    7.4±5.9     2.5±1.1    1.1±0.6
                                       Liver     94.8±0.9    2.9±1.6     3.3±1.7    1.2±0.5
4D GCSL (GPM – Low Res – Multiphase)   LKidney   91.9±3.1    8.7±5.1     2.0±1.2    0.9±0.7
                                       RKidney   93.1±1.5    9.2±4.3     1.8±0.8    0.8±0.4
                                       Spleen    91.7±1.6    6.7±5.8     2.0±1.0    1.0±0.6
                                       Liver     95.6±0.6    2.4±1.1     3.0±1.3    1.1±0.4

Figure 5 presents a typical example of liver, spleen and kidneys segmentation from axial projections of the 3D CT. A 3D rendering is shown in Figure 6 along with the errors between manual and automated segmentations. Finally, Figure 7 illustrates comparative results at different stages of the algorithm.

Figure 5. A typical example of liver (blue), spleen (green), right kidney (yellow) and left kidney (red) automated segmentation on 2D axial views of the CT data. Images are shown in cranial to caudal order from top left to bottom right.

Figure 6. 3D images of the automatically segmented abdominal organs; (a) is a posterior view and (b) an anterior view. The liver ground truth is blue, spleen is green, right kidney is yellow, left kidney is red. Segmentation errors are overlaid in white on each organ.

Figure 7. Comparative examples of liver (blue), spleen (green), right kidney (yellow) and left kidney (red) at different stages of the segmentation algorithm. Left and right columns present two axial slices of the same patient; (a) and (b) are results of the 4D convolution; (c) and (d) are results of the intensity-based 4D GC (4D GCI); (e) and (f) were obtained using the full method (4D GCSL) with training data; (g) and (h) are results from 4D GCSL using patient specific information. Rectangles emphasize areas of differences around the organs. See Table I for quantitative results.

4. Discussion

Livers, spleens and kidneys were segmented from multiphase clinical data following the typical acquisition protocol of abdominal CT images. Training data from a patient population were used to automatically initialize the graph by an adaptive 4D convolution. Then patient specific image characteristics were estimated for improved specificity and input into the 4D directional graph. This was particularly helpful for the segmentation of kidneys. While there were partial overlaps between the object and background distributions (especially at NCP), the combination of multiphase data ensured a better separation.

The cuts in the 4D graph were based on globally minimizing an energy that included enhancement, location and shape constraints. The enhancement information helped account for regional bias within tissues, thereby better modeling their physiological properties. The location probabilistic priors, intrinsic to medical data, and shape information from the asymmetric computation of Parzen shape windows (to avoid shape bias) supplied additional constraints for the global optimization of the graph. A Parzen distribution was preferred as a non-parametric probability model that converges to the true density with an increasing number of samples.

Using graph cuts based only on intensity information significantly improved the segmentation of all four abdominal organs over the 4D convolution. This was likely caused by the addition of the 4D boundary information in graphs. Nevertheless, the 4D convolution was a surprisingly robust initializer for the graph construction due to its use of multiphase information.

Moving from training to patient-specific statistics only improved the segmentation of the kidneys, probably due to the prevalence of liver and spleen statistics in the object histogram. Optimizing the graph with shape and location constraints brought a significant improvement in the segmentation of the spleen and liver; the kidneys, already well segmented at the previous step of the algorithm due to the strong image contrast at their edges from fast enhancement, vary less in shape. Results further suggested that the segmentation of the spleen and kidneys is influenced by the image resolution, unlike that of the liver, the largest abdominal organ in our application. The segmentation of the spleen and kidneys was not sensitive to multiphase versus singlephase data, unlike the liver, probably due to the good quality of enhancement at PVP in this dataset.

The method avoided the inclusion of heart segments in the segmentation of liver, but had the tendency to underestimate organ volumes, in particular that of the spleen. Parts of the inferior vena cava may be erroneously segmented in the mid-cephalocaudal liver region, especially when contrast enhancement is low, and represented one of the sources of error in the liver segmentation (Figure 6). Partial volume effects, small interphase registration errors and the estimation of object and background distributions may have also contributed to the undersegmentation.

On a typical dataset and without multiscale optimization, the processing time was on average 9 h 25 min. To reduce the computational costs of the segmentation method, a general Parzen model (GPM) was tested, and results from GPM did not vary significantly from those obtained otherwise. However, employing GPM, the computation was reduced to an average of 3 h; the registration of the multiphase CT scans accounted for 2 h 12 min of this time. After multiscale optimization, the total computation time became 15 min on a quad core 2.67 GHz processor with 8 GB RAM.

A main advantage in using the shape energy in (19) is that most of the processing of the energy can be performed off-line prior to the segmentation, especially when GPM is used. The shape model is non-parametric and does not require the computation of signed distance functions, as in (Shimizu et al., 2010b), or overlaps, as in (Freiman et al., 2010). However, by using the directional capabilities of the graph in the weights of the n-links, the shape model includes simpler and intuitive information similar to the vector of the gradient used in (Shimizu et al., 2010b) without the need to explicitly compute it. Finally, the multi-label segmentation was computed using a sequence of binary graph cuts (Boykov et al., 2001). A different solution to segment multi-labels via a single graph cut was proposed in (Delong and Boykov, 2009).

In the future, we will include more organs in the 4D graph for a holistic segmentation of abdominal data. Additionally, we will test the method on data with a variety of abdominal pathologies towards developing a segmentation technique robust to physiological and clinical variability.

5. Conclusion

A new formulation for a 4D graph-based method to segment abdominal organs from multiphase CT data was proposed. The method extends basic graph cuts by using: 1) temporal acquisitions at two phases and enhancement modeling; 2) shape priors from Parzen windows; and 3) location constraints from a probabilistic atlas. The automated technique was optimized to employ constraints typical to medical images and adapted to patient data. Livers, spleens and kidneys were robustly and accurately segmented from data of low and high resolution. This approach promises to support the processing of large medical data in a clinically-oriented integrated analysis of the abdomen.

Highlights.

Anatomical and physiological models were incorporated in statistical graph cuts.

Multiple abdominal organs were automatically segmented and analyzed.

Appearance, shape and location priors improved the accuracy of organ segmentation.

Livers, spleens and kidneys were segmented with volume overlaps over 93.6%.

Table I.

Statistics (mean±std) for the liver, spleen, left kidney and right kidney segmentation results from data of low resolution (5mm slice thickness). Columns present the Dice coefficient (DC), volume estimation error (VER), root mean square (RMSE) error and average surface distance (ASD). 4D C represents the convolution, GCI is GC based solely on image intensity (including 4D appearance and enhancement) and 4D GCSL includes shape and location constraints. Highlighted cells mark the organs where a significant improvement (p<0.05) was obtained between consecutive steps of the segmentation algorithm, as indicated by numbers from 1 to 4 in the table. The metric used to test the significance of results was DC.

METHOD                       ORGAN     DC (%)      VER (%)      RMS (mm)   ASD (mm)
1. 4D C (Training Data)      LKidney   88.7±3.7    10.9±8.9     2.3±0.4    1.1±0.3
                             RKidney   89.6±3.4    13.6±6.8     2.1±0.5    1.1±0.3
                             Spleen    79.9±10.1   14.9±16.9    4.5±1.9    2.7±1.7
                             Liver     89.1±3.7    7.3±4.6      6.7±1.5    3.4±1.0
2. 4D GCI (Patient Data)     LKidney   92.6±2.4    5.4±6.9      1.8±1.2    0.8±0.6
                             RKidney   92.8±1.9    5.6±5.8      1.8±0.8    0.8±0.4
                             Spleen    89.6±2.7    11.4±6.9     3.0±1.4    1.5±0.9
                             Liver     94.0±1.2    6.2±2.8      4.4±2.0    1.8±0.7
3. 4D GCSL (Patient Data)    LKidney   91.9±3.0    6.7±5.2      1.8±0.8    0.8±0.3
                             RKidney   93.2±1.5    5.5±4.5      1.8±0.8    0.8±0.4
                             Spleen    91.8±1.5    6.6±5.7      2.1±0.9    1.0±0.5
                             Liver     95.6±0.6    2.4±1.1      3.0±1.3    1.1±0.4
4. 4D GCSL (Training Data)   LKidney   90.8±2.7    12.8±7.1     2.6±1.1    1.2±0.6
                             RKidney   92.6±1.6    9.2±4.3      2.0±0.7    0.9±0.3
                             Spleen    91.9±1.5    6.4±5.0      1.9±0.6    0.9±0.4
                             Liver     95.5±0.7    2.1±1.6      3.0±1.3    1.2±0.5

Acknowledgements

This work was supported in part by the Intramural Research Program of the National Institutes of Health, Clinical Center. The authors would like to thank Ananda S. Chowdhury, PhD, Jesse K. Sandberg, Visal Desai and Javed Aman for helping with the data analysis.


References

1. Al-Kofahi Y, Lassoued W, Lee W, Roysam B. Improved automatic detection and segmentation of cell nuclei in histopathology images. IEEE Trans Biomed Eng. 2009;57(4):841–852. doi: 10.1109/TBME.2009.2035102.
2. Ali AM, Farag AA, El-Baz AS. Graph cuts for kidney segmentation with prior shape constraints. Proceedings of MICCAI 2007, Part I, LNCS. 2007:384–392.
3. Bae E, Tai XC. Efficient global minimization for the multiphase Chan-Vese model of image segmentation. Energy Minimization Methods in Computer Vision and Pattern Recognition, LNCS. 2009;5681:28–41.
4. Bauer C, Pock T, Sorantin E, Bischof H, Beichel R. Segmentation of interwoven 3D tubular tree structures utilizing shape priors and graph cuts. Med Image Anal. 2010;14(2):172–184. doi: 10.1016/j.media.2009.11.003.
5. Ben Ayed I, Punithakumar K, Li S, Islam A, Chong J. Left ventricle segmentation via graph cut distribution matching. Med Image Comput Comput Assist Interv. 2009;12(Pt 2):901–909.
6. Boykov Y, Jolly MP. Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images. Int Conf Comp Vis. 2001;I:105–112.
7. Boykov Y, Veksler O, Zabih R. Fast approximate energy minimization via graph cuts. IEEE Trans Pattern Anal Mach Intell. 2001;23(11):1222–1239.
8. Boykov Y, Kolmogorov V. Computing geodesics and minimal surfaces via graph cuts. Int Conf Comp Vis. 2003.
9. Chen JH, Shapiro LG. Medical image segmentation via min s-t cuts with sides constraints. Int Conf Pattern Recog. 2008.
10. Cremers D, Osher SJ, Soatto S. Kernel density estimation and intrinsic alignment for shape priors in level set segmentation. Int J Comp Vis. 2006;69(3):335–351.
11. Danelson KA, Stitzel JD. Volumetric splenic injury measurement in CT scans for comparison with injury score. Biomed Sci Instrum. 2008;44:159–164.
12. Das P, Veksler O, Zavadsky V, Boykov Y. Semiautomatic segmentation with compact shape priors. Image and Vision Computing. 2008;27(1-2):206–219.
13. Deleus F, Van Hulle MM. A connectivity-based method for defining regions-of-interest in fMRI data. IEEE Trans Image Process. 2009;18(8):1760–1771. doi: 10.1109/TIP.2009.2021738.
14. Delingette H, Ayache N. Hepatic surgery simulation. Communications of the ACM. 2005;48(2):31–36.
15. Delong A, Boykov Y. Globally optimal segmentation of multi-region objects. International Conference on Computer Vision. 2009:285–292.
16. El-Zehiry N, Grady L. Discrete optimization of the multiphase piecewise constant Mumford-Shah functional. Energy Minimization Methods in Computer Vision and Pattern Recognition, LNCS. 2011;6819:233–246.
17. Esneault S, Lafon C, Dillenseger JL. Liver vessels segmentation using a hybrid geometrical moments/graph cuts method. IEEE Trans Biomed Eng. 2010;57(2):276–283. doi: 10.1109/TBME.2009.2032161.
18. Freedman D, Zhang T. Interactive graph cut based segmentation with shape priors. Computer Vision Pattern Recognition. 2005.
19. Freiman M, Kronman A, Esses SJ, Joskowicz L, Sosna J. Non-parametric iterative model constraint graph min-cut for automatic kidney segmentation. MICCAI 2010, LNCS 6363. 2010:73–80.
20. Fritz GA, Schoellnast H, Deutschmann HA, Quehenberger F, Tillich M. Multiphasic multidetector-row CT (MDCT) in detection and staging of transitional cell carcinomas of the upper urinary tract. European Radiology. 2006;16(6):1244–1252. doi: 10.1007/s00330-005-0078-0.
21. Gramfort A, Keriven R, Clerc M. Graph-based variability estimation in single-trial event-related neural responses. IEEE Trans Biomed Eng. 2010;57(5):1051–1061. doi: 10.1109/TBME.2009.2037139.
22. García-Lorenzo D, Lecoeur J, Arnold DL, Collins DL, Barillot C. Multiple sclerosis lesion segmentation using an automatic multimodal graph cuts. Med Image Comput Comput Assist Interv. 2009;12(Pt 2):584–591.
23. Grady L, Jolly MP. Weights and topology: a study of the effects of graph construction on 3D image segmentation. MICCAI 2008, Part I, LNCS 5241. 2008:153–161.
24. Heimann T, et al. Comparison and evaluation of methods for liver segmentation from CT datasets. IEEE Trans Med Imaging. 2009;28(8):1251–1265. doi: 10.1109/TMI.2009.2013851.
25. Hu X, Shimizu A, Kobatake H, Nawano S. Independent analysis of four-phase abdominal CT images. Proceedings of MICCAI 2004, LNCS 3217. 2004:916–924.
26. Khan SM, Shah M. Tracking multiple occluding people by localizing on multiple scene planes. IEEE Trans Pattern Anal Mach Intell. 2009;31(3):505–519. doi: 10.1109/TPAMI.2008.102.
27. Kolmogorov V, Boykov Y. What metrics can be approximated by geo-cuts or global optimization of length/area and flux. Int Conf Comp Vis. 2005.
28. van der Lijn F, den Heijer T, Breteler MM, Niessen WJ. Hippocampus segmentation in MR images using atlas registration, voxel classification, and graph cuts. Neuroimage. 2008;43(4):708–720. doi: 10.1016/j.neuroimage.2008.07.058.
29. Lin X, Cowan B, Young A. Model-based graph cut method for segmentation of the left ventricle. Proc IEEE Eng Med Biol Soc. 2005;3:3059–3062.
30. Linguraru MG, Summers RM. Multi-organ segmentation in 4D contrast-enhanced abdominal CT. IEEE Symposium on Biomedical Imaging. 2008:45–48.
31. Linguraru MG, Sandberg JA, Li Z, Shah F, Summers RM. Atlas-based automated segmentation of spleen and liver using adaptive enhancement estimation. Med Phys. 2010;37(2):771–783. doi: 10.1118/1.3284530.
32. Liu L, Raber D, et al. Interactive separation of segmented bones in CT volumes using graph cut. Proceedings of MICCAI 2008, Part I, LNCS 5241. 2008:296–304.
33. Lötjönen JM, Wolz R, Koikkalainen JR, Thurfjell L, Waldemar G, Soininen H, Rueckert D. Fast and robust multi-atlas segmentation of brain magnetic resonance images. Neuroimage. 2010;49(3):2352–2365. doi: 10.1016/j.neuroimage.2009.10.026.
34. Malcolm J, Rathi Y, Tannenbaum A. Graph cut segmentation with nonlinear shape priors. Int Conf Im Proc. 2007.
35. Montillo A, Shotton J, Winn J, Iglesias JE, Metaxas D, Criminisi A. Entangled decision forests and their application for semantic segmentation of CT images. Information Processing in Medical Imaging, LNCS. 2011;6801:184–196. doi: 10.1007/978-3-642-22092-0_16.
36. Okada T, Shimada R, Hori M, Nakamoto M, Chen YW, Nakamura H, Sato Y. Automatic segmentation of the liver from 3D CT images using probabilistic atlas and multilevel statistical shape model. Academic Radiology. 2008a;15:1390–1403. doi: 10.1016/j.acra.2008.07.008.
37. Okada T, Yokota K, Hori M, Nakamoto M, Nakamura H, Sato Y. Construction of hierarchical multi-organ statistical atlases and their application to multi-organ segmentation from CT images. MICCAI 2008. 2008b:502–509. doi: 10.1007/978-3-540-85988-8_60.
38. Park H, Bland PH, Meyer CR. Construction of an abdominal probabilistic atlas and its application in segmentation. IEEE Trans Med Imaging. 2003;22(4):483–492. doi: 10.1109/TMI.2003.809139.
39. Parzen E. On estimation of a probability density function and mode. Ann Math Stat. 1962;33:1065–1076.
40. Perona P, Malik J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans Pattern Analysis and Machine Intelligence. 1990;12:629–639.
41. Reyes M, Gonzalez Ballester MA, Li Z, Kozic N, Chin S, Summers RM, Linguraru MG. Anatomical variability of organs via principal factor analysis from the construction of an abdominal probabilistic atlas. IEEE International Symposium on Biomedical Imaging (ISBI). 2009:682–685.
42. Sadananthan SA, Zheng W, Chee MW, Zagorodnov V. Skull stripping using graph cuts. Neuroimage. 2010;49(1):225–239. doi: 10.1016/j.neuroimage.2009.08.050.
43. Sakashita M, Kitasaka T, Mori K, Suenaga Y, Nawano S. A method for extracting multi-organ from four-phase contrasted CT images based on CT value distribution estimation using EM-algorithm. SPIE. 2007;6509:1C1–12.
44. Seabra JC, Pedro LM, Fernandes JF, Sanches JM. A 3-D ultrasound-based framework to characterize the echo morphology of carotid plaques. IEEE Trans Biomed Eng. 2009;56(5):1442–1453. doi: 10.1109/TBME.2009.2013964.
45. Seifert S, Barbu A, Zhou K, Liu D, Feulner J, Huber M, Suehling M, Cavallaro A, Comaniciu D. Hierarchical parsing and semantic navigation of full body CT data. Proc SPIE. 2009;7259:725902–8.
46. Shim H, Chang S, Tao C, Wang JH, Kaya D, Bae KT. Semiautomated segmentation of kidney from high-resolution multidetector computed tomography images using a graph-cuts technique. J Comput Assist Tomogr. 2009;33(6):893–901. doi: 10.1097/RCT.0b013e3181a5cc16.
47. Shimizu A, Ohno R, Ikegami T, Kobatake H, Nawano S, Smutek D. Segmentation of multiple organs in non-contrast 3D abdominal CT images. Int J Comp Assist Radiol Surg. 2007;2:135–142.
48. Shimizu A, Kimoto T, Kobatake H, Nawano S, Shinozaki K. Automated pancreas segmentation from three-dimensional contrast-enhanced computed tomography. Int J Comp Assist Radiol Surg. 2010a;5:85–98. doi: 10.1007/s11548-009-0384-0.
49. Shimizu A, Nakagomi K, Narihira T, Kobatake H, Nawano S, Shinozaki K, Ishizu K, Togashi K. Automated segmentation of 3D CT images based on statistical atlas and graph cuts. MICCAI-MCV. 2010b:129–138.
50. Soler L, Delingette H, Malandain G, Montagnat J, Ayache N, Koehl C, Dourthe O, Malassagne B, Smith M, Mutter D, Marescaux J. Fully automatic anatomical, pathological, and functional segmentation from CT scans for hepatic surgery. Comput Aided Surg. 2001;6(3):131–142. doi: 10.1002/igs.1016.
51. Song Y, Bulpitt A, Brodlie K. Liver segmentation using automatically defined patient specific B-spline surface models. MICCAI 2009, Part II, LNCS 5762. 2009:43–50. doi: 10.1007/978-3-642-04271-3_6.
52. So RWK, Chung AC. Multi-level non-rigid image registration using graph-cuts. IEEE Int Conf Acoustics, Speech and Signal Processing. 2009:397–400.
53. Spiegel M, Hahn DA, Daum V, Wasza J, Hornegger J. Segmentation of kidneys using a new active shape model generation technique based on non-rigid image registration. Comput Med Imaging Graph. 2009;33(1):29–39. doi: 10.1016/j.compmedimag.2008.10.002.
54. Studholme C, Hill DLG, Hawkes DJ. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition. 1999;32(1):71–86.
55. Thirion JP. Image matching as a diffusion process: an analogy with Maxwell's demons. Medical Image Analysis. 1998;2(3):243–260. doi: 10.1016/s1361-8415(98)80022-4.
56. Veksler O. Star shape prior for graph-cut image segmentation. Euro Conf Comp Vis. 2008.
57. Voci SL, Gottlieb RH, Fultz PJ, Mehta A, Parthasarathy R, Rubens DJ, Strang JG. Delayed computed tomographic characterization of renal masses: preliminary experience. Abdominal Imaging. 2000;25(3):317–321. doi: 10.1007/s002610000009.
58. Vu N, Manjunath BS. Shape prior segmentation of multiple objects with graph cuts. Computer Vision Pattern Recognition. 2008.
59. Wels M, Carneiro G, Aplas A, Huber M, Hornegger J, Comaniciu D. A discriminative model-constrained graph cuts approach to fully automated pediatric brain tumor segmentation in 3-D MRI. Med Image Comput Comput Assist Interv. 2008;11(Pt 1):67–75. doi: 10.1007/978-3-540-85988-8_9.
60. Wimmer A, Soza G, Hornegger J. A generic probabilistic active shape model for organ segmentation. Med Image Comput Comput Assist Interv. 2009;12(Pt 2):26–33. doi: 10.1007/978-3-642-04271-3_4.
61. Zhan Y, Zhou XS, Peng Z, Krishnan A. Active scheduling of organ detection and segmentation in whole-body medical images. Med Image Comput Comput Assist Interv. 2008;11(Pt 1):313–321. doi: 10.1007/978-3-540-85988-8_38.
62. Zheng Y, Baloch S, Englande S, Schnall MD, Shen S. Segmentation and classification of breast tumor using dynamic contrast-enhanced MR images. MICCAI 2007, Part II, LNCS 4792. 2007:393–401. doi: 10.1007/978-3-540-75759-7_48.
