Abstract
Volumetric segmentation of subcortical structures such as the basal ganglia and thalamus is necessary for non-invasive diagnosis and neurosurgery planning. This is a challenging problem due in part to limited boundary information between structures, similar intensity profiles across the different structures, and low contrast data. This paper presents a semi-automatic segmentation system exploiting the superior image quality of ultra-high field (7 Tesla) MRI. The proposed approach handles and exploits multiple structural MRI modalities. It uniquely combines T1-weighted (T1W), T2-weighted (T2W), diffusion, and susceptibility-weighted (SWI) MRI and introduces a dedicated new edge indicator function. In addition to this, we employ prior shape and configuration knowledge of the subcortical structures in order to guide the evolution of geometric active surfaces. Neighboring structures are segmented iteratively, constraining over-segmentation at their borders with a non-overlapping penalty. Extensive experiments with data acquired on a 7T MRI scanner demonstrate the feasibility and power of the approach for the segmentation of basal ganglia components critical for neurosurgery applications such as deep brain stimulation.
I. INTRODUCTION
The differentiation and localization of brain structures is a crucial component for any neuroscience research or clinical application. Volumetric segmentation is a pre-requisite for many neuroimaging studies such as voxel-based morphometry (VBM), statistical shape analysis, white matter fiber tractography from diffusion-weighted Magnetic Resonance Imaging (MRI), or seed-based analysis of resting-state functional MRI (fMRI). It is also critical for surgical interventions such as deep brain stimulation (DBS) or tumor resection. However, manual segmentation is prone to inherent confounds such as operator subjectivity and inter- or intra-observer variability of border definitions, which are all driven by the quality and richness of the input data. Most importantly, manual segmentation of fine brain structures is a tedious, time consuming and significantly limiting factor for any clinical or translational workflow that requires anatomical definition. The problem is further aggravated when multiple modalities are available, each modality providing enhanced information for the segmentation of specific structures, forcing the user to discover which modality is informative for each structure and to constantly switch between them. These challenges will become more and more relevant with the proliferation and advances of high-field MR machines that provide higher-resolution images with superior contrast, allowing the delineation of smaller structures with greater shape complexity.
Various segmentation frameworks have been proposed over the last two decades to automate manual segmentation. However, most segmentation methods still require user intervention, and some artifacts such as over-segmentation around boundaries of neighboring objects are unavoidable. In particular, when an image has low contrast or the objects to be segmented are occluded, segmentation techniques have shown limited performance [1], [2]. Therefore, segmentation of complex and adjacent objects such as subcortical structures in brain MR images still remains a challenging task. In general, segmentation approaches are based on local edge information (edge based) or the intensity of a given image (region based). Accuracy of edge detection and the image quality, such as its Contrast-to-Noise Ratio (CNR) and Signal-to-Noise Ratio (SNR), are critical factors in the segmentation performance. On the other hand, region based approaches utilize the distribution of intensities over the entire region of interest, and they are more robust to noise or missing information than edge based approaches [3]. However, neighboring regions can have similar intensity distributions that often overlap.
Recently, it has been reported that Susceptibility Weighted Imaging (SWI) at higher magnetic fields provides superior image contrast, thereby allowing improved delineation of subcortical structures [4]. Moreover, detailed anatomical information obtained by combining SWI with T1W or T2W images enables localization and visualization of subcortical structures [5]. In this paper, we focus on the segmentation of subcortical structures such as the basal ganglia and thalamus from MRI data obtained at high magnetic field (7T), critical for any neurosurgery planning and particularly for DBS procedures. In particular, we start with an edge based segmentation approach to exploit sufficient edge information on the MRI (with high CNR and SNR), embedded in an active contour/surface model [6]. We build on the geodesic active contour (GAC) model [7], which translated the energy-based active contours’ minimization problem into a geometric curve evolution computing a geodesic curve in a Riemannian space via the level-sets method [8], thereby handling topological changes of evolving curves and strengthening the attraction of the active contour toward the boundary, even under high variation of gradient values. Its 3D extension led to the geodesic active surface (GAS) model [9], which is the basis of our proposed framework. However, this approach fails to achieve accurate segmentation results for occluded objects or regions with weak or missing boundaries that commonly exist in MRI data. Various approaches have been proposed in order to address this problem by incorporating shape prior information [1], [2], [10], [11]. In [10] in particular, training shapes are represented by the level-set method as a Gaussian distribution in the subspace obtained by Principal Component Analysis (PCA), and level-set curves evolve toward a best-fit shape estimated using maximum a posteriori (MAP) within the GAC framework.
Our proposed method considers the volumetric shape model incorporated into the GAS framework. Additionally, we extract the edge information integrating edge maps generated from multi-modal images (referring to different image contrasts in this context) such as SWI, T2W, and Fractional Anisotropy (FA), using a new edge indicator function. Boundary information from the shape prior, initially located on a region overlapping with an object after registration onto the data to be segmented, is applied as a weighting factor for the edge maps of the multi-modal images.
Over-segmentation around boundaries between neighboring structures is inevitable during the semi-automatic segmentation process. Overlapping regions are often found also in manual segmentations because of inaccurate definition of the boundary information. However, accurate delineation of adjacent structures, such as the basal ganglia and thalamic structures in the brain, provides crucial information in neurosurgery procedures such as deep brain stimulation [4]. Some approaches have been proposed in the literature in order to overcome this overlapping problem [11], [12]. As such, we have added a penalty term into our framework, considering adjacency between basal ganglia and thalamic structures, thereby incorporating another layer of prior structural information. The segmentation process for each structure follows the subject-specific manual analysis pipeline presented in [5]. Structures within our framework are initially segmented and represented by the level-set method. Then, each level-set surface is utilized as a non-overlapping constraint, limiting the possible deformation of the other evolving surface toward its adjacent structures.
The remaining parts of the paper are organized as follows. Section II presents each extension within the GAS framework in detail. In Section III we present the overall schematic for the segmentation of basal ganglia and thalamic structures. We then present experimental results on real 3D MRI in Section IV. Finally, we conclude with possible future research directions in Section V.
II. METHODS
We extend the geodesic/geometric active surface (GAS) model (or minimal surfaces) by incorporating additional global information, including shape priors and non-overlapping constraints. The target application in this work is the segmentation of subcortical structures, as a key ingredient in deep brain stimulation protocols, and additional known non-overlapping constraints are exploited, in the form of negative distance forces between the corresponding evolving surfaces. This encodes the basic anatomical relationships between the different components in these regions. The per-structure shape prior model is built via a probabilistic approach and incorporated into the GAS model by estimating its best-fit shape and pose while guiding the evolving surfaces toward it. A newly introduced edge indicator function is obtained by integrating edge maps generated from the Laplacian of the smoothed multi-modal datasets (T2W, SWI, and Fractional Anisotropy (FA) from DWI), together with boundary information about the given shape prior (initially, it is registered onto the region maximally matching with the structure to be segmented). The next sections describe these contributions in detail.
1) Geodesic active surface model
Given a 3D image I (this will later be extended to multi-modal data, meaning vectorial 4D images) and an evolving 2D surface S in parametric form, the goal of active surfaces is to propagate this 2D surface in the 3D image space such that it evolves toward the region of interest and stops at its boundary. For this purpose, a deformable surface model was proposed by Terzopoulos et al. [6]. The GAS model was developed to extend this classical model [9], and is based on minimizing the functional
EGAS(S) = ∬ g(|∇I(S)|) da    (2)
where da is the Euclidean element of area, and the Euclidean area of the surface S is given by ∬ da. Also, g is a decreasing function such that g(χ) → 0 as χ → ∞ (that is, g approaches zero where the gradient magnitude is large, i.e., at strong edges, and approaches one where the gradient magnitude goes to zero). This edge indicator function should attract the surface towards the objects of interest, and in [7], [9] it was selected as (considering it is now defined in the whole image space and not just on the evolving surface)
g(Î) = 1 / (1 + |∇Î|^γ)    (3)
where Î is a smoothed version obtained by regularizing I using anisotropic diffusion, and γ is 1 or 2. For simplicity, we write g(I) instead of g(I(S)).
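As a rough illustration, the edge indicator of (3) can be computed as follows. This is only a sketch: Gaussian smoothing stands in for the anisotropic diffusion used in the paper, and the function name and parameter defaults are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_indicator(image, sigma=1.0, gamma=2):
    """Edge indicator g of Eq. (3): close to 1 in homogeneous regions,
    close to 0 near strong edges.

    Gaussian filtering stands in for the anisotropic diffusion used to
    obtain the regularized image; sigma and gamma are illustrative choices.
    """
    smoothed = gaussian_filter(image.astype(float), sigma)
    grads = np.gradient(smoothed)                      # partial derivatives
    grad_mag = np.sqrt(sum(g ** 2 for g in grads))     # |grad of smoothed I|
    return 1.0 / (1.0 + grad_mag ** gamma)
```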
The following surface evolution (Euler-Lagrange) equation, minimizing EGAS, is obtained by calculus of variations:
∂S/∂t = (g(I) H − ∇g(I) · N) N    (4)
where H is the mean curvature and N is the inner unit normal to the evolving surface S. The steepest descent flow described above is implemented using the level-sets method [8] via an embedding function u given by the signed distance map, whose zero level-set is the surface S (i.e., u(t, S) = 0):
∂u/∂t = g(I) H |∇u| + ∇g(I) · ∇u    (5)
As is standard practice, the following minimal surfaces model is obtained by adding a constant motion force c, weighted by g(I), in order to increase the speed of convergence [9],
∂u/∂t = g(I) (c + H) |∇u| + ∇g(I) · ∇u    (6)
In this level-set representation, the surface u evolves at all points normal to the level-set as a function of the image gradient and the surface curvature at that point. The term ∇g(I) · ∇u provides stable detection of boundaries even if variations in their gradient are large, and makes the model more robust to the choice of parameters [9].
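A single explicit update of the flow in (6) can be sketched with finite differences. The paper's implementation uses ITK; this NumPy version, with names and step sizes of our choosing, only illustrates the structure of the update.

```python
import numpy as np

def gas_step(u, g, c=1.0, dt=0.1):
    """One explicit update of the minimal-surfaces flow of Eq. (6):
        u_t = g(I) (c + H) |grad u| + grad g . grad u
    using central finite differences on a regular grid; dt and c are
    illustrative choices.
    """
    eps = 1e-8
    grads_u = np.gradient(u)
    grad_mag = np.sqrt(sum(du ** 2 for du in grads_u)) + eps
    # mean curvature H = div(grad u / |grad u|)
    H = sum(np.gradient(du / grad_mag, axis=i) for i, du in enumerate(grads_u))
    grads_g = np.gradient(g)
    advect = sum(dg * du for dg, du in zip(grads_g, grads_u))
    return u + dt * (g * (c + H) * grad_mag + advect)
```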
In this paper, the segmentation of subcortical structures from MRI data is performed within this GAS framework, with extension to be presented in the next three sections.
2) Guidance from statistical shape models
The GAS model utilizes edge information to detect objects as discussed in the previous section. This approach has proven reliable and fast in many applications [2], [10], [13], [14]. However, low contrast or occlusion around objects’ boundaries might lead to inaccurate segmentations. In this case, guiding the surface evolution via shape (and pose) prior information can considerably improve the quality of the segmentation. In [10], the shape model is built based on a probabilistic approach and is then incorporated into the GAS framework. The modeling of the shape prior and the estimation of the shape and pose parameters is briefly summarized below.
Each surface Sl in the provided training dataset {S1, …, Sn}, represented as a binary segmentation, is embedded as the zero level-set of a higher dimensional surface, using the signed distance map, where each of the N3 points (assuming the 3D shape template is cropped onto a size N in each dimension) encodes the distance to the nearest point on the surface. A mean surface (shape) μ is obtained as the arithmetic mean of the training dataset. The variance of the shape is computed using Principal Component Analysis (PCA). A matrix M is constructed consisting of column vectors Ŝl, obtained by subtracting the mean μ from each Sl. Then the covariance matrix is decomposed using Singular Value Decomposition (SVD),
(1/n) M Mᵀ = U Σ Uᵀ    (7)
Here, U is a unitary matrix whose columns represent n orthogonal modes of shape variation, and Σ is a diagonal matrix of the corresponding eigenvalues as scaling factors along these variations. An estimate of a new shape u′ is represented by combining the first k principal components, and is given by the coefficients ψ = (ψ1, …, ψk). The dimension of the training set is therefore reduced to k by projecting u′ − μ onto the k principal components,
ψ = Ukᵀ (u′ − μ)    (8)
where Uk is a matrix with the first (largest corresponding eigenvalues) k columns of U. Given ψ, the estimate ũ of u′ is reconstructed as
ũ = Uk ψ + μ    (9)
Then, the probability of a particular surface is computed by assuming, following the PCA model, a Gaussian distribution in the reduced shape subspace,
P(ψ) = 1/√((2π)^k |Σk|) · exp(−(1/2) ψᵀ Σk⁻¹ ψ)    (10)
where Σk is a matrix with the first (largest eigenvalues) k rows and columns of Σ.
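Equations (7)-(9) amount to a standard PCA on vectorized signed-distance maps. A minimal NumPy sketch (function names are ours, and the SVD is taken directly on the centered data matrix rather than on the covariance, which yields the same modes):

```python
import numpy as np

def build_shape_model(sdf_list, k=2):
    """PCA shape model of Eqs. (7)-(9) from signed-distance volumes.

    sdf_list: training signed-distance maps, all of identical shape.
    Returns the mean shape mu (flattened) and the first k modes Uk
    (one mode per column).
    """
    X = np.stack([s.ravel() for s in sdf_list], axis=1)  # one column per shape
    mu = X.mean(axis=1, keepdims=True)
    M = X - mu                                           # columns S_l - mu
    U, _, _ = np.linalg.svd(M, full_matrices=False)      # modes of variation
    return mu.ravel(), U[:, :k]

def project(u, mu, Uk):
    """psi = Uk^T (u' - mu), Eq. (8)."""
    return Uk.T @ (u.ravel() - mu)

def reconstruct(psi, mu, Uk):
    """u_tilde = Uk psi + mu, Eq. (9)."""
    return Uk @ psi + mu
```

A training shape projected into the subspace and reconstructed should be recovered (exactly, when k is at least the rank of the centered training matrix).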
Note that the shape model cannot guide the surface evolution without its global pose information, as the structures must be registered for the shape information to be relevant. Let the shape surface, u*, be determined from the shape parameter ψ and the pose parameter p. The surface u evolves toward the target shape by estimating u* using the maximum a posteriori (MAP) at a given discrete time t following
u(t + 1) = u(t) + λ1 (u*(t) − u(t))    (11)
where λ1 ∈ [0,1] is a coefficient that controls the effect of the estimated surface model. More specifically, u* is estimated using MAP at each update of the surface evolution,
u*(t) = argmax P(u*(t) | u(t), ∇I)    (12)
Accordingly, parameters ψ and p are also estimated,
⟨ψ*, p*⟩ = argmax⟨ψ,p⟩ P(ψ, p | u, ∇I)    (13)
To compute ⟨ψ*, p*⟩, we reformulate (13) using Bayes' Theorem:
P(ψ, p | u, ∇I) = P(u | ψ, p) P(∇I | ψ, p, u) P(ψ) P(p) / P(u, ∇I)    (14)
In (14), the first term, P(u | ψ, p), is modeled by a Laplacian density function over Voutside, the volume of the surface u that lies outside the shape surface u* estimated in the previous iteration [10]:
P(u | ψ, p) = e^(−Voutside)    (15)
The gradient term is modeled by the Laplacian of the squared error between |∇I| (which approximates a Gaussian along the normal at the boundaries) and h(u*) (defined as the best Gaussian fit for the relationship between u* and |∇I|) [10],
P(∇I | ψ, p, u) = e^(−|h(u*) − |∇I||²)    (16)
The last two terms are shape and pose priors, respectively. The shape prior is a Gaussian model over the shape parameters ψ as in (10). The pose parameters are assumed uniformly distributed, and therefore, do not contribute to the MAP estimation.
Finally, the evolution equation of the surface in (11) is incorporated into the (discrete time) level-set equation (6):
u(t + 1) = u(t) + λ1 (u*(t) − u(t)) + γ2 (g(I) (c + H) |∇u| + ∇g(I) · ∇u)    (17)
where γ2 controls the tradeoff between the shape prior and the image forces. In this framework, the surface evolves globally, towards the MAP estimate of a given shape model (prior), and locally based on image gradient and surface curvature. Also, each training shape is initially aligned by registering it with a target region to reduce MAP estimation time.
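The discrete update of (17) is simply a weighted blend of the pull toward the MAP shape estimate and the image-driven GAS force of (6). Schematically (λ1 and γ2 as in the text; the function name and default weights are illustrative):

```python
def shape_guided_step(u, u_star, image_force, lam1=0.2, gam2=1.0):
    """Discrete update of Eq. (17): move u toward the MAP shape estimate
    u_star (weight lam1, i.e. lambda_1) while applying the precomputed
    GAS image force of Eq. (6) (weight gam2, i.e. gamma_2).

    Works elementwise on scalars or NumPy arrays; weights are illustrative.
    """
    return u + lam1 * (u_star - u) + gam2 * image_force
```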
3) A new multi-modal edge indicator function gnew
The inverse function g(I) of image gradient in (3), commonly selected as an edge indicator in active surface models, often fails to generate clear edge information when the objects to be segmented have low contrast boundaries. In this section, a new edge indicator function is introduced by combining edge maps generated from the Laplacian of multi-modal images (i.e., multiple modalities all derived from MRI), together with boundary information from the shape prior presented in the previous section.
Recently, it has been shown that SWI at higher magnetic fields provides superior contrast, especially within the basal ganglia and thalamus structures, by comparison with T1W and T2W images [4], [15]. To take advantage of this, the edge information of the SWI is integrated with that of T2W images, and with FA images obtained from diffusion MRI, defining a new edge indicator function. This automates the procedure typically followed by experts performing manual segmentations and switching between various modalities to exploit information from multiple contrasts [5]. The three main steps to compute this new edge indicator function with fusion of multi-modal images are described next.
- First, the stopping function g in (3) is substituted by g′, a sigmoid function whose center and width are controlled by the user[1]. Additionally, the zero-crossing of the Laplacian image, instead of the image gradient, is applied in order to detect more detailed boundaries,

g′(Î) = 1 / (1 + e^(−(|ΔÎ| − β)/α))    (18)

where β is the center of the intensity range, α is in inverse proportion to the slope of the function at β (i.e., the slope is 1/4α), and ΔÎ is the Laplacian of a regularized image. Note that regularization such as anisotropic diffusion before the Laplacian operation is important for the noisy SWI.
- Next, two edge map terms, g′High and g′Low, are computed with βHigh and βLow, respectively, and a fixed value of α, tuned by the user with respect to each modality image, such as the T2W (or FA) and SWI images, using (18). These edge map terms have comparable values (0 or 1) within regions with strong boundaries or homogeneous intensities, but have different values in intermediate regions (see Fig. 1). More specifically, a positive (or negative) α is selected to transform higher intensity values of the Laplacian magnitude into homogeneous regions (or boundaries) by the sigmoid function. As in Fig. 1-(a), if α > 0, βHigh is manually chosen on the Laplacian magnitude of the smoothed SWI to produce g′High, where regions with intensity values over βHigh are considered strongly homogeneous, thereby capturing more edge information on the SWI. Also, βLow is manually chosen on the Laplacian magnitude of the smoothed T2W (or FA) image to produce g′Low, where regions with intensity values under βLow are considered strong boundaries, thereby capturing wider homogeneous regions on the T2W (or FA) image. On the other hand, as in Fig. 1-(b), if α < 0, βHigh is manually chosen on the Laplacian magnitude of the smoothed T2W (or FA) image to produce g′High, where regions with intensities over βHigh are considered strong boundaries, thereby capturing wider homogeneous regions on the T2W (or FA) image. Additionally, βLow is manually chosen on the Laplacian magnitude of the SWI to produce g′Low, where regions with intensities under βLow are considered homogeneous regions, thereby capturing more edge information on the SWI.
- Finally, a new edge map gnew is obtained from g′Low and g′High. Let g′Low (T2W (or FA)) or g′High (T2W (or FA)) be the edge map terms computed using (18) with βLow and a fixed positive α, or βHigh and a fixed negative α, on the T2W (or FA) image; and let g′Low (SWI) or g′High (SWI) be the edge map terms computed with βLow and a fixed negative α, or βHigh and a fixed positive α, on the SWI. The edge map gnew is computed by weighted averaging of g′High and g′Low with the (smoothed) Dirac measure δε of ū, the level-set representation of a given shape prior at its initial position (registered onto the test data), weighting the values of g′Low (T2W (or FA)) on the homogeneous region and g′High (SWI) on the boundary surface (zero level-set) within the shape prior if α > 0, or the values of g′High (T2W (or FA)) on the homogeneous region and g′Low (SWI) on the boundary surface within the shape prior if α < 0:

gnew = δε(ū) g′High (SWI) + (1 − δε(ū)) g′Low (T2W (or FA)),  if α > 0
gnew = δε(ū) g′Low (SWI) + (1 − δε(ū)) g′High (T2W (or FA)),  if α < 0    (19)

where the Dirac measure δε is defined as the regularized version of the derivative δ0(z) of the Heaviside function H(z) [12]:

δε(z) = (1/2ε)(1 + cos(πz/ε))  if |z| ≤ ε;  δε(z) = 0  if |z| > ε    (20)

Here ε is the width of the function. Specifically, δε with ε = 1 is 1 on the boundary of a given shape prior at its initial position. In this case, gnew captures sufficient boundary information on the SWI if it is assumed that the shape prior at its initial position optimally matches the object to be segmented (gnew is fixed during the shape/surface evolution for stability). In particular, the shape prior ū is initially registered onto the region maximally overlapping with the object to be segmented, or onto the boundaries of the edge map (g′High (SWI) if α > 0, g′Low (SWI) if α < 0) obtained from the SWI, using FSL FLIRT (FMRIB's Linear Image Registration Tool) [25] or a 3D Euler transformation provided by the user. Proper initial placement of a given shape model not only accurately integrates boundary information of the initial shape surface into the new edge map, but also reduces the MAP estimation time of the shape prior step. Note that training shapes registered initially onto the test data overlap with the structures to be segmented, since the training shapes are manually segmented versions from other datasets on the same ROI as the test dataset.
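The three steps above might be sketched as follows. This is only illustrative: Gaussian smoothing replaces anisotropic diffusion, the cosine form of δε is one common regularization consistent with (20), and all function names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def sigmoid_edge_map(image, alpha, beta, sigma=1.0):
    """Sigmoid edge term g' of Eq. (18) on the Laplacian magnitude.

    Gaussian smoothing stands in for the anisotropic diffusion applied
    before the Laplacian; alpha (whose sign selects the mapping direction)
    and beta are user-tuned, as described in the text.
    """
    lap_mag = np.abs(laplace(gaussian_filter(image.astype(float), sigma)))
    return 1.0 / (1.0 + np.exp(-(lap_mag - beta) / alpha))

def dirac_eps(z, eps=1.0):
    """Regularized Dirac measure of Eq. (20), cosine form with width eps."""
    d = np.zeros_like(z, dtype=float)
    m = np.abs(z) <= eps
    d[m] = (0.5 / eps) * (1.0 + np.cos(np.pi * z[m] / eps))
    return d

def g_new(g_boundary, g_homog, shape_sdf, eps=1.0):
    """Weighted average of Eq. (19): near the shape prior's zero level-set
    (|shape_sdf| small) the boundary-sensitive map dominates; elsewhere the
    map with wider homogeneous regions does.
    """
    w = dirac_eps(shape_sdf, eps)
    if w.max() > 0:            # scale the weight to [0, 1] when eps != 1
        w = w / w.max()
    return w * g_boundary + (1.0 - w) * g_homog
```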
Fig. 1.
Sigmoid function with βLow = −10, and (a) α = 1, (b) α = −1
Fig. 2 shows a simple interpretation of g′High(T2W), g′Low(SWI), δε, and gnew for a clear and an unclear edge in the case α < 0 in 1D. g′Low(SWI) has wider boundary regions due to the small βLow and the superior contrast of the SWI, compared with g′High(T2W). Also, δε is 1 on the boundary of the initial shape prior. gnew is computed as the weighted average of g′Low(SWI) and g′High(T2W) with δε, capturing g′Low(SWI) on the boundary and g′High(T2W) on the homogeneous region within the initial shape prior. Therefore, gnew has more detailed boundary information when compared with g′Low(SWI) or g′High(T2W) alone.
Fig. 2.
Interpretation of gnew on a clear edge and an unclear edge in 1D (case α < 0). (a) g′ with βHigh on T2W in (18), g′High (T2W). (b) g′ with βLow on SWI in (18), g′Low (SWI). (c) Dirac measure δε for the level-set representation of a mean shape. (d) The new edge map, gnew.
Fig. 3 shows the SWI and T2W images in three orthogonal directions, the corresponding Laplacian outputs, g′Low (T2W) with α = 0.5 and βLow = 8, g′High (SWI) with α = 0.5 and βHigh = 13, δε of a shape prior for the left external Globus Pallidus (GPe), and gnew on the ROI of the 2D axial slice. g′Low (T2W) contains wider homogeneous regions, while g′High (SWI) has more detailed edge information. In particular, g′High (SWI) provides clearer separation of the left GPe and internal Globus Pallidus (GPi) (see the red circle in Fig. 3-(h)). This is attributed to the superior contrast of the SWI, enabling the identification of the thin boundary (lamina pallida medialis) separating GPe and GPi [4]. Stronger boundaries are exhibited by the intensity transformation using g′ with βHigh. Finally, we observed that gnew (Fig. 3-(j)) shows clearer boundaries, obtained by weighted averaging of g′Low (T2W) and g′High (SWI) with δε. More specifically, the edge information around the left GPe in gnew comes from the boundaries of g′High (SWI), and the homogeneous region within the left GPe comes from g′Low (T2W), assuming that the intensity of g′High (SWI) on boundaries and of g′Low (T2W) on a homogeneous region within a given shape prior for the left GPe at its initial position is ideally 0 and 1, respectively. On the other hand, g′High (SWI) and g′Low (T2W) are averaged with the weighting value on the region where g′High (SWI) is not 0 on the boundary, or g′Low (T2W) is smaller than 1 on the homogeneous region, within a given shape prior for GPe (see the gray scale region in Fig. 3-(j)). For comparison, the image gradient of each single-modality image, SWI or T2W, and the corresponding g in (3), are presented in Fig. 4. The edge map produced by g does not have sufficient information, by comparison with the edge map generated by gnew. In particular, the boundaries between the left GPe and GPi are not well identified.
The new edge map introduced in this section is more accurate, integrating information from the T2W and SWI images together with the shape prior.
Fig. 3.
A new edge map generated by combining the axial T2W image with SWI. (a) Axial T2W image. (b) Axial SWI. (c) ROI of T2W image. (d) ROI of SWI. (e) Laplacian of smoothed T2W image. (f) Laplacian of smoothed SWI. (g) g′Low (T2W) with α = 0.5, βLow = 8. (h) g′High (SWI) with α = 0.5, βHigh = 13. (i) δε with ε = 1. (j) gnew. Note that regions around the left GPe and GPi (the red circle) in (j) are improved.
Fig. 4.
Image gradient magnitude of the SWI and T2W images and their corresponding g in (3). (a) Gradient magnitude of smoothed T2W. (b) Gradient magnitude of smoothed SWI. (c) gT2W (inverse of (a)). (d) gSWI (inverse of (b)).
4) Anatomical constraint between adjacent structures
Overlap between adjacent segmented objects is often inevitable even if the global shape information and the new edge indicator function discussed in the previous sections are employed. Anatomical constraints for non-overlap and adjacency should be considered for more accurate segmentation of the basal ganglia [11] (such constraints can be imposed for other anatomical segmentation tasks as well). In this section, a global penalty term constraining the propagation of each adjacent surface to avoid overlaps is considered and incorporated into our extended active surface model. A pre-segmented version obtained from our model (e.g., the segmented SN in Fig. 5) is utilized as a repulsion constraint for its adjacent structures. This means the pre-segmented objects act as a global force in the opposite direction to the shape priors (and edge and curvature) of their neighboring objects during the segmentation process. Furthermore, adjacent structures are iteratively segmented, avoiding overlapping regions between them by constraining the over-segmentation of their adjacent structures. During the iterative process, the structures are corrected in order to obtain clear boundaries between them and to maximize Dice's Coefficient (DC), defined as
DC = 2 |VA ∩ VB| / (|VA| + |VB|)    (21)
where VA and VB are the respective volumes of structures A and B to be compared for their similarity measurement.
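On binary masks, (21) reduces to a few lines; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice's Coefficient of Eq. (21) for two binary volumes:
    2 |A intersect B| / (|A| + |B|); defined as 1 for two empty masks.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```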
Fig. 5.
Iterative segmentation flow for SN and STN within the modified GAS framework
We denote by uadj the resulting surface of (17) for a structure whose adjacent structure remains to be segmented. It is represented by level-sets using a signed distance function. uadj is fixed at this step of the iteration, and the negative distance between uadj and the current evolving surface u adjacent to it acts as the repulsion force during the segmentation process. The surface evolution equation in the negative direction of the distance between uadj and u is given by (at a given discrete time t and with the constant weight λ3)
u(t + 1) = u(t) − λ3 uadj(t)    (22)
The surface evolution equation with non-overlapping constraints in (22) is incorporated into the update expression (17) with shape priors and gnew as introduced in the previous sections,
u(t + 1) = u(t) + λ1 (u*(t) − u(t)) + γ2 (gnew (c + H) |∇u| + ∇gnew · ∇u) − λ3 uadj(t)    (23)
This process is iterated, so that different structures can be segmented using a specific set of non-overlapping constraints. Fig. 5 shows such an iterative segmentation workflow for the Substantia Nigra (SN) and Subthalamic Nucleus (STN). First, SN is segmented without applying the overlap penalty. If a structure does not have neighboring structures to be segmented, λ3 is set to 0, disabling the non-overlap constraints. The initially segmented SN is utilized as the non-overlapping constraint to segment STN at the next iteration. Then, the segmented STN is also utilized to constrain and correct over-segmentation of SN. This process can be repeated until convergence (defined as the state where no significant changes in the segmentation of the desired structures occur, considering the overlapping region and DC values at the same time). Variations of the SN and STN at each iteration are presented in Fig. 6. Each structure is shown as a 2D contour on axial and coronal slices. Red, green, and blue contours represent the segmented STN, the segmented SN, and the “ground truth” (manual segmentation), respectively, for each structure. Initial segmentation results in Fig. 6-(a) show large overlapping regions, attributed to over-segmentation of each structure around the unclear boundaries between SN and STN. The overlapping regions are considerably reduced, and segmentation results are corrected toward the ground truth, as the segmentation progresses. The non-overlapping constraint and iterative process introduced in this section improve the segmentation mostly for unclear boundaries between neighboring structures by providing each structure with global shape information about its neighboring structure. Furthermore, this could significantly aid the segmentation when the input image has lower contrast or SNR, such as in clinical 1.5T or 3T data. In particular, this process is a critical feature for the accurate segmentation of basal ganglia structures to be presented in the next section.
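The iterative scheme described above might be organized as follows, where `segment_fn` is a hypothetical stand-in for one run of the constrained GAS evolution of (23) and the structure names and signature are ours:

```python
def segment_iteratively(structures, segment_fn, max_iters=3):
    """Sketch of the iterative non-overlap scheme: each structure is
    (re)segmented with its neighbors' current level-sets as constraints.

    structures: dict mapping a structure name to the list of neighbor
    names that constrain it (as in Table I).
    segment_fn(name, constraints): hypothetical callable assumed to run
    the shape-guided GAS of Eq. (23) and return the resulting level-set.
    """
    results = {name: None for name in structures}
    for _ in range(max_iters):
        for name, neighbors in structures.items():
            # only neighbors segmented so far act as repulsion constraints
            constraints = [results[n] for n in neighbors if results[n] is not None]
            results[name] = segment_fn(name, constraints)
    return results
```

On the first pass the first structure (e.g., SN) is segmented unconstrained; from then on, every structure sees its neighbors' latest segmentations, mirroring the SN/STN correction loop of Fig. 5.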
Fig. 6.
Segmentation results of SN and STN at each iteration. Top shows contours in both axial and coronal slices, and bottom represents the corresponding 3D structures. (a) First iteration: green and red represent the first segmented SN without the constraint and the first segmented STN with the first SN, respectively. (b) Second iteration: green and red represent the second segmented SN with the first STN and the first segmented STN with the first SN, respectively. (c) Third iteration: green and red represent the second segmented SN with the first STN and the second segmented STN with the second SN, respectively. Blue represents the manually segmented SN and STN.
III. APPLICATION TO THE SEGMENTATION OF BASAL GANGLIA STRUCTURES
The workflow for the proposed semi-automatic volumetric segmentation process of the basal ganglia components and thalamus is shown in Fig. 7. GPe and GPi are segmented on the axial images since the lamina pallida medialis [4], which represents the boundary between GPe and GPi, is well visualized in the axial SWI. SN and STN are segmented on the coronal images since this direction shows high contrast, allowing the delineation between SN and STN. Also, the FA image is utilized to segment the Caudate Nucleus (CN), Putamen (Pu), and Thalamus (Tha). We fully utilize multi-modal images, combining SWI, T2W, and FA from DWI to segment all the structures. More specifically, the axial T2W image registered onto the SWI, and the axial SWI itself, are utilized to generate an edge map for the segmentation of GPe and GPi. The coronal T2W image registered onto the SWI, and the coronal SWI itself, are utilized to segment SN and STN. CN, Pu, and Tha are segmented from the axial SWI registered onto the FA image, and the FA image itself.
Fig. 7.
Schematic overview of the proposed segmentation
The training set for the shape priors consists of manual segmentations obtained from other subjects, or the same subject on other scan dates. Corresponding non-overlapping constraints for each structure are summarized in Table I, considering the high probability of connection between neighboring structures presented in [5]. All segmented structures are finally overlapped on the desired modality after registration.
TABLE I.
Corresponding non-overlapping constraints for each structure to be segmented
| Structure | Non-overlapping constraints |
|---|---|
| GPe | GPi, Pu |
| GPi | GPe |
| SN | STN |
| STN | SN |
| CN | Tha |
| Pu | GPe |
| Tha | CN |
IV. EXPERIMENTAL RESULTS
In this section, we present segmentation results of the basal ganglia component and thalamus on real 3D 7T MRI using our proposed method. Segmentation results are compared with those obtained from GAS [7], GAS with shape priors [10], GAS based on g′ in (18) with shape priors, and GAS based on gnew in (19) with shape prior using multi-modal images but without non-overlapping constraints, validating the effects of the different GAS extensions in our proposed method. Additionally, we compare our proposed method with FSL FIRST [16], [17] and FreeSurfer [18], [19], widely used single-modality tools for segmentation of subcortical regions. We quantitatively measure the performance of each approach using the DC (21) and visually analyze segmented volumes on the Amira software package [20], facilitating the simultaneous visualization of multiple structures.
1) Implementation details and data acquisition
Our proposed method was implemented in the ITK/VTK framework [21], [22], which provides open-source C++ libraries for image segmentation and registration. The implementation was also integrated into the 3D Slicer program [23], a free software package for image visualization and analysis. In particular, modularization of the implementation within the 3D Slicer program allows developers to test algorithms by tuning parameters easily and rapidly in the provided Graphical User Interface (GUI) environment. GAS [7] and GAS with shape priors [10] are currently available in the ITK libraries and were tested for comparison. Our proposed method was built from the ITK classes related to these approaches.
We utilized six MRI datasets, each including T1W, T2W, SWI, and FA (from DWI) images, scanned (under an approved IRB protocol) from five subjects using a 7T magnet at the Center for Magnetic Resonance Research of the University of Minnesota. Table II lists the training shape sets and the manual segmentations used as ground truth for each dataset and the corresponding subject. For each dataset (T2W, SWI, and FA image) from 1 to 5, fourteen structures (left and right GPe, GPi, SN, STN, CN, Pu, and Tha within the basal ganglia region) were manually segmented by an anatomical expert. Dataset 6 (T1W, T2W, SWI, and b0, the non-diffusion-weighted data from the DWI series) was acquired with insufficient anatomical information for some structures (e.g., SN and STN on the coronal T2W image) within the basal ganglia region because of the chosen smaller field of view. This dataset is therefore an appropriate test of the segmentation of occluded structures in practice, and its segmentation results are evaluated only visually since manual segmentations were not available. The T1W image in dataset 6 was additionally used to test the optimal combination of multi-modal images within our proposed model. Datasets 3 and 4 were both acquired from the same subject on different dates. Detailed acquisition information for all datasets, the manual segmentation pipeline, and the registration process for each structure are presented in [5]. Finally, training shape sets for the structures in each dataset are built using the leave-one-out method [24]; e.g., the training shape sets for test set 1 consist of manual segmentations for each structure from datasets 2, 3, 4, and 5, leaving out those from dataset 1. Moreover, the training shape sets for test sets 1, 2, 5, and 6 consist of manually segmented versions from all the other subjects.
Note that test set 6 is not only used for the segmentation of occluded objects, as previously mentioned, but also to segment structures using training shapes from entirely different subjects. The training shapes for each structure are initially registered onto the test data within the same ROI using FSL FLIRT (FMRIB's Linear Image Registration Tool) [25], so that they overlap with the structures to be segmented.
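The leave-one-out construction of the training shape sets (Table II) can be sketched as follows; the function name is hypothetical and simply mirrors the table:

```python
# Leave-one-out training shape sets (Table II): for test sets 1-5 the
# training shapes come from all other manually segmented datasets; for
# test set 6 (no ground truth available) all five manual segmentations are used.
MANUAL_DATASETS = [1, 2, 3, 4, 5]

def training_shape_set(test_set):
    if test_set == 6:
        return list(MANUAL_DATASETS)
    return [d for d in MANUAL_DATASETS if d != test_set]

print(training_shape_set(1))  # → [2, 3, 4, 5]
print(training_shape_set(6))  # → [1, 2, 3, 4, 5]
```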
TABLE II.
Training shape set and ground truth for each test and the corresponding subject
| Test data set | Subject No. | Training shape set (each structure segmented manually from) | Ground truth (each structure segmented manually from) |
|---|---|---|---|
| 1 | 1 | 2, 3, 4, 5 | 1 |
| 2 | 2 | 1, 3, 4, 5 | 2 |
| 3 | 3-a | 1, 2, 4, 5 | 3 |
| 4 | 3-b | 1, 2, 3, 5 | 4 |
| 5 | 4 | 1, 2, 3, 4 | 5 |
| 6 | 5 | 1, 2, 3, 4, 5 | - |
2) Experimental results on the real MRI
For datasets 1–5 in Table II, and as mentioned before, GPe and GPi are segmented using the corresponding training shapes by combining the axial SWI and the axial T2W image registered onto the axial SWI. SN and STN are segmented using the corresponding training shape set by combining the coronal SWI and the coronal T2W image registered onto the coronal SWI. CN, Pu, and Tha are segmented using the FA image and the axial SWI registered onto the FA image, with the corresponding training shape set. Moreover, each structure is iteratively segmented with the non-overlapping constraint, reducing overlapping regions with its adjacent structure and maximizing the DC value within our proposed framework. Since the other techniques based on GAS, FSL FIRST, and FreeSurfer utilize single-modality images, GPe and GPi are segmented with these packages on the axial T2W (or T1W) image and the axial SWI, respectively. Similarly, SN and STN are segmented on the coronal T2W (or T1W) image and the coronal SWI, and the FA (or T1W) image and the axial SWI are utilized to segment CN, Pu, and Tha.
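The alternating scheme described above can be sketched as follows. `segment` and `dice` are placeholders, not the paper's implementation: `segment` stands for the GAS evolution with shape priors on the multi-modal edge map, penalizing overlap with the `forbidden` mask, and the toy run uses set-valued masks purely for illustration.

```python
# Sketch of the iterative non-overlapping segmentation of an adjacent pair
# (e.g., SN/STN from Table I). segment(name, forbidden) is a placeholder
# for the constrained GAS evolution; dice compares a mask to ground truth.
def iterative_segmentation(pair, segment, dice, ground_truth, max_iters=3):
    a, b = pair
    masks = {a: segment(a, forbidden=None)}    # first structure, unconstrained
    masks[b] = segment(b, forbidden=masks[a])  # neighbor, constrained by it
    best = {s: dice(masks[s], ground_truth[s]) for s in pair}
    for _ in range(max_iters - 1):             # alternate until DC stops improving
        improved = False
        for s, other in ((a, b), (b, a)):
            candidate = segment(s, forbidden=masks[other])
            d = dice(candidate, ground_truth[s])
            if d > best[s]:
                masks[s], best[s], improved = candidate, d, True
        if not improved:
            break
    return masks, best

# toy run with set-valued masks and a mock segmenter
def mock_segment(name, forbidden=None):
    base = {"SN": {1, 2, 3, 4}, "STN": {3, 4, 5}}[name]
    return base - forbidden if forbidden else base

def set_dice(x, y):
    return 2.0 * len(x & y) / (len(x) + len(y))

gt = {"SN": {1, 2, 3}, "STN": {4, 5}}
masks, best = iterative_segmentation(("SN", "STN"), mock_segment, set_dice, gt)
print(masks["STN"])  # the STN mask excludes voxels already claimed by SN
```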
Segmentation results for dataset 3 (similar results were obtained for the other datasets), represented as 2D contours with superimposed ground truth (blue contour) and as 3D volumes for each structure, are shown using the Amira environment in figures 8-10. Fig. 11 shows the 3D manual segmentation of the same dataset 3. DC values of each segmented result for datasets 1–5 during the non-overlapping iterative process are summarized in Table III. In addition, Fig. 12 presents average DC values and standard deviation errors for all structures and tested segmentation algorithms on datasets 1–5.
Fig. 8.
Schematic workflow for the semi-automatic 3D segmentation of basal ganglia components and thalamus
Fig. 10.
Comparison of segmentation results for GPe and GPi on dataset 3. The light green and brown represent GPe and GPi, respectively. The blue contours represent manual segmentations. Top and bottom in each figure represent contours and volumetric segmentations, respectively. Figures (a), (b), and (c) show segmentation results of GAS, GAS with shape prior using g, GAS with shape prior using g’ on axial T2W image (left column) and axial SWI (right column), respectively. Figures (d), (e), and (f) are segmentation results of GAS, the proposed approach without non-overlapping constraints, and the proposed approach, respectively, with surface distance maps (right column, top: GPe, bottom: GPi) on axial T2W image combined with axial SWI.
Fig. 11.
Comparison of segmentation results for SN and STN on dataset 3. The red and yellow represent SN and STN, respectively. The blue contours represent manual segmentations. Top and bottom in each figure represent contours and volumetric segmentations, respectively. Figures (a), (b), and (c) show segmentation results of GAS, GAS with shape prior using g, GAS with shape prior using g’ on coronal T2W image (left column) and coronal SWI (right column), respectively. Figures (d), (e), and (f) are segmentation results of GAS, the proposed approach without non-overlapping constraints, and the proposed approach, respectively, with surface distance maps (right column, top: SN, bottom: STN) on coronal T2W image combined with coronal SWI.
TABLE III.
DC values of the proposed approach during iterative process for each data set
| Data | Iteration | GPe Left | GPe Right | GPi Left | GPi Right | SN Left | SN Right | STN Left | STN Right | CN Left | CN Right | Tha Left | Tha Right | Pu Left | Pu Right |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1st | 0.754(F) | 0.686 | 0.757 | 0.743(F) | 0.834 | 0.752 | 0.617(F) | 0.665(F) | 0.779 | 0.739 | 0.837(F) | 0.856(F) | 0.784 | 0.744 |
| | 2nd | 0.755 | 0.715 | 0.727 | 0.742 | 0.841 | 0.757 | 0.63 | 0.688 | 0.783 | 0.743 | 0.839 | 0.857 | 0.784 | 0.744 |
| | 3rd | 0.754 | - | - | - | - | - | 0.638 | - | - | - | - | - | - | - |
| 2 | 1st | 0.774(F) | 0.741 | 0.819 | 0.775(F) | 0.82(F) | 0.796(F) | 0.768 | 0.732 | 0.721 | 0.762 | 0.85(F) | 0.809(F) | 0.82 | 0.739 |
| | 2nd | 0.78 | 0.742 | 0.82 | 0.784 | 0.82 | 0.793 | 0.802 | 0.72 | - | 0.762 | - | 0.811 | 0.82 | 0.739 |
| | 3rd | - | - | - | - | 0.804 | - | - | - | - | - | - | - | - | - |
| 3 | 1st | 0.728(F) | 0.789 | 0.622 | 0.803(F) | 0.786 | 0.765 | 0.751(F) | 0.757(F) | 0.799(F) | 0.791(F) | 0.872 | 0.851 | 0.817 | 0.78 |
| | 2nd | 0.737 | 0.791 | 0.711 | 0.804 | 0.812 | 0.768 | 0.686 | 0.743 | 0.803 | 0.784 | 0.872 | 0.851 | 0.817 | 0.78 |
| | 3rd | 0.748 | - | - | - | 0.8 | 0.801 | 0.673 | 0.763 | 0.805 | 0.798 | 0.872 | - | - | - |
| 4 | 1st | 0.817 | 0.755 | 0.816(F) | 0.783(F) | 0.789(F) | 0.752 | 0.724 | 0.711(F) | 0.772 | 0.764 | 0.889(F) | 0.856(F) | 0.85 | 0.78 |
| | 2nd | 0.817 | 0.75 | 0.802 | 0.765 | 0.785 | 0.75 | 0.73 | 0.719 | 0.773 | 0.765 | 0.887 | 0.855 | 0.85 | 0.78 |
| | 3rd | - | - | 0.80 | - | 0.752 | 0.757 | 0.69 | 0.761 | 0.775 | - | 0.888 | - | - | - |
| 5 | 1st | 0.745 | 0.752 | 0.749(F) | 0.764(F) | n/a | n/a | n/a | n/a | 0.763 | 0.763 | 0.844(F) | 0.872(F) | 0.774 | 0.744 |
| | 2nd | 0.759 | 0.751 | 0.753 | 0.781 | n/a | n/a | n/a | n/a | 0.764 | 0.763 | 0.842 | 0.872 | 0.774 | 0.744 |
| | 3rd | 0.762 | - | 0.75 | - | n/a | n/a | n/a | n/a | - | - | - | - | - | - |
(F) indicates the structure segmented first within each adjacent pair, and blue numbers are the final DC values after iteration. Note that segmentation of SN and STN on dataset 5 was not tested since the coronal SWI and T2W data were not available.
Fig. 12.
Comparison of segmentation results from the single-modality based approaches for CN, Tha, and Pu on dataset 3. The violet, dark green, and cyan represent CN, Tha, and Pu, respectively. The blue contours represent manual segmentations. Top and bottom in each figure represent contours and volumetric segmentations, respectively. Figures (a), (c), and (e) show segmentation results in the one view of CN and Tha from GAS, GAS with shape prior using g, and GAS with shape prior using g’ on FA image (left column) and SWI (right column), respectively. Figures (b), (d), and (f) are segmentation results in another view of CN, Tha, and Pu from GAS, GAS with shape prior using g, GAS with shape prior using g’ on FA image (left column) and SWI (right column), respectively.
In figures 8-10 (a)-(c) we observe that the segmentation results are visually improved by incorporating the shape prior term. Moreover, more accurate segmentation results are obtained when using the new edge detection function (see figures 8-10 (b), (c), and (d), right column). However, the results still include over- and under-segmented areas and overlapping regions between neighboring structures, whereas our complete approach shows significantly improved segmentation results with reduced overlapping regions (see figures 8-10 (d), left column). Additionally, our approach yields overall higher DC values (Fig. 12). We also observe that while the DC values for left STN, right SN, CN, and Pu with a single-modality image (using GAS based on g or g′ with shape priors) are similar to those of our approach, their visual segmentation results were inaccurate due to under- and over-segmentation and overlapping regions between neighboring structures (figures 9-10 (b) and (c)). In addition, the overall DC values increase or are maintained during the iterative segmentation process within our approach (Table III). In the few cases where the DC values are reduced after iteration, our approach still shows clear delineation between adjacent structures, whereas the manual segmentations have overlapping regions (see Fig. 9 (d), left column, and the DC values of SN and STN on dataset 3 in Table III). This suggests that those manual segmentations were not completely well defined around boundaries, even though they were produced by an anatomy specialist.
Fig. 9.
DC values of segmentations from GAS and our proposed model (without and with non-overlapping constraints), based on three combinations of two single-modal images for each structure. (a) GPe. (b) GPi. (c) SN. (d) STN. (e) CN. (f) Tha. (g) Pu. The left and right columns represent left and right structures, respectively.
Next, we work with dataset 6, showing that our approach is robust to the variability of the training shapes, a critical aspect for practical use. The segmentation results are evaluated only visually since ground truth for dataset 6 is not available. We tested our proposed model by combining the T1W image and SWI, in addition to the combination of T2W and SWI, to segment GPe, GPi, SN, and STN in the axial and coronal directions, respectively. Also, the b0 image combined with T1W, T2W, and SWI, respectively, is utilized to segment CN, Pu, and Tha. Additionally, the other segmentation approaches are tested on T1W, T2W, and SWI to segment GPe, GPi, SN, and STN, and similarly the b0 image, T1W, T2W, and SWI are utilized to segment CN, Pu, and Tha. Figures 13-15 show the results, demonstrating the improvements obtained with our proposed approach. We observed that the segmentation results on dataset 6 are comparable with the results on dataset 3 even though training shapes from all the different subjects were employed simultaneously. The SN and STN on T2W were accurately segmented using edge information from SWI even though SN and STN are not visible in the T2W image (see Fig. 14 (d)). Note that the segmentation results on the single T2W image were primarily driven by the shape priors (see Fig. 14 (b) and (c)). Furthermore, the combination of T2W and SWI for GPe and GPi yields better segmentation than that of T1W and SWI; T2W carries more structural information than T1W within the GP region, and SWI provides clear delineation between GPe and GPi [4] (see Fig. 13 (d)). On the other hand, the combination of T1W and SWI for SN and STN shows better results than that of T2W and SWI (see Fig. 14 (d)), since the T2W image of dataset 6 in Fig. 14 contains no information for SN and STN (small field of view).
These results indicate that the edge map generated from T1W, which misses the boundaries between SN and STN because of the homogeneous intensity around these regions, was improved by the detailed edge information from the high-contrast SWI. Moreover, the combination of T1W or T2W with SWI shows superior results to single SWI (compare figures 13-15 (c), GAS with shape priors using g′ on the single modality, with (d), the proposed method without non-overlapping constraints on the multi-modal data). This indicates that the edge map on SWI alone does not contain sufficiently homogeneous regions inside the structures for the evolution of the surfaces, even though SWI provides detailed edge information. Additionally, the combination of b0 and SWI in our approach shows better segmentation results for CN, Tha, and Pu (see Fig. 15 (d)) than the combination of b0 with T1W or T2W. To conclude, this experiment demonstrates that the combination of edge maps from multi-modal images, shape priors, and non-overlapping constraints within our proposed approach contributes to clear improvements in the quality of subcortical structure segmentation.
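As an illustration of the multi-modal fusion idea (not the paper's exact g_new of (19)), a gradient-based edge indicator can be computed per modality and fused by a pointwise minimum, so that a boundary visible in either image stops the evolving surface; the classic 1/(1+|∇I|²) form and the minimum fusion rule are assumptions for this sketch:

```python
import numpy as np

def edge_indicator(image):
    """Classic edge-stopping function g = 1 / (1 + |grad I|^2); near 0 at edges."""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    return 1.0 / (1.0 + gx**2 + gy**2)

def fused_edge_indicator(modality_a, modality_b):
    """Pointwise minimum: keep the stronger edge response of the two modalities."""
    return np.minimum(edge_indicator(modality_a), edge_indicator(modality_b))

# toy example: an intensity step visible only in the second modality (e.g., SWI)
flat = np.ones((8, 8))                      # homogeneous T1W-like patch
step = np.ones((8, 8)); step[:, 4:] = 5.0   # SWI-like patch with a boundary
g = fused_edge_indicator(flat, step)
print(g.min() < 0.5)  # → True: the fused map inherits the SWI edge
```

The same fusion idea explains the dataset 6 observation above: where T1W is homogeneous around SN/STN, its indicator stays near 1, and the SWI term supplies the missing boundary.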
Fig. 13.
Comparison of segmentation results from the multi-modality based approaches for CN, Tha, and Pu on dataset 3. The violet, dark green, and cyan represent CN, Tha, and Pu, respectively. The blue contours represent manual segmentations. Top and bottom in each figure represent contours and volumetric segmentations, respectively. Figures (a), (b), and (c) show segmentation results in the one view (first column) of CN and Tha and another view (second column) of CN, Tha, and Pu from GAS, the proposed approach without non-overlapping constraints, and the proposed approach, respectively, with surface distance maps (third column (top view) and fourth column (bottom view), first row: CN, second row: Pu and Tha) on the FA image combined with axial SWI. GPe (light green) segmented on T2W combined with SWI (i.e., segmented GPe in Fig. 10 (f)) is incorporated as contours in (b) and (c), respectively, to see overlaps between Pu and GPe. Note that overlaps between Pu and GPe in (c) are considerably reduced (see top right of (b) and (c)).
Fig. 15.
Average DC values and standard errors of segmented results for each approach on data set from 1 to 5. Figures (a) and (b) represent DC values for left and right structures, respectively.
Fig. 14.
Manual segmentations for each structure on dataset 3. Top left shows GPe (light green) and GPi (brown) on the axial SWI. Top right represents SN (red) and STN (yellow) on the coronal SWI. Bottom left shows Pu (cyan) and GPe (light green) on the FA image. Bottom right represents CN (violet) and Tha (dark green) on the FA image.
Finally, we performed segmentation using FSL FIRST and FreeSurfer. Note that FSL FIRST and FreeSurfer work only on T1W data [16]-[19] and do not segment GPe, GPi, SN, and STN. Therefore, the sagittal T1W image of dataset 6 was used to segment only CN, Pu, and Tha. We visually compared the results with GAS, GAS with shape priors, and GAS based on g′ with shape priors on the single T1W data (see Fig. 15 (a)-(c)). We also tested our approach on the combination of T1W or T2W with SWI on dataset 6 (Fig. 16). The segmentation results obtained with these packages are qualitatively better than those of the other previous techniques tested here, but there are still overlapping regions between CN, Pu, and Tha (see especially the FreeSurfer results in Fig. 16 (b)). In contrast, our approach shows comparable results without overlapping regions between neighboring structures. In particular, the segmentation results for the combination of T2W and SWI exhibit comparable performance to those of the combination of T1W and SWI (see Fig. 16 (c)-(d)). Note that FSL FIRST and FreeSurfer do not work on T2W, which provides more anatomical information than T1W within subcortical structures [4]. To summarize this last experiment, the segmentation results on low-contrast T1W are significantly improved by the new edge indicator function, exploiting the detailed edge information of SWI, and by the non-overlapping iterative process within our approach, yielding visual performance comparable with FSL FIRST and FreeSurfer but without overlapping regions between structures. Furthermore, our approach achieves better segmentation results by taking advantage of the richer edge information in the combination of T2W and SWI.
Fig. 16.
Comparison of segmentation results from the multi-modality based approaches for GPe and GPi on dataset 6. The light green and brown represent GPe and GPi, respectively. Top and bottom in each figure represent contours and volumetric segmentations, respectively. Figures (a), (b), and (c) show segmentation results of GAS, the proposed approach without non-overlapping constraints, and the proposed approach, respectively, on the axial T2W image combined with axial SWI.
V. CONCLUSION
This paper presented a novel active surface model for the segmentation of subcortical structures such as the basal ganglia and thalamus using ultra-high field MRI. A statistical shape model is employed to guide the evolving surface toward the structures to be segmented on edge maps with limited information. We introduced a novel edge indicator function exploiting the superior SNR and CNR of SWI at high field. This new edge indicator function generates features by combining edge maps, obtained from the Laplacian of multi-modal images such as T1W, T2W, or FA images and SWI, with boundary information at the initial position of the given shape priors. Moreover, a non-overlapping repulsion force is added, delineating boundaries between neighboring objects and improving the overall quality of the segmentation.
Exhaustive segmentation tests on different combinations of MRI modalities (T1W, T2W, FA (or b0), and SWI) were performed, showing visually accurate segmentations and yielding high DC values. We demonstrated that the combination of T2W and SWI within the GPe, GPi, SN, and STN regions, and of FA (or b0) and SWI within the thalamus region, yields better results than other combinations. Furthermore, our approach shows comparable segmentation results for fully occluded objects and robustness to the variability of the training shapes. Overall, we demonstrated that the proposed approach significantly improves the volumetric segmentation of complex, adjacent structures such as the basal ganglia and thalamus.
Our future work includes further investigation of the relationship between the empirical parameters, and an edge-based multiphase implementation for the simultaneous segmentation of neighboring structures that accounts for overlapping regions.
Fig. 17.
Comparison of segmentation results from the multi-modality based approaches for SN and STN on dataset 6. The red and yellow represent SN and STN, respectively. Top and bottom in each figure represent contours and volumetric segmentations, respectively. Figures (a), (b), and (c) show segmentation results of GAS, the proposed approach without non-overlapping constraints, and the proposed approach, respectively, on the coronal T2W image combined with coronal SWI.
Fig. 18.
Comparison of segmentation results from the multi-modality based approaches for CN, Tha, and Pu on dataset 6. The violet, dark green, and cyan represent CN, Tha, and Pu, respectively. Top and bottom in each figure represent contours and volumetric segmentations, respectively. Figures (a), (b), and (c) show segmentation results in one view (left column) and another view (right column) of CN, Tha, and Pu from GAS, the proposed approach without non-overlapping constraints, and the proposed approach, respectively, on the FA image combined with axial SWI. GPe (light green), segmented on T2W combined with SWI (i.e., the segmented GPe in Fig. 16 (c)), is incorporated as contours in (b) and (c) to show overlaps between Pu and GPe. Note that overlaps between Pu and GPe in (c) are considerably reduced (see top right of (b) and (c)).
Fig. 19.
Comparison of segmentation results from the single-modality based approaches on the T1W image for CN, Tha, and Pu (dataset 6). The violet, dark green, and cyan represent CN, Tha, and Pu, respectively. Top and bottom in each figure represent contours and volumetric segmentations, respectively. Figures (a), (b), and (c) show segmentation results of GAS, GAS with shape prior using g, and GAS with shape prior using g′ on the T1W image, respectively. Two views of CN, Tha, and Pu are shown in the left and right columns.
Fig. 20.
Segmentation results from FSL FIRST and FreeSurfer on the T1W image for CN, Tha, and Pu (dataset 6). The violet, dark green, and cyan represent CN, Tha, and Pu, respectively. Top and bottom in each figure represent contours and volumetric segmentations, respectively. Figures (a) and (b) show segmentation results of FSL FIRST and FreeSurfer, respectively. Two views of CN, Tha, and Pu are shown in the left and right columns.
Fig. 21.
Comparison of segmentation results from the multi-modality based approaches on T1W data combined with the FA image, SWI, or T2W image for CN, Tha, and Pu (dataset 6). The violet, dark green, and cyan represent CN, Tha, and Pu, respectively. Top and bottom in each figure represent contours and volumetric segmentations, respectively. The first row (figures (a), (b), and (c)) shows segmentation results of GAS. The second row (figures (d), (e), and (f)) shows segmentation results of the proposed approach without non-overlapping constraints. The third row (figures (g), (h), and (i)) shows segmentation results of the proposed approach. The first (figures (a), (d), and (g)), second (figures (b), (e), and (h)), and third (figures (c), (f), and (i)) columns represent segmentation results of each approach on T1W combined with FA, SWI, and T2W, respectively. The left and right sides in each figure are two views of CN, Tha, and Pu. GPe (light green), segmented on T2W combined with SWI (i.e., the segmented GPe in Fig. 16 (c)), is incorporated as contours in (d)-(i) to show overlaps between Pu and GPe. Note that overlaps between Pu and GPe in figures (g)-(i) are considerably reduced (see top right of (d)-(i)).
ACKNOWLEDGMENTS
This work was supported in part by NIH grants R01 EB008645, R01 EB008432, S10RR026783, P30 NS057091, P41 RR008079, P41 EB015894, the Human Connectome Project (U54 MH091657) from the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research and the W.M. Keck Foundation.
REFERENCES
- 1. Madden MJ. Segmentation of Images with Low-contrast Edges. M.S. thesis, Dept. Electron. Eng., West Virginia University, USA, 2007.
- 2. Fang W, Chan KL. Incorporating shape prior into geodesic active contours for detecting partially occluded object. Pattern Recognition. 2007;40:2163–2172.
- 3. Chan TF, Vese LA. Active contours without edges. IEEE Trans. Image Processing. 2001 Feb;10:266–277. doi: 10.1109/83.902291.
- 4. Abosch A, Yacoub E, Ugurbil K, Harel N. An assessment of current brain targets for deep brain stimulation surgery with susceptibility-weighted imaging at 7 tesla. Neurosurgery. 2010 Dec;67:1745–1756. doi: 10.1227/NEU.0b013e3181f74105.
- 5. Lenglet C, Abosch A, Yacoub E, De Martino F, Sapiro G, Harel N. Comprehensive in vivo mapping of the human basal ganglia and thalamic connectome in individuals using 7T MRI. PLoS ONE. 2012 Jan;7:1–14. doi: 10.1371/journal.pone.0029153.
- 6. Terzopoulos D, Witkin A, Kass M. Constraints on deformable models: Recovering 3D shape and nonrigid motion. Artificial Intelligence. 1988;36:91–123.
- 7. Caselles V, Kimmel R, Sapiro G. Geodesic active contours. Int'l J. Computer Vision. 1997;22:61–79.
- 8. Osher S, Sethian JA. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Computational Physics. 1988;79:12–49.
- 9. Caselles V, Kimmel R, Sapiro G, Sbert C. Minimal surfaces based object segmentation. IEEE Trans. Pattern Analysis and Machine Intelligence. 1997 Apr;19:394–398.
- 10. Leventon M, Grimson E, Faugeras O. Statistical shape influence in geodesic active contours. Proc. IEEE Conf. CVPR. 2000;1:316–323.
- 11. Uzunbas MG, Soldea O, Unay D, Cetin M, Unal G, Ercil A, Ekin A. Coupled nonparametric shape and moment-based intershape pose priors for multiple basal ganglia structure segmentation. IEEE Trans. Medical Imaging. 2010 Dec;29:1959–1978. doi: 10.1109/TMI.2010.2053554.
- 12. Paragios N, Deriche R. Coupled geodesic active regions for image segmentation: A level set approach. Proc. 6th European Conf. on Computer Vision (ECCV), Part II, Dublin, Ireland, 2000, pp. 224–240.
- 13. Paragios N, Deriche R. Geodesic active contours and level sets for the detection and tracking of moving objects. IEEE Trans. Pattern Analysis and Machine Intelligence. 2000 Mar;22:266–280.
- 14. Lorigo LM, Faugeras O, Grimson WEL, Keriven R, Kikinis R, Nabavi A, Westin C-F. Codimension-two geodesic active contours for the segmentation of tubular structures. Proc. IEEE Conf. CVPR. 2000;1:444–451.
- 15. Cho Z, Min H, Oh S, Han J, Park C, Chi J, Kim Y, Paek S, Lozano AM, Lee KH. Direct visualization of deep brain stimulation targets in Parkinson disease with the use of 7-Tesla magnetic resonance imaging. J. Neurosurgery. 2010 Sep;113:639–647. doi: 10.3171/2010.3.JNS091385.
- 16. Woolrich MW, Jbabdi S, Patenaude B, Chappell M, Makni S, Behrens T, Beckmann C, Jenkinson M, Smith SM. Bayesian analysis of neuroimaging data in FSL. NeuroImage. 2009;45(1):173–186. doi: 10.1016/j.neuroimage.2008.10.055.
- 17. Patenaude B, Smith SM, Kennedy DN, Jenkinson M. A Bayesian model of shape and appearance for subcortical brain segmentation. NeuroImage. 2011;56(3):907–922. doi: 10.1016/j.neuroimage.2011.02.046.
- 18. Fischl B, Salat DH, Busa E, Albert M, Dieterich M, Haselgrove C, van der Kouwe AJ, Killiany R, Kennedy D, Klaveness S, Montillo A, Makris N, Rosen B, Dale AM. Whole brain segmentation: Automated labeling of neuroanatomical structures in the human brain. Neuron. 2002 Jan;33:341–355. doi: 10.1016/s0896-6273(02)00569-x.
- 19. Fischl B, Salat DH, van der Kouwe AJ, Makris N, Segonne F, Quinn BT, Dale AM. Sequence-independent segmentation of magnetic resonance images. NeuroImage. 2004;23(1):S69–S84. doi: 10.1016/j.neuroimage.2004.07.016.
- 20. Amira, Visage Imaging. [Online]. Available: http://www.amira.com/
- 21. Insight Segmentation and Registration Toolkit (ITK), Kitware. [Online]. Available: http://www.itk.org/
- 22. The Visualization Toolkit (VTK), Kitware. [Online]. Available: http://www.vtk.org/
- 23. Pieper S, Kikinis R, Miller J, Halle M, Lorensen B, Schroeder W. 3D Slicer. [Online]. Available: http://www.slicer.org/
- 24. Silverman BW. Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC, London, U.K., 1986.
- 25. Jenkinson M, Bannister P, Smith SM. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage. 2002;17:825–841. doi: 10.1016/s1053-8119(02)91132-8.