Abstract
Shown in every neuroanatomy textbook, a key morphological feature of the hippocampus is the series of bumpy ridges on its inferior aspect, which we refer to as hippocampal dentation. Like the folding of the cerebral cortex, hippocampal dentation allows for greater surface area in a confined space. However, examining numerous approaches to hippocampal segmentation and morphology analysis, virtually all published 3D renderings of the hippocampus show the inferior surface to be quite smooth or mildly irregular; we have rarely seen the characteristic bumpy structure on reconstructed 3D surfaces. The only exception is a 9.4T postmortem study (Yushkevich et al. [2009]: NeuroImage 44:385–398). An apparent question is: does this indicate that this specific morphological signature can only be captured using ultra high-resolution techniques? Or is such information buried in the data we commonly acquire, awaiting a computational technique that can extract and render it clearly? In this study, we propose an automatic and robust super-resolution technique that captures the fine scale morphometric features of the hippocampus based on common 3T MR images. The method is validated on 9.4T ultra-high field images and then applied to 3T data sets. This method opens possibilities for future research on hippocampal and other sub-cortical structural morphometry, correlating the degree of dentation with a range of diseases including epilepsy, Alzheimer's disease, and schizophrenia. Hum Brain Mapp 39:472–490, 2018. © 2017 Wiley Periodicals, Inc.
Keywords: hippocampus dentation, segmentation, morphometry
INTRODUCTION
Numerous studies have been devoted to image-based sub-cortical morphology from radiological images. The hippocampus has been the focus of more studies than any other sub-cortical structure. A number of brain disorders have demonstrable abnormalities of hippocampal volume [Beresford et al., 2006; Bobinski et al., 1995; Fleisher et al., 2008; Hayes et al., 2014; Morra et al., 2009a,b], shape [Apostolova et al., 2006; Colliot et al., 2008; Csernansky et al., 1998; Frankó et al., 2013; Gao and Bouix, 2014, 2016; Nestor et al., 2012; Scher et al., 2007; Styner et al., 2004; Thompson et al., 2004; Wang et al., 2006], or metabolic properties [Kraguljac et al., 2013] of the hippocampus. Accurate segmentation of the hippocampus is the critical first step for volumetric or morphometric analysis; therefore, methods to precisely and consistently extract the hippocampus from MR images have been the subject of much research [Bishop et al., 2011; Boccardi et al., 2011; Carmichael et al., 2005; Chupin et al., 2007, 2009a, 2009b; Collins and Pruessner, 2010; Coupe et al., 2010b, 2011, 2011a; Gao et al., 2012a; Ghanei et al., 1998; Hao et al., 2014; Hu et al., 2011; Khan et al., 2011; Kim et al., 2013; Konrad et al., 2009; Kwak et al., 2013; Luo et al., 2014; Morey et al., 2009; Pipitone et al., 2014; Pluta et al., 2009; Prudent et al., 2010; Tong et al., 2013; Van Leemput et al., 2009; van der Lijn et al., 2008; Wang et al., 2011a, 2013; Yushkevich et al., 2010; Zarei et al., 2013; Zarpalas et al., 2014].
The majority of recent studies have used images with voxel dimensions at or near 1 mm isotropic acquired on 3-Tesla (3T) scanners. When 3T MRI is not sufficient to reveal fine structural elements, a 7T scanner may be used to push the resolution limit further. Ultra-high field scanners have been employed to achieve 0.7 mm isotropic resolution in vivo at 7T [Derix et al., 2014; Henry et al., 2011; Kim et al., 2013; Wisse et al., 2012] and even 0.2 mm isotropic resolution in ex vivo specimens at 9.4T [Yushkevich et al., 2009]. However, at present there are fewer than thirty 7T scanners in all of North America. This limited accessibility severely restricts the utility of the technology for the wider research community. Furthermore, 7T scanners are expensive, more prone to image distortions due to field inhomogeneities, and not currently FDA-approved for clinical use.
An interesting morphological feature of the hippocampus that is commonly, but not always, present is a series of transverse ridges on its inferior surface, which we refer to as hippocampal dentation. These ridges, or dentes ("teeth"), arise from folds in the Cornu Ammonis 1 (CA1) layer of the hippocampus; they appear on the inferior-lateral aspect of the hippocampal body, extend through the inferior-medial aspect of the tail [Duvernoy, 2005; Figures 22, 32, 35], and are similar to the undulating contour of the dentate gyrus above them.
Similar to the gyri of the neocortex, the folds of hippocampal neuronal layers that produce the dentated appearance may represent an adaptation to pack a larger surface area in a given volume.
To the best of our knowledge, while quantitative studies of radiological brain images have been advancing for decades and have examined numerous approaches to hippocampal segmentation and morphology analysis, virtually all published 3D renderings of the hippocampus show the inferior surface to be quite smooth or mildly irregular; we have rarely seen prominent hippocampal dentation in a reconstructed 3D surface, with the only exception being the 9.4T postmortem study of [Yushkevich et al., 2009]. At that 0.2 mm isotropic resolution, the reconstructed dentation begins to appear.
Interestingly, though hippocampal dentation is not apparent in segmentations performed at the native resolution of 1 mm isotropic, the structure can be observed visually even in routine 3T images. Figure 1 shows examples of typical T1w MPRAGE images, in which the degree of hippocampal dentation varies dramatically among normal individuals, from prominently dentated (Fig. 1C) to minimally dentated (Fig. 1G). However, with the typical approach to segmentation based on manual tracing performed at the image's native resolution, the reconstructed 3D surfaces of the hippocampi do not clearly show dentation, as shown in Figure 1D,H. It should be noted that most published segmentation results do not have the "boxy" surface appearance of Figure 1D/H, due to triangulation-based approaches to surface rendering and/or smoothing of the extracted surfaces. Nevertheless, it is evident that a binary volume at this resolution is not sufficient to reveal fine-scale surface features such as dentation.
Figure 1.

Top row: a prominently dentated hippocampus. Bottom row: a minimally dentated hippocampus. (A),(E) Full sagittal images through the hippocampus, which is surrounded by a dashed box. (B),(F) Magnified view of the hippocampal region at the original image resolution of 1 × 1 mm. The undulating hippocampal contour in B is difficult to appreciate at the native resolution when viewed up close, but is more apparent viewed from a distance or when squinting. (C),(G) Hippocampal region at sub-pixel resolution (0.1 × 0.1 mm). The dentated contour of the inferior hippocampal surface can be clearly seen in C as opposed to the smooth contour in G. (D),(H) Reconstructed inferior surface of the hippocampus at the native resolution; the dentation information is lost. [Color figure can be viewed at http://wileyonlinelibrary.com]
This leads to the question, do we have to use ultra‐high resolution images obtained with ultra‐high field scanners (7T or greater), and possibly post‐mortem specimens, to extract such complicated surface contours? Or, if such information does reside in the 3T data, can we design a specific algorithm to extract it?
The present study addresses the issue of extracting fine hippocampal morphological features from clinically available 3T MR images. Our underlying hypothesis is that the grayscale data of standard T1w images contains additional information about the contour of the hippocampal boundary that can be used to infer sub-millimeter surface features, but that this information is lost when segmentation is used to generate a binary mask at the native resolution. By resampling the data onto a grid whose spacing is much smaller than the variation in the surface contour, and then employing a robust segmentation algorithm, we can reproduce the surface at a sub-voxel scale.
Essentially, the key contributions of the paper are two-fold:
First, from a neuro-anatomical point of view, the proposed method successfully extracts a significant morphological feature of the hippocampus from clinically available MR scans. This characteristic dentate morphology is demonstrated in essentially every neuroanatomy book, and its degree has been found to correlate significantly with various psychiatric and psychological states. However, reviewing the neuroimage analysis literature, particularly on hippocampus segmentation, we failed to find a correct capture of this dentate morphology from clinical MR images. With the capability presented in this work, we can now quantitatively capture the characteristic morphology of the hippocampus, which enables us to further study its correlation with various disorders in a more quantitative and robust manner at a much larger scale.
Second, from an algorithmic point of view, the work proposes an approach that utilizes low-resolution training atlases to segment a structure at much higher resolution. Indeed, the detailed tracing of the target is very tedious manual work. The growth of scanner resolution improves the capability of detecting finer and finer structures; however, the manual burden of volumetric atlas labeling increases super-linearly with the resolution. Therefore, with the rapid growth of data size and resolution, we need an accurate and robust approach that utilizes already created lower-resolution atlases to analyze higher-resolution data.
We believe that this opens possibilities of future research on hippocampal and other sub‐cortical structure morphometry correlating the changes in dentation with a range of diseases and disease progression including epilepsy, Alzheimer's disease (AD), and schizophrenia.
METHODS
As mentioned above, although numerous hippocampus segmentation studies exist, to the best of our knowledge none has demonstrated hippocampal dentation on 3T MR images. This may be because the labeling is performed at the native image resolution, or because the mesh/graph node density is not high enough. Because of this, even contours drawn manually by an expert, which are ubiquitously considered the reference standard, are not able to reveal fine hippocampal morphological features. As a result, existing online databases of training data for multi-atlas segmentation approaches do not contain such information.
In this work, we propose a coupled self‐correcting multi‐atlas and active contour scheme to harness the robustness of the multi‐atlas method and achieve the super‐resolution segmentation capability under the active contour framework.
The main idea of the present work is straightforward: the segmentation is performed on a much denser interpolated grid to reveal the millimeter/sub-millimeter level morphological features contained in the grayscale information of the 1-millimeter scale native images.
However, several obstacles arise when dealing with images at such a high resolution. First, although multi-atlas based algorithms currently achieve the most accurate and robust performance for hippocampus segmentation, they rely heavily on the existence of training segmentations. Unfortunately, since there are no training hippocampal segmentations from 3T MRI that reveal the fine dentation features, a multi-atlas approach alone is not applicable for fine-scale segmentation at high resolution. Second, the high resolution at which the morphology of hippocampal dentation is apparent boosts the data volume to a much larger magnitude, as will be discussed below, more than 1,000 times larger volumetrically. Nonlinear registration, such as ITK's symmetric demons [Ibanez et al., 2005] and ANTs [Avants et al., 2009], often consumes 100–160 times the number of voxels in memory (single threaded execution, steady state memory consumption, not peak). Assuming this scales linearly, a 1280 × 1280 × 750 matrix will require around 180 GB of memory. Handling such a large data volume is a challenging issue for most workstations.
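As a rough order-of-magnitude check, interpreting the 100–160× figure above as roughly 150 bytes of working memory per voxel (an assumption made here purely for illustration):

$$1280 \times 1280 \times 750 \approx 1.23 \times 10^{9}\ \text{voxels}, \qquad 1.23 \times 10^{9} \times 150\ \text{B} \approx 184\ \text{GB}.$$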
Indeed, how to perform segmentation accurately and robustly on such a highly interpolated grid, obtained from a monotonic interpolation, is the main issue addressed in this research and the main contribution of the paper. To address the above issues, we present the following coupled two-stage approach that extracts the fine hippocampal morphological features from widely available 3T MR images.
To aid further discussion, we define some notation. First, denote the novel image to be segmented as $I$, defined on the discrete domain (grid) $\Omega$ with grid density (resolution) $s$. Standard 3T MR images often have $s = 1.0$ mm. Alongside, a set of training images $T_1, \dots, T_N$ are defined on domains of the same resolution. Their respective manually segmented binary label images are $L_1, \dots, L_N$, with 1 indicating the inside of the target, the hippocampus in the present case.
The proposed method is detailed below. It contains two main components. First, a “self‐correcting” multi‐atlas scheme is used to determine the low‐resolution hippocampus probability map. After that, an active contour scheme further refines the morphology at much higher interpolated resolution.
Construction of probability map in native resolution via a self‐correcting multi‐atlas approach
Due to their robustness and accuracy, multi-atlas methods have been adopted in many segmentation scenarios. The basic idea behind atlas-based segmentation is to drive segmentation by registration: to segment a novel image, one registers already segmented images (training images) to this novel image, and utilizes the resulting transformations to deform the corresponding segmentations (training label images) to the space of the novel image. The basic scheme of multi-atlas segmentation can be divided into two steps, registration and label fusion. First, each of the training images $T_i$ is registered to $I$, and the optimal transformation $\phi_i$ minimizes the cost function:
$$\phi_i = \operatorname*{argmin}_{\phi} \; d\big(T_i \circ \phi,\; I\big), \qquad i = 1, \dots, N \tag{1}$$
where the dissimilarity measure $d(\cdot, \cdot)$ quantifies the global discrepancy between the two images. After the registration, in the second stage of the multi-atlas segmentation, each training label image $L_i$ is transformed with $\phi_i$, and the transformed training label images, $L_i \circ \phi_i$, are fused to form the segmentation.
The residual registration cost is oftentimes used as an indicator of registration performance. However, not only is a single value insufficient to describe the whole deformation field, but such a value also reflects only the global discrepancy between the two images and is not specific to the target we are trying to extract. To address these issues, in the fusion step researchers have proposed localized methods that compare the local patterns of the registered training images with the novel image [Derix et al., 2014; Sabuncu et al., 2010; Wang et al., 2011b, 2012].
While such collective decision making in the fusion step improves the overall performance, the same idea can further be employed in the upstream registration step. With a more accurate registration transformation, the fusion is provided with better alignment, and significantly better accuracy and robustness are achieved [Gao et al., 2015]. However, in that work the filtering strategy is only performed over the linear (affine) transformation. The nonlinear deformation, which reveals the detailed morphology, is computationally prohibitive to process through the Kalman filtering scheme presented in [Gao et al., 2015]. The present research proposes a computationally feasible way to harness the nonlinear inter-relationships among the training (label) images, and uses such information to correct the registration step for a better overall segmentation.
The key observation is this: the nonlinear transformations $\phi_i$ are computed to register the grayscale images $T_i$ to $I$. As a result, the $\phi_i$ should also align the corresponding binary masks, which highlight the target regions. Formally, if $\phi_i$ and $\phi_j$ register $T_i$ and $T_j$ to $I$, respectively, we would have
$$L_i \circ \phi_i \approx L_j \circ \phi_j \tag{2}$$
As a result, a by-product is a registration between $L_i$ and $L_j$:
$$L_j \circ \psi_{ij} \approx L_i, \qquad \psi_{ij} := \phi_j \circ \phi_i^{-1} \tag{3}$$
Since the $T_i$'s are of the same modality and the $L_i$'s are binary images, the quality of $\psi_{ij}$ can be evaluated with point-wise accuracy using straightforward metrics. Moreover, in contrast to using the final optimization cost as the registration quality assessment, such an evaluation is independent of the registration optimization process. This provides an approach, at the registration stage, to cross-check the registration performance, with the possibility of self-correction, which is detailed next.
Furthermore, since the ultimate goal of the registration is the segmentation of the target, the registration accuracy remote from the target is of less interest. Indeed, only the registration accuracy around the target affects the segmentation. Therefore, we can focus in particular on the points in the target, that is, on the support of $L_i$.
In the nonlinear registration between grayscale images, regularization plays a critical role in avoiding the development of singularities. The choice of an appropriate regularization is a fine art, balancing optimization stability against the desired registration accuracy, especially in sharp, fine-scale regions.
However, in the nonlinear registration between binary images, fortunately, one can adopt a point set representation of the target [Gao and Tannenbaum, 2010]. Under such a representation, a diffeomorphic registration can be achieved without the use of regularization. Specifically, $L_i$ and $L_j$ are considered as non-normalized probability density functions (pdfs) of certain random variables $X_i$ and $X_j$, respectively. Evidently, $X_i$ and $X_j$ are uniformly distributed on the respective supports of $L_i$ and $L_j$. Then, $Q$ points are sampled from $X_i$ and $X_j$, forming two sets of points $\{p_k\}_{k=1}^{Q}$ and $\{q_l\}_{l=1}^{Q}$. The goal is then to find an optimal correspondence and diffeomorphic transformation between the two point sets.
We denote the correspondence between $\{p_k\}$ and $\{q_l\}$ by a matrix $A$, where $A_{kl} = 1$ ($A_{kl} = 0$, resp.) indicates that $p_k$ is corresponding (not corresponding, resp.) with $q_l$. Denoting the pair-wise distance matrix as $D$, with $D_{kl} = \|p_k - q_l\|_2$ where $\|\cdot\|_2$ is the Euclidean norm, we find the correspondence between the two sets of points by solving the following assignment problem:
$$A^{*} = \operatorname*{argmin}_{A} \; \|A \odot D\|_F \quad \text{s.t.} \quad A\mathbf{1} = \mathbf{1},\; A^{\top}\mathbf{1} = \mathbf{1},\; A_{kl} \in [0, 1] \tag{4}$$
where $\odot$ is the Hadamard product of the two matrices and $\|\cdot\|_F$ is the matrix Frobenius norm. Moreover, it is noted that the optimization variable $A$ is not restricted to be a binary matrix; otherwise the optimization becomes an NP-hard combinatorial problem. Fortunately, because the constraint matrix of (4) is totally unimodular, the resulting optimal $A$ is a binary matrix [Burkard et al., 2009]. This optimization problem can be shown to be convex, and it can be solved effectively using, for example, the interior point method [Boyd and Vandenberghe, 2004]. The resulting matrix $A$ gives a one-to-one correspondence between $\{p_k\}$ and $\{q_l\}$. Hence, a transformation $\tau_{ij}$ is defined between $L_i$ and $L_j$.
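Because the optimum of (4) is a permutation matrix, it can also be obtained in practice with an off-the-shelf linear assignment solver. The following is a minimal sketch of this step only; the random point sets, the sample size, and the use of a Hungarian-type solver on squared distances are illustrative assumptions, not the interior point implementation described above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
Q = 200
p = rng.uniform(size=(Q, 3))   # Q points sampled from the support of L_i (illustrative)
q = rng.uniform(size=(Q, 3))   # Q points sampled from the support of L_j (illustrative)

D = cdist(p, q)                # pair-wise Euclidean distance matrix D_kl = ||p_k - q_l||

# Minimizing ||A (.) D||_F over permutation matrices is equivalent to a
# linear assignment problem with squared-distance costs.
row, col = linear_sum_assignment(D ** 2)
A = np.zeros((Q, Q), dtype=int)
A[row, col] = 1                # binary one-to-one correspondence matrix
print("Frobenius cost:", np.linalg.norm(A * D))
```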
While the transformations obtained through the two different routes should coincide, in practice they do not exactly agree. That is,
$$\tau_{ij} = \epsilon_{ij} \circ \psi_{ij} = \epsilon_{ij} \circ \phi_j \circ \phi_i^{-1} \tag{5}$$
where $\epsilon_{ij}$ is the residual transformation.
The final transformation is computed as
$$\hat{\phi}_i = (1 - w)\,\phi_i + w\,\big(\tau_{ij}^{-1} \circ \phi_j\big) \tag{6}$$
where $w \in [0, 1]$ is a convex weight adjusting the contributions from the two routes.
Once the corrected nonlinear transformations $\hat{\phi}_i$ are computed, a simple averaging scheme is adopted to obtain a probability map $P: \Omega \to [0, 1]$ as
$$P(x) = \frac{1}{N} \sum_{i=1}^{N} \big(L_i \circ \hat{\phi}_i\big)(x) \tag{7}$$
Using the majority voting rule, the boundary of the target can therefore be defined as the 0.5-isocontour of $P$. It is noted that although more sophisticated fusion schemes exist, here the purpose is mainly to obtain a robust and accurate probability map for the subsequent fine-tuning of the segmentation, which is detailed in the next section.
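The averaging and majority-vote step of Equation (7) is straightforward once the corrected transformations have been applied to the training labels. A minimal sketch, assuming the warped binary label volumes are already available as NumPy arrays of identical shape:

```python
import numpy as np

def fuse_labels(warped_labels):
    """Simple label fusion: average the warped binary label maps into a
    probability map P (Eq. 7) and take its 0.5-isocontour (majority vote)
    as the native-resolution boundary estimate."""
    prob = np.mean(np.stack(warped_labels, axis=0), axis=0)
    seg = prob >= 0.5
    return prob, seg
```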
Synergistic Surface Evolution for Super‐Resolution Segmentation
As discussed above, although a rich collection of segmentation schemes exists for the hippocampus, their common shortcoming is that, at the native image resolution, the fine dentation morphology has not been captured in 3T MR image based segmentations. Moreover, because atlas based methods depend on training segmentations, it is apparent that if certain shape features do not exist in the training set, it is unlikely, if not impossible, that they will be captured by atlas based segmentation.
As a result, while the probability map $P$ obtained above contains valuable information about the approximate morphology, it has to be fine-tuned in a more de novo, data driven manner. The fine-tuning of the surface in the up-sampled space is detailed below.
Monotonic Interpolation with Very Large Interpolation Factor
To extract the fine morphology, the image $I$ is up-sampled by a factor of $K$, and we denote the new image $\hat{I}$, defined on the denser grid $\hat{\Omega}$ with grid density (resolution) $\hat{s} = s / K$. In this study we used the cubic spline for this purpose. Correspondingly, the probability map $P$ is also interpolated to $\hat{P}$, which is defined on $\hat{\Omega}$. Two critical issues have to be addressed to achieve a successful overall segmentation.
First, the choice of $K$ is apparently critical for the capability of detecting fine scale morphological features. As our experiments show in the next section, at 0.7 mm/pixel, which is a common resolution of 7T MRI, the segmentation still does not capture dentation very well. At 0.2 mm isotropic resolution, the dentations start to emerge. This is consistent with the observation that at 9.4T with isotropic 0.2 mm/pixel resolution, the reconstructed surfaces start to show the dentations [Yushkevich et al., 2009]. While the choice of $K$ will be further evaluated in Section 4, it is noted here that such a simple up-sampling based process reveals three important points: First, we may have overlooked valuable information already existing in 3T MRI. Second, if we still segment the structure at the native resolution, even the 0.7 mm/pixel resolution common in 7T MRI is not sufficient for this morphology study. Third, with the capability of extracting sub-pixel information, a large amount of clinically acquired 3T images becomes available for the study of fine-scale morphology.
Most super-resolution studies only use an interpolation factor around 1.5, 2, or 3 in two dimensions [Dong et al., 2014]. Interpolating to a grid 10 times denser is very rare, especially in three dimensions. This is justifiable, since the objective of super-resolution is usually to achieve a better, sharper, and more visually appealing appearance. With such an objective, the use of sophisticated super-resolution techniques results in a very high computational load even for moderate factors on 2D images.
In contrast, the objective of the present work is not the textural content of the image: we are not trying to infer the whole content of a 9.4T MR image from a 3T image. Instead, we are only interested in the outer contour of a specific object. With such a goal, we have to use a very high magnification factor in all three dimensions, and this precludes the use of sophisticated super-resolution techniques such as those based on neural networks and sparse encoding. As shown in the result section, function interpolation suffices for this purpose. However, care must be taken in the choice of the interpolation kernel, which is the second issue detailed below.
With the purpose of locating the dentate surface, the interpolation must not introduce any new edge/boundary/surface; that is, the interpolation kernel must guarantee the monotonicity and range of the interpolant [Fritsch and Carlson, 1980]. Indeed, non-monotonic interpolation may result in Gibbs ringing artifacts, which may be mistakenly regarded as ripples on the structure.
Figures 2 and 3 show the use of different interpolation schemes and their effects on the identification of the edges. In both figures, the monotonic kernel results in a much smoother and sharper hippocampus boundary.
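The difference between a standard cubic spline and a monotone (Fritsch–Carlson type) cubic kernel can be seen even on a 1-D intensity profile across an edge. The sketch below uses SciPy's PCHIP interpolant as the monotone kernel and a synthetic step profile; both are illustrative choices, not the exact kernel or data used in the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# 1-D intensity profile across a tissue boundary on the native 1 mm grid
x = np.arange(8, dtype=float)
y = np.array([100., 100., 100., 100., 30., 30., 30., 30.])

xf = np.linspace(0, 7, 71)              # 0.1 mm grid (K = 10)
spline = CubicSpline(x, y)(xf)          # non-monotone cubic kernel
pchip = PchipInterpolator(x, y)(xf)     # monotone cubic (PCHIP) kernel

# The spline overshoots/undershoots around the step (ringing), which would
# create spurious zero-crossings in a LoG image; PCHIP stays within [30, 100].
print("spline range:", spline.min(), spline.max())
print("pchip  range:", pchip.min(), pchip.max())
```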
Figure 2.

Sagittal view of the hippocampus region at high resolution using different kernels. Left panel uses the non‐monotonic cubic kernel whereas the monotonic cubic kernel is used for the right panel. In the dash‐line circled regions, we can observe that the hippocampus boundary on the left side has many zig‐zag artifacts. In particular, in the yellow circle on the hippocampus tail, no obvious edge can be seen on the left panel, whereas a clear half‐circular shaped edge can be seen on the right side. [Color figure can be viewed at http://wileyonlinelibrary.com]
Figure 3.

Two coronal views are shown. As in Figure 2, the non-monotonic kernel is used for the left panel and the monotonic kernel on the right. Again, the boundaries on the right side are much smoother and sharper than those on the left. [Color figure can be viewed at http://wileyonlinelibrary.com]
Although the up-sampling provides richer morphological detail, boosting the resolution from 1 mm to an isotropic 0.1 mm dramatically increases the image volume size, by a factor of about 1500. As a result, a single image file can be as large as 50 GB, which is computationally prohibitive. To address this issue, realizing that the approximate hippocampus region has already been encoded in $P$, we focus the computation only on a region around the hippocampus.
At the native resolution, manual tracing of a hippocampus covers only roughly 30 sagittal slices. It is a time consuming but still feasible process. However, at 0.1 mm resolution a single hippocampus spans roughly ten times as many slices and occupies orders of magnitude more voxels. Contouring all those slices is extremely time-consuming, if not impossible, for human raters. While computer-aided segmentation currently only facilitates human contouring, which remains the gold standard at clinical resolutions, at much higher resolutions the paradigm must shift and computer-aided segmentation becomes indispensable. It is also noted that some software allows contouring in physical space, and the resulting contour may achieve sub-pixel accuracy in 2D. This may reveal the dentation pattern in a single slice. However, when advancing to the next slice 1 or 1.5 mm away, the overall spatial resolution is still too coarse for the reconstruction of fine surface features in 3D. Inevitably, one of the main claims of the present work is that for such fine-scale shape reconstruction, entirely manual extraction is beyond the feasible capability of a human rater, and a computational approach seems necessary.
The main idea in this fine-tuning step is that the algorithm learns the image appearance features from the high probability region defined by $\hat{P}$, as well as the edge information in $\hat{I}$, constrained by the spatial vicinity of the region indicated by $\hat{P}$, to compute the final segmentation.
To proceed, the high confidence learning region is defined as the set of voxels where $\hat{P}$ exceeds a high threshold, with a higher value indicating higher confidence. Then, to robustly capture the appearance inside the hippocampus, three robust statistics, the median, inter-quartile range, and median absolute deviation, are measured locally at each location as a feature vector $f(x)$. With the feature vectors defined, the hippocampus appearance is now characterized by the probability density function $\mu$ of the feature vectors, estimated by a kernel density estimation procedure [Botev et al., 2010; Gao et al., 2012b]. Essentially, given any feature vector, the function $\mu$ provides a value measuring the likelihood that such a vector belongs to the hippocampus. However, it may be the case that regions with similar appearance excite high likelihood values even when remote from the hippocampus. This may cause the segmentation to "leak". To address this issue, under a Bayesian framework, the posterior is computed as $\rho(x) := \mu(f(x))\,\hat{P}(x)$, which synergizes the image appearance and the prior estimate of the location of the target. This effectively mitigates the segmentation leakage problem. Such a posterior value can be considered as a conformal metric defined on the image domain [Caselles et al., 1997; Gao et al., 2012a; Kichenassamy et al., 1996b]. The segmentation is then achieved by the following variational approach. We denote the family of evolving surfaces as $S$. It evolves to minimize the energy functional:
$$E(S) = -\alpha \int_{R_S} \rho(x)\, dx + \beta \int_{S} dA \tag{8}$$
where in the first term $x$ traverses $R_S$, the region inside the closed surface $S$, and the second term is the total surface area. $\alpha$ and $\beta$ are positive weighting factors.
The first variation of the functional is computed and the flow of the surface is governed by the partial differential equation:
$$\frac{\partial S}{\partial t} = \big(\beta H - \alpha\, \rho(x)\big)\, \mathcal{N} \tag{9}$$
in which $x$ is the spatial parametrization of the surface, $\mathcal{N}$ is the inward unit normal vector field on $S$, and $H$ is the mean curvature of the surface.
In addition to the regional statistics force, an edge based force is also added to the flow. Defining $e$ to be the LoG (Laplacian of Gaussian) filtered version of $\hat{I}$, we update Equation (9) to
$$\frac{\partial S}{\partial t} = \big(\beta H - \alpha\, \rho(x) + \gamma\, e(x)\big)\, \mathcal{N} \tag{10}$$
Essentially, the surface will evolve and converge to locations that possess a strong edge appearance and are similar in intensity statistics, while remaining spatially close to the atlas derived probability map.
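A minimal sketch of the appearance model feeding the flows above is given below. The window size, the confidence threshold, and the use of SciPy's Gaussian kernel density estimator are assumptions made here for illustration; the paper's density estimator follows [Botev et al., 2010]:

```python
import numpy as np
from scipy.ndimage import median_filter, percentile_filter
from scipy.stats import gaussian_kde

def robust_features(img, size=5):
    """Local robust statistics used as appearance features:
    median, inter-quartile range, and median absolute deviation."""
    med = median_filter(img, size=size)
    iqr = (percentile_filter(img, 75, size=size) -
           percentile_filter(img, 25, size=size))
    mad = median_filter(np.abs(img - med), size=size)
    return np.stack([med, iqr, mad], axis=-1)

def posterior_map(img, prob, conf=0.9, size=5):
    """Posterior rho(x) = likelihood(f(x)) * prior(x): the likelihood is a
    kernel density estimate fit on feature vectors from the high-confidence
    region (prob > conf); the prior is the atlas-derived probability map,
    which suppresses leakage into remote look-alike regions."""
    feats = robust_features(img, size)
    train = feats[prob > conf].T                  # shape (3, n_samples)
    kde = gaussian_kde(train)
    lik = kde(feats.reshape(-1, 3).T).reshape(img.shape)
    return lik * prob
```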
The LoG of the image at high resolution is visualized in Figure 4 to further emphasize the importance of the choice of the interpolation kernel. In Figure 4, the background gray-scale image is the up-sampled image at high resolution. The yellow curves indicate the zero-crossing regions of the LoG image $e$. The red contour indicates the initial contour obtained from the native-resolution atlas based computation. It can be seen, in the region indicated by the arrow, that the initial contour cuts across the dentate inferior hippocampal surface. In the fine-tuning step, it is expected to evolve and converge to the correct dentations. Moreover, we can see that many surrounding regions share a very similar intensity with the hippocampal region. A purely region-based energy would drive the initial contour to leak into those locations. The edge based energy term, with its smaller attraction region, serves our purpose well here, since the initial atlas based contour is already quite close to the real dentation. However, if the non-monotonic kernel is used, shown on the left, the resulting image contains many superfluous edges. As predicted by the Gibbs ringing effect, these edges are spatially quite close to the real boundary, analogous to the side lobes next to the main lobe in signal processing. Such adjacent edges mislead the edge-based energy term to converge to wrong locations. In contrast, with the monotonic kernel on the right panel, there are far fewer isolated edges attracting the contour evolution, and the contour correctly converges to the desired location.
Figure 4.

The LoG of the images at high resolution. The yellow curves indicate the zero-crossing regions of the LoG image. The red contour indicates the initial contour obtained from the native-resolution atlas based computation. See text for detailed discussion. [Color figure can be viewed at http://wileyonlinelibrary.com]
The entire process, including both the atlas and fine-tuning steps, is fully automated. At convergence, with a proper setting of the weighting factor (which is found to be around 10), the final surface encloses the hippocampus and is able to capture the hippocampal dentations, which are the critical shape features for various morphology studies.
Quantitative Validation and Evaluation
As a preview, the segmentations of two hippocampi are shown in Figure 5. Both images were acquired on a 3 Tesla MR scanner. The dentated structure of the inferior surface in Figure 5F clearly differentiates it from that in Figure 5C. Such morphology, to the best of our knowledge, has not received full attention in segmentations based on 1 mm resolution. Indeed, even the volumetric segmentations traced by physicians following consistent protocols do not reveal the characteristic morphology on the inferior aspect of the hippocampus [Boccardi et al., 2011; Boccardi et al., 2015].
Figure 5.

Hippocampus with smooth inferior surface in (A) full sagittal view, (B) magnified sagittal view with segmentation contour in orange, and (C) 3D surface, inferior view. Hippocampus with prominently dentated (bumpy) inferior surface in (D) sagittal slice, (E) magnified sagittal view with segmentation contour in orange, and (F) 3D surface, inferior view. The ridges that produce the dentated appearance of the hippocampus can be clearly seen in F and are notably absent in C. Comparing C/F with Figure 1H/D, the improvement can be seen. The images were obtained on a 3 Tesla MR scanner. [Color figure can be viewed at http://wileyonlinelibrary.com]
However, a critical question is: are such dentations real, or are they merely artifacts induced by the computational approach? For example, in one-dimensional signal processing, one critical phenomenon to avoid in interpolation is the Gibbs ringing effect. Such an effect introduces "bumpy" artifacts into the original signal. In the present study, it is critical to rule out such artifacts and to quantitatively validate that the morphological features we captured are realistic and accurate.
While dentation can certainly be observed visually in a 3T image, to quantitatively validate the results we have to rely on higher field images where dentation can undoubtedly be captured.
In this section, we design experiments to validate the detected dentation.
Validation Framework
In Yushkevich et al. [2009], researchers obtained five ultra-high resolution hippocampus images from three subjects. Only the postmortem hippocampus region was imaged. The imaging time ranged from 13 hours to 62 hours on a 9.4T ultra-high field scanner. The hippocampi were traced by experts in the five volumes. At such a high resolution (0.2 mm isotropic), the dentation of the hippocampus can unequivocally be observed and captured.
Figure 6 shows two columns of images at varying resolution. The first row has the original resolution (0.2 mm/pixel), and clearly the hippocampus on the left has a prominently dentated inferior surface, whereas the right one is relatively flat. From the first to the fourth row, the resolution decreases from 0.2, 0.4, 0.67, to 1 mm/pixel. The dentations of the hippocampus on the left become noticeably harder to perceive and the fine image texture disappears, while the less complex shape of the hippocampus in the right panel is minimally affected. However, even in the last row of the left panel, we can still visually detect traces of dentation. This conveys a critical message: the dentation information is not totally lost at 1 mm resolution, and the dentate structures we visually appreciate in 3T images truly reflect the same structures seen at ultra-high field. This key observation is the basis of our recovery. However, if the segmentation is also carried out at 1 mm resolution, the voxel size is too large to capture the fine bumpy contour. Instead, to capture the fine dentation information, the segmentation should be performed at a much higher resolution. Indeed, as an illustration, when the 1 mm resolution image in the bottom left panel of Figure 6 is interpolated back to 0.2 mm/pixel, as shown in the right panel of Figure 7, the dentation is visually better observed. Computationally, using segmentation on such a down-sampled-then-up-sampled "high" resolution grid, we are able to obtain morphology closer to the ground truth.
Figure 6.

9.4T images at various levels of resampling. From top to bottom the images are shown at 0.2 (native), 0.4, 0.67, and 1.0 mm per pixel to simulate the effect of lower resolution sampling. The left column shows a prominently dentated hippocampus and the right shows a smooth hippocampus. The bottom row on the left does not show the dentated contour seen in the top image, while less of a difference is apparent on the right.
Figure 7.

The 1 mm resolution image on the left (same as the bottom left panel of Figure 6) is interpolated back to 0.2 mm/pixel on the right. The internal texture is lost, but the dentated appearance of the inferior boundary is visually observed more clearly, particularly when viewed up close.
This is also the approach we take to validate the proposed method. The main idea is: starting from the high-resolution (0.2 mm/pixel) image data, whose validated segmentations are available, we first down-sample them to 1 mm/pixel resolution. The down-sampling is performed using a linear kernel; other choices have been explored but, unlike in the up-sampling step, no difference was observed among them. Then, using the proposed method, we aim to accurately depict the bumpy contour of the hippocampi. The recovered dentation is then compared with the ground truth.
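A minimal sketch of this down-sample-then-super-resolve setup is shown below. The use of scipy.ndimage.zoom and the specific interpolation orders are illustrative assumptions; in particular, the paper's up-sampling uses a monotone cubic kernel, which scipy.ndimage does not provide directly:

```python
import numpy as np
from scipy.ndimage import zoom

def simulate_clinical_resolution(vol_02mm):
    """Down-sample a 0.2 mm ex vivo volume to 1 mm (linear kernel, as in the
    validation protocol), then re-interpolate to 0.2 mm for super-resolution
    segmentation and comparison against the validated ground truth."""
    low = zoom(vol_02mm, 1.0 / 5.0, order=1)   # 0.2 mm -> 1.0 mm, linear
    back = zoom(low, 5.0, order=3)             # 1.0 mm -> 0.2 mm, cubic B-spline
    return low, back
```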
However, there is a major obstacle to applying and validating the proposed segmentation method on this data set: the 9.4T images are acquired only around the hippocampus area, so the multi-atlas scheme based on whole-brain training images is not directly applicable. To solve this problem and use the small field-of-view (FOV) high-resolution images in our validation, we adapt the training images to the same FOV as that of the 9.4T images.
To proceed, denote the five high-resolution 9.4T images as $H_i,\ i = 1, \dots, 5$, acquired at 0.2 mm resolution. The validated segmentations are also provided in [Yushkevich et al., 2009], and they are denoted as $M_i,\ i = 1, \dots, 5$. Then, the $H_i$ are registered to their average image $\bar{H}$ over the similarity transformation minimizing the mean-square-error metric. After the registration, the average image is computed again from the registered $H_i$ and the registration is performed again. Such an iteration converges within a few rounds, yielding a final average image. With a slight abuse of notation, the final average image is still denoted as $\bar{H}$.
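The iterative template construction can be sketched as follows; `register_similarity(moving, fixed)` is a hypothetical helper (for example, wrapping a similarity-transform registration in SimpleITK or ANTs) that returns the moving volume resampled into the fixed volume's space:

```python
import numpy as np

def build_average_template(volumes, register_similarity, n_iters=5):
    """Register each 9.4T volume to the running average under a similarity
    transform (minimizing mean-square error), re-average, and repeat until
    the average image stabilizes."""
    avg = np.mean(np.stack(volumes), axis=0)
    for _ in range(n_iters):
        warped = [register_similarity(vol, avg) for vol in volumes]
        avg = np.mean(np.stack(warped), axis=0)
    return avg
```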
After that, each training image is registered to $\bar{H}$ over the similarity transformation minimizing the mean-square-error metric. Once the registration optimization converges, the registered training MR images and their corresponding label images are cropped to the FOV of $\bar{H}$. In this way, all the training images and training segmentations are defined on a similar FOV as the high-resolution 9.4T MRIs, which enables the application of the proposed algorithm to them.
Visual Assessment
Each 9.4T image is down-sampled to 1 mm/pixel resolution. One such slice is shown in Figure 8C. After adapting the training images as described above, the proposed algorithm is applied to the down-sampled image. The result is shown in Figure 8D. Compared with the validated segmentation at the original resolution, shown in Figure 8B, the dentation on the inferior surface is well preserved.
Figure 8.

Segmentation on the 9.4T data. (A) 9.4T image as in (Yushkevich et al., 2009), in which the dentation can be clearly seen. (B) Original image with the validated segmentation of (Yushkevich et al., 2009), in which the dentation is correctly captured. (C) Down-sampled version of the original image at 1 mm resolution, then interpolated back to 0.2 mm resolution. The internal texture can hardly be discerned; however, the dentation on the inferior surface of the hippocampus can still be seen. (D) The proposed segmentation on C. The dentation is captured and is highly comparable to the original segmentation in B. [Color figure can be viewed at http://wileyonlinelibrary.com]
Quantitative Evaluation
Two types of quantities are often used for evaluating segmentation accuracy: one is based on volumetric overlap, such as the Dice coefficient [Dice, 1945]; the other represents point-wise distance measures, such as variations of the Hausdorff distance [Hausdorff, 1962]. In this study, since the primary interest is to capture the bumpy morphology on the inferior aspect of the hippocampus, and the Dice coefficient is not sensitive to such features, we instead measure the largest distance from the manual segmentation to the algorithm output in the sagittal slices.
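For reference, the two kinds of measures can be computed from a binary overlap and from surface point sets, respectively. The sketch below assumes the surfaces are given as N x 3 arrays of vertex coordinates in millimeters:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice overlap between two binary volumes."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(surf_a, surf_b):
    """Symmetric Hausdorff distance (mm) between two surface vertex sets."""
    return max(directed_hausdorff(surf_a, surf_b)[0],
               directed_hausdorff(surf_b, surf_a)[0])
```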
Denote the surface of the reference segmentation as $S^{\mathrm{ref}}$. The proposed segmentation is performed on the down-sampled 1 mm resolution images, and the respective surfaces $S^{\mathrm{SR}}$ are recorded. As a comparison, the proposed segmentation without the super-resolution step is also performed directly at the 1 mm resolution, yielding surfaces $S^{\mathrm{1mm}}$. The Hausdorff distance is measured between $S^{\mathrm{ref}}$ and each of the resulting surfaces, and the values are shown in Table 1. The unit of distance is mm.
Table 1.
The Hausdorff distance (mm) measured between the reference segmentation surface $S^{\mathrm{ref}}$ and the surfaces from the proposed method, with and without the super-resolution (SR) step

| Surface | 1R | 2L | 2R | 3L | 3R |
|---|---|---|---|---|---|
| $S^{\mathrm{1mm}}$ (without SR) | 2.1 | 1.9 | 1.5 | 1.8 | 2.0 |
| $S^{\mathrm{SR}}$ (with SR) | 0.7 | 0.3 | 0.6 | 0.2 | 0.4 |

With the super-resolution, the largest surface discrepancies are correctly reduced. "R" and "L" mean right and left, respectively, and 1, 2, or 3 refer to the subject.
As a comparison, the Dice coefficients with and without the super‐resolution are also computed, as shown in Table 2. Though there are only five cases, it can be observed that the Dice coefficients do not fluctuate much. This indicates that the proposed super‐resolution (SR) scheme is valuable in capturing and reconstructing fine detailed morphology, while the overall volumetric accuracy is not the main objective of the proposed scheme.
Table 2.
The Dice coefficients between the reference segmentation volume and the proposed method, with and without the super‐resolution (SR) step
| Segmentation | 1R | 2L | 2R | 3L | 3R |
|---|---|---|---|---|---|
| without SR | 0.81 | 0.80 | 0.82 | 0.78 | 0.81 |
| with SR | 0.81 | 0.81 | 0.80 | 0.79 | 0.82 |
In one case (2R), the Dice even drops. This is due to the fact that in the high‐resolution fine tuning steps, the contour slightly leaks into the non‐hippocampal region.
Hosseini et al. provided a comprehensive evaluation of the hippocampus segmentation algorithms [Hosseini et al., 2014, 2016, 2015]. They categorize various metrics into three groups. The three groups of metrics base their evaluation on, respectively, voxel, distance, and volume. To better characterize the segmentation with and without the super resolution scheme, we also compute the evaluation metrics in [Hosseini et al., 2014, 2016, 2015], shown in Table 3.
Table 3.
Various metrics (Hosseini et al., 2014, 2016, 2015) between the reference segmentation volume and the proposed method, with and without the super‐resolution (SR) step
| ID | Method | Similarity | Precision | RMS | MD | Sensitivity | RAVD |
|---|---|---|---|---|---|---|---|
| 1R | SR | 0.70 | 0.89 | 0.44 | 0.28 | 0.86 | 0.06 |
| 1R | w/o SR | 0.68 | 0.85 | 0.72 | 0.40 | 0.84 | 0.07 |
| 2L | SR | 0.68 | 0.83 | 0.28 | 0.20 | 0.81 | −0.07 |
| 2L | w/o SR | 0.66 | 0.77 | 0.77 | 0.56 | 0.78 | −0.08 |
| 2R | SR | 0.71 | 0.77 | 0.48 | 0.30 | 0.76 | 0.10 |
| 2R | w/o SR | 0.72 | 0.79 | 0.69 | 0.40 | 0.78 | 0.08 |
| 3L | SR | 0.65 | 0.79 | 0.17 | 0.10 | 0.73 | 0.10 |
| 3L | w/o SR | 0.62 | 0.78 | 1.1 | 0.59 | 0.71 | 0.11 |
| 3R | SR | 0.69 | 0.83 | 0.37 | 0.19 | 0.81 | −0.07 |
| 3R | w/o SR | 0.70 | 0.81 | 0.99 | 0.56 | 0.80 | −0.07 |
Among the metrics, it can be observed that the metrics based on voxels and volumes are less sensitive to the SR than those based on the distances. This correctly reflects the characteristics that the fine‐scale dentate structures do not significantly alter the volume of the segmentation structure, yet they significantly change the surface distance due to their ridges/valleys. The observation is consistent with the comparison based on the Dice and Hausdorff metrics shown in Tables 1 and 2.
EXPERIMENTS ON 3T CLINICAL DATA
In this section, we apply the proposed algorithm to extract the fine‐scale hippocampal dentation from epilepsy patients and AD patients.
Epilepsy Data and Neurologist's Visual Assessment
Six scans were selected from an existing IRB-approved database of clinical epilepsy patient scans maintained by one of the authors (LV) at the Epilepsy Center of the University of Alabama at Birmingham. Epilepsy patients are of particular interest because of the prevalence of temporal lobe epilepsy and unilateral hippocampal atrophy/hippocampal sclerosis in this population, hence the use of this database as a source for test scans, including those with symmetric appearing hippocampi, for the sake of uniformity of scan acquisition. All scans were acquired on a single 3T Philips Achieva platform (Philips Healthcare, Eindhoven, the Netherlands) with an 8-channel head coil. A common T1-weighted MPRAGE sequence was used with 1 mm resolution in the sagittal plane (FOV 256 mm) and 1.2 mm thick slices for both visual evaluation and analysis. Basic sequence parameters include a TR of 7 ms, a TE of 3.3 ms, and a flip angle of 8°.
Visual review of gray‐scale MR images for assessment of dentation was done in OsiriX by scrolling through all the sagittal slices of each hippocampus; due to the curvilinear shape of the hippocampus, no single sagittal plane is capable of completely capturing dentation. Default automatic interpolation for zooming was turned on as is the typical practice in clinical imaging review [Rosset et al., 2004].
A large number of scans were visually reviewed by a board-certified clinical neuroimaging expert, and those included in this study were individually selected as representative examples of three groups of hippocampal appearance: 1. prominent hippocampal dentation bilaterally ("bumpy"), 2. virtually no dentation bilaterally ("smooth"), and 3. asymmetric dentation. Figures 9 and 10 show the reconstructed surfaces of the extracted hippocampi. Subjects A–C are the asymmetric group, subjects D and E are the bumpy group, and subjects F and G are the smooth group. Subjects ranged in age from 23 to 58 years with no particular distribution between groups, and all subjects but one were female.
Figure 9.

Inferior view of six right hippocampi. A, B: asymmetry group; C, D: bumpy group; E, F: smooth group. The arrowheads indicate the prominent dentes and their approximate orientations. Note the visible dentation in A and B on this side. [Color figure can be viewed at http://wileyonlinelibrary.com]
Figure 10.

Inferior view of six right hippocampi. A, B: asymmetry group; C, D: bumpy group; E, F: smooth group. The arrowheads indicate the prominent dentations and their approximate orientations. Note the lack of dentation in A and B on this side. [Color figure can be viewed at http://wileyonlinelibrary.com]
Regarding the diagnostic interpretation of the scans, subjects B and C (asymmetric group) showed mild to moderate right hippocampal atrophy and T2 signal hyperintensity on coronal images in the clinical imaging protocol, which included coronal FLAIR and high-resolution coronal T2w sequences (not shown here). Subject D (bumpy group) had a right parietal trans-mantle cortical dysplasia, but no abnormality affecting the hippocampi; the remainder of the subjects' scans were unremarkable.
The proposed algorithm configuration was developed independently of the reviewer's classification of each of the 14 hippocampi analyzed, but visualization of the resulting surfaces shows that they compare favorably, as seen in Figures 9 and 10.
Regarding classification of the degree of dentation, the subjects in this study were chosen as clear examples of the morphologic variation that exists across individuals, based on the clinical imaging experience of the reviewer. Two such cases, one with a bumpy appearance and the other with a smooth appearance, are detailed in Figure 5 mentioned above. Certainly there is a spectrum of degrees of "bumpiness" between the few examples used in this study, and the categories used herein are not intended to be comprehensive or exhaustive. Rather, they are intended simply to be illustrative. It is also important to note that, based on our experience, very bumpy and very smooth hippocampi are commonly seen in the normal population, though striking degrees of asymmetry as seen in our asymmetric group are uncommon in the absence of hippocampal pathology.
ADNI Hippocampus Data
The proposed method is applied to all the 3T images in the hippocampus segmentation project of [Boccardi et al., 2015]. In this data set, all the hippocampi have been segmented and validated by human experts. However, although such reference segmentations have been validated, because the segmentation is only performed and recorded at the native resolution, certain morphological features are inevitably lost. Surface renderings from the native resolution segmentations are shown in the top rows of Figures 11 and 12. In particular, from the binary label images, the marching cubes algorithm [Lorensen and Cline, 1987] is used to extract the surface within 3D Slicer. Laplacian smoothing is applied for 10 iterations simply to avoid the staircase effect, and no triangle decimation is performed.
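The surface extraction for Figures 11 and 12 was done in 3D Slicer; an equivalent sketch with scikit-image is shown below (the smoothing step is omitted here, and the isovalue of 0.5 is the natural choice for a binary label volume):

```python
import numpy as np
from skimage import measure

def label_to_surface(label_volume, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangulated surface from a binary label volume with the
    marching cubes algorithm; `spacing` carries the voxel size in mm so the
    vertices are returned in physical coordinates."""
    verts, faces, normals, values = measure.marching_cubes(
        label_volume.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces
```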
Figure 11.

Four right hippocampi from the ADNI data set. The top row shows surface renderings from the validated segmentations in the native resolution; the bottom row shows surfaces from the proposed super‐resolution method. Panel A shows a hippocampus with a smooth inferior surface, which shows little difference between rows, whereas the other three panels show hippocampi with prominent dentations that are much more clearly seen in the bottom row. [Color figure can be viewed at http://wileyonlinelibrary.com]
Figure 12.

Four left hippocampi from the ADNI data set. The top row shows surface renderings from the validated segmentations in the native resolution; the bottom row shows surfaces from the proposed super-resolution method. Panel A shows a hippocampus with a smooth inferior surface, which shows little difference between rows, whereas the other three panels show hippocampi with prominent dentations that are much more clearly seen in the bottom row, as in Figure 11. [Color figure can be viewed at http://wileyonlinelibrary.com]
Contrastingly, the bottom rows in Figures 11 and 12 show the surfaces extracted from the same procedure and parameters, but from the proposed super‐resolution method.
We can observe from the comparison that hippocampus A in both figures has a rather smooth (not bumpy) inferior surface. In such a situation, both the top (native resolution) and bottom (super-resolution) rows correctly reflect this morphology.
However, for the other three hippocampi (not necessarily paired), while prominent dentation can be seen in the results of the proposed method, it is only marginally observable in the surfaces extracted from the native resolution expert-validated segmentations. It is reassuring to observe that the surfaces from the native resolution do capture, to a certain degree, the largest dents in Figures 11B,D and 12C,D.
This clearly demonstrates that, although the reference segmentation is the current reference standard, due to the limitation that it is performed and recorded in the native image resolution, the resulting segmentation is not able to reflect certain morphologic features that are indeed captured by the imaging devices. On the other hand, by creatively extending the segmentation to the sub‐pixel space, the important features of hippocampal morphology can be correctly reconstructed.
CONCLUSION AND DISCUSSION
We present a segmentation scheme for the hippocampus that reveals subtle surface morphological features unique to the hippocampus. The proposed method enables delineation of surface features that are often overlooked and not well depicted with segmentation performed in the native resolution. This analysis was based on a sequence commonly collected in clinical and research protocols. While ultra‐high resolution images obtained in vivo at 7T or ex vivo at 9.4T would be ideal, the lack of access to these scanners and the non‐trivial nature of obtaining such images with high quality severely limits their use to the broader neuroscience community. By contrast, most large hospitals and all major research centers in North America have access to 3T.
Evidently, the dentation of the hippocampus also significantly increases the hippocampal surface area and CA1 volume. This dentational structure is spatially close to the dentate gyrus, which is known to contribute to the formation of new episodic memories and, more importantly, is one of a select few brain structures with high rates of neurogenesis after birth. As a result, the study of the dentation structure on the hippocampal surface may reveal the meso-scale effect of neurogenesis in adulthood. This opens up numerous possibilities for future research in hippocampal surface analysis correlating the degree of dentation with a variety of clinical parameters in a range of common diseases known to involve the hippocampus, including epilepsy, AD, and schizophrenia, for which publicly available databases of 3T images already exist.
Ongoing research includes applying the algorithm to larger sets of data, quantifying the degree of dentation, and correlating that with various physiological, psychological, and psychiatric conditions.
Moreover, several issues arising in the presented research are discussed further below.
How High Resolution Is Sufficient?
In this work, the resolution of the dense image grid was chosen to be 0.1 mm isotropic. This was determined empirically by balancing the computational load against the necessity of revealing the features of dentation. First, for 9.4T images with a resolution of 0.2 mm per pixel, the dentations can be reconstructed from the binary segmentation volume; therefore 0.2 mm per pixel could be sufficient. On the other hand, it was observed in the experiments that the dentation can be better captured when the density is further increased to 0.1 mm per pixel. However, further increasing the density does not increase the performance, as measured by the Hausdorff distance.
Theoretically, the Nyquist sampling theorem dictates that the sampling frequency should be at least twice the highest frequency in the original signal. Individual dentes commonly have a width of approximately 2 mm, corresponding to a wavelength of 4 mm, so Nyquist would predict that a sampling interval finer than 2 mm/pixel would be sufficient. On one hand, this supports the notion that the native resolution used in this study may capture dentation information. On the other hand, it says little about how such captured information can be correctly interpreted by the subsequent segmentation to reconstruct the bumps, which is the main topic of the present report.
Moreover, the proposed method is performed on a rectangular grid; therefore, the total number of samples is cubic with respect to the resolution. This, however, could be reduced if the processing algorithm were performed on a graph constructed to be dense only along the boundary. On the other hand, it is noted that numerical processing on a rectangular grid is often more stable. Indeed, boundary computation using level sets on a grid enjoys more numerical advantages than its original formulation on parametric curves/surfaces. In our on-going research, we are extending our previous shortest-path based algorithm [Zhu et al., 2014] to such a graph to improve the computational efficiency of the proposed method.
It is also worth noting that super-resolution techniques have been studied in previous reports, in particular for boosting the resolution of images acquired at lower resolution; see [Bahrami et al., 2016; Gao et al., 2012a; Tian and Ma, 2011; Yang et al., 2010; Yu et al., 2012; Zeyde et al., 2012c] and the references therein.
While generic super-resolution schemes provide exciting results, in this study we approached the problem in a novel way, for three reasons. First, through our evaluation we discovered that we need a 10× super-resolution ratio, whereas most existing super-resolution methods use a ratio less than 5; above that, numerical stability may become an issue. Second, it is too computationally heavy, even in 2D, not to mention in 3D, to boost the resolution ten times in all directions with standard approaches. Finally, in this study we focus in particular on the morphological contour of a specific structure, whereas the purpose of general super-resolution methods is the entire textural content of the image, not just the boundaries of structures. Because of these considerations, the proposed scheme is designed to balance morphological accuracy and computational complexity.
Quantification of Dentation
With the capability of accurately and robustly capturing the hippocampal dentation, the next question is how to quantitatively analyze the overall degree of dentation, in terms of the number and depth of the dentes, and correlate it with various physiological, psychometric, and diagnostic parameters. To that end, in future work we can leverage surface parameterization frameworks based on conformal mapping [Angenent et al., 1999; Gao et al., 2006], graph theory [Gelas and Gouaillard, 2007], as well as optimal transportation [Haker et al., 2004; Sandhu et al., 2012; Su et al., 2015]. Once the surface is parameterized on a regular domain, the geometrical and statistical features of dentation can be quantified through a multi-scale approach [Gao et al., 2007; Schröder and Sweldens, 1995], which is able to characterize the bumps at specific sizes and scales.
Validation in the Era of Big Data
In Section 3.3, a quantitative evaluation was performed to validate the results. We further detailed the difficulty and challenge of validation on such a dense and large data set. Indeed, the enabling factor for such a validation still relies on the seminal work of Yushkevich et al. [2009], which is labor intensive and includes only five public cases. Subsequent results, including those on epilepsy and AD, have largely been assessed visually without quantitative validation.
In most existing reports evaluating segmentation accuracy at the native resolution (∼1 mm isotropic), the imaging reviewers only have to manually contour roughly 30 slices to cover the entire hippocampus. While this process is already time consuming and tedious, it is still manageable. Fortunately, the results of these efforts are publicly available to the community in several outstanding open data sets, such as [Boccardi et al., 2015], which includes more than one hundred expert‐validated segmentations.
By contrast, to validate the fine‐detailed segmentation/morphology in the present work, one has to contour ten times as many slices. In addition, in each slice, much greater precision is required to delineate the shape detail. Essentially, contouring each volume is comparable in effort to that in [Yushkevich et al., 2009], and far fewer data sets can be manually contoured in a given amount of time.
This poses a general problem for the validation of image computing in the era of big data. Previously, human annotation has always been considered the reference standard against which any computer‐based algorithm must be compared. Consequently, computer‐aided segmentation has only facilitated or approximated human contouring at clinical resolution settings. Unfortunately, the complexity and size of data sets have increased to the extent that human evaluation cannot feasibly meet the need for validation, either quantitatively or qualitatively. At such high resolution and data quantity, the paradigm has shifted: computer aid is no longer merely facilitating, but has become indispensable.
One example is statistical shape analysis, where group differences are computed from two sets of complex geometric shapes. There, the results are inherently not assessable by human observers. Recently, this issue has been addressed by designing an algorithm to validate other algorithms [Gao et al., 2014], and a newly designed algorithm can now be quantitatively validated against algorithm‐generated, rather than human‐generated, “ground truth” [Gao and Bouix, 2016]. Similarly, in digital pathology, the segmentation of millions of nuclei in a single whole‐slide histopathology scan cannot feasibly be checked by any single human effort, and a computational approach is necessary to aid the process [Zhou et al., 2017].
Inspired by those ideas, adopting such algorithm‐generated data sets as the “ground truth” reference may be a feasible solution for validating fine‐detailed image segmentation. However, how such data sets should be designed, and how bias in the validation can be avoided, remain important yet unsolved topics for future research.
ACKNOWLEDGMENTS
The authors would like to thank the anonymous referees for their helpful comments and suggestions that led to a significant improvement of the paper. This work was supported by the National Natural Science Foundation of China No. 61601302.
REFERENCES
- Angenent S, Haker S, Tannenbaum A, Kikinis R (1999): On the laplace‐beltrami operator and brain surface flattening. IEEE Trans Med Imaging 18:700. [DOI] [PubMed] [Google Scholar]
- Apostolova LG, Dinov ID, Dutton RA, Hayashi KM, Toga AW, Cummings JL, Thompson PM (2006): 3D comparison of hippocampal atrophy in amnestic mild cognitive impairment and Alzheimer's disease. Brain 129:2867–2873. [DOI] [PubMed] [Google Scholar]
- Avants BB, Tustison N, Song G (2009): Advanced normalization tools (ANTS). Insight J 2:1–35. [Google Scholar]
- Bahrami K, Shi F, Rekik I, Shen D (2016): Convolutional neural network for reconstruction of 7T‐like images from 3T MRI using appearance and anatomical features. In: Deep Learning and Data Labeling for Medical Applications (LABELS 2016 and DLMIA 2016, held in conjunction with MICCAI 2016, Athens, Greece). Lecture Notes in Computer Science, vol. 10008. Springer, pp. 39–47.
- Beresford TP, Arciniegas DB, Alfers J, Clapp L, Martin B, Du Y, Liu D, Shen D, Davatzikos C (2006): Hippocampus volume loss due to chronic heavy drinking. Alcoholism: Clin Exp Res 30:1866–1870. [DOI] [PubMed] [Google Scholar]
- Bishop CA, Jenkinson M, Andersson J, Declerck J, Merhof D (2011): Novel Fast Marching for Automated Segmentation of the Hippocampus (FMASH): Method and validation on clinical data. Neuroimage 55:1009–1019. [DOI] [PubMed] [Google Scholar]
- Bobinski M, Wegiel J, Wisniewski HM, Tarnawski M, Mlodzik B, Reisberg B, de Leon MJ, Miller DC (1995): Atrophy of hippocampal formation subdivisions correlates with stage and duration of Alzheimer disease. Dement Geriatr Cogn Disord 6:205–210. [DOI] [PubMed] [Google Scholar]
- Boccardi M, Bocchetta M, Morency FC, Collins DL, Nishikawa M, Ganzola R, Grothe MJ, Wolf D, Redolfi A, Pievani M, Antelmi L, Fellgiebel A, Matsuda H, Teipel S, Duchesne S, Jack CR Jr, Frisoni GB (2015): Training labels for hippocampal segmentation based on the EADC‐ADNI harmonized hippocampal protocol. Alzheimer's Dement 11:175–183. [DOI] [PubMed] [Google Scholar]
- Boccardi M, Ganzola R, Bocchetta M, Pievani M, Redolfi A, Bartzokis G, Camicioli R, Csernansky JG, de Leon MJ, deToledo‐Morrell L, Killiany RJ, Lehéricy S, Pantel J, Pruessner JC, Soininen H, Watson C, Duchesne S, Jack CR Jr, Frisoni GB (2011): Survey of protocols for the manual segmentation of the hippocampus: Preparatory steps towards a joint EADC‐ADNI harmonized protocol. J Alzheimers Dis 26(s3):61–75. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Botev ZI, Grotowski JF, Kroese DP (2010): Kernel density estimation via diffusion. Ann Stat 38:2916–2957. [Google Scholar]
- Boyd S, Lieven V (2004): Convex Optimization. Cambridge University Press. [Google Scholar]
- Burkard R, Dell'Amico M, Martello S (2009): Assignment Problems. SIAM. [Google Scholar]
- Carmichael OT, Aizenstein HA, Davis SW, Becker JT, Thompson PM, Meltzer CC, Liu Y (2005): Atlas‐based hippocampus segmentation in Alzheimer's disease and mild cognitive impairment. Neuroimage 27:979–990. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Caselles V, Kimmel R, Sapiro G (1997): Geodesic active contours. Int J Comput Vision 22:61–79. [Google Scholar]
- Chupin M, Gérardin E, Cuingnet R, Boutet C, Lemieux L, Lehéricy S, Benali H, Garnero L, Colliot O (2009a): Fully automatic hippocampus segmentation and classification in Alzheimer's disease and mild cognitive impairment applied on data from ADNI. Hippocampus 19:579–587. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chupin M, Hammers A, Liu RSN, Colliot O, Burdett J, Bardinet E, Duncan JS, Garnero L, Lemieux L (2009b): Automatic segmentation of the hippocampus and the amygdala driven by hybrid constraints: Method and validation. Neuroimage 46:749–761. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chupin M, Mukuna‐Bantumbakulu AR, Hasboun D, Bardinet E, Baillet S, Kinkingnéhun S, Lemieux L, Dubois B, Garnero L (2007): Anatomically constrained region deformation for the automated segmentation of the hippocampus and the amygdala: Method and validation on controls and patients with Alzheimer's disease. Neuroimage 34:996–1019. [DOI] [PubMed] [Google Scholar]
- Collins DL, Pruessner JC (2010): Towards accurate, automatic segmentation of the hippocampus and amygdala from MRI by augmenting ANIMAL with a template library and label fusion. NeuroImage 52:1355–1366. [DOI] [PubMed] [Google Scholar]
- Colliot O, Chételat G, Chupin M, Desgranges B, Magnin B, Benali H, Dubois B, Garnero L, Eustache F, Lehéricy S (2008): Discrimination between Alzheimer disease, mild cognitive impairment, and normal aging by using automated segmentation of the hippocampus. Radiology 248:194–201. [DOI] [PubMed] [Google Scholar]
- Coupe P, Eskildsen SF, Manjon JV, Fonov V, Collins DL (2011): Simultaneous segmentation and grading of hippocampus for patient classification with Alzheimer's disease. Med Image Comput Comput Assist Interv 14:149–157. [DOI] [PubMed] [Google Scholar]
- Coupe P, Manjon JV, Fonov V, Pruessner J, Robles M, Collins DL (2010): Nonlocal patch‐based label fusion for hippocampus segmentation. Med Image Comput Comput Assist Interv 13:129–136. [DOI] [PubMed] [Google Scholar]
- Coupe P, Manjon JV, Fonov V, Pruessner J, Robles M, Collins DL (2011, Jan): Patch‐based segmentation using expert priors: Application to hippocampus and ventricle segmentation. Neuroimage 54:940–954. [DOI] [PubMed] [Google Scholar]
- Csernansky JG, Joshi S, Wang L, Haller JW, Gado M, Miller JP, Grenander U, Miller MI (1998): Hippocampal morphometry in schizophrenia by high dimensional brain mapping. Proc Natl Acad Sci 95:11406–11411. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Derix J, Yang S, Lüsebrink F, Fiederer LDJ, Schulze‐Bonhage A, Aertsen A, Speck O, Ball T (2014): Visualization of the amygdalo–hippocampal border and its structural variability by 7T and 3T magnetic resonance imaging. Hum Brain Mapp 35:4316–4329. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dice LR (1945): Measures of the amount of ecologic association between species. Ecology 26:297–302. [Google Scholar]
- Dong C, Loy CC, He K, Tang X (2014): Image super‐resolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intell 38:295–307. [DOI] [PubMed] [Google Scholar]
- Duvernoy HM (2005): The Human Hippocampus: Functional Anatomy, Vascularization and Serial Sections with MRI. Springer. [Google Scholar]
- Fleisher AS, Sun S, Taylor C, Ward CP, Gamst AC, Petersen RC, Jack CR Jr, Aisen PS, Thal LJ, and For the Alzheimer's Disease Cooperative Study (2008): Volumetric MRI vs clinical predictors of Alzheimer disease in mild cognitive impairment. Neurology 70:191–199. [DOI] [PubMed] [Google Scholar]
- Frankó E, Joly O, for the Alzheimer's Disease Neuroimaging Initiative (2013): Evaluating Alzheimer's disease progression using rate of regional hippocampal atrophy. PloS One 8:e71354. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fritsch FN, Carlson RE (1980): Monotone piecewise cubic interpolation. SIAM J Numer Anal 17:238–246. [Google Scholar]
- Gao X, Zhang K, Tao D, Li X (2012a): Image super‐resolution with sparse neighbor embedding. IEEE Trans Image Process 21:3194–3205. [DOI] [PubMed] [Google Scholar]
- Gao Y, Corn B, Schifter D, Tannenbaum A (2012b): Multiscale 3D shape representation and segmentation with applications to hippocampal/caudate extraction from brain MRI. Med Image Anal 16:374–385. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gao Y, Kikinis R, Bouix S, Shenton M, Tannenbaum A (2012c): A 3D interactive multi‐object segmentation tool using local robust statistics driven active contours. Med Image Anal 16:1216–1227. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gao Y, Bouix S (2016): Statistical shape analysis using 3D Poisson equation—A quantitatively validated approach. Med Image Anal 30:72–84. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gao Y, Tannenbaum A (2010): Image processing and registration in a point set representation. Proc SPIE 7623:762308–762316. [Google Scholar]
- Gao Y, Melonakos J, Tannenbaum A (2006): Conformal Flattening ITK Filter. MICCAI Workshop on Open Source, Copenhagen, Denmark/Insight Journal. [Google Scholar]
- Gao Y, Nain D, LeFaucheur X, Tannenbaum A (2007): Spherical wavelet itk filter. MICCAI. [Google Scholar]
- Gao Y, Riklin‐Raviv T, Bouix S (2014): Shape analysis, a field in need of careful validation. Hum Brain Mapp 35:4965–4978. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gao Y, Zhu L, Cates J, MacLeod RS, Bouix S, Tannenbaum A (2015): A Kalman filtering perspective for multi‐atlas segmentation. SIAM J Imaging Sci 8:1007–1029. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gelas A, Gouaillard A (2007): Parameterization of discrete surfaces. Insight J. [Google Scholar]
- Ghanei A, Soltanian‐Zadeh H, Windham JP (1998): Segmentation of the hippocampus from brain MRI using deformable contours. Comput Med Imaging Graph 22:203–216. [DOI] [PubMed] [Google Scholar]
- Haker S, Zhu L, Tannenbaum A, Angenent S (2004): Optimal mass transport for registration and warping. Int J Comput Vision 60:225–240. [Google Scholar]
- Hao Y, Wang T, Zhang X, Duan Y, Yu C, Jiang T, Fan Y (2014): Local label learning (LLL) for subcortical structure segmentation: Application to hippocampus segmentation. Hum Brain Mapp 35:2674–2697. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hausdorff F (1962): Set theory. Chelsea Pub Co, American Mathematical Soc. Vol. 119. [Google Scholar]
- Hayes K, Buist R, Vincent TJ, Thiessen JD, Zhang Y, Zhang H, Wang J, Summers AR, Kong J, Li X‐M, Martin M (2014): Comparison of manual and semi‐automated segmentation methods to evaluate hippocampus volume in APP and PS1 transgenic mice obtained via in vivo magnetic resonance imaging. J Neurosci Methods 221:103–111. [DOI] [PubMed] [Google Scholar]
- Henry TR, Chupin M, Lehéricy S, Strupp JP, Sikora MA, Sha ZY, Ugurbil K, Van de Moortele P‐F (2011): Hippocampal sclerosis in temporal lobe epilepsy: findings at 7T. Radiology 261:199. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hosseini MP, Nazem‐Zadeh MR, Pompili D, Soltanian‐Zadeh H (2014): Statistical Validation of Automatic Methods for Hippocampus Segmentation in MR Images of Epileptic Patients, 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, USA, Aug. 26–30. [DOI] [PMC free article] [PubMed]
- Hosseini MP, Nazem‐Zadeh MR, Pompili D, Jafari‐Khouzani K, Elisevich K, Soltanian‐Zadeh H (2016): Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients. Med Phys 43:538–553. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hosseini MP, Nazem‐Zadeh MR, Pompili D, Jafari‐Khouzani K, Elisevich K, Soltanian‐Zadeh H (2015): Automatic and Manual Segmentation of Hippocampus in Epileptic Patients MRI, 6th annual New York Medical Imaging Informatics Symposium (NYMIIS), New York, USA.
- Hu S, Coupe P, Pruessner JC, Collins DL (2011): Appearance‐based modeling for segmentation of hippocampus and amygdala using multi‐contrast MR imaging. Neuroimage 58:549–559. [DOI] [PubMed] [Google Scholar]
- Ibanez L, Schroeder W, Ng L, Cates J (2005): The ITK software guide.
- Khan AR, Cherbuin N, Wen W, Anstey KJ, Sachdev P, Beg MF (2011): Optimal weights for local multi‐atlas fusion using supervised learning and dynamic information (SuperDyn): Validation on hippocampus segmentation. Neuroimage 56:126–139. [DOI] [PubMed] [Google Scholar]
- Kichenassamy S, Kumar A, Olver P, Tannenbaum A, Yezzi A (1996): Conformal curvature flows: From phase transitions to active vision. Arch Ration Mech Anal 134:275–301. [Google Scholar]
- Kim M, Wu G, Li W, Wang L, Son Y‐D, Cho Z‐H, Shen D (2013): Automatic hippocampus segmentation of 7.0 Tesla MR images by combining multiple atlases and auto‐context models. NeuroImage 83:335–345. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Konrad C, Ukas T, Nebel C, Arolt V, Toga AW, Narr KL (2009): Defining the human hippocampus in cerebral magnetic resonance images–an overview of current segmentation protocols. Neuroimage 47:1185–1195. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kraguljac NV, White DM, Reid MA, Lahti AC (2013): Increased Hippocampal Glutamate and Volumetric Deficits in Unmedicated Patients With Schizophrenia. JAMA Psychiatry 70:1294–1302. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kwak K, Yoon U, Lee D‐K, Kim GH, Seo SW, Na DL, Shim H‐J, Lee J‐M (2013): Fully‐automated approach to hippocampus segmentation using a graph‐cuts algorithm combined with atlas‐based segmentation and morphological opening. Magn Reson Imaging 31:1190–1196. [DOI] [PubMed] [Google Scholar]
- Lorensen WE, Cline HE (1987): Marching cubes: A high resolution 3D surface construction algorithm. ACM Siggraph Comput Graphics 21:163–169. [Google Scholar]
- Luo ZR, Zhuang XJ, Zhang RZ, Wang JQ, Yue C, Huang X (2014): Automated 3D segmentation of hippocampus based on active appearance model of brain MR images for the early diagnosis of Alzheimer's disease. Minerva Med 105:157–165. [PubMed] [Google Scholar]
- Morey RA, Petty CM, Xu Y, Pannu Hayes J, Wagner HR, Lewis DV, LaBar KS, Styner M, McCarthy G (2009): A comparison of automated segmentation and manual tracing for quantifying hippocampal and amygdala volumes. Neuroimage 45:855–866. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Morra JH, Tu Z, Apostolova LG, Green AE, Avedissian C, Madsen SK, Parikshak N, Hua X, Toga AW, Jack CR, Schuff N, Weiner MW, Thompson PM (2009): Automated 3D mapping of hippocampal atrophy and its clinical correlates in 400 subjects with Alzheimer's disease, mild cognitive impairment, and elderly controls. Hum Brain Mapp 30:2766–2788. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Morra JH, Tu Z, Apostolova LG, Green AE, Avedissian C, Madsen SK, Parikshak N, Toga AW, Jack CR, Schuff N, Weiner MW, Thompson PM (2009): Automated mapping of hippocampal atrophy in 1‐year repeat MRI data from 490 subjects with Alzheimer's disease, mild cognitive impairment, and elderly controls. NeuroImage 45(Sup 1):S3–S15. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nestor SM, Gibson E, Gao FQ, Kiss A, Black SE (2012): A direct morphometric comparison of five labeling protocols for multi‐atlas driven automatic segmentation of the hippocampus in Alzheimer's disease. Neuroimage 66C:50–70. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pipitone J, Park MT, Winterburn J, Lett TA, Lerch JP, Pruessner JC, Lepage M, Voineskos AN, Mallar Chakravarty M (2014): Multi‐atlas segmentation of the whole hippocampus and subfields using multiple automatically generated templates. Neuroimage 101:494–512. [DOI] [PubMed] [Google Scholar]
- Pluta J, Avants BB, Glynn S, Awate S, Gee JC, Detre JA (2009): Appearance and incomplete label matching for diffeomorphic template based hippocampus segmentation. Hippocampus 19:565–571. [DOI] [PubMed] [Google Scholar]
- Prudent V, Kumar A, Liu S, Wiggins G, Malaspina D, Gonen O (2010): Human hippocampal subfields in young adults at 7.0 T: Feasibility of imaging. Radiology 254:900. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rosset A, Spadola L, Ratib O (2004): OsiriX: An open‐source software for navigating in multidimensional DICOM images. J Digit Imag 17:205–216. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sabuncu MR, Yeo BT, Van Leemput K, Fischl B, Golland P (2010): A generative model for image segmentation based on label fusion. IEEE Trans Med Imaging 29:1714–1729. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sandhu R, Dominitz A, Gao Y, Tannenbaum A (2012): Volumetric mapping of genus zero objects via mass preservation. arXiv preprint arXiv:1205.1225. [Google Scholar]
- Scher AI, Xu Y, Korf ESC, White LR, Scheltens P, Toga AW, Thompson PM, Hartley SW, Witter MP, Valentino DJ, Launer LJ (2007): Hippocampal shape analysis in Alzheimer's disease: A population‐based study. Neuroimage 36:8–18. [DOI] [PubMed] [Google Scholar]
- Schröder P, Sweldens W (1995): Spherical wavelets: Efficiently representing functions on the sphere. Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, ACM, (pp. 161–172).
- Styner M, Lieberman JA, Pantazis D, Gerig G (2004): Boundary and medial shape analysis of the hippocampus in schizophrenia. Med Image Anal 8:197–203. [DOI] [PubMed] [Google Scholar]
- Su Z, Wang Y, Shi R, Zeng W, Sun J, Luo F, Gu X (2015): Optimal mass transport for shape matching and comparison. IEEE Trans Pattern Anal Mach Intell 37:2246–2259. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Thompson PM, Hayashi KM, de Zubicaray GI, Janke AL, Rose SE, Semple J, Hong MS, Herman DH, Gravano D, Doddrell DM, Toga AW (2004): Mapping hippocampal and ventricular change in Alzheimer disease. Neuroimage 22:1754–1766. [DOI] [PubMed] [Google Scholar]
- Tian J, Ma K‐K (2011): A survey on super‐resolution imaging. Signal Image Video Process 5:329–342. [Google Scholar]
- Tong T, Wolz R, Coupé P, Hajnal JV, Rueckert D (2013): Segmentation of MR images via discriminative dictionary learning and sparse coding: Application to hippocampus labeling. Neuroimage 76:11–23. [DOI] [PubMed] [Google Scholar]
- van der Lijn F, den Heijer T, Breteler MM, Niessen WJ (2008): Hippocampus segmentation in MR images using atlas registration, voxel classification, and graph cuts. Neuroimage 43:708–720. [DOI] [PubMed] [Google Scholar]
- Van Leemput K, Bakkour A, Benner T, Wiggins G, Wald LL, Augustinack J, Dickerson BC, Golland P, Fischl B (2009): Automated segmentation of hippocampal subfields from ultra‐high resolution in vivo MRI. Hippocampus 19:549–557. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wang H, Das SR, Suh JW, Altinay M, Pluta J, Craige C, Avants B, Yushkevich PA, Yushkevich PA (2011a): A learning‐based wrapper method to correct systematic errors in automatic image segmentation: Consistently improved performance in hippocampus, cortex and brain segmentation. Neuroimage 55:968–985. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wang H, Suh JW, Das S, Pluta J, Altinay M, Yushkevich P (2011b): Regression‐based label fusion for multi‐atlas segmentation. IEEE CVPR (p 1113–1120). [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wang H, Suh JW, Das SR, Pluta JB, Craige C, Yushkevich PA (2013): Multi‐atlas segmentation with joint label fusion. IEEE Trans Pattern Anal Mach Intell 35:611–623. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wang L, Miller JP, Gado MH, McKeel DW, Rothermich M, Miller MI, Morris JC, Csernansky JG (2006): Abnormalities of hippocampal surface structure in very mild dementia of the Alzheimer type. Neuroimage 30:52–60. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wisse LE, Gerritsen L, Zwanenburg JJ, Kuijf HJ, Luijten PR, Biessels GJ, Geerlings MI (2012): Subfields of the hippocampal formation at 7T MRI: In vivo volumetric assessment. Neuroimage 61:1043–1049. [DOI] [PubMed] [Google Scholar]
- Yang J, Wright J, Huang TS, Ma Y (2010): Image super‐resolution via sparse representation. IEEE Trans Image Process 19:2861–2873. [DOI] [PubMed] [Google Scholar]
- Yu G, Sapiro G, Mallat S (2012): Solving inverse problems with piecewise linear estimators: From Gaussian mixture models to structured sparsity. IEEE Trans Image Process 21:2481–2499. [DOI] [PubMed] [Google Scholar]
- Yushkevich P, Avants B, Pluta J, Das S, Minkoff D, Mechanic‐Hamilton D, Glynn S, Pickup S, Liu W, Gee J (2009): A high‐resolution computational atlas of the human hippocampus from postmortem magnetic resonance imaging at 9.4T. NeuroImage 44:385–398. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yushkevich PA, Wang H, Pluta J, Das SR, Craige C, Avants BB, Weiner MW, Mueller S (2010): Nearly automatic segmentation of hippocampal subfields in in vivo focal T2‐weighted MRI. Neuroimage 53:1208–1224. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zarei M, Beckmann CF, Binnewijzend MA, Schoonheim MM, Oghabian MA, Sanz‐Arigita EJ, Scheltens P, Matthews PM, Barkhof F (2013): Functional segmentation of the hippocampus in the healthy human brain and in Alzheimer's disease. Neuroimage 66:28–35. [DOI] [PubMed] [Google Scholar]
- Zarpalas D, Gkontra P, Daras P, Maglaveras N (2014): Gradient‐based reliability maps for ACM‐based segmentation of hippocampus. IEEE Trans Biomed Eng 61:1015–1026. [DOI] [PubMed] [Google Scholar]
- Zeyde R, Elad M, Protter M (2012): On single image scale‐up using sparse‐representations In: Curves and Surfaces. Springer; pp. 711–730. [Google Scholar]
- Zhou N, Yu X, Zhao T, Wen S, Wang F, Zhu W, Kurc T, Tannenbaum A, Saltz J, Gao Y (2017): Evaluation of nucleus segmentation in digital pathology images through large scale image synthesis. Proc. SPIE 10140, Medical Imaging 2017: Digital Pathology, 101400K. [DOI] [PMC free article] [PubMed]
- Zhu L, Kolesov I, Gao Y, Kikinis R, Tannenbaum A (2014): An effective interactive medical image segmentation method using fast growcut. MICCAI Workshop Interactive Med Image Comput. [Google Scholar]
