Author manuscript; available in PMC: 2026 Mar 1.
Published in final edited form as: J Neurosci Methods. 2025 Jan 2;415:110359. doi: 10.1016/j.jneumeth.2024.110359

Convolutional Neural Networks for the segmentation of hippocampal structures in postmortem MRI scans

BN Anoop a,g, Karl Li a, Nicolas Honnorat a, Tanweer Rashid a, Di Wang a, Jinqi Li a, Elyas Fadaee a, Sokratis Charisis a, Jamie M Walker c, Timothy E Richardson c, David A Wolk d, Peter T Fox b, José E Cavazos a,e, Sudha Seshadri a, Laura EM Wisse f, Mohamad Habes a,b,*
PMCID: PMC12308772  NIHMSID: NIHMS2096352  PMID: 39755177

Abstract

Background:

The hippocampus plays a crucial role in memory and is one of the first structures affected by Alzheimer’s disease. Postmortem MRI offers a way to quantify these alterations by measuring the atrophy of the inner structures of the hippocampus. Unfortunately, the manual segmentation of hippocampal subregions required to carry out these measures is very time-consuming.

New Method:

In this study, we explore the use of fully automated methods relying on state-of-the-art Deep Learning approaches to produce these annotations. More specifically, we propose a new segmentation framework made of a set of encoder–decoder blocks embedding self-attention mechanisms and atrous spatial pyramidal pooling to produce better maps of the hippocampus and identify four hippocampal regions: the dentate gyrus, the hippocampal head, the hippocampal body, and the hippocampal tail.

Results:

Trained using slices extracted from 15 postmortem T1-weighted, T2-weighted, and susceptibility-weighted MRI scans, our new approach produces hippocampus parcellations that are better aligned with the manually delineated parcellations provided by neuroradiologists.

Comparison with Existing Methods:

Four standard deep learning segmentation architectures (UNet, Double UNet, Attention UNet, and Multi-resolution UNet) were used for the qualitative and quantitative comparison with the proposed hippocampal region segmentation model.

Conclusions:

Postmortem MRI serves as a highly valuable neuroimaging technique for examining the effects of neurodegenerative diseases on the intricate structures within the hippocampus. This study opens the way to large sample-size postmortem studies of the hippocampal substructures.

Keywords: Image segmentation, Magnetic resonance imaging, Postmortem brain MRI, Hippocampus subregions, Deep learning, Convolutional Neural Networks, Alzheimer’s disease, Dementia

1. Introduction

The hippocampus is an intricate brain region crucial to memory encoding. It was quickly noticed that this brain region is significantly damaged by the most common neurodegenerative disorders, including Alzheimer’s disease (AD) (Braak and Braak, 1991; Yushkevich et al., 2015; Verhaaren et al., 2015; Habes et al., 2016, 2018; Jonkman et al., 2019b), frontotemporal dementia (FTD) (Frisoni et al., 1999), and Lewy body dementia (LBD) (Elder et al., 2017). Over the years, large magnetic resonance imaging (MRI) studies have established the measure of hippocampal atrophy as one of the most reliable, easily accessible and widely used biomarkers of AD (Kulaga-Yoskovitz et al., 2015; Bender et al., 2018). However, while most neuroimaging studies consider the hippocampus as a single entity, it was also acknowledged that this structure is heterogeneous and made of distinct subregions fulfilling different functions, presenting specific connectivity to other brain regions, and exhibiting distinct vulnerabilities to neurodegenerative diseases (Cox et al., 2019; Tanaka, 2021). This heterogeneity is found both along the anterior–posterior axis of the hippocampus and across its different cytoarchitectonic subregions, such as the cornu ammonis fields, the dentate gyrus, and the subiculum (De Flores et al., 2015).

The hippocampus can be subdivided along the anterior–posterior axis into three major sections: hippocampal head (HH), hippocampal body (HB), and hippocampal tail (HT) (Hrybouski et al., 2019). There is a functional differentiation between the anterior part of the hippocampus, the head, and its posterior parts, the body and the tail (Fritch et al., 2021; Auguste et al., 2023). In particular, several studies revealed that the posterior hippocampus is involved during spatial memory tasks (Fritch et al., 2021), while the anterior hippocampus is engaged if a memory task contains emotional information (Auguste et al., 2023). In addition, the dentate gyrus (DG) (Leutgeb et al., 2007; Scharfman, 2011) is one of the main subregions of the hippocampus contributing to memory consolidation and spatial cognition. One of the few sites of adult hippocampal neurogenesis (Winner et al., 2011; Piatti et al., 2013), the DG has been shown to have its cellular and microvascular processes disrupted in a range of dementias and related disorders (Travis et al., 2015; Moreno-Jiménez et al., 2019; Terreros-Roncal et al., 2021), making it a promising avenue towards understanding how neurodegenerative processes impact cognition. However, it is also a challenging subregion to study, as its volume has been shown to be affected by a wide range of factors, including psychiatric illnesses (Hayes et al., 2017; Nakahara et al., 2020; Nuninga et al., 2020), sleep (Neylan et al., 2010), and aging (Parker et al., 2019). It is therefore essential to accurately assess the size of the DG to analyze the impact of various natural and pathological processes.

Postmortem brain MRI examinations provide crucial information about the hippocampus structure and its vulnerability to neurodegenerative disorders. First, postmortem MRI can be used to establish ground-truth information about the atrophy patterns not only in pathologically confirmed neurological disorders (de Jager et al., 2010; Den Haan et al., 2018) but also in healthy aging (Adler et al., 2018) and brain development (Insausti, 2010). Second, postmortem brain MRI studies can validate the imaging techniques used on living individuals (de Flores et al., 2020): comparing postmortem and in-vivo images is a means of quantifying the limitations and the biases of the various in-vivo imaging modalities and an approach to develop segmentation methods better capturing brain regions of interest. As the hippocampus is crucially involved in many neurological (Dawe et al., 2011) and psychiatric disorders (Saygin et al., 2017) and an important outcome measure in clinical trials in dementia (Abdallah et al., 2015; Kreilkamp et al., 2018; Gerlach et al., 2022), the segmentation of the hippocampus from postmortem brain MRI is a valuable instrument in both research and clinical contexts that can contribute to a better understanding, diagnosis, and treatment of neurological disorders.

However, the manual segmentation of the hippocampus is laborious and time-consuming. Several automated hippocampus segmentation approaches were proposed to tackle this issue, such as ASHS (Wisse et al., 2016). ASHS applies non-rigid registration to T1w and T2w MRI scans, then performs multi-atlas joint label fusion and voxelwise learning-based error correction to propagate anatomic labels from a set of manually labeled training images to an unlabeled image (Wisse et al., 2016). The automated segmentation method proposed by Lim et al. (2012, 2013) relies on Freesurfer (Dale et al., 1999; Fischl and Dale, 2000). The entire Freesurfer pipeline is applied to obtain a segmentation of the whole hippocampus, and then the segmentation of the hippocampus subfields is obtained by combining Bayesian inference and statistical models of the medial temporal lobe (Lim et al., 2012, 2013).

The recent Convolutional Neural Networks (CNNs) present several advantages over these traditional machine learning methods, in particular the ability to model spatial hierarchical features (LeCun et al., 2015), to fit multi-scale, data-driven models at once (Long et al., 2015; Krizhevsky et al., 2017), and the possibility to produce models that are robust to translation (Simonyan and Zisserman, 2014) and fairly robust with respect to other variations in the data (LeCun et al., 2015). These advantages have established CNNs as the new gold standard for image segmentation tasks in a variety of domains, including medical imaging (Kayalibay et al., 2017; Oktay et al., 2018; Anoop et al., 2020; Thomas et al., 2020; Pawan et al., 2021; Niyas et al., 2022). CNN encoder–decoder architectures have already been used several times in the past to delineate hippocampal subregions in standard in-vivo MRI scans (Zhu et al., 2019; Shi et al., 2019; Ma et al., 2020; Yang et al., 2020; Manjón et al., 2022). For instance, Zhu et al. proposed a fully convolutional network relying on dilated convolutions to segment hippocampus subregions in the brain scans of infants (Zhu et al., 2019). Shi et al. used generative adversarial networks to segment hippocampus subregions (Shi et al., 2019). Ma et al. proposed a cascaded dual discriminator adversarial learning model with a difficulty-aware attention mechanism (Ma et al., 2020), and Yang et al. a multiscale 3D CNN (Yang et al., 2020). Lastly, Manjón et al. trained a U-Net CNN architecture (Manjón et al., 2022). Unfortunately, none of these deep-learning approaches were used to segment the hippocampus and its subregions in postmortem MRI scans, and it is unclear if the good performance reported for these architectures would replicate for high-resolution postmortem scans acquired using experimental MRI protocols tuned to mitigate the MRI contrast changes induced by the fixation of brain tissues in formalin (Adler et al., 2018; Ravikumar et al., 2021; Khandelwal et al., 2023).

The present study was designed to address this question. In this work, we propose a new deep-learning method for the segmentation of the hippocampus and its subregions in susceptibility-weighted (SWI), T1-weighted (T1w), and T2-weighted (T2w) postmortem MRI scans. More specifically, we propose a new segmentation framework made of a set of encoder–decoder blocks embedding self-attention mechanisms and atrous spatial pyramid pooling (ASPP) to produce better maps of the hippocampus and identify four hippocampal regions: the dentate gyrus, the hippocampal head, the hippocampal body, and the hippocampal tail. Our previous work established that attention gates and atrous spatial pyramid pooling boost binary segmentation performance when segmenting white matter hyperintensities (Benet Nirmala et al., 2023). In this work, we propose to use the same approach for segmenting the hippocampus. More specifically, and unlike traditional segmentation architectures (Rahil et al., 2023), the field of view of the convolutional kernels is adjusted in the initial two stages of the encoder part to extract more predominant features. We employ a modified ASPP module with a weight-sharing concept, and we fix the dilation rates of the ASPP module experimentally. Our new model was trained on slices extracted from 15 postmortem SWI scans and compared with four standard deep learning segmentation architectures: UNet (Ronneberger et al., 2015), Double UNet (Jha et al., 2020), Attention UNet (Oktay et al., 2018), and Multi-resolution UNet (Ibtehaz and Rahman, 2020). To the best of our knowledge, our framework is the first deep learning model proposed for the delineation of hippocampus subregions in postmortem brain MRI scans.

2. Materials and methods

2.1. Postmortem MRI scans

The University of Texas Health Science Center at San Antonio is collecting donated brains from patients with clinically diagnosed neurodegenerative disorders (consent forms can be downloaded here: https://biggsinstitute.org/research/brain-donation/enroll/). The brains are usually split in two: the right hemisphere is frozen for long term storage, while the left hemisphere is fixed in a 10% neutral buffered formalin solution (Pfefferbaum et al., 2004), scanned, and dissected. Fifteen brains were selected for the present study. These 15 scans correspond to nine women and six men, with an average age of 78 years and a standard deviation of seven years. The left sides of the donated brains, fixed in formalin and including the cerebral hemisphere, the cerebellum, and the brain stem approximately cut at the level of the cervicomedullary junction, were placed in separate plastic bags filled with formalin. The bags were wrapped in paper diapers, fixed in an 8-channel knee coil, and imaged within a Siemens TimTrio 3T MRI scanner to acquire susceptibility-weighted imaging (SWI), T1w, and T2w MRI scans for each brain. The susceptibility-weighted MRI scans were acquired with a repetition time of 39 ms, four echo times of 6.72, 12.79, 21.29, and 29.79 ms, a flip angle of 15 degrees, and an isotropic resolution of 0.5 mm. Since the scans were averaged six times, the scanning time reached 61 min for each brain. The T1-weighted scans were acquired with a repetition time of 2.2 s, an echo time of 3.25 ms, a flip angle of 13 degrees, and an isotropic resolution of 0.5 mm; the 4 averages took 24 min to complete. Lastly, the T2-weighted scans were acquired with a repetition time of 9.79 s, an echo time of 23 ms, a flip angle of 120 degrees, and an isotropic resolution of 0.8 mm. Averaging 14 scans required 98 min per brain. Please refer to Li et al. (2023) for more details.

2.2. Pre-processing

The MRI scans were pre-processed in two steps. First, the axes of the fifteen SWI scans were manually permuted and flipped to reorient all the brains in the same direction. For each individual brain, the rigid registration of the Advanced Normalization Tools software library (ANTS, version 2.2.0) was then used to align the T1w and the T2w MRI scans with the manually re-oriented SWI scans (Avants et al., 2011).

2.3. Manual hippocampus segmentation

A neuroradiology expert (KL) manually annotated the fifteen SWI scans in three steps. He first segmented the whole hippocampal region. This mask was then subdivided into the hippocampal head (HH), hippocampal body (HB), hippocampal tail (HT) (Hrybouski et al., 2019), and dentate gyrus (DG). The annotation of the dentate gyrus was based on the location of the stratum radiatum lacunosum moleculare (SRLM), or hypointense band (Yushkevich et al., 2015). The division between the hippocampal head (HH) and hippocampal body (HB) was based on the disappearance of the uncal apex, where the first slice without the uncal apex was considered hippocampal body (Olsen et al., 2019). The division between the hippocampal body (HB) and hippocampal tail (HT) was based on the colliculi. As the borders between other subregions cannot be observed in postmortem brain MRI scans (Wisse et al., 2017), we did not further subdivide the remaining hippocampus subregions. Fig. 3 illustrates the annotation of an individual brain.

Fig. 3.

Fig. 3.

Manual annotations (annot.) of an axial, a sagittal (sag.), and a coronal (cor.) slice and the corresponding SWI, T1w, and T2w MRI intensity. The whole hippocampus mask and its four structures, dentate gyrus (DG), hippocampal head (HH), hippocampal body (HB), and hippocampal tail (HT).

All the manual annotations produced by our neuroradiology expert (KL) were reviewed by the senior radiologist who developed the segmentation protocol (LW, Wisse et al. (2021)) and further investigated by the rest of the team in case of disagreement to reach a consensus before training our Deep Networks. The literature suggests that the straightforward hippocampal subdivisions considered in this work are usually associated with high inter-rater agreements between radiologists, so we focused our resources on increasing our sample size instead of recruiting additional radiologists (Wisse et al., 2021; Caldairou et al., 2016).

2.4. Deep learning architectures

The manual annotations were used to train deep convolutional networks. In this work, we compared the performance of five variants of UNet architectures: the original UNet (Ronneberger et al., 2015), the Double UNet (Jha et al., 2020), the Attention UNet (Oktay et al., 2018), the Multi-resolution UNet (Ibtehaz and Rahman, 2020), and a custom deep learning architecture that will be denoted DeepAIM, for Deep neural network with Attention-assisted Multi-resolution.

For each architecture, four models were compared: a model using the three MRI modalities to predict the segmentation (M1), a model only based on the SWI intensity (M2), a model only taking the T1w scans into account (M3), and a model exclusively based on the T2w intensity (M4). In M1 models, the three modalities were concatenated to create input images with three channels.
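The three-channel inputs of the M1 models can be prepared by stacking the co-registered modalities along a channel axis. A minimal numpy sketch (function and variable names are illustrative, not taken from our codebase):

```python
import numpy as np

def make_m1_input(swi, t1w, t2w):
    """Stack three co-registered 2D slices into one 3-channel input image."""
    assert swi.shape == t1w.shape == t2w.shape
    return np.stack([swi, t1w, t2w], axis=-1)  # shape (H, W, 3)
```

The M2, M3, and M4 models simply receive one of the three arrays unchanged, as a single-channel image.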

For each architecture and each model, two variants were trained: a model variant generating a mask for the whole hippocampus (binary segmentation task) and a model variant generating a parcellation of the hippocampal structures as a multiclass segmentation task (Chen et al., 2020, 2018a; Sharan et al., 2022) with five classes: DG, HH, HB, HT, and the brain outside the hippocampus. The models generating the hippocampus masks will be denoted: M1-h, M2-h, M3-h, M4-h and the models producing a map of the structures: M1-s, M2-s, M3-s, M4-s. They are all illustrated in Fig. 1.

Fig. 1.

Fig. 1.

Overview of the proposed hippocampal structures segmentation approach.

Because the hippocampus only covers a small part of the brain, the field of view of the models was restricted to a 96 × 48 × 64 box that was automatically centered around a first estimation of the hippocampus location obtained via registration. This background cropping procedure is described in detail in Section 2.8. This box corresponds to a size of 48 mm × 24 mm × 32 mm, which was manually selected to be large enough to contain the hippocampal regions of all the brains in our dataset.

Since our data set is only made of 15 volumes, the deep networks were designed to segment the 2D sagittal slices of the boxes instead of 3D volumes. This approach produced 64 images of size 96 × 48 for each brain and reduced overfitting by producing smaller deep networks: M1 models were designed to handle small input images of size 96 × 48 × 3, while M2, M3, and M4 models were designed for input images of size 96 × 48. The leave-one-out cross-validation approach described in Section 2.7 was adopted to demonstrate the models’ robustness (Wong, 2015).
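Assuming the third axis of each cropped volume indexes the sagittal direction, extracting the 64 training images per brain amounts to a single axis permutation; a sketch:

```python
import numpy as np

def to_sagittal_slices(box):
    """Turn one 96 x 48 x 64 cropped volume into 64 sagittal images of size 96 x 48."""
    assert box.shape == (96, 48, 64)
    return np.moveaxis(box, 2, 0)  # shape (64, 96, 48), one image per sagittal position
```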

2.4.1. U-nets

The convolutional neural network architecture UNet is the gold standard for image segmentation (Ronneberger et al., 2015). This architecture is named after its U-shaped structure, which is made of an encoder and a decoder linked by skip connections. The encoder part gradually extracts high-level features via convolutional and downsampling layers. The decoder part aims at reconstructing an image of the same resolution as the input image by gradually upsampling the high-level features generated by the encoder. The skip connections feed each decoding layer with encoded features generated at the same resolution as the decoding layer to produce sharper, more accurate outputs (Ronneberger et al., 2015).

DoubleUNet (DUnet) is a deep network combining two UNet models to produce better segmentations (Jha et al., 2020). DoubleUNet implements Atrous Spatial Pyramid Pooling (ASPP) to exploit the hierarchical features learned by the UNet models (Chen et al., 2018b; Jha et al., 2020).

Attention UNet (A_Unet) performs better than the original UNet by exploiting attention mechanisms during the segmentation (Oktay et al., 2018). Attention mechanisms are used to let the network focus on the most informative regions of the input image. This focus is achieved by training the network to generate masks multiplied with the encoded features to mask irrelevant regions (Oktay et al., 2018).

Multi-resolution UNet (M_Unet) is a variant of UNet that fuses feature maps from multiple resolutions to improve the image segmentation (Ibtehaz and Rahman, 2020). In a standard UNet, the encoder part of the network captures high-level features through a series of convolutional and pooling layers, while the decoder part gradually upsamples the features to generate the segmentation map. In a Multi-resolution UNet, additional pathways are introduced to fuse feature maps from different resolutions in the decoder part. This fusion process improves the combination of local and global information during the upsampling process, which produces more accurate results.

2.4.2. Deep AIM

In addition to these four standard architectures, we tested a custom UNet architecture that builds on our preliminary work (Anoop et al., 2021; Benet Nirmala et al., 2023). This new architecture, coined Deep AIM for Deep neural network with Attention-assisted Multi-resolution, is depicted in Fig. 2 and consists of an encoder–decoder structure expanding on our prior DeepMIR approach (Rashid et al., 2021). Compared to DeepMIR and standard UNet architectures, we introduced a multi-scale module (Chen et al., 2018b; Huang et al., 2021) and a set of attention modules in Deep AIM to achieve better segmentation performance. The multi-scale module extracts features of different scales, and the attention module (Oktay et al., 2018) provides attention to relevant features from the encoder side during the decoding stage. Consistent with the Attention U-Net and Multi-resolution U-Net architectures, DeepAIM was implemented with “same” padding, which preserves the dimensions of the feature maps.

Fig. 2.

Fig. 2.

The Deep AIM networks used in this study contain four self-attention gates (element-wise multiplications) and an atrous spatial pyramid pooling (ASPP) module (Benet Nirmala et al., 2023).

More specifically, the proposed architecture is designed to take input image patches of size 96 × 48 and generate a probability map of the same size. The filter dimensions of the encoder range from 7 × 7 to 3 × 3: the first and second stages of the encoder use 7 × 7 and 5 × 5 kernels, respectively, to extract more prominent characteristics from the initial layers. Batch normalization and ReLU activation are applied after each convolution. The decoder has a fixed filter dimension of 3 × 3. The encoder and decoder are connected with skip connections that pass the features extracted by the encoder to the decoder at the same stage. Each skip connection is passed through a self-attention module that delivers attention to specific feature maps, improving overall performance (Benet Nirmala et al., 2023). The multi-scale module is attached to extract multi-scale features from the highest-level feature space, replacing deeper U-Net layers.

The attention module in Deep AIM uses the self-attention approach of Oktay et al. (2018) to extract the spatial and channel information relevant to the classification task. A channel-wise 1 × 1 convolution is carried out on both inputs, followed by a sigmoid activation function normalizing the attention coefficients. These attention coefficients filter the features passed from the encoder part of the UNet to the decoder part generating the final segmentation.
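This gating mechanism can be sketched in numpy, with plain matrix products standing in for the channel-wise 1 × 1 convolutions (weights and names are illustrative; the actual module is a trainable Keras layer):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(skip, gate, w_skip, w_gate):
    """Filter encoder features with attention coefficients derived from both inputs.

    skip, gate: (H, W, C) feature maps from the encoder and the decoder side;
    w_skip, w_gate: (C, 1) weight matrices standing in for 1 x 1 convolutions.
    """
    alpha = sigmoid(skip @ w_skip + gate @ w_gate)  # (H, W, 1), coefficients in (0, 1)
    return skip * alpha  # irrelevant regions of the skip connection are suppressed
```

Because the coefficients lie strictly between 0 and 1, the gated features can never exceed the magnitude of the original skip-connection features.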

The atrous spatial pyramid pooling (ASPP) module combines features of different scales extracted from the deepest layer of our architecture and relies on spatial pyramid pooling (Chen et al., 2018b). As proposed in Huang et al. (2019), we implemented the convolution kernels corresponding to the different scales of the spatial pyramid pooling module using atrous convolutions with different dilation rates. During our experiments, we selected an ASPP module implementing five kernels: a standard skip connection, an atrous convolution with a dilation rate of 6 pixels, and three large kernels with a dilation rate of 12 pixels.
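Atrous convolutions enlarge the field of view without adding parameters: a k × k kernel with dilation rate d spans the same area as a dense kernel of size k + (k − 1)(d − 1). Assuming 3 × 3 base kernels (the base kernel size is an assumption for illustration), the rates used here give:

```python
def effective_kernel_size(k, d):
    """Span covered by a k x k atrous (dilated) kernel with dilation rate d."""
    return k + (k - 1) * (d - 1)

# spans of a 3 x 3 kernel at the dilation rates used in the ASPP module
spans = {d: effective_kernel_size(3, d) for d in (1, 6, 12)}
```

A dilation rate of 12 thus lets a 3 × 3 kernel cover a 25 × 25 neighborhood, a large fraction of the 96 × 48 input patch, while keeping only nine weights per channel.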

The numbers of trainable parameters in the five UNet architectures and eight variants compared in this work are reported in Table 1. The eight variants present different numbers of parameters due to disparities in the depth of their input and output network layers: M1 models process input images containing three channels, while M2, M3, and M4 models deal with single-channel input images. As a result, the input layer of the M1 models contains more trainable parameters. Similarly, the output layer of the M-h models (binary segmentation) contains fewer parameters than the output layer of the M-s models, which generate an output value for each hippocampal structure (multiclass segmentation).

Table 1.

The number of trainable parameters in the models compared in this work. M3 and M4 models are similar to the M2 models.

Model M1-h M1-s M2-h M2-s

Unet 31,050,177 31,050,276 31,049,025 31,049,124
A_Unet 1,216,821 1,217,016 1,216,533 1,216,728
D_Unet 19,735,074 19,735,173 19,733,346 19,733,455
M_Unet 7,262,750 7,262,912 7,262,504 7,262,666
DeepAIM 33,842,043 33,842,005 33,838,907 33,838,919

2.5. Evaluation metrics

Seven binary segmentation metrics were used to measure the quality of the annotations generated by our deep networks: Dice coefficient (Dice, 1945), precision (Powers, 2020), recall (Powers, 2020), specificity (Taha and Hanbury, 2015), accuracy (Taha and Hanbury, 2015), the Jaccard index, and the Pearson correlation coefficient (r) (Cohen et al., 2009). In binary image segmentation, one class is usually considered positive and the other negative, or background. True positives then correspond to the image pixels segmented as positive by a classification method that are also positive according to the ground-truth manual segmentation. A false positive is a pixel wrongly segmented as positive by the classification method. Similarly, true negatives and false negatives denote pixels classified as background that are, respectively, annotated or not annotated as background in the gold standard.

The Dice coefficient, also known as the Sørensen index, is calculated from the number of true positives (TP), number of false positives (FP), and number of false negatives (FN) as follows:

Dice = 2TP / (2TP + FP + FN)  (1)

The percentage of true positive pixels among the pixels classified as positive is known as the precision (P) or positive predictive value, and it is defined as:

P = TP / (TP + FP)  (2)

The percentage of true positive pixels among the pixels that are positive in the manual segmentation is known as recall (R) or sensitivity, and defined as:

R = TP / (TP + FN)  (3)

The specificity (S) is the proportion of true negatives in the set of pixels manually segmented as background (where TN is the number of true negative pixels):

S = TN / (TN + FP)  (4)

The accuracy (A) is the proportion of accurate predictions to the total number of predictions:

A = (TP + TN) / (TP + TN + FP + FN)  (5)

The Jaccard index (J) is a metric measuring the degree of similarity between two sample sets. J is usually computed by dividing the cardinality of the intersection of the sets by the cardinality of their union. Alternatively, the Jaccard index can be derived from the Dice coefficient as follows:

J = Dice / (2 − Dice)  (6)

Lastly, Pearson Correlation Coefficient (r) estimates the linear relationship between two sets of data:

r = Σ_i (x_i − x̄)(y_i − ȳ) / √( Σ_i (x_i − x̄)² · Σ_i (y_i − ȳ)² )  (7)

where x_i and y_i are the binary labels and predictions, and x̄ and ȳ are the corresponding means.
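The seven metrics above can be computed from a pair of binary masks as follows, a straightforward numpy transcription of Eqs. (1)-(7):

```python
import numpy as np

def binary_metrics(pred, gt):
    """Compute the seven binary segmentation metrics from two masks of equal shape."""
    pred, gt = pred.astype(bool).ravel(), gt.astype(bool).ravel()
    tp = np.sum(pred & gt)    # predicted positive, annotated positive
    fp = np.sum(pred & ~gt)   # predicted positive, annotated background
    fn = np.sum(~pred & gt)   # predicted background, annotated positive
    tn = np.sum(~pred & ~gt)  # predicted background, annotated background
    dice = 2 * tp / (2 * tp + fp + fn)
    return {
        "dice": dice,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "jaccard": dice / (2 - dice),
        "r": np.corrcoef(pred, gt)[0, 1],
    }
```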

The quality of the hippocampus masks generated by the binary deep networks was estimated by computing these seven metrics to compare, for each test brain, the masks generated by the deep networks with the hippocampus mask manually annotated by the neurologist. The metric values obtained for the 15 test brains were averaged. For the multiclass models, a set of metrics was obtained for each brain and each hippocampal subregion separately, considering the remaining regions and the brain outside the hippocampus as background. The metric values corresponding to the four hippocampal structures were averaged together to produce four-class average metrics. Lastly, the fifteen metric or four-class average metric values measured for the fifteen test brains were averaged to obtain a single value to report for each metric and brain region.

2.6. Implementation details

Real-time data augmentation was conducted during the network training using the ImageDataGenerator of the Keras preprocessing Python library (version 1.1.2). The parameters of this image generator were set to a rotation range of 90 degrees, width and height shift ranges of 0.3, a shear range of 0.5, a zoom range of 0.3, horizontal and vertical flips, and the reflect fill mode. The optimization was carried out using the Adam optimizer, set to a learning rate of 0.0003 and to minimize a Dice loss. Model weights were initialized using the “he_normal” approach (He et al., 2015). The models were trained for 1500 epochs with a batch size of 64. All the tests were conducted using a Microsoft Azure cloud machine with Rocky Linux 9, a 40 GB dedicated NVIDIA Tesla A100 GPU, and Keras with TensorFlow as the backend.
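The Dice loss minimized during training can be sketched as follows; this is an illustrative soft Dice loss, since the exact formulation and smoothing constant used in our code are not specified above:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask.

    One minus the (smoothed) Dice overlap: 0 for a perfect prediction,
    approaching 1 when prediction and target do not overlap at all.
    """
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
```

Unlike the pixelwise cross-entropy, this loss directly optimizes the overlap metric reported in Section 2.5, which helps with the strong class imbalance between the small hippocampus and the background.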

2.7. Cross-validation

Since our data set is only made of 15 volumes, a leave-one-out cross-validation approach was adopted when comparing the segmentation models (Wong, 2015). One after the other, the fifteen volumes were used for testing parcellation models trained using the first ten remaining volumes and validated using the last four remaining volumes. Since all the deep networks were designed to segment 2D sagittal slices instead of 3D volumes, 64 images of size 96 × 48 were available for each brain volume. As a result, all the models were trained using 640 2D images of size 96 × 48, validated using 256 2D images of size 96 × 48, and tested by segmenting the 64 images of size 96 × 48 coming from the single test volume. Overall, fifteen models were trained for each model architecture investigated in this work: one for each brain volume in our dataset.
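The leave-one-out splits described above can be generated as follows (a sketch; volume indices are arbitrary identifiers):

```python
def loocv_splits(n_volumes=15, n_train=10, n_val=4):
    """One split per volume: the volume is held out for testing, the first
    ten remaining volumes train the model, and the last four validate it."""
    splits = []
    for test in range(n_volumes):
        remaining = [i for i in range(n_volumes) if i != test]
        splits.append({"train": remaining[:n_train],
                       "val": remaining[n_train:n_train + n_val],
                       "test": test})
    return splits
```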

2.8. Cropping and intensity normalization

As explained in the previous sections, the field of view of the deep networks was cropped by automatically centering a box of fixed size 96 × 48 × 64 (48 mm × 24 mm × 32 mm) around the hippocampus. This centering was carried out for each brain separately and achieved in four steps. First, the SWI volumes of the other brains were warped into the space of the SWI volume of the brain to crop using the non-rigid SyN registration method (Avants et al., 2008) implemented in the ANTS library (version 2.2.0) (Avants et al., 2011). The non-rigid transformations determined by ANTS were then applied to the manually annotated whole hippocampus masks to warp these masks into the space of the brain to crop. The warped masks were summed and thresholded to produce a first guess of the hippocampus location. The cropping box was then centered at the center of this first hippocampus segmentation.

During the cross-validation, only 14 brains were considered in the training/validation set. For these brains, the cropping box was determined by registering each brain with the 13 remaining brains in the set. The cropping box of the test brain, by contrast, was obtained by registering all 14 training and validation brains to the test brain. In both cases, a threshold of 7 was applied to the sum of the registered masks to generate the mask required to center the cropping box. The SWI modality was chosen as the reference to carry out all the registrations because it was the modality selected by the neurologist to annotate the hippocampal regions.
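The thresholding and centering steps can be sketched as follows, assuming the warped masks have already been summed voxelwise (function and argument names are illustrative):

```python
import numpy as np

def center_crop_box(volume, summed_masks, box=(96, 48, 64), threshold=7):
    """Center a fixed-size box on a consensus hippocampus mask and crop the volume.

    summed_masks: voxelwise sum of the warped binary masks; voxels reaching
    the threshold form the first guess of the hippocampus location.
    """
    guess = summed_masks >= threshold
    center = np.array(np.nonzero(guess)).mean(axis=1).round().astype(int)
    # clip the box starts so the box never leaves the volume
    starts = [int(np.clip(c - s // 2, 0, dim - s))
              for c, s, dim in zip(center, box, volume.shape)]
    return volume[tuple(slice(st, st + s) for st, s in zip(starts, box))]
```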

After cropping the scans, the MRI intensity of the 64 sagittal slices inside each box was linearly normalized by setting the minimum MRI value in each slice to 0 and the largest value to 1. The slices were concatenated to prepare the training, validation, and testing sets required for the cross-validation, and then normalized one last time before training and testing the deep networks. During that last normalization, the mean and the standard deviation of the concatenated MRI intensity values in the training set were calculated and used to z-score the concatenated slices in the training set, the validation set, and the testing set (Carré et al., 2020).
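The two normalization passes can be sketched in numpy, a simplified version of the per-slice rescaling followed by z-scoring with training-set statistics:

```python
import numpy as np

def minmax_slices(slices):
    """Rescale each 2D slice of a (n, H, W) stack to the [0, 1] range."""
    mn = slices.min(axis=(1, 2), keepdims=True)
    mx = slices.max(axis=(1, 2), keepdims=True)
    return (slices - mn) / (mx - mn)

def zscore_sets(train, val, test):
    """Z-score all three sets with the training-set mean and standard deviation."""
    mu, sd = train.mean(), train.std()
    return (train - mu) / sd, (val - mu) / sd, (test - mu) / sd
```

Reusing the training-set statistics for the validation and testing sets avoids leaking information from the held-out volumes into the normalization step.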

3. Results

Fig. 4 presents the hippocampus masks obtained for all the model variants tested, for an axial slice. Similar masks are shown in Supplementary Figures 1 and 2 for sagittal and coronal slices. Similarly, Fig. 5 and Supplementary Figures 3 and 4 present the parcellations obtained for the same slices. Interestingly, the four models UNet (Ronneberger et al., 2015), Double UNet (Jha et al., 2020), Attention UNet (Oktay et al., 2018), and DeepAIM (Benet Nirmala et al., 2023) produced similar results, very close to the manual segmentation. Multi-resolution UNet (Ibtehaz and Rahman, 2020) produced the worst parcellations.

Fig. 4.

Segmentation generated by the four variants of the five deep network architectures trained to segment the whole hippocampus.

Fig. 5.

Segmentation generated by the four variants of the five deep network architectures trained to segment the hippocampal structures.

The complete quantitative comparison presented in Table 3 for the M1 models, which take all the MRI modalities into account, confirms these qualitative results. In particular, DeepAIM achieved the best Dice, Jaccard, accuracy, and precision for the whole hippocampus segmentation, the delineation of each hippocampal structure, and the overall parcellation. Multi-resolution UNet achieved the best recall, except for the whole hippocampus, which was better segmented by Double UNet.

Table 3.

Performance (in percent) of the M1 models trained using all MRI modalities (D: Dice, P: precision, R: recall, S: specificity, A: accuracy, J: Jaccard, r: correlation) for the whole hippocampus (M1-h, Whole), the four hippocampal structures (M1-s), and the four-class average (Average).

Model Structure D P R S A J r

Unet Whole 88.6 91.4 86.5 99.6 98.9 80.4 88.2
Head 84.3 88.7 81.1 99.7 99.2 74.4 84.2
Body 79.0 77.9 80.7 99.7 99.5 67.2 78.9
Tail 76.9 84.3 74.3 99.9 99.7 64.9 77.9
DG 75.2 78.4 72.8 99.9 99.8 63.2 75.3
Average 82.3 84.3 80.5 99.8 99.6 71.3 82.2

A_Unet Whole 84.2 89.1 83.7 99.4 98.7 75.4 84.5
Head 80.2 87.5 78.8 99.7 99.2 70.3 80.9
Body 72.1 75.9 70.9 99.7 99.4 59.2 72.4
Tail 74.2 80.7 72.6 99.9 99.6 60.4 75.2
DG 72.9 81.9 70.9 99.9 99.8 60.4 74.1
Average 77.4 83.2 75.2 99.8 99.5 65.5 77.9

D_Unet Whole 92.0 91.0 94.0 99.6 99.1 87.1 92.1
Head 91.4 90.9 92.7 99.8 99.6 85.6 91.4
Body 86.0 82.6 90.9 99.8 99.7 77.6 86.2
Tail 84.8 88.2 83.4 99.9 99.7 76.7 85.2
DG 85.2 81.7 90.4 99.9 99.9 76.3 85.6
Average 88.5 87.6 90.0 99.8 99.7 81.1 88.5

M_Unet Whole 91.0 90.9 92.3 99.5 99.1 84.9 90.9
Head 87.2 82.5 93.9 99.5 99.3 79.2 87.4
Body 64.8 51.1 93.8 98.8 98.7 49.2 67.9
Tail 44.2 31.5 93.7 98.3 98.2 29.6 51.7
DG 33.9 21.4 98.2 98.1 98.1 21.2 44.0
Average 69.5 58.3 92.7 98.9 98.9 55.2 71.9

DeepAIM Whole 93.3 94.6 92.2 99.7 99.3 88.3 93.0
Head 92.1 93.7 91.0 99.8 99.6 86.9 92.0
Body 87.2 88.5 86.3 99.9 99.7 80.3 87.2
Tail 85.5 92.5 81.8 99.9 99.8 78.4 86.2
DG 86.9 91.1 83.9 99.9 99.9 80.0 87.1
Average 89.5 92.1 87.4 99.9 99.7 83.2 89.5

Most of the performance metrics measured for the M1 models combining all the MRI modalities (Table 3) were similar to the values measured for SWI alone, reported in Supplementary Table 1, and slightly better than the values achieved with only T1w or T2w scans (Supplementary Tables 2 and 3, respectively). Overall, the best models achieved accuracy and Dice scores above 90% for most hippocampal structures, with the hippocampal tail and the dentate gyrus being the most difficult regions to parcellate. The most serious segmentation failures occurred with Multi-resolution UNet for these two regions. In both the M1 and M2 settings, Multi-resolution UNet also produced disappointing results for the hippocampal body, but the hippocampal head and the whole hippocampus were usually well segmented, even by Multi-resolution UNet.

Lastly, the models’ training times are reported in Table 2. Depending on the architecture, between one hour fifty minutes and three hours forty-five minutes were required to train a model on Microsoft Azure cloud computing nodes running Rocky Linux 9, with a dedicated 40 GB NVIDIA Tesla A100 GPU and Keras with TensorFlow as the backend. Attention UNet was the fastest architecture to train in all settings and DeepAIM the slowest to optimize, and these training times closely reflect the numbers of trainable parameters reported in Table 1.

Table 2.

Networks training times in seconds.

Model M1-h M1-s M2-h M2-s

Unet 13,247 11,432 11,045 10,819
A_Unet 8,440 7,936 6,968 6,612
D_Unet 11,752 11,132 10,505 10,028
M_Unet 10,247 9,732 8,107 7,518
DeepAIM 13,550 11,727 11,987 11,252

4. Discussion

In this study, we compare the ability of five Deep Network architectures to segment the hippocampus and hippocampal structures in multimodal postmortem MRI brain scans, including a new CNN-based encoder–decoder network with an attention mechanism and a multiresolution module (DeepAIM) (Benet Nirmala et al., 2023). The superior performance of the DeepAIM model can be attributed to the strategic integration of larger kernel sizes within the initial two layers of the encoder, as well as the inclusion of both attention modules and an ASPP layer within the architecture. Attention networks (Oktay et al., 2018) are integral to image segmentation, enhancing model performance by selectively focusing on relevant image regions while suppressing irrelevant ones. By selectively attending to informative features, attention mechanisms enable more accurate segmentation results, improving the model’s ability to distinguish between different objects or regions within an image. Additionally, attention networks aid in capturing contextual information and spatial relationships between image regions, contributing to the model’s understanding of complex scenes and ultimately leading to more efficient and interpretable segmentation models. The ASPP (Ibtehaz and Rahman, 2020) layer is crucial in image segmentation due to its ability to capture multi-scale contextual information effectively. By incorporating atrous (or dilated) convolutions at multiple rates within the ASPP layer, it enables the model to perceive objects at different scales, improving segmentation accuracy. This multi-scale approach is particularly beneficial in handling objects of various sizes within an image, enhancing the model’s ability to segment both small and large objects accurately. Additionally, the ASPP layer helps mitigate the trade-off between receptive field size and computational cost, making it computationally efficient while maintaining segmentation performance.
Its inclusion in segmentation networks facilitates the extraction of rich spatial information across different scales, contributing significantly to the overall segmentation quality. Overall, the ASPP layer plays a pivotal role in enhancing the contextual understanding of images, leading to more precise and detailed segmentation results. The ablation study of the DeepAIM model is provided in Benet Nirmala et al. (2023). In our study, the placement of the ASPP module at the bottleneck layer follows insights from prior literature, including Chen et al. (2018b) and Huang et al. (2019). The ASPP module is specifically designed to extract multi-scale features from high-dimensional representations, making it particularly effective at the deepest layer of the encoder, where the feature maps have the richest contextual information. In Chen et al. (2018b), the spatial pyramid pooling technique was used to capture multi-scale features from these high-dimensional features, leveraging kernels of varying scales. Similarly, Huang et al. (2019) proposed kernel-sharing atrous convolution to enhance feature extraction across multiple scales. Both studies highlighted the effectiveness of such modules when placed in the bottleneck layer. In our experiments, we observed that placing the ASPP module at the bottleneck – using dilation rates of 6 and 12 – enabled the network to capture multi-scale contextual features effectively. Furthermore, the proposed multi-scale module processes features selectively from specific dilation rates with additional convolutional filters, which we found to improve the model’s ability to discern features relevant for segmentation tasks.
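The mechanism behind these dilation rates can be illustrated with a minimal NumPy sketch (hypothetical helpers, not the DeepAIM implementation): an atrous kernel of size k with dilation rate r covers an effective receptive field of k + (k − 1)(r − 1) without adding parameters, which is why combining a few rates, as the ASPP module does, samples context at several scales cheaply.

```python
import numpy as np

def dilate_kernel(kernel, rate):
    """Insert rate-1 zeros between the taps of a 1-D kernel,
    producing the equivalent atrous (dilated) kernel."""
    k = len(kernel)
    out = np.zeros(k + (k - 1) * (rate - 1))
    out[::rate] = kernel
    return out

def effective_receptive_field(k, rate):
    """Effective receptive field of a k-tap kernel at a given dilation rate."""
    return k + (k - 1) * (rate - 1)
```

With the rates 6 and 12 used at the bottleneck, a 3-tap kernel covers 13 and 25 positions respectively, while still holding only 3 trainable weights.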

Surprisingly, A_UNet produced worse parcellation results than the original UNet architecture. These disappointing results could originate from A_UNet having the smallest number of trainable parameters among the tested architectures, which may have caused underfitting. The superior performance observed for DeepAIM is in line with our previous work (Benet Nirmala et al., 2023), where the positive effects of the three improvements introduced in DeepAIM were established by a rigorous ablation study: the modified kernel structure in the initial layers, which enhances feature extraction; the incorporation of attention modules, which selectively focus on salient features; and the addition of the ASPP module, which captures multi-scale contextual information effectively.

Based on the observation that SWI scans provide sharper contrast, the present study expanded on our previous work (Benet Nirmala et al., 2023), which relied on T1w and T2w scans, by including SWI to allow for the segmentation of smaller brain regions. We hope that our work will contribute to a broader exploration of SWI protocols in postmortem studies. The incorporation of SWI evidently contributed to improving the hippocampal segmentation and the delineation of its structures in postmortem MRI data, as indicated by the comparative analysis presented in Table 3 and the supplementary material. This underscores the utility and efficacy of SWI as a complementary imaging modality for improving the accuracy and precision of hippocampal segmentation procedures.

Early identification of dementia is critical to initiating therapies such as monoclonal antibodies (Budd Haeberlein et al., 2022; Van Dyck et al., 2023; Mintun et al., 2021) that may preserve a patient’s quality of life and alter disease progression. However, the disease course is often slow and unnoticed initially (Welsh-Bohmer, 2008), and repeated thorough screening to detect the earliest stages of dementia can be costly and invasive (Panegyres et al., 2016). The volume of the hippocampus and its subfields are a crucial biomarker for neurodegenerative diseases (de Jager et al., 2010; Den Haan et al., 2018), healthy aging (Adler et al., 2018), and brain development (Insausti, 2010). By providing a new approach to delineate the substructures of this crucial brain region (Fritch et al., 2021; Auguste et al., 2023), we hope that future postmortem neuroimaging studies will better describe the impact of brain diseases on memory.

An in-depth comprehension of the distinct patterns of atrophy or pathology within various hippocampal structures holds significant implications for elucidating the underlying mechanisms of diverse neurological disorders (Dekeyzer et al., 2017). This understanding plays a pivotal role in distinguishing between different forms of dementia, offering invaluable insights into disease progression and subtype characterization. The hippocampal head is closely associated with episodic memory formation and retrieval, making it particularly relevant in Alzheimer’s disease subtyping, where early and prominent atrophy in this region is often observed (Frisoni et al., 2010). Conversely, alterations in the hippocampal body, including changes in volume and connectivity, have been linked to vascular dementia and hippocampal sclerosis associated with epilepsy (La Joie et al., 2013; Kerchner et al., 2012). The hippocampal tail, implicated in spatial memory and navigation, may exhibit pathology in diseases like Lewy body dementia, where alpha-synuclein aggregates are frequently found in this region (Toledo et al., 2013). Furthermore, the dentate gyrus, critical for pattern separation and neurogenesis, shows dysfunction in conditions such as frontotemporal dementia, characterized by granule cell dispersion and alterations in neurogenesis (Ferrari et al., 2011). By discerning the nuanced alterations in hippocampal subregions, researchers and clinicians can refine diagnostic approaches and therapeutic strategies, thereby fostering more precise and tailored interventions (Márquez and Yassa, 2019). Moreover, such insights facilitate the development of novel biomarkers and imaging techniques aimed at early detection and prognostication, ultimately enhancing patient care and management. Hence, the meticulous examination of hippocampal structures’ pathology represents a fundamental endeavor in advancing our understanding of neurological diseases and their heterogeneous manifestations.

A comprehensive exploration into the function and status of hippocampal structures in neurodegenerative conditions holds promise for the development of precision-targeted therapeutic interventions (Myszczynska et al., 2020; Du et al., 2024; Roalf et al., 2024). Such endeavors offer a pathway towards the creation of treatments specifically designed to safeguard or rejuvenate these critical brain regions. By unraveling the intricate involvement of hippocampal structures in disease pathogenesis, researchers can identify novel therapeutic targets and strategies aimed at preserving neuronal integrity and functionality. This nuanced understanding not only facilitates the refinement of existing treatment modalities but also fuels the discovery of innovative pharmacological and non-pharmacological interventions tailored to address the specific needs of affected individuals. Consequently, the pursuit of deeper insights into hippocampal structure dynamics emerges as a pivotal endeavor in the quest for more effective and personalized therapies, offering hope for improved outcomes in the management of neurodegenerative diseases (Kobeissy et al., 2023; Lu et al., 2024).

Also, postmortem scans present a unique opportunity to validate observations derived from in vivo imaging methodologies through direct comparison with histological analyses, thereby ensuring a robust alignment between imaging modalities and the underlying biological substrates (Gabrielson et al., 2018; Dyrby et al., 2018; Nolte et al., 2022). This convergence of postmortem imaging and histopathological examination serves as a critical validation step, allowing researchers to verify the fidelity and accuracy of imaging techniques in depicting anatomical structures and pathological changes at a cellular level. By corroborating in vivo findings with postmortem histology, researchers can enhance the reliability and interpretability of imaging data, enabling more precise and meaningful insights into the underlying biological processes of interest. This integration of imaging and histological approaches not only strengthens the scientific rigor of neuroimaging studies but also facilitates the refinement and validation of imaging biomarkers for various neurological conditions (Canazza et al., 2014; Saceleanu et al., 2023). As such, the utilization of postmortem scans in conjunction with histological analyses represents an indispensable strategy for advancing our understanding of brain pathology and optimizing the clinical utility of imaging technologies (Kolasinski et al., 2012; Jonkman et al., 2019a,b).

Overall, our study confirms that UNet architectures can leverage multimodal MRI data to generate very accurate parcellations of brain substructures, even in restricted datasets. We note that the sample size of the present study was restricted by the difficulty of acquiring and manually annotating postmortem scans. According to our cross-validated results, the 2D segmentation approach adopted during our experiments prevented overfitting. However, more efforts must be dedicated to achieving larger sample sizes. The long computational time required to train all the networks is another common drawback in the Deep Learning field. A combination of fast, inaccurate registrations and small networks that are faster to train might fix these issues in the future (Wisse et al., 2016). Several emerging learning-based image segmentation techniques remain unexplored in the context of hippocampus and hippocampal structure segmentation. These innovative methodologies hold promise for advancing segmentation accuracy and efficiency in this critical area of neuroimaging research. Despite ongoing progress in segmentation methodologies, there exists an opportunity to leverage state-of-the-art machine learning algorithms, such as deep learning architectures and adversarial networks (Chen et al., 2019; Feyjie et al., 2020; Ayman, 2023; Anoop et al., 2024; Balasundaram et al., 2023; Ali et al., 2024; Tomar et al., 2022; Alshehri and Muhammad, 2023), to further refine segmentation outcomes. The application of these techniques has the potential to address challenges related to anatomical variability and image noise, thereby enhancing the precision and reliability of hippocampal segmentation. By embracing novel learning-based approaches, researchers can unlock new avenues for comprehensive characterization of hippocampal morphology and pathology, ultimately fostering deeper insights into neurological disorders and brain function.
Thus, exploring these cutting-edge segmentation techniques represents a promising direction for advancing our understanding of hippocampal anatomy and its implications in health and disease.

5. Conclusion

Postmortem MRI is a very valuable neuroimaging approach to study the impact of neurodegenerative diseases on the inner structures of the hippocampus. Unfortunately, the annotation of these brain regions is time-consuming for neurologists. In this study, we propose to tackle the issue by training Deep Networks to combine the information extracted from several MRI modalities to parcellate the hippocampus. We evaluate the performance of five variants of UNet architectures designed for the task, including a new method relying on attention gates and an atrous spatial pyramidal pooling module. We demonstrate that combining T1-weighted, T2-weighted and susceptibility-weighted MRI yields better segmentations, and that the new model performs better than standard UNets for the task. Future work will focus on extending our approach to detect finer hippocampal structures, process larger sets of postmortem scans, and segment 3D volumes instead of sagittal slices.

Appendix A. Supplementary data

Supplementary material related to this article can be found online at https://doi.org/10.1016/j.jneumeth.2024.110359.

Funding information

This study was supported in part by the National Institutes of Health (NIH) grant P30AG066546 (South Texas Alzheimer’s Disease Research Center) and grant numbers 1U24AG074855, 5R01AG080821, 1R01AG085571, and 5R01AG083865, and by the William and Ella Owens Medical Research Foundation.

Footnotes

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

CRediT authorship contribution statement

Anoop B.N.: Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Resources, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Karl Li: Writing – review & editing, Data curation. Nicolas Honnorat: Writing – review & editing, Supervision, Project administration, Formal analysis. Tanweer Rashid: Supervision, Data curation. Di Wang: Resources, Data curation. Jinqi Li: Data curation. Elyas Fadaee: Data curation. Sokratis Charisis: Writing – review & editing. Jamie M. Walker: Writing – review & editing. Timothy E. Richardson: Writing – review & editing. David A. Wolk: Writing – review & editing. Peter T. Fox: Writing – review & editing. José E. Cavazos: Writing – review & editing. Sudha Seshadri: Writing – review & editing. Laura E.M. Wisse: Writing – review & editing, Visualization, Supervision, Investigation, Data curation, Conceptualization. Mohamad Habes: Writing – review & editing, Validation, Supervision, Project administration, Conceptualization.

Data availability

The code implemented during this study is available at https://github.com/UTHSCSA-NAL/PMBM_Hippocampus_Subfield.

References

  1. Abdallah CG, Salas R, Jackowski A, Baldwin P, Sato JR, Mathew SJ, 2015. Hippocampal volume and the rapid antidepressant effect of ketamine. J. Psychopharmacol. 29 (5), 591–595. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Adler DH, Wisse LE, Ittyerah R, Pluta JB, Ding S-L, Xie L, Wang J, Kadivar S, Robinson JL, Schuck T, et al. , 2018. Characterizing the human hippocampus in aging and Alzheimer’s disease using a computational atlas derived from ex vivo MRI and histology. Proc. Natl. Acad. Sci. 115 (16), 4252–4257. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Ali A, Wang Y, Shi X, 2024. Segmentation and identification of brain tumour in MRI images using PG-OneShot learning CNN model. Multimedia Tools Appl. 1–22. [Google Scholar]
  4. Alshehri F, Muhammad G, 2023. A few-shot learning-based ischemic stroke segmentation system using weighted MRI fusion. Image Vis. Comput. 140, 104865. [Google Scholar]
  5. Anoop B, Chander B, Guravaiah K, Kumaravelan G, 2024. Handbook of AI-Based Models in Healthcare and Medicine: Approaches, Theories, and Applications. Taylor & Francis Limited. [Google Scholar]
  6. Anoop B, Parida S, Ajith B, Girish G, Kothari AR, Kavitha MS, Rajan J, 2021. Attention assisted patch-wise CNN for the segmentation of fluids from the retinal optical coherence tomography images. In: International Conference on Pattern Recognition and Machine Intelligence. Springer, pp. 213–223. [Google Scholar]
  7. Anoop B, Pavan R, Girish G, Kothari AR, Rajan J, 2020. Stack generalized deep ensemble learning for retinal layer segmentation in optical coherence tomography images. Biocybern. Biomed. Eng. 40 (4), 1343–1358. [Google Scholar]
  8. Auguste A, Fourcaud-Trocmé N, Meunier D, Gros A, Garcia S, Messaoudi B, Thevenet M, Ravel N, Veyrac A, 2023. Distinct brain networks for remote episodic memory depending on content and emotional experience. Prog. Neurobiol. 223, 102422. [DOI] [PubMed] [Google Scholar]
  9. Avants B, Epstein C, Grossman M, Gee J, 2008. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 12 (1), 26–41. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Avants B, Tustison N, Wu J, Cook P, Gee J, 2011. An open source multivariate framework for n-tissue segmentation with evaluation on public data. Neuroinformatics 9 (4), 381–400. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Ayman A, 2023. Prototype-based approach for one-shot segmentation of brain tumors using few-shot learning. arXiv preprint arXiv:2401.00016. [Google Scholar]
  12. Balasundaram A, Kavitha MS, Pratheepan Y, Akshat D, Kaushik MV, 2023. A foreground prototype-based one-shot segmentation of brain tumors. Diagnostics 13 (7), 1282. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Bender AR, Keresztes A, Bodammer NC, Shing YL, Werkle-Bergner M, Daugherty AM, Yu Q, Kühn S, Lindenberger U, Raz N, 2018. Optimization and validation of automated hippocampal subfield segmentation across the lifespan. Human Brain Mapp. 39 (2), 916–931. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Benet Nirmala A, Rashid T, Fadaee E, Honnorat N, Li K, Charisis S, Wang D, Vemula A, Li J, Fox P, et al. , 2023. Deep attention assisted multi-resolution networks for the segmentation of white matter hyperintensities in postmortem MRI scans. In: International Workshop on Machine Learning in Clinical Neuroimaging. Springer, pp. 143–152. [Google Scholar]
  15. Braak H, Braak E, 1991. Neuropathological stageing of Alzheimer-related changes. Acta Neuropathol. 82 (4), 239–259. [DOI] [PubMed] [Google Scholar]
  16. Budd Haeberlein S, Aisen P, Barkhof F, Chalkias S, Chen T, Cohen S, Dent G, Hansson O, Harrison K, Von Hehn C, et al. , 2022. Two randomized phase 3 studies of aducanumab in early Alzheimer’s disease. J. Prev. Alzheimer’s Dis. 9 (2), 197–210. [DOI] [PubMed] [Google Scholar]
  17. Caldairou B, Bernhardt BC, Kulaga-Yoskovitz J, Kim H, Bernasconi N, Bernasconi A, 2016. A surface patch-based segmentation method for Hippocampal subfields. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016. pp. 379–387. [DOI] [PubMed] [Google Scholar]
  18. Canazza A, Minati L, Boffano C, Parati E, Binks S, 2014. Experimental models of brain ischemia: a review of techniques, magnetic resonance imaging, and investigational cell-based therapies. Front. Neurol. 5, 63894. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Carré A, Klausner G, Edjlali M, Lerousseau M, Briend-Diop J, Sun R, Ammari S, Reuzé S, Alvarez Andres E, Estienne T, et al. , 2020. Standardization of brain MR images across machines and protocols: bridging the gap for MRI-based radiomics. Sci. Rep. 10 (1), 1–15. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Chen L, Bentley P, Mori K, Misawa K, Fujiwara M, Rueckert D, 2018a. DRINet for medical image segmentation. IEEE Trans. Med. Imaging 37 (11), 2453–2462. [DOI] [PubMed] [Google Scholar]
  21. Chen X, Lian C, Wang L, Deng H, Fung SH, Nie D, Thung K-H, Yap P-T, Gateno J, Xia JJ, et al. , 2019. One-shot generative adversarial learning for MRI segmentation of craniomaxillofacial bony structures. IEEE Trans. Med. Imaging 39 (3), 787–796. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Chen X, Yao L, Zhang Y, 2020. Residual attention u-net for automated multi-class segmentation of covid-19 chest ct images. arXiv preprint arXiv:2004.05645. [Google Scholar]
  23. Chen L-C, Zhu Y, Papandreou G, Schroff F, Adam H, 2018b. Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European Conference on Computer Vision. ECCV, pp. 801–818. [Google Scholar]
  24. Cohen I, Huang Y, Chen J, Benesty J, Benesty J, Chen J, Huang Y, Cohen I, 2009. Pearson correlation coefficient. Noise Reduct. Speech Process. 1–4. [Google Scholar]
  25. Cox R, Rüber T, Staresina BP, Fell J, 2019. Heterogeneous profiles of coupled sleep oscillations in human hippocampus. Neuroimage 202, 116178. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Dale AM, Fischl B, Sereno MI, 1999. Cortical surface-based analysis: I. Segmentation and surface reconstruction. Neuroimage 9 (2), 179–194. [DOI] [PubMed] [Google Scholar]
  27. Dawe RJ, Bennett DA, Schneider JA, Arfanakis K, 2011. Neuropathologic correlates of hippocampal atrophy in the elderly: a clinical, pathologic, postmortem MRI study. PLoS One 6 (10), e26286. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. de Flores R, Berron D, Ding S-L, Ittyerah R, Pluta JB, Xie L, Adler DH, Robinson JL, Schuck T, Trojanowski JQ, et al. , 2020. Characterization of hippocampal subfields using ex vivo MRI and histology data: Lessons for in vivo segmentation. Hippocampus 30 (6), 545–564. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. De Flores R, La Joie R, Chételat G, 2015. Structural imaging of hippocampal subfields in healthy aging and Alzheimer’s disease. Neuroscience 309, 29–50. [DOI] [PubMed] [Google Scholar]
  30. de Jager CA, Honey TE, Birks J, Wilcock GK, 2010. Retrospective evaluation of revised criteria for the diagnosis of Alzheimer’s disease using a cohort with post-mortem diagnosis. Int. J. Geriatr. Psychiatry 25 (10), 988–997. [DOI] [PubMed] [Google Scholar]
  31. Dekeyzer S, De Kock I, Nikoubashman O, Vanden Bossche S, Van Eetvelde R, De Groote J, Acou M, Wiesmann M, Deblaere K, Achten E, 2017. “Unforgettable”–a pictorial essay on anatomy and pathology of the hippocampus. Insights Imaging 8, 199–212. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Den Haan J, Morrema TH, Rozemuller AJ, Bouwman FH, Hoozemans JJ, 2018. Different curcumin forms selectively bind fibrillar amyloid beta in post mortem Alzheimer’s disease brains: Implications for in-vivo diagnostics. Acta Neuropathol. Commun. 6, 1–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Dice LR, 1945. Measures of the amount of ecologic association between species. Ecology 26 (3), 297–302. [Google Scholar]
  34. Du L, Roy S, Wang P, Li Z, Qiu X, Zhang Y, Yuan J, Guo B, 2024. Unveiling the future: Advancements in MRI imaging for neurodegenerative disorders. Ageing Res. Rev. 102230. [DOI] [PubMed] [Google Scholar]
  35. Dyrby TB, Innocenti GM, Bech M, Lundell H, 2018. Validation strategies for the interpretation of microstructure imaging using diffusion MRI. Neuroimage 182, 62–79. [DOI] [PubMed] [Google Scholar]
  36. Elder GJ, Mactier K, Colloby SJ, Watson R, Blamire AM, O’Brien JT, Taylor J-P, 2017. The influence of hippocampal atrophy on the cognitive phenotype of dementia with lewy bodies. Int. J. Geriatr. Psychiatry 32 (11), 1182–1189. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Ferrari R, Kapogiannis D, D Huey E, Momeni P, 2011. FTD and ALS: a tale of two diseases. Curr. Alzheimer Res. 8 (3), 273–294. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Feyjie AR, Azad R, Pedersoli M, Kauffman C, Ayed IB, Dolz J, 2020. Semi-supervised few-shot learning for medical image segmentation. arXiv preprint arXiv:2003.08462. [Google Scholar]
  39. Fischl B, Dale AM, 2000. Measuring the thickness of the human cerebral cortex from magnetic resonance images. Proc. Natl. Acad. Sci. 97 (20), 11050–11055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Frisoni GB, Fox NC, Jack CR Jr., Scheltens P, Thompson PM, 2010. The clinical use of structural MRI in alzheimer disease. Nat. Rev. Neurol. 6 (2), 67–77. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Frisoni G, Laakso M, Beltramello A, Geroldi C, Bianchetti A, Soininen H, Trabucchi M, 1999. Hippocampal and entorhinal cortex atrophy in frontotemporal dementia and Alzheimer’s disease. Neurology 52 (1), 91. [DOI] [PubMed] [Google Scholar]
  42. Fritch HA, Spets DS, Slotnick SD, 2021. Functional connectivity with the anterior and posterior hippocampus during spatial memory. Hippocampus 31 (7), 658–668. [DOI] [PubMed] [Google Scholar]
  43. Gabrielson K, Maronpot R, Monette S, Mlynarczyk C, Ramot Y, Nyska A, Sysa-Shah P, 2018. In vivo imaging with confirmation by histopathology for increased rigor and reproducibility in translational research: a review of examples, options, and resources. ILAR J. 59 (1), 80–98. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Gerlach AR, Karim HT, Peciña M, Ajilore O, Taylor W, Butters MA, Andreescu C, 2022. MRI predictors of pharmacotherapy response in major depressive disorder. NeuroImage: Clin. 103157. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Habes M, Erus G, Toledo JB, Zhang T, Bryan N, Launer LJ, Rosseel Y, Janowitz D, Doshi J, Van der Auwera S, et al. , 2016. White matter hyperintensities and imaging patterns of brain ageing in the general population. Brain 139 (4), 1164–1179. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Habes M, Sotiras A, Erus G, Toledo JB, Janowitz D, Wolk DA, Shou H, Bryan NR, Doshi J, Völzke H, et al. , 2018. White matter lesions: Spatial heterogeneity, links to risk factors, cognition, genetics, and atrophy. Neurology 91 (10), e964–e975. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Hayes JP, Hayes S, Miller DR, Lafleche G, Logue MW, Verfaellie M, 2017. Automated measurement of hippocampal subfields in PTSD: evidence for smaller dentate gyrus volume. J. Psychiatr. Res. 95, 247–252. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. He K, Zhang X, Ren S, Sun J, 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1026–1034. [Google Scholar]
  49. Hrybouski S, MacGillivray M, Huang Y, Madan CR, Carter R, Seres P, Malykhin NV, 2019. Involvement of hippocampal subfields and anterior-posterior subregions in encoding and retrieval of item, spatial, and associative memories: Longitudinal versus transverse axis. Neuroimage 191, 568–586. [DOI] [PubMed] [Google Scholar]
50. Huang Y, Wang Q, Jia W, He X, 2019. See more than once: Kernel-sharing atrous convolution for semantic segmentation. arXiv preprint arXiv:1908.09443. [Google Scholar]
  51. Huang Y, Wang Q, Jia W, Lu Y, Li Y, He X, 2021. See more than once: Kernel-sharing atrous convolution for semantic segmentation. Neurocomputing 443, 26–34. [Google Scholar]
  52. Ibtehaz N, Rahman MS, 2020. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw. 121, 74–87. [DOI] [PubMed] [Google Scholar]
  53. Insausti R, 2010. Postnatal Development of the Human Hippocampal Formation. Springer Science & Business Media. [PubMed] [Google Scholar]
54. Jha D, Riegler MA, Johansen D, Halvorsen P, Johansen HD, 2020. DoubleU-Net: A deep convolutional neural network for medical image segmentation. In: 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems. CBMS, IEEE, pp. 558–564. [Google Scholar]
55. Jonkman LE, Galis-de Graaf Y, Bulk M, Kaaij E, Pouwels PJ, Barkhof F, Rozemuller AJ, van der Weerd L, Geurts JJ, van de Berg WD, 2019a. Normal aging brain collection Amsterdam (NABCA): A comprehensive collection of postmortem high-field imaging, neuropathological and morphometric datasets of non-neurological controls. NeuroImage: Clin. 22, 101698. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Jonkman LE, Kenkhuis B, Geurts JJ, van de Berg WD, 2019b. Post-mortem MRI and histopathology in neurologic disease: a translational approach. Neurosci. Bull. 35, 229–243. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Kayalibay B, Jensen G, van der Smagt P, 2017. CNN-based segmentation of medical imaging data. arXiv preprint arXiv:1701.03056. [Google Scholar]
  58. Kerchner GA, Deutsch GK, Zeineh M, Dougherty RF, Saranathan M, Rutt BK, 2012. Hippocampal CA1 apical neuropil atrophy and memory performance in Alzheimer’s disease. Neuroimage 63 (1), 194–202. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Khandelwal P, Duong MT, Sadaghiani S, Lim S, Denning A, Chung E, Ravikumar S, Arezoumandan S, Peterson C, Bedard M, et al. , 2023. Automated deep learning segmentation of high-resolution 7 T ex vivo MRI for quantitative analysis of structure-pathology correlations in neurodegenerative diseases. arXiv preprint arXiv:2303.12237. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Kobeissy F, Goli M, Yadikar H, Mechref Y, 2023. Advances in neuroproteomics for neurotrauma: unraveling insights for personalized medicine and future prospects. Front. Neurol. 14, 1288740. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Kolasinski J, Stagg CJ, Chance SA, DeLuca GC, Esiri MM, Chang E-H, Palace JA, McNab JA, Jenkinson M, Miller KL, et al. , 2012. A combined post-mortem magnetic resonance imaging and quantitative histological study of multiple sclerosis pathology. Brain 135 (10), 2938–2951. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Kreilkamp B, Weber B, Elkommos S, Richardson M, Keller S, 2018. Hippocampal subfield segmentation in temporal lobe epilepsy: Relation to outcomes. Acta Neurol. Scand. 137 (6), 598–608. [DOI] [PMC free article] [PubMed] [Google Scholar]
63. Krizhevsky A, Sutskever I, Hinton GE, 2017. ImageNet classification with deep convolutional neural networks. Commun. ACM 60 (6), 84–90. [Google Scholar]
  64. Kulaga-Yoskovitz J, Bernhardt BC, Hong S-J, Mansi T, Liang KE, Van Der Kouwe AJ, Smallwood J, Bernasconi A, Bernasconi N, 2015. Multi-contrast submillimetric 3 Tesla hippocampal subfield segmentation protocol and dataset. Sci. Data 2 (1), 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. La Joie R, Perrotin A, de La Sayette V, Egret S, Doeuvre L, Belliard S, Eustache F, Desgranges B, Chételat G, 2013. Hippocampal subfield volumetry in mild cognitive impairment, Alzheimer’s disease and semantic dementia. NeuroImage: Clin. 3, 155–162. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. LeCun Y, Bengio Y, Hinton G, 2015. Deep learning. Nature 521 (7553), 436–444. [DOI] [PubMed] [Google Scholar]
  67. Leutgeb JK, Leutgeb S, Moser M-B, Moser EI, 2007. Pattern separation in the dentate gyrus and CA3 of the hippocampus. Science 315 (5814), 961–966. [DOI] [PubMed] [Google Scholar]
68. Li K, Rashid T, Li J, Honnorat N, Nirmala AB, Fadaee E, Wang D, Charisis S, Liu H, Franklin C, et al. , 2023. Postmortem brain imaging in Alzheimer’s disease and related dementias: The South Texas Alzheimer’s Disease Research Center repository. J. Alzheimer’s Dis. (Preprint), 1–17. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Lim HK, Hong SC, Jung WS, Ahn KJ, Won WY, Hahn C, Kim I, Lee CU, 2012. Automated hippocampal subfields segmentation in late life depression. J. Affect. Disord. 143 (1–3), 253–256. [DOI] [PubMed] [Google Scholar]
  70. Lim H, Hong S, Jung W, Ahn K, Won W, Hahn C, Kim I, Lee C, 2013. Automated segmentation of hippocampal subfields in drug-naïve patients with Alzheimer disease. Am. J. Neuroradiol. 34 (4), 747–751. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Long J, Shelhamer E, Darrell T, 2015. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3431–3440. [DOI] [PubMed] [Google Scholar]
72. Lu B, Chen X, Castellanos FX, Thompson PM, Zuo X-N, Zang Y-F, Yan C-G, 2024. The power of many brains: Catalyzing neuropsychiatric discovery through open neuroimaging data and large-scale collaboration. Sci. Bull. [DOI] [PubMed] [Google Scholar]
  73. Ma B, Zhao Y, Yang Y, Zhang X, Dong X, Zeng D, Ma S, Li S, 2020. MRI image synthesis with dual discriminator adversarial learning and difficulty-aware attention mechanism for hippocampal subfields segmentation. Comput. Med. Imaging Graph. 86, 101800. [DOI] [PubMed] [Google Scholar]
  74. Manjón JV, Romero JE, Coupe P, 2022. A novel deep learning based hippocampus subfield segmentation method. Sci. Rep. 12 (1), 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Márquez F, Yassa MA, 2019. Neuroimaging biomarkers for Alzheimer’s disease. Mol. Neurodegener. 14 (1), 21. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Mintun MA, Lo AC, Duggan Evans C, Wessels AM, Ardayfio PA, Andersen SW, Shcherbinin S, Sparks J, Sims JR, Brys M, et al. , 2021. Donanemab in early Alzheimer’s disease. N. Engl. J. Med. 384 (18), 1691–1704. [DOI] [PubMed] [Google Scholar]
  77. Moreno-Jiménez EP, Flor-García M, Terreros-Roncal J, Rábano A, Cafini F, Pallas-Bazarra N, Ávila J, Llorens-Martín M, 2019. Adult hippocampal neurogenesis is abundant in neurologically healthy subjects and drops sharply in patients with Alzheimer’s disease. Nat. Med. 25 (4), 554–560. [DOI] [PubMed] [Google Scholar]
  78. Myszczynska MA, Ojamies PN, Lacoste AM, Neil D, Saffari A, Mead R, Hautbergue GM, Holbrook JD, Ferraiuolo L, 2020. Applications of machine learning to diagnosis and treatment of neurodegenerative diseases. Nat. Rev. Neurol. 16 (8), 440–456. [DOI] [PubMed] [Google Scholar]
  79. Nakahara S, Turner JA, Calhoun VD, Lim KO, Mueller B, Bustillo JR, O’Leary DS, McEwen S, Voyvodic J, Belger A, et al. , 2020. Dentate gyrus volume deficit in schizophrenia. Psychol. Med. 50 (8), 1267–1277. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Neylan TC, Mueller SG, Wang Z, Metzler TJ, Lenoci M, Truran D, Marmar CR, Weiner MW, Schuff N, 2010. Insomnia severity is associated with a decreased volume of the CA3/dentate gyrus hippocampal subfield. Biol. Psychiatry 68 (5), 494–496. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Niyas S, Pawan S, Kumar MA, Rajan J, 2022. Medical image segmentation with 3D convolutional neural networks: A survey. Neurocomputing 493, 397–413. [Google Scholar]
  82. Nolte P, Dullin C, Svetlove A, Brettmacher M, Rußmann C, Schilling AF, Alves F, Stock B, et al. , 2022. Current approaches for image fusion of histological data with computed tomography and magnetic resonance imaging. Radiol. Res. Pract. 2022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Nuninga JO, Mandl RC, Boks MP, Bakker S, Somers M, Heringa SM, Nieuwdorp W, Hoogduin H, Kahn RS, Luijten P, et al. , 2020. Volume increase in the dentate gyrus after electroconvulsive therapy in depressed patients as measured with 7T. Mol. Psychiatry 25 (7), 1559–1568. [DOI] [PubMed] [Google Scholar]
84. Oktay O, Schlemper J, Folgoc LL, Lee M, Heinrich M, Misawa K, Mori K, McDonagh S, Hammerla NY, Kainz B, et al. , 2018. Attention U-Net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999. [Google Scholar]
  85. Olsen RK, Carr VA, Daugherty AM, La Joie R, Amaral RS, Amunts K, Augustinack JC, Bakker A, Bender AR, Berron D, et al. , 2019. Progress update from the hippocampal subfields group. Alzheimer’s & Dement.: Diagn. Assess. Dis. Monit. 11 (1), 439–449. [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Panegyres PK, Berry R, Burchell J, 2016. Early dementia screening. Diagnostics 6 (1), 6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Parker TD, Cash DM, Lane CA, Lu K, Malone IB, Nicholas JM, James S-N, Keshavan A, Murray-Smith H, Wong A, et al. , 2019. Hippocampal subfield volumes and pre-clinical Alzheimer’s disease in 408 cognitively normal adults born in 1946. PLoS One 14 (10), e0224030. [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Pawan S, Sankar R, Jain A, Jain M, Darshan D, Anoop B, Kothari AR, Venkatesan M, Rajan J, 2021. Capsule Network–based architectures for the segmentation of sub-retinal serous fluid in optical coherence tomography images of central serous chorioretinopathy. Med. Biol. Eng. Comput. 59 (6), 1245–1259. [DOI] [PubMed] [Google Scholar]
  89. Pfefferbaum A, Sullivan EV, Adalsteinsson E, Garrick T, Harper C, 2004. Postmortem MR imaging of formalin-fixed human brain. Neuroimage 21 (4), 1585–1595. [DOI] [PubMed] [Google Scholar]
  90. Piatti VC, Ewell LA, Leutgeb JK, 2013. Neurogenesis in the dentate gyrus: carrying the message or dictating the tone. Front. Neurosci. 7, 50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Powers DM, 2020. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv preprint arXiv:2010.16061. [Google Scholar]
  92. Rahil M, Anoop B, Girish G, Kothari AR, Koolagudi SG, Rajan J, 2023. A deep ensemble learning-based CNN architecture for multiclass retinal fluid segmentation in oct images. IEEE Access 11, 17241–17251. [Google Scholar]
  93. Rashid T, Abdulkadir A, Nasrallah IM, Ware JB, Liu H, Spincemaille P, Romero JR, Bryan RN, Heckbert SR, Habes M, 2021. DEEPMIR: a deep neural network for differential detection of cerebral microbleeds and iron deposits in MRI. Sci. Rep. 11 (1), 1–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  94. Ravikumar S, Wisse LE, Lim S, Ittyerah R, Xie L, Bedard ML, Das SR, Lee EB, Tisdall MD, Prabhakaran K, et al. , 2021. Ex vivo MRI atlas of the human medial temporal lobe: characterizing neurodegeneration due to tau pathology. Acta Neuropathologica Communications 9 (1), 1–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Roalf DR, Figee M, Oathes DJ, 2024. Elevating the field for applying neuroimaging to individual patients in psychiatry. Transl. Psychiatry 14 (1), 87. [DOI] [PMC free article] [PubMed] [Google Scholar]
96. Ronneberger O, Fischer P, Brox T, 2015. U-Net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III 18. Springer, pp. 234–241. [Google Scholar]
  97. Saceleanu VM, Toader C, Ples H, Covache-Busuioc R-A, Costin HP, Bratu B-G, Dumitrascu D-I, Bordeianu A, Corlatescu AD, Ciurea AV, 2023. Integrative approaches in acute ischemic stroke: from symptom recognition to future innovations. Biomedicines 11 (10), 2617. [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. Saygin ZM, Kliemann D, Iglesias JE, van der Kouwe AJ, Boyd E, Reuter M, Stevens A, Van Leemput K, McKee A, Frosch MP, et al. , 2017. High-resolution magnetic resonance imaging reveals nuclei of the human amygdala: manual segmentation to automatic atlas. Neuroimage 155, 370–382. [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. Scharfman HE, 2011. The Dentate Gyrus: A Comprehensive Guide to Structure, Function, and Clinical Implications. Elsevier. [Google Scholar]
  100. Sharan TS, Tripathi S, Sharma S, Sharma N, 2022. Encoder modified U-net and feature pyramid network for multi-class segmentation of cardiac magnetic resonance images. IETE Tech. Rev. 39 (5), 1092–1104. [Google Scholar]
  101. Shi Y, Cheng K, Liu Z, 2019. Hippocampal subfields segmentation in brain MR images using generative adversarial networks. Biomed. Eng. Online 18 (1), 1–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Simonyan K, Zisserman A, 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. [Google Scholar]
  103. Taha AA, Hanbury A, 2015. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Med. Imaging 15 (1), 1–28. [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. Tanaka KZ, 2021. Heterogeneous representations in the hippocampus. Neurosci. Res. 165, 1–5. [DOI] [PubMed] [Google Scholar]
  105. Terreros-Roncal J, Moreno-Jiménez E, Flor-García M, Rodríguez-Moreno C, Trinchero MF, Cafini F, Rábano A, Llorens-Martín M, 2021. Impact of neurodegenerative diseases on human adult hippocampal neurogenesis. Science 374 (6571), 1106–1113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  106. Thomas E, Pawan S, Kumar S, Horo A, Niyas S, Vinayagamani S, Kesavadas C, Rajan J, 2020. Multi-res-attention UNet: a CNN model for the segmentation of focal cortical dysplasia lesions from magnetic resonance images. IEEE J. Biomed. Health Inf. 25 (5), 1724–1734. [DOI] [PubMed] [Google Scholar]
  107. Toledo JB, Cairns NJ, Da X, Chen K, Carter D, Fleisher A, Householder E, Ayutyanont N, Roontiva A, Bauer RJ, et al. , 2013. Clinical and multi-modal biomarker correlates of ADNI neuropathological findings. Acta Neuropathol. Commun. 1, 1–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  108. Tomar D, Bozorgtabar B, Lortkipanidze M, Vray G, Rad MS, Thiran J-P, 2022. Self-supervised generative style transfer for one-shot medical image segmentation. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 1998–2008. [Google Scholar]
  109. Travis S, Coupland NJ, Silversone PH, Huang Y, Fujiwara E, Carter R, Seres P, Malykhin NV, 2015. Dentate gyrus volume and memory performance in major depressive disorder. J. Affect. Disord. 172, 159–164. [DOI] [PubMed] [Google Scholar]
  110. Van Dyck CH, Swanson CJ, Aisen P, Bateman RJ, Chen C, Gee M, Kanekiyo M, Li D, Reyderman L, Cohen S, et al. , 2023. Lecanemab in early Alzheimer’s disease. N. Engl. J. Med. 388 (1), 9–21. [DOI] [PubMed] [Google Scholar]
  111. Verhaaren BF, Debette S, Bis JC, Smith JA, Ikram MK, Adams HH, Beecham AH, Rajan KB, Lopez LM, Barral S, et al. , 2015. Multiethnic genome-wide association study of cerebral white matter hyperintensities on MRI. Circ.: Cardiovasc. Genet. 8 (2), 398–409. [DOI] [PMC free article] [PubMed] [Google Scholar]
  112. Welsh-Bohmer KA, 2008. Defining “prodromal” Alzheimer’s disease, frontotemporal dementia, and Lewy body dementia: are we there yet? Neuropsychol. Rev. 18 (1), 70–72. [DOI] [PMC free article] [PubMed] [Google Scholar]
  113. Winner B, Kohl Z, Gage FH, 2011. Neurodegenerative disease and adult neurogenesis. Eur. J. Neurosci. 33 (6), 1139–1151. [DOI] [PubMed] [Google Scholar]
  114. Wisse L, Adler D, Ittyerah R, Pluta J, Robinson J, Schuck T, Trojanowski J, Grossman M, Detre J, Elliott M, et al. , 2017. Comparison of in vivo and ex vivo MRI of the human hippocampal formation in the same subjects. Cerebral Cortex 27 (11), 5185–5196. [DOI] [PMC free article] [PubMed] [Google Scholar]
115. Wisse LEM, Chételat G, Daugherty AM, de Flores R, la Joie R, Mueller SG, Stark CEL, Wang L, Yushkevich PA, Berron D, Raz N, Bakker A, Olsen RK, Carr VA, 2021. Hippocampal subfield volumetry from structural isotropic 1 mm³ MRI scans: A note of caution. Hum. Brain Mapp. 42, 539–550. [DOI] [PMC free article] [PubMed] [Google Scholar]
  116. Wisse LE, Kuijf HJ, Honingh AM, Wang H, Pluta JB, Das SR, Wolk DA, Zwanenburg JJ, Yushkevich PA, Geerlings MI, 2016. Automated hippocampal subfield segmentation at 7T MRI. Am. J. Neuroradiol. 37 (6), 1050–1057. [DOI] [PMC free article] [PubMed] [Google Scholar]
  117. Wong T-T, 2015. Performance evaluation of classification algorithms by k-fold and leave-one-out cross validation. Pattern Recognit. 48 (9), 2839–2846. [Google Scholar]
  118. Yang Z, Zhuang X, Mishra V, Sreenivasan K, Cordes D, 2020. CAST: A multi-scale convolutional neural network based automated hippocampal subfield segmentation toolbox. Neuroimage 218, 116947. [DOI] [PMC free article] [PubMed] [Google Scholar]
  119. Yushkevich PA, Amaral RS, Augustinack JC, Bender AR, Bernstein JD, Boccardi M, Bocchetta M, Burggren AC, Carr VA, Chakravarty MM, et al. , 2015. Quantitative comparison of 21 protocols for labeling hippocampal subfields and parahippocampal subregions in in vivo MRI: towards a harmonized segmentation protocol. Neuroimage 111, 526–541. [DOI] [PMC free article] [PubMed] [Google Scholar]
  120. Zhu H, Shi F, Wang L, Hung S-C, Chen M-H, Wang S, Lin W, Shen D, 2019. Dilated dense U-Net for infant hippocampus subfield segmentation. Front. Neuroinform. 13, 30. [DOI] [PMC free article] [PubMed] [Google Scholar]

Associated Data

Data Availability Statement

The code implemented during this study is available at https://github.com/UTHSCSA-NAL/PMBM_Hippocampus_Subfield.
