Abstract
Accurate labeling of specific layers in the human cerebral cortex is crucial for advancing our understanding of neurodevelopmental and neurodegenerative disorders. Building on recent advancements in ultra-high-resolution ex vivo MRI, we present a novel semi-supervised segmentation model capable of identifying supragranular and infragranular layers in ex vivo MRI with unprecedented precision. On a dataset consisting of 17 whole-hemisphere ex vivo scans at 120 μm, we propose a Multi-resolution U-Nets framework that integrates global and local structural information, achieving reliable segmentation maps of the entire hemisphere, with Dice scores over 0.8 for supra- and infragranular layers. This enables surface modeling, atlas construction, anomaly detection in disease states, and cross-modality validation, while also paving the way for finer layer segmentation. Our approach offers a powerful tool for comprehensive neuroanatomical investigations and holds promise for advancing our mechanistic understanding of the progression of neurodegenerative diseases.
Keywords: ex vivo MRI, cortical layers, high resolution, semi-supervised learning, neurodegenerative diseases
Introduction
The human neocortex is a complex structure organized into a number of distinct layers, characterized by variations in the size and packing density of their constituent neurons. These layers form during cortical development as a result of radial and tangential neuronal migration (Hatanaka et al. 2016). During embryonic development, newly generated neocortical projection neurons migrate along radial glia in successive waves, leading to the formation of cortical layers in an inside-out pattern (Tan and Shi 2013). This means that the deepest layers are populated first, while the most superficial layers are occupied by the last-generated neurons. In addition to their unique organization, cortical layers also exhibit distinct patterns of connectivity (Markov et al. 2014; Vezoli et al. 2021). For example, pyramidal neurons in layers II and III predominantly project to other cortical regions, while those in layer V project mainly to the striatum and brainstem, and those in layer VI project to the thalamus (Gerfen et al. 2018).
When it comes to diseases affecting the human neocortex, specific layers or cell types often show particular pathologies. For instance, in schizophrenia, large pyramidal cells in layer III display reduced cell size (Arion et al. 2010). Deficits in reelin expression are primarily found in the most superficial layers, also known as supragranular layers, among schizophrenia (Negrón-Oyarzo et al. 2016), bipolar disorder (Benes and Berretta 2001), and autism spectrum disorder (Camacho et al. 2014) patients. Another example is the development of Alzheimer’s disease pathology, which includes the neuronal loss within the neocortex, and initially manifests in the superficial cortical layers (II–IV) during its early stages. As the disease progresses, it extends to affect the deeper layers (V–VI) (Romito-DiGiacomo et al. 2007). These examples highlight the importance of correctly annotating specific layers in the human neocortex. This identification is essential for advancing our understanding of these disorders and may provide valuable insights for potential therapeutic approaches.
A natural approach to identifying cortical layers is microscopic examination of tissue histology. While histology offers definitive insights into microscopic tissue morphology, it suffers from limitations such as sampling bias due to a restricted field of view, and therefore has difficulty exploring the interrelationships between different and potentially dysfunctional regions (Yang et al. 2013). Moreover, histology is labor-intensive and invasive: the measured cortical thickness may be decreased by factors like dehydration and increased by factors like the slicing direction (Jonkman et al. 2016; Popescu et al. 2016). In general, any 2D technique for measuring cortical properties suffers from inaccuracies that arise from the effects of through-plane folding. In the context of cortical thickness, any 2D measure will inevitably overestimate it except in locations where the cut is perfectly orthogonal to the cortex.
In contrast, conventional in vivo MRI can provide isotropic whole-brain images rapidly, at relatively good resolution (Lüsebrink et al. 2021; Bollmann et al. 2022; Gulban et al. 2022), and non-invasively, with angioarchitectonic cortical layers contributing to the MRI contrast. However, it lacks the resolution and specificity of histology (Costantini et al. 2023). Studies have employed boundary-based registration methods (Polimeni et al. 2010; Zimmermann et al. 2011) to place laminar surfaces within the space between the white-matter surface and the pial surface. Nevertheless, the anatomical priors of these geometry-based methods (Waehnert et al. 2014; Leprince et al. 2015) may need to be complemented by MRI-contrast-based methods to capture local variations in layer thickness. Unlike in vivo MRI, ex vivo MRI is not affected by motion artifacts and is subject to far less restrictive time constraints (Edlow et al. 2019). Extended scanning time enables increased spatial resolution, which is crucial for visualizing mesoscale neuroanatomy, such as cortical layers and subcortical nuclei, that is challenging to resolve in even the highest-resolution in vivo MRI datasets (Keuken et al. 2018). Ex vivo MRI also circumvents the spatial distortions (tearing or folding) associated with histological methods during brain tissue fixation, embedding, sectioning, and slide-mounting (Sitek et al. 2019). This makes it well-suited for characterizing neuroanatomy at high resolution and providing finer macroscopic morphometric measures, such as cortical thickness, of the underlying cytoarchitecture. Although imaging the intact human brain ex vivo at high magnetic fields is challenging due to the need for specialized hardware (Edlow et al. 2019), recent progress in high-field scanner and coil technology and in imaging protocols (Chan et al. 2022) has enabled full-brain scanning with voxel sizes as small as 100 μm (Edlow et al. 2019; Kim et al. 2021), helping bridge the gap between histology and MRI.
Equipped with these imaging advances (Khandelwal et al. 2024), we now have the means to acquire data sets consisting of high-resolution, whole-hemisphere scans from multiple post-mortem subjects. Previous high-resolution data sets, such as BigBrain (Amunts et al. 2013) and the Allen Brain Atlas (Ding et al. 2016), include only a single human brain with whole-brain MRI and histology, which prevents us from reliably quantifying the inter-subject variability of human neuroanatomy. However, manually labeling large multi-subject data sets is not feasible in practice and existing automated tools for segmenting the supra- and infragranular layers require a large amount of manually prepared training data (Wagstyl et al. 2020). In general, automated segmentation of ex vivo data is hindered by limited training data, and the few existing data sets that include multiple subjects only cover specific sub-structures (Saygin et al. 2017; Iglesias et al. 2018).
Convolutional Neural Networks (CNNs) are becoming increasingly popular in medical image analysis (Chen et al. 2022). Even when training data are available, processing large 3D volumes with CNNs is challenging due to limitations in Graphics Processing Unit memory. Downsampling the volumes to reduce the memory load inevitably leads to a loss of fine structural detail, resulting in decreased segmentation accuracy. Similarly, using subvolume patches can reduce accuracy due to the lack of global context information. To address these issues, researchers have proposed 2.5D segmentation approaches that operate on orthogonal planes and subsequently merge their information (Zhang et al. 2022); CNNs with separate high- and low-resolution paths (Kamnitsas et al. 2017); and lightweight models using dilated convolutions (Perone et al. 2018) or group normalization (Brügger et al. 2019). Isensee et al. (2021) developed the nnU-Net ("no-new-U-Net") framework as a robust and self-adjusting extension of the U-Net. This framework involves minor modifications to both the 2D and 3D U-Net designs, integrating 2D and 3D networks collaboratively into a network pool. Nevertheless, no existing method with end-to-end training can effectively combine the important 3D global context information with the local high-resolution details needed for accurate labeling of the supra- and infragranular layers.
In this paper, we present a dataset (Costantini et al. 2023) consisting of 17 whole-hemisphere ex vivo scans at 120 μm with partial manual annotations and propose a semi-supervised model, Multi-resolution U-Nets Semi-supervised (MUS), that requires a minimal amount of annotated training data to segment supra- and infragranular layers in ultra-high-resolution ex vivo brain MRI. A variant of the U-Net (Ronneberger et al. 2015), the multi-resolution U-Nets architecture is designed to incorporate both global and local structural information for accurate high-resolution segmentation. With this segmentation model, we obtained, for the first time, reliable segmentation maps (Dice score > 0.8) of supra- and infragranular layers over the whole hemisphere. The combination of the unique dataset and novel automated segmentation approach paves the way for an in-depth examination of cortical layer organization and will allow us to (1) place surface models and build atlases; (2) infer laminar anomalies between disease stages and healthy controls using the atlases; (3) benchmark and validate cortical layer segmentation results in other imaging modalities; and (4) progress to finer segmentation of more cortical layers in the future. The dataset and segmentations can be downloaded from the DANDI data archive (https://www.dandiarchive.org/dandiset/000026), and the method will be available in the FreeSurfer (https://surfer.nmr.mgh.harvard.edu) software suite under the program name mri_segment_layers.
Materials and methods
Datasets
MRI scans of 17 whole hemispheres (see demographics in Table 1) were acquired on a Siemens 7 Tesla scanner using a custom-built 32-channel receive array, as detailed in (Edlow et al. 2019). The scans were acquired at 120 μm isotropic resolution (Fig. 1), which allows reliable visual identification of the supra-/infragranular boundary throughout the neocortex. This was achieved by using a multi-echo spoiled gradient echo (ME-GRE) sequence to acquire a series of images at different flip angles (Fischl et al. 2004). The k-space acquisition was segmented to fit data from a single segment into scanner memory, then streamed to a dedicated computer for offline reconstruction. Adjacent k-space segments were modified to contain a small number of overlapping lines, enabling us to correct for phase discontinuities due to field drift during the extremely long scans (14 h per volume). Ex vivo MRI of formalin-fixed human brain tissue derives the majority of its contrast from T2* differences between tissue types. Quantitative parameter mapping (Fischl et al. 2004) has been used to confirm that minimal T1-weighted contrast remains between gray and white matter. The scans are also sensitive to various distortions and intensity inhomogeneities due to variations in the B0 and B1-/+ fields, which we mapped and corrected following the procedures in (Costantini et al. 2023). Briefly, the alternating reversing-polarity readouts of the multi-echo fast low angle shot (MEF) scans provide a mechanism for mapping and correcting B0 field inhomogeneities at the intrinsic resolution of the scans when combined with a low-resolution field map (Varadarajan et al. 2020, 2021). For transmit (B1+) inhomogeneities, we acquired several low-resolution scans with varying transmit voltage to map the flip-angle field, then used these maps and the B0-corrected MEF scans as inputs to the steady-state MEF equations (Fischl et al. 2004), yielding a set of synthesized scans of higher SNR than the individual input scans. Finally, we used the SAMSEG algorithm in FreeSurfer (Puonti et al. 2016) to correct for receive (B1-) inhomogeneities.
Table 1.
Demographic information of donor cohorts in the ex vivo MRI dataset. Interval denotes post-mortem interval in hours.
| Donor cohort | 17 (Right: 6, Left: 10, whole-brain: 1) | |
|---|---|---|
| Sex | Male: 11 | Female: 6 |
| Age | 64.5 ± 11.2 | 66.3 ± 6.7 |
| Interval | 19.4 ± 5.9 | 15.8 ± 8.6 |
Fig. 1.

(A) Sagittal (left), coronal (center), and axial (right) slices of cases 11, 12, and 17. Bias correction was applied to all scans. Comprehensive visualization of all cases can be found in the Supplementary Material. (B) Zoomed views of selected regions in the whole-brain case 17.
A relevant question is what the visible contrast boundary in the cortical gray matter represents (Fig. 2B). Existing studies have associated visible contrast in brain MRI with local laminar architecture (Fukunaga et al. 2010; Zwanenburg et al. 2012; McColgan et al. 2021). Previously, we performed confocal light-sheet fluorescence microscopy (LSFM) (Costantini et al. 2023) on tissue slabs from BA 44/45 treated with the SHORT clearing technique (Pesce et al. 2022). Supra- and infragranular labels can be derived from LSFM because neuron subtypes are specifically labeled. Using LSFM registered into the MRI space, we demonstrated that the MR contrast boundary in cortical gray matter corresponds to the cytoarchitectural boundary between layers III and V, visible on the NeuN stain (Fig. 2A). Based on myelin density differences (Glasser et al. 2014; Chang et al. 2022) and their resulting contrast in MRI, we group layers I, II, and III together as the supragranular layer, and layers IV (absent in some regions), V, and VI together as the infragranular layer.
Fig. 2.

(A) LSFM-derived supra- and infragranular layer labels co-registered with MRI (Costantini et al. 2023). Further visualizations can be found in Figs. 4 and 5 and Fig. S4 of the original dataset paper (Costantini et al. 2023). (B) 2D slices of cases 11, 12, and 17 in sagittal, coronal, and axial views with manual annotation overlaid (red/outer: supragranular layer, white/inner: infragranular layer). Comprehensive visualization of annotations on all samples can be found in the Supplementary Material.
Data preprocessing
Manual annotation was performed using the Freeview tool in FreeSurfer (Fischl 2012) to label the visible supra-/infragranular boundary. The supragranular layers appear as a bright band in the neocortex, the infragranular layers appear as a slightly darker band, and the white matter appears as the dark area interior to the neocortex. Using these intensity characteristics, we manually segmented these three structures and the background on 100 slices in Brodmann area (BA) 44/45 of each hemisphere specimen, using the coronal view in Freeview. We maintained this single plane for manual labeling to limit bias in the labeling process, while inspecting other planes to avoid jagged reconstructions. In addition, we labeled two samples across the whole hemisphere, one slice in every 40. In total, about 3% of supra- and infragranular layer voxels are manually labeled. Fig. 2(B) shows examples of manual annotation on selected samples.
An additional background (neither cerebral gray nor white matter) labeling for training the MUS segmentation model was created by combining the segmentation outputs of SynthSeg (Billot et al. 2023) and SAMSEG (Puonti et al. 2016). Specifically, the ex vivo scans were first downsampled to 500 μm isotropic resolution, then processed with both SynthSeg and SAMSEG to produce probabilistic structure maps with values ranging from 0 to 1; these maps were combined into a single background probability map comprising all structures except the cortical gray matter and white matter. The combined maps were then upsampled back to the original resolution and thresholded at a value of 1.0 to obtain the final background labeling. An example background mask is shown in Supplementary Fig. S19. The reason for using both SAMSEG and SynthSeg is that while SynthSeg's background (i.e. non-brain) segmentation was better than SAMSEG's, the subcortical and cerebellum segmentation of SAMSEG was more accurate than the SynthSeg output. We note that the background mask is used only during training: at inference time, the trained network automatically labels regions far from the cortex, such as the hippocampus and other subcortical structures, as background.
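The combination-and-threshold step can be sketched in a few lines of NumPy; the function name, its inputs, and the nearest-neighbour upsampling are illustrative assumptions rather than the exact pipeline:

```python
import numpy as np

def build_background_mask(synthseg_nonbrain, samseg_subcortical,
                          upsample_factor=4, threshold=1.0):
    """Illustrative sketch of the background-labeling step.

    synthseg_nonbrain  : low-res probability map of non-brain background
    samseg_subcortical : low-res probability map of subcortical/cerebellar
                         structures (everything except cortical GM and WM)
    """
    # Combine the two sources into one background probability map.
    bg_prob = np.clip(synthseg_nonbrain + samseg_subcortical, 0.0, 1.0)
    # Nearest-neighbour upsample back toward the original resolution.
    for axis in range(bg_prob.ndim):
        bg_prob = np.repeat(bg_prob, upsample_factor, axis=axis)
    # Keep only voxels that are background with full certainty (threshold 1.0).
    return bg_prob >= threshold
```

Thresholding at exactly 1.0 keeps only voxels where the combined maps are fully certain, which errs on the side of excluding ambiguous voxels from the background label.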
Semi-supervised segmentation model
The U-Net is a deep learning architecture that has gained significant attention and popularity within the field of medical imaging. Initially proposed by Ronneberger et al. (2015), it resembles an autoencoder, in the sense that it consists of a contracting path that captures semantic context and a symmetric expansive path that enables precise feature localization. Crucially, encoder features are concatenated with features at the same resolution level in the decoder via skip connections, which effectively preserve high-frequency components of the signal that enable segmentation of convoluted boundaries. This design facilitates the incorporation of both global and local information, making it particularly effective for tasks where accurate delineation of structures is crucial, such as in identifying organs (Radiuk 2020; Ushinsky et al. 2021), tumors (Aboelenein et al. 2020), and anatomical features (Frid-Adar et al. 2018; Roy et al. 2019; Billot et al. 2023). Beyond image segmentation, the U-Net's versatility has led to its adoption in various medical imaging applications, including image denoising (Heinrich et al. 2018; Jia et al. 2021), registration (Balakrishnan et al. 2019; Cheng et al. 2019), and super-resolution (Han et al. 2022; Lu and Chen 2022; Iglesias et al. 2023), showcasing its adaptability and robust performance across different scenarios.
Multi-resolution U-Nets: To overcome the limitations related to the size of the data set and sparse annotations described in the introduction, we propose a cascaded resolution approach, inspired by previous works (Kamnitsas et al. 2016; Isensee et al. 2021), in combination with semi-supervised learning, which takes in volumetric inputs downsampled at different resolutions, while ensuring that all U-Net components receive inputs of the same size. This enables us to simultaneously capture both a large field of view and fine structural details. Our multi-resolution U-Net architecture is depicted in Fig. 3(A) and employs a series of cascaded U-Net components. At a coarse resolution, the U-Net input volumes have a larger field of view but lack fine structural details. Conversely, at a fine resolution, the field of view is smaller, but fine structural details are preserved. By utilizing features extracted from highly downsampled volumes, we capture global context information, which is then integrated with features from volumes of the original resolution. Each component U-Net follows a standard U-Net architecture (Fig. 3B). During the forward pass, features from the corresponding volume are extracted from the penultimate layer of the U-Net and concatenated to the second layer of the next U-Net. This process ensures the incorporation of spatially matched information from different scales to improve the overall segmentation accuracy.
Fig. 3.
(A) Processing large ex vivo MRI volumes using multi-resolution U-Nets. Inputs are downsampled at different scales mimicking a zoom-in procedure. Features extracted from coarser resolutions contain global context information and are integrated into subsequent U-Nets. (B) Model architecture of component U-Nets. Features from the second layers are extracted, upsampled, and concatenated to the second layer of the next U-Net. All component U-Nets are trained simultaneously in an end-to-end fashion. (C) Cross-pseudo supervision is a semi-supervised learning technique that trains two or more networks at the same time and uses their outputs to supervise each other.
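The multi-resolution input construction can be sketched as follows; the patch size, factors, and strided downsampling are illustrative simplifications (the framework uses five levels with 16-, 8-, 4-, 2-, and 1-fold downsampling), not the actual preprocessing code:

```python
import numpy as np

def multires_pyramid(volume, center, patch=32, factors=(16, 8, 4, 2, 1)):
    """Illustrative cascaded-input construction: every level yields a patch
    of identical array size, but coarser levels cover a proportionally
    larger field of view via strided (nearest-neighbour) sampling."""
    levels = []
    for f in factors:
        half = patch * f // 2
        # Crop a window of side patch*f around the center (start clamped
        # at 0 as a simple guard; assumes the window otherwise fits).
        sl = tuple(slice(max(0, c - half), c - half + patch * f)
                   for c in center)
        crop = volume[sl]
        # Stride-f sampling reduces the crop back to `patch` voxels per axis.
        levels.append(crop[::f, ::f, ::f])
    return levels
```

The last level is simply the full-resolution patch around the center, while earlier levels trade detail for field of view, mirroring the zoom-in procedure in Fig. 3(A).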
For the task of automatically segmenting supra- and infragranular layers in the ex vivo MRI dataset, we would ideally have a number of hemisphere samples fully labeled manually. This manual segmentation could then be used to train CNNs in a supervised fashion, in order to automatically predict labels on other samples by mimicking the manual segmentation procedure. However, 3D ultra-high-resolution ex vivo MRI data are very large and thus extremely time-consuming and laborious to manually annotate. In semi-supervised training, the network learns from both labeled and unlabeled data to train a predictive model; the latter is often relatively easier to obtain in much larger amounts. Semi-supervised training of CNNs mainly relies on the idea of incorporating knowledge priors (Zheng et al. 2019; Adiga Vasudeva et al. 2022) or enforcing consistency between labeled ground truth and predictions from unlabeled data (Bortsova et al. 2019; Ouali et al. 2020). Here, we propose a semi-supervised training strategy to effectively utilize the large amount of unlabeled data to improve the segmentation performance. Our semi-supervised segmentation approach is mainly adapted from the so-called cross pseudo-supervision strategy (Chen et al. 2021).
Semi-supervised segmentation: from a set of MRI volumes $\mathcal{X}$, we aim to predict one-hot segmentations $\mathcal{Y}$. We denote a labeled MRI volume as $x^l$ with segmentation labels $y^l$, and unlabeled MRI volumes as $x^u$. Two segmentation networks with identical architectures, $f_{\theta_1}$ and $f_{\theta_2}$, are initialized with different random weights. These two CNNs are trained with two loss functions defined symmetrically. In regions with existing manual segmentation labels, we directly compare the network predictions with the one-hot encodings of the ground truth:

$$\mathcal{L}_{\mathrm{sup}} = \mathcal{L}_{\mathrm{Dice}}\big(f_{\theta_1}(x^l),\, y^l\big) + \mathcal{L}_{\mathrm{Dice}}\big(f_{\theta_2}(x^l),\, y^l\big), \tag{1}$$

where $\mathcal{L}_{\mathrm{Dice}}$ denotes the soft Dice loss function (Milletari et al. 2016):

$$\mathcal{L}_{\mathrm{Dice}}(p, y) = 1 - \frac{2\sum_i p_i\, y_i}{\sum_i p_i^2 + \sum_i y_i^2}. \tag{2}$$

The vast majority of regions in the training data are unlabeled. In order to utilize the large amount of unlabeled data for improving the segmentation performance, we adapt a cross pseudo-supervision loss function on the unlabeled data (Chen et al. 2021):

$$\mathcal{L}_{\mathrm{cps}} = \mathcal{L}_{\mathrm{Dice}}\big(f_{\theta_1}(x^u),\, H(f_{\theta_2}(x^u))\big) + \mathcal{L}_{\mathrm{Dice}}\big(f_{\theta_2}(x^u),\, H(f_{\theta_1}(x^u))\big), \tag{3}$$

where $H(\cdot)$ denotes the one-hot encoding function. The benefits of this approach are twofold. First, it promotes consistent predictions across differently initialized networks for the same input image, improving reliability and decision-boundary placement. Second, during later optimization stages, the pseudo-labeled data act as an expansion of the training dataset, enhancing training compared with using the labeled data alone.

Since the segmentation network operates at multiple resolutions, we also enforce the predictions to be consistent across resolutions using a multi-resolution consistency loss:

$$\mathcal{L}_{\mathrm{mrc}} = \sum_{r} \mathcal{L}_{\mathrm{Dice}}\big(f_{\theta}^{(r)}(x),\, D(f_{\theta}^{(r+1)}(x))\big), \tag{4}$$

where $f_{\theta}^{(r)}(x)$ is the prediction at resolution level $r$ and $D$ denotes the downsampling operator.

One potential issue of cross pseudo-supervision is error accumulation: in the late training stage, the predictions of the two networks converge and may become trapped in local optima, because errors are mutually learned and reinforced. One way to address this issue is to encourage the errors made by the two networks during training to be diverse. We therefore design an error diversity loss based on the following idea: in the labeled region, if both networks make incorrect predictions compared with the ground truth, we encourage them to make different errors:

$$\mathcal{L}_{\mathrm{ed}} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\big[\hat{y}_{1,i} \neq y_i\big]\, \mathbb{1}\big[\hat{y}_{2,i} \neq y_i\big]\, \mathbb{1}\big[\hat{y}_{1,i} = \hat{y}_{2,i}\big], \tag{5}$$

where $\mathbb{1}[\cdot]$ denotes the indicator function, $\hat{y}_{k,i}$ is the hard prediction of network $k$ at labeled voxel $i$, and $N$ is the number of labeled voxels.
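As an illustration, the soft Dice loss and the cross pseudo-supervision term can be sketched in a few lines of NumPy; the function names, the one-hot hardening step, and the flattened array layout are our assumptions, not the paper's implementation:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss (Milletari et al. 2016) over probability maps."""
    num = 2.0 * np.sum(pred * target)
    den = np.sum(pred ** 2) + np.sum(target ** 2) + eps
    return 1.0 - num / den

def one_hot(probs, n_classes):
    """Harden probabilities: argmax followed by one-hot encoding
    along the last (class) axis."""
    return np.eye(n_classes)[np.argmax(probs, axis=-1)]

def cps_loss(p1, p2, n_classes):
    """Cross pseudo-supervision: each network is supervised by the
    hardened prediction of the other (an illustrative sketch)."""
    return (soft_dice_loss(p1, one_hot(p2, n_classes))
            + soft_dice_loss(p2, one_hot(p1, n_classes)))
```

When the two networks agree confidently, the cross term approaches zero; disagreement produces a gradient that pulls each network toward the other's hard labels.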
Implementation details
All CNN models are implemented using the PyTorch framework (Paszke et al. 2019). All scans were bias-corrected. Each input supplied to the multi-resolution U-Nets contains 5 volumes at different resolutions. The image patch cascade undergoes successive downsampling, reducing dimensions by 16-, 8-, 4-, 2-, and 1-fold along all three axes. Consequently, the segmentation output maintains the same hierarchical structure, where each voxel corresponds to a semantic class label.
In the context of labeled input data, a crucial distinction is made based on whether the fifth input volume (original resolution, no downsampling) contains manually labeled supra- or infragranular layer class voxels. Inputs meeting this criterion are categorized as labeled, while others are categorized as unlabeled. During the initial training phase, when model predictions lack precision to mutually guide one another, a strategic approach is employed to progressively enhance the influence of unlabeled samples as training advances.
To this end, a parameter $\lambda$ is defined as the fraction of training completed, $\lambda = e/E$, where $e$ denotes the current epoch number and $E$ represents the total number of epochs. In each epoch, for every labeled input, the loss is computed as a weighted sum of the supervised loss, the multi-resolution consistency loss, and the error diversity loss. For unlabeled inputs, there is a probability $\lambda$ of being chosen for training, with the loss calculated as a weighted sum of the cross pseudo-supervision loss and the multi-resolution consistency loss. The choice of loss weights is empirical; since the range of all loss functions is between 0 and 1, the chosen values strike a balance between the importance of the different loss terms.
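The epoch-dependent weighting can be sketched as follows; the linear ramp $\lambda = e/E$ and the per-sample selection rule are our reading of the schedule described above, not a verbatim reproduction of the training code:

```python
import random

def epoch_weight(epoch, total_epochs):
    """Linear ramp: unlabeled data contribute nothing at the start of
    training and fully by the final epoch (illustrative schedule)."""
    return epoch / total_epochs

def select_unlabeled(unlabeled_ids, epoch, total_epochs, rng):
    """Each unlabeled input is used in this epoch with probability lambda."""
    lam = epoch_weight(epoch, total_epochs)
    return [i for i in unlabeled_ids if rng.random() < lam]
```

Early on, when pseudo labels are unreliable, almost no unlabeled samples are drawn; by the end of training, nearly all of them participate.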
Training spans 1000 epochs, with each epoch involving the loading of inputs from a randomly chosen scan in the training dataset. The inputs were augmented with intensity transformations such as random bias field, gamma transformation, and Gaussian noise; geometric transformations such as rotation and elastic deformation were not applied. A batch size of 1 is used, with each batch drawn as a random sample from the training dataset, and training employs the Adam optimizer (Kingma and Ba 2014). In contrast to training only two networks, our modified cross pseudo-supervision approach trains three networks to maintain a "backup", thereby enhancing training stability and overall performance. At each step, the two networks with the most dissimilar segmentation predictions, as assessed by the Dice score, are chosen for cross pseudo-supervision.
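The three-network pair selection can be sketched as a toy version over binary label maps; the function names and flattened-list representation are illustrative:

```python
from itertools import combinations

def dice_agreement(a, b):
    """Dice overlap between two binary label maps (flat lists of 0/1)."""
    inter = sum(x and y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    return 2.0 * inter / size if size else 1.0

def most_dissimilar_pair(predictions):
    """Among the networks' predictions, return the index pair whose
    mutual Dice score is lowest (sketch of the three-network variant)."""
    return min(combinations(range(len(predictions)), 2),
               key=lambda ij: dice_agreement(predictions[ij[0]],
                                             predictions[ij[1]]))
```

Choosing the most dissimilar pair maximizes the mutual teaching signal at each step, while the third (most agreeable) network is held back as the backup.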
During the prediction stage, an overlapping tile strategy (Ronneberger et al. 2015) is adopted to ensure smoothness at boundaries.
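A one-dimensional toy version of the overlapping-tile strategy, averaging predictions where tiles overlap; the tile size, overlap, and identity "model" are placeholders, not the actual inference settings:

```python
import numpy as np

def tiled_predict(signal, model, tile=8, overlap=4):
    """Overlapping-tile inference (1D sketch): predictions from
    overlapping windows are accumulated and averaged where they
    overlap, smoothing seams at tile boundaries."""
    out = np.zeros_like(signal, dtype=float)
    weight = np.zeros_like(signal, dtype=float)
    step = tile - overlap
    for start in range(0, len(signal), step):
        stop = min(start + tile, len(signal))
        out[start:stop] += model(signal[start:stop])
        weight[start:stop] += 1.0
        if stop == len(signal):
            break
    return out / weight
```

With a consistent model, averaging overlapping predictions leaves the interior unchanged while suppressing boundary artifacts that a non-overlapping tiling would produce.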
Results
Evaluation
To assess the performance of automatic supra-/infragranular layer segmentation, we conducted manual segmentation on specific slices. The selection of validation slices followed a structured procedure: (1) each hemisphere sample underwent surface fitting at 1 mm to detect the white-matter surface, followed by parcellation into 14 cortical regions using the recon-all-clinical tool (Gopinath et al. 2023) within FreeSurfer (Fischl 2012); (2) within each region, a random point was chosen on the white-matter surface; (3) the orientation (axial, coronal, or sagittal) most perpendicular to the surface at this point was determined, and a slice centered on this point was extracted; (4) manual segmentation was carried out on the central region of this slice.
In total, 210 slices were chosen for evaluation. To gauge the reliability of the manual segmentation procedure, we randomly picked one slice from each cortical region, which was re-annotated by the same labeler after a 4-week interval. This allowed us to estimate intra-rater variability.
15-fold cross-validation was used in all experiments. In each fold, one of the 15 samples without whole-hemisphere labeling was held out for prediction, and the training set consisted of the remaining 14 samples plus the two samples with sparse whole-hemisphere slice labeling.
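The fold construction can be sketched as follows (sample identifiers are illustrative):

```python
def build_folds(sample_ids, always_train):
    """15-fold split sketch: the densely annotated hemispheres stay in
    every training set; each remaining sample is held out exactly once."""
    held_out = [s for s in sample_ids if s not in always_train]
    folds = []
    for test_sample in held_out:
        train = [s for s in sample_ids if s != test_sample]
        folds.append((train, test_sample))
    return folds
```

With 17 samples, two of which carry sparse whole-hemisphere labels, this yields 15 folds, each training on 16 samples and predicting on the held-out one.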
We applied the public implementation of nnU-Net (Isensee et al. 2021) available on GitHub for comparison with our method. nnU-Net's default data augmentations during sampling (Gaussian noise, Gaussian blur, gamma transformation, etc.) were used. Since our training data annotations contain unlabeled parts, we masked out unlabeled parts during the calculation of the loss gradient.
Segmentation map of supragranular and infragranular layers
As a baseline method, we first applied nnU-Net, a widely recognized implementation of U-Nets that provides state-of-the-art results in an array of medical image segmentation tasks (Isensee et al. 2021). Given our ultra-high-resolution dataset, nnU-Net self-configured a pipeline with two training stages. In the first stage, a U-Net was trained on a downsampled version of the dataset, enabling the entire 3D volume to be processed by the U-Net. In the second stage, another U-Net was trained on 3D sub-volumes extracted from the whole volume, maintaining full resolution. The sub-volumes, along with their corresponding coarse segmentation from the first stage, were concatenated as the input. The predictions from this second stage were kept as the final results. However, as illustrated in Fig. 4(A), this approach yielded suboptimal results, notably missing portions of the neocortex in the layer segmentation. This failure is likely due to nnU-Net not being tailored for scenarios with limited labeled training data. Consequently, conventional supervised U-Net models proved insufficient in achieving our objective of accurately segmenting supra- and infragranular layers under these constraints.
Fig. 4.

(A) Example result of layer segmentation (red/outer: supragranular layer, white/inner: infragranular layer) by nnU-Net and our model (MUS: multi-resolution U-Nets semi-supervised). Comprehensive visualization of annotations on all samples can be found in the Supplementary Material. (B) Manual annotation and automatic segmentation on example validation slices from our method and baselines (US: simple U-Net semi-supervised; MU: multi-resolution U-Nets supervised; MUS (no $\mathcal{L}_{\mathrm{ed}}$): multi-resolution U-Nets semi-supervised with no error diversity loss).
In contrast to nnU-Net, which exclusively employs labeled data for training, our model adopts a semi-supervised approach, utilizing both the labeled data and the substantial majority of unlabeled data (about 97% of layer voxels) for training. Furthermore, while nnU-Net also employs a multi-resolution strategy, it is limited to two stages that are trained independently. In contrast, our multi-resolution U-Nets operate across a larger number of resolutions and are trained in an end-to-end manner, which effectively leverages information from all resolutions and scales. This approach led to more accurate segmentation of the supra- and infragranular layers, as shown in Fig. 4(A), while excluding non-targeted regions such as the cerebellum. Additionally, as shown in Fig. 4(B), our method qualitatively yields the highest consistency with the manual annotations.
Moreover, in the sample examined by the BigBrain project, layers I-III account for 47.8 ± 1.6% of total cortical thickness across different cortical regions (Wagstyl et al. 2020). Similar results were obtained in our study. We measured the supragranular layer's share of total cortical volume by counting the number of voxels in our predicted segmentation map assigned to the supragranular layer relative to the whole cortex. The supragranular layer accounts for 46.8 ± 1.5% of total cortical volume across the different whole-hemisphere samples. This further validates the accuracy of our method in detecting the supra- and infragranular layers. The volume percentage of the supragranular layer on an example sample across different cortical parcels (47.0 ± 2.0%) is also shown in Supplementary Table S1.
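The volume-percentage measurement reduces to voxel counting in the predicted segmentation map; a minimal sketch (the label values are illustrative):

```python
import numpy as np

SUPRA, INFRA = 1, 2  # illustrative label values for the two layer classes

def supragranular_fraction(seg):
    """Fraction of cortical voxels labeled supragranular, computed by
    voxel counting in a predicted segmentation map (any array of labels)."""
    supra = np.count_nonzero(seg == SUPRA)
    infra = np.count_nonzero(seg == INFRA)
    return supra / (supra + infra)
```

Background and white-matter voxels are excluded by construction, so the denominator is exactly the cortical ribbon (supragranular plus infragranular voxels).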
The generalization ability of a trained deep learning model is always a critical concern; good performance on datasets collected from different sources with varying acquisition parameters is desirable. Here, we applied our trained model to the 7T ex vivo sample acquired at 100 μm described in (Edlow et al. 2019). The segmentation results are visualized in Supplementary Fig. S18, showcasing the robust generalization ability of our model.
Quantitative segmentation performance as a function of cortical region
We computed the Dice scores for the competing methods and presented them in Table 2. The intra-rater variability was calculated as 0.856 for the supragranular layer and 0.829 for the infragranular layer. Among the methods, MUS demonstrated the highest Dice score, attaining 0.828 for the supragranular layer and 0.818 for the infragranular layer. This latter outcome approaches intra-rater variability, signifying a segmentation performance close to that achieved by human experts through manual segmentation.
Table 2.
Dice scores of the proposed method and baseline methods; standard deviations are computed across samples.
| | nnU-Net | MU | US | MUS (no error diversity loss) | MUS | Intra-rater |
|---|---|---|---|---|---|---|
| Supragranular | 0.726 ± 0.042 | 0.783 ± 0.044 | 0.796 ± 0.051 | 0.807 ± 0.039 | 0.828 ± 0.040 | 0.856 |
| Infragranular | 0.758 ± 0.060 | 0.769 ± 0.056 | 0.802 ± 0.048 | 0.815 ± 0.041 | 0.818 ± 0.037 | 0.829 |
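For reference, the Dice score used in Table 2 measures overlap between a predicted and a manual binary mask. A minimal NumPy version (an illustrative sketch, not the authors' evaluation code) is:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2x3 masks: the intersection has 2 voxels, each mask has 3.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(pred, truth), 3))  # 0.667
```

Computed per layer (supragranular or infragranular mask against the corresponding manual annotation), this yields the per-class scores reported above.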
As expected, segmentation performance excelled in the BA 44/45 region (Fig. 5), which had full labeling in the training dataset. In general, regions in close anatomical proximity or with laminar structure similar to BA 44/45 exhibited good segmentation performance. Regions distant from or anatomically dissimilar to BA 44/45, such as the primary visual cortex (V1), entorhinal cortex, and perirhinal cortex, exhibited slightly lower segmentation performance. These findings suggest that incorporating manual segmentations for additional cortical areas may be necessary to enhance the overall supra- and infragranular layer segmentation across the entire hemisphere in future efforts. In Supplementary Note 1, we discuss the appropriateness of the Dice score as the segmentation evaluation metric and provide additional evaluation results based on a distance-based metric.
Fig. 5.
(A) Box plots of segmentation performance across samples by proposed and baseline methods. (B) Performance specific to each cortical region (BA3a: somatosensory area (anterior); BA3p: somatosensory area (posterior); MT: visual area (middle temporal); V1: primary visual area; V2: secondary visual area).
Ablation study
We conducted ablation studies to analyze the importance of three key components of our model design: (1) multi-resolution U-Nets, (2) semi-supervised training, and (3) the error diversity loss. In the basic U-Net semi-supervised model, instead of employing a cascade of U-Nets at various resolutions, we utilized a single U-Net operating on patches at full resolution. In the multi-resolution U-Nets supervised model, we removed the cross pseudo-supervision used for semi-supervised training, so that only labeled training data contributed to the loss. In the multi-resolution U-Nets semi-supervised model without error diversity loss, we excluded the error diversity loss term.
Notably, as shown in Table 2, the absence of semi-supervised training led to a reduction in validation Dice score of approximately 0.04. Similarly, without multi-resolution U-Nets, the validation Dice score decreased by approximately 0.02, and without the error diversity loss it also declined, though by a smaller margin.
These results indicate that all three components (multi-resolution U-Nets, semi-supervised training, and error diversity loss) contributed to the performance of our segmentation model. Notably, the largest improvement in accuracy was attributed to the implementation of semi-supervised training.
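As a rough illustration of the cross pseudo-supervision term removed in the supervised ablation: two parallel branches each learn from the other's hard pseudo-labels on unlabeled voxels. The sketch below uses seeded random logits in place of the two U-Net branches, and does not reproduce the error diversity loss, whose exact form is defined in the Methods:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(prob: np.ndarray, hard_label: np.ndarray) -> float:
    # Mean negative log-likelihood of the hard labels under `prob`.
    picked = np.take_along_axis(prob, hard_label[..., None], axis=-1)
    return float(-np.mean(np.log(picked + 1e-12)))

# Stand-in logits over C = 3 classes (background, supra, infra) for N
# unlabeled voxels; in the real model these come from two U-Net branches.
N, C = 8, 3
prob_a = softmax(rng.normal(size=(N, C)))
prob_b = softmax(rng.normal(size=(N, C)))

# Cross pseudo-supervision: each branch is supervised by the other's argmax.
pseudo_a, pseudo_b = prob_a.argmax(-1), prob_b.argmax(-1)
cps_loss = cross_entropy(prob_a, pseudo_b) + cross_entropy(prob_b, pseudo_a)
print(round(cps_loss, 3))
```

Minimizing this term pushes the two branches toward consistent predictions on unlabeled voxels, which is how the unlabeled majority of the data contributes to training.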
Discussion
Accurate segmentation of cortical layers is essential for a comprehensive understanding of neocortical structural organization and its relevance to various neurological conditions and cognitive competencies. The neocortical division into six layers, each characterized by distinct connectivity patterns, underscores the critical importance of precise laminar identification. Leveraging an unprecedented ultra-high-resolution ex vivo whole-hemisphere MRI dataset and meticulous but sparse manual annotation, we introduce an innovative approach for the segmentation of supragranular and infragranular layers. For the first time, we obtain a reliable fine segmentation model covering the entire hemisphere.
Our proposed segmentation model, built on an enhanced version of the U-Net architecture and incorporating cross pseudo-supervision, demonstrates remarkable success in accurately delineating supra- and infragranular layers, achieving Dice scores over 0.8. Unlike most existing MRI segmentation models, which rely heavily on fully annotated training data and operate at a single resolution, our semi-supervised multi-resolution U-Nets offer a valuable improvement: they reduce the need for large amounts of manually annotated training data and enhance efficiency when processing large volumes in an end-to-end training fashion. Rigorous ablation studies have demonstrated the efficacy of our novel modules.
Research focusing on supra- and infragranular layers has significant clinical implications. Prior studies have revealed distinct gene expression alterations, pathology accumulations, and atrophies between these layers in patients with conditions such as schizophrenia (Arion et al. 2010), autism spectrum disorder (Karst and Hutsler 2016), epilepsy (Tóth et al. 2018), Alzheimer’s disease (Hof and Morrison 1990; Hof et al. 1990, 1993), Parkinson’s disease (Fathy et al. 2019), and Huntington’s disease (Heinsen et al. 1994). Our high-resolution segmentation maps of these layers across the entire hemisphere will facilitate multiscale investigations of these diseases by integrating with other data types, such as histological and genomic studies.
The introduced semi-supervised segmentation approach and its corresponding results hold promise for broader applications. It enables benchmarking and validating cortical layer segmentation outcomes across different imaging modalities, fostering cross-modal integration and enriching our understanding of cortical organization. In addition, this method sets the stage for finer segmentation of additional cortical layers and small subcortical nuclei in the future, allowing for even greater granularity in the analysis of cortical and subcortical architecture. Finally, our results can be used to construct surface models, providing insights into alterations in cortical thickness and sulcal depth in diseased states.
While our proposed segmentation model demonstrates promising results, two limitations should be acknowledged. First, the semi-supervised nature of our approach, which relies on a substantial majority of unlabeled data, introduces a degree of uncertainty in the training process. While this approach enhances efficiency, it may also lead to variations in segmentation performance across cortical regions, as seen in the quantitative analysis. The model’s reliance on manual annotations in specific regions may limit its generalizability to areas with sparse or no labeled training data. Second, the model’s performance may be influenced by factors such as post-mortem tissue properties, variability in brain morphology, and MRI acquisition conditions and parameters. Addressing these limitations and conducting further validation on diverse datasets will be crucial for ensuring the robustness and applicability of the presented approach.
In summary, this project presents an advancement in the segmentation of cortical layers within ultra-high-resolution ex vivo MRI data. We introduce the first MRI-contrast-based whole-hemisphere segmentation model of supra- and infragranular layers, thereby elevating the delineation of the human cerebral cortex in MRI from a single-band to a dual-band representation. The incorporation of multi-resolution U-Nets and semi-supervised learning in the segmentation process has demonstrated impressive accuracy and reliability. The potential applications of this segmentation model are extensive, spanning from basic neuroscience research to clinical studies investigating various neurological conditions.
Supplementary Material
Contributor Information
Xiangrui Zeng, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Oula Puonti, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital, Blegdamsvej 9, 2100 København, Denmark.
Areej Sayeed, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Rogeny Herisse, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Jocelyn Mora, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Kathryn Evancic, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Divya Varadarajan, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Yael Balbastre, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Irene Costantini, National Institute of Optics (CNR-INO), National Research Council, Largo Enrico Fermi, 6, 50125 Sesto Fiorentino, Italy; European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara, 1, 50019 Sesto Fiorentino, Italy; Department of Biology, University of Florence, P.za di San Marco, 4, 50121 Firenze FI, Italy.
Marina Scardigli, European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara, 1, 50019 Sesto Fiorentino, Italy.
Josephine Ramazzotti, European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara, 1, 50019 Sesto Fiorentino, Italy.
Danila DiMeo, European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara, 1, 50019 Sesto Fiorentino, Italy.
Giacomo Mazzamuto, National Institute of Optics (CNR-INO), National Research Council, Largo Enrico Fermi, 6, 50125 Sesto Fiorentino, Italy; European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara, 1, 50019 Sesto Fiorentino, Italy; Department of Physics and Astronomy, University of Florence, P.za di San Marco, 4, 50121 Firenze FI, Italy.
Luca Pesce, European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara, 1, 50019 Sesto Fiorentino, Italy.
Niamh Brady, European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara, 1, 50019 Sesto Fiorentino, Italy.
Franco Cheli, European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara, 1, 50019 Sesto Fiorentino, Italy.
Francesco Saverio Pavone, National Institute of Optics (CNR-INO), National Research Council, Largo Enrico Fermi, 6, 50125 Sesto Fiorentino, Italy; European Laboratory for Non-Linear Spectroscopy (LENS), Via Nello Carrara, 1, 50019 Sesto Fiorentino, Italy; Department of Physics and Astronomy, University of Florence, P.za di San Marco, 4, 50121 Firenze FI, Italy.
Patrick R Hof, Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, USA.
Robert Frost, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Jean Augustinack, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
André van der Kouwe, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Juan Eugenio Iglesias, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Bruce Fischl, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Boston, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
Author contributions
Xiangrui Zeng (Formal analysis, Investigation, Methodology, Validation, Visualization, Writing—original draft), Oula Puonti (Formal analysis, Investigation, Validation, Writing—original draft), Areej Sayeed (Data curation, Writing— original draft), Rogeny Herisse (Data curation), Jocelyn Mora (Data curation), Kathryn Evancic (Data curation), Divya Varadarajan (Resources), Yael Balbastre (Resources), Irene Costantini (Resources), Marina Scardigli (Resources), Josephine Ramazzotti (Resources), Danila DiMeo (Resources), Giacomo Mazzamuto (Resources), Luca Pesce (Resources), Niamh Brady (Resources), Franco Cheli (Resources), Francesco Saverio Pavone (Resources), Patrick Hof (Conceptualization, Resources, Writing—review & editing), Robert Frost (Resources, Writing—review & editing), Jean Augustinack (Resources, Writing—review & editing), André van der Kouwe (Resources, Writing—review & editing), Juan Eugenio Iglesias (Conceptualization, Funding acquisition, Investigation, Methodology, Resources, Supervision, Writing— original draft), Bruce Fischl (Conceptualization, Funding acquisition, Investigation, Methodology, Resources, Software, Supervision, Validation, Writing—original draft). B.F. and J.E.I conceived the research. X.Z., J.E.I, and B.F. designed the method. X.Z. implemented and refined the method. X.Z. and O.P. conducted the research. D.V., Y.B., I.C., M.S., J.R., D.D., G.M., L.P., N.B., F.C., F.S.P, P.R.H, R.F., J.A., and A.v.d.K collected the dataset. A.S., R.H., J.M., and K.E. processed and manually annotated the dataset. X.Z, O.P., J.E.I, and B.F. evaluated the results. X.Z., O.P., J.E.I., and B.F. wrote the manuscript. All authors edited the manuscript.
Funding
This research was primarily funded by the National Institute of Mental Health 1RF1MH123195. Support for this research was provided in part by the BRAIN Initiative Cell Census Network grants U01MH117023 and UM1MH130981, the Brain Initiative Brain Connects consortium (U01NS132181, 1UM1NS132358), the National Institute for Biomedical Imaging and Bioengineering (1R01EB023281, R01EB006758, R21EB018907, R01EB019956, P41EB030006), the National Institute on Aging (1R56AG064027, 1R01AG064027, 5R01AG008122, R01AG016495, 1R01AG070988, 5R01AG057672. 1RF1AG080371), the National Institute of Mental Health (R01 MH123195, R01 MH121885), the National Institute for Neurological Disorders and Stroke (R01NS0525851, R21NS072652, R01NS070963, R01NS083534, R25NS125599, 5U01NS086625, 5U24NS10059103, R01NS105820, U24NS135561), European Union’s Horizon 2020 research and innovation Framework Programme under grant agreement No. 654148 (Laserlab-Europe), Italian Ministry for Education in the framework of Euro-Bioimaging Italian Node (ESFRI research infrastructure),“Fondazione CR Firenze” (private foundation), and was made possible by the resources provided by Shared Instrumentation Grants 1S10RR023401, 1S10RR019307, and 1S10RR023043. Additional support was provided by the NIH Blueprint for Neuroscience Research (5U01-MH093765), part of the multi-institutional Human Connectome Project. Much of the computation resources required for this research was performed on computational hardware generously provided by the Massachusetts Life Sciences Center (https://www.masslifesciences.com/). OP was supported by a grant from Lundbeckfonden (grant number R360–2021–395). JEI was supported by a grant from Jack Satter Foundation. XZ was supported by a postdoctoral fellowship from Huntington’s Disease Society of America human biology project.
Conflict of interest statement: B.F. is a medical advisor to DeepHealth, a company whose medical pursuits focus on medical imaging and measurement technologies. B.F.’s interests were reviewed and are managed by Massachusetts General Hospital and Partners HealthCare in accordance with their conflict of interest policies.
References
- Aboelenein NM, Songhao P, Koubaa A, Noor A, Afifi A. HTTU-Net: hybrid two track U-Net for automatic brain tumor segmentation. IEEE Access. 2020:8:101406–101415. https://doi.org/ 10.1109/ACCESS.2020.2998601. [DOI] [Google Scholar]
- Adiga Vasudeva S, Dolz J, Lombaert H. Leveraging labeling representations in uncertainty-based semi-supervised segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention; 2022; Cham: Springer Nature Switzerland. p. 265–275. https://doi.org/ 10.1007/978-3-031-16452-1_26. [DOI] [Google Scholar]
- Amunts K, Lepage C, Borgeat L, Mohlberg H, Dickscheid T, Rousseau M-É, Bludau S, Bazin P-L, Lewis LB, Oros-Peusquens A-M, et al. BigBrain: an ultrahigh-resolution 3D human brain model. Science. 2013:340(6139): 1472–1475. https://doi.org/ 10.1126/science.1235381. [DOI] [PubMed] [Google Scholar]
- Arion D, Horváth S, Lewis DA, Mirnics K. Infragranular gene expression disturbances in the prefrontal cortex in schizophrenia: signature of altered neural development? Neurobiol Dis. 2010:37(3): 738–746. https://doi.org/ 10.1016/j.nbd.2009.12.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Balakrishnan G, Zhao A, Sabuncu MR, Guttag J, Dalca AV. Voxelmorph: a learning framework for deformable medical image registration. IEEE Trans Med Imaging. 2019:38(8): 1788–1800. https://doi.org/ 10.1109/TMI.2019.2897538. [DOI] [PubMed] [Google Scholar]
- Benes FM, Berretta S. Gabaergic interneurons: implications for understanding schizophrenia and bipolar disorder. Neuropsychopharmacology. 2001:25(1): 1–27. https://doi.org/ 10.1016/S0893-133X(01)00225-1. [DOI] [PubMed] [Google Scholar]
- Billot B, Greve DN, Puonti O, Thielscher A, Van Leemput K, Fischl B, Dalca AV, Iglesias JE, et al. SynthSeg: segmentation of brain MRI scans of any contrast and resolution without retraining. Med Image Anal. 2023:86:102789. https://doi.org/ 10.1016/j.media.2023.102789. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bollmann S, Mattern H, Bernier M, Robinson SD, Park D, Speck O, Polimeni JR. Imaging of the pial arterial vasculature of the human brain in vivo using high-resolution 7T time-of-flight angiography. elife. 2022:11:e71186. https://doi.org/ 10.7554/eLife.71186. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bortsova G, Dubost F, Hogeweg L, Katramados I, De Bruijne M. Semi-supervised medical image segmentation via learning consistency under transformations. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part VI 22; 2019; Springer International Publishing. p. 810–818. [Google Scholar]
- Brügger R, Baumgartner CF, Konukoglu E. A partially reversible U-Net for memory-efficient volumetric image segmentation. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part III 22; 2019; Springer International Publishing. p. 429–437. [Google Scholar]
- Camacho J, Ejaz E, Ariza J, Noctor SC, Martínez-Cerdeño V. Reln-expressing neuron density in layer I of the superior temporal lobe is similar in human brains with autism and in age-matched controls. Neurosci Lett. 2014:579:163–167. https://doi.org/ 10.1016/j.neulet.2014.07.031. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chan K-S, Hédouin R, Mollink J, Schulz J, van Cappellen A-M, van Walsum JP. Imaging white matter microstructure with gradient-echo phase imaging: is ex vivo imaging with formalin-fixed tissue a good approximation of the in vivo brain? Magn Reson Med. 2022:88(1): 380–390. https://doi.org/ 10.1002/mrm.29213. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chang S, Varadarajan D, Yang J, Chen IA, Kura S, Magnain C, Augustinack JC, Fischl B, Greve DN, Boas DA, et al. Scalable mapping of myelin and neuron density in the human brain with micrometer resolution. Sci Rep. 2022:12(1): 363. https://doi.org/ 10.1038/s41598-021-04093-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen X, Yuan Y, Zeng G, Wang J. Semi-supervised semantic segmentation with cross pseudo supervision. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2021. The Institute of Electrical and Electronics Engineers. p. 2613–2622.
- Chen X, Wang X, Zhang K, Fung K-M, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal. 2022:79:102444. https://doi.org/ 10.1016/j.media.2022.102444. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cheng Z, Guo K, Wu C, Shen J, Qu L. U-Net cascaded with dilated convolution for medical image registration. In: 2019 Chinese Automation Congress (CAC); 2019; The Institute of Electrical and Electronics Engineers. p. 3647–3651. [Google Scholar]
- Costantini I, Morgan L, Yang J, Balbastre Y, Varadarajan D, Pesce L, Scardigli M, Mazzamuto G, Gavryusev V, Castelli FM, et al. A cellular resolution atlas of Broca’s area. Sci Adv. 2023:9(41): eadg3844. https://doi.org/ 10.1126/sciadv.adg3844. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ding S-L, Royall JJ, Sunkin SM, Ng L, Facer BA, Lesnar P, Guillozet-Bongaarts A, McMurray B, Szafer A, Dolbeare TA, et al. Comprehensive cellular-resolution atlas of the adult human brain. J Comp Neurol. 2016:524(16): 3127–3481. https://doi.org/ 10.1002/cne.24080. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Edlow BL, Mareyam A, Horn A, Polimeni JR, Witzel T, Tisdall MD, Augustinack JC, Stockmann JP, Diamond BR, Stevens A, et al. 7 tesla MRI of the ex vivo human brain at 100 micron resolution. Sci Data. 2019:6(1): 244. https://doi.org/ 10.1038/s41597-019-0254-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fathy Y, Jonker A, Oudejans E, de Jong F, van Dam A-M, Rozemuller A, van de Berg W. Differential insular cortex subregional vulnerability to α-synuclein pathology in Parkinson’s disease and dementia with Lewy bodies. Neuropathol Appl Neurobiol. 2019:45(3): 262–277. https://doi.org/ 10.1111/nan.12501. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fischl B. FreeSurfer. NeuroImage. 2012:62(2): 774–781. https://doi.org/ 10.1016/j.neuroimage.2012.01.021. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fischl B, Salat DH, Van Der Kouwe AJ, Makris N, Ségonne F, Quinn BT, Dale AM. Sequence-independent segmentation of magnetic resonance images. NeuroImage. 2004:23:S69–S84. https://doi.org/ 10.1016/j.neuroimage.2004.07.016. [DOI] [PubMed] [Google Scholar]
- Frid-Adar M, Ben-Cohen A, Amer R, Greenspan H. Improving the segmentation of anatomical structures in chest radiographs using U-Net with an imagenet pre-trained encoder. In: Image Analysis for Moving Organ, Breast, and Thoracic Images: Third International Workshop, RAMBO 2018, Fourth International Workshop, BIA 2018, and First International Workshop, TIA 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16 and 20, 2018, Proceedings 3; 2018; Springer International Publishing. p. 159–168.
- Fukunaga M, Li T-Q, van Gelderen P, de Zwart JA, Shmueli K, Yao B, Lee J, Maric D, Aronova MA, Zhang G, et al. Layer-specific variation of iron content in cerebral cortex as a source of MRI contrast. Proc Natl Acad Sci. 2010:107(8): 3834–3839. https://doi.org/ 10.1073/pnas.0911177107. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gerfen CR, Economo MN, Chandrashekar J. Long distance projections of cortical pyramidal neurons. J Neurosci Res. 2018:96(9): 1467–1475. https://doi.org/ 10.1002/jnr.23978. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Glasser MF, Goyal MS, Preuss TM, Raichle ME, Van Essen DC. Trends and properties of human cerebral cortex: correlations with cortical myelin content. NeuroImage. 2014:93:165–175. https://doi.org/ 10.1016/j.neuroimage.2013.03.060. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gopinath K, Greve DN, Das S, Arnold S, Magdamo C, Iglesias JE. Cortical analysis of heterogeneous clinical brain MRI scans for large-scale neuroimaging studies. In: International Conference on Medical Image Computing and Computer-Assisted Intervention; 2023; Cham: Springer Nature Switzerland. p. 35–45. https://doi.org/ 10.1007/978-3-031-43993-3_4. [DOI]
- Gulban OF, Bollmann S, Huber LR, Wagstyl K, Goebel R, Poser BA, Kay K, Ivanov D. Mesoscopic in vivo human T2* dataset acquired using quantitative MRI at 7 tesla. NeuroImage. 2022:264:119733. https://doi.org/ 10.1016/j.neuroimage.2022.119733. [DOI] [PubMed] [Google Scholar]
- Han N, Zhou L, Xie Z, Zheng J, Zhang L. Multi-level U-Net network for image super-resolution reconstruction. Displays. 2022:73:102192. https://doi.org/ 10.1016/j.displa.2022.102192. [DOI] [Google Scholar]
- Hatanaka Y, Zhu Y, Torigoe M, Kita Y, Murakami F. From migration to settlement: the pathways, migration modes and dynamics of neurons in the developing brain. Proc Jpn Acad Ser B. 2016:92(1): 1–19. https://doi.org/ 10.2183/pjab.92.1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Heinrich MP, Stille M, Buzug TM. Residual U-Net convolutional neural network architecture for low-dose ct denoising. Curr Dir Biomed Eng. 2018:4(1): 297–300. https://doi.org/ 10.1515/cdbme-2018-0072. [DOI] [Google Scholar]
- Heinsen H, Strik M, Bauer M, Luther K, Ulmar G, Gangnus D, Jungkunz G, Eisenmengers W, Götz M. Cortical and striatal neurone number in Huntington’s disease. Acta Neuropathol. 1994:88(4): 320–333. https://doi.org/ 10.1007/BF00310376. [DOI] [PubMed] [Google Scholar]
- Hof PR, Morrison JH. Quantitative analysis of a vulnerable subset of pyramidal neurons in Alzheimer’s disease: II. Primary and secondary visual cortex. J Comp Neurol. 1990:301(1): 55–64. https://doi.org/ 10.1002/cne.903010106. [DOI] [PubMed] [Google Scholar]
- Hof PR, Morrison JH, Cox K. Quantitative analysis of a vulnerable subset of pyramidal neurons in Alzheimer’s disease: I. Superior frontal and inferior temporal cortex. J Comp Neurol. 1990:301(1): 44–54. https://doi.org/ 10.1002/cne.903010105. [DOI] [PubMed] [Google Scholar]
- Hof PR, Archin N, Osmand A, Dougherty J, Wells C, Bouras C, Morrison J. Posterior cortical atrophy in Alzheimer’s disease: analysis of a new case and re-evaluation of a historical report. Acta Neuropathol. 1993:86(3): 215–223. https://doi.org/ 10.1007/BF00304135. [DOI] [PubMed] [Google Scholar]
- Iglesias JE, Insausti R, Lerma-Usabiaga G, Bocchetta M, Van Leemput K, Greve DN, Van der Kouwe A, Fischl B, Caballero-Gaudes C, Paz-Alonso PM, et al. A probabilistic atlas of the human thalamic nuclei combining ex vivo MRI and histology. NeuroImage. 2018:183:314–326. https://doi.org/ 10.1016/j.neuroimage.2018.08.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Iglesias JE, Billot B, Balbastre Y, Magdamo C, Arnold SE, Das S, Edlow BL, Alexander DC, Golland P, Fischl B. SynthSR: a public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry. Sci Adv. 2023:9(5): eadd3607. https://doi.org/ 10.1126/sciadv.add3607. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Isensee F, Jaeger PF, Kohl SA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021:18(2): 203–211. https://doi.org/ 10.1038/s41592-020-01008-z. [DOI] [PubMed] [Google Scholar]
- Jia F, Wong WH, Zeng T. DDUNet: dense dense U-Net with applications in image denoising. In: Proceedings of the IEEE/CVF International Conference on Computer Vision; 2021; The Institute of Electrical and Electronics Engineers. p. 354–364.
- Jonkman LE, Klaver R, Fleysher L, Inglese M, Geurts JJ. The substrate of increased cortical FA in MS: a 7T post-mortem MRI and histopathology study. Mult Scler J. 2016:22(14): 1804–1811. https://doi.org/ 10.1177/1352458516635290. [DOI] [PubMed] [Google Scholar]
- Kamnitsas K, Ferrante E, Parisot S, Ledig C, Nori AV, Criminisi A, Rueckert D, Glocker B. Deepmedic for brain tumor segmentation. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: Second International Workshop, BrainLes 2016, with the Challenges on BRATS, ISLES and mTOP 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, 2016 October 17, Revised Selected Papers 2; 2016; Springer International Publishing. p. 138–149.
- Kamnitsas K, Ledig C, Newcombe VF, Simpson JP, Kane AD, Menon DK, Rueckert D, Glocker B. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal. 2017:36:61–78. https://doi.org/ 10.1016/j.media.2016.10.004. [DOI] [PubMed] [Google Scholar]
- Karst AT, Hutsler JJ. Two-dimensional analysis of the supragranular layers in autism spectrum disorder. Res Autism Spectr Disord. 2016:32:96–105. https://doi.org/ 10.1016/j.rasd.2016.09.004. [DOI] [Google Scholar]
- Keuken MC, Isaacs BR, Trampel R, Van Der Zwaag W, Forstmann B. Visualizing the human subcortex using ultra-high field magnetic resonance imaging. Brain Topogr. 2018:31(4): 513–545. https://doi.org/ 10.1007/s10548-018-0638-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Khandelwal P, Duong MT, Sadaghiani S, Lim S, Denning AE, Chung E, Ravikumar S, Arezoumandan S, Peterson C, Bedard M, et al. Automated deep learning segmentation of high-resolution 7 tesla postmortem MRI for quantitative analysis of structure-pathology correlations in neurodegenerative diseases. Imaging Neurosci. 2024:2:1–30. https://doi.org/ 10.1162/imag_a_00171. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim S, Sakaie K, Blümcke I, Jones S, Lowe MJ. Whole-brain, ultra-high spatial resolution ex vivo MRI with off-the-shelf components. Magn Reson Imaging. 2021:76:39–48. https://doi.org/ 10.1016/j.mri.2020.11.002. [DOI] [PubMed] [Google Scholar]
- Kingma DP, Ba J. Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR), San Diega, CA, USA, 2015.
- Leprince Y, Poupon F, Delzescaux T, Hasboun D, Poupon C, Rivière D. Combined Laplacian-equivolumic model for studying cortical lamination with ultra high field MRI (7 T). In: 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI); 2015; The Institute of Electrical and Electronics Engineers. p. 580–583. [Google Scholar]
- Lu Z, Chen Y. Single image super-resolution based on a modified U-Net with mixed gradient loss. In: Signal, Image and Video Processing; 2022. Springer London. p. 1–9.
- Lüsebrink F, Mattern H, Yakupov R, Acosta-Cabronero J, Ashtarayeh M, Oeltze-Jafra S, Speck O. Comprehensive ultrahigh resolution whole brain in vivo MRI dataset as a human phantom. Sci Data. 2021:8(1): 138. https://doi.org/ 10.1038/s41597-021-00923-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Markov NT, Vezoli J, Chameau P, Falchier A, Quilodran R, Huissoud C, Lamy C, Misery P, Giroud P, Ullman S, et al. Anatomy of hierarchy: feedforward and feedback pathways in macaque visual cortex. J Comp Neurol. 2014:522(1): 225–259. https://doi.org/ 10.1002/cne.23458. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McColgan P, Helbling S, Vaculčiaková L, Pine K, Wagstyl K, Attar FM, Edwards L, Papoutsi M, Wei Y, Van den Heuvel MP, et al. Relating quantitative 7T MRI across cortical depths to cytoarchitectonics, gene expression and connectomics. Hum Brain Mapp. 2021:42(15): 4996–5009. https://doi.org/ 10.1002/hbm.25595. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Milletari F, Navab N, Ahmadi S-A. V-net: fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV); 2016; The Institute of Electrical and Electronics Engineers. p. 565–571. [Google Scholar]
- Negrón-Oyarzo I, Lara-Vásquez A, Palacios-García I, Fuentealba P, Aboitiz F. Schizophrenia and reelin: a model based on prenatal stress to study epigenetics, brain development and behavior. Biol Res. 2016:49(1): 1–10. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ouali Y, Hudelot C, Tami M. Semi-supervised semantic segmentation with cross-consistency training. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2020. The Institute of Electrical and Electronics Engineers. p. 12674–12684.
- Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, et al. PyTorch: an imperative style, high-performance deep learning library. Adv Neural Inf Proces Syst. 2019:32:1–12. [Google Scholar]
- Perone CS, Calabrese E, Cohen-Adad J. Spinal cord gray matter segmentation using deep dilated convolutions. Sci Rep. 2018:8(1): 5966. https://doi.org/ 10.1038/s41598-018-24304-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pesce L, Scardigli M, Gavryusev V, Laurino A, Mazzamuto G, Brady N, Sancataldo G, Silvestri L, Destrieux C, Hof PR, et al. 3D molecular phenotyping of cleared human brain tissues with light-sheet fluorescence microscopy. Commun Biol. 2022:5(1): 447. https://doi.org/ 10.1038/s42003-022-03390-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Polimeni JR, Fischl B, Greve DN, Wald LL. Laminar analysis of 7 T BOLD using an imposed spatial activation pattern in human V1. NeuroImage. 2010:52(4): 1334–1346. https://doi.org/ 10.1016/j.neuroimage.2010.05.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Popescu V, Klaver R, Versteeg A, Voorn P, Twisk JW, Barkhof F, Geurts JJ, Vrenken H. Postmortem validation of MRI cortical volume measurements in MS. Hum Brain Mapp. 2016:37(6): 2223–2233. https://doi.org/ 10.1002/hbm.23168. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Puonti O, Iglesias JE, Van Leemput K. Fast and sequence-adaptive whole-brain segmentation using parametric Bayesian modeling. NeuroImage. 2016:143:235–249. https://doi.org/ 10.1016/j.neuroimage.2016.09.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Radiuk P. Applying 3D U-Net architecture to the task of multi-organ segmentation in computed tomography. Applied Computer Systems. 2020:25(1): 43–50. https://doi.org/ 10.2478/acss-2020-0005. [DOI] [Google Scholar]
- Romito-DiGiacomo RR, Menegay H, Cicero SA, Herrup K. Effects of Alzheimer’s disease on different cortical layers: the role of intrinsic differences in Aβ susceptibility. J Neurosci. 2007:27(32): 8496–8504. https://doi.org/ 10.1523/JNEUROSCI.1008-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III 18; 2015; Springer International Publishing. p. 234–241. [Google Scholar]
- Roy AG, Conjeti S, Navab N, Wachinger C; Alzheimer’s Disease Neuroimaging Initiative. QuickNAT: a fully convolutional network for quick and accurate segmentation of neuroanatomy. NeuroImage. 2019:186:713–727. [DOI] [PubMed] [Google Scholar]
- Saygin ZM, Kliemann D, Iglesias JE, van der Kouwe AJ, Boyd E, Reuter M, Stevens A, Van Leemput K, McKee A, Frosch MP, et al. High-resolution magnetic resonance imaging reveals nuclei of the human amygdala: manual segmentation to automatic atlas. NeuroImage. 2017:155:370–382. https://doi.org/ 10.1016/j.neuroimage.2017.04.046. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sitek KR, Gulban OF, Calabrese E, Johnson GA, Lage-Castellanos A, Moerel M, Ghosh SS, De Martino F. Mapping the human subcortical auditory system using histology, postmortem MRI and in vivo MRI at 7t. elife. 2019:8:e48932. https://doi.org/ 10.7554/eLife.48932. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tan X, Shi S-H. Neocortical neurogenesis and neuronal migration. Wiley Interdiscip Rev Dev Biol. 2013:2(4): 443–459. https://doi.org/ 10.1002/wdev.88. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tóth K, Hofer KT, Kandrács Á, Entz L, Bagó A, Erőss L, Jordán Z, Nagy G, Sólyom A, Fabó D, et al. Hyperexcitability of the network contributes to synchronization processes in the human epileptic neocortex. J Physiol. 2018:596(2): 317–342. https://doi.org/ 10.1113/JP275413. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ushinsky A, Bardis M, Glavis-Bloom J, Uchio E, Chantaduly C, Nguyentat M, Chow D, Chang PD, Houshyar R. A 3D-2D hybrid U-Net convolutional neural network approach to prostate organ segmentation of multiparametric MRI. Am J Roentgenol. 2021:216(1): 111–116. https://doi.org/ 10.2214/AJR.19.22168. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Varadarajan D, Frost R, van der Kouwe A, Morgan L, Diamond B, Boyd E, Fogarty M, Stevens A, Fischl B, Polimeni JR. Edge-preserving B0 inhomogeneity distortion correction for high-resolution multi-echo ex vivo MRI at 7T. Int Soc Magn Reson Med. 2020:664. [Google Scholar]
- Varadarajan D, Balasubramanian M, Park DJ, Witzel T, Stockmann JP, Polimeni JR. Characterizing the acquisition protocol dependencies of B0 field mapping and the effects of eddy currents and spoiling. Proc Int Soc Magn Reson Med. 2021:29:3552–3552. [Google Scholar]
- Vezoli J, Magrou L, Goebel R, Wang X-J, Knoblauch K, Vinck M, Kennedy H. Cortical hierarchy, dual counterstream architecture and the importance of top-down generative networks. NeuroImage. 2021:225:117479. https://doi.org/ 10.1016/j.neuroimage.2020.117479. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Waehnert M, Dinse J, Weiss M, Streicher MN, Waehnert P, Geyer S, Turner R, Bazin P-L. Anatomically motivated modeling of cortical laminae. NeuroImage. 2014:93:210–220. https://doi.org/ 10.1016/j.neuroimage.2013.03.078. [DOI] [PubMed] [Google Scholar]
- Wagstyl K, Larocque S, Cucurull G, Lepage C, Cohen JP, Bludau S, Palomero-Gallagher N, Lewis LB, Funck T, Spitzer H, et al. BigBrain 3D atlas of cortical layers: cortical and laminar thickness gradients diverge in sensory and motor cortices. PLoS Biol. 2020:18(4): e3000678. https://doi.org/ 10.1371/journal.pbio.3000678. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yang S, Yang Z, Fischer K, Zhong K, Stadler J, Godenschweger F, Steiner J, Heinze H-J, Bernstein H-G, Bogerts B, et al. Integration of ultra-high field MRI and histology for connectome based research of brain disorders. Front Neuroanat. 2013:7:31. https://doi.org/ 10.3389/fnana.2013.00031. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhang Y, Liao Q, Ding L, Zhang J. Bridging 2D and 3D segmentation networks for computation-efficient volumetric medical image segmentation: an empirical study of 2.5 D solutions. Comput Med Imaging Graph. 2022:99:102088. https://doi.org/ 10.1016/j.compmedimag.2022.102088. [DOI] [PubMed] [Google Scholar]
- Zheng H, Lin L, Hu H, Zhang Q, Chen Q, Iwamoto Y, Han X, Chen Y-W, Tong R, Wu J. Semi-supervised segmentation of liver using adversarial learning with deep atlas prior. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part VI 22; 2019; Springer International Publishing. p. 148–156.
- Zimmermann J, Goebel R, De Martino F, Van de Moortele P-F, Feinberg D, Adriany G, Chaimow D, Shmuel A, Uğurbil K, Yacoub E. Mapping the organization of axis of motion selective features in human area MT using high-field fMRI. PLoS One. 2011:6(12): e28716. https://doi.org/ 10.1371/journal.pone.0028716. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zwanenburg JJ, Hendrikse J, Luijten PR. Generalized multiple-layer appearance of the cerebral cortex with 3D FLAIR 7.0-T MR imaging. Radiology. 2012:262(3): 995–1001. https://doi.org/ 10.1148/radiol.11110812. [DOI] [PubMed] [Google Scholar]