Abstract
In human neuroimaging, brain atlases are essential for segmenting regions of interest (ROIs) and comparing subjects in a common coordinate frame. State-of-the-art atlases derived from histology1–3 provide exquisite three-dimensional cytoarchitectural maps but lack probabilistic labels throughout the whole brain: that is, the likelihood of each location belonging to a given ROI. Here we present NextBrain, a probabilistic histological atlas of the whole human brain. We developed artificial intelligence-enabled methods to align roughly 10,000 histological sections from five whole brain hemispheres into three-dimensional volumes and to produce delineations for 333 ROIs on these sections. We also created a companion Bayesian tool for automatic segmentation of these ROIs in magnetic resonance imaging (MRI) scans. We showcase two applications of the atlas: segmentation of ultra-high-resolution ex vivo MRI and volumetric analysis of Alzheimer’s disease using in vivo MRI. We publicly release raw and aligned data, an online visualization tool, the atlas, the segmentation tool, and ground truth delineations for a high-resolution ex vivo hemisphere used in validation. By enabling researchers worldwide to automatically analyse brain MRIs at a higher level of granularity, NextBrain holds promise to increase the specificity of findings and accelerate our quest to understand the human brain in health and disease.
Subject terms: Biomedical engineering, Biophysical models
NextBrain is an open source, probabilistic atlas of the entire human brain, assembled using artificial-intelligence-enabled registration and segmentation methods to reconstruct the multimodal serial histology of five human half brains, and which can be used to automatically segment brain MRI scans into 333 regions.
Main
MRI enables three-dimensional (3D) imaging of the human brain in vivo with millimetre resolution. Neuroimaging packages like FreeSurfer4, FSL5 and SPM6 enable large-scale studies with thousands of MRI scans. A core component of these packages is digital atlases: reference 3D brain images that comprise image intensities, neuroanatomical labels or both. (We note that the cerebral cortex is often modelled with specific atlases defined on surface coordinate systems rather than 3D images.) Atlases enable comparison of different subjects in a common coordinate frame (CCF). When they include neuroanatomical labels, atlases also provide prior spatial information for analyses such as automated image segmentation7.
Most volumetric atlases are built by averaging in vivo MRI scans from many subjects. However, their resolution (roughly 1 mm) is insufficient to study brain subregions with different function and connectivity8. Ex vivo MRI yields roughly 100-μm resolution9–12 but still fails to visualize cytoarchitecture. Histology is a microscopic modality that addresses this issue. Earlier versions of histological atlases were printed and comprised a small number of sections13. Subsequent efforts combined serial histology with image registration to produce 3D histological atlases14. These were mapped to in vivo scans of living subjects by means of intermediate 3D MRI templates (for example, the Montreal Neurological Institute (MNI) atlas15) or directly with Bayesian methods.
Earlier 3D histological atlases modelled only one brain region (for example, thalamus, basal ganglia16–18). More recent efforts targeted the whole brain. BigBrain1 comprises more than 7,000 histological sections of a single brain, but without labels. Its follow-up, Julich-Brain2, aggregates data from 23 individuals, with community-sourced labels for 248 cytoarchitectonic areas mapped to MNI space—albeit with limited accuracy and only partial subcortical labelling19. The Allen reference brain3 has comprehensive anatomical annotations but only on a sparse set of sections of a single specimen. The Allen MNI template is a labelling of the MNI atlas with the Allen anatomical protocol, but with only a fraction of the labels and less accurate delineations owing to limited resolution and contrast. The Ahead brains20 comprise quantitative MRI and registered 3D histology for two separate specimens, but labels are available for only a few dozen structures and are automated rather than manual. Further details on these atlases can be found in the ‘Extended Introduction’ in the Supplementary Information.
Although existing histological atlases provide exquisite 3D cytoarchitectural maps and some degree of MRI–histology integration, there are at present neither (1) datasets with densely labelled 3D histology of the whole brain nor (2) probabilistic atlases built from such datasets, which would enable analyses such as Bayesian segmentation or CCF mapping of the whole brain.
To address these issues, we present NextBrain, a densely labelled probabilistic atlas of the human brain built from histology images. We used custom artificial-intelligence-enabled registration and segmentation methods to assemble 3D reconstructions of multimodal serial histology of five human half brains, semi-automatically segment them into 333 ROIs and average the labels into the probabilistic atlas. NextBrain is open source and includes the atlas, a companion Bayesian segmentation method, the data (with an online visualization tool) and ground truth delineations for a 100-μm isotropic ex vivo scan12.
Densely labelled 3D histology of five human hemispheres
The NextBrain workflow is summarized in Fig. 1 and detailed in Methods. The first result of the pipeline (Fig. 1a–g) is a multimodal dataset with human hemispheres from five donors (three right, two left), including half cerebellum and brainstem. Each of the five cases comprises accurately aligned high-resolution ex vivo MRI, serial histology with hematoxylin and eosin (H&E) and Luxol fast blue (LFB) stains, and dense ground truth segmentations of 333 cortical and subcortical brain ROIs.
Fig. 1. NextBrain workflow.
a, Photograph of formalin-fixed hemisphere (lateral view). b, High-resolution (400 μm) ex vivo MRI scan, FreeSurfer segmentation and extracted pial surface (parcellated with FreeSurfer). Left, sagittal slice of MRI. Centre, corresponding FreeSurfer segmentation. Right, 3D rendering of reconstructed and parcellated pial surface. c, Tissue slabs and blocks, before and after paraffin embedding. Left, blocked coronal slice of the cerebrum. Right, blockface photo of a cerebral block. d, Histology: coronal section of cerebrum stained with LFB (left) and H&E (right). e, Artificial-intelligence-assisted labelling of 333 ROIs on LFB. Left, cerebrum; centre, brainstem; right, cerebellum28. f, Initialization of affine alignment of tissue blocks using a custom registration algorithm that minimizes overlap and gaps between blocks. g, Refinement of registration with histology and nonlinear transform24,25. Reconstructed coronal slice of LFB (left), H&E (middle) and labels (right), overlaid on MRI, after nonlinear registration with artificial intelligence and robust Bayesian refinement. h, Orthogonal slices of our 3D probabilistic atlas. Left, sagittal; middle, coronal; right, axial. Each voxel is painted with a linear combination of the colours of the labels, weighted by their probabilities. i, Automated Bayesian segmentation of an in vivo scan into 333 ROIs using the atlas. The atlas can also be used for segmenting ex vivo MRI and as CCF for population analyses.
Aligning the histology of a case is analogous to solving a 2,000-piece jigsaw puzzle in 3D, with the ex vivo MRI as reference (similar to the image on the box cover), and with pieces that are deformed by sectioning and mounting on glass slides—with occasional tissue folding or tearing. This problem falls outside the scope of existing intermodality registration techniques21, including slice-to-volume22 and 3D histology reconstruction methods14, which do not have to address the joint constraints of thousands of sections acquired in non-parallel planes as part of different blocks.
Instead, we solve this challenging problem with a custom, state-of-the-art image registration framework (Fig. 2), which includes three components specifically developed for this project: (1) a differentiable regularizer that minimizes overlap of different blocks and gaps in between23, (2) an artificial intelligence registration method that uses contrastive learning to provide highly accurate alignment of corresponding brain tissue across MRI and histology24 and (3) a Bayesian refinement technique based on Lie algebra that guarantees the 3D smoothness of the reconstruction across modalities, even in the presence of outliers due to tissue folding and tearing25. We note that this is an evolution of our previously presented pipeline26, which incorporates the aforementioned contrastive artificial intelligence method and jointly optimizes the affine and nonlinear transforms to achieve a 32% reduction in registration error (details below).
Fig. 2. 3D histology reconstruction of Case 1.
a, Coronal slice of 3D reconstruction; boundaries between blocks are noticeable from uneven staining. Joint registration minimizes overlap and gaps between blocks (this reconstructed slice comprises four different blocks). b, Accurate intermodality registration with artificial intelligence techniques. Registered MRI, LFB and H&E histology of a block, with tissue boundaries (traced on LFB) overlaid. c, Orthogonal view of reconstruction, which is smooth thanks to the Bayesian refinement and avoids gaps and overlaps thanks to the regularizer. d, Visualization of 3D landmark registration errors for Case 1. Left, visualization of landmarks. Right, histogram, mean and s.d. of error magnitude for this case, compared with our previous pipeline. Error (mean ± s.d.): 1.27 ± 0.59 mm. Error26: 1.42 ± 0.72 mm. See Table 1 and Extended Data Figs. 1, 2, 3 and 4 for results on the other cases.
Qualitatively, it is apparent from Fig. 2 that a very high level of accuracy is achieved for the spatial alignment, despite the non-parallel sections and distortions in the raw data. The regularizer effectively aligns the block boundaries in 3D without gaps or overlap (Fig. 2a–c), with minor discontinuities across blocks (for example, in the temporal lobe). When the segmentations of different blocks are combined (Fig. 2a, right), the result is a smooth mosaic of ROI labels.
The artificial-intelligence-enabled registration across MRI and histological stains is exemplified in Fig. 2b. Overlaying the main ROI contours on the different modalities shows the highly accurate alignment of the three modalities (MRI, H&E, LFB) even in convoluted regions of the cortex and the basal ganglia. The mosaic of modalities also highlights the accurate alignment at the substructural level: for example, subregions of the hippocampus.
Figure 2c shows the 3D reconstruction in orientations orthogonal to the main plane of sectioning (coronal). This illustrates not only the lack of gaps and overlaps between blocks but also the smoothness that is achieved within blocks. This is thanks to the Bayesian refinement algorithm, which combines the best features of methods that (1) align each section independently (high fidelity to the reference, but jagged reconstructions) and (2) those that align sections to their neighbours (smooth reconstructions, but with a ‘banana effect’: that is, straightening of curved structures).
To quantitatively evaluate the 3D reconstruction accuracy, we used 250 manually placed pairs of landmarks to compute registration errors (50 landmarks per case); landmarks are known to be a better proxy for registration error than label overlap metrics27. Table 1 displays means and standard deviations of the registration error for each of the five cases, comparing our method with our previous pipeline26. Histograms and 3D visualizations of the errors for individual cases can be found in Fig. 2d and in Extended Data Figs. 1d, 2d, 3d and 4d. Our method yields an average error of 0.99 mm (s.d., 0.51 mm; standard error, 0.03 mm), a considerable reduction with respect to ref. 26, which yielded 1.44 mm (s.d., 0.58 mm; standard error, 0.04 mm). The difference between the two methods is highly significant: P values computed with a non-parametric paired Wilcoxon test were under 0.001 for all cases, and the P value for all 250 landmarks combined was P < 10−21; see details in Table 1. The spatial distribution of the error is further visualized with kernel regression in Extended Data Fig. 5, which shows that this distribution is fairly uniform: that is, there is no obvious consistent pattern across cases.
Table 1.
3D registration errors (in millimetres) for our method versus ref. 26
| Case | Error (μ ± σ), our method | Error (μ ± σ), previous method26 | P value (paired Wilcoxon) |
|---|---|---|---|
| Case 1 | 1.27 ± 0.59 | 1.42 ± 0.72 | 8.8 × 10−4 |
| Case 2 | 0.98 ± 0.55 | 1.49 ± 0.65 | 5.6 × 10−5 |
| Case 3 | 0.80 ± 0.32 | 1.41 ± 0.68 | 2.0 × 10−7 |
| Case 4 | 1.05 ± 0.50 | 1.49 ± 0.70 | 1.5 × 10−4 |
| Case 5 | 0.83 ± 0.57 | 1.39 ± 0.66 | 6.2 × 10−7 |
| All combined | 0.99 ± 0.51 | 1.44 ± 0.68 | 4.0 × 10−22 |
We used N = 50 for each case (250 all combined). Statistical significance is computed using a two-sided paired Wilcoxon test.
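The landmark-based evaluation above reduces to computing Euclidean distances between paired landmarks and comparing the two pipelines with a paired Wilcoxon signed-rank test. The following is a minimal sketch with synthetic stand-in landmarks (the variable names and noise levels are illustrative, not the actual data):

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic stand-in for one case: 50 reference landmarks (mm) and their
# registered counterparts under two hypothetical pipelines.
rng = np.random.default_rng(0)
n = 50
fixed = rng.uniform(0, 100, size=(n, 3))              # reference landmark coordinates
warped_new = fixed + rng.normal(0, 0.6, size=(n, 3))  # toy stand-in, new pipeline
warped_old = fixed + rng.normal(0, 0.9, size=(n, 3))  # toy stand-in, previous pipeline

# Registration error = Euclidean distance between paired landmarks
err_new = np.linalg.norm(warped_new - fixed, axis=1)
err_old = np.linalg.norm(warped_old - fixed, axis=1)

# Paired, two-sided Wilcoxon signed-rank test on the per-landmark errors
stat, p = wilcoxon(err_new, err_old)
```

The test is non-parametric, so it makes no normality assumption about the error distributions, which are typically right-skewed.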
Extended Data Fig. 1. 3D reconstruction of Case 2.
The visualisation follows the same convention as in Fig. 2: (A) Coronal slice of the 3D reconstruction. (B) Registered MRI, LFB, and H&E histology of a block, with tissue boundaries (traced on LFB) overlaid. (C) Orthogonal view of reconstruction, which is smooth and avoids gaps and overlaps. (D) Visualization of 3D landmark registration errors for this specific case (left); histogram of their magnitude (right); and their mean ± standard deviation (bottom), compared with our previous pipeline (Mancini et al.26).
Extended Data Fig. 2. 3D reconstruction of Case 3.
The visualisation follows the same convention as in Fig. 2: (A) Coronal slice of the 3D reconstruction. (B) Registered MRI, LFB, and H&E histology of a block, with tissue boundaries (traced on LFB) overlaid. (C) Orthogonal view of reconstruction, which is smooth and avoids gaps and overlaps. (D) Visualization of 3D landmark registration errors for this specific case (left); histogram of their magnitude (right); and their mean ± standard deviation (bottom), compared with our previous pipeline (Mancini et al.26).
Extended Data Fig. 3. 3D reconstruction of Case 4.
The visualisation follows the same convention as in Fig. 2: (A) Coronal slice of the 3D reconstruction. (B) Registered MRI, LFB, and H&E histology of a block, with tissue boundaries (traced on LFB) overlaid. (C) Orthogonal view of reconstruction, which is smooth and avoids gaps and overlaps. (D) Visualization of 3D landmark registration errors for this specific case (left); histogram of their magnitude (right); and their mean ± standard deviation (bottom), compared with our previous pipeline (Mancini et al.26).
Extended Data Fig. 4. 3D reconstruction of Case 5.
The visualisation follows the same convention as in Fig. 2: (A) Coronal slice of the 3D reconstruction. (B) Registered MRI, LFB, and H&E histology of a block, with tissue boundaries (traced on LFB) overlaid. (C) Orthogonal view of reconstruction, which is smooth and avoids gaps and overlaps. (D) Visualization of 3D landmark registration errors for this specific case (left); histogram of their magnitude (right); and their mean ± standard deviation (bottom), compared with our previous pipeline (Mancini et al.26).
Extended Data Fig. 5. 3D landmark registration error.
Sagittal, coronal, and axial slices of the continuous maps of the 3D landmark registration error. The maps are computed from the discrete landmarks (displayed in Fig. 2d and Extended Data Figs. 1–4d) using Gaussian kernel regression with σ = 10 mm. There is no clear spatial pattern for the anatomical distribution of the error across subjects.
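The continuous error maps above can be obtained from the discrete landmarks with Nadaraya–Watson kernel regression. A minimal NumPy sketch, assuming landmark coordinates and error magnitudes are given in millimetres (the function name and array shapes are illustrative):

```python
import numpy as np

def kernel_regression(points, values, grid, sigma=10.0):
    """Nadaraya-Watson estimate of a smooth scalar field from scattered samples.
    points: (N, 3) landmark coordinates (mm); values: (N,) error magnitudes;
    grid: (M, 3) query coordinates; sigma: Gaussian bandwidth in mm (10 mm in
    the figure)."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (M, N) squared distances
    w = np.exp(-0.5 * d2 / sigma ** 2)                           # Gaussian weights
    return (w * values[None, :]).sum(1) / w.sum(1)               # weighted average per query
```

Querying at a landmark with a bandwidth much smaller than the inter-landmark spacing essentially returns that landmark's own error; larger bandwidths trade spatial resolution for smoothness.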
Our pipeline is widely applicable as it produces accurate 3D reconstructions from blocked tissue in standard-sized cassettes, sectioned with a standard microtome. The computer code and aligned dataset are freely available in our public repository. For educational and data inspection purposes, we have built an online visualization tool for the multimodality data, which is available at https://github-pages.ucl.ac.uk/BrainAtlas.
Supplementary Video 1 illustrates the aligned data, which include (1) MRI at 400-μm isotropic resolution, (2) aligned H&E and LFB histology digitized at 4-μm resolution (with 250-μm or 500-μm spacing, depending on the brain location) and (3) ROI segmentations, obtained with a semi-automated artificial intelligence method28. The ROIs comprise 34 cortical labels (following the Desikan–Killiany atlas29) and 299 subcortical labels (following different atlases for different brain regions; Methods and Supplementary Information). This public dataset enables researchers worldwide to conduct their own studies not only in 3D histology reconstruction but also in other fields, such as high-resolution segmentation of MRI or histology30, MRI-to-histology and histological stain-to-stain image translation31, deriving MRI signal models from histology32 and many others.
A next-generation probabilistic atlas of the human brain
The labels from the five human hemispheres were coregistered and merged into a probabilistic atlas. This was achieved with a method that alternately registers the volumes to the estimate of the template and updates the template by means of averaging33. The registration method is diffeomorphic34 to ensure preservation of the neuroanatomic topology (for example, ROIs do not split or disappear in the deformation process). Crucially, we use an initialization based on the MNI template, which serves two important purposes: preventing biases towards any of the cases (which would happen if we initialized with one of them) and ‘centring’ our atlas on a well-established CCF computed from 305 subjects, which largely mitigates our relatively low number of cases. Because the MNI template is a greyscale volume, the first iteration of atlas building uses registrations computed with the ex vivo MRI scans. Subsequent iterations register labels directly with a metric based on the probability of the discrete labels according to the atlas33.
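The template-update step of this alternating scheme amounts to averaging one-hot encodings of the registered label volumes into voxel-wise probabilities. A toy sketch with the registration itself stubbed out (in the actual pipeline it is diffeomorphic; the function name and 2D "volumes" here are illustrative only):

```python
import numpy as np

def average_labels(aligned_label_vols, n_labels):
    """One template update: given integer label volumes already registered to
    the current template estimate, average their one-hot encodings into
    voxel-wise label probabilities."""
    onehot = [np.eye(n_labels)[vol] for vol in aligned_label_vols]  # each (..., n_labels)
    return np.mean(onehot, axis=0)                                  # probabilities sum to 1

# toy example: two aligned 2x2 label maps with labels in {0, 1}
vols = [np.array([[0, 1], [1, 1]]), np.array([[0, 1], [0, 1]])]
probs = average_labels(vols, n_labels=2)
```

Voxels where all cases agree get probability 1 for that label; voxels where cases disagree get fractional probabilities, which is exactly the uncertainty the atlas encodes.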
Figure 3 shows close-ups of orthogonal slices of the atlas, which models voxel-wise probabilities for the 333 ROIs on a 0.2-mm isotropic grid. The resolution and detail of the atlas represent a substantial advance with respect to the SAMSEG atlas35 now in FreeSurfer (Fig. 3a): SAMSEG models 13 brain ROIs at 1-mm resolution and is representative of the level of detail of existing probabilistic atlases that cover all brain regions. The figure also shows roughly corresponding slices of the manual labelling of the MNI atlas with the simplified Allen protocol3. Compared with NextBrain, this labelling is not probabilistic and does not include many histological boundaries that are invisible on the MNI template (for example, hippocampal subregions, in violet). For this reason, it only has 138 ROIs—whereas NextBrain has 333.
Fig. 3. NextBrain probabilistic atlas.
a, Comparison with whole brain atlases. Portions of the NextBrain probabilistic atlas (which has 333 ROIs), the SAMSEG atlas in FreeSurfer35 (13 ROIs) and the manual labels of MNI based on the Allen atlas3 (138 ROIs). b, Close-up of three orthogonal slices of NextBrain. The colour coding follows the convention of the Allen atlas3, where the hue indicates the structure (for example, purple is thalamus, violet is hippocampus, green is amygdala) and the saturation is proportional to neuronal density. The colour of each voxel is a weighted sum of the colour corresponding to the ROIs, weighted by the corresponding probabilities at that voxel. The red lines separate ROIs on the basis of the most probable label at each voxel, thus highlighting boundaries between ROIs of similar colour; we note that the jagged boundaries are a common discretization artefact of probabilistic atlases in regions where two or more labels mix continuously: for example, the two layers of the cerebellar cortex.
A comparison between labelled sections of the printed atlas by ref. 13 and roughly equivalent sections of the Allen reference brain and NextBrain is included in the Supplementary Information. The agreement between the three atlases is generally good, especially for the outer boundaries of the whole structures: for example, the whole hippocampus, amygdala or thalamus. Mild differences can be found in the delineation of substructures, both cortical and subcortical (for example, subdivision of the accumbens), mainly due to (1) the forced choice of applying arbitrary anatomical criteria in both atlases because of lack of contrast in smaller regions, (2) different anatomical definitions and (3) the probabilistic nature of NextBrain. We emphasize that these differences are not exclusive to NextBrain, as they are also present between Mai–Paxinos and Allen.
Close-ups of NextBrain slices centred on representative brain regions are shown in Fig. 3b, with boundaries between the ROIs (computed from the maximum likelihood segmentation) overlaid in red. These highlight the anatomical granularity of the new atlas, with dozens of subregions for areas such as the thalamus, hippocampus, amygdala, midbrain and so on. An overview of the complete atlas is shown in Supplementary Video 2, which illustrates the atlas construction procedure and flies through all the slices in axial, coronal and sagittal view.
The probabilistic atlas is freely available as part of our segmentation module distributed with FreeSurfer. The maximum likelihood and colour-coded probabilistic maps (as in Fig. 3) can also be downloaded separately from our public repository for quick inspection and educational purposes. Developers of neuroimaging methods can freely capitalize on this resource, for example, by extending the atlas through combination with other atlases or manually tracing new labels; or by designing their own segmentation methods using the atlas. Neuroimaging researchers can use the atlas for fine-grained automated segmentation (as shown below) or as a highly detailed CCF for population analyses.
Segmentation of ultra-high-resolution ex vivo MRI
One of the new analyses that NextBrain enables is the automated fine-grained segmentation of ultra-high-resolution ex vivo MRI. Because motion is not a factor in ex vivo imaging, very long MRI scanning times can be used to acquire data at resolutions that are infeasible in vivo. One example is the publicly available 100-μm isotropic whole brain presented in ref. 12, which was acquired in a 100-hour session on a 7-T MRI scanner. Such datasets have huge potential in mesoscopic studies connecting microscopy with in vivo imaging36.
Volumetric segmentation of ultra-high-resolution ex vivo MRI can be highly advantageous in neuroimaging in two different ways: first, by supplementing such scans (like the 100-micron brain) with neuroanatomical information that augments their value as atlases (for example, as CCFs or for segmentation purposes37); and second, by enabling analyses of ex vivo MRI datasets at scale (for example, volumetry or shape analysis).
Dense manual segmentation of these datasets is practically infeasible, as it entails manually tracing ROIs on over 1,000 slices. Moreover, one typically seeks to label these images at a higher level of detail than in vivo (that is, more ROIs of smaller sizes), which exacerbates the problem. One may use semi-automated methods like the artificial-intelligence-assisted technique we used to build NextBrain (see the previous section), which limits the manual segmentation to one in every N slices28 (N = 4 in this work). However, such a strategy only ameliorates the problem to a certain degree, as tedious manual segmentation is still required for a significant fraction of slices.
A more appealing alternative is thus automated segmentation. However, existing approaches have limitations, as they either (1) were designed for 1-mm in vivo scans and do not capitalize on the increased resolution of ex vivo MRI18,35 or (2) use neural networks trained with ex vivo scans but with a limited number of ROIs because of the immense labelling effort that is required to generate the training data30.
This limitation is circumvented by NextBrain: as a probabilistic atlas of neuroanatomy, it can be combined with well-established Bayesian segmentation methods (which are adaptive to MRI contrast) to segment ultra-high-resolution ex vivo MRI scans into 333 ROIs. We have released in FreeSurfer an implementation that segments full brain scans in about 1 h, using a desktop equipped with a graphics processing unit.
To quantitatively evaluate the segmentation method, we have created a gold standard segmentation of the public 100-micron brain12, which we are publicly releasing as part of NextBrain. To make this burdensome task feasible, we simplified it in five ways: (1) downsampling the data to 200-μm resolution, (2) labelling only one hemisphere, (3) using the same semi-automated artificial intelligence method as in NextBrain for faster segmentation, (4) using FreeSurfer to automatically subdivide the cerebral cortex and (5) labelling only a subset of 98 visible ROIs (Supplementary Videos 3 and 4). Even with these simplifications, labelling the scan took more than 100 h of manual tracing effort.
We compared the gold standard labels with the automated segmentations produced by NextBrain using Dice overlap scores. Because the gold standard has fewer ROIs (particularly in the brainstem), we (1) clustered the ROIs in the automated segmentation that correspond with the ROIs in the gold standard and (2) used a version of NextBrain in which the brainstem ROIs are simplified to better match those of the gold standard (with 264 labels instead of 333). The results are shown in Extended Data Table 1. As expected, there is a clear link between size and Dice. Larger ROIs like the cerebral white matter or cortex have Dice around 0.9. The smaller ROIs have lower Dice, but very few are below 0.4—which is enough to localize ROIs. We note that the median Dice (0.667) is comparable with that reported by other Bayesian segmentation methods for brain subregions38.
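The Dice overlap used throughout this evaluation is the standard set-overlap measure, computed per ROI from the discrete label maps. A minimal sketch (array contents are a toy example):

```python
import numpy as np

def dice(seg_a, seg_b, label):
    """Dice overlap for one ROI between two segmentations (integer label maps):
    2|A n B| / (|A| + |B|), where A and B are the binary masks of `label`."""
    a, b = (seg_a == label), (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else float("nan")

# toy example: 1 = ROI, 0 = background
auto = np.array([1, 1, 1, 0, 0, 0])
gold = np.array([1, 1, 0, 0, 0, 0])
score = dice(auto, gold, label=1)   # 2*2 / (3+2) = 0.8
```

Because the denominator is the sum of the two ROI sizes, small ROIs are penalized heavily by even one-voxel boundary errors, which explains the size–Dice relationship reported above.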
Extended Data Table 1.
NextBrain segmentation performance on an ultra-high-resolution ex vivo MRI scan
Dice scores between the ground truth labels of the 100-μm ex vivo brain MRI scan presented in ref. 12 and the automated segmentations obtained with NextBrain. ROIs are listed in decreasing order of size (volume). The Dice scores are shown for segmentations obtained at two different resolutions: 200 μm (the resolution at which we created the ground truth labels) and 1 mm (which is representative of in vivo data). We note that the Dice scores are computed from labels made on the right hemisphere (since we did not label the left side of the brain). We also note that the labels “rest of hippocampus” and “rest of amygdala” correspond to voxels that did not clearly belong to any of the manually labelled nuclei, and therefore have no direct correspondence with ROIs in NextBrain.
Sample slices and their corresponding automated and manual segmentations are shown in Fig. 4. The exquisite resolution and contrast of the dataset enables our atlas to accurately delineate a large number of ROIs with very different sizes, including small nuclei and subregions of the hippocampus, amygdala, thalamus, hypothalamus, midbrain and so on. Differences in label granularity aside, the consistency between the automated and gold standard segmentation is qualitatively very strong.
Fig. 4. NextBrain segmentation of ultra-high-resolution MRI.

Automated Bayesian segmentation of publicly available ultra-high-resolution ex vivo brain MRI12 using the simplified version of NextBrain, and comparison with the gold standard (only available for the right hemisphere). We show two coronal, sagittal and axial slices. The MRI was resampled to 200-μm isotropic resolution for processing. As in previous figures, the segmentation uses the Allen colour map3 with boundaries overlaid in red. We note that the manual segmentation uses a coarser labelling protocol.
This is a highly comprehensive dense segmentation of a human brain MRI scan. As ex vivo datasets with tens of scans become available30,39 (for example, https://dandiarchive.org/dandiset/000026), our tool has great potential for augmenting mesoscopic studies of the human brain. Moreover, the labelled MRI that we are releasing is also valuable in other neuroimaging studies, for example, for training or evaluating segmentation algorithms; for ROI analysis in the high-resolution ex vivo space; or for volumetric analysis by means of registration-based segmentation.
Fine-grained analysis of in vivo MRI
NextBrain can also be used to automatically segment in vivo MRI scans at the resolution of the atlas (200-μm isotropic), yielding an extremely high level of detail. Scans used in research typically have isotropic resolution with voxel sizes ranging from 0.7 mm to 1.2 mm and therefore do not show all ROI boundaries with as much detail as ultra-high-resolution ex vivo MRI. However, many boundaries are still visible, including the external boundaries of brain structures (hippocampus, thalamus and so on) and some internal boundaries: for example, between the anteromedial and lateral posterior thalamus40. Bayesian segmentation capitalizes on these visible boundaries and combines them with the prior knowledge encoded in the atlas to produce the full subdivision—albeit with lower reliability for the indistinct boundaries10. A sample segmentation is shown in Fig. 1i.
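At its core, Bayesian segmentation combines the atlas prior with an intensity likelihood at every voxel. A heavily simplified per-voxel sketch with independent Gaussian likelihoods (the function name is illustrative; real implementations also estimate the Gaussian parameters from the scan itself, which is what makes them adaptive to MRI contrast, and model spatial deformation of the atlas):

```python
import numpy as np

def map_labels(intensity, prior, means, sigmas):
    """Per-voxel MAP labelling: posterior proportional to atlas prior times a
    Gaussian intensity likelihood. intensity: (V,) voxel intensities;
    prior: (V, K) atlas probabilities; means, sigmas: (K,) per-label Gaussian
    parameters (treated as known in this sketch)."""
    lik = np.exp(-0.5 * ((intensity[:, None] - means) / sigmas) ** 2) / sigmas
    post = prior * lik
    post /= post.sum(axis=1, keepdims=True)   # normalize posteriors per voxel
    return post.argmax(axis=1)                # most probable label per voxel

# toy example: two voxels, two labels with distinct intensity profiles
labels = map_labels(np.array([0.1, 0.9]),
                    np.array([[0.5, 0.5], [0.5, 0.5]]),
                    means=np.array([0.0, 1.0]),
                    sigmas=np.array([0.2, 0.2]))
```

Where the likelihoods of two labels are nearly identical (an invisible boundary), the posterior is dominated by the prior, which is why the subdivision of indistinct boundaries is less reliable.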
Evaluation of segmentation accuracy
We evaluated the in vivo segmentation quantitatively in two different experiments. First, we downsampled the ex vivo MRI scan from the previous section to 1-mm isotropic resolution (that is, the standard resolution of in vivo scans), segmented it at 200-μm resolution and computed Dice scores with the high-resolution reference. The results are displayed in Extended Data Table 1. The median Dice is 0.590, which is 0.077 lower than at 200 μm but still fair for such small ROIs38. Moreover, most Dice scores remain over 0.4, as for the ultra-high resolution, hinting that the priors can successfully provide a rough localization of internal boundaries, given the more visible external boundaries.
In a second experiment, we analysed the Dice scores produced by NextBrain in OpenBHB41, a public meta-dataset with roughly 1-mm isotropic T1-weighted scans of more than 3,000 healthy individuals acquired at more than 60 sites. Using FreeSurfer 7.0 as a silver standard, we computed Dice scores for our segmentations at the level of whole regions: that is, the level of granularity provided by FreeSurfer. Although these scores cannot assess segmentation accuracy at the subregion level, they do enable evaluation on a much larger multisite cohort, as well as comparison with the Allen MNI template—the only competing histological (or rather, histology-inspired) atlas that can segment the whole brain in vivo. The results (Extended Data Fig. 6) show that (1) NextBrain consistently outperforms the Allen MNI template, as expected from the fact that one atlas is probabilistic whereas the other is not; (2) NextBrain yields Dice scores in the range expected from Bayesian segmentation methods35—despite using only five cases, thanks to the excellent generalization ability of generative models42; and (3) despite being built from a set of older subjects, our mitigation strategy (anchoring NextBrain on MNI and using highly generalizable Bayesian segmentation) enables NextBrain to produce segmentations that are consistently accurate throughout the lifespan, as opposed to the Allen MNI template, which has a strong negative correlation between age and performance: r = −0.274, P < 10−55, compared with NextBrain (r = 0.046, P = 0.009). Please see Extended Data Fig. 6b,c for further details.
Extended Data Fig. 6. NextBrain's superior segmentation performance with respect to the Allen MNI template.
Dice scores for automated segmentations computed on the OpenBHB dataset (3,330 subjects), using the Allen MNI template and NextBrain, with FreeSurfer segmentations as reference. The scores are computed at the whole-region level, i.e., the level of granularity at which FreeSurfer segments. (A) Box plots for 11 representative ROIs. On each box, the central mark indicates the median, the edges of the box indicate the 25th and 75th percentiles, the whiskers extend to the most extreme data points not considered outliers, and the outliers are plotted individually as ‘+’. The abbreviations for the regions are: WM = white matter of the cerebrum, CT = cortex of the cerebrum, CWM = cerebellar white matter, CCT = cerebellar cortex, TH = thalamus, CA = caudate, PU = putamen, PA = pallidum, BS = brainstem, HP = hippocampus, AM = amygdala. (B) Scatter plot of Dice (averaged across the same 11 ROIs) vs age for the Allen MNI template. There is a clear negative correlation between age and accuracy (r = −0.274, p = 1.67 × 10⁻⁵⁶, two-sided test). (C) Scatter plot for NextBrain, whose accuracy is much more consistent across the lifespan, with almost no correlation with age (r = 0.046, p = 0.009, two-sided test).
Application to Alzheimer’s disease classification
To further compare NextBrain with the Allen MNI template, we used an Alzheimer’s disease classification task based on linear discriminant analysis (LDA) of ROI volumes (corrected by age and intracranial volume). Using a simple linear classifier on a task where strong differences are expected allows us to use classification accuracy as a proxy for the quality of the input features: that is, the ROI volumes derived from the automated segmentations. To enable direct comparison, we used a sample of 383 subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset43 (168 Alzheimer’s disease, 215 controls) that we used in previous publications10,11,40.
Using the ROI volumes estimated by FreeSurfer 7.0 (which do not include subregions) yields an area under the receiver operating characteristic curve (AUROC) equal to 0.911, with classification accuracy of 85.4% at its elbow. The Allen MNI template exploits subregion information to achieve AUROC = 0.929 and 86.9% accuracy. The increased segmentation accuracy and granularity of NextBrain enables it to achieve AUROC = 0.953 and 90.3% accuracy—with a significant increase in AUROC with respect to the Allen MNI template (P = 0.01 for a DeLong test). This AUROC is also higher than those of the specific ex vivo atlases we presented in previous work10,11,40—which range from 0.830 to 0.931.
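The AUROC values compared above have a convenient rank-based interpretation: the probability that a randomly chosen patient scores higher than a randomly chosen control. A minimal sketch with made-up scores (not the ADNI data):

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    # AUROC = P(random positive outranks random negative); ties count 1/2
    diff = scores_pos[:, None] - scores_neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

pos = np.array([2.0, 1.5, 0.9, 3.1])   # made-up criterion values, patients
neg = np.array([0.2, 1.0, -0.5])       # made-up criterion values, controls
print(auroc(pos, neg))                 # 11/12, since one pair is misordered
```

This equivalence with the Mann–Whitney U statistic is also what the DeLong test builds on when comparing two AUROCs.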
Application to fine-grained signature of ageing
We performed Bayesian segmentation with NextBrain on 705 subjects (aged 36–90, mean 59.6 years) from the Ageing HCP dataset44, which comprises high-quality in vivo scans at 0.8-mm resolution. We computed the volumes of the ROIs for every subject, corrected them for total intracranial volume (by division) and sex (by regression) and computed their Spearman correlation with age. We used the Spearman rather than Pearson correlation because, being rank-based, it is a better model for ageing trajectories as they are known to be nonlinear for wide age ranges45,46.
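The correction-then-correlation procedure can be sketched on synthetic data as follows (all variable names and effect sizes here are hypothetical, chosen only to mimic a volume that shrinks with age):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 300
age = rng.uniform(36, 90, n)
sex = rng.integers(0, 2, n).astype(float)
icv = rng.normal(1.4e6, 5e4, n)                    # intracranial volume, mm^3
# hypothetical ROI volume shrinking with age (all effect sizes made up)
vol = 5000 - 15 * age + 100 * sex + 0.001 * icv + rng.normal(0, 50, n)

v = vol / icv                                      # correct for ICV by division
X = np.column_stack([np.ones(n), sex])
beta, *_ = np.linalg.lstsq(X, v, rcond=None)
v_resid = v - X @ beta                             # regress out sex
rho, p = stats.spearmanr(v_resid, age)             # rank-based correlation
print(round(rho, 2))
```

Because Spearman correlation operates on ranks, it only assumes a monotonic (not linear) relationship between corrected volume and age.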
The result of this analysis is a highly comprehensive map of regional ageing of the human brain (Fig. 5a and Extended Data Fig. 7a; see also full trajectories for select ROIs in Extended Data Fig. 8). Cortically, we found significant negative correlations with age in the prefrontal cortex (marked with ‘a’ in Fig. 5a) and insula (b), whereas the temporal (c) and parahippocampal cortices (d) did not yield significant correlation; this is consistent with findings from studies of cortical thickness47,48. The white matter (e) is known to decline steadily after about 35 years45,46, and such negative correlation is also detected by NextBrain. Other general ageing patterns at the whole-structure level45,46 are also successfully captured, such as a steady volume decrease of the caudate, thalamus and putamen (f) and the volumetric reduction of the hippocampus, amygdala and globus pallidus.
Fig. 5. Fine-grained ageing signature using NextBrain.
We report the absolute value of Spearman correlation for ROI volumes versus age derived from in vivo MRI scans. a, Ageing HCP dataset. Image resolution, 0.8-mm isotropic; N, 705; age range, 36–90 years; mean age, 59.6 years; please see main text for meaning of markers (letters). b, OpenBHB dataset41, restricted to subjects with ages over 35 years to match Ageing HCP. Resolution, 1-mm isotropic; N, 431; age range, 36–86 years; mean age, 57.9 years. c, Full OpenBHB dataset. N, 3,220; age range, 6–86 years; mean age, 25.2 years; please note the different scale of the colour bar. The ROI volumes are corrected by intracranial volume (by division) and sex (by regression). Further slices are shown in Extended Data Fig. 6.
Extended Data Fig. 7. Fine-grained ageing signature using NextBrain (additional slices).
We report the absolute value of Spearman correlation for ROI volumes vs age derived from in vivo MRI scans (additional slices). The visualisation follows the same convention as in Fig. 5: (A) Ageing HCP dataset. (B) OpenBHB dataset, restricted to ages over 35. (C) Full OpenBHB dataset.
Extended Data Fig. 8. Ageing trajectories for select ROIs in HCP dataset.
Subregions of brain structures (thalamus, hippocampus, cortex, etc.) show differential ageing patterns. The red dots correspond to the ROI volumes of individual subjects, corrected by intracranial volume (by division) and sex (by regression). The blue lines represent the maximum likelihood fit of a Laplace distribution with location and scale parameters parametrised by a B-spline with four control points (equally spaced between 30 and 95 years). The continuous blue line represents the location, whereas the dashed lines represent the 95% confidence interval (equal to three times the scale parameter on either side of the location). Volumes of contralateral structures are averaged across left and right.
Importantly, NextBrain also unveils more granular patterns of the relationship between volumes and ageing in these regions. For example, the anterior caudate (g) showed a stronger negative correlation between age and volume than the posterior caudate (h). Similarly, the external segment of the globus pallidus (i) showed a stronger correlation than the internal segment (j)—an effect that was not observed in previous work studying the whole pallidum49. The ability to investigate separate subregions highlights a differential effect of ageing across brain networks, particularly a stronger effect on the regions of the limbic and prefrontal networks, given the correlations we found in the caudate head (g), insula (b), orbitofrontal cortex (k), amygdala and thalamus50. In the thalamus, the correlation is more significant in the mediodorsal (l), anteroventral (m) and pulvinar subnuclei (n), key regions in the limbic, lateral orbitofrontal and dorsolateral prefrontal circuits. In the hippocampus, subicular regions (o) correlate more strongly than the rest of the structure. The pattern of correlation strength is more homogeneous across subregions in the amygdala (key region in the limbic system), hypothalamus and cerebellum. We then revisited the OpenBHB dataset and performed the same regression analysis only for subjects older than 35 years, to match the age range of the Ageing HCP dataset (N = 431, aged 36–86 years, mean 57.9 years). The results are shown in Fig. 5b and Extended Data Fig. 7b. Despite the differences in acquisition and the huge heterogeneity of the OpenBHB dataset, the results are highly consistent with those from HCP—but with slightly lower significance, possibly owing to the larger voxel size (approximately twice the volume, because 1/0.8³ ≈ 2).
We also performed the same analysis with all 3,220 subjects in OpenBHB; see the results in Fig. 5c and Extended Data Fig. 7c. For many regions, widening the age range to 6–86 years (mean age 25.2) yields non-monotonic ageing curves and therefore weaker Spearman correlations. Therefore, these graphs highlight the regions whose volumes start decreasing with age the earliest, such as the putamen or medial thalamus. Many other patterns of association between age and ROI volumes remain very similar to those of the older populations (for example, basal ganglia or hippocampus).
The segmentation code is publicly available in FreeSurfer (https://surfer.nmr.mgh.harvard.edu/fswiki/HistoAtlasSegmentation) and can be run with a single line of code. This enables researchers worldwide to analyse their scans at a high level of detail without manual effort or highly specific neuroanatomical knowledge.
Discussion and conclusion
NextBrain is a next-generation probabilistic human brain atlas that is publicly available and distributed with a companion Bayesian segmentation tool and multimodal dataset. The dataset itself is already a highly valuable resource: researchers have free access to both the raw and registered data, which they can use for their own research (for example, in MRI signal modelling or registration) or to augment the atlas with new ROIs (for example, by labelling them on the histology or MRI data and rebuilding the atlas). The atlas itself provides a high-resolution CCF for population analyses. The 3D segmentation of the 100-μm ex vivo brain MRI scan12 is a valuable complement to this (already very useful) resource. Finally, the Bayesian tool enables segmentation of ex vivo and in vivo MRI at an unprecedented level of granularity.
NextBrain is customizable and extensible: because all the data and code are publicly available, it is possible to download the data, modify (or extend) the manual annotations and rebuild a custom atlas. NextBrain can be complemented by other segmentation methods and atlases that describe other aspects of the brain. For example, more accurate cortical segmentation and parcellation can be achieved with surface models51. We are at present working on models that combine neural networks with geometry processing to obtain laminar segmentations from in vivo and ex vivo scans52,53. Surface placement will also ensure compatibility with cortical atlases obtained with multimodal data54.
The Bayesian segmentation tool in NextBrain is compatible with 1-mm isotropic scans, as illustrated by the Alzheimer’s and ageing experiments. As with other probabilistic atlases, Bayesian segmentation can be augmented with models of pathology to automatically segment pathology, such as tumours55 or white matter hyperintensities56. Importantly, NextBrain’s high level of detail enables us to fully take advantage of high-resolution data, such as ex vivo MRI, ultra-high-field MRI (for example, 7 T) and exciting new modalities like HiP-CT57. As high-quality 3D brain images become increasingly available, NextBrain’s ability to analyse them with high granularity holds great promise to advance knowledge on the human brain in health and in disease.
Methods
Brain specimens
Hemispheres from five individuals (each comprising half of the cerebrum, cerebellum and brainstem) were used in this study, following informed consent to use the tissue for research and ethical approval for research by the National Research Ethics Service Committee London - Central. All hemispheres were fixed in 10% neutral buffered formalin (Fig. 1a). The laterality and demographics are summarized in Supplementary Table 1; the donors were neurologically normal, but one case had an undiagnosed, asymptomatic tumour (diameter roughly 10 mm) in the white matter, adjacent to the pars opercularis. This tumour did not pose issues in any of the processing steps described below.
Data acquisition
Our data acquisition pipeline largely leverages our previous work26. We summarize it here for completeness; the reader is referred to the corresponding publication for further details.
MRI scanning
Before dissection, the hemispheres were scanned on a 3-T Siemens MAGNETOM Prisma scanner. The specimens were placed in a container filled with Fluorinert (perfluorocarbon), a proton-free fluid with no MRI signal that yields excellent ex vivo MRI contrast and does not affect downstream histological analysis58. The MRI scans were acquired with a T2-weighted sequence (optimized long echo train 3D fast spin echo59) with the following parameters: TR = 500 ms, TEeff = 69 ms, BW = 558 hertz per pixel, echo spacing = 4.96 ms, echo train length = 58, 10 averages, with 400-μm isotropic resolution, acquisition time for each average = 547 s, total scanning time = 91 min. These scans were processed with a combination of SAMSEG35 and the FreeSurfer 7.0 cortical stream51 to bias-field-correct the images, generate rough subcortical segmentations and obtain white matter and pial surfaces with corresponding parcellations according to the Desikan–Killiany atlas29 (Fig. 1b).
Dissection
After MRI scanning, each hemisphere was dissected to fit into standard 74 mm × 52 mm cassettes. First, each hemisphere was split into cerebrum, cerebellum and brainstem. Using a metal frame as a guide, these were subsequently cut into 10-mm-thick slices in coronal, sagittal and axial orientation, respectively. These slices were photographed inside a rectangular frame of known dimensions for pixel size and perspective correction; we refer to these images as ‘whole slice photographs’. Although the brainstem and cerebellum slices all fit into the cassettes, the cerebrum slices were further cut into as many blocks as needed. ‘Blocked slice photographs’ were also taken for these blocks (Fig. 1c, left).
Tissue processing and sectioning
After standard tissue processing steps, each tissue block was embedded in paraffin wax and sectioned with a sledge microtome at 25-μm thickness. Before each cut, a photograph was taken with a 24 MPx Nikon D5100 camera (ISO = 100, aperture = f/20, shutter speed = automatic) mounted right above the microtome, pointed perpendicularly to the sectioning plane. These photographs (henceforth ‘blockface photographs’) were corrected for pixel size and perspective using fiducial markers. The blockface photographs have poor contrast between grey and white matter (Fig. 1c, right) but also negligible nonlinear geometric distortion, so they can be readily stacked into 3D volumes. A two-dimensional convolutional neural network (CNN) pretrained on the ImageNet dataset60 and fine-tuned on 50 manually labelled examples was used to automatically produce binary tissue masks for the blockface images.
Staining and digitization
We mounted on glass slides and stained two consecutive sections every N (see below), one with H&E and one with LFB (Fig. 1d). The sampling interval was N = 10 (that is, 250 μm) for blocks that included subcortical structures in the cerebrum, medial structures of the cerebellum or brainstem structures. The interval was N = 20 (500 μm) for all other blocks. All stained sections were digitized with a flatbed scanner at 6,400 DPI resolution (pixel size 3.97 μm). Tissue masks were generated using a two-dimensional CNN similar to the one used for blockface photographs (pretrained on ImageNet and fine-tuned on 100 manually labelled examples).
In vivo ADNI data
The in vivo ADNI data used in the preparation of this article were obtained from the ADNI database (https://adni.loni.usc.edu/). ADNI was launched in 2003 as a public–private partnership, led by Principal Investigator M. W. Weiner. The primary goal of ADNI has been to test whether serial MRI, positron emission tomography, other biological markers and clinical and neuropsychological assessments can be combined to measure the progression of mild cognitive impairment and early Alzheimer’s disease. For up-to-date information, see www.adni-info.org.
Dense labelling of histology
Segmentations of 333 ROIs (34 cortical, 299 subcortical) were made by authors E.R., J.A. and E.B. (with guidance from D.K., M.B., Z.J. and J.C.A.) for all the LFB sections, using a combination of manual and automated techniques (Fig. 1e). The general procedure to label each block was (1) produce an accurate segmentation for one of every four sections, (2) run SmartInterpol28 to automatically segment the sections in between and (3) manually correct these automatically segmented sections when needed. SmartInterpol is a dedicated artificial intelligence technique that we have developed specifically to speed up segmentation of histological stacks in this project.
To obtain accurate segmentations on sparse sections, we used two different strategies depending on the brain region. For the blocks containing subcortical or brainstem structures, ROIs were manually traced from scratch using a combination of ITK-SNAP61 and FreeSurfer’s viewer ‘Freeview’. For cerebellum blocks, we first trained a two-dimensional CNN (a U-Net62) on 20 sections on which we had manually labelled the white matter and the molecular and granular layers of the cortex. The CNN was then run on the (sparse) sections and the outputs manually corrected. This procedure saves a substantial amount of time, because manually tracing the convoluted shape of the arbor vitae is extremely time consuming. For the cortical cerebrum blocks, we used a similar strategy as for the cerebellum, labelling the tissue as either white or grey matter. The subdivision of the cortical grey matter into parcels was achieved by taking the nearest neighbouring cortical label from the aligned MRI scan (details on the alignment below).
The manual labelling followed neuroanatomical protocols based on different brain atlases, depending on the brain region. Further details on the specific delineation protocols are provided in the Supplementary Information. The general ontology of the 333 ROIs is based on the Allen reference brain3 and is provided in a spreadsheet as part of the Supplementary Information.
3D histology reconstruction
3D histology reconstruction is the inverse problem of reversing all the distortion that brain tissue undergoes during acquisition, to reassemble a 3D shape that accurately follows the original anatomy. For this purpose, we used a framework with four modules.
Initial blockface alignment
To roughly initialize the 3D reconstruction, we relied on the stacks of blockface photographs. Specifically, we used our previously presented hierarchical joint registration framework23 that seeks to (1) align each block to the MRI with a similarity transform, by maximizing the normalized cross-correlation of their intensities while (2) discouraging overlap between blocks or gaps in between, by means of a differentiable regularizer. The similarity transforms allowed for rigid deformation (rotation, translation), as well as isotropic scaling to model the shrinking due to tissue processing. The registration algorithm was initialized with transforms derived from the whole slice, blocked slice and blockface photographs (see details in ref. 26). The registration was hierarchical in the sense that groups of transforms were forced to share the same parameters in the earlier iterations of the optimization, to reflect our knowledge of the cutting procedure. In the first iterations, we clustered the blocks into three groups: cerebrum, cerebellum and brainstem. In the following iterations, we clustered the cerebral blocks that were cut from the same slice and allowed translations in all directions, in-plane rotation and global scaling. In the final iterations, each block alignment was optimized independently. The numerical optimization used the LBFGS algorithm63. The average error after this procedure was approximately 2 mm (ref. 23). A sample 3D reconstruction is shown in Fig. 1f.
Refined alignment with preliminary nonlinear model
Once a good initial alignment is available, we can use the LFB sections to refine the registration. These LFB images have exquisite contrast (Fig. 1d) but suffer from nonlinear distortion—rendering the good initialization from the blockface images crucial. The registration procedure was nearly identical to that of the blockface, with two main differences. First, the similarity term used the local (rather than global) normalized cross-correlation function64 to handle uneven staining across sections. Second, the deformation model and optimization hierarchy were slightly different because nonlinear registration benefits from more robust methods. Specifically, the first two levels of optimization were the same, with blocks grouped into cerebrum/cerebellum/brainstem (first level) or cerebral slices (second level) and optimization of similarity transforms. The third level (that is, each block independently) was subdivided into four stages in which we optimized transforms with increasing complexity, such that the solution of every level of complexity served as initialization to the next. In the first and simplest stage, we allowed for translations in all directions, in-plane rotation and global scaling (five parameters per block). In the second stage, we added a different scaling parameter in the normal direction of the block (six parameters per block). In the third stage, we allowed for rotation in all directions (eight parameters per block). In the fourth and final stage, we added to every section in every block a nonlinear field modelled with a grid of control points (10-mm spacing) and interpolating B-splines. This final deformation model has about 100,000 parameters per case (about 100 parameters per section, times about 1,000 LFB sections).
Nonlinear artificial intelligence registration
We seek to produce final nonlinear registrations that are accurate, consistent with each other and robust against tears and folds in the sections. We capitalize on Synth-by-Reg (SbR24), an artificial intelligence tool for multimodal registration that we have recently developed, to register histological sections to MRI slices resampled to the plane of the histology (as estimated by the linear alignment). SbR exploits the facts that (1) intramodality registration is more accurate than intermodality registration with generic metrics like mutual information65,66 and (2) there is a correspondence between histological sections and MRI slices: that is, they represent the same anatomy. In short, SbR trains a CNN to make histological sections look like MRI slices (a task known as style transfer67), using a second CNN that has been previously trained to register MRI slices to each other. The style transfer relies on the fact that only good MRI synthesis will yield a good match when used as input to the second CNN, which enables SbR to outperform unpaired approaches24 such as CycleGAN68. SbR also includes a contrastive loss69 that prevents blurring and content shift due to overfitting. SbR produces highly accurate deformations parameterized as stationary velocity fields (SVFs70).
Bayesian refinement
Running SbR for each stain and section independently (that is, LFB to resampled MRI and H&E to resampled MRI) yields a reconstruction that is jagged and sensitive to folds and tears. One alternative is to register each histological section to each neighbour directly, which achieves smooth reconstructions but incurs the so-called ‘banana effect’: that is, a straightening of curved structures14. We have proposed a Bayesian method that yields smooth reconstructions without the banana effect25. This method follows an overconstrained strategy by computing registrations between LFB and MRI, H&E and MRI, H&E and LFB, each LFB section and the two nearest neighbours in either direction across the stack, each H&E section and its neighbours, and each MRI slice and its neighbours. For a stack with S sections, this procedure yields 15S − 18 registrations, whereas the underlying dimensionality of the spanning tree connecting all the images is just 3S − 1. We use a probabilistic model of SVFs to infer the most likely spanning tree given the computed registrations, which are seen as noisy measurements of combinations of transforms in the spanning tree. The probabilistic model uses a Laplace distribution, which relies on L1 norms and is thus robust to outliers. Moreover, the properties of SVFs enable us to write the optimization problem as a linear program, which we solve with a standard simplex algorithm71. The result of this procedure is a 3D reconstruction that is accurate (it is informed by many registrations), robust and smooth (Figs. 1g and 2).
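The robust inference step can be illustrated in one dimension. Under the simplifying assumption that composing SVFs is approximated by adding them, recovering the latent transforms from redundant, noisy pairwise registrations becomes an L1 regression, which can be written as a linear program. This toy sketch is our own illustration (the actual model operates on full velocity fields):

```python
import numpy as np
from scipy.optimize import linprog

# Toy 1-D analogue of the spanning-tree inference: latent per-section
# transforms x_i are recovered from redundant pairwise measurements
# m_k ~ x_j - x_i by minimising the sum of absolute residuals (L1),
# which is robust to the outlier below and solvable as a linear program.
true_x = np.array([0.0, 1.0, 2.5, 3.0])
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (0, 3), (1, 3)]
m = np.array([true_x[j] - true_x[i] for i, j in edges])
m[5] += 5.0                                 # one grossly wrong registration

nx, ne = len(true_x), len(edges)
# variables: [x_1, x_2, x_3, t_1..t_ne]; x_0 pinned to 0 (gauge freedom)
c = np.concatenate([np.zeros(nx - 1), np.ones(ne)])
A_ub, b_ub = [], []
for k, (i, j) in enumerate(edges):
    row = np.zeros(nx - 1 + ne)
    if j > 0:
        row[j - 1] += 1.0
    if i > 0:
        row[i - 1] -= 1.0
    row[nx - 1 + k] = -1.0                  # (x_j - x_i) - m_k <= t_k
    A_ub.append(row.copy())
    b_ub.append(m[k])
    row2 = -row
    row2[nx - 1 + k] = -1.0                 # m_k - (x_j - x_i) <= t_k
    A_ub.append(row2)
    b_ub.append(-m[k])
bounds = [(None, None)] * (nx - 1) + [(0, None)] * ne
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
x_hat = np.concatenate([[0.0], res.x[:nx - 1]])
print(np.round(x_hat, 3))                   # recovers true_x despite the outlier
```

Because the redundant edges outvote the corrupted one, the L1 objective concentrates the entire error on the outlying measurement, exactly the behaviour the Laplace model is chosen for.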
Atlas construction
The transforms for the LFB sections produced by the 3D reconstructions were applied to the segmentations to bring them into 3D space. Despite the regularizer from ref. 23, minor overlaps and gaps between blocks still occur. The former were resolved by selecting the label that is furthest inside the corresponding ROI. For the latter, we used our previously developed smoothing approach40.
Given the low number of available cases, we combined the left (2) and right (3) hemispheres into a single atlas. This was achieved by flipping the right hemispheres and computing a probabilistic atlas of the left hemisphere using an iterative technique33. To initialize the procedure, we registered the MRI scans to the MNI atlas15 with the right hemisphere masked out and averaged the deformed segmentations to obtain an initial estimate of the probabilistic atlas. This first registration was based on intensities, using a local normalized cross-correlation loss. From that point on, the algorithm operates exclusively on the segmentations.
Every iteration of the atlas construction process comprises two steps. First, the current estimate of the atlas and the segmentations are coregistered one at a time using (1) a diffeomorphic deformation model based on SVFs parameterized by grids of control points and B-splines (as implemented in NiftyReg72), which preserves the topology of the segmentations; (2) a data term, which is the log-likelihood of the label at each voxel according to the probabilities given by the deformed atlas (with a weak Dirichlet prior to prevent logs of zero); and (3) a regularizer based on the bending energy of the field, which encourages regularity in the deformations. The second step of each iteration updates the atlas by averaging the segmentations. The procedure converged (negligible change in the atlas) after five iterations. Slices of the atlas are shown in Figs. 1h and 3.
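The atlas-update step (averaging the co-registered segmentations into voxelwise label probabilities) amounts to averaging one-hot encodings. A toy sketch with two hypothetical 2 × 2 label maps:

```python
import numpy as np

def update_atlas(deformed_segs, n_labels):
    # average the one-hot encodings of co-registered label maps
    onehots = [np.eye(n_labels)[seg] for seg in deformed_segs]
    return np.mean(onehots, axis=0)

# two hypothetical 2 x 2 label maps already warped to atlas space
segs = [np.array([[0, 1], [1, 2]]), np.array([[0, 1], [2, 2]])]
atlas = update_atlas(segs, 3)
print(atlas[1, 0])   # labels 1 and 2 equally likely at this voxel
```

Each voxel of the resulting array is a categorical distribution over labels, which is what the registration data term consumes in the next iteration.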
Bayesian segmentation
Our Bayesian segmentation algorithm builds on well-established methods in the neuroimaging literature18,73,74. In short, the algorithm jointly estimates a set of parameters that best explain the observed image in light of the probabilistic atlas, according to a generative model based on a Gaussian mixture model (GMM) conditioned on the segmentation, combined with a model of bias field. The parameters include the deformation of the probabilistic atlas; a set of coefficients describing the bias field; and the means, variances and weights of the GMM. The atlas deformation is regularized in the same way as the atlas construction (bending energy, in our case) and is estimated by means of numerical optimization with LBFGS. The bias field and GMM parameters are estimated with the Expectation Maximization algorithm75.
Compared with classical Bayesian segmentation methods operating at 1-mm resolution with just a few classes (for example, SAMSEG35, SPM18), our proposed method has several distinct features:
Because the atlas only describes the left hemisphere, we use a fast deep learning registration method (EasyReg76) to register the input scan to MNI space and use the resulting deformation to split the brain into two hemispheres that are processed independently.
Because the atlas only models brain tissue, we run SynthSeg77 on the input scan to mask out the extracerebral tissue.
Clustering ROIs into tissue types (rather than letting each ROI have its own Gaussian) is particularly important, given the large number of ROIs (333). The user can specify the clustering by means of a configuration file; by default, our public implementation uses a configuration with 15 tissue types, tailored to in vivo MRI segmentation.
The framework is implemented using the PyTorch package, which enables it to run on graphics processing units and curbs segmentation run times to about half an hour per hemisphere.
Sample segmentations with this method can be found in Fig. 1h (in vivo) and Fig. 4 (ex vivo).
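At the heart of the generative model is an EM fit of a Gaussian mixture whose class weights come from the deformed atlas priors. A deliberately simplified one-dimensional sketch (two classes, no bias field, synthetic intensities; not the actual implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
# toy 1-D "image": two tissue classes whose mixing weights are given
# by a hypothetical spatially varying atlas prior
n = 2000
prior = np.column_stack([np.linspace(0.9, 0.1, n), np.linspace(0.1, 0.9, n)])
labels = (rng.random(n) < prior[:, 1]).astype(int)
x = np.where(labels == 1, 120.0, 60.0) + rng.normal(0, 10, n)

mu, var = np.array([50.0, 130.0]), np.array([400.0, 400.0])  # rough init
for _ in range(50):
    # E-step: responsibilities = atlas prior x Gaussian likelihood
    lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    post = prior * lik
    post /= post.sum(1, keepdims=True)
    # M-step: update class means and variances
    mu = (post * x[:, None]).sum(0) / post.sum(0)
    var = (post * (x[:, None] - mu) ** 2).sum(0) / post.sum(0)
seg = post.argmax(1)
print(np.round(mu))   # means recovered near the true 60 and 120
```

Because only the Gaussian parameters (not the atlas) are fitted to the scan, the model adapts to arbitrary MRI contrasts, which is why Bayesian segmentation generalizes across sequences and resolutions.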
Labelling of ultra-high-resolution ex vivo brain MRI
To quantitatively assess the accuracy of our segmentation method on the ultra-high-resolution ex vivo scan, we produced a gold standard segmentation of the publicly available 100-μm scan12 as follows. First, we downsampled the data to 200-μm resolution and discarded the left hemisphere, to alleviate the manual labelling requirements. Next, we used Freeview to manually label from scratch one coronal slice of every ten; we labelled as many regions from the histological protocol as the MRI contrast allowed—without subdividing the cortex. Then, we used SmartInterpol28 to complete the segmentation of the missing slices. Next, we manually corrected the SmartInterpol output as needed, until we were satisfied with the 200-μm isotropic segmentation. The cortex was subdivided using standard FreeSurfer routines. This labelling scheme led to a ground truth segmentation with 98 ROIs, which we have made publicly available. Supplementary Videos 3 and 4 fly over the coronal and axial slices of the labelled scan, respectively.
We used a simplified version of the NextBrain atlas when segmenting the 100-μm scan, to better match the ROIs of the automated segmentation and the ground truth (especially in the brainstem). This version was created by replacing the brainstem labels in the histological 3D reconstruction (Fig. 1g, right) by new segmentations made directly in the underlying MRI scan. These segmentations were made with the same methods as for the 100-μm isotropic scan. The new combined segmentations were used to rebuild the atlas.
Automated segmentation with Allen MNI template
Automated labelling with the Allen MNI template relied on registration-based segmentation with the NiftyReg package34,72, which yields state-of-the-art performance in brain MRI registration78. We used the same deformation model and parameters as the NiftyReg authors used in their own registration-based segmentation work79: (1) symmetric registration with a deformation model parameterized by a grid of control points (spacing 2.5 mm = 5 voxels) and B-spline interpolation; (2) local normalized cross-correlation as objective function (s.d. 2.5 mm); and (3) bending energy regularization (weight 0.001).
LDA for Alzheimer’s disease classification
We performed linear classification of Alzheimer’s disease versus controls based on ROI volumes as follows. Leaving out one subject at a time, we used all other subjects to (1) compute linear regression coefficients to correct for sex and age (intracranial volume was corrected by division); (2) estimate mean vectors for the two classes (μ_AD and μ_CN), as well as a pooled covariance matrix (Σ); and (3) use the means and covariance to compute an unbiased log-likelihood criterion L for the left-out subject:
L = ½ (x − μ_CN)ᵀ Σ⁻¹ (x − μ_CN) − ½ (x − μ_AD)ᵀ Σ⁻¹ (x − μ_AD),
where x is the vector with ICV-, sex- and age-corrected volumes for the left-out subject. Once the criterion L has been computed for all subjects, it can be globally thresholded for accuracy and ROC analysis. We note that, for NextBrain, the high number of ROIs renders the covariance matrix singular. We prevent this by using regularized LDA: we normalize all the ROIs to unit variance and then compute the covariance as Σ = S + λI, where S is the sample covariance, I is the identity matrix and λ is a constant. We note that normalizing to unit variance enables us to use a fixed, unit λ—rather than having to estimate λ for every left-out subject.
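Putting the leave-one-out loop and the regularized covariance together, a self-contained sketch on synthetic volumes (group sizes, effect sizes and dimensionality here are made up and much smaller than in the real experiment):

```python
import numpy as np

def lda_criterion(x, mu_ad, mu_cn, cov_inv):
    # log-likelihood ratio of two Gaussians with a shared covariance
    d_cn, d_ad = x - mu_cn, x - mu_ad
    return 0.5 * (d_cn @ cov_inv @ d_cn - d_ad @ cov_inv @ d_ad)

rng = np.random.default_rng(2)
n, p = 60, 10                                   # hypothetical group size / ROIs
X = np.vstack([rng.normal(-0.5, 1.0, (n, p)),   # "patients"
               rng.normal(+0.5, 1.0, (n, p))])  # "controls"
y = np.array([1] * n + [0] * n)
X = (X - X.mean(0)) / X.std(0)  # unit variance (once, for simplicity) -> lambda = 1

scores = np.empty(2 * n)
for i in range(2 * n):                     # leave one subject out at a time
    keep = np.arange(2 * n) != i
    Xt, yt = X[keep], y[keep]
    mu_ad, mu_cn = Xt[yt == 1].mean(0), Xt[yt == 0].mean(0)
    S = np.cov(Xt, rowvar=False)
    cov_inv = np.linalg.inv(S + np.eye(p)) # regularised: Sigma = S + I
    scores[i] = lda_criterion(X[i], mu_ad, mu_cn, cov_inv)
acc = ((scores > 0) == (y == 1)).mean()
print(round(acc, 2))
```

Thresholding the scores at values other than zero traces out the ROC curve used for the AUROC comparisons.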
B-spline fitting of ageing trajectories
To compute the B-spline fits in Extended Data Fig. 8, we first corrected the ROI volumes by sex (using regression) and intracranial volume (by division). Next, we modelled the data with a Laplace distribution, which is robust against outliers that may be caused by potential segmentation mistakes. Specifically, we used an age-dependent Laplacian in which the location μ and scale b are both B-splines with four evenly spaced control points at 30, 51.6, 73.3 and 95 years. The fit is optimized with gradient ascent over the log-likelihood function:

{θ̂_μ, θ̂_b} = argmax_{θ_μ, θ_b} Σ_n log Laplace(v_n; μ(a_n; θ_μ), b(a_n; θ_b)),

where Laplace(·; μ, b) is the Laplace distribution with location μ and scale b; v_n is the volume of the ROI for subject n; a_n is the age of subject n; μ(·; θ_μ) is a B-spline describing the location, parameterized by θ_μ; and b(·; θ_b) is a B-spline describing the scale, parameterized by θ_b. The 95% confidence interval of the Laplace distribution is given by μ ± 3b (more precisely, μ ± b ln 20 ≈ μ ± 3.00b).
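The robust trajectory fit can be sketched as follows. This is our own simplification: piecewise-linear interpolation between the four control points stands in for the cubic B-spline, and a derivative-free scipy optimizer replaces plain gradient ascent; the age-dependent Laplace likelihood itself is as described above.

```python
# Maximum-likelihood fit of an age-dependent Laplace(mu(a), b(a)) trajectory.
# The scale b is parameterized through its logarithm to keep it positive.
import numpy as np
from scipy.optimize import minimize

KNOTS = np.array([30.0, 51.6, 73.3, 95.0])  # control-point ages from the text

def neg_log_lik(theta, ages, vols):
    """Negative log-likelihood; log Laplace(v; mu, b) = -log(2b) - |v - mu| / b."""
    theta_mu, theta_logb = theta[:4], theta[4:]
    mu = np.interp(ages, KNOTS, theta_mu)            # location trajectory
    b = np.exp(np.interp(ages, KNOTS, theta_logb))   # scale trajectory (> 0)
    return np.sum(np.log(2.0 * b) + np.abs(vols - mu) / b)

def fit_trajectory(ages, vols):
    # Flat initialization at the sample mean / log-s.d. of the volumes.
    theta0 = np.concatenate([np.full(4, vols.mean()),
                             np.full(4, np.log(vols.std() + 1e-6))])
    res = minimize(neg_log_lik, theta0, args=(ages, vols), method='Nelder-Mead',
                   options={'maxiter': 20000, 'maxfev': 20000})
    return res.x[:4], np.exp(res.x[4:])  # control values for mu and b

# Toy data: an ROI that shrinks linearly with age, with Laplace-distributed noise.
rng = np.random.default_rng(1)
ages = rng.uniform(30, 95, size=400)
true_mu = 10.0 - 0.05 * (ages - 30)
vols = true_mu + rng.laplace(scale=0.5, size=400)
mu_ctrl, b_ctrl = fit_trajectory(ages, vols)
```

Because the Laplace likelihood penalizes absolute rather than squared residuals, a few grossly mis-segmented volumes barely perturb the fitted trajectory.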
Ethics statement
The brain donation programme and protocols have received ethical approval for research by the National Research Ethics Service Committee London - Central, and tissue is stored for research under a licence issued by the Human Tissue Authority (no. 12198).
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Online content
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41586-025-09708-2.
Supplementary information
Extended introduction and related work.
This file describes the anatomical protocols for manual delineation of brain regions.
This file contains the ontology that we used for the regions of interest present in our new atlas.
This file compares roughly equivalent sections of the Mai–Paxinos atlas of the whole brain13, the Allen reference brain3 and our proposed atlas NextBrain.
Data used in preparation of this Article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (https://adni.loni.usc.edu/). As such, the investigators in the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. This file contains a complete listing of the ADNI investigators.
Table containing demographic information from the five donors of the study.
This video illustrates the aligned MRI scans, histological sections, and manual delineations.
This video illustrates the atlas construction procedure.
This video flies over the labelled ex vivo brain (Edlow et al., “7 Tesla MRI of the ex vivo human brain at 100 micron resolution”, Sci. Data, 2019) in axial view.
This video flies over the same specimen, but in coronal view.
Acknowledgements
We would like to thank the donors, without whom this work would not have been possible. We would also like to acknowledge P. Johns for his invaluable courses in neuroanatomy at St George’s, University of London. Data collection and sharing for the ADNI data used in this article were funded by the Alzheimer’s Disease Neuroimaging Initiative (National Institutes of Health grant no. U01 AG024904) and DOD ADNI (Department of Defense grant no. W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering and through generous contributions from the following: AbbVie, Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California. This research has been primarily funded by a European Research Council grant awarded to J.E.I.
(Starting grant no. 677697, project ‘BUNGEE-TOOLS’). A.C. is supported by the POSTDOC-UdG203 grant from Universitat de Girona. M.B. is supported by a Fellowship award from the Alzheimer’s Society, UK (grant no. AS-JF-19a-004-517). O.P. is supported by a grant from the Lundbeck foundation (grant no. R360–2021–39). M.M. is supported by the Italian National Institute of Health with a Starting Grant and by the Wellcome Trust through a Sir Henry Wellcome Fellowship (grant no. 213722/Z/18/Z). B.L.E. is supported by the Chen Institute MGH Research Scholar Award. Further support was provided by NIH grant nos. 1RF1MH123195, 1R01AG070988, 1UM1MH130981, 1RF1AG080371 and 1R21NS109627.
Extended data figures and tables
Author contributions
Conceptualization: J.C.A., B.L.E., J.L.H., Z.J., J.E.I. Data curation: A.C., M.M., E.R., J.A., S.C., E.B., B.B., A.A., L.Z., D.L.T., D.K., M.B. Formal analysis: A.C., L.Z., J.E.I. Funding acquisition: J.E.I. Investigation: A.C., M.M., E.R., L.P., R.A., J.A., S.C., E.B., L.Z., J.E.I. Methodology: A.C., M.M., O.P., Y.B., J.L.H., Z.J., J.E.I. Project administration: E.R., J.L.H., C.S., Z.J., J.E.I. Resources: D.L.T., J.L.H., C.S., Z.J. Software: A.C., M.M., B.B., A.A., O.P., Y.B., P.S., J.H., J.E.I. Supervision: E.R., L.P., D.K., M.B., J.L.H., C.S., Z.J., J.E.I. Validation: A.C., J.E.I. Visualization: A.C., P.S., J.H., J.E.I. Writing—original draft: A.C., E.R., J.E.I. Writing—review and editing: all authors.
Peer review
Peer review information
Nature thanks Mallar Chakravarty and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Data availability
The raw data used in this Article (MRI, histology, segmentations and so on) can be downloaded from 10.5522/04/24243835. An online tool to interactively explore the 3D reconstructed data can be found at https://github-pages.ucl.ac.uk/NextBrain. This website also includes links to videos, publications, code and other resources. The segmentation of the ex vivo scan can be found at https://openneuro.org/datasets/ds005422/versions/1.0.1. The databases used in the ageing study are freely accessible online: OpenBHB (https://baobablab.github.io/bhb/) and aHCP (https://www.humanconnectome.org/study/hcp-lifespan-aging). The ADNI dataset used in the Alzheimer’s disease study is freely accessible with registration at https://adni.loni.usc.edu/data-samples/adni-data/. The atlases used in the Supplementary Information for comparison can be found online: Mai–Paxinos (https://www.thehumanbrain.info/brain/sections.php) and Allen (https://atlas.brain-map.org/).
Code availability
The code used in this Article for 3D histology reconstruction can be downloaded from https://github.com/acasamitjana/ERC_reconstruction and used and distributed freely. The segmentation tool is provided as Python code and is integrated in our neuroimaging toolkit ‘FreeSurfer’: https://surfer.nmr.mgh.harvard.edu/fswiki/HistoAtlasSegmentation. The source code is available on GitHub: https://github.com/freesurfer/freesurfer/tree/dev/mri_histo_util.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data
is available for this paper at 10.1038/s41586-025-09708-2.
Supplementary information
The online version contains supplementary material available at 10.1038/s41586-025-09708-2.
References
- 1. Amunts, K. et al. BigBrain: an ultrahigh-resolution 3D human brain model. Science 340, 1472–1475 (2013).
- 2. Amunts, K., Mohlberg, H., Bludau, S. & Zilles, K. Julich-Brain: a 3D probabilistic atlas of the human brain’s cytoarchitecture. Science 369, 988–992 (2020).
- 3. Ding, S. L. et al. Comprehensive cellular-resolution atlas of the adult human brain. J. Comp. Neurol. 524, 3127–3481 (2016).
- 4. Fischl, B. FreeSurfer. Neuroimage 62, 774–781 (2012).
- 5. Smith, S. M. et al. Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23, S208–S219 (2004).
- 6. Penny, W. D., Friston, K. J., Ashburner, J. T., Kiebel, S. J. & Nichols, T. E. Statistical Parametric Mapping: The Analysis of Functional Brain Images (Elsevier, 2011).
- 7. Fischl, B. et al. Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain. Neuron 33, 341–355 (2002).
- 8. Rolls, E. T. A computational theory of episodic memory formation in the hippocampus. Behav. Brain Res. 215, 180–196 (2010).
- 9. Yushkevich, P. A. et al. A high-resolution computational atlas of the human hippocampus from postmortem magnetic resonance imaging at 9.4 T. Neuroimage 44, 385–398 (2009).
- 10. Iglesias, J. E. et al. A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: application to adaptive segmentation of in vivo MRI. Neuroimage 115, 117–137 (2015).
- 11. Saygin, Z. M. et al. High-resolution magnetic resonance imaging reveals nuclei of the human amygdala: manual segmentation to automatic atlas. Neuroimage 155, 370–382 (2017).
- 12. Edlow, B. L. et al. 7 Tesla MRI of the ex vivo human brain at 100 micron resolution. Sci. Data 6, 244 (2019).
- 13. Mai, J. K., Majtanik, M. & Paxinos, G. Atlas of the Human Brain (Academic, 2015).
- 14. Pichat, J., Iglesias, J. E., Yousry, T., Ourselin, S. & Modat, M. A survey of methods for 3D histology reconstruction. Med. Image Anal. 46, 73–105 (2018).
- 15. Mazziotta, J. et al. A probabilistic atlas and reference system for the human brain: International Consortium for Brain Mapping (ICBM). Philos. Trans. R. Soc. Lond. B Biol. Sci. 356, 1293–1322 (2001).
- 16. Yelnik, J. et al. A three-dimensional, histological and deformable atlas of the human basal ganglia. I. Atlas construction based on immunohistochemical and MRI data. Neuroimage 34, 618–638 (2007).
- 17. Krauth, A. et al. A mean three-dimensional atlas of the human thalamus: generation from multiple histological data. Neuroimage 49, 2053–2062 (2010).
- 18. Ashburner, J. & Friston, K. J. Unified segmentation. Neuroimage 26, 839–851 (2005).
- 19. Paquola, C. et al. The BigBrainWarp toolbox for integration of BigBrain 3D histology with multimodal neuroimaging. eLife 10, e70119 (2021).
- 20. Alkemade, A. et al. A unified 3D map of microscopic architecture and MRI of the human brain. Sci. Adv. 8, eabj7892 (2022).
- 21. Sotiras, A., Davatzikos, C. & Paragios, N. Deformable medical image registration: a survey. IEEE Trans. Med. Imaging 32, 1153–1190 (2013).
- 22. Ferrante, E. & Paragios, N. Slice-to-volume medical image registration: a survey. Med. Image Anal. 39, 101–123 (2017).
- 23. Mancini, M. et al. Hierarchical joint registration of tissue blocks with soft shape constraints for large-scale histology of the human brain. In Proc. 16th International Symposium on Biomedical Imaging (ISBI 2019) 666–669 (IEEE, 2019).
- 24. Casamitjana, A., Mancini, M. & Iglesias, J. E. Synth-by-Reg (SbR): contrastive learning for synthesis-based registration of paired images. In Proc. Simulation and Synthesis in Medical Imaging: 6th International Workshop, Held in Conjunction with MICCAI 2021 (eds Svoboda, D. et al.) 44–54 (Springer, 2021).
- 25. Casamitjana, A. et al. Robust joint registration of multiple stains and MRI for multimodal 3D histology reconstruction: application to the Allen human brain atlas. Med. Image Anal. 75, 102265 (2022).
- 26. Mancini, M. et al. A multimodal computational pipeline for 3D histology of the human brain. Sci. Rep. 10, 13839 (2020).
- 27. Rohlfing, T. Image similarity and tissue overlaps as surrogates for image registration accuracy: widely used but unreliable. IEEE Trans. Med. Imaging 31, 153–163 (2011).
- 28. Atzeni, A., Jansen, M., Ourselin, S. & Iglesias, J. E. A probabilistic model combining deep learning and multi-atlas segmentation for semi-automated labelling of histology. In Proc. Medical Image Computing and Computer Assisted Intervention – MICCAI 2018 (eds Frangi, A. F. et al.) 219–227 (Springer, 2018).
- 29. Desikan, R. S. et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 31, 968–980 (2006).
- 30. Khandelwal, P. et al. Automated deep learning segmentation of high-resolution 7 Tesla postmortem MRI for quantitative analysis of structure-pathology correlations in neurodegenerative diseases. Imaging Neurosci. 2, 1–30 (2024).
- 31. Salehi, P. & Chalechale, A. Pix2pix-based stain-to-stain translation: a solution for robust stain normalization in histopathology images analysis. In Proc. 2020 International Conference on Machine Vision and Image Processing (MVIP) 1–7 (IEEE, 2020).
- 32. Alyami, W., Kyme, A. & Bourne, R. Histological validation of MRI: a review of challenges in registration of imaging and whole-mount histopathology. J. Magn. Reson. Imaging 55, 11–22 (2022).
- 33. Van Leemput, K. Encoding probabilistic brain atlases using Bayesian inference. IEEE Trans. Med. Imaging 28, 822–837 (2008).
- 34. Modat, M. et al. Parametric non-rigid registration using a stationary velocity field. In Proc. 2012 IEEE Workshop on Mathematical Methods in Biomedical Image Analysis 145–150 (IEEE, 2012).
- 35. Puonti, O., Iglesias, J. E. & Van Leemput, K. Fast and sequence-adaptive whole-brain segmentation using parametric Bayesian modeling. Neuroimage 143, 235–249 (2016).
- 36. DeFelipe, J. From the connectome to the synaptome: an epic love story. Science 330, 1198–1201 (2010).
- 37. Horn, A. et al. Lead-DBS v2: towards a comprehensive pipeline for deep brain stimulation imaging. Neuroimage 184, 293–316 (2019).
- 38. Van Leemput, K. et al. Automated segmentation of hippocampal subfields from ultra-high resolution in vivo MRI. Hippocampus 19, 549–557 (2009).
- 39. Costantini, I. et al. A cellular resolution atlas of Broca’s area. Sci. Adv. 9, eadg3844 (2023).
- 40. Iglesias, J. E. et al. A probabilistic atlas of the human thalamic nuclei combining ex vivo MRI and histology. Neuroimage 183, 314–326 (2018).
- 41. Dufumier, B. et al. OpenBHB: a large-scale multi-site brain MRI data-set for age prediction and debiasing. Neuroimage 263, 119637 (2022).
- 42. Ng, A. & Jordan, M. On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes. In Proc. 15th International Conference on Neural Information Processing Systems: Natural and Synthetic (eds Dietterich, T. G. et al.) 841–848 (MIT Press, 2001).
- 43. Jack, C. R. Jr et al. The Alzheimer’s disease neuroimaging initiative (ADNI): MRI methods. J. Magn. Reson. Imaging 27, 685–691 (2008).
- 44. Bookheimer, S. Y. et al. The lifespan human connectome project in aging: an overview. Neuroimage 185, 335–348 (2019).
- 45. Coupé, P., Catheline, G., Lanuza, E., Manjón, J. V. & the Alzheimer’s Disease Neuroimaging Initiative. Towards a unified analysis of brain maturation and aging across the entire lifespan: a MRI analysis. Hum. Brain Mapp. 38, 5501–5518 (2017).
- 46. Billot, B. et al. Robust machine learning segmentation for large-scale analysis of heterogeneous clinical brain MRI datasets. Proc. Natl Acad. Sci. USA 120, e2216399120 (2023).
- 47. Salat, D. H. et al. Thinning of the cerebral cortex in aging. Cereb. Cortex 14, 721–730 (2004).
- 48. Llamas-Rodríguez, J. et al. TDP-43 and tau concurrence in the entorhinal subfields in primary age-related tauopathy and preclinical Alzheimer’s disease. Brain Pathol. 33, e13159 (2023).
- 49. Narvacan, K., Treit, S., Camicioli, R., Martin, W. & Beaulieu, C. Evolution of deep gray matter volume across the human lifespan. Hum. Brain Mapp. 38, 3771–3790 (2017).
- 50. Alexander, G. E., DeLong, M. R. & Strick, P. L. Parallel organization of functionally segregated circuits linking basal ganglia and cortex. Annu. Rev. Neurosci. 9, 357–381 (1986).
- 51. Fischl, B. & Dale, A. M. Measuring the thickness of the human cerebral cortex from magnetic resonance images. Proc. Natl Acad. Sci. USA 97, 11050–11055 (2000).
- 52. Gopinath, K. et al. Cortical analysis of heterogeneous clinical brain MRI scans for large-scale neuroimaging studies. In Proc. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (eds Greenspan, H. et al.) 35–45 (Springer Nature, 2023).
- 53. Zeng, X. et al. Segmentation of supragranular and infragranular layers in ultra-high resolution 7T ex vivo MRI of the human cerebral cortex. Cereb. Cortex 34, bhae362 (2024).
- 54. Glasser, M. F. et al. A multi-modal parcellation of human cerebral cortex. Nature 536, 171–178 (2016).
- 55. Prastawa, M., Bullitt, E., Moon, N., Van Leemput, K. & Gerig, G. Automatic brain tumor segmentation by subject specific modification of atlas priors. Acad. Radiol. 10, 1341–1348 (2003).
- 56. Cerri, S. et al. A contrast-adaptive method for simultaneous whole-brain and lesion segmentation in multiple sclerosis. Neuroimage 225, 117471 (2021).
- 57. Walsh, C. et al. Imaging intact human organs with local resolution of cellular structures using hierarchical phase-contrast tomography. Nat. Methods 18, 1532–1541 (2021).
- 58. Iglesias, J. E. et al. Effect of fluorinert on the histological properties of formalin-fixed human brain tissue. J. Neuropathol. Exp. Neurol. 77, 1085–1090 (2018).
- 59. Mugler, J. P. III Optimized three-dimensional fast-spin-echo MRI. J. Magn. Reson. Imaging 39, 745–767 (2014).
- 60. Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proc. 3rd International Conference on Learning Representations 1–14 (ICLR, 2015).
- 61. Yushkevich, P. A. et al. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage 31, 1116–1128 (2006).
- 62. Ronneberger, O., Fischer, P. & Brox, T. U-net: convolutional networks for biomedical image segmentation. In Proc. 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (eds Navab, N. et al.) 234–241 (Springer, 2015).
- 63. Liu, D. C. & Nocedal, J. On the limited memory BFGS method for large scale optimization. Math. Program. 45, 503–528 (1989).
- 64. Avants, B. B., Epstein, C. L., Grossman, M. & Gee, J. C. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 12, 26–41 (2008).
- 65. Iglesias, J. E. et al. Is synthesizing MRI contrast useful for inter-modality analysis? In Proc. 16th International Conference on Medical Image Computing and Computer-Assisted Intervention (eds Mori, K. et al.) 631–638 (Springer, 2013).
- 66. Maes, F., Collignon, A., Vandermeulen, D., Marchal, G. & Suetens, P. Multimodality image registration by maximization of mutual information. IEEE Trans. Med. Imaging 16, 187–198 (1997).
- 67. Jing, Y. et al. Neural style transfer: a review. IEEE Trans. Vis. Comput. Graph. 26, 3365–3385 (2019).
- 68. Zhu, J.-Y., Park, T., Isola, P. & Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. IEEE International Conference on Computer Vision 2223–2232 (IEEE, 2017).
- 69. Chen, T., Kornblith, S., Norouzi, M. & Hinton, G. A simple framework for contrastive learning of visual representations. In Proc. International Conference on Machine Learning (eds Daumé, H. & Singh, A.) 1597–1607 (JMLR, 2020).
- 70. Arsigny, V., Commowick, O., Pennec, X. & Ayache, N. A log-Euclidean framework for statistics on diffeomorphisms. In Proc. 9th Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006 (eds Larsen, R. et al.) 924–931 (Springer, 2006).
- 71. Boyd, S. P. & Vandenberghe, L. Convex Optimization (Cambridge Univ. Press, 2004).
- 72. Modat, M. et al. Fast free-form deformation using graphics processing units. Comput. Methods Prog. Biomed. 98, 278–284 (2010).
- 73. Van Leemput, K., Maes, F., Vandermeulen, D. & Suetens, P. Automated model-based tissue classification of MR images of the brain. IEEE Trans. Med. Imaging 18, 897–908 (1999).
- 74. Wells, W. M., Grimson, W. E. L., Kikinis, R. & Jolesz, F. A. Adaptive segmentation of MRI data. IEEE Trans. Med. Imaging 15, 429–442 (1996).
- 75. Dempster, A. P., Laird, N. M. & Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Series B Stat. Methodol. 39, 1–22 (1977).
- 76. Iglesias, J. E. A ready-to-use machine learning tool for symmetric multi-modality registration of brain MRI. Sci. Rep. 13, 6657 (2023).
- 77. Billot, B. et al. SynthSeg: segmentation of brain MRI scans of any contrast and resolution without retraining. Med. Image Anal. 86, 102789 (2023).
- 78. Klein, A. et al. Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. Neuroimage 46, 786–802 (2009).
- 79. Cardoso, M. J. et al. Geodesic information flows: spatially-variant graphs and their application to segmentation and fusion. IEEE Trans. Med. Imaging 34, 1976–1988 (2015).