Abstract
Data-driven machine learning has made significant strides in medical image analysis. However, most existing methods are tailored to specific modalities and assume a particular resolution (often isotropic). This limits their generalizability in clinical settings, where variations in scan appearance arise from differences in sequence parameters, resolution, and orientation. Furthermore, most general-purpose models are designed for healthy subjects and suffer from performance degradation when pathology is present. We introduce UNA (Unraveling Normal Anatomy), the first modality-agnostic learning approach for normal brain anatomy reconstruction that can handle both healthy scans and cases with pathology. We propose a fluid-driven anomaly randomization method that generates an unlimited number of realistic pathology profiles on-the-fly. UNA is trained on a combination of synthetic and real data, and can be applied directly to real images with potential pathology without the need for fine-tuning. We demonstrate UNA’s effectiveness in reconstructing healthy brain anatomy and showcase its direct application to anomaly detection, using both simulated and real images from 3D healthy and stroke datasets, including CT and MRI scans. By bridging the gap between healthy and diseased images, UNA enables the use of general-purpose models on diseased images, opening up new opportunities for large-scale analysis of uncurated clinical images in the presence of pathology. Code is available at https://github.com/peirong26/UNA.
1. Introduction
Recent machine learning-based methods have significantly advanced the speed and accuracy of brain image analysis tasks, such as image segmentation [11, 26, 37, 41], registration [3, 9, 55], and super-resolution [46, 49]. In vivo human brain imaging is dominated by Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) [22]. CT is faster and preferred in emergency cases, while MRI provides superior contrast for soft tissues such as the brain. Unlike CT, a standardized modality that produces quantitative measurements in Hounsfield units, MRI is generally not calibrated and can generate a wide range of imaging contrasts (e.g., T1w, T2w, FLAIR) to visualize different tissues and abnormalities. This diversity in contrast and the lack of standardization complicate the quantitative analysis of MRI scans. As a result, most existing MRI analysis methods are contrast-specific and often suffer from performance degradation when voxel size or MRI contrast differs between training and testing datasets [52]. This limits the generalizability of machine learning models and leads to redundant data collection and training efforts for new datasets. Recent contrast-agnostic models that leverage synthetic data [5, 20, 23, 24, 33] have demonstrated impressive results, significantly extending their applicability to diverse clinical acquisition protocols. However, these models are primarily designed for analyzing healthy brain anatomy and typically struggle to produce reliable results in the presence of extensive abnormalities (Figs. 3 and 4).
Figure 3.

Qualitative comparisons on healthy anatomy reconstruction between UNA and state-of-the-art modality-agnostic T1w synthesis methods. Testing images are generated from real healthy subjects, encoded with randomly simulated pathology profiles. Pathology regions are circled in red.
Figure 4.

Qualitative comparisons on healthy anatomy reconstruction between UNA and state-of-the-art modality-agnostic synthesis models. Testing images are from real stroke datasets (ISLES [19] and ATLAS [31]), where the stroke lesion annotations are provided, yet the ground truth healthy anatomy is unavailable. The last row shows a failure case of UNA, where it “over-corrects” the diseased anatomy. Pathology regions are circled in red.
To the best of our knowledge, the recently proposed PEPSI [34] is the only contrast-agnostic brain MRI analysis method that is compatible with extensive pathology. PEPSI leverages synthetic data to estimate T1w and FLAIR MRI from input scans containing pathology. However, it has several limitations: (i) it relies on a paired pathology segmentation map associated with each brain anatomy during training, which limits its application to datasets that provide pathology annotations; (ii) it requires access to pre-trained pathology segmentation models to compute its implicit pathology segmentation loss; and (iii) it requires additional fine-tuning to detect anomalies.
Here, we introduce UNA, the first modality-agnostic learning method for Unraveling Normal Anatomy. UNA leverages the power of synthetic data, and can be applied to real images (CT and MRI) of both healthy and diseased populations, without the need for fine-tuning (Fig. 1).
We propose fluid-driven anomaly randomization (Sec. 3) to overcome the scarcity of pathology segmentation annotations. Using only a limited set of existing pathology segmentations as initial conditions, our fluid-driven anomaly generator produces unlimited new pathology profiles on-the-fly through advection-diffusion partial differential equations (PDEs). This formulation offers a continuous and controllable trajectory for pathology evolution and naturally enforces realistic constraints on brain abnormalities through boundary conditions (Fig. 1 (left)).
We introduce a modality-agnostic learning framework to reconstruct healthy brain anatomy from images with potential pathology (Sec. 4). Our framework leverages symmetry priors of brain anatomy and incorporates subject-specific anatomical features from contralateral healthy tissue in a self-contrastive learning fashion.
We extensively evaluate the healthy anatomy reconstruction performance of UNA on simulated and real images with stroke lesions, in both CT and different MR contrasts (T1w, T2w, and FLAIR) (Secs. 5.1 and 5.2). We further demonstrate the direct application of UNA to anomaly detection, without fine-tuning (Sec. 5.3). UNA achieves state-of-the-art performance in all tasks and modalities.
Figure 1.

Powered by the proposed fluid-driven anomaly randomization, UNA can handle a range of pathological patterns without requiring paired pathology annotations for training. (i) By bridging the gap between healthy and diseased anatomy, UNA enables the use of general analysis models for images containing pathology; (ii) By reconstructing anatomy in a modality-agnostic manner, UNA facilitates analysis with standard tools designed for high-resolution, healthy T1w MRI.
By bridging the gap between healthy and diseased anatomy, UNA enables the use of general-purpose models for images containing pathology, unlocking tremendous potential for analyzing clinical images with pathology.
2. Related Work
Foundation Models in Medical Imaging.
Large-scale datasets in medical imaging require significantly more effort to compile than those in natural imaging or language due to varying acquisition protocols and privacy requirements across institutions. Consequently, medical foundation models are not as well developed as their natural image counterparts. There have been, nevertheless, some notable efforts. SAM-Med3D-MoE [51] provides a 3D foundation model for medical image segmentation, trained on 22,000 scans. The MONAI [1] project includes a model zoo with pre-trained models, which are highly task-specific and sensitive to particular image contrasts. Zhou et al. [57] constructed a medical foundation model designed for detecting eye and systemic health conditions from retinal scans. Still, it only functions with color fundus photography and optical coherence tomography modalities. Recently, generalist biomedical AI systems, e.g., GMAI [39] and Med-PaLM M [44, 50], have demonstrated significant potential in biomedical tasks within a vision-language context, including visual question answering, image classification, and radiology report generation. However, they have not tackled more complex dense 3D prediction tasks such as reconstruction, segmentation, and registration.
Contrast-Agnostic Learning for MRI.
MRI scans acquired across sites vary substantially in appearance due to differences in contrast, resolution, and orientation. This heterogeneity leads to duplicated training efforts for approaches that are sensitive to a specific MR contrast. Classical approaches to brain segmentation used Bayesian inference for contrast robustness [14, 29], but require long processing times and struggle with resolutions that are not high and isotropic [23, 40]. SynthSeg [5, 6] achieves contrast- and resolution-agnostic segmentation with a synthetic generator that simulates widely diverse contrasts and resolutions. The same generator has been used to achieve contrast invariance in tasks like image registration [10, 20], super-resolution [24], and skull stripping [21]. Brain-ID [33] explored contrast-agnostic feature representations that generalize across various fundamental medical image analysis tasks, including image synthesis, segmentation, and super-resolution. However, all these general-purpose methods are either trained exclusively on healthy anatomical labels, or require paired anatomy-pathology annotations, which limits their application to healthy subjects or to a very specific pathology (e.g., white matter lesions) – as opposed to previously unseen pathology profiles (Figs. 3 and 4).
Fluid-Based Dynamics Modeling.
Fluid dynamics is a fundamental topic in physics and plays a crucial role in various real-world applications such as weather forecasting, airflow analysis [8], optical flow [45, 47], image registration [43, 48, 56], and perfusion analysis [32]. In fluid dynamics, advection-diffusion PDEs are commonly employed to describe the fluid transport processes. Liu et al. [35] introduced regularization-free representations to ensure the compressibility and positive semi-definiteness of estimated velocity and diffusion fields. Franz et al. [16] simulated 3D density and velocity fields from single-view data without 3D supervision. Xing et al. [54] proposed to learn the velocity field from past physical observations using Helmholtz dynamics, eliminating the need for ground truth velocity. In these studies, the inverse problem of velocity estimation provides interpretable insights for predicting future fluid behavior. We build upon the concept of fluid flow simulation and frame anomaly pattern randomization as a forward process of advection-diffusion PDEs. This formulation naturally enables us to ensure that simulated anomaly outcomes are well posed, through controllable velocity fields and established boundary conditions (Sec. 3.1).
3. Fluid-Driven Anomaly Randomization
Manually annotating pathology to create gold-standard segmentations is extremely costly, particularly for 3D medical images. This process not only requires specialized expertise from clinicians, but is also highly time-consuming and poorly reproducible. Consequently, large-scale datasets with gold-standard pathology annotations are almost nonexistent (BraTS [36] being a notable exception). In addition, discrepancies often arise among the gold-standard pathology segmentation maps provided by different datasets. To address these issues, we seek to design an anomaly randomization approach that is:
Expressive: the generated anomaly profiles should exhibit diverse and expressive shapes and intensities that sufficiently reflect the variety of pathological appearances encountered in clinical practice.
Realistic: the randomized abnormalities must conform to realistic constraints. For example, abnormalities in white matter should not appear in other tissue structures, and brain tumors should be localized within the brain region.
To achieve these two aims, we propose randomizing unlimited, diverse anomaly profiles by formulating the generation as a forward mass transport process, with realistic constraints naturally guaranteed by boundary conditions. Our anomaly randomization consists of three steps (Alg. 1): (i) Initializations of random anomaly (P0), velocity (V), and diffusion (D) for anomaly transport; (ii) Forward transport of abnormal intensities for random time steps; (iii) Appearance encoding of the generated anomaly on healthy images of any modality. Sec. 3.1 below describes the generation of abnormal profiles (i-ii), and Sec. 3.2 introduces the encoding of abnormalities on healthy images (iii).
Algorithm 1:
Fluid-Driven Anomaly Randomization
3.1. Anomaly Profile Randomization
Background.
Advection-diffusion PDEs describe a large family of fluid dynamics processes, e.g., heat conduction, wind dynamics, and blood flow [8, 32, 54]. In general, the advection term refers to mass transport driven by the fluid flow, while the diffusion term refers to transport driven by the gradient of mass concentration. Inspired by the advection-diffusion process, which computes the natural progression of mass intensities, we propose to randomize an unlimited variety of anomaly profiles by formulating the generation as a forward advection-diffusion process, starting from either a single realistic pathology annotation map or a random shape.
Problem Setup.
Let P(x, t) ∈ [0, 1] denote the pathology probability at location x in a bounded domain of interest Ω (e.g., the brain), at time t. The local pathology probability changes of an anomaly randomization process are described by the advection-diffusion PDE:

| ∂P(x, t)/∂t = −V(x) · ∇P(x, t) + ∇ · (D(x) ∇P(x, t)), x ∈ Ω, t ∈ (0, T] | (1) |
| P(x, 0) = P0(x), ∇P(x, t) · n = 0 on ∂Ω | (2) |

where T refers to the (maximum) number of time steps used for the generation of new anomaly profiles. The spatially varying velocity field V(x) and diffusion scalar field D(x) govern the advection and diffusion of an initial anomaly, P0. The zero Neumann boundary condition in Eq. (2) ensures that the randomization of P satisfies pre-assumed bounds on the regions where anomalies may develop. To ensure that the dynamics of anomaly changes are well posed, we impose incompressible-flow and non-negative-diffusion constraints on V and D [35], and rewrite the advection-diffusion process in Eq. (1) as:

| ∂P/∂t = −(∇ × Ψ) · ∇P + ∇ · (ψ² ∇P) | (3) |

where Ψ and ψ refer to the potential fields representing V = ∇ × Ψ and D = ψ², respectively, such that the resulting flow is incompressible and the diffusion non-negative by construction.
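To make the incompressibility-by-construction argument concrete, here is a minimal NumPy sketch (our own illustration, not the paper's code) that builds the velocity as the curl of a vector potential; the non-negative diffusion constraint then simply amounts to squaring a scalar potential.

```python
import numpy as np

def curl_3d(psi, h=1.0):
    """Divergence-free velocity V = curl(Psi) from a vector potential.

    psi: array of shape (3, X, Y, Z). Because div(curl(Psi)) = 0, the
    resulting velocity field is incompressible by construction; with
    finite differences along independent axes, the identity also holds
    discretely (axis-wise operators commute).
    """
    px, py, pz = psi
    d = lambda f, ax: np.gradient(f, h, axis=ax)
    vx = d(pz, 1) - d(py, 2)   # dPsi_z/dy - dPsi_y/dz
    vy = d(px, 2) - d(pz, 0)   # dPsi_x/dz - dPsi_z/dx
    vz = d(py, 0) - d(px, 1)   # dPsi_y/dx - dPsi_x/dy
    return np.stack([vx, vy, vz])
```

Checking the discrete divergence of the returned field confirms it vanishes up to floating-point rounding, which is exactly what makes the forward transport well posed without extra regularization.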
Initializations of P0, V, D.
To enrich the diversity of abnormal profiles, we initialize the anomaly (P0 in Eq. (2)) from two sources: (i) publicly available pathology annotations from the ATLAS [31] and ISLES [19] stroke datasets, which include high-quality gold-standard segmentations of stroke lesions; and (ii) random shapes obtained by thresholding Perlin noise, a widely used procedural generation algorithm known for creating rich textures. We further draw random Perlin noise to create the random potentials Ψ for V and ψ for D.
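A minimal sketch of the random-shape initialization (our illustration: it approximates Perlin noise with a superposition of low-frequency cosine waves and uses a toy spherical brain mask in place of a real brain contour):

```python
import numpy as np

def random_anomaly_init(shape=(64, 64, 64), n_waves=8, thr=0.6, seed=0):
    """Random initial anomaly P0 from thresholded band-limited noise.

    A stand-in for the paper's thresholded Perlin noise: superpose a few
    low-frequency random cosine waves, normalize to [0, 1], threshold,
    and restrict the result to a spherical "brain" mask so the anomaly
    respects the domain boundary. Mask and ranges are illustrative.
    """
    rng = np.random.default_rng(seed)
    grid = np.stack(np.meshgrid(*[np.linspace(0, 1, s) for s in shape],
                                indexing="ij"))
    noise = np.zeros(shape)
    for _ in range(n_waves):
        k = rng.uniform(1.0, 4.0, size=3)          # low spatial frequencies
        phase = rng.uniform(0.0, 2.0 * np.pi)
        noise += np.cos(2.0 * np.pi * (k[:, None, None, None] * grid).sum(0)
                        + phase)
    noise = (noise - noise.min()) / (noise.max() - noise.min() + 1e-8)
    brain_mask = ((grid - 0.5) ** 2).sum(0) < 0.45 ** 2   # toy brain domain
    return ((noise > thr) & brain_mask).astype(np.float32)
```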
Forward Scheme.
We employ a first-order upwind scheme [30] to approximate the differential operators in the advection term, and a nested central-forward-backward difference scheme for the diffusion term in Eq. (3). Discretizing the spatial derivatives yields a system of ordinary differential equations that can be solved by numerical integration. To enhance numerical stability and ensure compliance with the Courant-Friedrichs-Lewy (CFL) condition [17, 30], we apply the RK45 method for adaptive time-stepping (Δt) when advancing P(·, t) to P(·, t + Δt).
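The forward transport can be sketched as follows. This is a simplified stand-in for the scheme above: it uses explicit Euler with a CFL-limited step instead of adaptive RK45, periodic boundaries (via `np.roll`) instead of zero-Neumann, and treats D as locally smooth so that ∇·(D∇P) ≈ DΔP.

```python
import numpy as np

def advect_diffuse(P, V, D, t_end, h=1.0):
    """Forward advection-diffusion of an anomaly P (Eq. (1), sketch).

    First-order upwind differences for the advection term and central
    differences for the (simplified) diffusion term; the step size obeys
    a conservative CFL bound for explicit time integration.
    """
    P = P.copy()
    vmax = np.abs(V).max() + 1e-8
    dmax = D.max() + 1e-8
    dt = 0.5 * min(h / (3 * vmax), h * h / (6 * dmax))  # CFL bound in 3D
    t = 0.0
    while t < t_end:
        step = min(dt, t_end - t)
        adv = np.zeros_like(P)
        for ax in range(3):
            v = V[ax]
            fwd = (np.roll(P, -1, ax) - P) / h        # forward difference
            bwd = (P - np.roll(P, 1, ax)) / h         # backward difference
            adv += np.where(v > 0, v * bwd, v * fwd)  # upwind selection
        lap = sum((np.roll(P, 1, ax) - 2 * P + np.roll(P, -1, ax)) / h**2
                  for ax in range(3))
        P += step * (-adv + D * lap)
        t += step
    return np.clip(P, 0.0, 1.0)
```

With an incompressible velocity and periodic toy boundaries, the total anomaly mass is conserved, which is a convenient sanity check on the discretization.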
As shown in Fig. 1 (left), we can generate infinite variations from a single pathology profile via the introduced fluid-driven anomaly transport, while naturally satisfying boundary conditions imposed by the brain contour.
3.2. Anomaly Appearance Randomization
As mentioned in Sec. 2, large-scale annotation of 3D medical imaging data requires tremendous effort. UNA is instead trained on a combination of synthetic and real images (many of them labeled automatically). Specifically, we encode the generated pathology profiles, P, into normal anatomy of healthy control scans, enabling the generation of diverse images with random modalities, each exhibiting a distinct appearance introduced by P.
Random Modality Generation.
To generate healthy images with complex structural details, we first leverage domain randomization [33] to synthesize images of random modality and resolution with healthy anatomy (Fig. 2 (left)). Specifically, we randomly sample intensities on 3D neuroanatomical segmentations (label maps L), where the intensity at each location is conditioned on its label:

| G(x) ∼ 𝒩(μ_{L(x)}, σ²_{L(x)}), with μ_l ∼ 𝒰(a_μ, b_μ), σ_l ∼ 𝒰(a_σ, b_σ) | (4) |

where μ_l and σ_l refer to the mean and standard deviation of the Gaussian for each label l, and the uniform bounds (a_μ, b_μ) and (a_σ, b_σ) control the shifts and scales. A random deformation field, comprising linear and non-linear transformations [24, 33], is then generated for augmentation purposes.
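The label-conditioned sampling can be sketched as follows; the uniform ranges below and the omission of the shift/scale and deformation augmentations are simplifications of the full generator:

```python
import numpy as np

def synth_modality(labels, seed=None):
    """Label-conditioned random intensity sampling (domain randomization).

    Each label l receives a random mean and standard deviation drawn from
    uniform ranges, and voxel intensities are sampled from the resulting
    per-label Gaussian. The ranges here are illustrative, not the paper's
    values.
    """
    rng = np.random.default_rng(seed)
    out = np.zeros(labels.shape, dtype=np.float32)
    for l in np.unique(labels):
        mu = rng.uniform(0.0, 1.0)        # per-label mean
        sigma = rng.uniform(0.01, 0.1)    # per-label standard deviation
        mask = labels == l
        out[mask] = rng.normal(mu, sigma, size=int(mask.sum()))
    return out
```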
Figure 2.

UNA’s framework overview for modality-agnostic learning of healthy anatomy, supported by fluid-driven anomaly randomization.
Anomaly Profile Encoding.
We encode the random anomaly profiles P from Sec. 3.1 into the generated healthy anatomy G, based on a priori knowledge of the white and gray matter intensities of G [28, 34]:

| r = μ_WM / μ_GM | (5) |
| G_P(x) = (1 − P(x)) · G(x) + P(x) · (2 − r) · μ_GM | (6) |

where μ_WM (μ_GM) is the mean of G's white (gray) matter intensities. A higher ratio r resembles T1w, where pathology appears darker, while a lower r resembles T2w/FLAIR, where pathology is typically brighter. To also cover extreme scenarios, we randomly flip the sign of (1 − r) for a fraction of the training samples. G_P further undergoes a standard augmentation pipeline [23], introducing partial voluming [5] and the various resolutions, noise, and scanning artifacts commonly found in clinical practice.
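One plausible implementation of this encoding, consistent with the description but not necessarily the paper's exact formula: the lesion intensity is obtained by mirroring the white-matter mean about the gray-matter mean, so lesions come out dark on T1w-like contrasts (WM brighter than GM) and bright on T2w/FLAIR-like contrasts.

```python
import numpy as np

def encode_anomaly(img, P, wm_mask, gm_mask):
    """Encode an anomaly profile P into a healthy image (sketch).

    Pathology intensity is pushed to the opposite side of the gray-matter
    mean from white matter; P in [0, 1] linearly blends the healthy image
    with the lesion intensity.
    """
    mu_wm = img[wm_mask].mean()
    mu_gm = img[gm_mask].mean()
    lesion_val = mu_gm - (mu_wm - mu_gm)   # mirror WM mean about GM mean
    return (1.0 - P) * img + P * lesion_val
```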
4. Learning Anatomy Beyond Gold Standards
In this section, we present UNA’s end-to-end training framework, which learns to unravel normal anatomy from images of random modality containing potential pathology.
Contralateral-Paired Input.
Healthy human brain anatomy typically exhibits a high degree of structural symmetry. Based on this fact, we combine the original input image I with its contralateral-mirrored image to create paired inputs for UNA's healthy anatomy reconstruction learning. This approach allows our model to “borrow” healthy information from the contralateral counterpart, thereby enhancing subject-specific healthy anatomy reconstruction. To ensure structural correspondence and minimize computational complexity during training, we pre-compute the deformation Φ between each training subject's scan and its axial-flipped image using NiftyReg [38, 42]. As a result, the contralateral-paired input for each subject is represented as (I, Ĩ), where Ĩ denotes the axial-flipped image warped by Φ.
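In code, and assuming a volume that is already mid-sagittally aligned (so that the precomputed NiftyReg deformation reduces to the flip itself — a simplification of the paper's setup), the paired input looks like:

```python
import numpy as np

def contralateral_pair(img, axis=0):
    """Build the contralateral-paired input (sketch).

    Mirrors the volume across the chosen axis and stacks it with the
    original as a two-channel input; in the full pipeline the mirrored
    copy would additionally be warped by a precomputed registration.
    """
    mirrored = np.flip(img, axis=axis).copy()  # left-right mirror
    return np.stack([img, mirrored])           # channel-paired input
```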
Modality-Agnostic Healthy Anatomy Reconstruction.
To enhance model generalizability, UNA is trained on both real datasets containing pathology and synthetic images generated by fluid-driven anomaly randomization (Sec. 3), featuring varying simulated modalities and abnormality conditions. During training, we define the following healthy anatomy reconstruction loss, which takes into account both the subject-level and the voxel-level abnormality of the input image I:

| L_recon = Σ_{x∈Ω} M(x) · (1 + λ_p P(x)) · ( |Î(x) − A(x)| + λ_g ‖∇Î(x) − ∇A(x)‖₁ ) | (7) |

where Î denotes UNA's reconstruction, A the reconstruction target, and δ ∈ {0, 1} indicates whether the current image is sourced from real datasets (δ = 1) or generated synthetically (δ = 0); the supervision mask is M(x) = δ(1 − P(x)) + (1 − δ). The parameters λ_g and λ_p control the training weights for the gradient L1 loss and the attention to pathology, respectively. Specifically: (i) if the current training input image I is generated by UNA, i.e., the ground truth healthy anatomy of the entire brain region is accessible (δ = 0), we compute the anatomy reconstruction loss across the whole brain (M(x) = 1); (ii) conversely, if I is sourced from real datasets (δ = 1), the ground truth healthy anatomy of the entire brain is not available; in this case, we compute the voxel-wise reconstruction loss exclusively over healthy regions, masking out any abnormalities (M(x) = 1 − P(x)).
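A simplified sketch of this masked loss (our illustration with arbitrary default weights; for brevity the gradient term is left unmasked, so it is only an approximation of the supervision scheme described above):

```python
import numpy as np

def recon_loss(pred, target, P, is_real, lam_g=1.0, lam_p=1.0):
    """Healthy-anatomy reconstruction loss in the spirit of Eq. (7).

    Synthetic samples (is_real=False) have ground truth everywhere and
    receive extra weight on pathological voxels; real samples mask the
    lesioned voxels out of the L1 term. lam_g weights an L1 penalty on
    finite-difference spatial gradients.
    """
    mask = (1.0 - P) if is_real else (1.0 + lam_p * P)
    l1 = np.abs(pred - target)
    grad = sum(np.abs(np.diff(pred, axis=a) - np.diff(target, axis=a)).mean()
               for a in range(pred.ndim))
    return (mask * l1).mean() + lam_g * grad
```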
Intra-Subject Self-Contrastive Learning.
In Eq. (7), the anatomy reconstruction in abnormal regions is not supervised when dealing with real images containing pathology. To enhance the performance of learning healthy anatomy, we propose an intra-subject learning strategy that exploits the (approximate) symmetry of the brain with a contrastive loss that encourages two properties:
Similarity in appearance between the reconstructed healthy anatomy and its contralateral healthy counterpart.
Distinctiveness between the reconstructed anatomy and the original regions that exhibit abnormalities.
Specifically, we define this intra-subject contrastive loss as:

| L_cl = − Σ_{x∈Ω} M̃(x) · log [ exp(−|Î(x) − Ĩ(x)| / τ₁) / ( exp(−|Î(x) − Ĩ(x)| / τ₁) + exp(−|Î(x) − I(x)| / τ₂) ) ] | (8) |

where Ĩ denotes the registered contralateral-mirrored image and M̃(x) = P(x)(1 − P̃(x)), with P̃ the contralateral-mirrored pathology map, ensuring that we exclude pathologies that appear at the same contralateral location in both hemispheres. τ₁ and τ₂ represent the corresponding temperature scaling factors of each term.
Thus, UNA’s end-to-end healthy anatomy reconstruction training loss is obtained by the sum of Eqs. (7) and (8):

| L_UNA = L_recon + λ_cl · L_cl | (9) |

where λ_cl is the weight of the self-contrastive learning loss.
As shown in Fig. 1, as a general model for healthy anatomy reconstruction, UNA also addresses the following tasks: (i) Given an input image without any abnormalities, UNA performs anatomy reconstruction; (ii) Given a T1w MRI of any resolution, UNA performs super-resolution.
5. Experiments
We evaluate UNA’s performance and demonstrate its impact from three perspectives. (i) The reconstruction of anatomy from healthy images. This enables analysis with standard tools made for high-resolution T1w MRI, such as segmentation and parcellation using FreeSurfer [13], registration with NiftyReg [38, 42], ANTs [2], etc. (ii) The synthesis of healthy anatomy from images with pathology. This allows for the application of well-established general-purpose models to images with extensive pathology. For a more comprehensive assessment, we test on both synthetic data – where ground truth healthy images are available (Sec. 5.1) – and real images from two public stroke datasets – where the ground truth healthy anatomy is unknown (Sec. 5.2). (iii) We further demonstrate UNA’s direct application to anomaly detection (Sec. 5.3). Our test data includes CT and various MRI modalities (T1w, T2w, FLAIR).
Datasets.
We conducted experiments using eight public datasets: ADNI [25], ADNI3 [53], HCP [12], ADHD200 [7], AIBL [15], OASIS3 [27], ATLAS [31], and ISLES [19]. ATLAS and ISLES include stroke patients and provide gold-standard manual segmentations of stroke lesions (we refer to these as the stroke datasets hereafter). The other datasets contain subjects with healthy anatomy (the healthy datasets). These datasets cover both MR (T1w, T2w, FLAIR) and CT images. The train/test subject splits for each dataset are listed in Tab. 2.
Table 2.
Quantitative comparisons of healthy anatomy reconstruction performance between UNA and state-of-the-art contrast-agnostic T1w synthesis models, evaluated on real images. Since we do not have ground truth anatomy for the stroke datasets, we only report the reconstruction performance within healthy regions. (The ISLES [19] stroke dataset does not provide T1w MRI scans; therefore, we only show qualitative results on ISLES in Fig. 4.)
| Modality | Dataset (Train/Test) | Method | L1 (↓) | PSNR (↑) | SSIM (↑) |
|---|---|---|---|---|---|
| T1w MRI | ADNI [25] (1841/204) | SynthSR [23] | 0.014 | 26.78 | 0.984 |
| | | Brain–ID [33] | 0.012 | 33.82 | 0.993 |
| | | PEPSI [34] | 0.014 | 31.25 | 0.989 |
| | | UNA | 0.012 | 32.96 | 0.995 |
| | HCP [12] (808/87) | SynthSR [23] | 0.033 | 22.13 | 0.854 |
| | | Brain–ID [33] | 0.020 | 27.47 | 0.957 |
| | | PEPSI [34] | 0.023 | 28.20 | 0.971 |
| | | UNA | 0.017 | 31.61 | 0.986 |
| | ADNI3 [53] (298/33) | SynthSR [23] | 0.023 | 23.60 | 0.928 |
| | | Brain–ID [33] | 0.021 | 29.89 | 0.966 |
| | | PEPSI [34] | 0.020 | 26.67 | 0.935 |
| | | UNA | 0.019 | 30.01 | 0.975 |
| | ADHD200 [7] (865/96) | SynthSR [23] | 0.035 | 21.67 | 0.882 |
| | | Brain–ID [33] | 0.011 | 32.48 | 0.996 |
| | | PEPSI [34] | 0.015 | 29.87 | 0.976 |
| | | UNA | 0.012 | 30.12 | 0.980 |
| | AIBL [15] (601/67) | SynthSR [23] | 0.026 | 22.95 | 0.916 |
| | | Brain–ID [33] | 0.009 | 33.73 | 0.972 |
| | | PEPSI [34] | 0.012 | 29.86 | 0.950 |
| | | UNA | 0.010 | 32.89 | 0.964 |
| | ATLAS [31] (stroke) (590/65) | SynthSR [23] | 0.030 | 23.50 | 0.881 |
| | | Brain–ID [33] | 0.027 | 26.09 | 0.892 |
| | | PEPSI [34] | 0.025 | 26.73 | 0.905 |
| | | UNA | 0.020 | 29.10 | 0.974 |
| T2w MRI | HCP [12] (808/87) | SynthSR [23] | 0.034 | 21.46 | 0.833 |
| | | Brain–ID [33] | 0.016 | 28.10 | 0.934 |
| | | PEPSI [34] | 0.018 | 26.45 | 0.915 |
| | | UNA | 0.016 | 28.62 | 0.949 |
| | AIBL [15] (272/30) | SynthSR [23] | 0.033 | 20.08 | 0.805 |
| | | Brain–ID [33] | 0.022 | 23.99 | 0.861 |
| | | PEPSI [34] | 0.024 | 22.93 | 0.859 |
| | | UNA | 0.021 | 24.76 | 0.892 |
| FLAIR MRI | ADNI3 [53] (298/33) | SynthSR [23] | 0.026 | 22.77 | 0.919 |
| | | Brain–ID [33] | 0.017 | 26.44 | 0.927 |
| | | PEPSI [34] | 0.023 | 25.62 | 0.929 |
| | | UNA | 0.015 | 27.43 | 0.965 |
| | AIBL [15] (302/34) | SynthSR [23] | 0.029 | 21.77 | 0.902 |
| | | Brain–ID [33] | 0.019 | 27.25 | 0.936 |
| | | PEPSI [34] | 0.021 | 25.43 | 0.914 |
| | | UNA | 0.017 | 27.76 | 0.967 |
| CT | OASIS3 [27] (795/88) | SynthSR [23] | 0.041 | 20.93 | 0.758 |
| | | Brain–ID [33] | 0.023 | 25.49 | 0.891 |
| | | PEPSI [34] | 0.027 | 22.98 | 0.842 |
| | | UNA | 0.022 | 25.68 | 0.897 |
Synthetic Data Generation.
We use the anatomical labels of training subjects from the healthy datasets for random modality generation (Sec. 3.2). The synthetic abnormal profiles are generated using UNA’s fluid-driven anomaly randomization (Sec. 3), with initial profiles either sampled from the gold-standard lesion segmentation maps of training subjects in the stroke datasets, or from Perlin noise (Sec. 3.1). For the evaluation on simulated data in Sec. 5.1, we employ our synthetic generator to create 1,000 testing samples from the healthy datasets, encoded with random anomaly profiles from the stroke datasets. This generation is solely intended to provide ground truth healthy anatomy; therefore, we encode random anomaly profiles without applying any additional deformation or corruption.
Metrics.
For anatomy reconstruction and synthesis, we use L1 distance, PSNR, and SSIM. For anomaly detection, we assess performance using Dice scores.
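For reference, these metrics can be computed in a few lines (standard definitions; `data_range=1.0` assumes intensities normalized to [0, 1]):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum() + eps)

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(data_range ** 2 / (mse + 1e-12))
```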
Implementation Details.
For fair comparisons, we adopt the same 3D UNet [41] as utilized in the models [23, 33, 34] we compare with. The training images are sized at 160³ voxels, with a batch size of 4. We use the AdamW optimizer, beginning with a learning rate of 10⁻⁴ for the first 300,000 iterations, which is then reduced to 10⁻⁵ for the subsequent 100,000 iterations. The additional attention parameter (λ_p in Eq. (7)) is set to 1 for healthy anatomy reconstruction in pathological regions. The intra-subject contrastive learning weight (λ_cl in Eq. (9)) is set to 2. Training took approximately 14 days on an NVIDIA A100 GPU.
Competing Models.
UNA is the first model achieving modality-agnostic healthy anatomy synthesis and reconstruction. We compare UNA with the closest state-of-the-art modality-agnostic models for image reconstruction and anomaly detection: (i) SynthSR [23], a modality-agnostic super-resolution model; (ii) Brain-ID [33], a modality-agnostic feature representation and T1w synthesis model; (iii) PEPSI [34], a modality-agnostic pathology representation model for T1w and FLAIR MRI synthesis. Note that PEPSI does not synthesize healthy tissue in regions of pathology; (iv) VAE [4], an unsupervised anomaly detection variational autoencoder model for brain MRI; (v) LDM [18], an out-of-distribution detection model for 3D medical images using latent diffusion.
5.1. Simulations with Ground Truth Anatomy
To better evaluate UNA’s performance in healthy anatomy reconstruction, we first conduct experiments using 1,000 healthy images encoded with simulated pathologies, for which both the ground truth healthy anatomy and the pathology masks are available for quantitative assessment. To explicitly assess model performance in pathology regions, we report reconstruction scores not only for the entire brain, but also separately for areas that are originally healthy and diseased in the input image.
Tab. 1 reports the quantitative comparison between UNA and the state-of-the-art modality-agnostic synthesis models. UNA yields the best performance across all metrics, modalities, and regions of interest – including the full brain, healthy anatomy, and pathological regions. Remarkably, UNA outperforms competing models by a large margin in anatomy reconstruction within diseased tissue. Visualization results for each test modality are provided in Fig. 3. UNA demonstrates consistent performance across modalities and resolutions. Notably, other models either fail to capture any anatomy (SynthSR [23]) or generate unrealistic patterns around the pathology (Brain–ID [33] and PEPSI [34]) when given a noisy CT scan (4th row in Fig. 3), whereas UNA successfully reconstructs plausible healthy anatomy.
Table 1.
Quantitative comparisons of healthy anatomy reconstruction performance between UNA and state-of-the-art contrast-agnostic T1w synthesis models, using images with simulated pathology. PEPSI [34] is designed to emphasize the abnormalities, therefore we do not report its scores within diseased regions.
| Modality | Method | L1 (↓) F / H / D | PSNR (↑) F / H / D | SSIM (↑) F / H / D |
|---|---|---|---|---|
| T1w MRI | SynthSR [23] | 0.0285 / 0.0253 / 0.0010 | 20.71 / 22.90 / 36.59 | 0.823 / 0.879 / 0.895 |
| | Brain–ID [33] | 0.0231 / 0.0219 / 0.0007 | 22.86 / 23.71 / 40.22 | 0.859 / 0.890 / 0.904 |
| | PEPSI [34] | 0.0257 / 0.0194 / N/A | 21.78 / 23.21 / N/A | 0.831 / 0.872 / N/A |
| | UNA | 0.0147 / 0.0143 / 0.0003 | 31.98 / 33.25 / 45.61 | 0.981 / 0.992 / 0.998 |
| T2w MRI | SynthSR [23] | 0.0362 / 0.0337 / 0.0016 | 18.25 / 20.66 / 35.47 | 0.816 / 0.864 / 0.880 |
| | Brain–ID [33] | 0.0277 / 0.0269 / 0.0008 | 20.98 / 22.31 / 39.62 | 0.844 / 0.881 / 0.892 |
| | PEPSI [34] | 0.0295 / 0.0279 / N/A | 19.33 / 23.18 / N/A | 0.820 / 0.845 / N/A |
| | UNA | 0.0184 / 0.0182 / 0.0003 | 25.14 / 26.22 / 45.69 | 0.938 / 0.981 / 0.998 |
| FLAIR MRI | SynthSR [23] | 0.0327 / 0.0300 / 0.0016 | 19.30 / 21.04 / 34.88 | 0.823 / 0.869 / 0.895 |
| | Brain–ID [33] | 0.0285 / 0.0242 / 0.0010 | 19.98 / 20.32 / 38.76 | 0.840 / 0.879 / 0.907 |
| | PEPSI [34] | 0.0301 / 0.0287 / N/A | 19.82 / 21.59 / N/A | 0.842 / 0.850 / N/A |
| | UNA | 0.0202 / 0.0194 / 0.0007 | 28.34 / 28.93 / 42.91 | 0.921 / 0.982 / 0.996 |
| CT | SynthSR [23] | 0.0541 / 0.0536 / 0.0029 | 13.97 / 13.13 / 28.50 | 0.712 / 0.763 / 0.725 |
| | Brain–ID [33] | 0.0339 / 0.0357 / 0.0018 | 20.15 / 21.20 / 32.87 | 0.811 / 0.824 / 0.843 |
| | PEPSI [34] | 0.0473 / 0.0420 / N/A | 16.72 / 16.90 / N/A | 0.723 / 0.782 / N/A |
| | UNA | 0.0259 / 0.0266 / 0.0010 | 25.63 / 25.70 / 42.53 | 0.883 / 0.897 / 0.895 |
(F: full brain; H: healthy region; D: diseased region.)
5.2. Real-World Datasets with Potential Pathology
We further evaluate UNA’s performance on all the real datasets as introduced in Sec. 5, among which ATLAS [31] and ISLES [19] contain stroke patients. Tab. 2 reports the reconstruction scores over all datasets and their available modalities: (i) For anatomy reconstruction of originally healthy subjects, UNA achieves the highest scores across most datasets, with the remaining scores on par with Brain–ID [33], which is specifically designed for healthy anatomy; (ii) On the ATLAS stroke dataset, UNA outperforms competing models by a larger margin (≈ 10%).
As shown in Fig. 4, other models tend to generate unrealistic patterns within and around abnormalities, whereas UNA’s reconstructions are notably more visually coherent. Additionally, we present a failure case (4th row in Fig. 4), where we observe that UNA tends to “over-distinguish” the reconstructed healthy anatomy from the diseased regions, particularly in challenging scenarios where the pathology pattern completely occludes the underlying anatomy.
5.3. Direct Application: Anomaly Detection
UNA’s ability to synthesize diseased-to-healthy anatomy naturally equips it with the potential for application to anomaly detection. To demonstrate its effectiveness, we directly use the reconstructed healthy anatomy from UNA to detect abnormalities. Specifically, we follow the standard evaluation pipeline for unsupervised anomaly detection in medical images [4,18] and compute UNA’s anomaly estimation maps by calculating the voxel-wise absolute differences between the diseased input and the reconstructed output. The anomaly detection Dice scores are then obtained by comparing the ground truth pathology segmentations with the computed anomaly estimation maps, scaled to the range [0, 1] such that they represent the normalized abnormality. The same procedure is applied to other competing models.
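This evaluation pipeline reduces to a few lines. Here, `soft_dice` is one way to score a continuous anomaly map against a binary mask without choosing a threshold; the paper's exact Dice computation may differ.

```python
import numpy as np

def anomaly_map(inp, recon):
    """Anomaly estimate from a healthy reconstruction (Sec. 5.3, sketch).

    Voxel-wise absolute difference between the diseased input and the
    reconstructed healthy anatomy, min-max scaled to [0, 1] as a
    normalized abnormality score.
    """
    diff = np.abs(inp - recon)
    return (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)

def soft_dice(score, gt, eps=1e-8):
    """Soft Dice between a [0, 1] anomaly score and a binary ground truth."""
    inter = (score * gt).sum()
    return 2.0 * inter / (score.sum() + gt.sum() + eps)
```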
As shown in Fig. 5, UNA’s difference maps clearly identify anomalies of varying shapes and sizes. Quantitative comparisons are provided in Tab. 3, where UNA: (i) outperforms both the other modality-agnostic synthesis models and the state-of-the-art anomaly detection models; and (ii) performs consistently across datasets.
Figure 5. Visualizations of directly applying UNA’s healthy anatomy reconstruction for anomaly detection. The estimated anomaly is computed as the absolute difference between diseased T1w MRI scans and UNA’s reconstructed healthy anatomy.
Table 3. Dice scores (↑) of downstream anomaly detection performance based on the voxel-wise absolute differences between the diseased input and the reconstruction. The testing images include healthy T1w MRI scans with simulated pathology, and real T1w MRI images from stroke patients in the ATLAS [31] dataset.
| Image Source | Dataset | SynthSR [23] | Brain–ID [33] | VAE [4] | LDM [18] | UNA |
|---|---|---|---|---|---|---|
| Healthy T1w with Simulated Pathology | ADNI [25] | 0.27 | 0.26 | 0.18 | 0.23 | 0.36 |
| | HCP [12] | 0.28 | 0.28 | 0.13 | 0.21 | 0.33 |
| | ADHD200 [7] | 0.23 | 0.25 | 0.15 | 0.23 | 0.34 |
| | ADNI3 [53] | 0.27 | 0.28 | 0.17 | 0.24 | 0.37 |
| | AIBL [15] | 0.25 | 0.24 | 0.12 | 0.20 | 0.32 |
| Stroke T1w | ATLAS [31] | 0.24 | 0.24 | 0.11 | 0.22 | 0.31 |
5.4. Ablation Study
To assess the contributions of UNA’s individual components, we perform an ablation study with several variants: (a) Training without fluid-driven anomaly randomization, i.e., training exclusively with real images with pathology; (b) Training with fluid-driven anomaly randomization, but initializing the anomaly profiles with random noise; (c) Training without contralateral-paired input, i.e., using only a single image without its contralateral counterpart; (d) Training without the intra-subject self-contrastive loss.
As shown in Fig. 6 and Tab. 4, training without fluid-driven anomaly randomization (UNA-(a)) results in the largest performance drop, showing only slight improvement over Brain–ID [33] (reported in Fig. 3), which does not train on diseased inputs at all. Introducing fluid-driven anomaly randomization improves overall performance, but when no real pathology profiles are used for initialization (UNA-(b)), clear gaps remain relative to the full UNA model. Leveraging subject-specific contralateral information (UNA-(c), UNA-(d)) further enhances reconstruction results, particularly within diseased regions.
Figure 6. Ablations on UNA’s healthy anatomy reconstruction.
Table 4. Ablation study on UNA. Testing images are real T1w MRI encoded with simulated pathology (same as the first-row group in Tab. 1).
| Method | L1 (↓) | | | PSNR (↑) | | | SSIM (↑) | | |
|---|---|---|---|---|---|---|---|---|---|
| | F | H | D | F | H | D | F | H | D |
| UNA-(a) | 0.0229 | 0.0193 | 0.0008 | 23.71 | 25.09 | 38.92 | 0.859 | 0.890 | 0.904 |
| UNA-(b) | 0.0195 | 0.0182 | 0.0005 | 25.79 | 27.30 | 42.35 | 0.903 | 0.925 | 0.950 |
| UNA-(c) | 0.0155 | 0.0163 | 0.0004 | 30.00 | 31.92 | 43.61 | 0.959 | 0.977 | 0.982 |
| UNA-(d) | 0.0195 | 0.0182 | 0.0005 | 27.13 | 28.04 | 42.97 | 0.931 | 0.950 | 0.969 |
| UNA | 0.0147 | 0.0143 | 0.0003 | 31.98 | 33.25 | 45.61 | 0.981 | 0.992 | 0.998 |
(F: full brain; H: healthy region; D: diseased region.)
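The region-wise scores above (F/H/D) amount to evaluating each metric only over the voxels of a given mask: the full brain, its healthy portion, or the diseased region. A minimal NumPy sketch of masked L1 and PSNR follows the same pattern (SSIM would analogously be restricted to the mask, e.g. via scikit-image); the function names and the assumption of intensities normalized to [0, 1] are ours, not the paper’s exact evaluation code:

```python
import numpy as np

def masked_l1(pred, target, mask):
    """Mean absolute error restricted to voxels where mask > 0."""
    return np.abs(pred - target)[mask > 0].mean()

def masked_psnr(pred, target, mask, data_range=1.0):
    """PSNR over masked voxels, for intensities in [0, data_range]."""
    mse = ((pred - target) ** 2)[mask > 0].mean()
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
```

With a whole-brain mask this reduces to the usual full-image scores (F); passing the lesion mask or its complement yields the D and H columns, which is why the D scores can be much higher than F when lesions are small.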
6. Limitations and Future Work
Handling Extreme Cases.
As discussed in Sec. 5.2, UNA tends to “over-correct” its reconstructed healthy anatomy, especially in extreme cases where the pathology in the input image heavily occludes the underlying anatomy. We will investigate this issue in future work.
Broader Applications.
By bridging the gap between healthy and diseased anatomy, UNA opens up a wide range of applications beyond anomaly detection. For example, it could enable modality-agnostic image registration in the presence of pathology, as well as stroke treatment outcome prediction based on UNA’s reconstructed healthy anatomy. We plan to further explore these applications of UNA.
7. Conclusion
We introduce UNA, a modality-agnostic model for reconstructing healthy anatomy that works on both healthy subjects and images with varying degrees of pathology. Our fluid-driven anomaly randomization approach generates an unlimited number of anomaly profiles from just a few real pathology segmentations. UNA can be applied directly to real images containing pathology without fine-tuning. We demonstrate UNA’s superior performance across eight public datasets, including MR and CT images from healthy subjects and stroke patients, and additionally showcase its direct applicability to anomaly detection. By bridging the gap between different modalities and the underlying anatomy, as well as between healthy and diseased images, we believe UNA opens up exciting opportunities for general image analysis in clinical practice, particularly for images with diverse pathologies.
Acknowledgments
Primarily supported by NIH 1RF1AG080371. Additional support from NIH 1UM1MH130981, 1R21NS138995, 1R01EB031114, 1R01AG070988, 1RF1MH123195.
References
- [1] MONAI model zoo. https://monai.io/model-zoo.html.
- [2] Avants Brian B, Tustison Nick, Song Gang, et al. Advanced normalization tools (ANTs). Insight Journal, 2009.
- [3] Balakrishnan Guha, Zhao Amy, Sabuncu Mert Rory, Guttag John V., and Dalca Adrian V. VoxelMorph: A learning framework for deformable medical image registration. IEEE Transactions on Medical Imaging, 2018.
- [4] Baur Christoph, Denner Stefan, Wiestler Benedikt, Navab Nassir, and Albarqouni Shadi. Autoencoders for unsupervised anomaly segmentation in brain MR images: a comparative study. Medical Image Analysis, 2021.
- [5] Billot Benjamin, Greve Douglas N., Puonti Oula, Thielscher Axel, Van Leemput Koen, Fischl Bruce R., et al. SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining. Medical Image Analysis, 2021.
- [6] Billot Benjamin, Magdamo Colin, Cheng You, Arnold Steven E, Das Sudeshna, and Iglesias Juan Eugenio. Robust machine learning segmentation for large-scale analysis of heterogeneous clinical brain MRI datasets. Proceedings of the National Academy of Sciences, 2023.
- [7] Brown Matthew R. G., Sidhu Gagan Preet Singh, Greiner Russell, Asgarian Nasimeh, Bastani Meysam, Silverstone Peter H., et al. ADHD-200 global competition: diagnosing ADHD using personal characteristic data can outperform resting state fMRI measurements. Frontiers in Systems Neuroscience, 2012.
- [8] de Bézenac Emmanuel, Pajot Arthur, and Gallinari Patrick. Deep learning for physical processes: Incorporating prior scientific knowledge. In ICLR, 2018.
- [9] de Vos Bob D., Berendsen Floris F., Viergever Max A., Sokooti Hessam, Staring Marius, and Išgum Ivana. A deep learning framework for unsupervised affine and deformable image registration. Medical Image Analysis, 2019.
- [10] Dey Neel, Billot Benjamin, Wong Hallee E, Wang Clinton J, Ren Mengwei, Grant P Ellen, et al. Learning general-purpose biomedical volume representations using randomized synthesis. arXiv, abs/2411.02372, 2024.
- [11] Ding Zhipeng, Han Xu, Liu Peirong, and Niethammer Marc. Local temperature scaling for probability calibration. In ICCV, 2021.
- [12] Van Essen David C., Uğurbil Kâmil, Auerbach Edward J., Barch Deanna M., Behrens Timothy Edward John, Bucholz Richard D., et al. The human connectome project: A data acquisition perspective. NeuroImage, 2012.
- [13] Fischl Bruce, Salat David H, Busa Evelina, Albert Marilyn, Dieterich Megan, Haselgrove Christian, et al. Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain. Neuron, 2002.
- [14] Fischl Bruce R., Salat David H., Busa Evelina, Albert Marilyn S., Dieterich Megan, Haselgrove Christian, et al. Whole brain segmentation: automated labeling of neuroanatomical structures in the human brain. Neuron, 2002.
- [15] Fowler Christopher, Rainey-Smith Stephanie R., Bird Sabine M., Bomke Julia, Bourgeat Pierrick T., et al. Fifteen years of the Australian Imaging, Biomarkers and Lifestyle (AIBL) study: Progress and observations from 2,359 older adults spanning the spectrum from cognitive normality to Alzheimer’s disease. Journal of Alzheimer’s Disease Reports, 2021.
- [16] Franz Erik, Solenthaler Barbara, and Thuerey Nils. Learning to estimate single-view volumetric flow motions without 3D supervision. In ICLR, 2023.
- [17] Gottlieb Sigal and Gottlieb Lee-Ad J. Strong stability preserving properties of Runge-Kutta time discretization methods for linear constant coefficient operators. Journal of Scientific Computing, 2003.
- [18] Graham Mark S, Lopez Pinaya Walter Hugo, Wright Paul, Tudosiu Petru-Daniel, Mah Yee H, Teo James T, et al. Unsupervised 3D out-of-distribution detection with latent diffusion models. In MICCAI, 2023.
- [19] Hernandez Petzsche Moritz R, de la Rosa Ezequiel, Hanning Uta, Wiest Roland, Valenzuela Waldo, Reyes Mauricio, et al. ISLES 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset. Scientific Data, 2022.
- [20] Hoffmann Malte, Billot Benjamin, Greve Douglas N., Iglesias Juan Eugenio, Fischl Bruce R., and Dalca Adrian V. SynthMorph: Learning contrast-invariant registration without acquired images. IEEE Transactions on Medical Imaging, 2020.
- [21] Hoopes Andrew, Mora Jocelyn S., Dalca Adrian V., Fischl Bruce R., and Hoffmann Malte. SynthStrip: skull-stripping for any brain image. NeuroImage, 2022.
- [22] Hussain Shah, Mubeen Iqra, Ullah Niamat, Shah Syed Shahab Ud Din, Abduljalil Khan Bakhtawar, Zahoor Muhammad, et al. Modern diagnostic imaging technique applications and risk factors in the medical field: a review. BioMed Research International, 2022.
- [23] Iglesias Juan Eugenio, Billot Benjamin, Balbastre Yael, Magdamo Colin G., Arnold Steve, Das Sudeshna, et al. SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry. Science Advances, 2023.
- [24] Iglesias Juan Eugenio, Billot Benjamin, Balbastre Yael, Tabari Azadeh, Conklin John, Alexander Daniel C., et al. Joint super-resolution and synthesis of 1 mm isotropic MP-RAGE volumes from clinical MRI exams with scans of different orientation, resolution and contrast. NeuroImage, 2020.
- [25] Jack Clifford R., Bernstein Matt A., Fox Nick C, Thompson Paul M., Alexander Gene E., Harvey Danielle J., et al. The Alzheimer’s disease neuroimaging initiative (ADNI): MRI methods. Journal of Magnetic Resonance Imaging, 2008.
- [26] Kamnitsas Konstantinos, Ledig Christian, Newcombe Virginia F. J., Simpson Joanna P., Kane Andrew D., Menon David K., et al. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis, 2016.
- [27] LaMontagne Pamela J., Keefe Sarah J., Wallace Lauren, Xiong Chengjie, Grant Elizabeth A., Moulder Krista L., et al. OASIS-3: Longitudinal neuroimaging, clinical, and cognitive dataset for normal aging and Alzheimer’s disease. Alzheimer’s & Dementia, 2018.
- [28] Laso Pablo, Cerri Stefano, Sorby-Adams Annabel, Guo Jennifer, Mateen Farrah, Goebl Philipp, et al. Quantifying white matter hyperintensity and brain volumes in heterogeneous clinical and low-field portable MRI. In ISBI, 2024.
- [29] Van Leemput Koenraad, Maes Frederik, Vandermeulen Dirk, and Suetens Paul. A unifying framework for partial volume segmentation of brain MR images. IEEE Transactions on Medical Imaging, 2003.
- [30] LeVeque Randall J. Finite Volume Methods for Hyperbolic Problems. Cambridge University Press, 2002.
- [31] Liew Sook-Lei, Anglin Julia M, Banks Nick W, Sondag Matt, Ito Kaori L, Kim Hosung, et al. A large, open source dataset of stroke anatomical brain images and manual lesion segmentations. Scientific Data, 2018.
- [32] Liu Peirong, Lee Yueh Z., Aylward Stephen R., and Niethammer Marc. Perfusion imaging: An advection diffusion approach. IEEE Transactions on Medical Imaging, 2021.
- [33] Liu Peirong, Puonti Oula, Hu Xiaoling, Alexander Daniel C., and Iglesias Juan E. Brain-ID: Learning contrast-agnostic anatomical representations for brain imaging. In ECCV, 2024.
- [34] Liu Peirong, Puonti Oula, Sorby-Adams Annabel, Kimberly William T, and Iglesias Juan E. PEPSI: Pathology-enhanced pulse-sequence-invariant representations for brain MRI. In MICCAI, 2024.
- [35] Liu Peirong, Tian Lin, Zhang Yubo, Aylward Stephen, Lee Yueh, and Niethammer Marc. Discovering hidden physics behind transport dynamics. In CVPR, 2021.
- [36] Menze Bjoern H, Jakab Andras, Bauer Stefan, Kalpathy-Cramer Jayashree, Farahani Keyvan, Kirby Justin, et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging, 2014.
- [37] Milletarì Fausto, Navab Nassir, and Ahmadi Seyed-Ahmad. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In 3DV, 2016.
- [38] Modat Marc, Ridgway Gerard R, Taylor Zeike A, Lehmann Manja, Barnes Josephine, Hawkes David J, et al. Fast free-form deformation using graphics processing units. Computer Methods and Programs in Biomedicine, 2010.
- [39] Moor Michael, Banerjee Oishi, Abad Zahra F H, Krumholz Harlan M., Leskovec Jure, Topol Eric J., and Rajpurkar Pranav. Foundation models for generalist medical artificial intelligence. Nature, 2023.
- [40] Puonti Oula, Iglesias Juan Eugenio, and Van Leemput Koenraad. Fast and sequence-adaptive whole-brain segmentation using parametric Bayesian modeling. NeuroImage, 2016.
- [41] Ronneberger Olaf, Fischer Philipp, and Brox Thomas. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
- [42] Rueckert Daniel, Sonoda Luke I, Hayes Carmel, Hill Derek LG, Leach Martin O, and Hawkes David J. Nonrigid registration using free-form deformations: application to breast MR images. IEEE Transactions on Medical Imaging, 1999.
- [43] Shen Zhengyang, Feydy Jean, Liu Peirong, Curiale Ariel H, Estepar Ruben San Jose, Estepar Raul San Jose, and Niethammer Marc. Accurate point cloud registration with robust optimal transport. In NeurIPS, 2021.
- [44] Singhal Karan, Azizi Shekoofeh, Tu Tao, Mahdavi Said, Wei Jason, Chung Hyung Won, et al. Large language models encode clinical knowledge. Nature, 2022.
- [45] Sun Deqing, Yang Xiaodong, Liu Ming-Yu, and Kautz Jan. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In CVPR, 2018.
- [46] Tanno Ryutaro, Worrall Daniel E., Kaden Enrico, and Alexander Daniel C. Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI. NeuroImage, 2020.
- [47] Teed Zachary and Deng Jia. RAFT: Recurrent all-pairs field transforms for optical flow. In ECCV, 2020.
- [48] Tian Lin, Puett Connor, Liu Peirong, Shen Zhengyang, Aylward Stephen R, Lee Yueh Z, and Niethammer Marc. Fluid registration between lung CT and stationary chest tomosynthesis images. In MICCAI, 2020.
- [49] Tian Qiyuan, Bilgiç Berkin, Fan Qiuyun, Ngamsombat Chanon, Zaretskaya Natalia, Fultz Nina E., et al. Improving in vivo human cerebral cortical surface reconstruction using data-driven super-resolution. Cerebral Cortex, 2020.
- [50] Tu Tao, Azizi Shekoofeh, Driess Danny, Schaekermann Mike, Amin Mohamed, Chang Pi-Chuan, et al. Towards generalist biomedical AI. arXiv, abs/2307.14334, 2023.
- [51] Wang Guoan, Ye Jin, Cheng Junlong, Li Tianbin, Chen Zhaolin, Cai Jianfei, et al. SAM-Med3D-MoE: Towards a non-forgetting segment anything model via mixture of experts for 3D medical image segmentation. In MICCAI, 2024.
- [52] Wang Mei and Deng Weihong. Deep visual domain adaptation: A survey. Neurocomputing, 2018.
- [53] Weiner Michael W., Veitch Dallas P, Aisen Paul S., Beckett Laurel A, Cairns Nigel J., Green Robert C., et al. The Alzheimer’s disease neuroimaging initiative 3: Continued innovation for clinical trial improvement. Alzheimer’s & Dementia, 2017.
- [54] Xing Lanxiang, Wu Haixu, Ma Yuezhou, Wang Jianmin, and Long Mingsheng. HelmFluid: Learning Helmholtz dynamics for interpretable fluid prediction. In ICML, 2024.
- [55] Yang Xiao, Kwitt Roland, and Niethammer Marc. Quicksilver: Fast predictive image registration – a deep learning approach. NeuroImage, 2017.
- [56] Yang Xiao, Kwitt Roland, Styner Martin, and Niethammer Marc. Quicksilver: Fast predictive image registration – a deep learning approach. NeuroImage, 2017.
- [57] Zhou Yukun, Chia Mark A, Karl Wagner Siegfried, Ayhan Murat S., Williamson Dominic J, Struyven Robbert R., et al. A foundation model for generalizable disease detection from retinal images. Nature, 2023.

